[#]: collector: "lujun9972"
[#]: translator: "lxbwolf"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-12700-1.html"
[#]: subject: "Automate testing for website errors with this Python tool"
[#]: via: "https://opensource.com/article/20/7/seodeploy"
[#]: author: "JR Oakes https://opensource.com/users/jroakes"
Automate testing for website SEO problems with this Python tool
======

> SEODeploy helps identify SEO problems before a website is deployed.

![](https://img.linux.net.cn/data/attachment/album/202010/09/194928xbqvdd81amapgdae.jpg)

As a technical search-engine-optimization developer, I'm often called in to help with website migrations, new site launches, analytics implementations, and other areas that affect a site's online visibility and measurement, in order to limit risk. Many companies get a substantial portion of their monthly recurring revenue from users finding their products and services through search engines. Although search engines have gotten good at handling poorly formatted code, things can still go wrong in development that adversely affect how search engines index and display pages for users.
I used to try to reduce this risk manually by reviewing staged changes for issues that would break SEO (search engine optimization). My team's review results determined whether the project could go live. But this process was usually inefficient, could be applied only to a limited number of pages, and had a high likelihood of human error.

The industry has long sought a usable and trustworthy way to automate this process while giving developers and SEO practitioners a meaningful say in what must be tested. This is important because these teams often have competing priorities in development sprints: SEOs push for changes while developers need to control regressions and unexpected behavior.
### Common SEO-breaking problems

Many websites I work with have thousands of pages, some even millions. It can be baffling how a single development change can affect so many of them. In the world of SEO, a very small and seemingly irrelevant change can cause site-wide variation in how Google and other search engines display your pages. Errors of this kind must be dealt with before they reach production.

Below are a few examples I saw over the past year.

#### Accidental noindex

[ContentKing][2], a proprietary third-party SEO monitoring tool we use, found this problem immediately after it reached production. It's a sneaky error because it isn't visible in the HTML; rather, it hides in the server response headers, yet it can quickly cause search invisibility.
```
HTTP/1.1 200 OK
Date: Tue, 25 May 2010 21:12:42 GMT
[...]
X-Robots-Tag: noindex
[...]
```
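A regression like this is easy to catch mechanically once you know to look for it. Below is a minimal sketch of such a check, not SEODeploy's actual API; the header dictionary is illustrative:

```python
def has_noindex(headers):
    """Return True if the response headers hide the page from search engines."""
    robots = headers.get("X-Robots-Tag", "")
    return "noindex" in robots.lower()

# The response headers shown above would be flagged before deployment:
print(has_noindex({"Date": "Tue, 25 May 2010 21:12:42 GMT",
                   "X-Robots-Tag": "noindex"}))  # True
```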
#### Canonical lowercasing

A launch mistakenly lowercased all of a website's [canonical link elements][3]. The change affected nearly 30,000 URLs. Before it, the URLs were properly cased (for example, `URL-Path`). This is a problem because the canonical link element is a hint that tells Google a web page's true canonical URL version. The change caused many URLs to be removed from Google's index and re-indexed at the lowercase version (`/url-path`). The impact was a 10% to 15% loss of traffic and polluted page-monitoring data over the following weeks.
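A case-sensitive comparison of canonical values between production and staging catches exactly this class of error before launch. A hedged sketch of the idea (the example.com URLs and function name are illustrative, not from SEODeploy):

```python
def case_only_diffs(prod_canonicals, stage_canonicals):
    """Report paths whose canonical values differ only by letter case,
    which is exactly the regression described above."""
    return [path for path, prod in prod_canonicals.items()
            if (stage := stage_canonicals.get(path)) is not None
            and stage != prod and stage.lower() == prod.lower()]

prod = {"/URL-Path": "https://example.com/URL-Path"}
stage = {"/URL-Path": "https://example.com/url-path"}
print(case_only_diffs(prod, stage))  # ['/URL-Path']
```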
#### Origin regression

One site with a complex and exotic React implementation had a curious issue where its `origin.domain.com` URLs regressed to displaying the CDN server's origin. It intermittently showed the origin host instead of the CDN edge host in the site metadata (such as the canonical link element, URLs, and Open Graph links), in both the raw and the rendered HTML. The problem affected search visibility and the quality of shares on social media.
### Introducing SEODeploy

SEOs often use diff-testing tools to look for differences between rendered and raw HTML. Diff testing is ideal because it removes the uncertainty of eyeball testing. You want to check for differences in how Google renders your page, not how users do, and you want to look at the raw HTML as well as the rendered HTML, because these are two independent stages of Google's rendering process.

This prompted my colleagues and me to create [SEODeploy][4], a "Python library for automating SEO testing in deployment pipelines." Our mission was:

> To develop a tool that lets developers provide a set of URL paths and diff-test those paths on production and staging hosts, looking especially for unanticipated regressions in SEO-related data.

The mechanics of SEODeploy are simple: provide a text file containing one URL path per line, and SEODeploy runs a set of modules against those paths, comparing the production and staging URLs and reporting any errors and changes it detects.
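That comparison loop can be sketched in a few lines. The function below illustrates the idea only; it is not SEODeploy's real interface, and `extract` stands in for whatever module gathers per-URL data:

```python
def diff_paths(paths, extract, prod_host, stage_host):
    """For each path, extract comparable data from production and staging
    and report every field whose values differ."""
    report = {}
    for path in paths:
        prod = extract(prod_host + path)
        stage = extract(stage_host + path)
        changed = {field: (prod.get(field), stage.get(field))
                   for field in set(prod) | set(stage)
                   if prod.get(field) != stage.get(field)}
        if changed:
            report[path] = changed
    return report
```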
![SEODeploy overview][5]

The tool and its modules can be configured with a single YAML file and customized for expected changes.

![SEODeploy output][7]

The initial release includes the following core functions and concepts:
1. **Open source**: We firmly believe in sharing code so that it can be criticized, improved, extended, shared, and reused.
2. **Modular**: There are many different stacks and edge cases in web development. The SEODeploy tool is conceptually simple, so it uses modularity to contain the complexity. We provide two built modules and an example module that outlines the basic structure.
3. **URL sampling**: Since it is not always feasible or efficient to test every URL, we included a way to randomly sample XML sitemap URLs or URLs monitored by ContentKing.
4. **Flexible diffing**: Web data is messy. The diffing functionality tries to convert that data into messages about differences, no matter the data type being checked (including text, arrays or lists, JSON objects or dictionaries, integers, floats, and so on).
5. **Automation**: You can invoke the sampling and run methods from the command line, which makes it simple to integrate SEODeploy into existing pipelines.
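The flexible diffing described in point 4 typically starts by flattening nested data into dotted key/value pairs so any two extractions can be compared key by key. A minimal sketch of that idea, not the library's actual implementation:

```python
def flatten(data, prefix=""):
    """Flatten dicts and lists into dotted key/value pairs so any two
    extractions can be compared key by key."""
    items = {}
    if isinstance(data, dict):
        for key, value in data.items():
            items.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(data, list):
        for i, value in enumerate(data):
            items.update(flatten(value, f"{prefix}{i}."))
    else:
        items[prefix.rstrip(".")] = data
    return items

print(flatten({"title": "Home", "h1": ["Welcome"]}))
# {'title': 'Home', 'h1.0': 'Welcome'}
```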
### Modules

While the core is simple, by design, the power and complexity of SEODeploy are in its modules. The modules handle the harder task of getting, cleaning, and organizing the data from staging and production servers for comparison.

#### Headless module

The [Headless module][8] is meant for developers who don't want to pay for a third-party service to get data from the library. It runs any version of Chrome and extracts rendered data from each pair of URLs being compared.

The Headless module extracts the following core data for comparison:
1. SEO content, such as titles, H1-H6, links, and so on.
2. Performance data from the Chrome Timings and CDP (Chrome DevTools Protocol) performance APIs.
3. Calculated performance metrics, including CLS (Cumulative Layout Shift), a recently released and popular [Web Vital][9] from Google.
4. CSS and JavaScript coverage data from the CDP coverage APIs mentioned above.

The module includes functionality for handling staging environments and network-speed presets (to make comparisons more normalized), and it also includes a method for handling the replacement of staging hosts in staging comparison data. It should be fairly easy for developers to extend the module to collect any other data they want to compare per page.
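The staging-host replacement mentioned above amounts to normalizing extracted values before diffing, so that only real content differences surface. A sketch of the idea (host names and the function are illustrative, not the module's actual code):

```python
def normalize_hosts(extracted, stage_host, prod_host):
    """Rewrite the staging host to the production host in extracted string
    values so only genuine content differences show up in the diff."""
    return {key: value.replace(stage_host, prod_host)
            if isinstance(value, str) else value
            for key, value in extracted.items()}

extracted = {"canonical": "https://stage.example.com/page", "status": 200}
print(normalize_hosts(extracted, "stage.example.com", "www.example.com"))
# {'canonical': 'https://www.example.com/page', 'status': 200}
```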
#### Other modules

We created an [example module][10] that developers can use as a reference for creating a custom extraction module with the framework. Another example module integrates with ContentKing. Note that the ContentKing module requires a ContentKing subscription, whereas Headless can run on any machine capable of running Chrome.

### Problems to solve

We have [plans][11] to extend and strengthen the library but are looking for [feedback][12] from developers about what works and what doesn't fit their needs. Some of the issues and items on our list are:
1. Dynamic timestamps create false positives for some comparison elements, especially schema.
2. Saving test data to a database to allow reviewing deployment history and diffing against the last staging push.
3. Scaling up the speed and scope of extraction with cloud infrastructure for rendering.
4. Increasing testing coverage from the current 46% to 99%-plus.
5. Currently, we rely on [Poetry][13] for deployment management, but we want to publish a PyPI library so it can be installed easily with `pip install`.
6. We are also looking for more issues and usage data as the tool is used.

### Getting started
The project is on [GitHub][4], and we have [documentation][14] for most features.

We hope you'll clone SEODeploy and give it a try. Our goal is to support the open source community with a tool developed by technical SEOs and validated by developers and engineers. We've all seen how long it can take to validate complex staging issues, and we've all seen the business impact minor changes across many URLs can have. We think this library can save development teams time and reduce risk in the deployment process.

If you have questions or want to submit code, check out the project's [About][15] page.

--------------------------------------------------------------------------------
via: https://opensource.com/article/20/7/seodeploy

Author: [JR Oakes][a]
Topic selection: [lujun9972][b]
Translator: [lxbwolf](https://github.com/lxbwolf)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[a]: https://opensource.com/users/jroakes
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY "Computer screen with files or windows open"
[2]: https://www.contentkingapp.com/
[3]: https://en.wikipedia.org/wiki/Canonical_link_element
[4]: https://github.com/locomotive-agency/SEODeploy
[5]: https://opensource.com/sites/default/files/uploads/seodeploy.png "SEODeploy overview"
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/seodeploy_output.png "SEODeploy output"
[8]: https://locomotive-agency.github.io/SEODeploy/modules/headless/
[9]: https://web.dev/vitals/
[10]: https://locomotive-agency.github.io/SEODeploy/modules/creating/
[11]: https://locomotive-agency.github.io/SEODeploy/todo/
[12]: https://locomotive-agency.github.io/SEODeploy/about/#contact
[13]: https://python-poetry.org/
[14]: https://locomotive-agency.github.io/SEODeploy/
[15]: https://locomotive-agency.github.io/SEODeploy/about/
[#]: collector: (lujun9972)
[#]: translator: (rakino)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12699-1.html)
[#]: subject: (Create template files in GNOME)
[#]: via: (https://opensource.com/article/20/9/gnome-templates)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
Create document templates in GNOME
======

> Making templates lets you start writing new documents faster.

![](https://img.linux.net.cn/data/attachment/album/202010/08/215333mgqpiuqufhgidfpz.jpg)

I just stumbled upon a new (to me) feature of [GNOME][2]: creating document templates. A template, also called a boilerplate, is generally an empty document with a specific format; for example, a law firm's letterhead, with the firm's name and address at the top, or a letter from a bank or insurance company with certain disclaimers in its footer. Because this kind of information rarely changes, you can add it to an empty document to use as a template.
One day, while browsing my Linux system's files, I clicked on the Templates folder and happened to notice a message at the top of the window that read, "Put files in this folder to use them as templates for new documents," along with a "Get more details…" link that opened the [GNOME help page][3] on templates.

![Message at top of Templates folder in GNOME Desktop][4]

(Alan Formy-Duval, [CC BY-SA 4.0][5])
### Creating a template

Creating a template in GNOME is very simple. There are several ways to put a file into the Templates folder: you can copy or move a file from another location, either through the graphical user interface (GUI) or the command-line interface (CLI), or you can create an entirely new file. I chose the latter; in fact, I created two files.
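If you prefer to script the CLI route, the whole trick is just a copy into `~/Templates`. Here is a small, hypothetical Python helper that does the same thing (the function and file names are not from the article):

```python
from pathlib import Path
import shutil

def install_template(src, templates_dir=None):
    """Copy an existing boilerplate file into GNOME's Templates folder
    (or a directory of your choice) so it shows up under New Document."""
    templates_dir = Path(templates_dir or Path.home() / "Templates")
    templates_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy(src, templates_dir / Path(src).name))
```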
![My first two GNOME templates][6]

(Alan Formy-Duval, [CC BY-SA 4.0][5])

My first template is for Opensource.com articles. It has a place to enter a title plus a few lines for my name and the license my articles use. My articles are in Markdown format, so I created the template as a new Markdown document, `Opensource.com Article.md`:

````
# Title

[...]

Creative Commons BY-SA 4.0
````
### Using a template

Whenever I have an idea for a new article, I just right-click in the folder where I plan to organize my content and select the template I want from the New Document list to get started.

![Select the template by name][7]

(Alan Formy-Duval, [CC BY-SA 4.0][5])

You can make templates for all kinds of documents or files. I wrote this article using the template I created for Opensource.com articles. Programmers might use templates for software code; in that case, you might want a template that contains only `main()`.

The GNOME desktop environment provides a very practical, feature-rich interface for users of Linux and related operating systems. What are your favorite GNOME features, and how do you use them? Please share in the comments!
via: https://opensource.com/article/20/9/gnome-templates

Author: [Alan Formy-Duval][a]
Topic selection: [lujun9972][b]
Translator: [rakino](https://github.com/rakino)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Telcos Move from Black boxes to Open Source)
[#]: via: (https://www.linux.com/interviews/telcos-move-from-black-boxes-to-open-source/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)

Telcos Move from Black boxes to Open Source
======

*Linux Foundation Networking (LFN) organized its first virtual event last week, and we sat down with Arpit Joshipura, the General Manager of Networking, IoT and Edge at the Linux Foundation, to talk about the key points of the event and how LFN is leading the adoption of open source within the telco space.*
**Swapnil Bhartiya: Today, we have with us Arpit Joshipura, General Manager of Networking, IoT and Edge at the Linux Foundation. Arpit, what were some of the highlights of this event? Some big announcements that you can talk about?**

**Arpit Joshipura:** This was a global event with more than 80 sessions and attendees from over 75 countries. The sessions were very diverse. A lot of the sessions were end-user driven and operator driven, as well as from our vendors and partners. If you take LF Networking and LF Edge as the two umbrellas that are leading the Networking and Edge implementations here, we had very significant announcements. I would probably group them into 5 main things:

Number one, we released a white paper at the Linux Foundation level where we had a bunch of vertical industries transformed using open source. These are over-100-year-old industries like telecom, automotive, finance, energy, healthcare, etc. So, that's kind of one big announcement where vertical industries have taken advantage of open source.

The second announcement was easy enough: Google Cloud joins Linux Foundation Networking as a partner. That announcement comes on the basis of the telecom market and the cloud market converging together and building on each other.

The third major announcement was a project under LF Networking. If you remember, two years ago, a project collaboration with GSMA was started. It was called CNTT, which really defined and narrowed the scope of interoperability and compliance. And we have OPNFV under LFN. What we announced at Open Networking and Edge Summit is that the two projects are going to come together. This would be fantastic for the global community of operators who are simplifying the deployment and interoperability of implementations of NFVI and VNFs and CNFs.

The next announcement was around a research study that we released on open source code that was created by Linux Foundation Networking, using LFN analytics and COCOMO estimation. We're talking $7.2 billion worth of IP investment, right? This is the power of shared technology.

And finally, we released a survey of the Edge community asking them, "Why are you contributing to open source?" And the answer was fascinating. It was all around innovation, speed to deployment, and market creation. Yes, cost was important, but not initially.

So those were the 5 big highlights of the show from an LFN and LF Edge perspective.
**Swapnil Bhartiya: There are two things that I'm interested in. One is the consolidation that you talked about, and the second is the survey. The fact is that everybody is using open source. There is no doubt about it. But the problem is that since everybody's using it, there seems to be some gap in the awareness of how to be a good open source citizen as well. What have you seen in the telco space?**

**Arpit Joshipura:** First of all, 5 years ago, they were all using black-box and proprietary technologies. Then, we launched a project called OpenDaylight. And of course, OpenDaylight announced its 13th release today, on roughly its 6-year anniversary, going from proprietary technology to, today, one of the more active projects called ONAP. The telcos are 4 of the top 10 contributors of source code in open source, right? Who would have imagined that AT&T, Verizon, Amdocs, DT, Vodafone, China Mobile, China Telecom, you name it, are all actively contributing code? So that's a paradigm shift in terms of not only consuming it, but also contributing towards it.
**Swapnil Bhartiya: And since you mentioned ONAP, if I'm not wrong, I think AT&T released its own work as ECOMP, and then the projects within the Foundation were merged to create ONAP. And you also mentioned CNTT. So, what I want to understand from you is how many projects there are that you see within the Foundation. The Linux Foundation and all those other foundations are open spaces; they're a very good place for those projects to come in. It's obvious that there will be some projects that overlap. So what is the situation right now? Where do you see some overlap happening and, at the same time, are there still gaps that you need to fill?**

**Arpit Joshipura:** So that's a question of the philosophies of a foundation, right? I'll start off with the loosest situation, which is GitHub. Millions and millions of projects on GitHub. Any PhD student can throw his code on GitHub and say that's open source, and at the end of the day, if there's no community around it, that project is dead. Okay. That's the most extreme scenario. Then, there are foundations like CNCF who have a process of accepting projects that could have competing solutions. May the best project win.

From an LF Networking and LF Edge perspective, the process is a little bit more restrictive: there is a formal project life cycle document and a process available on the wiki that looks at the complementary nature of the project, that looks at the ecosystem, that looks at how it will enable and foster innovation. Then based on that, the governing board and the neutral governance that we have set up under the Linux Foundation would approve it.

Overall, it depends on the philosophy for LFN and LF Edge. We have 8 projects each in the umbrella, and most of these projects are quite complementary when it comes to solving different use cases in different parts of the network.
**Swapnil Bhartiya: Awesome. Now, I want to talk about 5G a bit. I did not hear any announcements, but can you talk a bit about the work going on to help the further deployment of 5G technologies?**

**Arpit Joshipura:** Yeah. I'm happy and sad to say that 5G is old news, right? The reality is that all of the infrastructure work on 5G was already released earlier this year. The ONAP Frankfurt release, for example, has a blueprint on 5G slicing, right? All the work has been done, with lots of blueprints in Akraino using 5G and MEC. So, that work is done. The cities are getting lit up by the carriers. You see announcements from global carriers on 5G deployments. I think there are 2 missing pieces of work remaining for 5G.

One is obviously the O-RAN support, right? The O-RAN Software Community, which we also host at the Linux Foundation, is coming up with a second release. And all the support for 5G is in there.

The second part of 5G is really the compliance and verification testing. A lot of work is going into CNTT and OPNFV. Remember that merged project we talked about, where 5G is in the context of not just OpenStack, but also Kubernetes? So the cloud-native aspects of 5G are all being worked on this year. I think we'll see a lot more cloud-native 5G deployments next year, primarily because projects like ONAP are cloud native and integrate with projects like Anthos or Azure Stack and things like that.
**Swapnil Bhartiya: What are some of the biggest challenges that the telco industry is facing? Technically, many of the problems were there, but the foundations have solved them. What rough areas are still there that you're trying to resolve for them?**

**Arpit Joshipura:** Yeah. I think the recent pandemic caused a significant change in the telcos' thinking, right? Fortunately, because they had already started on a virtualization and open source route, you heard from Android, and you heard from Deutsche Telekom, and you heard from Achronix; all of the operators were able to handle the change in network traffic, the change in traffic direction, SLA workloads, etc., right? All because of the _softwarization_, as we call it, of the network.

Given the pandemic, I think the first challenge for them was: can the network hold up? And the answer is yes, right? All the work-from-home and all these video meetings we have to hold over the web, that was number one.

Number two: it's good to hold up the network, but did I end up spending millions and millions of dollars on operational expenditures? And the answer to that is no, especially for the telcos who have embraced an open source ecosystem, right? People who have deployed projects like SDN or ONAP for automation and orchestration or closed-loop controls, they automatically configure and reconfigure based on workloads and services and traffic, right? And that does not require manual labor. Tremendous amounts of cost were saved from an opex perspective.

Operators who are still in the old mindset have significantly increased their opex, and that has caused a real strain on their budget sheets.

So those were the 2 big things that we felt were challenges, but they have been solved. Going forward, now it's just a quick rollout and build-out of 5G, expanding 5G to the Edge, and then partnering with the public cloud providers, at least here in the US, to bring cloud-native solutions to market.
**Swapnil Bhartiya: Awesome. Now, Arpit, if I'm not wrong, LF Edge is, I think, going to celebrate its second anniversary in January. What do you feel the project has achieved so far? What are its accomplishments? And what are some challenges that the project still has to tackle?**

**Arpit Joshipura:** Let me start off with the most important accomplishment as a community, and that is terminology. We have a project called State of the Edge, and we just issued a white paper which outlines terminology, terms, and definitions of what the Edge is, because historically, people have used terms like thin edge, thick edge, cloud edge, far edge, near edge, and so on. They're all relative terms. Okay. It's an edge in relation to who I am.

Instead of that, the paper now defines absolute terms. To give you a quick example, there are really 2 kinds of edges. There's a device edge, and then there is a service provider edge. A device edge is really controlled by the operator, by the end user, I should say. A service provider edge is really shared as a service, and the last mile typically separates them.

Now, if you double-click on each of these categories, you have several incarnations of an edge. You can have an extremely constrained edge (microcontrollers, etc., mostly of the manufacturing, IIoT type). You could have a smart device edge, like gateways, etc. Or you could have an on-prem server-type device edge. Either way, an end user controls that edge, versus the other kind of edge, which the operator controls, whether it's at the radio base stations or in a smart central office. So that's kind of the first accomplishment, right? Standardizing terminology.

The second big Edge accomplishment is around 2 projects: Akraino and EdgeX Foundry. These are stage 3 mature projects, and they have come out with significant results. Akraino, for example, has come out with 20-plus blueprints. These are blueprints that can actually be deployed today, right? Just to refresh, a blueprint is a declarative configuration that has everything from end to end to solve a particular use case. So things like connected classrooms, AR/VR, connected cars, network cloud, smart factories, smart cities, etc. All of these are available today.

EdgeX is the IoT framework for industrial setups, and it is the most downloaded. Those 2 projects, along with Fledge, EVE, Baetyl, Home Edge, Open Horizon, Secure Device Onboard, and NSoT, have seen very, very strong growth: over 200% growth in terms of contributions. Huge growth in membership, huge growth in new projects and in the community overall. We're seeing that Edge is really picking up greatly. Remember, I told you Edge is 4 times the size of the cloud. So everybody is in it.
**Swapnil Bhartiya: Now, the second part of the question was about some of the challenges that are still there. You talked about accomplishments. What are the problems that you see that you still think the project has to solve for the industry and the community?**

**Arpit Joshipura:** The fundamental challenge that remains is that we're still working as a community in different markets. I think the vendor ecosystem is trying to figure out who is the customer and who is the provider, right? Think of it this way: a carrier, for example AT&T, could be a provider to a manufacturing factory, which could actually consume something from a provider and then ship it to an end user. So there's a value shift, if you will, in the business world, over who gets the cut. That's still a challenge people are trying to figure out, and I think the people who are quick to define, solve, and implement solutions using open technology will probably turn out to be the winners.

People who just do analysis after analysis will be left behind, as in any other industry. That is kind of fundamentally number one. And number two, I think, is the speed at which we want to solve things. The pandemic has just accelerated the need for Edge and 5G. I think people are eager to get gaming with low latency, manufacturing and predictive maintenance with low latency, home surveillance with low latency, connected cars, autonomous driving, and all the classroom use cases. They would have been done next year, but because of the pandemic, it all got accelerated.
--------------------------------------------------------------------------------

via: https://www.linux.com/interviews/telcos-move-from-black-boxes-to-open-source/

Author: [Swapnil Bhartiya][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to install software with Ansible)
[#]: via: (https://opensource.com/article/20/9/install-packages-ansible)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

How to install software with Ansible
======
Automate software installations and updates across your devices with Ansible playbooks.
![Puzzle pieces coming together to form a computer screen][1]
Ansible is a popular automation tool used by sysadmins and developers to keep their computer systems in prime condition. As is often the case with extensible frameworks, [Ansible][2] has limited use on its own, with its real power dwelling in its many modules. Ansible modules are, in a way, what commands are to a [Linux][3] computer. They provide solutions to specific problems, and one common task when maintaining computers is keeping all the ones you use updated and consistent.

I used to use a text list of packages to keep my systems more or less synchronized: I'd list the packages installed on my laptop and then cross-reference that with my desktop, or between one server and another server, making up for any difference manually. Of course, installing and maintaining applications on a Linux machine is a basic task for Ansible, and it means you can list what you want across all computers under your care.

### Finding the right Ansible module

The number of Ansible modules can be overwhelming. How do you find the one you need for a given task? In Linux, you might look in your Applications menu or in `/usr/bin` to discover new applications to run. When you're using Ansible, you refer to the [Ansible module index][4].

The index is listed primarily by category. With a little searching, you're very likely to find a module for whatever you need. For package management, the [Packaging modules][5] section contains a module for nearly any system with a package manager.
### Writing an Ansible playbook

To begin, choose the package manager on your local computer. For instance, if you're going to write your Ansible instructions (a "playbook," as it's called in Ansible) on a laptop running Fedora, start with the `dnf` module. If you're writing on Elementary OS, use the `apt` module, and so on. This gets you started with something you can test and verify as you go, and you can expand your work for your other computers later.

The first step is to create a directory representing your playbook. This isn't strictly necessary, but it's a good idea to establish the habit. Ansible can run with just a configuration file written in YAML, but if you want to expand your playbook later, you can control Ansible by how you lay out your directories and files. For now, just create a directory called `install_packages` or similar:

```
$ mkdir ~/install_packages
```

The file that serves as the Ansible playbook can be named anything you like, but it's traditional to name it `site.yml`:

```
$ touch ~/install_packages/site.yml
```

Open `site.yml` in your favorite text editor, and add this:
```
---
- hosts: localhost
  tasks:
    - name: install packages
      become: true
      become_user: root
      dnf:
        state: present
        name:
          - tcsh
          - htop
```
You must adjust the module name you use to match the distribution you're using. In this example, I used `dnf` because I wrote the playbook on Fedora Linux.

Like with a command in a Linux terminal, knowing _how_ to invoke an Ansible module is half the battle. This playbook example follows the standard playbook format:

  * `hosts` targets a computer or computers. In this case, the computer being targeted is `localhost`, which is the computer you're using right now (as opposed to a remote system you want Ansible to connect with).
  * `tasks` opens a list of tasks you want to be performed on the hosts.
  * `name` is a human-friendly title for a task. In this case, I'm using `install packages` because that's what this task is doing.
  * `become` permits Ansible to change which user is running this task.
  * `become_user` permits Ansible to become the `root` user to run this task. This is necessary because only the root user can install new applications using `dnf`.
  * `dnf` is the name of the module, which you discovered from the module index on the Ansible website.

The items under the `dnf` item are specific to the `dnf` module. This is where the module documentation is essential. Like a man page for a Linux command, the module documentation tells you what options are available and what kinds of arguments are required.

![Ansible documentation][6]

Ansible module documentation (Seth Kenlon, [CC BY-SA 4.0][7])

Package installation is a relatively simple task and only requires two elements. The `state` option instructs Ansible to check whether or not _some package_ is present on the system, and the `name` option lists which packages to look for. Ansible deals in machine _state_, so module instructions always imply change. Should Ansible scan a system and find a conflict between how a playbook describes a system (in this case, that the commands `tcsh` and `htop` are present) and what the system state actually is (in this example, `tcsh` and `htop` are not present), then Ansible's task is to make whatever changes are necessary for the system to match the playbook. Ansible can make those changes because of the `dnf` (or `apt` or whatever your package manager is) module.

Each module is likely to have a different set of options, so when you're writing playbooks, anticipate referring to the module documentation often. Until you're very familiar with a module, it's the only reasonable way to expect a module to do what you need it to do.
||||
### Verifying YAML
|
||||
|
||||
Playbooks are written in YAML. Because YAML adheres to a strict syntax, it's helpful to install the `yamllint` command to check (or "lint," in computer terminology) your work. Better still, there's a linter specific to Ansible called `ansible-lint` created specifically for playbooks. Install these before continuing.
|
||||
|
||||
On Fedora or CentOS:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo dnf install yamllint python3-ansible-lint`
|
||||
```
|
||||
|
||||
On Debian, Elementary, Ubuntu, or similar:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install yamllint ansible-lint`
|
||||
```
|
||||
|
||||
Verify your playbook with `ansible-lint`. If you don't have access to `ansible-lint`, you can use `yamllint`.
|
||||
|
||||
|
||||
```
$ ansible-lint ~/install_packages/site.yml
```
|
||||
|
||||
Success returns nothing, but if there are errors in your file, you must fix them before continuing. Common errors from copying and pasting include omitting a newline character at the end of the final line and using tabs instead of spaces for indentation. Fix them in a text editor, rerun the linter, and repeat this process until you get no feedback from `ansible-lint` or `yamllint`.
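Neither linter is strictly required to catch those two copy-and-paste mistakes. As a rough illustration (a hypothetical helper, not part of Ansible or either linter), a few lines of Python can flag them too:

```python
def quick_yaml_checks(text: str) -> list[str]:
    """Flag the two common copy-paste errors mentioned above: tabs used
    for indentation and a missing newline at the end of the file."""
    problems = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # Isolate the leading whitespace of each line
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            problems.append(f"line {lineno}: tab used for indentation")
    if text and not text.endswith("\n"):
        problems.append("file does not end with a newline")
    return problems
```

A clean playbook returns an empty list; a file with tab indentation or no trailing newline returns one message per problem.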
|
||||
|
||||
### Installing an application with Ansible
|
||||
|
||||
Now that you have a verifiably valid playbook, you can finally run it on your local machine. Because you happen to know that the task defined by the playbook requires root permissions, you must use the `--ask-become-pass` option when invoking Ansible, so you will be prompted for your administrative password.
|
||||
|
||||
Start the installation:
|
||||
|
||||
|
||||
```
$ ansible-playbook --ask-become-pass ~/install_packages/site.yml
BECOME password:
PLAY [localhost] ******************************

TASK [Gathering Facts] ******************************
ok: [localhost]

TASK [install packages] ******************************
ok: [localhost]

PLAY RECAP ******************************
localhost: ok=0 changed=2 unreachable=0 failed=0 [...]
```
|
||||
|
||||
The commands are installed, leaving the target system in an identical state to the one described by the playbook.
|
||||
|
||||
### Installing an application on remote systems
|
||||
|
||||
Going through all of that to replace one simple command would be counterproductive, but Ansible's advantage is that it can be automated across all of your systems. You can use conditional statements to cause Ansible to use a specific module on different systems, but for now, assume all your computers use the same package manager.
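For example, a hedged sketch of such conditionals, using Ansible's `ansible_os_family` fact to pick a module per platform (it assumes only Debian- and Red Hat-family hosts, with the same package names as before):

```
---
- hosts: all
  tasks:
    - name: install packages with apt
      become: true
      become_user: root
      apt:
        state: present
        name:
          - tcsh
          - htop
      when: ansible_os_family == "Debian"

    - name: install packages with dnf
      become: true
      become_user: root
      dnf:
        state: present
        name:
          - tcsh
          - htop
      when: ansible_os_family == "RedHat"
```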
|
||||
|
||||
To connect to a remote system, you must define the remote system in the `/etc/ansible/hosts` file. This file was installed along with Ansible, so it already exists, but it's probably empty, aside from explanatory comments. Use `sudo` to open the file in your favorite text editor.
|
||||
|
||||
You can define a host by its IP address or hostname, as long as the hostname can be resolved. For instance, if you've already defined `liavara` in `/etc/hosts` and can successfully ping it, then you can set `liavara` as a host in `/etc/ansible/hosts`. Alternately, if you're running a domain name server or Avahi server and can ping `liavara`, then you can set it as a host in `/etc/ansible/hosts`. Otherwise, you must use its internet protocol address.
|
||||
|
||||
You also must have set up a successful secure shell (SSH) connection to your target hosts. The easiest way to do that is with the `ssh-copy-id` command, but if you've never set up an SSH connection with a host before, [read my article on how to create an automated SSH connection][8].
|
||||
|
||||
Once you've entered the hostname or IP address in the `/etc/ansible/hosts` file, change the `hosts` definition in your playbook:
|
||||
|
||||
|
||||
```
---
- hosts: all
  tasks:
    - name: install packages
      become: true
      become_user: root
      dnf:
        state: present
        name:
          - tcsh
          - htop
```
|
||||
|
||||
Run `ansible-playbook` again:
|
||||
|
||||
|
||||
```
$ ansible-playbook --ask-become-pass ~/install_packages/site.yml
```
|
||||
|
||||
This time, the playbook runs on your remote system.
|
||||
|
||||
Should you add more hosts, there are many ways to filter which host performs which task. For instance, you can create groups of hosts (`webservers` for servers, `workstations` for desktop machines, and so on).
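For instance, an `/etc/ansible/hosts` inventory grouped that way might look like this (the IP address and second workstation name are hypothetical placeholders):

```
[webservers]
192.168.122.10

[workstations]
liavara
raspberrypi
```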
|
||||
|
||||
### Ansible for mixed environments
|
||||
|
||||
The logic used in the solution so far assumes that all hosts being configured by Ansible run the same OS (specifically, one that uses the **dnf** command for package management). So what do you do if you're managing hosts running a different distribution, such as Ubuntu (which uses **apt**) or Arch (using **pacman**), or even different operating systems?
|
||||
|
||||
As long as the targeted OS has a package manager (and these days even [MacOS has Homebrew][9] and [Windows has Chocolatey][10]), Ansible can help.
|
||||
|
||||
This is where Ansible's advantage becomes most apparent. In a shell script, you'd have to check for what package manager is available on the target host, and even with pure Python you'd have to check for the OS. Ansible not only has those checks built in, but it also has mechanisms to use the results in your playbook. Instead of using the **dnf** module, you can use the **action** keyword to perform tasks defined by variables provided by Ansible's fact gathering subsystem.
|
||||
|
||||
|
||||
```
---
- hosts: all
  tasks:
    - name: install packages
      become: true
      become_user: root
      action: >
        {{ ansible_pkg_mgr }} name=htop,transmission state=present update_cache=yes
```
|
||||
|
||||
The **action** keyword loads action plugins. In this example, it's using the **ansible_pkg_mgr** variable, which is populated by Ansible during the initial **Gathering Facts** task. You don't have to tell Ansible to gather facts about the OS it's running on, so it's easy to overlook it, but when you run a playbook, you see it listed in the default output:
|
||||
|
||||
|
||||
```
|
||||
TASK [Gathering Facts] *****************************************
|
||||
ok: [localhost]
|
||||
```
|
||||
|
||||
The **action** plugin uses information from this probe to populate **ansible_pkg_mgr** with the relevant package manager command to install the packages listed after the **name** argument. With 8 lines of code, you can overcome a complex cross-platform quandary that few other scripting options allow.
|
||||
|
||||
### Use Ansible
|
||||
|
||||
It's the 21st century, and we all expect our computing devices to be connected and relatively consistent. Whether you maintain two or 200 computers, you shouldn't have to perform the same maintenance tasks over and over again. Use Ansible to synchronize the computers in your life, then see what else Ansible can do for you.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/9/install-packages-ansible
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen)
|
||||
[2]: https://opensource.com/resources/what-ansible
|
||||
[3]: https://opensource.com/resources/linux
|
||||
[4]: https://docs.ansible.com/ansible/latest/modules/modules_by_category.html
|
||||
[5]: https://docs.ansible.com/ansible/latest/modules/list_of_packaging_modules.html
|
||||
[6]: https://opensource.com/sites/default/files/uploads/ansible-module.png (Ansible documentation)
|
||||
[7]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[8]: https://opensource.com/article/20/8/how-ssh
|
||||
[9]: https://opensource.com/article/20/6/homebrew-mac
|
||||
[10]: https://opensource.com/article/20/3/chocolatey
|
@ -1,89 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Use the Firefox Task Manager (to Find and Kill RAM and CPU Eating Tabs and Extensions))
|
||||
[#]: via: (https://itsfoss.com/firefox-task-manager/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
How to Use the Firefox Task Manager (to Find and Kill RAM and CPU Eating Tabs and Extensions)
|
||||
======
|
||||
|
||||
Firefox is popular among Linux users. It is the default web browser on several Linux distributions.
|
||||
|
||||
Among many other features, Firefox provides a task manager of its own.
|
||||
|
||||
Now, why would you use it when you have [task manager in Linux][1] in the form of [system monitoring tools][2]? There is a good reason for that.
|
||||
|
||||
Suppose your system is using too much RAM or CPU. If you use top or some other system [resource monitoring tool like Glances][3], you’ll notice that these tools cannot distinguish between the open tabs or extensions.
|
||||
|
||||
Usually, each Firefox tab is displayed as **Web Content**. You can see that some Firefox process is causing the issue, but there’s no way to accurately determine which tab or extension is responsible.
|
||||
|
||||
This is where you can use the Firefox task manager. Let me show you how!
|
||||
|
||||
### Firefox Task Manager
|
||||
|
||||
With Firefox Task Manager, you will be able to list all the tabs, trackers, and add-ons consuming system resources.
|
||||
|
||||
![][4]
|
||||
|
||||
As you can see in the screenshot above, you get the name of the tab, the type (tab or add-on), the energy impact, and the memory consumed.
|
||||
|
||||
While everything is self-explanatory, the **energy impact refers to the CPU usage**, and if you are using a laptop, it is a good indicator of what will drain your battery quicker.
|
||||
|
||||
#### Access Task Manager in Firefox
|
||||
|
||||
Surprisingly, there is no [Firefox keyboard shortcut][5] for the task manager.
|
||||
|
||||
To quickly launch Firefox Task Manager, you can type “**about:performance**” in the address bar as shown in the screenshot below.
|
||||
|
||||
![Quickly access task manager in Firefox][6]
|
||||
|
||||
Alternatively, you can click on the **menu** icon and then head on to “**More**” options as shown in the screenshot below.
|
||||
|
||||
![Accessing task manager in Firefox][7]
|
||||
|
||||
Next, you will find the option to select “**Task Manager**” — so just click on it.
|
||||
|
||||
![][8]
|
||||
|
||||
#### Using Firefox task manager
|
||||
|
||||
Once there, you can check the resource usage, expand the tabs to see the trackers and their usage, and choose to close tabs right there, as highlighted in the screenshot below.
|
||||
|
||||
![][9]
|
||||
|
||||
Here’s what you should know:
|
||||
|
||||
* Energy impact means CPU consumption.
|
||||
  * The subframes or subtasks are usually the trackers/scripts associated with a tab that need to run in the background.
|
||||
|
||||
|
||||
|
||||
With this task manager, you can also spot whether a rogue script on a site is causing your browser to slow down.
|
||||
|
||||
This isn’t rocket science, but not many people are aware of the Firefox Task Manager. Now that you know about it, it should come in pretty handy, don’t you think?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/firefox-task-manager/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/task-manager-linux/
|
||||
[2]: https://itsfoss.com/linux-system-monitoring-tools/
|
||||
[3]: https://itsfoss.com/glances/
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/firefox-task-manager-shot.png?resize=800%2C519&ssl=1
|
||||
[5]: https://itsfoss.com/firefox-keyboard-shortcuts/
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/firefox-url-performance.jpg?resize=800%2C357&ssl=1
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/firefox-task-manager-steps.jpg?resize=800%2C779&ssl=1
|
||||
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/firefox-task-manager-menu.jpg?resize=800%2C465&ssl=1
|
||||
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/firefox-task-manager-close-tab.png?resize=800%2C496&ssl=1
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -0,0 +1,199 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Design and document APIs using an open source cross-platform tool)
|
||||
[#]: via: (https://opensource.com/article/20/10/spec-first-development-apis)
|
||||
[#]: author: (Greg Schier https://opensource.com/users/gregschier)
|
||||
|
||||
Design and document APIs using an open source cross-platform tool
|
||||
======
|
||||
Insomnia Designer makes spec-first development more accessible by providing collaborative tools to organize, maintain, and validate API specs.
|
||||
![Computer laptop in space][1]
|
||||
|
||||
In the world of software-as-a-service (SaaS) and service-based architectures, it's not uncommon for companies to maintain dozens or even hundreds of APIs, often spanning multiple teams, programming languages, and environments. This variability makes it extremely difficult to see what's happening at a high level to prevent changes from having negative impacts.
|
||||
|
||||
It's estimated that 40% of large enterprises struggle with challenges to secure, scale, or ensure performance for APIs. Because of this, more and more companies are choosing to adopt a "spec-first development" approach by defining and documenting APIs in an external format like [OpenAPI][2]. Storing these documents together in a central location makes it much easier to design, discuss, and approve changes _before_ implementation.
|
||||
|
||||
In this tutorial, you'll use the recently released [Insomnia Designer][3] to document an API, explore it, and propose a change using a spec-first approach. Designer is a cross-platform, open source REST client that builds on top of Insomnia Core—a popular app for interacting with HTTP and GraphQL APIs—aiming to make spec-first development more accessible by providing collaborative tools to organize, maintain, and validate API specs. In essence, Core is best for exploring and debugging APIs while Designer is best for designing and documenting them.
|
||||
|
||||
In this how-to, you'll use the [Open Library API][4] as a base to have working examples to play with. You'll create a minimal OpenAPI spec to document the APIs, use Insomnia Designer to test and verify that what you've done is correct, and then make some design changes to the API using a spec-first approach.
|
||||
|
||||
### The spec-first workflow
|
||||
|
||||
Before you begin, you should understand the steps necessary to adopt a spec-first workflow. In spec-first development, a specification can be in one of two states:
|
||||
|
||||
* **Published spec:** A specification that describes a currently published API exactly
|
||||
* **Proposal spec:** A draft specification that contains changes that need to be implemented
|
||||
|
||||
|
||||
|
||||
From this information, you can define a workflow for making changes to an API:
|
||||
|
||||
1. Start with the published specification for the API
|
||||
2. Make changes to the specification to add or modify behavior
|
||||
3. Review the proposal spec to ensure the design is sufficient
|
||||
4. Implement changes in code to match the proposal
|
||||
5. Publish the proposal spec along with the API
|
||||
|
||||
|
||||
|
||||
Now that you understand the workflow for what you are trying to accomplish, open Insomnia Designer and start trying it out.
|
||||
|
||||
### Define the initial specification
|
||||
|
||||
Since you don't yet have a published specification for the Open Library API, you need to define one.
|
||||
|
||||
Start by creating a new blank document from the **Create** menu, give it a name, then click the document to enter **Design View**. From here, you can start editing your spec.
|
||||
|
||||
![Create a new document][5]
|
||||
|
||||
(Greg Schier, [CC BY-SA 4.0][6])
|
||||
|
||||
The OpenAPI spec is most commonly written in [YAML][7] format and requires four top-level blocks to get started: `openapi`, `info`, `servers`, and `paths`. The following example defines each of these blocks with helpful comments to describe the purpose of each. Also, the `paths` block defines a route for `GET /recentchanges.json`:
|
||||
|
||||
|
||||
```
# Specify that your document is the OpenAPI 3 format
openapi: 3.0.0

# Define high-level metadata for the API
info:
  version: 1.0.0
  title: Open Library API
  description: Open Library has a RESTful API

# Specify the base URL the API can be accessed from
servers:
  - url: http://openlibrary.org

# Define operations for the API. This will be where most
# of the work is done. The first route you'll be defining
# is `GET /recentchanges.json`
paths:
  /recentchanges.json:
    get:
      summary: Recent Changes
```
|
||||
|
||||
OpenAPI provides much more than what's visible here, such as the ability to define authentication, response formats, reusable components, and more.
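As one illustration of those extra features, a route can declare its response format and reference a reusable schema from a `components` block. (The `Change` schema below is invented for the example; it is not the Open Library API's real response shape.)

```
paths:
  /recentchanges.json:
    get:
      summary: Recent Changes
      responses:
        '200':
          description: A list of recent changes
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Change'

components:
  schemas:
    Change:
      type: object
      properties:
        kind:
          type: string
```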
|
||||
|
||||
After copying the specification above into Insomnia Designer, you'll see three columns:
|
||||
|
||||
1. **Navigation sidebar (left):** Nested menu to make navigating larger documents easier
|
||||
2. **Spec editor (middle):** Text editor for editing the YAML document
|
||||
3. **Documentation preview (right):** Generated documentation to preview the specification
|
||||
|
||||
|
||||
|
||||
![Insomnia Designer UI with three columns][8]
|
||||
|
||||
(Greg Schier, [CC BY-SA 4.0][6])
|
||||
|
||||
Feel free to modify different parts of the specification to see what happens. As a safeguard, Insomnia Designer alerts you when you've done something wrong. For example, if you accidentally delete the colon on line 18, an error panel will display below the editor.
|
||||
|
||||
![Insomnia Designer UI error message][9]
|
||||
|
||||
(Greg Schier, [CC BY-SA 4.0][6])
|
||||
|
||||
Now that you have defined a specification, you can verify that your definition is correct by switching to **Debug** mode and sending a real request to the API. In Debug mode, you can see a single route was generated for the `GET /recentchanges.json` endpoint. Click the **Send** button beside the URL to execute the request and render the response in the right panel.
|
||||
|
||||
![Checking response in Insomnia Designer][10]
|
||||
|
||||
(Greg Schier, [CC BY-SA 4.0][6])
|
||||
|
||||
There you have it! You've successfully verified that the API specification you created matches the production API. Now you can move to the next step in the spec-first development workflow and propose a change.
|
||||
|
||||
### Create a proposal specification
|
||||
|
||||
According to the workflow outlined above, changes made to your API should first be defined in the specification. This has a number of benefits:
|
||||
|
||||
* Specifications can be checked into a central source-code repository
|
||||
* Changes are easy to review and approve
|
||||
* APIs are defined in a single, consistent format
|
||||
* Unnecessary code changes are avoided
|
||||
|
||||
|
||||
|
||||
Go ahead and propose a change to your API. While in Debug mode, I noticed the API returned hundreds of results. To improve performance and usability, it would be useful to limit the number of results returned to a specific amount. A common way of doing this is to accept a `limit` parameter in the query section of the URL, so go ahead and modify your specification to add a `limit` parameter.
|
||||
|
||||
In OpenAPI, you can define this by adding a `parameters` block to the route:
|
||||
|
||||
|
||||
```
# ...
paths:
  /recentchanges.json:
    get:
      summary: Recent Changes

      # Add parameter to limit the number of results
      parameters:
        - name: limit
          in: query
          description: Limit number of results
          required: true
          schema:
            type: integer
            example: 1
```
|
||||
|
||||
You can verify you defined it correctly by expanding the route within the preview and inspecting the parameters.
|
||||
|
||||
![Verifying spec definition in Insomnia][11]
|
||||
|
||||
(Greg Schier, [CC BY-SA 4.0][6])
|
||||
|
||||
### Review and implement the proposal
|
||||
|
||||
Now that you have created a proposal spec, you can have your team review and approve the changes. Insomnia Designer provides the ability to [sync API specifications to source control][12], allowing teams to review and approve changes to API specs the same way they do with source code.
|
||||
|
||||
For example, you might commit and push your proposed spec to a new branch in GitHub and create a pull request to await approval.
|
||||
|
||||
Because this is a tutorial on spec-first development, you won't implement the proposal yourself. The parameter you added is already supported by the API, however, so for the purpose of this article, use your imagination and pretend that your team has implemented the change.
|
||||
|
||||
### Verify the updated specification
|
||||
|
||||
Once the proposal has been implemented and deployed, you can switch to Debug mode, which will regenerate the requests based on your changes, and again verify that the spec matches the production API. To ensure the new query param is being sent, click the **Query** tab within Debug mode and observe that the `limit` parameter is set to your example value of `1`.
|
||||
|
||||
Once you send the request, you can verify that it returns only a single result. Change the `limit` to a different value or disable the query parameter (using the checkbox) to further verify things work as expected.
|
||||
|
||||
![Verifying things work as expected in Insomnia Designer][13]
|
||||
|
||||
(Greg Schier, [CC BY-SA 4.0][6])
|
||||
|
||||
### The power of spec-first development
|
||||
|
||||
This tutorial walked through a simplified example of spec-first development. In it, you created an OpenAPI specification, verified the specification matched the implementation, and simulated what it's like to propose a behavior change.
|
||||
|
||||
For a single API as simple as this demo, it may be difficult to see the full benefit of spec-first development. However, imagine being a product owner in a large organization managing hundreds of production APIs. Having well-documented specifications, accessible from a central location like Insomnia Designer, allows anyone within the organization to quickly get up to speed on any API.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/10/spec-first-development-apis
|
||||
|
||||
作者:[Greg Schier][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/gregschier
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
|
||||
[2]: https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md
|
||||
[3]: https://insomnia.rest/products/designer
|
||||
[4]: https://openlibrary.org/developers/api
|
||||
[5]: https://opensource.com/sites/default/files/uploads/insomnia_newdocument.png (Create a new document)
|
||||
[6]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[7]: https://yaml.org/
|
||||
[8]: https://opensource.com/sites/default/files/uploads/insomnia_columns.png (Insomnia Designer UI with three columns)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/insomnia_error.png (Insomnia Designer UI error message)
|
||||
[10]: https://opensource.com/sites/default/files/uploads/insomnia_response.png (Checking response in Insomnia Designer)
|
||||
[11]: https://opensource.com/sites/default/files/uploads/insomnia_verifydefinition.png (Verifying spec definition in Insomnia)
|
||||
[12]: https://support.insomnia.rest/article/96-git-sync
|
||||
[13]: https://opensource.com/sites/default/files/uploads/insomnia_limit.png (Verifying things work as expected in Insomnia Designer)
|
@ -0,0 +1,112 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Integrate your calendar with Ansible to avoid schedule conflicts)
|
||||
[#]: via: (https://opensource.com/article/20/10/calendar-ansible)
|
||||
[#]: author: (Nicolas Leiva https://opensource.com/users/nicolas-leiva)
|
||||
|
||||
Integrate your calendar with Ansible to avoid schedule conflicts
|
||||
======
|
||||
Make sure your automation workflow's schedule doesn't conflict with something else by integrating a calendar app into Ansible.
|
||||
![Calendar close up snapshot][1]
|
||||
|
||||
Is "anytime" a good time to execute your automation workflow? The answer is probably no, for different reasons.
|
||||
|
||||
If you want to avoid simultaneous changes to minimize the impact on critical business processes and reduce the risk of unintended service disruptions, then no one else should be attempting to make changes at the same time your automation is running.
|
||||
|
||||
In some scenarios, there could be an ongoing scheduled maintenance window. Or maybe there is a big event coming up, a critical business time, or a holiday—or maybe you prefer not to make changes on a Friday night.
|
||||
|
||||
![Street scene with a large calendar and people walking][2]
|
||||
|
||||
([Curtis MacNewton][3], [CC BY-ND 2.0][4])
|
||||
|
||||
Whatever the reason, you want to signal this information to your automation platform and prevent the execution of periodic or ad-hoc tasks during specific time slots. In change management jargon, I am talking about specifying blackout windows when change activity should not occur.
|
||||
|
||||
### Calendar integration in Ansible
|
||||
|
||||
How can you accomplish this in [Ansible][5]? While it has no calendar function per se, Ansible's extensibility will allow it to integrate with any calendar application that has an API.
|
||||
|
||||
The goal is this: Before you execute any automation or change activity, you execute a `pre-task` that checks whether something is already scheduled in the calendar (now or soon enough) and confirms you are not in the middle of a blocked timeslot.
|
||||
|
||||
Imagine you have a fictitious module named `calendar`, and it can connect to a remote calendar, like Google Calendar, to determine if the time you specify has otherwise been marked as busy. You could write a playbook that looks like this:
|
||||
|
||||
|
||||
```
- name: Check if timeslot is taken
  calendar:
    time: "{{ ansible_date_time.iso8601 }}"
  register: output
```
|
||||
|
||||
Ansible facts will give `ansible_date_time`, which is passed to the `calendar` module to verify the time availability so that it can register the response (`output`) to use in subsequent tasks.
|
||||
|
||||
If your calendar looks like this:
|
||||
|
||||
![Google Calendar screenshot][6]
|
||||
|
||||
(Nicolas Leiva, [CC BY-SA 4.0][7])
|
||||
|
||||
Then the output of this task would highlight the fact this timeslot is taken (`busy: true`):
|
||||
|
||||
|
||||
```
ok: [localhost] => {
    "output": {
        "busy": true,
        "changed": false,
        "failed": false,
        "msg": "The timeslot 2020-09-02T17:53:43Z is busy: true"
    }
}
```
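Since `calendar` is a fictitious module, here is a hedged Python sketch of the kind of check such a module might perform internally. The busy window is hard-coded for illustration; a real module would fetch the windows from the calendar's API.

```python
from datetime import datetime, timezone

# Hypothetical busy windows; a real module would query the calendar API
BUSY_WINDOWS = [
    (datetime(2020, 9, 2, 17, 0, tzinfo=timezone.utc),
     datetime(2020, 9, 2, 18, 0, tzinfo=timezone.utc)),
]

def is_busy(when: datetime) -> bool:
    """Return True if `when` falls inside any busy window."""
    return any(start <= when < end for start, end in BUSY_WINDOWS)
```

Checking the timestamp from the output above (`2020-09-02T17:53:43Z`) against that window yields `busy: true`.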
|
||||
|
||||
### Prevent tasks from running
|
||||
|
||||
Next, [Ansible Conditionals][8] will help prevent the execution of any further tasks. As a simple example, you could use a `when` statement on the next task to enforce that it runs only when the field `busy` in the previous output is not `true`:
|
||||
|
||||
|
||||
```
tasks:
  - shell: echo "Run this only when not busy!"
    when: not output.busy
```
|
||||
|
||||
### Conclusion
|
||||
|
||||
In a [previous article][9], I said Ansible is a framework to wire things together, interconnecting different building blocks to orchestrate an end-to-end automation workflow.
|
||||
|
||||
This article looked at how playbooks can integrate or talk to a calendar application to check availability. However, I am just scratching the surface! For example, your tasks could also block a timeslot in the calendar… the sky is the limit.
|
||||
|
||||
In my next article, I will dig into how the `calendar` module is built and how other programming languages can be used with Ansible. Stay tuned if you are a [Go][10] fan like me!
|
||||
|
||||
* * *
|
||||
|
||||
_This originally appeared on Medium as [Ansible and Google Calendar integration for change management][11] under a CC BY-SA 4.0 license and is republished with permission._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/10/calendar-ansible
|
||||
|
||||
作者:[Nicolas Leiva][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/nicolas-leiva
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT (Calendar close up snapshot)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/street-calendar.jpg (Street scene with a large calendar and people walking)
|
||||
[3]: https://www.flickr.com/photos/7841127@N02/4217116202
|
||||
[4]: https://creativecommons.org/licenses/by-nd/2.0/
|
||||
[5]: https://docs.ansible.com/ansible/latest/index.html
|
||||
[6]: https://opensource.com/sites/default/files/uploads/googlecalendarexample.png (Google Calendar screenshot)
|
||||
[7]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[8]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html
|
||||
[9]: https://medium.com/swlh/python-and-ansible-to-automate-a-network-security-workflow-28b9a44660c6
|
||||
[10]: https://golang.org/
|
||||
[11]: https://medium.com/swlh/ansible-and-google-calendar-integration-for-change-management-7c00553b3d5a
|
@ -0,0 +1,106 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Protect your network with open source tools)
|
||||
[#]: via: (https://opensource.com/article/20/10/apache-security-tools)
|
||||
[#]: author: (Chantale Benoit https://opensource.com/users/chantalebenoit)
|
||||
|
||||
Protect your network with open source tools
|
||||
======
|
||||
Apache Syncope and Metron can help you secure your network against unauthorized access and data loss.
|
||||
![A lock on the side of a building][1]
|
||||
|
||||
System integrity is essential, especially when you're charged with safeguarding other people's personal details on your network. It's critical that system administrators are familiar with security tools, whether their purview is a home, a small business, or an organization with hundreds or thousands of employees.
|
||||
|
||||
### How cybersecurity works
|
||||
|
||||
Cybersecurity involves securing networks against unauthorized access. However, there are many attack vectors out there that most people don't consider. The cliché of a lone hacker manually dueling with firewall rules until they gain access to a network is popular—but wildly inaccurate. Security breaches happen through automation, malware, phishing, ransomware, and more. You can't directly fight every attack as it happens, and you can't count on every computer user to exercise common sense. Therefore, you have to design a system that resists intrusion and protects users against outside attacks as much as it protects them from their own mistakes.
|
||||
|
||||
The advantage of open source security tools is that they keep vulnerabilities transparent. They give full visibility into their codebase and are supported by a global community of experts working together to create strong, tried-and-tested code.
|
||||
|
||||
With so many domains needing protection, there's no single cybersecurity solution that fits every situation, but here are two that you should consider.
|
||||
|
||||
### Apache Syncope
|
||||
|
||||
[Apache Syncope][2] is an open source system for managing digital identities in an enterprise environment. From identity lifecycle management and identity storage to provisioning engines and access management capabilities, Apache Syncope is a comprehensive identity management solution. It also provides monitoring and security features for third-party applications.
|
||||
|
||||
Apache Syncope synchronizes users, groups, and other objects. _Users_ model virtual identities built up from account information fragmented across external resources. _Groups_ are entities on external resources that support the concept of a group, such as LDAP or Active Directory. _Objects_ are entities such as printers, services, and sensors. Syncope also does full reconciliation and live synchronization from external resources with workflow-based approval.
|
||||
|
||||
#### Third-party applications
|
||||
|
||||
Apache Syncope also exposes a fully compliant [JAX-RS][3] 2.0 [RESTful][4] interface to enable third-party applications written in any programming language. These applications consume identity management services, such as:
|
||||
|
||||
* **Logic:** Syncope implements business logic that can be triggered through REST services and controls additional features such as notifications, reports, and auditing.
|
||||
* **Provisioning:** It manages the internal and external representation of users, groups, and objects through workflow and specific connectors.
|
||||
* **Workflow:** Syncope supports Activiti or Flowable [business process management (BPM)][5] workflow engines and allows defining new and custom workflows when needed.
|
||||
* **Persistence:** It manages all data, such as users, groups, attributes, and resources, at a high level using a standard [JPA 2.0][6] approach. The data is further persisted to an underlying database, such as internal storage.
|
||||
* **Security:** Syncope defines a fine-grained set of entitlements, which are granted to administrators and enable the implementation of delegated administration scenarios.
|
||||
|
||||
|
||||
|
||||
#### Syncope extensions
|
||||
|
||||
Apache Syncope's features can be enhanced with [extensions][7], which add a REST endpoint and manage the persistence of additional entities, tweak the provisioning layer, and add features to the user interface.
|
||||
|
||||
Some popular extensions include:
|
||||
|
||||
* **Swagger UI** works as a user interface for Syncope RESTful services.
|
||||
* **SSO support** provides OpenID Connect and SAML 2.0 access to administrative or end-user web interfaces.
|
||||
* **Apache Camel provisioning manager** delegates the execution of the provisioning process to a group of Apache Camel routes. It can be dynamically changed at the runtime through the REST interfaces or the administrative console, and modifications are also instantly available for processing.
|
||||
* **Elasticsearch** provides an alternate internal search engine for users, groups, and objects through an external [Elasticsearch][8] cluster.
|
||||
|
||||
|
||||
|
||||
### Apache Metron
|
||||
|
||||
Security information and event management ([SIEM][9]) gives admins insights into the activities happening within their IT environment. It combines the concepts of security event management (SEM) with security information management (SIM) into one functionality. SIEM collects security data from network devices, servers, and domain controllers, then aggregates and analyzes the data to detect malicious threats and payloads.
|
||||
|
||||
[Apache Metron][10] is an advanced security analytics framework that detects cyber anomalies, such as phishing activity and malware infections. Further, it enables organizations to take corrective measures to counter the identified anomalies.
|
||||
|
||||
It also interprets and normalizes security events into standard JSON language, which makes it easier to analyze security events, such as:
|
||||
|
||||
* An employee flagging a suspicious email
|
||||
* An authorized or unauthorized software download by an employee to a company device
|
||||
* A security lapse due to a server outage
|
||||
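As a rough sketch, a normalized event for one of these cases might look like the following JSON (the field names here are illustrative, not Metron's actual schema):

```json
{
  "source.type": "email_gateway",
  "timestamp": 1602230400000,
  "event": "suspicious_email_flagged",
  "reporter": "employee_id_4821",
  "original_string": "User forwarded message to security team for review"
}
```

Because every source is flattened into one JSON shape, downstream enrichment and alerting logic can treat an email report and a server outage the same way.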
|
||||
|
||||
|
||||
Apache Metron provides security alerts, labeling, and data enrichment. It can also store and index security events. Its four key capabilities are:
|
||||
|
||||
* **Security data lake:** Metron is a cost-effective way to store and combine a wide range of business and security data. The security data lake provides the amount of data required to power discovery analytics. It also provides a mechanism to search and query for operational analytics.
|
||||
* **Pluggable framework:** It provides a rich set of parsers for common security data sources such as pcap, NetFlow, Zeek (formerly Bro), Snort, FireEye, and Sourcefire. You can also add custom parsers for new data sources, including enrichment services for more contextual information, to the raw streaming data. The pluggable framework provides extensions for threat-intel feeds and lets you customize security dashboards. Machine learning and other models can also be plugged into real-time streams and provide extensibility.
|
||||
* **Threat detection platform:** It uses machine learning algorithms to detect anomalies in a system. It also helps analysts extract and reconstruct full packets to understand the attacker's identity, what data was leaked, and where the data was sent.
|
||||
* **Incident response application:** This refers to evolved SIEM capabilities, including alerting, threat intel frameworks, and agents to ingest data sources. Incident response applications include packet replay utilities, evidence storage, and hunting services commonly used by security operations center analysts.
|
||||
|
||||
|
||||
|
||||
### Security matters
|
||||
|
||||
Incorporating open source security tools into your IT infrastructure is imperative to keep your organization safe and secure. Open source tools, like Syncope and Metron from Apache, can help you identify and counter security threats. Learn to use them well, file bugs as you find them, and help the open source community protect the world's data.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/10/apache-security-tools
|
||||
|
||||
作者:[Chantale Benoit][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/chantalebenoit
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
|
||||
[2]: https://syncope.apache.org/
|
||||
[3]: https://jax-rs.github.io/apidocs/2.0/
|
||||
[4]: https://www.redhat.com/en/topics/api/what-is-a-rest-api
|
||||
[5]: https://www.redhat.com/en/topics/automation/what-is-business-process-management
|
||||
[6]: http://openjpa.apache.org/openjpa-2.0.0.html
|
||||
[7]: http://syncope.apache.org/docs/2.1/reference-guide.html#extensions
|
||||
[8]: https://opensource.com/life/16/6/overview-elastic-stack
|
||||
[9]: https://en.wikipedia.org/wiki/Security_information_and_event_management
|
||||
[10]: http://metron.apache.org/
|
@ -0,0 +1,104 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Top 5 open source alternatives to Google Analytics)
|
||||
[#]: via: (https://opensource.com/article/18/1/top-5-open-source-analytics-tools)
|
||||
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
|
||||
|
||||
Top 5 open source alternatives to Google Analytics
|
||||
======
|
||||
These five versatile web analytics tools provide valuable insights on
|
||||
your customers and site visitors while keeping you in control.
|
||||
![Analytics: Charts and Graphs][1]
|
||||
|
||||
If you have a website or run an online business, collecting data on where your visitors or customers come from, where they land on your site, and where they leave _is vital._ Why? That information can help you better target your products and services, and beef up the pages that are turning people away.
|
||||
|
||||
To gather that kind of information, you need a web analytics tool.
|
||||
|
||||
Many businesses of all sizes use Google Analytics. But if you want to keep control of your data, you need a tool that _you_ can control. You won’t get that from Google Analytics. Luckily, Google Analytics isn’t the only game on the web.
|
||||
|
||||
Here are five open source alternatives to Google Analytics.
|
||||
|
||||
### Matomo
|
||||
|
||||
Let’s start with the open source application that rivals Google Analytics for functions: [Matomo][2] (formerly known as Piwik). Matomo does most of what Google Analytics does, and chances are it offers the features that you need.
|
||||
|
||||
Those features include metrics on the number of visitors hitting your site, data on where they come from (both on the web and geographically), the pages from which they leave, and the ability to track search engine referrals. Matomo also offers many reports, and you can customize the dashboard to view the metrics that you want to see.
|
||||
|
||||
To make your life easier, Matomo integrates with more than 65 content management, e-commerce, and online forum systems, including WordPress, Magento, Joomla, and vBulletin, using plugins. For any others, you can simply add a tracking code to a page on your site.
|
||||
|
||||
You can [test-drive][3] Matomo or use a [hosted version][4].
|
||||
|
||||
### Open Web Analytics
|
||||
|
||||
If there’s a close second to Matomo in the open source web analytics stakes, it’s [Open Web Analytics][5]. In fact, it includes key features that either rival Google Analytics or leave it in the dust.
|
||||
|
||||
In addition to the usual raft of analytics and reporting functions, Open Web Analytics tracks where on a page, and on what elements, visitors click; provides [heat maps][6] that show where on a page visitors interact the most; and even does e-commerce tracking.
|
||||
|
||||
Open Web Analytics has a [WordPress plugin][7] and can [integrate with MediaWiki][8] using a plugin. Or you can add a snippet of [JavaScript][9] or [PHP][10] code to your web pages to enable tracking.
|
||||
|
||||
Before you [download][11] the Open Web Analytics package, you can [give the demo a try][12] to see if it’s right for you.
|
||||
|
||||
### AWStats
|
||||
|
||||
Web server log files provide a rich vein of information about visitors to your site, but tapping into that vein isn't always easy. That's where [AWStats][13] comes to the rescue. While it lacks the most modern look and feel, AWStats more than makes up for that with the breadth of data it can present.
|
||||
|
||||
That information includes the number of unique visitors, how long those visitors stay on the site, the operating system and web browsers they use, the size of a visitor's screen, and the search engines and search terms people use to find your site. AWStats can also tell you the number of times your site is bookmarked, track the pages where visitors enter and exit your site, and keep a tally of the most popular pages on your site.
|
||||
|
||||
These features only scratch the surface of AWStats's capabilities. It also works with FTP and email logs, as well as [syslog][14] files. AWStats can give you deep insight into what's happening on your website using data that stays under your control.
|
||||
|
||||
### Countly
|
||||
|
||||
[Countly][15] bills itself as a "secure web analytics" platform. While I can't vouch for its security, Countly does a solid job of collecting and presenting data about your site and its visitors.
|
||||
|
||||
Heavily targeting marketing organizations, Countly tracks data that is important to marketers. That information includes site visitors' transactions, as well as which campaigns and sources led visitors to your site. You can also create metrics that are specific to your business. Countly doesn't forgo basic web analytics; it also keeps track of the number of visitors on your site, where they're from, which pages they visited, and more.
|
||||
|
||||
You can use the hosted version of Countly or [grab the source code][16] from GitHub and self-host the application. And yes, there are [differences between the hosted and self-hosted versions][17] of Countly.
|
||||
|
||||
### Plausible
|
||||
|
||||
[Plausible][18] is a newer kid on the open source analytics tools block. It’s lean, it’s fast, and it collects only a small amount of information: the number of unique visitors, the top pages they visited, the number of page views, the bounce rate, and referrers. Plausible is simple and very focused.
|
||||
|
||||
What sets Plausible apart from its competitors is its heavy focus on privacy. The project creators state that the tool doesn’t collect or store any information about visitors to your website, which is particularly attractive if privacy is important to you. You can read more about that [here][19].
|
||||
|
||||
There’s a [demo instance][20] that you can check out. After that, you can either [self-host][21] Plausible or sign up for a [paid, hosted account][22].
|
||||
|
||||
**Share your favorite open source web analytics tool with us in the comments.**
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/top-5-open-source-analytics-tools
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/scottnesbitt
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/analytics-graphs-charts.png?itok=sersoqbV (Analytics: Charts and Graphs)
|
||||
[2]: https://matomo.org/
|
||||
[3]: https://demo.matomo.org/index.php?module=CoreHome&action=index&idSite=3&period=day&date=yesterday
|
||||
[4]: https://www.innocraft.cloud/
|
||||
[5]: http://www.openwebanalytics.com/
|
||||
[6]: http://en.wikipedia.org/wiki/Heat_map
|
||||
[7]: https://github.com/padams/Open-Web-Analytics/wiki/WordPress-Integration
|
||||
[8]: https://github.com/padams/Open-Web-Analytics/wiki/MediaWiki-Integration
|
||||
[9]: https://github.com/padams/Open-Web-Analytics/wiki/Tracker
|
||||
[10]: https://github.com/padams/Open-Web-Analytics/wiki/PHP-Invocation
|
||||
[11]: https://github.com/padams/Open-Web-Analytics
|
||||
[12]: http://demo.openwebanalytics.com/
|
||||
[13]: http://www.awstats.org
|
||||
[14]: https://en.wikipedia.org/wiki/Syslog
|
||||
[15]: https://count.ly/web-analytics
|
||||
[16]: https://github.com/Countly
|
||||
[17]: https://count.ly/pricing#compare-editions
|
||||
[18]: https://plausible.io
|
||||
[19]: https://plausible.io/data-policy
|
||||
[20]: https://plausible.io/plausible.io
|
||||
[21]: https://plausible.io/self-hosted-web-analytics
|
||||
[22]: https://plausible.io/register
|
@ -0,0 +1,131 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Install Deepin Desktop on Ubuntu 20.04 LTS)
|
||||
[#]: via: (https://itsfoss.com/install-deepin-ubuntu/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
How to Install Deepin Desktop on Ubuntu 20.04 LTS
|
||||
======
|
||||
|
||||
_**This tutorial shows you the proper steps to install the Deepin desktop environment on Ubuntu. Removal steps are also mentioned.**_
|
||||
|
||||
Deepin is undoubtedly a [beautiful Linux distribution][1]. The recently released [Deepin version 20][2] makes it even more beautiful.
|
||||
|
||||
Now, [Deepin Linux][3] is based on [Debian][4] and the default repository mirrors are too slow. If you would rather stay with Ubuntu, you have the Deepin variant of Ubuntu in the form of the [UbuntuDDE Linux distribution][5]. It is not one of the [official Ubuntu flavors][6] yet.
|
||||
|
||||
[Reinstalling a new distribution][7] is a bit of an annoyance because you would lose your data, and you’ll have to reinstall your applications on the newly installed UbuntuDDE.
|
||||
|
||||
A simpler option is to install the Deepin desktop environment on your existing Ubuntu system. After all, you can easily install more than one [desktop environment][8] on one system.
|
||||
|
||||
Fret not, it is easy to do it and you can also revert the changes if you do not like it. Let me show you how to do that.
|
||||
|
||||
### Installing Deepin Desktop on Ubuntu 20.04
|
||||
|
||||
![][9]
|
||||
|
||||
The UbuntuDDE team has created a PPA for their distribution and you can use the same PPA to install Deepin desktop on Ubuntu 20.04. Keep in mind that this PPA is only available for Ubuntu 20.04. Please read about [using PPA in Ubuntu][10].
|
||||
|
||||
No Deepin version 20
|
||||
|
||||
The Deepin desktop you’ll be installing using the PPA here is NOT the new Deepin desktop version 20 yet. It will probably arrive after the Ubuntu 20.10 release, but we cannot promise anything.
|
||||
|
||||
Here are the steps that you need to follow:
|
||||
|
||||
**Step 1**: You need to first add the [official PPA by Ubuntu DDE Remix team][11] by typing this on the terminal:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:ubuntudde-dev/stable
|
||||
```
|
||||
|
||||
**Step 2**: Once you have added the repository, proceed with installing the Deepin desktop.
|
||||
|
||||
```
|
||||
sudo apt install ubuntudde-dde
|
||||
```
|
||||
|
||||
![][12]
|
||||
|
||||
Now, the installation will start and after a while, you will be asked to choose the display manager.
|
||||
|
||||
![][13]
|
||||
|
||||
You need to select “**lightdm**” if you want a Deepin-themed lock screen. If not, you can set it as “**gdm3**”.
|
||||
|
||||
In case you don’t see this option, you can get it by typing the following command and then select your preferred display manager:
|
||||
|
||||
```
|
||||
sudo dpkg-reconfigure lightdm
|
||||
```
|
||||
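If you are unsure which display manager is currently active, you can check before reconfiguring. This is a minimal sketch that assumes the Debian/Ubuntu convention of recording it in `/etc/X11/default-display-manager`:

```shell
# Print the configured display manager, if any (Debian/Ubuntu convention).
dm_file=/etc/X11/default-display-manager
if [ -r "$dm_file" ]; then
  basename "$(cat "$dm_file")"   # e.g. lightdm or gdm3
else
  echo "no display manager configured"
fi
```

This only reads a config file, so it is safe to run at any point before or after the installation.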
|
||||
**Step 3:** Once done, you have to log out and log in again by choosing the “**Deepin**” session or just reboot the system.
|
||||
|
||||
![][14]
|
||||
|
||||
And, that is it. Enjoy the Deepin experience on your Ubuntu 20.04 LTS system in no time!
|
||||
|
||||
![][15]
|
||||
|
||||
### Removing Deepin desktop from Ubuntu 20.04
|
||||
|
||||
In case you don’t like the experience or it is buggy for some reason, you can remove it by following the steps below.
|
||||
|
||||
**Step 1:** If you’ve set “lightdm” as your display manager, you need to set the display manager as “gdm3” before uninstalling Deepin. To do that, type in the following command:
|
||||
|
||||
```
|
||||
sudo dpkg-reconfigure lightdm
|
||||
```
|
||||
|
||||
![Select gdm3 on this screen][13]
|
||||
|
||||
And, select **gdm3** to proceed.
|
||||
|
||||
Once you’re done with that, you can simply enter the following command to remove Deepin completely:
|
||||
|
||||
```
|
||||
sudo apt remove startdde ubuntudde-dde
|
||||
```
|
||||
|
||||
You can just reboot to get back to your original Ubuntu desktop. In case the icons become unresponsive, you just open the terminal (**CTRL + ALT + T**) and type in:
|
||||
|
||||
```
|
||||
reboot
|
||||
```
|
||||
|
||||
**Wrapping Up**
|
||||
|
||||
It is good to have different [choices of desktop environments][16]. If you really like Deepin desktop interface, this could be a way to experience Deepin on Ubuntu.
|
||||
|
||||
If you have questions or if you face any issues, please let me know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/install-deepin-ubuntu/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/beautiful-linux-distributions/
|
||||
[2]: https://itsfoss.com/deepin-20-review/
|
||||
[3]: https://www.deepin.org/en/
|
||||
[4]: https://www.debian.org/
|
||||
[5]: https://itsfoss.com/ubuntudde/
|
||||
[6]: https://itsfoss.com/which-ubuntu-install/
|
||||
[7]: https://itsfoss.com/reinstall-ubuntu/
|
||||
[8]: https://itsfoss.com/what-is-desktop-environment/
|
||||
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/ubuntu-20-with-deepin.jpg?resize=800%2C386&ssl=1
|
||||
[10]: https://itsfoss.com/ppa-guide/
|
||||
[11]: https://launchpad.net/~ubuntudde-dev/+archive/ubuntu/stable
|
||||
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/10/deepin-desktop-install.png?resize=800%2C534&ssl=1
|
||||
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/deepin-display-manager.jpg?resize=800%2C521&ssl=1
|
||||
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/deepin-session-ubuntu.jpg?resize=800%2C414&ssl=1
|
||||
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/10/ubuntu-20-with-deepin-1.png?resize=800%2C589&ssl=1
|
||||
[16]: https://itsfoss.com/best-linux-desktop-environments/
|
@ -0,0 +1,185 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Remove Physical Volume from a Volume Group in LVM)
|
||||
[#]: via: (https://www.2daygeek.com/linux-remove-delete-physical-volume-pv-from-volume-group-vg-in-lvm/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How to Remove Physical Volume from a Volume Group in LVM
|
||||
======
|
||||
|
||||
If a device is no longer needed by LVM, you can use the vgreduce command to remove physical volumes from a volume group.
|
||||
|
||||
The vgreduce command shrinks the capacity of a volume group by removing a physical volume.
|
||||
|
||||
First, make sure that the physical volume is not used by any logical volume, which you can check with the pvdisplay command.
|
||||
|
||||
If the physical volume is still being used, you must transfer the data to another physical volume using the pvmove command.
|
||||
|
||||
Once the data is moved, it can be removed from the volume group.
|
||||
|
||||
Finally use the pvremove command to remove the LVM label and LVM metadata on the empty physical volume.
|
||||
|
||||
* **Part-1: [How to Create/Configure LVM (Logical Volume Management) in Linux][1]**
|
||||
* **Part-2: [How to Extend/Increase LVM’s (Logical Volume Resize) in Linux][2]**
|
||||
* **Part-3: [How to Reduce/Shrink LVM’s (Logical Volume Resize) in Linux][3]**
|
||||
|
||||
|
||||
|
||||
![][4]
|
||||
|
||||
### 1) Moving Extents to Existing Physical Volumes
|
||||
|
||||
Use the pvs command to check whether the desired physical volume (here, the **“/dev/sdb1”** disk we plan to remove from LVM) is in use.
|
||||
|
||||
```
|
||||
# pvs -o+pv_used
|
||||
|
||||
PV VG Fmt Attr PSize PFree Used
|
||||
/dev/sda1 myvg lvm2 a- 75.00G 14.00G 61.00G
|
||||
/dev/sdb1 myvg lvm2 a- 50.00G 45.00G 5.00G
|
||||
/dev/sdc1 myvg lvm2 a- 17.15G 12.15G 5.00G
|
||||
```
|
||||
|
||||
If it is in use, check whether there are enough free extents on the other physical volumes in the volume group.
|
||||
|
||||
If so, you can run the pvmove command on the device you want to remove. Extents will be distributed to other devices.
|
||||
|
||||
```
|
||||
# pvmove /dev/sdb1
|
||||
|
||||
/dev/sdb1: Moved: 2.0%
|
||||
…
|
||||
/dev/sdb1: Moved: 79.2%
|
||||
…
|
||||
/dev/sdb1: Moved: 100.0%
|
||||
```
|
||||
|
||||
When the pvmove command completes, re-run the pvs command to check whether the physical volume is free.
|
||||
|
||||
```
|
||||
# pvs -o+pv_used
|
||||
|
||||
PV VG Fmt Attr PSize PFree Used
|
||||
/dev/sda1 myvg lvm2 a- 75.00G 9.00G 66.00G
|
||||
/dev/sdb1 myvg lvm2 a- 50.00G 50.00G 0
|
||||
/dev/sdc1 myvg lvm2 a- 17.15G 12.15G 5.00G
|
||||
```
|
||||
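Eyeballing the `Used` column works, but you can also script the check. Here is a small sketch that filters a captured `pvs -o+pv_used` listing for physical volumes with nothing in use (the sample rows mirror the output above; on a live system you would pipe `pvs` output directly):

```shell
# Sample `pvs -o+pv_used` data rows: PV VG Fmt Attr PSize PFree Used
pvs_output='/dev/sda1 myvg lvm2 a- 75.00G 9.00G 66.00G
/dev/sdb1 myvg lvm2 a- 50.00G 50.00G 0
/dev/sdc1 myvg lvm2 a- 17.15G 12.15G 5.00G'

# Print PVs whose Used column (field 7) is zero, i.e. candidates for vgreduce.
printf '%s\n' "$pvs_output" | awk '$7 == "0" { print $1 }'
```

With the sample data, this prints only `/dev/sdb1`, the volume that pvmove has just emptied.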
|
||||
If it’s free, use the vgreduce command to remove the physical volume /dev/sdb1 from the volume group.
|
||||
|
||||
```
|
||||
# vgreduce myvg /dev/sdb1
|
||||
Removed "/dev/sdb1" from volume group "myvg"
|
||||
```
|
||||
|
||||
Finally, run the pvremove command to remove the disk from the LVM configuration. Now, the disk is completely removed from the LVM and can be used for other purposes.
|
||||
|
||||
```
|
||||
# pvremove /dev/sdb1
|
||||
Labels on physical volume "/dev/sdb1" successfully wiped.
|
||||
```
|
||||
|
||||
### 2) Moving Extents to a New Disk
|
||||
|
||||
If you don’t have enough free extents on the other physical volumes in the volume group, add a new physical volume using the steps below.
|
||||
|
||||
Request new LUNs from the storage team. Once this is allocated, run the following commands to **[discover newly added LUNs or disks in Linux][5]**.
|
||||
|
||||
```
|
||||
# ls /sys/class/scsi_host
|
||||
host0
|
||||
```
|
||||
|
||||
```
|
||||
# echo "- - -" > /sys/class/scsi_host/host0/scan
|
||||
```
|
||||
|
||||
```
|
||||
# fdisk -l
|
||||
```
|
||||
|
||||
Once the disk is detected in the OS, use the pvcreate command to create the physical volume.
|
||||
|
||||
```
|
||||
# pvcreate /dev/sdd1
|
||||
Physical volume "/dev/sdd1" successfully created
|
||||
```
|
||||
|
||||
Use the following command to add the new physical volume /dev/sdd1 to the existing volume group myvg.
|
||||
|
||||
```
|
||||
# vgextend myvg /dev/sdd1
|
||||
Volume group "myvg" successfully extended
|
||||
```
|
||||
|
||||
Now, use the pvs command to see the new disk **“/dev/sdd1”** that you have added.
|
||||
|
||||
```
|
||||
# pvs -o+pv_used
|
||||
|
||||
PV VG Fmt Attr PSize PFree Used
|
||||
/dev/sda1 myvg lvm2 a- 75.00G 14.00G 61.00G
|
||||
/dev/sdb1 myvg lvm2 a- 50.00G 0 50.00G
|
||||
/dev/sdc1 myvg lvm2 a- 17.15G 12.15G 5.00G
|
||||
/dev/sdd1 myvg lvm2 a- 60.00G 60.00G 0
|
||||
```
|
||||
|
||||
Use the pvmove command to move the data from /dev/sdb1 to /dev/sdd1.
|
||||
|
||||
```
|
||||
# pvmove /dev/sdb1 /dev/sdd1
|
||||
|
||||
/dev/sdb1: Moved: 10.0%
|
||||
…
|
||||
/dev/sdb1: Moved: 79.7%
|
||||
…
|
||||
/dev/sdb1: Moved: 100.0%
|
||||
```
|
||||
|
||||
After the data is moved to the new disk, re-run the pvs command to check whether the physical volume is free.
|
||||
|
||||
```
|
||||
# pvs -o+pv_used
|
||||
|
||||
PV VG Fmt Attr PSize PFree Used
|
||||
/dev/sda1 myvg lvm2 a- 75.00G 14.00G 61.00G
|
||||
/dev/sdb1 myvg lvm2 a- 50.00G 50.00G 0
|
||||
/dev/sdc1 myvg lvm2 a- 17.15G 12.15G 5.00G
|
||||
/dev/sdd1 myvg lvm2 a- 60.00G 10.00G 50.00G
|
||||
```
|
||||
|
||||
If it’s free, use the vgreduce command to remove the physical volume /dev/sdb1 from the volume group.
|
||||
|
||||
```
|
||||
# vgreduce myvg /dev/sdb1
|
||||
Removed "/dev/sdb1" from volume group "myvg"
|
||||
```
|
||||
|
||||
Finally, run the pvremove command to remove the disk from the LVM configuration. Now, the disk is completely removed from the LVM and can be used for other purposes.
|
||||
|
||||
```
|
||||
# pvremove /dev/sdb1
|
||||
Labels on physical volume "/dev/sdb1" successfully wiped.
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/linux-remove-delete-physical-volume-pv-from-volume-group-vg-in-lvm/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/create-lvm-storage-logical-volume-manager-in-linux/
|
||||
[2]: https://www.2daygeek.com/extend-increase-resize-lvm-logical-volume-in-linux/
|
||||
[3]: https://www.2daygeek.com/reduce-shrink-decrease-resize-lvm-logical-volume-in-linux/
|
||||
[4]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[5]: https://www.2daygeek.com/scan-detect-luns-scsi-disks-on-redhat-centos-oracle-linux/
|
@ -1,149 +0,0 @@
|
||||
[#]: collector: "lujun9972"
|
||||
[#]: translator: "lxbwolf"
|
||||
[#]: reviewer: " "
|
||||
[#]: publisher: " "
|
||||
[#]: url: " "
|
||||
[#]: subject: "Automate testing for website errors with this Python tool"
|
||||
[#]: via: "https://opensource.com/article/20/7/seodeploy"
|
||||
[#]: author: "JR Oakes https://opensource.com/users/jroakes"
|
||||
|
||||
使用这个 Python 工具对网站错误进行自动化测试
|
||||
======
|
||||
SEODeploy 可以帮助我们在网站部署之前识别出 SEO 问题。
|
||||
![Computer screen with files or windows open][1]
|
||||
|
||||
作为一个技术性搜索引擎优化开发者,我经常被请来协助做网站迁移、新网站发布、分析实施和其他一些影响网站在线可见性和测量等领域,以控制风险。许多公司每月经常性收入的很大一部分来自用户通过搜索引擎找到他们的产品和服务。虽然搜索引擎已经能妥善地处理没有被良好格式化的代码,但在开发过程中还是会出问题,对搜索引擎如何索引和为用户显示页面产生不利影响。
|
||||
|
||||
我曾经也尝试通过评审各阶段会破坏 SEO(<ruby>搜索引擎优化<rt>search engine optimization</rt></ruby>)的问题来手动降低这种风险。我的团队最终审查到的结果,决定了该项目是否可以上线。但这个过程通常很低效,只能用于有限的页面,而且很有可能出现人为错误。
|
||||
|
||||
长期以来,这个行业一直在寻找可用且值得信赖的方式来自动化这一过程,同时还能让开发人员和搜索引擎优化人员在必须测试的内容上获得有意义的发言权。这是非常重要的,因为这些团队在开发冲刺中优先级通常会发生冲突,搜索引擎优化者需要推动变化,而开发人员需要控制退化和预期之外的情况。
|
||||
|
||||
### 常见的破坏 SEO 的问题
|
||||
|
||||
我见过的很多网站有成千上万的页面,甚至上百万。实在令人费解,为什么一个开发过程中的改动能影响这么多页面。在 SEO 的世界中,Google 或其他搜索引擎展示你的页面时,一个非常微小和看起来无关紧要的修改也可能导致全网站范围的变化。在部署到生产环境之前,必须要处理这类错误。
|
||||
|
||||
下面是我去年见过的几个例子。
|
||||
|
||||
#### 偶发的 noindex
|
||||
|
||||
在部署到生产环境之后,我们用的一个专用的第三方 SEO 监控工具 [ContentKing][2] 马上发现了这个问题。这个错误很隐蔽,因为它在 HTML 中是不可见的,确切地说,它隐藏在服务器响应头里,但它能很快导致搜索不可见。
|
||||
|
||||
|
||||
```
|
||||
HTTP/1.1 200 OK
|
||||
Date: Tue May 25 2010 21:12:42 GMT
|
||||
[...]
|
||||
X-Robots-Tag: noindex
|
||||
[...]
|
||||
```
|
||||
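要在部署前捕获这类问题,可以直接检查响应头。下面是一个最小的示意脚本(假设:这里用示例文本代替真实的 `curl -sI https://example.com/` 输出,线上检查时应替换为真实请求):

```shell
# 用示例文本模拟服务器响应头;实际使用时可替换为:headers=$(curl -sI "$url")
headers='HTTP/1.1 200 OK
X-Robots-Tag: noindex'

# 大小写不敏感地匹配 X-Robots-Tag: noindex
if printf '%s\n' "$headers" | grep -qi '^x-robots-tag:.*noindex'; then
  echo "发现 noindex,该页面将不会被搜索引擎索引"
fi
```

把这样的检查放进部署流水线,就能在上线前对一批抽样 URL 自动报警,而不必依赖事后的监控工具。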
|
||||
#### canonical 小写
|
||||
|
||||
上线时错误地把整个网站的 [canonical 链接元素][3]全改成小写了。这个改动影响了接近 30000 个 URL。在修改之前,所有的 URL 大小写都正常(例如 `URL-Path` 这样)。这之所以是个问题是因为 canonical 链接元素是用来给 Google 提示一个网页真实的规范 URL 版本的。这个改动导致很多 URL 被从 Google 的索引中移除并用小写的版本(`/url-path`)重新建立索引。影响范围是流量损失了 10% 到 15%,也污染了未来几个星期的网页监控数据。
|
||||
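可以在部署前用脚本抽查 canonical URL 是否保留了预期的大小写。下面是一个最小示意(假设:HTML 片段和 `https://example.com/URL-Path` 仅为示例,实际使用时应从页面源码中提取):

```shell
# 示例 HTML;实际使用时可以从抓取到的页面源码中提取
html='<link rel="canonical" href="https://example.com/URL-Path">'

# 提取 canonical href,并与其全小写形式比较
canon=$(printf '%s\n' "$html" | sed -n 's/.*href="\([^"]*\)".*/\1/p')
lower=$(printf '%s\n' "$canon" | tr 'A-Z' 'a-z')
if [ "$canon" != "$lower" ]; then
  echo "canonical 保留了大小写:$canon"
else
  echo "警告:canonical 已是全小写,请与预期 URL 核对"
fi
```

把暂存环境的 canonical 与生产环境逐一对比,就能在上线前发现这种全站范围的意外改动。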
|
||||
#### origin 的退化
|
||||
|
||||
一个基于 React 的复杂网站有一个奇怪的问题:`origin.domain.com` 的 URL 会退化为直接显示内容分发网络(CDN)源服务器的内容。它会间歇性地在网站元数据(如 canonical 链接元素、URL 和 Open Graph 标签)中显示源主机而不是边缘主机。这个问题在原始 HTML 和渲染后的 HTML 中都存在,会影响搜索可见度和社交媒体分享的质量。
|
||||
|
||||
### SEODeploy 介绍
|
||||
|
||||
SEO 通常使用差异测试工具来检测渲染后和原始的 HTML 的差异。差异测试是很理想的,因为它避免了肉眼测试的不确定性。你希望检查 Google 对你的页面的渲染过程的不同,而不是检查用户对你页面的渲染。你希望查看下原始的 HTML 是什么样的,而不是渲染后的 HTML,因为 Google 的渲染过程是有独立的两个阶段的。

这促使我和我的同事创造了 [SEODeploy][4],一个“在部署流水线中用于自动化 SEO 测试的 Python 库”。我们的使命是:

> 开发一个工具,让开发者能提供一些 URL 路径,对生产环境和暂存环境主机上的这些路径进行差异测试,尤其是针对 SEO 相关数据的非预期退化。

SEODeploy 的机制很简单:提供一个每行内容都是 URL 路径的文本文件,SEODeploy 会对这些路径运行一系列模块,比较生产环境和暂存环境的 URL,并把检测到的所有错误和改动信息报告出来。
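
这个机制可以用几行 Python 来示意。下面只是一个概念草图,并非 SEODeploy 的真实 API;`extract_prod` 和 `extract_stage` 代表模块在两台主机上提取页面数据的假设函数:

```python
def diff_sample(paths, extract_prod, extract_stage):
    """对抽样的每个路径比较两台主机上提取到的数据,汇总所有差异。"""
    report = {}
    for path in paths:
        prod = extract_prod(path)
        stage = extract_stage(path)
        # 对两边出现过的每个字段,记录取值不一致的项
        diffs = {
            key: (prod.get(key), stage.get(key))
            for key in prod.keys() | stage.keys()
            if prod.get(key) != stage.get(key)
        }
        if diffs:
            report[path] = diffs
    return report


# 一个只比较 <title> 的玩具例子:
prod = lambda p: {"title": "Home"}
stage = lambda p: {"title": "home"}
print(diff_sample(["/"], prod, stage))  # {'/': {'title': ('Home', 'home')}}
```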

![SEODeploy overview][5]

(SEODeploy, [CC BY-SA 4.0][6])

这个工具及其模块可以用一个 YAML 文件来配置,并能根据预期的变化场景进行自定义。

![SEODeploy output][7]

(SEODeploy, [CC BY-SA 4.0][6])

最初的发布版本包含下面的核心功能和概念:

1. **开源**:我们坚信分享代码可以让大家评论、改进、扩展、分享和复用。
2. **模块化**:Web 开发中有很多边界条件。SEODeploy 工具在概念上很简单,因此采用模块化来控制复杂性。我们提供了两个构建好的模块和一个示例模块来说明基本结构。
3. **URL 抽样**:由于获取和测试所有 URL 并不总是可行和有效的,因此我们提供了一种方法:随机抽取 XML 网站地图中的 URL 或被 ContentKing 监控的 URL 作为样本。
4. **灵活的差异检测**:Web 数据是凌乱的。无论被检测的数据是什么类型(如延伸文件系统、数组或列表、JSON 对象或字典、整数、浮点数等等),差异检测功能都会尝试将这些数据转换为差异信息。
5. **自动化**:你可以在命令行调用抽样和运行方法,将 SEODeploy 融入已有的流水线也很简单。

### 模块

虽然核心功能很简单,但在设计上,SEODeploy 的强大功能和复杂性都体现在模块上。模块负责处理那些更困难的任务:获取、清理和组织暂存环境和生产环境服务器上的数据,以便进行对比。

#### Headless 模块

[Headless 模块][8]是为那些不想为获取数据而向第三方服务付费的开发者准备的。它可以运行任意版本的 Chrome,并从每组要比较的 URL 中提取渲染后的数据。

Headless 模块会提取下面的核心数据用来比较:

1. SEO 内容,如标题、标头、标签等等。
2. 从 Chrome Timings 和 CDP(<ruby>Chrome 开发工具协议<rt>Chrome DevTools Protocol</rt></ruby>)性能 API 中提取的性能数据。
3. 经过计算的性能指标,包括 CLS(<ruby>累积布局偏移<rt>Cumulative Layout Shift</rt></ruby>),这是 Google 最近发布的一项很受欢迎的[核心 Web 指标][9]。
4. 从 CDP 覆盖率 API 获取的 CSS 和 JavaScript 的覆盖率数据。

这个模块包含了处理暂存环境、网络速度预设(为了让对比更规范化)等功能,也包含了一个在暂存对比数据中替换暂存主机的方法。开发者也能很容易地扩展这个模块,来收集他们想对每个页面进行比较的任何其他数据。

#### 其他模块

我们为开发者创建了一个[示例模块][10],开发者可以参照它使用该框架创建自定义的提取模块。此外还有一个与 ContentKing 集成的模块。注意,ContentKing 模块需要订阅 ContentKing,而 Headless 模块可以在所有能运行 Chrome 的机器上运行。
### 需要解决的问题

我们有扩展和强化这个工具库的[计划][11],也在征求开发者们关于哪些功能满足需求、哪些不满足的[反馈][12]。我们正在解决的问题和事项有:

1. 对于某些对比元素(尤其是 schema),动态时间戳会产生假阳性。
2. 把测试数据保存到数据库,以便查看部署历史以及与上次暂存推送进行差异测试。
3. 借助基于云的渲染基础设施,强化提取的精度和速度。
4. 把测试覆盖率从现在的 46% 提高到 99% 以上。
5. 目前,我们依赖 [Poetry][13] 进行部署管理,但我们希望发布一个 PyPI 库,这样就可以用 `pip install` 轻松安装。
6. 我们还在关注更多使用时的问题和相关数据。

### 开始使用

这个项目在 [GitHub][4] 上,我们为大部分功能都提供了[文档][14]。

我们希望你能克隆 SEODeploy 并为其贡献代码。我们的目标是,通过这个由技术精湛的搜索引擎优化开发者开发、经过开发者和工程师们验证的工具来回馈开源社区。我们都见过验证复杂的暂存问题需要多长时间,也都见过大量 URL 的微小改动能带来什么样的商业影响。我们认为这个库可以为开发团队节省时间、降低部署过程中的风险。

如果你有问题或者想提交代码,请查看项目的[关于页面][15]。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/seodeploy

作者:[JR Oakes][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jroakes
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY "Computer screen with files or windows open"
[2]: https://www.contentkingapp.com/
[3]: https://en.wikipedia.org/wiki/Canonical_link_element
[4]: https://github.com/locomotive-agency/SEODeploy
[5]: https://opensource.com/sites/default/files/uploads/seodeploy.png "SEODeploy overview"
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/sites/default/files/uploads/seodeploy_output.png "SEODeploy output"
[8]: https://locomotive-agency.github.io/SEODeploy/modules/headless/
[9]: https://web.dev/vitals/
[10]: https://locomotive-agency.github.io/SEODeploy/modules/creating/
[11]: https://locomotive-agency.github.io/SEODeploy/todo/
[12]: https://locomotive-agency.github.io/SEODeploy/about/#contact
[13]: https://python-poetry.org/
[14]: https://locomotive-agency.github.io/SEODeploy/
[15]: https://locomotive-agency.github.io/SEODeploy/about/
205
translated/tech/20200908 How to install software with Ansible.md
Normal file
@ -0,0 +1,205 @@

[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to install software with Ansible)
[#]: via: (https://opensource.com/article/20/9/install-packages-ansible)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

如何使用 Ansible 安装软件
======

使用 Ansible 剧本自动安装和更新设备上的软件。

![Puzzle pieces coming together to form a computer screen][1]

Ansible 是系统管理员和开发人员用来保持计算机系统处于最佳状态的一种流行的自动化工具。与其他可扩展框架一样,[Ansible][2] 本身功能有限,它真正的强大之处体现在众多模块中。在某种程度上,Ansible 模块就是 [Linux][3] 系统的命令,它们针对特定问题提供解决方案。维护计算机时的一项常见任务,就是让所有计算机都保持更新且彼此一致。

我曾经使用软件包的文本列表来让各个系统保持大体同步:我会列出笔记本电脑上安装的软件包,然后将其与台式机或另一台服务器逐一交叉对比,手动弥补差异。当然,在 Linux 机器上安装和维护应用程序是 Ansible 的一项基本功能,这意味着你可以在自己关心的计算机上列出所需的软件包。

### 寻找正确的 Ansible 模块

Ansible 模块的数量非常庞大,如何找到能完成你任务的那一个?在 Linux 中,你可以在应用程序菜单或 `/usr/bin` 中查找要运行的应用程序;使用 Ansible 时,则可以参考 [Ansible 模块索引][4]。

这个索引按照类别列出了模块。稍加搜索,你就很可能找到所需的模块。对于包管理,[Packaging 模块][5]几乎适用于所有带包管理器的系统。

### 动手写一个 Ansible 剧本

首先,选择本地计算机上的包管理器。例如,如果你打算在运行 Fedora 的笔记本电脑上编写 Ansible 指令(在 Ansible 中称为“剧本”),那么从 `dnf` 模块开始;如果你在 Elementary OS 上编写,则使用 `apt` 模块,以此类推。这样你就可以先在本机进行测试和验证,以后再扩展到其它计算机。

第一步是创建一个存放你剧本的目录。这不是绝对必要的,但这是一个好习惯。Ansible 只需要一个 YAML 配置文件就可以运行,但是如果你以后想要扩展剧本,就可以通过调整目录和文件的组织方式来控制 Ansible。现在,只需创建一个名为 `install_packages` 或类似的目录:

```
$ mkdir ~/install_packages
```

你可以根据自己的喜好来命名 Ansible 的剧本,但通常将其命名为 `site.yml`:

```
$ touch ~/install_packages/site.yml
```


在你最喜欢的文本编辑器中打开 `site.yml`,添加以下内容:

```
---
- hosts: localhost
  tasks:
    - name: install packages
      become: true
      become_user: root
      dnf:
        state: present
        name:
          - tcsh
          - htop
```

你必须调整使用的模块名称来匹配你使用的发行版。在此示例中,我使用 `dnf` 是因为我在 Fedora Linux 上编写剧本。

就像 Linux 终端中的命令一样,知道 _如何_ 调用 Ansible 模块就已经成功了一半。这个示例剧本遵循标准剧本格式:

* `hosts` 是一台或多台计算机。在本示例中,目标计算机是 `localhost`,即你当前正在使用的计算机(而不是你希望 Ansible 连接的远程系统)。
* `tasks` 是你要在主机上执行的任务列表。
* `name` 是任务的人性化名称。在本例中,我使用 `install packages`,因为这就是该任务正在做的事情。
* `become` 允许 Ansible 更改运行此任务的用户。
* `become_user` 允许 Ansible 成为 `root` 用户来运行此任务。这是必须的,因为只有 root 用户才能使用 `dnf` 安装应用程序。
* `dnf` 是模块名称,你可以在 Ansible 网站上的模块索引中找到它。

`dnf` 下的节点是 `dnf` 模块专用的参数。这是模块文档的关键所在。就像 Linux 命令的手册页一样,模块文档会告诉你可用的选项和所需的参数。

![Ansible 模块文档][6]

Ansible 模块文档(Seth Kenlon, [CC BY-SA 4.0][7])

安装软件包是一个相对简单的任务,仅需要两个元素。`state` 选项指示 Ansible 检查系统上是否存在该 _软件包_,而 `name` 选项列出要查找的软件包。Ansible 处理的是机器的 _状态_,因此模块指令始终意味着变更。假如 Ansible 扫描了系统状态,发现剧本里描述的系统(在本例中,`tcsh` 和 `htop` 存在)与实际状态存在冲突,那么 Ansible 的任务就是进行必要的更改,使系统与剧本匹配。Ansible 可以通过 `dnf`(或 `apt` 或者其它任何包管理器)模块进行这些更改。

每个模块可能都有一组不同的选项,所以在编写剧本时,要经常参考模块文档。除非你对模块非常熟悉,否则这是期望模块完成工作的唯一合理方法。

### 验证 YAML

剧本是用 YAML 编写的。因为 YAML 遵循严格的语法,所以安装 `yamllint` 来检查剧本是很有帮助的。更妙的是,还有一个名为 `ansible-lint` 的专门针对 Ansible 剧本的检查工具。在继续之前,先安装它们。

在 Fedora 或 CentOS 上:

```
$ sudo dnf install yamllint python3-ansible-lint
```

在 Debian、Elementary 或 Ubuntu 上,同样地:

```
$ sudo apt install yamllint ansible-lint
```

使用 `ansible-lint` 来验证你的剧本。如果你无法使用 `ansible-lint`,也可以使用 `yamllint`。

```
$ ansible-lint ~/install_packages/site.yml
```

成功则不返回任何内容。但如果文件中有错误,则必须先修复它们,然后再继续。复制和粘贴过程中的常见错误包括:省略最后一行末尾的换行符、使用制表符而不是空格来缩进。在文本编辑器中修复它们,重新运行 `ansible-lint`,重复这个过程,直到 `ansible-lint` 或 `yamllint` 没有任何返回为止。

### 使用 Ansible 安装一个应用

现在你有了一个可验证的有效剧本,终于可以在本地计算机上运行它了。因为你碰巧知道该剧本定义的任务需要 root 权限,所以在调用 Ansible 时必须使用 `--ask-become-pass` 选项,这样系统会提示你输入管理员密码。

开始安装:

```
$ ansible-playbook --ask-become-pass ~/install_packages/site.yml
BECOME password:
PLAY [localhost] ******************************

TASK [Gathering Facts] ******************************
ok: [localhost]

TASK [install packages] ******************************
ok: [localhost]

PLAY RECAP ******************************
localhost: ok=0 changed=2 unreachable=0 failed=0 [...]
```

这些命令被执行后,目标系统将处于与剧本中描述的相同的状态。

### 在远程系统上安装应用程序

通过这么多操作来替代一条简单的命令似乎有些得不偿失,但是 Ansible 的优势是它可以在你的所有系统中实现自动化。你可以使用条件语句使 Ansible 在不同的系统上使用特定的模块,但是现在,先假定所有计算机都使用相同的包管理器。

要连接到远程系统,你必须在 `/etc/ansible/hosts` 文件中定义远程系统。该文件是与 Ansible 一起安装的,所以它已经存在,但除了一些解释性注释之外,它可能是空的。使用 `sudo` 在你喜欢的文本编辑器中打开它。

只要主机名可以解析,就可以通过 IP 地址或主机名来定义主机。例如,如果你已经在 `/etc/hosts` 中定义了 `liavara` 并可以成功 ping 通,那么你就可以在 `/etc/ansible/hosts` 中将 `liavara` 设置为主机。或者,如果你正在运行一个域名服务器或 Avahi 服务器并且可以 ping 通 `liavara`,那么你也可以在 `/etc/ansible/hosts` 中定义它。否则,你必须使用它的 IP 地址。

你还必须设法与目标主机建立安全 shell(SSH)连接。最简单的方法是使用 `ssh-copy-id` 命令,但是如果你以前从未与主机建立过 SSH 连接,请[阅读我关于如何创建自动 SSH 连接的文章][8]。

一旦你在 `/etc/ansible/hosts` 文件中输入了主机名或 IP 地址,你就可以在剧本中更改 `hosts` 定义:

```
---
- hosts: all
  tasks:
    - name: install packages
      become: true
      become_user: root
      dnf:
        state: present
        name:
          - tcsh
          - htop
```

再次运行 `ansible-playbook`:

```
$ ansible-playbook --ask-become-pass ~/install_packages/site.yml
```

这次,剧本会在你的远程系统上运行。

如果你添加了更多主机,则有许多方法可以过滤哪台主机执行哪个任务。例如,你可以创建主机组(服务器用 `webservers`,台式机用 `workstations` 等)。

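例如,一个带分组的 `/etc/ansible/hosts` 清单大致是下面这样(其中 `liavara` 来自上文,其余主机名和 IP 仅为示意);把剧本中的 `hosts:` 改为 `webservers` 或 `workstations`,任务就只会在对应的组上运行:

```
[webservers]
liavara
www.example.com

[workstations]
desktop.example.com
192.168.1.20
```
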
### 适用于混合环境的 Ansible

到目前为止,我们一直假定 Ansible 配置的所有主机都运行相同的操作系统(都是使用 `dnf` 命令进行包管理的操作系统)。那么,如果你要管理不同发行版的主机,例如 Ubuntu(使用 `apt`)或 Arch(使用 `pacman`),或者其它操作系统时,该怎么办?

只要目标操作系统具有包管理器([macOS 有 Homebrew][9],[Windows 有 Chocolatey][10]),Ansible 就能派上用场。

这就是 Ansible 优势最明显的地方。在 shell 脚本中,你必须检查目标主机上有哪些可用的包管理器;即使使用纯 Python,也必须检查操作系统。Ansible 不仅内置了这些功能,而且还提供了在剧本中使用命令结果的机制。你可以不使用 `dnf` 模块,而是使用 `action` 关键字来执行由 Ansible 事实收集子系统提供的变量所定义的任务。

```
---
- hosts: all
  tasks:
    - name: install packages
      become: true
      become_user: root
      action: >
        {{ ansible_pkg_mgr }} name=htop,transmission state=present update_cache=yes
```

`action` 关键字会加载目标插件。在本例中,它使用了 `ansible_pkg_mgr` 变量,该变量由 Ansible 在初始的 **收集信息** 阶段填充。你不需要告诉 Ansible 收集其运行操作系统的相关事实,所以很容易忽略这一点,但当你运行一个剧本时,你会在默认输出中看到它:

```
TASK [Gathering Facts] *****************************************
ok: [localhost]
```

`action` 插件使用来自这个探测的信息,用相应的包管理器命令填充 `ansible_pkg_mgr`,以安装在 `name` 参数之后列出的程序包。只用 8 行代码,你就可以解决其它脚本方案中很少能解决的复杂跨平台难题。

### 使用 Ansible

现在是 21 世纪,我们都希望我们的计算机设备能够互联并且相对一致。无论你维护的是两台还是 200 台计算机,你都不必一次又一次地执行相同的维护任务。使用 Ansible 来同步生活中的计算机设备,看看 Ansible 还能为你做些什么。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/9/install-packages-ansible

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen)
[2]: https://opensource.com/resources/what-ansible
[3]: https://opensource.com/resources/linux
[4]: https://docs.ansible.com/ansible/latest/modules/modules_by_category.html
[5]: https://docs.ansible.com/ansible/latest/modules/list_of_packaging_modules.html
[6]: https://opensource.com/sites/default/files/uploads/ansible-module.png (Ansible documentation)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/article/20/8/how-ssh
[9]: https://opensource.com/article/20/6/homebrew-mac
[10]: https://opensource.com/article/20/3/chocolatey
@ -0,0 +1,89 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Use the Firefox Task Manager (to Find and Kill RAM and CPU Eating Tabs and Extensions))
[#]: via: (https://itsfoss.com/firefox-task-manager/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

如何使用 Firefox 任务管理器(查找并杀死占用内存和 CPU 的标签页和扩展程序)
======

Firefox 在 Linux 用户中很受欢迎,它是好几个 Linux 发行版上的默认网络浏览器。

在许多其他功能之外,Firefox 还提供了一个自己的任务管理器。

既然 Linux 中已经有了[任务管理器][1]形式的[系统监控工具][2],为什么还要使用 Firefox 自带的任务管理器呢?这里有一个很好的理由。

假设你的系统占用了太多的内存或 CPU。如果你使用 `top` 或其他一些系统[资源监控工具(如 Glances)][3],你会发现这些工具无法区分打开的标签页或扩展。

通常情况下,每个 Firefox 标签页都显示为 **Web 内容**进程。你可以看到是某个 Firefox 进程导致了这个问题,但这无法准确判断是哪个标签页或扩展。

这时你可以使用 Firefox 任务管理器。让我来告诉你怎么做!

### Firefox 任务管理器

有了 Firefox 任务管理器,你就可以列出所有消耗系统资源的标签页、跟踪器和附加组件。

![][4]

正如你在上面的截图中所看到的,任务管理器会显示标签页的名称、类型(标签页或附加组件)、能源影响和消耗的内存。

虽然其它各项都一目了然,但要注意**能源影响指的是 CPU 的使用**。如果你使用的是笔记本电脑,它是一个很好的指标,可以告诉你什么东西会更快耗尽电池。

#### 在 Firefox 中访问任务管理器

令人意外的是,任务管理器没有对应的 [Firefox 键盘快捷键][5]。

要快速启动 Firefox 任务管理器,可以在地址栏中输入 “**about:performance**”,如下图所示。

![Quickly access task manager in Firefox][6]

另外,你也可以点击**菜单**图标,然后进入 “**更多**” 选项,如下截图所示。

![Accessing task manager in Firefox][7]

接下来,你会看到 “**任务管理器**” 选项,只需点击它即可。

![][8]

#### 使用 Firefox 任务管理器

打开任务管理器后,你就可以检查资源的使用情况,展开标签页来查看跟踪器及其资源占用,也可以选择关闭标签页,如下截图中高亮处所示。

![][9]

以下是你应该知道的:

* 能源影响指的是 CPU 消耗。
* 子框架或子任务通常是与需要在后台运行的标签页相关联的跟踪器/脚本。

通过这个任务管理器,你可以发现网站上的流氓脚本,以及它是否导致你的浏览器变慢。

这并不是什么高深的技术,但知道 Firefox 任务管理器的人并不多。现在你知道了,它应该能派上用场,你觉得呢?

--------------------------------------------------------------------------------

via: https://itsfoss.com/firefox-task-manager/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/task-manager-linux/
[2]: https://itsfoss.com/linux-system-monitoring-tools/
[3]: https://itsfoss.com/glances/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/firefox-task-manager-shot.png?resize=800%2C519&ssl=1
[5]: https://itsfoss.com/firefox-keyboard-shortcuts/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/firefox-url-performance.jpg?resize=800%2C357&ssl=1
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/firefox-task-manager-steps.jpg?resize=800%2C779&ssl=1
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/firefox-task-manager-menu.jpg?resize=800%2C465&ssl=1
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/firefox-task-manager-close-tab.png?resize=800%2C496&ssl=1