Mirror of https://github.com/LCTT/TranslateProject.git, commit f3652514d0
@@ -0,0 +1,189 @@
[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11494-1.html)
[#]: subject: (Mutation testing by example: Failure as experimentation)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
Mutation testing by example: Failure as experimentation
======

> Use the .NET xUnit.net testing framework to develop the logic for an automated cat door that opens during daylight hours and locks during the night.

![Digital hand surrounding by objects, bike, light bulb, graphs][1]

In the [first article][2] in this series, I demonstrated how to use planned failure to ensure expected outcomes in your code. In this second article, I'll continue developing the example project: an automated cat door that opens during daylight hours and locks during the night.

As a reminder, you can follow along using the .NET xUnit.net testing framework, as described in the [instructions here][3].

### About daylight hours

Recall that test-driven development (TDD) centers around a large number of unit tests.

The first article implemented the logic that satisfies the expectations of the `Given7pmReturnNighttime` unit test. But that's not all; now you need to describe the results you expect when the current time is later than 7am. Here is the new unit test, called `Given7amReturnDaylight`:

```
[Fact]
public void Given7amReturnDaylight()
{
    var expected = "Daylight";
    var actual = dayOrNightUtility.GetDayOrNight();
    Assert.Equal(expected, actual);
}
```

The new unit test now fails (the earlier it fails, the better!):

```
Starting test execution, please wait...
[Xunit.net 00:00:01.23] unittest.UnitTest1.Given7amReturnDaylight [FAIL]
Failed unittest.UnitTest1.Given7amReturnDaylight
[...]
```

It expected to receive the string value `Daylight` but instead received `Nighttime`.
### Analyzing the failed test case

Upon closer inspection, it seems that the code itself has a problem. As it turns out, the implementation of the `GetDayOrNight` method is not testable!

Take a look at the core challenges we face:

1. `GetDayOrNight` relies on a hidden input.

   The value of `dayOrNight` depends on a hidden input (it obtains the time-of-day value from the built-in system clock).

2. `GetDayOrNight` contains non-deterministic behavior.

   The time-of-day value obtained from the system clock is non-deterministic. It depends on the point in time when you run the code, which we must consider unpredictable.

3. Poor quality of the `GetDayOrNight` API.

   The API is tightly coupled to a concrete data source (the system `DateTime`).

4. `GetDayOrNight` violates the single responsibility principle.

   The method implementation both consumes and processes information. Good practice dictates that a method should be responsible for performing a single duty.

5. `GetDayOrNight` has more than one reason to change.

   It is possible to imagine a change in the internal time source. It is equally easy to imagine a change in the processing logic. These different reasons for change must be isolated from one another.

6. The API signature of `GetDayOrNight` is insufficient when it comes to understanding its behavior.

   Ideally, you should be able to tell what kind of behavior to expect from an API simply by looking at its signature.

7. `GetDayOrNight` depends on global shared mutable state.

   Shared mutable state is to be avoided at all costs!

8. The behavior of the `GetDayOrNight` method cannot be predicted even after reading its source code.

   That is a serious problem. It should always be perfectly clear from reading the source code what behavior a system will exhibit once it is running.

### The principles behind the failure

Whenever you face an engineering problem, it is advisable to use the time-tested strategy of divide and conquer. In this case, following the principle of separation of concerns is the way to go.

> Separation of concerns (SoC) is a design principle for separating a computer program into distinct sections, such that each section addresses a separate concern. A concern is a set of information that affects the code of a computer program. A concern can be as general as the details of the hardware the code is being optimized for, or as specific as the name of a class to instantiate. A program that embodies SoC well is called a modular program.
>
> ([source][4])

The `GetDayOrNight` method should be concerned only with deciding whether a given date and time value means daylight or nighttime. It should not be concerned with finding the source of that value. That concern should be left to the calling client.

It must be left to the calling client to obtain the current time. This approach aligns with another valuable engineering principle: inversion of control. Martin Fowler explores the concept in detail [here][5].

> One important characteristic of a framework is that the methods defined by the user to tailor the framework will often be called from within the framework itself, rather than from the user's application code. The framework often plays the role of the main program in coordinating and sequencing application activity. This inversion of control gives frameworks the power to serve as extensible skeletons. The methods supplied by the user tailor the generic algorithms defined in the framework for a particular application.
>
> -- [Ralph Johnson and Brian Foote][6]

### Refactoring the test case

So the code needs to be refactored. Get rid of the dependency on the internal clock (the `DateTime` system utility):

```
DateTime time = new DateTime();
```

Delete the above line (it should be line 7 in your file). Refactor the code further by adding an input parameter of type `DateTime` to the `GetDayOrNight` method.

Here is the refactored `DayOrNightUtility.cs` class:

```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight(DateTime time) {
            string dayOrNight = "Nighttime";
            if(time.Hour >= 7 && time.Hour < 19) {
                dayOrNight = "Daylight";
            }
            return dayOrNight;
        }
    }
}
```

Refactoring the code requires changing the unit tests. You need to prepare test data for `nightHour` and `dayHour` and pass those values into the `GetDayOrNight` method. Here are the refactored unit tests:

```
using System;
using Xunit;
using app;

namespace unittest
{
    public class UnitTest1
    {
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
        DateTime nightHour = new DateTime(2019, 08, 03, 19, 00, 00);
        DateTime dayHour = new DateTime(2019, 08, 03, 07, 00, 00);

        [Fact]
        public void Given7pmReturnNighttime()
        {
            var expected = "Nighttime";
            var actual = dayOrNightUtility.GetDayOrNight(nightHour);
            Assert.Equal(expected, actual);
        }

        [Fact]
        public void Given7amReturnDaylight()
        {
            var expected = "Daylight";
            var actual = dayOrNightUtility.GetDayOrNight(dayHour);
            Assert.Equal(expected, actual);
        }
    }
}
```
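
If you are following along, you can re-run the suite after the refactoring with the standard .NET CLI test runner (assuming the project layout from the first article):

```
$ dotnet test
```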

### Lessons learned

Before moving on with this simple scenario, take a moment to review the lessons learned in this exercise.

It is easy to inadvertently create a trap by writing code that is untestable. On the surface, such code may appear to work correctly. However, following the test-driven development (TDD) practice of describing the expectations first, and only then prescribing the implementation, exposed serious problems in the code.

This shows that TDD is the ideal methodology for ensuring code does not get too messy. TDD points out problem areas, such as the absence of single responsibility and the presence of hidden inputs. In addition, TDD helps remove non-deterministic code and replace it with fully testable code that behaves deterministically.

Finally, TDD helped deliver code that is easy to read and whose logic is easy to follow.

In the next article in this series, I'll demonstrate how to use the logic created during this exercise to implement the functioning code, and how further testing can make it even better.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation

Author: [Alex Bunardzic][a]
Topic selection: [lujun9972][b]
Translator: [Morisun029](https://github.com/Morisun029)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/alex-bunardzic
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://linux.cn/article-11483-1.html
[3]: https://linux.cn/article-11468-1.html
[4]: https://en.wikipedia.org/wiki/Separation_of_concerns
[5]: https://martinfowler.com/bliki/InversionOfControl.html
[6]: http://www.laputan.org/drc/drc.html
@@ -1,34 +1,34 @@
[#]: collector: "lujun9972"
[#]: translator: "way-ww"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-11491-1.html"
[#]: subject: "How to Run the Top Command in Batch Mode"
[#]: via: "https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/"
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"
How to Run the Top Command in Batch Mode
======

![](https://img.linux.net.cn/data/attachment/album/201910/22/235420ylswdescv5ddffit.jpg)

The [top command][1] is the best command that everyone uses to [monitor Linux system performance][2]. You probably already know most of its operations, except for a very few; if I'm not wrong, batch mode is one of them.

Most script writers and developers know it, because this operation is mainly used for writing scripts.

If you are not aware of it, don't worry; we are here to introduce it.

### What is batch mode in the top command

Batch mode allows you to send the output of the `top` command to other programs or to a file.

In this mode, `top` will not accept input and will run until it reaches the number of iterations you specify with the `-n` option.
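
As a quick illustration, the hedged snippet below captures exactly three iterations, two seconds apart, into a file; `-d` sets the delay between iterations, and the file name is just a placeholder:

```
# Take 3 snapshots of top output, 2 seconds apart, and save them
top -b -n 3 -d 2 > top-samples.txt
```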

If you want to fix any performance issues on a Linux server, you need to correctly [understand the output of the top command][3].

### 1) How to run the top command in batch mode

By default, the `top` command sorts the output by CPU usage, so when you run the command below in batch mode, it does the same and prints the first 35 lines:

```
# top -bc | head -35
@@ -72,7 +72,7 @@ PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND

### 2) How to run the top command in batch mode and sort the output by memory usage

Run the command below in batch mode to sort the results by memory usage:

```
# top -bc -o +%MEM | head -n 20
@@ -99,19 +99,17 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2475984 avail Mem
8632 nobody 20 0 256844 25744 2216 S 0.0 0.7 0:00.03 /usr/sbin/httpd -k start
```

Details of the above command:

* `-b`: batch mode option
* `-c`: print the absolute path of the running processes
* `-o`: specify the field to sort by
* `head`: output the first part of a file
* `-n`: print the first n lines
### 3) How to run the top command in batch mode and sort the output by a specific user's processes

If you want to sort the results by a specific user's processes, run the command below:

```
# top -bc -u mysql | head -n 10
@@ -128,13 +126,11 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2649412 avail Mem

### 4) How to run the top command in batch mode and sort the output by process time

Use the `top` command below in batch mode to sort the results by process time. This shows the total CPU time that a task has used since it started.

But if you want to check how long a process has been running on Linux, see the following article:

* [Five ways to check how long a process has been running in Linux][4]

```
# top -bc -o TIME+ | head -n 20
@@ -163,7 +159,7 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2440332 avail Mem

### 5) How to run the top command in batch mode and save the output to a file

If you want to share the output of the `top` command with someone for troubleshooting purposes, redirect the output to a file with the following command:

```
# top -bc | head -35 > top-report.txt
@@ -209,9 +205,9 @@ KiB Swap: 1048572 total, 514640 free, 533932 used. 2659084 avail Mem

### How to sort the output by a specific field

In the latest version of the `top` command, press the `f` key to enter the field management screen.

To sort by a new field, use the `up`/`down` arrow keys to select the correct option, and then press the `s` key to sort it. Finally, press the `q` key to exit this window.

```
Fields Management for window 1:Def, whose current sort field is %CPU
@@ -269,9 +265,9 @@ Fields Management for window 1:Def, whose current sort field is %CPU
nsUSER = USER namespace Inode
```
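
If you prefer to stay in batch mode, the `-o` option shown earlier accepts any of these field names as well; for example, a hedged one-liner that sorts by resident memory (`RES`):

```
# top -b -n 1 -o +RES | head -n 20
```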

With older versions of the `top` command, press `shift+f` or `shift+o` to enter the field management screen for sorting.

To sort by a new field, select the corresponding sort field letter, and then press Enter to sort.

```
Current Sort Field: N for window 1:Def
@@ -323,7 +319,7 @@ via: https://www.2daygeek.com/linux-run-execute-top-command-in-batch-mode/

Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [way-ww](https://github.com/way-ww)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
@@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: (lnrCoder)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11492-1.html)
[#]: subject: (DevSecOps pipelines and tools: What you need to know)
[#]: via: (https://opensource.com/article/19/10/devsecops-pipeline-and-tools)
[#]: author: (Sagar Nangare https://opensource.com/users/sagarnangare)
DevSecOps pipelines and tools: What you need to know
======

> DevSecOps improves on DevOps to ensure that security remains an essential part of the process.

![](https://img.linux.net.cn/data/attachment/album/201910/23/002010fvzh282e8ghhdzpk.jpg)

DevOps is well known in the IT world by now, but it is not flawless. Imagine you have implemented all of the DevOps engineering practices in modern application delivery for a project. You've reached the end of the development pipeline, but a penetration-testing team (internal or external) has detected a security flaw and raised a report. Now you have to restart all of your processes and ask the developers to fix the flaw.

This is not overly tedious in a DevOps-based software development lifecycle (SDLC) system, but it does consume time and affects the delivery schedule. If security had been integrated from the start of the SDLC, you might have tracked down the glitch and eliminated it along the way. But pushing security to the end of the development pipeline, as in the scenario above, leads to a longer development lifecycle.

This is the reason for introducing DevSecOps, which consolidates the overall software delivery cycle in an automated way.

In modern DevOps methodologies, where organizations widely use containers to host applications, we see [Kubernetes][2] and [Istio][3] used heavily. However, these tools have their own vulnerabilities. For example, the Cloud Native Computing Foundation (CNCF) recently completed a [Kubernetes security audit][4] that identified several issues. All tools used in the DevOps pipeline need security checks while the pipeline runs, and DevSecOps pushes admins to monitor the tools' repositories for upgrades and patches.
### What is DevSecOps?

Like DevOps, DevSecOps is a mindset or culture that developers and IT operations teams follow while developing and deploying software applications. It integrates active and automated security audits and penetration testing into agile application development.

To use [DevSecOps][5], you need to:

* Introduce the concept of security right from the start of the SDLC to minimize vulnerabilities in the software code.
* Ensure everyone (including developers and IT operations teams) shares responsibility for following security practices in their tasks.
* Integrate security controls, tools, and processes at the start of the DevOps workflow. These will enable automated security checks at each stage of software delivery.
DevOps has always meant including security, as well as quality assurance (QA), database administration, and everything else, in the development and release process. However, DevSecOps is an evolution of that process to ensure security is never forgotten and remains an essential part of the process.

### Understanding the DevSecOps pipeline

There are different stages in a typical DevOps pipeline; a typical SDLC process includes phases like plan, code, build, test, release, and deploy. In DevSecOps, specific security checks are applied in each stage (a minimal script sketch follows the list below).

* **Plan:** Execute security analysis and create a test plan to determine scenarios for where, how, and when testing will be done.
* **Code:** Deploy linting tools and Git controls to secure passwords and API keys.
* **Build:** While building code for execution, incorporate static application security testing (SAST) tools to track down flaws in the code before deploying to production. These tools are specific to a programming language.
* **Test:** Use dynamic application security testing (DAST) tools to test your application in runtime. These tools can detect errors associated with user authentication, authorization, SQL injection, and API-related endpoints.
* **Release:** Just before releasing the application, employ security analysis tools to perform thorough penetration testing and vulnerability scanning.
* **Deploy:** After completing the above tests in runtime, send a secure build to production for final deployment.
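
As a rough illustration of what "automated security checks at each stage" can look like, here is a minimal, hedged gate script. It assumes three freely available scanners (git-secrets, Bandit, and Trivy) are installed, and that the project is a Python application shipped as a container image; substitute whatever tools your pipeline actually uses:

```
#!/usr/bin/env bash
# Hypothetical per-stage security gates for a CI pipeline.
set -euo pipefail

echo "[code] scanning the repository for committed credentials..."
git secrets --scan

echo "[build] running static analysis (SAST) on the sources..."
bandit -r src/

echo "[release] scanning the container image for known CVEs..."
trivy image myapp:latest

echo "All security gates passed."
```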

### DevSecOps tools

Tools are available for every stage of the SDLC. Some are commercial products, but most are open source. In my next article, I will talk more about the tools to use in the different stages of the pipeline.

DevSecOps will play a more crucial role as security threats to enterprises built on modern IT infrastructure grow more complex. However, the DevSecOps pipeline will need to improve over time, rather than relying on implementing all security changes at once. This will eliminate the possibility of backtracking or of an application delivery failure.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/devsecops-pipeline-and-tools

Author: [Sagar Nangare][a]
Topic selection: [lujun9972][b]
Translator: [lnrCoder](https://github.com/lnrCoder)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/sagarnangare
[b]: https://github.com/lujun9972
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://opensource.com/article/18/9/what-istio
[4]: https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/
[5]: https://resources.whitesourcesoftware.com/blog-whitesource/devsecops
@@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11490-1.html)
[#]: subject: (Bash Script to Delete Files/Folders Older Than "X" Days in Linux)
[#]: via: (https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-days-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
@@ -10,29 +10,21 @@

Bash Script to Delete Files/Folders Older Than "X" Days in Linux
======

[Disk usage][1] monitoring tools can alert us when a given threshold is reached. But they cannot solve the [disk usage][2] problem on their own; manual intervention is needed for that.

If you want to fully automate this kind of operation, what would you do? Yes, it can be done with a bash script.

This script prevents alerts from your [monitoring tool][3], because we delete the old log files before the disk space fills up.

We have written many shell scripts in the past. If you want to check them out, follow the link below:

* [How to automate day-to-day activities using shell scripts?][4]

I have included two bash scripts in this article, which help clear out old logs.

### 1) Bash script to delete folders older than "X" days in Linux

We have a folder named `/var/log/app/` that contains 15 days of logs, and we are going to delete the folders older than 10 days.

```
$ ls -lh /var/log/app/
@@ -56,7 +48,7 @@ drwxrw-rw- 3 root root 24K Oct 15 23:52 app_log.15

This script will delete the folders older than 10 days and send the folder list by mail.

You can change the value of `-mtime X` depending on your requirement. Also, replace your email address instead of ours.

```
# /opt/script/delete-old-folders.sh
@@ -81,7 +73,7 @@ rm $MESSAGE /tmp/folder.out
fi
```
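
The diff above only shows the tail of the script, so here is a minimal sketch of what such a cleanup script typically looks like; the base path, the retention window, and the email address are placeholders to adapt:

```
#!/bin/bash
# Sketch: list folders older than 10 days, mail the list, then delete them.
BASE_PATH="/var/log/app"
MESSAGE="/tmp/folder.out"

# Collect top-level folders whose modification time is older than 10 days.
find "$BASE_PATH" -maxdepth 1 -type d -mtime +10 > "$MESSAGE"

if [ -s "$MESSAGE" ]; then
    # Mail the list of folders about to be removed, then remove them.
    mail -s "Old folders deleted on $(hostname)" admin@example.com < "$MESSAGE"
    xargs rm -rf < "$MESSAGE"
fi

rm -f "$MESSAGE"
```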

Set an executable permission to `delete-old-folders.sh`.

```
# chmod +x /opt/script/delete-old-folders.sh
@@ -109,15 +101,13 @@ Oct 15 /var/log/app/app_log.15

### 2) Bash script to delete files older than "X" days in Linux

We have a folder named `/var/log/apache/` that contains 15 days of logs, and we are going to delete the files older than 10 days.

The following articles are related to this topic, so you may be interested in reading them:

* [How to find and delete files older than "X" days and "X" hours in Linux?][6]
* [How to find recently modified files/folders in Linux][7]
* [How to automatically delete or clean up the /tmp folder contents in Linux?][8]
```
# ls -lh /var/log/apache/
@@ -141,7 +131,7 @@ Oct 15 /var/log/app/app_log.15

This script will delete the files older than 10 days and send the file list by mail.

You can change the value of `-mtime X` depending on your requirement. Also, replace your email address instead of ours.

```
# /opt/script/delete-old-files.sh
@@ -166,7 +156,7 @@ rm $MESSAGE /tmp/file.out
fi
```

Set an executable permission to `delete-old-files.sh`.

```
# chmod +x /opt/script/delete-old-files.sh
@@ -199,7 +189,7 @@ via: https://www.2daygeek.com/bash-script-to-delete-files-folders-older-than-x-d

Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
@@ -0,0 +1,79 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11495-1.html)
[#]: subject: (Linux sudo flaw can lead to unauthorized privileges)
[#]: via: (https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux sudo flaw can lead to unauthorized privileges
======

![](https://img.linux.net.cn/data/attachment/album/201910/23/173934huyi6siys2u33w9z.png)

> Exploiting a newly discovered sudo flaw in Linux can enable certain users to run commands as root, despite restrictions against it.

A serious flaw was recently discovered in the [sudo][1] command that, if exploited, can allow ordinary users to run commands as root, even when they have been explicitly prohibited from doing so in the `/etc/sudoers` file.

Updating `sudo` to version 1.8.28 should fix the problem, so Linux administrators are advised to do so as soon as possible.

How the flaw can be exploited depends on the specific privileges granted in `/etc/sudoers`. A rule that allows a user to edit files as any user except root, for example, would actually allow that user to edit files as root as well. In a case like this, the flaw could lead to very serious problems.

For a user to be able to exploit the flaw, that **user** needs to be assigned privileges in `/etc/sudoers` that allow them to run commands as some other user, and the flaw is limited to the command privileges assigned in this way.

This problem affects versions prior to 1.8.28. To check your `sudo` version, use this command:
```
$ sudo -V
Sudo version 1.8.27 <===
Sudoers policy plugin version 1.8.27
Sudoers file grammar version 46
Sudoers I/O plugin version 1.8.27
```

The flaw has been assigned [CVE-2019-14287][4] in the CVE database. The risk is that any user who has been designated to run a command as an arbitrary user can escape the restriction and run it as root, even when explicitly barred from running it as root.

The lines below would allow `jdoe` to edit files with `vi` as any user except root (`!root` means "not root"), while `nemo` would have the right to run the `id` command as any user except root:

```
# affected entries on host "dragonfly"
jdoe dragonfly = (ALL, !root) /usr/bin/vi
nemo dragonfly = (ALL, !root) /usr/bin/id
```
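
As an aside, if you maintain entries like these, it is safest to edit them with `visudo`, which syntax-checks the file before saving it; and a quick, hedged way to review which rules apply to your own account is `sudo -l`:

```
$ sudo visudo        # edit /etc/sudoers with syntax checking
$ sudo -l            # list the sudo rules that apply to you
```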

However, because of the flaw, either of these users would be able to bypass the restriction and edit files, or run the `id` command, as root.

An attacker can run a command as root by specifying the user ID `-1` or `4294967295`:

```
sudo -u#-1 id -u
```

or

```
sudo -u#4294967295 id -u
```

A response of `1` indicates that the command ran as root (displaying root's user ID).

Joe Vennix of Apple Information Security found and analyzed the problem.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html

Author: [Sandra Henry-Stocker][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236499/some-tricks-for-using-sudo.html
[4]: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14287
@@ -1,62 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use IoT devices to keep children safe?)
[#]: via: (https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/)
[#]: author: (Andrew Carroll https://opensourceforu.com/author/andrew-carroll/)
How to use IoT devices to keep children safe?
======

[![][1]][2]

_IoT (Internet of Things) devices are transforming our lives rapidly. These devices are everywhere, from our homes to industries. According to some estimates, there will be 10 billion IoT devices by 2020. By 2025, the number of IoT devices will grow to 22 billion. IoT has found its application in a range of fields, including smart homes, industrial processes, agriculture, and even healthcare. With such a wide variety of applications, it is obvious why IoT has become one of the hot topics in recent years._

Several factors have contributed to the explosion of IoT devices in multiple disciplines. These include the availability of low-cost processors and wireless connectivity. Moreover, open-source platforms have enabled the exchange of information that drives innovation in the field of IoT. Compared with conventional application development, IoT has developed exponentially because its resources are open source.

Before explaining how IoT can be used to protect children, a basic understanding of IoT technology is essential.
**What are IoT devices?**

IoT devices are those that can communicate with each other without the involvement of humans. Hence, smartphones and computers are not considered IoT devices by many experts. Moreover, IoT devices must be able to gather data and communicate it to other devices or the cloud for processing.

However, there are some fields where we need to explore the potential of IoT. Children are vulnerable, which makes them an easy target for criminals and others who mean to harm them. Whether in the physical or digital world, children are susceptible to crime. Since parents cannot be physically present to protect their children at all times, the need for monitoring tools is obvious.

In addition to wearable devices for children, there are plenty of parental monitoring applications, such as Xnspy, that monitor children in real time and provide live updates. These tools ensure that the child is safe. While wearable devices ensure that the child is not in physical danger, parental monitoring apps ensure that the child is safe online.

As more children spend time on their smartphones, it is no surprise to see them becoming the primary target for frauds and scammers. Moreover, there is also a chance of children becoming targets of cyberbullying, because pedophilia, catfishing, and other crimes are prevalent on the internet.

Are these solutions enough? We need to find IoT solutions for ensuring our children's safety, both online and offline. How can we keep children secure in these times? We need to come up with new and innovative solutions that keep our children safe. The solutions provided by IoT can help keep our children safe in schools as well as homes.
**The potential of IoT**

The benefits offered by IoT devices are numerous. For one, parents can remotely monitor their children without being too overbearing. Thus, children have the space and freedom to become independent while having a safe environment to do so.

Moreover, parents do not have to worry about their children's safety. IoT devices can provide 24/7 updates about a child. Monitoring apps such as Xnspy go a step further in providing information regarding a child's smartphone activity. As IoT devices become more sophisticated, it is only a matter of time before we have devices with increased battery life. IoT devices such as location trackers can provide accurate details regarding a child's whereabouts, so parents do not have to worry.

While wearable devices are great to have, they are often not enough to ensure a child's safety. Hence, to provide a safe environment for children, we need other methods. Many incidents have shown that schools are just as susceptible to attacks as any other public place. Therefore, schools need to adopt safety measures that keep children and teachers safe. In this, IoT devices can be used to detect threats and take the necessary action to prevent the onslaught of an attack. The threat detection system can include cameras. Once the system detects a threat, it can notify the authorities, including law enforcement agencies and hospitals. Devices such as smart locks can be used to lock down the school, including classrooms, to protect children. In addition to this, parents can be informed about their child's safety and receive immediate alerts about threats. This would require the implementation of wireless technology, such as Wi-Fi and sensors. Thus, schools need to create a budget that is specifically for providing security in the classroom.

Smart homes have made it possible to turn off the lights with a clap, or by telling your home assistant to do so. Likewise, IoT devices can be used in a house to protect children. In a home, IoT devices such as cameras can be used to provide parents with 100% visibility when looking after their children. When parents aren't in the house, cameras and other sensors can be used to detect any suspicious activity. Other devices, such as smart locks connected to these sensors, can lock the doors, windows, and bedrooms to ensure that the kids are safe.
Likewise, there are plenty of IoT solutions that can be introduced to keep kids safe.
**Just as bad as they are good**

Sensors in IoT devices create an enormous amount of data, and the safety of that data is a crucial factor. The data gathered on a child falling into the wrong hands is a risk, hence precautions are required. Any data breached from your IoT devices can be used to determine behavior patterns, so one must invest in safe IoT solutions that do not breach user privacy.

IoT devices often connect to Wi-Fi to transmit data between devices. Unsecured networks that deal with unencrypted data pose certain risks: such networks are easy to eavesdrop on, and hackers can use such network points to hack the system. They can also introduce malware into the system, making it vulnerable. Moreover, cyberattacks on devices and public networks, such as those in schools, can lead to data breaches and theft of private data. Hence, an overall plan for protecting the network and the IoT devices must be in effect when implementing an IoT solution for the protection of children.

The potential of IoT devices to protect children in schools and homes has yet to be fully explored. We need more effort to protect the network that connects IoT devices. Moreover, the data generated by an IoT device can fall into the wrong hands, causing more trouble. So this is one area where IoT security is essential.
--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/

Author: [Andrew Carroll][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensourceforu.com/author/andrew-carroll/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?resize=696%2C507&ssl=1 (Visual Internet of things_EB May18)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?fit=900%2C656&ssl=1
@@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VMware on AWS gets an on-premises option)
[#]: via: (https://www.networkworld.com/article/3446796/vmware-on-aws-gets-an-on-premises-option.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
VMware on AWS gets an on-premises option
======

Amazon Relational Database Service on VMware automates database provisioning for customers running VMware vSphere 6.5 or later, and it supports Microsoft SQL Server, PostgreSQL, and MySQL.

VMware has taken another step to integrate its virtual kingdom with Amazon Web Services' world with an [on-premises service][1] that will let customers automate database provisioning and management.

The package, [Amazon Relational Database Service][2] (RDS) on VMware, is available now for customers running VMware vSphere 6.5 or later and supports Microsoft SQL Server, PostgreSQL, and MySQL. Other databases will be supported in the future, the companies said.

The RDS lets customers run native RDS database instances on a vSphere platform and manage those instances from the AWS Management Console in the cloud. It automates database provisioning, operating-system and database patching, backups, point-in-time restore and compute scaling, as well as database-instance health management, VMware said.
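
Since the service exposes the familiar RDS API surface, provisioning looks much like it does for cloud-hosted RDS. The sketch below uses the standard AWS CLI `create-db-instance` call with placeholder names and sizes; RDS on VMware additionally involves on-premises placement options that are not shown here:

```
# Hypothetical example: provision a MySQL instance through the RDS API
aws rds create-db-instance \
    --db-instance-identifier onprem-mysql-1 \
    --db-instance-class db.m5.large \
    --engine mysql \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password 'REPLACE_ME'
```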

With the service, customers such as software developers and database administrators get native access to the Amazon Relational Database Service using their familiar AWS Management Console, CLI, and RDS APIs, Chris Wolf, vice president and CTO, global field and industry at VMware, wrote in a [blog][6] about the service. "Operations teams can quickly stand up an RDS instance anywhere they run vSphere, and manage it using all of their existing tools and processes."

Wolf said the service should greatly simplify managing databases linked to its flagship vSphere system.

Managing databases on vSphere or natively has always been a tedious exercise that steals the focus of highly skilled database administrators, Wolf stated. "VMware customers will now be able to expand the benefits of automation and standardization of their database workloads inside of vSphere and focus more of their time and energy on improving their applications for their customers."

The RDS is just part of the enterprise data center/cloud integration work VMware and AWS have been up to in the past year.

In August, [VMware said it added VMware HCX][7] capabilities to enable push-button migration and interconnectivity between VMware Cloud on AWS Software-Defined Data Centers running in different AWS Regions. It has also added new Elastic vSAN support to bolster storage scaling.

Once applications are migrated to the cloud, customers can extend their capabilities through the integration of native AWS services. In the future, through technology such as Bitfusion and partnerships with other vendors such as NVIDIA, customers will be able to enrich existing applications and power new enterprise applications.

VMware and NVIDIA also announced their intent to deliver accelerated GPU services for VMware Cloud on AWS. These services will let customers migrate VMware vSphere-based applications and containers to the cloud, unchanged, where they can be modernized to take advantage of high-performance computing, machine learning, data analytics and video-processing applications, VMware said.

And last November, [AWS tied in VMware][8] to its on-premises Outposts development, which comes in two versions. The first, VMware Cloud on AWS Outposts, lets customers use the same VMware control plane and APIs they currently deploy. The other is an AWS-native variant that lets customers use the same APIs and control plane they use to run in the AWS cloud, but on premises, according to AWS.

Outposts can be upgraded with the latest hardware and next-generation instances to run all native AWS and VMware applications, [AWS stated][9]. A second version, VMware Cloud on AWS Outposts, lets customers use a VMware control plane and APIs to run the hybrid environment.

The idea with Outposts is that customers can use the same programming interface, same APIs, same console and CLI they use on the AWS cloud for on-premises applications, develop and maintain a single code base, and use the same deployment tools in the AWS cloud and on premises, AWS wrote.

VMware isn't the only vendor cozying up to AWS. Cisco has done a variety of integration work with the cloud service provider as well. In [April, Cisco released Cloud ACI for AWS][10] to let users configure inter-site connectivity, define policies and monitor the health of network infrastructure across hybrid environments, Cisco said. The AWS service utilizes the Cisco Cloud APIC (Application Policy Infrastructure Controller) to provide connectivity, policy translation and enhanced visibility of workloads in the public cloud, Cisco said.

"This solution brings a suite of capabilities to extend your on-premises data center into true multi-cloud architectures, helping to drive policy and operational consistency, independent of where your applications or data reside. [It] uses the native AWS constructs for policy translation and gives end-to-end visibility into the customer's multi-cloud workloads and connectivity," Cisco said.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3446796/vmware-on-aws-gets-an-on-premises-option.html

Author: [Michael Cooney][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://aws.amazon.com/blogs/aws/now-available-amazon-relational-database-service-rds-on-vmware/
[2]: https://blogs.vmware.com/vsphere/2019/10/how-amazon-rds-on-vmware-works.html
[6]: https://cloud.vmware.com/community/2019/10/16/announcing-general-availability-amazon-rds-vmware/
[7]: https://www.networkworld.com/article/3434397/vmware-fortifies-its-hybrid-cloud-portfolio-with-management-automation-aws-and-dell-offerings.html
[8]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html
[9]: https://aws.amazon.com/outposts/
[10]: https://www.networkworld.com/article/3388679/cisco-taps-into-aws-for-data-center-cloud-applications.html
@@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Enterprises find new uses for mainframes: blockchain and containerized apps)
[#]: via: (https://www.networkworld.com/article/3446140/enterprises-find-a-new-use-for-mainframes-blockchain-and-containerized-apps.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Enterprises find new uses for mainframes: blockchain and containerized apps
======

Blockchain and containerized microservices can benefit from the mainframe's integrated security and massive parallelization capabilities.

News flash: Mainframes still aren't dead.

On the contrary, mainframe use is increasing, and not to run COBOL, either. Mainframes are being eyed for modern technologies, including blockchain and containers.

A survey of 153 IT decision makers found that 50% of organizations will continue with the mainframe and increase its use over the next two years, while just 5% plan to decrease or remove mainframe activity. The survey was conducted by Forrester Research and commissioned by Ensono, a hybrid IT services provider, and Wipro Limited, a global IT consulting services company.

That kind of commitment to the mainframe is a bit of a surprise, given the trend to reduce or eliminate the on-premises data center footprint and move to the cloud. However, enterprises are now taking a hybrid approach to their infrastructure, migrating some applications to the cloud while keeping the most business-critical applications on premises and on mainframes.

Forrester's research found mainframes continue to be considered a critical piece of infrastructure for the modern business, and not solely to run old technologies. Of course, traditional enterprise applications and workloads remain firmly on the mainframe, with 48% of ERP apps, 45% of finance and accounting apps, 44% of HR management apps, and 43% of ECM apps staying on mainframes.

But that's not all. Among survey respondents, 25% said that mobile sites and applications were being put onto the mainframe, and 27% said they're running new blockchain initiatives and containerized applications there. Blockchain and containerized applications benefit from the integrated security and massive parallelization inherent in a mainframe, Forrester said in its report.

"We believe this research challenges popular opinion that mainframe is for legacy," said Brian Klingbeil, executive vice president of technology and strategy at Ensono, in a statement. "Mainframe modernization is giving enterprises not only the ability to continue to run their legacy applications, but also allows them to embrace new technologies such as containerized microservices, blockchain and mobile applications."

Wipro's Kiran Desai, senior vice president and global head of cloud and infrastructure services, added that enterprises should adopt two strategies to take full advantage of mainframes. The first is to refactor applications to take advantage of cloud, while the second is to adopt DevOps to modernize mainframes.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3446140/enterprises-find-a-new-use-for-mainframes-blockchain-and-containerized-apps.html

Author: [Andy Patrizio][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
@@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tokalabs Software Defined Labs automates configuration of lab test-beds)
[#]: via: (https://www.networkworld.com/article/3446816/tokalabs-software-defined-labs-automates-configuration-of-lab-test-beds.html)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)
Tokalabs Software Defined Labs automates configuration of lab test-beds
======

The primary challenge of running a test lab is the amount of time it takes to provision the test beds within the lab. This software-defined lab platform automates the setup and configuration process so that tests can be accelerated.

Network environments have become so complex that companies such as systems integrators, equipment manufacturers and enterprise organizations feel compelled to test their configurations and equipment in lab environments before deployment. Performance test labs are used extensively for quality, proof-of-concept, customer support, and technical sales initiatives. Labs are the perfect place to see how well something performs before it's put into a production environment.

The primary challenge of running a test lab is the amount of time it takes to provision the test environments. A network lab infrastructure might include switches, routers, servers, [virtual machines][1] running on various server clusters, security services, cloud resources, software and so on. It takes considerable time to wire the configurations, physically build the desired test beds, log in to each individual device and load the proper software configurations. Quite often, lab staffers spend more time on setup than they do on conducting actual tests.

This is a problem that the networking company Allied Telesis was having in building test beds for its own development engineers. The company developed an application for internal use that would ease the setup and reconfiguration problem. The equipment could be physically cabled once and then configured and controlled centrally through software. The application worked so well that Allied Telesis spun it off for others to use, and this is the origin of [Tokalabs Software Defined Labs][3] (SDL) technology.

Tokalabs provides a platform that enables engineers to manage a lab-network infrastructure and create sandboxes or topologies that can be used for R&D, product development and quality testing, customer support, sales demos, competitive benchmarking, driving proof-of-concept efforts, etc. There's an automation sequencer built into the platform that allows users to automate test cases, sales demos, troubleshooting methods, image upgrades and the like.

The Tokalabs SDL controller is a virtual appliance that can be imported into any virtualization environment. Once installed, the customer can access the controller's UI using a web browser. The controller has an auto-discovery mechanism that inventories everything within a specified range of IP addresses, including cloud resources.

Tokalabs probes the addresses to figure out what ports are open on them, what management types are supported, and the vendor information of the devices. This results in an inventory of hundreds of devices that are discovered by the SDL controller.
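
To get a feel for what that kind of discovery pass involves, here is a deliberately crude stand-in using nmap; it is illustrative only, not Tokalabs' actual mechanism, and the subnet and port list are placeholders:

```
# Ping-scan a lab subnet for live hosts, then probe common management ports
nmap -sn 10.0.42.0/24 -oG - | awk '/Up$/{print $2}' > lab-hosts.txt
nmap -p 22,23,80,443,830 -iL lab-hosts.txt -oG lab-inventory.txt
```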

On the hardware side, lab engineers only need to cable and configure their lab devices once, which eliminates the cumbersome setup and tear-down processes. These devices are abstracted and managed centrally through the SDL controller, which maintains a centralized networking fabric. Lab engineers have full visibility of every physical and virtual device and every public and [private cloud][5] instance within their domain.

Engineers can use the Tokalabs SDL controller to dynamically create and reserve test-bed resources and then save them as a template for future use. Engineers can also automate and schedule test executions, and the controller will release the resources once the tests are done. The controller's codeless automation feature means users don't need to know how to write scripts to orchestrate and automate a pretty comprehensive configuration and test scenario. They can use the controller to automate sequences without writing code or instruct the controller to execute external scripts developed by an engineer.

The automation is helpful for setting up a specific configuration quickly. For example, a customer-support engineer might need to replicate a scenario that one of its customers has in order to troubleshoot an issue. Using the controller's automation feature, devices can be configured and loaded with specific firmware quickly to ease the setup process.

Tokalabs logs everything that transpires through its controller, so a lab administrator has oversight into how the equipment is being used or what types of tests are being created and executed. This helps with resource capacity planning, to ensure that there is enough equipment without having devices sit idle for too long.

One leader in cybersecurity became an early adopter of Tokalabs. This vendor has a test lab to conduct comparative benchmark tests with competitors' products in order to close large deals and to confirm their product strengths and performance numbers for marketing materials.

Prior to using the Tokalabs SDL controller, engineering teams would physically cable the topologies, configure the devices and execute various benchmark tests. Then they would tear down that configuration and start all over again for every set of devices and firmware revisions.

Given that this is a multi-billion-dollar equipment manufacturer, there are a lot of new product releases and updates to existing products. That means there's a heck of a lot of work for the engineers in the lab to test each product and compare it to competitors' offerings. They can't really afford the time spent configuring rather than testing, so they turned to Tokalabs' technology to manage the lab infrastructure and to automate the configurations and scheduling of test executions. They chose this solution largely for the ease of setup and use.

Now, each engineer can create hundreds of reusable templates, thus eliminating the repetitive work of creating test beds, and also automate test scripts using the Tokalabs automation sequencer. Additionally, all their existing test scripts are available to use through the SDL controller. This has helped the team reduce its backlog and keep up with the product release cycles.

Beyond this use case for comparative benchmark tests, some of the other uses for Tokalabs SDL include:

* Creating a portal for others to use lab resources; for example, for training purposes or for customers to test network environments prior to purchasing them
* Doing sales demonstrations and customer PoCs in order to showcase a feature, an application, or even an entire configuration
* Automating the bring-up of virtualized environments

Tokalabs claims to work closely with its customers to tailor the Software Defined Labs platform to specific use cases and customer needs.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3446816/tokalabs-software-defined-labs-automates-configuration-of-lab-test-beds.html

Author: [Linda Musthaler][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3234795/what-is-virtualization-definition-virtual-machine-hypervisor.html
[3]: https://tokalabs.com/
[5]: https://www.networkworld.com/article/2159885/cloud-computing-gartner-5-things-a-private-cloud-is-not.html
@@ -0,0 +1,80 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (“Making software liquid: a DevOps company founder’s journey from OSS community to billion-dollar darling”)
[#]: via: (https://opensourceforu.com/2019/10/making-software-liquid-a-devops-company-founders-journey-from-oss-community-to-billion-dollar-darling/)
[#]: author: (Editor Team https://opensourceforu.com/author/editor/)
“Making software liquid: a DevOps company founder’s journey from OSS community to billion-dollar darling”
======

![][1]

_JFrog claims to make software development easier and faster, and to enable firms to reduce their development costs. To understand the basis of this promise, **Rahul Chopra, editorial director, EFY Group**, spoke to **Fred Simon, co-founder and chief architect, JFrog**, and here's what he discovered…_

**Q. How would you explain JFrog's solutions to a senior business decision maker?**

**A.** It's a fair question, as we have been – and continue to be – a developer-driven company that makes tools and solutions for developers. Originally, we were a team of engineers working on Java and had the task of solving some package management pain during the J2EE (now Java EE) transformation. So, it was all hyper-technical and hard to explain to non-engineers.

Today, it's a bit easier. As the world moves towards cloud-native applications as the default and every company is now a software company, the benefits of software management and delivery are now mission-critical. We see that as this industry maturity has taken place, software conversations are now management-level conversations. So, it's now a very simple proposition: You are getting business demands for faster, smarter, more secure software. And you must "release software fast, or you die," as we like to say. Competition is fierce, so if you can provide value to the end user faster and smarter without downtime – what we call "Liquid Software" – you have a competitive edge. JFrog helps you achieve these goals faster as a DevOps organization.

**Q. How does the explanation change when explaining it to a senior techie like a CTO?**

**A.** At this level, it is even simpler. You have historically released software once a quarter, or even once a year or more. You know that this demand has changed with the cloud, microservices and agility movements. We give you the ability to get a new version rapidly, control where the version was validated and how it ends up in runtime as quickly as possible. We've been doing this for more than 10 years at scale. When we started, our customers managed a gigabyte, then later a terabyte, and today we have customers with petabytes of binary software creating dozens or hundreds of builds a day.

**Q. You mentioned the word 'control'. But a lot of developers do not like control. So, how would you explain JFrog's promise to them? What would your pitch be?**

**A.** The word "control" to a developer's ear can sound like something being imposed on them. It's "someone else's" control. But the developer roots of JFrog demand that we provide speed and agility to developers, giving them as much control over their environments as possible without sacrificing their speed.

**Q. According to you, the drivers within the company are the developers, who then take it to DevOps, then to the CTO, and the CEO signs the cheque? Is that how it works?**

**A.** Yes. JFrog to date has only had an inside sales force with no outbound sales. The first time we ever talked to a company was because the developer said that they needed a professional version of JFrog tools. Then we started the discussion with the managers and so on up the chain. Developers – as some like our friends at RedMonk have said – are still the kingmakers.

**Q. Can you explain the term 'Liquid Software' that's been mentioned quite a few times on your website?**

**A.** The concept of Liquid Software is to enable continuous, secure, seamless updates of every piece of software that is running, without any downtime. This is very different from the traditional build, package, and distribute-once-a-year model. The old way doesn't scale. The new world of Liquid Software makes the update process nearly seamless from code to end device.

**Q. Has the shift to "everything-as-a-service" become the main driver for this concept of Liquid Software?**

**A.** Yes. People are not making software as a service, they are delivering services. They are using our Liquid Software pipeline delivery process for every kind of delivery. It's important to note that "as a service" isn't just for the cloud, but is in fact the standard for every type of software delivery.

**Q. How is JFrog connected with open source, and how does it shift to an enterprise paid version? What is the licensing model at JFrog?**

**A.** We have an open-source version licensed under the AGPL. This open-source version allows you to do many Java-related works, and is sometimes where developers start to "kick the tires." There is also an edition specifically for C/C++ developers utilizing the Conan framework. Since most development shops do more than one type of development, our commercial versions – starting with a Pro subscription – universally support all package types. From there, there are other plans available that include HA, security and compliance tools, distribution and more. We have also recently added JFrog Pipelines for full automation of your pipelines across the organization. So, you can choose what makes the most sense for you, and JFrog can grow alongside your needs as you mature your DevOps and DevSecOps infrastructure.

**Q. Do you have a different set of solutions for developers depending on whether they are developing for the web, mobile, or IoT?**

**A.** No, we are universal. You don't need to re-install different things for different technologies. So, if you are a Ruby developer or a Python developer, or if you have Docker, Debian, Microsoft or NuGet, then you get everything in one single tool. Pipelines are so unique to each organization that we need to support all of it.

**Q. Are there any specific solutions or capabilities that you have developed for IoT?**

**A.** Yes. Quite early on we worked with customers on an IoT offering. We provided an IoT-specific solution, which is an extension of Debian for IoT, and we also have Conan and Yocto. Controlling and increasing the speed of delivery is something that is in the early stages of the IoT environment. So we are helping in this integration and providing tools that enable different technologies on your JFrog platform that are tailored to an IoT environment.

**Q. Overall, how important is India, both as a development and tech-support centre for JFrog globally, as well as a market for JFrog?**

**A.** JFrog opened its first office in India more than three years ago, with a development office working on JFrog Insight and JFrog Mission Control (which provide pipeline tooling and performance visibility). We purchased an organization called Shippable at the beginning of this year for their technology and their R&D team, who then created the JFrog Pipelines product. They are also located in India, so India has been and is increasingly important from both an R&D and a support perspective. A lot of our senior support force is in India, so we need really good developers working at JFrog to handle the high-tech support volume. We are already at 60 employees in Bangalore and have recently appointed a General Manager. As you know, JFrog is now a company of more than 500 people. We are also growing our marketing and sales teams in India that will help drive the DevOps revolution for Indian customers.
|
||||
|
||||
**Q. Are these more of a global account that have shops in India or are these Indian companies?**
|
||||
|
||||
**A.** Both. We started with the global companies with R&D in India. Today, we have companies throughout India that are directly buying from us.
|
||||
|
||||
**Q. A little bit about your personal journey, when and how did you connect with the open-source world?**
|
||||
|
||||
**A.** I will date myself and say that in 1992, I used to play with Mosaic while I was in University, and I created a web server based on open source web stacks. Gloriously geeky stuff, but it put me in the OSS community right from the beginning. When I was a kid, I used to share code and OSS was the way I learned how to code in the first place. It’s clear to me that OSS is the future for creating innovative software, and I – and JFrog – continue to support and contribute to development communities globally. I look forward to seeing OSS and OSS communities drive the innovations of the future.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensourceforu.com/2019/10/making-software-liquid-a-devops-company-founders-journey-from-oss-community-to-billion-dollar-darling/
|
||||
|
||||
作者:[Editor Team][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensourceforu.com/author/editor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Copy-of-IMG_0219a-_39_-new.jpg?resize=350%2C472&ssl=1
|
@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Gartner: 10 infrastructure trends you need to know)
[#]: via: (https://www.networkworld.com/article/3447397/gartner-10-infrastructure-trends-you-need-to-know.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Gartner: 10 infrastructure trends you need to know
======
Gartner names the most important factors affecting infrastructure and operations
[Daniel Páscoa][1] [(CC0)][2]

ORLANDO – Corporate network infrastructure is only going to get more involved over the next two to three years as automation, network challenges and hybrid cloud become more integral to the enterprise.

Those were some of the main infrastructure trend themes espoused by Gartner vice president and distinguished analyst [David Cappuccio][3] at the research firm’s IT Symposium/XPO here this week.

[[Get regularly scheduled insights by signing up for Network World newsletters.]][4]

Cappuccio noted that Gartner’s look at the top infrastructure and operational trends reflects offshoots of technologies – such as cloud computing, automation and networking advances – that the company’s [analysts have talked][5] about many times before.

Gartner’s “Top Ten Trends Impacting Infrastructure and Operations” list is:

### Automation-strategy rethink

[Automation][7] has been going on at some level for years, Cappuccio said, but the level of complexity as it is developed and deployed further is what’s becoming confusing. The amounts and types of automation need to be managed, and they require a shift to a team development approach, led by an automation architect, that can be standardized across business units, Cappuccio said. What would help? Gartner says that by 2025, more than 90 percent of enterprises will have an automation architect, up from less than 20 percent today.

### Hybrid IT impacts disaster-recovery confidence

Hybrid IT – which includes a mix of data center, SaaS, PaaS, branch offices, [edge computing][8] and security services – makes it hard to promise that enterprise resources will be available or backed up, Cappuccio said. Overly simplistic IT disaster-recovery plans may deliver only partial success. By 2021, the root cause of 90 percent of cloud-based availability issues will be the failure to fully use cloud service providers’ native redundancy capabilities, he said. Enterprises need to leverage their automation investments and other IT tools to refocus how systems are recovered.

### Scaling DevOps agility demands platform rethinking

IT’s role in many companies has almost become that of a product manager for all its different DevOps teams. IT needs to build consistency across the enterprise because it doesn’t want islands of DevOps teams across the company. By 2023, 90 percent of enterprises will fail to scale DevOps initiatives if shared self-service platform approaches are not adopted, Gartner stated.

### Infrastructure – and your data – are everywhere

By 2022, more than 50 percent of enterprise-generated data will be created and processed outside the [data center][9] or cloud, up from less than 10 percent in 2019. Infrastructure is everywhere, Cappuccio said, and every time data is moved, it creates challenges. How does IT manage data-everywhere scenarios? Cappuccio advocated mandating data-driven infrastructure impact assessments at early stages of design, investing in infrastructure tools to manage data wherever it resides, and modernizing existing backup architectures to protect data wherever it resides.

### Overwhelming impact of IoT

The issue here is that most [IoT][10] implementations are not driven by IT, and they typically involve different protocols and vendors that don’t usually deal with an IT organization. In the end, who controls and manages IoT becomes an issue, and that creates security and operational risks. Cappuccio said companies need to engage with business leaders to shape IoT strategies and establish a center of excellence for IoT.

### Distributed cloud

The methods of putting cloud services or cloud-like services on-premises, while letting a vendor manage that cloud, are increasing. Google has Anthos and AWS will soon roll out Outposts, for example, so this environment is going to change a lot in the next two years, Cappuccio said. This is a nascent market, so customers should beware. Enterprises should also be prepared to set boundaries and determine who is responsible for software upgrades, patching and performance.

### Immersive experience

Humans used to learn about and adapt to technology. Today, technology learns and adapts to humans, Cappuccio said. “We have created a world where customers have a serious expectation of perfection. We have designed applications where perfection is the norm.” Such systems are great for mindshare, marketshare and corporate reputation, but as soon as there’s one glitch, that’s all out the window.

### Democratization of IT

Application development is no longer the realm of specialists. There has been a rollout of simpler development tools, like [low-code][11] and [no-code][12] packages, and a focus on bringing new applications to market quickly. That may bring a quicker time-to-market for the business but could be riskier for IT, Cappuccio said. IT leaders perhaps can’t control such rapid development, but they need to understand what’s happening.

### What's next for networking?

There are tons of emerging trends around networking, such as mesh, secure-access service edge, network automation, network-on-demand services, and firewalls as a service. “After decades of focusing on network performance and availability, future network innovation will target operational simplicity, automation, reliability and flexible business models,” Cappuccio said. Enterprises need to automate “everywhere” and balance which technologies are safe versus which are agile, he said.

### Hybrid digital-infrastructure management

The general idea here is that CIOs face the challenge of selecting the right mixture of cloud and traditional IT for the organization. The mix of many different elements, such as edge, [hybrid cloud][13], workflow and management, creates complex infrastructures. Gartner recommends a focus on workflow visualization – utilizing an integrated toolset – and developing a center of excellence to work on the issues, Cappuccio said.

Join the Network World communities on [Facebook][14] and [LinkedIn][15] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3447397/gartner-10-infrastructure-trends-you-need-to-know.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/tjiPN3e45WE
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.linkedin.com/in/davecappuccio/
[4]: https://www.networkworld.com/newsletters/signup.html
[5]: https://www.networkworld.com/article/2160904/gartner--10-critical-it-trends-for-the-next-five-years.html
[7]: https://www.networkworld.com/article/3223189/how-network-automation-can-speed-deployments-and-improve-security.html
[8]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[9]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[10]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[11]: https://www.mendix.com/low-code-guide/
[12]: https://kissflow.com/no-code/
[13]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html
[14]: https://www.facebook.com/NetworkWorld/
[15]: https://www.linkedin.com/company/network-world
@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Disney’s Streaming Service is Having Troubles with Linux)
[#]: via: (https://itsfoss.com/disney-plus-linux/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Disney’s Streaming Service is Having Troubles with Linux
======

You might already be using Amazon Prime Video (it comes free with an [Amazon Prime membership][1]) or [Netflix on your Linux system][2]. Google Chrome supports these streaming services out of the box. You can also [watch Netflix on Firefox in Linux][3], but you have to explicitly enable DRM content.

However, we just learned that Disney’s upcoming streaming service, Disney+, does not work in the same way.

A user, Hans de Goede, revealed this on [LiveJournal][4] from his experience with Disney+ during the testing period. In fact, the upcoming streaming service Disney+ does not support Linux at all, at least for now.

### The trouble with Disney+ and DRM

![][5]

As Hans explains in his [post][4], he subscribed to the streaming service in the testing period because of the availability of Disney+ in the Netherlands.

Hans tested it on Fedora with mainstream browsers like Firefox and Chrome. However, every time, an error was encountered – “**Error Code 83**”.

So, he reached out to Disney support to solve the issue – but interestingly, they weren’t even properly aware of the issue, as it took them a week to give him a response.

Here’s how he puts his experience:

> So I mailed the Disney helpdesk about this, explaining how Linux works fine with Netflix, AmazonPrime video and even the web-app from my local cable provider. They promised to get back to me in 24 hours, they eventually got back to me in about a week. They wrote: “We are familiar with Error 83. This often happens if you want to play Disney + via the web browser or certain devices. Our IT department working hard to solve this. In the meantime, I want to advise you to watch Disney + via the app on a phone or tablet. If this error code still occurs in a few days, you can check the help center …” this was on September 23rd.

They just blatantly advised him to use his phone/tablet to access the streaming service instead. That’s genius!

### Disney should reconsider their DRM implementation

What is DRM?

Digital Rights Management ([DRM][6]) technologies attempt to control what you can and can’t do with the media and hardware you’ve purchased.

Even though Disney wants to make sure that its content remains protected from pirates (which rarely makes much difference anyway), this approach creates a problem with support for multiple platforms.

How on earth do you expect more people to subscribe to your streaming service when you do not even support platforms like Linux? So many media center devices run on Linux. This will be a big setback if Disney continues like this.

To shed some light on the issue, a user on [tweakers.net][7] found out that it is a [Widevine][8] error. Here, it generally means that your device is incompatible with the security level of the DRM implemented.

It turns out that it isn’t just limited to Linux – a lot of users are encountering the same error on other platforms as well.

In addition to this wave of issues, the Widevine error also points to the fact that Disney+ may not even work on Chromebooks, some Android smartphones, and Linux desktops in general.

Seriously, Disney?

### Go easy, Disney!

A common DRM (lower-level security) implementation for Disney+ would make it accessible on every platform, including Linux systems.

Disney+ might want to re-think its DRM implementation if it wants to compete with streaming platforms like Netflix and Amazon Prime Video.

Personally, I would prefer to stay with Netflix if Disney does not care about supporting multiple platforms.

It is not really about supporting “Linux” specifically, but about conveniently making the streaming service available on more platforms, which would help justify its subscription fee.

What do you think about this? Let me know your thoughts in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/disney-plus-linux/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/tryprimefree?tag=chmod7mediate-20
[2]: https://itsfoss.com/watch-netflix-in-ubuntu-linux/
[3]: https://itsfoss.com/netflix-firefox-linux/
[4]: https://hansdegoede.livejournal.com/22338.html
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/10/disney-plus-linux.jpg?resize=800%2C450&ssl=1
[6]: https://www.eff.org/issues/drm
[7]: https://tweakers.net/nieuws/157224/disney+-start-met-gratis-proefperiode-van-twee-maanden-in-nederland.html?showReaction=13428408#r_13428408
[8]: https://www.widevine.com/
@ -1,74 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (DevSecOps pipelines and tools: What you need to know)
[#]: via: (https://opensource.com/article/19/10/devsecops-pipeline-and-tools)
[#]: author: (Sagar Nangare https://opensource.com/users/sagarnangare)

DevSecOps pipelines and tools: What you need to know
======
DevSecOps evolves DevOps to ensure security remains an essential part of
the process.
![An intersection of pipes.][1]

DevOps is well understood in the IT world by now, but it's not flawless. Imagine you have implemented all of the DevOps engineering practices in modern application delivery for a project. You've reached the end of the development pipeline—but a penetration testing team (internal or external) has detected a security flaw and come up with a report. Now you have to re-initiate all of your processes and ask developers to fix the flaw.

This is not terribly tedious in a DevOps-based software development lifecycle (SDLC) system—but it does consume time and affects the delivery schedule. If security were integrated from the start of the SDLC, you might have tracked down the glitch and eliminated it on the go. But pushing security to the end of the development pipeline, as in the above scenario, leads to a longer development lifecycle.

This is the reason for introducing DevSecOps, which consolidates the overall software delivery cycle in an automated way.

In modern DevOps methodologies, where containers are widely used by organizations to host applications, we see greater use of [Kubernetes][2] and [Istio][3]. However, these tools have their own vulnerabilities. For example, the Cloud Native Computing Foundation (CNCF) recently completed a [Kubernetes security audit][4] that identified several issues. All tools used in the DevOps pipeline need to undergo security checks while running in the pipeline, and DevSecOps pushes admins to monitor the tools' repositories for upgrades and patches.

### What is DevSecOps?

Like DevOps, DevSecOps is a mindset or a culture that developers and IT operations teams follow while developing and deploying software applications. It integrates active and automated security audits and penetration testing into agile application development.

To utilize [DevSecOps][5], you need to:

  * Introduce the concept of security right from the start of the SDLC to minimize vulnerabilities in software code.
  * Ensure everyone (including developers and IT operations teams) shares responsibility for following security practices in their tasks.
  * Integrate security controls, tools, and processes at the start of the DevOps workflow. These will enable automated security checks at each stage of software delivery.

DevOps has always been about including security—as well as quality assurance (QA), database administration, and everyone else—in the dev and release process. However, DevSecOps is an evolution of that process to ensure security is never forgotten as an essential part of the process.

### Understanding the DevSecOps pipeline

There are different stages in a typical DevOps pipeline; a typical SDLC process includes phases like Plan, Code, Build, Test, Release, and Deploy. In DevSecOps, specific security checks are applied in each phase (a minimal example of one such check follows this list).

  * **Plan:** Execute security analysis and create a test plan to determine scenarios for where, how, and when testing will be done.
  * **Code:** Deploy linting tools and Git controls to secure passwords and API keys.
  * **Build:** While building code for execution, incorporate static application security testing (SAST) tools to track down flaws in code before deploying to production. These tools are specific to programming languages.
  * **Test:** Use dynamic application security testing (DAST) tools to test your application while in runtime. These tools can detect errors associated with user authentication, authorization, SQL injection, and API-related endpoints.
  * **Release:** Just before releasing the application, employ security analysis tools to perform thorough penetration testing and vulnerability scanning.
  * **Deploy:** After completing the above tests in runtime, send a secure build to production for final deployment.
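
As a concrete illustration of the **Code** stage, here is a minimal sketch of a Git pre-commit hook that rejects commits containing obvious credential patterns. The regular expression and messages are illustrative assumptions, not a complete secret-detection rule set; real pipelines typically rely on dedicated scanners.

```
#!/bin/sh
# Hypothetical .git/hooks/pre-commit sketch: abort the commit if staged
# changes appear to contain hard-coded credentials. The pattern below is
# an illustrative assumption, not an exhaustive rule set.
PATTERN='(password|passwd|secret|api[_-]?key|BEGIN RSA PRIVATE KEY)'

# Scan only lines added by this commit (diff lines starting with "+").
if git diff --cached --unified=0 | grep '^+' | grep -iE "$PATTERN" ; then
    echo "Possible secret detected in staged changes; commit aborted." >&2
    echo "Remove the credential (or move it to a vault) and try again." >&2
    exit 1
fi
exit 0
```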

### DevSecOps tools

Tools are available for every phase of the SDLC. Some are commercial products, but most are open source. In my next article, I will talk more about the tools to use in different stages of the pipeline.

DevSecOps will play a more crucial role as enterprise security threats built on modern IT infrastructure continue to grow in complexity. The DevSecOps pipeline will need to improve incrementally over time, however, rather than relying on implementing all security changes at once; this reduces the risk of backtracking or of failed application delivery.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/devsecops-pipeline-and-tools

作者:[Sagar Nangare][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/sagarnangare
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://opensource.com/article/18/9/what-istio
[4]: https://www.cncf.io/blog/2019/08/06/open-sourcing-the-kubernetes-security-audit/
[5]: https://resources.whitesourcesoftware.com/blog-whitesource/devsecops
@ -1,81 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux sudo flaw can lead to unauthorized privileges)
[#]: via: (https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Linux sudo flaw can lead to unauthorized privileges
======
Exploiting a newly discovered sudo flaw in Linux can enable certain users to run commands as root despite restrictions against it.
Thinkstock

A newly discovered and serious flaw in the [**sudo**][1] command can, if exploited, enable users to run commands as root in spite of the fact that the syntax of the **/etc/sudoers** file specifically disallows them from doing so.

Updating **sudo** to version 1.8.28 should address the problem, and Linux admins are encouraged to do so as soon as possible.

[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]

How the flaw might be exploited depends on the specific privileges granted in the **/etc/sudoers** file. A rule that allows a user to edit files as any user except root, for example, would actually allow that user to edit files as root as well. In this case, the flaw could lead to very serious problems.

To exploit the flaw, a user needs to be assigned privileges in the **/etc/sudoers** file that allow that user to run commands as some other users; the flaw is limited to the command privileges that are assigned in this way.

This problem affects versions prior to 1.8.28. To check your sudo version, use this command:

```
$ sudo -V
Sudo version 1.8.27   <===
Sudoers policy plugin version 1.8.27
Sudoers file grammar version 46
Sudoers I/O plugin version 1.8.27
```

The vulnerability has been assigned [CVE-2019-14287][4] in the **Common Vulnerabilities and Exposures** database. The risk is that any user who has been given the ability to run even a single command as an arbitrary user may be able to escape the restrictions and run that command as root – even if the specified privilege is written to disallow running the command as root.

The lines below are meant to give the user "jdoe" the ability to edit files with **vi** as any user except root (**!root** means "not root") and to give the user "nemo" the right to run the **id** command as any user except root:

```
# affected entries on host "dragonfly"
jdoe dragonfly = (ALL, !root) /usr/bin/vi
nemo dragonfly = (ALL, !root) /usr/bin/id
```

However, given the flaw, either of these users would be able to circumvent the restriction and edit files or run the **id** command as root as well.

The flaw can be exploited by an attacker to run commands as root by specifying the user ID "-1" or "4294967295."
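
For example, with the "nemo" entry above in place, the restriction can be bypassed as follows (a minimal reconstruction of the published proof of concept for CVE-2019-14287):

```
$ sudo -u#-1 id -u
0
```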

The response of "0" demonstrates that the command is being run as root (0 is root's user ID).

Joe Vennix from Apple Information Security both found and analyzed the problem.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3446036/linux-sudo-flaw-can-lead-to-unauthorized-privileges.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236499/some-tricks-for-using-sudo.html
[2]: https://www.networkworld.com/newsletters/signup.html
[4]: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14287
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,320 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to build a Flatpak)
[#]: via: (https://opensource.com/article/19/10/how-build-flatpak-packaging)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

How to build a Flatpak
======
A universal packaging format with a decentralized means of distribution.
Plus, portability and sandboxing.
![][1]

A long time ago, a Linux distribution shipped an operating system along with _all_ the software available for it. There was no concept of “third party” software because everything was a part of the distribution. Applications weren’t so much installed as they were enabled from a great big software repository that you got on one of the many floppy disks or, later, CDs you purchased or downloaded.

This evolved into something even more convenient as the internet became ubiquitous, and the concept of what is now the “app store” was born. Of course, Linux distributions tend to call this a _software repository_ or just _repo_ for short, with some variations for “branding”, such as _Ubuntu Software Center_ or, with typical GNOME minimalism, simply _Software_.

This model worked well back when open source software was still a novelty and the number of open source applications was a countable rather than a _theoretical_ number. In today’s world of GitLab and GitHub and Bitbucket (and [many][2] [many][3] more), it’s hardly possible to count the number of open source projects, much less package them up in a repository. No Linux distribution today, even [Debian][4] and its formidable group of package maintainers, can claim or hope to have a package for every installable open source project.

Of course, a Linux package doesn’t have to be in a repository to be installable. Any programmer can package up their software and distribute it from their own website. However, because repositories are seen as an integral part of a distribution, there isn’t a universal packaging format, meaning that a programmer must decide whether to release a `.deb` or `.rpm`, or an AUR build script, or a Nix or Guix package, or a Homebrew script, or just a mostly-generic `.tgz` archive for `/opt`. It’s overwhelming for a developer who lives and breathes Linux every day, much less for a developer just trying to make a best-effort attempt at supporting a free and open source target.

### Why Flatpak?

The Flatpak project provides a universal packaging format along with a decentralized means of distribution, plus portability and sandboxing.

  * **Universal:** Install the Flatpak system, and you can run Flatpaks, regardless of your distribution. No daemon or systemd required. The same Flatpak runs on Fedora, Ubuntu, Mageia, Pop OS, Arch, Slackware, and more.
  * **Decentralized:** Developers can create and sign their own Flatpak packages and repositories. There’s no repository to petition in order to get a package included.
  * **Portable:** If you have a Flatpak on your system and want to hand it to a friend so they can run the same application, you can export the Flatpak to a USB thumbdrive (see the example after this list).
  * **Sandboxed:** Flatpaks use a container-based model, allowing multiple versions of libraries and applications to exist on one system. Yes, you can easily install the latest version of an app to test out while maintaining the old version you rely on.
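
For instance, exporting an installed Flatpak to a thumbdrive might look something like this sketch; the mount path is a hypothetical example, and this assumes the app's remote was configured with a collection ID, which sideloading requires:

```
$ flatpak create-usb /run/media/seth/usb-drive org.gnu.Hello
```

The target system can then install the application from the drive, even offline.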

### Building a Flatpak

To build a Flatpak, you must first install Flatpak (the subsystem that enables you to use Flatpak packages) and the flatpak-builder application.

On Fedora, CentOS, RHEL, and similar:

```
$ sudo dnf install flatpak flatpak-builder
```

On Debian, Ubuntu, and similar:

```
$ sudo apt install flatpak flatpak-builder
```

You must also install the development tools required to build the application you are packaging. By nature of developing the application you’re now packaging, you may already have a development environment installed, so you might not notice that these components are required, but should you start building Flatpaks with Jenkins or from inside containers, then you must ensure that your build tools are a part of your toolchain.

For the first example build, this article assumes that your application uses [GNU Autotools][5], but Flatpak itself supports other build systems, such as `cmake`, `cmake-ninja`, `meson`, and `ant`, as well as custom commands (a `simple` build system, in Flatpak terminology, but by no means does this imply that the build itself is actually simple).

#### Project directory

Unlike the strict RPM build infrastructure, Flatpak doesn’t impose a project directory structure. I prefer to create project directories based on the **dist** packages of software, but there’s no technical reason you can’t instead integrate your Flatpak build process with your source directory. It is technically easier to build a Flatpak from your **dist** package, though, and it’s an easier demo too, so that’s the model this article uses. Set up a project directory for GNU Hello, serving as your first Flatpak:

```
$ mkdir hello_flatpak
$ mkdir hello_flatpak/src
```

Download your distributable source. For this example, the source code is located at `https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz`.

```
$ cd hello_flatpak
$ wget -P src https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
```

#### Manifest

A Flatpak is defined by a manifest, which describes how to build and install the application it is delivering. A manifest is atomic and reproducible. A Flatpak exists in a “sandbox” container, though, so the manifest is based on a mostly empty environment with a root directory called `/app`.

The first two attributes are the ID of the application you are packaging and the command provided by it. The application ID must be unique to the application you are packaging. The canonical way of formulating a unique ID is to use a triplet value consisting of the entity responsible for the code followed by the name of the application, such as `org.gnu.Hello`. The command provided by the application is whatever you type into a terminal to run the application. This does not imply that the application is intended to be run from a terminal instead of a `.desktop` file in the Activities or Applications menu.

In a file called `org.gnu.Hello.yaml`, enter this text:

```
id: org.gnu.Hello
command: hello
```

A manifest can be written in [YAML][6] or in JSON. This article uses YAML.

Next, you must define each “module” delivered by this Flatpak package. You can think of a module as a dependency or a component. For GNU Hello, there is only one module: GNU Hello. More complex applications may require a specific library or another application entirely.

```
modules:
  - name: hello
    buildsystem: autotools
    no-autogen: true
    sources:
      - type: archive
        path: src/hello-2.10.tar.gz
```

The `buildsystem` value identifies how Flatpak must build the module. Each module can use its own build system, so one Flatpak can have several build systems defined.

The `no-autogen` value tells Flatpak not to run the setup commands for `autotools`, which aren’t necessary because the GNU Hello source code is the product of `make dist`. If the code you’re building isn’t in an easily buildable form, then you may need to install `autogen` and `autoconf` to prepare the source for `autotools`. This option doesn’t apply at all to projects that don’t use `autotools`.

The `type` value tells Flatpak that the source code is in an archive, which triggers the requisite unarchival tasks before building. The `path` points to the source code. In this example, the source exists in the `src` directory on your local build machine, but you could instead define the source as a remote location:

```
modules:
  - name: hello
    buildsystem: autotools
    no-autogen: true
    sources:
      - type: archive
        url: https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
```

Finally, you must define the platform required for the application to run and build. The Flatpak maintainers supply runtimes and SDKs that include common libraries, including `freedesktop`, `gnome`, and `kde`. The basic requirement is the `freedesktop` runtime and SDK, although this may be superseded by GNOME or KDE, depending on what your code needs to run. For this GNU Hello example, only the basics are required.

```
runtime: org.freedesktop.Platform
runtime-version: '18.08'
sdk: org.freedesktop.Sdk
```

The entire GNU Hello Flatpak manifest:

```
id: org.gnu.Hello
runtime: org.freedesktop.Platform
runtime-version: '18.08'
sdk: org.freedesktop.Sdk
command: hello
modules:
  - name: hello
    buildsystem: autotools
    no-autogen: true
    sources:
      - type: archive
        path: src/hello-2.10.tar.gz
```

#### Building the Flatpak

Now that the package is defined, you can build it. The build process prompts flatpak-builder to parse the manifest and to resolve each requirement: it ensures that the necessary Platform and SDK are available (if they aren’t, then you’ll have to install them with the `flatpak` command), it unarchives the source code, and it executes the `buildsystem` specified.
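
If the 18.08 Freedesktop Platform and SDK aren’t installed yet, you can fetch them first. This example assumes you use the Flathub remote; any remote that carries them works:

```
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.freedesktop.Platform//18.08 org.freedesktop.Sdk//18.08
```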

The command to start the build:

```
$ flatpak-builder build-dir org.gnu.Hello.yaml
```

The directory `build-dir` is created if it does not already exist. The name `build-dir` is arbitrary; you could call it `build` or `bld` or `penguin`, and you can have more than one build destination in the same project directory. However, the term `build-dir` is a frequent value used in documentation, so using it as the literal value can be helpful.

#### Testing your application

You can test your application before or after it has been built by running the build command along with the `--run` option, ending the command with the command provided by the Flatpak:

```
$ flatpak-builder --run build-dir \
org.gnu.Hello.yaml hello
Hello, world!
```

### Packaging GUI apps with Flatpak

Packaging up a simple self-contained _hello world_ application is trivial, and fortunately packaging up a GUI application isn’t much harder. The most difficult applications to package are those that don’t rely on common libraries and frameworks (in the context of packaging, “common” means anything _not_ already packaged by someone else). The Flatpak community provides SDKs and SDK Extensions for many components you might otherwise have had to package yourself. For instance, when packaging the pure Java implementation of `pdftk`, I use the OpenJDK SDK extension I found in the Flatpak GitHub repository:

```
runtime: org.freedesktop.Platform
runtime-version: '18.08'
sdk: org.freedesktop.Sdk
sdk-extensions:
  - org.freedesktop.Sdk.Extension.openjdk11
```

The Flatpak community does a lot of work on the foundations required for applications to run upon in order to make the packaging process easy for developers. For instance, the Kblocks game from the KDE community requires the KDE platform to run, and that’s already available from Flatpak. The additional `libkdegames` library is not included, but it’s as easy to add it to your list of `modules` as `kblocks` itself.

Here’s a manifest for the Kblocks game:

```
id: org.kde.kblocks
command: kblocks
modules:
  - buildsystem: cmake-ninja
    name: libkdegames
    sources:
      - type: archive
        path: src/libkdegames-19.08.2.tar.xz
  - buildsystem: cmake-ninja
    name: kblocks
    sources:
      - type: archive
        path: src/kblocks-19.08.2.tar.xz
runtime: org.kde.Platform
runtime-version: '5.13'
sdk: org.kde.Sdk
```

As you can see, the manifest is still straightforward and relatively intuitive. The build system is different, and the runtime and SDK point to KDE instead of Freedesktop, but the structure and requirements are basically the same.

Because it’s a GUI application, however, there are some new options required. First, it needs an icon so that when it’s listed in the Activities or Application menu, it looks nice and recognizable. Kblocks includes an icon in its sources, but the names of files exported by a Flatpak must be prefixed using the application ID (such as `org.kde.Kblocks.desktop`). The easiest way to do this is to rename the file directly in the application source, which Flatpak can do for you as long as you include this directive in your manifest:

```
rename-icon: kblocks
```

Another unique trait of GUI applications is that they often require integration with common desktop services, like the graphics server (X11 or Wayland) itself, a sound server such as [PulseAudio][7], and the Inter-Process Communication (IPC) subsystem.

In the case of Kblocks, the requirements are:

```
finish-args:
  - --share=ipc
  - --socket=x11
  - --socket=wayland
  - --socket=pulseaudio
  - --device=dri
  - --filesystem=xdg-config/kdeglobals:ro
```

Here’s the final, complete manifest, using URLs for the sources so you can try this on your own system easily:

```
command: kblocks
finish-args:
  - --share=ipc
  - --socket=x11
  - --socket=wayland
  - --socket=pulseaudio
  - --device=dri
  - --filesystem=xdg-config/kdeglobals:ro
id: org.kde.kblocks
modules:
  - buildsystem: cmake-ninja
    name: libkdegames
    sources:
      - sha256: 83456cec44502a1f79c0be00c983090e32fd8aea5fec1461fbfbd37b5f8866ac
        type: archive
        url: https://download.kde.org/stable/applications/19.08.2/src/libkdegames-19.08.2.tar.xz
  - buildsystem: cmake-ninja
    name: kblocks
    sources:
      - sha256: 8b52c949e2d446a4ccf81b09818fc90234f2f55d8722c385491ee67e1f2abf93
        type: archive
        url: https://download.kde.org/stable/applications/19.08.2/src/kblocks-19.08.2.tar.xz
rename-icon: kblocks
runtime: org.kde.Platform
runtime-version: '5.13'
sdk: org.kde.Sdk
```

To build the application, you must have the KDE Platform and SDK Flatpaks (version 5.13 as of this writing) installed. Once the application has been built, you can run it using the `--run` method, but to see the application icon, you must install it.

#### Distributing and installing a Flatpak you have built

Distributing Flatpaks happens through repositories.

You can list your apps on [Flathub.org][8], a community website meant as a _technically_ decentralised (but central in spirit) location for Flatpaks. To submit your Flatpak, [place your manifest into a Git repository][9] and [submit a pull request on GitHub][10].

Alternately, you can create your own repository using the `flatpak build-export` command.
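
A rough sketch of that workflow follows; the repository directory `repo` and the remote name `my-repo` are arbitrary names chosen for this example:

```
$ flatpak build-export repo build-dir
$ flatpak --user remote-add --no-gpg-verify my-repo repo
$ flatpak --user install my-repo org.gnu.Hello
```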

You can also just install it locally:

```
$ flatpak-builder --force-clean --install build-dir org.kde.Kblocks.yaml
```

Once installed, open your Activities or Applications menu and search for Kblocks.

![The Activities menu in GNOME][11]

### Learning more

The [Flatpak documentation site][12] has a good walkthrough on building your first Flatpak. It’s worth reading even if you’ve followed along with this article. Besides that, the docs provide details on which Platforms and SDKs are available.

For those who enjoy learning from examples, there are manifests for _every application_ available on [Flathub][13].

The resources to build and use Flatpaks are plentiful, and Flatpak, along with containers and sandboxed apps, is arguably [the future][14], so get familiar with them, start integrating them with your Jenkins pipelines, and enjoy easy and universal Linux app packaging.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/how-build-flatpak-packaging

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/flatpak-lead-image.png?itok=J93RG_fi
[2]: http://notabug.org
[3]: http://savannah.nongnu.org/
[4]: http://debian.org
[5]: https://opensource.com/article/19/7/introduction-gnu-autotools
[6]: https://www.redhat.com/sysadmin/yaml-tips
[7]: https://opensource.com/article/17/1/linux-plays-sound
[8]: http://flathub.org
[9]: https://opensource.com/resources/what-is-git
[10]: https://opensource.com/life/16/3/submit-github-pull-request
[11]: https://opensource.com/sites/default/files/gnome-activities-kblocks.jpg (The Activities menu in GNOME)
[12]: http://docs.flatpak.org/en/latest/introduction.html
[13]: https://github.com/flathub
[14]: https://silverblue.fedoraproject.org/
@ -0,0 +1,272 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to program with Bash: Syntax and tools)
|
||||
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-1)
|
||||
[#]: author: (David Both https://opensource.com/users/dboth)
|
||||
|
||||
How to program with Bash: Syntax and tools
|
||||
======
|
||||
Learn basic Bash programming syntax and tools, as well as how to use
|
||||
variables and control operators, in the first article in this three-part
|
||||
series.
|
||||
![bash logo on green background][1]
|
||||
|
||||
A shell is the command interpreter for the operating system. Bash is my favorite shell, but every Linux shell interprets the commands typed by the user or sysadmin into a form the operating system can use. When the results are returned to the shell program, it sends them to STDOUT which, by default, [displays them in the terminal][2]. All of the shells I am familiar with are also programming languages.
|
||||
|
||||
Features like tab completion, command-line recall and editing, and shortcuts like aliases all contribute to its value as a powerful shell. Its default command-line editing mode uses Emacs, but one of my favorite Bash features is that I can change it to Vi mode to use editing commands that are already part of my muscle memory.
|
||||
|
||||
However, if you think of Bash solely as a shell, you miss much of its true power. While researching my three-volume [Linux self-study course][3] (on which this series of articles is based), I learned things about Bash that I'd never known in over 20 years of working with Linux. Some of these new bits of knowledge relate to its use as a programming language. Bash is a powerful programming language, one perfectly designed for use on the command line and in shell scripts.
|
||||
|
||||
This three-part series explores using Bash as a command-line interface (CLI) programming language. This first article looks at some simple command-line programming with Bash, variables, and control operators. The other articles explore types of Bash files; string, numeric, and miscellaneous logical operators that provide execution-flow control logic; different types of shell expansions; and the **for**, **while**, and **until** loops that enable repetitive operations. They will also look at some commands that simplify and support the use of these tools.
|
||||
|
||||
### The shell
|
||||
|
||||
A shell is the command interpreter for the operating system. Bash is my favorite shell, but every Linux shell interprets the commands typed by the user or sysadmin into a form the operating system can use. When the results are returned to the shell program, it displays them in the terminal. All of the shells I am familiar with are also programming languages.
|
||||
|
||||
Bash stands for Bourne Again Shell because the Bash shell is [based upon][4] the older Bourne shell that was written by Steven Bourne in 1977. Many [other shells][5] are available, but these are the four I encounter most frequently:
|
||||
|
||||
* **csh:** The C shell for programmers who like the syntax of the C language
|
||||
* **ksh:** The Korn shell, written by David Korn and popular with Unix users
|
||||
* **tcsh:** A version of csh with more ease-of-use features
|
||||
* **zsh:** The Z shell, which combines many features of other popular shells
|
||||
|
||||
|
||||
|
||||
All shells have built-in commands that supplement or replace the ones provided by the core utilities. Open the shell's man page and find the "BUILT-INS" section to see the commands it provides.
|
||||
|
||||
Each shell has its own personality and syntax. Some will work better for you than others. I have used the C shell, the Korn shell, and the Z shell. I still like the Bash shell more than any of them. Use the one that works best for you, although that might require you to try some of the others. Fortunately, it's quite easy to change shells.
|
||||
|
||||
All of these shells are programming languages, as well as command interpreters. Here's a quick tour of some programming constructs and tools that are integral parts of Bash.
|
||||
|
||||
### Bash as a programming language
|
||||
|
||||
Most sysadmins have used Bash to issue commands that are usually fairly simple and straightforward. But Bash can go beyond entering single commands, and many sysadmins create simple command-line programs to perform a series of tasks. These programs are common tools that can save time and effort.
|
||||
|
||||
My objective when writing CLI programs is to save time and effort (i.e., to be the lazy sysadmin). CLI programs support this by listing several commands in a specific sequence that execute one after another, so you do not need to watch the progress of one command and type in the next command when the first finishes. You can go do other things and not have to continually monitor the progress of each command.
|
||||
|
||||
### What is "a program"?
|
||||
|
||||
The Free On-line Dictionary of Computing ([FOLDOC][6]) defines a program as: "The instructions executed by a computer, as opposed to the physical device on which they run." Princeton University's [WordNet][7] defines a program as: "…a sequence of instructions that a computer can interpret and execute…" [Wikipedia][8] also has a good entry about computer programs.
|
||||
|
||||
Therefore, a program can consist of one or more instructions that perform a specific, related task. A computer program instruction is also called a program statement. For sysadmins, a program is usually a sequence of shell commands. All the shells available for Linux, at least the ones I am familiar with, have at least a basic form of programming capability, and Bash, the default shell for most Linux distributions, is no exception.
|
||||
|
||||
While this series uses Bash (because it is so ubiquitous), if you use a different shell, the general programming concepts will be the same, although the constructs and syntax may differ somewhat. Some shells may support some features that others do not, but they all provide some programming capability. Shell programs can be stored in a file for repeated use, or they may be created on the command line as needed.
|
||||
|
||||
### Simple CLI programs
|
||||
|
||||
The simplest command-line programs are one or two consecutive program statements, which may be related or not, that are entered on the command line before the **Enter** key is pressed. The second statement in a program, if there is one, might be dependent upon the actions of the first, but it does not need to be.
|
||||
|
||||
There is also one bit of syntactical punctuation that needs to be clearly stated. When entering a single command on the command line, pressing the **Enter** key terminates the command with an implicit semicolon (**;**). When used in a CLI shell program entered as a single line on the command line, the semicolon must be used to terminate each statement and separate it from the next one. The last statement in a CLI shell program can use an explicit or implicit semicolon.
|
||||
|
||||
### Some basic syntax
|
||||
|
||||
The following examples will clarify this syntax. This program consists of a single command with an explicit terminator:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ echo "Hello world." ;
|
||||
Hello world.
|
||||
```
|
||||
|
||||
That may not seem like much of a program, but it is the first program I encounter with every new programming language I learn. The syntax may be a bit different for each language, but the result is the same.
|
||||
|
||||
Let's expand a little on this trivial but ubiquitous program. Your results will be different from mine because I have done other experiments, while you may have only the default directories and files that are created in the account home directory the first time you log into an account via the GUI desktop.
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ echo "My home directory." ; ls ;
|
||||
My home directory.
|
||||
chapter25 TestFile1.Linux dmesg2.txt Downloads newfile.txt softlink1 testdir6
|
||||
chapter26 TestFile1.mac dmesg3.txt file005 Pictures Templates testdir
|
||||
TestFile1 Desktop dmesg.txt link3 Public testdir Videos
|
||||
TestFile1.dos dmesg1.txt Documents Music random.txt testdir1
|
||||
```
|
||||
|
||||
That makes a bit more sense. The results are related, but the individual program statements are independent of each other. Notice that I like to put spaces before and after the semicolon because it makes the code a bit easier to read. Try that little CLI program again without an explicit semicolon at the end:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ echo "My home directory." ; ls
|
||||
```
|
||||
|
||||
There is no difference in the output.
|
||||
|
||||
### Something about variables
|
||||
|
||||
Like all programming languages, the Bash shell can deal with variables. A variable is a symbolic name that refers to a specific location in memory that contains a value of some sort. The value of a variable is changeable, i.e., it is variable.
|
||||
|
||||
Bash does not type variables like C and related languages, defining them as integers, floating points, or string types. In Bash, all variables are strings. A string that is an integer can be used in integer arithmetic, which is the only type of math that Bash is capable of doing. If more complex math is required, the [**bc** command][9] can be used in CLI programs and scripts.
|
||||
|
||||
Variables are assigned values and can be used to refer to those values in CLI programs and scripts. A variable is set using its bare name, with no preceding **$** sign; the **$** is used when you reference the variable's value. The assignment **VAR=10** sets the value of the variable VAR to 10. To print the value of the variable, you can use the statement **echo $VAR**. Start with text (i.e., non-numeric) variables.
|
||||
|
||||
Bash variables persist as part of the shell session until they are unset; they are passed on to child processes only if you export them.
|
||||
|
||||
Check the initial value of a variable that has not been assigned; it should be null. Then assign a value to the variable and print it to verify its value. You can do all of this in a single CLI program:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ echo $MyVar ; MyVar="Hello World" ; echo $MyVar ;
|
||||
|
||||
Hello World
|
||||
[student@studentvm1 ~]$
|
||||
```
|
||||
|
||||
_Note: The syntax of variable assignment is very strict. There must be no spaces on either side of the equal (**=**) sign in the assignment statement._
|
||||
|
||||
The empty line indicates that the initial value of **MyVar** is null. Changing and setting the value of a variable are done the same way. This example shows both the original and the new value.
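Two quick experiments tie this together: first, what happens when you violate the no-spaces rule (Bash treats the variable name as a command, although the exact error text may vary by version), and second, how to remove a variable with **unset**:

```
[student@studentvm1 ~]$ MyVar = "Hello World"
bash: MyVar: command not found
[student@studentvm1 ~]$ unset MyVar ; echo $MyVar

[student@studentvm1 ~]$
```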
|
||||
|
||||
As mentioned, Bash can perform integer arithmetic calculations, which is useful for calculating a reference to the location of an element in an array or doing simple math problems. It is not suitable for scientific computing or anything that requires decimals, such as financial calculations. There are much better tools for those types of calculations.
|
||||
|
||||
Here's a simple calculation:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var1*Var2))"
|
||||
Result = 63
|
||||
```
|
||||
|
||||
What happens when you perform a math operation that results in a floating-point number?
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var1/Var2))"
|
||||
Result = 0
|
||||
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $((Var2/Var1))"
|
||||
Result = 1
|
||||
[student@studentvm1 ~]$
|
||||
```
|
||||
|
||||
Bash performs integer division: the fractional part is simply discarded, so 7/9 yields 0 (not 1) and 9/7 yields 1. Notice that the calculation was performed as part of the **echo** statement. The math is performed before the enclosing **echo** command due to the Bash order of precedence. For details, see the Bash man page and search for "precedence."
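If you do need the fractional part, one option is to hand the expression to the **bc** command mentioned earlier. A minimal sketch, with **scale** setting the number of decimal places:

```
[student@studentvm1 ~]$ Var1="7" ; Var2="9" ; echo "Result = $(echo "scale=3; $Var1/$Var2" | bc)"
Result = .777
```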
|
||||
|
||||
### Control operators
|
||||
|
||||
Shell control operators are syntactical operators that make it easy to create some interesting command-line programs. The simplest form of CLI program is just stringing several commands together in a sequence on the command line:
|
||||
|
||||
|
||||
```
|
||||
command1 ; command2 ; command3 ; command4 ; . . . ; etc. ;
|
||||
```
|
||||
|
||||
Those commands all run without a problem so long as no errors occur. But what happens when an error occurs? You can anticipate and allow for errors using the built-in **&&** and **||** Bash control operators. These two control operators provide some flow control and enable you to alter the sequence of code execution. The semicolon is also considered to be a Bash control operator, as is the newline character.
|
||||
|
||||
The **&&** operator simply says, "if command1 is successful, then run command2. If command1 fails for any reason, then command2 is skipped." That syntax looks like this:
|
||||
|
||||
|
||||
```
|
||||
command1 && command2
|
||||
```
|
||||
|
||||
Now, look at some commands that will create a new directory and—if it's successful—make it the present working directory (PWD). Ensure that your home directory (**~**) is the PWD. Try this first in **/root**, a directory that you do not have access to:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir/ && cd $Dir
|
||||
mkdir: cannot create directory '/root/testdir/': Permission denied
|
||||
[student@studentvm1 ~]$
|
||||
```
|
||||
|
||||
The error was emitted by the **mkdir** command. Because the creation of the directory failed, the **&&** control operator sensed the non-zero return code and skipped the **cd** command, so you did not get a second error about changing into a directory that does not exist. This type of command-line program flow control can prevent errors from compounding and making a real mess of things. But it's time to get a little more complicated.
|
||||
|
||||
The **||** control operator allows you to add another program statement that executes when the initial program statement returns a code greater than zero. The basic syntax looks like this:
|
||||
|
||||
|
||||
```
|
||||
command1 || command2
|
||||
```
|
||||
|
||||
This syntax reads, "If command1 fails, execute command2." That implies that if command1 succeeds, command2 is skipped. Try this by attempting to create a new directory:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir || echo "$Dir was not created."
|
||||
mkdir: cannot create directory '/root/testdir': Permission denied
|
||||
/root/testdir was not created.
|
||||
[student@studentvm1 ~]$
|
||||
```
|
||||
|
||||
This is exactly what you would expect. Because the new directory could not be created, the first command failed, which resulted in the execution of the second command.
|
||||
|
||||
Combining these two operators provides the best of both. The general form of a command-line program that uses both the **&&** and **||** control operators is:
|
||||
|
||||
|
||||
```
|
||||
preceding commands ; command1 && command2 || command3 ; following commands
|
||||
```
|
||||
|
||||
This syntax can be stated like so: "If command1 exits with a return code of 0, then execute command2, otherwise execute command3." Try it:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Dir=/root/testdir ; mkdir $Dir && cd $Dir || echo "$Dir was not created."
|
||||
mkdir: cannot create directory '/root/testdir': Permission denied
|
||||
/root/testdir was not created.
|
||||
[student@studentvm1 ~]$
|
||||
```
|
||||
|
||||
Now try the last command again using your home directory instead of the **/root** directory. You will have permission to create this directory:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 ~]$ Dir=~/testdir ; mkdir $Dir && cd $Dir || echo "$Dir was not created."
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
The control operator syntax, like **command1 && command2**, works because every command sends a return code (RC) to the shell that indicates whether it completed successfully or failed during execution. By convention, an RC of zero (0) indicates success, and any positive number indicates some type of failure. Some of the tools sysadmins use return only a one (1) to indicate a failure, but many use other codes to indicate the type of failure that occurred.
|
||||
|
||||
The Bash shell variable **$?** contains the RC from the last command. This RC can be checked very easily by a script, the next command in a list of commands, or even the sysadmin directly. Start by running a simple command and immediately checking the RC. The RC will always be for the last command that ran before you looked at it.
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ ll ; echo "RC = $?"
|
||||
total 1264
|
||||
drwxrwxr-x 2 student student 4096 Mar 2 08:21 chapter25
|
||||
drwxrwxr-x 2 student student 4096 Mar 21 15:27 chapter26
|
||||
-rwxr-xr-x 1 student student 92 Mar 20 15:53 TestFile1
|
||||
<snip>
|
||||
drwxrwxr-x. 2 student student 663552 Feb 21 14:12 testdir
|
||||
drwxr-xr-x. 2 student student 4096 Dec 22 13:15 Videos
|
||||
RC = 0
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
The RC, in this case, is zero, which means the command completed successfully. Now try the same command on root's home directory, a directory you do not have permissions for:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ ll /root ; echo "RC = $?"
|
||||
ls: cannot open directory '/root': Permission denied
|
||||
RC = 2
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
In this case, the RC is two, which means permission was denied for a non-root user to access the **/root** directory. The control operators use these RCs to enable you to alter the sequence of program execution.
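As a small preview of the flow control covered in the next article, here is a sketch that saves the RC in a variable and branches on it; run as the student user, the **ls** of **/root** fails with RC 2, as shown above:

```
[student@studentvm1 testdir]$ ls /root > /dev/null 2>&1 ; RC=$? ; if [ $RC -eq 0 ] ; then echo "Success" ; else echo "ls failed with RC = $RC" ; fi
ls failed with RC = 2
```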
|
||||
|
||||
### Summary
|
||||
|
||||
This article looked at Bash as a programming language and explored its basic syntax as well as some basic tools. It showed how to print data to STDOUT and how to use variables and control operators. The next article in this series looks at some of the many Bash logical operators that control the flow of instruction execution.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/programming-bash-part-1
|
||||
|
||||
作者:[David Both][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/dboth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
|
||||
[2]: https://opensource.com/article/18/10/linux-data-streams
|
||||
[3]: http://www.both.org/?page_id=1183
|
||||
[4]: https://opensource.com/19/9/command-line-heroes-bash
|
||||
[5]: https://en.wikipedia.org/wiki/Comparison_of_command_shells
|
||||
[6]: http://foldoc.org/program
|
||||
[7]: https://wordnet.princeton.edu/
|
||||
[8]: https://en.wikipedia.org/wiki/Computer_program
|
||||
[9]: https://www.gnu.org/software/bc/manual/html_mono/bc.html
|
@ -0,0 +1,101 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Pylint: Making your Python code consistent)
|
||||
[#]: via: (https://opensource.com/article/19/10/python-pylint-introduction)
|
||||
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
|
||||
|
||||
Pylint: Making your Python code consistent
|
||||
======
|
||||
Pylint is your friend when you want to avoid arguing about code complexity.
|
||||
![OpenStack source code \(Python\) in VIM][1]
|
||||
|
||||
Pylint is a higher-level Python style enforcer. While [flake8][2] and [black][3] take care of "local" style (where the newlines occur, how comments are formatted) and find issues like commented-out code or bad practices in log formatting, Pylint looks at the bigger picture.
|
||||
|
||||
Pylint is extremely aggressive by default. It will offer strong opinions on everything from checking whether declared interfaces are actually implemented to opportunities to refactor duplicate code, which can be a lot for a new user. One way of introducing it gently to a project, or a team, is to start by turning _all_ checkers off, and then enabling checkers one by one. This is especially useful if you already use flake8, black, and [mypy][4]: Pylint has quite a few checkers that overlap in functionality.
|
||||
|
||||
However, one of the things unique to Pylint is the ability to enforce higher-level issues: for example, number of lines in a function, or number of methods in a class.
|
||||
|
||||
These numbers might be different from project to project and can depend on the development team's preferences. However, once the team comes to an agreement about the parameters, it is useful to _enforce_ those parameters using an automated tool. This is where Pylint shines.
|
||||
|
||||
### Configuring Pylint
|
||||
|
||||
In order to start with an empty configuration, start your `.pylintrc` with
|
||||
|
||||
|
||||
```
|
||||
[MESSAGES CONTROL]
|
||||
|
||||
disable=all
|
||||
```
|
||||
|
||||
This disables all Pylint messages. Since many of them are redundant, this makes sense. In Pylint, a `message` is a specific kind of warning.
|
||||
|
||||
You can check that all messages have been turned off by running `pylint`:
|
||||
|
||||
|
||||
```
|
||||
$ pylint <my package>
|
||||
```
|
||||
|
||||
In general, it is not a great idea to add parameters to the `pylint` command-line: the best place to configure your `pylint` is the `.pylintrc`. In order to have it do _something_ useful, we need to enable some messages.
|
||||
|
||||
In order to enable messages, add them to your `.pylintrc`, under the `[MESSAGES CONTROL]` section:
|
||||
|
||||
|
||||
```
|
||||
enable=<message>,
|
||||
|
||||
...
|
||||
```
|
||||
|
||||
For the "messages" (what Pylint calls different kinds of warnings) that look useful. Some of my favorites include `too-many-lines`, `too-many-arguments`, and `too-many-branches`. All of those limit complexity of modules or functions, and serve as an objective check, without a human nitpicker needed, for code complexity measurement.
|
||||
|
||||
A _checker_ is a source of _messages_: every message belongs to exactly one checker. Many of the most useful messages are under the [design checker][5]. The default numbers are usually good, but tweaking the maximums is straightforward: we can add a section called `DESIGN` in the `.pylintrc`.
|
||||
|
||||
|
||||
```
|
||||
[DESIGN]
|
||||
|
||||
max-args=7
|
||||
|
||||
max-locals=15
|
||||
```
|
||||
|
||||
Another good source of useful messages is the `refactoring` checker. Some of my favorite messages to enable there are `consider-using-dict-comprehension`, `stop-iteration-return` (which looks for generators that use `raise StopIteration` when `return` is the correct way to stop the iteration), and `chained-comparison`, which suggests using syntax like `1 <= x < 5` rather than the longer `1 <= x and x < 5`.
|
||||
|
||||
Finally, an expensive checker, in terms of performance, but highly useful, is `similarities`. It is designed to enforce "Don't Repeat Yourself" (the DRY principle) by explicitly looking for copy-paste between different parts of the code. It only has one message to enable: `duplicate-code`. The default "minimum similarity lines" is set to `4`. It is possible to set it to a different value using the `.pylintrc`.
|
||||
|
||||
|
||||
```
|
||||
[SIMILARITIES]
|
||||
|
||||
min-similarity-lines=3
|
||||
```
|
||||
|
||||
### Pylint makes code reviews easy
|
||||
|
||||
If you are sick of code reviews where you point out that a class is too complicated, or that two different functions are basically the same, add Pylint to your [Continuous Integration][6] configuration, and only have the arguments about complexity guidelines for your project _once_.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/python-pylint-introduction
|
||||
|
||||
作者:[Moshe Zadka][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/moshez
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_2.jpg?itok=4fza48WU (OpenStack source code (Python) in VIM)
|
||||
[2]: https://opensource.com/article/19/5/python-flake8
|
||||
[3]: https://opensource.com/article/19/5/python-black
|
||||
[4]: https://opensource.com/article/19/5/python-mypy
|
||||
[5]: https://pylint.readthedocs.io/en/latest/technical_reference/features.html#design-checker
|
||||
[6]: https://opensource.com/business/15/7/six-continuous-integration-tools
|
@ -0,0 +1,185 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Transition to Nftables)
|
||||
[#]: via: (https://opensourceforu.com/2019/10/transition-to-nftables/)
|
||||
[#]: author: (Vijay Marcel D https://opensourceforu.com/author/vijay-marcel/)
|
||||
|
||||
Transition to Nftables
|
||||
======
|
||||
|
||||
[![][1]][2]
|
||||
|
||||
_Every major distribution in the open source world is moving towards nftables as the default firewall. In short, the venerable Iptables is now dead. This article is a tutorial on how to build nftables._
|
||||
|
||||
Currently, there is an iptables-nft backend that is compatible with nftables, but soon, even this will not be available. Also, as noted by Red Hat developers, sometimes it may translate the rules incorrectly. Rather than rely on an iptables-to-nftables converter, we need to know how to build our own nftables. In nftables, all the address families come under one ruleset. Nftables is managed from user space by a single tool, unlike iptables, where every protocol has its own kernel module. It also needs fewer kernel updates and comes with new features such as maps, families and dictionaries.
|
||||
|
||||
**Address families**
|
||||
Address families determine the types of packets that are processed. There are six address families in nftables and they are:
|
||||
|
||||
* ip
|
||||
* ipv6
|
||||
* inet
|
||||
* arp
|
||||
* bridge
|
||||
* netdev
|
||||
|
||||
|
||||
|
||||
In nftables, the ipv4 and ipv6 protocols are combined into one single family called inet. So we do not need to specify two rules – one for ipv4 and another for ipv6. If no address family is specified, it will default to ip protocol, i.e., ipv4. Our area of interest lies in the inet family, since most home users will use either ipv4 or ipv6 protocols (see Figure 1).
|
||||
|
||||
**Nftables**
|
||||
A typical nftables configuration has three building blocks – tables, chains and rules.
|
||||
Tables are containers for chains and rules. They are identified by their address families and their names. Chains contain the rules needed for the _inet/arp/bridge/netdev_ protocols and are of three types — filter, NAT and route. Nftables rules can be loaded from a script, or they can be typed into a terminal and then saved as a ruleset. For home users, the default chain will be filter. The inet family contains the following hooks:
|
||||
|
||||
* Input
|
||||
* Output
|
||||
* Forward
|
||||
* Pre-routing
|
||||
* Post-routing
|
||||
|
||||
|
||||
|
||||
**To script or not to script?**
|
||||
One of the biggest questions is whether we can use a firewall script or not. The answer is: it’s your choice. Here’s some advice – if you have hundreds of rules in your firewall, then it is best to use a script, but if you are a typical home user, then you can type the commands in the terminal and then load your rule-set. Each option has its own advantages and disadvantages. In this article, we will type them in the terminal to build our firewall.
|
||||
|
||||
Nftables uses a program called nft to add, create, list, delete and load rules. Make sure nftables is installed along with conntrackd and netfilter-persistent, and remove iptables, using the following commands:
|
||||
|
||||
```
|
||||
apt-get install nftables conntrackd netfilter-persistent
|
||||
apt-get purge iptables
|
||||
```
|
||||
|
||||
_nft_ must be run as root or with sudo. Use the following commands to list the ruleset, flush the ruleset, delete a table, and load a ruleset from a script, respectively.
|
||||
|
||||
```
|
||||
nft list ruleset
|
||||
nft flush ruleset
|
||||
nft delete table inet filter
|
||||
/usr/sbin/nft -f /etc/nftables.conf
|
||||
```
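For reference, here is a minimal sketch of what such a script might look like; it assumes nft lives at _/usr/sbin/nft_ and mirrors the kind of input chain this article builds step by step below:

```
#!/usr/sbin/nft -f

# Start from a clean state
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iifname "lo" accept
        ct state established,related accept
    }
}
```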
|
||||
|
||||
**Input policy**
|
||||
The firewall will contain three parts – input, forward and output – just like in iptables. In the terminal, type the following commands for the input firewall. Make sure you have flushed your rule-set before you begin. Our default policy will be to drop everything. We will use the inet family in the firewall. Add the following rules as root or use sudo:
|
||||
|
||||
```
|
||||
nft add table inet filter
|
||||
nft add chain inet filter input { type filter hook input priority 0 \; counter \; policy drop \; }
|
||||
```
|
||||
|
||||
You may have noticed something called _priority 0_. The priority controls the order in which chains registered to the same hook are traversed: the lower the number, the higher the precedence, and negative integers run earliest. Every hook has its own conventional priority, and 0 is the convention for a filter chain. You can check the nftables wiki page to see the priority of each hook.
|
||||
To know the network interfaces in your computer, run the following command:
|
||||
|
||||
```
|
||||
ip link show
|
||||
```
|
||||
|
||||
It will show the installed network interfaces: the local loopback interface plus your Ethernet port and/or your wireless port. Your Ethernet port's name looks something like _enpXsY_, where X and Y are numbers, and the same goes for your wireless port. We have to allow the loopback interface and accept only established incoming connections from the Internet.
|
||||
Nftables has a feature called verdict statements, which determine how a rule disposes of a packet. The verdict statements are _accept, drop, queue, jump, goto, continue_ and _return_. Since the firewall is a simple one, we will simply _accept_ or _drop_ packets (Figure 2).
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname lo accept
|
||||
nft add rule inet filter input iifname enpXsY ct state new, established, related accept
|
||||
```
|
||||
|
||||
Next, we have to add rules to protect us from stealth scans. Not all stealth scans are malicious, but most of them are. We have to protect the network from such scans. In each of the rules below, the first set lists the TCP flags to be examined, and the second set lists the flag combination that, when matched, causes the packet to be dropped.
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(syn\|fin\) == \(syn\|fin\) drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(syn\|rst\) == \(syn\|rst\) drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(fin\|rst\) == \(fin\|rst\) drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|fin\) == fin drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|psh\) == psh drop
|
||||
nft add rule inet filter input iifname enpXsY tcp flags \& \(ack\|urg\) == urg drop
|
||||
```
|
||||
|
||||
Remember, we are typing these commands in the terminal. So we have to add a backslash before some special characters, to make sure the terminal interprets it as it should. If you are using a script, then this isn’t required.
|
||||
|
||||
**A word of caution regarding ICMP**
|
||||
The Internet Control Message Protocol (ICMP) is a diagnostic tool and so should not be dropped outright. Any attempt to fully block ICMP is unwise as it will also stop giving error messages to us. Enable only the most important control messages such as echo-request, echo-reply, destination-unreachable and time-exceeded, and reject the rest. Echo-request and echo-reply are part of ping. In the input, we only allow echo reply and in the output, we only allow the echo-request.
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname enpXsY icmp type { echo-reply, destination-unreachable, time-exceeded } limit rate 1/second accept
|
||||
nft add rule inet filter input iifname enpXsY ip protocol icmp drop
|
||||
```
|
||||
|
||||
Finally, we are logging and dropping all the invalid packets.
|
||||
|
||||
```
|
||||
nft add rule inet filter input iifname enpXsY ct state invalid log flags all level info prefix \"Invalid-Input: \"
|
||||
nft add rule inet filter input iifname enpXsY ct state invalid drop
|
||||
```
|
||||
|
||||
**Forward and output policy**
|
||||
In both the forward and output policies, we will drop packets by default and only accept those that are established connections.
|
||||
|
||||
```
|
||||
nft add chain inet filter forward { type filter hook forward priority 0 \; counter \; policy drop \; }
|
||||
nft add rule inet filter forward ct state established, related accept
|
||||
nft add rule inet filter forward ct state invalid drop
|
||||
nft add chain inet filter output { type filter hook output priority 0 \; counter \; policy drop \; }
|
||||
```
|
||||
|
||||
A typical desktop user needs only Port 80 and 443 to be allowed to access the Internet. Finally, allow acceptable ICMP protocols and drop the invalid packets while logging them.
|
||||
|
||||
```
|
||||
nft add rule inet filter output oifname enpXsY tcp dport { 80, 443 } ct state established accept
|
||||
nft add rule inet filter output oifname enpXsY icmp type { echo-request, destination-unreachable, time-exceeded } limit rate 1/second accept
|
||||
nft add rule inet filter output oifname enpXsY ip protocol icmp drop
|
||||
nft add rule inet filter output oifname enpXsY ct state invalid log flags all level info prefix \"Invalid-Output: \"
|
||||
nft add rule inet filter output oifname enpXsY ct state invalid drop
|
||||
```
|
||||
|
||||
Now we have to save our ruleset; otherwise, it will be lost when we reboot. Note that the output redirection must also happen as root, so run the following command:
|
||||
|
||||
```
|
||||
sudo sh -c 'nft list ruleset > /etc/nftables.conf'
|
||||
```
|
||||
|
||||
We now have to load nftables at boot; to do that, enable the nftables service in systemd:
|
||||
|
||||
```
|
||||
sudo systemctl enable nftables
|
||||
```
|
||||
|
||||
Next, edit the nftables unit file to remove the ExecStop option, so the ruleset is not flushed every time the service stops. The file is usually located in /etc/systemd/system/sysinit.target.wants/nftables.service. Now restart nftables:
|
||||
|
||||
```
|
||||
sudo systemctl restart nftables
|
||||
```
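If you would rather not edit the unit file directly, a less intrusive sketch (assuming your systemd provides `systemctl edit`) is to use a drop-in override; an empty `ExecStop=` clears the stop command so the ruleset is not flushed:

```
sudo systemctl edit nftables

# In the editor that opens, add the following two lines, then save:
# [Service]
# ExecStop=
```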
|
||||
|
||||
**Logging in rsyslog**
|
||||
When you log the dropped packets, they go straight to _syslog_, which makes reading your log file quite difficult. It is better to redirect your firewall logs to a separate file. Create a directory called nftables in _/var/log_ and, in it, create two files called _input.log_ and _output.log_ to store the input and output logs, respectively. Make sure rsyslog is installed in your system. Now go to _/etc/rsyslog.d_ and create a file called _nftables.conf_ with the following contents:
|
||||
|
||||
```
|
||||
:msg,regex,"Invalid-Input: " -/var/log/nftables/input.log
|
||||
:msg,regex,"Invalid-Output: " -/var/log/nftables/output.log
|
||||
& stop
|
||||
```
|
||||
|
||||
Now we have to make sure the log is manageable. For that, create another file in _/etc/logrotate.d_ called nftables with the following code:
|
||||
|
||||
```
|
||||
/var/log/nftables/* {
    rotate 5
    daily
    maxsize 50M
    missingok
    notifempty
    delaycompress
    compress
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}
|
||||
```
|
||||
|
||||
Restart rsyslog so the logging changes take effect. You can now check your ruleset. If you feel typing each command in the terminal is bothersome, you can use a script to load the nftables firewall. I hope this article is useful in protecting your system.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensourceforu.com/2019/10/transition-to-nftables/
|
||||
|
||||
作者:[Vijay Marcel D][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensourceforu.com/author/vijay-marcel/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/01/REHfirewall-1.jpg?resize=696%2C481&ssl=1 (REHfirewall)
|
||||
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2017/01/REHfirewall-1.jpg?fit=900%2C622&ssl=1
|
@ -0,0 +1,261 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Beginner’s Guide to Handle Various Update Related Errors in Ubuntu)
|
||||
[#]: via: (https://itsfoss.com/ubuntu-update-error/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
Beginner’s Guide to Handle Various Update Related Errors in Ubuntu
|
||||
======
|
||||
|
||||
_**Who hasn’t come across an error while doing an update in Ubuntu? Update errors are common and plenty in Ubuntu and other Linux distributions based on Ubuntu. Here are some common Ubuntu update errors and their fixes.**_
|
||||
|
||||
This article is part of the Ubuntu beginner series that explains the know-how of Ubuntu so that a new user can understand things better.
|
||||
|
||||
In an earlier article, I discussed [how to update Ubuntu][1]. In this tutorial, I’ll discuss some common errors you may encounter while updating [Ubuntu][2]. It usually happens because you tried to add software or repositories on your own and that probably caused an issue.
|
||||
|
||||
There is no need to panic if you see errors while updating your system. The errors are common and the fixes are easy. You'll learn how to fix those common update errors.
|
||||
|
||||
_**Before you begin, I highly advise reading these two articles to have a better understanding of the repository concept in Ubuntu.**_
|
||||
|
||||
![Understand Ubuntu repositories][3]
|
||||
|
||||
|
||||
###### **Understand Ubuntu repositories**
|
||||
|
||||
Learn what are various repositories in Ubuntu and how they enable you to install software in your system.
|
||||
|
||||
[Read More][4]
|
||||
|
||||
![Understanding PPA in Ubuntu][5]
|
||||
|
||||
|
||||
###### **Understanding PPA in Ubuntu**
|
||||
|
||||
Further improve your concept of repositories and package handling in Ubuntu with this detailed guide on PPA.
|
||||
|
||||
[Read More][6]
|
||||
|
||||
### Error 0: Failed to download repository information
|
||||
|
||||
Many Ubuntu desktop users update their system through the graphical software updater tool. You are notified that updates are available for your system and you can click one button to start downloading and installing the updates.
|
||||
|
||||
Well, that’s what usually happens. But sometimes you’ll see an error like this:
|
||||
|
||||
![][7]
|
||||
|
||||
_**Failed to download repository information. Check your internet connection.**_
|
||||
|
||||
That’s a weird error because your internet connection is most likely working just fine and it still says to check the internet connection.
|
||||
|
||||
Did you note that I called it ‘error 0’? It’s because it’s not an error in itself. I mean, most probably, it has nothing to do with the internet connection. But there is no useful information other than this misleading error message.
|
||||
|
||||
If you see this error message and your internet connection is working fine, it’s time to put on your detective hat and [use your grey cells][8] (as [Hercule Poirot][9] would say).
|
||||
|
||||
You’ll have to use the command line here. You can [use Ctrl+Alt+T keyboard shortcut to open the terminal in Ubuntu][10]. In the terminal, use this command:
|
||||
|
||||
```
|
||||
sudo apt update
|
||||
```
|
||||
|
||||
Let the command finish. Observe the last three or four lines of its output. They will give you the real reason why `sudo apt update` fails. Here's an example:
|
||||
|
||||
![][11]
|
||||
|
||||
The rest of this tutorial shows how to handle errors like the ones you just saw in the last few lines of the update command's output.
|
||||
|
||||
### Error 1: Problem With MergeList
|
||||
|
||||
When you run update in terminal, you may see an error “[problem with MergeList][12]” like below:
|
||||
|
||||
```
|
||||
E:Encountered a section with no Package: header,
|
||||
E:Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-i386_Packages,
|
||||
E:The package lists or status file could not be parsed or opened.
|
||||
```
|
||||
|
||||
For some reason, the files in the /var/lib/apt/lists directory got corrupted. You can delete all the files in this directory and run the update again to regenerate everything afresh. Use the following commands one by one:
|
||||
|
||||
```
|
||||
sudo rm -r /var/lib/apt/lists/*
|
||||
sudo apt-get clean && sudo apt-get update
|
||||
```
|
||||
|
||||
Your problem should be fixed.
|
||||
|
||||
### Error 2: Hash Sum mismatch
|
||||
|
||||
If you find an error that talks about [Hash Sum mismatch][13], the fix is the same as the one in the previous error.
|
||||
|
||||
```
|
||||
W:Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_oneiric_restricted_binary-i386_Packages Hash Sum mismatch,
|
||||
W:Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_binary-i386_Packages Hash Sum mismatch,
|
||||
E:Some index files failed to download. They have been ignored, or old ones used instead
|
||||
```
|
||||
|
||||
The error occurs possibly because of a mismatched metadata cache between the server and your system. You can use the following commands to fix it:
|
||||
|
||||
```
|
||||
sudo rm -rf /var/lib/apt/lists/*
|
||||
sudo apt update
|
||||
```
|
||||
|
||||
### Error 3: Failed to fetch with error 404 not found
|
||||
|
||||
If you try adding a PPA repository that is not available for your current [Ubuntu version][14], you’ll see that it throws a 404 not found error.
|
||||
|
||||
```
|
||||
W: Failed to fetch http://ppa.launchpad.net/venerix/pkg/ubuntu/dists/raring/main/binary-i386/Packages 404 Not Found
|
||||
E: Some index files failed to download. They have been ignored, or old ones used instead.
|
||||
```
|
||||
|
||||
You added a PPA hoping to install an application but it is not available for your Ubuntu version and you are now stuck with the update error. This is why you should check beforehand if a PPA is available for your Ubuntu version or not. I have discussed how to check the PPA availability in the detailed [PPA guide][6].
|
||||
|
||||
Anyway, the fix here is that you remove the troublesome PPA from your list of repositories. Note the PPA name from the error message. Go to _Software & Updates_ tool:
|
||||
|
||||
![Open Software & Updates][15]
|
||||
|
||||
In here, move to _Other Software_ tab and look for that PPA. Uncheck the box to [remove the PPA][16] from your system.
|
||||
|
||||
![Remove PPA Using Software & Updates In Ubuntu][17]
|
||||
|
||||
Your software list will be updated when you do that. Now if you run the update again, you shouldn’t see the error.
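If you prefer the terminal, the same PPA can usually be removed with `add-apt-repository`; a sketch using the PPA name from the example error above:

```
sudo add-apt-repository --remove ppa:venerix/pkg
```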
|
||||
|
||||
### Error 4: Failed to download package files error
|
||||
|
||||
A similar error is the **[failed to download package files error][18]**, which looks like this:
|
||||
|
||||
![][19]
|
||||
|
||||
In this case, a newer version of the software is available but has not yet propagated to all the mirrors. If you are not using a mirror, this is easily fixed by changing the software source to the Main server. Please read this article for more details on the [failed to download package error][18].
|
||||
|
||||
Go to _Software & Updates_ and change the download server to the Main server:
|
||||
|
||||
![][20]
|
||||
|
||||
### Error 5: GPG error: The following signatures couldn’t be verified
|
||||
|
||||
Adding a PPA may also result in the following [GPG error: The following signatures couldn’t be verified][21] when you try to run an update in terminal:
|
||||
|
||||
```
|
||||
W: GPG error: http://repo.mate-desktop.org saucy InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 68980A0EA10B4DE8
|
||||
```
|
||||
|
||||
All you need to do is to fetch this public key in the system. Get the key number from the message. In the above message, the key is 68980A0EA10B4DE8.
|
||||
|
||||
This key can be used in the following manner:
|
||||
|
||||
```
|
||||
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 68980A0EA10B4DE8
|
||||
```
|
||||
|
||||
Once the key has been added, run the update again and it should be fine.
|
||||
|
||||
### Error 6: BADSIG error
|
||||
|
||||
Another signature related Ubuntu update error is [BADSIG error][22] which looks something like this:
|
||||
|
||||
```
|
||||
W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com precise Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key
|
||||
W: GPG error: http://ppa.launchpad.net precise Release:
|
||||
The following signatures were invalid: BADSIG 4C1CBC1B69B0E2F4 Launchpad PPA for Jonathan French W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/Release
|
||||
```
|
||||
|
||||
All the repositories are signed with GPG keys and, for some reason, your system considers the signatures invalid. You'll need to refresh the signature keys. The easiest way to do that is by regenerating the apt package lists (with their signature keys), after which apt should have the correct keys.
|
||||
|
||||
Use the following commands one by one in the terminal:
|
||||
|
||||
```
|
||||
cd /var/lib/apt
|
||||
sudo mv lists oldlist
|
||||
sudo mkdir -p lists/partial
|
||||
sudo apt-get clean
|
||||
sudo apt-get update
|
||||
```
|
||||
|
||||
### Error 7: Partial upgrade error
|
||||
|
||||
Running updates in terminal may throw this partial upgrade error:
|
||||
|
||||
![][23]
|
||||
|
||||
```
|
||||
Not all updates can be installed
|
||||
Run a partial upgrade, to install as many updates as possible
|
||||
```
|
||||
|
||||
Run the following command in terminal to fix this error:
|
||||
|
||||
```
|
||||
sudo apt-get install -f
|
||||
```
|
||||
|
||||
### Error 8: Could not get lock /var/cache/apt/archives/lock
|
||||
|
||||
This error happens when another program is using APT. Suppose you are installing something from Ubuntu Software Center and, at the same time, trying to run apt in the terminal.
|
||||
|
||||
```
|
||||
E: Could not get lock /var/cache/apt/archives/lock - open (11: Resource temporarily unavailable)
|
||||
E: Unable to lock directory /var/cache/apt/archives/
|
||||
```
|
||||
|
||||
Check whether some other program might be using apt. It could be a command running in a terminal, Software Center, Software Updater, Software & Updates, or any other software that deals with installing and removing applications.
|
||||
|
||||
If you can close other such programs, close them. If there is a process in progress, wait for it to finish.
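A quick way to check whether an apt or dpkg process is still running is to look at the process list; a sketch:

```
ps aux | grep -E 'apt|dpkg' | grep -v grep
```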
|
||||
|
||||
If you cannot find any such programs, use the following [command to kill all such running processes][24]:
|
||||
|
||||
```
|
||||
sudo killall apt apt-get
|
||||
```
|
||||
|
||||
This is a tricky problem and if the problem still persists, please read this detailed tutorial on [fixing the unable to lock the administration directory error in Ubuntu][25].
|
||||
|
||||
_**Any other update error you encountered?**_
|
||||
|
||||
That compiles the list of frequent Ubuntu update errors you may encounter. I hope this helps you to get rid of these errors.
|
||||
|
||||
Have you encountered any other update error in Ubuntu recently that hasn’t been covered here? Do mention it in comments and I’ll try to do a quick tutorial on it.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/ubuntu-update-error/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/update-ubuntu/
|
||||
[2]: https://ubuntu.com/
|
||||
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu-repositories.png?ssl=1
|
||||
[4]: https://itsfoss.com/ubuntu-repositories/
|
||||
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/what-is-ppa.png?ssl=1
|
||||
[6]: https://itsfoss.com/ppa-guide/
|
||||
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2013/04/Failed-to-download-repository-information-Ubuntu-13.04.png?ssl=1
|
||||
[8]: https://idioms.thefreedictionary.com/little+grey+cells
|
||||
[9]: https://en.wikipedia.org/wiki/Hercule_Poirot
|
||||
[10]: https://itsfoss.com/ubuntu-shortcuts/
|
||||
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2013/11/Ubuntu-Update-error.jpeg?ssl=1
|
||||
[12]: https://itsfoss.com/how-to-fix-problem-with-mergelist/
|
||||
[13]: https://itsfoss.com/solve-ubuntu-error-failed-to-download-repository-information-check-your-internet-connection/
|
||||
[14]: https://itsfoss.com/how-to-know-ubuntu-unity-version/
|
||||
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/05/software-updates-ubuntu-gnome.jpeg?ssl=1
|
||||
[16]: https://itsfoss.com/how-to-remove-or-delete-ppas-quick-tip/
|
||||
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/10/remove_ppa_using_software_updates_in_ubuntu.jpg?ssl=1
|
||||
[18]: https://itsfoss.com/fix-failed-download-package-files-error-ubuntu/
|
||||
[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2014/09/Ubuntu_Update_error.jpeg?ssl=1
|
||||
[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2014/09/Change_server_Ubuntu.jpeg?ssl=1
|
||||
[21]: https://itsfoss.com/solve-gpg-error-signatures-verified-ubuntu/
|
||||
[22]: https://itsfoss.com/solve-badsig-error-quick-tip/
|
||||
[23]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2013/09/Partial_Upgrade_error_Elementary_OS_Luna.png?ssl=1
|
||||
[24]: https://itsfoss.com/how-to-find-the-process-id-of-a-program-and-kill-it-quick-tip/
|
||||
[25]: https://itsfoss.com/could-not-get-lock-error/
|
@ -0,0 +1,108 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How collaboration fueled a development breakthrough at Greenpeace)
|
||||
[#]: via: (https://opensource.com/open-organization/19/10/collaboration-breakthrough-greenpeace)
|
||||
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)
|
||||
|
||||
How collaboration fueled a development breakthrough at Greenpeace
|
||||
======
|
||||
We're building an innovative platform to connect environmental advocates—but system complexity threatened to slow us down. Opening up was the answer.
|
||||
![The Open Organization at Greenpeace][1]
|
||||
|
||||
Activists really don't like feeling stuck.
|
||||
|
||||
We thrive on forward momentum and the energy it creates. When that movement grinds to a halt, even for a moment, our ability to catalyze passion in others stalls too.
|
||||
|
||||
And my colleagues and I at Greenpeace International were feeling stuck.
|
||||
|
||||
We'd managed to launch a prototype of Planet 4, [Greenpeace's new, open engagement platform][2] for activists and communities. It's live in more than 38 countries (with many more sites). More than 1.75 million people are using it. We've topped more than 3.1 million pageviews.
|
||||
|
||||
To get here, we [spent more than 650 hours in meetings, drank 1,478 litres of coffee, and fixed more than 300 bugs][3]. But it fell short of our vision; it _still_ wasn't [the minimum lovable product][4] we wanted and we didn't know how to move it forward.
|
||||
|
||||
We were stuck.
|
||||
|
||||
Planet 4's complexity was daunting. We didn't always have the right people to address the numerous challenges the project raised. We didn't know if we'd ever realize our vision. Yet a commitment to openness had gotten us here, and I knew a commitment to openness would get us through this, too.
|
||||
|
||||
As [the story of Planet 4][5] continues, I'll explain how it did.
|
||||
|
||||
### An opportunity
|
||||
|
||||
By 2016, my work helping Greenpeace International become a more open organization—[which I described in the first part of this series][6]—was beginning to bear fruit. We were holding regular [community calls][7]. We were releasing project updates frequently and publicly. We were networking with global stakeholders across the organization to define what Planet 4 needed to be. We were [architecting the project with participation in mind][8].
|
||||
|
||||
Becoming open is an organic process. There's no standard "game plan" for implementing processes and practices in an organization. Success depends on the people, the tools, the project, the very fabric of the culture you're working inside.
|
||||
|
||||
Inside Greenpeace, we were beginning to see that success.
|
||||
|
||||
A commitment to openness had gotten us here, and I knew a commitment to openness would get us through this, too.
|
||||
|
||||
For some, this open way of working was inspiring and engaging. For others it was terrifying. Some thought asking for everyone's input was ridiculous. Some thought only "experts" should be part of the conversations, a viewpoint that doesn't mesh well with [the principle of inclusivity][9]. I appreciate expertise—don't get me wrong—but the problem with only asking for "expert" opinions is that you exclude people who might have more interest, passion, and knowledge than someone with a formal title.
|
||||
|
||||
Planet 4 was a vision—not just of a new and open engagement platform, but of an organization that could make _use_ of this platform. And it raised problems on both those fronts:
|
||||
|
||||
* **Data and systems integration:** As a network of 28 independent offices all over the world, Greenpeace has a complex technical landscape. While Greenpeace International provides system _recommendations_ and _support_, individual National and Regional Offices are free to make their own systems choices, even if they aren't the supported ones. This is a good thing; different tools better address different needs for different offices. But it's challenging, too, because the absence of standardization means a lack of expertise in all those systems.
|
||||
* **Organizational culture and work styles:** Planet 4 devoured many of Greenpeace's internal strategies and visions, then spit them out into a way that promised to move toward the type of organization we wanted to be. It was challenging the organizational status quo.
|
||||
|
||||
|
||||
|
||||
Our team was too small, our work too big, and the landscape of working in a global non-profit too complex. The team was struggling, and we needed help.
|
||||
|
||||
Then, in 2018, I saw an opportunity.
|
||||
|
||||
As an [Open Organization Ambassador][10], I'd been to Red Hat Summit to speak on a panel about open organizational principles. There I noticed a session exploring what [Red Hat had done to help UNICEF][11], another global non-profit, with its digital transformation efforts. Surely, I thought, Red Hat and Greenpeace could work together, too.
|
||||
|
||||
So I did something that shouldn't seem so revolutionary or audacious: I found the Red Hatter responsible for the company's collaboration with UNICEF, Alexandra Machado, and I _said hello_. I wasn't just introducing myself; I was approaching Alexandra on behalf of a global community of open-minded advocates.
|
||||
|
||||
And it worked.
|
||||
|
||||
### Accelerating
|
||||
|
||||
Together, Alexandra and I spent more than a year coordinating a collaboration that could help Greenpeace move forward. Earlier this year, we started to succeed.
|
||||
|
||||
Planet 4 was a vision—not just of a new and open engagement platform, but of an organization that could make use of this platform. And it raised problems on both those fronts.
|
||||
|
||||
In late May, members of the Planet 4 project and a team from Red Hat's App Dev Center of Excellence met in Amsterdam. The goal: Accelerate us.
|
||||
|
||||
We'd spend an entire week together in a design sprint aimed at helping us chart a speedy path toward making our vision for the Planet 4 engagement platform a reality, beginning with navigating its technical complexity. And in the process, we'd lean heavily on the open way of working we'd learned to embrace.
|
||||
|
||||
At the sprint, our teams got to know each other. We dumped everything on the table. In a radically open and honest way, the Greenpeace team helped the Red Hat team from Waterford understand the technical and cultural hurdles we faced. We explained our organization and our tech stack, our vision and our dreams. Red Hatters noticed our passion and worked alongside us to explore possible technologies that could make our vision a reality.
|
||||
|
||||
Through a series of exercises—including a particularly helpful session of [event storming][12]—we confirmed that our dream was not only the right one to have but also fully realizable. We talked through the dynamics of the systems we are addressing, and, in the end, the Red Hat team helped us envision a prototype for integrated systems that the Greenpeace team could take forward. We've already begun user testing.
|
||||
|
||||
_Listen to Patrick Carney of Red Hat Open Innovation Labs explain event storming._
|
||||
|
||||
On top of that, our new allies wrote a technical report that laid out the complexities we could _see_ but not _address_—and in a way that spurred internal conversations forward. We found ourselves, a few weeks after the event, moving forward at speed.
|
||||
|
||||
Finally, we were unstuck.
|
||||
|
||||
In the final chapter of Planet 4's story, I'll explain what the experience taught us about the power of openness.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/19/10/collaboration-breakthrough-greenpeace
|
||||
|
||||
作者:[Laura Hilliger][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/laurahilliger
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/images/open-org/open-org-greenpeace-article-2-blog-thumbnail-520x292.png?itok=YNEKRAxS (The Open Organization at Greenpeace)
|
||||
[2]: http://greenpeace.org/international
|
||||
[3]: https://medium.com/planet4/p4-in-2018-3bec1cc12be8
|
||||
[4]: https://medium.com/planet4/past-the-prototype-d3e0a4d3a171
|
||||
[5]: https://opensource.com/tags/open-organization-greenpeace
|
||||
[6]: https://opensource.com/open-organization/19/10/open-platform-greenpeace-1
|
||||
[7]: https://opensource.com/open-organization/16/1/community-calls-will-increase-participation-your-open-organization
|
||||
[8]: https://opensource.com/open-organization/16/8/best-results-design-participation
|
||||
[9]: https://opensource.com/open-organization/resources/open-org-definition
|
||||
[10]: https://opensource.com/open-organization/resources/meet-ambassadors
|
||||
[11]: https://www.redhat.com/en/proof-of-concept-series
|
||||
[12]: https://openpracticelibrary.com/practice/event-storming/
|
@ -0,0 +1,192 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lnrCoder)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Get the Size of a Directory in Linux)
|
||||
[#]: via: (https://www.2daygeek.com/find-get-size-of-directory-folder-linux-disk-usage-du-command/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How to Get the Size of a Directory in Linux
|
||||
======
|
||||
|
||||
You may have noticed that the size of a directory shows as only 4KB when you use the **[ls command][1]** to list a directory's contents in Linux.
|
||||
|
||||
Is this the right size? If not, what is it, and how do you get the size of a directory or folder in Linux?
|
||||
|
||||
This 4KB is the default allocation for storing the directory's metadata on the disk; it is not the size of the directory's contents.
|
||||
|
||||
There are some applications on Linux to **[get the actual size of a directory][2]**.
|
||||
|
||||
But the disk usage (du) command is widely used by the Linux administrator.
|
||||
|
||||
I will show you how to get the size of a folder using various options.
|
||||
|
||||
### What’s du Command?
|
||||
|
||||
The **[du command][3]** stands for `Disk Usage`. It's a standard Unix program used to estimate file space usage, starting from the present working directory.
|
||||
|
||||
It summarizes disk usage recursively to report the size of a directory and its sub-directories.
|
||||
|
||||
As I said, the directory size only shows 4KB when you use the ls command. See the below output.
|
||||
|
||||
```
|
||||
$ ls -lh | grep ^d
|
||||
|
||||
drwxr-xr-x 3 daygeek daygeek 4.0K Aug 2 13:57 Bank_Details
|
||||
drwxr-xr-x 2 daygeek daygeek 4.0K Mar 15 2019 daygeek
|
||||
drwxr-xr-x 6 daygeek daygeek 4.0K Feb 16 2019 drive-2daygeek
|
||||
drwxr-xr-x 13 daygeek daygeek 4.0K Jan 6 2019 drive-mageshm
|
||||
drwxr-xr-x 15 daygeek daygeek 4.0K Sep 29 21:32 Thanu_Photos
|
||||
```
|
||||
|
||||
### 1) How to Check Only the Size of the Parent Directory on Linux
|
||||
|
||||
Use the below du command format to get the total size of a given directory. In this example, we are going to get the total size of the **“/home/daygeek/Documents”** directory.
|
||||
|
||||
```
|
||||
$ du -hs /home/daygeek/Documents
|
||||
or
|
||||
$ du -h --max-depth=0 /home/daygeek/Documents/
|
||||
|
||||
20G /home/daygeek/Documents
|
||||
```
|
||||
|
||||
**Details**:
|
||||
|
||||
* du – the command itself
|
||||
* h – Print sizes in human readable format (e.g., 1K 234M 2G)
|
||||
* s – Display only a total for each argument
|
||||
* --max-depth=N – print the total for a directory only if it is N or fewer levels below the starting point (see the sketch after this list)
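For example, to see totals only one level deep (your directory names and sizes will differ):

```
$ du -h --max-depth=1 /home/daygeek/Documents/ | sort -rh

20G /home/daygeek/Documents/
9.6G /home/daygeek/Documents/drive-2daygeek
6.3G /home/daygeek/Documents/Thanu_Photos
3.2G /home/daygeek/Documents/drive-mageshm
756K /home/daygeek/Documents/Bank_Details
```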
|
||||
|
||||
|
||||
|
||||
### 2) How to Get the Size of Each Directory on Linux
|
||||
|
||||
Use the below du command format to get the total size of each directory, including sub-directories.
|
||||
|
||||
In this example, we are going to get the total size of the **"/home/daygeek/Documents"** directory and each of its sub-directories.
|
||||
|
||||
```
|
||||
$ du -h /home/daygeek/Documents/ | sort -rh | head -20
|
||||
|
||||
20G /home/daygeek/Documents/
|
||||
9.6G /home/daygeek/Documents/drive-2daygeek
|
||||
6.3G /home/daygeek/Documents/Thanu_Photos
|
||||
5.3G /home/daygeek/Documents/Thanu_Photos/Camera
|
||||
5.3G /home/daygeek/Documents/drive-2daygeek/Thanu-videos
|
||||
3.2G /home/daygeek/Documents/drive-mageshm
|
||||
2.3G /home/daygeek/Documents/drive-2daygeek/Thanu-Photos
|
||||
2.2G /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month
|
||||
916M /home/daygeek/Documents/drive-mageshm/Tanisha
|
||||
454M /home/daygeek/Documents/drive-mageshm/2g-backup
|
||||
415M /home/daygeek/Documents/Thanu_Photos/WhatsApp Video
|
||||
300M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Jan-2017
|
||||
288M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Oct-2017
|
||||
226M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Sep-2017
|
||||
219M /home/daygeek/Documents/Thanu_Photos/WhatsApp Documents
|
||||
213M /home/daygeek/Documents/drive-mageshm/photos
|
||||
163M /home/daygeek/Documents/Thanu_Photos/WhatsApp Video/Sent
|
||||
161M /home/daygeek/Documents/Thanu_Photos/WhatsApp Images
|
||||
154M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/June-2017
|
||||
150M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2016
|
||||
```
|
||||
|
||||
### 3) How to Get a Summary of Each Directory on Linux
|
||||
|
||||
Use the below du command format to get only the summary for each directory.
|
||||
|
||||
```
|
||||
$ du -hs /home/daygeek/Documents/* | sort -rh | head -10
|
||||
|
||||
9.6G /home/daygeek/Documents/drive-2daygeek
|
||||
6.3G /home/daygeek/Documents/Thanu_Photos
|
||||
3.2G /home/daygeek/Documents/drive-mageshm
|
||||
756K /home/daygeek/Documents/Bank_Details
|
||||
272K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-TouchInterface1.png
|
||||
172K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-NightLight.png
|
||||
164K /home/daygeek/Documents/ConfigServer Security and Firewall (csf) Cheat Sheet.pdf
|
||||
132K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-Todo.png
|
||||
112K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-ZorinAutoTheme.png
|
||||
96K /home/daygeek/Documents/distro-info.xlsx
|
||||
```
|
||||
|
||||
### 4) How to Display the Size of Each Directory and Exclude Sub-Directories on Linux
|
||||
|
||||
Use the below du command format to display the total size of each directory, excluding the space consumed by its subdirectories.
|
||||
|
||||
```
|
||||
$ du -hS /home/daygeek/Documents/ | sort -rh | head -20
|
||||
|
||||
5.3G /home/daygeek/Documents/Thanu_Photos/Camera
|
||||
5.3G /home/daygeek/Documents/drive-2daygeek/Thanu-videos
|
||||
2.3G /home/daygeek/Documents/drive-2daygeek/Thanu-Photos
|
||||
1.5G /home/daygeek/Documents/drive-mageshm
|
||||
831M /home/daygeek/Documents/drive-mageshm/Tanisha
|
||||
454M /home/daygeek/Documents/drive-mageshm/2g-backup
|
||||
300M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Jan-2017
|
||||
288M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Oct-2017
|
||||
253M /home/daygeek/Documents/Thanu_Photos/WhatsApp Video
|
||||
226M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Sep-2017
|
||||
219M /home/daygeek/Documents/Thanu_Photos/WhatsApp Documents
|
||||
213M /home/daygeek/Documents/drive-mageshm/photos
|
||||
163M /home/daygeek/Documents/Thanu_Photos/WhatsApp Video/Sent
|
||||
154M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/June-2017
|
||||
150M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2016
|
||||
127M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Dec-2016
|
||||
100M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Oct-2016
|
||||
94M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Nov-2017
|
||||
92M /home/daygeek/Documents/Thanu_Photos/WhatsApp Images
|
||||
90M /home/daygeek/Documents/drive-2daygeek/Thanu-photos-by-month/Dec-2017
|
||||
```
|
||||
|
||||
### 5) How to Get Only the Size of First-Level Sub-Directories on Linux
|
||||
|
||||
If you want to get the size of the first-level sub-directories, including their subdirectories, for a given directory on Linux, use the command format below.
|
||||
|
||||
```
|
||||
$ du -h --max-depth=1 /home/daygeek/Documents/
|
||||
|
||||
3.2G /home/daygeek/Documents/drive-mageshm
|
||||
4.0K /home/daygeek/Documents/daygeek
|
||||
756K /home/daygeek/Documents/Bank_Details
|
||||
9.6G /home/daygeek/Documents/drive-2daygeek
|
||||
6.3G /home/daygeek/Documents/Thanu_Photos
|
||||
20G /home/daygeek/Documents/
|
||||
```
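If you prefer the largest entries first, you can pipe this output through **sort**, as elsewhere in this article (a small sketch; the sizes below are simply the listing above, reordered):

```
$ du -h --max-depth=1 /home/daygeek/Documents/ | sort -rh

20G     /home/daygeek/Documents/
9.6G    /home/daygeek/Documents/drive-2daygeek
6.3G    /home/daygeek/Documents/Thanu_Photos
3.2G    /home/daygeek/Documents/drive-mageshm
756K    /home/daygeek/Documents/Bank_Details
4.0K    /home/daygeek/Documents/daygeek
```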
|
||||
|
||||
### 6) How to Get Grand Total in the du Command Output
|
||||
|
||||
If you want to get a grand total in the du command output, use the below du command format.
|
||||
|
||||
```
|
||||
$ du -hsc /home/daygeek/Documents/* | sort -rh | head -10
|
||||
|
||||
20G total
|
||||
9.6G /home/daygeek/Documents/drive-2daygeek
|
||||
6.3G /home/daygeek/Documents/Thanu_Photos
|
||||
3.2G /home/daygeek/Documents/drive-mageshm
|
||||
756K /home/daygeek/Documents/Bank_Details
|
||||
272K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-TouchInterface1.png
|
||||
172K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-NightLight.png
|
||||
164K /home/daygeek/Documents/ConfigServer Security and Firewall (csf) Cheat Sheet.pdf
|
||||
132K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-Todo.png
|
||||
112K /home/daygeek/Documents/user-friendly-zorin-os-15-has-been-released-ZorinAutoTheme.png
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/find-get-size-of-directory-folder-linux-disk-usage-du-command/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lnrCoder](https://github.com/lnrCoder)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/linux-unix-ls-command-display-directory-contents/
|
||||
[2]: https://www.2daygeek.com/how-to-get-find-size-of-directory-folder-linux/
|
||||
[3]: https://www.2daygeek.com/linux-check-disk-usage-files-directories-size-du-command/
|
@ -0,0 +1,227 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Go About Linux Boot Time Optimisation)
|
||||
[#]: via: (https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/)
|
||||
[#]: author: (B Thangaraju https://opensourceforu.com/author/b-thangaraju/)
|
||||
|
||||
How to Go About Linux Boot Time Optimisation
|
||||
======
|
||||
|
||||
[![][1]][2]
|
||||
|
||||
_Booting an embedded device or a piece of telecommunication equipment quickly is crucial for time-critical applications and also plays a very major role in improving the user experience. This article gives some important tips on how to enhance the boot-up time of any device._
|
||||
|
||||
Fast booting or fast rebooting plays a crucial role in various situations. It is critical for an embedded system to boot up fast in order to maintain the high availability and better performance of all the services. Imagine a telecommunications device running a Linux operating system that does not have fast booting enabled. All the systems, services and the users dependent on that particular embedded device might be affected. It is really important that devices maintain high availability in their services, for which fast booting and rebooting play a crucial role.
|
||||
|
||||
A small failure or shutdown of a telecom device, even for a few seconds, can play havoc with countless users working on the Internet. Thus, it is really important for a lot of time-dependent devices and telecommunication devices to incorporate fast booting in their devices to help them get back to work quicker. Let us understand the Linux boot-up procedure from Figure 1.
|
||||
|
||||
![Figure 1: Boot-up procedure][3]
|
||||
|
||||
![Figure 2: Boot chart][4]
|
||||
|
||||
**Monitoring tools and the boot-up procedure**
|
||||
A user should take note of a number of factors before making changes to a machine. These include the current booting speed of the machine and also the services, processes or applications that are taking up resources and increasing the boot-up time.
|
||||
|
||||
**Boot chart:** To monitor the boot-up speed and the various services that start while booting up, the user can install the boot chart using the following command:
|
||||
|
||||
```
|
||||
sudo apt-get install pybootchartgui
|
||||
```
|
||||
|
||||
Each time you boot up, the boot chart saves a _.png_ (portable network graphics) file in the log directory, which enables the user to view the _png_ files to get an understanding of the system’s boot-up process and services. Use the following command for this purpose:
|
||||
|
||||
```
|
||||
cd /var/log/bootchart
|
||||
```
|
||||
|
||||
The user might need an application to view the _.png_ files. Feh is an X11 image viewer that targets console users. It doesn’t have a fancy GUI, unlike most other image viewers, but it simply displays pictures. Feh can be used to view the _.png_ files. You can install it using the following command:
|
||||
|
||||
```
|
||||
sudo apt-get install feh
|
||||
```
|
||||
|
||||
You can view the _png_ files using _feh xxxx.png_.
|
||||
Figure 2 shows the boot chart when a boot chart _png_ file is viewed.
|
||||
However, a boot chart is not necessary for Ubuntu versions later than 15.10. To get very brief information regarding boot-up speed, use the following command:
|
||||
|
||||
```
|
||||
systemd-analyze
|
||||
```
|
||||
|
||||
![Figure 3: Output of systemd-analyze][5]
|
||||
|
||||
Figure 3 shows the output of the command _systemd-analyze_.
|
||||
The command _systemd-analyze blame_ prints a list of all running units, ordered by the time they took to initialise. This information is very helpful and can be used to optimise boot-up times. _systemd-analyze blame_ doesn’t display results for services with _Type=simple_, because systemd considers such services to be started immediately; hence, no measurement of the initialisation delays can be done.
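For reference, the command itself takes no arguments:

```
systemd-analyze blame
```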
|
||||
|
||||
![Figure 4: Output of systemd-analyze blame][6]
|
||||
|
||||
Figure 4 shows the output of _systemd-analyze blame_.
|
||||
The following command prints a tree of the time-critical chain of units:
|
||||
|
||||
```
|
||||
systemd-analyze critical-chain
|
||||
```
|
||||
|
||||
Figure 5 shows the output of the command _systemd-analyze critical-chain_.
|
||||
|
||||
![Figure 5: Output of systemd-analyze critical-chain][7]
|
||||
|
||||
**Steps to reduce the boot-up time**
|
||||
Shown below are the various steps that can be taken to reduce boot-up time.
|
||||
|
||||
**BUM (Boot-Up-Manager):** BUM is a run level configuration editor that allows the configuration of _init_ services when the system boots up or reboots. It displays a list of every service that can be started at boot. The user can toggle individual services on and off. BUM has a very clean GUI and is very easy to use.
|
||||
|
||||
BUM can be installed in Ubuntu 14.04 using the following command:
|
||||
|
||||
```
|
||||
sudo apt-get install bum
|
||||
```
|
||||
|
||||
To install it in versions later than 15.10, download the packages from <http://apt.ubuntu.com/p/bum>.
|
||||
|
||||
Start with basic things and disable services related to the scanner and printer. You can also disable Bluetooth and all other unwanted devices and services if you are not using any of them. I strongly recommend that you study the basics about the services before disabling them, as it might affect the machine or operating system. Figure 6 shows the GUI of BUM.
|
||||
|
||||
![Figure 6: BUM][8]
|
||||
|
||||
**Editing the rc file:** To edit the rc file, you need to go to the rc directory. This can be done using the following command:
|
||||
|
||||
```
|
||||
cd /etc/init.d
|
||||
```
|
||||
|
||||
However, root privileges are needed to access _init.d_, which basically contains start/stop scripts that are used to control (start, stop, reload, restart) the daemon while the system is running or during boot.
|
||||
|
||||
The _rc_ file in _init.d_ is called a run control script. During booting, init executes the _rc_ script and plays its role. To improve the booting speed, we make changes to the _rc_ file. Open the _rc_ file (once you are in the _init.d_ directory) using any file editor.
|
||||
|
||||
For example, by entering _vim rc_, you can change the value of _CONCURRENCY=none_ to _CONCURRENCY=shell_. The latter allows certain startup scripts to be executed simultaneously, rather than serially.
|
||||
|
||||
In the latest versions of the kernel, the value should be changed to _CONCURRENCY=makefile_.
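As a rough sketch (assuming the classic sysvinit layout described above, where the _rc_ file lives in _/etc/init.d_), the same change can be made non-interactively:

```
sudo sed -i 's/^CONCURRENCY=none/CONCURRENCY=makefile/' /etc/init.d/rc
```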
|
||||
Figures 7 and 8 show the comparison of boot-up times before and after editing the rc file. The improvement in the boot-up speed can be noticed. The time to boot before editing the rc file was 50.98 seconds, whereas the time to boot after making the changes to the rc file is 23.85 seconds.
|
||||
However, the above-mentioned changes do not work on Ubuntu versions later than 15.10, since releases with the latest kernels use systemd and no longer use the _init.d_ scripts.
|
||||
|
||||
![Figure 7: Boot speed before making changes to the rc file][9]
|
||||
|
||||
![Figure 8: Boot speed after making changes to the rc file][10]
|
||||
|
||||
**E4rat:** E4rat stands for e4 ‘reduced access time’ (ext4 file system only). It is a project developed by Andreas Rid and Gundolf Kiefer. E4rat is an application that helps in achieving a fast boot with the help of defragmentation. It also accelerates application startups. E4rat eliminates both seek times and rotational delays using physical file reallocation. This leads to a high disk transfer rate.
|
||||
E4rat is available as a .deb package and you can download it from its official website _<http://e4rat.sourceforge.net/>_.
|
||||
|
||||
Ubuntu’s default ureadahead package conflicts with e4rat, so it has to be purged first using the following command:
|
||||
|
||||
```
|
||||
sudo dpkg --purge ureadahead ubuntu-minimal
|
||||
```
|
||||
|
||||
Now install the dependencies for e4rat using the following command:
|
||||
|
||||
```
|
||||
sudo apt-get install libblkid1 e2fslibs
|
||||
```
|
||||
|
||||
Open the downloaded _.deb_ file and install it. Boot data now needs to be gathered for e4rat to work properly.
|
||||
|
||||
Follow the steps given below to get e4rat running properly and to increase the boot-up speed.
|
||||
|
||||
  * Access the Grub menu while booting. This can be done by holding down the Shift key while the system is booting.
|
||||
* Choose the option (kernel version) that is normally used to boot and press ‘e’.
|
||||
* Look for the line starting with _linux /boot/vmlinuz_ and add the following code at the end of the line (hit space after the last letter of the sentence):
|
||||
|
||||
|
||||
|
||||
```
|
||||
init=/sbin/e4rat-collect
# or try: quiet splash vt.handoff=7 init=/sbin/e4rat-collect
|
||||
```
|
||||
|
||||
* Now press _Ctrl+x_ to continue booting. This lets e4rat collect data after booting. Work on the machine, open and close applications for the next two minutes.
|
||||
* Access the log file by going to the e4rat folder and using the following command:
|
||||
|
||||
|
||||
|
||||
```
|
||||
cd /var/log/e4rat
|
||||
```
|
||||
|
||||
* If you do not find any log file, repeat the above mentioned process. Once the log file is there, access the Grub menu again and press ‘e’ as your option.
|
||||
* Enter ‘single’ at the end of the same line that you have edited before. This will help you access the command line. If a different menu appears asking for anything, choose Resume normal boot. If you don’t get to the command prompt for some reason, hit Ctrl+Alt+F1.
|
||||
* Enter your details once you see the login prompt.
|
||||
* Now enter the following command:
|
||||
|
||||
|
||||
|
||||
```
|
||||
sudo e4rat-realloc /var/lib/e4rat/startup.log
|
||||
```
|
||||
|
||||
This process takes a while, depending on the machine’s disk speed.
|
||||
|
||||
* Now restart your machine using the following command:
|
||||
|
||||
|
||||
|
||||
```
|
||||
sudo shutdown -r now
|
||||
```
|
||||
|
||||
* Now, we need to configure Grub to run e4rat at every boot.
|
||||
  * Access the grub file using any editor, for example: _gksu gedit /etc/default/grub_.
|
||||
  * Look for the line starting with `GRUB_CMDLINE_LINUX_DEFAULT=`, and add the following in between the quotes, before whatever options are already there:
|
||||
|
||||
|
||||
|
||||
```
|
||||
init=/sbin/e4rat-preload
|
||||
```
|
||||
|
||||
* It should look like this:
|
||||
|
||||
|
||||
|
||||
```
|
||||
GRUB_CMDLINE_LINUX_DEFAULT="init=/sbin/e4rat-preload quiet splash"
|
||||
```
|
||||
|
||||
* Save and close the Grub menu and update Grub using _sudo update-grub_.
|
||||
* Reboot the system and you will find noticeable changes in boot speed.
|
||||
|
||||
|
||||
|
||||
Figures 9 and 10 show the difference in boot-up time before and after installing e4rat. The improvement in boot-up speed is noticeable: the time taken to boot before using e4rat was 22.32 seconds, whereas the time taken after using e4rat is 9.065 seconds.
|
||||
|
||||
![Figure 9: Boot speed before using e4rat][11]
|
||||
|
||||
![Figure 10: Boot speed after using e4rat][12]
|
||||
|
||||
**A few simple tweaks**
|
||||
A good boot-up speed can also be achieved using very small tweaks, two of which are listed below.
|
||||
**SSD:** Using solid-state devices rather than normal hard disks or other storage devices will surely improve your booting speed. SSDs also help in achieving great speeds in transferring files and running applications.
|
||||
|
||||
**Disabling GUI:** The graphical user interface, desktop graphics and window animations take up a lot of resources. Disabling the GUI is another good way to achieve great boot-up speed.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensourceforu.com/2019/10/how-to-go-about-linux-boot-time-optimisation/
|
||||
|
||||
作者:[B Thangaraju][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensourceforu.com/author/b-thangaraju/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?resize=696%2C496&ssl=1 (Screenshot from 2019-10-07 13-16-32)
|
||||
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Screenshot-from-2019-10-07-13-16-32.png?fit=700%2C499&ssl=1
|
||||
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-1.png?resize=350%2C302&ssl=1
|
||||
[4]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-2.png?resize=350%2C412&ssl=1
|
||||
[5]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-3.png?resize=350%2C69&ssl=1
|
||||
[6]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-4.png?resize=350%2C535&ssl=1
|
||||
[7]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-5.png?resize=350%2C206&ssl=1
|
||||
[8]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-6.png?resize=350%2C449&ssl=1
|
||||
[9]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-7.png?resize=350%2C85&ssl=1
|
||||
[10]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-8.png?resize=350%2C72&ssl=1
|
||||
[11]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-9.png?resize=350%2C61&ssl=1
|
||||
[12]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/fig-10.png?resize=350%2C61&ssl=1
|
@ -0,0 +1,498 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to program with Bash: Logical operators and shell expansions)
|
||||
[#]: via: (https://opensource.com/article/19/10/programming-bash-part-2)
|
||||
[#]: author: (David Both https://opensource.com/users/dboth)
|
||||
|
||||
How to program with Bash: Logical operators and shell expansions
|
||||
======
|
||||
Learn about logical operators and shell expansions, in the second article in this three-part series on programming with Bash.
|
||||
![Women in computing and open source v5][1]
|
||||
|
||||
Bash is a powerful programming language, one perfectly designed for use on the command line and in shell scripts. This three-part series (which is based on my [three-volume Linux self-study course][2]) explores using Bash as a programming language on the command-line interface (CLI).
|
||||
|
||||
The [first article][3] explored some simple command-line programming with Bash, including using variables and control operators. This second article looks into the types of file, string, numeric, and miscellaneous logical operators that provide execution-flow control logic and different types of shell expansions in Bash. The third and final article in the series will explore the **for**, **while**, and **until** loops that enable repetitive operations.
|
||||
|
||||
Logical operators are the basis for making decisions in a program and executing different sets of instructions based on those decisions. This is sometimes called flow control.
|
||||
|
||||
### Logical operators
|
||||
|
||||
Bash has a large set of logical operators that can be used in conditional expressions. The most basic form of the **if** control structure tests for a condition and then executes a list of program statements if the condition is true. There are three types of operators: file, numeric, and non-numeric operators. Each operator returns true (0) if the condition is met and false (1) if the condition is not met.
|
||||
|
||||
The functional syntax of these comparison operators is one or two arguments with an operator that are placed within square braces, followed by a list of program statements that are executed if the condition is true, and an optional list of program statements if the condition is false:
|
||||
|
||||
|
||||
```
|
||||
if [ arg1 operator arg2 ] ; then list
|
||||
or
|
||||
if [ arg1 operator arg2 ] ; then list ; else list ; fi
|
||||
```
|
||||
|
||||
The spaces in the comparison are required as shown. The single square braces, **[** and **]**, are the traditional Bash symbols that are equivalent to the **test** command:
|
||||
|
||||
|
||||
```
|
||||
if test arg1 operator arg2 ; then list
|
||||
```
|
||||
|
||||
There is also a more recent syntax that offers a few advantages and that some sysadmins prefer. This format is a bit less compatible with different versions of Bash and other shells, such as ksh (the Korn shell). It looks like:
|
||||
|
||||
|
||||
```
|
||||
if [[ arg1 operator arg2 ]] ; then list
|
||||
```
|
||||
|
||||
#### File operators
|
||||
|
||||
File operators are a powerful set of logical operators within Bash. Figure 1 lists more than 20 different operators that Bash can perform on files. I use them quite frequently in my scripts.
|
||||
|
||||
Operator | Description
|
||||
---|---
|
||||
-a filename | True if the file exists; it can be empty or have some content but, so long as it exists, this will be true
|
||||
-b filename | True if the file exists and is a block special file such as a hard drive like **/dev/sda** or **/dev/sda1**
|
||||
-c filename | True if the file exists and is a character special file such as a TTY device like **/dev/TTY1**
|
||||
-d filename | True if the file exists and is a directory
|
||||
-e filename | True if the file exists; this is the same as **-a** above
|
||||
-f filename | True if the file exists and is a regular file, as opposed to a directory, a device special file, or a link, among others
|
||||
-g filename | True if the file exists and is **set-group-id**, **SETGID**
|
||||
-h filename | True if the file exists and is a symbolic link
|
||||
-k filename | True if the file exists and its "sticky" bit is set
|
||||
-p filename | True if the file exists and is a named pipe (FIFO)
|
||||
-r filename | True if the file exists and is readable, i.e., has its read bit set
|
||||
-s filename | True if the file exists and has a size greater than zero; a file that exists but that has a size of zero will return false
|
||||
-t fd | True if the file descriptor **fd** is open and refers to a terminal
|
||||
-u filename | True if the file exists and its **set-user-id** bit is set
|
||||
-w filename | True if the file exists and is writable
|
||||
-x filename | True if the file exists and is executable
|
||||
-G filename | True if the file exists and is owned by the effective group ID
|
||||
-L filename | True if the file exists and is a symbolic link
|
||||
-N filename | True if the file exists and has been modified since it was last read
|
||||
-O filename | True if the file exists and is owned by the effective user ID
|
||||
-S filename | True if the file exists and is a socket
|
||||
file1 -ef file2 | True if file1 and file2 refer to the same device and inode numbers
|
||||
file1 -nt file2 | True if file1 is newer (according to modification date) than file2, or if file1 exists and file2 does not
|
||||
file1 -ot file2 | True if file1 is older than file2, or if file2 exists and file1 does not
|
||||
|
||||
_**Fig. 1: The Bash file operators**_
|
||||
|
||||
As an example, start by testing for the existence of a file:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ File="TestFile1" ; if [ -e $File ] ; then echo "The file $File exists." ; else echo "The file $File does not exist." ; fi
|
||||
The file TestFile1 does not exist.
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
Next, create a file for testing named **TestFile1**. For now, it does not need to contain any data:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ touch TestFile1
|
||||
```
|
||||
|
||||
Because the file name is stored in the **$File** variable rather than typed as a literal string in multiple locations, it is easy to change in this short CLI program:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ File="TestFile1" ; if [ -e $File ] ; then echo "The file $File exists." ; else echo "The file $File does not exist." ; fi
|
||||
The file TestFile1 exists.
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
Now, run a test to determine whether a file exists and has a non-zero length, which means it contains data. You want to test for three conditions: 1. the file does not exist; 2. the file exists and is empty; and 3. the file exists and contains data. Therefore, you need a more complex set of tests—use the **elif** stanza in the **if-elif-else** construct to test for all of the conditions:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ File="TestFile1" ; if [ -s $File ] ; then echo "$File exists and contains data." ; fi
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
In this case, the file exists but does not contain any data. Add some data and try again:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ File="TestFile1" ; echo "This is file $File" > $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; fi
|
||||
TestFile1 exists and contains data.
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
That works, but it is only truly accurate for one specific condition out of the three possible ones. Add an **else** stanza so you can be somewhat more accurate, and delete the file so you can fully test this new code:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ File="TestFile1" ; rm $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; else echo "$File does not exist or is empty." ; fi
|
||||
TestFile1 does not exist or is empty.
|
||||
```
|
||||
|
||||
Now create an empty file to test:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ File="TestFile1" ; touch $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; else echo "$File does not exist or is empty." ; fi
|
||||
TestFile1 does not exist or is empty.
|
||||
```
|
||||
|
||||
Add some content to the file and test again:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ File="TestFile1" ; echo "This is file $File" > $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; else echo "$File does not exist or is empty." ; fi
|
||||
TestFile1 exists and contains data.
|
||||
```
|
||||
|
||||
Now, add the **elif** stanza to discriminate between a file that does not exist and one that is empty:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ File="TestFile1" ; touch $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; elif [ -e $File ] ; then echo "$File exists and is empty." ; else echo "$File does not exist." ; fi
|
||||
TestFile1 exists and is empty.
|
||||
[student@studentvm1 testdir]$ File="TestFile1" ; echo "This is $File" > $File ; if [ -s $File ] ; then echo "$File exists and contains data." ; elif [ -e $File ] ; then echo "$File exists and is empty." ; else echo "$File does not exist." ; fi
|
||||
TestFile1 exists and contains data.
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
Now you have a Bash CLI program that can test for these three different conditions… but the possibilities are endless.
|
||||
|
||||
It is easier to see the logic structure of the more complex compound commands if you arrange the program statements more like you would in a script that you can save in a file. Figure 2 shows how this would look. The indents of the program statements in each stanza of the **if-elif-else** structure help to clarify the logic.
|
||||
|
||||
|
||||
```
|
||||
File="TestFile1"
|
||||
echo "This is $File" > $File
|
||||
if [ -s $File ]
|
||||
then
|
||||
echo "$File exists and contains data."
|
||||
elif [ -e $File ]
|
||||
then
|
||||
echo "$File exists and is empty."
|
||||
else
|
||||
echo "$File does not exist."
|
||||
fi
|
||||
```
|
||||
|
||||
_**Fig. 2: The command line program rewritten as it would appear in a script**_
|
||||
|
||||
Logic this complex is too lengthy for most CLI programs. Although any Linux or Bash built-in commands may be used in CLI programs, as the CLI programs get longer and more complex, it makes more sense to create a script that is stored in a file and can be executed at any time, now or in the future.
|
||||
|
||||
#### String comparison operators
|
||||
|
||||
String comparison operators enable the comparison of alphanumeric strings of characters. There are only a few of these operators, which are listed in Figure 3.
|
||||
|
||||
Operator | Description
|
||||
---|---
|
||||
-z string | True if the length of string is zero
|
||||
-n string | True if the length of string is non-zero
|
||||
string1 == string2 or string1 = string2 | True if the strings are equal; a single **=** should be used with the **test** command for POSIX conformance. When used with the **[[** command, this performs pattern matching as described above (compound commands).
|
||||
string1 != string2 | True if the strings are not equal
|
||||
string1 < string2 | True if string1 sorts before string2 lexicographically (refers to locale-specific sorting sequences for all alphanumeric and special characters)
|
||||
string1 > string2 | True if string1 sorts after string2 lexicographically
|
||||
|
||||
_**Fig. 3: Bash string logical operators**_
|
||||
|
||||
First, look at string length. The quotes around **$MyVar** in the comparison must be there for the comparison to work. (You should still be working in **~/testdir**.)
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ MyVar="" ; if [ -z "" ] ; then echo "MyVar is zero length." ; else echo "MyVar contains data" ; fi
|
||||
MyVar is zero length.
|
||||
[student@studentvm1 testdir]$ MyVar="Random text" ; if [ -z "" ] ; then echo "MyVar is zero length." ; else echo "MyVar contains data" ; fi
|
||||
MyVar is zero length.
|
||||
```
|
||||
|
||||
You could also do it this way:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ MyVar="Random text" ; if [ -n "$MyVar" ] ; then echo "MyVar contains data." ; else echo "MyVar is zero length" ; fi
|
||||
MyVar contains data.
|
||||
[student@studentvm1 testdir]$ MyVar="" ; if [ -n "$MyVar" ] ; then echo "MyVar contains data." ; else echo "MyVar is zero length" ; fi
|
||||
MyVar is zero length
|
||||
```
|
||||
|
||||
Sometimes you may need to know a string's exact length. This is not a comparison, but it is related. Unfortunately, there is no simple way to determine the length of a string. There are a couple of ways to do it, but I think using the **expr** (evaluate expression) command is easiest. Read the man page for **expr** for more about what it can do. Note that quotes are required around the string or variable you're testing.
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ MyVar="" ; expr length "$MyVar"
|
||||
0
|
||||
[student@studentvm1 testdir]$ MyVar="How long is this?" ; expr length "$MyVar"
|
||||
17
|
||||
[student@studentvm1 testdir]$ expr length "We can also find the length of a literal string as well as a variable."
|
||||
70
|
||||
```
|
||||
|
||||
Regarding comparison operators, I use a lot of testing in my scripts to determine whether two strings are equal (i.e., identical). I use the non-POSIX version of this comparison operator:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ Var1="Hello World" ; Var2="Hello World" ; if [ "$Var1" == "$Var2" ] ; then echo "Var1 matches Var2" ; else echo "Var1 and Var2 do not match." ; fi
|
||||
Var1 matches Var2
|
||||
[student@studentvm1 testdir]$ Var1="Hello World" ; Var2="Hello world" ; if [ "$Var1" == "$Var2" ] ; then echo "Var1 matches Var2" ; else echo "Var1 and Var2 do not match." ; fi
|
||||
Var1 and Var2 do not match.
|
||||
```
|
||||
|
||||
Experiment some more on your own to try out these operators.
|
||||
|
||||
#### Numeric comparison operators
|
||||
|
||||
Numeric operators make comparisons between two numeric arguments. Like the other operator classes, most are easy to understand.
|
||||
|
||||
Operator | Description
|
||||
---|---
|
||||
arg1 -eq arg2 | True if arg1 equals arg2
|
||||
arg1 -ne arg2 | True if arg1 is not equal to arg2
|
||||
arg1 -lt arg2 | True if arg1 is less than arg2
|
||||
arg1 -le arg2 | True if arg1 is less than or equal to arg2
|
||||
arg1 -gt arg2 | True if arg1 is greater than arg2
|
||||
arg1 -ge arg2 | True if arg1 is greater than or equal to arg2
|
||||
|
||||
_**Fig. 4: Bash numeric comparison logical operators**_
|
||||
|
||||
Here are some simple examples. The first instance sets the variable **$X** to 1, then tests to see if **$X** is equal to 1. In the second instance, **X** is set to 0, so the comparison is not true.
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ X=1 ; if [ $X -eq 1 ] ; then echo "X equals 1" ; else echo "X does not equal 1" ; fi
|
||||
X equals 1
|
||||
[student@studentvm1 testdir]$ X=0 ; if [ $X -eq 1 ] ; then echo "X equals 1" ; else echo "X does not equal 1" ; fi
|
||||
X does not equal 1
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
Try some more experiments on your own.
|
||||
|
||||
#### Miscellaneous operators
|
||||
|
||||
These miscellaneous operators show whether a shell option is set or a shell variable has a value, but it does not discover the value of the variable, just whether it has one.
|
||||
|
||||
Operator | Description
|
||||
---|---
|
||||
-o optname | True if the shell option optname is enabled (see the list of options under the description of the **-o** option to the Bash set builtin in the Bash man page)
|
||||
-v varname | True if the shell variable varname is set (has been assigned a value)
|
||||
-R varname | True if the shell variable varname is set and is a name reference
|
||||
|
||||
_**Fig. 5: Miscellaneous Bash logical operators**_
|
||||
|
||||
Experiment on your own to try out these operators.
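For instance, here is a quick sketch of the **-v** operator (note that it takes the variable’s _name_, not its value, and requires Bash 4.2 or later):

```
[student@studentvm1 testdir]$ MyVar="Hello" ; if [ -v MyVar ] ; then echo "MyVar is set" ; else echo "MyVar is not set" ; fi
MyVar is set
```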
|
||||
|
||||
### Expansions
|
||||
|
||||
Bash supports a number of types of expansions and substitutions that can be quite useful. According to the Bash man page, Bash has seven forms of expansions. This article looks at five of them: tilde expansion, arithmetic expansion, pathname expansion, brace expansion, and command substitution.
|
||||
|
||||
#### Brace expansion
|
||||
|
||||
Brace expansion is a method for generating arbitrary strings. (This tool is used below to create a large number of files for experiments with special pattern characters.) Brace expansion can be used to generate lists of arbitrary strings and insert them into a specific location within an enclosing static string or at either end of a static string. This may be hard to visualize, so it's best to just do it.
|
||||
|
||||
First, here's what a brace expansion does:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ echo {string1,string2,string3}
|
||||
string1 string2 string3
|
||||
```
|
||||
|
||||
Well, that is not very helpful, is it? But look what happens when you use it just a bit differently:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ echo "Hello "{David,Jen,Rikki,Jason}.
|
||||
Hello David. Hello Jen. Hello Rikki. Hello Jason.
|
||||
```
|
||||
|
||||
That looks like something useful—it could save a good deal of typing. Now try this:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ echo b{ed,olt,ar}s
|
||||
beds bolts bars
|
||||
```
|
||||
|
||||
I could go on, but you get the idea.
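One practical use (a small sketch; the directory name _project_ is just for illustration) is creating a set of related directories in a single command:

```
[student@studentvm1 testdir]$ mkdir -p project/{src,docs,tests} ; ls project
docs  src  tests
```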
|
||||
|
||||
#### Tilde expansion
|
||||
|
||||
Arguably, the most common expansion is the tilde (**~**) expansion. When you use this in a command like **cd ~/Documents**, the Bash shell expands it as a shortcut to the user's full home directory.
|
||||
|
||||
Use these Bash programs to observe the effects of the tilde expansion:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ echo ~
|
||||
/home/student
|
||||
[student@studentvm1 testdir]$ echo ~/Documents
|
||||
/home/student/Documents
|
||||
[student@studentvm1 testdir]$ Var1=~/Documents ; echo $Var1 ; cd $Var1
|
||||
/home/student/Documents
|
||||
[student@studentvm1 Documents]$
|
||||
```
|
||||
|
||||
#### Pathname expansion
|
||||
|
||||
Pathname expansion is a fancy term for expanding file-globbing patterns, using the characters **?** and **\***, into the full names of directories that match the pattern. File globbing refers to special pattern characters that enable significant flexibility in matching file names, directories, and other strings when performing various actions. These special pattern characters allow matching single, multiple, or specific characters in a string.
|
||||
|
||||
* **?** — Matches only one of any character in the specified location within the string
|
||||
  * **\*** — Matches zero or more of any character in the specified location within the string
|
||||
|
||||
|
||||
|
||||
This expansion is applied to matching directory names. To see how this works, ensure that **testdir** is the present working directory (PWD) and start with a plain listing (the contents of my home directory will be different from yours):
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ ls
|
||||
chapter6 cpuHog.dos dmesg1.txt Documents Music softlink1 testdir6 Videos
|
||||
chapter7 cpuHog.Linux dmesg2.txt Downloads Pictures Templates testdir
|
||||
testdir cpuHog.mac dmesg3.txt file005 Public testdir tmp
|
||||
cpuHog Desktop dmesg.txt link3 random.txt testdir1 umask.test
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
Now list the directories that start with **Do**, **testdir/Documents**, and **testdir/Downloads**:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ ls Do*
Documents:
|
||||
Directory01 file07 file15 test02 test10 test20 testfile13 TextFiles
|
||||
Directory02 file08 file16 test03 test11 testfile01 testfile14
|
||||
file01 file09 file17 test04 test12 testfile04 testfile15
|
||||
file02 file10 file18 test05 test13 testfile05 testfile16
|
||||
file03 file11 file19 test06 test14 testfile09 testfile17
|
||||
file04 file12 file20 test07 test15 testfile10 testfile18
|
||||
file05 file13 Student1.txt test08 test16 testfile11 testfile19
|
||||
file06 file14 test01 test09 test18 testfile12 testfile20
|
||||
|
||||
Downloads:
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
Well, that did not do what you wanted. It listed the contents of the directories that begin with **Do**. To list only the directories and not their contents, use the **-d** option.
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ ls -d Do*
|
||||
Documents Downloads
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
In both cases, the Bash shell expands the **Do*** pattern into the names of the two directories that match the pattern. But what if there are also files that match the pattern?
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ touch Downtown ; ls -d Do*
|
||||
Documents Downloads Downtown
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
This shows the file, too. So any files that match the pattern are also expanded to their full names.
|
||||
|
||||
#### Command substitution
|
||||
|
||||
Command substitution is a form of expansion that allows the STDOUT data stream of one command to be used as the argument of another command; for example, as a list of items to be processed in a loop. The Bash man page says: "Command substitution allows the output of a command to replace the command name." I find that to be accurate if a bit obtuse.
|
||||
|
||||
There are two forms of this substitution, **`command`** and **$(command)**. In the older form using backticks (**`**), using a backslash (**\\**) in the command retains its literal meaning. However, when it's used in the newer parenthetical form, the backslash takes on its meaning as a special character. Note also that the parenthetical form uses only single parentheses to open and close the command statement.
|
||||
|
||||
I frequently use this capability in command-line programs and scripts where the results of one command can be used as an argument for another command.
|
||||
|
||||
Start with a very simple example that uses both forms of this expansion (again, ensure that **testdir** is the PWD):
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ echo "Todays date is `date`"
|
||||
Todays date is Sun Apr 7 14:42:46 EDT 2019
|
||||
[student@studentvm1 testdir]$ echo "Todays date is $(date)"
|
||||
Todays date is Sun Apr 7 14:42:59 EDT 2019
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
The **seq** utility is used to generate a sequence of numbers. Its **-w** option adds leading zeros to the numbers generated so that they are all the same width, i.e., the same number of digits regardless of the value, which makes it easier to sort them in numeric sequence:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ seq 5
|
||||
1
|
||||
2
|
||||
3
|
||||
4
|
||||
5
|
||||
[student@studentvm1 testdir]$ echo `seq 5`
|
||||
1 2 3 4 5
|
||||
[student@studentvm1 testdir]$
|
||||
```
|
||||
|
||||
Now you can do something a bit more useful, like creating a large number of empty files for testing:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ for I in $(seq -w 5000) ; do touch file-$I ; done
|
||||
```
|
||||
|
||||
In this usage, the statement **seq -w 5000** generates a list of numbers from one to 5,000. By using command substitution as part of the **for** statement, the list of numbers is used by the **for** statement to generate the numerical part of the file names.
|
||||
|
||||
#### Arithmetic expansion
|
||||
|
||||
Bash can perform integer math, but it is rather cumbersome (as you will soon see). The syntax for arithmetic expansion is **$((arithmetic-expression))**, using double parentheses to open and close the expression.
|
||||
|
||||
Arithmetic expansion works like command substitution in a shell program or script; the value calculated from the expression replaces the expression for further evaluation by the shell.
|
||||
|
||||
Once again, start with something simple:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ echo $((1+1))
|
||||
2
|
||||
[student@studentvm1 testdir]$ Var1=5 ; Var2=7 ; Var3=$((Var1*Var2)) ; echo "Var 3 = $Var3"
|
||||
Var 3 = 35
|
||||
```
|
||||
|
||||
The following division results in zero because the result would be a decimal value of less than one:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ Var1=5 ; Var2=7 ; Var3=$((Var1/Var2)) ; echo "Var 3 = $Var3"
|
||||
Var 3 = 0
|
||||
```
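One common workaround (a sketch) is to scale the numerator before dividing, since arithmetic expansion truncates toward zero; this yields an integer percentage:

```
[student@studentvm1 testdir]$ Var1=5 ; Var2=7 ; echo "Var1 is $((100*Var1/Var2))% of Var2"
Var1 is 71% of Var2
```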
|
||||
|
||||
Here is a simple calculation I often do in a script or CLI program that tells me how much total virtual memory I have in a Linux host. The **free** command does not provide that data:
|
||||
|
||||
|
||||
```
|
||||
[student@studentvm1 testdir]$ RAM=`free | grep ^Mem | awk '{print $2}'` ; Swap=`free | grep ^Swap | awk '{print $2}'` ; echo "RAM = $RAM and Swap = $Swap" ; echo "Total Virtual memory is $((RAM+Swap))" ;
|
||||
RAM = 4037080 and Swap = 6291452
|
||||
Total Virtual memory is 10328532
|
||||
```
|
||||
|
||||
I used the **`** character to delimit the sections of code used for command substitution.
|
||||
|
||||
I use Bash arithmetic expansion mostly for checking system resource amounts in a script and then choose a program execution path based on the result.
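Here is a minimal sketch of that pattern, reusing the pipeline above (the 2GB threshold is made up for illustration):

```
# Choose an execution path based on available RAM (in KB, as reported by free)
RAM=`free | grep ^Mem | awk '{print $2}'`
if [ $RAM -lt 2000000 ] ; then
    echo "Less than about 2GB of RAM: taking the low-memory path."
else
    echo "Enough RAM: taking the normal path."
fi
```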
|
||||
|
||||
### Summary
|
||||
|
||||
This article, the second in this series on Bash as a programming language, explored the Bash file, string, numeric, and miscellaneous logical operators that provide execution-flow control logic and the different types of shell expansions.
|
||||
|
||||
The third article in this series will explore the use of loops for performing various types of iterative operations.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/10/programming-bash-part-2
|
||||
|
||||
作者:[David Both][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/dboth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_5.png?itok=YHpNs_ss (Women in computing and open source v5)
|
||||
[2]: http://www.both.org/?page_id=1183
|
||||
[3]: https://opensource.com/article/19/10/programming-bash-part-1
|
@ -0,0 +1,389 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Initializing arrays in Java)
|
||||
[#]: via: (https://opensource.com/article/19/10/initializing-arrays-java)
|
||||
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
|
||||
|
||||
Initializing arrays in Java
|
||||
======
|
||||
Arrays are a helpful data type for managing collections of elements best modeled in contiguous memory locations. Here's how to use them effectively.
|
||||
![Coffee beans and a cup of coffee][1]
|
||||
|
||||
People who have experience programming in languages like C or FORTRAN are familiar with the concept of arrays. They’re basically a contiguous block of memory where each location is a certain type: integers, floating-point numbers, or what-have-you.
|
||||
|
||||
The situation in Java is similar, but with a few extra wrinkles.
|
||||
|
||||
### An example array
|
||||
|
||||
Let’s make an array of 10 integers in Java:
|
||||
|
||||
|
||||
```
|
||||
int[] ia = new int[10];
|
||||
```
|
||||
|
||||
What’s going on in the above piece of code? From left to right:
|
||||
|
||||
1. The **int[]** to the extreme left declares the _type_ of the variable as an array (denoted by the **[]**) of **int**.
|
||||
|
||||
2. To the right is the _name_ of the variable, which in this case is **ia**.
|
||||
|
||||
3. Next, the **=** tells us that the variable defined on the left side is set to what’s to the right side.
|
||||
|
||||
4. To the right of the **=** we see the word **new**, which in Java indicates that an object is being _initialized_, meaning that storage is allocated and its constructor is called ([see here for more information][2]).
|
||||
|
||||
5. Next, we see **int[10]**, which tells us that the specific object being initialized is an array of 10 integers.
|
||||
|
||||
|
||||
|
||||
|
||||
Since Java is strongly-typed, the type of the variable **ia** must be compatible with the type of the expression on the right-hand side of the **=**.
|
||||
|
||||
### Initializing the example array
|
||||
|
||||
Let’s put this simple array in a piece of code and try it out. Save the following in a file called **Test1.java**, use **javac** to compile it, and use **java** to run it (in the terminal of course):
|
||||
|
||||
|
||||
```
|
||||
import java.lang.*;
|
||||
|
||||
public class Test1 {
|
||||
|
||||
public static void main(String[] args) {
|
||||
int[] ia = new int[10]; // See note 1 below
|
||||
[System][4].out.println("ia is " + ia.getClass()); // See note 2 below
|
||||
for (int i = 0; i < ia.length; i++) // See note 3 below
|
||||
[System][4].out.println("ia[" + i + "] = " + ia[i]); // See note 4 below
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
Let’s work through the most important bits.
|
||||
|
||||
1. Our declaration and initialization of the array of 10 integers, **ia**, is easy to spot.
|
||||
2. In the line just following, we see the expression **ia.getClass()**. That’s right, **ia** is an _object_ belonging to a _class_, and this code will let us know which class that is.
|
||||
3. In the next line following that, we see the start of the loop **for (int i = 0; i < ia.length; i++)**, which defines a loop index variable **i** that runs through a sequence from zero to one less than **ia.length**, which is an expression that tells us how many elements are defined in the array **ia**.
|
||||
4. Next, the body of the loop prints out the values of each element of **ia**.
|
||||
|
||||
|
||||
|
||||
When this program is compiled and run, it produces the following results:
|
||||
|
||||
|
||||
```
|
||||
me@mydesktop:~/Java$ javac Test1.java
|
||||
me@mydesktop:~/Java$ java Test1
|
||||
ia is class [I
|
||||
ia[0] = 0
|
||||
ia[1] = 0
|
||||
ia[2] = 0
|
||||
ia[3] = 0
|
||||
ia[4] = 0
|
||||
ia[5] = 0
|
||||
ia[6] = 0
|
||||
ia[7] = 0
|
||||
ia[8] = 0
|
||||
ia[9] = 0
|
||||
me@mydesktop:~/Java$
|
||||
```
|
||||
|
||||
The string representation of the output of **ia.getClass()** is **[I**, which is shorthand for "array of integer." Similar to the C programming language, Java arrays begin with element zero and extend up to element **<array size> – 1**. We can see above that each of the elements of **ia** are set to zero (by the array constructor, it seems).
|
||||
|
||||
So, is that it? We declare the type, use the appropriate initializer, and we’re done?
|
||||
|
||||
Well, no. There are many other ways to initialize an array in Java.
|
||||
|
||||
### Why do I want to initialize an array, anyway?
|
||||
|
||||
The answer to this question, like that of all good questions, is "it depends." In this case, the answer depends on what we expect to do with the array once it is initialized.
|
||||
|
||||
In some cases, arrays emerge naturally as a type of accumulator. For example, suppose we are writing code for counting the number of calls received and made by a set of telephone extensions in a small office. There are eight extensions, numbered one through eight, plus the operator’s extension, numbered zero. So we might declare two arrays:
|
||||
|
||||
|
||||
```
|
||||
int[] callsMade;
|
||||
int[] callsReceived;
|
||||
```
|
||||
|
||||
Then, whenever we start a new period of accumulating call statistics, we initialize each array as:
|
||||
|
||||
|
||||
```
|
||||
callsMade = new int[9];
|
||||
callsReceived = new int[9];
|
||||
```
|
||||
|
||||
At the end of each period of accumulating call statistics, we can print out the stats. In very rough terms, we might see:
|
||||
|
||||
|
||||
```
|
||||
import java.lang.*;
|
||||
import java.io.*;
|
||||
|
||||
public class Test2 {
|
||||
|
||||
public static void main(String[] args) {
|
||||
|
||||
int[] callsMade;
|
||||
int[] callsReceived;
|
||||
|
||||
// initialize call counters
|
||||
|
||||
callsMade = new int[9];
|
||||
callsReceived = new int[9];
|
||||
|
||||
// process calls...
|
||||
// an extension makes a call: callsMade[ext]++
|
||||
// an extension receives a call: callsReceived[ext]++
|
||||
|
||||
// summarize call statistics
|
||||
|
||||
[System][4].out.printf("%3s%25s%25s\n","ext"," calls made",
|
||||
"calls received");
|
||||
for (int ext = 0; ext < callsMade.length; ext++)
|
||||
[System][4].out.printf("%3d%25d%25d\n",ext,
|
||||
callsMade[ext],callsReceived[ext]);
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
Which would produce output something like this:
|
||||
|
||||
|
||||
```
|
||||
me@mydesktop:~/Java$ javac Test2.java
|
||||
me@mydesktop:~/Java$ java Test2
|
||||
ext calls made calls received
|
||||
0 0 0
|
||||
1 0 0
|
||||
2 0 0
|
||||
3 0 0
|
||||
4 0 0
|
||||
5 0 0
|
||||
6 0 0
|
||||
7 0 0
|
||||
8 0 0
|
||||
me@mydesktop:~/Java$
|
||||
```
|
||||
|
||||
Not a very busy day in the call center.
|
||||
|
||||
In the above example of an accumulator, we see that the starting value of zero as set by the array initializer is satisfactory for our needs. But in other cases, this starting value may not be the right choice.
|
||||
|
||||
For example, in some kinds of geometric computations, we might need to initialize a two-dimensional array to the identity matrix (all zeros except for the ones along the main diagonal). We might choose to do this as:
|
||||
|
||||
|
||||
```
|
||||
double[][] m = new double[3][3];
|
||||
for (int d = 0; d < 3; d++)
|
||||
m[d][d] = 1.0;
|
||||
```
|
||||
|
||||
In this case, we rely on the array initializer **new double[3][3]** to set the array to zeros, and then use a loop to set the diagonal elements to ones. In this simple case, we might use a shortcut that Java provides:
|
||||
|
||||
|
||||
```
|
||||
double[][] m = {
|
||||
{1.0, 0.0, 0.0},
|
||||
{0.0, 1.0, 0.0},
|
||||
{0.0, 0.0, 1.0}};
|
||||
```
|
||||
|
||||
This type of visual structure is particularly appropriate in this sort of application, where it can be a useful double-check to see the actual layout of the array. But in the case where the number of rows and columns is only determined at run time, we might instead see something like this:
|
||||
|
||||
|
||||
```
|
||||
int nrc;
|
||||
// some code determines the number of rows & columns = nrc
|
||||
double[][] m = new double[nrc][nrc];
|
||||
for (int d = 0; d < nrc; d++)
|
||||
m[d][d] = 1.0;
|
||||
```
|
||||
|
||||
It’s worth mentioning that a two-dimensional array in Java is actually an array of arrays, and there’s nothing stopping the intrepid programmer from having each one of those second-level arrays be a different length. That is, something like this is completely legitimate:
|
||||
|
||||
|
||||
```
|
||||
int [][] differentLengthRows = {
|
||||
{ 1, 2, 3, 4, 5},
|
||||
{ 6, 7, 8, 9},
|
||||
{10,11,12},
|
||||
{13,14},
|
||||
{15}};
|
||||
```
|
||||
|
||||
There are various linear algebra applications that involve irregularly-shaped matrices, where this type of structure could be applied (for more information see [this Wikipedia article][5] as a starting point). Beyond that, now that we understand that a two-dimensional array is actually an array of arrays, it shouldn’t be too much of a surprise that:
|
||||
|
||||
|
||||
```
|
||||
differentLengthRows.length
|
||||
```
|
||||
|
||||
tells us the number of rows in the two-dimensional array **differentLengthRows**, and:
|
||||
|
||||
|
||||
```
|
||||
differentLengthRows[i].length
|
||||
```
|
||||
|
||||
tells us the number of columns in row **i** of **differentLengthRows**.

### Taking the array further

Considering this idea of array size that is determined at run time, we see that arrays still require us to know that size before instantiating them. But what if we don’t know the size until we’ve processed all of the data? Does that mean we have to process it once to figure out the size of the array, and then process it again? That could be hard to do, especially if we only get one chance to consume the data.

The [Java Collections Framework][6] solves this problem in a nice way. One of the things provided there is the class **ArrayList**, which is like an array but dynamically extensible. To demonstrate the workings of **ArrayList**, let’s create one and initialize it to the first 20 [Fibonacci numbers][7]:

```
import java.util.*;

public class Test3 {

    public static void main(String[] args) {

        ArrayList<Integer> fibos = new ArrayList<Integer>();

        fibos.add(0);
        fibos.add(1);
        for (int i = 2; i < 20; i++)
            fibos.add(fibos.get(i - 1) + fibos.get(i - 2));

        for (int i = 0; i < fibos.size(); i++)
            System.out.println("fibonacci " + i +
                " = " + fibos.get(i));

    }
}
```

Above, we see:

  * The declaration and instantiation of an **ArrayList** that is used to store **Integer**s.
  * The use of **add()** to append to the **ArrayList** instance.
  * The use of **get()** to retrieve an element by index number.
  * The use of **size()** to determine how many elements are already in the **ArrayList** instance.

Not shown is the **set()** method, which replaces the element at a given index number.

The output of this program is:

```
fibonacci 0 = 0
fibonacci 1 = 1
fibonacci 2 = 1
fibonacci 3 = 2
fibonacci 4 = 3
fibonacci 5 = 5
fibonacci 6 = 8
fibonacci 7 = 13
fibonacci 8 = 21
fibonacci 9 = 34
fibonacci 10 = 55
fibonacci 11 = 89
fibonacci 12 = 144
fibonacci 13 = 233
fibonacci 14 = 377
fibonacci 15 = 610
fibonacci 16 = 987
fibonacci 17 = 1597
fibonacci 18 = 2584
fibonacci 19 = 4181
```
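
To round that out, here is a minimal sketch of the **set()** method mentioned above (the variable and values are ours, just for illustration):

```
ArrayList<Integer> values = new ArrayList<Integer>();
values.add(10);
values.add(20);
values.set(1, 99);             // replace the element at index 1
System.out.println(values);    // prints [10, 99]
```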

**ArrayList** instances can also be initialized by other techniques. For example, an array can be supplied to the **ArrayList** constructor, or the **List.of()** and **Arrays.asList()** methods can be used when the initial elements are known at compile time. I don’t find myself using these options all that often, since my primary use case for an **ArrayList** is when I only get one chance to read the data.

Moreover, an **ArrayList** instance can be converted to an array using its **toArray()** method, for those who prefer to work with an array once the data is loaded; or, returning to the current topic, once the **ArrayList** instance is initialized.
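
As a quick sketch of those alternatives (the values are placeholders of ours; note that **List.of()** requires Java 9 or later and returns an immutable list):

```
List<Integer> fixed = List.of(1, 2, 3);                            // immutable
ArrayList<Integer> copy = new ArrayList<>(Arrays.asList(1, 2, 3)); // mutable copy

// back to an array once the list is loaded
Integer[] asArray = copy.toArray(new Integer[0]);
```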

The Java Collections Framework provides another kind of array-like data structure called a **Map**. What I mean by "array-like" is that a **Map** defines a collection of objects whose values can be set or retrieved by a key, but unlike an array (or an **ArrayList**), this key need not be an integer; it could be a **String** or any other complex object.

For example, we can create a **Map** whose keys are **String**s and whose values are **Integer**s as follows (since **Map** is an interface, we instantiate its **HashMap** implementation):

```
Map<String,Integer> stoi = new HashMap<String,Integer>();
```

Then we can initialize this **Map** as follows:

```
stoi.put("one",1);
stoi.put("two",2);
stoi.put("three",3);
```

And so on. Later, when we want to know the numeric value of **"three"**, we can retrieve it as:

```
stoi.get("three");
```

In my world, a **Map** is useful for converting strings occurring in third-party datasets into coherent code values in my datasets. As a part of a [data transformation pipeline][8], I will often build a small standalone program to clean the data before processing it; for this, I will almost always use one or more **Map**s.

Worth mentioning is that it’s quite possible, and sometimes reasonable, to have **ArrayList**s of **ArrayList**s and **Map**s of **Map**s. For example, let’s assume we’re looking at trees, and we’re interested in accumulating the count of the number of trees by tree species and age range. Assuming that the age range definition is a set of string values ("young," "mid," "mature," and "old") and that the species are string values like "Douglas fir," "western red cedar," and so forth, then we might define a **Map** of **Map**s as:

```
Map<String,Map<String,Integer>> counter =
    new HashMap<String,Map<String,Integer>>();
```

One thing to watch out for here is that the above only creates the outer **Map**; each inner per-species **Map** must still be created as it is needed. So, our accumulation code might look like:

```
// assume at this point we have figured out the species
// and age range
if (!counter.containsKey(species))
    counter.put(species, new HashMap<String,Integer>());
if (!counter.get(species).containsKey(ageRange))
    counter.get(species).put(ageRange, 0);
```

At which point, we can start accumulating as:

```
counter.get(species).put(ageRange,
    counter.get(species).get(ageRange) + 1);
```
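
Since Java 8, this guard-then-accumulate dance can be collapsed into a single statement using **computeIfAbsent()** and **merge()**; a minimal sketch of the same logic:

```
counter.computeIfAbsent(species, k -> new HashMap<>())
       .merge(ageRange, 1, Integer::sum);
```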

Finally, it’s worth mentioning that the (new in Java 8) Streams facility can also be used to initialize arrays, **ArrayList** instances, and **Map** instances. A nice discussion of this feature can be found [here][9] and [here][10].
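
For a flavor of that approach, here is a minimal sketch (assuming the **java.util.stream** classes are imported; the values are again placeholders of ours):

```
int[] squares = IntStream.range(0, 10).map(i -> i * i).toArray();

List<Integer> numbers = IntStream.range(0, 10)
    .boxed()
    .collect(Collectors.toList());

Map<String,Integer> lengths = Stream.of("one", "two", "three")
    .collect(Collectors.toMap(s -> s, String::length));
```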

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/initializing-arrays-java

作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/java-coffee-mug.jpg?itok=Bj6rQo8r (Coffee beans and a cup of coffee)
[2]: https://opensource.com/article/19/8/what-object-java
[3]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[4]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
[5]: https://en.wikipedia.org/wiki/Irregular_matrix
[6]: https://en.wikipedia.org/wiki/Java_collections_framework
[7]: https://en.wikipedia.org/wiki/Fibonacci_number
[8]: https://towardsdatascience.com/data-science-for-startups-data-pipelines-786f6746a59a
[9]: https://stackoverflow.com/questions/36885371/lambda-expression-to-initialize-array
[10]: https://stackoverflow.com/questions/32868665/how-to-initialize-a-map-using-a-lambda
@ -0,0 +1,258 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NGT: A library for high-speed approximate nearest neighbor search)
[#]: via: (https://opensource.com/article/19/10/ngt-open-source-library)
[#]: author: (Masajiro Iwasaki https://opensource.com/users/masajiro-iwasaki)

NGT: A library for high-speed approximate nearest neighbor search
======

NGT is a high-performing, open source library for approximate nearest
neighbor search over large-scale, high-dimensional vectors.

![Houses in a row][1]

Approximate nearest neighbor ([ANN][2]) search is used in deep learning to make a best guess at the point in a given set that is most similar to another point. This article explains the differences between ANN search and traditional search methods and introduces [NGT][3], a top-performing open source ANN library developed by [Yahoo! Japan Research][4].

### Nearest neighbor search for high-dimensional data

Different search methods are used for different data types. For example, full-text search is for text data, content-based image retrieval is for images, and relational databases are for data relationships. Deep learning models can easily generate vectors from various kinds of data, so that the vector space has embedded relationships among the source data. This means that if two source data items are similar, the two vectors generated from them will be located near each other in the vector space. Therefore, all you have to do is search the vectors instead of the source data.

Moreover, the vectors not only represent the text and image characteristics of the source data, but they also represent products, human beings, organizations, and so forth. Therefore, you can search for similar documents and images as well as products with similar attributes, human beings with similar skills, clothing with similar features, and so on. For example, [Yahoo! Japan][5] provides a similarity-based fashion-item search using NGT.

![Nearest neighbour search][6]

Since the number of dimensions in deep learning models tends to increase, ANN search methods are indispensable when searching for more than several million high-dimensional vectors. ANN search methods allow you to search for neighbors to the specified query vector in high-dimensional space.

There are many nearest-neighbor search methods to choose from. [ANN Benchmarks][7] evaluates the best-known ANN search methods, including Faiss (Facebook), Flann, and Hnswlib. According to this benchmark, NGT achieves top-level performance.

### NGT algorithms

The NGT index combines a graph and a tree, and the result is very good search performance. The graph's vertices represent searchable objects, and neighboring vertices are connected by edges.

This animation shows how a graph is constructed.

![NGT graph construction][8]

In the search procedure, the neighbors of the specified query can be found by descending the graph. Densely connected vertices enable users to explore the graph effectively.

![NGT graph][9]
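
To make "descending the graph" concrete, here is a minimal conceptual sketch of greedy descent over a neighborhood graph, written in Java. Everything in it (names, data structures) is ours for illustration only; NGT's actual algorithm is considerably more sophisticated, using the tree to pick good starting vertices along with more elaborate search parameters:

```
import java.util.List;

class GreedyDescent {
    // graph.get(v) lists the neighbors of vertex v; points[v] is its vector
    static int search(List<List<Integer>> graph, double[][] points,
                      double[] query, int start) {
        int current = start;
        boolean improved = true;
        while (improved) {
            improved = false;
            for (int neighbor : graph.get(current)) {
                if (dist(points[neighbor], query) < dist(points[current], query)) {
                    current = neighbor;   // step toward the query
                    improved = true;
                }
            }
        }
        return current;                   // an approximate nearest neighbor
    }

    static double dist(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++)
            s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }
}
```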

NGT provides a command-line tool, along with C, C++, and Python APIs. This article focuses on the command-line tool and the Python API.

### Using NGT with the command-line tool

#### Linux installation

Download the [latest version of NGT][10] as a ZIP file and install it on Linux with:

```
unzip NGT-x.x.x.zip
cd NGT-x.x.x
mkdir build
cd build
cmake ..
make
make install
```

Since NGT libraries are installed in **/usr/local/lib(64)** by default, add the directory to the search path:

```
export PATH="$PATH:/opt/local/bin"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
```

#### Sample dataset generation

Before you can search a large-scale dataset, you must generate an NGT dataset. As an example, [download the fastText dataset][11] from the [fastText website][12], then convert it to the NGT registration format with:

```
curl -O https://dl.fbaipublicfiles.com/fasttext/vectors-english/wiki-news-300d-1M-subword.vec.zip
unzip wiki-news-300d-1M-subword.vec.zip
tail -n +2 wiki-news-300d-1M-subword.vec | cut -d " " -f 2- > objects.ssv
```

**objects.ssv** is a registration file that has 1 million objects. One object in the file is extracted as a query:

```
head -10000 objects.ssv | tail -1 > query.ssv
```

#### Index construction

An **ngt_index** can be constructed using the following command:

```
ngt create -d 300 -D c index objects.ssv
```

**-d** specifies the number of dimensions of the vector, and **-D c** selects cosine similarity as the distance measure.
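
(For reference, the cosine distance between vectors **x** and **y** is the standard quantity 1 - (x·y)/(|x||y|), so a vector compared against itself has distance 0; that is why the query object comes back as the first result below.)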

#### Approximate nearest neighbor search

You can then search the **ngt_index** with the query using:

```
ngt search -n 10 index query.ssv
```

**-n** specifies the number of resulting objects.

The search results are:
```
Query No.1
Rank    ID      Distance
1       10000   0
2       21516   0.184495
3       201860  0.240375
4       71865   0.241284
5       339589  0.267265
6       485158  0.280977
7       7961    0.283865
8       924513  0.286571
9       28870   0.286654
10      395274  0.290466
Query Time= 0.000972628 (sec), 0.972628 (msec)
Average Query Time= 0.000972628 (sec), 0.972628 (msec), (0.000972628/1)
```

Please see the [NGT command-line README][13] for more information.

### Using NGT from Python

Although NGT has C and C++ APIs, the [ngtpy][14] Python binding for NGT is the simplest option for programming.

#### Installing ngtpy

Install the Python binding (ngtpy) through PyPI with:

```
pip3 install ngt
```
#### Sample dataset generation

Generate data files for the Python sample programs from the sample dataset you downloaded by using this code:

```
dataset_path = 'wiki-news-300d-1M-subword.vec'
with open(dataset_path, 'r') as fi, open('objects.tsv', 'w') as fov, \
     open('words.tsv', 'w') as fow:
    n, dim = map(int, fi.readline().split())
    fov.write('{0}\t{1}\n'.format(n, dim))
    for line in fi:
        tokens = line.rstrip().split(' ')
        fow.write(tokens[0] + '\n')
        fov.write('{0}\n'.format('\t'.join(tokens[1:])))
```
#### Index construction

Construct the NGT index with:

```
import ngtpy

index_path = 'index'
with open('objects.tsv', 'r') as fin:
    n, dim = map(int, fin.readline().split())
    ngtpy.create(index_path, dim, distance_type='Cosine') # create an index
    index = ngtpy.Index(index_path) # open the index
    print('inserting objects...')
    for line in fin:
        object = list(map(float, line.rstrip().split('\t')))
        index.insert(object) # insert objects

print('building objects...')
index.build_index()
print('saving the index...')
index.save()
```
#### Approximate nearest neighbor search

Here is an example ANN search program:

```
import ngtpy

print('loading words...')
with open('words.tsv', 'r') as fin:
    words = list(map(lambda x: x.rstrip('\n'), fin.readlines()))

index = ngtpy.Index('index', zero_based_numbering=False) # open index
query_id = 10000
query_object = index.get_object(query_id) # get the object for a query

result = index.search(query_object) # approximate nearest neighbor search
print('Query={}'.format(words[query_id - 1]))
print('Rank\tID\tDistance\tWord')
for rank, object in enumerate(result):
    print('{}\t{}\t{:.6f}\t{}'.format(rank + 1, object[0], object[1], words[object[0] - 1]))
```
And here are the search results, which are the same as the command-line tool's results:

```
loading words...
Query=Horse
Rank    ID      Distance        Word
1       10000   0.000000        Horse
2       21516   0.184495        Horses
3       201860  0.240375        Horseback
4       71865   0.241284        Horseman
5       339589  0.267265        Prancing
6       485158  0.280977        Horsefly
7       7961    0.283865        Dog
8       924513  0.286571        Horsing
9       28870   0.286654        Pony
10      395274  0.290466        Blood-Horse
```

For more information, please see the [ngtpy README][14].

Approximate nearest neighbor (ANN) search is an important technique for analyzing data. Learning how to use it in your own projects, or to make sense of data that you're analyzing, is a powerful way to find correlations and interpret information. With NGT, you can use ANN in whatever way you require, or build upon it to add custom features.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/ngt-open-source-library

作者:[Masajiro Iwasaki][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/masajiro-iwasaki
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/house_home_colors_live_building.jpg?itok=HLpsIfIL (Houses in a row)
[2]: https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximate_nearest_neighbor
[3]: https://github.com/yahoojapan/NGT
[4]: https://research-lab.yahoo.co.jp/en/
[5]: https://www.yahoo.co.jp/
[6]: https://opensource.com/sites/default/files/browser-visual-search_new.jpg (Nearest neighbour search)
[7]: https://github.com/erikbern/ann-benchmarks
[8]: https://opensource.com/sites/default/files/uploads/ngt_movie2.gif (NGT graph construction)
[9]: https://opensource.com/sites/default/files/uploads/ngt_movie1.gif (NGT graph)
[10]: https://github.com/yahoojapan/NGT/releases/latest
[11]: https://dl.fbaipublicfiles.com/fasttext/vectors-english/wiki-news-300d-1M-subword.vec.zip
[12]: https://fasttext.cc/
[13]: https://github.com/yahoojapan/NGT/blob/master/bin/ngt/README.md
[14]: https://github.com/yahoojapan/NGT/blob/master/python/README-ngtpy.md
@ -0,0 +1,82 @@

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Kubernetes networking, OpenStack Train, and more industry trends)
[#]: via: (https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends)
[#]: author: (Tim Hildred https://opensource.com/users/thildred)

每周开源点评:Kubernetes 网络、OpenStack Train 以及更多的行业趋势
======

> 开源社区和行业趋势的每周总览。

![Person standing in front of a giant computer screen with numbers, data][1]

我在一家采用开源开发模式的企业软件公司担任高级产品营销经理,我的一部分职责是为产品营销人员、经理和其他相关人士定期发布有关开源社区、市场和行业趋势的更新。以下是该更新中我和他们最喜欢的五篇文章。

### OpenStack Train 中最令人兴奋的功能

- [文章地址][2]

> 考虑到 Train 版本必须提供的所有技术优势([你可以在此处查看版本亮点][3]),你可能会对 Red Hat 认为这些将使我们的电信和企业客户受益的顶级功能及其用例感到好奇。以下我们对该版本最兴奋的功能的概述。

**影响**:OpenStack 对我来说就像 Shia LaBeouf:它在几年前达到了炒作的顶峰,然后继续产出了好的作品。Train 版本看起来又带来了一批令人难以置信的创新。
### 以 Ansible 原生的方式构建 Kubernetes 操作器

- [文章地址][4]

> 操作器简化了 Kubernetes 上复杂应用程序的管理。它们通常是用 Go 语言编写的,并且需要懂得 Kubernetes 内部的专业知识。但是,还有另一种进入门槛较低的选择。Ansible 是操作器 SDK 中的一等公民。使用 Ansible 可以释放应用程序工程师的精力,最大限度地利用时间来自动化和协调你的应用程序,并使用一种简单的语言在新的和现有的平台上进行操作。在这里我们可以看到如何做。

**影响**:这就像你发现可以用搅拌器和冷冻香蕉制作出不错的冰淇淋一样:Ansible(通常被认为很容易掌握)能让你以远比想象中容易的方式,施展一些令人印象深刻的操作器魔术。

### Kubernetes 网络:幕后花絮

- [文章地址][5]

> 尽管围绕该主题有很多很好的资源(链接在[这里][6]),但我找不到一个示例,可以将所有的点与网络工程师喜欢和讨厌的命令输出连接起来,以显示背后实际发生的情况。因此,我决定从许多不同的来源收集这些信息,以期帮助你更好地了解事物之间的联系。

**影响**:这是一篇很好地阐述了复杂主题的文章(还带有图片),保证能让 Kubernetes 网络的混乱程度降低 10%。
### 保护容器供应链

- [文章地址][7]

> 随着容器、软件即服务和函数即服务的出现,人们开始着眼于在使用现有服务、函数和容器映像的过程中寻求新的价值。[Red Hat][8] 的容器首席产品经理 Scott McCarty 表示,关注这个重点既有优点也有缺点。“它使我们能够集中精力编写满足我们需求的新应用程序代码,同时将对基础架构的关注转移到其他人身上,”McCarty 说,“容器处于一个最佳位置,提供了足够的控制,而卸去了许多繁琐的基础架构工作。”但是,容器也会带来与安全性相关的劣势。

**影响**:我在一个由大约十位安全人员组成的小组中,可以肯定地说,整天思考软件安全性需要一种特定的倾向。当你长时间凝视深渊时,它也凝视着你。如果你不是这种倾向的软件开发人员,请听取 Scott 的建议,并确保你的供应商考虑了安全。
### 15 岁的 Fedora:为何 Matthew Miller 看到 Linux 发行版的光明前景

- [文章地址][9]

> 在 TechRepublic 的一个大范围采访中,Fedora 项目负责人 Matthew Miller 讨论了过去的经验教训、软件容器的普遍采用和竞争性标准、Fedora 的潜在变化以及包括 systemd 在内的热门话题。

**影响**:我喜欢 Fedora 项目的一点是它的清晰度:该项目知道自己代表着什么。正是因为有像 Matt 这样的人,它才有光明的前景。

*我希望你喜欢这份上周让我印象深刻的文章清单,请于下周一回来,了解更多开源社区、市场和行业趋势。*

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/10/kubernetes-openstack-and-more-industry-trends

作者:[Tim Hildred][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/thildred
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
[2]: https://www.redhat.com/en/blog/look-most-exciting-features-openstack-train
[3]: https://releases.openstack.org/train/highlights.html
[4]: https://www.cncf.io/webinars/building-kubernetes-operators-in-an-ansible-native-way/
[5]: https://itnext.io/kubernetes-networking-behind-the-scenes-39a1ab1792bb
[6]: https://github.com/nleiva/kubernetes-networking-links
[7]: https://www.devprojournal.com/technology-trends/open-source/securing-the-container-supply-chain/
[8]: https://www.redhat.com/en
[9]: https://www.techrepublic.com/article/fedora-at-15-why-matthew-miller-sees-a-bright-future-for-the-linux-distribution/
@ -0,0 +1,66 @@

[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use IoT devices to keep children safe?)
[#]: via: (https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/)
[#]: author: (Andrew Carroll https://opensourceforu.com/author/andrew-carroll/)

如何使用物联网设备来确保儿童安全?
======

[![][1]][2]

IoT(物联网)设备正在迅速改变我们的生活。这些设备无处不在,从我们的家庭到其它行业。根据一些预测数据,到 2020 年,将会有 100 亿个 IoT 设备;到 2025 年,该数量将增长到 220 亿。目前,物联网已经在很多领域得到了应用,包括智能家居、工业生产过程、农业甚至医疗保健领域。伴随着如此广泛的应用,物联网显然已经成为近年来的热门话题之一。

多种因素促成了物联网设备在多个学科的爆炸式增长,这其中包括低成本处理器和无线连接的可用性,以及开源平台上的信息交流对物联网领域创新的推动。由于资源是开源的,与传统的应用程序开发相比,物联网设备的开发呈指数级增长。

在解释如何使用物联网设备来保护儿童之前,必须对物联网技术有基本的了解。

**IoT 设备是什么?**

IoT 设备是指那些在没有人类参与的情况下彼此之间可以通信的设备。因此,许多专家并不将智能手机和计算机视为物联网设备。此外,物联网设备必须能够收集数据,并且能将收集到的数据传送到其他设备或云端进行处理。

然而,在某些领域中,我们仍需要发掘物联网的潜力。儿童往往是脆弱的,他们很容易成为犯罪分子和其他蓄意伤害者的目标。无论在物理世界还是数字世界中,儿童都很容易成为犯罪行为的受害者。因为父母不能始终亲自到场保护孩子,所以需要监护工具。

除了适用于儿童的可穿戴设备外,还有许多家长监护应用程序,例如 Xnspy,可实时监控儿童并提供信息的实时更新。这些工具可确保儿童安全:可穿戴设备确保儿童的人身安全,而家长监护应用可确保儿童的上网安全。

由于越来越多的孩子把时间花在智能手机上,毫无意外地,他们也就成为诈骗分子的主要目标。此外,由于恋童癖、网络诱骗和其他网络犯罪的盛行,儿童也有可能成为网络欺凌的目标。

这些解决方案够吗?我们需要找到物联网解决方案,以确保孩子们在网上和线下的安全。在当代,我们如何确保孩子的安全?我们需要提出创新的解决方案。物联网可以帮助保护孩子在学校和家里的安全。
**物联网的潜力**

物联网设备提供的好处很多。举例来说,父母可以远程监控自己的孩子,而又不会显得过分专断。因此,儿童在拥有安全环境的同时,也拥有让自己变得独立的空间和自由。

而且,父母也不必再为孩子的安全担忧。物联网设备可以提供 7×24 小时的信息更新。像 Xnspy 之类的监控应用程序在提供有关孩子的智能手机活动信息方面更进了一步。随着物联网设备变得越来越复杂,拥有更长使用寿命的电池只是一个时间问题。诸如位置跟踪器之类的物联网设备可以提供有关孩子下落的准确详细信息,所以父母不必担心。

虽然可穿戴设备已经非常好了,但在确保儿童安全方面,这些通常还远远不够。因此,要为儿童提供安全的环境,我们还需要其他方法。许多事件表明,学校比其他任何公共场所都容易受到攻击。因此,学校需要采取安全措施,以确保儿童和教师的安全。在这一点上,物联网设备可用于检测潜在威胁,并采取必要的措施来防止攻击。威胁检测系统包括摄像头。系统一旦检测到威胁,便可以通知相关部门,如执法机构和医院。智能锁等设备可用于封锁学校(包括教室),来保护儿童。除此之外,学校还可以将孩子的安全状况告知父母,并在发现威胁时立即向他们发出警报。这将需要部署无线技术,例如 Wi-Fi 和传感器。因此,学校需要制定专门用于提升教室安全性的预算。

在智能家居中,你可以拍手关灯,也可以让家庭助手帮你关灯。同样,物联网设备也可用在家中保护儿童。在家里,物联网设备(例如摄像头)为父母在照顾孩子时提供 100% 的可见性。当父母不在家时,可以使用摄像头和其他传感器检测是否发生了可疑活动。其他设备(例如连接到这些传感器的智能锁)可以锁上门窗,以确保孩子们的安全。

同样,还可以引入许多物联网解决方案来确保孩子的安全。
**有多好就有多坏**

物联网设备中的传感器会产生大量数据。数据的安全性是至关重要的一个因素。收集到的有关孩子的数据如果落入不法分子手中,会存在危险。因此,需要采取预防措施。从物联网设备中泄露的任何数据都可用于确定行为模式,所以必须投资于不侵犯用户隐私的安全物联网解决方案。

物联网设备通常连接到 Wi-Fi,用于在设备之间传输数据。未加密数据的不安全网络会带来一些风险:这样的网络很容易被窃听,黑客可以利用这样的接入点入侵系统,还可以将恶意软件引入系统,使系统变得脆弱、易受攻击。此外,针对设备和公共网络(例如学校的网络)的网络攻击可能导致数据泄露和私有数据被盗。因此,在实施用于保护儿童的物联网解决方案时,必须同时落实保护网络和物联网设备的总体计划。

物联网设备在保护儿童在学校和家中的安全方面的潜力,还有待创新地发掘。我们需要付出更多努力来保护连接物联网设备的网络的安全。此外,物联网设备产生的数据可能落入不法分子手中,造成更多麻烦。因此,这也是物联网安全至关重要的一个领域。

--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/10/how-to-use-iot-devices-to-keep-children-safe/

作者:[Andrew Carroll][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensourceforu.com/author/andrew-carroll/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?resize=696%2C507&ssl=1 (Visual Internet of things_EB May18)
[2]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/10/Visual-Internet-of-things_EB-May18.jpg?fit=900%2C656&ssl=1
@ -7,22 +7,22 @@

[#]: via: (https://nicolasparada.netlify.com/posts/go-messenger-oauth/)
[#]: author: (Nicolás Parada https://nicolasparada.netlify.com/)

Building a Messenger App: OAuth
构建一个即时消息应用(二):OAuth
======

[Previous part: Schema][1].
[上一篇:模式](https://linux.cn/article-11396-1.html),[原文][1]。

In this post we start the backend by adding social login.
在这篇帖子中,我们将会通过为应用添加社交登录功能进入后端开发。

This is how it works: the user click on a link that redirects him to the GitHub authorization page. The user grant access to his info and get redirected back logged in. The next time he tries to login, he won’t be asked to grant permission, it is remembered so the login flow is as fast as a single click.
社交登录的工作方式十分简单:用户点击链接,然后重定向到 GitHub 授权页面。当用户授予我们对他的个人信息的访问权限之后,就会重定向回登录页面。下一次尝试登录时,系统将不会再次请求授权,也就是说,我们的应用已经记住了这个用户。这使得整个登录流程看起来就和你用鼠标单击一样快。

Internally, the history is more complex tho. First we need the register a new [OAuth app on GitHub][2].
如果进一步考虑其内部实现的话,过程就会变得复杂起来。首先,我们需要注册一个新的 [GitHub OAuth 应用][2]。

The important part is the callback URL. Set it to `http://localhost:3000/api/oauth/github/callback`. On development we are on localhost, so when you ship the app to production, register a new app with the correct callback URL.
这一步中,比较重要的是回调 URL。我们将它设置为 `http://localhost:3000/api/oauth/github/callback`。这是因为,在开发过程中,我们总是在本地主机上工作。一旦你要将应用交付生产,请使用正确的回调 URL 注册一个新的应用。

This will give you a client id and a secret key. Don’t share them with anyone 👀
注册以后,你将会收到“客户端 id”和“安全密钥”。安全起见,请不要与任何人分享它们 👀

With that off of the way, lets start to write some code. Create a `main.go` file:
说完这些,让我们开始写一些代码吧。现在,创建一个 `main.go` 文件:

```
package main
@ -139,7 +139,7 @@ func intEnv(key string, fallbackValue int) int {
}
```

Install dependencies:
安装依赖项:

```
go get -u github.com/gorilla/securecookie
@ -151,28 +151,26 @@ go get -u github.com/matryer/way
go get -u golang.org/x/oauth2
```

We use a `.env` file to save secret keys and other configurations. Create it with at least this content:
我们将会使用 `.env` 文件来保存密钥和其他配置。请创建这个文件,并保证里面至少包含以下内容:

```
GITHUB_CLIENT_ID=your_github_client_id
GITHUB_CLIENT_SECRET=your_github_client_secret
```

The other enviroment variables we use are:
我们还要用到的其他环境变量有:

  * `PORT`: The port in which the server runs. Defaults to `3000`.
  * `ORIGIN`: Your domain. Defaults to `http://localhost:3000/`. The port can also be extracted from this.
  * `DATABASE_URL`: The Cockroach address. Defaults to `postgresql://root@127.0.0.1:26257/messenger?sslmode=disable`.
  * `HASH_KEY`: Key to sign cookies. Yeah, we’ll use signed cookies for security.
  * `JWT_KEY`: Key to sign JSON web tokens.
  * `PORT`:服务器运行的端口,默认值是 `3000`。
  * `ORIGIN`:你的域名,默认值是 `http://localhost:3000/`。端口也可以从这里提取。
  * `DATABASE_URL`:Cockroach 数据库的地址,默认值是 `postgresql://root@127.0.0.1:26257/messenger?sslmode=disable`。
  * `HASH_KEY`:用于为 cookie 签名的密钥。没错,我们会使用已签名的 cookie 来确保安全。
  * `JWT_KEY`:用于签署 JSON 网络令牌(JSON Web Token)的密钥。

因为代码中已经设定了默认值,所以你也不用把它们写到 `.env` 文件中。

在读取配置并连接到数据库之后,我们会创建一个 OAuth 配置。我们会使用 `ORIGIN` 来构建回调 URL(与我们在 GitHub 页面上注册的一致)。数据范围设置为 “read:user”,这会允许我们读取公开的用户信息,这里我们只需要用户的用户名和头像。然后我们会初始化 cookie 和 JWT 签名器,定义一些端点并启动服务器。

Because they have default values, your don’t need to write them on the `.env` file.

After reading the configuration and connecting to the database, we create an OAuth config. We use the origin to build the callback URL (the same we registered on the github page). And we set the scope to “read:user”. This will give us permission to read the public user info. That’s because we just need his username and avatar. Then we initialize the cookie and JWT signers. Define some endpoints and start the server.

Before implementing those HTTP handlers lets write a couple functions to send HTTP responses.
在实现 HTTP 处理程序之前,让我们编写一些函数来发送 HTTP 响应。

```
func respond(w http.ResponseWriter, v interface{}, statusCode int) {
@ -192,11 +190,11 @@ func respondError(w http.ResponseWriter, err error) {
}
```

The first one is to send JSON and the second one logs the error to the console and return a `500 Internal Server Error` error.
第一个函数用来发送 JSON,而第二个函数将错误记录到控制台,并返回一个 `500 Internal Server Error` 错误信息。

### OAuth Start
### OAuth 开始

So, the user clicks on a link that says “Access with GitHub”… That link points the this endpoint `/api/oauth/github` that will redirect the user to github.
所以,用户点击写着 “Access with GitHub” 的链接。该链接指向 `/api/oauth/github`,这将会把用户重定向到 GitHub。

```
func githubOAuthStart(w http.ResponseWriter, r *http.Request) {
@ -222,11 +220,11 @@ func githubOAuthStart(w http.ResponseWriter, r *http.Request) {
}
```

OAuth2 uses a mechanism to prevent CSRF attacks so it requires a “state”. We use nanoid to create a random string and use that as state. We save it as a cookie too.
OAuth2 使用一种机制来防止 CSRF 攻击,因此它需要一个“状态”(state)。我们使用 `Nanoid()` 来创建一个随机字符串,并用这个字符串作为状态。我们也把它保存为一个 cookie。

### OAuth Callback
### OAuth 回调

Once the user grant access to his info on the GitHub page, he will be redirected to this endpoint. The URL will come with the state and a code on the query string `/api/oauth/github/callback?state=&code=`
一旦用户授权我们访问他的个人信息,他将会被重定向到这个端点。这个 URL 的查询字符串上将会包含状态(state)和授权码(code):`/api/oauth/github/callback?state=&code=`

```
const jwtLifetime = time.Hour * 24 * 14
@ -341,19 +339,19 @@ func githubOAuthCallback(w http.ResponseWriter, r *http.Request) {
}
```

First we try to decode the cookie with the state we saved before. And compare it with the state that comes in the query string. In case they don’t match, we return a `418 I'm teapot` error.
首先,我们会尝试使用之前保存的状态对 cookie 进行解码,并将其与查询字符串中的状态进行比较。如果它们不匹配,我们会返回一个 `418 I'm teapot` 错误。

Then we exchange the code for a token. This token is used to create an HTTP client to make requests to the GitHub API. So we do a GET request to `https://api.github.com/user`. This endpoint will give us the current authenticated user info in JSON format. We decode it to get the user ID, login (username) and avatar URL.
接着,我们使用授权码生成一个令牌。这个令牌被用于创建 HTTP 客户端来向 GitHub API 发出请求。所以最终我们会向 `https://api.github.com/user` 发送一个 GET 请求。这个端点将会以 JSON 格式向我们提供当前经过身份验证的用户信息。我们将会解码这些内容,一并获取用户的 ID、登录名(用户名)和头像 URL。

Then we try to find a user with that GitHub ID on the database. If none is found, we create one using that data.
然后我们将会尝试在数据库上找到具有该 GitHub ID 的用户。如果没有找到,就使用该数据创建一个新的用户。

Then, with the newly created user, we issue a JSON web token with the user ID as Subject and redirect to the frontend with the token, along side the expiration date in the query string.
之后,我们会为这个用户签发一个以用户 ID 为主题(subject)的 JSON 网络令牌,并带着该令牌重定向到前端,查询字符串中一并包含该令牌的到期日。

The web app will be for another post, but the URL you are being redirected is `/callback?token=&expires_at=`. There we’ll have some JavaScript to extract the token and expiration date from the URL and do a GET request to `/api/auth_user` with the token in the `Authorization` header in the form of `Bearer token_here` to get the authenticated user and save it to localStorage.
Web 应用的部分将留到后面的帖子中,但你被重定向到的 URL 是 `/callback?token=&expires_at=`。在那里,我们将会利用 JavaScript 从 URL 中获取令牌和到期日,并通过 `Authorization` 标头以 `Bearer token_here` 的形式携带令牌,对 `/api/auth_user` 发起 GET 请求,来获取已认证的用户并将其保存到 localStorage。
### Guard Middleware
### Guard 中间件

To get the current authenticated user we use a middleware. That’s because in future posts we’ll have more endpoints that requires authentication, and a middleware allow us to share functionality.
为了获取当前已经过身份验证的用户,我们设计了 Guard 中间件。这是因为在接下来的文章中,我们会有很多需要进行身份认证的端点,而中间件将会允许我们共享这一功能。

```
type ContextKey struct {
@ -388,9 +386,9 @@ func guard(handler http.HandlerFunc) http.HandlerFunc {
}
```

First we try to read the token from the `Authorization` header or a `token` in the URL query string. If none found, we return a `401 Unauthorized` error. Then we decode the claims in the token and use the Subject as the current authenticated user ID.
首先,我们尝试从 `Authorization` 标头或者 URL 查询字符串中的 `token` 字段读取令牌。如果没有找到,我们返回 `401 Unauthorized`(未授权)错误。然后我们对令牌中的声明进行解码,并将其主题(subject)作为当前已经过身份验证的用户 ID。

Now, we can wrap any `http.handlerFunc` that needs authentication with this middleware and we’ll have the authenticated user ID in the context.
现在,我们可以用这一中间件来封装任何需要授权的 `http.handlerFunc`,并且在处理函数的上下文中保有已经过身份验证的用户 ID。

```
var guarded = guard(func(w http.ResponseWriter, r *http.Request) {
@ -398,7 +396,7 @@ var guarded = guard(func(w http.ResponseWriter, r *http.Request) {
})
```

### Get Authenticated User
### 获取认证用户

```
func getAuthUser(w http.ResponseWriter, r *http.Request) {
@ -422,13 +420,13 @@ func getAuthUser(w http.ResponseWriter, r *http.Request) {
}
```

We use the guard middleware to get the current authenticated user id and do a query to the database.
我们使用 Guard 中间件来获取当前经过身份认证的用户 ID,并查询数据库。

* * *

That will cover the OAuth process on the backend. In the next part we’ll see how to start conversations with other users.
这一部分涵盖了后端的 OAuth 流程。在下一篇帖子中,我们将会看到如何开始与其他用户的对话。

[Souce Code][3]
[源代码][3]

--------------------------------------------------------------------------------
@ -1,192 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (Morisun029)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mutation testing by example: Failure as experimentation)
[#]: via: (https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation)
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew)

以变异测试为例:基于故障的试验
======

基于 .NET 的 xUnit.net 测试框架,开发一款自动猫门的逻辑,让门在白天开放,夜间锁定。

![Digital hand surrounding by objects, bike, light bulb, graphs][1]

在本系列的[第一篇文章][2]中,我演示了如何使用设计的故障来确保代码中的预期结果。在第二篇文章中,我将继续开发示例项目——一款自动猫门,该门在白天开放,夜间锁定。

在此提醒一下,您可以按照[此处的说明][3]使用 .NET 的 xUnit.net 测试框架。

### 关于白天时间

回想一下,测试驱动开发(TDD)围绕着大量的单元测试。

第一篇文章中实现了满足 **Given7pmReturnNighttime** 单元测试期望的逻辑。但还没有完,现在,您需要描述当前时间大于 7 点时期望发生的结果。这是新的单元测试,称为 **Given7amReturnDaylight**:
```
[Fact]
public void Given7amReturnDaylight()
{
    var expected = "Daylight";
    var actual = dayOrNightUtility.GetDayOrNight();
    Assert.Equal(expected, actual);
}
```

现在,新的单元测试失败了(越早失败越好!):

```
Starting test execution, please wait...
[Xunit.net 00:00:01.23] unittest.UnitTest1.Given7amReturnDaylight [FAIL]
Failed unittest.UnitTest1.Given7amReturnDaylight
[...]
```

期望接收到的字符串值是 “Daylight”,但实际接收到的值是 “Nighttime”。

### 分析失败的测试用例

经过仔细检查,代码本身似乎已经出现问题。事实证明,**GetDayOrNight** 方法的实现是不可测试的!

看看我们面临的核心挑战:
1. **GetDayOrNight 依赖隐藏输入。**
   **dayOrNight** 的值取决于隐藏输入(它从内置系统时钟中获取一天的时间值)。
2. **GetDayOrNight 包含非确定性行为。**
   从系统时钟中获取到的时间值是不确定的。该时间取决于你运行代码的时间点,而这一点我们认为是不可预测的。
3. **GetDayOrNight API 的质量差。**
   该 API 与具体的数据源(系统 **DateTime**)紧密耦合。
4. **GetDayOrNight 违反了单一责任原则。**
   该方法实现同时使用和处理信息。优良作法是一种方法应负责执行一项职责。
5. **GetDayOrNight 有多个更改原因。**
   可以想象内部时间源可能会更改的情况。同样,很容易想象处理逻辑也将改变。这些变化的不同原因必须相互隔离。
6. **当(我们)尝试了解 GetDayOrNight 行为时,会发现它的 API 签名不足。**
   最理想的做法就是,通过简单地查看 API 的签名,就能了解 API 预期的行为类型。
7. **GetDayOrNight 取决于全局共享可变状态。**
   要不惜一切代价避免共享的可变状态!
8. **即使在阅读源代码之后,也无法预测 GetDayOrNight 方法的行为。**
   这是一个严重的问题。通过阅读源代码,应该始终非常清晰,系统一旦开始运行,便可以预测出其行为。

### 失败背后的原则

每当您遇到工程问题时,建议使用久经考验的分而治之策略。在这种情况下,遵循关注点分离的原则是一种可行的方法。

> 关注点分离(**separation of concerns**,**SoC**)是一种用于将计算机程序分为不同模块的设计原理,以便每个模块都可以解决一个关注点。关注点是影响计算机程序代码的一组信息。关注点信息可能与要优化代码的硬件的细节一样概括,也可能与要实例化的类的名称一样具体。完美体现 SoC 的程序称为模块化程序。
>
> ([source][4])

**GetDayOrNight** 方法应仅与确定日期和时间值表示白天还是夜晚有关。它不应该与寻找该值的来源有关。该问题应留给调用客户端。

必须将这个问题留给调用客户端,以获取当前时间。这种方法符合另一个有价值的工程原理——控制反转。Martin Fowler [在这里][5]详细探讨了这一概念。

> 框架的一个重要特征是,用户定义的用于定制框架的方法通常来自于框架本身,而不是从用户的应用程序代码调用来的。该框架通常在协调和排序应用程序活动中扮演主程序的角色。控制权的这种反转使框架有能力充当可扩展的框架。用户提供的方法为框架中的特定应用程序量身制定泛化算法。
>
> -- [Ralph Johnson and Brian Foote][6]

### 重构测试用例

因此,代码需要重构。摆脱对内部时钟的依赖(**DateTime** 系统实用程序):
```
DateTime time = new DateTime();
```

删除上述代码(在你的文件中应该是第 7 行)。通过将输入参数 **DateTime** 时间添加到 **GetDayOrNight** 方法,进一步重构代码。

这是重构后的类 **DayOrNightUtility.cs**:

```
using System;

namespace app {
    public class DayOrNightUtility {
        public string GetDayOrNight(DateTime time) {
            string dayOrNight = "Nighttime";
            if(time.Hour >= 7 && time.Hour < 19) {
                dayOrNight = "Daylight";
            }
            return dayOrNight;
        }
    }
}
```

重构代码需要更改单元测试。需要准备 **nightHour** 和 **dayHour** 的测试数据,并将这些值传到 **GetDayOrNight** 方法中。以下是重构后的单元测试:

```
using System;
using Xunit;
using app;

namespace unittest
{
    public class UnitTest1
    {
        DayOrNightUtility dayOrNightUtility = new DayOrNightUtility();
        DateTime nightHour = new DateTime(2019, 08, 03, 19, 00, 00);
        DateTime dayHour = new DateTime(2019, 08, 03, 07, 00, 00);

        [Fact]
        public void Given7pmReturnNighttime()
        {
            var expected = "Nighttime";
            var actual = dayOrNightUtility.GetDayOrNight(nightHour);
            Assert.Equal(expected, actual);
        }

        [Fact]
        public void Given7amReturnDaylight()
        {
            var expected = "Daylight";
            var actual = dayOrNightUtility.GetDayOrNight(dayHour);
            Assert.Equal(expected, actual);
        }

    }
}
```
### 经验教训

在继续开发这种简单的场景之前,请先回顾复习一下本次练习中所学到的东西。

运行无法测试的代码,很容易在不经意间制造陷阱。从表面上看,这样的代码似乎可以正常工作。但是,遵循测试驱动开发(TDD)的实践(首先描述期望结果,然后执行测试),暴露了代码中的严重问题。

这表明 TDD 是确保代码不会太凌乱的理想方法。TDD 指出了一些问题区域,例如缺乏单一责任和存在隐藏输入。此外,TDD 有助于删除不确定性代码,并用行为明确的完全可测试代码替换它。

最后,TDD 帮助交付易于阅读、逻辑易于遵循的代码。

在本系列的下一篇文章中,我将演示如何使用在本练习中创建的逻辑来实现功能代码,以及如何进行进一步的测试使其变得更好。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/9/mutation-testing-example-failure-experimentation

作者:[Alex Bunardzic][a]
选题:[lujun9972][b]
译者:[Morisun029](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alex-bunardzichttps://opensource.com/users/jocunddew
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003588_01_rd3os.combacktoschoolseriesk12_rh_021x_0.png?itok=fvorN0e- (Digital hand surrounding by objects, bike, light bulb, graphs)
[2]: https://opensource.com/article/19/9/mutation-testing-example-part-1-how-leverage-failure
[3]: https://opensource.com/article/19/8/mutation-testing-evolution-tdd
[4]: https://en.wikipedia.org/wiki/Separation_of_concerns
[5]: https://martinfowler.com/bliki/InversionOfControl.html
[6]: http://www.laputan.org/drc/drc.html
[7]: http://www.google.com/search?q=new+msdn.microsoft.com