Merge pull request #1 from LCTT/master

Update from LCTT
This commit is contained in:
Xiaobin.Liu 2020-10-02 22:56:52 +08:00 committed by GitHub
commit e1b781b024
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
84 changed files with 4578 additions and 1847 deletions

View File

@ -0,0 +1,217 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12667-1.html)
[#]: subject: (Program hardware from the Linux command line)
[#]: via: (https://opensource.com/article/20/9/hardware-command-line)
[#]: author: (Alan Smithee https://opensource.com/users/alansmithee)
使用 RT-Thread 的 FinSH 对硬件进行编程
======
> 由于物联网IoT的兴起对硬件进行编程变得越来越普遍。RT-Thread 可以让你用 FinSH 从 Linux 命令行与设备进行沟通。
![](https://img.linux.net.cn/data/attachment/album/202009/29/233059w523g55qzvo53h6i.jpg)
RT-Thread 是一个开源的[实时操作系统][2]用于对物联网IoT设备进行编程。FinSH 是 [RT-Thread][3] 的命令行组件,它提供了一套操作界面,使用户可以从命令行与设备进行沟通。它主要用于调试或查看系统信息。
通常情况下,开发调试使用硬件调试器和 `printf` 日志来输出调试信息。但在某些情况下,这两种方法并不是很有用,因为它们是从实际运行的内容中抽象出来的,而且可能很难解析。不过 RT-Thread 是一个多线程系统,当你想知道一个正在运行的线程的状态,或者手动控制系统的当前状态时,这很有帮助。因为它是多线程的,所以你能够拥有一个交互式的 shell你可以直接在设备上输入命令、调用函数来获取你需要的信息或者控制程序的行为。如果你只习惯于 Linux 或 BSD 等现代操作系统,这在你看来可能很普通,但对于硬件黑客来说,这是极其奢侈的,远超将串行电缆直接连线到电路板上以获取一丝错误信息的做法。
FinSH 有两种模式。
* C 语言解释器模式,称为 c-style。
* 传统的命令行模式,称为 msh模块 shell)。
在 C 语言解释器模式下FinSH 可以解析执行大部分 C 语言的表达式,并使用函数调用访问系统上的函数和全局变量。它还可以从命令行创建变量。
在 msh 模式下FinSH 的操作与 Bash 等传统 shell 类似。
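例如,两种模式下的交互大致如下(以下提示符和命令仅为示意,具体可用的命令与函数取决于固件的配置):
```
msh />help              # msh 模式:列出可用命令(输出略)
msh />ps                # msh 模式:查看系统中的线程(输出略)
finsh>>version()        # c-style 模式:以函数调用的形式执行,可查看返回值
finsh>>list_thread()    # c-style 模式:同样通过调用函数来获取线程信息
```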
### GNU 命令标准
当我们在开发 FinSH 时,我们了解到,在编写命令行应用程序之前,你需要熟悉 GNU 命令行标准。这个标准实践框架能给界面带来熟悉感,有助于开发人员在使用时感到舒适和高效。
一个完整的 GNU 命令主要由四个部分组成。
1. 命令名(可执行文件):命令行程序的名称。
2. 子命令:命令程序的子函数名称。
3. 选项:子命令函数的配置选项。
4. 参数:子命令函数配置选项的相应参数。
你可以在任何命令中看到这一点。以 Git 为例:
```
git reset --hard HEAD~1
```
这一点可以分解为:
![GNU command line standards][4]
可执行的命令是 `git`,子命令是 `reset`,使用的选项是 `--hard`,参数是 `HEAD~1`。
再举个例子:
```
systemctl enable --now firewalld
```
可执行的命令是 `systemctl`,子命令是 `enable`,选项是 `--now`,参数是 `firewalld`
想象一下,你想用 RT-Thread 编写一个符合 GNU 标准的命令行程序。FinSH 拥有你所需要的一切,并且会按照预期运行你的代码。更棒的是,你可以依靠这种合规性,让你可以自信地移植你最喜欢的 Linux 程序。
### 编写一个优雅的命令行程序
下面是一个 RT-Thread 运行命令的例子RT-Thread 开发人员每天都在使用这个命令:
```
usage: env.py package [-h] [--force-update] [--update] [--list] [--wizard]
                      [--upgrade] [--printenv]
optional arguments:
  -h, --help      show this help message and exit
  --force-update  force update and clean packages, install or remove the
                  packages by your settings in menuconfig
  --update        update packages, install or remove the packages by your
                  settings in menuconfig
  --list          list target packages
  --wizard        create a new package with wizard
  --upgrade       upgrade local packages list and ENV scripts from git repo
  --printenv      print environmental variables to check
```
正如你所看到的那样,它看起来很熟悉,行为就像你可能已经在 Linux 或 BSD 上运行的大多数 POSIX 应用程序一样。当使用不正确或不充分的语法时,它会提供帮助,它支持长选项和短选项。这种通用的用户界面对于任何使用过 Unix 终端的人来说都是熟悉的。
### 选项种类
选项的种类很多,按长短可分为两大类。
1. 短选项:由一个连字符加一个字母组成,如 `pkgs -h` 中的 `-h` 选项。
2. 长选项:由两个连字符加上单词或字母组成,例如,`scons --target=mdk5` 中的 `--target` 选项。
你可以把这些选项分为三类,由它们是否有参数来决定。
1. 没有参数:该选项后面不能有参数。
2. 参数必选:选项后面必须有参数。
3. 参数可选:选项后可以有参数,但不是必需的。
正如你对大多数 Linux 命令的期望FinSH 的选项解析非常灵活。它可以根据空格或等号作为定界符来区分一个选项和一个参数,或者仅仅通过提取选项本身并假设后面的内容是参数(换句话说,完全没有定界符)。
* `wavplay -v 50`
* `wavplay -v50`
* `wavplay --vol=50`
### 使用 optparse
如果你曾经写过命令行程序,你可能会知道,一般来说,你所选择的语言有一个叫做 optparse 的库或模块。它是提供给程序员的,使得作为命令的一部分输入的选项(比如 `-v` 或 `--verbose`)可以从命令的其余部分中*解析*出来。这可以帮助你的代码把选项与子命令或参数区分开来。
当为 FinSH 编写一个命令时,`optparse` 包希望使用这种格式:
```
MSH_CMD_EXPORT_ALIAS(pkgs, pkgs, this is test cmd.);
```
你可以使用长形式或短形式,或者同时使用两种形式来实现选项。例如:
```
static struct optparse_long long_opts[] =
{
    {"help"        , 'h', OPTPARSE_NONE}, // Long command: help, corresponding to short command h, without arguments.
    {"force-update",  0 , OPTPARSE_NONE}, // Long comman: force-update, without arguments
    {"update"      ,  0 , OPTPARSE_NONE},
    {"list"        ,  0 , OPTPARSE_NONE},
    {"wizard"      ,  0 , OPTPARSE_NONE},
    {"upgrade"     ,  0 , OPTPARSE_NONE},
    {"printenv"    ,  0 , OPTPARSE_NONE},
    { NULL         ,  0 , OPTPARSE_NONE}
};
```
创建完选项后,写出每个选项及其参数的命令和说明:
```
static void usage(void)
{
    rt_kprintf("usage: env.py package [-h] [--force-update] [--update] [--list] [--wizard]\n");
    rt_kprintf("                      [--upgrade] [--printenv]\n\n");
    rt_kprintf("optional arguments:\n");
    rt_kprintf("  -h, --help      show this help message and exit\n");
    rt_kprintf("  --force-update  force update and clean packages, install or remove the\n");
    rt_kprintf("                  packages by your settings in menuconfig\n");
    rt_kprintf("  --update        update packages, install or remove the packages by your\n");
    rt_kprintf("                  settings in menuconfig\n");
    rt_kprintf("  --list          list target packages\n");
    rt_kprintf("  --wizard        create a new package with wizard\n");
    rt_kprintf("  --upgrade       upgrade local packages list and ENV scripts from git repo\n");
    rt_kprintf("  --printenv      print environmental variables to check\n");
}
```
下一步是解析。虽然你还没有实现它的功能,但解析后的代码框架是一样的:
```
int pkgs(int argc, char **argv)
{
    int ch;
    int option_index;
    struct optparse options;

    if (argc == 1)
    {
        usage();        /* 不带任何参数时打印用法说明 */
        return RT_EOK;
    }

    optparse_init(&options, argv);
    while ((ch = optparse_long(&options, long_opts, &option_index)) != -1)
    {
        ch = ch;        /* 这里暂不处理选项,仅避免未使用变量的告警 */

        rt_kprintf("\n");
        rt_kprintf("optopt = %c\n", options.optopt);
        rt_kprintf("optarg = %s\n", options.optarg);
        rt_kprintf("optind = %d\n", options.optind);
        rt_kprintf("option_index = %d\n", option_index);
    }
    rt_kprintf("\n");
    return RT_EOK;
}
```
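在真正实现功能时,可以在解析循环中对返回值进行分发。下面是一个可能的写法(仅为示意,并非本文原有实现;这里假设所用的 optparse 在匹配长选项时会把其在 `long_opts` 中的下标写入 `option_index`,结构体字段名以 `optparse.h` 中的定义为准,`rt_strcmp` 为 RT-Thread 提供的字符串比较函数):
```
while ((ch = optparse_long(&options, long_opts, &option_index)) != -1)
{
    switch (ch)
    {
    case 'h':               /* -h 或 --help打印用法并返回 */
        usage();
        return RT_EOK;

    case 0:                 /* 只有长选项名的情况,例如 --list、--update */
        if (rt_strcmp(long_opts[option_index].longname, "list") == 0)
        {
            /* TODO在这里列出软件包 */
        }
        else if (rt_strcmp(long_opts[option_index].longname, "update") == 0)
        {
            /* TODO在这里更新软件包 */
        }
        break;

    default:                /* 无法识别的选项:打印用法并返回错误 */
        usage();
        return -RT_ERROR;
    }
}
```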
下面是需要包含的头文件:
```
#include "optparse.h"
#include "finsh.h"
```
然后,编译并下载到设备上。
![Output][6]
### 硬件黑客
对硬件进行编程似乎很吓人,但随着物联网的发展,它变得越来越普遍。并不是所有的东西都可以或者应该在树莓派上运行,但在 RT-Thread 上FinSH 可以让你保持熟悉的 Linux 感觉。
如果你对在裸机上编码感到好奇,不妨试试 RT-Thread。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/hardware-command-line
作者:[Alan Smithee][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alansmithee
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://opensource.com/article/20/6/open-source-rtos
[3]: https://github.com/RT-Thread/rt-thread
[4]: https://opensource.com/sites/default/files/uploads/command-line-apps_2.png (GNU command line standards)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/command-line-apps_3.png (Output)

View File

@ -1,34 +1,36 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12663-1.html)
[#]: subject: (What's new with rdiff-backup?)
[#]: via: (https://opensource.com/article/20/9/rdiff-backup-linux)
[#]: author: (Patrik Dufresne https://opensource.com/users/patrik-dufresne)
rdiff-backup 有什么新功能?
11 年后重新打造的 rdiff-backup 2.0 有什么新功能?
======
长期的 Linux 备份方案向 Python 3 的迁移为添加许多新功能提供了机会。
![Hand putting a Linux file folder into a drawer][1]
> 这个老牌 Linux 备份方案迁移到了 Python 3为添加许多新功能提供了机会。
![](https://img.linux.net.cn/data/attachment/album/202009/29/094858pb9pa3sppsq9x5z1.jpg)
2020 年 3 月,[rdiff-backup][2] 升级到了 2.0 版,这距离上一个主要版本已经过去了 11 年。2020 年初 Python 2 的废弃是这次更新的动力,但它为开发团队提供了整合其他功能和优势的机会。
大约二十年来rdiff-backup 帮助 Linux 用户在本地或远程维护他们的数据的完整备份,而无需无谓地消耗资源。这是因为这个开源解决方案可以进行反向增量备份,只备份从上一次备份中改变的文件。
大约二十年来,`rdiff-backup` 帮助 Linux 用户在本地或远程维护他们的数据的完整备份,而无需无谓地消耗资源。这是因为这个开源解决方案可以进行反向增量备份,只备份从上一次备份中改变的文件。
改版(或说,重生)得益于一个新的、自组织的开发团队(由来自 [IKUS Software][3] 的 Eric Zolf 和 Patrik Dufresne 领导,以及来自 [Seravo][4]的 Otto Kekäläinen 领导)的努力,为了所有 rdiff-backup 用户的利益,他们齐心协力。
这次改版(或者说,重生)得益于一个新的、自组织的开发团队(由来自 [IKUS Software][3] 的 Eric Zolf 和 Patrik Dufresne以及来自 [Seravo][4] 的 Otto Kekäläinen 共同领导)的努力,为了所有 `rdiff-backup` 用户的利益,他们齐心协力。
### rdiff-backup 的新功能
在 Eric 的带领下,随着向 Python 3 的迁移,项目被迁移到了一个新的、不受企业限制的[仓库][5],以欢迎贡献。团队还整合了多年来提交的所有补丁,包括稀疏文件支持和硬链接的修复。
在 Eric 的带领下,随着向 Python 3 的迁移,项目被迁移到了一个新的、不受企业限制的[仓库][5],以欢迎贡献。团队还整合了多年来提交的所有补丁,包括稀疏文件支持和硬链接的修复。
#### 用 Travis CI 实现自动化
另一个巨大的改进是增加了一个使用开源 [Travis CI][6] 的持续集成/持续交付 CI/CD 管道。这允许在各种环境下测试 rdiff-backup从而确保变化不会影响方案的稳定性。CI/CD 管道包括集成所有主要平台的构建和二进制发布。
另一个巨大的改进是增加了一个使用开源 [Travis CI][6] 的持续集成/持续交付CI/CD管道。这允许在各种环境下测试 `rdiff-backup`从而确保变化不会影响方案的稳定性。CI/CD 管道包括集成所有主要平台的构建和二进制发布。
#### 使用 yum 和 apt 轻松安装
新的 rdiff-backup 解决方案可以运行在所有主流的 Linux 发行版上,包括 Fedora、Red Hat、Elementary、Debian 等。Frank 和 Otto 付出了艰辛的努力,提供了开放仓库以方便访问和安装。你可以使用你的软件包管理器安装 rdiff-backup或者按照 GitHub 项目页面上的[分步说明][7]进行安装。
新的 `rdiff-backup` 解决方案可以运行在所有主流的 Linux 发行版上,包括 Fedora、Red Hat、Elementary、Debian 等。Frank 和 Otto 付出了艰辛的努力,提供了开放仓库以方便访问和安装。你可以使用你的软件包管理器安装 `rdiff-backup`,或者按照 GitHub 项目页面上的[分步说明][7]进行安装。
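例如,在常见的发行版上可以这样安装(具体包名以发行版仓库为准,以下仅为示意):
```
$ sudo apt install rdiff-backup    # Debian/Ubuntu 等
$ sudo dnf install rdiff-backup    # Fedora 等
```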
#### 新的主页
@ -36,7 +38,7 @@ rdiff-backup 有什么新功能?
### 如何使用 rdiff-backup
如果你是 rdiff-backup 的新手,你可能会对它的易用性感到惊讶。一个备份方案需要让你对备份和恢复过程感到舒适,而不是吓人。
如果你是 `rdiff-backup` 的新手,你可能会对它的易用性感到惊讶。备份方案应该让你对备份和恢复过程感到舒适,而不是吓人。
#### 开始备份
@ -44,61 +46,55 @@ rdiff-backup 有什么新功能?
例如,要备份到名为 `my_backup_drive` 的本地驱动器,请输入:
```
`$ rdiff-backup /home/tux/ /run/media/tux/my_backup_drive/`
$ rdiff-backup /home/tux/ /run/media/tux/my_backup_drive/
```
要将数据备份到异地存储,请使用远程服务器的位置,并在 `::` 后面指向备份驱动器的挂载点:
```
`$ rdiff-backup /home/tux/ tux@example.com::/my_backup_drive/`
$ rdiff-backup /home/tux/ tux@example.com::/my_backup_drive/
```
你可能需要[设置 SSH 密钥][8]来使这个过程不费力
你可能需要[设置 SSH 密钥][8]来使这个过程更轻松
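如果还没有配置过 SSH 密钥,大致的做法如下(用户名和主机名沿用上文示例,仅为示意):
```
$ ssh-keygen -t ed25519
$ ssh-copy-id tux@example.com
```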
#### 还原文件
做备份的原因是有时文件会丢失。为了使恢复尽可能简单,你甚至不需要 rdiff-backup 来恢复文件(虽然使用 `rdiff-backup` 命令提供了一些方便)。
做备份的原因是有时文件会丢失。为了使恢复尽可能简单,你甚至不需要 `rdiff-backup` 来恢复文件(虽然使用 `rdiff-backup` 命令提供了一些方便)。
如果你需要从备份驱动器中获取一个文件,你可以使用 `cp` 将其从备份驱动器复制到本地系统,或者对于远程驱动器使用 `scp` 命令。
对于本地驱动器,使用:
```
`$ cp _run_media/tux/my_backup_drive/Documents/example.txt \ ~/Documents`
$ cp /run/media/tux/my_backup_drive/Documents/example.txt ~/Documents
```
或者用于远程驱动器:
```
`$ scp tux@example.com::/my_backup_drive/Documents/example.txt \ ~/Documents`
$ scp tux@example.com::/my_backup_drive/Documents/example.txt ~/Documents
```
然而,使用 `rdiff-backup` 命令提供了其他选项,包括 `--restore-as-of`。这允许你指定你要恢复的文件的哪个版本。
例如,假设你想恢复一个文件在四天前的版本:
```
`$ rdiff-backup --restore-as-of 4D \ /run/media/tux/foo.txt ~/foo_4D.txt`
$ rdiff-backup --restore-as-of 4D /run/media/tux/foo.txt ~/foo_4D.txt
```
你也可以用 `rdiff-backup` 来获取最新版本:
```
`$ rdiff-backup --restore-as-of now \ /run/media/tux/foo.txt ~/foo_4D.txt`
$ rdiff-backup --restore-as-of now /run/media/tux/foo.txt ~/foo_4D.txt
```
就是这么简单。另外rdiff-backup 还有很多其他选项,例如,你可以从列表中排除文件,从一个远程备份到另一个远程等等,这些你可以在[文档][9]中了解。
就是这么简单。另外,`rdiff-backup` 还有很多其他选项,例如,你可以从列表中排除文件,从一个远程备份到另一个远程等等,这些你可以在[文档][9]中了解。
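例如,排除某些文件的一个可能写法如下(`--exclude` 的匹配规则请以官方文档为准,以下仅为示意):
```
$ rdiff-backup --exclude '**/.cache' --exclude '**/*.tmp' /home/tux/ /run/media/tux/my_backup_drive/
```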
### 总结
我们的开发团队希望用户能够喜欢这个改版后的开源 rdiff-backup 方案,这是我们不断努力的结晶。我们也感谢我们的贡献者,他们真正展示了开源的力量。
我们的开发团队希望用户能够喜欢这个改版后的开源 `rdiff-backup` 方案,这是我们不断努力的结晶。我们也感谢我们的贡献者,他们真正展示了开源的力量。
--------------------------------------------------------------------------------
@ -107,7 +103,7 @@ via: https://opensource.com/article/20/9/rdiff-backup-linux
作者:[Patrik Dufresne][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,37 +1,38 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12666-1.html)
[#]: subject: (How to Fix “Repository is not valid yet” Error in Ubuntu Linux)
[#]: via: (https://itsfoss.com/fix-repository-not-valid-yet-error-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
如何修复 Ubuntu Linux中 的 ”Repository is not valid yet“ 错误
如何修复 Ubuntu Linux 中的 “Release file is not valid yet” 错误
======
我最近[在我的树莓派上安装了 Ubuntu 服务器][1]。我[在 Ubuntu 终端连接上了 Wi-Fi][2],然后做了我在安装任何 Linux 系统后都会做的事情,那就是更新系统。
当我使用 ”sudo apt update“ 命令时,它给了一个对我而言特别的错误。它报出仓库的发布文件在某个时间段内无效。
当我使用 `sudo apt update` 命令时,它给了一个对我而言特别的错误。它报出仓库的发布文件在某个时间段内无效。
**E: Release file for <http://ports.ubuntu.com/ubuntu-ports/dists/focal-security/InRelease> is not valid yet (invalid for another 159d 15h 20min 52s). Updates for this repository will not be applied.**
> E: Release file for <http://ports.ubuntu.com/ubuntu-ports/dists/focal-security/InRelease> is not valid yet (invalid for another 159d 15h 20min 52s). Updates for this repository will not be applied.
下面是完整输出:
```
[email protected]:~$ sudo apt update
Hit:1 http://ports.ubuntu.com/ubuntu-ports focal InRelease
Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease [111 kB]
Get:3 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease [98.3 kB]
Get:4 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [107 kB]
ubuntu@ubuntu:~$ sudo apt update
Hit:1 http://ports.ubuntu.com/ubuntu-ports focal InRelease
Get:2 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease [111 kB]
Get:3 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease [98.3 kB]
Get:4 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [107 kB]
Reading package lists... Done
E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/focal/InRelease is not valid yet (invalid for another 21d 23h 17min 25s). Updates for this repository will not be applied.
E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/focal-updates/InRelease is not valid yet (invalid for another 159d 15h 21min 2s). Updates for this repository will not be applied.
E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/focal-backports/InRelease is not valid yet (invalid for another 159d 15h 21min 32s). Updates for this repository will not be applied.
E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/focal-security/InRelease is not valid yet (invalid for another 159d 15h 20min 52s). Updates for this repository will not be applied.
```
### 修复 Ubuntu 和其他 Linux 发行版中 ”release file is not valid yet“ 的错误。
### 修复 Ubuntu 和其他 Linux 发行版中 “Release file is not valid yet” 的错误。
![][3]
@ -63,7 +64,7 @@ Architectures: amd64 arm64 armhf i386 ppc64el riscv64 s390x
sudo timedatectl set-local-rtc 1
```
timedatectl 命令可以让你在 Linux 上配置时间、日期和[更改时区][4]。
`timedatectl` 命令可以让你在 Linux 上配置时间、日期和[更改时区][4]。
你应该不需要重新启动。它可以立即工作,你可以通过[更新你的 Ubuntu 系统][5]再次验证它。
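如果你想确认设置是否已经生效,可以再次运行 `timedatectl` 查看(下面的过滤方式仅为示意,输出会因系统而异):
```
$ timedatectl | grep 'RTC in local TZ'
          RTC in local TZ: yes
```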
@ -84,7 +85,7 @@ via: https://itsfoss.com/fix-repository-not-valid-yet-error-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,325 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12670-1.html)
[#]: subject: (How to Create/Configure LVM in Linux)
[#]: via: (https://www.2daygeek.com/create-lvm-storage-logical-volume-manager-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
如何在 Linux 中创建/配置 LVM逻辑卷管理
======
![](https://img.linux.net.cn/data/attachment/album/202010/01/111414m2y0mdhgvd9j1bgv.jpg)
<ruby>逻辑卷管理<rt>Logical Volume Management</rt></ruby>LVM在 Linux 系统中扮演着重要的角色,它可以提高可用性、磁盘 I/O、性能和磁盘管理的能力。
LVM 是一种被广泛使用的技术,对于磁盘管理来说,它是非常灵活的。
它在物理磁盘和文件系统之间增加了一个额外的层,允许你创建一个逻辑卷而不是物理磁盘。
LVM 允许你在需要的时候轻松地调整、扩展和减少逻辑卷的大小。
![](https://img.linux.net.cn/data/attachment/album/202010/01/111230el14fubc4ku55o3k.jpeg)
### 如何创建 LVM 物理卷?
你可以使用任何磁盘、RAID 阵列、SAN 磁盘或分区作为 LVM <ruby>物理卷<rt>Physical Volume</rt></ruby>PV
让我们想象一下,你已经添加了三个磁盘,它们是 `/dev/sdb`、`/dev/sdc` 和 `/dev/sdd`
运行以下命令来[发现 Linux 中新添加的 LUN 或磁盘][2]
```
# ls /sys/class/scsi_host
host0
```
```
# echo "- - -" > /sys/class/scsi_host/host0/scan
```
```
# fdisk -l
```
**创建物理卷(`pvcreate`)的一般语法:**
```
pvcreate [物理卷名]
```
当系统中检测到磁盘后,使用 `pvcreate` 命令初始化 LVM PV
```
# pvcreate /dev/sdb /dev/sdc /dev/sdd
Physical volume "/dev/sdb" successfully created
Physical volume "/dev/sdc" successfully created
Physical volume "/dev/sdd" successfully created
```
**请注意:**
* 上面的命令将删除给定磁盘 `/dev/sdb`、`/dev/sdc` 和 `/dev/sdd` 上的所有数据。
* 物理磁盘可以直接添加到 LVM PV 中,而不必是磁盘分区。
使用 `pvdisplay``pvs` 命令来显示你创建的 PV。`pvs` 命令显示的是摘要输出,`pvdisplay` 显示的是 PV 的详细输出:
```
# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb lvm2 a-- 15.00g 15.00g
/dev/sdc lvm2 a-- 15.00g 15.00g
/dev/sdd lvm2 a-- 15.00g 15.00g
```
```
# pvdisplay
"/dev/sdb" is a new physical volume of "15.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 15.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 69d9dd18-36be-4631-9ebb-78f05fe3217f
"/dev/sdc" is a new physical volume of "15.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdc
VG Name
PV Size 15.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID a2092b92-af29-4760-8e68-7a201922573b
"/dev/sdd" is a new physical volume of "15.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdd
VG Name
PV Size 15.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID d92fa769-e00f-4fd7-b6ed-ecf7224af7faS
```
### 如何创建一个卷组
<ruby>卷组<rt>Volume Group</rt></ruby>VG是 LVM 结构中的另一层。基本上,卷组由你创建的 LVM 物理卷组成,你可以将物理卷添加到现有的卷组中,或者根据需要为物理卷创建新的卷组。
**创建卷组(`vgcreate`)的一般语法:**
```
vgcreate [卷组名] [物理卷名]
```
使用以下命令将一个新的物理卷添加到新的卷组中:
```
# vgcreate vg01 /dev/sdb /dev/sdc /dev/sdd
Volume group "vg01" successfully created
```
**请注意:**默认情况下,它使用 4MB 的<ruby>物理范围<rt>Physical Extent</rt></ruby>PE但你可以根据你的需要改变它。
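例如,如果你想在创建卷组时使用 16MB 的 PE 大小,可以通过 `-s` 选项指定(以下仅为示意):
```
# vgcreate -s 16M vg01 /dev/sdb /dev/sdc /dev/sdd
```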
使用 `vgs``vgdisplay` 命令来显示你创建的 VG 的信息:
```
# vgs vg01
VG #PV #LV #SN Attr VSize VFree
vg01 3 0 0 wz--n- 44.99g 44.99g
```
```
# vgdisplay vg01
--- Volume group ---
VG Name vg01
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 3
Act PV 3
VG Size 44.99 GiB
PE Size 4.00 MiB
Total PE 11511
Alloc PE / Size 0 / 0
Free PE / Size 11511 / 44.99 GiB
VG UUID d17e3c31-e2c9-4f11-809c-94a549bc43b7
```
### 如何扩展卷组
如果 VG 没有空间,请使用以下命令将新的物理卷添加到现有卷组中。
**卷组扩展(`vgextend`)的一般语法:**
```
vgextend [已有卷组名] [物理卷名]
```
```
# vgextend vg01 /dev/sde
Volume group "vg01" successfully extended
```
### 如何以 GB 为单位创建逻辑卷?
<ruby>逻辑卷<rt>Logical Volume</rt></ruby>LV是 LVM 结构中的顶层。逻辑卷是由卷组创建的块设备。它作为一个虚拟磁盘分区,可以使用 LVM 命令轻松管理。
你可以使用 `lvcreate` 命令创建一个新的逻辑卷。
**创建逻辑卷(`lvcreate`)的一般语法:**
```
lvcreate -n [逻辑卷名] -L [逻辑卷大小] [要创建的 LV 所在的卷组名称]
```
运行下面的命令,创建一个大小为 10GB 的逻辑卷 `lv001`
```
# lvcreate -n lv001 -L 10G vg01
Logical volume "lv001" created
```
使用 `lvs``lvdisplay` 命令来显示你所创建的 LV 的信息:
```
# lvs /dev/vg01/lv001
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv001 vg01 mwi-a-m-- 10.00g lv001_mlog 100.00
```
```
# lvdisplay /dev/vg01/lv001
--- Logical volume ---
LV Path /dev/vg01/lv001
LV Name lv001
VG Name vg01
LV UUID ca307aa4-0866-49b1-8184-004025789e63
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2020-09-10 11:43:05 -0700
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4
```
### 如何以 PE 大小创建逻辑卷?
或者你可以使用物理范围PE大小创建逻辑卷。
### 如何计算 PE 值?
很简单,例如,如果你有一个 10GB 的卷组,那么 PE 大小是多少?
默认情况下,它使用 4MB 的物理范围,但可以通过运行 `vgdisplay` 命令来检查正确的 PE 大小,因为这可以根据需求进行更改。
```
10GB = 10240MB / 4MBPE 大小) = 2560 PE
```
**用 PE 大小创建逻辑卷(`lvcreate`)的一般语法:**
```
lvcreate -n [逻辑卷名] -l [物理扩展PE大小] [要创建的 LV 所在的卷组名称]
```
要使用 PE 大小创建 10GB 的逻辑卷,命令如下:
```
# lvcreate -n lv001 -l 2560 vg01
```
### 如何创建文件系统
在逻辑卷上创建有效的文件系统之前,你无法使用它。
**创建文件系统的一般语法:**
```
mkfs -t [文件系统类型] /dev/[LV 所在的卷组名称]/[LV 名称]
```
使用以下命令将逻辑卷 `lv001` 格式化为 ext4 文件系统:
```
# mkfs -t ext4 /dev/vg01/lv001
```
对于 xfs 文件系统:
```
# mkfs -t xfs /dev/vg01/lv001
```
### 挂载逻辑卷
最后,你需要挂载逻辑卷来使用它。确保在 `/etc/fstab` 中添加一个条目,以便系统启动时自动加载。
创建一个目录来挂载逻辑卷:
```
# mkdir /lvmtest
```
使用挂载命令[挂载逻辑卷][3]
```
# mount /dev/vg01/lv001 /lvmtest
```
在 [/etc/fstab 文件][4]中添加新的逻辑卷详细信息,以便系统启动时自动挂载:
```
# vi /etc/fstab
/dev/vg01/lv001 /lvmtest xfs defaults 0 0
```
使用 [df 命令][5]检查新挂载的卷:
```
# df -h /lvmtest
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg01-lv001 15360M 34M 15326M 4% /lvmtest
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/create-lvm-storage-logical-volume-manager-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/wp-content/uploads/2020/09/create-lvm-storage-logical-volume-manager-in-linux-2.png
[2]: https://www.2daygeek.com/scan-detect-luns-scsi-disks-on-redhat-centos-oracle-linux/
[3]: https://www.2daygeek.com/mount-unmount-file-system-partition-in-linux/
[4]: https://www.2daygeek.com/understanding-linux-etc-fstab-file/
[5]: https://www.2daygeek.com/linux-check-disk-space-usage-df-command/

View File

@ -1,31 +1,30 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Extend/Increase LVMs (Logical Volume Resize) in Linux)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12673-1.html)
[#]: subject: (How to Extend/Increase LVMs in Linux)
[#]: via: (https://www.2daygeek.com/extend-increase-resize-lvm-logical-volume-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
如何在 Linux 中扩展/增加 LVM 大小(逻辑卷调整)
如何在 Linux 中扩展/增加 LVM 大小(逻辑卷调整)
======
![](https://img.linux.net.cn/data/attachment/album/202010/01/234018qgnwilmmzom8xarb.jpg)
扩展逻辑卷非常简单,只需要很少的步骤,而且不需要卸载某个逻辑卷就可以在线完成。
LVM 的主要目的是灵活的磁盘管理,当你需要的时候,可以很方便地调整、扩展和缩小逻辑卷的大小。
如果你是逻辑卷管理 LVM 新手,我建议你从我们之前的文章开始学习。
如果你是逻辑卷管理LVM新手我建议你从我们之前的文章开始学习。
* **第一部分:[如何在 Linux 中创建/配 置LVM逻辑卷管理][1]**
* **第一部分:[如何在 Linux 中创建/配置 LVM逻辑卷管理][1]**
![][2]
![](https://img.linux.net.cn/data/attachment/album/202010/01/233946ybwbnw4zanjbn00e.jpeg)
扩展逻辑卷涉及到以下步骤:
* 检查 LV 所在的卷组中是否有足够的未分配磁盘空间
* 检查逻辑卷LV所在的卷组中是否有足够的未分配磁盘空间
* 如果有,你可以使用这些空间来扩展逻辑卷
* 如果没有,请向系统中添加新的磁盘或 LUN
* 将物理磁盘转换为物理卷PV
@ -34,13 +33,11 @@ LVM 的主要目的是灵活的磁盘管理,当你需要的时候,可以很
* 扩大文件系统
* 检查扩展的文件系统大小
### 如何创建 LVM 物理卷?
使用 pvcreate 命令创建 LVM 物理卷。
使用 `pvcreate` 命令创建 LVM 物理卷。
当在操作系统中检测到磁盘,使用 pvcreate 命令初始化 LVM PV物理卷
当在操作系统中检测到磁盘,使用 `pvcreate` 命令初始化 LVM 物理卷:
```
# pvcreate /dev/sdc
@ -49,12 +46,10 @@ Physical volume "/dev/sdc" successfully created
**请注意:**
* 上面的命令将删除磁盘 /dev/sdc 上的所有数据。
* 物理磁盘可以直接添加到 LVM PV 中,而不是磁盘分区。
* 上面的命令将删除磁盘 `/dev/sdc` 上的所有数据。
* 物理磁盘可以直接添加到 LVM 物理卷中,而不是磁盘分区。
使用 pvdisplay 命令来显示你所创建的 PV。
使用 `pvdisplay` 命令来显示你所创建的物理卷:
```
# pvdisplay /dev/sdc
@ -74,14 +69,14 @@ PV UUID 69d9dd18-36be-4631-9ebb-78f05fe3217f
### 如何扩展卷组
使用以下命令在现有的卷组中添加一个新的物理卷。
使用以下命令在现有的卷组VG中添加一个新的物理卷
```
# vgextend vg01 /dev/sdc
Volume group "vg01" successfully extended
```
使用 vgdisplay 命令来显示你所创建的 PV。
使用 `vgdisplay` 命令来显示你所创建的物理卷:
```
# vgdisplay vg01
@ -111,13 +106,13 @@ VG UUID d17e3c31-e2c9-4f11-809c-94a549bc43b7
使用以下命令增加现有逻辑卷大小。
**逻辑卷扩展 lvextend 的常用语法。**
**逻辑卷扩展(`lvextend`)的常用语法:**
```
lvextend [要增加的额外空间] [现有逻辑卷名称]
```
使用下面的命令将现有的逻辑卷增加 10GB
使用下面的命令将现有的逻辑卷增加 10GB
```
# lvextend -L +10G /dev/mapper/vg01-lv002
@ -126,33 +121,33 @@ Size of logical volume vg01/lv002 changed from 5.00 GiB (1280 extents) to 15.00
Logical volume var successfully resized
```
使用 PE 大小来扩展逻辑卷
使用 PE 大小来扩展逻辑卷
```
# lvextend -l +2560 /dev/mapper/vg01-lv002
```
要使用百分比 % 扩展逻辑卷,请使用以下命令。
要使用百分比%)扩展逻辑卷,请使用以下命令:
```
# lvextend -l +40%FREE /dev/mapper/vg01-lv002
```
现在,逻辑卷已经扩展,你需要调整文件系统的大小以扩展逻辑卷内的空间
现在,逻辑卷已经扩展,你需要调整文件系统的大小以扩展逻辑卷内的空间
对于基于 ext3 和 ext4 的文件系统,运行以下命令
对于基于 ext3 和 ext4 的文件系统,运行以下命令
```
# resize2fs /dev/mapper/vg01-lv002
```
对于xfs文件系统使用以下命令。
对于 xfs 文件系统,使用以下命令:
```
# xfs_growfs /dev/mapper/vg01-lv002
```
使用 **[df 命令][3]**查看文件系统大小。
使用 [df 命令][3]查看文件系统大小:
```
# df -h /lvmtest1
@ -167,12 +162,12 @@ via: https://www.2daygeek.com/extend-increase-resize-lvm-logical-volume-in-linux
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/create-lvm-storage-logical-volume-manager-in-linux/
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[1]: https://linux.cn/article-12670-1.html
[2]: https://www.2daygeek.com/wp-content/uploads/2020/09/extend-increase-resize-lvm-logical-volume-in-linux-3.png
[3]: https://www.2daygeek.com/linux-check-disk-space-usage-df-command/

View File

@ -1,154 +1,158 @@
[#]: collector: "lujun9972"
[#]: translator: "lxbwolf"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-12671-1.html"
[#]: subject: "10 Open Source Static Site Generators to Create Fast and Resource-Friendly Websites"
[#]: via: "https://itsfoss.com/open-source-static-site-generators/"
[#]: author: "Ankush Das https://itsfoss.com/author/ankush/"
10 个用来创建快速和资源友好网站的静态网站生成工具
10 大静态网站生成工具
======
_**摘要:在寻找部署静态网页的方法吗?这几个开源的静态网站生成工具可以帮你迅速部署界面优美、功能强大的静态网站,无需掌握复杂的 HTML 和 CSS 技能。**_
![](https://img.linux.net.cn/data/attachment/album/202010/01/123903lx1q0w2oh1lxx7wh.jpg)
> 在寻找部署静态网页的方法吗?这几个开源的静态网站生成工具可以帮你迅速部署界面优美、功能强大的静态网站,无需掌握复杂的 HTML 和 CSS 技能。
### 静态网站是什么?
技术上来讲,一个静态网站的网页不是由服务器动态生成的。HTML、CSS 和 JavaScript 文件就静静地躺在服务器的某个路径下,它们的内容与终端用户接收到时看到的是一样的。源码文件已经提前编译好了,源码在每次请求后都不会变化。
技术上来讲,静态网站是指网页不是由服务器动态生成的。HTML、CSS 和 JavaScript 文件就静静地躺在服务器的某个路径下,它们的内容与终端用户接收到的版本是一样的。原始的源码文件已经提前编译好了,源码在每次请求后都不会变化。
Its FOSS 是一个依赖多个数据库的动态网站,网页是在你的浏览器发出请求时即时生成和服务的。大部分网站是动态的,你与这些网站互动时,会有大量的内容在变化
Linux.CN 是一个依赖多个数据库的动态网站,当有浏览器的请求时,网页就会生成并提供服务。大部分网站是动态的,你与这些网站互动时,大量的内容会经常改变
静态网站有一些好处,比如加载时间更短,请求的服务器资源更少,更安全(有争议?)。
静态网站有一些好处,比如加载时间更短,请求的服务器资源更少、更安全(值得商榷)。
传统意义上,静态网站更适合于创建只有少量网页、内容变化不频繁的小网站。
传统上,静态网站更适合于创建只有少量网页、内容变化不频繁的小网站。
然而,静态网站生成工具出现后,静态网站的适用范围越来越大。你还可以使用这些工具搭建博客网站。
然而,随着静态网站生成工具的出现,静态网站的适用范围越来越大。你还可以使用这些工具搭建博客网站。
列出了几个开源的静态网站生成工具,这些工具可以帮你搭建界面优美的网站。
我整理了几个开源的静态网站生成工具,这些工具可以帮你搭建界面优美的网站。
### 最好的开源静态网站生成工具
请注意,静态网站不会提供很复杂的功能。如果你需要复杂的功能,那么你可以参考适用于动态网站的[最好的开源 CMS][1]列表
请注意,静态网站不会提供很复杂的功能。如果你需要复杂的功能,那么你可以参考适用于动态网站的[最佳开源 CMS][1]列表。
#### 1\. Jekyll
#### 1Jekyll
![][2]
Jekyll 是用 [Ruby][3] 写的最受欢迎的开源静态生成工具之一。实际上Jekyll 是 [GitHub 页面][4] 的引擎,它可以让你免费用 GitHub 维护自己的网站。
Jekyll 是用 [Ruby][3] 写的最受欢迎的开源静态生成工具之一。实际上Jekyll 是 [GitHub 页面][4] 的引擎,它可以让你免费用 GitHub 托管网站。
你可以很轻松地跨平台配置 Jekyll包括 Ubuntu。它利用 [Markdown][5]、[Liquid][5]模板语言、HTML 和 CSS 来生成静态的网页文件。如果你要搭建一个没有广告或推广自己工具或服务的产品页的博客网站,它是个不错的选择。
它还支持从常见的 CMS<ruby>内容管理系统<rt>Content management system</rt></ruby>)如 Ghost、WordPress、Drupal 7 迁移你的博客。你可以管理永久链接、类别、页面、文章还可以自定义布局这些功能都很强大。因此即使你已经有了一个网站如果你想转成静态网站Jekyll 会是一个完美的解决方案。你可以参考[官方文档][6]或 [GitHub 页面][7]了解更多内容。
[Jekyll][8]
- [Jekyll][8]
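作为参考,一个典型的 Jekyll 上手流程大致如下(需要先安装 Ruby 环境,命令细节请以官方文档为准,此处仅为示意):
```
$ gem install bundler jekyll
$ jekyll new my-site
$ cd my-site
$ bundle exec jekyll serve
```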
#### 2\. Hugo
#### 2Hugo
![][9]
Hugo 是另一个很受欢迎的用于搭建静态网站的开源框架。它是用 [Go 语言][10]写的。
它运行速度快,使用简单,可靠性高。如果你需要,它也可以提供更高级的主题。它还提供了能提高你效率的实用快捷键。无论是组合展示网站还是博客网站Hogo 都有能力管理大量的内容类型。
它运行速度快、使用简单、可靠性高。如果你需要,它也可以提供更高级的主题。它还提供了一些有用的快捷方式来帮助你轻松完成任务。无论是组合展示网站还是博客网站Hogo 都有能力管理大量的内容类型。
如果你想使用 Hugo你可以参照它的[官方文档][11]或它的 [GitHub 页面][12]来安装以及了解更多相关的使用方法。你还可以用 Hugo 在 GitHub 页面或 CDN如果有需要部署网站
如果你想使用 Hugo你可以参照它的[官方文档][11]或它的 [GitHub 页面][12]来安装以及了解更多相关的使用方法。如果需要的话,你还可以将 Hugo 部署在 GitHub 页面或任何 CDN 上
[Hugo][13]
- [Hugo][13]
#### 3\. Hexo
#### 3Hexo
![][14]
Hexo 基于 [Node.js][15] 的一个有趣的开源框架。像其他的工具一样,你可以用它搭建相当快速的网站,不仅如此,它还提供了丰富的主题和插件。
Hexo 是一个有趣的开源框架,基于 [Node.js][15]。像其他的工具一样,你可以用它搭建相当快速的网站,不仅如此,它还提供了丰富的主题和插件。
它还根据用户的每个需求提供了强大的 API 来扩展功能。如果你已经有一个网站,你可以用它的[迁移][16]扩展轻松完成迁移工作。
你可以参照[官方文档][17]或 [GitHub 页面][18] 来使用 Hexo。
[Hexo][19]
- [Hexo][19]
#### 4\. Gatsby
#### 4Gatsby
![][20]
Gatsby 是一个不断发展的流行开源网站生成框架。它使用 [React.js][21] 来生成快速、界面优美的网站。
Gatsby 是一个越来越流行的开源网站生成框架。它使用 [React.js][21] 来生成快速、界面优美的网站。
几年前在一个实验性的项目中,我曾经非常想尝试一下这个工具,它提供的成千上万的新插件和主题的能力让我印象深刻。与其他静态网站生成工具不同的是,你可以用 Gatsby 在不损失任何功能的前提下来生成静态网站
几年前在一个实验性的项目中,我曾经非常想尝试一下这个工具,它提供的成千上万的新插件和主题的能力让我印象深刻。与其他静态网站生成工具不同的是,你可以使用 Gatsby 生成一个网站,并在不损失任何功能的情况下获得静态网站的好处
它提供了与很多流行的服务的整合功能。当然,你可以不使用它的复杂的功能,或选择一个流行的 CMS 与它配合使用,这也会很有趣。你可以查看他们的[官方文档][22]或它的 [GitHub 页面][23]了解更多内容。
它提供了与很多流行的服务的整合功能。当然,你可以不使用它的复杂的功能,或将其与你选择的流行 CMS 配合使用,这也会很有趣。你可以查看他们的[官方文档][22]或它的 [GitHub 页面][23]了解更多内容。
[Gatsby][24]
- [Gatsby][24]
#### 5\. VuePress
#### 5VuePress
![][25]
VuePress 是基于 [Vue.js][26] 的静态网站生成工具,同时也是开源的渐进式 JavaScript 框架。
VuePress 是由 [Vue.js][26] 支持的静态网站生成工具,而 Vue.js 是一个开源的渐进式 JavaScript 框架。
如果你了解 HTML、CSS 和 JavaScript那么你可以无压力地使用 VuePress。如果你想在搭建网站时抢先别人一步,那么你应该找几个有用的插件和主题。此外,看起来 Vue.js 更新一直很活跃,很多开发者都在关注 Vue.js这是一件好事。
如果你了解 HTML、CSS 和 JavaScript那么你可以无压力地使用 VuePress。你应该可以找到几个有用的插件和主题来为你的网站建设开个头。此外看起来 Vue.js 更新一直很活跃,很多开发者都在关注 Vue.js这是一件好事。
你可以参照他们的[官方文档][27]和 [GitHub 页面][28]了解更多。
[VuePress][29]
- [VuePress][29]
#### 6\. Nuxt.js
#### 6Nuxt.js
![][30]
Nuxt.js 使用 Vue.js 和 Node.js但它致力于模块化并且有能力依赖服务端而非客户端。不仅如此还志在通过描述详尽的错误和其他方面更详细的文档来为开发者提供直观的体验
Nuxt.js 使用 Vue.js 和 Node.js但它致力于模块化并且有能力依赖服务端而非客户端。不仅如此它的目标是为开发者提供直观的体验并提供描述性的错误信息以及详细的文档等。
正如它声称的那样在你用来搭建静态网站的所有工具中Nuxt.js 在功能和灵活性两个方面都是佼佼者。他们还提供了一个 [Nuxt 线上沙盒][31]让你直接测试。
正如它声称的那样在你用来搭建静态网站的所有工具中Nuxt.js 可以做到功能和灵活性两全其美。他们还提供了一个 [Nuxt 线上沙盒][31]让你不费吹灰之力就能直接测试
你可以查看它的 [GitHub 页面][32]和[官方网站][33]了解更多。
#### 7\. Docusaurus
- [Nuxt.js][33]
#### 7、Docusaurus
![][34]
Docusaurus 是一个为搭建文档类网站量身定制的有趣的开源静态网站生成工具。它还是 [Facebook 开源计划][35]的一个项目。
Docusaurus 是一个有趣的开源静态网站生成工具,为搭建文档类网站量身定制。它还是 [Facebook 开源计划][35]的一个项目。
Docusaurus 是用 React 构建的。你可以使用所有必要的功能,像文档版本管理、文档搜索,还有大部分已经预先配置好的翻译。如果你想为你的产品或服务搭建一个文档网站,那么可以试试 Docusaurus。
Docusaurus 是用 React 构建的。你可以使用所有的基本功能,像文档版本管理、文档搜索和翻译大多是预先配置的。如果你想为你的产品或服务搭建一个文档网站,那么可以试试 Docusaurus。
你可以从它的 [GitHub 页面][36]和它的[官网][37]获取更多信息。
[Docusaurus][37]
- [Docusaurus][37]
#### 8\. Eleventy
#### 8Eleventy
![][38]
Eleventy 自称是 Jekyll 的替代品,志在为创建更快的静态网站提供更简单的方式
Eleventy 自称是 Jekyll 的替代品,旨在以更简单的方法来制作更快的静态网站
使用 Eleventy 看起来很简单,它也提供了能解决你的问题的文档。如果你想找一个简单的静态网站生成工具Eleventy 似乎会是一个有趣的选择。
它似乎很容易上手,而且它还提供了适当的文档来帮助你。如果你想找一个简单的静态网站生成工具Eleventy 似乎会是一个有趣的选择。
你可以参照它的 [GitHub 页面][39]和[官网][40]来了解更多的细节。
[Eleventy][40]
- [Eleventy][40]
#### 9\. Publii
#### 9Publii
![][41]
Publii 是一个令人印象深刻的开源 CMS它能使生成一个静态网站变得很容易。它是用 [Electron][42] 和 Vue.js 构建的。如果有需要,你也可以把你的文章从 WorkPress 网站迁移过来。此外,它还提供了与 GitHub 页面、Netlify 及其它类似服务的一键同步功能。
利用 Publii 生成的静态网站,自带所见即所得编辑器。你可以从[官网][43]下载它,或者从它的 [GitHub 页面][44]了解更多信息。
如果你利用 Publii 生成一个静态网站,你还可以得到一个所见即所得的编辑器。你可以从[官网][43]下载它,或者从它的 [GitHub 页面][44]了解更多信息。
[Publii][43]
- [Publii][43]
#### 10\. Primo
#### 10Primo
![][45]
一个有趣的开源静态网站生成工具,目前开发工作仍很活跃。虽然与其他的静态生成工具相比,它还不是一个成熟的解决方案,有些功能还不完善,但它是一个独一无二的项目。
Primo 是一个有趣的开源静态网站生成工具,目前开发工作仍很活跃。虽然与其他的静态网站生成工具相比,它还不是一个成熟的解决方案,有些功能还不完善,但它是一个独特的项目。
Primo 在使用可视化的构建器帮你构建和搭建网站,这样你就可以轻松编辑和部署到任意主机上。
Primo 旨在使用可视化的构建器帮你构建和搭建网站,这样你就可以轻松编辑和部署到任意主机上。
你可以参照[官网][46]或查看它的 [GitHub 页面][47]了解更多信息。
[Primo][46]
- [Primo][46]
### 结语
还有很多文章中没有列出的网站生成工具。然而,我已经尽力写出了能提供最快的加载速度、最好的安全性和令人印象最深刻的灵活性的最好的静态生成工具了
还有很多本文没有列出的静态网站生成工具。不过,我已经尽量挑选了能提供最快加载速度、最好的安全性和令人印象深刻的灵活性的最佳工具。
列表中没有你最喜欢的工具?在下面的评论中告诉我。
@ -159,7 +163,7 @@ via: https://itsfoss.com/open-source-static-site-generators/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,201 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12674-1.html)
[#]: subject: (Recovering deleted files on Linux with testdisk)
[#]: via: (https://www.networkworld.com/article/3575524/recovering-deleted-files-on-linux-with-testdisk.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
用 testdisk 恢复 Linux 上已删除的文件
======
> 这篇文章介绍了 testdisk这是恢复最近删除的文件以及用其他方式修复分区的工具之一非常方便。
![](https://images.idgesg.net/images/article/2018/01/survival_life-preserver_risk_swimming_rescue-100747102-large.jpg)
当你在 Linux 系统上删除一个文件时,它不一定会永远消失,特别是当你最近才刚刚删除了它的时候。
除非你用 `shred` 等工具把它擦掉,否则数据仍然会留在你的磁盘上 —— 而恢复已删除文件的最佳工具之一 `testdisk` 可以帮助你拯救它。虽然 `testdisk` 具有广泛的功能,包括恢复丢失或损坏的分区、让无法启动的磁盘重新可以启动等,但它也经常被用来恢复被误删的文件。
在本篇文章中,我们就来看看如何使用 `testdisk` 恢复已删除的文件,以及该过程中的每一步是怎样的。由于这个过程需要不少的步骤,所以当你做了几次之后,你可能会觉得操作起来会更加得心应手。
### 安装 testdisk
可以使用 `apt install testdisk``yum install testdisk` 等命令安装 `testdisk`。有趣的是,它不仅是一个 Linux 工具,而且还适用于 MacOS、Solaris 和 Windows。
文档可在 [cgsecurity.org][1] 中找到。
### 恢复文件
首先,你必须以 `root` 身份登录,或者有 `sudo` 权限才能使用 `testdisk`。如果你没有 `sudo` 访问权限,你会在这个过程一开始就被踢出,而如果你选择创建了一个日志文件的话,最终会有这样的消息:
```
TestDisk exited normally.
jdoe is not in the sudoers file. This incident will be reported.
```
当你用 `testdisk` 恢复被删除的文件时,你最终会将恢复的文件放在你启动该工具的目录下,而这些文件会属于 `root`。出于这个原因,我喜欢在 `/home/recovery` 这样的目录下启动。一旦文件被成功地还原和验证,就可以将它们移回它们的所属位置,并将它们的所有权也恢复。
在你可以写入的选定目录下开始:
```
$ cd /home/recovery
$ testdisk
```
`testdisk` 提供的第一页信息描述了该工具并显示了一些选项。至少在刚开始时,创建一个日志文件是个好主意,因为它提供的信息可能会很有用。下面是如何做的:
```
Use arrow keys to select, then press Enter key:
>[ Create ] Create a new log file
[ Append ] Append information to log file
[ No Log ] Don't record anything
```
左边的 `>` 以及你看到的反转的字体和背景颜色指出了你按下回车键后将使用的选项。在这个例子中,我们选择了创建日志文件。
然后会提示你输入密码(除非你最近使用过 `sudo`)。
下一步是选择被删除文件所存储的磁盘分区(如果没有高亮显示的话)。根据需要使用上下箭头移动到它。然后点两次右箭头,当 “Proceed” 高亮显示时按回车键。
```
Select a media (use Arrow keys, then press Enter):
Disk /dev/sda - 120 GB / 111 GiB - SSD2SC120G1CS1754D117-551
>Disk /dev/sdb - 500 GB / 465 GiB - SAMSUNG HE502HJ
Disk /dev/loop0 - 13 MB / 13 MiB (RO)
Disk /dev/loop1 - 101 MB / 96 MiB (RO)
Disk /dev/loop10 - 148 MB / 141 MiB (RO)
Disk /dev/loop11 - 36 MB / 35 MiB (RO)
Disk /dev/loop12 - 52 MB / 49 MiB (RO)
Disk /dev/loop13 - 78 MB / 75 MiB (RO)
Disk /dev/loop14 - 173 MB / 165 MiB (RO)
Disk /dev/loop15 - 169 MB / 161 MiB (RO)
>[Previous] [ Next ] [Proceed ] [ Quit ]
```
在这个例子中,被删除的文件在 `/dev/sdb` 的主目录下。
此时,`testdisk` 应该已经选择了合适的分区类型。
```
Disk /dev/sdb - 500 GB / 465 GiB - SAMSUNG HE502HJ
Please select the partition table type, press Enter when done.
[Intel ] Intel/PC partition
>[EFI GPT] EFI GPT partition map (Mac i386, some x86_64...)
[Humax ] Humax partition table
[Mac ] Apple partition map (legacy)
[None ] Non partitioned media
[Sun ] Sun Solaris partition
[XBox ] XBox partition
[Return ] Return to disk selection
```
在下一步中,按向下箭头指向 “[ Advanced ] Filesystem Utils”。
```
[ Analyse ] Analyse current partition structure and search for lost partitions
>[ Advanced ] Filesystem Utils
[ Geometry ] Change disk geometry
[ Options ] Modify options
[ Quit ] Return to disk selection
```
接下来,查看选定的分区。
```
Partition Start End Size in sectors
> 1 P Linux filesys. data 2048 910155775 910153728 [drive2]
```
然后按右箭头选择底部的 “[ List ]”,按回车键。
```
[ Type ] [Superblock] >[ List ] [Image Creation] [ Quit ]
```
请注意,它看起来就像我们从根目录 `/` 开始,但实际上这是我们正在工作的文件系统的基点。在这个例子中,就是 `/home`
```
Directory / <== 开始点
>drwxr-xr-x 0 0 4096 23-Sep-2020 17:46 .
drwxr-xr-x 0 0 4096 23-Sep-2020 17:46 ..
drwx------ 0 0 16384 22-Sep-2020 11:30 lost+found
drwxr-xr-x 1008 1008 4096 9-Jul-2019 14:10 dorothy
drwxr-xr-x 1001 1001 4096 22-Sep-2020 12:12 nemo
drwxr-xr-x 1005 1005 4096 19-Jan-2020 11:49 eel
drwxrwxrwx 0 0 4096 25-Sep-2020 08:08 recovery
...
```
接下来,我们按箭头指向具体的主目录。
```
drwxr-xr-x 1016 1016 4096 17-Feb-2020 16:40 gino
>drwxr-xr-x 1000 1000 20480 25-Sep-2020 08:00 shs
```
按回车键移动到该目录,然后根据需要向下箭头移动到子目录。注意,如果选错了,可以选择列表顶部附近的 `..` 返回。
如果找不到文件,可以按 `/`(就像在 `vi` 中开始搜索时一样),提示你输入文件名或其中的一部分。
```
Directory /shs <== current location
Previous
...
-rw-rw-r-- 1000 1000 426 8-Apr-2019 19:09 2-min-topics
>-rw-rw-r-- 1000 1000 24667 8-Feb-2019 08:57 Up_on_the_Roof.pdf
```
一旦你找到需要恢复的文件,按 `c` 选择它。
注意:你会在屏幕底部看到有用的说明:
```
Use Left arrow to go back, Right to change directory, h to hide deleted files
q to quit, : to select the current file, a to select all files
C to copy the selected files, c to copy the current file <==
```
这时,你就可以在起始目录内选择恢复该文件的位置了(参见前面的说明,在将文件移回原点之前,先在一个合适的地方进行检查)。在这种情况下,`/home/recovery` 目录没有子目录,所以这就是我们的恢复点。
注意:你会在屏幕底部看到有用的说明:
```
Please select a destination where /shs/Up_on_the_Roof.pdf will be copied.
Keys: Arrow keys to select another directory
C when the destination is correct
Q to quit
Directory /home/recovery <== 恢复位置
```
一旦你看到 “Copy done! 1 ok, 0 failed” 的绿色字样,你就会知道文件已经恢复了。
在这种情况下,文件被留在 `/home/recovery/shs` 下(起始目录,附加所选目录)。
在将文件移回原来的位置之前,你可能应该先验证恢复的文件看起来是否正确。确保你也恢复了原来的所有者和组,因为此时文件由 root 拥有。
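验证无误后,就可以把文件移回原位并恢复属主了(下面的路径、用户和属组沿用上文示例,仅为示意):
```
$ sudo mv /home/recovery/shs/Up_on_the_Roof.pdf /home/shs/
$ sudo chown shs:shs /home/shs/Up_on_the_Roof.pdf
```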
**注意:** 对于文件恢复过程中的很多步骤,你可以使用退出(按 `q` 或“[ Quit ]”)来返回上一步。如果你愿意,可以选择退出选项一直回到该过程中的第一步,也可以选择按下 `^c` 立即退出。
#### 恢复训练
使用 `testdisk` 恢复文件相对来说没有什么痛苦,但有些复杂。在紧急时刻到来之前,最好先练习一下恢复文件,让自己有机会熟悉这个过程。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3575524/recovering-deleted-files-on-linux-with-testdisk.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.cgsecurity.org/testdisk.pdf
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (rakino)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,117 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 ways to conduct user research with an open source mindset)
[#]: via: (https://opensource.com/article/20/9/open-source-user-research)
[#]: author: (Alana Fialkoff https://opensource.com/users/alana)
5 ways to conduct user research with an open source mindset
======
See how this team elevates their user research experiences and results
with an open source approach.
![Mobile devices are a big part of our daily lives][1]
There are common beliefs about user experiences—the best ones are user-centered, iterative, and intuitive. When user experience (UX) research is conducted, user stories about these experiences are collected—but the research methods chosen inform user experiences, too.
So, what makes for an engaging research experience, and how can methods evolve alongside products to better connect with users?
[Red Hat's User Experience Design (UXD) research team][2] has the answer: a community-centered, open source mindset. 
As a UX writer on Red Hat's UXD team, I create new design documentation, empower team voices, and share Red Hat's open source story. My passion lies in using content to connect and inspire others. On our [Twitter][3] and [Medium][4] channels, we share thought leadership about UX writing, research, development, and design, all to amplify and grow our open source community. This community is at the heart of what we do. So when I learned how the research team centers community throughout their user testing, I leaped at the chance to tell their story.
### Approaching research with an open source mindset
Red Hat's UXD team creates in the open, and this ideology applies to their research, too. Thinking the open source way involves adopting a community-first and community-driven frame of mind. New ideas can come from anywhere, and an open source mindset embraces these varied voices and perspectives.
Structuring research with an open source mindset means each research technique should be driven by two angles—we don't just want to learn about our users; we want to learn from them, too.
To satisfy both of these user-focused objectives, we make research decisions backed by other voices, not just our own. By sourcing input from beyond our team, we design research experiences that are truly tailored to the communities we serve.
Guiding questions help us streamline this process. Some questions the team's researchers use to develop their methods include:
* Does this method amplify user voices?
* Are we facilitating open communication with our users?
* Does this method build a meaningful connection with our user base?
* What kind of experience does this technique build? Is it memorable? Engaging?
Notice a trend? Each question hinges on community.
### Building research experiences like user experiences
Researchers approach their UX research as a user experience, too. We want our research sessions to have the same qualities as our user interfaces:
* Memorable
* Engaging
* Intuitive
* User-centered
Research offers in-depth engagement with our user base, so it's important to tailor our techniques to that community. When we conduct user research, we learn more about how our users engage with our products and use that knowledge to improve them. We can use that same process to shape future research sessions.
Open source, user-centered research—sounds great. But how can we actually achieve it?
Let's take a look at five collaborative techniques we use to design more immersive user research experiences the open source way.
### Evolve research methods to build engaging experiences and strong connections
Tailor research methods to the target audience, with a goal to create a connective experience. The tools at our disposal vary largely depending on our venue (in-person vs. virtual), so this approach lets us get creative.
* **Add dimension**: Are you conducting a survey? Consider appealing your users' senses offscreen. We've expanded our research to the third dimension using strings, LEGOs, and card diagrams to collect data in more tactile ways.
* **Streamline**: Are your research methods efficient and time-conscious? There's only one way to find out. Follow up with users about their experience post-session. If they say a survey or form was cumbersome, consider condensing your longer questions into smaller, more digestible ones.
* **Simplify**: Use what you know about your user base to customize your techniques. A busy community working in enterprise IT, for example, might only have time to fill out a brief form. Structure your methods so that they're navigable and intuitive for your specific audience.
### Guide research with research
Contextualize questions around user's needs. Explore their goals. Identify their cares, difficulties, and thoughts on product performance. Use these findings to design more meaningful research sessions, and check in often. As our products evolve with our users, our research methods do, too.
### Keep an open mind
Work with the community to disprove our own assumptions. Lean into the spirit of open source by engaging with others across the team, company, user base, and industry. Use these communities like a sounding board for ideas and welcome their feedback. Community-centered conversations take place in environments like:
* Team meetings and brainstorms
* Company calls and research shares
* Conferences and industry panels
* Internal and external blogging platforms
* Other thought leadership forums
This means speaking at monthly meetings, messaging across team channels, and presenting ideas at annual thought leadership events, where a multitude of voices meet from across the industry to share their experience and expertise.
### Communicate, communicate, communicate
With an open source mindset comes open communication. Spark meaningful conversations with users and keep those channels open beyond formal research sessions. Community-driven research techniques start with just that: community. Invest in a strong connection with users to invite deeper insights and facilitate more impactful sessions.
### Experiment with new research techniques and take a user-centered, UX approach
Prototype. Test. Iterate. Repeat. The methods will morph as the open source approach takes shape.
That's the magic of conducting research in the open: dynamic change, driven by a deeper connection to the community.
Learn more about how Red Hat's UXD team conducts research by checking out [DevConf.US][5], where individuals across the open source community come together to talk all things tech, open source, and UX.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/open-source-user-research
作者:[Alana Fialkoff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alana
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mobile-demo-device-phone.png?itok=y9cHLI_F (Mobile devices are a big part of our daily lives)
[2]: https://medium.com/patternfly/from-interviewers-to-interviewees-meet-red-hats-ux-research-team-af860a65abc5
[3]: https://twitter.com/RedHatUXD
[4]: https://medium.com/patternfly
[5]: https://www.devconf.info/us/

View File

@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (FCC auctions should be a long-term boost for 5G availability)
[#]: via: (https://www.networkworld.com/article/3584072/fcc-auctions-should-be-a-long-term-boost-for-5g-availability.html)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)
FCC auctions should be a long-term boost for 5G availability
======
Federal Communications Commission policymaking targets creation of new services by making more spectrum available
[FCC][1]
As the march towards 5G progresses, it's apparent that more spectrum will be needed to fully enable it as a service, and the Federal Communications Commission has clearly taken the message to heart.
### 5G resources
* [What is 5G? Fast wireless technology for enterprises and phones][2]
* [How 5G frequency affects range and speed][3]
* [Private 5G can solve some problems that Wi-Fi cant][4]
* [Private 5G keeps Whirlpool driverless vehicles rolling][5]
* [5G can make for cost-effective private backhaul][6]
* [CBRS can bring private 5G to enterprises][7]
The FCC recently finished [auctioning off priority-access licenses for Citizens Broadband Radio Service (CBRS)][8] spectrum for 5G, representing a 70MHz swath of new bandwidth within the 3.5GHz band. It took in $4.58 billion and is one of several such auctions in recent years aimed at freeing up more channels for wireless data. In 2011, 2014 and 2015 the FCC auctioned off 65MHz in the low- to mid-band, between roughly 1.7GHz and 2.2GHz, for example, and the 700MHz band.
But the operative part of the spectrum now is the sub-6GHz or mid-band spectrum, in the same area as that sold off in the [CBRS][9] auction. A forthcoming C-Band auction will be the big one, according to experts, with a whopping 280MHz of spectrum on the table.
“The big money's coming with the C-band auction,” said Jason Leigh, a research manager with IDC. “Mid-band spectrum in the U.S. is scarce—that's why you're seeing this great urgency.”
[[Get regularly scheduled insights by signing up for Network World newsletters.]][10]
While the major mobile-data providers are still expected to snap up the lion's share of the available licenses in that auction, some of the most innovative uses of the spectrum will be implemented by the enterprise, which will compete against the carriers for some of the available frequencies.
Specialist networks for [IoT][11], asset tracking and other private networking applications are already possible via private LTE, but the maturation of 5G substantially broadens their scope, thanks to that technology's advanced spectrum sharing, low-latency and multi-connectivity features. That, broadly, means a lot of new wire-replacement applications, including industrial automation, facilities management and more.
## Reallocating spectrum means negotiation
It hasn't been a simple matter to shift America's spectrum priorities around, and few would know that better than former FCC chair Tom Wheeler. Much of the spectrum that the government has been pushing to reallocate to mobile broadband over the past decade was already licensed out to various stakeholders, frequently government agencies and satellite network operators.
Those stakeholders have to be moved to different parts of the spectrum, often compensated at taxpayer expense, and getting the various players to share and share alike has frequently been a complicated process, Wheeler said.
“One of the challenges the FCC faces is that the allocation of spectrum was first made from analog assumptions that have been rewritten as a result of digital technology,” he pointed out, citing the transition from analog to digital TV as an example. Where an analog TV signal took up 6MHz of spectrum and required guard bands on either side to avoid interference, four or five digital signals can be fit into that one channel.
Those assumptions have proved challenging to confront. Incumbents have publicly protested the FCC's moves in the mid-band, arguing that insufficient precautions have been taken to avoid interference with existing services, and that changing frequency assignments often means they have to buy new equipment.
“I went through it with the [Department of Defense], with the satellite companies, and the fact of the matter is that one of the big regulatory challenges is that nobody wants to give up the nice secure position that they have based on analog assumptions,” said Wheeler. “I think you also have to pay serious consideration, but I found that claims of interference were the first refuge of people who didn't like the threat of competition or anything else.”
## The future: more services
The broader point of the opening of the mid-band to carrier and enterprise use will be potentially major advantages for U.S. businesses, regardless of the exact manner in which that spectrum is opened, according to Leigh. While the U.S. is sticking to the auction format for allocating wireless spectrum, other countries, like Germany, have set aside mid-band spectrum specifically for enterprise use.
For a given company trying to roll its own private 5G network, that could push spectrum auction prices higher. But, ultimately, the services are going to be available, whether they're provisioned in-house or sold by a mobile carrier or vendor, as long as there's enough spectrum available to them.
“The things you can do on the enterprise side for 5G are what's going to drive the really futuristic stuff,” he said.
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3584072/fcc-auctions-should-be-a-long-term-boost-for-5g-availability.html
作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://www.flickr.com/photos/fccdotgov/4808818548/
[2]: https://www.networkworld.com/article/3203489/what-is-5g-fast-wireless-technology-for-enterprises-and-phones.html
[3]: https://www.networkworld.com/article/3568253/how-5g-frequency-affects-range-and-speed.html
[4]: https://www.networkworld.com/article/3568614/private-5g-can-solve-some-enterprise-problems-that-wi-fi-can-t.html
[5]: https://www.networkworld.com/article/3488799/private-5g-keeps-whirlpool-driverless-vehicles-rolling.html
[6]: https://www.networkworld.com/article/3570724/5g-can-make-for-cost-effective-private-backhaul.html
[7]: https://www.networkworld.com/article/3529291/cbrs-wireless-can-bring-private-5g-to-enterprises.html
[8]: https://www.networkworld.com/article/3572564/cbrs-wireless-yields-45b-for-licenses-to-support-5g.html
[9]: https://www.networkworld.com/article/3180615/faq-what-in-the-wireless-world-is-cbrs.html
[10]: https://www.networkworld.com/newsletters/signup.html
[11]: https://www.networkworld.com/article/3207535/what-is-iot-the-internet-of-things-explained.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VMware plan disaggregates servers; offloads network virtualization and security)
[#]: via: (https://www.networkworld.com/article/3583990/vmware-plan-disaggregates-servers-offloads-network-virtualization-and-security.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
VMware plan disaggregates servers; offloads network virtualization and security
======
VMware Project Monterey includes NVIDIA, Intel and goes a long way to meld bare metal servers, graphics processing units
Henrik5000 / Getty Images
VMware is continuing its effort to remake the data center, cloud and edge to handle the distributed workloads and applications of the future.
At its virtual VMworld 2020 event the company previewed a new architecture called Project Monterey that goes a long way toward melding bare-metal servers, graphics processing units (GPUs), field programmable gate arrays (FPGAs), network interface cards (NICs) and security into a large-scale virtualized environment.
Monterey would extend VMware Cloud Foundation (VCF), which today integrates the company's vSphere virtualization, vSAN storage, NSX networking and vRealize cloud management systems to support GPUs, FPGAs and NICs into a single platform that can be deployed on-premises or in a public cloud.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]
The combination of a rearchitected VCF with Project Monterey will disaggregate server functions, add support for bare-metal servers and let an application running on one physical server consume hardware accelerator resources such as FPGAs from other physical servers, said Kit Colbert, vice president and chief technology officer of VMware's Cloud Platform business unit.
This will also enable physical resources to be dynamically accessed based on policy or via software API, tailored to the needs of the application, Colbert said.  “What we see is that these new apps are using more and more of server CPU cycles. Traditionally, the industry has relied on the CPU for everything--application business logic, processing network packets, specialized work such as 3D modeling, and more,” Colbert wrote in a [blog][2] outlining Project Monterey.
“But as app requirements for compute have continued to grow, hardware accelerators including GPUs, FPGAs, specialized NICs have been developed for processing workloads that could be offloaded from the CPU.  By leveraging these accelerators, organizations can improve performance for the offloaded activities and free up CPU cycles for core app-processing work.”
A key component of Monterey is VMware's SmartNIC which incorporates a general-purpose CPU, out-of-band management, and virtualized device features. As part of Monterey, VMware has enabled its ESXi hypervisor to run on its SmartNICs which will let customers use a single management framework to manage all their compute infrastructure whether it be virtualized or bare metal.
The idea is that by supporting SmartNICs, VCF will be able to maintain compute virtualization on the server CPU while offloading networking and storage I/O functions to the SmartNIC CPU. Applications can then make use of the available network bandwidth while saving server CPU cycles that will improve application performance, Colbert stated.
As for security, each SmartNIC can run a stateful firewall and an advanced security suite.
“Since this will run in the NIC and not in the host, up to thousands of tiny firewalls will be able to be deployed and automatically tuned to protect specific application services that make up the application--wrapping each service with intelligent defenses that can shield any vulnerability of that specific service,” Colbert stated. “Having an ESXi instance on the SmartNIC provides greater defense-in-depth. Even if the x86 ESXi is somehow compromised, the SmartNIC ESXi can still enforce proper network security and other security policies.”
Part of the Monterey rollout included a broad development agreement between VMware and GPU giant Nvidia to bring its BlueField-2 data-processing unit (DPU) and other technologies into Monterey.  The BlueField-2 offloads network, security, and storage tasks from the CPU.
Nvidia DPUs can run a number of tasks, including network virtualization, load balancing, data compression, packet switching and encryption today across two ports, each carrying traffic at 100Gbps. “That's an order of magnitude faster than CPUs geared for enterprise apps. The DPU is taking on these jobs so CPU cores can run more apps, boosting vSphere and data-center efficiency,” according to an Nvidia blog. “As a result, data centers can handle more apps, and their networks will run faster, too.”
In addition to the Monterey agreement, VMware and Nvidia said they would work together to develop an enterprise platform for AI applications.  Specifically, the companies said GPU-optimized AI software available on the [Nvidia NGC hub][3] will be integrated into VMware vSphere, VMware Cloud Foundation and VMware Tanzu.
[Now see how AI can boost data-center availability and efficiency][4]
This will help accelerate AI adoption, letting customers extend existing infrastructure to support AI and manage all applications with a single set of operations.
Intel and Pensando announced SmartNIC technology integration as part of Project Monterey, and  Dell Technologies, HPE and Lenovo said they, too, would support integrated systems based on Project Monterey.
Project Monterey is a technology preview at this point and VMware did not say when it expects to deliver it.
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3583990/vmware-plan-disaggregates-servers-offloads-network-virtualization-and-security.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://blogs.vmware.com/vsphere/2020/09/announcing-project-monterey-redefining-hybrid-cloud-architecture.html
[3]: https://www.nvidia.com/en-us/gpu-cloud/
[4]: https://www.networkworld.com/article/3274654/ai-boosts-data-center-availability-efficiency.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (VMware highlights security in COVID-era networking)
[#]: via: (https://www.networkworld.com/article/3584412/vmware-highlights-security-in-covid-era-networking.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
VMware highlights security in COVID-era networking
======
VMware is tackling the challenges of securing distributed enterprise resources with product enhancements including the new Carbon Black Cloud Workload software and upgrades to its SD-WAN and SASE products.
ArtyStarty / Getty Images
As enterprise workloads continue to move off-premises and employees continue to work remotely during the COVID-19 pandemic, securing that environment remains a critical challenge for IT.
At its virtual VMworld 2020 gathering, VMware detailed products and plans to help customers deal with the challenges of securing distributed enterprise resources.
**More about SD-WAN**: [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][1] • [What SD-Branch is and why you'll need it][2] • [What are the options for securing SD-WAN?][3]
"Amid global disruption, the key to survival for many companies has meant an accelerated shift to the cloud and, ultimately, bolting on security products in their data centers," said Sanjay Poonen, VMware's Chief Operating Officer, Customer Operations. "But legacy security systems are no longer sufficient for organizations that are using the cloud as part of their computing infrastructure. It's time to rethink security for the cloud. Organizations need protection at the workload level, not just at the endpoint."
With that in mind, VMware introduced Carbon Black Cloud Workload software that combines vulnerability reporting with security detection and response capabilities to protect workloads running in virtualized, private and hybrid cloud environments, VMware stated.   
The new packages, along with [other upgrades to its security software][4], represent VMware's continued development and integration of the Carbon Black security technology it [acquired a year ago][5] for $2.1 billion. 
"Tightly integrated with vSphere, VMware Carbon Black Cloud Workload provides agentless security that alleviates installation and management overhead and consolidates the collection of [telemetry][6] for multiple workload security use cases," VMware stated. 
The idea is to allow security and infrastructure teams to automatically secure new and existing workloads at every point in the security lifecycle, while simplifying operations and consolidating the IT and security stack. With the software, customers can analyze attacker behavior patterns over time to detect and stop never-seen-before attacks, including those manipulating known-good software. If an attacker bypasses perimeter defenses, security teams can shut down the attack before it escalates to a data breach, VMware stated. 
All current vSphere 6.5 and VMware Cloud Foundation 4.0 customers can give the package a try for free for the next six months, VMware stated. VMware plans to introduce a Carbon Black Cloud module for hardening and better securing Kubernetes workloads as well.
The company also enhanced its Workspace ONE platform that securely manages end users' mobile devices and cloud-hosted virtual desktops and applications from the cloud or on-premise.
The company says it blended VMware Workspace ONE Horizon and VMware Carbon Black Cloud to offer behavioral detection to protect against ransomware and file-less malware. On VMware vSphere, the solution is integrated into VMware Tools, removing the need to install and manage additional security agents, according to the company. 
Bolstering support for Apple Mac and Microsoft Windows 10 remote users, VMware added Workspace Security Remote, which includes the antivirus, audit and remediation, and detection and response capabilities of Carbon Black Cloud. It also includes the analytics, automation, device health, orchestration, and zero-trust access capabilities of the Workspace ONE platform.
Securing the remote work environment is a common theme among other VMware announcements, including news around its [SD-WAN and secure access service edge (SASE)][7] products and its overarching Virtual Cloud Network architecture.
Taken together, the enhancements further VMware's goal of integrating security features within its infrastructure, a concept it calls intrinsic security, in an effort to better protect networked workloads than traditional piecemeal protection systems could.
"The democratization of compute was already underway before the COVID situation pushed it further, faster," said Sanjay Uppal, senior vice president and general manager of the VeloCloud Business Unit at VMware. "So with the remote workforce growing we need to make privacy and security drop-dead simple, and that is the goal."
A more futuristic goal for the company is to provide a unified approach to security incident detection and response that can leverage multiple domains from endpoint to workload to user to network. An emerging architecture that promises those capabilities is Extended Detection and Response (XDR), and VMware says it intends to support it. 
In a recent _[CSO][8]_ [column][8], Enterprise Strategy Group senior principal analyst Jon Oltsik defined XDR as "an integrated suite of security products spanning hybrid IT architectures, designed to interoperate and coordinate on threat prevention, detection and response. In other words, XDR unifies control points, security telemetry, analytics, and operations into one enterprise system."
ESG research indicates that 84% of organizations are actively integrating security technologies so XDR can act as a turnkey security technology integration solution. 
"While vendors will offer different XDR bundles, ESG research indicates that large organizations really want XDR to include endpoint/server/cloud workload security, network security, coverage of the most common threat vectors (i.e., email/web), file detonation (i.e., sandboxing), threat intelligence, and analytics," Oltsik stated.
Gartner said of XDR: "Although XDR tools are similar in function to security incident and event monitoring (SIEM) and security orchestration, automation and response tools, they are primarily differentiated by the level of integration at deployment and the focus on incident response."
The primary goals of an XDR solution are to increase detection accuracy by correlating threat intelligence and signals across multiple security solutions, and to improve security operations efficiency and productivity.
For its part, VMware said XDR is the opportunity to do just that: provide a unified approach to security incident detection and response that can leverage multiple domains from endpoint to workload to user to network.
VMware called XDR "a multi-year effort to build the most advanced and comprehensive security incident detection and response solutions available" and said it will include cross-platform integration across its portfolio, including Workspace ONE, vSphere, Carbon Black Cloud, and NSX Service-defined Firewall.
Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3584412/vmware-highlights-security-in-covid-era-networking.html
作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[2]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[3]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html
[4]: https://www.networkworld.com/article/3529369/vmware-amps-up-its-cloud-and-data-center-security.html
[5]: https://www.networkworld.com/article/3445383/vmware-builds-security-unit-around-carbon-black-tech.html
[6]: https://www.networkworld.com/article/3575837/streaming-telemetry-gains-interest-as-snmp-reliance-fades.html
[7]: https://www.networkworld.com/article/3583939/vmware-amps-up-security-for-network-sase-sd-wan-products.html
[8]: https://www.csoonline.com/article/3561291/what-is-xdr-10-things-you-should-know-about-this-security-buzz-term.html
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (gxlct008)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,219 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Disable IPv6 on Ubuntu Linux)
[#]: via: (https://itsfoss.com/disable-ipv6-ubuntu-linux/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
How to Disable IPv6 on Ubuntu Linux
======
Are you looking for a way to **disable IPv6** connections on your Ubuntu machine? In this article, I'll teach you exactly how to do it and why you would consider this option. I'll also show you how to **enable or re-enable IPv6** in case you change your mind.
### What is IPv6 and why would you want to disable IPv6 on Ubuntu?
**[Internet Protocol version 6][1]** [(][1] **[IPv6][1]**[)][1] is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. It was developed in 1998 to replace the **IPv4** protocol.
**IPv6** aims to improve security and performance, while also making sure we don't run out of addresses. It assigns unique addresses globally to every device, storing them in **128 bits**, compared to just 32 bits used by IPv4.
![Disable IPv6 Ubuntu][2]
Although the goal is for IPv4 to be replaced by IPv6, there is still a long way to go. Less than **30%** of the sites on the Internet make IPv6 connectivity available to users (tracked by Google [here][3]). IPv6 can also cause [problems with some applications at times][4].
Since **VPNs** provide global services, the fact that IPv6 uses globally routed addresses (uniquely assigned) and that there (still) are ISPs that don't offer IPv6 support shifts this feature lower down their priority list. This way, they can focus on what matters the most for VPN users: security.
Another possible reason you might want to disable IPv6 on your system is not wanting to expose yourself to various threats. Although IPv6 itself is safer than IPv4, the risks I am referring to are of another nature. If you aren't actively using IPv6 and its features, [having IPv6 enabled leaves you vulnerable to various attacks][5], offering the hacker another possible exploitable tool.
On the same note, configuring basic network rules is not enough. You have to pay the same level of attention to tweaking your IPv6 configuration as you do for IPv4. This can prove to be quite a hassle to do (and also to maintain). With IPv6 comes a suite of problems different to those of IPv4 (many of which can be referenced online, given the age of this protocol), giving your system another layer of complexity.
### Disabling IPv6 on Ubuntu [For Advanced Users Only]
In this section, I'll be covering how you can disable IPv6 protocol on your Ubuntu machine. Open up a terminal (**default:** CTRL+ALT+T) and let's get to it!
**Note:** _For most of the commands you are going to input in the terminal, you are going to need root privileges (**sudo**)._
Warning!
If you are a regular desktop Linux user and prefer a stable working system, please avoid this tutorial. This is for advanced users who know what they are doing and why they are doing so.
#### 1\. Disable IPv6 using Sysctl
First of all, you can **check** if you have IPv6 enabled with:
```
ip a
```
You should see an IPv6 address if it is enabled (the name of your internet card might be different):
![IPv6 Address Ubuntu][7]
You have seen the sysctl command in the tutorial about [restarting the network in Ubuntu][8]. We are going to use it here as well. To **disable IPv6**, you only have to input 3 commands:
```
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1
```
You can check if it worked using:
```
ip a
```
You should see no IPv6 entry:
![IPv6 Disabled Ubuntu][9]
However, this only **temporarily disables IPv6**. The next time your system boots, IPv6 will be enabled again.
One method to make this option persist is modifying **/etc/sysctl.conf**. I'll be using vim to edit the file, but you can use any editor you like. Make sure you have **administrator rights** (use **sudo**):
![Sysctl Configuration][10]
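For example, to open the file with root privileges (vim is just what I use; any editor works):

```
sudo vim /etc/sysctl.conf
```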
Add the following lines to the file:
```
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
```
For the settings to take effect use:
```
sudo sysctl -p
```
If IPv6 is still enabled after rebooting, you must create (with root privileges) the file **/etc/rc.local** and fill it with:
```
#!/bin/bash
# /etc/rc.local
# re-apply the kernel parameters from /etc/sysctl.conf at boot
/etc/init.d/procps restart
exit 0
```
Now use [chmod command][11] to make the file executable:
```
sudo chmod 755 /etc/rc.local
```
This makes the system re-read the kernel parameters from your sysctl configuration file during boot.
#### 2\. Disable IPv6 using GRUB
An alternative method is to configure **GRUB** to pass kernel parameters at boot time. You'll have to edit **/etc/default/grub**. Once again, make sure you have administrator privileges:
![GRUB Configuration][13]
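For example (again, substitute your preferred editor):

```
sudo vim /etc/default/grub
```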
Now you need to modify **GRUB_CMDLINE_LINUX_DEFAULT** and **GRUB_CMDLINE_LINUX** to disable IPv6 on boot:
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ipv6.disable=1"
GRUB_CMDLINE_LINUX="ipv6.disable=1"
```
Save the file and run:
```
sudo update-grub
```
The settings should now persist on reboot.
### Re-enabling IPv6 on Ubuntu
To re-enable IPv6, you'll have to undo the changes you made. To enable IPv6 until reboot, enter:
```
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0
```
Otherwise, if you modified **/etc/sysctl.conf** you can either remove the lines you added or change them to:
```
net.ipv6.conf.all.disable_ipv6=0
net.ipv6.conf.default.disable_ipv6=0
net.ipv6.conf.lo.disable_ipv6=0
```
You can optionally reload these values:
```
sudo sysctl -p
```
You should once again see an IPv6 address:
![IPv6 Reenabled in Ubuntu][14]
Optionally, you can remove **/etc/rc.local** :
```
sudo rm /etc/rc.local
```
If you modified the kernel parameters in **/etc/default/grub** , go ahead and delete the added options:
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
```
Now do:
```
sudo update-grub
```
**Wrapping Up**
In this guide, I showed you ways to **disable IPv6** on Linux and gave you an idea of what IPv6 is and why you might want to disable it.
Did you find this article useful? Do you disable IPv6 connectivity? Let us know in the comment section!
--------------------------------------------------------------------------------
via: https://itsfoss.com/disable-ipv6-ubuntu-linux/
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/IPv6
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/disable_ipv6_ubuntu.png?fit=800%2C450&ssl=1
[3]: https://www.google.com/intl/en/ipv6/statistics.html
[4]: https://whatismyipaddress.com/ipv6-issues
[5]: https://www.internetsociety.org/blog/2015/01/ipv6-security-myth-1-im-not-running-ipv6-so-i-dont-have-to-worry/
[6]: https://itsfoss.com/remove-drive-icons-from-unity-launcher-in-ubuntu/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu.png?fit=800%2C517&ssl=1
[8]: https://itsfoss.com/restart-network-ubuntu/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_disabled_ubuntu.png?fit=800%2C442&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/sysctl_configuration.jpg?fit=800%2C554&ssl=1
[11]: https://linuxhandbook.com/chmod-command/
[12]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/grub_configuration-1.jpg?fit=800%2C565&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu-1.png?fit=800%2C517&ssl=1

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,337 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (gxlct008)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (TCP window scaling, timestamps and SACK)
[#]: via: (https://fedoramagazine.org/tcp-window-scaling-timestamps-and-sack/)
[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/)
TCP window scaling, timestamps and SACK
======
![][1]
The Linux TCP stack has a myriad of _sysctl_ knobs that allow you to change its behavior. This includes the amount of memory that can be used for receive or transmit operations, the maximum number of sockets, and optional features and protocol extensions.
There are multiple articles that recommend disabling TCP extensions such as timestamps or selective acknowledgments (SACK) for various “performance tuning” or “security” reasons.
This article provides background on what these extensions do, why they
are enabled by default, how they relate to one another and why it is normally a bad idea to turn them off.
### TCP Window scaling
The data transmission rate that TCP can sustain is limited by several factors. Some of these are:
* Round trip time (RTT).  This is the time it takes for a packet to get to the destination and a reply to come back. Lower is better.
* lowest link speed of the network paths involved
* frequency of packet loss
* the speed at which new data can be made available for transmission
For example, the CPU needs to be able to pass data to the network adapter fast enough. If the CPU needs to encrypt the data first, the adapter might have to wait for new data. In similar fashion disk storage can be a bottleneck if it cant read the data fast enough.
* The maximum possible size of the TCP receive window. The receive window determines how much data (in bytes) TCP can transmit before it has to wait for the receiver to report reception of that data. This is announced by the receiver. The receiver will constantly update this value as it reads and acknowledges reception of the incoming data. The receive window's current value is contained in the [TCP header][2] that is part of every segment sent by TCP. The sender is thus aware of the current receive window whenever it receives an acknowledgment from the peer. This means that the higher the round-trip time, the longer it takes for the sender to get receive window updates.
TCP is limited to at most 64 kilobytes of unacknowledged (in-flight) data. This is not even close to what is needed to sustain a decent data rate in most networking scenarios. Let us look at some examples.
##### Theoretical data rate
With a round-trip-time of 100 milliseconds, TCP can transfer at most 640 kilobytes per second. With a 1 second delay, the maximum theoretical data rate drops down to only 64 kilobytes per second.
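As a rough sketch of the arithmetic behind these numbers (at most one full receive window per round trip):

```
maximum throughput ≈ receive window / round-trip time
                   ≈ 64 KB / 0.1 s = 640 KB/s   (100 ms RTT)
                   ≈ 64 KB / 1 s   =  64 KB/s   (1 s RTT)
```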
This is because of the receive window. Once 64 kilobytes of data have been sent, the receive window is already full. The sender must wait until the peer informs it that at least some of the data has been read by the application.
The first segment sent reduces the TCP window by the size of that segment. It takes one round-trip before an update of the receive window value will become available. When updates arrive with a 1 second delay, this results in a 64 kilobyte limit even if the link has plenty of bandwidth available.
In order to fully utilize a fast network with several milliseconds of delay, a window size larger than what classic TCP supports is a must. The 64 kilobyte limit is an artifact of the protocol's specification: the TCP header reserves only 16 bits for the receive window size. This allows receive windows of up to 64 kilobytes. When the TCP protocol was originally designed, this size was not seen as a limit.
Unfortunately, it's not possible to just change the TCP header to support a larger maximum window value. Doing so would mean all implementations of TCP would have to be updated simultaneously or they wouldn't understand one another anymore. To solve this, the interpretation of the receive window value is changed instead.
The window scaling option allows this while keeping compatibility with existing implementations.
#### TCP Options: Backwards-compatible protocol extensions
TCP supports optional extensions. This allows the protocol to be enhanced with new features without the need to update all implementations at once. When a TCP initiator connects to the peer, it also sends a list of supported extensions. All extensions follow the same format: a unique option number followed by the length of the option and the option data itself.
The TCP responder checks all the option numbers contained in the connection request. If it does not understand an option number, it skips that option's length bytes of data and checks the next option number. The responder omits those it did not understand from the reply. This allows both the sender and receiver to learn the common set of supported options.
With window scaling, the option data always consist of a single number.
### The window scaling option
```
Window Scale option (WSopt): Kind: 3, Length: 3
    +---------+---------+---------+
    | Kind=3  |Length=3 |shift.cnt|
    +---------+---------+---------+
         1         1         1
```
The [window scaling][3] option tells the peer that the receive window value found in the TCP header should be scaled by the given number to get the real size.
For example, a TCP initiator that announces a window scaling factor of 7 tries to instruct the responder that any future packets that carry a receive window value of 512 really announce a window of 65,536 bytes. This is an increase by a factor of 128. This would allow a maximum TCP window of 8 megabytes.
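To make the arithmetic concrete, here is the same example restated (nothing new, just the numbers above):

```
window value in the TCP header:       512
announced window scale factor:          7
effective receive window:  512 × 2^7 = 512 × 128 = 65,536 bytes
```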
A TCP responder that does not understand this option ignores it. The TCP packet sent in reply to the connection request (the syn-ack) then does not contain the window scale option. In this case both sides can only use a 64k window size. Fortunately, almost every TCP stack supports and enables this option by default, including Linux.
The responder includes its own desired scaling factor. Both peers can use a different number. It's also legitimate to announce a scaling factor of 0. This means the peer should treat the receive window value it receives verbatim, but it allows scaled values in the reply direction — the recipient can then use a larger receive window.
Unlike SACK or TCP timestamps, the window scaling option only appears in the first two packets of a TCP connection; it cannot be changed afterwards. It is also not possible to determine the scaling factor by looking at a packet capture of a connection that does not include the initial three-way handshake.
The largest supported scaling factor is 14. This allows TCP window sizes
of up to one Gigabyte.
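With the 16-bit header field and the maximum scale factor of 14, the limit works out to roughly one gigabyte:

```
65,535 bytes × 2^14 = 1,073,725,440 bytes ≈ 1 GB
```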
##### Window scaling downsides
Window scaling can cause data corruption in very special cases. But before you disable the option: under normal circumstances this corruption is impossible, and there is also a safeguard in place that prevents it. Unfortunately, some people disable this safeguard without realizing its relationship with window scaling. First, let's have a look at the actual problem that needs to be addressed. Imagine the following sequence of events:
1. The sender transmits segments: s_1, s_2, s_3, … s_n
2.  The receiver sees: s_1, s_3, .. s_n and sends an acknowledgment for s_1.
3.  The sender considers s_2 lost and sends it a second time. It also sends new data contained in segment s_n+1.
4.  The receiver then sees: s_2, s_n+1, s_2: the packet s_2 is received twice.
This can happen for example when a sender triggers re-transmission too early. Such erroneous re-transmits are never a problem in normal cases, even with window scaling. The receiver will just discard the duplicate.
#### Old data to new data
The TCP sequence number can be at most 4 gigabytes. If it grows beyond this, the sequence wraps back to 0 and then increases again. This is not a problem in itself, but if this occurs fast enough, the above scenario can create an ambiguity.
If a wrap-around occurs at the right moment, the sequence number s_2 (the re-transmitted packet) can already be larger than s_n+1. Thus, in the last step (4), the receiver may interpret this as: s_2, s_n+1, s_n+m, i.e. it could view the old packet s_2 as containing new data.
Normally, this won't happen because a wrap-around occurs only every couple of seconds or minutes, even on high-bandwidth links. The interval between the original and an unneeded re-transmit will be a lot smaller.
For example, with a transmit speed of 50 megabytes per second, a duplicate needs to arrive more than one minute late for this to become a problem. The sequence numbers do not wrap fast enough for small delays to induce this problem.
Once TCP approaches gigabyte-per-second throughput rates, the sequence numbers can wrap so fast that even a delay of only a few milliseconds can create duplicates that TCP cannot detect anymore. By solving the problem of the too-small receive window, TCP can now be used for network speeds that were impossible before, and that creates a new, albeit rare, problem. To safely use gigabytes-per-second speeds in environments with very low RTT, receivers must be able to detect such old duplicates without relying on the sequence number alone.
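A back-of-the-envelope calculation shows why the wrap-around interval shrinks as throughput grows (4 GB being the size of the sequence number space):

```
wrap-around interval ≈ sequence number space / transmit rate
                     ≈ 4 GB / 50 MB/s ≈ 85 seconds
                     ≈ 4 GB / 1 GB/s  ≈  4 seconds
```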
### TCP time stamps
#### A best-before date
In the most simple terms, [TCP timestamps][3] just add a time stamp to the packets to resolve the ambiguity caused by very fast sequence number wrap around. If a segment appears to contain new data, but its timestamp is older than the last in-window packet, then the sequence number has wrapped and the ”new” packet is actually an older duplicate. This resolves the ambiguity of re-transmits even for extreme corner cases.
But this extension allows for more than just detection of old packets. The other major feature made possible by TCP timestamps is more precise round-trip time measurement (RTTm).
#### A need for precise round-trip-time estimation
When both peers support timestamps,  every TCP segment carries two additional numbers: a timestamp value and a timestamp echo.
```
TCP Timestamp option (TSopt): Kind: 8, Length: 10
+-------+----+----------------+-----------------+
|Kind=8 | 10 |TS Value (TSval)|EchoReply (TSecr)|
+-------+----+----------------+-----------------+
    1      1         4                4
```
An accurate RTT estimate is crucial for TCP performance. TCP automatically re-sends data that was not acknowledged. Re-transmission is triggered by a timer: If it expires, TCP considers one or more packets that it has not yet received an acknowledgment for to be lost. They are then sent again.
But “has not been acknowledged” does not mean the segment was lost. It is also possible that the receiver did not send an acknowledgment so far or that the acknowledgment is still in flight. This creates a dilemma: TCP must wait long enough for such slight delays to not matter, but it can't wait for too long either.
##### Low versus high network delay
In networks with a high delay, if the timer fires too fast, TCP frequently wastes time and bandwidth with unneeded re-sends.
In networks with a low delay, however, waiting too long causes reduced throughput when a real packet loss occurs. Therefore, the timer should expire sooner in low-delay networks than in those with a high delay. The TCP retransmission timeout therefore cannot use a fixed constant value. It needs to adapt the value based on the delay that it experiences in the network.
##### Round-trip time measurement
TCP picks a retransmit timeout that is based on the expected round-trip time (RTT). The RTT is not known in advance. RTT is estimated by measuring the delta between the time a segment is sent and the time TCP receives an acknowledgment for the data carried by that segment.
This is complicated by several factors.
* For performance reasons, TCP does not generate a new acknowledgment for every packet it receives. It waits  for a very small amount of time: If more segments arrive, their reception can be acknowledged with a single ACK packet. This is called “cumulative ACK”.
*  The round-trip-time is not constant. This is because of a myriad of factors. For example, a client might be a mobile phone switching to different base stations as it's moved around. It's also possible that packet switching takes longer when link or CPU utilization increases.
* a packet that had to be re-sent must be ignored during computation. This is because the sender cannot tell if the ACK for the re-transmitted segment is acknowledging the original transmission (that arrived after all) or the re-transmission.
This last point is significant: When TCP is busy recovering from a loss, it may only receive ACKs for re-transmitted segments. It then can't measure (update) the RTT during this recovery phase. As a consequence it can't adjust the re-transmission timeout, which then keeps growing exponentially. That's a pretty specific case (it assumes that other mechanisms such as fast retransmit or SACK did not help). Nevertheless, with TCP timestamps, RTT evaluation is done even in this case.
If the extension is used, the peer reads the timestamp value from the TCP segments extension space and stores it locally. It then places this value in all the segments it sends back as the “timestamp echo”.
Therefore the option carries two timestamps: the sender's own timestamp and the most recent timestamp it received from the peer. The “echo timestamp” is used by the original sender to compute the RTT. It's the delta between its current timestamp clock and what was reflected in the “timestamp echo”.
##### Other timestamp uses
TCP timestamps even have other uses beyond PAWS (Protection Against Wrapped Sequence numbers) and RTT measurements. For example, it becomes possible to detect if a retransmission was unnecessary. If the acknowledgment carries an older timestamp echo, the acknowledgment was for the initial packet, not the re-transmitted one.
Another, more obscure use case for TCP timestamps is related to the TCP [syn cookie][4] feature.
##### TCP connection establishment on server side
When connection requests arrive faster than a server application can accept the new incoming connection, the connection backlog will eventually reach its limit. This can occur because of a mis-configuration of the system or a bug in the application. It also happens when one or more clients send connection requests without reacting to the syn ack response. This fills the connection queue with incomplete connections. It takes several seconds for these entries to time out. This is called a “syn flood attack”.
##### TCP timestamps and TCP syn cookies
Some TCP stacks allow new connections to be accepted even if the queue is full. When this happens, the Linux kernel will print a prominent message to the system log:
> Possible SYN flooding on port P. Sending Cookies. Check SNMP counters.
This mechanism bypasses the connection queue entirely. The information that is normally stored in the connection queue is encoded into the SYN/ACK responses TCP sequence number. When the ACK comes back, the queue entry can be rebuilt from the sequence number.
The sequence number only has limited space to store information. Connections established using the TCP syn cookie mechanism can not support TCP options for this reason.
The TCP options that are common to both peers can be stored in the timestamp, however. The ACK packet reflects the value back in the timestamp echo field, which allows the agreed-upon TCP options to be recovered as well. Otherwise, cookie connections are restricted by the standard 64 kbyte receive window.
##### Common myths: timestamps are bad for performance
Unfortunately some guides recommend disabling TCP timestamps to reduce the number of times the kernel needs to access the timestamp clock to get the current time. This is not correct. As explained before, RTT estimation is a necessary part of TCP. For this reason, the kernel always takes a microsecond-resolution time stamp when a packet is received/sent.
Linux re-uses the clock timestamp taken for the RTT estimation for the remainder of the packet processing step. This also avoids the extra clock access to add a timestamp to an outgoing TCP packet.
The entire timestamp option only requires 10 bytes of TCP option space in each packet; this is not a significant decrease in the space available for packet payload.
##### Common myths: timestamps are a security problem
Some security audit tools and (older) blog posts recommend disabling TCP timestamps because they allegedly leak system uptime: this would then allow estimating the patch level of the system/kernel. This was true in the past: the timestamp clock is based on a constantly increasing value that starts at a fixed value on each system boot. A timestamp value would give an estimate as to how long the machine has been running (uptime).
As of Linux 4.12 TCP timestamps do not reveal the uptime anymore. All timestamp values sent use a peer-specific offset. Timestamp values also wrap every 49 days.
In other words, connections from or to address “A” see a different timestamp than connections to the remote address “B”.
Run _sysctl net.ipv4.tcp_timestamps=2_ to disable the randomization offset. This makes analyzing packet traces recorded by tools like _wireshark_ or _tcpdump_ easier: packets sent from the host then all have the same clock base in their TCP option timestamp. For normal operation the default setting should be left as-is.
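For reference, the same setting expressed as commands (these are standard Linux sysctls; 1 is the default with randomized offsets, 2 keeps timestamps but drops the offset, 0 would disable timestamps entirely):

```
# keep timestamps, but disable the per-peer randomization offset
sudo sysctl -w net.ipv4.tcp_timestamps=2

# restore the default behavior
sudo sysctl -w net.ipv4.tcp_timestamps=1
```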
### Selective Acknowledgments
TCP has problems if several packets in the same window of data are lost. This is because TCP Acknowledgments are cumulative, but only for packets
that arrived in-sequence. Example:
* Sender transmits segments s_1, s_2, s_3, … s_n
* Sender receives ACK for s_2
* This means that both s_1 and s_2 were received and the
sender no longer needs to keep these segments around.
* Should s_3 be re-transmitted? What about s_4? s_n?
The sender waits for a “retransmission timeout” or duplicate ACKs for s_2 to arrive. If a retransmit timeout occurs or several duplicate ACKs for s_2 arrive, the sender transmits s_3 again.
If the sender receives an acknowledgment for s_n, s_3 was the only missing packet. This is the ideal case. Only the single lost packet was re-sent.
If the sender receives an acknowledged segment that is smaller than s_n, for example s_4, that means that more than one packet was lost. The
sender needs to re-transmit the next segment as well.
##### Re-transmit strategies
It's possible to just repeat the same sequence: re-send the next packet until the receiver indicates it has processed all packets up to s_n. The problem with this approach is that it requires one RTT until the sender knows which packet it has to re-send next. While such a strategy avoids unnecessary re-transmissions, it can take several seconds or more until TCP has re-sent the entire window of data.
The alternative is to re-send several packets at once. This approach allows TCP to recover more quickly when several packets have been lost. In the above example, TCP re-sends s_3, s_4, s_5, ..., while it can only be sure that s_3 has been lost.
From a latency point of view, neither strategy is optimal. The first strategy is fast if only a single packet has to be re-sent, but takes too long when multiple packets were lost.
The second one is fast even if multiple packets have to be re-sent, but at the cost of wasting bandwidth. In addition, such a TCP sender could have transmitted new data already while it was doing the unneeded re-transmissions.
With the available information TCP cannot know which packets were lost. This is where TCP [Selective Acknowledgments][5] (SACK) come in. Just like window scaling and timestamps, it is another optional, yet very useful TCP feature.
##### The SACK option
```
   TCP Sack-Permitted Option: Kind: 4, Length 2
   +---------+---------+
   | Kind=4  | Length=2|
   +---------+---------+
```
A sender that supports this extension includes the “Sack Permitted” option in the connection request. If both endpoints support the extension, then a peer that detects a packet is missing in the data stream can inform the sender about this.
```
   TCP SACK Option: Kind: 5, Length: Variable
                     +--------+--------+
                     | Kind=5 | Length |
   +--------+--------+--------+--------+
   |      Left Edge of 1st Block       |
   +--------+--------+--------+--------+
   |      Right Edge of 1st Block      |
   +--------+--------+--------+--------+
   |                                   |
   /            . . .                  /
   |                                   |
   +--------+--------+--------+--------+
   |      Left Edge of nth Block       |
   +--------+--------+--------+--------+
   |      Right Edge of nth Block      |
   +--------+--------+--------+--------+
```
A receiver that encounters segment s_2 followed by s_5…s_n will include a SACK block when it sends the acknowledgment for s_2:
```
                +--------+-------+
                | Kind=5 |   10  |
+--------+------+--------+-------+
| Left edge: s_5                 |
+--------+--------+-------+------+
| Right edge: s_n                |
+--------+-------+-------+-------+
```
This tells the sender that segments up to s_2 arrived in-sequence, but it also lets the sender know that the segments s_5 to s_n were also received. The sender can then re-transmit just the two missing segments (s_3 and s_4) and proceed to send new data.
##### The mythical lossless network
In theory, SACK provides no advantage if the connection cannot experience packet loss, or if the connection has such a low latency that even waiting one full RTT does not matter.
In practice lossless behavior is virtually impossible to ensure.
Even if the network and all its switches and routers have ample bandwidth and buffer space, packets can still be lost:
* The host operating system might be under memory pressure and drop
packets. Remember that a host might be handling tens of thousands of packet streams simultaneously.
* The CPU might not be able to drain incoming packets from the network interface fast enough. This causes packet drops in the network adapter itself.
* If TCP timestamps are not available even a connection with a very small RTT can stall momentarily during loss recovery.
Use of SACK does not increase the size of TCP packets unless a connection experiences packet loss. Because of this, there is hardly a reason to disable this feature. Almost all TCP stacks support SACK; it is typically only absent on low-power IoT-like devices that are not doing TCP bulk data transfers.
When a Linux system accepts a connection from such a device, TCP automatically disables SACK for the affected connection.
### Summary
The three TCP extensions examined in this post are all related to TCP performance and should best be left to the default setting: enabled.
The TCP handshake ensures that only extensions that are understood by both parties are used, so there is never a need to disable an extension globally just because a peer might not support it.
Turning these extensions off results in severe performance penalties, especially in the case of TCP window scaling and SACK. TCP timestamps can be disabled without an immediate disadvantage; however, there is no compelling reason to do so anymore. Keeping them enabled also makes it possible to support TCP options even when SYN cookies come into effect.
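If you want to verify that the defaults are still in place, the corresponding knobs can be queried directly; on a typical Linux system all three report 1 (enabled):

```
$ sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_timestamps net.ipv4.tcp_sack
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
```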
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/tcp-window-scaling-timestamps-and-sack/
作者:[Florian Westphal][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/strlen/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/08/tcp-window-scaling-816x346.png
[2]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure
[3]: https://www.rfc-editor.org/info/rfc7323
[4]: https://en.wikipedia.org/wiki/SYN_cookies
[5]: https://www.rfc-editor.org/info/rfc2018

View File

@ -1,288 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create a mobile app with Flutter)
[#]: via: (https://opensource.com/article/20/9/mobile-app-flutter)
[#]: author: (Vitaly Kuprenko https://opensource.com/users/kooper)
Create a mobile app with Flutter
======
Start your journey toward cross-platform development with the popular
Flutter framework.
![A person looking at a phone][1]
[Flutter][2] is a popular project among mobile developers around the world. The framework has a massive, friendly community of enthusiasts, which continues to grow as Flutter helps programmers take their projects into the mobile space.
This tutorial is meant to help you start doing mobile development with Flutter. After reading it, you'll know how to quickly install and set up the framework to start coding for smartphones, tablets, and other platforms.
This how-to assumes you have [Android Studio][3] installed on your computer and some experience working with it.
### What is Flutter?
Flutter enables developers to build apps for several platforms, including:
* Android
* iOS
* Web (in beta)
* macOS (in development)
* Linux (in development)
Support for macOS and Linux is in early development, while web support is expected to be released soon. This means that you can try out its capabilities now (as I'll describe below).
### Install Flutter
I'm using Ubuntu 18.04, but the installation process is similar on other Linux distributions, such as Arch or Mint.
#### Install with snapd
To install Flutter on Ubuntu or similar distributions using [snapd][4], enter this in a terminal:
```
$ sudo snap install flutter --classic
flutter 0+git.142868f from flutter Team/ installed
```
Then launch it using the `flutter` command. Upon the first launch, the framework downloads to your computer:
```
$ flutter
Initializing Flutter
Downloading https://storage.googleapis.com/flutter_infra[...]
```
Once the download is finished, you'll see a message telling you that Flutter is initialized:
![Flutter initialized][5]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
#### Install manually
If you don't have snapd or your distribution isn't Ubuntu, the installation process will be a little bit different. In that case, [download][7] the version of Flutter recommended for your operating system.
![Install Flutter manually][8]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Then extract it to your home directory.
Open the `.bashrc` file in your home directory (or `.zshrc` if you use the [Z shell][9]) in your favorite text editor. Because it's a hidden file, you must first enable showing hidden files in your file manager or open it from a terminal with:
```
$ gedit ~/.bashrc &
```
Add the following line to the end of the file:
```
export PATH="$PATH:$HOME/flutter/bin"
```
Save and close the file. Keep in mind that if you extracted Flutter somewhere other than your home directory, the [path to Flutter SDK][10] will be different.
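For example, if you extracted the SDK to /opt instead (a hypothetical location), the line would read:

```
export PATH="$PATH:/opt/flutter/bin"
```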
Close your terminal and then open it again so that your new configuration loads. Alternatively, you can source the configuration with:
```
$ . ~/.bashrc
```
If you don't see an error, then everything is fine.
This installation method is a little bit harder than using the `snap` command, but it's pretty versatile and lets you install the framework on almost any distribution.
#### Check the installation
To check the result, enter the following in the terminal:
```
flutter doctor -v
```
You'll see information about installed components. Don't worry if you see errors. You haven't installed any IDE plugins for working with Flutter SDK yet.
![Checking Flutter installation with the doctor command][11]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
### Install IDE plugins
You should install plugins in your [integrated development environment (IDE)][12] to help it interface with the Flutter SDK, interact with devices, and build code.
The three main IDE tools that are commonly used for Flutter development are IntelliJ IDEA (Community Edition), Android Studio, and VS Code (or [VSCodium][13]). I'm using Android Studio in this tutorial, but the steps are similar to how they work on IntelliJ IDEA (Community Edition) since they're built on the same platform.
First, launch **Android Studio**. Open **Settings** and go to the **Plugins** pane, and select the **Marketplace** tab. Enter **Flutter** in the search line and click **Install**.
![Flutter plugins][14]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
You'll probably see an option to install the **Dart** plugin; agree to it. If you don't see the Dart option, then install it manually by repeating the steps above. I also recommend using the **Rainbow Brackets** plugin, which makes code navigation easier.
That's it! You've installed all the plugins you need. You can check by entering a familiar command in the terminal:
```
flutter doctor -v
```
![Checking Flutter plugins with the doctor command][15]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
### Build your "Hello World" application
To start a new project, create a Flutter project:
1. Select **New > New Flutter project**.
![Creating a new Flutter plugin][16]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
2. In the window, choose the type of project you want. In this case, you need **Flutter Application**.
3. Name your project **hello_world**. Note that you should use a merged name, so use an underscore instead of a space. You may also need to specify the path to the SDK.
![Naming a new Flutter plugin][17]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
4. Enter the package name.
You've created a project! Now you can launch it on a device or by using an emulator.
![Device options in Flutter][18]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Select the device you want and press **Run**. In a moment, you will see the result.
![Flutter demo on mobile device][19]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Now you can start working on an [intermediate project][20].
### Try Flutter for web
Before you install Flutter components for the web, you should know that Flutter's support for web apps is pretty raw at the moment. So it's not a good idea to use it for complicated projects yet.
Flutter for web is not active in the basic SDK by default. To switch it on, go to the beta channel. To do this, enter the following command in the terminal:
```
flutter channel beta
```
![flutter channel beta output][21]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Next, upgrade Flutter according to the beta branch by using the command:
```
flutter upgrade
```
![flutter upgrade output][22]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
To make Flutter for web work, enter:
```
flutter config --enable-web
```
Restart your IDE; this helps Android Studio pick up the new configuration and reload the list of devices. You should see several new devices:
![Flutter for web device options][23]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Selecting **Chrome** launches an app in the browser, while **Web Server** gives you the link to your web app, which you can open in any browser.
Still, it's not time to rush into development because your current project doesn't support the web. To improve it, open the terminal in the project's root and enter:
```
flutter create
```
This command recreates the project, adding web support. The existing code won't be deleted.
Note that the tree has changed and now has a "web" directory:
![File tree with web directory][24]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Now you can get to work. Select **Chrome** and press **Run**. In a moment, you'll see the browser window with your app.
![Flutter web app demo][25]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
Congratulations! You've just launched a project for the browser and can continue working with it as with any other website.
All of this comes from the same codebase because Flutter makes it possible to write code for both mobile platforms and the web with little to no changes.
### Do more with Flutter
Flutter is a powerful tool for mobile development, and moreover, it's an important evolutionary step toward cross-platform development. Learn it, use it, and deliver your apps to all the platforms!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/mobile-app-flutter
作者:[Vitaly Kuprenko][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kooper
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
[2]: https://flutter.dev/
[3]: https://developer.android.com/studio
[4]: https://snapcraft.io/docs/getting-started
[5]: https://opensource.com/sites/default/files/uploads/flutter1_initialized.png (Flutter initialized)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://flutter.dev/docs/get-started/install/linux
[8]: https://opensource.com/sites/default/files/uploads/flutter2_manual-install.png (Install Flutter manually)
[9]: https://opensource.com/article/19/9/getting-started-zsh
[10]: https://opensource.com/article/17/6/set-path-linux
[11]: https://opensource.com/sites/default/files/uploads/flutter3_doctor.png (Checking Flutter installation with the doctor command)
[12]: https://www.redhat.com/en/topics/middleware/what-is-ide
[13]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
[14]: https://opensource.com/sites/default/files/uploads/flutter4_plugins.png (Flutter plugins)
[15]: https://opensource.com/sites/default/files/uploads/flutter5_plugincheck.png (Checking Flutter plugins with the doctor command)
[16]: https://opensource.com/sites/default/files/uploads/flutter6_newproject.png (Creating a new Flutter plugin)
[17]: https://opensource.com/sites/default/files/uploads/flutter7_projectname.png (Naming a new Flutter plugin)
[18]: https://opensource.com/sites/default/files/uploads/flutter8_launchflutter.png (Device options in Flutter)
[19]: https://opensource.com/sites/default/files/uploads/flutter9_demo.png (Flutter demo on mobile device)
[20]: https://opensource.com/article/18/6/flutter
[21]: https://opensource.com/sites/default/files/uploads/flutter10_beta.png (flutter channel beta output)
[22]: https://opensource.com/sites/default/files/uploads/flutter11_upgrade.png (flutter upgrade output)
[23]: https://opensource.com/sites/default/files/uploads/flutter12_new-devices.png (Flutter for web device options)
[24]: https://opensource.com/sites/default/files/uploads/flutter13_tree.png (File tree with web directory)
[25]: https://opensource.com/sites/default/files/uploads/flutter14_webapp.png (Flutter web app demo)

View File

@ -1,149 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A practical guide to learning awk)
[#]: via: (https://opensource.com/article/20/9/awk-ebook)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
A practical guide to learning awk
======
Get a better handle on the awk command by downloading our free eBook.
![Person programming on a laptop on a building][1]
Of all the [Linux][2] commands out there (and there are many), the three most quintessential seem to be `sed`, `awk`, and `grep`. Maybe it's the arcane sound of their names, or the breadth of their potential use, or just their age, but when someone's giving an example of a "Linuxy" command, it's usually one of those three. And while `sed` and `grep` have several simple one-line standards, the less prestigious `awk` remains persistently prominent for being particularly puzzling.
You're likely to use `sed` for a quick string replacement or `grep` to filter for a pattern on a daily basis. You're far less likely to compose an `awk` command. I often wonder why this is, and I attribute it to a few things. First of all, many of us barely use `sed` and `grep` for anything but some variation upon these two commands:
```
$ sed -e 's/foo/bar/g' file.txt
$ grep foo file.txt
```
So, even though you might feel more comfortable with `sed` and `grep`, you may not use their full potential. Of course, there's no obligation to learn more about `sed` or `grep`, but I sometimes wonder about the way I "learn" commands. Instead of learning _how_ a command works, I often learn a specific incantation that includes a command. As a result, I often feel a false familiarity with the command. I think I know a command because I can name three or four options off the top of my head, even though I don't know what the options do and can't quite put my finger on the syntax.
And that's the problem, I believe, that many people face when confronted with the power and flexibility of `awk`.
### Learning awk to use awk
The basics of `awk` are surprisingly simple. It's often noted that `awk` is a programming language, and although it's a relatively basic one, it's true. This means you can learn `awk` the same way you learn a new coding language: learn its syntax using some basic commands, learn its vocabulary so you can build up to complex actions, and then practice, practice, practice.
### How awk parses input
`Awk` sees input, essentially, as an array. When `awk` scans over a text file, it treats each line, individually and in succession, as a _record_. Each record is broken into _fields_. Of course, `awk` must keep track of this information, and you can see that data using the `NR` (number of records) and `NF` (number of fields) built-in variables. For example, this gives you the line count of a file:
```
$ awk 'END { print NR;}' example.txt
36
```
This also reveals something about `awk` syntax. Whether you're writing `awk` as a one-liner or as a self-contained script, the structure of an `awk` instruction is:
```
pattern or keyword { actions }
```
In this example, the word `END` is a special, reserved keyword rather than a pattern. A similar keyword is `BEGIN`. With both of these keywords, `awk` just executes the action in braces at the start or end of parsing data.
You can use a _pattern_ as a filter or qualifier so that `awk` only executes a given action when it is able to match your pattern to the current record. For instance, suppose you want to use `awk`, much as you would `grep`, to find the word _Linux_ in a file of text:
```
$ awk '/Linux/ { print $0; }' os.txt
OS: CentOS Linux (10.1.1.8)
OS: CentOS Linux (10.1.1.9)
OS: Red Hat Enterprise Linux (RHEL) (10.1.1.11)
OS: Elementary Linux (10.1.2.4)
OS: Elementary Linux (10.1.2.5)
OS: Elementary Linux (10.1.2.6)
```
For `awk`, each line in the file is a record, and each word in a record is a field. By default, fields are separated by a space. You can change that with the `--field-separator` option, which sets the `FS` (field separator) variable to whatever you want it to be:
```
$ awk --field-separator ':' '/Linux/ { print $2; }' os.txt
 CentOS Linux (10.1.1.8)
 CentOS Linux (10.1.1.9)
 Red Hat Enterprise Linux (RHEL) (10.1.1.11)
 Elementary Linux (10.1.2.4)
 Elementary Linux (10.1.2.5)
 Elementary Linux (10.1.2.6)
```
In this sample, there's an empty space before each listing because there's a blank space after each colon (`:`) in the source text. This isn't `cut`, though, so the field separator needn't be limited to one character:
```
$ awk --field-separator ': ' '/Linux/ { print $2; }' os.txt
CentOS Linux (10.1.1.8)
CentOS Linux (10.1.1.9)
Red Hat Enterprise Linux (RHEL) (10.1.1.11)
Elementary Linux (10.1.2.4)
Elementary Linux (10.1.2.5)
Elementary Linux (10.1.2.6)
```
### Functions in awk
You can build your own functions in `awk` using this syntax:
```
name(parameters) { actions }
```
Functions are important because they allow you to write code once and reuse it throughout your work. When constructing one-liners, custom functions are a little less useful than they are in scripts, but `awk` defines many functions for you already. They work basically the same as any function in any other language or spreadsheet: You learn the order that the function needs information from you, and you can feed it whatever you want to get the results.
There are functions to perform mathematical operations and string processing. The math ones are often fairly straightforward. You provide a number, and it crunches it:
```
$ awk 'BEGIN { print sqrt(1764); }'
42
```
String functions can be more complex but are well documented in the [GNU awk manual][3]. For example, the `split` function takes an entity that `awk` views as a single field and splits it into different parts. It requires a field, a variable to use as an array containing each part of the split, and the character you want to use as the delimiter.
Using the output of the previous examples, I know that there's an IP address at the very end of each record. In this case, I can send just the last field of a record to the `split` function by referencing the variable `NF` because it contains the number of fields (and the final field must be the highest number):
```
$ awk --field-separator ': ' '/Linux/ { split($NF, IP, "."); print "subnet: " IP[3]; }' os.txt
subnet: 1
subnet: 1
subnet: 1
subnet: 2
subnet: 2
subnet: 2
```
There are many more functions, and there's no reason to limit yourself to one per block of `awk` code. You can construct complex pipelines with `awk` in your terminal, or you can write `awk` scripts to define and utilize your own functions.
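As a sketch of what that can look like, here is a small, hypothetical script (call it `subnet.awk`; the file and function names are my own) that wraps the `split` logic from the earlier example in a user-defined function:

```
# A user-defined function: return the third dot-separated piece of a string
function subnet(addr) {
    split(addr, octets, ".");
    return octets[3];
}

# Set the field separator before the first record is read
BEGIN { FS = ": "; }

# Reuse the function for every record matching the pattern
/Linux/ { print "subnet: " subnet($NF); }
```

Running it with `awk -f subnet.awk os.txt` should produce the same `subnet:` lines as the one-liner above, with the field separator now set inside the script's `BEGIN` block.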
### Download the eBook
Learning `awk` is mostly a matter of using `awk`. Use it even if it means duplicating functionality you already have with `sed` or `grep` or `cut` or `tr` or any other perfectly valid commands. Once you get comfortable with it, you can write Bash functions that invoke your custom `awk` commands for easier use. And eventually, you'll be able to write scripts to parse complex datasets.
**[Download our eBook][4]** to learn everything you need to know about `awk`, and start using it today.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/awk-ebook
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Person programming on a laptop on a building)
[2]: https://opensource.com/resources/linux
[3]: https://www.gnu.org/software/gawk/manual/gawk.html
[4]: https://opensource.com/downloads/awk-ebook


@ -1,239 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Program hardware from the Linux command line)
[#]: via: (https://opensource.com/article/20/9/hardware-command-line)
[#]: author: (Alan Smithee https://opensource.com/users/alansmithee)
Program hardware from the Linux command line
======
Programming hardware has become more common thanks to the rise of the
Internet of Things (IoT). RT-Thread lets you contact devices from the
Linux command line with FinSH.
![Command line prompt][1]
RT-Thread is an open source [real-time operating system][2] used for programming Internet of Things (IoT) devices. FinSH is [RT-Thread][3]'s command-line component, and it provides a set of operation interfaces enabling users to contact a device from the command line. It's mainly used to debug or view system information.
Usually, development debugging is displayed using hardware debuggers and `printf` logs. In some cases, however, these two methods are not very useful because it's abstracted from what's running, and they can be difficult to parse. RT-Thread is a multi-thread system, though, which is helpful when you want to know the state of a running thread, or the current state of a manual control system. Because it's multi-threaded, you're able to have an interactive shell, so you can enter commands, call a function directly on the device to get the information you need, or control the program's behavior. This may seem ordinary to you if you're only used to modern operating systems such as Linux or BSD, but for hardware hackers this is a profound luxury, and a far cry from wiring serial cables directly onto boards to get glimpses of errors.
FinSH has two modes:
* A C-language interpreter mode, known as c-style
* A traditional command-line mode, known as `msh` (module shell)
In the C-language interpretation mode, FinSH can parse expressions that execute most of the C language and access functions and global variables on the system using function calls. It can also create variables from the command line.
In `msh` mode, FinSH operates similarly to traditional shells such as Bash.
### The GNU command standard
When we were developing FinSH, we learned that before you can write a command-line application, you need to become familiar with GNU command-line standards. This framework of standard practices helps bring familiarity to an interface, which helps developers feel comfortable and productive when using it.
A complete GNU command consists of four main parts:
1. **Command name (executable):** The name of the command line program
2. **Sub-command:** The sub-function name of the command program
3. **Options:** Configuration options for the sub-command function
4. **Arguments:** The corresponding arguments for the configuration options of the sub-command function
You can see this in action with any command. Taking Git as an example:
```
git reset --hard HEAD~1
```
Which breaks down as:
![GNU command line standards][4]
(Cathy, [CC BY-SA 4.0][5])
The executable command is **git**, the sub-command is **reset**, the option used is **--hard**, and the argument is **HEAD~1**.
Another example:
```
systemctl enable --now firewalld
```
The executable command is **systemctl**, the sub-command is **enable**, the option is **--now**, and the argument is **firewalld**.
Imagine you want to write a command-line program that complies with the GNU standards using RT-Thread. FinSH has everything you need, and will run your code as expected. Better still, you can rely on this compliance so you can confidently port your favorite Linux programs.
### Write an elegant command-line program
Here's an example of RT-Thread running a command that RT-Thread developers use every day.
```
usage: env.py package [-h] [--force-update] [--update] [--list] [--wizard]
                      [--upgrade] [--printenv]
optional arguments:
  -h, --help      show this help message and exit
  --force-update  force update and clean packages, install or remove the
                  packages by your settings in menuconfig
  --update        update packages, install or remove the packages by your
                  settings in menuconfig
  --list          list target packages
  --wizard        create a new package with wizard
  --upgrade       upgrade local packages list and ENV scripts from git repo
  --printenv      print environmental variables to check
```
As you can tell, it looks familiar and acts like most POSIX applications that you might already run on Linux or BSD. Help is provided when incorrect or insufficient syntax is used, both long and short options are supported, and the general user interface is familiar to anyone who's used a Unix terminal.
### Kinds of options
There are many different kinds of options, and they can be divided into two main categories by length:
1. **Short options:** Consist of one hyphen plus a single letter, e.g., the `-h` option in `pkgs -h`
  2. **Long options:** Consist of two hyphens plus words or letters, e.g., the `--target` option in `scons --target=mdk5`
You can divide these options into three categories, determined by whether they have arguments:
1. **No arguments:** The option cannot be followed by arguments
2. **Arguments must be included:** The option must be followed by arguments
3. **Arguments optional:** Arguments after the option are allowed but not required
As you'd expect from most Linux commands, FinSH option parsing is pretty flexible. It can distinguish an option from an argument based on a space or equal sign as delimiter, or just by extracting the option itself and assuming that whatever follows is the argument (in other words, no delimiter at all):
* `wavplay -v 50`
* `wavplay -v50`
* `wavplay --vol=50`
### Using optparse
If you've ever written a command-line application, you may know there's generally a library or module for your language of choice called optparse. It's provided to programmers so that options (such as **-v** or **--verbose**) entered as part of a command can be _parsed_ in relation to the rest of the command. It's what helps your code know an option from a sub-command or argument.
When writing a command for FinSH, the `optparse` package expects this format:
```
MSH_CMD_EXPORT_ALIAS(pkgs, pkgs, this is test cmd.);
```
You can implement options using the long or short form, or both. For example:
```
static struct optparse_long long_opts[] =
{
    {"help"        , 'h', OPTPARSE_NONE}, // Long command: help, corresponding to short command h, without arguments.
    {"force-update",  0 , OPTPARSE_NONE}, // Long comman: force-update, without arguments
    {"update"      ,  0 , OPTPARSE_NONE},
    {"list"        ,  0 , OPTPARSE_NONE},
    {"wizard"      ,  0 , OPTPARSE_NONE},
    {"upgrade"     ,  0 , OPTPARSE_NONE},
    {"printenv"    ,  0 , OPTPARSE_NONE},
    { NULL         ,  0 , OPTPARSE_NONE}
};
```
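The table above only declares argument-free options. As a rough sketch (not taken from the `pkgs` command itself, and assuming the package follows the conventional optparse argument types; check `optparse.h` in your copy if it differs), options that take arguments are declared by changing the third field:

```
// A hypothetical option table mixing short and long forms
static struct optparse_long demo_opts[] =
{
    {"help"   , 'h', OPTPARSE_NONE},     // -h / --help, no argument
    {"output" , 'o', OPTPARSE_REQUIRED}, // -o FILE / --output FILE, argument required
    {"level"  ,  0 , OPTPARSE_OPTIONAL}, // --level[=N], argument optional
    { NULL    ,  0 , OPTPARSE_NONE}
};
```

This maps directly onto the three categories described earlier: no argument, argument required, and argument optional.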
After the options are created, write the command and instructions for each option and its arguments:
```
static void usage(void)
{
    rt_kprintf("usage: env.py package [-h] [--force-update] [--update] [--list] [--wizard]\n");
    rt_kprintf("                      [--upgrade] [--printenv]\n\n");
    rt_kprintf("optional arguments:\n");
    rt_kprintf("  -h, --help      show this help message and exit\n");
    rt_kprintf("  --force-update  force update and clean packages, install or remove the\n");
    rt_kprintf("                  packages by your settings in menuconfig\n");
    rt_kprintf("  --update        update packages, install or remove the packages by your\n");
    rt_kprintf("                  settings in menuconfig\n");
    rt_kprintf("  --list          list target packages\n");
    rt_kprintf("  --wizard        create a new package with wizard\n");
    rt_kprintf("  --upgrade       upgrade local packages list and ENV scripts from git repo\n");
    rt_kprintf("  --printenv      print environmental variables to check\n");
}
```
The next step is parsing. The options' functions aren't implemented yet, but the framework of the parsing code is the same:
```
int pkgs(int argc, char **argv)
{
    int ch;
    int option_index;
    struct optparse options;
    if(argc == 1)
    {
        usage();
        return RT_EOK;
    }
    optparse_init(&amp;options, argv);
    while((ch = optparse_long(&amp;options, long_opts, &amp;option_index)) != -1)
    {
        ch = ch;
        rt_kprintf("\n");
        rt_kprintf("optopt = %c\n", options.optopt);
        rt_kprintf("optarg = %s\n", options.optarg);
        rt_kprintf("optind = %d\n", options.optind);
        rt_kprintf("option_index = %d\n", option_index);
    }
    rt_kprintf("\n");
    return RT_EOK;
}
```
Here are the header files to include:
```
#include "optparse.h"
#include "finsh.h"
```
Then, compile and download onto a device.
![Output][6]
(Cathy, [CC BY-SA 4.0][5])
### Hardware hacking
Programming hardware can seem intimidating, but with IoT it's becoming more and more common. Not everything can or should be run on a Raspberry Pi, but with RT-Thread you can maintain a familiar Linux feel, thanks to FinSH.
If you're curious about coding on bare metal, give RT-Thread a try.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/hardware-command-line
作者:[Alan Smithee][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alansmithee
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
[2]: https://opensource.com/article/20/6/open-source-rtos
[3]: https://github.com/RT-Thread/rt-thread
[4]: https://opensource.com/sites/default/files/uploads/command-line-apps_2.png (GNU command line standards)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/command-line-apps_3.png (Output)


@ -1,407 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Find security issues in Go code using gosec)
[#]: via: (https://opensource.com/article/20/9/gosec)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
Find security issues in Go code using gosec
======
Get started with gosec, the Golang security checker.
![A lock on the side of a building][1]
It's extremely common now to encounter code written in the [Go programming language][2], especially if you are working with containers, Kubernetes, or a cloud ecosystem. Docker was one of the first projects to adopt Golang, Kubernetes followed, and many new projects select Go over other programming languages.
Like any other language, Go has its share of strengths and weaknesses, which include security flaws. These can arise due to issues in the programming language itself coupled with insecure coding practices, such as memory safety issues in C code, for example.
Regardless of why they occur, security issues need to be fixed early in development to prevent them from creeping into shipped software. Fortunately, static analysis tools are available to help you tackle these issues in a more repeatable manner. Static analysis tools work by parsing source code written in a programming language and looking for issues.
Many of these tools are called linters. Traditionally, linters are more focused on finding programming issues, bugs, code style issues, and the like, and they may not find security issues in code. For example, [Coverity][3] is a popular tool that helps find issues in C/C++ code. However, there are tools that specifically seek out security issues in source code. For example, [Bandit][4] looks for security flaws in Python code. And [gosec][5] searches for security flaws in Go source code. Gosec scans the Go abstract syntax tree (AST) to inspect source code for security problems.
### Get started with gosec
To play around with gosec and learn how it works, you need a project written in Go. With a wide variety of open source software available, this shouldn't be a problem. You can find one by looking at the [trending Golang repositories][6] on GitHub.
For this tutorial, I randomly chose the [Docker CE][7] project, but you can choose any Go project you want.
#### Install Go and gosec
If you do not already have Go installed, you can fetch it from your repository. If you use Fedora or another RPM-based Linux distribution:
```
$ dnf install golang.x86_64
```
Or you can visit the [Golang install][8] page for other options for your operating system.
Verify that Go is installed on your system using the `version` argument:
```
$ go version
go version go1.14.6 linux/amd64
$
```
Installing gosec is simply a matter of running the `go get` command:
```
$ go get github.com/securego/gosec/cmd/gosec
$
```
This downloads gosec's source code from GitHub, compiles it, and installs it in a specific location. You can find [other ways of installing the tools][9] in the repo's README.
Gosec's source code should be downloaded to the location set by `$GOPATH`, and the compiled binary will be installed in the `bin` directory you set for your system. To find out what `$GOPATH` and `$GOBIN` point to, run:
```
$ go env | grep GOBIN
GOBIN="/root/go/gobin"
$
$ go env | grep GOPATH
GOPATH="/root/go"
$
```
If the `go get` command worked, then the gosec binary should be available:
```
$
$ ls -l ~/go/bin/
total 9260
-rwxr-xr-x. 1 root root 9482175 Aug 20 04:17 gosec
$
```
You can add the `bin` directory in `$GOPATH` to the `$PATH` variable in your shell. This makes the gosec command-line interface (CLI) available just like any other command line on your system:
```
$ which gosec
/root/go/bin/gosec
$
```
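If `which` cannot find it, appending the directory to your `PATH` is a one-liner; the path below matches the `$GOPATH` shown above, so adjust it for your own setup:

```
$ export PATH=$PATH:/root/go/bin
```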
Try running the gosec CLI with the `-help` option to see if it is working as expected:
```
$ gosec -help
gosec - Golang security checker
gosec analyzes Go source code to look for common programming mistakes that
can lead to security problems.
VERSION: dev
GIT TAG:
BUILD DATE:
USAGE:
```
Next, create a directory and get the source code for the demo project (Docker CE, in this case) using:
```
$ mkdir gosec-demo
$
$ cd gosec-demo/
$
$ pwd
/root/gosec-demo
$
$ git clone https://github.com/docker/docker-ce.git
Cloning into 'docker-ce'...
remote: Enumerating objects: 1271, done.
remote: Counting objects: 100% (1271/1271), done.
remote: Compressing objects: 100% (722/722), done.
remote: Total 431003 (delta 384), reused 981 (delta 318), pack-reused 429732
Receiving objects: 100% (431003/431003), 166.84 MiB | 28.94 MiB/s, done.
Resolving deltas: 100% (221338/221338), done.
Updating files: 100% (10861/10861), done.
$
```
A quick look at the source code shows that most of the project is written in Go—just what you need to tinker with gosec's features:
```
$ ./cloc /root/gosec-demo/docker-ce/
   10771 text files.
    8724 unique files.                                          
    2560 files ignored.
-------------------------------------------------------------------------------------
Language                         files          blank        comment           code
-------------------------------------------------------------------------------------
Go                                7222         190785         230478        1574580
YAML                                37           4831            817         156762
Markdown                           529          21422              0          67893
Protocol Buffers                   149           5014          16562          10071
```
### Run gosec with the default options
Run gosec on the Docker CE project using the default options by running `gosec ./...` from within the Git repo you just cloned. A lot of output will be shown on the screen. Towards the end, you should see a short `Summary` showing the number of files scanned, the number of lines in those files, and the issues it found in the source code:
```
$ pwd
/root/gosec-demo/docker-ce
$
$ time gosec ./...
[gosec] 2020/08/20 04:44:15 Including rules: default
[gosec] 2020/08/20 04:44:15 Excluding rules: default
[gosec] 2020/08/20 04:44:15 Import directory: /root/gosec-demo/docker-ce/components/engine/opts
[gosec] 2020/08/20 04:44:17 Checking package: opts
[gosec] 2020/08/20 04:44:17 Checking file: /root/gosec-demo/docker-ce/components/engine/opts/address_pools.go
[gosec] 2020/08/20 04:44:17 Checking file: /root/gosec-demo/docker-ce/components/engine/opts/env.go
[gosec] 2020/08/20 04:44:17 Checking file: /root/gosec-demo/docker-ce/components/engine/opts/hosts.go
# End of gosec run
Summary:
   Files: 1278
   Lines: 173979
   Nosec: 4
  Issues: 644
real    0m52.019s
user    0m37.284s
sys     0m12.734s
$
```
If you scroll through the output on the screen, you should see some lines highlighted in various colors: red indicates high-priority issues that need to be looked into first, and yellow indicates medium-priority issues.
#### About false positives
Before getting into the findings, I want to share some ground rules. By default, static analysis tools report _everything_ that they find to be an issue based on a set of rules that the tool compares against the code being tested. Does this mean that everything reported by the tool is an issue that needs to be fixed? Well, it depends. The best authorities on this question are the developers who designed and developed the software. They understand the code much better than anybody else, and more importantly, they understand the environment where the software will be deployed and how it will be used.
This knowledge is critical when deciding whether a piece of code flagged by a tool is actually a security flaw. Over time and with more experience, you will learn to tweak static analysis tools to ignore issues that are not security flaws and make the reports more actionable. So, an experienced developer doing a manual audit of the source code would be in a better position to decide whether an issue reported by gosec warrants attention or not.
#### High-priority issues
According to the output, gosec found a high-priority issue that Docker CE is using an old Transport Layer Security (TLS) version. Whenever possible, it's best to use the latest version of a software or library to ensure it is up to date and has no security issues.
```
[/root/gosec-demo/docker-ce/components/engine/daemon/logger/splunk/splunk.go:173] - G402 (CWE-295): TLS MinVersion too low. (Confidence: HIGH, Severity: HIGH)
    172:
  > 173:        tlsConfig := &tls.Config{}
    174:
```
It also found a weak random number generator. Depending on how the generated random number is used, you can decide whether or not this is a security flaw.
```
[/root/gosec-demo/docker-ce/components/engine/pkg/namesgenerator/names-generator.go:843] - G404 (CWE-338): Use of weak random number generator (math/rand instead of crypto/rand) (Confidence: MEDIUM, Severity: HIGH)
    842: begin:
  > 843:        name := fmt.Sprintf("%s_%s", left[rand.Intn(len(left))], right[rand.Intn(len(right))])
    844:        if name == "boring_wozniak" /* Steve Wozniak is not boring */ {
```
#### Medium-priority issues
The tool also found some medium-priority issues. It flagged a potential denial of service (DoS) vulnerability by way of a decompression bomb related to a tar that could possibly be exploited by a malicious actor.
```
[/root/gosec-demo/docker-ce/components/engine/pkg/archive/copy.go:357] - G110 (CWE-409): Potential DoS vulnerability via decompression bomb (Confidence: MEDIUM, Severity: MEDIUM)
    356:
  > 357:                        if _, err = io.Copy(rebasedTar, srcTar); err != nil {
    358:                                w.CloseWithError(err)
```
It also found an issue related to a file that is included by way of a variable. If malicious users take control of this variable, they could change its contents to read a different file.
```
[/root/gosec-demo/docker-ce/components/cli/cli/context/tlsdata.go:80] - G304 (CWE-22): Potential file inclusion via variable (Confidence: HIGH, Severity: MEDIUM)
    79:         if caPath != "" {
  > 80:                 if ca, err = ioutil.ReadFile(caPath); err != nil {
    81:                         return nil, err
```
File and directory permissions are often the basic building blocks of security on an operating system. Here, gosec identified an issue where you might need to check whether the permissions for a directory are secure or not.
```
[/root/gosec-demo/docker-ce/components/engine/contrib/apparmor/main.go:41] - G301 (CWE-276): Expect directory permissions to be 0750 or less (Confidence: HIGH, Severity: MEDIUM)
    40:         // make sure /etc/apparmor.d exists
  > 41:         if err := os.MkdirAll(path.Dir(apparmorProfilePath), 0755); err != nil {
    42:                 log.Fatal(err)
```
Often, you need to launch command-line utilities from source code. Go uses the built-in exec library to do this task. Carefully analyzing the variable used to spawn such utilities can uncover security flaws.
```
[/root/gosec-demo/docker-ce/components/engine/testutil/fakestorage/fixtures.go:59] - G204 (CWE-78): Subprocess launched with variable (Confidence: HIGH, Severity: MEDIUM)
    58:
  > 59:              cmd := exec.Command(goCmd, "build", "-o", filepath.Join(tmp, "httpserver"), "github.com/docker/docker/contrib/httpserver")
    60:                 cmd.Env = append(os.Environ(), []string{
```
#### Low-severity issues
In this output, gosec identified low-severity issues related to "unsafe" calls, which typically bypass all the memory protections that Go provides. Closely analyze your use of "unsafe" calls to see if they can be exploited in any way possible.
```
[/root/gosec-demo/docker-ce/components/engine/pkg/archive/changes_linux.go:264] - G103 (CWE-242): Use of unsafe calls should be audited (Confidence: HIGH, Severity: LOW)
    263:        for len(buf) > 0 {
  > 264:                dirent := (*unix.Dirent)(unsafe.Pointer(&buf[0]))
    265:                buf = buf[dirent.Reclen:]
[/root/gosec-demo/docker-ce/components/engine/pkg/devicemapper/devmapper_wrapper.go:88] - G103 (CWE-242): Use of unsafe calls should be audited (Confidence: HIGH, Severity: LOW)
    87: func free(p *C.char) {
  > 88:         C.free(unsafe.Pointer(p))
    89: }
```
It also flagged unhandled errors in the source codebase. You are expected to handle cases where errors could arise in the source code.
```
[/root/gosec-demo/docker-ce/components/cli/cli/command/image/build/context.go:172] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
    171:                err := tar.Close()
  > 172:                os.RemoveAll(dockerfileDir)
    173:                return err
```
### Customize gosec scans
Using gosec with its defaults brings up many kinds of issues. However, with manual auditing and over time, you learn which issues don't need to be flagged. You can customize gosec to exclude or include certain tests.
As I mentioned above, gosec uses a set of rules to find problems in Go source code. Here is a complete list of the [rules][10] it uses:
* G101: Look for hard coded credentials
* G102: Bind to all interfaces
* G103: Audit the use of unsafe block
* G104: Audit errors not checked
* G106: Audit the use of ssh.InsecureIgnoreHostKey
* G107: Url provided to HTTP request as taint input
* G108: Profiling endpoint automatically exposed on /debug/pprof
* G109: Potential Integer overflow made by strconv.Atoi result conversion to int16/32
* G110: Potential DoS vulnerability via decompression bomb
* G201: SQL query construction using format string
* G202: SQL query construction using string concatenation
* G203: Use of unescaped data in HTML templates
* G204: Audit use of command execution
* G301: Poor file permissions used when creating a directory
* G302: Poor file permissions used with chmod
* G303: Creating tempfile using a predictable path
* G304: File path provided as taint input
* G305: File traversal when extracting zip/tar archive
* G306: Poor file permissions used when writing to a new file
* G307: Deferring a method which returns an error
* G401: Detect the usage of DES, RC4, MD5 or SHA1
* G402: Look for bad TLS connection settings
* G403: Ensure minimum RSA key length of 2048 bits
* G404: Insecure random number source (rand)
* G501: Import blocklist: crypto/md5
* G502: Import blocklist: crypto/des
* G503: Import blocklist: crypto/rc4
* G504: Import blocklist: net/http/cgi
* G505: Import blocklist: crypto/sha1
* G601: Implicit memory aliasing of items from a range statement
#### Exclude specific tests
You can customize gosec to prevent it from looking for and reporting on issues that are safe. To ignore specific issues, you can use the `-exclude` flag with the rule codes above.
For example, if you don't want gosec to find unhandled errors related to hardcoding credentials in source code, you can ignore them by running:
```
$ gosec -exclude=G104 ./...
$ gosec -exclude=G104,G101 ./...
```
Sometimes, you know an area of source code is safe, but gosec keeps reporting it as an issue. However, you don't want to exclude that check completely because you want gosec to scan new code added to the codebase. To prevent gosec from scanning the area you know is safe, add a `#nosec` flag to that part of the source code. This ensures gosec continues to scan new code for an issue but ignores the area flagged with `#nosec`.
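As a rough illustration (the package and function below are invented, not taken from Docker CE), the flag is simply a comment placed on the line you have already audited:

```
package demo

import "os"

// cleanup deliberately ignores the error from os.Remove because the file
// may legitimately not exist; the #nosec comment tells gosec to skip its
// unhandled-error (G104) finding for this line while still scanning the rest.
func cleanup(path string) {
	os.Remove(path) // #nosec
}
```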
#### Run specific checks
On the other hand, if you need to focus on specific issues, you can tell gosec to run those checks by using the `-include` option with the rule codes:
```
$ gosec -include=G201,G202 ./...
```
#### Scan test files
The Go language has built-in support for testing that uses unit tests to verify whether a component works as expected. In default mode, gosec ignores test files, but if you want them included in the scan, use the `-tests` flag:
```
gosec -tests ./...
```
#### Change the output format
Finding issues is only part of the picture; the other part is reporting what it finds in a way that is easy for humans and tools to consume. Fortunately, gosec can output results in a variety of ways. For example, if you want to get reports in JSON format, use the `-fmt` option to specify JSON and save the results in a `results.json` file:
```
$ gosec -fmt=json -out=results.json ./...
$ ls -l results.json
-rw-r--r--. 1 root root 748098 Aug 20 05:06 results.json
$
         {
             "severity": "LOW",
             "confidence": "HIGH",
             "cwe": {
                 "ID": "242",
                 "URL": "<https://cwe.mitre.org/data/definitions/242.html>"
             },
             "rule_id": "G103",
             "details": "Use of unsafe calls should be audited",
             "file": "/root/gosec-demo/docker-ce/components/engine/daemon/graphdriver/graphtest/graphtest_unix.go",
             "code": "304: \t// Cast to []byte\n305: \theader := *(*reflect.SliceHeader)(unsafe.Pointer(\u0026buf))\n306: \theader.      Len *= 8\n",
             "line": "305",
             "column": "36"
         },
```
### Find low-hanging fruit with gosec
A static analysis tool is not a replacement for manual code audits. However, when a codebase is large with many people contributing to it, such a tool often helps find low-hanging fruit in a repeatable way. It is also useful for helping new developers identify and avoid writing code that introduces these security flaws.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/gosec
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://golang.org/
[3]: https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html
[4]: https://pypi.org/project/bandit/
[5]: https://github.com/securego/gosec
[6]: https://github.com/trending/go
[7]: https://github.com/docker/docker-ce
[8]: https://golang.org/doc/install
[9]: https://github.com/securego/gosec#install
[10]: https://github.com/securego/gosec#available-rules


@ -1,77 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 questions to ask yourself when writing project documentation)
[#]: via: (https://opensource.com/article/20/9/project-documentation)
[#]: author: (Alexei Leontief https://opensource.com/users/alexeileontief)
5 questions to ask yourself when writing project documentation
======
Using some of the basic principles of effective communication can help
you create well-written, informative project documents that align with
your brand.
![A person writing.][1]
Before getting down to the actual writing part of documenting another one of your open source projects, and even before interviewing the experts, it's a good idea to answer some high-level questions about your new document.
Renowned communication theorist Harold Lasswell wrote in his 1948 article, _The Structure and Function of Communication in Society_:
> [A] convenient way to describe an act of communication is to answer the following questions:
>
> * Who
> * Says what
> * In which channel
> * To whom
> * With what effect?
>
As a technical communicator, you can apply Lasswell's theory and answer similar questions about your document to communicate your message better and with the desired effect.
### Who—Who is the document owner?
Or, what company is behind the document? What brand identity does it want to convey to its audience? The answer to this question will significantly influence your writing style. The company may also have its own style guide or at least a formal mission statement, in which case, you should start there.
If the company is just starting out, you may ask the questions above to the document's owner. As the writer, it's important to integrate the voice and persona you create for the company with your own worldview and beliefs. This will make your writing sound more natural and less like company jargon.
### Says what—What is the document type?
What information do you need to communicate? What type of document is it: a user guide, API reference, release notes, etc.? Many document types will have templates or generally agreed-upon structures that will give you a place to start and help ensure you include all the necessary information.
### In which channel—What is the format of the document?
With technical documents, the channel of communication often informs the final format of your doc, i.e., whether it's going to be a PDF, HTML, a text file, etc. This will, most likely, also determine the tools you should use to write your document.
### To whom—Who is the target audience?
Who will read this document? What is their level of knowledge? What are their job responsibilities and their main challenges? These questions will help you determine what you should cover, whether or not you should go into details, whether you can use any specific terms, etc. In some cases, the answers to these questions can even influence the complexity of syntax that you should use.
### With what effect—What is the purpose of the document?
This is where you should define what problem(s) this document is expected to solve for its prospective readers, or what questions it should answer for them. For example, the purpose of your document can be to teach your customers to work with your product.
At this point, you may refer to the approach suggested by [Divio][2]. According to this approach, you can assign any document one of four types, depending on the document's general orientation: learning, solving a problem, understanding, or getting information.
Another good question to ask at this stage is what business problem this document is meant to solve (for example, how to cut down support costs.) With a business problem in mind, you may see an important angle for your writing.
### Conclusion
The questions above are designed to help you form the basis for effective communication and ensure your document covers everything it should. You can break them down into your own checklist of questions and keep them around for whenever you have a document to create. This checklist may also come in handy when you become stuck, confronted with a blank page. It will hopefully inspire you and help you generate ideas.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/project-documentation
作者:[Alexei Leontief][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alexeileontief
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E (A person writing.)
[2]: https://documentation.divio.com/


@ -0,0 +1,166 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Add sound to your Python game)
[#]: via: (https://opensource.com/article/20/9/add-sound-python-game)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Add sound to your Python game
======
Hear what happens when your hero fights, jumps, collects loot, and more
by adding sounds to your game. Learn how in the 13th article in this
series on creating a platformer in Pygame.
![Colorful sound wave graph][1]
This is part 13 in an ongoing series about creating video games in [Python 3][2] using the [Pygame][3] module. Previous articles are:
1. [Learn how to program in Python by building a simple dice game][4]
2. [Build a game framework with Python using the Pygame module][5]
3. [How to add a player to your Python game][6]
4. [Using Pygame to move your game character around][7]
5. [What's a hero without a villain? How to add one to your Python game][8]
6. [Add platforms to your game][9]
7. [Simulate gravity in your Python game][10]
8. [Add jumping to your Python platformer game][11]
9. [Enable your Python game player to run forward and backward][12]
10. [Using Python to set up loot in Pygame][13]
11. [Add scorekeeping to your Python game][14]
12. [Add throwing mechanics to your Python game][15]
Pygame provides an easy way to integrate sounds into your Python video game. Pygame's [mixer module][16] can play one or more sounds on command, and by mixing those sounds together, you can have, for instance, background music playing at the same time you hear the sounds of your hero collecting loot or jumping over enemies.
It is easy to integrate the mixer module into an existing game, so—rather than giving you code samples showing you exactly where to put them—this article explains the four steps required to get sound in your application.
### Start the mixer
First, in your code's setup section, start the mixer process. Your code already starts Pygame and Pygame fonts, so grouping it together with these is a good idea:
```
pygame.init()
pygame.font.init()
pygame.mixer.init() # add this line
```
### Define the sounds
Next, you must define the sounds you want to use. This requires that you have the sounds on your computer, just as using fonts requires you to have fonts, and using graphics requires you to have graphics.
You also must bundle those sounds with your game so that anyone playing your game has the sound files.
To bundle a sound with your game, first create a new directory in your game folder, right along with the directory you created for your images and fonts. Call it `sound`:
```
s = 'sound'
```
Even though there are plenty of sounds on the internet, it's not necessarily _legal_ to download them and give them away with your game. It seems strange because so many sounds from famous video games are such a part of popular culture, but that's how the law works. If you want to ship a sound with your game, you must find an open source or [Creative Commons][17] sound that gives you permission to give the sound away with your game.
There are several sites that specialize in free and legal sounds, including:
* [Freesound][18] hosts sound effects of all sorts.
* [Incompetech][19] hosts background music.
* [Open Game Art][20] hosts some sound effects and music.
Some sound files are free to use only if you give the composer or sound designer credit. Read the conditions of use carefully before bundling any with your game! Musicians and sound designers work just as hard on their sounds as you work on your code, so it's nice to give them credit even when they don't require it.
To give your sound sources credit, list the sounds that you use in a text file called `CREDIT`, and place the text file in your game folder.
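The format is up to you; a minimal, made-up example (the titles, artists, and licenses below are placeholders) might look like this:

```
ouch.ogg  - "Ouch" by ExampleArtist, CC BY 4.0, via Freesound
music.ogg - "Cave Theme" by ExampleComposer, CC0, via Open Game Art
```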
You might also try making your own music. The excellent [LMMS][21] audio workstation is easy to use and ships with lots of interesting sounds. It's available on all major platforms and exports to [Ogg Vorbis][22] (OGG) audio format.
### Add sound to Pygame
When you find a sound that you like, download it. If it comes in a ZIP or TAR file, extract it and move the sounds into the `sound` folder in your game directory.
If the sound file has a complicated name with spaces or special characters, rename it. The filename is completely arbitrary, and the simpler it is, the easier it is for you to type into your code.
Most video games use OGG sound files because the format provides high quality in small file sizes. When you download a sound file, it might be an MP3, WAVE, FLAC, or another audio format. To keep your compatibility high and your download size low, convert these to Ogg Vorbis with a tool like [fre:ac][23] or [Miro][24].
For example, assume you have downloaded a sound file called `ouch.ogg`.
In your code's setup section, create a variable representing the sound file you want to use:
```
ouch = pygame.mixer.Sound(os.path.join(s, 'ouch.ogg'))
```
### Trigger a sound
To use a sound, all you have to do is call the variable when you want to trigger it. For instance, to trigger the `OUCH` sound effect when your player hits an enemy:
```
for enemy in enemy_hit_list:
    pygame.mixer.Sound.play(ouch)
    score -= 1
```
You can create sounds for all kinds of actions, such as jumping, collecting loot, throwing, colliding, and whatever else you can imagine.
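For instance, here's a hedged sketch of wiring a jump sound into a keyboard handler; `jump.ogg`, the `K_UP` binding, and the `player.jump()` call are placeholders for whatever your own game already uses:

```
# In the setup section, alongside the other sound definitions:
jump_sound = pygame.mixer.Sound(os.path.join(s, 'jump.ogg'))

# In your event loop, wherever a jump is triggered:
if event.type == pygame.KEYDOWN and event.key == pygame.K_UP:
    pygame.mixer.Sound.play(jump_sound)
    player.jump()
```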
### Add background music
If you have music or atmospheric sound effects you want to play in your game's background, you can use the `music` function of Pygame's mixer module. In your setup section, load the music file:
```
music = pygame.mixer.music.load(os.path.join(s, 'music.ogg'))
```
And start the music:
```
pygame.mixer.music.play(-1)
```
The `-1` value tells Pygame to loop the music file infinitely. You can set it to anything from `0` and beyond to define how many times the music should loop before stopping.
### Enjoy the soundscapes
Music and sound can add a lot of flavor to your game. Try adding some to your Pygame project!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/add-sound-python-game
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/colorful_sound_wave.png?itok=jlUJG0bM (Colorful sound wave graph)
[2]: https://www.python.org/
[3]: https://www.pygame.org/news
[4]: https://opensource.com/article/17/10/python-101
[5]: https://opensource.com/article/17/12/game-framework-python
[6]: https://opensource.com/article/17/12/game-python-add-a-player
[7]: https://opensource.com/article/17/12/game-python-moving-player
[8]: https://opensource.com/article/18/5/pygame-enemy
[9]: https://opensource.com/article/18/7/put-platforms-python-game
[10]: https://opensource.com/article/19/11/simulate-gravity-python
[11]: https://opensource.com/article/19/12/jumping-python-platformer-game
[12]: https://opensource.com/article/19/12/python-platformer-game-run
[13]: https://opensource.com/article/19/12/loot-python-platformer-game
[14]: https://opensource.com/article/20/1/add-scorekeeping-your-python-game
[15]: https://opensource.com/article/20/9/add-throwing-python-game
[16]: https://www.pygame.org/docs/ref/mixer.html
[17]: https://opensource.com/article/20/1/what-creative-commons
[18]: https://freesound.org
[19]: https://incompetech.filmmusic.io
[20]: https://opengameart.org
[21]: https://opensource.com/life/16/2/linux-multimedia-studio
[22]: https://en.wikipedia.org/wiki/Vorbis
[23]: https://www.freac.org/index.php/en/downloads-mainmenu-330
[24]: http://getmiro.com


@ -0,0 +1,266 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to view information on your Linux devices with lshw)
[#]: via: (https://www.networkworld.com/article/3583598/how-to-view-information-on-your-linux-devices-with-lshw.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
How to view information on your Linux devices with lshw
======
The lshw (list hardware) command on Linux systems provides a lot more information on system devices than most of us might imagine is available.
While far from being one of the first 50 Linux commands anyone learns, the **lshw** command (read as “ls hardware”) can provide a lot of useful details on your system's hardware.
It extracts details—maybe quite a few more than you knew were available—in a format that is reasonably easy to digest. Given descriptions, logical (device) names, sizes, etc., you are likely to appreciate how much detail you can access.
This post examines the information that **lshw** provides with a particular focus on disk and related hardware. Here is some sample **lshw** output:
```
$ sudo lshw -C disk
*-disk:0
description: SCSI Disk
product: Card Reader-1
vendor: JIE LI
physical id: 0.0.0
bus info: scsi@4:0.0.0
logical name: /dev/sdc
version: 1.00
capabilities: removable
configuration: logicalsectorsize=512 sectorsize=512
*-medium
physical id: 0
logical name: /dev/sdc
```
Note that you should run the **lshw** command with **sudo** to ensure that you get all of the available details.
While we asked for “disk” in the above command (the output included shows only the first of five entries displayed), this particular output shows not a hard disk, but a card reader—another member of the disk class. Note that the system knows this device as **/dev/sdc**.
Similar details are provided on the primary disk on the system:
```
*-disk
description: ATA Disk
product: SSD2SC120G1CS175
physical id: 0
bus info: scsi@0:0.0.0
logical name: /dev/sda <==
version: 1101
serial: PNY20150000778410606
size: 111GiB (120GB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=
f63b5929
```
This disk is **/dev/sda**. The hard disks on this system both show up as **ATA** disks. **ATA** is a disk-drive implementation that integrates the controller on the disk drive itself.
To get an abbreviated list of devices in the “disk” class, you can run a command like this one. Notice that two of the devices are listed twice, so we are still seeing five disk devices.
```
$ sudo lshw -short -C disk
H/W path Device Class Description
=============================================================
/0/100/1d/1/1/0.0.0 /dev/sdc disk Card Reader-1
/0/100/1d/1/1/0.0.0/0 /dev/sdc disk
/0/100/1d/1/1/0.0.1 /dev/sdd disk 2
/0/100/1d/1/1/0.0.1/0 /dev/sdd disk
/0/100/1f.2/0 /dev/sda disk 120GB SSD2SC120G1CS175
/0/100/1f.2/1 /dev/cdrom disk DVD+-RW GSA-H73N
/0/100/1f.5/0.0.0 /dev/sdb disk 500GB SAMSUNG HE502HJ
```
Hold onto your seat if you decide you want to see _**all**_ of the devices on a system. You will get a list that includes a lot more things than you probably normally think of as “devices”. Here's an example—and this is the “short” (few details) list:
```
$ sudo lshw -short
[sudo] password for shs:
H/W path Device Class Description
=============================================================
system Inspiron 530s
/0 bus 0RY007
/0/0 memory 128KiB BIOS
/0/4 processor Intel(R) Core(TM)2 Duo CPU
/0/4/a memory 32KiB L1 cache
/0/4/b memory 6MiB L2 cache
/0/24 memory 6GiB System Memory
/0/24/0 memory 2GiB DIMM DDR2 Synchronous 667
/0/24/1 memory 1GiB DIMM DDR2 Synchronous 667
/0/24/2 memory 2GiB DIMM DDR2 Synchronous 667
/0/24/3 memory 1GiB DIMM DDR2 Synchronous 667
/0/1 generic
/0/10 generic
/0/11 generic
/0/12 generic
/0/13 generic
/0/14 generic
/0/15 generic
/0/17 generic
/0/18 generic
/0/19 generic
/0/2 generic
/0/20 generic
/0/100 bridge 82G33/G31/P35/P31 Express DRAM
/0/100/1 bridge 82G33/G31/P35/P31 Express PCI
/0/100/1/0 display Caicos [Radeon HD 6450/7450/84
/0/100/1/0.1 multimedia Caicos HDMI Audio [Radeon HD 6
/0/100/19 enp0s25 network 82562V-2 10/100 Network Connec
/0/100/1a bus 82801I (ICH9 Family) USB UHCI
/0/100/1a/1 usb3 bus UHCI Host Controller
/0/100/1a.1 bus 82801I (ICH9 Family) USB UHCI
/0/100/1a.1/1 usb4 bus UHCI Host Controller
/0/100/1a.1/1/2 input Rock Candy Wireless Keyboard
/0/100/1a.2 bus 82801I (ICH9 Family) USB UHCI
/0/100/1a.2/1 usb5 bus UHCI Host Controller
/0/100/1a.2/1/2 input USB OPTICAL MOUSE
/0/100/1a.7 bus 82801I (ICH9 Family) USB2 EHCI
/0/100/1a.7/1 usb1 bus EHCI Host Controller
/0/100/1b multimedia 82801I (ICH9 Family) HD Audio
/0/100/1d bus 82801I (ICH9 Family) USB UHCI
/0/100/1d/1 usb6 bus UHCI Host Controller
/0/100/1d/1/1 scsi4 storage CD04
/0/100/1d/1/1/0.0.0 /dev/sdc disk Card Reader-1
/0/100/1d/1/1/0.0.0/0 /dev/sdc disk
/0/100/1d/1/1/0.0.1 /dev/sdd disk 2
/0/100/1d/1/1/0.0.1/0 /dev/sdd disk
/0/100/1d.1 bus 82801I (ICH9 Family) USB UHCI
/0/100/1d.1/1 usb7 bus UHCI Host Controller
/0/100/1d.2 bus 82801I (ICH9 Family) USB UHCI
/0/100/1d.2/1 usb8 bus UHCI Host Controller
/0/100/1d.7 bus 82801I (ICH9 Family) USB2 EHCI
/0/100/1d.7/1 usb2 bus EHCI Host Controller
/0/100/1d.7/1/2 multimedia USB Live camera
/0/100/1e bridge 82801 PCI Bridge
/0/100/1e/1 communication HSF 56k Data/Fax Modem
/0/100/1f bridge 82801IR (ICH9R) LPC Interface
/0/100/1f.2 scsi0 storage 82801IR/IO/IH (ICH9R/DO/DH) 4
/0/100/1f.2/0 /dev/sda disk 120GB SSD2SC120G1CS175
/0/100/1f.2/0/1 /dev/sda1 volume 111GiB EXT4 volume
/0/100/1f.2/1 /dev/cdrom disk DVD+-RW GSA-H73N
/0/100/1f.3 bus 82801I (ICH9 Family) SMBus Con
/0/100/1f.5 scsi3 storage 82801I (ICH9 Family) 2 port SA
/0/100/1f.5/0.0.0 /dev/sdb disk 500GB SAMSUNG HE502HJ
/0/100/1f.5/0.0.0/1 /dev/sdb1 volume 433GiB EXT4 volume
/0/3 system PnP device PNP0c02
/0/5 system PnP device PNP0b00
/0/6 storage PnP device PNP0700
/0/7 system PnP device PNP0c02
/0/8 system PnP device PNP0c02
/0/9 system PnP device PNP0c01
```
Run a command like this to list device classes and count how many devices are in each class.
```
$ sudo lshw -short | awk '{print substr($0,36,13)}' | tail -n +3 | sort | uniq -c
4 bridge
18 bus
1 communication
7 disk
1 display
12 generic
2 input
8 memory
3 multimedia
1 network
1 processor
4 storage
6 system
2 volume
```
**NOTE:** The **awk** command selects the Class column from the **lshw** output using $0 (complete lines), but taking only the substrings that start in the correct place (column 36). None of the class entries have more than 13 letters so the substring ends there. The **tail -n +3** part of the command drops the heading and the “=====” line beneath it, so only the 14 device classes are included in the final listing.
One thing you'll notice is that we get approximately 12 lines of output for each device in the disk class when we don't use the **-short** option. We see the logical names, such as **/dev/sda**, disk sizes and types, etc.
```
$ sudo lshw -C disk
[sudo] password for shs:
*-disk:0
description: SCSI Disk
product: Card Reader-1  <== card reader?
vendor: JIE LI
physical id: 0.0.0
bus info: scsi@4:0.0.0
logical name: /dev/sdc
version: 1.00
capabilities: removable
configuration: logicalsectorsize=512 sectorsize=512
*-medium
physical id: 0
logical name: /dev/sdc
*-disk:1
description: SCSI Disk
product: 2
vendor: AC4100 -
physical id: 0.0.1
bus info: scsi@4:0.0.1
logical name: /dev/sdd
capabilities: removable
configuration: logicalsectorsize=512 sectorsize=512
*-medium
physical id: 0
logical name: /dev/sdd
*-disk
description: ATA Disk
product: SSD2SC120G1CS175
physical id: 0
bus info: scsi@0:0.0.0
logical name: /dev/sda  <== main system disk
version: 1101
serial: PNY20150000778410606
size: 111GiB (120GB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=f63b5929
*-cdrom  <== aka /dev/sr0
description: DVD writer
product: DVD+-RW GSA-H73N
vendor: HL-DT-ST
physical id: 1
bus info: scsi@1:0.0.0
logical name: /dev/cdrom
logical name: /dev/cdrw
logical name: /dev/dvd
logical name: /dev/dvdrw
logical name: /dev/sr0
version: B103
serial: [
capabilities: removable audio cd-r cd-rw dvd dvd-r
configuration: ansiversion=5 status=nodisc
*-disk
description: ATA Disk
product: SAMSUNG HE502HJ
physical id: 0.0.0
bus info: scsi@3:0.0.0
logical name: /dev/sdb  <== secondary disk
version: 0002
serial: S2B6J90B501053
size: 465GiB (500GB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=7e67ccf3
```
### Wrap-up
The **lshw** command provides details that many of us won't normally deal with. Still, it's nice to know how much information is available even if you only use a portion of it.
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3583598/how-to-view-information-on-your-linux-devices-with-lshw.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.facebook.com/NetworkWorld/
[2]: https://www.linkedin.com/company/network-world


@ -0,0 +1,119 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Drawing is an Open Source MS-Paint Type of App for Linux Desktop)
[#]: via: (https://itsfoss.com/drawing-app/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Drawing is an Open Source MS-Paint Type of App for Linux Desktop
======
_**Brief: Drawing is a basic image editor like Microsoft Paint. With this open source application, you can draw arrows, lines, geometrical shapes, add colors and other stuff you expect to do in a regular drawing application.**_
### Drawing: A simple drawing application for Linux
![][1]
For people introduced to computers with Windows XP (or an earlier version), MS Paint was an amusing application for sketching random stuff. In a world dominated by Photoshop and GIMP, paint applications still hold some relevance.
There are several [painting applications available for Linux][2], and I am going to add one more to this list.
The app is unsurprisingly called [Drawing][3] and you can use it on both Linux desktop and Linux smartphones.
### Features of Drawing app
![][4]
Drawing has all the features you expect from a drawing application. You can
* Create new drawings from scratch
* Edit an existing image in PNG, JPEG or BMP file
* Add geometrical shapes, lines, arrows etc
* Draw dashed lines and shapes
* Use pencil tool for free-hand drawing
* Use curve and shape tool
* Crop images
* Scale images to different pixel size
* Add text
* Select part of image (rectangle, freehand and color selection)
* Rotate images
* Add images copied to clipboard
* Eraser, Highlighter, Paint, Color Selection, Color Picker tools are available in preferences
* Unlimited undo
* Filters to add blur, pixelisation, transparency etc
### My experience with Drawing
![][5]
The application is new and has a decent user interface. It comes with all the basic features you expect to find in a standard paint app.
It has some additional tools like color selection and color picker, but it might be confusing to use them. There is no documentation available to describe the use of these tools, so you are on your own here.
The experience is smooth, and I feel that this tool has good potential to replace Shutter as an image editing tool (yes, I [use Shutter for editing screenshots][6]).
The thing that I find most bothersome is that it is not possible to edit or modify an element after adding it. You have the undo and redo options, but if you want to modify text you added 12 steps back, you'll have to redo all the steps. This is something the developer may look into in future releases.
### Installing Drawing on Linux
This is a Linux exclusive app. It is also available for Linux-based smartphones like [PinePhone][7].
There are various ways you can install Drawing app. It is available in the repositories of many major Linux distributions.
#### Ubuntu-based distributions
Drawing is included in the universe repository in Ubuntu, which means you can install it from the Ubuntu Software Center.
However, if you want the latest version, there is a [PPA available][8] for easily installing Drawing on Ubuntu, Linux Mint, and other Ubuntu-based distributions.
Use the following command:
```
sudo add-apt-repository ppa:cartes/drawing
sudo apt update
sudo apt install drawing
```
If you want to remove it, you can use the following commands:
```
sudo apt remove drawing
sudo add-apt-repository -r ppa:cartes/drawing
```
#### Other Linux distributions
Check your distribution's package manager for Drawing and install it from there. If you want the latest version, you may use the Flatpak version of the app.
[Drawing Flatpak][9]
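If you go the Flatpak route, the installation usually boils down to something like this (a sketch that assumes Flatpak itself is already installed; the application ID comes from the Flathub page linked above):
```
# Add the Flathub remote if it is not configured yet
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
# Install and launch Drawing
flatpak install flathub com.github.maoschanz.drawing
flatpak run com.github.maoschanz.drawing
```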
**Conclusion**
Do you still use a paint application? Which one do you use? If you have tried Drawing app already, how is your experience with it?
--------------------------------------------------------------------------------
via: https://itsfoss.com/drawing-app/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/drawing-app-interface.jpg?resize=789%2C449&ssl=1
[2]: https://itsfoss.com/open-source-paint-apps/
[3]: https://maoschanz.github.io/drawing/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/drawing-screenshot.jpg?resize=800%2C489&ssl=1
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/using-drawing-app-linux.png?resize=787%2C473&ssl=1
[6]: https://itsfoss.com/install-shutter-ubuntu/
[7]: https://itsfoss.com/pinephone/
[8]: https://launchpad.net/~cartes/+archive/ubuntu/drawing
[9]: https://flathub.org/apps/details/com.github.maoschanz.drawing

View File

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (This Python script mimics Babbage's Difference Engine)
[#]: via: (https://opensource.com/article/20/10/babbages-python)
[#]: author: (Greg Pittman https://opensource.com/users/greg-p)
This Python script mimics Babbage's Difference Engine
======
Python once again takes on Charles Babbage's Difference Engine.
![Math formulas in green writing][1]
In [_Use this Python script to simulate Babbage's Difference Engine_][2], Python offered an alternative solution to Babbage's problem of determining the number of marbles in a two-dimensional pyramid. Babbage's [Difference Engine][3] solved this using a table showing the number of marble rows and the total number of marbles.
After some contemplation, [Charles Babbage][4]'s ghost replied, "This is all well and good, but here you only take the number of rows and give the number of marbles. With my table, I can also tell you how large a pyramid you might construct given a certain number of marbles; simply look it up in the table."
Python had to agree that this was indeed the case, yet it knew that surely this must be solvable as well. With little delay, Python came back with another short script. The solution involves thinking through the math in reverse.
```
MarbNum = (N * (N + 1))/2
```
Which I can begin to solve with:
```
N * (N + 1) = MarbNum * 2
```
From which an approximate solution might be:
```
N = int(sqrt(MarbNum * 2))
```
But the integer _N_ that solves this might be too large by one, so I need to test for this. In other words, the correct number of rows will either be _N_ or _N-1_. Here is the final script:
```
#!/usr/bin/env python
# babbage2.py
"""
Using Charles Babbage's conception of a marble-counting operation for a regular
pyramid of marbles, starting with one at the top with each successive row having
one more marble than the row above it.
Will give you the total number of rows possible for a pyramid, given a total number
of marbles available.
As a bonus, you also learn how many are left over.
"""
import math
MarbNum = input("Enter the number of marbles you have:  ")
MarbNum = int(MarbNum)
firstguess = int(math.sqrt(MarbNum*2))
if (firstguess * (firstguess + 1) > MarbNum*2):
    correctNum = firstguess - 1
else:
    correctNum = firstguess
MarbRem = int(MarbNum - (correctNum * (correctNum + 1)/2))
# some grammatical fixes
if MarbRem == 0:
    MarbRem = "no"
 
if MarbRem == 1:
    marbleword = "marble"
else:
    marbleword = "marbles"
   
print ("You can have",correctNum, "rows, with",MarbRem, marbleword, "remaining.")
```
The output will look something like this:
```
Enter the number of marbles you have:  374865
You can have 865 rows, with 320 marbles remaining.
```
And Mr. Babbage's ghost was impressed. "Ah, your Python Engine is impressive indeed! Surely it might rival my [Analytical Engine][5], had I had the time to complete that project."
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/babbages-python
作者:[Greg Pittman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/greg-p
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/edu_math_formulas.png?itok=B59mYTG3 (Math formulas in green writing)
[2]: https://opensource.com/article/20/9/babbages-python
[3]: https://en.wikipedia.org/wiki/Difference_engine
[4]: https://en.wikipedia.org/wiki/Charles_Babbage
[5]: https://en.wikipedia.org/wiki/Analytical_Engine

View File

@ -0,0 +1,156 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Xen on Raspberry Pi 4 adventures)
[#]: via: (https://www.linux.com/featured/xen-on-raspberry-pi-4-adventures/)
[#]: author: (Linux.com Editorial Staff https://www.linux.com/author/linuxdotcom/)
Xen on Raspberry Pi 4 adventures
======
Written by [Stefano Stabellini][1] and [Roman Shaposhnik][2]
![][3]
Raspberry Pi (RPi) has been a key enabling device for the Arm community for years, given the low price and widespread adoption. According to the RPi Foundation, over 35 million have been sold, with 44% of these sold into industry. We have always been eager to get the Xen hypervisor running on it, but technical differences between RPi and other Arm platforms made it impractical for the longest time. Specifically, a non-standard interrupt controller without virtualization support.
Then the Raspberry Pi 4 came along, together with a regular GIC-400 interrupt controller that Xen supports out of the box. Finally, we could run Xen on an RPi device. Soon Roman Shaposhnik of Project EVE and a few other community members started asking about it on the **xen-devel** mailing list. _“It should be easy,”_ we answered. _“It might even work out of the box,”_ we wrote in our reply. We were utterly oblivious that we were about to embark on an adventure deep in the belly of the Xen memory allocator and Linux address translation layers.
The first hurdle was the availability of low memory addresses. RPi4 has devices that can only access the first 1GB of RAM. The amount of memory below 1GB in **Dom0** was not enough. Julien Grall solved this problem with a simple one-line fix to increase the memory allocation below 1GB for **Dom0** on RPi4. The patch is now present in Xen 4.14.
_“This lower-than-1GB limitation is uncommon, but now that it is fixed, it is just going to work.”_ We were wrong again. The Xen subsystem in Linux uses _virt_to_phys_ to convert virtual addresses to physical addresses, which works for most virtual addresses but not all. It turns out that the RPi4 Linux kernel would sometimes pass virtual addresses that cannot be translated to physical addresses using _virt_to_phys_, and doing so would result in serious errors. The fix was to use a different address translation function when appropriate. The patch is now present in Linux's master branch.
We felt confident that we finally reached the end of the line. _“Memory allocations check. Memory translations — check. We are good to go!”_ No, not yet. It turns out that the most significant issue was yet to be discovered. The Linux kernel has always had the concept of physical addresses and DMA addresses, where DMA addresses are used to program devices and could be different from physical addresses. In practice, none of the x86, ARM, and ARM64 platforms where Xen could run had DMA addresses different from physical addresses. The Xen subsystem in Linux is exploiting the DMA/physical address duality for its own address translations. It uses it to convert physical addresses, as seen by the guest, to physical addresses, as seen by Xen.
To our surprise and astonishment, the Raspberry Pi 4 was the very first platform to have physical addresses different from DMA addresses, causing the Xen subsystem in Linux to break. It wasn't easy to narrow down the issue. Once we understood the problem, a dozen patches later, we had full support for handling DMA/physical address conversions in Linux. The Linux patches are in master and will be available in Linux 5.9.
Solving the address translation issue was the end of our fun hacking adventure. With the Xen and Linux patches applied, Xen and Dom0 work flawlessly. Once Linux 5.9 is out, we will have Xen working on RPi4 out of the box.
We will show you how to run Xen on RPi4, the real Xen hacker way, and as part of a downstream distribution for a much easier end-user experience.
## **Hacking Xen on Raspberry Pi 4**
If you intend to hack on Xen on ARM and would like to use the RPi4 to do it, here is what you need to do to get Xen up and running using UBoot and TFTP. I like to use TFTP because it makes it extremely fast to update any binary during development.  See [this tutorial][4] on how to set up and configure a TFTP server. You also need a UART connection to get early output from Xen and Linux; please refer to [this article][5].
Use the [rpi-imager][6] to format an SD card with the regular default Raspberry Pi OS. Mount the first SD card partition and edit **config.txt**. Make sure to add the following:
```
kernel=u-boot.bin
enable_uart=1
arm_64bit=1
```
Download a suitable UBoot binary for RPi4 (u-boot.bin) from any distro, for instance [OpenSUSE][7]. Download the JeOS image, then open it and save **u-boot.bin**:
```
xz -d openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw.xz
kpartx -a ./openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw
mount /dev/mapper/loop0p1 /mnt
cp /mnt/u-boot.bin /tmp
```
Place u-boot.bin in the first SD card partition together with config.txt. Next time the system boots, you will get a UBoot prompt that allows you to load Xen, the Linux kernel for **Dom0**, the **Dom0 rootfs**, and the device tree from a TFTP server over the network. I automated the loading steps by placing a UBoot **boot.scr** script on the SD card:
```
setenv serverip 192.168.0.1
setenv ipaddr 192.168.0.2
tftpb 0xC00000 boot2.scr
source 0xC00000
```
Where:
```
- serverip is the IP of your TFTP server
- ipaddr is the IP of the RPi4
```
Use mkimage to generate boot.scr and place it next to config.txt and u-boot.bin:
```
mkimage -T script -A arm64 -C none -a 0x2400000 -e 0x2400000 -d boot.source boot.scr
```
Where:
```
- boot.source is the input
- boot.scr is the output
```
UBoot will automatically execute the provided boot.scr, which sets up the network and fetches a second script (boot2.scr) from the TFTP server. boot2.scr should come with all the instructions to load Xen and the other required binaries. You can generate boot2.scr using [ImageBuilder][8].
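To give a feel for what that involves, here is a rough sketch of an ImageBuilder-style configuration for a TFTP setup like the one above. Treat the variable names as an assumption on my part and check the ImageBuilder wiki for the authoritative format; only the file names and the Xen command line below are taken from this article:
```
# config - illustrative sketch only; confirm variable names against the ImageBuilder wiki
MEMORY_START="0x0"
MEMORY_END="0x40000000"
LOAD_CMD="tftpb"
XEN="xen"
XEN_CMD="console=dtuart dtuart=serial1 sync_console"
DOM0_KERNEL="Image"
DOM0_RAMDISK="dom0-rootfs.cpio.gz"
DEVICE_TREE="bcm2711-rpi-4-b.dtb"
NUM_DOMUS=0
UBOOT_SOURCE="boot2.source"
UBOOT_SCRIPT="boot2.scr"
```
A helper script in the ImageBuilder repository then turns a configuration like this into the boot2.scr that UBoot fetches over TFTP.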
Make sure to use Xen 4.14 or later. The Linux kernel should be master (or 5.9 when it is out; 5.9-rc4 works). The Linux ARM64 default config works fine as kernel config. Any 64-bit rootfs should work for Dom0. Use the device tree that comes with upstream Linux for RPi4 (**arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb**). RPi4 has two UARTs; the default is **bcm2835-aux-uart** at address **0x7e215040**. It is specified as “serial1” in the device tree instead of serial0. You can tell Xen to use serial1 by specifying on the Xen command line:
```
console=dtuart dtuart=serial1 sync_console
```
 The Xen command line is provided by the **boot2.scr** script generated by ImageBuilder as “**xen,xen-bootargs**“. After editing **boot2.source** you can regenerate **boot2.scr** with **mkimage**:
```
mkimage -A arm64 -T script -C none -a 0xC00000 -e 0xC00000 -d boot2.source boot2.scr
```
## **Xen on Raspberry Pi 4: an easy button**
Getting your hands dirty by building and booting Xen on Raspberry Pi 4 from scratch can be not only deeply satisfying but can also give you a lot of insight into how everything fits together on ARM. Sometimes, however, you just want to get a quick taste of what it would feel like to have Xen on this board. This is typically not a problem for Xen, since pretty much every Linux distribution provides Xen packages and having a fully functional Xen running on your system is a mere “apt” or “zypper” invocation away. However, given that Raspberry Pi 4 support is only a few months old, the integration work hasn't been done yet. The only operating system with fully integrated and tested support for Xen on Raspberry Pi 4 is [LF Edge's Project EVE][9].
Project EVE is a secure-by-design operating system that supports running Edge Containers on compute devices deployed in the field. These devices can be IoT gateways, Industrial PCs, or general-purpose ruggedized computers. All applications running on EVE are represented as Edge Containers and are subject to container orchestration policies driven by k3s. Edge containers themselves can encapsulate Virtual Machines, Containers, or Unikernels. 
You can find more about EVE on the project's website at <http://projecteve.dev> and its GitHub repo <https://github.com/lf-edge/eve/blob/master/docs/README.md>. The latest instructions for creating bootable media for Raspberry Pi 4 are also available at:
<https://github.com/lf-edge/eve/blob/master/docs/README.md>
Because EVE publishes fully baked downloadable binaries, using it to give Xen on Raspberry Pi 4 a try is as simple as:
```
$ docker pull lfedge/eve:5.9.0-rpi-xen-arm64 # you can pick a different 5.x.y release if you like
$ docker run lfedge/eve:5.9.0-rpi-xen-arm64 live > live.raw
```
This is followed by flashing the resulting **live.raw** binary onto an SD card using your favorite tool. 
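If you do not have a favorite tool yet, plain **dd** works; just be absolutely sure of the device name first, since /dev/sdX below is only a placeholder for your SD card:
```
# Destructive: double-check that /dev/sdX really is the SD card before running this
sudo dd if=live.raw of=/dev/sdX bs=4M status=progress conv=fsync
```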
Once those steps are done, you can insert the card into your Raspberry Pi 4, connect the keyboard and the monitor and enjoy a minimalistic Linux distribution (based on Alpine Linux and Linuxkit) that is Project EVE running as **Dom0** under Xen.
As far as Linux distributions go, EVE presents a somewhat novel design for an operating system, but at the same time, it is heavily inspired by ideas from Qubes OS, ChromeOS, Core OS, and Smart OS. If you want to take it beyond simple console tasks and explore how to run user domains on it, we recommend heading over to EVE's sister project Eden: <https://github.com/lf-edge/eden#raspberry-pi-4-support> and following a short tutorial over there.
If anything goes wrong, you can always find an active community of EVE and Eden users on LF Edge's Slack channels starting with #eve over at <http://lfedge.slack.com/> — we'd love to hear your feedback.
In the meantime, happy hacking!
--------------------------------------------------------------------------------
via: https://www.linux.com/featured/xen-on-raspberry-pi-4-adventures/
作者:[Linux.com Editorial Staff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/linuxdotcom/
[b]: https://github.com/lujun9972
[1]: https://twitter.com/stabellinist?lang=en
[2]: https://twitter.com/rhatr?lang=en
[3]: https://www.linux.com/wp-content/uploads/2020/09/xen_project_logo.jpg
[4]: https://help.ubuntu.com/community/TFTP
[5]: https://lancesimms.com/RaspberryPi/HackingRaspberryPi4WithYocto_Part1.html
[6]: https://www.raspberrypi.org/documentation/installation/installing-images/#:~:text=Using%20Raspberry%20Pi%20Imager,Pi%20Imager%20and%20install%20it
[7]: https://en.opensuse.org/HCL:Raspberry_Pi4
[8]: https://wiki.xenproject.org/wiki/ImageBuilder
[9]: https://www.lfedge.org/projects/eve/

View File

@ -0,0 +1,104 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Present Slides in Linux Terminal With This Nifty Python Tool)
[#]: via: (https://itsfoss.com/presentation-linux-terminal/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Present Slides in Linux Terminal With This Nifty Python Tool
======
Presentations are often boring. This is why some people add animations or comics/memes to bring in some humor and style and break the monotony.
If you have to add some unique style to your college or company presentation, how about using the Linux terminal? Imagine how cool it would be!
### Present: Do Your Presentation in Linux Terminal
There are so many amusing and [fun stuff you can do in the terminal][1]. Making and presenting slides is just one of them.
A Python-based application named [Present][2] lets you create markdown and YML-based slides that you can present in your college or company and amuse people in true geek style.
I have made a video showing what it would look like to present something in the Linux terminal with Present.
[Subscribe to our YouTube channel for more Linux videos][3]
#### Features of Present
You can do the following things with Present:
* Use markdown syntax for adding text to the slides
* Control the slides with arrow or PgUp/Down keys
* Change the foreground and background colors
* Add images to the slides
* Add code blocks
* Play a simulation of code and output with codio YML files
#### Installing Present on Linux
Present is a Python-based tool and you can use pip to install it. You should make sure to [install Pip on Ubuntu][4] with this command:
```
sudo apt install python3-pip
```
If you are using some other distribution, please check your package manager to install pip3.
Once you have PIP installed, you can install Present system wide in this manner:
```
sudo pip3 install present
```
You may also install it for only the current user, but then you'll also have to add ~/.local/bin to your PATH.
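For reference, a per-user installation would look roughly like this (the PATH line assumes a Bash shell):
```
pip3 install --user present
# pip's user-level scripts land in ~/.local/bin, so put it on the PATH
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```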
#### Using Present to create and present slides in Linux terminal
![][5]
Since Present utilizes markdown syntax, you should be aware of it to create your own slides. Using a [markdown editor][6] will be helpful here.
Present needs a markdown file to read and play the slides. You may [download this sample slide][7], but you need to download the embedded image separately and put it inside an images folder.
* Separate slides using `---` in your markdown file (there is a short sample after this list).
* Use markdown syntax for adding text to the slides.
* Add images with this syntax: `![RC](images/name.png)`.
* Change slide colors by adding syntax like `<! fg=white bg=red >`.
* Add a slide with effects using syntax like `<! effect=fireworks >`.
* Use [codio syntax][8] to add a code running simulation.
* Quit the presentation using q and control the slides with left/right arrow or PgUp/Down keys.
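Putting a few of those rules together, a minimal slide file might look something like this. It is only a sketch based on the points above; the sample linked earlier is the authoritative reference:
```
# My first terminal talk

Hello from the Linux terminal!

---

## Second slide

![RC](images/name.png)

- Regular markdown lists work too
```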
Keep in mind that resizing the terminal window while running the presentation will mess things up, and so does pressing the Enter key.
**Conclusion**
If you are familiar with Markdown and the terminal, using Present won't be difficult for you.
You cannot compare it to regular presentation slides made with Impress, MS Office, etc., but it is a cool tool to use occasionally. If you are a computer science/networking student or work as a developer or sysadmin, your colleagues will surely find this amusing.
--------------------------------------------------------------------------------
via: https://itsfoss.com/presentation-linux-terminal/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/funny-linux-commands/
[2]: https://github.com/vinayak-mehta/present
[3]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[4]: https://itsfoss.com/install-pip-ubuntu/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/presentation-in-linux-terminal.png?resize=800%2C494&ssl=1
[6]: https://itsfoss.com/best-markdown-editors-linux/
[7]: https://github.com/vinayak-mehta/present/blob/master/examples/sample.md
[8]: https://present.readthedocs.io/en/latest/codio.html

View File

@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Akraino: An Open Source Project for the Edge)
[#]: via: (https://www.linux.com/news/akraino-an-open-source-project-for-the-edge/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
Akraino: An Open Source Project for the Edge
======
Akraino is an open-source project designed for the Edge community to easily integrate open source components into their stack. It's a set of open infrastructures and application blueprints spanning a broad variety of use cases, including 5G, AI, Edge IaaS/PaaS, and IoT, for both provider and enterprise Edge domains. We sat down with Tina Tsou, TSC Co-Chair of the Akraino project, to learn more about it and its community.
Here is a lightly edited transcript of the interview:
Swapnil Bhartiya: Today, we have with us Tina Tsou, TSC Co-Chair of the Akraino project. Tell us a bit about the Akraino project.
Tina Tsou: Yeah, I think Akraino is an Edge Stack project under Linux Foundation Edge. Before Akraino, the developers had to go to the upstream community to download the upstream software components and integrate in-store to test. With the blueprint ideas and concept, the developers can directly do the use-case base to blueprint, do all the integration, and [have it] ready for the end-to-end deployment for Edge.
Swapnil Bhartiya: The blueprints are the critical piece of it. What are these blueprints and how do they integrate with the whole framework?
Tina Tsou: Based on a certain use case, we do the community CI/CD (continuous integration and continuous deployment). We also have proven security requirements. We do the community lab and we also do the life cycle management. And then we do the production quality, which is deployment-ready.
Swapnil Bhartiya: Can you explain what the Edge computing framework looks like?
Tina Tsou: We have four segments: Cloud, Telco, IoT, and Enterprise. When we do the framework, it's like we have a framework of the Edge compute in general, but for each segment, they are slightly different. You will see in the lower level, you have the network, you have the gateway, you have the switches. Above that, you have all kinds of FPGAs and then the data plane. Then, you have the controllers and orchestration, like the Kubernetes stuff, and all kinds of applications running on bare metal, virtual machines or containers. By the way, we also have the orchestration on the site.
Swapnil Bhartiya: And how many blueprints are there? Can you talk about it more specifically?
Tina Tsou: I think we have around 20-ish blueprints, but they are converged into blueprint families. We have a blueprint family for telco appliances, including Radio Edge Cloud, and SEBA that has enabled broadband access. We also have a blueprint for Network Cloud. We have a blueprint for Integrated Edge Cloud. We have a blueprint for Edge Lite IoT. So, in this case, the different blueprints in the same blueprint family can share the same software framework, which saves a lot of time. That means we can deploy it at a large scale.
Swapnil Bhartiya: The software components, which you already talked about in each blueprint, are they all in the Edge project or there are some components from external projects as well?
Tina Tsou: We have the philosophy of upstream first. If we can find it from the upstream community, we just directly take it from the upstream community and install and integrate it. If we find something that we need, we go to the upstream community to see whether it can be changed or updated there.
Swapnil Bhartiya: How challenging or easy it is to integrate these components together, to build the stack?
Tina Tsou: It depends on which group and family we are talking about. I think most of them are at a middle level: not too easy, not too complex. But the reference has to create the installation, like the YAML file configuration, and for builds on ISO images, some parts may be more complex and some parts will be easy to download and integrate.
Swapnil Bhartiya: We have talked about the project. I want to talk about the community. So first of all, tell us what is the role of TSC?
Tina Tsou: We have a whole bunch of documentation on how the TSC runs if you want to read it. I think the role of the TSC is more tactical steering. We have a chair and co-chair, and there are like 6-7 subcommittees for specific topics like security, technical community, CI, and the documentation process.
Swapnil Bhartiya: What kind of community is there around the Akraino project?
Tina Tsou: I think we have a pretty diverse community. We have the end-users like the telcos and the hyperscalers, the internet companies, and also enterprise companies. Then we have the OEM/ODM vendors, the chip makers or the SoC makers. Then we have the IP companies and even some universities.
Swapnil Bhartiya: Tina, thank you so much for taking the time today to explain the Akraino project and also about the blueprints, the community, and the roadmap for the project. I look forward to seeing you again to get more updates about the project.
Tina Tsou: Thank you for your time. I appreciate it.
--------------------------------------------------------------------------------
via: https://www.linux.com/news/akraino-an-open-source-project-for-the-edge/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bringing COBOL to the Modern World)
[#]: via: (https://www.linux.com/news/bringing-cobol-to-the-modern-world/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
Bringing COBOL to the Modern World
======
COBOL is powering most of the critical infrastructure that involves any kind of monetary transaction. In this special interview conducted during the recent Open Mainframe Summit, we talked about the relevance of COBOL today and the role of the new COBOL working group that was announced at the summit. Joining us were Cameron Seay, Adjunct Professor at East Carolina University and Derek Lisinski of the Application Modernizing Group at Micro Focus. Micro Focus recently joined the Open Mainframe Project and is now also involved with the working group.
Here is an edited version of the discussion:
Swapnil Bhartiya: First of all, Cam and Derek, welcome to the show. If you look at COBOL, its very old technology. Who is still using COBOL today? Cam, I would like to hear your insight first.
Cameron Seay: Every large commercial bank I know of uses COBOL. Every large insurance company, every large federal agency, every large retailer uses COBOL to some degree, and it processes a large percentage of the world's financial transactions. For example, if you go to Walmart and you make a sale, that transaction is probably recorded using a COBOL program. So, it's used a lot; a large percentage of the global business is still done in COBOL.
Swapnil Bhartiya: Micro Focus is, I think, one of the few companies that offer support around COBOL. Derek, please tell people the importance of COBOL in today's modern world.
Derek Lisinski: Well, if we go back in time, there weren't that many choices on the market. If you wanted robust technology to build your business systems, COBOL was one of the very few choices, and so it's surprising when there are so many choices around today and yet, many of the world's largest industries, largest organizations still rely on COBOL. If COBOL wasn't being used, so many of those systems that people trust and rely on would not work — whether you're moving money around, whether you're running someone's payroll, whether you're getting an insurance quotation, shipping a parcel, booking a holiday. All of these things are happening with COBOL at the backend, and the value you're getting from that is not just that it's carried on, but it runs with the same results again and again and again, without fail.
The importance of COBOL is not just its pervasiveness, which I think is significant and perhaps not that well understood, but also its reliability. And because it's welded very closely to the mainframe environments and to CICS and some other core elements of the mainframe and other platforms as well, it uses and trusts a lot of technology that is unrivaled in terms of its reliability, scalability and its performance. That's why it remains so important to the global economy and to so many industries. It does what it needs to do, which is business processing, so fantastically well.
Swapnil Bhartiya: Excellent, thanks for talking about that. Now, you guys recently joined the project and the foundation as well, so talk about why you joined the Open Mainframe Project and what are the projects that you will be involved with, of course. I know you're involved with the working group, but talk about your involvement with the project.
Derek Lisinski: Well, our initial interest with the Open Mainframe Project goes back a couple of years. We're longtime proponents of the mainframe platform, of course, here at Micro Focus. We've had a range of technologies that run on z/OS. But our interest in the wider mainframe community—and that would be the Open Mainframe Project—probably comes as a result of the time we've spent with the SHARE community and other IBM-sponsored communities, where the discussion was about the best way to embrace this trusted technology in the digital era. This is becoming a very topical conversation and that's also true for COBOL, which I'm sure we'll come back to.
Our interest in the OMP has been going on for the last couple of years and we were finally able to reach an agreement between both organizations to join the group this year, specifically because of a number of initiatives that we have going on at Micro Focus and that a number of our customers have talked to us about specifically in the area of mainframe DevOps. As vital as the mainframe platform is, there's a growing desire to use it to deliver greater and greater value to the business, which typically means trying to accelerate delivery cycles and get more done.
Of course, now the mainframe is so inextricably connected with other parts of the IT ecosystem that those points of connection and the number of moving parts have to be handled, integrated with, and managed as part of a delivery process. It's an important part of our customers' roadmap and, therefore, our roadmap to ensure that they get the very best of technology in the mainframe world. Whether it's tried-and-trusted technology, whether it's new emerging vendor technology, or whether in many cases, it becomes open source technology. We wanted to play our part in those kinds of projects and a number of initiatives around.
Swapnil Bhartiya: Is there an increase in interest in COBOL that we are seeing there now that there is a dedicated working group? And if you can also talk a bit about what will be the role of this group.
Cameron Seay: If your question was, is there an increased interest in working in COBOL because of the working group, the working group actually came as a result of a renewed interest in and new discovery of COBOL. The governor of New Jersey made a comment that their unemployment was not able to be processed because of COBOL's obsolescence, or inefficiency, or inadequacy to some degree. And that sparked quite a furor in the mainframe world because it wasn't COBOL at all. COBOL had nothing to do with the inability of New Jersey to deliver the unemployment checks. Further, we're aware that New Jersey is just typical of every state. Every state that I know of—there may be some exceptions I'm not aware of, I know it's certainly true for California and New York—is dependent upon COBOL to process their day-to-day business applications.
So, then Derek and some other people inside the OMP got together and started having some conversations, myself included, and said “We maybe need to form a COBOL working group to renew this interest in COBOL and establish the facts around COBOL.” So that's kind of what the working group is trying to do, and we're trying to increase that familiarity, visibility and interest in COBOL.
Swapnil Bhartiya: Derek, I want to bring the same question to you also. Is there any particular reason that we are seeing an increase in interest in COBOL and what is that reason?
Derek Lisinski: Yeah, that's a great question and I think there are a few reasons. First of all, I think a really important milestone for COBOL was actually last year when it turned 60 years old. I think one of your earlier questions is related to COBOL's age being 60. Of course, COBOL isn't a 60-year-old language, but the idea is 60 years old, for sure. If you drive a 2020 motor car, you're driving a 2020 motor car, you're not driving a hundred-year-old idea. No one thinks a modern telephone is an old idea, either. It's not old technology, sorry.
The idea might've been from a long time ago, but the technology has advanced, and the same thing is true in code. But when we celebrated COBOL's 60th anniversary last year—a few of the vendors did and a number of organizations did, too—there was an outpouring of interest in the technology. A lot of times, COBOL just quietly goes about its business of running the world's economy without any fuss. Like I said, it's very, very reliable and it never really breaks. So, it was never anything to talk about. People were sort of pleasantly surprised, I think, to learn of its age, to learn of the age of the idea. Now, of course, Micro Focus and IBM and some of the other vendors continue to update and adapt COBOL so that it continues to evolve and be relevant today.
It's actually a 2020 technology rather than a 1960 one, but that was the first one. Secondly, the pandemic caused a lot of businesses to have to change how they process core systems and how they interact with their customers. That put extra strain on certain organizations or certain government agencies and, in a couple of cases, COBOL was incorrectly made the scapegoat for some of the challenges that those organizations faced, whether it was a skills issue or whether it was a technology issue. Under the cover, COBOL was working just fine. So the interest has been positive regarding the anniversary, but I think the reports have been inaccurate and perhaps a little unkind about COBOL. Those were the two reasons they came together.
I remember when I first spoke to Cam and to some of the other people on the working group, you said it was a very good idea once and for all that we told the truth about COBOL, so that the industry finally understood how viable it is, how valuable it is, based on the facts behind COBOL's usage. So one of the things we're going to do is try to quantify and qualify, as best we can, how widely COBOL is used, what it is used for, who is using it, and then present a more factual story about the technology so people can make a more informed decision about technical strategy. Rather than base it on hearsay or some reputation about something being a bit rusty and out-of-date, which is probably the reputation that's being espoused by someone who would have you replace it with something else, and their motivation might be for different reasons. There's nothing wrong with COBOL and it's very, very viable, and our job, I think, really is to tell that truth and make sure people understand it.
Swapnil Bhartiya: What other projects, efforts, or initiatives are going on there at the Linux Foundation or Open Mainframe Project around COBOL? Can you talk about that?
Cameron Seay: Well, certainly. There is currently a course being developed by folks in the community who have developed an online course in COBOL. It's the rudiments of it. It's for novices, but it's great for a continuing education program. So, that's one of the things going on around COBOL. Another thing is there's a lot going on in mainframe development in the OMP now. There's an application framework that has been developed called Zowe that will allow you to develop applications for z/OS. It's interesting that the focus of the Open Mainframe Project when it first began was Linux on the mainframe, but actually the first real project that came out of it was a z/OS-based product, Zowe, and so we're interested in that, too. Those are just a couple of peripheral projects that the COBOL working group is going to work with.
There are other things we want to do from a curriculum standpoint down the road, but fundamentally, we just want to be a fact-finding, fact-gathering operation first, and Derek Lisinski has been taking leadership and putting together a substantial reference list so that we can get the facts about COBOL. Then, we're going to do other things, but we want to get that right first.
Swapnil Bhartiya: So there are, as you mentioned, a couple of projects. Is there any overlap between these projects, or how different are they? Do they all serve a different purpose? It looks like when you're explaining the goal and role of the working group, it sounds like it's also the training or education group with the same kind of activities. Let me rephrase it properly: what are some of the pressing needs you see for the COBOL community, how are these efforts/groups trying to help them, and how are they not overlapping or stepping on each other's toes?
Cameron Seay: That's an ongoing thing. Susharshna and I really work hard to make sure that we're not either working at cross purposes or duplicating effort. We're kind of clear about our roles. For the world at large, for the public at large, the working group—and Derek may have a different view on this because we all don't think alike, we all don't see this thing exactly the same—but I see it as information first. We want people to get accurate, current information about COBOL.
Then, we want to provide some vehicle that COBOL can be reintroduced back into the general academic curriculum because it used to be. I studied COBOL at a four-year university. Most people did; when they took programming in the '80s and the '90s, they took COBOL, but that's not true anymore. Our COBOL course at East Carolina this semester is the only COBOL course in the entire USC system. That's got to change. So information, exposure, accurate information exposure, and some kind of return to the general curriculum, those are the three things that we can provide to the community at large.
Swapnil Bhartiya: If you look at Micro Focus, you are working in the industry, you are actually solving the problem for your customers. What role do these groups or other efforts that are going on there play for the whole ecosystem?
Derek Lisinski: Well, I think if we go back to Cam's answer, I think he's absolutely right: if you project forward another generation, in 25 years' time, who in the industry is going to be managing these core business systems that currently still need to run the world's largest organizations? I know we're in a digital era and I know that things are changing at an unprecedented pace, but most of the world's largest organizations, successful organizations still want to be in those positions in generations to come. So who is it? Who are those practitioners that are coming through the education system right now, who are going to be leaders in those organizations' IT departments in the future?
And there is a concern, not just for COBOL, but actually for many IT skills across the board. Is there going to be enough talent to actually run the organizations of the future? And that's true, it's a true question mark about COBOL. So Micro Focus, which has its own academic initiative and its own training program, as does IBM, as do many of the other vendors, we all applaud the work of all community groups. The OMP is obviously a fabulous example because it is genuinely an open group. Genuinely, it's a meritocracy of people with good ideas coming together to try to do the right thing. We applaud the efforts to ensure that there continues to be enough supply of talented IT professionals in the future to meet the growing and growing demand. IT is not going away. It's going to become strategically more and more important to these organizations.
Our part to play at Micro Focus is really to work shoulder-to-shoulder with organizations like the OMP because, between us, we will create enough groundswell of training and opportunity for that next generation. Many people will tell you there just isn't enough of that training going on and there aren't enough of those opportunities available, even though one survey that Micro Focus ran last year on the back of COBOL's 60th anniversary suggests that around 92% of all application owners of COBOL systems confirmed that those applications remain strategic to their organization. So, if the applications are not going anywhere, who's going to be looking after them in the next generation? And that's the real challenge that I think the industry faces as a whole, which is why Micro Focus is so committed to get behind the wheel of making sure that we can make a difference.
Swapnil Bhartiya: We discussed that the interest in COBOL is increasing as COBOL is playing a very critical role in the modern economy. What kind of future do you see for COBOL and where do you see it going? I mean, it's been around for 60 years, so it knows how to survive through the times. Still, where do you see it going? Cam, I would love to start with you.
Cameron Seay: Yeah, absolutely. We are trying to estimate how much COBOL is actually in use. That estimate is running into hundreds of billions of lines of code. I know that, for example, Bank of America admits to at least 50 million lines of COBOL code. That's a lot of COBOL, and you're not going to replace it over time; there's no reason to. So the solution to this problem, and this is what we're going to do, is we're going to figure out a way to teach people COBOL. It's not a complex language to learn. Any organization that sees a lack of COBOL skills as an impediment and justification to move to another platform is [employing] a ridiculous solution; that solution is not feasible. If they try to do that, they're going to fail because there's too much risk and, most of all, too much expense.
So, we're going to figure out a way to begin to teach people COBOL again. I do it, a COBOL class at East Carolina. That is a solution to this problem because the code's not going anywhere, nor is there a reason for it to go anywhere, it works! It's a simple language, it's as fast as it needs to be, it's as secure as it needs to be, and no one that I've talked to, computer scientists all over the world, no one can give me any application where any language is going to work better than COBOL. There may be some that work as good or nearly as good, but you're going to have to migrate them, but there's nothing, there's no improvement that you can make on these applications from a performance standpoint and from a security standpoint. The applications are going to stay where they are, and we're just going to have to teach people COBOL. That's the solution, that's what's going to happen. How and when, I don't know, but that's what's going to happen.
Swapnil Bhartiya: If you look at the crisis that we were going through, almost everything, every business is moving online to the cloud. All those transactions that people are already doing in person are all moving online, so it has become critical. From your perspective, what kind of future do you see?
Derek Lisinski: Well, that's a great question because the world is a very, very different place to how architecture was designed however long ago. Companies of today are not using that architecture. So there is some question mark there about what's COBOL's future. I agree with Cam. Anyone that has COBOL is not necessarily going to be able to throw that away anytime soon because, frankly, it might be difficult. It might be easy, but that's not really the question, is it? Is it a good business decision? The answer is it's a terrible business decision to throw it away.
In addition to that, I would contend that there are a number of modern-day digital use cases where actually the usage of COBOL is going to increase rather than decrease. We see this all the time with our larger organizations who are using it for pretty much the whole of the backend of their core business. So, whether it's a banking organization or an insurer or a logistics company, what they're trying to do obviously is find new and exciting business opportunities.
But they will be basing those on the core business systems that already run most of the business today, and then trying to use that to adapt, to enhance, to innovate. There are insurers who are selling their insurance quotation system to other smaller insurers as a service. Now, of course, their insurance quotation system is probably the version that isn't quite as quick as the one that runs on their mainframe, but they're making that available as a service to other organizations. Banking organizations are doing much the same thing with a range of banking services, maybe payment systems. These are all services that can be provided to other organizations.
The same is true in the ISV market, where really, really robust COBOL-based financial services packages and ERP systems, which are COBOL based, have been made available as cloud-based as-a-service packages or upon other platforms to meet new market needs. The thing about COBOL that few people understand is not only is it easy to learn, but it's easy to move to somewhere else. So, if your client is now running Linux and it says, “Well, now I want it to run these core COBOL business systems there, too.” Well, maybe they've made a move to AIX or to a Power system, but the same COBOL system can be reused, replicated as necessary, which is a little-known secret about the language.
This goes back to the original design, of course. Back in the day, there was no such thing as the “standard platform” in 1960. There wasn't a single platform that you could reasonably rely on that would give you a decent answer, not very quickly anyway. So, in order for us to know that COBOL works, we have to have the same results compiled and running on different machines. It needs to be the same result running at the same speed, and from that point, that's when the portability of the system came to life. That's what they set out to do; it was built that way by design.
Swapnil Bhartiya: Cam, Derek, thank you so much for taking the time out today to talk about COBOL and how important it is in today's world. I'm pretty sure that as we go through our day, some of the activities that we do online touch COBOL or are powered by COBOL.
--------------------------------------------------------------------------------
via: https://www.linux.com/news/bringing-cobol-to-the-modern-world/
作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972

View File

@ -0,0 +1,89 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How open source underpins blockchain technology)
[#]: via: (https://opensource.com/article/20/10/open-source-blockchain)
[#]: author: (Matt Shealy https://opensource.com/users/mshealy)
How open source underpins blockchain technology
======
Openness, not regulation, is what creates blockchain's security and reliability.
![cubes coming together to create a larger cube][1]
People are often surprised when they find out that blockchain technology, which is known for its security, is built on open source software code. In fact, this openness is what creates its security and reliability.
One of the core values of building anything as open source is gaining efficiency. Creating a community of developers with different perspectives and skillsets, all working on the same code base, can exponentially increase the number and complexity of applications built.
### Open source: more common than people think
One of the more popular operating systems, Linux, is open source. Linux powers the servers for many of the services we feel comfortable sharing personal information on every day. This includes Google, Facebook, and thousands of major websites. When you're interacting with these services, you're doing so on computer networks that are running Linux. Chromebooks are using Linux. Android phones use an operating system based on Linux.
Linux is not owned by a corporation. It's free to use and created by collaborative efforts. More than 20,000 developers from more than 1,700 companies [have contributed to the code][2] since its origins in 2005. 
That's how open source software works. Tons of people contribute and constantly add, modify, or build off the open source codebase to create new apps and platforms. Much of the software code for blockchain and cryptocurrency has been developed using open source software. Open source software is built by passionate users that are constantly on guard for bugs, glitches, or flaws. When a problem is discovered, a community of developers works separately and together on the fix.
### Blockchain and open source
An entire community of open source blockchain developers is constantly adding to and refining the codebase.
Here are the fundamental ways blockchain performs:
* Blockchain platforms have a transactional database that allows peers to transact with each other at any time.
* User-identification labels are attached that facilitate the transactions.
* The platforms must have a secure way to verify transactions before they become approved.
* Transactions that cannot be verified will not take place.
Open source software allows developers to create these platforms in a [decentralized application (Dapp)][3], which is key to the safety, security, and variability of transactions in the blockchain.
This decentralized approach means there is no central authority to mediate transactions. That means no one person controls what happens. Direct peer-to-peer interactions can happen quickly and securely. As transactions are recorded in the ledger, they are distributed across the ecosystem.
Blockchain uses cryptography to keep things secure. Each transaction carries information connecting it with previous transactions to verify its authenticity. This prevents threat actors from tampering with the data because once it's added to the public ledger, it can't be changed by other users.
### Is blockchain open source?
Although blockchain itself may not technically be open source, blockchain _systems_ are typically implemented with open source software using a concept that embodies an open culture because no government authority regulates it. Proprietary software developed by a private company to handle financial transactions is likely regulated by [government agencies][4]. In the US, that might include the Securities and Exchange Commission (SEC), the Federal Reserve Board, and the Federal Deposit Insurance Corporation (FDIC). Blockchain technology doesn't require government oversight when it's used in an open environment. In effect, the community of users is what verifies transactions.
You might call it an extreme form of crowdsourcing, both for developing the open source software that's used to build the blockchain platforms and for verifying transactions. That's one of the reasons blockchain has gotten so much attention: It has the potential to disrupt entire industries because it acts as an authoritative intermediary to handle and verify transactions.
### Bitcoin, Ethereum, and other cryptocurrencies
As of June 2020, more than [50 million people have blockchain wallets][5]. Most are used for financial transactions, such as trading Bitcoin, Ethereum, and other cryptocurrencies. It's become mainstream for many to [check cryptocurrency prices][6] the same way traders watch stock prices.
Cryptocurrency platforms also use open source software. The [Ethereum project][7] developed free and open source software that anyone can use, and a large community of developers contributes to the code. The Bitcoin reference client was developed by more than 450 developers and engineers that have made more than 150,000 contributions to the code-writing effort.
A cryptocurrency blockchain is a continuously growing record. Each record is linked together in a sequence, and the records are called blocks. When linked together, they form a chain. Each block has its own [unique marker called a hash][8]. A block contains its hash and a cryptographic hash from a previous block. In essence, each block is linked to the previous block, forming long chains that are impossible to break, with each containing information about other blocks that are used to verify transactions.
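To make that hash-linking idea concrete, here is a tiny, purely illustrative Python sketch (not the code of any real cryptocurrency) in which each block stores the previous block's hash, so editing an old block breaks the chain:
```
import hashlib
import json

def block_hash(data, prev_hash):
    # Hash the block contents together with the previous block's hash
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev_hash,
                  "hash": block_hash(data, prev_hash)})

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")

# Tamper with the first block: its recomputed hash no longer matches
# the prev_hash stored in the second block, so the break is detectable.
chain[0]["data"] = "Alice pays Bob 500"
print(block_hash(chain[0]["data"], chain[0]["prev_hash"]) == chain[1]["prev_hash"])  # prints False
```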
There's no central bank in financial or cryptocurrency blockchains. The blocks are distributed throughout the internet, creating a robust audit trail that can be tracked. Anyone with access to the chain can verify a transaction but cannot change the records.
### An unbreakable chain
While blockchains are not regulated by any government or agency, the distributed network keeps them secure. As chains grow, each transaction makes it more difficult to fake. Blocks are distributed all over the world in networks using trust markers that can't be changed. The chain becomes virtually unbreakable.
The code behind this decentralized network is open source and is one of the reasons users trust each other in transactions rather than having to use an intermediary such as a bank or broker. The software underpinning cryptocurrency platforms is open to anyone and free to use, created by consortiums of developers that are independent of each other. This has created one of the world's largest check-and-balance systems.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/open-source-blockchain
作者:[Matt Shealy][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mshealy
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube)
[2]: https://www.linuxfoundation.org/wp-content/uploads/2020/08/2020_kernel_history_report_082720.pdf
[3]: https://www.freecodecamp.org/news/what-is-a-dapp-a-guide-to-ethereum-dapps/
[4]: https://www.investopedia.com/ask/answers/063015/what-are-some-major-regulatory-agencies-responsible-overseeing-financial-institutions-us.asp
[5]: https://www.statista.com/statistics/647374/worldwide-blockchain-wallet-users/
[6]: https://www.okex.com/markets
[7]: https://ethereum.org/en/
[8]: https://opensource.com/article/18/7/bitcoin-blockchain-and-open-source

View File

@ -0,0 +1,368 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Level up your shell history with Loki and fzf)
[#]: via: (https://opensource.com/article/20/10/shell-history-loki-fzf)
[#]: author: (Ed Welch https://opensource.com/users/ewelch)
Level up your shell history with Loki and fzf
======
Loki expands the model Prometheus uses for metrics for monitoring and log aggregation.
![Gears above purple clouds][1]
[Loki][2] is an Apache 2.0-licensed open source log-aggregation framework designed by Grafana Labs and built with tremendous support from a growing community. It is also the project I work on every day. In this article, rather than just talking about how Loki works, I will provide a hands-on introduction to solving real problems with it.
### The problem: a durable centralized shell history
I love my shell history and have always been a fanatical CTRL+R user. About a year ago, my terminal life changed forever when my peer Dieter Plaetinck introduced me to the command-line fuzzy finder **[fzf][3]**.
Suddenly, searching through commands went from this:
![Before Loki and fzf][4]
(Ed Welch, [CC BY-SA 4.0][5])
To this:
![After Loki and fzf][6]
(Ed Welch, [CC BY-SA 4.0][5])
While fzf significantly improved my quality of life, there were still some pieces missing around my shell history:
* Losing shell history when terminals close abruptly, computers crash, computers die, whole disk encryption keys are forgotten
* Having access to my shell history _from_ all my computers _on_ all my computers
I think of my shell history as documentation: it's an important story I don't want to lose. Combining Loki with my shell history helps solve these problems and more.
### About Loki
Loki takes the intuitive label model that the open source [Prometheus][7] project uses for metrics and expands it into the world of log aggregation. This enables developers and operators to seamlessly pivot between their metrics and logs using the same set of labels. Even if you're not using Prometheus, there are still plenty of reasons Loki might be a good fit for your log-storage needs:
* **Low overhead:** Loki does not do full-text log indexing; it only creates an index of the labels you put on your logs. Keeping a small index substantially reduces Loki's operating requirements. I'm running my loki-shell project, which uses Loki to store shell history, on a [Raspberry Pi][8] using just a little over 50MB of memory.
* **Low cost:** The log content is compressed and stored in object stores like Amazon S3, Google Cloud Storage, Azure Blob, or even directly on a filesystem. The goal is to use storage that is inexpensive and durable.
* **Flexibility:** Loki is available in a single binary that can be downloaded and run directly or as a Docker image to run in any container environment. A [Helm chart][9] is available to get started quickly in Kubernetes. If you demand a lot from your logging tools, take a look at the [production setup][10] running at Grafana Labs. It uses open source [Jsonnet][11] and [Tanka][12] to deploy the same Loki image as discrete building blocks to enable massive horizontal scaling, high availability, replication, separate scaling of read and write paths, highly parallelizable querying, and more.
In summary, Loki's approach is to keep a small index of metadata about your logs (labels) and store the unindexed and compressed log content in inexpensive object stores to make operating easier and cheaper. The application is built to run as a single process and easily evolve into a highly available distributed system. You can obtain high query performance on larger logging workloads through parallelization and sharding of queries—a bit like MapReduce for your logs.
In addition, this functionality is available for anyone to use for free. As with its [Grafana][13] open observability platform, Grafana Labs is committed to making Loki a fully featured, fully open log-aggregation software anyone can use.
### Get started
I'm running Loki on a Raspberry Pi on my home network and storing my shell history offsite in an S3 bucket.
When I hit CTRL+R, Loki's [LogCLI][14] command-line interface makes several batching requests that are streamed into fzf. Here is an example—the top part shows the Loki server logs on the Pi.
![Logs of the Loki server on Raspberry Pi][15]
(Ed Welch, [CC BY-SA 4.0][5])
Ready to give it a try? The following guide will help you set up and run Loki to be integrated with your shell history. Since this tutorial aims to keep things simple, this setup will run Loki locally on your computer and store all the files on the filesystem.
You can find all of this, plus information about how to set up a more elaborate installation, in the [loki-shell GitHub repository][16].
Note that this tutorial will not change any existing behaviors around your history, so _your existing shell history command and history settings will be untouched._ Instead, this duplicates the command history to Loki with `$PROMPT_COMMAND` in Bash and `precmd` in Zsh. On the CTRL+R side of things, it overloads the function that fzf uses to access the CTRL+R command. Trying this is safe, and if you decide you don't like it, just follow the [uninstall steps][17] in the GitHub repo to remove all traces. Your shell history will be untouched.
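As a rough idea of what a `$PROMPT_COMMAND` hook can look like, here is a simplified sketch I wrote for illustration; it is not the actual loki-shell code, and the function name and log file are made up. It copies each command you run to a separate sink after every prompt:

```
# Illustrative only: append the most recent history entry to another sink
__history_sink_demo() {
  local last
  last=$(fc -ln -1 | sed 's/^[[:space:]]*//')   # last command, history number stripped
  printf '%s\n' "$last" >> /tmp/shell-history-demo.log
}
PROMPT_COMMAND="__history_sink_demo${PROMPT_COMMAND:+; $PROMPT_COMMAND}"
```

loki-shell does something along these lines, except that it ships the command line to Loki rather than to a local file.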
#### Step 1: Install fzf
There are several ways to install fzf, but I prefer [the Git method][18]:
```
git clone --depth 1 https://github.com/junegunn/fzf.git ~/.fzf
~/.fzf/install
```
Say yes to all the question prompts.
If you already have fzf installed, make sure you have the key bindings enabled (i.e., make sure when you type CTRL+R, fzf pops up). You can rerun the fzf installation to enable key bindings if necessary.
#### Step 2: Install loki-shell
Like fzf, loki-shell also has a Git repo and install script:
```
git clone --depth 1 https://github.com/slim-bean/loki-shell.git ~/.loki-shell
~/.loki-shell/install
```
First, the script creates the `~/.loki-shell` directory where all files will be kept (including Loki data). Next, it will download binaries for [Promtail][19], LogCLI, and Loki.
Then it will ask:
```
Do you want to install Loki? ([y]/n)
```
If you already have a centralized Loki running for loki-shell, you could answer n; however, for this tutorial, answer y or press Enter.
There are two options available for running Loki locally: as a Docker image or as a single binary (with support for adding a systemd service). I recommend using Docker if it's available, as I think it simplifies operations a bit, but both work just fine.
##### Running with Docker

To run Loki as a Docker image:

```
[y] to run Loki in Docker, [n] to run Loki as a binary ([y]/n) y
Error: No such object: loki-shell
Error response from daemon: No such container: loki-shell
Error: No such container: loki-shell
54843ff3392f198f5cac51a6a5071036f67842bbc23452de8c3efa392c0c2e1e
```

If this is the first time you're running the installation, you can disregard the error messages. This script will stop and replace a running Loki container if the version does not match, which allows you to rerun this script to upgrade Loki.

That's it! Loki is now running as a Docker container. Data from Loki will be stored in `~/.loki-shell/data`.

The image runs with `--restart=unless-stopped`, so it will restart at reboot but will stay stopped if you run `docker stop loki-shell`.

(If you're using Docker, you can skip down to Shell integration.)
##### Running as binary
There are many ways to run a binary on a Linux system. This script can install a systemd service. If you don't have systemd, you can still use the binary install:

```
[y] to run Loki in Docker, [n] to run Loki as a binary ([y]/n) n
Run Loki with systemd? ([y]/n) n
This is as far as this script can take you
You will need to setup an auto-start for Loki
It can be run with this command: /home/username/.loki-shell/bin/loki -config.file=/home/username/.loki-shell/config/loki-binary-config.yaml
```

The script will spit out the command you need to use to run Loki, and you will be on your own to set up an init script or another method of auto-starting it. You can run the command directly, if you want, and run Loki from your current shell.

If you do have systemd, you have the option of letting the script install the systemd service or showing you the commands to run it yourself:

```
Run Loki with systemd? ([y]/n) y
Installing the systemd service requires root permissions.
[y] to run these commands with sudo [n] to print out the commands and you can run them yourself. ([y]/n) n
sudo cp /home/ed/.loki-shell/config/loki-shell.service /etc/systemd/system/loki-shell.service
sudo systemctl daemon-reload
sudo systemctl enable loki-shell
sudo systemctl start loki-shell
Copy these commands and run them when the script finishes. (press enter to continue)
```
##### Shell integration
Regardless of how you installed Loki, you should now see a prompt:

```
Enter the URL for your Loki server or press enter for default (http://localhost:4100)
```

If you had set up a centralized Loki, you would enter that URL here. However, this demo just uses the default, so you can press Enter.

A lot of text will spit out explaining all the entries added to your `~/.bashrc` or `~/.zshrc` (or both).

That's it!

```
Finished. Restart your shell or reload config file.
   source ~/.bashrc  # bash
   source ~/.zshrc   # zsh
```
#### Step 3: Try it out!
Start using your shell, and use CTRL+R to see your commands.

Open multiple terminal windows, type a command in one and CTRL+R in another, and you'll see your commands available immediately.

Also, notice that when you switch between terminals and enter commands, they are available immediately with CTRL+R, but the Up arrow's operation is not affected between terminals. (This may not be true if you have Oh My Zsh installed, as it automatically appends all commands to the history.)

Use CTRL+R multiple times to toggle between sorting by time and by relevance.

Note that this configuration will show only the current host's query history, even if you are sending shell data from multiple hosts to Loki. I think by default this makes the most sense. There is a lot you can tweak if you want this behavior to change; see the loki-shell repo to learn more.

It also installed an alias called `hist`:

```
alias hist="$HOME/.loki-shell/bin/logcli --addr=$LOKI_URL"
```

LogCLI can be used to query and search your history directly in Loki, including allowing you to search other hosts. Check out the getting started guide for LogCLI to learn more about querying.

Loki's log query language (LogQL) provides metric queries that allow you to do some interesting things; for example, I can see how many times I issued the `kc` command (my alias for `kubectl`) in the last 30 days:
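The exact query isn't spelled out in the text, but a LogQL metric query along these lines (my approximation for illustration; the screenshot below may use a slightly different form) counts those entries over the time range:

```
count_over_time({job="shell"} |= "kc" [30d])
```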
![Counting use of a command][20]

(Ed Welch, [CC BY-SA 4.0][5])
### Extra credit
Install Grafana and play around with your shell history:

```
docker run -d -p 3000:3000 --name=grafana grafana/grafana
```
Open a web browser at http://localhost:3000 and log in using the default admin/admin username and password.

On the left, navigate to Configuration -> Datasources, click the Add Datasource button, and select Loki.

For the URL, you should be able to use http://localhost:4100 (however, on my WSL2 machine, I had to use the computer's actual IP address).

Click Save and Test. You should see Data source connected and labels found.

Click on the Explore icon on the left, make sure the Loki data source is selected, and try out a query:

```
{job="shell"}
```

If you have more hosts sending shell commands, you can limit the results to a certain host using the hostname label:

```
{job="shell", hostname="myhost"}
```

You can also look for specific commands with filter expressions:

```
{job="shell"} |= "docker"
```

Or you can start exploring the world of metrics from logs to see how often you are using your shell:

```
rate({job="shell"}[1m])
```
![Counting use of the shell over previous 20 days][21]

(Ed Welch, [CC BY-SA 4.0][5])
Want to reconstruct a timeline from an incident? You can filter by a specific command and see when it ran.
![Counting use of a command][22]

(Ed Welch, [CC BY-SA 4.0][5])

To see what else you can do and learn more about Loki's query language, check out the LogQL guide.
### Final thoughts
For more ideas, troubleshooting, and updates, follow the GitHub repo. This is still a work in progress, so please report any issues there.

To learn more about Loki, check out the documentation, blog posts, and GitHub repo, or try it in Grafana Cloud.

* * *

A special thanks to my colleague Jack Baldry for planting the seed for this idea. I had the Loki knowledge to make this happen, but if it weren't for his suggestion, I don't think I ever would have made it here.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/10/shell-history-loki-fzf
作者:[Ed Welch][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ewelch
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chaos_engineer_monster_scary_devops_gear_kubernetes.png?itok=GPYLvfVh (Gears above purple clouds)
[2]: https://github.com/grafana/loki
[3]: https://github.com/junegunn/fzf
[4]: https://opensource.com/sites/default/files/uploads/before.gif (Before Loki and fzf)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/with_fzf.gif (After Loki and fzf)
[7]: https://prometheus.io/
[8]: https://www.raspberrypi.org/
[9]: https://helm.sh/docs/topics/charts/
[10]: https://grafana.com/docs/loki/latest/installation/tanka/
[11]: https://jsonnet.org
[12]: https://tanka.dev/
[13]: https://grafana.com/
[14]: https://grafana.com/docs/loki/latest/getting-started/logcli/
[15]: https://opensource.com/sites/default/files/uploads/example_logcli.gif (Logs of the Loki server on Raspberry Pi)
[16]: https://github.com/slim-bean/loki-shell
[17]: https://github.com/slim-bean/loki-shell/blob/master/uninstall
[18]: https://github.com/junegunn/fzf#using-git
[19]: https://grafana.com/docs/loki/latest/clients/promtail/
[20]: https://opensource.com/sites/default/files/uploads/count_kc.png (Counting use of a command)
[21]: https://opensource.com/sites/default/files/uploads/last_20.png (Counting use of the shell over previous 20 days)
[22]: https://opensource.com/sites/default/files/uploads/command_hist.png (Counting use of a command)

View File

@ -0,0 +1,126 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Navigating your Linux files with ranger)
[#]: via: (https://www.networkworld.com/article/3583890/navigating-your-linux-files-with-ranger.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Navigating your Linux files with ranger
======
Ranger is a great tool for providing a multi-level view of your Linux files and allowing you to both browse and make changes using arrow keys and some handy commands.
[Heidi Sandstrom][1] [(CC0)][2]
Ranger is a unique and very handy file system navigator that allows you to move around in your Linux file system, go in and out of subdirectories, view text-file contents and even make changes to files without leaving the tool.
It runs in a terminal window and lets you navigate by pressing arrow keys. It provides a multi-level file display that makes it easy to see where you are, move around the file system and select particular files.
To install ranger, use your standard install command (e.g., **sudo apt install ranger**). To start it, simply type “ranger”. It comes with a lengthy, very detailed man page, but getting started with ranger is very simple.
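For reference, the package is simply called ranger in the major distributions' repositories (a couple of typical examples, assuming your distribution ships it under that name):

```
sudo apt install ranger     # Debian, Ubuntu and derivatives
sudo dnf install ranger     # Fedora and similar
ranger                      # start it from a terminal
```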
### The ranger display
One of the most important things you need to get used to right away is ranger's way of displaying files. Once you start ranger, you will see four columns of data. The first column is one level up from wherever you started ranger. If you start from your home directory, for example, ranger will list all of the home directories in column 1. The second column will show the first screenful of directories and files in your home directory (or whatever directory you start it from).
The key here is moving past any inclination you might have to see the details in each line of the display as related. All the entries in column 2 relate to a single entry in column 1 and content in column 4 relates to the selected file or directory in column 2.
Unlike your normal command-line view, directories will be listed first (alphanumerically) and files will be listed second (also alphanumerically). Starting in your home directory, the display might look something like this:
```
shs@dragonfly /home/shs/backups <== current selection
bugfarm backups 0 empty
dory bin 59
eel Buttons 15
nemo Desktop 0
shark Documents 0
shs Downloads 1
^ ^ ^ ^
| | | |
homes directories # files listing
in selected in each of files in
home directory selected directory
```
The top line in ranger's display tells you where you are. In the above example, the current directory is **/home/shs/backups**. We see the highlighted word "empty" because there are no files in this directory. If we press the down arrow key to select **bin** instead, we'll see a list of files:
```
shs@dragonfly /home/shs/bin <== current selection
bugfarm backups 0 append
dory bin 59 calcPower
eel Buttons 15 cap
nemo Desktop 0 extract
shark Documents 0 finddups
shs Downloads 1 fix
^ ^ ^ ^
| | | |
homes directories # files listing
in selected in each of files in
home directory selected directory
```
The highlighted entries in each column show the current selections. Use the right arrow to move into deeper directories or view file content.
If you continue pressing the down arrow key to move to the file portion of the listing, you will note that the third column will show file sizes (instead of the numbers of files). The "current selection" line will also display the currently selected file name while the rightmost column displays the file content when possible.
```
shs@dragonfly /home/shs/busy_wait.c <== current selection
bugfarm BushyRidge.zip 170 K /*
dory busy_wait.c 338 B * program that does a busy wait
eel camper.jpg 5.55 M * it's used to show ASLR, and that's it
nemo check_lockscreen 80 B */
shark chkrootkit-output 438 B #include <stdio.h>
^ ^ ^ ^
| | | |
homes files sizes file content
```
The bottom line of the display will show some file and directory details:
```
-rw-rw-r-- shs shs 338B 2019-01-05 14:44 1.52G, 365G free 67/488 11%
```
If you select a directory and press enter, you will move into that directory. The leftmost column in your display will then be a listing of the contents of your home directory, and the second column will be a file listing of the directory contents. You can then examine the contents of subdirectories and the contents of files.
Press the left arrow key to move back up a level.
Quit ranger by pressing "q".
### Making changes
You can press **?** to bring up a help line at the bottom of your screen. It should look like this:
```
View [m]an page, [k]ey bindings, [c]ommands or [s]ettings? (press q to abort)
```
Press **c** and ranger will provide information on commands that you can use within the tool. For example, you can change permissions on the current file by entering **:chmod** followed by the intended permissions. For example, once a file is selected, you can type **:chmod 700** to set permissions to **rwx------**.
Typing **:edit** instead would open the file in **nano** and allow you to make changes and then save the file using **nano** commands.
### Wrap-Up
There are more ways to use **ranger** than are described in this post. The tool provides a very different way to list and interact with files on a Linux system and is easy to navigate once you get used to its multi-tiered way of listing directories and files and using arrow keys in place of **cd** commands to move around.
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3583890/navigating-your-linux-files-with-ranger.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://unsplash.com/photos/mHC0qJ7l-ls
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,173 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Free Up Space in /boot Partition on Ubuntu Linux?)
[#]: via: (https://itsfoss.com/free-boot-partition-ubuntu/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
How to Free Up Space in /boot Partition on Ubuntu Linux?
======
The other day, I got a warning that my boot partition was almost full or had no space left. Yes, I have a separate boot partition; not many people do that these days, I believe.
This was the first time I saw such an error and it left me confused. Now, there are several [ways to free up space on Ubuntu][1] (or Ubuntu-based distros) but not all of them are useful in this case.
This is why I decided to write about the steps I followed to free some space in the /boot partition.
### Free up space in /boot partition on Ubuntu (if your boot partition is running out of space)
![][2]
I'd advise you to carefully read through the solutions and follow the one best suited for your situation. It's easy, but you need to be cautious about performing some of these on your production systems.
#### Method 1: Using apt autoremove
You don't have to be a terminal expert to do this; it's just one command, and you will be removing unused kernels to free up space in the /boot partition.
All you have to do is, type in:
```
sudo apt autoremove
```
This will not just remove unused kernels but also get rid of the dependencies that you don't need or that aren't needed by any of the installed tools.
Once you enter the command, it will list the things that will be removed and you just have to confirm the action. If you're curious, you can go through the list carefully and see what it actually removes.
Here's how it will look:
![][3]
You have to press **Y** to proceed.
_**It's worth noting that this method will only work if you've got a tiny bit of space left and you get the warning. But if your /boot partition is full, APT may not even work.**_
In the next method, I'll highlight two different ways to remove old kernels to free up space: using a GUI and using the terminal.
#### Method 2: Remove Unused Kernels Manually (if apt autoremove didn't work)
Before you try to [remove any older kernels][4] to free up space, you need to identify the current active kernel and make sure that you don't delete it.
To [check your kernel version][5], type in the following command in the terminal:
```
uname -r
```
The [uname command is generally used to get Linux system information][6]. Here, this command displays the current Linux kernel being used. It should look like this:
![][7]
Now that you know what your current Linux kernel is, you just have to remove the ones that do not match this version. Note your current version down somewhere so that you don't remove it accidentally.
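If it helps, a quick supplementary cross-check (not part of the original steps) is to compare the installed kernel image packages against the running version:

```
dpkg --list 'linux-image-*' | grep '^ii'   # installed kernel image packages
uname -r                                   # the version you must NOT remove
```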
Next, to remove it, you can either utilize the terminal or the GUI.
Warning!
Be extra careful while deleting kernels. Identify and delete old kernels only, not the current one you are using; otherwise, you'll have a broken system.
##### Using a GUI tool to remove old Linux kernels
You can use the [Synaptic Package Manager][8] or a tool like [Stacer][9] to get started. Personally, when I encountered a full /boot partition with apt broken, I used [Stacer][6] to get rid of older kernels. So, let me show you how that looks.
First, you need to launch “**Stacer**” and then navigate your way to the package uninstaller as shown in the screenshot below.
![][10]
Here, search for “**image**” and you will find the images for the Linux Kernels you have. You just have to delete the old kernel versions and not your current kernel image.
I've pointed out my current kernel and old kernels in the screenshot above, so be careful to check the kernel version on your own system.
You don't have to delete anything else, just the older kernel versions.
Similarly, just search for “**headers**” in the list of packages and delete the old ones as shown below.
![][11]
Just to warn you: you **don't want to remove "linux-headers-generic"**. Only focus on the ones that have version numbers attached.
And that's it: you'll be done, apt will be working again, and you will have successfully freed up some space in your /boot partition. You can do the same using any other package manager you're comfortable with.
##### Using the command line to remove old kernels
It's the same thing, just using the terminal. So, if you don't have the option to use a GUI (if it's a remote machine/server) or if you're simply more comfortable with the terminal, you can follow the steps below.
First, list all your kernels installed using the command below:
```
ls -l /boot
```
It should look something like this:
![][12]
The ones that are mentioned as “**old**” or the ones that do not match your current kernel version are the unused kernels that you can delete.
Now, you can use the **rm** command to remove the specific kernels from the boot partition using the command below (a single command for each):
```
sudo rm /boot/vmlinuz-5.4.0-7634-generic
```
Make sure to check the version on your system; it may be different from the one shown here.
If you have a lot of unused kernels, this will take time. So, you can also get rid of multiple kernels using the following command:
```
sudo rm /boot/*-5.4.0-{7634}-*
```
To clarify, you write the final parts of the kernel versions, separated by commas, to delete them all at once.
Suppose I have two old kernels, 5.4.0-7634-generic and 5.4.0-7624; the command will be:
```
sudo rm /boot/*-5.4.0-{7634,7624}-*
```
If you don't want to see the old kernel versions in the GRUB boot menu, you can simply [update grub][13] using the following command:
```
sudo update-grub
```
That's it. You're done. You've freed up space and also potentially fixed broken APT if it was an issue after your /boot partition filled up.
In some cases, you may need to enter these commands to fix broken apt (as I've noticed in the forums):
```
sudo dpkg --configure -a
sudo apt install -f
```
Do note that you don't need to enter the above commands unless you find APT broken. Personally, I didn't need these commands, but I found they were handy for some people on the forums.
--------------------------------------------------------------------------------
via: https://itsfoss.com/free-boot-partition-ubuntu/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/free-up-space-ubuntu-linux/
[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/10/free-boot-space-ubuntu-linux.jpg?resize=800%2C450&ssl=1
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/apt-autoremove-screenshot.jpg?resize=800%2C415&ssl=1
[4]: https://itsfoss.com/remove-old-kernels-ubuntu/
[5]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[6]: https://linuxhandbook.com/uname/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/uname-r-screenshot.jpg?resize=800%2C198&ssl=1
[8]: https://itsfoss.com/synaptic-package-manager/
[9]: https://itsfoss.com/optimize-ubuntu-stacer/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/stacer-remove-kernel.jpg?resize=800%2C562&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/stacer-remove-kernel-header.png?resize=800%2C576&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/command-kernel-list.png?resize=800%2C432&ssl=1
[13]: https://itsfoss.com/update-grub/

View File

@ -0,0 +1,222 @@
[#]: collector: (lujun9972)
[#]: translator: (rakino)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Disable IPv6 on Ubuntu Linux)
[#]: via: (https://itsfoss.com/disable-ipv6-ubuntu-linux/)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)
如何在 Ubuntu Linux 上禁用 IPv6
======
想知道怎样在 Ubuntu 上**禁用 IPv6** 吗?我会在这篇文章中介绍一些方法,以及为什么你应该考虑这一选择;以防改变主意,我也会提到如何**启用,或者说重新启用 IPv6**。
### 什么是 IPv6为什么会想要禁用它
<ruby>**[网际协议第 6 版][1]**<rt>Internet Protocol version 6</rt></ruby>**[IPv6][1]**是网际协议IP的最新版本。网际协议是一种通信协议它为网络上的计算机提供识别和定位系统并在互联网上进行通信路由。IPv6 是在 1998 年以取代 **IPv4** 协议为目的被设计出来的。
**IPv6** 意在提高安全性与性能的同时保证地址不被用尽;它可以在全球范围内为每台设备分配唯一的以 **128 位元**存储的地址,而 IPv4 只使用了 32 位元。
![Disable IPv6 Ubuntu][2]
尽管 IPv6 的目标是取代 IPv4但目前还有很长的路要走互联网上只有少于 **30%** 的网站支持 IPv6[这里][3] 是谷歌的统计IPv6 有时也会导致 [一些程序出现问题][4]。
由于 IPv6 使用全球(唯一分配的)路由地址,以及(仍然)有<ruby>互联网服务供应商<rt>Internet Service Provider</rt></ruby>ISP不提供 IPv6 支持的事实IPv6 这一功能在提供全球服务的<ruby>**虚拟私人网络**<rt>Virtual Private Network</rt></ruby>VPN供应商的优先级列表中处于较低的位置这样一来他们就可以专注于对 VPN 用户最重要的事情:安全。
不想让自己暴露在各种威胁之下可能是另一个让你想在系统上禁用 IPv6 的原因。虽然 IPv6 本身比 IPv4 更安全,但我所指的风险是另一种性质上的。如果你不积极使用 IPv6 及其功能,[启用 IPv6 后,你会很容易受到各种攻击][5],因而为黑客提供另一种可能的利用工具。
同样,配置基本的网络规则是不够的;就像对 IPv4 一样,你需要密切关注 IPv6 的配置,这可能会是一件相当麻烦的事情(维护也是)。并且随着 IPv6 而来的将会是一套不同于 IPv4 的问题(鉴于这个协议的年龄,许多问题已经可以在网上找到了),这又会使你的系统多了一层复杂性。
### 在 Ubuntu 上禁用 IPv6 [高级用户]
在本节中,我会详述如何在 Ubuntu 上禁用 IPv6 协议,请打开终端(**默认键:** CTRL+ALT+T让我们开始吧
**注意:**_接下来大部分输入终端的命令都需要 root 权限(**sudo**。_
警告!
如果你是普通 Linux 桌面用户,并且偏好稳定的工作系统,请避开本教程,接下来的部分是为那些知道自己在做什么以及为什么要这么做的用户准备的。
#### 1\. 使用 Sysctl 禁用 IPv6
首先,可以执行以下命令来**检查** IPv6 是否已经启用:
```
ip a
```
如果启用了,你应该会看到一个 IPv6 地址(网卡的名字可能会与图中有所不同)
![IPv6 Address Ubuntu][7]
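作为补充(这一步并非原文内容,只是一个便捷的检查方式),也可以只列出 IPv6 地址来快速确认;如果 IPv6 已被禁用,该命令不会有输出:

```
# 只显示 IPv6 地址
ip -6 addr show
```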
在教程 [在 Ubuntu 中重启网络][8] 中,你已经见过 sysctl 命令了,在这里我们也同样会用到它。要**禁用 IPv6**,只需要输入三条命令:
```
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1
```
(译注:这篇文章 LCTT 有翻译,在 [这里《Linux 初学者:如何在 Ubuntu 中重启网络》][patch-1];不过尴尬的是,并没有提到使用 sysctl 的方法……)
检查命令是否生效:
```
ip a
```
如果命令生效,你应该会发现 IPv6 的条目消失了:
![IPv6 Disabled Ubuntu][9]
然而这种方法只能**临时禁用 IPv6**,因此在下次系统启动的时候, IPv6 仍然会被启用。
(译注:这里的临时禁用是指这次所做的改变直到此次关机之前都有效,因为相关的参数是存储在内存中的,可以改变值,但是在内存断电后就会丢失;这种意义上来讲,下文所述的两种方法都是临时的,只不过改变参数值的时机是在系统启动的早期,并且每次系统启动时都有应用而已。那么如何完成这种意义上的永久改变?答案是在编译内核的时候禁用相关功能,然后要后悔就只能重新编译内核了(悲)。)
一种让选项持续生效的方式是修改文件 **/etc/sysctl.conf**,在这里我用 vim 来编辑文件,不过你可以使用任何你想使用的编辑器,以及请确保你拥有**管理员权限**(用 **sudo**
![Sysctl Configuration][10]
将下面这几行(和之前使用的参数相同)加入到文件中:
```
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
```
执行以下命令应用设置:
```
sudo sysctl -p
```
如果在重启之后 IPv6 仍然被启用了,而你还想继续这种方法的话,那么你必须(使用 root 权限)创建文件 **/etc/rc.local** 并加入以下内容:
```
#!/bin/bash
# /etc/rc.local
/etc/sysctl.d
/etc/init.d/procps restart
exit 0
```
接着使用 [chmod 命令][11] 来更改文件权限,使其可执行:
```
sudo chmod 755 /etc/rc.local
```
这会让系统(在启动的时候)从之前编辑过的 sysctl 配置文件中读取内核参数。
#### 2\. 使用 GRUB 禁用 IPv6
另外一种方法是配置 **GRUB**,它会在系统启动时向内核传递参数。这样做需要编辑文件 **/etc/default/grub**(请确保拥有管理员权限)。
![GRUB Configuration][13]
现在需要修改文件中分别以 **GRUB_CMDLINE_LINUX_DEFAULT****GRUB_CMDLINE_LINUX** 开头的两行来在启动时禁用 IPv6
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ipv6.disable=1"
GRUB_CMDLINE_LINUX="ipv6.disable=1"
```
(译注:这里是指在上述两行内增加参数 ipv6.disable=1不同的系统中这两行的默认值可能有所不同。
保存文件,然后执行命令:
```
sudo update-grub
```
(译注:该命令用以更新 GRUB 的配置文件,在没有 update-grub 命令的系统中需要使用 `sudo grub-mkconfig -o /boot/grub/grub.cfg`
设置会在重启后生效。
### 在 Ubuntu 上重新启用 IPv6
要想重新启用 IPv6你需要撤销之前的所有修改。不过只是想临时启用 IPv6 的话,可以执行以下命令:
```
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0
```
否则想要持续启用的话,看看是否修改过 **/etc/sysctl.conf**,可以删除掉之前增加的部分,也可以将它们改为以下值(两种方法等效):
```
net.ipv6.conf.all.disable_ipv6=0
net.ipv6.conf.default.disable_ipv6=0
net.ipv6.conf.lo.disable_ipv6=0
```
然后应用设置(可选):
```
sudo sysctl -p
```
(译注:这里可选的意思可能是如果之前临时启用了 IPv6 就没必要再重新加载配置文件了)
这样应该可以再次看到 IPv6 地址了:
![IPv6 Reenabled in Ubuntu][14]
另外,你也可以删除之前创建的文件 **/etc/rc.local**(可选):
```
sudo rm /etc/rc.local
```
如果修改了文件 **/etc/default/grub** ,回去删掉你所增加的参数:
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
```
然后更新 GRUB 配置文件:
```
sudo update-grub
```
**尾声**
在这篇文章中,我介绍了在 Linux 上**禁用 IPv6** 的方法,并简述了什么是 IPv6 以及可能想要禁用掉它的原因。
那么,这篇文章对你有用吗?你有禁用掉 IPv6 连接吗?让我们评论区见吧~
--------------------------------------------------------------------------------
via: https://itsfoss.com/disable-ipv6-ubuntu-linux/
作者:[Sergiu][a]
选题:[lujun9972][b]
译者:[rakino](https://github.com/rakino)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/sergiu/
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/IPv6
[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/disable_ipv6_ubuntu.png?fit=800%2C450&ssl=1
[3]: https://www.google.com/intl/en/ipv6/statistics.html
[4]: https://whatismyipaddress.com/ipv6-issues
[5]: https://www.internetsociety.org/blog/2015/01/ipv6-security-myth-1-im-not-running-ipv6-so-i-dont-have-to-worry/
[6]: https://itsfoss.com/remove-drive-icons-from-unity-launcher-in-ubuntu/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu.png?fit=800%2C517&ssl=1
[8]: https://itsfoss.com/restart-network-ubuntu/
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_disabled_ubuntu.png?fit=800%2C442&ssl=1
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/sysctl_configuration.jpg?fit=800%2C554&ssl=1
[11]: https://linuxhandbook.com/chmod-command/
[12]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/grub_configuration-1.jpg?fit=800%2C565&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu-1.png?fit=800%2C517&ssl=1
[patch-1]: https://github.com/LCTT/TranslateProject/blob/master/published/201905/20190307%20How%20to%20Restart%20a%20Network%20in%20Ubuntu%20-Beginner-s%20Tip.md

View File

@ -0,0 +1,303 @@
[#]: collector: (lujun9972)
[#]: translator: (gxlct008)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (TCP window scaling, timestamps and SACK)
[#]: via: (https://fedoramagazine.org/tcp-window-scaling-timestamps-and-sack/)
[#]: author: (Florian Westphal https://fedoramagazine.org/author/strlen/)
TCP 窗口缩放,时间戳和 SACK
======
![][1]
Linux TCP 协议栈具有无数个 _sysctl_ 旋钮,允许更改其行为。 这包括可用于接收或发送操作的内存量,套接字的最大数量、可选特性和协议扩展。
有很多文章出于各种“性能调优”或“安全性”原因,建议禁用 TCP 扩展,比如时间戳或<ruby>选择性确认<rt>selective acknowledgments</rt></ruby> SACK
本文提供了这些扩展的功能背景,默认情况下处于启用状态的原因,它们之间是如何关联的,以及为什么通常情况下将它们关闭是个坏主意。
### TCP 窗口缩放
TCP 可以维持的数据传输速率受到几个因素的限制。其中包括:
* 往返时间RTT。这是数据包到达目的地并返回回复所花费的时间。越低越好。
* 所涉及的网络路径的最低链路速度
* 丢包频率
* 新数据可用于传输的速度。 例如CPU 需要能够以足够快的速度将数据传递到网络适配器。如果 CPU 需要首先加密数据,则适配器可能必须等待新数据。同样地,如果磁盘存储不能足够快地读取数据,则磁盘存储可能会成为瓶颈。
* TCP 接收窗口的最大可能大小。接收窗口决定 TCP 在必须等待接收方报告接收到该数据之前可以传输多少数据 (以字节为单位)。这是由接收方宣布的。接收方将在读取并确认接收到传入数据时不断更新此值。接收窗口当前值包含在 [TCP 报头][2] 中,它是 TCP 发送的每个数据段的一部分。因此只要发送方接收到来自对等方的确认它就知道当前的接收窗口。这意味着往返时间RTT越长发送方获得接收窗口更新所需的时间就越长。
TCP 被限制为最多 64KB 的未确认(正在传输)数据。在大多数网络场景中,这甚至还不足以维持一个像样的数据速率。让我们看看一些例子。
##### 理论数据速率
在往返时间RTT为 100 毫秒的情况下TCP 每秒最多可以传输 640KB。在延迟 1 秒的情况下,最大理论数据速率降至 64KB/s。
这是因为接收窗口的原因。一旦发送了 64KB 的数据,接收窗口就已经满了。发送方必须等待,直到对等方通知它应用程序已经读取了至少一部分数据。
发送的第一个段会把 TCP 窗口缩减一个自身的大小。在接收窗口值的更新可用之前,需要往返一次。当更新以 1 秒的延迟到达时,即使链路有足够的可用带宽,也会导致 64KB 的限制。
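为了更直观,下面用一个简单的演示(示例为本文补充,并非原文内容)按照"吞吐上限 ≈ 窗口大小 / RTT"粗略估算 64KB 窗口在不同 RTT 下的理论上限:

```
# 64KB 接收窗口在不同 RTT 下的理论吞吐上限(粗略估算)
for rtt_ms in 1 10 100 1000; do
    awk -v rtt="$rtt_ms" 'BEGIN { printf "RTT = %4d ms -> 约 %.0f KB/s\n", rtt, 64 * 1000 / rtt }'
done
```

RTT 为 100 毫秒时约 640KB/sRTT 为 1 秒时只剩 64KB/s与正文中的数字一致。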
为了充分利用一个具有几毫秒延迟的快速网络,必须有一个比传统 TCP 支持的窗口大的窗口。“64KB 限制”是协议规范的产物TCP 头只为接收窗口大小保留 16 位。这允许接收窗口高达 64KB。在 TCP 协议最初设计时,这个大小并没有被视为一个限制。
不幸的是,想通过仅仅更改 TCP 头来支持更大的最大窗口值是不可能的。如果这样做就意味着 TCP 的所有实现都必须同时更新,否则它们将无法相互理解。为了解决这个问题,需要改变接收窗口值的解释。
“窗口缩放选项”允许这样做,同时保持与现有实现的兼容性。
#### TCP 选项:向后兼容的协议扩展
TCP 支持可选扩展。 这允许使用新特性增强协议,而无需立即更新所有实现。 当 TCP 启动器连接到对等方时,它还会发送一个支持的扩展列表。 所有扩展名都遵循相同的格式:一个唯一的选项号,后跟选项的长度以及选项数据本身。
TCP 响应程序检查连接请求中包含的所有选项号。如果它遇到一个不能理解的选项号,则会跳过该选项号附带的"长度"字节的数据,并检查下一个选项号;响应程序也会在自己的答复中略去它无法理解的选项。这使发送方和接收方都能够了解双方共同支持的选项集。
使用窗口缩放时,选项数据总是由单个数字组成。
### 窗口缩放选项
```
窗口缩放选项 (WSopt): Kind: 3, Length: 3
    +---------+---------+---------+
    | Kind=3  |Length=3 |shift.cnt|
    +---------+---------+---------+
         1         1         1
```
[窗口缩放][3] 选项告诉对等点,应该使用给定的数字缩放 TCP 标头中的接收窗口值,以获取实际大小。
例如,一个宣告窗口缩放比例因子为 7 的 TCP 启动器试图指示响应程序,任何将来携带接收窗口值为 512 的数据包实际上都会宣告 65536 字节的窗口。 增加了 128 倍。这将允许最大为 8MB 的 TCP 窗口。
不能理解此选项的 TCP 响应程序将会忽略它。 为响应连接请求而发送的 TCP 数据包SYN-ACK不包含窗口缩放选项。在这种情况下双方只能使用 64k 的窗口大小。幸运的是,默认情况下,几乎每个 TCP 堆栈都支持并启用此选项,包括 Linux。
响应程序包括它自己所需的比例因子。两个对等点可以使用不同的号码。宣布比例因子为 0 也是合法的。这意味着对等点应该逐字处理它接收到的接收窗口值,但它允许应答方向上的缩放值,然后接收方可以使用更大的接收窗口。
与 SACK 或 TCP 时间戳不同,窗口缩放选项仅出现在 TCP 连接的前两个数据包中,之后无法更改。也不可能通过查看不包含初始连接三次握手的连接的数据包捕获来确定比例因子。
支持的最大比例因子为 14。这将允许 TCP 窗口的大小高达 1GB。
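下面用 shell 算术简单演示一下这种放大(补充示例,非原文内容):实际窗口等于头部中的 16 位窗口值左移"比例因子"位。

```
# 窗口缩放:实际窗口 = 头部窗口值 << 比例因子
echo $(( 512 << 7 ))       # 65536:因子为 7 时512 实际表示 64KB
echo $(( 65535 << 14 ))    # 1073725440:最大因子 14 时,窗口可接近 1GB
```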
##### 窗口缩放的缺点
在非常特殊的情况下,它可能导致数据损坏。 在禁用该选项之前——通常情况下是不可能的。 还有一种解决方案可以防止这种情况。不幸的是,有些人在没有意识到与窗口缩放的关系的情况下禁用了该解决方案。 首先,让我们看一下需要解决的实际问题。 想象以下事件序列:
1. 发送方发送段s_1s_2s_3... s_n
2. 接收方看到s_1s_3.. s_n并发送对 s_1 的确认。
3. 发送方认为 s_2 丢失,然后再次发送。 它还发送段 s_n+1 中包含的新数据。
4. 接收方然后看到s_2s_n+1s_2数据包 s_2 被接收两次。
例如,当发送方过早触发重新传输时,可能会发生这种情况。 在正常情况下,即使使用窗口缩放,这种错误的重传也绝不会成为问题。 接收方将只丢弃重复项。
#### 从旧数据到新数据
TCP 序列号最多可以为 4GB。如果它变得大于此值则序列会回绕到 0然后再次增加。这本身不是问题但是如果这种问题发生得足够快则上述情况可能会造成歧义。
如果在正确的时刻发生回绕,则序列号 s_2重新发送的数据包可能已经大于 s_n+1。 因此在最后的步骤4接收器可以将其解释为s_2s_n+1s_n+m即它可以将 **“旧”** 数据包 s_2 视为包含新数据。
通常,这不会发生,因为即使在高带宽链接上,“回绕”也只会每隔几秒钟或几分钟发生一次。原始和不需要的重传之间的间隔将小得多。
例如,对于 50MB/s 的传输速度,副本要延迟到一分钟以上才会成为问题。序列号的回绕速度没有快到让小的延迟就能引发这个问题。
一旦 TCP 达到 "GB/s" 的吞吐率,序列号的回绕速度就会非常快,以至于即使只有几毫秒的延迟也可能会造成 TCP 无法再检测到的重复项。通过解决接收窗口太小的问题TCP 现在可以用于以前无法实现的网络速度,这会产生一个新的,尽管很少见的问题。为了在 RTT 非常低的环境中安全使用 GB/s 的速度,接收方必须能够检测到这些旧副本,而不必仅依赖序列号。
### TCP 时间戳
#### 最佳使用日期。
用最简单的术语来说,[TCP 时间戳][3]只是在数据包上添加时间戳,以解决由非常快速的序列号回绕引起的歧义。 如果一个段看起来包含新数据,但其时间戳早于最后一个在窗口内的数据包,则该序列号已被重新包装,而“新”数据包实际上是一个较旧的副本。 这解决了即使在极端情况下重传的歧义。
但是,该扩展不仅仅是检测旧数据包。 TCP 时间戳的另一个主要功能是更精确的往返时间测量RTTm
#### 需要准确的 RTT 估算
当两个对等方都支持时间戳时,每个 TCP 段都携带两个附加数字:时间戳值和时间戳回显。
```
TCP 时间戳选项 (TSopt): Kind: 8, Length: 10
+-------+----+----------------+-----------------+
|Kind=8 | 10 |TS Value (TSval)|EchoReply (TSecr)|
+-------+----+----------------+-----------------+
    1      1         4                4
```
准确的 RTT 估算对于 TCP 性能至关重要。 TCP 自动重新发送未确认的数据。 重传由计时器触发:如果超时,则 TCP 会将尚未收到确认的一个或多个数据包视为丢失。 然后再发送一次。
但是,“尚未得到确认” 并不意味着该段已丢失。 也有可能是接收方到目前为止没有发送确认,或者确认仍在传输中。 这就造成了一个两难的困境TCP 必须等待足够长的时间,才能让这种轻微的延迟变得无关紧要,但它也不能等待太久。
##### 低网络延迟 VS 高网络延迟
在延迟较高的网络中如果计时器触发过快TCP 经常会将时间和带宽浪费在不必要的重发上。
然而,在延迟较低的网络中,等待太长时间会导致真正发生数据包丢失时吞吐量降低。因此,在低延迟网络中,计时器应该比高延迟网络中更早到期。 所以TCP 重传超时不能使用固定常量值作为超时。它需要根据其在网络中所经历的延迟来调整该值。
##### RTT往返时间的测量
TCP 选择基于预期往返时间RTT的重传超时。 RTT 事先是未知的。它是通过测量发送段与 TCP 接收到该段所承载数据的确认之间的增量来估算的。
有多种因素使这一估算变得复杂:
* 出于性能原因TCP 不会为收到的每个数据包生成新的确认。它等待的时间非常短:如果有更多的数据段到达,则可以通过单个 ACK 数据包确认其接收。这称为<ruby>“累积确认”<rt>cumulative ACK</rt></ruby>
* 往返时间并不恒定。 这是有多种因素造成的。例如,客户端可能是一部移动电话,随其移动而切换到不同的基站。也可能是当链路或 CPU 利用率提高时,数据包交换花费了更长的时间。
* 必须重新发送的数据包在计算过程中必须被忽略。
这是因为发送方无法判断重传数据段的 ACK 是在确认原始传输 (毕竟已到达) 还是在确认重传。
最后一点很重要:当 TCP 忙于从丢失中恢复时,它可能仅接收到重传段的 ACK。这样它就无法在此恢复阶段测量更新RTT。所以它无法调整重传超时然后超时将以指数级增长。那是一种非常具体的情况它假设其他机制如快速重传或 SACK 不起作用)。但是,使用 TCP 时间戳,即使在这种情况下也会进行 RTT 评估。
如果使用了扩展,则对等方将从 TCP 段扩展空间中读取时间戳值并将其存储在本地。然后,它将该值放入作为 “时间戳回显” 发回的所有数据段中。
因此,该选项带有两个时间戳:它的发送方自己的时间戳和它从对等方收到的最新时间戳。原始发送方使用“回显时间戳”来计算 RTT。它是当前时间戳时钟与“时间戳回显”中所反映的值之间的增量。
##### 时间戳的其他用途
TCP 时间戳甚至还有除 PAWS 和 RTT 测量以外的其他用途。例如,可以检测是否不需要重发。如果该确认携带较旧的时间戳回显,则该确认针对的是初始数据包,而不是重新发送的数据包。
TCP 时间戳的另一个更晦涩的用例与 TCP [syn cookie][4] 功能有关。
##### 在服务器端建立 TCP 连接
当连接请求到达的速度快于服务器应用程序可以接受新的传入连接的速度时,连接积压最终将达到其极限。这可能是由于系统配置错误或应用程序中的错误引起的。当一个或多个客户端发送连接请求而不对 “SYN ACK” 响应做出反应时,也会发生这种情况。这将用不完整的连接填充连接队列。这些条目需要几秒钟才会超时。这被称为<ruby>“同步洪水攻击”<rt>syn flood attack</rt></ruby>
##### TCP 时间戳和 TCP Syn Cookie
即使队列已满,某些 TCP 协议栈也允许继续接受新连接。发生这种情况时Linux 内核将在系统日志中打印一条突出的消息:
> 端口 P 上可能发生了 SYN 泛洪。正在发送 Cookie。请检查 SNMP 计数器。
此机制将完全绕过连接队列。通常存储在连接队列中的信息被编码到 SYN/ACK 响应 TCP 序列号中。当 ACK 返回时,可以根据序列号重建队列条目。
序列号只有有限的空间来存储信息。 因此,使用 “TCP Syn Cookie” 机制建立的连接不能支持 TCP 选项。
但是,对两个对等点都通用的 TCP 选项可以存储在时间戳中。 ACK 数据包在时间戳回显字段中反映了该值,这也允许恢复已达成共识的 TCP 选项。否则cookie 连接受标准的 64KB 接收窗口限制。
##### 常见误区 —— 时间戳不利于性能
不幸的是,一些指南建议禁用 TCP 时间戳以减少内核访问时间戳时钟来获取当前时间所需的次数。这是不正确的。如前所述RTT 估算是 TCP 的必要部分。因此,内核在接收/发送数据包时总是采用微秒级的时间戳。
在包处理步骤的其余部分中Linux 会重用 RTT 估算所需的时钟时间戳。这还避免了将时间戳添加到传出 TCP 数据包的额外时钟访问。
整个时间戳选项在每个数据包中仅需要 10 个字节的 TCP 选项空间,这并没有显著减少可用于数据包有效负载的空间。
##### 常见误区 —— 时间戳是个安全问题
一些安全审计工具和 (较旧的) 博客文章建议禁用 TCP 时间戳,因为据称它们泄露了系统正常运行时间:这样一来,便可以估算系统/内核的补丁级别。这在过去是正确的:时间戳时钟基于不断增加的值,该值在每次系统引导时都以固定值开始。时间戳值可以估计机器已经运行了多长时间 (正常运行时间)。
从 Linux 4.12 开始TCP 时间戳不再显示正常运行时间。发送的所有时间戳值都使用对等设备特定的偏移量。时间戳值也每 49 天回绕一次。
换句话说,从地址 “A” 出发,或者终到地址 “A” 的连接看到的时间戳与到远程地址 “B” 的连接看到的时间戳不同。
运行 _sysctl net.ipv4.tcp_timestamps=2_ 以禁用随机化偏移。这使得分析由诸如 _Wireshark_ 或 _tcpdump_ 之类的工具记录的数据包跟踪变得更容易 —— 从主机发送的数据包在其 TCP 选项时间戳中都具有相同的时钟基准。因此,对于正常操作,默认设置应保持不变。
### 选择性确认
如果丢失同一数据窗口中的多个数据包TCP 将会出现问题。 这是因为 TCP 确认是累积的,但仅适用于按顺序到达的数据包。例如:
* 发送方发送段 s_1s_2s_3... s_n
* 发送方收到 s_2 的 ACK
* 这意味着 s_1 和 s_2 都已收到,并且发送方不再需要保留这些段。
* s_3 是否应该重新发送? s_4呢 s_n
发送方等待 “重传超时” 或 “重复 ACK” 以使 s_2 到达。如果发生重传超时或到达 s_2 的多个重复 ACK则发送方再次发送 s_3。
如果发送方收到对 s_n 的确认,则 s_3 是唯一丢失的数据包。这是理想的情况。仅发送单个丢失的数据包。
如果发送方收到的确认段小于 s_n例如 s_4则意味着丢失了多个数据包。
发送方也需要重传下一个数据段。
##### 重传策略
可能只是重复相同的序列:重新发送下一个数据包,直到接收方指示它已处理了直至 s_n 的所有数据包为止。这种方法的问题在于,它需要一个 RTT直到发送方知道接下来必须重新发送的数据包为止。尽管这种策略可以避免不必要的重传但要等到 TCP 重新发送整个数据窗口后,它可能要花几秒钟甚至更长的时间。
另一种方法是一次重新发送几个数据包。当丢失了几个数据包时,此方法可使 TCP 恢复更快。在上面的示例中TCP 重新发送了 s_3s_4s_5...,但是只能确保已丢失 s_3。
从延迟的角度来看,这两种策略都不是最佳的。如果只有一个数据包需要重新发送,第一种策略是快速的,但是当多个数据包丢失时,它花费的时间太长。
即使必须重新发送多个数据包,第二个也是快速的,但是以浪费带宽为代价。此外,这样的 TCP 发送方在进行不必要的重传时可能已经发送了新数据。
通过可用信息TCP 无法知道丢失了哪些数据包。这就是 TCP [选择性确认][5]SACK的用武之地了。就像窗口缩放和时间戳一样它是另一个可选的但非常有用的 TCP 特性。
##### SACK 选项
```
   TCP Sack-Permitted Option: Kind: 4, Length 2
   +---------+---------+
   | Kind=4  | Length=2|
   +---------+---------+
```
支持此扩展的发送方在连接请求中包括 “允许 SACK” 选项。如果两个端点都支持扩展,则检测到数据流中丢失数据包的对等点可以将此信息通知发送方。
```
   TCP SACK Option: Kind: 5, Length: Variable
                     +--------+--------+
                     | Kind=5 | Length |
   +--------+--------+--------+--------+
   |      Left Edge of 1st Block       |
   +--------+--------+--------+--------+
   |      Right Edge of 1st Block      |
   +--------+--------+--------+--------+
   |                                   |
   /            . . .                  /
   |                                   |
   +--------+--------+--------+--------+
   |      Left Edge of nth Block       |
   +--------+--------+--------+--------+
   |      Right Edge of nth Block      |
   +--------+--------+--------+--------+
```
接收方收到段 s_2 后又收到 s_5 ... s_n则在发送对 s_2 的确认时将包括一个 SACK 块:
```
                +--------+-------+
                | Kind=5 |   10  |
+--------+------+--------+-------+
| Left edge: s_5                 |
+--------+--------+-------+------+
| Right edge: s_n                |
+--------+-------+-------+-------+
```
这告诉发送方到 s_2 的段都是按顺序到达的,但也让发送方知道段 s_5 至 s_n 也已收到。 然后,发送方可以重新发送这两个数据包,并继续发送新数据。
##### 神话般的无损网络
从理论上讲,如果连接不会丢包,那么 SACK 就没有任何优势。或者连接具有如此低的延迟,甚至等待一个完整的 RTT 都无关紧要。
在实践中,无损行为几乎是不可能保证的。
即使网络及其所有交换机和路由器具有足够的带宽和缓冲区空间,数据包仍然可能丢失:
* 主机操作系统可能面临内存压力并丢弃数据包。请记住,一台主机可能同时处理数万个数据包流。
* CPU 可能无法足够快地消耗掉来自网络接口的传入数据包。这会导致网络适配器本身中的数据包丢失。
* 如果 TCP 时间戳不可用,即使一个非常小的 RTT 的连接也可能在丢失恢复期间暂时停止。
使用 SACK 不会增加 TCP 数据包的大小,除非连接遇到数据包丢失。因此,几乎没有理由禁用此功能。几乎所有的 TCP 协议栈都支持 SACK —— 它通常只在不进行 TCP 批量数据传输的低功耗 IOT 类似设备上才不存在。
当 Linux 系统接受来自此类设备的连接时TCP 会自动为受影响的连接禁用 SACK。
### 总结
本文中研究的三个 TCP 扩展都与 TCP 性能有关最好都保留其默认设置enabled。
TCP 握手可确保仅使用双方都可以理解的扩展,因此,永远不需因为对等方可能不支持而全局禁用扩展。
关闭这些扩展会导致严重的性能损失,尤其是在 TCP 窗口缩放和 SACK 的情况下。 可以禁用 TCP 时间戳而不会立即造成不利影响,但是现在没有令人信服的理由这样做了。
启用它们还可以支持 TCP 选项,即使在 SYN cookie 生效时也是如此。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/tcp-window-scaling-timestamps-and-sack/
作者:[Florian Westphal][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/gxlct008)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/strlen/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/08/tcp-window-scaling-816x346.png
[2]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure
[3]: https://www.rfc-editor.org/info/rfc7323
[4]: https://en.wikipedia.org/wiki/SYN_cookies
[5]: https://www.rfc-editor.org/info/rfc2018

View File

@ -0,0 +1,273 @@
[#]: collector: (lujun9972)
[#]: translator: (gxlct008)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create a mobile app with Flutter)
[#]: via: (https://opensource.com/article/20/9/mobile-app-flutter)
[#]: author: (Vitaly Kuprenko https://opensource.com/users/kooper)
使用 Flutter 创建 App
======
使用流行的 Flutter 框架开始您的跨平台开发之旅。
![A person looking at a phone][1]
[Flutter][2] 是一个深受全球移动开发者欢迎的项目。该框架有一个庞大的、友好的发烧友社区,随着 Flutter 帮助程序员将他们的项目带入移动领域,这个社区还在继续增长。
本教程旨在帮助您开始使用 Flutter 进行移动开发。阅读之后,您将了解如何快速安装和设置框架,以便开始为智能手机、平板电脑和其他平台编码。
本操作指南假定您已在计算机上安装了 [Android Studio][3],并且具有一定的使用经验。
### 什么是 Flutter
Flutter 使得开发人员能够为多个平台构建应用程序,包括:
* Android
* iOS
* Web (测试版)
* macOS (正在开发中)
* Linux (正在开发中)
对 macOS 和 Linux 的支持还处于早期开发阶段,而 Web 支持预计很快就会发布。这意味着您可以立即试用其功能(如下所述)。
### 安装 Flutter
我使用的是 Ubuntu 18.04,但安装过程与其他 Linux 发行版类似,比如 Arch 或 Mint。
#### 使用 snapd 安装
要使用 [Snapd][4] 在 Ubuntu 或类似发行版上安装 Flutter请在终端中输入以下内容
```
$ sudo snap install flutter --classic
$ sudo snap install flutter --classic
flutter 0+git.142868f from flutter Team/ installed
```
然后使用 `flutter` 命令启动它。 首次启动时,该框架会下载到您的计算机上:
```
$ flutter
Initializing Flutter
Downloading https://storage.googleapis.com/flutter_infra[...]
```
下载完成后,您会看到一条消息,告诉您 Flutter 已初始化:
![Flutter initialized][5]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
#### 手动安装
如果您没有安装 Snapd或者您的发行版不是 Ubuntu那么安装过程会略有不同。 在这种情况下,请[下载][7]为您的操作系统推荐的 Flutter 版本。
![Install Flutter manually][8]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
然后将其解压缩到您的主目录。
在您喜欢的文本编辑器中打开主目录中的 `.bashrc` 文件 (如果您使用 [Z shell][9],则打开 `.zshrc`)。因为它是隐藏文件,所以您必须首先在文件管理器中启用显示隐藏文件,或者使用以下命令从终端打开它:
```
$ gedit ~/.bashrc &
```
将以下行添加到文件末尾:
```
export PATH="$PATH:~/flutter/bin"
```
保存并关闭文件。 请记住,如果将 Flutter 提取到您的主目录之外的其他位置,则 [Flutter SDK 的路径][10] 将有所不同。
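例如,假设您把 Flutter 解压到了 `/opt/flutter`(这只是一个假设的路径,仅作示意),那么加入 `.bashrc` 的就应该是:

```
export PATH="$PATH:/opt/flutter/bin"
```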
关闭您的终端,然后再次打开,以便加载新配置。 或者,您可以通过以下命令使配置立即生效:
```
$ . ~/.bashrc
```
如果您没有看到错误,那说明一切都是正常的。
这种安装方法比使用 `snap`命令稍微困难一些,但是它非常通用,可以让您在几乎所有的发行版上安装框架。
#### 检查安装结果
要检查结果,请在终端中输入以下内容:
```
flutter doctor -v
```
您将看到有关已安装组件的信息。 如果看到错误,请不要担心。 您尚未安装任何用于 Flutter SDK 的 IDE 插件。
![Checking Flutter installation with the doctor command][11]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
### 安装 IDE 插件
您应该在您的 [集成开发环境 (IDE)][12] 中安装插件,以帮助它与 Flutter SDK 接口、与设备交互并构建代码。
Flutter 开发中常用的三个主要 IDE 工具是 IntelliJ IDEA (社区版)、Android Studio 和 VS Code (或 [VSCodium][13])。我在本教程中使用的是 Android Studio但步骤与它们在 IntelliJ Idea (社区版)上的工作方式相似,因为它们构建在相同的平台上。
首先,启动 **Android Studio**。打开 **Settings**,进入 **Plugins** 窗格,选择 **Marketplace** 选项卡。在搜索行中输入 **Flutter**,然后单击 **Install**
![Flutter plugins][14]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
您可能会看到一个安装 **Dart** 插件的选项;同意它。如果看不到 Dart 选项,请通过重复上述步骤手动安装。我还建议使用 **Rainbow Brackets** 插件,它可以让代码导航更简单。
就这样!您已经安装了所需的所有插件。您可以在终端中输入一个熟悉的命令进行检查:
```
flutter doctor -v
```
![Checking Flutter plugins with the doctor command][15]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
### 构建您的 “Hello World” 应用程序
要启动新项目,请创建一个 Flutter 项目:
1. 选择 **New -> New Flutter project**.
![Creating a new Flutter plugin][16]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
2. 在窗口中,选择所需的项目类型。 在这种情况下,您需要选择 **Flutter Application**
3. 命名您的项目 **hello_world**。 请注意,您应该使用合并的名称,因此请使用下划线而不是空格。 您可能还需要指定 SDK 的路径。
![Naming a new Flutter plugin][17]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
4. 输入软件包名称。
您已经创建了一个项目!现在,您可以在设备上或使用模拟器启动它。
![Device options in Flutter][18]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
选择您想要的设备,然后按 **运行**。稍后,您将看到结果。
![Flutter demo on mobile device][19]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
现在你可以在一个 [中间项目][20] 上开始工作了。
### 尝试 Flutter for web
在安装 Flutter 的 Web 组件之前,您应该知道 Flutter 目前对 Web 应用程序的支持还很原始。 因此,将其用于复杂的项目并不是一个好主意。
默认情况下,基本 SDK 中不启用 Flutter for web。 要打开它,请转到 beta 频道。 为此,请在终端中输入以下命令:
```
flutter channel beta
```
![flutter channel beta output][21]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
接下来,使用以下命令根据 beta 分支升级 Flutter
```
flutter upgrade
```
![flutter upgrade output][22]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
要使 Flutter for web 工作,请输入:
```
flutter config --enable-web
```
重新启动 IDE这有助于 Android Studio 索引新的 IDE 并重新加载设备列表。您应该会看到几个新设备:
![Flutter for web device options][23]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
选择 **Chrome** 会在浏览器中启动一个应用程序, **Web Server** 会提供指向您的 Web 应用程序的链接,您可以在任何浏览器中打开它。
不过现在还不是急于开发的时候因为您当前的项目不支持Web。要改进它请打开项目根目录下的终端然后输入
```
flutter create
```
此命令重新创建项目,并添加 Web 支持。 现有代码不会被删除。
请注意,目录树已更改,现在有了一个 "web" 目录:
![File tree with web directory][24]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
现在您可以开始工作了。 选择 **Chrome**,然后按 **Run**。 稍后,您会看到带有应用程序的浏览器窗口。
![Flutter web app demo][25]
(Vitaly Kuprenko, [CC BY-SA 4.0][6])
恭喜你! 您刚刚为浏览器启动了一个项目,并且可以像其他任何网站一样继续使用它。
所有这些都来自同一代码库,因为 Flutter 使得几乎无需更改就可以为移动平台和 Web 编写代码。
### 用 Flutter 做更多的事情
Flutter 是用于移动开发的强大工具,而且它也是迈向跨平台开发的重要一步。 了解它,使用它,并将您的应用程序交付到所有平台!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/mobile-app-flutter
作者:[Vitaly Kuprenko][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/gxlct008)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/kooper
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd (A person looking at a phone)
[2]: https://flutter.dev/
[3]: https://developer.android.com/studio
[4]: https://snapcraft.io/docs/getting-started
[5]: https://opensource.com/sites/default/files/uploads/flutter1_initialized.png (Flutter initialized)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://flutter.dev/docs/get-started/install/linux
[8]: https://opensource.com/sites/default/files/uploads/flutter2_manual-install.png (Install Flutter manually)
[9]: https://opensource.com/article/19/9/getting-started-zsh
[10]: https://opensource.com/article/17/6/set-path-linux
[11]: https://opensource.com/sites/default/files/uploads/flutter3_doctor.png (Checking Flutter installation with the doctor command)
[12]: https://www.redhat.com/en/topics/middleware/what-is-ide
[13]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
[14]: https://opensource.com/sites/default/files/uploads/flutter4_plugins.png (Flutter plugins)
[15]: https://opensource.com/sites/default/files/uploads/flutter5_plugincheck.png (Checking Flutter plugins with the doctor command)
[16]: https://opensource.com/sites/default/files/uploads/flutter6_newproject.png (Creating a new Flutter plugin)
[17]: https://opensource.com/sites/default/files/uploads/flutter7_projectname.png (Naming a new Flutter plugin)
[18]: https://opensource.com/sites/default/files/uploads/flutter8_launchflutter.png (Device options in Flutter)
[19]: https://opensource.com/sites/default/files/uploads/flutter9_demo.png (Flutter demo on mobile device)
[20]: https://opensource.com/article/18/6/flutter
[21]: https://opensource.com/sites/default/files/uploads/flutter10_beta.png (flutter channel beta output)
[22]: https://opensource.com/sites/default/files/uploads/flutter11_upgrade.png (flutter upgrade output)
[23]: https://opensource.com/sites/default/files/uploads/flutter12_new-devices.png (Flutter for web device options)
[24]: https://opensource.com/sites/default/files/uploads/flutter13_tree.png (File tree with web directory)
[25]: https://opensource.com/sites/default/files/uploads/flutter14_webapp.png (Flutter web app demo)

View File

@ -0,0 +1,149 @@
[#]: collector: (lujun9972)
[#]: translator: (HankChow)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A practical guide to learning awk)
[#]: via: (https://opensource.com/article/20/9/awk-ebook)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
awk 实用学习指南
======
下载我们的电子书,学习如何更好地使用 `awk`
![Person programming on a laptop on a building][1]
在众多 [Linux][2] 命令中,`sed`、`awk` 和 `grep` 恐怕是其中最经典的三个命令了。它们引人注目或许是由于名字发音与众不同,也可能是它们无处不在,甚至是因为它们存在已久,但无论如何,如果要问哪些命令很有 Linux 风格,这三个命令是当之无愧的。其中 `sed``grep` 已经有很多简洁的标准用法了,但 `awk` 的使用难度却相对突出。
在日常使用中,通过 `sed` 实现字符串替换、通过 `grep` 实现过滤,这些都是司空见惯的操作了,但 `awk` 命令相对来说是用得比较少的。在我看来,可能的原因是大多数人都只使用 `sed` 或者 `grep` 的一些变体实现某些功能,例如:
```
$ sed -e 's/foo/bar/g' file.txt
$ grep foo file.txt
```
因此,尽管你可能会觉得 `sed``grep` 使用起来更加顺手,但实际上它们还有更多更强大的作用没有发挥出来。当然,我们没有必要在这两个命令上钻研得很深入,但我还是想理解自己是如何学习一个命令的。很多时候我会把一整串命令记住,但不会去了解其中的运行过程,这就让我产生了一种很熟悉命令的错觉,我可以随口说出某个命令的好几个选项参数,但这些参数具体有什么作用,以及它们的相关语法,我都并不明确。
这大概就是很多人对 `awk` 缺乏了解的原因了。
### 为使用而学习 awk
`awk` 并不深奥。它是一种相对基础的编程语言,因此你可以把它当成一门新的编程语言来学习:使用一些基本命令来熟悉语法、了解语言中的关键字并实现更复杂的功能,然后再多加练习就可以了。
### awk 是如何解析输入内容的
`awk` 的本质是将输入的内容看作是一个数组。当 `awk` 扫描一个文本文件时,会把每一行作为一条<ruby>记录<rt>record</rt></ruby>,每一条记录中又分割为多个<ruby>字段<rt>field</rt></ruby>。`awk` 记录了各条记录各个字段的信息,并通过内置变量 `NR`(记录数) 和 `NF`(字段数) 来调用相关信息。例如,以下这个命令可以查看文件的行数:
```
$ awk 'END { print NR;}' example.txt
36
```
从上面的命令可以看出 `awk` 的基本语法,无论是一个单行命令还是一整个脚本,语法都是这样的:
```
`样式或关键字 { 操作 }`
```
在上面的例子中,`END` 是一个关键字而不是样式,与此类似的另一个关键字是 `BEGIN`。使用 `BEGIN``END` 可以让 `awk` 在解析内容前或解析内容后执行大括号中指定的操作。
你可以使用<ruby>样式<rt>pattern</rt></ruby>作为过滤器或限定符,这样 `awk` 只会对匹配样式的对应记录执行指定的操作。以下这个例子就是使用 `awk` 实现 `grep` 命令在文件中查找“Linux”字符串的功能
```
$ awk '/Linux/ { print $0; }' os.txt
OS: CentOS Linux (10.1.1.8)
OS: CentOS Linux (10.1.1.9)
OS: Red Hat Enterprise Linux (RHEL) (10.1.1.11)
OS: Elementary Linux (10.1.2.4)
OS: Elementary Linux (10.1.2.5)
OS: Elementary Linux (10.1.2.6)
```
`awk` 会将文件中的每一行作为一条记录,将一条记录中的每个单词作为一个字段,默认情况下会按照空格作为<ruby>分隔符<rt>field separator</rt></ruby>`FS`)切割出记录中的字段。如果想要使用其它内容作为分隔符,可以使用 `--field-separator` 选项指定分隔符:
```
$ awk --field-separator ':' '/Linux/ { print $2; }' os.txt
 CentOS Linux (10.1.1.8)
 CentOS Linux (10.1.1.9)
 Red Hat Enterprise Linux (RHEL) (10.1.1.11)
 Elementary Linux (10.1.2.4)
 Elementary Linux (10.1.2.5)
 Elementary Linux (10.1.2.6)
```
在上面的例子中,可以看到在 `awk` 处理后每一行的行首都有一个空格,那是因为在源文件中每个冒号(`:`)后面都带有一个空格。和 `cut` 有所不同的是,`awk` 可以指定一个字符串作为分隔符,就像这样:
```
$ awk --field-separator ': ' '/Linux/ { print $2; }' os.txt
CentOS Linux (10.1.1.8)
CentOS Linux (10.1.1.9)
Red Hat Enterprise Linux (RHEL) (10.1.1.11)
Elementary Linux (10.1.2.4)
Elementary Linux (10.1.2.5)
Elementary Linux (10.1.2.6)
```
### awk 中的函数
可以通过这样的语法在 `awk` 中自定义函数:
```
`函数名称(参数) { 操作 }`
```
函数的好处在于只需要编写一次就可以多次复用,因此函数在脚本中起到的作用会比在构造单行命令时大。同时 `awk` 自身也带有很多预定义的函数,并且工作原理和其它编程语言或电子表格保持一致。你只需要了解函数需要接受什么参数,就可以放心使用了。
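举一个极简的例子(示例为本文补充,并非原文内容):按照上面的语法定义一个 `double` 函数,并在 `BEGIN` 块中调用它:

```
$ awk 'function double(x) { return x * 2 } BEGIN { print double(21) }'
42
```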
`awk` 中提供了数学运算和字符串处理的相关函数。数学运算函数通常比较简单,传入一个数字,它就会传出一个结果:
```
$ awk 'BEGIN { print sqrt(1764); }'
42
```
而字符串处理函数则稍微复杂一点,但 [GNU awk 手册][3]中也有充足的文档。例如 `split()` 函数需要传入一个待分割的单一字段、一个数组用于存放分割结果,以及用于分割的<ruby>定界符<rt>delimiter</rt></ruby>
例如前面示例中的输出内容,每条记录的末尾都包含了一个 IP 地址。由于变量 `NF` 代表的是每条记录的字段数量,刚好对应的是每条记录中最后一个字段的序号,因此可以通过引用 `NF` 将每条记录的最后一个字段传入 `split()` 函数:
```
$ awk --field-separator ': ' '/Linux/ { split($NF, IP, "."); print "subnet: " IP[3]; }' os.txt
subnet: 1
subnet: 1
subnet: 1
subnet: 2
subnet: 2
subnet: 2
```
实际上 `awk` 的功能还远远不止于此,你还可以跳出 `awk` 本身,通过命令管道和脚本来自定义更多功能。
### 下载电子书
使用 `awk` 本身就是一个学习 `awk` 的过程,即使某些操作使用 `sed`、`grep`、`cut`、`tr` 命令已经完全足够了,也可以尝试使用 `awk` 来实现。只要熟悉了 `awk`,就可以在 Bash 中自定义一些 `awk` 函数,进而解析复杂的数据。
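下面是一个示意性的例子(函数名 `count_field` 和用法都是假设的),演示如何在 Bash 中封装一个用 `awk` 统计某一列各个取值出现次数的函数:

```
# 统计文件中第 N 列各个取值出现的次数
# 用法count_field <列号> <文件>
count_field() {
    awk -v col="$1" '{ counts[$col]++; } END { for (v in counts) print counts[v], v; }' "$2"
}

# 例如:统计 os.txt 第二列(默认按空格分隔)中各取值的出现次数
count_field 2 os.txt
```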
[下载我们的电子书][4]学习并开始使用 `awk` 吧!
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/awk-ebook
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/hankchow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV (Person programming on a laptop on a building)
[2]: https://opensource.com/resources/linux
[3]: https://www.gnu.org/software/gawk/manual/gawk.html
[4]: https://opensource.com/downloads/awk-ebook

View File

@ -0,0 +1,411 @@
[#]: collector: "lujun9972"
[#]: translator: "lxbwolf"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "Find security issues in Go code using gosec"
[#]: via: "https://opensource.com/article/20/9/gosec"
[#]: author: "Gaurav Kamathe https://opensource.com/users/gkamathe"
使用 gosec 检查 Go 代码中的安全问题
======
来学习下 Golang 的安全检查工具 gosec。
![A lock on the side of a building][1]
[Go 语言][2]写的代码越来越常见尤其是在容器、Kubernetes 或云生态相关的开发中。Docker 是最早采用 Golang 的项目之一随后是 Kubernetes之后大量的新项目在众多编程语言中选择了 Go。
像其他语言一样Go 也有它的长处和短处,其中就包括安全缺陷。这些缺陷可能源于语言本身的限制,再加上程序员不安全的编码习惯而出现,例如 C 代码中的内存安全问题。
无论它们出现的原因是什么,安全问题都应该在开发过程中尽早修复,以免在封装好的软件中出现。幸运的是,静态分析工具可以帮你批量地处理这些问题。静态分析工具通过解析用某种编程语言写的代码来找到问题。
这类工具中很多被称为 linter。传统意义上linter 更注重的是检查代码中编码问题、bug、代码风格之类的问题不会检查安全问题。例如[Coverity][3] 是很受欢迎的用来检查 C/C++ 代码问题的工具。然而,有工具专门用来检查源码中的安全问题。例如,[Bandit][4] 用来检查 Python 代码中的安全缺陷。[gosec][5] 用来搜寻 Go 源码中的安全缺陷。gosec 通过扫描 Go 的 AST<ruby>抽象语法树<rt>abstract syntax tree</rt></ruby>)来检查源码中的安全问题。
### 开始使用 gosec
在开始学习和使用 gosec 之前,你需要准备一个 Go 语言写的项目。有这么多开源软件,我相信这不是问题。你可以在 GitHub 的 [Golang 库排行榜][6]中找一个。
本文中,我随机选了 [Docker CE][7] 项目,但你可以选择任意的 Go 项目。
#### 安装 Go 和 gosec
如果你还没安装 Go你可以先从仓库中拉取下来。如果你用的是 Fedora 或其他基于 RPM 的 Linux 发行版本:
```
$ dnf install golang.x86_64
```
如果你用的是其他操作系统,请参照 [Golang 安装][8]页面。
使用 `version` 参数来验证 Go 是否安装成功:
```
$ go version
go version go1.14.6 linux/amd64
$
```
运行 `go get` 命令就可以轻松地安装 gosec
```
$ go get github.com/securego/gosec/cmd/gosec
$
```
上面这行命令会从 GitHub 下载 gosec 的源码、编译并安装到指定位置。在仓库的 README 中你还可以看到[安装工具的其他方法][9]。
gosec 的源码会被下载到 `$GOPATH` 的位置,编译出的二进制文件会被安装到你系统上设置的 `bin` 目录下。你可以运行下面的命令来查看 `$GOPATH``$GOBIN` 目录:
```
$ go env | grep GOBIN
GOBIN="/root/go/gobin"
$
$ go env | grep GOPATH
GOPATH="/root/go"
$
```
如果 `go get` 命令执行成功,那么 gosec 二进制应该就可以使用了:
```
$
$ ls -l ~/go/bin/
total 9260
-rwxr-xr-x. 1 root root 9482175 Aug 20 04:17 gosec
$
```
你可以把 `$GOPATH` 下的 `bin` 目录添加到 `$PATH` 中。这样你就可以像使用系统上的其他命令一样来使用 gosec 命令行工具CLI了。
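例如(示意命令,具体路径以你机器上的 `$GOPATH` 为准):

```
$ export PATH=$PATH:$(go env GOPATH)/bin
```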
```
$ which gosec
/root/go/bin/gosec
$
```
使用 gosec 命令行工具的 `-help` 选项来看看运行是否符合预期:
```
$ gosec -help
gosec - Golang security checker
gosec analyzes Go source code to look for common programming mistakes that
can lead to security problems.
VERSION: dev
GIT TAG:
BUILD DATE:
USAGE:
```
之后,创建一个目录,把源码下载到这个目录作为实例项目(本例中,我用的是 Docker CE
```
$ mkdir gosec-demo
$
$ cd gosec-demo/
$
$ pwd
/root/gosec-demo
$
$ git clone https://github.com/docker/docker-ce.git
Cloning into 'docker-ce'...
remote: Enumerating objects: 1271, done.
remote: Counting objects: 100% (1271/1271), done.
remote: Compressing objects: 100% (722/722), done.
remote: Total 431003 (delta 384), reused 981 (delta 318), pack-reused 429732
Receiving objects: 100% (431003/431003), 166.84 MiB | 28.94 MiB/s, done.
Resolving deltas: 100% (221338/221338), done.
Updating files: 100% (10861/10861), done.
$
```
代码统计工具(本例中用的是 cloc显示这个项目大部分是用 Go 写的,恰好迎合了 gosec 的功能。
```
$ ./cloc /root/gosec-demo/docker-ce/
10771 text files.
8724 unique files.
2560 files ignored.
-----------------------------------------------------------------------------------
Language                           files          blank        comment           code
-----------------------------------------------------------------------------------
Go                                  7222         190785         230478        1574580
YAML                                  37           4831            817         156762
Markdown                             529          21422              0          67893
Protocol Buffers                     149           5014          16562          10071
```
### 使用默认选项运行 gosec
在 Docker CE 项目中使用默认选项运行 gosec只需执行 `gosec ./...` 命令。屏幕上会有很多输出内容。在末尾你会看到一个简短的 `Summary`,列出了扫描的文件数、所有文件的总行数,以及源码中发现的问题数。
```
$ pwd
/root/gosec-demo/docker-ce
$
$ time gosec ./...
[gosec] 2020/08/20 04:44:15 Including rules: default
[gosec] 2020/08/20 04:44:15 Excluding rules: default
[gosec] 2020/08/20 04:44:15 Import directory: /root/gosec-demo/docker-ce/components/engine/opts
[gosec] 2020/08/20 04:44:17 Checking package: opts
[gosec] 2020/08/20 04:44:17 Checking file: /root/gosec-demo/docker-ce/components/engine/opts/address_pools.go
[gosec] 2020/08/20 04:44:17 Checking file: /root/gosec-demo/docker-ce/components/engine/opts/env.go
[gosec] 2020/08/20 04:44:17 Checking file: /root/gosec-demo/docker-ce/components/engine/opts/hosts.go
# End of gosec run
Summary:
Files: 1278
Lines: 173979
Nosec: 4
Issues: 644
real 0m52.019s
user 0m37.284s
sys 0m12.734s
$
```
滚动屏幕你会看到不同颜色高亮的行:红色表示需要尽快查看的高优先级问题,黄色表示中优先级的问题。
#### 关于“假阳性”
在开始检查代码之前,我想先分享几条基本原则。默认情况下,静态检查工具会基于一系列的规则对被检查的代码进行分析,并报告出检查出来的*所有*问题。这是否表示工具报出来的每一个问题都需要修复呢?非也。这个问题最好的解答者是设计和开发这个软件的人。他们最熟悉代码,更重要的是,他们了解软件会在什么环境下部署以及会被怎样使用。
这个知识点对于判定工具标记出来的某段代码到底是不是安全缺陷至关重要。随着工作时间和经验的积累,你会慢慢学会怎样让静态分析工具忽略非安全缺陷,使报告内容的可执行性更高。因此,要判定 gosec 报出来的某个问题是否需要修复,让一名有经验的开发者对源码做人工审计会是比较好的办法。
#### 高优先级问题
从输出内容看gosec 发现了 Docker CE 的一个高优先级问题它使用的是低版本的 TLS<ruby>传输层安全<rt>Transport Layer Security</rt></ruby>)。无论什么时候,使用软件和库的最新版本都是确保它更新及时、没有安全问题的最好的方法。
```
[/root/gosec-demo/docker-ce/components/engine/daemon/logger/splunk/splunk.go:173] - G402 (CWE-295): TLS MinVersion too low. (Confidence: HIGH, Severity: HIGH)
172:
> 173: tlsConfig := &tls.Config{}
174:
```
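作为参考,下面是一段示意代码(并非 Docker CE 的实际修复),演示如何通过 `MinVersion` 显式指定较新的 TLS 版本,从而消除 G402 这类告警:

```
package main

import (
    "crypto/tls"
    "fmt"
)

func main() {
    // 显式要求至少使用 TLS 1.2gosec 就不会再报 MinVersion too low
    tlsConfig := &tls.Config{MinVersion: tls.VersionTLS12}
    fmt.Println(tlsConfig.MinVersion)
}
```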
它还发现了一处使用弱随机数生成器(伪随机数)的问题。这是不是一个安全缺陷,取决于生成的随机数的使用方式。
```
[/root/gosec-demo/docker-ce/components/engine/pkg/namesgenerator/names-generator.go:843] - G404 (CWE-338): Use of weak random number generator (math/rand instead of crypto/rand) (Confidence: MEDIUM, Severity: HIGH)
842: begin:
> 843: name := fmt.Sprintf("%s_%s", left[rand.Intn(len(left))], right[rand.Intn(len(right))])
844: if name == "boring_wozniak" /* Steve Wozniak is not boring */ {
```
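如果这些随机数确实被用在安全相关的场景,可以考虑改用 `crypto/rand`。下面是一段示意代码(非 Docker CE 原文):

```
package main

import (
    "crypto/rand"
    "fmt"
    "math/big"
)

func main() {
    // 从加密安全的随机源中取一个 [0, 100) 范围内的整数
    n, err := rand.Int(rand.Reader, big.NewInt(100))
    if err != nil {
        panic(err)
    }
    fmt.Println(n)
}
```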
#### 中优先级问题
这个工具还发现了一些中优先级问题。它标记了一个与 tar 解压相关的解压炸弹型潜在 DoS 漏洞,这类问题可能会被恶意的攻击者利用。
```
[/root/gosec-demo/docker-ce/components/engine/pkg/archive/copy.go:357] - G110 (CWE-409): Potential DoS vulnerability via decompression bomb (Confidence: MEDIUM, Severity: MEDIUM)
356:
> 357: if _, err = io.Copy(rebasedTar, srcTar); err != nil {
358: w.CloseWithError(err)
```
它还发现了一个通过变量访问文件的问题。如果恶意使用者能访问这个变量,那么他们就可以改变变量的值去读其他文件。
```
[/root/gosec-demo/docker-ce/components/cli/cli/context/tlsdata.go:80] - G304 (CWE-22): Potential file inclusion via variable (Confidence: HIGH, Severity: MEDIUM)
79: if caPath != "" {
> 80: if ca, err = ioutil.ReadFile(caPath); err != nil {
81: return nil, err
```
文件和目录通常是操作系统安全的最基础的元素。这里gosec 报出了一个可能需要你检查目录的权限是否安全的问题。
```
[/root/gosec-demo/docker-ce/components/engine/contrib/apparmor/main.go:41] - G301 (CWE-276): Expect directory permissions to be 0750 or less (Confidence: HIGH, Severity: MEDIUM)
40: // make sure /etc/apparmor.d exists
> 41: if err := os.MkdirAll(path.Dir(apparmorProfilePath), 0755); err != nil {
42: log.Fatal(err)
```
你经常需要在源码中启动命令行工具。Go 使用内建的 exec 库来实现。仔细地分析用来调用这些工具的变量,就能发现安全缺陷。
```
[/root/gosec-demo/docker-ce/components/engine/testutil/fakestorage/fixtures.go:59] - G204 (CWE-78): Subprocess launched with variable (Confidence: HIGH, Severity: MEDIUM)
58:
> 59: cmd := exec.Command(goCmd, "build", "-o", filepath.Join(tmp, "httpserver"), "github.com/docker/docker/contrib/httpserver")
60: cmd.Env = append(os.Environ(), []string{
```
#### 低优先级问题
在这个输出中gosec 报出了一个 “unsafe” 调用相关的低优先级问题,这个调用会绕开 Go 提供的内存保护。再仔细分析下你调用 “unsafe” 的方式,看看是否有被别人利用的可能性。
```
[/root/gosec-demo/docker-ce/components/engine/pkg/archive/changes_linux.go:264] - G103 (CWE-242): Use of unsafe calls should be audited (Confidence: HIGH, Severity: LOW)
263: for len(buf) > 0 {
> 264: dirent := (*unix.Dirent)(unsafe.Pointer(&buf[0]))
265: buf = buf[dirent.Reclen:]
[/root/gosec-demo/docker-ce/components/engine/pkg/devicemapper/devmapper_wrapper.go:88] - G103 (CWE-242): Use of unsafe calls should be audited (Confidence: HIGH, Severity: LOW)
87: func free(p *C.char) {
> 88: C.free(unsafe.Pointer(p))
89: }
```
它还标记了源码中未处理的错误。源码中出现的错误你都应该处理。
```
[/root/gosec-demo/docker-ce/components/cli/cli/command/image/build/context.go:172] - G104 (CWE-703): Errors unhandled. (Confidence: HIGH, Severity: LOW)
171: err := tar.Close()
> 172: os.RemoveAll(dockerfileDir)
173: return err
```
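对应的修复方式通常很直接:显式检查并处理函数返回的错误。下面是一段示意代码(目录名是假设的,并非 Docker CE 的原文):

```
package main

import (
    "log"
    "os"
)

func main() {
    dir := "/tmp/gosec-demo-scratch" // 假设的临时目录
    // 显式处理 os.RemoveAll 返回的错误G104 就不会再报
    if err := os.RemoveAll(dir); err != nil {
        log.Printf("清理目录失败: %v", err)
    }
}
```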
### 自定义 gosec 扫描
使用默认选项运行 gosec 会报出非常多的问题。然而,经过人工审计,并随着时间的推移,你会逐渐掌握哪些问题其实不需要被标记出来。你可以自行指定要排除或包含哪些检查。
我上面提到过gosec 是基于一系列的规则从 Go 源码中查找问题的。下面是它使用的完整的[规则][10]列表:
* G101查找硬编码凭证
* G102绑定到所有接口
* G103审计不安全区块的使用
* G104审计未检查的错误
* G106审计 ssh.InsecureIgnoreHostKey 的使用
* G107提供给 HTTP 请求的 URL 作为污点输入
* G108性能分析profiling端点自动暴露到 /debug/pprof
* G109strconv.Atoi 转换到 int16 或 int32 时潜在的整数溢出
* G110潜在的通过解压炸弹实现的 DoS
* G201SQL 查询构造使用格式字符串
* G202SQL 查询构造使用字符串连接
* G203在 HTML 模板中使用未转义的数据
* G204审计命令执行情况
* G301创建目录时文件权限分配不合理
* G302chmod 文件权限分配不合理
* G303使用可预测的路径创建临时文件
* G304作为污点输入提供的文件路径
* G305提取 zip/tar 文档时遍历文件
* G306写到新文件时文件权限分配不合理
* G307把返回错误的函数放到 defer 内
* G401检测 DES、RC4、MD5 或 SHA1 的使用情况
* G402查找错误的 TLS 连接设置
* G403确保最小 RSA 密钥长度为 2048 位
* G404不安全的随机数源rand
* G501导入黑名单列表crypto/md5
* G502导入黑名单列表crypto/des
* G503导入黑名单列表crypto/rc4
* G504导入黑名单列表net/http/cgi
* G505导入黑名单列表crypto/sha1
* G601在 range 语句中使用隐式的元素别名
#### 排除指定的测试
你可以自定义 gosec 来避免对已知为安全的问题进行扫描和报告。你可以使用 `-exclude` 选项和上面的规则编号来忽略指定的问题。
例如,如果你不想让 gosec 检查源码中未处理的错误G104或者再加上硬编码凭证G101相关的问题那么你可以运行下面的命令来忽略这些检查
```
$ gosec -exclude=G104 ./...
$ gosec -exclude=G104,G101 ./...
```
有时候你知道某段代码是安全的,但是 gosec 还是会报出问题。然而,你又不想完全排除掉整个检查,因为你想让 gosec 检查新增的代码。通过在你已知为安全的代码块添加 `#nosec` 标记可以避免 gosec 扫描。这样 gosec 会继续扫描新增代码,而忽略掉 `#nosec` 标记的代码块。
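下面是一个示意性的例子(注释的具体写法以 gosec 的 README 为准),演示如何用 `#nosec` 跳过某一行已知安全的代码:

```
package main

import (
    "fmt"
    "math/rand"
)

func main() {
    // 这里只是生成演示用的编号,与安全无关,用 #nosec 告诉 gosec 跳过 G404 检查
    n := rand.Intn(10) // #nosec G404
    fmt.Println("demo id:", n)
}
```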
#### 运行指定的检查
另一方面,如果你只想检查指定的问题,你可以通过 `-include` 选项和规则编号来告诉 gosec 运行哪些检查:
```
$ gosec -include=G201,G202 ./...
```
#### 扫描测试文件
Go 语言自带对测试的支持通过单元测试来检验一个元素是否符合预期。在默认模式下gosec 会忽略测试文件,你可以使用 `-tests` 选项把它们包含进来:
```
gosec -tests ./...
```
#### 修改输出的格式
找出问题只是它的一半功能另一半功能是把它检查到的问题以用户友好、同时又方便工具处理的方式报告出来。幸运的是gosec 可以用不同的方式输出。例如,如果你想看 JSON 格式的报告,那么就使用 `-fmt` 选项指定 JSON 格式并把结果保存到 `results.json` 文件中:
```
$ gosec -fmt=json -out=results.json ./...
$ ls -l results.json
-rw-r--r--. 1 root root 748098 Aug 20 05:06 results.json
$
{
"severity": "LOW",
"confidence": "HIGH",
"cwe": {
"ID": "242",
"URL": "<https://cwe.mitre.org/data/definitions/242.html>"
},
"rule_id": "G103",
"details": "Use of unsafe calls should be audited",
"file": "/root/gosec-demo/docker-ce/components/engine/daemon/graphdriver/graphtest/graphtest_unix.go",
"code": "304: \t// Cast to []byte\n305: \theader := *(*reflect.SliceHeader)(unsafe.Pointer(\u0026buf))\n306: \theader. Len *= 8\n",
"line": "305",
"column": "36"
},
```
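有了机器可读的 JSON 报告,就可以把它接入其他工具做进一步统计。例如(示意命令,这里假设输出顶层的 `Issues` 数组包含所有问题,字段名以实际输出为准),用 `jq` 按严重程度汇总:

```
$ jq -r '.Issues[].severity' results.json | sort | uniq -c
```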
### 用 gosec 检查容易暴露出来的问题
静态检查工具不能完全代替人工代码审计。然而,当代码量变大、开发者众多时,这样的工具往往能以可扩展的方式帮忙找出那些容易发现的问题。它也有助于新开发者在编码时识别并避免引入这些安全缺陷。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/gosec
作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/gkamathe
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA "A lock on the side of a building"
[2]: https://golang.org/
[3]: https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html
[4]: https://pypi.org/project/bandit/
[5]: https://github.com/securego/gosec
[6]: https://github.com/trending/go
[7]: https://github.com/docker/docker-ce
[8]: https://golang.org/doc/install
[9]: https://github.com/securego/gosec#install
[10]: https://github.com/securego/gosec#available-rules

View File

@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 questions to ask yourself when writing project documentation)
[#]: via: (https://opensource.com/article/20/9/project-documentation)
[#]: author: (Alexei Leontief https://opensource.com/users/alexeileontief)
编写项目文档时要问自己 5 个问题
======
使用一些有效沟通的基本原则,可以帮助你创建与你的品牌一致、编写良好、内容丰富的项目文档。
![A person writing.][1]
在开始另一个开源项目文档的实际写作部分之前,甚至在采访专家之前,最好回答一些有关新文档的高级问题。
著名的传播理论家 Harold Lasswell 在他 1948 年的文章《社会中的传播结构和功能》(_The Structure and Function of Communication in Society_)中写道:
> (一个)描述沟通行为的方便方法是回答以下问题:
>
> * 谁
> * 说什么
> * 在哪个渠道
> * 对谁
> * 有什么效果?
>
作为一名技术沟通者,你可以运用 Lasswell 的理论,回答关于你文档的类似问题,以更好地传达你的信息,达到预期的效果。
### 谁—谁是文档的所有者?
或者说,文档背后是什么公司?它想向受众传达什么品牌形象?这个问题的答案将大大影响你的写作风格。公司也可能有自己的风格指南,或者至少有正式的使命声明,在这种情况下,你应该从这开始。
如果公司刚刚起步,你可以向文件的主人提出上述问题。作为作者,将你为公司创造的声音和角色与你自己的世界观和信仰结合起来是很重要的。这将使你的写作看起来更自然,而不像公司的行话。
### 说什么—文件类型是什么?
你需要传达什么信息?它是什么类型的文档?是用户指南、API 参考,还是发布说明等?许多文档类型都有模板或普遍认可的结构,你可以从这些结构入手,它们能帮助你确保涵盖所有必要的信息。
### 在哪个渠道—文档的格式是什么?
对于技术文档,沟通的渠道通常会告诉你文档的最终格式,也就是 PDF、HTML、文本文件等。这很可能也决定了你应该使用什么工具来编写你的文档。
### 对谁—目标受众是谁?
谁会阅读这份文档?他们的知识水平如何?他们的工作职责和主要挑战是什么?这些问题将帮助你确定你应该覆盖什么,是否应该进入细节,是否可以使用任何特定的术语,等等。在某些情况下,这些问题的答案甚至可以影响你使用的语法的复杂性。
### 有什么效果—文档的目的是什么?
在这里,你应该定义这个文档要为它的潜在读者解决什么问题,或者它应该为他们回答什么问题。例如,你的文档的目的可以是教你的客户如何使用你的产品。
这时,你可以参考 [Divio][2] 建议的方法。根据这种方法,你可以根据文档的总体方向,将任何文档分为四种类型之一:学习、解决问题、理解或获取信息。
在这个阶段,另一个很好的问题是,这个文档要解决什么业务问题(例如,如何削减支持成本)。带着业务问题,你可能会看到你写作的一个重要角度。
### 总结
上面的问题旨在帮助你形成有效沟通的基础,并确保你的文件涵盖了所有应该涵盖的内容。你可以把它们分解成你自己的问题清单,并把它们放在身边,以便在你有文件要创建的时候使用。当你面对空白页时,这份清单也可能会派上用场。希望它能激发你的灵感,帮助你产生想法。
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/project-documentation
作者:[Alexei Leontief][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alexeileontief
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_resume_rh1x.png?itok=S3HGxi6E (A person writing.)
[2]: https://documentation.divio.com/

View File

@ -0,0 +1,78 @@
[#]: collector: (lujun9972)
[#]: translator: (rakino)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create template files in GNOME)
[#]: via: (https://opensource.com/article/20/9/gnome-templates)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
在 GNOME 中创建文档模板
======
![Digital images of a computer desktop][1]
制作模板可以让你更快地开始写作新的文档。
我偶然发现了 [GNOME][2] 的一个新功能至少对我来说是新功能创建文档模板。模板template也被称作样板文件boilerplate一般是有着特定格式的空文档例如律师事务所的信笺在其顶部有着律所的名称和地址另一个例子是银行以及保险公司的保函在其底部页脚包含着某些免责声明。由于这类信息很少改变你可以把它们添加到空文档中作为模板使用。
一天,在浏览我的 Linux 系统文件的时候,我点击了**模板**Templates文件夹然后刚好发现窗口的上方有一条消息写着“将文件放入此文件夹并用作新文档的模板”以及一个**获取详情……** 的链接,指向了 [GNOME 指南GNOME help][3]中的模板页面。
![Message at top of Templates folder in GNOME Desktop][4]
(Alan Formy-Duval, [CC BY-SA 4.0][5])
### 创建模板
在 GNOME 中创建模板非常简单。有几种方法可以把文件放进模板文件夹里你既可以通过图形用户界面GUI或是命令行界面CLI从另一个位置复制或移动文件也可以创建一个全新的文件我选择了后者实际上我也创建了两个文件。
![My first two GNOME templates][6]
(Alan Formy-Duval, [CC BY-SA 4.0][5])
我的第一份模板是为 Opensource.com 的文章准备的,它有一个输入标题的位置以及关于我的名字和文章使用的许可证的几行。我的文章使用 Markdown 格式,所以我将模板创建为了一个新的 Markdown 文档——**Opensource.com Article.md**
````
# Title    
```
An article for Opensource.com
by: Alan Formy-Duval
Creative Commons BY-SA 4.0
```
````
我将这份文档保存在了 `/home/alan/Templates` 文件夹内,现在 GNOME 就可以将这个文件识别为模板,并在我要创建新文档的时候提供建议了。
### 使用模板
每当我有了新文章的灵感的时候,我只需要在我计划用来组织内容的文件夹里单击右键,然后从**新建文档**New Document列表中选择我想要的模板就可以开始了。
![Select the template by name][7]
(Alan Formy-Duval, [CC BY-SA 4.0][5])
你可以为各种文档或文件制作模板。我写这篇文章时使用了我为 Opensource.com 的文章创建的模板。程序员可能会把模板用于软件代码,这样的话也许你想要只包含 `main()` 的模板。
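比如(纯属示意),可以在模板文件夹里放一个名为 `Go Program.go` 的文件,内容只有一个空的 `main()`

```
package main

func main() {
    // TODO: 从这里开始写代码
}
```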
GNOME 桌面环境为 Linux 及相关操作系统的用户提供了一个非常实用、功能丰富的界面。你最喜欢的 GNOME 功能是什么,你又是怎样使用它们的呢?请在评论中分享~
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/gnome-templates
作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[rakino](https://github.com/rakino)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_desk_home_laptop_browser.png?itok=Y3UVpY0l (Digital images of a computer desktop)
[2]: https://www.gnome.org/
[3]: https://help.gnome.org/users/gnome-help/stable/files-templates.html.en
[4]: https://opensource.com/sites/default/files/uploads/gnome-message_at_top_border.png (Message at top of Templates folder in GNOME Desktop)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/gnome-first_two_templates_border.png (My first two GNOME templates)
[7]: https://opensource.com/sites/default/files/uploads/gnome-new_document_menu_border.png (Select the template by name)