mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-01-13 22:30:37 +08:00
commit
9c55abe641
@ -4,15 +4,15 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13253-1.html)
|
||||
|
||||
用一个开源工具实现多线程 Python 程序的可视化
|
||||
======
|
||||
|
||||
> VizTracer 可以跟踪并发的 Python 程序,以帮助记录、调试和剖析。
|
||||
|
||||
![丰富多彩的声波图][1]
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/30/230404xi9pox38ookk8xe2.jpg)
|
||||
|
||||
并发是现代编程中必不可少的一部分,因为我们有多个核心,有许多需要协作的任务。然而,当并发程序不按顺序运行时,就很难理解它们。对于工程师来说,在这些程序中发现 bug 和性能问题不像在单线程、单任务程序中那么容易。
|
||||
|
||||
@ -20,7 +20,7 @@
|
||||
|
||||
VizTracer 是一个追踪和可视化 Python 程序的工具,对日志、调试和剖析很有帮助。尽管它对单线程、单任务程序很好用,但它在并发程序中的实用性是它的独特之处。
|
||||
|
||||
## 尝试一个简单的任务
|
||||
### 尝试一个简单的任务
|
||||
|
||||
从一个简单的练习任务开始:计算出一个数组中的整数是否是质数并返回一个布尔数组。下面是一个简单的解决方案:
|
||||
|
@ -0,0 +1,274 @@
|
||||
[#]: subject: (Learn how file input and output works in C)
|
||||
[#]: via: (https://opensource.com/article/21/3/file-io-c)
|
||||
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wyxplus)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13252-1.html)
|
||||
|
||||
学习如何用 C 语言来进行文件输入输出操作
|
||||
======
|
||||
|
||||
> 理解 I/O 有助于提升你的效率。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/30/222717gyuegz88ryu8ry7i.jpg)
|
||||
|
||||
如果你打算学习 C 语言的输入、输出,可以从 `stdio.h` 包含文件开始。正如你从其名字中猜到的,该文件定义了所有的标准(“std”)的输入和输出(“io”)函数。
|
||||
|
||||
大多数人学习的第一个 `stdio.h` 函数是打印格式化输出的 `printf`,或者是用来打印简单字符串的 `puts`。这些函数非常适合把信息打印给用户,但如果你想做更多的事情,就需要了解其他函数。
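
(LCTT 译注:这里补充一个最小的示例,演示 `printf` 和 `puts` 的基本用法;示例中的输出内容是随意取的,并非原文代码。)

```
#include <stdio.h>

int
main(void)
{
    /* printf 打印格式化输出,puts 打印一个字符串并自动换行 */
    printf("2 + 2 = %d\n", 2 + 2);
    puts("Hello, stdio.h!");

    return 0;
}
```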
|
||||
|
||||
你可以通过编写一个常见 Linux 命令的副本来了解其中一些功能和方法。`cp` 命令主要用于复制文件。如果你查看 `cp` 的帮助手册,可以看到 `cp` 命令支持非常多的参数和选项。但最简单的功能,就是复制文件:
|
||||
|
||||
```
|
||||
cp infile outfile
|
||||
```
|
||||
|
||||
你只需使用一些读写文件的基本函数,就可以用 C 语言来自己实现 `cp` 命令。
|
||||
|
||||
### 一次读写一个字符
|
||||
|
||||
你可以使用 `fgetc` 和 `fputc` 函数轻松地进行输入输出。这两个函数一次只读写一个字符。它们的用法定义在 `stdio.h` 中,而且非常直观:`fgetc` 从文件中读取(获取)一个字符,`fputc` 把一个字符写入(保存到)文件中。
|
||||
|
||||
```
|
||||
int fgetc(FILE *stream);
|
||||
int fputc(int c, FILE *stream);
|
||||
```
|
||||
|
||||
编写 `cp` 命令需要访问文件。在 C 语言中,你使用 `fopen` 函数打开一个文件,该函数需要两个参数:文件名和打开文件的模式。模式通常是从文件读取(`r`)或向文件写入(`w`)。打开文件的方式也有其他选项,但是对于本教程而言,仅关注于读写操作。
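
(LCTT 译注:下面是一个简单的示意,演示上文提到的“其他选项”之一:追加模式 `a`。其中的文件名 `log.txt` 只是假设的例子,并非原文代码;本文的 `cp` 程序只用到 `r` 和 `w`。)

```
#include <stdio.h>

int
main(void)
{
    /* "a" 为追加模式:文件不存在时创建,存在时在文件末尾追加 */
    FILE *logfile = fopen("log.txt", "a");

    if (logfile == NULL) {
        fprintf(stderr, "Cannot open file for appending: log.txt\n");
        return 1;
    }

    fputs("one more line\n", logfile);
    fclose(logfile);

    return 0;
}
```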
|
||||
|
||||
因此,将一个文件复制到另一个文件就变成了打开源文件和目标文件,接着,不断从第一个文件读取字符,然后将该字符写入第二个文件。`fgetc` 函数返回从输入文件中读取的单个字符,或者当文件完成后返回文件结束标记(`EOF`)。一旦读取到 `EOF`,你就完成了复制操作,就可以关闭两个文件。该代码如下所示:
|
||||
|
||||
```
|
||||
do {
|
||||
ch = fgetc(infile);
|
||||
if (ch != EOF) {
|
||||
fputc(ch, outfile);
|
||||
}
|
||||
} while (ch != EOF);
|
||||
```
|
||||
|
||||
你可以使用此循环编写自己的 `cp` 程序,以使用 `fgetc` 和 `fputc` 函数一次读写一个字符。`cp.c` 源代码如下所示:
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
|
||||
int
|
||||
main(int argc, char **argv)
|
||||
{
|
||||
FILE *infile;
|
||||
FILE *outfile;
|
||||
int ch;
|
||||
|
||||
/* parse the command line */
|
||||
|
||||
/* usage: cp infile outfile */
|
||||
|
||||
if (argc != 3) {
|
||||
fprintf(stderr, "Incorrect usage\n");
|
||||
fprintf(stderr, "Usage: cp infile outfile\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* open the input file */
|
||||
|
||||
infile = fopen(argv[1], "r");
|
||||
if (infile == NULL) {
|
||||
fprintf(stderr, "Cannot open file for reading: %s\n", argv[1]);
|
||||
return 2;
|
||||
}
|
||||
|
||||
/* open the output file */
|
||||
|
||||
outfile = fopen(argv[2], "w");
|
||||
if (outfile == NULL) {
|
||||
fprintf(stderr, "Cannot open file for writing: %s\n", argv[2]);
|
||||
fclose(infile);
|
||||
return 3;
|
||||
}
|
||||
|
||||
/* copy one file to the other */
|
||||
|
||||
/* use fgetc and fputc */
|
||||
|
||||
do {
|
||||
ch = fgetc(infile);
|
||||
if (ch != EOF) {
|
||||
fputc(ch, outfile);
|
||||
}
|
||||
} while (ch != EOF);
|
||||
|
||||
/* done */
|
||||
|
||||
fclose(infile);
|
||||
fclose(outfile);
|
||||
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
你可以使用 `gcc` 来将 `cp.c` 文件编译成一个可执行文件:
|
||||
|
||||
```
|
||||
$ gcc -Wall -o cp cp.c
|
||||
```
|
||||
|
||||
`-o cp` 选项告诉编译器将编译后的程序保存到 `cp` 文件中。`-Wall` 选项告诉编译器提示所有可能的警告,如果你没有看到任何警告,则表示一切正常。
|
||||
|
||||
### 读写数据块
|
||||
|
||||
通过每次读写一个字符来实现自己的 `cp` 命令可以完成这项工作,但这并不是很快。在复制“日常”文件(例如文档和文本文件)时,你可能不会注意到,但是在复制大型文件或通过网络复制文件时,你才会注意到差异。每次处理一个字符需要大量的开销。
|
||||
|
||||
实现此 `cp` 命令的一种更好的方法是,把一大块输入数据读取到内存中(称为<ruby>缓冲区<rt>buffer</rt></ruby>),然后将这批数据一起写入到第二个文件。这样做的速度要快得多,因为程序可以一次读取更多的数据,这就减少了从文件中“读取”的次数。
|
||||
|
||||
你可以使用 `fread` 函数将文件读入一个变量中。这个函数有几个参数:将数据读入的数组或内存缓冲区的指针(`ptr`),要读取的最小对象的大小(`size`),要读取对象的个数(`nmemb`),以及要读取的文件(`stream`):
|
||||
|
||||
```
|
||||
size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);
|
||||
```
|
||||
|
||||
不同的选项为更高级的文件输入和输出(例如,读取和写入具有特定数据结构的文件)提供了很大的灵活性。但是,在从一个文件读取数据并将数据写入另一个文件的简单情况下,可以使用一个由字符数组组成的缓冲区。
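
(LCTT 译注:作为补充,下面用一个假想的 `struct record` 结构体,示意如何按“特定数据结构”来使用 `fwrite` 和 `fread`。注意这种二进制格式依赖于编译器的内存布局,通常只适合在同一台机器上写入和读回;该示例并非原文代码。)

```
#include <stdio.h>

/* 假想的记录结构,仅用于演示按结构体整体读写 */
struct record {
    int  id;
    char name[32];
};

int
main(void)
{
    struct record out = { 1, "example" };
    struct record in;
    FILE *fp;

    /* 把整个结构体一次写入文件("b" 表示二进制模式) */
    fp = fopen("records.dat", "wb");
    if (fp == NULL) {
        return 1;
    }
    fwrite(&out, sizeof(struct record), 1, fp);
    fclose(fp);

    /* 再把它整体读回来 */
    fp = fopen("records.dat", "rb");
    if (fp == NULL) {
        return 2;
    }
    if (fread(&in, sizeof(struct record), 1, fp) == 1) {
        printf("id=%d name=%s\n", in.id, in.name);
    }
    fclose(fp);

    return 0;
}
```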
|
||||
|
||||
你可以使用 `fwrite` 函数将缓冲区中的数据写入到另一个文件。它使用一组与 `fread` 函数相似的参数:要从中读取数据的数组或内存缓冲区的指针,要写入的最小对象的大小,要写入对象的个数,以及要写入的文件。
|
||||
|
||||
```
|
||||
size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);
|
||||
```
|
||||
|
||||
如果程序将文件读入缓冲区,然后将该缓冲区写入另一个文件,则数组(`ptr`)可以是固定大小的数组。例如,你可以使用长度为 200 个字符的字符数组作为缓冲区。
|
||||
|
||||
在该假设下,你需要更改 `cp` 程序中的循环,以将数据从文件读取到缓冲区中,然后将该缓冲区写入另一个文件中:
|
||||
|
||||
```
|
||||
while (!feof(infile)) {
|
||||
buffer_length = fread(buffer, sizeof(char), 200, infile);
|
||||
fwrite(buffer, sizeof(char), buffer_length, outfile);
|
||||
}
|
||||
```
|
||||
|
||||
这是更新后的 `cp` 程序的完整源代码,该程序现在使用缓冲区读取和写入数据:
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
|
||||
int
|
||||
main(int argc, char **argv)
|
||||
{
|
||||
FILE *infile;
|
||||
FILE *outfile;
|
||||
char buffer[200];
|
||||
size_t buffer_length;
|
||||
|
||||
/* parse the command line */
|
||||
|
||||
/* usage: cp infile outfile */
|
||||
|
||||
if (argc != 3) {
|
||||
fprintf(stderr, "Incorrect usage\n");
|
||||
fprintf(stderr, "Usage: cp infile outfile\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* open the input file */
|
||||
|
||||
infile = fopen(argv[1], "r");
|
||||
if (infile == NULL) {
|
||||
fprintf(stderr, "Cannot open file for reading: %s\n", argv[1]);
|
||||
return 2;
|
||||
}
|
||||
|
||||
/* open the output file */
|
||||
|
||||
outfile = fopen(argv[2], "w");
|
||||
if (outfile == NULL) {
|
||||
fprintf(stderr, "Cannot open file for writing: %s\n", argv[2]);
|
||||
fclose(infile);
|
||||
return 3;
|
||||
}
|
||||
|
||||
/* copy one file to the other */
|
||||
|
||||
/* use fread and fwrite */
|
||||
|
||||
while (!feof(infile)) {
|
||||
buffer_length = fread(buffer, sizeof(char), 200, infile);
|
||||
fwrite(buffer, sizeof(char), buffer_length, outfile);
|
||||
}
|
||||
|
||||
/* done */
|
||||
|
||||
fclose(infile);
|
||||
fclose(outfile);
|
||||
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
由于你想将此程序与其他程序进行比较,因此请将此源代码另存为 `cp2.c`。你可以使用 `gcc` 编译程序:
|
||||
|
||||
```
|
||||
$ gcc -Wall -o cp2 cp2.c
|
||||
```
|
||||
|
||||
和之前一样,`-o cp2` 选项告诉编译器将编译后的程序保存到 `cp2` 程序文件中。`-Wall` 选项告诉编译器打开所有警告。如果你没有看到任何警告,则表示一切正常。
|
||||
|
||||
### 是的,这真的更快了
|
||||
|
||||
使用缓冲区读取和写入数据是实现这个版本的 `cp` 程序的更好方法。由于它可以一次把文件的一大块数据读入内存,因此程序不需要那么频繁地读取数据。在复制小文件时,你可能注意不到这两种方案的区别,但如果你需要复制大文件,或者在较慢的介质(例如通过网络连接)上复制数据,就会发现明显的差异。
|
||||
|
||||
我使用 Linux `time` 命令进行了比较。此命令可以运行另一个程序,然后告诉你该程序花费了多长时间。对于我的测试,我希望了解所花费时间的差距,因此我复制了系统上的 628 MB CD-ROM 镜像文件。
|
||||
|
||||
我首先使用标准的 Linux `cp` 命令复制了这个镜像文件,以查看需要多长时间。先运行 Linux 的 `cp` 命令,也排除了 Linux 内置的文件缓存系统给我自己的程序带来虚假性能提升的可能性。使用 Linux `cp` 进行的测试,总计花费不到一秒钟的时间:
|
||||
|
||||
```
|
||||
$ time cp FD13LIVE.iso tmpfile
|
||||
|
||||
real 0m0.040s
|
||||
user 0m0.001s
|
||||
sys 0m0.003s
|
||||
```
|
||||
|
||||
运行我自己实现的 `cp` 命令版本,复制同一文件要花费更长的时间。每次读写一个字符则花了将近五秒钟来复制文件:
|
||||
|
||||
```
|
||||
$ time ./cp FD13LIVE.iso tmpfile
|
||||
|
||||
real 0m4.823s
|
||||
user 0m4.100s
|
||||
sys 0m0.571s
|
||||
```
|
||||
|
||||
从输入读取数据到缓冲区,然后将该缓冲区写入输出文件则要快得多。使用此方法复制文件花不到一秒钟:
|
||||
|
||||
```
|
||||
$ time ./cp2 FD13LIVE.iso tmpfile
|
||||
|
||||
real 0m0.944s
|
||||
user 0m0.224s
|
||||
sys 0m0.608s
|
||||
```
|
||||
|
||||
我演示的 `cp` 程序使用了 200 个字符大小的缓冲区。我确信如果一次将更多文件数据读入内存,该程序将运行得更快。但是,通过这种比较,即使只有 200 个字符的缓冲区,你也已经看到了性能上的巨大差异。
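
(LCTT 译注:如果想自己试验更大的缓冲区,可以把 `cp2.c` 中与缓冲区相关的几行改成类似下面的样子,例如改用 `stdio.h` 中定义的 `BUFSIZ` 常量(其具体数值因系统而异)。能提速多少取决于你的系统,上文的计时只是作者机器上的结果。)

```
char buffer[BUFSIZ];   /* BUFSIZ 由 stdio.h 定义,通常比 200 大 */
size_t buffer_length;

/* 循环本身不变,只是每次读取的块更大了 */
while (!feof(infile)) {
    buffer_length = fread(buffer, sizeof(char), BUFSIZ, infile);
    fwrite(buffer, sizeof(char), buffer_length, outfile);
}
```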
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/file-io-c
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wyxplus](https://github.com/wyxplus)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jim-hall
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/file_system.jpg?itok=pzCrX1Kc "4 manilla folders, yellow, green, purple, blue"
|
||||
[2]: http://www.opengroup.org/onlinepubs/009695399/functions/fgetc.html
|
||||
[3]: http://www.opengroup.org/onlinepubs/009695399/functions/fputc.html
|
||||
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
|
||||
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
|
||||
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fclose.html
|
||||
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/feof.html
|
||||
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/fread.html
|
||||
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/fwrite.html
|
@ -4,15 +4,15 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13256-1.html)
|
||||
|
||||
用 Ansible 自动化系统管理员的 5 个日常任务
|
||||
======
|
||||
|
||||
> 通过使用 Ansible 自动执行可重复的日常任务,提高工作效率并避免错误。
|
||||
|
||||
![Tips and gears turning][1]
|
||||
![](https://img.linux.net.cn/data/attachment/album/202103/31/233904oo7q68eo2njfmf8o.jpg)
|
||||
|
||||
如果你讨厌执行重复性的任务,那么我有一个提议给你,去学习 [Ansible][2]!
|
||||
|
@ -3,50 +3,47 @@
|
||||
[#]: author: (Joshua Pearce https://opensource.com/users/jmpearce)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13255-1.html)
|
||||
|
||||
在家就能用得起的高温 3D 打印机
|
||||
======
|
||||
有多实惠?低于 1000 美元。
|
||||
|
||||
> 有多实惠?低于 1000 美元。
|
||||
|
||||
![High-temperature 3D-printed mask][1]
|
||||
|
||||
3D 打印机从 20 世纪 80 年代就已经出现了,但直到 [RepRap][2] 项目开源后,才得到大众的关注。RepRap 是 self-replicating rapid prototyper (自我复制快速原型机) 的缩写,它是一种基本上可以自己打印的 3D 打印机。[2004 年][3]发布开源计划在后,导致 3D 打印机的成本从几十万美金降到了几百美金。
|
||||
3D 打印机从 20 世纪 80 年代就已经出现了,但是直到 [RepRap][2] 项目将其开源,它们才受到人们的关注。RepRap 意即<ruby>自我复制快速原型机<rt>self-replicating rapid prototyper</rt></ruby>,它是一种基本上可以打印自己的 3D 打印机。它的开源设计于 [2004 年][3] 发布之后,3D 打印机的成本从几十万美金降到了几百美金。
|
||||
|
||||
这些开源的桌面工具一直局限于 ABS 等低性能、低温的热塑性塑料(如乐高积木)。市场上有几款高温打印机,但其高昂的成本(几万到几十万美元)使大多数人无法获得。直到最近,它们才参与了很多竞争,因为它们被一项专利 (US6722872B1) 锁定,该专利于 2021 年 2 月 27 日[到期][4]。
|
||||
这些开源的桌面工具一直局限于 ABS 等低性能、低温热塑性塑料(如乐高积木)。市场上有几款高温打印机,但其高昂的成本(几万到几十万美元)使大多数人无法获得。直到最近,它们才参与了很多竞争,因为它们被一项专利 (US6722872B1) 锁定,该专利于 2021 年 2 月 27 日[到期][4]。
|
||||
|
||||
随着这个路障的消除,我们即将看到高温、低成本、熔融纤维 3D 打印机的爆发。
|
||||
|
||||
价格低到什么程度?低于 1000 美元如何。
|
||||
|
||||
在疫情最严重的时候,我的团队赶紧发布了一个[开源高温 3D 打印机][5]的设计,用于制造可热灭菌的个人防护装备 (PPE)。该项目的想法是让人们能够[用高温材料打印 PPE][6](如口罩),并将它放入家用烤箱进行消毒。我们称我们的设备为 Cerberus,它具有以下特点:
|
||||
在疫情最严重的时候,我的团队赶紧发布了一个 [开源高温 3D 打印机][5] 的设计,用于制造可高温消毒的个人防护装备(PPE)。该项目的想法是让人们能够 [用高温材料打印 PPE][6](如口罩),并将它放入家用烤箱进行消毒。我们称我们的设备为 Cerberus,它具有以下特点:
|
||||
|
||||
1. 可达到 200℃ 加热床
|
||||
1. 可达到 200℃ 的加热床
|
||||
2. 可达到 500℃ 的热源
|
||||
3. 带有 1kW 加热器核心的隔离式加热室。
|
||||
4. 加热室和热床使用市电(交流电)电压供电,以便快速启动。
|
||||
|
||||
|
||||
|
||||
你可以用现成的零件来构建这个项目,其中一些零件你可以打印,价格不到 1000 美元。它可以成功打印聚醚酮酮 (PEKK) 和聚醚酰亚胺(PEI,以商品名 Ultem 出售)。这两种材料都比现在低成本打印机能打印的任何材料强得多。
|
||||
|
||||
![PPE printer][7]
|
||||
|
||||
(J.M.Pearce, [GNU Free Documentation License][8])
|
||||
|
||||
这款高温 3D 打印机的设计有三个头,但我们发布的时候只有一个头。Cerberus 是以希腊神话中的三头冥界看门狗命名的。通常情况下,我们不会发布只有一个头的打印机,但疫情改变了我们的优先级。[开源社区团结起来][9],帮助解决早期的供应不足,许多桌面 3D 打印机都在产出有用的产品,以帮助保护人们免受 COVID 的侵害。
|
||||
|
||||
那另外两个头呢?
|
||||
|
||||
其他两个头是为了高温熔融颗粒制造(例如,这个开源的[3D打印机][10]的高温版本)和铺设在金属线中(像在[这个设计][11]中),以建立一个开源的热交换器。Cerberus 打印机的其他功能可能是一个自动喷嘴清洁器和在高温下打印连续纤维的方法。另外,你还可以在转台上安装任何你喜欢的东西来制造高端产品。
|
||||
其他两个头是为了高温熔融颗粒制造(例如,这个开源的 [3D打印机][10] 的高温版本)并铺设金属线(像在 [这个设计][11] 中),以建立一个开源的热交换器。Cerberus 打印机的其他功能可能是一个自动喷嘴清洁器和在高温下打印连续纤维的方法。另外,你还可以在转台上安装任何你喜欢的东西来制造高端产品。
|
||||
|
||||
把一个盒子放在 3D 打印机周围,而把电子元件留在外面的 [专利][12] 到期,为高温家用 3D 打印机铺平了道路,这将使这些设备以合理的成本从单纯的玩具变为工业工具。
|
||||
|
||||
把一个盒子放在 3D 打印机周围,而把电子元件留在外面的[专利][12]到期为高温家用 3D 打印机铺平了道路,这将使这些设备以合理的成本从单纯的玩具变为工业工具。
|
||||
已经有公司在 RepRap 传统的基础上,将这些低成本系统推向市场(例如,1250 美元的 [Creality3D CR-5 Pro][13] 3D 打印机可以达到 300℃)。Creality 销售最受欢迎的桌面 3D 打印机,并开源了部分设计。
|
||||
|
||||
已经有公司在 RepRap 传统的基础上,将这些低成本系统推向市场(例如,1250 美元的 [Creality3D CR-5 Pro][13] 3D 打印机可以达到 300℃)。Creality 销售最受欢迎的桌面 3D 打印机,并开放了部分设计。
|
||||
|
||||
然而,要打印超高端工程聚合物,这些打印机需要达到 350℃ 以上。开源计划已经可以帮助桌面 3D 打印机制造商开始与垄断公司竞争,这些公司由于躲在专利背后,已经阻碍了 3D 打印 20 年。预计低成本、高温桌面 3D 打印机的竞争将真正升温!
|
||||
然而,要打印超高端工程聚合物,这些打印机需要达到 350℃ 以上。开源计划已经可以帮助桌面 3D 打印机制造商开始与垄断公司竞争,这些公司由于躲在专利背后,已经阻碍了 3D 打印 20 年。期待低成本、高温桌面 3D 打印机的竞争将真正升温!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -55,7 +52,7 @@ via: https://opensource.com/article/21/3/desktop-3d-printer
|
||||
作者:[Joshua Pearce][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -3,16 +3,18 @@
|
||||
[#]: author: (Simon Arneaud https://theartofmachinery.com)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (DCOLIVERSUN)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13258-1.html)
|
||||
|
||||
Docker 镜像逆向工程
|
||||
一次 Docker 镜像的逆向工程
|
||||
======
|
||||
|
||||
本文介绍的内容开始于一个咨询陷阱:政府组织 A 让政府组织 B 开发一个网络应用程序。政府机构 B 把部分工作外包给某个人。后来,项目的托管和维护被外包给一家私人公司 C。C 公司发现,之前外包的人(过世很久了)已经构建了一个自定义的 Docker 镜像,并使镜像成为系统构建的依赖项,但这个人没有提交原始的 Dockerfile。C 公司有合同义务管理这个 Docker 镜像,可是他们他们没有源代码。C 公司偶尔叫我进去做各种工作,所以处理一些关于这个神秘 Docker 镜像的事情就成了我的工作。
|
||||
![](https://img.linux.net.cn/data/attachment/album/202104/01/215523oajrgjo77irb7nun.jpg)
|
||||
|
||||
幸运的是,这个 Docker 镜像格式比它应有的样子透明多了。虽然还需要做一些侦查工作,但只要解剖一个镜像文件,就能发现很多东西。例如,这里有一个 [Prettier 代码格式][1]镜像可供快速浏览。
|
||||
这要从一次咨询的失误说起:政府组织 A 让政府组织 B 开发一个 Web 应用程序。政府机构 B 把部分工作外包给某个人。后来,项目的托管和维护被外包给一家私人公司 C。C 公司发现,之前外包的人(已经离开很久了)构建了一个自定义的 Docker 镜像,并使其成为系统构建的依赖项,但这个人没有提交原始的 Dockerfile。C 公司有合同义务管理这个 Docker 镜像,可是他们没有源代码。C 公司偶尔叫我进去做各种工作,所以处理一些关于这个神秘 Docker 镜像的事情就成了我的工作。
|
||||
|
||||
幸运的是,Docker 镜像的格式比想象的透明多了。虽然还需要做一些侦查工作,但只要解剖一个镜像文件,就能发现很多东西。例如,这里有一个 [Prettier 代码格式化][1] 的镜像可供快速浏览。
|
||||
|
||||
首先,让 Docker <ruby>守护进程<rt>daemon</rt></ruby>拉取镜像,然后将镜像提取到文件中:
|
||||
|
||||
@ -42,7 +44,7 @@ manifest.json
|
||||
repositories
|
||||
```
|
||||
|
||||
如你所见,Docker 在命名时经常使用<ruby>哈希<rt>hash</rt></ruby>。我们看看 `manifest.json`。它在难以阅读的压缩 JSON 中,不过 [`jq` JSON Swiss Army knife][2]可以很好地打印 JSON:
|
||||
如你所见,Docker 在命名时经常使用<ruby>哈希<rt>hash</rt></ruby>。我们看看 `manifest.json`。它是以难以阅读的压缩 JSON 写的,不过 [JSON 瑞士军刀 jq][2] 可以很好地打印 JSON:
|
||||
|
||||
```
|
||||
$ jq . manifest.json
|
||||
@ -61,7 +63,7 @@ $ jq . manifest.json
|
||||
]
|
||||
```
|
||||
|
||||
请注意,这三层对应三个 hash 命名的目录。我们以后再看。现在,让我们看看 `Config` 键指向的 JSON 文件。文件名有点长,所有我把第一点放在这里:
|
||||
请注意,这三个<ruby>层<rt>Layer</rt></ruby>对应三个以哈希命名的目录。我们以后再看。现在,让我们看看 `Config` 键指向的 JSON 文件。它有点长,所以我只在这里转储第一部分:
|
||||
|
||||
```
|
||||
$ jq . 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json | head -n 20
|
||||
@ -87,9 +89,9 @@ $ jq . 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json | h
|
||||
"Image": "sha256:93e72874b338c1e0734025e1d8ebe259d4f16265dc2840f88c4c754e1c01ba0a",
|
||||
```
|
||||
|
||||
最重要的是 `history` 列表,它列出了镜像中的每一层。Docker 镜像由这些层堆叠而成。Dockerfile 中几乎每条命令都会变成一个层,描述该命令对镜像所做的更改。如果你执行 `RUN script.sh` 命令,创建 `really_big_file`,然后你用 `RUN rm really_big_file` 命令删除文件,Docker 镜像实际生成两层:一个包含 `really_big_file`,一个包含 `.wh.really_big_file` 记录来删除它。整个镜像文件大小不变。这就是为什么你会经常看到像 `RUN script.sh && rm really_big_file` 这样的 Dockerfile 命令链接在一起——它保障所有更改都合并到一层中。
|
||||
最重要的是 `history` 列表,它列出了镜像中的每一层。Docker 镜像由这些层堆叠而成。Dockerfile 中几乎每条命令都会变成一个层,描述该命令对镜像所做的更改。如果你执行 `RUN script.sh` 命令创建了 `really_big_file`,然后用 `RUN rm really_big_file` 命令删除文件,Docker 镜像实际生成两层:一个包含 `really_big_file`,一个包含 `.wh.really_big_file` 记录来删除它。整个镜像文件大小不变。这就是为什么你会经常看到像 `RUN script.sh && rm really_big_file` 这样的 Dockerfile 命令链接在一起——它保障所有更改都合并到一层中。
|
||||
|
||||
以下是 Docker 镜像中记录的所有层。注意,大多数层不改变文件系统镜像,并且 `empty_layer` 标记为 true。以下只有三个层是非空的,与我们之前描述的相符。
|
||||
以下是该 Docker 镜像中记录的所有层。注意,大多数层不改变文件系统镜像,并且 `empty_layer` 标记为 `true`。以下只有三个层是非空的,与我们之前描述的相符。
|
||||
|
||||
```
|
||||
$ jq .history 88f38be28f05f38dba94ce0c1328ebe2b963b65848ab96594f8172a9c3b0f25b.json
|
||||
@ -162,7 +164,7 @@ m=${NODEJS_VERSION} && npm install -g prettier@${PRETTIER_VERSION} && np
|
||||
]
|
||||
```
|
||||
|
||||
太棒了!所有的命令都在 `created_by` 字段中,我们几乎可以用这些命令重建 Dockerfile。但不是完全可以。最上面的 `ADD` 命令实际上没有给我们需要添加的文件。`COPY` 命令也没有全部信息。因为 `FROM` 命令会扩展到继承基础 Docker 镜像的所有层,所以可能会跟丢该命令。
|
||||
太棒了!所有的命令都在 `created_by` 字段中,我们几乎可以用这些命令重建 Dockerfile。但不是完全可以。最上面的 `ADD` 命令实际上没有给我们需要添加的文件。`COPY` 命令也没有全部信息。我们还失去了 `FROM` 语句,因为它们扩展成了从基础 Docker 镜像继承的所有层。
|
||||
|
||||
我们可以通过查看<ruby>时间戳<rt>timestamp</rt></ruby>,按 Dockerfile 对层进行分组。大多数层的时间戳相差不到一分钟,代表每一层构建所需的时间。但是前两层是 `2020-04-24`,其余的是 `2020-04-29`。这是因为前两层来自一个基础 Docker 镜像。理想情况下,我们可以找出一个 `FROM` 命令来获得这个镜像,这样我们就有了一个可维护的 Dockerfile。
|
||||
|
||||
@ -215,7 +217,7 @@ $ cat etc/alpine-release
|
||||
3.11.6
|
||||
```
|
||||
|
||||
如果你拉取、解压 `alpine:3.11.6`,你会发现里面有一个非空层,`layer.tar`与 Prettier 镜像基础层中的 `layer.tar` 是一样的。
|
||||
如果你拉取、解压 `alpine:3.11.6`,你会发现里面有一个非空层,`layer.tar` 与 Prettier 镜像基础层中的 `layer.tar` 是一样的。
|
||||
|
||||
出于兴趣,另外两个非空层是什么?第二层是包含 Prettier 安装包的主层。它有 528 个条目,包含 Prettier、一堆依赖项和证书更新:
|
||||
|
||||
@ -264,7 +266,7 @@ $ tar tf 6c37da2ee7de579a0bf5495df32ba3e7807b0a42e2a02779206d165f55f1ba70/layer.
|
||||
work/
|
||||
```
|
||||
|
||||
[原始 Dockerfile 在 Prettier 的 git repo 中][4]
|
||||
[原始 Dockerfile 在 Prettier 的 git 仓库中][4]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -273,7 +275,7 @@ via: https://theartofmachinery.com/2021/03/18/reverse_engineering_a_docker_image
|
||||
作者:[Simon Arneaud][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[DCOLIVERSUN](https://github.com/DCOLIVERSUN)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
192
published/20210324 Read and write files with Bash.md
Normal file
@ -0,0 +1,192 @@
|
||||
[#]: subject: (Read and write files with Bash)
|
||||
[#]: via: (https://opensource.com/article/21/3/input-output-bash)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-13259-1.html)
|
||||
|
||||
用 Bash 读写文件
|
||||
======
|
||||
|
||||
> 学习 Bash 读取和写入数据的不同方式,以及何时使用每种方法。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202104/01/223653bc334ac33e5e4pwe.jpg)
|
||||
|
||||
当你使用 Bash 编写脚本时,有时你需要从一个文件中读取数据或向一个文件写入数据。有时文件可能包含配置选项,而另一些时候这个文件是你的用户用你的应用创建的数据。每种语言处理这个任务的方式都有些不同,本文将演示如何使用 Bash 和其他 [POSIX][2] shell 处理数据文件。
|
||||
|
||||
### 安装 Bash
|
||||
|
||||
如果你在使用 Linux,你可能已经有了 Bash。如果没有,你可以在你的软件仓库里找到它。
|
||||
|
||||
在 macOS 上,你可以使用默认终端,Bash 或 [Zsh][3],这取决于你运行的 macOS 版本。
|
||||
|
||||
在 Windows 上,有几种方法可以体验 Bash,包括微软官方支持的 [Windows Subsystem for Linux][4](WSL)。
|
||||
|
||||
安装 Bash 后,打开你最喜欢的文本编辑器并准备开始。
|
||||
|
||||
### 使用 Bash 读取文件
|
||||
|
||||
除了是 [shell][5] 之外,Bash 还是一种脚本语言。有几种方法可以在 Bash 中读取数据:你可以创建一种数据流并解析其输出,或者把数据加载到内存中。这两种方法都是获取信息的有效方法,但各自有相当具体的用例。
|
||||
|
||||
#### 在 Bash 中援引文件
|
||||
|
||||
当你在 Bash 中 “<ruby>援引<rt>source</rt></ruby>” 一个文件时,你会让 Bash 读取文件的内容,并期望它包含 Bash 能放入其既有数据模型的有效数据。你不会想随便找个文件来援引,但你可以使用这种方法来读取配置文件和函数。
|
||||
|
||||
(LCTT 译注:在 Bash 中,可以通过 `source` 或 `.` 命令来将一个文件读入,这个行为称为 “sourcing”,英文原意为“一次性(试)采购”、“寻找供应商”、“获得”等,考虑到 Bash 的语境和发音,我建议可以翻译为“援引”,或有不当,供大家讨论参考 —— wxy)
|
||||
|
||||
例如,创建一个名为 `example.sh` 的文件,并输入以下内容:
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
greet opensource.com
|
||||
|
||||
echo "The meaning of life is $var"
|
||||
```
|
||||
|
||||
运行这段代码,可以看到它失败了:
|
||||
|
||||
```
|
||||
$ bash ./example.sh
|
||||
./example.sh: line 3: greet: command not found
|
||||
The meaning of life is
|
||||
```
|
||||
|
||||
Bash 没有一个叫 `greet` 的命令,所以无法执行那一行;它也没有任何名为 `var` 的变量的记录,所以不知道生命的意义是什么。为了解决这个问题,建立一个名为 `include.sh` 的文件:
|
||||
|
||||
```
|
||||
greet() {
|
||||
echo "Hello ${1}"
|
||||
}
|
||||
|
||||
var=42
|
||||
```
|
||||
|
||||
修改你的 `example.sh` 脚本,加入一个 `source` 命令:
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
source include.sh
|
||||
|
||||
greet opensource.com
|
||||
|
||||
echo "The meaning of life is $var"
|
||||
```
|
||||
|
||||
运行脚本,可以看到它正常工作了:
|
||||
|
||||
```
|
||||
$ bash ./example.sh
|
||||
Hello opensource.com
|
||||
The meaning of life is 42
|
||||
```
|
||||
|
||||
`greet` 命令被带入你的 shell 环境,因为它被定义在 `include.sh` 文件中,它甚至可以识别参数(本例中的 `opensource.com`)。变量 `var` 也被设置和导入。
|
||||
|
||||
#### 在 Bash 中解析文件
|
||||
|
||||
另一种让数据“进入” Bash 的方法是将其解析为数据流。有很多方法可以做到这一点。你可以使用 `grep` 或 `cat`,或者任何能获取数据并通过管道将其输出到标准输出的命令。另外,你可以使用 Bash 内置的功能:重定向。重定向本身并不是很有用,所以在这个例子中,我还使用了内置的 `echo` 命令来打印重定向的结果:
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
echo $( < include.sh )
|
||||
```
|
||||
|
||||
将其保存为 `stream.sh` 并运行它来查看结果:
|
||||
|
||||
```
|
||||
$ bash ./stream.sh
|
||||
greet() { echo "Hello ${1}" } var=42
|
||||
$
|
||||
```
|
||||
|
||||
对于 `include.sh` 文件中的每一行,Bash 都会将该行打印(或者说 `echo`)到你的终端。先用管道把数据传给一个合适的解析器,是用 Bash 读取数据的常用方法。例如,假设 `include.sh` 是一个配置文件,它的键值对用一个等号(`=`)分开,你就可以用 `awk` 甚至 `cut` 来获取值:
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
myVar=`grep var include.sh | cut -d'=' -f2`
|
||||
|
||||
echo $myVar
|
||||
```
|
||||
|
||||
试着运行这个脚本:
|
||||
|
||||
```
|
||||
$ bash ./stream.sh
|
||||
42
|
||||
```
|
||||
|
||||
### 用 Bash 将数据写入文件
|
||||
|
||||
无论你是要存储用户用你的应用创建的数据,还是仅仅是关于用户在应用中做了什么的元数据(例如,游戏存档或最近播放的歌曲),都有很多很好的理由把数据保存下来供以后使用。在 Bash 中,你可以使用常见的 shell 重定向将数据保存到文件中。
|
||||
|
||||
例如,要创建一个包含输出的新文件,使用一个重定向符号:
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
TZ=UTC
|
||||
date > date.txt
|
||||
```
|
||||
|
||||
运行脚本几次:
|
||||
|
||||
```
|
||||
$ bash ./date.sh
|
||||
$ cat date.txt
|
||||
Tue Feb 23 22:25:06 UTC 2021
|
||||
$ bash ./date.sh
|
||||
$ cat date.txt
|
||||
Tue Feb 23 22:25:12 UTC 2021
|
||||
```
|
||||
|
||||
要追加数据,使用两个重定向符号:
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
TZ=UTC
|
||||
date >> date.txt
|
||||
```
|
||||
|
||||
运行脚本几次:
|
||||
|
||||
```
|
||||
$ bash ./date.sh
|
||||
$ bash ./date.sh
|
||||
$ bash ./date.sh
|
||||
$ cat date.txt
|
||||
Tue Feb 23 22:25:12 UTC 2021
|
||||
Tue Feb 23 22:25:17 UTC 2021
|
||||
Tue Feb 23 22:25:19 UTC 2021
|
||||
Tue Feb 23 22:25:22 UTC 2021
|
||||
```
|
||||
|
||||
### Bash 轻松编程
|
||||
|
||||
Bash 的优势在于简单易学,因为只需要一些基本的概念,你就可以构建复杂的程序。完整的文档请参考 GNU.org 上的 [优秀的 Bash 文档][6]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/input-output-bash
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
|
||||
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
[3]: https://opensource.com/article/19/9/getting-started-zsh
|
||||
[4]: https://opensource.com/article/19/7/ways-get-started-linux#wsl
|
||||
[5]: https://www.redhat.com/sysadmin/terminals-shells-consoles
|
||||
[6]: http://gnu.org/software/bash
|
@ -1,107 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to set up a homelab from hardware to firewall)
|
||||
[#]: via: (https://opensource.com/article/19/3/home-lab)
|
||||
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
|
||||
|
||||
How to set up a homelab from hardware to firewall
|
||||
======
|
||||
|
||||
Take a look at hardware and software options for building your own homelab.
|
||||
|
||||
![][1]
|
||||
|
||||
Do you want to create a homelab? Maybe you want to experiment with different technologies, create development environments, or have your own private cloud. There are many reasons to have a homelab, and this guide aims to make it easier to get started.
|
||||
|
||||
There are three categories to consider when planning a home lab: hardware, software, and maintenance. We'll look at the first two categories here and save maintaining your computer lab for a future article.
|
||||
|
||||
### Hardware
|
||||
|
||||
When thinking about your hardware needs, first consider how you plan to use your lab as well as your budget, noise, space, and power usage.
|
||||
|
||||
If buying new hardware is too expensive, search local universities, ads, and websites like eBay or Craigslist for recycled servers. They are usually inexpensive, and server-grade hardware is built to last many years. You'll need three types of hardware: a virtualization server, storage, and a router/firewall.
|
||||
|
||||
#### Virtualization servers
|
||||
|
||||
A virtualization server allows you to run several virtual machines that share the physical box's resources while maximizing and isolating resources. If you break one virtual machine, you won't have to rebuild the entire server, just the virtual one. If you want to do a test or try something without the risk of breaking your entire system, just spin up a new virtual machine and you're ready to go.
|
||||
|
||||
The two most important factors to consider in a virtualization server are the number and speed of its CPU cores and its memory. If there are not enough resources to share among all the virtual machines, they'll be overallocated and try to steal each other's CPU cycles and memory.
|
||||
|
||||
So, consider a CPU platform with multiple cores. You want to ensure the CPU supports virtualization instructions (VT-x for Intel and AMD-V for AMD). Examples of good consumer-grade processors that can handle virtualization are Intel i5 or i7 and AMD Ryzen. If you are considering server-grade hardware, the Xeon class for Intel and EPYC for AMD are good options. Memory can be expensive, especially the latest DDR4 SDRAM. When estimating memory requirements, factor at least 2GB for the host operating system's memory consumption.
|
||||
|
||||
If your electricity bill or noise is a concern, solutions like Intel's NUC devices provide a small form factor, low power usage, and reduced noise, but at the expense of expandability.
|
||||
|
||||
#### Network-attached storage (NAS)
|
||||
|
||||
If you want a machine loaded with hard drives to store all your personal data, movies, pictures, etc. and provide storage for the virtualization server, network-attached storage (NAS) is what you want.
|
||||
|
||||
In most cases, you won't need a powerful CPU; in fact, many commercial NAS solutions use low-powered ARM CPUs. A motherboard that supports multiple SATA disks is a must. If your motherboard doesn't have enough ports, use a host bus adapter (HBA) SAS controller to add extras.
|
||||
|
||||
Network performance is critical for a NAS, so select a gigabit network interface (or better).
|
||||
|
||||
Memory requirements will differ based on your filesystem. ZFS is one of the most popular filesystems for NAS, and you'll need more memory to use features such as caching or deduplication. Error-correcting code (ECC) memory is your best bet to protect data from corruption (but make sure your motherboard supports it before you buy). Last, but not least, don't forget an uninterruptible power supply (UPS), because losing power can cause data corruption.
|
||||
|
||||
#### Firewall and router
|
||||
|
||||
Have you ever realized that a cheap router/firewall is usually the main thing protecting your home network from the exterior world? These routers rarely receive timely security updates, if they receive any at all. Scared now? Well, [you should be][2]!
|
||||
|
||||
You usually don't need a powerful CPU or a great deal of memory to build your own router/firewall, unless you are handling a huge throughput or want to do CPU-intensive tasks, like a VPN server or traffic filtering. In such cases, you'll need a multicore CPU with AES-NI support.
|
||||
|
||||
You may want to get at least two 1-gigabit or better Ethernet network interface cards (NICs), also, not needed, but recommended, a managed switch to connect your DIY-router to create VLANs to further isolate and secure your network.
|
||||
|
||||
![Home computer lab PfSense][4]
|
||||
|
||||
### Software
|
||||
|
||||
After you've selected your virtualization server, NAS, and firewall/router, the next step is exploring the different operating systems and software to maximize their benefits. While you could use a regular Linux distribution like CentOS, Debian, or Ubuntu, they usually take more time to configure and administer than the following options.
|
||||
|
||||
#### Virtualization software
|
||||
|
||||
**[KVM][5]** (Kernel-based Virtual Machine) lets you turn Linux into a hypervisor so you can run multiple virtual machines in the same box. The best thing is that KVM is part of Linux, and it is the go-to option for many enterprises and home users. If you are comfortable, you can install **[libvirt][6]** and **[virt-manager][7]** to manage your virtualization platform.
|
||||
|
||||
**[Proxmox VE][8]** is a robust, enterprise-grade solution and a full open source virtualization and container platform. It is based on Debian and uses KVM as its hypervisor and LXC for containers. Proxmox offers a powerful web interface, an API, and can scale out to many clustered nodes, which is helpful because you'll never know when you'll run out of capacity in your lab.
|
||||
|
||||
**[oVirt][9] (RHV)** is another enterprise-grade solution that uses KVM as the hypervisor. Just because it's enterprise doesn't mean you can't use it at home. oVirt offers a powerful web interface and an API and can handle hundreds of nodes (if you are running that many servers, I don't want to be your neighbor!). The potential problem with oVirt for a home lab is that it requires a minimum set of nodes: You'll need one external storage, such as a NAS, and at least two additional virtualization nodes (you can run it just on one, but you'll run into problems in maintenance of your environment).
|
||||
|
||||
#### NAS software
|
||||
|
||||
**[FreeNAS][10]** is the most popular open source NAS distribution, and it's based on the rock-solid FreeBSD operating system. One of its most robust features is its use of the ZFS filesystem, which provides data-integrity checking, snapshots, replication, and multiple levels of redundancy (mirroring, striped mirrors, and striping). On top of that, everything is managed from the powerful and easy-to-use web interface. Before installing FreeNAS, check its hardware support, as it is not as wide as Linux-based distributions.
|
||||
|
||||
Another popular alternative is the Linux-based **[OpenMediaVault][11]**. One of its main features is its modularity, with plugins that extend and add features. Among its included features are a web-based administration interface; protocols like CIFS, SFTP, NFS, iSCSI; and volume management, including software RAID, quotas, access control lists (ACLs), and share management. Because it is Linux-based, it has extensive hardware support.
|
||||
|
||||
#### Firewall/router software
|
||||
|
||||
**[pfSense][12]** is an open source, enterprise-grade FreeBSD-based router and firewall distribution. It can be installed directly on a server or even inside a virtual machine (to manage your virtual or physical networks and save space). It has many features and can be expanded using packages. It is managed entirely using the web interface, although it also has command-line access. It has all the features you would expect from a router and firewall, like DHCP and DNS, as well as more advanced features, such as intrusion detection (IDS) and intrusion prevention (IPS) systems. You can create multiple networks listening on different interfaces or using VLANs, and you can create a secure VPN server with a few clicks. pfSense uses pf, a stateful packet filter that was developed for the OpenBSD operating system using a syntax similar to IPFilter. Many companies and organizations use pfSense.
|
||||
|
||||
* * *
|
||||
|
||||
With all this information in mind, it's time for you to get your hands dirty and start building your lab. In a future article, I will get into the third category of running a home lab: using automation to deploy and maintain it.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/home-lab
|
||||
|
||||
作者:[Michael Zamot (Red Hat)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb
|
||||
[2]: https://opensource.com/article/18/5/how-insecure-your-router
|
||||
[3]: /file/427426
|
||||
[4]: https://opensource.com/sites/default/files/uploads/pfsense2.png (Home computer lab PfSense)
|
||||
[5]: https://www.linux-kvm.org/page/Main_Page
|
||||
[6]: https://libvirt.org/
|
||||
[7]: https://virt-manager.org/
|
||||
[8]: https://www.proxmox.com/en/proxmox-ve
|
||||
[9]: https://ovirt.org/
|
||||
[10]: https://freenas.org/
|
||||
[11]: https://www.openmediavault.org/
|
||||
[12]: https://www.pfsense.org/
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (wyxplus)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,285 +0,0 @@
|
||||
[#]: subject: (Learn how file input and output works in C)
|
||||
[#]: via: (https://opensource.com/article/21/3/file-io-c)
|
||||
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wyxplus)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Learn how file input and output works in C
|
||||
======
|
||||
Understanding I/O can help you do things faster.
|
||||
![4 manilla folders, yellow, green, purple, blue][1]
|
||||
|
||||
If you want to learn input and output in C, start by looking at the `stdio.h` include file. As you might guess from the name, that file defines all the standard ("std") input and output ("io") functions.
|
||||
|
||||
The first `stdio.h` function that most people learn is the `printf` function to print formatted output. Or the `puts` function to print a simple string. Those are great functions to print information to the user, but if you want to do more than that, you'll need to explore other functions.
|
||||
|
||||
You can learn about some of these functions and methods by writing a replica of a common Linux command. The `cp` command will copy one file to another. If you look at the `cp` man page, you'll see that `cp` supports a broad set of command-line parameters and options. But in the simplest case, `cp` supports copying one file to another:
|
||||
|
||||
|
||||
```
|
||||
`cp infile outfile`
|
||||
```
|
||||
|
||||
You can write your own version of this `cp` command in C by using only a few basic functions to _read_ and _write_ files.
|
||||
|
||||
### Reading and writing one character at a time
|
||||
|
||||
You can easily do input and output using the `fgetc` and `fputc` functions. These read and write data one character at a time. The usage is defined in `stdio.h` and is quite straightforward: `fgetc` reads (gets) a single character from a file, and `fputc` puts a single character into a file.
|
||||
|
||||
|
||||
```
|
||||
int [fgetc][2](FILE *stream);
|
||||
int [fputc][3](int c, FILE *stream);
|
||||
```
|
||||
|
||||
Writing the `cp` command requires accessing files. In C, you open a file using the `fopen` function, which takes two arguments: the _name_ of the file and the _mode_ you want to use. The mode is usually `r` to read from a file or `w` to write to a file. The mode supports other options too, but for this tutorial, just focus on reading and writing.
|
||||
|
||||
Copying one file to another then becomes a matter of opening the source and destination files, then _reading one character at a time_ from the first file, then _writing that character_ to the second file. The `fgetc` function returns either the single character read from the input file or the _end of file_ (`EOF`) marker when the file is done. Once you've read `EOF`, you've finished copying and you can close both files. That code looks like this:
|
||||
|
||||
|
||||
```
|
||||
do {
|
||||
ch = [fgetc][2](infile);
|
||||
if (ch != EOF) {
|
||||
[fputc][3](ch, outfile);
|
||||
}
|
||||
} while (ch != EOF);
|
||||
```
|
||||
|
||||
You can write your own `cp` program with this loop to read and write one character at a time by using the `fgetc` and `fputc` functions. The `cp.c` source code looks like this:
|
||||
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
|
||||
int
|
||||
main(int argc, char **argv)
|
||||
{
|
||||
FILE *infile;
|
||||
FILE *outfile;
|
||||
int ch;
|
||||
|
||||
/* parse the command line */
|
||||
|
||||
/* usage: cp infile outfile */
|
||||
|
||||
if (argc != 3) {
|
||||
[fprintf][4](stderr, "Incorrect usage\n");
|
||||
[fprintf][4](stderr, "Usage: cp infile outfile\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* open the input file */
|
||||
|
||||
infile = [fopen][5](argv[1], "r");
|
||||
if (infile == NULL) {
|
||||
[fprintf][4](stderr, "Cannot open file for reading: %s\n", argv[1]);
|
||||
return 2;
|
||||
}
|
||||
|
||||
/* open the output file */
|
||||
|
||||
outfile = [fopen][5](argv[2], "w");
|
||||
if (outfile == NULL) {
|
||||
[fprintf][4](stderr, "Cannot open file for writing: %s\n", argv[2]);
|
||||
[fclose][6](infile);
|
||||
return 3;
|
||||
}
|
||||
|
||||
/* copy one file to the other */
|
||||
|
||||
/* use fgetc and fputc */
|
||||
|
||||
do {
|
||||
ch = [fgetc][2](infile);
|
||||
if (ch != EOF) {
|
||||
[fputc][3](ch, outfile);
|
||||
}
|
||||
} while (ch != EOF);
|
||||
|
||||
/* done */
|
||||
|
||||
[fclose][6](infile);
|
||||
[fclose][6](outfile);
|
||||
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
And you can compile that `cp.c` file into a full executable using the GNU Compiler Collection (GCC):
|
||||
|
||||
|
||||
```
|
||||
`$ gcc -Wall -o cp cp.c`
|
||||
```
|
||||
|
||||
The `-o cp` option tells the compiler to save the compiled program into the `cp` program file. The `-Wall` option tells the compiler to turn on all warnings. If you don't see any warnings, that means everything worked correctly.
|
||||
|
||||
### Reading and writing blocks of data
|
||||
|
||||
Programming your own `cp` command by reading and writing data one character at a time does the job, but it's not very fast. You might not notice when copying "everyday" files like documents and text files, but you'll really notice the difference when copying large files or when copying files over a network. Working on one character at a time requires significant overhead.
|
||||
|
||||
A better way to write this `cp` command is by reading a chunk of the input into memory (called a _buffer_), then writing that collection of data to the second file. This is much faster because the program can read more of the data at one time, which requires fewer "reads" from the file.
|
||||
|
||||
You can read a file into a variable by using the `fread` function. This function takes several arguments: the array or memory buffer to read data into (`ptr`), the size of the smallest thing you want to read (`size`), how many of those things you want to read (`nmemb`), and the file to read from (`stream`):
|
||||
|
||||
|
||||
```
|
||||
`size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);`
|
||||
```
|
||||
|
||||
The different options provide quite a bit of flexibility for more advanced file input and output, such as reading and writing files with a certain data structure. But in the simple case of _reading data from one file_ and _writing data to another file_, you can use a buffer that is an array of characters.
|
||||
|
||||
And you can write the buffer to another file using the `fwrite` function. This uses a similar set of options to the `fread` function: the array or memory buffer to read data from, the size of the smallest thing you need to write, how many of those things you need to write, and the file to write to.
|
||||
|
||||
|
||||
```
|
||||
`size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);`
|
||||
```
|
||||
|
||||
In the case where the program reads a file into a buffer, then writes that buffer to another file, the array (`ptr`) can be an array of a fixed size. For example, you can use a `char` array called `buffer` that is 200 characters long.
|
||||
|
||||
With that assumption, you need to change the loop in your `cp` program to _read data from a file into a buffer_ then _write that buffer to another file_:
|
||||
|
||||
|
||||
```
|
||||
while (![feof][7](infile)) {
|
||||
buffer_length = [fread][8](buffer, sizeof(char), 200, infile);
|
||||
[fwrite][9](buffer, sizeof(char), buffer_length, outfile);
|
||||
}
|
||||
```
|
||||
|
||||
Here's the full source code to your updated `cp` program, which now uses a buffer to read and write data:
|
||||
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
|
||||
int
|
||||
main(int argc, char **argv)
|
||||
{
|
||||
FILE *infile;
|
||||
FILE *outfile;
|
||||
char buffer[200];
|
||||
size_t buffer_length;
|
||||
|
||||
/* parse the command line */
|
||||
|
||||
/* usage: cp infile outfile */
|
||||
|
||||
if (argc != 3) {
|
||||
[fprintf][4](stderr, "Incorrect usage\n");
|
||||
[fprintf][4](stderr, "Usage: cp infile outfile\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* open the input file */
|
||||
|
||||
infile = [fopen][5](argv[1], "r");
|
||||
if (infile == NULL) {
|
||||
[fprintf][4](stderr, "Cannot open file for reading: %s\n", argv[1]);
|
||||
return 2;
|
||||
}
|
||||
|
||||
/* open the output file */
|
||||
|
||||
outfile = [fopen][5](argv[2], "w");
|
||||
if (outfile == NULL) {
|
||||
[fprintf][4](stderr, "Cannot open file for writing: %s\n", argv[2]);
|
||||
[fclose][6](infile);
|
||||
return 3;
|
||||
}
|
||||
|
||||
/* copy one file to the other */
|
||||
|
||||
/* use fread and fwrite */
|
||||
|
||||
while (![feof][7](infile)) {
|
||||
buffer_length = [fread][8](buffer, sizeof(char), 200, infile);
|
||||
[fwrite][9](buffer, sizeof(char), buffer_length, outfile);
|
||||
}
|
||||
|
||||
/* done */
|
||||
|
||||
[fclose][6](infile);
|
||||
[fclose][6](outfile);
|
||||
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
Since you want to compare this program to the other program, save this source code as `cp2.c`. You can compile that updated program using GCC:
|
||||
|
||||
|
||||
```
|
||||
`$ gcc -Wall -o cp2 cp2.c`
|
||||
```
|
||||
|
||||
As before, the `-o cp2` option tells the compiler to save the compiled program into the `cp2` program file. The `-Wall` option tells the compiler to turn on all warnings. If you don't see any warnings, that means everything worked correctly.
|
||||
|
||||
### Yes, it really is faster
|
||||
|
||||
Reading and writing data using buffers is the better way to write this version of the `cp` program. Because it reads chunks of a file into memory at once, the program doesn't need to read data as often. You might not notice a difference in using either method on smaller files, but you'll really see the difference if you need to copy something that's much larger or when copying data on slower media like over a network connection.
|
||||
|
||||
I ran a runtime comparison using the Linux `time` command. This command runs another program, then tells you how long that program took to complete. For my test, I wanted to see the difference in time, so I copied a 628MB CD-ROM image file I had on my system.
|
||||
|
||||
I first copied the image file using the standard Linux `cp` command to see how long that takes. By running the Linux `cp` command first, I also eliminated the possibility that Linux's built-in file-cache system wouldn't give my program a false performance boost. The test with Linux `cp` took much less than one second to run:
|
||||
|
||||
|
||||
```
|
||||
$ time cp FD13LIVE.iso tmpfile
|
||||
|
||||
real 0m0.040s
|
||||
user 0m0.001s
|
||||
sys 0m0.003s
|
||||
```
|
||||
|
||||
Copying the same file using my own version of the `cp` command took significantly longer. Reading and writing one character at a time took almost five seconds to copy the file:
|
||||
|
||||
|
||||
```
|
||||
$ time ./cp FD13LIVE.iso tmpfile
|
||||
|
||||
real 0m4.823s
|
||||
user 0m4.100s
|
||||
sys 0m0.571s
|
||||
```
|
||||
|
||||
Reading data from an input into a buffer and then writing that buffer to an output file is much faster. Copying the file using this method took less than a second:
|
||||
|
||||
|
||||
```
|
||||
$ time ./cp2 FD13LIVE.iso tmpfile
|
||||
|
||||
real 0m0.944s
|
||||
user 0m0.224s
|
||||
sys 0m0.608s
|
||||
```
|
||||
|
||||
My demonstration `cp` program used a buffer that was 200 characters. I'm sure the program would run much faster if I read more of the file into memory at once. But for this comparison, you can already see the huge difference in performance, even with a small, 200 character buffer.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/file-io-c
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jim-hall
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/file_system.jpg?itok=pzCrX1Kc (4 manilla folders, yellow, green, purple, blue)
|
||||
[2]: http://www.opengroup.org/onlinepubs/009695399/functions/fgetc.html
|
||||
[3]: http://www.opengroup.org/onlinepubs/009695399/functions/fputc.html
|
||||
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
|
||||
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/fopen.html
|
||||
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/fclose.html
|
||||
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/feof.html
|
||||
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/fread.html
|
||||
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/fwrite.html
|
@ -1,202 +0,0 @@
|
||||
[#]: subject: (Read and write files with Bash)
|
||||
[#]: via: (https://opensource.com/article/21/3/input-output-bash)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Read and write files with Bash
|
||||
======
|
||||
Learn the different ways Bash reads and writes data and when to use each
|
||||
method.
|
||||
![bash logo on green background][1]
|
||||
|
||||
When you're scripting with Bash, sometimes you need to read data from or write data to a file. Sometimes a file may contain configuration options, and other times the file is the data your user is creating with your application. Every language handles this task a little differently, and this article demonstrates how to handle data files with Bash and other [POSIX][2] shells.
|
||||
|
||||
### Install Bash
|
||||
|
||||
If you're on Linux, you probably already have Bash. If not, you can find it in your software repository.
|
||||
|
||||
On macOS, you can use the default terminal, either Bash or [Zsh][3], depending on the macOS version you're running.
|
||||
|
||||
On Windows, there are several ways to experience Bash, including Microsoft's officially supported [Windows Subsystem for Linux][4] (WSL).
|
||||
|
||||
Once you have Bash installed, open your favorite text editor and get ready to code.
|
||||
|
||||
### Reading a file with Bash
|
||||
|
||||
In addition to being [a shell][5], Bash is a scripting language. There are several ways to read data from Bash: You can create a sort of data stream and parse the output, or you can load data into memory. Both are valid methods of ingesting information, but each has pretty specific use cases.
|
||||
|
||||
#### Source a file in Bash
|
||||
|
||||
When you "source" a file in Bash, you cause Bash to read the contents of a file with the expectation that it contains valid data that Bash can fit into its established data model. You won't source data from any old file, but you can use this method to read configuration files and functions.
|
||||
|
||||
For instance, create a file called `example.sh` and enter this into it:
|
||||
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
greet opensource.com
|
||||
|
||||
echo "The meaning of life is $var"
|
||||
```
|
||||
|
||||
Run the code to see it fail:
|
||||
|
||||
|
||||
```
|
||||
$ bash ./example.sh
|
||||
./example.sh: line 3: greet: command not found
|
||||
The meaning of life is
|
||||
```
|
||||
|
||||
Bash doesn't have a command called `greet`, so it could not execute that line, and it has no record of a variable called `var`, so there is no known meaning of life. To fix this problem, create a file called `include.sh`:
|
||||
|
||||
|
||||
```
|
||||
greet() {
|
||||
echo "Hello ${1}"
|
||||
}
|
||||
|
||||
var=42
|
||||
```
|
||||
|
||||
Revise your `example.sh` script to include a `source` command:
|
||||
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
source include.sh
|
||||
|
||||
greet opensource.com
|
||||
|
||||
echo "The meaning of life is $var"
|
||||
```
|
||||
|
||||
Run the script to see it work:
|
||||
|
||||
|
||||
```
|
||||
$ bash ./example.sh
|
||||
Hello opensource.com
|
||||
The meaning of life is 42
|
||||
```
|
||||
|
||||
The `greet` command is brought into your shell environment because it is defined in the `include.sh` file, and it even recognizes the argument (`opensource.com` in this example). The variable `var` is set and imported, too.
|
||||
|
||||
#### Parse a file in Bash
|
||||
|
||||
The other way to get data "into" Bash is to parse it as a data stream. There are many ways to do this. You can use `grep` or `cat` or any command that takes data and pipes it to stdout. Alternately, you can use what is built into Bash: the redirect. Redirection on its own isn't very useful, so in this example, I also use the built-in `echo` command to print the results of the redirect:
|
||||
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
echo $( < include.sh )
|
||||
```
|
||||
|
||||
Save this as `stream.sh` and run it to see the results:
|
||||
|
||||
|
||||
```
|
||||
$ bash ./stream.sh
|
||||
greet() { echo "Hello ${1}" } var=42
|
||||
$
|
||||
```
|
||||
|
||||
For each line in the `include.sh` file, Bash prints (or echoes) the line to your terminal. Piping it first to an appropriate parser is a common way to read data with Bash. For instance, assume for a moment that `include.sh` is a configuration file with key and value pairs separated by an equal (`=`) sign. You could obtain values with `awk` or even `cut`:
|
||||
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
myVar=`grep var include.sh | cut -d'=' -f2`
|
||||
|
||||
echo $myVar
|
||||
```
|
||||
|
||||
Try running the script:
|
||||
|
||||
|
||||
```
|
||||
$ bash ./stream.sh
|
||||
42
|
||||
```
|
||||
|
||||
### Writing data to a file with Bash
|
||||
|
||||
Whether you're storing data your user created with your application or just metadata about what the user did in an application (for instance, game saves or recent songs played), there are many good reasons to store data for later use. In Bash, you can save data to files using common shell redirection.
|
||||
|
||||
For instance, to create a new file containing output, use a single redirect token:
|
||||
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
TZ=UTC
|
||||
date > date.txt
|
||||
```
|
||||
|
||||
Run the script a few times:
|
||||
|
||||
|
||||
```
|
||||
$ bash ./date.sh
|
||||
$ cat date.txt
|
||||
Tue Feb 23 22:25:06 UTC 2021
|
||||
$ bash ./date.sh
|
||||
$ cat date.txt
|
||||
Tue Feb 23 22:25:12 UTC 2021
|
||||
```
|
||||
|
||||
To append data, use the double redirect tokens:
|
||||
|
||||
|
||||
```
|
||||
#!/bin/sh
|
||||
|
||||
TZ=UTC
|
||||
date >> date.txt
|
||||
```
|
||||
|
||||
Run the script a few times:
|
||||
|
||||
|
||||
```
|
||||
$ bash ./date.sh
|
||||
$ bash ./date.sh
|
||||
$ bash ./date.sh
|
||||
$ cat date.txt
|
||||
Tue Feb 23 22:25:12 UTC 2021
|
||||
Tue Feb 23 22:25:17 UTC 2021
|
||||
Tue Feb 23 22:25:19 UTC 2021
|
||||
Tue Feb 23 22:25:22 UTC 2021
|
||||
```
|
||||
|
||||
### Bash for easy programming
|
||||
|
||||
Bash excels at being easy to learn because, with just a few basic concepts, you can build complex programs. For the full documentation, refer to the [excellent Bash documentation][6] on GNU.org.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/input-output-bash
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background)
|
||||
[2]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
[3]: https://opensource.com/article/19/9/getting-started-zsh
|
||||
[4]: https://opensource.com/article/19/7/ways-get-started-linux#wsl
|
||||
[5]: https://www.redhat.com/sysadmin/terminals-shells-consoles
|
||||
[6]: http://gnu.org/software/bash
|
@ -1,144 +0,0 @@
|
||||
[#]: subject: (How to read and write files in C++)
|
||||
[#]: via: (https://opensource.com/article/21/3/ccc-input-output)
|
||||
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
How to read and write files in C++
|
||||
======
|
||||
If you know how to use I/O streams in C++, you can (in principle) handle
|
||||
any kind of I/O device.
|
||||
![Computer screen with files or windows open][1]
|
||||
|
||||
In C++, reading and writing to files can be done by using I/O streams in conjunction with the stream operators `>>` and `<<`. When reading or writing to files, those operators are applied to an instance of a class representing a file on the hard drive. This stream-based approach has a huge advantage: From a C ++ perspective, it doesn't matter what you are reading or writing to, whether it's a file, a database, the console, or another PC you are connected to over the network. Therefore, knowing how to write files using stream operators can be transferred to other areas.
|
||||
|
||||
### I/O stream classes
|
||||
|
||||
The C++ standard library provides the class [ios_base][2]. This class acts as the base class for all I/O stream-compatible classes, such as [basic_ofstream][3] and [basic_ifstream][4]. This example will use the specialized types for reading/writing characters, `ifstream` and `ofstream`.
|
||||
|
||||
* `ofstream` means _output file stream_, and it can be accessed with the insertion operator, `<<`.
|
||||
* `ifstream` means _input file stream_, and it can be accessed with the extraction operator, `>>`.
|
||||
|
||||
|
||||
|
||||
Both types are defined inside the header `<fstream>`.
|
||||
|
||||
A class that inherits from `ios_base` can be thought of as a data sink when writing to it or as a data source when reading from it, completely detached from the data itself. This object-oriented approach makes concepts such as [separation of concerns][5] and [dependency injection][6] easy to implement.
|
||||
|
||||
### A simple example
|
||||
|
||||
This example program is quite simple: It creates an `ofstream`, writes to it, creates an `ifstream`, and reads from it:
|
||||
|
||||
|
||||
```
|
||||
#include <iostream> // cout, cin, cerr etc...
|
||||
#include <fstream> // ifstream, ofstream
|
||||
#include <string>
|
||||
|
||||
int main()
|
||||
{
|
||||
std::string sFilename = "MyFile.txt";
|
||||
|
||||
/******************************************
|
||||
* *
|
||||
* WRITING *
|
||||
* *
|
||||
******************************************/
|
||||
|
||||
std::ofstream fileSink(sFilename); // Creates an output file stream
|
||||
|
||||
if (!fileSink) {
|
||||
std::cerr << "Cannot open " << sFilename << std::endl;
|
||||
exit(-1);
|
||||
}
|
||||
|
||||
/* std::endl will automatically append the correct EOL */
|
||||
fileSink << "Hello Open Source World!" << std::endl;
|
||||
|
||||
/******************************************
|
||||
* *
|
||||
* READING *
|
||||
* *
|
||||
******************************************/
|
||||
|
||||
std::ifstream fileSource(sFilename); // Creates an input file stream
|
||||
|
||||
if (!fileSource) {
|
||||
std::cerr << "Cannot open " << sFilename << std::endl;
|
||||
exit(-1);
|
||||
}
|
||||
else {
|
||||
// Intermediate buffer
|
||||
std::string buffer;
|
||||
|
||||
// By default, the >> operator reads word by word (till whitespace)
|
||||
while (fileSource >> buffer)
|
||||
{
|
||||
std::cout << buffer << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
exit(0);
|
||||
}
|
||||
```
|
||||
|
||||
This code is available on [GitHub][7]. When you compile and execute it, you should get the following output:
|
||||
|
||||
![Console screenshot][8]
|
||||
|
||||
(Stephan Avenwedde, [CC BY-SA 4.0][9])
|
||||
|
||||
This is a simplified, beginner-friendly example. If you want to use this code in your own application, please note the following:
|
||||
|
||||
* The file streams are automatically closed at the end of the program. If your program keeps running after you are done with a file, you should close the stream manually by calling the `close()` method.
|
||||
* These file stream classes inherit (over several levels) from [basic_ios][10], which overloads the `!` operator. This lets you implement a simple check if you can access the stream. On [cppreference.com][11], you can find an overview of when this check will (and won't) succeed, and you can implement further error handling.
|
||||
* By default, `ifstream` stops at whitespace and skips it. To read line by line until you reach [EOF][12], use the `getline(...)` method.
|
||||
* For reading and writing binary files, pass the `std::ios::binary` flag to the constructor: This prevents [EOL][13] characters from being appended to each line.
|
||||
|
||||
|
||||
|
||||
### Writing from the system's perspective
|
||||
|
||||
When writing files, the data is written to the system's in-memory write buffer. When the system receives the system call [sync][14], this buffer's contents are written to the hard drive. This mechanism is also the reason you shouldn't remove a USB stick without telling the system. Usually, _sync_ is called on a regular basis by a daemon. If you really want to be on the safe side, you can also call _sync_ manually:
|
||||
|
||||
|
||||
```
|
||||
#include <unistd.h> // needs to be included
|
||||
|
||||
sync();
|
||||
```
|
||||
|
||||
### Summary
|
||||
|
||||
Reading and writing to files in C++ is not that complicated. Moreover, if you know how to deal with I/O streams, you also know (in principle) how to deal with any kind of I/O device. Libraries for various kinds of I/O devices let you use stream operators for easy access. This is why it is beneficial to know how I/O streams work.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/ccc-input-output
|
||||
|
||||
作者:[Stephan Avenwedde][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/hansic99
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY (Computer screen with files or windows open)
|
||||
[2]: https://en.cppreference.com/w/cpp/io/ios_base
|
||||
[3]: https://en.cppreference.com/w/cpp/io/basic_ofstream
|
||||
[4]: https://en.cppreference.com/w/cpp/io/basic_ifstream
|
||||
[5]: https://en.wikipedia.org/wiki/Separation_of_concerns
|
||||
[6]: https://en.wikipedia.org/wiki/Dependency_injection
|
||||
[7]: https://github.com/hANSIc99/cpp_input_output
|
||||
[8]: https://opensource.com/sites/default/files/uploads/c_console_screenshot.png (Console screenshot)
|
||||
[9]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[10]: https://en.cppreference.com/w/cpp/io/basic_ios
|
||||
[11]: https://en.cppreference.com/w/cpp/io/basic_ios/operator!
|
||||
[12]: https://en.wikipedia.org/wiki/End-of-file
|
||||
[13]: https://en.wikipedia.org/wiki/Newline
|
||||
[14]: https://en.wikipedia.org/wiki/Sync_%28Unix%29
|
@ -1,70 +0,0 @@
|
||||
[#]: subject: (Why you should care about service mesh)
|
||||
[#]: via: (https://opensource.com/article/21/3/service-mesh)
|
||||
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Why you should care about service mesh
|
||||
======
|
||||
Service mesh provides benefits for development and operations in
|
||||
microservices environments.
|
||||
![Net catching 1s and 0s or data in the clouds][1]
|
||||
|
||||
Many developers wonder why they should care about [service mesh][2]. It's a question I'm asked often in my presentations at developer meetups, conferences, and hands-on workshops about microservices development with cloud-native architecture. My answer is always the same: "As long as you want to simplify your microservices architecture, it should be running on Kubernetes."
|
||||
|
||||
Concerning simplification, you probably also wonder why distributed microservices must be designed so complexly for running on Kubernetes clusters. As this article explains, many developers solve the microservices architecture's complexity with service mesh and gain additional benefits by adopting service mesh in production.
|
||||
|
||||
### What is a service mesh?
|
||||
|
||||
A service mesh is a dedicated infrastructure layer for providing a transparent and code-independent (polyglot) way to eliminate nonfunctional microservices capabilities from the application code.
|
||||
|
||||
![Before and After Service Mesh][3]
|
||||
|
||||
(Daniel Oh, [CC BY-SA 4.0][4])
|
||||
|
||||
### Why service mesh matters to developers
|
||||
|
||||
When developers deploy microservices to the cloud, they have to address nonfunctional microservices capabilities to avoid cascading failures, regardless of business functionalities. Those capabilities typically include service discovery, logging, monitoring, resiliency, authentication, elasticity, and tracing. Developers must spend more time adding them to each microservice rather than developing actual business logic, which makes the microservices heavy and complex.
|
||||
|
||||
As organizations accelerate their move to the cloud, the service mesh can increase developer productivity. Instead of making the services responsible for dealing with those complexities and adding more code into each service to deal with cloud-native concerns, the Kubernetes + service mesh platform is responsible for providing those services to any application (existing or new, in any programming language or framework) running on the platform. Then the microservices can be lightweight and focus on their business logic rather than cloud-native complexities.
|
||||
|
||||
### Why service mesh matters to ops
|
||||
|
||||
This doesn't answer why ops teams need to care about the service mesh for operating cloud-native microservices on Kubernetes. It's because the ops teams have to ensure robust security, compliance, and observability for spreading new cloud-native applications across large hybrid and multi clouds on Kubernetes environments.
|
||||
|
||||
The service mesh is composed of a control plane for managing proxies to route traffic and a data plane for injecting sidecars. The sidecars allow the ops teams to do things like adding third-party security tools and tracing traffic in all service communications to avoid security breaches or compliance issues. The service mesh also improves observation capabilities by visualizing tracing metrics on graphical dashboards.
|
||||
|
||||
### How to get started with service mesh
|
||||
|
||||
Service mesh manages cloud-native capabilities more efficiently—for developers and operators and from application development to platform operation.
|
||||
|
||||
You might want to know where to get started adopting service mesh in alignment with your microservices applications and architecture. Luckily, there are many open source service mesh projects. Many cloud service providers also offer service mesh capabilities within their Kubernetes platforms.
|
||||
|
||||
![CNCF Service Mesh Landscape][5]
|
||||
|
||||
(Daniel Oh, [CC BY-SA 4.0][4])
|
||||
|
||||
You can find links to the most popular service mesh projects and services on the [CNCF Service Mesh Landscape][6] webpage.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/service-mesh
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/daniel-oh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
|
||||
[2]: https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh
|
||||
[3]: https://opensource.com/sites/default/files/uploads/vm-vs-service-mesh.png (Before and After Service Mesh)
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/service-mesh-providers.png (CNCF Service Mesh Landscape)
|
||||
[6]: https://landscape.cncf.io/card-mode?category=service-mesh&grouping=category
|
@ -1,97 +0,0 @@
|
||||
[#]: subject: (Manipulate data in files with Lua)
|
||||
[#]: via: (https://opensource.com/article/21/3/lua-files)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Manipulate data in files with Lua
|
||||
======
|
||||
Understand how Lua handles reading and writing data.
|
||||
![Person standing in front of a giant computer screen with numbers, data][1]
|
||||
|
||||
Some data is ephemeral, stored in RAM, and only significant while an application is running. But some data is meant to be persistent, stored on a hard drive for later use. When you program, whether you're working on a simple script or a complex suite of tools, it's common to need to read and write files. Sometimes a file may contain configuration options, and other times the file is the data that your user is creating with your application. Every language handles this task a little differently, and this article demonstrates how to handle data files with Lua.
|
||||
|
||||
### Installing Lua
|
||||
|
||||
If you're on Linux, you can install Lua from your distribution's software repository. On macOS, you can install Lua from [MacPorts][2] or [Homebrew][3]. On Windows, you can install Lua from [Chocolatey][4].
|
||||
|
||||
Once you have Lua installed, open your favorite text editor and get ready to code.
|
||||
|
||||
### Reading a file with Lua
|
||||
|
||||
Lua uses the `io` library for data input and output. The following example creates a function called `ingest` to read data from a file and then parses it with the `:read` function. When opening a file in Lua, there are several modes you can enable. Because I just need to read data from this file, I use the `r` (for "read") mode:
|
||||
|
||||
|
||||
```
|
||||
function ingest(file)
|
||||
local f = io.open(file, "r")
|
||||
local lines = f:read("*all")
|
||||
f:close()
|
||||
return(lines)
|
||||
end
|
||||
|
||||
myfile=ingest("example.txt")
|
||||
print(myfile)
|
||||
```
|
||||
|
||||
In the code, notice that the variable `myfile` is created to trigger the `ingest` function, and therefore, it receives whatever that function returns. The `ingest` function returns the lines (from a variable intuitively called `lines`) of the file. When the contents of the `myfile` variable are printed in the final step, the lines of the file appear in the terminal.
|
||||
|
||||
If the file `example.txt` contains configuration options, then I would write some additional code to parse that data, probably using another Lua library depending on whether the configuration was stored as an INI file or YAML file or some other format. If the data were an SVG graphic, I'd write extra code to parse XML, probably using an SVG library for Lua. In other words, the data your code reads can be manipulated once it's loaded into memory, but all that's required to load it is the `io` library.
|
||||
|
||||
### Writing data to a file with Lua
|
||||
|
||||
Whether you're storing data your user is creating with your application or just metadata about what the user is doing in an application (for instance, game saves or recent songs played), there are many good reasons to store data for later use. In Lua, this is achieved through the `io` library by opening a file, writing data into it, and closing the file:
|
||||
|
||||
|
||||
```
|
||||
function exgest(file)
|
||||
local f = io.open(file, "a")
|
||||
io.output(f)
|
||||
io.write("hello world\n")
|
||||
io.close(f)
|
||||
end
|
||||
|
||||
exgest("example.txt")
|
||||
```
|
||||
|
||||
When reading data from the file, I opened it in `r` mode; this time, I use `a` (for "append") to write data to the end of the file. Because I'm writing plain text into a file, I added my own newline character (`\n`). Often, you're not writing raw text into a file, and you'll probably use an additional library to write a specific format instead. For instance, you might use an INI or YAML library to help write configuration files, an XML library to write XML, and so on.
|
||||
|
||||
### File modes
|
||||
|
||||
When opening files in Lua, there are some safeguards and parameters to define how a file should be handled. The default is `r`, which permits you to read data only:
|
||||
|
||||
* **r** for read only
|
||||
* **w** to overwrite or create a new file if it doesn't already exist
|
||||
* **r+** to read and overwrite
|
||||
* **a** to append data to a file or make a new file if it doesn't already exist
|
||||
* **a+** to read data, append data to a file, or make a new file if it doesn't already exist
|
||||
|
||||
|
||||
|
||||
There are a few others (`b` for binary formats, for instance), but those are the most common. For the full documentation, refer to the excellent Lua documentation on [Lua.org/manual][5].
|
||||
|
||||
### Lua and files
|
||||
|
||||
Like other programming languages, Lua has plenty of library support to access a filesystem to read and write data. Because Lua has a consistent and simple syntax, it's easy to perform complex processing on data in files of any format. Try using Lua for your next software project, or as an API for your C or C++ project.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/lua-files
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
|
||||
[2]: https://opensource.com/article/20/11/macports
|
||||
[3]: https://opensource.com/article/20/6/homebrew-mac
|
||||
[4]: https://opensource.com/article/20/3/chocolatey
|
||||
[5]: http://lua.org/manual
|
@ -0,0 +1,93 @@
|
||||
[#]: subject: (A DevOps guide to documentation)
|
||||
[#]: via: (https://opensource.com/article/21/3/devops-documentation)
|
||||
[#]: author: (Will Kelly https://opensource.com/users/willkelly)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
A DevOps guide to documentation
|
||||
======
|
||||
Bring your documentation writing into the DevOps lifecycle.
|
||||
![Typewriter with hands][1]
|
||||
|
||||
DevOps is challenging technical documentation norms like at no other time in IT history. From automation to increased delivery velocity to dismantling the waterfall software development lifecycle model, these changes all call for a dramatic shift in the business and philosophy of technical documentation.
|
||||
|
||||
Here are some ways DevOps is influencing technical documentation.
|
||||
|
||||
### The technical writer's changing role
|
||||
|
||||
The technical writer's role must adapt to DevOps. The good news is that many technical writers are already embedded in development teams, and they may have a leg up by already having collaborative relationships and growing knowledge of the product.
|
||||
|
||||
But you have some pivoting to do if your technical writers are used to working in siloes and relying on drafts written by subject matter experts as the basis for documentation.
|
||||
|
||||
Make the investments to ensure your documentation and other project-related content development efforts gain the tools, structure, and support they require. Start by changing your [technical writer hiring practices][2]. Documentation at the [speed of DevOps][3] requires rethinking your content strategy and breaking down longstanding silos between your DevOps team and the technical writer assigned to support the project.
|
||||
|
||||
DevOps also causes development teams to break away from the rigors of traditional documentation practices. Foremost, documentation's [definition of done][4] must change. Some corporate cultures make the technical writer a passive participant in software development. DevOps makes new demands—as the DevOps cultural transformation goes, so does the technical writer's role. Writers will need (and must adjust to) the transparency DevOps offers. They must integrate into DevOps teams. Depending on how an organization casts the role, bringing the technical writer into the team may present skillset challenges.
|
||||
|
||||
### Documentation standards, methodologies, and specifications
|
||||
|
||||
While DevOps has yet to influence technical documentation itself, the open source community has stepped up to help with application programming interface (API) documentation that's finding use among DevOps teams in enterprises of all sizes.
|
||||
|
||||
Open source specifications and tools for documenting APIs are an exciting area to watch. I'd like to think it is due to the influence of [Google Season of Docs][5], which gives open source software projects access to professional technical writing talent to tackle their most critical documentation projects.
|
||||
|
||||
Open source APIs are available and need to become part of the DevOps documentation discussion. The importance of cloud-native application integration requirements is on the rise. The [OpenAPI specification][6]—an open standard for defining and documenting an API—is a good resource for API documentation in DevOps environments. However, a significant criticism is that the specification can make documentation time-consuming to create and keep current.
|
||||
|
||||
There were brief attempts to create a [Continuous Documentation][7] methodology. There was also a movement to create a [DocOps][8] Framework that came out of CA (now Broadcom). Despite its initial promise, DocOps never caught on as an industry movement.
|
||||
|
||||
The current state of DevOps documentation standards means your DevOps teams (including your technical writer) need to begin creating documentation at the earliest stages of a project. You do this by adding documentation as both an agile story and (just as important) as a management expectation; you enforce it by tying it to annual performance reviews.
|
||||
|
||||
### Documentation tools
|
||||
|
||||
DevOps documentation authoring should occur online in a format or a platform accessible to all team members. MediaWiki, DokuWiki, TikiWiki, and other [open source wikis][9] offer DevOps teams a central repository for authoring and maintaining documentation.
|
||||
|
||||
Let teams choose their wiki just as you let them choose their other continuous integration/continuous development (CI/CD) toolchains. Part of the power of open source wikis is their extensibility. For example, DokuWiki includes a range of extensions you can install to create an authoring platform that meets your DevOps team's authoring requirements.
|
||||
|
||||
If you're ambitious enough to bolster your team's authoring and collaboration capabilities, [Nextcloud][10] (an open source cloud collaboration suite) is an option for putting your DevOps teams online and giving them the tools they need to author documentation.
|
||||
|
||||
### DevOps best practices
|
||||
|
||||
Documentation also plays a role in DevOps transformation. You're going to want to document the best practices that help your organization realize efficiency and process gains from DevOps. This information is too important to communicate only by word of mouth across your DevOps teams. Documentation is a unifying force if your organization has multiple DevOps teams; it promotes standardization of best practices and sets you up to capture and benchmark metrics for code quality.
|
||||
|
||||
Often it's developers who shoulder the work of documenting DevOps practices. Even if their organizations have technical writers, they might work across development teams. Thus, it's important that developers and sysadmins can capture, document, and communicate their best practices. Here are some tips to get that effort going in the right direction:
|
||||
|
||||
* Invest the time upfront to create a standard template for your DevOps best practices. Don't fall into the trap of copying a template you find online. Interview your stakeholders and teams to create a template that meets your team's needs.
|
||||
* Look for ways to be creative with information gathering, such as recording your team meetings and using chat system logs to serve as a foundation for your documentation.
|
||||
* Establish a wiki for publishing your best practices. Use a wiki that lets you maintain an audit trail of edits and updates. Such a platform sets your teams up to update and maintain best practices as they change.
|
||||
|
||||
|
||||
|
||||
It's smart to document dependencies as you build out your CI/CD toolchains. Such an effort pays off when you onboard new team members. It's also a little bit of insurance when a team member forgets something.
|
||||
|
||||
Finally, automation is enticing to DevOps stakeholders and practitioners alike. It's all fun and games until automation breaks. Having documentation for automation run books, admin guides, and other things in place (and up to date) means your staff can get automation working again regardless of when it breaks down.
|
||||
|
||||
### Final thoughts
|
||||
|
||||
DevOps is a net positive for technical documentation. It pulls content development into the DevOps lifecycle and breaks down the siloes between developers and technical writers within the organizational culture. Without the luxury of a technical writer, teams get the tools to accelerate their document authoring's velocity to match the speed of DevOps.
|
||||
|
||||
What is your organization doing to bring documentation into the DevOps lifecycle? Please share your experience in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/devops-documentation
|
||||
|
||||
作者:[Will Kelly][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/willkelly
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/typewriter-hands.jpg?itok=oPugBzgv (Typewriter with hands)
|
||||
[2]: https://opensource.com/article/19/11/hiring-technical-writers-devops
|
||||
[3]: https://searchitoperations.techtarget.com/opinion/Make-DevOps-documentation-an-integral-part-of-your-strategy?_ga=2.73253915.980148481.1610758264-908287796.1564772842
|
||||
[4]: https://www.agilealliance.org/glossary/definition-of-done
|
||||
[5]: https://developers.google.com/season-of-docs
|
||||
[6]: https://swagger.io/specification/
|
||||
[7]: https://devops.com/continuous-documentation
|
||||
[8]: https://www.cmswire.com/cms/information-management/the-importance-of-docops-in-the-new-era-of-business-027489.php
|
||||
[9]: https://opensource.com/article/20/7/sharepoint-alternative
|
||||
[10]: https://opensource.com/article/20/7/nextcloud
|
@ -0,0 +1,230 @@
|
||||
[#]: subject: (Access Python package index JSON APIs with requests)
|
||||
[#]: via: (https://opensource.com/article/21/3/python-package-index-json-apis-requests)
|
||||
[#]: author: (Ben Nuttall https://opensource.com/users/bennuttall)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Access Python package index JSON APIs with requests
|
||||
======
|
||||
PyPI's JSON API is a machine-readable source of the same kind of data
|
||||
you can access while browsing the website.
|
||||
![Python programming language logo with question marks][1]
|
||||
|
||||
PyPI, the Python package index, provides a JSON API for information about its packages. This is essentially a machine-readable source of the same kind of data you can access while browsing the website. For example, as a human, I can head to the [NumPy][2] project page in my browser, click around, and see which versions there are, what files are available, and things like release dates and which Python versions are supported:
|
||||
|
||||
![NumPy project page][3]
|
||||
|
||||
(Ben Nuttall, [CC BY-SA 4.0][4])
|
||||
|
||||
But if I want to write a program to access this data, I can use the JSON API instead of having to scrape and parse the HTML on these pages.
|
||||
|
||||
As an aside: On the old PyPI website, when it was hosted at `pypi.python.org`, the NumPy project page was at `pypi.python.org/pypi/numpy`, and accessing the JSON was a simple matter of adding a `/json` on the end, hence `https://pypi.org/pypi/numpy/json`. Now the PyPI website is hosted at `pypi.org`, and NumPy's project page is at `pypi.org/project/numpy`. The new site doesn't render the JSON, but the JSON API still runs as before. So now, rather than just adding `/json` to the project page's URL, you have to remember the JSON URL itself, such as `https://pypi.org/pypi/numpy/json`.
|
||||
|
||||
You can open up the JSON for NumPy in your browser by heading to its URL. Firefox renders it nicely like this:
|
||||
|
||||
![JSON rendered in Firefox][5]
|
||||
|
||||
(Ben Nuttall, [CC BY-SA 4.0][4])
|
||||
|
||||
You can open `info`, `releases`, and `urls` to inspect the contents within. Or you can load it into a Python shell. Here are a few lines to get started:
|
||||
|
||||
|
||||
```
|
||||
import requests
|
||||
url = "<https://pypi.org/pypi/numpy/json>"
|
||||
r = requests.get(url)
|
||||
data = r.json()
|
||||
```
|
||||
|
||||
Once you have the data (calling `.json()` provides a [dictionary][6] of the data), you can inspect it:
|
||||
|
||||
![Inspecting data][7]
|
||||
|
||||
(Ben Nuttall, [CC BY-SA 4.0][4])
|
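For instance, here is a minimal sketch of that inspection (re-fetching the JSON so the snippet stands alone); the exact set of top-level keys is whatever the API returns, but `info`, `releases`, and `urls` are the ones used below:

```
import requests

data = requests.get("https://pypi.org/pypi/numpy/json").json()

print(list(data.keys()))        # e.g. ['info', 'last_serial', 'releases', 'urls', ...]
print(data['info']['name'])     # 'numpy'
print(data['info']['version'])  # the latest release number
```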
||||
|
||||
Open `releases`, and inspect the keys inside it:
|
||||
|
||||
![Inspecting keys in releases][8]
|
||||
|
||||
(Ben Nuttall, [CC BY-SA 4.0][4])
|
||||
|
||||
This shows that `releases` is a dictionary with version numbers as keys. Pick one (say, the latest one) and inspect that:
|
||||
|
||||
![Inspecting version][9]
|
||||
|
||||
(Ben Nuttall, [CC BY-SA 4.0][4])
|
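In code, that might look like the following sketch, which assumes `info['version']` names the latest release:

```
import requests

data = requests.get("https://pypi.org/pypi/numpy/json").json()

versions = list(data['releases'])      # version numbers are the keys
print(len(versions), "releases")

latest = data['info']['version']       # assumed to be the newest version number
release = data['releases'][latest]
print(latest, "->", type(release), len(release))
```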
||||
|
||||
Each release is a list, and this one contains 24 items. But what is each item? Since it's a list, you can index the first one and take a look:
|
||||
|
||||
![Indexing an item][10]
|
||||
|
||||
(Ben Nuttall, [CC BY-SA 4.0][4])
|
||||
|
||||
This item is a dictionary containing details about a particular file. So each of the 24 items in the list relates to a file associated with this particular version number, i.e., the 24 files listed at <https://pypi.org/project/numpy/1.20.1/#files>.
|
||||
|
||||
You could write a script that looks for something within the available data. For example, the following loop looks for versions with sdist (source distribution) files that specify a `requires_python` attribute and prints them:
|
||||
|
||||
|
||||
```
|
||||
for version, files in data['releases'].items():
|
||||
for f in files:
|
||||
if f.get('packagetype') == 'sdist' and f.get('requires_python'):
|
||||
print(version, f['requires_python'])
|
||||
```
|
||||
|
||||
![sdist files with requires_python attribute ][11]
|
||||
|
||||
(Ben Nuttall, [CC BY-SA 4.0][4])
|
||||
|
||||
### piwheels
|
||||
|
||||
Last year I [implemented a similar API][12] on the piwheels website. [piwheels.org][13] is a Python package index that provides wheels (precompiled binary packages) for the Raspberry Pi architecture. It's essentially a mirror of the package set on PyPI, but with Arm wheels instead of files uploaded to PyPI by package maintainers.
|
||||
|
||||
Since piwheels mimics the URL structure of PyPI, you can change the `pypi.org` part of a project page's URL to `piwheels.org`. It'll show you a similar kind of project page with details about which versions we have built and which files are available. Since I liked how the old site allowed you to add `/json` to the end of the URL, I made ours work that way, so NumPy's project page on PyPI is [pypi.org/project/numpy][14]. On piwheels, it is [piwheels.org/project/numpy][15], and the JSON is at [piwheels.org/project/numpy/json][16].
|
||||
|
||||
There's no need to duplicate the contents of PyPI's API, so we provide information about what's available on piwheels and include a list of all known releases, some basic information, and a list of files we have:
|
||||
|
||||
![JSON files available in piwheels][17]
|
||||
|
||||
(Ben Nuttall, [CC BY-SA 4.0][4])
|
||||
|
||||
Similar to the previous PyPI example, you could create a script to analyze the API contents, for example, to show the number of files piwheels has for each version of NumPy:
|
||||
|
||||
|
||||
```
|
||||
import requests
|
||||
|
||||
url = "<https://www.piwheels.org/project/numpy/json>"
|
||||
package = requests.get(url).json()
|
||||
|
||||
for version, info in package['releases'].items():
|
||||
if info['files']:
|
||||
print('{}: {} files'.format(version, len(info['files'])))
|
||||
else:
|
||||
print('{}: No files'.format(version))
|
||||
```
|
||||
|
||||
Also, each file contains some metadata:
|
||||
|
||||
![Metadata in JSON files in piwheels][18]
|
||||
|
||||
(Ben Nuttall, [CC BY-SA 4.0][4])
|
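As a rough sketch, you can peek at one file's metadata like this; apart from `apt_dependencies` (used below), treat the field names as whatever the API happens to return:

```
import requests

package = requests.get("https://www.piwheels.org/project/numpy/json").json()

# Find the first release that has files and print one file's metadata keys.
for version, release in package['releases'].items():
    if release['files']:
        filename, file = next(iter(release['files'].items()))
        print(version, filename)
        print(sorted(file))   # metadata field names, e.g. 'apt_dependencies'
        break
```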
||||
|
||||
One handy thing is the `apt_dependencies` field, which lists the Apt packages needed to use the library. In the case of this NumPy file, as well as installing NumPy with pip, you'll also need to install `libatlas3-base` and `libgfortran` using Debian's Apt package manager.
|
||||
|
||||
Here is an example script that shows the Apt dependencies for a package:
|
||||
|
||||
|
||||
```
|
||||
import requests
|
||||
|
||||
def get_install(package, abi):
|
||||
url = 'https://piwheels.org/project/{}/json'.format(package)
|
||||
r = requests.get(url)
|
||||
data = r.json()
|
||||
for version, release in sorted(data['releases'].items(), reverse=True):
|
||||
for filename, file in release['files'].items():
|
||||
if abi in filename:
|
||||
deps = ' '.join(file['apt_dependencies'])
|
||||
print("sudo apt install {}".format(deps))
|
||||
print("sudo pip3 install {}=={}".format(package, version))
|
||||
return
|
||||
|
||||
get_install('opencv-python', 'cp37m')
|
||||
get_install('opencv-python', 'cp35m')
|
||||
get_install('opencv-python-headless', 'cp37m')
|
||||
get_install('opencv-python-headless', 'cp35m')
|
||||
```
|
||||
|
||||
We also provide a general API endpoint for the list of packages, which includes download stats for each package:
|
||||
|
||||
|
||||
```
|
||||
import requests
|
||||
|
||||
url = "<https://www.piwheels.org/packages.json>"
|
||||
packages = requests.get(url).json()
|
||||
packages = {
|
||||
pkg: (d_month, d_all)
|
||||
for pkg, d_month, d_all, *_ in packages
|
||||
}
|
||||
|
||||
package = 'numpy'
|
||||
d_month, d_all = packages[package]
|
||||
|
||||
print(package, "has had", d_month, "downloads in the last month")
|
||||
print(package, "has had", d_all, "downloads in total")
|
||||
```
|
||||
|
||||
### pip search
|
||||
|
||||
Since `pip search` is currently disabled due to its XMLRPC interface being overloaded, people have been looking for alternatives. You can use the piwheels JSON API to search for package names instead since the set of packages is the same:
|
||||
|
||||
|
||||
```
|
||||
#!/usr/bin/python3
|
||||
import sys
|
||||
|
||||
import requests
|
||||
|
||||
PIWHEELS_URL = 'https://www.piwheels.org/packages.json'
|
||||
|
||||
r = requests.get(PIWHEELS_URL)
|
||||
packages = {p[0] for p in r.json()}
|
||||
|
||||
def search(term):
|
||||
for pkg in packages:
|
||||
if term in pkg:
|
||||
yield pkg
|
||||
|
||||
if __name__ == '__main__':
|
||||
if len(sys.argv) == 2:
|
||||
results = search(sys.argv[1].lower())
|
||||
for res in results:
|
||||
print(res)
|
||||
else:
|
||||
print("Usage: pip_search TERM")
|
||||
```
|
||||
|
||||
For more information, see the piwheels [JSON API documentation][19].
|
||||
|
||||
* * *
|
||||
|
||||
_This article originally appeared on Ben Nuttall's [Tooling Tuesday blog][20] and is reused with permission._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/python-package-index-json-apis-requests
|
||||
|
||||
作者:[Ben Nuttall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/bennuttall
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python_programming_question.png?itok=cOeJW-8r (Python programming language logo with question marks)
|
||||
[2]: https://pypi.org/project/numpy/
|
||||
[3]: https://opensource.com/sites/default/files/uploads/numpy-project-page.png (NumPy project page)
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/pypi-json-firefox.png (JSON rendered in Firefox)
|
||||
[6]: https://docs.python.org/3/tutorial/datastructures.html#dictionaries
|
||||
[7]: https://opensource.com/sites/default/files/uploads/pypi-json-notebook.png (Inspecting data)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/pypi-json-releases.png (Inspecting keys in releases)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/pypi-json-inspect.png (Inspecting version)
|
||||
[10]: https://opensource.com/sites/default/files/uploads/pypi-json-release.png (Indexing an item)
|
||||
[11]: https://opensource.com/sites/default/files/uploads/pypi-json-requires-python.png (sdist files with requires_python attribute )
|
||||
[12]: https://blog.piwheels.org/requires-python-support-new-project-page-layout-and-a-new-json-api/
|
||||
[13]: https://www.piwheels.org/
|
||||
[14]: https://pypi.org/project/numpy
|
||||
[15]: https://www.piwheels.org/project/numpy
|
||||
[16]: https://www.piwheels.org/project/numpy/json
|
||||
[17]: https://opensource.com/sites/default/files/uploads/piwheels-json.png (JSON files available in piwheels)
|
||||
[18]: https://opensource.com/sites/default/files/uploads/piwheels-json-numpy.png (Metadata in JSON files in piwheels)
|
||||
[19]: https://www.piwheels.org/json.html
|
||||
[20]: https://tooling.bennuttall.com/accessing-python-package-index-json-apis-with-requests/
|
@ -1,106 +0,0 @@
|
||||
[#]: subject: (NewsFlash: A Modern Open-Source Feed Reader With Feedly Support)
|
||||
[#]: via: (https://itsfoss.com/newsflash-feedreader/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (DCOLIVERSUN)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
NewsFlash: A Modern Open-Source Feed Reader With Feedly Support
|
||||
======
|
||||
|
||||
Some may choose to believe that RSS readers are dead, but they’re here to stay, especially when you don’t want Big Tech algorithms to decide what you should read. With a feed reader, you can choose your own reading sources.
|
||||
|
||||
I’ve recently come across a fantastic RSS reader, NewsFlash. It also supports adding feeds through web-based feed readers like [Feedly][1] and NewsBlur. That’s a big relief because if you are already using such a service, you don’t have to import your feeds manually.
|
||||
|
||||
NewsFlash happens to be the spiritual successor to [FeedReader][2] with the original developer involved as well.
|
||||
|
||||
In case you’re wondering, we’ve already covered a list of [Feed Reader apps for Linux][3] if you’re looking for more options.
|
||||
|
||||
### NewsFlash: A Feed Reader To Complement Web-based RSS Reader Account
|
||||
|
||||
![][4]
|
||||
|
||||
It is important to note that NewsFlash isn’t just tailored for web-based RSS feed accounts; you can also use local RSS feeds without needing to sync them across multiple devices.
|
||||
|
||||
However, it is specifically helpful if you’re using any of the supported web-based feed readers.
|
||||
|
||||
Here, I’ll be highlighting some of the features that it offers.
|
||||
|
||||
### Features of NewsFlash
|
||||
|
||||
![][5]
|
||||
|
||||
* Desktop Notifications support
|
||||
* Fast search and filtering
|
||||
* Supports tagging
|
||||
* Useful keyboard shortcuts that can be later customized
|
||||
* Local feeds
|
||||
* Import/Export OPML files
|
||||
* Easily discover various RSS Feeds using Feedly’s library without needing to sign up for the service
|
||||
* Custom Font Support
|
||||
* Multiple themes supported (including a dark theme)
|
||||
* Ability to enable/disable the Thumbnails
|
||||
* Tweak the time for regular sync intervals
|
||||
* Support for web-based Feed accounts like Feedly, Fever, NewsBlur, feedbin, Miniflux
|
||||
|
||||
|
||||
|
||||
In addition to the features mentioned, it also opens the reader view when you re-size the window, so that’s a subtle addition.
|
||||
|
||||
![newsflash screenshot 1][6]
|
||||
|
||||
If you want to reset the account, you can easily do that too – which will also delete all your local data. And, yes, you can manually clear the cache and set an expiry for how long user data for the feeds you follow is kept locally.
|
||||
|
||||
**Recommended Read:**
|
||||
|
||||
![][7]
|
||||
|
||||
#### [6 Best Feed Reader Apps for Linux][3]
|
||||
|
||||
Extensively use RSS feeds to stay updated with your favorite websites? Take a look at the best feed reader applications for Linux.
|
||||
|
||||
### Installing NewsFlash in Linux
|
||||
|
||||
You do not get official packages for the various Linux distributions; official availability is limited to a [Flatpak][8].
|
||||
|
||||
For Arch users, you can find it available in [AUR][9].
|
||||
|
||||
Fortunately, the [Flatpak][10] package makes it easy for you to install it on any Linux distro you use. You can refer to our [Flatpak guide][11] for help.
|
||||
|
||||
In either case, you can refer to its [GitLab page][12] and compile it yourself.
|
||||
|
||||
### Closing Thoughts
|
||||
|
||||
I’m currently using it as a local solution on my desktop, moving away from web-based services. You can simply export the OPML file to get the same feeds on any of your mobile feed applications, which is what I’ve done.
|
||||
|
||||
The user interface is easy to use and provides a modern UX, if not the best. You can find all the essential features available while being a simple-looking RSS reader as well.
|
||||
|
||||
What do you think about NewsFlash? Do you prefer using something else? Feel free to share your thoughts in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/newsflash-feedreader/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://feedly.com/
|
||||
[2]: https://jangernert.github.io/FeedReader/
|
||||
[3]: https://itsfoss.com/feed-reader-apps-linux/
|
||||
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash.jpg?resize=945%2C648&ssl=1
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash-screenshot.jpg?resize=800%2C533&ssl=1
|
||||
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash-screenshot-1.jpg?resize=800%2C532&ssl=1
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/04/best-feed-reader-apps-linux.jpg?fit=800%2C450&ssl=1
|
||||
[8]: https://flathub.org/apps/details/com.gitlab.newsflash
|
||||
[9]: https://itsfoss.com/aur-arch-linux/
|
||||
[10]: https://itsfoss.com/what-is-flatpak/
|
||||
[11]: https://itsfoss.com/flatpak-guide/
|
||||
[12]: https://gitlab.com/news-flash/news_flash_gtk
|
@ -0,0 +1,198 @@
|
||||
[#]: subject: (3 reasons I use the Git cherry-pick command)
|
||||
[#]: via: (https://opensource.com/article/21/3/git-cherry-pick)
|
||||
[#]: author: (Manaswini Das https://opensource.com/users/manaswinidas)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
3 reasons I use the Git cherry-pick command
|
||||
======
|
||||
Cherry-picking solves a lot of problems in Git repositories. Here are
|
||||
three ways to fix your mistakes with git cherry-pick.
|
||||
![Measuring and baking a cherry pie recipe][1]
|
||||
|
||||
Finding your way around a version control system can be tricky. It can be massively overwhelming for a newbie, but being well-versed with the terminology and the basics of a version control system like Git is one of the baby steps to start contributing to open source.
|
||||
|
||||
Being familiar with Git can also help you out of sticky situations in your open source journey. Git is powerful and makes you feel in control: there is always a way to get back to a working version.
|
||||
|
||||
Here is an example to help you understand the importance of cherry-picking. Suppose you have made several commits in a branch, but you realize it's the wrong branch! What do you do now? Either you repeat all your changes in the correct branch and make a fresh commit, or you merge the branch into the correct branch. Wait, the former is too tedious, and you may not want to do the latter. So, is there a way? Yes, Git's got you covered. Here is where cherry-picking comes into play. As the term suggests, you can use it to hand-pick a commit from one branch and transfer it into another branch.
|
||||
|
||||
There are various reasons to use cherry-picking. Here are three of them.
|
||||
|
||||
### Avoid redundancy of efforts
|
||||
|
||||
There's no need to redo the same changes in a different branch when you can just copy the same commits to the other branch. Please note that cherry-picking commits will create a fresh commit with a new hash in the other branch, so please don't be confused if you see a different commit hash.
|
||||
|
||||
In case you are wondering what a commit hash is and how it is generated, here is a note to help you: A commit hash is a string generated using the [SHA-1][2] algorithm. The SHA-1 algorithm takes an input and outputs a unique 40-character hash. If you are on a [POSIX][3] system, try running this in your terminal:
|
||||
|
||||
|
||||
```
|
||||
$ echo -n "commit" | openssl sha1
|
||||
```
|
||||
|
||||
This outputs a unique 40-character hash, `4015b57a143aec5156fd1444a017a32137a3fd0f`. This hash represents the string `commit`.
|
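If you prefer Python, the standard library's `hashlib` module gives the same check; it should print the same 40-character digest as the `openssl` command above:

```
import hashlib

# SHA-1 of the string "commit"
print(hashlib.sha1(b"commit").hexdigest())
```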
||||
|
||||
A SHA-1 hash generated by Git when you make a commit represents much more than just a single string. It represents:
|
||||
|
||||
|
||||
```
|
||||
sha1(
|
||||
meta data
|
||||
commit message
|
||||
committer
|
||||
commit date
|
||||
author
|
||||
authoring date
|
||||
Hash of the entire tree object
|
||||
)
|
||||
```
|
||||
|
||||
This explains why you get a unique commit hash for the slightest change you make to your code. Not even a single change goes unnoticed. This is because Git has integrity.
|
||||
|
||||
### Undoing/restoring lost changes
|
||||
|
||||
Cherry-picking can be handy when you want to restore to a working version. When multiple developers are working on the same codebase, it is very likely for changes to get lost and the latest version to move to a stale or non-working version. That's where cherry-picking commits to the working version can be a savior.
|
||||
|
||||
#### How does it work?
|
||||
|
||||
Suppose there are two branches, `feature1` and `feature2`, and you want to apply commits from `feature1` to `feature2`.
|
||||
|
||||
On the `feature1` branch, run a `git log` command, and copy the commit hash that you want to cherry-pick. You can see a series of commits resembling the code sample below. The alphanumeric code following "commit" is the commit hash that you need to copy. You may choose to copy the first six characters (`966cf3` in this example) for the sake of convenience:
|
||||
|
||||
|
||||
```
|
||||
commit 966cf3d08b09a2da3f2f58c0818baa37184c9778 (HEAD -> master)
|
||||
Author: manaswinidas <me@example.com>
|
||||
Date: Mon Mar 8 09:20:21 2021 +1300
|
||||
|
||||
add instructions
|
||||
```
|
||||
|
||||
Then switch to `feature2` and run `git cherry-pick` on the hash you just got from the log:
|
||||
|
||||
|
||||
```
|
||||
$ git checkout feature2
|
||||
$ git cherry-pick 966cf3
|
||||
```
|
||||
|
||||
If the branch doesn't exist, use `git checkout -b feature2` to create it.
|
||||
|
||||
Here's a catch: You may encounter the situation below:
|
||||
|
||||
|
||||
```
|
||||
$ git cherry-pick 966cf3
|
||||
On branch feature2
|
||||
You are currently cherry-picking commit 966cf3d.
|
||||
|
||||
nothing to commit, working tree clean
|
||||
The previous cherry-pick is now empty, possibly due to conflict resolution.
|
||||
If you wish to commit it anyway, use:
|
||||
|
||||
git commit --allow-empty
|
||||
|
||||
Otherwise, please use 'git reset'
|
||||
```
|
||||
|
||||
Do not panic. Just run `git commit --allow-empty` as suggested:
|
||||
|
||||
|
||||
```
|
||||
$ git commit --allow-empty
|
||||
[feature2 afb6fcb] add instructions
|
||||
Date: Mon Mar 8 09:20:21 2021 +1300
|
||||
```
|
||||
|
||||
This opens your default editor and allows you to edit the commit message. It's acceptable to save the existing message if you have nothing to add.
|
||||
|
||||
There you go; you did your first cherry-pick. As discussed above, if you run a `git log` on branch `feature2`, you will see a different commit hash. Here is an example:
|
||||
|
||||
|
||||
```
|
||||
commit afb6fcb87083c8f41089cad58deb97a5380cb2c2 (HEAD -> feature2)
|
||||
Author: manaswinidas <me@example.com>
|
||||
Date: Mon Mar 8 09:20:21 2021 +1300
|
||||
add instructions
|
||||
```
|
||||
|
||||
Don't be confused about the different commit hash. That just distinguishes between the commits in `feature1` and `feature2`.
|
||||
|
||||
### Cherry-pick multiple commits
|
||||
|
||||
But what if you want to cherry-pick multiple commits? You can use:
|
||||
|
||||
|
||||
```
|
||||
git cherry-pick <commit-hash1> <commit-hash2>... <commit-hashn>
|
||||
```
|
||||
|
||||
Please note that you don't have to use the entire commit hash; you can use the first five or six characters.
|
||||
|
||||
Again, this is tedious. What if the commits you want to cherry-pick are a range of continuous commits? This approach is too much work. Don't worry; there's an easier way.
|
||||
|
||||
Assume that you have two branches:
|
||||
|
||||
* `feature1` includes commits you want to copy (from `commitA` (older) to `commitB`).
|
||||
* `feature2` is the branch you want the commits to be transferred to from `feature1`.
|
||||
|
||||
|
||||
|
||||
Then:
|
||||
|
||||
1. Enter `git checkout <feature1>`.
|
||||
2. Get the hashes of `commitA` and `commitB`.
|
||||
3. Enter `git checkout <branchB>`.
|
||||
4. Enter `git cherry-pick <commitA>^..<commitB>` (please note that this includes `commitA` and `commitB`).
|
||||
5. Should you encounter a merge conflict, [solve it as usual][5] and then type `git cherry-pick --continue` to resume the cherry-pick process.
|
||||
|
||||
|
||||
|
||||
### Important cherry-pick options
|
||||
|
||||
Here are some useful options from the [Git documentation][6] that you can use with the `cherry-pick` command:
|
||||
|
||||
* `-e`, `--edit`: With this option, `git cherry-pick` lets you edit the commit message prior to committing.
|
||||
* `-s`, `--signoff`: Add a "Signed-off-by" line at the end of the commit message. See the signoff option in git-commit(1) for more information.
|
||||
* `-S[<keyid>]`, `--gpg-sign[=<keyid>]`: GPG-signs the commit. The `keyid` argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space.
|
||||
* `--ff`: If the current HEAD is the same as the parent of the cherry-picked commit, then a fast-forward to this commit will be performed.
|
||||
|
||||
|
||||
|
||||
Here are some other sequencer subcommands (apart from continue):
|
||||
|
||||
* `--quit`: You can forget about the current operation in progress. This can be used to clear the sequencer state after a failed cherry-pick or revert.
|
||||
* `--abort`: Cancel the operation and return to the presequence state.
|
||||
|
||||
|
||||
|
||||
Here are some examples of cherry-picking:
|
||||
|
||||
* `git cherry-pick master`: Applies the change introduced by the commit at the tip of the master branch and creates a new commit with this change
|
||||
* `git cherry-pick master~4 master~2`: Applies the changes introduced by the fifth and third-last commits pointed to by master and creates two new commits with these changes
|
||||
|
||||
|
||||
|
||||
Feeling overwhelmed? You needn't remember all the commands. You can always type `git cherry-pick --help` in your terminal to look at more options or help.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/git-cherry-pick
|
||||
|
||||
作者:[Manaswini Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/manaswinidas
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/pictures/cherry-picking-recipe-baking-cooking.jpg?itok=XVwse6hw (Measuring and baking a cherry pie recipe)
|
||||
[2]: https://en.wikipedia.org/wiki/SHA-1
|
||||
[3]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
[4]: mailto:me@example.com
|
||||
[5]: https://opensource.com/article/20/4/git-merge-conflict
|
||||
[6]: https://git-scm.com/docs/git-cherry-pick
|
@ -0,0 +1,160 @@
|
||||
[#]: subject: (A tool to spy on your DNS queries: dnspeep)
|
||||
[#]: via: (https://jvns.ca/blog/2021/03/31/dnspeep-tool/)
|
||||
[#]: author: (Julia Evans https://jvns.ca/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
A tool to spy on your DNS queries: dnspeep
|
||||
======
|
||||
|
||||
Hello! Over the last few days I made a little tool called [dnspeep][1] that lets you see what DNS queries your computer is making, and what responses it’s getting. It’s about [250 lines of Rust right now][2].
|
||||
|
||||
I’ll talk about how you can try it, what it’s for, why I made it, and some problems I ran into while writing it.
|
||||
|
||||
### how to try it
|
||||
|
||||
I built some binaries so you can quickly try it out.
|
||||
|
||||
For Linux (x86):
|
||||
|
||||
```
|
||||
wget https://github.com/jvns/dnspeep/releases/download/v0.1.0/dnspeep-linux.tar.gz
|
||||
tar -xf dnspeep-linux.tar.gz
|
||||
sudo ./dnspeep
|
||||
```
|
||||
|
||||
For Mac:
|
||||
|
||||
```
|
||||
wget https://github.com/jvns/dnspeep/releases/download/v0.1.0/dnspeep-macos.tar.gz
|
||||
tar -xf dnspeep-macos.tar.gz
|
||||
sudo ./dnspeep
|
||||
```
|
||||
|
||||
It needs to run as root because it needs access to all the DNS packets your computer is sending. This is the same reason `tcpdump` needs to run as root – it uses `libpcap` which is the same library that tcpdump uses.
|
||||
|
||||
You can also read the source and build it yourself at <https://github.com/jvns/dnspeep> if you don’t want to just download binaries and run them as root :).
|
||||
|
||||
### what the output looks like
|
||||
|
||||
Here’s what the output looks like. Each line is a DNS query and the response.
|
||||
|
||||
```
|
||||
$ sudo dnspeep
|
||||
query name server IP response
|
||||
A firefox.com 192.168.1.1 A: 44.235.246.155, A: 44.236.72.93, A: 44.236.48.31
|
||||
AAAA firefox.com 192.168.1.1 NOERROR
|
||||
A bolt.dropbox.com 192.168.1.1 CNAME: bolt.v.dropbox.com, A: 162.125.19.131
|
||||
```
|
||||
|
||||
Those queries are from me going to `neopets.com` in my browser, and the `bolt.dropbox.com` query is because I’m running a Dropbox agent and I guess it phones home behind the scenes from time to time because it needs to sync.
|
||||
|
||||
### why make another DNS tool?
|
||||
|
||||
I made this because I think DNS can seem really mysterious when you don’t know a lot about it!
|
||||
|
||||
Your browser (and other software on your computer) is making DNS queries all the time, and I think it makes it seem a lot more “real” when you can actually see the queries and responses.
|
||||
|
||||
I also wrote this to be used as a debugging tool. I think the question “is this a DNS problem?” is harder to answer than it should be – I get the impression that when trying to check if a problem is caused by DNS people often use trial and error or guess instead of just looking at the DNS responses that their computer is getting.
|
||||
|
||||
### you can see which software is “secretly” using the Internet
|
||||
|
||||
One thing I like about this tool is that it gives me a sense for what programs on my computer are using the Internet! For example, I found out that something on my computer is making requests to `ping.manjaro.org` from time to time for some reason, probably to check I’m connected to the internet.
|
||||
|
||||
A friend of mine actually discovered using this tool that he had some corporate monitoring software installed on his computer from an old job that he’d forgotten to uninstall, so you might even find something you want to remove.
|
||||
|
||||
### tcpdump is confusing if you’re not used to it
|
||||
|
||||
My first instinct when trying to show people the DNS queries their computer is making was to say “well, use tcpdump”! And `tcpdump` does parse DNS packets!
|
||||
|
||||
For example, here’s what a DNS query for `incoming.telemetry.mozilla.org.` looks like:
|
||||
|
||||
```
|
||||
11:36:38.973512 wlp3s0 Out IP 192.168.1.181.42281 > 192.168.1.1.53: 56271+ A? incoming.telemetry.mozilla.org. (48)
|
||||
11:36:38.996060 wlp3s0 In IP 192.168.1.1.53 > 192.168.1.181.42281: 56271 3/0/0 CNAME telemetry-incoming.r53-2.services.mozilla.com., CNAME prod.data-ingestion.prod.dataops.mozgcp.net., A 35.244.247.133 (180)
|
||||
```
|
||||
|
||||
This is definitely possible to learn to read, for example let’s break down the query:
|
||||
|
||||
`192.168.1.181.42281 > 192.168.1.1.53: 56271+ A? incoming.telemetry.mozilla.org. (48)`
|
||||
|
||||
* `A?` means it’s a DNS **query** of type A
|
||||
* `incoming.telemetry.mozilla.org.` is the name being queried
|
||||
* `56271` is the DNS query’s ID
|
||||
* `192.168.1.181.42281` is the source IP/port
|
||||
* `192.168.1.1.53` is the destination IP/port
|
||||
* `(48)` is the length of the DNS packet
|
||||
|
||||
|
||||
|
||||
And the response breaks down like this:
|
||||
|
||||
`56271 3/0/0 CNAME telemetry-incoming.r53-2.services.mozilla.com., CNAME prod.data-ingestion.prod.dataops.mozgcp.net., A 35.244.247.133 (180)`
|
||||
|
||||
* `3/0/0` is the number of records in the response: 3 answers, 0 authority, 0 additional. I think tcpdump will only ever print out the answer responses though.
|
||||
* `CNAME telemetry-incoming.r53-2.services.mozilla.com`, `CNAME prod.data-ingestion.prod.dataops.mozgcp.net.`, and `A 35.244.247.133` are the three answers
|
||||
  * `56271` is the response’s ID, which matches up with the query’s ID. That’s how you can tell it’s a response to the request in the previous line.
|
||||
|
||||
|
||||
|
||||
I think what makes this format the most difficult to deal with (as a human who just wants to look at some DNS traffic) though is that you have to manually match up the requests and responses, and they’re not always on adjacent lines. That’s the kind of thing computers are good at!
|
||||
|
||||
So I decided to write a little program (`dnspeep`) which would do this matching up and also remove some of the information I felt was extraneous.
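
The core of that matching step is small enough to sketch. This is not dnspeep’s actual Rust code, just a hedged Python illustration of the idea: keep pending queries in a table keyed by the DNS transaction ID, and pair each response with the query that used the same ID.

```
pending = {}  # DNS transaction ID -> name that was queried

def handle_packet(txid, is_response, name, answers=None):
    """Pair each response with the earlier query that used the same transaction ID."""
    if not is_response:
        pending[txid] = name
    else:
        query_name = pending.pop(txid, "<unmatched>")
        print(f"{query_name:<35} {', '.join(answers) if answers else 'NOERROR'}")

# Simulated traffic: the query and its response arrive as two separate packets.
handle_packet(56271, False, "incoming.telemetry.mozilla.org.")
handle_packet(56271, True, "incoming.telemetry.mozilla.org.",
              ["CNAME telemetry-incoming.r53-2.services.mozilla.com.",
               "A 35.244.247.133"])
```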
|
||||
|
||||
### problems I ran into while writing it
|
||||
|
||||
When writing this I ran into a few problems.
|
||||
|
||||
* I had to patch the `pcap` crate to make it work properly with Tokio on Mac OS ([this change][3]). This was one of those bugs which took many hours to figure out and 1 line to fix :)
|
||||
* Different Linux distros seem to have different versions of `libpcap.so`, so I couldn’t easily distribute a binary that dynamically links libpcap (you can see other people having the same problem [here][4]). So I decided to statically compile libpcap into the tool on Linux. I still don’t really know how to do this properly in Rust, but I got it to work by copying the `libpcap.a` file into `target/release/deps` and then just running `cargo build`.
|
||||
* The `dns_parser` crate I’m using doesn’t support all DNS query types, only the most common ones. I probably need to switch to a different crate for parsing DNS packets but I haven’t found the right one yet.
|
||||
  * Because the `pcap` interface just gives you raw bytes (including the Ethernet frame), I needed to [write code to figure out how many bytes to strip from the beginning to get the packet’s IP header][5]. I’m pretty sure there are some cases I’m still missing there. (A rough sketch of the idea follows this list.)
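
Here is that rough sketch, a hedged Python illustration rather than the actual Rust code: a plain Ethernet frame puts the IP header 14 bytes in, an 802.1Q VLAN tag adds 4 more bytes, and other link types (Linux “cooked” captures, loopback) use different offsets entirely, which is presumably where the missing cases come from.

```
import struct

ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_VLAN = 0x8100

def ip_payload(frame: bytes) -> bytes:
    """Return the bytes starting at the IP header of a captured Ethernet frame."""
    (ethertype,) = struct.unpack_from("!H", frame, 12)  # bytes 12-13: EtherType
    offset = 14                                         # plain Ethernet header
    if ethertype == ETHERTYPE_VLAN:                     # skip a 4-byte 802.1Q tag
        (ethertype,) = struct.unpack_from("!H", frame, 16)
        offset = 18
    if ethertype != ETHERTYPE_IPV4:
        raise ValueError(f"not an IPv4 frame: EtherType {ethertype:#06x}")
    return frame[offset:]
```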
|
||||
|
||||
|
||||
|
||||
I also had a hard time naming it because there are SO MANY DNS tools already (dnsspy! dnssnoop! dnssniff! dnswatch!). I basically just looked at every synonym for “spy” and then picked one that seemed fun and did not already have a DNS tool attached to it.
|
||||
|
||||
One thing this program doesn’t do is tell you which process made the DNS query, there’s a tool called [dnssnoop][6] I found that does that. It uses eBPF and it looks cool but I haven’t tried it.
|
||||
|
||||
### there are probably still lots of bugs
|
||||
|
||||
I’ve only tested this briefly on Linux and Mac and I already know of at least one bug (caused by not supporting enough DNS query types), so please report problems you run into!
|
||||
|
||||
The bugs aren’t dangerous though – because the libpcap interface is read-only the worst thing that can happen is that it’ll get some input it doesn’t understand and print out an error or crash.
|
||||
|
||||
### writing small educational tools is fun
|
||||
|
||||
I’ve been having a lot of fun writing small educational DNS tools recently.
|
||||
|
||||
So far I’ve made:
|
||||
|
||||
* <https://dns-lookup.jvns.ca> (a simple way to make DNS queries)
|
||||
* <https://dns-lookup.jvns.ca/trace.html> (shows you exactly what happens behind the scenes when you make a DNS query)
|
||||
* this tool (`dnspeep`)
|
||||
|
||||
|
||||
|
||||
Historically I’ve mostly tried to explain existing tools (like `dig` or `tcpdump`) instead of writing my own tools, but often I find that the output of those tools is confusing, so I’m interested in making more friendly ways to see the same information so that everyone can understand what DNS queries their computer is making instead of just tcpdump wizards :).
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2021/03/31/dnspeep-tool/
|
||||
|
||||
作者:[Julia Evans][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://jvns.ca/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/jvns/dnspeep
|
||||
[2]: https://github.com/jvns/dnspeep/blob/f5780dc822df5151f83703f05c767dad830bd3b2/src/main.rs
|
||||
[3]: https://github.com/ebfull/pcap/pull/168
|
||||
[4]: https://github.com/google/gopacket/issues/734
|
||||
[5]: https://github.com/jvns/dnspeep/blob/f5780dc822df5151f83703f05c767dad830bd3b2/src/main.rs#L136
|
||||
[6]: https://github.com/lilydjwg/dnssnoop
|
@ -0,0 +1,288 @@
|
||||
[#]: subject: (Playing with modular synthesizers and VCV Rack)
|
||||
[#]: via: (https://fedoramagazine.org/vcv-rack-modular-synthesizers/)
|
||||
[#]: author: (Yann Collette https://fedoramagazine.org/author/ycollet/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Playing with modular synthesizers and VCV Rack
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
You know about using Fedora Linux to write code and books, play games, and listen to music. Via [Fedora Labs][2], you can also do system simulation, work on electronic circuits, and work with embedded systems. But you can also make music with the VCV Rack software. For that, you can use [Fedora Jam][3] or work from a standard Fedora Workstation installation with the [LinuxMAO Copr][4] repository enabled. This article describes how to use modular synthesizers controlled by Fedora Linux.
|
||||
|
||||
### Some history
|
||||
|
||||
The origin of the modular synthesizer dates back to the 1950’s and was soon followed in the 60’s by the Moog modular synthesizer. [Wikipedia has a lot more on the history][5].
|
||||
|
||||
![Moog synthesizer circa 1975][6]
|
||||
|
||||
But, by the way, what is a modular synthesizer?
|
||||
|
||||
These synthesizers are made of hardware “blocks”, or modules, with specific functions like oscillators, amplifiers, sequencers, and various other functions. The blocks are connected together by wires. You make music with these connected blocks by manipulating knobs. Most of these modular synthesizers came without a keyboard.
|
||||
|
||||
![][7]
|
||||
|
||||
Modular synthesizers were very common in the early days of progressive rock (with Emerson Lake and Palmer) and electronic music (Klaus Schulze, for example).
|
||||
|
||||
After a while, people forgot about modular synthesizers because they were cumbersome, hard to tune, hard to fix, and setting up a patch (all the wires connecting the modules) was a time-consuming task that was not easy to perform live. Price was also a problem because systems were mostly sold as a small series of modules, and you needed at least 10 of them to have a decent set-up.
|
||||
|
||||
In the last few years, there has been a rebirth of these synthesizers. Doepfer produces some affordable models, and a lot of modules are also available with open source schematics and code (check [Mutable Instruments][8], for example).
|
||||
|
||||
But, a few years ago, came … [VCV Rack][9]. VCV Rack stands for **V**oltage **C**ontrolled **V**irtual Rack: a software-based modular synthesizer led by Andrew Belt. His first commit on [GitHub][10] was on Monday, Nov 14, 2016, at 18:34:40.
|
||||
|
||||
### Getting started with VCV Rack
|
||||
|
||||
#### Installation
|
||||
|
||||
To be able to use VCV Rack, you can either go to the [VCV Rack web site][9] and install a binary for Linux, or you can activate a Copr repository dedicated to music: the [LinuxMAO Copr][4] repository (disclaimer: I am the man behind this Copr repository). As a reminder, Copr is not officially supported by the Fedora infrastructure. Use packages at your own risk.
|
||||
|
||||
Enable the repository with:
|
||||
|
||||
```
|
||||
sudo dnf copr enable ycollet/linuxmao
|
||||
```
|
||||
|
||||
Then install VCV Rack:
|
||||
|
||||
```
|
||||
sudo dnf install Rack-v1
|
||||
```
|
||||
|
||||
You can now start VCV Rack from the console or via the Multimedia entry in the start menu:
|
||||
|
||||
```
|
||||
$ Rack &
|
||||
```
|
||||
|
||||
![][11]
|
||||
|
||||
#### Add some modules
|
||||
|
||||
The first step is now to clean up everything and leave just the **AUDIO-8** module. You can remove modules in various ways:
|
||||
|
||||
* Click on a module and hit the backspace key
|
||||
* Right click on a module and click “delete”
|
||||
|
||||
|
||||
|
||||
The **AUDIO-8** module allows you to connect from and to audio devices. Here are the features for this module.
|
||||
|
||||
![][12]
|
||||
|
||||
Now it’s time to produce some noise (for the music, we’ll see that later).
|
||||
|
||||
Right click inside VCV Rack (but outside of a module) and a module search window will appear.
|
||||
|
||||
![][13]
|
||||
|
||||
Enter “VCO-2” in the search bar and click on the image of the module. This module is now on VCV Rack.
|
||||
|
||||
To move a module: click and drag the module.
|
||||
|
||||
To move a group of modules, hold shift + click + drag a module, and all the modules to the right of the dragged module will move with the selected module.
|
||||
|
||||
![][14]
|
||||
|
||||
Now you need to connect the modules by drawing a wire between the “OUT” connector of **VCO-2** module and the “1” “TO DEVICE” of **AUDIO-8** module.
|
||||
|
||||
Left-click on the “OUT” connector of the **VCO-2** module and while keeping the left-click, drag your mouse to the “1” “TO DEVICE” of the **AUDIO-8** module. Once on this connector, release your left-click.
|
||||
|
||||
![][15]
|
||||
|
||||
To remove a wire, do a right-click on the connector where the wire is connected.
|
||||
|
||||
To draw a wire from an already connected connector, hold “ctrl+left+click” and draw the wire. For example, you can draw a wire from “OUT” connector of module **VCO-2** to the “2” “TO DEVICE” connector of **AUDIO-8** module.
|
||||
|
||||
#### What are these wires?
|
||||
|
||||
Wires allow you to control various parts of a module. The information carried by these wires is Control Voltages, Gate signals, and Trigger signals.
|
||||
|
||||
**CV** ([Control Voltages][16]): These typically control pitch and range between a minimum value of around -1 to -5 volts and a maximum value between 1 and 5 volts.
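
Pitch CV is usually interpreted on a 1 V/octave scale: raising the control voltage by one volt doubles the frequency. The short Python sketch below shows the math; the 0 V = C4 reference is the common VCV Rack convention and is an assumption here, not something defined in this article.

```
C4_HZ = 261.6256  # assumed reference: 0 V maps to C4 in VCV Rack's voltage standard

def volts_to_hz(cv_volts):
    """1 V/octave: every additional volt doubles the frequency."""
    return C4_HZ * 2 ** cv_volts

for v in (-1.0, 0.0, 1.0, 2.0):
    print(f"{v:+.1f} V -> {volts_to_hz(v):8.2f} Hz")
# -1.0 V ->   130.81 Hz, +0.0 V ->   261.63 Hz, +1.0 V ->   523.25 Hz, +2.0 V ->  1046.50 Hz
```

In other words, a sequencer only has to step through voltages; the oscillator turns those voltages into pitches.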
|
||||
|
||||
What is the **GATE** signal you find on some modules? Imagine a keyboard sending out on/off data to an amplifier module: its voltage is at zero when no key is pressed and jumps up to max level (5v for example) when a key is pressed; release the key, and the voltage goes back to zero again. A **GATE** signal can be emitted by things other than a keyboard. A clock module, for example, can emit gate signals.
|
||||
|
||||
Finally, what is a **TRIGGER** signal you find on some modules? It’s a square pulse which starts when you press a key and stops after a while.
|
||||
|
||||
In the modular world, **gate** and **trigger** signals are used to trigger drum machines, restart clocks, reset sequencers and so on.
|
||||
|
||||
#### Connecting everybody
|
||||
|
||||
Let’s control an oscillator with a CV signal. But before that, remove your **VCO-2** module (click on the module and hit backspace).
|
||||
|
||||
Do a right-click on VCV Rack a search for these modules:
|
||||
|
||||
* **VCO-1** (a controllable oscillator)
|
||||
* **LFO-1** (a low frequency oscillator which will control the frequency of the **VCO-1**)
|
||||
|
||||
|
||||
|
||||
Now draw wires:
|
||||
|
||||
* between the “SAW” connector of the **LFO-1** module and the “V/OCT” (Voltage per Octave) connector of the **VCO-1** module
|
||||
* between the “SIN” connector of the **VCO-1** module and the “1” “TO DEVICE” of the **AUDIO-8** module
|
||||
|
||||
|
||||
|
||||
![][17]
|
||||
|
||||
You can adjust the range of the frequency by turning the FREQ knob of the **LFO-1** module.
|
||||
|
||||
You can also adjust the low frequency of the sequence by turning the FREQ knob of the **VCO-1** module.
|
||||
|
||||
### The Fundamental modules for VCV Rack
|
||||
|
||||
When you install **Rack-v1**, the **Rack-v1-Fundamental** package is automatically installed. **Rack-v1** only installs the rack system, with input/output modules, but without other basic modules.
|
||||
|
||||
In the Fundamental VCV Rack packages, there are various modules available.
|
||||
|
||||
![][18]
|
||||
|
||||
Some important modules to have in mind:
|
||||
|
||||
* **VCO**: Voltage Controlled Oscillator
|
||||
* **LFO**: Low Frequency Oscillator
|
||||
* **VCA**: Voltage Controlled Amplifier
|
||||
* **SEQ**: Sequencers (to define a sequence of voltage / notes)
|
||||
  * **SCOPE**: an oscilloscope, very useful for debugging your connections
|
||||
* **ADSR**: a module to generate an envelope for a note. ADSR stands for **A**ttack / **D**ecay / **S**ustain / **R**elease
|
||||
|
||||
|
||||
|
||||
And there are a lot more functions available. I recommend you watch tutorials related to VCV Rack on YouTube to discover all these functionalities, in particular the Video Channel of [Omri Cohen][19].
|
||||
|
||||
### What to do next
|
||||
|
||||
Are you limited to the Fundamental modules? No, certainly not! VCV Rack provides some closed source modules (for which you’ll need to pay) and a lot of other modules that are open source. All the open source modules are packaged for Fedora 32 and 33. How many VCV Rack packages are available?
|
||||
|
||||
```
|
||||
sudo dnf search rack-v1 | grep src | wc -l
|
||||
150
|
||||
```
|
||||
|
||||
And counting. Each month new packages appear. If you want to install everything at once, run:
|
||||
|
||||
```
|
||||
sudo dnf install `dnf search rack-v1 | grep src | sed -e "s/\(^.*\)\.src.*/\1/"`
|
||||
```
|
||||
|
||||
Here are some recommended modules to start with.
|
||||
|
||||
* BogAudio (dnf install rack-v1-BogAudio)
|
||||
* AudibleInstruments (dnf install rack-v1-AudibleInstruments)
|
||||
* Valley (dnf install rack-v1-Valley)
|
||||
* Befaco (dnf install rack-v1-Befaco)
|
||||
* Bidoo (dnf install rack-v1-Bidoo)
|
||||
* VCV-Recorder (dnf install rack-v1-VCV-Recorder)
|
||||
|
||||
|
||||
|
||||
### A more complex case
|
||||
|
||||
![][20]
|
||||
|
||||
From Fundamental, use **MIXER**, **AUDIO-8**, **MUTERS**, **SEQ-3**, **VCO-1**, **ADSR**, **VCA**.
|
||||
|
||||
Use:
|
||||
|
||||
* **Plateau** module from Valley package (it’s an enhanced reverb).
|
||||
* **BassDrum9** from DrumKit package.
|
||||
* **HolonicSystems-Gaps** from HolonicSystems-Free package.
|
||||
|
||||
|
||||
|
||||
How it sounds: checkout [this video][21] on my YouTube channel.
|
||||
|
||||
### Managing MIDI
|
||||
|
||||
VCV Rack has a bunch of modules dedicated to MIDI management.
|
||||
|
||||
![][22]
|
||||
|
||||
With these modules and with a tool like the Akai LPD-8:
|
||||
|
||||
![][23]
|
||||
|
||||
You can easily control knobs in VCV Rack modules from a real-life device.
|
||||
|
||||
Before buying a device, check its Linux compatibility. Normally, every “USB Class Compliant” device works out of the box in every Linux distribution.
|
||||
|
||||
The MIDI → Knob mapping is done via the “MIDI-MAP” module. Once you have selected the MIDI driver (first line) and MIDI device (second line), click on “unmapped”. Then, touch a knob you want to control on a module (for example the “FREQ” knob of the VCO-1 Fundamental module). Now, turn the knob of the MIDI device and there you are; the mapping is done.
|
||||
|
||||
### Artistic scopes
|
||||
|
||||
Last topic of this introduction paper: the scopes.
|
||||
|
||||
VCV Rack has several standard (and useful) scopes. The **SCOPE** module from Fundamental for example.
|
||||
|
||||
But it also has some interesting scopes.
|
||||
|
||||
![][24]
|
||||
|
||||
This example uses 3 **VCO-1** modules from Fundamental and a **fullscope** from wiqid-anomalies.
|
||||
|
||||
The first connector at the top of the scope corresponds to the X input. The one below is the Y input and the other one below controls the color of the graph.
|
||||
|
||||
For the complete documentation of this module, check:
|
||||
|
||||
  * the documentation of [wiqid-anomalies][25]
|
||||
* the documentation of the [fullscope][26] module
|
||||
  * the GitHub repository of the [wiqid-anomalies][27] module
|
||||
|
||||
|
||||
|
||||
### For more information
|
||||
|
||||
If you’re looking for help or want to talk to the VCV Rack Community, visit their [Discourse forum][28]. You can get _patches_ (a patch is the file saved by VCV Rack) for VCV Rack on [Patch Storage][29].
|
||||
|
||||
Check out what vintage synthesizers look like at the [Vintage Synthesizer Museum][30] or [Google’s online exhibition][31]. The documentary “[I Dream of Wires][32]” provides a look at the history of modular synthesizers. Finally, the book _[Developing Virtual Synthesizers with VCV Rack][33]_ provides more depth.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/vcv-rack-modular-synthesizers/
|
||||
|
||||
作者:[Yann Collette][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/ycollet/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2021/03/music_synthesizers-816x345.jpg
|
||||
[2]: https://labs.fedoraproject.org/
|
||||
[3]: https://fedoraproject.org/wiki/Fedora_Jam_Audio_Spin
|
||||
[4]: https://copr.fedorainfracloud.org/coprs/ycollet/linuxmao/
|
||||
[5]: https://en.wikipedia.org/wiki/Modular_synthesizer
|
||||
[6]: https://fedoramagazine.org/wp-content/uploads/2021/03/Moog_Modular_55_img1-1024x561.png
|
||||
[7]: https://fedoramagazine.org/wp-content/uploads/2021/03/modular_synthesizer_-_jam_syntotek_stockholm_2014-09-09_photo_by_henning_klokkerasen_edit-1.jpg
|
||||
[8]: https://mutable-instruments.net/
|
||||
[9]: https://vcvrack.com/
|
||||
[10]: https://github.com/VCVRack/Rack
|
||||
[11]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_215239-1024x498.png
|
||||
[12]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_232052.png
|
||||
[13]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210310_191531-1024x479.png
|
||||
[14]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_221358.png
|
||||
[15]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_222055.png
|
||||
[16]: https://en.wikipedia.org/wiki/CV/gate
|
||||
[17]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_223840.png
|
||||
[18]: https://fedoramagazine.org/wp-content/uploads/2021/03/Fundamental-showcase-1024x540.png
|
||||
[19]: https://www.youtube.com/channel/UCuWKHSHTHMV_nVSeNH4gYAg
|
||||
[20]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210309_233506.png
|
||||
[21]: https://www.youtube.com/watch?v=HhJ_HY2rN5k
|
||||
[22]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210310_193452-1024x362.png
|
||||
[23]: https://fedoramagazine.org/wp-content/uploads/2021/03/235492.jpg
|
||||
[24]: https://fedoramagazine.org/wp-content/uploads/2021/03/Screenshot_20210310_195044.png
|
||||
[25]: https://library.vcvrack.com/wiqid-anomalies
|
||||
[26]: https://library.vcvrack.com/wiqid-anomalies/fullscope
|
||||
[27]: https://github.com/wiqid/anomalies
|
||||
[28]: https://community.vcvrack.com/
|
||||
[29]: https://patchstorage.com/platform/vcv-rack/
|
||||
[30]: https://vintagesynthesizermuseum.com/
|
||||
[31]: https://artsandculture.google.com/story/7AUBadCIL5Tnow
|
||||
[32]: http://www.idreamofwires.org/
|
||||
[33]: https://www.leonardo-gabrielli.info/vcv-book
|
@ -0,0 +1,180 @@
|
||||
[#]: subject: (Use this open source tool to monitor variables in Python)
|
||||
[#]: via: (https://opensource.com/article/21/4/monitor-debug-python)
|
||||
[#]: author: (Tian Gao https://opensource.com/users/gaogaotiantian)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Use this open source tool to monitor variables in Python
|
||||
======
|
||||
Watchpoints is a simple but powerful tool to help you with monitoring
|
||||
variables while debugging Python.
|
||||
![Looking back with binoculars][1]
|
||||
|
||||
When debugging code, you're often faced with figuring out when a variable changes. Without any advanced tools, you have the option of using print statements to announce the variables when you expect them to change. However, this is a very ineffective way because the variables could change in many places, and constantly printing them to a terminal is noisy, while printing them to a log file becomes unwieldy.
|
||||
|
||||
This is a common issue, but now there is a simple but powerful tool to help you with monitoring variables: [watchpoints][2].
|
||||
|
||||
The [watchpoint concept is common in C and C++ debuggers][3] to monitor memories, but there's a lack of equivalent tools in Python. `watchpoints` fills in the gap.
|
||||
|
||||
### Installing
|
||||
|
||||
To use it, you must first install it by using `pip`:
|
||||
|
||||
|
||||
```
|
||||
$ python3 -m pip install watchpoints
|
||||
```
|
||||
|
||||
### Using watchpoints in Python
|
||||
|
||||
For any variable you'd like to monitor, use the **watch** function on it.
|
||||
|
||||
|
||||
```
|
||||
from watchpoints import watch
|
||||
|
||||
a = 0
|
||||
watch(a)
|
||||
a = 1
|
||||
```
|
||||
|
||||
As the variable changes, information about its value is printed to **stdout**:
|
||||
|
||||
|
||||
```
|
||||
====== Watchpoints Triggered ======
|
||||
|
||||
Call Stack (most recent call last):
|
||||
<module> (my_script.py:5):
|
||||
> a = 1
|
||||
a:
|
||||
0
|
||||
->
|
||||
1
|
||||
```
|
||||
|
||||
The information includes:
|
||||
|
||||
* The line where the variable was changed.
|
||||
* The call stack.
|
||||
* The previous/current value of the variable.
|
||||
|
||||
|
||||
|
||||
It not only works with the variable itself, but it also works with object changes:
|
||||
|
||||
|
||||
```
|
||||
from watchpoints import watch
|
||||
|
||||
a = []
|
||||
watch(a)
|
||||
a = {} # Trigger
|
||||
a["a"] = 2 # Trigger
|
||||
```
|
||||
|
||||
The callback is triggered when the variable **a** is reassigned, but also when the object assigned to **a** is changed.
|
||||
|
||||
What makes it even more interesting is that the monitor is not limited by the scope. You can watch the variable/object anywhere you want, and the callback is triggered no matter what function the program is executing.
|
||||
|
||||
|
||||
```
|
||||
from watchpoints import watch
|
||||
|
||||
def func(var):
|
||||
var["a"] = 1
|
||||
|
||||
a = {}
|
||||
watch(a)
|
||||
func(a)
|
||||
```
|
||||
|
||||
For example, this code prints:
|
||||
|
||||
|
||||
```
|
||||
====== Watchpoints Triggered ======
|
||||
|
||||
Call Stack (most recent call last):
|
||||
|
||||
<module> (my_script.py:8):
|
||||
> func(a)
|
||||
func (my_script.py:4):
|
||||
> var["a"] = 1
|
||||
a:
|
||||
{}
|
||||
->
|
||||
{'a': 1}
|
||||
```
|
||||
|
||||
The **watch** function can monitor more than a variable. It can also monitor object attributes and individual elements of a dictionary or list.
|
||||
|
||||
|
||||
```
|
||||
from watchpoints import watch
|
||||
|
||||
class MyObj:
|
||||
def __init__(self):
|
||||
self.a = 0
|
||||
|
||||
obj = MyObj()
|
||||
d = {"a": 0}
|
||||
watch(obj.a, d["a"]) # Yes you can do this
|
||||
obj.a = 1 # Trigger
|
||||
d["a"] = 1 # Trigger
|
||||
```
|
||||
|
||||
This could help you narrow down to some specific objects that you are interested in.
|
||||
|
||||
If you are not happy about the format of the output, you can customize it. Just define your own callback function:
|
||||
|
||||
|
||||
```
|
||||
watch(a, callback=my_callback)
|
||||
|
||||
# Or set it globally
|
||||
|
||||
watch.config(callback=my_callback)
|
||||
```
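
As a hedged sketch of what such a callback could look like (the three-argument signature used here is an assumption, so check the watchpoints documentation for the exact interface), you could shrink the default report to a single line:

```
from watchpoints import watch

def my_callback(frame, elem, exec_info):
    # frame is the Python frame where the change happened; elem and exec_info
    # (assumed parameter names) describe the watched variable and the change.
    print(f"watched value changed at {frame.f_code.co_filename}:{frame.f_lineno}")

a = 0
watch(a, callback=my_callback)
a = 1  # prints one line from my_callback instead of the default report
```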
|
||||
|
||||
You can even bring up **pdb** when the trigger is hit:
|
||||
|
||||
|
||||
```
|
||||
watch.config(pdb=True)
|
||||
```
|
||||
|
||||
This behaves similarly to **breakpoint()**, giving you a debugger-like experience.
|
||||
|
||||
If you don’t want to import the function in every single file, you can make it global by using the **install** function:
|
||||
|
||||
|
||||
```
|
||||
watch.install() # or watch.install("func_name") and use it as func_name()
|
||||
```
|
||||
|
||||
Personally, I think the coolest thing about watchpoints is its intuitive usage. Are you interested in some data? Just "watch" it, and you'll know when your variable changes.
|
||||
|
||||
### Try watchpoints
|
||||
|
||||
I developed and maintain `watchpoints` on [GitHub][2] and have released it under the Apache 2.0 license. Install it and use it, and of course, contributions are always welcome.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/4/monitor-debug-python
|
||||
|
||||
作者:[Tian Gao][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/gaogaotiantian
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/look-binoculars-sight-see-review.png?itok=NOw2cm39 (Looking back with binoculars)
|
||||
[2]: https://github.com/gaogaotiantian/watchpoints
|
||||
[3]: https://opensource.com/article/21/3/debug-code-gdb
|
136
sources/tech/20210401 Find what changed in a Git commit.md
Normal file
@ -0,0 +1,136 @@
|
||||
[#]: subject: (Find what changed in a Git commit)
|
||||
[#]: via: (https://opensource.com/article/21/4/git-whatchanged)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Find what changed in a Git commit
|
||||
======
|
||||
Git offers several ways you can quickly see which files changed in a
|
||||
commit.
|
||||
![Code going into a computer.][1]
|
||||
|
||||
If you use Git every day, you probably make a lot of commits. If you're using Git every day in a project with other people, it's safe to assume that _everyone_ is making lots of commits. Every day. And this means you're aware of how disorienting a Git log can become, with a seemingly eternal scroll of changes and no sign of what's been changed.
|
||||
|
||||
So how do you find out what file changed in a specific commit? It's easier than you think.
|
||||
|
||||
### Find what file changed in a commit
|
||||
|
||||
To find out which files changed in a given commit, use the `git log --raw` command. It's the fastest and simplest way to get insight into which files a commit affects. The `git log` command is underutilized in general, largely because it has so many formatting options, and many users get overwhelmed by too many choices and, in some cases, unclear documentation.
|
||||
|
||||
The log mechanism in Git is surprisingly flexible, though, and the `--raw` option provides a log of commits in your current branch, plus a list of each file that had changes made to it.
|
||||
|
||||
Here's the output of a standard `git log`:
|
||||
|
||||
|
||||
```
|
||||
$ git log
|
||||
commit fbbbe083aed75b24f2c77b1825ecab10def0953c (HEAD -> dev, origin/dev)
|
||||
Author: tux <[tux@example.com][2]>
|
||||
Date: Sun Nov 5 21:40:37 2020 +1300
|
||||
|
||||
exit immediately from failed download
|
||||
|
||||
commit 094f9948cd995acfc331a6965032ea0d38e01f03 (origin/master, master)
|
||||
Author: Tux <[tux@example.com][2]>
|
||||
Date: Fri Aug 5 02:05:19 2020 +1200
|
||||
|
||||
export makeopts from etc/example.conf
|
||||
|
||||
commit 76b7b46dc53ec13316abb49cc7b37914215acd47
|
||||
Author: Tux <[tux@example.com][2]>
|
||||
Date: Sun Jul 31 21:45:24 2020 +1200
|
||||
|
||||
fix typo in help message
|
||||
```
|
||||
|
||||
Even when the author helpfully specifies in the commit message which files changed, the log is fairly terse.
|
||||
|
||||
Here's the output of `git log --raw`:
|
||||
|
||||
|
||||
```
|
||||
$ git log --raw
|
||||
commit fbbbe083aed75b24f2c77b1825ecab10def0953c (HEAD -> dev, origin/dev)
|
||||
Author: tux <[tux@example.com][2]>
|
||||
Date: Sun Nov 5 21:40:37 2020 +1300
|
||||
|
||||
exit immediately from failed download
|
||||
|
||||
:100755 100755 cbcf1f3 4cac92f M src/example.lua
|
||||
|
||||
commit 094f9948cd995acfc331a6965032ea0d38e01f03 (origin/master, master)
|
||||
Author: Tux <[tux@example.com][2]>
|
||||
Date: Fri Aug 5 02:05:19 2020 +1200
|
||||
|
||||
export makeopts from etc/example.conf
|
||||
|
||||
:100755 100755 4c815c0 cbcf1f3 M src/example.lua
|
||||
:100755 100755 71653e1 8f5d5a6 M src/example.spec
|
||||
:100644 100644 9d21a6f e33caba R100 etc/example.conf etc/example.conf-default
|
||||
|
||||
commit 76b7b46dc53ec13316abb49cc7b37914215acd47
|
||||
Author: Tux <[tux@example.com][2]>
|
||||
Date: Sun Jul 31 21:45:24 2020 +1200
|
||||
|
||||
fix typo in help message
|
||||
|
||||
:100755 100755 e253aaf 4c815c0 M src/example.lua
|
||||
```
|
||||
|
||||
This tells you exactly which file was added to the commit and how the file was changed (`A` for added, `M` for modified, `R` for renamed, and `D` for deleted).
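
If you want to reuse that information in a script instead of reading it by eye, each raw line has a predictable shape: old mode, new mode, old and new object hashes, the status letter, and the path (or paths, for a rename). Here is a small, hedged Python sketch, not part of Git itself, that pulls out the status and paths:

```
import subprocess

def changed_files(rev_range="HEAD"):
    """Yield (status, paths) for every file touched in `git log --raw`."""
    log = subprocess.run(
        ["git", "log", "--raw", "--no-merges", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in log.splitlines():
        if not line.startswith(":"):
            continue                # skip commit headers and messages
        fields = line.split()       # paths with spaces would need tab-aware splitting
        yield fields[4], fields[5:] # e.g. ("M", ["src/example.lua"])

if __name__ == "__main__":
    for status, paths in changed_files():
        print(status, *paths)
```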
|
||||
|
||||
### Git whatchanged
|
||||
|
||||
The `git whatchanged` command is a legacy command that predates the log function. Its documentation advises using `git log --raw` instead and implies the command is essentially deprecated. However, I still find it a useful shortcut to (mostly) the same output (although merge commits are excluded), and I anticipate creating an alias for it should it ever be removed. If you don't need merge commits in your log (and you probably don't, if you're only looking to see which files changed), try `git whatchanged` as an easy mnemonic.
|
||||
|
||||
### View changes
|
||||
|
||||
Not only can you see which files changed, but you can also make `git log` display exactly what changed in the files. Your Git log can produce an inline diff, a line-by-line display of all changes for each file, with the `--patch` option:
|
||||
|
||||
|
||||
```
|
||||
commit 62a2daf8411eccbec0af69e4736a0fcf0a469ab1 (HEAD -> master)
|
||||
Author: Tux <[Tux@example.com][3]>
|
||||
Date: Wed Mar 10 06:46:58 2021 +1300
|
||||
|
||||
commit
|
||||
|
||||
diff --git a/hello.txt b/hello.txt
|
||||
index 65a56c3..36a0a7d 100644
|
||||
\--- a/hello.txt
|
||||
+++ b/hello.txt
|
||||
@@ -1,2 +1,2 @@
|
||||
Hello
|
||||
-world
|
||||
+opensource.com
|
||||
```
|
||||
|
||||
In this example, the one-word line "world" was removed from `hello.txt` and the new line "opensource.com" was added.
|
||||
|
||||
These patches can be used with common Unix utilities like [diff and patch][4], should you need to make the same changes manually elsewhere. The patches are also a good way to summarize the important parts of what new information a specific commit introduces. This is an invaluable overview when you've introduced a bug during a sprint. To find the cause of the error faster, you can ignore the parts of a file that didn't change and review just the new code.
|
||||
|
||||
### Simple commands for complex results
|
||||
|
||||
You don't have to understand refs and branches and commit hashes to view what files changed in a commit. Your Git log was designed to report Git activity to you, and if you want to format it in a specific way or extract specific information, it's often a matter of wading through many screens of documentation to put together the right command. Luckily, one of the most common requests about Git history is available with just one or two options: `--raw` and `--patch`. And if you can't remember `--raw`, just think, "Git, what changed?" and type `git whatchanged`.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/4/git-whatchanged
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 (Code going into a computer.)
|
||||
[2]: mailto:tux@example.com
|
||||
[3]: mailto:Tux@example.com
|
||||
[4]: https://opensource.com/article/18/8/diffs-patches
|
286
sources/tech/20210401 Use awk to calculate letter frequency.md
Normal file
@ -0,0 +1,286 @@
|
||||
[#]: subject: (Use awk to calculate letter frequency)
|
||||
[#]: via: (https://opensource.com/article/21/4/gawk-letter-game)
|
||||
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Use awk to calculate letter frequency
|
||||
======
|
||||
Write an awk script to determine the most (and least) common letters in
|
||||
a set of words.
|
||||
![Typewriter keys in multicolor][1]
|
||||
|
||||
I recently started writing a game where you build words using letter tiles. To create the game, I needed to know the frequency of letters across regular words in the English language, so I could present a useful set of letter tiles. Letter frequency is discussed in various places, including [on Wikipedia][2], but I wanted to calculate the letter frequency myself.
|
||||
|
||||
Linux provides a list of words in the `/usr/share/dict/words` file, so I already have a list of likely words to use. The `words` file contains lots of words that I want, but a few that I don't. I wanted a list of all words that weren't compound words (no hyphens or spaces) or proper nouns (no uppercase letters). To get that list, I can run the `grep` command to pull out only the lines that consist solely of lowercase letters:
|
||||
|
||||
|
||||
```
|
||||
$ grep '^[a-z]*$' /usr/share/dict/words
|
||||
```
|
||||
|
||||
This regular expression asks `grep` to match patterns that are only lowercase letters. The characters `^` and `$` in the pattern represent the start and end of the line, respectively. The `[a-z]` grouping will match only the lowercase letters **a** to **z**.
|
||||
|
||||
Here's a quick sample of the output:
|
||||
|
||||
|
||||
```
|
||||
$ grep '^[a-z]*$' /usr/share/dict/words | head
|
||||
a
|
||||
aa
|
||||
aaa
|
||||
aah
|
||||
aahed
|
||||
aahing
|
||||
aahs
|
||||
aal
|
||||
aalii
|
||||
aaliis
|
||||
```
|
||||
|
||||
And yes, those are all valid words. For example, "aahed" is the past tense exclamation of "aah," as in relaxation. And an "aalii" is a bushy tropical shrub.
|
||||
|
||||
Now I just need to write a `gawk` script to do the work of counting the letters in each word, and then print the relative frequency of each letter it finds.
|
||||
|
||||
### Counting letters
|
||||
|
||||
One way to count letters in `gawk` is to iterate through each character in each input line and count occurrences of each letter **a** to **z**. The `substr` function will return a substring of a given length, such as a single letter, from a larger string. For example, this code example will evaluate each character `c` from the input:
|
||||
|
||||
|
||||
```
|
||||
{
|
||||
len = length($0); for (i = 1; i <= len; i++) {
|
||||
c = substr($0, i, 1);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
If I start with a global string `LETTERS` that contains the alphabet, I can use the `index` function to find the location of a single letter in the alphabet. I'll expand the `gawk` code example to evaluate only the letters **a** to **z** in the input:
|
||||
|
||||
|
||||
```
|
||||
BEGIN { LETTERS = "abcdefghijklmnopqrstuvwxyz" }
|
||||
|
||||
{
|
||||
len = length($0); for (i = 1; i <= len; i++) {
|
||||
c = substr($0, i, 1);
|
||||
ltr = index(LETTERS, c);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Note that the index function returns the first occurrence of the letter from the `LETTERS` string, starting with 1 at the first letter, or zero if not found. If I have an array that is 26 elements long, I can use the array to count the occurrences of each letter. I'll add this to my code example to increment (using `++`) the count for each letter as it appears in the input:
|
||||
|
||||
|
||||
```
|
||||
BEGIN { LETTERS = "abcdefghijklmnopqrstuvwxyz" }
|
||||
|
||||
{
|
||||
len = length($0); for (i = 1; i <= len; i++) {
|
||||
c = substr($0, i, 1);
|
||||
ltr = index(LETTERS, c);
|
||||
|
||||
if (ltr > 0) {
|
||||
++count[ltr];
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Printing relative frequency
|
||||
|
||||
After the `gawk` script counts all the letters, I want to print the frequency of each letter it finds. I am not interested in the total number of each letter from the input, but rather the _relative frequency_ of each letter. The relative frequency scales the counts so that the letter with the fewest occurrences (such as the letter **q**) is set to 1, and other letters are relative to that.
|
||||
|
||||
I'll start with the count for the letter **a**, then compare that value to the counts for each of the other letters **b** to **z**:
|
||||
|
||||
|
||||
```
|
||||
END {
|
||||
min = count[1]; for (ltr = 2; ltr <= 26; ltr++) {
|
||||
if (count[ltr] < min) {
|
||||
min = count[ltr];
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
At the end of that loop, the variable `min` contains the minimum count for any letter. I can use that to provide a scale for the counts to print the relative frequency of each letter. For example, if the letter with the lowest occurrence is **q**, then `min` will be equal to the **q** count.
|
||||
|
||||
Then I loop through each letter and print it with its relative frequency. I divide each count by `min` to print the relative frequency, which means the letter with the lowest count will be printed with a relative frequency of 1. If another letter appears twice as often as the lowest count, that letter will have a relative frequency of 2. I'm only interested in integer values here, so 2.1 and 2.9 are the same as 2 for my purposes:
|
||||
|
||||
|
||||
```
|
||||
END {
|
||||
min = count[1]; for (ltr = 2; ltr <= 26; ltr++) {
|
||||
if (count[ltr] < min) {
|
||||
min = count[ltr];
|
||||
}
|
||||
}
|
||||
|
||||
for (ltr = 1; ltr <= 26; ltr++) {
|
||||
print substr(LETTERS, ltr, 1), int(count[ltr] / min);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Putting it all together
|
||||
|
||||
Now I have a `gawk` script that can count the relative frequency of letters in its input:
|
||||
|
||||
|
||||
```
|
||||
#!/usr/bin/gawk -f
|
||||
|
||||
# only count a-z, ignore A-Z and any other characters
|
||||
|
||||
BEGIN { LETTERS = "abcdefghijklmnopqrstuvwxyz" }
|
||||
|
||||
{
|
||||
len = length($0); for (i = 1; i <= len; i++) {
|
||||
c = substr($0, i, 1);
|
||||
ltr = index(LETTERS, c);
|
||||
|
||||
if (ltr > 0) {
|
||||
++count[ltr];
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
# print relative frequency of each letter
|
||||
|
||||
END {
|
||||
min = count[1]; for (ltr = 2; ltr <= 26; ltr++) {
|
||||
if (count[ltr] < min) {
|
||||
min = count[ltr];
|
||||
}
|
||||
}
|
||||
|
||||
for (ltr = 1; ltr <= 26; ltr++) {
|
||||
print substr(LETTERS, ltr, 1), int(count[ltr] / min);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
I'll save that to a file called `letter-freq.awk` so that I can use it more easily from the command line.
|
||||
|
||||
If you prefer, you can also use `chmod +x` to make the file executable on its own. The `#!/usr/bin/gawk -f` on the first line means Linux will run it as a script using the `/usr/bin/gawk` program. And because the `gawk` command line uses `-f` to indicate which file it should use as a script, you need that hanging `-f` so that executing `letter-freq.awk` at the shell will be properly interpreted as running `/usr/bin/gawk -f letter-freq.awk` instead.
|
||||
|
||||
I can test the script with a few simple inputs. For example, if I feed the alphabet into my `gawk` script, each letter should have a relative frequency of 1:
|
||||
|
||||
|
||||
```
|
||||
$ echo abcdefghijklmnopqrstuvwxyz | gawk -f letter-freq.awk
|
||||
a 1
|
||||
b 1
|
||||
c 1
|
||||
d 1
|
||||
e 1
|
||||
f 1
|
||||
g 1
|
||||
h 1
|
||||
i 1
|
||||
j 1
|
||||
k 1
|
||||
l 1
|
||||
m 1
|
||||
n 1
|
||||
o 1
|
||||
p 1
|
||||
q 1
|
||||
r 1
|
||||
s 1
|
||||
t 1
|
||||
u 1
|
||||
v 1
|
||||
w 1
|
||||
x 1
|
||||
y 1
|
||||
z 1
|
||||
```
|
||||
|
||||
Repeating that example but adding an extra instance of the letter **e** will print the letter **e** with a relative frequency of 2 and every other letter as 1:
|
||||
|
||||
|
||||
```
|
||||
$ echo abcdeefghijklmnopqrstuvwxyz | gawk -f letter-freq.awk
|
||||
a 1
|
||||
b 1
|
||||
c 1
|
||||
d 1
|
||||
e 2
|
||||
f 1
|
||||
g 1
|
||||
h 1
|
||||
i 1
|
||||
j 1
|
||||
k 1
|
||||
l 1
|
||||
m 1
|
||||
n 1
|
||||
o 1
|
||||
p 1
|
||||
q 1
|
||||
r 1
|
||||
s 1
|
||||
t 1
|
||||
u 1
|
||||
v 1
|
||||
w 1
|
||||
x 1
|
||||
y 1
|
||||
z 1
|
||||
```
|
||||
|
||||
And now I can take the big step! I'll use the `grep` command with the `/usr/share/dict/words` file and identify the letter frequency for all words spelled entirely with lowercase letters:
|
||||
|
||||
|
||||
```
|
||||
$ grep '^[a-z]*$' /usr/share/dict/words | gawk -f letter-freq.awk
|
||||
a 53
|
||||
b 12
|
||||
c 28
|
||||
d 21
|
||||
e 72
|
||||
f 7
|
||||
g 15
|
||||
h 17
|
||||
i 58
|
||||
j 1
|
||||
k 5
|
||||
l 36
|
||||
m 19
|
||||
n 47
|
||||
o 47
|
||||
p 21
|
||||
q 1
|
||||
r 46
|
||||
s 48
|
||||
t 44
|
||||
u 25
|
||||
v 6
|
||||
w 4
|
||||
x 1
|
||||
y 13
|
||||
z 2
|
||||
```
|
||||
|
||||
Of all the lowercase words in the `/usr/share/dict/words` file, the letters **j**, **q**, and **x** occur least frequently. The letter **z** is also pretty rare. Not surprisingly, the letter **e** is the most frequently used.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/4/gawk-letter-game
|
||||
|
||||
作者:[Jim Hall][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jim-hall
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc-docdish-typewriterkeys-3.png?itok=NyBwMdK_ (Typewriter keys in multicolor)
|
||||
[2]: https://en.wikipedia.org/wiki/Letter_frequency
|
@ -0,0 +1,104 @@
|
||||
[#]: subject: (Wrong Time Displayed in Windows-Linux Dual Boot Setup? Here’s How to Fix it)
|
||||
[#]: via: (https://itsfoss.com/wrong-time-dual-boot/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
Wrong Time Displayed in Windows-Linux Dual Boot Setup? Here’s How to Fix it
|
||||
======
|
||||
|
||||
If you [dual boot Windows and Ubuntu][1] or any other Linux distribution, you might have noticed a time difference between the two operating systems.
|
||||
|
||||
When you [use Linux][2], it shows the correct time. But when you boot into Windows, it shows the wrong time. Sometimes, it is the opposite and Linux shows the wrong time and Windows has the correct time.
|
||||
|
||||
That’s strange, especially because you are connected to the internet and your date and time are set to be used automatically.
|
||||
|
||||
Don’t worry! You are not the only one to face this issue. You can fix it by using the following command in the Linux terminal:
|
||||
|
||||
```
|
||||
timedatectl set-local-rtc 1
|
||||
```
|
||||
|
||||
Again, don’t worry. I’ll explain why you encounter a time difference in a dual boot setup. I’ll show you how the above command fixes the wrong time issue in Windows after dual boot.
|
||||
|
||||
### Why Windows and Linux show different time in dual boot?
|
||||
|
||||
A computer has two main clocks: a system clock and a hardware clock.
|
||||
|
||||
The hardware clock is also called the RTC ([real-time clock][3]) or the CMOS/BIOS clock. This clock is outside the operating system, on your computer’s motherboard. It keeps running even after your system is powered off.
|
||||
|
||||
The system clock is what you see inside your operating system.
|
||||
|
||||
When your computer is powered on, the hardware clock is read and used to set the system clock. Afterwards, the system clock is used for tracking time. If your operating system makes any changes to the system clock, like changing the time zone, it tries to sync this information to the hardware clock.
|
||||
|
||||
By default, Linux assumes that the time stored in the hardware clock is in UTC, not the local time. On the other hand, Windows thinks that the time stored on the hardware clock is local time. That’s where the trouble starts.
|
||||
|
||||
Let me explain with examples.
|
||||
|
||||
You see, I am in the Kolkata time zone, which is UTC+5:30. After installing Ubuntu, when I set the [timezone in Ubuntu][4] to the Kolkata time zone, Ubuntu syncs this time information to the hardware clock, but with an offset of 5:30 because the hardware clock has to be in UTC for Linux.
|
||||
|
||||
Let’s say the current time in the Kolkata time zone is 15:00, which means that the UTC time is 09:30.
|
||||
|
||||
Now when I turn off the system and boot into Windows, the hardware clock has the UTC time (09:30 in this example). But Windows thinks the hardware clock has stored the local time. And thus it changes the system clock (which should have shown 15:00) to use the UTC time (09:30) as the local time. And hence, Windows shows 09:30 as the time which is 5:30 hours behind the actual time (15:00 in our example).
|
||||
|
||||
![][5]
|
||||
|
||||
Again, if I set the correct time in Windows by toggling the automatic time zone and time buttons, you know what is going to happen? Now it will show the correct time on the system (15:00) and sync this information (notice the “Synchronize your clock” option in the image) to the hardware clock.
|
||||
|
||||
If you boot into Linux, it reads the time from the hardware clock which is in local time (15:00) but since Linux believes it to be the UTC time, it adds an offset of 5:30 to the system clock. Now Linux shows a time of 20:30 which is 5:30 hours ahead of the actual time.
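
If the back-and-forth of offsets is hard to follow, here is a small Python sketch that reproduces the arithmetic of the Kolkata example. It is only an illustration of the mistake, not part of the fix, and it assumes Python 3.9 or later for the `zoneinfo` module.

```
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

kolkata = ZoneInfo("Asia/Kolkata")          # UTC+5:30

# The hardware clock stores 09:30. Linux assumes it is UTC and converts it,
# so the system clock shows the correct local time.
rtc_as_utc = datetime(2021, 4, 1, 9, 30, tzinfo=timezone.utc)
print(rtc_as_utc.astimezone(kolkata).strftime("%H:%M"))  # 15:00

# Windows assumes the same 09:30 is already local time, so it displays 09:30,
# which is 5:30 behind the real local time of 15:00.
rtc_as_local = datetime(2021, 4, 1, 9, 30, tzinfo=kolkata)
print(rtc_as_local.strftime("%H:%M"))                    # 09:30
```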
|
||||
|
||||
Now that you understand the root cause of the time difference issues in dual boot, it’s time to see how to fix the issue.
|
||||
|
||||
### Fixing Windows Showing Wrong Time in a Dual Boot Setup With Linux
|
||||
|
||||
There are two ways you can go about handling this issue:
|
||||
|
||||
* Make Windows use UTC time for the hardware clock
|
||||
* Make Linux use local time for the hardware clock
|
||||
|
||||
|
||||
|
||||
It is easier to make the changes in Linux and hence I’ll recommend going with the second method.
|
||||
|
||||
Ubuntu and most other Linux distributions use systemd these days, and hence you can use the `timedatectl` command to change the settings.
|
||||
|
||||
What you are doing is telling your Linux system to use the local time for the hardware clock (RTC). You do that with the `set-local-rtc` (set local time for RTC) option:
|
||||
|
||||
```
|
||||
timedatectl set-local-rtc 1
|
||||
```
|
||||
|
||||
As you can notice in the image below, the RTC now uses the local time.
|
||||
|
||||
![][6]
|
||||
|
||||
Now if you boot into Windows, it takes the hardware clock to be local time, which is actually correct this time. When you boot into Linux, your Linux system knows that the hardware clock is using local time, not UTC. And hence, it doesn’t try to add the offset this time.
|
||||
|
||||
This fixes the time difference issue between Linux and Windows in dual boot.
|
||||
|
||||
You will see a warning advising against using local time for the RTC. For desktop setups, it should not cause any issues. At least, I cannot think of one.
|
||||
|
||||
I hope I made things clear for you. If you still have questions, please leave a comment below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/wrong-time-dual-boot/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
|
||||
[2]: https://itsfoss.com/why-use-linux/
|
||||
[3]: https://www.computerhope.com/jargon/r/rtc.htm
|
||||
[4]: https://itsfoss.com/change-timezone-ubuntu/
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/set-time-windows.jpg?resize=800%2C491&ssl=1
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2021/03/set-local-time-for-rtc-ubuntu.png?resize=800%2C490&ssl=1
|
@ -0,0 +1,111 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wyxplus)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to set up a homelab from hardware to firewall)
|
||||
[#]: via: (https://opensource.com/article/19/3/home-lab)
|
||||
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
|
||||
|
||||
如何从硬件到防火墙建立一个家庭实验室
|
||||
======
|
||||
|
||||
查看用于构建自己的家庭实验室的硬件和软件方案。
|
||||
|
||||
![][1]
|
||||
|
||||
你有想过创建一个家庭实验室吗?或许你想尝试不同的技术,构建开发环境、亦或是建立自己的私有云。对于拥有一个家庭实验室有很多理由,本教程旨在使入门变得更容易。
|
||||
|
||||
规划家庭实验室时,需要考虑三方面:硬件、软件和维护。我们将在这里查看前两方面,并在以后的文章中讲述如何节省维护计算机实验室的时间。
|
||||
|
||||
### 硬件
|
||||
|
||||
|
||||
在考虑硬件需求时,首先要考虑如何使用实验室以及你的预算,噪声,空间和电源使用情况。
|
||||
|
||||
|
||||
如果购买新硬件过于昂贵,请搜索当地的大学,广告以及诸如 eBay 或 Craigslist 之类的网站,能获取二手服务器的地方。它们通常很便宜,并且服务器级的硬件可以使用很多年。你将需要三类硬件:虚拟化服务器,存储设备和路由器/防火墙。
|
||||
|
||||
#### 虚拟化服务器
|
||||
|
||||
一个虚拟化服务器允许你去运行多个共享物理机资源的虚拟机,同时最大化利用和隔离资源。如果你打算销毁一台虚拟机,无需重建整个服务器,因为其仅是一个虚拟机。如果你想进行测试或尝试某些操作而不损坏整个系统,仅需要新建一个虚拟机来运行即可。
|
||||
|
||||
在虚拟服务器中,需考虑两个最重要的因素是 CPU 的核心数及其运行速度以及内存容量。如果没有足够的资源够全部虚拟机共享,那么它们将被重复分配并试着获取其他虚拟机的 CPU 的周期和内存。
|
||||
|
||||
因此,考虑一个多核 CPU 的平台。你要确保 CPU 支持虚拟化指令(因特尔的 VT-x 指令集和 AMD 的 AMD-V 指令集)。能够处理虚拟化优质消费级处理器有因特尔的 i5 或 i7 和 AMD 的 Ryzen 处理器。如果你考虑服务器级的硬件,那么因特尔的志强系列和 AMD 的 EPYC 都是不错的选择。内存可能很昂贵,尤其是最近的 DDR4 内存。当我们估计所需多少内存时,请为主机操作系统的内存至少分配 2 GB 的空间。
|
||||
|
||||
如果你担心电费或噪声,则诸如因特尔 NUC 设备之类的解决方案虽然外形小巧,功耗低,噪音低,但是却以牺牲可扩展性为代价。
|
||||
|
||||
#### <ruby>网络附加存储<rt>Network-attached storage</rt></ruby>(NAS)
|
||||
|
||||
如果希望装有硬盘驱动器的计算机存储你的所有个人数据,电影,图片等,并为虚拟化服务器提供存储,则需要网络附加存储(NAS)。
|
||||
|
||||
在大多数情况下,你不太可能需要一颗强力的 CPU。实际上,许多商业 NAS 的解决方案使用低功耗的 ARM CPU。主板能支持多个 SATA 硬盘。如果你的主板没有足够的端口,请使用<ruby>主机总线适配器<rt>host bus adapter</rt></ruby>(HBA)SAS 控制器添加其他功能。
|
||||
|
||||
网络性能对于 NAS 来说是至关重要的,因此最好选择<ruby>千兆<rt>gigabit</rt></ruby>网络(或更快网络)。
|
||||
|
||||
内存需求将根据你的文件系统而有所不同。ZFS 是 NAS 上最受欢迎的文件系统之一,你将需要更多内存才能使用诸如缓存或重复数据删除之类的功能。<ruby>纠错码<rt>Error-correcting code</rt></ruby>(ECC)的内存是防止数据损坏的最佳选择(但在购买前请确保你的主板支持)。最后但同样重要的,不要忘记使用<ruby>不间断电源<rt>uninterruptible power supply</rt></ruby>(UPS),因为断电可能会使得数据出错。
|
||||
|
||||
|
||||
#### 防火墙和路由器
|
||||
|
||||
你是否曾意识到,廉价的路由器/防火墙通常是保护你的家庭网络不受外部环境影响的主要部分?这些路由器很少及时收到安全更新(如果有的话)。现在害怕了吗?好吧,[确实][2]!
|
||||
|
||||
通常,你不需要一颗强大的 CPU 或是大内存来构建你自己的路由器/防火墙,除非你需要高吞吐率或是执行 CPU 密集型任务,像是 VPN 服务器或是流量过滤。在这种情况下,你将需要一个支持 AES-NI 的多核 CPU。
|
||||
|
||||
你可能想要至少 2 个千兆或更快的<ruby>以太网卡<rt>Ethernet network interface cards</rt></ruby>(NIC),这不是必需的,但我推荐使用一个管理型交换机来连接你自己的装配的路由器,以创建 VLAN 来进一步隔离和保护你的网络。
|
||||
|
||||
![Home computer lab PfSense][4]
|
||||
|
||||
### 软件
|
||||
|
||||
在选择完你的虚拟化服务器、NAS 和防火墙/路由器后,下一步是探索不同的操作系统和软件,以最大程度地发挥其作用。尽管你可以使用 CentOS、Debian 或 Ubuntu 之类的常规 Linux 发行版,但是与以下软件相比,它们通常花费更多的时间进行配置和管理。
|
||||
|
||||
#### 虚拟化软件
|
||||
|
||||
**[KVM][5]**(<ruby>基于内核的虚拟机<rt>Kernel-based Virtual Machine</rt></ruby>)使你可以将 Linux 变成虚拟机监控程序,以便可以在同一台机器中运行多个虚拟机。最好的是,KVM 作为 Linux 的一部分,它是许多企业和家庭用户的首选。如果你愿意,可以安装 **[libvirt][6]** 和 **[virt-manager][7]** 来管理你的虚拟化平台。
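
下面是一个简单的示意代码(并非原文内容),假设你已经安装了 libvirt 的 Python 绑定(libvirt-python),用来列出本机 KVM/QEMU 上的虚拟机及其运行状态:

```
# 示意性例子:假设已通过 pip install libvirt-python 安装 libvirt 的 Python 绑定
import libvirt

conn = libvirt.open("qemu:///system")        # 连接本地的 libvirt 守护进程
try:
    for dom in conn.listAllDomains():        # 列出所有已定义的虚拟机
        state = "运行中" if dom.isActive() else "已关闭"
        print(f"{dom.name()}: {state}")
finally:
    conn.close()
```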
|
||||
|
||||
|
||||
**[Proxmox VE][8]** 是一个强大的企业级解决方案,并且是一个完整的开源虚拟化和容器平台。它基于 Debian,使用 KVM 作为其虚拟机管理程序,并使用 LXC 作为容器。Proxmox 提供了强大的网页界面,API,并且可以扩展到许多群集节点,这很有用,因为你永远不知道何时实验室容量不足。
|
||||
|
||||
**[oVirt][9](RHV)** 是另一种使用 KVM 作为虚拟机管理程序的企业级解决方案。不要仅仅因为它是企业,并不意味着你不能在家中使用它。oVirt 提供了强大的网页界面和 API,并且可以处理数百个节点(如果你正在运行那么多服务器,我不想成为你的邻居!)。oVirt 用于家庭实验室的潜在问题是它至少需要一些的节点集:你将需要一个外部存储(例如 NAS)和至少两个其他虚拟化节点(你可以仅在一个上运行它,但是你可能会在维护环境时遇到问题)。
|
||||
|
||||
#### 网络附加存储软件
|
||||
|
||||
**[FreeNAS][10]** 是最受欢迎的开源 NAS 发行版,它基于稳定的 FreeBSD 操作系统。它最强大的功能之一是支持 ZFS 文件系统,该文件系统提供了数据完整性检查、快照、复制和多个级别的冗余(镜像,条带化镜像和条带化)。最重要的是,所有功能都通过功能强大且易于使用的网页界面进行管理。在安装 FreeNAS 之前,请检查硬件是否支持,因为它不如基于 Linux 的发行版那么广泛。
|
||||
|
||||
另一个流行的替代方法是基于 Linux 的 **[OpenMediaVault][11]**。它的主要功能之一是模块化,带有可扩展和添加特性的插件。它包括的功能包括基于网页管理界面和协议,例如 CIFS,SFTP,NFS,iSCSI。以及卷管理,包括软件 RAID,资源分配,<ruby>访问控制列表<rt>access control lists</rt></ruby>(ACL)和共享管理。由于它是基于 Linux 的,因此其具有广泛的硬件支持。
|
||||
|
||||
#### 防火墙/路由器软件
|
||||
|
||||
**[pfSense][12]** 是基于 FreeBSD 的开源企业级路由器和防火墙发行版。它可以直接安装在服务器上,甚至可以安装在虚拟机中(以管理虚拟或物理网络并节省空间)。它有许多功能,可以使用软件包进行扩展。尽管它也有命令行访问权限,但也可以完全使用网页界面对其进行管理。它具有你所希望路由器和防火墙提供的所有功能,例如 DHCP 和 DNS,以及更高级的功能,例如入侵检测(IDS)和入侵防御(IPS)系统。你可以侦听多个不同接口或使用 VLAN 的网络,并且只需鼠标点击几下即可创建安全的 VPN 服务器。pfSense 使用 pf,这是一种有状态的数据包筛选器,它是使用类似于 IPFilter 的语法为 OpenBSD 操作系统开发的。许多公司和组织都有使用 pfSense。
|
||||
|
||||
* * *
|
||||
|
||||
考虑到所有的信息,是时候动手开始建立你的实验室了。在之后的文章中,我将介绍运行家庭实验室的第三方面:自动化进行部署和维护。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/3/home-lab
|
||||
|
||||
作者:[Michael Zamot (Red Hat)][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wyxplus](https://github.com/wyxplus)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb
|
||||
[2]: https://opensource.com/article/18/5/how-insecure-your-router
|
||||
[3]: /file/427426
|
||||
[4]: https://opensource.com/sites/default/files/uploads/pfsense2.png (Home computer lab PfSense)
|
||||
[5]: https://www.linux-kvm.org/page/Main_Page
|
||||
[6]: https://libvirt.org/
|
||||
[7]: https://virt-manager.org/
|
||||
[8]: https://www.proxmox.com/en/proxmox-ve
|
||||
[9]: https://ovirt.org/
|
||||
[10]: https://freenas.org/
|
||||
[11]: https://www.openmediavault.org/
|
||||
[12]: https://www.pfsense.org/
|
141
translated/tech/20210326 How to read and write files in C.md
Normal file
@ -0,0 +1,141 @@
|
||||
[#]: subject: (How to read and write files in C++)
|
||||
[#]: via: (https://opensource.com/article/21/3/ccc-input-output)
|
||||
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wyxplus)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
如何用 C++ 读写文件
|
||||
======
|
||||
|
||||
如果你知道如何在 C++ 中使用<ruby>输入输出<rt>I/O</rt></ruby>流,那么你便能够(原则上)处理任何类型的输入输出设备。
|
||||
|
||||
![Computer screen with files or windows open][1]
|
||||
|
||||
在 C++ 中,对文件的读写可以通过将输入输出流与流运算符 `>>` 和 `<<` 结合使用来实现。在读写文件时,这些运算符作用于代表硬盘上文件的类的实例。这种基于流的方法有个巨大的优势:从 C++ 的角度来看,无论你读写的对象是文件、数据库、控制台,还是通过网络连接的另一台电脑,都没有区别。因此,掌握用流运算符读写文件的方法,也能应用到其他领域。
|
||||
|
||||
### 输入输出流类
|
||||
|
||||
C++ 标准库提供了 [ios_base][2] 类。该类充当所有 I/O 流的基类,例如 [basic_ofstream][3] 和 [basic_ifstream][4]。本例将使用特殊的类型来读写字符,`ifstream` 和 `ofstream`。
|
||||
|
||||
- `ofstream`:输出文件流,可以使用插入运算符 `<<` 向其写入数据。
- `ifstream`:输入文件流,可以使用提取运算符 `>>` 从中读取数据。
|
||||
|
||||
这两种类型都定义在头文件 `<fstream>` 中。
|
||||
|
||||
从 `ios_base` 继承的类在写入时可被视为数据接收器,在从其读取时可被视为数据源,与数据本身完全分离。这种面向对象的方法使 <ruby>[关注点分离][5]<rt>separation of concerns</rt></ruby> 和 <ruby>[依赖注入][6]<rt>dependency injection</rt></ruby> 等概念易于实现。
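为了说明这一点,下面给出一个简短的示意性例子(其中的函数名 `writeGreeting` 和文件名 `Greeting.txt` 只是为演示而假设的):同一个函数既可以写入 `std::cout`,也可以写入 `std::ofstream`,因为二者都是 `std::ostream` 这种“数据接收器”。

```
#include <fstream>
#include <iostream>

// 任何继承自 std::ostream 的数据接收器都可以作为参数传入
void writeGreeting(std::ostream &sink)
{
    sink << "Hello Open Source World!" << std::endl;
}

int main()
{
    writeGreeting(std::cout);           // 写到控制台

    std::ofstream file("Greeting.txt"); // 写到文件
    if (file) {
        writeGreeting(file);
    }

    return 0;
}
```

这样,负责生成数据的代码就完全不需要关心数据最终被写到哪里。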
### 一个简单的例子
|
||||
|
||||
本例程非常简单:实例化一个 `ofstream` 用于写入,再实例化一个 `ifstream` 用于读取。
|
||||
|
||||
|
||||
```
#include <iostream> // cout, cin, cerr etc...
#include <fstream>  // ifstream, ofstream
#include <string>

int main()
{
    std::string sFilename = "MyFile.txt";

    /******************************************
     *                                        *
     *               WRITING                  *
     *                                        *
     ******************************************/

    std::ofstream fileSink(sFilename); // Creates an output file stream

    if (!fileSink) {
        std::cerr << "Cannot open " << sFilename << std::endl;
        exit(-1);
    }

    /* std::endl will automatically append the correct EOL */
    fileSink << "Hello Open Source World!" << std::endl;

    /******************************************
     *                                        *
     *               READING                  *
     *                                        *
     ******************************************/

    std::ifstream fileSource(sFilename); // Creates an input file stream

    if (!fileSource) {
        std::cerr << "Cannot open " << sFilename << std::endl;
        exit(-1);
    }
    else {
        // Intermediate buffer
        std::string buffer;

        // By default, the >> operator reads word by word (till whitespace)
        while (fileSource >> buffer)
        {
            std::cout << buffer << std::endl;
        }
    }

    exit(0);
}
```
|
||||
|
||||
该代码可以在 [GitHub][7] 上查看。当你编译并且执行它时,你应该能获得以下输出:
|
||||
|
||||
![Console screenshot][8]
|
||||
|
||||
(Stephan Avenwedde, [CC BY-SA 4.0][9])
|
||||
|
||||
这是一个适合初学者的简单例子。如果你想在自己的应用中使用这些代码,请遵循以下建议:
|
||||
|
||||
* 文件流会在程序终止时自动关闭。如果你想在程序继续运行时就关闭文件,请手动调用 `close()` 方法。
* 这些文件流类(在多个层级上)继承自 [basic_ios][10],并且重载了 `!` 运算符。这让你可以简单地检查某个流是否可用。在 [cppreference.com][11] 上,你可以找到该检查何时会(或不会)成功的说明,并据此进一步实现错误处理。
* 默认情况下,`ifstream` 遇到空白字符就会停下并跳过它。要逐行读取直到 [EOF][12],请使用 `getline(...)` 方法(参见本列表后面的示例)。
* 要读写二进制文件,请将 `std::ios::binary` 标志传给构造函数:这样可以防止 [EOL][13] 字符被附加到每一行。
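下面是一个遵循上述建议的简短示意(仅作演示,文本文件名沿用上文的 `MyFile.txt`,二进制文件名 `MyFile.bin` 为假设):它使用 `getline(...)` 逐行读取文本,手动调用 `close()`,并以 `std::ios::binary` 模式写入二进制数据。

```
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream fileSource("MyFile.txt");
    if (!fileSource) {
        std::cerr << "Cannot open MyFile.txt" << std::endl;
        return 1;
    }

    std::string line;
    while (std::getline(fileSource, line)) { // 逐行读取,直到 EOF
        std::cout << line << std::endl;
    }
    fileSource.close();                      // 手动关闭流

    // 以二进制模式打开,避免对换行符做任何转换
    std::ofstream binarySink("MyFile.bin", std::ios::binary);
    if (binarySink) {
        const char data[] = {0x48, 0x65, 0x6C, 0x6C, 0x6F}; // "Hello"
        binarySink.write(data, sizeof(data));
    }

    return 0;
}
```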
### 从系统角度进行写入
|
||||
|
||||
写入文件时,数据会先进入系统内存中的写缓冲区。当系统收到 [sync][14] 系统调用时,缓冲区的内容才会被写入硬盘。这也是为什么你不应该在没有通知系统的情况下直接拔掉 U 盘。通常,守护进程会定期调用 _sync_。为了保险起见,也可以手动调用 _sync_:
|
||||
|
||||
|
||||
```
#include <unistd.h> // needs to be included

sync();
```
|
||||
|
||||
### 总结
|
||||
|
||||
在 C++ 中读写文件并不复杂。更重要的是,如果你知道如何处理输入输出流,那么(原则上)你也就知道了如何处理任何类型的输入输出设备。各种输入输出设备的库都允许你使用流运算符,这就是为什么了解输入输出流的工作方式会对你有所助益。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/ccc-input-output
|
||||
|
||||
作者:[Stephan Avenwedde][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wyxplus](https://github.com/wyxplus)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/hansic99
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_screen_windows_files.png?itok=kLTeQUbY "Computer screen with files or windows open"
|
||||
[2]: https://en.cppreference.com/w/cpp/io/ios_base
|
||||
[3]: https://en.cppreference.com/w/cpp/io/basic_ofstream
|
||||
[4]: https://en.cppreference.com/w/cpp/io/basic_ifstream
|
||||
[5]: https://en.wikipedia.org/wiki/Separation_of_concerns
|
||||
[6]: https://en.wikipedia.org/wiki/Dependency_injection
|
||||
[7]: https://github.com/hANSIc99/cpp_input_output
|
||||
[8]: https://opensource.com/sites/default/files/uploads/c_console_screenshot.png "Console screenshot"
|
||||
[9]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[10]: https://en.cppreference.com/w/cpp/io/basic_ios
|
||||
[11]: https://en.cppreference.com/w/cpp/io/basic_ios/operator!
|
||||
[12]: https://en.wikipedia.org/wiki/End-of-file
|
||||
[13]: https://en.wikipedia.org/wiki/Newline
|
||||
[14]: https://en.wikipedia.org/wiki/Sync_%28Unix%29
|
@ -0,0 +1,70 @@
|
||||
[#]: subject: (Why you should care about service mesh)
|
||||
[#]: via: (https://opensource.com/article/21/3/service-mesh)
|
||||
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
为什么你要关心 Service Mesh
|
||||
======
|
||||
在微服务环境中,Service Mesh 为开发和运营提供了好处。
|
||||
![Net catching 1s and 0s or data in the clouds][1]
|
||||
|
||||
很多开发者不知道为什么要关心 [Service Mesh][2]。这是我在开发者见面会、会议和实践研讨会上,就云原生架构下的微服务开发进行演讲时经常被问到的问题。我的回答总是一样的:“只要你想简化微服务架构,并且它运行在 Kubernetes 平台上,你就应该关心 Service Mesh。”
|
||||
|
||||
关于简化,你可能也想知道,为什么分布式微服务必须设计得如此复杂才能在 Kubernetes 集群上运行。正如本文所解释的那样,许多开发人员通过 Service Mesh 解决了微服务架构的复杂性,并通过在生产中采用 Service Mesh 获得了额外的好处。
|
||||
|
||||
### 什么是 Service Mesh?
|
||||
|
||||
Service Mesh 是一个专门的基础设施层,它以透明的、与代码无关(<ruby>不限语言<rt>polyglot</rt></ruby>)的方式,将非功能性的微服务能力从应用代码中剥离出来。
|
||||
|
||||
|
||||
![Before and After Service Mesh][3]
|
||||
|
||||
(Daniel Oh, [CC BY-SA 4.0][4])
|
||||
|
||||
### 为什么 Service Mesh 对开发者很重要
|
||||
|
||||
当开发人员将微服务部署到云时,无论业务功能如何,他们都必须解决非功能性微服务功能,以避免级联故障。这些功能通常可以体现在服务发现、日志、监控、韧性、认证、弹性和跟踪等方面。开发人员必须花费更多的时间将它们添加到每个微服务中,而不是开发实际的业务逻辑,这使得微服务变得沉重而复杂。
|
||||
|
||||
随着企业加速向云计算转移,Service Mesh 可以提高开发人员的生产力。Kubernetes 加 Service Mesh 平台不需要让服务负责处理这些复杂的问题,也不需要在每个服务中添加更多的代码来处理云原生的问题,而是负责向运行在该平台上的任何应用(现有的或新的,用任何编程语言或框架)提供这些服务。那么微服务就可以轻量级,专注于其业务逻辑,而不是云原生的复杂性。
|
||||
|
||||
### 为什么 Service Mesh 对运维很重要
|
||||
|
||||
这并没有回答为什么运维团队需要关心在 Kubernetes 上运行云原生微服务的 Service Mesh。因为运维团队必须确保在 Kubernetes 环境上的大型混合云和多云上部署新的云原生应用的强大安全性、合规性和可观察性。
|
||||
|
||||
Service Mesh 由一个<ruby>控制平面<rt>control plane</rt></ruby>和一个<ruby>数据平面<rt>data plane</rt></ruby>组成:前者管理代理以路由流量,后者负责注入 Sidecar。Sidecar 让运维团队可以做一些事情,例如添加第三方安全工具、追踪所有服务通信的流量,以避免安全漏洞或合规问题。Service Mesh 还可以在图形面板上可视化地跟踪指标,从而提升可观察性。
|
||||
|
||||
### 如何开始使用 Service Mesh
|
||||
|
||||
对于开发者和运维人员,以及从应用开发到平台运营来说,Service Mesh 可以更有效地管理云原生功能。
|
||||
|
||||
你可能想知道从哪里开始采用 Service Mesh 来配合你的微服务应用和架构。幸运的是,有许多开源的 Service Mesh 项目。许多云服务提供商也在他们的 Kubernetes 平台中提供 Service Mesh。
|
||||
|
||||
![CNCF Service Mesh Landscape][5]
|
||||
|
||||
(Daniel Oh, [CC BY-SA 4.0][4])
|
||||
|
||||
你可以在 [CNCF Service Mesh Landscape][6] 页面中找到最受欢迎的 Service Mesh 项目和服务的链接。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/service-mesh
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/daniel-oh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_analytics_cloud.png?itok=eE4uIoaB (Net catching 1s and 0s or data in the clouds)
|
||||
[2]: https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh
|
||||
[3]: https://opensource.com/sites/default/files/uploads/vm-vs-service-mesh.png (Before and After Service Mesh)
|
||||
[4]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[5]: https://opensource.com/sites/default/files/uploads/service-mesh-providers.png (CNCF Service Mesh Landscape)
|
||||
[6]: https://landscape.cncf.io/card-mode?category=service-mesh&grouping=category
|
@ -0,0 +1,97 @@
|
||||
[#]: subject: (Manipulate data in files with Lua)
|
||||
[#]: via: (https://opensource.com/article/21/3/lua-files)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
用 Lua 操作文件中的数据
|
||||
======
|
||||
了解 Lua 如何处理数据的读写。
|
||||
![Person standing in front of a giant computer screen with numbers, data][1]
|
||||
|
||||
有些数据是临时的,存储在 RAM 中,只有在应用运行时才有意义。但有些数据是要持久的,存储在硬盘上供以后使用。当你编程时,无论是简单的脚本还是复杂的工具套件,通常都需要读取和写入文件。有时文件可能包含配置选项,而另一些时候这个文件是你的用户用你的应用创建的数据。每种语言都会以不同的方式处理这项任务,本文将演示如何使用 Lua 处理文件数据。
|
||||
|
||||
### 安装 Lua
|
||||
|
||||
如果你使用的是 Linux,你可以从你的发行版软件库中安装 Lua。在 macOS 上,你可以从 [MacPorts][2] 或 [Homebrew][3] 安装 Lua。在 Windows 上,你可以从 [Chocolatey][4] 安装 Lua。
|
||||
|
||||
安装 Lua 后,打开你最喜欢的文本编辑器并准备开始。
|
||||
|
||||
### 用 Lua 读取文件
|
||||
|
||||
Lua 使用 `io` 库进行数据输入和输出。下面的例子创建了一个名为 `ingest` 的函数来从文件中读取数据,然后用 `:read` 函数进行解析。在 Lua 中打开一个文件时,有几种模式可以启用。因为我只需要从这个文件中读取数据,所以我使用 `r`(代表“读”)模式:
|
||||
|
||||
|
||||
```
function ingest(file)
  local f = io.open(file, "r")
  local lines = f:read("*all")
  f:close()
  return(lines)
end

myfile = ingest("example.txt")
print(myfile)
```
|
||||
|
||||
在这段代码中,注意变量 `myfile` 是为了触发 `ingest` 函数而创建的,因此它会接收该函数返回的任何内容。`ingest` 函数返回的是文件的内容(来自名为 `lines` 的变量),所以当最后一步打印 `myfile` 变量时,文件的内容就会显示在终端中。
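顺便一提,如果你更希望逐行处理文件,也可以换用 `io.lines`。下面是一个简短的示意(文件名沿用上文的 `example.txt`):

```
-- io.lines 会逐行遍历文件,迭代结束后自动关闭文件
for line in io.lines("example.txt") do
  print(line)
end
```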
如果文件 `example.txt` 中包含了配置选项,那么我会写一些额外的代码来解析这些数据,可能会使用另一个 Lua 库,这取决于配置是以 INI 文件还是 YAML 文件或其他格式存储。如果数据是 SVG 图形,我会写额外的代码来解析 XML,可能会使用 Lua 的 SVG 库。换句话说,你的代码读取的数据一旦加载到内存中,就可以进行操作,但是它们都需要加载 `io` 库。
|
||||
|
||||
### 用 Lua 将数据写入文件
|
||||
|
||||
无论你是要存储用户用你的应用创建的数据,还是仅仅是关于用户在应用中做了什么的元数据(例如,游戏保存或最近播放的歌曲),都有很多很好的理由来存储数据供以后使用。在 Lua 中,这是通过 `io` 库实现的,打开一个文件,将数据写入其中,然后关闭文件:
|
||||
|
||||
|
||||
```
function exgest(file)
  local f = io.open(file, "a")
  io.output(f)
  io.write("hello world\n")
  io.close(f)
end

exgest("example.txt")
```
|
||||
|
||||
读取数据时,我以 `r` 模式打开文件;而这一次我使用 `a`(代表“追加”)模式,以便把数据写到文件的末尾。因为我写入的是纯文本,所以需要自己添加换行符(`\n`)。通常你不会直接把原始文本写入文件,而是会使用一个额外的库来写入特定的格式。例如,你可能会使用 INI 或 YAML 库来帮助编写配置文件,使用 XML 库来编写 XML,等等。
|
||||
|
||||
### 文件模式
|
||||
|
||||
在 Lua 中打开文件时,有一些保护措施和参数来定义如何处理文件。默认值是 `r`,允许你只读数据:
|
||||
|
||||
* **r** 只读。
* **w** 覆盖写入,如果文件不存在则创建一个新文件。
* **r+** 读取和覆盖。
* **a** 向文件追加数据,如果文件不存在则创建一个新文件。
* **a+** 读取数据并向文件追加数据,如果文件不存在则创建一个新文件。
|
||||
|
||||
|
||||
|
||||
还有一些其他的(例如,`b` 代表二进制格式),但这些是最常见的。关于完整的文档,请参考 [Lua.org/manual][5] 上的优秀 Lua 文档。
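为了直观地体会这些模式的区别,下面给出一个简短的示意(文件名 `modes.txt` 仅为演示而假设):先用 `w` 覆盖写入,再用 `a` 追加,最后用 `a+` 在追加之后读回全部内容。

```
-- "w":覆盖(或新建)文件,原有内容会丢失
local f = assert(io.open("modes.txt", "w"))
f:write("first line\n")
f:close()

-- "a":在文件末尾追加
f = assert(io.open("modes.txt", "a"))
f:write("second line\n")
f:close()

-- "a+":追加并可读取;读取前先把位置移回文件开头
f = assert(io.open("modes.txt", "a+"))
f:write("third line\n")
f:seek("set")
print(f:read("*all"))
f:close()
```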
### Lua 和文件
|
||||
|
||||
和其他编程语言一样,Lua 有大量的库来支持访问文件系统以读写数据。因为 Lua 的语法一致而简单,所以很容易对任何格式的文件数据进行复杂的处理。试着在你的下一个软件项目中使用 Lua,或者把它作为 C 或 C++ 项目的 API。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/21/3/lua-files
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data)
|
||||
[2]: https://opensource.com/article/20/11/macports
|
||||
[3]: https://opensource.com/article/20/6/homebrew-mac
|
||||
[4]: https://opensource.com/article/20/3/chocolatey
|
||||
[5]: http://lua.org/manual
|
@ -0,0 +1,104 @@
|
||||
[#]: subject: (NewsFlash: A Modern Open-Source Feed Reader With Feedly Support)
|
||||
[#]: via: (https://itsfoss.com/newsflash-feedreader/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (DCOLIVERSUN)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
||||
NewsFlash:一款支持 Feedly 的现代开源 Feed 阅读器
|
||||
======
|
||||
|
||||
有些人可能认为 RSS 阅读器已经过时,但它们依然存在,尤其是在你不想让大型科技公司的算法来决定你该阅读什么的时候。Feed 阅读器可以让你自主选择自己的阅读来源。
|
||||
|
||||
我最近遇到一个很棒的 RSS 阅读器 NewsFlash。它支持通过基于 web 的 Feed 阅读器增加 feed,例如 [Feedly][1] 和 NewsBlur。如果你已经使用这种服务,就不必人工导入 feed,这节省了你的工作。
|
||||
|
||||
NewsFlash 可以说是 [FeedReader][2] 的精神继承者,FeedReader 最初的开发人员也参与其中。
|
||||
|
||||
如果你正在找适用的 RSS 阅读器,我们整理了 [Linux Feed 阅读器][3] 列表供您参考。
|
||||
|
||||
### NewsFlash:一款可与基于 web 的 RSS 阅读器账户配合使用的 Feed 阅读器
|
||||
|
||||
![][4]
|
||||
|
||||
请注意,NewsFlash 并非只为基于 web 的 RSS feed 账户定制,你也可以选择使用本地 RSS feed,而不必在多个设备间同步。
|
||||
|
||||
不过,如果你正在使用它所支持的某个基于 web 的 feed 阅读器,那么 NewsFlash 会特别有用。
|
||||
|
||||
这里,我将重点介绍 NewsFlash 提供的一些功能。
|
||||
|
||||
### NewsFlash 功能
|
||||
|
||||
![][5]
|
||||
|
||||
* 支持桌面通知
* 快速搜索、过滤
* 支持标签
* 便捷、可自定义的键盘快捷键
* 本地 feed
* OPML 文件导入/导出
* 无需注册即可在 Feedly 库中轻松找到各种 RSS Feed
* 支持自定义字体
* 支持多主题(包括深色主题)
* 启用/禁用缩略图
* 细粒度调整同步间隔时间
* 支持基于 web 的 Feed 账户,例如 Feedly、Fever、NewsBlur、feedbin、Miniflux
|
||||
|
||||
除上述功能外,当你调整窗口大小时,还可以打开阅读器视图,这是一个细腻的补充功能。
|
||||
|
||||
![newsflash 截图1][6]
|
||||
|
||||
重置账户也很容易,不过这会删除所有本地数据。另外,你还可以手动清除缓存,并为你关注的所有 feed 的本地数据设置过期时间。
|
||||
|
||||
**推荐阅读:**
|
||||
|
||||
![][7]
|
||||
|
||||
#### [6 款 Linux 最佳 Feed 阅读器][3]
|
||||
|
||||
你是否想用 RSS feed 来订阅你喜欢的网站的最新消息?来看看 Linux 上最好的 Feed 阅读器吧。
|
||||
|
||||
### 在 Linux 上安装 NewsFlash
|
||||
|
||||
你无法找到适用于各种 Linux 发行版的官方软件包,只有 [Flatpak][8]。
|
||||
|
||||
对于 Arch 用户,可以从 [AUR][9] 下载。
|
||||
|
||||
幸运的是,[Flatpak][10] 软件包可以让你轻松在 Linux 发行版上安装 NewsFlash。具体请参阅我们的 [Flatpak 指南][11]。
|
||||
|
||||
你可以参考 NewsFlash 的 [GitLab 页面][12] 来解决你可能遇到的大部分问题。
|
||||
|
||||
### 结束语
|
||||
|
||||
我现在用 NewsFlash 作为桌面本地解决方案,不用基于 web 的服务。你可以通过直接导出 OPML 文件在移动 feed 应用上得到相同的 feed。这已经被我验证过了。
|
||||
|
||||
用户界面易于使用,用户体验也相当现代。虽然这个 RSS 阅读器看上去很简单,但它提供了你能想到的所有重要功能。
|
||||
|
||||
你怎么看 NewsFlash?你喜欢用其他类似产品吗?欢迎在评论区中分享你的想法。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/newsflash-feedreader/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[DCOLIVERSUN](https://github.com/DCOLIVERSUN)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://feedly.com/
|
||||
[2]: https://jangernert.github.io/FeedReader/
|
||||
[3]: https://itsfoss.com/feed-reader-apps-linux/
|
||||
[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash.jpg?resize=945%2C648&ssl=1
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash-screenshot.jpg?resize=800%2C533&ssl=1
|
||||
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2021/03/newsflash-screenshot-1.jpg?resize=800%2C532&ssl=1
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/04/best-feed-reader-apps-linux.jpg?fit=800%2C450&ssl=1
|
||||
[8]: https://flathub.org/apps/details/com.gitlab.newsflash
|
||||
[9]: https://itsfoss.com/aur-arch-linux/
|
||||
[10]: https://itsfoss.com/what-is-flatpak/
|
||||
[11]: https://itsfoss.com/flatpak-guide/
|
||||
[12]: https://gitlab.com/news-flash/news_flash_gtk