diff --git a/sources/tech/20191028 How to remove duplicate lines from files with awk.md b/sources/tech/20191028 How to remove duplicate lines from files with awk.md
deleted file mode 100644
index fea53c85a9..0000000000
--- a/sources/tech/20191028 How to remove duplicate lines from files with awk.md
+++ /dev/null
@@ -1,243 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (lxbwolf)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (How to remove duplicate lines from files with awk)
-[#]: via: (https://opensource.com/article/19/10/remove-duplicate-lines-files-awk)
-[#]: author: (Lazarus Lazaridis https://opensource.com/users/iridakos)
-
-How to remove duplicate lines from files with awk
-======
-Learn how to use awk '!visited[$0]++' to remove duplicate lines without
-sorting them or changing their order.
-![Coding on a computer][1]
-
-Suppose you have a text file and you need to remove all of its duplicate lines.
-
-### TL;DR
-
-To remove the duplicate lines while _preserving their order in the file_, use:
-
-```
-awk '!visited[$0]++' your_file > deduplicated_file
-```
-
-### How it works
-
-The script keeps an associative array with _indices_ equal to the unique lines of the file and _values_ equal to their occurrences. For each line of the file, if the line's occurrence count is zero, the script increments it and _prints the line_; otherwise, it just increments the count _without printing the line_.
-
-I was not familiar with **awk**, and I wanted to understand how this can be accomplished with such a short script (**awk**ward). I did my research, and here is what is going on:
-
-  * The awk "script" **!visited[$0]++** is executed for _each line_ of the input file.
-  * **visited[]** is a variable of type [associative array][2] (a.k.a. [map][3]). We don't have to initialize it because **awk** initializes it the first time we access it.
-  * The **$0** variable holds the contents of the line currently being processed.
-  * **visited[$0]** accesses the value stored in the array under the key **$0** (the line being processed), a.k.a. the number of occurrences (which we set below).
-  * The **!** negates the occurrence count:
-    * In awk, [any nonzero numeric value or any nonempty string value is true][4].
-    * By default, [variables are initialized to the empty string][5], which is zero if converted to a number.
-    * That being said:
-      * If **visited[$0]** returns a number greater than zero, the negation resolves to **false**.
-      * If **visited[$0]** returns a number equal to zero or an empty string, the negation resolves to **true**.
-  * The **++** operation increases the variable's value (**visited[$0]**) by one.
-    * If the value is empty, **awk** converts it to **0** (a number) automatically and then increments it.
-    * **Note:** This is a _post_-increment, so it is applied after we access the variable's value; a short demo follows the summary below.
-
-Summing up, the whole expression evaluates to:
-
-  * **true** if the occurrence count is zero or an empty string
-  * **false** if the occurrence count is greater than zero
-
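-You can watch this post-increment at work by printing the counter next to each line. The following diagnostic one-liner is not part of the article's solution; it is just a sketch (any POSIX awk should do) that makes the value seen by **!** visible:
-
-```
-$ printf 'A\nA\nB\nA\n' | awk '{ print visited[$0]++, $0 }'
-0 A
-1 A
-0 B
-2 A
-```
-
-Only the lines printed with a **0** counter would pass the **!visited[$0]++** test, i.e., exactly the first occurrence of each distinct line.
-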
-**awk** statements consist of a [_pattern-expression_ and an _associated action_][6]:
-
-```
-<pattern/expression> { <action> }
-```
-
-If the pattern succeeds, then the associated action is executed. If we don't provide an action, **awk**, by default, **print**s the input.
-
-> An omitted action is equivalent to **{ print $0 }**.
-
-Our script consists of one **awk** statement with an expression, omitting the action. So this:
-
-```
-awk '!visited[$0]++' your_file > deduplicated_file
-```
-
-is equivalent to this:
-
-```
-awk '!visited[$0]++ { print $0 }' your_file > deduplicated_file
-```
-
-For every line of the file, if the expression succeeds, the line is printed to the output. Otherwise, the action is not executed, and nothing is printed.
-
-### Why not use the **uniq** command?
-
-The **uniq** command removes only the _adjacent_ duplicate lines. Here's a demonstration:
-
-```
-$ cat test.txt
-A
-A
-A
-B
-B
-B
-A
-A
-C
-C
-C
-B
-B
-A
-$ uniq < test.txt
-A
-B
-A
-C
-B
-A
-```
-
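-Because **uniq** only collapses adjacent duplicates, it is usually fed pre-sorted input. The following sketch, reusing the same **test.txt**, removes every duplicate, but only because sorting first makes identical lines adjacent, at the cost of the original order:
-
-```
-$ sort test.txt | uniq
-A
-B
-C
-```
-
-This is what **sort -u** does in a single command, as the next section shows.
-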
-### Other approaches
-
-#### Using the sort command
-
-We can also use the following [**sort**][7] command to remove the duplicate lines, but _the line order is not preserved_.
-
-```
-sort -u your_file > sorted_deduplicated_file
-```
-
-#### Using cat, sort, and cut
-
-The previous approach produces a de-duplicated file whose lines are sorted by content. [Piping a bunch of commands][8] can overcome this issue:
-
-```
-cat -n your_file | sort -uk2 | sort -nk1 | cut -f2-
-```
-
-##### How it works
-
-Suppose we have the following file:
-
-```
-abc
-ghi
-abc
-def
-xyz
-def
-ghi
-klm
-```
-
-**cat -n test.txt** prepends a line number to each line.
-
-```
-1       abc
-2       ghi
-3       abc
-4       def
-5       xyz
-6       def
-7       ghi
-8       klm
-```
-
-**sort -uk2** sorts the lines based on the second column (the **k2** option) and keeps only the first occurrence of the lines with the same second-column value (the **u** option).
-
-```
-1       abc
-4       def
-2       ghi
-8       klm
-5       xyz
-```
-
-**sort -nk1** sorts the lines based on their first column (the **k1** option), treating the column as a number (the **-n** option).
-
-```
-1       abc
-2       ghi
-4       def
-5       xyz
-8       klm
-```
-
-Finally, **cut -f2-** prints each line from its second column to its end (the **-f2-** option: _note the **-** suffix, which instructs it to include the rest of the line_).
-
-```
-abc
-ghi
-def
-xyz
-klm
-```
-
-### References
-
-  * [The GNU awk user's guide][9]
-  * [Arrays in awk][2]
-  * [Awk—Truth values][4]
-  * [Awk expressions][5]
-  * [How can I delete duplicate lines in a file in Unix?][10]
-  * [Remove duplicate lines without sorting [duplicate]][11]
-  * [How does awk '!a[$0]++' work?][12]
-
-That's all. Cat photo.
-
-![Duplicate cat][13]
-
-* * *
-
-_This article originally appeared on the iridakos blog by [Lazarus Lazaridis][14] under a [CC BY-NC 4.0 License][15] and is republished with the author's permission._
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/19/10/remove-duplicate-lines-files-awk
-
-Author: [Lazarus Lazaridis][a]
-Topic selection: [lujun9972][b]
-Translator: [译者ID](https://github.com/译者ID)
-Proofreader: [校对者ID](https://github.com/校对者ID)
-
-This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
-
-[a]: https://opensource.com/users/iridakos
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
-[2]: http://kirste.userpage.fu-berlin.de/chemnet/use/info/gawk/gawk_12.html
-[3]: https://en.wikipedia.org/wiki/Associative_array
-[4]: https://www.gnu.org/software/gawk/manual/html_node/Truth-Values.html
-[5]: https://ftp.gnu.org/old-gnu/Manuals/gawk-3.0.3/html_chapter/gawk_8.html
-[6]: http://kirste.userpage.fu-berlin.de/chemnet/use/info/gawk/gawk_9.html
-[7]: http://man7.org/linux/man-pages/man1/sort.1.html
-[8]: https://stackoverflow.com/a/20639730/2292448
-[9]: https://www.gnu.org/software/gawk/manual/html_node/
-[10]: https://stackoverflow.com/questions/1444406/how-can-i-delete-duplicate-lines-in-a-file-in-unix
-[11]: https://stackoverflow.com/questions/11532157/remove-duplicate-lines-without-sorting
-[12]: https://unix.stackexchange.com/questions/159695/how-does-awk-a0-work/159734#159734
-[13]: https://opensource.com/sites/default/files/uploads/duplicate-cat.jpg (Duplicate cat)
-[14]: https://iridakos.com/about/
-[15]: http://creativecommons.org/licenses/by-nc/4.0/
diff --git a/translated/tech/20191028 How to remove duplicate lines from files with awk.md b/translated/tech/20191028 How to remove duplicate lines from files with awk.md
new file mode 100644
index 0000000000..a6fe25570e
--- /dev/null
+++ b/translated/tech/20191028 How to remove duplicate lines from files with awk.md
@@ -0,0 +1,242 @@
+[#]: collector: (lujun9972)
+[#]: translator: (lxbwolf)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to remove duplicate lines from files with awk)
+[#]: via: (https://opensource.com/article/19/10/remove-duplicate-lines-files-awk)
+[#]: author: (Lazarus Lazaridis https://opensource.com/users/iridakos)
+
+How to remove duplicate lines from files with awk
+======
+Learn how to use awk '!visited[$0]++' to remove duplicate lines without sorting them or changing their order.
+![Coding on a computer][1]
+
+Suppose you have a text file and you need to remove all of its duplicate lines.
+
+### TL;DR
+
+To remove the duplicate lines while _preserving their order in the file_, use:
+
+```
+awk '!visited[$0]++' your_file > deduplicated_file
+```
+
+### How it works
+
+The script keeps an associative array with _indices_ equal to the unique lines of the file and _values_ equal to their occurrences. For each line of the file, if the line's occurrence count is zero, the script increments it and _prints the line_; otherwise, it just increments the count _without printing the line_.
+
+I was not familiar with **awk**, and I wanted to understand how this can be accomplished with such a short script (**awk**ward). I did my research, and here is what is going on:
+
+  * The awk "script" **!visited[$0]++** is executed for _each line_ of the input file.
+  * **visited[]** is a variable of type [associative array][2] (a.k.a. [map][3]). We don't have to initialize it because **awk** initializes it the first time we access it.
+  * The **$0** variable holds the contents of the line currently being processed.
+  * **visited[$0]** accesses the value stored in the array under the key **$0** (the line being processed), a.k.a. the number of occurrences (which we set below).
+  * The **!** negates the occurrence count:
+    * In awk, [any nonzero numeric value or any nonempty string value is true][4].
+    * By default, [variables are initialized to the empty string][5], which is zero if converted to a number.
+    * That being said:
+      * If **visited[$0]** returns a number greater than zero, the negation resolves to **false**.
+      * If **visited[$0]** returns a number equal to zero or an empty string, the negation resolves to **true**.
+  * The **++** operation increases the variable's value (**visited[$0]**) by one.
+    * If the value is empty, **awk** converts it to **0** (a number) automatically and then increments it.
+    * **Note:** This is a _post_-increment, so it is applied after we access the variable's value; a short demo follows the summary below.
+
+Summing up, the whole expression evaluates to:
+
+  * **true** if the occurrence count is zero or an empty string
+  * **false** if the occurrence count is greater than zero
+
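+You can watch this post-increment at work by printing the counter next to each line. The following diagnostic one-liner is not part of the article's solution; it is just a sketch (any POSIX awk should do) that makes the value seen by **!** visible:
+
+```
+$ printf 'A\nA\nB\nA\n' | awk '{ print visited[$0]++, $0 }'
+0 A
+1 A
+0 B
+2 A
+```
+
+Only the lines printed with a **0** counter would pass the **!visited[$0]++** test, i.e., exactly the first occurrence of each distinct line.
+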
+**awk** statements consist of a [_pattern-expression_ and an _associated action_][6]:
+
+```
+<pattern/expression> { <action> }
+```
+
+If the pattern succeeds, then the associated action is executed. If we don't provide an action, **awk**, by default, **print**s the input.
+
+> An omitted action is equivalent to **{ print $0 }**.
+
+Our script consists of one **awk** statement with an expression, omitting the action. So this:
+
+```
+awk '!visited[$0]++' your_file > deduplicated_file
+```
+
+is equivalent to this:
+
+```
+awk '!visited[$0]++ { print $0 }' your_file > deduplicated_file
+```
+
+For every line of the file, if the expression succeeds, the line is printed to the output. Otherwise, the action is not executed, and nothing is printed.
+
+### Why not use the **uniq** command?
+
+The **uniq** command removes only the _adjacent_ duplicate lines. Here's a demonstration:
+
+```
+$ cat test.txt
+A
+A
+A
+B
+B
+B
+A
+A
+C
+C
+C
+B
+B
+A
+$ uniq < test.txt
+A
+B
+A
+C
+B
+A
+```
+
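+Because **uniq** only collapses adjacent duplicates, it is usually fed pre-sorted input. The following sketch, reusing the same **test.txt**, removes every duplicate, but only because sorting first makes identical lines adjacent, at the cost of the original order:
+
+```
+$ sort test.txt | uniq
+A
+B
+C
+```
+
+This is what **sort -u** does in a single command, as the next section shows.
+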
+### Other approaches
+
+#### Using the sort command
+
+We can also use the following [**sort**][7] command to remove the duplicate lines, but _the line order is not preserved_.
+
+```
+sort -u your_file > sorted_deduplicated_file
+```
+
+#### Using cat, sort, and cut
+
+The previous approach produces a de-duplicated file whose lines are sorted by content. [Piping a bunch of commands][8] can overcome this issue:
+
+```
+cat -n your_file | sort -uk2 | sort -nk1 | cut -f2-
+```
+
+##### How it works
+
+Suppose we have the following file:
+
+```
+abc
+ghi
+abc
+def
+xyz
+def
+ghi
+klm
+```
+
+**cat -n test.txt** prepends a line number to each line.
+
+```
+1       abc
+2       ghi
+3       abc
+4       def
+5       xyz
+6       def
+7       ghi
+8       klm
+```
+
+**sort -uk2** sorts the lines based on the second column (the **k2** option) and keeps only the first occurrence of the lines with the same second-column value (the **u** option).
+
+```
+1       abc
+4       def
+2       ghi
+8       klm
+5       xyz
+```
+
+**sort -nk1** sorts the lines based on their first column (the **k1** option), treating the column as a number (the **-n** option).
+
+```
+1       abc
+2       ghi
+4       def
+5       xyz
+8       klm
+```
+
+Finally, **cut -f2-** prints each line from its second column to its end (the **-f2-** option: _note the **-** suffix, which instructs it to include the rest of the line_).
+
+```
+abc
+ghi
+def
+xyz
+klm
+```
+
+### References
+
+  * [The GNU awk user's guide][9]
+  * [Arrays in awk][2]
+  * [Awk—Truth values][4]
+  * [Awk expressions][5]
+  * [How can I delete duplicate lines in a file in Unix?][10]
+  * [Remove duplicate lines without sorting [duplicate]][11]
+  * [How does awk '!a[$0]++' work?][12]
+
+That's all. Cat photo.
+
+![Duplicate cat][13]
+
+* * *
+
+_This article originally appeared on the iridakos blog by [Lazarus Lazaridis][14] under a [CC BY-NC 4.0 License][15] and is republished with the author's permission._
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/10/remove-duplicate-lines-files-awk
+
+Author: [Lazarus Lazaridis][a]
+Topic selection: [lujun9972][b]
+Translator: [lxbwolf](https://github.com/lxbwolf)
+Proofreader: [校对者ID](https://github.com/校对者ID)
+
+This article is originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
+
+[a]: https://opensource.com/users/iridakos
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
+[2]: http://kirste.userpage.fu-berlin.de/chemnet/use/info/gawk/gawk_12.html
+[3]: https://en.wikipedia.org/wiki/Associative_array
+[4]: https://www.gnu.org/software/gawk/manual/html_node/Truth-Values.html
+[5]: https://ftp.gnu.org/old-gnu/Manuals/gawk-3.0.3/html_chapter/gawk_8.html
+[6]: http://kirste.userpage.fu-berlin.de/chemnet/use/info/gawk/gawk_9.html
+[7]: http://man7.org/linux/man-pages/man1/sort.1.html
+[8]: https://stackoverflow.com/a/20639730/2292448
+[9]: https://www.gnu.org/software/gawk/manual/html_node/
+[10]: https://stackoverflow.com/questions/1444406/how-can-i-delete-duplicate-lines-in-a-file-in-unix
+[11]: https://stackoverflow.com/questions/11532157/remove-duplicate-lines-without-sorting
+[12]: https://unix.stackexchange.com/questions/159695/how-does-awk-a0-work/159734#159734
+[13]: https://opensource.com/sites/default/files/uploads/duplicate-cat.jpg (Duplicate cat)
+[14]: https://iridakos.com/about/
+[15]: http://creativecommons.org/licenses/by-nc/4.0/