Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-12-03 23:23:19 +08:00
commit 01b94ed72b
14 changed files with 816 additions and 614 deletions

View File

@ -0,0 +1,153 @@
[#]: collector: "lujun9972"
[#]: translator: "lxbwolf"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-11636-1.html"
[#]: subject: "How to use loops in awk"
[#]: via: "https://opensource.com/article/19/11/loops-awk"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
在 awk 中怎么使用循环
======
> 来学习一下多次执行同一条命令的不同类型的循环。
![](https://img.linux.net.cn/data/attachment/album/201912/02/232951h3ibohlh77bk77d7.jpg)
`awk` 脚本有三个主要部分：`BEGIN` 和 `END` 函数（两者都是可选的），以及用户编写的、对每条记录都会执行的函数。某种程度上，`awk` 的主体部分就是一个循环，因为函数中的命令对每一条记录都会执行一次。然而，有时你希望对同一条记录执行多次命令，那么你就需要用到循环。
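这个三段式结构可以用下面的极简脚本来体会（仅为示意，在 shell 中用 `printf` 提供输入）：

```shell
# BEGIN 在读取输入前执行一次，主体对每条记录执行一次，END 在最后执行一次
printf '1\n2\n3\n' | awk 'BEGIN { sum = 0 } { sum += $1 } END { print "sum:", sum }'
# 输出：sum: 6
```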
有多种类型的循环,分别适合不同的场景。
### while 循环
一个 `while` 循环检测一个表达式,如果表达式为 `true` 就执行命令。当表达式变为 `false` 时,循环中断。
```
#!/bin/awk -f
BEGIN {
# Loop through 1 to 10
i=1;
while (i <= 10) {
print i, " to the second power is ", i*i;
i = i+1;
}
exit;
}
```
在这个简单实例中，`awk` 打印了放在变量 `i` 中的整数值的平方。`while (i <= 10)` 语句告诉 `awk` 仅在 `i` 的值小于或等于 10 时才执行循环。在循环最后一次执行时（`i` 的值是 10），循环终止。
### do-while 循环
do-while 循环先执行 `do` 关键字之后的命令，然后在每次循环结束时检测一个测试表达式来决定是否继续。仅当测试表达式返回 `true`（即尚未达到终止循环的条件）时，命令才会被再次执行；当测试表达式返回 `false` 时，循环终止。
```
#!/usr/bin/awk -f
BEGIN {
i=2;
do {
print i, " to the second power is ", i*i;
i = i + 1
}
while (i < 10)
exit;
}
```
### for 循环
`awk` 中有两种 `for` 循环。
一种 `for` 循环初始化一个变量,检测一个测试表达式,执行变量递增,当表达式的结果为 `true` 时循环就会一直执行。
```
#!/bin/awk -f
BEGIN {
for (i=1; i <= 10; i++) {
print i, " to the second power is ", i*i;
}
exit;
}
```
另一种 `for` 循环设置一个有连续索引的数组变量,对每一个索引执行一个命令集。换句话说,它用一个数组“收集”每一条命令执行后的结果。
本例实现了一个简易版的 Unix 命令 `uniq`。通过把一系列字符串作为键加到数组 `a` 中，当相同的键再次出现时就增加键值，可以得到某个字符串出现的次数（就像 `uniq` 的 `--count` 选项）。如果你打印该数组的所有键，将会得到出现过的所有字符串。
用演示文件 `colours.txt`(前一篇文章中的文件)来举例:
```
name       color  amount
apple      red    4
banana     yellow 6
raspberry  red    99
strawberry red    3
grape      purple 10
apple      green  8
plum       purple 2
kiwi       brown  4
potato     brown  9
pineapple  yellow 5
```
这是 `awk` 版的简易 `uniq -c`：
```
#! /usr/bin/awk -f
NR != 1 {
    a[$2]++
}
END {
    for (key in a) {
                print a[key] " " key
    }
}
```
示例数据文件的第三列是第一列列出的条目的计数。你可以用一个数组和 `for` 循环来按颜色统计第三列的条目。
```
#! /usr/bin/awk -f
BEGIN {
    FS=" ";
    OFS="\t";
    print("color\tsum");
}
NR != 1 {
    a[$2]+=$3;
}
END {
    for (b in a) {
        print b, a[b]
    }
}
```
你可以看到，列表头是在处理文件之前，在（仅执行一次的）`BEGIN` 函数中打印的。
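用上文的数据运行这个脚本，可以验证各颜色的总量（这里把脚本与数据内联，仅为示意）：

```shell
# 按颜色汇总第三列；数据即上文的 colours.txt，此处通过 heredoc 内联
awk 'BEGIN { FS = " "; OFS = "\t"; print("color\tsum") }
NR != 1 { a[$2] += $3 }
END { for (b in a) print b, a[b] }' <<'EOF'
name       color  amount
apple      red    4
banana     yellow 6
raspberry  red    99
strawberry red    3
grape      purple 10
apple      green  8
plum       purple 2
kiwi       brown  4
potato     brown  9
pineapple  yellow 5
EOF
# 输出顺序不定，各颜色的总量为：red 106、yellow 11、purple 12、green 8、brown 13
```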
### 循环
在任何编程语言中循环都是很重要的一部分,`awk` 也不例外。使用循环你可以控制 `awk` 脚本怎样去运行,它可以统计什么信息,还有它怎么去处理你的数据。我们下一篇文章会讨论 `switch`、`continue` 和 `next` 语句。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/loops-awk
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh "arrows cycle symbol for failing faster"
[2]: http://hackerpublicradio.org/eps.php?id=2330

View File

@ -1,34 +1,28 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11637-1.html)
[#]: subject: (5 Commands to Find the IP Address of a Domain in the Linux Terminal)
[#]: via: (https://www.2daygeek.com/linux-command-find-check-domain-ip-address/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
5 个用于在 Linux 终端中查找域 IP 地址的命令
5 个用于在 Linux 终端中查找域 IP 地址的命令
======
本教程介绍了如何在 Linux 终端验证域名或计算机名的 IP 地址。
![](https://img.linux.net.cn/data/attachment/album/201912/03/000402c0ekkgku1f011kzt.jpg)
本教程将允许你一次检查多个域。
本教程介绍了如何在 Linux 终端验证域名或计算机名的 IP 地址。本教程将允许你一次检查多个域。你可能已经使用过这些命令来验证信息。但是,我们将教你如何有效使用这些命令在 Linux 终端中识别多个域的 IP 地址信息。
你可能已经使用过这些命令来验证信息
可以使用以下 5 个命令来完成此操作
但是,我们将教你如何有效使用这些命令在 Linux 终端中识别多个域的 IP 地址信息。
* `dig` 命令:它是一个用于查询 DNS 名称服务器的灵活命令行工具。
* `host` 命令:它是用于执行 DNS 查询的简单程序。
* `nslookup` 命令:它用于查询互联网域名服务器。
* `fping` 命令:它用于向网络主机发送 ICMP ECHO_REQUEST 数据包。
* `ping` 命令:它用于向网络主机发送 ICMP ECHO_REQUEST 数据包。
可以使用以下5个命令来完成此操作。
* **dig 命令:** dig 是用于查询 DNS名称服务器的灵活命令行工具。
  * **host 命令:** host 是用于执行 DNS 查询的简单程序。
  * **nslookup 命令:** nslookup 命令用于查询互联网域名服务器。
  * **fping 命令:** fping 命令用于将 ICMP ECHO_REQUEST 数据包发送到网络主机。
  * **ping 命令:** ping 命令用于向网络主机发送 ICMP ECHO_REQUEST 数据包。
为了测试,我们创建了一个名为 “domains-list.txt” 的文件,并添加了以下域。
为了测试,我们创建了一个名为 `domains-list.txt` 的文件,并添加了以下域。
```
# vi /opt/scripts/domains-list.txt
@ -40,11 +34,9 @@ linuxtechnews.com
### 方法 1：如何使用 dig 命令查找域的 IP 地址
**[dig 命令][1]**代表 “domain information groper”,它是一个功能强大且灵活的命令行工具,用于查询 DNS 名称服务器。
[dig 命令][1]代表 “<ruby>域名信息抓手<rt>Domain Information Groper</rt></ruby>”,它是一个功能强大且灵活的命令行工具,用于查询 DNS 名称服务器。
它执行 DNS 查询,并显示来自查询的名称服务器的返回信息。
大多数 DNS 管理员使用 dig 命令来解决 DNS 问题,因为它灵活、易用且输出清晰。
它执行 DNS 查询,并显示来自查询的名称服务器的返回信息。大多数 DNS 管理员使用 `dig` 命令来解决 DNS 问题,因为它灵活、易用且输出清晰。
它还有批处理模式,可以从文件读取搜索请求。
@ -67,7 +59,7 @@ dig $server +short
done | paste -d " " - - -
```
添加以上脚本后,给 “dig-command.sh” 文件设置可执行权限。
添加以上内容到脚本后,给 `dig-command.sh` 文件设置可执行权限。
```
# chmod +x /opt/scripts/dig-command.sh
@ -104,13 +96,9 @@ linuxtechnews.com. 104.27.145.3
### 方法 2：如何使用 host 命令查找域的 IP 地址
**[host 命令][2]**是一个简单的命令行程序,用于执行 **[DNS 查询][3]**
[host 命令][2]是一个简单的命令行程序,用于执行 [DNS 查询][3]。它通常用于将名称转换为 IP 地址,反之亦然。如果未提供任何参数或选项,`host` 将打印它的命令行参数和选项摘要
它通常用于将名称转换为 IP 地址,反之亦然。
如果未提供任何参数或选项host 将打印它的命令行参数和选项摘要。
你可以在 host 命令中添加特定选项或记录类型来查看域中的所有记录类型。
你可以在 `host` 命令中添加特定选项或记录类型来查看域中的所有记录类型。
```
# host 2daygeek.com | grep "has address" | sed 's/has address/-/g'
@ -129,7 +117,7 @@ do host $server | grep "has address" | sed 's/has address/-/g'
done
```
添加以上脚本后,给 “host-command.sh” 文件设置可执行权限。
添加以上内容到脚本后,给 `host-command.sh` 文件设置可执行权限。
```
# chmod +x /opt/scripts/host-command.sh
@ -150,13 +138,9 @@ linuxtechnews.com - 104.27.145.3
### 方法 3：如何使用 nslookup 命令查找域的 IP 地址
**[nslookup 命令][4]**是用于查询互联网**[域名服务器DNS] [5]**的程序。
[nslookup 命令][4]是用于查询互联网[域名服务器（DNS）][5]的程序。
nslookup 有两种模式,分别是交互式和非交互式。
交互模式允许用户查询名称服务器以获取有关各种主机和域的信息,或打印域中的主机列表。
非交互模式用于仅打印主机或域的名称和请求的信息。
`nslookup` 有两种模式,分别是交互式和非交互式。交互模式允许用户查询名称服务器以获取有关各种主机和域的信息,或打印域中的主机列表。非交互模式用于仅打印主机或域的名称和请求的信息。
它是一个网络管理工具,可以帮助诊断和解决 DNS 相关问题。
@ -178,7 +162,7 @@ do echo $server "-"
nslookup -q=A $server | tail -n+4 | sed -e '/^$/d' -e 's/Address://g' | grep -v 'Name|answer' | xargs -n1
done | paste -d " " - - -
```
添加以上脚本后,给 “nslookup-command.sh” 文件设置可执行权限。
添加以上内容到脚本后,给 `nslookup-command.sh` 文件设置可执行权限。
```
# chmod +x /opt/scripts/nslookup-command.sh
@ -196,11 +180,11 @@ linuxtechnews.com - 104.27.144.3 104.27.145.3
### 方法 4：如何使用 fping 命令查找域的 IP 地址
**[fping 命令][6]**是类似 ping 之类的程序它使用互联网控制消息协议ICMPecho 请求来确定目标主机是否响应。
[fping 命令][6]是类似 `ping` 的程序，它使用互联网控制消息协议（ICMP）的 echo 请求来确定目标主机是否响应。
fping 与 ping 不同,因为它允许用户并行 ping 任意数量的主机。另外,它可以从文本文件输入主机。
`fping``ping` 不同,因为它允许用户并行 ping 任意数量的主机。另外,它可以从文本文件输入主机。
fping 发送 ICMP echo 请求,并以循环方式移到下一个目标,并且不等到目标主机做出响应。
`fping` 发送 ICMP echo 请求,并以循环方式移到下一个目标,并且不等到目标主机做出响应。
如果目标主机答复,那么将其标记为活动主机并从要检查的目标列表中删除;如果目标在特定时间限制和/或重试限制内未响应,那么将其指定为不可访问。
@ -214,7 +198,7 @@ fping 发送 ICMP echo 请求,并以循环方式移到下一个目标,并且
### 方法 5：如何使用 ping 命令查找域的 IP 地址
**[pingPacket Internet Groper命令][6]**是一个网络程序,用于测试 Internet 协议IP网络上主机的可用性/连接性。
[ping 命令][6]<ruby>数据包互联网抓手<rt>Packet Internet Groper</rt></ruby>)是一个网络程序,用于测试 Internet 协议IP网络上主机的可用性/连接性。
通过向目标主机发送互联网控制消息协议（ICMP）Echo 请求数据包并等待 ICMP Echo 应答来验证主机的可用性。
@ -238,7 +222,7 @@ ping -c 2 $server | head -2 | tail -1 | awk '{print $5}' | sed 's/[(:)]//g'
done | paste -d " " - -
```
添加以上脚本后,给 “ping-command.sh” 文件设置可执行权限。
添加以上内容到脚本后,给 `ping-command.sh` 文件设置可执行权限。
```
# chmod +x /opt/scripts/ping-command.sh
@ -261,7 +245,7 @@ via: https://www.2daygeek.com/linux-command-find-check-domain-ip-address/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,70 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Nvidia quietly unveils faster, lower power Tesla GPU accelerator)
[#]: via: (https://www.networkworld.com/article/3482097/nvidia-quietly-unveils-faster-lower-power-tesla-gpu-accelerator.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Nvidia quietly unveils faster, lower power Tesla GPU accelerator
======
Nvidia has upgraded its Volta line of Tesla GPU-accelerator cards to work faster using the same power as its old model.
Nvidia was all over Supercomputing 19 last week, not surprisingly, and made a lot of news which we will get into later. But overlooked was perhaps the most interesting news of all: a new generation graphics-acceleration card that is faster and way more power efficient.
Multiple attendees and news sites spotted it at the show, and Nvidia confirmed to me that this is indeed a new card. Nvidia's "Volta" generation of Tesla GPU-accelerator cards has been out since 2017, so an upgrade was well overdue.
The V100S comes only in PCI Express 3 form factor for now but is expected to eventually support Nvidia's SXM2 interface. SXM is a dual-slot card design by Nvidia that requires no connection to the power supply, unlike the PCIe cards. SXM2 allows the GPUs to communicate either with each other or with the CPU through Nvidia's NVLink, a high-bandwidth, energy-efficient interconnect that can transfer data up to ten times faster than PCIe.
With this card, Nvidia is claiming 16.4 single-precision TFLOPS, 8.2 double-precision TFLOPS, and Tensor Core performance of up to 130 TFLOPS. That is only a 4-to-5 percent improvement over the V100 SXM2 design, but 16-to-17 percent faster than the PCIe V100 variant.
Memory capacity remains at 32GB but Nvidia added High Bandwidth Memory 2 (HBM2) to increase memory performance to 1,134GB/s, a 26 percent improvement over both PCIe and SXM2.
Now normally a performance boost would see a concurrent increase in power demand, but in this case, the power envelope for the PCIe card is 250 watts, same as the prior generation PCIe card. So this card delivers 16-to-17 percent more compute performance and 26 percent more memory bandwidth at the same power draw.
**Other News**
Nvidia made some other news at the conference:
* A new reference design and ecosystem support for its GPU-accelerated Arm-based reference servers for high-performance computing. The company says it has support from HPE/Cray, Marvell, Fujitsu, and Ampere, the startup led by former Intel executive Renee James looking to build Arm-based server processors.
  * These companies will use Nvidia's reference design, which consists of hardware and software components, to build their own GPU-accelerated servers for everything from hyperscale cloud providers to high-performance storage and exascale supercomputing. The design also comes with CUDA-X, a special version of Nvidia's CUDA GPU development language for Arm processors.
* Launch of Nvidia Magnum IO suite of software designed to help data scientists and AI and high-performance-computing researchers process massive amounts of data in minutes rather than hours. It is optimized to eliminate storage and I/O bottlenecks to deliver up to 20x faster data processing for multi-server, multi-GPU computing nodes.
  * Nvidia and DDN, developer of AI and multicloud data management, announced a bundling of DDN's A3I data management system with Nvidia's DGX SuperPOD systems so customers can deploy HPC infrastructure with minimal complexity and reduced timelines. The SuperPODs would also come with the new NVIDIA Magnum IO software stack.
  * DDN said that SuperPOD was able to be deployed within hours and a single appliance could scale to all 80 nodes. Benchmarks over a variety of different deep-learning models showed that the DDN system could keep a DGX SuperPOD system fully saturated with data.
**Now see** [**10 of the world's fastest supercomputers**][3]
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3482097/nvidia-quietly-unveils-faster-lower-power-tesla-gpu-accelerator.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE20773&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[3]: https://www.networkworld.com/article/3236875/embargo-10-of-the-worlds-fastest-supercomputers.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (alim0x)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,84 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (hanwckf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (curl exercises)
[#]: via: (https://jvns.ca/blog/2019/08/27/curl-exercises/)
[#]: author: (Julia Evans https://jvns.ca/)
curl exercises
======
Recently I've been interested in how people learn things. I was reading Kathy Sierra's great book [Badass: Making Users Awesome][1]. It talks about the idea of _deliberate practice_.
The idea is that you find a small micro-skill that can be learned in maybe 3 sessions of 45 minutes, and focus on learning that micro-skill. So, as an exercise, I was trying to think of a computer skill that I thought could be learned in 3 45-minute sessions.
I thought that making HTTP requests with `curl` might be a skill like that, so here are some curl exercises as an experiment!
### what's curl?
curl is a command line tool for making HTTP requests. I like it because it's an easy way to test that servers or APIs are doing what I think, but it's a little confusing at first!
Here's a drawing explaining curl's most important command line arguments (which is page 6 of my [Bite Size Networking][2] zine). You can click to make it bigger.
<https://jvns.ca/images/curl.jpeg>
### fluency is valuable
With any command line tool, I think having fluency is really helpful. It's really nice to be able to just type in the thing you need. For example recently I was testing out the Gumroad API and I was able to just type in:
```
curl https://api.gumroad.com/v2/sales \
-d "access_token=<SECRET>" \
-X GET -d "before=2016-09-03"
```
and get things working from the command line.
### 21 curl exercises
These exercises are about understanding how to make different kinds of HTTP requests with curl. They're a little repetitive on purpose. They exercise basically everything I do with curl.
To keep it simple, were going to make a lot of our requests to the same website: <https://httpbin.org>. httpbin is a service that accepts HTTP requests and then tells you what request you made.
1. Request <https://httpbin.org>
2. Request <https://httpbin.org/anything>. httpbin.org/anything will look at the request you made, parse it, and echo back to you what you requested. curl's default is to make a GET request.
3. Make a POST request to <https://httpbin.org/anything>
4. Make a GET request to <https://httpbin.org/anything>, but this time add some query parameters (set `value=panda`).
5. Request googles robots.txt file ([www.google.com/robots.txt][3])
6. Make a GET request to <https://httpbin.org/anything> and set the header `User-Agent: elephant`.
7. Make a DELETE request to <https://httpbin.org/anything>
8. Request <https://httpbin.org/anything> and also get the response headers
9. Make a POST request to <https://httpbin.org/anything> with the JSON body `{"value": "panda"}`
10. Make the same POST request as the previous exercise, but set the Content-Type header to `application/json` (because POST requests need to have a content type that matches their body). Look at the `json` field in the response to see the difference from the previous one.
11. Make a GET request to <https://httpbin.org/anything> and set the header `Accept-Encoding: gzip` (what happens? why?)
12. Put a bunch of a JSON in a file and then make a POST request to <https://httpbin.org/anything> with the JSON in that file as the body
13. Make a request to <https://httpbin.org/image> and set the header `Accept: image/png`. Save the output to a PNG file and open the file in an image viewer. Try the same thing with different `Accept:` headers.
14. Make a PUT request to <https://httpbin.org/anything>
15. Request <https://httpbin.org/image/jpeg>, save it to a file, and open that file in your image editor.
16. Request <https://www.twitter.com>. You'll get an empty response. Get curl to show you the response headers too, and try to figure out why the response was empty.
17. Make any request to <https://httpbin.org/anything> and just set some nonsense headers (like `panda: elephant`)
18. Request <https://httpbin.org/status/404> and <https://httpbin.org/status/200>. Request them again and get curl to show the response headers.
19. Request <https://httpbin.org/anything> and set a username and password (with `-u username:password`)
20. Download the Twitter homepage (<https://twitter.com>) in Spanish by setting the `Accept-Language: es-ES` header.
21. Make a request to the Stripe API with curl. (see <https://stripe.com/docs/development> for how, they give you a test API key). Try making exactly the same request to <https://httpbin.org/anything>.
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/08/27/curl-exercises/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Badass-Making-Awesome-Kathy-Sierra/dp/1491919019
[2]: https://wizardzines.com/zines/bite-size-networking
[3]: http://www.google.com/robots.txt

View File

@ -1,250 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get sorted with sort at the command line)
[#]: via: (https://opensource.com/article/19/10/get-sorted-sort)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Get sorted with sort at the command line
======
Reorganize your data in a format that makes sense to you—right from the Linux, BSD, or Mac terminal—with the sort command.
![Coding on a computer][1]
If you've ever used a spreadsheet application, then you know that rows can be sorted by the contents of a column. For instance, if you have a list of expenses, you might want to sort them by date or by ascending price or by category, and so on. If you're comfortable using a terminal, you may not want to have to use a big office application just to sort text data. And that's exactly what the [**sort**][2] command is for.
### Installing
You don't need to install **sort** because it's invariably included on any [POSIX][3] system. On most Linux systems, the **sort** command is bundled in a collection of utilities from the GNU organization. On other POSIX systems, such as BSD and Mac, the default **sort** command is not from GNU, so some options may differ. I'll attempt to account for both GNU and BSD implementations in this article.
### Sort lines alphabetically
The **sort** command, by default, looks at the first character of each line of a file and outputs each line in ascending alphabetic order. In the event that two characters on multiple lines are the same, it considers the next character. For example:
```
$ cat distro.list
Slackware
Fedora
Red Hat Enterprise Linux
Ubuntu
Arch
1337
Mint
Mageia
Debian
$ sort distro.list
1337
Arch
Debian
Fedora
Mageia
Mint
Red Hat Enterprise Linux
Slackware
Ubuntu
```
Using **sort** doesn't change the original file. Sort is a filter, so if you want to preserve your data in its sorted form, you must redirect the output using either **>** or **tee**:
```
$ sort distro.list | tee distro.sorted
1337
Arch
Debian
[...]
$ cat distro.sorted
1337
Arch
Debian
[...]
```
### Sort by column
Complex data sets sometimes need to be sorted by something other than the first letter of each line. Imagine, for instance, a list of animals and each one's species and genus, and each "field" (a "cell" in a spreadsheet) is defined by a predictable delimiter character. This is such a common data format for spreadsheet exports that the CSV (comma-separated values) file extension exists to identify such files (although a CSV file doesn't have to be comma-separated, nor does a delimited file have to use the CSV extension to be valid and usable). Consider this example data set:
```
Aptenodytes;forsteri;Miller,JF;1778;Emperor
Pygoscelis;papua;Wagler;1832;Gentoo
Eudyptula;minor;Bonaparte;1867;Little Blue
Spheniscus;demersus;Brisson;1760;African
Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
Torvaldis;linux;Ewing,L;1996;Tux
```
Given this sample data set, you can use the **\--field-separator** (use **-t** on BSD and Mac—or on GNU to reduce typing) option to set the delimiting character to a semicolon (because this example uses semicolons instead of commas, but it could use any character), and use the **\--key** (**-k** on BSD and Mac or on GNU to reduce typing) option to define which field to sort by. For example, to sort by the second field (starting at 1, not 0) of each line:
```
$ sort --field-separator=";" --key=2 penguins.list
Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
Spheniscus;demersus;Brisson;1760;African
Aptenodytes;forsteri;Miller,JF;1778;Emperor
Torvaldis;linux;Ewing,L;1996;Tux
Eudyptula;minor;Bonaparte;1867;Little Blue
Pygoscelis;papua;Wagler;1832;Gentoo
```
That's somewhat difficult to read, but Unix is famous for its _pipe_ method of constructing commands, so you can use the **column** command to "prettify" the output. Using GNU **column**:
```
$ sort --field-separator=";" \
  --key=2 penguins.list | \
column --table --separator ";"
Megadyptes   antipodes   Milne-Edwards  1880  Yellow-eyed
Eudyptes     chrysocome  Viellot        1816  Southern Rockhopper
Spheniscus   demersus    Brisson        1760  African
Aptenodytes  forsteri    Miller,JF      1778  Emperor
Torvaldis    linux       Ewing,L        1996  Tux
Eudyptula    minor       Bonaparte      1867  Little Blue
Pygoscelis   papua       Wagler         1832  Gentoo
```
Slightly more cryptic to the new user (but shorter to type), the command options on BSD and Mac:
```
$ sort -t ";" \
-k2 penguins.list | column -t -s ";"
Megadyptes   antipodes   Milne-Edwards  1880  Yellow-eyed
Eudyptes     chrysocome  Viellot        1816  Southern Rockhopper
Spheniscus   demersus    Brisson        1760  African
Aptenodytes  forsteri    Miller,JF      1778  Emperor
Torvaldis    linux       Ewing,L        1996  Tux
Eudyptula    minor       Bonaparte      1867  Little Blue
Pygoscelis   papua       Wagler         1832  Gentoo
```
The **key** definition doesn't have to be set to **2**, of course. Any existing field may be used as the sorting key.
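For instance, sorting on the fourth field (the year) works the same way; a quick sketch using the same sample data, where **\--numeric-sort** (**-n**) makes the years compare as numbers rather than as strings:

```shell
# The sample data from above, saved as penguins.list
cat > penguins.list <<'EOF'
Aptenodytes;forsteri;Miller,JF;1778;Emperor
Pygoscelis;papua;Wagler;1832;Gentoo
Eudyptula;minor;Bonaparte;1867;Little Blue
Spheniscus;demersus;Brisson;1760;African
Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
Torvaldis;linux;Ewing,L;1996;Tux
EOF
# Sort numerically on the fourth field (the year)
sort --field-separator=";" --key=4 --numeric-sort penguins.list
# First line of output: Spheniscus;demersus;Brisson;1760;African
```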
### Reverse sort
You can reverse the order of a sorted list with the **\--reverse** (**-r** on BSD or Mac or GNU for brevity):
```
$ sort --reverse alphabet.list
z
y
x
w
[...]
```
You can achieve the same result by piping the output of a normal sort through [tac][4].
### Sorting by month (GNU only)
In a perfect world, everyone would write dates according to the ISO 8601 standard: year, month, day. It's a logical method of specifying a unique date, and it's easy for computers to understand. And yet quite often, humans use other means of identifying dates, including months with pretty arbitrary names.
Fortunately, the GNU **sort** command accounts for this and is able to sort correctly by month name. Use the **\--month-sort** (**-M**) option:
```
$ cat month.list
November
October
September
April
[...]
$ sort --month-sort month.list
January
February
March
April
May
[...]
November
December
```
Months may be identified by their full name or some portion of their names.
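For example, three-letter abbreviations sort correctly too (a quick sketch; **-M** requires GNU sort, and `LC_ALL=C` pins the month names to English):

```shell
printf 'Dec\nFeb\nOct\nJan\n' | LC_ALL=C sort --month-sort
# Output:
# Jan
# Feb
# Oct
# Dec
```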
### Human-readable numeric sort (GNU only)
Another common point of confusion between humans and computers is groups of numbers. For instance, humans often write "1024 kilobytes" as "1KB" because it's easier and quicker for the human brain to parse "1KB" than "1024" (and it gets easier the larger the number becomes). To a computer, though, a string such as 9KB is larger than, for instance, 1MB (even though 9KB is only a fraction of a megabyte). The GNU **sort** command provides the **\--human-numeric-sort** (**-h**) option to help parse these values correctly.
```
$ cat sizes.list
2M
12MB
1k
9k
900
7000
$ sort --human-numeric-sort sizes.list
900
7000
1k
9k
2M
12MB
```
There are some inconsistencies. For example, 16,000 bytes is greater than 1KB, but **sort** fails to recognize that:
```
$ cat sizes0.list
2M
12MB
16000
1k
$ sort -h sizes0.list
16000
1k
2M
12MB
```
Logically, 16,000 should be written 16KB in this context, so GNU **sort** is not entirely to blame. As long as you are sure that your numbers are consistent, the **\--human-numeric-sort** option can help parse human-readable numbers in a computer-friendly way.
### Randomized sort (GNU only)
Sometimes utilities provide the option to do the opposite of what they're meant to do. In a way, it makes no sense for a **sort** command to have the ability to "sort" a file randomly. Then again, the workflow of the command makes it a convenient feature to have. You _could_ use a different command, like [**shuf**][5], or you could just add an option to the command you're using. Whether it's bloat or ingenious UX design, the GNU **sort** command provides the means to sort a file arbitrarily.
The purest form of arbitrary sorting is the **\--random-sort** or **-R** option (not to be confused with the **-r** option, which is short for **\--reverse**).
```
$ sort --random-sort alphabet.list
d
m
p
a
[...]
```
You can run a random sort multiple times on a file for different results each time.
### Sorted
There are many more features available with the **sort** GNU and BSD commands, so spend some time getting to know the options. You'll be surprised at how flexible **sort** can be, especially when it's combined with other Unix utilities.
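As one parting example of that flexibility, here is a common pipeline (a sketch, not taken from the examples above) that counts duplicate lines by combining **sort** with **uniq**:

```shell
# sort groups identical lines together, uniq -c counts each group,
# and a final numeric reverse sort ranks the groups by frequency
printf 'red\nblue\nred\ngreen\nred\nblue\n' | sort | uniq -c | sort -rn
```

The leading counts are padded differently across implementations, but the most frequent line (`red`, three occurrences) always sorts to the top.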
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/get-sorted-sort
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: https://en.wikipedia.org/wiki/Sort_(Unix)
[3]: https://en.wikipedia.org/wiki/POSIX
[4]: https://opensource.com/article/19/9/tac-command
[5]: https://www.gnu.org/software/coreutils/manual/html_node/shuf-invocation.html

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (hanwckf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use the Window Maker desktop on Linux)
[#]: via: (https://opensource.com/article/19/12/linux-window-maker-desktop)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Use the Window Maker desktop on Linux
======
This article is part of a special series of 24 days of Linux desktops.
Take a step back in time with Window Maker, which implements the old
Unix NeXTSTEP environment for today's users.
![Penguin with green background][1]
Before Mac OS X, there was a quirky closed-source Unix system called [NeXTSTEP][2]. Sun Microsystems later made NeXTSTEP's underpinnings an open specification, which enabled other projects to create free and open source versions of many NeXT libraries and components. GNUStep implemented the bulk of NeXTSTEP's libraries, and [Window Maker][3] implemented its desktop environment.
Window Maker mimics the NeXTSTEP desktop GUI closely and provides some interesting insight into what Unix was like in the late '80s and early '90s. It also reveals some of the foundational concepts behind window managers like Fluxbox and Openbox.
You can install Window Maker from your distribution's repository. To try it out, log out of your desktop session after the installation is complete. By default, your session manager (KDM, GDM, LightDM, or XDM, depending on your setup) will continue to log you into your default desktop, so you must override the default when logging in.
To switch to Window Maker on GDM:
![Selecting the Window Maker desktop in GDM][4]
And on KDM:
![Selecting the Window Maker desktop in KDM][5]
### Window Maker dock
By default, the Window Maker desktop is empty but for a few _docks_ in each corner. As in NeXTSTEP, in Window Maker, a dock area is where major applications can go to be minimized as icons, where launchers can be created for quick access to common applications, and where tiny "dockapps" can run.
You can try out a dockapp by searching for "dockapp" in your software repository. They tend to be network and system monitors, audio-setting panels, clocks, and similar. Here's Window Maker running on Fedora:
![Window Maker running on Fedora][6]
### Application menu
To access the application menu, right-click anywhere on the desktop. To close it again, right-click. Window Maker isn't a desktop environment; rather it's a window manager. It helps you arrange and manage windows. Its only bundled application is called [WPrefs][7] (or more commonly, Window Maker Preferences), a settings application that helps you configure commonly used settings, while the application menu provides access to other options, including themes.
The applications you run are entirely up to you. Within Window Maker, you can choose to run KDE applications, GNOME applications, and applications that are not considered part of any major desktop. Your work environment is yours to create, and you can manage it with Window Maker.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/12/linux-window-maker-desktop
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 (Penguin with green background)
[2]: https://en.wikipedia.org/wiki/NeXTSTEP
[3]: https://www.windowmaker.org/
[4]: https://opensource.com/sites/default/files/uploads/advent-windowmaker-gdm.jpg (Selecting the Window Maker desktop in GDM)
[5]: https://opensource.com/sites/default/files/uploads/advent-windowmaker-kdm.jpg (Selecting the Window Maker desktop in KDM)
[6]: https://opensource.com/sites/default/files/uploads/advent-windowmaker.jpg (Window Maker running on Fedora)
[7]: http://www.windowmaker.org/docs/guidedtour/prefs.html


@ -0,0 +1,182 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bash Script to Check Successful and Failed User Login Attempts on Linux)
[#]: via: (https://www.2daygeek.com/bash-script-to-check-successful-and-failed-user-login-attempts-on-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Bash Script to Check Successful and Failed User Login Attempts on Linux
======
One of the typical tasks of a Linux administrator is to check successful and failed login attempts on a Linux system.
This ensures that there are no unauthorized access attempts on the environment.
It is difficult to verify this manually because the raw contents of the **“/var/log/secure”** file are hard to read.
To make this easier and more effective, we need to write a bash script.
Yes, you can achieve this using the following **[Bash script][1]**.
I've included two shell scripts in this tutorial.
They show the number of users logged in to the system on a given date, along with the successful and failed login attempts.
The first **[shell script][2]** allows you to verify user access information for any date available in the **“/var/log/secure”** file.
The second bash script allows you to send a mail with user access information on a daily basis.
### Method-1: Shell Script to Check Successful and Failed User Login Attempts on Linux
This script allows you to verify user access information for a given date from the terminal.
```
# vi /opt/scripts/user-access-details.sh
#!/bin/bash
echo ""
echo -e "Enter the Date, Use Double Space for date from 1 to 9 (Nov 3) and use Single Space for date from 10 to 31 (Nov 30): \c"
read yday
MYPATH=/var/log/secure*
tuser=$(grep "$yday" $MYPATH | grep -E "Accepted|Failed" | wc -l)
suser=$(grep "$yday" $MYPATH | grep -E "Accepted password|Accepted publickey|keyboard-interactive" | wc -l)
fuser=$(grep "$yday" $MYPATH | grep "Failed password" | wc -l)
scount=$(grep "$yday" $MYPATH | grep "Accepted" | awk '{print $9;}' | sort | uniq -c)
fcount=$(grep "$yday" $MYPATH | grep "Failed" | awk '{print $9;}' | sort | uniq -c)
echo "--------------------------------------------"
echo " User Access Report on: $yday"
echo "--------------------------------------------"
echo "Number of Users logged on System: $tuser"
echo "Successful logins attempt: $suser"
echo "Failed logins attempt: $fuser"
echo "--------------------------------------------"
echo -e "Success User Details:\n $scount"
echo "--------------------------------------------"
echo -e "Failed User Details:\n $fcount"
echo "--------------------------------------------"
```
Set executable **[Linux file permission][3]** on the **“user-access-details.sh”** file.
```
# chmod +x /opt/scripts/user-access-details.sh
```
When you run the script, you will see a report like the one below.
```
# sh /opt/scripts/user-access-details.sh
Enter the Date, Use Double Space for date from 1 to 9 (Nov 3) and use Single Space for date from 10 to 31 (Nov 30): Nov 6
------------------------------------------
User Access Report on: Nov 6
------------------------------------------
Number of Users logged on System: 1
Successful login attempts: 1
Failed login attempts: 0
------------------------------------------
Success User Details:
1 root
------------------------------------------
Failed User Details:
------------------------------------------
```
Running the script for a busier date produces a fuller report:
```
# sh /opt/scripts/user-access-details.sh
Enter the Date, Use Double Space for date from 1 to 9 (Nov 3) and use Single Space for date from 10 to 31 (Nov 30): Nov 30
------------------------------------------
User Access Report on: Nov 30
------------------------------------------
Number of Users logged on System: 20
Successful login attempts: 14
Failed login attempts: 6
------------------------------------------
Success User Details:
1 daygeek
1 root
3 u1
4 u2
1 u3
2 u4
2 u5
------------------------------------------
Failed User Details:
3 u1
3 u4
------------------------------------------
```
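Before pointing the script at the live log, you can sanity-check the counting logic against a small, made-up excerpt of sshd messages. The sketch below is a self-contained illustration; the log lines and the `sample_secure.log` file name are invented for the example and are not part of the original script:

```shell
#!/bin/bash
# Build a tiny, fictional excerpt of /var/log/secure-style sshd messages.
cat > sample_secure.log <<'EOF'
Nov  6 09:01:12 host sshd[100]: Accepted password for root from 10.0.0.5 port 4242 ssh2
Nov  6 09:05:33 host sshd[101]: Failed password for u1 from 10.0.0.9 port 4243 ssh2
Nov  6 09:06:10 host sshd[102]: Accepted publickey for daygeek from 10.0.0.7 port 4244 ssh2
Nov 30 08:00:01 host sshd[103]: Failed password for u4 from 10.0.0.9 port 4245 ssh2
EOF

yday="Nov  6"   # double space before single-digit days, as in the real log
suser=$(grep "$yday" sample_secure.log | grep -cE "Accepted (password|publickey)")
fuser=$(grep "$yday" sample_secure.log | grep -c "Failed password")
# The username is the 9th whitespace-separated field of an sshd log line
scount=$(grep "$yday" sample_secure.log | grep "Accepted" | awk '{print $9}' | sort | uniq -c)

echo "Successful login attempts: $suser"   # prints 2
echo "Failed login attempts: $fuser"       # prints 1
echo -e "Success user details:\n$scount"   # daygeek and root, once each
```

Once the counts behave as expected on the sample, the same pipeline can be trusted on the real `/var/log/secure*` files.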
### Method-2: Bash Script to Check Successful and Failed User Login Attempts with Email Alert
This Bash script emails you a report of the previous day's user access details on a daily basis.
```
# vi /opt/scripts/user-access-details-2.sh
#!/bin/bash
> /tmp/u-access.txt
SUBJECT="User Access Reports on $(date)"
MESSAGE="/tmp/u-access.txt"
TO="[email protected]"
MYPATH=/var/log/secure*
yday=$(date --date='yesterday' | awk '{print $2,$3}')
tuser=$(grep "$yday" $MYPATH | grep -E "Accepted|Failed" | wc -l)
suser=$(grep "$yday" $MYPATH | grep -E "Accepted password|Accepted publickey|keyboard-interactive" | wc -l)
fuser=$(grep "$yday" $MYPATH | grep "Failed password" | wc -l)
scount=$(grep "$yday" $MYPATH | grep "Accepted" | awk '{print $9;}' | sort | uniq -c)
fcount=$(grep "$yday" $MYPATH | grep "Failed" | awk '{print $9;}' | sort | uniq -c)
echo "--------------------------------------------" >> $MESSAGE
echo " User Access Report on: $yday" >> $MESSAGE
echo "--------------------------------------------" >> $MESSAGE
echo "Number of Users logged on System: $tuser" >> $MESSAGE
echo "Successful logins attempt: $suser" >> $MESSAGE
echo "Failed logins attempt: $fuser" >> $MESSAGE
echo "--------------------------------------------" >> $MESSAGE
echo -e "Success User Details:\n $scount" >> $MESSAGE
echo "--------------------------------------------" >> $MESSAGE
echo -e "Failed User Details:\n $fcount" >> $MESSAGE
echo "--------------------------------------------" >> $MESSAGE
mail -s "$SUBJECT" "$TO" < $MESSAGE
```
Set executable permission on the **“user-access-details-2.sh”** file.
```
# chmod +x /opt/scripts/user-access-details-2.sh
```
Finally, add a **[cronjob][4]** to automate this. It will run every day at 8 o'clock.
```
# crontab -e
0 8 * * * /bin/bash /opt/scripts/user-access-details-2.sh
```
**Note:** You will get an email alert every day at 8 o'clock with the previous day's user access information.
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/bash-script-to-check-successful-and-failed-user-login-attempts-on-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/bash-script/
[2]: https://www.2daygeek.com/category/shell-script/
[3]: https://www.2daygeek.com/understanding-linux-file-permissions/
[4]: https://www.2daygeek.com/linux-crontab-cron-job-to-schedule-jobs-task/


@ -0,0 +1,56 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Nvidia quietly unveils faster, lower power Tesla GPU accelerator)
[#]: via: (https://www.networkworld.com/article/3482097/nvidia-quietly-unveils-faster-lower-power-tesla-gpu-accelerator.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Nvidia 悄悄推出更快、更低功耗的 Tesla GPU 加速器
======
Nvidia 升级了其 Volta 系列的 Tesla GPU 加速卡,使其能够以与旧型号相同的功率更快地运行。
Nvidia 上周举行了 Supercomputing 19 大会,不出意外的是公布了很多新闻,这些我们将稍后提到。但被忽略的一条或许是其中最有趣的:一张更快、功耗更低的新一代图形加速卡。
多名与会者与多个新闻站点发现了这点Nvidia 也向我证实这确实是一张新卡。Nvidia 的 “Volta” 这代 Tesla GPU 加速卡早在 2017 年就已推出,因此早就该进行升级了。
[注册 Network World 时事通讯,定期获取行业洞见。][1]
V100S 目前仅提供 PCI Express 3 接口,但有望最终支持 Nvidia 的 SXM2 接口。SXM 是 Nvidia 的双插槽卡设计,与 PCIe 卡不同它不需要连接电源。SXM2 允许 GPU 通过 Nvidia 的 NVLink一种高带宽节能互连相互之间或与 CPU 进行通信,其数据传输速度比 PCIe 快十倍。
借助此卡Nvidia 声称可提供 16.4 TFLOPS 的单精度性能、8.2 TFLOPS 的双精度性能,以及高达 130 TFLOPS 的 Tensor Core 性能。这仅比 V100 SXM2 版本提高了 4% 至 5%,但比 PCIe V100 变体提高了 16% 至 17%。
内存容量保持在 32 GB但 Nvidia 添加了高带宽内存 2High Bandwidth Memory 2HBM2将内存性能提高到 1,134 GB/s这比 PCIe 和 SXM2 版本都提高了 26%。
通常情况下,性能提升会伴随着功率增加,但这里 PCIe 卡的整体功率仍为 250 瓦,与上一代 PCIe 卡相同。因此,在相同功耗下,该卡可额外提供 16% 至 17% 的计算性能,并增加 26% 的内存带宽。
**其他新闻**
Nvidia 在会上还发布了其他新闻:
* 其 GPU 加速的基于 Arm 的高性能计算参考服务器的新参考设计和生态系统支持。该公司表示,它得到了 HPE/Cray、Marvell、富士通和 Ampere 的支持Ampere 是 Intel 前高管勒尼·詹姆斯Renee James领导的一家初创公司它希望建立基于 Arm 的服务器处理器。
  * 这些公司将使用 Nvidia 的参考设计(包括硬件和软件组件)来使用 GPU 构建从超大规模云提供商到高性能存储和百亿亿次超级计算等。该设计还带来了 CUDA-X这是 Nvidia 用于 Arm 处理器的 CUDA GPU 的特殊版本开发语言。
  * 推出 Nvidia Magnum IO 套件,旨在帮助数据科学家和 AI 以及高性能计算研究人员在几分钟而不是几小时内处理大量数据。它经过优化,消除了存储和 I/O 瓶颈,可为多服务器、多 GPU 计算节点提供高达 20 倍的数据处理速度。
  * Nvidia 和 DDN一家 AI 及多云数据管理开发商)宣布将 DDN 的 A3I™ 数据管理系统与 Nvidia 的 DGX SuperPOD 系统捆绑在一起,以便客户能够以最小的复杂性和更短的时限部署 HPC 基础架构。SuperPOD 还带有新的 NVIDIA Magnum IO 软件栈。
  * DDN 表示SuperPOD 能够在数小时内部署,并且单个设备可扩展至 80 个节点。对不同深度学习模型的基准测试表明DDN 系统可以让 DGX SuperPOD 系统完全保持数据饱和。
在 [Facebook][4] 和 [LinkedIn][5] 加入 Network World 社区评论热门主题。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3482097/nvidia-quietly-unveils-faster-lower-power-tesla-gpu-accelerator.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world


@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: (hanwckf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (curl exercises)
[#]: via: (https://jvns.ca/blog/2019/08/27/curl-exercises/)
[#]: author: (Julia Evans https://jvns.ca/)
curl 练习
======
最近,我对人们如何学习新事物很感兴趣。我正在读 Kathy Sierra 的好书 [Badass: Making Users Awesome][1],它探讨了“刻意练习”的想法:找到一个用三节 45 分钟的课程就能学会的小技能,然后专注于学习它。因此,作为一项练习,我尝试想出一项能在三节 45 分钟课程内学会的计算机技能。
我认为使用 curl 构造 HTTP 请求也许就是这样一项技能,所以这里有一些 curl 练习,作为一次实验!
### 什么是 curl ?
curl 是用于构造 HTTP 请求的命令行工具。我喜欢用 curl因为它能很轻松地测试服务器或 API 的行为是否符合预期,但刚开始接触它时可能会让你有些困惑!
下面是一幅解释 curl 常用命令行参数的漫画(出自我的 [Bite Size Networking][2] 杂志第 6 页):
<https://jvns.ca/images/curl.jpeg>
### 熟能生巧
对于任何命令行工具,我认为熟练使用是很有帮助的,能够做到只输入必要的命令真是太好了。例如,最近我在测试 Gumroad API我只需要输入
```
curl https://api.gumroad.com/v2/sales \
-d "access_token=<SECRET>" \
-X GET -d "before=2016-09-03"
```
就能从命令行中得到想要的结果。
### 21 个 curl 练习
这些练习是用来理解如何使用 curl 构造不同种类的 HTTP 请求的,它们是故意重复的,基本上包含了我需要 curl 做的任何事情。
为了简单起见,我们将向 https://httpbin.org 发起一系列 HTTP 请求httpbin 会接受 HTTP 请求,并在响应中回显你所发起的请求)。
1. 请求 <https://httpbin.org>
2. 请求 <https://httpbin.org/anything>httpbin.org/anything 会解析你发起的请求,并在响应中回显;curl 默认发起的是 GET 请求)
3. 向 <https://httpbin.org/anything> 发起 GET 请求
4. 向 <https://httpbin.org/anything> 发起 GET 请求,但是这次需要添加一些查询参数(设置 `value=panda`
5. 请求 Google 的 robots.txt 文件 ([www.google.com/robots.txt][3])
6. 向 <https://httpbin.org/anything> 发起 GET 请求,并且设置请求头为 `User-Agent: elephant`
7. 向 <https://httpbin.org/anything> 发起 DELETE 请求
8. 请求 <https://httpbin.org/anything> 并获取响应头信息
9. 向 <https://httpbin.org/anything> 发起请求体为 JSON `{"value": "panda"}` 的 POST 请求
10. 发起与上一次相同的 POST 请求,但是这次要把请求头中的 `Content-Type` 字段设置成 `application/json`(因为 POST 请求需要一个与请求体相匹配的 `Content-Type` 请求头字段)。查看响应体中的 `json` 字段,对比上一次得到的响应体
11. 向 <https://httpbin.org/anything> 发起 GET 请求,并且在请求头中设置 `Accept-Encoding: gzip`(将会发生什么?为什么会这样?)
12. 将一些 JSON 放在文件中,然后向 <https://httpbin.org/anything> 发起请求体为该文件的 POST 请求
13. 设置请求头为 `Accept: image/png` 并且向 <https://httpbin.org/image> 发起请求,将输出保存为 PNG 文件,然后使用图片浏览器打开。尝试使用不同的 `Accept:` 字段去请求此 URL
14. 向 <https://httpbin.org/anything> 发起 PUT 请求
15. 请求 <https://httpbin.org/image/jpeg> 并保存为文件,然后使用你的图片编辑器打开这个文件
16. 请求 <https://www.twitter.com>,你将会得到空的响应。让 curl 显示出响应头信息,并尝试找出响应内容为空的原因
17. 向 <https://httpbin.org/anything> 发起任意的请求,同时设置一些无意义的请求头(例如:`panda: elephant`
18. 请求 <https://httpbin.org/status/404><https://httpbin.org/status/200>,然后再次请求它们并且让 curl 显示响应头信息
19. 请求 <https://httpbin.org/anything> 并且设置用户名和密码(使用 `-u username:password`
20. 设置 `Accept-Language: es-ES` 的请求头用以下载 Twitter 的西班牙语主页 (<https://twitter.com>)
21. 使用 curl 向 Stripe API 发起请求(请查看 <https://stripe.com/docs/development> 了解如何使用,他们会给你一个测试用的 API key。尝试向 <https://httpbin.org/anything> 发起相同的请求
--------------------------------------------------------------------------------
via: https://jvns.ca/blog/2019/08/27/curl-exercises/
作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[hanwckf](https://github.com/hanwckf)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Badass-Making-Awesome-Kathy-Sierra/dp/1491919019
[2]: https://wizardzines.com/zines/bite-size-networking
[3]: http://www.google.com/robots.txt


@ -0,0 +1,249 @@
[#]: collector: (lujun9972)
[#]: translator: (lxbwolf)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get sorted with sort at the command line)
[#]: via: (https://opensource.com/article/19/10/get-sorted-sort)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
命令行用 sort 进行排序
======
在 Linux、BSD 或 Mac 的终端中使用 sort 命令,按自己的需求重新整理数据。
![Coding on a computer][1]
如果你用过电子表格应用程序,就会知道可以按列的内容对行进行排序。例如,如果你有一列价格,你可能希望按日期、按价格升序或者按类别对它们进行排序。如果你熟悉终端的使用,就不必仅仅为了排序文本数据而动用庞大的办公软件,这正是 [**sort**][2] 命令的用处。
### 安装
你不必安装 **sort**,因为它包含在任何 [POSIX][3] 系统中。在大多数 Linux 系统上,**sort** 命令来自 GNU 实用工具集合;在其他 POSIX 系统上(如 BSD 和 Mac默认的 **sort** 不是 GNU 版本,有些选项可能不一样。本文会尽量对 GNU 和 BSD 两种实现都进行说明。
### 按字母顺序排列行
**sort** 命令默认会读取文件每行的第一个字符并对每行按字母升序排序后输出。两行中的第一个字符相同的情况下,对下一个字符进行对比。例如:
```
$ cat distro.list
Slackware
Fedora
Red Hat Enterprise Linux
Ubuntu
Arch
1337
Mint
Mageia
Debian
$ sort distro.list
1337
Arch
Debian
Fedora
Mageia
Mint
Red Hat Enterprise Linux
Slackware
Ubuntu
```
使用 **sort** 不会改变原文件。sort 仅起到过滤的作用,所以如果你希望按排序后的格式保存数据,你需要用 **>** 或 **tee** 进行重定向。
```
$ sort distro.list | tee distro.sorted
1337
Arch
Debian
[...]
$ cat distro.sorted
1337
Arch
Debian
[...]
```
### 按列排序
复杂的数据有时不止需要按每行的第一个字符排序。例如,假设有一个动物列表,每行都用可预见的分隔符分成若干“字段”(即数据表中的“单元格”。这类由数据表导出的格式很常见CSV逗号分隔值comma-separated values后缀可以标识这类文件虽然 CSV 文件不一定用逗号分隔,带分隔符的文件也不一定用 CSV 后缀)。以下数据作为示例:
```
Aptenodytes;forsteri;Miller,JF;1778;Emperor
Pygoscelis;papua;Wagler;1832;Gentoo
Eudyptula;minor;Bonaparte;1867;Little Blue
Spheniscus;demersus;Brisson;1760;African
Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
Torvaldis;linux;Ewing,L;1996;Tux
```
对于这组示例数据,你可以用 **--field-separator**(在 BSD 和 Mac 上用 **-t**,在 GNU 上也可以用 **-t** 这个简写)把分隔符设置为分号(因为示例数据中用的是分号而不是逗号,理论上分隔符可以是任意字符),再用 **--key**(在 BSD 和 Mac 上用 **-k**,在 GNU 上也可以用 **-k** 这个简写)选项指定按哪个字段排序。例如,对每行的第二个字段进行排序(字段编号从 1 开始,而不是从 0 开始):
```
$ sort --field-separator=";" --key=2 penguins.list
Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
Spheniscus;demersus;Brisson;1760;African
Aptenodytes;forsteri;Miller,JF;1778;Emperor
Torvaldis;linux;Ewing,L;1996;Tux
Eudyptula;minor;Bonaparte;1867;Little Blue
Pygoscelis;papua;Wagler;1832;Gentoo
```
结果有点不容易读,但 Unix 以通过<ruby>管道<rt>pipe</rt></ruby>组合命令而闻名,所以你可以用 **column** 命令美化输出结果。使用 GNU **column**
```
$ sort --field-separator=";" \
--key=2 penguins.list | \
column --table --separator ";"
Megadyptes   antipodes   Milne-Edwards  1880  Yellow-eyed
Eudyptes     chrysocome  Viellot        1816  Southern Rockhopper
Spheniscus   demersus    Brisson        1760  African
Aptenodytes  forsteri    Miller,JF      1778  Emperor
Torvaldis    linux       Ewing,L        1996  Tux
Eudyptula    minor       Bonaparte      1867  Little Blue
Pygoscelis   papua       Wagler         1832  Gentoo
```
对于初学者可能有点不好理解但是写起来简单BSD 和 Mac 上的命令选项:
```
$ sort -t ";" \
-k2 penguins.list | column -t -s ";"
Megadyptes   antipodes   Milne-Edwards  1880  Yellow-eyed
Eudyptes     chrysocome  Viellot        1816  Southern Rockhopper
Spheniscus   demersus    Brisson        1760  African
Aptenodytes  forsteri    Miller,JF      1778  Emperor
Torvaldis    linux       Ewing,L        1996  Tux
Eudyptula    minor       Bonaparte      1867  Little Blue
Pygoscelis   papua       Wagler         1832  Gentoo
```
当然 **key** 不一定非要设为 **2**。任意存在的字段都可以被设为排序的 key。
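排序键还可以和数值排序结合使用。下面是一个示意(为了能独立运行,这里用 here-doc 重建了上文的 penguins.list 数据),按第 4 个字段(年份)排序,其中 `-n` 让 **sort** 按数值而非字典序比较:

```shell
# 重建上文的示例数据,便于独立运行
cat > penguins.list <<'EOF'
Aptenodytes;forsteri;Miller,JF;1778;Emperor
Pygoscelis;papua;Wagler;1832;Gentoo
Eudyptula;minor;Bonaparte;1867;Little Blue
Spheniscus;demersus;Brisson;1760;African
Megadyptes;antipodes;Milne-Edwards;1880;Yellow-eyed
Eudyptes;chrysocome;Viellot;1816;Southern Rockhopper
Torvaldis;linux;Ewing,L;1996;Tux
EOF

# -t 指定分隔符,-k4 从第 4 个字段(年份)开始取键,-n 表示按数值比较
sort -t ";" -k4 -n penguins.list
# 第一行是 1760 年的 African最后一行是 1996 年的 Tux
```

当字段宽度不一致时(比如 999 和 1000按数值比较尤为重要按字典序 999 反而会排在 1000 之后。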
### 逆序排列
你可以用 **--reverse**(在 BSD/Mac 上用 **-r**,在 GNU 上也可以用 **-r** 这个简写)选项来逆转已排好序的列表。
```
$ sort --reverse alphabet.list
z
y
x
w
[...]
```
你也可以把输出结果通过管道传给命令 [tac][4] 来实现相同的效果。
### 按月排序 (仅 GNU 支持)
理想情况下,所有人都按照 ISO 8601 标准书写日期:年、月、日。这是一种合乎逻辑的指定精确日期的方式,也很容易被计算机理解。但在很多情况下,人类会用其他方式标注日期,比如用随意的名称表示月份。
幸运的是GNU **sort** 命令能识别这种写法,并可以按月份的名称正确排序。使用 **--month-sort (-M)** 选项:
```
$ cat month.list
November
October
September
April
[...]
$ sort --month-sort month.list
January
February
March
April
May
[...]
November
December
```
月份的全称和简写都可以被识别。
### 人类可读的数字排序(仅 GNU 支持)
另一个常让人类和计算机产生分歧的地方是数字的写法。例如,人类通常把 “1024 kilobytes” 写成 “1KB”因为解析 “1 KB” 比 “1024” 更容易、更快(数字越大,这种差异越明显)。而对于计算机来说,字符串 9 KB 要比字符串 1 MB 大尽管 9 KB 只是 1 MB 的很小一部分。GNU **sort** 命令提供了 **--human-numeric-sort**`-h`)选项来帮助正确解析这些值。
```
$ cat sizes.list
2M
12MB
1k
9k
900
7000
$ sort --human-numeric-sort sizes.list
900
7000
1k
9k
2M
12MB
```
也有一些例外情况。例如16000 字节比 1 KB 大,但 **sort** 识别不出来:
```
$ cat sizes0.list
2M
12MB
16000
1k
$ sort -h sizes0.list
16000
1k
2M
12MB
```
逻辑上来说这个示例中的 16000 应该写成 16 KB所以也不能全部归咎于 GNU **sort**。只要数字写法保持一致,**--human-numeric-sort** 就能以计算机友好的方式正确解析这些人类可读的数字。
### 随机排序(仅 GNU 支持)
有时候,工具提供的选项看起来与其设计初衷相悖。从某种意义上说,让 **sort** 命令提供“随机排序”的能力并没有什么道理,但这个命令的工作流程让此特性用起来意外地顺手。你本可以用其他命令(比如 [**shuf**][5]),但在现有命令上加一个选项同样方便。不管你认为这是臃肿的设计还是极具创造力的用户体验GNU **sort** 命令都提供了对文件进行随机排序的功能。
最纯粹的随机排序选项是 **--random-sort** 或 **-R**(不要跟 **-r** 混淆,后者是 **--reverse** 的简写)。
```
$ sort --random-sort alphabet.list
d
m
p
a
[...]
```
每次对文件运行随机排序都会有不同的结果。
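顺带补充一个小实验(假设系统上的 **sort** 来自 GNU coreutils**-R** 并不是完全意义上的“洗牌”:它按每行内容的随机散列排序,因此内容完全相同的行总会相邻出现,这一点与 **shuf** 不同:

```shell
# -R 按行内容的随机散列排序:两个 a 必然相邻,两个 b 也是
printf 'a\nb\na\nb\nc\n' | sort -R
# 用 uniq -c 验证相同的行确实被聚在了一起uniq 只合并相邻的重复行)
printf 'a\nb\na\nb\nc\n' | sort -R | uniq -c
```

如果你需要真正的逐行洗牌(相同的行也被打散),就应该使用 **shuf**。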
### 结语
GNU 和 BSD 命令 **sort** 还有很多功能,所以花点时间去了解这些选项。你会惊异于 **sort** 的灵活性,尤其是当它和其他的 Unix 工具一起使用时。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/10/get-sorted-sort
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl "Coding on a computer"
[2]: https://en.wikipedia.org/wiki/Sort_(Unix)
[3]: https://en.wikipedia.org/wiki/POSIX
[4]: https://opensource.com/article/19/9/tac-command
[5]: https://www.gnu.org/software/coreutils/manual/html_node/shuf-invocation.html


@ -1,163 +0,0 @@
[#]: collector: "lujun9972"
[#]: translator: "lxbwolf"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
[#]: subject: "How to use loops in awk"
[#]: via: "https://opensource.com/article/19/11/loops-awk"
[#]: author: "Seth Kenlon https://opensource.com/users/seth"
在 awk 中怎么使用循环
======
来学习一下多次执行同一条命令的不同类型的循环。
![arrows cycle symbol for failing faster][1]
awk 脚本有三个主要部分BEGIN 和 END 函数都可选用户自己写的每次要执行的函数。某种程度上awk 的主体部分就是一个循环,因为函数中的命令对每一条记录都会执行一次。然而,有时你希望对于一条记录执行多次命令,那么你就需要用到循环。
有多种类型的循环,分别适合不同的场景。
### while 循环
一个 while 循环检测一个表达式,如果表达式为 *true* 就执行命令。当表达式变为 *false* 时,循环中断。
```
#!/bin/awk -f
BEGIN {
        # Print the squares from 1 to 10
    i=1;
    while (i <= 10) {
        print "The square of ", i, " is ", i*i;
        i = i+1;
    }
exit;
}
```
在这个简单实例中,**awk** 打印了变量 *i* 中整数值的平方。**while (i <= 10)** 语句告诉 awk 仅在 *i* 的值小于或等于 10 时才执行循环。在循环最后一次执行时(*i* 的值是 10循环终止。
### Do while 循环
do-while 循环在关键字 **do** 之后执行命令。在每次循环结束时检测一个表达式来决定是否终止循环。仅在表达式返回 true 时才会重复执行命令(即还没有到终止循环的条件)。如果表达式返回 false因为到了终止循环的条件所以循环被终止。
```
#!/usr/bin/awk -f
BEGIN {
        i=2;
        do {
                print "The square of ", i, " is ", i*i;
                i = i + 1
        }
        while (i < 10)
exit;
}
```
### for 循环
awk 中有两种 **for**循环。
一种 **for** 循环初始化一个变量,检测一个表达式,执行变量递增,当表达式的结果为 true 时循环就会一直执行。
```
#!/bin/awk -f
BEGIN {
    for (i=1; i <= 10; i++) {
        print "The square of ", i, " is ", i*i;
    }
exit;
}
```
另一种 **for** 循环设置一个带连续索引的数组变量,对每一个索引执行一个命令集。换句话说,它用一个数组“收集”每一条命令执行后的结果。
本例实现了一个简易版的 Unix 命令 **uniq**。通过把一系列字符串作为键加到数组 a 中,当相同的键再次出现时就增加其计数,可以得到某个字符串出现的次数(就像 **uniq****--count** 选项)。如果打印该数组的所有键,将会得到出现过的所有字符串。
用演示文件 **colours.txt**(前一篇文章中的文件)来举例:
```
name       color  amount
apple      red    4
banana     yellow 6
raspberry  red    99
strawberry red    3
grape      purple 10
apple      green  8
plum       purple 2
kiwi       brown  4
potato     brown  9
pineapple  yellow 5
```
这是 awk 版的简易 **uniq -c**
```
#! /usr/bin/awk -f
NR != 1 {
    a[$2]++
}
END {
    for (key in a) {
                print a[key] " " key
    }
}
```
示例数据文件的第三列是第一列列出的条目的计数。你可以用一个数组和 **for** 循环来从 color 维度统计第三列的条目。
```
#! /usr/bin/awk -f
BEGIN {
    FS=" ";
    OFS="\t";
    print("color\tsum");
}
NR != 1 {
    a[$2]+=$3;
}
END {
    for (b in a) {
        print b, a[b]
    }
}
```
你可以看到,在处理文件之前,需要先在 **BEGIN** 函数(只执行一次)中打印表头。
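可以像下面这样独立地验证上述按颜色求和的脚本(这里把 awk 程序内联,并用 here-doc 重建示例数据;`for (b in a)` 的遍历顺序不固定,所以输出行的顺序可能不同):

```shell
# 重建示例数据
cat > colours.txt <<'EOF'
name       color  amount
apple      red    4
banana     yellow 6
raspberry  red    99
strawberry red    3
grape      purple 10
apple      green  8
plum       purple 2
kiwi       brown  4
potato     brown  9
pineapple  yellow 5
EOF

# 跳过表头,按第 2 列(颜色)累加第 3 列(数量)
awk 'BEGIN { FS=" "; OFS="\t"; print "color\tsum" }
     NR != 1 { a[$2] += $3 }
     END { for (b in a) print b, a[b] }' colours.txt
# 例如red 的和是 4+99+3 = 106brown 的和是 4+9 = 13
```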
### 循环
在任何编程语言中循环都是很重要的一部分awk 也不例外。使用循环,你可以控制 awk 脚本的运行方式、它能统计的信息,以及它处理数据的方式。我们下一篇文章会讨论 `switch` 语句、**continue** 和 **next**。
* * *
你是否更想听这篇文章?本文已被收录进 [Hacker Public Radio](http://hackerpublicradio.org/eps.php?id=2330),一个来自黑客,面向黑客的社区技术博客。
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/loops-awk
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[lxbwolf](https://github.com/lxbwolf)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh "arrows cycle symbol for failing faster"
[2]: http://hackerpublicradio.org/eps.php?id=2330