mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-02-28 01:01:09 +08:00
Merge remote-tracking branch 'LCTT/master'
This commit is contained in:
commit
abef499e4f
@ -1,33 +1,27 @@
[#]: collector: "lujun9972"
[#]: translator: "zero-MK"
[#]: reviewer: "wxy"
[#]: publisher: "wxy"
[#]: url: "https://linux.cn/article-10766-1.html"
[#]: subject: "How To Check If A Port Is Open On Multiple Remote Linux System Using Shell Script With nc Command?"
[#]: via: "https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-using-nc-command/"
[#]: author: "Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/"

How to check whether a given port is open on multiple remote Linux systems?
======

We recently wrote an article about how to check whether a given port is open on a remote Linux server. It helps you check a single server.

If you want to check five servers, you can use any of the following commands, such as `nc` (netcat), `nmap`, or `telnet`. But what is your solution if you want to check 50 or more servers?

Checking every server one by one is not easy, and it is entirely unnecessary, because you would waste a lot of time doing so. To handle this situation, I wrote a small shell script using the `nc` command that lets us scan any number of servers for a given port.

If you are looking for a single-server scan, you have multiple options; just read [Check whether a port is open on a remote Linux system?][1] for more information.

This tutorial provides two scripts, and both are useful. The two scripts serve different purposes, which you can easily understand by reading their titles.

Before you read this article, I will ask you a few questions; if you don't know the answers, you can get them by reading this article.

How do you check whether a given port is open on a remote Linux server?
@ -35,17 +29,17 @@

How do you check whether multiple given ports are open on multiple remote Linux servers?

### What is the nc (netcat) command?

`nc` stands for netcat. It is a simple and useful Unix utility that reads and writes data across network connections, using the TCP or UDP protocol.

It is designed to be a reliable "back-end" tool that can be used directly or easily driven by other programs and scripts.

At the same time, it is a feature-rich network debugging and exploration tool, since it can create almost any kind of connection you need and has several interesting built-in capabilities.

netcat has three main modes: connect mode, listen mode, and tunnel mode.

General syntax of `nc` (netcat):

```
$ nc [-options] [HostName or IP] [PortNumber]
```
@ -55,9 +49,9 @@ $ nc [-options] [HostName or IP] [PortNumber]

If you want to check whether a given port is open on multiple remote Linux servers, use the following shell script.

In my example, we will check whether port 22 is open on the following remote servers; make sure you replace my server list with your own.

Make sure you have updated the server list in the `server-list.txt` file. Each server (IP) should be on a separate line.

```
# cat server-list.txt
```
@ -77,12 +71,12 @@ $ nc [-options] [HostName or IP] [PortNumber]

```
#!/bin/sh
for server in `more server-list.txt`
do
  #echo $i
  nc -zvw3 $server 22
done
```

Set executable permission on the `port_scan.sh` file.

```
$ chmod +x port_scan.sh
```
@ -105,9 +99,9 @@ Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!

If you want to check multiple ports on multiple servers, use the script below.

In my example, we will check whether ports 22 and 80 are open on the given servers. Make sure you replace the ports and server names with the ones you need instead of mine.

Make sure you have written the ports you want to check into the `port-list.txt` file. Each port should be on a separate line.

```
# cat port-list.txt
@ -115,7 +109,7 @@ Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!
80
```

Make sure you have written the servers (IP addresses) you want to check into the `server-list.txt` file. Each server (IP) should be on a separate line.

```
# cat server-list.txt
```
@ -135,12 +129,12 @@ Connection to 192.168.1.7 22 port [tcp/ssh] succeeded!

```
#!/bin/sh
for server in `more server-list.txt`
do
  for port in `more port-list.txt`
  do
    #echo $server
    nc -zvw3 $server $port
    echo ""
  done
done
```
@ -180,10 +174,10 @@ via: https://www.2daygeek.com/check-a-open-port-on-multiple-remote-linux-server-

Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [zero-MK](https://github.com/zero-mk)
Proofreader: [wxy](https://github.com/wxy)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://linux.cn/article-10675-1.html

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intel follows AMD’s lead (again) into single-socket Xeon servers)
[#]: via: (https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Intel follows AMD’s lead (again) into single-socket Xeon servers
======
Intel's new U series of processors are aimed at the low-end market where one processor is good enough.
![Intel][1]

I’m really starting to wonder who the leader in x86 really is these days because it seems Intel is borrowing another page out of AMD’s playbook.

Intel launched a whole lot of new Xeon Scalable processors earlier this month, but it neglected to mention a unique line: the U series of single-socket processors. The folks over at Serve The Home sniffed it out first, and Intel has confirmed the existence of the line, just that they “didn’t broadly promote them.”

**[ Read also: [Intel makes a play for high-speed fiber networking for data centers][2] ]**

To backtrack a bit, AMD made a major push for [single-socket servers][3] when it launched the Epyc line of server chips. Epyc comes with up to 32 cores and multithreading, and Intel (and Dell) argued that one 32-core/64-thread processor was enough to handle many loads and a lot cheaper than a two-socket system.

The new U series isn’t available in the regular Intel [ARK database][4] listing of Xeon Scalable processors, but the parts do show up if you search. Intel says it is looking into that. There are three processors for now, one with 24 cores and two with 20 cores.

The 24-core Intel [Xeon Gold 6212U][5] will be a counterpart to the Intel Xeon Platinum 8260, with a 2.4GHz base clock speed, a 3.9GHz turbo clock, and the ability to access up to 1TB of memory. The Xeon Gold 6212U will have the same 165W TDP as the 8260 line, but with a single socket that’s 165 fewer watts of power.

Also, Intel is suggesting a price of about $2,000 for the Intel Xeon Gold 6212U, a big discount over the Xeon Platinum 8260’s $4,702 list price. So, that will translate into much cheaper servers.

The [Intel Xeon Gold 6210U][6] with 20 cores carries a suggested price of $1,500, has a base clock rate of 2.50GHz with turbo boost to 3.9GHz, and a 150-watt TDP. Finally, there is the 20-core Intel [Xeon Gold 6209U][7] with a price of around $1,000 that is identical to the 6210U except its base clock speed is 2.1GHz with a turbo boost of 3.9GHz and a TDP of 125 watts due to its lower clock speed.

**[ [Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][8] ]**

All of the processors support up to 1TB of DDR4-2933 memory and Intel’s Optane persistent memory.

In terms of speeds and feeds, AMD has a slight advantage over Intel in the single-socket race, and Epyc 2 is rumored to be approaching completion, which will only further advance AMD’s lead.

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3390201/intel-follows-amds-lead-again-into-single-socket-xeon-servers.html#tk.rss_all

Author: [Andy Patrizio][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/intel_generic_cpu_background-100760187-large.jpg
[2]: https://www.networkworld.com/article/3307852/intel-makes-a-play-for-high-speed-fiber-networking-for-data-centers.html
[3]: https://www.networkworld.com/article/3253626/amd-lands-dell-as-its-latest-epyc-server-processor-customer.html
[4]: https://ark.intel.com/content/www/us/en/ark/products/series/192283/2nd-generation-intel-xeon-scalable-processors.html
[5]: https://ark.intel.com/content/www/us/en/ark/products/192453/intel-xeon-gold-6212u-processor-35-75m-cache-2-40-ghz.html
[6]: https://ark.intel.com/content/www/us/en/ark/products/192452/intel-xeon-gold-6210u-processor-27-5m-cache-2-50-ghz.html
[7]: https://ark.intel.com/content/www/us/en/ark/products/193971/intel-xeon-gold-6209u-processor-27-5m-cache-2-10-ghz.html
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11
[9]: https://www.facebook.com/NetworkWorld/
[10]: https://www.linkedin.com/company/network-world
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -0,0 +1,388 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Inter-process communication in Linux: Sockets and signals)
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-networking)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)

Inter-process communication in Linux: Sockets and signals
======

Learn how processes synchronize with each other in Linux.



This is the third and final article in a series about [interprocess communication][1] (IPC) in Linux. The [first article][2] focused on IPC through shared storage (files and memory segments), and the [second article][3] did the same for basic channels: pipes (named and unnamed) and message queues. This article moves from IPC at the high end (sockets) to IPC at the low end (signals). Code examples flesh out the details.

### Sockets

Just as pipes come in two flavors (named and unnamed), so do sockets. IPC sockets (aka Unix domain sockets) enable channel-based communication for processes on the same physical device (host), whereas network sockets enable this kind of IPC for processes that can run on different hosts, thereby bringing networking into play. Network sockets need support from an underlying protocol such as TCP (Transmission Control Protocol) or the lower-level UDP (User Datagram Protocol).

By contrast, IPC sockets rely upon the local system kernel to support communication; in particular, IPC sockets communicate using a local file as a socket address. Despite these implementation differences, the IPC socket and network socket APIs are the same in the essentials. The forthcoming example covers network sockets, but the sample server and client programs can run on the same machine because the server uses the network address localhost (127.0.0.1), the address of the local machine itself.

Sockets configured as streams (discussed below) are bidirectional, and control follows a client/server pattern: the client initiates the conversation by trying to connect to a server, which tries to accept the connection. If everything works, requests from the client and responses from the server then can flow through the channel until it is closed on either end, thereby breaking the connection.

An iterative server, which is suited for development only, handles connected clients one at a time to completion: the first client is handled from start to finish, then the second, and so on. The downside is that the handling of a particular client may hang, which then starves all the clients waiting behind it. A production-grade server would be concurrent, typically using some mix of multi-processing and multi-threading. For example, the Nginx web server on my desktop machine has a pool of four worker processes that can handle client requests concurrently. The following code example keeps the clutter to a minimum by using an iterative server; the focus thus remains on the basic API, not on concurrency.

Finally, the socket API has evolved significantly over time as various POSIX refinements have emerged. The current sample code for server and client is deliberately simple but underscores the bidirectional aspect of a stream-based socket connection. Here's a summary of the flow of control, with the server started in one terminal and the client started in a separate terminal:

* The server awaits client connections and, given a successful connection, reads the bytes from the client.

* To underscore the two-way conversation, the server echoes back to the client the bytes received from the client. These bytes are ASCII character codes, which make up book titles.

* The client writes book titles to the server process and then reads the same titles echoed from the server. Both the server and the client print the titles to the screen. Here is the server's output, essentially the same as the client's:

```
Listening on port 9876 for clients...
War and Peace
Pride and Prejudice
The Sound and the Fury
```
#### Example 1. The socket server

```
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>
#include "sock.h"

void report(const char* msg, int terminate) {
  perror(msg);
  if (terminate) exit(-1); /* failure */
}

int main() {
  int fd = socket(AF_INET,     /* network versus AF_LOCAL */
                  SOCK_STREAM, /* reliable, bidirectional, arbitrary payload size */
                  0);          /* system picks underlying protocol (TCP) */
  if (fd < 0) report("socket", 1); /* terminate */

  /* bind the server's local address in memory */
  struct sockaddr_in saddr;
  memset(&saddr, 0, sizeof(saddr));          /* clear the bytes */
  saddr.sin_family = AF_INET;                /* versus AF_LOCAL */
  saddr.sin_addr.s_addr = htonl(INADDR_ANY); /* host-to-network endian */
  saddr.sin_port = htons(PortNumber);        /* for listening */

  if (bind(fd, (struct sockaddr *) &saddr, sizeof(saddr)) < 0)
    report("bind", 1); /* terminate */

  /* listen to the socket */
  if (listen(fd, MaxConnects) < 0) /* listen for clients, up to MaxConnects */
    report("listen", 1); /* terminate */

  fprintf(stderr, "Listening on port %i for clients...\n", PortNumber);
  /* a server traditionally listens indefinitely */
  while (1) {
    struct sockaddr_in caddr;      /* client address */
    socklen_t len = sizeof(caddr); /* address length could change */

    int client_fd = accept(fd, (struct sockaddr*) &caddr, &len); /* accept blocks */
    if (client_fd < 0) {
      report("accept", 0); /* don't terminate, though there's a problem */
      continue;
    }

    /* read from client */
    int i;
    for (i = 0; i < ConversationLen; i++) {
      char buffer[BuffSize + 1];
      memset(buffer, '\0', sizeof(buffer));
      int count = read(client_fd, buffer, sizeof(buffer));
      if (count > 0) {
        puts(buffer);
        write(client_fd, buffer, sizeof(buffer)); /* echo as confirmation */
      }
    }
    close(client_fd); /* break connection */
  } /* while(1) */
  return 0;
}
```

The server program above performs the classic four-step to ready itself for client requests and then to accept individual requests. Each step is named after a system function that the server calls:

1. **socket(…)**: get a file descriptor for the socket connection
2. **bind(…)**: bind the socket to an address on the server's host
3. **listen(…)**: listen for client requests
4. **accept(…)**: accept a particular client request

The **socket** call in full is:

```
int sockfd = socket(AF_INET,     /* versus AF_LOCAL */
                    SOCK_STREAM, /* reliable, bidirectional */
                    0);          /* system picks protocol (TCP) */
```
The first argument specifies a network socket as opposed to an IPC socket. There are several options for the second argument, but **SOCK_STREAM** and **SOCK_DGRAM** (datagram) are likely the most used. A stream-based socket supports a reliable channel in which lost or altered messages are reported; the channel is bidirectional, and the payloads from one side to the other can be arbitrary in size. By contrast, a datagram-based socket is unreliable (best effort), unidirectional, and requires fixed-sized payloads. The third argument to **socket** specifies the protocol. For the stream-based socket in play here, there is a single choice, which the zero represents: TCP. Because a successful call to **socket** returns the familiar file descriptor, a socket is written and read with the same syntax as, for example, a local file.

The **bind** call is the most complicated, as it reflects various refinements in the socket API. The point of interest is that this call binds the socket to a memory address on the server machine. However, the **listen** call is straightforward:

```
if (listen(fd, MaxConnects) < 0)
```

The first argument is the socket's file descriptor and the second specifies how many client connections can be accommodated before the server issues a connection refused error on an attempted connection. (**MaxConnects** is set to 8 in the header file sock.h.)
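
The article never shows the sock.h header that both sample programs include. Only two of its values are confirmed by the text: **MaxConnects** is 8, and the server's output shows it listening on port 9876. The remaining constants below (**BuffSize**, **ConversationLen**, **Host**) are my assumptions, inferred from how the sample code uses them (three book titles are exchanged, and the client resolves localhost):

```c
/* sock.h -- a plausible reconstruction, not the article's actual header.
   Only PortNumber (9876) and MaxConnects (8) are confirmed by the text. */
#define PortNumber      9876        /* the server's listening port */
#define MaxConnects     8           /* backlog argument to listen() */
#define BuffSize        256         /* assumed read/write buffer size */
#define ConversationLen 3           /* three book titles are exchanged */
#define Host            "localhost" /* client resolves this to 127.0.0.1 */
```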

The **accept** call defaults to a blocking wait: the server does nothing until a client attempts to connect and then proceeds. The **accept** function returns **-1** to indicate an error. If the call succeeds, it returns another file descriptor—for a read/write socket in contrast to the accepting socket referenced by the first argument in the **accept** call. The server uses the read/write socket to read requests from the client and to write responses back. The accepting socket is used only to accept client connections.

By design, a server runs indefinitely. Accordingly, the server can be terminated with a **Ctrl+C** from the command line.

#### Example 2. The socket client

```
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <netdb.h>
#include "sock.h"

const char* books[] = {"War and Peace",
                       "Pride and Prejudice",
                       "The Sound and the Fury"};

void report(const char* msg, int terminate) {
  perror(msg);
  if (terminate) exit(-1); /* failure */
}

int main() {
  /* fd for the socket */
  int sockfd = socket(AF_INET,     /* versus AF_LOCAL */
                      SOCK_STREAM, /* reliable, bidirectional */
                      0);          /* system picks protocol (TCP) */
  if (sockfd < 0) report("socket", 1); /* terminate */

  /* get the address of the host */
  struct hostent* hptr = gethostbyname(Host); /* localhost: 127.0.0.1 */
  if (!hptr) report("gethostbyname", 1); /* is hptr NULL? */
  if (hptr->h_addrtype != AF_INET)       /* versus AF_LOCAL */
    report("bad address family", 1);

  /* connect to the server: configure server's address 1st */
  struct sockaddr_in saddr;
  memset(&saddr, 0, sizeof(saddr));
  saddr.sin_family = AF_INET;
  saddr.sin_addr.s_addr =
    ((struct in_addr*) hptr->h_addr_list[0])->s_addr;
  saddr.sin_port = htons(PortNumber); /* port number in big-endian */

  if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0)
    report("connect", 1);

  /* Write some stuff and read the echoes. */
  puts("Connect to server, about to write some stuff...");
  int i;
  for (i = 0; i < ConversationLen; i++) {
    if (write(sockfd, books[i], strlen(books[i])) > 0) {
      /* get confirmation echoed from server and print */
      char buffer[BuffSize + 1];
      memset(buffer, '\0', sizeof(buffer));
      if (read(sockfd, buffer, sizeof(buffer)) > 0)
        puts(buffer);
    }
  }
  puts("Client done, about to exit...");
  close(sockfd); /* close the connection */
  return 0;
}
```

The client program's setup code is similar to the server's. The principal difference between the two is that the client neither listens nor accepts, but instead connects:

```
if (connect(sockfd, (struct sockaddr*) &saddr, sizeof(saddr)) < 0)
```

The **connect** call might fail for several reasons; for example, the client has the wrong server address or too many clients are already connected to the server. If the **connect** operation succeeds, the client writes requests and then reads the echoed responses in a **for** loop. After the conversation, both the server and the client **close** the read/write socket, although a close operation on either side is sufficient to close the connection. The client exits thereafter but, as noted earlier, the server remains open for business.

The socket example, with request messages echoed back to the client, hints at the possibilities of arbitrarily rich conversations between the server and the client. Perhaps this is the chief appeal of sockets. It is common on modern systems for client applications (e.g., a database client) to communicate with a server through a socket. As noted earlier, local IPC sockets and network sockets differ only in a few implementation details; in general, IPC sockets have lower overhead and better performance. The communication API is essentially the same for both.
### Signals
A signal interrupts an executing program and, in this sense, communicates with it. Most signals can be either ignored (blocked) or handled (through designated code), with **SIGSTOP** (pause) and **SIGKILL** (terminate immediately) as the two notable exceptions. Symbolic constants such as **SIGKILL** have integer values, in this case, 9.
Signals can arise in user interaction. For example, a user hits **Ctrl+C** from the command line to terminate a program started from the command line; **Ctrl+C** generates a **SIGINT** signal. **SIGINT**, like **SIGTERM** (terminate) but unlike **SIGKILL**, can be either blocked or handled. One process also can signal another, thereby making signals an IPC mechanism.

Consider how a multi-processing application such as the Nginx web server might be shut down gracefully from another process. The **kill** function:

```
int kill(pid_t pid, int signum); /* declaration */
```

can be used by one process to terminate another process or group of processes. If the first argument to function **kill** is greater than zero, this argument is treated as the pid (process ID) of the targeted process; if the argument is zero, the argument identifies the group of processes to which the signal sender belongs.

The second argument to **kill** is either a standard signal number (e.g., **SIGTERM** or **SIGKILL**) or 0, which makes the call to **kill** a query about whether the pid in the first argument is indeed valid. The graceful shutdown of a multi-processing application thus could be accomplished by sending a terminate signal—a call to the **kill** function with **SIGTERM** as the second argument—to the group of processes that make up the application. (The Nginx master process could terminate the worker processes with a call to **kill** and then exit itself.) The **kill** function, like so many library functions, houses power and flexibility in a simple invocation syntax.

#### Example 3. The graceful shutdown of a multi-processing system

```
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

void graceful(int signum) {
  printf("\tChild confirming received signal: %i\n", signum);
  puts("\tChild about to terminate gracefully...");
  sleep(1);
  puts("\tChild terminating now...");
  _exit(0); /* fast-track notification of parent */
}

void set_handler() {
  struct sigaction current;
  sigemptyset(&current.sa_mask);      /* clear the signal set */
  current.sa_flags = 0;               /* enables setting sa_handler, not sa_action */
  current.sa_handler = graceful;      /* specify a handler */
  sigaction(SIGTERM, &current, NULL); /* register the handler */
}

void child_code() {
  set_handler();

  while (1) { /** loop until interrupted **/
    sleep(1);
    puts("\tChild just woke up, but going back to sleep.");
  }
}

void parent_code(pid_t cpid) {
  puts("Parent sleeping for a time...");
  sleep(5);

  /* Try to terminate child. */
  if (-1 == kill(cpid, SIGTERM)) {
    perror("kill");
    exit(-1);
  }
  wait(NULL); /** wait for child to terminate **/
  puts("My child terminated, about to exit myself...");
}

int main() {
  pid_t pid = fork();
  if (pid < 0) {
    perror("fork");
    return -1; /* error */
  }
  if (0 == pid)
    child_code();
  else
    parent_code(pid);
  return 0; /* normal */
}
```

The shutdown program above simulates the graceful shutdown of a multi-processing system, in this case, a simple one consisting of a parent process and a single child process. The simulation works as follows:

* The parent process tries to fork a child. If the fork succeeds, each process executes its own code: the child executes the function **child_code**, and the parent executes the function **parent_code**.

* The child process goes into a potentially infinite loop in which the child sleeps for a second, prints a message, goes back to sleep, and so on. It is precisely a **SIGTERM** signal from the parent that causes the child to execute the signal-handling callback function **graceful**. The signal thus breaks the child process out of its loop and sets up the graceful termination of both the child and the parent. The child prints a message before terminating.

* The parent process, after forking the child, sleeps for five seconds so that the child can execute for a while; of course, the child mostly sleeps in this simulation. The parent then calls the **kill** function with **SIGTERM** as the second argument, waits for the child to terminate, and then exits.

Here is the output from a sample run:

```
% ./shutdown
Parent sleeping for a time...
Child just woke up, but going back to sleep.
Child just woke up, but going back to sleep.
Child just woke up, but going back to sleep.
Child just woke up, but going back to sleep.
Child confirming received signal: 15 ## SIGTERM is 15
Child about to terminate gracefully...
Child terminating now...
My child terminated, about to exit myself...
```

For the signal handling, the example uses the **sigaction** library function (POSIX recommended) rather than the legacy **signal** function, which has portability issues. Here are the code segments of chief interest:

* If the call to **fork** succeeds, the parent executes the **parent_code** function and the child executes the **child_code** function. The parent waits for five seconds before signaling the child:

```
puts("Parent sleeping for a time...");
sleep(5);
if (-1 == kill(cpid, SIGTERM)) {
  perror("kill");
  exit(-1);
}
```

If the **kill** call succeeds, the parent does a **wait** on the child's termination to prevent the child from becoming a permanent zombie; after the wait, the parent exits.

* The **child_code** function first calls **set_handler** and then goes into its potentially infinite sleeping loop. Here is the **set_handler** function for review:

```
void set_handler() {
  struct sigaction current;           /* current setup */
  sigemptyset(&current.sa_mask);      /* clear the signal set */
  current.sa_flags = 0;               /* for setting sa_handler, not sa_action */
  current.sa_handler = graceful;      /* specify a handler */
  sigaction(SIGTERM, &current, NULL); /* register the handler */
}
```

The first three lines are preparation. The fourth statement sets the handler to the function **graceful**, which prints some messages before calling **_exit** to terminate. The fifth and last statement then registers the handler with the system through the call to **sigaction**. The first argument to **sigaction** is **SIGTERM** for terminate, the second is the current **sigaction** setup, and the last argument (**NULL** in this case) can be used to save a previous **sigaction** setup, perhaps for later use.

Using signals for IPC is indeed a minimalist approach, but a tried-and-true one at that. IPC through signals clearly belongs in the IPC toolbox.
### Wrapping up this series
These three articles on IPC have covered the following mechanisms through code examples:
* Shared files
* Shared memory (with semaphores)
* Pipes (named and unnamed)
* Message queues
* Sockets
* Signals
Even today, when thread-centric languages such as Java, C#, and Go have become so popular, IPC remains appealing because concurrency through multi-processing has an obvious advantage over multi-threading: every process, by default, has its own address space, which rules out memory-based race conditions in multi-processing unless the IPC mechanism of shared memory is brought into play. (Shared memory must be locked in both multi-processing and multi-threading for safe concurrency.) Anyone who has written even an elementary multi-threading program with communication via shared variables knows how challenging it can be to write thread-safe yet clear, efficient code. Multi-processing with single-threaded processes remains a viable—indeed, quite appealing—way to take advantage of today's multi-processor machines without the inherent risk of memory-based race conditions.
There is no simple answer, of course, to the question of which among the IPC mechanisms is the best. Each involves a trade-off typical in programming: simplicity versus functionality. Signals, for example, are a relatively simple IPC mechanism but do not support rich conversations among processes. If such a conversation is needed, then one of the other choices is more appropriate. Shared files with locking are reasonably straightforward, but shared files may not perform well enough if processes need to share massive data streams; pipes, or even sockets, with more complicated APIs, might be a better choice. Let the problem at hand guide the choice.
Although the sample code ([available on my website][4]) is all in C, other programming languages often provide thin wrappers around these IPC mechanisms. The code examples are short and simple enough, I hope, to encourage you to experiment.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/4/interprocess-communication-linux-networking
Author: [Marty Kalin][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Inter-process_communication
[2]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-2
[4]: http://condor.depaul.edu/mkalin
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -7,17 +7,17 @@
[#]: via: (https://opensource.com/article/19/4/getting-started-mercurial)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Getting started with Mercurial for version control
======
Learn the basics of Mercurial, a distributed version control system
written in Python.
![][1]
[Mercurial][2] is a distributed version control system written in Python. Because it's written in a high-level language, you can write a Mercurial extension with a few Python functions.
There are several ways to install Mercurial, which are explained in the [official documentation][3]. My favorite one is not there: using **pip**. This is the most amenable way to develop local extensions!
For now, Mercurial only supports Python 2.7, so you will need to create a Python 2.7 virtual environment:
```
python2 -m virtualenv mercurial-env
./mercurial-env/bin/pip install mercurial
```
To have a short command, and to satisfy everyone's insatiable need for chemistry-based humor, the command is called **hg**.
```
$ source mercurial-env/bin/activate
(mercurial-env)$
```
The status is empty since you do not have any files. Add a couple of files:
```
date: Fri Mar 29 12:42:43 2019 -0700
summary: Adding stuff
```
The **addremove** command is useful: it adds any new files that are not ignored to the list of managed files and removes any files that have been removed.
As I mentioned, Mercurial extensions are written in Python—they are just regular Python modules.
This is an example of a short Mercurial extension:
```
def say_hello(ui, repo, **opts):
    ui.write("hello ", opts['whom'], "\n")
```
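The diff only shows the body of the extension's command function, but that body depends on nothing except `ui.write`, so it can be exercised outside a real Mercurial installation with a stand-in `ui` object. In this sketch, `FakeUI` is purely illustrative and not part of Mercurial's API:

```python
class FakeUI:
    """Minimal stand-in for Mercurial's ui object; only write() is assumed."""
    def __init__(self):
        self.chunks = []

    def write(self, *args):
        # Mercurial's ui.write accepts multiple string arguments
        self.chunks.extend(args)

def say_hello(ui, repo, **opts):
    ui.write("hello ", opts['whom'], "\n")

ui = FakeUI()
say_hello(ui, repo=None, whom="world")
print("".join(ui.chunks), end="")  # hello world
```

The same duck typing is what makes extension functions easy to unit-test before they are wired into an `hgrc`.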
A simple way to test it is to put it in a file in the virtual environment manually:
```
$ vi ../mercurial-env/lib/python2.7/site-packages/hello_ext.py
```
Then you need to _enable_ the extension. You can start by enabling it only in the current repository:
```
$ cat >> .hg/hgrc
[extensions]
hello_ext =
```
Now, a greeting is possible:
```
hello world
```
Most extensions will do more useful stuff—possibly even things to do with Mercurial. The **repo** object is a **mercurial.hg.repository** object.
Refer to the [official documentation][5] for more about Mercurial's API. And visit the [official repo][6] for more examples and inspiration.
--------------------------------------------------------------------------------
@ -112,7 +112,7 @@
via: https://opensource.com/article/19/4/getting-started-mercurial
Author: [Moshe Zadka (Community Moderator)][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).