Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-07-26 05:39:49 +08:00
commit b6e62c7b82
10 changed files with 1351 additions and 535 deletions


@@ -0,0 +1,295 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11147-1.html)
[#]: subject: (ClusterShell A Nifty Tool To Run Commands On Cluster Nodes In Parallel)
[#]: via: (https://www.2daygeek.com/clustershell-clush-run-commands-on-cluster-nodes-remote-system-in-parallel-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
ClusterShell - A Nifty Tool to Run Commands on Cluster Nodes in Parallel
======
![](https://img.linux.net.cn/data/attachment/album/201907/26/053202pgcgg1y1rc5l5mgg.jpg)
We have written two articles in the past about running commands on multiple remote servers in parallel: [Parallel SSH (PSSH)][1] and [Distributed Shell (DSH)][2]. Today we will discuss the same kind of topic, but this tool lets us perform the same operations on cluster nodes as well. You may be thinking that you could write a small shell script for this instead of installing these third-party packages.
Of course you are right: if you only want to run a few commands on a dozen or so remote systems, you do not need such a tool. But your script would take a while to finish the job because it runs sequentially. Imagine running some commands on more than a thousand servers; in that case your script would be of little use, and completing the task would take a long time. To get around this, we need something that can run commands on remote machines in parallel.
That is where a parallel utility comes in. I hope this explanation clears up any doubts you may have about parallel utilities.
### ClusterShell
[ClusterShell][3] is an event-driven, open source Python library designed to run local or remote commands in parallel on server farms or large Linux clusters. (`clush` is the [ClusterShell][3] command-line tool.)
It takes care of common issues encountered on HPC clusters, such as operating on groups of nodes, running distributed commands using optimized execution algorithms, gathering results and merging identical output, and retrieving return codes.
ClusterShell can take advantage of existing remote shell facilities already installed on your systems, such as SSH.
ClusterShell's primary goal is to improve the administration of high-performance clusters by providing a lightweight but scalable Python API for developers. It also provides convenient command-line tools such as `clush`, `clubak`, and `cluset`/`nodeset`, which let traditional shell scripts benefit from some of the library's features.
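As a quick illustration of those companion tools, `nodeset` can expand and fold node set expressions right from the command line (a minimal sketch; the node names below are placeholders):
```
$ nodeset -e node[1-3]
node1 node2 node3

$ nodeset -f node1 node2 node3
node[1-3]
```
The same node set syntax is what `clush -w` accepts, as the examples later in this article show.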
ClusterShell is written in Python and requires Python (v2.6+ or v3.4+) to run on your system.
### How to Install ClusterShell on Linux
The ClusterShell package is available in the official package repositories of most distributions, so use your distribution's package manager tool to install it.
For Fedora systems, use the [DNF command][4] to install clustershell.
```
$ sudo dnf install clustershell
```
If Python 2 is the default on your system, this installs the Python 2 module and tools; you can run the following command to install the Python 3 development package.
```
$ sudo dnf install python3-clustershell
```
Before installing clustershell, make sure you have enabled the [EPEL repository][5] on your system.
For RHEL/CentOS systems, use the [YUM command][6] to install clustershell.
```
$ sudo yum install clustershell
```
If Python 2 is the default on your system, this installs the Python 2 module and tools; you can run the following command to install the Python 3 development package.
```
$ sudo yum install python34-clustershell
```
For openSUSE Leap systems, use the [Zypper command][7] to install clustershell.
```
$ sudo zypper install clustershell
```
If Python 2 is the default on your system, this installs the Python 2 module and tools; you can run the following command to install the Python 3 development package.
```
$ sudo zypper install python3-clustershell
```
For Debian/Ubuntu systems, use the [APT-GET command][8] or the [APT command][9] to install clustershell.
```
$ sudo apt install clustershell
```
### How to Install ClusterShell on Linux Using PIP
Since ClusterShell is written in Python, it can also be installed using PIP.
Make sure you have [Python][10] and [PIP][11] set up on your system before installing clustershell.
```
$ sudo pip install ClusterShell
```
### How to Use ClusterShell on Linux
Compared with other utilities such as `pssh` and `dsh`, it is a straightforward and excellent tool. It has many options for performing remote execution in parallel.
Before you start using clustershell, make sure you have enabled [passwordless login][12] on your systems.
The following configuration file defines the system-wide default values. You do not need to modify anything here:
```
$ cat /etc/clustershell/clush.conf
```
If you want to create a group of servers, you can do that too. Some examples are provided by default; adapt them to your requirements:
```
$ cat /etc/clustershell/groups.d/local.cfg
```
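For example, to define the `dev` and `uat` groups that are used later in this article, you could add lines like these to `local.cfg` (a hedged sketch; the group names and node addresses are only examples, and the exact layout of the shipped file may differ):
```
# excerpt from /etc/clustershell/groups.d/local.cfg (hypothetical entries)
dev: 192.168.1.[4,9]
uat: 192.168.1.[5,7]
```
Groups defined this way can then be referenced as `@dev`, or with `-g dev`/`--group=dev`, as shown further below.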
Just run the clustershell command in the following format to get information from the given nodes:
```
$ clush -w 192.168.1.4,192.168.1.9 cat /proc/version
192.168.1.9: Linux version 4.15.0-45-generic ([email protected]) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019
192.168.1.4: Linux version 3.10.0-957.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Thu Nov 8 23:39:32 UTC 2018
```
Options:
* `-w`: the nodes on which to run the command.
You can use node set patterns (ranges) instead of spelling out full hostnames and IPs:
```
$ clush -w 192.168.1.[4,9] uname -r
192.168.1.9: 4.15.0-45-generic
192.168.1.4: 3.10.0-957.el7.x86_64
```
Alternatively, if the servers are in the same IP range, you can use the following format:
```
$ clush -w 192.168.1.[4-9] date
192.168.1.6: Mon Mar 4 21:08:29 IST 2019
192.168.1.7: Mon Mar 4 21:08:29 IST 2019
192.168.1.8: Mon Mar 4 21:08:29 IST 2019
192.168.1.5: Mon Mar 4 09:16:30 CST 2019
192.168.1.9: Mon Mar 4 21:08:29 IST 2019
192.168.1.4: Mon Mar 4 09:16:30 CST 2019
```
clustershell allows us to run commands in batch (interactive) mode. Use the following format to do so:
```
$ clush -w 192.168.1.4,192.168.1.9 -b
Enter 'quit' to leave this interactive mode
Working with nodes: 192.168.1.[4,9]
clush> hostnamectl
---------------
192.168.1.4
---------------
Static hostname: CentOS7.2daygeek.com
Icon name: computer-vm
Chassis: vm
Machine ID: 002f47b82af248f5be1d67b67e03514c
Boot ID: f9b37a073c534dec8b236885e754cb56
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-957.el7.x86_64
Architecture: x86-64
---------------
192.168.1.9
---------------
Static hostname: Ubuntu18
Icon name: computer-vm
Chassis: vm
Machine ID: 27f6c2febda84dc881f28fd145077187
Boot ID: f176f2eb45524d4f906d12e2b5716649
Virtualization: oracle
Operating System: Ubuntu 18.04.2 LTS
Kernel: Linux 4.15.0-45-generic
Architecture: x86-64
clush> free -m
---------------
192.168.1.4
---------------
total used free shared buff/cache available
Mem: 1838 641 217 19 978 969
Swap: 2047 0 2047
---------------
192.168.1.9
---------------
total used free shared buff/cache available
Mem: 1993 352 1067 1 573 1473
Swap: 1425 0 1425
clush> w
---------------
192.168.1.4
---------------
09:21:14 up 3:21, 3 users, load average: 0.00, 0.01, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
daygeek :0 :0 06:02 ?xdm? 1:28 0.30s /usr/libexec/gnome-session-binary --session gnome-classic
daygeek pts/0 :0 06:03 3:17m 0.06s 0.06s bash
daygeek pts/1 192.168.1.6 06:03 52:26 0.10s 0.10s -bash
---------------
192.168.1.9
---------------
21:13:12 up 3:12, 1 user, load average: 0.08, 0.03, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
daygeek pts/0 192.168.1.6 20:42 29:41 0.05s 0.05s -bash
clush> quit
```
If you want to run the command on a group of nodes, use the following format:
```
$ clush -w @dev uptime
or
$ clush -g dev uptime
or
$ clush --group=dev uptime
192.168.1.9: 21:10:10 up 3:09, 1 user, load average: 0.09, 0.03, 0.01
192.168.1.4: 09:18:12 up 3:18, 3 users, load average: 0.01, 0.02, 0.05
```
If you want to run the command on more than one group of nodes, use the following format:
```
$ clush -w @dev,@uat uptime
or
$ clush -g dev,uat uptime
or
$ clush --group=dev,uat uptime
192.168.1.7: 07:57:19 up 59 min, 1 user, load average: 0.08, 0.03, 0.00
192.168.1.9: 20:27:20 up 1:00, 1 user, load average: 0.00, 0.00, 0.00
192.168.1.5: 08:57:21 up 59 min, 1 user, load average: 0.00, 0.01, 0.05
```
clustershell also allows us to copy files to remote machines. To copy a local file or directory to the remote nodes at the same location:
```
$ clush -w 192.168.1.[4,9] --copy /home/daygeek/passwd-up.sh
```
We can verify it by running the following command:
```
$ clush -w 192.168.1.[4,9] ls -lh /home/daygeek/passwd-up.sh
192.168.1.4: -rwxr-xr-x. 1 daygeek daygeek 159 Mar 4 09:00 /home/daygeek/passwd-up.sh
192.168.1.9: -rwxr-xr-x 1 daygeek daygeek 159 Mar 4 20:52 /home/daygeek/passwd-up.sh
```
To copy a local file or directory to a different location on the remote nodes:
```
$ clush -g uat --copy /home/daygeek/passwd-up.sh --dest /tmp
```
We can verify it by running the following command:
```
$ clush --group=uat ls -lh /tmp/passwd-up.sh
192.168.1.7: -rwxr-xr-x. 1 daygeek daygeek 159 Mar 6 07:44 /tmp/passwd-up.sh
```
To copy a file or directory from a remote node to the local system:
```
$ clush -w 192.168.1.7 --rcopy /home/daygeek/Documents/magi.txt --dest /tmp
```
We can verify it by running the following command:
```
$ ls -lh /tmp/magi.txt.192.168.1.7
-rw-r--r-- 1 daygeek daygeek 35 Mar 6 20:24 /tmp/magi.txt.192.168.1.7
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/clustershell-clush-run-commands-on-cluster-nodes-remote-system-in-parallel-linux/
Author: [Magesh Maruthamuthu][a]
Collected by: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/pssh-parallel-ssh-run-execute-commands-on-multiple-linux-servers/
[2]: https://www.2daygeek.com/dsh-run-execute-shell-commands-on-multiple-linux-servers-at-once/
[3]: https://cea-hpc.github.io/clustershell/
[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[5]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[9]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[10]: https://www.2daygeek.com/3-methods-to-install-latest-python3-package-on-centos-6-system/
[11]: https://www.2daygeek.com/install-pip-manage-python-packages-linux/
[12]: https://www.2daygeek.com/linux-passwordless-ssh-login-using-ssh-keygen/


@@ -0,0 +1,67 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Data centers may soon recycle heat into electricity)
[#]: via: (https://www.networkworld.com/article/3410578/data-centers-may-soon-recycle-heat-into-electricity.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
Data centers may soon recycle heat into electricity
======
Rice University researchers are developing a system that converts waste heat into light and then that light into electricity, which could help data centers reduce computing costs.
![Gordon Mah Ung / IDG][1]
Waste heat is the scourge of computing. In fact, much of the cost of powering a computer comes from creating unwanted heat. That's because the inefficiencies in electronic circuits, caused by resistance in the materials, generate that heat. The processors, without computing anything, are essentially converting expensively produced electrical energy into waste energy.
It's a fundamental problem, and one that hasn't been going away. But what if you could convert the unwanted heat back into electricity, recycling the heat back into its original energy form? The data center heat, instead of simply being disgorged into the atmosphere with dubious eco-effects, could actually run more machines. Plus, your cooling costs would be taken care of; there's nothing to cool because you've already grabbed the hot air.
**[ Read also: [How server disaggregation can boost data center efficiency][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**
Scientists at Rice University are trying to make that a reality by developing heat-scavenging and conversion solutions.
Currently, the most efficient way to convert heat into electricity is through the use of traditional turbines.
Turbines “can give you nearly 50% conversion efficiency,” says Chloe Doiron, a graduate student at Rice University and co-lead on the project, in a [news article][4] on the school's website. Turbines convert the kinetic energy of moving fluids, like steam or combustion gases, into mechanical energy. The moving steam then shifts blades mounted on a shaft, which turns a generator, thus creating power.
Not a bad solution. The problem, though, is that “those systems are not easy to implement,” the researchers explain. The issue is that turbines are full of moving parts, and they're big, noisy, and messy.
### Thermal emitter better than turbines for converting heat to energy
A better option would be a solid-state, thermal device that could absorb heat at the source and simply convert it, perhaps straight into attached batteries.
The researchers say a thermal emitter could absorb heat, jam it into tight, easy-to-capture bandwidth and then emit it as light. Cunningly, they would then simply turn the light into electricity, as we see all the time now in solar systems.
“Thermal photons are just photons emitted from a hot body,” says Rice University professor Junichiro Kono in the article. “If you look at something hot with an infrared camera, you see it glow. The camera is capturing these thermally excited photons.” Indeed, all heated surfaces, to some extent, send out light as thermal radiation.
The Rice team wants to use a film of aligned carbon nanotubes to do the job. The test system will be structured as an actual solar panel. That's because solar panels, too, lose energy through heat, so they are a good environment in which to work. The concept applies to other inefficient technologies, too. “Anything else that loses energy through heat [would become] far more efficient,” the researchers say.
Around 20% of industrial energy consumption is unwanted heat, Doiron says. That's a lot of wasted energy.
### Other heat conversion solutions
Other heat-scavenging devices are making inroads, too. Commercially available thermoelectric technology can now convert a temperature difference into power, also with no moving parts. Such devices work by exposing a specially made material to heat. [Electrons flow when one part is cold and one is hot][5]. And the University of Utah is working on [silicon for chips that generates electricity][6] as one of two wafers heats up.
Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3410578/data-centers-may-soon-recycle-heat-into-electricity.html
Author: [Patrick Nelson][a]
Collected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/flir_20190711t191326-100801627-large.jpg
[2]: https://www.networkworld.com/article/3266624/how-server-disaggregation-could-make-cloud-datacenters-more-efficient.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://news.rice.edu/2019/07/12/rice-device-channels-heat-into-light/
[5]: https://www.networkworld.com/article/2861438/how-to-convert-waste-data-center-heat-into-electricity.html
[6]: https://unews.utah.edu/beat-the-heat/
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world


@@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (When it comes to the IoT, Wi-Fi has the best security)
[#]: via: (https://www.networkworld.com/article/3410563/when-it-comes-to-the-iot-wi-fi-has-the-best-security.html)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)
When it comes to the IoT, Wi-Fi has the best security
======
It's easy to dismiss good ol' Wi-Fi's role in internet of things networking. But Wi-Fi has more security advantages than other IoT networking choices.
![Ralph Gaithe / Soifer / Getty Images][1]
When it comes to connecting internet of things (IoT) devices, there is a wide variety of networks to choose from, each with its own set of capabilities, advantages and disadvantages, and ideal use cases. Good ol Wi-Fi is often seen as a default networking choice, available in many places, but of limited range and not particularly suited for IoT implementations.
According to [Aerohive Networks][2], however, Wi-Fi is “evolving to help IT address security complexities and challenges associated with IoT devices.” Aerohive sells cloud-managed networking solutions and was [acquired recently by software-defined networking company Extreme Networks for some $272 million][3]. And Aerohive's director of product marketing, Mathew Edwards, told me via email that Wi-Fi brings a number of security advantages compared to other IoT networking choices.
It's not a trivial problem. According to Gartner, in just the last three years, [approximately one in five organizations have been subject to an IoT-based attack][4]. And as more and more IoT devices come online, the attack surface continues to grow quickly.
**[ Also read: [Extreme targets cloud services, SD-WAN, Wi-Fi 6 with $210M Aerohive grab][3] and [Smart cities offer window into the evolution of enterprise IoT technology][5] ]**
### What makes Wi-Fi more secure for IoT?
What exactly are Wi-Fi's IoT security benefits? Some of it is simply 20 years of technological maturity, Edwards said.
“Extending beyond the physical boundaries of organizations, Wi-Fi has always had to be on the front foot when it comes to securely onboarding and monitoring a range of corporate, guest, and BYOD devices, and is now prepared with the next round of connectivity complexities with IoT,” he said.
Specifically, Edwards said, “Wi-Fi has evolved … to increase the visibility, security, and troubleshooting of edge devices by combining edge security with centralized cloud intelligence.”
Just as important, though, new Wi-Fi capabilities from a variety of vendors are designed to help identify and isolate IoT devices to integrate them into the wider network while limiting the potential risks. The goal is to incorporate IoT device awareness and protection mechanisms to prevent breaches and attacks through vulnerable headless devices. Edwards cited Aerohive's work to “securely onboard IoT devices with its PPSK (private pre-shared key) technology, an authentication and encryption method providing 802.1X-equivalent role-based access, without the equivalent management complexities.”
**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][6] ]**
### The IoT is already here—and so is Wi-Fi
Unfortunately, enterprise IoT security is not always a carefully planned and monitored operation.
“Much like BYOD,” Edwards said, “many organizations are dealing with IoT without them even knowing it.” On the plus side, even as “IoT devices have infiltrated many networks, ... administrators are already leveraging some of the tools to protect against IoT threats without them even realizing it.”
He noted that customers who have already deployed PPSK to secure guest and BYOD networks can easily extend those capabilities to cover IoT devices such as “smart TVs, projectors, printers, security systems, sensors and more.”
In addition, Edwards said, “vendors have introduced methods to assign performance and security limits through context-based profiling, which is easily extended to IoT devices once the vendor can utilize signatures to identify an IoT device.”
Once an IoT device is identified and tagged, Wi-Fi networks can assign it to a particular VLAN, set minimum and maximum data rates, data limits, application access, firewall rules, and other protections. That way, Edwards said, “if the device is lost, stolen, or launches a DDoS attack, the Wi-Fi network can kick it off, restrict it, or quarantine it.”
### Wi-Fi still isn't for every IoT deployment
All that hardly turns Wi-Fi into the perfect IoT network. Relatively high costs and limited range mean it won't find a place in many large-scale IoT implementations. But Edwards says Wi-Fi's mature identification and control systems can help enterprises incorporate new IoT-based systems and sensors into their networks with more confidence.
**More about 802.11ax (Wi-Fi 6)**
* [Why 802.11ax is the next big thing in wireless][7]
* [FAQ: 802.11ax Wi-Fi][8]
* [Wi-Fi 6 (802.11ax) is coming to a router near you][9]
* [Wi-Fi 6 with OFDMA opens a world of new wireless possibilities][10]
* [802.11ax preview: Access points and routers that support Wi-Fi 6 are on tap][11]
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3410563/when-it-comes-to-the-iot-wi-fi-has-the-best-security.html
Author: [Fredric Paul][a]
Collected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/03/hack-your-own-wi-fi_neon-wi-fi_keyboard_hacker-100791531-large.jpg
[2]: http://www.aerohive.com/
[3]: https://www.networkworld.com/article/3405440/extreme-targets-cloud-services-sd-wan-wifi-6-with-210m-aerohive-grab.html
[4]: https://www.gartner.com/en/newsroom/press-releases/2018-03-21-gartner-says-worldwide-iot-security-spending-will-reach-1-point-5-billion-in-2018.
[5]: https://www.networkworld.com/article/3409787/smart-cities-offer-window-into-the-evolution-of-enterprise-iot-technology.html
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[7]: https://www.networkworld.com/article/3215907/mobile-wireless/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[8]: https://www.networkworld.com/article/3048196/mobile-wireless/faq-802-11ax-wi-fi.html
[9]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
[10]: https://www.networkworld.com/article/3332018/wi-fi/wi-fi-6-with-ofdma-opens-a-world-of-new-wireless-possibilities.html
[11]: https://www.networkworld.com/article/3309439/mobile-wireless/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world


@@ -1,309 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (ClusterShell A Nifty Tool To Run Commands On Cluster Nodes In Parallel)
[#]: via: (https://www.2daygeek.com/clustershell-clush-run-commands-on-cluster-nodes-remote-system-in-parallel-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
ClusterShell - A Nifty Tool To Run Commands On Cluster Nodes In Parallel
======
We have written two articles in the past about running commands on multiple remote servers in parallel.
These are **[Parallel SSH (PSSH)][1]** and **[Distributed Shell (DSH)][2]**.
Today we are going to discuss the same kind of topic again, but this tool lets us perform the same operations on cluster nodes as well.
You may think you could write a small shell script to achieve this instead of installing these third-party packages.
Of course you are right, and if you are only going to run some commands on 10-15 remote systems, you don't need to use this.
However, such a script takes some time to complete the task because it runs sequentially.
Think about what your options would be if you wanted to run some commands on 1,000+ servers.
In that case your script won't help you much, and it would take a good amount of time to complete the task.
So, to overcome this kind of issue, we need to run the commands on the remote machines in parallel.
For that, we need to use one of the parallel utilities. I hope this explanation clears up any doubts you may have about parallel utilities.
### What Is ClusterShell?
`clush` stands for [ClusterShell][3]. ClusterShell is an event-driven, open source Python library, designed to run local or remote commands in parallel on server farms or on large Linux clusters.
It takes care of common issues encountered on HPC clusters, such as operating on groups of nodes, running distributed commands using optimized execution algorithms, as well as gathering results and merging identical outputs, or retrieving return codes.
ClusterShell takes advantage of existing remote shell facilities already installed on your systems, like SSH.
ClusterShell's primary goal is to improve the administration of high-performance clusters by providing a lightweight but scalable Python API for developers. It also provides clush, clubak and cluset/nodeset, convenient command-line tools that allow traditional shell scripts to benefit from some of the library features.
ClusterShell is written in Python and requires Python (v2.6+ or v3.4+) to run on your system.
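As a quick taste of those companion tools, `nodeset` can expand and fold node set expressions on the command line (a minimal sketch; the node names are placeholders):
```
$ nodeset -e node[1-3]
node1 node2 node3

$ nodeset -f node1 node2 node3
node[1-3]
```
The same node set syntax is accepted by `clush -w`, as shown in the examples below.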
### How To Install ClusterShell On Linux?
The ClusterShell package is available in most distributions' official repositories, so use the distribution package manager tool to install it.
For **`Fedora`** systems, use the **[DNF Command][4]** to install clustershell.
```
$ sudo dnf install clustershell
```
If Python 2 is the default on your system, this installs the Python 2 module and tools; run the following command to install the Python 3 package on a Fedora system.
```
$ sudo dnf install python3-clustershell
```
Make sure you have enabled the **[EPEL repository][5]** on your system before installing clustershell.
For **`RHEL/CentOS`** systems, use the **[YUM Command][6]** to install clustershell.
```
$ sudo yum install clustershell
```
If Python 2 is the default on your system, this installs the Python 2 module and tools; run the following command to install the Python 3 package on a CentOS/RHEL system.
```
$ sudo yum install python34-clustershell
```
For **`openSUSE Leap`** systems, use the **[Zypper Command][7]** to install clustershell.
```
$ sudo zypper install clustershell
```
If Python 2 is the default on your system, this installs the Python 2 module and tools; run the following command to install the Python 3 package on an openSUSE system.
```
$ sudo zypper install python3-clustershell
```
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][8]** or **[APT Command][9]** to install clustershell.
```
$ sudo apt install clustershell
```
### How To Install ClusterShell In Linux Using PIP?
Since ClusterShell is written in Python, you can also install it using PIP.
Make sure you have **[Python][10]** and **[PIP][11]** installed on your system before installing clustershell.
```
$ sudo pip install ClusterShell
```
### How To Use ClusterShell On Linux?
It is a straightforward and excellent tool compared with other utilities such as pssh and dsh. It has many options to perform remote execution in parallel.
Make sure you have enabled **[passwordless login][12]** on your systems before you start using clustershell.
The following configuration file defines the system-wide default values. You do not need to modify anything here.
```
$ cat /etc/clustershell/clush.conf
```
If you would like to create a group of servers, you can do it here. Some examples are available by default, so adapt them to your requirements.
```
$ cat /etc/clustershell/groups.d/local.cfg
```
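For example, to define the `dev` and `uat` groups used later in this article, you could add lines like the following to `local.cfg` (a hedged sketch; the group names and node addresses are only examples, and the exact layout of the shipped file may differ):
```
# excerpt from /etc/clustershell/groups.d/local.cfg (hypothetical entries)
dev: 192.168.1.[4,9]
uat: 192.168.1.[5,7]
```
Groups defined this way can then be referenced as `@dev`, or with `-g dev`/`--group=dev`, as shown below.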
Just run the clustershell command in the following format to get the information from the given nodes.
```
$ clush -w 192.168.1.4,192.168.1.9 cat /proc/version
192.168.1.9: Linux version 4.15.0-45-generic ([email protected]) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019
192.168.1.4: Linux version 3.10.0-957.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Thu Nov 8 23:39:32 UTC 2018
```
**Options:**
* **`-w:`** the nodes on which to run the command.
You can use node set patterns (ranges) instead of full hostnames and IPs.
```
$ clush -w 192.168.1.[4,9] uname -r
192.168.1.9: 4.15.0-45-generic
192.168.1.4: 3.10.0-957.el7.x86_64
```
Alternatively, you can use the following format if the servers are in the same IP range.
```
$ clush -w 192.168.1.[4-9] date
192.168.1.6: Mon Mar 4 21:08:29 IST 2019
192.168.1.7: Mon Mar 4 21:08:29 IST 2019
192.168.1.8: Mon Mar 4 21:08:29 IST 2019
192.168.1.5: Mon Mar 4 09:16:30 CST 2019
192.168.1.9: Mon Mar 4 21:08:29 IST 2019
192.168.1.4: Mon Mar 4 09:16:30 CST 2019
```
clustershell allows us to run commands in batch mode. Use the following format to achieve this.
```
$ clush -w 192.168.1.4,192.168.1.9 -b
Enter 'quit' to leave this interactive mode
Working with nodes: 192.168.1.[4,9]
clush> hostnamectl
---------------
192.168.1.4
---------------
Static hostname: CentOS7.2daygeek.com
Icon name: computer-vm
Chassis: vm
Machine ID: 002f47b82af248f5be1d67b67e03514c
Boot ID: f9b37a073c534dec8b236885e754cb56
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-957.el7.x86_64
Architecture: x86-64
---------------
192.168.1.9
---------------
Static hostname: Ubuntu18
Icon name: computer-vm
Chassis: vm
Machine ID: 27f6c2febda84dc881f28fd145077187
Boot ID: f176f2eb45524d4f906d12e2b5716649
Virtualization: oracle
Operating System: Ubuntu 18.04.2 LTS
Kernel: Linux 4.15.0-45-generic
Architecture: x86-64
clush> free -m
---------------
192.168.1.4
---------------
total used free shared buff/cache available
Mem: 1838 641 217 19 978 969
Swap: 2047 0 2047
---------------
192.168.1.9
---------------
total used free shared buff/cache available
Mem: 1993 352 1067 1 573 1473
Swap: 1425 0 1425
clush> w
---------------
192.168.1.4
---------------
09:21:14 up 3:21, 3 users, load average: 0.00, 0.01, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
daygeek :0 :0 06:02 ?xdm? 1:28 0.30s /usr/libexec/gnome-session-binary --session gnome-classic
daygeek pts/0 :0 06:03 3:17m 0.06s 0.06s bash
daygeek pts/1 192.168.1.6 06:03 52:26 0.10s 0.10s -bash
---------------
192.168.1.9
---------------
21:13:12 up 3:12, 1 user, load average: 0.08, 0.03, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
daygeek pts/0 192.168.1.6 20:42 29:41 0.05s 0.05s -bash
clush> quit
```
If you would like to run the command on a group of nodes then use the following format.
```
$ clush -w @dev uptime
or
$ clush -g dev uptime
or
$ clush --group=dev uptime
192.168.1.9: 21:10:10 up 3:09, 1 user, load average: 0.09, 0.03, 0.01
192.168.1.4: 09:18:12 up 3:18, 3 users, load average: 0.01, 0.02, 0.05
```
If you would like to run the command on more than one group of nodes then use the following format.
```
$ clush -w @dev,@uat uptime
or
$ clush -g dev,uat uptime
or
$ clush --group=dev,uat uptime
192.168.1.7: 07:57:19 up 59 min, 1 user, load average: 0.08, 0.03, 0.00
192.168.1.9: 20:27:20 up 1:00, 1 user, load average: 0.00, 0.00, 0.00
192.168.1.5: 08:57:21 up 59 min, 1 user, load average: 0.00, 0.01, 0.05
```
clustershell allows us to copy a file to remote machines. To copy a local file or directory to the remote nodes at the same location:
```
$ clush -w 192.168.1.[4,9] --copy /home/daygeek/passwd-up.sh
```
We can verify the same by running the following command.
```
$ clush -w 192.168.1.[4,9] ls -lh /home/daygeek/passwd-up.sh
192.168.1.4: -rwxr-xr-x. 1 daygeek daygeek 159 Mar 4 09:00 /home/daygeek/passwd-up.sh
192.168.1.9: -rwxr-xr-x 1 daygeek daygeek 159 Mar 4 20:52 /home/daygeek/passwd-up.sh
```
To copy a local file or directory to a different location on the remote nodes:
```
$ clush -g uat --copy /home/daygeek/passwd-up.sh --dest /tmp
```
We can verify the same by running the following command.
```
$ clush --group=uat ls -lh /tmp/passwd-up.sh
192.168.1.7: -rwxr-xr-x. 1 daygeek daygeek 159 Mar 6 07:44 /tmp/passwd-up.sh
```
To copy a file or directory from the remote nodes to the local system:
```
$ clush -w 192.168.1.7 --rcopy /home/daygeek/Documents/magi.txt --dest /tmp
```
We can verify the same by running the following command.
```
$ ls -lh /tmp/magi.txt.192.168.1.7
-rw-r--r-- 1 daygeek daygeek 35 Mar 6 20:24 /tmp/magi.txt.192.168.1.7
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/clustershell-clush-run-commands-on-cluster-nodes-remote-system-in-parallel-linux/
Author: [Magesh Maruthamuthu][a]
Collected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/pssh-parallel-ssh-run-execute-commands-on-multiple-linux-servers/
[2]: https://www.2daygeek.com/dsh-run-execute-shell-commands-on-multiple-linux-servers-at-once/
[3]: https://cea-hpc.github.io/clustershell/
[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[5]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/
[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[9]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[10]: https://www.2daygeek.com/3-methods-to-install-latest-python3-package-on-centos-6-system/
[11]: https://www.2daygeek.com/install-pip-manage-python-packages-linux/
[12]: https://www.2daygeek.com/linux-passwordless-ssh-login-using-ssh-keygen/


@@ -1,225 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (0x996)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mastering user groups on Linux)
[#]: via: (https://www.networkworld.com/article/3409781/mastering-user-groups-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Mastering user groups on Linux
======
Managing user groups on Linux systems is easy, but the commands can be more flexible than you might be aware.
![Scott 97006 \(CC BY 2.0\)][1]
User groups play an important role on Linux systems. They provide an easy way for select groups of users to share files with each other. They also allow sysadmins to more effectively manage user privileges, since they can assign privileges to groups rather than to individual users.
While a user group is generally created whenever a user account is added to a system, there's still a lot to know about how groups work and how to work with them.
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
### One user, one group?
Most user accounts on Linux systems are set up with the user and group names the same. The user "jdoe" will be set up with a group named "jdoe" and will be the only member of that newly created group. The user's login name, user id, and group id will be added to the **/etc/passwd** and **/etc/group** files when the account is added, as shown in this example:
```
$ sudo useradd jdoe
$ grep jdoe /etc/passwd
jdoe:x:1066:1066:Jane Doe:/home/jdoe:/bin/sh
$ grep jdoe /etc/group
jdoe:x:1066:
```
The values in these files allow the system to translate between the text (jdoe) and numeric (1066) versions of the user id — jdoe is 1066 and 1066 is jdoe.
The assigned UID (user id) and GID (group id) for each user are generally the same and configured sequentially. If Jane Doe in the above example were the most recently added user, the next new user would likely be assigned 1067 as their user and group IDs.
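A quick way to see both the numeric and text forms at once is the **id** command (illustrative output, matching the example account above):
```
$ id jdoe
uid=1066(jdoe) gid=1066(jdoe) groups=1066(jdoe)
```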
### GID = UID?
UIDs and GIDs can get out of sync. For example, if you add a group using the **groupadd** command without specifying a group id, your system will assign the next available group id (in this case, 1067). The next user to be added to the system would then get 1067 as a UID but 1068 as a GID.
You can avoid this issue by specifying a smaller group id when you add a group rather than going with the default. In this command, we add a new group and provide a GID that is smaller than the range used for user accounts.
```
$ sudo groupadd -g 500 devops
```
If it works better for you, you can specify a shared group when you create accounts. For example, you might want to assign new development staff members to a devops group instead of putting each one in their own group.
```
$ sudo useradd -g staff bennyg
$ grep bennyg /etc/passwd
bennyg:x:1064:50::/home/bennyg:/bin/sh
```
### Primary and secondary groups
There are actually two types of groups — primary and secondary.
The **primary group** is the one that's recorded in the **/etc/passwd** file, configured when an account is set up. When a user creates a file, it's their primary group that is associated with it.
```
$ whoami
jdoe
$ grep jdoe /etc/passwd
jdoe:x:1066:1066:John Doe:/home/jdoe:/bin/bash
^
|
+-------- primary group
$ touch newfile
$ ls -l newfile
-rw-rw-r-- 1 jdoe jdoe 0 Jul 16 15:22 newfile
^
|
+-------- primary group
```
**Secondary groups** are those that users might be added to once they already have accounts. Secondary group memberships show up in the /etc/group file.
```
$ grep devops /etc/group
devops:x:500:shs,jadep
^
|
+-------- secondary group for shs and jadep
```
The **/etc/group** file assigns names to user groups (e.g., 500 = devops) and records secondary group members.
### Preferred convention
The convention of having each user a member of their own group and optionally a member of any number of secondary groups allows users to more easily separate files that are personal from those they need to share with co-workers. When a user creates a file, members of the various user groups they belong to don't necessarily have access. A user will have to use the **chgrp** command to associate a file with a secondary group.
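For example, to hand the newfile created above over to the devops group (a small sketch; the timestamps are illustrative):
```
$ chgrp devops newfile
$ ls -l newfile
-rw-rw-r-- 1 jdoe devops 0 Jul 16 15:22 newfile
```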
### There's no place like /home
One important detail when adding a new account is that the **useradd** command does not necessarily add a home directory for a new user. If you want this step to be taken only some of the time, you can add **-m** (think of this as the “make home” option) with your useradd commands.
```
$ sudo useradd -m -g devops -c "John Doe" jdoe2
```
The options in this command:
* **-m** creates the home directory and populates it with start-up files
* **-g** specifies the group to assign the user to
* **-c** adds a descriptor for the account (usually the person's name)
If you want a home directory to be created _all_ of the time, you can change the default behavior by editing the **/etc/login.defs** file. Change or add a setting for the CREATE_HOME variable and set it to “yes”:
```
$ grep CREATE_HOME /etc/login.defs
CREATE_HOME yes
```
Another option is to set yourself up with an alias so that **useradd** always uses the -m option.
```
$ alias useradd='useradd -m'
```
Make sure you add the alias to your ~/.bashrc or similar start-up file to make it permanent.
### Looking into /etc/login.defs
Here's a command to list all the settings in the /etc/login.defs file. The **grep** commands hide comments and blank lines.
```
$ cat /etc/login.defs | grep -v "^#" | grep -v "^$"
MAIL_DIR /var/mail
FAILLOG_ENAB yes
LOG_UNKFAIL_ENAB no
LOG_OK_LOGINS no
SYSLOG_SU_ENAB yes
SYSLOG_SG_ENAB yes
FTMP_FILE /var/log/btmp
SU_NAME su
HUSHLOGIN_FILE .hushlogin
ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV_PATH PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
TTYGROUP tty
TTYPERM 0600
ERASECHAR 0177
KILLCHAR 025
UMASK 022
PASS_MAX_DAYS 99999
PASS_MIN_DAYS 0
PASS_WARN_AGE 7
UID_MIN 1000
UID_MAX 60000
GID_MIN 1000
GID_MAX 60000
LOGIN_RETRIES 5
LOGIN_TIMEOUT 60
CHFN_RESTRICT rwh
DEFAULT_HOME yes
CREATE_HOME yes <===
USERGROUPS_ENAB yes
ENCRYPT_METHOD SHA512
```
Notice that the various settings in this file determine the range of user ids to be used, along with password aging and other settings (e.g., umask).
### How to display a user's groups
Users can be members of multiple groups for various reasons. Group membership gives a user access to group-owned files and directories, and sometimes this behavior is critical. To generate a list of the groups that some user belongs to, use the **groups** command.
```
$ groups jdoe
jdoe : jdoe adm admin cdrom sudo dip plugdev lpadmin staff sambashare
```
You can list your own groups by typing “groups” without an argument.
### How to add users to groups
If you want to add an existing user to another group, you can do that with a command like this:
```
$ sudo usermod -a -G devops jdoe
```
You can also add a user to multiple groups by specifying the groups in a comma-separated list:
```
$ sudo usermod -a -G devops,mgrs jdoe
```
The **-a** argument means “add” while **-G** lists the groups.
You can remove a user from a group by editing the **/etc/group** file and removing the username from the list. The usermod command may also have an option for removing a member from a group.
```
fish:x:16:nemo,dory,shark
|
V
fish:x:16:nemo,dory
```
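If you prefer a command over editing the file by hand, the **gpasswd** command can also drop a member from a group (a brief sketch using the example group above):
```
$ sudo gpasswd -d shark fish
Removing user shark from group fish
```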
### Wrap-up
Adding and managing user groups isn't particularly difficult, but consistency in how you configure accounts can make it easier in the long run.
**[ Now see: [Must-know Linux Commands][3] ]**
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3409781/mastering-user-groups-on-linux.html
Author: [Sandra Henry-Stocker][a]
Collected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/carrots-100801917-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world


@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (hello-wn)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


@@ -0,0 +1,120 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introducing Fedora CoreOS)
[#]: via: (https://fedoramagazine.org/introducing-fedora-coreos/)
[#]: author: (Benjamin Gilbert https://fedoramagazine.org/author/bgilbert/)
Introducing Fedora CoreOS
======
![The Fedora CoreOS logo on a gray background.][1]
The Fedora CoreOS team is excited to [announce the first preview release][2] of Fedora CoreOS, a new Fedora edition built specifically for running containerized workloads securely and at scale. It's the successor to both [Fedora Atomic Host][3] and [CoreOS Container Linux][4]. Fedora CoreOS combines the provisioning tools, automatic update model, and philosophy of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host.
Read on for more details about this exciting new release.
### Why Fedora CoreOS?
Containers allow workloads to be reproducibly deployed to production and automatically scaled to meet demand. The isolation provided by a container means that the host OS can be small. It only needs a Linux kernel, systemd, a container runtime, and a few additional services such as an SSH server.
While containers can be run on a full-sized server OS, an operating system built specifically for containers can provide functionality that a general purpose OS cannot. Since the required software is minimal and uniform, the entire OS can be deployed as a unit with little customization. And, since containers are deployed across multiple nodes for redundancy, the OS can update itself automatically and then reboot without interrupting workloads.
Fedora CoreOS is built to be the secure and reliable host for your compute clusters. Its designed specifically for running containerized workloads without regular maintenance, automatically updating itself with the latest OS improvements, bug fixes, and security updates. It provisions itself with Ignition, runs containers with Podman and Moby, and updates itself atomically and automatically with rpm-ostree.
### Provisioning immutable infrastructure
Whether you run in the cloud, virtualized, or on bare metal, a Fedora CoreOS machine always begins from the same place: a [generic OS image][5]. Then, during the first boot, Fedora CoreOS uses _Ignition_ to provision the system. Ignition reads an _Ignition config_ from cloud user data or a remote URL, and uses it to create disk partitions and file systems, users, files and systemd units.
To provision a machine:
1. Write a [Fedora CoreOS Config][6] (FCC), a YAML document that specifies the desired configuration of a machine. FCCs support all Ignition functionality, and also provide additional syntax (“sugar”) that makes it easier to specify typical configuration changes.
2. Use the [Fedora CoreOS Config Transpiler][7] to [validate your FCC and convert it to an Ignition config][8] (a minimal sketch of steps 1 and 2 follows this list).
3. Launch a Fedora CoreOS machine and [pass it the Ignition config][9]. If the machine boots successfully, provisioning has completed without errors.
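As a rough illustration of steps 1 and 2 only (the YAML keys follow the FCC v1.0 spec linked above, and the exact transpiler invocation may differ between fcct releases, so treat both as assumptions to check against the documentation):
```
# Step 1: write a minimal FCC that adds an SSH key for the default "core" user
$ cat > example.fcc <<'EOF'
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAA... your-public-key
EOF

# Step 2: validate the FCC and convert it to an Ignition config
# (assumes fcct reads the FCC on stdin and writes the Ignition config to stdout)
$ fcct --pretty --strict < example.fcc > example.ign
```
The resulting example.ign is what you pass to the machine in step 3.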
Fedora CoreOS is designed to be managed as _immutable infrastructure_. After a machine is provisioned, you should not modify _/etc_ or otherwise reconfigure the machine. Instead, modify the FCC and use it to provision a replacement machine.
This is similar to how you'd manage a container: container images are not updated in place, but rebuilt from scratch and redeployed. This approach makes it easy to scale out when load increases. Simply use the same Ignition config to launch additional machines.
### Automatic updates
By default, Fedora CoreOS automatically downloads new OS releases, atomically installs them, and reboots into them. Releases roll out gradually over time. We can even stop a rollout if we discover a problem in a new release. Upgrades between Fedora releases are treated as any other update, and are automatically applied without user intervention.
The Linux ecosystem evolves quickly, and software updates can bring undesired behavior changes. However, for automatic updates to be trustworthy, they cannot break existing machines. To avoid this, Fedora CoreOS takes a two-pronged approach. First, we automatically test each change to the OS. However, automatic testing can't catch all regressions, so Fedora CoreOS also ships multiple independent _release streams_:
* The _testing_ stream is a regular snapshot of the current Fedora release, plus updates.
* After a _testing_ release has been available for two weeks, it is sent to the _stable_ stream. Bugs discovered in _testing_ will be fixed before a release is sent to _stable_.
* The _next_ stream is a regular snapshot of the upcoming Fedora release, allowing additional time for testing larger changes.
All three streams receive security updates and critical bugfixes, and are intended to be safe for production use. Most machines should run the _stable_ stream, since that receives the most testing. However, users should run a few percent of their nodes on the _next_ and _testing_ streams, and report problems to the [issue tracker][10]. This helps ensure that bugs that only affect certain workloads or certain hardware are fixed before they reach _stable_.
### Telemetry
To help direct our development efforts, Fedora CoreOS will perform some telemetry by default. A service called _[fedora-coreos-pinger][11]_ will periodically collect non-identifying information about the machine, such as the OS version, cloud platform, and instance type, and report it to servers controlled by the Fedora project.
No unique identifiers will be reported or collected, and the data will only be used in aggregate to answer questions about how Fedora CoreOS is being used. We will prominently document that this collection is occurring and how to disable it. We will also tell you how to help the project by reporting additional detail, including information that might identify the machine.
### Current status of Fedora CoreOS
Fedora CoreOS is still under active development, and some planned functionality is not available in the first preview release:
* Only the _testing_ stream currently exists; the _next_ and _stable_ streams are not yet available.
* Several cloud and virtualization platforms are not yet available. Only x86_64 is currently supported.
* Booting a live Fedora CoreOS system via network (PXE) or CD is not yet supported.
* We are actively discussing plans for closer integration with Kubernetes distributions, including OKD.
* Fedora CoreOS Config Transpiler will gain more sugar over time.
* Telemetry is not yet active.
* Documentation is still under development.
**While Fedora CoreOS is intended for production use, preview releases should _not_ be used in production. Fedora CoreOS may change in incompatible ways during the preview period. There is no guarantee that a preview release will successfully update to a later preview release, or to a stable release.**
### The future
We expect the preview period to continue for about six months. At the end of the preview, we will declare Fedora CoreOS stable and encourage its use in production.
CoreOS Container Linux will be maintained until about six months after Fedora CoreOS is declared stable. We'll announce the exact timing later this year. During the preview period, we'll publish tools and documentation to help Container Linux users migrate to Fedora CoreOS.
[Fedora Atomic Host will be maintained][12] until the end of life of Fedora 29, expected in late November. Before then, Fedora Atomic Host users should migrate to Fedora CoreOS.
### Getting involved in Fedora CoreOS
To try out the new release, head over to the [download page][5] to get OS images or cloud image IDs. Then use the [quick start guide][13] to get a machine running quickly. Finally, get involved! You can report bugs and missing features to the [issue tracker][10]. You can also discuss Fedora CoreOS in [Fedora Discourse][14], the [development mailing list][15], or in #fedora-coreos on Freenode.
Welcome to Fedora CoreOS, and let us know what you think!
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/introducing-fedora-coreos/
Author: [Benjamin Gilbert][a]
Collected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://fedoramagazine.org/author/bgilbert/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/07/introducing-fedora-coreos-816x345.png
[2]: https://getfedora.org/coreos/
[3]: https://www.projectatomic.io/
[4]: https://coreos.com/os/docs/latest/
[5]: https://getfedora.org/coreos/download/
[6]: https://github.com/coreos/fcct/blob/master/docs/configuration-v1_0.md
[7]: https://github.com/coreos/fcct/releases
[8]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/#_generating_ignition_configs
[9]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/#_launching_fedora_coreos
[10]: https://github.com/coreos/fedora-coreos-tracker/issues
[11]: https://github.com/coreos/fedora-coreos-pinger/
[12]: https://fedoramagazine.org/announcing-fedora-coreos/
[13]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/
[14]: https://discussion.fedoraproject.org/c/server/coreos
[15]: https://lists.fedoraproject.org/archives/list/coreos@lists.fedoraproject.org/


@@ -0,0 +1,292 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (24 sysadmin job interview questions you should know)
[#]: via: (https://opensource.com/article/19/7/sysadmin-job-interview-questions)
[#]: author: (DirectedSoul https://opensource.com/users/directedsoul)
24 sysadmin job interview questions you should know
======
Have a sysadmin job interview coming up? Read this article for some questions you might encounter and possible answers.
![Question and answer.][1]
As a geek who always played with computers, a career after my master's in IT was a natural choice. So, I decided the sysadmin path was the right one. In the process of my career, I have grown quite familiar with the job interview process. Here is a look at what to expect, the general career path, and a set of common questions and my answers to them.
### Typical sysadmin tasks and duties
Organizations need someone who understands the basics of how a system works so that they can keep their data safe, and keep their services running smoothly. You might ask: "Wait, isn't there more that a sysadmin can do?"
You are right. Now, in general, let's look at what might be a typical sysadmin's day-to-day tasks. Depending on their company's needs and the person's skill level, a sysadmin's tasks vary from managing desktops, laptops, networks, and servers, to designing the organization's IT policies. Sometimes sysadmins are even in charge of purchasing and placing orders for new IT equipment.
Those seeking system administration as their career path might find it difficult to keep their skills and knowledge up to date, as rapid changes in the IT field are inevitable. The next natural question that arises in anyone's mind is how IT professionals keep up with the latest updates and skills.
### Low difficulty questions
Here are some of the more basic questions you will encounter, and my answers:
1. What are the first five commands you type on a *nix server after login?
> * **lsblk** to see information on all block devices
> * **who** to see who is logged into the server
> * **top** to get a sense of what is running on the server
> * **df -khT** to view the amount of disk space available on the server
> * **netstat** to see what TCP network connections are active
>
2. How do you make a process run in the background, and what are the advantages of doing so?
> You can make a process run in the background by adding the special character **&** at the end of the command. Generally, applications that take too long to execute and don't require user interaction are sent to the background so that we can continue our work in the terminal. ([Citation][2])
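For instance (a small sketch; **sleep** simply stands in for a long-running job, and the job number and PID will differ):
```
$ sleep 300 &
[1] 12345
$ jobs
[1]+  Running                 sleep 300 &
$ fg %1
sleep 300
```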
3. Is running these commands as root a good or bad idea?
> Running (everything) as root is bad due to two major issues. The first is _risk_. Nothing prevents you from making a careless mistake when you are logged in as **root**. If you try to change the system in a potentially harmful way, you need to use **sudo**, which introduces a pause (while you're entering the password) to ensure that you aren't about to make a mistake.
>
> The second reason is _security_. Systems are harder to hack if you don't know the admin user's login information. Having access to root means you already have one half of the working set of admin credentials.
4. What is the difference between **rm** and **rm -rf**?
> The **rm** command by itself only deletes the named files (and not directories). With **-rf** you add two additional features: The **-r**, **-R**, or **--recursive** flag recursively deletes the directory's contents, including hidden files and subdirectories, and the **-f**, or **--force**, flag makes **rm** ignore nonexistent files, and never prompt for confirmation.
5. **Compress.tgz** has a file size of approximately 15GB. How can you list its contents, and how do you list them only for a specific file?
> To list the file's contents:
>
> **tar tf archive.tgz**
>
> To extract a specific file:
>
> **tar xf archive.tgz filename**
### Medium difficulty questions
Here are some harder questions you might encounter, and my answers:
6. What is RAID? What is RAID 0, RAID 1, RAID 5, RAID 6, and RAID 10?
> A RAID (Redundant Array of Inexpensive Disks) is a technology used to increase the performance and/or reliability of data storage. The RAID levels are:
>
> * RAID 0: Also known as disk striping, which is a technique that breaks up a file, and spreads the data across all of the disk drives in a RAID group. There are no safeguards against failure. ([Citation][3])
> * RAID 1: A popular disk subsystem that increases safety by writing the same data on two drives. Called _mirroring_, RAID 1 does not increase write performance, but read performance may increase up to the sum of each disk's performance. Also, if one drive fails, the second drive is used, and the failed drive is manually replaced. After replacement, the RAID controller duplicates the contents of the working drive onto the new one.
> * RAID 5: A disk subsystem that increases safety by computing parity data and increasing speed. RAID 5 does this by interleaving data across three or more drives (striping). Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.
> * RAID 6: Which extends RAID 5 by adding another parity block. This level requires a minimum of four disks, and can continue to execute read/write with any two concurrent disk failures. RAID 6 does not have a performance penalty for reading operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations.
> * RAID 10: Also known as RAID 1+0, RAID 10 combines disk mirroring and disk striping to protect data. It requires a minimum of four disks, and stripes data across mirrored pairs. As long as one disk in each mirrored pair is functional, data can be retrieved. If two disks in the same mirrored pair fail, all data will be lost because there is no parity in the striped sets. ([Citation][4])
>
7. Which port is used for the **ping** command?
> The **ping** command uses ICMP. Specifically, it uses ICMP echo requests and ICMP echo reply packets.
>
> ICMP does not use either UDP or TCP communication services: Instead, it uses raw IP communication services. This means that the ICMP message is carried directly in an IP datagram data field.
8. What is the difference between a router and a gateway? What is the default gateway?
> _Router_ describes the general technical function (layer 3 forwarding), or a hardware device intended for that purpose, while _gateway_ describes the function for the local segment (providing connectivity to elsewhere). You could also state that you "set up a router as a gateway." Another term is _hop_, which refers to each router a packet passes through on the way to its destination.
>
> The term _default gateway_ is used to mean the router on your LAN, which has the responsibility of being the first point of contact for traffic to computers outside the LAN.
9. Explain the boot process for Linux.
> BIOS -> Master Boot Record (MBR) -> GRUB -> the kernel -> init -> runlevel (on systemd-based distributions, **systemd** and its targets take the place of **init** and runlevels)
10. How do you check the error messages while the server is booting up?
> Kernel messages are always stored in the kmsg buffer, visible via the **dmesg** command.
>
> Boot issues and errors call for a system administrator to look into certain important files, in conjunction with particular commands, which are each handled differently by different versions of Linux:
>
> * **/var/log/boot.log** is the system boot log, which contains all that unfolded during the system boot.
> * **/var/log/messages** stores global system messages, including the messages logged during system boot.
> * **/var/log/dmesg** contains kernel ring buffer information.
>
11. What is the difference between a symbolic link and a hard link?
> A _symbolic_ or _soft link_ points to the original file by name, whereas a _hard link_ is an additional directory entry that references the same inode (and therefore the same data) as the original file. If you delete the original file, the soft link loses its value, because it then points to a non-existent path. With a hard link it is entirely the opposite: delete the original name, and the hard link still gives you the files contents, because the data is not freed until the last link to the inode is removed. ([Citation][5]) (A short **ln** demonstration appears at the end of this section.)
12. How do you change kernel parameters? What kernel options might you need to tune?
> To set kernel parameters on Unix-like systems, first edit the file **/etc/sysctl.conf**. After making the changes, save the file and run the **sysctl -p** command; editing the file makes the change persistent across reboots, while **sysctl -p** (or **sysctl -w**) applies it immediately without rebooting the machine. Commonly tuned options include **vm.swappiness**, **fs.file-max**, and **net.ipv4.ip_forward**. (A short example appears at the end of this section.)
13. Explain the **/proc** filesystem.
> The **/proc** filesystem is virtual, and provides detailed information about the kernel, hardware, and running processes. Since **/proc** contains virtual files, it is called the _virtual file system_. These virtual files have unique qualities. Most of them are listed as zero bytes in size.
>
> Virtual files such as **/proc/interrupts**, **/proc/meminfo**, **/proc/mounts** and **/proc/partitions** provide an up-to-the-moment glimpse of the systems hardware. Others, such as **/proc/filesystems** and the **/proc/sys** directory provide system configuration information and interfaces.
14. How do you run a script as another user without their password?
> For example, to allow **user1** to run a specific script as **user2** without a password, use **visudo** to edit the sudoers file (**/etc/sudoers**) and add the following:
>
> [**user1 ALL=(user2) NOPASSWD: /opt/scripts/bin/generate.sh**][2]
15. What is the UID 0 toor account? Have you been compromised?
> The toor user is an alternative superuser account, where toor is root spelled backward. It is intended to be used with a non-standard shell, so the default shell for root does not need to change.
>
> This purpose is important. Shells which are not part of the base distribution, but are instead installed from ports or packages, are installed in **/usr/local/bin**, which, by default, resides on a different file system. If roots shell is located in **/usr/local/bin** and the file system containing **/usr/local/bin** is not mounted, root could not log in to fix a problem, and the sysadmin would have to reboot into single-user mode to fix roots shell path.
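For question 11, a minimal demonstration of the difference between the two link types (a sketch, run in an empty directory):
```
$ echo "hello" > original.txt
$ ln -s original.txt soft.txt   # symbolic link: a named pointer to the path
$ ln original.txt hard.txt      # hard link: a second directory entry for the same inode
$ rm original.txt
$ cat soft.txt                  # fails: the link now points to a non-existent path
$ cat hard.txt                  # still prints "hello": the data survives via the inode
```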
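And for question 12, a short sketch of tuning one kernel parameter both at runtime and persistently; the parameter and value here are only examples:
```
$ sudo sysctl vm.swappiness=10                      # apply immediately
$ echo "vm.swappiness = 10" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p                                    # reload the settings from /etc/sysctl.conf
```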
### Advanced questions
Here are the even more difficult questions you may encounter:
16. How does **tracert** work and what protocol does it use?
> The command **tracert**—or **traceroute** depending on the operating system—allows you to see exactly what routers you touch as you move through the chain of connections to your final destination. If you end up with a problem where you cant connect to or **ping** your final destination, a **tracert** can help in that you can tell exactly where the chain of connections stops. ([Citation][6])
>
> With this information, you can contact the correct people, whether that means your own firewall, your ISP, your destinations ISP, or someone in the middle. Under the hood, the command sends probes with increasing TTL (time-to-live) values; each router along the path returns an ICMP "Time Exceeded" message when the TTL runs out, which is how the hops are discovered. Windows **tracert** sends ICMP echo requests, while Linux **traceroute** sends UDP probes by default and can also use ICMP (**-I**) or TCP SYN packets (**-T**).
17. What is the main advantage of using **chroot**? When and why do we use it? What is the purpose of the **mount /dev**, **mount /proc**, and **mount /sys** commands in a **chroot** environment? 
> An advantage of a **chroot** environment is isolation from the rest of the host: the process is given a newly designated directory as its root (**/**), so it sees a separate directory tree inside your filesystem and cannot reach the hosts files through normal path lookups.
>
> A **chroot** jail lets you isolate a process and its children from the rest of the system. It should only be used for processes that dont run as **root**, as **root** users can break out of the jail easily.
>
> The idea is that you create a directory tree where you copy or link in all of the system files needed for the process to run. You then use the **chroot()** system call to tell it the root directory now exists at the base of this new tree, and then start the process running in that **chroot**d environment. Since the command then cant reference paths outside the modified root directory, it cant perform operations (read, write, etc.) maliciously on those locations. ([Citation][7]) As for the mounts: **/dev**, **/proc**, and **/sys** are virtual filesystems provided by the kernel, so they are empty inside a freshly created chroot; bind-mounting them into the chroot gives processes inside access to device nodes, process information, and kernel interfaces that many programs need in order to run. (A sketch of this sequence appears at the end of this section.)
18. How do you protect your system from getting hacked?
> By following the principle of least privileges and these practices:
>
> * Use public-key authentication (for example, for SSH) rather than passwords alone, which provides far better security.
> * Enforce password complexity.
> * Understand why you are making exceptions to the rules above.
> * Review your exceptions regularly.
> * Hold someone to account for failure. (It keeps you on your toes.) ([Citation][8])
>
19. What is LVM, and what are the advantages of using it?
> LVM (Logical Volume Management) is a storage device management technology that gives users the power to pool and abstract the physical layout of component storage devices for easier and more flexible administration. Using the device-mapper framework in the Linux kernel, the current iteration (LVM2) can gather existing storage devices into volume groups and allocate logical volumes from the combined space as needed. The main advantages are that volumes can be resized, moved, and snapshotted while the system is running. (A short sketch appears at the end of this section.)
20. What are sticky ports?
> Sticky ports are one of the network administrators best friends and worst headaches. They allow you to set up your network so that each port on a switch only permits one (or a number that you specify) computer to connect on that port, by locking it to a particular MAC address.
21. Explain port forwarding?
> When trying to communicate with systems on the inside of a secured network, it can be very difficult to do so from the outside—and with good reason. Therefore, the use of a port forwarding table within the router itself, or other connection management device, can allow specific traffic to automatically forward to a particular destination. For example, if you had a web server running on your network and you wanted to grant access to it from the outside, you would set up port forwarding to port 80 on the server in question. This would mean that anyone entering your IP address in a web browser would connect to the servers website immediately.
>
> Please note, it is usually not recommended to allow access to a server from the outside directly into your network.
22. What is a false positive and false negative in the case of IDS?
> When the Intrusion Detection System (IDS) generates an alert for an intrusion that has not actually happened, that is a false positive. If the device does not generate an alert for an intrusion that has actually happened, that is a false negative.
23. Explain **:(){ :|:& };:** and how to stop this code if you are already logged into the system?
> This is a fork bomb. It breaks down as follows:
>
> * **:()** defines the function, with **:** as the function name, and the empty parentheses show that it will not accept any arguments.
> * **{ }** shows the beginning and end of the function definition.
> * **:|:** loads a copy of the function **:** into memory, and pipes its output to another copy of the **:** function, which also has to be loaded into memory.
> * **&** makes the previous item a background process, so that the child processes will not get killed even though the parent gets auto-killed.
> * **:** at the end executes the function again, and hence the chain reaction begins.
>
>
> The best way to protect a multi-user system is to use PAM (Pluggable Authentication Modules) to limit the number of processes a user can start, typically via **pam_limits** and **/etc/security/limits.conf**. (A sketch appears at the end of this section.)
>
> The biggest problem with a fork bomb is that it uses up all of the available process slots. So, there are two ways of attempting to fix this if you are already logged in to the system. One option is to send the **SIGSTOP** signal to halt all of that users processes, such as:
>
> **killall -STOP -u user1**
>
> If you cant spawn a new process because the process table is full, you will have to use **exec**, which replaces the current shell process instead of forking a new one:
>
> **exec killall -STOP -u user1**
>
> With fork bombs, your best option is preventing them from becoming too big of an issue in the first place.
24. What is OOM killer and how does it decide which process to kill first?
> If processes use up memory to the point that system stability is threatened, the out-of-memory (OOM) killer comes into the picture.
>
> An OOM killer first has to select the best process(es) to kill. _Best_ here refers to the process which will free up the maximum memory upon being killed, and is also the least important to the system. The primary goal is to kill the least number of processes to minimize the damage done, and at the same time maximize the amount of memory freed.
>
> To facilitate this goal, the kernel maintains an oom_score for each of the processes. You can see the oom_score of each of the processes in the **/proc** filesystem under the **pid** directory:
>
> **$ cat /proc/10292/oom_score**
>
> The higher the value of oom_score for any process, the higher its likelihood is of being killed by the OOM Killer in an out-of-memory situation. ([Citation][9])
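For question 17, one common sequence for preparing and entering a chroot; the target directory **/mnt/sysroot** is only an assumed example:
```
$ sudo mount --bind /dev  /mnt/sysroot/dev    # device nodes
$ sudo mount --bind /proc /mnt/sysroot/proc   # process and kernel information
$ sudo mount --bind /sys  /mnt/sysroot/sys    # kernel and hardware interfaces
$ sudo chroot /mnt/sysroot /bin/bash
```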
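For question 19, a sketch of building a logical volume from scratch; the device names and sizes are assumptions, so adjust them for the system at hand:
```
$ sudo pvcreate /dev/sdb /dev/sdc                 # register physical volumes
$ sudo vgcreate data_vg /dev/sdb /dev/sdc         # pool them into a volume group
$ sudo lvcreate -L 20G -n data_lv data_vg         # carve out a logical volume
$ sudo mkfs.ext4 /dev/data_vg/data_lv
$ sudo lvextend -L +10G -r /dev/data_vg/data_lv   # grow it later, resizing the filesystem too
```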
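And for question 23, a sketch of capping the number of processes a user can start, which blunts a fork bomb before it begins; the user name and limit are examples:
```
$ echo "user1  hard  nproc  200" | sudo tee -a /etc/security/limits.conf   # enforced by pam_limits at login
$ ulimit -u 200                                                            # the same cap for the current shell only
```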
### Conclusion
System administration salaries have a [wide range][10], with some sites mentioning $70,000 to $100,000 a year, depending on the location, the size of the organization, and your education level plus years of experience. In the end, the system administration career path boils down to your interest in working with servers and solving cool problems. Now, I would say go ahead and pursue your dream path.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/sysadmin-job-interview-questions
作者:[DirectedSoul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/directedsoul
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_HowToFish_520x292.png?itok=DHbdxv6H (Question and answer.)
[2]: https://github.com/trimstray/test-your-sysadmin-skills
[3]: https://www.waytoeasylearn.com/2016/05/netapp-filer-tutorial.html
[4]: https://searchstorage.techtarget.com/definition/RAID-10-redundant-array-of-independent-disks
[5]: https://www.answers.com/Q/What_is_hard_link_and_soft_link_in_Linux
[6]: https://www.wisdomjobs.com/e-university/network-administrator-interview-questions.html
[7]: https://unix.stackexchange.com/questions/105/chroot-jail-what-is-it-and-how-do-i-use-it
[8]: https://serverfault.com/questions/391370/how-to-prevent-zero-day-attacks
[9]: https://unix.stackexchange.com/a/153586/8369
[10]: https://blog.netwrix.com/2018/07/23/systems-administrator-salary-in-2018-how-much-can-you-earn/

View File

@ -0,0 +1,263 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Introduction to GNU Autotools)
[#]: via: (https://opensource.com/article/19/7/introduction-gnu-autotools)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
Introduction to GNU Autotools
======
If you're not using Autotools yet, this tutorial will change the way you
deliver your code.
![Linux kernel source code \(C\) in Visual Studio Code][1]
Have you ever downloaded the source code for a popular software project that required you to type the almost ritualistic **./configure; make && make install** command sequence to build and install it? If so, youve used [GNU Autotools][2]. If youve ever looked into some of the files accompanying such a project, youve likely also been terrified at the apparent complexity of such a build system.
Good news! GNU Autotools is a lot simpler to set up than you think, and its GNU Autotools itself that generates those 1,000-line configuration files for you. Yes, you can write 20 or 30 lines of installation code and get the other 4,000 for free.
### Autotools at work
If youre a user new to Linux looking for information on how to install applications, you do not have to read this article! Youre welcome to read it if you want to research how software is built, but if youre just installing a new application, go read my article about [installing apps on Linux][3].
For developers, Autotools is a quick and easy way to manage and package source code so users can compile and install software. Autotools is also well-supported by major packaging formats, like DEB and RPM, so maintainers of software repositories can easily prepare a project built with Autotools.
Autotools works in stages:
1. First, during the **./configure** step, Autotools scans the host system (the computer its being run on) to discover the default settings. Default settings include where support libraries are located, and where new software should be placed on the system.
2. Next, during the **make** step, Autotools builds the application, usually by converting human-readable source code into machine language.
3. Finally, during the **make install** step, Autotools copies the files it built to the appropriate locations (as detected during the configure stage) on your computer.
This process seems simple, and it is, as long as you use Autotools.
### The Autotools advantage
GNU Autotools is a big and important piece of software that most of us take for granted. Along with [GCC (the GNU Compiler Collection)][4], Autotools is the scaffolding that allows Free Software to be constructed and installed to a running system. If youre running a [POSIX][5] system, its no exaggeration to say that most of your operating system exists as runnable software on your computer because of these projects.
In the likely event that your pet project isnt an operating system, you might assume that Autotools is overkill for your needs. But, despite its reputation, Autotools has lots of little features that may benefit you, even if your project is a relatively simple application or series of scripts.
#### Portability
First of all, Autotools comes with portability in mind. While it cant make your project work across all POSIX platforms (thats up to you, as the coder), Autotools can ensure that the files youve marked for installation get installed to the most sensible locations on a known platform. And because of Autotools, its trivial for a power user to customize and override any non-optimal value, according to their own system.
With Autotools, all you need to know is what files need to be installed to what general location. It takes care of everything else. No more custom install scripts that break on any untested OS.
#### Packaging
Autotools is also well-supported. Hand a project with Autotools over to a distro packager, whether theyre packaging an RPM, DEB, TGZ, or anything else, and their job is simple. Packaging tools know Autotools, so theres likely to be no patching, hacking, or adjustments necessary. In many cases, incorporating an Autotools project into a pipeline can even be automated.
### How to use Autotools
To use Autotools, you must first have Autotools installed. Your distribution may provide one package meant to help developers build projects, or it may provide separate packages for each component, so you may have to do some research on your platform to discover what packages you need to install.
The components of Autotools are:
  * **autoconf**
  * **automake**
  * **make**
While you likely need to install the compiler (GCC, for instance) required by your project, Autotools works just fine with scripts or binary assets that dont need to be compiled. In fact, Autotools can be useful for such projects because it provides a **make uninstall** script for easy removal.
Once you have all of the components installed, its time to look at the structure of your projects files.
#### Autotools project structure
GNU Autotools has very specific expectations, and most of them are probably familiar if you download and build source code often. First, the source code itself is expected to be in a subdirectory called **src**.
Your project doesnt have to follow all of these expectations, but if you put files in non-standard locations (from the perspective of Autotools), then youll have to make adjustments for that in your Makefile later.
Additionally, these files are required:
* **NEWS**
* **README**
* **AUTHORS**
* **ChangeLog**
You dont have to actively use the files, and they can be symlinks to a monolithic document (like **README.md**) that encompasses all of that information, but they must be present.
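As a rough sketch, here is what a minimal layout might look like for the hypothetical "penguin" project used later in this article; the files can start out empty:
```
$ mkdir -p penguin/src
$ cd penguin
$ touch NEWS README AUTHORS ChangeLog            # required by Automake's default (GNU) strictness
$ touch configure.ac Makefile.am src/penguin.cpp
```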
#### Autotools configuration
Create a file called **configure.ac** at your projects root directory. This file is used by **autoconf** to create the **configure** shell script that users run before building. The file must contain, at the very least, the **AC_INIT** and **AC_OUTPUT** [M4 macros][6]. You dont need to know anything about the M4 language to use these macros; theyre already written for you, and all of the ones relevant to Autotools are defined in the documentation.
Open the file in your favorite text editor. The **AC_INIT** macro may consist of the package name, version, an email address for bug reports, the project URL, and optionally the name of the source TAR file.
The **[AC_OUTPUT][7]** macro is much simpler and accepts no arguments.
```
AC_INIT([penguin], [2019.3.6], [seth@example.com])
AC_OUTPUT
```
If you were to run **autoconf** at this point, a **configure** script would be generated from your **configure.ac** file, and it would run successfully. Thats all it would do, though, because all you have done so far is define your projects metadata and called for a configuration script to be created.
The next macros you must invoke in your **configure.ac** file are functions to create a [Makefile][9]. A Makefile tells the **make** command what to do (usually, how to compile and link a program).
The macros to create a Makefile are **AM_INIT_AUTOMAKE**, which accepts no arguments, and **AC_CONFIG_FILES**, which accepts the name you want to call your output file. Note that **AC_OUTPUT** should remain the last macro call in **configure.ac**, because it is what actually generates the files registered by the other macros.
Finally, you must add a macro to account for the compiler your project needs. The macro you use obviously depends on your project. If your project is written in C++, the appropriate macro is **AC_PROG_CXX**, while a project written in C requires **AC_PROG_CC**, and so on, as detailed in the [Building Programs and Libraries][10] section in the Autoconf documentation.
For example, I might add the following for my C++ program:
```
AC_INIT([penguin], [2019.3.6], [seth@example.com])
AM_INIT_AUTOMAKE
AC_CONFIG_FILES([Makefile])
AC_PROG_CXX
AC_OUTPUT
```
Save the file. Its time to move on to the Makefile.
#### Autotools Makefile generation
Makefiles arent difficult to write manually, but Autotools can write one for you, and the one it generates will use the configuration options detected during the `./configure` step, and it will contain far more options than you would think to include or want to write yourself. However, Autotools cant detect everything your project requires to build, so you have to add some details in the file **Makefile.am**, which in turn is used by **automake** when constructing a Makefile.
**Makefile.am** uses the same syntax as a Makefile, so if youve ever written a Makefile from scratch, then this process will be familiar and simple. Often, a **Makefile.am** file needs only a few variable definitions to indicate what files are to be built, and where they are to be installed.
Variables ending in **_PROGRAMS** identify code that is to be built (this is usually considered the _primary_ target; its the main reason the Makefile exists). Automake recognizes other primaries, like **_SCRIPTS**, **_DATA**, **_LIBRARIES**, and other common parts that make up a software project.
If your application is literally compiled during the build process, then you identify it as a binary program with the **bin_PROGRAMS** variable, and then reference any part of the source code required to build it (these parts may be one or more files to be compiled and linked together) using the program name as the variable prefix:
```
bin_PROGRAMS = penguin
penguin_SOURCES = penguin.cpp
```
The target of **bin_PROGRAMS** is installed into the **bindir**, which is user-configurable during compilation.
If your application isnt actually compiled, then your project doesnt need a **bin_PROGRAMS** variable at all. For instance, if your project is a script written in Bash, Perl, or a similar interpreted language, then define a **_SCRIPTS** variable instead:
```
bin_SCRIPTS = bin/penguin
```
Automake expects sources to be located in a directory called **src**, and by default it also insists on the GNU-standard documentation files described earlier. If your project uses an alternative layout, tell Automake to relax those expectations: the **foreign** option accepts a non-GNU layout, and **subdir-objects** lets sources (and their object files) live in subdirectories:
```
AUTOMAKE_OPTIONS = foreign subdir-objects
```
Finally, you can create any custom Makefile rules in **Makefile.am** and theyll be copied verbatim into the generated Makefile. For instance, if you know that a temporary value needs to be replaced in your source code before the installation proceeds, you could make a custom rule for that process:
```
all-am: penguin
        touch bin/penguin.sh
       
penguin: bin/penguin.sh
        @sed "s|__datadir__|@datadir@|" $&lt; &gt;bin/$@
```
A particularly useful trick is to extend the existing **clean** target, at least during development. The **make clean** command generally removes all generated build files with the exception of the Automake infrastructure. Its designed this way because most users rarely want **make clean** to obliterate the files that make it easy to build their code.
However, during development, you might want a method to reliably return your project to a state relatively unaffected by Autotools. In that case, you may want to add this:
```
clean-local:
        @rm config.status configure config.log
        @rm Makefile
        @rm -r autom4te.cache/
        @rm aclocal.m4
        @rm compile install-sh missing Makefile.in
```
Theres a lot of flexibility here, and if youre not already familiar with Makefiles, it can be difficult to know what your **Makefile.am** needs. The barest necessity is a primary target, whether thats a binary program or a script, and an indication of where the source code is located (whether thats through a **_SOURCES** variable or by using **AUTOMAKE_OPTIONS** to tell Automake where to look for source code).
Once you have those variables and settings defined, you can try generating your build scripts as you see in the next section, and adjust for anything thats missing.
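To make that concrete, here is a sketch of a complete **Makefile.am** for the interpreted-script case described above, written with a heredoc so it can be pasted into a shell; the **bin/penguin** path is the one assumed in the earlier example:
```
$ cat > Makefile.am <<'EOF'
# Accept a non-GNU layout and keep object files next to their sources
AUTOMAKE_OPTIONS = foreign subdir-objects

# The primary target: a script installed into $(bindir)
bin_SCRIPTS = bin/penguin
EOF
```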
#### Autotools build script generation
Youve built the infrastructure, now its time to let Autotools do what it does best: automate your project tooling. The way the developer (you) interfaces with Autotools is different from how users building your code do.
Builders generally use this well-known sequence:
```
$ ./configure
$ make
$ sudo make install
```
For that incantation to work, though, you as the developer must bootstrap the build infrastructure. First, run **autoreconf** to generate the configure script that users invoke before running **make**. Use the **--install** option to bring in auxiliary files, such as a symlink to **depcomp** (a script that generates dependencies during compilation), a copy of the **compile** script (a wrapper that smooths over compiler syntax variance), and so on.
```
$ autoreconf --install
configure.ac:3: installing './compile'
configure.ac:2: installing './install-sh'
configure.ac:2: installing './missing'
```
With this development build environment, you can then create a package for source code distribution:
```
$ make dist
```
The **dist** target is a rule you get for "free" from Autotools.
Its a feature that gets built into the Makefile generated from your humble **Makefile.am** configuration. This target produces a **tar.gz** archive containing all of your source code and all of the essential Autotools infrastructure so that people downloading the package can build the project.
At this point, you should review the contents of the archive carefully to ensure that it contains everything you intend to ship to your users. You should also, of course, try building from it yourself:
```
$ tar --extract --file penguin-2019.3.6.tar.gz
$ cd penguin-2019.3.6
$ ./configure
$ make
$ DESTDIR=/tmp/penguin-test-build make install
```
If your build is successful, you find a local copy of your compiled application specified by **DESTDIR** (in the case of this example, **/tmp/penguin-test-build**).
```
$ /tmp/penguin-test-build/usr/local/bin/penguin
hello world from GNU Autotools
```
### Time to use Autotools
Autotools is a great collection of scripts for a predictable and automated release process. This toolset may be new to you if youre used to Python or Bash builders, but its likely worth learning for the structure and adaptability it provides to your project.
And Autotools is not just for code, either. Autotools can be used to build [Docbook][11] projects, keep media organized (I use Autotools for my music releases), manage documentation projects, and handle anything else that could benefit from customizable install targets.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/7/introduction-gnu-autotools
作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_kernel_clang_vscode.jpg?itok=fozZ4zrr (Linux kernel source code (C) in Visual Studio Code)
[2]: https://www.gnu.org/software/automake/faq/autotools-faq.html
[3]: https://opensource.com/article/18/1/how-install-apps-linux
[4]: https://en.wikipedia.org/wiki/GNU_Compiler_Collection
[5]: https://en.wikipedia.org/wiki/POSIX
[6]: https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Initializing-configure.html
[7]: https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/Output.html#Output
[8]: mailto:seth@example.com
[9]: https://www.gnu.org/software/make/manual/html_node/Introduction.html
[10]: https://www.gnu.org/software/automake/manual/html_node/Programs.html#Programs
[11]: https://opensource.com/article/17/9/docbook

View File

@ -0,0 +1,225 @@
[#]: collector: (lujun9972)
[#]: translator: (0x996)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mastering user groups on Linux)
[#]: via: (https://www.networkworld.com/article/3409781/mastering-user-groups-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Mastering user groups on Linux
======
Managing user groups on Linux systems is not difficult, but the relevant commands can be more flexible than you might realize.
![Scott 97006 \(CC BY 2.0\)][1]
User groups play an important role on Linux systems. They provide an easy way for a select group of users to share files with each other, and they allow sysadmins to manage permissions more effectively, because permissions can be assigned to groups rather than to individual users one by one.
Although a group is usually created whenever a user account is added to the system, theres still a lot to learn about how groups work and how to make use of them.
**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**
### One user, one group?
Most user accounts on Linux systems are set up with the user name and group name being the same. The user "jdoe" will be given a group named "jdoe" and will be that new groups only member. As shown in this example, the users login name, user ID, and group ID are added to the **/etc/passwd** and **/etc/group** files when the account is created:
```
$ sudo useradd jdoe
$ grep jdoe /etc/passwd
jdoe:x:1066:1066:Jane Doe:/home/jdoe:/bin/sh
$ grep jdoe /etc/group
jdoe:x:1066:
```
The settings in these files allow the system to translate between the textual (jdoe) and numeric (1066) forms of the user ID: jdoe is 1066, and 1066 is jdoe.
The UID (user ID) and GID (group ID) assigned to each user are usually the same, and they are assigned sequentially. If Jane Doe in the example above was the most recently added user, the next new user will most likely be assigned a UID and GID of 1067.
### GID = UID
UIDs and GIDs can get out of sync. For example, if you add a group with the **groupadd** command without specifying a group ID, the system assigns the next available one (1067 in this case). The next user added to the system would then get 1067 as a UID but 1068 as a GID.
You can avoid this problem by specifying a smaller group ID when you add a group, rather than accepting the default. In the following command, we add a group and provide a GID that is below the range of values used for user accounts.
```
$ sudo groupadd -g 500 devops
```
You can also specify a shared group when you create an account, if that suits you better. For example, you might want to add all of your new developers to the same DevOps group instead of giving each one their own group:
```
$ sudo useradd -g staff bennyg
$ grep bennyg /etc/passwd
bennyg:x:1064:50::/home/bennyg:/bin/sh
```
### Primary and secondary groups
There are actually two kinds of groups: primary and secondary.
The **primary group** is the one recorded in the **/etc/passwd** file, and it is configured when the account is created. When a user creates a file, the users primary group is associated with that file.
```
$ whoami
jdoe
$ grep jdoe /etc/passwd
jdoe:x:1066:1066:John Doe:/home/jdoe:/bin/bash
^
|
+-------- primary group
$ touch newfile
$ ls -l newfile
-rw-rw-r-- 1 jdoe jdoe 0 Jul 16 15:22 newfile
^
|
+-------- primary group
```
The groups a user is added to after the account already exists are **secondary groups**. Secondary group memberships show up in the **/etc/group** file.
```
$ grep devops /etc/group
devops:x:500:shs,jadep
^
|
+-------- secondary group for shs and jadep
```
The **/etc/group** file assigns names to group IDs (for example, 500 = devops) and records the members of secondary groups.
### The preferred convention
The convention of making every user a member of their own primary group, plus any number of secondary groups, allows users to more easily keep their personal files separate from the files they need to share with colleagues. When a user creates a file, members of the users other groups dont necessarily have access to it. The user has to associate the file with a secondary group using the **chgrp** command, as shown in the sketch below.
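For example, a member of the devops group from the earlier examples might hand a file over to that group like this (a sketch; the file name is arbitrary):
```
$ touch plans.txt
$ ls -l plans.txt            # group is jdoe, the creator's primary group
$ chgrp devops plans.txt     # associate the file with the shared secondary group
$ chmod g+rw plans.txt       # let group members read and write it
```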
### Theres no place like /home
One important detail when adding new accounts is that the **useradd** command does not necessarily create a home directory for the new user. If you only want home directories to be created some of the time, add the **-m** option (think of it as the "make home" option) to your **useradd** command:
```
$ sudo useradd -m -g devops -c "John Doe" jdoe2
```
The options in this command are:
  * **-m** creates the home directory and populates it with the initial (skeleton) files
  * **-g** specifies the group to assign the user to
  * **-c** adds an account description (usually the users full name)
If you want home directories to be created every time, you can change the default behavior by editing the **/etc/login.defs** file. Change or add the CREATE_HOME variable and set it to "yes":
```
$ grep CREATE_HOME /etc/login.defs
CREATE_HOME yes
```
Another approach is to set up an alias in your own account so that **useradd** always uses the **-m** option:
```
$ alias useradd='useradd -m'
```
Make sure you add the alias to your ~/.bashrc or a similar startup file to make it permanent, for example:
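A quick way to do that (assuming bash):
```
$ echo "alias useradd='useradd -m'" >> ~/.bashrc
$ source ~/.bashrc    # reload so the alias takes effect in the current shell
```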
### A closer look at /etc/login.defs
The command below lists all of the settings in the /etc/login.defs file. The **grep** commands hide comments and blank lines:
```
$ cat /etc/login.defs | grep -v "^#" | grep -v "^$"
MAIL_DIR /var/mail
FAILLOG_ENAB yes
LOG_UNKFAIL_ENAB no
LOG_OK_LOGINS no
SYSLOG_SU_ENAB yes
SYSLOG_SG_ENAB yes
FTMP_FILE /var/log/btmp
SU_NAME su
HUSHLOGIN_FILE .hushlogin
ENV_SUPATH PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV_PATH PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
TTYGROUP tty
TTYPERM 0600
ERASECHAR 0177
KILLCHAR 025
UMASK 022
PASS_MAX_DAYS 99999
PASS_MIN_DAYS 0
PASS_WARN_AGE 7
UID_MIN 1000
UID_MAX 60000
GID_MIN 1000
GID_MAX 60000
LOGIN_RETRIES 5
LOGIN_TIMEOUT 60
CHFN_RESTRICT rwh
DEFAULT_HOME yes
CREATE_HOME yes <===
USERGROUPS_ENAB yes
ENCRYPT_METHOD SHA512
```
Note that various settings in this file determine the range of user IDs, as well as password aging and other settings (such as umask).
### How to display the groups a user belongs to
Users can be members of multiple groups for a variety of reasons. Group membership gives a user access to files and directories owned by the group, and sometimes this is critical to how things work. To generate a list of the groups a user belongs to, use the **groups** command:
```
$ groups jdoe
jdoe : jdoe adm admin cdrom sudo dip plugdev lpadmin staff sambashare
```
You can list your own groups by typing "groups" with no arguments.
### How to add a user to a group
If you want to add an existing user to another group, you can do it with a command like this:
```
$ sudo usermod -a -G devops jdoe
```
You can also add a user to multiple groups at once by specifying them in a comma-separated list:
```
$ sudo usermod -a -G devops,mgrs jdoe
```
The **-a** argument means "append" and **-G** specifies the list of groups.
You can remove a user from a group by editing the **/etc/group** file and deleting the user name from the groups member list, as illustrated below (a **gpasswd** alternative is sketched after it):
```
fish:x:16:nemo,dory,shark
|
V
fish:x:16:nemo,dory
```
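Alternatively, the **gpasswd** command can drop a member without hand-editing the file; a sketch using the group above:
```
$ sudo gpasswd -d shark fish    # remove user "shark" from group "fish"
$ grep fish /etc/group
fish:x:16:nemo,dory
```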
### Wrap-up
Adding and managing user groups isnt particularly difficult, but consistency in how you configure accounts will make the job easier in the long run.
**[ Also see: [Must-know Linux commands][3] ]**
Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3409781/mastering-user-groups-on-linux.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/0x996)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/07/carrots-100801917-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world