Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2019-11-11 14:48:53 +08:00
commit a743d6d6c8
13 changed files with 1539 additions and 274 deletions

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11560-1.html)
[#]: subject: (Tuning your bash or zsh shell on Fedora Workstation and Silverblue)
[#]: via: (https://fedoramagazine.org/tuning-your-bash-or-zsh-shell-in-workstation-and-silverblue/)
[#]: author: (George Luiz Maluf https://fedoramagazine.org/author/georgelmaluf/)

View File

@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Confirmed! Microsoft Edge Will be Available on Linux)
[#]: via: (https://itsfoss.com/microsoft-edge-linux/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Confirmed! Microsoft Edge Will be Available on Linux
======
![][1]
_**Microsoft is overhauling its Edge web browser and it will be based on the open source**_ [_**Chromium**_][2] _**browser. Microsoft is also bringing the new Edge browser to desktop Linux; however, the Linux release might be a bit delayed.**_
Microsoft's Internet Explorer once dominated the browser market share, but it lost its dominance in the last decade to Google's Chrome.
> The rise and fall of [#opensource][3] web browser Mozilla Firefox. [pic.twitter.com/Co5Xj3dKIQ][4]
>
> — Abhishek Prakash (@abhishek_foss) [March 22, 2017][5]
Microsoft tried to regain its lost position by creating Edge, a brand new web browser built on EdgeHTML and the [Chakra engine][6]. It was tightly integrated with Microsoft's digital assistant [Cortana][7] and Windows 10.
However, it still could not bring the crown home, and as of today, it stands at the [fourth position in desktop browser usage share][8].
Lately, Microsoft decided to give Edge an overhaul by rebasing it on the [open source Chromium project][9]. Google's Chrome browser is also based on Chromium. [Chromium is also available as a standalone web browser][2] and some Linux distributions use it as the default web browser.
### The new Microsoft Edge web browser on Linux
After initial reluctance and uncertainties, it seems that Microsoft is finally going to bring the new Edge browser to Linux.
At Microsoft [Ignite][10], its annual developer conference, the [session on the Edge browser][11] mentions that it is coming to Linux in the future.
![Microsoft confirms that Edge is coming to Linux in future][12]
The new Edge browser will be available on 15th January 2020, but I think that the Linux release will be delayed.
### Is Microsoft Edge coming to Linux really a big deal?
What's the big deal about Microsoft Edge coming to Linux? Don't we have plenty of [web browsers available for Linux][13] already? I think it has to do with the Microsoft-Linux rivalry (if there is such a thing). If Microsoft does anything for Linux, especially desktop Linux, it becomes news.
I also think that Edge on Linux has mutual benefits for Microsoft and for Linux users. Here's why.
#### What's in it for Microsoft?
When Google launched its Chrome browser in 2008, no one thought that it would dominate the market in just a few years. But why would a search engine company put so much energy behind a free web browser?
The answer is that Google is a search engine and it wants more people using its search engine and other services so that it can earn revenue from its ad services. With Chrome, Google is the default search engine. On other browsers like Firefox and Safari, Google pays hundreds of millions of dollars to be kept as the default search engine. Without Chrome, Google would have to rely entirely on the other browsers.
Microsoft too has a search engine, named Bing. Internet Explorer and Edge use Bing as the default search engine. If Edge is used by more users, it improves the chances of bringing more users to Bing. More Bing users is something Microsoft would love to have.
#### What's in it for Linux users?
I see a couple of benefits for desktop Linux users. With Edge, you can use some Microsoft-specific products on Linux. For example, Microsoft's game streaming service [xCloud][14] may be available on the Edge browser only.
Another benefit is an improved [Netflix experience on Linux][15]. Of course, you can use Chrome or [Firefox for watching Netflix on Linux][16] but you might not be getting the full HD or ultra HD streaming.
As far as I know, the [Full HD and Ultra HD Netflix streaming is only available on Microsoft Edge][17]. This means you can Netflix and chill in HD with Edge on Linux.
_**What do you think?**_
What's your feeling about Microsoft Edge coming to Linux? Will you be using it when it is available for Linux? Do share your views in the comment section below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/microsoft-edge-linux/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/microsoft_edge_logo_transparent.png?ssl=1
[2]: https://itsfoss.com/install-chromium-ubuntu/
[3]: https://twitter.com/hashtag/opensource?src=hash&ref_src=twsrc%5Etfw
[4]: https://t.co/Co5Xj3dKIQ
[5]: https://twitter.com/abhishek_foss/status/844666818665025537?ref_src=twsrc%5Etfw
[6]: https://itsfoss.com/microsoft-chakra-core/
[7]: https://www.microsoft.com/en-in/windows/cortana
[8]: https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
[9]: https://www.chromium.org/Home
[10]: https://www.microsoft.com/en-us/ignite
[11]: https://myignite.techcommunity.microsoft.com/sessions/79341?source=sessions
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/11/Microsoft_Edge_Linux.jpg?ssl=1
[13]: https://itsfoss.com/open-source-browsers-linux/
[14]: https://www.pocket-lint.com/games/news/147429-what-is-xbox-project-xcloud-cloud-gaming-service-price-release-date-devices
[15]: https://itsfoss.com/watch-netflix-in-ubuntu-linux/
[16]: https://itsfoss.com/netflix-firefox-linux/
[17]: https://help.netflix.com/en/node/23742

View File

@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My Linux story: Learning Linux in the 90s)
[#]: via: (https://opensource.com/article/19/11/learning-linux-90s)
[#]: author: (Mike Harris https://opensource.com/users/mharris)
My Linux story: Learning Linux in the 90s
======
This is the story of how I learned Linux before the age of WiFi, when
distributions came in the form of a CD.
![Sky with clouds and grass][1]
Most people probably don't remember where they, the computing industry, or the everyday world were in 1996. But I remember that year very clearly. I was a sophomore in high school in the middle of Kansas, and it was the start of my journey into free and open source software (FOSS).
I'm getting ahead of myself here. I was interested in computers even before 1996. I was born and raised on my family's first Apple ][e, followed many years later by the IBM Personal System/2. (Yes, there were definitely some generational skips along the way.) The IBM PS/2 had a very exciting feature: a 1200 baud Hayes modem.
I don't remember how, but early on, I got the phone number of a local [BBS][2]. Once I dialed into it, I could get a list of other BBSes in the local area, and my adventure into networked computing began.
In 1995, the people [lucky enough][3] to have a home internet connection spent less than 30 minutes a month using it. That internet was nothing like our modern services that operate over satellite, fiber, CATV coax, or any version of copper lines. Most homes dialed in with a modem, which tied up their phone line. (This was also long before cellphones were pervasive, and most people had just one home phone line.) I don't think there were many independent internet service providers (ISPs) back then, although that may have depended upon where you were located, so most people got service from a handful of big names, including America Online, CompuServe, and Prodigy.
And the service you did get was very slow; even at dial-up's peak evolution of 56K, you could only expect a maximum of about 3.5 KBps (kilobytes per second). If you wanted to try Linux, downloading a 200MB to 800MB ISO image or (more realistically) a disk image set was a dedication to time, determination, and lack of phone usage.
I went with the easier route: In 1996, I ordered a "tri-Linux" CD set from a major Linux distributor. These tri-Linux disks provided three distributions; mine included Debian 1.1 (the first stable release of Debian), Red Hat Linux 3.0.3, and Slackware 3.1 (nicknamed Slackware '96). As I recall, the discs were purchased from an online store called [Linux Systems Labs][4]. The online store doesn't exist now, but in the 90s and early 00s, such distributors were common. And so were multi-disc sets of Linux. This one's from 1998 but gives you an idea of what they involved:
![A tri-linux CD set][5]
![A tri-linux CD set][6]
On a fateful day in the summer of 1996, while living in a new and relatively rural city in Kansas, I made my first attempt at installing and working with Linux. Throughout the summer of '96, I tried all three distributions on that tri-Linux CD set. They all ran beautifully on my mom's older Pentium 75MHz computer.
I ended up choosing [Slackware][7] 3.1 as my preferred distribution, probably more because of the terminal's appearance than the other, more important reasons one should consider before deciding on a distribution.
I was up and running. I was connecting to an "off-brand" ISP (a local provider in the area), dialing in on my family's second phone line (ordered to accommodate all my internet use). I was in heaven. I had a dual-boot (Microsoft Windows 95 and Slackware 3.1) computer that worked wonderfully. I was still dialing into the BBSes that I knew and loved and playing online BBS games like Trade Wars, Usurper, and Legend of the Red Dragon.
I can remember spending days upon days of time in #Linux on EFNet (IRC), helping other users answer their Linux questions and interacting with the moderation crew.
More than 20 years after taking my first swing at using the Linux OS at home, I am now entering my fifth year as a consultant for Red Hat, still using Linux (now Fedora) as my daily driver, and still on IRC helping people looking to use Linux.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/learning-linux-90s
作者:[Mike Harris][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/mharris
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-cloud.png?itok=vz0PIDDS (Sky with clouds and grass)
[2]: https://en.wikipedia.org/wiki/Bulletin_board_system
[3]: https://en.wikipedia.org/wiki/Global_Internet_usage#Internet_users
[4]: https://web.archive.org/web/19961221003003/http://lsl.com/
[5]: https://opensource.com/sites/default/files/20191026_142009.jpg (A tri-linux CD set)
[6]: https://opensource.com/sites/default/files/20191026_142020.jpg (A tri-linux CD set)
[7]: http://slackware.com

View File

@ -0,0 +1,45 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My first open source contribution: Talk about your pull request)
[#]: via: (https://opensource.com/article/19/11/first-open-source-contribution-communicate-pull-request)
[#]: author: (Galen Corey https://opensource.com/users/galenemco)
My first open source contribution: Talk about your pull request
======
I finally heard back from the project and my code was merged.
![speech bubble that says tell me more][1]
Previously, I wrote about [keeping your code relevant][2] when making a contribution to an open source project. Now, you finally click **Create pull request**. You're elated, you're done.
At first, I didn't even care whether my code would get merged or not. I had done my part. I knew I could do it. The future lit up with the many future pull requests that I would make to open source projects.
But of course, I did want my code to become a part of my chosen project, and soon I found myself googling, "How long does it take for an open source pull request to get merged?" The results weren't especially conclusive. Due to the nature of open source (the fact that anyone can participate in it), processes for maintaining projects vary widely. But I found a tweet somewhere that confidently said: "If you don't hear back in two months, you should reach out to the maintainers."
Well, two months came and went, and I heard nothing. I also did not reach out to the maintainers, since talking to people and asking them to critique your work is scary. But I wasn't overly concerned. I told myself that two months was probably an average, so I put it in the back of my mind.
At four months, there was still no response. I opted for the passive approach again. I decided not to try to get in touch with the maintainers, but my reasoning this time was more negative. I started to wonder if some of my earlier assumptions about how actively the project was maintained were wrong—maybe no one was keeping up with incoming pull requests. Or maybe they didn't look at pull requests from random people. I put the issue in the back of my mind again, this time with less hope of ever seeing a result.
I had nearly given up hope entirely and forgotten about the whole thing when, six months after I made my original pull request, I finally heard back. After making a few small changes that they requested, my code was approved and merged. My fifth mistake was giving up on my contribution when I did not hear back and failing to be communicative about my work.
Don't be afraid to communicate about your pull request. Doing so could mean something as simple as adding a comment to your issue that says, “Hey, I'm working on this!” And don't give up hope just because you don't get a response for a while. The amount of time that it takes will vary based on who is maintaining the project and how much time they have to devote to maintaining it.
This story has a happy ending. My code was merged. I hope that by sharing some parts of the experience that tripped me up on my first open source journey, I can smooth the path for some of you who want to explore open source for the first time.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/first-open-source-contribution-communicate-pull-request
作者:[Galen Corey][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/galenemco
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSCD_MPL3_520x292_FINAL.png?itok=cp6TbjVI (speech bubble that says tell me more)
[2]: https://opensource.com/article/19/10/my-first-open-source-contribution-relevant-code

View File

@ -0,0 +1,209 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How universities are using open source to attract students)
[#]: via: (https://opensource.com/article/19/11/open-source-universities)
[#]: author: (Joshua Pearce https://opensource.com/users/jmpearce)
How universities are using open source to attract students
======
Many universities have begun new initiatives to attract students that
are excited about technical freedom and open source.
![Open education][1]
Michigan Tech just launched [opensource.mtu.edu][2], a virtual one-stop free shop for all things open source on campus. According to the university's news publication, _[Tech Today][3]_:
> "With the [majority of big companies now contributing to open source projects][4] it is clearly a major trend. [All [major] supercomputers][5] (including our own supercomputer: [Superior][6]), 90% of cloud servers, 82% of smartphones, and 62% of embedded systems run on open source operating systems. More than 70% of internet of things devices also use open source software. 90% of the Fortune Global 500 pay for the open source Linux operating system from Red Hat, a company that makes billions of dollars a year for the service they provide on top of the product that can be downloaded for free."
The publication also says that "the open source hardware movement is [roughly 15 years][7] behind its software counterpart," but it appears to be catching up quickly. Given their mandate to "attract students that are excited about technical freedom and open source," many universities have started a new front in the battle for educational supremacy.
Unlike conventional warfare, this is a battle that benefits the public. The more universities share using the open source paradigm, the faster technology moves forward with all of its concomitant benefits. The resources available through [opensource.mtu.edu][2] include:
* [Thousands of free and open access articles in their Digital Commons][8].
* Free data, including housing the [Free Inactive Patent Search][9], a tool to help find inactive patents that have fallen into the public domain.
* Free open source courses like [FOSS101][10]: Essentials of Free and Open Source Software, which teaches Linux commands and the Git revision control system, or [Open source 3D printing][11], which teaches OpenSCAD, FreeCAD, Blender, Arduino, and RepRap 3D printing.
* Student organizations like the [Open Source Hardware Enterprise][12], which is dedicated to the development and availability of open source hardware, and the [Open Source Club][13], which develops open source software.
* Free software, including the [Astrophysics Source Code Library (ASCL)][14] open repository, which now lists over 2,000 codes and the [Psychology Experiment Building Language (PEBL)][15] software for psychological testing used in laboratories and by clinicians around the world.
* Free hardware, including hundreds of digitally manufactured designs and dozens of complex machines for everything from [plastic recycling systems][16] to [open source lab equipment][17].
Michigan Tech is hardly alone with major initiatives across a broad swath of academia. Open access databases like [Academia][18], [OSF preprints][19], [ResearchGate][20], [PrePrints][21], and [Science Open][22] swell with millions of free, open access, peer-reviewed articles. The Center for Open Science supports the [Open Science Framework][23], which is a "free and open source project management tool that supports researchers throughout their entire project" lifecycle, including storing Gigabytes of data:
![Open Source Framework \(OSF\) workflow.][24]
_Source: [OSF][25]_
You can choose from a wide variety of course options at other institutions as well, and are generally able to take these courses at your own pace:
* Rochester Institute of Technology students can [earn a minor in free and open source software][26] and free culture.
* Many of the world's most renowned colleges and universities offer free courses to self-learners through [OpenCourseWare (OCW)][27]. None of the courses offered through OCW award credit, though. For that, you need to pay.
* Schools like [MIT][28], the University of Notre Dame, Yale, Carnegie Mellon, Delft, Stanford, Johns Hopkins, the University of California, Berkeley, and the Open University (among many more) offer free academic content, such as syllabi, lecture notes, assignments, and examinations.
Many universities also contribute to free and open source software (FOSS) and free and open source hardware (FOSH). In fact, many universities—including American International University West Africa, Brandeis University, Indiana University, and the University of Southern Queensland—are [Open Source Initiative (OSI) Affiliates][29]. The University of Texas even has [formal policies][30] in place for contributing to open source.
### Universities using open source in higher education
In addition, the vast majority of universities use FOSS. [PortalProgramas][31] ranked Tufts University as the top higher education user of FOSS. Even more representative is [Apereo][32], which is a network of universities actively supporting the use of open source in higher education. This network includes a long list of [member institutions][33]:
* American Public University System  
* Beijing Open-mindness Technology Co., Ltd.  
* Blindside Networks  
* Boston University Questrom School of Business  
* Brigham Young University  
* Brock University  
* Brown University  
* California Community Colleges Technology Center
* California State University, Sacramento  
* Cirrus Identity  
* Claremont Colleges  
* Clark County School District  
* Duke University  
* Edalex  
* Educational Service Unit Coordinating Council  
* ELAN e.V.  
* Entornos de Formación S.L (EDF)  
* ETH Zürich  
* Gert Sibande TVET College  
* HEC Montreal  
* Hosei University  
* Hotelschool the Hague  
* IlliniCloud  
* Instructional Media & Magic  
* JISC  
* Kyoto University  
* LAMP
* Learning Experiences  
* Longsight. Inc.  
* MPL, Ltda.  
* Nagoya University  
* New York University  
* North-West University  
* Oakland University  
* OPENCOLLAB
* Oxford University  
* Pepperdine University  
* Princeton University  
* Rice University  
* Roger Williams University  
* Rutgers University  
* Sinclair Community College  
* SWITCH  
* Texas State University, San Marcos  
* Unicon  
* Universidad Politecnica de Valencia  
* Universidad Publica de Navarra  
* Universitat de Lleida  
* Universite de Rennes 1  
* Universite de Valenciennes  
* University of Amsterdam  
* University of California, Berkeley  
* University of Cape Town  
* University of Edinburgh  
* University of Illinois  
* University of Kansas  
* University of Manchester  
* University of Michigan  
* University of North Carolina, Chapel Hill  
* University of Notre Dame  
* University of South Africa UNISA  
* University of Virginia  
* University of Wisconsin-Madison  
* University of Witwatersrand  
* Western University  
* Whitman College
 
Another popular organization is [Kuali][34], which is a nonprofit that produces open source administrative software for higher education institutions. Their members include:
* Boston University
* California State University, Office of the Chancellor
* Colorado State University
* Cornell University
* Drexel University
* Indiana University
* Marist College
* Massachusetts Institute of Technology
* Michigan State University
* North-West University, South Africa
* Research Foundation of The City University of New York
* Stevens Institute of Technology
* Strathmore University
* Tufts University
* University Corporation for Atmospheric Research
* Universidad del Sagrado Corazon
* University of Arizona
* University of California, Davis
* University of California, Irvine
* University of Connecticut
* University of Hawaii
* University of Illinois
* University of Maryland, Baltimore
* University of Maryland, College Park
* University of Toronto
* West Virginia University
Didn't see your favorite university on the list? If that school has been involved in open source, please leave a comment below telling me what your school is doing in open source. If you want to see your favorite school on the list and they aren't doing much in open source, you can encourage them by sending a letter asking the program heads to:
* Institutionalize sharing their research open access in their own Digital Commons and/or use one of the many free repositories.
* Share research data on the Open Science Framework.
* Provide OCW and/or offer courses and programs specifically focused on open source.
* Start and/or expand their use of FOSS and FOSH on campus, and/or join <https://kuali.org/membership> or <https://www.apereo.org/content/apereo-membership>.
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/open-source-universities
作者:[Joshua Pearce][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jmpearce
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OER_520x292_FINAL.png?itok=DBCJ4H1s (Open education)
[2]: https://opensource.mtu.edu/
[3]: https://www.mtu.edu/ttoday/?issue=20191022
[4]: https://opensource.com/business/16/5/2016-future-open-source-survey
[5]: https://www.zdnet.com/article/supercomputers-all-linux-all-the-time/
[6]: https://hpc.mtu.edu/
[7]: https://www.mdpi.com/2411-5134/3/3/44
[8]: https://digitalcommons.mtu.edu/
[9]: https://opensource.com/article/17/1/making-us-patent-system-useful-again
[10]: https://mtu.instructure.com/courses/1147020
[11]: https://opensource.com/article/19/2/3d-printing-course
[12]: http://openhardware.eit.mtu.edu/
[13]: http://mtuopensource.club/
[14]: https://ascl.net/
[15]: http://pebl.sourceforge.net/
[16]: https://www.appropedia.org/Recyclebot
[17]: https://www.appropedia.org/Open-source_Lab
[18]: https://www.academia.edu/
[19]: https://cos.io/our-products/osf-preprints/
[20]: https://www.researchgate.net/
[21]: https://www.preprints.org/
[22]: https://www.scienceopen.com/
[23]: https://osf.io/
[24]: https://opensource.com/sites/default/files/uploads/osf_workflow_-_hero.original600_copy_0.png
[25]: https://cdn.cos.io/media/images/OSF_workflow_-_hero.original.png
[26]: http://www.rit.edu/news/story.php?id=50590
[27]: https://learn.org/articles/25_Colleges_and_Universities_Ranked_by_Their_OpenCourseWare.html
[28]: https://ocw.mit.edu/index.htm
[29]: https://opensource.org/affiliates
[30]: https://it.utexas.edu/policies/releasing-software-open-source
[31]: http://www.portalprogramas.com/en/open-source-universities-ranking/about
[32]: https://www.apereo.org/
[33]: https://www.apereo.org/content/apereo-member-organizations
[34]: https://www.kuali.org/

View File

@ -1,271 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install and Configure Nagios Core on CentOS 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/install-nagios-core-rhel-8-centos-8/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
How to Install and Configure Nagios Core on CentOS 8 / RHEL 8
======
**Nagios** is a free and open source network monitoring and alerting engine used to monitor various devices, such as network devices and servers, in a network. It supports both **Linux** and **Windows OS** and provides an intuitive web interface that allows you to easily monitor network resources. When professionally configured, it can alert you via email in the event a server or a network device goes down or malfunctions. In this topic, we shed light on how you can install and configure Nagios Core on **RHEL 8** / **CentOS 8**.
[![Install-Nagios-Core-RHEL8-CentOS8][1]][2]
### Prerequisites of Nagios Core
Before we begin, perform a flight check and ensure you have the following:
* An instance of RHEL 8 / CentOS 8
* SSH access to the instance
* A fast and stable internet connection
With the above requirements in check, let's roll up our sleeves!
### Step 1: Install LAMP Stack
For Nagios to work as expected, you need to install the LAMP stack or another web hosting stack, since it is going to be accessed through a browser. To achieve this, execute the command:
```
# dnf install httpd mariadb-server php-mysqlnd php-fpm
```
![Install-LAMP-stack-CentOS8][1]
You need to ensure that the Apache web server is up and running. To do so, start and enable the Apache server using the commands:
```
# systemctl start httpd
# systemctl enable httpd
```
![Start-enable-httpd-centos8][1]
To check the status of the Apache server, run:
```
# systemctl status httpd
```
![Check-status-httpd-centos8][1]
Next, we need to start and enable the MariaDB server by running the following commands:
```
# systemctl start mariadb
# systemctl enable mariadb
```
![Start-enable-MariaDB-CentOS8][1]
To check the MariaDB status, run:
```
# systemctl status mariadb
```
![Check-MariaDB-status-CentOS8][1]
Also, you might consider hardening or securing your server and making it less susceptible to unauthorized access. To secure your server, run the command:
```
# mysql_secure_installation
```
Be sure to set a strong password for your MySQL instance. For the subsequent prompts, type **Yes** and hit **ENTER**.
![Secure-MySQL-server-CentOS8][1]
### Step 2: Install Required packages
Apart from installing the LAMP server, some additional packages are needed for the installation and proper configuration of Nagios. Therefore, install the packages as shown below:
```
# dnf install gcc glibc glibc-common wget gd gd-devel perl postfix
```
![Install-requisite-packages-CentOS8][1]
### Step 3: Create a Nagios user account
Next, we need to create a user account for the Nagios user. To achieve this, run the command:
```
# adduser nagios
# passwd nagios
```
![Create-new-user-for-Nagios][1]
Now, we need to create a group for Nagios and add the Nagios user to this group.
```
# groupadd nagiosxi
```
Now add the Nagios user to the group
```
# usermod -aG nagiosxi nagios
```
Also, add Apache user to the Nagios group
```
# usermod -aG nagiosxi apache
```
![Add-Nagios-group-user][1]
### Step 4: Download and install Nagios core
We can now proceed and install Nagios Core. The latest stable version is Nagios 4.4.5, which was released on August 19, 2019. But first, download the Nagios tarball file from its official site.
To download Nagios Core, first head to the /tmp directory:
```
# cd /tmp
```
Next, download the tarball file:
```
# wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.4.5.tar.gz
```
![Download-Nagios-CentOS8][1]
After downloading the tarball file, extract it using the command:
```
# tar -xvf nagios-4.4.5.tar.gz
```
Next, navigate to the uncompressed folder
```
# cd nagios-4.4.5
```
Run the commands below in this order
```
# ./configure --with-command-group=nagcmd
# make all
# make install
# make install-init
# make install-daemoninit
# make install-config
# make install-commandmode
# make install-exfoliation
```
To set up the Apache configuration, issue the command:
```
# make install-webconf
```
### Step 5: Configure Apache Web Server Authentication
Next, we are going to set up authentication for the user **nagiosadmin**. Please be mindful not to change the username; otherwise, you may be required to perform further configuration, which can be quite tedious.
To set up authentication run the command:
```
# htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
```
![Configure-Apache-webserver-authentication-CentOS8][1]
You will be prompted for the password of the nagiosadmin user. Enter and confirm the password as requested. This is the user that you will use to login to Nagios towards the end of this tutorial.
For the changes to come into effect, restart your web server.
```
# systemctl restart httpd
```
### Step 6: Download & Install Nagios Plugins
Plugins extend the functionality of the Nagios server. They will help you monitor various services, network devices, and applications. To download the plugin tarball file, run the command:
```
# wget https://nagios-plugins.org/download/nagios-plugins-2.2.1.tar.gz
```
Next, extract the tarball file and navigate to the uncompressed plugin folder
```
# tar -xvf nagios-plugins-2.2.1.tar.gz
# cd nagios-plugins-2.2.1
```
To install the plugins, compile the source code as shown:
```
# ./configure --with-nagios-user=nagios --with-nagios-group=nagiosxi
# make
# make install
```
### Step 7: Verify and Start Nagios
After the successful installation of the Nagios plugins, verify the Nagios configuration to ensure that all is well and there are no errors in the configuration:
```
# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
```
![Verify-Nagios-settings-CentOS8][1]
Next, start Nagios and verify its status
```
# systemctl start nagios
# systemctl status nagios
```
![Start-check-status-Nagios-CentOS8][1]
If the firewall is running on the system, allow port 80 using the following commands:
```
# firewall-cmd --permanent --add-port=80/tcp
# firewall-cmd --reload
```
### Step 8: Access Nagios dashboard via the web browser
To access Nagios, browse to your server's IP address as shown:
<http://server-ip/nagios>
A pop-up will appear prompting for the username and the password of the user we created earlier in Step 5. Enter the credentials and hit **Sign In**.
![Access-Nagios-via-web-browser-CentOS8][1]
This ushers you to the Nagios dashboard as shown below
![Nagios-dashboard-CentOS8][1]
We have now successfully installed and configured Nagios Core on CentOS 8 / RHEL 8. Your feedback is most welcome.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-nagios-core-rhel-8-centos-8/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Install-Nagios-Core-RHEL8-CentOS8.jpg

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -0,0 +1,164 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (7 Best Open Source Tools that will help in AI Technology)
[#]: via: (https://opensourceforu.com/2019/11/7-best-open-source-tools-that-will-help-in-ai-technology/)
[#]: author: (Nitin Garg https://opensourceforu.com/author/nitin-garg/)
7 Best Open Source Tools that will help in AI Technology
======
[![][1]][2]
_Artificial intelligence is an exceptional technology with a futuristic approach. In this progressive era, it is capturing the attention of multinational organizations. Some of the biggest names in the industry, such as Google, IBM, Facebook, Amazon, and Microsoft, are constantly investing in this new-age technology._
Artificial intelligence helps you anticipate business needs and take research and development to another level. This advanced technology is becoming an integral part of organizations' research and development, offering ultra-intelligent solutions. It helps you maintain accuracy and increase productivity with better results.
Open source AI tools and technologies are capturing the attention of every industry by delivering fast and accurate results. These tools help you analyse your performance while giving you a boost to generate greater revenue.
Without further ado, here we have listed some of the best open-source tools to help you understand artificial intelligence better.
**1\. TensorFlow**
TensorFlow is an open source machine learning framework used for artificial intelligence. It was developed to conduct machine learning and deep learning for research and production. TensorFlow lets developers create dataflow graph structures that describe how data moves through a network of processing nodes, where each node operates on a multidimensional array, or tensor, of data.
TensorFlow is an exceptional tool that offers countless advantages:
* Simplifies the numeric computation
* TensorFlow offers flexibility on multiple models.
* TensorFlow improves business efficiency
* Highly portable
* Automatic differentiation capabilities
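If you want to try TensorFlow quickly, a minimal install-and-verify sketch from the shell looks like this (assuming Python 3 and pip are already available; package names and versions may vary on your system):
```
# install TensorFlow into the current user's site-packages
python3 -m pip install --user tensorflow

# verify the installation by printing the library version
python3 -c "import tensorflow as tf; print(tf.__version__)"
```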
**2\. Apache SystemML**
Apache SystemML is a popular open source machine learning platform created by IBM that offers a favourable environment for working with big data. It runs efficiently on Apache Spark and automatically scales your data while determining whether your code should run on the driver or on an Apache Spark cluster. Beyond that, its features make it stand out in the industry. It offers:
* Algorithms customization
* Multiple Execution Modes
* Automatic Optimisation
It also supports deep learning, enabling developers to implement machine learning code and optimize it more effectively.
**3\. OpenNN**
OpenNN is an open source artificial intelligence neural network library for progressive analytics. It helps you develop robust models with C++ and Python and contains algorithms and utilities for machine learning tasks like forecasting and classification. It also covers regression and association, providing high performance and driving technology evolution in the industry.
It possesses numerous useful features, such as:
* Digital Assistance
* Predictive Analysis
* Fast Performance
* Virtual Personal Assistance
* Speech Recognition
* Advanced Analytics
It helps you design advanced solutions, implementing data mining methods for fruitful results.
**4\. Caffe**
Caffe (Convolutional Architecture for Fast Feature Embedding) is an open source deep learning framework. It prioritizes speed, modularity, and expressiveness. Caffe was originally developed at the University of California, Berkeley Vision and Learning Center, and is written in C++ with a Python interface. It works smoothly on Linux, macOS, and Windows.
Some of the key features of Caffe that help in AI technology:
1. Expressive Architecture
2. Extensive Code
3. Large Community
4. Active Development
5. Speedy Performance
It helps you inspire innovation and stimulate growth. Make full use of this tool to get the desired results.
**5\. Torch**
Torch is an open source machine learning library that helps you simplify complex tasks like serialization and object-oriented programming by offering multiple convenient functions. It offers the utmost flexibility and speed in machine learning projects. Torch is written in the scripting language Lua and comes with an underlying C implementation. It is used in many organizations and research labs.
Torch has countless advantages, such as:
* Fast & Effective GPU Support
* Linear algebra Routines
* Support for iOS & Android Platform
* Numeric Optimization Routine
* N-dimensional arrays
**6\. Accord .NET**
Accord .NET is one of the renowned free, open source AI development tools. It is a set of libraries for audio and image processing, written in C#. From computer vision to computer audition, signal processing, and statistics applications, it helps you build everything for commercial use. It comes with a comprehensive set of sample applications for getting started quickly and an extensive range of libraries.
You can develop advanced apps with Accord .NET using attention-grabbing features like:
* Statistical Analysis
* Data Ingestions
* Adaptive
* Deep Learning
* Second-order neural network learning algorithms
* Digital Assistance & Multi-languages
* Speech recognition
**7\. Scikit-Learn**
Scikit-learn is one of the most popular open source tools for AI technology. It is a valuable library for machine learning in Python. It includes efficient tools for machine learning and statistical modelling, including classification, clustering, regression, and dimensionality reduction.
Let's look at some of Scikit-learn's features:
* Cross-validation
* Clustering and Classification
* Manifold Learning
* Machine Learning
* Virtual process Automation
* Workflow Automation
From preprocessing to model selection, Scikit-learn helps you take care of everything. It simplifies the complete pipeline from data mining to data analysis.
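Similarly, a quick way to try Scikit-learn is to install and verify it from the shell (again assuming Python 3 and pip are available):
```
# install scikit-learn for the current user
python3 -m pip install --user scikit-learn

# confirm the library imports and print its version
python3 -c "import sklearn; print(sklearn.__version__)"
```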
**Final Thought**
These are some of the popular open source AI tools that provide a comprehensive range of features. Before developing a new-age application, select one of these tools and work accordingly. They provide advanced artificial intelligence solutions that keep recent trends in mind.
Artificial intelligence is used globally and is marking its presence all around the world. With applications like Amazon Alexa and Siri, AI is providing customers with the ultimate user experience. It offers significant benefits across industries and captures users' attention. In industries like healthcare, banking, finance, and e-commerce, artificial intelligence is contributing to growth and productivity while saving a lot of time and effort.
Select any one of these open source tools for a better user experience and impressive results. It will help you grow and get better results in terms of quality and security.
![Avatar][3]
[Nitin Garg][4]
The author is the CEO and co-founder of BR Softech, a [business intelligence software company][5]. He likes to share his opinions on the IT industry via blogs. His interest is in writing about the latest and most advanced IT technologies, including IoT, VR & AR app development, and web and app development services. Along with this, he also offers consultancy services for RPA, Big Data, and cyber security.
[![][6]][7]
--------------------------------------------------------------------------------
via: https://opensourceforu.com/2019/11/7-best-open-source-tools-that-will-help-in-ai-technology/
作者:[Nitin Garg][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensourceforu.com/author/nitin-garg/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2018/05/Artificial-Intelligence_EB-June-17.jpg?resize=696%2C464&ssl=1 (Artificial Intelligence_EB June 17)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2018/05/Artificial-Intelligence_EB-June-17.jpg?fit=1000%2C667&ssl=1
[3]: https://secure.gravatar.com/avatar/d4e6964b80590824b981f06a451aa9e6?s=100&r=g
[4]: https://opensourceforu.com/author/nitin-garg/
[5]: https://www.brsoftech.com/bi-consulting-services.html
[6]: https://opensourceforu.com/wp-content/uploads/2019/11/assoc.png
[7]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

View File

@ -0,0 +1,148 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to manage music tags using metaflac)
[#]: via: (https://opensource.com/article/19/11/metaflac-fix-music-tags)
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
How to manage music tags using metaflac
======
Correct music tagging errors from the command line with this powerful
open source utility.
![website design image][1]
I've been ripping CDs to my computer for a long time now. Over that time, I've used several different tools for ripping, and I have observed that each tool seems to have a different take on tagging, specifically, what metadata to save with the music data. By "observed," I mean that music players seem to sort albums in a funny order, they split tracks in one physical directory into two albums, or they create other sorts of frustrating irritations.
I've also learned that some of the tags are pretty obscure, and many music players and tag editors don't show them. Even so, they may use them for sorting or displaying music in some edge cases, like where the player separates all the music files containing tag XYZ into a different album from all the files not containing that tag.
So if the tagging applications and music players don't show the "weirdo" tags—but are somehow affected by them—what can you do?
### Metaflac to the rescue!
I have been meaning to get familiar with **[metaflac][2]**, the open source command-line metadata editor for [FLAC files][3], which is my open source music file format of choice. Not that there is anything wrong with great tag-editing software like [EasyTAG][4], but the old saying "if all you have is a hammer…" comes to mind. Also, from a practical perspective, my home and office stereo music needs are met by small, dedicated servers running [Armbian][5] and [MPD][6], with the music files stored locally, running a very stripped-down, music-only headless environment, so a command-line metadata management tool would be quite useful.
The screenshot below shows the typical problem created by my long-term ripping program: Putumayo's wonderful compilation of Colombian music appears as two separate albums, one containing a single track, the other containing the remaining 11:
![Album with incorrect tags][7]
I used metaflac to generate a list of all the tags for all of the FLAC files in the directory containing those tracks:
```
rm -f tags.txt
for f in *.flac; do
        echo $f >> tags.txt
        metaflac --export-tags-to=tags.tmp "$f"
        cat tags.tmp >> tags.txt
        rm tags.tmp
done
```
I saved this as an executable shell script (see my colleague [David Both][8]'s wonderful series of columns on Bash shell scripting, [particularly the one on loops][9]). Basically, what I'm doing here is creating a file, _tags.txt_, containing the filename (the **echo** command) followed by all its tags, followed by the next filename, and so forth. Here are the first few lines of the result:
```
A Guapi.flac
TITLE=A Guapi
ARTIST=Grupo Bahia
ALBUMARTIST=Various Artists
ALBUM=Putumayo Presents: Colombia
DATE=2001
TRACKTOTAL=12
GENRE=Latin Salsa
MUSICBRAINZ_ALBUMARTISTID=89ad4ac3-39f7-470e-963a-56509c546377
MUSICBRAINZ_ALBUMID=6e096386-1655-4781-967d-f4e32defb0a3
MUSICBRAINZ_ARTISTID=2993268d-feb6-4759-b497-a3ef76936671
DISCID=900a920c
ARTISTSORT=Grupo Bahia
MUSICBRAINZ_DISCID=RwEPU0UpVVR9iMP_nJexZjc_JCc-
COMPILATION=1
MUSICBRAINZ_TRACKID=8a067685-8707-48ff-9040-6a4df4d5b0ff
ALBUMARTISTSORT=50 de Joselito, Los
Cumbia Del Caribe.flac
```
After a bit of investigation, it turns out I ripped a number of my Putumayo CDs at the same time, and whatever software I was using at the time seems to have put the MUSICBRAINZ_ tags on all but one of the files. (A bug? Probably; I see this on a half-dozen albums.) Also, with respect to the sometimes unusual sorting, note the ALBUMARTISTSORT tag moved the Spanish article "Los" to the end of the artist name, after a comma.
I used a simple **awk** script to list all the tags reported in the _tags.txt_ file:
```
awk -F= 'index($0,"=") > 0 {print $1}' tags.txt | sort -u
```
This splits all lines into fields using **=** as the field separator and prints the first field of lines containing an equals sign. The results are passed through sort with the **-u** flag, which eliminates all duplication in the output (see my colleague Seth Kenlon's great [article on the **sort** utility][10]). For this specific _tags.txt_ file, the output is:
```
ALBUM
ALBUMARTIST
ALBUMARTISTSORT
ARTIST
ARTISTSORT
COMPILATION
DATE
DISCID
GENRE
MUSICBRAINZ_ALBUMARTISTID
MUSICBRAINZ_ALBUMID
MUSICBRAINZ_ARTISTID
MUSICBRAINZ_DISCID
MUSICBRAINZ_TRACKID
TITLE
TRACKTOTAL
```
Sleuthing around a bit, I found that the MUSICBRAINZ_ tags appear on all but one FLAC file, so I used the metaflac command to delete those tags:
```
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_ALBUMARTISTID "$f"; done
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_ALBUMID "$f"; done
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_ARTISTID "$f"; done
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_DISCID "$f"; done
for f in *.flac; do metaflac --remove-tag MUSICBRAINZ_TRACKID "$f"; done
```
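Incidentally, those five passes could likely be collapsed into a single metaflac call per file; this is a sketch that assumes your metaflac build accepts repeated --remove-tag options in one invocation:
```
# sketch: remove all five MusicBrainz tags in one pass per file
for f in *.flac; do
    metaflac --remove-tag=MUSICBRAINZ_ALBUMARTISTID \
             --remove-tag=MUSICBRAINZ_ALBUMID \
             --remove-tag=MUSICBRAINZ_ARTISTID \
             --remove-tag=MUSICBRAINZ_DISCID \
             --remove-tag=MUSICBRAINZ_TRACKID "$f"
done
```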
Once that's done, I can rebuild the MPD database with my music player. Here are the results:
![Album with correct tags][11]
And, there we are—all 12 tracks together in one album.
So, yeah, I'm lovin' metaflac a whole bunch. I expect I'll be using it more often as I try to wrangle the last bits of weirdness in my music collection's music tags. It's highly recommended!
### And the music
I've been spending a few evenings listening to Odario Williams' program _After Dark_ on CBC Music. (CBC is Canada's public broadcasting corporation.) Thanks to Odario, one of the albums I've really come to enjoy is [_Songs for Cello and Voice_ by Kevin Fox][12]. Here he is, covering the Eurythmics tune "[Sweet Dreams (Are Made of This)][13]."
I bought this on CD, and now it's on my music server with its tags properly organized!
--------------------------------------------------------------------------------
via: https://opensource.com/article/19/11/metaflac-fix-music-tags
作者:[Chris Hermansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/clhermansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-design-monitor-website.png?itok=yUK7_qR0 (website design image)
[2]: https://xiph.org/flac/documentation_tools_metaflac.html
[3]: https://xiph.org/flac/index.html
[4]: https://wiki.gnome.org/Apps/EasyTAG
[5]: https://www.armbian.com/
[6]: https://www.musicpd.org/
[7]: https://opensource.com/sites/default/files/uploads/music-tags1_before.png (Album with incorrect tags)
[8]: https://opensource.com/users/dboth
[9]: https://opensource.com/article/19/10/programming-bash-loops
[10]: https://opensource.com/article/19/10/get-sorted-sort
[11]: https://opensource.com/sites/default/files/uploads/music-tags2_after.png (Album with correct tags)
[12]: https://burlingtonpac.ca/events/kevin-fox/
[13]: https://www.youtube.com/watch?v=uyN66XI1zp4

View File

@ -0,0 +1,129 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Managing software and services with Cockpit)
[#]: via: (https://fedoramagazine.org/managing-software-and-services-with-cockpit/)
[#]: author: (Shaun Assam https://fedoramagazine.org/author/sassam/)
Managing software and services with Cockpit
======
![][1]
The Cockpit series continues to focus on some of the tools users and administrators can use to perform everyday tasks within the web user interface. So far we've covered [introducing the user interface][2], [storage][3] and [network management][4], and [user accounts][5]. This article highlights how Cockpit handles software and services.
The menu options for Applications and Software Updates are available through Cockpit's PackageKit feature. To install it from the command line, run:
```
sudo dnf install cockpit-packagekit
```
For [Fedora Silverblue][6], [Fedora CoreOS][7], and other ostree-based operating systems, install the _cockpit-ostree_ package and reboot the system:
```
sudo rpm-ostree install cockpit-ostree; sudo systemctl reboot
```
### Software updates
On the main screen, Cockpit notifies the user whether the system is updated, or if any updates are available. Click the **Updates Available** link on the main screen, or **Software Updates** in the menu options, to open the updates page.
#### RPM-based updates
The top of the screen displays general information such as the number of updates and the number of security-only updates. It also shows when the system was last checked for updates, along with a button to perform the check. This button is equivalent to the command **sudo dnf check-update**.
Below is the **Available Updates** section, which lists the packages requiring updates. Furthermore, each package displays the name, version, and best of all, the severity of the update. Clicking a package in the list provides additional information such as the CVE, the Bugzilla ID, and a brief description of the update. For details about the CVE and related bugs, click their respective links.
Also, one of the best features about Software Updates is the option to only install security updates. Distinguishing which updates to perform makes it simple for those who may not need, or want, the latest and greatest software installed. Of course, one can always use [Red Hat Enterprise Linux][8] or [CentOS][9] for machines requiring long-term support.
The example below demonstrates how Cockpit applies RPM-based updates.
![][10]
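For readers who prefer the terminal, the RPM-based actions shown above map roughly onto these dnf commands (a sketch, assuming a standard Fedora or RHEL system):
```
# list available updates (what the check-for-updates button runs)
sudo dnf check-update

# apply all available updates
sudo dnf upgrade

# apply security-related updates only
sudo dnf upgrade --security
```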
#### OSTree-based updates
The popular article [What is Silverblue][11] states:
> OSTree is used by rpm-ostree, a hybrid package/image based system… It atomically replicates a base OS and allows the user to “layer” the traditional RPM on top of the base OS if needed.
Because of this setup, Cockpit uses a snapshot-like layout for these operating systems. As seen in the demo below, the top of the screen displays the repository (_fedora_), the base OS image, and a button to **Check for Updates**.
Clicking the repository name (_fedora_ in the demo below) opens the **Change Repository** screen. From here one can **Add New Repository**, or click the pencil icon to edit an existing repository. Editing provides the option to delete the repository, or **Add Another Key**. To add a new repository, enter the name and URL. Also, select whether or not to **Use trusted GPG key**.
There are three categories that provide details of its respective image: Tree, Packages, and Signature. **Tree** displays basic information such as the operating system, version of the image, how long ago it was released, and the origin of the image. **Packages** displays a list of installed packages within that image. **Signature** verifies the integrity of the image such as the author, date, RSA key ID, and status.
The current, or running, image displays a green check-mark beside it. If something happens, or an update causes an issue, click the **Roll Back and Reboot** button. This restores the system to a previous image.
![][12]
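On ostree-based systems, the equivalent terminal workflow goes through rpm-ostree; a rough sketch, assuming the standard rpm-ostree tooling on Silverblue or CoreOS:
```
# show the current and previous deployments
rpm-ostree status

# download and stage the latest base image
sudo rpm-ostree upgrade

# revert to the previous deployment if an update causes problems
sudo rpm-ostree rollback
sudo systemctl reboot
```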
### Applications
The **Applications** screen displays a list of add-ons available for Cockpit. This makes it easy to find and install the plugins required by the user. At the time of this article, some of the options include the 389 Directory Service, Fleet Commander, and Subscription Manager. The demo below shows a complete list of available Cockpit add-ons.
Also, each item displays the name, a brief description, and a button to install, or remove, the add-on. Furthermore, clicking the item displays more information (if available). To refresh the list, click the icon at the top-right corner.
![][13]
### Subscription Management
Subscription managers allow admins to attach subscriptions to the machine. Even more, subscriptions give admins control over user access to content and packages. One example of this is the famous [Red Hat subscription model][14]. This feature works in conjunction with the **subscription-manager** command.
The Subscriptions add-on can be installed via Cockpit's Applications menu option. It can also be installed from the command line with:
```
sudo dnf install cockpit-subscriptions
```
To begin, click **Subscriptions** in the main menu. If the machine is currently unregistered, it opens the **Register System** screen. Next, select the URL. You can choose **Default**, which uses Red Hat's subscription server, or enter a **Custom URL**. Enter the **Login**, **Password**, **Activation Key**, and **Organization** ID. Finally, to complete the process, click the **Register** button.
The main page for Subscriptions shows whether the machine is registered, the System Purpose, and a list of installed products.
![][15]
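The same registration can also be done from the terminal with the **subscription-manager** tool; a minimal sketch (the username, password, organization, and activation key values below are placeholders, and the options you need depend on your subscription setup):
```
# register the system against the default subscription server
sudo subscription-manager register --username <username> --password <password>

# or register with an activation key and organization ID instead
sudo subscription-manager register --org <org_id> --activationkey <key>
```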
### Services
To start, click the **Services** menu option. Because Cockpit uses _[systemd][16]_, we get the options to view **System Services**, **Targets**, **Sockets**, **Timers**, and **Paths**. Cockpit also provides an intuitive interface to help users search for and find the service they want to configure. Services can also be filtered by their state: **All**, **Enabled**, **Disabled**, or **Static**. Below this is the list of services. Each row displays the service name, description, state, and automatic startup behavior.
For example, let's take _bluetooth.service_. Typing _bluetooth_ in the search bar automatically displays the service. Now, select the service to view its details. The page displays the status and path of the service file. It also displays information in the service file such as the requirements and conflicts. Finally, at the bottom of the page, are the logs pertaining to that service.
Also, users can quickly start and stop the service by toggling the switch beside the service name. The three-dot menu to the right of that switch expands those options to **Enable**, **Disable**, or **Mask/Unmask** the service.
To learn more about _systemd_, check out the series in the Fedora Magazine starting with [What is an init system?][17]
![][18]
In the next article we'll explore the security features available in Cockpit.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/managing-software-and-services-with-cockpit/
作者:[Shaun Assam][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/sassam/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-sw-services-816x345.jpg
[2]: https://fedoramagazine.org/cockpit-and-the-evolution-of-the-web-user-interface/
[3]: https://fedoramagazine.org/performing-storage-management-tasks-in-cockpit/
[4]: https://fedoramagazine.org/managing-network-interfaces-and-firewalld-in-cockpit/
[5]: https://fedoramagazine.org/managing-user-accounts-with-cockpit/
[6]: https://silverblue.fedoraproject.org/
[7]: https://getfedora.org/en/coreos/
[8]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux?intcmp=701f2000001OEGhAAO
[9]: https://www.centos.org/
[10]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-software-updates-rpm.gif
[11]: https://fedoramagazine.org/what-is-silverblue/
[12]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-software-updates-ostree.gif
[13]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-applications.gif
[14]: https://www.redhat.com/en/about/value-of-subscription
[15]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-subscriptions.gif
[16]: https://fedoramagazine.org/series/systemd-series/
[17]: https://fedoramagazine.org/what-is-an-init-system/
[18]: https://fedoramagazine.org/wp-content/uploads/2019/11/cockpit-services.gif


@ -0,0 +1,214 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Create Affinity and Anti-Affinity Policy in OpenStack)
[#]: via: (https://www.linuxtechi.com/create-affinity-anti-affinity-policy-openstack/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
How to Create Affinity and Anti-Affinity Policy in OpenStack
======
In organizations where **OpenStack** is used heavily, application and database teams may come up with a requirement that their application and database instances be launched either on the same **compute node** (hypervisor) or on different compute nodes.
[![OpenStack-VMs-Affinity-AntiAffinity-Policy][1]][2]
In OpenStack, this requirement is fulfilled via **server groups** with **affinity** and **anti-affinity** policies. A server group controls the affinity and anti-affinity rules used when scheduling OpenStack instances.
When virtual machines are provisioned with an affinity server group, they are all launched on the same compute node. When VMs are provisioned with an anti-affinity server group, they are all launched on different compute nodes. In this article we will demonstrate how to create OpenStack server groups with affinity and anti-affinity rules.
Let's first verify whether your OpenStack setup supports affinity and anti-affinity policies. Execute the following grep command on your controller nodes:
```
# grep -i "scheduler_default_filters" /etc/nova/nova.conf
```
The output should look something like this:
![Affinity-AntiAffinity-Filter-Nova-Conf-OpenStack][1]
As we can see, the affinity and anti-affinity filters are enabled. If they are not, add them to the "**scheduler_default_filters**" parameter in the **/etc/nova/nova.conf** file on the controller nodes.
```
# vi /etc/nova/nova.conf
………………
scheduler_default_filters=xx,xxx,xxx,xxxxx,xxxx,xxx,xxx,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,xx,xxx,xxxx,xx
………………
```
Save and exit the file.
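One caveat: on newer OpenStack releases the "scheduler_default_filters" option has been deprecated in favour of "enabled_filters" under the "[filter_scheduler]" section. If your nova.conf has no "scheduler_default_filters" line, the equivalent edit would likely look like the sketch below (filter list shortened for illustration; confirm against the documentation for your release):

```
[filter_scheduler]
enabled_filters = ...,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
```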
To bring the above changes into effect, restart the following services:
```
# systemctl restart openstack-nova-scheduler
# systemctl restart openstack-nova-conductor
```
Now let's create OpenStack server groups with affinity and anti-affinity policies.
### Server Group with Affinity Policy
To create a server group named "app" with the affinity policy, execute the following openstack command on the controller node:
**Syntax:**
# openstack server group create --policy affinity &lt;Server-Group-Name&gt;
Or
# nova server-group-create &lt;Server-Group-Name&gt; affinity
**Note:** Before executing any openstack commands, make sure you source your project credential file; in my case the project credential file is "**openrc**".
Example:
```
# source openrc
# openstack server group create --policy affinity app
```
### Server Group with Anti-Affinity Policy
To create a server group with the anti-affinity policy, execute the following openstack command on the controller node. I am assuming the server group name is "database".
**Syntax:**
# openstack server group create --policy anti-affinity &lt;Server-Group-Name&gt;
Or
# nova server-group-create &lt;Server-Group-Name&gt; anti-affinity
Example:
```
# source openrc
# openstack server group create --policy anti-affinity database
```
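If you want to double-check which policy is attached to a group you just created, you can also inspect it directly (the exact output columns vary a little between releases):

```
# openstack server group show app
# openstack server group show database
```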
### List Server Group IDs and Policies
Execute either the nova command or the openstack command to get the server group IDs and their policies:
```
# nova server-group-list | grep -Ei "Policies|database"
Or
# openstack server group list --long | grep -Ei "Policies|app|database"
```
The output will look something like this:
![Server-Group-Policies-OpenStack][1]
### [Launch Virtual Machines (VMs)][3] with Affinity Policy
Let's assume we want to launch 4 VMs with the affinity policy. Run the following "**openstack server create**" command:
**Syntax:**
# openstack server create --image &lt;img-name&gt; --flavor &lt;id-or-flavor-name&gt; --security-group &lt;security-group-name&gt; --nic net-id=&lt;network-id&gt; --hint group=&lt;Server-Group-ID&gt; --max &lt;number-of-vms&gt; &lt;VM-Name&gt;
**Example:**
```
# openstack server create --image Cirros --flavor m1.small --security-group default --nic net-id=37b9ab9a-f198-4db1-a5d6-5789b05bfb4c --hint group="a9847c7f-b7c2-4751-9c9a-03b117e704ff" --max 4 affinity-test
```
Output of the above command:
![OpenStack-Server-create-with-hint-option][1]
Let's verify whether the VMs were launched on the same compute node. Run the following command:
```
# openstack server list --long -c Name -c Status -c Host -c "Power State" | grep -i affinity-test
```
![Affinity-VMs-Status-OpenStack][1]
This confirms that our affinity policy is working fine, as all the VMs were launched on the same compute node.
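To check the placement of a single instance instead of filtering the whole list, something like the following should also work (this assumes the --max option named the instances affinity-test-1 through affinity-test-4):

```
# openstack server show affinity-test-1 -c name -c "OS-EXT-SRV-ATTR:host"
```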
Now let's test the anti-affinity policy.
### Launch Virtual Machines (VMs) with Anti-Affinity Policy
For the anti-affinity policy we will again launch 4 VMs. In the openstack server create command above, we need to substitute the anti-affinity server group ID; in our case we will use the database server group ID.
Run the following openstack command to launch 4 VMs on different compute nodes with the anti-affinity policy:
```
# openstack server create --image Cirros --flavor m1.small --security-group default --nic net-id=37b9ab9a-f198-4db1-a5d6-5789b05bfb4c --hint group="498fd41b-8a8a-497a-afd8-bc361da2d74e" --max 4 anti-affinity-test
```
Output
![Openstack-server-create-anti-affinity-hint-option][1]
Use the openstack command below to verify whether the VMs were launched on different compute nodes:
```
# openstack server list --long -c Name -c Status -c Host -c "Power State" | grep -i anti-affinity-test
```
![Anti-Affinity-VMs-Status-OpenStack][1]
The above output confirms that our anti-affinity policy is also working fine.
**Note:** The default server group quotas are 10 per tenant, which means we can launch a maximum of 10 VMs inside a server group (and create at most 10 server groups).
Use the command below to view the server group quota for a specific tenant; replace the tenant ID with the one that suits your setup:
```
# openstack quota show f6852d73eaee497a8a640757fe02b785 | grep -i server_group
| server_group_members | 10 |
| server_groups | 10 |
#
```
To update the server group quota, execute the following commands:
```
# nova quota-update --server-group-members 15 f6852d73eaee497a8a640757fe02b785
# nova quota-update --server-groups 15 f6852d73eaee497a8a640757fe02b785
```
Now re-run the openstack quota command to verify the server group quota:
```
# openstack quota show f6852d73eaee497a8a640757fe02b785 | grep -i server_group
| server_group_members | 15 |
| server_groups | 15 |
#
```
That's all; we have successfully updated the server group quota for the tenant. This concludes the article. Please don't hesitate to share it with your technical friends.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/create-affinity-anti-affinity-policy-openstack/
作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/11/OpenStack-VMs-Affinity-AntiAffinity-Policy.jpg
[3]: https://www.linuxtechi.com/create-delete-virtual-machine-command-line-openstack/


@ -0,0 +1,200 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bash Script to Monitor Disk Space Usage on Multiple Remote Linux Systems With eMail Alert)
[#]: via: (https://www.2daygeek.com/linux-bash-script-to-monitor-disk-space-usage-on-multiple-remote-linux-systems-send-email/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Bash Script to Monitor Disk Space Usage on Multiple Remote Linux Systems With eMail Alert
======
Some time ago, we wrote a **[bash script to monitor disk space usage on a Linux][1]** system with an email alert.
That script works on a single machine, and you have to put the script on the corresponding machine.
If you want to set disk space usage alerts on multiple computers at the same time, that script does not help you.
So we have written this new **[shell script][2]** to achieve this.
To do so, you need a JUMP server (centralized server) that can communicate with any other computer without a password.
This means that password-less authentication must be set as a prerequisite.
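If password-less authentication is not yet in place, a typical key-based setup from the JUMP server looks like this (the remote user and hostname below are placeholders):

```
# Generate an SSH key pair on the JUMP server (press Enter to accept the defaults)
ssh-keygen -t rsa

# Copy the public key to each remote server you want to monitor
ssh-copy-id root@server01

# Verify that the login now works without a password prompt
ssh root@server01 "df -Ph | head -3"
```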
When the prerequisite is complete, run the script on the JUMP server.
Finally add a **[cronjob][3]** to completely automate this process.
Three shell scripts are included in this article; choose the one you like.
### 1) Bash Script-1: Bash Script to Check Disk Space Usage on Multiple Remote Linux Systems and Print Output on Terminal
This **[bash script][4]** checks the disk space usage on a given remote machine and prints the output to the terminal if the system reaches the specified threshold.
In this example, we set the threshold limit to 80% for testing purposes; you can adjust this limit to suit your needs.
Also, replace our email address with yours to receive these alerts.
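The first two scripts read the list of target hosts from /opt/scripts/servers.txt, and the third uses /opt/scripts/servers-disk-usage.txt in the same format: one hostname or IP address per line. The entries below are placeholders only:

```
# cat /opt/scripts/servers.txt
server01
server02
server03
```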
```
# vi /opt/scripts/disk-usage-multiple.sh
#!/bin/sh
output1=/tmp/disk-usage.out
echo "---------------------------------------------------------------------------"
echo "HostName Filesystem Size Used Avail Use% Mounted on"
echo "---------------------------------------------------------------------------"
for server in `more /opt/scripts/servers.txt`
do
output=`ssh $server df -Ph | tail -n +2 | sed s/%//g | awk '{ if($5 > 80) print $0;}'`
echo "$server: $output" >> $output1
done
cat $output1 | grep G | column -t
rm $output1
```
Run the script file once you have added the above script to a file.
```
# sh /opt/scripts/disk-usage-multiple.sh
```
You get an output like the one below.
```
------------------------------------------------------------------------------------------------
HostName Filesystem Size Used Avail Use% Mounted on
------------------------------------------------------------------------------------------------
server01: /dev/mapper/vg_root-lv_red 5.0G 4.3G 784M 85 /var/log/httpd
server02: /dev/mapper/vg_root-lv_var 5.8G 4.5G 1.1G 81 /var
server03: /dev/mapper/vg01-LogVol01 5.7G 4.5G 1003M 82 /usr
server04: /dev/mapper/vg01-LogVol04 4.9G 3.9G 711M 85 /usr
server05: /dev/mapper/vg_root-lv_u01 74G 56G 15G 80 /u01
```
### 2) Shell Script-2: Shell Script to Monitor Disk Space Usage on Multiple Remote Linux Systems With eMail Alerts
This shell script checks the disk space usage on a given remote machine and sends the output via email as plain text once the system reaches the specified threshold.
```
# vi /opt/scripts/disk-usage-multiple-1.sh
#!/bin/sh
SUBJECT="Disk Usage Report on "`date`""
MESSAGE="/tmp/disk-usage.out"
MESSAGE1="/tmp/disk-usage-1.out"
TO="[email protected]"
echo "---------------------------------------------------------------------------------------------------" >> $MESSAGE1
echo "HostName Filesystem Size Used Avail Use% Mounted on" >> $MESSAGE1
echo "---------------------------------------------------------------------------------------------------" >> $MESSAGE1
for server in `more /opt/scripts/servers.txt`
do
output=`ssh $server df -Ph | tail -n +2 | sed s/%//g | awk '{ if($5 > 80) print $0;}'`
echo "$server: $output" >> $MESSAGE
done
cat $MESSAGE | grep G | column -t >> $MESSAGE1
mail -s "$SUBJECT" "$TO" < $MESSAGE1
rm $MESSAGE
rm $MESSAGE1
```
Run the script file once you have added the above script to a file.
```
# sh /opt/scripts/disk-usage-multiple-1.sh
```
You get an output like the one below.
```
------------------------------------------------------------------------------------------------
HostName Filesystem Size Used Avail Use% Mounted on
------------------------------------------------------------------------------------------------
server01: /dev/mapper/vg_root-lv_red 5.0G 4.3G 784M 85 /var/log/httpd
server02: /dev/mapper/vg_root-lv_var 5.8G 4.5G 1.1G 81 /var
server03: /dev/mapper/vg01-LogVol01 5.7G 4.5G 1003M 82 /usr
server04: /dev/mapper/vg01-LogVol04 4.9G 3.9G 711M 85 /usr
server05: /dev/mapper/vg_root-lv_u01 74G 56G 15G 80 /u01
```
Finally add a cronjob to automate this. It will run every 10 minutes.
```
# crontab -e
*/10 * * * * /bin/bash /opt/scripts/disk-usage-multiple-1.sh
```
### 3) Bash Script-3: Bash Script to Monitor Disk Space Usage on Multiple Remote Linux Systems With eMail Alerts
This shell script checks the disk space usage on a given remote machine and sends the output via email with a CSV attachment if the system reaches the specified threshold.
```
# vi /opt/scripts/disk-usage-multiple-2.sh
#!/bin/sh
MESSAGE="/tmp/disk-usage.out"
MESSAGE2="/tmp/disk-usage-1.csv"
echo "Server Name, Filesystem, Size, Used, Avail, Use%, Mounted on" > $MESSAGE2
for server in `more /opt/scripts/servers-disk-usage.txt`
do
output1=`ssh $server df -Ph | tail -n +2 | sed s/%//g | awk '{ if($5 > 80) print $0;}'`
echo "$server $output1" >> $MESSAGE
done
cat $MESSAGE | grep G | column -t | while read output;
do
Sname=$(echo $output | awk '{print $1}')
Fsystem=$(echo $output | awk '{print $2}')
Size=$(echo $output | awk '{print $3}')
Used=$(echo $output | awk '{print $4}')
Avail=$(echo $output | awk '{print $5}')
Use=$(echo $output | awk '{print $6}')
Mnt=$(echo $output | awk '{print $7}')
echo "$Sname,$Fsystem,$Size,$Used,$Avail,$Use,$Mnt" >> $MESSAGE2
done
echo "Disk Usage Report for `date +"%B %Y"`" | mailx -s "Disk Usage Report on `date`" -a /tmp/disk-usage-1.csv [email protected]
rm $MESSAGE
rm $MESSAGE2
```
Run the script file once you have added the above script to a file.
```
# sh /opt/scripts/disk-usage-multiple-2.sh
```
You get an output like the one below.
![][5]
Finally add a cronjob to automate this. It will run every 10 minutes.
```
# crontab -e
*/10 * * * * /bin/bash /opt/scripts/disk-usage-multiple-2.sh
```
**Note:** Because the script is scheduled to run once every 10 minutes, you will receive an email alert every 10 minutes.
If your system reaches the given limit after, say, 18 minutes, you will receive the email alert in the second cycle, i.e., after 20 minutes (the second 10-minute cycle).
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/linux-bash-script-to-monitor-disk-space-usage-on-multiple-remote-linux-systems-send-email/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and-send-email/
[2]: https://www.2daygeek.com/category/shell-script/
[3]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/
[4]: https://www.2daygeek.com/category/bash-script/
[5]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7


@ -0,0 +1,272 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install and Configure Nagios Core on CentOS 8 / RHEL 8)
[#]: via: (https://www.linuxtechi.com/install-nagios-core-rhel-8-centos-8/)
[#]: author: (James Kiarie https://www.linuxtechi.com/author/james/)
How to Install and Configure Nagios Core on CentOS 8 / RHEL 8
======
**Nagios** is a free and open source network monitoring and alerting engine used to monitor a variety of devices, such as network devices and servers in a network. It supports both **Linux** and **Windows** and provides an intuitive web interface that lets you easily monitor network resources. When professionally configured, it can send email alerts when a server or network device goes down or malfunctions. In this article, we explain how to install and configure Nagios Core on **RHEL 8** / **CentOS 8**.
[![Install-Nagios-Core-RHEL8-CentOS8][1]][2]
### Prerequisites for Nagios Core
Before we begin, check and make sure you have the following:
  * An instance of RHEL 8 / CentOS 8
  * SSH access to the instance
  * A fast and stable internet connection
With the above requirements in place, let's dive in!
### Step 1: Install LAMP
For Nagios to work as expected, you need to install LAMP or another web hosting stack, since it will be accessed from a web browser. To do so, execute the following command:
```
# dnf install httpd mariadb-server php-mysqlnd php-fpm
```
![Install-LAMP-stack-CentOS8][1]
You need to make sure the Apache web server is up and running. To do so, enable and start the Apache server using the following commands:
```
# systemctl start httpd
# systemctl enable httpd
```
![Start-enable-httpd-centos8][1]
Check the running status of the Apache server:
```
# systemctl status httpd
```
![Check-status-httpd-centos8][1]
Next, we need to enable and start the MariaDB server. Run the following commands:
```
# systemctl start mariadb
# systemctl enable mariadb
```
![Start-enable-MariaDB-CentOS8][1]
To check the status of MariaDB, run:
```
# systemctl status mariadb
```
![Check-MariaDB-status-CentOS8][1]
Also, you may want to consider hardening the server so that it is not easily vulnerable to unauthorized access. To secure the server, run the following command:
```
# mysql_secure_installation
```
Make sure you set a strong password for your MySQL instance. For the subsequent prompts, type **Yes** and press **ENTER**.
![Secure-MySQL-server-CentOS8][1]
### Step 2: Install the Required Packages
Besides the LAMP stack, a few additional packages are needed to install and properly configure Nagios. Therefore, install the packages as shown below:
```
# dnf install gcc glibc glibc-common wget gd gd-devel perl postfix
```
![Install-requisite-packages-CentOS8][1]
### Step 3: Create a Nagios User Account
Next, we need to create a user account for the Nagios user. To do so, run the following commands:
```
# adduser nagios
# passwd nagios
```
![Create-new-user-for-Nagios][1]
Now, we need to create a group for Nagios and add the Nagios user to it:
```
# groupadd nagiosxi
```
Now add the Nagios user to the group:
```
# usermod -aG nagiosxi nagios
```
Also, add the Apache user to the Nagios group:
```
# usermod -aG nagiosxi apache
```
![Add-Nagios-group-user][1]
### Step 4: Download and Install Nagios Core
Now we can proceed to install Nagios Core. The latest stable version, Nagios 4.4.5, was released on August 19, 2019. But first, download the Nagios tarball from its official website.
To download Nagios Core, first change to the tmp directory:
```
# cd /tmp
```
Next, download the tarball:
```
# wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.4.5.tar.gz
```
![Download-Nagios-CentOS8][1]
Once the tarball is downloaded, extract it with the following command:
```
# tar -xvf nagios-4.4.5.tar.gz
```
Next, change into the uncompressed folder:
```
# cd nagios-4.4.5
```
Run the following commands in this order:
```
# ./configure --with-command-group=nagiosxi
# make all
# make install
# make install-init
# make install-daemoninit
# make install-config
# make install-commandmode
# make install-exfoliation
```
To configure Apache, run the following command:
```
# make install-webconf
```
### Step 5: Configure Apache Web Server Authentication
Next, we will set up authentication for the user **nagiosadmin**. Please be careful not to change this username; otherwise, further configuration may be required, which can be tedious.
To set up authentication, run the following command:
```
# htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
```
![Configure-Apache-webserver-authentication-CentOS8][1]
You will be prompted for the password of the nagiosadmin user. Enter and confirm the password as requested. You will use this user to log in to Nagios at the end of this tutorial.
For the changes to take effect, restart the web server:
```
# systemctl restart httpd
```
### Step 6: Download and Install the Nagios Plugins
Plugins extend the functionality of the Nagios server. They help you monitor various services, network devices, and applications. To download the plugin tarball, run the following command:
```
# wget https://nagios-plugins.org/download/nagios-plugins-2.2.1.tar.gz
```
Next, extract the tarball and change into the uncompressed plugin folder:
```
# tar -xvf nagios-plugins-2.2.1.tar.gz
# cd nagios-plugins-2.2.1
```
To install the plugins, compile the source code as shown below:
```
# ./configure --with-nagios-user=nagios --with-nagios-group=nagiosxi
# make
# make install
```
### Step 7: Verify and Start Nagios
After the Nagios plugins are successfully installed, verify the Nagios configuration to make sure everything is fine and there are no errors:
```
# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
```
![Verify-Nagios-settings-CentOS8][1]
Next, start Nagios and verify its status:
```
# systemctl start nagios
# systemctl status nagios
```
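The walkthrough above only starts the service; if you also want Nagios to come back up automatically after a reboot, enabling the unit is a sensible extra step:

```
# systemctl enable nagios
```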
![Start-check-status-Nagios-CentOS8][1]
If a firewall is running on the system, allow port 80 with the following commands:
```
# firewall-cmd --permanent --add-port=80/tcp
# firewall-cmd --reload
```
### Step 8: Access the Nagios Dashboard via a Web Browser
To access Nagios, open your server's IP address in a browser, as shown below:
<http://server-ip/nagios>
A pop-up window will appear, prompting for the username and password of the user we created in step 5. Enter the credentials and click **Login**.
![Access-Nagios-via-web-browser-CentOS8][1]
This takes you to the Nagios dashboard, as shown below:
![Nagios-dashboard-CentOS8][1]
We have now successfully installed and configured Nagios Core on CentOS 8 / RHEL 8. Your feedback is welcome.
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/install-nagios-core-rhel-8-centos-8/
作者:[James Kiarie][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.linuxtechi.com/author/james/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: https://www.linuxtechi.com/wp-content/uploads/2019/10/Install-Nagios-Core-RHEL8-CentOS8.jpg