
Merge pull request from lujun9972/www.linuxjournal.com

remove www.linuxjournal.com
Ezio 2018-02-22 19:16:49 +08:00 committed by GitHub
commit 9d064beabe
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
10 changed files with 0 additions and 2767 deletions


@@ -1,58 +0,0 @@
Raspberry Pi Alternatives
======
A look at some of the many interesting Raspberry Pi competitors.
The phenomenon behind the Raspberry Pi computer series has been pretty amazing. It's obvious why it has become so popular for Linux projects—it's a low-cost computer that's actually quite capable for the price, and the GPIO pins allow you to use it in a number of electronics projects such that it starts to cross over into Arduino territory in some cases. Its overall popularity has spawned many different add-ons and accessories, not to mention step-by-step guides on how to use the platform. I've personally written about Raspberry Pis often in this space, and in my own home, I use one to control a beer fermentation fridge, one as my media PC, one to control my 3D printer and one as a handheld gaming device.
The popularity of the Raspberry Pi also has spawned competition, and there are all kinds of other small, low-cost, Linux-powered Raspberry Pi-like computers for sale—many of which even go so far as to add "Pi" to their names. These computers aren't just clones, however. Although some share a similar form factor to the Raspberry Pi, and many also copy the GPIO pinouts, in many cases, these other computers offer features unavailable in a traditional Raspberry Pi. Some boards offer SATA, Wi-Fi or Gigabit networking; others offer USB3, and still others offer higher-performance CPUs or more RAM. When you are choosing a low-power computer for a project or as a home server, it pays to be aware of these Raspberry Pi alternatives, as in many cases, they will perform much better. So in this article, I discuss some alternatives to Raspberry Pis that I've used personally, their pros and cons, and then provide some examples of where they work best.
### Banana Pi
I've mentioned the Banana Pi before in past articles (see "Papa's Got a Brand New NAS" in the September 2016 issue and "Banana Backups" in the September 2017 issue), and it's a great choice when you want a board with a similar form factor, similar CPU and RAM specs, and a similar price (~$30) to a Raspberry Pi but need faster I/O. The Raspberry Pi product line is used for a lot of home server projects, but it limits you to 10/100 networking and a USB2 port for additional storage. Where the Banana Pi product line really shines is in the fact that it includes both a Gigabit network port and SATA port, while still having similar GPIO expansion options and running around the same price as a Raspberry Pi.
Before I settled on an Odroid XU4 for my home NAS (more on that later), I first experimented with a cluster of Banana Pis. The idea was to attach a SATA disk to each Banana Pi and use software like Ceph or GlusterFS to create a storage cluster shared over the network. Even though any individual Banana Pi wasn't necessarily that fast, considering how cheap they are in aggregate, they should be able to perform reasonably well and allow you to expand your storage by adding another disk and another Banana Pi. In the end, I decided to go a more traditional and simpler route with a single server and software RAID, and now I use one Banana Pi as an image gallery server. I attached a 2.5" laptop SATA drive to the other and use it as a local backup server running BackupPC. It's a nice solution that takes up almost no space and little power to run.
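For the curious, here is a minimal sketch of that clustered approach with GlusterFS; the host names (bpi1, bpi2) and brick paths are hypothetical, and it assumes the SATA disk on each board is already formatted and mounted:
```
# Run once from the first board: form the trusted pool, then build a
# replicated volume from a brick directory on each board's SATA disk.
gluster peer probe bpi2
gluster volume create gv0 replica 2 \
    bpi1:/mnt/sata/brick bpi2:/mnt/sata/brick
gluster volume start gv0

# Any client on the network can then mount the aggregate volume:
mount -t glusterfs bpi1:/gv0 /mnt/shared
```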
### Orange Pi Zero
I was really excited when I first heard about the Raspberry Pi Zero project. I couldn't believe there was such a capable little computer for only $5, and I started imagining all of the cool projects I could use one for around the house. That initial excitement was dampened a bit by the fact that they sold out quickly, and just about every vendor settled into the same pattern: put standalone Raspberry Pi Zeros on backorder but have special $20 starter kits in stock that include various adapter cables, a micro SD card and a plastic case that I didn't need. More than a year after the release, the situation still remains largely the same. Although I did get one Pi Zero and used it for a cool Adafruit "Pi Grrl Zero" gaming project, I had to put the rest of my ideas on hold, because they just never seemed to be in stock when I wanted them.
The Orange Pi Zero was created by the same company that makes the entire line of Orange Pi computers that compete with the Raspberry Pi. The main thing that makes the Orange Pi Zero shine in my mind is its small, square form factor, wider than a Raspberry Pi Zero but not as long. It also includes a Wi-Fi card like the more expensive Raspberry Pi Zero W, and it runs between $6 and $9, depending on whether you opt for 256MB or 512MB of RAM. More important, they are generally in stock, so there's no need to sit on a backorder list when you have a fun project in mind.
The Orange Pi Zero boards themselves are pretty capable. Out of the box, they include a quad-core ARM CPU, Wi-Fi (as I mentioned before), along with a 10/100 network port and USB2. They also include Raspberry-Pi-compatible GPIO pins, but even more interesting is that there is a $9 "NAS" expansion board that mounts to the 13-pin header and provides extra USB2 ports, SATA and mSATA ports, along with IR, audio and video ports, which makes it about as capable as a more expensive Banana Pi board. Even without the expansion board, this would make a nice computer you could sit anywhere within range of your Wi-Fi and run any number of services. The main downside is that you are limited to composite video, so this isn't the best choice for gaming or video-based projects.
Although Orange Pi Zeros are capable boards in their own right, what makes them particularly enticing to me is that they are actually available when you want them, unlike some of the other sub-$10 boards out there. There's nothing worse than having a cool idea for a cheap home project and then having to wait for a board to come off backorder.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12261f1.jpg)
Figure 1. An Orange Pi Zero (right) and an Espressobin (left)
### Odroid XU4
When I was looking to replace my rack-mounted NAS at home, I first looked at all of the Raspberry Pi options, including Banana Pi and other alternatives, but none of them seemed to have quite enough horsepower for my needs. I needed a machine that not only offered Gigabit networking to act as a NAS, but one that had high-speed disk I/O as well. The Odroid XU4 fit the bill with its eight-core ARM CPU, 2GB RAM, Gigabit network and USB3 ports. Although it was around $75 (almost twice the price of a Raspberry Pi), it was a much more capable computer all while being small and low-power.
The entire Odroid product line is a good one to consider if you want a low-power home server but need more resources than a traditional Raspberry Pi can offer and are willing to spend a little bit extra for the privilege. In addition to a NAS, the Odroid XU4, with its more powerful CPU and extra RAM, is a good all-around server for the home. The USB3 port means you have a lot of storage options should you need them.
### Espressobin
Although the Odroid XU4 is a great home server, I can still sometimes see it get bogged down in disk and network I/O compared to a traditional higher-powered server. Some of this might be due to the chips that were selected for the board, and perhaps some of it has to do with the fact that I'm using both disk encryption and software RAID over USB3. In either case, I started looking for another option to help take a bit of the storage burden off this server, and I came across the Espressobin board.
The Espressobin is a $50 board that launched as a popular Indiegogo campaign and is now a shipping product that you can pick up in a number of places, including Amazon. Although it costs a bit more than a Raspberry Pi 3, it includes a 64-bit dual-core ARM Cortex A53 at 1.2GHz, 1-2GB of RAM (depending on the configuration), three Gigabit network ports with a built-in switch, a SATA port, a USB3 port, a mini-PCIe port, plus a number of other options, including two sets of GPIO headers and a nice built-in serial console running on the micro-USB port.
The main benefit to the Espressobin is the fact that it was designed by Marvell with chips that actually can use all of the bandwidth the board touts. On some other boards, you'll often find a SATA port hanging off a USB2 interface, or other architectural hacks that, although they let you connect a SATA disk or a Gigabit network port, don't give you the full bandwidth the spec claims. Although I intend to have my own Espressobin take over home NAS duties, it also would make a great home gateway router, general-purpose server or even a Wi-Fi access point, provided you added the right Wi-Fi card.
### Conclusion
A whole world of alternatives to Raspberry Pis exists—this list covers only some of the ones I've used myself. I hope it has encouraged you to think twice before you default to a Raspberry Pi for your next project. Although there's certainly nothing wrong with Raspberry Pis, there are several small computers that run Linux well and, in many cases, offer better hardware or other expansion options beyond the capabilities of a Raspberry Pi for a similar price.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/raspberry-pi-alternatives
Author: [Kyle Rankin][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.linuxjournal.com/users/kyle-rankin


@@ -1,91 +0,0 @@
Four Hidden Costs and Risks of Sudo Can Lead to Cybersecurity Risks and Compliance Problems on Unix and Linux Servers
======
It is always a philosophical debate as to whether to use open source software in a regulated environment. Open source software is crowd-sourced, and developers from all over the world contribute to packages that are later included in operating system distributions. In the case of sudo, a package included in many Linux distributions and designed to provide privileged access, the debate is whether it meets the requirements of an organization, and to what level it can be relied upon to deliver compliance information to auditors.
There are four hidden costs or risks that must be considered when evaluating whether sudo is meeting your organization's cybersecurity and compliance needs on its Unix and Linux systems, spanning administration, forensics and audit, business continuity, and vendor support. Although sudo is a low-cost solution, it may come at a high price in a security program, and when an organization is delivering compliance data to satisfy auditors. In this article, we will review these areas while identifying key questions that should be answered to measure acceptable levels of risk. While every organization is different, there are specific risk/cost considerations that make a strong argument for replacing sudo with a commercially supported solution.
### Administrative Costs
There are several hidden administrative costs in using sudo for Unix and Linux privilege management. For example, with sudo, you also need to run a third-party automation management system (like CFEngine or Puppet) plus third-party authentication modules on the box. And, if you plan to externalize the box at all, you're going to have to replace sudo with that supplier's version of sudo. So, you end up maintaining sudo, a third-party management system, a third-party automation system, and may have to replace it all if you want to authenticate against something external to the box. A commercial solution would help to consolidate this functionality and simplify the overall management of Unix and Linux servers.
Another complexity with sudo is that everything is local, meaning it can be extremely time-consuming to manage as environments grow. And as we all know, time is money. With sudo, you have to rely on local systems on the server to keep logs locally, rotate them, send them to an archival environment, and ensure that no one is messing with any of the other related subsystems. This can be a complex and time-consuming process. A commercial solution would combine all of this activity together, including binary pushes and retention, upgrades, logs, archival, and more.
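To make that concrete, here is a minimal sketch of the per-host routine being described, assuming a small fleet reachable over SSH; the host names and file names are hypothetical:
```
#!/bin/sh
# Push a vetted sudoers file to every server, validating it in place
# with visudo -c before activating it -- each host is touched one at
# a time, and logs, rotation and archival still remain per-host work.
HOSTS="web1 web2 db1"
for h in $HOSTS
do scp ./sudoers.vetted "root@$h:/etc/sudoers.tmp"
   ssh "root@$h" 'visudo -c -f /etc/sudoers.tmp &&
                  mv /etc/sudoers.tmp /etc/sudoers &&
                  chmod 0440 /etc/sudoers'
done
```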
Unix and Linux systems are by their very nature decentralized, so managing each host separately leads to administrative costs and inefficiencies, which in turn lead to risks. A commercial solution centralizes management and policy development across all hosts, introducing enterprise-level consistency and best practices to a privileged access management program.
### Forensics & Audit Risks
Administrative costs aside, let's look at the risks associated with not being able to produce log data for forensic investigations. Why is this a challenge for sudo? The sudo package is installed locally on individual servers, and configuration files are maintained on each server individually. There are some tools, such as Puppet or Chef, that can monitor these files for changes and replace them with known-good copies when a change is detected, but those tools only work after a change takes place. These tools usually operate on a schedule, often checking once or twice per day, so if a system is compromised, or authorization files are changed, it may be several hours before the system is restored to a known good state. The question is, what can happen in those hours?
There is currently no keystroke logging within sudo, and since any logs of sudo activity are stored locally on servers, they can be tampered with by savvy administrators. Event logs are typically collected with normal system logs, but once again, this requires additional configuration and management of these tools. When advanced users are granted administrative access on servers, it is possible that log data can be modified, or deleted, and all evidence of their activities erased with very little indication that events took place. Now, the question is, has this happened, or is it continuing to happen?
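A common partial mitigation is to ship the local log trail off-host as it is written. Here is a minimal sketch, assuming a Red Hat-style system where sudo logs through the authpriv syslog facility and rsyslog is in service; the loghost name is hypothetical:
```
# Forward sudo's syslog trail to a remote collector over TCP (@@) so
# a local root user cannot quietly erase it after the fact.
cat > /etc/rsyslog.d/50-forward-authpriv.conf <<'EOF'
authpriv.* @@loghost.example.com:514
EOF
systemctl restart rsyslog
```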
With sudo, there is no log integrity and no chain of custody on logs, meaning logs can't be non-repudiated and therefore can't be used in legal proceedings in most jurisdictions. This is a significant risk to organizations, especially in criminal prosecution, termination, or other disciplinary actions. Third-party commercial solutions' logs are tamper-proof, which is just not possible with sudo.
Large organizations typically collect a tremendous amount of data, including system logs, access information, and other system information from all their systems. This data is then sent to a SIEM for analytics and reporting. SIEM tools do not usually deliver real-time alerting when uncharacteristic events happen on systems, and configuring events is often difficult and time-consuming. For this reason, SIEM solutions are rarely relied upon for alerting within an enterprise environment. Here the question is, what is an acceptable delay from the time an event takes place until someone is alerted?
Correlating log activity with other data to determine a broader pattern of abuse is also impossible with sudo. Commercial solutions gather logs into one place with searchable indices. Some commercial solutions even correlate this log data against other sources to identify uncharacteristic behavior that could be a warning that a serious security issue is afoot. Commercial solutions therefore provide greater forensic benefits than sudo.
Another gotcha with sudo is that change management processes can't be verified. It is always a best practice to review change records, and to validate that what was performed during the change matches the implementation that was proposed. ITIL and other security frameworks require validation of change management practices. Sudo can't do this. Commercial solutions can, through reviewing session command recording history and file integrity monitoring, without revealing the underlying session data.
There is no session recording with sudo. Session logs are one of the best forensic tools available for investigating what happened on servers. It's human nature that people tend to be more cautious when they know they can be watched. Sudo doesn't provide session recordings.
Finally, there is no segregation of duties with sudo. Most security and compliance frameworks require true separation of duties, and using a tool such as sudo just "skins" over the segregation-of-duties aspect. All of these deficiencies (lack of log integrity, lack of session monitoring, no change management) introduce risk when organizations must prove compliance or investigate anomalies.
### Business Continuity Risks
Sudo is open source. There is no indemnification if there is a critical error. Also, there is no rollback with sudo, so there is always the chance that mistakes will bring an entire system down with no one to call for support. Sure, it is possible to centralize sudo through a third-party tool such as Puppet or CFEngine, but you still end up managing multiple files across multiple groups of systems manually (or managed as one huge policy). With this approach, there is a greater risk that mistakes will break every system at once. A commercial solution would have policy roll-back capability that would limit the damage done.
### Lack of Enterprise Support
Since sudo is an open-source package, there is no official service level for when packages must be updated to respond to identified security flaws or vulnerabilities. By mid-2017, two vulnerabilities with a CVSS score greater than six had already been identified in sudo (CVE Sudo Vulnerabilities). Over the past several years, a number of vulnerabilities discovered in sudo took as many as three years to patch ([CVE-2013-2776][1], [CVE-2013-2777][2], [CVE-2013-1776][3]). The question here is, what exploits have been used in the past several months or years? A commercial solution that replaces sudo would eliminate this problem.
### Ten Questions to Measure Risk in Your Unix and Linux Environment
Unix and Linux systems present high-value targets for external attackers and malicious insiders. Expect to be breached if you share accounts, provide unfettered root access, or let files and sessions go unmonitored. Gaining root or other privileged credentials makes it easy for attackers to fly under the radar and access sensitive systems and data. And as we have reviewed, sudo isn't going to help.
In balancing costs vs. an acceptable level of risk to your Unix and Linux environment, consider these 10 questions:
1. How much time are Unix/Linux admins spending just trying to keep up? Can your organization benefit from automation?
2. Are you able to keep up with the different platform and version changes to your Unix/Linux systems?
3. As you grow and more hosts are added, how much more time will admins need to keep up with policy? Is adding personnel an option?
4. What about consistency across systems? Modifying individual sudoers files with multiple admins makes that very difficult. Wouldn't systems become siloed if not consistently managed?
5. What happens when you bring in new or different Linux or Unix platforms? How will that complicate the management of the environment?
6. How critical is it for compliance or legal purposes to know whether a policy file or log has been tampered with?
7. Do you have a way to verify that the sudoers file hasn't been modified without permission?
8. How do you know what admins actually did once they became root? Do you have a command history for their activity?
9. What would it cost the business if a mission-critical Unix/Linux host goes down? With sudo, how quickly could the team troubleshoot and fix the problem?
10. Can you demonstrate to the board that you have a backup if there is a significant outage?
### Benefits of Using a Commercial Solution
Although they come at a higher cost than free open-source solutions, commercial solutions provide an effective way to mitigate the general issues related to sudo. Solutions that offer centralized management ease the pressure of monitoring and maintaining remote systems, while centralized event logging and keystroke recording are the cornerstones of audit expectations for most enterprises.
Commercial solutions usually have a regular release cycle, and can typically deliver patches in response to vulnerabilities in hours or days from the time they're reported. Commercial solutions like PowerBroker for Unix & Linux by BeyondTrust provide event logging on separate infrastructure that is inaccessible to privileged users, and this eliminates the possibility of log tampering. PowerBroker also provides strong, centralized policy controls that are managed within an infrastructure separate from the systems under management; this eliminates the possibility of rogue changes to privileged access policies in server environments. Strong policy control also moves the security posture from "respond" to "prevent," and advanced features provide the ability to integrate with other enterprise tools and conditionally alert when privileged access sessions begin or end.
### Conclusion
For organizations that are serious about incorporating a strong privileged access management program into their security program, there is no question that a commercial product delivers much better than an open source offering such as sudo. Eliminating the possibility of malicious behavior using strong controls, centralized log file collection, and centralized policy management is far better than relying on questionable, difficult to manage controls delivered within sudo. In calculating an acceptable level of risk to your tier-1 Unix and Linux systems, all of these costs and benefits must be considered.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/four-hidden-costs-and-risks-sudo-can-lead-cybersecurity-risks-and-compliance-problems-unix-a
Author: [Chad Erbe][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.linuxjournal.com/users/chad-erbe
[1]:https://www.cvedetails.com/cve/CVE-2013-2776/
[2]:https://www.cvedetails.com/cve/CVE-2013-2777/
[3]:https://www.cvedetails.com/cve/CVE-2013-1776/


@@ -1,347 +0,0 @@
Complete Guide for Using AsciiDoc in Linux
======
**Brief: This detailed guide discusses the advantages of using AsciiDoc and shows you how to install and use AsciiDoc in Linux.**
Over the years, I have used many different tools to write articles, reports or documentation. I think it all started for me with Luc Barthelet's Epistole on the Apple IIc, from the French editor Version Soft. Then I switched to GUI tools with the excellent Microsoft Word 5 for Apple Macintosh, then the less convincing (to me) StarOffice on Sparc Solaris, which was already known as OpenOffice when I switched to Linux for good. All these tools were really [word processors][1].
But I was never really convinced by [WYSIWYG][2] editors. So I investigated many different more-or-less human-readable text formats: [troff][3], [HTML][4], [RTF][5], [TeX][6]/[LaTeX][7], [XML][8] and finally [AsciiDoc][9] which is the tool I use the most today. In fact, I am using it right now to write this article!
I recount that history because, somehow, the loop has now closed. Epistole was a word processor of the text-console era. As far as I remember, there were menus, and you could use the mouse to select text -- but most of the formatting was done by adding non-intrusive tags into the text. Just as it is done with AsciiDoc. Of course, it was not the first software to do that. But it was the first I used!
![Controlling text alignment in Luc Barthelet's Epistole \(1985-Apple II\) by using commands embedded into the text][11]
### Why AsciiDoc (or any other text file format)?
I see two advantages in using text formats for writing: first, there is a clear separation between the content and the presentation. This argument is open to discussion, since some text formats like TeX or HTML require good discipline to adhere to that separation. On the other hand, you can somehow achieve some level of separation by using [templates and stylesheets][12] with WYSIWYG editors. I agree with that. But I still find presentation issues intrusive with GUI tools. Whereas, when using text formats, you can focus on the content only, without any font style or widow line disturbing you in your writing. But maybe it's just me? However, I can't count the number of times I stopped my writing just to fix some minor styling issue -- and lost my inspiration when I came back to the text. If you disagree or have a different experience, don't hesitate to contradict me using the comment section below!
Anyway, my second argument will be less subject to personal interpretation: documents based on text formats are highly interoperable. Not only can you edit them with any text editor on any platform, but you can easily manage text revisions with a tool such as [git][13] or [SVN][14], or automate text modifications using common tools such as [sed][15], [AWK][16], [Perl][17] and so on. To give you a concrete example, when using a text-based format like AsciiDoc, I need only one command to produce highly personalized mailings from a master document, whereas the same job using a WYSIWYG editor would have required a clever use of "fields" and going through several wizard screens.
### What is AsciiDoc?
Strictly speaking, AsciiDoc is a file format. It defines syntactic constructs that will help a processor understand the semantics of the various parts of your text, usually in order to produce nicely formatted output.
Even if that definition may seem abstract, it is something simple: some keywords or characters in your document have a special meaning that will change the rendering of the document. This is the exact same concept as the tags in HTML. But a key difference with AsciiDoc is that the source document remains easily human-readable.
Check [our GitHub repository][18] to compare how the same output can be produced using a few common text file formats (coffee manpage idea courtesy of <http://www.linuxjournal.com/article/1158>):
* `coffee.man` uses the venerable troff processor (based on the 1964 [RUNOFF][19] program). It's mostly used today to write [man pages][20]. You can try it after having downloaded the `coffee.*` files by typing `man ./coffee.man` at your command prompt.
* `coffee.tex` uses the LaTeX syntax (1985) to achieve mostly the same result but for a PDF output. LaTeX is a typesetting program especially well suited for scientific publications because of its ability to nicely format mathematical formulae and tables. You can produce the PDF from the LaTeX source using `pdflatex coffee.tex`
* `coffee.html` uses the HTML format (1991) to describe the page. You can directly open that file with your favorite web browser to see the result.
* `coffee.adoc`, finally, uses the AsciiDoc syntax (2002). You can produce both HTML and PDF from that file:
```
asciidoc coffee.adoc # HTML output
a2x --format pdf ./coffee.adoc # PDF output (dblatex)
a2x --fop --format pdf ./coffee.adoc # PDF output (Apache FOP)
```
Now that you've seen the result, open those four files using your favorite [text editor][21] (nano, vim, SublimeText, gedit, Atom, …) and compare the sources: chances are good that you will agree the AsciiDoc sources are easier to read -- and probably to write, too.
![Who is who? Could you guess which of these example files is written using AsciiDoc?][22]
### How to install AsciiDoc in Linux?
AsciiDoc is relatively complex to install because of its many dependencies -- complex, I mean, if you want to install it from source. For most of us, using our package manager is probably the best way:
```
apt-get install asciidoc fop
```
or the following command:
```
yum install asciidoc fop
```
(fop is only required if you need the [Apache FOP][23] backend for PDF generation -- this is the PDF backend I use myself)
More details about the installation can be found on [the official AsciiDoc website][24]. For now, all you need is a little bit of patience, since, at least on my minimal Debian system, installing AsciiDoc requires 360MB to be downloaded (mostly because of the LaTeX dependency). Which, depending on your Internet bandwidth, may give you plenty of time to read the rest of this article.
### AsciiDoc Tutorial: How to write in AsciiDoc?
![AsciiDoc tutorial for Linux][25]
I've said it several times: AsciiDoc is a human-readable text file format. So, you can write your documents using the text editor of your choice. There are even dedicated text editors. But I will not talk about them here -- simply because I don't use them. But if you are using one of them, don't hesitate to share your feedback using the comment section at the end of this article.
I do not intend to create yet another AsciiDoc syntax tutorial here: there are plenty of them already available on the web. So I will only mention the very basic syntactic constructs you will use in virtually any document. From the simple "coffee" command example quoted above, you may see:
* **titles** in AsciiDoc are identified by underlining them with `===` or `---` (depending on the title level),
* **bold** character spans are written between stars,
* and **italics** between underscores (see the short sketch just after this list).
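Putting those three constructs together, here is a minimal sketch that creates and renders a tiny document; the file name `sample.adoc` is only an illustration:
```
# Create a small AsciiDoc document using the constructs listed above.
cat > sample.adoc <<'EOF'
My Document Title
=================

A First Section
---------------
A paragraph with *bold* spans and _italic_ spans.
EOF

asciidoc sample.adoc   # renders sample.html
```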
Those are pretty common conventions, probably dating back to the pre-HTML email era. In addition, you may need two other common constructs not illustrated in my previous example: **hyperlinks** and **image** inclusion, whose syntax is pretty self-explanatory.
```
// HyperText links
link:http://dashing-kazoo.flywheelsites.com[ItsFOSS Linux Blog]
// Inline Images
image:https://itsfoss.com/wp-content/uploads/2017/06/itsfoss-text-logo.png[ItsFOSS Text Logo]
// Block Images
image::https://itsfoss.com/wp-content/uploads/2017/06/itsfoss-text-logo.png[ItsFOSS Text Logo]
```
But the AsciiDoc syntax is much richer than that. If you want more, I can point you to that nice AsciiDoc cheatsheet: <http://powerman.name/doc/asciidoc>
### How to render the final output?
I will assume here you have already written some text following the AsciiDoc format. If this is not the case, you can download [here][26] some example files copied straight out of the AsciiDoc documentation:
```
# Download the AsciiDoc User Guide source document
BASE='https://raw.githubusercontent.com/itsfoss/asciidoc-intro/master'
wget "${BASE}"/{asciidoc.txt,customers.csv}
```
Since AsciiDoc is human-readable, you can send the AsciiDoc source text directly to someone by email, and the recipient will be able to read that message without further ado. But you may want to provide some more nicely formatted output: for example, as HTML for web publication (just like I've done for this article), or as PDF for print or display usage.
In all cases, you need a processor. In fact, under the hood, you will need several processors. Because your AsciiDoc document will be transformed into various intermediate formats before producing the final output. Since several tools are used, the output of one being the input of the next one, we sometimes speak of a toolchain.
Even if I explain some inner working details here, you have to understand that most of that will be hidden from you. Unless, maybe, when you initially have to install the tools -- or if you want to fine-tune some steps of the process.
#### In practice?
For HTML output, you only need the `asciidoc` tool. For more complicated toolchains, I encourage you to use the `a2x` tool (part of the AsciiDoc distribution) that will trigger the necessary processors in order:
```
# All examples are based on the AsciiDoc User Guide source document
# HTML output
asciidoc asciidoc.txt
firefox asciidoc.html
# XHTML output
a2x --format=xhtml asciidoc.txt
# PDF output (LaTeX processor)
a2x --format=pdf asciidoc.txt
# PDF output (FOP processor)
a2x --fop --format=pdf asciidoc.txt
```
Even if it can directly produce HTML output, the core functionality of the `asciidoc` tool remains to transform the AsciiDoc document to the intermediate [DocBook][27] format. DocBook is an XML-based format commonly used for (but not limited to) technical documentation publishing. DocBook is a semantic format: it describes your document's content, but not its presentation. So formatting will be the next step of the transformation. For that, whatever the output format, the DocBook intermediate document is processed through an [XSLT][28] processor to produce either the output directly (e.g., XHTML) or another intermediate format.
This is the case when you generate a PDF document, where the DocBook document will be converted (at your choosing) either to a LaTeX intermediate representation or to [XSL-FO][29] (an XML-based language for page description). Finally, a dedicated tool will convert that representation to PDF.
The extra steps for PDF generation are notably justified by the fact that the toolchain has to handle pagination for the PDF output -- something that is not necessary for a "stream" format like HTML.
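To make the toolchain tangible, here is a hedged sketch of running the fop leg by hand rather than through `a2x`; the stylesheet path matches the Debian-style install shown later in this article and may differ on your distribution:
```
asciidoc -b docbook asciidoc.txt                   # AsciiDoc -> DocBook XML
xsltproc -o asciidoc.fo \
    /etc/asciidoc/docbook-xsl/fo.xsl asciidoc.xml  # DocBook -> XSL-FO
fop -fo asciidoc.fo -pdf asciidoc.pdf              # XSL-FO -> PDF
```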
#### dblatex or fop?
Since there are two PDF backends, the usual question is: "Which is the best?" That is something I can't answer for you.
Both processors have [pros and cons][30]. And ultimately, the choice will be a compromise between your needs and your tastes. So I encourage you to take the time to try both of them before choosing the backend you will use. If you follow the LaTeX path, [dblatex][31] will be the backend used to produce the PDF. Whereas it will be [Apache FOP][32] if you prefer using the XSL-FO intermediate format. So don't forget to take a look at the documentation of these tools to see how easy it will be to customize the output to your needs. Unless of course if you are satisfied with the default output!
### How to customize the output of AsciiDoc?
#### AsciiDoc to HTML
Out of the box, AsciiDoc produces pretty nice documents. But sooner or later, you will want to customize their appearance.
The exact changes will depend on the backend you use. For the HTML output, most changes can be done by changing the [CSS][33] stylesheet associated with the document.
For example, let's say I want to display all section headings in red. I could create the following `custom.css` file:
```
h2 {
color: red;
}
```
And process the document using the slightly modified command:
```
# Set the 'stylesheet' attribute to
# the absolute path to our custom CSS file
asciidoc -a stylesheet=$PWD/custom.css asciidoc.txt
```
You can also make changes at a finer level by attaching a role attribute to an element. This will translate into a class attribute in the generated HTML.
For example, try to modify our test document to add the role attribute to the first paragraph of the text:
```
[role="summary"]
AsciiDoc is a text document format ....
```
Then add the following rule to the `custom.css` file:
```
.summary {
font-style: italic;
}
```
Re-generate the document:
```
asciidoc -a stylesheet=$PWD/custom.css asciidoc.txt
```
![AsciiDoc HTML output with custom CSS to display the first paragraph in italics and section headings in color][34]
Et voilà: the first paragraph is now displayed in italics. With a little bit of creativity, some patience and a couple of CSS tutorials, you should be able to customize your documents as you wish.
#### AsciiDoc to PDF
Customizing the PDF output is somewhat more complex. Not from the author's perspective, since the source text remains identical, possibly using the same role attribute as above to identify the parts that need special treatment.
But you can no longer use CSS to define the formatting for PDF output. For the most common settings, there are parameters you can set from the command line. Some parameters can be used with both the dblatex and the fop backends; others are specific to each backend.
For the list of dblatex supported parameters, see <http://dblatex.sourceforge.net/doc/manual/sec-params.html>
For the list of DocBook XSL parameters, see <http://docbook.sourceforge.net/release/xsl/1.75.2/doc/param.html>
Since margin adjustment is a pretty common requirement, you may also want to take a look at that: <http://docbook.sourceforge.net/release/xsl/current/doc/fo/general.html>
While the parameter names are somewhat consistent between the two backends, the command-line arguments used to pass those values to the backends differ between dblatex and fop. So, double-check your syntax first if it apparently isn't working. But to be honest, while writing this article, I wasn't able to make the `body.font.family` parameter work with the dblatex backend. Since I usually use fop, maybe I missed something? If you have more clues about that, I will be more than happy to read your suggestions in the comment section at the end of this article!
It's worth mentioning that using non-standard fonts -- even with fop -- requires some extra work. But it's pretty well documented on the Apache website: <https://xmlgraphics.apache.org/fop/trunk/fonts.html#bulk>
```
# XSL-FO/FOP
a2x -v --format pdf \
--fop \
--xsltproc-opts='--stringparam page.margin.inner 10cm' \
--xsltproc-opts='--stringparam body.font.family Helvetica' \
--xsltproc-opts='--stringparam body.font.size 8pt' \
asciidoc.txt
# dblatex
# (body.font.family _should_ work, but, apparently, it isn't ?!?)
a2x -v --format pdf \
--dblatex-opts='--param page.margin.inner=10cm' \
--dblatex-opts='--stringparam body.font.family Helvetica' \
asciidoc.txt
```
#### Fine-grained setting for PDF generation
Global parameters are nice if you just need to adjust some pre-defined settings. But if you want to fine-tune the document (or completely change the layout), you will need some extra effort.
At the core of the DocBook processing there is [XSLT][28]. XSLT is a computer language, expressed in XML notation, that allows you to write arbitrary transformations from an XML document to … something else. XML or not.
For example, you will need to extend or modify the [DocBook XSL stylesheet][35] to produce the XSL-FO code for the new styles you may want. And if you use the dblatex backend, this may require modifying the corresponding DocBook-to-LaTeX XSLT stylesheet. In that latter case you may also need to use a custom LaTeX package. But I will not focus on that since dblatex is not the backend I use myself. I can only point you to the [official documentation][36] if you want to know more. But once again, if you're familiar with that, please share your tips and tricks in the comment section!
Even while focusing only on fop, I don't really have the room here to detail the entire procedure. So, I will just show you the changes you could use to obtain a result similar to the one obtained with a few CSS lines in the HTML output above. That is: section titles in red and a summary paragraph in italics.
The trick I use here is to create a new XSLT stylesheet, importing the original DocBook stylesheet, but overriding the attribute sets or template for the elements we want to change:
```
<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:exsl="http://exslt.org/common" exclude-result-prefixes="exsl"
xmlns:fo="http://www.w3.org/1999/XSL/Format"
version='1.0'>
<!-- Import the default DocBook stylesheet for XSL-FO -->
<xsl:import href="/etc/asciidoc/docbook-xsl/fo.xsl" />
<!--
DocBook XSL defines many attribute sets you can
use to control the output elements
-->
<xsl:attribute-set name="section.title.level1.properties">
<xsl:attribute name="color">#FF0000</xsl:attribute>
</xsl:attribute-set>
<!--
For fine-grained changes, you will need to write
or override XSLT templates just like I did it below
for 'summary' simpara (paragraphs)
-->
<xsl:template match="simpara[@role='summary']">
<!-- Capture inherited result -->
<xsl:variable name="baseresult">
<xsl:apply-imports/>
</xsl:variable>
<!-- Customize the result -->
<xsl:for-each select="exsl:node-set($baseresult)/node()">
<xsl:copy>
<xsl:copy-of select="@*"/>
<xsl:attribute name="font-style">italic</xsl:attribute>
<xsl:copy-of select="node()"/>
</xsl:copy>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
```
Then, you have to request `a2x` to use that custom XSL stylesheet to produce the output rather than the default one using the `--xsl-file` option:
```
a2x -v --format pdf \
--fop \
--xsl-file=./custom.xsl \
asciidoc.txt
```
![AsciiDoc PDF output generated from Apache FOP using a custom XSLT to display the first paragraph in italics and section headings in color][37]
With a little bit of familiarity with XSLT, the hints given here and some queries on your favorite search engine, I think you should be able to start customizing the XSL-FO output.
But I will not lie: some apparently simple changes in the document output may require you to spend quite some time searching through the DocBook XML and XSL-FO manuals, examining the stylesheet sources and performing a couple of tests before you finally achieve what you want.
### My opinion
Writing documents using a text format has tremendous advantages. And if you need to publish to HTML, there is not much reason for not using AsciiDoc. The syntax is clean and neat, processing is simple, and changing the presentation, if needed, mostly requires easy-to-acquire CSS skills.
And even if you don't use the HTML output directly, HTML can be used as an interchange format with many WYSIWYG applications today. As an example, this is what I've done here: I copied the HTML output of this article into the WordPress editing area, thus conserving all the formatting, without having to type anything directly into WordPress.
If you need to publish to PDF, the advantages remain the same for the writer. Things will certainly be harsher if you need to change the default layout in depth, though. In a corporate environment, that probably means hiring a document designer skilled in XSLT to produce the set of stylesheets that will suit your branding or technical requirements -- or having someone on the team acquire those skills. But once that is done, it will be a pleasure to write text with AsciiDoc, and to see those writings automatically converted to beautiful HTML pages or PDF documents!
Finally, if you find AsciiDoc either too simplistic or too complex, you may take a look at some other file formats with similar goals: [Markdown][38], [Textile][39], [reStructuredText][40] or [AsciiDoctor][41], to name a few. Even if based on concepts dating back to the early days of computing, the human-readable text format ecosystem is pretty rich. Probably richer than it was only 20 years ago. As proof, many modern [static web site generators][42] are based on them. Unfortunately, this is out of the scope of this article. So, let us know if you want to hear more about that!
--------------------------------------------------------------------------------
via: https://itsfoss.com/asciidoc-guide/
Author: [Sylvain Leroux][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://itsfoss.com/author/sylvain/
[1]:https://www.computerhope.com/jargon/w/wordssor.htm
[2]:https://en.wikipedia.org/wiki/WYSIWYG
[3]:https://en.wikipedia.org/wiki/Troff
[4]:https://en.wikipedia.org/wiki/HTML
[5]:https://en.wikipedia.org/wiki/Rich_Text_Format
[6]:https://en.wikipedia.org/wiki/TeX
[7]:https://en.wikipedia.org/wiki/LaTeX
[8]:https://en.wikipedia.org/wiki/XML
[9]:https://en.wikipedia.org/wiki/AsciiDoc
[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09//epistole-manual-command-example-version-soft-luc-barthelet-1985.png
[12]:https://wiki.openoffice.org/wiki/Documentation/OOo3_User_Guides/Getting_Started/Templates_and_Styles
[13]:https://en.wikipedia.org/wiki/Git
[14]:https://en.wikipedia.org/wiki/Apache_Subversion
[15]:https://en.wikipedia.org/wiki/Sed
[16]:https://en.wikipedia.org/wiki/AWK
[17]:https://en.wikipedia.org/wiki/Perl
[18]:https://github.com/itsfoss/asciidoc-intro/tree/master/coffee
[19]:https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
[20]:https://en.wikipedia.org/wiki/Man_page
[21]:https://en.wikipedia.org/wiki/Text_editor
[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09//troff-latex-html-asciidoc-compare-source-code.png
[23]:https://en.wikipedia.org/wiki/Formatting_Objects_Processor
[24]:http://www.methods.co.nz/asciidoc/INSTALL.html
[25]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/asciidoc-tutorial-linux.jpg
[26]:https://raw.githubusercontent.com/itsfoss/asciidoc-intro/master
[27]:https://en.wikipedia.org/wiki/DocBook
[28]:https://en.wikipedia.org/wiki/XSLT
[29]:https://en.wikipedia.org/wiki/XSL_Formatting_Objects
[30]:http://www.methods.co.nz/asciidoc/userguide.html#_pdf_generation
[31]:http://dblatex.sourceforge.net/
[32]:https://xmlgraphics.apache.org/fop/
[33]:https://en.wikipedia.org/wiki/Cascading_Style_Sheets
[34]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09//asciidoc-html-output-custom-role-italic-paragraph-color-heading.png
[35]:http://www.sagehill.net/docbookxsl/
[36]:http://dblatex.sourceforge.net/doc/manual/sec-custom.html
[37]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/09//asciidoc-fop-output-custom-role-italic-paragraph-color-heading.png
[38]:https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
[39]:https://txstyle.org/
[40]:http://docutils.sourceforge.net/docs/user/rst/quickstart.html
[41]:http://asciidoctor.org/
[42]:https://www.smashingmagazine.com/2015/11/modern-static-website-generators-next-big-thing/


@@ -1,789 +0,0 @@
translating by lujun9972
Linux Filesystem Events with inotify
======
Triggering scripts with incron and systemd.
It is, at times, important to know when things change in the Linux OS. The uses to which systems are placed often include high-priority data that must be processed as soon as it is seen. The conventional method of finding and processing new file data is to poll for it, usually with cron. This is inefficient, and it can tax performance unreasonably if too many polling events are forked too often.
Linux has an efficient method for alerting user-space processes to changes impacting files of interest. The inotify Linux system calls were first discussed here in Linux Journal in a [2005 article by Robert Love][6], who primarily addressed the behavior of the new features from the perspective of C.
However, there also are stable shell-level utilities and new classes of monitoring dæmons for registering filesystem watches and reporting events. Linux installations using systemd also can access basic inotify functionality with path units. The inotify interface does have limitations—it can't monitor remote, network-mounted filesystems (that is, NFS); it does not report the userid involved in the event; it does not work with /proc or other pseudo-filesystems; and mmap() operations do not trigger it, among other concerns. Even with these limitations, it is a tremendously useful feature.
This article completes the work begun by Love and gives everyone who can write a Bourne shell script or set a crontab the ability to react to filesystem changes.
### The inotifywait Utility
Working under Oracle Linux 7 (or similar versions of Red Hat/CentOS/Scientific Linux), the inotify shell tools are not installed by default, but you can load them with yum:
```
# yum install inotify-tools
Loaded plugins: langpacks, ulninfo
ol7_UEKR4 | 1.2 kB 00:00
ol7_latest | 1.4 kB 00:00
Resolving Dependencies
--> Running transaction check
---> Package inotify-tools.x86_64 0:3.14-8.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================
Package Arch Version Repository Size
==============================================================
Installing:
inotify-tools x86_64 3.14-8.el7 ol7_latest 50 k
Transaction Summary
==============================================================
Install 1 Package
Total download size: 50 k
Installed size: 111 k
Is this ok [y/d/N]: y
Downloading packages:
inotify-tools-3.14-8.el7.x86_64.rpm | 50 kB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
Installing : inotify-tools-3.14-8.el7.x86_64 1/1
Verifying : inotify-tools-3.14-8.el7.x86_64 1/1
Installed:
inotify-tools.x86_64 0:3.14-8.el7
Complete!
```
The package will include two utilities (inotifywait and inotifywatch), documentation and a number of libraries. The inotifywait program is of primary interest.
Some derivatives of Red Hat 7 may not include inotify in their base repositories. If you find it missing, you can obtain it from [Fedora's EPEL repository][7], either by downloading the inotify RPM for manual installation or adding the EPEL repository to yum.
Any user on the system who can launch a shell may register watches—no special privileges are required to use the interface. This example watches the /tmp directory:
```
$ inotifywait -m /tmp
Setting up watches.
Watches established.
```
If another session on the system performs a few operations on the files in /tmp:
```
$ touch /tmp/hello
$ cp /etc/passwd /tmp
$ rm /tmp/passwd
$ touch /tmp/goodbye
$ rm /tmp/hello /tmp/goodbye
```
those changes are immediately visible to the user running inotifywait:
```
/tmp/ CREATE hello
/tmp/ OPEN hello
/tmp/ ATTRIB hello
/tmp/ CLOSE_WRITE,CLOSE hello
/tmp/ CREATE passwd
/tmp/ OPEN passwd
/tmp/ MODIFY passwd
/tmp/ CLOSE_WRITE,CLOSE passwd
/tmp/ DELETE passwd
/tmp/ CREATE goodbye
/tmp/ OPEN goodbye
/tmp/ ATTRIB goodbye
/tmp/ CLOSE_WRITE,CLOSE goodbye
/tmp/ DELETE hello
/tmp/ DELETE goodbye
```
A few relevant sections of the manual page explain what is happening:
```
$ man inotifywait | col -b | sed -n '/diagnostic/,/helpful/p'
inotifywait will output diagnostic information on standard error and
event information on standard output. The event output can be config-
ured, but by default it consists of lines of the following form:
watched_filename EVENT_NAMES event_filename
watched_filename
is the name of the file on which the event occurred. If the
file is a directory, a trailing slash is output.
EVENT_NAMES
are the names of the inotify events which occurred, separated by
commas.
event_filename
is output only when the event occurred on a directory, and in
this case the name of the file within the directory which caused
this event is output.
By default, any special characters in filenames are not escaped
in any way. This can make the output of inotifywait difficult
to parse in awk scripts or similar. The --csv and --format
options will be helpful in this case.
```
It also is possible to filter the output by registering particular events of interest with the -e option, the list of which is shown here:
| access        | create      | move_self  |
|---------------|-------------|------------|
| attrib        | delete      | moved_to   |
| close_write   | delete_self | moved_from |
| close_nowrite | modify      | open       |
| close         | move        | unmount    |
A common application is testing for the arrival of new files. Since inotify must be given the name of an existing filesystem object to watch, the directory containing the new files is provided. A trigger of interest is also easy to provide—new files should be complete and ready for processing when the close_write trigger fires. Below is an example script to watch for these events:
```
#!/bin/sh
unset IFS                          # default of space, tab and nl

# Wait for filesystem events
inotifywait -m -e close_write \
   /tmp /var/tmp /home/oracle/arch-orcl/ |
while read dir op file
do [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
      echo "Import job should start on $file ($dir $op)."

   [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
      echo Weekly backup is ready.

   [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]] &&
      su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &

   [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break

   ((step+=1))
done

echo We processed $step events.
```
There are a few problems with the script as presented—of all the available shells on Linux, only ksh93 (that is, the AT&T Korn shell) will report the "step" variable correctly at the end of the script. All the other shells will report this variable as null.
The reason for this behavior can be found in a brief explanation on the manual page for Bash: "Each command in a pipeline is executed as a separate process (i.e., in a subshell)." The MirBSD clone of the Korn shell has a slightly longer explanation:
```
# man mksh | col -b | sed -n '/The parts/,/do so/p'
The parts of a pipeline, like below, are executed in subshells. Thus,
variable assignments inside them fail. Use co-processes instead.
foo | bar | read baz # will not change $baz
foo | bar |& read -p baz # will, however, do so
```
And, the pdksh documentation in Oracle Linux 5 (from which MirBSD mksh emerged) has several more mentions of the subject:
```
General features of at&t ksh88 that are not (yet) in pdksh:
- the last command of a pipeline is not run in the parent shell
- `echo foo | read bar; echo $bar' prints foo in at&t ksh, nothing
in pdksh (ie, the read is done in a separate process in pdksh).
- in pdksh, if the last command of a pipeline is a shell builtin, it
is not executed in the parent shell, so "echo a b | read foo bar"
does not set foo and bar in the parent shell (at&t ksh will).
This may get fixed in the future, but it may take a while.
$ man pdksh | col -b | sed -n '/BTW, the/,/aware/p'
BTW, the most frequently reported bug is
echo hi | read a; echo $a # Does not print hi
I'm aware of this and there is no need to report it.
```
This behavior is easy enough to demonstrate—running the script above with the default bash shell and providing a sequence of example events:
```
$ cp /etc/passwd /tmp/newdata.txt
$ cp /etc/group /var/tmp/CLOSE_WEEK20170407.txt
$ cp /etc/passwd /tmp/SHUT
```
gives the following script output:
```
# ./inotify.sh
Setting up watches.
Watches established.
Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
Weekly backup is ready.
We processed events.
```
Examining the process list while the script is running, you'll also see two shells, one forked for the control structure:
```
$ function pps { typeset a IFS=\| ; ps ax | while read a
do case $a in *$1*|+([!0-9])) echo $a;; esac; done }
$ pps inot
PID TTY STAT TIME COMMAND
3394 pts/1 S+ 0:00 /bin/sh ./inotify.sh
3395 pts/1 S+ 0:00 inotifywait -m -e close_write /tmp /var/tmp
3396 pts/1 S+ 0:00 /bin/sh ./inotify.sh
```
As it was manipulated in a subshell, the "step" variable above was null when control flow reached the echo. Switching this from `#!/bin/sh` to `#!/bin/ksh93` will correct the problem, and only one shell process will be seen:
```
# ./inotify.ksh93
Setting up watches.
Watches established.
Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
Weekly backup is ready.
We processed 2 events.
$ pps inot
PID TTY STAT TIME COMMAND
3583 pts/1 S+ 0:00 /bin/ksh93 ./inotify.sh
3584 pts/1 S+ 0:00 inotifywait -m -e close_write /tmp /var/tmp
```
Although ksh93 behaves properly and in general handles scripts far more gracefully than all of the other Linux shells, it is rather large:
```
$ ll /bin/[bkm]+([aksh93]) /etc/alternatives/ksh
-rwxr-xr-x. 1 root root 960456 Dec 6 11:11 /bin/bash
lrwxrwxrwx. 1 root root 21 Apr 3 21:01 /bin/ksh ->
/etc/alternatives/ksh
-rwxr-xr-x. 1 root root 1518944 Aug 31 2016 /bin/ksh93
-rwxr-xr-x. 1 root root 296208 May 3 2014 /bin/mksh
lrwxrwxrwx. 1 root root 10 Apr 3 21:01 /etc/alternatives/ksh ->
/bin/ksh93
```
The mksh binary is the smallest of the Bourne implementations above (some of these shells may be missing on your system, but you can install them with yum). For a long-term monitoring process, mksh is likely the best choice for reducing both processing and memory footprint, and it does not launch multiple copies of itself when idle, assuming that a coprocess is used. Converting the script to use a Korn coprocess that is friendly to mksh is not difficult:
```
#!/bin/mksh
unset IFS # default of space, tab and nl
# Wait for filesystem events
inotifywait -m -e close_write \
   /tmp/ /var/tmp/ /home/oracle/arch-orcl/ \
   2>/dev/null |& # Launch as Korn coprocess
while read -p dir op file # Read from Korn coprocess
do [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
print "Import job should start on $file ($dir $op)."
[[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
print Weekly backup is ready.
[[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]]
&&
su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &
[[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break
((step+=1))
done
echo We processed $step events.
```
Note that the Korn and Bolsky reference on the Korn shell outlines the following requirements in a program operating as a coprocess:
> Caution: The co-process must:
>
> * Send each output message to standard output.
> * Have a Newline at the end of each message.
> * Flush its standard output whenever it writes a message.
An fflush(NULL) is found in the main processing loop of the inotifywait source, and these requirements appear to be met.
The mksh version of the script is the most reasonable compromise between efficiency and correct behavior, and I have explained it at some length here to save readers trouble and frustration—it is important to avoid control structures executing in subshells in most of the Bourne family. One hopes, however, that all of these ersatz shells will someday fix this basic flaw and implement the Korn behavior correctly.
### A Practical Application—Oracle Log Shipping
Oracle databases that are configured for hot backups produce a stream of "archived redo log files" that are used for database recovery. These are the most critical backup files that are produced in an Oracle database.
These files are numbered sequentially and are written to a log directory configured by the DBA. An inotify watch can trigger activities to compress, encrypt and/or distribute the archived logs to backup and disaster-recovery servers for safekeeping. You can configure Oracle RMAN to do most of these functions, but the OS tools are more capable, flexible and simpler to use.
There are a number of important design parameters for a script handling archived logs:
* A "critical section" must be established that allows only a single process to manipulate the archived log files at a time. Oracle will sometimes write bursts of log files, and inotify might cause the handler script to be spawned repeatedly in a short amount of time. Only one instance of the handler script can be allowed to run—any others spawned during the handler's lifetime must immediately exit. This will be achieved with a textbook application of the flock program from the util-linux package.
* The optimum compression available for production applications appears to be [lzip][1]. The author claims that the integrity of his archive format is [superior to that of many better-known utilities][2], both in compression ability and in structural robustness. The lzip binary is not in the standard repository for Oracle Linux—it is available in EPEL and is easily compiled from source.
* Note that [7-Zip][3] uses the same LZMA algorithm as lzip, and it also will perform AES encryption on the data after compression. Encryption is a desirable feature, as it will exempt a business from [breach disclosure laws][4] in most US states if the backups are lost or stolen and they contain "Protected Personal Information" (PPI), such as birthdays or Social Security Numbers. The author of lzip does have harsh things to say regarding the quality of 7-Zip archives using LZMA2, and the openssl enc program can be used to apply AES encryption after compression to lzip archives or any other type of file, as I discussed in a [previous article][5]. I'm foregoing file encryption in the script below and using lzip alone for clarity; a brief sketch of the encryption step follows this list.
* The current log number will be recorded in a dot file in the Oracle user's home directory. If a log is skipped for some reason (a rare occurrence for an Oracle database), log shipping will stop. A missing log requires an immediate and full database backup (either cold or hot)—successful recoveries of Oracle databases cannot skip logs.
* The scp program will be used to copy the log to a remote server, and it should be called repeatedly until it returns successfully.
* I'm calling the genuine '93 Korn shell for this activity, as it is the most capable scripting shell and I don't want any surprises.
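For readers who do want encryption, a minimal sketch of the compress-then-encrypt step might look like this (the passphrase file location and the example log name are hypothetical, and the cipher choice is merely illustrative):
```
# Compress, then apply AES-256; remove the cleartext archive on success
nice lzip -9q orcl_1_100.ARC
openssl enc -aes-256-cbc -salt -pass file:/etc/backup.passphrase \
    -in orcl_1_100.ARC.lz -out orcl_1_100.ARC.lz.enc \
  && rm orcl_1_100.ARC.lz
```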
Given these design parameters, this is an implementation:
```
# cat ~oracle/archutils/process_logs
#!/bin/ksh93
set -euo pipefail
IFS=$'\n\t' # http://redsymbol.net/articles/unofficial-bash-strict-mode/
(
flock -n 9 || exit 1 # Critical section: allow only one process.
ARCHDIR=~oracle/arch-${ORACLE_SID}
APREFIX=${ORACLE_SID}_1_
ASUFFIX=.ARC
CURLOG=$(<~oracle/.curlog-$ORACLE_SID)
File="${ARCHDIR}/${APREFIX}${CURLOG}${ASUFFIX}"
[[ ! -f "$File" ]] && exit
while [[ -f "$File" ]]
do ((NEXTCURLOG=CURLOG+1))
NextFile="${ARCHDIR}/${APREFIX}${NEXTCURLOG}${ASUFFIX}"
[[ ! -f "$NextFile" ]] && sleep 60 # Ensure ARCH has finished
nice /usr/local/bin/lzip -9q "$File"
until scp "${File}.lz" "yourcompany.com:~oracle/arch-$ORACLE_SID"
do sleep 5
done
CURLOG=$NEXTCURLOG
File="$NextFile"
done
echo $CURLOG > ~oracle/.curlog-$ORACLE_SID
) 9>~oracle/.processing_logs-$ORACLE_SID
```
The above script can be executed manually for testing even while the inotify handler is running, as the flock protects it.
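The non-blocking behavior of flock is easy to verify by hand. A quick sketch (the lock file name is arbitrary):
```
$ flock -n /tmp/lockdemo -c 'sleep 10' &     # first caller holds the lock
$ flock -n /tmp/lockdemo -c 'echo got it' || echo 'lock busy; second caller exits'
lock busy; second caller exits
```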
A standby server, or a DataGuard server in primitive standby mode, can apply the archived logs at regular intervals. The script below forces a 12-hour delay in log application for the recovery of dropped or damaged objects, so inotify cannot be easily used in this case—cron is a more reasonable approach for delayed file processing, and a run every 20 minutes will keep the standby at the desired recovery point:
```
# cat ~oracle/archutils/delay-lock.sh
#!/bin/ksh93
(
flock -n 9 || exit 1 # Critical section: only one process.
WINDOW=43200 # 12 hours
LOG_DEST=~oracle/arch-$ORACLE_SID
OLDLOG_DEST=$LOG_DEST-applied
function fage { print $(( $(date +%s) - $(stat -c %Y "$1") ))
} # File age in seconds - Requires GNU extended date & stat
cd $LOG_DEST
of=$(ls -t | tail -1) # Oldest file in directory
[[ -z "$of" || $(fage "$of") -lt $WINDOW ]] && exit
for x in $(ls -rt) # Order by ascending file mtime
do if [[ $(fage "$x") -ge $WINDOW ]]
then y=$(basename $x .lz) # lzip compression is optional
[[ "$y" != "$x" ]] && /usr/local/bin/lzip -dkq "$x"
$ORACLE_HOME/bin/sqlplus '/ as sysdba' > /dev/null 2>&1 <<-EOF
recover standby database;
$LOG_DEST/$y
cancel
quit
EOF
[[ "$y" != "$x" ]] && rm "$y"
mv "$x" $OLDLOG_DEST
fi
done
) 9> ~oracle/.recovering-$ORACLE_SID
```
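A crontab entry for the oracle user along these lines (the script path assumes the home directory shown above; the schedule is the 20-minute interval suggested earlier) completes the arrangement:
```
*/20 * * * * /home/oracle/archutils/delay-lock.sh
```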
I've covered these specific examples here because they introduce tools to control concurrency, which is a common issue when using inotify, and they advance a few features that increase reliability and minimize storage requirements. Hopefully enthusiastic readers will introduce many improvements to these approaches.
### The incron System
Lukas Jelinek is the author of the incron package that allows users to specify tables of inotify events that are executed by the master incrond process. Despite the reference to "cron", the package does not schedule events at regular intervals—it is a tool for filesystem events, and the cron reference is slightly misleading.
The incron package is available from [EPEL][7]. If you have installed that repository, you can load incron with yum:
```
# yum install incron
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package incron.x86_64 0:0.5.10-8.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================
Package Arch Version Repository Size
=================================================================
Installing:
incron x86_64 0.5.10-8.el7 epel 92 k
Transaction Summary
==================================================================
Install 1 Package
Total download size: 92 k
Installed size: 249 k
Is this ok [y/d/N]: y
Downloading packages:
incron-0.5.10-8.el7.x86_64.rpm | 92 kB 00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : incron-0.5.10-8.el7.x86_64 1/1
Verifying : incron-0.5.10-8.el7.x86_64 1/1
Installed:
incron.x86_64 0:0.5.10-8.el7
Complete!
```
On a systemd distribution with the appropriate service units, you can start and enable incron at boot with the following commands:
```
# systemctl start incrond
# systemctl enable incrond
Created symlink from
/etc/systemd/system/multi-user.target.wants/incrond.service
to /usr/lib/systemd/system/incrond.service.
```
In the default configuration, any user can establish incron schedules. The incrontab format uses three fields:
```
<path> <mask> <command>
```
Below is an example entry that was set with the -e option:
```
$ incrontab -e #vi session follows
$ incrontab -l
/tmp/ IN_ALL_EVENTS /home/luser/myincron.sh $@ $% $#
```
You can record a simple script and mark it with execute permission:
```
$ cat myincron.sh
#!/bin/sh
echo -e "path: $1 op: $2 \t file: $3" >> ~/op
$ chmod 755 myincron.sh
```
Then, if you repeat the original /tmp file manipulations at the start of this article, the script will record the following output:
```
$ cat ~/op
path: /tmp/ op: IN_ATTRIB file: hello
path: /tmp/ op: IN_CREATE file: hello
path: /tmp/ op: IN_OPEN file: hello
path: /tmp/ op: IN_CLOSE_WRITE file: hello
path: /tmp/ op: IN_OPEN file: passwd
path: /tmp/ op: IN_CLOSE_WRITE file: passwd
path: /tmp/ op: IN_MODIFY file: passwd
path: /tmp/ op: IN_CREATE file: passwd
path: /tmp/ op: IN_DELETE file: passwd
path: /tmp/ op: IN_CREATE file: goodbye
path: /tmp/ op: IN_ATTRIB file: goodbye
path: /tmp/ op: IN_OPEN file: goodbye
path: /tmp/ op: IN_CLOSE_WRITE file: goodbye
path: /tmp/ op: IN_DELETE file: hello
path: /tmp/ op: IN_DELETE file: goodbye
```
While the IN_CLOSE_WRITE event on a directory object is usually of greatest interest, most of the standard inotify events are available within incron, which also offers several unique amalgams:
```
$ man 5 incrontab | col -b | sed -n '/EVENT SYMBOLS/,/child process/p'
EVENT SYMBOLS
These basic event mask symbols are defined:
IN_ACCESS File was accessed (read) (*)
IN_ATTRIB Metadata changed (permissions, timestamps, extended
attributes, etc.) (*)
IN_CLOSE_WRITE File opened for writing was closed (*)
IN_CLOSE_NOWRITE File not opened for writing was closed (*)
IN_CREATE File/directory created in watched directory (*)
IN_DELETE File/directory deleted from watched directory (*)
IN_DELETE_SELF Watched file/directory was itself deleted
IN_MODIFY File was modified (*)
IN_MOVE_SELF Watched file/directory was itself moved
IN_MOVED_FROM File moved out of watched directory (*)
IN_MOVED_TO File moved into watched directory (*)
IN_OPEN File was opened (*)
When monitoring a directory, the events marked with an asterisk (*)
above can occur for files in the directory, in which case the name
field in the returned event data identifies the name of the file within
the directory.
The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above
events. Two additional convenience symbols are IN_MOVE, which is a com-
bination of IN_MOVED_FROM and IN_MOVED_TO, and IN_CLOSE, which combines
IN_CLOSE_WRITE and IN_CLOSE_NOWRITE.
The following further symbols can be specified in the mask:
IN_DONT_FOLLOW Don't dereference pathname if it is a symbolic link
IN_ONESHOT Monitor pathname for only one event
IN_ONLYDIR Only watch pathname if it is a directory
Additionally, there is a symbol which doesn't appear in the inotify sym-
bol set. It is IN_NO_LOOP. This symbol disables monitoring events until
the current one is completely handled (until its child process exits).
```
The incron system likely presents the most comprehensive interface to inotify of all the tools researched and listed here. Additional configuration options can be set in /etc/incron.conf to tweak incron's behavior for those that require a non-standard configuration.
### Path Units under systemd
When your Linux installation is running systemd as PID 1, limited inotify functionality is available through "path units", as discussed in a lighthearted [article by Paul Brown][8] at OCS-Mag.
The relevant manual page has useful information on the subject:
```
$ man systemd.path | col -b | sed -n '/Internally,/,/systems./p'
Internally, path units use the inotify(7) API to monitor file systems.
Due to that, it suffers by the same limitations as inotify, and for
example cannot be used to monitor files or directories changed by other
machines on remote NFS file systems.
```
Note that when a systemd path unit spawns a shell script, the $HOME and tilde (~) operator for the owner's home directory may not be defined. Using the tilde operator to reference another user's home directory (for example, ~nobody/) does work, even when applied to the self-same user running the script. The Oracle script above was explicit and did not reference ~ without specifying the target user, so I'm using it as an example here.
Using inotify triggers with systemd path units requires two files. The first file specifies the filesystem location of interest:
```
$ cat /etc/systemd/system/oralog.path
[Unit]
Description=Oracle Archivelog Monitoring
Documentation=http://docs.yourserver.com
[Path]
PathChanged=/home/oracle/arch-orcl/
[Install]
WantedBy=multi-user.target
```
The PathChanged parameter above roughly corresponds to the close-write event used in my previous direct inotify calls. The full collection of inotify events is not (currently) supported by systemd—it is limited to PathExists, PathChanged and PathModified, which are described in man systemd.path.
The second file is a service unit describing a program to be executed. It must have the same name, but a different extension, as the path unit:
```
$ cat /etc/systemd/system/oralog.service
[Unit]
Description=Oracle Archivelog Monitoring
Documentation=http://docs.yourserver.com
[Service]
Type=oneshot
Environment=ORACLE_SID=orcl
ExecStart=/bin/sh -c '/root/process_logs >> /tmp/plog.txt 2>&1'
```
The oneshot parameter above alerts systemd that the program that it forks is expected to exit and should not be respawned automatically—the restarts are limited to triggers from the path unit. The service configuration above also captures the handler's output in /tmp/plog.txt for logging—divert it to /dev/null if it is not needed.
Use systemctl start on the path unit to begin monitoring—a common error is using it on the service unit, which will directly run the handler only once. Enable the path unit if the monitoring should survive a reboot.
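For the example above, that means operating on the path unit, not the service:
```
# systemctl start oralog.path
# systemctl enable oralog.path
```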
Although this limited functionality may be enough for some casual uses of inotify, it is a shame that the full functionality of inotifywait and incron are not represented here. Perhaps it will come in time.
### Conclusion
Although the inotify tools are powerful, they do have limitations. To repeat them, inotify cannot monitor remote (NFS) filesystems; it cannot report the userid involved in a triggering event; it does not work with /proc or other pseudo-filesystems; mmap() operations do not trigger it; and the inotify queue can overflow resulting in lost events, among other concerns.
Even with these weaknesses, the efficiency of inotify is superior to most other approaches for immediate notifications of filesystem activity. It also is quite flexible, and although the close-write directory trigger should suffice for most usage, it has ample tools for covering special use cases.
In any event, it is productive to replace polling activity with inotify watches, and system administrators should be liberal in educating the user community that the classic crontab is not an appropriate place to check for new files. Recalcitrant users should be confined to Ultrix on a VAX until they develop sufficient appreciation for modern tools and approaches, which should result in more efficient Linux systems and happier administrators.
### Sidenote: Archiving /etc/passwd
Tracking changes to the password file involves many different types of inotify triggering events. The vipw utility commonly will make changes to a temporary file, then clobber the original with it. This can be seen when the inode number changes:
```
# ll -i /etc/passwd
199720973 -rw-r--r-- 1 root root 3928 Jul 7 12:24 /etc/passwd
# vipw
[ make changes ]
You are using shadow passwords on this system.
Would you like to edit /etc/shadow now [y/n]? n
# ll -i /etc/passwd
203784208 -rw-r--r-- 1 root root 3956 Jul 7 12:24 /etc/passwd
```
The destruction and replacement of /etc/passwd even occurs with setuid binaries called by unprivileged users:
```
$ ll -i /etc/passwd
203784196 -rw-r--r-- 1 root root 3928 Jun 29 14:55 /etc/passwd
$ chsh
Changing shell for fishecj.
Password:
New shell [/bin/bash]: /bin/csh
Shell changed.
$ ll -i /etc/passwd
199720970 -rw-r--r-- 1 root root 3927 Jul 7 12:23 /etc/passwd
```
For this reason, all inotify triggering events should be considered when tracking this file. If there is concern with an inotify queue overflow (in which events are lost), then the OPEN, ACCESS and CLOSE_NOWRITE,CLOSE triggers likely can be immediately ignored.
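Because vipw and chsh replace the file's inode (as shown above), a watch on the /etc directory is more robust than one on the file itself. A restricted watch might look like the following sketch; the exact event list is a judgment call:
```
# Watch /etc while omitting the noisy open/access/close_nowrite events
inotifywait -m -e modify -e attrib -e close_write -e moved_to \
            -e create -e delete /etc
```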
All other inotify events on /etc/passwd might run the following script to version the changes into an RCS archive and mail them to an administrator:
```
#!/bin/sh
# This script tracks changes to the /etc/passwd file from inotify.
# Uses RCS for archiving. Watch for UID zero.
PWMAILS=Charlie.Root@openbsd.org
TPDIR=~/track_passwd
cd $TPDIR
if diff -q /etc/passwd $TPDIR/passwd
then exit # they are the same
else sleep 5 # let passwd settle
diff /etc/passwd $TPDIR/passwd 2>&1 | # they are DIFFERENT
mail -s "/etc/passwd changes $(hostname -s)" "$PWMAILS"
cp -f /etc/passwd $TPDIR # copy for checkin
# "SCCS, the source motel! Programs check in and never check out!"
# -- Ken Thompson
rcs -q -l passwd # lock the archive
ci -q -m_ passwd # check in new ver
co -q passwd # drop the new copy
fi > /dev/null 2>&1
```
Here is an example email from the script for the above chsh operation:
```
-----Original Message-----
From: root [mailto:root@myhost.com]
Sent: Thursday, July 06, 2017 2:35 PM
To: Fisher, Charles J. ;
Subject: /etc/passwd changes myhost
57c57
< fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/bash
---
> fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/csh
```
Further processing on the third column of /etc/passwd might detect UID zero (a root user) or other important user classes for emergency action. This might include a rollback of the file from RCS to /etc and/or SMS messages to security contacts.
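As a sketch, a check of that third column for unexpected superuser entries (the alert text here is arbitrary) can be as simple as:
```
# Flag any UID-0 account other than root
awk -F: '$3 == 0 && $1 != "root" { print "ALERT: UID-0 account:", $1 }' /etc/passwd
```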
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/linux-filesystem-events-inotify
Author: [Charles Fisher][a]
Translator: [lujun9972](https://github.com/lujun9972)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:
[1]:http://www.nongnu.org/lzip
[2]:http://www.nongnu.org/lzip/xz_inadequate.html
[3]:http://www.7-zip.org
[4]:http://www.ncsl.org/research/telecommunications-and-information-technology/security-breach-notification-laws.aspx
[5]:http://www.linuxjournal.com/content/flat-file-encryption-openssl-and-gpg
[6]:http://www.linuxjournal.com/article/8478
[7]:https://fedoraproject.org/wiki/EPEL
[8]:http://www.ocsmag.com/2015/09/02/monitoring-file-access-for-dummies

View File

@ -1,125 +0,0 @@
Avoiding Server Disaster
======
Worried that your server will go down? You should be. Here are some disaster-planning tips for server owners.
If you own a car or a house, you almost certainly have insurance. Insurance seems like a huge waste of money. You pay it every year and make sure that you get the best possible price for the best possible coverage, and then you hope you never need to use the insurance. Insurance seems like a really bad deal—until you have a disaster and realize that had it not been for the insurance, you might have been in financial ruin.
Unfortunately, disasters and mishaps are a fact of life in the computer industry. And so, just as you pay insurance and hope never to have to use it, you also need to take time to ensure the safety and reliability of your systems—not because you want disasters to happen, or even expect them to occur, but rather because you have to.
If your website is an online brochure for your company and then goes down for a few hours or even days, it'll be embarrassing and annoying, but not financially painful. But, if your website is your business, when your site goes down, you're losing money. If that's the case, it's crucial to ensure that your server and software are not only unlikely to go down, but also easily recoverable if and when that happens.
Why am I writing about this subject? Well, let's just say that this particular problem hit close to home for me, just before I started to write this article. After years of helping clients around the world to ensure the reliability of their systems, I made the mistake of not being as thorough with my own. ("The shoemaker's children go barefoot", as the saying goes.) This means that just after launching my new online product for Python developers, a seemingly trivial upgrade turned into a disaster. The precautions I put in place, it turns out, weren't quite enough—and as I write this, I'm still putting my web server together. I'll survive, as will my server and business, but this has been a painful and important lesson—one that I'll do almost anything to avoid repeating in the future.
So in this article, I describe a number of techniques I've used to keep servers safe and sound through the years, and to reduce the chances of a complete meltdown. You can think of these techniques as insurance for your server, so that even if something does go wrong, you'll be able to recover fairly quickly.
I should note that most of the advice here assumes no redundancy in your architecture—that is, a single web server and (at most) a single database server. If you can afford to have a bunch of servers of each type, these sorts of problems tend to be much less frequent. However, that doesn't mean they go away entirely. Besides, although people like to talk about heavy-duty web applications that require massive iron in order to run, the fact is that many businesses run on small, one- and two-computer servers. Moreover, those businesses don't need more than that; the ROI (return on investment) they'll get from additional servers cannot be justified. However, the ROI from a good backup and recovery plan is huge, and thus worth the investment.
### The Parts of a Web Application
Before I can talk about disaster preparation and recovery, it's important to consider the different parts of a web application and what those various parts mean for your planning.
For many years, my website was trivially small and simple. Even if it contained some simple programs, those generally were used for sending email or for dynamically displaying different assets to visitors. The entire site consisted of some static HTML, images, JavaScript and CSS. No database or other excitement was necessary.
At the other end of the spectrum, many people have full-blown web applications, sitting on multiple servers, with one or more databases and caches, as well as HTTP servers with extensively edited configuration files.
But even when considering those two extremes, you can see that a web application consists of only a few parts:
* The application software itself.
* Static assets for that application.
* Configuration file(s) for the HTTP server(s).
* Database configuration files.
* Database schema and contents.
Assuming that you're using a high-level language, such as Python, Ruby or JavaScript, everything in this list either is a file or can be turned into one. (All databases make it possible to "dump" their contents onto disk, into a format that then can be loaded back into the database server.)
Consider a site containing only application software, static assets and configuration files. (In other words, no database is involved.) In many cases, such a site can be backed up reliably in Git. Indeed, I prefer to keep my sites in Git, backed up on a commercial hosting service, such as GitHub or Bitbucket, and then deployed using a system like Capistrano.
In other words, you develop the site on your own development machine. Whenever you are happy with a change that you've made, you commit the change to Git (on your local machine) and then do a git push to your central repository. In order to deploy your application, you then use Capistrano to do a cap deploy, which reads the data from the central repository, puts it into the appropriate place on the server's filesystem, and you're good to go.
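Assuming Capistrano is already configured for the project, the routine cycle reduces to a few commands (the branch and remote names are only placeholders):
```
$ git add -A
$ git commit -m 'Describe the change here'
$ git push origin master
$ cap deploy
```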
This system keeps you safe in a few different ways. The code itself is located in at least three locations: your development machine, the server and the repository. And those central repositories tend to be fairly reliable, if only because it's in the financial interest of the hosting company to ensure that things are reliable.
I should add that in such a case, you also should include the HTTP server's configuration files in your Git repository. Those files aren't likely to change very often, but I can tell you from experience, if you're recovering from a crisis, the last thing you want to think about is how your Apache configuration files should look. Copying those files into your Git repository will work just fine.
### Backing Up Databases
You could argue that the difference between a "website" and a "web application" is a database. Databases long have powered the back ends of many web applications and for good reason—they allow you to store and retrieve data reliably and flexibly. The power that modern open-source databases provide was unthinkable just a decade or two ago, and there's no reason to think that they'll be any less reliable in the future.
And yet, just because your database is pretty reliable doesn't mean that it won't have problems. This means you're going to want to keep a snapshot ("dump") of the database's contents around, in case the database server corrupts information, and you need to roll back to a previous version.
My favorite solution for such a problem is to dump the database on a regular basis, preferably hourly. Here's a shell script I've used, in one form or another, for creating such regular database dumps:
```
#!/bin/sh
BACKUP_ROOT="/home/database-backups/"
YEAR=`/bin/date +'%Y'`
MONTH=`/bin/date +'%m'`
DAY=`/bin/date +'%d'`
DIRECTORY="$BACKUP_ROOT/$YEAR/$MONTH/$DAY"
USERNAME=dbuser
DATABASE=dbname
HOST=localhost
PORT=3306
/bin/mkdir -p $DIRECTORY
/usr/bin/mysqldump -h $HOST --databases $DATABASE -u $USERNAME \
  | /bin/gzip --best --verbose > $DIRECTORY/$DATABASE-dump.gz
```
The above shell script starts off by defining a bunch of variables, from the directory in which I want to store the backups, to the parts of the date (stored in $YEAR, $MONTH and $DAY). This is so I can have a separate directory for each day of the month. I could, of course, go further and have separate directories for each hour, but I've found that I rarely need more than one backup from a day.
Once I have defined those variables, I then use the mkdir command to create a new directory. The -p option tells mkdir that if necessary, it should create all of the directories it needs such that the entire path will exist.
Next, I run the database's "dump" command. In this particular case, I'm using MySQL, so I'm using the mysqldump command. The output from this command is a stream of SQL that can be used to re-create the database. I thus take the output from mysqldump and pipe it into gzip, which compresses the output file. The resulting dumpfile is then placed, in compressed form, inside the daily backup directory.
Depending on the size of your database and the amount of disk space you have on hand, you'll have to decide just how often you want to run dumps and how often you want to clean out old ones. I know from experience that dumping every hour can cause some load problems. On one virtual machine I've used, the overall administration team was unhappy that I was dumping and compressing every hour, which they saw as an unnecessary use of system resources.
If you're worried your system will run out of disk space, you might well want to run a space-checking program that'll alert you when the filesystem is low on free space. In addition, you can run a cron job that uses find to erase all dumpfiles from before a certain cutoff date. I'm always a bit nervous about programs that automatically erase backups, so I generally prefer not to do this. Rather, I run a program that warns me if the disk usage is going above 85% (which is usually low enough to ensure that I can fix the problem in time, even if I'm on a long flight). Then I can go in and remove the problematic files by hand.
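For those comfortable with automatic expiry despite the caveat above, a nightly cron job along these lines (the 30-day cutoff is arbitrary) would handle the pruning:
```
# At 03:00, delete compressed dumps older than 30 days
0 3 * * * /usr/bin/find /home/database-backups -name '*-dump.gz' -mtime +30 -delete
```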
When you back up your database, you should be sure to back up the configuration for that database as well. The database schema and data, which are part of the dumpfile, are certainly important. However, if you find yourself having to re-create your server from scratch, you'll want to know precisely how you configured the database server, with a particular emphasis on the filesystem configuration and memory allocations. I tend to use PostgreSQL for most of my work, and although postgresql.conf is simple to understand and configure, I still like to keep it around with my dumpfiles.
Another crucial thing to do is to check your database dumps occasionally to be sure that they are working the way you want. It turns out that the backups I thought I was making weren't actually happening, in no small part because I had modified the shell script and hadn't double-checked that it was creating useful backups. Occasionally pulling out one of your dumpfiles and restoring it to a separate (and offline!) database to check its integrity is a good practice, both to ensure that the dump is working and that you remember how to restore it in the case of an emergency.
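Because a dump made with --databases re-creates the original database by name, the restore test belongs on a separate, offline machine, where it is a one-liner (the path follows the date layout of the script above):
```
# On the scratch server: decompress and replay the dump
$ gunzip -c /home/database-backups/2018/02/01/dbname-dump.gz | mysql -u dbuser -p
```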
### Storing Backups
But wait. It might be great to have these backups, but what if the server goes down entirely? In the case of the code, I mentioned ensuring that it was located on more than one machine to protect its integrity. By contrast, your database dumps live only on the server, so if the server fails, your dumps will be inaccessible along with it.
This means you'll want to have your database dumps stored elsewhere, preferably automatically. How can you do that?
There are a few relatively easy and inexpensive solutions to this problem. If you have two servers—ideally in separate physical locations—you can use rsync to copy the files from one to the other. Don't rsync the database's actual files, since those might get corrupted in transfer and aren't designed to be copied when the server is running. By contrast, the dumpfiles that you have created are more than able to go elsewhere. Setting up a remote server, with a user specifically for handling these backup transfers, shouldn't be too hard and will go a long way toward ensuring the safety of your data.
I should note that using rsync in this way basically requires that you set up passwordless SSH, so that you can transfer without having to be physically present to enter the password.
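With keys in place, a single crontab entry covers the transfer (host and paths are placeholders):
```
# Hourly, on the half hour: mirror the dump tree to the backup host
30 * * * * /usr/bin/rsync -az /home/database-backups/ backup@remote.example.com:/srv/database-backups/
```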
Another possible solution is Amazon's Simple Storage Service (S3), which offers astonishing amounts of disk space at very low prices. I know that many companies use S3 as a simple (albeit slow) backup system. You can set up a cron job to run a program that copies the contents of a particular database dumpfile directory into a particular S3 bucket. The assumption here is that you're not ever going to use these backups, meaning that S3's slow searching and access will not be an issue once you're working on the server.
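Using the AWS command-line client, such a job is also a one-liner (the bucket name is hypothetical):
```
# Nightly: mirror the dump directory into S3
15 2 * * * /usr/bin/aws s3 sync /home/database-backups s3://example-db-backups/dumps
```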
Similarly, you might consider using Dropbox. Dropbox is best known for its desktop client, but it has a "headless", text-based client that can be used on Linux servers without a GUI connected. One nice advantage of Dropbox is that you can share a folder with any number of people, which means you can have Dropbox distribute your backup databases everywhere automatically, including to a number of people on your team. The backups arrive in their Dropbox folders, so copies of your data exist on several machines without any extra effort.
Finally, if you're running a WordPress site, you might want to consider VaultPress, a for-pay backup system. I must admit that in the weeks before I took my server down with a database backup error, I kept seeing ads in WordPress for VaultPress. "Who would buy that?", I asked myself, thinking that I'm smart enough to do backups myself. Of course, after disaster occurred and my database was ruined, I realized that $30/year to back up all of my data is cheap, and I should have done it before.
### Conclusion
When it comes to your servers, think less like an optimistic programmer and more like an insurance agent. Perhaps disaster won't strike, but if it does, will you be able to recover? Making sure that even if your server is completely unavailable, you'll be able to bring up your program and any associated database is crucial.
My preferred solution involves combining a Git repository for code and configuration files, distributed across several machines and services. For the databases, however, it's not enough to dump your database; you'll need to get that dump onto a separate machine, and preferably test the backup file on a regular basis. That way, even if things go wrong, you'll be able to get back up in no time.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/avoiding-server-disaster
Author: [Reuven M. Lerner][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:http://www.linuxjournal.com/user/1000891

View File

@ -1,325 +0,0 @@
leemeans translating
Creating an Adventure Game in the Terminal with ncurses
======
How to use curses functions to read the keyboard and manipulate the screen.
My [previous article][1] introduced the ncurses library and provided a simple program that demonstrated a few curses functions to put text on the screen. In this follow-up article, I illustrate how to use a few other curses functions.
### An Adventure
When I was growing up, my family had an Apple II computer. It was on this machine that my brother and I taught ourselves how to write programs in AppleSoft BASIC. After writing a few math puzzles, I moved on to creating games. Having grown up in the 1980s, I already was a fan of the Dungeons and Dragons tabletop games, where you role-played as a fighter or wizard on some quest to defeat monsters and plunder loot in strange lands. So it shouldn't be surprising that I also created a rudimentary adventure game.
The AppleSoft BASIC programming environment supported a neat feature: in standard resolution graphics mode (GR mode), you could probe the color of a particular pixel on the screen. This allowed a shortcut to create an adventure game. Rather than create and update an in-memory map that was transferred to the screen periodically, I could rely on GR mode to maintain the map for me, and my program could query the screen as the player's character moved around the screen. Using this method, I let the computer do most of the hard work. Thus, my top-down adventure game used blocky GR mode graphics to represent my game map.
My adventure game used a simple map that represented a large field with a mountain range running down the middle and a large lake on the upper-left side. I might crudely draw this map for a tabletop gaming campaign to include a narrow path through the mountains, allowing the player to pass to the far side.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-map.jpg)
Figure 1. A simple Tabletop Game Map with a Lake and Mountains
You can draw this map in curses using characters to represent grass, mountains and water. Next, I describe how to do just that using curses functions and how to create and play a similar adventure game in the Linux terminal.
### Constructing the Program
In my last article, I mentioned that most curses programs start with the same set of instructions to determine the terminal type and set up the curses environment:
```
initscr();
cbreak();
noecho();
```
For this program, I add another statement:
```
keypad(stdscr, TRUE);
```
The TRUE flag allows curses to read the keypad and function keys from the user's terminal. If you want to use the up, down, left and right arrow keys in your program, you need to use keypad(stdscr, TRUE) here.
Having done that, you now can start drawing to the terminal screen. The curses functions include several ways to draw text on the screen. In my previous article, I demonstrated the addch() and addstr() functions and their associated mvaddch() and mvaddstr() counterparts that first moved to a specific location on the screen before adding text. To create the adventure game map on the terminal, you can use another set of functions: vline() and hline(), and their partner functions mvvline() and mvhline(). These mv functions accept screen coordinates, a character to draw and how many times to repeat that character. For example, mvhline(1, 2, '-', 20) will draw a line of 20 dashes starting at line 1, column 2.
To draw the map to the terminal screen programmatically, let's define this draw_map() function:
```
#define GRASS ' '
#define EMPTY '.'
#define WATER '~'
#define MOUNTAIN '^'
#define PLAYER '*'
void draw_map(void)
{
int y, x;
/* draw the quest map */
/* background */
for (y = 0; y < LINES; y++) {
mvhline(y, 0, GRASS, COLS);
}
/* mountains, and mountain path */
for (x = COLS / 2; x < COLS * 3 / 4; x++) {
mvvline(0, x, MOUNTAIN, LINES);
}
mvhline(LINES / 4, 0, GRASS, COLS);
/* lake */
for (y = 1; y < LINES / 2; y++) {
mvhline(y, 1, WATER, COLS / 3);
}
}
```
In drawing this map, note the use of mvvline() and mvhline() to fill large chunks of characters on the screen. I created the fields of grass by drawing horizontal lines (mvhline) of characters starting at column 0, for the entire height and width of the screen. I added the mountains on top of that by drawing vertical lines (mvvline), starting at row 0, and a mountain path by drawing a single horizontal line (mvhline). And, I created the lake by drawing a series of short horizontal lines (mvhline). It may seem inefficient to draw overlapping rectangles in this way, but remember that curses doesn't actually update the screen until I call the refresh() function later.
Having drawn the map, all that remains to create the game is to enter a loop where the program waits for the user to press one of the up, down, left or right direction keys and then moves a player icon appropriately. If the space the player wants to move into is unoccupied, it allows the player to go there.
You can use curses as a shortcut. Rather than having to instantiate a version of the map in the program and replicate this map to the screen, you can let the screen keep track of everything for you. The inch() function, and associated mvinch() function, allow you to probe the contents of the screen. This allows you to query curses to find out whether the space the player wants to move into is already filled with water or blocked by mountains. To do this, you'll need a helper function that you'll use later:
```
int is_move_okay(int y, int x)
{
int testch;
/* return true if the space is okay to move into */
testch = mvinch(y, x);
return ((testch == GRASS) || (testch == EMPTY));
}
```
As you can see, this function probes the location at row y, column x and returns true if the space is suitably unoccupied, or false if not.
That makes it really easy to write a navigation loop: get a key from the keyboard and move the user's character around depending on the up, down, left and right arrow keys. Here's a simplified version of that loop:
```
do {
ch = getch();
/* test inputted key and determine direction */
switch (ch) {
case KEY_UP:
if ((y > 0) && is_move_okay(y - 1, x)) {
y = y - 1;
}
break;
case KEY_DOWN:
if ((y < LINES - 1) && is_move_okay(y + 1, x)) {
y = y + 1;
}
break;
case KEY_LEFT:
if ((x > 0) && is_move_okay(y, x - 1)) {
x = x - 1;
}
break;
case KEY_RIGHT:
if ((x < COLS - 1) && is_move_okay(y, x + 1)) {
x = x + 1;
}
break;
}
}
while (1);
```
To use this in a game, you'll need to add some code inside the loop to allow other keys (for example, the traditional WASD movement keys), provide a method for the user to quit the game and move the player's character around the screen. Here's the program in full:
```
/* quest.c */
#include <curses.h>
#include <stdlib.h>
#define GRASS ' '
#define EMPTY '.'
#define WATER '~'
#define MOUNTAIN '^'
#define PLAYER '*'
int is_move_okay(int y, int x);
void draw_map(void);
int main(void)
{
int y, x;
int ch;
/* initialize curses */
initscr();
keypad(stdscr, TRUE);
cbreak();
noecho();
clear();
/* initialize the quest map */
draw_map();
/* start player at lower-left */
y = LINES - 1;
x = 0;
do {
/* by default, you get a blinking cursor - use it to indicate player */
mvaddch(y, x, PLAYER);
move(y, x);
refresh();
ch = getch();
/* test inputted key and determine direction */
switch (ch) {
case KEY_UP:
case 'w':
case 'W':
if ((y > 0) && is_move_okay(y - 1, x)) {
mvaddch(y, x, EMPTY);
y = y - 1;
}
break;
case KEY_DOWN:
case 's':
case 'S':
if ((y < LINES - 1) && is_move_okay(y + 1, x)) {
mvaddch(y, x, EMPTY);
y = y + 1;
}
break;
case KEY_LEFT:
case 'a':
case 'A':
if ((x > 0) && is_move_okay(y, x - 1)) {
mvaddch(y, x, EMPTY);
x = x - 1;
}
break;
case KEY_RIGHT:
case 'd':
case 'D':
if ((x < COLS - 1) && is_move_okay(y, x + 1)) {
mvaddch(y, x, EMPTY);
x = x + 1;
}
break;
}
}
while ((ch != 'q') && (ch != 'Q'));
endwin();
exit(0);
}
int is_move_okay(int y, int x)
{
int testch;
/* return true if the space is okay to move into */
testch = mvinch(y, x);
return ((testch == GRASS) || (testch == EMPTY));
}
void draw_map(void)
{
int y, x;
/* draw the quest map */
/* background */
for (y = 0; y < LINES; y++) {
mvhline(y, 0, GRASS, COLS);
}
/* mountains, and mountain path */
for (x = COLS / 2; x < COLS * 3 / 4; x++) {
mvvline(0, x, MOUNTAIN, LINES);
}
mvhline(LINES / 4, 0, GRASS, COLS);
/* lake */
for (y = 1; y < LINES / 2; y++) {
mvhline(y, 1, WATER, COLS / 3);
}
}
```
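If you save the listing as quest.c, building and running it requires only linking against the ncurses library:
```
$ gcc -o quest quest.c -lncurses
$ ./quest
```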
In the full program listing, you can see the complete arrangement of curses functions to create the game:
1) Initialize the curses environment.
2) Draw the map.
3) Initialize the player coordinates (lower-left).
4) Loop:
* Draw the player's character.
* Get a key from the keyboard.
* Adjust the player's coordinates up, down, left or right, accordingly.
* Repeat.
5) When done, close the curses environment and exit.
### Let's Play
When you run the game, the player's character starts in the lower-left corner. As the player moves around the play area, the program creates a "trail" of dots. This helps show where the player has been before, so the player can avoid crossing the path unnecessarily.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-start.png)
Figure 2. The player starts the game in the lower-left corner.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/quest-1.png)
Figure 3. The player can move around the play area, such as around the lake and through the mountain pass.
To create a complete adventure game on top of this, you might add random encounters with various monsters as the player navigates his or her character around the play area. You also could include special items the player could discover or loot after defeating enemies, which would enhance the player's abilities further.
But to start, this is a good program for demonstrating how to use the curses functions to read the keyboard and manipulate the screen.
### Next Steps
This program is a simple example of how to use the curses functions to update and read the screen and keyboard. You can do so much more with curses, depending on what you need your program to do. In a follow-up article, I plan to show how to update this sample program to use colors. In the meantime, if you are interested in learning more about curses, I encourage you to read Pradeep Padala's [NCURSES Programming HOWTO][2] at the Linux Documentation Project.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/creating-adventure-game-terminal-ncurses
Author: [Jim Hall][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:http://www.linuxjournal.com/users/jim-hall
[1]:http://www.linuxjournal.com/content/getting-started-ncurses
[2]:http://tldp.org/HOWTO/NCURSES-Programming-HOWTO

View File

@ -1,583 +0,0 @@
Rapid, Secure Patching: Tools and Methods
======
It was with some measure of disbelief that the computer science community greeted the recent [EternalBlue][1]-related exploits that have torn through massive numbers of vulnerable systems. The SMB exploits have kept coming (the most recent being [SMBLoris][2], presented at the last DEF CON, which impacts multiple SMB protocol versions, and for which Microsoft will issue no corrective patch). Attacks with these tools [incapacitated critical infrastructure][3] to the point that patients were even turned away from the British National Health Service.
It is with considerable sadness that, during this SMB catastrophe, we also have come to understand that the famous Samba server presented an exploitable attack surface on the public internet in sufficient numbers for a worm to propagate successfully. I previously [have discussed SMB security][4] in Linux Journal, and I am no longer of the opinion that SMB server processes should run on Linux.
In any case, systems administrators of all architectures must be able to down vulnerable network servers and patch them quickly. There is often a need for speed and competence when working with a large collection of Linux servers. Whether this is due to security situations or other concerns is immaterial—the hour of greatest need is not the time to begin to build administration tools. Note that in the event of an active intrusion by hostile parties, [forensic analysis][5] may be a legal requirement, and no steps should be taken on the compromised server without a careful plan and documentation. Especially in this new era of the black hats, computer professionals must step up their game and be able to secure vulnerable systems quickly.
### Secure SSH Keypairs
Tight control of a heterogeneous UNIX environment must begin with best-practice use of SSH authentication keys. I'm going to open this section with a simple requirement. SSH private keys must be one of three types: Ed25519, ECDSA using the NIST P-521 curve or RSA keys of 3072 bits. Any key that does not meet those requirements should be retired (in particular, DSA keys must be removed from service immediately).
The [Ed25519][6] key format is associated with Daniel J. Bernstein, who has such a preeminent reputation in modern cryptography that the field is becoming a DJB [monoculture][7]. The Ed25519 format is designed for speed, security and size economy. If all of your SSH servers are recent enough to support Ed25519, then use it, and consider nothing else.
[Guidance on creating Ed25519 keys][8] suggests 100 rounds for a work factor in the "-o" secure format. Raising the number of rounds raises the strength of the encrypted key against brute-force attacks (should a file copy of the private key fall into hostile hands), at the cost of more work and time in decrypting the key when ssh-add is executed. Although there always is [controversy and discussion][9] with security advances, I will repeat the guidance here and suggest that the best format for a newly created SSH key is this:
```
ssh-keygen -a 100 -t ed25519
```
Your systems might be too old to support Ed25519—Oracle/CentOS/Red Hat 7 have this problem (the 7.1 release introduced support). If you cannot upgrade your old SSH clients and servers, your next best option is likely the NIST P-521 curve, available in the ECDSA key format.
The ECDSA curves came from the US government's National Institute of Standards and Technology (NIST). The best known and most implemented of all of the NIST curves are P-256, P-384 and P-521. All three curves are approved for secret communications by a variety of government entities, but a number of cryptographers have [expressed growing suspicion][10] that the P-256 and P-384 curves are tainted. Well-known cryptographer Bruce Schneier [has remarked][11]: "I no longer trust the constants. I believe the NSA has manipulated them through their relationships with industry." However, DJB [has expressed][12] limited praise of the P-521 curve: "To be fair I should mention that there's one standard NIST curve using a nice prime, namely 2^521 - 1; but the sheer size of this prime makes it much slower than NIST P-256." All of the NIST curves have greater issues with "side channel" attacks than Ed25519—P-521 is certainly a step down, and many assert that none of the NIST curves are safe. In summary, there is a slight risk that a powerful adversary exists with an advantage over the P-256 and P-384 curves, so one is slightly inclined to avoid them. Note that even if your OpenSSH (source) release is capable of P-521, it may be [disabled by your vendor][13] due to patent concerns, so P-521 is not an option in this case. If you cannot use DJB's 2^255 - 19 curve, this command will generate a P-521 key on a capable system:
```
ssh-keygen -o -a 100 -b 521 -t ecdsa
```
And, then there is the unfortunate circumstance with SSH servers that support neither ECDSA nor Ed25519. In this case, you must fall back to RSA with much larger key sizes. An absolute minimum is the modern default of 2048 bits, but 3072 is a wiser choice:
```
ssh-keygen -o -a 100 -b 3072 -t rsa
```
Then in the most lamentable case of all, when you must use old SSH clients that are not able to work with private keys created with the -o option, you can remove the password on id_rsa and create a naked key, then use OpenSSL to encrypt it with AES256 in the PKCS#8 format, as [first documented by Martin Kleppmann][14]. Provide a blank new password for the keygen utility below, then supply a new password when OpenSSL reprocesses the key:
```
$ cd ~/.ssh
$ cp id_rsa id_rsa-orig
$ ssh-keygen -p -t rsa
Enter file in which the key is (/home/cfisher/.ssh/id_rsa):
Enter old passphrase:
Key has comment 'cfisher@localhost.localdomain'
Enter new passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved with the new passphrase.
$ openssl pkcs8 -topk8 -v2 aes256 -in id_rsa -out id_rsa-strong
Enter Encryption Password:
Verifying - Enter Encryption Password:
$ mv id_rsa-strong id_rsa
$ chmod 600 id_rsa
```
After creating all of these keys on a newer system, you can compare the file sizes:
```
$ ll .ssh
total 32
-rw-------. 1 cfisher cfisher 801 Aug 10 21:30 id_ecdsa
-rw-r--r--. 1 cfisher cfisher 283 Aug 10 21:30 id_ecdsa.pub
-rw-------. 1 cfisher cfisher 464 Aug 10 20:49 id_ed25519
-rw-r--r--. 1 cfisher cfisher 111 Aug 10 20:49 id_ed25519.pub
-rw-------. 1 cfisher cfisher 2638 Aug 10 21:45 id_rsa
-rw-------. 1 cfisher cfisher 2675 Aug 10 21:42 id_rsa-orig
-rw-r--r--. 1 cfisher cfisher 583 Aug 10 21:42 id_rsa.pub
```
Although they are relatively enormous, all versions of OpenSSH that I have used have been compatible with the RSA private key in PKCS#8 format. The Ed25519 public key is now small enough to fit in 80 columns without word wrap, and it is as convenient as it is efficient and secure.
Note that PuTTY may have problems using various versions of these keys, and you may need to remove passwords for a successful import into the PuTTY agent.
These keys represent the most secure formats available for various OpenSSH revisions. They really aren't intended for PuTTY or other general interactive activity. Although one hopes that all users create strong keys for all situations, these are enterprise-class keys for major systems activities. It might be wise, however, to regenerate your system host keys to conform to these guidelines.
These key formats may soon change. Quantum computers are causing increasing concern for their ability to run [Shor's Algorithm][15], which can be used to find prime factors to break these keys in reasonable time. The largest commercially available quantum computer, the [D-Wave 2000Q][16], effectively [presents under 200 qubits][17] for this activity, which is not (yet) powerful enough for a successful attack. NIST [announced a competition][18] for a new quantum-resistant public key system with a deadline of November 2017. In response, a team including DJB has released source code for [NTRU Prime][19]. It does appear that we will likely see a post-quantum public key format for OpenSSH (and potentially TLS 1.3) released within the next two years, so take steps to ease migration now.
Also, it's important for SSH servers to restrict their allowed ciphers, MACs and key exchange lest strong keys be wasted on broken crypto (3DES, MD5 and arcfour should be long-disabled). My [previous guidance][20] on the subject involved the following (three) lines in the SSH client and server configuration (note that formatting in the sshd_config file requires all parameters on the same line with no spaces in the options; line breaks have been added here for clarity):
```
Ciphers chacha20-poly1305@openssh.com,
aes256-gcm@openssh.com,
aes128-gcm@openssh.com,
aes256-ctr,
aes192-ctr,
aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,
hmac-sha2-256-etm@openssh.com,
hmac-ripemd160-etm@openssh.com,
umac-128-etm@openssh.com,
hmac-sha2-512,
hmac-sha2-256,
hmac-ripemd160,
umac-128@openssh.com
KexAlgorithms curve25519-sha256@libssh.org,
diffie-hellman-group-exchange-sha256
```
Since the previous publication, RIPEMD160 is likely no longer safe and should be removed. Older systems, however, may support only SHA1, MD5 and RIPEMD160. Certainly remove MD5, but users of PuTTY likely will want to retain SHA1 when newer MACs are not an option. Older servers can present a challenge in finding a reasonable Cipher/MAC/KEX combination when working with modern systems.
At this point, you should have strong keys for secure clients and servers. Now let's put them to use.
### Scripting the SSH Agent
Modern OpenSSH distributions contain the ssh-copy-id shell script for easy key distribution. Below is an example of installing a specific, named key in a remote account:
```
$ ssh-copy-id -i ~/.ssh/some_key.pub person@yourserver.com
ssh-copy-id: INFO: Source of key(s) to be installed:
"/home/cfisher/.ssh/some_key.pub"
ssh-copy-id: INFO: attempting to log in with the new key(s),
to filter out any that are already installed
ssh-copy-id: INFO: 1 key(s) remain to be installed --
if you are prompted now it is to install the new keys
person@yourserver.com's password:
Number of key(s) added: 1
Now try logging into the machine, with:
"ssh 'person@yourserver.com'"
and check to make sure that only the key(s) you wanted were added.
```
If you don't have the ssh-copy-id script, you can install a key manually with the following command:
```
$ ssh person@yourserver.com 'cat >> ~/.ssh/authorized_keys' < \
~/.ssh/some_key.pub
```
If you have SELinux enabled, you might have to mark a newly created authorized_keys file with a security type; otherwise, the sshd server dæmon will be prevented from reading the key (the syslog may report this issue):
```
$ ssh person@yourserver.com 'chcon -t ssh_home_t
↪~/.ssh/authorized_keys'
```
Once your key is installed, test it in a one-time use with the -i option (note that you are entering a local key password, not a remote authentication password):
```
$ ssh -i ~/.ssh/some_key person@yourserver.com
Enter passphrase for key '/home/v-fishecj/.ssh/some_key':
Last login: Wed Aug 16 12:20:26 2017 from 10.58.17.14
yourserver $
```
General, interactive users likely will cache their keys with an agent. In the example below, the same password is used on all three types of keys that were created in the previous section:
```
$ eval $(ssh-agent)
Agent pid 4394
$ ssh-add
Enter passphrase for /home/cfisher/.ssh/id_rsa:
Identity added: ~cfisher/.ssh/id_rsa (~cfisher/.ssh/id_rsa)
Identity added: ~cfisher/.ssh/id_ecdsa (cfisher@init.com)
Identity added: ~cfisher/.ssh/id_ed25519 (cfisher@init.com)
```
The first command above launches a user agent process, which injects environment variables (named SSH_AUTH_SOCK and SSH_AGENT_PID) into the parent shell (via eval). The shell becomes aware of the agent and passes these variables to the programs that it runs from that point forward.
When launched, the ssh-agent has no credentials and is unable to facilitate SSH activity. It must be primed by adding keys, which is done with ssh-add. When called with no arguments, all of the default keys will be read. It also can be called to add a custom key:
```
$ ssh-add ~/.ssh/some_key
Enter passphrase for /home/cfisher/.ssh/some_key:
Identity added: /home/cfisher/.ssh/some_key (cfisher@localhost.localdomain)
```
Note that the agent will not retain the password on the key. ssh-add uses any and all passwords that you enter while it runs to decrypt keys that it finds, but the passwords are cleared from memory when ssh-add terminates (they are not sent to ssh-agent). This allows you to upgrade to new key formats with minimal inconvenience, while keeping the keys reasonably safe.
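One such upgrade is rewriting an older PEM-encoded private key in the newer bcrypt-protected format. A sketch with ssh-keygen's passphrase-change mode follows (the -o and -a flags assume a reasonably modern OpenSSH; back up the key file before converting it):
```
$ ssh-keygen -p -o -a 64 -f ~/.ssh/id_rsa
# -p     change the passphrase (you'll be prompted for the old one)
# -o     save in the new OpenSSH private-key format
# -a 64  use 64 rounds of the bcrypt KDF to slow brute-force attempts
```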
The current cached keys can be listed with ssh-add -l (from which you can deduce that "some_key" is an Ed25519):
```
$ ssh-add -l
3072 SHA256:cpVFMZ17oO5n/Jfpv2qDNSNcV6ffOVYPV8vVaSm3DDo
/home/cfisher/.ssh/id_rsa (RSA)
521 SHA256:1L9/CglR7cstr54a600zDrBbcxMj/a3RtcsdjuU61VU
cfisher@localhost.localdomain (ECDSA)
256 SHA256:Vd21LEM4lixY4rIg3/Ht/w8aoMT+tRzFUR0R32SZIJc
cfisher@localhost.localdomain (ED25519)
256 SHA256:YsKtUA9Mglas7kqC4RmzO6jd2jxVNCc1OE+usR4bkcc
cfisher@localhost.localdomain (ED25519)
```
While a "primed" agent is running, the SSH clients may use (trusting) remote servers fluidly, with no further prompts for credentials:
```
$ sftp person@yourserver.com
Connected to yourserver.com.
sftp> quit
$ scp /etc/passwd person@yourserver.com:/tmp
passwd 100% 2269 65.8KB/s 00:00
$ ssh person@yourserver.com
(motd for yourserver.com)
$ ls -l /tmp/passwd
-rw-r--r-- 1 root wheel 2269 Aug 16 09:07 /tmp/passwd
$ rm /tmp/passwd
$ exit
Connection to yourserver.com closed.
```
The OpenSSH agent can be locked, preventing any further use of the credentials that it holds (this might be appropriate when suspending a laptop):
```
$ ssh-add -x
Enter lock password:
Again:
Agent locked.
$ ssh yourserver.com
Enter passphrase for key '/home/cfisher/.ssh/id_rsa': ^C
```
It will provide credentials again when it is unlocked:
```
$ ssh-add -X
Enter lock password:
Agent unlocked.
```
You also can set ssh-agent to expire keys after a time limit with the -t option, which may be useful for long-lived agents that must clear keys after a set daily shift.
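A brief sketch of both forms follows (lifetimes accept plain seconds or suffixed units):
```
$ eval $(ssh-agent -t 8h)         # every key added expires after 8 hours
$ ssh-add -t 30m ~/.ssh/some_key  # per-key override: expire in 30 minutes
```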
General shell users may cache many types of keys with a number of differing agent implementations. In addition to the standard OpenSSH agent, users may rely upon PuTTY's pageant.exe, GNOME keyring or KDE Kwallet, among others (the use of the PuTTY agent could likely fill an article on its own).
However, the goal here is to create "enterprise" keys for critical server controls. You likely do not want long-lived agents in order to limit the risk of exposure. When scripting with "enterprise" keys, you will run an agent only for the duration of the activity, then kill it at completion.
There are special options for accessing the root account with OpenSSH: the PermitRootLogin parameter can be added to the sshd_config file (usually found in /etc/ssh). It can be set to a simple yes or no; to forced-commands-only, which allows only explicitly authorized programs to be executed; or to the equivalent options prohibit-password and without-password, both of which allow key-based access with the keys generated here.
Many hold that root should not be allowed any access. [Michael W. Lucas][21] addresses the question in SSH Mastery:
> Sometimes, it seems that you need to allow users to SSH in to the system as root. This is a colossally bad idea in almost all environments. When users must log in as a regular user and then change to root, the system logs record the user account, providing accountability. Logging in as root destroys that audit trail....It is possible to override the security precautions and make sshd permit a login directly as root. It's such a bad idea that I'd consider myself guilty of malpractice if I told you how to do it. Logging in as root via SSH almost always means you're solving the wrong problem. Step back and look for other ways to accomplish your goal.
When root action is required quickly on more than a few servers, the above advice can impose painful delays. Lucas' direct criticism can be addressed by allowing only a limited set of "bastion" servers to issue root commands over SSH. Administrators should be forced to log in to the bastions with unprivileged accounts to establish accountability.
However, one problem with remotely "changing to root" is the [statistical use of the Viterbi algorithm][22]. Short passwords, the su - command and remote SSH calls that use passwords to establish a trinary network configuration are all uniquely vulnerable to timing attacks on a user's keyboard movements. Those with the highest security concerns will need to compensate.
For the rest of us, I recommend that PermitRootLogin without-password be set for all target machines.
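A sketch of that server setting follows (prohibit-password is the newer spelling of the same option; the reload command assumes a systemd-based target):
```
# excerpt from /etc/ssh/sshd_config on each target machine
PermitRootLogin without-password

# apply the change without dropping existing sessions
$ systemctl reload sshd
```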
Finally, you can easily terminate ssh-agent interactively with the -k option:
```
$ eval $(ssh-agent -k)
Agent pid 4394 killed
```
With these tools and the intended use of them in mind, here is a complete script that runs an agent for the duration of a set of commands over a list of servers for a common named user (which is not necessarily root):
```
# cat artano
#!/bin/sh
if [[ $# -lt 1 ]]; then echo "$0 - requires commands"; exit; fi
R="-R5865:127.0.0.1:5865" # set to "-2" if you don't want
↪port forwarding
eval $(ssh-agent -s)
function cleanup { eval $(ssh-agent -s -k); }
trap cleanup EXIT
function remsh { typeset F="/tmp/${1}" h="$1" p="$2"; shift 2; echo "#$h"
  if [[ "$ARTANO" == "PARALLEL" ]]
  then ssh "$R" -p "$p" "$h" "$@" < /dev/null >> "${F}.out" 2>> "${F}.err" &
  else ssh "$R" -p "$p" "$h" "$@"
  fi } # HOST PORT CMD
if ssh-add ~/.ssh/master_key
then remsh yourserver.com 22 "$@"
remsh container.yourserver.com 2200 "$@"
remsh anotherserver.com 22 "$@"
# Add more hosts here.
else echo Bad password - killing agent. Try again.
fi
wait
#######################################################################
# Examples: # Artano is an epithet of a famous mythical being
# artano 'mount /patchdir' # you will need an fstab entry for this
# artano 'umount /patchdir'
# artano 'yum update -y 2>&1'
# artano 'rpm -Fvh /patchdir/\*.rpm'
#######################################################################
```
This script runs all commands in sequence on a collection of hosts by default. If the ARTANO environment variable is set to PARALLEL, it instead will launch them all as background processes simultaneously and append their STDOUT and STDERR to files in /tmp (this should be no problem when dealing with fewer than a hundred hosts on a reasonable server). The PARALLEL setting is useful not only for pushing changes faster, but also for collecting audit results.
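As a usage sketch, a parallel audit run and a peek at its collected output might look like this (host names as configured in the script above):
```
$ ARTANO=PARALLEL ./artano 'uptime'
$ cat /tmp/yourserver.com.out    # STDOUT collected from one host
$ cat /tmp/yourserver.com.err    # any errors from the same host
```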
Below is an example using the script to run yum update on every host. This particular invocation had to traverse a firewall, so it relied on a proxy setting in the /etc/yum.conf file that made use of the port-forwarding option (-R) above:
```
# ./artano 'yum update -y 2>&1'
Agent pid 3458
Enter passphrase for /root/.ssh/master_key:
Identity added: /root/.ssh/master_key (/root/.ssh/master_key)
#yourserver.com
Loaded plugins: langpacks, ulninfo
No packages marked for update
#container.yourserver.com
Loaded plugins: langpacks, ulninfo
No packages marked for update
#anotherserver.com
Loaded plugins: langpacks, ulninfo
No packages marked for update
Agent pid 3458 killed
```
The script can be used for more general maintenance functions. Linux installations running the XFS filesystem should be defragmented periodically. Although this normally would be done with cron, it can be a centralized activity, stored in a separate copy of the script that includes only the appropriate hosts:
```
&1'
Agent pid 7897
Enter passphrase for /root/.ssh/master_key:
Identity added: /root/.ssh/master_key (/root/.ssh/master_key)
#yourserver.com
#container.yourserver.com
#anotherserver.com
Agent pid 7897 killed
```
An easy method to collect the contents of all authorized_keys files for all users is the following artano script (this is useful for system auditing and is coded to remove file duplicates):
```
artano 'awk -F: {print\$6\"/.ssh/authorized_keys\"} \
/etc/passwd | sort -u | xargs grep . 2> /dev/null'
```
It is convenient to configure NFS mounts for file distribution to remote nodes. Bear in mind that NFS is clear text, and sensitive content should not traverse untrusted networks while unencrypted. After configuring an NFS server on host 1.2.3.4, I add the following line to the /etc/fstab file on all the clients and create the /patchdir directory. After the change, the artano script can be used to mass-mount the directory if the network configuration is correct:
```
# tail -1 /etc/fstab
1.2.3.4:/var/cache/yum/x86_64/7Server/ol7_latest/packages /patchdir nfs4 noauto,proto=tcp,port=2049 0 0
```
Assuming that the NFS server is mounted, RPMs can be upgraded from images stored upon it (note that Oracle Spacewalk or Red Hat Satellite might be a more capable patch method):
```
# ./artano 'rpm -Fvh /patchdir/\*.rpm'
Agent pid 3203
Enter passphrase for /root/.ssh/master_key:
Identity added: /root/.ssh/master_key (/root/.ssh/master_key)
#yourserver.com
Preparing... ########################
Updating / installing...
xmlsec1-1.2.20-7.el7_4 ########################
xmlsec1-openssl-1.2.20-7.el7_4 ########################
Cleaning up / removing...
xmlsec1-openssl-1.2.20-5.el7 ########################
xmlsec1-1.2.20-5.el7 ########################
#container.yourserver.com
Preparing... ########################
Updating / installing...
xmlsec1-1.2.20-7.el7_4 ########################
xmlsec1-openssl-1.2.20-7.el7_4 ########################
Cleaning up / removing...
xmlsec1-openssl-1.2.20-5.el7 ########################
xmlsec1-1.2.20-5.el7 ########################
#anotherserver.com
Preparing... ########################
Updating / installing...
xmlsec1-1.2.20-7.el7_4 ########################
xmlsec1-openssl-1.2.20-7.el7_4 ########################
Cleaning up / removing...
xmlsec1-openssl-1.2.20-5.el7 ########################
xmlsec1-1.2.20-5.el7 ########################
Agent pid 3203 killed
```
I am assuming that my audience is already experienced with package tools for their preferred platforms. However, to avoid criticism that I've included little actual discussion of patch tools, the following is a quick reference of RPM manipulation commands, which is the most common package format on enterprise systems:
* rpm -Uvh package.i686.rpm — install or upgrade a package file.
* rpm -Fvh package.i686.rpm — upgrade a package file, if an older version is installed.
* rpm -e package — remove an installed package.
* rpm -q package — list installed package name and version.
* rpm -q --changelog package — print full changelog for installed package (including CVEs).
* rpm -qa — list all installed packages on the system.
* rpm -ql package — list all files in an installed package.
* rpm -qpl package.i686.rpm — list files included in a package file.
* rpm -qi package — print detailed description of installed package.
* rpm -qpi package — print detailed description of package file.
* rpm -qf /path/to/file — list package that installed a particular file.
  * rpmbuild --rebuild package.src.rpm — unpack and build a binary RPM under /usr/src/redhat.
* rpm2cpio package.src.rpm | cpio -icduv — unpack all package files in the current directory.
Another important consideration for scripting the SSH agent is limiting the capability of an authorized key. There is a [specific syntax][23] for such limitations. Of particular interest is the from="" clause, which will restrict logins on a key to a limited set of hosts. It is likely wise to declare a set of "bastion" servers that will record non-root logins that escalate into controlled users who make use of the enterprise keys.
An example entry might be the following (note that the line is broken here for clarity only; in the actual file, it must appear as a single unbroken line):
```
from="*.c2.security.yourcompany.com,4.3.2.1" ssh-ed25519
↪AAAAC3NzaC1lZDI1NTE5AAAAIJSSazJz6A5x6fTcDFIji1X+
↪svesidBonQvuDKsxo1Mx
```
A number of other useful restraints can be placed upon authorized_keys entries. The command="" will restrict a key to a single program or script and will set the SSH_ORIGINAL_COMMAND environment variable to the client's attempted call—scripts can set alarms if the variable does not contain approved contents. The restrict option also is worth consideration, as it disables a large set of SSH features that can be both superfluous and dangerous.
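A hedged example of such a locked-down entry follows (the script path is hypothetical, and the restrict option assumes OpenSSH 7.2 or later; as before, the line is broken here only for clarity):
```
restrict,command="/usr/local/bin/patch-audit.sh",from="*.c2.security.yourcompany.com"
    ssh-ed25519 AAAA...key-material... auditor@bastion
```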
Although it is possible to set server identification keys in the known_hosts file to a @revoked status, this cannot be done with the contents of authorized_keys. However, a system-wide file for forbidden keys can be set in the sshd_config with RevokedKeys. This file overrides any user's authorized_keys. If set, this file must exist and be readable by the sshd server process; otherwise, no keys will be accepted at all (so use care if you configure it on a machine where there are obstacles to physical access). When this option is set, use the artano script to append forbidden keys to the file quickly when they should be disallowed from the network. A clear and convenient file location would be /etc/ssh/revoked_keys.
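A sketch of enabling and populating such a list (the key text is a placeholder; the artano push assumes root-capable targets as configured above):
```
# add to /etc/ssh/sshd_config on each target:
RevokedKeys /etc/ssh/revoked_keys

# create the file everywhere first (it must exist), then append a bad key:
$ ./artano 'touch /etc/ssh/revoked_keys'
$ ./artano 'echo "ssh-ed25519 AAAA...compromised..." >> /etc/ssh/revoked_keys'
```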
It is also possible to establish a local Certificate Authority (CA) for OpenSSH that will [allow keys to be registered with an authority][24] with expiration dates. These CAs can [become quite elaborate][25] in their control over an enterprise. Although the maintenance of an SSH CA is beyond the scope of this article, keys issued by such CAs should be strong by adhering to the requirements for Ed25519/E-521/RSA-3072.
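As a brief sketch, issuing a one-year certificate with a local CA might look like the following (paths and identities are hypothetical, and servers must also trust the CA via TrustedUserCAKeys in sshd_config):
```
# generate the CA keypair (guard the private half carefully)
$ ssh-keygen -t ed25519 -f /etc/ssh/ca_user_key

# sign an existing user key, valid for 52 weeks
$ ssh-keygen -s /etc/ssh/ca_user_key -I cfisher-cert -n cfisher \
      -V +52w ~/.ssh/id_ed25519.pub
```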
### pdsh
Many higher-level tools for the control of collections of servers exist that are much more sophisticated than the script I've presented here. The most famous is likely [Puppet][26], which is a Ruby-based configuration management system for enterprise control. Puppet has a somewhat short list of supported operating systems. If you are looking for low-level control of Android, Tomato, Linux smart terminals or other "exotic" POSIX, Puppet is likely not the appropriate tool. Another popular Ruby-based tool is [Chef][27], which is known for its complexity. Both Puppet and Chef require Ruby installations on both clients and servers, and they both will catalog any SSH keys that they find, so this key strength discussion is completely applicable to them.
There are several similar Python-based tools, including [Ansible][28], [Bcfg2][29], [Fabric][30] and [SaltStack][31]. Of these, only Ansible can run "agentless" over a bare SSH connection; the rest will require agents that run on target nodes (and this likely includes a Python runtime).
Another popular configuration management tool is [CFEngine][32], which is coded in C and claims very high performance. [Rudder][33] has evolved from portions of CFEngine and has a small but growing user community.
Most of the previously mentioned packages are licensed commercially and some are closed source.
The closest low-level tool to the activities presented here is the Parallel Distributed Shell (pdsh), which can be found in the [EPEL repository][34]. The pdsh utilities grew out of an IBM-developed package named dsh designed for the control of compute clusters. Install the following packages from the repository to use pdsh:
```
# rpm -qa | grep pdsh
pdsh-2.31-1.el7.x86_64
pdsh-rcmd-ssh-2.31-1.el7.x86_64
```
An SSH agent must be running while using pdsh with encrypted keys, and there is no obvious way to control the destination port on a per-host basis as was done with the artano script. Below is an example using pdsh to run a command on three remote servers:
```
# eval $(ssh-agent)
Agent pid 17106
# ssh-add ~/.ssh/master_key
Enter passphrase for /root/.ssh/master_key:
Identity added: /root/.ssh/master_key (/root/.ssh/master_key)
# pdsh -w hosta.com,hostb.com,hostc.com uptime
hosta: 13:24:49 up 13 days, 2:13, 6 users, load avg: 0.00, 0.01, 0.05
hostb: 13:24:49 up 7 days, 21:15, 5 users, load avg: 0.05, 0.04, 0.05
hostc: 13:24:49 up 9 days, 3:26, 3 users, load avg: 0.00, 0.01, 0.05
# eval $(ssh-agent -k)
Agent pid 17106 killed
```
The -w option above defines a host list. It allows for limited arithmetic expansion and can take the list of hosts from standard input if the argument is a dash (-). The PDSH_SSH_ARGS and PDSH_SSH_ARGS_APPEND environment variables can be used to pass custom options to the SSH call. By default, 32 sessions will be launched in parallel, and this "fanout/sliding window" will be maintained by launching new host invocations as existing connections complete and close. You can adjust the size of the "fanout" either with the -f option or the FANOUT environment variable. It's interesting to note that there are two file copy commands: pdcp and rpdcp, which are analogous to scp.
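A short sketch of the range syntax and the parallel copy follows (host names are hypothetical, and pdcp requires its binary on the target nodes as well):
```
# run on sixteen hosts, eight connections at a time
$ pdsh -w 'node[01-16].yourcompany.com' -f 8 'uname -r'

# push a file to the same hosts in parallel
$ pdcp -w 'node[01-16].yourcompany.com' /etc/yum.conf /etc/yum.conf
```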
Even a low-level utility like pdsh lacks some flexibility that is available by scripting OpenSSH, so prepare to feel even greater constraints as more complicated tools are introduced.
### Conclusion
Modern Linux touches us in many ways on diverse platforms. When the security of these systems is not maintained, others also may touch our platforms and turn them against us. It is important to realize the maintenance obligations when you add any Linux platform to your environment. This obligation always exists, and there are consequences when it is not met.
In a security emergency, simple, open and well understood tools are best. As tool complexity increases, platform portability certainly declines, the number of competent administrators also falls, and this likely impacts speed of execution. This may be a reasonable trade in many other aspects, but in a security context, it demands a much more careful analysis. Emergency measures must be documented and understood by a wider audience than is required for normal operations, and using more general tools facilitates that discussion.
I hope the techniques presented here will prompt that discussion for those who have not yet faced it.
### Disclaimer
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of Linux Journal.
### Note:
An exploit [compromising Ed25519][35] was recently demonstrated that relies upon custom hardware changes to derive a usable portion of a secret key. Physical hardware security is a basic requirement for encryption integrity, and many common algorithms are further vulnerable to cache timing or other side channel attacks that can be performed by the unprivileged processes of other users. Use caution when granting access to systems that process sensitive data.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/rapid-secure-patching-tools-and-methods
Author: [Charles Fisher][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.linuxjournal.com/users/charles-fisher
[1]:https://en.wikipedia.org/wiki/EternalBlue
[2]:http://securityaffairs.co/wordpress/61530/hacking/smbloris-smbv1-flaw.html
[3]:http://www.telegraph.co.uk/news/2017/05/13/nhs-cyber-attack-everything-need-know-biggest-ransomware-offensive
[4]:http://www.linuxjournal.com/content/smbclient-security-windows-printing-and-file-transfer
[5]:https://staff.washington.edu/dittrich/misc/forensics
[6]:https://ed25519.cr.yp.to
[7]:http://www.metzdowd.com/pipermail/cryptography/2016-March/028824.html
[8]:https://blog.g3rt.nl/upgrade-your-ssh-keys.html
[9]:https://news.ycombinator.com/item?id=12563899
[10]:http://safecurves.cr.yp.to/rigid.html
[11]:https://en.wikipedia.org/wiki/Curve25519
[12]:http://blog.cr.yp.to/20140323-ecdsa.html
[13]:https://lwn.net/Articles/573166
[14]:http://martin.kleppmann.com/2013/05/24/improving-security-of-ssh-private-keys.html
[15]:https://en.wikipedia.org/wiki/Shor's_algorithm
[16]:https://www.dwavesys.com/d-wave-two-system
[17]:https://crypto.stackexchange.com/questions/40893/can-or-can-not-d-waves-quantum-computers-use-shors-and-grovers-algorithm-to-f
[18]:https://yro.slashdot.org/story/16/12/21/2334220/nist-asks-public-for-help-with-quantum-proof-cryptography
[19]:https://ntruprime.cr.yp.to/index.html
[20]:http://www.linuxjournal.com/content/cipher-security-how-harden-tls-and-ssh
[21]:https://www.michaelwlucas.com/tools/ssh
[22]:https://people.eecs.berkeley.edu/~dawnsong/papers/ssh-timing.pdf
[23]:https://man.openbsd.org/sshd#AUTHORIZED_KEYS_FILE_FORMAT
[24]:https://ef.gy/hardening-ssh
[25]:https://code.facebook.com/posts/365787980419535/scalable-and-secure-access-with-ssh
[26]:https://puppet.com
[27]:https://www.chef.io
[28]:https://www.ansible.com
[29]:http://bcfg2.org
[30]:http://www.fabfile.org
[31]:https://saltstack.com
[32]:https://cfengine.com
[33]:http://www.rudder-project.org/site
[34]:https://fedoraproject.org/wiki/EPEL
[35]:https://research.kudelskisecurity.com/2017/10/04/defeating-eddsa-with-faults

View File

@ -1,174 +0,0 @@
Ansible: Making Things Happen
======
In my [last article][1], I described how to configure your server and clients so you could connect to each client from the server. Ansible is a push-based automation tool, so the connection is initiated from your "server", which is usually just a workstation or a server you ssh in to from your workstation. In this article, I explain how modules work and how you can use Ansible in ad-hoc mode from the command line.
Ansible is supposed to make your job easier, so the first thing you need to learn is how to do familiar tasks. For most sysadmins, that means some simple command-line work. Ansible has a few quirks when it comes to command-line utilities, but it's worth learning the nuances, because it makes for a powerful system.
### Command Module
This is the safest module to execute remote commands on the client machine. As with most Ansible modules, it requires Python to be installed on the client, but that's it. When Ansible executes commands using the Command Module, it does not process those commands through the user's shell. This means some variables like $HOME are not available. It also means stream functions (redirects, pipes) don't work. If you don't need to redirect output or to reference the user's home directory as a shell variable, the Command Module is what you want to use. To invoke the Command Module in ad-hoc mode, do something like this:
```
ansible host_or_groupname -m command -a "whoami"
```
Your output should show SUCCESS for each host referenced and then return the user name used to log in. You'll notice that the user is not root, unless that's the user you used to connect to the client computer.
If you want to see the elevated user, you'll add another argument to the ansible command. You can add -b in order to "become" the elevated user (or the sudo user). So, if you were to run the same command as above with a "-b" flag:
```
ansible host_or_groupname -b -m command -a "whoami"
```
you should see a similar result, but the whoami results should say root instead of the user you used to connect. That flag is important to use, especially if you try to run remote commands that require root access!
### Shell Module
There's nothing wrong with using the Shell Module to execute remote commands. It's just important to know that since it uses the remote user's environment, if there's something goofy with the user's account, it might cause problems that the Command Module avoids. If you use the Shell Module, however, you're able to use redirects and pipes. You can use the whoami example to see the difference. This command:
```
ansible host_or_groupname -m command -a "whoami > myname.txt"
```
should result in an error about > not being a valid argument. Since the Command Module doesn't run inside any shell, it interprets the greater-than character as something you're trying to pass to the whoami command. If you use the Shell Module, however, you have no problems:
```
ansible host_or_groupname -m shell -a "whoami > myname.txt"
```
This should execute and give you a SUCCESS message for each host, but there should be nothing returned as output. On the remote machine, however, there should be a file called myname.txt in the user's home directory that contains the name of the user. My personal policy is to use the Command Module whenever possible and to use the Shell Module if needed.
### The Raw Module
Functionally, the Raw Module works like the Shell Module. The key difference is that Ansible doesn't do any error checking, and STDERR, STDOUT and the return code are returned. Other than that, Ansible has no idea what happens, because it just executes the command over SSH directly. So while the Shell Module will use /bin/sh by default, the Raw Module just uses whatever the user's personal default shell might be.
Why would a person decide to use the Raw Module? It doesn't require Python on the remote computer—at all. Although it's true that most servers have Python installed by default, or easily could have it installed, many embedded devices don't and can't have Python installed. For most configuration management tools, not having an agent program installed means the remote device can't be managed. With Ansible, if all you have is SSH, you still can execute remote commands using the Raw Module. I've used the Raw Module to manage Bitcoin miners that have a very minimal embedded environment. It's a powerful tool, and when you need it, it's invaluable!
### Copy Module
Although it's certainly possible to do file and folder manipulation with the Command and Shell Modules, Ansible includes a module specifically for copying files to remote machines. Even though it requires learning a new syntax for copying files, I like to use it because Ansible will check to see whether a file exists, and whether it's the same file. That means it copies the file only if it needs to, saving time and bandwidth. It even will make backups of existing files! I can't tell you how many times I've used scp and sshpass in a Bash FOR loop and dumped files on servers, even if they didn't need them. Ansible makes it easy and doesn't require FOR loops and IP iterations.
The syntax is a little more complicated than with Command, Shell or Raw. Thankfully, as with most things in the Ansible world, it's easy to understand—for example:
```
ansible host_or_groupname -b -m copy \
-a "src=./updated.conf dest=/etc/ntp.conf \
owner=root group=root mode=0644 backup=yes"
```
This will look in the current directory (on the Ansible server/workstation) for a file called updated.conf and then copy it to each host. On the remote system, the file will be put in /etc/ntp.conf, and if a file already exists, and it's different, the original will be backed up with a date extension. If the files are the same, Ansible won't make any changes.
I tend to use the Copy Module when updating configuration files. It would be perfect for updating configuration files on Bitcoin miners, but unfortunately, the Copy Module does require that the remote machine has Python installed. Nevertheless, it's a great way to update common files on many remote machines with one simple command. It's also important to note that the Copy Module supports copying remote files to other locations on the remote filesystem using the remote_src=true directive.
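For example, a remote-to-remote backup of a file before changing it might look like this sketch (paths as in the earlier example):
```
ansible host_or_groupname -b -m copy \
    -a "src=/etc/ntp.conf dest=/etc/ntp.conf.orig remote_src=true"
```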
### File Module
The File Module has a lot in common with the Copy Module, but if you try to use the File Module to copy a file, it doesn't work as expected. The File Module does all its actions on the remote machine, so src and dest are both references to the remote filesystem. The File Module often is used for creating directories, creating links or deleting remote files and folders. The following will simply create a folder named /etc/newfolder on the remote servers and set the mode:
```
ansible host_or_groupname -b -m file \
-a "path=/etc/newfolder state=directory mode=0755"
```
You can, of course, set the owner and group, along with a bunch of other options, which you can learn about on the Ansible doc site. I find I most often will either create a folder or symbolically link a file using the File Module. To create a symlink:
```
ansible host_or_groupname -b -m file \
-a "src=/etc/ntp.conf dest=/home/user/ntp.conf \
owner=user group=user state=link"
```
Notice that the state directive is how you inform Ansible what you actually want to do. There are several state options:
* link — create symlink.
* directory — create directory.
* hard — create hardlink.
* touch — create empty file.
* absent — delete file or directory recursively.
This might seem a bit complicated, especially when you easily could do the same with a Command or Shell Module command, but the clarity of using the appropriate module makes it more difficult to make mistakes. Plus, learning these commands in ad-hoc mode will make playbooks, which consist of many commands, easier to understand (I plan to cover this in my next article).
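As one more sketch before moving on, recursively removing the folder created above uses the absent state:
```
ansible host_or_groupname -b -m file \
    -a "path=/etc/newfolder state=absent"
```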
### Package Management
Anyone who manages multiple distributions knows it can be tricky to handle the various package managers. Ansible handles this in a couple ways. There are specific modules for apt and yum, but there's also a generic module called "package" that will install packages on the remote computer regardless of whether it's Red Hat- or Debian/Ubuntu-based.
Unfortunately, while Ansible usually can detect the type of package manager it needs to use, it doesn't have a way to fix packages with different names. One prime example is Apache. On Red Hat-based systems, the package is "httpd", but on Debian/Ubuntu systems, it's "apache2". That means some more complex things need to happen in order to install the correct package automatically. The individual modules, however, are very easy to use. I find myself just using apt or yum as appropriate, just like when I manually manage servers. Here's an apt example:
```
ansible host_or_groupname -b -m apt \
-a "update_cache=yes name=apache2 state=latest"
```
With this one simple line, all the host machines will run apt-get update (that's the update_cache directive at work), then install apache2's latest version including any dependencies required. Much like the File Module, the state directive has a few options:
* latest — get the latest version, upgrading existing if needed.
* absent — remove package if installed.
* present — make sure package is installed, but don't upgrade existing.
The Yum Module works similarly to the Apt Module, but I generally don't bother with the update_cache directive, because yum updates its cache automatically. Although very similar, installing Apache on a Red Hat-based system looks like this:
```
ansible host_or_groupname -b -m yum \
-a "name=httpd state=present"
```
The difference with this example is that if Apache is already installed, it won't update, even if an update is available. Sometimes updating to the latest version isn't what you want, so this stops that from accidentally happening.
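When a package does share one name across distributions, the generic module mentioned earlier sidesteps the apt/yum split entirely; here's a sketch (the package module assumes Ansible 2.0 or later):
```
ansible host_or_groupname -b -m package \
    -a "name=tmux state=present"
```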
### Just the Facts, Ma'am
One frustrating thing about using Ansible in ad-hoc mode is that you don't have access to the "facts" about the remote systems. In my next article, where I plan to explore creating playbooks full of various tasks, you'll see how you can reference the facts Ansible learns about the systems. It makes Ansible far more powerful, but again, it can be utilized only in playbook mode. Nevertheless, it's possible to use ad-hoc mode to peek at the sorts of information Ansible gathers. If you run the setup module, it will show you all the details from a remote system:
```
ansible host_or_groupname -b -m setup
```
That command will spew a ton of variables on your screen. You can scroll through them all to see the vast amount of information Ansible pulls from the host machines. In fact, it shows so much information, it can be overwhelming. You can filter the results:
```
ansible host_or_groupname -b -m setup -a "filter=*family*"
```
That should just return a single variable, ansible_os_family, which likely will be Debian or Red Hat. When you start building more complex Ansible setups with playbooks, it's possible to insert some logic and conditionals in order to use yum where appropriate and apt where the system is Debian-based. Really, the facts variables are incredibly useful and make building playbooks that much more exciting.
But, that's for another article, because you've come to the end of the second installment. Your assignment for now is to get comfortable using Ansible in ad-hoc mode, doing one thing at a time. Most people think ad-hoc mode is just a stepping stone to more complex Ansible setups, but I disagree. The ability to configure hundreds of servers consistently and reliably with a single command is nothing to scoff at. I love making elaborate playbooks, but just as often, I'll use an ad-hoc command in a situation that used to require me to ssh in to a bunch of servers to do simple tasks. Have fun with Ansible; it just gets more interesting from here!
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/ansible-making-things-happen
Author: [Shawn Powers][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.linuxjournal.com/users/shawn-powers
[1]:http://www.linuxjournal.com/content/ansible-automation-framework-thinks-sysadmin

View File

@ -1,191 +0,0 @@
Shell Scripting: Dungeons, Dragons and Dice
======
In my [last article][1], I talked about a really simple shell script for a game called Bunco, which is a dice game played in rounds where you roll three dice and compare your values to the round number. If all three dice match the round number, you've just rolled a bunco for 25 points. Otherwise, any dice that match the round number are worth one point each. It's simple—a game designed for people who are getting tipsy at the local pub, and it also is easy to program.
The core function in the Bunco program was one that produced a random number between 1 and 6 to simulate rolling a six-sided die. It looked like this:
```
rolldie()
{
local result=$1
rolled=$(( ( $RANDOM % 6 ) + 1 ))
eval $result=$rolled
}
```
It's invoked with a variable name as the single argument, and it will load a random number between 1 and 6 into that variable—for example:
```
rolldie die1
```
will assign a value 1..6 to $die1. Make sense?
If you can do that, however, what's to stop you from having a second argument that specifies the number of sides of the die you want to "roll" with the function? Something like this:
```
rolldie()
{
local result=$1 sides=$2
rolled=$(( ( $RANDOM % $sides ) + 1 ))
eval $result=$rolled
}
```
To test it, let's just write a tiny wrapper that simply asks for a 20-sided die (d20) result:
```
rolldie die 20
echo resultant roll is $die
```
Easy enough. To make it a bit more useful, let's allow users to specify a sequence of dice rolls, using the standard D&D notation of nDm—that is, n m-sided dice. Bunco would have been done with 3d6, for example (three six-sided dice). Got it?
Since you might well want starting flags too, let's build that into the parsing loop using the ever handy getopts:
```
while getopts "h" arg
do
case "$arg" in
* ) echo "dnd-dice NdM {NdM}"
echo "NdM = N M-sided dice"; exit 0 ;;
esac
done
shift $(( $OPTIND - 1 ))
for request in $* ; do
echo "Rolling: $request"
done
```
With a well formed notation like 3d6, it's easy to break up the argument into its component parts, like so:
```
dice=$(echo $request | cut -dd -f1)
sides=$(echo $request | cut -dd -f2)
echo "Rolling $dice $sides-sided dice"
```
To test it, let's give it some arguments and see what the program outputs:
```
$ dnd-dice 3d6 1d20 2d100 4d3 d5
Rolling 3 6-sided dice
Rolling 1 20-sided dice
Rolling 2 100-sided dice
Rolling 4 3-sided dice
Rolling 5-sided dice
```
Ah, the last one points out a mistake in the script. If there's no number of dice specified, the default should be 1. You theoretically could default to a six-sided die too, but that's not anywhere near so safe an assumption.
With that, you're close to a functional program because all you need is a loop to process more than one die in a request. It's easily done with a while loop, but let's add some additional smarts to the script:
```
for request in $* ; do
dice=$(echo $request | cut -dd -f1)
sides=$(echo $request | cut -dd -f2)
echo "Rolling $dice $sides-sided dice"
sum=0 # reset
while [ ${dice:=1} -gt 0 ] ; do
rolldie die $sides
echo " dice roll = $die"
sum=$(( $sum + $die ))
dice=$(( $dice - 1 ))
done
echo " sum total = $sum"
done
```
This is pretty solid actually, and although the output statements need to be cleaned up a bit, the code's basically fully functional:
```
$ dnd-dice 3d6 1d20 2d100 4d3 d5
Rolling 3 6-sided dice
dice roll = 5
dice roll = 6
dice roll = 5
sum total = 16
Rolling 1 20-sided dice
dice roll = 16
sum total = 16
Rolling 2 100-sided dice
dice roll = 76
dice roll = 84
sum total = 160
Rolling 4 3-sided dice
dice roll = 2
dice roll = 2
dice roll = 1
dice roll = 3
sum total = 8
Rolling 5-sided dice
dice roll = 2
sum total = 2
```
Did you catch that I fixed the case when $dice has no value? It's tucked into the reference in the while statement. Instead of referring to it as $dice, I'm using the notation ${dice:=1}, which uses the value specified unless it's null or unset, in which case the value 1 is assigned and used. It's a handy and perfect fix in this case.
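Here's the expansion in isolation, if you want to convince yourself:
```
$ unset dice
$ echo ${dice:=1}   # dice was unset, so 1 is assigned and printed
1
$ echo $dice        # the assignment persists
1
```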
In a game, you generally don't care much about individual die values; you just want to sum everything up and see what the total value is. So if you're rolling 4d20, for example, it's just a single value you calculate and share with the game master or dungeon master.
A bit of output statement cleanup and you can do that:
```
$ dnd-dice.sh 3d6 1d20 2d100 4d3 d5
3d6 = 16
1d20 = 13
2d100 = 74
4d3 = 8
d5 = 2
```
Let's run it a second time just to ensure you're getting different values too:
```
3d6 = 11
1d20 = 10
2d100 = 162
4d3 = 6
d5 = 3
```
There are definitely different values, and it's a pretty useful script, all in all.
You could create a number of variations with this as a basis, including what some gamers enjoy called "exploding dice". The idea is simple: if you roll the best possible value, you get to roll again and add the second value too. Roll a d20 and get a 20? You can roll again, and your result is then 20 + whatever the second value is. Where this gets crazy is that you can do this for multiple cycles, so a d20 could become 30, 40 or even 50.
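Here's a minimal sketch of an exploding roll built on the rolldie function above (the cap of five extra rolls is my own addition, to rule out a theoretical endless streak):
```
explodingroll()
{
    local result=$1 sides=$2 total=0 extras=0
    rolldie roll $sides
    total=$roll
    while [ $roll -eq $sides -a $extras -lt 5 ] ; do
        rolldie roll $sides           # best possible value: roll again and add it
        total=$(( $total + $roll ))
        extras=$(( $extras + 1 ))
    done
    eval $result=$total
}
```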
And, that's it for this article. There isn't much else you can do with dice at this point. In my next article, I'll look at...well, you'll have to wait and see! Don't forget, if there's a topic you'd like me to tackle, please send me a note!
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/shell-scripting-dungeons-dragons-and-dice
Author: [Dave Taylor][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.linuxjournal.com/users/dave-taylor
[1]:http://www.linuxjournal.com/content/shell-scripting-bunco-game

View File

@ -1,84 +0,0 @@
Evolving Your Own Life: Introducing Biogenesis
======
Biogenesis provides a platform where you can create entire ecosystems of lifeforms and see how they interact and how the system as a whole evolves over time.
You always can get the latest version from the project's main [website][1], but it also should be available in the package management systems for most distributions. For Debian-based distributions, install Biogenesis with the following command:
```
sudo apt-get install biogenesis
```
If you do download it directly from the project website, you also need to have a Java virtual machine installed in order to run it.
To start it, you either can find the appropriate entry in the menu of your desktop environment, or you simply can type biogenesis in a terminal window. When it first starts, you will get an empty window within which to create your world.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof1.png)
Figure 1. When you first start Biogenesis, you get a blank canvas so you can start creating your world.
The first step is to create a world. If you have a previous instance that you want to continue with, click the Game→Open menu item and select the appropriate file. If you want to start fresh, click Game→New to get a new world with a random selection of organisms.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof2.png)
Figure 2. When you launch a new world, you get a random selection of organisms to start your ecosystem.
The world starts right away, with organisms moving and potentially interacting immediately. However, you can pause the world by clicking on the icon that is second from the right in the toolbar. Alternatively, you also can just press the p key to pause and resume the evolution of the world.
At the bottom of the window, you'll find details about the world as it currently exists. There is a display of the frames per second, along with the current time within the world. Next, there is a count of the current population of organisms. And finally, there is a display of the current levels of oxygen and carbon dioxide. You can adjust the amount of carbon dioxide within the world either by clicking the relevant icon in the toolbar or selecting the World menu item and then clicking either Increase CO2 or Decrease CO2.
There also are several parameters that govern how the world works and how your organisms will fare. If you select World→Parameters, you'll see a new window where you can play with those values.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof3.png)
Figure 3. The parameter configuration window allows you to set parameters on the physical characteristics of the world, along with parameters that control the evolution of your organisms.
The General tab sets the amount of time per frame and whether hardware acceleration is used for display purposes. The World tab lets you set the physical characteristics of the world, such as the size and the initial oxygen and carbon dioxide levels. The Organisms tab allows you to set the initial number of organisms and their initial energy levels. You also can set their life span and mutation rate, among other items. The Metabolism tab lets you set the parameters around photosynthetic metabolism. And, the Genes tab allows you to set the probabilities and costs for the various genes that can be used to define your organisms.
What about the organisms within your world though? If you click on one of the organisms, it will be highlighted and the display will change.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof4.png)
Figure 4. You can select individual organisms to find information about them, as well as apply different types of actions.
The icon toolbar at the top of the window will change to provide actions that apply to organisms. At the bottom of the window is an information bar describing the selected organism. It shows physical characteristics of the organism, such as age, energy and mass. It also describes its relationships to other organisms. It does this by displaying the number of its children and the number of its victims, as well as which generation it is.
If you want even more detail about an organism, click the Examine genes button in the bottom bar. This pops up a new window called the Genetic Laboratory that allows you to look at and alter the genes making up this organism. You can add or delete genes, as well as change the parameters of existing genes.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof5.png)
Figure 5. The Genetic Laboratory allows you to play with the individual genes that make up an organism.
Right-clicking on a particular organism displays a drop-down menu that provides even more tools to work with. The first one allows you to track the selected organism as the world evolves. The next two entries allow you either to feed your organism extra food or weaken it. Normally, organisms need a certain amount of energy before they can reproduce. Selecting the fourth entry forces the selected organism to reproduce immediately, regardless of the energy level. You also can choose either to rejuvenate or outright kill the selected organism. If you want to increase the population of a particular organism quickly, simply copy and paste that organism several times.
Once you have a particularly interesting organism, you likely will want to be able to save it so you can work with it further. When you right-click an organism, one of the options is to export the organism to a file. This pops up a standard save dialog box where you can select the location and filename. The standard file ending for Biogenesis genetic code files is .bgg. Once you start to have a collection of organisms you want to work with, you can use them within a given world by right-clicking a blank location on the canvas and selecting the import option. This allows you to pull those saved organisms back into a world that you are working with.
Once you have allowed your world to evolve for a while, you probably will want to see how things are going. Clicking World→Statistics will pop up a new window where you can see what's happening within your world.
![](http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12224biof6.png)
Figure 6. The statistics window gives you a breakdown of what's happening within the world you have created.
The top of the window gives you the current statistics, including the time, the number of organisms, how many are dead, and the oxygen and carbon dioxide levels. It also provides a bar with the relative proportions of the genes.
Below this pane is a list of some remarkable organisms within your world. These are organisms that have had the most children, the most victims or those that are the most infected. This way, you can focus on organisms that are good at the traits you're interested in.
On the right-hand side of the window is a display of the world history to date. The top portion displays the history of the population, and the bottom portion displays the history of the atmosphere. As your world continues evolving, click the update button to get the latest statistics.
This software package could be a great teaching tool for learning about genetics, the environment and how the two interact. If you find a particularly interesting organism, be sure to share it with the community at the project website. It might be worth a look there for starting organisms too, allowing you to jump-start your explorations.
--------------------------------------------------------------------------------
via: http://www.linuxjournal.com/content/evolving-your-own-life-introducing-biogenesis
Author: [Joey Bernard][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.linuxjournal.com/users/joey-bernard
[1]:http://biogenesis.sourceforge.net