Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-02-05 00:15:13 +08:00
commit 8909ddee75
15 changed files with 1685 additions and 323 deletions

View File

@ -0,0 +1,80 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 open governance questions every project needs to answer)
[#]: via: (https://opensource.com/article/20/2/open-source-projects-governance)
[#]: author: (Gordon Haff https://opensource.com/users/ghaff)
6 open governance questions every project needs to answer
======
Open governance insights from Chris Aniszczyk, VP of Developer Relations
at the Linux Foundation.
![Two government buildings][1]
When we think about what needs to be in place for an open source project to function, one of the first things to come to mind is probably a license. For one thing, absent an approved [Open Source Initiative (OSI) license][2], a project isn't truly open source in the minds of many. Furthermore, the choice to use a copyleft license like the GNU General Public License (GPL) or a permissive license like the Massachusetts Institute of Technology (MIT) license can affect the sort of community that grows up around and uses the project.
However, Chris Aniszczyk, VP of Developer Relations at the Linux Foundation, argues that it's equally important to consider the **open governance of a project** because the license itself doesn't actually tell you how the project is governed.
These are some of the questions that Aniszczyk argues need to be answered. He adds that answering these questions before disputes arise, in a way that's viewed as open and fair to all participants, leads to projects that tend to be more successful over the long term, especially as they grow in size.
### 6 open governance questions for every project
1. Who makes the decisions?
2. How are maintainers added?
3. Who owns the rights to the domain?
4. Who owns the rights to the trademarks?
5. How are those things governed?
6. Who owns how the build system works?
However, while all of these questions should be considered, there isn't one correct way of answering them. Different projects—and foundations hosting projects—take different approaches, whether to accommodate the requirements of a particular community or just for historical reasons.
The latter is often the case when a project uses something often called the Benevolent Dictator for Life (BDFL) model, in which one person—usually the project's founder—generally has the final say on major project decisions. Many projects end up here by default—perhaps most notably the Linux kernel. However, Red Hat's Joe Brockmeier observed to me that it's mostly considered an anti-pattern at this point. "While a few BDFL-driven projects have succeeded to do well, others have stumbled with that approach," he says.
Aniszczyk observes that "foundations have different sets of bylaws, charters, and how they're structured, and there are fascinating differences between these organizations. Like Apache is very famous for the Apache Way, and that's how they expect projects to operate. They very much have guardrails about how releases are done. [It's] kind of an incubator process where every project starts way before it graduates to a top-level project. In terms of how projects are governed, it's almost like an infinite amount of approaches," he concludes.
### Minimum requirements
That said, Aniszczyk lists some minimum requirements.
"Our pattern, at least, in many Linux Foundation and Cloud Native Computing Foundation (CNCF) projects, is a _governance.md_ file, which describes how decisions are made, how things are governed, how maintainers are added and removed, how sub-projects are added and removed, etc., and how releases are done. That would be step one," he says.
#### Ownership
Secondly, he doesn't "think you could do open governance without assets being neutrally owned. At the end of the day, someone owns the domain, the rights to the trademark, some of the copyright, potentially. There are many great organizations out there that are super lightweight. There are things like the Apache Foundation, Software in the Public Interest, and the Software Freedom Conservancy."
Aniszczyk also sees some common approaches as at least potential anti-patterns. A key example is contributor license agreements (CLAs), which define the terms under which intellectual property, like code, is contributed to a project. He says that if a company wants "to build a product or use a dual license type model, that's a very valid reason for a CLA. Otherwise, I view CLA as a high friction tool for developers."
#### Developer Certificate of Origin
Instead, he generally encourages people to "use what we call the 'Developer Certificate of Origin.' It's how the Linux kernel works, where basically it takes all the basic things that most CLAs do, which would be like, Did I write this code? Did I not copy it elsewhere? Do I have the rights to give this to you, and you sign off on? It's been a very successful model played out in the kernel and many other ecosystems. I'm generally not really supportive of having CLAs unless there's a real strict business need."
#### Naming a project
He also sees a lot of what he considers mistakes in naming. "Project branding is super important. There's a common pattern where people will start a project, it could be within a company or yourself, or you have a startup, and you'll call it, let's say, 'Docker.' Then you have Docker the project, and you have Docker, the company. Then you also have Docker the product or Docker the enterprise product. All those things serve different audiences. It leads to confusion because I have an inherent belief that the name of something has a value proposition attached to it. Please name your company separate from your project, from your product," he argues.
#### Trust
Finally, Aniszczyk points to the role of open governance in building trust and confidence that a company can't just take a project unilaterally for its own ends. "Trust is table stakes in order to build strong communities because, without openly governed institutions in projects, trust is very hard to come by," he concludes.
_The Innovate @Open podcast episode from which Chris Aniszczyk's remarks were drawn can be heard [here][3]._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/open-source-projects-governance
作者:[Gordon Haff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ghaff
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_lawdotgov2.png?itok=n36__lZj (Two government buildings)
[2]: https://opensource.org/licenses
[3]: https://grhpodcasts.s3.amazonaws.com/cra1911.mp3

View File

@ -0,0 +1,127 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bulletin Board Systems: The VICE Exposé)
[#]: via: (https://twobithistory.org/2020/02/02/bbs.html)
[#]: author: (Two-Bit History https://twobithistory.org)
Bulletin Board Systems: The VICE Exposé
======
By now, you have almost certainly heard of the dark web. On sites unlisted by any search engine, in forums that cannot be accessed without special passwords or protocols, criminals and terrorists meet to discuss conspiracy theories and trade child pornography.
We have reported before on the dark web's ["hurtcore" communities][1], its [human trafficking markets][2], its [rent-a-hitman websites][3]. We have explored [the challenges the dark web presents to regulators][4], the rise of [dark web revenge porn][5], and the frightening size of [the dark web gun trade][6]. We have kept you informed about that one dark web forum where you can make like Walter White and [learn how to manufacture your own drugs][7], and also about—thanks to our foreign correspondent—[the Chinese dark web][8]. We have even attempted to [catalog every single location on the dark web][9]. Our coverage of the dark web has been nothing if not comprehensive.
But I wanted to go deeper.
We know that below the surface web is the deep web, and below the deep web is the dark web. It stands to reason that below the dark web there should be a deeper, darker web.
A month ago, I set out to find it. Unsure where to start, I made a post on _Reddit_, a website frequented primarily by cosplayers and computer enthusiasts. I asked for a guide, a Styx ferryman to bear me across to the mythical underworld I sought to visit.
Only minutes after I made my post, I received a private message. "If you want to see it, I'll take you there," wrote _Reddit_ user FingerMyKumquat. "But I'll warn you just once—it's not pretty to see."
### Getting Access
This would not be like visiting Amazon to shop for toilet paper. I could not just enter an address into the address bar of my browser and hit go. In fact, as my Charon informed me, where we were going, there are no addresses. At least, no web addresses.
But where exactly were we going? The answer: Back in time. The deepest layer of the internet is also the oldest. Down at this deepest layer exists a secret society of “bulletin board systems,” a network of underground meetinghouses that in some cases have been in continuous operation since the 1980s—since before Facebook, before Google, before even stupidvideos.com.
To begin, I needed to download software that could handle the ancient protocols used to connect to the meetinghouses. I was told that bulletin board systems today use an obsolete military protocol called Telnet. Once upon a time, though, they operated over the phone lines. To connect to a system back then you had to dial its _phone number_.
The software I needed was called [SyncTerm][10]. It was not available on the App Store. In order to install it, I had to compile it. This is a major barrier to entry, I am told, even to veteran computer programmers.
When I had finally installed SyncTerm, my guide said he needed to populate my directory. I asked what that was a euphemism for, but was told it was not a euphemism. Down this far, there are no search engines, so you can only visit the bulletin board systems you know how to contact. My directory was the list of bulletin board systems I would be able to contact. My guide set me up with just seven, which he said would be more than enough.
_More than enough for what,_ I wondered. Was I really prepared to go deeper than the dark web? Was I ready to look through this window into the black abyss of the human soul?
![][11] _The vivid blue interface of SyncTerm. My directory of BBSes on the left._
### Heatwave
I decided first to visit the bulletin board system called "Heatwave," which I imagined must be a hangout for global warming survivalists. I "dialed" in. The next thing I knew, I was being asked if I wanted to create a user account. I had to be careful to pick an alias that would be inconspicuous in this sub-basement of the internet. I considered "DonPablo," and "z3r0day," but finally chose "ripper"—a name I could remember because it is also the name of my great-aunt Meredith's Shih Tzu. I was then asked where I was dialing from; I decided "xxx" was the right amount of enigmatic.
And then—I was in. Curtains of fire rolled down my screen and dispersed, revealing the main menu of the Heatwave bulletin board system.
![][12] _The main menu of the Heatwave BBS._
I had been told that even in the glory days of bulletin board systems, before the rise of the world wide web, a large system would only have several hundred users or so. Many systems were more exclusive, and most served only users in a single telephone area code. But how many users dialed the “Heatwave” today? There was a main menu option that read “(L)ast Few Callers,” so I hit “L” on my keyboard.
My screen slowly filled with a large table, listing all of the system's "callers" over the last few days. Who were these shadowy outcasts, these expert hackers, these denizens of the digital demimonde? My eyes scanned down the list, and what I saw at first confused me: There was a "Dan," calling from St. Louis, MO. There was also a "Greg Miller," calling from Portland, OR. Another caller claimed he was "George" calling from Campellsburg, KY. Most of the entries were like that.
It was a joke, of course. A meme, a troll. It was normcore fashion in noms de guerre. These were thrill-seeking Palo Alto adolescents on Adderall making fun of the surface web. They weren't fooling me.
I wanted to know what they talked about with each other. What cryptic colloquies took place here, so far from public scrutiny? My index finger, with ever so slight a tremble, hit “M” for “(M)essage Areas.”
Here, I was presented with a choice. I could enter the area reserved for discussions about "TI-99 and Geneve," which I did not dare do, not knowing what that could possibly mean. I could also enter the area for discussions about "Other," which seemed like a safe place to start.
The system showed me message after message. There was advice about how to correctly operate a leaf-blower, as well as a protracted debate about the depth of the Strait of Hormuz relative to the draft of an aircraft carrier. I assumed the real messages were further on, and indeed I soon spotted what I was looking for. The user “Kevin” was complaining to other users about the side effects of a drug called Remicade. This was not a drug I had heard of before. Was it some powerful new synthetic stimulant? A cocktail of other recreational drugs? Was it something I could bring with me to impress people at the next VICE holiday party?
I googled it. Remicade is used to treat rheumatoid arthritis and Crohn's disease.
In reply to the original message, there was some further discussion about high resting heart rates and mechanical heart valves. I decided that I had gotten lost and needed to contact FingerMyKumquat. "Finger," I messaged him, "What is this shit I'm looking at here? I want the real stuff. I want blackmail and beheadings. Show me the scum of the earth!"
"Perhaps you're ready for the SpookNet," he wrote back.
### SpookNet
Each bulletin board system is an island in the television-static ocean of the digital world. Each system's callers are lonely sailors come into port after many a month plying the seas.
But the bulletin board systems are not entirely disconnected. Faint phosphorescent filaments stretch between the islands, links in the special-purpose networks that were constructed—before the widespread availability of the internet—to propagate messages from one system to another.
One such network is the SpookNet. Not every bulletin board system is connected to the SpookNet. To get on, I first had to dial “Reality Check.”
![][13] _The Reality Check BBS._
Once I was in, I navigated my way past the main menu and through the SpookNet gateway. What I saw then was like a catalog index for everything stored in that secret Pentagon warehouse from the end of the _X-Files_ pilot. There were message boards dedicated to UFOs, to cryptography, to paranormal studies, and to “End Times and the Last Days.” There was a board for discussing “Truth, Polygraphs, and Serums,” and another for discussing “Silencers of Information.” Here, surely, I would find something worth writing about in an article for VICE.
I browsed and I browsed. I learned about which UFO documentaries are worth watching on Netflix. I learned that “paper mill” is a derogatory term used in the intelligence community (IC) to describe individuals known for constantly trying to sell “explosive” or “sensitive” documents—as in the sentence, offered as an example by one SpookNet user, “Damn, here comes that paper mill Juan again.” I learned that there was an effort afoot to get two-factor authentication working for bulletin board systems.
“These are just a bunch of normal losers,” I finally messaged my guide. “Mostly they complain about anti-vaxxers and verses from the Quran. This is just _Reddit_!”
“Huh,” he replied. “When you said scum of the earth, did you mean something else?”
I had one last idea. In their heyday, bulletin board systems were infamous for being where everyone went to download illegal, cracked computer software. An entire subculture evolved, with gangs of software pirates competing to be the first to crack a new release. The first gang to crack the new software would post their “warez” for download along with a custom piece of artwork made using lo-fi ANSI graphics, which served to identify the crack as their own.
I wondered if there were any old warez to be found on the Reality Check BBS. I backed out of the SpookNet gateway and keyed my way to the downloads area. There were many files on offer there, but one in particular caught my attention: a 5.3 megabyte file just called “GREY.”
I downloaded it. It was a complete PDF copy of E. L. James's _50 Shades of Grey_.
_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][14] on Twitter or subscribe to the [RSS feed][15] to make sure you know when a new post is out._
_Previously on TwoBitHistory…_
> I first heard about the FOAF (Friend of a Friend) standard back when I wrote my post about the Semantic Web. I thought it was a really interesting take on social networking and I've wanted to write about it since. Finally got around to it!<https://t.co/VNwT8wgH8j>
>
> — TwoBitHistory (@TwoBitHistory) [January 5, 2020][16]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2020/02/02/bbs.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://www.vice.com/en_us/article/mbxqqy/a-journey-into-the-worst-corners-of-the-dark-web
[2]: https://www.vice.com/en_us/article/vvbazy/my-brief-encounter-with-a-dark-web-human-trafficking-site
[3]: https://www.vice.com/en_us/article/3d434v/a-fake-dark-web-hitman-site-is-linked-to-a-real-murder
[4]: https://www.vice.com/en_us/article/ezv85m/problem-the-government-still-doesnt-understand-the-dark-web
[5]: https://www.vice.com/en_us/article/53988z/revenge-porn-returns-to-the-dark-web
[6]: https://www.vice.com/en_us/article/j5qnbg/dark-web-gun-trade-study-rand
[7]: https://www.vice.com/en_ca/article/wj374q/inside-the-dark-web-forum-that-tells-you-how-to-make-drugs
[8]: https://www.vice.com/en_us/article/4x38ed/the-chinese-deep-web-takes-a-darker-turn
[9]: https://www.vice.com/en_us/article/vv57n8/here-is-a-list-of-every-single-possible-dark-web-site
[10]: http://syncterm.bbsdev.net/
[11]: https://twobithistory.org/images/sync.png
[12]: https://twobithistory.org/images/heatwave-main-menu.png
[13]: https://twobithistory.org/images/reality.png
[14]: https://twitter.com/TwoBitHistory
[15]: https://twobithistory.org/feed.xml
[16]: https://twitter.com/TwoBitHistory/status/1213920921251131394?ref_src=twsrc%5Etfw

View File

@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Private equity firms are gobbling up data centers)
[#]: via: (https://www.networkworld.com/article/3518817/private-equity-firms-are-gobbling-up-the-data-center-market.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Private equity firms are gobbling up data centers
======
Private equity firms accounted for 80% of all data-center acquisitions in 2019. Is that a good thing?
scanrail / Getty Images
Merger and acquisition activity surrounding [data-center][1] facilities is starting to resemble the Oklahoma Land Rush, and private-equity firms are taking most of the action.
New research from Synergy Research Group counted more than 100 deals in 2019, 50% growth over 2018, and private-equity companies accounted for 80% of them.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
M&A activity broke the 100-transaction mark for the first time in 2019, and that comes despite a 45% decline in public-company activity, such as the massive Digital Realty Trust [purchase][3] of Interxion. At the same time, the size of the deals dropped in 2019: fewer were worth $1 billion or more than in 2018, and the average deal value fell 24%.
Since 2015, there have been approximately 350 data-center deals, both public and private, with a total value of $75 billion, according to Synergy. Over this period, private equity buyers have accounted for 57% of the deal volume. Deals were roughly a 50-50 split until 2018 when public company purchases began to trail off.
Anecdotally, I've heard one reason for the decline in big deals is that there are no more big purchases to be had, at least in the US. DRT/Interxion is an exception, and Interxion is a foreign company. Other big deals have already been done, like Equinix purchasing Verizon's data centers for $3.6 billion in 2017 or AT&T selling its data centers to private-equity company Brookfield in 2019. There just isn't much left to sell.
The question becomes: Is this necessarily a good thing? Private equity firms have something of a well-earned bad reputation for buying up companies, sucking all the profit out of them, and discarding the empty husk.
But John Dinsdale, chief analyst for Synergy, said not to worry, that the private equity firms grabbing data centers are looking to grow them. “This is a heavily infrastructure-oriented business where what you can take out is pretty directly related to what you put in. A lot of these equity investors are looking to build something rather than quickly flipping the assets,” he said via e-mail.
He added, "In these types of business there isn't that much manpower, HQ or overhead there to be stripped out." Which is true. Data centers are pretty lightly staffed. It was a national news item several years ago that Apple's $1 billion data center in rural North Carolina would only [create 50 jobs][5]. That's true for most data centers.
At least one big player, Digital Realty Trust, was formed in 2004 after private-equity firm GI Partners bought out 21 data centers from a bankruptcy. DRT has grown to 214 centers in the U.S. and Europe.
So in this case, a private equity firm buying out your data center provider might prove to be a good thing.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518817/private-equity-firms-are-gobbling-up-the-data-center-market.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3451437/digital-realty-acquisition-of-interxion-reshapes-data-center-landscape.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.cultofmac.com/132012/despite-huge-unemployment-rate-apples-1-billion-data-super-center-only-created-50-new-jobs/
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -1,101 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for January 2020)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
4 cool new projects to try in COPR for January 2020
======
![][1]
COPR is a [collection][2] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
This article presents a few new and interesting projects in COPR. If you're new to using COPR, see the [COPR User Documentation][3] for how to get started.
### Contrast
[Contrast][4] is a small app for checking the contrast between two colors and determining whether it meets the requirements specified in [WCAG][5]. The colors can be selected either by their RGB hex codes or with a color picker tool. In addition to showing the contrast ratio, Contrast displays short sample text on a background in the selected colors to demonstrate the comparison.
![][6]
#### Installation instructions
The [repo][7] currently provides Contrast for Fedora 31 and Rawhide. To install Contrast, use these commands:
```
sudo dnf copr enable atim/contrast
sudo dnf install contrast
```
### Pamixer
[Pamixer][8] is a command-line tool for adjusting and monitoring the volume levels of sound devices using PulseAudio. You can display the current volume of a device, set it directly, increase or decrease it, or mute and unmute it. Pamixer can also list all sources and sinks.
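For instance, typical invocations look like this (a quick sketch based on the flags the project documents; check _pamixer \--help_ for the full list on your version):
```
pamixer --get-volume     # print the current volume of the default sink
pamixer --set-volume 50  # set the volume to 50%
pamixer --increase 5     # raise the volume by 5 percentage points
pamixer --toggle-mute    # mute or unmute the default sink
pamixer --list-sinks     # show all available output devices
```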
#### Installation instructions
The [repo][9] currently provides Pamixer for Fedora 31 and Rawhide. To install Pamixer, use these commands:
```
sudo dnf copr enable opuk/pamixer
sudo dnf install pamixer
```
### PhotoFlare
[PhotoFlare][10] is an image editor. It has a simple and well-arranged user interface, where most of the features are available in the toolbars. PhotoFlare provides features such as various color adjustments, image transformations, filters, brushes and automatic cropping, although it doesn't support working with layers. Also, PhotoFlare can edit pictures in batches, applying the same filters and transformations on all pictures and storing the results in a specified directory.
![][11]
#### Installation instructions
The [repo][12] currently provides PhotoFlare for Fedora 31. To install PhotoFlare, use these commands:
```
sudo dnf copr enable adriend/photoflare
sudo dnf install photoflare
```
### Tdiff
[Tdiff][13] is a command-line tool for comparing two file trees. In addition to showing that some files or directories exist in one tree only, tdiff shows differences in file sizes, types and contents, owner user and group ids, permissions, modification time and more.
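Basic usage is a single command taking the two trees to compare (the directory paths here are hypothetical; see the project page for the available options):
```
# compare two file trees and report differences in size, type,
# contents, ownership, permissions, and modification time
tdiff /srv/backup/etc /etc
```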
#### Installation instructions
The [repo][14] currently provides tdiff for Fedora 29-31 and Rawhide, EPEL 6-8 and other distributions. To install tdiff, use these commands:
```
sudo dnf copr enable fif/tdiff
sudo dnf install tdiff
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://gitlab.gnome.org/World/design/contrast
[5]: https://www.w3.org/WAI/standards-guidelines/wcag/
[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/contrast-screenshot.png
[7]: https://copr.fedorainfracloud.org/coprs/atim/contrast/
[8]: https://github.com/cdemoulins/pamixer
[9]: https://copr.fedorainfracloud.org/coprs/opuk/pamixer/
[10]: https://photoflare.io/
[11]: https://fedoramagazine.org/wp-content/uploads/2020/01/photoflare-screenshot.png
[12]: https://copr.fedorainfracloud.org/coprs/adriend/photoflare/
[13]: https://github.com/F-i-f/tdiff
[14]: https://copr.fedorainfracloud.org/coprs/fif/tdiff/

View File

@ -1,105 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (qianmingtian)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intro to the Linux command line)
[#]: via: (https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Intro to the Linux command line
======
Here are some warm-up exercises for anyone just starting to use the Linux command line. Warning: It can be addictive.
[Sandra Henry-Stocker / Linux][1] [(CC0)][2]
If you're new to Linux or have simply never bothered to explore the command line, you may not understand why so many Linux enthusiasts get excited typing commands when they're sitting at a comfortable desktop with plenty of tools and apps available to them. In this post, we'll take a quick dive to explore the wonders of the command line and see if maybe we can get you hooked.
First, to use the command line, you have to open up a command tool (also referred to as a "command prompt"). How to do this will depend on which version of Linux you're running. On Red Hat, for example, you might see an Activities tab at the top of your screen that opens a list of options and a small window for entering a command (like "cmd", which will open the window for you). On Ubuntu and some others, you might see a small terminal icon along the left-hand side of your screen. On many systems, you can open a command window by pressing the **Ctrl+Alt+t** keys at the same time.
You will also find yourself on the command line if you log into a Linux system using a tool like PuTTY.
[][3]
BrandPost Sponsored by HPE
[Take the Intelligent Route with Consumption-Based Storage][3]
Combine the agility and economics of HPE storage with HPE GreenLake and run your IT department with efficiency.
Once you get your command line window, you'll find yourself sitting at a prompt. It could be just a **$** or something as elaborate as "**user@system:~$**", but it means that the system is ready to run commands for you.
Once you get this far, it will be time to start entering commands. Below are some of the commands to try first, and [here is a PDF][4] of some particularly useful commands and a two-sided command cheatsheet suitable for printing out and laminating.
```
Command   What it does
pwd       show me where I am in the file system (initially, this will be your home directory)
ls        list my files
ls -a     list even more of my files (including those that start with a period)
ls -al    list my files with lots of details (including dates, file sizes and permissions)
who       show me who is logged in (don't be disappointed if it's only you)
date      remind me what day today is (shows the time too)
ps        list my running processes (might just be your shell and the "ps" command)
```
Once you've gotten used to your Linux home from the command line point of view, you can begin to explore. Maybe you'll feel ready to wander around the file system with commands like these:
```
Command      What it does
cd /tmp      move to another directory (in this case, /tmp)
ls           list files in that location
cd           go back home (with no arguments, cd always takes you back to your home directory)
cat .bashrc  display the contents of a file (in this case, .bashrc)
history      show your recent commands
echo hello   say "hello" to yourself
cal          show a calendar for the current month
```
To get a feeling for why more advanced Linux users like the command line so much, you will want to try some other features, like redirection and pipes. Redirection is when you take the output of a command and put it in a file instead of displaying it on your screen. A pipe is when you take the output of one command and send it to another command that manipulates it in some way. Here are commands to try:
[[Get regularly scheduled insights by signing up for Network World newsletters.]][5]
```
Command                    What it does
echo "echo hello" > tryme  create a new file and put the words "echo hello" into it
chmod 700 tryme            make the new file executable
./tryme                    run the new file (it should run the command it contains and display "hello")
ps aux                     show all running processes
ps aux | grep $USER        show all running processes, but limit the output to lines containing your username
echo $USER                 display your username using an environment variable
whoami                     display your username with a command
who | wc -l                count how many users are currently logged in
```
### Wrap-Up
Once you get used to the basic commands, you can explore other commands and try your hand at writing scripts. You might find that Linux is a lot more powerful and nice to use than you ever imagined.
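If you want a starting point, here is a minimal first script (an illustrative sketch; the filename is arbitrary):
```
#!/bin/bash
# save as hello.sh, make it executable (chmod +x hello.sh), then run ./hello.sh
echo "Hello, $USER! Today is $(date +%A)."
echo "There are $(who | wc -l) login session(s) on this system."
```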
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://commons.wikimedia.org/wiki/File:Tux.svg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Give an old MacBook new life with Linux)
[#]: via: (https://opensource.com/article/20/2/macbook-linux-elementary)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
Give an old MacBook new life with Linux
======
Elementary OS's latest release, Hera, is an impressive platform for
resurrecting an outdated MacBook.
![Coffee and laptop][1]
When I installed Apple's [MacOS Mojave][2], it slowed my formerly reliable MacBook Air to a crawl. My computer, released in 2015, has 4GB RAM, an i5 processor, and a Broadcom 4360 wireless card, but Mojave proved too much for my daily driver—it made working with [GnuCash][3] impossible, and it whetted my appetite to return to Linux. I am glad I did, but I felt bad that I had this perfectly good MacBook lying around unused.
I tried several Linux distributions on my MacBook Air, but there was always a gotcha. Sometimes it was the wireless card; another time, it was a lack of support for the touchpad. After reading some good reviews, I decided to try [Elementary OS][4] 5.0 (Juno). I [made a boot drive][5] with my USB creator and inserted it into the MacBook Air. I got to a live desktop, and the operating system recognized my Broadcom wireless chipset—I thought this just might work!
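(For reference, one common way to make such a boot drive from a downloaded ISO is with dd. This is only a sketch: the ISO filename and /dev/sdX are placeholders, and dd overwrites whatever device you point it at, so confirm the device name with lsblk first.)
```
lsblk   # identify the USB drive (assumed here to be /dev/sdX)
sudo dd if=elementaryos.iso of=/dev/sdX bs=4M status=progress && sync
```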
I liked what I saw in Elementary OS; its [Pantheon][6] desktop is really great, and its look and feel are familiar to Apple users—it has a dock at the bottom of the display and icons that lead to useful applications. I liked the preview of what I could expect, so I decided to install it—and then my wireless disappeared. That was disappointing. I really liked Elementary OS, but no wireless is a non-starter.
Fast-forward to December 2019, when I heard a review on the [Linux4Everyone][7] podcast about Elementary's latest release, v5.1 (Hera), bringing a MacBook back to life. So, I decided to try again with Hera. I downloaded the ISO, created the bootable drive, plugged it in, and this time the operating system recognized my wireless card. I was in business!
![MacBook Air with Hera][8]
I was overjoyed that my very light, yet powerful MacBook Air was getting a new life with Linux. I have been exploring Elementary OS in greater detail, and I can tell you that I am impressed.
### Elementary OS's features
According to [Elementary's blog][9], "The newly redesigned login and lock screen greeter looks sharper, works better, and fixes many reported issues with the previous greeter including focus issues, HiDPI issues, and better localization. The new design in Hera was in response to user feedback from Juno, and enables some nice new features."
"Nice new features" is an understatement—Elementary OS easily has one of the best-designed Linux user interfaces I have ever seen. A System Settings icon is on the dock by default; it is easy to change the settings, and soon I had the system configured to my liking. I need larger text sizes than the defaults, and the Universal Access controls are easy to use and allow me to set large text and high contrast. I can also adjust the dock with larger icons and other options.
![Elementary OS's Settings screen][10]
Pressing the Mac's Command key brings up a list of keyboard shortcuts, which is very helpful to new users.
![Elementary OS's Keyboard shortcuts][11]
Elementary OS ships with the [Epiphany][12] web browser, which I find quite easy to use. It's a bit different from Chrome, Chromium, or Firefox, but it is more than adequate.
For security-conscious users (as we should all be), Elementary OS's Security and Privacy settings provide multiple options, including a firewall, history, locking, automatic deletion of temporary and trash files, and an on/off switch for location services.
![Elementary OS's Privacy and Security screen][13]
### More on Elementary OS
Elementary OS was originally released in 2011, and its latest version, Hera, was released on December 3, 2019. [Cassidy James Blaede][14], Elementary's co-founder and CXO, is the operating system's UX architect. Cassidy loves to design and build useful, usable, and delightful digital products using open technologies.
Elementary OS has excellent user [documentation][15], and its code (licensed under GPL 3.0) is available on [GitHub][16]. Elementary OS encourages involvement in the project, so be sure to reach out and [join the community][17].
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/macbook-linux-elementary
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: https://en.wikipedia.org/wiki/MacOS_Mojave
[3]: https://www.gnucash.org/
[4]: https://elementary.io/
[5]: https://opensource.com/life/14/10/test-drive-linux-nothing-flash-drive
[6]: https://opensource.com/article/19/12/pantheon-linux-desktop
[7]: https://www.linux4everyone.com/20-macbook-pro-elementary-os
[8]: https://opensource.com/sites/default/files/uploads/macbookair_hera.png (MacBook Air with Hera)
[9]: https://blog.elementary.io/introducing-elementary-os-5-1-hera/
[10]: https://opensource.com/sites/default/files/uploads/elementaryos_settings.png (Elementary OS's Settings screen)
[11]: https://opensource.com/sites/default/files/uploads/elementaryos_keyboardshortcuts.png (Elementary OS's Keyboard shortcuts)
[12]: https://en.wikipedia.org/wiki/GNOME_Web
[13]: https://opensource.com/sites/default/files/uploads/elementaryos_privacy-security.png (Elementary OS's Privacy and Security screen)
[14]: https://github.com/cassidyjames
[15]: https://elementary.io/docs/learning-the-basics#learning-the-basics
[16]: https://github.com/elementary
[17]: https://elementary.io/get-involved

View File

@ -0,0 +1,169 @@
[#]: collector: (lujun9972)
[#]: translator: ( guevaraya)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Troubleshoot Kubernetes with the power of tmux and kubectl)
[#]: via: (https://opensource.com/article/20/2/kubernetes-tmux-kubectl)
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar)
Troubleshoot Kubernetes with the power of tmux and kubectl
======
A kubectl plugin that uses tmux to make troubleshooting Kubernetes much
simpler.
![Woman sitting in front of her laptop][1]
[Kubernetes][2] is a thriving open source container orchestration platform that offers scalability, high availability, robustness, and resiliency for applications. One of its many features is support for running custom scripts or binaries through its primary client binary, [kubectl][3]. Kubectl is very powerful and allows users to do anything with it that they could do directly on a Kubernetes cluster.
### Troubleshooting Kubernetes with aliases
Anyone who uses Kubernetes for container orchestration is aware of its features, as well as the complexity its design brings. That complexity creates an urgent need to simplify troubleshooting in Kubernetes with something quicker that needs little manual intervention (except in critical situations).
There are many scenarios to consider when it comes to troubleshooting functionality. In one scenario, you know what you need to run, but the command's syntax—even when it can run as a single command—is excessively complex, or it may need one or two inputs to work.
For example, if you frequently need to jump into a running container in the **kube-system** namespace, you may find yourself repeatedly writing:
```
kubectl --namespace=kube-system exec -i -t <your-pod-name>
```
To simplify troubleshooting, you could use command-line aliases of these commands. For example, you could add the following to your dotfiles (.bashrc or .zshrc):
```
alias ksysex='kubectl --namespace=kube-system exec -i -t'
```
This is one of many examples from a [repository of common Kubernetes aliases][4] that shows one way to simplify functions in kubectl. For something simple like this scenario, an alias is sufficient.
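With that alias in place, jumping into a pod shortens to something like this (the pod name is hypothetical):
```
# expands to: kubectl --namespace=kube-system exec -i -t coredns-abc123 -- sh
ksysex coredns-abc123 -- sh
```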
### Switching to a kubectl plugin
A more complex troubleshooting scenario involves running many commands, one after the other, to investigate an environment and come to a conclusion. Aliases alone are not sufficient for this use case; you need repeatable logic and correlations between the many parts of your Kubernetes deployment. What you really need is automation to deliver the desired output in less time.
Consider 10 to 20—or even 50 to 100—namespaces holding different microservices on your cluster. What would be helpful for you to start troubleshooting this scenario?
* You would need something that can quickly tell which pod in which namespace is throwing errors.
* You would need something that can watch logs of all the pods in a namespace.
* You might also need to watch logs of certain pods in a specific namespace that have shown errors.
Any solution that covers these points would be very useful in investigating production issues as well as during development and testing cycles.
To create something more powerful than a simple alias, you can use [kubectl plugins][5]. Plugins are standalone scripts, written in any scripting language, that extend the functionality of the main kubectl command for your work as a Kubernetes admin.
To create a plugin, use the naming convention **kubectl-<your-plugin-name>**, copy the script to one of the directories in your **$PATH**, and give it executable permissions (**chmod +x**).
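As a minimal sketch (the plugin name **kubectl-hello** is hypothetical), a plugin can be nothing more than an executable script on your path:
```
# write the plugin script (any language works; bash here)
sudo tee /usr/local/bin/kubectl-hello >/dev/null <<'EOF'
#!/usr/bin/env bash
# print the cluster context kubectl is currently talking to
echo "Current context: $(kubectl config current-context)"
EOF
sudo chmod +x /usr/local/bin/kubectl-hello

kubectl hello   # kubectl discovers any executable named kubectl-* on $PATH
```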
After creating a plugin and moving it into your path, you can run it immediately. For example, I have kubectl-krawl and kubectl-kmux in my path:
```
$ kubectl plugin list
The following compatible plugins are available:
/usr/local/bin/kubectl-krawl
/usr/local/bin/kubectl-kmux
$ kubectl kmux
```
Now let's explore what this looks like when you power Kubernetes with tmux.
### Harnessing the power of tmux
[Tmux][6] is a very powerful tool that many sysadmins and ops teams rely on when troubleshooting, from splitting windows into panes for running parallel debugging on multiple machines to monitoring logs. One of its major advantages is that it can be used on the command line or in automation scripts.
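A few tmux commands illustrate the building blocks the plugin leans on (the session name and pod names here are hypothetical):
```
tmux new-session -d -s debug                           # start a detached session named "debug"
tmux send-keys -t debug 'kubectl logs -f pod-a' C-m    # stream logs in the first pane
tmux split-window -h -t debug 'kubectl logs -f pod-b'  # second pane, side by side
tmux select-layout -t debug tiled                      # arrange the panes in a tiled grid
tmux attach -t debug                                   # attach and watch both log streams
```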
I created [a kubectl plugin][7] that uses tmux to make troubleshooting much simpler. I will use annotations to walk through the logic behind the plugin (and leave it for you to go through the plugin's full code):
```
# NAMESPACE is the namespace to monitor.
# POD is the pod name.
# CONTAINERS is the list of container names.

# initialize a counter n to count the number of loop iterations,
# later used by tmux to split panes
n=0;

# start a loop over the list of pods and containers
while IFS=' ' read -r POD CONTAINERS
do
    # tmux: create a new window for each pod
    tmux neww $COMMAND -n $POD 2>/dev/null

    # start a loop over all containers inside a running pod
    for CONTAINER in ${CONTAINERS//,/ }
    do
        if [ x$POD = x -o x$CONTAINER = x ]; then
            # if any of the values is null, exit
            warn "Looks like there is a problem getting pods data."
            break
        fi

        # set the command to execute
        COMMAND="kubectl logs -f $POD -c $CONTAINER -n $NAMESPACE"

        # check for an existing tmux session
        if tmux has-session -t <session name> 2>/dev/null;
        then
            <set session exists>
        else
            <create session>
        fi

        # split panes in the current window for each container
        tmux selectp -t $n \; \
            splitw $COMMAND \; \
            select-layout tiled \;

    # end loop for containers
    done

    # rename the window to identify it by pod name
    tmux renamew $POD 2>/dev/null

    # increment the counter
    ((n+=1))

# end loop for pods
done< <(<fetch list of pod and containers from kubernetes cluster>)

# finally, select the first window and attach the session
tmux selectw -t <session name>:1 \; \
    attach-session -t <session name>\;
```
After the plugin script runs, it will produce output similar to the image below. Each pod has its own window, and each container (if there is more than one) has its own pane in its pod's window, streaming logs as they arrive. The beauty of tmux can be seen below; with the proper configuration, you can even see which windows have activity going on (see the white tabs).
![Output of kmux plugin][8]
### Conclusion
Aliases are always helpful for simple troubleshooting in Kubernetes environments. When the environment gets more complex, a kubectl plugin is a powerful option for more advanced scripting. There are no limits on which programming language you can use to write kubectl plugins. The only requirements are that the file follows the **kubectl-<name>** naming convention, is executable, is on your path, and doesn't share a name with an existing kubectl command.
To read the complete code or try the plugins I created, check my [kube-plugins-github][7] repository. Issues and pull requests are welcome.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/kubernetes-tmux-kubectl
作者:[Abhishek Tamrakar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tamrakar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_4.png?itok=VGZO8CxT (Woman sitting in front of her laptop)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://kubernetes.io/docs/reference/kubectl/overview/
[4]: https://github.com/ahmetb/kubectl-aliases/blob/master/.kubectl_aliases
[5]: https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/
[6]: https://opensource.com/article/19/6/tmux-terminal-joy
[7]: https://github.com/abhiTamrakar/kube-plugins
[8]: https://opensource.com/sites/default/files/uploads/kmux-output.png (Output of kmux plugin)

View File

@ -0,0 +1,430 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ansible Roles Quick Start Guide with Examples)
[#]: via: (https://www.2daygeek.com/ansible-roles-quick-start-guide-with-examples/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Ansible Roles Quick Start Guide with Examples
======
Ansible is an excellent configuration management and orchestration tool.
It is designed to easily automate the entire infrastructure.
We have written three articles in the past about Ansible.
If you are new to Ansible, I advise you to read the articles below, which will help you understand the basics of Ansible.
* **Part-1: [Ansible Automation Tool Installation, Configuration and Quick Start Guide][1]**
* **Part-2: [Ansible Ad-hoc Command Quick Start Guide with Examples][2]**
* **Part-3: [Ansible Playbooks Quick Start Guide with Examples][3]**
### What Are Ansible Roles?
Ansible roles provide a framework for automatically loading certain tasks, files, variables, templates, and handlers from a known file structure into a playbook.
The primary purpose of a role is to break a playbook into multiple pieces (files).
This makes it easier for you to write complex playbooks, and it makes them easier to reuse.
Breaking a playbook into multiple files also reduces syntax errors.
An Ansible playbook is a set of roles, and each role essentially performs a specific function.
Ansible roles are reusable (you can import them into other playbooks as well) because roles are independent of each other and do not depend on one another during execution.
Ansible offers two example directory structures that help you organize your playbook content.
You are not limited to these structures; you can create your own directory structure based on your needs.
Each directory has a **“main.yml”** file, which contains the main content for that directory:
### Ansible Roles Default Directory Structure
Ansible Best Practices provides the following two directory structures. The first is very simple and well suited for a small environment with simple production and inventory files.
```
production                 # inventory file for production servers
staging                    # inventory file for staging environment

group_vars/
   group1.yml              # here we assign variables to particular groups
   group2.yml
host_vars/
   hostname1.yml           # here we assign variables to particular systems
   hostname2.yml

library/                   # if any custom modules, put them here (optional)
module_utils/              # if any custom module_utils to support modules, put them here (optional)
filter_plugins/            # if any custom filter plugins, put them here (optional)

site.yml                   # master playbook
webservers.yml             # playbook for webserver tier
dbservers.yml              # playbook for dbserver tier

roles/
    common/                # this hierarchy represents a "role"
        tasks/             #
            main.yml       # <-- tasks file can include smaller files if warranted
        handlers/          #
            main.yml       # <-- handlers file
        templates/         # <-- files for use with the template resource
            ntp.conf.j2    # <------- templates end in .j2
        files/             #
            bar.txt        # <-- files for use with the copy resource
            foo.sh         # <-- script files for use with the script resource
        vars/              #
            main.yml       # <-- variables associated with this role
        defaults/          #
            main.yml       # <-- default lower priority variables for this role
        meta/              #
            main.yml       # <-- role dependencies
        library/           # roles can also include custom modules
        module_utils/      # roles can also include custom module_utils
        lookup_plugins/    # or other types of plugins, like lookup in this case

    webtier/               # same kind of structure as "common" was above, done for the webtier role
    monitoring/            # ""
    fooapp/                # ""
```
If you want to use this directory structure, run the commands below.
```
$ sudo mkdir -p group_vars host_vars library module_utils filter_plugins
$ sudo mkdir -p roles/common/{tasks,handlers,templates,files,vars,defaults,meta,library,module_utils,lookup_plugins}
$ sudo touch production staging site.yml roles/common/{tasks,handlers,templates,files,vars,defaults,meta}/main.yml
```
The second one is appropriate when you have a very complex inventory environment.
```
inventories/
   production/
      hosts               # inventory file for production servers
      group_vars/
         group1.yml       # here we assign variables to particular groups
         group2.yml
      host_vars/
         hostname1.yml    # here we assign variables to particular systems
         hostname2.yml

   staging/
      hosts               # inventory file for staging environment
      group_vars/
         group1.yml       # here we assign variables to particular groups
         group2.yml
      host_vars/
         stagehost1.yml   # here we assign variables to particular systems
         stagehost2.yml

library/
module_utils/
filter_plugins/

site.yml
webservers.yml
dbservers.yml

roles/
   common/
   webtier/
   monitoring/
   fooapp/
```
If you want to use this directory structure, run the commands below.
```
$ sudo mkdir -p inventories/{production,staging}/{group_vars,host_vars}
$ sudo touch inventories/{production,staging}/hosts
$ sudo mkdir -p group_vars host_vars library module_utils filter_plugins
$ sudo mkdir -p roles/common/{tasks,handlers,templates,files,vars,defaults,meta,library,module_utils,lookup_plugins}
$ sudo touch site.yml roles/common/{tasks,handlers,templates,files,vars,defaults,meta}/main.yml
```
### How to Create a Simple Ansible Roles Directory Structure
By default, there is no “roles” directory in your Ansible directory, so you have to create it first.
```
$ sudo mkdir /etc/ansible/roles
```
Use the following Ansible Galaxy command to create a simple directory structure for a role.
```
$ sudo ansible-galaxy init [/Path/to/Role_Name]
```
### What's Ansible Galaxy?
Ansible Galaxy refers to the Galaxy website, a free platform for finding, downloading, and sharing community-developed roles.
The Galaxy website offers pre-packaged units of work such as roles and collections. Whether you are provisioning infrastructure or deploying applications, you'll find plenty of roles for the tasks you do on a daily basis.
At the time of writing, I saw **23,478** results there, and the number is growing daily.
To demonstrate, we are going to create the **“webserver”** role. To do so, run the following command.
```
$ sudo ansible-galaxy init /etc/ansible/roles/webserver
- Role /etc/ansible/roles/webserver was created successfully
```
Once you have created a new role, use the tree command to view the detailed directory structure.
```
$ tree /etc/ansible/roles/webserver
/etc/ansible/roles/webserver
├── defaults
│ └── main.yml
├── files
├── handlers
│ └── main.yml
├── meta
│ └── main.yml
├── README.md
├── tasks
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
8 directories, 8 files
```
It comes with eight directories and eight files; the details are as follows.
  * **defaults:** Default variables for the role.
  * **files:** Static files that can be deployed via this role.
  * **handlers:** Contains handlers, which may be used by this role or even anywhere outside this role.
  * **meta:** Defines some metadata for this role.
  * **tasks:** Contains the main list of tasks to be executed by the role.
  * **templates:** Contains templates which can be deployed via this role.
  * **tests:** Contains an inventory file and a test.yml playbook for testing the role.
  * **vars:** Other variables for the role.
This is a sample playbook that sets up the Apache Web server on Debian and Red Hat-based systems.
```
$ sudo nano /etc/ansible/playbooks/webserver.yml

---
- hosts: web
  become: yes
  name: "Install and Configure Apache Web Server on Linux"

  tasks:
    - name: "Install Apache Web Server on RHEL Based Systems"
      yum: name=httpd update_cache=yes state=latest
      when: ansible_facts['os_family']|lower == "redhat"

    - name: "Install Apache Web Server on Debian Based Systems"
      apt: name=apache2 update_cache=yes state=latest
      when: ansible_facts['os_family']|lower == "debian"

    - name: "Start the Apache Web Server"
      service:
        name: httpd
        state: started
        enabled: yes
      when: ansible_facts['os_family']|lower == "redhat"

    - name: "Enable mod_rewrite module"
      apache2_module:
        name: rewrite
        state: present
      when: ansible_facts['os_family']|lower == "debian"
      notify:
        - Restart Apache2 Web Server

  handlers:
    - name: "Restart Apache2 Web Server"
      service:
        name: apache2
        state: restarted

    - name: "Restart httpd Web Server"
      service:
        name: httpd
        state: restarted
```
Let's break the playbook above into Ansible roles. If a task only has simple content, add it to the role's **“main.yml”** file; otherwise, create a separate **“xyz.yml”** file for each task.
**Make a note:** the **“notify”** directive should be included in the last task, which is why we have added it to the **“modules.yml”** file.
Create a separate task to install the Apache Web Server on Red Hat-based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/redhat.yml
---
- name: "Install Apache Web Server on RHEL Based Systems"
  yum:
    name: httpd
    update_cache: yes
    state: latest
```
Create a separate task to install the Apache Web Server on Debian-based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/debian.yml
---
- name: "Install Apache Web Server on Debian Based Systems"
  apt:
    name: apache2
    update_cache: yes
    state: latest
```
Create a separate task to start the Apache web server on Red Hat-based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/service-httpd.yml
---
- name: "Start the Apache Web Server"
  service:
    name: httpd
    state: started
    enabled: yes
```
Create a separate task to start the Apache web server on Debian-based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/service-apache2.yml
---
- name: "Start the Apache Web Server"
  service:
    name: apache2
    state: started
    enabled: yes
```
Create a separate task to copy the index file into the Apache web root directory.
```
$ sudo nano /etc/ansible/roles/webserver/tasks/configure.yml
---
- name: "Copy index.html file"
  copy:
    src: index.html    # looked up automatically in roles/webserver/files/
    dest: /var/www/html
```
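If the page contents should vary per host, the same step could use the role's templates directory instead; a minimal sketch, assuming a hypothetical `index.html.j2` file in `roles/webserver/templates/`:
```
---
- name: "Deploy index.html from a template"
  template:
    src: index.html.j2
    dest: /var/www/html/index.html
```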
Create a separate task to enable the rewrite module on Debian-based systems.
```
$ sudo nano /etc/ansible/roles/webserver/tasks/modules.yml
---
- name: "Enable mod_rewrite module"
  apache2_module:
    name: rewrite
    state: present
  notify:
    - "Restart Apache2 Web Server"
```
Finally, import all the tasks into the **“main.yml”** file of the tasks directory.
```
$ sudo nano /etc/ansible/roles/webserver/tasks/main.yml
---
# tasks file for /etc/ansible/roles/webserver
- import_tasks: redhat.yml
  when: ansible_facts['os_family']|lower == 'redhat'
- import_tasks: debian.yml
  when: ansible_facts['os_family']|lower == 'debian'
- import_tasks: service-httpd.yml
  when: ansible_facts['os_family']|lower == 'redhat'
- import_tasks: service-apache2.yml
  when: ansible_facts['os_family']|lower == 'debian'
- import_tasks: configure.yml
- import_tasks: modules.yml
  when: ansible_facts['os_family']|lower == 'debian'
```
Add the handler information to the **“main.yml”** file of the handlers directory.
```
$ sudo nano /etc/ansible/roles/webserver/handlers/main.yml
---
# handlers file for /etc/ansible/roles/webserver
- name: "Restart httpd Web Server"
  service:
    name: httpd
    state: restarted

- name: "Restart Apache2 Web Server"
  service:
    name: apache2
    state: restarted
```
Add an index.html file to the files directory. This is the file that will be copied to the target server.
```
$ sudo nano /etc/ansible/roles/webserver/files/index.html
This is the test page of 2DayGeek.com for Ansible Tutorials
```
You have successfully broken the playbook into Ansible roles using the steps above. Your new Ansible role should now look like the one below.
![][4]
Once everything is in place for your Ansible role, import the role into your playbook.
```
$ sudo nano /etc/ansible/playbooks/webserver-role.yml
---
- hosts: all
  become: yes
  name: "Install and Configure Apache Web Server on Linux"
  roles:
    - webserver
```
Once you have done everything, I advise you to check the playbook syntax before executing it.
```
$ ansible-playbook /etc/ansible/playbooks/webserver-role.yml --syntax-check
playbook: /etc/ansible/playbooks/webserver-role.yml
```
Finally, execute the Ansible playbook to see the magic.
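Assuming the hosts in your inventory are reachable, the run would look like this:
```
$ ansible-playbook /etc/ansible/playbooks/webserver-role.yml
```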
![][4]
Hope this tutorial helped you learn about Ansible roles. If you found it useful, please share the article on social media. If you would like to improve this article, add your comments in the comment section.
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/ansible-roles-quick-start-guide-with-examples/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/install-configure-ansible-automation-tool-linux-quick-start-guide/
[2]: https://www.2daygeek.com/ansible-ad-hoc-command-quick-start-guide-with-examples/
[3]: https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/
[4]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7

View File

@ -0,0 +1,158 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SimpleLogin: Open Source Solution to Protect Your Email Inbox From Spammers)
[#]: via: (https://itsfoss.com/simplelogin/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
SimpleLogin: Open Source Solution to Protect Your Email Inbox From Spammers
======
_**Brief: SimpleLogin is an open-source service to help you protect your email address by giving you a permanent alias email address.**_
Normally, you have to use your real email address to sign up for services that you want to use personally or for your business.
In the process, you're sharing your email address, right? And that potentially exposes your email address to spammers (depending on where you shared the information).
What if you could protect your real email address by providing an alias for it instead? No, I'm not talking about disposable email addresses like 10minutemail, which can be useful for temporary sign-ups even though they've been blocked by certain services.
I'm talking about something similar to “_[Hide My Email for Sign in with Apple ID][1]_”, but as a free and open-source solution, i.e., [SimpleLogin][2].
### SimpleLogin: An open source service to protect your email inbox
![][3]
_It is worth noting that you still have to use your existing email client (or email service) to receive and send emails but with this service, you get to hide your real email ID._
SimpleLogin is an open-source project (you can find it on [GitHub][4]) available for free (with premium upgrade options) that aims to keep your email private.
Unlike temporary email services, it generates a permanent random alias for your email address that you can use to sign up for services without revealing your real email.
The alias works as a point of contact to forward the emails intended to your real email ID.
**You'll receive the emails sent to the alias address in your real email inbox, and if you believe that an alias is receiving too much spam, you can block it. This way, you completely stop getting spam sent to that particular alias address.**
You are not just limited to receiving emails; you can also send emails through the alias address. Interesting, right? And, using this coupled with [secure email services][5] should be a good combination to protect your privacy.
### Features of SimpleLogin
![][8]
Before taking a look at how it works, let me highlight what it offers to Internet users and web developers alike:
* Protects your real email address by generating an alias address
* Send/receive emails through your alias
* Block an alias if it gets too spammy
* Custom domain supported with premium plans
* You can choose to self-host it
* If youre a web developer, you can follow the [documentation][9] to integrate a “**Sign in with SimpleLogin**” button to your login page.
You can either use the web dashboard or the browser extension available for Firefox, Chrome, and Safari.
[SimpleLogin][2]
### How Does SimpleLogin Work?
![][10]
To start with, you'll have to sign up for the service with the primary email ID that you want to keep private.
Once done, use your alias email to sign up for any other services you want.
![][11]
The number of aliases you can generate is limited in the free plan; however, you can upgrade to the premium plan if you want to generate a different alias email address for every site.
You don't necessarily need to use the web portal; you can also use the browser extension to generate aliases and use them when needed, as shown in the image below:
![][12]
Even when you want to send an email without revealing your real email ID, just generate an alias by typing in the receiver's email ID and paste that alias into your email client to send it.
### Brief conversation with SimpleLogin's founder
I was quite impressed to see an open-source service like this, so I reached out to [**Son Nguyen Kim**][13] (_SimpleLogin's founder_). Here are a few things I asked, along with the responses I got:
**How can you assure users that they can rely on your service for their personal/business use?**
**Son Nguyen Kim:** SimpleLogin follows all the best practices in terms of [email deliverability][14] to reduce the emails ending up in the Spam folder. To mention a few:
* SPF, DKIM and strict DMARC
* TLS everywhere
* “Clean” IP: we made sure that our IP addresses are not blacklisted anywhere
* Constant monitoring to avoid abuse
* Participation in email providers' postmaster programs
**How sustainable is your business currently?**
**Son Nguyen Kim:** Though in Beta, we already have paying customers. They use SimpleLogin both personally (to protect privacy) and for their business (create emails with their domains).
**What features have you planned for the future?**
**Son Nguyen Kim**: An iOS app is already in progress; the Android app will follow just after.
* [PGP][15] to encrypt emails
* Ability to strip images from emails. Email tracking is usually done [using a 1-pixel image][16], so tracking will also be removed with this feature enabled.
* [U2F][17] support (Yubikey)
* Better integration with existing email infrastructure for people who want to self-host SimpleLogin
You can also find a public roadmap to their plans on [Trello][18].
**Wrapping Up**
Personally, I would really love to see this succeed as a privacy-friendly alternative to social network sign-up options implemented on various web services.
In addition to that, as it stands now, a service that generates alias emails should suffice for a lot of users who do not want to share their real email address. My initial impression of SimpleLogin's beta phase is quite positive. I'd recommend you give it a try!
They also have a [Patreon][19] page if you wish to donate, instead of becoming a paying customer, to help the development of SimpleLogin.
Have you tried something like this before? How exciting do you think SimpleLogin is? Feel free to share your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/simplelogin/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://support.apple.com/en-us/HT210425
[2]: https://simplelogin.io/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-website.jpg?ssl=1
[4]: https://github.com/simple-login/app
[5]: https://itsfoss.com/secure-private-email-services/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/best-vpn-linux.png?fit=800%2C450&ssl=1
[7]: https://itsfoss.com/best-vpn-linux/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-settings.jpg?ssl=1
[9]: https://docs.simplelogin.io/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-details.png?ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-dashboard.jpg?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-extensions.jpg?ssl=1
[13]: https://twitter.com/nguyenkims
[14]: https://blog.hubspot.com/marketing/email-delivery-deliverability
[15]: https://www.openpgp.org/
[16]: https://www.theverge.com/2019/7/3/20681508/tracking-pixel-email-spying-superhuman-web-beacon-open-tracking-read-receipts-location
[17]: https://en.wikipedia.org/wiki/Universal_2nd_Factor
[18]: https://trello.com/b/4d6A69I4/open-roadmap
[19]: https://www.patreon.com/simplelogin

View File

@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10)
[#]: via: (https://itsfoss.com/ubuntu-19-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10
======
_**Brief: Ubuntu 19.04 reached end of life on 23rd January 2020. This means that systems running Ubuntu 19.04 won't receive security and maintenance updates anymore, leaving them vulnerable.**_
![][1]
[Ubuntu 19.04][2] was released on 18th April, 2019. Since it was not a long term support (LTS) release, it was supported only for nine months.
Completing its release cycle, Ubuntu 19.04 reached end of life on 23rd January, 2020.
Ubuntu 19.04 brought a few visual and performance improvements and paved the way for a sleek and aesthetically pleasant Ubuntu look.
Like any other regular Ubuntu release, it had a life span of nine months. And that has ended now.
### End of life for Ubuntu 19.04? What does it mean?
End of life means a certain date after which an operating system release no longer gets updates.
You might already know that Ubuntu (or any other operating system for that matter) provides security and maintenance upgrades in order to keep your systems safe from cyber attacks.
Once a release reaches the end of life, the operating system stops receiving these important updates.
If you continue using a system after the end of life of your operating system release, your system will be vulnerable to cyber attacks and malware.
That's not all. In Ubuntu, the applications that you downloaded using APT from the Software Center won't be updated either. In fact, you won't be able to [install new software using the apt-get command][3] anymore (gradually, if not immediately).
### All Ubuntu 19.04 users must upgrade to Ubuntu 19.10
Starting 23rd January 2020, Ubuntu 19.04 stops receiving updates. You must upgrade to Ubuntu 19.10, which will be supported until July 2020.
This is also applicable to other [official Ubuntu flavors][4] such as Lubuntu, Xubuntu, Kubuntu etc.
#### How to upgrade to Ubuntu 19.10?
Thankfully, Ubuntu provides easy ways to upgrade the existing system to a newer version.
In fact, Ubuntu also prompts you that a new Ubuntu version is available and that you should upgrade to it.
![Existing Ubuntu 19.04 should see a message to upgrade to Ubuntu 19.10][5]
If you have a good internet connection, you can use the same [Software Updater tool that you use to update Ubuntu][6]. In the above image, you just need to click the Upgrade button and follow the instructions. I have written a detailed guide about [upgrading to Ubuntu 18.04][7] using this method.
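If you prefer the terminal, the same upgrade can usually be triggered from the command line as well; a typical session (run the package updates first) looks like this:
```
$ sudo apt update && sudo apt full-upgrade
$ sudo do-release-upgrade
```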
If you don't have a good internet connection, there is a workaround for you. Make a backup of your home directory or your important data on an external disk.
Then, make a live USB of Ubuntu 19.10. Download Ubuntu 19.10 ISO and use the Startup Disk Creator tool already installed on your Ubuntu system to create a live USB out of this ISO.
Boot from this live USB and go on installing Ubuntu 19.10. In the installation procedure, you should see an option to remove Ubuntu 19.04 and replace it with Ubuntu 19.10. Choose this option and proceed as if you are [installing Ubuntu][8] afresh.
#### Are you still using Ubuntu 19.04, 18.10, 17.10 or some other unsupported version?
You should note that, at present, only Ubuntu 16.04, 18.04 and 19.10 (or higher) versions are supported. If you are running an Ubuntu version other than these, you must upgrade to a newer version.
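Not sure which version you are running? You can check from a terminal, for example:
```
$ lsb_release -a
```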
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-19-04-end-of-life/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/End-of-Life-Ubuntu-19.04.png?ssl=1
[2]: https://itsfoss.com/ubuntu-19-04-release/
[3]: https://itsfoss.com/apt-get-linux-guide/
[4]: https://itsfoss.com/which-ubuntu-install/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/ubuntu_19_04_end_of_life.jpg?ssl=1
[6]: https://itsfoss.com/update-ubuntu/
[7]: https://itsfoss.com/upgrade-ubuntu-version/
[8]: https://itsfoss.com/install-ubuntu/

View File

@ -0,0 +1,175 @@
MidnightBSD 可能是你通往 FreeBSD 的大门
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/midnight_4_0.jpg?itok=T2gpLVui)
[FreeBSD][1] 是一个开源操作系统,衍生自著名的 [Berkeley Software Distribution伯克利软件套件][2]。FreeBSD 的第一个版本在 1993 年发布并且仍然很强大。2007 年左右Lucas Holt 想创建一个 FreeBSD 的分支,利用 OpenStep (现在是 Cocoa) 的 Objective-C 框架widget 工具包和应用程序开发工具的 [GnuStep][3] 的实现。为此,他开始开发 MidnightBSD 桌面发行版。
MidnightBSD以 Lucas 的猫 Midnight 命名)仍然在积极地(尽管缓慢地)开发中。最新的稳定发布版本0.8.6自 2017 年 8 月起可用。尽管 BSD 发行版算不上用户友好但熟悉它们的命令行安装方式是让你学会应对文本式ncurses安装并坚持到安装完成的好方法。
最终,你会得到一个非常可靠的、源自 FreeBSD 分支的桌面发行版。这需要花一点功夫,但如果你是一名正在寻找扩展技能途径的 Linux 用户……这是一个很好的起点。
下面我想带你走一遍 MidnightBSD 的安装流程,了解如何添加一个图形桌面环境,以及如何安装应用程序。
### 安装
正如我所提到的这是一个文本式ncurses的安装过程因此用不上鼠标。你将改用键盘的 Tab 键和方向键。下载[最新的发布版本][4]后,将它刻录到 CD/DVD 或 USB 驱动器,然后启动你的机器(或者在 [VirtualBox][5] 中创建一个虚拟机)。安装器将打开并给你三个选项(图 1。使用键盘的方向键选择 Install并敲击回车键。
![MidnightBSD installer][6]
图 1: 启动 MidnightBSD 安装器。
此时,你要经历相当多的安装屏幕,其中大多数一目了然:
1. 设置非默认键盘映射(是/否)
2. 设置主机名称
3. 添加可选系统组件(文档游戏32位兼容性系统源码代码)
4. 分区硬盘
5. 管理员密码
6. 配置网络接口
7. 选择地区(时区)
8. 启用服务(例如获得 shell)
9. 添加用户(图 2)
![Adding a user][7]
图 2: 向系统添加一个用户。
在你向系统添加用户后,你将进入一个窗口(图 3在这里你可以处理任何你可能遗漏的或想重新配置的设置。如果你不需要作出任何更改选择 Exit你的配置就会被应用。
![Applying your configurations][8]
图 3: 应用你的配置。
在接下来的窗口中,当出现提示时,选择 No ,接下来系统将重启。在 MidnightBSD 重启后,你已经为下一阶段的安装做好了准备。
### 安装之后
当你新安装的 MidnightBSD 启动后你会发现自己处在命令提示符下此时还没有任何图形界面。要安装应用程序MidnightBSD 依赖于 mport 工具。比如说你想安装 Xfce 桌面环境,为此,登录到 MidnightBSD 中,并发出下面的命令:
```
sudo mport index
sudo mport install xorg
```
现在你已经安装好了 Xorg 窗口服务器,它将允许你安装桌面环境。使用以下命令安装 Xfce
```
sudo mport install xfce
```
现在 Xfce 已经安装好了。不过,我们需要让它能通过 startx 命令启动。为此,让我们先安装 nano 编辑器。发出命令:
```
sudo mport install nano
```
nano 安装好后,发出命令:
```
nano ~/.xinitrc
```
这个文件仅包含一行:
```
exec startxfce4
```
保存并关闭这个文件。如果你现在发出 startx 命令Xfce 桌面环境就会启动。你应该会有一种宾至如归的感觉(图 4。
![ Xfce][9]
图 4: Xfce桌面界面已准备好服务。
因为你不会想总是必须发出 startx 命令,所以你需要启用登录守护进程。然而它尚未安装,要安装这个子系统,发出命令:
```
sudo mport install mlogind
```
安装完成后,通过在 /etc/rc.conf 文件中添加一个条目,在启动时启用 mlogind。在 rc.conf 文件的底部,添加以下内容:
```
mlogind_enable="YES"
```
保存并关闭该文件。现在,当你启动(或重启)机器时,你应该会看到图形登录屏幕。然而在写这篇文章的时候,登录后我得到的是一个空白屏幕和一个不听使唤的 X 光标。不幸的是,目前似乎并没有这个问题的解决方法。所以,要访问你的桌面环境,你必须使用 startx 命令。
### 安装应用程序
开箱即用,你找不到太多能用的应用程序。如果你尝试安装应用程序(使用 mport你很快就会感到沮丧因为只能找到很少的应用程序。为解决这个问题我们需要使用 svnlite 命令检出checkout可用的 mport 软件列表。回到终端窗口,并发出命令:
```
svnlite co http://svn.midnightbsd.org/svn/mports/trunk mports
```
完成这些后,你应该会看到一个名为 ~/mports 的新目录。使用命令 cd ~/mports 切换到这个目录,再发出 ls 命令,你应该能看到许多类别(图 5。
![applications][10]
图 5: 对于 mport 现在可用的应用程序类别。
想安装 Firefox 吗?如果你查看 www 目录,你会看到一个 linux-firefox 条目。发出命令:
```
sudo mport install linux-firefox
```
现在你应该会在 Xfce 桌面菜单中看到一个 Firefox 项目。浏览所有类别,并使用 mport 命令来安装你需要的所有软件。
### 一个悲哀的警告
一个令人遗憾的小警告是mport通过 svnlite能找到的唯一一个办公套件版本是 OpenOffice 3那已经非常过时了。尽管在 ~/mports/editors 目录中能找到 Abiword但它看起来无法安装。甚至在安装 OpenOffice 3 之后,它也会报出一个 Exec 格式错误。换句话说,你没法用 MidnightBSD 在办公生产力方面做太多事情。不过,嘿,如果你手边恰好有一台旧的 Palm 掌上设备,你可以安装 pilot-link。换句话说可用的软件并不足以构成一个极其有用的桌面发行版……至少对普通用户来说不是。但是如果你想在 MidnightBSD 上进行开发,你会发现有很多工具可供安装(查看 ~/mports/devel 目录)。你甚至可以使用以下命令安装 Drupal
```
sudo mport install drupal7
```
当然在此之后你将需要创建一个数据库MySQL 已随之安装安装 Apachesudo mport install apache24并配置必需的 Apache 指令。
显然,已安装的以及能安装的软件是应用程序、系统和服务的大杂烩。但只要付出足够多的功夫,你最终可以得到一个能够满足特定用途的发行版。
### 享受 *BSD 的优良之处
以上就是让 MidnightBSD 启动并运行成一个还算有用的桌面发行版的全部过程。它不像很多其它 Linux 发行版那样又快又容易但如果你想要一个让你开动脑筋的发行版这可能正是你要找的。尽管大多数竞争对手都有大量可供安装的软件MidnightBSD 无疑是 Linux 爱好者或管理员值得一试的有趣挑战。
你可以通过 Linux 基金会和 edX 的免费[“Linux 入门”][11]课程学习更多关于 Linux 的知识。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/5/midnightbsd-could-be-your-gateway-freebsd
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[robsean](https://github.com/robsean)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.freebsd.org/
[2]:https://en.wikipedia.org/wiki/Berkeley_Software_Distribution
[3]:https://en.wikipedia.org/wiki/GNUstep
[4]:http://www.midnightbsd.org/download/
[5]:https://www.virtualbox.org/
[6]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_1.jpg (MidnightBSD installer)
[7]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_2.jpg (Adding a user)
[8]:https://lcom.static.linuxfound.org/sites/lcom/files/mightnight_3.jpg (Applying your configurations)
[9]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_4.jpg (Xfce)
[10]:https://lcom.static.linuxfound.org/sites/lcom/files/midnight_5.jpg (applications)
[11]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -7,15 +7,18 @@
[#]: via: (https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-the-many-variations-and-how-to-use-them.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Zipping files on Linux: the many variations and how to use them
在 Linux 上压缩文件zip 命令的各种变体及用法
======
> 除了压缩和解压缩文件外,你还可以使用 zip 命令执行许多有趣的操作。这是一些其他的 zip 选项以及它们如何提供帮助。
![](https://images.idgesg.net/images/article/2019/01/zipper-100785364-large.jpg)
Some of us have been zipping files on Unix and Linux systems for many decades — to save some disk space and package files together for archiving. Even so, there are some interesting variations on zipping that not all of us have tried. So, in this post, were going to look at standard zipping and unzipping as well as some other interesting zipping options.
为了节省一些磁盘空间并将文件打包在一起进行归档,我们中的一些人已经在 Unix 和 Linux 系统上压缩文件数十年了。即使这样,并不是所有人都尝试过一些有趣的压缩工具的变体。因此,在本文中,我们将介绍标准的压缩和解压缩以及其他一些有趣的压缩选项。
### The basic zip command
### 基本的 zip 命令
First, lets look at the basic **zip** command. It uses what is essentially the same compression algorithm as **gzip** , but there are a couple important differences. For one thing, the gzip command is used only for compressing a single file where zip can both compress files and join them together into an archive. For another, the gzip command zips “in place”. In other words, it leaves a compressed file — not the original file alongside the compressed copy. Here's an example of gzip at work:
首先,让我们看一下基本的 `zip` 命令。它使用了与 `gzip` 基本上相同的压缩算法,但是有一些重要的区别。一方面,`gzip` 命令仅用于压缩单个文件,而 `zip` 既可以压缩文件,也可以将多个文件结合在一起成为归档文件。另外,`gzip` 命令是“就地”压缩。换句话说,它会留下一个压缩文件,而不是原始文件。以下是 `gzip` 的工作示例:
```
$ gzip onefile
@ -23,7 +26,7 @@ $ ls -l
-rw-rw-r-- 1 shs shs 10514 Jan 15 13:13 onefile.gz
```
And here's zip. Notice how this command requires that a name be provided for the zipped archive where gzip simply uses the original file name and adds the .gz extension.
而这是 `zip`。请注意,此命令要求为压缩存档提供名称,其中 `gzip`(执行压缩操作后)仅使用原始文件名并添加 `.gz` 扩展名。
```
$ zip twofiles.zip file*
@ -35,9 +38,9 @@ $ ls -l
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip
```
Notice also that the original files are still sitting there.
请注意,原始文件仍位于原处。
The amount of disk space that is saved (i.e., the degree of compression obtained) will depend on the content of each file. The variation in the example below is considerable.
所节省的磁盘空间量(即获得的压缩程度)将取决于每个文件的内容。以下示例中的变化很大。
```
$ zip mybin.zip ~/bin/*
@ -56,9 +59,9 @@ $ zip mybin.zip ~/bin/*
adding: bin/tt (deflated 6%)
```
### The unzip command
### unzip 命令
The **unzip** command will recover the contents from a zip file and, as you'd likely suspect, leave the zip file intact, whereas a similar gunzip command would leave only the uncompressed file.
`unzip` 命令将从一个 zip 文件中恢复内容,并且如你所料,会保留原来的 zip 文件,而类似的 `gunzip` 命令则只保留解压后的文件。
```
$ unzip twofiles.zip
@ -71,9 +74,9 @@ $ ls -l
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip
```
### The zipcloak command
### zipcloak 命令
The **zipcloak** command encrypts a zip file, prompting you to enter a password twice (to help ensure you don't "fat finger" it) and leaves the file in place. You can expect the file size to vary a little from the original.
`zipcloak` 命令对一个 zip 文件进行加密,提示你输入两次密码(以帮助确保你不会“手滑”输错),然后将文件原位保存。你可以预料,文件大小会与原始文件略有不同。
```
$ zipcloak twofiles.zip
@ -89,11 +92,11 @@ total 204
unencrypted version
```
Keep in mind that the original files are still sitting there unencrypted.
请记住,压缩包之外的原始文件仍处于未加密状态。
### The zipdetails command
### zipdetails 命令
The **zipdetails** command is going to show you details — a _lot_ of details about a zipped file, likely a lot more than you care to absorb. Even though we're looking at an encrypted file, zipdetails does display the file names along with file modification dates, user and group information, file length data, etc. Keep in mind that this is all "metadata." We don't see the contents of the files.
`zipdetails` 命令将向你显示详细信息:有关压缩文件的详细信息,可能比你想象的要多得多。即使我们正在查看一个加密的文件,`zipdetails` 也会显示文件名以及文件修改日期、用户和组信息、文件长度数据等。请记住,这都是“元数据”。我们看不到文件的内容。
```
$ zipdetails twofiles.zip
@ -233,9 +236,9 @@ $ zipdetails twofiles.zip
Done
```
### The zipgrep command
### zipgrep 命令
The **zipgrep** command is going to use a grep-type feature to locate particular content in your zipped files. If the file is encrypted, you will need to enter the password provided for the encryption for each file you want to examine. If you only want to check the contents of a single file from the archive, add its name to the end of the zipgrep command as shown below.
`zipgrep` 命令将使用 `grep` 类的功能来找到压缩文件中的特定内容。如果文件已加密,则需要为要检查的每个文件输入为加密所提供的密码。如果只想检查归档文件中单个文件的内容,请将其名称添加到 `zipgrep` 命令的末尾,如下所示。
```
$ zipgrep hazard twofiles.zip file1
@ -243,9 +246,9 @@ $ zipgrep hazard twofiles.zip file1
Certain pesticides should be banned since they are hazardous to the environment.
```
### The zipinfo command
### zipinfo 命令
The **zipinfo** command provides information on the contents of a zipped file whether encrypted or not. This includes the file names, sizes, dates and permissions.
`zipinfo` 命令提供有关压缩文件内容的信息,无论是否加密。这包括文件名、大小、日期和权限。
```
$ zipinfo twofiles.zip
@ -256,9 +259,9 @@ Zip file size: 21313 bytes, number of entries: 2
2 files, 116954 bytes uncompressed, 20991 bytes compressed: 82.1%
```
### The zipnote command
### zipnote 命令
The **zipnote** command can be used to extract comments from zip archives or add them. To display comments, just preface the name of the archive with the command. If no comments have been added previously, you will see something like this:
`zipnote` 命令可用于从 zip 归档中提取注释或添加注释。要显示注释,只需在命令前面加上归档名称即可。如果之前未添加任何注释,你将看到类似以下内容:
```
$ zipnote twofiles.zip
@ -269,21 +272,21 @@ $ zipnote twofiles.zip
@ (zip file comment below this line)
```
If you want to add comments, write the output from the zipnote command to a file:
如果要添加注释,请先将 `zipnote` 命令的输出写入文件:
```
$ zipnote twofiles.zip > comments
```
Next, edit the file you've just created, inserting your comments above the **(comment above this line)** lines. Then add the comments using a zipnote command like this one:
接下来,编辑你刚刚创建的文件,将注释插入到 `(comment above this line)` 行上方。然后使用像这样的 `zipnote` 命令添加注释:
```
$ zipnote -w twofiles.zip < comments
```
### The zipsplit command
### zipsplit 命令
The **zipsplit** command can be used to break a zip archive into multiple zip archives when the original file is too large — maybe because you're trying to add one of the files to a small thumb drive. The easiest way to do this seems to be to specify the max size for each of the zipped file portions. This size must be large enough to accomodate the largest included file.
当归档文件太大时,可以使用 `zipsplit` 命令将一个 zip 归档文件分解为多个 zip 归档文件,这样你就可以将其中某一个文件放到小型 U 盘中。最简单的方法似乎是为每个部分的压缩文件指定最大大小,此大小必须足够大以容纳最大的包含文件。
```
$ zipsplit -n 12000 twofiles.zip
@ -296,15 +299,11 @@ $ ls twofile*.zip
-rw-rw-r-- 1 shs shs 21377 Jan 15 14:27 twofiles.zip
```
Notice how the extracted files are sequentially named "twofile1" and "twofile2".
请注意,拆分出的文件是如何依次命名为 `twofile1` 和 `twofile2` 的。
### Wrap-up
### 总结
The **zip** command, along with some of its zipping compatriots, provide a lot of control over how you generate and work with compressed file archives.
**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][1] ]**
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
`zip` 命令及其一些压缩工具变体,对如何生成和使用压缩文件归档提供了很多控制。
--------------------------------------------------------------------------------
@ -312,7 +311,7 @@ via: https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-t
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -7,43 +7,43 @@
[#]: via: (https://www.2daygeek.com/linux-commands-check-memory-usage/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
8 Commands to Check Memory Usage on Linux
检查 Linux 中内存使用情况的 8 条命令
======
Linux is not like Windows and you will not get a GUI always, especially in a server environment.
Linux 并不像 Windows你经常不会有图形界面可供使用特别是在服务器环境中。
As a Linux administrator, it is important to know how to check your available and used resources, such as memory, CPU, disk space, etc.
作为一名 Linux 管理员知道如何获取当前可用的和已经使用的资源情况比如内存、CPU、磁盘等是相当重要的。
If there are any applications that use too much resources on the system to run your system at the optimum level you need to find and fix.
如果某一应用在你的系统上占用了太多的资源,导致你的系统无法达到最优状态,那么你需要找到并修正它。
If you want to **[find out the top 10 memory (RAM) consumption processes in Linux][1]**, go to the following article.
如果你想找到消耗内存前十名的进程,你需要去阅读这篇文章: **[在 Linux 系统中找到消耗内存最多的 10 个进程][1]** 。
In Linux, there are commands for everything, so use the corresponding commands.
在 Linux 中,命令能做任何事,所以使用相关命令吧。
In this tutorial, we will show you eight powerful commands to check memory usage on a Linux system, including RAM and swap.
在这篇教程中,我们将向你展示 8 个有用的命令,来查看 Linux 系统中的内存使用情况,包括 RAM 和交换分区。
**[Creating swap space on a Linux system][2]** is very important.
创建交换分区在 Linux 系统中是非常重要的,如果你想了解如何创建,可以去阅读这篇文章: **[在 Linux 系统上创建交换分区][2]** 。
The following commands can help you check memory usage in Linux in different ways.
下面的命令可以帮助你以不同的方式查看 Linux 内存使用情况。
* free Command
* /proc/meminfo File
* vmstat Command
* ps_mem Command
* smem Command
* top Command
* htop Command
* glances Command
* free 命令
* /proc/meminfo 文件
* vmstat 命令
* ps_mem 命令
* smem 命令
* top 命令
* htop 命令
* glances 命令
### 1) How to Check Memory Usage on Linux Using the free Command
### 1)如何使用 free 命令查看 Linux 内存使用情况
**[Free command][3]** is the most powerful command widely used by the Linux administrator. But it provides very little information compared to the “/proc/meminfo” file.
**[free 命令][3]** 是被 Linux 管理员广泛使用的命令。但是它提供的信息比 “/proc/meminfo” 文件少。
Free command displays the total amount of free and used physical and swap memory on the system, as well as buffers and caches used by the kernel.
Free 命令会分别展示物理内存和交换分区内存中已使用的和未使用的数量,以及内核使用的缓冲区和缓存。
These information is gathered from the “/proc/meminfo” file.
这些信息都是从 “/proc/meminfo” 文件中获取的。
```
# free -m
@ -52,24 +52,24 @@ Mem: 15867 9199 1702 3315 4965 3039
Swap: 17454 666 16788
```
* **total:** Total installed memory
* **used:** Memory is currently in use by running processes (used= total free buff/cache)
* **free:** Unused memory (free= total used buff/cache)
* **shared:** Memory shared between two or more processes (multiple processes)
* **buffers:** Memory reserved by the kernel to hold a process queue request.
* **cache:** Size of the page cache that holds recently used files in RAM
* **buff/cache:** Buffers + Cache
* **available:** Estimation of how much memory is available for starting new applications, without swapping.
* **total:** 总的内存量
* **used:** 当前正在被运行中的进程使用的内存量 (used = total free buff/cache)
* **free:** 未被使用的内存量 (free = total used buff/cache)
* **shared:** 在两个或多个进程之间共享的内存量 (多进程)
* **buffers:** 内核用于记录进程队列请求的内存量
* **cache:** 在 RAM 中最近使用的文件中的页缓冲大小
* **buff/cache:** 缓冲区和缓存总的使用内存量
* **available:** 启动新应用不含交换分区的可用内存量
### 2) How to Check Memory Usage on Linux Using the /proc/meminfo File
### 2) 如何使用 /proc/meminfo 文件查看 Linux 内存使用情况
The “/proc/meminfo” file is a virtual file that contains various real-time information about memory usage.
“/proc/meminfo” 文件是一个包含了多种内存使用的实时信息的虚拟文件。
It shows memory stats in kilobytes, most of which are somewhat difficult to understand.
它展示内存状态单位使用的是 kB其中大部分属性都难以理解。
However it contains useful information about memory usage.
然而它也包含了内存使用情况的有用信息。
```
# cat /proc/meminfo
@ -124,13 +124,14 @@ DirectMap2M: 14493696 kB
DirectMap1G: 2097152 kB
```
### 3) How to Check Memory Usage on Linux Using the vmstat Command
### 3) 如何使用 vmstat 命令查看 Linux 内存使用情况
The **[vmstat command][4]** is another useful tool for reporting virtual memory statistics.
**[vmstat 命令][4]** 是另一个报告虚拟内存统计信息的有用工具。
vmstat reports information about processes, memory, paging, block IO, traps, disks, and cpu functionality.
vmstat 报告的信息包括:进程、内存、页面映射、块 I/O、陷阱、磁盘和 cpu 功能信息。
vmstat does not require special permissions, and it can help identify system bottlenecks.
vmstat 不需要特殊的权限,并且它可以帮助诊断系统瓶颈。
```
# vmstat
@ -140,58 +141,58 @@ procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
1 0 682060 1769324 234188 4853500 0 3 25 91 31 16 34 13 52 0 0
```
If you want to understand this in detail, read the field description below.
如果你想详细了解每一项的含义,阅读下面的描述。
**Procs**
* **r:** The number of runnable processes (running or waiting for run time).
* **b:** The number of processes in uninterruptible sleep.
* **r:** 可以运行的进程数目(正在运行或等待运行)
* **b:** 不间断睡眠中的进程数目
**Memory**
* **swpd:** the amount of virtual memory used.
* **free:** the amount of idle memory.
* **buff:** the amount of memory used as buffers.
* **cache:** the amount of memory used as cache.
* **inact:** the amount of inactive memory. (-a option)
* **active:** the amount of active memory. (-a option)
* **swpd:** 使用的虚拟内存数量
* **free:** 空闲的内存数量
* **buff:** 用作缓冲区内存的数量
* **cache:** 用作缓存内存的数量
* **inact:** 不活动的内存数量(-a 选项)
* **active:** 活动的内存数量(-a 选项)
**Swap**
* **si:** Amount of memory swapped in from disk (/s).
* **so:** Amount of memory swapped to disk (/s).
* **si:** 从磁盘交换的内存数量 (/s).
* **so:** 交换到磁盘的内存数量 (/s).
**IO**
* **bi:** Blocks received from a block device (blocks/s).
* **bo:** Blocks sent to a block device (blocks/s).
* **bi:** 从一个块设备中收到的块 (blocks/s).
* **bo:** 发送到一个块设备的块 (blocks/s).
**System**
* **in:** The number of interrupts per second, including the clock.
* **cs:** The number of context switches per second.
* **in:** 每秒的中断此数,包括时钟。
* **cs:** 每秒的上下文切换次数。
**CPU : These are percentages of total CPU time.**
**CPU这些是占总 CPU 时间的百分比。**
* **us:** Time spent running non-kernel code. (user time, including nice time)
* **sy:** Time spent running kernel code. (system time)
* **id:** Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
* **wa:** Time spent waiting for IO. Prior to Linux 2.5.41, included in idle.
* **st:** Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.
* **us:** 运行非内核代码所花时间的占比(用户时间,包括 nice 时间)
* **sy:** 花费在内核上的时间占比 (系统时间)
* **id:** 花费在闲置的时间占比。在 Linux 2.5.41 之前,包括 I/O 等待时间
* **wa:** 花费在 I/O 等待上的时间占比。在 Linux 2.5.41 之前,包括空闲时间
* **st:** 被虚拟机偷走的时间占比。在 Linux 2.6.11 之前,这部分称为 unknown
Run the following command for detailed information.
运行下面的命令查看详细的信息。
```
# vmstat -s
@ -223,16 +224,15 @@ Run the following command for detailed information.
1577163147 boot time
3318 forks
```
### 4) 如何使用 ps_mem 命令查看 Linux 内存使用情况
### 4) How to Check Memory Usage on Linux Using the ps_mem Command
**[ps_mem][5]** 是一个简单的 Python 脚本,可以准确地获取 Linux 中某个程序的核心内存使用情况。
**[ps_mem][5]** is a simple Python script that allows you to get core memory usage accurately for a program in Linux.
该工具可以确定每个程序使用了多少内存(不是每个进程)。
This can determine how much RAM is used per program (not per process).
该工具采用如下的方法计算每个程序使用内存:总的使用 = 程序进程私有的内存 + 程序进程共享的内存。
It calculates the total amount of memory used per program, total = sum (private RAM for program processes) + sum (shared RAM for program processes).
The shared RAM is problematic to calculate, and the tool automatically selects the most accurate method available for the running kernel.
计算共享内存是存在不足之处的,该工具可以为运行中的内核自动选择最准确的方法。
```
# ps_mem
@ -285,15 +285,15 @@ The shared RAM is problematic to calculate, and the tool automatically selects t
==================================
```
### 5) How to Check Memory Usage on Linux Using the smem Command
### 5)如何使用 smem 命令查看 Linux 内存使用情况
**[smem][6]** is a tool that can provide numerous reports of memory usage on Linux systems. Unlike existing tools, smem can report Proportional Set Size (PSS), Unique Set Size (USS) and Resident Set Size (RSS).
**[smem][6]** 是一个可以为 Linux 系统提供多种内存使用情况报告的工具。不同于现有的工具smem 可以报告比例集大小PSS、唯一集大小USS和常驻集大小RSS。
Proportional Set Size (PSS): refers to the amount of memory used by libraries and applications in the virtual memory system.
比例集大小PSS指库和应用在虚拟内存系统中所使用的内存量。
Unique Set Size (USS) : Unshared memory is reported as USS (Unique Set Size).
唯一集大小USS报告的是非共享的内存。
Resident Set Size (RSS) : The standard measure of physical memory (it typically shared among multiple applications) usage known as resident set size (RSS) will significantly overestimate memory usage.
常驻集大小RSS物理内存使用情况的标准度量通常在多个应用之间共享它往往会明显高估内存使用量。
```
# smem -tk
@ -336,13 +336,13 @@ Resident Set Size (RSS) : The standard measure of physical memory (it typically
90 1 0 4.8G 5.2G 8.0G
```
### 6) How to Check Memory Usage on Linux Using the top Command
### 6) 如何使用 top 命令查看 Linux 内存使用情况
**[top command][7]** is one of the most frequently used commands by Linux administrators to understand and view the resource usage for a process on a Linux system.
**[top 命令][7]** 是一个 Linux 系统的管理员最常使用的用于查看进程的资源使用情况的命令。
It displays the total memory of the system, current memory usage, free memory and total memory used by the buffers.
该命令会展示系统总的内存量、当前内存使用量、空闲内存量和缓冲区使用的内存总量。
In addition, it displays total swap memory, current swap usage, free swap memory, and total cached memory by the system.
此外,该命令还会展示总的交换空间内存量,当前交换空间的内存使用量,空闲的交换空间内存量和缓存使用的内存总量。
```
# top -b | head -10
@ -368,25 +368,25 @@ KiB Swap: 17873388 total, 17873388 free, 0 used. 9179772 avail Mem
2174 daygeek 20 2466680 122196 78604 S 0.8 0.8 0:17.75 WebExtensi+
```
### 7) How to Check Memory Usage on Linux Using the htop Command
### 7) 如何使用 htop 命令查看 Linux 内存使用情况
The **[htop command][8]** is an interactive process viewer for Linux/Unix systems. It is a text-mode application and requires the ncurses library, it was developed by Hisham.
**[htop 命令][8]** 是一个可交互的 Linux/Unix 系统进程查看器。它是一个文本模式应用,需要 ncurses 库,由 Hisham 开发。
It is designed as an alternative to the top command.
该命令的设计目的是用来替代 top 命令。
This is similar to the top command, but allows you to scroll vertically and horizontally to see all the processes running the system.
它与 top 命令很相似,但允许你垂直或水平滚动,以便查看系统中运行的所有进程。
htop comes with Visual Colors, which have added benefits and are very evident when it comes to tracking system performance.
htop 命令支持彩色显示,这一额外优点在追踪系统性能情况时非常有用。
You are free to carry out any tasks related to processes, such as process killing and renicing without entering their PIDs.
此外你可以自由地执行与进程相关的任务比如杀死进程或者改变进程的优先级而无需输入其进程号PID。
[![][9]][10]
### 8) How to Check Memory Usage on Linux Using the glances Command
### 8)如何使用 glances 命令查看 Linux 内存使用情况
**[Glances][11]** is a cross-platform system monitoring tool written in Python.
**[Glances][11]** 是一个 Python 编写的跨平台的系统监视工具。
You can see all information in one place such as CPU usage, Memory usage, running process, Network interface, Disk I/O, Raid, Sensors, Filesystem info, Docker, System info, Uptime, etc,.
你可以在一处查看所有信息比如CPU 使用情况、内存使用情况、正在运行的进程、网络接口、磁盘 I/O、RAID、传感器、文件系统信息、Docker、系统信息、运行时间等等。
![][9]

View File

@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for January 2020)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
COPR 仓库中 4 个很酷的新项目2020.01
======
![][1]
COPR 是个人软件仓库的[集合][2]它们并不包含在 Fedora 中。这是因为某些软件不符合轻松打包的标准或者它可能不符合其他的 Fedora 标准尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也没有由项目自身签名。不过,这是尝试新软件或实验性软件的一种巧妙方式。
本文介绍了 COPR 中一些有趣的新项目。如果你第一次使用 COPR请参阅 [COPR 用户文档][3]。
### Contrast
[Contrast][4] 是一款小应用,用于检查两种颜色之间的对比度,并确定其是否满足 [WCAG][5] 中指定的要求。颜色既可以用十六进制 RGB 代码指定也可以用颜色选择器选取。除了显示对比度之外Contrast 还会在以所选颜色为背景的一段短文本上展示效果,以作比较。
![][6]
#### 安装说明
[仓库][7]当前为 Fedora 31 和 Rawhide 提供 Contrast。要安装 Contrast请使用以下命令
```
sudo dnf copr enable atim/contrast
sudo dnf install contrast
```
### Pamixer
[Pamixer][8] 是一个使用 PulseAudio 调整和监控声音设备音量的命令行工具。你可以显示设备的当前音量并直接增大/减小它或者静音/取消静音。Pamixer 可以列出所有的源source和接收器sink。
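下面是一个用法示意(具体选项请以 `pamixer --help` 的输出为准):
```
$ pamixer --get-volume     # 显示默认接收器的当前音量
$ pamixer --increase 5     # 将音量增加 5%
$ pamixer --toggle-mute    # 在静音/取消静音之间切换
```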
#### 安装说明
[仓库][9]当前为 Fedora 31 和 Rawhide 提供 Pamixer。要安装 Pamixer请使用以下命令
```
sudo dnf copr enable opuk/pamixer
sudo dnf install pamixer
```
### PhotoFlare
[PhotoFlare][10] 是一款图像编辑器。它有简单且布局合理的用户界面,其中的大多数功能都可在工具栏中使用。尽管它不支持使用图层,但 PhotoFlare 提供了诸如各种颜色调整、图像变换、滤镜、画笔和自动裁剪等功能。此外PhotoFlare 可以批量编辑图片,来对所有图片应用相同的滤镜和转换,并将结果保存在指定目录中。
![][11]
#### 安装说明
[仓库][12]当前为 Fedora 31 提供 PhotoFlare。要安装 PhotoFlare请使用以下命令
```
sudo dnf copr enable adriend/photoflare
sudo dnf install photoflare
```
### Tdiff
[Tdiff][13] 是用于比较两个文件树的命令行工具。除了显示某些文件或目录仅存在于其中一棵树中之外tdiff 还会显示两棵树在文件大小、类型和内容、所有者用户和组 ID、权限、修改时间等方面的差异。
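一个假设性的用法示意(目录路径仅为示例):
```
$ tdiff /etc /mnt/backup/etc    # 比较两棵目录树的差异
```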
#### 安装说明
[仓库][14]当前为 Fedora 29-31、Rawhide、EPEL 6-8 和其他发行版提供 tdiff。要安装 tdiff请使用以下命令
```
sudo dnf copr enable fif/tdiff
sudo dnf install tdiff
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://gitlab.gnome.org/World/design/contrast
[5]: https://www.w3.org/WAI/standards-guidelines/wcag/
[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/contrast-screenshot.png
[7]: https://copr.fedorainfracloud.org/coprs/atim/contrast/
[8]: https://github.com/cdemoulins/pamixer
[9]: https://copr.fedorainfracloud.org/coprs/opuk/pamixer/
[10]: https://photoflare.io/
[11]: https://fedoramagazine.org/wp-content/uploads/2020/01/photoflare-screenshot.png
[12]: https://copr.fedorainfracloud.org/coprs/adriend/photoflare/
[13]: https://github.com/F-i-f/tdiff
[14]: https://copr.fedorainfracloud.org/coprs/fif/tdiff/

View File

@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: (qianmingtian)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Intro to the Linux command line)
[#]: via: (https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Linux 命令行简介
======
下面是一些针对刚开始使用 Linux 命令行的人的热身练习。警告:它可能会上瘾。[Sandra Henry-Stocker / Linux][1] [(CC0)][2]
如果你是 Linux 新手,或者从来没有花时间研究过命令行,你可能不会理解为什么这么多 Linux 爱好者坐在舒适的桌面使用大量工具与应用时键入命令会产生兴奋。在这篇文章中,我们将快速浏览一下命令行的奇妙之处,看看能否让你着迷。
首先,要使用命令行,你必须打开一个命令工具(也称为“命令提示符”)。如何做到这一点将取决于你运行的 Linux 版本。例如,在 RedHat 上,你可能会在屏幕顶部看到一个 Activities 选项卡,它会打开一个选项列表和一个用于输入命令的小窗口(输入 “cmd” 之类的词,它就会为你打开命令窗口)。在 Ubuntu 和其他一些版本中,你可能会在屏幕左侧看到一个小的终端图标。在许多系统上,你可以同时按 **Ctrl+Alt+t** 键打开命令窗口。
如果你使用 PuTTY 之类的工具登录 Linux 系统,你会发现自己已经处于命令行界面。
一旦你得到你的命令行窗口,你会发现自己坐在一个提示符面前。它可能只是一个 **$** 或者像 “**user@system:~$**” 这样的东西,但它意味着系统已经准备好为你运行命令了。
一旦你走到这一步,就可以开始输入命令了。下面是一些可以首先尝试的命令,此外这里还有一份收录了特别有用的命令的 [PDF][4],适合打印出来做成双面的命令参考卡片。
```
命令 用途
pwd 显示我在文件系统中的位置(在最初进入系统时运行将显示主目录)
ls 列出我的文件
ls -a 列出我更多的文件(包括隐藏文件)
ls -al 列出我的文件,并且包含很多详细信息(包括日期、文件大小和权限)
who 告诉我谁登录了(如果只有你,不要失望)
date 提醒我今天是星期几(也显示时间)
ps 列出我正在运行的进程(可能只是你的 shell 和 “ps” 命令)
```
一旦你从命令行角度习惯了 Linux 主目录之后,就可以开始探索了。也许你会准备好使用以下命令在文件系统中闲逛:
```
命令 用途
cd /tmp 移动到其他文件夹(本例中,打开 /tmp 文件夹)
ls 列出当前位置的文件
cd 回到主目录(不带参数的 cd 总是能将你带回到主目录)
cat .bashrc 显示文件的内容(本例中显示 .bashrc 文件的内容)
history 显示最近执行的命令
echo hello 跟自己说 “hello”
cal 显示当前月份的日历
```
要了解为什么高级 Linux 用户如此喜欢命令行,你需要尝试其他一些功能,例如重定向和管道。重定向是指你获取命令的输出并将其存入文件,而不是在屏幕上显示。管道是指你将一个命令的输出发送给另一条命令,由后者以某种方式对其进行操作。下面是可以尝试的命令:
```
命令 用途
echo "echo hello" > tryme 创建一个新的文件,并将 “echo hello” 写入该文件
chmod 700 tryme 使新建的文件可执行
./tryme 运行新文件(它应当运行文件中包含的命令并且显示 “hello”
ps aux 显示所有运行中的程序
ps aux | grep $USER 显示所有运行中的进程,但将输出限制为包含你的用户名的行
echo $USER 使用环境变量显示你的用户名
whoami 使用命令显示你的用户名
who | wc -l 计数所有当前登录的用户数目
```
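作为一个小练习,下面把管道和重定向结合起来使用(文件名 myprocs 只是示例):
```
$ ps aux | grep $USER > myprocs    # 将属于你的进程列表写入文件
$ wc -l myprocs                    # 统计该文件的行数
```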
### 总结
一旦你习惯了基本命令,就可以探索其他命令并尝试编写脚本。 你可能会发现 Linux 比你想象的要强大并且好用得多。
加入 [Facebook][6] 和 [LinkedIn][7] 上的 Network World 社区,来评论最热门的话题。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518440/intro-to-the-linux-command-line.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[qianmingtian][c]
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[c]: https://github.com/qianmingtian
[1]: https://commons.wikimedia.org/wiki/File:Tux.svg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[5]: https://www.networkworld.com/newsletters/signup.html
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world