Merge pull request #2 from LCTT/master

sync from origin
This commit is contained in:
qianmingtian 2020-02-04 19:10:04 +08:00 committed by GitHub
commit 3a234d197d
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
14 changed files with 1449 additions and 262 deletions


[#]: collector: (lujun9972)
[#]: translator: (mengxinayan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11849-1.html)
[#]: subject: (Showing memory usage in Linux by process and user)
[#]: via: (https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
查看 Linux 系统中进程和用户的内存使用情况
======
> 有一些命令可以用来检查 Linux 系统中的内存使用情况,下面是一些更好的命令。
![Fancycrave][1]
有许多工具可以查看 Linux 系统中的内存使用情况。一些命令被广泛使用,比如 `free`、`ps`。而另一些命令允许通过多种方式展示系统的性能统计信息,比如 `top`。在这篇文章中,我们将介绍一些命令以帮助你确定当前占用着最多内存资源的用户或者进程。
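在进入按进程和按用户的视角之前,这里补充一个 `free` 命令的简单用法草稿(原文只是提及 `free` 而未给出示例,以下命令为笔者补充):

```shell
# 示例:用 free 查看系统整体内存情况
# -h 选项以人类可读的单位(GiB、MiB)显示数值
free -h
```

`free` 的数据来自 `/proc/meminfo`,适合快速了解总体内存与交换分区的占用;而下文介绍的命令则能深入到进程和用户级别。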
下面是一些按照进程查看内存使用情况的命令:
### 按照进程查看内存使用情况
#### 使用 top
`top` 是最好的查看内存使用情况的命令之一。为了查看哪个进程使用着最多的内存,一个简单的办法就是启动 `top`,然后按下 `shift+m`,这样便可以查看按照内存占用百分比从高到低排列的进程。当你按下 `shift+m` 后,`top` 的输出应该会类似于下面这样:
```
$ top
MiB Swap: 2048.0 total, 2045.7 free, 2.2 used. 3053.5 avail Mem
2373 root 20 0 150408 57000 9924 S 0.3 0.9 10:15.35 nessusd
```
注意 `%MEM` 排序。列表的长度取决于你的窗口大小,但是占用内存最多的进程将会显示在列表的顶端。
#### 使用 ps
`ps` 命令中的一列用来展示每个进程的内存使用情况。为了展示和查看哪个进程使用着最多的内存,你可以将 `ps` 命令的结果传递给 `sort` 命令。下面是一个有用的示例
```
$ ps aux | sort -rnk 4 | head -5
nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefo
nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4
```
在上面的例子中(文中已截断),`sort` 命令使用了 `-r` 选项(反转)、`-n` 选项(数字值)、`-k` 选项(关键字),使 `sort` 命令对 `ps` 命令的结果按照第四列(内存使用情况)中的数字逆序进行排列并输出。如果我们首先显示 `ps` 命令的标题,那么将会便于查看。
```
$ ps aux | head -1; ps aux | sort -rnk 4 | head -5
@ -81,7 +77,7 @@ nemo 342 9.9 5.9 2854664 363528 ? Sl 08:59 4:44 /usr/lib/firefo
nemo 2389 39.5 3.8 1774412 236116 pts/1 Sl+ 09:15 12:21 vlc videos/edge_computing.mp4
```
如果你喜欢这个命令,你可以用下面的命令为它指定一个别名。如果你想一直使用它,不要忘记把该别名添加到你的 `~/.bashrc` 文件中。
```
$ alias mem-by-proc="ps aux | head -1; ps aux | sort -rnk 4"
```
下面是一些根据用户查看内存使用情况的命令:
### 按用户查看内存使用情况
#### 使用 top
按照用户检查内存使用情况会更复杂一些,因为你需要找到一种方法把用户所拥有的所有进程统计为单一的内存使用量。
如果你只想查看单个用户的进程使用情况,`top` 命令可以采用与上文中同样的方法使用。只需要添加 `-U` 选项并在其后面指定你要查看的用户名,然后按下 `shift+m`,便可以按照内存使用量由多到少查看该用户的进程。
```
$ top -U nemo
MiB Swap: 2048.0 total, 2042.7 free, 5.2 used. 2812.0 avail Mem
32533 nemo 20 0 2389088 102532 76808 S 0.0 1.7 0:01.79 WebExtensions
```
#### 使用 ps
你依旧可以使用 `ps` 命令通过内存使用情况来排列某个用户的进程。在这个例子中,我们将使用 `grep` 命令来筛选得到某个用户的所有进程。
```
$ ps aux | head -1; ps aux | grep ^nemo| sort -rnk 4 | more
nemo 29527 3.9 3.7 2736924 227448 ? Ssl 08:50 4:11 /usr/bin/gnome-
```
#### 使用 ps 和其他命令的搭配
如果你想比较某个用户与其他用户的内存使用情况,事情就会复杂一些。在这种情况下,创建一个按用户汇总并按总量排序的内存使用列表是个不错的方法,但它需要多做一些工作,并涉及许多命令。在下面的脚本中,我们使用 `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u` 命令得到了用户列表,其中包含了 `syslog` 这样的系统用户。接着,我们对每个用户使用 `awk` 命令来汇总其总的内存使用量。最后一步,我们按照从大到小的顺序展示每个用户的内存总用量。
```
#!/bin/bash
```

```
$ ./show_user_mem_usage
```
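上面的脚本在本文摘录中只保留了开头一行。按照前文描述的思路,一个可能的实现草稿如下(仅为笔者的示意,循环写法与输出格式均为假设,并非原文脚本):

```shell
#!/bin/bash
# 草稿:按用户汇总内存占用(思路来自上文,细节为假设)

# 第一步:得到有进程在运行的用户列表(grep -v COMMAND 用于去掉表头行)
for user in $(ps aux | grep -v COMMAND | awk '{print $1}' | sort -u)
do
    # 第二步:用 awk 累加该用户所有进程的 %MEM(第 4 列)
    ps aux | awk -v u="$user" '$1 == u { total += $4 }
        END { printf "%s %.1f%%\n", u, total }'
done | sort -rnk 2   # 第三步:按内存总占用从大到小排列
```

输出中每行是“用户名 内存占用百分比”,排在最前面的就是占用内存最多的用户。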
在 Linux 中有许多方法可以报告内存使用情况。借助一些精心设计的工具和命令,你可以查看并确定是哪个进程或者用户占用着最多的内存。
在 [Facebook][4] 和 [LinkedIn][5] 上加入 Network World 社区,来评论热门话题。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3516319/showing-memory-usage-in-linux-by-process-and-user.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[萌新阿岩](https://github.com/mengxinayan)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/06/chips_processors_memory_cards_by_fancycrave_cc0_via_unsplash_1200x800-100760955-large.jpg
[2]: https://creativecommons.org/publicdomain/zero/1.0/
[3]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[4]: https://www.facebook.com/NetworkWorld/


[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 open governance questions every project needs to answer)
[#]: via: (https://opensource.com/article/20/2/open-source-projects-governance)
[#]: author: (Gordon Haff https://opensource.com/users/ghaff)
6 open governance questions every project needs to answer
======
Open governance insights from Chris Aniszczyk, VP of Developer Relations
at the Linux Foundation.
![Two government buildings][1]
When we think about what needs to be in place for an open source project to function, one of the first things to come to mind is probably a license. For one thing, absent an approved [Open Source Initiative (OSI) license][2], a project isn't truly open source in the minds of many. Furthermore, the choice to use a copyleft license like the GNU General Public License (GPL) or a permissive license like Massachusetts Institute of Technology (MIT) can affect the sort of community that grows up around and uses the project.
However, Chris Aniszczyk, VP of Developer Relations at the Linux Foundation, argues that it's equally important to consider the **open governance of a project** because the license itself doesn't actually tell you how the project is governed.
These are some of the questions that Aniszczyk argues need to be answered. He adds that answering these questions before disputes arise, and in a way that's viewed as open and fair to all participants, leads to projects that tend to be more successful long term, especially as they grow in size.
### 6 open governance questions for every project
1. Who makes the decisions?
2. How are maintainers added?
3. Who owns the rights to the domain?
4. Who owns the rights to the trademarks?
5. How are those things governed?
6. Who owns how the build system works?
However, while all of these questions should be considered, there isn't one correct way of answering them. Different projects—and foundations hosting projects—take different approaches, whether to accommodate the requirements of a particular community or just for historical reasons.
The latter is often the case when a project uses something often called the Benevolent Dictator for Life (BDFL) model, in which one person—usually the project's founder—generally has the final say on major project decisions. Many projects end up here by default—perhaps most notably the Linux kernel. However, Red Hat's Joe Brockmeier observed to me that it's mostly considered an anti-pattern at this point. "While a few BDFL-driven projects have succeeded to do well, others have stumbled with that approach," he says.
Aniszczyk observes that "foundations have different sets of bylaws, charters, and how they're structured, and there are fascinating differences between these organizations. Like Apache is very famous for the Apache Way, and that's how they expect projects to operate. They very much have guardrails about how releases are done. [It's] kind of an incubator process where every project starts way before it graduates to a top-level project. In terms of how projects are governed, it's almost like an infinite amount of approaches," he concludes.
### Minimum requirements
That said, Aniszczyk lists some minimum requirements.
"Our pattern, at least, in many Linux Foundation and Cloud Native Computing Foundation (CNCF) projects, is a _governance.md_ file, which describes how decisions are made, how things are governed, how maintainers are added, removed, how are sub-projects added, removed, etc., how releases are done. That would be step one," he says.
#### Ownership
Secondly, he doesn't "think you could do open governance without assets being neutrally owned. At the end of the day, someone owns the domain, the rights to the trademark, some of the copyright, potentially. There are many great organizations out there that are super lightweight. There are things like the Apache Foundation, Software in the Public Interest, and the Software Freedom Conservancy."
Aniszczyk also sees some common approaches as at least potential anti-patterns. A key example is contributor license agreements (CLA), which define the terms under which intellectual property, like code, is contributed to a project. He says that if a company wants "to build a product or use a dual license type model, that's a very valid reason for a CLA. Otherwise, I view CLA as a high friction tool for developers."
#### Developer Certificate of Origin
Instead, he generally encourages people to "use what we call the 'Developer Certificate of Origin.' It's how the Linux kernel works, where basically it takes all the basic things that most CLAs do, which would be like, 'Did I write this code? Did I not copy it elsewhere? Do I have the rights to give this to you, and you sign off on?' It's been a very successful model played out in the kernel and many other ecosystems. I'm generally not really supportive of having CLAs unless there's a real strict business need."
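In practice, the DCO workflow Aniszczyk describes amounts to adding a `Signed-off-by` trailer to each commit, which `git commit -s` appends automatically. A minimal sketch, assuming a throwaway local repository (the path, identity, and commit message below are invented for illustration):

```shell
# Sketch: certifying the Developer Certificate of Origin on a commit
# (the repository path and identity are examples, not from the article)
mkdir -p /tmp/dco-demo && cd /tmp/dco-demo
git init -q
git config user.name "Jane Dev"
git config user.email "jane@example.com"
echo "fix" > patch.txt
git add patch.txt
git commit -q -s -m "docs: fix typo"   # -s adds the Signed-off-by trailer
git log -1 --format=%B | grep Signed-off-by
```

The trailer is the contributor's certification that they wrote the change or otherwise have the right to submit it, with no separate legal agreement to sign.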
#### Naming a project
He also sees a lot of what he considers mistakes in naming. "Project branding is super important. Theres a common pattern where people will start a project, it could be within a company or yourself, or you have a startup, and youll call it, lets say, 'Docker.' Then you have Docker the project, and you have Docker, the company. Then you also have Docker the product or Docker the enterprise product. All those things serve different audiences. It leads to confusion because I have an inherent belief that the name of something has a value proposition attached to it. Please name your company separate from your project, from your product," he argues.
#### Trust
Finally, Aniszczyk points to the role of open governance in building trust and confidence that a company cant just take a project unilaterally for its own ends. "Trust is table stakes in order to build strong communities because, without openly governed institutions in projects, trust is very hard to come by," he concludes.
_The Innovate @Open podcast episode from which Chris Aniszczyk's remarks were drawn can be heard [here][3]._
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/open-source-projects-governance
作者:[Gordon Haff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/ghaff
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW_lawdotgov2.png?itok=n36__lZj (Two government buildings)
[2]: https://opensource.org/licenses
[3]: https://grhpodcasts.s3.amazonaws.com/cra1911.mp3


[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Bulletin Board Systems: The VICE Exposé)
[#]: via: (https://twobithistory.org/2020/02/02/bbs.html)
[#]: author: (Two-Bit History https://twobithistory.org)
Bulletin Board Systems: The VICE Exposé
======
By now, you have almost certainly heard of the dark web. On sites unlisted by any search engine, in forums that cannot be accessed without special passwords or protocols, criminals and terrorists meet to discuss conspiracy theories and trade child pornography.
We have reported before on the dark web's [“hurtcore” communities][1], its [human trafficking markets][2], its [rent-a-hitman websites][3]. We have explored [the challenges the dark web presents to regulators][4], the rise of [dark web revenge porn][5], and the frightening size of [the dark web gun trade][6]. We have kept you informed about that one dark web forum where you can make like Walter White and [learn how to manufacture your own drugs][7], and also about—thanks to our foreign correspondent—[the Chinese dark web][8]. We have even attempted to [catalog every single location on the dark web][9]. Our coverage of the dark web has been nothing if not comprehensive.
But I wanted to go deeper.
We know that below the surface web is the deep web, and below the deep web is the dark web. It stands to reason that below the dark web there should be a deeper, darker web.
A month ago, I set out to find it. Unsure where to start, I made a post on _Reddit_, a website frequented primarily by cosplayers and computer enthusiasts. I asked for a guide, a Styx ferryman to bear me across to the mythical underworld I sought to visit.
Only minutes after I made my post, I received a private message. “If you want to see it, I'll take you there,” wrote _Reddit_ user FingerMyKumquat. “But I'll warn you just once—it's not pretty to see.”
### Getting Access
This would not be like visiting Amazon to shop for toilet paper. I could not just enter an address into the address bar of my browser and hit go. In fact, as my Charon informed me, where we were going, there are no addresses. At least, no web addresses.
But where exactly were we going? The answer: Back in time. The deepest layer of the internet is also the oldest. Down at this deepest layer exists a secret society of “bulletin board systems,” a network of underground meetinghouses that in some cases have been in continuous operation since the 1980s—since before Facebook, before Google, before even stupidvideos.com.
To begin, I needed to download software that could handle the ancient protocols used to connect to the meetinghouses. I was told that bulletin board systems today use an obsolete military protocol called Telnet. Once upon a time, though, they operated over the phone lines. To connect to a system back then you had to dial its _phone number_.
The software I needed was called [SyncTerm][10]. It was not available on the App Store. In order to install it, I had to compile it. This is a major barrier to entry, I am told, even to veteran computer programmers.
When I had finally installed SyncTerm, my guide said he needed to populate my directory. I asked what that was a euphemism for, but was told it was not a euphemism. Down this far, there are no search engines, so you can only visit the bulletin board systems you know how to contact. My directory was the list of bulletin board systems I would be able to contact. My guide set me up with just seven, which he said would be more than enough.
_More than enough for what,_ I wondered. Was I really prepared to go deeper than the dark web? Was I ready to look through this window into the black abyss of the human soul?
![][11] _The vivid blue interface of SyncTerm. My directory of BBSes on the left._
### Heatwave
I decided first to visit the bulletin board system called “Heatwave,” which I imagined must be a hangout for global warming survivalists. I “dialed” in. The next thing I knew, I was being asked if I wanted to create a user account. I had to be careful to pick an alias that would be inconspicuous in this sub-basement of the internet. I considered “DonPablo,” and “z3r0day,” but finally chose “ripper”—a name I could remember because it is also the name of my great-aunt Meredith's Shih Tzu. I was then asked where I was dialing from; I decided “xxx” was the right amount of enigmatic.
And then—I was in. Curtains of fire rolled down my screen and dispersed, revealing the main menu of the Heatwave bulletin board system.
![][12] _The main menu of the Heatwave BBS._
I had been told that even in the glory days of bulletin board systems, before the rise of the world wide web, a large system would only have several hundred users or so. Many systems were more exclusive, and most served only users in a single telephone area code. But how many users dialed the “Heatwave” today? There was a main menu option that read “(L)ast Few Callers,” so I hit “L” on my keyboard.
My screen slowly filled with a large table, listing all of the system's “callers” over the last few days. Who were these shadowy outcasts, these expert hackers, these denizens of the digital demimonde? My eyes scanned down the list, and what I saw at first confused me: There was a “Dan,” calling from St. Louis, MO. There was also a “Greg Miller,” calling from Portland, OR. Another caller claimed he was “George” calling from Campellsburg, KY. Most of the entries were like that.
It was a joke, of course. A meme, a troll. It was normcore fashion in noms de guerre. These were thrill-seeking Palo Alto adolescents on Adderall making fun of the surface web. They weren't fooling me.
I wanted to know what they talked about with each other. What cryptic colloquies took place here, so far from public scrutiny? My index finger, with ever so slight a tremble, hit “M” for “(M)essage Areas.”
Here, I was presented with a choice. I could enter the area reserved for discussions about “T-99 and Geneve,” which I did not dare do, not knowing what that could possibly mean. I could also enter the area for discussions about “Other,” which seemed like a safe place to start.
The system showed me message after message. There was advice about how to correctly operate a leaf-blower, as well as a protracted debate about the depth of the Strait of Hormuz relative to the draft of an aircraft carrier. I assumed the real messages were further on, and indeed I soon spotted what I was looking for. The user “Kevin” was complaining to other users about the side effects of a drug called Remicade. This was not a drug I had heard of before. Was it some powerful new synthetic stimulant? A cocktail of other recreational drugs? Was it something I could bring with me to impress people at the next VICE holiday party?
I googled it. Remicade is used to treat rheumatoid arthritis and Crohn's disease.
In reply to the original message, there was some further discussion about high resting heart rates and mechanical heart valves. I decided that I had gotten lost and needed to contact FingerMyKumquat. “Finger,” I messaged him, “What is this shit I'm looking at here? I want the real stuff. I want blackmail and beheadings. Show me the scum of the earth!”
“Perhaps you're ready for the SpookNet,” he wrote back.
### SpookNet
Each bulletin board system is an island in the television-static ocean of the digital world. Each system's callers are lonely sailors come into port after many a month plying the seas.
But the bulletin board systems are not entirely disconnected. Faint phosphorescent filaments stretch between the islands, links in the special-purpose networks that were constructed—before the widespread availability of the internet—to propagate messages from one system to another.
One such network is the SpookNet. Not every bulletin board system is connected to the SpookNet. To get on, I first had to dial “Reality Check.”
![][13] _The Reality Check BBS._
Once I was in, I navigated my way past the main menu and through the SpookNet gateway. What I saw then was like a catalog index for everything stored in that secret Pentagon warehouse from the end of the _X-Files_ pilot. There were message boards dedicated to UFOs, to cryptography, to paranormal studies, and to “End Times and the Last Days.” There was a board for discussing “Truth, Polygraphs, and Serums,” and another for discussing “Silencers of Information.” Here, surely, I would find something worth writing about in an article for VICE.
I browsed and I browsed. I learned about which UFO documentaries are worth watching on Netflix. I learned that “paper mill” is a derogatory term used in the intelligence community (IC) to describe individuals known for constantly trying to sell “explosive” or “sensitive” documents—as in the sentence, offered as an example by one SpookNet user, “Damn, here comes that paper mill Juan again.” I learned that there was an effort afoot to get two-factor authentication working for bulletin board systems.
“These are just a bunch of normal losers,” I finally messaged my guide. “Mostly they complain about anti-vaxxers and verses from the Quran. This is just _Reddit_!”
“Huh,” he replied. “When you said scum of the earth, did you mean something else?”
I had one last idea. In their heyday, bulletin board systems were infamous for being where everyone went to download illegal, cracked computer software. An entire subculture evolved, with gangs of software pirates competing to be the first to crack a new release. The first gang to crack the new software would post their “warez” for download along with a custom piece of artwork made using lo-fi ANSI graphics, which served to identify the crack as their own.
I wondered if there were any old warez to be found on the Reality Check BBS. I backed out of the SpookNet gateway and keyed my way to the downloads area. There were many files on offer there, but one in particular caught my attention: a 5.3 megabyte file just called “GREY.”
I downloaded it. It was a complete PDF copy of E. L. James' _50 Shades of Grey_.
_If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][14] on Twitter or subscribe to the [RSS feed][15] to make sure you know when a new post is out._
_Previously on TwoBitHistory…_
> I first heard about the FOAF (Friend of a Friend) standard back when I wrote my post about the Semantic Web. I thought it was a really interesting take on social networking and I've wanted to write about it since. Finally got around to it!<https://t.co/VNwT8wgH8j>
>
> — TwoBitHistory (@TwoBitHistory) [January 5, 2020][16]
--------------------------------------------------------------------------------
via: https://twobithistory.org/2020/02/02/bbs.html
作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://www.vice.com/en_us/article/mbxqqy/a-journey-into-the-worst-corners-of-the-dark-web
[2]: https://www.vice.com/en_us/article/vvbazy/my-brief-encounter-with-a-dark-web-human-trafficking-site
[3]: https://www.vice.com/en_us/article/3d434v/a-fake-dark-web-hitman-site-is-linked-to-a-real-murder
[4]: https://www.vice.com/en_us/article/ezv85m/problem-the-government-still-doesnt-understand-the-dark-web
[5]: https://www.vice.com/en_us/article/53988z/revenge-porn-returns-to-the-dark-web
[6]: https://www.vice.com/en_us/article/j5qnbg/dark-web-gun-trade-study-rand
[7]: https://www.vice.com/en_ca/article/wj374q/inside-the-dark-web-forum-that-tells-you-how-to-make-drugs
[8]: https://www.vice.com/en_us/article/4x38ed/the-chinese-deep-web-takes-a-darker-turn
[9]: https://www.vice.com/en_us/article/vv57n8/here-is-a-list-of-every-single-possible-dark-web-site
[10]: http://syncterm.bbsdev.net/
[11]: https://twobithistory.org/images/sync.png
[12]: https://twobithistory.org/images/heatwave-main-menu.png
[13]: https://twobithistory.org/images/reality.png
[14]: https://twitter.com/TwoBitHistory
[15]: https://twobithistory.org/feed.xml
[16]: https://twitter.com/TwoBitHistory/status/1213920921251131394?ref_src=twsrc%5Etfw


[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Private equity firms are gobbling up data centers)
[#]: via: (https://www.networkworld.com/article/3518817/private-equity-firms-are-gobbling-up-the-data-center-market.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
Private equity firms are gobbling up data centers
======
Private equity firms accounted for 80% of all data-center acquisitions in 2019. Is that a good thing?
Merger and acquisition activity surrounding [data-center][1] facilities is starting to resemble the Oklahoma Land Rush, and private-equity firms are taking most of the action.
New research from Synergy Research Group saw more than 100 deals in 2019, a 50% growth over 2018, and private-equity companies accounted for 80% of them.
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
M&A activity broke the 100-transaction mark for the first time in 2019, and that comes despite a 45% decline in public company activity, such as the massive Digital Realty Trust [purchase][3] of Interxion. At the same time, the size of the deals dropped in 2019, with fewer worth $1 billion or more than in 2018, and the average deal value down 24% from 2018.
Since 2015, there have been approximately 350 data-center deals, both public and private, with a total value of $75 billion, according to Synergy. Over this period, private equity buyers have accounted for 57% of the deal volume. Deals were roughly a 50-50 split until 2018 when public company purchases began to trail off.
Anecdotally, I've heard one reason for the decline in big deals is that there are no more big purchases to be had, at least in the U.S. DRT/Interxion is an exception, and Interxion is a foreign company. Other big deals have already been done, like Equinix purchasing Verizon's data centers for $3.6 billion in 2017 or AT&T selling its data centers to private-equity company Brookfield in 2019. There just isn't much left to sell.
The question becomes: is this necessarily a good thing? Private-equity firms have something of a well-earned bad reputation for buying up companies, sucking all the profit out of them, and discarding the empty husk.
But John Dinsdale, chief analyst for Synergy, said not to worry, that the private equity firms grabbing data centers are looking to grow them. “This is a heavily infrastructure-oriented business where what you can take out is pretty directly related to what you put in. A lot of these equity investors are looking to build something rather than quickly flipping the assets,” he said via e-mail.
He added "In these types of business there isn't that much manpower, HQ or overhead there to be stripped out." Which is true. Data centers are pretty low-staffed. It was a national news item several years ago that Apple's $1 billion data center in rural North Carolina would only [create 50 jobs][5]. That's true for most data centers.
At least one big player, Digital Realty Trust, was formed in 2004 after private-equity firm GI Partners bought out 21 data centers from a bankruptcy. DRT has grown to 214 centers in the U.S. and Europe.
So in this case, a private equity firm buying out your data center provider might prove to be a good thing.
Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3518817/private-equity-firms-are-gobbling-up-the-data-center-market.html
作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[2]: https://www.networkworld.com/newsletters/signup.html
[3]: https://www.networkworld.com/article/3451437/digital-realty-acquisition-of-interxion-reshapes-data-center-landscape.html
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
[5]: https://www.cultofmac.com/132012/despite-huge-unemployment-rate-apples-1-billion-data-super-center-only-created-50-new-jobs/
[6]: https://www.facebook.com/NetworkWorld/
[7]: https://www.linkedin.com/company/network-world


[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for January 2020)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
4 cool new projects to try in COPR for January 2020
======
![][1]
COPR is a [collection][2] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
This article presents a few new and interesting projects in COPR. If youre new to using COPR, see the [COPR User Documentation][3] for how to get started.
### Contrast
[Contrast][4] is a small app for checking the contrast between two colors and determining whether it meets the requirements specified in [WCAG][5]. The colors can be selected either using their RGB hex codes or with a color picker tool. In addition to showing the contrast ratio, Contrast displays a short text on a background in the selected colors to demonstrate the comparison.
![][6]
#### Installation instructions
The [repo][7] currently provides contrast for Fedora 31 and Rawhide. To install Contrast, use these commands:
```
sudo dnf copr enable atim/contrast
sudo dnf install contrast
```
### Pamixer
[Pamixer][8] is a command-line tool for adjusting and monitoring volume levels of sound devices using PulseAudio. You can display the current volume of a device and either set it directly or increase/decrease it, or (un)mute it. Pamixer can list all sources and sinks.
#### Installation instructions
The [repo][9] currently provides Pamixer for Fedora 31 and Rawhide. To install Pamixer, use these commands:
```
sudo dnf copr enable opuk/pamixer
sudo dnf install pamixer
```
### PhotoFlare
[PhotoFlare][10] is an image editor. It has a simple and well-arranged user interface, where most of the features are available in the toolbars. PhotoFlare provides features such as various color adjustments, image transformations, filters, brushes and automatic cropping, although it doesn't support working with layers. Also, PhotoFlare can edit pictures in batches, applying the same filters and transformations on all pictures and storing the results in a specified directory.
![][11]
#### Installation instructions
The [repo][12] currently provides PhotoFlare for Fedora 31. To install Photoflare, use these commands:
```
sudo dnf copr enable adriend/photoflare
sudo dnf install photoflare
```
### Tdiff
[Tdiff][13] is a command-line tool for comparing two file trees. In addition to showing that some files or directories exist in one tree only, tdiff shows differences in file sizes, types and contents, owner user and group ids, permissions, modification time and more.
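tdiff itself is a compiled tool; to get a feel for what tree comparison involves, here is a rough Python-only sketch built on the standard library's filecmp. It only reports names and changed contents, not the ownership, permission, and mtime details that tdiff covers:

```python
import filecmp
import os
import tempfile

def tree_diff(a, b):
    """Recursively collect the differences between directory trees a and b."""
    dc = filecmp.dircmp(a, b)
    # shallow=False compares file contents, not just stat() signatures
    _, mismatch, errors = filecmp.cmpfiles(a, b, dc.common_files, shallow=False)
    diffs = {
        "only_in_a": sorted(dc.left_only),
        "only_in_b": sorted(dc.right_only),
        "changed": sorted(mismatch + errors),
    }
    for sub in dc.common_dirs:
        child = tree_diff(os.path.join(a, sub), os.path.join(b, sub))
        for key in diffs:
            diffs[key] += [os.path.join(sub, n) for n in child[key]]
    return diffs

# Demo: two small trees that differ in one extra file and one changed file
a, b = tempfile.mkdtemp(), tempfile.mkdtemp()
for root, conf in ((a, "v1\n"), (b, "v2\n")):
    with open(os.path.join(root, "same.txt"), "w") as f:
        f.write("identical\n")
    with open(os.path.join(root, "conf.txt"), "w") as f:
        f.write(conf)
with open(os.path.join(a, "extra.txt"), "w") as f:
    f.write("only here\n")

print(tree_diff(a, b))
# {'only_in_a': ['extra.txt'], 'only_in_b': [], 'changed': ['conf.txt']}
```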
#### Installation instructions
The [repo][14] currently provides tdiff for Fedora 29-31 and Rawhide, EPEL 6-8 and other distributions. To install tdiff, use these commands:
```
sudo dnf copr enable fif/tdiff
sudo dnf install tdiff
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://gitlab.gnome.org/World/design/contrast
[5]: https://www.w3.org/WAI/standards-guidelines/wcag/
[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/contrast-screenshot.png
[7]: https://copr.fedorainfracloud.org/coprs/atim/contrast/
[8]: https://github.com/cdemoulins/pamixer
[9]: https://copr.fedorainfracloud.org/coprs/opuk/pamixer/
[10]: https://photoflare.io/
[11]: https://fedoramagazine.org/wp-content/uploads/2020/01/photoflare-screenshot.png
[12]: https://copr.fedorainfracloud.org/coprs/adriend/photoflare/
[13]: https://github.com/F-i-f/tdiff
[14]: https://copr.fedorainfracloud.org/coprs/fif/tdiff/

View File

@ -1,98 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 Key Changes to Look Out for in Linux Kernel 5.6)
[#]: via: (https://itsfoss.com/linux-kernel-5-6/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
4 Key Changes to Look Out for in Linux Kernel 5.6
======
While we've already witnessed the stable release of Linux 5.5 with better hardware support, Linux 5.6 is next.
To be honest, Linux 5.6 is much more exciting than 5.5. Even though the upcoming Ubuntu 20.04 LTS release will feature Linux 5.5 out of the box, you should really know what Linux 5.6 kernel has in store for us.
In this article, I'll be highlighting the key changes and features that you can expect with the Linux 5.6 release:
### Linux 5.6 features highlight
![][1]
I'll try to keep the list of features up-to-date whenever there's a piece of new information on Linux 5.6. But, for now, let's take a look at what we already know so far:
#### 1\. WireGuard Support
WireGuard will be added to Linux 5.6, potentially replacing [OpenVPN][2] for a variety of reasons.
You can learn more about [WireGuard][3] on its official site to learn its benefits. Of course, if you've used it, you might be aware of the reasons why it's potentially better than OpenVPN.
Also, [Ubuntu 20.04 LTS will be adding support for WireGuard][4].
#### 2\. USB4 Support
Linux 5.6 will also include the support of **USB4**.
In case you didn't know about USB 4.0 (USB4), you can read the [announcement post][5].
As per the announcement “_USB4 doubles the maximum aggregate bandwidth of USB and enables multiple simultaneous data and display protocols._“
Also, while we know that USB4 is based on the Thunderbolt protocol specification, it will be backward compatible with USB 2.0, USB 3.0, and Thunderbolt 3 which is great news.
#### 3\. F2FS Data Compression Using LZO/LZ4
Linux 5.6 will also come with the support for F2FS data compression using LZO/LZ4 algorithms.
In other words, it is a new compression technique for this Linux filesystem, where you will be able to select particular file extensions to be compressed.
#### 4\. Fixing the Year 2038 problem for 32-bit systems
Unix and Linux store the time value in a 32-bit signed integer format, which has a maximum value of 2147483647. Beyond this number, due to integer overflow, the values will be stored as a negative number.
This means that for a 32-bit system, the time value cannot go beyond 2147483647 seconds after Jan. 1, 1970. In simpler terms, after 03:14:07 UTC on Jan. 19, 2038, due to integer overflow, the time will read as Dec. 13, 1901 instead of Jan. 19, 2038.
Linux kernel 5.6 has a fix for this problem so that 32-bit systems can run beyond the year 2038.
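The arithmetic behind those dates is easy to verify. A quick Python sketch simulating a 32-bit signed time counter:

```python
import datetime

INT32_MAX = 2**31 - 1  # largest value a 32-bit signed time_t can hold

# The last representable second: 2038-01-19 03:14:07 UTC
last = datetime.datetime.fromtimestamp(INT32_MAX, tz=datetime.timezone.utc)
print(last)  # 2038-01-19 03:14:07+00:00

# One second later, a 32-bit signed counter wraps to its most negative
# value, which lands back in December 1901.
wrapped = INT32_MAX + 1 - 2**32  # simulate the 32-bit signed overflow
first = datetime.datetime.fromtimestamp(wrapped, tz=datetime.timezone.utc)
print(first)  # 1901-12-13 20:45:52+00:00
```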
#### 5\. Improved Hardware Support
Obviously, with the next release, the hardware support will improve as well. The plan to support newer wireless peripherals will be a priority too.
The new kernel will also add the support for MX Master 3 mouse and other wireless Logitech products.
In addition to Logitech products, you can expect a lot of different hardware support as well (including the support for AMD GPUs, NVIDIA GPUs, and Intel Tiger Lake chipset support).
#### 6\. Other Changes
Also, in addition to all these major additions/support in Linux 5.6, there are several other changes coming with the next kernel release:
* Improvements in AMD Zen temperature/power reporting
* A fix for AMD CPUs overheating in ASUS TUF laptops
* Open-source NVIDIA RTX 2000 “Turing” graphics support
* FSCRYPT inline encryption.
[Phoronix][6] tracked a lot of technical changes arriving with Linux 5.6. So, if you're curious about every bit of the changes involved in Linux 5.6, you can check for yourself.
Now that you know what's coming with the Linux 5.6 release, what do you think about it? Let me know your thoughts in the comments below.
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-kernel-5-6/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/linux-kernel-5.6.jpg?ssl=1
[2]: https://openvpn.net/
[3]: https://www.wireguard.com/
[4]: https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-20.04-Adds-WireGuard
[5]: https://www.usb.org/sites/default/files/2019-09/USB-IF_USB4%20spec%20announcement_FINAL.pdf
[6]: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.6-Spectacular

View File

@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Give an old MacBook new life with Linux)
[#]: via: (https://opensource.com/article/20/2/macbook-linux-elementary)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)
Give an old MacBook new life with Linux
======
Elementary OS's latest release, Hera, is an impressive platform for
resurrecting an outdated MacBook.
![Coffee and laptop][1]
When I installed Apple's [MacOS Mojave][2], it slowed my formerly reliable MacBook Air to a crawl. My computer, released in 2015, has 4GB RAM, an i5 processor, and a Broadcom 4360 wireless card, but Mojave proved too much for my daily driver—it made working with [GnuCash][3] impossible, and it whetted my appetite to return to Linux. I am glad I did, but I felt bad that I had this perfectly good MacBook lying around unused.
I tried several Linux distributions on my MacBook Air, but there was always a gotcha. Sometimes it was the wireless card; another time, it was a lack of support for the touchpad. After reading some good reviews, I decided to try [Elementary OS][4] 5.0 (Juno). I [made a boot drive][5] with my USB creator and inserted it into the MacBook Air. I got to a live desktop, and the operating system recognized my Broadcom wireless chipset—I thought this just might work!
I liked what I saw in Elementary OS; its [Pantheon][6] desktop is really great, and its look and feel are familiar to Apple users—it has a dock at the bottom of the display and icons that lead to useful applications. I liked the preview of what I could expect, so I decided to install it—and then my wireless disappeared. That was disappointing. I really liked Elementary OS, but no wireless is a non-starter.
Fast-forward to December 2019, when I heard a review on the [Linux4Everyone][7] podcast about Elementary's latest release, v.5.1 (Hera) bringing a MacBook back to life. So, I decided to try again with Hera. I downloaded the ISO, created the bootable drive, plugged it in, and this time the operating system recognized my wireless card. I was in business!
![MacBook Air with Hera][8]
I was overjoyed that my very light, yet powerful MacBook Air was getting a new life with Linux. I have been exploring Elementary OS in greater detail, and I can tell you that I am impressed.
### Elementary OS's features
According to [Elementary's blog][9], "The newly redesigned login and lock screen greeter looks sharper, works better, and fixes many reported issues with the previous greeter including focus issues, HiDPI issues, and better localization. The new design in Hera was in response to user feedback from Juno, and enables some nice new features."
"Nice new features" is an understatement—Elementary OS easily has one of the best-designed Linux user interfaces I have ever seen. A System Settings icon is on the dock by default; it is easy to change the settings, and soon I had the system configured to my liking. I need larger text sizes than the defaults, and the Universal Access controls are easy to use and allow me to set large text and high contrast. I can also adjust the dock with larger icons and other options.
![Elementary OS's Settings screen][10]
Pressing the Mac's Command key brings up a list of keyboard shortcuts, which is very helpful to new users.
![Elementary OS's Keyboard shortcuts][11]
Elementary OS ships with the [Epiphany][12] web browser, which I find quite easy to use. It's a bit different than Chrome, Chromium, or Firefox, but it is more than adequate.
For security-conscious users (as we should all be), Elementary OS's Security and Privacy settings provide multiple options, including a firewall, history, locking, automatic deletion of temporary and trash files, and an on/off switch for location services.
![Elementary OS's Privacy and Security screen][13]
### More on Elementary OS
Elementary OS was originally released in 2011, and its latest version, Hera, was released on December 3, 2019. [Cassidy James Blaede][14], Elementary's co-founder and CXO, is the operating system's UX architect. Cassidy loves to design and build useful, usable, and delightful digital products using open technologies.
Elementary OS has excellent user [documentation][15], and its code (licensed under GPL 3.0) is available on [GitHub][16]. Elementary OS encourages involvement in the project, so be sure to reach out and [join the community][17].
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/macbook-linux-elementary
作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o (Coffee and laptop)
[2]: https://en.wikipedia.org/wiki/MacOS_Mojave
[3]: https://www.gnucash.org/
[4]: https://elementary.io/
[5]: https://opensource.com/life/14/10/test-drive-linux-nothing-flash-drive
[6]: https://opensource.com/article/19/12/pantheon-linux-desktop
[7]: https://www.linux4everyone.com/20-macbook-pro-elementary-os
[8]: https://opensource.com/sites/default/files/uploads/macbookair_hera.png (MacBook Air with Hera)
[9]: https://blog.elementary.io/introducing-elementary-os-5-1-hera/
[10]: https://opensource.com/sites/default/files/uploads/elementaryos_settings.png (Elementary OS's Settings screen)
[11]: https://opensource.com/sites/default/files/uploads/elementaryos_keyboardshortcuts.png (Elementary OS's Keyboard shortcuts)
[12]: https://en.wikipedia.org/wiki/GNOME_Web
[13]: https://opensource.com/sites/default/files/uploads/elementaryos_privacy-security.png (Elementary OS's Privacy and Security screen)
[14]: https://github.com/cassidyjames
[15]: https://elementary.io/docs/learning-the-basics#learning-the-basics
[16]: https://github.com/elementary
[17]: https://elementary.io/get-involved

View File

@ -0,0 +1,169 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Troubleshoot Kubernetes with the power of tmux and kubectl)
[#]: via: (https://opensource.com/article/20/2/kubernetes-tmux-kubectl)
[#]: author: (Abhishek Tamrakar https://opensource.com/users/tamrakar)
Troubleshoot Kubernetes with the power of tmux and kubectl
======
A kubectl plugin that uses tmux to make troubleshooting Kubernetes much
simpler.
![Woman sitting in front of her laptop][1]
[Kubernetes][2] is a thriving open source container orchestration platform that offers scalability, high availability, robustness, and resiliency for applications. One of its many features is support for running custom scripts or binaries through its primary client binary, [kubectl][3]. Kubectl is very powerful and allows users to do anything with it that they could do directly on a Kubernetes cluster.
### Troubleshooting Kubernetes with aliases
Anyone who uses Kubernetes for container orchestration is aware of its features—as well as the complexity it brings because of its design. For example, there is an urgent need to simplify troubleshooting in Kubernetes with something that is quicker and has little need for manual intervention (except in critical situations).
There are many scenarios to consider when it comes to troubleshooting functionality. In one scenario, you know what you need to run, but the command's syntax—even when it can run as a single command—is excessively complex, or it may need one or two inputs to work.
For example, if you frequently need to jump into a running container in the System namespace, you may find yourself repeatedly writing:
```
kubectl --namespace=kube-system exec -i -t <your-pod-name>
```
To simplify troubleshooting, you could use command-line aliases of these commands. For example, you could add the following to your dotfiles (.bashrc or .zshrc):
```
alias ksysex='kubectl --namespace=kube-system exec -i -t'
```
This is one of many examples from a [repository of common Kubernetes aliases][4] that shows one way to simplify functions in kubectl. For something simple like this scenario, an alias is sufficient.
### Switching to a kubectl plugin
A more complex troubleshooting scenario involves the need to run many commands, one after the other, to investigate an environment and come to a conclusion. Aliases alone are not sufficient for this use case; you need repeatable logic and correlations between the many parts of your Kubernetes deployment. What you really need is automation to deliver the desired output in less time.
Consider 10 to 20—or even 50 to 100—namespaces holding different microservices on your cluster. What would be helpful for you to start troubleshooting this scenario?
* You would need something that can quickly tell which pod in which namespace is throwing errors.
* You would need something that can watch logs of all the pods in a namespace.
* You might also need to watch logs of certain pods in a specific namespace that have shown errors.
Any solution that covers these points would be very useful in investigating production issues as well as during development and testing cycles.
To create something more powerful than a simple alias, you can use [kubectl plugins][5]. Plugins are like standalone scripts written in any scripting language but are designed to extend the functionality of your main command when serving as a Kubernetes admin.
To create a plugin, you must use the proper syntax of **kubectl-<your-plugin-name>** to copy the script to one of the exported pathways in your **$PATH** and give it executable permissions (**chmod +x**).
After creating a plugin and moving it into your path, you can run it immediately. For example, I have kubectl-krawl and kubectl-kmux in my path:
```
$ kubectl plugin list
The following compatible plugins are available:
/usr/local/bin/kubectl-krawl
/usr/local/bin/kubectl-kmux
$ kubectl kmux
```
Now let's explore what this looks like when you power Kubernetes with tmux.
### Harnessing the power of tmux
[Tmux][6] is a very powerful tool that many sysadmins and ops teams rely on to troubleshoot issues related to ease of operability—from splitting windows into panes for running parallel debugging on multiple machines to monitoring logs. One of its major advantages is that it can be used on the command line or in automation scripts.
I created [a kubectl plugin][7] that uses tmux to make troubleshooting much simpler. I will use annotations to walk through the logic behind the plugin (and leave it for you to go through the plugin's full code):
```
#NAMESPACE is the namespace to monitor.
#POD is the pod name
#CONTAINERS is the container names
# initialize a counter n to count the loop iterations, later used by tmux to split panes
n=0;
# start a loop over the list of pods and containers
while IFS=' ' read -r POD CONTAINERS
do
    # tmux creates a new window for each pod
    tmux neww $COMMAND -n $POD 2>/dev/null
    # start a loop over all containers inside a running pod
    for CONTAINER in ${CONTAINERS//,/ }
    do
        if [ x$POD = x -o x$CONTAINER = x ]; then
            # if any of the values is null, exit
            warn "Looks like there is a problem getting pods data."
            break
        fi

        # set the command to execute
        COMMAND="kubectl logs -f $POD -c $CONTAINER -n $NAMESPACE"
        # check for the tmux session
        if tmux has-session -t <session name> 2>/dev/null;
        then
            <set session exists>
        else
            <create session>
        fi
        # split panes in the current window for each container
        tmux selectp -t $n \; \
            splitw $COMMAND \; \
            select-layout tiled \;
    # end loop for containers
    done
    # rename the window so it can be identified by pod name
    tmux renamew $POD 2>/dev/null

    # increment the counter
    ((n+=1))
# end loop for pods
done< <(<fetch list of pod and containers from kubernetes cluster>)
# finally select the window and attach the session
tmux selectw -t <session name>:1 \; \
    attach-session -t <session name>\;
```
After the plugin script runs, it will produce output similar to the image below. Each pod has its own window, and each container (if there is more than one) is split by the panes in its pod window, streaming logs as they arrive. The beauty of tmux can be seen below; with the proper configuration, you can even see which window has activity going on (see the white tabs).
![Output of kmux plugin][8]
### Conclusion
Aliases are always helpful for simple troubleshooting in Kubernetes environments. When the environment gets more complex, a kubectl plugin is a powerful option for more advanced scripting. There are no limits on which programming language you can use to write kubectl plugins. The only requirements are that the plugin is an executable on your path that follows the **kubectl-<name>** naming convention, and that it doesn't share a name with an existing kubectl command.
To read the complete code or try the plugins I created, check my [kube-plugins-github][7] repository. Issues and pull requests are welcome.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/2/kubernetes-tmux-kubectl
作者:[Abhishek Tamrakar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/tamrakar
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_4.png?itok=VGZO8CxT (Woman sitting in front of her laptop)
[2]: https://opensource.com/resources/what-is-kubernetes
[3]: https://kubernetes.io/docs/reference/kubectl/overview/
[4]: https://github.com/ahmetb/kubectl-aliases/blob/master/.kubectl_aliases
[5]: https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/
[6]: https://opensource.com/article/19/6/tmux-terminal-joy
[7]: https://github.com/abhiTamrakar/kube-plugins
[8]: https://opensource.com/sites/default/files/uploads/kmux-output.png (Output of kmux plugin)

View File

@ -0,0 +1,430 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ansible Roles Quick Start Guide with Examples)
[#]: via: (https://www.2daygeek.com/ansible-roles-quick-start-guide-with-examples/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Ansible Roles Quick Start Guide with Examples
======
Ansible is an excellent configuration management and orchestration tool.
It is designed to easily automate the entire infrastructure.
We have written three articles in the past about Ansible.
If you are new to Ansible, I advise you to read the articles below, which will help you understand the basics of Ansible.
* **Part-1: [Ansible Automation Tool Installation, Configuration and Quick Start Guide][1]**
* **Part-2: [Ansible Ad-hoc Command Quick Start Guide with Examples][2]**
* **Part-3: [Ansible Playbooks Quick Start Guide with Examples][3]**
### What are Ansible Roles?
Ansible Roles provide a framework for automatically loading certain tasks, files, vars, templates, and handlers from a known file structure into the playbook.
The primary mechanism of a role is to break a playbook into multiple pieces (files).
This makes it easier for you to write complex playbooks and makes them easier to reuse.
Also, it reduces syntax errors by breaking the playbook into multiple files.
An Ansible playbook is a set of roles, and each role essentially performs a specific function.
Ansible roles are reusable (you can import them into other playbooks as well) because roles are independent of each other and do not depend on one another during execution.
Ansible offers two example directory structures that help you organize your Ansible playbook content.
You are not limited to these structures; you can create your own directory structure based on your needs.
Each directory has a **"main.yml"** file, which contains the basic content:
### Ansible Roles Default Directory Structure
Ansible Best Practices provides the following two directory structures. The first is very simple and well suited for a small environment with simple production and inventory files.
```
production # inventory file for production servers
staging # inventory file for staging environment
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
hostname1.yml # here we assign variables to particular systems
hostname2.yml
library/ # if any custom modules, put them here (optional)
module_utils/ # if any custom module_utils to support modules, put them here (optional)
filter_plugins/ # if any custom filter plugins, put them here (optional)
site.yml # master playbook
webservers.yml # playbook for webserver tier
dbservers.yml # playbook for dbserver tier
roles/
common/ # this hierarchy represents a "role"
tasks/ #
main.yml # <-- tasks file can include smaller files if warranted
handlers/ #
main.yml # <-- handlers file
templates/ # <-- files for use with the template resource
ntp.conf.j2 # <------- templates end in .j2
files/ #
bar.txt # <-- files for use with the copy resource
foo.sh # <-- script files for use with the script resource
vars/ #
main.yml # <-- variables associated with this role
defaults/ #
main.yml # <-- default lower priority variables for this role
meta/ #
main.yml # <-- role dependencies
library/ # roles can also include custom modules
module_utils/ # roles can also include custom module_utils
lookup_plugins/ # or other types of plugins, like lookup in this case
webtier/ # same kind of structure as "common" was above, done for the webtier role
monitoring/ # ""
fooapp/ # ""
```
If you want to use this directory structure, run the commands below.
```
$ sudo mkdir -p group_vars host_vars library module_utils filter_plugins
$ sudo mkdir -p roles/common/{tasks,handlers,templates,files,vars,defaults,meta,library,module_utils,lookup_plugins}
$ sudo touch production staging site.yml roles/common/{tasks,handlers,templates,files,vars,defaults,meta}/main.yml
```
The second one is appropriate when you have a very complex inventory environment.
```
inventories/
production/
hosts # inventory file for production servers
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
hostname1.yml # here we assign variables to particular systems
hostname2.yml
staging/
hosts # inventory file for staging environment
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
stagehost1.yml # here we assign variables to particular systems
stagehost2.yml
library/
module_utils/
filter_plugins/
site.yml
webservers.yml
dbservers.yml
roles/
common/
webtier/
monitoring/
fooapp/
```
If you want to use this directory structure, run the commands below.
```
$ sudo mkdir -p inventories/{production,staging}/{group_vars,host_vars}
$ sudo touch inventories/{production,staging}/hosts
$ sudo mkdir -p group_vars host_vars library module_utils filter_plugins
$ sudo mkdir -p roles/common/{tasks,handlers,templates,files,vars,defaults,meta,library,module_utils,lookup_plugins}
$ sudo touch site.yml roles/common/{tasks,handlers,templates,files,vars,defaults,meta}/main.yml
```
### How to Create a Simple Ansible Roles Directory Structure
By default there is no “Roles” directory in your Ansible directory, so you have to create it first.
```
$ sudo mkdir /etc/ansible/roles
```
Use the following Ansible Galaxy command to create a simple directory structure for a role.
```
$ sudo ansible-galaxy init [/Path/to/Role_Name]
```
### What's Ansible Galaxy?
Ansible Galaxy refers to the Galaxy Website, a free platform for finding, downloading and sharing community developed roles.
The Galaxy website offers pre-packaged units such as roles and collections. Whether you are provisioning infrastructure or deploying applications, you'll find plenty of roles for all the tasks you do on a daily basis.
While writing this article, I saw **23478** results, and the number is growing daily.
To prove this, we are going to create the **“webserver”** role. To do so, run the following command.
```
$ sudo ansible-galaxy init /etc/ansible/roles/webserver
- Role /etc/ansible/roles/webserver was created successfully
```
Once you have created a new role, use the tree command to view the detailed directory structure.
```
$ tree /etc/ansible/roles/webserver
/etc/ansible/roles/webserver
├── defaults
│ └── main.yml
├── files
├── handlers
│ └── main.yml
├── meta
│ └── main.yml
├── README.md
├── tasks
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
8 directories, 8 files
```
It comes with 8 directories and 8 files; details are as follows.
* **defaults:** Default variables for the role
* **handlers:** It contains handlers, which may be used by this role or even anywhere outside this role.
* **meta:** Defines some meta data for this role.
* **tasks:** It contains the main list of tasks to be executed by the role.
* **templates:** It contains templates which can be deployed via this role.
* **vars:** Other variables for the role.
This is a sample playbook that sets up the Apache Web server on Debian and Red Hat-based systems.
```
$ sudo nano /etc/ansible/playbooks/webserver.yml
---
- hosts: web
become: yes
name: "Install and Configure Apache Web Server on Linux"
tasks:
- name: "Install Apache Web Server on RHEL Based Systems"
yum: name=httpd update_cache=yes state=latest
when: ansible_facts['os_family']|lower == "redhat"
- name: "Install Apache Web Server on Debian Based Systems"
apt: name=apache2 update_cache=yes state=latest
when: ansible_facts['os_family']|lower == "debian"
- name: "Start the Apache Web Server"
service:
name: httpd
state: started
enabled: yes
- name: "Enable mod_rewrite module"
apache2_module:
name: rewrite
state: present
notify:
- restart apache
handlers:
- name: "Restart Apache2 Web Server"
service:
name: apache2
state: restarted
- name: "Restart httpd Web Server"
service:
name: httpd
state: restarted
```
Let's break the playbook above into Ansible roles. If you only have simple contents, add them to the **"main.yml"** file, otherwise create separate **"xyz.yml"** files for each task.
**Make a note:** **"notify"** should be included in the last task, which is why we have added it to the **"modules.yml"** file.
Create a separate task to install the Apache Web Server on Red Hat-based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/redhat.yml
---
- name: "Install Apache Web Server on RHEL Based Systems"
yum:
name: httpd
update_cache: yes
state: latest
```
Create a separate task to install the Apache Web Server on Debian-based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/debian.yml
---
- name: "Install Apache Web Server on Debian Based Systems"
apt:
name: apache2
update_cache: yes
state: latest
```
Create a separate task to start the Apache web server on Red Hat based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/service-httpd.yml
---
- name: "Start the Apache Web Server"
service:
name: httpd
state: started
enabled: yes
```
Create a separate task to start the Apache web server on Debian based systems.
```
$ sudo vi /etc/ansible/roles/webserver/tasks/service-apache2.yml
---
- name: "Start the Apache Web Server"
service:
name: apache2
state: started
enabled: yes
```
Create a separate task to copy the index file into the Apache web root directory.
```
$ sudo nano /etc/ansible/roles/webserver/tasks/configure.yml
---
- name: Copy index.html file
copy: src=files/index.html dest=/var/www/html
```
Create a separate task to install rewrite module on Debian based systems.
```
$ sudo nano /etc/ansible/roles/webserver/tasks/modules.yml
---
- name: "Enable mod_rewrite module"
apache2_module:
name: rewrite
state: present
notify:
- restart apache
```
Finally import all tasks into the **“main.yml”** file of the Tasks directory.
```
$ sudo nano /etc/ansible/roles/webserver/tasks/main.yml
---
#tasks file for /etc/ansible/roles/webserver
- import_tasks: redhat.yml
when: ansible_facts['os_family']|lower == 'redhat'
- import_tasks: debian.yml
when: ansible_facts['os_family']|lower == 'debian'
- import_tasks: service-httpd.yml
when: ansible_facts['os_family']|lower == 'redhat'
- import_tasks: service-apache2.yml
when: ansible_facts['os_family']|lower == 'debian'
- import_tasks: configure.yml
- import_tasks: modules.yml
when: ansible_facts['os_family']|lower == 'debian'
```
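Since the `when` conditions above key off the `ansible_os_family` fact, you can check what Ansible detects on each managed host with an ad-hoc call to the setup module. This is a quick sanity check, assuming your inventory is already configured:
```
$ ansible all -m setup -a 'filter=ansible_os_family'
```
Hosts reporting `RedHat` will run the yum/httpd tasks, while hosts reporting `Debian` will run the apt/apache2 tasks.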
Add the handler information to the **“main.yml”** file of the handlers directory.
```
$ sudo nano /etc/ansible/roles/webserver/handlers/main.yml
---
#handlers file for /etc/ansible/roles/webserver
- name: "Restart httpd Web Server"
service:
name: httpd
state: restarted
- name: "Restart Apache2 Web Server"
service:
name: apache2
state: restarted
```
Add an index.html file to the files directory. This is the file you want to copy to the target server.
```
$ sudo nano /etc/ansible/roles/webserver/files/index.html
This is the test page of 2DayGeek.com for Ansible Tutorials
```
You have successfully broken the playbook into Ansible roles using the steps above. Now, your new Ansible role may look like the one below.
![][4]
If you have done everything for your Ansible role, then finally import this role into your playbook.
```
$ sudo nano /etc/ansible/playbooks/webserver-role.yml
---
- hosts: all
become: yes
name: "Install and Configure Apache Web Server on Linux"
roles:
- webserver
```
Once you have done everything, I advise you to check the playbook syntax before executing it.
```
$ ansible-playbook /etc/ansible/playbooks/webserver-role.yml --syntax-check
playbook: /etc/ansible/playbooks/webserver-role.yml
```
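Before applying any changes, you can also do a dry run with Ansible’s check mode; the standard `--check` flag previews what would change without touching the target hosts:
```
$ ansible-playbook /etc/ansible/playbooks/webserver-role.yml --check
```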
Finally, execute the Ansible playbook to see the magic.
![][4]
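For reference, the actual run uses the same invocation as the syntax check, just without the flag (inventory and connection options are assumed to be your defaults):
```
$ ansible-playbook /etc/ansible/playbooks/webserver-role.yml
```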
Hope this tutorial helped you learn about Ansible roles. If you found it useful, please share the article on social media. If you would like to improve this article, add your comments in the comment section.
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/ansible-roles-quick-start-guide-with-examples/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/install-configure-ansible-automation-tool-linux-quick-start-guide/
[2]: https://www.2daygeek.com/ansible-ad-hoc-command-quick-start-guide-with-examples/
[3]: https://www.2daygeek.com/ansible-playbooks-quick-start-guide-with-examples/
[4]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
@ -0,0 +1,158 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SimpleLogin: Open Source Solution to Protect Your Email Inbox From Spammers)
[#]: via: (https://itsfoss.com/simplelogin/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
SimpleLogin: Open Source Solution to Protect Your Email Inbox From Spammers
======
_**Brief: SimpleLogin is an open-source service to help you protect your email address by giving you a permanent alias email address.**_
Normally, you have to use your real email address to sign up for services that you want to use personally or for your business.
In the process, you’re sharing your email address, right? And, that potentially exposes your email address to spammers (depending on where you shared the information).
What if you could protect your real email address by providing an alias for it instead? No, I’m not talking about disposable email addresses like 10minutemail, which can be useful for temporary sign-ups even though they’ve been blocked by certain services.
I’m talking about something similar to “_[Hide My Email for Sign in with Apple ID][1]_” but a free and open-source solution, i.e., [SimpleLogin][2].
### SimpleLogin: An open source service to protect your email inbox
![][3]
_It is worth noting that you still have to use your existing email client (or email service) to receive and send emails, but with this service, you get to hide your real email ID._
SimpleLogin is an open-source project (you can find it on [GitHub][4]) available for free (with premium upgrade options) that aims to keep your email private.
Unlike temporary email services, it generates a permanent random alias for your email address that you can use to sign up for services without revealing your real email.
The alias works as a point of contact that forwards the emails it receives to your real email ID.

**You’ll receive the emails sent to the alias address in your real email inbox, and if you believe that an alias is receiving too much spam, you can block it. This way, you completely stop getting spam sent to that particular alias address.**

You are not just limited to receiving emails; you can also send emails through the alias address. Interesting, right? And, using this coupled with [secure email services][5] should be a good combination to protect your privacy.
### Features of SimpleLogin
![][8]
Before taking a look at how it works, let me highlight what it offers to Internet users as well as web developers:
* Protects your real email address by generating an alias address
  * Send/receive emails through your alias
* Block the alias if emails get too spammy
* Custom domain supported with premium plans
* You can choose to self-host it
  * If you’re a web developer, you can follow the [documentation][9] to integrate a “**Sign in with SimpleLogin**” button into your login page.
You can either use it in a web browser or install the extension for Firefox, Chrome, and Safari.
[SimpleLogin][2]
### How Does SimpleLogin Work?
![][10]
To start with, you’ll have to sign up for the service with the primary email ID that you want to keep private.
Once done, use your alias email to sign up for any other services you want.
![][11]
The number of aliases you can generate is limited in the free plan; however, you can upgrade to the premium plan if you want a different alias email address for every site.
You don’t necessarily need to use the web portal; you can use the browser extension to generate aliases and use them when needed, as shown in the image below:
![][12]
Even if you want to send an email without revealing your real email ID, just generate an alias by typing in the receiver’s email ID, then paste the alias into your email client to send it.
### Brief conversation with SimpleLogin’s founder
I was quite impressed to see an open-source service like this, so I reached out to [**Son Nguyen Kim**][13] (_SimpleLogin’s founder_). Here are a few things I asked, along with the responses I got:
**How can you assure users that they can rely on your service for their personal/business use?**
**Son Nguyen Kim:** SimpleLogin follows all the best practices in terms of [email deliverability][14] to reduce the chance of emails ending up in the Spam folder. To mention a few:
* SPF, DKIM and strict DMARC
* TLS everywhere
* “Clean” IP: we made sure that our IP addresses are not blacklisted anywhere
* Constant monitoring to avoid abuses.
  * Participate in email providers’ postmaster programs
**How sustainable is your business currently?**
**Son Nguyen Kim:** Though in Beta, we already have paying customers. They use SimpleLogin both personally (to protect privacy) and for their business (create emails with their domains).
**What features have you planned for the future?**
**Son Nguyen Kim**: An iOS app is already in progress, and the Android app will follow just after.
* [PGP][15] to encrypt emails
* Able to strip images from emails. Email tracking is usually done [using a 1-pixel image][16] so tracking will also be removed with this feature enabled.
* [U2F][17] support (Yubikey)
* Better integration with existing email infrastructure for people who want to self-host SimpleLogin
You can also find a public roadmap to their plans on [Trello][18].
**Wrapping Up**
Personally, I would really love to see this succeed as a privacy-friendly alternative to social network sign-up options implemented on various web services.
In addition to that, as it stands now, it is a service for generating alias emails that should satisfy a lot of users who do not want to share their real email address. My initial impressions of SimpleLogin’s beta phase are quite positive. I’d recommend giving it a try!
They also have a [Patreon][19] page if you wish to donate instead of becoming a paying customer to help the development of SimpleLogin.
Have you tried something like this before? How exciting do you think SimpleLogin is? Feel free to share your thoughts in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/simplelogin/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://support.apple.com/en-us/HT210425
[2]: https://simplelogin.io/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-website.jpg?ssl=1
[4]: https://github.com/simple-login/app
[5]: https://itsfoss.com/secure-private-email-services/
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/best-vpn-linux.png?fit=800%2C450&ssl=1
[7]: https://itsfoss.com/best-vpn-linux/
[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-settings.jpg?ssl=1
[9]: https://docs.simplelogin.io/
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-details.png?ssl=1
[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-dashboard.jpg?ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/01/simplelogin-extensions.jpg?ssl=1
[13]: https://twitter.com/nguyenkims
[14]: https://blog.hubspot.com/marketing/email-delivery-deliverability
[15]: https://www.openpgp.org/
[16]: https://www.theverge.com/2019/7/3/20681508/tracking-pixel-email-spying-superhuman-web-beacon-open-tracking-read-receipts-location
[17]: https://en.wikipedia.org/wiki/Universal_2nd_Factor
[18]: https://trello.com/b/4d6A69I4/open-roadmap
[19]: https://www.patreon.com/simplelogin
@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10)
[#]: via: (https://itsfoss.com/ubuntu-19-04-end-of-life/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
Ubuntu 19.04 Has Reached End of Life! Existing Users Must Upgrade to Ubuntu 19.10
======
_**Brief: Ubuntu 19.04 has reached the end of life on 23rd January 2020. This means that systems running Ubuntu 19.04 won’t receive security and maintenance updates anymore, thus leaving them vulnerable.**_
![][1]
[Ubuntu 19.04][2] was released on 18th April, 2019. Since it was not a long term support (LTS) release, it was supported only for nine months.
Completing its release cycle, Ubuntu 19.04 reached end of life on 23rd January, 2020.
Ubuntu 19.04 brought a few visual and performance improvements and paved the way for a sleek and aesthetically pleasant Ubuntu look.
Like any other regular Ubuntu release, it had a life span of nine months. And that has ended now.
### End of life for Ubuntu 19.04? What does it mean?
End of life means the date after which an operating system release won’t get updates.
You might already know that Ubuntu (or any other operating system for that matter) provides security and maintenance upgrades in order to keep your systems safe from cyber attacks.
Once a release reaches the end of life, the operating system stops receiving these important updates.
If you continue using a system after the end of life of your operating system release, your system will be vulnerable to cyber attacks and malware.
That’s not all. In Ubuntu, the applications that you downloaded using APT or from the Software Center won’t be updated either. In fact, you won’t be able to [install new software using the apt-get command][3] anymore (gradually, if not immediately).
### All Ubuntu 19.04 users must upgrade to Ubuntu 19.10
Starting 23rd January 2020, Ubuntu 19.04 will stop receiving updates. You must upgrade to Ubuntu 19.10 which will be supported till July 2020.
This is also applicable to other [official Ubuntu flavors][4] such as Lubuntu, Xubuntu, Kubuntu etc.
#### How to upgrade to Ubuntu 19.10?
Thankfully, Ubuntu provides easy ways to upgrade the existing system to a newer version.
In fact, Ubuntu also prompts you that a new Ubuntu version is available and that you should upgrade to it.
![Existing Ubuntu 19.04 should see a message to upgrade to Ubuntu 19.10][5]
If you have a good internet connection, you can use the same [Software Updater tool that you use to update Ubuntu][6]. In the above image, you just need to click the Upgrade button and follow the instructions. I have written a detailed guide about [upgrading to Ubuntu 18.04][7] using this method.
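If you prefer the terminal over the graphical Software Updater, the same upgrade can usually be triggered with the `do-release-upgrade` command. This is a sketch, assuming the standard `ubuntu-release-upgrader-core` package that ships with Ubuntu:
```
$ sudo apt update && sudo apt upgrade
$ sudo do-release-upgrade
```
The tool will fetch the release-upgrade metadata and walk you through the rest interactively.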
If you don’t have a good internet connection, there is a workaround for you. Make a backup of your home directory or your important data on an external disk.
Then, make a live USB of Ubuntu 19.10. Download Ubuntu 19.10 ISO and use the Startup Disk Creator tool already installed on your Ubuntu system to create a live USB out of this ISO.
Boot from this live USB and go on installing Ubuntu 19.10. In the installation procedure, you should see an option to remove Ubuntu 19.04 and replace it with Ubuntu 19.10. Choose this option and proceed as if you are [installing Ubuntu][8] afresh.
#### Are you still using Ubuntu 19.04, 18.10, 17.10 or some other unsupported version?
You should note that at present only Ubuntu 16.04, 18.04 and 19.10 (or higher) versions are supported. If you are running an Ubuntu version other than these, you must upgrade to a newer version.
--------------------------------------------------------------------------------
via: https://itsfoss.com/ubuntu-19-04-end-of-life/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/End-of-Life-Ubuntu-19.04.png?ssl=1
[2]: https://itsfoss.com/ubuntu-19-04-release/
[3]: https://itsfoss.com/apt-get-linux-guide/
[4]: https://itsfoss.com/which-ubuntu-install/
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/02/ubuntu_19_04_end_of_life.jpg?ssl=1
[6]: https://itsfoss.com/update-ubuntu/
[7]: https://itsfoss.com/upgrade-ubuntu-version/
[8]: https://itsfoss.com/install-ubuntu/
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (wxy)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -7,15 +7,18 @@
[#]: via: (https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-the-many-variations-and-how-to-use-them.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Zipping files on Linux: the many variations and how to use them
在 Linux 上压缩文件zip 命令的各种变体及用法
======
> 除了压缩和解压缩文件外,你还可以使用 zip 命令执行许多有趣的操作。这是一些其他的 zip 选项以及它们如何提供帮助。
![](https://images.idgesg.net/images/article/2019/01/zipper-100785364-large.jpg)
Some of us have been zipping files on Unix and Linux systems for many decades — to save some disk space and package files together for archiving. Even so, there are some interesting variations on zipping that not all of us have tried. So, in this post, we’re going to look at standard zipping and unzipping as well as some other interesting zipping options.
为了节省一些磁盘空间并将文件打包在一起进行归档,我们中的一些人已经在 Unix 和 Linux 系统上压缩文件数十年了。即使如此,还有一些有趣的压缩变体并不是所有人都尝试过。因此,在本文中,我们将介绍标准的压缩和解压缩,以及其他一些有趣的压缩选项。
### The basic zip command
### 基本的 zip 命令
First, let’s look at the basic **zip** command. It uses what is essentially the same compression algorithm as **gzip**, but there are a couple of important differences. For one thing, the gzip command is used only for compressing a single file, whereas zip can both compress files and join them together into an archive. For another, the gzip command zips “in place”. In other words, it leaves only a compressed file, not the original file alongside a compressed copy. Here's an example of gzip at work:
首先,让我们看一下基本的 `zip` 命令。它使用了与 `gzip` 基本上相同的压缩算法,但是有一些重要的区别。一方面,`gzip` 命令仅用于压缩单个文件,而 `zip` 既可以压缩文件,也可以将多个文件结合在一起成为归档文件。另外,`gzip` 命令是“就地”压缩。换句话说,它只会留下压缩文件,而不会在压缩副本之外保留原始文件。下面是 `gzip` 的工作示例:
```
$ gzip onefile
@ -23,7 +26,7 @@ $ ls -l
-rw-rw-r-- 1 shs shs 10514 Jan 15 13:13 onefile.gz
```
And here's zip. Notice how this command requires that a name be provided for the zipped archive where gzip simply uses the original file name and adds the .gz extension.
而这是 `zip`。请注意,此命令要求为压缩存档提供名称,而 `gzip` 则仅使用原始文件名并添加 `.gz` 扩展名。
```
$ zip twofiles.zip file*
@ -35,9 +38,9 @@ $ ls -l
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip
```
Notice also that the original files are still sitting there.
请注意,原始文件仍位于原处。
The amount of disk space that is saved (i.e., the degree of compression obtained) will depend on the content of each file. The variation in the example below is considerable.
所节省的磁盘空间量(即获得的压缩程度)将取决于每个文件的内容。以下示例中的变化很大。
```
$ zip mybin.zip ~/bin/*
@ -56,9 +59,9 @@ $ zip mybin.zip ~/bin/*
adding: bin/tt (deflated 6%)
```
### The unzip command
### unzip 命令
The **unzip** command will recover the contents from a zip file and, as you'd likely suspect, leave the zip file intact, whereas a similar gunzip command would leave only the uncompressed file.
`unzip` 命令将从一个 zip 文件中恢复内容,并且,如你所料,原来的 zip 文件还保留在那里,而类似的 `gunzip` 命令将仅保留未压缩的文件。
```
$ unzip twofiles.zip
@ -71,9 +74,9 @@ $ ls -l
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip
```
### The zipcloak command
### zipcloak 命令
The **zipcloak** command encrypts a zip file, prompting you to enter a password twice (to help ensure you don't "fat finger" it) and leaves the file in place. You can expect the file size to vary a little from the original.
`zipcloak` 命令对一个 zip 文件进行加密,提示你输入两次密码(以确保你不会“胖手指”输错),然后将该文件原位存储。可以想见,文件大小与原始文件会略有不同。
```
$ zipcloak twofiles.zip
@ -89,11 +92,11 @@ total 204
unencrypted version
```
Keep in mind that the original files are still sitting there unencrypted.
请记住,压缩包之外的原始文件仍处于未加密状态。
### The zipdetails command
### zipdetails 命令
The **zipdetails** command is going to show you details — a _lot_ of details about a zipped file, likely a lot more than you care to absorb. Even though we're looking at an encrypted file, zipdetails does display the file names along with file modification dates, user and group information, file length data, etc. Keep in mind that this is all "metadata." We don't see the contents of the files.
`zipdetails` 命令将向你显示详细信息:有关压缩文件的详细信息,可能比你想象的要多得多。即使我们正在查看一个加密的文件,`zipdetails` 也会显示文件名以及文件修改日期、用户和组信息、文件长度数据等。请记住,这都是“元数据”。我们看不到文件的内容。
```
$ zipdetails twofiles.zip
@ -233,9 +236,9 @@ $ zipdetails twofiles.zip
Done
```
### The zipgrep command
### zipgrep 命令
The **zipgrep** command is going to use a grep-type feature to locate particular content in your zipped files. If the file is encrypted, you will need to enter the password provided for the encryption for each file you want to examine. If you only want to check the contents of a single file from the archive, add its name to the end of the zipgrep command as shown below.
`zipgrep` 命令将使用类似 `grep` 的功能来查找压缩文件中的特定内容。如果文件已加密,则需要为要检查的每个文件输入该加密所用的密码。如果只想检查归档文件中单个文件的内容,请将其名称添加到 `zipgrep` 命令的末尾,如下所示。
```
$ zipgrep hazard twofiles.zip file1
@ -243,9 +246,9 @@ $ zipgrep hazard twofiles.zip file1
Certain pesticides should be banned since they are hazardous to the environment.
```
### The zipinfo command
### zipinfo 命令
The **zipinfo** command provides information on the contents of a zipped file whether encrypted or not. This includes the file names, sizes, dates and permissions.
`zipinfo` 命令提供有关压缩文件内容的信息,无论是否加密。这包括文件名、大小、日期和权限。
```
$ zipinfo twofiles.zip
@ -256,9 +259,9 @@ Zip file size: 21313 bytes, number of entries: 2
2 files, 116954 bytes uncompressed, 20991 bytes compressed: 82.1%
```
### The zipnote command
### zipnote 命令
The **zipnote** command can be used to extract comments from zip archives or add them. To display comments, just preface the name of the archive with the command. If no comments have been added previously, you will see something like this:
`zipnote` 命令可用于从 zip 归档中提取注释或向其添加注释。要显示注释,只需在该命令后跟上归档文件名即可。如果之前未添加任何注释,你将看到类似以下内容:
```
$ zipnote twofiles.zip
@ -269,21 +272,21 @@ $ zipnote twofiles.zip
@ (zip file comment below this line)
```
If you want to add comments, write the output from the zipnote command to a file:
如果要添加注释,请先将 `zipnote` 命令的输出写入文件:
```
$ zipnote twofiles.zip > comments
```
Next, edit the file you've just created, inserting your comments above the **(comment above this line)** lines. Then add the comments using a zipnote command like this one:
接下来,编辑你刚刚创建的文件,将注释插入到 `(comment above this line)` 行上方。然后使用像这样的 `zipnote` 命令添加注释:
```
$ zipnote -w twofiles.zip < comments
```
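承接上面的步骤,编辑后的 `comments` 文件大致如下所示(注释内容为虚构示例,具体的文件条目以你自己 `zipnote` 的实际输出为准):
```
@ file1
Notes on pesticide hazards.
@ (comment above this line)
@ file2
@ (comment above this line)
@ (zip file comment below this line)
Archive created for the zip tutorial.
```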
### The zipsplit command
### zipsplit 命令
The **zipsplit** command can be used to break a zip archive into multiple zip archives when the original file is too large — maybe because you're trying to add one of the files to a small thumb drive. The easiest way to do this seems to be to specify the max size for each of the zipped file portions. This size must be large enough to accommodate the largest included file.
当归档文件太大时,可以使用 `zipsplit` 命令将一个 zip 归档文件分解为多个 zip 归档文件,这样你就可以将其中某一个文件放到小型 U 盘中。最简单的方法似乎是为每个部分的压缩文件指定最大大小,此大小必须足够大以容纳最大的包含文件。
```
$ zipsplit -n 12000 twofiles.zip
@ -296,15 +299,11 @@ $ ls twofile*.zip
-rw-rw-r-- 1 shs shs 21377 Jan 15 14:27 twofiles.zip
```
Notice how the extracted files are sequentially named "twofile1" and "twofile2".
请注意,提取出的文件是如何依次命名为 `twofile1` 和 `twofile2` 的。
### Wrap-up
### 总结
The **zip** command, along with some of its zipping compatriots, provide a lot of control over how you generate and work with compressed file archives.
**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][1] ]**
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
`zip` 命令及其一些压缩工具变体,对如何生成和使用压缩文件归档提供了很多控制。
--------------------------------------------------------------------------------
@ -312,7 +311,7 @@ via: https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-t
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,101 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 cool new projects to try in COPR for January 2020)
[#]: via: (https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/)
[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/)
COPR 仓库中 4 个很酷的新项目2020.01
======
![][1]
COPR 是个人软件仓库的[集合][2],其中的软件未收录在 Fedora 中。这是因为某些软件不符合轻松打包的标准;或者它可能不符合其他 Fedora 标准尽管它是自由而开源的。COPR 可以在 Fedora 套件之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,也未由项目签名。但是,这是一种尝试新的或实验性的软件的巧妙方式。
本文介绍了 COPR 中一些有趣的新项目。如果你第一次使用 COPR请参阅 [COPR 用户文档][3]。
### Contrast
[Contrast][4] 是一款小应用,用于检查两种颜色之间的对比度,并确定其是否满足 [WCAG][5] 中指定的要求。可以使用十六进制 RGB 代码或使用颜色选择器来选择颜色。除了显示对比度之外Contrast 还会以选定的颜色为背景显示一段短文本,来展示对比效果。
![][6]
#### 安装说明
[仓库][7]当前为 Fedora 31 和 Rawhide 提供 Contrast。要安装 Contrast请使用以下命令
```
sudo dnf copr enable atim/contrast
sudo dnf install contrast
```
### Pamixer
[Pamixer][8] 是一个使用 PulseAudio 调整和监控声音设备音量的命令行工具。你可以显示设备的当前音量并直接增大/减小它,或者静音/取消静音。Pamixer 可以列出所有的源和接收器。
#### 安装说明
[仓库][9]当前为 Fedora 31 和 Rawhide 提供 Pamixer。要安装 Pamixer请使用以下命令
```
sudo dnf copr enable opuk/pamixer
sudo dnf install pamixer
```
### PhotoFlare
[PhotoFlare][10] 是一款图像编辑器。它有简单且布局合理的用户界面,其中的大多数功能都可在工具栏中使用。尽管它不支持使用图层,但 PhotoFlare 提供了诸如各种颜色调整、图像变换、滤镜、画笔和自动裁剪等功能。此外PhotoFlare 可以批量编辑图片,对所有图片应用相同的滤镜和转换,并将结果保存在指定目录中。
![][11]
#### 安装说明
[仓库][12]当前为 Fedora 31 提供 PhotoFlare。要安装 PhotoFlare请使用以下命令
```
sudo dnf copr enable adriend/photoflare
sudo dnf install photoflare
```
### Tdiff
[Tdiff][13] 是用于比较两个文件树的命令行工具。除了能显示某些文件或目录仅存在于其中一棵树中之外,`tdiff` 还会显示文件大小、类型和内容,所有者的用户和组 ID、权限、修改时间等方面的差异。
#### 安装说明
[仓库][14]当前为 Fedora 29-31、Rawhide、EPEL 6-8 和其他发行版提供 tdiff。要安装 tdiff请使用以下命令
```
sudo dnf copr enable fif/tdiff
sudo dnf install tdiff
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-january-2020/
作者:[Dominik Turecek][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/dturecek/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg
[2]: https://copr.fedorainfracloud.org/
[3]: https://docs.pagure.org/copr.copr/user_documentation.html#
[4]: https://gitlab.gnome.org/World/design/contrast
[5]: https://www.w3.org/WAI/standards-guidelines/wcag/
[6]: https://fedoramagazine.org/wp-content/uploads/2020/01/contrast-screenshot.png
[7]: https://copr.fedorainfracloud.org/coprs/atim/contrast/
[8]: https://github.com/cdemoulins/pamixer
[9]: https://copr.fedorainfracloud.org/coprs/opuk/pamixer/
[10]: https://photoflare.io/
[11]: https://fedoramagazine.org/wp-content/uploads/2020/01/photoflare-screenshot.png
[12]: https://copr.fedorainfracloud.org/coprs/adriend/photoflare/
[13]: https://github.com/F-i-f/tdiff
[14]: https://copr.fedorainfracloud.org/coprs/fif/tdiff/
@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 Key Changes to Look Out for in Linux Kernel 5.6)
[#]: via: (https://itsfoss.com/linux-kernel-5-6/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
四大亮点带你看 Linux Kernel 5.6
======
当我们还在体验 Linux 5.5 稳定发行版带来的更好的硬件支持时Linux 5.6 已经来了。
说实话Linux 5.6 比 5.5 更令人兴奋。即使即将发布的 Ubuntu 20.04 LTS 发行版将开箱集成 Linux 5.5,你也很有必要了解一下 Linux 5.6 内核为我们提供了什么。
我将在本文中重点介绍 Linux 5.6 发行版中值得期待的关键更改和功能:
### Linux 5.6 功能亮点
![][1]
当 Linux 5.6 有新消息时,我会努力更新这份功能列表。但现在让我们先看一下当前已知的内容:
#### 1\. 支持 WireGuard
WireGuard 将被添加到 Linux 5.6 中,出于各种原因的考虑,它可能会取代 [OpenVPN][2]。
你可以在官网上进一步了解 [WireGuard][3] 的优点。当然,如果你使用过它,那你可能已经知道它比 OpenVPN 更好的原因。
同样,[Ubuntu 20.04 LTS 将支持 WireGuard][4]。
#### 2\. 支持 USB4
Linux 5.6 也将支持 **USB4**。
如果你不了解 USB 4.0USB4你可以阅读这份[文档][5]。
根据该文档“_USB4 将使 USB 的最大带宽增大一倍,并支持多个数据和显示协议同时并行传输。_”
另外,虽然我们都知道 USB4 基于 Thunderbolt 接口协议,但它将向后兼容 USB 2.0、USB 3.0 以及 Thunderbolt 3这将是一个好消息。
#### 3\. 使用 LZO/LZ4 压缩 F2FS 数据
Linux 5.6 也将支持使用 LZO/LZ4 算法压缩 F2FS 数据。
换句话说,这是 Linux 文件系统的一种新压缩技术,你可以针对特定的文件扩展名选择启用。
#### 4\. 解决 32 位系统的 2038 年问题
Unix 和 Linux 将时间值以 32 位有符号整数格式存储,其最大值为 2147483647。时间值如果超过这个数值则将由于整数溢出而存储为负数。
这意味着对于 32 位系统,时间值不能超过 1970 年 1 月 1 日后的 2147483647 秒。也就是说,在 UTC 时间 2038 年 1 月 19 日 03:14:07 时,由于整数溢出,时间将显示为 1901 年 12 月 13 日而不是 2038 年 1 月 19 日。
Linux kernel 5.6 解决了这个问题,因此 32 位系统可以运行到 2038 年以后。
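在装有 GNU coreutils 的系统上,可以用 `date` 命令直观地验证这个临界点(以下是一个简单的演示,假设 `date` 为 GNU 版本):

```shell
# 32 位有符号 time_t 所能表示的最大时间戳
date -u -d @2147483647 '+%Y-%m-%d %H:%M:%S'
# 输出2038-01-19 03:14:07,再加一秒即发生溢出回绕
```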
#### 5\. 改进硬件支持
很显然,在下一次发布版中,硬件支持也将继续提升。而支持新式无线外设的计划也同样紧迫。
新内核中将增加对 MX Master 3 鼠标以及罗技其他无线产品的支持。
除了罗技的产品外,你还可以期待获得许多不同硬件的支持(包括对 AMD GPU、NVIDIA GPU 和 Intel Tiger Lake 芯片组的支持)。
#### 6\. 其他更新
此外,除了上述主要的新增功能或支持外,下一个内核版本还将进行其他一些改进:
* 改进 AMD Zen 的温度/功率报告
  * 修复华硕飞行堡垒系列笔记本中 AMD CPU 过热的问题
  * 对 NVIDIA RTX 2000 图灵系列显卡的开源驱动支持
* 内建 FSCRYPT 加密
[Phoronix][6] 跟踪了 Linux 5.6 带来的许多技术性更改。因此,如果你好奇 Linux 5.6 所涉及的全部更改,则可以亲自了解一下。
现在你已经了解了 Linux 5.6 发布版带来的新功能,对此有什么看法呢?在下方评论中留下你的看法。
--------------------------------------------------------------------------------
via: https://itsfoss.com/linux-kernel-5-6/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[LazyWolfLin](https://github.com/LazyWolfLin)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/02/linux-kernel-5.6.jpg?ssl=1
[2]: https://openvpn.net/
[3]: https://www.wireguard.com/
[4]: https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-20.04-Adds-WireGuard
[5]: https://www.usb.org/sites/default/files/2019-09/USB-IF_USB4%20spec%20announcement_FINAL.pdf
[6]: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.6-Spectacular