Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu.Wang 2018-04-18 21:12:12 +08:00
commit 836465ff64
19 changed files with 2290 additions and 373 deletions

View File

@ -0,0 +1,81 @@
The Best Linux Distros for the Enterprise
====
In this article, I'll share the top Linux distributions for enterprise environments. Some of these distros are used in server and cloud environments as well as for desktop duties. The one thing all of these Linux options have in common is that they are enterprise-grade distributions, so you can expect a higher degree of functionality and, of course, support.
### What is an enterprise-grade Linux distribution?
An enterprise-grade Linux distribution comes down to two things: stability and support. In an enterprise environment, the Linux release in use must meet both requirements. Stability means the packages provided are both stable and usable while still maintaining the expected level of security.
The enterprise support factor means there is a reliable support mechanism in place. Sometimes this is a single (official) source, such as a company. In other cases, it might be a non-profit governing body that points users to dependable third-party support vendors. The former is obviously the best option, but both are acceptable.
### Red Hat Enterprise Linux (RHEL)
[Red Hat][1] offers a number of great products, all backed by enterprise-grade support. Its core focus areas are as follows:
- Red Hat Enterprise Linux Server: A group of server offerings that includes everything from container hosting to SAP services, along with other server derivatives.
- Red Hat Enterprise Linux Desktop: These are tightly controlled user environments running Red Hat Linux that provide basic desktop functionality, including access to the latest applications such as a web browser, email, LibreOffice, and more.
- Red Hat Enterprise Linux Workstation: Essentially Red Hat Enterprise Linux Desktop, but optimized for high-performance tasks. It is also well suited to large deployments and ongoing administration.
#### Why choose Red Hat Enterprise Linux?
Red Hat is a large, highly successful company that sells services around Linux. Basically, Red Hat makes its money from companies that want to avoid vendor lock-in and other related issues. These companies see the value in hiring open source software experts to manage their servers and other computing needs, so all a company has to do is purchase a subscription and let Red Hat handle the support.
Red Hat is also a solid corporate citizen. It sponsors open source projects as well as FoSS-friendly websites such as OpenSource.com (LCTT translator's note: FoSS stands for Free and Open Source Software), and it provides support for the Fedora project. Fedora is not owned by Red Hat; rather, Red Hat sponsors its development. This lets Fedora grow while also benefiting Red Hat considerably, since Red Hat can take what it wants from the Fedora project and use it in its enterprise Linux offerings. As things stand now, Fedora acts as an upstream channel for Red Hat Enterprise Linux.
### SUSE Linux Enterprise
[SUSE][2] is a fantastic company that gives enterprise users solid Linux options. SUSE's offerings are similar to Red Hat's: both the desktop and the server are areas of focus for the company. Speaking from my own experience with SUSE, I believe YaST has proven to be a huge advantage for non-Linux administrators looking to use the Linux operating system in the workplace. YaST provides a friendly GUI for tasks that would otherwise require some basic Linux command-line knowledge.
SUSE's core focus areas are as follows:
- SUSE Linux Enterprise Server (SLES): Includes task-specific solutions ranging from cloud to SAP, as well as mission-critical computing and software-based data storage.
- SUSE Linux Enterprise Desktop: For companies that want a reliable Linux workstation for their employees, SUSE Linux Enterprise Desktop is a great option. Like Red Hat, SUSE provides support through a subscription model, with three different levels of support to choose from.
#### Why choose SUSE Linux Enterprise?
SUSE is a company that sells services around Linux, but it does so by focusing on keeping things simple. From its website down to its Linux distributions, the emphasis is on ease of use without sacrificing security or reliability. While Red Hat is unquestionably the standard for servers in the United States, SUSE has done very well both as a company and as a contributing member of the open source community.
I'll also go ahead and say that SUSE doesn't take itself too seriously, which is a great thing when you're building connections in the IT world. From its fun music videos about Linux to the Gecko mascot appearing at SUSE trade booths for amusing photo opportunities, SUSE presents itself as approachable and easy to relate to.
### Ubuntu LTS Linux
[Ubuntu Long Term Support][3] (LTS) Linux is a simple-to-use enterprise Linux distribution. Ubuntu sees more frequent (and sometimes less stable) updates than the other distros mentioned above. Don't misunderstand me: Ubuntu LTS releases are considered quite stable, though I think some experts might stop short of calling them bulletproof.
#### Ubuntu's core focus areas are as follows:
- Ubuntu Desktop: Without question, the Ubuntu desktop is dead simple to learn and get running quickly. It may lack some advanced installation options, but that makes it all the more straightforward. As a bonus, Ubuntu has more packages available than most other releases (with the exception of its parent, the Debian distribution). Where I think Ubuntu really shines is that you can find many vendors online selling Ubuntu machines, including servers, desktops, and notebooks.
- Ubuntu Server: This covers server, cloud, and container offerings. Ubuntu also provides an interesting concept with its Juju cloud "app store." Ubuntu Server makes a lot of sense for anyone already familiar with Ubuntu or Debian; for those folks it fits like a glove, offering the command-line tools you already know and love.
- Ubuntu IoT: Most recently, Ubuntu's development team has taken aim at creating solutions for the "Internet of Things" (IoT), including digital signage, robotics, and IoT gateways. My guess is that the bulk of the IoT growth we'll see with Ubuntu will come from enterprises rather than average home users.
#### Why choose Ubuntu LTS?
Community is Ubuntu's greatest strength. On top of its huge growth in the already crowded server market, Ubuntu remains popular with ordinary users. Its development and user communities are rock solid. So while it may be considered less stable than other enterprise distros, I've found that locking an Ubuntu LTS installation into "security updates only" mode provides a very stable experience.
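A minimal sketch of what that can look like, assuming the stock unattended-upgrades package (the file path and origin pattern below reflect Ubuntu's default configuration, so treat them as an assumption to verify on your release):
```
# /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
# Allow only the security pocket, so unattended upgrades
# install security fixes and nothing else.
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
};
```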
### What about CentOS or Scientific Linux?
First, let's look at [CentOS][4] as an enterprise distribution: if you have your own in-house support team to maintain it, then installing CentOS is a great option. After all, it is compatible with Red Hat Enterprise Linux and offers the same level of stability as Red Hat's offering. Unfortunately, it is not a complete replacement for a Red Hat support subscription.
And what about [Scientific Linux][5]? What's that distribution like? Well, like CentOS, it is based on Red Hat Linux. But unlike CentOS, it has no affiliation with Red Hat. Scientific Linux has had one goal from the very beginning: to provide a common Linux distribution for laboratories around the world. Today, Scientific Linux is basically Red Hat minus the included trademarked material.
Neither of these distros can truly be swapped in for Red Hat, because they lack the Red Hat support component.
So which is the top enterprise distribution? I think that depends on a number of factors you have to determine for yourself: subscription scope, availability, cost, services, and features offered. These are factors each company must weigh on its own. Personally, I think Red Hat wins on the server while SUSE easily wins in the desktop environment, but that's just my opinion. Do you disagree? Hit the comments section below and let's talk about it.
--------------------------------------------------------------------------------
via: https://www.datamation.com/open-source/best-linux-distros-for-the-enterprise.html
Author: [Matt Hartley][a]
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.datamation.com/author/Matt-Hartley-3080.html
[1]:https://www.redhat.com/en
[2]:https://www.suse.com/
[3]:http://releases.ubuntu.com/16.04/
[4]:https://www.centos.org/
[5]:https://www.scientificlinux.org/

View File

@ -0,0 +1,106 @@
Scary Linux Commands for Halloween
======
![](https://images.idgesg.net/images/article/2017/10/animal-skeleton-100739983-large.jpg)
Even though it isn't Halloween right now, it's worth taking a look at the scary side of Linux. What commands might conjure up images of ghosts, witches, and zombies? Which ones encourage the trick-or-treat spirit?
### crypt
Well, there's always `crypt`. Despite its name, crypt is not an underground vault or a burial pit for junk files, but a command that encrypts file content. These days `crypt` is typically implemented as a script that emulates the older `crypt` command by calling a binary named `mcrypt` to do its work. Using the `mcrypt` command directly is an even better option.
```
$ mcrypt x
Enter the passphrase (maximum of 512 characters)
Please use a combination of upper and lower case letters and numbers.
Enter passphrase:
Enter passphrase:
File x was encrypted.
```
Note that the `mcrypt` command creates a second file with a `.nc` extension. It doesn't overwrite the file you are encrypting.
The `mcrypt` command has options for key size and encryption algorithm. You can also specify the key as an option, but the `mcrypt` command discourages this.
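For instance (a quick sketch; check `mcrypt --help` on your system, but these options exist in the versions I've used):
```
$ mcrypt --list                # list the supported algorithms
$ mcrypt -a rijndael-256 x     # encrypt x with a chosen algorithm, producing x.nc
$ mcrypt -d x.nc               # decrypt, prompting for the passphrase again
```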
### kill
Then there's the `kill` command, which, of course, has nothing to do with murder; it's used to end processes forcefully or gracefully, depending on what's required to terminate them properly. Naturally, Linux doesn't stop there. Instead, it provides a whole family of kill commands for terminating processes. We have `kill`, `pkill`, `killall`, `killpg`, `rfkill`, `skill` (pronounced es-kill), `tgkill`, `tkill`, and `xkill`.
```
$ killall runme
[1] Terminated ./runme
[2] Terminated ./runme
[3]- Terminated ./runme
[4]+ Terminated ./runme
```
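A quick taste of a few of the variants (the PID and process name here are just examples):
```
$ kill 1234            # politely send SIGTERM to process 1234
$ kill -9 1234         # bring out the SIGKILL chainsaw if it refuses to die
$ pkill -f runme       # kill processes whose command line matches "runme"
```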
### shred
Linux systems also support a command called `shred`. The `shred` command overwrites files to hide their former contents and ensure that they can't be recovered with hard-disk recovery tools. Remember that the `rm` command basically only removes a file's reference in a directory file; it doesn't necessarily delete the content from disk or overwrite it. The `shred` command does overwrite the file's contents.
```
$ shred dupes.txt
$ more dupes.txt
▒oΛ▒▒9▒lm▒▒▒▒▒o▒1־▒▒f▒f▒▒▒i▒▒h^}&▒▒▒{▒▒
```
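GNU shred also has options to control the number of overwrite passes and to remove the file when done; for example:
```
$ shred -u -n 5 -z dupes.txt   # overwrite 5 times, add a final pass of
                               # zeros, then deallocate and remove the file
```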
### Zombies
While not a command, zombies are a stubborn presence on Linux systems. Zombies are basically the remains of dead processes that haven't been fully cleaned up. Processes _shouldn't_ work this way, leaving dead processes lingering around instead of simply letting them die and move on to digital heaven, so the presence of zombies indicates a flaw in the processes that left them behind.
An easy way to check whether your system has zombie processes hanging around is to look at the header line of the `top` command's output.
```
$ top
top - 18:50:38 up 6 days, 6:36, 2 users, load average: 0.00, 0.00, 0.00
Tasks: 171 total, 1 running, 167 sleeping, 0 stopped, 3 zombie `< ==`
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 99.9 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 2003388 total, 250840 free, 545832 used, 1206716 buff/cache
KiB Swap: 9765884 total, 9765764 free, 120 used. 1156536 avail Mem
```
Scary! The output above shows three zombie processes.
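To see exactly which processes those zombies are, you can ask `ps` for anything in state Z (a small sketch):
```
$ ps -eo pid,ppid,stat,cmd | awk '$3 ~ /^Z/'   # zombies show up as <defunct>
```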
### at midnight
It's sometimes said around Halloween that the spirits of the dead wander from sundown until midnight. Linux can track their departure with the `at midnight` command. Used to schedule a job to run the next time the specified time comes around, `at` works like a one-time cron job.
```
$ at midnight
warning: commands will be executed using /bin/sh
at> echo 'the spirits of the dead have left'
at> <EOT>
job 3 at Thu Oct 31 00:00:00 2017
```
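Jobs queued with `at` can be listed with `atq` and removed with `atrm` (output illustrative, matching the job created above):
```
$ atq              # list pending jobs
3 Thu Oct 31 00:00:00 2017 a shs
$ atrm 3           # banish job 3 before midnight arrives
```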
### Daemons
Linux systems also depend heavily on daemons, processes that run in the background and provide much of the system's functionality. Many daemons have names ending in "d". That "d" stands for daemon and indicates that the process runs continuously and supports some important function. Some use the word "daemon" in their names outright.
```
$ ps -ef | grep sshd
root 1142 1 0 Oct19 ? 00:00:00 /usr/sbin/sshd -D
root 25342 1142 0 18:34 ? 00:00:00 sshd: shs [priv]
$ ps -ef | grep daemon | grep -v grep
message+ 790 1 0 Oct19 ? 00:00:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 836 1 0 Oct19 ? 00:00:02 /usr/lib/accountsservice/accounts-daemon
```
### Happy Halloween!
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3235219/linux/scary-linux-commands-for-halloween.html
Author: [Sandra Henry-Stocker][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world

View File

@ -1,3 +1,5 @@
Translating by valoniakim
How to start an open source program in your company
======

View File

@ -0,0 +1,79 @@
For project safety back up your people, not just your data
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel)
The [FSF][1] was founded in 1985, Perl in 1987 ([happy 30th birthday, Perl][2]!), and Linux in 1991. The [term open source][3] and the [Open Source Initiative][4] both came into being in 1998 (and [turn 20 years old][5] in 2018). Since then, free and open source software has grown to become the default choice for software development, enabling incredible innovation.
We, the greater open source community, have come of age. Millions of open source projects exist today, and each year the [GitHub Octoverse][6] reports millions of new public repositories. We rely on these projects every day, and many of us could not operate our services or our businesses without them.
So what happens when the leaders of these projects move on? How can we help ease those transitions while ensuring that the projects thrive? By teaching and encouraging **succession planning**.
### What is succession planning?
Succession planning is a popular topic among business executives, boards of directors, and human resources professionals, but it doesn't often come up with maintainers of free and open source projects. Because the concept is common in business contexts, that's where you'll find most resources and advice about establishing a succession plan. As you might expect, most of these articles aren't directly applicable to FOSS, but they do form a springboard from which we can launch our own ideas about succession planning.
According to [Wikipedia][7]:
> Succession planning is a process for identifying and developing new leaders who can replace old leaders when they leave, retire, or die.
In my opinion, this definition doesn't apply very well to free and open source software projects. I primarily object to the use of the term leaders. For the collaborative projects of FOSS, everyone can be some form of leader. Roles other than "project founder" or "benevolent dictator for life" are just as important. Any project role that is measured by bus factor is one that can benefit from succession planning.
> A project's bus factor is the number of team members who, if hit by a bus, would endanger the smooth operation of the project. The smallest and worst bus factor is 1: when only a single person's loss would put the project in jeopardy. It's a somewhat grim but still very useful concept.
I propose that instead of viewing succession planning as a leadership pipeline, free and open source projects should view it as a skills pipeline. What sorts of skills does your project need to continue functioning well, and how can you make sure those skills always exist in your community?
### Benefits of succession planning
When I talk to project maintainers about succession planning, they often respond with something like, "We've been pretty successful so far without having to think about this. Why should we start now?"
Aside from the fact that the phrase "We've always done it this way" is probably one of the most dangerous in the English language (hearing or saying it should send up red flags in any community), succession planning provides plenty of very real benefits:
* **Continuity** : When someone leaves, what happens to the tasks they were performing? Succession planning helps ensure those tasks continue uninterrupted and no one is left hanging.
* **Avoiding a power vacuum** : When a person leaves a role with no replacement, it can lead to confusion, delays, and often most damaging, political woes. After all, it's much easier to fix delays than hurt feelings. A succession plan helps alleviate the insecure and unstable time when someone in a vital role moves on.
* **Increased project/organization longevity** : The thinking required for succession planning is the same sort of thinking that contributes to project longevity. Ensuring continuity in leadership, culture, and productivity also helps ensure the project will continue. It will evolve, but it will survive.
* **Reduced workload/pressure on current leaders** : When a single team member performs a critical role in the project, they often feel pressure to be constantly "on." This can lead to burnout and worse, resignations. A succession plan ensures that all important individuals have a backup or successor. The knowledge that someone can take over is often enough to reduce the pressure, but it also means that key players can take breaks or vacations without worrying that their role will be neglected in their absence.
* **Talent development** : Members of the FOSS community talk a lot about mentoring these days, and that's great. However, most of the conversation is around mentoring people to contribute code to a project. There are many different ways to contribute to free and open source software projects beyond programming. A robust succession plan recognizes these other forms of contribution and provides mentoring to prepare people to step into critical non-programming roles.
* **Inspiration for new members** : It can be very motivational for new or prospective community members to see that a project uses its succession plan. Not only does it show them that the project is well-organized and considers its own health and welfare as well as that of its members, but it also clearly shows new members how they can grow in the community. An obvious path to critical roles and leadership positions inspires new members to stick around to walk that path.
* **Diversity of thoughts/get out of a rut** : Succession plans provide excellent opportunities to bring in new people and ideas to the critical roles of a project. [Studies show][8] that diverse leadership teams are more effective and the projects they lead are more innovative. Using your project's succession plan to mentor people from different backgrounds and with different perspectives will help strengthen and evolve the project in a healthy way.
* **Enabling meritocracy** : Unfortunately, what often passes for meritocracy in many free and open source projects is thinly veiled hostility toward new contributors and diverse opinions—hostility that's delivered from within an echo chamber. Meritocracy without a mentoring program and healthy governance structure is simply an excuse to practice subjective discrimination while hiding behind unexpressed biases. A well-executed succession plan helps teams reach the goal of a true meritocracy. What counts as merit for any given role, and how to reach that level of merit, are openly, honestly, and completely documented. The entire community will be able to see and judge which members are on the path or deserve to take on a particular critical role.
### Why it doesn't happen
Succession planning isn't a panacea, and it won't solve all problems for all projects, but as described above, it offers a lot of worthwhile benefits to your project.
Despite that, very few free and open source projects or organizations put much thought into it. I was curious why that might be, so I asked around. I learned that the reasons for not having a succession plan fall into one of five different buckets:
* **Too busy** : Many people recognize succession planning (or lack thereof) as a problem for their project but just "hadn't ever gotten around to it" because there's "always something more important to work on." I understand and sympathize with this, but I suspect the problem may have more to do with prioritization than with time availability.
* **Don't think of it** : Some people are so busy and preoccupied that they haven't considered, "Hey, what would happen if Jen had to leave the project?" This never occurs to them. After all, Jen's always been there when they need her, right? And that will always be the case, right?
* **Don't want to think of it** : Succession planning shares a trait with estate planning: It's associated with negative feelings like loss and can make people address their own mortality. Some people are uncomfortable with this and would rather not consider it at all than take the time to make the inevitable easier for those they leave behind.
* **Attitude of current leaders** : A few of the people with whom I spoke didn't want to recognize that they're replaceable, or to consider that they may one day give up their power and influence on the project. While this was (thankfully) not a common response, it was alarming enough to deserve its own bucket. Failure of someone in a critical role to recognize or admit that they won't be around forever can set a project up for failure in the long run.
* **Don't know where to start** : Many people I interviewed realize that succession planning is something that their project should be doing. They were even willing to carve out the time to tackle this very large task. What they lacked was any guidance on how to start the process of creating a succession plan.
As you can imagine, something as important and people-focused as a succession plan isn't easy to create, and it doesn't happen overnight. Also, there are many different ways to do it. Each project has its own needs and critical roles. One size does not fit all where succession plans are concerned.
There are, however, some guidelines for how every project could proceed with the succession plan creation process. I'll cover these guidelines in my next article.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/passing-baton-succession-planning-foss-leadership
Author: [VM (Vicky) Brasseur][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
Topic selected by: [lujun9972](https://github.com/lujun9972)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/vmbrasseur
[1]:http://www.fsf.org
[2]:https://opensource.com/article/17/10/perl-turns-30
[3]:https://opensource.com/article/18/2/coining-term-open-source-software
[4]:https://opensource.org
[5]:https://opensource.org/node/910
[6]:https://octoverse.github.com
[7]:https://en.wikipedia.org/wiki/Succession_planning
[8]:https://hbr.org/2016/11/why-diverse-teams-are-smarter

View File

@ -0,0 +1,93 @@
How to develop the FOSS leaders of the future
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_paperclips.png?itok=j48op49T)
Do you hold a critical role in a free and open source software project? Would you like to make it easier for the next person to step into your shoes, while also giving yourself the freedom to take breaks and avoid burnout?
Of course you would! But how do you get started?
Before you do anything, remember that this is a free or open source project. As with all things in FOSS, your succession planning should happen in collaboration with others. The [Principle of Least Astonishment][1] also applies: Don't work on your plan in isolation, then spring it on the entire community. Work together and publicly, so no one is caught off guard when the cultural or governance changes start happening.
### Identify and analyse critical roles
As a project leader, your first step is to identify the critical roles in your community. While it can help to ask each community member what role they perform, it's important to realize that most people perform multiple roles. Make sure you consider every role that each community member plays in the project.
Once you've identified the roles and determined which ones are critical to your project, the next step is to list all of the duties and responsibilities for each of those critical roles. Be very honest here. List the duties and responsibilities you think each role has, then ask the person who performs that role to list the duties the role actually has. You'll almost certainly find that the second list is longer than the first.
### Refactor large roles
During this process, have you discovered any roles that encompass a large number of duties and responsibilities? Large roles are like large methods in your code: They're a sign of a problem, and they need to be refactored to make them easier to maintain. One of the easiest and most effective steps in succession planning for FOSS projects is to split up each large role into two or more smaller roles and distribute these to other community members. With that one step, you've greatly improved the [bus factor][2] for your project. Even better, you've made each one of those new, smaller roles much more accessible and less intimidating for new community members. People are much more likely to volunteer for a role if it's not a massive burden.
### Limit role tenure
Another way to make a role more enticing is to limit its tenure. Community members will be more willing to step into roles that aren't open-ended. They can look at their life and work plans and ask themselves, "Can I take on this role for the next eighteen months?" (or whatever term limit you set).
Setting term limits also helps those who are currently performing the role. They know when they can set aside those duties and move on to something else, which can help alleviate burnout. Also, setting a term limit creates a pool of people who have performed the role and are qualified to step in if needed, which can also mitigate burnout.
### Knowledge transfer
Once you've identified and defined the critical roles in your project, most of what remains is knowledge transfer. Even small projects involve a lot of moving parts and knowledge that needs to be where everyone can see, share, use, and contribute to it. What sort of knowledge should you be collecting? The answer will vary by project, needs, and role, but here are some of the most common (and commonly overlooked) types of information needed to implement a succession plan:
* **Roles and their duties** : You've spent a lot of time identifying, analyzing, and potentially refactoring roles and their duties. Make sure this information doesn't get lost.
* **Policies and procedures** : None of those duties occur in a vacuum. Each duty must be performed in a particular way (procedures) when particular conditions are met (policies). Take stock of these details for every duty of every role.
* **Resources** : What accounts are associated with the project, or are necessary for it to operate? Who helps you with meetup space, sponsorship, or in-kind services? Such information is vital to project operation but can be easily lost when the responsible community member moves on.
* **Credentials** : Ideally, every external service required by the project will use a login that goes to an email address designated for a specific role (`sre@project.org`) rather than to a personal address. Every role's address should include multiple people on the distribution list to ensure that important messages (such as downtime or bogus "forgot password" requests) aren't missed. The credentials for every service should be kept in a secure keystore, with access limited to the fewest number of people possible.
* **Project history** : All community members benefit greatly from learning the history of the project. Collecting project history information can clarify why decisions were made in the past, for example, and reveal otherwise unexpressed requirements and values of the community. Project histories can also help new community members understand "inside jokes," jargon, and other cultural factors.
* **Transition plans** : A succession plan doesn't do much good if project leaders haven't thought through how to transition a role from one person to another. How will you locate and prepare people to take over a critical role? Since the project has already done a lot of thinking and knowledge transfer, transition plans for each role may be easier to put together.
Doing a complete knowledge transfer for all roles in a project can be an enormous undertaking, but the effort is worth it. To avoid being overwhelmed by such a daunting task, approach it one role at a time, finishing each one before you move onto the next. Limiting the scope in this way makes both progress and success much more likely.
### Document, document, document!
Succession planning takes time. The community will be making a lot of decisions and collecting a lot of information, so make sure nothing gets lost. It's important to document everything (not just in email threads). Where knowledge is concerned, documentation scales and people do not. Include even the things that you think are obvious—what's obvious to a more seasoned community member may be less so to a newbie, so don't skip steps or information.
Gather these decisions, processes, policies, and other bits of information into a single place, even if it's just a collection of markdown files in the main project repository. The "how" and "where" of the documentation can be sorted out later. It's better to capture key information first and spend time [bike-shedding][3] a documentation system later.
Once you've collected all of this information, you should understand that it's unlikely that anyone will read it. I know, it seems unfair, but that's just how things usually work out. The reason? There is simply too much documentation and too little time. To address this, add an abstract, or summary, at the top of each item. Often that's all a person needs, and if not, the complete document is there for a deep dive. Recognizing and adapting to how most people use documentation increases the likelihood that they will use yours.
Above all, don't skip the documentation process. Without documentation, succession plans are impossible.
### New leaders
If you don't yet perform a critical role but would like to, you can contribute to the succession planning process while apprenticing your way into one of those roles.
For starters, actively look for opportunities to learn and contribute. Shadow people in critical roles. You'll learn how the role is done, and you can document it to help with the succession planning process. You'll also get the opportunity to see whether it's a role you're interested in pursuing further.
Asking for mentorship is a great way to get yourself closer to taking on a critical role in the project. Even if you haven't heard that mentoring is available, it's perfectly OK to ask about it. The people already in those roles are usually happy to mentor others, but often are too busy to think about offering mentorship. Asking is a helpful reminder to them that they should be helping to train people to take over their role when they need a break.
As you perform your own tasks, actively seek out feedback. This will not only improve your skills, but it shows that you're interested in doing a better job for the community. This commitment will pay off when your project needs people to step into critical roles.
Finally, as you communicate with more experienced community members, take note of anecdotes about the history of the project and how it operates. This history is very important, especially for new contributors or people stepping into critical roles. It provides the context necessary for new contributors to understand what things do or don't work and why. As you hear these stories, document them so they can be passed on to those who come after you.
### Succession planning examples
While too few FOSS projects are actively considering succession planning, some are doing a great job of trying to reduce their bus factor and prevent maintainer burnout.
[Exercism][4] isn't just an excellent tool for gaining fluency in programming languages. It's also an [open source project][5] that goes out of its way to help contributors [land their first patch][6]. In 2016, the project reviewed the health of each language track and [discovered that many were woefully maintained][7]. There simply weren't enough people covering each language, so maintainers were burning out. The Exercism community recognized the risk this created and pushed to find new maintainers for as many language tracks as possible. As a result, the project was able to revive several tracks from near-death and develop a structure for inviting people to become maintainers.
The purpose of the [Vox Pupuli][8] project is to serve as a sort of succession plan for the [Puppet module][9] community. When a maintainer no longer wishes or is able to work on their module, they can bequeath it to the Vox Pupuli community. This community of 30 collaborators shares responsibility for maintaining all the modules it accepts into the project. The large number of collaborators ensures that no single person bears the burden of maintenance while also providing a long and fruitful life for every module in the project.
These are just two examples of how some FOSS projects are tackling succession planning. Share your stories in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/succession-planning-how-develop-foss-leaders-future
Author: [VM (Vicky) Brasseur][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
Topic selected by: [lujun9972](https://github.com/lujun9972)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/vmbrasseur
[1]:https://en.wikipedia.org/wiki/Principle_of_least_astonishment
[2]:https://en.wikipedia.org/wiki/Bus_factor
[3]:https://en.wikipedia.org/wiki/Law_of_triviality
[4]:http://exercism.io
[5]:https://github.com/exercism/exercism.io
[6]:https://github.com/exercism/exercism.io/blob/master/CONTRIBUTING.md
[7]:https://tinyletter.com/exercism/letters/exercism-track-health-check-new-maintainers
[8]:https://voxpupuli.org
[9]:https://forge.puppet.com

View File

@ -0,0 +1,58 @@
What developers need to know about security
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/locks_keys_bridge_paris.png?itok=Bp0dsEc9)
DevOps doesn't mean that everyone needs to be an expert in both development and operations. This is especially true in larger organizations in which roles tend to be more specialized. Rather, DevOps thinking has evolved in a way that makes it more about the separation of concerns. To the degree that operations teams can deploy platforms for developers (whether on-premises or in a public cloud) and get out of the way, that's good news for both teams. Developers get a productive development environment and self-service. Operations can focus on keeping the underlying plumbing running and maintaining the platform.
It's a contract of sorts. Devs expect a stable and functional platform from ops. Ops expects that devs will be able to handle most of the tasks associated with developing apps on their own.
That said, DevOps is also about better communication, collaboration, and transparency. It works better if it's not about merely a new type of wall between dev and ops. Ops needs to be sensitive to the type of tools developers want and need and the visibility they require, through monitoring and logging, to write better applications. Conversely, developers need some awareness of how the underlying infrastructure can be used most effectively and what can keep operations up at night (literally).
The same principle applies more broadly to DevSecOps, a term that serves to explicitly remind us that security needs to be embedded throughout the entire DevOps pipeline from sourcing content to writing apps, building them, testing them, and running them in production. Developers (and operations) don't suddenly need to become security specialists in addition to the other hats they already wear. But they can often benefit from a greater awareness of security best practices (which may be different from what they've become accustomed to) and shifting away from a mindset that views security as some unfortunate obstacle.
Here are a few observations.
The Open Web Application Security Project ([OWASP][1]) [Top 10 list][2] provides a window into the top vulnerabilities in web applications. Many entries on the list will be familiar to web programmers. Cross-site scripting (XSS) and injection flaws are among the most common. What's striking though is that many of the flaws on the original 2007 list are still on 2017's list ([PDF][3]). Whether it's training or tooling that's most at fault, many of the same coding flaws keep popping up.
The situation is exacerbated by new platform technologies. For example, while containers don't necessarily require applications to be written differently, they dovetail with new patterns (such as [microservices][4]) and can amplify the effects of certain security practices. For example, as my colleague [Dan Walsh][5] ([@rhatdan][6]) writes, "The biggest misconception in computing [is] you need root to run applications. The problem is not so much that devs think they need root. It is that they build this assumption into the services that they build, i.e., the services cannot run without root, making us all less secure."
Was defaulting to root access ever a good practice? Not really. But it was arguably (maybe) a defensible one with applications and systems that were otherwise sufficiently isolated by other means. But with everything connected, no real perimeter, multi-tenant workloads, users with many different levels of access rights—to say nothing of an ever more dangerous threat environment—there's far less leeway for shortcuts.
[Automation][7] should be an integral part of DevOps anyway. That automation needs to include security and compliance testing throughout the process. Where did the code come from? Are third-party technologies, products, or container images involved? Are there known security errata? Are there known common code flaws? Are secrets and personally identifiable information kept isolated? How do we authenticate? Who is authorized to deploy services and applications?
You're not writing your own crypto, are you?
Automate penetration testing where possible. Did I mention automation? It's an essential part of making security continuous rather than a checklist item that's done once in a while.
Does this sound hard? It probably is a bit. At least it may be different. But as a participant in a [DevOpsDays OpenSpaces][8] London said to me: "It's just technical testing. It's not magical or mysterious." He went on to say that it's not even that hard to get involved with security as a way to gain a broader understanding of the whole software lifecycle (which is not a bad skill to have). He also suggested taking part in incident response exercises or [capture the flag exercises][9]. You might even find they're fun.
This article is based on [a talk][10] the author will be giving at [Red Hat Summit 2018][11], which will be held May 8-10 in San Francisco. _[Register by May 7][11] to save US$500 off registration. Use discount code **OPEN18** on the payment page to apply the discount._
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/what-developers-need-know-about-security
Author: [Gordon Haff][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
Topic selected by: [lujun9972](https://github.com/lujun9972)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/ghaff
[1]:https://www.owasp.org/index.php/Main_Page
[2]:https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
[3]:https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf
[4]:https://opensource.com/tags/microservices
[5]:https://opensource.com/users/rhatdan
[6]:https://twitter.com/rhatdan
[7]:https://opensource.com/tags/automation
[8]:https://www.devopsdays.org/open-space-format/
[9]:https://dev.to/_theycallmetoni/capture-the-flag-its-a-game-for-hacki-mean-security-professionals
[10]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154677
[11]:https://www.redhat.com/en/summit/2018

View File

@ -1,106 +0,0 @@
translating---geekpi
How to Use WSL Like a Linux Pro
============================================================
![WSL](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/wsl-pro.png?itok=e65wEEAw "WSL")
Learn how to perform tasks like mounting USB drives and manipulating files in this WSL tutorial. (Image courtesy: Microsoft) [Used with permission][1] [Microsoft][2]
In the [previous tutorial][4], we learned about setting up WSL on your Windows 10 system. You can perform a lot of Linux command-line tasks in Windows 10 using WSL. Many sysadmin tasks are done inside a terminal, whether it's a Linux-based system or macOS. Windows 10, however, lacks such capabilities. You want to run a cron job? No. You want to ssh into your server and then rsync files? No way. How about managing your local files with powerful command-line utilities instead of slow and unreliable GUI utilities?
In this tutorial, you'll see how to perform additional tasks beyond managing your servers using WSL - things like mounting USB drives and manipulating files. You need to be running a fully updated Windows 10 and the Linux distro of your choice. I covered these steps in the [previous article][5], so begin there if you need to catch up. Let's get started.
### Keep your Linux system updated
The fact is, there is no Linux kernel running under the hood when you run Ubuntu or openSUSE through WSL. Yet, you must keep your distros fully updated to protect your system from any newly discovered vulnerabilities. Since only two free community distributions are officially available in the Windows Store, our tutorial will cover only those two: openSUSE and Ubuntu.
Update your Ubuntu system:
```
# sudo apt-get update
# sudo apt-get dist-upgrade
```
To run updates for openSUSE:
```
# zypper up
```
You can also upgrade openSUSE to the latest version with the _dup_ command. But before running the system upgrade, please run updates using the previous command.
```
# zypper dup
```
**Note:** openSUSE defaults to the root user. If you want to perform any non-administrative tasks, please switch to a non-privileged user. You can learn how to create a user on openSUSE in this [article][6].
### Manage local files
If you want to use great Linux command-line utilities to manage your local files, you can easily do that with WSL. Unfortunately, WSL doesn't yet support things like _lsblk_ or _mount_ for mounting local drives. You can, however, _cd_ to the C drive and manage files:
/mnt/c/Users/swapnil/Music
I am now in the Music directory of the C drive.
To mount other drives, partitions, and external USB drives, you will need to create a mount point and then mount that drive.
Open File Explorer and check the mount point of that drive. Let's assume it's mounted in Windows as S:\
In the Ubuntu/openSUSE terminal, create a mount point for the drive.
```
sudo mkdir /mnt/s
```
Now mount the drive:
```
sudo mount -t drvfs S: /mnt/s
```
Once mounted, you can access that drive from your distro. Just bear in mind that a distro running in WSL sees only what Windows can see, so you can't mount ext4 drives that can't be mounted natively on Windows.
You can now use all those magical Linux commands here. Want to copy or move files from one folder to another? Just run the _cp_ or _mv_ command.
```
cp /source-folder/source-file.txt /destination-folder/
cp /music/classical/Beethoven/symphony-2.mp3 /plex-media/music/classical/
```
If you want to move folders or large files, I would recommend _rsync_ instead of the _cp_ command:
```
rsync -avzP /music/classical/Beethoven/symphonies/ /plex-media/music/classical/
```
Yay!
Want to create new directories on Windows drives? Just use the awesome _mkdir_ command.
Want to set up a cron job to automate a task at a certain time? Go ahead and create a cron job with _crontab -e_. Easy peasy.
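For instance, a nightly sync of the music folders used earlier might look like this in your crontab (the schedule and paths are just an example):
```
# m h dom mon dow command
0 2 * * * rsync -avzP /music/classical/ /plex-media/music/classical/
```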
You can also mount network/remote folders in Linux so you can manage them with better tools. All of my drives are plugged into either a Raspberry Pi powered server or a live server, so I simply ssh into that machine and manage the drives. Transferring files between the local machine and a remote system can be done, once again, with the _rsync_ command.
WSL is now out of beta, and it will continue to get new features. Two features that I am excited about are the lsblk command and the dd command, which would allow me to natively manage my drives and create bootable Linux drives from within Windows. If you are new to the Linux command line, this [previous tutorial][7] will help you get started with some of the most basic commands.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/2/how-use-wsl-linux-pro
Author: [SWAPNIL BHARTIYA][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://blogs.msdn.microsoft.com/commandline/learn-about-windows-console-and-windows-subsystem-for-linux-wsl/
[3]:https://www.linux.com/files/images/wsl-propng
[4]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[5]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[6]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[7]:https://www.linux.com/learn/how-use-linux-command-line-basics-cli

View File

@ -0,0 +1,192 @@
5 Best Feed Reader Apps for Linux
======
**Brief: Do you use RSS feeds extensively to stay updated with your favorite websites? Take a look at the best feed reader applications for Linux.**
[RSS][1] feeds were once the most widely used way to collect news and articles from different sources in one place. It is often claimed that [RSS usage is in decline][2]. However, there are still people (like me) who believe in opening an application that accumulates all their websites' articles in one place, which they can read later even when they are not connected to the internet.
Feed readers make this easier by collecting all the published items on a website for anytime access. You don't need to open several browser tabs to visit your favorite websites and bookmark the articles you liked.
In this article, I'll share some of my favorite feed reader applications for the Linux desktop.
### Best Feed Readers for Linux
![Best Feed Readers for Linux][3]
As usual, Linux offers multiple choices for feed readers, and in this article we have compiled 5 good feed reader applications for you. The list is in no particular order.
#### 1\. Akregator Feed Reader
[Akregator][4] is a KDE product that is easy to use and powerful enough to provide the latest updates from news sites, blogs, and RSS/Atom-enabled websites.
It comes with an internal browser for reading news and updates feeds in real time.
##### Features
* You can add a website's feed using the "Add Feed" option and define an interval for refreshing and updating subscribed feeds.
* It can store and archive content; the settings can be defined globally or per feed.
* It offers an option to import subscribed feeds from another browser or a past backup.
* It notifies you of unread feeds.
##### How to install Akregator
If you are running the KDE desktop, Akregator is most probably already installed on your system. If not, you can use the command below on Debian-based systems.
```
sudo apt install akregator
```
Once installed, you can add a website directly by clicking on the Feed menu, then **Add feed**, and giving the website name. This is how the It's FOSS feed looks when added.
![][5]
#### 2\. QuiteRSS
[QuiteRSS][6] is another free and open source RSS/Atom news feed reader with lots of features. There are additional features like proxy integration, an ad blocker, an integrated browser, and system tray integration. It's easy to keep feeds updated by setting up a refresh timer.
##### Features
* Automatic feed updates, either on startup or using a timer option.
* Search for feed URLs using the website address, and categorize feeds into new, unread, starred, and deleted sections.
* An embedded browser, so that you don't leave the app.
* Image hiding, if you are only interested in text.
* An ad blocker and better system tray integration.
* Multiple language support.
##### How to install QuiteRSS
You can install it from the QuiteRSS PPA.
```
sudo add-apt-repository ppa:quiterss/quiterss
sudo apt-get update
sudo apt-get install quiterss
```
![][7]
#### 3\. Liferea
Linux Feed Reader, aka [Liferea][8], is probably the most used feed aggregator on the Linux platform. It is fast and easy to use, and it supports RSS/Atom feeds. It supports podcasts, and there is an option to add custom scripts that run depending on your actions.
There's browser integration, while you still have the option to open an item in a separate browser.
##### Features
* Liferea can download and save feeds from your favorite website to read offline.
* It can be synced with other RSS feed readers, making a transition easier.
* Support for Podcasts.
* Support for search folders, which allows users to save searches.
##### How to install Liferea
Liferea is available in the official repositories of almost all distributions. Ubuntu-based users can install it with the command below:
```
sudo apt-get install liferea
```
![][9]
#### 4\. FeedReader
[FeedReader][10] is a simple and elegant RSS desktop client for your web-based RSS accounts. It works with Feedbin, Feedly, FreshRSS, and Local RSS, among others, and has options to share articles over email, tweet about them, etc.
##### Features
* There are multiple themes for formatting.
* You can customize it according to your preferences.
* Supports notifications and podcasts.
* Fast searches and various filters are present, along with several keyboard shortcuts to make your reading experience better.
##### How to install FeedReader
FeedReader is available as a Flatpak for almost every Linux distribution.
```
flatpak install http://feedreader.xarbit.net/feedreader-repo/feedreader.flatpakref
```
It is also available in the Fedora repository:
```
sudo dnf install feedreader
```
And in the Arch User Repository:
```
yaourt -S feedreader
```
![][11]
#### 5\. Newsbeuter: RSS feeds in the terminal
[Newsbeuter][12] is an open source feed reader for terminal lovers. There are options to add and delete RSS feeds and to read the content in the terminal itself. Newsbeuter is loved by people who spend most of their time at the terminal and want their feeds clutter-free, without images and ads.
##### How to install Newsbeuter
```
sudo apt-get install newsbeuter
```
Once the installation completes, you can launch it using the command below:
```
newsbeuter
```
To add a feed to your list, edit the urls file and add the RSS feed URL.
```
vi ~/.newsbeuter/urls
>> http://feeds.feedburner.com/itsfoss
```
To read the feeds, launch newsbeuter and it will display all the posts.
![][13]
Useful commands are shown at the bottom of the terminal to help you use newsbeuter. You can read the [manual page][14] for detailed information.
#### Final Words
To me, feed readers are still relevant, especially when you follow multiple websites and blogs. Offline access to your favorite websites' and blogs' content, with options to archive and search, is the biggest advantage of using a feed reader.
Do you use a feed reader on your Linux system? If yes, tell us your favorite one in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/feed-reader-apps-linux/
Author: [Ambarish Kumar][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
Topic selected by: [lujun9972](https://github.com/lujun9972)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://itsfoss.com/author/ambarish/
[1]:https://en.wikipedia.org/wiki/RSS
[2]:http://andrewchen.co/the-death-of-rss-in-a-single-graph/
[3]:https://itsfoss.com/wp-content/uploads/2018/04/best-feed-reader-apps-linux.jpg
[4]:https://www.kde.org/applications/internet/akregator/
[5]:https://itsfoss.com/wp-content/uploads/2018/02/Akregator2-800x500.jpg
[6]:https://quiterss.org/
[7]:https://itsfoss.com/wp-content/uploads/2018/02/QuiteRSS2.jpg
[8]:https://itsfoss.com/liferea-rss-client/
[9]:https://itsfoss.com/wp-content/uploads/2018/02/Liferea-800x525.png
[10]:https://jangernert.github.io/FeedReader/
[11]:https://itsfoss.com/wp-content/uploads/2018/02/FeedReader2-800x465.jpg
[12]:https://newsbeuter.org/
[13]:https://itsfoss.com/wp-content/uploads/2018/02/newsbeuter.png
[14]:http://manpages.ubuntu.com/manpages/bionic/man1/newsbeuter.1.html

View File

@ -1,3 +1,5 @@
translating---geekpi
How To Check User Created Date On Linux
======
Do you know how to check the date a user account was created on a Linux system? If yes, what are the ways to do it?

View File

@ -0,0 +1,192 @@
The df Command Tutorial With Examples For Beginners
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/df-command-1-720x340.png)
In this guide, we are going to learn to use the **df** command. The df command, which stands for **D**isk **F**ree, reports file system disk space usage. It displays the amount of disk space available on the file systems of a Linux system. The df command is not to be confused with the **du** command; they serve different purposes. The df command reports **how much disk space we have** (i.e., free space), whereas the du command reports **how much disk space is being consumed** by files and folders. Hope I made myself clear. Let us go ahead and see some practical examples of the df command, so you can understand it better.
### The df Command Tutorial With Examples
**1\. View entire file system disk space usage**
Run the df command without any arguments to display disk space usage for all mounted file systems.
```
$ df
```
**Sample output:**
```
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4033216 0 4033216 0% /dev
run 4038880 1120 4037760 1% /run
/dev/sda2 478425016 428790352 25308980 95% /
tmpfs 4038880 34396 4004484 1% /dev/shm
tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs 4038880 11636 4027244 1% /tmp
/dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 95054 55724 32162 64% /boot
tmpfs 807776 28 807748 1% /run/user/1000
```
![][2]
As you can see, the result is divided into six columns. Let us see what each column means.
* **Filesystem** the filesystem on the system.
* **1K-blocks** the size of the filesystem, measured in 1K blocks.
* **Used** the amount of space used in 1K blocks.
* **Available** the amount of available space in 1K blocks.
* **Use%** the percentage that the filesystem is in use.
* **Mounted on** the mount point where the filesystem is mounted.
**2\. Display file system disk usage in human readable format**
As you may have noticed in the above example, the usage is shown in 1K blocks. If you want to display it in human-readable format, use the **-h** flag.
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
dev 3.9G 0 3.9G 0% /dev
run 3.9G 1.1M 3.9G 1% /run
/dev/sda2 457G 409G 25G 95% /
tmpfs 3.9G 27M 3.9G 1% /dev/shm
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 12M 3.9G 1% /tmp
/dev/loop0 83M 83M 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 93M 55M 32M 64% /boot
tmpfs 789M 28K 789M 1% /run/user/1000
```
Now look at the **Size** and **Avail** columns: the usage is shown in GB and MB.
**3\. Display disk space usage only in MB**
To view file system disk space usage only in megabytes, use the **-m** flag.
```
$ df -m
Filesystem 1M-blocks Used Available Use% Mounted on
dev 3939 0 3939 0% /dev
run 3945 2 3944 1% /run
/dev/sda2 467212 418742 24716 95% /
tmpfs 3945 26 3920 1% /dev/shm
tmpfs 3945 0 3945 0% /sys/fs/cgroup
tmpfs 3945 12 3933 1% /tmp
/dev/loop0 83 83 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 93 55 32 64% /boot
tmpfs 789 1 789 1% /run/user/1000
```
**4\. List inode information instead of block usage**
We can list inode information instead of block usage by using the **-i** flag, as shown below.
```
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
dev 1008304 439 1007865 1% /dev
run 1009720 649 1009071 1% /run
/dev/sda2 30392320 844035 29548285 3% /
tmpfs 1009720 86 1009634 1% /dev/shm
tmpfs 1009720 18 1009702 1% /sys/fs/cgroup
tmpfs 1009720 3008 1006712 1% /tmp
/dev/loop0 12829 12829 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 25688 390 25298 2% /boot
tmpfs 1009720 29 1009691 1% /run/user/1000
```
**5\. Display the file system type**
To display the file system type, use the **-T** flag.
```
$ df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
dev devtmpfs 4033216 0 4033216 0% /dev
run tmpfs 4038880 1120 4037760 1% /run
/dev/sda2 ext4 478425016 428790896 25308436 95% /
tmpfs tmpfs 4038880 31300 4007580 1% /dev/shm
tmpfs tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs tmpfs 4038880 11984 4026896 1% /tmp
/dev/loop0 squashfs 84096 84096 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 ext4 95054 55724 32162 64% /boot
tmpfs tmpfs 807776 28 807748 1% /run/user/1000
```
As you see, there is an extra column (second from left) that shows the file system type.
**6\. Display only the specific file system type**
We can limit the listing to a certain file system type, for example **ext4**. To do so, we use the **-t** flag.
```
$ df -t ext4
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 478425016 428790896 25308436 95% /
/dev/sda1 95054 55724 32162 64% /boot
```
See? This command shows only the ext4 file system disk space usage.
**7\. Exclude specific file system type**
Sometimes, you may want to exclude a specific file system type from the result. This can be achieved with the **-x** flag.
```
$ df -x ext4
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4033216 0 4033216 0% /dev
run 4038880 1120 4037760 1% /run
tmpfs 4038880 26116 4012764 1% /dev/shm
tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs 4038880 11984 4026896 1% /tmp
/dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327
tmpfs 807776 28 807748 1% /run/user/1000
```
The above command displays the usage of all file systems except **ext4**.
**8\. Display usage for a folder**
To display the available disk space for the file system containing a folder, for example **/home/sk/**, and where it is mounted, use this command:
```
$ df -hT /home/sk/
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 ext4 457G 409G 25G 95% /
```
This command shows the file system type, the used and available space in human-readable form, and the mount point. If you don't want to display the file system type, just omit the **-T** flag.
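GNU df also offers a couple of handy extras, assuming a reasonably recent coreutils: **--total** appends a grand-total line, and **--output** lets you choose exactly which columns to print.
```
$ df -h --total                                  # add a "total" row
$ df -h --output=source,size,used,avail,target   # pick specific columns
```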
For more details, refer to the man pages.
```
$ man df
```
And, that's all for today! I hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/
Author: [SK][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
Topic selected by: [lujun9972](https://github.com/lujun9972)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.ostechnix.com/author/sk/
[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/df-command.png

View File

@ -0,0 +1,83 @@
The Vrms Program Helps You To Find Non-free Software In Debian
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/vrms-1-720x340.png)
The other day I was reading an interesting guide on DigitalOcean that explained the [**difference between free and open source software**][1]. Until then, I thought both were more or less the same. Oh man, I was wrong. There are a few significant differences between them. While reading that article, I wondered how to find non-free software on Linux, hence this post.
### Say hello to “Virtual Richard M. Stallman”, a Perl script to find Non-free Software in Debian
The **Virtual Richard M. Stallman**, **vrms** for short, is a program written in Perl that analyzes the list of installed software on your Debian-based system and reports all of the packages from the non-free and contrib trees that are currently installed. For those wondering, free software should meet the following [**four essential freedoms**][2].
* **Freedom 0** – The freedom to run the program as you wish, for any purpose.
* **Freedom 1** – The freedom to study how the program works, and adapt it to your needs. Access to the source code is a precondition for this.
* **Freedom 2** – The freedom to redistribute copies so you can help your neighbor.
* **Freedom 3** – The freedom to improve the program, and release your improvements to the public, so that the whole community benefits. Access to the source code is a precondition for this.
Any software that doesnt meet the above four conditions are not considered as a free software. In a nutshell, a **Free software means the users have the freedom to run, copy, distribute, study, change and improve the software.**
Now let us find out whether the installed software is free or non-free, shall we?
The vrms package is available in the default repositories of Debian and its derivatives such as Ubuntu. So, you can install it with the apt package manager using the following command.
```
$ sudo apt-get install vrms
```
Once installed, run the following command to find non-free software on your Debian-based system.
```
$ vrms
```
Sample output from my Ubuntu 16.04 LTS desktop.
```
Non-free packages installed on ostechnix
unrar Unarchiver for .rar files (non-free version)
1 non-free packages, 0.0% of 2103 installed packages.
```
![][4]
As you can see in the above screenshot, I have one non-free package installed in my Ubuntu box.
If you don't have any non-free packages on your system, you should see the following output instead.
```
No non-free or contrib packages installed on ostechnix! rms would be proud.
```
Vrms can find non-free packages not just on Debian but also on Ubuntu, Linux Mint, and other deb-based systems.
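If you want to cross-check vrms' report without the tool itself, roughly the same information can be pulled straight from the dpkg metadata. Here is a hedged sketch that lists installed packages whose section starts with non-free or contrib (section names can vary between releases):
```
$ dpkg-query -W -f='${Section}\t${Package}\n' | grep -E '^(non-free|contrib)'
```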
**Limitations**
The vrms program has some limitations, though. As I already mentioned, it lists installed packages from the non-free and contrib sections. However, some distributions don't follow the policy that ensures proprietary software only ends up in repository sections recognized by vrms as "non-free", and they make no effort to preserve this separation. In such cases, vrms won't recognize the non-free software and will report that you have no non-free software installed on your system. If you're using a distro like Debian or Ubuntu that follows the policy of keeping proprietary software in non-free repositories, vrms will definitely help you to find the non-free packages.
And, that's all. Hope this was useful. More good stuff to come. Stay tuned!
Happy Tamil new year wishes to all Tamil folks around the world!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/the-vrms-program-helps-you-to-find-non-free-software-in-debian/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.digitalocean.com/community/tutorials/Free-vs-Open-Source-Software
[2]:https://www.gnu.org/philosophy/free-sw.html
[3]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[4]:http://www.ostechnix.com/wp-content/uploads/2018/04/vrms.png

View File

@ -0,0 +1,93 @@
translating---geekpi
4 cool new projects to try in COPR for April
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
COPR is a [collection][1] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
Here's a set of new and interesting projects in COPR.
### Anki
[Anki][2] is a program that helps you learn and remember things using spaced repetition. You can create cards and organize them into decks, or download [existing decks][3]. A card has a question on one side and an answer on the other. It may also include images, video or audio. How well you answer each card determines how often you see that particular card in the future.
While Anki is already in Fedora, this repo provides a newer version.
![][4]
#### Installation instructions
The repo currently provides Anki for Fedora 27, 28, and Rawhide. To install Anki, use these commands:
```
sudo dnf copr enable thomasfedb/anki
sudo dnf install anki
```
### Fd
[Fd][5] is a command-line utility that's a simple and slightly faster alternative to [find][6]. It can execute commands on found items in parallel. By default, fd also uses colorized terminal output and ignores hidden files as well as the patterns specified in .gitignore.
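To get a feel for it, here is a sketch of typical fd usage based on the project's documented options (the pattern and paths are made up for illustration):
```
$ fd '\.md$' docs/                 # regex search for Markdown files under docs/
$ fd -e jpg -x convert {} {.}.png  # run a command on every match, in parallel
```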
#### Installation instructions
The repo currently provides fd for Fedora 26, 27, 28, and Rawhide. To install fd, use these commands:
```
sudo dnf copr enable keefle/fd
sudo dnf install fd
```
### KeePass
[KeePass][7] is a password manager. It holds all passwords in one encrypted database locked with a master key or key file. The passwords can be organized into groups and generated by the program's built-in generator. Among its other features is Auto-Type, which can provide a username and password to selected forms.
While KeePass is already in Fedora, this repo provides the newest version.
![][8]
#### Installation instructions
The repo currently provides KeePass for Fedora 26 and 27. To install KeePass, use these commands:
```
sudo dnf copr enable mavit/keepass
sudo dnf install keepass
```
### jo
[Jo][9] is a command-line utility that transforms its input into JSON objects or arrays. It features a simple [syntax][10] and recognizes booleans, strings and numbers. In addition, jo supports nesting and can nest its own output as well.
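As a quick taste of that syntax, here is a sketch based on the project's documentation (note how false and 17 are coerced to a JSON boolean and number):
```
$ jo -p name=jo n=17 parser=false
{
   "name": "jo",
   "n": 17,
   "parser": false
}
```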
#### Installation instructions
The repo currently provides jo for Fedora 26, 27, and Rawhide, and for EPEL 6 and 7. To install jo, use these commands:
```
sudo dnf copr enable ganto/jo
sudo dnf install jo
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-try-copr-april-2018/
作者:[Dominik Turecek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:https://copr.fedorainfracloud.org/
[2]:https://apps.ankiweb.net/
[3]:https://ankiweb.net/shared/decks/
[4]:https://fedoramagazine.org/wp-content/uploads/2018/03/anki.png
[5]:https://github.com/sharkdp/fd
[6]:https://www.gnu.org/software/findutils/
[7]:https://keepass.info/
[8]:https://fedoramagazine.org/wp-content/uploads/2018/03/keepass.png
[9]:https://github.com/jpmens/jo
[10]:https://github.com/jpmens/jo/blob/master/jo.md

View File

@ -0,0 +1,181 @@
How To Resize Active/Primary root Partition Using GParted Utility
======
Today we are going to discuss disk partitioning, and in particular how to resize the active root partition in Linux.
In this article we will show you how to do this using the GParted utility.
Just imagine: our system has a 30GB disk, and we didn't partition it properly while installing the Ubuntu operating system.
We now need to install another OS on it, so we want to create a secondary partition.
It's not advisable to resize an active partition. However, we are going to do it anyway, as there is no other way to free up space on this system.
Make sure you take a backup of important data before performing this action, because if something goes wrong (for example, a power failure or an unexpected reboot), you can still recover your data.
### What Is Gparted
[GParted][1] is a free partition manager that enables you to resize, copy, and move partitions without data loss. All of the features of the GParted application are available through the GParted Live bootable image. GParted Live enables you to use GParted on GNU/Linux as well as other operating systems, such as Windows or Mac OS X.
### 1) Check Disk Space Usage Using df Command
First, let's look at the current partition layout using the df command. The output clearly shows that I have only one partition.
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G 3.4G 26.2G 16% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 487M 4.0K 487M 1% /dev
tmpfs 100M 844K 99M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 152K 497M 1% /run/shm
none 100M 52K 100M 1% /run/user
```
### 2) Check Disk Partition Using fdisk Command
I'm going to verify this using the fdisk command.
```
$ sudo fdisk -l
[sudo] password for daygeek:
Disk /dev/sda: 33.1 GB, 33129218048 bytes
255 heads, 63 sectors/track, 4027 cylinders, total 64705504 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000473a3
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 62609407 31303680 83 Linux
/dev/sda2 62611454 64704511 1046529 5 Extended
/dev/sda5 62611456 64704511 1046528 82 Linux swap / Solaris
```
### 3) Download GParted live ISO Image
Use the command below to download the GParted Live ISO.
```
$ wget https://downloads.sourceforge.net/gparted/gparted-live-0.31.0-1-amd64.iso
```
### 4) Boot Your System With GParted Live Installation Media
Boot your system with the GParted Live installation media (a burned CD/DVD, a USB stick, or the ISO image). You will see a screen similar to the one below. Choose **GParted Live (Default settings)** and hit **Enter**.
![][3]
### 5) Keyboard Selection
By default, it selects the second option; just hit **Enter**.
![][4]
### 6) Language Selection
By default, it selects **33** for US English; just hit **Enter**.
![][5]
### 7) Mode Selection (GUI or Command-Line)
By default, it selects **0** for GUI mode; just hit **Enter**.
![][6]
### 8) Loaded GParted Live Screen
The GParted Live screen is now loaded, showing the list of partitions I created earlier.
![][7]
### 9) How To Resize The root Partition
Choose the root partition you want to resize. Only one partition is available here, so I'm going to shrink it to make room for another OS.
![][8]
To do so, press the **Resize/Move** button.
![][9]
In the first box, enter the amount of space you want to take out of this partition. I'm going to reclaim **10GB**, so I entered **10240MB**, left the rest of the boxes at their defaults, and hit the **Resize/Move** button.
![][10]
It will ask you once again to confirm the resize, because you are editing an active system partition; hit **Ok**.
![][11]
The partition has been successfully shrunk from 30GB to 20GB, and 10GB of unallocated disk space is now shown.
![][12]
Finally, click the `Apply` button to perform the operations listed below.
![][13]
**`e2fsck`** e2fsck is a file system check utility that automatically repairs the file system, checking for bad sectors and HDD-related I/O errors.
![][14]
**`resize2fs`** The resize2fs program resizes ext2, ext3, or ext4 file systems. It can be used to enlarge or shrink an unmounted file system located on a device.
![][15]
**`e2image`** The e2image program saves critical ext2, ext3, or ext4 file system metadata located on a device to a specified image file.
![][16]
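For reference, what GParted does here corresponds roughly to the following command-line sequence for an unmounted ext4 file system (a sketch using the device name from this example; it cannot be run against a mounted root file system, which is exactly why the live image is needed):
```
# e2fsck -f /dev/sda1      # force a consistency check first (required by resize2fs)
# resize2fs /dev/sda1 20G  # shrink the file system to 20GB
```
GParted then also shrinks the partition itself in the partition table, which is the error-prone part to do by hand and a good reason to prefer the GUI for this job.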
All the operations have completed; close the dialog box.
![][17]
Now I can see a **10GB** unallocated region on the disk.
![][18]
Reboot the system to verify this.
![][19]
### 10) Check Free Space
Log back in to the system and use the parted command to check the available space. Yes, I can see **10GB** of unallocated disk space on this disk.
```
$ sudo parted /dev/sda print free
[sudo] password for daygeek:
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 32.2GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 10.7GB 10.7GB Free Space
1 10.7GB 32.2GB 21.5GB primary ext4 boot
```
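With the unallocated space available, a partition for the second OS can now be created, either back in GParted or with parted (a sketch; the start/end values come from the free-space range printed above and will differ on your system):
```
$ sudo parted /dev/sda mkpart primary ext4 1MiB 10.7GB
```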
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility/
作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[1]:https://gparted.org/
[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-1.png
[4]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-2.png
[5]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-3.png
[6]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-4.png
[7]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-5.png
[8]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-6.png
[9]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-7.png
[10]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-8.png
[11]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-9.png
[12]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-10.png
[13]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-11.png
[14]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-12.png
[15]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-13.png
[16]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-14.png
[17]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-15.png
[18]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-16.png
[19]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-17.png

View File

@ -0,0 +1,954 @@
Running Jenkins builds in containers
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf)
Running applications in containers has become a well-accepted practice in the enterprise sector, as [Docker][1] with [Kubernetes][2] (K8s) now provides a scalable, manageable application platform. The container-based approach also suits the [microservices architecture][3] that's gained significant momentum in the past few years.
One of the most important advantages of a container application platform is the ability to dynamically bring up isolated containers with resource limits. Let's check out how this can change the way we run our continuous integration/continuous delivery (CI/CD) tasks.
Building and packaging an application requires an environment that can download the source code, access dependencies, and have the build tools installed. Running unit and component tests as part of the build may use local ports or require third-party applications (e.g., databases, message brokers, etc.) to be running. In the end, we usually have multiple, pre-configured build servers with each running a certain type of job. For tests, we maintain dedicated instances of third-party apps (or try to run them embedded) and avoid running jobs in parallel that could mess up each other's outcome. The pre-configuration for such a CI/CD environment can be a hassle, and the required number of servers for different jobs can significantly change over time as teams shift between versions and development platforms.
Once we have access to a container platform (onsite or in the cloud), it makes sense to move the resource-intensive CI/CD task executions into dynamically created containers. In this scenario, build environments can be independently started and configured for each job execution. Tests during the build have free reign to use available resources in this isolated box, while we can also bring up a third-party application in a side container that exists only for this job's lifecycle.
It sounds nice… Let's see how it works in real life.
Note: This article is based on a real-world solution for a project running on a [Red Hat OpenShift][4] v3.7 cluster. OpenShift is the enterprise-ready version of Kubernetes, so these practices work on a K8s cluster as well. To try, download the [Red Hat CDK][5] and run the `jenkins-ephemeral` or `jenkins-persistent` [templates][6] that create preconfigured Jenkins masters on OpenShift.
### Solution overview
The solution to executing CI/CD tasks (builds, tests, etc.) in containers on OpenShift is based on [Jenkins distributed builds][7], which means:
* We need a Jenkins master; it may run inside the cluster but also works with an external master
* Jenkins features/plugins are available as usual, so existing projects can be used
* The Jenkins GUI is available to configure, run, and browse job output
  * If you prefer code, [Jenkins Pipeline][8] is also available
From a technical point of view, the dynamic containers to run jobs are Jenkins agent nodes. When a build kicks off, first a new node starts and "reports for duty" to the Jenkins master via JNLP (port 5000). The build is queued until the agent node comes up and picks up the build. The build output is sent back to the master—just like with regular Jenkins agent servers—but the agent container is shut down once the build is done.
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/1_running_jenkinsincontainers.png?itok=fR4ntnn8)
Different kinds of builds (e.g., Java, NodeJS, Python, etc.) need different agent nodes. This is nothing new—labels could previously be used to restrict which agent nodes should run a build. To define the config for these Jenkins agent containers started for each job, we will need to set the following:
* The Docker image to boot up
* Resource limits
* Environment variables
* Volumes mounted
The core component here is the [Jenkins Kubernetes plugin][9]. This plugin interacts with the K8s cluster (by using a ServiceAccount) and starts/stops the agent nodes. Multiple agent types can be defined as Kubernetes pod templates under the plugin's configuration (refer to them by label in projects).
These [agent images][10] are provided out of the box (also on [CentOS7][11]):
* [jenkins-slave-base-rhel7][12]: Base image starting the agent that connects to Jenkins master; the Java heap is set according to container memory
* [jenkins-slave-maven-rhel7][13]: Image for Maven and Gradle builds (extends base)
* [jenkins-slave-nodejs-rhel7][14]: Image with NodeJS4 tools (extends base)
Note: This solution is not related to OpenShift's [Source-to-Image (S2I)][15] build, which can also be used for certain CI/CD tasks.
### Background learning material
There are several good blogs and plenty of documentation about Jenkins builds on OpenShift; take a look at them to understand the overall solution. In this article, we'll look at the different issues that come up while applying those practices.
### Build my application
For our [example][16], let's assume a Java project with the following build steps:
* **Source:** Pull project source from a Git repository
* **Build with Maven:** Dependencies come from an internal repository (let's use Apache Nexus) mirroring external Maven repos
* **Deploy artifact:** The built JAR is uploaded to the repository
During the CI/CD process, we need to interact with Git and Nexus, so the Jenkins jobs have to be able to access those systems. This requires configuration and stored credentials that can be managed in different places:
* **In Jenkins:** We can add credentials to Jenkins that the Git plugin can use and add files to the project (using containers doesn't change anything).
* **In OpenShift:** Use ConfigMap and secret objects that are added to the Jenkins agent containers as files or environment variables.
* **In a fully customized Docker image:** These are pre-configured with everything to run a type of job; just extend one of the agent images.
Which approach you use is a question of taste, and your final solution may be a mix. Below we'll look at the second option, where the configuration is managed primarily in OpenShift. Customize the Maven agent container via the Kubernetes plugin configuration by setting environment variables and mounting files.
Note: Adding environment variables through the UI doesn't work with Kubernetes plugin v1.0 due to a [bug][17]. Either update the plugin or (as a workaround) edit `config.xml` directly and restart Jenkins.
### Pull source from Git
Pulling a public Git is trivial. For a private Git repo, authentication is required and the client also needs to trust the server for a secure connection. A Git pull can typically be done via two protocols:
* HTTPS: Authentication is with username/password. The server's SSL certificate must be trusted by the job, which is only tricky if it's signed by a custom CA.
```
git clone https://git.mycompany.com:443/myapplication.git
```
* SSH: Authentication is with a private key. The server is trusted when its public key's fingerprint is found in the `known_hosts` file.
```
git clone ssh://git@git.mycompany.com:22/myapplication.git
```
Downloading the source through HTTP with username/password is OK when it's done manually; for automated builds, SSH is better.
#### Git with SSH
For an SSH download, we need to ensure that the SSH connection works between the agent container and the Git server's SSH port. First, we need a private-public key pair. To generate one, run:
```
ssh-keygen -t rsa -b 2048 -f my-git-ssh -N ''
```
It generates a private key in `my-git-ssh` (empty passphrase) and the matching public key in `my-git-ssh.pub`. Add the public key to the user on the Git server (preferably a ServiceAccount); web UIs usually support upload. To make the SSH connection work, we need two files on the agent container:
* The private key at `~/.ssh/id_rsa`
* The server's public key in `~/.ssh/known_hosts`. To get this, try `ssh git.mycompany.com` and accept the fingerprint; this will create a new line in the `known_hosts` file. Use that.
Store the private key as `id_rsa` and server's public key as `known_hosts` in an OpenShift secret (or config map).
```
apiVersion: v1
kind: Secret
metadata:
  name: mygit-ssh
stringData:
  id_rsa: |-
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  known_hosts: |-
    git.mycompany.com ecdsa-sha2-nistp256 AAA...
```
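The same secret can also be created imperatively from the files generated earlier (a sketch; it assumes the server's fingerprint line was saved into a local file named `known_hosts`):
```
$ oc create secret generic mygit-ssh \
    --from-file=id_rsa=my-git-ssh \
    --from-file=known_hosts=known_hosts
```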
Then configure this as a volume in the Kubernetes plugin for the Maven pod at mount point `/home/jenkins/.ssh/`. Each item in the secret will be a file matching the key name under the mount directory. We can use the UI (`Manage Jenkins / Configure / Cloud / Kubernetes`), or edit Jenkins config `/var/lib/jenkins/config.xml`:
```
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
<name>maven</name>
...
  <volumes>
    <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
      <mountPath>/home/jenkins/.ssh</mountPath>
      <secretName>mygit-ssh</secretName>
    </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
  </volumes>
```
Pulling a Git source through SSH should work in the jobs running on this agent now.
Note: It's also possible to customize the SSH connection in `~/.ssh/config`, for example, if we don't want to bother with `known_hosts` or the private key is mounted to a different location:
```
Host git.mycompany.com
   StrictHostKeyChecking no
   IdentityFile /home/jenkins/.config/git-secret/ssh-privatekey
```
#### Git with HTTP
If you prefer an HTTP download, add the username/password to a [Git-credential-store][18] file somewhere:
* E.g. `/home/jenkins/.config/git-secret/credentials` from an OpenShift secret, one site per line:
```
https://username:password@git.mycompany.com
https://user:pass@github.com
```
* Enable it in [git-config][19] expected at `/home/jenkins/.config/git/config`:
```
[credential]
  helper = store --file=/home/jenkins/.config/git-secret/credentials
```
If the Git service has a certificate signed by a custom certificate authority (CA), the quickest hack is to set the `GIT_SSL_NO_VERIFY=true` environment variable (EnvVar) for the agent. The proper solution needs two things:
* Add the custom CA's public certificate to the agent container from a config map to a path (e.g. `/usr/ca/myTrustedCA.pem`).
* Tell Git the path to this cert in an EnvVar `GIT_SSL_CAINFO=/usr/ca/myTrustedCA.pem` or in the `git-config` file mentioned above:
```
[http "https://git.mycompany.com"]
    sslCAInfo = /usr/ca/myTrustedCA.pem
```
Note: In OpenShift v3.7 (and earlier), the config map and secret mount points [must not overlap][20], so we can't map to `/home/jenkins` and `/home/jenkins/dir` at the same time. This is why we didn't use the well-known file locations above. A fix is expected in OpenShift v3.9.
### Maven
To make a Maven build work, there are usually two things to do:
* A corporate Maven repository (e.g., Apache Nexus) should be set up to act as a proxy for external repos. Use this as a mirror.
* This internal repository may have an HTTPS endpoint with a certificate signed by a custom CA.
Having an internal Maven repository is practically essential if builds run in containers because they start with an empty local repository (cache), so Maven downloads all the JARs every time. Downloading from an internal proxy repo on the local network is obviously quicker than downloading from the Internet.
The [Maven Jenkins agent][13] image supports an environment variable that can be used to set the URL for this proxy. Set the following in the Kubernetes plugin container template:
```
MAVEN_MIRROR_URL=https://nexus.mycompany.com/repository/maven-public
```
The build artifacts (JARs) should also be archived in a repository, which may or may not be the same as the one acting as a mirror for dependencies above. Maven `deploy` requires the repo URL in the `pom.xml` under [Distribution management][21] (this has nothing to do with the agent image):
```
<project ...>
<distributionManagement>
 <snapshotRepository>
  <id>mynexus</id>
  <url>https://nexus.mycompany.com/repository/maven-snapshots/</url>
 </snapshotRepository>
 <repository>
  <id>mynexus</id>
  <url>https://nexus.mycompany.com/repository/maven-releases/</url>
 </repository>
</distributionManagement>
```
Uploading the artifact may require authentication. In this case, username/password must be set in the `settings.xml` under the server ID matching the one in `pom.xml`. We need to mount a whole `settings.xml` with the URL, username, and password on the Maven Jenkins agent container from an OpenShift secret. We can also use environment variables as below:
* Add environment variables from a secret to the container:
```
MAVEN_SERVER_USERNAME=admin
MAVEN_SERVER_PASSWORD=admin123
```
* Mount `settings.xml` from a config map to `/home/jenkins/.m2/settings.xml`:
```
<settings ...>
 <mirrors>
  <mirror>
   <mirrorOf>external:*</mirrorOf>
   <url>${env.MAVEN_MIRROR_URL}</url>
   <id>mirror</id>
  </mirror>
 </mirrors>
 <servers>
  <server>
   <id>mynexus</id>
   <username>${env.MAVEN_SERVER_USERNAME}</username>
   <password>${env.MAVEN_SERVER_PASSWORD}</password>
  </server>
 </servers>
</settings>
```
Disable interactive mode (use batch mode) to skip the download log by using `-B` for Maven commands or by adding `<interactiveMode>false</interactiveMode>` to `settings.xml`.
If the Maven repository's HTTPS endpoint uses a certificate signed by a custom CA, we need to create a Java KeyStore using the [keytool][22] containing the CA certificate as trusted. This KeyStore should be uploaded as a config map in OpenShift. Use the `oc` command to create a config map from files:
```
oc create configmap maven-settings --from-file=settings.xml=settings.xml \
  --from-file=myTruststore.jks=myTruststore.jks
```
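For completeness, the trust store itself can be built from the CA's PEM certificate with keytool along these lines (a sketch; the alias and store password are placeholder values):
```
$ keytool -importcert -noprompt -trustcacerts \
    -alias mycompany-ca -file myTrustedCA.pem \
    -keystore myTruststore.jks -storepass changeit
```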
Mount the config map somewhere on the Jenkins agent. In this example we use `/home/jenkins/.m2`, but only because we have `settings.xml` in the same config map. The KeyStore can go under any path.
Then make the Maven Java process use this file as a trust store by setting Java parameters in the `MAVEN_OPTS` environment variable for the container:
```
MAVEN_OPTS=-Djavax.net.ssl.trustStore=/home/jenkins/.m2/myTruststore.jks -Djavax.net.ssl.trustStorePassword=changeit
```
### Memory usage
This is probably the most important part—if we don't set max memory correctly, we'll run into intermittent build failures after everything seems to work.
Running Java in a container can cause high memory usage errors if we don't set the heap in the Java command line. The JVM [sees the total memory of the host machine][23] instead of the container's memory limit and sets the [default max heap][24] accordingly. This is typically much more than the container's memory limit, and OpenShift simply kills the container when a Java process allocates more memory for the heap.
Although the `jenkins-slave-base` image has a built-in [script to set the max heap][25] to half the container memory (this can be modified via the EnvVar `CONTAINER_HEAP_PERCENT=0.50`), it only applies to the Jenkins agent Java process. In a Maven build, we have important additional Java processes running:
* The `mvn` command itself is a Java tool.
* The [Maven Surefire-plugin][26] executes the unit tests in a forked JVM by default.
At the end of the day, we'll have three Java processes running at the same time in the container, and it's important to estimate their memory usage to avoid unexpectedly killed pods. Each process has a different way to set JVM options:
* Jenkins agent heap is calculated as mentioned above, but we definitely shouldn't let the agent have such a big heap. Memory is needed for the other two JVMs. Setting `JAVA_OPTS` works for the Jenkins agent.
* The `mvn` tool is called by the Jenkins job. Set `MAVEN_OPTS` to customize this Java process.
  * The JVM spawned by the Maven `surefire` plugin for the unit tests can be customized via the [argLine][27] Maven property. It can be set in the `pom.xml`, in a profile in `settings.xml`, or simply by adding `-DargLine=…` to the `mvn` command via `MAVEN_OPTS`.
Here is an example of how to set these environment variables for the Maven agent container:
```
JAVA_OPTS=-Xms64m -Xmx64m
MAVEN_OPTS=-Xms128m -Xmx128m -DargLine=${env.SUREFIRE_OPTS}
SUREFIRE_OPTS=-Xms256m -Xmx256m
```
These numbers worked in our tests with a 1024Mi agent container memory limit, building and running unit tests for a Spring Boot app. These are relatively low numbers; a bigger heap size and a higher container limit may be needed for complex Maven projects and unit tests.
Note: The actual memory usage of a Java 8 process is something like `HeapSize + MetaSpace + OffHeapMemory`, which can be significantly more than the max heap size set. With the settings above, the three Java processes took more than 900Mi of memory in our case. See the RSS memory for processes within the container: `ps -e -o pid,user,rss,comm,args`
The Jenkins agent images have both 64-bit and 32-bit JDKs installed. For `mvn` and `surefire`, the 64-bit JVM is used by default. To lower memory usage, it makes sense to force the 32-bit JVM as long as `-Xmx` is less than 1.5 GB:
```
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.i386
```
Note that it's also possible to set Java arguments in the `JAVA_TOOL_OPTIONS` EnvVar, which is picked up by any JVM started. The parameters in `JAVA_OPTS` and `MAVEN_OPTS` overwrite the ones in `JAVA_TOOL_OPTIONS`, so we can achieve the same heap configuration for our Java processes as above without using `argLine`:
```
JAVA_OPTS=-Xms64m -Xmx64m
MAVEN_OPTS=-Xms128m -Xmx128m
JAVA_TOOL_OPTIONS=-Xms256m -Xmx256m
```
It's still a bit confusing, as every JVM logs `Picked up JAVA_TOOL_OPTIONS: …` on startup.
### Jenkins Pipeline
Following the settings above, we should have everything prepared to run a successful build. We can pull the code, download the dependencies, run the unit tests, and upload the artifact to our repository. Let's create a Jenkins Pipeline project that does this:
```
pipeline {
  /* Which container to bring up for the build. Pick one of the templates configured in Kubernetes plugin. */
  agent {
    label 'maven'
  }
  stages {
    stage('Pull Source') {
      steps {
        git url: 'ssh://git@git.mycompany.com:22/myapplication.git', branch: 'master'
      }
    }
    stage('Unit Tests') {
      steps {
        sh 'mvn test'
      }
    }
    stage('Deploy to Nexus') {
      steps {
        sh 'mvn deploy -DskipTests'
      }
    }
  }
}
```
For a real project, of course, the CI/CD pipeline should do more than just the Maven build; it could deploy to a development environment, run integration tests, promote to higher environments, etc. The learning articles linked above show examples of how to do those things.
### Multiple containers
One pod can be running multiple containers with each having their own resource limits. They share the same network interface, so we can reach started services on `localhost`, but we need to think about port collisions. Environment variables are set separately, but the volumes mounted are the same for all containers configured in one Kubernetes pod template.
Bringing up multiple containers is useful when an external service is required for unit tests and an embedded solution doesn't work (e.g., database, message broker, etc.). In this case, this second container also starts and stops with the Jenkins agent.
See the Jenkins `config.xml` snippet where we start an `httpbin` service on the side for our Maven build:
```
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
  <name>maven</name>
  <volumes>
    ...
  </volumes>
  <containers>
    <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      <name>jnlp</name>
      <image>registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7:v3.7</image>
      <resourceLimitCpu>500m</resourceLimitCpu>
      <resourceLimitMemory>1024Mi</resourceLimitMemory>
      <envVars>
      ...
      </envVars>        
      ...
    </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
    <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      <name>httpbin</name>
      <image>citizenstig/httpbin</image>
      <resourceLimitCpu></resourceLimitCpu>
      <resourceLimitMemory>256Mi</resourceLimitMemory>
      <envVars/>
      ...
    </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
  </containers>
  <envVars/>
</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
```
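With this template, the build can reach the side container over localhost, for example from a pipeline `sh` step (a sketch, assuming this httpbin image serves on its default port 8000):
```
curl -sf http://localhost:8000/status/200
```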
### Summary
For a summary, see the created OpenShift resources and the Kubernetes plugin configuration from Jenkins `config.xml` with the configuration described above.
```
apiVersion: v1
kind: List
metadata: {}
items:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: git-config
  data:
    config: |
      [credential]
          helper = store --file=/home/jenkins/.config/git-secret/credentials
      [http "http://git.mycompany.com"]
          sslCAInfo = /home/jenkins/.config/git/myTrustedCA.pem
    myTrustedCA.pem: |-
      -----BEGIN CERTIFICATE-----
      MIIDVzCCAj+gAwIBAgIJAN0sC...
      -----END CERTIFICATE-----
- apiVersion: v1
  kind: Secret
  metadata:
    name: git-secret
  stringData:
    ssh-privatekey: |-
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
    credentials: |-
      https://username:password@git.mycompany.com
      https://user:pass@github.com
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: git-ssh
  data:
    config: |-
      Host git.mycompany.com
        StrictHostKeyChecking yes
        IdentityFile /home/jenkins/.config/git-secret/ssh-privatekey
    known_hosts: '[git.mycompany.com]:22 ecdsa-sha2-nistp256 AAAdn7...'
- apiVersion: v1
  kind: Secret
  metadata:
    name: maven-secret
  stringData:
    username: admin
    password: admin123
```
One additional config map was created from files:
```
oc create configmap maven-settings --from-file=settings.xml=settings.xml \
  --from-file=myTruststore.jks=myTruststore.jks
```
Kubernetes plugin configuration:
```
<?xml version='1.0' encoding='UTF-8'?>
<hudson>
...
  <clouds>
    <org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud plugin="kubernetes@1.0">
      <name>openshift</name>
      <defaultsProviderTemplate></defaultsProviderTemplate>
      <templates>
        <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
          <inheritFrom></inheritFrom>
          <name>maven</name>
          <namespace></namespace>
          <privileged>false</privileged>
          <alwaysPullImage>false</alwaysPullImage>
          <instanceCap>2147483647</instanceCap>
          <slaveConnectTimeout>100</slaveConnectTimeout>
          <idleMinutes>0</idleMinutes>
          <label>maven</label>
          <serviceAccount>jenkins37</serviceAccount>
          <nodeSelector></nodeSelector>
          <nodeUsageMode>NORMAL</nodeUsageMode>
          <customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
          <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
            <memory>false</memory>
          </workspaceVolume>
          <volumes>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
              <mountPath>/home/jenkins/.config/git-secret</mountPath>
              <secretName>git-secret</secretName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
              <mountPath>/home/jenkins/.ssh</mountPath>
              <configMapName>git-ssh</configMapName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
              <mountPath>/home/jenkins/.config/git</mountPath>
              <configMapName>git-config</configMapName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
              <mountPath>/home/jenkins/.m2</mountPath>
              <configMapName>maven-settings</configMapName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
          </volumes>
          <containers>
            <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
              <name>jnlp</name>
              <image>registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7:v3.7</image>
              <privileged>false</privileged>
              <alwaysPullImage>false</alwaysPullImage>
              <workingDir>/tmp</workingDir>
              <command></command>
              <args>${computer.jnlpmac} ${computer.name}</args>
              <ttyEnabled>false</ttyEnabled>
              <resourceRequestCpu>500m</resourceRequestCpu>
              <resourceRequestMemory>1024Mi</resourceRequestMemory>
              <resourceLimitCpu>500m</resourceLimitCpu>
              <resourceLimitMemory>1024Mi</resourceLimitMemory>
              <envVars>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>JAVA_HOME</key>
                  <value>/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.i386</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>JAVA_OPTS</key>
                  <value>-Xms64m -Xmx64m</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>MAVEN_OPTS</key>
                  <value>-Xms128m -Xmx128m -DargLine=${env.SUREFIRE_OPTS} -Djavax.net.ssl.trustStore=/home/jenkins/.m2/myTruststore.jks -Djavax.net.ssl.trustStorePassword=changeit</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>SUREFIRE_OPTS</key>
                  <value>-Xms256m -Xmx256m</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>MAVEN_MIRROR_URL</key>
                  <value>https://nexus.mycompany.com/repository/maven-public</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
                  <key>MAVEN_SERVER_USERNAME</key>
                  <secretName>maven-secret</secretName>
                  <secretKey>username</secretKey>
                </org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
                  <key>MAVEN_SERVER_PASSWORD</key>
                  <secretName>maven-secret</secretName>
                  <secretKey>password</secretKey>
                </org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
              </envVars>
              <ports/>
              <livenessProbe>
                <execArgs></execArgs>
                <timeoutSeconds>0</timeoutSeconds>
                <initialDelaySeconds>0</initialDelaySeconds>
                <failureThreshold>0</failureThreshold>
                <periodSeconds>0</periodSeconds>
                <successThreshold>0</successThreshold>
              </livenessProbe>
            </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
              <name>httpbin</name>
              <image>citizenstig/httpbin</image>
              <privileged>false</privileged>
              <alwaysPullImage>false</alwaysPullImage>
              <workingDir></workingDir>
              <command>/run.sh</command>
              <args></args>
              <ttyEnabled>false</ttyEnabled>
              <resourceRequestCpu></resourceRequestCpu>
              <resourceRequestMemory>256Mi</resourceRequestMemory>
              <resourceLimitCpu></resourceLimitCpu>
              <resourceLimitMemory>256Mi</resourceLimitMemory>
              <envVars/>
              <ports/>
              <livenessProbe>
                <execArgs></execArgs>
                <timeoutSeconds>0</timeoutSeconds>
                <initialDelaySeconds>0</initialDelaySeconds>
                <failureThreshold>0</failureThreshold>
                <periodSeconds>0</periodSeconds>
                <successThreshold>0</successThreshold>
              </livenessProbe>
            </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
          </containers>
          <envVars/>
          <annotations/>
          <imagePullSecrets/>
        </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      </templates>
      <serverUrl>https://172.30.0.1:443</serverUrl>
      <serverCertificate>-----BEGIN CERTIFICATE-----
MIIC6jCC...
-----END CERTIFICATE-----</serverCertificate>
      <skipTlsVerify>false</skipTlsVerify>
      <namespace>first</namespace>
      <jenkinsUrl>http://jenkins.cicd.svc:80</jenkinsUrl>
      <jenkinsTunnel>jenkins-jnlp.cicd.svc:50000</jenkinsTunnel>
      <credentialsId>1a12dfa4-7fc5-47a7-aa17-cc56572a41c7</credentialsId>
      <containerCap>10</containerCap>
      <retentionTimeout>5</retentionTimeout>
      <connectTimeout>0</connectTimeout>
      <readTimeout>0</readTimeout>
      <maxRequestsPerHost>32</maxRequestsPerHost>
    </org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud>
  </clouds>
</hudson>
```
Happy builds!
This was originally published on [ITNext][28] and is reprinted with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/running-jenkins-builds-containers
作者:[Balazs Szeti][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bszeti
[1]:https://opensource.com/resources/what-docker
[2]:https://opensource.com/resources/what-is-kubernetes
[3]:https://martinfowler.com/articles/microservices.html
[4]:https://www.openshift.com/
[5]:https://developers.redhat.com/products/cdk/overview/
[6]:https://github.com/openshift/origin/tree/master/examples/jenkins
[7]:https://wiki.jenkins.io/display/JENKINS/Distributed+builds
[8]:https://jenkins.io/doc/book/pipeline/
[9]:https://github.com/jenkinsci/kubernetes-plugin
[10]:https://access.redhat.com/containers/#/search/jenkins%2520slave
[11]:https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=openshift+jenkins+slave+&starCount=0
[12]:https://github.com/openshift/jenkins/tree/master/slave-base
[13]:https://github.com/openshift/jenkins/tree/master/slave-maven
[14]:https://github.com/openshift/jenkins/tree/master/slave-nodejs
[15]:https://docs.openshift.com/container-platform/3.7/architecture/core_concepts/builds_and_image_streams.html#source-build
[16]:https://github.com/bszeti/camel-springboot/tree/master/camel-rest-complex
[17]:https://issues.jenkins-ci.org/browse/JENKINS-47112
[18]:https://git-scm.com/docs/git-credential-store/1.8.2
[19]:https://git-scm.com/docs/git-config/1.8.2
[20]:https://bugzilla.redhat.com/show_bug.cgi?id=1430322
[21]:https://maven.apache.org/pom.html#Distribution_Management
[22]:https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html
[23]:https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
[24]:https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/parallel.html#default_heap_size
[25]:https://github.com/openshift/jenkins/blob/master/slave-base/contrib/bin/run-jnlp-client
[26]:http://maven.apache.org/surefire/maven-surefire-plugin/examples/fork-options-and-parallel-execution.html
[27]:http://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#argLine
[28]:https://itnext.io/running-jenkins-builds-in-containers-458e90ff2a7b

View File

@ -1,82 +0,0 @@
面向企业的最佳 Linux 发行版
====
在这篇文章中,我将分享企业环境下顶级的 Linux 发行版。其中一些发行版与桌面任务一起用于服务器和云环境。所有这些 Linux 选项具有的一个共同点是它们都是企业级 Linux 发行版 -- 所以你可以期待更高程度的功能,当然还有支持。
### 什么是企业级的 Linux 发行版?
一个企业级的 Linux 发行版可以归结为以下内容 - 稳定性和支持。在企业环境中,使用的 Linux 版本必须满足这两点。稳定性意味着所提供的软件包既稳定又可用,同时仍然保持预期的安全性。
企业级的支持因素意味着有一个可靠的支持机制。有时这是单一的(官方)来源,如公司。在其他情况下,它可能是一个非营利性的执政机构,向优秀的第三方支持供应商提供可靠的建议。很明显,前者是最好的选择,但两者都可以接受。
### Red Hat 企业级 Linux
[Red Hat][1] 有很多很棒的产品,都有企业级的支持来保证可用。其核心重点如下:
- Red Hat 企业级 Linux 服务器:这是一组服务器产品,包括从容器托管到 SAP 服务的所有内容,还有其他衍生的服务器。
- Red Hat 企业级 Linux 桌面:这些是严格控制的用户环境,运行 Red Hat Linux提供基本的桌面功能。这些功能包括访问最新的应用程序如 web 浏览器、电子邮件、LibreOffice 等。
- Red Hat 企业级 Linux 工作站:这基本上是 Red Hat 企业级 Linux 桌面,但针对高性能任务进行了优化。它也非常适合于大型部署和持续管理。
### 为什么选择 Red Hat 企业级 Linux
Red Hat 是一家大型的、非常成功的公司,销售围绕 Linux 的服务。基本上Red Hat 从那些想要避免供应商锁定和其他相关问题的公司赚钱。这些公司认识到聘用开源软件专家和管理他们的服务器和其他计算需求的价值。一家公司只需要购买订阅Red Hat 会在支持方面做其余的工作。
Red Hat 也是一个可靠的社会公民。他们赞助开源项目FoSS 支持网站译注FoSS 是 Free and Open Source Software 的缩写意为自由及开源软件像OpenSource.com 这样的网站,并为 Fedora 项目提供支持。Fedora 不是由 Red Hat 所有,而是它赞助开发的。这使 Fedora 得以发展,同时也使 Red Hat 受益匪浅。Red Hat 可以从 Fedora 项目中获得他们想要的,并将其用于他们的企业级 Linux 产品中。 就目前来看Fedora 充当了红帽企业 Linux 的上游渠道。
### SUSE Linux 企业版本
[SUSE][2] 是一家非常棒的公司,为企业用户提供了可靠的 Linux 选项。SUSE 的产品类似于 Red Hat因为桌面和服务器都是公司关注的。从我自己使用 SUSE 的经验来看,我相信 YaST 已经证明对于希望在工作场所使用 Linux 操作系统的非 Linux 管理员而言它是一笔巨大的资产。YaST 为那些需要一些基本的 Linux 命令行知识的任务提供了一个友好的 GUI。
SUSE 的核心重点如下:
- SUSE Linux 企业级服务器:包括任务特定的解决方案,从云到 SAP 选项,以及任务关键计算和基于软件的数据存储。
- SUSE Linux 企业级桌面:对于那些希望为员工提供可靠的 Linux 工作站的公司来说SUSE Linux 企业级桌面是一个不错的选择。和 Red Hat 一样SUSE 通过订阅模式来提供对其支持产品的访问。你可以选择三个不同级别的支持。
为什么选择 SUSE Linux 企业版?
SUSE 是一家围绕 Linux 销售服务的公司,但他们仍然通过专注于简化操作来实现这一目标。从他们的网站到 SUSE 提供的 Linux 发行版,重点是易用性,而不会牺牲安全性或可靠性。尽管在美国毫无疑问 Red Hat 是服务器的标准,但 SUSE 作为公司和开源社区贡献成员都做得很好。
我还会继续说SUSE 不会把自己太当回事。当你在 IT 领域建立联系的时候,这是一件很棒的事情。从他们关于 Linux 的有趣音乐视频到 SUSE 贸易展位中使用的 Gecko 以获得有趣的照片机会SUSE 将自己描述成简单易懂和平易近人的形象。
### Ubuntu LTS Linux
[Ubuntu Long Term Release][3] (LTS) Linux 是一个简单易用的企业级 Linux 发行版。Ubuntu 看起来比上面提到的其他发行版更新更频繁有时候也更不稳定。不要误解Ubuntu LTS 版本被认为是相当稳定的,不过,我认为一些专家可能不同意你的观点,如果你认为他们是防弹的。(这句不太理解)
Ubuntu 的核心重点如下:
- Ubuntu 桌面版毫无疑问Ubuntu 桌面非常简单可以快速学习和快速运行。它在高级安装选项中可能缺少一些东西但它有一种直截了当的弥补。作为额外的奖励Ubuntu 相比比其他版本有更多的软件包除了它的父亲Debian 发行版)。我认为 Ubuntu 真正的亮点在于,你可以在网上找到许多销售 Ubuntu 的厂商,包括服务器,台式机和笔记本电脑。
- Ubuntu 服务器版这包括服务器云和容器产品。Ubuntu 还提供了 Juju 云“应用商店”这样一个有趣的概念。对于任何熟悉 Ubuntu 或 Debian 的人来说Ubuntu 服务器都很有意义。对于这些人来说,它就像手套一样,为你提供你已经知道并喜爱的命令行工具。
Ubuntu IoT最近Ubuntu 的开发团队已经把目标瞄准了“物联网”IoT的创建解决方案。包括数字标识、机器人技术和物联网网关。我的猜测是我们将在 Ubuntu 中看到物联网增长的大部分来自企业用户,而不是普通家庭用户。
为什么选择 Ubuntu LTS
社区是 Ubuntu 最大的优点。除了在已经拥挤的服务器市场上的巨大增长之外,它还与普通用户在一起。使用 Ubuntu 开发者和社区用户是坚如磐石的。因此,虽然它可能被认为比其他企业版更不稳定,但是我发现将 Ubuntu LTS 安装锁定到到 “security updates only” 模式下提供了非常稳定的体验。
### CentOS 或者 Scientific Linux 怎么样呢?
首先,让我们把 [CentOS][4] 作为一个企业发行版,如果你有自己的内部支持团队来维护它,那么安装 CentOS 是一个很好的选择。毕竟,它与 Red Hat 企业级 Linux 兼容,并提供了与 Red Hat 产品相同级别的稳定性。不幸的是,它不会完全取代 Red Hat 支持订阅。
那么 [Scientific Linux][5] 呢?它的发行版怎么样?好吧,它就像 CentOS它是基于 Red Hat Linux 的。但与 CentOS 不同的是,它与 Red Hat 没有任何关系。 Scientific Linux 从一开始就有一个目标 - 为世界各地的实验室提供一个通用的 Linux 发行版。今天Scientific Linux 基本上是 Red Hat 减去包含的商标资料。
这两种发行版都不能真正地与 Red Hat 互换,因为他们缺少 Red Hat 支持组件。
哪一个是顶级企业发行版?我认为这取决于你需要为自己确定的许多因素:订阅范围,可用性,成本,服务和提供的功能。这些是每个公司必须自己决定的因素。就我个人而言,我认为 Red Hat 在服务器上获胜,而 SUSE 在桌面环境中轻松获胜,但这只是我的意见 - 你不同意?点击下面的评论部分,让我们来谈谈它。
--------------------------------------------------------------------------------
via: https://www.datamation.com/open-source/best-linux-distros-for-the-enterprise.html
作者:[Matt Hartley][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.datamation.com/author/Matt-Hartley-3080.html
[1]:https://www.redhat.com/en
[2]:https://www.suse.com/
[3]:http://releases.ubuntu.com/16.04/
[4]:https://www.centos.org/
[5]:https://www.scientificlinux.org/

View File

@ -1,103 +0,0 @@
可怕的万圣节 Linux 命令
======
万圣节快到了,是时候关注一下 Linux 可怕的一面。什么命令可能会显示鬼、巫婆和僵尸的图像?这可能会鼓励伎俩或治疗的精神?哪个会鼓励“不给糖果就捣蛋”的精神?
### crypt
好吧,我们一直看到 **crypt**。尽管名称不同crypt 不是一个地窖也不是垃圾文件的埋葬坑而是一个加密文件内容的命令。现在“crypt” 通常用一个脚本实现,通过调用一个名为 **mcrypt** 的二进制文件来模拟以前的 crypt 命令来完成它的工作。直接使用 **mycrypt** 命令是更好的选择。
```
$ mcrypt x
Enter the passphrase (maximum of 512 characters)
Please use a combination of upper and lower case letters and numbers.
Enter passphrase:
Enter passphrase:
File x was encrypted.
```
请注意mcrypt 命令会创建一个扩展名为 “.nc” 的第二个文件。它不会覆盖你正在加密的文件。
mcrypt 命令有密钥大小和加密算法的选项。你也可以再选项中指定密钥,但 mcrypt 命令不鼓励这样做。
### kill
还有 kill 命令 - 当然并不是指谋杀而是用来强制和非强制地结束进程这取决于正确终止它们的要求。当然Linux 并不止于此。相反,它有各种 kill 命令来终止进程。我们有 kill、pkill、killall、killpg、rfkill、skill (读作 es-kill)、tgkill、tkill 和 xkill。
```
$ killall runme
[1] Terminated ./runme
[2] Terminated ./runme
[3]- Terminated ./runme
[4]+ Terminated ./runme
```
### shred
Linux 系统也支持一个名为 **shred** 的命令。shred 命令会覆盖文件以隐藏其以前的内容并确保使用硬盘恢复工具无法恢复它们。请记住rm 命令基本上只是删除文件在目录文件中的引用,但不一定会从磁盘上删除内容或覆盖它。**shred** 命令覆盖文件的内容。
```
$ shred dupes.txt
$ more dupes.txt
▒oΛ▒▒9▒lm▒▒▒▒▒o▒1־▒▒f▒f▒▒▒i▒▒h^}&▒▒▒{▒▒
```
### 僵尸
虽然不是命令,但**僵尸**在 Linux 系统上是很顽固的存在。僵尸基本上是没有完全清理掉的死亡进程的遗骸。进程_不应该_这样工作 - 让死亡进程四处游荡,而不是简单地让它们死亡并进入数字天堂,所以僵尸的存在表明让他们遗留的进程有一些缺陷。
一个简单的方法来检查你的系统是否有僵尸进程遗留,看看 top 命令的标题行。
```
$ top
top - 18:50:38 up 6 days, 6:36, 2 users, load average: 0.00, 0.00, 0.00
Tasks: 171 total, 1 running, 167 sleeping, 0 stopped, 3 zombie **< ==**
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 99.9 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 2003388 total, 250840 free, 545832 used, 1206716 buff/cache
KiB Swap: 9765884 total, 9765764 free, 120 used. 1156536 avail Mem
```
可怕!上面显示有三个僵尸进程。
### at midnight
有时会在万圣节这么说死者的灵魂从日落开始游荡直到午夜。Linux 可以通过 “at midnight” 命令跟踪它们的离开。用于安排在下次到达指定时间时运行的作业,**at** 的作用类似于一次性的 cron。
```
$ at midnight
warning: commands will be executed using /bin/sh
at> echo 'the spirits of the dead have left'
at> <EOT>
job 3 at Thu Oct 31 00:00:00 2017
```
### 守护进程
Linux 系统也高度依赖守护进程 - 在后台运行的进程,并提供系统的许多功能。许多守护进程的名称以 “d” 结尾。这个 “d” 代表“守护进程” daemon表明这个进程一直运行并支持一些重要功能。有的会将单词 “daemon” 展开。
```
$ ps -ef | grep sshd
root 1142 1 0 Oct19 ? 00:00:00 /usr/sbin/sshd -D
root 25342 1142 0 18:34 ? 00:00:00 sshd: shs [priv]
$ ps -ef | grep daemon | grep -v grep
message+ 790 1 0 Oct19 ? 00:00:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 836 1 0 Oct19 ? 00:00:02 /usr/lib/accountsservice/accounts-daemon
```
### 万圣节快乐!
在 [Facebook][1] 和 [LinkedIn][2] 上加入 Network World 社区来对主题进行评论。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3235219/linux/scary-linux-commands-for-halloween.html
作者:[Sandra Henry-Stocker][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world

View File

@ -1,26 +1,24 @@
translating---geekpi
How to Create, Revert and Delete KVM Virtual machine snapshot with virsh command
如何使用 virsh 命令创建、还原和删除 KVM 虚拟机快照
======
[![KVM-VirtualMachine-Snapshot][1]![KVM-VirtualMachine-Snapshot][2]][2]
While working on the virtualization platform system administrators usually take the snapshot of virtual machine before doing any major activity like deploying the latest patch and code.
在虚拟化平台上进行系统管理工作时经常在开始主要活动比如部署补丁和代码前先设置一个虚拟机快照。
Virtual machine **snapshot** is a copy of virtual machines disk at the specific point of time. In other words we can say snapshot keeps or preserve the state and data of a virtual machine at given point of time.
虚拟机**快照**是特定时间点的虚拟机磁盘的副本。换句话说,快照保存了给定的时间点虚拟机的状态和数据。
### Where we can use VM snapshots ..?
### 我们可以在哪里使用虚拟机快照?
If you are working on **KVM** based **hypervisors** we can take virtual machines or domain snapshot using the virsh command. Snapshot becomes very helpful in a situation where you have installed or apply the latest patches on the VM but due to some reasons, application hosted in the VMs becomes unstable and application team wants to revert all the changes or patches. If you had taken the snapshot of the VM before applying patches then we can restore or revert the VM to its previous state using snapshot.
如果你在使用基于 **KVM** 的**虚拟机管理程序**,那么可以使用 virsh 命令获取虚拟机或域快照。快照在一种情况下变得非常有用,当你已经在虚拟机上安装或应用了最新的补丁,但是由于某些原因,虚拟机上的程序变得不稳定,程序团队想要还原所有的更改和补丁。如果你在应用补丁之前设置了虚拟机的快照,那么可以使用快照将虚拟机恢复到之前的状态。
**Note:** We can only take the snapshot of the VMs whose disk format is **Qcow2** and raw disk format is not supported by kvm virsh command, Use below command to convert the raw disk format to qcow2
**注意:**我们只能设置磁盘格式为 **Qcow2** 的虚拟机的快照,并且 kvm virsh 命令不支持 raw 磁盘格式,请使用以下命令将原始磁盘格式转换为 qcow2。
```
# qemu-img convert -f raw -O qcow2 image-name.img image-name.qcow2
```
### Create KVM Virtual Machine (domain) Snapshot
### 创建 KVM 虚拟机(域)快照
I am assuming KVM hypervisor is already configured on CentOS 7 / RHEL 7 box and VMs are running on it. We can list the all the VMs on hypervisor using below virsh command,
我假设 KVM 管理程序已经在 CentOS 7 / RHEL 7 机器上配置好了,并且有虚拟机正在运行。我们可以使用下面的 virsh 命令列出虚拟机管理程序中的所有虚拟机,
```
[root@kvm-hypervisor ~]# virsh list --all
 Id    Name                           State
@ -35,9 +33,9 @@ I am assuming KVM hypervisor is already configured on CentOS 7 / RHEL 7 box and
```
Lets suppose we want to create the snapshot of **webserver** VM, run the below command,
假设我们想创建 **webserver** 虚拟机的快照,运行下面的命令,
**Syntax :**
**语法:**
```
# virsh snapshot-create-as domain {vm_name} name {snapshot_name} description “enter description here”
@ -49,7 +47,7 @@ Domain snapshot webserver_snap created
```
Once the snapshot is created then we can list snapshots related to the VM using below command,
创建快照后,我们可以使用下面的命令列出与虚拟机相关的快照,
```
[root@kvm-hypervisor ~]# virsh snapshot-list webserver
 Name                 Creation Time             State
@ -59,7 +57,7 @@ Once the snapshot is created then we can list snapshots related to the VM using
```
To list the detailed info of VMs snapshot, run the beneath virsh command,
要列出虚拟机快照的详细信息,请运行下面的 virsh 命令,
```
[root@kvm-hypervisor ~]# virsh snapshot-info --domain webserver --snapshotname webserver_snap
Name:           webserver_snap
@ -75,7 +73,7 @@ Metadata:       yes
```
We can view the size of snapshot using below qemu-img command,
我们可以使用下面的 qemu-img 命令查看快照的大小,
```
[root@kvm-hypervisor ~]# qemu-img info /var/lib/libvirt/images/snaptestvm.img
@ -83,11 +81,11 @@ We can view the size of snapshot using below qemu-img command,
[![qemu-img-command-output-kvm][1]![qemu-img-command-output-kvm][3]][3]
### Revert / Restore KVM virtual Machine to Snapshot
### 还原 KVM 虚拟机快照
Lets assume we want to revert or restore webserver VM to the snapshot that we have created in above step. Use below virsh command to restore Webserver VM to its snapshot “ **webserver_snap**
假设我们想要将 webserver 虚拟机还原到我们在上述步骤中创建的快照。使用下面的 virsh 命令将 webserver 虚拟机恢复到 “**webserver_snap**” 这个快照上:
**Syntax :**
**语法:**
```
# virsh snapshot-revert {vm_name} {snapshot_name}
@ -98,9 +96,9 @@ Lets assume we want to revert or restore webserver VM to the snapshot that we
```
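按照这个语法,把 webserver 虚拟机还原到上文创建的快照,命令大致如下(快照名沿用前文;该命令成功时通常没有输出):
```
# virsh snapshot-revert webserver webserver_snap
```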
### Delete KVM virtual Machine Snapshots
### 删除 KVM 虚拟机快照
To delete KVM virtual machine snapshots, first get the VMs snapshot details using “ **virsh snapshot-list** ” command and then use “ **virsh snapshot-delete** ” command to delete the snapshot. Example is shown below:
要删除 KVM 虚拟机快照,首先使用 “**virsh snapshot-list**” 命令获取虚拟机的快照详细信息,然后使用 “**virsh snapshot-delete**” 命令删除快照。如下示例所示:
```
[root@kvm-hypervisor ~]# virsh snapshot-list --domain webserver
 Name                 Creation Time             State
@ -114,14 +112,14 @@ Domain snapshot webserver_snap deleted
```
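结合上面的快照列表,删除 webserver_snap 快照的完整命令大致如下(第二行即删除成功的确认信息):
```
# virsh snapshot-delete --domain webserver --snapshotname webserver_snap
Domain snapshot webserver_snap deleted
```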
Thats all from this article, I hope you guys get an idea on how to manage KVM virtual machine snapshots using virsh command. Please do share your feedback and dont hesitate to share it among your technical friends 🙂
这就是本文的全部内容,我希望你们已经了解了如何使用 virsh 命令来管理 KVM 虚拟机快照。请分享你的反馈,也不要吝于把它分享给你的技术朋友 🙂
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/create-revert-delete-kvm-virtual-machine-snapshot-virsh-command/
作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,104 @@
如何像 Linux 专家那样使用 WSL
============================================================
![WSL](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/wsl-pro.png?itok=e65wEEAw "WSL")
在本 WSL 教程中了解如何执行像挂载 USB 驱动器和操作文件等任务。图片提供Microsoft[经许可使用][1]
在[之前的教程][4]中,我们学习了在 Windows 10 上设置 WSL。你可以在 Windows 10 中使用 WSL 执行许多 Linux 命令。许多系统管理任务都是在终端内部完成的,无论是基于 Linux 的系统还是 macOS。然而Windows 10 缺乏这样的功能。你想运行一个 cron 任务么?不行。你想 SSH 进入你的服务器,然后 rsync 文件么?没门。如何用强大的命令行工具管理本地文件,而不是使用缓慢和不可靠的 GUI 工具?
在本教程中,你将看到如何使用 WSL 执行系统管理之外的其他任务,例如挂载 USB 驱动器和操作文件。你需要运行一个完全更新的 Windows 10并选择一个 Linux 发行版。我在[上一篇文章][5]中介绍了这些步骤,如果你还没有完成,可以先从那里开始。让我们开始吧。
### 保持你的 Linux 系统更新
事实上,当你通过 WSL 运行 Ubuntu 或 openSUSE 时,并没有 Linux 内核在运行。然而,你仍然必须保持发行版完全更新,以保护系统免受任何新的已知漏洞的影响。由于 Windows 应用商店中只有两个免费的社区发行版,本教程将只涉及这两个openSUSE 和 Ubuntu。
更新你的 Ubuntu 系统:
```
# sudo apt-get update
# sudo apt-get dist-upgrade
```
运行 openSUSE 的更新:
```
# zypper up
```
你还可以使用 _dup_ 命令将 openSUSE 升级到最新版本,但在运行系统升级之前,请先使用上面的命令更新系统。
```
# zypper dup
```
**注意:** openSUSE 默认为 root 用户。如果你想执行任何非管理任务,请切换到非特权用户。你可以在这篇[文章][6]中了解如何在 openSUSE 上创建用户。
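下面是一个最简单的示意(用户名 swapnil 沿用本文示例,具体步骤请以上面链接的文章为准):
```
# useradd -m swapnil    # 创建用户并生成主目录
# passwd swapnil        # 为新用户设置密码
# su - swapnil          # 切换到这个非特权用户
```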
### 管理本地文件
如果你想使用优秀的 Linux 命令行工具来管理本地文件,使用 WSL 可以轻松做到。不幸的是WSL 还不支持 _lsblk_ 和 _mount_ 这样的命令来挂载本地驱动器。不过,你可以 _cd_ 到 C 盘并管理文件:
/mnt/c/Users/swapnil/Music
我现在在 C 盘的 Music 目录下。
要安装其他驱动器、分区和外部 USB 驱动器,你需要创建一个挂载点,然后挂载该驱动器。
打开文件资源管理器并检查该驱动器的挂载点。假设它在 Windows 中被挂载为 S:\
在 Ubuntu/openSUSE 终端中,为驱动器创建一个挂载点。
```
sudo mkdir /mnt/s
```
现在挂载驱动器:
```
mount -t drvfs S: /mnt/s
```
挂载完毕后,你现在可以从发行版访问该驱动器。请记住,使用 WSL 运行的发行版将会看到 Windows 能看到的内容。因此,你无法挂载在 Windows 上无法原生挂载的 ext4 驱动器。
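用完之后,可以像在普通 Linux 系统上一样卸载它(挂载点沿用上面的 /mnt/s
```
sudo umount /mnt/s
```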
现在你可以在这里使用所有这些神奇的 Linux 命令。想要将文件从一个文件夹复制或移动到另一个文件夹?只需运行 _cp__mv_ 命令。
```
cp /source-folder/source-file.txt /destination-folder/
cp /music/classical/Beethoven/symphony-2.mp3 /plex-media/music/classical/
```
如果你想移动文件夹或大文件,我会推荐 _rsync_ 而不是 _cp_ 命令:
```
rsync -avzP /music/classical/Beethoven/symphonies/ /plex-media/music/classical/
```
耶!
想要在 Windows 驱动器中创建新目录,只需使用 _mkdir_ 命令。
想要在某个时间设置一个 cron 作业来自动执行任务吗?继续使用 _crontab -e_ 创建一个 cron 作业。十分简单。
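举个例子,下面这个 crontab 条目(路径和时间都是假设的)会在每天凌晨 2 点自动把音乐文件夹同步到上面挂载的 S 盘:
```
# 分 时 日 月 周 命令
0 2 * * * rsync -avz /mnt/c/Users/swapnil/Music/ /mnt/s/music-backup/
```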
你还可以在 Linux 中挂载网络/远程文件夹,以便你可以使用更好的工具管理它们。我的所有驱动器都插在树莓派或者服务器上,因此我只需 ssh 进入该机器并管理硬盘。在本地计算机和远程系统之间传输文件可以再次使用 _rsync_ 命令完成。
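比如像下面这样(主机名和路径都是假设的示例):
```
ssh pi@raspberrypi.local    # SSH 进入挂着硬盘的树莓派
rsync -avzP ~/docs/ pi@raspberrypi.local:/mnt/storage/docs/    # 在本地与远程硬盘之间同步文件
```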
WSL 现在已经不再是测试版了,它还将继续获得更多新功能。让我感到兴奋的两个特性是 lsblk 命令和 dd 命令,它们可以让我在 Windows 中原生地管理驱动器,并创建可引导的 Linux 驱动器。如果你是 Linux 命令行的新手,[前一篇教程][7]将帮助你开始使用一些最基本的命令。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/2/how-use-wsl-linux-pro
作者:[SWAPNIL BHARTIYA][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://blogs.msdn.microsoft.com/commandline/learn-about-windows-console-and-windows-subsystem-for-linux-wsl/
[3]:https://www.linux.com/files/images/wsl-propng
[4]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[5]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[6]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[7]:https://www.linux.com/learn/how-use-linux-command-line-basics-cli

View File

@ -1,24 +1,25 @@
9 Useful touch command examples in Linux
======
Touch command is used to create empty files and also changes the timestamps of existing files on Unix & Linux System. Changing timestamps here means updating the access and modification time of files and directories.
在 Linux 下 9 个有用的 touch 命令示例
======
touch 命令用于创建空文件,也可以更改 Unix 和 Linux 系统上现有文件的时间戳。这里所说的更改时间戳,是指更新文件和目录的访问时间与修改时间。
[![touch-command-examples-linux][1]![touch-command-examples-linux][2]][2]
Lets have a look on the syntax and options used in touch command,
让我们来看看 touch 命令的语法和选项:
**Syntax** : # touch {options} {file}
**语法** # touch {选项} {文件}
Options used in touch command,
touch 命令中使用的选项:
![touch-command-options][1]
![touch-command-options][3]
In this article we will walk through 9 useful touch command examples in Linux,
在这篇文章中,我们将介绍 Linux 中 9 个有用的 touch 命令示例。
### Example:1 Create an empty file using touch
### 示例:1 使用 touch 创建一个空文件
To create an empty file using touch command on Linux systems, type touch followed by the file name, example is shown below,
要在 Linux 系统上使用 touch 命令创建空文件,键入 touch然后输入文件名。如下所示
```
[root@linuxtechi ~]# touch devops.txt
[root@linuxtechi ~]# ls -l devops.txt
@ -27,26 +28,26 @@ To create an empty file using touch command on Linux systems, type touch followe
```
### Example:2 Create empty files in bulk using touch
### 示例:2 使用 touch 创建批量空文件
There can be some scenario where we have to create lots of empty files for some testing, this can be easily achieved using touch command,
可能会出现一些情况,我们必须为某些测试创建大量空文件,这可以使用 touch 命令轻松实现:
```
[root@linuxtechi ~]# touch sysadm-{1..20}.txt
```
In the above example we have created 20 empty files with name sysadm-1.txt to sysadm-20.txt, you can change the name and numbers based on your requirements.
在上面的例子中,我们创建了 20 个名为 sysadm-1.txt 到 sysadm-20.txt 的空文件,你可以根据需要更改名称和数字。
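可以顺手数一下,确认这 20 个文件都已经创建出来(仅作演示):
```
[root@linuxtechi ~]# ls sysadm-*.txt | wc -l
20
```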
### Example:3 Change / Update access time of a file and directory
### 示例:3 改变/更新文件和目录的访问时间
Lets assume we want to change access time of a file called “ **devops.txt** “, to do this use **-a** option in touch command followed by file name, example is shown below,
假设我们想要改变名为 **devops.txt** 的文件的访问时间,在 touch 命令中使用 **-a** 选项,然后输入文件名。如下所示:
```
[root@linuxtechi ~]# touch -a devops.txt
[root@linuxtechi ~]#
```
Now verify whether access time of a file has been updated or not using stat command
现在使用 `stat` 命令验证文件的访问时间是否已更新:
```
[root@linuxtechi ~]# stat devops.txt
  File: devops.txt
@ -62,9 +63,9 @@ Change: 2018-03-29 23:03:10.902000000 -0400
```
**Change access time of a directory** ,
**改变目录的访问时间**
Lets assume we have a nfsshare folder under /mnt, Lets change the access time of this folder using the below command,
假设我们在 /mnt 目录下有一个 nfsshare 文件夹,让我们用下面的命令改变这个文件夹的访问时间:
```
[root@linuxtechi ~]# touch -a /mnt/nfsshare/
[root@linuxtechi ~]#
@ -83,9 +84,9 @@ Change: 2018-03-29 23:34:38.095000000 -0400
```
### Example:4 Change Access time without creating new file
### 示例:4 更改访问时间而不用创建新文件
There can be some situations where we want to change access time of a file if it exists and avoid creating the file. Using **-c** option in touch command, we can change access time of a file if it exists and will not a create a file, if it doesnt exist.
在某些情况下,我们希望在文件存在时更改它的访问时间,同时避免创建新文件。在 touch 命令中使用 **-c** 选项即可:如果文件存在,就更改它的访问时间;如果不存在,也不会创建它。
```
[root@linuxtechi ~]# touch -c sysadm-20.txt
[root@linuxtechi ~]# touch -c winadm-20.txt
@ -95,18 +96,18 @@ ls: cannot access winadm-20.txt: No such file or directory
```
### Example:5 Change Modification time of a file and directory
### 示例:5 更改文件和目录的修改时间
Using **-m** option in touch command, we can change the modification time of a file and directory,
在 touch 命令中使用 **-m** 选项,我们可以更改文件和目录的修改时间。
Lets change the modification time of a file called “devops.txt”,
让我们更改名为 “devops.txt” 的文件的修改时间:
```
[root@linuxtechi ~]# touch -m devops.txt
[root@linuxtechi ~]#
```
Now verify whether modification time has been changed or not using stat command,
现在使用 stat 命令来验证修改时间是否改变:
```
[root@linuxtechi ~]# stat devops.txt
  File: devops.txt
@ -122,23 +123,14 @@ Change: 2018-03-29 23:59:49.106000000 -0400
```
Similarly, we can change modification time of a directory,
同样的,我们可以改变一个目录的修改时间:
```
[root@linuxtechi ~]# touch -m /mnt/nfsshare/
[root@linuxtechi ~]#
```
### Example:6 Changing access and modification time in one go
### 示例:6 一次性更改访问和修改时间
Use " **-am** " option in touch command to change the access and modification together or in one go, example is shown below,
在 touch 命令中使用 “**-am**” 选项,可以一次性同时更改访问时间和修改时间,示例如下:
```
[root@linuxtechi ~]# touch -am devops.txt
[root@linuxtechi ~]#
```
Cross verify the access and modification time using stat,
使用 stat 交叉验证访问和修改时间:
```
[root@linuxtechi ~]# stat devops.txt
  File: devops.txt
@ -154,45 +146,43 @@ Change: 2018-03-30 00:06:20.145000000 -0400
```
### Example:7 Set the Access & modification time to a specific date and time
### 示例:7 将访问和修改时间设置为特定的日期和时间
Whenever we do change access and modification time of a file & directory using touch command, then it set the current time as access & modification time of that file or directory,
每当我们使用 touch 命令更改文件和目录的访问和修改时间时,它将当前时间设置为该文件或目录的访问和修改时间。
Lets assume we want to set specific date and time as access & modification time of a file, this is can be achieved using -c & -t option in touch command,
假设我们想要将特定的日期和时间设置为文件的访问和修改时间,这可以使用 touch 命令中的 -c 和 -t 选项来实现。
Date and Time can be specified in the format: {CCYY}MMDDhhmm.ss
日期和时间可以使用以下格式指定:{CCYY}MMDDhhmm.ss
Where:
其中:
* CC First two digits of a year
* YY Second two digits of a year
* MM Month of the Year (01-12)
* DD Day of the Month (01-31)
* hh Hour of the day (00-23)
* mm Minutes of the hour (00-59)
* CC 年份的前两位数字
* YY 年份的后两位数字
* MM 月份 (01-12)
* DD 日期 (01-31)
* hh 小时 (00-23)
* mm 分钟 (00-59)
Lets set the access & modification time of devops.txt file for future date and time( 2025 year, 10th Month, 19th day of month, 18th hours and 20th minute)
让我们将 devops.txt 文件的访问和修改时间设置为未来的一个时间2025 年 10 月 19 日 18 时 20 分)。
```
[root@linuxtechi ~]# touch -c -t 202510191820 devops.txt
```
Use stat command to view the update access & modification time,
使用 stat 命令查看更新后的访问和修改时间:
![stat-command-output-linux][1]
![stat-command-output-linux][4]
Set the Access and Modification time based on date string, Use -d option in touch command and then specify the date string followed by the file name, example is shown below,
要根据日期字符串设置访问和修改时间,可在 touch 命令中使用 -d 选项,然后指定日期字符串,后面跟上文件名。如下所示:
```
[root@linuxtechi ~]# touch -c -d "2010-02-07 20:15:12.000000000 +0530" sysadm-20.txt
[root@linuxtechi ~]#
```
Verify the status using stat command,
使用 stat 命令验证文件的状态:
```
[root@linuxtechi ~]# stat sysadm-20.txt
  File: sysadm-20.txt
@ -208,24 +198,24 @@ Change: 2018-03-30 10:23:31.584000000 +0530
```
**Note:** In above commands, if we dont specify -c then touch command will create a new file in case it doesnt exist on the system and will set the timestamps whatever is mentioned in the command.
**注意:**在上述命令中,如果我们不指定 -c那么当系统中不存在指定的文件时touch 命令将创建一个新文件,并将时间戳设置为命令中给出的值。
### Example:8 Set the timestamps to a file using a reference file (-r)
### 示例:8 使用参考文件设置时间戳(-r
In touch command we can use a reference file for setting the timestamps of file or directory. Lets assume I want to set the same timestamps of file “sysadm-20.txt” on “devops.txt” file. This can be easily achieved using -r option in touch.
在 touch 命令中,我们可以使用参考文件来设置文件或目录的时间戳。假设我想让 “devops.txt” 文件的时间戳与 “sysadm-20.txt” 文件保持一致,使用 touch 命令中的 -r 选项可以轻松实现。
**Syntax:** # touch -r {reference-file} actual-file
**语法:** # touch -r {参考文件} {目标文件}
```
[root@linuxtechi ~]# touch -r sysadm-20.txt devops.txt
[root@linuxtechi ~]#
```
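可以用 stat 的自定义输出格式一次性对比两个文件的时间戳,确认它们已经一致(格式字符串只是一个示例):
```
[root@linuxtechi ~]# stat -c '%x %y %n' sysadm-20.txt devops.txt
```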
### Example:9 Change Access & Modification time on symbolic link file
### 示例:9 在符号链接文件上更改访问和修改时间
By default, whenever we try to change timestamps of a symbolic link file using touch command then it will change the timestamps of original file only, In case you want to change timestamps of a symbolic link file then this can be achieved using -h option in touch command,
默认情况下,每当我们尝试使用 touch 命令更改符号链接文件的时间戳时,它只会更改原始文件的时间戳。如果你想更改符号链接文件的时间戳,则可以使用 touch 命令中的 -h 选项来实现。
**Syntax:** # touch -h {symbolic link file}
**语法:** # touch -h {符号链接文件}
```
[root@linuxtechi opt]# ls -l /root/linuxgeeks.txt
lrwxrwxrwx. 1 root root 15 Mar 30 10:56 /root/linuxgeeks.txt -> linuxadmins.txt
@ -236,14 +226,14 @@ lrwxrwxrwx. 1 root root 15 Oct 19  2030 linuxgeeks.txt -> linuxadmins.txt
```
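结合示例 7 中的 -t 选项,就可以像下面这样把符号链接本身的时间戳改到上面输出中显示的 2030 年(具体的时和分是我假设的):
```
[root@linuxtechi opt]# touch -h -t 203010191830 /root/linuxgeeks.txt
```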
Thats all from this tutorial, I hope these examples help you to understand touch command. Please do share your valuable feedback and comments.
这就是本教程的全部了。我希望这些例子能帮助你理解 touch 命令。请分享你的宝贵意见和评论。
--------------------------------------------------------------------------------
via: https://www.linuxtechi.com/9-useful-touch-command-examples-linux/
作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID)
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出