Merge pull request #8 from LCTT/master

update
This commit is contained in:
MjSeven 2018-04-17 23:47:51 +08:00 committed by GitHub
commit 9ed3fb9f04
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
23 changed files with 3594 additions and 209 deletions

View File

@@ -0,0 +1,106 @@
Scary Linux Commands for Halloween
======
![](https://images.idgesg.net/images/article/2017/10/animal-skeleton-100739983-large.jpg)
Even if it isn't Halloween right now, it's worth taking a look at the scary side of Linux. What commands might conjure up images of ghosts, witches, and zombies? Which ones encourage the spirit of trick-or-treat?
### crypt
Well, there's always `crypt`. Despite its name, crypt is not a burial vault or a pit for entombing junk files, but a command that encrypts file contents. These days, `crypt` is usually implemented as a script that does its work by calling a binary named `mcrypt` to emulate the older `crypt` command. Using the `mcrypt` command directly is an even better choice.
```
$ mcrypt x
Enter the passphrase (maximum of 512 characters)
Please use a combination of upper and lower case letters and numbers.
Enter passphrase:
Enter passphrase:
File x was encrypted.
```
请注意Note that the `mcrypt` command creates a second file with a `.nc` extension. It does not overwrite the file you are encrypting.
The `mcrypt` command has options for key size and encryption algorithm. You can also specify the key as an option, but the `mcrypt` command discourages doing so.
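As a rough sketch (assuming a standard mcrypt installation), you can list the supported algorithms and decrypt the `.nc` file it produced:
```
$ mcrypt --list    # show the supported algorithms and their block sizes
$ mcrypt -d x.nc   # decrypt x.nc back to x, prompting for the passphrase
```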
### kill
Then there's the `kill` command -- not for murder, of course, but for forcing and not forcing processes to end, depending on what is required to terminate them properly. Of course, Linux doesn't stop there; instead, it offers a whole family of kill commands for terminating processes. We have `kill`, `pkill`, `killall`, `killpg`, `rfkill`, `skill` (read as es-kill), `tgkill`, `tkill`, and `xkill`.
```
$ killall runme
[1] Terminated ./runme
[2] Terminated ./runme
[3]- Terminated ./runme
[4]+ Terminated ./runme
```
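A couple of the other variants work roughly like this (the process name `runme` and the PID below are just placeholders):
```
$ pkill runme    # terminate processes whose names match "runme"
$ kill -9 12345  # forcefully end the process with PID 12345 (SIGKILL)
```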
### shred
Linux systems also support a command called `shred`. The `shred` command overwrites files to hide their previous contents and to ensure that they cannot be recovered with hard-disk recovery tools. Keep in mind that the `rm` command basically only removes the file's reference in the directory file; it doesn't necessarily remove the content from the disk or overwrite it. The `shred` command does overwrite the file's contents.
```
$ shred dupes.txt
$ more dupes.txt
▒oΛ▒▒9▒lm▒▒▒▒▒o▒1־▒▒f▒f▒▒▒i▒▒h^}&▒▒▒{▒▒
```
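GNU shred also has options to control the number of passes and to remove the file afterwards -- a minimal sketch (check `man shred` for your version):
```
$ shred -u -n 5 dupes.txt   # overwrite 5 times, then truncate and delete the file
```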
### Zombies
While not a command, zombies are a stubborn presence on Linux systems. Zombies are basically the remains of dead processes that haven't been fully cleaned up. Processes _shouldn't_ work this way -- leaving dead processes loitering about rather than simply letting them die and pass on to digital heaven -- so the presence of zombies points to a flaw in the processes that left them behind.
A simple way to check whether your system has any zombie processes lying around is to look at the header line of the `top` command's output.
```
$ top
top - 18:50:38 up 6 days, 6:36, 2 users, load average: 0.00, 0.00, 0.00
Tasks: 171 total, 1 running, 167 sleeping, 0 stopped, 3 zombie `< ==`
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 99.9 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 2003388 total, 250840 free, 545832 used, 1206716 buff/cache
KiB Swap: 9765884 total, 9765764 free, 120 used. 1156536 avail Mem
```
Scary! The output above shows three zombie processes.
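To see which processes they are, one approach (assuming a procps-style `ps` where the state column is the eighth field of `ps aux`) is:
```
$ ps aux | awk '$8 ~ /^Z/ { print $2, $11 }'   # print the PID and command of each zombie
```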
### at midnight
It's sometimes said around Halloween that the spirits of the dead wander from sunset until midnight. Linux can track their departure with the `at midnight` command. Used to schedule a job that runs the next time the specified time rolls around, `at` works like a one-time cron job.
```
$ at midnight
warning: commands will be executed using /bin/sh
at> echo 'the spirits of the dead have left'
at> <EOT>
job 3 at Thu Oct 31 00:00:00 2017
```
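You can review or cancel pending jobs afterwards (job number 3 is just the one created above):
```
$ atq      # list jobs still waiting to run
$ atrm 3   # remove job number 3 if the spirits change their plans
```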
### Daemons
Linux systems also depend heavily on daemons -- processes that run in the background and provide much of the system's functionality. Many of the daemons' names end in a "d". The "d" stands for daemon, indicating that the process runs continuously and supports some important function. Some spell out the word "daemon" in full.
```
$ ps -ef | grep sshd
root 1142 1 0 Oct19 ? 00:00:00 /usr/sbin/sshd -D
root 25342 1142 0 18:34 ? 00:00:00 sshd: shs [priv]
$ ps -ef | grep daemon | grep -v grep
message+ 790 1 0 Oct19 ? 00:00:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 836 1 0 Oct19 ? 00:00:02 /usr/lib/accountsservice/accounts-daemon
```
### Happy Halloween!
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3235219/linux/scary-linux-commands-for-halloween.html
作者:[Sandra Henry-Stocker][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world

View File

@@ -0,0 +1,37 @@
Easily Run And Integrate AppImage Files With AppImageLauncher
======
Did you ever download an AppImage file and you didn't know how to use it? Or maybe you know how to use it but you have to navigate to the folder where you downloaded the .AppImage file every time you want to run it, or manually create a launcher for it.
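For comparison, the manual routine usually looks something like this (the file name is only an example):
```
chmod +x ~/Downloads/Kdenlive-x86_64.AppImage   # mark the AppImage as executable
~/Downloads/Kdenlive-x86_64.AppImage            # run it from the terminal
```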
With AppImageLauncher, these are problems of the past. The application lets you **easily run AppImage files, without having to make them executable**. But its most interesting feature is easily integrating AppImages with your system: **AppImageLauncher can automatically add an AppImage application shortcut to your desktop environment's application launcher / menu (including the app icon and proper description).**
Without making the downloaded Kdenlive AppImage executable manually, the first time I double-click it (with AppImageLauncher installed), AppImageLauncher presents two options:
`Run once` or `Integrate and run`.
Clicking on Integrate and run, the AppImage is copied to the `~/.bin/` folder (hidden folder in the home directory) and is added to the menu, then the app is launched.
**Removing it is just as simple**, as long as the desktop environment you're using has support for desktop actions. For example, in GNOME Shell, simply **right click the application icon in the Activities Overview and select** `Remove from system`:
The AppImageLauncher GitHub page says that the application only supports Debian-based systems for now (this includes Ubuntu and Linux Mint) because it integrates deeply with the system. The application is currently in heavy development, and there are already issues opened by its developer to build RPM packages, so Fedora / openSUSE support might be added in the not too distant future.
### Download AppImageLauncher
The AppImageLauncher download page provides binaries for Debian, Ubuntu or Linux Mint (64bit), as well as a 64bit AppImage. The source is also available.
[Download AppImageLauncher][1]
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/04/easily-run-and-integrate-appimage-files.html
作者:[Logix][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://github.com/TheAssassin/AppImageLauncher/releases
[2]:https://kdenlive.org/download/

View File

@@ -0,0 +1,71 @@
Management, from coordination to collaboration
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab2.png?itok=uMO9zn5U)
Any organization is fundamentally a pattern of interactions between people. The nature of those interactions—their quality, their frequency, their outcomes—is the most important product an organization can create. Perhaps counterintuitively, recognizing this fact has never been more important than it is today—a time when digital technologies are reshaping not only how we work but also what we do when we come together.
And yet many organizational leaders treat those interactions between people as obstacles or hindrances to avoid or eliminate, rather than as the powerful sources of innovation they really are.
That's why we're observing that some of the most successful organizations today are those capable of shifting the way they think about the value of the interactions in the workplace. And to do that, they've radically altered their approach to management and leadership.
### Moving beyond mechanical management
Simply put, traditionally managed organizations treat unanticipated interactions between stakeholders as potentially destructive forces—and therefore as costs to be mitigated.
This view has a long, storied history in the field of economics. But it's perhaps nowhere more clear than in the early writing of Nobel Prize-winning economist [Ronald Coase][1]. In 1937, Coase published "[The Nature of the Firm][2]," an essay about the reasons people organized into firms to work on large-scale projects—rather than tackle those projects alone. Coase argued that when the cost of coordinating workers together inside a firm is less than that of similar market transactions outside, people will tend to organize so they can reap the benefits of lower operating costs.
But at some point, Coase's theory goes, the work of coordinating interactions between so many people inside the firm actually outweighs the benefits of having an organization in the first place. The complexity of those interactions becomes too difficult to handle. Management, then, should serve the function of decreasing this complexity. Its primary goal is coordination, eliminating the costs associated with messy interpersonal interactions that could slow the firm and reduce its efficiency. As one Fortune 100 CEO recently told me, "Failures happen most often around organizational handoffs."
This makes sense to people practicing what I've called "[mechanical management][3]," where managing people is the act of keeping them focused on specific, repeatable, specialized tasks. Here, management's key function is optimizing coordination costs—ensuring that every specialized component of the finely-tuned organizational machine doesn't impinge on the others and slow them down. Managers work to avoid failures by coordinating different functions across the organization (accounts payable, research and development, engineering, human resources, sales, and so on) to get them to operate toward a common goal. And managers create value by controlling information flows, intervening only when functions become misaligned.
Today, when so many of these traditionally well-defined tasks have become automated, value creation is much more a result of novel innovation and problem solving—not finding new ways to drive efficiency from repeatable processes. But numerous studies demonstrate that innovative, problem-solving activity occurs much more regularly when people work in cross-functional teams—not as isolated individuals or groups constrained by single-functional silos. This kind of activity can lead to what some call "accidental integration": the serendipitous innovation that occurs when old elements combine in new and unforeseen ways.
That's why working collaboratively has now become a necessity that managers need to foster, not eliminate.
### From coordination to collaboration
Reframing the value of the firm—from something that coordinated individual transactions to something that produces novel innovations—means rethinking the value of the relations at the core of our organizations. And that begins with reimagining the task of management, which is no longer concerned primarily with minimizing coordination costs but maximizing cooperation opportunities.
Too few of our tried-and-true management practices have this goal. If they're seeking greater innovation, managers need to encourage more interactions between people in different functional areas, not fewer. A cross-functional team may not be as efficient as one composed of people with the same skill sets. But a cross-functional team is more likely to be the one connecting points between elements in your organization that no one had ever thought to connect (the one more likely, in other words, to achieve accidental integration).
Working collaboratively has now become a necessity that managers need to foster, not eliminate.
I have three suggestions for leaders interested in making this shift:
First, define organizations around processes, not functions. We've seen this strategy work in enterprise IT, for example, in the case of [DevOps][4], where teams emerge around end goals (like a mobile application or a website), not singular functions (like developing, testing, and production). In DevOps environments, the same team that writes the code is responsible for maintaining it once it's in production. (We've found that when the same people who write the code are the ones woken up when it fails at 3 a.m., we get better code.)
Second, define work around the optimal organization rather than the organization around the work. Amazon is a good example of this strategy. Teams usually stick to the "[Two Pizza Rule][5]" when establishing optimal conditions for collaboration. In other words, Amazon leaders have determined that the best-sized team for maximum innovation is about 10 people, or a group they can feed with two pizzas. If the problem gets bigger than that two-pizza team can handle, they split the problem into two simpler problems, dividing the work between multiple teams rather than adding more people to the single team.
And third, to foster creative behavior and really get people cooperating with one another, do whatever you can to cultivate a culture of honest and direct feedback. Be straightforward and, as I wrote in The Open Organization, let the sparks fly; have frank conversations and let the best ideas win.
### Let it go
I realize that asking managers to significantly shift the way they think about their roles can lead to fear and skepticism. Some managers define their performance (and their very identities) by the control they exert over information and people. But the more you dictate the specific ways your organization should do something, the more static and brittle that activity becomes. Agility requires letting go—giving up a certain degree of control.
Front-line managers will see their roles morph from dictating and monitoring to enabling and supporting. Instead of setting individual-oriented goals, they'll need to set group-oriented goals. Instead of developing individual incentives, they'll need to consider group-oriented incentives.
Because ultimately, their goal should be to [create the context in which their teams can do their best work][6].
[Subscribe to our weekly newsletter][7] to learn more about open organizations.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/4/management-coordination-collaboration
作者:[Jim Whitehurst][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/remyd
[1]:https://news.uchicago.edu/article/2013/09/02/ronald-h-coase-founding-scholar-law-and-economics-1910-2013
[2]:http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0335.1937.tb00002.x/full
[3]:https://opensource.com/open-organization/18/2/try-learn-modify
[4]:https://enterprisersproject.com/devops
[5]:https://www.fastcompany.com/3037542/productivity-hack-of-the-week-the-two-pizza-approach-to-productive-teamwork
[6]:https://opensource.com/open-organization/16/3/what-it-means-be-open-source-leader
[7]:https://opensource.com/open-organization/resources/newsletter

View File

@@ -0,0 +1,79 @@
For project safety back up your people, not just your data
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel)
The [FSF][1] was founded in 1985, Perl in 1987 ([happy 30th birthday, Perl][2]!), and Linux in 1991. The [term open source][3] and the [Open Source Initiative][4] both came into being in 1998 (and [turn 20 years old][5] in 2018). Since then, free and open source software has grown to become the default choice for software development, enabling incredible innovation.
We, the greater open source community, have come of age. Millions of open source projects exist today, and each year the [GitHub Octoverse][6] reports millions of new public repositories. We rely on these projects every day, and many of us could not operate our services or our businesses without them.
So what happens when the leaders of these projects move on? How can we help ease those transitions while ensuring that the projects thrive? By teaching and encouraging **succession planning**.
### What is succession planning?
Succession planning is a popular topic among business executives, boards of directors, and human resources professionals, but it doesn't often come up with maintainers of free and open source projects. Because the concept is common in business contexts, that's where you'll find most resources and advice about establishing a succession plan. As you might expect, most of these articles aren't directly applicable to FOSS, but they do form a springboard from which we can launch our own ideas about succession planning.
According to [Wikipedia][7]:
> Succession planning is a process for identifying and developing new leaders who can replace old leaders when they leave, retire, or die.
In my opinion, this definition doesn't apply very well to free and open source software projects. I primarily object to the use of the term leaders. For the collaborative projects of FOSS, everyone can be some form of leader. Roles other than "project founder" or "benevolent dictator for life" are just as important. Any project role that is measured by bus factor is one that can benefit from succession planning.
> A project's bus factor is the number of team members who, if hit by a bus, would endanger the smooth operation of the project. The smallest and worst bus factor is 1: when only a single person's loss would put the project in jeopardy. It's a somewhat grim but still very useful concept.
I propose that instead of viewing succession planning as a leadership pipeline, free and open source projects should view it as a skills pipeline. What sorts of skills does your project need to continue functioning well, and how can you make sure those skills always exist in your community?
### Benefits of succession planning
When I talk to project maintainers about succession planning, they often respond with something like, "We've been pretty successful so far without having to think about this. Why should we start now?"
Aside from the fact that the phrase, "We've always done it this way" is probably one of the most dangerous in the English language, and hearing (or saying) it should send up red flags in any community, succession planning provides plenty of very real benefits:
* **Continuity** : When someone leaves, what happens to the tasks they were performing? Succession planning helps ensure those tasks continue uninterrupted and no one is left hanging.
* **Avoiding a power vacuum** : When a person leaves a role with no replacement, it can lead to confusion, delays, and often most damaging, political woes. After all, it's much easier to fix delays than hurt feelings. A succession plan helps alleviate the insecure and unstable time when someone in a vital role moves on.
* **Increased project/organization longevity** : The thinking required for succession planning is the same sort of thinking that contributes to project longevity. Ensuring continuity in leadership, culture, and productivity also helps ensure the project will continue. It will evolve, but it will survive.
* **Reduced workload/pressure on current leaders** : When a single team member performs a critical role in the project, they often feel pressure to be constantly "on." This can lead to burnout and worse, resignations. A succession plan ensures that all important individuals have a backup or successor. The knowledge that someone can take over is often enough to reduce the pressure, but it also means that key players can take breaks or vacations without worrying that their role will be neglected in their absence.
* **Talent development** : Members of the FOSS community talk a lot about mentoring these days, and that's great. However, most of the conversation is around mentoring people to contribute code to a project. There are many different ways to contribute to free and open source software projects beyond programming. A robust succession plan recognizes these other forms of contribution and provides mentoring to prepare people to step into critical non-programming roles.
* **Inspiration for new members** : It can be very motivational for new or prospective community members to see that a project uses its succession plan. Not only does it show them that the project is well-organized and considers its own health and welfare as well as that of its members, but it also clearly shows new members how they can grow in the community. An obvious path to critical roles and leadership positions inspires new members to stick around to walk that path.
* **Diversity of thoughts/get out of a rut** : Succession plans provide excellent opportunities to bring in new people and ideas to the critical roles of a project. [Studies show][8] that diverse leadership teams are more effective and the projects they lead are more innovative. Using your project's succession plan to mentor people from different backgrounds and with different perspectives will help strengthen and evolve the project in a healthy way.
* **Enabling meritocracy** : Unfortunately, what often passes for meritocracy in many free and open source projects is thinly veiled hostility toward new contributors and diverse opinions—hostility that's delivered from within an echo chamber. Meritocracy without a mentoring program and healthy governance structure is simply an excuse to practice subjective discrimination while hiding behind unexpressed biases. A well-executed succession plan helps teams reach the goal of a true meritocracy. What counts as merit for any given role, and how to reach that level of merit, are openly, honestly, and completely documented. The entire community will be able to see and judge which members are on the path or deserve to take on a particular critical role.
### Why it doesn't happen
Succession planning isn't a panacea, and it won't solve all problems for all projects, but as described above, it offers a lot of worthwhile benefits to your project.
Despite that, very few free and open source projects or organizations put much thought into it. I was curious why that might be, so I asked around. I learned that the reasons for not having a succession plan fall into one of five different buckets:
* **Too busy** : Many people recognize succession planning (or lack thereof) as a problem for their project but just "hadn't ever gotten around to it" because there's "always something more important to work on." I understand and sympathize with this, but I suspect the problem may have more to do with prioritization than with time availability.
* **Don't think of it** : Some people are so busy and preoccupied that they haven't considered, "Hey, what would happen if Jen had to leave the project?" This never occurs to them. After all, Jen's always been there when they need her, right? And that will always be the case, right?
* **Don't want to think of it** : Succession planning shares a trait with estate planning: It's associated with negative feelings like loss and can make people address their own mortality. Some people are uncomfortable with this and would rather not consider it at all than take the time to make the inevitable easier for those they leave behind.
* **Attitude of current leaders** : A few of the people with whom I spoke didn't want to recognize that they're replaceable, or to consider that they may one day give up their power and influence on the project. While this was (thankfully) not a common response, it was alarming enough to deserve its own bucket. Failure of someone in a critical role to recognize or admit that they won't be around forever can set a project up for failure in the long run.
* **Don't know where to start** : Many people I interviewed realize that succession planning is something that their project should be doing. They were even willing to carve out the time to tackle this very large task. What they lacked was any guidance on how to start the process of creating a succession plan.
As you can imagine, something as important and people-focused as a succession plan isn't easy to create, and it doesn't happen overnight. Also, there are many different ways to do it. Each project has its own needs and critical roles. One size does not fit all where succession plans are concerned.
There are, however, some guidelines for how every project could proceed with the succession plan creation process. I'll cover these guidelines in my next article.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/passing-baton-succession-planning-foss-leadership
作者:[VM(Vicky) Brasseur][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/vmbrasseur
[1]:http://www.fsf.org
[2]:https://opensource.com/article/17/10/perl-turns-30
[3]:https://opensource.com/article/18/2/coining-term-open-source-software
[4]:https://opensource.org
[5]:https://opensource.org/node/910
[6]:https://octoverse.github.com
[7]:https://en.wikipedia.org/wiki/Succession_planning
[8]:https://hbr.org/2016/11/why-diverse-teams-are-smarter

View File

@@ -0,0 +1,157 @@
Download YouTube Videos in Linux Command Line
======
**Brief: Easily download YouTube videos in Linux using youtube-dl command line tool. With this tool, you can also choose video format and video quality such as 1080p or 4K.**
![](https://itsfoss.com/wp-content/uploads/2015/10/Download-YouTube-Videos.jpeg)
I know you have already seen [how to download YouTube videos][1]. But those were mostly GUI tools. I am going to show you how to download YouTube videos in the Linux terminal using youtube-dl.
### Install youtube-dl to download YouTube videos in Linux terminal
[youtube-dl][2] is a small Python-based command-line tool that allows downloading videos from [YouTube][3], [Dailymotion][4], Photobucket, Facebook, Yahoo, Metacafe, Depositfiles and a few other similar sites. It is written in Python and requires the Python interpreter to run, so it is not restricted to any one platform. It should run on any Unix, Windows or Mac OS X based system.
The youtube-dl tool supports resuming interrupted downloads. If youtube-dl is killed (for example by Ctrl-C or due to loss of Internet connectivity) in the middle of a download, you can simply re-run it with the same YouTube video URL. It will automatically resume the unfinished download, as long as a partial download is present in the current directory. This means you don't need [download managers in Linux][5] just for resuming downloads.
#### youtube-dl features
This tiny tool has so many features that it won't be an exaggeration to call it the best YouTube downloader for Linux.
* Download videos not only from YouTube but also from other popular video websites like Dailymotion, Facebook, etc.
* Allows downloading videos in several available video formats such as MP4, WebM, etc.
* You can also choose the quality of the video being downloaded. If the video is available in 4K, you can download it in 4K, 1080p, 720p, etc.
* Automatic pause and resume of video downloads.
* Allows bypassing YouTube geo-restrictions
Downloading videos from websites could be against their policies. It's up to you whether you choose to download videos.
#### How to install youtube-dl
youtube-dl is a popular program and is available in the default repositories of most Linux distributions, if not all. You can use the standard way of installing packages in your distribution to install youtube-dl. I'll still show some commands for the sake of it.
If you are running Ubuntu-based Linux distribution, you can install it using this command:
```
sudo apt install youtube-dl
```
For other Linux distributions, you can quickly install youtube-dl on your system through the command line interface with:
```
sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl
```
After fetching the file, you need to set executable permission on the script so it runs properly.
```
sudo chmod a+rx /usr/local/bin/youtube-dl
```
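You can quickly confirm the installation worked; this simply prints the installed version:
```
youtube-dl --version
```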
#### Using YouTube-dl for downloading videos:
To download a video file, simply run the following command, where “VIDEO_URL” is the URL of the video that you want to download.
```
youtube-dl <video_url>
```
#### Download YouTube videos in various formats and quality size
These days, YouTube videos come in different resolutions, so you first need to check the available video formats of a given YouTube video. For that, run youtube-dl with the “-F” option. It will show you a list of available formats.
```
youtube-dl -F <video_url>
```
Its output will look like this:
```
 Setting language
BlXaGWbFVKY: Downloading video webpage
BlXaGWbFVKY: Downloading video info webpage
BlXaGWbFVKY: Extracting video information
Available formats:
37      :       mp4     [1080x1920]
46      :       webm    [1080x1920]
22      :       mp4     [720x1280]
45      :       webm    [720x1280]
35      :       flv     [480x854]
44      :       webm    [480x854]
34      :       flv     [360x640]
18      :       mp4     [360x640]
43      :       webm    [360x640]
5       :       flv     [240x400]
17      :       mp4     [144x176]
```
Now, among the available video formats, choose the one you like. For example, if you want to download it as MP4 at 1080p, you should use:
```
youtube-dl -f 37 <video_url>
```
#### Download subtitles of videos using youtube-dl
First, check if there are subtitles available for the video. To list all subs for a video, use the command below:
```
youtube-dl --list-subs <video_url>
```
To download all subs, but not the video:
```
youtube-dl --all-subs --skip-download <video_url>
```
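If you only want subtitles for a particular language, youtube-dl also supports this (a sketch; `en` is just an example language code):
```
youtube-dl --write-sub --sub-lang en --skip-download <video_url>
```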
#### Download entire YouTube playlist
To download a playlist, simply run the following command, where “playlist_url” is the URL of the playlist that you want to download.
```
youtube-dl -cit <playlist_url>
```
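To grab only part of a playlist, you can combine this with the playlist range options (the numbers below are examples):
```
youtube-dl --playlist-start 5 --playlist-end 10 <playlist_url>
```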
#### Download only audio from YouTube videos
If you just want to download the audio from a YouTube video, you can use the -x option to simply extract the audio file from the video.
```
youtube-dl -x <video_url>
```
The default file format is Ogg, which you may not like. You can specify the file format of the audio file in the following manner:
```
youtube-dl -x --audio-format mp3 <video_url>
```
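You can also control the resulting file name with an output template; for example (the template string is just one common pattern):
```
youtube-dl -x --audio-format mp3 -o '%(title)s.%(ext)s' <video_url>
```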
#### And a lot more can be done with youtube-dl
youtube-dl is a versatile command line tool and provides a number of functionalities. No wonder it is such a popular command line tool.
I have only shown some of the most common usages of this tool. But if you want to explore its capabilities further, please check its [manual][6].
I hope this article helped you to download YouTube videos on Linux. If you have questions or suggestions, please drop a comment below.
Article updated with inputs from Abhishek Prakash.
--------------------------------------------------------------------------------
via: https://itsfoss.com/download-youtube-linux/
作者:[alimiracle][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/ali/
[1]:https://itsfoss.com/download-youtube-videos-ubuntu/
[2]:https://rg3.github.io/youtube-dl/
[3]:https://www.youtube.com/c/itsfoss/
[4]:https://www.dailymotion.com/
[5]:https://itsfoss.com/4-best-download-managers-for-linux/
[6]:https://github.com/rg3/youtube-dl/blob/master/README.md#readme

View File

@@ -0,0 +1,202 @@
How to Install Docker CE on Your Desktop
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/containers-volumes_0.jpg?itok=gv0_MXiZ)
[In the previous article,][1] we learned some of the basic terminologies of the container world. That background information will come in handy when we run commands and use some of those terms in follow-up articles, including this one. This article will cover the installation of Docker on desktop Linux, macOS, and Windows, and it is intended for beginners who want to get started with Docker containers. The only prerequisite is that you are comfortable with command-line interface.
### Why do I need Docker CE on my local machine?
As a new user, you may wonder why you need containers on your local systems. Aren't they meant to run in the cloud and on servers as microservices? While containers have been part of the Linux world for a very long time, it was Docker that made them really consumable with its tools and technologies.
The greatest thing about Docker containers is that you can use your local machine for development and testing. The container images that you create on your local system can then run “anywhere.” There is no conflict between developers and operators about apps running fine on development systems but not in production.
The point is that in order to create containerized applications, you must be able to run and create containers on your local systems.
You can use any of the three platforms -- desktop Linux, Windows, or macOS -- as the development platform for containers. Once Docker is successfully running on these systems, you will be using the same commands across platforms, so it really doesn't matter which OS you are running underneath.
That's the beauty of Docker.
### Lets get started
There are two editions of Docker: Docker Enterprise Edition (EE) and Docker Community Edition (CE). We will be using the Docker Community Edition, which is a free version of Docker intended for developers and enthusiasts who want to get started with Docker.
There are two channels of Docker CE: stable and edge. As the name implies, the stable version gives you well-tested quarterly updates, whereas the edge version offers new updates every month. After further testing, these edge features are added to the stable release. I recommend the stable version for new users.
Docker CE is supported on macOS, Windows 10, Ubuntu 14.04, 16.04, 17.04 and 17.10; Debian 7.7, 8, 9 and 10; Fedora 25, 26, 27; and CentOS. While you can download Docker CE binaries and install them on your desktop Linux system, I recommend adding the repositories so you continue to receive patches and updates.
### Install Docker CE on Desktop Linux
You don't need a full-blown desktop Linux to run Docker; you can install it on a bare minimal Linux server as well, which you can run in a VM. In this tutorial, I am running it on Fedora 27 and Ubuntu 17.04 on my main systems.
### Ubuntu Installation
First things first. Run a system update so your Ubuntu packages are fully updated:
```
$ sudo apt-get update
```
Now run system upgrade:
```
$ sudo apt-get dist-upgrade
```
Then install Docker PGP keys:
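At the time of writing, Docker's install documentation for Ubuntu used roughly the following command to import the key (verify against the current docs before running it):
```
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```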
Next, add the Docker repository:
```
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```
Update the repository info again:
```
$ sudo apt-get update
```
Now install Docker CE:
```
$ sudo apt-get install docker-ce
```
Once it's installed, Docker CE runs automatically on Ubuntu-based systems. Let's check if it's running:
```
$ sudo systemctl status docker
```
You should get the following output:
```
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2017-12-28 15:06:35 EST; 19min ago
Docs: https://docs.docker.com
Main PID: 30539 (dockerd)
```
Since Docker is installed on your system, you can now use the Docker CLI (Command Line Interface) to run Docker commands. Living up to the tradition, let's run the Hello World command:
```
$ sudo docker run hello-world
```
![YMChR_7xglpYBT91rtXnqQc6R1Hx9qMX_iO99vL8][2]
Congrats! You have Docker running on your Ubuntu system.
### Installing Docker CE on Fedora
Things are a bit different on Fedora 27. On Fedora, you first need to install the dnf-plugins-core package, which will allow you to manage your DNF repositories from the CLI.
```
$ sudo dnf -y install dnf-plugins-core
```
Now install the Docker repo on your system:
```
$ sudo dnf config-manager \
--add-repo \
https://download.docker.com/linux/fedora/docker-ce.repo
```

It's time to install Docker CE:

```
$ sudo dnf install docker-ce
```
Unlike Ubuntu, Docker doesn't start automatically on Fedora. So let's start it:
```
$ sudo systemctl start docker
```
You will have to start Docker manually after each reboot, so let's configure it to start automatically after reboots with `sudo systemctl enable docker`. Well, it's time to run the Hello World command:
```
$ sudo docker run hello-world
```
Congrats, Docker is running on your Fedora 27 system.
### Cutting your roots
You may have noticed that you have to use sudo to run Docker commands. That's because the Docker daemon binds to a UNIX socket instead of a TCP port, and that socket is owned by the root user. So, you need sudo privileges to run the docker command. You can add your system user to the docker group so it won't require sudo:
```
$ sudo groupadd docker
```
In most cases, the docker user group is automatically created when you install Docker CE, so all you need to do is add your user to that group:
```
$ sudo usermod -aG docker $USER
```
To test if the group has been added successfully, run the groups command against the name of the user:
```
$ groups swapnil
```
(Here, Swapnil is the user.)
This is the output on my system:
```
$ swapnil : swapnil adm cdrom sudo dip plugdev lpadmin sambashare docker
```
You can see that the user also belongs to the docker group. Log out of your system, so that group changes take effect. Once you log back in, try the Hello World command without sudo:
```
$ docker run hello-world
```
You can check system wide info about the installed version of Docker and more by running this command:
```
$ docker info
```
### Install Docker CE on macOS and Windows
You can easily install Docker CE (and EE) on macOS and Windows. Download the official Docker for Mac and install it the way you install applications on macOS, by simply dragging it into the Applications directory. Once the file is copied, open Docker from Spotlight to start the installation process. Once installed, Docker will start automatically and you can see it in the top bar of macOS.
![IEX23j65zYlF8mZ1c-T_vFw_i1B1T1hibw_AuhEA][3]
macOS is UNIX, so you can simply open the terminal app and start using Docker commands natively. Test the hello world app:
```
$ docker run hello-world
```
Congrats, you have Docker running on your macOS.
### Docker on Windows 10
You need the latest version of Windows 10 Pro or Server in order to run/install Docker on it. If you are not fully updated, Windows won't install Docker. I got an error on my Windows 10 system and had to run system updates. My version was still behind, and I hit [this][4] bug. So, if you fail to install Docker on Windows, just know you are not alone. Keep an eye on that bug to find a solution.
Once you install Docker on Windows, you can either use the bash shell via WSL or use PowerShell to run docker commands. Let's test the “Hello World” command in PowerShell:
```
PS C:\Users\swapnil> docker run hello-world
```
Congrats, you have Docker running on Windows.
In the next article, we will talk about pulling images from DockerHub and running containers on our systems. We will also talk about pushing our own containers to Docker Hub.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop
作者:[SWAPNIL BHARTIYA][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/blog/intro-to-linux/2017/12/container-basics-terms-you-need-know
[2]:https://lh5.googleusercontent.com/YMChR_7xglpYBT91rtXnqQc6R1Hx9qMX_iO99vL8Z8C0-BlynDcL5B5pG-zzH0fKU0Qvnzd89v0KDEbZiO0gTfGNGfDtO-FkTt0bmzIQ-TKbNmv18S9RXdkSeXqgKDFRewnaHPj2
[3]:https://lh3.googleusercontent.com/IEX23j65zYlF8mZ1c-T_vFw_i1B1T1hibw_AuhEAfwv9oFpMfcAqkgEk7K5o58iDAAfGozSpIvY_qEsTOHRlSbesMKwTnG9rRkWba1KPSmnuH1LyoccDGNO3Clbz8du0gSByZxNj
[4]:https://github.com/docker/for-win/issues/1263

View File

@@ -1,106 +0,0 @@
translating---geekpi
How to Use WSL Like a Linux Pro
============================================================
![WSL](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/wsl-pro.png?itok=e65wEEAw "WSL")
Learn how to perform tasks like mounting USB drives and manipulating files in this WSL tutorial. (Image courtesy: Microsoft)[Used with permission][1][Microsoft][2]
In the [previous tutorial][4], we learned about setting up WSL on your Windows 10 system. You can perform a lot of Linux command-line tasks in Windows 10 using WSL. Many sysadmin tasks are done inside a terminal, whether it's a Linux-based system or macOS. Windows 10, however, lacks such capabilities. You want to run a cron job? No. You want to ssh into your server and then rsync files? No way. How about managing your local files with powerful command line utilities instead of using slow and unreliable GUI utilities?
In this tutorial, you'll see how to perform additional tasks beyond managing your servers using WSL - things like mounting USB drives and manipulating files. You need to be running a fully updated Windows 10 and the Linux distro of your choice. I covered these steps in the [previous article][5], so begin there if you need to catch up. Let's get started.
### Keep your Linux system updated
The fact is there is no Linux kernel running under the hood when you run Ubuntu or openSUSE through WSL. Yet, you must keep your distros fully updated to keep your system protected from any new known vulnerabilities. Since only two free community distributions are officially available in the Windows Store, our tutorial will cover only those two: openSUSE and Ubuntu.
Update your Ubuntu system:
```
# sudo apt-get update
# sudo apt-get dist-upgrade
```
To run updates for openSUSE:
```
# zypper up
```
You can also upgrade openSUSE to the latest version with the  _dup_  command. But before running the system upgrade, please run updates using the previous command.
```
# zypper dup
```
**Note:** openSUSE defaults to the root user. If you want to perform any non-administrative tasks, please switch to a non-privileged user. You can learn how to create a user on openSUSE in this [article][6].
### Manage local files
If you want to use great Linux command line utilities to manage your local files, you can easily do that with WSL. Unfortunately, WSL doesn't yet support things like _lsblk_ or _mount_ to mount local drives. You can, however, _cd_ to the C drive and manage files:
/mnt/c/Users/swapnil/Music
I am now in the Music directory of the C drive.
To mount other drives, partitions, and external USB drives, you will need to create a mount point and then mount that drive.
Open File Explorer and check the mount point of that drive. Let's assume it's mounted in Windows as S:\
In the Ubuntu/openSUSE terminal, create a mount point for the drive.
```
sudo mkdir /mnt/s
```
Now mount the drive:
```
mount -t drvfs S: /mnt/s
```
Once mounted, you can now access that drive from your distro. Just bear in mind that the distro running with WSL will see what Windows can see. So you can't mount ext4 drives that can't be mounted on Windows natively.
You can now use all those magical Linux commands here. Want to copy or move files from one folder to another? Just run the  _cp_  or  _mv_ command.
```
cp /source-folder/source-file.txt /destination-folder/
cp /music/classical/Beethoven/symphony-2.mp3 /plex-media/music/classical/
```
If you want to move folders or large files, I would recommend  _rsync_ instead of the  _cp_  command:
```
rsync -avzP /music/classical/Beethoven/symphonies/ /plex-media/music/classical/
```
Yay!
Want to create new directories in Windows drives? Just use the awesome _mkdir_ command.
Want to set up a cron job to automate a task at a certain time? Go ahead and create a cron job with _crontab -e_. Easy peasy.
You can also mount network/remote folders in Linux so you can manage them with better tools. All of my drives are plugged into either a Raspberry Pi powered server or a live server, so I simply ssh into that machine and manage the drive. Transferring files between the local machine and remote system can be done using, once again, the  _rsync_  command.
WSL is now out of beta, and it will continue to get more new features. Two features that I am excited about are the lsblk command and the dd command, which would allow me to natively manage my drives and create bootable Linux drives from within Windows. If you are new to the Linux command line, this [previous tutorial][7] will help you get started with some of the most basic commands.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/2/how-use-wsl-linux-pro
作者:[SWAPNIL BHARTIYA][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://blogs.msdn.microsoft.com/commandline/learn-about-windows-console-and-windows-subsystem-for-linux-wsl/
[3]:https://www.linux.com/files/images/wsl-propng
[4]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[5]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[6]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[7]:https://www.linux.com/learn/how-use-linux-command-line-basics-cli

View File

@@ -1,3 +1,6 @@
Translating by MjSeven
9 Useful touch command examples in Linux
======
The touch command is used to create empty files and also to change the timestamps of existing files on Unix & Linux systems. Changing timestamps here means updating the access and modification time of files and directories.
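A couple of quick illustrations (the file names are placeholders):
```
touch newfile.txt    # create an empty file, or refresh its timestamps if it already exists
touch -m notes.txt   # update only the modification time of an existing file
touch -a notes.txt   # update only the access time
```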

View File

@ -0,0 +1,192 @@
5 Best Feed Reader Apps for Linux
======
**Brief: Extensively use RSS feeds to stay updated with your favorite websites? Take a look at the best feed reader applications for Linux.**
[RSS][1] feeds were once widely used to collect news and articles from different sources in one place. It is often perceived that [RSS usage is in decline][2]. However, there are still people (like me) who believe in opening an application that accumulates all of their favorite websites' articles in one place, which they can read later even when they are not connected to the internet.
Feed readers make this easier by collecting all the published items from a website for anytime access. You don't need to open several browser tabs to go to your favorite websites and bookmark the ones you liked.
In this article, I'll share some of my favorite feed reader applications for the Linux desktop.
### Best Feed Readers for Linux
![Best Feed Readers for Linux][3]
As usual, Linux has multiple choices for feed readers, and in this article, we have compiled 5 good feed reader applications for you. The list is in no particular order.
#### 1\. Akregator Feed Reader
[Akregator][4] is a KDE product which is easy to use and powerful enough to provide latest updates from news sites, blogs and RSS/Atom enabled websites.
It comes with an internal browser for news reading and updates the feeds in real time.
##### Features
* You can add a website's feed using the “Add Feed” option and define an interval to refresh and update subscribed feeds.
* It can store and archive content; the settings can be defined globally or for individual feeds.
* Features an option to import subscribed feeds from another browser or a past backup.
* Notifies you of the unread feeds.
##### How to install Akregator
If you are running KDE desktop, most probably Akregator is already installed on your system. If not, you can use the below command for Debian based systems.
```
sudo apt install akregator
```
Once installed, you can directly add a website by clicking on the Feed menu, then **Add feed**, and giving the website name. This is how the It's FOSS feed looks when added.
![][5]
#### 2\. QuiteRSS
[QuiteRSS][6] is another free and open source RSS/Atom news feed reader with lots of features. There are additional features like proxy integration, adblocker, integrated browser, and system tray integration. It's easier to update feeds by setting up a timer to refresh.
##### Features
* Automatic feed updates either on startup or using a timer option.
* Searching for feed URLs using the website address and categorizing them into new, unread, starred, and deleted sections.
* Embedded browser so that you don't leave the app.
* Hiding images, if you are only interested in text.
* Adblocker and better system tray integration.
* Multiple language support.
##### How to install QuiteRSS
You can install it from the QuiteRSS ppa.
```
sudo add-apt-repository ppa:quiterss/quiterss
sudo apt-get update
sudo apt-get install quiterss
```
![][7]
#### 3\. Liferea
Linux Feed Reader, aka [Liferea][8], is probably the most used feed aggregator on the Linux platform. It is fast and easy to use and supports RSS/Atom feeds. It has support for podcasts, and there is an option for adding custom scripts which can run depending upon your actions.
There's browser integration, while you still have the option to open an item in a separate browser.
##### Features
* Liferea can download and save feeds from your favorite website to read offline.
* It can be synced with other RSS feed readers, making a transition easier.
* Support for Podcasts.
* Support for search folders, which allows users to save searches.
##### How to install Liferea
Liferea is available in the official repository for almost all the distributions. Ubuntu-based users can install it by using below command:
```
sudo apt-get install liferea
```
![][9]
#### 4\. FeedReader
[FeedReader][10] is a simple and elegant RSS desktop client for your web-based RSS accounts. It can work with Feedbin, Feedly, FreshRSS, and Local RSS among others, and has options to send articles over mail, tweet about them, etc.
##### Features
* There are multiple themes for formatting.
* You can customize it according to your preferences.
* Supports notifications and podcasts.
* Fast searches and various filters are present, along with several keyboard shortcuts to make your reading experience better.
##### How to install FeedReader
FeedReader is available as a Flatpak for almost every Linux distribution.
```
flatpak install http://feedreader.xarbit.net/feedreader-repo/feedreader.flatpakref
```
It is also available in Fedora repository:
```
sudo dnf install feedreader
```
And, in Arch User Repository.
```
yaourt -S feedreader
```
![][11]
#### 5\. Newsbeuter: RSS feed in terminal
[Newsbeuter][12] is an open source feed reader for terminal lovers. There is an option to add and delete an RSS feed and to get the content on the terminal itself. Newsbeuter is loved by people who spend more time on the terminal and want their feed to be clutter free from images and ads.
##### How to install Newsbeuter
```
sudo apt-get install newsbeuter
```
Once installation completes, you can launch it by using below command
```
newsbeuter
```
To add a feed in your list, edit the urls file and add the RSS feed.
```
vi ~/.newsbeuter/urls
>> http://feeds.feedburner.com/itsfoss
```
To read the feeds, launch newsbeuter and it will display all the posts.
![][13]
You can get the useful commands at the bottom of the terminal which can help you in using newsbeuter. You can read this [manual page][14] for detailed information.
#### Final Words
To me, feed readers are still relevant, especially when you follow multiple websites and blogs. The offline access to your favorite website and blogs content with options to archive and search is the biggest advantage of using a feed reader.
Do you use a feed reader on your Linux system? If yes, tell us your favorite one in the comments.
--------------------------------------------------------------------------------
via: https://itsfoss.com/feed-reader-apps-linux/
作者:[Ambarish Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/ambarish/
[1]:https://en.wikipedia.org/wiki/RSS
[2]:http://andrewchen.co/the-death-of-rss-in-a-single-graph/
[3]:https://itsfoss.com/wp-content/uploads/2018/04/best-feed-reader-apps-linux.jpg
[4]:https://www.kde.org/applications/internet/akregator/
[5]:https://itsfoss.com/wp-content/uploads/2018/02/Akregator2-800x500.jpg
[6]:https://quiterss.org/
[7]:https://itsfoss.com/wp-content/uploads/2018/02/QuiteRSS2.jpg
[8]:https://itsfoss.com/liferea-rss-client/
[9]:https://itsfoss.com/wp-content/uploads/2018/02/Liferea-800x525.png
[10]:https://jangernert.github.io/FeedReader/
[11]:https://itsfoss.com/wp-content/uploads/2018/02/FeedReader2-800x465.jpg
[12]:https://newsbeuter.org/
[13]:https://itsfoss.com/wp-content/uploads/2018/02/newsbeuter.png
[14]:http://manpages.ubuntu.com/manpages/bionic/man1/newsbeuter.1.html

View File

@ -0,0 +1,171 @@
How To Setup Static File Server Instantly
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/serve-720x340.png)
Ever wanted to share your files or a project over the network, but didn't know how? No worries! Here is a simple utility named **“serve”** to share your files instantly over the network. This simple utility will instantly turn your system into a static file server, allowing you to serve your files over the network. You can access the files from any device regardless of its operating system. All you need is a web browser. This utility can also be used to serve static websites. It was formerly known as “list” and “micro-list”, but the name has been changed to “serve”, which is much more suitable for the purpose of this utility.
### Setup Static File Server Using Serve
To install “serve”, you need to install NodeJS and NPM first. Refer to the following link to install NodeJS and NPM on your Linux box.
Once NodeJS and NPM are installed, run the following command to install “serve”.
```
$ npm install -g serve
```
Done! Now is the time to serve the files or folders.
The typical syntax to use “serve” is:
```
$ serve [options] <path-to-files-or-folders>
```
### Serve Specific files or folders
For example, let us share the contents of the **Documents** directory. To do so, run:
```
$ serve Documents/
```
Sample output would be:
![][2]
As you can see in the above screenshot, the contents of the given directory have been served over network via two URLs.
To access the contents from the local system itself, all you have to do is open your web browser and navigate to **<http://localhost:5000/>** URL.
![][3]
The Serve utility displays the contents of the given directory in a simple layout. You can download (right click on the files and choose “Save link as..”) or just view them in the browser.
If you want to open local address automatically in the browser, use **-o** flag.
```
$ serve -o Documents/
```
Once you run the above command, The Serve utility will open your web browser automatically and display the contents of the shared item.
Similarly, to access the shared directory from a remote system over the network, type **<http://192.168.43.192:5000>** in the browser's address bar. Replace 192.168.43.192 with your system's IP.
**Serve contents via different port**
As you may have noticed, the serve utility uses port **5000** by default. So, make sure port 5000 is allowed in your firewall or router. If it is blocked for some reason, you can serve the contents on a different port using the **-p** flag.
```
$ serve -p 1234 Documents/
```
The above command will serve the contents of Documents directory via port **1234**.
![][4]
To serve a file, instead of a folder, just give its full path like below.
```
$ serve Documents/Papers/notes.txt
```
The contents of the shared directory can be accessed by any user on the network as long as they know the path.
**Serve the entire $HOME directory**
Open your Terminal and type:
```
$ serve
```
This will share the contents of your entire $HOME directory over network.
To stop the sharing, press **CTRL+C**.
**Serve selective files or folders**
You may not want to share all files or directories, but only a few in a directory. You can do this by excluding the files or directories using **-i** flag.
```
$ serve -i Downloads/
```
The above command will serve the current directory, excluding the **Downloads** directory.
**Serve contents only on localhost**
Sometimes, you want to serve the contents only on the local system itself, not on the entire network. To do so, use **-l** flag as shown below:
```
$ serve -l Documents/
```
This command will serve the **Documents** directory only on localhost.
![][5]
This can be useful when you're working on a shared server. All users on the system can access the share, but remote users cannot.
**Serve content using SSL**
Since we serve the contents over the local network, we don't need to use SSL. However, the serve utility has the ability to share contents over SSL using the **--ssl** flag.
```
$ serve --ssl Documents/
```
![][6]
To access the shares via web browser, use <https://localhost:5000> or <https://ip:5000>.
![][7]
**Serve contents with authentication**
In all of the above examples, we served the contents without any authentication, so anyone on the network could access them. You might want some contents to be accessible only with a username and password.
To do so, use:
```
$ SERVE_USER=ostechnix SERVE_PASSWORD=123456 serve --auth
```
Now the users need to enter the username (i.e **ostechnix** in our case) and password (123456) to access the shares.
![][8]
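If you need scripted access to a protected share, command line clients should also be able to authenticate; assuming the Serve utility uses standard HTTP basic authentication, a curl request with the credentials above would look like this:
```
$ curl -u ostechnix:123456 http://localhost:5000/
```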
The Serve utility has some other features as well, such as disabling [**Gzip compression**][9], setting up CORS headers to allow requests from any origin, and preventing the address from being copied to the clipboard automatically. You can read the complete help section by running the following command:
```
$ serve help
```
And, that's all for now. Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-setup-static-file-server-instantly/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-4.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-3.png
[6]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-6.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-5-1.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-7-1.png
[9]:https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/

View File

@ -0,0 +1,147 @@
A Desktop GUI Application For NPM
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/ndm-3-720x340.png)
NPM, short for **N** ode **P** ackage **M** anager, is a command line package manager for installing NodeJS packages, or modules. We have already published a guide that describes how to [**manage NodeJS packages using NPM**][1]. As you may have noticed, managing NodeJS packages or modules with NPM is not a big deal. However, if you're not comfortable with the CLI way, there is a desktop GUI application named **NDM** that can be used for managing NodeJS applications/modules. NDM, which stands for **N** PM **D** esktop **M** anager, is a free, open source graphical front-end for NPM that allows us to install, update, and remove NodeJS packages via a simple graphical window.
In this brief tutorial, we are going to learn about Ndm in Linux.
### Install NDM
NDM is available in AUR, so you can install it using any AUR helpers on Arch Linux and its derivatives like Antergos and Manjaro Linux.
Using [**Pacaur**][2]:
```
$ pacaur -S ndm
```
Using [**Packer**][3]:
```
$ packer -S ndm
```
Using [**Trizen**][4]:
```
$ trizen -S ndm
```
Using [**Yay**][5]:
```
$ yay -S ndm
```
Using [**Yaourt**][6]:
```
$ yaourt -S ndm
```
On RHEL based systems like CentOS, run the following command to install NDM.
```
$ echo -e "[fury]\nname=ndm repository\nbaseurl=https://repo.fury.io/720kb/\nenabled=1\ngpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update && sudo yum install ndm
```
On Debian, Ubuntu, Linux Mint:
```
$ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm
```
NDM can also be installed using **Linuxbrew**. First, install Linuxbrew as described in the following link.
After installing Linuxbrew, you can install NDM using the following commands:
```
$ brew update
$ brew install ndm
```
On other Linux distributions, go to the [**NDM releases page**][7], download the latest version, compile and install it yourself.
### NDM Usage
Launch NDM either from the menu or using the application launcher. This is how NDM's default interface looks.
![][9]
From here, you can install NodeJS packages/modules either locally or globally.
**Install NodeJS packages locally**
To install a package locally, first choose project directory by clicking on the **“Add projects”** button from the Home screen and select the directory where you want to keep your project files. For example, I have chosen a directory named **“demo”** as my project directory.
Click on the project directory (i.e **demo** ) and then, click **Add packages** button.
![][10]
Type the package name you want to install and hit the **Install** button.
![][11]
Once installed, the packages will be listed under the projects directory. Simply click on the directory to view the list of installed packages locally.
![][12]
Similarly, you can create separate project directories and install NodeJS modules in them. To view the list of installed modules for a project, click on the project directory, and you will see the packages on the right side.
**Install NodeJS packages globally**
To install NodeJS packages globally, click on the **Globals** button on the left from the main interface. Then, click “Add packages” button, type the name of the package and hit “Install” button.
**Manage packages**
Click on any installed packages and you will see various options on the top, such as
1. Version (to view the installed version),
2. Latest (to install latest available version),
3. Update (to update the currently selected package),
4. Uninstall (to remove the selected package) etc.
![][13]
NDM has two more options, namely **“Update npm”**, which updates the node package manager to the latest available version, and **Doctor**, which runs a set of checks to ensure that your npm installation has what it needs to manage your packages/modules.
### Conclusion
NDM makes the process of installing, updating, and removing NodeJS packages easier! You don't need to memorize the commands to perform those tasks. NDM lets us do them all with a few mouse clicks via a simple graphical window. For those who would rather not type commands, NDM is the perfect companion for managing NodeJS packages.
Cheers!
**Resource:**
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
[7]:https://github.com/720kb/ndm/releases
[8]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-1.png
[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-5-1.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-6.png
[12]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-7.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-8.png

View File

@ -0,0 +1,80 @@
A new approach to security instrumentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
How many of us have ever uttered the following phrase: “I hope this works!”?
Without a doubt, most of us have, likely more than once. Its not a phrase that inspires confidence, as it reveals doubts about our abilities or the functionality of whatever we are testing. Unfortunately, this very phrase defines our traditional security model all too well. We operate based on the assumption and the hope that the controls we put in place—from vulnerability scanning on web applications to anti-virus on endpoints—prevent malicious actors and software from entering our systems and damaging or stealing our information.
Penetration testing took a step to combat relying on assumptions by actively trying to break into the network, inject malicious code into a web application, or spread “malware” by sending out phishing emails. Composed of finding and poking holes in our different security layers, pen testing fails to account for situations in which holes are actively opened. In security experimentation, we intentionally create chaos in the form of controlled, simulated incident behavior to objectively instrument our ability to detect and deter these types of activities.
> “Security experimentation provides a methodology for the experimentation of the security of distributed systems to build confidence in the ability to withstand malicious conditions.”
When it comes to security and complex distributed systems, a common adage in the chaos engineering community reiterates that “hope is not an effective strategy.” How often do we proactively instrument what we have designed or built to determine if the controls are failing? Most organizations do not discover that their security controls are failing until a security incident results from that failure. We believe that “Security incidents are not detective measures” and “Hope is not an effective strategy” should be the mantras of IT professionals operating effective security practices.
The industry has traditionally emphasized preventative security measures and defense-in-depth, whereas our mission is to drive new knowledge and insights into the security toolchain through detective experimentation. With so much focus on the preventative mechanisms, we rarely attempt beyond one-time or annual pen testing requirements to validate whether or not those controls are performing as designed.
With all of these constantly changing, stateless variables in modern distributed systems, it becomes next to impossible for humans to adequately understand how their systems behave, as this can change from moment to moment. One way to approach this problem is through robust systematic instrumentation and monitoring. For instrumentation in security, you can break down the domain into two primary buckets: **testing** , and what we call **experimentation**. Testing is the validation or assessment of a previously known outcome. In plain terms, we know what we are looking for before we go looking for it. On the other hand, experimentation seeks to derive new insights and information that was previously unknown. While testing is an important practice for mature security teams, the following example should help further illuminate the differences between the two, as well as provide a more tangible depiction of the added value of experimentation.
### Example scenario: Craft beer delivery
Consider a simple web service or web application that takes orders for craft beer deliveries.
This is a critical service for this craft beer delivery company, whose orders come in from its customers' mobile devices, the web, and via its API from restaurants that serve its craft beer. This critical service runs in the company's AWS EC2 environment and is considered by the company to be secure. The company passed its PCI compliance with flying colors last year and annually performs third-party penetration tests, so it assumes that its systems are secure.
This company also prides itself on its DevOps and continuous delivery practices by deploying sometimes twice in the same day.
After learning about chaos engineering and security experimentation, the company's development teams want to determine, on a continuous basis, how resilient and effective its security systems are to real-world events, and furthermore, to ensure that they are not introducing new problems into the system that the security controls are not able to detect.
The team wants to start small by evaluating port security and firewall configurations for their ability to detect, block, and alert on misconfigured changes to the port configurations on their EC2 security groups.
* The team begins by performing a summary of their assumptions about the normal state.
* Develops a hypothesis for port security in their EC2 instances
* Selects and configures the YAML file for the Unauthorized Port Change experiment.
* This configuration would designate the objects to randomly select from for targeting, as well as the port ranges and number of ports that should be changed.
* The team also configures when to run the experiment and shrinks the scope of its blast radius to ensure minimal business impact.
  * For this first test, the team has chosen to run the experiment in their staging environments and perform a single run of the test.
* In true Game Day style, the team has elected a Master of Disaster to run the experiment during a predefined two-hour window. During that window of time, the Master of Disaster will execute the experiment on one of the EC2 Instance Security Groups.
* Once the Game Day has finished, the team begins to conduct a thorough, blameless post-mortem exercise where the focus is on the results of the experiment against the steady state and the original hypothesis. The questions would be something similar to the following:
### Post-mortem questions
* Did the firewall detect the unauthorized port change?
* If the change was detected, was it blocked?
* Did the firewall report log useful information to the log aggregation tool?
* Did the SIEM throw an alert on the unauthorized change?
* If the firewall did not detect the change, did the configuration management tool discover the change?
* Did the configuration management tool report good information to the log aggregation tool?
* Did the SIEM finally correlate an alert?
* If the SIEM threw an alert, did the Security Operations Center get the alert?
* Was the SOC analyst who got the alert able to take action on the alert, or was necessary information missing?
* If the SOC alert determined the alert to be credible, was Security Incident Response able to conduct triage activities easily from the data?
The acknowledgment and anticipation of failure in our systems have already begun unraveling our assumptions about how our systems work. Our mission is to take what we have learned and apply it more broadly to begin to truly address security weaknesses proactively, going beyond the reactive processes that currently dominate traditional security models.
As we continue to explore this new domain, we will be sure to post our findings. For those interested in learning more about the research or getting involved, please feel free to contact [Aaron Rinehart][1] or [Grayson Brewer][2].
Special thanks to Samuel Roden for the insights and thoughts provided in this article.
**[See our related story,[Is the term DevSecOps necessary?][3]]**
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/new-approach-security-instrumentation
作者:[Aaron Rinehart][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/aaronrinehart
[1]:https://twitter.com/aaronrinehart
[2]:https://twitter.com/BrewerSecurity
[3]:https://opensource.com/article/18/4/devsecops

View File

@ -0,0 +1,352 @@
Getting started with Jenkins Pipelines
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe)
Jenkins is a well-known open source continuous integration and continuous development automation tool. It has an excellent supporting community, and hundreds of plugins and developers have been using it for years.
This article will provide a brief guide on how to get started with Pipelines and multibranch pipelines.
### Why pipelines?
* Developers can automate the integration, testing, and deployment of their code, going from source code to product consumers many times using one tool.
  * Pipelines “as code,” known as Jenkinsfiles, can be saved in any source control system. In previous Jenkins versions, jobs could only be configured using the UI. With Jenkinsfiles, pipelines are more maintainable and portable.
* Multi-branch pipelines integrate with Git so that different branches, features, and releases can have independent pipelines enabling each developer to customize their development/deployment process.
* Non-technical members of a team can trigger and customize builds using parameters, analyze test reports, receive email alerts and have a better understanding of the build and deployment process through the pipeline stage view (improved in latest versions with the Blue Ocean UI).
* Jenkins can also be [installed using Docker][1] and pipelines can interact with [Docker agents][2].
### Requirements:
  * [Jenkins 2.89.2][3] (WAR) with Java 8 is the version used in this how-to (a quick way to launch the WAR is shown right after this list)
* Plugins used (To install: `Manage Jenkins → Manage Plugins →Available`):
* Pipeline: [declarative][4]
* [Blue Ocean][5]
* [Cucumber reports][6]
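If you do not have a running Jenkins instance yet, the WAR listed above can be started straight from a shell while you experiment; the port number here is only an example:
```
$ java -jar jenkins.war --httpPort=8080
```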
## Getting started with Jenkins Pipelines
If you have not used Jenkins Pipelines before, I recommend [reading the documentation][7] before getting started here, as it includes a complete description and introduction to the technology as well as the benefits of using it.
This is the Jenkinsfile I used (you can also access this [code][8] on GitHub):
```
pipeline {
    agent any
    stages {
        stage('testing pipeline') {
            steps {
                echo 'test1'
                sh 'mkdir from-jenkins'
                sh 'touch from-jenkins/test.txt'
            }
        }
    }
}
```
1\. Click on **New Item**.
2\. Name the project, select **Pipeline** , and click **OK**.
3\. The configuration page displays once the project is created. In the **Definition** segment, you must decide to either obtain the Jenkinsfile from source control management (SCM) or create the Pipeline script in Jenkins. Hosting the Jenkinsfile in SCM is recommended so that it is portable and maintainable.
The SCM I chose was Git with simple user and pass credentials (SSH can also be used). By default, Jenkins will look for a Jenkinsfile in the root of that repository unless a different location is specified in the **Script Path** field.
4\. Go back to the job page after saving the Jenkinsfile and select **Build Now**. Jenkins will trigger the job. Its first stage is to pull down the Jenkinsfile from SCM. It reports any changes from the previous run and executes it.
Clicking on **Stage View** provides console information:
### Using Blue Ocean
Jenkins' [Blue Ocean][9] provides a better UI for Pipelines. It is accessible from the job's main page (see image above).
This simple Pipeline has one stage, in addition to the default **Checkout SCM** stage that pulls the Jenkinsfile. That stage consists of three steps: the first echoes a message, the second creates a directory named `from-jenkins` in the Jenkins workspace, and the third puts a file called `test.txt` inside that directory. The path for the Jenkins workspace is `$user/.jenkins/workspace`, located on the machine where the job was executed. In this example, the job runs on any available node; if no other node is connected, it runs on the machine where Jenkins is installed. Check Manage Jenkins > Manage Nodes for information about the nodes.
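For example, assuming the job were named test-pipeline and ran on the Jenkins master itself (the job name is only illustrative), you could confirm the result of the last two steps from a shell on that machine:
```
$ ls ~/.jenkins/workspace/test-pipeline/from-jenkins/
test.txt
```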
Another way to create a Pipeline is with Blue Ocean's plugin. (The following screenshots show the same repo.)
1\. Click **Open Blue Ocean**.
2\. Click **New Pipeline**.
3\. Select **SCM** and enter the repository URL; an SSH key will be provided. This must be added to your Git SSH keys (in `Settings →SSH and GPG keys`).
4\. Jenkins will automatically detect the branch and the Jenkinsfile, if present. It will also trigger the job.
### Pipeline development:
The following Jenkinsfile triggers Cucumber tests from a GitHub repository, creates and archives a JAR, sends emails, and exposes different ways the job can execute with variables, parallel stages, etc. The Java project used in this demo was forked from [cucumber/cucumber-jvm][10] to [mluyo3414/cucumber-jvm][11]. You can also access [the Jenkinsfile][12] on GitHub. Since the Jenkinsfile is not in the repository's top directory, the configuration has to be changed to another path:
```
pipeline {
    // 1. runs in any agent, otherwise specify a slave node
    agent any
    parameters {
// 2.variables for the parametrized execution of the test: Text and options
        choice(choices: 'yes\nno', description: 'Are you sure you want to execute this test?', name: 'run_test_only')
        choice(choices: 'yes\nno', description: 'Archived war?', name: 'archive_war')
        string(defaultValue: "your.email@gmail.com", description: 'email for notifications', name: 'notification_email')
    }
//3. Environment variables
environment {
firstEnvVar= 'FIRST_VAR'
secondEnvVar= 'SECOND_VAR'
thirdEnvVar= 'THIRD_VAR'
}
//4. Stages
    stages {
        stage('Test'){
             //conditional for parameter
            when {
                environment name: 'run_test_only', value: 'yes'
            }
            steps{
                sh 'cd examples/java-calculator && mvn clean integration-test'
            }
        }
//5. demo parallel stage with script
        stage ('Run demo parallel stages') {
steps {
        parallel(
        "Parallel stage #1":
                  {
                  //running a script instead of DSL. In this case to run an if/else
                  script{
                    if (env.run_test_only =='yes')
                        {
                        echo env.firstEnvVar
                        }
                    else
                        {
                        echo env.secondEnvVar
                        }
                  }
         },
        "Parallel stage #2":{
                echo "${thirdEnvVar}"
                }
                )
             }
        }
    }
//6. post actions for success or failure of job. Commented out in the following code: Example on how to add a node where a stage is specifically executed. Also, PublishHTML is also a good plugin to expose Cucumber reports but we are using a plugin using Json.
   
post {
        success {
        //node('node1'){
echo "Test succeeded"
            script {
    // configured from using gmail smtp Manage Jenkins-> Configure System -> Email Notification
    // SMTP server: smtp.gmail.com
    // Advanced: Gmail user and pass, use SSL and SMTP Port 465
    // Capitalized variables are Jenkins variables see https://wiki.jenkins.io/display/JENKINS/Building+a+software+project
                mail(bcc: '',
                     body: "Run ${JOB_NAME}-#${BUILD_NUMBER} succeeded. To get more details, visit the build results page: ${BUILD_URL}.",
                     cc: '',
                     from: 'jenkins-admin@gmail.com',
                     replyTo: '',
                     subject: "${JOB_NAME} ${BUILD_NUMBER} succeeded",
                     to: env.notification_email)
                     if (env.archive_war =='yes')
                     {
             // ArchiveArtifact plugin
                        archiveArtifacts '**/java-calculator-*-SNAPSHOT.jar'
                      }
                       // Cucumber report plugin
                      cucumber fileIncludePattern: '**/java-calculator/target/cucumber-report.json', sortingMethod: 'ALPHABETICAL'
            //publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: true, reportDir: '/home/reports', reportFiles: 'reports.html', reportName: 'Performance Test Report', reportTitles: ''])
            }
        //}
        }
        failure {
            echo "Test failed"
            mail(bcc: '',
                body: "Run ${JOB_NAME}-#${BUILD_NUMBER} failed. To get more details, visit the build results page: ${BUILD_URL}.",
                 cc: '',
                 from: 'jenkins-admin@gmail.com',
                 replyTo: '',
                 subject: "${JOB_NAME} ${BUILD_NUMBER} failed",
                 to: env.notification_email)
                 cucumber fileIncludePattern: '**/java-calculator/target/cucumber-report.json', sortingMethod: 'ALPHABETICAL'
//publishHTML([allowMissing: true, alwaysLinkToLastBuild: false, keepAll: true, reportDir: '/home/tester/reports', reportFiles: 'reports.html', reportName: 'Performance Test Report', reportTitles: ''])
        }
    }
}
```
Always check **Pipeline Syntax** to see how to use the different plugins in the Jenkinsfile.
An email notification indicates the build was successful:
Archived JAR from a successful build:
You can access **Cucumber reports** on the same page.
## How to create a multibranch pipeline
If your project already has a Jenkinsfile, follow the [**Multibranch Pipeline** project][13] instructions in Jenkins' docs. It uses Git and assumes credentials are already configured. This is how the configuration looks in the traditional view:
### If this is your first time creating a Pipeline, follow these steps:
1\. Select **Open Blue Ocean**.
2\. Select **New Pipeline**.
3\. Select **Git** and insert the Git repository address. This repository does not currently have a Jenkinsfile. An SSH key will be generated; it will be used in the next step.
4\. Go to GitHub. Click on the profile avatar in the top-right corner and select Settings. Then select **SSH and GPG Keys** from the left-hand menu and insert the SSH key Jenkins provides.
5\. Go back to Jenkins and click **Create Pipeline**. If the project does not contain a Jenkinsfile, Jenkins will prompt you to create a new one.
6\. Once you click **Create Pipeline** , an interactive Pipeline diagram will prompt you to add stages by clicking **+**. You can add parallel or sequential stages and multiple steps to each stage. A list offers different options for the steps.
7\. The following diagram shows three stages (Stage 1, Stage 2a, and Stage 2b) with simple print messages indicating steps. You can also add environment variables and specify in which agent the Jenkinsfile will be executed.
Click **Save** , then commit the new Jenkinsfile by clicking **Save & Run**.
You can also add a new branch.
8\. The job will execute.
If a new branch was added, you can see it in GitHub.
9\. If another branch with a Jenkinsfile is created, you can discover it by clicking **Scan Multibranch Pipeline Now**. In this case, a new branch called `new-feature-2` is created in GitHub from Master (only branches with Jenkinsfiles are displayed in Jenkins).
After scanning, the new branch appears in Jenkins.
This new feature was created using GitHub directly; Jenkins will detect new branches when it performs a scan. If you don't want the newly discovered Pipelines to be executed when discovered, change the settings by clicking **Configure** on the job's Multibranch Pipeline main page and adding the property **Suppress automatic SCM triggering**. This way, Jenkins will discover new Pipelines but they will have to be manually triggered.
This article was originally published on the [ITNext channel][14] on Medium and is reprinted with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/jenkins-pipelines-with-cucumber
作者:[Miguel Suarez][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mluyo3414
[1]:https://jenkins.io/doc/book/installing/#downloading-and-running-jenkins-in-docker
[2]:https://jenkins.io/doc/book/pipeline/docker/
[3]:https://jenkins.io/doc/pipeline/tour/getting-started/
[4]:https://plugins.jenkins.io/pipeline-model-definition
[5]:https://plugins.jenkins.io/blueocean
[6]:https://plugins.jenkins.io/cucumber-reports
[7]:https://jenkins.io/doc/book/pipeline/
[8]:https://github.com/mluyo3414/jenkins-test
[9]:https://jenkins.io/projects/blueocean/
[10]:https://github.com/cucumber/cucumber-jvm
[11]:https://github.com/mluyo3414/cucumber-jvm
[12]:https://github.com/mluyo3414/cucumber-jvm/blob/master/examples/java-calculator/Jenkinsfile
[13]:https://jenkins.io/doc/book/pipeline/multibranch/#creating-a-multibranch-pipeline
[14]:https://itnext.io/jenkins-pipelines-889420409510

View File

@ -0,0 +1,123 @@
translating---geekpi
How To Check User Created Date On Linux
======
Do you know how to check the date on which a user account was created on a Linux system? If yes, what are the ways to do it?
Did you manage to find it? If so, how?
Basically, the Linux operating system doesn't track this information directly, so what are the alternative ways to get it?
You might ask why you would even want to check this.
In some cases you will need this information, and these methods will come in handy then.
It can be verified using the 7 methods below.
* Using /var/log/secure file
* Using aureport utility
* Using .bash_logout file
* Using chage Command
* Using useradd Command
* Using passwd Command
* Using last Command
### Method-1: Using /var/log/secure file
This file stores all security-related messages, including authentication failures and authorization privileges. It also tracks sudo logins, SSH logins and other errors logged by the system security services daemon.
```
# grep prakash /var/log/secure
Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new group: name=prakash, GID=501
Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new user: name=prakash, UID=501, GID=501, home=/home/prakash, shell=/bin/bash
Apr 12 04:07:34 centos.2daygeek.com passwd: pam_unix(passwd:chauthtok): password changed for prakash
Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: Accepted password for prakash from 103.5.134.167 port 60554 ssh2
Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: pam_unix(sshd:session): session opened for user prakash by (uid=0)
```
### Method-2: Using aureport utility
The aureport utility allows you to generate summary and columnar reports on the events recorded in Audit log files. By default, all audit.log files in the /var/log/audit/ directory are queried to create the report.
```
# aureport --auth | grep prakash
46. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 288
47. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 291
```
### Method-3: Using .bash_logout file
The .bash_logout file in your home directory has a special meaning to bash: it provides a way to execute commands when the user logs out of the system.
We can check the Change date of the .bash_logout file in the user's home directory. This file is copied into the home directory when the account is created, so its Change timestamp matches the account creation date (note how it lines up with the useradd entries from Method-1).
```
# stat /home/prakash/.bash_logout
File: `/home/prakash/.bash_logout'
Size: 18 Blocks: 8 IO Block: 4096 regular file
Device: 801h/2049d Inode: 256153 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 501/ prakash) Gid: ( 501/ prakash)
Access: 2017-03-22 20:15:00.000000000 -0400
Modify: 2017-03-22 20:15:00.000000000 -0400
Change: 2018-04-12 04:07:18.283000323 -0400
```
### Method-4: Using chage Command
chage stands for “change age”. This command allows users to manage password expiry information: it changes the number of days between password changes and the date of the last password change.
This information is used by the system to determine when a user must change his/her password. This method works only if the user has not changed the password since the account was created.
```
# chage --list prakash
Last password change : Apr 12, 2018
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```
### Method-5: Using useradd Command
The useradd command is used to create new accounts in Linux. By default, it won't record the user creation date, so we have to add the date ourselves using the “Comment” option.
```
# useradd -m prakash -c `date +%Y/%m/%d`
# grep prakash /etc/passwd
prakash:x:501:501:2018/04/12:/home/prakash:/bin/bash
```
### Method-6: Using passwd Command
The passwd command is used to assign passwords to local accounts or users. If the user has not changed his password since the account's creation date, then you can use the passwd command to find out the date of the last password reset.
```
# passwd -S prakash
prakash PS 2018-04-11 0 99999 7 -1 (Password set, MD5 crypt.)
```
### Method-7: Using last Command
The last command reads the file /var/log/wtmp and displays a list of all users who have logged in (and out) since that file was created.
```
# last | grep "prakash"
prakash pts/2 103.5.134.167 Thu Apr 12 04:08 still logged in
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-check-user-created-date-on-linux/
作者:[Prakash Subramanian][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/prakash/

View File

@ -0,0 +1,137 @@
Finding what youre looking for on Linux
======
![](https://images.idgesg.net/images/article/2018/04/binoculars-100754967-large.jpg)
It isn't hard to find what you're looking for on a Linux system, a file or a command, but there are a _lot_ of ways to go looking.
### 7 commands to find Linux files
#### find
The most obvious is undoubtedly the **find** command, and find has become easier to use than it was years ago. It used to require a starting location for your search, but these days, you can also use find with just a file name or regular expression if youre willing to confine your search to the local directory.
```
$ find e*
empty
examples.desktop
```
In this way, it works much like the **ls** command and isn't doing much of a search.
For more relevant searches, find requires a starting point and some criteria for your search (unless you simply want it to provide a recursive listing of that starting point's directory). The command **find . -type f** will recursively list all regular files starting with the current directory, while **find ~nemo -type f -empty** will find empty files in Nemo's home directory.
```
$ find ~nemo -type f -empty
/home/nemo/empty
```
**Also on Network world:[11 pointless but awesome Linux terminal tricks][1]**
#### locate
The name of the **locate** command suggests that it does basically the same thing as find, but it works entirely differently. Where the **find** command can select files based on a variety of criteria (name, size, owner, permissions, state such as empty, etc.) with a selectable depth for the search, the **locate** command looks through a file called /var/lib/mlocate/mlocate.db to find what you're looking for. That db file is periodically updated, so a locate of a file you just created will probably fail to find it. If that bothers you, you can run the updatedb command and get the update to happen right away.
```
$ sudo updatedb
```
#### mlocate
The **mlocate** command works like the **locate** command and uses the same mlocate.db file as locate.
#### which
The **which** command works very differently than the **find** and **locate** commands. It uses your search path and checks each directory on it for an executable with the file name you're looking for. Once it finds one, it stops searching and displays the full path to that executable.
The primary benefit of the **which** command is that it answers the question, “If I enter this command, what executable file will be run?” It ignores files that aren't executable and doesn't list all executables on the system with that name, just the one that it finds first. If you wanted to find _all_ executables that have some name, you could run a find command like the one below, but it might take considerably longer than the very efficient **which** command.
```
$ find / -name locate -perm -a=x 2>/dev/null
/usr/bin/locate
/etc/alternatives/locate
```
In this find command, we're looking for all executables (files that can be run by anyone) named “locate”. We're also electing not to view all of the “Permission denied” messages that would otherwise clutter our screens.
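For comparison, the plain **which** lookup for the same name stops at the first match on your search path:
```
$ which locate
/usr/bin/locate
```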
#### whereis
The **whereis** command works a lot like the **which** command, but it provides more information. Instead of just looking for executables, it also looks for man pages and source files. Like the **which** command, it uses your search path ($PATH) to drive its search.
```
$ whereis locate
locate: /usr/bin/locate /usr/share/man/man1/locate.1.gz
```
#### whatis
The **whatis** command has its own unique mission. Instead of actually finding files, it looks for information in the man pages for the command you are asking about and provides the brief description of the command from the top of the man page.
```
$ whatis locate
locate (1) - find files by name
```
If you ask about a script that you've just set up, it won't have any idea what you're referring to and will tell you so.
```
$ whatis cleanup
cleanup: nothing appropriate.
```
#### apropos
The **apropos** command is useful when you know what you want to do, but you have no idea what command you should be using to do it. If you were wondering how to locate files, for example, the commands “apropos find” and “apropos locate” would have a lot of suggestions to offer.
```
$ apropos find
File::IconTheme (3pm) - find icon directories
File::MimeInfo::Applications (3pm) - Find programs to open a file by mimetype
File::UserDirs (3pm) - find extra media and documents directories
find (1) - search for files in a directory hierarchy
findfs (8) - find a filesystem by label or UUID
findmnt (8) - find a filesystem
gst-typefind-1.0 (1) - print Media type of file
ippfind (1) - find internet printing protocol printers
locate (1) - find files by name
mlocate (1) - find files by name
pidof (8) - find the process ID of a running program.
sane-find-scanner (1) - find SCSI and USB scanners and their device files
systemd-delta (1) - Find overridden configuration files
xdg-user-dir (1) - Find an XDG user dir
$
$ apropos locate
blkid (8) - locate/print block device attributes
deallocvt (1) - deallocate unused virtual consoles
fallocate (1) - preallocate or deallocate space to a file
IO::Tty (3pm) - Low-level allocate a pseudo-Tty, import constants.
locate (1) - find files by name
mlocate (1) - find files by name
mlocate.db (5) - a mlocate database
mshowfat (1) - shows FAT clusters allocated to file
ntfsfallocate (8) - preallocate space to a file on an NTFS volume
systemd-sysusers (8) - Allocate system users and groups
systemd-sysusers.service (8) - Allocate system users and groups
updatedb (8) - update a database for mlocate
updatedb.mlocate (8) - update a database for mlocate
whereis (1) - locate the binary, source, and manual page files for a...
which (1) - locate a command
```
### Wrap-up
The commands available on Linux for locating and identifying files are quite varied, but they're all very useful.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3268768/linux/finding-what-you-re-looking-for-on-linux.html
作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb

View File

@ -0,0 +1,89 @@
Redcore Linux Makes Gentoo Easy
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/redcore.jpg?itok=SfsuPD0w)
Raise your hand if you've always wanted to try [Gentoo Linux][1] but never did because you didn't have either the time or the skills to invest in such a challenging installation. I'm sure there are plenty of Linux users out there not willing to admit this, but it's okay, really; installing Gentoo is a challenge, and it can be very time consuming. In the end, however, installing Gentoo will result in a very personalized Linux desktop that offers the fulfillment of saying, “I did it!”
So, what's a curious Linux user to do, when they want to experience this elite distribution? One option is to turn to the likes of [Redcore Linux][2]. Redcore does what many have tried (and few have succeeded in doing) in bringing Gentoo to the masses. In fact, [Sabayon][3] Linux is the only other distro I can think of that's truly succeeded in bringing a level of simplicity to Gentoo Linux that many users can enjoy. And while Sabayon is still very much in active development, it's good to know there are others attempting what might have once been deemed impossible:
### Making Gentoo Linux easy
Instead of building your desktop piece by piece, system by system, Redcore (like Sabayon) brings a much more standard installation to the process. Unlike Sabayon (which gives you the option of GNOME, KDE, Xfce, Mate, or Fluxbox editions), Redcore offers a version that ships with two possible desktop options: the [LXQt][4] desktop and [Openbox][5]. LXQt is a lightweight desktop that offers plenty of configuration options and performs quite well on older hardware, whereas Openbox is a very minimalist take on the desktop. In fact, once you log into the Openbox desktop, you'll be left wondering if something has gone wrong (until you right-click on the desktop to see the solitary menu).
If you're looking for a more modern take on the desktop, neither LXQt nor Openbox will be what you're looking for. However, there is no doubt that a rolling-release Gentoo-lite system using the LXQt or Openbox desktops will perform quite well.
The official description of the distribution is:
Redcore Linux is a distribution based on Gentoo Linux (stable + some unstable) and a continuation of, now defunct, Kogaion Linux. Kogaion Linux itself was a distribution based initially on Sabayon Linux, and later on Gentoo Linux and it was developed by RogentOS Development Group since 2011. Ghiunhan Mamut (aka V3n3RiX) himself joined RogentOS Development Group in January 2014.
If you know much about how Gentoo is structured, Redcore Linux is built from Gentoo Linux stage3. Stage3 is a tarball containing a populated directory structure from a basic Gentoo system; it contains no kernel, only binaries and libraries essential for bootstrapping. On top of stage3, the Redcore developers add a kernel, a bootloader and a few other items (such as dbus and Dracut), as well as configure the init system (OpenRC).
With all of that out of the way, lets see what the installation of Redcore is like and how well it can serve as your desktop distribution.
### Installation
As youve probably expected, the installation of Redcore is incredibly simple. Download the live ISO image, burn it to a CD/DVD or USB, insert the installation media, boot the device, log into the desktop (live username/password is redcore/redcore) and click the installer icon on the desktop. The installer used by Redcore is [Calamares][6], which means the installation is incredibly easy and, in an instant, familiar (Figure 1).
Everything with Calamares is automatic. In other words, you wont have to manually partition your drive or select individual packages for installation. You should be able to start and finish a Redcore installation in five or ten minutes. Once the installation completes, reboot and log in with the username/password you created during installation.
### Usage
Upon login, you can select between LXQt and Openbox. I highly recommend against using Openbox. Why? Because nothing will open from the menu. I was actually quite surprised to find the Openbox desktop fairly unusable upon installation. With that in mind, select the LXQt option and be done with it.
Upon logging in, you'll be greeted by a fairly straightforward desktop. Click on the menu button (bottom right of screen) and search through the menu hierarchy to launch an application. The list of installed applications is fairly standard, with the exception of finding [Steam][7] and [Wine][8] pre-installed. You might be surprised, considering Redcore is a rolling distribution, that many of the user-crucial applications are out of date. Take, for instance, LibreOffice. Redcore ships with 5.4.5.1, while the Still release of LibreOffice is currently at 5.4.6. Open the Sisyphus GUI (front end for the Sisyphus package manager) and you'll see that LibreOffice is up to date (according to the package manager) at 5.4.5.1 (Figure 2).
![ Sisyphus][10]
Figure 2: The Sisyphus GUI package manager.
[Used with permission][11]
If you do see packages available for upgrade (which you might), click the upgrade button and allow the upgrade to complete. Considering this is a rolling release, you should be up to date. However, you can search through Sisyphus, locate new packages to install, and install them with ease. Installation with the Sisyphus front end is quite user-friendly.
### That default browser
You won't find a copy of Firefox or Chrome installed on Redcore. Instead, QupZilla serves as the default browser. When you do open the default browser (or if you click on the Ask for help icon on the desktop) you will find the preconfigured home page to be the [redcorelinux freenode.net page][12]. Instead of being greeted by a hand-crafted application, geared toward helping new users, one must choose a nickname and venture into the world of IRC. Although one might be inclined to think that does new users a disservice, one must consider the type of “new” user Redcore will be serving: these aren't going to be new-to-Linux users. Instead, Redcore knows its users and knows many of them are already familiar with IRC. That means users don't have to turn to Google to search for answers. Instead, they can chat with other users and even developers to solve their problems. This, of course, does depend on those users (who might be capable of answering questions) actually being logged into the redcorelinux channel on freenode.
### That default theme
I'm going to make a confession here. I've never understood the whole “dark theme” preference. I do understand that taste is a subjective issue, but my taste tends to lean toward the lighter themes. That's not a problem. To change the theme for the LXQt desktop, open the menu, type desktop in the search field, and then select Customize Look and Feel. In the resulting window (Figure 3), you can select from the short list of theme options.
![desktop][14]
Figure 3: Changing the desktop theme in Redcore.
[Used with permission][11]
### That target audience
So who is Redcore's best target audience? If you're looking to gain the benefits of Gentoo Linux without having to go through the exhausting “get up to speed” and installation process required to compile one of the most challenging operating systems on the planet, Redcore might be what you're looking for. It's a very simplified means of enjoying a Gentoo-less take on Gentoo Linux. Of course, if you're looking to enjoy Gentoo with a more modern desktop, I would highly recommend [Sabayon][3]. However, the LXQt lightweight desktop will certainly give life to old hardware. And Redcore does this with a bit of Gentoo style.
Learn more about Linux through the free ["Introduction to Linux" ][15] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/4/redcore-linux-makes-gentoo-easy
作者:[JACK WALLEN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.gentoo.org/
[2]:https://redcorelinux.org/
[3]:http://www.sabayon.org/
[4]:https://lxqt.org/
[5]:http://openbox.org/wiki/Main_Page
[6]:https://calamares.io/about/
[7]:http://store.steampowered.com/
[8]:https://www.winehq.org/
[10]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/redcore_2.jpg?itok=ubNC-htJ ( Sisyphus)
[11]:https://www.linux.com/licenses/category/used-permission
[12]:http://webchat.freenode.net/?channels=redcorelinux
[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/redcore_3.jpg?itok=FKg67lrS (desktop)
[15]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,192 @@
The df Command Tutorial With Examples For Beginners
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/df-command-1-720x340.png)
In this guide, we are going to learn to use the **df** command. The df command, which stands for **D** isk **F** ree, reports file system disk space usage. It displays the amount of disk space available on the file systems of a Linux system. The df command is not to be confused with the **du** command. Both serve different purposes: the df command reports **how much disk space we have** (i.e. free space), whereas the du command reports **how much disk space is being consumed** by files and folders. Hope I made myself clear. Let us go ahead and see some practical examples of the df command, so you can understand it better.
### The df Command Tutorial With Examples
**1\. View entire file system disk space usage**
Run df command without any arguments to display the entire file system disk space.
```
$ df
```
**Sample output:**
```
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4033216 0 4033216 0% /dev
run 4038880 1120 4037760 1% /run
/dev/sda2 478425016 428790352 25308980 95% /
tmpfs 4038880 34396 4004484 1% /dev/shm
tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs 4038880 11636 4027244 1% /tmp
/dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 95054 55724 32162 64% /boot
tmpfs 807776 28 807748 1% /run/user/1000
```
![][2]
As you can see, the result is divided into six columns. Let us see what each column means.
* **Filesystem** the filesystem on the system.
* **1K-blocks** the size of the filesystem, measured in 1K blocks.
* **Used** the amount of space used in 1K blocks.
* **Available** the amount of available space in 1K blocks.
* **Use%** the percentage that the filesystem is in use.
* **Mounted on** the mount point where the filesystem is mounted.
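If you only care about a few of these columns, recent GNU coreutils versions of df can print just the fields you name with the **--output** option; the field names below are the standard GNU ones:
```
$ df --output=source,size,avail,pcent /
Filesystem     1K-blocks    Avail Use%
/dev/sda2      478425016 25308980  95%
```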
**2\. Display file system disk usage in human readable format**
As you may have noticed in the above example, the usage is shown in 1K blocks. If you want to display it in human readable format, use the **-h** flag.
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
dev 3.9G 0 3.9G 0% /dev
run 3.9G 1.1M 3.9G 1% /run
/dev/sda2 457G 409G 25G 95% /
tmpfs 3.9G 27M 3.9G 1% /dev/shm
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 12M 3.9G 1% /tmp
/dev/loop0 83M 83M 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 93M 55M 32M 64% /boot
tmpfs 789M 28K 789M 1% /run/user/1000
```
Now look at the **Size** and **Avail** columns, the usage is shown in GB and MB.
**3\. Display disk space usage only in MB**
To view file system disk space usage only in Megabytes, use **-m** flag.
```
$ df -m
Filesystem 1M-blocks Used Available Use% Mounted on
dev 3939 0 3939 0% /dev
run 3945 2 3944 1% /run
/dev/sda2 467212 418742 24716 95% /
tmpfs 3945 26 3920 1% /dev/shm
tmpfs 3945 0 3945 0% /sys/fs/cgroup
tmpfs 3945 12 3933 1% /tmp
/dev/loop0 83 83 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 93 55 32 64% /boot
tmpfs 789 1 789 1% /run/user/1000
```
**4\. List inode information instead of block usage**
We can list inode information instead of block usage by using **-i** flag as shown below.
```
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
dev 1008304 439 1007865 1% /dev
run 1009720 649 1009071 1% /run
/dev/sda2 30392320 844035 29548285 3% /
tmpfs 1009720 86 1009634 1% /dev/shm
tmpfs 1009720 18 1009702 1% /sys/fs/cgroup
tmpfs 1009720 3008 1006712 1% /tmp
/dev/loop0 12829 12829 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 25688 390 25298 2% /boot
tmpfs 1009720 29 1009691 1% /run/user/1000
```
**5\. Display the file system type**
To display the file system type, use **-T** flag.
```
$ df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
dev devtmpfs 4033216 0 4033216 0% /dev
run tmpfs 4038880 1120 4037760 1% /run
/dev/sda2 ext4 478425016 428790896 25308436 95% /
tmpfs tmpfs 4038880 31300 4007580 1% /dev/shm
tmpfs tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs tmpfs 4038880 11984 4026896 1% /tmp
/dev/loop0 squashfs 84096 84096 0 100% /var/lib/snapd/snap/core/4327
/dev/sda1 ext4 95054 55724 32162 64% /boot
tmpfs tmpfs 807776 28 807748 1% /run/user/1000
```
As you see, there is an extra column (second from left) that shows the file system type.
**6\. Display only the specific file system type**
We can limit the listing to a certain file system type, for example **ext4**. To do so, we use the **-t** flag.
```
$ df -t ext4
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 478425016 428790896 25308436 95% /
/dev/sda1 95054 55724 32162 64% /boot
```
See? This command shows only the ext4 file system disk space usage.
**7\. Exclude specific file system type**
Sometimes, you may want to exclude a specific file system type from the result. This can be achieved by using the **-x** flag.
```
$ df -x ext4
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4033216 0 4033216 0% /dev
run 4038880 1120 4037760 1% /run
tmpfs 4038880 26116 4012764 1% /dev/shm
tmpfs 4038880 0 4038880 0% /sys/fs/cgroup
tmpfs 4038880 11984 4026896 1% /tmp
/dev/loop0 84096 84096 0 100% /var/lib/snapd/snap/core/4327
tmpfs 807776 28 807748 1% /run/user/1000
```
The above command will display all file systems usage, except **ext4**.
**8\. Display usage for a folder**
To display the disk space available and where it is mounted for a folder, for example **/home/sk/** , use this command:
```
$ df -hT /home/sk/
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda2 ext4 457G 409G 25G 95% /
```
This command shows the file system type, the used and available space in human readable form, and where it is mounted. If you don't want to display the file system type, just omit the **-T** flag.
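Likewise, if you just want to know which file system the current directory lives on, passing “.” as the path works too:
```
$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       457G  409G   25G  95% /
```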
For more details, refer to the man pages.
```
$ man df
```
**Recommended read:**
And, that's all for today! I hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/df-command.png

View File

@ -0,0 +1,126 @@
Useful Resources for Those Who Want to Know More About Linux
======
Linux is one of the most popular and versatile operating systems available. It can be used on a smartphone, computer and even a car. Linux has been around since the 1990s and is still one of the most widespread operating systems.
Linux is actually used to run most of the Internet, as it is considered to be rather stable compared to other operating systems. This is one of the [reasons why people choose Linux over Windows][1]. Besides, Linux provides its users with privacy and doesn't collect their data at all, while Windows 10 and its Cortana voice control system constantly ask you to update your personal information.
Linux has many advantages. However, people do not hear much about it, as it has been squeezed out from the market by Windows and Mac. And many people get confused when they start using Linux, as its a bit different from popular operating systems.
So to help you out, we've collected 5 useful resources for those who want to know more about Linux.
### 1. [Linux for Absolute Beginners][2]
If you want to learn as much about Linux as you can, you should consider taking a full course for beginners, provided by Eduonix. This course will introduce you to all features of Linux and provide you with all necessary materials to help you find out more about the peculiarities of how Linux works.
You should definitely choose this course if:
* you want to learn the details about the Linux operating system;
* you want to find out how to install it;
* you want to understand how Linux cooperates with your hardware;
* you want to learn how to operate Linux command line.
### 2. [PC World: A Linux Beginners Guide][3]
A free resource for those who want to learn everything about Linux in one place. PC World specializes in various aspects of working with computer operating systems, and it provides its subscribers with the most accurate and up-to-date information. Here you can also learn more about the [benefits of Linux][4] and the latest news about this operating system.
This resource provides you with information on:
* how to install Linux;
* how to use command line;
* how to install additional software;
* how to operate Linux desktop environment.
### 3. [Linux Training][5]
A lot of people who work with computers are required to learn how to operate Linux in case the Windows operating system suddenly crashes. And what can be better than using an official resource to start your Linux training?
This resource provides online enrollment on the Linux training, where you can get the most updated information from the authentic source. “A year ago our IT department offered us a Linux training on the official website”, says Martin Gibson, a developer at [Assignmenthelper.com.au][6]. “We took this course because we needed to learn how to back up all our files to another system to provide our customers with maximum security, and this resource really taught us everything.”
So you should definitely use this resource if:
* you want to receive firsthand information about the operating system;
* you want to learn the peculiarities of how to run Linux on your computer;
* you want to connect with other Linux users and share your experience with them.
### 4. [The Linux Foundation: Training Videos][7]
If you easily get bored from reading a lot of resources, this website is definitely for you. The Linux Foundation provides training videos, lectures and webinars, held by IT specialists, software developers and technical consultants.
All the training videos are subdivided into categories for:
* Developers: working with Linux Kernel, handling Linux Device Drivers, Linux virtualization etc.;
* System Administrators: developing virtual hosts on Linux, building a Firewall, analyzing Linux performance etc.;
* Users: getting started using Linux, introduction to embedded Linux and so on.
### 5. [LinuxInsider][8]
Did you know that Microsoft was so amazed by the efficiency of Linux that it [allowed users to run Linux on Microsoft cloud computing device][9]? If you want to learn more about this operating system, Linux Insider provides its subscribers with the latest news on Linux operating systems, gives information about the latest updates and Linux features.
On this resource, you will have the opportunity to:
* participate in Linux community;
* learn about how to run Linux on various devices;
* check out reviews;
* participate in blog discussions and read the tech blog.
### Wrapping up…
Linux offers a lot of benefits, including complete privacy, stable operation and even malware protection. It's definitely worth trying; learning how to use it will help you better understand how your computer works and what it needs to operate smoothly.
### About the Author
_Lucy Benton is a digital marketing specialist and business consultant who helps people turn their dreams into a profitable business. She currently writes for marketing and business resources. Lucy also has her own blog,_ [_Prowritingpartner.com_][10]_, where you can check out her latest publications._
--------------------------------------------------------------------------------
via: https://linuxaria.com/article/useful-resources-for-those-who-want-to-know-more-about-linux
作者:[Lucy Benton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.lifewire.com
[1]:https://www.lifewire.com/windows-vs-linux-mint-2200609
[2]:https://www.eduonix.com/courses/system-programming/linux-for-absolute-beginners
[3]:https://www.pcworld.com/article/2918397/operating-systems/how-to-get-started-with-linux-a-beginners-guide.html
[4]:https://www.popsci.com/switch-to-linux-operating-system#page-4
[5]:https://www.linux.com/learn/training
[6]:https://www.assignmenthelper.com.au/
[7]:https://training.linuxfoundation.org/free-linux-training/linux-training-videos
[8]:https://www.linuxinsider.com/
[9]:https://www.wired.com/2016/08/linux-took-web-now-taking-world/
[10]:https://prowritingpartner.com/
[11]:https://cdn.linuxaria.com/wp-content/plugins/flattr/img/flattr-badge-large.png
[12]:https://linuxaria.com/?flattrss_redirect&id=8570&md5=ee76fa2b44bdf6ef419a7f9906d3a5ad (Flattr)

View File

@ -0,0 +1,91 @@
4 cool new projects to try in COPR for April
======
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
COPR is a [collection][1] of personal repositories for software that isn't carried in Fedora. Some software doesn't conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn't supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
Here's a set of new and interesting projects in COPR.
### Anki
[Anki][2] is a program that helps you learn and remember things using spaced repetition. You can create cards and organize them into decks, or download [existing decks][3]. A card has a question on one side and an answer on the other. It may also include images, video or audio. How well you answer each card determines how often you see that particular card in the future.
While Anki is already in Fedora, this repo provides a newer version.
![][4]
#### Installation instructions
The repo currently provides Anki for Fedora 27, 28, and Rawhide. To install Anki, use these commands:
```
sudo dnf copr enable thomasfedb/anki
sudo dnf install anki
```
### Fd
[Fd][5] is a command-line utility that's a simple and slightly faster alternative to [find][6]. It can execute commands on found items in parallel. Fd also uses colorized terminal output and ignores hidden files and patterns specified in .gitignore by default.
#### Installation instructions
The repo currently provides fd for Fedora 26, 27, 28, and Rawhide. To install fd, use these commands:
```
sudo dnf copr enable keefle/fd
sudo dnf install fd
```
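Once installed, a few illustrative invocations show how it compares to `find`; the patterns and paths here are just placeholders for your own files:
```
# Search recursively below a directory for names matching a pattern
fd passwd /etc

# Find files by extension under the current directory
fd -e md

# Run a command on every match, in parallel (this one assumes ImageMagick's convert)
fd -e jpg -x convert {} {.}.png
```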
### KeePass
[KeePass][7] is a password manager. It holds all passwords in one end-to-end encrypted database locked with a master key or key file. The passwords can be organized into groups and generated by the program's built-in generator. Among its other features is Auto-Type, which can provide a username and password to selected forms.
While KeePass is already in Fedora, this repo provides the newest version.
![][8]
#### Installation instructions
The repo currently provides KeePass for Fedora 26 and 27. To install KeePass, use these commands:
```
sudo dnf copr enable mavit/keepass
sudo dnf install keepass
```
### jo
[Jo][9] is a command-line utility that transforms input to JSON strings or arrays. It features a simple [syntax][10] and recognizes booleans, strings and numbers. In addition, jo supports nesting and can nest its own output as well.
#### Installation instructions
The repo currently provides jo for Fedora 26, 27, and Rawhide, and for EPEL 6 and 7. To install jo, use these commands:
```
sudo dnf copr enable ganto/jo
sudo dnf install jo
```
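After installing, a couple of illustrative runs show the basic idea; `-p` pretty-prints the result and `-a` builds an array from its arguments:
```
# Build a simple JSON object from key=value pairs (true/false become booleans)
jo -p name=jo n=17 parser=false

# Build a JSON array
jo -a 1 2 3

# Nest jo's own output inside another object
jo -p name=jo point=$(jo x=1 y=2)
```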
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/4-try-copr-april-2018/
作者:[Dominik Turecek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:https://copr.fedorainfracloud.org/
[2]:https://apps.ankiweb.net/
[3]:https://ankiweb.net/shared/decks/
[4]:https://fedoramagazine.org/wp-content/uploads/2018/03/anki.png
[5]:https://github.com/sharkdp/fd
[6]:https://www.gnu.org/software/findutils/
[7]:https://keepass.info/
[8]:https://fedoramagazine.org/wp-content/uploads/2018/03/keepass.png
[9]:https://github.com/jpmens/jo
[10]:https://github.com/jpmens/jo/blob/master/jo.md

View File

@ -0,0 +1,181 @@
How To Resize Active/Primary root Partition Using GParted Utility
======
Today we are going to discuss disk partitioning. It's one of the most useful topics in Linux, as it allows users to resize the active root partition.
In this article we will teach you how to resize the active root partition on Linux using the GParted utility.
Just imagine that our system has a 30GB disk and we didn't partition it properly while installing the Ubuntu operating system.
We need to install another OS on it, so we want to create a secondary partition.
It's not advisable to resize an active partition. However, we are going to do it here because there is no other way to free up space on the system.
Make sure you back up your important data before performing this action; if something goes wrong (for example, a power failure or an unexpected reboot), you can still recover your data.
### What Is Gparted
[GParted][1] is a free partition manager that enables you to resize, copy, and move partitions without data loss. The way to use all of the features of the GParted application is to boot from the GParted Live image. GParted Live enables you to use GParted on GNU/Linux as well as other operating systems, such as Windows or Mac OS X.
### 1) Check Disk Space Usage Using df Command
First, let me show you the current partition layout using the df command. The output clearly shows that I have only one partition.
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G 3.4G 26.2G 16% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 487M 4.0K 487M 1% /dev
tmpfs 100M 844K 99M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 152K 497M 1% /run/shm
none 100M 52K 100M 1% /run/user
```
### 2) Check Disk Partition Using fdisk Command
I'm going to verify this using the fdisk command.
```
$ sudo fdisk -l
[sudo] password for daygeek:
Disk /dev/sda: 33.1 GB, 33129218048 bytes
255 heads, 63 sectors/track, 4027 cylinders, total 64705504 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000473a3
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 62609407 31303680 83 Linux
/dev/sda2 62611454 64704511 1046529 5 Extended
/dev/sda5 62611456 64704511 1046528 82 Linux swap / Solaris
```
### 3) Download GParted live ISO Image
Use the command below to download the GParted Live ISO.
```
$ wget https://downloads.sourceforge.net/gparted/gparted-live-0.31.0-1-amd64.iso
```
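If you plan to boot from a USB stick rather than a burned CD/DVD, one option is to write the ISO with `dd`. This is only a sketch: `/dev/sdX` is a placeholder for your USB device, so double-check the device name with `lsblk` first, because `dd` overwrites everything on the target.
```
$ lsblk
$ sudo dd if=gparted-live-0.31.0-1-amd64.iso of=/dev/sdX bs=4M status=progress
$ sync
```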
### 4) Boot Your System With GParted Live Installation Media
Boot your system with the GParted Live installation media (such as a burned CD/DVD, a USB drive, or the ISO image). You will get output similar to the screen below. Here, choose **GParted Live (Default settings)** and hit **Enter**.
![][3]
### 5) Keyboard Selection
By default it chooses the second option, just hit **Enter**.
![][4]
### 6) Language Selection
By default it chooses **33** for US English, just hit **Enter**.
![][5]
### 7) Mode Selection (GUI or Command-Line)
By default it chooses **0** for GUI mode, just hit **Enter**.
![][6]
### 8) Loaded GParted Live Screen
Now the GParted Live screen is loaded, showing the list of partitions that I created earlier.
![][7]
### 9) How To Resize The root Partition
Choose the root partition you want to resize. Only one partition is available here, so I'm going to edit it to make room for another OS.
![][8]
To do so, press the **Resize/Move** button to resize the partition.
![][9]
Here, enter the amount of space you want to take out of this partition in the first box. I'm going to claim **10GB**, so I entered **10240MB**, left the rest of the boxes at their defaults, and then hit the **Resize/Move** button.
![][10]
It will ask you once again to confirm the resize because you are editing a live system partition; then hit **Ok**.
![][11]
The partition has been successfully shrunk from 30GB to 20GB, and 10GB of unallocated disk space is now shown.
![][12]
Finally, click the `Apply` button to perform the remaining operations listed below.
![][13]
* **`e2fsck`** e2fsck is a file system check utility that automatically repairs the file system, checking for bad sectors and I/O errors related to the HDD.
* **`resize2fs`** The resize2fs program resizes ext2, ext3, or ext4 file systems. It can be used to enlarge or shrink an unmounted file system located on a device.
* **`e2image`** The e2image program saves critical ext2, ext3, or ext4 file system metadata located on a device to a file specified by image-file.
**`e2fsck`**
![][14]
**`resize2fs`**
![][15]
**`e2image`**
![][16]
Once all the operations have completed, close the dialog box.
![][17]
Now I can see a **10GB** unallocated disk partition.
![][18]
Reboot the system to check this.
![][19]
### 10) Check Free Space
Log back in to the system and use the parted command to see the available space on the disk. Yes, I can see **10GB** of unallocated disk space on this disk.
```
$ sudo parted /dev/sda print free
[sudo] password for daygeek:
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 32.2GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 10.7GB 10.7GB Free Space
1 10.7GB 32.2GB 21.5GB primary ext4 boot
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility/
作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[1]:https://gparted.org/
[3]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-1.png
[4]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-2.png
[5]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-3.png
[6]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-4.png
[7]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-5.png
[8]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-6.png
[9]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-7.png
[10]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-8.png
[11]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-9.png
[12]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-10.png
[13]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-11.png
[14]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-12.png
[15]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-13.png
[16]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-14.png
[17]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-15.png
[18]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-16.png
[19]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-17.png

View File

@ -0,0 +1,954 @@
Running Jenkins builds in containers
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf)
Running applications in containers has become a well-accepted practice in the enterprise sector, as [Docker][1] with [Kubernetes][2] (K8s) now provides a scalable, manageable application platform. The container-based approach also suits the [microservices architecture][3] that's gained significant momentum in the past few years.
One of the most important advantages of a container application platform is the ability to dynamically bring up isolated containers with resource limits. Let's check out how this can change the way we run our continuous integration/continuous development (CI/CD) tasks.
Building and packaging an application requires an environment that can download the source code, access dependencies, and have the build tools installed. Running unit and component tests as part of the build may use local ports or require third-party applications (e.g., databases, message brokers, etc.) to be running. In the end, we usually have multiple, pre-configured build servers with each running a certain type of job. For tests, we maintain dedicated instances of third-party apps (or try to run them embedded) and avoid running jobs in parallel that could mess up each other's outcome. The pre-configuration for such a CI/CD environment can be a hassle, and the required number of servers for different jobs can significantly change over time as teams shift between versions and development platforms.
Once we have access to a container platform (onsite or in the cloud), it makes sense to move the resource-intensive CI/CD task executions into dynamically created containers. In this scenario, build environments can be independently started and configured for each job execution. Tests during the build have free reign to use available resources in this isolated box, while we can also bring up a third-party application in a side container that exists only for this job's lifecycle.
It sounds nice… Let's see how it works in real life.
Note: This article is based on a real-world solution for a project running on a [Red Hat OpenShift][4] v3.7 cluster. OpenShift is the enterprise-ready version of Kubernetes, so these practices work on a K8s cluster as well. To try, download the [Red Hat CDK][5] and run the `jenkins-ephemeral` or `jenkins-persistent` [templates][6] that create preconfigured Jenkins masters on OpenShift.
### Solution overview
The solution to executing CI/CD tasks (builds, tests, etc.) in containers on OpenShift is based on [Jenkins distributed builds][7], which means:
* We need a Jenkins master; it may run inside the cluster but also works with an external master
* Jenkins features/plugins are available as usual, so existing projects can be used
* The Jenkins GUI is available to configure, run, and browse job output
* If you prefer code, [Jenkins Pipeline][8] is also available
From a technical point of view, the dynamic containers to run jobs are Jenkins agent nodes. When a build kicks off, first a new node starts and "reports for duty" to the Jenkins master via JNLP (port 5000). The build is queued until the agent node comes up and picks up the build. The build output is sent back to the master—just like with regular Jenkins agent servers—but the agent container is shut down once the build is done.
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/1_running_jenkinsincontainers.png?itok=fR4ntnn8)
Different kinds of builds (e.g., Java, NodeJS, Python, etc.) need different agent nodes. This is nothing new—labels could previously be used to restrict which agent nodes should run a build. To define the config for these Jenkins agent containers started for each job, we will need to set the following:
* The Docker image to boot up
* Resource limits
* Environment variables
* Volumes mounted
The core component here is the [Jenkins Kubernetes plugin][9]. This plugin interacts with the K8s cluster (by using a ServiceAccount) and starts/stops the agent nodes. Multiple agent types can be defined as Kubernetes pod templates under the plugin's configuration (refer to them by label in projects).
These [agent images][10] are provided out of the box (also on [CentOS7][11]):
* [jenkins-slave-base-rhel7][12]: Base image starting the agent that connects to Jenkins master; the Java heap is set according to container memory
* [jenkins-slave-maven-rhel7][13]: Image for Maven and Gradle builds (extends base)
* [jenkins-slave-nodejs-rhel7][14]: Image with NodeJS4 tools (extends base)
Note: This solution is not related to OpenShift's [Source-to-Image (S2I)][15] build, which can also be used for certain CI/CD tasks.
### Background learning material
There are several good blogs and documentation about Jenkins builds on OpenShift. The following are good to start with:
Take a look at them to understand the overall solution. In this article, we'll look at the different issues that come up while applying those practices.
### Build my application
For our [example][16], let's assume a Java project with the following build steps:
* **Source:** Pull project source from a Git repository
* **Build with Maven:** Dependencies come from an internal repository (let's use Apache Nexus) mirroring external Maven repos
* **Deploy artifact:** The built JAR is uploaded to the repository
During the CI/CD process, we need to interact with Git and Nexus, so the Jenkins jobs have to be able to access those systems. This requires configuration and stored credentials that can be managed at different places:
* **In Jenkins:** We can add credentials to Jenkins that the Git plugin can use and add files to the project (using containers doesn't change anything).
* **In OpenShift:** Use ConfigMap and secret objects that are added to the Jenkins agent containers as files or environment variables.
* **In a fully customized Docker image:** These are pre-configured with everything to run a type of job; just extend one of the agent images.
Which approach you use is a question of taste, and your final solution may be a mix. Below we'll look at the second option, where the configuration is managed primarily in OpenShift. Customize the Maven agent container via the Kubernetes plugin configuration by setting environment variables and mounting files.
Note: Adding environment variables through the UI doesn't work with Kubernetes plugin v1.0 due to a [bug][17]. Either update the plugin or (as a workaround) edit `config.xml` directly and restart Jenkins.
### Pull source from Git
Pulling a public Git is trivial. For a private Git repo, authentication is required and the client also needs to trust the server for a secure connection. A Git pull can typically be done via two protocols:
* HTTPS: Authentication is with username/password. The server's SSL certificate must be trusted by the job, which is only tricky if it's signed by a custom CA.
```
git clone https://git.mycompany.com:443/myapplication.git
```
* SSH: Authentication is with a private key. The server is trusted when its public key's fingerprint is found in the `known_hosts` file.
```
git clone ssh://git@git.mycompany.com:22/myapplication.git
```
Downloading the source through HTTP with username/password is OK when it's done manually; for automated builds, SSH is better.
#### Git with SSH
For a SSH download, we need to ensure that the SSH connection works between the agent container and the Git's SSH port. First, we need a private-public key pair. To generate one, run:
```
ssh-keygen -t rsa -b 2048 -f my-git-ssh -N ''
```
It generates a private key in `my-git-ssh` (empty passphrase) and the matching public key in `my-git-ssh.pub`. Add the public key to the user on the Git server (preferably a ServiceAccount); web UIs usually support upload. To make the SSH connection work, we need two files on the agent container:
* The private key at `~/.ssh/id_rsa`
* The server's public key in `~/.ssh/known_hosts`. To get this, try `ssh git.mycompany.com` and accept the fingerprint; this will create a new line in the `known_hosts` file. Use that (a non-interactive way to collect it is sketched right after this list).
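If you prefer not to accept the fingerprint interactively, `ssh-keyscan` can collect the server's host key for you. A minimal sketch (the host name is the placeholder used in this article; verify the printed fingerprint against a trusted source before using the file):
```
# Collect the host key(s) offered by the Git server
ssh-keyscan git.mycompany.com > known_hosts

# Print the fingerprints so they can be checked out of band
ssh-keygen -lf known_hosts
```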
Store the private key as `id_rsa` and server's public key as `known_hosts` in an OpenShift secret (or config map).
```
apiVersion: v1
kind: Secret
metadata:
  name: mygit-ssh
stringData:
  id_rsa: |-
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  known_hosts: |-
    git.mycompany.com ecdsa-sha2-nistp256 AAA...
```
Then configure this as a volume in the Kubernetes plugin for the Maven pod at mount point `/home/jenkins/.ssh/`. Each item in the secret will be a file matching the key name under the mount directory. We can use the UI (`Manage Jenkins / Configure / Cloud / Kubernetes`), or edit Jenkins config `/var/lib/jenkins/config.xml`:
```
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
<name>maven</name>
...
  <volumes>
    <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
      <mountPath>/home/jenkins/.ssh</mountPath>
      <secretName>mygit-ssh</secretName>
    </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
  </volumes>
```
Pulling a Git source through SSH should work in the jobs running on this agent now.
Note: It's also possible to customize the SSH connection in `~/.ssh/config`, for example, if we don't want to bother with `known_hosts` or the private key is mounted to a different location:
```
Host git.mycompany.com
   StrictHostKeyChecking no
   IdentityFile /home/jenkins/.config/git-secret/ssh-privatekey
```
#### Git with HTTP
If you prefer an HTTP download, add the username/password to a [Git-credential-store][18] file somewhere:
* E.g. `/home/jenkins/.config/git-secret/credentials` from an OpenShift secret, one site per line:
```
https://username:password@git.mycompany.com
https://user:pass@github.com
```
* Enable it in [git-config][19] expected at `/home/jenkins/.config/git/config`:
```
[credential]
  helper = store --file=/home/jenkins/.config/git-secret/credentials
```
If the Git service has a certificate signed by a custom certificate authority (CA), the quickest hack is to set the `GIT_SSL_NO_VERIFY=true` environment variable (EnvVar) for the agent. The proper solution needs two things:
* Add the custom CA's public certificate to the agent container from a config map to a path (e.g. `/usr/ca/myTrustedCA.pem`).
* Tell Git the path to this cert in an EnvVar `GIT_SSL_CAINFO=/usr/ca/myTrustedCA.pem` or in the `git-config` file mentioned above:
```
[http "https://git.mycompany.com"]
    sslCAInfo = /usr/ca/myTrustedCA.pem
```
Note: In OpenShift v3.7 (and earlier), the config map and secret mount points [must not overlap][20], so we can't map to `/home/jenkins` and `/home/jenkins/dir` at the same time. This is why we didn't use the well-known file locations above. A fix is expected in OpenShift v3.9.
### Maven
To make a Maven build work, there are usually two things to do:
* A corporate Maven repository (e.g., Apache Nexus) should be set up to act as a proxy for external repos. Use this as a mirror.
* This internal repository may have an HTTPS endpoint with a certificate signed by a custom CA.
Having an internal Maven repository is practically essential if builds run in containers because they start with an empty local repository (cache), so Maven downloads all the JARs every time. Downloading from an internal proxy repo on the local network is obviously quicker than downloading from the Internet.
The [Maven Jenkins agent][13] image supports an environment variable that can be used to set the URL for this proxy. Set the following in the Kubernetes plugin container template:
```
MAVEN_MIRROR_URL=https://nexus.mycompany.com/repository/maven-public
```
The build artifacts (JARs) should also be archived in a repository, which may or may not be the same as the one acting as a mirror for dependencies above. Maven `deploy` requires the repo URL in the `pom.xml` under [Distribution management][21] (this has nothing to do with the agent image):
```
<project ...>
<distributionManagement>
 <snapshotRepository>
  <id>mynexus</id>
  <url>https://nexus.mycompany.com/repository/maven-snapshots/</url>
 </snapshotRepository>
 <repository>
  <id>mynexus</id>
  <url>https://nexus.mycompany.com/repository/maven-releases/</url>
 </repository>
</distributionManagement>
```
Uploading the artifact may require authentication. In this case, username/password must be set in the `settings.xml` under the server ID matching the one in `pom.xml`. We need to mount a whole `settings.xml` with the URL, username, and password on the Maven Jenkins agent container from an OpenShift secret. We can also use environment variables as below:
* Add environment variables from a secret to the container:
```
MAVEN_SERVER_USERNAME=admin
MAVEN_SERVER_PASSWORD=admin123
```
* Mount `settings.xml` from a config map to `/home/jenkins/.m2/settings.xml`:
```
<settings ...>
 <mirrors>
  <mirror>
   <mirrorOf>external:*</mirrorOf>
   <url>${env.MAVEN_MIRROR_URL}</url>
   <id>mirror</id>
  </mirror>
 </mirrors>
 <servers>
  <server>
   <id>mynexus</id>
   <username>${env.MAVEN_SERVER_USERNAME}</username>
   <password>${env.MAVEN_SERVER_PASSWORD}</password>
  </server>
 </servers>
</settings>
```
Disable interactive mode (use batch mode) to skip the download log by using `-B` for Maven commands or by adding `<interactiveMode>false</interactiveMode>` to `settings.xml`.
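Purely as an illustration of the flag (the actual goals depend on your job), a batch-mode invocation looks like this:
```
mvn -B clean deploy
```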
If the Maven repository's HTTPS endpoint uses a certificate signed by a custom CA, we need to create a Java KeyStore using the [keytool][22] containing the CA certificate as trusted. This KeyStore should be uploaded as a config map in OpenShift. Use the `oc` command to create a config map from files:
```
 oc create configmap maven-settings --from-file=settings.xml=settings.xml --from-file=myTruststore.jks=myTruststore.jks
```
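For reference, the trust store mentioned above can be created with `keytool`. This is only a sketch; the file names, alias, and password mirror the examples in this article and should be adapted to your environment:
```
keytool -importcert -noprompt \
  -alias mycompany-ca \
  -file myTrustedCA.pem \
  -keystore myTruststore.jks \
  -storepass changeit
```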
Mount the config map somewhere on the Jenkins agent. In this example we use `/home/jenkins/.m2`, but only because we have `settings.xml` in the same config map. The KeyStore can go under any path.
Then make the Maven Java process use this file as a trust store by setting Java parameters in the `MAVEN_OPTS` environment variable for the container:
```
MAVEN_OPTS=
-Djavax.net.ssl.trustStore=/home/jenkins/.m2/myTruststore.jks
-Djavax.net.ssl.trustStorePassword=changeit
```
### Memory usage
This is probably the most important part—if we don't set max memory correctly, we'll run into intermittent build failures after everything seems to work.
Running Java in a container can cause high memory usage errors if we don't set the heap in the Java command line. The JVM [sees the total memory of the host machine][23] instead of the container's memory limit and sets the [default max heap][24] accordingly. This is typically much more than the container's memory limit, and OpenShift simply kills the container when a Java process allocates more memory for the heap.
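A quick way to see what default the JVM would pick inside the container is to print its final flags; this is just a diagnostic check, not part of the build configuration:
```
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize
```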
Although the `jenkins-slave-base` image has a built-in [script to set max heap][25] to half the container memory (this can be modified via EnvVar `CONTAINER_HEAP_PERCENT=0.50`), it only applies to the Jenkins agent Java process. In a Maven build, we have important additional Java processes running:
* The `mvn` command itself is a Java tool.
* The [Maven Surefire-plugin][26] executes the unit tests in a forked JVM by default.
At the end of the day, we'll have three Java processes running at the same time in the container, and it's important to estimate their memory usage to avoid unexpectedly killed pods. Each process has a different way to set JVM options:
* Jenkins agent heap is calculated as mentioned above, but we definitely shouldn't let the agent have such a big heap. Memory is needed for the other two JVMs. Setting `JAVA_OPTS` works for the Jenkins agent.
* The `mvn` tool is called by the Jenkins job. Set `MAVEN_OPTS` to customize this Java process.
* The JVM spawned by the Maven `surefire` plugin for the unit tests can be customized by the [argLine][27] Maven property. It can be set in the `pom.xml`, in a profile in `settings.xml`, or simply by adding `-DargLine=…` to the `mvn` command in `MAVEN_OPTS`.
Here is an example of how to set these environment variables for the Maven agent container:
```
 JAVA_OPTS=-Xms64m -Xmx64m
MAVEN_OPTS=-Xms128m -Xmx128m -DargLine=${env.SUREFIRE_OPTS}
SUREFIRE_OPTS=-Xms256m -Xmx256m
```
These numbers worked in our tests with a 1024Mi agent container memory limit while building and running unit tests for a SpringBoot app. These are relatively low numbers; a bigger heap size and a higher container limit may be needed for complex Maven projects and unit tests.
Note: The actual memory usage of a Java8 process is something like `HeapSize + MetaSpace + OffHeapMemory`, and this can be significantly more than the max heap size set. With the settings above, the three Java processes took more than 900Mi memory in our case. See the RSS memory for processes within the container: `ps -e -o pid,user,rss,comm,args`
The Jenkins agent images have both JDK 64 bit and 32 bit installed. For `mvn` and `surefire`, the 64-bit JVM is used by default. To lower memory usage, it makes sense to force 32-bit JVM as long as `-Xmx` is less than 1.5 GB:
```
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.i386
```
Note that it's also possible to set Java arguments in the `JAVA_TOOL_OPTIONS` EnvVar, which is picked up by any JVM started. The parameters in `JAVA_OPTS` and `MAVEN_OPTS` overwrite the ones in `JAVA_TOOL_OPTIONS`, so we can achieve the same heap configuration for our Java processes as above without using `argLine`:
```
JAVA_OPTS=-Xms64m -Xmx64m
MAVEN_OPTS=-Xms128m -Xmx128m
JAVA_TOOL_OPTIONS=-Xms256m -Xmx256m
```
It's still a bit confusing, as all JVMs log `Picked up JAVA_TOOL_OPTIONS:`
### Jenkins Pipeline
Following the settings above, we should have everything prepared to run a successful build. We can pull the code, download the dependencies, run the unit tests, and upload the artifact to our repository. Let's create a Jenkins Pipeline project that does this:
```
pipeline {
  /* Which container to bring up for the build. Pick one of the templates configured in Kubernetes plugin. */
  agent {
    label 'maven'
  }
  stages {
    stage('Pull Source') {
      steps {
        git url: 'ssh://git@git.mycompany.com:22/myapplication.git', branch: 'master'
      }
    }
    stage('Unit Tests') {
      steps {
        sh 'mvn test'
      }
    }
    stage('Deploy to Nexus') {
      steps {
        sh 'mvn deploy -DskipTests'
      }
    }
  }
}
```
For a real project, of course, the CI/CD pipeline should do more than just the Maven build; it could deploy to a development environment, run integration tests, promote to higher environments, etc. The learning articles linked above show examples of how to do those things.
### Multiple containers
One pod can be running multiple containers with each having their own resource limits. They share the same network interface, so we can reach started services on `localhost`, but we need to think about port collisions. Environment variables are set separately, but the volumes mounted are the same for all containers configured in one Kubernetes pod template.
Bringing up multiple containers is useful when an external service is required for unit tests and an embedded solution doesn't work (e.g., database, message broker, etc.). In this case, this second container also starts and stops with the Jenkins agent.
See the Jenkins `config.xml` snippet where we start an `httpbin` service on the side for our Maven build:
```
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
  <name>maven</name>
  <volumes>
    ...
  </volumes>
  <containers>
    <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      <name>jnlp</name>
      <image>registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7:v3.7</image>
      <resourceLimitCpu>500m</resourceLimitCpu>
      <resourceLimitMemory>1024Mi</resourceLimitMemory>
      <envVars>
      ...
      </envVars>        
      ...
    </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
    <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      <name>httpbin</name>
      <image>citizenstig/httpbin</image>
      <resourceLimitCpu></resourceLimitCpu>
      <resourceLimitMemory>256Mi</resourceLimitMemory>
      <envVars/>
      ...
    </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
  </containers>
  <envVars/>
</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
```
### Summary
For a summary, see the created OpenShift resources and the Kubernetes plugin configuration from Jenkins `config.xml` with the configuration described above.
```
apiVersion: v1
kind: List
metadata: {}
items:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: git-config
  data:
    config: |
      [credential]
          helper = store --file=/home/jenkins/.config/git-secret/credentials
      [http "http://git.mycompany.com"]
          sslCAInfo = /home/jenkins/.config/git/myTrustedCA.pem
    myTrustedCA.pem: |-
      -----BEGIN CERTIFICATE-----
      MIIDVzCCAj+gAwIBAgIJAN0sC...
      -----END CERTIFICATE-----
- apiVersion: v1
  kind: Secret
  metadata:
    name: git-secret
  stringData:
    ssh-privatekey: |-
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
    credentials: |-
      https://username:password@git.mycompany.com
      https://user:pass@github.com
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: git-ssh
  data:
    config: |-
      Host git.mycompany.com
        StrictHostKeyChecking yes
        IdentityFile /home/jenkins/.config/git-secret/ssh-privatekey
    known_hosts: '[git.mycompany.com]:22 ecdsa-sha2-nistp256 AAAdn7...'
- apiVersion: v1
  kind: Secret
  metadata:
    name: maven-secret
  stringData:
    username: admin
    password: admin123
```
One additional config map was created from files:
```
 oc create configmap maven-settings --from-file=settings.xml=settings.xml --from-file=myTruststore.jks=myTruststore.jks
```
Kubernetes plugin configuration:
```
<?xml version='1.0' encoding='UTF-8'?>
<hudson>
...
  <clouds>
    <org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud plugin="kubernetes@1.0">
      <name>openshift</name>
      <defaultsProviderTemplate></defaultsProviderTemplate>
      <templates>
        <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
          <inheritFrom></inheritFrom>
          <name>maven</name>
          <namespace></namespace>
          <privileged>false</privileged>
          <alwaysPullImage>false</alwaysPullImage>
          <instanceCap>2147483647</instanceCap>
          <slaveConnectTimeout>100</slaveConnectTimeout>
          <idleMinutes>0</idleMinutes>
          <label>maven</label>
          <serviceAccount>jenkins37</serviceAccount>
          <nodeSelector></nodeSelector>
          <nodeUsageMode>NORMAL</nodeUsageMode>
          <customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
          <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
            <memory>false</memory>
          </workspaceVolume>
          <volumes>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
              <mountPath>/home/jenkins/.config/git-secret</mountPath>
              <secretName>git-secret</secretName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
              <mountPath>/home/jenkins/.ssh</mountPath>
              <configMapName>git-ssh</configMapName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
              <mountPath>/home/jenkins/.config/git</mountPath>
              <configMapName>git-config</configMapName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
            <org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
              <mountPath>/home/jenkins/.m2</mountPath>
              <configMapName>maven-settings</configMapName>
            </org.csanchez.jenkins.plugins.kubernetes.volumes.ConfigMapVolume>
          </volumes>
          <containers>
            <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
              <name>jnlp</name>
              <image>registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7:v3.7</image>
              <privileged>false</privileged>
              <alwaysPullImage>false</alwaysPullImage>
              <workingDir>/tmp</workingDir>
              <command></command>
              <args>${computer.jnlpmac} ${computer.name}</args>
              <ttyEnabled>false</ttyEnabled>
              <resourceRequestCpu>500m</resourceRequestCpu>
              <resourceRequestMemory>1024Mi</resourceRequestMemory>
              <resourceLimitCpu>500m</resourceLimitCpu>
              <resourceLimitMemory>1024Mi</resourceLimitMemory>
              <envVars>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>JAVA_HOME</key>
                  <value>/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.i386</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>JAVA_OPTS</key>
                  <value>-Xms64m -Xmx64m</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>MAVEN_OPTS</key>
                  <value>-Xms128m -Xmx128m -DargLine=${env.SUREFIRE_OPTS} -Djavax.net.ssl.trustStore=/home/jenkins/.m2/myTruststore.jks -Djavax.net.ssl.trustStorePassword=changeit</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>SUREFIRE_OPTS</key>
                  <value>-Xms256m -Xmx256m</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                  <key>MAVEN_MIRROR_URL</key>
                  <value>https://nexus.mycompany.com/repository/maven-public</value>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
                  <key>MAVEN_SERVER_USERNAME</key>
                  <secretName>maven-secret</secretName>
                  <secretKey>username</secretKey>
                </org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
                <org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
                  <key>MAVEN_SERVER_PASSWORD</key>
                  <secretName>maven-secret</secretName>
                  <secretKey>password</secretKey>
                </org.csanchez.jenkins.plugins.kubernetes.model.SecretEnvVar>
              </envVars>
              <ports/>
              <livenessProbe>
                <execArgs></execArgs>
                <timeoutSeconds>0</timeoutSeconds>
                <initialDelaySeconds>0</initialDelaySeconds>
                <failureThreshold>0</failureThreshold>
                <periodSeconds>0</periodSeconds>
                <successThreshold>0</successThreshold>
              </livenessProbe>
            </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
              <name>httpbin</name>
              <image>citizenstig/httpbin</image>
              <privileged>false</privileged>
              <alwaysPullImage>false</alwaysPullImage>
              <workingDir></workingDir>
              <command>/run.sh</command>
              <args></args>
              <ttyEnabled>false</ttyEnabled>
              <resourceRequestCpu></resourceRequestCpu>
              <resourceRequestMemory>256Mi</resourceRequestMemory>
              <resourceLimitCpu></resourceLimitCpu>
              <resourceLimitMemory>256Mi</resourceLimitMemory>
              <envVars/>
              <ports/>
              <livenessProbe>
                <execArgs></execArgs>
                <timeoutSeconds>0</timeoutSeconds>
                <initialDelaySeconds>0</initialDelaySeconds>
                <failureThreshold>0</failureThreshold>
                <periodSeconds>0</periodSeconds>
                <successThreshold>0</successThreshold>
              </livenessProbe>
            </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
          </containers>
          <envVars/>
          <annotations/>
          <imagePullSecrets/>
        </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      </templates>
      <serverUrl>https://172.30.0.1:443</serverUrl>
      <serverCertificate>-----BEGIN CERTIFICATE-----
MIIC6jCC...
-----END CERTIFICATE-----</serverCertificate>
      <skipTlsVerify>false</skipTlsVerify>
      <namespace>first</namespace>
      <jenkinsUrl>http://jenkins.cicd.svc:80</jenkinsUrl>
      <jenkinsTunnel>jenkins-jnlp.cicd.svc:50000</jenkinsTunnel>
      <credentialsId>1a12dfa4-7fc5-47a7-aa17-cc56572a41c7</credentialsId>
      <containerCap>10</containerCap>
      <retentionTimeout>5</retentionTimeout>
      <connectTimeout>0</connectTimeout>
      <readTimeout>0</readTimeout>
      <maxRequestsPerHost>32</maxRequestsPerHost>
    </org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud>
  </clouds>
</hudson>
```
Happy builds!
This was originally published on [ITNext][28] and is reprinted with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/running-jenkins-builds-containers
作者:[Balazs Szeti][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bszeti
[1]:https://opensource.com/resources/what-docker
[2]:https://opensource.com/resources/what-is-kubernetes
[3]:https://martinfowler.com/articles/microservices.html
[4]:https://www.openshift.com/
[5]:https://developers.redhat.com/products/cdk/overview/
[6]:https://github.com/openshift/origin/tree/master/examples/jenkins
[7]:https://wiki.jenkins.io/display/JENKINS/Distributed+builds
[8]:https://jenkins.io/doc/book/pipeline/
[9]:https://github.com/jenkinsci/kubernetes-plugin
[10]:https://access.redhat.com/containers/#/search/jenkins%2520slave
[11]:https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=openshift+jenkins+slave+&starCount=0
[12]:https://github.com/openshift/jenkins/tree/master/slave-base
[13]:https://github.com/openshift/jenkins/tree/master/slave-maven
[14]:https://github.com/openshift/jenkins/tree/master/slave-nodejs
[15]:https://docs.openshift.com/container-platform/3.7/architecture/core_concepts/builds_and_image_streams.html#source-build
[16]:https://github.com/bszeti/camel-springboot/tree/master/camel-rest-complex
[17]:https://issues.jenkins-ci.org/browse/JENKINS-47112
[18]:https://git-scm.com/docs/git-credential-store/1.8.2
[19]:https://git-scm.com/docs/git-config/1.8.2
[20]:https://bugzilla.redhat.com/show_bug.cgi?id=1430322
[21]:https://maven.apache.org/pom.html#Distribution_Management
[22]:https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html
[23]:https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
[24]:https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/parallel.html#default_heap_size
[25]:https://github.com/openshift/jenkins/blob/master/slave-base/contrib/bin/run-jnlp-client
[26]:http://maven.apache.org/surefire/maven-surefire-plugin/examples/fork-options-and-parallel-execution.html
[27]:http://maven.apache.org/surefire/maven-surefire-plugin/test-mojo.html#argLine
[28]:https://itnext.io/running-jenkins-builds-in-containers-458e90ff2a7b

View File

@ -1,103 +0,0 @@
可怕的万圣节 Linux 命令
======
万圣节快到了,是时候关注一下 Linux 可怕的一面。什么命令可能会显示鬼、巫婆和僵尸的图像?这可能会鼓励伎俩或治疗的精神?哪个会鼓励“不给糖果就捣蛋”的精神?
### crypt
好吧,我们一直看到 **crypt**。尽管名称不同crypt 不是一个地窖也不是垃圾文件的埋葬坑而是一个加密文件内容的命令。现在“crypt” 通常用一个脚本实现,通过调用一个名为 **mcrypt** 的二进制文件来模拟以前的 crypt 命令来完成它的工作。直接使用 **mycrypt** 命令是更好的选择。
```
$ mcrypt x
Enter the passphrase (maximum of 512 characters)
Please use a combination of upper and lower case letters and numbers.
Enter passphrase:
Enter passphrase:
File x was encrypted.
```
请注意mcrypt 命令会创建一个扩展名为 “.nc” 的第二个文件。它不会覆盖你正在加密的文件。
mcrypt 命令有密钥大小和加密算法的选项。你也可以再选项中指定密钥,但 mcrypt 命令不鼓励这样做。
### kill
还有 kill 命令 - 当然并不是指谋杀而是用来强制和非强制地结束进程这取决于正确终止它们的要求。当然Linux 并不止于此。相反,它有各种 kill 命令来终止进程。我们有 kill、pkill、killall、killpg、rfkill、skill (读作 es-kill)、tgkill、tkill 和 xkill。
```
$ killall runme
[1] Terminated ./runme
[2] Terminated ./runme
[3]- Terminated ./runme
[4]+ Terminated ./runme
```
### shred
Linux 系统也支持一个名为 **shred** 的命令。shred 命令会覆盖文件以隐藏其以前的内容并确保使用硬盘恢复工具无法恢复它们。请记住rm 命令基本上只是删除文件在目录文件中的引用,但不一定会从磁盘上删除内容或覆盖它。**shred** 命令覆盖文件的内容。
```
$ shred dupes.txt
$ more dupes.txt
▒oΛ▒▒9▒lm▒▒▒▒▒o▒1־▒▒f▒f▒▒▒i▒▒h^}&▒▒▒{▒▒
```
### 僵尸
虽然不是命令,但**僵尸**在 Linux 系统上是很顽固的存在。僵尸基本上是没有完全清理掉的死亡进程的遗骸。进程_不应该_这样工作 - 让死亡进程四处游荡,而不是简单地让它们死亡并进入数字天堂,所以僵尸的存在表明让他们遗留的进程有一些缺陷。
一个简单的方法来检查你的系统是否有僵尸进程遗留,看看 top 命令的标题行。
```
$ top
top - 18:50:38 up 6 days, 6:36, 2 users, load average: 0.00, 0.00, 0.00
Tasks: 171 total, 1 running, 167 sleeping, 0 stopped, 3 zombie **< ==**
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 99.9 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 2003388 total, 250840 free, 545832 used, 1206716 buff/cache
KiB Swap: 9765884 total, 9765764 free, 120 used. 1156536 avail Mem
```
可怕!上面显示有三个僵尸进程。
### at midnight
有时会在万圣节这么说死者的灵魂从日落开始游荡直到午夜。Linux 可以通过 “at midnight” 命令跟踪它们的离开。用于安排在下次到达指定时间时运行的作业,**at** 的作用类似于一次性的 cron。
```
$ at midnight
warning: commands will be executed using /bin/sh
at> echo 'the spirits of the dead have left'
at> <EOT>
job 3 at Thu Oct 31 00:00:00 2017
```
### 守护进程
Linux 系统也高度依赖守护进程 - 在后台运行的进程,并提供系统的许多功能。许多守护进程的名称以 “d” 结尾。这个 “d” 代表“守护进程” daemon表明这个进程一直运行并支持一些重要功能。有的会将单词 “daemon” 展开。
```
$ ps -ef | grep sshd
root 1142 1 0 Oct19 ? 00:00:00 /usr/sbin/sshd -D
root 25342 1142 0 18:34 ? 00:00:00 sshd: shs [priv]
$ ps -ef | grep daemon | grep -v grep
message+ 790 1 0 Oct19 ? 00:00:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 836 1 0 Oct19 ? 00:00:02 /usr/lib/accountsservice/accounts-daemon
```
### 万圣节快乐!
在 [Facebook][1] 和 [LinkedIn][2] 上加入 Network World 社区来对主题进行评论。
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3235219/linux/scary-linux-commands-for-halloween.html
作者:[Sandra Henry-Stocker][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:https://www.facebook.com/NetworkWorld/
[2]:https://www.linkedin.com/company/network-world

View File

@ -0,0 +1,104 @@
如何像 Linux 专家那样使用 WSL
============================================================
![WSL](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/wsl-pro.png?itok=e65wEEAw "WSL")
在本 WSL 教程中了解如何执行像挂载 USB 驱动器和操作文件等任务。图片提供:Microsoft[经许可使用][1]
在[之前的教程][4]中,我们学习了在 Windows 10 上设置 WSL。你可以在 Windows 10 中使用 WSL 执行许多 Linux 命令。许多系统管理任务都是在终端内部完成的,无论是基于 Linux 的系统还是 macOS。然而Windows 10 缺乏这样的功能。你想运行一个 cron 任务么?不行。你想 SSH 进入你的服务器,然后 rsync 文件么?没门。如何用强大的命令行工具管理本地文件,而不是使用缓慢和不可靠的 GUI 工具?
在本教程中,你将看到如何使用 WSL 执行系统管理之外的其他任务,例如挂载 USB 驱动器和操作文件。你需要运行一个完全更新的 Windows 10 并选择一个 Linux 发行版。我在[上一篇文章][5]中介绍了这些步骤,所以如果你需要赶上,那就从那里开始。让我们开始吧。
### 保持你的 Linux 系统更新
事实上,当你通过 WSL 运行 Ubuntu 或 openSUSE 时,没有 Linux 内核在运行。然而,你必须保持你的发行版完整更新,以保护你的系统免受任何新的已知漏洞的影响。由于在 Windows 应用商店中只有两个免费的社区发行版,所以教程将只覆盖以下两个openSUSE 和 Ubuntu。
更新你的 Ubuntu 系统:
```
# sudo apt-get update
# sudo apt-get dist-upgrade
```
运行 openSUSE 的更新:
```
# zypper up
```
您还可以使用 _dup_ 命令将 openSUSE 升级到最新版本。但在运行系统升级之前,请使用上一个命令运行更新。
```
# zypper dup
```
**注意:** openSUSE 默认为 root 用户。如果你想执行任何非管理员任务,请切换到非特权用户。您可以在这篇[文章][6]中了解如何在 openSUSE 上创建用户。
### 管理本地文件
如果你想使用优秀的 Linux 命令行工具来管理本地文件,你可以使用 WSL 轻松完成此操作。不幸的是WSL 还不支持像 _lsblk_ 和 _mount_ 这样的东西来挂载本地驱动器。但是,你可以 _cd_ 到 C 盘并管理文件:
/mnt/c/Users/swapnil/Music
我现在在 C 盘的 Music 目录下。
要安装其他驱动器、分区和外部 USB 驱动器,你需要创建一个挂载点,然后挂载该驱动器。
打开文件资源管理器并检查该驱动器的挂载点。假设它在 Windows 中被挂载为 S:\
在 Ubuntu/openSUSE 终端中,为驱动器创建一个挂载点。
```
sudo mkdir /mnt/s
```
现在挂载驱动器:
```
sudo mount -t drvfs S: /mnt/s
```
挂载完毕后,你现在可以从发行版访问该驱动器。请记住,使用 WSL 运行的发行版将会看到 Windows 能看到的内容。因此,你无法挂载在 Windows 上无法原生挂载的 ext4 驱动器。
现在你可以在这里使用所有这些神奇的 Linux 命令。想要将文件从一个文件夹复制或移动到另一个文件夹?只需运行 _cp__mv_ 命令。
```
cp /source-folder/source-file.txt /destination-folder/
cp /music/classical/Beethoven/symphony-2.mp3 /plex-media/music/classical/
```
如果你想移动文件夹或大文件,我会推荐 _rsync_ 而不是 _cp_ 命令:
```
rsync -avzP /music/classical/Beethoven/symphonies/ /plex-media/music/classical/
```
耶!
想要在 Windows 驱动器中创建新目录,只需使用 _mkdir_ 命令。
想要在某个时间设置一个 cron 作业来自动执行任务吗?继续使用 _crontab -e_ 创建一个 cron 作业。十分简单。
你还可以在 Linux 中挂载网络/远程文件夹,以便你可以使用更好的工具管理它们。我的所有驱动器都插在树莓派或者服务器上,因此我只需 ssh 进入该机器并管理硬盘。在本地计算机和远程系统之间传输文件可以再次使用 _rsync_ 命令完成。
WSL 现在已经不再是测试版了,它将继续获得更多新功能。我很兴奋的两个特性是 lsblk 命令和 dd 命令,它们允许我在 Windows 中本机管理我的驱动器并创建可引导的 Linux 驱动器。如果你是 Linux 命令行的新手,[前一篇教程][7]将帮助你开始使用一些最基本的命令。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/2/how-use-wsl-linux-pro
作者:[SWAPNIL BHARTIYA][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/licenses/category/used-permission
[2]:https://blogs.msdn.microsoft.com/commandline/learn-about-windows-console-and-windows-subsystem-for-linux-wsl/
[3]:https://www.linux.com/files/images/wsl-propng
[4]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[5]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[6]:https://www.linux.com/blog/learn/2018/2/how-get-started-using-wsl-windows-10
[7]:https://www.linux.com/learn/how-use-linux-command-line-basics-cli