Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu.Wang 2018-04-16 22:51:36 +08:00
commit 92bfa538fd
23 changed files with 2134 additions and 462 deletions

View File

@ -74,7 +74,6 @@ gunzip -c file1.gz > /home/himanshu/file1
> `gunzip` 在命令行接受一系列的文件,并且将每个文件内容以正确的魔法数开始,且后缀名为 `.gz`、`-gz`、`.z`、`-z` 或 `_z`(忽略大小写)的压缩文件,用未压缩的文件替换它,并删除其原扩展名。`gunzip` 也可识别一些特殊扩展名的压缩文件,如 `.tgz` 和 `.taz` 分别是 `.tar.gz` 和 `.tar.Z` 的缩写。在压缩时,`gzip` 在必要情况下使用 `.tgz` 作为扩展名,而不是只截取掉 `.tar` 后缀。
> `gunzip` 目前可以解压 `gzip`、`zip`、`compress`、`compress -H``pack`)产生的文件。`gunzip` 自动检测输入文件格式。在使用前两种压缩格式时,`gunzip` 会检验 32 位循环冗余校验码CRC。对于 pack 包,`gunzip` 会检验压缩长度。标准压缩格式在设计上不允许相容性检测。不过 `gunzip` 有时可以检测出坏的 `.Z` 文件。如果你解压 `.Z` 文件时出错,不要因为标准解压没报错就认为 `.Z` 文件一定是正确的。这通常意味着标准解压过程不检测它的输入,而是直接产生一个错误的输出。SCO 的 `compress -H` 格式lzh 压缩方法)不包括 CRC 校验码,但也允许一些相容性检查。
```
### 结语

View File

@ -1,14 +1,17 @@
CIO 真正需要 DevOps 团队做什么?
======
IT 领导者可以从大量的 [DevOps][1] 材料和 [向 DevOps 转变][2] 所要求的文化挑战中学习。但是,你在一个 DevOps 团队面对长期或短期挑战的调整中 —— 一个 CIO 真正需要他们做的是什么呢?
> DevOps 团队需要 IT 领导者关注三件事:沟通、技术债务和信任。
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_cloud_2.png?itok=wGdeAHVn)
IT 领导者可以从大量的 [DevOps][1] 材料和 [向 DevOps 转变][2] 所要求的文化挑战中学习。但是,你在一个 DevOps 团队面对长期或短期的团结挑战的调整中 —— 一个 CIO 真正需要他们做的是什么呢?
在我与 DevOps 团队成员的谈话中,我听到的其中一些内容可能会让你感到非常意外。DevOps 专家(无论是内部团队的还是外部团队的)都希望将下列的事情放在你的 CIO 优先关注的级别。
### 1. 沟通
第一个也是最重要的一个DevOps 专家需要面对面的沟通。一个经验丰富的 DevOps 团队是非常了解当前 DevOps 的趋势,以及成功、和失败的经验,并且他们非常乐意去分享这些信息。表达 DevOps 的概念是很困难的,因此,要在这种新的工作关系中保持开放,定期(不用担心,不用每周)讨论有关你的 IT 的当前状态,如何评价你的沟通环境,以及你的整体的 IT 产业。
**[想从领导 DevOps 的 CIO 们处学习更多的知识吗?查看我们的综合资源,[DevOps: IT 领导者指南][3]。 ]**
第一个也是最重要的一个DevOps 专家需要面对面的沟通。一个经验丰富的 DevOps 团队是非常了解当前 DevOps 的趋势,以及成功和失败的经验,并且他们非常乐意去分享这些信息。表达 DevOps 的概念是很困难的,因此,要在这种新的工作关系中保持开放,定期(不用担心,不用每周)讨论有关你的 IT 的当前状态,如何评价你的沟通环境,以及你的整体的 IT 产业。
相反,你应该准备好与 DevOps 团队去共享当前的业务需求和目标。业务不再是独立于 IT 的东西:它们现在是驱动 IT 发展的重要因素,并且 IT 决定了你的业务需求和目标运行的效果如何。
@ -16,19 +19,17 @@ IT 领导者可以从大量的 [DevOps][1] 材料和 [向 DevOps 转变][2] 所
### 2. 降低技术债务
第二,力争更好地理解技术债务,并在 DevOps 中努力降低它。你的 DevOps 团队面对的工作都非常难。在这种情况下,技术债务是指在一个庞大的、不可持续的环境(查看 Rube Goldberg之中,通过维护和增加新功能而占用的人力资源和基础设备资源。
第二,力争更好地理解技术债务,并在 DevOps 中努力降低它。你的 DevOps 团队都工作于一线。这里,“技术债务”是指在一个庞大的、不可持续发展的环境之中,通过维护和增加新功能而占用的人力资源和基础设备资源(查看 Rube Goldberg
常见的 CIO 问题包括:
CIO 常见的问题包括:
* 为什么我们要用一种新方法去做这件事情?
* 为什么我们要在它上面花费时间和金钱?
* 如果这里没有新功能,只是现有组件实现了自动化,那么我们的收益是什么?
“如果没有坏,就不要去修理它”,这样的事情是可以理解的。但是,如果你正在路上好好的开车,而每个人都加速超过你,这时候,你的环境就被破坏了。持续投入宝贵的资源去支撑或扩张拼凑起来的环境。
"如果没有坏,就不要去修理它“ ,这样的事情是可以理解的。但是,如果你正在路上好好的开车,而每个人都加速超过你,这时候,你的环境就被破坏了。持续投入宝贵的资源去支撑或扩张拼凑起来的环境。
选择妥协,并且一个接一个的打补丁,以这种方式去处理每个独立的问题,结果将从一开始就变得很糟糕 —— 在一个不能支撑建筑物的地基上,一层摞一层地往上堆。事实上,这种方法就像不断地在电脑中插入坏磁盘一样。迟早有一天,面对出现的问题,你将会毫无办法。在外面持续增加的压力下,整个事情将变得一团糟,完全吞噬掉你的资源。
选择妥协,并且一个接一个的打补丁,以这种方式去处理每个独立的问题,结果将从一开始就变得很糟糕 —— 在一个不能支撑建筑物的地基上,一层摞一层地往上堆。事实上,这种方法就像不断地在电脑中插入坏磁盘一样。迟早有一天,面对出现的问题你将会毫无办法。在外面持续增加的压力下,整个事情将变得一团糟,完全吞噬掉你的资源。
这种情况下,解决方案就是:自动化。使用自动化的结果是良好的可伸缩性 —— 每个维护人员在 IT 环境的维护和增长方面花费更少的努力。如果增加人力资源是实现业务增长的唯一办法,那么,可伸缩性就是白日做梦。
@ -36,7 +37,7 @@ IT 领导者可以从大量的 [DevOps][1] 材料和 [向 DevOps 转变][2] 所
### 3. 信任
最后,相信你的 DevOps 团队并且一定要理解他们。DevOps 专家也知道这个要求很难,但是他们必须有你的强大支持和你参与实践的意愿。因为 DevOps 团队持续改进你的 IT 环境,他们自身也在不断地适应这些变化的技术,而这些变化通常正是 “你要去学习的经验”。
最后,相信你的 DevOps 团队并且一定要理解他们。DevOps 专家也知道这个要求很难,但是他们必须得到你的强大支持和你积极参与的意愿。因为 DevOps 团队持续改进你的 IT 环境,他们自身也在不断地适应这些变化的技术,而这些变化通常正是 “你要去学习的经验”。
倾听倾听倾听他们并且相信他们。DevOps 的改变是非常有价值的,而且也是值的去投入时间和金钱的。它可以提高效率、生产力、和业务响应能力。信任你的 DevOps 团队,并且给予他们更多的自由,实现更高效率的 IT 改进。
@ -48,7 +49,7 @@ via: https://enterprisersproject.com/article/2017/12/what-devops-teams-really-ne
作者:[John Allessio][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,175 @@
使用 GitHub 和 Python 实现持续部署
======
![](https://fedoramagazine.org/wp-content/uploads/2018/03/cd-github-python-945x400.jpg)
借助 GitHub 的<ruby>网络钩子<rt>webhook</rt></ruby>,开发者可以创建很多有用的服务。从触发一个 Jenkins 实例上的 CI持续集成任务到配置云中的机器,几乎有着无限的可能性。这篇教程将展示如何使用 Python 和 Flask 框架来搭建一个简单的持续部署CD服务。
在这个例子中的持续部署服务是一个简单的 Flask 应用,其带有接受 GitHub 的<ruby>网络钩子<rt>webhook</rt></ruby>请求的 REST <ruby>端点<rt>endpoint</rt></ruby>。在验证每个请求都来自正确的 GitHub 仓库后,服务器将<ruby>拉取<rt>pull</rt></ruby>更改到仓库的本地副本。这样每次一个新的<ruby>提交<rt>commit</rt></ruby>推送到远程 GitHub 仓库,本地仓库就会自动更新。
### Flask web 服务
用 Flask 搭建一个小的 web 服务非常简单。这里可以先看看项目的结构。
```
├── app
│ ├── __init__.py
│ └── webhooks.py
├── requirements.txt
└── wsgi.py
```
首先,创建应用。应用代码在 `app` 目录下。
两个文件(`__init__.py` 和 `webhooks.py`)构成了 Flask 应用。前者包含有创建 Flask 应用并为其添加配置的代码。后者有<ruby>端点<rt>endpoint</rt></ruby>逻辑。这是该应用接收 GitHub 请求数据的地方。
这里是 `app/__init__.py` 的内容:
```
import os
from flask import Flask
from .webhooks import webhook
def create_app():
    """ Create, configure and return the Flask application """
    app = Flask(__name__)
    app.config['GITHUB_SECRET'] = os.environ.get('GITHUB_SECRET')
    app.config['REPO_PATH'] = os.environ.get('REPO_PATH')
    app.register_blueprint(webhook)
    return(app)
```
该函数创建了两个配置变量:
* `GITHUB_SECRET` 保存一个密码,用来认证 GitHub 请求。
* `REPO_PATH` 保存了自动更新的仓库路径。
这份代码使用<ruby>[Flask 蓝图][1]<rt>Flask Blueprints</rt></ruby>来组织应用的<ruby>端点<rt>endpoint</rt></ruby>。使用蓝图可以对 API 进行逻辑分组,使应用程序更易于维护。通常认为这是一种好的做法。
这里是 `app/webhooks.py` 的内容:
```
import hmac
from flask import request, Blueprint, jsonify, current_app
from git import Repo
webhook = Blueprint('webhook', __name__, url_prefix='')
@webhook.route('/github', methods=['POST'])
def handle_github_hook():
    """ Entry point for github webhook """
    signature = request.headers.get('X-Hub-Signature')
    sha, signature = signature.split('=')
    secret = str.encode(current_app.config.get('GITHUB_SECRET'))
    hashhex = hmac.new(secret, request.data, digestmod='sha1').hexdigest()
    if hmac.compare_digest(hashhex, signature):
        repo = Repo(current_app.config.get('REPO_PATH'))
        origin = repo.remotes.origin
        origin.pull('--rebase')
        commit = request.json['after'][0:6]
        print('Repository updated with commit {}'.format(commit))
    return jsonify({}), 200
```
首先,代码创建了一个新的蓝图 `webhook`,然后使用 Flask 的 `route` 装饰器为蓝图添加了一个端点。任何发往 `/github` 这个 URL 的 POST 请求都将调用这个路由。
#### 验证请求
当服务在该端点上接到请求时,首先它必须验证该请求是否来自 GitHub 以及来自正确的仓库。GitHub 在请求头的 `X-Hub-Signature` 中提供了一个签名。该签名以密码(`GITHUB_SECRET`)为密钥,对请求体计算 HMAC并使用 `sha1` 哈希算法生成十六进制摘要。
为了验证请求,服务需要在本地计算签名并与请求头中收到的签名做比较。这可以由 `hmac.compare_digest` 函数完成。
#### 自定义钩子逻辑
在验证请求后,现在就可以处理了。这篇教程使用 [GitPython][3] 模块来与 git 仓库进行交互。GitPython 模块中的 `Repo` 对象用于访问远程仓库 `origin`。该服务在本地拉取 `origin` 仓库的最新更改,还用 `--rebase` 选项来避免合并的问题。
调试打印语句显示了从请求体收到的短提交哈希。这个例子展示了如何使用请求体。更多关于请求体的可用数据的信息,请查询 [GitHub 文档][4]。
最后该服务返回了一个空的 JSON 字符串和 200 的状态码。这用于告诉 GitHub 的网络钩子服务已经收到了请求。
### 部署服务
为了运行该服务,这个例子使用 [gunicorn][5] web 服务器。首先安装服务依赖。在支持的 Fedora 服务器上,以 [sudo][6] 运行这条命令:
```
sudo dnf install python3-gunicorn python3-flask python3-GitPython
```
现在编辑 gunicorn 使用的 `wsgi.py` 文件来运行该服务:
```
from app import create_app
application = create_app()
```
为了部署服务,使用以下命令克隆这个 git [仓库][7]或者使用你自己的 git 仓库:
```
git clone https://github.com/cverna/github_hook_deployment.git /opt/
```
下一步是配置服务所需的环境变量。运行这些命令:
```
export GITHUB_SECRET=asecretpassphraseusebygithubwebhook
export REPO_PATH=/opt/github_hook_deployment/
```
这篇教程使用网络钩子服务的 GitHub 仓库,但你可以使用你想要的不同仓库。最后,使用这些命令开启该 web 服务:
```
cd /opt/github_hook_deployment/
gunicorn --bind 0.0.0.0 wsgi:application --reload
```
这些选项将 web 服务绑定到 IP 地址 `0.0.0.0`,意味着它将接收来自任何主机的请求。选项 `--reload` 确保了当代码更改时重启 web 服务。这就是持续部署的魔力所在:每次接收到 GitHub 请求时将拉取仓库的最近更新,同时 gunicorn 检测到这些更改并自动重启服务。
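如果想在把服务交给 GitHub 之前先在本地验证这个端点,可以手动构造一个带签名的请求。下面是一个示例(假设服务监听在本地 8000 端口、`GITHUB_SECRET` 为 `mysecret`,仅作演示):
```
# 用 openssl 计算请求体的 HMAC-SHA1 签名,再用 curl 模拟 GitHub 的网络钩子请求
BODY='{"after":"0123456789abcdef"}'
SIG=$(printf '%s' "$BODY" | openssl dgst -sha1 -hmac "mysecret" | awk '{print $NF}')
curl -X POST "http://localhost:8000/github" \
     -H "Content-Type: application/json" \
     -H "X-Hub-Signature: sha1=$SIG" \
     -d "$BODY"
```
如果签名验证通过,服务会拉取仓库并返回 200。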
**注意:** 为了能接收到 GitHub 请求web 服务必须部署到具有公有 IP 地址的服务器上。做到这点的简单方法就是使用你最喜欢的云提供商,比如 DigitalOcean、AWS、Linode 等。
### 配置 GitHub
这篇教程的最后一部分是配置 GitHub 来发送网络钩子请求到 web 服务上。这是持续部署的关键。
从你的 GitHub 仓库的设置中,选择 Webhook 菜单并且点击“Add Webhook”。输入以下信息
* “Payload URL” 服务的 URL比如 `<http://public_ip_address:8000/github>`
* “Content type” 选择 “application/json”
* “Secret” 前面定义的 `GITHUB_SECRET` 环境变量
然后点击“Add Webhook” 按钮。
![][8]
现在每当该仓库发生推送事件时GitHub 将向服务发送请求。
### 总结
这篇教程向你展示了如何写一个基于 Flask 的用于接收 GitHub 的网络钩子请求,并实现持续集成的 web 服务。现在你应该能以本教程作为起点来搭建对自己有用的服务。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/continuous-deployment-github-python/
作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[kimii](https://github.com/kimii)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:http://flask.pocoo.org/docs/0.12/blueprints/
[2]:https://en.wikipedia.org/wiki/HMAC
[3]:https://gitpython.readthedocs.io/en/stable/index.html
[4]:https://developer.github.com/v3/activity/events/types/#webhook-payload-example-26
[5]:http://gunicorn.org/
[6]:https://fedoramagazine.org/howto-use-sudo/
[7]:https://github.com/cverna/github_hook_deployment.git
[8]:https://fedoramagazine.org/wp-content/uploads/2018/03/Screenshot-2018-3-26-cverna-github_hook_deployment1.png

View File

@ -1,3 +1,5 @@
translating by MZqk
Whats next in IT automation: 6 trends to watch
======

View File

@ -0,0 +1,37 @@
Easily Run And Integrate AppImage Files With AppImageLauncher
======
Did you ever download an AppImage file and not know how to use it? Or maybe you know how to use it, but you have to navigate to the folder where you downloaded the .AppImage file every time you want to run it, or manually create a launcher for it.
With AppImageLauncher, these are problems of the past. The application lets you **easily run AppImage files, without having to make them executable**. But its most interesting feature is easily integrating AppImages with your system: **AppImageLauncher can automatically add an AppImage application shortcut to your desktop environment's application launcher / menu (including the app icon and proper description).**
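For comparison, this is roughly what running a downloaded AppImage involves without AppImageLauncher (the file name below is only a placeholder):
```
chmod +x ~/Downloads/SomeApp.AppImage
~/Downloads/SomeApp.AppImage
```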
Without making the downloaded Kdenlive AppImage executable manually, the first time I double-click it (with AppImageLauncher installed), AppImageLauncher presents two options:
`Run once` or `Integrate and run`.
Clicking on Integrate and run, the AppImage is copied to the `~/.bin/` folder (hidden folder in the home directory) and is added to the menu, then the app is launched.
**Removing it is just as simple**, as long as the desktop environment you're using has support for desktop actions. For example, in Gnome Shell, simply **right click the application icon in the Activities Overview and select** `Remove from system`:
The AppImageLauncher GitHub page says that the application only supports Debian-based systems for now (this includes Ubuntu and Linux Mint) because it integrates deeply with the system. The application is currently in heavy development, and there are already issues opened by its developer to build RPM packages, so Fedora / openSUSE support might be added in the not too distant future.
### Download AppImageLauncher
The AppImageLauncher download page provides binaries for Debian, Ubuntu or Linux Mint (64bit), as well as a 64bit AppImage. The source is also available.
[Download AppImageLauncher][1]
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/04/easily-run-and-integrate-appimage-files.html
作者:[Logix][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://github.com/TheAssassin/AppImageLauncher/releases
[2]:https://kdenlive.org/download/

View File

@ -0,0 +1,71 @@
Management, from coordination to collaboration
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUS_consensuscollab2.png?itok=uMO9zn5U)
Any organization is fundamentally a pattern of interactions between people. The nature of those interactions—their quality, their frequency, their outcomes—is the most important product an organization can create. Perhaps counterintuitively, recognizing this fact has never been more important than it is today—a time when digital technologies are reshaping not only how we work but also what we do when we come together.
And yet many organizational leaders treat those interactions between people as obstacles or hindrances to avoid or eliminate, rather than as the powerful sources of innovation they really are.
That's why we're observing that some of the most successful organizations today are those capable of shifting the way they think about the value of the interactions in the workplace. And to do that, they've radically altered their approach to management and leadership.
### Moving beyond mechanical management
Simply put, traditionally managed organizations treat unanticipated interactions between stakeholders as potentially destructive forces—and therefore as costs to be mitigated.
This view has a long, storied history in the field of economics. But it's perhaps nowhere more clear than in the early writing of Nobel Prize-winning economist [Ronald Coase][1]. In 1937, Coase published "[The Nature of the Firm][2]," an essay about the reasons people organized into firms to work on large-scale projects—rather than tackle those projects alone. Coase argued that when the cost of coordinating workers together inside a firm is less than that of similar market transactions outside, people will tend to organize so they can reap the benefits of lower operating costs.
But at some point, Coase's theory goes, the work of coordinating interactions between so many people inside the firm actually outweighs the benefits of having an organization in the first place. The complexity of those interactions becomes too difficult to handle. Management, then, should serve the function of decreasing this complexity. Its primary goal is coordination, eliminating the costs associated with messy interpersonal interactions that could slow the firm and reduce its efficiency. As one Fortune 100 CEO recently told me, "Failures happen most often around organizational handoffs."
This makes sense to people practicing what I've called "[mechanical management][3]," where managing people is the act of keeping them focused on specific, repeatable, specialized tasks. Here, management's key function is optimizing coordination costs—ensuring that every specialized component of the finely-tuned organizational machine doesn't impinge on the others and slow them down. Managers work to avoid failures by coordinating different functions across the organization (accounts payable, research and development, engineering, human resources, sales, and so on) to get them to operate toward a common goal. And managers create value by controlling information flows, intervening only when functions become misaligned.
Today, when so many of these traditionally well-defined tasks have become automated, value creation is much more a result of novel innovation and problem solving—not finding new ways to drive efficiency from repeatable processes. But numerous studies demonstrate that innovative, problem-solving activity occurs much more regularly when people work in cross-functional teams—not as isolated individuals or groups constrained by single-functional silos. This kind of activity can lead to what some call "accidental integration": the serendipitous innovation that occurs when old elements combine in new and unforeseen ways.
That's why working collaboratively has now become a necessity that managers need to foster, not eliminate.
### From coordination to collaboration
Reframing the value of the firm—from something that coordinated individual transactions to something that produces novel innovations—means rethinking the value of the relations at the core of our organizations. And that begins with reimagining the task of management, which is no longer concerned primarily with minimizing coordination costs but maximizing cooperation opportunities.
Too few of our tried-and-true management practices have this goal. If they're seeking greater innovation, managers need to encourage more interactions between people in different functional areas, not fewer. A cross-functional team may not be as efficient as one composed of people with the same skill sets. But a cross-functional team is more likely to be the one connecting points between elements in your organization that no one had ever thought to connect (the one more likely, in other words, to achieve accidental integration).
Working collaboratively has now become a necessity that managers need to foster, not eliminate.
I have three suggestions for leaders interested in making this shift:
First, define organizations around processes, not functions. We've seen this strategy work in enterprise IT, for example, in the case of [DevOps][4], where teams emerge around end goals (like a mobile application or a website), not singular functions (like developing, testing, and production). In DevOps environments, the same team that writes the code is responsible for maintaining it once it's in production. (We've found that when the same people who write the code are the ones woken up when it fails at 3 a.m., we get better code.)
Second, define work around the optimal organization rather than the organization around the work. Amazon is a good example of this strategy. Teams usually stick to the "[Two Pizza Rule][5]" when establishing optimal conditions for collaboration. In other words, Amazon leaders have determined that the best-sized team for maximum innovation is about 10 people, or a group they can feed with two pizzas. If the problem gets bigger than that two-pizza team can handle, they split the problem into two simpler problems, dividing the work between multiple teams rather than adding more people to the single team.
And third, to foster creative behavior and really get people cooperating with one another, do whatever you can to cultivate a culture of honest and direct feedback. Be straightforward and, as I wrote in The Open Organization, let the sparks fly; have frank conversations and let the best ideas win.
### Let it go
I realize that asking managers to significantly shift the way they think about their roles can lead to fear and skepticism. Some managers define their performance (and their very identities) by the control they exert over information and people. But the more you dictate the specific ways your organization should do something, the more static and brittle that activity becomes. Agility requires letting go—giving up a certain degree of control.
Front-line managers will see their roles morph from dictating and monitoring to enabling and supporting. Instead of setting individual-oriented goals, they'll need to set group-oriented goals. Instead of developing individual incentives, they'll need to consider group-oriented incentives.
Because ultimately, their goal should be to [create the context in which their teams can do their best work][6].
[Subscribe to our weekly newsletter][7] to learn more about open organizations.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/4/management-coordination-collaboration
作者:[Jim Whitehurst][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/remyd
[1]:https://news.uchicago.edu/article/2013/09/02/ronald-h-coase-founding-scholar-law-and-economics-1910-2013
[2]:http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0335.1937.tb00002.x/full
[3]:https://opensource.com/open-organization/18/2/try-learn-modify
[4]:https://enterprisersproject.com/devops
[5]:https://www.fastcompany.com/3037542/productivity-hack-of-the-week-the-two-pizza-approach-to-productive-teamwork
[6]:https://opensource.com/open-organization/16/3/what-it-means-be-open-source-leader
[7]:https://opensource.com/open-organization/resources/newsletter

View File

@ -0,0 +1,157 @@
Download YouTube Videos in Linux Command Line
======
**Brief: Easily download YouTube videos in Linux using youtube-dl command line tool. With this tool, you can also choose video format and video quality such as 1080p or 4K.**
![](https://itsfoss.com/wp-content/uploads/2015/10/Download-YouTube-Videos.jpeg)
I know you have already seen [how to download YouTube videos][1]. But those tools were mostly GUI ways. I am going to show you how to download YouTube videos in Linux terminal using youtube-dl.
### Install youtube-dl to download YouTube videos in Linux terminal
[youtube-dl][2] is a small, Python-based command-line tool that allows downloading videos from [YouTube][3], [Dailymotion][4], Photobucket, Facebook, Yahoo, Metacafe, Depositfiles and a few more similar sites. It is written in Python and requires the Python interpreter to run, so it is not platform restricted. It should run on any Unix, Windows or Mac OS X based system.
The youtube-dl tool supports resuming interrupted downloads. If youtube-dl is killed (for example by Ctrl-C or due to loss of Internet connectivity) in the middle of the download, you can simply re-run it with the same YouTube video URL. It will automatically resume the unfinished download, as long as a partial download is present in the current directory. This means you don't need [download managers in Linux][5] just for resuming downloads.
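For example, if a download gets interrupted, you can simply re-run the same command from the same directory; the `-c` (`--continue`) flag can also be passed to explicitly force resuming of a partially downloaded file:
```
youtube-dl -c <video_url>
```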
#### youtube-dl features
This tiny tool has so many features that it won't be an exaggeration to call it the best YouTube downloader for Linux.
  * Download videos not only from YouTube but also from other popular video websites like Dailymotion, Facebook, etc.
  * Allows downloading videos in several available video formats such as MP4, WebM, etc.
  * You can also choose the quality of the video being downloaded. If the video is available in 4K, you can download it in 4K, 1080p, 720p, etc.
  * Automatic pause and resume of video downloads.
  * Allows bypassing YouTube geo-restrictions.
Downloading videos from websites could be against their policies. It's up to you whether you choose to download videos.
#### How to install youtube-dl
youtube-dl is a popular program and is available in the default repositories of most Linux distributions, if not all. You can use the standard way of installing packages in your distribution to install youtube-dl. I'll still show some commands for the sake of it.
If you are running Ubuntu-based Linux distribution, you can install it using this command:
```
sudo apt install youtube-dl
```
For other Linux distributions, you can quickly install youtube-dl on your system through the command line interface with:
```
sudo wget https://yt-dl.org/downloads/latest/youtube-dl -O/usr/local/bin/youtube-dl
```
After fetching the file, you need to set executable permission on the script so it can run properly.
```
sudo chmod a+rx /usr/local/bin/youtube-dl
```
#### Using YouTube-dl for downloading videos:
To download a video file, simply run the following command, where “VIDEO_URL” is the URL of the video that you want to download.
```
youtube-dl <video_url>
```
#### Download YouTube videos in various formats and quality size
These days YouTube videos come in different resolutions, so you first need to check the available video formats of a given YouTube video. For that, run youtube-dl with the “-F” option. It will show you a list of available formats.
```
youtube-dl -F <video_url>
```
Its output will be like:
```
 Setting language
BlXaGWbFVKY: Downloading video webpage
BlXaGWbFVKY: Downloading video info webpage
BlXaGWbFVKY: Extracting video information
Available formats:
37      :       mp4     [1080x1920]
46      :       webm    [1080x1920]
22      :       mp4     [720x1280]
45      :       webm    [720x1280]
35      :       flv     [480x854]
44      :       webm    [480x854]
34      :       flv     [360x640]
18      :       mp4     [360x640]
43      :       webm    [360x640]
5       :       flv     [240x400]
17      :       mp4     [144x176]
```
Now, among the available video formats, choose the one you like. For example, if you want to download the 1080p MP4 version, you should use:
```
youtube-dl -f 37 <video_url>
```
#### Download subtitles of videos using youtube-dl
First, check if there are subtitles available for the video. To list all subs for a video, use the command below:
```
youtube-dl --list-subs <video_url>
```
To download all subs, but not the video:
```
youtube-dl --all-subs --skip-download <video_url>
```
#### Download entire YouTube playlist
To download a playlist, simply run the following command, where “playlist_url” is the URL of the playlist that you want to download.
```
youtube-dl -cit <playlist_url>
```
#### Download only audio from YouTube videos
If you just want to download the audio from a YouTube video, you can use the -x option to simply extract the audio file from the video.
```
youtube-dl -x <video_url>
```
The default file format is Ogg which you may not like. You can specify the file format of the audio file in the following manner:
```
youtube-dl -x --audio-format mp3 <video_url>
```
#### And a lot more can be done with youtube-dl
youtube-dl is a versatile command line tool and provides a number of functionalities. No wonder it is such a popular command line tool.
I have only shown some of the most common usages of this tool. But if you want to explore its capabilities further, please check its [manual][6].
I hope this article helped you to download YouTube videos on Linux. If you have questions or suggestions, please drop a comment below.
Article updated with inputs from Abhishek Prakash.
--------------------------------------------------------------------------------
via: https://itsfoss.com/download-youtube-linux/
作者:[alimiracle][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://itsfoss.com/author/ali/
[1]:https://itsfoss.com/download-youtube-videos-ubuntu/
[2]:https://rg3.github.io/youtube-dl/
[3]:https://www.youtube.com/c/itsfoss/
[4]:https://www.dailymotion.com/
[5]:https://itsfoss.com/4-best-download-managers-for-linux/
[6]:https://github.com/rg3/youtube-dl/blob/master/README.md#readme

View File

@ -0,0 +1,202 @@
How to Install Docker CE on Your Desktop
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/containers-volumes_0.jpg?itok=gv0_MXiZ)
[In the previous article,][1] we learned some of the basic terminologies of the container world. That background information will come in handy when we run commands and use some of those terms in follow-up articles, including this one. This article will cover the installation of Docker on desktop Linux, macOS, and Windows, and it is intended for beginners who want to get started with Docker containers. The only prerequisite is that you are comfortable with the command-line interface.
### Why do I need Docker CE on my local machine?
As a new user, you may wonder why you need containers on your local systems. Aren't they meant to run in the cloud and on servers as microservices? While containers have been part of the Linux world for a very long time, it was Docker that made them really consumable with its tools and technologies.
The greatest thing about Docker containers is that you can use your local machine for development and testing. The container images that you create on your local system can then run “anywhere.” There is no conflict between developers and operators about apps running fine on development systems but not in production.
The point is that in order to create containerized applications, you must be able to run and create containers on your local systems.
You can use any of the three platforms -- desktop Linux, Windows, or macOS -- as the development platform for containers. Once Docker is successfully running on these systems, you will be using the same commands across platforms, so it really doesn't matter which OS you are running underneath.
That's the beauty of Docker.
### Let's get started
There are two editions of Docker: Docker Enterprise Edition (EE) and Docker Community Edition (CE). We will be using the Docker Community Edition, which is a free-of-cost version of Docker intended for developers and enthusiasts who want to get started with Docker.
There are two channels of Docker CE: stable and edge. As the name implies, the stable version gives you well-tested quarterly updates, whereas the edge version offers new updates every month. After further testing, these edge features are added to the stable release. I recommend the stable version for new users.
Docker CE is supported on macOS, Windows 10, Ubuntu 14.04, 16.04, 17.04 and 17.10; Debian 7.7, 8, 9 and 10; Fedora 25, 26, 27; and CentOS. While you can download Docker CE binaries and install on your desktop Linux systems, I recommend adding repositories so you continue to receive patches and updates.
### Install Docker CE on Desktop Linux
You don't need a full-blown desktop Linux to run Docker; you can install it on a bare minimal Linux server as well, which you can run in a VM. In this tutorial, I am running it on Fedora 27 and Ubuntu 17.04 on my main systems.
### Ubuntu Installation
First things first. Run a system update so your Ubuntu packages are fully updated:
```
$ sudo apt-get update
```
Now run system upgrade:
```
$ sudo apt-get dist-upgrade
```
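Before adding the repository, import Docker's official GPG key so apt can verify the packages (the key URL below is the standard one Docker publishes for Ubuntu; adjust it if your setup differs):
```
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```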
Then add the Docker repository:
```
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
```
Update the repository info again:
```
$ sudo apt-get update
```
Now install Docker CE:
```
$ sudo apt-get install docker-ce
```
Once it's installed, Docker CE runs automatically on Ubuntu-based systems. Let's check if it's running:
```
$ sudo systemctl status docker
```
You should get the following output:
```
docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2017-12-28 15:06:35 EST; 19min ago
Docs: https://docs.docker.com
Main PID: 30539 (dockerd)
```
Since Docker is installed on your system, you can now use the Docker CLI (Command Line Interface) to run Docker commands. Living up to the tradition, let's run the Hello World command:
```
$ sudo docker run hello-world
```
![YMChR_7xglpYBT91rtXnqQc6R1Hx9qMX_iO99vL8][2]
Congrats! You have Docker running on your Ubuntu system.
### Installing Docker CE on Fedora
Things are a bit different on Fedora 27. On Fedora, you first need to install the dnf-plugins-core package, which will allow you to manage your DNF packages from the CLI.
```
$ sudo dnf -y install dnf-plugins-core
```
Now install the Docker repo on your system:
```
$ sudo dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo
```
It's time to install Docker CE:
```
$ sudo dnf install docker-ce
```
Unlike Ubuntu, Docker doesn't start automatically on Fedora. So let's start it:
```
$ sudo systemctl start docker
```
You will have to start Docker manually after each reboot, so let's configure it to start automatically after reboots with `sudo systemctl enable docker`. Well, it's time to run the Hello World command:
```
$ sudo docker run hello-world
```
Congrats, Docker is running on your Fedora 27 system.
### Cutting your roots
You may have noticed that you have to use sudo to run Docker commands. That's because the Docker daemon binds to a UNIX socket instead of a TCP port, and that socket is owned by the root user. So, you need sudo privileges to run the docker command. You can add your system user to the docker group so it won't require sudo:
```
$ sudo groupadd docker
```
In most cases, the docker user group is automatically created when you install Docker CE, so all you need to do is add your user to that group:
```
$ sudo usermod -aG docker $USER
```
To test if the group has been added successfully, run the groups command against the name of the user:
```
$ groups swapnil
```
(Here, Swapnil is the user.)
This is the output on my system:
```
swapnil : swapnil adm cdrom sudo dip plugdev lpadmin sambashare docker
```
You can see that the user also belongs to the docker group. Log out of your system, so that group changes take effect. Once you log back in, try the Hello World command without sudo:
```
$ docker run hello-world
```
You can check system wide info about the installed version of Docker and more by running this command:
```
$ docker info
```
### Install Docker CE on macOS and Windows
You can easily install Docker CE (and EE) on macOS and Windows. Download the official Docker for Mac and install it the way you install applications on macOS, by simply dragging it into the Applications directory. Once the file is copied, open Docker from Spotlight to start the installation process. Once installed, Docker will start automatically and you can see it in the top bar of macOS.
![IEX23j65zYlF8mZ1c-T_vFw_i1B1T1hibw_AuhEA][3]
macOS is UNIX, so you can simply open the terminal app and start using Docker commands natively. Test the hello world app:
```
$ docker run hello-world
```
Congrats, you have Docker running on your macOS.
### Docker on Windows 10
You need the latest version of Windows 10 Pro or Server in order to run/install Docker on it. If you are not fully updated, Windows won't install Docker. I got an error on my Windows 10 system and had to run system updates. My version was still behind, and I hit [this][4] bug. So, if you fail to install Docker on Windows, just know you are not alone. Keep an eye on that bug to find a solution.
Once you install Docker on Windows, you can either use bash shell via WSL or use PowerShell to run docker commands. Let's test the “Hello World” command in PowerShell:
```
PS C:\Users\swapnil> docker run hello-world
```
Congrats, you have Docker running on Windows.
In the next article, we will talk about pulling images from DockerHub and running containers on our systems. We will also talk about pushing our own containers to Docker Hub.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop
作者:[SWAPNIL BHARTIYA][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/arnieswap
[1]:https://www.linux.com/blog/intro-to-linux/2017/12/container-basics-terms-you-need-know
[2]:https://lh5.googleusercontent.com/YMChR_7xglpYBT91rtXnqQc6R1Hx9qMX_iO99vL8Z8C0-BlynDcL5B5pG-zzH0fKU0Qvnzd89v0KDEbZiO0gTfGNGfDtO-FkTt0bmzIQ-TKbNmv18S9RXdkSeXqgKDFRewnaHPj2
[3]:https://lh3.googleusercontent.com/IEX23j65zYlF8mZ1c-T_vFw_i1B1T1hibw_AuhEAfwv9oFpMfcAqkgEk7K5o58iDAAfGozSpIvY_qEsTOHRlSbesMKwTnG9rRkWba1KPSmnuH1LyoccDGNO3Clbz8du0gSByZxNj
[4]:https://github.com/docker/for-win/issues/1263

View File

@ -1,76 +0,0 @@
translating---geekpi
How to find files in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
If you're a Windows user or a non-power-user of OSX, you probably use a GUI to find files. You may also find the interface limited, frustrating, or both, and have learned to excel at organizing things and remembering the exact order of your files. You can do that in Linux, too—but you don't have to.
One of the best things about Linux is that it offers a variety of ways to do things. You can open any file manager and `ctrl`+`f`, you can use the program you are in to open files manually, or you can simply start typing letters and it'll filter the current directory listing.
![Screenshot of how to find files in Linux with Ctrl+F][2]
Screenshot of how to find files in Linux with Ctrl+F
But what if you don't know where your file is and don't want to search the entire disk? Linux is well-tooled for this and a variety of other use-cases.
### Finding program locations by command name
The Linux file system can seem daunting if you're used to putting things wherever you like. For me, one of the hardest things to get used to was finding where programs are supposed to live.
For example, `which bash` will usually return `/bin/bash`, but if you download a program and it doesn't appear in your menus, the `which` command can be a great tool.
A similar utility is the `locate` command, which I find useful for finding configuration files. I don't like typing in program names because simple ones like `locate php` often offer many results that need to be filtered further.
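One way to narrow things down is to pipe the output through another filter; for instance (the `php` pattern here is only illustrative):
```
locate php | grep -i conf
```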
For more information about `locate` and `which`, see the `man` pages:
* `man which`
* `man locate`
### Find
The `find` utility offers much more advanced functionality. Below is an example from a script I've installed on a number of servers that I administer to ensure that a specific pattern of file (also known as a glob) exists for only five days and all files older than that are deleted. (Since its last modification, a decimal is used to account for up to 240 minutes difference.)
```
find ./backup/core-files*.tar.gz -mtime +4.9 -exec rm {} \;
```
The `find` utility has many advanced use-cases, but most common is executing commands on results without chaining and filtering files by type, creation, and modification date.
Another interesting use of `find` is to find all files with executable permissions. This can help ensure that nobody is installing bitcoin miners or botnets on your expensive servers.
```
find / -perm /+x
```
For more information on `find`, see the `man` page using `man find`.
### Grep
Want to find a file by its contents? Linux has it covered. You can use many Linux utilities to efficiently search for files that match a pattern, but `grep` is one that I use often.
Suppose you have an application that's delivering error messages with a code reference and stack trace. You find these in your logs. Grepping is not always the go-to, but I always `grep -R` if the issue is with a supplied value.
An increasing number of IDEs are implementing find functions, but if you're accessing a remote system or for whatever reason don't have a GUI, or if you want to iterate in place, then use `grep -R {searchterm}`; or, on systems supporting the `egrep` alias, use `egrep -r {regex-pattern}` for regular-expression searches.
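For example, if an application logs a specific error code, you can search a whole directory tree for it (the path and search string here are just placeholders):
```
grep -R "ERR-1234" /var/log/myapp/
```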
I used this technique when patching the `dhcpcd5` in [Raspbian][3] last year so I could continue to operate a network access point on newer Debian releases from the [Raspberry Pi Foundation][4].
What tips help you search for files more efficiently on Linux?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/how-find-files-linux
作者:[Lewis Cowles][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/lewiscowles1986
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/find-files-in-linux-ctrlf.png?itok=1gf9kIut (Screenshot of how to find files in Linux with Ctrl+F)
[3]:https://www.raspbian.org/
[4]:https://www.raspberrypi.org/

View File

@ -1,150 +0,0 @@
12 Git tips for Git's 12th birthday
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/party_anniversary_flag_birthday_celebrate.jpg?itok=KqfMENa7)
[Git][1], the distributed revision-control system that's become the default tool for source code control in the open source world, turns 12 on April 7. One of the more frustrating things about using Git is how much you need to know to use it effectively. This can also be one of the more awesome things about using Git, because there's nothing quite like discovering a new tip or trick that can streamline or improve your workflow.
In honor of Git's 12th birthday, here are 12 tips and tricks to make your Git experience more useful and powerful, starting with some basics you might have overlooked and scaling up to some real power-user tricks!
### 1\. Your ~/.gitconfig file
The first time you tried to use the `git` command to commit a change to a repository, you might have been greeted with something like this:
```
*** Please tell me who you are.
Run
  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"
to set your account's default identity.
```
What you might not have realized is that those commands are modifying the contents of `~/.gitconfig`, which is where Git stores global configuration options. There are a vast array of things you can do via your `~/.gitconfig` file, including defining aliases, turning particular command options on (or off!) on a permanent basis, and modifying aspects of how Git works (e.g., which diff algorithm `git diff` uses or what type of merge strategy is used by default). You can even conditionally include other config files based on the path to a repository! See `man git-config` for all the details.
### 2\. Your repo's .gitconfig file
In the previous tip, you may have wondered what that `--global` flag on the `git config` command was doing. It tells Git to update the "global" configuration, the one found in `~/.gitconfig`. Of course, having a global config also implies a local configuration, and sure enough, if you omit the `--global` flag, `git config` will instead update the repository-specific configuration, which is stored in `.git/config`.
Options that are set in the `.git/config` file will override any setting in the `~/.gitconfig` file. So, for example, if you need to use a different email address for a particular repository, you can run `git config user.email "also_you@example.com"`. Then, any commits in that repository will use your other email address. This can be super useful if you work on open source projects from a work laptop and want them to show up with a personal email address while still using your work email for your main Git configuration.
Pretty much anything you can set in `~/.gitconfig`, you can also set in `.git/config` to make it specific to the given repository. In any of the following tips, when I mention adding something to your `~/.gitconfig`, just remember you could also set that option for just one repository by adding it to `.git/config` instead.
### 3\. Aliases
Aliases are another thing you can put in your `~/.gitconfig`. These work just like aliases in the command shell—they set up a new command name that can invoke one or more other commands, often with a particular set of options or flags. They're super useful for longer, more complicated commands you use frequently.
You can define aliases using the `git config` command—for example, running `git config --global --add alias.st status` will make running `git st` do the same thing as running `git status`—but I find when defining aliases, it's frequently easier to just edit the `~/.gitconfig` file directly.
If you choose to go this route, you'll find that the `~/.gitconfig` file is an [INI file][2]. INI is basically a key-value file format with particular sections. When adding an alias, you'll be changing the `[alias]` section. For example, to define the same `git st` alias as above, add this to the file:
```
[alias]
st = status
```
(If there's already an `[alias]` section, just add the second line to that existing section.)
### 4\. Aliases to shell commands
Aliases aren't limited to just running other Git subcommands—you can also define aliases that run other shell commands. This is a fantastic way to deal with a recurring, infrequent, and complicated task: Once you've figured out how to do it once, preserve the command under an alias. For example, I have a few repositories where I've forked an open source project and made some local modifications that don't need to be contributed back to the project. I want to keep up-to-date with ongoing development work in the project but also maintain my local changes. To accomplish this, I need to periodically merge the changes from the upstream repo into my fork—which I do by using an alias I call `upstream-merge`. It's defined like this:
```
upstream-merge = !"git fetch origin -v && git fetch upstream -v && git merge upstream/master && git push"
```
The `!` at the beginning of the alias definition tells Git to run the command via the shell. This example involves running a number of `git` commands, but aliases defined in this way can run any shell command.
(Note that if you want to copy my `upstream-merge` alias, you'll need to make sure you have a Git remote named `upstream` pointed at the upstream repository you've forked from. You can add this by running `git remote add upstream <URL to repo>`.)
### 5\. Visualizing the commit graph
If you work on a project with a lot of branching activity, sometimes it can be difficult to get a handle on all the work that's happening and how it's all related. Various GUI tools allow you to get a picture of different branches and commits in what's called the "commit graph." For example, here's a section of one of my repositories visualized with the [GitLab][3] commit graph viewer:
![GitLab commit graph viewer][5]
John Anderson, CC BY
If you're a dedicated command-line user or somebody who finds switching tools to be distracting, it's nice to get a similar view of the commit graph from the command line. That's where the `--graph` argument to the `git log` command comes in:
![Repository visualized with --graph command][7]
John Anderson, CC BY
This is the same section of the same repo visualized with the following command:
```
git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative
```
The `--graph` option adds the graph to the left side of the log, `--abbrev-commit` shortens the commit [SHAs][8], `--date=relative` expresses the dates in relative terms, and the `--pretty` bit handles all the other custom formatting. I have this aliased to `git lg`, and it is one of my top 10 most frequently run commands.
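If you'd like the same shortcut, you can define the alias yourself with the exact command shown above:
```
git config --global alias.lg "log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative"
```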
### 6\. A nicer force-push
Sometimes, as hard as you try to avoid it, you'll find that you need to run `git push --force` to overwrite the history on a remote copy of your repository. You may have gotten some feedback that caused you to do an interactive rebase, or you may simply have messed up and want to hide the evidence.
One of the hazards with force pushes happens when somebody else has made changes on top of the same branch in the remote copy of the repository. When you force-push your rewritten history, those commits will be lost. This is where `git push --force-with-lease` comes in—it will not allow you to force-push if the remote branch has been updated, which will ensure you don't throw away someone else's work.
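The usage is the same as a plain force-push; for example (the branch name here is just an example):
```
git push --force-with-lease origin my-feature-branch
```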
### 7\. git add -N
Have you ever used `git commit -a` to stage and commit all your outstanding changes in a single move, only to discover after you've pushed your commit that `git commit -a` ignores newly added files? You can work around this by using the `git add -N` (think "notify") to tell Git about newly added files you'd like to be included in commits before you actually commit them for the first time.
### 8\. git add -p
A best practice when using Git is to make sure each commit consists of only a single logical change—whether that's a fix for a bug or a new feature. Sometimes when you're working, however, you'll end up with more than one commit's worth of change in your repository. How can you manage to divide things up so that each commit contains only the appropriate changes? `git add --patch` to the rescue!
This flag will cause the `git add` command to look at all the changes in your working copy and, for each one, ask if you'd like to stage it to be committed, skip over it, or defer the decision (as well as a few other more powerful options you can see by selecting `?` after running the command). `git add -p` is a fantastic tool for producing well-structured commits.
### 9\. git checkout -p
Similar to `git add -p`, the `git checkout` command will take a `--patch` or `-p` option, which will cause it to present each "hunk" of change in your local working copy and allow you to discard it—basically reverting your local working copy to what was there before your change.
This is fantastic when, for example, you've introduced a bunch of debug logging statements while chasing down a bug. After the bug is fixed, you can first use `git checkout -p` to remove all the new debug logging, then you `git add -p` to add the bug fix. Nothing is more satisfying than putting together a beautiful, well-structured commit!
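A typical cleanup session might look like this (the commit message is only an example):
```
git checkout -p   # interactively discard the leftover debug-logging hunks
git add -p        # interactively stage only the hunks that make up the fix
git commit -m "Fix pagination off-by-one"
```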
### 10\. Rebase with command execution
Some projects have a rule that each commit in the repository must be in a working state—that is, at each commit, it should be possible to compile the code or the test suite should run without failure. This is not too difficult when you're working on a branch over time, but if you end up needing to rebase for whatever reason, it can be a little tedious to step through each rebased commit to make sure you haven't accidentally introduced a break.
Fortunately, `git rebase` has you covered with the `-x` or `--exec` option. `git rebase -x <cmd>` will run that command after each commit is applied in the rebase. So, for example, if you have a project where `npm run tests` runs your test suite, `git rebase -x npm run tests` would run the test suite after each commit was applied during the rebase. This allows you to see if the test suite fails at any of the rebased commits so you can confirm that the test suite is still passing at each commit.
### 11\. Time-based revision references
Many Git subcommands take a revision argument to specify what part of the repository to work on. This can be the SHA1 of a particular commit, a branch name, or even a symbolic name like `HEAD` (which refers to the most recent commit on the currently checked out branch). In addition to these simple forms, you can also append a specific date or time to mean "this reference, at this time."
This becomes very useful when you're dealing with a newly introduced bug and find yourself saying, "I know this worked yesterday! What changed?" Instead of staring at the output of `git log` trying to figure out what commit was changed when, you can simply run `git diff HEAD@{yesterday}`, and see all the changes that have happened since then. This also works with longer time periods (e.g., `git diff HEAD@{'2 months ago'}`) as well as exact dates (e.g., `git diff HEAD@{'2010-01-01 12:00:00'}`).
You can also use these date-based revision arguments with any Git subcommand that takes a revision argument. Find full details about which format to use in the man page for `gitrevisions`.
### 12\. The all-seeing reflog
Have you ever rebased away a commit, then discovered there was something in that commit you wanted to keep? You may have thought that information was lost forever and would need to be recreated. But if you committed it in your local working copy, it was added to the reference log (reflog), and you should still be able to access it.
Running `git reflog` will show you a list of all the activity for the current branch in your local working copy and also give you the SHA1 of each commit. Once you've found the commit you rebased away, you can run `git checkout <SHA1>` to check out that commit, copy any information you need, and run `git checkout HEAD` to return to the most recent commit in the branch.
### That's all folks!
Hopefully at least one of these tips has taught you something new about Git, a 12-year-old project that's continuing to innovate and add new features. What's your favorite Git trick?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/12-git-tips-gits-12th-birthday
作者:[John SJ Anderson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/genehack
[1]:https://git-scm.com/
[2]:https://en.wikipedia.org/wiki/INI_file
[3]:https://gitlab.com/
[4]:/file/392941
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gui_graph.png?itok=3GovYfG1 (GitLab commit graph viewer)
[6]:/file/392936
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/console_graph.png?itok=XogY1P8M (Repository visualized with --graph command)
[8]:https://en.wikipedia.org/wiki/Secure_Hash_Algorithms

View File

@ -1,3 +1,5 @@
translating---geekpi
The Shuf Command Tutorial With Examples For Beginners
======

View File

@ -0,0 +1,171 @@
How To Setup Static File Server Instantly
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/serve-720x340.png)
Ever wanted to share your files or project over the network, but don't know how? No worries! Here is a simple utility named **“serve”** that shares your files instantly over the network. This simple utility instantly turns your system into a static file server, allowing you to serve your files over the network. You can access the files from any device, regardless of its operating system. All you need is a web browser. This utility can also be used to serve static websites. It was formerly known as “list” and “micro-list”, but the name has been changed to “serve”, which is much more suitable for the purpose of this utility.
### Setup Static File Server Using Serve
To install “serve”, you need to install NodeJS and NPM first. Refer to the following link to install NodeJS and NPM on your Linux box.
Once NodeJS and NPM are installed, run the following command to install “serve”.
```
$ npm install -g serve
```
Done! Now is the time to serve the files or folders.
The typical syntax to use “serve” is:
```
$ serve [options] <path-to-files-or-folders>
```
### Serve Specific files or folders
For example, let us share the contents of the **Documents** directory. To do so, run:
```
$ serve Documents/
```
Sample output would be:
![][2]
As you can see in the above screenshot, the contents of the given directory have been served over network via two URLs.
To access the contents from the local system itself, all you have to do is open your web browser and navigate to **<http://localhost:5000/>** URL.
![][3]
The Serve utility displays the contents of the given directory in a simple layout. You can download (right click on the files and choose “Save link as..”) or just view them in the browser.
If you want to open local address automatically in the browser, use **-o** flag.
```
$ serve -o Documents/
```
Once you run the above command, The Serve utility will open your web browser automatically and display the contents of the shared item.
Similarly, to access the shared directory from a remote system over the network, type **<http://192.168.43.192:5000>** in the browser's address bar. Replace 192.168.43.192 with your system's IP.
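If you are not sure what your system's IP address is, you can look it up with, for example:
```
$ ip addr show
```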
**Serve contents via different port**
As you may have noticed, the serve utility uses port **5000** by default. So, make sure port 5000 is allowed in your firewall or router. If it is blocked for some reason, you can serve the contents using a different port with the **-p** flag.
```
$ serve -p 1234 Documents/
```
The above command will serve the contents of Documents directory via port **1234**.
![][4]
To serve a file, instead of a folder, just give its full path like below.
```
$ serve Documents/Papers/notes.txt
```
The contents of the shared directory can be accessed by any user on the network as long as they know the path.
**Serve the entire $HOME directory**
Open your Terminal and type:
```
$ serve
```
This will share the contents of your entire $HOME directory over network.
To stop the sharing, press **CTRL+C**.
**Serve selective files or folders**
You may not want to share all files or directories, but only a few in a directory. You can do this by excluding the files or directories using the **-i** flag.
```
$ serve -i Downloads/
```
The above command will serve the entire file system except the **Downloads** directory.
**Serve contents only on localhost**
Sometimes, you may want to serve the contents only on the local system itself, not on the entire network. To do so, use the **-l** flag as shown below:
```
$ serve -l Documents/
```
This command will serve the **Documents** directory only on localhost.
![][5]
This can be useful when you're working on a shared server. All users on the system can access the share, but remote users cannot.
**Serve content using SSL**
Since we serve the contents over the local network, we don't really need SSL. However, the Serve utility can serve the contents over SSL using the **--ssl** flag.
```
$ serve --ssl Documents/
```
![][6]
To access the shares via a web browser, use <https://localhost:5000> or <https://ip:5000>.
![][7]
**Serve contents with authentication**
In all the above examples, we served the contents without any authentication, so anyone on the network can access them. You might feel that some contents should only be accessible with a username and password.
To do so, use:
```
$ SERVE_USER=ostechnix SERVE_PASSWORD=123456 serve --auth
```
Now the users need to enter the username (i.e **ostechnix** in our case) and password (123456) to access the shares.
![][8]
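These options can also be combined. For instance, a hedged sketch (the credentials, port, and directory below are purely illustrative) that serves a directory with authentication, over SSL, on a custom port might look like this:
```
$ SERVE_USER=ostechnix SERVE_PASSWORD=123456 serve --auth --ssl -p 1234 Documents/
```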
The Serve utility has some other features, such as disabling [**Gzip compression**][9], setting CORS headers to allow requests from any origin, preventing the address from being automatically copied to the clipboard, etc. You can read the complete help section by running the following command:
```
$ serve help
```
And, that's all for now. Hope this helps. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-setup-static-file-server-instantly/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-4.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-3.png
[6]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-6.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-5-1.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-7-1.png
[9]:https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/

View File

@ -0,0 +1,147 @@
A Desktop GUI Application For NPM
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/ndm-3-720x340.png)
NPM, short for **N**ode **P**ackage **M**anager, is a command-line package manager for installing NodeJS packages, or modules. We have already published a guide that describes how to [**manage NodeJS packages using NPM**][1]. As you may have noticed, managing NodeJS packages or modules using NPM is not a big deal. However, if you're not comfortable with the CLI way, there is a desktop GUI application named **NDM** which can be used for managing NodeJS applications/modules. NDM, which stands for **N**PM **D**esktop **M**anager, is a free, open source graphical front-end for NPM that allows us to install, update, and remove NodeJS packages via a simple graphical window.
In this brief tutorial, we are going to learn how to use NDM in Linux.
### Install NDM
NDM is available in AUR, so you can install it using any AUR helpers on Arch Linux and its derivatives like Antergos and Manjaro Linux.
Using [**Pacaur**][2]:
```
$ pacaur -S ndm
```
Using [**Packer**][3]:
```
$ packer -S ndm
```
Using [**Trizen**][4]:
```
$ trizen -S ndm
```
Using [**Yay**][5]:
```
$ yay -S ndm
```
Using [**Yaourt**][6]:
```
$ yaourt -S ndm
```
On RHEL based systems like CentOS, run the following command to install NDM.
```
$ echo "[fury]
name=ndm repository
baseurl=https://repo.fury.io/720kb/
enabled=1
gpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update && sudo yum install ndm
```
On Debian, Ubuntu, Linux Mint:
```
$ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm
```
NDM can also be installed using **Linuxbrew**. First, install Linuxbrew as described in the following link.
After installing Linuxbrew, you can install NDM using the following commands:
```
$ brew update
$ brew install ndm
```
On other Linux distributions, go to the [**NDM releases page**][7], download the latest version, compile and install it yourself.
### NDM Usage
Launch NDM either from the menu or using the application launcher. This is how NDM's default interface looks.
![][9]
From here, you can install NodeJS packages/modules either locally or globally.
**Install NodeJS packages locally**
To install a package locally, first choose project directory by clicking on the **“Add projects”** button from the Home screen and select the directory where you want to keep your project files. For example, I have chosen a directory named **“demo”** as my project directory.
Click on the project directory (i.e **demo** ) and then, click **Add packages** button.
![][10]
Type the package name you want to install and hit the **Install** button.
![][11]
Once installed, the packages will be listed under the projects directory. Simply click on the directory to view the list of installed packages locally.
![][12]
Similarly, you can create separate project directories and install NodeJS modules in them. To view the list of installed modules in a project, click on the project directory, and you will see the packages on the right side.
**Install NodeJS packages globally**
To install NodeJS packages globally, click on the **Globals** button on the left from the main interface. Then, click “Add packages” button, type the name of the package and hit “Install” button.
**Manage packages**
Click on any installed packages and you will see various options on the top, such as
1. Version (to view the installed version),
2. Latest (to install latest available version),
3. Update (to update the currently selected package),
4. Uninstall (to remove the selected package) etc.
![][13]
NDM has two more options namely **“Update npm”** which is used to update the node package manager to latest available version, and **Doctor** that runs a set of checks to ensure that your npm installation has what it needs to manage your packages/modules.
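If you are curious what these buttons roughly correspond to on the command line, the plain npm equivalents are presumably something like the following (shown here only for comparison; NDM itself may invoke them differently):
```
# update the node package manager itself to the latest version
$ npm install -g npm@latest

# run npm's built-in environment checks
$ npm doctor
```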
### Conclusion
NDM makes the process of installing, updating, and removing NodeJS packages easier! You don't need to memorize the commands to perform those tasks. NDM lets us do them all with a few mouse clicks via a simple graphical window. For those who would rather not type commands, NDM is a perfect companion for managing NodeJS packages.
Cheers!
**Resource:**
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
[7]:https://github.com/720kb/ndm/releases
[8]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-1.png
[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-5-1.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-6.png
[12]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-7.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-8.png

View File

@ -0,0 +1,80 @@
A new approach to security instrumentation
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx)
How many of us have ever uttered the following phrase: “I hope this works!”?
Without a doubt, most of us have, likely more than once. Its not a phrase that inspires confidence, as it reveals doubts about our abilities or the functionality of whatever we are testing. Unfortunately, this very phrase defines our traditional security model all too well. We operate based on the assumption and the hope that the controls we put in place—from vulnerability scanning on web applications to anti-virus on endpoints—prevent malicious actors and software from entering our systems and damaging or stealing our information.
Penetration testing took a step to combat relying on assumptions by actively trying to break into the network, inject malicious code into a web application, or spread “malware” by sending out phishing emails. Composed of finding and poking holes in our different security layers, pen testing fails to account for situations in which holes are actively opened. In security experimentation, we intentionally create chaos in the form of controlled, simulated incident behavior to objectively instrument our ability to detect and deter these types of activities.
> “Security experimentation provides a methodology for the experimentation of the security of distributed systems to build confidence in the ability to withstand malicious conditions.”
When it comes to security and complex distributed systems, a common adage in the chaos engineering community reiterates that “hope is not an effective strategy.” How often do we proactively instrument what we have designed or built to determine if the controls are failing? Most organizations do not discover that their security controls are failing until a security incident results from that failure. We believe that “Security incidents are not detective measures” and “Hope is not an effective strategy” should be the mantras of IT professionals operating effective security practices.
The industry has traditionally emphasized preventative security measures and defense-in-depth, whereas our mission is to drive new knowledge and insights into the security toolchain through detective experimentation. With so much focus on the preventative mechanisms, we rarely attempt beyond one-time or annual pen testing requirements to validate whether or not those controls are performing as designed.
With all of these constantly changing, stateless variables in modern distributed systems, it becomes next to impossible for humans to adequately understand how their systems behave, as this can change from moment to moment. One way to approach this problem is through robust systematic instrumentation and monitoring. For instrumentation in security, you can break down the domain into two primary buckets: **testing** , and what we call **experimentation**. Testing is the validation or assessment of a previously known outcome. In plain terms, we know what we are looking for before we go looking for it. On the other hand, experimentation seeks to derive new insights and information that was previously unknown. While testing is an important practice for mature security teams, the following example should help further illuminate the differences between the two, as well as provide a more tangible depiction of the added value of experimentation.
### Example scenario: Craft beer delivery
Consider a simple web service or web application that takes orders for craft beer deliveries.
This is a critical service for this craft beer delivery company, whose orders come in from its customers' mobile devices, the web, and via its API from restaurants that serve its craft beer. This critical service runs in the company's AWS EC2 environment and is considered by the company to be secure. The company passed its PCI compliance with flying colors last year and annually performs third-party penetration tests, so it assumes that its systems are secure.
This company also prides itself on its DevOps and continuous delivery practices by deploying sometimes twice in the same day.
After learning about chaos engineering and security experimentation, the company's development teams want to determine, on a continuous basis, how resilient and effective its security systems are to real-world events, and furthermore, to ensure that they are not introducing new problems into the system that the security controls are not able to detect.
The team wants to start small by evaluating port security and firewall configurations for their ability to detect, block, and alert on misconfigured changes to the port configurations on their EC2 security groups.
* The team begins by performing a summary of their assumptions about the normal state.
* Develops a hypothesis for port security in their EC2 instances
* Selects and configures the YAML file for the Unauthorized Port Change experiment.
* This configuration would designate the objects to randomly select from for targeting, as well as the port ranges and number of ports that should be changed.
* The team also configures when to run the experiment and shrinks the scope of its blast radius to ensure minimal business impact.
* For this first test, the team has chosen to run the experiment in their stage environments and run a single run of the test.
  * In true Game Day style, the team has elected a Master of Disaster to run the experiment during a predefined two-hour window. During that window of time, the Master of Disaster will execute the experiment on one of the EC2 Instance Security Groups (a rough sketch of such a change appears after this list).
* Once the Game Day has finished, the team begins to conduct a thorough, blameless post-mortem exercise where the focus is on the results of the experiment against the steady state and the original hypothesis. The questions would be something similar to the following:
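As a rough, hypothetical illustration of the kind of change the experiment introduces (this is not the actual experiment tooling, and the security group ID and port below are made up), a manual version of the unauthorized port change could be injected and later reverted with the AWS CLI:
```
# open an unauthorized port on a staging security group (placeholder ID)
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8080 --cidr 0.0.0.0/0

# revert the change once the observation window is over
$ aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8080 --cidr 0.0.0.0/0
```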
### Post-mortem questions
* Did the firewall detect the unauthorized port change?
* If the change was detected, was it blocked?
* Did the firewall report log useful information to the log aggregation tool?
* Did the SIEM throw an alert on the unauthorized change?
* If the firewall did not detect the change, did the configuration management tool discover the change?
* Did the configuration management tool report good information to the log aggregation tool?
* Did the SIEM finally correlate an alert?
* If the SIEM threw an alert, did the Security Operations Center get the alert?
* Was the SOC analyst who got the alert able to take action on the alert, or was necessary information missing?
* If the SOC alert determined the alert to be credible, was Security Incident Response able to conduct triage activities easily from the data?
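One concrete way to start answering some of these questions on AWS (again only a hypothetical sketch, not part of the original methodology) is to check whether the change shows up in the audit trail at all, for example with CloudTrail:
```
# look for recent security group ingress changes in the CloudTrail event history
$ aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=AuthorizeSecurityGroupIngress
```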
The acknowledgment and anticipation of failure in our systems have already begun unraveling our assumptions about how our systems work. Our mission is to take what we have learned and apply it more broadly to begin to truly address security weaknesses proactively, going beyond the reactive processes that currently dominate traditional security models.
As we continue to explore this new domain, we will be sure to post our findings. For those interested in learning more about the research or getting involved, please feel free to contact [Aaron Rinehart][1] or [Grayson Brewer][2].
Special thanks to Samuel Roden for the insights and thoughts provided in this article.
**[See our related story,[Is the term DevSecOps necessary?][3]]**
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/new-approach-security-instrumentation
作者:[Aaron Rinehart][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/aaronrinehart
[1]:https://twitter.com/aaronrinehart
[2]:https://twitter.com/BrewerSecurity
[3]:https://opensource.com/article/18/4/devsecops

View File

@ -0,0 +1,352 @@
Getting started with Jenkins Pipelines
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe)
Jenkins is a well-known open source continuous integration and continuous delivery automation tool. It has an excellent supporting community and hundreds of plugins, and developers have been using it for years.
This article will provide a brief guide on how to get started with Pipelines and multibranch pipelines.
### Why pipelines?
* Developers can automate the integration, testing, and deployment of their code, going from source code to product consumers many times using one tool.
  * Pipelines "as code," known as Jenkinsfiles, can be saved in any source control system. In previous Jenkins versions, jobs were only configured using the UI. With Jenkinsfiles, pipelines are more maintainable and portable.
* Multi-branch pipelines integrate with Git so that different branches, features, and releases can have independent pipelines enabling each developer to customize their development/deployment process.
* Non-technical members of a team can trigger and customize builds using parameters, analyze test reports, receive email alerts and have a better understanding of the build and deployment process through the pipeline stage view (improved in latest versions with the Blue Ocean UI).
  * Jenkins can also be [installed using Docker][1] and pipelines can interact with [Docker agents][2]; a minimal example of running Jenkins in Docker is sketched right after this list.
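For reference, a common way to try this out is with the LTS image published on Docker Hub (a sketch, not a requirement of this how-to; the container and volume names are arbitrary):
```
# run Jenkins LTS in a container and persist its home directory in a named volume
$ docker run -d --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    jenkins/jenkins:lts
```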
### Requirements:
* [Jenkins 2.89.2][3] (WAR) with Java 8 is the version used in this how-to
* Plugins used (To install: `Manage Jenkins → Manage Plugins →Available`):
* Pipeline: [declarative][4]
* [Blue Ocean][5]
* [Cucumber reports][6]
## Getting started with Jenkins Pipelines
If you have not used Jenkins Pipelines before, I recommend [reading the documentation][7] before getting started here, as it includes a complete description and introduction to the technology as well as the benefits of using it.
This is the Jenkinsfile I used (you can also access this [code][8] on GitHub):
```
pipeline {
    agent any
    stages {
        stage('testing pipeline') {
            steps {
                echo 'test1'
                sh 'mkdir from-jenkins'
                sh 'touch from-jenkins/test.txt'
            }
        }
    }
}
```
1\. Click on **New Item**.
2\. Name the project, select **Pipeline** , and click **OK**.
3\. The configuration page displays once the project is created. In the **Definition** segment, you must decide to either obtain the Jenkinsfile from source control management (SCM) or create the Pipeline script in Jenkins. Hosting the Jenkinsfile in SCM is recommended so that it is portable and maintainable.
The SCM I chose was Git with simple user and pass credentials (SSH can also be used). By default, Jenkins will look for a Jenkinsfile in that repository unless it's specified otherwise in the **Script Path** directory.
4\. Go back to the job page after saving the Jenkinsfile and select **Build Now**. Jenkins will trigger the job. Its first stage is to pull down the Jenkinsfile from SCM. It reports any changes from the previous run and executes it.
Clicking on **Stage View** provides console information:
### Using Blue Ocean
Jenkins' [Blue Ocean][9] provides a better UI for Pipelines. It is accessible from the job's main page (see image above).
This simple Pipeline has one stage (in addition to the default stage): **Checkout SCM**, which pulls the Jenkinsfile, followed by three steps. The first step echoes a message, the second creates a directory named `from-jenkins` in the Jenkins workspace, and the third puts a file called `test.txt` inside that directory. The path for the Jenkins workspace is `$user/.jenkins/workspace`, located on the machine where the job was executed. In this example, the job is executed on any available node. If there is no other node connected, then it is executed on the machine where Jenkins is installed (check Manage Jenkins > Manage nodes for information about the nodes).
Another way to create a Pipeline is with Blue Ocean's plugin. (The following screenshots show the same repo.)
1\. Click **Open Blue Ocean**.
2\. Click **New Pipeline**.
3\. Select **SCM** and enter the repository URL; an SSH key will be provided. This must be added to your Git SSH keys (in `Settings →SSH and GPG keys`).
4\. Jenkins will automatically detect the branch and the Jenkinsfile, if present. It will also trigger the job.
### Pipeline development:
The following Jenkinsfile triggers Cucumber tests from a GitHub repository, creates and archives a JAR, sends emails, and shows different ways the job can execute with variables, parallel stages, etc. The Java project used in this demo was forked from [cucumber/cucumber-jvm][10] to [mluyo3414/cucumber-jvm][11]. You can also access [the Jenkinsfile][12] on GitHub. Since the Jenkinsfile is not in the repository's top directory, the configuration has to be changed to point to that path:
```
pipeline {
    // 1. runs in any agent, otherwise specify a slave node
    agent any
    parameters {
// 2.variables for the parametrized execution of the test: Text and options
        choice(choices: 'yes\nno', description: 'Are you sure you want to execute this test?', name: 'run_test_only')
        choice(choices: 'yes\nno', description: 'Archived war?', name: 'archive_war')
        string(defaultValue: "your.email@gmail.com", description: 'email for notifications', name: 'notification_email')
    }
//3. Environment variables
environment {
firstEnvVar= 'FIRST_VAR'
secondEnvVar= 'SECOND_VAR'
thirdEnvVar= 'THIRD_VAR'
}
//4. Stages
    stages {
        stage('Test'){
             //conditional for parameter
            when {
                environment name: 'run_test_only', value: 'yes'
            }
            steps{
                sh 'cd examples/java-calculator && mvn clean integration-test'
            }
        }
//5. demo parallel stage with script
        stage ('Run demo parallel stages') {
steps {
        parallel(
        "Parallel stage #1":
                  {
                  //running a script instead of DSL. In this case to run an if/else
                  script{
                    if (env.run_test_only =='yes')
                        {
                        echo env.firstEnvVar
                        }
                    else
                        {
                        echo env.secondEnvVar
                        }
                  }
         },
        "Parallel stage #2":{
                echo "${thirdEnvVar}"
                }
                )
             }
        }
    }
//6. Post actions for success or failure of the job. Commented out in the following code: an example of how to add a node where a stage is specifically executed. PublishHTML is also a good plugin to expose Cucumber reports, but here we are using a plugin that consumes JSON.
   
post {
        success {
        //node('node1'){
echo "Test succeeded"
            script {
    // configured from using gmail smtp Manage Jenkins-> Configure System -> Email Notification
    // SMTP server: smtp.gmail.com
    // Advanced: Gmail user and pass, use SSL and SMTP Port 465
    // Capitalized variables are Jenkins variables see https://wiki.jenkins.io/display/JENKINS/Building+a+software+project
                mail(bcc: '',
                     body: "Run ${JOB_NAME}-#${BUILD_NUMBER} succeeded. To get more details, visit the build results page: ${BUILD_URL}.",
                     cc: '',
                     from: 'jenkins-admin@gmail.com',
                     replyTo: '',
                     subject: "${JOB_NAME} ${BUILD_NUMBER} succeeded",
                     to: env.notification_email)
                     if (env.archive_war =='yes')
                     {
             // ArchiveArtifact plugin
                        archiveArtifacts '**/java-calculator-*-SNAPSHOT.jar'
                      }
                       // Cucumber report plugin
                      cucumber fileIncludePattern: '**/java-calculator/target/cucumber-report.json', sortingMethod: 'ALPHABETICAL'
            //publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: true, reportDir: '/home/reports', reportFiles: 'reports.html', reportName: 'Performance Test Report', reportTitles: ''])
            }
        //}
        }
        failure {
            echo "Test failed"
            mail(bcc: '',
                body: "Run ${JOB_NAME}-#${BUILD_NUMBER} failed. To get more details, visit the build results page: ${BUILD_URL}.",
                 cc: '',
                 from: 'jenkins-admin@gmail.com',
                 replyTo: '',
                 subject: "${JOB_NAME} ${BUILD_NUMBER} failed",
                 to: env.notification_email)
                 cucumber fileIncludePattern: '**/java-calculator/target/cucumber-report.json', sortingMethod: 'ALPHABETICAL'
//publishHTML([allowMissing: true, alwaysLinkToLastBuild: false, keepAll: true, reportDir: '/home/tester/reports', reportFiles: 'reports.html', reportName: 'Performance Test Report', reportTitles: ''])
        }
    }
}
```
Always check **Pipeline Syntax** to see how to use the different plugins in the Jenkinsfile.
An email notification indicates the build was successful:
Archived JAR from a successful build:
You can access **Cucumber reports** on the same page.
## How to create a multibranch pipeline
If your project already has a Jenkinsfile, follow the [**Multibranch Pipeline** project][13] instructions in Jenkins' docs. It uses Git and assumes credentials are already configured. This is how the configuration looks in the traditional view:
### If this is your first time creating a Pipeline, follow these steps:
1\. Select **Open Blue Ocean**.
2\. Select **New Pipeline**.
3\. Select **Git** and insert the Git repository address. This repository does not currently have a Jenkinsfile. An SSH key will be generated; it will be used in the next step.
4\. Go to GitHub. Click on the profile avatar in the top-right corner and select Settings. Then select **SSH and GPG Keys** from the left-hand menu and insert the SSH key Jenkins provides.
5\. Go back to Jenkins and click **Create Pipeline**. If the project does not contain a Jenkinsfile, Jenkins will prompt you to create a new one.
6\. Once you click **Create Pipeline** , an interactive Pipeline diagram will prompt you to add stages by clicking **+**. You can add parallel or sequential stages and multiple steps to each stage. A list offers different options for the steps.
7\. The following diagram shows three stages (Stage 1, Stage 2a, and Stage 2b) with simple print messages indicating steps. You can also add environment variables and specify in which agent the Jenkinsfile will be executed.
Click **Save** , then commit the new Jenkinsfile by clicking **Save & Run**.
You can also add a new branch.
8\. The job will execute.
If a new branch was added, you can see it in GitHub.
9\. If another branch with a Jenkinsfile is created, you can discover it by clicking **Scan Multibranch Pipeline Now**. In this case, a new branch called `new-feature-2` is created in GitHub from Master (only branches with Jenkinsfiles are displayed in Jenkins).
After scanning, the new branch appears in Jenkins.
This new feature was created using GitHub directly; Jenkins will detect new branches when it performs a scan. If you don't want the newly discovered Pipelines to be executed when discovered, change the settings by clicking **Configure** on the job's Multibranch Pipeline main page and adding the property **Suppress automatic SCM triggering**. This way, Jenkins will discover new Pipelines but they will have to be manually triggered.
This article was originally published on the [ITNext channel][14] on Medium and is reprinted with permission.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/jenkins-pipelines-with-cucumber
作者:[Miguel Suarez][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mluyo3414
[1]:https://jenkins.io/doc/book/installing/#downloading-and-running-jenkins-in-docker
[2]:https://jenkins.io/doc/book/pipeline/docker/
[3]:https://jenkins.io/doc/pipeline/tour/getting-started/
[4]:https://plugins.jenkins.io/pipeline-model-definition
[5]:https://plugins.jenkins.io/blueocean
[6]:https://plugins.jenkins.io/cucumber-reports
[7]:https://jenkins.io/doc/book/pipeline/
[8]:https://github.com/mluyo3414/jenkins-test
[9]:https://jenkins.io/projects/blueocean/
[10]:https://github.com/cucumber/cucumber-jvm
[11]:https://github.com/mluyo3414/cucumber-jvm
[12]:https://github.com/mluyo3414/cucumber-jvm/blob/master/examples/java-calculator/Jenkinsfile
[13]:https://jenkins.io/doc/book/pipeline/multibranch/#creating-a-multibranch-pipeline
[14]:https://itnext.io/jenkins-pipelines-889420409510

View File

@ -0,0 +1,121 @@
How To Check User Created Date On Linux
======
Do you know how to check when a user account was created on a Linux system, and what the different ways to do it are?
The Linux operating system doesn't track this information directly, so we have to rely on alternate ways to get it.
You might ask why you would want to check this. In some situations you will need this information, and then these methods will be very helpful.
It can be verified using the seven methods below.
* Using /var/log/secure file
* Using aureport utility
* Using .bash_logout file
* Using chage Command
* Using useradd Command
* Using passwd Command
* Using last Command
### Method-1: Using /var/log/secure file
It stores all security related messages including authentication failures and authorization privileges. It also tracks sudo logins, SSH logins and other errors logged by system security services daemon.
```
# grep prakash /var/log/secure
Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new group: name=prakash, GID=501
Apr 12 04:07:18 centos.2daygeek.com useradd[21263]: new user: name=prakash, UID=501, GID=501, home=/home/prakash, shell=/bin/bash
Apr 12 04:07:34 centos.2daygeek.com passwd: pam_unix(passwd:chauthtok): password changed for prakash
Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: Accepted password for prakash from 103.5.134.167 port 60554 ssh2
Apr 12 04:08:32 centos.2daygeek.com sshd[21269]: pam_unix(sshd:session): session opened for user prakash by (uid=0)
```
### Method-2: Using aureport utility
The aureport utility allows you to generate summary and columnar reports on the events recorded in Audit log files. By default, all audit.log files in the /var/log/audit/ directory are queried to create the report.
```
# aureport --auth | grep prakash
46. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 288
47. 04/12/2018 04:08:32 prakash 103.5.134.167 ssh /usr/sbin/sshd yes 291
```
### Method-3: Using .bash_logout file
The .bash_logout file in your home directory has a special meaning to bash: it provides a way to execute commands when the user logs out of the system.
We can check the change date of the .bash_logout file in the user's home directory. This file is created upon the user's first logout.
```
# stat /home/prakash/.bash_logout
File: `/home/prakash/.bash_logout'
Size: 18 Blocks: 8 IO Block: 4096 regular file
Device: 801h/2049d Inode: 256153 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 501/ prakash) Gid: ( 501/ prakash)
Access: 2017-03-22 20:15:00.000000000 -0400
Modify: 2017-03-22 20:15:00.000000000 -0400
Change: 2018-04-12 04:07:18.283000323 -0400
```
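If you only want the timestamp itself rather than the full stat output, GNU stat's format option can print just the change time (a small convenience; the path is the same example file as above):
```
# %z prints the time of the last status change
$ stat -c '%z' /home/prakash/.bash_logout
2018-04-12 04:07:18.283000323 -0400
```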
### Method-4: Using chage Command
chage stands for "change age". This command allows the user to manage password expiry information. The chage command changes the number of days between password changes and the date of the last password change.
This information is used by the system to determine when a user must change his/her password. This works if the user has not changed the password since the account creation date.
```
# chage --list prakash
Last password change : Apr 12, 2018
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7
```
### Method-5: Using useradd Command
The useradd command is used to create new accounts in Linux. By default, it won't record the user creation date, so we have to add the date ourselves using the "Comment" option.
```
# useradd -m prakash -c `date +%Y/%m/%d`
# grep prakash /etc/passwd
prakash:x:501:501:2018/04/12:/home/prakash:/bin/bash
```
### Method-6: Using passwd Command
The passwd command is used to assign passwords to local accounts or users. If the user has not changed his password since the account's creation date, then you can use the passwd command to find out the date of the last password reset.
```
# passwd -S prakash
prakash PS 2018-04-11 0 99999 7 -1 (Password set, MD5 crypt.)
```
### Method-7: Using last Command
The last command reads the file /var/log/wtmp and displays a list of all users who have logged in (and out) since that file was created.
```
# last | grep "prakash"
prakash pts/2 103.5.134.167 Thu Apr 12 04:08 still logged in
```
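The default output truncates timestamps and omits the year. If your version of last supports it, the -F flag prints full login and logout times, which makes the oldest entry (the first login after the account was created) easier to read:
```
# -F shows full dates and times for each login
$ last -F | grep "prakash"
prakash  pts/2        103.5.134.167    Thu Apr 12 04:08:32 2018   still logged in
```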
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-check-user-created-date-on-linux/
作者:[Prakash Subramanian][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/prakash/

View File

@ -0,0 +1,137 @@
Finding what youre looking for on Linux
======
![](https://images.idgesg.net/images/article/2018/04/binoculars-100754967-large.jpg)
It isnt hard to find what youre looking for on a Linux system — a file or a command — but there are a _lot_ of ways to go looking.
### 7 commands to find Linux files
#### find
The most obvious is undoubtedly the **find** command, and find has become easier to use than it was years ago. It used to require a starting location for your search, but these days, you can also use find with just a file name or regular expression if youre willing to confine your search to the local directory.
```
$ find e*
empty
examples.desktop
```
In this way, it works much like the **ls** command and isn't doing much of a search.
For more relevant searches, find requires a starting point and some criteria for your search (unless you simply want it to provide a recursive listing of that starting point's directory). The command **find . -type f** will recursively list all regular files starting with the current directory, while **find ~nemo -type f -empty** will find empty files in Nemo's home directory.
```
$ find ~nemo -type f -empty
/home/nemo/empty
```
**Also on Network world:[11 pointless but awesome Linux terminal tricks][1]**
#### locate
The name of the **locate** command suggests that it does basically the same thing as find, but it works entirely differently. Where the **find** command can select files based on a variety of criteria (name, size, owner, permissions, state such as empty, etc.) with a selectable depth for the search, the **locate** command looks through a file called /var/lib/mlocate/mlocate.db to find what you're looking for. That db file is updated periodically, so a locate of a file you just created will probably fail to find it. If that bothers you, you can run the updatedb command and get the update to happen right away.
```
$ sudo updatedb
```
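Once the database is up to date, using locate is as simple as giving it all or part of a file name. The output below is only an example; results will differ on your system:
```
$ locate .bash_logout
/etc/skel/.bash_logout
/home/nemo/.bash_logout
```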
#### mlocate
The **mlocate** command works like the **locate** command and uses the same mlocate.db file as locate.
#### which
The **which** command works very differently from the **find** and **locate** commands. It uses your search path and checks each directory on it for an executable with the file name you're looking for. Once it finds one, it stops searching and displays the full path to that executable.
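A quick illustration (the path shown is typical, but may differ on your system):
```
$ which locate
/usr/bin/locate
```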
The primary benefit of the **which** command is that it answers the question, "If I enter this command, what executable file will be run?" It ignores files that aren't executable and doesn't list all executables on the system with that name — just the one that it finds first. If you wanted to find _all_ executables that have some name, you could run a find command like this, though it will take considerably longer than the very efficient **which** command.
```
$ find / -name locate -perm -a=x 2>/dev/null
/usr/bin/locate
/etc/alternatives/locate
```
In this find command, we're looking for all executables (files that can be run by anyone) named "locate". We're also electing not to view all of the "Permission denied" messages that would otherwise clutter our screens.
#### whereis
The **whereis** command works a lot like the **which** command, but it provides more information. Instead of just looking for executables, it also looks for man pages and source files. Like the **which** command, it uses your search path ($PATH) to drive its search.
```
$ whereis locate
locate: /usr/bin/locate /usr/share/man/man1/locate.1.gz
```
#### whatis
The **whatis** command has its own unique mission. Instead of actually finding files, it looks for information in the man pages for the command you are asking about and provides the brief description of the command from the top of the man page.
```
$ whatis locate
locate (1) - find files by name
```
If you ask about a script that youve just set up, it wont have any idea what youre referring to and will tell you so.
```
$ whatis cleanup
cleanup: nothing appropriate.
```
#### apropos
The **apropos** command is useful when you know what you want to do, but you have no idea what command you should be using to do it. If you were wondering how to locate files, for example, the commands “apropos find” and “apropos locate” would have a lot of suggestions to offer.
```
$ apropos find
File::IconTheme (3pm) - find icon directories
File::MimeInfo::Applications (3pm) - Find programs to open a file by mimetype
File::UserDirs (3pm) - find extra media and documents directories
find (1) - search for files in a directory hierarchy
findfs (8) - find a filesystem by label or UUID
findmnt (8) - find a filesystem
gst-typefind-1.0 (1) - print Media type of file
ippfind (1) - find internet printing protocol printers
locate (1) - find files by name
mlocate (1) - find files by name
pidof (8) - find the process ID of a running program.
sane-find-scanner (1) - find SCSI and USB scanners and their device files
systemd-delta (1) - Find overridden configuration files
xdg-user-dir (1) - Find an XDG user dir
$
$ apropos locate
blkid (8) - locate/print block device attributes
deallocvt (1) - deallocate unused virtual consoles
fallocate (1) - preallocate or deallocate space to a file
IO::Tty (3pm) - Low-level allocate a pseudo-Tty, import constants.
locate (1) - find files by name
mlocate (1) - find files by name
mlocate.db (5) - a mlocate database
mshowfat (1) - shows FAT clusters allocated to file
ntfsfallocate (8) - preallocate space to a file on an NTFS volume
systemd-sysusers (8) - Allocate system users and groups
systemd-sysusers.service (8) - Allocate system users and groups
updatedb (8) - update a database for mlocate
updatedb.mlocate (8) - update a database for mlocate
whereis (1) - locate the binary, source, and manual page files for a...
which (1) - locate a command
```
### Wrap-up
The commands available on Linux for locating and identifying files are quite varied, but they're all very useful.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3268768/linux/finding-what-you-re-looking-for-on-linux.html
作者:[Sandra Henry-Stocker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.networkworld.com/author/Sandra-Henry_Stocker/
[1]:http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-linux-terminal-tricks.html#tk.nww-fsb

View File

@ -0,0 +1,89 @@
Redcore Linux Makes Gentoo Easy
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/redcore.jpg?itok=SfsuPD0w)
Raise your hand if you've always wanted to try [Gentoo Linux][1] but never did because you didn't have either the time or the skills to invest in such a challenging installation. I'm sure there are plenty of Linux users out there not willing to admit this, but it's okay, really; installing Gentoo is a challenge, and it can be very time consuming. In the end, however, installing Gentoo will result in a very personalized Linux desktop that offers the fulfillment of saying, "I did it!"
So, what's a curious Linux user to do when they want to experience this elite distribution? One option is to turn to the likes of [Redcore Linux][2]. Redcore does what many have tried (and few have succeeded in doing): bringing Gentoo to the masses. In fact, [Sabayon][3] Linux is the only other distro I can think of that's truly succeeded in bringing a level of simplicity to Gentoo Linux that many users can enjoy. And while Sabayon is still very much in active development, it's good to know there are others attempting what might have once been deemed impossible:
### Making Gentoo Linux easy
Instead of building your desktop piece by piece, system by system, Redcore (like Sabayon) brings a much more standard installation to the process. Unlike Sabayon (which gives you the options of a GNOME, KDE, Xfce, Mate, or Fluxbox editions), Redcore offers a version that ships with two possible desktop options: The [LXQt][4] desktop and [Openbox][5]. The LXQt is a lightweight desktop that offers plenty of configuration options and performs quite well on older hardware, whereas Openbox is a very minimalist take on the desktop. In fact, once you log into the Openbox desktop, youll be left wondering if something had gone wrong (until you right-click on the desktop to see the solitary menu).
If you're looking for a more modern take on the desktop, neither LXQt nor Openbox will be what you're looking for. However, there is no doubt that a rolling-release, Gentoo-lite system using the LXQt or Openbox desktop will perform quite well.
The official description of the distribution is:
Redcore Linux is a distribution based on Gentoo Linux (stable + some unstable) and a continuation of the now defunct Kogaion Linux. Kogaion Linux itself was a distribution based initially on Sabayon Linux, and later on Gentoo Linux, and it had been developed by the RogentOS Development Group since 2011. Ghiunhan Mamut (aka V3n3RiX) himself joined the RogentOS Development Group in January 2014.
If you know much about how Gentoo is structured, Redcore Linux is built from Gentoo Linux stage3. Stage3 is a tarball containing a populated directory structure from a basic Gentoo system that contains no kernel, only binaries and libraries essential for bootstrapping. On top of stage3, the Redcore developers add a kernel, a bootloader and a few other items (such as dbus and Dracut), as well as configure the init system (OpenRC).
With all of that out of the way, lets see what the installation of Redcore is like and how well it can serve as your desktop distribution.
### Installation
As youve probably expected, the installation of Redcore is incredibly simple. Download the live ISO image, burn it to a CD/DVD or USB, insert the installation media, boot the device, log into the desktop (live username/password is redcore/redcore) and click the installer icon on the desktop. The installer used by Redcore is [Calamares][6], which means the installation is incredibly easy and, in an instant, familiar (Figure 1).
Everything with Calamares is automatic. In other words, you wont have to manually partition your drive or select individual packages for installation. You should be able to start and finish a Redcore installation in five or ten minutes. Once the installation completes, reboot and log in with the username/password you created during installation.
### Usage
Upon login, you can select between LXQt and Openbox. I highly recommend against using Openbox. Why? Because nothing will open from the menu. I was actually quite surprised to find the Openbox desktop fairly unusable upon installation. With that in mind, select the LXQt option and be done with it.
Upon logging in, you'll be greeted by a fairly straightforward desktop. Click on the menu button (bottom right of the screen) and search through the menu hierarchy to launch an application. The list of installed applications is fairly standard, with the exception of finding [Steam][7] and [Wine][8] pre-installed. You might be surprised, considering Redcore is a rolling distribution, that many of the user-crucial applications are out of date. Take, for instance, LibreOffice. Redcore ships with 5.4.5.1. The Still release of LibreOffice is currently at 5.4.6. Open the Sisyphus GUI (the front end for the Sisyphus package manager) and you'll see that LibreOffice is considered up to date (according to the package manager) at 5.4.5.1 (Figure 2).
![ Sisyphus][10]
Figure 2: The Sisyphus GUI package manager.
[Used with permission][11]
If you do see packages available for upgrade (which you might), click the upgrade button and allow the upgrade to complete. Considering this is a rolling release, you should be up to date. However, you can search through Sisyphus, locate new packages to install, and install them with ease. Installation with the Sisyphus front end is quite user-friendly.
### That default browser
You won't find a copy of Firefox or Chrome installed on Redcore. Instead, QupZilla serves as the default browser. When you open the default browser (or if you click on the Ask for help icon on the desktop), you will find the preconfigured home page to be the [redcorelinux freenode.net page][12]. Instead of being greeted by a hand-crafted application geared toward helping new users, one must choose a nickname and venture into the world of IRC. Although one might be inclined to think that does new users a disservice, one must consider the type of "new" user Redcore will be serving: these aren't going to be new-to-Linux users. Instead, Redcore knows its users and knows many of them are already familiar with IRC. That means users don't have to turn to Google to search for answers. Instead, they can chat with other users and even developers to solve their problems. This, of course, does depend on those users (who might be capable of answering questions) actually being logged into the redcorelinux channel on freenode.
### That default theme
Im going to make a confession here. Ive never understood the whole “dark theme” preference. I do understand that taste is a subjective issue, but my taste tends to lean toward the lighter themes. Thats not a problem. To change the theme for the LXQt desktop, open the menu, type desktop in the search field, and then select Customize Look and Feel. In the resulting window (Figure 3), you can select from the short list of theme options.
![desktop][14]
Figure 3: Changing the desktop theme in Redcore.
[Used with permission][11]
### That target audience
So who is Redcore's best target audience? If you're looking to gain the benefit of Gentoo Linux without having to go through the exhausting "get up to speed" and installation process required to compile one of the most challenging operating systems on the planet, Redcore might be what you're looking for. It's a very simplified means of enjoying Gentoo Linux without the usual Gentoo effort. Of course, if you're looking to enjoy Gentoo with a more modern desktop, I would highly recommend [Sabayon][3]. However, the LXQt lightweight desktop will certainly give life to old hardware. And Redcore does this with a bit of Gentoo style.
Learn more about Linux through the free ["Introduction to Linux" ][15] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/4/redcore-linux-makes-gentoo-easy
作者:[JACK WALLEN][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[1]:https://www.gentoo.org/
[2]:https://redcorelinux.org/
[3]:http://www.sabayon.org/
[4]:https://lxqt.org/
[5]:http://openbox.org/wiki/Main_Page
[6]:https://calamares.io/about/
[7]:http://store.steampowered.com/
[8]:https://www.winehq.org/
[10]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/redcore_2.jpg?itok=ubNC-htJ ( Sisyphus)
[11]:https://www.linux.com/licenses/category/used-permission
[12]:http://webchat.freenode.net/?channels=redcorelinux
[14]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/redcore_3.jpg?itok=FKg67lrS (desktop)
[15]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux

View File

@ -0,0 +1,126 @@
Useful Resources for Those Who Want to Know More About Linux
======
Linux is one of the most popular and versatile operating systems available. It can be used on a smartphone, computer and even a car. Linux has been around since the 1990s and is still one of the most widespread operating systems.
Linux is actually used to run most of the Internet as it is considered to be rather stable compared to other operating systems. This is one of the [reasons why people choose Linux over Windows][1]. Besides, Linux provides its users with privacy and doesnt collect their data at all, while Windows 10 and its Cortana voice control system always require updating your personal information.
Linux has many advantages. However, people do not hear much about it, as it has been squeezed out from the market by Windows and Mac. And many people get confused when they start using Linux, as its a bit different from popular operating systems.
So to help you out weve collected 5 useful resources for those who want to know more about Linux.
### 1.[Linux for Absolute Beginners][2]
If you want to learn as much about Linux as you can, you should consider taking a full course for beginners, provided by Eduonix. This course will introduce you to all features of Linux and provide you with all necessary materials to help you find out more about the peculiarities of how Linux works.
You should definitely choose this course if:
* you want to learn the details about the Linux operating system;
* you want to find out how to install it;
* you want to understand how Linux cooperates with your hardware;
* you want to learn how to operate Linux command line.
### 2.[PC World: A Linux Beginners Guide][3]
A free resource for those who want to learn everything about Linux in one place. PC World specializes in various aspects of working with computer operating systems, and it provides its subscribers with the most accurate and up-to-date information. Here you can also learn more about the [benefits of Linux][4] and the latest news about this operating system.
This resource provides you with information on:
* how to install Linux;
* how to use command line;
* how to install additional software;
* how to operate Linux desktop environment.
### 3.[Linux Training][5]
A lot of people who work with computers are required to learn how to operate Linux in case Windows operating system suddenly crashes. And what can be better than using an official resource to start your Linux training?
This resource provides online enrollment on the Linux training, where you can get the most updated information from the authentic source. “A year ago our IT department offered us a Linux training on the official website”, says Martin Gibson, a developer at [Assignmenthelper.com.au][6]. “We took this course because we needed to learn how to back up all our files to another system to provide our customers with maximum security, and this resource really taught us everything.”
So you should definitely use this resource if:
* you want to receive firsthand information about the operating system;
* want to learn the peculiarities of how to run Linux on your computer;
* want to connect with other Linux users and share your experience with them.
### 4.[The Linux Foundation: Training Videos][7]
If you easily get bored from reading a lot of resources, this website is definitely for you. The Linux Foundation provides training videos, lectures and webinars, held by IT specialists, software developers and technical consultants.
All the training videos are subdivided into categories for:
* Developers: working with Linux Kernel, handling Linux Device Drivers, Linux virtualization etc.;
* System Administrators: developing virtual hosts on Linux, building a Firewall, analyzing Linux performance etc.;
* Users: getting started using Linux, introduction to embedded Linux and so on.
### 5.[LinuxInsider][8]
Did you know that Microsoft was so amazed by the efficiency of Linux that it [allowed users to run Linux on Microsoft cloud computing device][9]? If you want to learn more about this operating system, Linux Insider provides its subscribers with the latest news on Linux operating systems, gives information about the latest updates and Linux features.
On this resource, you will have the opportunity to:
* participate in Linux community;
* learn about how to run Linux on various devices;
* check out reviews;
* participate in blog discussions and read the tech blog.
### Wrapping up…
Linux offers a lot of benefits, including complete privacy, stable operation and even malware protection. It's definitely worth trying, and learning how to use it will help you better understand how your computer works and what it needs to operate smoothly.
### About the Author
_Lucy Benton is a digital marketing specialist and business consultant who helps people turn their dreams into a profitable business. She is now writing for marketing and business resources. Lucy also has her own blog,_ [_Prowritingpartner.com_][10]_, where you can check out her latest publications._
--------------------------------------------------------------------------------
via: https://linuxaria.com/article/useful-resources-for-those-who-want-to-know-more-about-linux
作者:[Lucy Benton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.lifewire.com
[1]:https://www.lifewire.com/windows-vs-linux-mint-2200609
[2]:https://www.eduonix.com/courses/system-programming/linux-for-absolute-beginners
[3]:https://www.pcworld.com/article/2918397/operating-systems/how-to-get-started-with-linux-a-beginners-guide.html
[4]:https://www.popsci.com/switch-to-linux-operating-system#page-4
[5]:https://www.linux.com/learn/training
[6]:https://www.assignmenthelper.com.au/
[7]:https://training.linuxfoundation.org/free-linux-training/linux-training-videos
[8]:https://www.linuxinsider.com/
[9]:https://www.wired.com/2016/08/linux-took-web-now-taking-world/
[10]:https://prowritingpartner.com/
[11]:https://cdn.linuxaria.com/wp-content/plugins/flattr/img/flattr-badge-large.png
[12]:https://linuxaria.com/?flattrss_redirect&id=8570&md5=ee76fa2b44bdf6ef419a7f9906d3a5ad (Flattr)

View File

@ -1,21 +1,17 @@
Translating by FelixYFZ
How to test internet speed in Linux terminal
如何在Linux的终端测试网速
======
Learn how to use speedtest cli tool to test internet speed in Linux terminal. Also includes one liner python command to get speed details right away.
学习如何在Linux终端使用命令行工具测试网速,或者仅用一条python命令立刻获得网速的测试结果。
![在Linux终端测试网速][1]
![test internet speed in linux terminal][1]
我们都会在连接新的网络或 wifi 的时候测试一下网络带宽,那为什么不测试一下我们自己的服务器呢?下面将会教你如何在 Linux 终端测试网速。
Most of us check the internet bandwidth speed whenever we connect to new network or wifi. So why not our servers! Here is a tutorial which will walk you through to test internet speed in Linux terminal.
我们一般都会使用 Ookla 的 Speedtest 来测试网速。这在桌面上是一个非常简单的操作,打开他们的网站,点击“GO”按钮即可。
它会检测你的位置然后用最近的服务器来测试网速。如果你使用的是移动设备他们也有对应的移动端 APP。但如果你使用的是只有命令行的终端界面事情就会有些不同。下面让我们一起看看如何在 Linux 的终端来测试网速。
如果你只是想偶尔做一次网速测试,而不想在服务器上下载测试工具,那么请往下看,如何只用一条命令完成测试。
Everyone of us generally uses [Speedtest by Ookla][2] to check internet speed. Its pretty simple process for a desktop. Goto their website and just click GO button. It will scans your location and speed test with nearest server. If you are on mobile, they have their app for you. But if you are on terminal with command line interface things are little different. Lets see how to check internet speed from Linux terminal.
### 第一步:下载网速测试命令行工具。
If you want to speed check only once and dont want to download tool on server, jump here and see one liner command.
### Step 1 : Download speedtest cli tool
First of all, you have to download speedtest CLI tool from [github repository][3]. Now a days, its also included in many well known Linux repositories as well. If its their in yours then you can directly [install that package on your Linux distro][4].
Lets proceed with Github download and install process. [Install git package][4] depending on your distro. Then clone Github repo of speedtest like belwo :
首先,你需要从 [github 仓库][3]下载 speedtest 命令行工具。如今它也被收录进了许多知名的 Linux 软件仓库,如果你的发行版仓库里有它,你可以直接[在你的 Linux 发行版上安装该软件包][4]。下面我们继续介绍从 Github 下载并安装的过程:先根据你的发行版[安装 git 包][4],然后按照下面的方法克隆 speedtest 的 Github 仓库:
```
[root@kerneltalks ~]# git clone https://github.com/sivel/speedtest-cli.git
@ -27,7 +23,7 @@ Resolving deltas: 100% (518/518), done.
```
It will be cloned to your present working directory. New directory named `speedtest-cli` will be created. You can see below files in it.
它会被克隆到你当前的工作目录下,并创建一个名为 `speedtest-cli` 的新目录,你可以在其中看到如下文件。
```
[root@kerneltalks ~]# cd speedtest-cli
@ -45,13 +41,11 @@ total 96
-rw-r--r--. 1 root root 333 Oct 7 16:55 tox.ini
```
The python script `speedtest.py` is the one we will be using to check internet speed.
名为 `speedtest.py` 的 Python 脚本就是我们用来测速的。你可以把这个脚本链接为 `/usr/bin` 下的一条命令,让这台服务器上的所有用户都能使用;也可以为它创建一个[命令别名][5],同样方便所有用户使用。
You can link this script for a command in /usr/bin so that all users on server can use it. Or you can even create [command alias][5] for it and it will be easy for all users to use it.
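下面是一个简单的示意(假设仓库克隆在 `/root/speedtest-cli` 下,路径和命令名仅为示例,请按实际情况调整):

```
# 将脚本链接为一条全局命令
chmod +x /root/speedtest-cli/speedtest.py
ln -s /root/speedtest-cli/speedtest.py /usr/bin/speedtest

# 或者为当前用户创建一个命令别名
echo "alias speedtest='python /root/speedtest-cli/speedtest.py'" >> ~/.bashrc
source ~/.bashrc
```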
### 第二步:运行 python 脚本
### Step 2 : Run python script
Now, run python script without any argument and it will search nearest server and test your internet speed.
现在,不带任何参数直接运行这个脚本,它会自动寻找最近的服务器来测试你的网速。
```
[root@kerneltalks speedtest-cli]# python speedtest.py
@ -66,13 +60,13 @@ Testing upload speed............................................................
Upload: 323.95 Mbit/s
```
Oh! Dont amaze with speed. 😀 I am on [AWS EC2 Linux server][6]. Thats the bandwidth of Amazon data center! 🙂
哦!不要被这个网速吓到。😀 我是在 [AWS EC2 Linux 服务器][6]上测试的,那是亚马逊数据中心的带宽!🙂
### Different options with script
### 脚本的不同选项
Few options which might be useful are as below :
下面几个选项可能会很有用:
**To search speedtest servers** nearby your location use `--list` switch and `grep` for your location name.
**要搜寻你所在位置附近的测速服务器**,使用 `--list` 选项,并用 `grep` 过滤出你所在地的名称。
```
[root@kerneltalks speedtest-cli]# python speedtest.py --list | grep -i mumbai
@ -90,11 +84,10 @@ Few options which might be useful are as below :
6403) YOU Broadband India Pvt Ltd., Mumbai (Mumbai, India) [1.15 km]
```
You can see here, first column is server identifier followed by name of company hosting that server, location and finally its distance from your location.
如你所见,第一列是服务器标识号,接着是提供该服务器的公司名称和所在地,最后是它与你的距离。
**To test internet speed using specific server** use `--server` switch and server identifier from previous output as argument.
**要使用指定的服务器测试网速**,在命令后加上 `--server` 选项,并以上面输出中的服务器标识号作为参数。
```
[root@kerneltalks speedtest-cli]# python speedtest.py --server 2827
Retrieving speedtest.net configuration...
Testing from Amazon (35.154.184.126)...
@ -107,7 +100,7 @@ Testing upload speed............................................................
Upload: 69.25 Mbit/s
```
**To get share link of your speed test** , use `--share` switch. It will give you URL of your test hosted on speedtest website. You can share this URL.
**要获取测速结果的分享链接**,使用 `--share` 选项。它会给你一个托管在 speedtest 网站上的测试结果 URL你可以把这个链接分享给别人。
```
[root@kerneltalks speedtest-cli]# python speedtest.py --share
@ -124,21 +117,20 @@ Share results: http://www.speedtest.net/result/6687428141.png
```
Observe last line which includes URL of your test result. If I download that image its the one below :
注意输出的最后一行,其中包含了你测试结果的 URL。把那张图片下载下来内容如下
![Speedtest result on Linux][7]
Thats it! But hey if you dont want all this technical jargon, you can even use below one liner to get speed test done right away.
这就是全部的过程!不过,如果你不想折腾这些技术细节,也可以使用下面的一条命令立刻完成测速。
### Internet speed test using one liner in terminal
### 在终端中用一条命令测试网速
We are going to use [curl tool ][8]to fetch above said python script online and supply it to python for execution on the go!
我们将使用 [curl 工具][8]在线获取上面提到的 python 脚本,并把它直接交给 python 执行。
```
[root@kerneltalks ~]# curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
```
Above command will run the script and show you result on screen!
上面的命令会运行该脚本,并把结果输出到屏幕上。
```
[root@kerneltalks speedtest-cli]# curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
@ -153,14 +145,12 @@ Testing upload speed............................................................
Upload: 355.84 Mbit/s
```
I tested this tool on RHEL 7 server but process is same on Ubuntu, Debian, Fedora or CentOS.
我是在 RHEL 7 服务器上测试这个工具的,不过在 Ubuntu、Debian、Fedora 或 CentOS 上流程是一样的。
--------------------------------------------------------------------------------
via: https://kerneltalks.com/tips-tricks/how-to-test-internet-speed-in-linux-terminal/
作者:[Shrikant Lavhate][a]
译者:[译者ID](https://github.com/译者ID)
译者:[FelixYFZ](https://github.com/FelixYFZ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,185 +0,0 @@
使用 Github 和 Python 实现持续部署
======
![](https://fedoramagazine.org/wp-content/uploads/2018/03/cd-github-python-945x400.jpg)
借助 Github 的 webhook网络钩子开发者可以创建很多有用的服务。从触发一个 Jenkins 实例上的 CI持续集成 任务到配置云中的机器,几乎有着无限的可能性。这篇教程将展示如何使用 Python 和 Flask 框架来搭建一个简单的持续部署服务。
在这个例子中的持续部署服务是一个简单的 Flask 应用,其带有接受 Github 的 webhook 请求的 REST 端点endpoint。在验证每个请求都来自正确的 Github 仓库后服务器将拉取pull更改到仓库的本地副本。这样每次一个新的提交(commit)推送到远程 Github 仓库,本地仓库就会自动更新。
### Flask web 服务
用 Flask 搭建一个小的 web 服务非常简单。这里可以先看看项目的结构。
```
├── app
│ ├── __init__.py
│ └── webhooks.py
├── requirements.txt
└── wsgi.py
```
首先,创建应用。应用代码在 app 目录下。
两个文件__init__.py 和 webhooks.py构成了 Flask 应用。前者有创建 Flask 应用并为其添加配置的代码。后者有端点endpoint逻辑。这是该应用接收 Github 请求的地方。
这里是 app/__init__.py 的内容:
```
import os
from flask import Flask
from .webhooks import webhook
def create_app():
""" Create, configure and return the Flask application """
app = Flask(__name__)
app.config['GITHUB_SECRET'] = os.environ.get('GITHUB_SECRET')
app.config['REPO_PATH'] = os.environ.get('REPO_PATH')
app.register_blueprint(webhook)
return(app)
```
该函数创建了两个配置变量:
* **GITHUB_SECRET** 保存一个密码,用来认证 Github 请求。
* **REPO_PATH** 保存了自动更新的仓库路径。
这份代码使用[Flask 蓝图Blueprints][1]来组织应用的端点endpoint。使用蓝图可以对 API 进行逻辑分组,使应用程序更易于维护。通常认为这是一种好的做法。
这里是 app/webhooks.py 的内容:
```
import hmac
from flask import request, Blueprint, jsonify, current_app
from git import Repo
webhook = Blueprint('webhook', __name__, url_prefix='')
@webhook.route('/github', methods=['POST'])
def handle_github_hook():
""" Entry point for github webhook """
signature = request.headers.get('X-Hub-Signature')
sha, signature = signature.split('=')
secret = str.encode(current_app.config.get('GITHUB_SECRET'))
hashhex = hmac.new(secret, request.data, digestmod='sha1').hexdigest()
if hmac.compare_digest(hashhex, signature):
repo = Repo(current_app.config.get('REPO_PATH'))
origin = repo.remotes.origin
origin.pull('--rebase')
commit = request.json['after'][0:6]
print('Repository updated with commit {}'.format(commit))
return jsonify({}), 200
```
首先代码创建了一个新的蓝图 webhook。然后它使用 Flask 路由route为蓝图添加了一个端点。任何发往 /github 这个 URL 的 POST 请求都将调用这个路由。
#### 验证请求
当服务收到发往该端点的请求时,首先必须验证该请求确实来自 Github并且来自正确的仓库。Github 在请求头的 X-Hub-Signature 中提供了一个签名。该签名是请求体的 HMAC 十六进制摘要,以密码GITHUB_SECRET作为密钥、使用 sha1 哈希算法计算得出。
为了验证请求,服务需要在本地计算签名并与请求头中收到的签名做比较。这可以由 hmac.compare_digest 函数完成。
#### 自定义钩子逻辑
在验证请求之后,就可以对它进行处理了。这篇教程使用 [GitPython][3] 模块来与 git 仓库进行交互。GitPython 模块中的 Repo 对象用于访问远程仓库 origin。服务在本地拉取远程仓库的最新更改并使用 `--rebase` 选项来避免合并冲突的问题。
调试打印语句显示了从请求体收到的短提交哈希。这个例子展示了如何使用请求体。更多关于请求体的可用数据的信息,请查询[github 文档][4]。
最后服务返回了一个空的 JSON 字符串和 200 的状态码。这用于告诉 Github 的 webhook 服务已经收到了请求。
### 部署服务
为了运行该服务,这个例子使用 [gunicorn][5] web 服务器。首先安装服务依赖。在支持的 Fedora 服务器上,以[sudo][6]运行这条命令:
```
sudo dnf install python3-gunicorn python3-flask python3-GitPython
```
现在编辑 gunicorn 使用的 wsgi.py 文件来运行服务:
```
from app import create_app
application = create_app()
```
为了部署服务,使用以下命令克隆这个 git [仓库][7]或者使用你自己的 git 仓库:
```
git clone https://github.com/cverna/github_hook_deployment.git /opt/
```
下一步是配置服务所需的环境变量。运行这些命令:
```
export GITHUB_SECRET=asecretpassphraseusebygithubwebhook
export REPO_PATH=/opt/github_hook_deployment/
```
这篇教程使用 webhook 服务的 Github 仓库,但你可以使用你想要的不同仓库。最后,使用这些命令开启 该 web 服务:
```
cd /opt/github_hook_deployment/
gunicorn --bind 0.0.0.0 wsgi:application --reload
```
这些选项将 web 服务绑定到 0.0.0.0 这个 IP 地址上意味着它可以接受来自任何主机的请求。`--reload` 选项确保在代码更改时自动重启 web 服务。这就是持续部署的魔力所在:每次接收到 Github 请求时,服务都会拉取仓库的最近更新,同时 gunicorn 检测到这些更改并自动重启应用。
**注意:** 为了能接收到 Github 请求web 服务必须部署到具有公网 IP 地址的服务器上。做到这点的简单方法就是使用你喜欢的云服务商,比如 DigitalOcean、AWS、Linode 等。
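部署完成后,可以先在本地用 curl 模拟一次 Github 的 webhook 请求,验证签名校验和拉取逻辑是否正常。下面是一个简单的示意(端口和负载内容均为假设;签名的计算方式与上文 webhooks.py 中的校验逻辑对应;注意校验通过后服务会真的执行一次拉取):

```
# 假设服务运行在本机 8000 端口GITHUB_SECRET 与部署时设置的一致
PAYLOAD='{"after":"0123456789abcdef"}'
SIG=$(printf '%s' "$PAYLOAD" | openssl dgst -sha1 -hmac "$GITHUB_SECRET" | awk '{print $NF}')

curl -X POST http://localhost:8000/github \
     -H "Content-Type: application/json" \
     -H "X-Hub-Signature: sha1=$SIG" \
     -d "$PAYLOAD"
```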
### 配置 Github
这篇教程的最后一部分是配置 Github 来发送 webhook 请求到 web 服务上。这是持续部署的关键。
从你的 Github 仓库的设置中,选择 Webhook 菜单,并且点击添加 Webhook。输入以下信息
* **Payload URL:** 服务的 URL比如 <http://public_ip_address:8000/github>
* **Content type:** 选择 application/json
* **Secret:** 前面定义的 **GITHUB_SECRET** 环境变量
然后点击添加 Webhook 按钮。
![][8]
现在每当该仓库发生 push(推送)事件时Github 将向服务发送请求。
### 总结
这篇教程向你展示了如何写一个基于 Flask 的用于接收 Github 的 webhook 请求和实现持续集成的 web 服务。现在你应该能以本教程作为起点来搭建对自己有用的服务。
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/continuous-deployment-github-python/
作者:[Author Archive;Author Website;Clément Verna][a]
译者:[kimii](https://github.com/kimii)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org
[1]:http://flask.pocoo.org/docs/0.12/blueprints/
[2]:https://en.wikipedia.org/wiki/HMAC
[3]:https://gitpython.readthedocs.io/en/stable/index.html
[4]:https://developer.github.com/v3/activity/events/types/#webhook-payload-example-26
[5]:http://gunicorn.org/
[6]:https://fedoramagazine.org/howto-use-sudo/
[7]:https://github.com/cverna/github_hook_deployment.git
[8]:https://fedoramagazine.org/wp-content/uploads/2018/03/Screenshot-2018-3-26-cverna-github_hook_deployment1.png

View File

@ -0,0 +1,74 @@
如何在 Linux 中查找文件
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
如果你是 Windows 或 OSX 的非高级用户,你可能一直用 GUI 来查找文件。你也可能觉得这种方式功能有限、令人沮丧,或者两者兼有,于是学会了把文件组织得井井有条,并记住它们的确切位置。其实你在 Linux 中也可以这样做,但你不必如此。
Linux 的好处之一是它提供了多种完成同一件事的方式。你可以打开任意一个文件管理器并按下 `Ctrl`+`F`,也可以用程序手动打开文件,还可以直接开始输入字母,它会过滤当前目录的文件列表。
![Screenshot of how to find files in Linux with Ctrl+F][2]
使用 Ctrl+F 在 Linux 中查找文件的截图
但是如果你不知道文件在哪里又不想搜索整个磁盘该怎么办对于这种情况以及其他各种情况Linux 都能帮到你。
### 按命令名查找程序位置
如果你习惯随心所欲地放文件Linux 文件系统看起来会让人望而生畏。对我而言,最难习惯的一件事是找到程序在哪里。
例如,`which bash` 通常会返回 `/bin/bash`。但如果你下载了一个程序,而它没有出现在你的菜单中,`which` 命令就是一个很好的查找工具。
一个类似的工具是 `locate` 命令,我发现它对于查找配置文件很有用。我不喜欢直接输入程序名称,因为像 `locate php` 这样简单的搜索通常会返回很多需要进一步过滤的结果,下面的 man 列表之后给出一个简单的组合示例。
有关 `locate``which` 的更多信息,请参阅 `man` 页面:
* `man which`
* `man locate`
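下面是一个简单的组合示例(其中的搜索词和输出仅为演示,实际结果取决于你的系统):

```
# 查找某个命令对应的可执行文件路径
which bash            # 通常输出 /bin/bash

# 用 locate 搜索文件名,并用 grep 进一步过滤结果
# locate 依赖 updatedb 维护的索引,必要时先运行 sudo updatedb
locate php | grep php.ini
```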
### Find
`find` 工具提供了更高级的功能。以下示例来自我部署在许多服务器上的一个脚本,它用来确保符合特定模式(也称为 glob的文件只保留五天早于这个期限的文件都会被删除。这里使用小数而不是整数是为了计入自上次修改以来最多 240 分钟的时间差。)
```
find ./backup/core-files*.tar.gz -mtime +4.9 -exec rm {} \;
```
`find` 工具有许多高级用法,但最常见的是对查找结果直接执行命令,以及按类型、创建日期、修改日期来过滤文件,而无需把多个命令串联起来。
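下面是一个按类型、名称和修改时间过滤并直接执行命令的示意(目录与天数仅为示例,请按需调整):

```
# 找出 /var/log 下 7 天前修改过的普通 .log 文件并逐个压缩
find /var/log -type f -name '*.log' -mtime +7 -exec gzip {} \;
```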
find 的另一个有趣用处是找到所有有可执行权限的文件。这有助于确保没有人在你昂贵的服务器上安装比特币挖矿程序或僵尸网络。
```
find / -perm /+x
```
有关 `find` 的更多信息,请使用 `man find` 参考 `man` 页面。
### Grep
想根据文件内容来查找文件Linux 也能做到。你可以使用许多 Linux 工具来高效地搜索符合某个模式的文件,而 `grep` 是我经常使用的工具。
假设你有一个程序,它输出的错误消息里带有代码引用和堆栈跟踪,而你想在日志中找到它们。grep 并不总是最合适的工具,但如果要找的是一个确定的字符串,我经常使用 `grep -R`。
越来越多的 IDE 正在实现查找功能,但是如果你正在访问远程系统,或出于任何原因没有 GUI或者想在当前目录下递归查找请使用 `grep -R {searchterm}`;在支持 `egrep` 别名的系统上,也可以加上 `-e` 标志使用正则表达式:`egrep -r {regex-pattern}`。
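下面是一个简单的递归搜索示意(路径与搜索词仅为示例):

```
# 在 /var/log 下递归查找包含 "Traceback" 的行,并显示文件名与行号
grep -Rn "Traceback" /var/log/

# 使用扩展正则表达式匹配形如 ERR-1234 的错误码
egrep -rn "ERR-[0-9]{4}" /var/log/
```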
我在去年给 [Raspbian][3] 中的 `dhcpcd5` 打补丁时使用了这种技术,这样我就可以在[树莓派基金会][4]发布新的 Debian 时继续操作网络接入点了。
哪些提示可帮助你在 Linux 上更有效地搜索文件?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/how-find-files-linux
作者:[Lewis Cowles][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/lewiscowles1986
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/find-files-in-linux-ctrlf.png?itok=1gf9kIut (Screenshot of how to find files in Linux with Ctrl+F)
[3]:https://www.raspbian.org/
[4]:https://www.raspberrypi.org/

View File

@ -0,0 +1,150 @@
12 个 Git 提示献给 Git 12 岁生日
=====
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/party_anniversary_flag_birthday_celebrate.jpg?itok=KqfMENa7)
[Git][1] 是一个分布式版本控制系统,它已经成为开源世界中源代码管理的默认工具,在 4 月 12 日这天,它 12 岁了。使用 Git 令人沮丧的一点是,你需要掌握很多知识才能有效地使用它。但这也是使用 Git 比较美妙的一件事,因为没有什么比发现一个新技巧来简化或提升工作流效率更令人快乐了。
为了纪念 Git 的 12 岁生日,这里有 12 条技巧和诀窍来让你的 Git 经验更加有用和强大。从你可能忽略的一些基本知识开始,并扩展到一些真正的高级用户技巧!
### 1\. 你的 ~/.gitconfig 文件
当你第一次尝试使用 `git` 命令向仓库提交一个更改时,你可能会收到这样的欢迎信息:
```
*** Please tell me who you are.
Run
  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"
to set your account's default identity.
```
你可能没有意识到正是这些命令正在修改 `~/.gitconfig` 的内容,这是 Git 存储全局配置选项的地方。你可以通过 `~/.gitconfig` 文件来做大量的事,包括定义别名,永久性打开(或关闭)特定命令选项,以及修改 Git 工作方式(例如,`git diff` 使用哪个 diff 算法,或者默认使用什么类型的合并策略)。你甚至可以根据仓库的路径有条件地包含其他配置文件!所有细节请参阅 `man git-config`
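下面用几条命令演示一下对 `~/.gitconfig` 的常见修改(邮箱、姓名和算法取值均为示例):

```
# 设置身份信息(会写入 ~/.gitconfig
git config --global user.email "you@example.com"
git config --global user.name "Your Name"

# 例如:修改 git diff 默认使用的算法
git config --global diff.algorithm histogram

# 查看全局配置的全部内容
git config --global --list
```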
### 2\. 你仓库的 .gitconfig 文件
在之前的技巧中,你可能想知道 `git config` 命令中的 `--global` 标志是干什么的。它告诉 Git 更新 `~/.gitconfig` 中的“全局”配置。当然,既然有全局配置,也就意味着存在本地配置。确实如此:如果你省略 `--global` 标志,`git config` 将改为更新仓库特有的配置,该配置位于 `.git/config` 中。
`.git/config` 文件中设置的选项将覆盖 `〜/.gitconfig` 文件中的所有设置。因此,例如,如果你需要为特定仓库使用不同的电子邮件地址,则可以运行 `git config user.email "also_you@example.com"`。然后,该仓库中的任何提交都将使用你单独配置的电子邮件地址。如果你在开源项目中工作,而且希望他们显示自己的电子邮件地址,同时仍然使用自己工作邮箱作为主 Git 配置,这非常有用。
几乎任何你可以在 `〜/ .gitconfig` 中设置的东西,你也可以在 `.git / config` 中进行设置,以使其作用于特定的仓库。在线面的提示中,当我提到将某些内容添加到 `~/.gitconfig` 时,只需记住你也可以在特定仓库的 `.git/config` 中添加来设置那个选项。
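下面是一个针对单个仓库覆盖全局配置的小示例(目录和邮箱均为假设):

```
# 只为当前仓库设置另一个邮箱
cd path/to/some-open-source-project
git config user.email "also_you@example.com"

# 查看某个配置项最终生效的值及其来源文件
git config --show-origin --get user.email
```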
### 3\. 别名
别名是你可以在 `~/.gitconfig` 中做的另一件事。它的工作原理就像命令行 shell 中的别名:定义一个新的命令名称,用来调用一个或多个其他命令,而且通常带有一组特定的选项或标志。对于那些你经常使用的、又长又复杂的命令来说,别名非常有效。
你可以使用 `git config` 命令来定义别名 - 例如,运行 `git config --global --add alias.st status` 将使运行 `git st` 与运行 `git status` 做同样的事情 - 但是我在定义别名时发现,直接编辑 `~/.gitconfig` 文件通常更容易。
如果你选择使用这种方法,你会发现 `~/.gitconfig` 文件是一个 [INI 文件][2]。INI 是一种带有特定段落的键值对文件格式。当添加一个别名时,你将改变 `[alias]` 段落。例如,定义上面相同的 `git st` 别名时,添加如下到文件:
```
[alias]
st = status
```
(如果已经有 `[alias]` 段落,只需将第二行添加到现有部分。)
### 4\. shell 命令中的别名
别名不仅仅限于运行其他 Git 子 命令 - 你还可以定义运行其他 shell 命令的别名。这是一个很好的方式来处理一个反复发生的,罕见和复杂的任务:一旦你确定了如何完成它,就可以在别名下保存该命令。例如,我有一些仓库,我已经 fork 了一个开源的项目并进行了一些本地修改。我想跟上项目正在进行的开发工作,并保存我本地的变化。为了实现这个目标,我需要定期将来自上游仓库的更改合并到我 fork 的项目中 - 我通过使用我称之为 `upstream-merge` 的别名。它是这样定义的:
```
upstream-merge = !"git fetch origin -v && git fetch upstream -v && git merge upstream/master && git push"
```
别名定义开头的 `!` 告诉 Git 通过 shell 运行这个命令。这个例子涉及到运行一些 `git` 命令,但是以这种方式定义的别名可以运行任何 shell 命令。
(注意,如果你想使用我的 `upstream-merge` 别名,需要确保你有一个名为 `upstream` 的 Git remote指向你所 fork 的上游仓库。你可以通过运行 `git remote add upstream <URL to repo>` 来添加一个。)
### 5\. 可视化提交图
如果你从事的项目有很多分支上的活动,有时可能很难掌握所有正在进行的工作以及它们之间的关系。各种图形界面工具可以让你以所谓“提交图”的形式查看各个分支和提交的全貌。例如,以下是我用 [GitLab][3] 的提交图查看器可视化的我的一个仓库的一部分:
![GitLab commit graph viewer][5]
John Anderson, CC BY
如果你是一个专注于命令行的用户,或者觉得在不同工具间切换让人分心,那么可以从命令行获得类似的提交图视图。这正是 `git log` 命令的 `--graph` 参数的用武之地:
![Repository visualized with --graph command][7]
John Anderson, CC BY
下面的命令对同一个仓库进行可视化,可以达到类似的效果:
```
git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit --date=relative
```
`--graph` 选项将提交图添加到日志的左侧,`--abbrev-commit` 会缩短提交的 [SHAs][8] 值,`--date=relative` 用相对时间表示日期,而 `--pretty` 则处理所有其他的自定义格式。我为这个功能定义了 `git lg` 别名,它是我最常用的 10 个命令之一。
### 6\. 更优雅的强制推送
有时,无论你多么努力地想避免,你还是会发现需要运行 `git push --force` 来覆盖远程仓库副本上的历史记录。也许你收到了一些反馈,需要做一次交互式变基rebase也许你已经搞砸了什么想把“罪证”藏起来。
强制推送的危险在于,当其他人已经向远程仓库的同一分支推送了更改时,你强制推送重写过的历史记录就会把那些提交弄丢。这就是 `git push --force-with-lease` 的用处:如果远程分支上有你尚未取回的更新,它就不允许你强制推送,从而确保你不会丢掉别人的工作。
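一个最简单的用法示意(远程名和分支名仅为示例):

```
# 用 --force-with-lease 代替 --force
# 如果远程分支上有你尚未取回的新提交,这次推送会被拒绝
git push --force-with-lease origin my-feature-branch
```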
### 7\. git add -N
你是否用过 `git commit -a` 一口气提交所有未完成的修改,却在推送之后才发现 `git commit -a` 忽略了新添加的文件?你可以使用 `git add -N`(可以理解为 “notify”来解决这个问题它告诉 Git 你希望在第一次真正提交之前,就把这些新增文件包含在提交中。
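下面是一个简单的演示(文件名仅为示例):

```
# 新建文件后先用 -N 登记路径,并不暂存其内容
touch new_feature.py
git add -N new_feature.py

# 现在 git diff 中也能看到这个新文件的内容git commit -a 也会把它包含进来
git diff
git commit -a -m "Add new feature"
```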
### 8\. git add -p
使用 Git 时的最佳实践是确保每次提交只包含一个逻辑上的修改,无论是修复一个 bug 还是添加一个新功能。然而,有时在工作过程中,你的工作副本里会积累出值得拆成多个提交的修改。怎样才能把它们拆开,使每个提交只包含相应的修改呢?`git add --patch` 来拯救你了!
这个标志会让 `git add` 命令查看你工作副本中的所有变化,并针对每一处变化询问你是想把它暂存进本次提交、跳过,还是推迟决定(以及其他更强大的选项,你可以在运行命令后输入 `?` 来查看)。`git add -p` 是生成结构良好的提交的绝佳工具。
### 9\. git checkout -p
`git add -p` 类似,`git checkout` 命令将采用 `--patch``-p` 选项,这会使其在本地工作副本中显示每个“大块”的改动,并允许丢弃它 -- 简单来说就是将本地工作副本恢复到更改之前的状态。
这真的很棒。例如,当你追踪一个 bug 时引入了一堆调试日志语句,修正了这个 bug 之后,你可以先使用 `git checkout -p` 移除所有新的调试日志,然后 `git add -p` 来添加 bug 修复。没有比组合一个优雅的,结构良好的提交更令人满意!
### 10\. Rebase 时执行命令
有些项目有一条规则,即仓库中的每个提交都必须处于可工作状态,也就是说,在每个提交上代码都应当能编译通过,或者测试套件都应当能运行而不失败。当你在分支上工作时,这并不困难;但如果你因为某种原因最终需要 rebase那么逐个检出每个变基后的提交、确认没有意外引入破坏就很乏味了。
幸运的是,`git rebase` 的 `-x` 或 `--exec` 选项可以帮到你。`git rebase -x <cmd>` 会在每个提交被应用到变基结果之后运行该命令。因此,举个例子,如果你的项目用 `npm run tests` 运行测试套件,那么 `git rebase -x "npm run tests"` 就会在变基期间,每应用一个提交后运行一次测试套件。这样你就可以确认测试套件在每一个变基后的提交上仍然能够通过。
### 11\. 基于时间的修订引用
很多 Git 子命令都接受一个修订revision参数来决定命令作用于仓库的哪个部分。它可以是某次提交的 SHA1 值、一个分支的名称,甚至是一个符号名称,如 `HEAD`(代表当前检出分支的最后一次提交)。除了这些简单的形式,你还可以在引用后附加一个指定的日期或时间,表示“在那个时间点的这个引用”。
这个功能在某些时候会变得十分有用。当你处理最新出现的 bug自言自语道“这个功能昨天还是好好的到底又改了些什么”不用盯着满屏的 `git log` 的输出试图弄清楚什么时候更改了提交,你只需运行 `git diff HEAD@{yesterday}`,看看从昨天以来的所有修改。这也适用于更长的时间段(例如 `git diff HEAD@{'2 months ago'}`),以及一个确切的日期(例如 `git diff HEAD@{'2010-01-01 12:00:00'}`)。
你可以把这些基于日期的修订参数用于任何接受修订参数的 Git 子命令。在 `gitrevisions` 手册页中有关于具体使用哪种格式的详细信息。
### 12\. 能知道所有的 reflog
你是不是曾在 rebase 时丢弃过某次提交之后却发现还需要其中的一些东西你可能觉得这些信息已经永远找不回来了只能重新做一遍。但是只要你曾在本地工作副本中提交过这个提交就会被添加到引用日志reflog你仍然可以访问到它。
运行 `git reflog` 将在本地工作副本中显示当前分支的所有活动的列表,并为你提供每个提交的 SHA1 值。一旦发现你 rebase 时放弃的那个提交,你可以运行 `git checkout <SHA1>` 跳转到该提交,复制任何你需要的信息,然后再运行 `git checkout HEAD` 返回到分支最近的提交去。
### 以上是全部内容
希望这些技巧中至少有一个能够教给你一些关于 Git 的新东西Git 是一个有12年历史的项目并且在持续创新和增加新功能中。你最喜欢的 Git 技巧是什么?
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/12-git-tips-gits-12th-birthday
作者:[John SJ Anderson][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/genehack
[1]:https://git-scm.com/
[2]:https://en.wikipedia.org/wiki/INI_file
[3]:https://gitlab.com/
[4]:/file/392941
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gui_graph.png?itok=3GovYfG1 (GitLab commit graph viewer)
[6]:/file/392936
[7]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/console_graph.png?itok=XogY1P8M (Repository visualized with --graph command)
[8]:https://en.wikipedia.org/wiki/Secure_Hash_Algorithms