Merge pull request #2 from LCTT/master

update
This commit is contained in:
MjSeven 2018-03-22 23:35:37 +08:00 committed by GitHub
commit a7b3bfc8dc
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
10 changed files with 1508 additions and 0 deletions

View File

@ -0,0 +1,59 @@
6 common questions about agile development practices for teams
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard.png?itok=kBeRTFL1)
"Any questions?"
You've probably heard a speaker ask this question at the end of their presentation. This is the most important part of the presentation—after all, you didn't attend just to hear a lecture but to participate in a conversation and a community.
Recently I had the opportunity to hear my fellow Red Hatters present a session called "[Agile in Practice][1]" to a group of technical students at a local university. During the session, software engineer Tomas Tomecek and agile practitioners Fernando Colleone and Pavel Najman collaborated to explain the foundations of agile methodology and showcase best practices for day-to-day activities.
### 1\. What is the perfect team size?
Knowing that students attended this session to learn what agile practice is and how to apply it to projects, I wondered how the students' questions would compare to those I hear every day as an agile practitioner at Red Hat. It turns out that the students asked the same questions as my colleagues. These questions drive straight into the core of agile in practice.
Students wanted to know the size of a small team versus a large team. This issue is relevant to anyone who has ever teamed up to work on a project. Based on Tomas's experience as a tech leader, 12 people working on a project would be considered a large team. In the real world, team size is not often directly correlated to productivity. In some cases, a smaller team located in a single location or time zone might be more productive than a larger team that's spread around the world. Ultimately, the presenters suggested that the ideal team size is probably five people (which aligns with the scrum guideline of seven, plus or minus two).
### 2\. What operational challenges do teams face?
The presenters compared projects supported by local teams (teams with all members in one office or within close proximity to each other) with distributed teams (teams located in different time zones). Engineers prefer local teams when the project requires close cooperation among team members because delays caused by time differences can destroy the "flow" of writing software. At the same time, distributed teams can bring together skill sets that may not be available locally and are great for certain development use cases. Also, there are various best practices to improve cooperation in distributed teams.
### 3\. How much time is needed to groom the backlog?
Because this was an introductory talk targeting students who were new to agile, the speakers focused on [Scrum][2] and [Kanban][3] as ways to make agile specific for them. They used the Scrum framework to illustrate a method of writing software and Kanban for a communication and work planning system. On the question of time needed to groom a project's backlog, the speakers explained that there is no fixed rule. Rather, practice makes perfect: During the early stages of development, when a project is new—and especially if some members of the team are new to agile—grooming can consume several hours per week. Over time and with practice, it becomes more efficient.
### 4\. Is a product owner necessary? What is their role?
Product owners help facilitate scaling; however, what matters is not the job title, but that you have someone on your team who represents the customer's voice and goals. In many teams, especially those that are part of a larger group of engineering teams working on a single output, a lead engineer can serve as the product owner.
### 5\. What agile tools do you suggest using? Is specific software necessary to implement Scrum or Kanban in practice?
Although proprietary software such as Jira or Trello can be helpful, especially with large numbers of contributors on big enterprise projects, these tools are not required. Scrum and Kanban can be done with tools as simple as paper cards. The key is to have a clear source of information and strong communication across the entire team. That said, two excellent open source kanban tools are [Taiga][4] and [Wekan][5]. For more information, see [5 open source alternatives to Trello][6] and [Top 7 open source project management tools for agile teams][7].
### 6\. How can students use agile techniques for school projects?
The presenters encouraged students to use kanban to visualize and outline tasks to be completed before the end of the project. The key is to create a common board so the entire team can see the status of the project. By using kanban or a similar high-visibility strategy, students won't get to the end of the project and discover that a particular team member has not been keeping up.
Scrum practices such as sprints and daily standups are also excellent ways to ensure that everyone is making progress and that the various parts of the project will work together at the end. Regular check-ins and information-sharing are also essential. To learn more about Scrum, see [What is scrum?][8].
Remember that Kanban and Scrum are just two of many tools and frameworks that make up agile. They may not be the best approach for every situation.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/agile-mindset
作者:[Dominika Bula][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dominika
[1]:http://zijemeit.cz/sessions/agile-in-practice/
[2]:https://www.scrum.org/resources/what-is-scrum
[3]:https://en.wikipedia.org/wiki/Kanban
[4]:https://taiga.io/
[5]:https://wekan.github.io/
[6]:https://opensource.com/alternatives/trello
[7]:https://opensource.com/article/18/2/agile-project-management-tools
[8]:https://opensource.com/resources/scrum

View File

@ -0,0 +1,70 @@
Can we build a social network that serves users rather than advertisers?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_team_community_group.png?itok=Nc_lTsUK)
Today, open source software is far-reaching and has played a key role driving innovation in our digital economy. The world is undergoing radical change at a rapid pace. People in all parts of the world need a purpose-built, neutral, and transparent online platform to meet the challenges of our time.
And open principles might just be the way to get us there. What would happen if we married digital innovation with social innovation using open-focused thinking?
This question is at the heart of our work at [Human Connection][1], a forward-thinking, Germany-based knowledge and action network with a mission to create a truly social network that serves the world. We're guided by the notion that human beings are inherently generous and sympathetic, and that they thrive on benevolent actions. But we haven't seen a social network that has fully supported our natural tendency towards helpfulness and cooperation to promote the common good. Human Connection aspires to be the platform that allows everyone to become an active changemaker.
In order to achieve the dream of a solution-oriented platform that enables people to take action around social causes by engaging with charities, community groups, and social change activists, Human Connection embraces open values as a vehicle for social innovation.
Here's how.
### Transparency first
Transparency is one of Human Connection's guiding principles. Human Connection invites programmers around the world to jointly work on the platform's source code (JavaScript, Vue, Nuxt) by [making the source code available on GitHub][2], and to support the idea of a truly social network by contributing code or programming additional functions.
But our commitment to transparency extends beyond our development practices. In fact—when it comes to building a new kind of social network that promotes true connection and interaction between people who are passionate about changing the world for the better—making the source code available is just one step towards being transparent.
To facilitate open dialogue, the Human Connection team holds [regular public meetings online][3]. Here we answer questions, encourage suggestions, and respond to potential concerns. Our Meet The Team events are also recorded and made available to the public afterwards. By being fully transparent with our process, our source code, and our finances, we can protect ourselves against critics or other potential backlashes.
The commitment to transparency also means that all user contributions that are shared publicly on Human Connection will be released under a Creative Commons license and can eventually be downloaded as a data pack. By making crowd knowledge available, especially in a decentralized way, we create the opportunity for social pluralism.
Guiding all of our organizational decisions is one question: "Does it serve the people and the greater good?" And we use the [UN Charter][4] and the Universal Declaration of Human Rights as a foundation for our value system. As we grow bigger, especially with our upcoming open beta launch, it's important for us to stay accountable to that mission. I'm even open to the idea of inviting the Chaos Computer Club or other hacker clubs to verify the integrity of our code and our actions by randomly checking into our platform.
### A collaborative community
A [collaborative, community-centered approach][5] to programming the Human Connection platform is the foundation for an idea that extends beyond the practical applications of a social network. Our team is driven by finding an answer to the question: "What makes a social network truly social?"
A network that abandons the idea of a profit-driven algorithm serving advertisers instead of end-users can only thrive by turning to the process of peer production and collaboration. Organizations like [Code Alliance][6] and [Code for America][7], for example, have demonstrated how technology can be created in an open source environment to benefit humanity and disrupt the status quo. Community-driven projects like the map-based reporting platform [FixMyStreet][8] or the [Tasking Manager][9] built for the Humanitarian OpenStreetMap initiative have embraced crowdsourcing as a way to move their mission forward.
Our approach to building Human Connection has been collaborative from the start. To gather initial data on the necessary functions and the purpose of a truly social network, we collaborated with the National Institute for Oriental Languages and Civilizations (INALCO) at the University Sorbonne in Paris and the Stuttgart Media University in Germany. Research findings from both projects were incorporated into the early development of Human Connection. Thanks to that research, [users will have a whole new set of functions available][10] that put them in control of what content they see and how they engage with others. As early supporters are [invited to the network's alpha version][10], they can experience the first available noteworthy functions. Here are just a few:
* Linking information to action was one key theme emerging from our research sessions. Current social networks leave users in the information stage. Student groups at both universities saw a need for an action-oriented component that serves our human instinct of working together to solve problems. So we built a ["Can Do" function][11] into our platform. It's one of the ways individuals can take action after reading about a certain topic. "Can Do's" are user-suggested activities in the "Take Action" area that everyone can implement.
* The "Versus" function is another defining result. Where traditional social networks are limited to a comment function, our student groups saw the need for a more structured and useful way to engage in discussions and arguments. A "Versus" is a counter-argument to a public post that is displayed separately and provides an opportunity to highlight different opinions around an issue.
* Today's social networks don't provide a lot of options to filter content. Research has shown that a filtering option by emotions can help us navigate the social space in accordance with our daily mood and potentially protect our emotional wellbeing by not displaying sad or upsetting posts on a day where we want to see uplifting content only.
Human Connection invites changemakers to collaborate on the development of a network with the potential to mobilize individuals and groups around the world to turn negative news into "Can Do's"—and participate in social innovation projects in conjunction with charities and non-profit organizations.
[Subscribe to our weekly newsletter][12] to learn more about open organizations.
--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/18/3/open-social-human-connection
作者:[Dennis Hack][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dhack
[1]:https://human-connection.org/en/
[2]:https://github.com/human-connection/
[3]:https://youtu.be/tPcYRQcepYE
[4]:http://www.un.org/en/charter-united-nations/index.html
[5]:https://youtu.be/BQHBno-efRI
[6]:http://codealliance.org/
[7]:https://www.codeforamerica.org/
[8]:http://fixmystreet.org/
[9]:https://tasks.hotosm.org/
[10]:https://youtu.be/AwSx06DK2oU
[11]:https://youtu.be/g2gYLNx686I
[12]:https://opensource.com/open-organization/resources/newsletter

View File

@ -0,0 +1,66 @@
8 tips for better agile retrospective meetings
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_meeting.png?itok=4_CivQgp)
I've often thought that retrospectives should be called prospectives, as that term concerns the future rather than focusing on the past. The retro itself is truly future-looking: It's the space where we can ask the question, “With what we know now, what's the next experiment we need to try for improving our lives, and the lives of our customers?”
### What's a retro supposed to look like?
There are two significant loops in product development: One produces the desired potentially shippable nugget. The other is where we examine how we're working—not only to avoid doing what didn't work so well, but also to determine how we can amplify the stuff we do well—and devise an experiment to pull into the next production loop to improve how our team is delighting our customers. This is the loop on the right side of this diagram:
![Retrospective 1][2]
### When retros implode
While attending various teams' iteration retrospective meetings, I saw a common thread of malcontent associated with a relentless focus on continuous improvement.
One of the engineers put it bluntly: “[Our] continuous improvement feels like we are constantly failing.”
The teams talked about what worked, restated the stuff that didn't work (perhaps already feeling like they were constantly failing), nodded to one another, and gave long sighs. Then one of the engineers (already late for another meeting) finally summed up the meeting: “Ok, let's try not to submit all of the code on the last day of the sprint.” There was no opportunity to amplify the good, as the good was not discussed.
In effect, here's what the retrospective felt like:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/retro_2.jpg?itok=HrDkppCG)
The anti-pattern is where retrospectives become dreaded sessions where we look back at the last iteration, make two columns—what worked and what didn't work—and quickly come to some solution for the next iteration. There is no [scientific method][3] involved. There is no data gathering and research, no hypothesis, and very little deep thought. The result? You don't get an experiment or a potential improvement to pull into the next iteration.
### 8 tips for better retrospectives
1. Amplify the good! Instead of focusing on what didn't work well, why not begin the retro by having everyone mention one positive item first?
2. Don't jump to a solution. Thinking about a problem deeply instead of trying to solve it right away might be a better option.
3. If the retrospective doesn't make you feel excited about an experiment, maybe you shouldn't try it in the next iteration.
4. If you're not analyzing how to improve ([5 Whys][4], [force-field analysis][5], [impact mapping][6], or [fish-boning][7]), you might be jumping to solutions too quickly.
5. Vary your methods. If every time you do a retrospective you ask, “What worked, what didn't work?” and then vote on the top item from either column, your team will quickly get bored. [Retromat][8] is a great free retrospective tool to help vary your methods.
6. End each retrospective by asking for feedback on the retro itself. This might seem a bit meta, but it works: Continually improving the retrospective is recursively improving as a team.
7. Remove the impediments. Ask how you are enabling the team's search for improvement, and be prepared to act on any feedback.
8. There are no "iteration police." Take breaks as needed. Deriving hypotheses from analysis and coming up with experiments involves creativity, and it can be taxing. Every once in a while, go out as a team and enjoy a nice retrospective lunch.
This article was inspired by [Retrospective anti-pattern: continuous improvement should not feel like constantly failing][9], posted at [Podojo.com][10].
**[See our related story, [How to build a business case for DevOps transformation][11].]**
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/tips-better-agile-retrospective-meetings
作者:[Catherine Louis][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/catherinelouis
[1]:/file/389021
[2]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/retro_1.jpg?itok=bggmHN1Q (Retrospective 1)
[3]:https://en.wikipedia.org/wiki/Scientific_method
[4]:https://en.wikipedia.org/wiki/5_Whys
[5]:https://en.wikipedia.org/wiki/Force-field_analysis
[6]:https://opensource.com/open-organization/17/6/experiment-impact-mapping
[7]:https://en.wikipedia.org/wiki/Ishikawa_diagram
[8]:https://plans-for-retrospectives.com/en/?id=28
[9]:http://www.podojo.com/retrospective-anti-pattern-continuous-improvement-should-not-feel-like-constantly-failing/
[10]:http://www.podojo.com/
[11]:https://opensource.com/article/18/2/how-build-business-case-devops-transformation

View File

@ -1,3 +1,5 @@
[translating for laujinseoi]
7 Best eBook Readers for Linux
======
**Brief:** In this article, we are covering some of the best ebook readers for Linux. These apps give a better reading experience and some will even help in managing your ebooks.

View File

@ -1,3 +1,5 @@
Translating by MjSeven
How to setup and configure network bridge on Debian Linux
======

View File

@ -1,3 +1,5 @@
hankchow translating
How to measure particulate matter with a Raspberry Pi
======

View File

@ -0,0 +1,347 @@
How To Edit Multiple Files Using Vim Editor
======
![](https://www.ostechnix.com/wp-content/uploads/2018/03/Edit-Multiple-Files-Using-Vim-Editor-720x340.png)
Sometimes, you will find yourself in a situation where you want to make changes in multiple files, or you might want to copy the contents of one file to another. In GUI mode, you could simply open the files in any graphical text editor, like gedit, and use CTRL+C and CTRL+V to copy/paste the contents. In CLI mode, you can't use such editors. No worries! Where there is the vim editor, there is a way! In this tutorial, we are going to learn to edit multiple files at the same time using the Vim editor. Trust me, this is a very interesting read.
### Installing Vim
The Vim editor is available in the official repositories of most Linux distributions, so you can install it using the default package manager. For example, on Arch Linux and its variants, you can install it using the command:
```
$ sudo pacman -S vim
```
On Debian, Ubuntu:
```
$ sudo apt-get install vim
```
On RHEL, CentOS:
```
$ sudo yum install vim
```
On Fedora:
```
$ sudo dnf install vim
```
On openSUSE:
```
$ sudo zypper install vim
```
### Edit multiple files at a time using Vim editor in Linux
Let us now get down to business. We can do this in two ways.
#### Method 1
I have two files, namely **file1.txt** and **file2.txt**, with a bunch of random words. Let us have a look at them.
```
$ cat file1.txt
ostechnix
open source
technology
linux
unix
$ cat file2.txt
line1
line2
line3
line4
line5
```
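To follow along, you can recreate these two sample files yourself (the contents here simply mirror the listings above):

```shell
# Recreate the sample files from the listings above
printf '%s\n' ostechnix 'open source' technology linux unix > file1.txt
printf '%s\n' line1 line2 line3 line4 line5 > file2.txt

# Verify the contents
cat file1.txt file2.txt
```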
Now, let us edit these two files at the same time. To do so, run:
```
$ vim file1.txt file2.txt
```
Vim will display the contents of the files in order: the first file's contents are shown first, then the second file's, and so on.
![][2]
**Switch between files**
To move to the next file, type:
```
:n
```
![][3]
To go back to the previous file, type:
```
:N
```
Vim won't allow you to move to the next file if there are any unsaved changes. To save the changes in the current file, type:
```
ZZ
```
Please note that it is two capital letters, ZZ (SHIFT+z twice).
To abandon the changes and move to the previous file, type:
```
:N!
```
To view the files which are being currently edited, type:
```
:buffers
```
![][4]
You will see the list of loaded files at the bottom.
![][5]
To switch to a particular file, type **:buffer** followed by the buffer number. For example, to switch to the first file, type:
```
:buffer 1
```
![][6]
**Opening additional files for editing**
We are currently editing two files, namely file1.txt and file2.txt. I want to open another file named **file3.txt** for editing.
What will you do? It's easy! Just type **:e** followed by the file name, like below.
```
:e file3.txt
```
![][7]
Now you can edit file3.txt.
To view how many files are being edited currently, type:
```
:buffers
```
![][8]
Please note that you cannot switch between files opened with **:e** using **:n** or **:N**. To switch to another file, type **:buffer** followed by the file's buffer number.
**Copying contents of one file into another**
You know how to open and edit multiple files at the same time. Sometimes, you might want to copy the contents of one file into another. It is possible too. Switch to a file of your choice. For example, let us say you want to copy the contents of file1.txt into file2.txt.
To do so, first switch to file1.txt:
```
:buffer 1
```
Place the cursor on the line you want to copy and type **yy** to yank (copy) it. Then, move to file2.txt:
```
:buffer 2
```
Place the cursor where you want to paste the copied line from file1.txt and type **p**. For example, to paste the copied line between line2 and line3, put the cursor on line2 and type **p**.
Sample output:
```
line1
line2
ostechnix
line3
line4
line5
```
![][9]
To save the changes made in the current file, type:
```
ZZ
```
Again, please note that this is two capital letters, ZZ (SHIFT+z twice).
To save the changes in all the files and exit the vim editor, type:
```
:wqa
```
Similarly, you can copy any line from any file to other files.
**Copying entire file contents into another**
We know how to copy a single line. What about the entire file contents? That's also possible. Let us say you want to copy the entire contents of file1.txt into file2.txt.
To do so, open the file2.txt first:
```
$ vim file2.txt
```
If the files are already loaded, you can switch to file2.txt by typing:
```
:buffer 2
```
Move the cursor to the place where you want to insert the contents of file1.txt. I want to insert the contents of file1.txt after line5 in file2.txt, so I moved the cursor to line 5. Then, type the following command and hit the ENTER key:
```
:r file1.txt
```
![][10]
Here, **r** means **read**.
Now you will see that the contents of file1.txt are pasted after line5 in file2.txt.
```
line1
line2
line3
line4
line5
ostechnix
open source
technology
linux
unix
```
![][11]
To save the changes in the current file, type:
```
ZZ
```
To save all changes in all loaded files and exit the vim editor, type:
```
:wqa
```
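The same copy can also be scripted non-interactively with Vim's silent Ex mode (`vim -e -s`), which reads Ex commands from standard input. This is a sketch under the assumption that vim is installed; `2r !head -n 1 file1.txt` reads the first line of file1.txt in after line 2, mirroring the interactive yank-and-put example above:

```shell
# Recreate the sample files from the article
printf '%s\n' ostechnix 'open source' technology linux unix > file1.txt
printf '%s\n' line1 line2 line3 line4 line5 > file2.txt

# Silent Ex mode: insert the first line of file1.txt after line 2
# of file2.txt, then write and quit
vim -e -s file2.txt <<'EOF'
2r !head -n 1 file1.txt
wq
EOF

cat file2.txt
```

After this runs, file2.txt matches the sample output shown earlier, with `ostechnix` sitting between line2 and line3.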
#### Method 2
Another way to open multiple files at once is to use either the **-o** or **-O** flag.
To open multiple files in horizontal windows, run:
```
$ vim -o file1.txt file2.txt
```
![][12]
To switch between windows, press **CTRL-w w** (i.e., press **CTRL+w** and then press **w** again). Or, use the following shortcuts to move between windows.
* **CTRL-w k** top window
* **CTRL-w j** bottom window
To open multiple files in vertical windows, run:
```
$ vim -O file1.txt file2.txt file3.txt
```
![][13]
To switch between windows, press **CTRL-w w** (i.e., press **CTRL+w** and then press **w** again). Or, use the following shortcuts to move between windows.
  * **CTRL-w h** left window
  * **CTRL-w l** right window
Everything else is the same as described in method 1.
For example, to list currently loaded files, run:
```
:buffers
```
To switch between files:
```
:buffer 1
```
To open an additional file, type:
```
:e file3.txt
```
To copy entire contents of a file into another:
```
:r file1.txt
```
The only difference in method 2 is that once you save the changes in the current file using **ZZ**, its window closes automatically, so you need to close the files one by one with **ZZ** or **:wq**. In method 1, by contrast, typing **:wqa** saves the changes in all files and closes them all at once.
For more details, refer to the man pages:
```
$ man vim
```
You now know how to edit multiple files using the vim editor in Linux. As you can see, editing multiple files is not that difficult. The Vim editor has many more powerful features, and we will write more about Vim in the days to come.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-edit-multiple-files-using-vim-editor/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-1-1.png
[3]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-2.png
[4]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-5.png
[5]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-6.png
[6]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-7.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-8.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-10-1.png
[9]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-11.png
[10]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-12.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/03/edit-multiple-files-13.png
[12]:http://www.ostechnix.com/wp-content/uploads/2018/03/Edit-multiple-files-16.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/03/Edit-multiple-files-17.png

View File

@ -0,0 +1,435 @@
How To Manage Disk Partitions Using Parted Command
======
We all know that managing disk partitions is one of the most important tasks for a Linux administrator; they cannot get by without knowing it.
At a minimum, admins get such a request from a dependent team about once a week, and in big environments they receive it far more often.
You may ask why we should use parted instead of fdisk. What is the difference? It's a good question; I will give you more details about it.
* Parted allows users to create partitions on disks larger than 2TB, which fdisk does not support.
* Parted is a higher-level tool than fdisk.
* It supports multiple partition table formats, including GPT.
* It allows users to resize partitions, but in my experience shrinking did not work as expected and produced errors most of the time, so I would advise against shrinking partitions.
### What Is Parted
Parted is a program to manipulate disk partitions. It supports multiple partition table formats, including MS-DOS and GPT.
It allows users to create, delete, resize, shrink, move, and copy partitions, reorganize disk usage, and copy data to new hard disks. GParted is a GUI frontend for parted.
### How To Install Parted
The parted package is pre-installed on most Linux distributions. If it's not, use one of the following commands to install it.
For **`Debian/Ubuntu`** , use [APT-GET Command][1] or [APT Command][2] to install parted.
```
$ sudo apt install parted
```
For **`RHEL/CentOS`** , use [YUM Command][3] to install parted.
```
$ sudo yum install parted
```
For **`Fedora`** , use [DNF Command][4] to install parted.
```
$ sudo dnf install parted
```
For **`Arch Linux`** , use [Pacman Command][5] to install parted.
```
$ sudo pacman -S parted
```
For **`openSUSE`** , use [Zypper Command][6] to install parted.
```
$ sudo zypper in parted
```
### How To Launch Parted
The parted command below picks the `/dev/sda` disk automatically, because it is the first hard drive in this system.
```
$ sudo parted
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
```
We can also switch to a particular disk by selecting it with the `select` command:
```
(parted) select /dev/sdb
Using /dev/sdb
(parted)
```
If you want to open a particular disk directly, use the following format. In our case, we are going to use `/dev/sdb`.
```
$ sudo parted [Device Name]
$ sudo parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
```
### How To List Available Disks Using Parted Command
If you don't know which disks are attached to your system, just run the following command. It will display all the available disk names, along with other useful information such as disk size, model, sector size, partition table, disk flags, and partition details.
```
$ sudo parted -l
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 32.2GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 32.2GB 32.2GB primary ext4 boot
Error: /dev/sdb: unrecognised disk label
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
```
The above error message clearly shows that there is no valid disk label on the disk `/dev/sdb`. Hence, we have to set a `disk label` first, as parted does not assign one automatically.
### How To Create Disk Partition Using Parted Command
Parted allows us to create primary or extended partitions. The procedure is the same for both, but make sure you pass the appropriate partition type, `primary` or `extended`, when creating the partition.
To perform this activity, we have added a new `50GB` hard disk to the system, which appears as `/dev/sdb`.
We can create a partition in two ways: the detailed, interactive way, or with a single command. In the example below, we are going to add one primary partition the detailed way. Note that we must set a `disk label` first, as parted does not assign one automatically.
In the example below, we are going to create a new `10GB` partition.
```
$ sudo parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel msdos
(parted) unit GB
(parted) mkpart
Partition type? primary/extended? primary
File system type? [ext2]? ext4
Start? 0.00GB
End? 10.00GB
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 0.00GB 10.0GB 10.0GB primary ext4 lba
(parted) quit
Information: You may need to update /etc/fstab.
```
Alternatively, we can create a new partition with a single parted command.
We are going to create a second `10GB` partition in the example below.
```
$ sudo parted [Disk Name] [mkpart] [Partition Type] [Filesystem Type] [Partition Start Size] [Partition End Size]
$ sudo parted /dev/sdb mkpart primary ext4 10.0GB 20.0GB
Information: You may need to update /etc/fstab.
```
### How To Create A Partition With All Remaining Space
Suppose you have created all required partitions except `/home`, and you want to give all the remaining space to the `/home` partition. How do you do that? Use the following command.
The command below creates a new `33.7GB` partition, which starts at `20GB` and ends at `53.7GB`, the end of the disk. Passing `100%` as the end size creates a partition spanning all the remaining available space on the disk.
```
$ sudo parted [Disk Name] [mkpart] [Partition Type] [Filesystem Type] [Partition Start Size] [Partition End Size]
$ sudo parted /dev/sdb mkpart primary ext4 20.0GB 100%
Information: You may need to update /etc/fstab.
```
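The interactive and single-command steps above can also be collected into one non-interactive script using parted's `-s` (script) mode. The sketch below is a dry run: the device and sizes are taken from this article's example, and the `DRYRUN=echo` guard prints each command instead of executing it, so nothing touches any disk. Remove the guard only against a disk you can safely wipe.

```shell
#!/bin/sh
# Dry-run sketch of the whole layout above in parted script mode (-s).
# DISK and the sizes come from this article's example; DRYRUN=echo means
# each command is printed, not executed.
DISK=/dev/sdb
DRYRUN=echo

$DRYRUN sudo parted -s "$DISK" mklabel msdos
$DRYRUN sudo parted -s "$DISK" mkpart primary ext4 0% 10GB
$DRYRUN sudo parted -s "$DISK" mkpart primary ext4 10GB 20GB
$DRYRUN sudo parted -s "$DISK" mkpart primary ext4 20GB 100%
$DRYRUN sudo parted -s "$DISK" print
```

Because `-s` suppresses all prompts, this pattern is also what you would use from provisioning scripts.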
### How To List All Partitions using Parted
We have created three partitions in the steps above. To list all available partitions on the disk, use the print command.
```
$ sudo parted /dev/sdb print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 10.0GB 9999MB primary ext4
2 10.0GB 20.0GB 9999MB primary ext4
3 20.0GB 53.7GB 33.7GB primary ext4
```
### How To Create A File System On Partition Using mkfs
Users can create a file system on a partition using mkfs. Follow the procedure below to create a filesystem with mkfs.
```
$ sudo mkfs.ext4 /dev/sdb1
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 2621440 4k blocks and 656640 inodes
Filesystem UUID: 415cf467-634c-4403-8c9f-47526bbaa381
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
```
Do the same for other partitions as well.
```
$ sudo mkfs.ext4 /dev/sdb2
$ sudo mkfs.ext4 /dev/sdb3
```
Create the necessary mount points and mount the partitions on them.
```
$ sudo mkdir /par1 /par2 /par3
$ sudo mount /dev/sdb1 /par1
$ sudo mount /dev/sdb2 /par2
$ sudo mount /dev/sdb3 /par3
```
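The mounts above do not survive a reboot. To make them permanent, the partitions would also need entries in `/etc/fstab`; a sketch is below. The UUID for `/dev/sdb1` is the one printed by mkfs above, while the other two are placeholders (run `blkid` to see the real values on your system).

```
# /etc/fstab entries for the example partitions (sketch).
# Run blkid for the real UUIDs; the last two below are placeholders.
UUID=415cf467-634c-4403-8c9f-47526bbaa381 /par1 ext4 defaults 0 2
UUID=<uuid-of-sdb2>                       /par2 ext4 defaults 0 2
UUID=<uuid-of-sdb3>                       /par3 ext4 defaults 0 2
```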
Run the following command to check the newly mounted partitions.
```
$ df -h /dev/sdb[1-3]
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 9.2G 37M 8.6G 1% /par1
/dev/sdb2 9.2G 37M 8.6G 1% /par2
/dev/sdb3 31G 49M 30G 1% /par3
```
### How To Check Free Space On The Disk
Run the following command to check available free space on the disk. This disk has `25.7GB` of free disk space.
```
$ sudo parted /dev/sdb print free
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 10.0GB 9999MB primary ext4
2 10.0GB 20.0GB 9999MB primary ext4
3 20.0GB 28.0GB 8001MB primary ext4
28.0GB 53.7GB 25.7GB Free Space
```
### How To Resize Partition Using Parted Command
Parted allows users to grow and shrink partitions. As mentioned at the beginning of the article, avoid shrinking partitions, because it can easily lead to disk errors and data loss.
Run the following command to check the disk partitions and available free space. There is `25.7GB` of free space on this disk.
```
$ sudo parted /dev/sdb print free
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 10.0GB 9999MB primary ext4
2 10.0GB 20.0GB 9999MB primary ext4
3 20.0GB 28.0GB 8001MB primary ext4
28.0GB 53.7GB 25.7GB Free Space
```
Run the following command to resize the partition. We are going to grow partition 3 by moving its end from `28GB` to `33GB`.
```
$ sudo parted [Disk Name] [resizepart] [Partition Number] [Partition New End Size]
$ sudo parted /dev/sdb resizepart 3 33.0GB
Information: You may need to update /etc/fstab.
```
Run the following command to verify that the partition was resized. Partition 3 has grown from `8GB` to `13GB`.
```
$ sudo parted /dev/sdb print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 10.0GB 9999MB primary ext4
2 10.0GB 20.0GB 9999MB primary ext4
3 20.0GB 33.0GB 13.0GB primary ext4
```
Grow the file system so that it fills the resized partition.
```
$ sudo resize2fs /dev/sdb3
resize2fs 1.43.4 (31-Jan-2017)
Resizing the filesystem on /dev/sdb3 to 3173952 (4k) blocks.
The filesystem on /dev/sdb3 is now 3173952 (4k) blocks long.
```
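The two resize steps can be chained in a small script. This is a dry-run sketch using the same device, partition number, and new end size as above; the `DRYRUN=echo` guard prints each command instead of running it.

```shell
#!/bin/sh
# Dry-run sketch: grow partition 3 and then its ext4 filesystem.
# Values come from the example above; DRYRUN=echo prints each
# command instead of executing it, so no disk is modified.
DISK=/dev/sdb
PART=3
NEW_END=33.0GB
DRYRUN=echo

$DRYRUN sudo parted "$DISK" resizepart "$PART" "$NEW_END"
$DRYRUN sudo resize2fs "${DISK}${PART}"
```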
Finally, check whether the mounted filesystem reflects the increased size.
```
$ df -h /dev/sdb[1-3]
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 9.2G 5.1G 3.6G 59% /par1
/dev/sdb2 9.2G 2.1G 6.6G 24% /par2
/dev/sdb3 12G 1.1G 11G 10% /par3
```
### How To Remove Partition Using Parted Command
We can simply remove an unused partition (if the partition is no longer needed) using the rm command. See the procedure below; we are going to remove partition 3, `/dev/sdb3`, in this example. Note that the partition is still mounted here, which is why parted warns that it is in use; unmount the partition first to avoid having to reboot.
```
$ sudo parted [Disk Name] [rm] [Partition Number]
$ sudo parted /dev/sdb rm 3
Warning: Partition /dev/sdb3 is being used. Are you sure you want to continue?
Yes/No? Yes
Error: Partition(s) 3 on /dev/sdb have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use.
You should reboot now before making further changes.
Ignore/Cancel? Ignore
Information: You may need to update /etc/fstab.
```
We can verify this with the command below. Partition 3 has been removed successfully.
```
$ sudo parted /dev/sdb print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 10.0GB 9999MB primary ext4
2 10.0GB 20.0GB 9999MB primary ext4
```
### How To Set/Change Partition Flag Using Parted Command
We can easily change a partition flag using the command below. We are going to set the `lvm` flag on partition 2, `/dev/sdb2`.
```
$ sudo parted [Disk Name] [set] [Partition Number] [Flags Name] [Flag On/Off]
$ sudo parted /dev/sdb set 2 lvm on
Information: You may need to update /etc/fstab.
```
We can verify this modification by listing disk partitions.
```
$ sudo parted /dev/sdb print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 10.0GB 9999MB primary ext4
2 10.0GB 20.0GB 9999MB primary ext4 lvm
```
To see the list of available flags, use the following command at the parted prompt.
```
(parted) help set
set NUMBER FLAG STATE change the FLAG on partition NUMBER
NUMBER is the partition number used by Linux. On MS-DOS disk labels, the primary partitions number from 1 to 4, logical partitions from 5 onwards.
FLAG is one of: boot, root, swap, hidden, raid, lvm, lba, hp-service, palo, prep, msftres, bios_grub, atvrecv, diag, legacy_boot, msftdata, irst, esp
STATE is one of: on, off
```
If you want to know all the available options in parted, just run the `help` command.
```
$ sudo parted
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) help
align-check TYPE N check partition N for TYPE(min|opt) alignment
help [COMMAND] print general help, or help on COMMAND
mklabel,mktable LABEL-TYPE create a new disklabel (partition table)
mkpart PART-TYPE [FS-TYPE] START END make a partition
name NUMBER NAME name partition NUMBER as NAME
print [devices|free|list,all|NUMBER] display the partition table, available devices, free space, all found partitions, or a particular partition
quit exit program
rescue START END rescue a lost partition near START and END
resizepart NUMBER END resize partition NUMBER
rm NUMBER delete partition NUMBER
select DEVICE choose the device to edit
disk_set FLAG STATE change the FLAG on selected device
disk_toggle [FLAG] toggle the state of FLAG on selected device
set NUMBER FLAG STATE change the FLAG on partition NUMBER
toggle [NUMBER [FLAG]] toggle the state of FLAG on partition NUMBER
unit UNIT set the default unit to UNIT
version display the version number and copyright information of GNU Parted
(parted) quit
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-manage-disk-partitions-using-parted-command/
作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[2]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[3]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[4]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[5]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[6]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/

hankchow translating
How to use Ansible to patch systems and install applications
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4)
Have you ever wondered how to patch your systems, reboot, and continue working?
If so, you'll be interested in [Ansible][1], a simple configuration management tool that can make some of the hardest work easy, such as system administration tasks that are complicated, take hours to complete, or have strict security requirements.
In my experience, one of the hardest parts of being a sysadmin is patching systems. Every time you get a Common Vulnerabilities and Exposure (CVE) notification or Information Assurance Vulnerability Alert (IAVA) mandated by security, you have to kick into high gear to close the security gaps. (And, believe me, your security officer will hunt you down unless the vulnerabilities are patched.)
Ansible can reduce the time it takes to patch systems by running [packaging modules][2]. To demonstrate, let's use the [yum module][3] to update the system. Ansible can install, update, remove, or install from another location (e.g., `rpmbuild` from continuous integration/continuous development). Here is the task for updating the system:
```
  - name: update the system
    yum:
      name: "*"
      state: latest
```
In the first line, we give the task a meaningful `name` so we know what Ansible is doing. In the next line, the `yum module` updates the CentOS virtual machine (VM); `name: "*"` tells yum to update everything; and finally, `state: latest` updates to the latest available RPM.
After updating the system, we need to restart and reconnect:
```
  - name: restart system to reboot to newest kernel
    shell: "sleep 5 && reboot"
    async: 1
    poll: 0
  - name: wait for 10 seconds
    pause:
      seconds: 10
  - name: wait for the system to reboot
    wait_for_connection:
      connect_timeout: 20
      sleep: 5
      delay: 5
      timeout: 60
  - name: install epel-release
    yum:
      name: epel-release
      state: latest
```
The `shell module` puts the system to sleep for 5 seconds, then reboots. We use `sleep` to prevent the connection from breaking, `async` to avoid a timeout, and `poll` to fire-and-forget. We pause for 10 seconds to wait for the VM to come back, and use `wait_for_connection` to reconnect as soon as the VM accepts connections. Then we `install epel-release` to test the RPM installation. You can run this playbook multiple times to show that it is `idempotent`; the only task that will show as changed is the reboot, since we are using the `shell` module. You can use `changed_when: False` to suppress that change report if you expect no actual changes.
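A sketch of the reboot task with that option applied might look like this (everything except the added `changed_when` line is the task from above):

```
  - name: restart system to reboot to newest kernel
    shell: "sleep 5 && reboot"
    async: 1
    poll: 0
    changed_when: false
```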
So far we've learned how to update a system, restart the VM, reconnect, and install an RPM. Next we will install NGINX using the role in [Ansible Lightbulb][4].
```
  - name: Ensure nginx packages are present
    yum:
      name: nginx, python-pip, python-devel, devel
      state: present
    notify: restart-nginx-service
  - name: Ensure uwsgi package is present
    pip:
      name: uwsgi
      state: present
    notify: restart-nginx-service
  - name: Ensure latest default.conf is present
    template:
      src: templates/nginx.conf.j2
      dest: /etc/nginx/nginx.conf
      backup: yes
    notify: restart-nginx-service
  - name: Ensure latest index.html is present
    template:
      src: templates/index.html.j2
      dest: /usr/share/nginx/html/index.html
  - name: Ensure nginx service is started and enabled
    service:
      name: nginx
      state: started
      enabled: yes
  - name: Ensure proper response from localhost can be received
    uri:
      url: "http://localhost:80/"
      return_content: yes
    register: response
    until: 'nginx_test_message in response.content'
    retries: 10
    delay: 1
```
And the handler that restarts the nginx service:
```
# handlers file for nginx-example
  - name: restart-nginx-service
    service:
      name: nginx
      state: restarted
```
In this role, we install the RPMs `nginx`, `python-pip`, `python-devel`, and `devel` and install `uwsgi` with PIP. Next, we use the `template` module to copy over the `nginx.conf` and `index.html` for the page to display. After that, we make sure the service is enabled on boot and started. Then we use the `uri` module to check the connection to the page.
Here is a playbook that updates the system, restarts it, reconnects, and then installs NGINX by applying the roles above. The same pattern works with any other roles/applications you want.
```
  - hosts: all
    roles:
      - centos-update
      - nginx-simple
```
Watch this demo video for more insight on the process.
[demo](https://asciinema.org/a/166437/embed?)
This was just a simple example of how to update, reboot, and continue. For simplicity, I added the packages without [variables][5]. Once you start working with a large number of hosts, you will need to change a few settings, because in a production environment you might want to update one system at a time (not fire-and-forget) and actually wait longer for each system to reboot before continuing.
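For example, a sketch of the playbook above adapted for a rolling update could use the standard `serial` play keyword to limit how many hosts are updated at once:

```
  - hosts: all
    serial: 1                # update one system at a time
    roles:
      - centos-update
      - nginx-simple
```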
For more ways to automate your work with this tool, take a look at the other [Ansible articles on Opensource.com][6].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/ansible-patch-systems
作者:[Jonathan Lozada De La Matta][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jlozadad
[1]:https://www.ansible.com/overview/how-ansible-works
[2]:https://docs.ansible.com/ansible/latest/list_of_packaging_modules.html
[3]:https://docs.ansible.com/ansible/latest/yum_module.html
[4]:https://github.com/ansible/lightbulb/tree/master/examples/nginx-role
[5]:https://docs.ansible.com/ansible/latest/playbooks_variables.html
[6]:https://opensource.com/tags/ansible

Protecting Code Integrity with PGP — Part 6: Using PGP with Git
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/global-network.jpg?itok=h_hhZc36)
In this tutorial series, we're providing practical guidelines for using PGP, including basic concepts and generating and protecting your keys. If you missed the previous articles, you can catch up below. In this article, we look at Git's integration with PGP, starting with signed tags, then introducing signed commits, and finally adding support for signed pushes.
[Part 1: Basic Concepts and Tools][1]
[Part 2: Generating Your Master Key][2]
[Part 3: Generating PGP Subkeys][3]
[Part 4: Moving Your Master Key to Offline Storage][4]
[Part 5: Moving Subkeys to a Hardware Device][5]
One of the core features of Git is its decentralized nature -- once a repository is cloned to your system, you have full history of the project, including all of its tags, commits and branches. However, with hundreds of cloned repositories floating around, how does anyone verify that the repository you downloaded has not been tampered with by a malicious third party? You may have cloned it from GitHub or some other official-looking location, but what if someone had managed to trick you?
Or what happens if a backdoor is discovered in one of the projects you've worked on, and the "Author" line in the commit says it was done by you, while you're pretty sure you had [nothing to do with it][6]?
To address both of these issues, Git introduced PGP integration. Signed tags prove the repository integrity by assuring that its contents are exactly the same as on the workstation of the developer who created the tag, while signed commits make it nearly impossible for someone to impersonate you without having access to your PGP keys.
### Checklist
* Understand signed tags, commits, and pushes (ESSENTIAL)
* Configure git to use your key (ESSENTIAL)
* Learn how tag signing and verification works (ESSENTIAL)
* Configure git to always sign annotated tags (NICE)
* Learn how commit signing and verification works (ESSENTIAL)
* Configure git to always sign commits (NICE)
* Configure gpg-agent options (ESSENTIAL)
### Considerations
Git implements multiple levels of integration with PGP, first starting with signed tags, then introducing signed commits, and finally adding support for signed pushes.
#### Understanding Git Hashes
Git is a complicated beast, but you need to know what a "hash" is in order to have a good grasp on how PGP integrates with it. We'll narrow it down to two kinds of hashes: tree hashes and commit hashes.
##### Tree hashes
Every time you commit a change to a repository, git records checksum hashes of all objects in it -- contents (blobs), directories (trees), file names and permissions, etc, for each subdirectory in the repository. It only does this for trees and blobs that have changed with each commit, so as not to re-checksum the entire tree unnecessarily if only a small part of it was touched.
Then it calculates and stores the checksum of the toplevel tree, which will inevitably be different if any part of the repository has changed.
##### Commit hashes
Once the tree hash has been created, git will calculate the commit hash, which will include the following information about the repository and the change being made:
* The checksum hash of the tree
* The checksum hash of the tree before the change (parent)
* Information about the author (name, email, time of authorship)
* Information about the committer (name, email, time of commit)
* The commit message
##### Hashing function
At the time of writing, git still uses the SHA1 hashing mechanism to calculate checksums, though work is under way to transition to a stronger algorithm that is more resistant to collisions. Note that git already includes collision avoidance routines, so it is believed that a successful collision attack against git remains impractical.
#### Annotated tags and tag signatures
Git tags allow developers to mark specific commits in the history of each git repository. Tags can be "lightweight" -- more or less just a pointer at a specific commit, or they can be "annotated," which becomes its own object in the git tree. An annotated tag object contains all of the following information:
* The checksum hash of the commit being tagged
* The tag name
* Information about the tagger (name, email, time of tagging)
* The tag message
A PGP-signed tag is simply an annotated tag with all these entries wrapped around in a PGP signature. When a developer signs their git tag, they effectively assure you of the following:
* Who they are (and why you should trust them)
* What the state of their repository was at the time of signing:
  * The tag includes the hash of the commit
  * The commit hash includes the hash of the toplevel tree
    * Which includes hashes of all files, contents, and subtrees
  * It also includes all information about authorship
    * Including exact times when changes were made
When you clone a git repository and verify a signed tag, that gives you cryptographic assurance that all contents in the repository, including all of its history, are exactly the same as the contents of the repository on the developer's computer at the time of signing.
#### Signed commits
Signed commits are very similar to signed tags -- the contents of the commit object are PGP-signed instead of the contents of the tag object. A commit signature also gives you full verifiable information about the state of the developer's tree at the time the signature was made. Tag signatures and commit PGP signatures provide exactly the same security assurances about the repository and its entire history.
#### Signed pushes
This is included here for completeness' sake, since this functionality needs to be enabled on the server receiving the push before it does anything useful. As we saw above, PGP-signing a git object gives verifiable information about the developer's git tree, but not about their intent for that tree.
For example, you can be working on an experimental branch in your own git fork trying out a promising cool feature, but after you submit your work for review, someone finds a nasty bug in your code. Since your commits are properly signed, someone can take the branch containing your nasty bug and push it into master, introducing a vulnerability that was never intended to go into production. Since the commit is properly signed with your key, everything looks legitimate and your reputation is questioned when the bug is discovered.
The ability to require PGP signatures during git push was added in order to certify the intent of the commit, and not merely verify its contents.
#### Configure git to use your PGP key
If you only have one secret key in your keyring, then you don't really need to do anything extra, as it becomes your default key.
However, if you happen to have multiple secret keys, you can tell git which key should be used ([fpr] is the fingerprint of your key):
```
$ git config --global user.signingKey [fpr]
```
NOTE: If you have a distinct gpg2 command, then you should tell git to always use it instead of the legacy gpg from version 1:
```
$ git config --global gpg.program gpg2
```
#### How to work with signed tags
To create a signed tag, simply pass the -s switch to the tag command:
```
$ git tag -s [tagname]
```
Our recommendation is to always sign git tags, as this allows other developers to ensure that the git repository they are working with has not been maliciously altered (e.g. in order to introduce backdoors).
##### How to verify signed tags
To verify a signed tag, simply use the verify-tag command:
```
$ git verify-tag [tagname]
```
If you are verifying someone else's git tag, then you will need to import their PGP key. Please refer to the "Trusted Team communication" document in the same repository for guidance on this topic.
##### Verifying at pull time
If you are pulling a tag from another fork of the project repository, git should automatically verify the signature at the tip you're pulling and show you the results during the merge operation:
```
$ git pull [url] tags/sometag
```
The merge message will contain something like this:
```
Merge tag 'sometag' of [url]
[Tag message]
# gpg: Signature made [...]
# gpg: Good signature from [...]
```
#### Configure git to always sign annotated tags
Chances are, if you're creating an annotated tag, you'll want to sign it. To force git to always sign annotated tags, you can set a global configuration option:
```
$ git config --global tag.forceSignAnnotated true
```
Alternatively, you can just train your muscle memory to always pass the -s switch:
```
$ git tag -asm "Tag message" tagname
```
#### How to work with signed commits
It is easy to create signed commits, but it is much more difficult to incorporate them into your workflow. Many projects use signed commits as a sort of "Committed-by:" line equivalent that records code provenance -- the signatures are rarely verified by others except when tracking down project history. In a sense, signed commits are used for "tamper evidence," and not to "tamper-proof" the git workflow.
To create a signed commit, you just need to pass the -S flag to the git commit command (it's capital -S due to collision with another flag):
```
$ git commit -S
```
Our recommendation is to always sign commits and to require them of all project members, regardless of whether anyone is verifying them (that can always come at a later time).
##### How to verify signed commits
To verify a single commit you can use verify-commit:
```
$ git verify-commit [hash]
```
You can also look at repository logs and request that all commit signatures are verified and shown:
```
$ git log --pretty=short --show-signature
```
##### Verifying commits during git merge
If all members of your project sign their commits, you can enforce signature checking at merge time (and then sign the resulting merge commit itself using the -S flag):
```
$ git merge --verify-signatures -S merged-branch
```
Note that the merge will fail if there is even one commit that is not signed or does not pass verification. As is often the case, technology is the easy part -- the human side of the equation is what makes adopting strict commit signing for your project difficult.
##### If your project uses mailing lists for patch management
If your project uses a mailing list for submitting and processing patches, then there is little use in signing commits, because all signature information will be lost when sent through that medium. It is still useful to sign your commits, just so others can refer to your publicly hosted git trees for reference, but the upstream project receiving your patches will not be able to verify them directly with git.
You can still sign the emails containing the patches, though.
#### Configure git to always sign commits
You can tell git to always sign commits:
```
git config --global commit.gpgSign true
```
Or you can train your muscle memory to always pass the -S flag to all git commit operations (this includes --amend).
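The signing-related settings covered in this article can be collected into one short script. This is only a sketch: the fingerprint is a placeholder, and writing to a scratch file with `--file` keeps the script side-effect-free; replace `--file "$CFG"` with `--global` to apply the settings to your real configuration.

```shell
#!/bin/sh
# Sketch: the signing-related git settings from this article in one place.
# The fingerprint is a placeholder. Using --file with a scratch file keeps
# this safe to run; use --global instead to change your real configuration.
CFG=$(mktemp)
git config --file "$CFG" user.signingKey "0123456789ABCDEF"  # placeholder fpr
git config --file "$CFG" gpg.program gpg2
git config --file "$CFG" commit.gpgSign true
git config --file "$CFG" tag.forceSignAnnotated true
git config --file "$CFG" --list   # show what was written
rm -f "$CFG"
```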
#### Configure gpg-agent options
The GnuPG agent is a helper tool that will start automatically whenever you use the gpg command and run in the background with the purpose of caching the private key passphrase. This way you only have to unlock your key once to use it repeatedly (very handy if you need to sign a bunch of git operations in an automated script without having to continuously retype your passphrase).
There are two options you should know in order to tweak when the passphrase should be expired from cache:
* default-cache-ttl (seconds): If you use the same key again before the time-to-live expires, the countdown will reset for another period. The default is 600 (10 minutes).
* max-cache-ttl (seconds): Regardless of how recently you've used the key since initial passphrase entry, if the maximum time-to-live countdown expires, you'll have to enter the passphrase again. The default is 30 minutes.
If you find either of these defaults too short (or too long), you can edit your ~/.gnupg/gpg-agent.conf file to set your own values:
```
# set to 30 minutes for regular ttl, and 2 hours for max ttl
default-cache-ttl 1800
max-cache-ttl 7200
```
##### Bonus: Using gpg-agent with ssh
If you've created an [A] (Authentication) key and moved it to the smartcard, you can use it with ssh for adding 2-factor authentication for your ssh sessions. You just need to tell your environment to use the correct socket file for talking to the agent.
First, add the following to your ~/.gnupg/gpg-agent.conf:
```
enable-ssh-support
```
Then, add this to your .bashrc:
```
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
```
You will need to kill the existing gpg-agent process and start a new login session for the changes to take effect:
```
$ killall gpg-agent
$ bash
$ ssh-add -L
```
The last command should list the SSH representation of your PGP Auth key (the comment should say cardno:XXXXXXXX at the end to indicate it's coming from the smartcard).
To enable key-based logins with ssh, just add the ssh-add -L output to ~/.ssh/authorized_keys on remote systems you log in to. Congratulations, you've just made your ssh credentials extremely difficult to steal.
As a bonus, you can get other people's PGP-based ssh keys from public keyservers, should you need to grant them ssh access to anything:
```
$ gpg --export-ssh-key [keyid]
```
This can come in super handy if you need to allow developers access to git repositories over ssh. Next time, we'll provide tips for protecting your email accounts as well as your PGP keys.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-6-using-pgp-git
作者:[KONSTANTIN RYABITSEV][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/mricon
[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools
[2]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key
[3]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-3-generating-pgp-subkeys
[4]:https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-4-moving-your-master-key-offline-storage
[5]:https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-5-moving-subkeys-hardware-device
[6]:https://github.com/jayphelps/git-blame-someone-else