Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu.Wang 2018-10-11 14:40:46 +08:00
commit 7558481ea2
20 changed files with 2308 additions and 671 deletions

View File

@ -0,0 +1,108 @@
3 areas to drive DevOps change
======
Driving large-scale organizational change is painful, but when it comes to DevOps, the payoff is worth the pain.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ)
Pain avoidance is a powerful motivator. Some studies hint that even [plants experience a type of pain][1] and take steps to defend themselves. Yet we have plenty of examples of humans enduring pain on purpose—exercise often hurts, but we still do it. When we believe the payoff is worth the pain, we'll endure almost anything.
The truth is that driving large-scale organizational change is painful. It hurts for those having to change their values and behaviors, it hurts for leadership, and it hurts for the people just trying to do their jobs. In the case of DevOps, though, I can tell you the pain is worth it.
I've seen firsthand how teams learn they must spend time improving their technical processes, take ownership of their automation pipelines, and become masters of their fate. They gain the tools they need to be successful.
![Improvements after DevOps transformation][3]
Image by Lee Eason. CC BY-SA 4.0
This chart shows the value of that change. In a company where I directed a DevOps transformation, its 60+ teams submitted more than 900 requests per month to release management. If you add up the time those tickets stayed open, it came to more than 350 days per month. What could your company do with an extra 350 person-days per month? In addition to the improvements seen above, they went from 100 to 9,000 deployments per month, a 24% decrease in high-severity bugs, happier engineers, and improved net promoter scores (NPS). The biggest NPS improvements link to the teams furthest along on their DevOps journey, as the [Puppet State of DevOps][4] report predicted. The bottom line is that investments into technical process improvement translate into better business outcomes.
DevOps leaders must focus on three main areas to drive this change: executives, culture, and team health.
### Executives
The larger your organization, the greater the distance (and opportunities for misunderstanding) between business leadership and the individuals delivering services to your customers. To make things worse, the landscape of tools and practices in technology is changing at an accelerating rate. This makes it practically impossible for business leaders to understand on their own how transformations like DevOps or agile work.
DevOps leaders must help executives come along for the ride. Educating leaders gives them options when they're making decisions and makes it more likely they'll choose paths that help your company.
For example, let's say your executives believe DevOps is going to improve how you deploy your products into production, but they don't understand how. You've been working with a software team to help automate their deployment. When an executive hears about a deploy failure (and there will be failures), they will want to understand how it occurred. When they learn the software team did the deployment rather than the release management team, they may try to protect the business by decreeing all production releases must go through traditional change controls. You will lose credibility, and teams will be far less likely to trust you and accept further changes.
It takes longer to rebuild trust with executives and get their support after an incident than it would have taken to educate them in the first place. Put the time in upfront to build alignment, and it will pay off as you implement tactical changes.
Two pieces of advice when building that alignment:
* First, **don't ignore any constraints** they raise. If they have worries about contracts or security, make the heads of legal and security your new best friends. By partnering with them, you'll build their trust and avoid making costly mistakes.
* Second, **use metrics to build a bridge** between what your delivery teams are doing and your executives' concerns. If the business has a goal to reduce customer churn, and you know from research that many customers leave because of unplanned downtime, reinforce that your teams are committed to tracking and improving Mean Time To Detection and Resolution (MTTD and MTTR). You can use those key metrics to show meaningful progress that teams and executives understand and get behind.
### Culture
DevOps is a culture of continuous improvement focused on code, build, deploy, and operational processes. Culture describes the organization's values and behaviors. Essentially, we're talking about changing how people behave, which is never easy.
I recommend reading [The Wolf in CIO's Clothing][5]. Spend time thinking about psychology and motivation. Read [Drive][6] or at least watch Daniel Pink's excellent [TED Talk][7]. Read [The Hero with a Thousand Faces][8] and learn to identify the different journeys everyone is on. If none of these things sound interesting, you are not the right person to drive change in your company. Otherwise, read on!
Most rational people behave according to their values. Most organizations don't have explicit values everyone understands and lives by. Therefore, you'll need to identify the organization's values that have led to the behaviors that have led to the current state. You also need to make sure you can tell the story about how those values came to be and how they led to where you are. When you tell that story, be careful not to demonize those values—they aren't immoral or evil. People did the best they could at the time, given what they knew and what resources they had.
Explain that the company and its organizational goals are changing, and the team must alter its values. It's helpful to express this in terms of contrast. For example, your company may have historically valued cost savings above all else. That value is there for a reason—the company was cash-strapped. To get new products out, the infrastructure group had to tightly couple services by sharing database clusters or servers. Over time, those practices created a real mess that became hard to maintain. Simple changes started breaking things in unexpected ways. This led to tight change-control processes that were painful for delivery teams, so they stopped changing things.
Play that movie for five years, and you end up with little to no innovation, legacy technology, attraction and retention problems, and poor-quality products. You've grown the company, but you've hit a ceiling, and you can't continue to grow with those same values and behaviors. Now you must put engineering efficiency above cost savings. If one option will help teams maintain their service more easily, but the other option is cheaper in the short term, you go with the first option.
You must tell this story again and again. Then you must celebrate any time a team expresses the new value through their behavior—even if they make a mistake. When a team has a deploy failure, congratulate them for taking the risk and encourage them to keep learning. Explain how their behavior is leading to the right outcome and support them. Over time, teams will see the message is real, and they'll feel safe altering their behavior.
### Team health
Have you ever been in a planning meeting and heard something like this: "We can't really estimate that story until John gets back from vacation. He's the only one who knows that area of the code well enough." Or: "We can't get this task done because it's got a cross-team dependency on network engineering, and the guy that set up the firewall is out sick." Or: "John knows that system best; if he estimated the story at a 3, then let's just go with that." When the team works on that story, who will most likely do the work? That's right, John will, and the cycle will continue.
For a long time, we've accepted that this is just the nature of software development. If we don't solve for it, we perpetuate the cycle.
Entropy will always drive teams naturally towards disorder and bad health. Our job as team members and leaders is to intentionally manage against that entropy and keep our teams healthy. Transformations like DevOps, agile, moving to the cloud, or refactoring a legacy application all amplify and accelerate that entropy. That's because transformations add new skills and expertise needed for the team to take on that new type of work.
Let's look at an example of a product team refactoring its legacy monolith. As usual, they build those new services in AWS. The legacy monolith was deployed to the data center, monitored, and backed up by IT. IT made sure the application's infosec requirements were met at the infrastructure layer. They conducted disaster recovery tests, patched the servers, and installed and configured required intrusion detection and antivirus agents. And they kept change control records, required for the annual audit process, of everything that was done to the application's infrastructure.
I often see product teams make the fatal mistake of thinking IT is all cost and bottleneck. They're hungry to shed the skin of IT and use the public cloud, but they never stop to appreciate the critical services IT provides. Moving to the cloud means you implement these things differently; they don't go away. AWS is still a data center, and any team utilizing it accepts the related responsibilities.
In practice, this means product teams must learn how to do those IT services when they move to the cloud. So, when our fictional product team starts refactoring its legacy application and putting new services in the cloud, it will need a vastly expanded skillset to be successful. Those skills don't magically appear—they're learned or hired—and team leaders and managers must actively manage the process.
I built [Tekata.io][9] because I couldn't find any tools to support me as I helped my teams evolve. Tekata is free and easy to use, but the tool is not as important as the people and process. Make sure you build continuous learning into your cadence and keep track of your team's weak spots. Those weak spots affect your ability to deliver, and filling them usually involves learning new things, so there's a wonderful synergy here. In fact, 76% of millennials think professional development opportunities are [one of the most important elements][10] of company culture.
### Proof is in the payoff
DevOps transformations involve altering the behavior, and therefore the culture, of your teams. That must be done with executive support and understanding. At the same time, those behavior changes mean learning new skills, and that process must also be managed carefully. But the payoff for pulling this off is more productive teams, happier and more engaged team members, higher quality products, and happier customers.
Lee Eason will present [Tales From A DevOps Transformation][11] at [All Things Open][12], October 21-23 in Raleigh, N.C.
Disclaimer: All opinions and statements in this article are exclusively those of Lee Eason and are not representative of Ipreo or IHS Markit.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/tales-devops-transformation
Author: [Lee Eason][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/leeeason
[b]: https://github.com/lujun9972
[1]: https://link.springer.com/article/10.1007%2Fs00442-014-2995-6
[2]: /file/411061
[3]: https://opensource.com/sites/default/files/uploads/devops-delays.png (Improvements after DevOps transformation)
[4]: https://puppet.com/resources/whitepaper/state-of-devops-report
[5]: https://www.gartner.com/en/publications/wolf-cio
[6]: https://en.wikipedia.org/wiki/Drive:_The_Surprising_Truth_About_What_Motivates_Us
[7]: https://www.ted.com/talks/dan_pink_on_motivation?language=en#t-2094
[8]: https://en.wikipedia.org/wiki/The_Hero_with_a_Thousand_Faces
[9]: https://tekata.io/
[10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook
[11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/
[12]: https://allthingsopen.org/

View File

@ -0,0 +1,47 @@
4 best practices for giving open source code feedback
======
A few simple guidelines can help you provide better feedback.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_mail_box_envelope_send_blue.jpg?itok=6Epj47H6)
In the previous article I gave you tips for [how to receive feedback][1], especially in the context of your first free and open source project contribution. Now it's time to talk about the other side of that same coin: providing feedback.
If I tell you that something you did in your contribution is "stupid" or "naive," how would you feel? You'd probably be angry, hurt, or both, and rightfully so. These are mean-spirited words that, when directed at people, can cut like knives. Words matter, and they matter a great deal. Therefore, put as much thought into the words you use when leaving feedback for a contribution as you do into any other form of contribution you give to the project. As you compose your feedback, think to yourself, "How would I feel if someone said this to me? Is there some way someone might take this another way, a less helpful way?" If the answer to that last question has even the chance of being a yes, backtrack and rewrite your feedback. It's better to spend a little time rewriting now than to spend a lot of time apologizing later.
When someone does make a mistake that seems like it should have been obvious, remember that we all have different experiences and knowledge. What's obvious to you may not be to someone else. And, if you recall, there once was a time when that thing was not obvious to you. We all make mistakes. We all typo. We all forget commas, semicolons, and closing brackets. Save yourself a lot of time and effort: Point out the mistake, but leave out the judgement. Stick to the facts. After all, if the mistake is that obvious, then no critique will be necessary, right?
1. **Avoid ad hominem comments.** Remember to review only the contribution and not the person who contributed it. That is to say, point out, "the contribution could be more efficient here in this way…" rather than, "you did this inefficiently." The latter is ad hominem feedback. Ad hominem is a Latin phrase meaning "to the person," which is where your feedback is being directed: to the person who contributed it rather than to the contribution itself. By providing feedback on the person you make that feedback personal, and the contributor is justified in taking it personally. Be careful when crafting your feedback to make sure you're addressing only the contents of the contribution and not accidentally criticizing the person who submitted it for review.
2. **Include positive comments.** Not all of your feedback has to (or should) be critical. As you review the contribution and you see something that you like, provide feedback on that as well. Several academic studies—including an important one by [Baumeister, Braslavsky, Finkenauer, and Vohs][2]—show that humans focus more on negative feedback than positive. When your feedback is solely negative, it can be very disheartening for contributors. Including positive reinforcement and feedback is motivating to people and helps them feel good about their contribution and the time they spent on it, which all adds up to them feeling more inclined to provide another contribution in the future. It doesn't have to be some gushing paragraph of flowery praise, but a quick, "Huh, that's a really smart way to handle that. It makes everything flow really well," can go a long way toward encouraging someone to keep contributing.
3. **Questions are feedback, too.** Praise is one less common but valuable type of review feedback. Questions are another. If you're looking at a contribution and can't tell why the submitter did things the way they did, or if the contribution just doesn't make a lot of sense to you, asking for more information acts as feedback. It tells the submitter that something they contributed isn't as clear as they thought and that it may need some work to make the approach more obvious, or if it's a code contribution, a comment to explain what's going on and why. A simple, "I don't understand this part here. Could you please tell me what it's doing and why you chose that way?" can start a dialogue that leads to a contribution that's much easier for future contributors to understand and maintain.
4. **Expect a negotiation.** Using questions as a form of feedback implies that there will be answers to those questions, or perhaps other questions in response. Whether your feedback is in question or statement format, you should expect to generate some sort of dialogue throughout the process. An alternative is to see your feedback as incontrovertible, your word as law. Although this is definitely one approach you can take, it's rarely a good one. When providing feedback on a contribution, it's best to collaborate rather than dictate. As these dialogues arise, embracing them as opportunities for conversation and learning on both sides is important. Be willing to discuss their approach and your feedback, and to take the time to understand their perspective.
The bottom line is: Don't be a jerk. If you're not sure whether the feedback you're planning to leave makes you sound like a jerk, pause to have someone else review it before you click Send. Have empathy for the person at the receiving end of that feedback. While the maxim is thousands of years old, it still rings true today that you should try to do unto others as you would have them do unto you. Put yourself in their shoes and aim to be helpful and supportive rather than simply being right.
_Adapted from[Forge Your Future with Open Source][3] by VM (Vicky) Brasseur, Copyright © 2018 The Pragmatic Programmers LLC. Reproduced with the permission of the publisher._
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/best-practices-giving-open-source-code-feedback
Author: [VM (Vicky) Brasseur][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://opensource.com/users/vmbrasseur
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/18/10/6-tips-receiving-feedback
[2]: https://www.msudenver.edu/media/content/sri-taskforce/documents/Baumeister-2001.pdf
[3]: http://www.pragprog.com/titles/vbopens

View File

@ -0,0 +1,82 @@
GCC: Optimizing Linux, the Internet, and Everything
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gcc-paper.jpg?itok=QFNUZWsV)
Software is useless if computers can't run it. Even the most talented developer is at the mercy of the compiler when it comes to run-time performance: if you don't have a reliable compiler toolchain, you can't build anything serious. The GNU Compiler Collection (GCC) provides a robust, mature, and high-performance partner to help you get the most out of your software. With decades of development by thousands of people, GCC is one of the most respected compilers in the world. If you are building applications and not using GCC, you are missing out on the best possible solution.
GCC is the “de facto-standard open source compiler today” [1] according to LLVM.org and the foundation used to build complete systems, from the kernel upwards. GCC supports over 60 hardware platforms, including ARM, Intel, AMD, IBM POWER, SPARC, HP PA-RISC, and IBM Z, as well as a variety of operating environments, including GNU, Linux, Windows, macOS, FreeBSD, NetBSD, OpenBSD, DragonFly BSD, Solaris, AIX, HP-UX, and RTEMS. It offers highly compliant C/C++ compilers and support for popular C libraries, such as GNU C Library (glibc), Newlib, musl, and the C libraries included with various BSD operating systems, as well as front ends for the Fortran, Ada, and Go languages. GCC also functions as a cross compiler, creating executable code for a platform other than the one on which the compiler is running. GCC is the core component of the tightly integrated GNU toolchain, produced by the GNU Project, that includes glibc, Binutils, and the GNU Debugger (GDB).
"My all-time favorite GNU tool is GCC, the GNU Compiler Collection. At a time when developer tools were expensive, GCC was the second GNU tool and the one that enabled a community to write and build all the others. This tool single-handedly changed the industry and led to the creation of the free software movement, since a good, free compiler is a prerequisite to a community creating software." —Dave Neary, Open Source and Standards team at Red Hat. [2]
### Optimizing Linux
As the default compiler for the Linux kernel source, GCC delivers trusted, stable performance along with the additional extensions needed to correctly build the kernel. GCC is a standard component of popular Linux distributions, such as Arch Linux, CentOS, Debian, Fedora, openSUSE, and Ubuntu, where it routinely compiles supporting system components. This includes the default libraries used by Linux (such as libc, libm, libintl, libssh, libssl, libcrypto, libexpat, libpthread, and ncurses) which depend on GCC to provide correctness and performance and are used by applications and system utilities to access Linux kernel features. Many of the application packages included with a distribution are also built with GCC, such as Python, Perl, Ruby, nginx, Apache HTTP Server, OpenStack, Docker, and OpenShift. This combination of kernel, libraries, and application software translates into a large volume of code built with GCC for each Linux distribution. For the openSUSE distribution nearly 100% of native code is built by GCC, including 6,135 source packages producing 5,705 shared libraries and 38,927 executables. This amounts to about 24,540 source packages compiled weekly. [3]
The base version of GCC included in Linux distributions is used to create the kernel and libraries that define the system Application Binary Interface (ABI). User space developers have the option of downloading the latest stable version of GCC to gain access to advanced features, performance optimizations, and improvements in usability. Linux distributions offer installation instructions or prebuilt toolchains for deploying the latest version of GCC along with other GNU tools that help to enhance developer productivity and improve deployment time.
### Optimizing the Internet
GCC is one of the most widely adopted core compilers for embedded systems, enabling the development of software for the growing world of IoT devices. GCC offers a number of extensions that make it well suited for embedded systems software development, including fine-grained control using compiler built-ins, #pragmas, inline assembly, and application-focused command-line options. GCC supports a broad base of embedded architectures, including ARM, AMCC, AVR, Blackfin, MIPS, RISC-V, Renesas Electronics V850, and NXP and Freescale Power-based processors, producing efficient, high quality code. The cross-compilation capability offered by GCC is critical to this community, and prebuilt cross-compilation toolchains [4] are a major requirement. For example, the GNU ARM Embedded toolchains are integrated and validated packages featuring the Arm Embedded GCC compiler, libraries, and other tools necessary for bare-metal software development. These toolchains are available for cross-compilation on Windows, Linux and macOS host operating systems and target the popular ARM Cortex-R and Cortex-M processors, which have shipped in tens of billions of internet-capable devices. [5]
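To make those extension categories concrete, here is a minimal, hypothetical C sketch; the register address, structure fields, and macro names are invented for illustration, but the `__attribute__((packed))`, `__builtin_expect()`, and extended inline-assembly syntax shown is standard GCC usage.
```
/* Hypothetical bare-metal snippet illustrating common GCC extensions.
 * The register address and data layout are made up for this example;
 * build it with a cross compiler such as arm-none-eabi-gcc. */
#include <stdint.h>

/* GCC attribute: force a byte-exact, unpadded structure layout */
struct __attribute__((packed)) sensor_frame {
    uint8_t  id;
    uint32_t timestamp;
    int16_t  value;
};

/* GCC built-in: branch-prediction hint for the hot path */
#define likely(x) __builtin_expect(!!(x), 1)

/* Memory-mapped status register (invented address) */
#define STATUS_REG (*(volatile uint32_t *)0x40021000u)

int device_ready(void)
{
    if (likely(STATUS_REG & 0x1u)) {
        /* GCC extended inline assembly: a no-op plus a memory barrier */
        __asm__ volatile ("nop" ::: "memory");
        return 1;
    }
    return 0;
}
```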
GCC empowers Cloud Computing, providing a reliable development platform for software that needs to directly manage computing resources, like database and web serving engines and backup and security software. GCC is fully compliant with C++11 and C++14 and offers experimental support for C++17 and C++2a [6], creating performant object code with solid debugging information. Some examples of applications that utilize GCC include: MySQL Database Management System, which requires GCC for Linux [7]; the Apache HTTP Server, which recommends using GCC [8]; and Bacula, an enterprise-ready network backup tool that requires GCC. [9]
### Optimizing Everything
For the research and development of the scientific codes used in High Performance Computing (HPC), GCC offers mature C, C++, and Fortran front ends as well as support for OpenMP and OpenACC APIs for directive-based parallel programming. Because GCC offers portability across computing environments, it enables code to be more easily targeted and tested across a variety of new and legacy client and server platforms. GCC offers full support for OpenMP 4.0 for C, C++ and Fortran compilers and full support for OpenMP 4.5 for C and C++ compilers. For OpenACC, GCC supports most of the 2.5 specification and performance optimizations and is the only non-commercial, nonacademic compiler to provide [OpenACC][1] support.
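As a rough sketch of what that directive-based model looks like in practice, the following small C program parallelizes a loop with a single OpenMP pragma; the file name and build line in the comment are illustrative, and `-fopenmp` is the standard GCC flag that enables OpenMP.
```
/* saxpy.c - a tiny OpenMP example (illustrative).
 * Build with GCC's OpenMP support enabled:
 *     gcc -O2 -fopenmp saxpy.c -o saxpy
 */
#include <stdio.h>

#define N 1000000

static float x[N], y[N];

int main(void)
{
    const float a = 2.0f;

    for (int i = 0; i < N; i++) {
        x[i] = (float)i;
        y[i] = 1.0f;
    }

    /* The pragma asks the compiler to split these iterations
     * across the threads available at run time. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[42] = %f\n", y[42]);
    return 0;
}
```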
Code performance is an important consideration for this community, and GCC offers a solid performance base. A November 2017 paper published by Colfax Research evaluates C++ compilers for the speed of compiled code parallelized with OpenMP 4.x directives and for compilation speed. Figure 1 plots the relative performance of the computational kernels when compiled by the different compilers and run with a single thread. The performance values are normalized so that the performance of G++ is equal to 1.0.
![performance][3]
Figure 1. Relative performance of each kernel as compiled by the different compilers. (single-threaded, higher is better).
[Used with permission][4]
The paper summarizes “the GNU compiler also does very well in our tests. G++ produces the second fastest code in three out of six cases and is amongst the fastest compiler in terms of compile time.” [10]
### Who Is Using GCC?
In JetBrains' 2018 State of Developer Ecosystem Survey of 6,000 developers, GCC is regularly used by 66% of C++ programmers and 73% of C programmers. [11] Here is a quick summary of the benefits of GCC that make it so popular with the developer community.
* For developers required to write code for a variety of new and legacy computing platforms and operating environments, GCC delivers support for the broadest range of hardware and operating environments. Compilers offered by hardware vendors focus mainly on support for their products, and other open source compilers are much more limited in the hardware and operating systems supported. [12]
* There is a wide variety of GCC-based prebuilt toolchains, which has particular appeal to embedded systems developers. This includes the GNU ARM Embedded toolchains and 138 pre-compiled cross compiler toolchains available on the Bootlin web site. [13] While other open source compilers, such as Clang/LLVM, can replace GCC in existing cross compiling toolchains, these would need to be completely rebuilt by the developer. [14]
* GCC delivers to application developers trusted, stable performance from a mature compiler platform. The GCC 8/9 vs. LLVM Clang 6/7 Compiler Benchmarks On AMD EPYC article provides results of 49 benchmarks run across the four tested compilers at three optimization levels. Coming in first 34% of the time was GCC 8.2 RC1 using the "-O3 -march=native" level, while at the same optimization level LLVM Clang 6.0 came in second, winning 20% of the time. [15]
* GCC delivers improved diagnostics for compile-time debugging [16] and accurate and useful information for runtime debugging. GCC is tightly integrated with GDB, a mature and feature-complete tool which offers non-stop debugging that can stop a single thread at a breakpoint.
* GCC is a well-supported platform with an active, committed community that supports the current and two previous releases. With releases scheduled yearly, this provides two years of support for each version.
### GCC: Continuing to Optimize Linux, the Internet, and Everything
GCC continues to move forward as a world-class compiler. The most current version of GCC is 8.2, which was released in July 2018 and added hardware support for upcoming Intel CPUs and more ARM CPUs, and improved performance for AMD's Zen CPU. Initial C17 support has been added, along with initial work towards C++2A. Diagnostics have continued to be enhanced, including better emitted diagnostics with improved locations, location ranges, and fix-it hints, particularly in the C++ front end. A blog written by David Malcolm of Red Hat in March 2018 provides an overview of usability improvements in GCC 8. [17]
New hardware platforms continue to rely on the GCC toolchain for software development, such as RISC-V, a free and open ISA that is of interest to the machine learning, Artificial Intelligence (AI), and IoT market segments. GCC continues to be a critical component in the continuing development of Linux systems. The Clear Linux Project for Intel Architecture, an emerging distribution built for cloud, client, and IoT use cases, provides a good example of how GCC compiler technology is being used and improved to boost the performance and security of a Linux-based system. GCC is also being used for application development for Microsoft's Azure Sphere, a Linux-based operating system for IoT applications that initially supports the ARM-based MediaTek MT3620 processor. In terms of developing the next generation of programmers, GCC is also a core component of the Windows toolchain for Raspberry Pi, the low-cost embedded board running Debian-based GNU/Linux that is used to promote the teaching of basic computer science in schools and developing countries.
GCC was first released on March 22, 1987 by Richard Stallman, the founder of the GNU Project, and was considered a significant breakthrough since it was the first portable ANSI C optimizing compiler released as free software. GCC is maintained by a community of programmers from all over the world under the direction of a steering committee that ensures broad, representative oversight of the project. GCC's community approach is one of its strengths, resulting in a large and diverse community of developers and users that contribute to and provide support for the project. According to Open Hub, GCC “is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Open Hub.” [18]
There has been a lot of discussion about the licensing of GCC, most of which confuses rather than enlightens. GCC is distributed under the GNU General Public License version 3 or later with the Runtime Library Exception. This is a copyleft license, which means that derivative work can only be distributed under the same license terms. GPLv3 is intended to protect GCC from being made proprietary and requires that changes to GCC code are made available freely and openly. To the end user the compiler is just the same as any other; using GCC makes no difference to any licensing choices you might make for your own code. [19]
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/10/gcc-optimizing-linux-internet-and-everything
Author: [Margaret Lewis][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]: https://www.linux.com/users/margaret-lewis
[b]: https://github.com/lujun9972
[1]: https://www.openacc.org/tools
[2]: /files/images/gccjpg-0
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/gcc_0.jpg?itok=HbGnRqWX (performance)
[4]: https://www.linux.com/licenses/category/used-permission

View File

@ -1,292 +0,0 @@
Translating by GraveAccent
Rock Solid React.js Foundations: A Beginner's Guide
============================================================
** There is a Canvas element here; please handle it manually **
![](https://cdn-images-1.medium.com/max/1000/1*wj5ujzj5wPQIKb0mIWLgNQ.png)
React.js crash course
I've been working with React and React-Native for the last couple of months. I have already released two apps in production, [Kiven Aa][1] (React) and [Pollen Chat][2] (React Native). When I started learning React, I was searching for something (a blog, a video, a course, whatever) that didn't only teach me how to write apps in React. I also wanted it to prepare me for interviews.
Most of the material I found concentrated on one or the other. So, this post is aimed at the audience looking for a perfect mix of theory and hands-on practice. I will give you a little bit of theory so that you understand what is happening under the hood, and then I will show you how to write some React.js code.
If you prefer video, I have this entire course up on YouTube as well. Please check that out.
Let's dive in…
> React.js is a JavaScript library for building user interfaces
You can build all sorts of single page applications. For example, chat messengers and e-commerce portals where you want to show changes on the user interface in real-time.
### Everythings a component
A React app is composed of components, _a lot of them_, nested into one another. _But what are components, you may ask?_
A component is a reusable piece of code, which defines how certain features should look and behave on the UI. For example, a button is a component.
Let's look at the following calculator, which you see on Google when you try to calculate something like 2 + 2 = 4 - 1 = 3 (quick maths!)
![](https://cdn-images-1.medium.com/max/1000/1*NS9DykYDyYG7__UXJdysTA.png)
Red markers denote components
As you can see in the image above, the calculator has many areas, like the _result display window_ and the _numpad_. All of these can be separate components or one giant component. It depends on how comfortable one is in breaking down and abstracting away things in React.
You write code for all such components separately. Then combine those under one container, which in turn is a React component itself. This way you can create reusable components and your final app will be a collection of separate components working together.
The following is one such way you can write the calculator, shown above, in React.
```
<Calculator>
<DisplayWindow />
<NumPad>
<Key number={1}/>
<Key number={2}/>
.
.
.
<Key number={9}/>
</NumPad>
</Calculator>
```
Yes! It looks like HTML code, but it isn't. We will explore more about it in the later sections.
### Setting up our Playground
This tutorial focuses on React's fundamentals. It is not primarily geared towards React for Web or [React Native][3] (for building mobile apps). So, we will use an online editor so as to avoid web- or native-specific configurations before even learning what React can do.
I've already set up an environment for you on [codepen.io][4]. Just follow the link and read all the comments in the HTML and JavaScript (JS) tabs.
### Controlling Components
We've learned that a React app is a collection of various components, structured as a nested tree. Thus, we require some sort of mechanism to pass data from one component to another.
#### Enter “props”
We can pass arbitrary data to our component using a `props` object. Every component in React gets this `props` object.
Before learning how to use this `props` object, let's learn about functional components.
#### a) Functional component
A functional component in React consumes arbitrary data that you pass to it using `props` object. It returns an object which describes what UI React should render. Functional components are also known as Stateless components.
Let's write our first functional component.
```
function Hello(props) {
return <div>{props.name}</div>
}
```
It's that simple. We just passed `props` as an argument to a plain JavaScript function and returned, _umm, well, what was that? That `<div>{props.name}</div>` thing!_ It's JSX (JavaScript Extended). We will learn more about it in a later section.
The above function will render the following HTML in the browser.
```
<!-- If the "props" object is: {name: 'rajat'} -->
<div>
rajat
</div>
```
> Read the section below about JSX, where I have explained how we got this HTML from our JSX code.
How can you use this functional component in your React app? Glad you asked! It's as simple as the following.
```
<Hello name='rajat' age={26}/>
```
The attribute `name` in the above code becomes `props.name` inside our `Hello` component. The attribute `age` becomes `props.age` and so on.
> Remember! You can nest one React component inside other React components.
Let's use this `Hello` component in our codepen playground. Replace the `div` inside `ReactDOM.render()` with our `Hello` component, as follows, and see the changes in the bottom window.
```
function Hello(props) {
return <div>{props.name}</div>
}
ReactDOM.render(<Hello name="rajat"/>, document.getElementById('root'));
```
> But what if your component has some internal state? For instance, like the following counter component, which has an internal count variable that changes on + and - key presses.
A React component with an internal state
#### b) Class-based component
The class-based component has an additional property, `state`, which you can use to hold a component's private data. We can rewrite our `Hello` component using class notation as follows. Since these components have a state, they are also known as Stateful components.
```
class Counter extends React.Component {
// this method should be present in your component
render() {
return (
<div>
{this.props.name}
</div>
);
}
}
```
We extend the `React.Component` class of the React library to make class-based components in React. Learn more about JavaScript classes [here][5].
The `render()` method must be present in your class as React looks for this method in order to know what UI it should render on screen.
To use this sort of internal state, we first have to initialize the `state` object in the constructor of the component class, in the following way.
```
class Counter extends React.Component {
constructor() {
super();
// define the internal state of the component
this.state = {name: 'rajat'}
}
render() {
return (
<div>
{this.state.name}
</div>
);
}
}
// Usage:
// In your react app: <Counter />
```
Similarly, the `props` can be accessed inside our class-based component using `this.props` object.
To set the state, you use `React.Component`'s `setState()`. We will see an example of this, in the last part of this tutorial.
> Tip: Never call `setState()` inside the `render()` function, as `setState()` causes the component to re-render, and this will result in an endless loop.
![](https://cdn-images-1.medium.com/max/1000/1*rPUhERO1Bnr5XdyzEwNOwg.png)
A class-based component has an optional property “state”.
_Apart from `state`, a class-based component has some life-cycle methods like `componentWillMount()`. You can use these to do stuff like initializing the `state`, but that is out of the scope of this post._
### JSX
JSX is a short form of _JavaScript Extended_ and it is a way to write `React` components. Using JSX, you get the full power of JavaScript inside XML-like tags.
You put JavaScript expressions inside `{}`. The following are some valid JSX examples.
```
<button disabled={true}>Press me!</button>
<button disabled={true}>Press me {3+1} times!</button>;
<div className='container'><Hello /></div>
```
The way it works is that you write JSX to describe what your UI should look like. A [transpiler][6] like `Babel` converts that code into a bunch of `React.createElement()` calls. The React library then uses those `React.createElement()` calls to construct a tree-like structure of DOM elements (in the case of React for Web) or native views (in the case of React Native) and keeps it in memory.
React then calculates how it can effectively mimic this in-memory tree in the UI displayed to the user. This process is known as [reconciliation][7]. After that calculation is done, React makes the changes to the actual UI on the screen.
** There is a Canvas element here; please handle it manually **
![](https://cdn-images-1.medium.com/max/1000/1*ighKXxBnnSdDlaOr5-ZOPg.png)
How React converts your JSX into a tree which describes your app's UI
You can use [Babel's online REPL][8] to see what React actually outputs when you write some JSX.
![](https://cdn-images-1.medium.com/max/1000/1*NRuBKgzNh1nHwXn0JKHafg.png)
Use Babel REPL to transform JSX into plain JavaScript
> Since JSX is just a syntactic sugar over plain `React.createElement()` calls, React can be used without JSX.
Now we have every concept in place, so we are well positioned to write a `counter` component that we saw earlier as a GIF.
The code is as follows and I hope that you already know how to render that in our playground.
```
class Counter extends React.Component {
constructor(props) {
super(props);
this.state = {count: this.props.start || 0}
// the following bindings are necessary to make `this` work in the callback
this.inc = this.inc.bind(this);
this.dec = this.dec.bind(this);
}
inc() {
this.setState({
count: this.state.count + 1
});
}
dec() {
this.setState({
count: this.state.count - 1
});
}
render() {
return (
<div>
<button onClick={this.inc}>+</button>
<button onClick={this.dec}>-</button>
<div>{this.state.count}</div>
</div>
);
}
}
```
The following are some salient points about the above code.
1. JSX uses `camelCasing` hence `button`'s attribute is `onClick`, not `onclick`, as we use in HTML.
2. Binding is necessary for `this` to work on callbacks. See line #8 and 9 in the code above.
The final interactive code is located [here][9].
With that, we've reached the conclusion of our React crash course. I hope I have shed some light on how React works and how you can use React to build bigger apps, using smaller and reusable components.
* * *
If you have any queries or doubts, hit me up on Twitter [@rajat1saxena][10] or write to me at [rajat@raynstudios.com][11].
* * *
#### Please recommend this post, if you liked it and share it with your network. Follow me for more tech related posts and consider subscribing to my channel [Rayn Studios][12] on YouTube. Thanks a lot.
--------------------------------------------------------------------------------
via: https://medium.freecodecamp.org/rock-solid-react-js-foundations-a-beginners-guide-c45c93f5a923
Author: [Rajat Saxena][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]:https://medium.freecodecamp.org/@rajat1saxena
[1]:https://kivenaa.com/
[2]:https://play.google.com/store/apps/details?id=com.pollenchat.android
[3]:https://facebook.github.io/react-native/
[4]:https://codepen.io/raynesax/pen/MrNmBM
[5]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes
[6]:https://en.wikipedia.org/wiki/Source-to-source_compiler
[7]:https://reactjs.org/docs/reconciliation.html
[8]:https://babeljs.io/repl
[9]:https://codepen.io/raynesax/pen/QaROqK
[10]:https://twitter.com/rajat1saxena
[11]:mailto:rajat@raynstudios.com
[12]:https://www.youtube.com/channel/UCUmQhjjF9bsIaVDJUHSIIKw

View File

@ -1,149 +0,0 @@
translating---geekpi
A Desktop GUI Application For NPM
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/ndm-3-720x340.png)
NPM, short for **N**ode **P**ackage **M**anager, is a command line package manager for installing NodeJS packages, or modules. We have already published a guide that describes how to [**manage NodeJS packages using NPM**][1]. As you may have noticed, managing NodeJS packages or modules using NPM is not a big deal. However, if you're not comfortable with the CLI way, there is a desktop GUI application named **NDM** which can be used for managing NodeJS applications/modules. NDM, which stands for **N**PM **D**esktop **M**anager, is a free, open source graphical front-end for NPM that allows us to install, update, and remove NodeJS packages via a simple graphical window.
In this brief tutorial, we are going to learn about Ndm in Linux.
### Install NDM
NDM is available in AUR, so you can install it using any AUR helpers on Arch Linux and its derivatives like Antergos and Manjaro Linux.
Using [**Pacaur**][2]:
```
$ pacaur -S ndm
```
Using [**Packer**][3]:
```
$ packer -S ndm
```
Using [**Trizen**][4]:
```
$ trizen -S ndm
```
Using [**Yay**][5]:
```
$ yay -S ndm
```
Using [**Yaourt**][6]:
```
$ yaourt -S ndm
```
On RHEL based systems like CentOS, run the following command to install NDM.
```
$ echo "[fury]
name=ndm repository
baseurl=https://repo.fury.io/720kb/
enabled=1
gpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update && sudo yum install ndm
```
On Debian, Ubuntu, Linux Mint:
```
$ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm
```
NDM can also be installed using **Linuxbrew**. First, install Linuxbrew as described in the following link.
After installing Linuxbrew, you can install NDM using the following commands:
```
$ brew update
$ brew install ndm
```
On other Linux distributions, go to the [**NDM releases page**][7], download the latest version, compile and install it yourself.
### NDM Usage
Launch NDM either from the menu or using the application launcher. This is how NDM's default interface looks.
![][9]
From here, you can install NodeJS packages/modules either locally or globally.
**Install NodeJS packages locally**
To install a package locally, first choose project directory by clicking on the **“Add projects”** button from the Home screen and select the directory where you want to keep your project files. For example, I have chosen a directory named **“demo”** as my project directory.
Click on the project directory (i.e., **demo**) and then click the **Add packages** button.
![][10]
Type the package name you want to install and hit the **Install** button.
![][11]
Once installed, the packages will be listed under the project's directory. Simply click on the directory to view the list of locally installed packages.
![][12]
Similarly, you can create separate project directories and install NodeJS modules in them. To view the list of installed modules in a project, click on the project directory, and you will see the packages on the right side.
**Install NodeJS packages globally**
To install NodeJS packages globally, click on the **Globals** button on the left of the main interface. Then, click the “Add packages” button, type the name of the package, and hit the “Install” button.
**Manage packages**
Click on any installed package and you will see various options at the top, such as:
1. Version (to view the installed version),
2. Latest (to install latest available version),
3. Update (to update the currently selected package),
4. Uninstall (to remove the selected package) etc.
![][13]
NDM has two more options, namely **“Update npm”**, which is used to update the node package manager to the latest available version, and **Doctor**, which runs a set of checks to ensure that your npm installation has what it needs to manage your packages/modules.
### Conclusion
NDM makes the process of installing, updating, and removing NodeJS packages easier! You don't need to memorize the commands to perform those tasks. NDM lets us do them all with a few mouse clicks via a simple graphical window. For those who are too lazy to type commands, NDM is the perfect companion for managing NodeJS packages.
Cheers!
**Resource:**
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/
Author: [SK][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
Topic selection: [lujun9972](https://github.com/lujun9972)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/) (Linux China)
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
[7]:https://github.com/720kb/ndm/releases
[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-1.png
[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-5-1.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-6.png
[12]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-7.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-8.png

View File

@ -1,140 +0,0 @@
Translating by qhwdw
What Stable Kernel Should I Use?
======
I get a lot of questions all the time from people asking me what stable kernel they should be using for their product/device/laptop/server/etc. Especially given the now-extended length of time that some kernels are being supported by me and others, this isn't always a very obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use whatever kernel version you want, but here's what I recommend.
As always, the opinions written here are my own, I speak for no one but myself.
### What kernel to pick
Here's my short list of what kernel you should use, ranked from best to worst options. I'll go into the details of all of these below, but if you just want the summary, here it is:
Hierarchy of what kernel to use, from best solution to worst:
* Supported kernel from your favorite Linux distribution
* Latest stable release
* Latest LTS release
* Older LTS release that is still being maintained
What kernel to never use:
* Unmaintained kernel release
To give numbers to the above, today, as of August 24, 2018, the front page of kernel.org looks like this:
![][1]
So, based on the above list that would mean that:
* 4.18.5 is the latest stable release
* 4.14.67 is the latest LTS release
* 4.9.124, 4.4.152, and 3.16.57 are the older LTS releases that are still being maintained
* 4.17.19 and 3.18.119 are “End of Life” kernels that have had a release in the past 60 days, and as such stick around on the kernel.org site for those who still might want to use them.
Quite easy, right?
Ok, now for some justification for all of this:
### Distribution kernels
The best solution for almost all Linux users is to just use the kernel from your favorite Linux distribution. Personally, I prefer the community-based Linux distributions that constantly roll along with the latest updated kernel and are supported by their developer community. Distributions in this category are Fedora, openSUSE, Arch, Gentoo, CoreOS, and others.
All of these distributions use the latest stable upstream kernel release and make sure that any needed bugfixes are applied on a regular basis. That is one of the most solid and best kernels that you can use when it comes to having the latest fixes ([remember all fixes are security fixes][2]) in it.
There are some community distributions that take a bit longer to move to a new kernel release, but eventually get there and support the kernel they currently have quite well. Those are also great to use, and examples of these are Debian and Ubuntu.
Just because I did not list your favorite distro here does not mean its kernel is not good. Look on the web site for the distro and make sure that the kernel package is constantly updated with the latest security patches, and all should be well.
Lots of people seem to like the old, “traditional” model of a distribution and use RHEL, SLES, CentOS or the “LTS” Ubuntu release. Those distros pick a specific kernel version and then camp out on it for years, if not decades. They do loads of work backporting the latest bugfixes and sometimes new features to these kernels, all in a quixotic quest to keep the version number from ever changing, despite having many thousands of changes on top of that older kernel version. This work is a truly thankless job, and the developers assigned to these tasks do some wonderful work in order to achieve these goals. If you like never seeing your kernel version number change, then use these distributions. They usually cost some money to use, but the support you get from these companies is worth it when something goes wrong.
So again, the best kernel you can use is one that someone else supports and that you can turn to for help. Use that support; usually you are already paying for it (for the enterprise distributions), and those companies know what they are doing.
But, if you do not want to trust someone else to manage your kernel for you, or you have hardware that a distribution does not support, then you want to run the Latest stable release:
### Latest stable release
This kernel is the latest one from the Linux kernel developer community that they declare as “stable”. About every three months, the community releases a new stable kernel that contains all of the newest hardware support, the latest performance improvements, as well as the latest bugfixes for all parts of the kernel. Over the next 3 months, bugfixes that go into the next kernel release to be made are backported into this stable release, so that any users of this kernel are sure to get them as soon as possible.
This is usually the kernel that most community distributions use as well, so you can be sure it is tested and has a large audience of users. Also, the kernel community (all 4000+ developers) are willing to help support users of this release, as it is the latest one that they made.
After 3 months, a new kernel is released and you should move to it to ensure that you stay up to date, as support for this kernel is usually dropped a few weeks after the newer release happens.
If you have new hardware that was purchased after the last LTS release came out, you are almost guaranteed to have to run this kernel in order to have it supported. So for desktops or new servers, this is usually the recommended kernel to be running.
### Latest LTS release
If your hardware relies on a vendor's out-of-tree patch in order to make it work properly (like almost all embedded devices these days), then the next best kernel to be using is the latest LTS release. That release gets all of the latest kernel fixes that go into the stable releases where applicable, and lots of users test and use it.
Note, no new features and almost no new hardware support is ever added to these kernels, so if you need to use a new device, it is better to use the latest stable release, not this release.
Also this release is common for users that do not like to worry about “major” upgrades happening on them every 3 months. So they stick to this release and upgrade every year instead, which is a fine practice to follow.
The downsides of using this release is that you do not get the performance improvements that happen in newer kernels, except when you update to the next LTS kernel, potentially a year in the future. That could be significant for some workloads, so be very aware of this.
Also, if you have problems with this kernel release, the first thing that any developer whom you report the issue to is going to ask you to do is, “does the latest stable release have this problem?” So you will need to be aware that support might not be as easy to get as with the latest stable releases.
Now if you are stuck with a large patchset and can not update to a new LTS kernel once a year, perhaps you want the older LTS releases:
### Older LTS release
These releases have traditionally been supported by the community for 2 years, sometimes longer when a major distribution (like Debian or SLES) relies on them. However, in the past year, thanks to a lot of support and investment in testing and infrastructure from Google, Linaro, Linaro member companies, [kernelci.org][3], and others, these kernels are starting to be supported for much longer.
Here are the latest LTS releases and how long they will be supported, as shown at [kernel.org/category/releases.html][4] on August 24, 2018:
![][5]
The reason Google and other companies want these kernels to live longer is the crazy (some would say broken) development model of almost all SoC chips these days. Those devices start their development lifecycle a few years before the chip is released, yet that code is never merged upstream, so a brand-new chip ends up shipping on a kernel that is already two years old. These SoC trees usually have over 2 million lines added to them, making them something I have started calling “Linux-like” kernels.
If the LTS releases stop happening after 2 years, then support from the community instantly stops, and no one ends up doing bugfixes for them. This results in millions of very insecure devices floating around in the world, not something that is good for any ecosystem.
Because of this dependency, these companies now require new devices to keep updating to the latest LTS releases as they happen for their specific branch (i.e., every 4.9.y release that comes out). One example is the Android kernel requirements: new devices shipping with the “O” and now “P” releases must run a specified minimum kernel version, and Android security releases might start to require those “.y” releases to be picked up on devices more frequently.
I will note that some manufacturers are already doing this today. Sony is one great example, updating to the latest 4.4.y release on many of their new phones for their quarterly security release. Another good example is the small company Essential, which has been tracking the 4.4.y releases faster than anyone else I know of.
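Keeping a product branch current with the newest .y release can be largely scripted. The sketch below uses the usual linux-stable git tree; the branch name and the 4.9 series are made up for illustration, so adjust them to whatever your product actually tracks.
```
# Merge the newest 4.9.y stable tag into a (hypothetical) product branch
git remote add stable \
    https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
git fetch stable --tags
latest=$(git tag -l 'v4.9.*' --sort=-v:refname | head -n 1)
git checkout product-4.9
git merge "$latest"
```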
There is one huge caveat when using a kernel like this. The number of security fixes that get backported is not as great as for the latest LTS release, because the devices that use these older LTS kernels traditionally serve a much more restricted set of use cases. These kernels should not be used in any type of “general computing” model where you have untrusted users or virtual machines, as the ability to backport some of the recent Spectre-type fixes to older releases is greatly reduced, if present at all in some branches.
So again, only use older LTS releases in a device that you fully control, or lock down with a very strong security model (like Android enforces using SELinux and application isolation). Never use these releases on a server with untrusted users, programs, or virtual machines.
Also, support from the community for these older LTS releases is greatly reduced even compared to the normal LTS releases, if it is available at all. If you use these kernels, you really are on your own and need to be able to support the kernel yourself, or rely on your SoC vendor to provide that support for you (note that almost none of them do provide that support, so beware…)
### Unmaintained kernel release
Surprisingly, many companies just grab a random kernel release, slap it into their product, and proceed to ship it in hundreds of thousands of units without a second thought. One crazy example of this is the Lego Mindstorms systems, which shipped a random -rc release of a kernel in their devices for some unknown reason. A -rc release is a development release that not even the Linux kernel developers feel is ready for everyone to use just yet, let alone millions of users.
You are of course free to do this if you want, but note that you really are on your own here. The community cannot support you, as no one is watching all kernel versions for specific issues, so you will have to rely on in-house support for everything that could go wrong. For some companies and systems that could be just fine, but be aware of the “hidden” cost this might carry if you do not plan for it up front.
### Summary
So, here's a short list of different types of devices, and what I would recommend for their kernels:
* Laptop / Desktop: Latest stable release
* Server: Latest stable release or latest LTS release
* Embedded device: Latest LTS release or older LTS release if the security model used is very strong and tight.
And as for me, what do I run on my machines? My laptops run the latest development kernel (i.e., Linus's development tree) plus whatever kernel changes I am currently working on, and my servers run the latest stable release. So despite being in charge of the LTS releases, I don't run them myself, except on testing systems. I rely on the development and latest stable releases to ensure that my machines are running the fastest and most secure releases that we know how to create at this point in time.
--------------------------------------------------------------------------------
via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/
作者:[Greg Kroah-Hartman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://kroah.com
[1]:https://s3.amazonaws.com/kroah.com/images/kernel.org_2018_08_24.png
[2]:http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/
[3]:https://kernelci.org/
[4]:https://www.kernel.org/category/releases.html
[5]:https://s3.amazonaws.com/kroah.com/images/kernel.org_releases_2018_08_24.png

View File

@ -1,90 +0,0 @@
translating by belitex
3 open source distributed tracing tools
======
Find performance issues quickly with these tools, which provide a graphical view of what's happening across complex software systems.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8)
Distributed tracing systems enable users to track a request through a software system that is distributed across multiple applications, services, and databases as well as intermediaries like proxies. This allows for a deeper understanding of what is happening within the software system. These systems produce graphical representations that show how much time the request took on each step and list each known step.
A user reviewing this content can determine where the system is experiencing latencies or blockages. Instead of testing the system like a binary search tree when requests start failing, operators and developers can see exactly where the issues begin. This can also reveal where performance changes might be occurring from deployment to deployment. It's always better to catch regressions automatically by alerting to the anomalous behavior than to have your customers tell you.
How does this tracing thing work? Well, each request gets a special ID that's usually injected into the headers. This ID uniquely identifies that transaction. This transaction is normally called a trace. The trace is the overall abstract idea of the entire transaction. Each trace is made up of spans. These spans are the actual work being performed, like a service call or a database request. Each span also has a unique ID. Spans can create subsequent spans called child spans, and child spans can have multiple parents.
Once a transaction (or trace) has run its course, it can be searched in a presentation layer. There are several tools in this space that we'll discuss later, but the picture below shows [Jaeger][1] from my [Istio walkthrough][2]. It shows multiple spans of a single trace. The power of this is immediately clear as you can better understand the transaction's story at a glance.
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_jaeger_istio_0.png)
This demo uses Istio's built-in OpenTracing implementation, so I can get tracing without even modifying my application. It also uses Jaeger, which is OpenTracing-compatible.
So what is OpenTracing? Let's find out.
### OpenTracing API
[OpenTracing][3] is a spec that grew out of [Zipkin][4] to provide cross-platform compatibility. It offers a vendor-neutral API for adding tracing to applications and delivering that data into distributed tracing systems. A library written for the OpenTracing spec can be used with any system that is OpenTracing-compliant. Zipkin, Jaeger, and Appdash are examples of open source tools that have adopted the open standard, but even proprietary tools like [Datadog][5] and [Instana][6] are adopting it. This is expected to continue as OpenTracing reaches ubiquitous status.
### OpenCensus
Okay, we have OpenTracing, but what is this [OpenCensus][7] thing that keeps popping up in my searches? Is it a competing standard, something completely different, or something complementary?
The answer depends on who you ask. I will do my best to explain the difference (as I understand it): OpenCensus takes a more holistic or all-inclusive approach. OpenTracing is focused on establishing an open API and spec and not on open implementations for each language and tracing system. OpenCensus provides not only the specification but also the language implementations and wire protocol. It also goes beyond tracing by including additional metrics that are normally outside the scope of distributed tracing systems.
OpenCensus allows viewing data on the host where the application is running, but it also has a pluggable exporter system for exporting data to central aggregators. The current exporters produced by the OpenCensus team include Zipkin, Prometheus, Jaeger, Stackdriver, Datadog, and SignalFx, but anyone can create an exporter.
From my perspective, there's a lot of overlap. One isn't necessarily better than the other, but it's important to know what each does and doesn't do. OpenTracing is primarily a spec, with others doing the implementation and opinionation. OpenCensus provides a holistic approach for the local component with more opinionation but still requires other systems for remote aggregation.
### Tool options
#### Zipkin
Zipkin was one of the first systems of this kind. It was developed by Twitter based on the [Google Dapper paper][8] about the internal system Google uses. Zipkin was written using Java, and it can use Cassandra or ElasticSearch as a scalable backend. Most companies should be satisfied with one of those options. The lowest supported Java version is Java 6. It also uses the [Thrift][9] binary communication protocol, which is popular in the Twitter stack and is hosted as an Apache project.
The system consists of reporters (clients), collectors, a query service, and a web UI. Zipkin is meant to be safe in production by transmitting only a trace ID within the context of a transaction to inform receivers that a trace is in process. The data collected in each reporter is then transmitted asynchronously to the collectors. The collectors store these spans in the database, and the web UI presents this data to the end user in a consumable format. The delivery of data to the collectors can occur in three different methods: HTTP, Kafka, and Scribe.
The [Zipkin community][10] has also created [Brave][11], a Java client implementation compatible with Zipkin. It has no dependencies, so it wont drag your projects down or clutter them with libraries that are incompatible with your corporate standards. There are many other implementations, and Zipkin is compatible with the OpenTracing standard, so these implementations should also work with other distributed tracing systems. The popular Spring framework has a component called [Spring Cloud Sleuth][12] that is compatible with Zipkin.
#### Jaeger
[Jaeger][1] is a newer project from Uber Technologies that the [CNCF][13] has since adopted as an Incubating project. It is written in Golang, so you don't have to worry about having dependencies installed on the host or any overhead of interpreters or language virtual machines. Similar to Zipkin, Jaeger also supports Cassandra and ElasticSearch as scalable storage backends. Jaeger is also fully compatible with the OpenTracing standard.
Jaeger's architecture is similar to Zipkin, with clients (reporters), collectors, a query service, and a web UI, but it also has an agent on each host that locally aggregates the data. The agent receives data over a UDP connection, which it batches and sends to a collector. The collector receives that data in the form of the [Thrift][14] protocol and stores that data in Cassandra or ElasticSearch. The query service can access the data store directly and provide that information to the web UI.
By default, a user won't get all the traces from the Jaeger clients. The system samples 0.1% (1 in 1,000) of traces that pass through each client. Keeping and transmitting all traces would be a bit overwhelming to most systems. However, this can be increased or decreased by configuring the agents, which the client consults with for its configuration. This sampling isn't completely random, though, and it's getting better. Jaeger uses probabilistic sampling, which tries to make an educated guess at whether a new trace should be sampled or not. [Adaptive sampling is on its roadmap][15], which will improve the sampling algorithm by adding additional context for making decisions.
#### Appdash
[Appdash][16] is a distributed tracing system written in Golang, like Jaeger. It was created by [Sourcegraph][17] based on Google's Dapper and Twitter's Zipkin. Similar to Jaeger and Zipkin, Appdash supports the OpenTracing standard; this was a later addition and requires a component that is different from the default component. This adds risk and complexity.
At a high level, Appdash's architecture consists mostly of three components: a client, a local collector, and a remote collector. There's not a lot of documentation, so this description comes from testing the system and reviewing the code. The client in Appdash gets added to your code. Appdash provides Python, Golang, and Ruby implementations, but OpenTracing libraries can be used with Appdash's OpenTracing implementation. The client collects the spans and sends them to the local collector. The local collector then sends the data to a centralized Appdash server running its own local collector, which is the remote collector for all other nodes in the system.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/distributed-tracing-tools
作者:[Dan Barker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barkerd427
[1]: https://www.jaegertracing.io/
[2]: https://www.youtube.com/watch?v=T8BbeqZ0Rls
[3]: http://opentracing.io/
[4]: https://zipkin.io/
[5]: https://www.datadoghq.com/
[6]: https://www.instana.com/
[7]: https://opencensus.io/
[8]: https://research.google.com/archive/papers/dapper-2010-1.pdf
[9]: https://thrift.apache.org/
[10]: https://zipkin.io/pages/community.html
[11]: https://github.com/openzipkin/brave
[12]: https://cloud.spring.io/spring-cloud-sleuth/
[13]: https://www.cncf.io/
[14]: https://en.wikipedia.org/wiki/Apache_Thrift
[15]: https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling
[16]: https://github.com/sourcegraph/appdash
[17]: https://about.sourcegraph.com/

View File

@ -1,3 +1,5 @@
translating by singledo
How to use the SSH and SFTP protocols on your home network
======

View File

@ -1,3 +1,5 @@
translating---geekpi
Tips for listing files with ls at the Linux command line
======
Learn some of the Linux 'ls' command's most useful variations.

View File

@ -1,3 +1,4 @@
Translating by Ryze-Borgia
Functional programming in Python: Immutable data structures
======
Immutability can help us better understand our code. Here's how to achieve it without sacrificing performance.

View File

@ -0,0 +1,328 @@
6 Commands To Shutdown And Reboot The Linux System From Terminal
======
Linux administrators perform many tasks in their routine work, and shutting down and rebooting systems is among them.
It's one of the riskier tasks for them, because sometimes the system won't come back up for one reason or another, and they then have to spend extra time troubleshooting it.
These tasks can be performed through the CLI in Linux. Most of the time, Linux administrators prefer to perform them via the CLI because they are familiar with it.
A few commands are available in Linux for this purpose, and users need to choose the appropriate one for the task based on their requirements.
Each of these commands has its own features, and Linux admins can pick whichever suits the job.
**Suggested Read :**
**(#)** [11 Methods To Find System/Server Uptime In Linux][1]
**(#)** [Tuptime A Tool To Report The Historical And Statistical Running Time Of Linux System][2]
When a shutdown or reboot is initiated, all logged-in users and processes are notified. Also, no new logins are allowed if the time argument is used.
I would suggest you double-check before performing this action, because there are a few prerequisites to make sure everything goes fine.
Those steps are listed below.
* Make sure you have console access to troubleshoot further in case any issues arise: VMware access for VMs and IPMI/iLO/iDRAC access for physical servers.
* Create a ticket per your company's procedure (either an Incident or a Change ticket) and get approval.
* Back up the important configuration files and copy them to other servers for safety.
* Verify the log files (perform the pre-check).
* Communicate your activity to dependent teams such as DBA, Application, etc.
* Ask them to bring down their database or application services and get a confirmation from them.
* Validate the same from your end using the appropriate command to double-confirm.
* Finally, reboot the system.
* Verify the log files (perform the post-check). If everything looks good, move to the next step; if something is wrong, troubleshoot accordingly.
* Once it is back up and running, ask the dependent teams to bring up their applications.
* Monitor for some time, and communicate back to them that everything is working as expected.
These tasks can be performed using the following commands.
* **`shutdown Command:`** The shutdown command halts, powers off, or reboots the machine.
* **`halt Command:`** The halt command halts, powers off, or reboots the machine.
* **`poweroff Command:`** The poweroff command halts, powers off, or reboots the machine.
* **`reboot Command:`** The reboot command halts, powers off, or reboots the machine.
* **`init Command:`** init (short for initialization) is the first process started during booting of the computer system.
* **`systemctl Command:`** systemd is a system and service manager for Linux operating systems, and systemctl is its control tool.
### Method-1: How To Shutdown And Reboot The Linux System Using Shutdown Command
The shutdown command powers off or reboots a local or remote Linux machine. It offers multiple options to perform this task effectively. If the time argument is used, the /run/nologin file is created 5 minutes before the system goes down to ensure that no further logins are allowed.
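You can see this mechanism for yourself: while a timed shutdown is pending, the flag file exists and contains the warning text. A quick check, nothing more:
```
# cat /run/nologin
```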
The general syntax is
```
# shutdown [OPTION] [TIME] [MESSAGE]
```
Run the command below to shut down a Linux machine immediately. It will kill all processes immediately and shut down the system.
```
# shutdown -h now
```
* **`-h:`** Equivalent to poweroff, unless halt is specified.
Alternatively, we can use the shutdown command with the `halt` option to bring down the machine immediately.
```
# shutdown --halt now
or
# shutdown -H now
```
* **`-H, --halt:`** Halt the machine.
Alternatively, we can use the shutdown command with the `poweroff` option to bring down the machine immediately.
```
# shutdown --poweroff now
or
# shutdown -P now
```
* **`-P, --poweroff:`** Power-off the machine (the default).
If you run the commands below without a time parameter, the system will wait for one minute and then execute the given action.
```
# shutdown -h
Shutdown scheduled for Mon 2018-10-08 06:42:31 EDT, use 'shutdown -c' to cancel.
[email protected]#
Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT):
The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
```
All other logged in users can see a broadcast message in their terminal like below.
```
[[email protected] ~]$
Broadcast message from [email protected] (Mon 2018-10-08 06:41:31 EDT):
The system is going down for power-off at Mon 2018-10-08 06:42:31 EDT!
```
For the halt option:
```
# shutdown -H
Shutdown scheduled for Mon 2018-10-08 06:37:53 EDT, use 'shutdown -c' to cancel.
[email protected]#
Broadcast message from [email protected] (Mon 2018-10-08 06:36:53 EDT):
The system is going down for system halt at Mon 2018-10-08 06:37:53 EDT!
```
For the poweroff option:
```
# shutdown -P
Shutdown scheduled for Mon 2018-10-08 06:40:07 EDT, use 'shutdown -c' to cancel.
[email protected]#
Broadcast message from [email protected] (Mon 2018-10-08 06:39:07 EDT):
The system is going down for power-off at Mon 2018-10-08 06:40:07 EDT!
```
A scheduled shutdown can be cancelled by running `shutdown -c` in your terminal.
```
# shutdown -c
Broadcast message from [email protected] (Mon 2018-10-08 06:39:09 EDT):
The system shutdown has been cancelled at Mon 2018-10-08 06:40:09 EDT!
```
All other logged in users can see a broadcast message in their terminal like below.
```
[[email protected] ~]$
Broadcast message from [email protected] (Mon 2018-10-08 06:41:35 EDT):
The system shutdown has been cancelled at Mon 2018-10-08 06:42:35 EDT!
```
Add a time parameter if you want to perform the shutdown or reboot `N` minutes from now. You can also broadcast a custom message to logged-in users. In this example, we are rebooting the machine in 5 minutes.
```
# shutdown -r +5 "To activate the latest Kernel"
Shutdown scheduled for Mon 2018-10-08 07:13:16 EDT, use 'shutdown -c' to cancel.
[[email protected] ~]#
Broadcast message from [email protected] (Mon 2018-10-08 07:08:16 EDT):
To activate the latest Kernel
The system is going down for reboot at Mon 2018-10-08 07:13:16 EDT!
```
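The time argument also accepts an absolute hh:mm value on a 24-hour clock, which is handy for maintenance windows. The time and message below are only placeholders:
```
# shutdown -r 23:30 "Rebooting for scheduled maintenance"
```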
Run the command below to reboot a Linux machine immediately. It will kill all processes immediately and reboot the system.
```
# shutdown -r now
```
* **`-r, --reboot:`** Reboot the machine.
### Method-2: How To Shutdown And Reboot The Linux System Using reboot Command
The reboot command powers off or reboots a local or remote Linux machine, and it comes with two useful options.
It performs a graceful shutdown and restart of the machine (similar to the restart option available in your system menu).
Run the “reboot” command without any options to reboot the Linux machine.
```
# reboot
```
Run the “reboot” command with the `-p` option to power off or shut down the Linux machine.
```
# reboot -p
```
* **`-p, --poweroff:`** Power off the machine, whether the halt or poweroff command is invoked.
Run the “reboot” command with the `-f` option to forcefully reboot the Linux machine (this is similar to pressing the power button on the machine).
```
# reboot -f
```
* **`-f, --force:`** Force immediate halt, power-off, or reboot.
### Method-3: How To Shutdown And Reboot The Linux System Using init Command
init (short for initialization) is the first process started during booting of the computer system.
It checks the /etc/inittab file to decide the Linux run level, and it also allows users to shut down and reboot the Linux machine. There are seven runlevels, from zero to six.
**Suggested Read :**
**(#)** [How To Check All Running Services In Linux][3]
Run the init command below to shut down the system.
```
# init 0
```
* **`0:`** Halt: shuts down the system.
Run the init command below to reboot the system.
```
# init 6
```
* **`6:`** Reboot: reboots the system.
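If you want to confirm where the system currently is before switching, a quick check helps; `who -r` and `systemctl get-default` are standard on most modern distributions, though the output format varies:
```
# who -r
# systemctl get-default
```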
### Method-4: How To Shutdown The Linux System Using halt Command
The halt command powers off or shuts down a local or remote Linux machine.
It terminates all processes and shuts down the CPU.
```
# halt
```
### Method-5: How To Shutdown The Linux System Using poweroff Command
The poweroff command powers off or shuts down a local or remote Linux machine. Poweroff is exactly like halt, but it also turns off the unit itself (lights and everything on a PC). It sends an ACPI command to the board, and then to the PSU, to cut the power.
```
# poweroff
```
### Method-6: How To Shutdown And Reboot The Linux System Using systemctl Command
Systemd is a new init system and system manager that has been adopted by all the major Linux distributions in place of the traditional SysV init system.
systemd is compatible with SysV and LSB init scripts and can work as a drop-in replacement for sysvinit. systemd is the first process started by the kernel and holds PID 1.
**Suggested Read :**
**(#)** [chkservice A Tool For Managing Systemd Units From Linux Terminal][4]
It is the parent process of everything, and Fedora 15 was the first distribution to adopt systemd instead of Upstart.
systemctl is the command-line utility and primary tool to manage systemd daemons/services: start, restart, stop, enable, disable, reload, and status.
systemd uses .service files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups, and you can see the system hierarchy by exploring the /sys/fs/cgroup/systemd directory.
```
# systemctl halt
# systemctl poweroff
# systemctl reboot
# systemctl suspend
# systemctl hibernate
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/6-commands-to-shutdown-halt-poweroff-reboot-the-linux-system/
作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/prakash/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/
[2]: https://www.2daygeek.com/tuptime-a-tool-to-report-the-historical-and-statistical-running-time-of-linux-system/
[3]: https://www.2daygeek.com/how-to-check-all-running-services-in-linux/
[4]: https://www.2daygeek.com/chkservice-a-tool-for-managing-systemd-units-from-linux-terminal/

View File

@ -0,0 +1,70 @@
Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool
======
**Mathpix is a nifty little tool that allows you to take screenshots of complex mathematical equations and instantly converts them into editable LaTeX text.**
![Mathpix converts math equations images into LaTeX][1]
[LaTeX editors][2] are excellent when it comes to writing academic and scientific documentation.
There is a steep learning curve involved, of course, and it becomes steeper if you have to write complex mathematical equations.
[Mathpix][3] is a nifty little tool that helps you in this regard.
Suppose you are reading a document that has mathematical equations. If you want to use those equations in your [LaTeX document][4], you need to use your ninja LaTeX skills and plenty of time.
But Mathpix solves this problem for you. With Mathpix, you take the screenshot of the mathematical equations, and it will instantly give you the LaTeX code. You can then use this code in your [favorite LaTeX editor][2].
See Mathpix in action in the video below:
<https://itsfoss.com/wp-content/uploads/2018/10/mathpix.mp4>
[Video credit][5]: Reddit User [kaitlinmcunningham][6]
Isn't it super-cool? I guess the hardest part of writing LaTeX documents is those complicated equations. For lazy bums like me, Mathpix is a godsend.
### Getting Mathpix
Mathpix is available for Linux, macOS, Windows and iOS. There is no Android app for the moment.
Note: Mathpix is a free-to-use tool, but it's not open source.
On Linux, [Mathpix is available as a Snap package][7], which means that [if you have Snap support enabled on your Linux distribution][8], you can install Mathpix with this simple command:
```
sudo snap install mathpix-snipping-tool
```
Using Mathpix is simple. Once installed, open the tool; you'll find it in the top panel. You can start taking a screenshot with Mathpix using the keyboard shortcut Ctrl+Alt+M.
It will instantly translate the image of the equation into LaTeX code. The code is copied to the clipboard, and you can then paste it into a LaTeX editor.
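The snippet you get back is ordinary LaTeX math that drops straight into a math environment. As a made-up illustration, a screenshot of the Gaussian integral would come back as something like:
```
\int_{0}^{\infty} e^{-x^{2}} \, dx = \frac{\sqrt{\pi}}{2}
```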
Mathpix's optical character recognition technology is [being used][9] by a number of companies like [WolframAlpha][10], Microsoft, Google, etc. to improve their tools' image recognition capabilities when dealing with math symbols.
Altogether, it's an awesome tool for students and academics. It's free to use, and I really wish it were open source. We cannot get everything in life, can we?
Do you use Mathpix or some other similar tool while dealing with mathematical symbols in LaTeX? What do you think of Mathpix? Share your views with us in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/mathpix/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/mathpix-converts-equations-into-latex.jpeg
[2]: https://itsfoss.com/latex-editors-linux/
[3]: https://mathpix.com/
[4]: https://www.latex-project.org/
[5]: https://g.redditmedia.com/b-GL1rQwNezQjGvdlov9U_6vDwb1A7kEwGHYcQ1Ogtg.gif?fm=mp4&mp4-fragmented=false&s=39fd1816b43e2b544986d629f75a7a8e
[6]: https://www.reddit.com/user/kaitlinmcunningham
[7]: https://snapcraft.io/mathpix-snipping-tool
[8]: https://itsfoss.com/install-snap-linux/
[9]: https://mathpix.com/api.html
[10]: https://www.wolframalpha.com/

View File

@ -0,0 +1,198 @@
How To Create And Maintain Your Own Man Pages
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Um-pages-1-720x340.png)
We have already discussed a few [**good alternatives to Man pages**][1]. Those alternatives are mainly used for learning concise Linux command examples without having to go through the comprehensive man pages. If you're looking for a quick and dirty way to learn a Linux command easily and quickly, those alternatives are worth trying. Now, you might be wondering how to create your own man-like help pages for a Linux command. This is where **“Um”** comes in handy. Um is a command-line utility used to easily create and maintain your own man pages that contain only what you've learned about a command so far.
By creating your own alternative to man pages, you can avoid lots of unnecessary, comprehensive details and include only what is necessary to keep in mind. If you have ever wanted to create your own set of man-like pages, Um will definitely help. In this brief tutorial, we will see how to install the “Um” command-line utility and how to create our own man pages.
### Installing Um
Um is available for Linux and Mac OS. At present, it can only be installed using the **Linuxbrew** package manager on Linux systems. Refer to the following link if you haven't installed Linuxbrew yet.
Once Linuxbrew is installed, run the following command to install the Um utility.
```
$ brew install sinclairtarget/wst/um
```
If you see output something like the below, congratulations! Um has been installed and is ready to use.
```
[...]
==> Installing sinclairtarget/wst/um
==> Downloading https://github.com/sinclairtarget/um/archive/4.0.0.tar.gz
==> Downloading from https://codeload.github.com/sinclairtarget/um/tar.gz/4.0.0
-=#=# # #
==> Downloading https://rubygems.org/gems/kramdown-1.17.0.gem
######################################################################## 100.0%
==> gem install /home/sk/.cache/Homebrew/downloads/d0a5d978120a791d9c5965fc103866815189a4e3939
==> Caveats
Bash completion has been installed to:
/home/linuxbrew/.linuxbrew/etc/bash_completion.d
==> Summary
🍺 /home/linuxbrew/.linuxbrew/Cellar/um/4.0.0: 714 files, 1.3MB, built in 35 seconds
==> Caveats
==> openssl
A CA file has been bootstrapped using certificates from the SystemRoots
keychain. To add additional certificates (e.g. the certificates added in
the System keychain), place .pem files in
/home/linuxbrew/.linuxbrew/etc/openssl/certs
and run
/home/linuxbrew/.linuxbrew/opt/openssl/bin/c_rehash
==> ruby
Emacs Lisp files have been installed to:
/home/linuxbrew/.linuxbrew/share/emacs/site-lisp/ruby
==> um
Bash completion has been installed to:
/home/linuxbrew/.linuxbrew/etc/bash_completion.d
```
Before you start making your man pages, you need to enable bash completion for Um.
To do so, open your **~/.bash_profile** file:
```
$ nano ~/.bash_profile
```
And, add the following lines in it:
```
if [ -f $(brew --prefix)/etc/bash_completion.d/um-completion.sh ]; then
. $(brew --prefix)/etc/bash_completion.d/um-completion.sh
fi
```
Save and close the file. Run the following commands to update the changes.
```
$ source ~/.bash_profile
```
All done. Let us go ahead and create our first man page.
### Create And Maintain Your Own Man Pages
Let us say you want to create your own man page for the “dpkg” command. To do so, run:
```
$ um edit dpkg
```
The above command will open a markdown template in your default editor:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-dpkg-man-page.png)
My default editor is Vi, so the above command opens the template in the Vi editor. Now, start adding everything you want to remember about the “dpkg” command to this template.
Here is a sample:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Edit-dpkg-man-page.png)
As you can see in the above output, I have added a synopsis, a description, and two options for the dpkg command. You can add as many sections as you want to a man page. Make sure you give each section a proper, easily understandable title. Once done, save and quit the file (if you use the Vi editor, press the **ESC** key and type **:wq**).
Finally, view your newly created man page using command:
```
$ um dpkg
```
![](http://www.ostechnix.com/wp-content/uploads/2018/10/View-dpkg-man-page.png)
As you can see, the dpkg man page looks exactly like the official man pages. If you want to edit and/or add more details to a man page, run the same command again and add the details.
```
$ um edit dpkg
```
To view the list of newly created man pages using Um, run:
```
$ um list
```
All man pages are saved under a directory named **`.um`** in your home directory.
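Since the pages are just Markdown files, you can keep them under version control or sync them like any other notes. A minimal sketch, assuming git is installed and the default `~/.um/pages` location from the configuration shown later:
```
$ cd ~/.um/pages
$ git init
$ git add .
$ git commit -m "Snapshot of my personal man pages"
```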
If you don't want a particular page anymore, simply delete it as shown below.
```
$ um rm dpkg
```
To view the help section and all available general options, run:
```
$ um --help
usage: um <page name>
um <sub-command> [ARGS...]
The first form is equivalent to `um read <page name>`.
Subcommands:
um (l)ist List the available pages for the current topic.
um (r)ead <page name> Read the given page under the current topic.
um (e)dit <page name> Create or edit the given page under the current topic.
um rm <page name> Remove the given page.
um (t)opic [topic] Get or set the current topic.
um topics List all topics.
um (c)onfig [config key] Display configuration environment.
um (h)elp [sub-command] Display this help message, or the help message for a sub-command.
```
### Configure Um
To view the current configuration, run:
```
$ um config
Options prefixed by '*' are set in /home/sk/.um/umconfig.
editor = vi
pager = less
pages_directory = /home/sk/.um/pages
default_topic = shell
pages_ext = .md
```
In this file, you can edit and change the values of the **pager**, **editor**, **default_topic**, **pages_directory**, and **pages_ext** options as you wish. For example, if you want to save newly created Um pages in your **[Dropbox][2]** folder, simply change the value of the **pages_directory** directive in the **~/.um/umconfig** file to point to the Dropbox folder.
```
pages_directory = /Users/myusername/Dropbox/um
```
And, that's all for now. I hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/
[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/

View File

@ -0,0 +1,163 @@
5 alerting and visualization tools for sysadmins
======
These open source tools help users understand system behavior and output, and provide alerts for potential problems.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI-)
You probably know (or can guess) what alerting and visualization tools are used for. Why would we discuss them as observability tools, especially since some systems include visualization as a feature?
Observability comes from control theory and describes our ability to understand a system based on its inputs and outputs. This article focuses on the output component of observability.
Alerting and visualization tools analyze the outputs of other systems and provide structured representations of these outputs. Alerts are basically a synthesized understanding of negative system outputs, and visualizations are disambiguated structured representations that facilitate user comprehension.
### Common types of alerts and visualizations
#### Alerts
Let's first cover what alerts are _not_. Alerts should not be sent if the human responder can't do anything about the problem. This includes alerts that are sent to multiple individuals with only a few who can respond, or situations where every anomaly in the system triggers an alert. This leads to alert fatigue and receivers ignoring all alerts within a specific medium until the system escalates to a medium that isn't already saturated.
For example, if an operator receives hundreds of emails a day from the alerting system, that operator will soon ignore all emails from the alerting system. The operator will respond to a real incident only when he or she is experiencing the problem, emailed by a customer, or called by the boss. In this case, alerts have lost their meaning and usefulness.
Alerts are not a constant stream of information or a status update. They are meant to convey a problem from which the system can't automatically recover, and they are sent only to the individual most likely to be able to recover the system. Everything that falls outside this definition isn't an alert and will only damage your employees and company culture.
Everyone has a different set of alert types, so I won't discuss things like priority levels (P1-P5) or models that use words like "Informational," "Warning," and "Critical." Instead, I'll describe the generic categories emergent in complex systems incident response.
You might have noticed I mentioned an “Informational” alert type right after I wrote that alerts shouldn't be informational. Well, not everyone agrees, but I don't consider something an alert if it isn't sent to anyone. It is a data point that many systems refer to as an alert. It represents some event that should be known but not responded to. It is generally part of the visualization system of the alerting tool and not an event that triggers actual notifications. Mike Julian covers this and other aspects of alerting in his book [Practical Monitoring][1]. It's a must read for work in this area.
Non-informational alerts consist of types that can be responded to or require action. I group these into two categories: internal outage and external outage. (Most companies have more than two levels for prioritizing their response efforts.) Degraded system performance is considered an outage in this model, as the impact to each user is usually unknown.
Internal outages are a lower priority than external outages, but they still need to be responded to quickly. They often include internal systems that company employees use or components of applications that are visible only to company employees.
External outages consist of any system outage that would immediately impact a customer. These dont include a system outage that prevents releasing updates to the system. They do include customer-facing application failures, database outages, and networking partitions that hurt availability or consistency if either can impact a user. They also include outages of tools that may not have a direct impact on users, as the application continues to run but this transparent dependency impacts performance. This is common when the system uses some external service or data source that isnt necessary for full functionality but may cause delays as the application performs retries or handles errors from this external dependency.
### Visualizations
There are many visualization types, and I won't cover them all here. It's a fascinating area of research. On the data analytics side of my career, learning and applying that knowledge is a constant challenge. We need to provide simple representations of complex system outputs for the widest dissemination of information. [Google Charts][2] and [Tableau][3] have a wide selection of visualization types. We'll cover the most common visualizations and some innovative solutions for quickly understanding systems.
#### Line chart
The line chart is probably the most common visualization. It does a pretty good job of producing an understanding of a system over time. A line chart in a metrics system would have a line for each unique metric or some aggregation of metrics. This can get confusing when there are a lot of metrics in the same dashboard (as shown below), but most systems can select specific metrics to view rather than having all of them visible. Also, anomalous behavior is easy to spot if its significant enough to escape the noise of normal operations. Below we can see purple, yellow, and light blue lines that might indicate anomalous behavior.
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart.png)
Another feature of a line chart is that you can often stack them to show relationships. For example, you might want to look at requests on each server individually, but also in aggregate. This allows you to understand the overall system as well as each instance in the same graph.
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_line_chart_aggregate.png)
#### Heatmaps
Another common visualization is the heatmap. It is useful when looking at histograms. This type of visualization is similar to a bar chart but can show gradients within the bars representing the different percentiles of the overall metric. For example, suppose youre looking at request latencies and you want to quickly understand the overall trend as well as the distribution of all requests. A heatmap is great for this, and it can use color to disambiguate the quantity of each section with a quick glance.
The heatmap below shows the higher concentration around the centerline of the graph with an easy-to-understand visualization of the distribution vertically for each time bucket. We might want to review a couple of points in time where the distribution gets wide while the others are fairly tight like at 14:00. This distribution might be a negative performance indicator.
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_histogram.png)
#### Gauges
The last common visualization I'll cover here is the gauge, which helps users understand a single metric quickly. Gauges can represent a single metric, like your speedometer represents your driving speed or your gas gauge represents the amount of gas in your car. Similar to the gas gauge, most monitoring gauges clearly indicate what is good and what isn't. Often (as is shown below), good is represented by green, getting worse by orange, and “everything is breaking” by red. The middle row below shows traditional gauges.
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_gauges.png)
This image shows more than just traditional gauges. The other gauges are single stat representations that are similar to the function of the classic gauge. They all use the same color scheme to quickly indicate system health with just a glance. Arguably, the bottom row is probably the best example of a gauge that allows you to glance at a dashboard and know that everything is healthy (or not). This type of visualization is usually what I put on a top-level dashboard. It offers a full, high-level understanding of system health in seconds.
#### Flame graphs
A less common visualization is the flame graph, introduced by [Netflixs Brendan Gregg][4] in 2011. Its not ideal for dashboarding or quickly observing high-level system concerns; its normally seen when trying to understand a specific application problem. This visualization focuses on CPU and memory and the associated frames. The X-axis lists the frames alphabetically, and the Y-axis shows stack depth. Each rectangle is a stack frame and includes the function being called. The wider the rectangle, the more it appears in the stack. This method is invaluable when trying to diagnose system performance at the application level and I urge everyone to give it a try.
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_flame_graph_0.png)
### Tool options
There are several commercial options for alerting, but since this is Opensource.com, Ill cover only systems that are being used at scale by real companies that you can use at no cost. Hopefully, youll be able to contribute new and innovative features to make these systems even better.
### Alerting tools
#### Bosun
If youve ever done anything with computers and gotten stuck, the help you received was probably thanks to a Stack Exchange system. Stack Exchange runs many different websites around a crowdsourced question-and-answer model. [Stack Overflow][5] is very popular with developers, and [Super User][6] is popular with operations. However, there are now hundreds of sites ranging from parenting to sci-fi and philosophy to bicycles.
Stack Exchange open-sourced its alert management system, [Bosun][7], around the same time Prometheus and its [AlertManager][8] system were released. There were many similarities in the two systems, and that's a really good thing. Like Prometheus, Bosun is written in Golang. Bosun's scope is more extensive than Prometheus as it can interact with systems beyond metrics aggregation. It can also ingest data from log and event aggregation systems. It supports Graphite, InfluxDB, OpenTSDB, and Elasticsearch.
Bosun's architecture consists of a single server binary, a backend like OpenTSDB, Redis, and [scollector agents][9]. The scollector agents automatically detect services on a host and report metrics for those processes and other system resources. This data is sent to a metrics backend. The Bosun server binary then queries the backends to determine if any alerts need to be fired. Bosun can also be used by tools like [Grafana][10] to query the underlying backends through one common interface. Redis is used to store state and metadata for Bosun.
A really neat feature of Bosun is that it lets you test your alerts against historical data. This was something I missed in Prometheus several years ago, when I had data for an issue I wanted alerts on but no easy way to test it. To make sure my alerts were working, I had to create and insert dummy data. This system alleviates that very time-consuming process.
Bosun also has the usual features like showing simple graphs and creating alerts. It has a powerful expression language for writing alerting rules. However, it only has email and HTTP notification configurations, which means connecting to Slack and other tools requires a bit more customization ([which its documentation covers][11]). Similar to Prometheus, Bosun can use templates for these notifications, which means they can look as awesome as you want them to. You can use all your HTML and CSS skills to create the baddest email alert anyone has ever seen.
#### Cabot
[Cabot][12] was created by a company called [Arachnys][13]. You may not know who Arachnys is or what it does, but you have probably felt its impact: It built the leading cloud-based solution for fighting financial crimes. That sounds pretty cool, right? At a previous company, I was involved in similar functions around [“know your customer"][14] laws. Most companies would consider it a very bad thing to be linked to a terrorist group, for example, funneling money through their systems. These solutions also help defend against less-atrocious offenders like fraudsters who could also pose a risk to the institution.
So why did Arachnys create Cabot? Well, it is kind of a Christmas present to everyone, as it was a Christmas project built because its developers couldnt wrap their heads around [Nagios][15]. And really, who can blame them? Cabot was written with Django and Bootstrap, so it should be easy for most to contribute to the project. (Another interesting factoid: The name comes from the creators dog.)
The Cabot architecture is similar to Bosun in that it doesnt collect any data. Instead, it accesses data through the APIs of the tools it is alerting for. Therefore, Cabot uses a pull (rather than a push) model for alerting. It reaches out into each systems API and retrieves the information it needs to make a decision based on a specific check. Cabot stores the alerting data in a Postgres database and also has a cache using Redis.
Cabot natively supports [Graphite][16], but it also supports [Jenkins][17], which is rare in this area. [Arachnys][13] uses Jenkins like a centralized cron, but I like this idea of treating build failures like outages. Obviously, a build failure isnt as critical as a production outage, but it could still alert the team and escalate if the failure isnt resolved. Who actually checks Jenkins every time an email comes in about a build failure? Yeah, me too!
Another interesting feature is that Cabot can integrate with Google Calendar for on-call rotations. Cabot calls this feature Rota, which is a British term for a roster or rotation. This makes a lot of sense, and I wish other systems would take this idea further. Cabot doesnt support anything more complex than primary and backup personnel, but there is certainly room for additional features. The docs say if you want something more advanced, you should look at a commercial option.
#### StatsAgg
[StatsAgg][18]? How did that make the list? Well, it's not every day you come across a publishing company that has created an alerting platform. I think that deserves recognition. Of course, [Pearson][19] isn't just a publishing company anymore; it has several web presences and a joint venture with [O'Reilly Media][20]. However, I still think of it as the company that published my schoolbooks and tests.
StatsAgg isn't just an alerting platform; it's also a metrics aggregation platform. And it's kind of like a proxy for other systems. It supports Graphite, StatsD, InfluxDB, and OpenTSDB as inputs, but it can also forward those metrics to their respective platforms. This is an interesting concept, but potentially risky as loads increase on a central service. However, if the StatsAgg infrastructure is robust enough, it can still produce alerts even when a backend storage platform has an outage.
StatsAgg is written in Java and consists only of the main server and UI, which keeps complexity to a minimum. It can send alerts based on regular expression matching and is focused on alerting by service rather than host or instance. Its goal is to fill a void in the open source observability stack, and I think it does that quite well.
### Visualization tools
#### Grafana
Almost everyone knows about [Grafana][10], and many have used it. I have used it for years whenever I need a simple dashboard. The tool I used before was deprecated, and I was fairly distraught about that until Grafana made it okay. Grafana was gifted to us by Torkel Ödegaard. Like Cabot, Grafana was also created around Christmastime, and released in January 2014. It has come a long way in just a few years. It started life as a Kibana dashboarding system, and Torkel forked it into what became Grafana.
Grafana's sole focus is presenting monitoring data in a more usable and pleasing way. It can natively gather data from Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB. There's an Enterprise version that uses plugins for more data sources, but there's no reason those other data source plugins couldn't be created as open source, as the Grafana plugin ecosystem already offers many other data sources.
What does Grafana do for me? It provides a central location for understanding my system. It is web-based, so anyone can access the information, although it can be restricted using different authentication methods. Grafana can provide knowledge at a glance using many different types of visualizations. However, it has started integrating alerting and other features that aren't traditionally combined with visualizations.
Now you can set alerts visually. That means you can look at a graph, maybe even one showing where an alert should have triggered due to some degradation of the system, click on the graph where you want the alert to trigger, and then tell Grafana where to send the alert. That's a pretty powerful addition that won't necessarily replace an alerting platform, but it can certainly help augment it by providing a different perspective on alerting criteria.
Grafana has also introduced more collaboration features. Users have been able to share dashboards for a long time, meaning you don't have to create your own dashboard for your [Kubernetes][21] cluster because there are several already available—with some maintained by Kubernetes developers and others by Grafana developers.
The most significant addition around collaboration is annotations. Annotations allow a user to add context to part of a graph. Other users can then use this context to understand the system better. This is an invaluable tool when a team is in the middle of an incident and communication and common understanding are critical. Having all the information right where you're already looking makes it much more likely that knowledge will be shared across the team quickly. It's also a nice feature to use during blameless postmortems when the team is trying to understand how the failure occurred and learn more about their system.
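If you just want to kick the tires, running the upstream container image is probably the fastest route. A sketch, assuming Docker is available; the image name and port are the project's published defaults, so double-check them against the current Grafana docs:
```
docker run -d -p 3000:3000 --name grafana grafana/grafana
```
Then browse to http://localhost:3000 and point Grafana at one of the data sources mentioned above.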
#### Vizceral
Netflix created [Vizceral][22] to understand its traffic patterns better when performing a traffic failover. Unlike Grafana, which is a more general tool, Vizceral serves a very specific use case. Netflix no longer uses this tool internally and says it is no longer actively maintained, but it still updates the tool periodically. I highlight it here primarily to point out an interesting visualization mechanism and how it can help solve a problem. It's worth running it in a demo environment just to better grasp the concepts and witness what's possible with these systems.
### What to read next
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/alerting-and-visualization-tools-sysadmins
作者:[Dan Barker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barkerd427
[b]: https://github.com/lujun9972
[1]: https://www.practicalmonitoring.com/
[2]: https://developers.google.com/chart/interactive/docs/gallery
[3]: https://libguides.libraries.claremont.edu/c.php?g=474417&p=3286401
[4]: http://www.brendangregg.com/flamegraphs.html
[5]: https://stackoverflow.com/
[6]: https://superuser.com/
[7]: http://bosun.org/
[8]: https://prometheus.io/docs/alerting/alertmanager/
[9]: https://bosun.org/scollector/
[10]: https://grafana.com/
[11]: https://bosun.org/notifications
[12]: https://cabotapp.com/
[13]: https://www.arachnys.com/
[14]: https://en.wikipedia.org/wiki/Know_your_customer
[15]: https://www.nagios.org/
[16]: https://graphiteapp.org/
[17]: https://jenkins.io/
[18]: https://github.com/PearsonEducation/StatsAgg
[19]: https://www.pearson.com/us/
[20]: https://www.oreilly.com/
[21]: https://opensource.com/resources/what-is-kubernetes
[22]: https://github.com/Netflix/vizceral

View File

@ -0,0 +1,457 @@
An introduction to using tcpdump at the Linux command line
======
This flexible, powerful command-line tool helps ease the pain of troubleshooting network issues.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/terminal_command_linux_desktop_code.jpg?itok=p5sQ6ODE)
In my experience as a sysadmin, I have often found network connectivity issues challenging to troubleshoot. For those situations, tcpdump is a great ally.
Tcpdump is a command-line utility that allows you to capture and analyze network traffic going through your system. It is often used to help troubleshoot network issues, and it can also serve as a security tool.
A powerful and versatile tool that includes many options and filters, tcpdump can be used in a variety of cases. Since it's a command-line tool, it is ideal for running on remote servers or devices without a GUI, to collect data that can be analyzed later. It can also be launched in the background or as a scheduled job using tools like cron.
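For example, a hypothetical `/etc/cron.d` entry along these lines could collect a nightly sample of HTTPS traffic; the interface, packet count, and output path are illustrative assumptions, and the options used here are explained later in this article:
```
# Sketch: at 02:00, capture 1,000 packets on port 443 and save them for later analysis
0 2 * * * root /usr/sbin/tcpdump -i eth0 -nn -c 1000 -w /var/tmp/nightly-443.pcap port 443
```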
In this article, we'll look at some of tcpdump's most common features.
### 1\. Installation on Linux
Tcpdump is included with several Linux distributions, so chances are, you already have it installed. Check if tcpdump is installed on your system with the following command:
```
$ which tcpdump
/usr/sbin/tcpdump
```
If tcpdump is not installed, you can install it using your distribution's package manager. For example, on CentOS or Red Hat Enterprise Linux, install it like this:
```
$ sudo yum install -y tcpdump
```
Tcpdump requires `libpcap`, which is a library for network packet capture. If it's not installed, it will be automatically added as a dependency.
You're ready to start capturing some packets.
### 2\. Capturing packets with tcpdump
To capture packets for troubleshooting or analysis, tcpdump requires elevated permissions, so in the following examples most commands are prefixed with `sudo`.
To begin, use the command `tcpdump -D` to see which interfaces are available for capture:
```
$ sudo tcpdump -D
1.eth0
2.virbr0
3.eth1
4.any (Pseudo-device that captures on all interfaces)
5.lo [Loopback]
```
In the example above, you can see all the interfaces available on my machine. The special interface `any` allows capturing on any active interface.
Let's use it to start capturing some packets. Capture all packets on any interface by running this command:
```
$ sudo tcpdump -i any
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
09:56:18.293641 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 3770820720:3770820916, ack 3503648727, win 309, options [nop,nop,TS val 76577898 ecr 510770929], length 196
09:56:18.293794 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 196, win 391, options [nop,nop,TS val 510771017 ecr 76577898], length 0
09:56:18.295058 IP rhel75.59883 > gateway.domain: 2486+ PTR? 1.64.168.192.in-addr.arpa. (43)
09:56:18.310225 IP gateway.domain > rhel75.59883: 2486 NXDomain* 0/1/0 (102)
09:56:18.312482 IP rhel75.49685 > gateway.domain: 34242+ PTR? 28.64.168.192.in-addr.arpa. (44)
09:56:18.322425 IP gateway.domain > rhel75.49685: 34242 NXDomain* 0/1/0 (103)
09:56:18.323164 IP rhel75.56631 > gateway.domain: 29904+ PTR? 1.122.168.192.in-addr.arpa. (44)
09:56:18.323342 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 196:584, ack 1, win 309, options [nop,nop,TS val 76577928 ecr 510771017], length 388
09:56:18.323563 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 584, win 411, options [nop,nop,TS val 510771047 ecr 76577928], length 0
09:56:18.335569 IP gateway.domain > rhel75.56631: 29904 NXDomain* 0/1/0 (103)
09:56:18.336429 IP rhel75.44007 > gateway.domain: 61677+ PTR? 98.122.168.192.in-addr.arpa. (45)
09:56:18.336655 IP gateway.domain > rhel75.44007: 61677* 1/0/0 PTR rhel75. (65)
09:56:18.337177 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 584:1644, ack 1, win 309, options [nop,nop,TS val 76577942 ecr 510771047], length 1060
---- SKIPPING LONG OUTPUT -----
09:56:19.342939 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 1752016, win 1444, options [nop,nop,TS val 510772067 ecr 76578948], length 0
^C
9003 packets captured
9010 packets received by filter
7 packets dropped by kernel
$
```
Tcpdump continues to capture packets until it receives an interrupt signal. You can interrupt capturing by pressing `Ctrl+C`. As you can see in this example, `tcpdump` captured more than 9,000 packets. In this case, since I am connected to this server using `ssh`, tcpdump captured all these packets. To limit the number of packets captured and stop `tcpdump`, use the `-c` option:
```
$ sudo tcpdump -i any -c 5
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
11:21:30.242740 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 3772575680:3772575876, ack 3503651743, win 309, options [nop,nop,TS val 81689848 ecr 515883153], length 196
11:21:30.242906 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 196, win 1443, options [nop,nop,TS val 515883235 ecr 81689848], length 0
11:21:30.244442 IP rhel75.43634 > gateway.domain: 57680+ PTR? 1.64.168.192.in-addr.arpa. (43)
11:21:30.244829 IP gateway.domain > rhel75.43634: 57680 NXDomain 0/0/0 (43)
11:21:30.247048 IP rhel75.33696 > gateway.domain: 37429+ PTR? 28.64.168.192.in-addr.arpa. (44)
5 packets captured
12 packets received by filter
0 packets dropped by kernel
$
```
In this case, `tcpdump` stopped capturing automatically after capturing five packets. This is useful in different scenarios; for instance, if you're troubleshooting connectivity and capturing a few initial packets is enough. This is even more useful when we apply filters to capture specific packets (shown below).
By default, tcpdump resolves IP addresses and ports into names, as shown in the previous example. When troubleshooting network issues, it is often easier to use the IP addresses and port numbers; disable host name resolution with the option `-n`, and both host and port name resolution with `-nn`:
```
$ sudo tcpdump -i any -c5 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
23:56:24.292206 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 166198580:166198776, ack 2414541257, win 309, options [nop,nop,TS val 615664 ecr 540031155], length 196
23:56:24.292357 IP 192.168.64.1.35110 > 192.168.64.28.22: Flags [.], ack 196, win 1377, options [nop,nop,TS val 540031229 ecr 615664], length 0
23:56:24.292570 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 615664 ecr 540031229], length 372
23:56:24.292655 IP 192.168.64.1.35110 > 192.168.64.28.22: Flags [.], ack 568, win 1400, options [nop,nop,TS val 540031229 ecr 615664], length 0
23:56:24.292752 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 568:908, ack 1, win 309, options [nop,nop,TS val 615664 ecr 540031229], length 340
5 packets captured
6 packets received by filter
0 packets dropped by kernel
```
As shown above, the capture output now displays the IP addresses and port numbers. This also prevents tcpdump from issuing DNS lookups, which helps to lower network traffic while troubleshooting network issues.
Now that you're able to capture network packets, let's explore what this output means.
### 3\. Understanding the output format
Tcpdump is capable of capturing and decoding many different protocols, such as TCP, UDP, ICMP, and many more. While we can't cover all of them here, to help you get started, let's explore the TCP packet. You can find more details about the different protocol formats in tcpdump's [manual pages][1]. A typical TCP packet captured by tcpdump looks like this:
```
08:41:13.729687 IP 192.168.64.28.22 > 192.168.64.1.41916: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 117964079 ecr 816509256], length 372
```
The fields may vary depending on the type of packet being sent, but this is the general format.
The first field, `08:41:13.729687,` represents the timestamp of the received packet as per the local clock.
Next, `IP` represents the network layer protocol—in this case, `IPv4`. For `IPv6` packets, the value is `IP6`.
The next field, `192.168.64.28.22`, is the source IP address and port. This is followed by the destination IP address and port, represented by `192.168.64.1.41916`.
After the source and destination, you can find the TCP Flags `Flags [P.]`. Typical values for this field include:
| Value | Flag Type | Description |
|-------| --------- | ----------------- |
| S | SYN | Connection Start |
| F | FIN | Connection Finish |
| P | PUSH | Data push |
| R | RST | Connection reset |
| . | ACK | Acknowledgment |
This field can also be a combination of these values, such as `[S.]` for a `SYN-ACK` packet.
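These flag values can also be used directly in capture filters (filtering is covered in the next section). For example, a sketch using standard pcap filter syntax that captures only packets with just the SYN flag set, i.e. incoming connection attempts, might look like this:
```
# Sketch: show only "pure" SYN packets (connection attempts)
$ sudo tcpdump -i any -nn 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'
```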
Next is the sequence number of the data contained in the packet. For the first packet captured, this is an absolute number. Subsequent packets use a relative number to make it easier to follow. In this example, the sequence is `seq 196:568,` which means this packet contains bytes 196 to 568 of this flow.
This is followed by the Ack Number: `ack 1`. In this case, it is 1 since this is the side sending data. For the side receiving data, this field represents the next expected byte (data) on this flow. For example, the Ack number for the next packet in this flow would be 568.
The next field is the window size `win 309`, which represents the number of bytes available in the receiving buffer, followed by TCP options such as the MSS (Maximum Segment Size) or Window Scale. For details about TCP protocol options, consult [Transmission Control Protocol (TCP) Parameters][2].
Finally, we have the packet length, `length 372`, which represents the length, in bytes, of the payload data. The length is the difference between the last and first bytes in the sequence number.
Now let's learn how to filter packets to narrow down results and make it easier to troubleshoot specific issues.
### 4\. Filtering packets
As mentioned above, tcpdump can capture too many packets, some of which are not even related to the issue you're troubleshooting. For example, if you're troubleshooting a connectivity issue with a web server, you're not interested in the SSH traffic, so removing the SSH packets from the output makes it easier to work on the real issue.
One of tcpdump's most powerful features is its ability to filter the captured packets using a variety of parameters, such as source and destination IP addresses, ports, protocols, etc. Let's look at some of the most common ones.
#### Protocol
To filter packets based on protocol, specify the protocol on the command line. For example, capture ICMP packets only by using this command:
```
$ sudo tcpdump -i any -c5 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
```
In a different terminal, try to ping another machine:
```
$ ping opensource.com
PING opensource.com (54.204.39.132) 56(84) bytes of data.
64 bytes from ec2-54-204-39-132.compute-1.amazonaws.com (54.204.39.132): icmp_seq=1 ttl=47 time=39.6 ms
```
Back in the tcpdump capture, notice that tcpdump captures and displays only the ICMP-related packets. In this case, tcpdump is not displaying name resolution packets that were generated when resolving the name `opensource.com`:
```
09:34:20.136766 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 1, length 64
09:34:20.176402 IP ec2-54-204-39-132.compute-1.amazonaws.com > rhel75: ICMP echo reply, id 20361, seq 1, length 64
09:34:21.140230 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 2, length 64
09:34:21.180020 IP ec2-54-204-39-132.compute-1.amazonaws.com > rhel75: ICMP echo reply, id 20361, seq 2, length 64
09:34:22.141777 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 3, length 64
5 packets captured
5 packets received by filter
0 packets dropped by kernel
```
#### Host
Limit capture to only packets related to a specific host by using the `host` filter:
```
$ sudo tcpdump -i any -c5 -nn host 54.204.39.132
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
09:54:20.042023 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [S], seq 1375157070, win 29200, options [mss 1460,sackOK,TS val 122350391 ecr 0,nop,wscale 7], length 0
09:54:20.088127 IP 54.204.39.132.80 > 192.168.122.98.39326: Flags [S.], seq 1935542841, ack 1375157071, win 28960, options [mss 1460,sackOK,TS val 522713542 ecr 122350391,nop,wscale 9], length 0
09:54:20.088204 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 122350437 ecr 522713542], length 0
09:54:20.088734 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 122350438 ecr 522713542], length 112: HTTP: GET / HTTP/1.1
09:54:20.129733 IP 54.204.39.132.80 > 192.168.122.98.39326: Flags [.], ack 113, win 57, options [nop,nop,TS val 522713552 ecr 122350438], length 0
5 packets captured
5 packets received by filter
0 packets dropped by kernel
```
In this example, tcpdump captures and displays only packets to and from host `54.204.39.132`.
#### Port
To filter packets based on the desired service or port, use the `port` filter. For example, capture packets related to a web (HTTP) service by using this command:
```
$ sudo tcpdump -i any -c5 -nn port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
09:58:28.790548 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [S], seq 1745665159, win 29200, options [mss 1460,sackOK,TS val 122599140 ecr 0,nop,wscale 7], length 0
09:58:28.834026 IP 54.204.39.132.80 > 192.168.122.98.39330: Flags [S.], seq 4063583040, ack 1745665160, win 28960, options [mss 1460,sackOK,TS val 522775728 ecr 122599140,nop,wscale 9], length 0
09:58:28.834093 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 122599183 ecr 522775728], length 0
09:58:28.834588 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 122599184 ecr 522775728], length 112: HTTP: GET / HTTP/1.1
09:58:28.878445 IP 54.204.39.132.80 > 192.168.122.98.39330: Flags [.], ack 113, win 57, options [nop,nop,TS val 522775739 ecr 122599184], length 0
5 packets captured
5 packets received by filter
0 packets dropped by kernel
```
#### Source IP/hostname
You can also filter packets based on the source or destination IP Address or hostname. For example, to capture packets from host `192.168.122.98`:
```
$ sudo tcpdump -i any -c5 -nn src 192.168.122.98
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
10:02:15.220824 IP 192.168.122.98.39436 > 192.168.122.1.53: 59332+ A? opensource.com. (32)
10:02:15.220862 IP 192.168.122.98.39436 > 192.168.122.1.53: 20749+ AAAA? opensource.com. (32)
10:02:15.364062 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [S], seq 1108640533, win 29200, options [mss 1460,sackOK,TS val 122825713 ecr 0,nop,wscale 7], length 0
10:02:15.409229 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [.], ack 669337581, win 229, options [nop,nop,TS val 122825758 ecr 522832372], length 0
10:02:15.409667 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [P.], seq 0:112, ack 1, win 229, options [nop,nop,TS val 122825759 ecr 522832372], length 112: HTTP: GET / HTTP/1.1
5 packets captured
5 packets received by filter
0 packets dropped by kernel
```
Notice that tcpdump captured packets with source IP address `192.168.122.98` for multiple services such as name resolution (port 53) and HTTP (port 80). The response packets are not displayed since their source IP is different.
Conversely, you can use the `dst` filter to filter by destination IP/hostname:
```
$ sudo tcpdump -i any -c5 -nn dst 192.168.122.98
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
10:05:03.572931 IP 192.168.122.1.53 > 192.168.122.98.47049: 2248 1/0/0 A 54.204.39.132 (48)
10:05:03.572944 IP 192.168.122.1.53 > 192.168.122.98.47049: 33770 0/0/0 (32)
10:05:03.621833 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [S.], seq 3474204576, ack 3256851264, win 28960, options [mss 1460,sackOK,TS val 522874425 ecr 122993922,nop,wscale 9], length 0
10:05:03.667767 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [.], ack 113, win 57, options [nop,nop,TS val 522874436 ecr 122993972], length 0
10:05:03.672221 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 522874437 ecr 122993972], length 642: HTTP: HTTP/1.1 302 Found
5 packets captured
5 packets received by filter
0 packets dropped by kernel
```
#### Complex expressions
You can also combine filters by using the logical operators `and` and `or` to create more complex expressions. For example, to filter packets from source IP address `192.168.122.98` and service HTTP only, use this command:
```
$ sudo tcpdump -i any -c5 -nn src 192.168.122.98 and port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
10:08:00.472696 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [S], seq 2712685325, win 29200, options [mss 1460,sackOK,TS val 123170822 ecr 0,nop,wscale 7], length 0
10:08:00.516118 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [.], ack 268723504, win 229, options [nop,nop,TS val 123170865 ecr 522918648], length 0
10:08:00.516583 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [P.], seq 0:112, ack 1, win 229, options [nop,nop,TS val 123170866 ecr 522918648], length 112: HTTP: GET / HTTP/1.1
10:08:00.567044 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 123170916 ecr 522918661], length 0
10:08:00.788153 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [F.], seq 112, ack 643, win 239, options [nop,nop,TS val 123171137 ecr 522918661], length 0
5 packets captured
5 packets received by filter
0 packets dropped by kernel
```
You can create more complex expressions by grouping filters with parentheses. In this case, enclose the entire filter expression in quotation marks to prevent the shell from interpreting the parentheses as shell syntax:
```
$ sudo tcpdump -i any -c5 -nn "port 80 and (src 192.168.122.98 or src 54.204.39.132)"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
10:10:37.602214 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [S], seq 871108679, win 29200, options [mss 1460,sackOK,TS val 123327951 ecr 0,nop,wscale 7], length 0
10:10:37.650651 IP 54.204.39.132.80 > 192.168.122.98.39346: Flags [S.], seq 854753193, ack 871108680, win 28960, options [mss 1460,sackOK,TS val 522957932 ecr 123327951,nop,wscale 9], length 0
10:10:37.650708 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 123328000 ecr 522957932], length 0
10:10:37.651097 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 123328000 ecr 522957932], length 112: HTTP: GET / HTTP/1.1
10:10:37.692900 IP 54.204.39.132.80 > 192.168.122.98.39346: Flags [.], ack 113, win 57, options [nop,nop,TS val 522957942 ecr 123328000], length 0
5 packets captured
5 packets received by filter
0 packets dropped by kernel
```
In this example, we're filtering packets for HTTP service only (port 80) and source IP addresses `192.168.122.98` or `54.204.39.132`. This is a quick way of examining both sides of the same flow.
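The `not` operator is also available. Returning to the earlier point about SSH noise, a quick sketch that hides your own SSH session while capturing everything else looks like this:
```
# Sketch: capture everything except SSH traffic on port 22
$ sudo tcpdump -i any -nn not port 22
```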
### 5\. Checking packet content
In the previous examples, we're checking only the packets' headers for information such as sources, destinations, ports, etc. Sometimes this is all we need to troubleshoot network connectivity issues. Sometimes, however, we need to inspect the content of the packet to ensure that the message we're sending contains what we need or that we received the expected response. To see the packet content, tcpdump provides two additional flags: `-X` to print the content in both hex and ASCII, or `-A` to print the content in ASCII only.
For example, inspect the HTTP content of a web request like this:
```
$ sudo tcpdump -i any -c10 -nn -A port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
13:02:14.871803 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [S], seq 2546602048, win 29200, options [mss 1460,sackOK,TS val 133625221 ecr 0,nop,wscale 7], length 0
E..<..@.@.....zb6.'....P...@......r............
............................
13:02:14.910734 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [S.], seq 1877348646, ack 2546602049, win 28960, options [mss 1460,sackOK,TS val 525532247 ecr 133625221,nop,wscale 9], length 0
E..<..@./..a6.'...zb.P..o..&...A..q a..........
.R.W.......     ................
13:02:14.910832 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 133625260 ecr 525532247], length 0
E..4..@.@.....zb6.'....P...Ao..'...........
.....R.W................
13:02:14.911808 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 133625261 ecr 525532247], length 112: HTTP: GET / HTTP/1.1
E.....@.@..1..zb6.'....P...Ao..'...........
.....R.WGET / HTTP/1.1
User-Agent: Wget/1.14 (linux-gnu)
Accept: */*
Host: opensource.com
Connection: Keep-Alive
................
13:02:14.951199 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [.], ack 113, win 57, options [nop,nop,TS val 525532257 ecr 133625261], length 0
E..4.F@./.."6.'...zb.P..o..'.......9.2.....
.R.a....................
13:02:14.955030 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 525532258 ecr 133625261], length 642: HTTP: HTTP/1.1 302 Found
E....G@./...6.'...zb.P..o..'.......9.......
.R.b....HTTP/1.1 302 Found
Server: nginx
Date: Sun, 23 Sep 2018 17:02:14 GMT
Content-Type: text/html; charset=iso-8859-1
Content-Length: 207
X-Content-Type-Options: nosniff
Location: https://opensource.com/
Cache-Control: max-age=1209600
Expires: Sun, 07 Oct 2018 17:02:14 GMT
X-Request-ID: v-6baa3acc-bf52-11e8-9195-22000ab8cf2d
X-Varnish: 632951979
Age: 0
Via: 1.1 varnish (Varnish/5.2)
X-Cache: MISS
Connection: keep-alive
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="https://opensource.com/">here</a>.</p>
</body></html>
................
13:02:14.955083 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 133625304 ecr 525532258], length 0
E..4..@.@.....zb6.'....P....o..............
.....R.b................
13:02:15.195524 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [F.], seq 113, ack 643, win 239, options [nop,nop,TS val 133625545 ecr 525532258], length 0
E..4..@.@.....zb6.'....P....o..............
.....R.b................
13:02:15.236592 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 525532329 ecr 133625545], length 0
E..4.H@./.. 6.'...zb.P..o..........9.I.....
.R......................
13:02:15.236656 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 644, win 239, options [nop,nop,TS val 133625586 ecr 525532329], length 0
E..4..@.@.....zb6.'....P....o..............
.....R..................
10 packets captured
10 packets received by filter
0 packets dropped by kernel
```
This is helpful for troubleshooting issues with API calls, assuming the calls are using plain HTTP. For encrypted connections, this output is less useful.
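If you prefer a combined hexadecimal and ASCII view, for instance when inspecting a binary protocol, swap `-A` for `-X`; here is the same capture as a sketch with that flag:
```
# Sketch: print packet contents in both hex and ASCII
$ sudo tcpdump -i any -c10 -nn -X port 80
```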
### 6\. Saving captures to a file
Another useful feature provided by tcpdump is the ability to save the capture to a file so you can analyze the results later. This allows you to capture packets in batch mode overnight, for example, and verify the results in the morning. It also helps when there are too many packets to analyze since real-time capture can occur too fast.
To save packets to a file instead of displaying them on screen, use the option `-w`:
```
$ sudo tcpdump -i any -c10 -nn -w webserver.pcap port 80
[sudo] password for ricardo:
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
10 packets captured
10 packets received by filter
0 packets dropped by kernel
```
This command saves the output in a file named `webserver.pcap`. The `.pcap` extension stands for "packet capture" and is the convention for this file format.
As shown in this example, nothing gets displayed on-screen, and the capture finishes after capturing 10 packets, as per the option `-c10`. If you want some feedback to ensure packets are being captured, use the option `-v`.
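For instance, the same capture with verbose feedback enabled might look like this; tcpdump then periodically reports how many packets it has written to the file:
```
$ sudo tcpdump -i any -c10 -nn -v -w webserver.pcap port 80
```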
Tcpdump creates a file in binary format so you cannot simply open it with a text editor. To read the contents of the file, execute tcpdump with the `-r` option:
```
$ tcpdump -nn -r webserver.pcap
reading from file webserver.pcap, link-type LINUX_SLL (Linux cooked)
13:36:57.679494 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [S], seq 3709732619, win 29200, options [mss 1460,sackOK,TS val 135708029 ecr 0,nop,wscale 7], length 0
13:36:57.718932 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [S.], seq 1999298316, ack 3709732620, win 28960, options [mss 1460,sackOK,TS val 526052949 ecr 135708029,nop,wscale 9], length 0
13:36:57.719005 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 135708068 ecr 526052949], length 0
13:36:57.719186 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 135708068 ecr 526052949], length 112: HTTP: GET / HTTP/1.1
13:36:57.756979 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [.], ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 0
13:36:57.760122 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 642: HTTP: HTTP/1.1 302 Found
13:36:57.760182 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 135708109 ecr 526052959], length 0
13:36:57.977602 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [F.], seq 113, ack 643, win 239, options [nop,nop,TS val 135708327 ecr 526052959], length 0
13:36:58.022089 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 526053025 ecr 135708327], length 0
13:36:58.022132 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 644, win 239, options [nop,nop,TS val 135708371 ecr 526053025], length 0
$
```
Since you're no longer capturing the packets directly from the network interface, `sudo` is not required to read the file.
You can also use any of the filters we've discussed to filter the content from the file, just as you would with real-time data. For example, inspect the packets in the capture file from source IP address `54.204.39.132` by executing this command:
```
$ tcpdump -nn -r webserver.pcap src 54.204.39.132
reading from file webserver.pcap, link-type LINUX_SLL (Linux cooked)
13:36:57.718932 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [S.], seq 1999298316, ack 3709732620, win 28960, options [mss 1460,sackOK,TS val 526052949 ecr 135708029,nop,wscale 9], length 0
13:36:57.756979 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [.], ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 0
13:36:57.760122 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 642: HTTP: HTTP/1.1 302 Found
13:36:58.022089 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 526053025 ecr 135708327], length 0
```
### What's next?
These basic features of tcpdump will help you get started with this powerful and versatile tool. To learn more, consult the [tcpdump website][3] and [man pages][4].
The tcpdump command line interface provides great flexibility for capturing and analyzing network traffic. If you need a graphical tool to understand more complex flows, look at [Wireshark][5].
One benefit of Wireshark is that it can read `.pcap` files captured by tcpdump. You can use tcpdump to capture packets on a remote machine that does not have a GUI and analyze the resulting file with Wireshark, but that is a topic for another day.
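As a small preview of that workflow, tcpdump can write to standard output with `-w -`, which lets you stream a remote capture straight into a local file over SSH. The host name below is hypothetical, and the sketch assumes passwordless `sudo` on the remote machine; excluding port 22 keeps your own SSH session out of the capture:
```
# Sketch: capture remotely and save the packets locally for analysis in Wireshark
$ ssh admin@remote-server "sudo tcpdump -i any -nn -w - not port 22" > remote-capture.pcap
```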
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/10/introduction-tcpdump
作者:[Ricardo Gerardi][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/rgerardi
[b]: https://github.com/lujun9972
[1]: http://www.tcpdump.org/manpages/tcpdump.1.html#lbAG
[2]: https://www.iana.org/assignments/tcp-parameters/tcp-parameters.xhtml
[3]: http://www.tcpdump.org/#
[4]: http://www.tcpdump.org/manpages/tcpdump.1.html
[5]: https://www.wireshark.org/

View File

@ -0,0 +1,197 @@
Cloc - Count The Lines Of Source Code In Many Programming Languages
======
![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-720x340.png)
As a developer, you may need to share the progress and statistics of your code with your boss or colleagues. Your boss might want to analyze the code and give additional input. In such cases, there are a few programs, as far as I know, available to analyze source code. One such program is [**Ohcount**][1]. Today, I came across yet another similar utility, namely **“Cloc”**. Using Cloc, you can easily count the lines of source code in several programming languages. It counts the blank lines, comment lines, and physical lines of source code and displays the result in a neat tabular format. Cloc is a free, open source, cross-platform utility written entirely in the **Perl** programming language.
### Features
Cloc offers numerous advantages, including the following:
* Easy to install/use. Requires no dependencies.
* Portable
* It can produce results in a variety of formats, such as plain text, SQL, JSON, XML, YAML, and comma-separated values.
* Can count your git commits.
* Counts the code in directories and sub-directories.
* Counts code within compressed archives like tarballs, Zip files, Java .ear files, etc.
* Open source and cross-platform.
### Installing Cloc
The Cloc utility is available in the default repositories of most Unix-like operating systems. So, you can install it using the default package manager as shown below.
On Arch Linux and its variants:
```
$ sudo pacman -S cloc
```
On Debian, Ubuntu:
```
$ sudo apt-get install cloc
```
On CentOS, Red Hat, Scientific Linux:
```
$ sudo yum install cloc
```
On Fedora:
```
$ sudo dnf install cloc
```
On FreeBSD:
```
$ sudo pkg install cloc
```
It can also be installed using a third-party package manager like [**NPM**][2]:
```
$ npm install -g cloc
```
### Count The Lines Of Source Code In Many Programming Languages
Let us start with a simple example. I have a “hello world” program written in C in my current working directory.
```
$ cat hello.c
#include <stdio.h>
int main()
{
// printf() displays the string inside quotation
printf("Hello, World!");
return 0;
}
```
To count the lines of code in the hello.c program, simply run:
```
$ cloc hello.c
```
Sample output:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/Hello-World-Program.png)
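In case the image above doesn't load, the output looks roughly like this; the exact header, ordering, and totals vary by cloc version, and the numbers here simply restate the counts discussed below:
```
Language    files    blank    comment    code
C           1        0        1          6
```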
The first column specifies the **name of the programming language the source code is written in**. As you can see in the above output, the source code of the “hello world” program is written in **C**.
The second column displays the **number of files in each programming language**. So, our code contains **1 file** in total.
The third column displays the **total number of blank lines**. We have zero blank lines in our code.
The fourth column displays the **number of comment lines**.
And the fifth and final column displays the **total physical lines of the given source code**.
It is just a six-line program, so counting the lines by hand is not a big deal. But what about a big source code file? Have a look at the following example:
```
$ cloc file.tar.gz
```
Sample output:
![](https://www.ostechnix.com/wp-content/uploads/2018/10/cloc-1.png)
As the above output shows, it would be quite difficult to find the exact line counts manually, but Cloc displays the result in seconds in a nice tabular format. You can view the grand total of each column at the end, which is quite handy when analyzing the source code of a program.
Cloc counts not only individual source code files, but also files inside directories and sub-directories, archives, and even specific git commits.
**Count the lines of code in a directory:**
```
$ cloc dir/
```
![][4]
**Sub-directory:**
```
$ cloc dir/cloc/tests
```
![][5]
**Count the lines of code in an archive file:**
```
$ cloc archive.zip
```
![][6]
You can also count lines in a git repository, using a specific commit like below.
```
$ git clone https://github.com/AlDanial/cloc.git
$ cd cloc
$ cloc 157d706
```
![][7]
Cloc can recognize several programming languages. To view the complete list of recognized languages, run:
```
$ cloc --show-lang
```
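As noted in the feature list above, Cloc can also emit machine-readable output. Assuming your cloc build supports the `--json` switch (recent releases do; `cloc --help` will confirm), you can get the same counts as JSON:
```
$ cloc --json hello.c
```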
For more details, refer to the help section.
```
$ cloc --help
```
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-programming-languages/
作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer/
[2]: https://www.ostechnix.com/install-node-js-linux/
[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-2-1.png
[5]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-4.png
[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-3.png
[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-5.png

View File

@ -0,0 +1,281 @@
坚实的 React 基础:初学者指南
============================================================
![](https://cdn-images-1.medium.com/max/1000/1*wj5ujzj5wPQIKb0mIWLgNQ.png)
React.js crash course
在过去的几个月里,我一直在使用 React 和 React-Native。我已经发布了两个作为产品的应用 [Kiven Aa][1]React [Pollen Chat][2]React Native。当我开始学习 React 时,我找了一些不仅仅是教我如何用 React 写应用的东西(一个博客,一个视频,一个课程,等等),我也想让它帮我做好面试准备。
我发现的大部分资料都集中在某一单一方面上。所以,这篇文章针对的是那些希望理论与实践完美结合的观众。我会告诉你一些理论,以便你了解幕后发生的事情,然后我会向你展示如何编写一些 React.js 代码。
如果你更喜欢视频形式我在YouTube上传了整个课程请去看看。
让我们开始......
> React.js 是一个用于构建用户界面的 JavaScript 库
你可以构建各种单页应用程序。例如,你希望在用户界面上实时显示更改的聊天软件和电子商务门户。
### 一切都是组件
React 应用由组件组成,数量多且互相嵌套。你或许会问:“可什么是组件呢?”
组件是可重用的代码段,它定义了某些功能在 UI 上的外观和行为。 比如,按钮就是一个组件。
让我们看看下面的计算器当你尝试计算2 + 2 = 4 -1 = 3简单的数学题你会在Google上看到这个计算器。
![](https://cdn-images-1.medium.com/max/1000/1*NS9DykYDyYG7__UXJdysTA.png)
红色标记表示组件
如上图所示,这个计算器有很多区域,比如展示窗口和数字键盘。所有这些都可以是许多单独的组件或一个巨大的组件。这取决于在 React 中分解和抽象出事物的程度。你为所有这些组件分别编写代码,然后合并这些组件到一个容器中,而这个容器又是一个 React 组件。这样你就可以创建可重用的组件,最终的应用将是一组协同工作的单独组件。
以下是一个你践行了以上原则并可以用 React 编写计算器的方法。
```
<Calculator>
<DisplayWindow />
<NumPad>
<Key number={1}/>
<Key number={2}/>
.
.
.
<Key number={9}/>
</NumPad>
</Calculator>
```
没错它看起来像HTML代码然而并不是。我们将在后面的部分中详细探讨它。
### 设置我们的 Playground
这篇教程专注于 React 的基础部分。它没有偏向 Web 或 React Native开发移动应用。所以我们会用一个在线编辑器这样可以在学习 React 能做什么之前避免 web 或 native 的具体配置。
我已经为读者在 [codepen.io][4] 设置好了开发环境。只需点开这个链接并且阅读所有 HTML 和 JavaScript 注释。
### 控制组件
我们已经了解到 React 应用是各种组件的集合,结构为嵌套树。因此,我们需要某种机制来将数据从一个组件传递到另一个组件。
#### 进入 “props”
我们可以使用 `props` 对象将任意数据传递给我们的组件。 React 中的每个组件都会获取 `props` 对象。在学习如何使用 `props` 之前,让我们学习函数式组件。
#### a) 函数式组件
在 React 中,一个函数式组件通过 `props` 对象使用你传递给它的任意数据。它返回一个对象,该对象描述了 React 应渲染的 UI。函数式组件也称为无状态组件。
让我们编写第一个函数式组件。
```
function Hello(props) {
return <div>{props.name}</div>
}
```
就这么简单。我们只是将 `props` 作为参数传递给了一个普通的 JavaScript 函数并且有返回值。嗯?返回了什么?那个 `<div>{props.name}</div>`。它是 JSXJavaScript Extended。我们将在后面的部分中详细了解它。
上面这个函数将在浏览器中渲染出以下HTML。
```
<!-- If the "props" object is: {name: 'rajat'} -->
<div>
rajat
</div>
```
> 阅读以下有关 JSX 的部分,这一部分解释了如何从我们的 JSX 代码中得到这段 HTML 。
如何在 React 应用中使用这个函数式组件? 很高兴你问了! 它就像下面这么简单。
```
<Hello name='rajat' age={26}/>
```
属性 `name` 在上面的代码中变成了 `Hello` 组件里的 `props.name` ,属性 `age` 变成了 `props.age`
> 记住! 你可以将一个React组件嵌套在其他React组件中。
让我们在 codepen playground 使用 `Hello` 组件。用我们的 `Hello` 组件替换 `ReactDOM.render()` 中的 `div`,并在底部窗口中查看更改。
```
function Hello(props) {
return <div>{props.name}</div>
}
ReactDOM.render(<Hello name="rajat"/>, document.getElementById('root'));
```
> 但是如果你的组件有一些内部状态怎么办?例如,像下面的计数器组件一样,它有一个内部计数变量,它在 + 和 - 键按下时发生变化。
具有内部状态的 React 组件
#### b) 基于类的组件
基于类的组件有一个额外属性 `state` ,你可以用它存放组件的私有数据。我们可以用 class 表示法重写我们的 `Hello` 。由于这些组件具有状态,因此这些组件也称为有状态组件。
```
class Counter extends React.Component {
// this method should be present in your component
render() {
return (
<div>
{this.props.name}
</div>
);
}
}
```
我们继承了 React 库的 React.Component 类以在React中创建基于类的组件。在[这里][5]了解更多有关 JavaScript 类的东西。
`render()` 方法必须存在于你的类中,因为 React 会查找此方法,用以了解它应在屏幕上渲染的 UI。要使用这种内部状态,我们首先必须按以下方式,在组件类的构造函数中初始化状态对象。
```
class Counter extends React.Component {
constructor() {
super();
// define the internal state of the component
this.state = {name: 'rajat'}
}
render() {
return (
<div>
{this.state.name}
</div>
);
}
}
// Usage:
// In your react app: <Counter />
```
类似地,可以使用 this.props 对象在我们基于类的组件内访问 props。
要设置 state请使用 `React.Component``setState()`。 在本教程的最后一部分中,我们将看到一个这样的例子。
> 提示:永远不要在 `render()` 函数中调用 `setState()`,因为 `setState` 会导致组件重新渲染,这将导致无限循环。
![](https://cdn-images-1.medium.com/max/1000/1*rPUhERO1Bnr5XdyzEwNOwg.png)
基于类的组件具有可选属性 “state”。
除了 `state` 以外,基于类的组件还有一些生命周期方法,比如 `componentWillMount()`。你可以利用这些去做初始化 `state` 这样的事,可是那将超出这篇文章的范畴。
### JSX
JSX 是 JavaScript Extended 的一种简短形式,它是一种编写 React components 的方法。使用 JSX你可以在类 XML 标签中获得 JavaScript 的全部力量。
你把 JavaScript 表达式放在`{}`里。下面是一些有效的 JSX 例子。
```
<button disabled={true}>Press me!</button>
<button disabled={true}>Press me {3+1} times!</button>;
<div className='container'><Hello /></div>
```
它的工作方式是你编写 JSX 来描述你的 UI 应该是什么样子。像 Babel 这样的转码器将这些代码转换为一堆 `React.createElement()`调用。然后React 库使用这些 `React.createElement()`调用来构造 DOM 元素的树状结构。对于 React 的网页视图或 React Native 的 Native 视图,它将保存在内存中。
React 接着会计算它如何在存储展示给用户的 UI 的内存中有效地模仿这个树。此过程称为 [reconciliation][7]。完成计算后React会对屏幕上的真正 UI 进行更改。
![](https://cdn-images-1.medium.com/max/1000/1*ighKXxBnnSdDlaOr5-ZOPg.png)
React 如何将你的 JSX 转化为描述应用 UI 的树。
你可以使用 [Babel 的在线 REPL][8] 查看当你写一些 JSX 的时候React 的真正输出。
![](https://cdn-images-1.medium.com/max/1000/1*NRuBKgzNh1nHwXn0JKHafg.png)
使用Babel REPL 转换 JSX 为普通 JavaScript
> 由于 JSX 只是 `React.createElement()` 调用的语法糖,因此可以在没有 JSX 的情况下使用 React。
现在我们了解了所有的概念所以我们已经准备好编写我们之前看到的作为GIF图的计数器组件。
代码如下,我希望你已经知道了如何在我们的 playground 上渲染它。
```
class Counter extends React.Component {
constructor(props) {
super(props);
this.state = {count: this.props.start || 0}
// the following bindings are necessary to make `this` work in the callback
this.inc = this.inc.bind(this);
this.dec = this.dec.bind(this);
}
inc() {
this.setState({
count: this.state.count + 1
});
}
dec() {
this.setState({
count: this.state.count - 1
});
}
render() {
return (
<div>
<button onClick={this.inc}>+</button>
<button onClick={this.dec}>-</button>
<div>{this.state.count}</div>
</div>
);
}
}
```
以下是关于上述代码的一些重点。
1. JSX 使用 `驼峰命名` ,所以 `button` 的 属性是 `onClick`不是我们在HTML中用的 `onclick`
2. 绑定 `this` 是必要的,以便在回调时工作。 请参阅上面代码中的第8行和第9行。
最终的交互式代码位于[此处][9]。
有了这个,我们已经到了 React 速成课程的结束。我希望我已经阐明了 React 如何工作以及如何使用 React 来构建更大的应用程序,使用更小和可重用的组件。
--------------------------------------------------------------------------------
via: https://medium.freecodecamp.org/rock-solid-react-js-foundations-a-beginners-guide-c45c93f5a923
作者:[Rajat Saxena ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.freecodecamp.org/@rajat1saxena
[1]:https://kivenaa.com/
[2]:https://play.google.com/store/apps/details?id=com.pollenchat.android
[3]:https://facebook.github.io/react-native/
[4]:https://codepen.io/raynesax/pen/MrNmBM
[5]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes
[6]:https://en.wikipedia.org/wiki/Source-to-source_compiler
[7]:https://reactjs.org/docs/reconciliation.html
[8]:https://babeljs.io/repl
[9]:https://codepen.io/raynesax/pen/QaROqK
[10]:https://twitter.com/rajat1saxena
[11]:mailto:rajat@raynstudios.com
[12]:https://www.youtube.com/channel/UCUmQhjjF9bsIaVDJUHSIIKw

View File

@ -0,0 +1,145 @@
NPM 的桌面 GUI 程序
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/ndm-3-720x340.png)
NPM 是 **N**ode **P**ackage **M**anager(node 包管理器)的缩写,它是用于安装 NodeJS 软件包或模块的命令行软件包管理器。我们发布过一个指南,描述了如何[**使用 NPM 管理 NodeJS 包**][1]。你可能已经注意到,使用 NPM 管理 NodeJS 包或模块并不是什么大问题。但是,如果你不习惯用 CLI 的方式,这有一个名为 **NDM** 的桌面 GUI 程序,它可用于管理 NodeJS 程序/模块。NDM 代表 **N**PM **D**esktop **M**anager(npm 桌面管理器),是 NPM 的免费开源图形前端,它允许我们通过简单的图形桌面安装、更新、删除 NodeJS 包。
在这个简短的教程中,我们将了解 Linux 中的 Ndm。
### 安装 NDM
NDM 在 AUR 中可用,因此你可以在 Arch Linux 及其衍生版(如 Antergos 和 Manjaro Linux上使用任何 AUR 助手程序安装。
使用 [**Pacaur**][2]
```
$ pacaur -S ndm
```
使用 [**Packer**][3]
```
$ packer -S ndm
```
使用 [**Trizen**][4]
```
$ trizen -S ndm
```
使用 [**Yay**][5]
```
$ yay -S ndm
```
使用 [**Yaourt**][6]
```
$ yaourt -S ndm
```
在基于 RHEL 的系统(如 CentOS)上,运行以下命令以安装 NDM。
```
$ echo "[fury]
name=ndm repository
baseurl=https://repo.fury.io/720kb/
enabled=1
gpgcheck=0" | sudo tee /etc/yum.repos.d/ndm.repo && sudo yum update && sudo yum install ndm
```
在 Debian、Ubuntu、Linux Mint
```
$ echo "deb [trusted=yes] https://apt.fury.io/720kb/ /" | sudo tee /etc/apt/sources.list.d/ndm.list && sudo apt-get update && sudo apt-get install ndm
```
也可以使用 **Linuxbrew** 安装 NDM。首先按照以下链接中的说明安装 Linuxbrew。
安装 Linuxbrew 后,可以使用以下命令安装 NDM
```
$ brew update
$ brew install ndm
```
在其他 Linux 发行版上,进入[**NDM 发布页面**][7],下载最新版本,自行编译和安装。
### NDM 使用
从菜单或使用应用启动器启动 NDM。这就是 NDM 的默认界面。
![][9]
在这里你可以本地或全局安装 NodeJS 包/模块。
**本地安装 NodeJS 包**
要在本地安装软件包,首先通过单击主屏幕上的 **“Add projects”** 按钮选择项目目录,然后选择要保留项目文件的目录。例如,我选择了一个名为 **“demo”** 的目录作为我的项目目录。
单击项目目录(即 **demo**),然后单击 **Add packages** 按钮。
![][10]
输入要安装的软件包名称,然后单击 **Install** 按钮。
![][11]
安装后,软件包将列在项目目录下。只需单击该目录即可在本地查看已安装软件包的列表。
![][12]
同样,你可以创建单独的项目目录并在其中安装 NodeJS 模块。要查看项目中已安装模块的列表,请单击项目目录,右侧将显示软件包。
**全局安装 NodeJS 包**
要全局安装 NodeJS 包,请单击主界面左侧的 **Globals** 按钮。然后,单击 “Add packages” 按钮,输入包的名称并单击 “Install” 按钮。
**管理包**
单击任何已安装的包,你将在顶部看到各种选项,例如:
1. 版本(查看已安装的版本)
2. 最新(安装最新版本)
3. 更新(更新当前选定的包)
4. 卸载(删除所选包)等。
![][13]
NDM 还有两个选项:**“Update npm”** 用于将 node 包管理器更新成最新可用版本;**“Doctor”** 则运行一组检查,以确保你的 npm 安装具备管理包/模块所需的功能。
### 结论
NDM 使安装、更新、删除 NodeJS 包的过程更加容易你无需记住执行这些任务的命令。NDM 让我们在简单的图形界面中点击几下鼠标即可完成所有操作。对于那些懒得输入命令的人来说NDM 是管理 NodeJS 包的完美伴侣。
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/ndm-a-desktop-gui-application-for-npm/
作者:[SK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/manage-nodejs-packages-using-npm/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
[7]:https://github.com/720kb/ndm/releases
[8]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[9]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-1.png
[10]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-5-1.png
[11]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-6.png
[12]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-7.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/04/ndm-8.png

View File

@ -0,0 +1,139 @@
我应该使用哪些稳定版内核?
======
很多人都问我这样的问题,在他们的产品/设备/笔记本/服务器等上面应该使用什么样的稳定版内核。一直以来,尤其是那些现在已经延长支持时间的内核,都是由我和其他人提供支持,因此,给出这个问题的答案并不是件容易的事情。因此这篇文章我将尝试去给出我在这个问题上的看法。当然,你可以任意选用任何一个你想去使用的内核版本,这里只是我的建议。
和以前一样,在这里给出的这些看法只代表我个人的意见。
### 可选择的内核有哪些
下面列出了我建议你应该去使用的内核的列表,从最好的到最差的都有。我在下面将详细介绍,但是如果你只想得到一个结论,它就是你想要的:
建议你使用的内核的分级,从最佳的方案到最差的方案如下:
* 你最喜欢的 Linux 发行版支持的内核
* 最新的稳定版
* 最新的 LTS 发行版
* 仍然处于维护状态的老的 LTS 发行版
绝对不要去使用的内核:
* 不再维护的内核发行版
为上面的列表给出具体的数字:今天是 2018 年 8 月 24 日,kernel.org 页面上可以看到是这样:
![][1]
因此,基于上面的列表,那它应该是:
* 4.18.5 是最新的稳定版
* 4.14.67 是最新的 LTS 发行版
* 4.9.124、4.4.152、以及 3.16.57 是仍然处于维护状态的老的 LTS 发行版
* 4.17.19 和 3.18.119 是过去 60 天内 “生命周期终止” 的内核版本,它们仍然保留在 kernel.org 站点上,是为了仍然想去使用它们的那些人。
非常容易,对吗?
Ok现在我给出这样选择的一些理由
### Linux 发行版内核
对于大多数 Linux 用户来说,最好的方案就是使用你喜欢的 Linux 发行版的内核。就我本人而言,我比较喜欢基于社区的、内核不断滚动升级的用最新内核的 Linux 发行版,并且它也是由开发者社区来支持的。这种类型的发行版有 Fedora、openSUSE、Arch、Gentoo、CoreOS、以及其它的。
所有这些发行版都使用了上游的最新的稳定版内核,并且确保定期打了需要的 bug 修复补丁。当它拥有了最新的修复之后([记住所有的修复都是安全修复][2]),这就是你可以使用的最安全、最好的内核之一。
有些社区的 Linux 发行版需要很长的时间才发行一个新内核的发行版但是最终发行的版本和所支持的内核都是非常好的。这些也都非常好用Debian 和 Ubuntu 就是这样的例子。
我没有在这里列出你所喜欢的发行版,并不是意味着它们的内核不够好。查看这些发行版的网站,确保它们的内核包是不断应用最新的安全补丁进行升级过的,那么它就应该是很好的。
许多人好像喜欢旧的、“传统” 模式的发行版,以及使用 RHEL、SLES、CentOS 或者 “LTS” Ubuntu 发行版。这些发行版挑选一个特定的内核版本,然后使用好几年,而不是几十年。他们移植了最新的 bug 修复,有时也有一些内核的新特性,所有的只是追求堂吉诃德式的保持版本号不变而已,尽管他们已经在那个旧的内核版本上做了成千上万的变更。这其实是一个吃力不讨好的工作,开发者分配去做这些任务,看上去做的很不错,其实就是为了实现这些目标。如果你从来没有看到你的内核版本号发生过变化,而仍然在使用这些发行版。他们通常会为使用而付出一些成本,当发生错误时能够从这些公司得到一些支持,那就是值得的。
所以,你能使用的最好的内核是你可以求助于别人,而别人可以为你提供支持的内核。使用那些支持,你通常都已经为它支付过费用了(对于企业发行版),而这些公司也知道他们职责是什么。
但是,如果你不希望去依赖别人,而是希望你自己管理你的内核,或者你有发行版不支持的硬件,那么你应该去使用最新的稳定版:
### 最新的稳定版
最新的稳定版内核是 Linux 内核开发者社区宣布为“稳定版”的最新的一个内核。大约每三个月,社区发行一个包含了对所有新硬件支持的、新的稳定版内核,最新版的内核不但改善内核性能,同时还包含内核各部分的 bug 修复。再经过三个月之后,进入到下一个内核版本的 bug 修复将被移植进入这个稳定版内核中,因此,使用这个内核版本的用户将确保立即得到这些修复。
最新的稳定版内核通常也是主流社区发行版使用的较好的内核,因此你可以确保它是经过测试和拥有大量用户使用的内核。另外,内核社区(全部开发者超过 4000 人)也将帮助这个发行版提供对用户的支持,因为这是他们做的最新的一个内核。
三个月之后,将发行一个新的稳定版内核,你应该去更新到它以确保你的内核始终是最新的稳定版,当最新的稳定版内核发布之后,对你的当前稳定版内核的支持通常会落后几周时间。
如果你在上一个 LTS 版本发布之后购买了最新的硬件,为了能够支持最新的硬件,你几乎是绝对需要去运行这个最新的稳定版内核。对于台式机或新的服务器,它们通常需要运行在它们推荐的内核版本上。
### 最新的 LTS 发行版
如果你的硬件为了保证正常运行(像大多数的嵌入式设备),需要依赖供应商的源码树外的补丁,那么对你来说,最好的内核版本是最新的 LTS 发行版。这个发行版拥有所有进入稳定版内核的最新 bug 修复,以及大量的用户测试和使用。
请注意,这个最新的 LTS 发行版没有新特性,并且也几乎不会增加对新硬件的支持,因此,如果你需要使用一个新设备,那你的最佳选择就是最新的稳定版内核,而不是最新的 LTS 版内核。
另外,对于这个 LTS 发行版内核的用户来说,他也不用担心每三个月一次的“重大”升级。因此,他们将一直坚持使用这个 LTS 内核发行版,并每年升级一次,这是一个很好的实践。
使用这个 LTS 发行版的不利方面是,你没法得到在最新版本内核上实现的内核性能提升,除非在未来的一年中,你升级到下一个 LTS 版内核。
另外,如果你使用的这个内核版本有问题,你所做的第一件事情就是向任意一位内核开发者报告发生的问题,并向他们询问,“最新的稳定版内核中是否也存在这个问题?”并且,你将意识到,对它的支持不会像使用最新的稳定版内核那样容易得到。
现在,如果你坚持使用一个有大量的补丁集的内核,并且不希望升级到每年一次的新 LTS 内核版本上,那么,或许你应该去使用老的 LTS 发行版内核:
### 老的 LTS 发行版
这些发行版传统上都由社区提供 2 年时间的支持,有时候当一个重要的 Linux 发行版(像 Debian 或 SLES 一样)依赖它时,这个支持时间会更长。然而在过去一年里,感谢 Google、Linaro、Linaro 成员公司、[kernelci.org][3]、以及其它公司在测试和基础设施上的大量投入,使得这些老的 LTS 发行版内核得到更长时间的支持。
这是最新的 LTS 发行版,它们将被支持多长时间,这是 2018 年 8 月 24 日显示在 [kernel.org/category/releases.html][4] 上的信息:
![][5]
Google 和其它公司希望这些内核使用的时间更长的原因是,由于现在几乎所有的 SoC 芯片的疯狂(也有人说是打破常规)的开发模型。这些设备在芯片发行前几年就启动了他们的开发生命周期,而那些代码从来不会合并到上游,最终结果是始终在一个分支中,新的芯片基于一个 2 年以前的老内核发布。这些 SoC 的代码树通常增加了超过 200 万行的代码,这使得它们成为我们前面称之为“类 Linux 内核“的东西。
如果在 2 年后,这个 LTS 发行版停止支持,那么来自社区的支持将立即停止,并且没有人对它再进行 bug 修复。这导致了在全球各地数以百万计的不安全设备仍然在使用中,这对任何生态系统来说都不是什么好事情。
由于这种依赖,这些公司现在要求新设备不断更新到最新的 LTS 发行版,而这些特定的发行版(即每个 4.9.y 发行版)就是为它们发行的。其中一个这样的例子就是新 Android 设备对内核版本的要求,这些新设备的 “O” 版本和现在的 “P” 版本指定了最低允许使用的内核版本,并且在设备上越来越频繁升级的、安全的 Android 发行版开始要求使用这些 “.y” 发行版。
我注意到一些生产商现在已经在做这些事情。Sony 是其中一个非常好的例子,在他们的大多数新手机上,通过他们每季度的安全发行版,将设备更新到最新的 4.4.y 发行版上。另一个很好的例子是一家小型公司 Essential他们持续跟踪 4.4.y 发行版,据我所知,他们发布新版本的速度比其它公司都快。
当使用这种很老的内核时有个重大警告。移植到这种内核中的 bug 修复比起最新版本的 LTS 内核来说数量少很多,因为这些使用很老的 LTS 内核的传统设备型号要远少于现在的用户使用的型号。如果你打算将它们用在有不可信的用户或虚拟机的地方,那么这些内核将不再被用于任何”通用计算“的模型中,因为对于这些内核不会去做像最近的 Spectre 这样的修复,如果在一些分支中存在这样的 bug那么将极大地降低安全性。
因此,仅当在你能够完全控制的设备中使用老的 LTS 发行版,或者是使用在有一个非常强大的安全模型(像 Android 一样强制使用 SELinux 和应用程序隔离)去限制的情况下。绝对不要在有不可信用户、程序、或虚拟机的服务器上使用这些老的 LTS 发行版内核。
此外,如果社区对它有支持的话,社区对这些老的 LTS 内核发行版相比正常的 LTS 内核发行版的支持要少的多。如果你使用这些内核,那么你只能是一个人在战斗,你需要有能力去独自支持这些内核,或者依赖你的 SoC 供应商为你提供支持(需要注意的是,对于大部分供应商来说是不会为你提供支持的,因此,你要特别注意 …)。
### 不再维护的内核发行版
更让人感到惊讶的事情是,许多公司只是随便选一个内核发行版,然后将它封装到它们的产品里,并将它毫不犹豫地承载到数十万的部件中。其中一个这样的糟糕例子是 Lego Mindstorm 系统,不知道是什么原因在它们的设备上随意选取了一个 `-rc` 的内核发行版。`-rc` 的发行版是开发中的版本Linux 内核开发者认为它根本就不适合任何人使用,更不用说是数百万的用户了。
当然,如果你愿意,你可以随意地使用它,但是需要注意的是,可能真的就只有你一个人在使用它。社区不会为你提供支持,因为他们不可能关注所有内核版本的特定问题,因此如果出现错误,你只能独自去解决它。对于一些公司和系统来说,这么做可能还行,但是如果没有为此有所规划,那么要当心因此而产生的”隐性“成本。
### 总结
基于以上原因,下面是一个针对不同类型设备的简短列表,这些设备我推荐适用的内核如下:
* 笔记本 / 台式机:最新的稳定版内核
* 服务器:最新的稳定版内核或最新的 LTS 版内核
* 嵌入式设备:最新的 LTS 版内核或老的 LTS 版内核(如果使用的安全模型非常强大和严格)
至于我,在我的机器上运行什么样的内核?我的笔记本运行的是最新的开发版内核(即 Linus 的开发树)再加上我正在做修改的内核,我的服务器上运行的是最新的稳定版内核。因此,尽管我负责 LTS 发行版的支持工作,但我自己并不使用 LTS 版内核,除了在测试系统上。我依赖于开发版和最新的稳定版内核,以确保我的机器运行的是目前我们所知道的最快的也是最安全的内核版本。
--------------------------------------------------------------------------------
via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/
作者:[Greg Kroah-Hartman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://kroah.com
[1]:https://s3.amazonaws.com/kroah.com/images/kernel.org_2018_08_24.png
[2]:http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/
[3]:https://kernelci.org/
[4]:https://www.kernel.org/category/releases.html
[5]:https://s3.amazonaws.com/kroah.com/images/kernel.org_releases_2018_08_24.png

View File

@ -0,0 +1,88 @@
三个开源的分布式追踪工具
======
这几个工具对复杂软件系统中的实时事件做了可视化,能帮助你快速发现性能问题。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8)
分布式追踪系统能够从头到尾地追踪分布式系统中的请求,跨越多个应用、服务、数据库以及像代理这样的中间件。它能帮助你更深入地理解系统中到底发生了什么。追踪系统以图形化的方式,展示出每个已知步骤以及某个请求在每个步骤上的耗时。
用户可以通过这些展示来判断系统的哪个环节有延迟或阻塞,当请求失败时,运维和开发人员可以看到准确的问题源头,而不需要去测试整个系统,比如用二叉查找树的方法去定位问题。在开发迭代的过程中,追踪系统还能够展示出可能引起性能变化的环节。通过异常行为的警告自动地感知到性能在退化,总是比客户告诉你要好。
追踪是怎么工作的呢?给每个请求分配一个特殊 ID这个 ID 通常会插入到请求头部中。它唯一标识了对应的事务。一般把事务叫做 tracetrace 是抽象整个事务的概念。每一个 trace 由 span 组成span 代表着一次请求中真正执行的操作,比如一次服务调用,一次数据库请求等。每一个 span 也有自己唯一的 ID。span 之下也可以创建子 span子 span 可以有多个父 span。
当一次事务(或者说 trace运行过之后就可以在追踪系统的表示层上搜索了。有几个工具可以用作表示层我们下文会讨论不过我们先看下面的图它是我在 [Istio walkthrough][2] 视频教程中提到的 [Jaeger][1] 界面,展示了单个 trace 中的多个 span。很明显这个图能让你一目了然地对事务有更深的了解。
![](https://opensource.com/sites/default/files/uploads/monitoring_guide_jaeger_istio_0.png)
这个 demo 使用了 Istio 内置的 OpenTracing 实现,所以我甚至不需要修改自己的应用代码就可以获得追踪数据。我也用到了 Jaeger它是兼容 OpenTracing 的。
那么 OpenTracing 到底是什么呢?我们来看看。
### OpenTracing API
[OpenTracing][3] 是源自 [Zipkin][4] 的规范,以提供跨平台兼容性。它提供了对厂商中立的 API,用来向应用程序添加追踪功能并将追踪数据发送到分布式的追踪系统。按照 OpenTracing 规范编写的库,可以被任何兼容 OpenTracing 的系统使用。采用这个开放标准的开源工具有 Zipkin、Jaeger 和 Appdash 等。甚至像 [Datadog][5] 和 [Instana][6] 这种付费工具也在采用。因为现在 OpenTracing 已经无处不在,这样的趋势有望继续发展下去。
### OpenCensus
OpenTracing 已经说过了,可 [OpenCensus][7] 又是什么呢?它在搜索结果中老是出现。它是一个和 OpenTracing 完全不同或者互补的竞争标准吗?
这个问题的答案取决于你的提问对象。我先尽我所能地解释一下他们的不同按照我的理解OpenCensus 更加全面或者说它包罗万象。OpenTracing 专注于建立开放的 API 和规范而不是为每一种开发语言和追踪系统都提供开放的实现。OpenCensus 不仅提供规范,还提供开发语言的实现,和连接协议,而且它不仅只做追踪,还引入了额外的度量指标,这些一般不在分布式追踪系统的职责范围。
使用 OpenCensus,我们能够在运行着应用程序的主机上查看追踪数据,但它也有个可插拔的导出器系统,用于导出数据到中心聚合器。目前 OpenCensus 团队提供的导出器包括 Zipkin、Prometheus、Jaeger、Stackdriver、Datadog 和 SignalFx,不过任何人都可以创建一个导出器。
依我看,这两者有很多重叠的部分,没有哪个一定比另外一个好,但是重要的是要知道它们做什么事情和不做什么事情。OpenTracing 主要是一个规范,具体的实现和独断的设计由其他人来做。OpenCensus 更加独断地为本地组件提供了全面的解决方案,但是仍然需要其他系统做远程的聚合。
### 可选工具
#### Zipkin
Zipkin 是最早出现的这类工具之一。谷歌在 2010 年发表了介绍其内部追踪系统 Dapper 的[论文][8],Twitter 以此为基础开发了 Zipkin。Zipkin 的开发语言是 Java,用 Cassandra 或 ElasticSearch 作为可扩展的存储后端,这些选择能满足大部分公司的需求。Zipkin 支持的最低 Java 版本是 Java 6,它也使用了 [Thrift][9] 的二进制通信协议,Thrift 在 Twitter 的系统中很流行,现在作为 Apache 项目在托管。
这个系统包括上报器(客户端)、数据收集器、查询服务和一个 web 界面。Zipkin 只传输一个带事务上下文的 trace ID 来告知接收者追踪的进行,所以说在生产环境中是安全的。每一个客户端收集到的数据,会异步地传输到数据收集器。收集器把这些 span 的数据存到数据库,web 界面负责用可消费的格式展示这些数据给用户。客户端传输数据到收集器有三种方式:HTTP、Kafka 和 Scribe。
[Zipkin 社区][10] 还提供了 [Brave][11],一个跟 Zipkin 兼容的 Java 客户端的实现。由于 Brave 没有任何依赖,所以它不会拖累你的项目,也不会使用跟你们公司标准不兼容的库来搞乱你的项目。除 Brave 之外,还有很多其他的 Zipkin 客户端实现,因为 Zipkin 和 OpenTracing 标准是兼容的,所以这些实现也能用到其他的分布式追踪系统中。流行的 Spring 框架中一个叫 [Spring Cloud Sleuth][12] 的分布式追踪组件,它和 Zipkin 是兼容的。
#### Jaeger
[Jaeger][1] 来自 Uber是一个比较新的项目[CNCF][13] (云原生计算基金会)已经把 Jaeger 托管为孵化项目。Jaeger 使用 Golang 开发,因此你不用担心在服务器上安装依赖的问题,也不用担心开发语言的解释器或虚拟机的开销。和 Zipkin 类似Jaeger 也支持用 Cassandra 和 ElasticSearch 做可扩展的存储后端。Jaeger 也完全兼容 OpenTracing 标准。
Jaeger 的架构跟 Zipkin 很像,有客户端(上报器),数据收集器,查询服务和一个 web 界面,不过它还有一个在各个服务器上运行着的代理,负责在服务器本地做数据聚合。代理通过一个 UDP 连接接收数据,然后分批处理,发送到数据收集器。收集器接收到的数据是 [Thrift][14] 协议的格式,它把数据存到 Cassandra 或者 ElasticSearch 中。查询服务能直接访问数据库,并给 web 界面提供所需的信息。
默认情况下,Jaeger 客户端不会采集所有的追踪数据,只抽样 0.1%(1000 个中采 1 个)的追踪数据。对大多数系统来说,保留所有的追踪数据并传输的话就太多了。不过,通过配置代理可以调整这个值,客户端会从代理获取自己的配置。这个抽样并不是完全随机的,并且正在变得越来越好。Jaeger 使用概率抽样,试图对是否应该对新踪迹进行抽样做出有根据的猜测。自适应采样已经在[路线图][15]上,它将通过添加额外的、能够帮助做决策的上下文,来改进采样算法。
#### Appdash
[Appdash][16] 也是一个用 Golang 写的分布式追踪系统,和 Jaeger 一样。Appdash 是 [Sourcegraph][17] 公司基于谷歌的 Dapper 和 Twitter 的 Zipkin 开发的。同样的,它也支持 Opentracing 标准,不过这是后来添加的功能,依赖了一个与默认组件不同的组件,因此增加了风险和复杂度。
从高层次来看,Appdash 的架构主要有三个部分:客户端、本地收集器和远程收集器。因为没有很多文档,所以这个架构描述是基于对系统的测试以及查看源码。写代码时需要把 Appdash 的客户端添加进来。Appdash 提供了 Python、Golang 和 Ruby 的实现,不过 OpenTracing 库可以与 Appdash 的 OpenTracing 实现一起使用。客户端收集 span 数据,并将它们发送到本地收集器。然后,本地收集器将数据发送到中心的 Appdash 服务器,这个服务器上运行着自己的本地收集器,它的本地收集器是其他所有节点的远程收集器。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/distributed-tracing-tools
作者:[Dan Barker][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[belitex](https://github.com/belitex)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/barkerd427
[1]: https://www.jaegertracing.io/
[2]: https://www.youtube.com/watch?v=T8BbeqZ0Rls
[3]: http://opentracing.io/
[4]: https://zipkin.io/
[5]: https://www.datadoghq.com/
[6]: https://www.instana.com/
[7]: https://opencensus.io/
[8]: https://research.google.com/archive/papers/dapper-2010-1.pdf
[9]: https://thrift.apache.org/
[10]: https://zipkin.io/pages/community.html
[11]: https://github.com/openzipkin/brave
[12]: https://cloud.spring.io/spring-cloud-sleuth/
[13]: https://www.cncf.io/
[14]: https://en.wikipedia.org/wiki/Apache_Thrift
[15]: https://www.jaegertracing.io/docs/roadmap/#adaptive-sampling
[16]: https://github.com/sourcegraph/appdash
[17]: https://about.sourcegraph.com/