Remove the given page.
+ um (t)opic [topic] Get or set the current topic.
+ um topics List all topics.
+ um (c)onfig [config key] Display configuration environment.
+ um (h)elp [sub-command] Display this help message, or the help message for a sub-command.
+
+```
+
+### Configure Um
+
+To view the current configuration, run:
+
+```
+$ um config
+Options prefixed by '*' are set in /home/sk/.um/umconfig.
+editor = vi
+pager = less
+pages_directory = /home/sk/.um/pages
+default_topic = shell
+pages_ext = .md
+
+```
+
+In this file, you can edit and change the values of the **pager**, **editor**, **default_topic**, **pages_directory**, and **pages_ext** options as you wish. For example, if you want to save newly created Um pages in your **[Dropbox][2]** folder, simply change the value of the **pages_directory** directive in the **~/.um/umconfig** file so it points to your Dropbox folder.
+
+```
+pages_directory = /Users/myusername/Dropbox/um
+
+```
+
+And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-create-and-maintain-your-own-man-pages/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/
+[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/
diff --git a/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md b/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md
new file mode 100644
index 0000000000..f933449461
--- /dev/null
+++ b/sources/tech/20181010 5 alerting and visualization tools for sysadmins.md
@@ -0,0 +1,163 @@
+5 alerting and visualization tools for sysadmins
+======
+These open source tools help users understand system behavior and output, and provide alerts for potential problems.
+
+
+
+You probably know (or can guess) what alerting and visualization tools are used for. Why would we discuss them as observability tools, especially since some systems include visualization as a feature?
+
+Observability comes from control theory and describes our ability to understand a system based on its inputs and outputs. This article focuses on the output component of observability.
+
+Alerting and visualization tools analyze the outputs of other systems and provide structured representations of these outputs. Alerts are basically a synthesized understanding of negative system outputs, and visualizations are disambiguated structured representations that facilitate user comprehension.
+
+### Common types of alerts and visualizations
+
+#### Alerts
+
+Let’s first cover what alerts are _not_. Alerts should not be sent if the human responder can’t do anything about the problem. This includes alerts that are sent to multiple individuals with only a few who can respond, or situations where every anomaly in the system triggers an alert. This leads to alert fatigue and receivers ignoring all alerts within a specific medium until the system escalates to a medium that isn’t already saturated.
+
+For example, if an operator receives hundreds of emails a day from the alerting system, that operator will soon ignore all emails from the alerting system. The operator will respond to a real incident only when he or she is experiencing the problem, emailed by a customer, or called by the boss. In this case, alerts have lost their meaning and usefulness.
+
+Alerts are not a constant stream of information or a status update. They are meant to convey a problem from which the system can’t automatically recover, and they are sent only to the individual most likely to be able to recover the system. Everything that falls outside this definition isn’t an alert and will only damage your employees and company culture.
+
+Everyone has a different set of alert types, so I won't discuss things like priority levels (P1-P5) or models that use words like "Informational," "Warning," and "Critical." Instead, I’ll describe the generic categories emergent in complex systems’ incident response.
+
+You might have noticed I mentioned an “Informational” alert type right after I wrote that alerts shouldn’t be informational. Well, not everyone agrees, but I don’t consider something an alert if it isn’t sent to anyone. It is a data point that many systems refer to as an alert. It represents some event that should be known but not responded to. It is generally part of the visualization system of the alerting tool and not an event that triggers actual notifications. Mike Julian covers this and other aspects of alerting in his book [Practical Monitoring][1]. It's a must-read for anyone working in this area.
+
+Non-informational alerts consist of types that can be responded to or require action. I group these into two categories: internal outage and external outage. (Most companies have more than two levels for prioritizing their response efforts.) Degraded system performance is considered an outage in this model, as the impact to each user is usually unknown.
+
+Internal outages are a lower priority than external outages, but they still need to be responded to quickly. They often include internal systems that company employees use or components of applications that are visible only to company employees.
+
+External outages consist of any system outage that would immediately impact a customer. These don’t include a system outage that prevents releasing updates to the system. They do include customer-facing application failures, database outages, and networking partitions that hurt availability or consistency if either can impact a user. They also include outages of tools that may not have a direct impact on users, as the application continues to run but this transparent dependency impacts performance. This is common when the system uses some external service or data source that isn’t necessary for full functionality but may cause delays as the application performs retries or handles errors from this external dependency.
+
+### Visualizations
+
+There are many visualization types, and I won’t cover them all here. It’s a fascinating area of research. On the data analytics side of my career, learning and applying that knowledge is a constant challenge. We need to provide simple representations of complex system outputs for the widest dissemination of information. [Google Charts][2] and [Tableau][3] have a wide selection of visualization types. We’ll cover the most common visualizations and some innovative solutions for quickly understanding systems.
+
+#### Line chart
+
+The line chart is probably the most common visualization. It does a pretty good job of producing an understanding of a system over time. A line chart in a metrics system would have a line for each unique metric or some aggregation of metrics. This can get confusing when there are a lot of metrics in the same dashboard (as shown below), but most systems can select specific metrics to view rather than having all of them visible. Also, anomalous behavior is easy to spot if it’s significant enough to escape the noise of normal operations. Below we can see purple, yellow, and light blue lines that might indicate anomalous behavior.
+
+
+
+Another feature of a line chart is that you can often stack them to show relationships. For example, you might want to look at requests on each server individually, but also in aggregate. This allows you to understand the overall system as well as each instance in the same graph.
+
+
+
+#### Heatmaps
+
+Another common visualization is the heatmap. It is useful when looking at histograms. This type of visualization is similar to a bar chart but can show gradients within the bars representing the different percentiles of the overall metric. For example, suppose you’re looking at request latencies and you want to quickly understand the overall trend as well as the distribution of all requests. A heatmap is great for this, and it can use color to disambiguate the quantity of each section with a quick glance.
+
+The heatmap below shows the higher concentration around the centerline of the graph with an easy-to-understand visualization of the distribution vertically for each time bucket. We might want to review a couple of points in time where the distribution gets wide while the others are fairly tight like at 14:00. This distribution might be a negative performance indicator.
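The bucketing behind such a heatmap can be sketched in a few lines. This is a toy illustration, not tied to any particular monitoring tool; the bucket sizes and the `heatmap_buckets` helper are arbitrary assumptions for the example:

```python
from collections import Counter

def heatmap_buckets(samples, time_bucket_s=60, latency_bucket_ms=50):
    """Count (time bucket, latency bucket) pairs from (timestamp_s, latency_ms) samples.

    Each cell of the resulting grid is what a heatmap colors by count.
    """
    counts = Counter()
    for ts, latency in samples:
        cell = (int(ts // time_bucket_s), int(latency // latency_bucket_ms))
        counts[cell] += 1
    return counts

# Three requests in the first minute: two fast, one slow outlier.
samples = [(1.0, 20), (5.0, 40), (59.0, 480)]
print(heatmap_buckets(samples))
```

A wide vertical spread of occupied latency buckets within one time bucket is exactly the "distribution gets wide" signal described above.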
+
+
+
+#### Gauges
+
+The last common visualization I’ll cover here is the gauge, which helps users understand a single metric quickly. Gauges can represent a single metric, like your speedometer represents your driving speed or your gas gauge represents the amount of gas in your car. Similar to the gas gauge, most monitoring gauges clearly indicate what is good and what isn’t. Often (as is shown below), good is represented by green, getting worse by orange, and “everything is breaking” by red. The middle row below shows traditional gauges.
+
+
+
+This image shows more than just traditional gauges. The other gauges are single stat representations that are similar to the function of the classic gauge. They all use the same color scheme to quickly indicate system health with just a glance. The bottom row is arguably the best example of a gauge that allows you to glance at a dashboard and know that everything is healthy (or not). This type of visualization is usually what I put on a top-level dashboard. It offers a full, high-level understanding of system health in seconds.
+
+#### Flame graphs
+
+A less common visualization is the flame graph, introduced by [Netflix’s Brendan Gregg][4] in 2011. It’s not ideal for dashboarding or quickly observing high-level system concerns; it’s normally seen when trying to understand a specific application problem. This visualization focuses on CPU and memory and the associated frames. The X-axis lists the frames alphabetically, and the Y-axis shows stack depth. Each rectangle is a stack frame and includes the function being called. The wider the rectangle, the more often that frame appears in the sampled stacks. This method is invaluable when trying to diagnose system performance at the application level, and I urge everyone to give it a try.
+
+
+
+### Tool options
+
+There are several commercial options for alerting, but since this is Opensource.com, I’ll cover only systems that are being used at scale by real companies that you can use at no cost. Hopefully, you’ll be able to contribute new and innovative features to make these systems even better.
+
+### Alerting tools
+
+#### Bosun
+
+If you’ve ever done anything with computers and gotten stuck, the help you received was probably thanks to a Stack Exchange system. Stack Exchange runs many different websites around a crowdsourced question-and-answer model. [Stack Overflow][5] is very popular with developers, and [Super User][6] is popular with operations. However, there are now hundreds of sites ranging from parenting to sci-fi and philosophy to bicycles.
+
+Stack Exchange open-sourced its alert management system, [Bosun][7], around the same time Prometheus and its [AlertManager][8] system were released. There were many similarities between the two systems, and that’s a really good thing. Like Prometheus, Bosun is written in Golang. Bosun’s scope is more extensive than Prometheus’ as it can interact with systems beyond metrics aggregation. It can also ingest data from log and event aggregation systems. It supports Graphite, InfluxDB, OpenTSDB, and Elasticsearch.
+
+Bosun’s architecture consists of a single server binary, a backend like OpenTSDB, Redis, and [scollector agents][9]. The scollector agents automatically detect services on a host and report metrics for those processes and other system resources. This data is sent to a metrics backend. The Bosun server binary then queries the backends to determine if any alerts need to be fired. Bosun can also be used by tools like [Grafana][10] to query the underlying backends through one common interface. Redis is used to store state and metadata for Bosun.
+
+A really neat feature of Bosun is that it lets you test your alerts against historical data. This was something I missed in Prometheus several years ago, when I had data for an issue I wanted alerts on but no easy way to test it. To make sure my alerts were working, I had to create and insert dummy data. This system alleviates that very time-consuming process.
+
+Bosun also has the usual features like showing simple graphs and creating alerts. It has a powerful expression language for writing alerting rules. However, it only has email and HTTP notification configurations, which means connecting to Slack and other tools requires a bit more customization ([which its documentation covers][11]). Similar to Prometheus, Bosun can use templates for these notifications, which means they can look as awesome as you want them to. You can use all your HTML and CSS skills to create the baddest email alert anyone has ever seen.
+
+#### Cabot
+
+[Cabot][12] was created by a company called [Arachnys][13]. You may not know who Arachnys is or what it does, but you have probably felt its impact: It built the leading cloud-based solution for fighting financial crimes. That sounds pretty cool, right? At a previous company, I was involved in similar functions around [“know your customer"][14] laws. Most companies would consider it a very bad thing to be linked to a terrorist group, for example, funneling money through their systems. These solutions also help defend against less-atrocious offenders like fraudsters who could also pose a risk to the institution.
+
+So why did Arachnys create Cabot? Well, it is kind of a Christmas present to everyone, as it was a Christmas project built because its developers couldn’t wrap their heads around [Nagios][15]. And really, who can blame them? Cabot was written with Django and Bootstrap, so it should be easy for most to contribute to the project. (Another interesting factoid: The name comes from the creator’s dog.)
+
+The Cabot architecture is similar to Bosun in that it doesn’t collect any data. Instead, it accesses data through the APIs of the tools it is alerting for. Therefore, Cabot uses a pull (rather than a push) model for alerting. It reaches out into each system’s API and retrieves the information it needs to make a decision based on a specific check. Cabot stores the alerting data in a Postgres database and also has a cache using Redis.
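The pull pattern described here can be sketched abstractly: the alerter periodically queries each monitored system's API and decides, per check, whether something is failing. This is a generic illustration of the pattern only; the function names, check names, and metric names are hypothetical, not Cabot's actual API:

```python
def run_checks(checks, fetch_metric):
    """Pull current values for each check and return the names of failing ones.

    checks: list of (name, metric, threshold) tuples.
    fetch_metric: a callable that queries the monitored system's API
    (a hypothetical stand-in for e.g. a Graphite query).
    """
    failing = []
    for name, metric, threshold in checks:
        value = fetch_metric(metric)  # pull, not push: the alerter reaches out
        if value > threshold:
            failing.append(name)
    return failing

# Stubbed API responses standing in for a real backend.
fake_api = {"http_5xx_rate": 0.12, "queue_depth": 3}.get
checks = [("too many 5xx", "http_5xx_rate", 0.05),
          ("backlog", "queue_depth", 100)]
print(run_checks(checks, fake_api))  # only the 5xx check fails
```

The key property of the pull model is visible here: the monitored systems need no knowledge of the alerter, so adding a new check is purely a configuration change on the alerting side.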
+
+Cabot natively supports [Graphite][16], but it also supports [Jenkins][17], which is rare in this area. [Arachnys][13] uses Jenkins like a centralized cron, but I like this idea of treating build failures like outages. Obviously, a build failure isn’t as critical as a production outage, but it could still alert the team and escalate if the failure isn’t resolved. Who actually checks Jenkins every time an email comes in about a build failure? Yeah, me too!
+
+Another interesting feature is that Cabot can integrate with Google Calendar for on-call rotations. Cabot calls this feature Rota, which is a British term for a roster or rotation. This makes a lot of sense, and I wish other systems would take this idea further. Cabot doesn’t support anything more complex than primary and backup personnel, but there is certainly room for additional features. The docs say if you want something more advanced, you should look at a commercial option.
+
+#### StatsAgg
+
+[StatsAgg][18]? How did that make the list? Well, it’s not every day you come across a publishing company that has created an alerting platform. I think that deserves recognition. Of course, [Pearson][19] isn’t just a publishing company anymore; it has several web presences and a joint venture with [O’Reilly Media][20]. However, I still think of it as the company that published my schoolbooks and tests.
+
+StatsAgg isn’t just an alerting platform; it’s also a metrics aggregation platform. And it’s kind of like a proxy for other systems. It supports Graphite, StatsD, InfluxDB, and OpenTSDB as inputs, but it can also forward those metrics to their respective platforms. This is an interesting concept, but potentially risky as loads increase on a central service. However, if the StatsAgg infrastructure is robust enough, it can still produce alerts even when a backend storage platform has an outage.
+
+StatsAgg is written in Java and consists only of the main server and UI, which keeps complexity to a minimum. It can send alerts based on regular expression matching and is focused on alerting by service rather than host or instance. Its goal is to fill a void in the open source observability stack, and I think it does that quite well.
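Regex-driven, service-oriented alerting of the kind described above can be sketched like this. It is a generic illustration of the idea, not StatsAgg's actual configuration format; the service and metric names are made up:

```python
import re

def flag_services(metric_names, service_patterns):
    """Flag a service if any incoming metric name matches its regex.

    service_patterns: list of (service, regex) pairs; matching is done
    against metric names, so alerts group by service rather than by host.
    """
    flagged = []
    for service, pattern in service_patterns:
        if any(re.search(pattern, name) for name in metric_names):
            flagged.append(service)
    return flagged

metrics = ["web.app01.errors", "db.primary.replication_lag"]
patterns = [("web", r"^web\..*\.errors$"), ("cache", r"^cache\.")]
print(flag_services(metrics, patterns))
```

Matching on metric names rather than hosts means a check keeps working as instances come and go, which fits the service-level focus described above.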
+
+### Visualization tools
+
+#### Grafana
+
+Almost everyone knows about [Grafana][10], and many have used it. I have used it for years whenever I need a simple dashboard. The tool I used before was deprecated, and I was fairly distraught about that until Grafana made it okay. Grafana was gifted to us by Torkel Ödegaard. Like Cabot, Grafana was also created around Christmastime, and released in January 2014. It has come a long way in just a few years. It started life as a Kibana dashboarding system, and Torkel forked it into what became Grafana.
+
+Grafana’s sole focus is presenting monitoring data in a more usable and pleasing way. It can natively gather data from Graphite, Elasticsearch, OpenTSDB, Prometheus, and InfluxDB. There’s an Enterprise version that uses plugins for more data sources, but there’s no reason those other data source plugins couldn’t be created as open source, as the Grafana plugin ecosystem already offers many other data sources.
+
+What does Grafana do for me? It provides a central location for understanding my system. It is web-based, so anyone can access the information, although it can be restricted using different authentication methods. Grafana can provide knowledge at a glance using many different types of visualizations. However, it has started integrating alerting and other features that aren’t traditionally combined with visualizations.
+
+Now you can set alerts visually. That means you can look at a graph, maybe even one showing where an alert should have triggered due to some degradation of the system, click on the graph where you want the alert to trigger, and then tell Grafana where to send the alert. That’s a pretty powerful addition that won’t necessarily replace an alerting platform, but it can certainly help augment it by providing a different perspective on alerting criteria.
+
+Grafana has also introduced more collaboration features. Users have been able to share dashboards for a long time, meaning you don’t have to create your own dashboard for your [Kubernetes][21] cluster because there are several already available—with some maintained by Kubernetes developers and others by Grafana developers.
+
+The most significant addition around collaboration is annotations. Annotations allow a user to add context to part of a graph. Other users can then use this context to understand the system better. This is an invaluable tool when a team is in the middle of an incident and communication and common understanding are critical. Having all the information right where you’re already looking makes it much more likely that knowledge will be shared across the team quickly. It’s also a nice feature to use during blameless postmortems when the team is trying to understand how the failure occurred and learn more about their system.
+
+#### Vizceral
+
+Netflix created [Vizceral][22] to understand its traffic patterns better when performing a traffic failover. Unlike Grafana, which is a more general tool, Vizceral serves a very specific use case. Netflix no longer uses this tool internally and says it is no longer actively maintained, but it still updates the tool periodically. I highlight it here primarily to point out an interesting visualization mechanism and how it can help solve a problem. It’s worth running it in a demo environment just to better grasp the concepts and witness what’s possible with these systems.
+
+### What to read next
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/alerting-and-visualization-tools-sysadmins
+
+作者:[Dan Barker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/barkerd427
+[b]: https://github.com/lujun9972
+[1]: https://www.practicalmonitoring.com/
+[2]: https://developers.google.com/chart/interactive/docs/gallery
+[3]: https://libguides.libraries.claremont.edu/c.php?g=474417&p=3286401
+[4]: http://www.brendangregg.com/flamegraphs.html
+[5]: https://stackoverflow.com/
+[6]: https://superuser.com/
+[7]: http://bosun.org/
+[8]: https://prometheus.io/docs/alerting/alertmanager/
+[9]: https://bosun.org/scollector/
+[10]: https://grafana.com/
+[11]: https://bosun.org/notifications
+[12]: https://cabotapp.com/
+[13]: https://www.arachnys.com/
+[14]: https://en.wikipedia.org/wiki/Know_your_customer
+[15]: https://www.nagios.org/
+[16]: https://graphiteapp.org/
+[17]: https://jenkins.io/
+[18]: https://github.com/PearsonEducation/StatsAgg
+[19]: https://www.pearson.com/us/
+[20]: https://www.oreilly.com/
+[21]: https://opensource.com/resources/what-is-kubernetes
+[22]: https://github.com/Netflix/vizceral
diff --git a/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md
new file mode 100644
index 0000000000..6998661f23
--- /dev/null
+++ b/sources/tech/20181010 An introduction to using tcpdump at the Linux command line.md
@@ -0,0 +1,457 @@
+An introduction to using tcpdump at the Linux command line
+======
+
+This flexible, powerful command-line tool helps ease the pain of troubleshooting network issues.
+
+
+
+In my experience as a sysadmin, I have often found network connectivity issues challenging to troubleshoot. For those situations, tcpdump is a great ally.
+
+Tcpdump is a command line utility that allows you to capture and analyze network traffic going through your system. It is often used to help troubleshoot network issues, as well as a security tool.
+
+A powerful and versatile tool that includes many options and filters, tcpdump can be used in a variety of cases. Since it's a command line tool, it is ideal for running on remote servers or devices without a GUI, to collect data that can be analyzed later. It can also be launched in the background or as a scheduled job using tools like cron.
+
+In this article, we'll look at some of tcpdump's most common features.
+
+### 1\. Installation on Linux
+
+Tcpdump is included with several Linux distributions, so chances are, you already have it installed. Check if tcpdump is installed on your system with the following command:
+
+```
+$ which tcpdump
+/usr/sbin/tcpdump
+```
+
+If tcpdump is not installed, you can install it using your distribution's package manager. For example, on CentOS or Red Hat Enterprise Linux:
+
+```
+$ sudo yum install -y tcpdump
+```
+
+Tcpdump requires `libpcap`, which is a library for network packet capture. If it's not installed, it will be automatically added as a dependency.
+
+You're ready to start capturing some packets.
+
+### 2\. Capturing packets with tcpdump
+
+To capture packets for troubleshooting or analysis, tcpdump requires elevated permissions, so in the following examples most commands are prefixed with `sudo`.
+
+To begin, use the command `tcpdump -D` to see which interfaces are available for capture:
+
+```
+$ sudo tcpdump -D
+1.eth0
+2.virbr0
+3.eth1
+4.any (Pseudo-device that captures on all interfaces)
+5.lo [Loopback]
+```
+
+In the example above, you can see all the interfaces available on my machine. The special interface `any` allows capturing on any active interface.
+
+Let's use it to start capturing some packets. Capture all packets on any interface by running this command:
+
+```
+$ sudo tcpdump -i any
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+09:56:18.293641 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 3770820720:3770820916, ack 3503648727, win 309, options [nop,nop,TS val 76577898 ecr 510770929], length 196
+09:56:18.293794 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 196, win 391, options [nop,nop,TS val 510771017 ecr 76577898], length 0
+09:56:18.295058 IP rhel75.59883 > gateway.domain: 2486+ PTR? 1.64.168.192.in-addr.arpa. (43)
+09:56:18.310225 IP gateway.domain > rhel75.59883: 2486 NXDomain* 0/1/0 (102)
+09:56:18.312482 IP rhel75.49685 > gateway.domain: 34242+ PTR? 28.64.168.192.in-addr.arpa. (44)
+09:56:18.322425 IP gateway.domain > rhel75.49685: 34242 NXDomain* 0/1/0 (103)
+09:56:18.323164 IP rhel75.56631 > gateway.domain: 29904+ PTR? 1.122.168.192.in-addr.arpa. (44)
+09:56:18.323342 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 196:584, ack 1, win 309, options [nop,nop,TS val 76577928 ecr 510771017], length 388
+09:56:18.323563 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 584, win 411, options [nop,nop,TS val 510771047 ecr 76577928], length 0
+09:56:18.335569 IP gateway.domain > rhel75.56631: 29904 NXDomain* 0/1/0 (103)
+09:56:18.336429 IP rhel75.44007 > gateway.domain: 61677+ PTR? 98.122.168.192.in-addr.arpa. (45)
+09:56:18.336655 IP gateway.domain > rhel75.44007: 61677* 1/0/0 PTR rhel75. (65)
+09:56:18.337177 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 584:1644, ack 1, win 309, options [nop,nop,TS val 76577942 ecr 510771047], length 1060
+
+---- SKIPPING LONG OUTPUT -----
+
+09:56:19.342939 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 1752016, win 1444, options [nop,nop,TS val 510772067 ecr 76578948], length 0
+^C
+9003 packets captured
+9010 packets received by filter
+7 packets dropped by kernel
+$
+```
+
+Tcpdump continues to capture packets until it receives an interrupt signal. You can interrupt capturing by pressing `Ctrl+C`. As you can see in this example, `tcpdump` captured more than 9,000 packets. In this case, since I am connected to this server using `ssh`, tcpdump captured all these packets. To limit the number of packets captured and stop `tcpdump`, use the `-c` option:
+
+```
+$ sudo tcpdump -i any -c 5
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+11:21:30.242740 IP rhel75.localdomain.ssh > 192.168.64.1.56322: Flags [P.], seq 3772575680:3772575876, ack 3503651743, win 309, options [nop,nop,TS val 81689848 ecr 515883153], length 196
+11:21:30.242906 IP 192.168.64.1.56322 > rhel75.localdomain.ssh: Flags [.], ack 196, win 1443, options [nop,nop,TS val 515883235 ecr 81689848], length 0
+11:21:30.244442 IP rhel75.43634 > gateway.domain: 57680+ PTR? 1.64.168.192.in-addr.arpa. (43)
+11:21:30.244829 IP gateway.domain > rhel75.43634: 57680 NXDomain 0/0/0 (43)
+11:21:30.247048 IP rhel75.33696 > gateway.domain: 37429+ PTR? 28.64.168.192.in-addr.arpa. (44)
+5 packets captured
+12 packets received by filter
+0 packets dropped by kernel
+$
+```
+
+In this case, `tcpdump` stopped capturing automatically after capturing five packets. This is useful in different scenarios—for instance, if you're troubleshooting connectivity and capturing a few initial packets is enough. This is even more useful when we apply filters to capture specific packets (shown below).
+
+By default, tcpdump resolves IP addresses and ports into names, as shown in the previous example. When troubleshooting network issues, it is often easier to use the IP addresses and port numbers; disable name resolution by using the option `-n` and port resolution with `-nn`:
+
+```
+$ sudo tcpdump -i any -c5 -nn
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+23:56:24.292206 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 166198580:166198776, ack 2414541257, win 309, options [nop,nop,TS val 615664 ecr 540031155], length 196
+23:56:24.292357 IP 192.168.64.1.35110 > 192.168.64.28.22: Flags [.], ack 196, win 1377, options [nop,nop,TS val 540031229 ecr 615664], length 0
+23:56:24.292570 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 615664 ecr 540031229], length 372
+23:56:24.292655 IP 192.168.64.1.35110 > 192.168.64.28.22: Flags [.], ack 568, win 1400, options [nop,nop,TS val 540031229 ecr 615664], length 0
+23:56:24.292752 IP 192.168.64.28.22 > 192.168.64.1.35110: Flags [P.], seq 568:908, ack 1, win 309, options [nop,nop,TS val 615664 ecr 540031229], length 340
+5 packets captured
+6 packets received by filter
+0 packets dropped by kernel
+```
+
+As shown above, the capture output now displays the IP addresses and port numbers. This also prevents tcpdump from issuing DNS lookups, which helps to lower network traffic while troubleshooting network issues.
+
+Now that you're able to capture network packets, let's explore what this output means.
+
+### 3\. Understanding the output format
+
+Tcpdump is capable of capturing and decoding many different protocols, such as TCP, UDP, ICMP, and many more. While we can't cover all of them here, to help you get started, let's explore the TCP packet. You can find more details about the different protocol formats in tcpdump's [manual pages][1]. A typical TCP packet captured by tcpdump looks like this:
+
+```
+08:41:13.729687 IP 192.168.64.28.22 > 192.168.64.1.41916: Flags [P.], seq 196:568, ack 1, win 309, options [nop,nop,TS val 117964079 ecr 816509256], length 372
+```
+
+The fields may vary depending on the type of packet being sent, but this is the general format.
+
+The first field, `08:41:13.729687`, represents the timestamp of the received packet as per the local clock.
+
+Next, `IP` represents the network layer protocol—in this case, `IPv4`. For `IPv6` packets, the value is `IP6`.
+
+The next field, `192.168.64.28.22`, is the source IP address and port. This is followed by the destination IP address and port, represented by `192.168.64.1.41916`.
+
+After the source and destination, you can find the TCP Flags `Flags [P.]`. Typical values for this field include:
+
+| Value | Flag Type | Description |
+|-------| --------- | ----------------- |
+| S | SYN | Connection Start |
+| F | FIN | Connection Finish |
+| P | PUSH | Data push |
+| R | RST | Connection reset |
+| . | ACK | Acknowledgment |
+
+This field can also be a combination of these values, such as `[S.]` for a `SYN-ACK` packet.
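Using the table above, a captured flag string can be decoded mechanically. A small helper purely for illustration:

```python
# Flag characters as printed by tcpdump, per the table above.
FLAG_NAMES = {"S": "SYN", "F": "FIN", "P": "PUSH", "R": "RST", ".": "ACK"}

def decode_flags(flags):
    """Translate a tcpdump flag string like 'S.' into the flag names it combines."""
    return [FLAG_NAMES[ch] for ch in flags]

print(decode_flags("S."))  # a SYN-ACK packet
print(decode_flags("P."))  # a data push carrying an acknowledgment
```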
+
+Next is the sequence number of the data contained in the packet. For the first packet captured, this is an absolute number. Subsequent packets use a relative number to make it easier to follow. In this example, the sequence is `seq 196:568`, which means this packet contains bytes 196 to 568 of this flow.
+
+This is followed by the Ack Number: `ack 1`. In this case, it is 1 since this is the side sending data. For the side receiving data, this field represents the next expected byte (data) on this flow. For example, the Ack number for the next packet in this flow would be 568.
+
+The next field is the window size `win 309`, which represents the number of bytes available in the receiving buffer, followed by TCP options such as the MSS (Maximum Segment Size) or Window Scale. For details about TCP protocol options, consult [Transmission Control Protocol (TCP) Parameters][2].
+
+Finally, we have the packet length, `length 372`, which represents the length, in bytes, of the payload data. The length is the difference between the last and first bytes in the sequence number.
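These relationships between the sequence numbers, the payload length, and the next Ack can be verified with simple shell arithmetic, using the numbers from the sample packet above:

```shell
first=196    # first payload byte in this packet (from "seq 196:568")
last=568     # one past the last payload byte

# The payload length is the difference between the two sequence numbers.
length=$(( last - first ))
echo "length: $length"      # 372, matching "length 372" in the output

# The receiver acknowledges the next byte it expects on this flow.
next_ack=$(( first + length ))
echo "next ack: $next_ack"  # 568
```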
+
+Now let's learn how to filter packets to narrow down results and make it easier to troubleshoot specific issues.
+
+### 4\. Filtering packets
+
+As mentioned above, tcpdump can capture too many packets, some of which are not even related to the issue you're troubleshooting. For example, if you're troubleshooting a connectivity issue with a web server, you're not interested in the SSH traffic, so removing the SSH packets from the output makes it easier to work on the real issue.
+
+One of tcpdump's most powerful features is its ability to filter the captured packets using a variety of parameters, such as source and destination IP addresses, ports, protocols, etc. Let's look at some of the most common ones.
+
+#### Protocol
+
+To filter packets based on protocol, specify the protocol on the command line. For example, to capture only ICMP packets, use this command:
+
+```
+$ sudo tcpdump -i any -c5 icmp
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+```
+
+In a different terminal, try to ping another machine:
+
+```
+$ ping opensource.com
+PING opensource.com (54.204.39.132) 56(84) bytes of data.
+64 bytes from ec2-54-204-39-132.compute-1.amazonaws.com (54.204.39.132): icmp_seq=1 ttl=47 time=39.6 ms
+```
+
+Back in the tcpdump capture, notice that tcpdump captures and displays only the ICMP-related packets. In this case, tcpdump is not displaying name resolution packets that were generated when resolving the name `opensource.com`:
+
+```
+09:34:20.136766 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 1, length 64
+09:34:20.176402 IP ec2-54-204-39-132.compute-1.amazonaws.com > rhel75: ICMP echo reply, id 20361, seq 1, length 64
+09:34:21.140230 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 2, length 64
+09:34:21.180020 IP ec2-54-204-39-132.compute-1.amazonaws.com > rhel75: ICMP echo reply, id 20361, seq 2, length 64
+09:34:22.141777 IP rhel75 > ec2-54-204-39-132.compute-1.amazonaws.com: ICMP echo request, id 20361, seq 3, length 64
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+#### Host
+
+Limit capture to only packets related to a specific host by using the `host` filter:
+
+```
+$ sudo tcpdump -i any -c5 -nn host 54.204.39.132
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+09:54:20.042023 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [S], seq 1375157070, win 29200, options [mss 1460,sackOK,TS val 122350391 ecr 0,nop,wscale 7], length 0
+09:54:20.088127 IP 54.204.39.132.80 > 192.168.122.98.39326: Flags [S.], seq 1935542841, ack 1375157071, win 28960, options [mss 1460,sackOK,TS val 522713542 ecr 122350391,nop,wscale 9], length 0
+09:54:20.088204 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 122350437 ecr 522713542], length 0
+09:54:20.088734 IP 192.168.122.98.39326 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 122350438 ecr 522713542], length 112: HTTP: GET / HTTP/1.1
+09:54:20.129733 IP 54.204.39.132.80 > 192.168.122.98.39326: Flags [.], ack 113, win 57, options [nop,nop,TS val 522713552 ecr 122350438], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+In this example, tcpdump captures and displays only packets to and from host `54.204.39.132`.
+
+#### Port
+
+To filter packets based on the desired service or port, use the `port` filter. For example, capture packets related to a web (HTTP) service by using this command:
+
+```
+$ sudo tcpdump -i any -c5 -nn port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+09:58:28.790548 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [S], seq 1745665159, win 29200, options [mss 1460,sackOK,TS val 122599140 ecr 0,nop,wscale 7], length 0
+09:58:28.834026 IP 54.204.39.132.80 > 192.168.122.98.39330: Flags [S.], seq 4063583040, ack 1745665160, win 28960, options [mss 1460,sackOK,TS val 522775728 ecr 122599140,nop,wscale 9], length 0
+09:58:28.834093 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 122599183 ecr 522775728], length 0
+09:58:28.834588 IP 192.168.122.98.39330 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 122599184 ecr 522775728], length 112: HTTP: GET / HTTP/1.1
+09:58:28.878445 IP 54.204.39.132.80 > 192.168.122.98.39330: Flags [.], ack 113, win 57, options [nop,nop,TS val 522775739 ecr 122599184], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+#### Source IP/hostname
+
+You can also filter packets based on the source or destination IP Address or hostname. For example, to capture packets from host `192.168.122.98`:
+
+```
+$ sudo tcpdump -i any -c5 -nn src 192.168.122.98
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:02:15.220824 IP 192.168.122.98.39436 > 192.168.122.1.53: 59332+ A? opensource.com. (32)
+10:02:15.220862 IP 192.168.122.98.39436 > 192.168.122.1.53: 20749+ AAAA? opensource.com. (32)
+10:02:15.364062 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [S], seq 1108640533, win 29200, options [mss 1460,sackOK,TS val 122825713 ecr 0,nop,wscale 7], length 0
+10:02:15.409229 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [.], ack 669337581, win 229, options [nop,nop,TS val 122825758 ecr 522832372], length 0
+10:02:15.409667 IP 192.168.122.98.39334 > 54.204.39.132.80: Flags [P.], seq 0:112, ack 1, win 229, options [nop,nop,TS val 122825759 ecr 522832372], length 112: HTTP: GET / HTTP/1.1
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+Notice that tcpdump captured packets with source IP address `192.168.122.98` for multiple services such as name resolution (port 53) and HTTP (port 80). The response packets are not displayed since their source IP is different.
+
+Conversely, you can use the `dst` filter to filter by destination IP/hostname:
+
+```
+$ sudo tcpdump -i any -c5 -nn dst 192.168.122.98
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:05:03.572931 IP 192.168.122.1.53 > 192.168.122.98.47049: 2248 1/0/0 A 54.204.39.132 (48)
+10:05:03.572944 IP 192.168.122.1.53 > 192.168.122.98.47049: 33770 0/0/0 (32)
+10:05:03.621833 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [S.], seq 3474204576, ack 3256851264, win 28960, options [mss 1460,sackOK,TS val 522874425 ecr 122993922,nop,wscale 9], length 0
+10:05:03.667767 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [.], ack 113, win 57, options [nop,nop,TS val 522874436 ecr 122993972], length 0
+10:05:03.672221 IP 54.204.39.132.80 > 192.168.122.98.39338: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 522874437 ecr 122993972], length 642: HTTP: HTTP/1.1 302 Found
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+#### Complex expressions
+
+You can also combine filters by using the logical operators `and` and `or` to create more complex expressions. For example, to filter packets from source IP address `192.168.122.98` and service HTTP only, use this command:
+
+```
+$ sudo tcpdump -i any -c5 -nn src 192.168.122.98 and port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:08:00.472696 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [S], seq 2712685325, win 29200, options [mss 1460,sackOK,TS val 123170822 ecr 0,nop,wscale 7], length 0
+10:08:00.516118 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [.], ack 268723504, win 229, options [nop,nop,TS val 123170865 ecr 522918648], length 0
+10:08:00.516583 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [P.], seq 0:112, ack 1, win 229, options [nop,nop,TS val 123170866 ecr 522918648], length 112: HTTP: GET / HTTP/1.1
+10:08:00.567044 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 123170916 ecr 522918661], length 0
+10:08:00.788153 IP 192.168.122.98.39342 > 54.204.39.132.80: Flags [F.], seq 112, ack 643, win 239, options [nop,nop,TS val 123171137 ecr 522918661], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+You can create more complex expressions by grouping filters with parentheses. In this case, enclose the entire filter expression in quotation marks to prevent the shell from interpreting the parentheses itself:
+
+```
+$ sudo tcpdump -i any -c5 -nn "port 80 and (src 192.168.122.98 or src 54.204.39.132)"
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10:10:37.602214 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [S], seq 871108679, win 29200, options [mss 1460,sackOK,TS val 123327951 ecr 0,nop,wscale 7], length 0
+10:10:37.650651 IP 54.204.39.132.80 > 192.168.122.98.39346: Flags [S.], seq 854753193, ack 871108680, win 28960, options [mss 1460,sackOK,TS val 522957932 ecr 123327951,nop,wscale 9], length 0
+10:10:37.650708 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 123328000 ecr 522957932], length 0
+10:10:37.651097 IP 192.168.122.98.39346 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 123328000 ecr 522957932], length 112: HTTP: GET / HTTP/1.1
+10:10:37.692900 IP 54.204.39.132.80 > 192.168.122.98.39346: Flags [.], ack 113, win 57, options [nop,nop,TS val 522957942 ecr 123328000], length 0
+5 packets captured
+5 packets received by filter
+0 packets dropped by kernel
+```
+
+In this example, we're filtering packets for HTTP service only (port 80) and source IP addresses `192.168.122.98` or `54.204.39.132`. This is a quick way of examining both sides of the same flow.
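To see why the quotation marks are needed, you can let the shell parse an unquoted expression; this sketch uses `echo` as a stand-in for tcpdump so it runs without root privileges:

```shell
# Unquoted: the shell tries to interpret the parentheses itself and
# fails with a syntax error before the command ever runs.
status_unquoted=0
sh -c 'echo port 80 and (src 192.168.122.98)' 2>/dev/null || status_unquoted=$?

# Quoted: the whole filter reaches the command as ordinary arguments.
status_quoted=0
sh -c 'echo "port 80 and (src 192.168.122.98)"' >/dev/null || status_quoted=$?

echo "unquoted: $status_unquoted, quoted: $status_quoted"
```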
+
+### 5\. Checking packet content
+
+In the previous examples, we checked only the packets' headers for information such as source, destination, ports, etc. Sometimes this is all we need to troubleshoot network connectivity issues. Sometimes, however, we need to inspect the content of the packet to ensure that the message we're sending contains what we need or that we received the expected response. To see the packet content, tcpdump provides two additional flags: `-X` to print the content in both hex and ASCII, and `-A` to print the content in ASCII only.
+
+For example, inspect the HTTP content of a web request like this:
+
+```
+$ sudo tcpdump -i any -c10 -nn -A port 80
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+13:02:14.871803 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [S], seq 2546602048, win 29200, options [mss 1460,sackOK,TS val 133625221 ecr 0,nop,wscale 7], length 0
+E..<..@.@.....zb6.'....P...@......r............
+............................
+13:02:14.910734 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [S.], seq 1877348646, ack 2546602049, win 28960, options [mss 1460,sackOK,TS val 525532247 ecr 133625221,nop,wscale 9], length 0
+E..<..@./..a6.'...zb.P..o..&...A..q a..........
+.R.W....... ................
+13:02:14.910832 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 133625260 ecr 525532247], length 0
+E..4..@.@.....zb6.'....P...Ao..'...........
+.....R.W................
+13:02:14.911808 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 133625261 ecr 525532247], length 112: HTTP: GET / HTTP/1.1
+E.....@.@..1..zb6.'....P...Ao..'...........
+.....R.WGET / HTTP/1.1
+User-Agent: Wget/1.14 (linux-gnu)
+Accept: */*
+Host: opensource.com
+Connection: Keep-Alive
+
+................
+13:02:14.951199 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [.], ack 113, win 57, options [nop,nop,TS val 525532257 ecr 133625261], length 0
+E..4.F@./.."6.'...zb.P..o..'.......9.2.....
+.R.a....................
+13:02:14.955030 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 525532258 ecr 133625261], length 642: HTTP: HTTP/1.1 302 Found
+E....G@./...6.'...zb.P..o..'.......9.......
+.R.b....HTTP/1.1 302 Found
+Server: nginx
+Date: Sun, 23 Sep 2018 17:02:14 GMT
+Content-Type: text/html; charset=iso-8859-1
+Content-Length: 207
+X-Content-Type-Options: nosniff
+Location: https://opensource.com/
+Cache-Control: max-age=1209600
+Expires: Sun, 07 Oct 2018 17:02:14 GMT
+X-Request-ID: v-6baa3acc-bf52-11e8-9195-22000ab8cf2d
+X-Varnish: 632951979
+Age: 0
+Via: 1.1 varnish (Varnish/5.2)
+X-Cache: MISS
+Connection: keep-alive
+
+
+
+302 Found
+
+Found
+The document has moved here.
+
+................
+13:02:14.955083 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 133625304 ecr 525532258], length 0
+E..4..@.@.....zb6.'....P....o..............
+.....R.b................
+13:02:15.195524 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [F.], seq 113, ack 643, win 239, options [nop,nop,TS val 133625545 ecr 525532258], length 0
+E..4..@.@.....zb6.'....P....o..............
+.....R.b................
+13:02:15.236592 IP 54.204.39.132.80 > 192.168.122.98.39366: Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 525532329 ecr 133625545], length 0
+E..4.H@./.. 6.'...zb.P..o..........9.I.....
+.R......................
+13:02:15.236656 IP 192.168.122.98.39366 > 54.204.39.132.80: Flags [.], ack 644, win 239, options [nop,nop,TS val 133625586 ecr 525532329], length 0
+E..4..@.@.....zb6.'....P....o..............
+.....R..................
+10 packets captured
+10 packets received by filter
+0 packets dropped by kernel
+```
+
+This is helpful for troubleshooting issues with API calls, assuming the calls are using plain HTTP. For encrypted connections, this output is less useful.
+
+### 6\. Saving captures to a file
+
+Another useful feature provided by tcpdump is the ability to save the capture to a file so you can analyze the results later. This allows you to capture packets in batch mode overnight, for example, and verify the results in the morning. It also helps when there are too many packets to analyze in real time, since the output can scroll by too fast to follow.
+
+To save packets to a file instead of displaying them on screen, use the option `-w`:
+
+```
+$ sudo tcpdump -i any -c10 -nn -w webserver.pcap port 80
+[sudo] password for ricardo:
+tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
+10 packets captured
+10 packets received by filter
+0 packets dropped by kernel
+```
+
+This command saves the output in a file named `webserver.pcap`. The `.pcap` extension stands for "packet capture" and is the convention for this file format.
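As a side note, pcap files are easy to recognize: the file starts with a global header whose first four bytes are the magic number `0xa1b2c3d4`, written in the byte order of the capturing machine (so the bytes `d4 c3 b2 a1` on little-endian hardware). A small sketch, independent of tcpdump:

```shell
# Write the 4 magic bytes, little-endian, as a capture tool on a
# little-endian machine would (octal escapes for printf portability).
printf '\324\303\262\241' > magic.bin

# Dump them back as hex.
magic=$(od -An -tx1 magic.bin | tr -d ' \n')
echo "$magic"    # d4c3b2a1
```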
+
+As shown in this example, nothing gets displayed on-screen, and the capture finishes after capturing 10 packets, as per the option `-c10`. If you want some feedback to ensure packets are being captured, use the option `-v`.
+
+Tcpdump creates the file in binary format, so you cannot simply open it with a text editor. To read the contents of the file, execute tcpdump with the `-r` option:
+
+```
+$ tcpdump -nn -r webserver.pcap
+reading from file webserver.pcap, link-type LINUX_SLL (Linux cooked)
+13:36:57.679494 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [S], seq 3709732619, win 29200, options [mss 1460,sackOK,TS val 135708029 ecr 0,nop,wscale 7], length 0
+13:36:57.718932 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [S.], seq 1999298316, ack 3709732620, win 28960, options [mss 1460,sackOK,TS val 526052949 ecr 135708029,nop,wscale 9], length 0
+13:36:57.719005 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 135708068 ecr 526052949], length 0
+13:36:57.719186 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [P.], seq 1:113, ack 1, win 229, options [nop,nop,TS val 135708068 ecr 526052949], length 112: HTTP: GET / HTTP/1.1
+13:36:57.756979 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [.], ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 0
+13:36:57.760122 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 642: HTTP: HTTP/1.1 302 Found
+13:36:57.760182 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 643, win 239, options [nop,nop,TS val 135708109 ecr 526052959], length 0
+13:36:57.977602 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [F.], seq 113, ack 643, win 239, options [nop,nop,TS val 135708327 ecr 526052959], length 0
+13:36:58.022089 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 526053025 ecr 135708327], length 0
+13:36:58.022132 IP 192.168.122.98.39378 > 54.204.39.132.80: Flags [.], ack 644, win 239, options [nop,nop,TS val 135708371 ecr 526053025], length 0
+$
+```
+
+Since you're no longer capturing the packets directly from the network interface, `sudo` is not required to read the file.
+
+You can also use any of the filters we've discussed to filter the content from the file, just as you would with real-time data. For example, inspect the packets in the capture file from source IP address `54.204.39.132` by executing this command:
+
+```
+$ tcpdump -nn -r webserver.pcap src 54.204.39.132
+reading from file webserver.pcap, link-type LINUX_SLL (Linux cooked)
+13:36:57.718932 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [S.], seq 1999298316, ack 3709732620, win 28960, options [mss 1460,sackOK,TS val 526052949 ecr 135708029,nop,wscale 9], length 0
+13:36:57.756979 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [.], ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 0
+13:36:57.760122 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [P.], seq 1:643, ack 113, win 57, options [nop,nop,TS val 526052959 ecr 135708068], length 642: HTTP: HTTP/1.1 302 Found
+13:36:58.022089 IP 54.204.39.132.80 > 192.168.122.98.39378: Flags [F.], seq 643, ack 114, win 57, options [nop,nop,TS val 526053025 ecr 135708327], length 0
+```
+
+### What's next?
+
+These basic features of tcpdump will help you get started with this powerful and versatile tool. To learn more, consult the [tcpdump website][3] and [man pages][4].
+
+The tcpdump command line interface provides great flexibility for capturing and analyzing network traffic. If you need a graphical tool to understand more complex flows, look at [Wireshark][5].
+
+One benefit of Wireshark is that it can read `.pcap` files captured by tcpdump. You can use tcpdump to capture packets in a remote machine that does not have a GUI and analyze the result file with Wireshark, but that is a topic for another day.
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/10/introduction-tcpdump
+
+作者:[Ricardo Gerardi][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/rgerardi
+[b]: https://github.com/lujun9972
+[1]: http://www.tcpdump.org/manpages/tcpdump.1.html#lbAG
+[2]: https://www.iana.org/assignments/tcp-parameters/tcp-parameters.xhtml
+[3]: http://www.tcpdump.org/#
+[4]: http://www.tcpdump.org/manpages/tcpdump.1.html
+[5]: https://www.wireshark.org/
diff --git a/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md b/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md
new file mode 100644
index 0000000000..1cefdaaa4f
--- /dev/null
+++ b/sources/tech/20181010 Cloc - Count The Lines Of Source Code In Many Programming Languages.md
@@ -0,0 +1,197 @@
+Cloc – Count The Lines Of Source Code In Many Programming Languages
+======
+
+
+
+As a developer, you may need to share the progress and statistics of your code with your boss or colleagues. Your boss might want to analyze the code and give additional input. In such cases, there are a few programs available, as far as I know, to analyze source code. One such program is [**Ohcount**][1]. Today, I came across yet another similar utility, namely **“Cloc”**. Using Cloc, you can easily count the lines of source code in several programming languages. It counts the blank lines, comment lines, and physical lines of source code, and displays the result in a neat tabular format. Cloc is a free, open source, cross-platform utility written entirely in the **Perl** programming language.
+
+### Features
+
+Cloc ships with numerous advantages including the following:
+
+ * Easy to install/use. Requires no dependencies.
+ * Portable
+ * It can produce results in a variety of formats, such as plain text, SQL, JSON, XML, YAML, comma separated values.
+ * Can count your git commits.
+ * Counts code in directories and sub-directories.
+ * Counts code within compressed archives like tarballs, Zip files, Java .ear files, etc.
+ * Open source and cross platform.
+
+
+
+### Installing Cloc
+
+The Cloc utility is available in the default repositories of most Unix-like operating systems. So, you can install it using the default package manager as shown below.
+
+On Arch Linux and its variants:
+
+```
+$ sudo pacman -S cloc
+
+```
+
+On Debian, Ubuntu:
+
+```
+$ sudo apt-get install cloc
+
+```
+
+On CentOS, Red Hat, Scientific Linux:
+
+```
+$ sudo yum install cloc
+
+```
+
+On Fedora:
+
+```
+$ sudo dnf install cloc
+
+```
+
+On FreeBSD:
+
+```
+$ sudo pkg install cloc
+
+```
+
+It can also be installed using a third-party package manager such as [**NPM**][2]:
+
+```
+$ npm install -g cloc
+
+```
+
+### Count The Lines Of Source Code In Many Programming Languages
+
+Let us start with a simple example. I have a “hello world” program written in C in my current working directory.
+
+```
+$ cat hello.c
+#include <stdio.h>
+int main()
+{
+ // printf() displays the string inside quotation
+ printf("Hello, World!");
+ return 0;
+}
+
+```
+
+To count the lines of code in the hello.c program, simply run:
+
+```
+$ cloc hello.c
+
+```
+
+Sample output:
+
+
+
+The first column specifies the **names of the programming languages the source code is written in**. As you can see in the above output, the “hello world” program is written in the **C** programming language.
+
+The second column displays the **number of files in each language**. So, our code consists of **1 file** in total.
+
+The third column displays the **total number of blank lines**. We have zero blank lines in our code.
+
+The fourth column displays the **number of comment lines**.
+
+And the fifth and final column displays the **total physical lines of the given source code**.
+
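If cloc isn't handy, the per-column numbers for a simple file like this can be approximated with standard tools. The sketch below recreates the example file and counts blank lines, `//`-style comment lines, and the remaining code lines; note that cloc itself is much smarter (it understands `/* */` blocks, mixed lines, and dozens of languages):

```shell
# Recreate the example program.
cat > hello.c <<'EOF'
#include <stdio.h>
int main()
{
    // printf() displays the string inside quotation
    printf("Hello, World!");
    return 0;
}
EOF

blank=$(grep -c '^[[:space:]]*$' hello.c || true)       # blank lines
comment=$(grep -c '^[[:space:]]*//' hello.c || true)    # // comment lines
total=$(wc -l < hello.c | tr -d ' ')                    # all lines
code=$(( total - blank - comment ))
echo "blank=$blank comment=$comment code=$code"
```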
+It is just a six-line program, so counting the lines in it is not a big deal. But what about a big source code archive? Have a look at the following example:
+
+```
+$ cloc file.tar.gz
+
+```
+
+Sample output:
+
+
+
+As the above output shows, it would be quite difficult to find the exact line count manually, but Cloc displays the result in seconds in a nice tabular format. You can view the grand total of each column at the end, which is quite handy when it comes to analyzing the source code of a program.
+
+Cloc counts not only individual source code files, but also files inside directories and sub-directories, archives, and even specific git commits.
+
+**Count the lines of codes in a directory:**
+
+```
+$ cloc dir/
+
+```
+
+![][4]
+
+**Sub-directory:**
+
+```
+$ cloc dir/cloc/tests
+
+```
+
+![][5]
+
+**Count the lines of codes in archive file:**
+
+```
+$ cloc archive.zip
+
+```
+
+![][6]
+
+You can also count the lines of code in a git repository at a specific commit, like below.
+
+```
+$ git clone https://github.com/AlDanial/cloc.git
+
+$ cd cloc
+
+$ cloc 157d706
+
+```
+
+![][7]
+
+Cloc can recognize several programming languages. To view the complete list of recognized languages, run:
+
+```
+$ cloc --show-lang
+
+```
+
+For more details, refer to the help section:
+
+```
+$ cloc --help
+
+```
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/cloc-count-the-lines-of-source-code-in-many-programming-languages/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/ohcount-the-source-code-line-counter-and-analyzer/
+[2]: https://www.ostechnix.com/install-node-js-linux/
+[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-2-1.png
+[5]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-4.png
+[6]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-3.png
+[7]: http://www.ostechnix.com/wp-content/uploads/2018/10/cloc-5.png
diff --git a/translated/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md b/translated/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md
new file mode 100644
index 0000000000..a9ece78ef7
--- /dev/null
+++ b/translated/talk/20180915 Linux vs Mac- 7 Reasons Why Linux is a Better Choice than Mac.md
@@ -0,0 +1,131 @@
+Linux vs Mac: Linux 比 Mac 好的七个原因
+======
+最近我们谈论了一些[为什么 Linux 比 Windows 好][1]的原因。毫无疑问, Linux 是个非常优秀的平台。但是它和其他的操作系统一样也会有缺点。对于某些专门的领域,像是游戏, Windows 当然更好。 而对于视频编辑等任务, Mac 系统可能更为方便。这一切都取决于你的爱好,以及你想用你的系统做些什么。在这篇文章中,我们将会介绍一些 Linux 相对于 Mac 更好的一些地方。
+
+如果你已经在用 Mac 或者打算买一台 Mac 电脑,我们建议你仔细考虑一下,看看是改为使用 Linux 还是继续使用 Mac 。
+
+### Linux 比 Mac 好的 7 个原因
+
+![Linux vs Mac: 为什么 Linux 更好][2]
+
+Linux 和 macOS 都是类 Unix 操作系统,并且都支持 Unix 命令行、 bash 和其他一些命令行工具,相比于 Windows ,他们所支持的应用和游戏比较少。但缺点也仅仅如此。
+
+平面设计师和视频剪辑师更加倾向于使用 Mac 系统,而 Linux 更加适合做开发、系统管理、运维的工程师。
+
+那要不要使用 Linux 呢,为什么要选择 Linux 呢?下面是根据实际经验和理性分析给出的一些建议。
+
+#### 1\. 价格
+
+![Linux vs Mac: 为什么 Linux 更好][3]
+
+假设你只是需要浏览文件、看电影、下载图片、写文档、制作报表或者做一些类似的工作,并且你想要一个更加安全的系统。
+
+那在这种情况下,你觉得花费几百块买个系统完成这项工作,或者花费更多直接买个 Macbook 划算吗?当然,最终的决定权还是在你。
+
+买个装好 Mac 系统的电脑还是买个便宜的电脑,然后自己装上免费的 Linux 系统,这个要看你自己的偏好。就我个人而言,除了音视频剪辑创作之外,Linux 都非常好用,而对于音视频方面,我更倾向于使用 Final Cut Pro (专业的视频编辑软件) 和 Logic Pro X (专业的音乐制作软件)(这两款软件都是苹果公司推出的)。
+
+#### 2\. 硬件支持
+
+![Linux vs Mac: 为什么 Linux 更好][4]
+
+Linux 支持多种平台。无论你的电脑配置如何,你都可以在上面安装 Linux,无论性能好或者差,Linux 都可以运行。[即使你的电脑已经使用很久了,你仍然可以通过选择安装合适的发行版让 Linux 在你的电脑上流畅地运行][5]。
+
+而 Mac 不同,它是苹果机专用系统。如果你希望买个便宜的电脑,然后自己装上 Mac 系统,这几乎是不可能的。一般来说 Mac 都是和苹果设备连在一起的。
+
+这是[在非苹果系统上安装 Mac OS 的教程][6]. 这里面需要用到的专业技术以及可能遇到的一些问题将会花费你许多时间,你需要想好这样做是否值得。
+
+总之,Linux 所支持的硬件平台很广泛,而 MacOS 相对而言则非常少。
+
+#### 3\. 安全性
+
+![Linux vs Mac: 为什么 Linux 更好][7]
+
+很多人都说 ios 和 Mac 是非常安全的平台。的确,相比于 Windows ,它确实比较安全,可并不一定有 Linux 安全。
+
+我不是在危言耸听。Mac 系统上也有不少恶意软件和广告,并且[数量与日俱增][8]。我认识一些不太懂技术的用户,使用着非常缓慢的 Mac 电脑并且为此苦苦挣扎。一项快速调查显示,[浏览器恶意劫持软件][9]是罪魁祸首。
+
+从来没有绝对安全的操作系统,Linux 也不例外。 Linux 也有漏洞,但是 Linux 发行版提供的及时更新弥补了这些漏洞。另外,到目前为止在 Linux 上还没有自动运行的病毒或浏览器劫持恶意软件的案例发生。
+
+这可能也是一个你应该选择 Linux 而不是 Mac 的原因。
+
+#### 4\. 可定制性与灵活性
+
+![Linux vs Mac: 为什么 Linux 更好][10]
+
+如果你有不喜欢的东西,自己定制或者修改它都行。
+
+举个例子,如果你不喜欢 Ubuntu 18.04.1 的 [Gnome 桌面环境][11],你可以换成 [KDE Plasma][11]。 你也可以尝试一些 [Gnome 扩展][12]丰富你的桌面选择。这种灵活性和可定制性在 Mac OS 是不可能有的。
+
+除此之外你还可以根据需要修改一些操作系统的代码(但是可能需要一些专业知识)以创造出适合你的系统。这个在 Mac OS 上可以做吗?
+
+另外你还可以根据需要从一系列的 Linux 发行版中进行选择。比如说,如果你喜欢 Mac OS 上的工作流,[Elementary OS][13] 可能是个不错的选择。你想在你的旧电脑上装上一个轻量级的 Linux 发行版系统吗?这里是一个[轻量级 Linux 发行版列表][5]。相比较而言, Mac OS 缺乏这种灵活性。
+
+#### 5\. 使用 Linux 有助于你的职业生涯 [针对 IT 行业和科学领域的学生]
+
+![Linux vs Mac: 为什么 Linux 更好][14]
+
+对于 IT 领域的学生和求职者而言,这是有争议的但是也是有一定的帮助的。使用 Linux 并不会让你成为一个优秀的人,也不一定能让你得到任何与 IT 相关的工作。
+
+但是当你开始使用 Linux 并且开始探索如何使用的时候,你将会获得非常多的经验。作为一名技术人员,你迟早会接触终端,学习通过命令行实现文件系统管理以及应用程序安装。你可能不会知道这些都是一些 IT 公司的新职员需要培训的内容。
+
+除此之外,Linux 在就业市场上还有很大的发展空间。 Linux 相关的技术有很多( Cloud 、 Kubernetes 、Sysadmin 等),您可以学习,获得证书并获得一份相关的高薪的工作。要学习这些,你必须使用 Linux 。
+
+#### 6\. 可靠
+
+![Linux vs Mac: 为什么 Linux 更好][15]
+
+想想为什么服务器上用的都是 Linux 系统,当然是因为它可靠。
+
+但是它为什么可靠呢,相比于 Mac OS ,它的可靠体现在什么方面呢?
+
+答案很简单——给用户更多的控制权,同时提供更好的安全性。在 Mac OS 上,你并不能完全控制它,这样做是为了让操作变得更容易,同时提高你的用户体验。使用 Linux ,你可以做任何你想做的事情——这可能会导致(对某些人来说)糟糕的用户体验——但它确实使其更可靠。
+
+#### 7\. 开源
+
+![Linux vs Mac: 为什么 Linux 更好][16]
+
+开源并不是每个人都关心的。但对我来说,Linux 最重要的优势在于它的开源特性。下面讨论的大多数观点都是开源软件的直接优势。
+
+简单解释一下,如果是开源软件,你可以自己查看或者修改它。但对 Mac 来说,苹果拥有独家控制权。即使你有足够的技术知识,也无法查看 Mac OS 的源代码。
+
+形象点说,Mac 驱动的系统可以让你得到一辆车,但缺点是你不能打开引擎盖看里面是什么。那可能非常糟糕!
+
+如果你想深入了解开源软件的优势,可以在 OpenSource.com 上浏览一下 [Ben Balter 的文章][17]。
+
+### 总结
+
+现在你应该知道为什么 Linux 比 Mac 好了吧,你觉得呢?上面的这些原因可以说服你选择 Linux 吗?如果不行的话那又是为什么呢?
+
+在下方评论让我们知道你的想法。
+
+Note: 这里的图片是以企鹅俱乐部为原型的。
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/linux-vs-mac/
+
+作者:[Ankush Das][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[Ryze-Borgia](https://github.com/Ryze-Borgia)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://itsfoss.com/author/ankush/
+[1]: https://itsfoss.com/linux-better-than-windows/
+[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/Linux-vs-mac-featured.png
+[3]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-1.jpeg
+[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-4.jpeg
+[5]: https://itsfoss.com/lightweight-linux-beginners/
+[6]: https://hackintosh.com/
+[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-2.jpeg
+[8]: https://www.computerworld.com/article/3262225/apple-mac/warning-as-mac-malware-exploits-climb-270.html
+[9]: https://www.imore.com/how-to-remove-browser-hijack
+[10]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-3.jpeg
+[11]: https://www.gnome.org/
+[12]: https://itsfoss.com/best-gnome-extensions/
+[13]: https://elementary.io/
+[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-5.jpeg
+[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-6.jpeg
+[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/09/linux-vs-mac-7.jpeg
+[17]: https://opensource.com/life/15/12/why-open-source
diff --git a/translated/tech/20180824 What Stable Kernel Should I Use.md b/translated/tech/20180824 What Stable Kernel Should I Use.md
deleted file mode 100644
index 7a8d330a77..0000000000
--- a/translated/tech/20180824 What Stable Kernel Should I Use.md
+++ /dev/null
@@ -1,139 +0,0 @@
-我应该使用哪些稳定版内核?
-======
-很多人都问我这样的问题,在他们的产品/设备/笔记本/服务器等上面应该使用什么样的稳定版内核。一直以来,尤其是那些现在已经延长支持时间的内核,都是由我和其他人提供支持,因此,给出这个问题的答案并不是件容易的事情。因此这篇文章我将尝试去给出我在这个问题上的看法。当然,你可以任意选用任何一个你想去使用的内核版本,这里只是我的建议。
-
-和以前一样,在这里给出的这些看法只代表我个人的意见。
-
-### 可选择的内核有哪些
-
-下面列出了我建议你应该去使用的内核的列表,从最好的到最差的都有。我在下面将详细介绍,但是如果你只想得到一个结论,它就是你想要的:
-
-建议你使用的内核的分级,从最佳的方案到最差的方案如下:
-
- * 你最喜欢的 Linux 发行版支持的内核
- * 最新的稳定版
- * 最新的 LTS 发行版
- * 仍然处于维护状态的老的 LTS 发行版
-
-
-
-绝对不要去使用的内核:
-
- * 不再维护的内核发行版
-
-
-
-给上面的列表给出具体的数字,今天是 2018 年 8 月 24 日,kernel.org 页面上可以看到是这样:
-
-![][1]
-
-因此,基于上面的列表,那它应该是:
-
- * 4.18.5 是最新的稳定版
- * 4.14.67 是最新的 LTS 发行版
- * 4.9.124、4.4.152、以及 3.16.57 是仍然处于维护状态的老的 LTS 发行版
- * 4.17.19 和 3.18.119 是过去 60 天内 “生命周期终止” 的内核版本,它们仍然保留在 kernel.org 站点上,是为了仍然想去使用它们的那些人。
-
-
-
-非常容易,对吗?
-
-Ok,现在我给出这样选择的一些理由:
-
-### Linux 发行版内核
-
-对于大多数 Linux 用户来说,最好的方案就是使用你喜欢的 Linux 发行版的内核。就我本人而言,我比较喜欢基于社区的、内核不断滚动升级的用最新内核的 Linux 发行版,并且它也是由开发者社区来支持的。这种类型的发行版有 Fedora、openSUSE、Arch、Gentoo、CoreOS、以及其它的。
-
-所有这些发行版都使用了上游的最新的稳定版内核,并且确保定期打了需要的 bug 修复补丁。当它拥有了最新的修复之后([记住所有的修复都是安全修复][2]),这就是你可以使用的最安全、最好的内核之一。
-
-有些社区的 Linux 发行版需要很长的时间才发行一个新内核的发行版,但是最终发行的版本和所支持的内核都是非常好的。这些也都非常好用,Debian 和 Ubuntu 就是这样的例子。
-
-我没有在这里列出你所喜欢的发行版,并不是意味着它们的内核不够好。查看这些发行版的网站,确保它们的内核包是不断应用最新的安全补丁进行升级过的,那么它就应该是很好的。
-
-许多人好像喜欢旧的、“传统”模式的发行版,使用 RHEL、SLES、CentOS 或者 “LTS” 版的 Ubuntu。这些发行版挑选一个特定的内核版本,然后一用就是好几年,甚至几十年。他们会移植最新的 bug 修复,有时也移植一些内核新特性,以一种堂吉诃德式的执着保持版本号不变,尽管实际上已经在那个旧内核上做了成千上万的变更。这是一项吃力不讨好的工作,被分配做这些任务的开发者其实干得相当不错,才能实现这些目标。如果你希望永远不要看到内核版本号发生变化,那就使用这些发行版吧。使用它们通常需要付出一些费用,换来的是出错时能从这些公司得到支持,从这个角度看也是值得的。
-
-所以,你能使用的最好的内核是你可以求助于别人,而别人可以为你提供支持的内核。使用那些支持,你通常都已经为它支付过费用了(对于企业发行版),而这些公司也知道他们职责是什么。
-
-但是,如果你不希望去依赖别人,而是希望你自己管理你的内核,或者你有发行版不支持的硬件,那么你应该去使用最新的稳定版:
-
-### 最新的稳定版
-
-最新的稳定版内核是 Linux 内核开发者社区宣布为“稳定版”的最新的一个内核。大约每三个月,社区发行一个包含了对所有新硬件支持的、新的稳定版内核,最新版的内核不但改善内核性能,同时还包含内核各部分的 bug 修复。再经过三个月之后,进入到下一个内核版本的 bug 修复将被移植进入这个稳定版内核中,因此,使用这个内核版本的用户将确保立即得到这些修复。
-
-最新的稳定版内核通常也是主流社区发行版使用的较好的内核,因此你可以确保它是经过测试和拥有大量用户使用的内核。另外,内核社区(全部开发者超过 4000 人)也将帮助这个发行版提供对用户的支持,因为这是他们做的最新的一个内核。
-
-三个月之后,将发行一个新的稳定版内核,你应该去更新到它以确保你的内核始终是最新的稳定版,当最新的稳定版内核发布之后,对你的当前稳定版内核的支持通常会落后几周时间。
-
-如果你在上一个 LTS 版本发布之后购买了最新的硬件,为了能够支持最新的硬件,你几乎是绝对需要去运行这个最新的稳定版内核。对于台式机或新的服务器,它们通常需要运行在它们推荐的内核版本上。
-
-### 最新的 LTS 发行版
-
-如果你的硬件为了保证正常运行(像大多数的嵌入式设备),需要依赖供应商的源码树外的补丁,那么对你来说,最好的内核版本是最新的 LTS 发行版。这个发行版拥有所有进入稳定版内核的最新 bug 修复,以及大量的用户测试和使用。
-
-请注意,这个最新的 LTS 发行版没有新特性,并且也几乎不会增加对新硬件的支持,因此,如果你需要使用一个新设备,那你的最佳选择就是最新的稳定版内核,而不是最新的 LTS 版内核。
-
-另外,对于这个 LTS 发行版内核的用户来说,他也不用担心每三个月一次的“重大”升级。因此,他们将一直坚持使用这个 LTS 内核发行版,并每年升级一次,这是一个很好的实践。
-
-使用这个 LTS 发行版的不利方面是,你没法得到在最新版本内核上实现的内核性能提升,除非在未来的一年中,你升级到下一个 LTS 版内核。
-
-另外,如果你使用的这个内核版本有问题,你所做的第一件事情就是向任意一位内核开发者报告发生的问题,并向他们询问,“最新的稳定版内核中是否也存在这个问题?”并且,你将意识到,对它的支持不会像使用最新的稳定版内核那样容易得到。
-
-现在,如果你坚持使用一个有大量的补丁集的内核,并且不希望升级到每年一次的新 LTS 内核版本上,那么,或许你应该去使用老的 LTS 发行版内核:
-
-### 老的 LTS 发行版
-
-这些发行版传统上都由社区提供 2 年时间的支持,有时候当一个重要的 Linux 发行版(像 Debian 或 SLES 一样)依赖它时,这个支持时间会更长。然而在过去一年里,感谢 Google、Linaro、Linaro 成员公司、[kernelci.org][3]、以及其它公司在测试和基础设施上的大量投入,使得这些老的 LTS 发行版内核得到更长时间的支持。
-
-这是最新的 LTS 发行版,它们将被支持多长时间,这是 2018 年 8 月 24 日显示在 [kernel.org/category/releases.html][4] 上的信息:
-
-![][5]
-
-Google 和其它公司希望这些内核使用的时间更长的原因是,由于现在几乎所有的 SoC 芯片的疯狂(也有人说是打破常规)的开发模型。这些设备在芯片发行前几年就启动了他们的开发生命周期,而那些代码从来不会合并到上游,最终结果是始终在一个分支中,新的芯片基于一个 2 年以前的老内核发布。这些 SoC 的代码树通常增加了超过 200 万行的代码,这使得它们成为我们前面称之为“类 Linux 内核“的东西。
-
-如果在 2 年后,这个 LTS 发行版停止支持,那么来自社区的支持将立即停止,并且没有人对它再进行 bug 修复。这导致了在全球各地数以百万计的不安全设备仍然在使用中,这对任何生态系统来说都不是什么好事情。
-
-由于这种依赖,这些公司现在要求新设备不断更新到最新的 LTS 发行版,而这些特定的发行版(即每个 4.9.y 发行版)就是为它们发行的。其中一个这样的例子就是新 Android 设备对内核版本的要求,这些新设备的 “O” 版本和现在的 “P” 版本指定了最低允许使用的内核版本,并且在设备上越来越频繁升级的、安全的 Android 发行版开始要求使用这些 “.y” 发行版。
-
-我注意到一些生产商现在已经在做这些事情。Sony 是其中一个非常好的例子,在他们的大多数新手机上,通过他们每季度的安全发行版,将设备更新到最新的 4.4.y 发行版上。另一个很好的例子是一家小型公司 Essential,他们持续跟踪 4.4.y 发行版,据我所知,他们发布新版本的速度比其它公司都快。
-
-当使用这种很老的内核时,有一个重大警告:移植到这类内核中的 bug 修复,比最新版的 LTS 内核要少得多,因为使用这些老 LTS 内核的传统设备型号,远少于使用新内核的设备。这些内核不应再用于任何存在不可信用户或虚拟机的“通用计算”场景,因为像最近的 Spectre 这样的修复并不会移植到这些内核上,如果某些分支中存在这类 bug,安全性将大打折扣。
-
-因此,仅当在你能够完全控制的设备中使用老的 LTS 发行版,或者是使用在有一个非常强大的安全模型(像 Android 一样强制使用 SELinux 和应用程序隔离)去限制的情况下。绝对不要在有不可信用户、程序、或虚拟机的服务器上使用这些老的 LTS 发行版内核。
-
-此外,如果社区对它有支持的话,社区对这些老的 LTS 内核发行版相比正常的 LTS 内核发行版的支持要少的多。如果你使用这些内核,那么你只能是一个人在战斗,你需要有能力去独自支持这些内核,或者依赖你的 SoC 供应商为你提供支持(需要注意的是,对于大部分供应商来说是不会为你提供支持的,因此,你要特别注意 …)。
-
-### 不再维护的内核发行版
-
-更让人感到惊讶的事情是,许多公司只是随便选一个内核发行版,然后将它封装到它们的产品里,并将它毫不犹豫地承载到数十万的部件中。其中一个这样的糟糕例子是 Lego Mindstorm 系统,不知道是什么原因在它们的设备上随意选取了一个 `-rc` 的内核发行版。`-rc` 的发行版是开发中的版本,Linux 内核开发者认为它根本就不适合任何人使用,更不用说是数百万的用户了。
-
-当然,如果你愿意,你可以随意地使用它,但是需要注意的是,可能真的就只有你一个人在使用它。社区不会为你提供支持,因为他们不可能关注所有内核版本的特定问题,因此如果出现错误,你只能独自去解决它。对于一些公司和系统来说,这么做可能还行,但是如果没有为此有所规划,那么要当心因此而产生的”隐性“成本。
-
-### 总结
-
-基于以上原因,下面是一个针对不同类型设备的简短列表,这些设备我推荐适用的内核如下:
-
- * 笔记本 / 台式机:最新的稳定版内核
- * 服务器:最新的稳定版内核或最新的 LTS 版内核
- * 嵌入式设备:最新的 LTS 版内核或老的 LTS 版内核(如果使用的安全模型非常强大和严格)
-
-
-
-至于我,在我的机器上运行什么样的内核?我的笔记本运行的是最新的开发版内核(即 Linus 的开发树)再加上我正在做修改的内核,我的服务器上运行的是最新的稳定版内核。因此,尽管我负责 LTS 发行版的支持工作,但我自己并不使用 LTS 版内核,除了在测试系统上。我依赖于开发版和最新的稳定版内核,以确保我的机器运行的是目前我们所知道的最快的也是最安全的内核版本。
-
---------------------------------------------------------------------------------
-
-via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/
-
-作者:[Greg Kroah-Hartman][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[qhwdw](https://github.com/qhwdw)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://kroah.com
-[1]:https://s3.amazonaws.com/kroah.com/images/kernel.org_2018_08_24.png
-[2]:http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/
-[3]:https://kernelci.org/
-[4]:https://www.kernel.org/category/releases.html
-[5]:https://s3.amazonaws.com/kroah.com/images/kernel.org_releases_2018_08_24.png
diff --git a/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md b/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md
deleted file mode 100644
index 46cc5067f2..0000000000
--- a/translated/tech/20180928 A Free And Secure Online PDF Conversion Suite.md
+++ /dev/null
@@ -1,104 +0,0 @@
-一款免费且安全的在线PDF转换软件
-======
-
-
-
-我们总在寻找更好用、更高效的解决方案,让我们的生活更加方便。比方说,在处理 PDF 文档时,你会迫切地想拥有一款在任何情形下都快速可靠的工具。在这里,我们想向你推荐 **EasyPDF**:一款可以胜任所有场合的在线 PDF 软件。经过大量测试,我们可以保证:这款工具能够让你的 PDF 文档管理更加容易。
-
-不过,关于EasyPDF有一些十分重要的事情,你必须知道。
-
-* EasyPDF是免费的、匿名的在线PDF转换软件。
-* 能够将 PDF 文档转换成 Word、Excel、PowerPoint、AutoCAD、JPG、GIF 和 Text 等格式的文档。
-* 能够从 Word、Excel、PowerPoint 等其他格式的文件创建 PDF 文件。
-* 能够进行PDF文档的合并、分割和压缩。
-* 能够识别扫描的PDF和图片中的内容。
-* 可以从你的设备或者云存储(Google Drive 和 DropBox)中上传文档。
-* 可以在Windows、Linux、Mac和智能手机上通过浏览器来操作。
-* 支持多种语言。
-
-### EasyPDF的用户界面
-
-
-
-EasyPDF最吸引你眼球的就是平滑的用户界面,营造一种整洁的环境,这会让使用者感觉更加舒服。由于网站完全没有一点广告,EasyPDF的整体使用体验相比以前会好很多。
-
-每种不同类型的转换都有它们专门的菜单,只需要简单地向其中添加文件,你并不需要知道太多知识来进行操作。
-
-许多类似网站没有做好相关的优化,在手机上的使用体验并不太友好。然而,EasyPDF 突破了这一瓶颈。在智能手机上,EasyPDF 几乎可以秒开,并且可以顺畅地操作。你也可以通过 Chrome 浏览器的“三点”菜单把 EasyPDF 添加到手机的主屏幕上。
-
-
-
-### 特性
-
-除了好看的界面,EasyPDF还非常易于使用。为了使用它,你 **不需要注册一个账号** 或者**留下一个邮箱**,它是完全匿名的。另外,EasyPDF也不会对要转换的文件进行数量或者大小的限制,完全不需要安装!酷极了,不是吗?
-
-首先,你需要选择一种想要进行的格式转换,比如,将PDF转换成Word。然后,选择你想要转换的PDF文件。你可以通过两种方式来上传文件:直接拖拉或者从设备上的文件夹进行选择。还可以选择从[**Google Drive**][1] 或 [**Dropbox**][2]来上传文件。
-
-选择要进行格式转换的文件后,点击 Convert 按钮开始转换。转换会在一分钟内完成,你并不需要等待太长时间。如果你还要对其他文件进行格式转换,在继续之前,不要忘了把前面已经转换完成的文件下载保存,不然你会丢失这些文件。
-
-
-
-要进行其他类型的格式转换,直接返回到主页。
-
-目前支持的几种格式转换类型如下:
-
- * **PDF 转换成 Word** – 将 PDF 文档转换成 Word 文档
-
- * **PDF 转换成 PowerPoint** – 将 PDF 文档 转换成 PowerPoint 演示讲稿
-
- * **PDF 转换成 Excel** – 将 PDF 文档 转换成 Excel 文档
-
- * **PDF 创建** – 从一些其他类型的文件(如, text, doc, odt)来创建PDF文档
-
- * **Word 转换成 PDF** – 将 Word 文档 转换成 PDF 文档
-
- * **JPG 转换成 PDF** – 将 JPG images 转换成 PDF 文档
-
- * **PDF 转换成 AutoCAD** – 将 PDF 文档转换成 .dwg 格式(DWG 是 CAD 文件的原生格式)
-
- * **PDF 转换成 Text** – 将 PDF 文档 转换成 Text 文档
-
- * **PDF 分割** – 把 PDF 文件分割成多个部分
-
- * **PDF 合并** – 把多个PDF文件合并成一个文件
-
- * **PDF 压缩** – 将 PDF 文档进行压缩
-
- * **PDF 转换成 JPG** – 将 PDF 文档 转换成 JPG 图片
-
- * **PDF 转换成 PNG** – 将 PDF 文档 转换成 PNG 图片
-
- * **PDF 转换成 GIF** – 将 PDF 文档 转换成 GIF 文件
-
- * **在线文字内容识别** – 将扫描的纸质文档转换成能够进行编辑的文件(如,Word,Excel,Text)
-
-想试一试吗?好极了!点击下面的链接,然后开始格式转换吧!
-
-[EasyPDF](https://easypdf.com/)
-
-### 总结
-
-EasyPDF 名符其实,能够让PDF 管理更加容易。就我测试过的 EasyPDF 服务而言,它提供了**完全免费**的简单易用的转换功能。它十分快速、安全和可靠。你会对它的服务质量感到非常满意,因为它不用支付任何费用,也不用留下像邮箱这样的个人信息。值得一试,也许你会找到你自己更喜欢的 PDF 工具。
-
-好吧,我就说这些。更多的好东西还在后面,请继续关注!
-
-加油!
-
-
-
-
-
---------------------------------------------------------------------------------
-
-via: https://www.ostechnix.com/easypdf-a-free-and-secure-online-pdf-conversion-suite/
-
-作者:[SK][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/zhousiyu325)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://www.ostechnix.com/author/sk/
-[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/
-[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/
diff --git a/translated/tech/20180928 What containers can teach us about DevOps.md b/translated/tech/20180928 What containers can teach us about DevOps.md
new file mode 100644
index 0000000000..d514d8ba0b
--- /dev/null
+++ b/translated/tech/20180928 What containers can teach us about DevOps.md
@@ -0,0 +1,105 @@
+容器技术对指导我们 DevOps 的一些启发
+======
+
+容器技术的使用支撑了目前 DevOps 三大主要实践:工作流、及时反馈、持续实验与学习以不断改进。
+
+
+
+容器技术与 DevOps 二者在发展的过程中是互相促进的关系。得益于 DevOps 的设计理念愈发先进,容器生态系统在设计和组件选择上也有了相应的发展。同时,容器技术在生产环境中的使用,也反过来促进了 DevOps 的三大主要实践(参见[支撑 DevOps 的三个实践][1])。
+
+
+### 工作流
+
+**容器中的工作流**
+
+每个容器都可以看成一个独立的封闭仓库:置身其中,你不需要关心外部的系统环境、集群环境以及其他基础设施,不管你在里面如何折腾,只要对外提供正常的功能就好。容器内运行的应用,一般作为整个应用系统架构的一部分,比如 web API、数据库、任务执行、缓存系统、垃圾回收器等。运维团队一般会限制容器的资源使用,并在此基础上建立完善的容器性能监控服务,从而降低其对基础设施或者下游其他用户的影响。
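下面用一小段 Python 做个示意(假设性示例,并非某个具体监控产品的 API):上文所说的“限制容器资源并监控其用量”,核心就是这样一个阈值检查逻辑:

```python
# 假设性示例:一个极简的容器内存阈值检查,
# 模拟“限制容器资源、达到告警线就通知运维”的监控逻辑。
def check_memory(usage_bytes: int, limit_bytes: int, warn_ratio: float = 0.9):
    """返回 (是否达到告警阈值, 当前使用率)。"""
    ratio = usage_bytes / limit_bytes
    return ratio >= warn_ratio, ratio

# 一个限额 256 MiB 的容器,当前用了 240 MiB
over, ratio = check_memory(240 * 1024**2, 256 * 1024**2)
print(f"usage: {ratio:.2%}, alert: {over}")  # → usage: 93.75%, alert: True
```

真实场景中,限额由 cgroup 等机制强制执行,告警由监控系统完成,这里只是示意其判断逻辑。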
+
+**现实中的工作流**
+
+那些跟“容器”一样独立工作的团队,也可以借鉴这种限制资源占用的策略。无论是现实生活中的工作流(代码发布、构建基础设施,甚至制造 [Spacely’s Sprockets][2] 里的齿轮),还是技术上的工作流(开发、测试、试运行、发布),都遵循这样的线性流程。虽然这种划分有效降低了各环节之间的耦合,但一旦某个环节或某个团队出现问题,整个下游仍然会受到影响。
+
+**DevOps 中的工作流**
+
+DevOps 中的第一条原则,就是掌控整个执行链路的情况:努力理解系统如何协同工作,并理解其中出现的问题会如何对整个过程产生影响。为了提高流程的效率,团队需要持续不断地找出系统中可能存在的性能浪费和被忽视的环节,并最终修复它们。
+
+
+> “践行这样的工作流后,可以避免传递一个已知的缺陷到工作流的下游,避免产生一个可能会导致全局性能退化的局部优化,持续优化工作流的性能,持续加深对于系统的理解”
+
+–Gene Kim, [支撑DevOps的三个实践][3], IT 革命, 2017.4.25
+
+### 反馈
+
+**容器中的反馈**
+
+除了限制容器的资源,很多产品还提供了监控和通知容器性能指标的功能,从而在容器工作不正常时,了解其内部处于什么样的工作状态。比如目前[流行的][5] [Prometheus][4],就可以用来从容器和容器集群中收集相应的性能指标数据。容器本身特别适合用于分隔应用系统、打包代码及其运行环境,但这同时也带来了不透明性,因此,能够从中快速收集信息、解决其内部出现的问题,就显得尤为重要了。
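为了直观感受这类指标数据的样子,下面是一个假设性的小例子:用纯 Python 解析一小段手写的 Prometheus 文本格式(text exposition format)样本。样本中的指标数值纯属虚构,仅用于示意:

```python
# 假设性示例:解析一小段 Prometheus 文本格式的指标,
# 示意从容器收集到的性能数据长什么样(数据为手写样例,并非真实抓取结果)。
SAMPLE = """\
# HELP container_memory_usage_bytes Current memory usage.
# TYPE container_memory_usage_bytes gauge
container_memory_usage_bytes{container="web"} 1.2e+08
container_memory_usage_bytes{container="db"} 4.5e+08
"""

def parse_metrics(text: str) -> dict:
    """把每行样本解析成 {指标名及标签: 数值},跳过 # 开头的注释行。"""
    samples = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        name, value = line.rsplit(" ", 1)
        samples[name] = float(value)
    return samples

metrics = parse_metrics(SAMPLE)
print(metrics)
```

实际使用时,这类解析与存储由 Prometheus 本身完成,这里只是展示其数据格式的结构。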
+
+**现实中的反馈**
+
+在现实中,从始至终同样也需要反馈。一个高效的处理流程中,及时的反馈能够快速的定位事情发生的时间。反馈的关键词是“快速”和“相关”。当一个团队处理大量不相关的事件时,那些真正需要快速反馈的重要信息,很容易就被忽视掉,并向下游传递形成更严重的问题。想象下[如果露西和埃塞尔][6]能够很快的意识到:传送带太快了,那么制作出的巧克力可能就没什么问题了(尽管这样就不太有趣了)。
+
+**DevOps 中的反馈**
+
+DevOps 中的第二条原则,就是快速收集所有相关的有用信息,这样在问题影响到其他开发进程之前,就可以将其识别出来。DevOps 团队应该努力“优化下游”,快速解决那些可能会影响到之后团队的问题。同工作流一样,反馈也是一个持续的过程,目标是快速获得重要的信息,并在问题出现后及时响应。
+
+> "快速的反馈对于提高技术的质量、可用性、安全性至关重要。"
+
+–Gene Kim, et al., DevOps 手册:如何在技术组织中创造世界级的敏捷性,可靠性和安全性, IT 革命, 2016
+
+### 持续实验与学习
+
+**容器中的持续实验与学习**
+
+如何让“持续的实验与学习”更具操作性是一个不小的挑战。容器让我们的开发工程师和运维团队,在不需要掌握太多边缘或难以理解的知识的情况下,依然可以安全地进行本地和生产环境的测试,这在之前是难以做到的。即便是一些激进的实验,容器技术仍然能让我们轻松地对其进行版本控制、记录与分享。
+
+**现实中的持续实验与学习**
+
+举个我自己的例子:多年前,作为一个年轻、初出茅庐的系统管理员(仅仅工作了三周),我被要求修改某大学核心 IT 部门网站所在的 Apache 虚拟主机配置。由于没有易于使用的测试环境,我直接在生产站点上进行了配置修改,当时觉得配置没问题就发布了。几分钟后,我无意中听到隔壁的同事说:
+
+“等会,网站挂了?”
+
+“没错,怎么回事?”
+
+很多人蒙圈了……
+
+在被嘲讽了一番(真的是嘲讽)之后,我一头扎在工作台上,赶紧撤销之前的更改。当天下午晚些时候,部门主管,也就是我老板的老板的老板,来到我的工位上,问发生了什么事。
+
+“别担心,”她告诉我,“我们不会生你的气。这是一个错误,而现在你已经从中学到了东西。”
+
+而在容器中,这种改动可以很容易地先行测试,并且在部署到生产环境之前,也很容易被那些经验老道的团队成员发现问题。
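回到上面的故事,容器(或任何隔离的试验环境)带来的好处可以用一个假设性的小例子来示意:先在配置的副本上套用并验证改动,验证通过才会得到新配置,生产配置始终不受影响(其中的配置项纯属虚构):

```python
import copy

# 假设性示例:模拟“先在隔离副本里验证改动,再决定是否应用”的流程,
# 类似于用一次性容器做试验,而不是直接修改生产环境。
def try_change(config: dict, change: dict, validate) -> dict:
    candidate = copy.deepcopy(config)  # 在副本(“容器”)里试验
    candidate.update(change)
    if not validate(candidate):
        # 验证失败:坏改动被挡在“容器”里,原配置原样返回
        return config
    return candidate

prod = {"listen_port": 80, "server_name": "www.example.edu"}
valid = lambda c: 0 < c["listen_port"] < 65536

print(try_change(prod, {"listen_port": 8080}, valid)["listen_port"])  # → 8080
print(try_change(prod, {"listen_port": -1}, valid)["listen_port"])    # → 80,坏改动未生效
```

这只是对“隔离试验”这一思路的极简模拟,真实的容器还会同时隔离运行环境与依赖。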
+
+**DevOps 中的持续实验与学习**
+
+做实验的初衷,是我们每个人都希望通过一些改变来提高一些东西,并勇敢地通过实验来验证我们的想法。对 DevOps 团队来说,失败无论对团队还是个人都是一种经验,所以不要害怕失败。团队中的每个成员不断学习、分享,也会不断提升其所在团队与组织的水平。
+
+随着系统变得越来越复杂,我们更需要把注意力放在一些特殊的点上:上面提到的两条原则主要关注的是流程当前的全貌,而持续学习关注的则是整个项目、人员、团队、组织的未来。它不仅对流程产生影响,还会对流程中的每个人产生影响。
+
+> "无风险的实验让我们能够不懈的改进我们的工作,但也要求我们使用之前没有用过的工作方式"
+
+–Gene Kim, et al., [凤凰计划:让你了解 IT、DevOps以及如何取得商业成功][7], IT 革命, 2013
+
+### 容器技术给我们 DevOps 上的启迪
+
+学会有效地使用容器,能够帮助我们践行 DevOps 的三条原则:工作流、反馈以及持续实验和学习。从整体上看待应用程序和基础设施,而不是对容器之外的东西置若罔闻,教会我们考虑系统的所有部分,了解其上游和下游的影响,打破孤岛,并作为一个团队去工作,从而提高全局性能,加深对整个系统的理解。通过努力提供及时而准确的反馈,我们可以在组织内部建立有效的反馈模式,以便在问题产生影响之前就发现它。最后,提供一个可以安全地尝试新想法并从中学习的环境,教会我们创造这样一种文化:失败一方面促进了我们知识的增长,另一方面也通过有根据的猜测,为复杂的问题带来新的、优雅的解决方案。
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/9/containers-can-teach-us-devops
+
+作者:[Chris Hermansen][a]
+选题:[lujun9972](https://github.com/lujun9972)
+译者:[译者ID](https://github.com/littleji)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/clhermansen
+[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/
+[2]: https://en.wikipedia.org/wiki/The_Jetsons
+[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops
+[4]: https://prometheus.io/
+[5]: https://opensource.com/article/18/9/prometheus-operational-advantage
+[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI
+[7]: https://itrevolution.com/book/the-phoenix-project/