Install AWFFull Web Server Log Analysis Application on Ubuntu 17.10
======

AWFFull is a web server log analysis program based on "The Webalizer". AWFFull produces usage statistics in HTML format for viewing with a browser. The results are presented in both columnar and graphical format, which facilitates interpretation. Yearly, monthly, daily and hourly usage statistics are presented, along with the ability to display usage by site, URL, referrer, user agent (browser), user name, search strings, entry/exit pages, and country (some information may not be available if not present in the log file being processed).

AWFFull supports CLF (common log format) log files, as well as Combined log formats as defined by NCSA and others, and variations of these which it attempts to handle intelligently. In addition, AWFFull also supports wu-ftpd xferlog formatted log files, allowing analysis of FTP servers, and Squid proxy logs. Logs may also be compressed, via gzip.

If a compressed log file is detected, it will be automatically uncompressed while it is read. Compressed logs must have the standard gzip extension of .gz.

### Changes from Webalizer

AWFFull is based on the Webalizer code and has a number of large and small changes. These include:
* Beyond the raw statistics: making use of published formulae to provide additional insights into site usage.

* GeoIP IP address look-ups for more accurate country detection.

* Resizable graphs.

* Integration with GNU gettext, allowing for easy translations. Currently 32 languages are supported.

* Display more than 12 months of the site history on the front page.

* Additional page count tracking and sorting by same.

* Some minor visual tweaks, including Geolizer's use of Kb, Mb etc. for volumes.

* Additional pie charts for URL counts, entry and exit pages, and sites.

* Horizontal lines on graphs that are more sensible and easier to read.

* User agent and referral tracking is now calculated via PAGES, not HITS.

* GNU-style long command line options are now supported (e.g. --help).

* Can choose what is a page by excluding "what isn't" vs the original "what is" method.

* Requests to the site being analysed are displayed with the matching referring URL.

* A table of 404 errors, and the referring URL, can be generated.

* An external CSS file can be used with the generated HTML.

* Manual performance optimisation of the config file is now easier with a post-analysis summary output.

* Specified IPs and addresses can be assigned to a given country.

* Additional dump options for detailed analysis with other tools.

* Lotus Domino v6 logs are now detected and processed.
**Install AWFFull on Ubuntu 17.10**

> sudo apt-get install awffull

### Configuring AWFFull

You have to edit the AWFFull config file at /etc/awffull/awffull.conf. If you have multiple virtual websites running on the same machine, you can make several copies of the default config file, as shown below.
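For instance, keeping one copy per site might look like this (the awffull.site1.conf naming is just a hypothetical scheme; each copy would then get its own LogFile and OutputDir):

```
sudo cp /etc/awffull/awffull.conf /etc/awffull/awffull.site1.conf
sudo cp /etc/awffull/awffull.conf /etc/awffull/awffull.site2.conf
```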
> sudo vi /etc/awffull/awffull.conf

Make sure the following lines are there:

> LogFile /var/log/apache2/access.log.1
> OutputDir /var/www/html/awffull

Save and exit the file.

You can run AWFFull against your config using the following command:

> awffull -c [your config file name]

This will create all the required files under the /var/www/html/awffull directory, so you can access your web server stats using http://serverip/awffull/.

You should see a screen similar to the following.

If you have more sites, you can automate the process using a shell script and a cron job, as sketched below.
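A minimal sketch of such automation, assuming one AWFFull config file per site lives under /etc/awffull (the script itself and its path are hypothetical):

```
#!/bin/sh
# Run AWFFull once for every per-site config file in /etc/awffull.
# Assumes each config sets its own LogFile and OutputDir.
for conf in /etc/awffull/*.conf; do
    awffull -c "$conf"
done
```

Saved as, say, /usr/local/bin/awffull-all.sh, it could then be scheduled nightly with a crontab entry such as `0 1 * * * /usr/local/bin/awffull-all.sh`.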
--------------------------------------------------------------------------------

via: http://www.ubuntugeek.com/install-awffull-web-server-log-analysis-application-on-ubuntu-17-10.html

Author: [ruchi][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:http://www.ubuntugeek.com/author/ubuntufix
translating---geekpi

How To Turn On/Off Colors For ls Command In Bash On a Linux/Unix
======

// Translating by Linchenguang....
BriFuture is translating this article

Let’s Build A Simple Interpreter. Part 1.
======
Translating by qhwdw

Kernel Tracing with Ftrace
============================================================

[Andrej Yemelianov][7]

Tags: [ftrace][8], [kernel][9], [kernel profiling][10], [kernel tracing][11], [linux][12], [tracepoints][13]
There are a number of tools for analyzing events at the kernel level: [SystemTap][14], [ktap][15], [Sysdig][16], [LTTNG][17], etc., and you can find plenty of detailed articles and materials about these on the web.

You’ll find much less information on Linux’s native mechanism for tracing system events and retrieving/analyzing troubleshooting information. That would be [ftrace][18], the first tracing tool added to the kernel, and this is what we’ll be looking at today. Let’s start by defining some key terms.

### Kernel Tracing and Profiling

Kernel profiling detects performance “bottlenecks”. Profiling helps us determine where exactly in a program we’re losing performance. Special programs generate a profile—an event summary—which can be used to figure out which functions took the most time to run. These programs, however, don’t help identify why performance dropped.

Bottlenecking usually occurs under conditions that can’t be identified from profiling. To understand why an event took place, the relevant context has to be restored. This requires tracing.

Tracing is understood as the process of collecting information on the activity in a working system. This is done with special tools that register system events, much like a tape recorder records ambient sound.

Tracing programs can simultaneously trace events at the application and OS level. The information they gather may be useful for diagnosing multiple system problems.

Tracing is sometimes compared to logging. There definitely are similarities between the two, but there are differences, too.

With tracing, information is written about low-level events. These number in the hundreds or even thousands. With logging, information is written about higher-level events, which are much less frequent. These include users logging into the system, application errors, database transactions, etc.

Just like logs, tracing data can be read as is; however, it’s more useful to extract information about specific applications. All tracing programs are capable of this.

The Linux kernel has three primary mechanisms for kernel tracing and profiling:

* tracepoints – a mechanism that works over static instrumented code

* kprobes – a dynamic tracing mechanism used to interrupt kernel code at any point, call its own handler, and return after all of the necessary operations have been completed

* perf_events – an interface for accessing the PMU (Performance Monitoring Unit)

We won’t be writing about all of these mechanisms here, but anyone interested can visit [Brendan Gregg’s blog][19].

Using ftrace, we can interact with these mechanisms and get debugging information directly from the user space. We’ll talk about this in more detail below. All command line examples below are from Ubuntu 14.04, kernel version 3.13.0-24.
### Ftrace: General Information

Ftrace is short for Function Trace, but that’s not all it does: it can be used to track context switches, measure the time it takes to process interrupts, calculate the time for activating high-priority tasks, and much more.

Ftrace was developed by Steven Rostedt and has been included in the kernel since version 2.6.27 in 2008. This is the framework that provides a debugging ring buffer for recording data. This data is gathered by the kernel’s integrated tracing programs.

Ftrace works on the debugfs file system, which is mounted by default in most modern Linux distributions. To start using ftrace, you’ll have to go to the /sys/kernel/debug/tracing directory (this is only available to the root user):
```
# cd /sys/kernel/debug/tracing
```

The contents of the directory should look like this:
```
available_filter_functions options stack_trace_filter
available_tracers per_cpu trace
buffer_size_kb printk_formats trace_clock
buffer_total_size_kb README trace_marker
current_tracer saved_cmdlines trace_options
dyn_ftrace_total_info set_event trace_pipe
enabled_functions set_ftrace_filter trace_stat
events set_ftrace_notrace tracing_cpumask
free_buffer set_ftrace_pid tracing_max_latency
function_profile_enabled set_graph_function tracing_on
instances set_graph_notrace tracing_thresh
kprobe_events snapshot uprobe_events
kprobe_profile stack_max_size uprobe_profile
```

We won’t describe all of these files and subdirectories; that’s already been taken care of in the [official documentation][20]. Instead, we’ll just briefly describe the files relevant to our context:
* available_tracers – available tracing programs

* current_tracer – the tracing program presently running

* tracing_on – the system file responsible for enabling or disabling data writing to the ring buffer (to enable this, the number 1 has to be added to the file; to disable it, the number 0)

* trace – the file where tracing data is saved in human-readable format

### Available Tracers

We can view a list of available tracers with the command:
```
root@andrei:/sys/kernel/debug/tracing# cat available_tracers
blk mmiotrace function_graph wakeup_rt wakeup function nop
```

Let’s take a quick look at the features of each tracer:
* function – a function call tracer without arguments

* function_graph – a function call tracer with subcalls

* blk – a call and event tracer related to block device I/O operations (this is what blktrace uses)

* mmiotrace – a memory-mapped I/O operation tracer

* nop – the simplest tracer, which, as the name suggests, doesn’t do anything (although it may come in handy in some situations, which we’ll describe later on)

### The Function Tracer

We’ll start our introduction to ftrace with the function tracer. Let’s look at a test script:
```
#!/bin/sh

dir=/sys/kernel/debug/tracing

sysctl kernel.ftrace_enabled=1          # make sure ftrace is enabled
echo function > ${dir}/current_tracer   # select the function tracer
echo 1 > ${dir}/tracing_on              # start writing to the ring buffer
sleep 1                                 # trace everything for one second
echo 0 > ${dir}/tracing_on              # stop writing to the ring buffer
less ${dir}/trace                       # view the collected trace
```
This script is fairly straightforward, but there are a few things worth noting. The command sysctl kernel.ftrace_enabled=1 enables ftrace. We then select the function tracer by writing its name to the current_tracer file.

Next, we write a 1 to tracing_on, which enables the ring buffer. The syntax requires a space between 1 and the > symbol; echo 1> tracing_on will not work. One line later, we disable it (if 0 is written to tracing_on, the buffer won’t clear and ftrace won’t be disabled).

Why would we do this? Between the two echo commands, we see the command sleep 1. We enable the buffer, run this command, and then disable it. This lets the tracer include information about all of the system calls that occur while the command runs.

In the last line of the script, we give the command to display tracing data in the console.

Once the script has run, we’ll see the following printout (here is just a small fragment):
```
# tracer: function
#
# entries-in-buffer/entries-written: 29571/29571 #P:2
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
trace.sh-1295 [000] .... 90.502874: mutex_unlock <-rb_simple_write
trace.sh-1295 [000] .... 90.502875: __fsnotify_parent <-vfs_write
trace.sh-1295 [000] .... 90.502876: fsnotify <-vfs_write
trace.sh-1295 [000] .... 90.502876: __srcu_read_lock <-fsnotify
trace.sh-1295 [000] .... 90.502876: __srcu_read_unlock <-fsnotify
trace.sh-1295 [000] .... 90.502877: __sb_end_write <-vfs_write
trace.sh-1295 [000] .... 90.502877: syscall_trace_leave <-int_check_syscall_exit_work
trace.sh-1295 [000] .... 90.502878: context_tracking_user_exit <-syscall_trace_leave
trace.sh-1295 [000] .... 90.502878: context_tracking_user_enter <-syscall_trace_leave
trace.sh-1295 [000] d... 90.502878: vtime_user_enter <-context_tracking_user_enter
trace.sh-1295 [000] d... 90.502878: _raw_spin_lock <-vtime_user_enter
trace.sh-1295 [000] d... 90.502878: __vtime_account_system <-vtime_user_enter
trace.sh-1295 [000] d... 90.502878: get_vtime_delta <-__vtime_account_system
trace.sh-1295 [000] d... 90.502879: account_system_time <-__vtime_account_system
trace.sh-1295 [000] d... 90.502879: cpuacct_account_field <-account_system_time
trace.sh-1295 [000] d... 90.502879: acct_account_cputime <-account_system_time
trace.sh-1295 [000] d... 90.502879: __acct_update_integrals <-acct_account_cputime
```
The printout starts with information about the number of entries in the buffer and the total number of entries written. The difference between these two numbers is the number of events lost while filling the buffer (there were no losses in our example).

Then there’s a list of functions that includes the following information:

* process identifier (PID)

* the CPU the process runs on (CPU#)

* the process start time (TIMESTAMP)

* the name of the traceable function and the parent function that called it (FUNCTION); for example, in the first line of our output, the mutex_unlock function was called by rb_simple_write
### The Function_graph Tracer

The function_graph tracer works just like function, but is more detailed: the entry and exit point is shown for each function. With this tracer, we can trace functions with subcalls and measure the execution time of each function.

Let’s edit the script from our last example:
```
#!/bin/sh

dir=/sys/kernel/debug/tracing

sysctl kernel.ftrace_enabled=1
echo function_graph > ${dir}/current_tracer
echo 1 > ${dir}/tracing_on
sleep 1
echo 0 > ${dir}/tracing_on
less ${dir}/trace
```

After running this script, we get the following printout:
```
# tracer: function_graph
#
# CPU DURATION FUNCTION CALLS
# | | | | | | |
0) 0.120 us | } /* resched_task */
0) 1.877 us | } /* check_preempt_curr */
0) 4.264 us | } /* ttwu_do_wakeup */
0) + 29.053 us | } /* ttwu_do_activate.constprop.74 */
0) 0.091 us | _raw_spin_unlock();
0) 0.260 us | ttwu_stat();
0) 0.133 us | _raw_spin_unlock_irqrestore();
0) + 37.785 us | } /* try_to_wake_up */
0) + 38.478 us | } /* default_wake_function */
0) + 39.203 us | } /* pollwake */
0) + 40.793 us | } /* __wake_up_common */
0) 0.104 us | _raw_spin_unlock_irqrestore();
0) + 42.920 us | } /* __wake_up_sync_key */
0) + 44.160 us | } /* sock_def_readable */
0) ! 192.850 us | } /* tcp_rcv_established */
0) ! 197.445 us | } /* tcp_v4_do_rcv */
0) 0.113 us | _raw_spin_unlock();
0) ! 205.655 us | } /* tcp_v4_rcv */
0) ! 208.154 us | } /* ip_local_deliver_finish */
```
In this graph, DURATION shows the time spent running a function. Pay careful attention to the points marked by the + and ! symbols. The plus sign (+) means the function took more than 10 microseconds; the exclamation point (!) means it took more than 100 microseconds.

Under FUNCTION_CALLS, we find information on each function call.

The symbols used to show the initiation and completion of each function are the same as in C: braces ({ and }) demarcate functions, one at the start and one at the end; leaf functions that don’t call any other function are marked with a semicolon (;).
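Building on this, the set_graph_function file seen in the directory listing above can limit the graph to a single call tree. A minimal sketch (do_sys_open is just an illustrative function name, and the clearing syntax may vary by kernel version):

```
#!/bin/sh
# Trace only do_sys_open and its subcalls with the function_graph tracer.
dir=/sys/kernel/debug/tracing

echo do_sys_open > ${dir}/set_graph_function
echo function_graph > ${dir}/current_tracer
echo 1 > ${dir}/tracing_on
sleep 1
echo 0 > ${dir}/tracing_on
echo > ${dir}/set_graph_function    # clear the filter when done
less ${dir}/trace
```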
### Function Filters

The ftrace printout can be big, and finding exactly what it is you’re looking for can be extremely difficult. We can use filters to simplify our search: the printout will only display information about the functions we’re interested in. To do this, we just have to write the name of our function in the set_ftrace_filter file. For example:
```
root@andrei:/sys/kernel/debug/tracing# echo kfree > set_ftrace_filter
```

To disable the filter, we add an empty line to this file:

```
root@andrei:/sys/kernel/debug/tracing# echo > set_ftrace_filter
```

By running the command

```
root@andrei:/sys/kernel/debug/tracing# echo kfree > set_ftrace_notrace
```

we get the opposite result: the printout will give us information about every function except kfree().

Another useful option is set_ftrace_pid. This is for tracing functions that can be called while a particular process runs.

ftrace has many more filtering options. For a more detailed look at these, you can read Steven Rostedt’s article on [LWN.net][21].
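As a quick illustration of set_ftrace_pid, here is a minimal sketch that restricts the function tracer to a single process (the backgrounded sleep is just a stand-in target):

```
#!/bin/sh
# Trace only the functions executed on behalf of one PID.
dir=/sys/kernel/debug/tracing

sleep 10 &                          # hypothetical target process
echo $! > ${dir}/set_ftrace_pid     # restrict tracing to its PID
echo function > ${dir}/current_tracer
echo 1 > ${dir}/tracing_on
wait                                # let the target finish
echo 0 > ${dir}/tracing_on
less ${dir}/trace
```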
### Tracing Events

We mentioned the tracepoints mechanism above. Tracepoints are special code inserts that trigger system events. Tracepoints may be dynamic (meaning they have several checks attached to them) or static (no checks attached).

Static tracepoints don’t affect the system in any way; they just add a few bytes for the function call at the end of the instrumented function and add a data structure in a separate section.

Dynamic tracepoints call a trace function when the relevant code fragment is executed. Tracing data is written to the ring buffer.

Tracepoints can be included anywhere in code; in fact, they can already be found in a lot of kernel functions. Let’s look at the kmem_cache_alloc function (taken from [here][22]):
```
void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
{
    void *ret = slab_alloc(cachep, flags, _RET_IP_);

    trace_kmem_cache_alloc(_RET_IP_, ret,
                           cachep->object_size, cachep->size, flags);

    return ret;
}
```

trace_kmem_cache_alloc is itself a tracepoint. We can find countless more examples by just looking at the source code of other kernel functions.

The Linux kernel has a special API for working with tracepoints from the user space. In the /sys/kernel/debug/tracing directory, there’s an events directory where system events are saved. These are available for tracing. System events in this context can be understood as the tracepoints included in the kernel.
The list of these can be viewed by running the command:

```
root@andrei:/sys/kernel/debug/tracing# cat available_events
```

A long list will be printed out in the console. This is a bit inconvenient. We can print out a more structured list using the command:
```
root@andrei:/sys/kernel/debug/tracing# ls events

block gpio mce random skb vsyscall
btrfs header_event migrate ras sock workqueue
compaction header_page module raw_syscalls spi writeback
context_tracking iommu napi rcu swiotlb xen
enable irq net regmap syscalls xfs
exceptions irq_vectors nmi regulator task xhci-hcd
ext4 jbd2 oom rpm timer
filemap kmem pagemap sched udp
fs kvm power scsi vfs
ftrace kvmmmu printk signal vmscan
```
All possible events are grouped into subdirectories by subsystem. Before we can start tracing events, we’ll make sure we’ve enabled writing to the ring buffer:

```
root@andrei:/sys/kernel/debug/tracing# cat tracing_on
```

If the number 0 is shown in the console, we run:

```
root@andrei:/sys/kernel/debug/tracing# echo 1 > tracing_on
```
In our last article, we wrote about the chroot() system call; let’s trace access to this system call. For our tracer, we’ll use nop, because function and function_graph record too much information, including event information that we’re just not interested in.

```
root@andrei:/sys/kernel/debug/tracing# echo nop > current_tracer
```

All system call related events are saved in the syscalls directory. Here we’ll find directories for entering and exiting the various system calls. We’ll activate the tracepoint we need by writing 1 to the corresponding file:

```
root@andrei:/sys/kernel/debug/tracing# echo 1 > events/syscalls/sys_enter_chroot/enable
```
Then we create an isolated file system using chroot (for more information, see this [previous post][23]). After we’ve executed the commands we want, we’ll disable the tracer so that no excess or irrelevant information appears in the printout:

```
root@andrei:/sys/kernel/debug/tracing# echo 0 > tracing_on
```

Then, we can look at the contents of the ring buffer. At the end of the printout, we’ll find information about our system call (here is a small section):

```
root@andrei:/sys/kernel/debug/tracing# cat trace

......
chroot-11321 [000] .... 4606.265208: sys_chroot(filename: 7fff785ae8c2)
chroot-11325 [000] .... 4691.677767: sys_chroot(filename: 7fff242308cc)
bash-11338 [000] .... 4746.971300: sys_chroot(filename: 7fff1efca8cc)
bash-11351 [000] .... 5379.020609: sys_chroot(filename: 7fffbf9918cc)
```
More detailed information about configuring event tracing can be found [here][24].

### Conclusion

In this article, we presented a general overview of ftrace’s capabilities. We’d appreciate any comments or additions. If you’d like to dive deeper into this topic, we recommend looking at the following resources:

* [https://www.kernel.org/doc/Documentation/trace/tracepoints.txt][1] — a detailed description of the tracepoints mechanism

* [https://www.kernel.org/doc/Documentation/trace/events.txt][2] — a manual for tracing system events in Linux

* [https://www.kernel.org/doc/Documentation/trace/ftrace.txt][3] — official ftrace documentation

* [https://lttng.org/files/thesis/desnoyers-dissertation-2009-12-v27.pdf][4] — Mathieu Desnoyers’ (the creator of tracepoints and author of LTTNG) dissertation about kernel tracing and profiling

* [https://lwn.net/Articles/370423/][5] — Steven Rostedt’s article on ftrace capabilities

* [http://alex.dzyoba.com/linux/profiling-ftrace.html][6] — an ftrace overview that analyzes a practical use case
--------------------------------------------------------------------------------

via: https://blog.selectel.com/kernel-tracing-ftrace/

Author: [Andrej Yemelianov][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://blog.selectel.com/author/yemelianov/
[1]:https://www.kernel.org/doc/Documentation/trace/tracepoints.txt
[2]:https://www.kernel.org/doc/Documentation/trace/events.txt
[3]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt
[4]:https://lttng.org/files/thesis/desnoyers-dissertation-2009-12-v27.pdf
[5]:https://lwn.net/Articles/370423/
[6]:http://alex.dzyoba.com/linux/profiling-ftrace.html
[7]:https://blog.selectel.com/author/yemelianov/
[8]:https://blog.selectel.com/tag/ftrace/
[9]:https://blog.selectel.com/tag/kernel/
[10]:https://blog.selectel.com/tag/kernel-profiling/
[11]:https://blog.selectel.com/tag/kernel-tracing/
[12]:https://blog.selectel.com/tag/linux/
[13]:https://blog.selectel.com/tag/tracepoints/
[14]:https://sourceware.org/systemtap/
[15]:https://github.com/ktap/ktap
[16]:http://www.sysdig.org/
[17]:http://lttng.org/
[18]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt
[19]:http://www.brendangregg.com/blog/index.html
[20]:https://www.kernel.org/doc/Documentation/trace/ftrace.txt
[21]:https://lwn.net/Articles/370423/
[22]:http://lxr.free-electrons.com/source/mm/slab.c
[23]:https://blog.selectel.com/containerization-mechanisms-namespaces/
[24]:https://www.kernel.org/doc/Documentation/trace/events.txt
@ -1,336 +0,0 @@
|
||||
(yixunx translating)
|
||||
Internet Chemotherapy
|
||||
======
|
||||
|
||||
12/10 2017
|
||||
|
||||
### 1. Internet Chemotherapy
|
||||
|
||||
Internet Chemotherapy was a 13 month project between Nov 2016 - Dec 2017. It has been known under names such as 'BrickerBot', 'bad firmware upgrade', 'ransomware', 'large-scale network failure' and even 'unprecedented terrorist actions.' That last one was a little harsh, Fernandez, but I guess I can't please everybody.

You can download the module which executes the http and telnet-based payloads from this router at http://91.215.104.140/mod_plaintext.py. Due to platform limitations the module is obfuscated single threaded python, but the payloads are in plain view and should be easy to figure out for any programmer worth his/her/hir salt. Take a look at the number of payloads, 0-days and techniques and let the reality sink in for a moment. Then imagine what would've happened to the Internet in 2017 if I had been a blackhat dedicated to building a massive DDoS cannon for blackmailing the biggest providers and companies. I could've disrupted them all and caused extraordinary damage to the Internet in the process.

My ssh crawler is too dangerous to publish. It contains various levels of automation for the purpose of moving laterally through poorly designed ISP networks and taking them over through only a single breached router. My ability to commandeer and secure hundreds of thousands of ISP routers was the foundation of my anti-IoT botnet project as it gave me great visibility of what was happening on the Internet and it gave me an endless supply of nodes for hacking back. I began my non-destructive ISP network cleanup project in 2015 and by the time Mirai came around I was in a good position to react. The decision to willfully sabotage other people's equipment was nonetheless a difficult one to make, but the colossally dangerous CVE-2016-10372 situation ultimately left me with no other choice. From that moment on I was all-in.

I am now here to warn you that what I've done was only a temporary band-aid and it's not going to be enough to save the Internet in the future. The bad guys are getting more sophisticated, the number of potentially vulnerable devices keeps increasing, and it's only a matter of time before a large scale Internet-disrupting event will occur. If you are willing to believe that I've disabled over 10 million vulnerable devices over the 13-month span of the project then it's not far-fetched to say that such a destructive event could've already happened in 2017.

YOU SHOULD WAKE UP TO THE FACT THAT THE INTERNET IS ONLY ONE OR TWO SERIOUS IOT EXPLOITS AWAY FROM BEING SEVERELY DISRUPTED. The damage of such an event is immeasurable given how digitally connected our societies have become, yet CERTs, ISPs and governments are not taking the gravity of the situation seriously enough. ISPs keep deploying devices with exposed control ports and although these are trivially found using services like Shodan the national CERTs don't seem to care. A lot of countries don't even have CERTs. Many of the world's biggest ISPs do not have any actual security know-how in-house, and are instead relying on foreign vendors for help in case anything goes wrong. I've watched large ISPs withering for months under conditioning from my botnet without them being able to fully mitigate the vulnerabilities (good examples are BSNL, Telkom ZA, PLDT, from time to time PT Telkom, and pretty much most large ISPs south of the border). Just look at how slow and ineffective Telkom ZA was in dealing with its Aztech modem problem and you will begin to understand the hopelessness of the current situation. In 99% of the problem cases the solution would have simply been for the ISPs to deploy sane ACLs and CPE segmentation, yet months later their technical staff still hasn't figured this out. If ISPs are unable to mitigate weeks and months of continuous deliberate sabotage of their equipment then what hope is there that they would notice and fix a Mirai problem on their networks? Many of the world's biggest ISPs are catastrophically negligent and this is the biggest danger by a landslide, yet paradoxically it should also be the easiest problem to fix.

I've done my part to try to buy the Internet some time, but I've gone as far as I can. Now it's up to you. Even small actions are important. Among the things you can do are:
* Review your own ISP's security through services such as Shodan and take them to task over exposed telnet, http, httpd, ssh, tr069 etc. ports on their networks. Refer them to this document if you have to. There's no good reason why any of these control ports should ever be accessible from the outside world. Exposing control ports is an amateur mistake. If enough customers complain they might actually do something about it!

* Vote with your wallet! Refuse to buy or use 'intelligent' products unless the manufacturer can prove that the product can and will receive timely security updates. Find out about the vendor's security track record before giving them your hard-earned money. Be willing to pay a little bit more for credible security.

* Lobby your local politicians and government officials for improved security legislation for IoT (Internet of Things) devices such as routers, IP cameras and 'intelligent' devices. Private or public companies currently lack the incentives for solving this problem in the immediate term. This matter is as important as minimum safety requirements for cars and general electrical appliances.

* Consider volunteering your time or other resources to underappreciated whitehat organizations such as GDI Foundation or Shadowserver Foundation. These organizations and people make a big difference and they can significantly amplify the impact of your skillset in helping the Internet.

* Last but not least, consider the long-shot potential of getting IoT devices designated as an 'attractive nuisance' through precedent-setting legal action. If a home owner can be held liable for a burglar/trespasser getting injured then I don't see why a device owner (or ISP or manufacturer) shouldn't be held liable for the damage that was caused by their dangerous devices being exploitable through the Internet. Attribution won't be a problem for Layer 7 attacks. If any large ISPs with deep pockets aren't willing to fund such precedent cases (and they might not since they fear that such precedents could come back to haunt them) we could even crowdfund such initiatives over here and in the EU. ISPs: consider your volumetric DDoS bandwidth cost savings in 2017 as my indirect funding of this cause and as evidence for its potential upside.

### 2. Timeline

Here are some of the more memorable events of the project:
* Deutsche Telekom Mirai disruption in late November 2016. My hastily assembled initial TR069/64 payload only performed a 'route del default' but this was enough to get the ISP's attention to the problem and the resulting headlines alerted other ISPs around the world to the unfolding disaster.

* Around January 11-12 some Mirai-infected DVRs with exposed control port 6789 ended up getting bricked in Washington DC, and this made numerous headlines. Gold star to Vemulapalli for determining that Mirai combined with /dev/urandom had to be 'highly sophisticated ransomware'. Whatever happened to those 2 unlucky souls in Europe?

* In late January 2017 the first genuine large-scale ISP takedown occurred when Rogers Canada's supplier Hitron carelessly pushed out new firmware with an unauthenticated root shell listening on port 2323 (presumably this was a debugging interface that they forgot to disable). This epic blunder was quickly discovered by Mirai botnets, and the end result was a large number of bricked units.

* In February 2017 I noticed the first Mirai evolution of the year, with both Netcore/Netis and Broadcom CLI-based modems being attacked. The BCM CLI would turn out to become one of the main Mirai battlegrounds of 2017, with both the blackhats and me chasing the massive long tail of ISP and model-specific default credentials for the rest of the year. The 'broadcom' payloads in the above source may look strange but they're statistically the most likely sequences to disable any of the endless number of buggy BCM CLI firmwares out there.

* In March 2017 I significantly increased my botnet's node count and started to add more web payloads in response to the threats from IoT botnets such as Imeij, Amnesia and Persirai. The large-scale takedown of these hacked devices created a new set of concerns. For example, among the leaked credentials of the Avtech and Wificam devices there were logins which strongly implied airports and other important facilities, and around April 1 2017 the UK government officials warned of a 'credible cyber threat' to airports and nuclear facilities from 'hacktivists.' Oops.

* The more aggressive scanning also didn't escape the attention of civilian security researchers, and on April 6 2017 security company Radware published an article about my project. The company trademarked it under the name 'BrickerBot.' It became clear that if I were to continue increasing the scale of my IoT counteroffensive I had to come up with better network mapping/detection methods for honeypots and other risky targets.

* Around April 11th 2017 something very unusual happened. At first it started like so many other ISP takedowns, with a semi-local ISP called Sierra Tel running exposed Zyxel devices with the default telnet login of supervisor/zyad1234. A Mirai runner discovered the exposed devices and my botnet followed soon after, and yet another clash in the epic BCM CLI war of 2017 took place. This battle didn't last long. It would've been just like any of the hundreds of other ISP takedowns in 2017 were it not for something very unusual occurring right after the smoke settled. Amazingly, the ISP didn't try to cover up the outage as some kind of network issue, power spike or a bad firmware upgrade. They didn't lie to their customers at all. Instead, they promptly published a press release about their modems having been vulnerable which allowed their customers to assess their potential risk exposure. What did the most honest ISP in the world get for its laudable transparency? Sadly it got little more than criticism and bad press. It's still the most depressing case of 'why we can't have nice things' to me, and probably the main reason for why 99% of security mistakes get covered up and the actual victims get left in the dark. Too often 'responsible disclosure' simply becomes a euphemism for 'coverup.'

* On April 14 2017 DHS warned of 'BrickerBot Threat to Internet of Things' and the thought of my own government labeling me as a cyber threat felt unfair and myopic. Surely the ISPs that run dangerously insecure network deployments and the IoT manufacturers that peddle amateurish security implementations should have been fingered as the actual threat to Americans rather than me? If it hadn't been for me millions of us would still be doing their banking and other sensitive transactions over hacked equipment and networks. If anybody from DHS ever reads this I urge you to reconsider what protecting the homeland and its citizens actually means.

* In late April 2017 I spent some time on improving my TR069/64 attack methods, and in early May 2017 a company called Wordfence (now Defiant) reported a significant decline in a TR069-exploiting botnet that had previously posed a threat to Wordpress installations. It's noteworthy that the same botnet temporarily returned a few weeks later using a different exploit (but this was also eventually mitigated).

* In May 2017 hosting company Akamai reported in its Q1 2017 State of the Internet report an 89% decrease in large (over 100 Gbps) DDoS attacks compared with Q1 2016, and a 30% decrease in total DDoS attacks. The largest attack of Q1 2017 was 120 Gbps vs 517 Gbps in Q4 2016. As large volumetric DDoS was one of the primary signatures of Mirai this felt like concrete justification for all the months of hard work in the IoT trenches.

* During the summer I kept improving my exploit arsenal, and in late July I performed some test runs against APNIC ISPs. The results were quite surprising. Among other outcomes a few hundred thousand BSNL and MTNL modems were disabled and this outage became headline news in India. Given the elevated geopolitical tensions between India and China at the time I felt that there was a credible risk of the large takedown being blamed on China so I made the rare decision to publicly take credit for it. Catalin, I'm very sorry for the abrupt '2 day vacation' that you had to take after reporting the news.

* Previously having worked on APNIC and AfriNIC, on August 9th 2017 I also launched a large scale cleanup of LACNIC space which caused problems for various providers across the subcontinent. The attack made headlines in Venezuela after a few million cell phone users of Movilnet lost service. Although I'm personally against government surveillance of the Internet the case of Venezuela is noteworthy. Many of the LACNIC ISPs and networks have been languishing for months under persistent conditioning from my botnet, but Venezuelan providers have been quick to fortify their networks and secure their infrastructure. I believe this is due to Venezuela engaging in far more invasive deep packet inspection than the other LACNIC countries. Food for thought.

* In August 2017 F5 Labs released a report called "The Hunt for IoT: The Rise of Thingbots" in which the researchers were perplexed over the recent lull in telnet activity. The researchers speculated that the lack of activity may be evidence that one or more very large cyber weapons are being built (which I guess was in fact true). This piece is to my knowledge the most accurate assessment of the scope of my project but fascinatingly the researchers were unable to put two and two together in spite of gathering all the relevant clues on a single page.

* In August 2017 Akamai's Q2 2017 State of the Internet report announces the first quarter in 3 years without the provider observing a single large (over 100 Gbps) attack, and a 28% decrease in total DDoS attacks vs Q1 2017. This seems like further validation of the cleanup effort. This phenomenally good news is completely ignored by the mainstream media which operates under an 'if it bleeds it leads' mentality even when it comes to information security. This is yet another reason why we can't have nice things.

* After the publication of CVE-2017-7921 and 7923 in September 2017 I decided to take a closer look at Hikvision devices, and to my horror I realized that there's a technique for botting most of the vulnerable firmwares that the blackhats hadn't discovered yet. As a result I launched a global cleanup initiative around mid-September. Over a million DVRs and cameras (mainly Hikvision and Dahua) were disabled over a span of 3 weeks and publications such as IPVM.com wrote several articles about the attacks. Dahua and Hikvision wrote press releases mentioning or alluding to the attacks. A huge number of devices finally got their firmwares upgraded. Seeing the confusion that the cleanup effort caused I decided to write a quick summary for the CCTV people at http://depastedihrn3jtw.onion.link/show.php?md5=62d1d87f67a8bf485d43a05ec32b1e6f (sorry for the NSFW language of the pastebin service). The staggering number of vulnerable units that were online months after critical security patches were available should be the ultimate wakeup call to everyone about the utter dysfunctionality of the current IoT patching process.

* Around September 28 2017 Verisign releases a report saying that DDoS attacks declined 55% in Q2 2017 vs Q1, with a massive 81% attack peak decline.

* On November 23rd 2017 the CDN provider Cloudflare reports that 'in recent months, Cloudflare has seen a dramatic reduction in simple attempts to flood our network with junk traffic.' Cloudflare speculates it could've partly been due to their change in policies, but the reductions also line up well with the IoT cleanup activities.

* At the end of November 2017 Akamai's Q3 2017 State of the Internet report sees a small 8% increase in total DDoS attacks for the quarter. Although this was a significant reduction compared to Q3 2016 the slight uptick serves as a reminder of the continued risks and dangers.

* As a further reminder of the dangers a new Mirai strain dubbed 'Satori' reared its head in November-December of 2017. It's particularly noteworthy how quickly the botnet managed to grow based on a single 0-day exploit. This event underlines the current perilous operating state of the Internet, and why we're only one or two severe IoT exploits away from widespread disruption. What will happen when nobody is around to disable the next threat? Sinkholing and other whitehat/'legal' mitigations won't be enough in 2018 just like they weren't enough in 2016. Perhaps in the future governments will be able to collaborate on a counterhacking task force with a global mandate for disabling particularly severe existential threats to the Internet, but I'm not holding my breath.

* Late in the year there were also some hysterical headlines regarding a new botnet that was dubbed 'Reaper' and 'IoTroop'. I know some of you will eventually ridicule those who estimated its size at 1-2 million but you should understand that security researchers have very limited knowledge of what's happening on networks and hardware that they don't control. In practice the researchers could not possibly have known or even assumed that most of the vulnerable device pool had already been disabled by the time the botnet emerged. Give the 'Reaper' one or two new unmitigated 0-days and it'll become as terrifying as our worst fears.
### 3. Parting Thoughts

I'm sorry to leave you in these circumstances, but the threat to my own safety is becoming too great to continue. I have made many enemies. If you want to help, look at the list of action items further up. Good luck.

There will also be those who will criticize me and say that I've acted irresponsibly, but that's completely missing the point. The real point is that if somebody like me with no previous hacking background was able to do what I did, then somebody better than me could've done far worse things to the Internet in 2017. I'm not the problem and I'm not here to play by anyone's contrived rules. I'm only the messenger. The sooner you realize this the better.

-Dr Cyborkian a.k.a. janit0r, conditioner of 'terminally ill' devices.

--------------------------------------------------------------------------------

via: https://ghostbin.com/paste/q2vq2

Author: janit0r
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
translating---geekpi

Configuring MSMTP On Ubuntu 16.04 (Again)
======
This post exists as a copy of what I had on my previous blog about configuring MSMTP on Ubuntu 16.04; I'm posting it as-is for posterity, and have no idea if it'll work on later versions. As I'm not hosting my own Ubuntu/MSMTP server anymore I can't see any updates being made to this, but if I ever do have to set this up again I'll create an updated post! Anyway, here's what I had…
@ -1,112 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
The World Map In Your Terminal
|
||||
======
|
||||
I just stumbled upon an interesting utility. The World map in the Terminal! Yes, It is so cool. Say hello to **MapSCII** , a Braille and ASCII world map renderer for your xterm-compatible terminals. It supports GNU/Linux, Mac OS, and Windows. I thought it is a just another project hosted on GitHub. But I was wrong! It is really impressive what they did there. We can use our mouse pointer to drag and zoom in and out a location anywhere in the world map. The other notable features are;
|
||||
|
||||
* Discover Points of Interest around any given location
* Highly customizable layer styling with [Mapbox Styles][1] support
* Connect to any public or private vector tile server
* Or just use the supplied and optimized [OSM2VectorTiles][2] based one
* Work offline and discover local [VectorTile][3]/[MBTiles][4]
* Compatible with most Linux and OSX terminals
* Highly optimized algorithms for a smooth experience
### Displaying the World Map in your Terminal using MapSCII

To open the map, just run the following command from your Terminal:
```
telnet mapscii.me
```

Here is the World map from my Terminal.

[![][5]][6]

Cool, yeah?

To switch to Braille view, press **c**.

[![][5]][7]

Type **c** again to switch back to the previous format.

To scroll around the map, use the arrow keys **up**, **down**, **left**, **right**. To zoom in/out of a location, use the **a** and **z** keys. Also, you can use the scroll wheel of your mouse to zoom in or out. To quit the map, press **q**.
Like I already said, don't think it is a simple project. Click on any location on the map and press **a** to zoom in.

Here are some sample screenshots after I zoomed in.

[![][5]][8]

I was able to zoom in to view the states in my country (India).

[![][5]][9]

And the districts in a state (Tamil Nadu):

[![][5]][10]

Even the [Taluks][11] and the towns in a district:

[![][5]][12]

And, the place where I completed my schooling:

[![][5]][13]

Even though it is just a small town, MapSCII displayed it accurately. MapSCII uses [**OpenStreetMap**][14] to collect the data.
### Install MapSCII locally

Liked it? Great! You can host it on your own system.

Make sure you have installed Node.js on your system. If not, refer to the following link.

[Install NodeJS on Linux][15]

Then, run the following command to install it:
```
sudo npm install -g mapscii
```

To launch MapSCII, run:
```
mapscii
```

Have fun! More good stuff to come. Stay tuned!

Cheers!
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/mapscii-world-map-terminal/

Author: [SK][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.mapbox.com/mapbox-gl-style-spec/
[2]:https://github.com/osm2vectortiles
[3]:https://github.com/mapbox/vector-tile-spec
[4]:https://github.com/mapbox/mbtiles-spec
[5]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-1-2.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-2.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-3.png
[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-4.png
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-5.png
[11]:https://en.wikipedia.org/wiki/Tehsils_of_India
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-6.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-7.png
[14]:https://www.openstreetmap.org/
[15]:https://www.ostechnix.com/install-node-js-linux/
Installing Awstat for analyzing Apache logs
======
AWSTAT is a free and very powerful log analyzer tool for Apache log files. After analyzing logs from Apache, it presents them in an easy to understand graphical format. Awstat is short for Advanced Web Statistics, and it works on the command line interface or on CGI.

In this tutorial, we will be installing AWSTAT on our CentOS 7 machine for analyzing Apache logs.

( **Recommended read:** [**Scheduling important jobs with crontab**][1])
### Pre-requisites

**1-** A website hosted on an Apache web server. To create one, read the below mentioned tutorials on Apache web servers.

( **Recommended reads:** [**installing Apache**][2], [**Securing apache with SSL cert**][3] & **hardening tips for apache** )

**2-** The EPEL repository enabled on the system, as Awstat packages are not available in the default repositories. To enable the EPEL repo, run:

```
$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-10.noarch.rpm
```
### Installing Awstat

Once the EPEL repository has been enabled on the system, Awstat can be installed by running:

```
$ yum install awstats
```

When Awstat is installed, it creates a file for Apache at '/etc/httpd/conf.d/awstats.conf' with some configurations. These configurations are fine when the web server and Awstat are configured on the same machine, but if Awstat is on a different machine than the web server, then some changes have to be made to the file.
#### Configuring Apache for Awstat

To configure Awstat for a remote web server, open /etc/httpd/conf.d/awstats.conf and update the parameter 'Allow from' with the IP address of the web server:

```
$ vi /etc/httpd/conf.d/awstats.conf

<Directory "/usr/share/awstats/wwwroot">
    Options None
    AllowOverride None
    <IfModule mod_authz_core.c>
        # Apache 2.4
        Require local
    </IfModule>
    <IfModule !mod_authz_core.c>
        # Apache 2.2
        Order allow,deny
        Allow from 127.0.0.1
        Allow from 192.168.1.100
    </IfModule>
</Directory>
```

Save the file and restart the Apache service to implement the changes:

```
$ systemctl restart httpd
```
#### Configuring AWSTAT
|
||||
|
||||
For every website that we add to awstat, a different configuration file needs to be created with the website information . An example file is created in folder '/etc/awstats' by the name 'awstats.localhost.localdomain.conf', we can make copies of it & configure our website with this,
|
||||
|
||||
```
|
||||
$ cd /etc/awstats
|
||||
$ cp awstats.localhost.localdomain.conf awstats.linuxtechlab.com.conf
|
||||
```
|
||||
|
||||
Now open the file & edit the following three parameters to match your website,
|
||||
|
||||
```
|
||||
$ vi awstats.linuxtechlab.com.conf
|
||||
|
||||
LogFile="/var/log/httpd/access.log"
|
||||
SiteDomain="linuxtechlab.com"
|
||||
HostAliases=www.linuxtechlab.com localhost 127.0.0.1
|
||||
```
|
||||
|
||||
The last step is to update the AWStats statistics from the log file, which can be done by executing the command below:
|
||||
|
||||
```
|
||||
/usr/share/awstats/wwwroot/cgi-bin/awstats.pl -config=linuxtechlab.com -update
|
||||
```
|
||||
|
||||
#### Checking the AWStats page
|
||||
|
||||
To test/check the AWStats page, open a web browser & enter the following URL in the address bar:
|
||||
**https://linuxtechlab.com/awstats/awstats.pl?config=linuxtechlab.com**
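You can also verify from the command line that the page responds; here is a minimal check with curl, using the example domain configured above:

```
$ curl -I "https://linuxtechlab.com/awstats/awstats.pl?config=linuxtechlab.com"
```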
|
||||
|
||||
![awstat][5]
|
||||
|
||||
**Note-** we can also schedule a cron job to update the AWStats statistics on a regular basis. An example crontab entry:
|
||||
|
||||
```
|
||||
$ crontab -e
|
||||
0 1 * * * /usr/share/awstats/wwwroot/cgi-bin/awstats.pl -config=linuxtechlab.com -update
|
||||
```
|
||||
|
||||
This ends our tutorial on installing AWStats for analyzing Apache logs; please leave your comments/queries in the comment box below.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linuxtechlab.com/installing-awstat-analyzing-apache-logs/
|
||||
|
||||
作者:[SHUSAIN][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linuxtechlab.com/author/shsuain/
|
||||
[1]:http://linuxtechlab.com/scheduling-important-jobs-crontab/
|
||||
[2]:http://linuxtechlab.com/beginner-guide-configure-apache/
|
||||
[3]:http://linuxtechlab.com/create-ssl-certificate-apache-server/
|
||||
[4]:https://i1.wp.com/linuxtechlab.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif?resize=602%2C312
|
||||
[5]:https://i0.wp.com/linuxtechlab.com/wp-content/uploads/2017/04/awstat.jpg?resize=602%2C312
|
@ -1,3 +1,5 @@
|
||||
translating by heart4lor
|
||||
|
||||
How to Make a Minecraft Server – ThisHosting.Rocks
|
||||
======
|
||||
We’ll show you how to make a Minecraft server with beginner-friendly step-by-step instructions. It will be a persistent multiplayer server that you can play on with your friends from all around the world. You don’t have to be in a LAN.
|
||||
|
@ -0,0 +1,101 @@
|
||||
How To Resume Partially Transferred Files Over SSH Using Rsync
|
||||
======
|
||||
|
||||

|
||||
|
||||
There are chances that large files being copied over SSH with the SCP command might be interrupted, cancelled, or broken due to various reasons such as power failure, network failure, or user intervention. The other day I was copying the Ubuntu 16.04 ISO file to my remote system. Unfortunately, the power went out and the network connection dropped immediately. The result? The copy process was terminated! This is just a simple example. The Ubuntu ISO is not so big, and I could restart the copy process as soon as the power was restored. But in a production environment, you might not want to start over while you're transferring large files.
|
||||
|
||||
Also, you can't always resume an aborted transfer with the **scp** command, because if you do, it will simply overwrite the existing files. What would you do in such situations? No worries! This is where the **Rsync** utility comes in handy! Rsync can help you resume an interrupted copy or download process where you left off. For those wondering, Rsync is a fast, versatile file copying utility that can be used to copy and transfer files or folders to and from remote and local systems.
|
||||
|
||||
It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
|
||||
|
||||
Just like SCP, rsync will also copy files over SSH. In case you want to download or transfer big files and folders over SSH, I recommend the rsync utility. Be mindful that the **rsync utility should be installed on both sides** (remote and local systems) in order to resume partially transferred files.
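If rsync is missing on either side, it is available in the default repositories of most distributions; a minimal sketch for apt-based and yum-based systems:

```
$ sudo apt install rsync     # Debian/Ubuntu
$ sudo yum install rsync     # RHEL/CentOS
```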
|
||||
|
||||
### Resume Partially Transferred Files Using Rsync
|
||||
|
||||
Well, let me show you an example. I am going to copy the Ubuntu 16.04 ISO from my local system to my remote system with this command:
|
||||
|
||||
```
|
||||
$ scp Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
|
||||
```
|
||||
|
||||
Here,
|
||||
|
||||
* **sk** is my remote system's username
|
||||
* **192.168.43.2** is the IP address of the remote machine.
|
||||
|
||||
|
||||
|
||||
Now, I terminated it by pressing **CTRL+c**.
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
sk@192.168.43.2's password:
|
||||
ubuntu-16.04-desktop-amd64.iso 26% 372MB 26.2MB/s 00:39 ETA^c
|
||||
```
|
||||
|
||||
[![][1]][2]
|
||||
|
||||
As you see in the above output, I terminated the copy process when it reached 26%.
|
||||
|
||||
If I re-run the above command, it will simply overwrite the existing file. In other words, the copy process will not resume where I left it off.
|
||||
|
||||
In order to resume the copy process, we can use the **rsync** command as shown below.
|
||||
|
||||
```
|
||||
$ rsync -P --rsh=ssh Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
```
|
||||
sk@192.168.1.103's password:
|
||||
sending incremental file list
|
||||
ubuntu-16.04-desktop-amd64.iso
|
||||
380.56M 26% 41.05MB/s 0:00:25
|
||||
```
|
||||
|
||||
[![][1]][4]
|
||||
|
||||
See? Now, the copying process has resumed where we left off earlier. You can also use "--partial" instead of the "-P" parameter, as shown below.
|
||||
```
|
||||
$ rsync --partial --rsh=ssh Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
|
||||
```
|
||||
|
||||
Here, the "--partial" parameter tells rsync to keep the partially downloaded file and resume the process. "-P" is shorthand for "--partial --progress", so it does the same while also showing progress during the transfer.
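A related option worth knowing, though not used in the commands above, is "--append-verify": it tells rsync to verify the existing partial data and simply continue appending to the file instead of running the delta check, which can be quicker for a single large ISO:

```
$ rsync -avP --append-verify Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
```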
|
||||
|
||||
Alternatively, we can use the following commands as well to resume partially transferred files over SSH.
|
||||
|
||||
```
|
||||
$ rsync -avP Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
|
||||
```
|
||||
|
||||
Or,
|
||||
|
||||
```
|
||||
$ rsync -av --partial Soft_Backup/OS\ Images/Linux/ubuntu-16.04-desktop-amd64.iso sk@192.168.43.2:/home/sk/
|
||||
```
|
||||
|
||||
That's it. You now know how to resume cancelled, interrupted, and partially downloaded files using the rsync command. As you can see, it is not so difficult either. If rsync is installed on both systems, we can easily resume the copy process as described above.
|
||||
|
||||
If you find this tutorial helpful, please share it on your social and professional networks and support OSTechNix. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-resume-partially-downloaded-or-transferred-files-using-rsync/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[2]:http://www.ostechnix.com/wp-content/uploads/2016/02/scp.png ()
|
||||
[3]:/cdn-cgi/l/email-protection
|
||||
[4]:http://www.ostechnix.com/wp-content/uploads/2016/02/rsync.png ()
|
@ -0,0 +1,135 @@
|
||||
How to Use DockerHub
|
||||
======
|
||||
|
||||

|
||||
|
||||
In the previous articles, we learned the basics of [Docker terminology][1], [how to install Docker][2] on desktop Linux, macOS, and Windows, and [how to create container images][3] and run them on your system. In this last article in the series, we will talk about using images from DockerHub and publishing your own images to DockerHub.
|
||||
|
||||
First things first: what is DockerHub and why is it important? DockerHub is a cloud-based repository run and managed by Docker Inc. It's an online repository where Docker images can be published and used by other users. There are both public and private repositories. If you are a company, you can have a private repository for use within your own organization, whereas public images can be used by anyone.
|
||||
|
||||
You can also use official Docker images that are published publicly. I use many such images, including for my test WordPress installations, KDE plasma apps, and more. Although we learned last time how to create your own Docker images, you don't have to. There are thousands of images published on DockerHub for you to use. DockerHub is hardcoded into Docker as the default registry, so when you run the docker pull command against any image, it will be downloaded from DockerHub.
|
||||
|
||||
### Download images from Docker Hub and run locally
|
||||
|
||||
Please check out the previous articles in the series to get started. Then, once you have Docker running on your system, you can open the terminal and run:
|
||||
```
|
||||
$ docker images
|
||||
```
|
||||
|
||||
This command will show all the docker images currently on your system. Let's say you want to deploy Ubuntu on your local machine; you would do:
|
||||
```
|
||||
$ docker pull ubuntu
|
||||
```
|
||||
|
||||
If you already have an Ubuntu image on your system, the command will automatically update that image to the latest version. So, if you want to update your existing images, just run the docker pull command, easy peasy. It's like apt-get upgrade without any muss and fuss.
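A plain docker pull fetches the latest tag; you can also pin a specific version by appending a tag (16.04 here is just an example):

```
$ docker pull ubuntu:16.04
```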
|
||||
|
||||
You already know how to run an image:
|
||||
```
|
||||
$ docker run -it <image name>
|
||||
|
||||
$ docker run -it ubuntu
|
||||
```
|
||||
|
||||
The command prompt should change to something like this:
|
||||
```
|
||||
root@1b3ec4621737:/#
|
||||
```
|
||||
|
||||
Now you can run any command and utility that you use on Ubuntu. It's all safe and contained. You can run all the experiments and tests you want on that Ubuntu. Once you are done testing, you can nuke the image and download a new one. There is no system overhead that you would get with a virtual machine.
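Nuking a downloaded image is a one-liner as well (a minimal sketch; any containers created from the image should be removed first):

```
$ docker rmi ubuntu
```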
|
||||
|
||||
You can exit that container by running the exit command:
|
||||
```
|
||||
$ exit
|
||||
```
|
||||
|
||||
Now let's say you want to install Nginx on your system. Run search to find the desired image:
|
||||
```
|
||||
$ docker search nginx
|
||||
|
||||
|
||||
```
|
||||
|
||||
As you can see, there are many images of Nginx on DockerHub. Why? Because anyone can publish an image. Various images are optimized for different projects, so you can choose the appropriate image for your use case.
|
||||
|
||||
Let's say you want to pull Bitnami's Nginx container:
|
||||
```
|
||||
$ docker pull bitnami/nginx
|
||||
```
|
||||
|
||||
Now run it with:
|
||||
```
|
||||
$ docker run -it bitnami/nginx
|
||||
```
|
||||
|
||||
### How to publish images to Docker Hub?
|
||||
|
||||
Previously, [we learned how to create a Docker image][3], and we can easily publish that image to DockerHub. First, you need to log into DockerHub. If you don't already have an account, please [create one][5]. Then, you can open a terminal and log in:
|
||||
```
|
||||
$ docker login --username=<USERNAME>
|
||||
```
|
||||
|
||||
Replace <USERNAME> with your Docker Hub username. In my case it's arnieswap:
|
||||
```
|
||||
$ docker login --username=arnieswap
|
||||
```
|
||||
|
||||
Enter the password, and you are logged in. Now run the docker images command to get the ID of the image that you created last time.
|
||||
```
|
||||
$ docker images
|
||||
|
||||
|
||||
```
|
||||
|
||||
Now, suppose you want to push that image to DockerHub. First, we need to tag the image ([learn more about tags][1]):
|
||||
```
|
||||
$ docker tag e7083fd898c7 arnieswap/my_repo:testing
|
||||
```
|
||||
|
||||
Now push that image:
|
||||
```
|
||||
$ docker push arnieswap/my_repo
|
||||
```
|
||||
|
||||
The output will look something like this:
|
||||
```
|
||||
The push refers to repository [docker.io/arnieswap/my_repo]

12628b20827e: Pushed
|
||||
|
||||
8600ee70176b: Mounted from library/ubuntu
|
||||
|
||||
2bbb3cec611d: Mounted from library/ubuntu
|
||||
|
||||
d2bb1fc88136: Mounted from library/ubuntu
|
||||
|
||||
a6a01ad8b53f: Mounted from library/ubuntu
|
||||
|
||||
833649a3e04c: Mounted from library/ubuntu
|
||||
|
||||
testing: digest: sha256:286cb866f34a2aa85c9fd810ac2cedd87699c02731db1b8ca1cfad16ef17c146 size: 1569
|
||||
|
||||
```
|
||||
|
||||
Eureka! Your image is being uploaded. Once finished, open DockerHub, log into your account, and you can see your very first Docker image. Now anyone can deploy your image. It's the easiest and fastest way to develop and distribute software. Whenever you update the image, users can simply run:
|
||||
```
|
||||
$ docker run arnieswap/my_repo
|
||||
```
|
||||
|
||||
Now you know why people love Docker containers. They solve many problems that traditional workloads face and allow you to develop, test, and deploy applications in no time. And, by following the steps in this series, you can try them out for yourself.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/intro-to-linux/2018/1/how-use-dockerhub
|
||||
|
||||
作者:[Swapnil Bhartiya][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.linux.com/users/arnieswap
|
||||
[1]:https://www.linux.com/blog/intro-to-linux/2017/12/container-basics-terms-you-need-know
|
||||
[2]:https://www.linux.com/blog/learn/intro-to-linux/how-install-docker-ce-your-desktop
|
||||
[3]:https://www.linux.com/blog/learn/intro-to-linux/2018/1/how-create-docker-image
|
||||
[4]:https://lh3.googleusercontent.com/aizMFFysICAEsgDDYrsrlqwoCgGbWVHtcOzgV9mAtV8IdBZgHPJTdHIZhWBNCRvOyJb108ZBajJ_Nz10yCxGSvk-AF-yvFxpojLdVu3Jjihcwaup6CQLc67A5nglBuGDaOZWcrbV
|
||||
[5]:https://hub.docker.com/
|
||||
[6]:https://lh6.googleusercontent.com/tW1jDOugkX7J2FfyFyToM6B8m5OYFwMba-Ag5aezVGf2A5gsKJ47QrCh_TOKWgIKfE824Uc2Cwwwj9jWps1yJlUZqDyIceVQs-nEbKavFDxuUxLyd4thBA4_rsXrQH4r7hrG8FnD
|
@ -0,0 +1,173 @@
|
||||
How to make your LXD containers get IP addresses from your LAN using a bridge
|
||||
======
|
||||
**Background**: LXD is a hypervisor that manages machine containers on Linux distributions. You install LXD on your Linux distribution and then you can launch machine containers into your distribution, running all sorts of (other) Linux distributions.
|
||||
|
||||
In the previous post, we saw how to get our LXD container to receive an IP address from the local network (instead of getting the default private IP address), using **macvlan**.
|
||||
|
||||
In this post, we are going to see how to use a **bridge** to make our containers get an IP address from the local network. Specifically, we are going to see how to do this using NetworkManager. If you have several public IP addresses, you can use this method (or the other with the **macvlan** ) in order to expose your LXD containers directly to the Internet.
|
||||
|
||||
### Creating the bridge with NetworkManager
|
||||
|
||||
See this post [How to configure a Linux bridge with Network Manager on Ubuntu][1] on how to create the bridge with NetworkManager. It explains that you
|
||||
|
||||
1. Use **NetworkManager** to **Add a New Connection** , a **Bridge**.
|
||||
2. When configuring the **Bridge**, you specify the real network connection (the device, like **eth0** or **enp3s12**) that will be **the slave of the bridge**. You can verify the device of the network connection by running **ip route list 0.0.0.0/0** (see the example after this list).
|
||||
3. Then, you can remove the old network connection and keep just the bridge. The bridge device (**bridge0**) will now be the device that gets you your LAN IP address.
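For example, the default-route check from step 2 looks like this (the gateway and device shown in the output are illustrative):

```
$ ip route list 0.0.0.0/0
default via 192.168.1.1 dev eth0 proto static metric 100
```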
|
||||
|
||||
|
||||
|
||||
At this point you should have network connectivity again. Here is the new device, **bridge0**.
|
||||
```
|
||||
$ ifconfig bridge0
|
||||
bridge0 Link encap:Ethernet HWaddr 00:e0:4b:e0:a8:c2
|
||||
inet addr:192.168.1.64 Bcast:192.168.1.255 Mask:255.255.255.0
|
||||
inet6 addr: fe80::d3ca:7a11:f34:fc76/64 Scope:Link
|
||||
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
|
||||
RX packets:9143 errors:0 dropped:0 overruns:0 frame:0
|
||||
TX packets:7711 errors:0 dropped:0 overruns:0 carrier:0
|
||||
collisions:0 txqueuelen:1000
|
||||
RX bytes:7982653 (7.9 MB) TX bytes:1056263 (1.0 MB)
|
||||
```
|
||||
|
||||
### Creating a new profile in LXD for bridge networking
|
||||
|
||||
In LXD, there is a default profile, and then you can create additional profiles that are either independent of the default (like in the **macvlan** post) or chained with the default profile. Here we use the latter.
|
||||
|
||||
First, create a new and empty LXD profile, called **bridgeprofile**.
|
||||
```
|
||||
$ lxc profile create bridgeprofile
|
||||
```
|
||||
|
||||
Here is the fragment to add to the new profile. **eth0** is the interface name inside the container, so for Ubuntu containers it does not change. **bridge0** is the interface that was created by NetworkManager; if you created the bridge some other way, put the appropriate interface name here. The **EOF** at the end is just a marker used when we copy and paste into the profile.
|
||||
```
|
||||
description: Bridged networking LXD profile
|
||||
devices:
|
||||
eth0:
|
||||
name: eth0
|
||||
nictype: bridged
|
||||
parent: bridge0
|
||||
type: nic
|
||||
EOF
|
||||
```
|
||||
|
||||
Paste the fragment to the new profile.
|
||||
```
|
||||
$ cat <<EOF | lxc profile edit bridgeprofile
|
||||
(paste here the full fragment from earlier)
|
||||
```
|
||||
|
||||
The end result should look like the following.
|
||||
```
|
||||
$ lxc profile show bridgeprofile
|
||||
config: {}
|
||||
description: Bridged networking LXD profile
|
||||
devices:
|
||||
eth0:
|
||||
name: eth0
|
||||
nictype: bridged
|
||||
parent: bridge0
|
||||
type: nic
|
||||
name: bridgeprofile
|
||||
used_by:
|
||||
```
|
||||
|
||||
If it got messed up, delete the profile and start over again. Here is the command.
|
||||
```
|
||||
$ lxc profile delete profile_name_to_delete
|
||||
```
|
||||
|
||||
### Creating containers with the bridge profile
|
||||
|
||||
Now we are ready to create a new container that will use the bridge. We need to specify first the default profile, then the new profile. This is because the new profile will overwrite the network settings of the default profile.
|
||||
```
|
||||
$ lxc launch -p default -p bridgeprofile ubuntu:x mybridge
|
||||
Creating mybridge
Starting mybridge
|
||||
```
|
||||
|
||||
Here is the result.
|
||||
```
|
||||
$ lxc list
|
||||
+-------------|---------|---------------------|------+
|
||||
| mybridge | RUNNING | 192.168.1.72 (eth0) | |
|
||||
+-------------|---------|---------------------|------+
|
||||
| ... | ... |
|
||||
```
|
||||
|
||||
The container **mybridge** is accessible from the local network.
|
||||
|
||||
### Changing existing containers to use the bridge profile
|
||||
|
||||
Suppose we have an existing container that was created with the default profile, and got the LXD NAT network. Can we switch it to use the bridge profile?
|
||||
|
||||
Here is the existing container.
|
||||
```
|
||||
$ lxc launch ubuntu:x mycontainer
|
||||
Creating mycontainer
Starting mycontainer
|
||||
```
|
||||
|
||||
Let's assign **mycontainer** to use the new profile, **default,bridgeprofile**.
|
||||
```
|
||||
$ lxc profile assign mycontainer default,bridgeprofile
|
||||
```
|
||||
|
||||
Now we just need to restart the networking in the container.
|
||||
```
|
||||
$ lxc exec mycontainer -- systemctl restart networking.service
|
||||
```
|
||||
|
||||
This can take quite some time, 10 to 20 seconds. Be patient. Obviously, we could simply restart the container. However, since it can take quite some time to get the IP address, it is more practical to know exactly when you get the new IP address.
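One convenient way to know the exact moment the new address appears is to poll the listing, for example with watch (a small convenience sketch, not part of the original post):

```
$ watch -n 2 lxc list ^mycontainer$
```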
|
||||
|
||||
Let's see how it looks!
|
||||
```
|
||||
$ lxc list ^mycontainer$
|
||||
+----------------|-------------|---------------------|------+
|
||||
| NAME | STATE | IPV4 | IPV6 |
|
||||
+----------------|-------------|---------------------|------+
|
||||
| mycontainer | RUNNING | 192.168.1.76 (eth0) | |
|
||||
+----------------|-------------|---------------------|------+
|
||||
```
|
||||
|
||||
It is great! It got a LAN IP address! In the **lxc list** command, we used the filter **^mycontainer$** , which means to show only the container with the exact name **mycontainer**. By default, **lxc list** does a substring search when it tries to match a container name. Those **^** and **$** characters are related to Linux/Unix in general, where **^** means **start** , and **$** means **end**. Therefore, **^mycontainer$** means the exact string **mycontainer**!
|
||||
|
||||
### Changing bridged containers to use the LXD NAT
|
||||
|
||||
Let's switch back from using the bridge, to using the LXD NAT network. We stop the container, then assign just the **default** profile and finally start the container.
|
||||
```
|
||||
$ lxc stop mycontainer
|
||||
$ lxc profile assign mycontainer default
|
||||
Profiles default applied to mycontainer
|
||||
$ lxc start mycontainer
|
||||
```
|
||||
|
||||
Let's have a look at it,
|
||||
```
|
||||
$ lxc list ^mycontainer$
|
||||
+-------------|---------|----------------------|--------------------------------+
|
||||
| NAME | STATE | IPV4 | IPV6 |
|
||||
+-------------|---------|----------------------|--------------------------------+
|
||||
| mycontainer | RUNNING | 10.52.252.101 (eth0) | fd42:cba6:...:fe10:3f14 (eth0) |
|
||||
+-------------|---------|----------------------|--------------------------------+
|
||||
```
|
||||
|
||||
**NOTE**: I tried to assign the **default** profile while the container was running in bridged mode. It made a mess of the networking, and the container could not get an IPv4 address anymore (it could still get an IPv6 address, though). Therefore, as a rule of thumb, stop a container before assigning a different profile.
|
||||
|
||||
**NOTE #2** : If your container has a LAN IP address, it is important to stop the container so that your router's DHCP server gets the notification to remove the DHCP lease. Most routers remember the MAC address of a new computer, and a new container gets a new random MAC address. Therefore, do not delete or kill containers that have a LAN IP address but rather stop them first. Your router's DHCP lease table is only that big.
|
||||
|
||||
### Conclusion
|
||||
|
||||
In this post we saw how to selectively get our containers to receive a LAN IP address. This requires setting the host network interface to be the slave of the bridge. It is a bit invasive compared to [using a **macvlan**][2], but it offers the ability for the containers and the host to communicate with each other over the LAN.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/
|
||||
|
||||
作者:[Simos Xenitellis][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://blog.simos.info/author/simos/
|
||||
[1]:http://ask.xmodulo.com/configure-linux-bridge-network-manager-ubuntu.html (Permalink to How to configure a Linux bridge with Network Manager on Ubuntu)
|
||||
[2]:https://blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/
|
@ -0,0 +1,262 @@
|
||||
WebSphere MQ programming in Python with Zato
|
||||
======
|
||||
[WebSphere MQ][1] is a messaging middleware product by IBM - a message queue server - and this post shows how to integrate with MQ from Python and [Zato][2].
|
||||
|
||||
The article will go through a short process that will let you:
|
||||
|
||||
* Send messages to queues in 1 line of Python code
|
||||
* Receive messages from queues without coding
|
||||
* Seamlessly integrate with Java JMS applications - frequently found in WebSphere MQ environments
|
||||
* Push MQ messages from [Django][3] or [Flask][4]
|
||||
|
||||
|
||||
|
||||
### Prerequisites
|
||||
|
||||
* [Zato][2] 3.0+ (e.g. from [source code][5])
|
||||
* WebSphere MQ 6.0+
|
||||
|
||||
|
||||
|
||||
### Preliminary steps
|
||||
|
||||
* Obtain connection details and credentials to the queue manager that you will be connecting to:
|
||||
|
||||
* host, e.g. 10.151.13.11
|
||||
* port, e.g. 1414
|
||||
* channel name, e.g. DEV.SVRCONN.1
|
||||
* queue manager name (optional)
|
||||
* username (optional)
|
||||
* password (optional)
|
||||
* Install [Zato][6]
|
||||
|
||||
  * On the same system that Zato is on, install a [WebSphere MQ Client][7] - this is an umbrella term for a set of development headers and libraries that let applications connect to remote queue managers
|
||||
|
||||
  * Install [PyMQI][8] - an additional dependency implementing the low-level proprietary MQ protocol. Note that you need to use the pip command that Zato ships with:
|
||||
|
||||
|
||||
|
||||
```
|
||||
# Assuming Zato is in /opt/zato/current
|
||||
zato$ cd /opt/zato/current/bin
|
||||
zato$ ./pip install pymqi
|
||||
|
||||
```
|
||||
|
||||
* That is it - everything is installed and the rest is a matter of configuration
|
||||
|
||||
|
||||
|
||||
### Understanding definitions, outgoing connections and channels
|
||||
|
||||
Everything in Zato revolves around re-usability and hot-reconfiguration - each individual piece of configuration can be changed on the fly, while servers are running, without restarts.
|
||||
|
||||
Note that the concepts below are presented in the context of WebSphere MQ but they apply to other connection types in Zato too.
|
||||
|
||||
  * **Definitions** - encapsulate common details that apply to other parts of configuration, e.g. a connection definition may contain a remote host and port
  * **Outgoing connections** - objects through which data is sent to remote resources, such as MQ queues
  * **Channels** - objects through which data can be received, for instance, from MQ queues
|
||||
|
||||
|
||||
|
||||
It is usually most convenient to configure environments during development using [web-admin GUI][9] but afterwards this can be automated with [enmasse][10], [API][11] or [command-line interface][12].
|
||||
|
||||
Once configuration is defined, it can be used from Zato services, which in turn represent APIs that Zato clients invoke. Then, external applications, such as Django or Flask ones, will connect over HTTP to a Zato service, which will send messages to MQ queues on their behalf.
|
||||
|
||||
Let's use web-admin to define all the Zato objects required for MQ integrations. (Hint: web-admin by default runs on <http://localhost:8183>)
|
||||
|
||||
### Definition
|
||||
|
||||
* Go to Connections -> Definitions -> WebSphere MQ
|
||||
* Fill out the form and click OK
|
||||
* Observe the 'Use JMS' checkbox - more about it later on
|
||||
|
||||
|
||||
|
||||
![Screenshots][13]
|
||||
|
||||
* Note that a password is by default set to an unusable one (a random UUID4) so once a definition is created, click on Change password to set it to a required one
|
||||
|
||||
|
||||
|
||||
![Screenshots][14]
|
||||
|
||||
* Click Ping to confirm that connections to the remote queue manager can be established
|
||||
|
||||
|
||||
|
||||
![Screenshots][15]
|
||||
|
||||
### Outgoing connection
|
||||
|
||||
* Go to Connections -> Outgoing -> WebSphere MQ
|
||||
* Fill out the form - the connection's name is just a descriptive label
|
||||
* Note that you do not specify a queue name here - this is because a single connection can be used with as many queues as needed
|
||||
|
||||
|
||||
|
||||
![Screenshots][16]
|
||||
|
||||
  * You can now send a test MQ message directly from web-admin after clicking Send a message
|
||||
|
||||
|
||||
|
||||
![Screenshots][17]
|
||||
|
||||
![Screenshots][18]
|
||||
|
||||
### API services
|
||||
|
||||
  * Having carried out the steps above, you can now send messages to queue managers from web-admin, which is a great way to confirm MQ-level connectivity. But the crucial point of using Zato is to offer API services to client applications, so let's create two services now: one for sending messages to MQ and one that will receive them.
|
||||
|
||||
|
||||
|
||||
```
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
from __future__ import absolute_import, division, print_function, unicode_literals
|
||||
|
||||
# Zato
|
||||
from zato.server.service import Service
|
||||
|
||||
class MQSender(Service):
|
||||
""" Sends all incoming messages as they are straight to a remote MQ queue.
|
||||
"""
|
||||
def handle(self):
|
||||
|
||||
# This single line suffices
|
||||
self.out.wmq.send(self.request.raw_request, 'customer.updates', 'CUSTOMER.1')
|
||||
```
|
||||
|
||||
  * In practice, a service such as the one above could perform transformations on incoming messages or read its destination queue names from configuration files, but it serves to illustrate the point that literally 1 line of code is needed to send MQ messages
|
||||
|
||||
* Let's create a channel service now - one that will act as a callback invoked for each message consumed off a queue:
|
||||
|
||||
|
||||
|
||||
```
|
||||
# -*- coding: utf-8 -*-
|
||||
|
||||
from __future__ import absolute_import, division, print_function, unicode_literals
|
||||
|
||||
# Zato
|
||||
from zato.server.service import Service
|
||||
|
||||
class MQReceiver(Service):
|
||||
""" Invoked for each message taken from a remote MQ queue
|
||||
"""
|
||||
def handle(self):
|
||||
self.logger.info(self.request.raw_request)
|
||||
```
|
||||
|
||||
But wait - if this service is the callback one, then how does it know which queue to get messages from?
|
||||
|
||||
That is the key point of Zato architecture - services do not need to know it and unless you really need it, they won't ever access this information.
|
||||
|
||||
Such configuration details are configured externally (for instance, in web-admin) and a service is just a black box that receives some input, operates on it and produces output.
|
||||
|
||||
In fact, the very same service could be mounted not only on WebSphere MQ channels but also on REST or AMQP ones.
|
||||
|
||||
Without further ado, let's create a channel then. Since this is an article about MQ, only this connection type will be shown, even though the same principle applies to other channel types.
|
||||
|
||||
### Channel
|
||||
|
||||
* Go to Connections -> Channels -> WebSphere MQ
|
||||
* Fill out the form and click OK
|
||||
* Data format may be JSON, XML or blank if no automatic de-serialization is required
|
||||
|
||||
|
||||
|
||||
![Screenshots][19]
|
||||
|
||||
After clicking OK, a lightweight background task will start listening for messages on the given queue and, upon receiving any, the service configured for the channel will be invoked.
|
||||
|
||||
You can start as many channels as there are queues to consume messages from, that is, each channel = one input queue and each channel may declare a different service.
|
||||
|
||||
### JMS Java integration
|
||||
|
||||
In many MQ environments the majority of applications will be based on Java JMS and Zato implements the underlying wire-level MQ JMS protocol to let services integrate with such systems without any effort from a Python programmer's perspective.
|
||||
|
||||
When creating connection definitions, merely check Use JMS and everything will be taken care of under the hood - all the necessary wire headers will be added or removed when it needs to be done.
|
||||
|
||||
![Screenshots][20]
|
||||
|
||||
### No restarts required
|
||||
|
||||
It's worth emphasizing again that at no point are server restarts required to reconfigure connection details.
|
||||
|
||||
No matter how many definitions, outgoing connections, and channels there are, and no matter what kind they are (MQ or not), changing any one of them will only update that very object across the whole cluster of Zato servers, without interrupting other API services running concurrently.
|
||||
|
||||
### Configuration wrap-up
|
||||
|
||||
* MQ connection definitions are re-used across outgoing connections and channels
|
||||
* Outgoing connections are used by services to send messages to queues
|
||||
* Data from queues is read through channels that invoke user-defined services
|
||||
* Everything is reconfigurable on the fly
|
||||
|
||||
|
||||
|
||||
Let's now check how to add a REST channel for the MQSender service thus letting Django and Flask push MQ messages.
|
||||
|
||||
### Django and Flask integration
|
||||
|
||||
* Any Zato-based API service can be mounted on a channel
|
||||
* For Django and Flask, it is most convenient to mount one's services on REST channels and invoke them using the [zato-client][21] from PyPI
|
||||
  * zato-client is a set of convenience clients that lets any Python application, including ones based on Django or Flask, invoke Zato services in just a few steps
|
||||
* There is [a dedicated chapter][22] in documentation about Django and Flask, including a sample integration scenario
|
||||
* It's recommended to go through the chapter step-by-step - since all Zato configuration objects share the same principles, the whole of its information applies to any sort of technology that Django or Flask may need to integrate with, including WebSphere MQ
|
||||
* After completing that chapter, to push messages to MQ, you will only need to:
|
||||
* Create a security definition for a new REST channel for Django or Flask
|
||||
* Create the REST channel itself
|
||||
* Assign a service to it (e.g. MQSender)
|
||||
* Use a Python client from zato-client to invoke that channel from Django or Flask
|
||||
  * And that is it - no MQ programming is needed to send messages to MQ queues from any Python application :-) (a quick curl-based check of such a REST channel is sketched right after this list)
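For illustration only, invoking a REST channel that fronts MQSender could look like this from the command line. This is a hedged sketch: the port, the /api/mq.sender path, and the credentials are assumptions for the example, not values from this article:

```
$ curl -u api_user:api_password -X POST http://localhost:11223/api/mq.sender -d '{"msg": "Hello MQ"}'
```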
|
||||
|
||||
|
||||
|
||||
### Summary
|
||||
|
||||
* Zato lets Python programmers integrate with WebSphere MQ with little to no effort
|
||||
* Built-in support for JMS lets one integrate with existing Java applications in a transparent manner
|
||||
* Built-in Python clients offer trivial access to Zato-based API services from other Python applications, including Django or Flask
|
||||
|
||||
|
||||
|
||||
Where to next? Start off with the [tutorial][23], then consult the [documentation][24], there is a lot of information for all types of API and integration projects, and have a look at [support options][25] in case you need absolutely any sort of assistance!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://zato.io/blog/posts/websphere-mq-python-zato.html
|
||||
|
||||
作者:[zato][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://zato.io
|
||||
[1]:https://en.wikipedia.org/wiki/IBM_WebSphere_MQ
|
||||
[2]:https://zato.io/docs
|
||||
[3]:https://www.djangoproject.com/
|
||||
[4]:http://flask.pocoo.org/
|
||||
[5]:https://zato.io/docs/admin/guide/install/source.html
|
||||
[6]:https://zato.io/docs/admin/guide/install/index.html
|
||||
[7]:https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.0.1/com.ibm.mq.csqzaf.doc/cs10230_.htm
|
||||
[8]:https://github.com/dsuch/pymqi/
|
||||
[9]:https://zato.io/docs/web-admin/intro.html
|
||||
[10]:https://zato.io/docs/admin/guide/enmasse.html
|
||||
[11]:https://zato.io/docs/public-api/intro.html
|
||||
[12]:https://zato.io/docs/admin/cli/index.html
|
||||
[13]:https://zato.io/blog/images/wmq-python-zato/def-create.png
|
||||
[14]:https://zato.io/blog/images/wmq-python-zato/def-options.png
|
||||
[15]:https://zato.io/blog/images/wmq-python-zato/def-ping.png
|
||||
[16]:https://zato.io/blog/images/wmq-python-zato/outconn-create.png
|
||||
[17]:https://zato.io/blog/images/wmq-python-zato/outconn-options.png
|
||||
[18]:https://zato.io/blog/images/wmq-python-zato/outconn-send.png
|
||||
[19]:https://zato.io/blog/images/wmq-python-zato/channel-create.png
|
||||
[20]:https://zato.io/blog/images/wmq-python-zato/def-create-jms.png
|
||||
[21]:https://pypi.python.org/pypi/zato-client
|
||||
[22]:https://zato.io/docs/progguide/clients/django-flask.html
|
||||
[23]:https://zato.io/docs/tutorial/01.html
|
||||
[24]:https://zato.io/docs/
|
||||
[25]:https://zato.io/support.html
|
@ -0,0 +1,259 @@
|
||||
tmux – A Powerful Terminal Multiplexer For Heavy Command-Line Linux Users
|
||||
======
|
||||
tmux stands for terminal multiplexer. It allows users to create multiple terminals (split vertically & horizontally) in a single window, which can be accessed and controlled easily from that single window when you are working on different tasks.
|
||||
|
||||
It uses a client-server model, which allows you to share sessions between users; you can also re-attach terminals to a tmux session later. We can easily move or rearrange the virtual consoles as needed, and terminal sessions can move freely from one virtual console to another.
|
||||
|
||||
tmux depends on the libevent and ncurses libraries. tmux offers a status line at the bottom of the screen which displays information about your current tmux session, such as the current window number, window name, username, hostname, current time, and current date.
|
||||
|
||||
When tmux is started, it creates a new session with a single window and displays it on screen. It allows users to create any number of windows in the same session.
|
||||
|
||||
Many of us say it's similar to screen, but I don't, since tmux offers a much wider range of configuration options.
|
||||
|
||||
**Make a note:** `Ctrl+b` is the default prefix in tmux, so to perform any action in tmux, you have to type the prefix first and then the required option.
|
||||
|
||||
**Suggested Read :** [List Of Terminal Emulator For Linux][1]
|
||||
|
||||
### tmux Features
|
||||
|
||||
* Create any number of windows
|
||||
* Create any number of panes in the single window
|
||||
* It allows vertical and horizontal splits
|
||||
* Detach and Re-attach window
|
||||
* Server-client architecture which allows users to share sessions between users
|
||||
* tmux offers wide range of configuration hacks
|
||||
|
||||
|
||||
|
||||
**Suggested Read :**
|
||||
- [tmate - Instantly Share Your Terminal Session To Anyone In Seconds][2]
|
||||
- [Teleconsole - A Tool To Share Your Terminal Session Instantly To Anyone In Seconds][3]
|
||||
|
||||
### How to Install tmux Command
|
||||
|
||||
The tmux command is pre-installed by default on most Linux systems. If not, follow the procedure below to get it installed.
|
||||
|
||||
For **`Debian/Ubuntu`** , use [APT-GET Command][4] or [APT Command][5] to install tmux.
|
||||
```
|
||||
$ sudo apt install tmux
|
||||
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** , use [YUM Command][6] to install tmux.
|
||||
```
|
||||
$ sudo yum install tmux
|
||||
|
||||
```
|
||||
|
||||
For **`Fedora`** , use [DNF Command][7] to install tmux.
|
||||
```
|
||||
$ sudo dnf install tmux
|
||||
|
||||
```
|
||||
|
||||
For **`Arch Linux`** , use [Pacman Command][8] to install tmux.
|
||||
```
|
||||
$ sudo pacman -S tmux
|
||||
|
||||
```
|
||||
|
||||
For **`openSUSE`** , use [Zypper Command][9] to install tmux.
|
||||
```
|
||||
$ sudo zypper in tmux
|
||||
|
||||
```
|
||||
|
||||
### How to Use tmux
|
||||
|
||||
Kick-start a tmux session by running the following command in a terminal. When tmux is started, it creates a new session with a single window and automatically logs you into your default shell with your user account.
|
||||
```
|
||||
$ tmux
|
||||
|
||||
```
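You can also give the session a descriptive name up front, which makes it easier to find later (mysession is just an example name):

```
$ tmux new -s mysession
```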
|
||||
|
||||
[![][10]![][10]][11]
|
||||
|
||||
You will get a screen similar to the above screenshot. tmux comes with a status bar which displays information about the current session's details, date, time, etc.
|
||||
|
||||
The status bar fields are explained below:
|
||||
|
||||
  * **`0 :`** Indicates the session number created by the tmux server. By default it starts with 0.
  * **`0:username@host: :`** 0 indicates the session number; username and hostname are those of the machine holding the current window.
  * **`~ :`** Indicates the current directory (we are in the home directory).
  * **`* :`** Indicates that the window is currently active.
  * **`Hostname :`** Shows the fully qualified hostname of the server.
  * **`Date & Time :`** Shows the current date and time.
|
||||
|
||||
|
||||
|
||||
### How to Split Window
|
||||
|
||||
tmux allows users to split a window vertically and horizontally. Let's see how to do that.
|
||||
|
||||
Press `(Ctrl+b), %` to split the pane vertically.
|
||||
[![][10]![][10]][13]
|
||||
|
||||
Press `(Ctrl+b), "` to split the pane horizontally.
|
||||
[![][10]![][10]][14]
|
||||
|
||||
### How to Move Between Panes
|
||||
|
||||
Let's say we have created a few panes and want to move between them. How do we do that? If you don't know how, then there is little point in using tmux. Use the following control keys to perform the actions. There are many ways to move between panes.
|
||||
|
||||
Press `(Ctrl+b), Left arrow` - To Move Left
|
||||
|
||||
Press `(Ctrl+b), Right arrow` - To Move Right
|
||||
|
||||
Press `(Ctrl+b), Up arrow` - To Move Up
|
||||
|
||||
Press `(Ctrl+b), Down arrow` - To Move Down
|
||||
|
||||
Press `(Ctrl+b), {` - To Move Left
|
||||
|
||||
Press `(Ctrl+b), }` - To Move Right
|
||||
|
||||
Press `(Ctrl+b), o` - Switch to next pane (left-to-right, top-down)
|
||||
|
||||
Press `(Ctrl+b), ;` - Move to the previously active pane.
|
||||
|
||||
For testing purposes, we are going to move between panes. We are currently in `pane2`, which shows the output of the `lsb_release -a` command.
|
||||
[![][10]![][10]][15]
|
||||
|
||||
And we are going to move to `pane0`, which shows the output of the `uname -a` command.
|
||||
[![][10]![][10]][16]
|
||||
|
||||
### How to Open/Create New Window
|
||||
|
||||
You can open any number of windows within one terminal. A terminal window can be split vertically & horizontally into what are called `panes`. Each pane contains its own, independently running terminal instance.
|
||||
|
||||
Press `(Ctrl+b), c` to create a new window.
|
||||
|
||||
Press `(Ctrl+b), n` to move to the next window.
|
||||
|
||||
Press `(Ctrl+b), p` to move to the previous window.
|
||||
|
||||
Press `(Ctrl+b), (0-9)` to immediately move to a specific window.
|
||||
|
||||
Press `(Ctrl+b), l` to move to the previously selected window.
|
||||
|
||||
I have two windows; the first window has three panes, which contain operating system distribution information, top command output & kernel information.
|
||||
[![][10]![][10]][17]
|
||||
|
||||
And the second window has two panes, which contain Linux distribution logos. Use the following commands to perform the actions.
|
||||
[![][10]![][10]][18]
|
||||
|
||||
Press `(Ctrl+b), w` to choose a window interactively.
|
||||
[![][10]![][10]][19]
|
||||
|
||||
### How to Zoom Panes
|
||||
|
||||
Suppose you are working in a pane which is very small and you want to zoom into it for further work. To do so, use the following key binding.
|
||||
|
||||
Currently we have three panes, and I'm working in `pane1`, which shows system activity using the **top** command; I'm going to zoom into it.
|
||||
[![][10]![][10]][17]
|
||||
|
||||
When you zoom a pane, it will hide all other panes and display only the zoomed pane in the window.
|
||||
[![][10]![][10]][20]
|
||||
|
||||
Press `(Ctrl+b), z` to zoom the pane and press it again, to bring the zoomed pane back.
|
||||
|
||||
### Display Pane Information
|
||||
|
||||
To know a pane's number and size, run the following command.
|
||||
|
||||
Press `(Ctrl+b), q` to briefly display pane indexes.
|
||||
[![][10]![][10]][21]
|
||||
|
||||
### Display Window Information
|
||||
|
||||
To know the window number, layout, size, number of panes associated with the window, etc., run the following command.
|
||||
|
||||
Just run `tmux list-windows` to view window information.
|
||||
[![][10]![][10]][22]
|
||||
|
||||
### How to Resize Panes
|
||||
|
||||
You may want to resize panes to fit your requirements. You have to press `(Ctrl+b), :`, then type the following details on the `yellow` colored bar at the bottom of the screen.
|
||||
[![][10]![][10]][23]
|
||||
|
||||
In the previous section we printed the pane indexes, which show pane sizes as well. To test this, we are going to grow the panes `10 cells upward`. See the following output: the size of pane1 & pane2 has increased from `55x21` to `55x31`.
|
||||
[![][10]![][10]][24]
|
||||
|
||||
**Syntax:** `(Ctrl+b), :` then type `resize-pane [options] [cells size]`
|
||||
|
||||
`(Ctrl+b), :` then type `resize-pane -D 10` to resize the current pane Down for 10 cells.
|
||||
|
||||
`(Ctrl+b), :` then type `resize-pane -U 10` to resize the current pane UPward for 10 cells.
|
||||
|
||||
`(Ctrl+b), :` then type `resize-pane -L 10` to resize the current pane Left for 10 cells.
|
||||
|
||||
`(Ctrl+b), :` then type `resize-pane -R 10` to resize the current pane Right for 10 cells.
|
||||
|
||||
### Detaching and Re-attaching tmux Session
|
||||
|
||||
One of the most powerful features of tmux is the ability to detach and re-attach sessions whenever you need.
|
||||
|
||||
Run a long-running process and press `Ctrl+b` followed by `d` to safely detach from your tmux session, leaving the process running.
|
||||
|
||||
**Suggested Read :** [How To Keep A Process/Command Running After Disconnecting SSH Session][25]
|
||||
|
||||
Now, run a long-running process. For demonstration purposes, we are going to move this server's backup to another remote server for disaster recovery (DR) purposes.
|
||||
|
||||
You will get output similar to the below after detaching the tmux session.
|
||||
```
|
||||
[detached (from session 0)]
|
||||
|
||||
```
|
||||
|
||||
Run the following command to list the available tmux sessions.
|
||||
```
|
||||
$ tmux ls
|
||||
0: 3 windows (created Tue Jan 30 06:17:47 2018) [109x45]
|
||||
|
||||
```
|
||||
|
||||
Now, re-attach the tmux session using the appropriate session ID, as follows.
|
||||
```
|
||||
$ tmux attach -t 0
|
||||
|
||||
```
|
||||
|
||||
### How to Close Panes & Window
|
||||
|
||||
Just type `exit` or hit `Ctrl-d` in the corresponding pane to close it, similar to closing a terminal. To close a window, press `(Ctrl+b), &`.
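To throw away an entire session in one go, tmux also provides a kill-session subcommand (session ID 0 from the earlier `tmux ls` output is assumed):

```
$ tmux kill-session -t 0
```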
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/tmux-a-powerful-terminal-multiplexer-emulator-for-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.2daygeek.com/author/magesh/
|
||||
[1]:https://www.2daygeek.com/category/terminal-emulator/
|
||||
[2]:https://www.2daygeek.com/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds/
|
||||
[3]:https://www.2daygeek.com/teleconsole-share-terminal-session-instantly-to-anyone-in-seconds/
|
||||
[4]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[5]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[6]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[7]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[8]:https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[9]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
||||
[10]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[11]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-1.png
|
||||
[13]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-2.png
|
||||
[14]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-3.png
|
||||
[15]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-4.png
|
||||
[16]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-5.png
|
||||
[17]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-8.png
|
||||
[18]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-6.png
|
||||
[19]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-7.png
|
||||
[20]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-9.png
|
||||
[21]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-10.png
|
||||
[22]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-14.png
|
||||
[23]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-11.png
|
||||
[24]:https://www.2daygeek.com/wp-content/uploads/2018/01/tmux-a-powerful-terminal-multiplexer-emulator-for-linux-13.png
|
||||
[25]:https://www.2daygeek.com/how-to-keep-a-process-command-running-after-disconnecting-ssh-session/
|
@ -0,0 +1,76 @@
|
||||
互联网化疗
|
||||
======
|
||||
|
||||
译注:本文作者 janit0r 被认为是 BrickerBot 病毒的作者。此病毒会攻击物联网上安全性不足的设备并使其断开和其他网络设备的连接。janit0r 宣称他使用这个病毒的目的是保护互联网的安全,避免这些设备被入侵者用于入侵网络上的其他设备。janit0r 称此项目为”互联网化疗“。janit0r 决定在 2017 年 12 月终止这个项目,并在网络上发表了这篇文章。
|
||||
|
||||
12/10 2017
|
||||
|
||||
### 1. 互联网化疗
|
||||
|
||||
互联网化疗是在 2016 年 11 月 到 2017 年 12 月之间的一个为期 13 个月的项目。它曾被称为“BrickerBot”、“错误的固件升级”、“勒索软件”、“大规模网络瘫痪”,甚至“前所未有的恐怖行为”。最后一个有点伤人了,费尔南德斯(译注:委内瑞拉电信公司 CANTV 的光纤网络曾在 2017 年 8 月受到病毒攻击,公司董事长曼努埃尔·费尔南德斯称这次攻击为[“前所未有的恐怖行为”][1]),但我想我大概不能让所有人都满意吧。
|
||||
|
||||
你可以从 http://91.215.104.140/mod_plaintext.py 下载我的代码模块,它可以基于 http 和 telnet 发送恶意请求(译注:这个链接已经失效,不过在 [Github][2] 上有备份)。因为平台的限制,模块里是混淆过的单线程 Python 代码,但请求的内容依然是明文,任何合格的程序员应该都能看得懂。看看这里面有多少恶意请求、0-day 漏洞和入侵技巧,花点时间让自己接受现实。然后想象一下,如果我是一个黑客,致力于创造出强大的 DDoS 生成器来勒索那些最大的互联网服务提供商和公司的话,互联网在 2017 年会受到怎样的打击。我完全可以让他们全部陷入混乱,并同时对整个互联网造成巨大的伤害。
|
||||
|
||||
我的 ssh 爬虫太危险了,不能发布出来。它包含很多层面的自动化,可以只利用一个被入侵的路由器就能够在设计有缺陷的互联网服务提供商的网络上平行移动并加以入侵。正是因为我可以征用数以万计的互联网服务提供商的路由器,而这些路由器让我知晓网络上发生的事情并给我源源不断的节点用来进行入侵行动,我才得以进行我的反物联网僵尸网络项目。我于 2015 年开始了我的非破坏性的提供商网络清理项目,于是当 Mirai 病毒入侵时我已经做好了准备来做出回应。主动破坏其他人的设备仍然是一个困难的决定,但无比危险的 CVE-2016-10372 漏洞让我别无选择。从那时起我就决定一不做二不休。
|
||||
|
||||
(译注:上一段中提到的 Mirai 病毒首次出现在 2016 年 8 月。它可以远程入侵运行 Linux 系统的网络设备并利用这些设备构建僵尸网络。本文作者 janit0r 宣称当 Mirai 入侵时他利用自己的 BrickerBot 病毒强制将数以万计的设备从网络上断开,从而减少 Mirai 病毒可以利用的设备数量。)
|
||||
|
||||
我在此警告你们,我所做的只是权宜之计,它并不足以在未来继续拯救互联网。坏人们正变得更加聪明,可能有漏洞的设备数量在持续增加,发生大规模的、能使网络瘫痪的事件只是时间问题。如果你愿意相信我曾经在一个持续 13 个月的项目中使上千万有漏洞的设备变得无法使用,那么不过分地说,如此严重的事件本可能在 2017 年就发生。
|
||||
|
||||
__你们应该意识到,只需要再有一两个严重的物联网漏洞,我们的网络就会严重瘫痪。__ 考虑到我们的社会现在是多么依赖数字网络,而计算机安全应急响应组和互联网服务提供商们又是多么地忽视问题的严重性,这种事件造成的伤害是无法估计的。互联网服务提供商在持续地部署有开放的控制端口的设备,而且即使像 Shodan 这样的服务可以轻而易举地发现这些问题,我国的计算机安全应急响应组还是似乎并不在意。而很多国家甚至都没有自己的计算机安全应急响应组。世界上许多的互联网服务提供商都没有雇佣任何熟知计算机安全问题的人,而是在出现问题的时候依赖于外来的专家来解决。我曾见识过大型互联网服务提供商在我的僵尸网络的调节之下连续多个月持续受损,但他们还是不能完全解决漏洞(几个好的例子是 BSNL、Telkom ZA、PLDT、某些时候的 PT Telkom,以及南半球大部分的大型互联网服务提供商)。只要看看 Telkom ZA 解决他们的 Aztech 调制解调器问题的速度有多慢,你就会开始理解现状有多么令人绝望。在 99% 的情况下,要解决这个问题只需要互联网服务提供商部署合理的访问控制列表并把部署在用户端的设备单独分段,但是几个月过去之后他们的技术人员依然没有弄明白。如果互联网服务提供商在经历了几周到几个月的针对他们设备的蓄意攻击之后仍然无法解决问题,我们又怎么能期望他们会注意到并解决 Mirai 在他们网络上造成的问题呢?世界上许多最大的互联网服务提供商对这些事情无知得令人发指,而这毫无疑问是最大的危险,但奇怪的是,这应该也是最容易解决的问题。
|
||||
|
||||
我已经尽自己的责任试着去让互联网多坚持一段时间,但我已经尽力了。接下来要交给你们了。即使很小的行动也是非常重要的。你们能做的事情有:
|
||||
|
||||
* 使用像 Shodan 之类的服务来检查你的互联网服务提供商的安全性,并驱使他们去解决他们网络上开放的 telnet、http、httpd、ssh 和 tr069 等端口。如果需要的话,可以把这篇文章给他们看。从来不存在什么好的理由来让这些端口可以从外界访问。开放控制端口是业余人士的错误。如果有足够的客户抱怨,他们也许真的会采取行动!
|
||||
* 用你的钱包投票!拒绝购买或使用任何“智能“产品,除非制造商保证这个产品能够而且将会收到及时的安全更新。在把你辛苦赚的钱交给提供商之前,先去查看他们的安全记录。为了更好的安全性,可以多花一些钱。
|
||||
* 游说你本地的政治家和政府官员,让他们改进法案来规范物联网设备,包括路由器、IP 照相机和各种”智能“设备。不论私有还是公有的公司目前都没有足够的动机去在短期内解决问题。这件事情和汽车或者通用电气的安全标准一样重要。
|
||||
* 考虑给像 GDI 基金会或者 Shadowserver 基金会这种缺少支持的白帽黑客组织贡献你的时间或者其他资源。这些组织和人能产生巨大的影响,并且他们可以很好地发挥你的能力来帮助互联网。
|
||||
* 最后,虽然希望不大,但可以考虑通过设立法律先例来让物联网设备成为一种”<ruby>诱惑性危险品<rt>attractive nuisance</rt></ruby>“(译注:attractive nuisance 是美国法律中的一个原则,意思是如果儿童在私人领地上因为某些对儿童有吸引力的危险物品而受伤,领地的主人需要负责,无论受伤的儿童是否是合法进入领地)。如果一个房主可以因为小偷或者侵入者受伤而被追责,我不清楚为什么设备的主人(或者互联网服务提供商和设备制造商)不应该因为他们的危险的设备被远程入侵所造成的伤害而被追责。连带责任原则应该适用于对设备应用层的入侵。如果任何有钱的大型互联网服务提供商不愿意为设立这样的先例而出钱(他们也许的确不会,因为他们害怕这样的先例会反过来让自己吃亏),我们甚至可以在这里还有在欧洲为这个行动而众筹。互联网服务提供商们:把你们在用来应对 DDoS 的带宽上省下的可观的成本当做我为这个目标的间接投资,也当做它的好处的证明吧。
|
||||
|
||||
### 2. 时间线
|
||||
|
||||
下面是这个项目中一些值得纪念的事件:
|
||||
|
||||
* 2016 年 11 月底的德国电信 Mirai 事故。我匆忙写出的最初的 TR069/64 请求只执行了 `route del default`,不过这已经足够引起互联网服务提供商去注意这个问题,而它引发的新闻头条警告了全球的其他互联网服务提供商来注意这个迫近的危机。
|
||||
* 大约 1 月 11 日 到 12 日,一些位于华盛顿特区的开放了 6789 控制端口的硬盘录像机被 Mirai 入侵并瘫痪,这上了很多头条新闻。我要给 Vemulapalli 点赞,她居然认为 Mirai 加上 /dev/urandom 一定是“非常复杂的勒索软件”(译注:Archana Vemulapalli 当时是华盛顿市政府的 CTO)。欧洲的那两个可怜人又怎么样了呢?
|
||||
* 2017 年 1 月底发生了第一起真正的大规模互联网服务提供商下架事件。Rogers Canada 的提供商 Hitron 非常粗心地推送了一个在 2323 端口上监听的无验证的 root shell(这可能是一个他们忘记关闭的 debug 接口)。这个惊天的失误很快被 Mirai 的僵尸网络所发现,造成大量设备瘫痪。
|
||||
* 在 2017 年 2 月,我注意到 Mirai 在这一年里的第一次扩张,Netcore/Netis 以及基于 Broadcom CLI 的调制解调器都遭受了攻击。BCM CLI 后来成为了 Mirai 在 2017 年的主要战场,黑客们和我自己都在这一年的余下时间里花大量时间寻找无数互联网服务提供商和设备制造商设置的默认密码。前面代码中的“broadcom”请求内容也许看上去有点奇怪,但它们是统计角度上最可能禁用那些大量的有问题的 BCM CLI 固件的序列。
|
||||
* 在 2017 年 3 月,我大幅提升了我的僵尸网络的节点数量并开始加入更多的网络请求。这是为了应对包括 Imeij、Amnesia 和 Persirai 在内的僵尸网络的威胁。大规模地禁用这些被入侵的设备也带来了新的一些问题。比如在 Avtech 和 Wificam 设备所泄露的登录信息当中,有一些用户名和密码非常像是用于机场和其他重要设施的,而英国政府官员大概在 2017 年 4 月 1 日关于针对机场和核设施的“实际存在的网络威胁”做出过警告。哎呀。
|
||||
* 这种更加激进的排查还引起了民间安全研究者的注意,安全公司 Radware 在 2017 年 4 月 6 日发表了一篇关于我的项目的文章。这个公司把它叫做“BrickerBot”。显然,如果我要继续增加我的物联网防御措施的规模,我必须想出更好的网络映射与检测方法来应对蜜罐或者其他有风险的目标。
|
||||
* 2017 年 4 月 11 日左右的时候发生了一件非常不寻常的事情。一开始这看上去和许多其他的互联网服务提供商下架事件相似,一个叫 Sierra Tel 的半本地的互联网服务提供商在一些 Zyxel 设备上使用了默认的 telnet 用户名密码 supervisor/zyad1234。一个 Mirai 使用者发现了这些有漏洞的设备,而我的僵尸网络紧随其后,2017年精彩绝伦的 BCM CLI 战争又开启了新的一场战斗。这场战斗并没有持续很久。它本来会和 2017 年的其他数百起互联网服务提供商下架事件一样,如果不是在尘埃落定之后发生的那件非常不寻常的事情的话。令人惊奇的是,这家互联网服务提供商并没有试着把这次网络中断掩盖成某种网络故障、电力超额或错误的固件升级。他们完全没有对客户说谎。相反,他们很快发表了新闻公告,说他们的调制解调器有漏洞,这让他们的客户们得以评估自己可能遭受的风险。这家全世界最诚实的互联网服务提供商为他们值得赞扬的公开行为而收获了什么呢?悲哀的是,它得到的只是批评和不好的名声。这依然是我记忆中最令人沮丧的“为什么我们得不到好东西”的例子,这很有可能也是为什么 99% 的安全错误都被掩盖而真正的受害者被蒙在鼓里的最主要原因。太多时候,“有责任心的信息公开”会直接变成“粉饰太平”的委婉说法。
|
||||
* 在 2017 年 4 月 14 日,国土安全部关于“BrickerBot 对物联网的威胁”做出了警告,我自己的政府把我作为一个网络威胁这件事让我觉得他们很不公平而且目光短浅。跟我相比,对美国人民威胁最大的难道不应该是那些部署缺乏安全性的网络设备的提供商和贩卖不成熟的安全方案的物联网设备制造商吗?如果没有我,数以百万计的人们可能还在用被入侵的设备和网络来处理银行业务和其他需要保密的交易。如果国土安全部里有人读到这篇文章,我强烈建议你重新考虑一下保护国家和公民究竟是什么意思。
|
||||
* 在 2017 年 4 月底,我花了一些时间改进我的 TR069/64 攻击方法,然后在 2017 年 5 月初,一个叫 Wordfence 的公司(现在叫 Defiant)报道称一个曾给 Wordpress 网站造成威胁的基于入侵 TR069 的僵尸网络很明显地衰减了。值得注意的是,同一个僵尸网络在几星期后使用了一个不同的入侵方式暂时回归了(不过这最终也同样被化解了)。
|
||||
* 在 2017 年 5 月,主机公司 Akamai 在它的 2017 年第一季度互联网现状报告中写道,相比于 2016 年第一季度,大型(超过 100 Gbps)DDoS 攻击数减少了 89%,而总体 DDoS 攻击数减少了 30%。鉴于大型 DDoS 攻击是 Mirai 的主要手段,我觉得这给这些月来在物联网领域的辛苦劳动提供了实际的支持。
|
||||
* 在夏天我持续地改进我的入侵技术军火库,然后在 7 月底我针对亚太互联网络信息中心的互联网服务提供商进行了一些测试。测试结果非常令人吃惊。造成的影响之一是数十万的 BSNL 和 MTNL 调制解调器被禁用,而这次中断事故在印度成为了头条新闻。考虑到当时在印度和中国之间持续升级的地缘政治压力,我觉得这个事故有很大的风险会被归咎于中国所为,于是我很罕见地决定公开承认是我所做。Catalin,我很抱歉你在报道这条新闻之后突然被迫放的“两天的假期”。
|
||||
* After working through APNIC and AFRINIC, on August 9 2017 I carried out a large-scale cleanup of the LACNIC region as well, causing problems for many providers across the continent. The attack was widely reported in Venezuela after millions of Movilnet mobile users lost connectivity. Although I am personally opposed to government regulation of the Internet, the Venezuelan case is worth noting. Many LACNIC providers and networks had been slowly degrading under months of constant conditioning by my botnet, yet Venezuelan providers quickly hardened their defenses and secured their infrastructure. I believe this is because Venezuela performs more intrusive deep packet inspection than other countries in the region. Something to think about.

* In August 2017, F5 Labs published a report titled "The Hunt for IoT: The Rise of Thingbots", in which the researchers expressed puzzlement over the recent lull in telnet activity. They speculated that the quiet might confirm that one or more very large cyber weapons were under construction (which I suppose was indeed true). To my knowledge this report was the most accurate assessment of my project's scale, yet remarkably the researchers were unable to draw any conclusions from it, even though they had gathered all the relevant clues onto a single page.

* In August 2017, Akamai's Q2 2017 State of the Internet report announced the first quarter in three years in which the provider had observed no large-scale (over 100 Gbps) attacks, with total DDoS attacks down 28% from Q1 2017. This looked like further validation of the cleanup work. The surprisingly good news was completely ignored by the mainstream media, which even in information security runs on an "if it bleeds, it leads" mentality. One more reason why we can't have nice things.

* After CVE-2017-7921 and CVE-2017-7923 were published in September 2017, I decided to take a closer look at Hikvision devices, and to my horror I discovered a way into the vulnerable firmware that the hackers had not yet found. So in mid-September I launched a worldwide cleanup operation. Over a million DVRs and cameras (mainly Hikvision and Dahua) were disabled over three weeks, and media outlets including IPVM.com wrote multiple stories about the attacks. Dahua and Hikvision mentioned or alluded to the attacks in press releases. A huge number of devices finally received firmware upgrades. Seeing the confusion the cleanup caused, I decided to write a [brief summary][3] for the CCTV manufacturers (please excuse the jarring language on that paste site). The shocking number of devices that were still vulnerable and online even after critical security patches had been released should be a wake-up call to everyone about how feeble today's IoT patching process is.

* Around September 28 2017, Verisign published a report stating that DDoS attacks in Q2 2017 were down 55% from Q1, with peak attack sizes down a dramatic 81%.

* On November 23 2017, the CDN provider Cloudflare reported that "in recent months, Cloudflare has seen a dramatic drop in simple attempts to flood our network with junk traffic". Cloudflare speculated that its own policy changes played some part, but the decline also overlaps clearly with the IoT cleanup activity.

* In late November 2017, Akamai's Q3 2017 State of the Internet report noted a slight 8% increase in DDoS attacks over the previous quarter. Although this was still a large reduction from Q3 2016, the uptick is a reminder that the danger persists.

* As a further reminder of that lurking danger, a new Mirai variant dubbed "Satori" began to emerge in November and December 2017. The growth rate this botnet achieved from a single 0-day exploit is particularly noteworthy. The episode underscores the Internet's precarious state, and why we are only one or two IoT exploits away from a truly large-scale incident. What happens when the next threat arrives and nobody stops it? Sinkholing and other white-hat or "lawful" mitigations won't be effective in 2018, just as they weren't effective in 2016. Perhaps governments could one day cooperate on an international anti-hacking task force for exceptionally severe threats to the Internet's survival, but I'm not holding my breath.

* The end of the year brought some alarmist news coverage of a new botnet known as "Reaper" and "IoTroop". I know some of you will eventually sneer at those who estimated its size at one to two million, but you should understand that security researchers have only partial insight into what happens on networks and hardware outside their control. In practice, the researchers could not have known, or even guessed, that most of the vulnerable devices had already been disabled by the time that botnet appeared. Give "Reaper" one or two fresh, unhandled 0-day exploits and it would become every bit as terrifying as our worst fears.

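As promised in the first timeline entry, here is a minimal sketch of what the `route del default` payload does on a typical Linux-based modem. This is illustrative only, assuming a net-tools style `route` binary is present; it is not the actual bot code:

```
# Illustrative sketch only -- not the actual bot payload.
# Deleting the default route severs the modem's upstream (WAN) connectivity
# while leaving the device and its LAN otherwise untouched.
route -n            # list the kernel routing table; the 0.0.0.0 entry is the default gateway
route del default   # remove that entry, so traffic to the wider Internet has nowhere to go
```
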
### 3. Parting words

I'm sorry to leave you all in this situation, but the threats to my personal safety no longer allow me to continue. I have made many enemies. If you want to help, see the list of things to do above. Good luck.

Some will criticize me as irresponsible, but that misses the point entirely. The real point is that if someone with no hacker background like me could do what I did, then someone better than me could have done far, far worse things to the Internet in 2017. I'm not the problem, and I'm not here to play by anyone's rules. I'm only the messenger. The sooner you realize that, the better.

-Dr Cyborkian a.k.a. janit0r, conditioner of "terminally ill" devices.

--------------------------------------------------------------------------------

via: https://ghostbin.com/paste/q2vq2

Author: janit0r
Translator: [yixunx](https://github.com/yixunx)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[1]:https://www.telecompaper.com/news/venezuelan-operators-hit-by-unprecedented-cyberattack--1208384
[2]:https://github.com/JeremyNGalloway/mod_plaintext.py
[3]:http://depastedihrn3jtw.onion.link/show.php?md5=62d1d87f67a8bf485d43a05ec32b1e6f
translated/tech/20180120 The World Map In Your Terminal.md
@ -0,0 +1,111 @@
The World Map In Your Terminal
======

I stumbled upon an interesting utility the other day: a world map in the terminal! Yes, it is that cool. Say hello to **MapSCII**, a braille and ASCII world map renderer for xterm-compatible terminals. It supports GNU/Linux, Mac OS and Windows. I assumed it was just another project hosted on GitHub, but I was wrong! What they have done is impressive: you can use the mouse pointer to drag the map and zoom in and out anywhere in the world. Its other notable features include:

* Discover points of interest around any given location
* Highly customizable layer styling, with support for [Mapbox styles][1]
* Connect to any public or private vector tile server
* Or use the already-provided, optimized [OSM2VectorTiles][2]-based server
* Work offline and discover local [VectorTile][3]/[MBTiles][4] files
* Compatible with most Linux and OSX terminals
* Highly optimized algorithms for a smooth experience

### Display the World Map in Your Terminal Using MapSCII

To open the map, simply run the following command from your terminal:
```
telnet mapscii.me
```

Here is how the world map looks in my terminal:

[![][5]][6]

Cool, isn't it?

To switch to braille view, press **c**:

[![][5]][7]

Type **c** again to switch back to the previous view.

To scroll the map, use the **Up**, **Down**, **Left** and **Right** arrow keys. To zoom in and out of a location, use the **a** and **z** keys. You can also zoom in or out with your mouse's scroll wheel. To quit the map, press **q**.

As I said before, don't dismiss this as a simple project. Click anywhere on the map and press **a** to zoom in.

Here are some sample screenshots taken after zooming in:

[![][5]][8]

I can zoom in to view the states in my country (India):

[![][5]][9]

And the districts within a state (Tamilnadu):

[![][5]][10]

Even the [taluks][11] (townships) within a district:

[![][5]][12]

And the place where I finished my schooling:

[![][5]][13]

Even though it is only a small town, MapSCII displays it accurately. MapSCII uses [**OpenStreetMap**][14] to collect its data.

### Install MapSCII Locally

Liked it? Great! You can install MapSCII on your own system.

Make sure Node.js is installed on your system. If it isn't yet, see the following link:

[Install NodeJS on Linux][15]
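
Before installing, it may help to confirm that Node.js and npm are actually available; a quick check (the exact versions printed will vary by distribution):

```
node --version   # any reasonably recent Node.js release should do
npm --version    # npm ships together with Node.js and is what installs MapSCII
```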

Then, run the following command to install it:
```
sudo npm install -g mapscii
```

To launch MapSCII, run:
```
mapscii
```
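
As a side note, if you would rather not install anything globally, npm's bundled `npx` runner can fetch and launch the package in one step (assuming the npm package is named `mapscii`, as the install command above indicates):

```
npx mapscii   # downloads mapscii to a local cache and runs it, no global install needed
```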

Have fun! More good stuff is on the way. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/mapscii-world-map-terminal/

Author: [SK][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.mapbox.com/mapbox-gl-style-spec/
[2]:https://github.com/osm2vectortiles
[3]:https://github.com/mapbox/vector-tile-spec
[4]:https://github.com/mapbox/mbtiles-spec
[5]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[6]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-1-2.png
[7]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-2.png
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-3.png
[9]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-4.png
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-5.png
[11]:https://en.wikipedia.org/wiki/Tehsils_of_India
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-6.png
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/MapSCII-7.png
[14]:https://www.openstreetmap.org/
[15]:https://www.ostechnix.com/install-node-js-linux/