又一波你可能不知道的 Linux 命令行网络监控工具
===============================================================================

对任何规模的业务来说,网络监控工具都是一项重要的功能。网络监控的目标可能千差万别,比如保障长期的网络服务、安全保护、对性能进行排查、网络使用统计等。由于目标不同,网络监控器使用很多不同的方式来完成任务,比如对包层面的嗅探、对数据流层面的统计、向网络中注入探测流量、分析服务器日志等。

尽管有许多专用的网络监控系统可以365天24小时监控,但您依旧可以在特定的情况下使用命令行式的网络监控器,它们在某些场景下非常有用。如果您是系统管理员,那您就应该有亲身使用一些知名的命令行式网络监控器的经历。这里有一份**Linux上流行且实用的网络监控器**列表。

### 包层面的嗅探器 ###

在这个类别下,监控工具在链路上捕捉独立的包,分析它们的内容,展示解码后的内容或者包层面的统计数据。这些工具在最底层对网络进行监控、管理,能进行最细粒度的监控,其代价是影响网络I/O,且分析过程开销较大。

1. **dhcpdump**:一个命令行式的DHCP流量嗅探工具,捕捉DHCP的请求/回复流量,并以用户友好的方式显示解码的DHCP协议消息。这是一款排查DHCP相关故障的实用工具。

2. **[dsniff][1]**:一个基于命令行的嗅探、伪造和劫持的工具合集,被设计用于网络审查和渗透测试。它可以嗅探多种信息,比如密码、NFS流量、email消息、网络地址等。

3. **[httpry][2]**:一个HTTP报文嗅探器,用于捕获、解码HTTP请求和回复报文,并以用户友好的方式显示这些信息。(LCTT 译注:[延伸阅读](https://linux.cn/article-4148-1.html)。)

4. **IPTraf**:基于命令行的网络统计数据查看器。它实时显示包层面、连接层面、接口层面、协议层面的报文/字节数。抓包过程由协议过滤器控制,且操作过程全部是菜单驱动的。(LCTT 译注:[延伸阅读](https://linux.cn/article-5430-1.html)。)



5. **[mysql-sniffer][3]**:一个用于抓取、解码MySQL请求相关的数据包的工具。它以可读的方式显示最频繁或全部的请求。

6. **[ngrep][4]**:在网络报文中执行grep。它能实时抓取报文,并用正则表达式或十六进制表达式的方式匹配(过滤)报文。它是一个可以对异常流量进行检测、存储,或者对实时流中特定模式报文进行抓取的实用工具。

7. **[p0f][5]**:一个被动的、基于包嗅探的指纹采集工具,可以可靠地识别操作系统、NAT或者代理设置、网络链路类型以及许多其它与活动的TCP连接相关的属性。

8. **pktstat**:一个命令行式的工具,通过实时分析报文,显示连接带宽使用情况以及相关的协议(例如,HTTP GET/POST、FTP、X11)等描述信息。



9. **Snort**:一个入侵检测和预防工具,通过规则驱动的协议分析和内容匹配,来检测/预防活跃流量中各种各样的后门、僵尸网络、网络钓鱼、间谍软件攻击。

10. **tcpdump**:一个命令行的嗅探工具,可以基于过滤表达式抓取网络中的报文,分析报文,并且在包层面输出报文内容以便于包层面的分析。它在许多网络相关的错误排查、网络程序debug、或[安全][6]监测方面应用广泛。

11. **tshark**:一个与Wireshark窗口程序一起使用的命令行式的嗅探工具。它能捕捉、解码网络上的实时报文,并能以用户友好的方式显示其内容。
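以上面列表中最常用的tcpdump为例,下面是一个典型的抓包与离线分析的用法草例(接口名eth0只是示例,请按实际环境替换;抓包需要root权限):

```shell
# 在 eth0 上抓取 10 个与 80 端口相关的 TCP 报文,不做名字解析(-nn),保存到文件
sudo tcpdump -i eth0 -nn -c 10 'tcp port 80' -w http.pcap

# 之后可以脱离现场、离线读取并分析抓到的报文
tcpdump -nn -r http.pcap
```

过滤表达式(如 `tcp port 80`、`host 10.0.0.1 and not port 22`)是这类包层面工具的共同语言,ngrep、tshark 也接受类似的写法。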

### 流/进程/接口层面的监控 ###

在这个分类中,网络监控器通过把流量按照流、相关进程或接口分类,收集每个流、每个进程、每个接口的统计数据。其信息的来源可以是libpcap抓包库或者sysfs内核虚拟文件系统。这些工具的监控成本很低,但是缺乏包层面的检视能力。

12. **bmon**:一个基于命令行的带宽监测工具,可以显示各种接口相关的信息,不但包括接收/发送的总量/平均值统计数据,而且拥有历史带宽使用视图。



13. **[iftop][7]**:一个带宽使用监测工具,可以实时显示各个网络连接的带宽使用情况。它对所有连接按带宽使用情况排序,并通过ncurses的界面进行可视化。用它可以方便地找出哪个连接消耗了最多的带宽。(LCTT 译注:[延伸阅读](https://linux.cn/article-1843-1.html)。)

14. **nethogs**:一个基于ncurses显示的进程监控工具,提供进程相关的实时的上行/下行带宽使用信息。它对检测占用大量带宽的进程很有用。(LCTT 译注:[延伸阅读](https://linux.cn/article-2808-1.html)。)

15. **netstat**:一个显示许多TCP/UDP网络堆栈统计信息的工具,诸如打开的TCP/UDP连接数、各网络接口的发送/接收统计、路由表、协议/套接字的统计信息和属性。当您诊断与网络堆栈相关的性能、资源使用问题时它很有用。

16. **[speedometer][8]**:一个可视化某个接口发送/接收的带宽使用的历史趋势,并且基于ncurses的条状图进行显示的终端工具。



17. **[sysdig][9]**:一个可以通过统一的界面对各个Linux子系统进行系统级综合性调试的工具。它的网络监控模块可以监控在线或离线的、许多进程/主机相关的网络统计数据,例如带宽、连接/请求数等。(LCTT 译注:[延伸阅读](https://linux.cn/article-4341-1.html)。)

18. **tcptrack**:一个TCP连接监控工具,可以显示活动的TCP连接,包括源/目的IP地址/端口、TCP状态、带宽使用等。



19. **vnStat**:一个存储并显示每个接口的历史接收/发送带宽视图(例如,当前、每日、每月)的流量监控器。作为一个后台守护进程,它收集并存储统计数据,包括接口带宽使用率和传输字节总数。(LCTT 译注:[延伸阅读](https://linux.cn/article-5256-1.html)。)

### 主动网络监控器 ###

不同于前面提到的被动监听工具,这个类别的工具在监测时会主动“注入”探测流量到网络中,并收集相应的反馈。监测目标包括路由路径、可用带宽、丢包率、延时、抖动(jitter)、系统设置或者缺陷等。

20. **[dnsyo][10]**:一个DNS检测工具,能够针对多达1500个不同网络上的开放解析器执行DNS查询。它在您检查DNS传播或排查DNS设置的时候很有用。

21. **[iperf][11]**:一个TCP/UDP带宽测量工具,能够测量两个端点间最大可用带宽。它通过在两个主机间单向或双向地发送TCP/UDP探测流量来测量可用的带宽。它在监测网络容量、调优网络协议栈参数时很有用。一个叫做[netperf][12]的变种拥有更多的功能及更好的统计数据。

22. **[netcat][13]/socat**:通用的网络调试工具,可以对TCP/UDP套接字进行读、写或监听。它通常和其他的程序或脚本结合起来在后端对网络传输或端口进行监听。(LCTT 译注:[延伸阅读](https://linux.cn/article-1171-1.html)。)

23. **nmap**:一个命令行的端口扫描和网络发现工具。它依赖于若干基于TCP/UDP的扫描技术来查找开放的端口、活动的主机或者本地网络中存在的操作系统。它在你审查本地主机漏洞或者建立维护所用的主机映射时很有用。[zmap][14]是一个类似的替代品,是一个用于互联网范围的扫描工具。(LCTT 译注:[延伸阅读](https://linux.cn/article-2561-1.html)。)

24. **ping**:一个常用的网络测试工具,通过交换ICMP的echo和reply报文来实现其功能。它在测量路由的RTT、丢包率以及检测远端系统防火墙规则时很有用。ping的变种有更漂亮的界面(例如,[noping][15])、多协议支持(例如,[hping][16])或者并行探测能力(例如,[fping][17])。(LCTT 译注:[延伸阅读](https://linux.cn/article-2303-1.html)。)



25. **[sprobe][18]**:一个启发式推断本地主机和任意远端IP地址之间的网络带宽瓶颈的命令行工具。它使用TCP三次握手机制来评估带宽的瓶颈。它在检测大范围网络性能和路由相关的问题时很有用。

26. **traceroute**:一个能发现从本地到远端主机的第三层路由/转发路径的网络发现工具。它发送限制了TTL的探测报文,收集中间路由器的ICMP反馈信息。它在排查低速网络连接或者路由相关的问题时很有用。traceroute的变种有更好的RTT统计功能(例如,[mtr][19])。
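以iperf为例,测量两台主机间可用带宽的典型步骤大致如下(主机名 server.example.com 只是示例,请替换为实际的服务端地址):

```shell
# 在一端以服务端模式启动 iperf,默认监听 TCP 5001 端口
iperf -s

# 在另一端作为客户端连接服务端,测量 10 秒内的 TCP 吞吐量
iperf -c server.example.com -t 10

# 加上 -r 可以先正向、再反向各测一次,得到双向带宽
iperf -c server.example.com -t 10 -r
```

由于探测流量本身会占满链路,这类主动测量最好安排在业务低峰期进行。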

### 应用日志解析器 ###

在这个类别下的网络监测器把特定的服务器应用程序作为目标(例如,web服务器或者数据库服务器)。由服务器程序产生或消耗的网络流量通过它的日志被分析和监测。不像前面提到的网络层的监控器,这个类别的工具能够在应用层面分析和监控网络流量。

27. **[GoAccess][20]**:一个针对Apache和Nginx服务器流量的交互式查看器。基于对获取到的日志的分析,它能展示包括日访问量、最多请求、客户端操作系统、客户端位置、客户端浏览器等在内的多个实时的统计信息,并以滚动方式显示。



28. **[mtop][21]**:一个面向MySQL/MariaDB服务器的命令行监控器,它可以将开销最大的查询和当前数据库服务器负载以可视化的方式显示出来。它在您优化MySQL服务器性能、调优服务器参数时很有用。



29. **[ngxtop][22]**:一个面向Nginx和Apache服务器的流量监测工具,能够以类似top命令的方式可视化地显示Web服务器的流量。它解析web服务器的访问日志文件,并收集某个目的地或请求的流量统计信息。
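以GoAccess为例,分析一个已有的访问日志只需一条命令(日志路径 /var/log/apache2/access.log 只是示例;首次运行时可能需要按提示选择日志格式):

```shell
# 在终端里交互式查看访问统计报告
goaccess -f /var/log/apache2/access.log
```

由于这类工具只读日志文件,它们可以在离线机器上分析从生产服务器拷贝来的日志,对线上服务没有任何额外开销。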

### 总结 ###

在这篇文章中,我展示了许多命令行式监测工具,从最底层的包层面的监控器到最高层应用程序层面的网络监控器。了解每个工具的作用是一回事,选择哪个工具使用又是另外一回事。没有哪一个工具可以作为您每天使用的万能解决方案,一个好的系统管理员应该能决定哪个工具更适合当前的环境。希望这个列表对此有所帮助。

欢迎您通过评论来改进这个列表的内容!

--------------------------------------------------------------------------------

via: http://xmodulo.com/useful-command-line-network-monitors-linux.html

作者:[Dan Nanni][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/nanni
[1]:http://www.monkey.org/~dugsong/dsniff/
[2]:http://xmodulo.com/monitor-http-traffic-command-line-linux.html
[3]:https://github.com/zorkian/mysql-sniffer
[4]:http://ngrep.sourceforge.net/
[5]:http://lcamtuf.coredump.cx/p0f3/
[6]:http://xmodulo.com/recommend/firewallbook
[7]:http://xmodulo.com/how-to-install-iftop-on-linux.html
[8]:https://excess.org/speedometer/
[9]:http://xmodulo.com/monitor-troubleshoot-linux-server-sysdig.html
[10]:http://xmodulo.com/check-dns-propagation-linux.html
[11]:https://iperf.fr/
[12]:http://www.netperf.org/netperf/
[13]:http://xmodulo.com/useful-netcat-examples-linux.html
[14]:https://zmap.io/
[15]:http://noping.cc/
[16]:http://www.hping.org/
[17]:http://fping.org/
[18]:http://sprobe.cs.washington.edu/
[19]:http://xmodulo.com/better-alternatives-basic-command-line-utilities.html#mtr_link
[20]:http://goaccess.io/
[21]:http://mtop.sourceforge.net/
[22]:http://xmodulo.com/monitor-nginx-web-server-command-line-real-time.html
既然float不能表示所有的int,那为什么在类型转换时C++将int转换成float?
=============

### 问题:

代码如下:

    if (i == f) // 执行某段代码

编译器会将i转换成float类型,然后比较这两个float的大小,但是float能够表示所有的int吗?为什么没有将int和float都转换成double类型再进行比较呢?

### 回答:

在整型数的演变中,当`int`变成`unsigned`时,会丢掉负数部分(有趣的是,这样的话,`0u < -1`就是对的了)。

    if ((double) i < (double) f)

顺便提一下,在这个问题中有趣的是,`unsigned`的优先级高于`int`,所以把`int`和`unsigned`进行比较时,最终进行的是unsigned类型的比较(开头提到的`0u < -1`就是这个道理)。我猜测这可能是因为在早些时候(计算机发展初期),人们认为`unsigned`比`int`在所表示的数值范围上受到的限制更小:暂时不需要符号位,所以可以用这额外的一位来表示更大的数值范围。如果你觉得`int`可能会溢出,那么就使用`unsigned`好了——在用16位表示`int`的年代,这种担心更加明显。

----

via: [stackoverflow](http://stackoverflow.com/questions/28010565/why-does-c-promote-an-int-to-a-float-when-a-float-cannot-represent-all-int-val/28011249#28011249)

作者:[wintermute][a]
译者:[KayGuoWhu](https://github.com/KayGuoWhu)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
iptraf:一个实用的TCP/UDP网络监控工具
================================================================================

[iptraf][1]是一个基于ncurses的IP局域网监控器,用来生成包括TCP信息、UDP计数、ICMP和OSPF信息、以太网负载信息、节点状态信息、IP校验和错误等等在内的统计数据。

它基于ncurses的用户界面可以使用户免于记忆繁琐的命令行开关。

### 特征 ###

- IP流量监控器,用来显示你的网络中的IP流量变化信息,包括TCP标识信息、包以及字节计数,ICMP细节,OSPF包类型。
- 简单的和详细的接口统计数据,包括IP、TCP、UDP、ICMP、非IP以及其他的IP包计数、IP校验和错误,接口活动、包大小计数。
- TCP和UDP服务监控器,能够显示常见的TCP和UDP应用端口上发送的和接收的包的数量。
- 局域网数据统计模块,能够发现在线的主机,并显示其上的数据活动统计信息。
- TCP、UDP及其他协议的显示过滤器,允许你只查看感兴趣的流量。
- 日志功能。
- 支持以太网、FDDI、ISDN、SLIP、PPP以及本地回环接口类型。
- 利用Linux内核内置的原始套接字接口,因此可以用于各种受支持的网卡。
- 全屏、菜单驱动的操作。

### 安装方法 ###

**Ubuntu以及其衍生版本**

    sudo apt-get install iptraf

**Arch Linux以及其衍生版本**

    sudo pacman -S iptraf

**Fedora以及其衍生版本**

    sudo yum install iptraf

### 用法 ###

如果不加任何命令行选项地运行**iptraf**命令,程序将进入一种交互模式,通过主菜单可以访问多种功能。
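除了交互模式,iptraf 也支持用命令行开关直接进入某个监控面板(接口名 eth0 只是示例,监控需要 root 权限):

```shell
# 直接进入 eth0 的 IP 流量监控器
sudo iptraf -i eth0

# 查看 eth0 的详细接口统计
sudo iptraf -d eth0

# 监控 eth0 上的 TCP/UDP 服务端口流量,并在 5 分钟后自动退出
sudo iptraf -s eth0 -t 5
```

配合 `-t` 超时选项,这种非交互用法也适合放在定时任务里定期采样。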



简易的上手导航菜单。



选择要监控的接口。



接口**ppp0**处的流量。



试试吧!

--------------------------------------------------------------------------------

via: http://www.unixmen.com/iptraf-tcpudp-network-monitoring-utility/

作者:[Enock Seth Nyamador][a]
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.unixmen.com/author/seth/
[1]:http://iptraf.seul.org/about.html
自动化部署基于Docker的Rails应用
================================================================================



[TL;DR] 这是系列文章的第三篇,讲述了我的公司是如何将基础设施从PaaS迁移到Docker上的。

- [第一部分][1]:谈论了我接触Docker之前的经历;
- [第二部分][2]:一步步搭建一个安全而又私有的registry。

----------

在系列文章的最后一篇里,我们将用一个实例来学习如何自动化整个部署过程。

### 基本的Rails应用程序 ###

    $ rvm use 2.2.0
    $ rails new docker-test && cd docker-test

创建一个基本的控制器:

    $ rails g controller welcome index

然后编辑 `routes.rb`,以便让该项目的根指向我们新创建的welcome#index方法:

    root 'welcome#index'

在终端运行 `rails s`,然后打开浏览器,访问 [http://localhost:3000][3],你会进入到索引界面当中。我们不准备给应用加上多么神奇的东西,这只是一个基础的实例,当我们将要创建并部署容器的时候,用它来验证一切是否运行正常。

### 安装webserver ###

我们打算使用Unicorn当做我们的webserver。在Gemfile中添加 `gem 'unicorn'` 和 `gem 'foreman'`,然后安装它们(运行 `bundle install` 命令)。

启动Rails应用时,需要先配置好Unicorn,所以我们将一个**unicorn.rb**文件放在**config**目录下。[这里有一个Unicorn配置文件的例子][4],你可以直接复制粘贴Gist的内容。

接下来,在项目的根目录下添加一个Procfile,以便可以使用foreman启动应用,内容如下:

    web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb

现在运行**foreman start**命令启动应用,一切都将正常运行,并且你将能够在 [http://localhost:5000][5] 上看到一个正在运行的应用。

### 构建一个Docker镜像 ###

现在我们构建一个镜像来运行我们的应用。在这个Rails项目的根目录下,创建一个名为**Dockerfile**的文件,然后粘贴进以下内容:

    # 基于镜像 ruby 2.2.0
    FROM ruby:2.2.0

    # 安装所需的库和依赖
    RUN apt-get update && apt-get install -qy nodejs postgresql-client sqlite3 --no-install-recommends && rm -rf /var/lib/apt/lists/*

    # 设置 Rails 版本
    ENV RAILS_VERSION 4.1.1

    # 安装 Rails
    RUN gem install rails --version "$RAILS_VERSION"

    # 创建运行代码的目录
    RUN mkdir -p /usr/src/app
    WORKDIR /usr/src/app

    # 使 webserver 可以在容器外面访问
    EXPOSE 3000

    # 设置环境变量
    ENV PORT=3000

    # 启动 web 应用
    CMD ["foreman","start"]

    # 安装所需的 gems
    ADD Gemfile /usr/src/app/Gemfile
    ADD Gemfile.lock /usr/src/app/Gemfile.lock
    RUN bundle install --without development test

    # 将 rails 项目(和 Dockerfile 同一个目录)添加到项目目录
    ADD ./ /usr/src/app

    # 运行 rake 任务
    RUN RAILS_ENV=production rake db:create db:migrate

使用上述Dockerfile,执行下列命令创建一个镜像(确保**boot2docker**已经启动并在运行当中):

    $ docker build -t localhost:5000/your_username/docker-test .

然后,如果一切正常,长长的日志输出的最后一行应该类似于:

    Successfully built 82e48769506c
    $ docker images
    REPOSITORY                                 TAG       IMAGE ID       CREATED              VIRTUAL SIZE
    localhost:5000/your_username/docker-test   latest    82e48769506c   About a minute ago   884.2 MB

让我们运行一下容器试试!

    $ docker run -d -p 3000:3000 --name docker-test localhost:5000/your_username/docker-test

通过你的boot2docker虚拟机的3000号端口(我的是[http://192.168.59.103:3000][6]),你可以访问你的Rails应用。(如果不清楚你的boot2docker虚拟机地址,输入`$ boot2docker ip`命令查看。)
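也可以在终端里快速验证容器是否已经正常对外服务(假设 boot2docker 虚拟机在运行,且本机装有 curl):

```shell
# 查看容器是否在运行,以及 3000 端口的映射情况
docker ps

# 访问 boot2docker 虚拟机的 3000 端口,应返回 Rails 首页的响应头
curl -I "http://$(boot2docker ip):3000"
```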

### 使用shell脚本进行自动化部署 ###

前面的文章(即第一篇和第二篇)已经告诉了你如何将新构建的镜像推送到私有registry中,并将其部署在服务器上,所以我们跳过这一部分,直接开始自动化进程。

我们将要定义3个shell脚本,最后再使用rake将它们捆绑在一起。

### 清除 ###

每当我们构建镜像的时候:

- 停止并重启boot2docker;
- 清理Docker孤儿镜像(那些没有标签,并且不再被容器所使用的镜像)。

在你的工程根目录下的**clean.sh**文件中输入下列命令。

    $ chmod +x clean.sh

### 构建 ###

构建的过程基本上和之前我们所做的(docker build)内容相似。在工程的根目录下创建一个**build.sh**脚本,填写如下内容:

    docker build -t localhost:5000/your_username/docker-test .

记得给脚本执行权限。

### 部署 ###

最后,创建一个**deploy.sh**脚本,在里面填进如下内容:

    # 打开 boot2docker 到私有 registry 的 SSH 连接
    boot2docker ssh "ssh -o 'StrictHostKeyChecking no' -i /Users/username/.ssh/id_boot2docker -N -L 5000:localhost:5000 root@your-registry.com &" &

    # 在推送前先确认该 SSH 通道已经打开
    echo Waiting 5 seconds before pushing image.

    echo 5...
    echo Starting push!
    docker push localhost:5000/username/docker-test

如果你不理解这其中的含义,请先仔细阅读[第二部分][2]。

给脚本加上执行权限。

这一点都不费工夫,可是事实上开发者比你想象的要懒得多!那么咱们就索性再懒一点!

我们最后再把工作好好整理一番,现在要将三个脚本通过rake捆绑在一起。

为了更简单一点,你可以在工程根目录下已经存在的Rakefile中添加几行代码,打开Rakefile文件,把下列内容粘贴进去。

    namespace :docker do
      desc "Remove docker container"

Deploy独立于build,build独立于clean。所以每次我们输入命令运行:

    $ rake docker:deploy

接下来就是见证奇迹的时刻了。一旦镜像文件被上传(第一次可能花费较长的时间),你就可以ssh登录产品服务器,并且(通过SSH管道)把docker镜像拉取到服务器并运行了。多么简单!

也许你需要一段时间来习惯,但是一旦成功,它几乎与用Heroku部署一样简单。

备注:像往常一样,请让我了解你的意见。我不敢保证这种方法是最好、最快或者最安全的Docker开发方法,但是它对我们确实奏效。

- 关于怎样搭建私有的registry,请参考[这里][10]。

--------------------------------------------------------------------------------

via: http://cocoahunter.com/2015/01/23/docker-3/

作者:[Michelangelo Chasseur][a]
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://cocoahunter.com/author/michelangelo/
[1]:https://linux.cn/article-5339-1.html
[2]:https://linux.cn/article-5379-1.html
[3]:http://localhost:3000/
[4]:https://gist.github.com/chasseurmic/0dad4d692ff499761b20
[5]:http://localhost:5000/
[6]:http://192.168.59.103:3000/
[7]:http://cocoahunter.com/2015/01/23/docker-3/#fn:1
[8]:http://cocoahunter.com/2015/01/23/docker-3/#fn:2
[9]:http://cocoahunter.com/2015/01/23/docker-2/
[10]:http://cocoahunter.com/2015/01/23/docker-2/
25个 Git 进阶技巧
================================================================================
我已经使用git差不多18个月了,觉得自己对它应该已经非常了解。然后来自GitHub的[Scott Chacon][1]过来给LVS做培训([LVS是一个赌博软件供应商和开发商][2],从2013年开始的合同),而我在第一天里就学到了很多。

作为一个对git感觉良好的人,我觉得分享从社区里掌握的一些有价值的信息,也许能帮人解决一些问题,而不用做太深入的研究。

#### 2. Git是基于指针的 ####

保存在git里的一切都是文件。当你创建一个提交的时候,会建立一个包含你的提交信息和相关数据(名字、邮件地址、日期/时间、前一个提交,等等)的文件,并把它链接到一个树文件中。这个树文件中包含了对象或其他树的列表。这里提到的对象(或二进制大对象)是和本次提交相关的实际内容(它也是一个文件,另外,尽管文件名并没有包含在对象里,但是存储在树中)。所有这些文件都使用对象的SHA-1哈希值作为文件名。

用这种方式,分支和标签就是简单的文件(基本上是这样),包含指向该提交的SHA-1哈希值。使用这些索引会带来优秀的灵活性和速度,比如创建一个新分支就是简单地用分支名字和所分出的那个提交的SHA-1索引来创建一个文件。当然,你不需要自己做这些,而只要使用Git命令行工具(或者GUI),但是实际上就是这么简单。
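可以在一个一次性的临时仓库里亲手验证这些“指针文件”(假设本机已安装 git):

```shell
# 新建一个临时仓库并做一次空提交
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"

# HEAD 只是一个指向当前分支的小文件
cat .git/HEAD              # 类似 “ref: refs/heads/master”

# 分支指向的就是提交的 SHA-1
git rev-parse HEAD

# 用 cat-file 查看对象类型和提交对象的内容(tree、author 等)
git cat-file -t HEAD
git cat-file -p HEAD
```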

你也许听说过叫HEAD的索引。这只是简单的一个文件,包含了你当前指向的那个提交的SHA-1索引值。如果你正在解决一次合并冲突然后看到了HEAD,这并不是一个特别的分支或分支上的一个必需的特殊位置,只是标明你当前所在位置。

所有的分支指针都保存在.git/refs/heads里,HEAD在.git/HEAD里,而标签保存在.git/refs/tags里 - 自己可以随便进去看看。

#### 3. 两个爸爸(父节点) - 你没看错! ####

在历史中查看一个合并提交的信息时,你将看到有两个父节点(不同于工作副本上的常规提交的情况)。第一个父节点是你所在的分支,第二个是你合并过来的分支。

#### 4. 合并冲突 ####

目前我相信你碰到过合并冲突并且解决过。通常是编辑一下文件,去掉<<<<、====、>>>>标志,保留需要留下的代码。有时能够看到这两个修改之前的代码会很不错,比如,在这两个现在冲突的分支之前的改动。下面是一种方式:

    $ git diff --merge
    diff --cc dummy.rb
      end
    end

如果是二进制文件,比较差异就没那么简单了……通常你要做的就是测试这个二进制文件的两个版本来决定保留哪个(或者在二进制文件编辑器里手工复制冲突部分)。从一个特定分支获取文件拷贝(比如说你在合并master和feature132两个分支):

    $ git checkout master flash/foo.fla # 或者...
    $ git checkout feature132 flash/foo.fla
    $ # 然后...
    $ git add flash/foo.fla

另一种方式是通过git输出文件 - 你可以输出到另外的文件名,然后当你决定了要用哪个后,再将选定的正确文件复制为正常的文件名:

    $ git show master:flash/foo.fla > master-foo.fla
    $ git show feature132:flash/foo.fla > feature132-foo.fla

#### 5. 远端服务器 ####

git的一个超强大的功能就是可以有不止一个远端服务器(实际上你一直都在一个本地仓库上工作)。你并不是一定都要有这些服务器的写权限,你可以有多个可以读取的服务器(用来合并他们的工作),然后写入到另外一个仓库。添加一个新的远端服务器很简单:

    $ git remote add john git@github.com:johnsomeone/someproject.git

    $ git diff master..john/master

你也可以查看没有在远端分支上的HEAD的改动:

    $ git log remote/branch..
    # 注意:..后面没有指定结束的引用(refspec)

#### 6. 标签 ####

建立这两种类型的标签都很简单(只有一个命令行开关的差异):

    $ git tag to-be-tested
    $ git tag -a v1.1.0 # 会提示输入标签的信息

#### 7. 建立分支 ####

    $ git branch feature132
    $ git checkout feature132

当然,如果你确定自己要直接切换到新建的分支,可以用一个命令实现:

    $ git checkout -b feature132

    $ git checkout -b twitter-experiment feature132
    $ git branch -d feature132

更新:你也可以(像Brian Palmer在原博客文章的评论里提出的)只用`git branch`的-m开关在一个命令里实现(像Mike提出的,如果你只指定了一个分支参数,就会重命名当前分支):

    $ git branch -m twitter-experiment
    $ git branch -m feature132 twitter-experiment

#### 8. 合并分支 ####

也许在将来的某个时候,你希望将改动合并。有两种方式:

    $ git checkout master
    $ git merge feature83 # 或者...
    $ git rebase feature83

merge和rebase之间的差别是merge会尝试处理改动并建立一个新的混合了两者的提交。rebase会尝试把你从一个分支最后一次分离后的所有改动,一个个加到该分支的HEAD上。不过,在已经将分支推到远端服务器后不要再rebase了 - 这会引起冲突/问题。

如果你不确定在哪些分支上还有独有的工作 - 所以你也不知道哪些分支需要合并而哪些可以删除,git branch有两个开关可以帮你:

    $ git push origin twitter-experiment:refs/heads/twitter-experiment
    # origin是我们服务器的名字,而twitter-experiment是分支名字

更新:感谢Erlend在原博客文章上的评论 - 这个实际上和`git push origin twitter-experiment`效果一样,不过使用完整的语法,你可以在两者之间使用不同的分支名(这样本地分支可以是`add-ssl-support`而远端是`issue-1723`)。

如果你想在远端服务器上删除一个分支(注意分支名前面的冒号):

这会让你进入一个基于菜单的交互式提示。你可以使用命令中的数字或高亮的字母(如果你在终端里打开了高亮的话)来进入相应的模式。然后就只需要输入你希望操作的文件的编号了(可以使用这样的格式:1,或者1-4,或2,4,7)。

如果你想进入补丁模式(交互式模式下按‘p’或‘5’),你也可以直接进入:

    $ git add -p
    diff --git a/dummy.rb b/dummy.rb
      end
    Stage this hunk [y,n,q,a,d,/,e,?]?

你可以看到下方会有一些选项供选择,用来添加该文件的这个改动、该文件的所有改动,等等。使用‘?’命令可以详细解释这些选项。

#### 12. 从文件系统里保存/取回改动 ####

有些项目(比如Git项目本身)在git文件系统中直接保存额外文件,而并没有将它们加入到版本控制中。

让我们从在git中存储一个随机文件开始:

#### 13. 查看日志 ####

长时间使用 Git 的话,不会没用过`git log`来查看最近的提交。不过,有一些技巧能更好地使用它。比如,你可以使用下面的命令来查看每次提交的具体改动:

    $ git log -p

#### 14. 搜索日志 ####

如果你想找特定提交者,可以这样做:

    $ git log --author=Andy

    $ git log --grep="Something in the message"

也有一个更强大的叫做pickaxe的命令,用来查找包含了删除或添加的某个特定内容的提交(比如,该内容第一次出现或被删除)。这可以告诉你什么时候增加了一行(但这一行里的某个字符后面被改动过就不行了):

    $ git log -S "TODO: Check for admin status"

    $ git log --since=2.months.ago --until=1.day.ago

默认情况下会用OR来组合查询,但你可以轻易地改为AND(如果你有超过一条的查询标准):

    $ git log --since=2.months.ago --until=1.day.ago --author=andy -S "something" --all-match

    $ git show feature132@{yesterday} # 时间相关
    $ git show feature132@{2.hours.ago} # 时间相关

注意和之前部分有些不同,末尾的^的意思是该提交的父节点 - 开始位置的^的意思是不在这个分支。

#### 16. 选择范围 ####

你也可以省略[new],将使用当前的HEAD。

### 时光回溯和后悔药 ###

#### 17. 重置改动 ####

    $ git reset HEAD lib/foo.rb

通常会使用‘unstage’的别名,因为上面的命令看上去有些不直观。

    $ git config --global alias.unstage "reset HEAD"
    $ git unstage lib/foo.rb

#### 19. 交互式变基 ####

这是一个我之前看过演示却没真正理解过的很赞的功能,现在觉得它很简单。假如说你提交了3次,但是你希望更改顺序或编辑(或者合并):

    $ git rebase -i master~3

然后这会启动你的编辑器并带有一些指令。你所要做的就是修改这些指令来选择/插入/编辑(或者删除)提交和保存/退出。然后在编辑完后你可以用`git rebase --continue`命令来让每一条指令生效。

如果你有修改,将会切换到你提交时所处的状态,之后你需要使用命令git commit --amend来编辑。

    $ git branch experimental SHA1_OF_HASH

如果你最近访问过的话,你通常可以用git reflog来找到SHA1哈希值。

另一种方式是使用`git fsck --lost-found`。其中一个dangling的提交就是丢失的HEAD(它只是已删除分支的HEAD,而HEAD^被引用为当前的HEAD所以它并不处于dangling状态)。

via: https://www.andyjeffries.co.uk/25-tips-for-intermediate-git-users/

作者:[Andy Jeffries][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
在Ubuntu 14.10上安装基于Web的监控工具:Linux-Dash
================================================================================

Linux-Dash是一个用于GNU/Linux机器的低开销监控仪表盘。您可以安装试试!Linux Dash的界面提供了您的服务器的所有关键信息的详细视图,可监测的信息包括RAM、磁盘使用率、网络、已安装的软件、用户、运行中的进程等。所有的信息都被分成几类,您可以通过主页工具栏中的按钮跳到任何一类中。Linux Dash并不是最先进的监测工具,但它十分适合寻找灵活、轻量级、容易部署的应用的用户。

### Linux-Dash的功能 ###

- 使用一个基于Web的漂亮的仪表盘界面来监控服务器信息
- 实时地按照你的要求监控RAM、负载、运行时间、磁盘配置、用户和许多其他系统状态
- 支持基于Apache2/nginx + PHP的服务器
- 通过点击和拖动来重新排列控件
- 支持多种类型的Linux服务器

### 当前控件列表 ###

- 通用信息
- 平均负载
- RAM
- 磁盘使用量
- 用户
- 软件
- IP
- 网络速率
- 在线状态
- 处理器
- 日志

### 在Ubuntu server 14.10上安装Linux-Dash ###

首先您需要确认您安装了[Ubuntu LAMP server 14.10][1],接下来您需要安装下面的包:

    sudo apt-get install php5-json unzip

安装这个模块后,需要在apache2中启用该模块,所以您需要使用下面的命令重启apache2服务器:

    sudo service apache2 restart

现在您需要下载linux-dash的安装包并安装它:

    wget https://github.com/afaqurk/linux-dash/archive/master.zip
    unzip master.zip
    sudo mv linux-dash-master/ /var/www/html/linux-dash-master/

接下来您需要使用下面的命令来修改权限:

    sudo chmod 755 /var/www/html/linux-dash-master/

现在您便可以访问http://serverip/linux-dash-master/了。您应该会看到类似下面的输出:




--------------------------------------------------------------------------------

via: http://www.ubuntugeek.com/install-linux-dash-web-based-monitoring-tool-on-ubntu-14-10.html

作者:[ruchi][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.ubuntugeek.com/author/ubuntufix
[1]:http://www.ubuntugeek.com/step-by-step-ubuntu-14-10-utopic-unicorn-lamp-server-setup.html
Linux有问必答:如何在Linux下禁用IPv6
================================================================================

> **问题**:我发现我的一个应用程序在尝试通过IPv6建立连接,但是由于我们本地网络不允许分配IPv6的流量,IPv6连接会超时,应用程序的连接会回退到IPv4,这样就会造成不必要的延迟。由于我目前对IPv6没有任何需求,所以我想在我的Linux主机上禁用IPv6。有什么比较合适的方法呢?

IPv6被认为是IPv4——互联网上的传统32位地址空间——的替代产品,它用来解决现有IPv4地址空间即将耗尽的问题。然而,由于已经有大量主机、设备用IPv4连接到了互联网上,所以想在一夜之间将它们全部切换到IPv6几乎是不可能的。许多IPv4到IPv6的转换机制(例如:双协议栈、网络隧道、代理)已经被提出来用来促进IPv6被采用,并且很多应用也正在进行重写,以增加对IPv6的支持。有一件事情可以确定,就是在可预见的未来里IPv4和IPv6势必将共存。

理想情况下,[向IPv6过渡的进程][1]不应该被最终用户所察觉,但是IPv4/IPv6混合环境有时会让你碰到各种源于IPv4和IPv6之间不经意间的相互碰撞的问题。举个例子,你会碰到应用程序超时的问题,比如apt-get或ssh尝试通过IPv6连接失败、DNS服务器意外清空了IPv6的AAAA记录、或者你支持IPv6的设备不兼容你的互联网服务提供商遗留下的IPv4网络,等等。

当然这不意味着你应该盲目地在你的Linux机器上禁用IPv6。鉴于IPv6许诺的种种好处,作为社会的一份子我们最终还是要充分拥抱它的,但是作为给最终用户进行故障排除过程的一部分,如果IPv6确实是罪魁祸首,那你可以尝试去关闭它。

这里有一些让你在Linux中部分(例如:对于某个特定的网络接口)或全部禁用IPv6的小技巧。这些小贴士应该适用于所有主流的Linux发行版,包括Ubuntu、Debian、Linux Mint、CentOS、Fedora、RHEL以及Arch Linux。

### 查看IPv6在Linux中是否被启用 ###
### 临时禁用IPv6 ###

如果你想要在你的Linux系统上临时关闭IPv6,你可以用 /proc 文件系统。"临时"的意思是我们所做的禁用IPv6的更改在系统重启后将不被保存,IPv6会在你的Linux机器重启后再次被启用。

要将一个特定的网络接口禁用IPv6,使用以下命令:
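下面是一个小示例,展示这种基于 /proc 的临时开关(写入 /proc 需要 root 权限,所以示例里只把写入命令作为注释给出,实际执行的是只读检查;接口名 eth0 是假设的):

```shell
# 针对单个接口临时禁用 IPv6 的典型做法(需要 root 权限,重启后失效):
#   echo 1 > /proc/sys/net/ipv6/conf/eth0/disable_ipv6
# 只读检查:输出 1 表示已禁用,0 表示启用;文件不存在则说明内核未启用 IPv6
cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null || echo "no ipv6"
```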
#### 方法一 ####

第一种方法是通过 /etc/sysctl.conf 文件对 /proc 进行永久修改。

换句话说,就是用文本编辑器打开 /etc/sysctl.conf 然后添加以下内容:
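需要添加的就是下面这三行 sysctl 设置(这是内核的标准键名)。为了能直接验证,这里先写入一个临时文件做演示;实际操作时应写入 /etc/sysctl.conf,并用 `sudo sysctl -p` 使其立即生效:

```shell
# 演示:禁用 IPv6 所需的三行设置(此处写入临时文件,实际应写入 /etc/sysctl.conf)
cat <<'EOF' > /tmp/sysctl-ipv6-demo.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
EOF
# 确认三行都已写入
grep -c 'disable_ipv6' /tmp/sysctl-ipv6-demo.conf
# 输出:3
```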
#### 方法二 ####

另一个永久禁用IPv6的方法是在开机的时候传递一个必要的内核参数。

用文本编辑器打开 /etc/default/grub 并给GRUB_CMDLINE_LINUX变量添加"ipv6.disable=1"。

    GRUB_CMDLINE_LINUX="xxxxx ipv6.disable=1"

上面的"xxxxx"代表任何已有的内核参数,在它后面添加"ipv6.disable=1"。
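下面用 sed 演示如何向 GRUB_CMDLINE_LINUX 追加该参数(示例操作的是一个临时文件,实际应编辑 /etc/default/grub,并在修改后重新生成 GRUB 配置,比如 Debian 系的 `sudo update-grub`):

```shell
# 演示:向 GRUB_CMDLINE_LINUX 追加 "ipv6.disable=1"(此处用临时文件代替 /etc/default/grub)
cat <<'EOF' > /tmp/grub-demo
GRUB_CMDLINE_LINUX="quiet splash"
EOF
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 ipv6.disable=1"/' /tmp/grub-demo
cat /tmp/grub-demo
# 输出:GRUB_CMDLINE_LINUX="quiet splash ipv6.disable=1"
```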
### 禁用IPv6之后的其它可选步骤 ###

这里有一些在你禁用IPv6后需要考虑的可选步骤,这是因为当你在内核里禁用IPv6后,其它程序也许仍然会尝试使用IPv6。在大多数情况下,应用程序的这种行为不太会影响到什么,但是出于效率或安全方面的原因,你可以为他们禁用IPv6。

#### /etc/hosts ####
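一个常见的可选做法是把 /etc/hosts 中的 IPv6 条目注释掉,避免主机名被解析到 IPv6 地址。下面用一个临时文件来演示(假设性示例,实际修改的是 /etc/hosts):

```shell
# 演示:注释掉 hosts 文件中以 "::1" 开头的 IPv6 条目(此处用临时文件代替 /etc/hosts)
cat <<'EOF' > /tmp/hosts-demo
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
EOF
sed -i 's/^::1/#::1/' /tmp/hosts-demo
grep '::1' /tmp/hosts-demo
# 输出:#::1         localhost ip6-localhost ip6-loopback
```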
默认情况下,OpenSSH服务(sshd)会去尝试绑定IPv4和IPv6的地址。

要强制sshd只绑定IPv4地址,用文本编辑器打开 /etc/ssh/sshd_config 并添加以下行。inet只适用于IPv4,而inet6是适用于IPv6的。

    $ sudo vi /etc/ssh/sshd_config

--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/disable-ipv6-linux.html

作者:[Dan Nanni][a]
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
一大波你可能不知道的 Linux 网络工具
================================================================================

如果要在你的系统上监控网络,那么使用命令行工具是非常实用的,并且对于 Linux 用户来说,有着许许多多现成的工具可以使用,如: nethogs, ntopng, nload, iftop, iptraf, bmon, slurm, tcptrack, cbm, netwatch, collectl, trafshow, cacti, etherape, ipband, jnettop, netspeed 以及 speedometer。

鉴于世上有着许多的 Linux 专家和开发者,显然还存在其他的网络监控工具,但在这篇教程中,我不打算将它们全部包括在内。

上面列出的工具都有着自己的独特之处,但归根结底,它们都做着监控网络流量的工作,只是通过各种不同的方法。例如 nethogs 可以被用来展示每个进程的带宽使用情况,以防你想知道究竟是哪个应用在消耗你的整个网络资源; iftop 可以被用来展示每个套接字连接的带宽使用情况,而像 nload 这类的工具可以帮助你得到有关整个带宽的信息。

### 1) nethogs ###

nethogs 是一个免费的工具,当要查找哪个 PID (注:即 process identifier,进程 ID) 给你的网络流量带来了麻烦时,它是非常方便的。它按每个进程来分组带宽,而不是像大多数的工具那样按照每个协议或每个子网来划分流量。它功能丰富,同时支持 IPv4 和 IPv6,并且我认为,若你想在你的 Linux 主机上确定哪个程序正消耗着你的全部带宽,它是来做这件事的最佳的程序。

一个 Linux 用户可以使用 **nethogs** 来显示每个进程的 TCP 下载和上传速率,可以使用命令 **nethogs eth0** 来监控一个指定的设备,上面的 eth0 是那个你想获取信息的设备的名称,你还可以得到有关正在传输的数据的传输速率信息。

对我而言, nethogs 是非常容易使用的,或许是因为我非常喜欢它,以至于我总是在我的 Ubuntu 12.04 LTS 机器中使用它来监控我的网络带宽。

例如要想使用混杂模式来嗅探,可以像下面展示的命令那样使用选项 -p:

    nethogs -p eth0

假如你想更多地了解 nethogs 并深入探索它,那么请毫不犹豫地阅读我们做的关于这个网络带宽监控工具的整个教程。

(LCTT 译注:关于 nethogs 的更多信息可以参考:https://linux.cn/article-2808-1.html )
### 2) nload ###

nload 是一个控制台应用,可以被用来实时地监控网络流量和带宽使用情况,它还通过提供两个简单易懂的图表来对流量进行可视化。这个绝妙的网络监控工具还可以在监控过程中切换被监控的设备,而这可以通过按左右箭头来完成。

正如你在上面的截图中所看到的那样,由 nload 提供的图表是非常容易理解的。nload 提供了有用的信息,也展示了诸如被传输数据的总量和最小/最大网络速率等信息。

而更酷的是你只需要直接运行 nload 这个工具就行,这个命令是非常的短小且易记的:

    nload

我很确信的是:我们关于如何使用 nload 的详细教程将帮助到新的 Linux 用户,甚至可以帮助那些正寻找关于 nload 信息的老手。

(LCTT 译注:关于 nload 的更多信息可以参考:https://linux.cn/article-5114-1.html )

### 3) slurm ###

slurm 是另一个 Linux 网络负载监控工具,它以一个不错的 ASCII 图来显示结果,它还支持许多按键用以交互,例如 **c** 用来切换到经典模式, **s** 切换到分图模式, **r** 用来重绘屏幕, **L** 用来启用 TX/RX 灯(注:TX,发送流量;RX,接收流量) ,**m** 用来在经典分图模式和大图模式之间进行切换, **q** 退出 slurm。

在网络负载监控工具 slurm 中,还有许多其它的按键可用,你可以很容易地使用下面的命令在 man 手册中学习它们。

    man slurm

slurm 在 Ubuntu 和 Debian 的官方软件仓库中可以找到,所以可以使用下面的命令来安装它:

    sudo apt-get install slurm

我们已经在一个[教程](http://linoxide.com/ubuntu-how-to/monitor-network-load-slurm-tool/)中对 slurm 的使用做了介绍,不要忘记和其它使用 Linux 的朋友分享这些知识。
### 4) iftop ###

当你想显示连接到网卡上的各个主机的带宽使用情况时,iftop 是一个非常有用的工具。根据 man 手册,**iftop** 在一个指定的接口或在它可以找到的第一个接口(假如没有任何特殊情况,它应该是一个对外的接口)上监听网络流量,并且展示出一个表格来显示当前的一对主机间的带宽使用情况。

通过在虚拟终端中使用下面的命令,Ubuntu 和 Debian 用户可以在他们的机器中轻易地安装 iftop:

    $ sudo apt-get install iftop

在你的机器上,可以使用下面的命令通过 yum 来安装 iftop:

    yum -y install iftop

(LCTT 译注:关于 iftop 的更多信息请参考:https://linux.cn/article-1843-1.html )

### 5) collectl ###

collectl 可以被用来收集描述当前系统状态的数据,并且它支持如下两种模式:

- 记录模式
- 回放模式

**记录模式** 允许从一个正在运行的系统中读取数据,然后将这些数据要么显示在终端中,要么写入一个或多个文件或一个套接字中。

**回放模式**

Ubuntu 和 Debian 用户可以在他们的机器上使用默认的包管理器来安装 collectl:

    sudo apt-get install collectl

还可以使用下面的命令来安装 collectl,因为对于这些发行版本(注:这里指的是用 yum 作为包管理器的发行版本),在它们官方的软件仓库中也含有 collectl:

    yum install collectl

(LCTT 译注:关于 collectl 的更多信息请参考: https://linux.cn/article-3154-1.html )

### 6) Netstat ###

Netstat 是一个用来监控**传入和传出的网络数据包统计数据**以及接口统计数据的命令行工具。它会显示 TCP 连接(包括上传和下行)、路由表,以及一系列的网络接口(网卡或者SDN接口)和网络协议统计数据。

Ubuntu 和 Debian 用户可以在他们的机器上使用默认的包管理器来安装 netstat。Netstat 软件被包括在 net-tools 软件包中,并可以在 shell 或虚拟终端中运行下面的命令来安装它:

    $ sudo apt-get install net-tools

(LCTT 译注:关于 netstat 的更多信息请参考:https://linux.cn/article-2434-1.html )
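netstat 显示的连接信息其实来自内核的 /proc/net 接口。作为一个可以直接运行的小演示(仅为示意,日常查看连接仍建议用 netstat 或 ss),下面统计当前 IPv4 TCP 套接字的条目数:

```shell
# /proc/net/tcp 的第一行是表头,其余每行对应一个 IPv4 TCP 套接字
tail -n +2 /proc/net/tcp 2>/dev/null | wc -l
```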
### 7) Netload ###

netload 命令只展示一个关于当前网络负载和自从程序运行之后传输数据总的字节数目的简要报告,它没有更多的功能。它是 netdiag 软件的一部分,可以通过 **yum** 来安装 netdiag:

    # yum install netdiag

Netload 是默认仓库中 netdiag 的一部分,我们可以轻易地使用下面的命令来利用 **apt** 包管理器安装 **netdiag**:

    $ sudo apt-get install netdiag

为了运行 netload,我们需要确保选择了一个正在工作的网络接口的名称,如 eth0, eh1, wlan0, mon0等,然后在 shell 或虚拟终端中运行下面的命令:

    $ netload wlan2

### 8) Nagios ###

Nagios 是一个领先且功能强大的开源监控系统,它使得网络或系统管理员可以在服务器的各种问题影响到服务器的主要事务之前,发现并解决这些问题。 有了 Nagios 系统,管理员便可以在一个单一的窗口中监控远程的 Linux 、Windows 系统、交换机、路由器和打印机等。它会显示出重要的警告并指出在你的网络或服务器中是否出现某些故障,这可以间接地帮助你在问题发生前就着手执行补救行动。

Nagios 有一个 web 界面,其中有一个图形化的活动监视器。通过浏览网页 http://localhost/nagios/ 或 http://localhost/nagios3/ 便可以登录到这个 web 界面。假如你在远程的机器上进行操作,请使用你的 IP 地址来替换 localhost,然后键入用户名和密码,我们便会看到如下图所展示的信息:

(LCTT 译注:关于 Nagios 的更多信息请参考:https://linux.cn/article-2436-1.html )

### 9) EtherApe ###

EtherApe 是一个针对 Unix 的图形化网络监控工具,它仿照了 etherman 软件。它支持链路层、IP 和 TCP 等模式,并支持以太网, FDDI, 令牌环, ISDN, PPP, SLIP 及 WLAN 设备等接口,以及一些封装格式。主机和连接随着流量和协议而改变其尺寸和颜色。它可以过滤要展示的流量,并可从一个文件或运行的网络中读取数据包。

在 CentOS、Fedora、RHEL 等 Linux 发行版本中安装 etherape 是一件容易的事,因为在它们的官方软件仓库中就可以找到 etherape。我们可以像下面展示的命令那样使用 yum 包管理器来安装它:

    yum install etherape

我们也可以使用下面的命令在 Ubuntu、Debian 及它们的衍生发行版本中使用 **apt** 包管理器来安装 EtherApe :

    sudo apt-get install etherape

安装完成后,我们需要以 root 权限来运行它:

    sudo etherape

然后, **etherape** 的 **图形用户界面** 便会被执行。接着,在菜单上面的 **捕捉** 选项下,我们可以选择 **模式**(IP,链路层,TCP) 和 **接口**。一切设定完毕后,我们需要点击 **开始** 按钮。接着我们便会看到类似下面截图的东西:
### 10) tcpflow ###

tcpflow 是一个命令行工具,它可以捕捉 TCP 连接(流)的部分传输数据,并以一种方便协议分析或除错的方式来存储数据。它重构了实际的数据流并将每个流存储在不同的文件中,以备日后的分析。它能识别 TCP 序列号并可以正确地重构数据流,不管是在重发还是乱序发送状态下。

通过 **apt** 包管理器在 Ubuntu 、Debian 系统中安装 tcpflow 是很容易的,因为默认情况下在官方软件仓库中可以找到它:

    $ sudo apt-get install tcpflow

在 CentOS/RHEL 上也可以直接从 rpmforge 软件仓库安装打包好的版本:

    # yum install --nogpgcheck http://pkgs.repoforge.org/tcpflow/tcpflow-0.21-1.2.el6.rf.i686.rpm

我们可以使用 tcpflow 来捕捉全部或部分 tcp 流量,并以一种简单的方式把它们写到一个可读的文件中。下面的命令就可以完成这个事情,但我们需要在一个空目录中运行下面的命令,因为它将创建诸如 x.x.x.x.y-a.a.a.a.z 格式的文件,运行之后,只需按 Ctrl-C 便可停止这个命令。

    $ sudo tcpflow -i eth0 port 8000

### 11) IPTraf ###

[IPTraf][2] 是一个针对 Linux 平台的基于控制台的网络统计应用。它生成一系列的图形,如 TCP 连接的包/字节计数、接口信息和活动指示器、 TCP/UDP 流量故障以及局域网内设备的包/字节计数。

在默认的软件仓库中可以找到 IPTraf,所以我们可以使用下面的命令通过 **apt** 包管理器轻松地安装 IPTraf:

    $ sudo apt-get install iptraf

我们可以使用下面的命令通过 **yum** 包管理器轻松地安装 IPTraf:

    # yum install iptraf

我们需要以管理员权限来运行 IPTraf,并带有一个有效的网络接口名。这里,我们的网络接口名为 wlan2,所以我们使用 wlan2 来作为参数:

    $ sudo iptraf wlan2

开始通常的网络接口统计,键入:

    # iptraf -g

查看接口 eth0 的详细统计信息,使用:

    # iptraf -d eth0

查看接口 eth0 的 TCP 和 UDP 监控信息,使用:

    # iptraf -s eth0

查看接口 eth0 的包的大小和数目,使用:

    # iptraf -z eth0

注意:请将上面的 eth0 替换为你的接口名称。你可以通过运行`ip link show`命令来检查你的接口。

(LCTT 译注:关于 iptraf 的更多详细信息请参考:https://linux.cn/article-5430-1.html )
### 12) Speedometer ###

Speedometer 是一个小巧且简单的工具,它只用来绘出一幅包含有通过某个给定端口的上行、下行流量的好看的图。

在默认的软件仓库中可以找到 Speedometer ,所以我们可以使用下面的命令通过 **yum** 包管理器轻松地安装 Speedometer:

    # yum install speedometer

我们可以使用下面的命令通过 **apt** 包管理器轻松地安装 Speedometer:

    $ sudo apt-get install speedometer

### 13) Netwatch ###

Netwatch 是 netdiag 工具集里的一部分,它也显示当前主机和其他远程主机的连接情况,以及在每个连接中数据传输的速率。

我们可以使用 yum 在 fedora 中安装 Netwatch,因为它在 fedora 的默认软件仓库中。但若你运行着 CentOS 或 RHEL , 我们需要安装 [rpmforge 软件仓库][3]。

    # yum install netwatch

Netwatch 是 netdiag 的一部分,可以在默认的软件仓库中找到,所以我们可以轻松地使用下面的命令来利用 **apt** 包管理器安装 **netdiag**:

    $ sudo apt-get install netdiag

为了运行 netwatch, 我们需要在虚拟终端或 shell 中执行下面的命令:

### 14) Trafshow ###

Trafshow 同 netwatch 和 pktstat 一样,可以报告当前活动的连接里使用的协议和每个连接中数据传输的速率。它可以使用 pcap 类型的过滤器来筛选出特定的连接。

我们可以使用 yum 在 fedora 中安装 trafshow ,因为它在 fedora 的默认软件仓库中。但若你正运行着 CentOS 或 RHEL , 我们需要安装 [rpmforge 软件仓库][4]。

    # yum install trafshow

Trafshow 在默认仓库中可以找到,所以我们可以轻松地使用下面的命令来利用 **apt** 包管理器安装它:

    $ sudo apt-get install trafshow

为了使用 trafshow 来执行监控任务,我们需要在虚拟终端或 shell 中执行下面的命令:

    $ sudo trafshow -i wlan2

为了专门监控 tcp 连接,如下面一样添加上 tcp 参数:

    $ sudo trafshow -i wlan2 tcp
### 15) Vnstat ###

与大多数的其他工具相比,Vnstat 有一点不同。实际上它运行着一个后台服务或守护进程,并时刻记录着传输数据的大小。另外,它可以被用来生成一个网络使用历史记录的报告。

我们需要开启 EPEL 软件仓库,然后运行 **yum** 包管理器来安装 vnstat:

    # yum install vnstat

Vnstat 在默认软件仓库中可以找到,所以我们可以使用下面的命令通过 **apt** 包管理器安装它:

    $ sudo apt-get install vnstat

为了实时地监控带宽使用情况,使用 ‘-l’ 选项(live 模式)。然后它将以一种非常精确的方式来展示上行和下行数据所使用的带宽总量,但不会显示任何有关主机连接或进程的内部细节。

    $ vnstat -l

### 16) tcptrack ###

[tcptrack][5] 可以展示 TCP 连接的状态,它在一个给定的网络端口上进行监听。tcptrack 监控它们的状态并展示出排序且不断更新的列表,包括来源/目标地址、带宽使用情况等信息,这与 **top** 命令的输出非常类似 。

鉴于 tcptrack 在软件仓库中,我们可以轻松地在 Debian、Ubuntu 系统中从软件仓库使用 **apt** 包管理器来安装 tcptrack。为此,我们需要在 shell 或虚拟终端中执行下面的命令:

    $ sudo apt-get install tcptrack

注:这里我们下载了 rpmforge-release 的当前最新版本,即 0.5.3-1,你总是可以从 rpmforge 软件仓库中下载其最新版本,并请在上面的命令中替换为你下载的版本。

**tcptrack** 需要以 root 权限或超级用户身份来运行。执行 tcptrack 时,我们需要带上要监视的网络接口 TCP 连接状况的接口名称。这里我们的接口名称为 wlan2,所以如下面这样使用:

    sudo tcptrack -i wlan2

### 17) CBM ###

CBM ( Color Bandwidth Meter) 可以展示出当前所有网络设备的流量使用情况。这个程序是如此的简单,以至于都可以从它的名称中看出其功能。CBM 的源代码和新版本可以在 [http://www.isotton.com/utils/cbm/][7] 上找到。

鉴于 CBM 已经包含在软件仓库中,我们可以简单地使用 **apt** 包管理器从 Debian、Ubuntu 的软件仓库中安装 CBM。为此,我们需要在一个 shell 窗口或虚拟终端中运行下面的命令:

    $ sudo apt-get install cbm

### 18) bmon ###

[Bmon][8] ( Bandwidth Monitoring) ,是一个用于调试和实时监控带宽的工具。这个工具能够检索各种输入模块的统计数据。它提供了多种输出方式,包括一个基于 curses 库的界面,轻量级的HTML输出,以及 ASCII 输出格式。

bmon 可以在软件仓库中找到,所以我们可以通过使用 apt 包管理器来在 Debian、Ubuntu 中安装它。为此,我们需要在一个 shell 窗口或虚拟终端中运行下面的命令:

    $ sudo apt-get install bmon
### 19) tcpdump ###

[TCPDump][9] 是一个用于网络监控和数据获取的工具。它可以为我们节省很多的时间,并可用来调试网络或服务器的相关问题。它可以打印出在某个网络接口上与布尔表达式相匹配的数据包所包含的内容的一个描述。

tcpdump 可以在 Debian、Ubuntu 的默认软件仓库中找到,我们可以简单地以 sudo 权限使用 apt 包管理器来安装它。为此,我们需要在一个 shell 窗口或虚拟终端中运行下面的命令:

    $ sudo apt-get install tcpdump

假如你只想监视一个特定的端口,则可以运行下面的命令。下面是一个针对 80 端口(网络服务器)的例子:

    $ sudo tcpdump -i wlan2 'port 80'

### 结论 ###

在这篇文章中,我们介绍了一些在 Linux 下的网络负载监控工具,这对于系统管理员甚至是新手来说,都是很有帮助的。在这篇文章中介绍的每一个工具都具有其特点,不同的选项等,但最终它们都可以帮助你来监控你的网络流量。

--------------------------------------------------------------------------------

via: http://linoxide.com/monitoring-2/network-monitoring-tools-linux/

作者:[Bobbin Zachariah][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
如何修复 apt-get update 无法添加新的 CD-ROM 的错误
================================================================================

--------------------------------------------------------------------------------

via: http://itsfoss.com/fix-failed-fetch-cdrom-aptget-update-add-cdroms/

作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
*Linux系统的KVM管理*

在这篇文章里没有什么新的概念,我们只是用命令行工具重复之前所做过的事情,也没有什么前提条件,都是相同的过程,之前的文章我们都讨论过。

Virsh命令行工具是一款管理virsh客户域的用户界面。

    # virsh pool-define-as Spool1 dir - - - - "/mnt/personal-data/SPool1/"

*创建新存储池*
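除了 pool-define-as 这种一行式的定义方式,dir 类型的存储池也可以用一段 XML 来描述,再通过 `virsh pool-define 文件名.xml` 导入。下面是一个与上述命令大致等价的假设性示例:

```xml
<pool type='dir'>
  <name>Spool1</name>
  <target>
    <path>/mnt/personal-data/SPool1/</path>
  </target>
</pool>
```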
**2. 查看环境中我们所有的存储池,用以下命令。**

    # virsh pool-list --all

*列出所有存储池*

**3. 现在我们来构造存储池了,用以下命令来构造我们刚才定义的存储池。**

    # virsh pool-build Spool1

*构造存储池*

**4. 用带pool-start参数的virsh命令来激活并启动我们刚才创建并构造完成的存储池。**

    # virsh pool-start Spool1

*激活存储池*

**5. 查看环境中存储池的状态,用以下命令。**

    # virsh pool-list --all

*查看存储池状态*

你会发现Spool1的状态变成了已激活。

    # virsh pool-autostart Spool1

*配置KVM存储池*

**7. 最后来看看我们新的存储池的信息吧。**

    # virsh pool-info Spool1

*查看KVM存储池信息*

恭喜你,Spool1已经准备好待命,接下来我们试着创建存储卷来使用它。

    # qemu-img create -f raw /mnt/personal-data/SPool1/SVol1.img 10G

*创建存储卷*

**9. 通过使用带info的qemu-img命令,你可以获取到你的新磁盘映像的一些信息。**

*查看存储卷信息*

**警告**: 不要用qemu-img命令来修改被运行中的虚拟机或任何其它进程所正在使用的映像,那样映像会被破坏。

    # virt-install --name=rhel7 --disk path=/mnt/personal-data/SPool1/SVol1.img --graphics spice --vcpu=1 --ram=1024 --location=/run/media/dos/9e6f605a-f502-4e98-826e-e6376caea288/rhel-server-7.0-x86_64-dvd.iso --network bridge=virbr0

*创建新的虚拟机*

**11. 你会看到弹出一个virt-viewer窗口,可以通过它与虚拟机交互。**

*虚拟机启动程式*

*虚拟机安装过程*

### 结论 ###

--------------------------------------------------------------------------------

via: http://www.tecmint.com/kvm-management-tools-to-manage-virtual-machines/

作者:[Mohammad Dosoukey][a]
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
如何修复 Ubuntu 上“...script returned error exit status 1”的错误
================================================================================

> E: /var/cache/apt/archives/ subprocess new pre-removal script returned error exit status 1

### 解决: ###

我google了一下并找到了方法。下面是我解决的方法。

    sudo apt-get clean
    sudo apt-get update && sudo apt-get upgrade

--------------------------------------------------------------------------------

via: http://www.unixmen.com/linux-basics-how-to-fix-e-varcacheaptarchives-subprocess-new-pre-removal-script-returned-error-exit-status-1-in-ubuntu/

作者:[SK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
LFTP : 一个功能强大的命令行FTP程序
================================================================================

大家好,这篇文章是介绍Lftp以及如何在Linux操作系统下安装的。[Lftp][1]是一个基于命令行的文件传输软件(也被称为FTP客户端),由Alexander Lukyanov开发并以GNU GPL协议许可发行。除了FTP协议外,它还支持FTPS,HTTP,HTTPS,HFTP,FISH,以及SFTP等协议。这个程序还支持FXP,允许数据绕过客户端直接在两个FTP服务器之间传输。

它有很多很棒的高级功能,比如递归镜像整个目录树以及断点续传下载。传输任务可以安排在稍后的时间段计划执行,可以限制带宽,可以创建传输列表,还支持类似Unix shell的任务控制。客户端还可以在交互式或自动脚本里使用。

### 安装Lftp ###

要登录到ftp服务器或sftp服务器,我们首先需要知道所要求的认证信息,比如用户名,密码,端口。

之后,我们可以通过lftp来登录。

    $ lftp ftp://linoxide@localhost

### 导航 ###

我们可以用**ls**命令来列出文件和目录,用**cd**命令进入到目录。

### 总结 ###

哇!我们已经成功地安装了lftp并学会了它的一些基础的主要使用方式。lftp是一个非常棒的命令行ftp客户端,它支持许多额外的功能以及很酷的特性。它比其他普通ftp客户端多了很多东西。好吧,你要是有任何问题,建议,反馈,请在下面的评论区里留言。谢谢!享用lftp吧 :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/setup-lftp-command-line-ftp/

作者:[Arun Pyasi][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
修复 Ubuntu 14.04 从待机中唤醒后鼠标键盘出现僵死情况
=========

### 问题: ###

当Ubuntu14.04或14.10从睡眠和待机状态恢复时,鼠标和键盘出现僵死,不能点击也不能输入。解决这种情况的唯一方法就是按关机键强制关机,这不仅非常不便且令人恼火。因为在Ubuntu的默认情况中合上笔记本等同于切换到睡眠模式。

    sudo apt-get install --reinstall xserver-xorg-input-all

这则贴士源自一个我们的读者Dev的提问。快试试这篇贴士,看看是否对你也有效。在一个类似的问题中,你可以[修复Ubuntu登录后无Unity界面、侧边栏和Dash的问题][1]。

--------------------------------------------------------------------------------

via: http://itsfoss.com/keyboard-mouse-freeze-suspend/

作者:[Abhishek][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
走进Linux之systemd启动过程
================================================================================

Linux系统的启动方式有点复杂,而且总是有需要优化的地方。传统的Linux系统启动过程主要由著名的init进程(也被称为SysV init启动系统)处理,而基于init的启动系统被认为有效率不足的问题,systemd是Linux系统机器的另一种启动方式,宣称弥补了以[传统Linux SysV init][2]为基础的系统的缺点。在这里我们将着重讨论systemd的特性和争议,但是为了更好地理解它,也会看一下通过传统的以SysV init为基础的系统的Linux启动过程是什么样的。友情提醒一下,systemd仍然处在测试阶段,而未来发布的Linux操作系统也正准备用systemd启动管理程序替代当前的启动过程(LCTT 译注:截止到本文发表,主流的Linux发行版已经有很多采用了 systemd)。

### 理解Linux启动过程 ###

在我们打开Linux电脑的电源后第一个启动的进程就是init。分配给init进程的PID是1。它是系统其他所有进程的父进程。当一台Linux电脑启动后,处理器会先在系统存储中查找BIOS,之后BIOS会检测系统资源然后找到第一个引导设备,通常为硬盘,然后会查找硬盘的主引导记录(MBR),然后加载到内存中并把控制权交给它,以后的启动过程就由MBR控制。

主引导记录会初始化引导程序(Linux上有两个著名的引导程序,GRUB和LILO,80%的Linux系统在用GRUB引导程序),这个时候GRUB或LILO会加载内核模块。内核会马上查找/sbin下的“init”程序并执行它。从这里开始init成为了Linux系统的父进程。init读取的第一个文件是/etc/inittab,通过它init会确定我们Linux操作系统的运行级别。它会从文件/etc/fstab里查找分区表信息然后做相应的挂载。然后init会启动/etc/init.d里指定的默认启动级别的所有服务/脚本。所有服务在这里通过init一个一个被初始化。在这个过程里,init每次只启动一个服务,所有服务/守护进程都在后台执行并由init来管理。

关机过程差不多是相反的过程,首先init停止所有服务,最后阶段会卸载文件系统。

### 理解Systemd ###

开发Systemd的主要目的就是减少系统引导时间和计算开销。Systemd(系统管理守护进程),最开始以GNU GPL协议授权开发,现在已转为使用GNU LGPL协议,它是如今讨论最热烈的引导和服务管理程序。如果你的Linux系统配置为使用Systemd引导程序,它将取代传统的SysV init,启动过程将交给systemd处理。Systemd的一个核心功能是它同时支持SysV init的后开机启动脚本。
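systemd 用“单元(unit)”文件来描述和管理服务。作为直观的对照,下面是一个最小的服务单元文件示例(假设性示例,可保存为 /etc/systemd/system/demo.service,之后用 systemctl enable/start 来管理):

```ini
[Unit]
Description=Demo service
After=network.target

[Service]
ExecStart=/usr/bin/sleep infinity
Restart=on-failure

[Install]
WantedBy=multi-user.target
```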
Systemd引入了并行启动的概念,它会为每个需要启动的守护进程建立一个套接字,这些套接字对于使用它们的进程来说是抽象的,这样它们可以允许不同守护进程之间进行交互。Systemd会创建新进程并为每个进程分配一个控制组(cgroup)。处于不同控制组的进程之间可以通过内核来互相通信。[systemd处理开机启动进程][2]的方式非常漂亮,和传统基于init的系统比起来优化了太多。让我们看下Systemd的一些核心功能。

- 和init比起来引导过程简化了很多
- Systemd支持并发引导过程从而可以更快启动

Systemd提供了工具用于识别和定位引导相关的问题或性能影响。例如 **systemd-analyze blame** 可以列出各个单元的启动耗时:

    234ms httpd.service
    191ms vmms.service

**systemd-analyze verify** 显示在所有系统单元中是否有语法错误。

**systemd-analyze plot** 可以用来把整个引导过程写入一个SVG格式文件里。整个引导过程非常长不方便阅读,所以通过这个命令我们可以把输出写入一个文件,之后再查看和分析。下面这个命令就是做这个。

    systemd-analyze plot > boot.svg

Systemd并没有幸运地获得所有人的青睐,一些专家和管理员对于它的工作方式和开发有不同意见。根据对于Systemd的批评,它不是“类Unix”方式,因为它试着替换一些系统服务。一些专家也不喜欢使用二进制配置文件的想法。据说编辑systemd配置非常困难而且没有一个可用的图形工具。

### 如何在Ubuntu 14.04和12.04上测试Systemd ###

本来,Ubuntu决定从Ubuntu 16.04 LTS开始使用Systemd来替换当前的引导过程。Ubuntu 16.04预计在2016年4月发布,但是考虑到Systemd的流行和需求,刚刚发布的**Ubuntu 15.04**采用它作为默认引导程序。另外,Ubuntu 14.04 Trusty Tahr和Ubuntu 12.04 Precise Pangolin的用户可以在他们的机器上测试Systemd。测试过程并不复杂,你所要做的只是把相关的PPA包含到系统中,更新仓库并升级系统。

**声明**:请注意它仍然处于Ubuntu的测试和开发阶段。升级测试包可能会带来一些未知错误,最坏的情况下有可能损坏你的系统配置。请确保在尝试升级前已经备份好重要数据。

就这样,你的Ubuntu系统已经不再使用传统的引导程序了,改为使用Systemd管理器。重启你的机器然后查看systemd引导过程吧。

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/systemd-boot-process/

作者:[Aun Raza][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -1,12 +1,13 @@
|
||||
11个Linux终端命令,让你的世界摇滚起来
|
||||
11个让你吃惊的 Linux 终端命令
|
||||
================================================================================
|
||||
我已经用了十年的Linux了,通过今天这篇文章我将向大家展示一系列的,我希望一开始就有人教导而不是曾在我成长道路上绊住我的Linux命令、工具和花招。
|
||||
|
||||

|
||||
Linux的快捷键。
|
||||
我已经用了十年的Linux了,通过今天这篇文章我将向大家展示一系列的命令、工具和技巧,我希望一开始就有人告诉我这些,而不是曾在我成长道路上绊住我。
|
||||
|
||||
### 1. 命令行日常系快捷键 ###
|
||||
|
||||

|
||||
|
||||
*Linux的快捷键。*
|
||||
|
||||
如下的快捷方式非常有用,能够极大的提升你的工作效率:
|
||||
|
||||
- CTRL + U - 剪切光标前的内容
|
||||
@ -16,11 +17,11 @@ Linux的快捷键。
|
||||
- CTRL + A - 移动光标到行首
|
||||
- ALT + F - 跳向下一个空格
|
||||
- ALT + B - 跳回上一个空格
|
||||
- ALT + Backspace - 删除前一个字
|
||||
- CTRL + W - 剪切光标后一个字
|
||||
- ALT + Backspace - 删除前一个单词
|
||||
- CTRL + W - 剪切光标后一个单词
|
||||
- Shift + Insert - 向终端内粘贴文本
|
||||
|
||||
那么为了让上诉内容更易理解来看下面的这行命令。
|
||||
那么为了让上述内容更易理解来看下面的这行命令。
|
||||
|
||||
sudo apt-get intall programname
|
||||
|
||||
@ -28,7 +29,7 @@ Linux的快捷键。
|
||||
|
||||
想象现在光标正在行末,我们有很多的方法将她退回单词install并替换它。
|
||||
|
||||
我可以按两次ALT+B这样光标就会在如下的位置(这里用^代替光标的位置)。
|
||||
我可以按两次ALT+B这样光标就会在如下的位置(这里用^指代光标的位置)。
|
||||
|
||||
sudo apt-get^intall programname
|
||||
|
||||
@ -36,32 +37,36 @@ Linux的快捷键。
|
||||
|
||||
如果你想将浏览器中的文本复制到终端,可以使用快捷键"shift + insert"。
|
||||
|
||||

|
||||
|
||||
### 2. SUDO !! ###
|
||||
|
||||
这个命令如果你还不知道我觉得你应该好好感谢我,因为如果你不知道那每次你在输入长串命令后看到“permission denied”后一定会痛恼不堪。
|
||||

|
||||
|
||||
*sudo !!*
|
||||
|
||||
如果你还不知道这个命令,我觉得你应该好好感谢我,因为如果你不知道的话,那每次你在输入长串命令后看到“permission denied”后一定会痛恼不堪。
|
||||
|
||||
- sudo !!
|
||||
|
||||
如何使用sudo !!?很简单。试想你刚输入了如下命令:
|
||||
如何使用sudo !!?很简单。试想你刚输入了如下命令:
|
||||
|
||||
apt-get install ranger
|
||||
|
||||
一定会出现"Permission denied"除非你的登录了足够高权限的账户。
|
||||
一定会出现“Permission denied”,除非你已经登录了足够高权限的账户。
|
||||
|
||||
sudo !!就会用sudo的形式运行上一条命令。所以上一条命令可以看成是这样:
|
||||
sudo !! 就会用 sudo 的形式运行上一条命令。所以上一条命令就变成了这样:
|
||||
|
||||
sudo apt-get install ranger
|
||||
|
||||
如果你不知道什么是sudo[戳这里][1]。
|
||||
|
||||

|
||||
暂停终端运行的应用程序。
|
||||
如果你不知道什么是sudo,[戳这里][1]。
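sudo !! 的效果可以这样理解:shell 记住了上一条命令,!! 在执行前会被替换成它。下面用一个变量纯粹演示这个替换过程(命令内容仅为示意):

```shell
# 模拟 shell 记住“上一条命令”的过程
last_command="apt-get install ranger"   # 假设这是刚刚因权限不足而失败的命令
echo "sudo $last_command"               # “sudo !!” 展开后执行的就是这条命令
```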
|
||||
|
||||
### 3. 暂停并在后台运行命令 ###
|
||||
|
||||
我曾经写过一篇如何在终端后台运行命令的指南。
|
||||

|
||||
|
||||
*暂停终端运行的应用程序。*
|
||||
|
||||
我曾经写过一篇[如何在终端后台运行命令的指南][13]。
|
||||
|
||||
- CTRL + Z - 暂停应用程序
|
||||
- fg - 重新将程序唤到前台
|
||||
@ -74,41 +79,42 @@ sudo !!就会用sudo的形式运行上一条命令。所以上一条命令可以
|
||||
|
||||
文件编辑到一半你意识到你需要马上在终端输入些命令,但是nano在前台运行让你不能输入。
|
||||
|
||||
你可能觉得唯一的方法就是保存文件,推出nano,运行命令以后在重新打开nano。
|
||||
你可能觉得唯一的方法就是保存文件,退出 nano,运行命令以后再重新打开nano。
|
||||
|
||||
其实你只要按CTRL + Z前台的命令就会暂停,画面就切回到命令行了。然后你就能运行你想要运行命令,等命令运行完后在终端窗口输入“fg”就可以回到先前暂停的任务。
|
||||
其实你只要按CTRL + Z,前台的命令就会暂停,画面就切回到命令行了。然后你就能运行你想要运行命令,等命令运行完后在终端窗口输入“fg”就可以回到先前暂停的任务。
|
||||
|
||||
有一个尝试非常有趣就是用nano打开文件,输入一些东西然后暂停会话。再用nano打开另一个文件,输入一些什么后再暂停会话。如果你输入“fg”你将回到第二个用nano打开的文件。只有退出nano再输入“fg”,你才会回到第一个用nano打开的文件。
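这些任务控制操作在脚本中同样可以观察到(CTRL+Z 和 fg 需要交互式终端,这里用 & 和 jobs 演示后台任务,仅为示意):

```shell
sleep 300 &          # 在后台启动一个任务(交互式下可先 CTRL+Z 暂停,再用 bg 放到后台)
pid=$!               # 记下后台任务的进程号
jobs                 # 列出当前 shell 的后台任务(交互式下输入 fg 可把它调回前台)
kill "$pid"          # 演示结束,结束该任务
wait 2>/dev/null || true
echo "done"
```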
|
||||
|
||||

|
||||
nohup.
|
||||
|
||||
### 4. 使用nohup在登出SSH会话后仍运行命令 ###
|
||||
|
||||
如果你用ssh登录别的机器时,[nohup命令]真的非常有用。
|
||||

|
||||
|
||||
*nohup*
|
||||
|
||||
如果你用ssh登录别的机器时,[nohup命令][2]真的非常有用。
|
||||
|
||||
那么怎么使用nohup呢?
|
||||
|
||||
想象一下你使用ssh远程登录到另一台电脑上,你运行了一条非常耗时的命令然后退出了ssh会话,不过命令仍在执行。而nohup可以将这一场景变成现实。
|
||||
|
||||
举个例子以测试为目的我用[树莓派][3]来下载发行版。
|
||||
举个例子,因为测试的需要,我用我的[树莓派][3]来下载发行版。我绝对不会给我的树莓派外接显示器、键盘或鼠标。
|
||||
|
||||
我绝对不会给我的树莓派外接显示器、键盘或鼠标。
|
||||
|
||||
一般我总是用[SSH] [4]从笔记本电脑连接到树莓派。如果我在不用nohup的情况下使用树莓派下载大型文件,那我就必须等待到下载完成后才能登出ssh会话关掉笔记本。如果是这样那我为什么要使用树莓派下文件呢?
|
||||
一般我总是用[SSH][4]从笔记本电脑连接到树莓派。如果我在不用nohup的情况下使用树莓派下载大型文件,那我就必须等待到下载完成后,才能登出ssh会话关掉笔记本。可如果是这样,那我为什么要使用树莓派下载文件呢?
|
||||
|
||||
使用nohup的方法也很简单,只需如下例中在nohup后输入要执行的命令即可:
|
||||
|
||||
nohup wget http://mirror.is.co.za/mirrors/linuxmint.com/iso//stable/17.1/linuxmint-17.1-cinnamon-64bit.iso &
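在没有远程主机可供练习时,可以用一个本地的长任务体会 nohup 的用法(路径与任务内容均为示意):

```shell
nohup sleep 300 > /tmp/nohup-demo.log 2>&1 &   # nohup 使任务忽略挂断(SIGHUP)信号
pid=$!
echo "后台任务 PID: $pid,输出写入 /tmp/nohup-demo.log"
kill "$pid"                                     # 演示结束,清理后台任务
```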
|
||||
|
||||

|
||||
At管理任务日程
|
||||
|
||||
### 5. ‘在’特定的时间运行Linux命令 ###
|
||||
|
||||

|
||||
|
||||
*At管理任务日程*
|
||||
|
||||
‘nohup’命令在你用SSH连接到服务器,并在上面保持执行SSH登出前任务的时候十分有用。
|
||||
|
||||
想一下如果你需要在特定的时间执行同一个命令,这种情况该怎么办呢?
|
||||
想一下如果你需要在特定的时间执行相同的命令,这种情况该怎么办呢?
|
||||
|
||||
命令‘at’就能妥善解决这一情况。以下是‘at’使用示例。
|
||||
|
||||
@ -116,78 +122,80 @@ At管理任务日程
|
||||
at> cowsay 'hello'
|
||||
at> CTRL + D
|
||||
|
||||
上面的命令能在周五下午10时38分运行程序[cowsay] [5]。
|
||||
上面的命令能在周五下午10时38分运行程序[cowsay][5]。
|
||||
|
||||
使用的语法就是‘at’后追加日期时间。
|
||||
使用的语法就是‘at’后追加日期时间。当at>提示符出现后就可以输入你想在那个时间运行的命令了。
|
||||
|
||||
当at>提示符出现后就可以输入你想在那个时间运行的命令了。
|
||||
CTRL + D 返回终端。
|
||||
|
||||
CTRL + D返回终端。
|
||||
还有许多日期和时间的格式,都需要你好好翻一翻‘at’的man手册来找到更多的使用方式。
|
||||
|
||||
还有许多日期和时间的格式都是值得的你好好翻一翻‘at’的man手册来找到更多的使用方式。
|
||||
|
||||

|
||||
|
||||
### 6. Man手册 ###
|
||||
|
||||
Man手册会为你列出命令和参数的使用大纲,教你如何使用她们。
|
||||

|
||||
|
||||
Man手册看起开沉闷呆板。(我思忖她们也不是被设计来娱乐我们的)。
|
||||
*彩色man 手册*
|
||||
|
||||
不过这不代表你不能做些什么来使她们变得性感点。
|
||||
Man手册会为你列出命令和参数的使用大纲,教你如何使用她们。Man手册看起来沉闷呆板。(我思忖她们也不是被设计来娱乐我们的)。
|
||||
|
||||
不过这不代表你不能做些什么来使她们变得漂亮些。
|
||||
|
||||
export PAGER=most
|
||||
|
||||
你需要 ‘most’;她会使你的你的man手册的色彩更加绚丽。
|
||||
你需要安装 ‘most’;她会使你的man手册的色彩更加绚丽。
|
||||
|
||||
你可以用一下命令给man手册设定指定的行长:
|
||||
你可以用以下命令给man手册设定指定的行长:
|
||||
|
||||
export MANWIDTH=80
|
||||
|
||||
最后,如果你有浏览器,你可以使用-H在默认浏览器中打开任意的man页。
|
||||
最后,如果你有一个可用的浏览器,你可以使用-H在默认浏览器中打开任意的man页。
|
||||
|
||||
man -H <command>
|
||||
|
||||
注意啦,以上的命令只有在你将默认的浏览器已经设置到环境变量$BROWSER中了之后才效果哟。
|
||||
注意啦,以上的命令只有在你将默认的浏览器设置到环境变量$BROWSER中了之后才有效果哟。
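要让这些设置永久生效,可以把它们追加到 shell 的启动文件里。下面写入一个临时文件来演示(实际应写入 ~/.bashrc;most 需要另行安装):

```shell
# 把 man 手册相关的环境变量追加到启动文件(这里用 /tmp/demo_bashrc 代替 ~/.bashrc 演示)
cat >> /tmp/demo_bashrc <<'EOF'
export PAGER=most
export MANWIDTH=80
EOF
grep MANWIDTH /tmp/demo_bashrc   # 确认写入成功
```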
|
||||
|
||||

|
||||
使用htop查看进程。
|
||||
|
||||
### 7. 使用htop查看和管理进程 ###
|
||||
|
||||
你用哪个命令找出电脑上正在运行的进程的呢?我敢打赌是‘[ps][6]’并在其后加不同的参数来得到你所想要的不同输出。
|
||||

|
||||
|
||||
*使用htop查看进程。*
|
||||
|
||||
你用哪个命令找出电脑上正在运行的进程的呢?我敢打赌是‘[ps][6]’并在其后加不同的参数来得到你所想要的不同输出。
|
||||
|
||||
安装‘[htop][7]’吧!绝对让你相见恨晚。
|
||||
|
||||
htop在终端中将进程以列表的方式呈现,有点类似于Windows中的任务管理器。
|
||||
|
||||
你可以使用功能键的组合来切换排列的方式和展示出来的项。你也可以在htop中直接杀死进程。
|
||||
htop在终端中将进程以列表的方式呈现,有点类似于Windows中的任务管理器。你可以使用功能键的组合来切换排列的方式和展示出来的项。你也可以在htop中直接杀死进程。
|
||||
|
||||
在终端中简单的输入htop即可运行。
|
||||
|
||||
htop
|
||||
|
||||

|
||||
命令行文件管理 - Ranger.
|
||||
|
||||
### 8. 使用ranger浏览文件系统 ###
|
||||
|
||||
如果说htop是命令行进程控制的好帮手那么[ranger][8]就是命令行浏览文件系统的好帮手。
|
||||

|
||||
|
||||
*命令行文件管理 - Ranger*
|
||||
|
||||
如果说htop是命令行进程控制的好帮手,那么[ranger][8]就是命令行浏览文件系统的好帮手。
|
||||
|
||||
你在用之前可能需要先安装,不过一旦安装了以后就可以在命令行输入以下命令启动她:
|
||||
|
||||
ranger
|
||||
|
||||
在命令行窗口中ranger和一些别的文件管理器很像,但是她是左右结构的比起上下的来意味着你按左方向键你将前进到上一个文件夹结构而右方向键则会切换到下一个。
|
||||
在命令行窗口中ranger和一些别的文件管理器很像,但是相比上下结构布局,她是左右结构的,这意味着你按左方向键你将前进到上一个文件夹,而右方向键则会切换到下一个。
|
||||
|
||||
在使用前ranger的man手册还是值得一读的,这样你就可以用快捷键操作ranger了。
|
||||
|
||||

|
||||
Linux取消关机。
|
||||
|
||||
### 9. 取消关机 ###
|
||||
|
||||
无论是在命令行还是图形用户界面[关机][9]后发现自己不是真的想要关机。
|
||||

|
||||
|
||||
*Linux取消关机。*
|
||||
|
||||
无论是在命令行还是图形用户界面[关机][9]后,才发现自己不是真的想要关机。
|
||||
|
||||
shutdown -c
|
||||
|
||||
@ -197,11 +205,13 @@ Linux取消关机。
|
||||
|
||||
- [pkill][10] shutdown
|
||||
|
||||

|
||||
使用XKill杀死挂起进程。
|
||||
|
||||
### 10. 杀死挂起进程的简单方法 ###
|
||||
|
||||

|
||||
|
||||
*使用XKill杀死挂起进程。*
|
||||
|
||||
想象一下,你正在运行的应用程序不明原因的僵死了。
|
||||
|
||||
你可以使用‘ps -ef’来找到该进程后杀掉或者使用‘htop’。
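除了 ps -ef 配合 kill,pgrep/pkill 还可以按命令行模式直接查找和结束进程。下面用一个无害的 sleep 进程来演示(进程内容仅为示意):

```shell
sleep 12345 &                 # 模拟一个需要被结束的进程
pgrep -f "sleep 12345"        # 按完整命令行查找进程号
pkill -f "sleep 12345"        # 按同样的模式结束它
wait 2>/dev/null || true
```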
|
||||
@ -214,18 +224,20 @@ Linux取消关机。
|
||||
|
||||
那如果整个系统挂掉了怎么办呢?
|
||||
|
||||
按住键盘上的‘alt’和‘sysrq’同时输入:
|
||||
按住键盘上的‘alt’和‘sysrq’不放,然后慢慢输入以下键:
|
||||
|
||||
- [REISUB][12]
|
||||
|
||||
这样不按电源键你的计算机也能重启了。
|
||||
|
||||

|
||||
youtube-dl.
|
||||
|
||||
### 11. 下载Youtube视频 ###
|
||||
|
||||
一般来说我们大多数人都喜欢看Youtube的视频,也会通过钟爱的播放器播放Youtube的流。
|
||||

|
||||
|
||||
*youtube-dl.*
|
||||
|
||||
一般来说我们大多数人都喜欢看Youtube的视频,也会通过钟爱的播放器播放Youtube的流媒体。
|
||||
|
||||
如果你需要离线一段时间(比如:从苏格兰南部坐飞机到英格兰南部旅游的这段时间)那么你可能希望下载一些视频到存储设备中,到闲暇时观看。
|
||||
|
||||
@ -235,7 +247,7 @@ youtube-dl.
|
||||
|
||||
youtube-dl url-to-video
|
||||
|
||||
你能在Youtubu视频页面点击分享链接得到视频的url。只要简单的复制链接在粘帖到命令行就行了(要用shift + insert快捷键哟)。
|
||||
你可以在Youtube视频页面点击分享链接得到视频的url。只要简单地复制链接再粘贴到命令行就行了(要用shift + insert快捷键哟)。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
@ -246,8 +258,8 @@ youtube-dl.
|
||||
via: http://linux.about.com/od/commands/tp/11-Linux-Terminal-Commands-That-Will-Rock-Your-World.htm
|
||||
|
||||
作者:[Gary Newell][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[martin2011qi](https://github.com/martin2011qi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
@ -264,3 +276,4 @@ via: http://linux.about.com/od/commands/tp/11-Linux-Terminal-Commands-That-Will-
|
||||
[10]:http://linux.about.com/library/cmd/blcmdl1_pkill.htm
|
||||
[11]:http://linux.about.com/od/funnymanpages/a/funman_xkill.htm
|
||||
[12]:http://blog.kember.net/articles/reisub-the-gentle-linux-restart/
|
||||
[13]:http://linux.about.com/od/commands/fl/How-To-Run-Linux-Programs-From-The-Terminal-In-Background-Mode.htm
|
@ -1,14 +1,14 @@
|
||||
Ubuntu中,使用Prey定位被盗的笔记本与手机
|
||||
使用Prey定位被盗的Ubuntu笔记本与智能电话
|
||||
===============================================================================
|
||||
Prey是一款跨平台的开源工具,可以帮助你找回被盗的笔记本,台式机,平板和智能手机。它已经获得了广泛的流行,声称帮助召回了成百上千台丢失的笔记本和智能手机。Prey的使用特别简单,首先安装在你的笔记本或者手机上,当你的设备不见了,用你的账号登入Prey网站,并且标记你的设备为“丢失”。只要小偷将设备接入网络,Prey就会马上发送设备的地理位置给你。如果你的笔记本有摄像头,它还会拍下小偷。
|
||||
Prey是一款跨平台的开源工具,可以帮助你找回被盗的笔记本,台式机,平板和智能手机。它已经获得了广泛的流行,声称帮助找回了成百上千台丢失的笔记本和智能手机。Prey的使用特别简单,首先安装在你的笔记本或者手机上,当你的设备不见了,用你的账号登入Prey网站,并且标记你的设备为“丢失”。只要小偷将设备接入网络,Prey就会马上发送设备的地理位置给你。如果你的笔记本有摄像头,它还会拍下该死的贼。
|
||||
|
||||
Prey占用很小的系统资源;你不会对你的设备运行有任何影响。你也可以配合其他你已经在设备上安装的防盗软件使用。Prey采用安全加密的通道,在你的设备与Prey服务器之间进行数据传输。
|
||||
Prey占用很小的系统资源;你不会对你的设备运行有任何影响。你也可以配合其他你已经在设备上安装的防盗软件使用。Prey在你的设备与Prey服务器之间采用安全加密的通道进行数据传输。
|
||||
|
||||
### 在Ubuntu上安装并配置Prey ###
|
||||
|
||||
让我们来看看如何在Ubuntu上安装和配置Prey,需要提醒的是,在配置过程中,我们必须到Prey官网进行账号注册。一旦完成上述工作,Prey将会开始监视的设备了。免费的账号最多可以监视三个设备,如果你需要添加更多的设备,你就需要购买合适的的套餐了。
|
||||
让我们来看看如何在Ubuntu上安装和配置Prey,需要提醒的是,在配置过程中,我们必须到Prey官网进行账号注册。一旦完成上述工作,Prey将会开始监视你的设备了。免费的账号最多可以监视三个设备,如果你需要添加更多的设备,你就需要购买合适的套餐了。
|
||||
|
||||
想象一下Prey多么流行与被广泛使用,它现在已经被添加到了官方的软件库中了。这意味着你不要往软件包管理器添加任何PPA。很简单地,登录你的终端,运行以下的命令来安装它:
|
||||
可以想象Prey多么流行与被广泛使用,它现在已经被添加到了官方的软件库中了。这意味着你不需要往软件包管理器添加任何PPA。很简单,登录你的终端,运行以下的命令来安装它:
|
||||
|
||||
sudo apt-get install prey
|
||||
|
||||
@ -54,7 +54,7 @@ Prey有一个明显的不足。它需要你的设备接入互联网才会发送
|
||||
|
||||
### 结论 ###
|
||||
|
||||
这是一款小巧,非常有用的安全保护应用,可以让你在一个地方追踪你所有的设备,尽管不完美,但是仍然提供了找回被盗设备的机会。它在Linux,Windows和Mac平台上无缝运行。以上就是Prey完整使用的所有细节。
|
||||
这是一款小巧,非常有用的安全保护应用,可以让你在一个地方追踪你所有的设备,尽管不完美,但是仍然提供了找回被盗设备的机会。它在Linux,Windows和Mac平台上无缝运行。以上就是[Prey][2]完整使用的所有细节。
|
||||
|
||||
-------------------------------------------------------------------------------
|
||||
|
||||
@ -62,9 +62,10 @@ via: http://linoxide.com/ubuntu-how-to/anti-theft-application-prey-ubuntu/
|
||||
|
||||
作者:[Aun Raza][a]
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunrz/
|
||||
[1]:https://preyproject.com/
|
||||
[2]:https://preyproject.com/plans
|
@ -16,7 +16,7 @@
|
||||
|
||||
$ cat .ssh/id_rsa.pub | ssh aliceB@hostB 'cat >> .ssh/authorized_keys'
|
||||
|
||||
自此以后,从aliceA@hostA上ssh到aliceB@hostB上再也不需要输入密码。
|
||||
自此以后,从aliceA@hostA上ssh到aliceB@hostB上再也不需要输入密码。(LCTT 译注:上述的创建目录并复制的操作也可以通过一个 ssh-copy-id 命令一步完成:`ssh-copy-id -i ~/.ssh/id_rsa.pub aliceB@hostB`)
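整个流程的第一步是生成密钥对。下面把密钥生成到临时路径来演示(实际使用默认的 ~/.ssh/id_rsa 即可):

```shell
rm -f /tmp/demo_key /tmp/demo_key.pub          # 清理旧的演示文件,避免覆盖提示
ssh-keygen -t rsa -N "" -f /tmp/demo_key -q    # -N "" 表示空口令,仅为演示
cat /tmp/demo_key.pub                          # 这部分内容将被追加到远端的 authorized_keys
```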
|
||||
|
||||
### 疑难解答 ###
|
||||
|
||||
@ -34,7 +34,7 @@ via: http://xmodulo.com/how-to-enable-ssh-login-without.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[KayGuoWhu](https://github.com/KayGuoWhu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,12 +1,12 @@
|
||||
Linux有问必答--如何使用命令行压缩JPEG图像
|
||||
Linux有问必答:如何在命令行下压缩JPEG图像
|
||||
================================================================================
|
||||
> **问题**: 我有许多数码照相机拍出来的照片。我想在上传到Dropbox之前,优化和压缩下JPEG图片。有没有什么简单的方法压缩JPEG图片并不损耗他们的质量?
|
||||
|
||||
如今拍照设备(如智能手机、数码相机)拍出来的图片分辨率越来越大。甚至3630万像素的Nikon D800已经冲入市场,并且这个趋势根本停不下来。如今的拍照设备不断地提高着照片分辨率,使得我们不得不压缩后,再上传到有储存限制、带宽限制的云。
|
||||
|
||||
事实上,这里有一个非常简单的方法压缩JPEG图像。一个叫“jpegoptim”命令行工具可以帮助你“无损”美化JPEG图像,所以你可以压缩JPEG图片而不至于牺牲他们的质量。万一你的存储空间和带宽预算真的很少,jpegoptim也支持“有损耗”压缩来调整图像大小。
|
||||
事实上,这里有一个非常简单的方法压缩JPEG图像。一个叫“jpegoptim”命令行工具可以帮助你“无损”美化JPEG图像,让你可以压缩JPEG图片而不至于牺牲他们的质量。万一你的存储空间和带宽预算真的很少,jpegoptim也支持“有损”压缩来调整图像大小。
|
||||
|
||||
如果要压缩PNG图像,参考[this guideline][1]例子。
|
||||
如果要压缩PNG图像,参考[这个指南][1]的例子。
|
||||
|
||||
### 安装jpegoptim ###
|
||||
|
||||
@ -34,7 +34,7 @@ CentOS/RHEL安装,先开启[EPEL库][2],然后运行下列命令:
|
||||
|
||||
注意,原始图像会被压缩后图像覆盖。
|
||||
|
||||
如果jpegoptim不能无损美化图像,将不会覆盖
|
||||
如果jpegoptim不能无损美化图像,将不会覆盖它:
|
||||
|
||||
$ jpegoptim -v photo.jpg
|
||||
|
||||
@ -46,21 +46,21 @@ CentOS/RHEL安装,先开启[EPEL库][2],然后运行下列命令:
|
||||
|
||||
$ jpegoptim -d ./compressed photo.jpg
|
||||
|
||||
这样,压缩的图片将会保存在./compressed目录(已同样的输入文件名)
|
||||
这样,压缩的图片将会保存在./compressed目录(以同样的输入文件名)
|
||||
|
||||
如果你想要保护文件的创建修改时间,使用"-p"参数。这样压缩后的图片会得到与原始图片相同的日期时间。
|
||||
|
||||
$ jpegoptim -d ./compressed -p photo.jpg
|
||||
|
||||
如果你只是想获得无损压缩率,使用"-n"参数来模拟压缩,然后它会打印压缩率。
|
||||
如果你只是想看看无损压缩率而不是真的想压缩它们,使用"-n"参数来模拟压缩,然后它会显示出压缩率。
|
||||
|
||||
$ jpegoptim -n photo.jpg
|
||||
|
||||
### 有损压缩JPG图像 ###
|
||||
|
||||
万一你真的需要要保存在云空间上,你可以使用有损压缩JPG图片。
|
||||
万一你真的需要要保存在云空间上,你还可以使用有损压缩JPG图片。
|
||||
|
||||
这种情况下,使用"-m<质量>"选项,质量数范围0到100。(0是最好质量,100是最坏质量)
|
||||
这种情况下,使用"-m<质量>"选项,质量数范围0到100。(0是最低质量,100是最高质量)
|
||||
|
||||
例如,用50%质量压缩图片:
|
||||
|
||||
@ -76,7 +76,7 @@ CentOS/RHEL安装,先开启[EPEL库][2],然后运行下列命令:
|
||||
|
||||
### 一次压缩多张JPEG图像 ###
|
||||
|
||||
最常见的情况是需要压缩一个目录下的多张JPEG图像文件。为了应付这种情况,你可以使用接下里的脚本。
|
||||
最常见的情况是需要压缩一个目录下的多张JPEG图像文件。为了应付这种情况,你可以使用接下来的脚本。
|
||||
|
||||
#!/bin/sh
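一个按这种思路写的批量处理示意循环如下(目录与压缩参数均为假设;这里先用 echo 打印将要执行的命令,实际使用时去掉 echo 即可真正压缩):

```shell
mkdir -p /tmp/jpg-demo && cd /tmp/jpg-demo
touch a.jpg b.jpg                         # 准备两个演示文件
for f in *.jpg; do
    [ -e "$f" ] || continue               # 目录为空时跳过
    echo "jpegoptim -m 70 -p $f"          # 打印将要执行的压缩命令
done
```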
|
||||
|
||||
@ -90,11 +90,11 @@ CentOS/RHEL安装,先开启[EPEL库][2],然后运行下列命令:
|
||||
via: http://ask.xmodulo.com/compress-jpeg-images-command-line-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[VicYu/Vic020](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[VicYu/Vic020](https://github.com/Vic020)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
||||
[1]:http://xmodulo.com/how-to-compress-png-files-on-linux.html
|
||||
[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
|
||||
[2]:https://linux.cn/article-2324-1.html
|
@ -1,6 +1,6 @@
|
||||
Linux有问必答-- 如何在VPS上安装和访问CentOS远程桌面
|
||||
Linux有问必答:如何在VPS上安装和访问CentOS 7远程桌面
|
||||
================================================================================
|
||||
> **提问**: 我想在VPS中安装CentOS桌面,并可以直接从我家远程访问GUI桌面。有什么建议可以在VPS上设置和访问CentOS远程桌面?
|
||||
> **提问**: 我想在VPS中安装CentOS桌面,并可以直接从我家远程访问GUI桌面。在VPS上设置和访问CentOS远程桌面有什么建议吗?
|
||||
|
||||
远程办公或者弹性工作制在技术领域正变得越来越流行。这个趋势背后的一项技术就是远程桌面。你的桌面环境在云中,无论在家、在工作场所还是在路上,你都可以访问你的远程桌面。
|
||||
|
||||
@ -10,7 +10,7 @@ Linux有问必答-- 如何在VPS上安装和访问CentOS远程桌面
|
||||
|
||||
### 第一步: 安装CentOS桌面 ###
|
||||
|
||||
如果现在的CentOS版本是没有桌面的最小版本,你需要先在VPS上安装桌面(比如GNOME)。比如,DigitalOcean的镜像就是最小版本,它需要如下安装[桌面GUI][2]
|
||||
如果你现在安装的CentOS版本是没有桌面的最小版本,你需要先在VPS上安装桌面(比如GNOME)。比如,DigitalOcean的镜像就是最小版本,它需要如下安装[桌面GUI][2]
|
||||
|
||||
# yum groupinstall "GNOME Desktop"
|
||||
|
||||
@ -36,15 +36,15 @@ CentOS依靠systemd来管理和配置系统服务。所以我们将使用systemd
|
||||
# systemctl status vncserver@:.service
|
||||
# systemctl is-enabled vncserver@.service
|
||||
|
||||
默认上,刚安装的VNC服务并没有激活(禁用)。
|
||||
默认的,刚安装的VNC服务并没有激活(禁用)。
|
||||
|
||||

|
||||
|
||||
现在服务一份通用的VNC服务文件来位用户xmodulo创建一个VNC服务配置。
|
||||
现在复制一份通用的VNC服务文件来为用户xmodulo创建一个VNC服务配置。
|
||||
|
||||
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
|
||||
|
||||
用本文编辑器来打开配置文件,用实际的用户名(比如:xmodulo)来替换[Service]下面的<USER>。同样。在ExecStart后面追加 "-geometry <resolution>" 参数。最后,要修改下面两行加粗字体的两行。
|
||||
用文本编辑器来打开配置文件,用实际的用户名(比如:xmodulo)来替换[Service]下面的<USER>。同样,在ExecStart后面追加 "-geometry <resolution>" 参数。最后,要修改下面“ExecStart”和“PIDFile”两行。
|
||||
|
||||
# vi /etc/systemd/system/vncserver@:1.service
|
||||
|
||||
@ -85,7 +85,7 @@ CentOS依靠systemd来管理和配置系统服务。所以我们将使用systemd
|
||||
|
||||
### 第三步:通过SSH连接到远程桌面 ###
|
||||
|
||||
设计上,VNC使用的远程帧缓存(RFB)并不是一种安全的协议。那么在VNC客户端上直接连接到VNC服务器上并不是一个好主意。任何敏感信息比如密码都可以在VNC流量中被轻易地泄露。因此,我强烈建议使用SSH隧道来[加密你的VNC流量][3]。
|
||||
从设计上说,VNC使用的远程帧缓存(RFB)并不是一种安全的协议,那么在VNC客户端上直接连接到VNC服务器上并不是一个好主意。任何敏感信息比如密码都可以在VNC流量中被轻易地泄露。因此,我强烈建议使用SSH隧道来[加密你的VNC流量][3]。
|
||||
|
||||
在你要运行VNC客户端的本机上,使用下面的命令来创建一个连接到远程VPS的SSH通道。当被要求输入SSH密码时,输入用户的密码。
|
||||
|
||||
@ -99,7 +99,7 @@ CentOS依靠systemd来管理和配置系统服务。所以我们将使用systemd
|
||||
|
||||

|
||||
|
||||
你将被要求输入VNC密码。当你输入VNC密码时,你就可以安全地连接到CentOS的远程桌面了.
|
||||
你将被要求输入VNC密码。当你输入VNC密码时,你就可以安全地连接到CentOS的远程桌面了。
|
||||
|
||||

|
||||
|
||||
@ -111,7 +111,7 @@ via: http://ask.xmodulo.com/centos-remote-desktop-vps.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,10 +1,11 @@
|
||||
怎样在Github上做开源代码库的主人
|
||||
怎样在Github上托管开源代码库
|
||||
================================================================================
|
||||
大家好,今天我们要学习一下怎样管理github.com库中的开源软件源代码。GitHub是一个基于web的Git库托管服务,提供分布式修改控制和Git的源代码管理(SCM)功能并加入了自身的特点。它给开源和私有项目提供了一个互相协作的工作区、代码预览和代码管理功能。不像Git,一个完完全全的命令行工具,GitHub提供了一个基于web的图形化界面和桌面,也整合了手机。GitHub同时提供了私有库付费计划和免费账号,都是用来管理开源软件项目的。
|
||||
|
||||
大家好,今天我们要学习一下怎样在github.com提供的仓库中托管开源软件源代码。GitHub是一个基于web的Git仓库托管服务,提供基于 git 的分布式版本控制和源代码管理(SCM)功能,并加入了自身的特点。它给开源项目和私有项目提供了一个互相协作的工作区、代码预览和代码管理功能。不像Git那样是一个完完全全的命令行工具,GitHub提供了一个基于web的图形化界面和桌面,也整合了手机操作。GitHub同时提供了私有库付费计划和通常用来管理开源软件项目的免费账号。
|
||||
|
||||

|
||||
|
||||
这是一种快速灵活,基于web的托管服务,它使用方便,管理分布式修改控制系统也是相当容易,任何人都能为了将它们使用、贡献、共享、问题跟踪和更多的全球各地数以百万计的人在github的库里管理他们的软件源代码。这里有一些简单快速地管理软件源代码的方法。
|
||||
这是一种快速灵活,基于web的托管服务,它使用方便,管理分布式版本控制系统也是相当容易,任何人都能将他们的软件源代码托管到 github,让全球各地数以百万计的人可以使用它、参与贡献、共享它、进行问题跟踪以及更多的用途。这里有一些简单快速地托管软件源代码的方法。
|
||||
|
||||
### 1. 创建一个新的Github账号 ###
|
||||
|
||||
@ -20,7 +21,7 @@
|
||||
|
||||
### 2. 创建一个新的库 ###
|
||||
|
||||
成功注册新账号或登录上Github之后,我们需要创建一个新的库来开始我们的正题。
|
||||
成功注册新账号或登录上Github之后,我们需要创建一个新的库来开始我们的征程。
|
||||
|
||||
点击位于顶部靠右账号id旁边的**(+)**按钮,然后点击“New Repository”。
|
||||
|
||||
@ -46,13 +47,13 @@
|
||||
|
||||
现在git已经准备就绪,我们要上传代码了。
|
||||
|
||||
**注意**:为了避免错误,不要用**README**文件、许可证或gitignore文件来初始化新库,你可以在项目推送到Github上之后再添加它们。
|
||||
**注意**:为了避免错误,不要在初始化的新库中包含**README**、license或gitignore等文件,你可以在项目推送到Github上之后再添加它们。
|
||||
|
||||
在终端上,我们需要把当前工作目录更改为你的本地项目,然后将本地目录初始化为Git库。
|
||||
在终端上,我们需要切换当前工作目录为你的本地项目的目录,然后将其初始化为Git库。
|
||||
|
||||
$ git init
|
||||
|
||||
接着我们在我们的新的本地库里添加的文件来作为我们的首次提交内容。
|
||||
接着我们把新的本地库中的文件添加进来,作为我们的首次提交内容。
|
||||
|
||||
$ git add .
|
||||
|
||||
@ -62,16 +63,16 @@
|
||||
|
||||

|
||||
|
||||
在终端上,我们要给远程库添加URL地址,用于以后我们能提交我们本地的库。
|
||||
在终端上,添加远程库的URL地址,以便我们的本地库推送到远程。
|
||||
|
||||
$ git remote add origin remote Repository url
|
||||
$ git remote add origin 远程库的URL
|
||||
$ git remote -v
|
||||
|
||||

|
||||
|
||||
注意:请确保将远程库的URL替换成了自己的远程库的URL。
|
||||
注意:请确保将上述“远程库的URL”替换成了你自己的远程库的URL。
|
||||
|
||||
现在,要将我们的本地库提交至GitHub版本库中,我们需要运行一下命令并且输入所需的用户名和密码。
|
||||
现在,要将我们的本地库的改变推送至GitHub的版本库中,我们需要运行以下命令,并且输入所需的用户名和密码。
|
||||
|
||||
$ git push origin master
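整个 init → add → commit → push 的流程可以先在本地完整演练一遍(不含真正的 push;目录、身份和提交信息均为示意):

```shell
mkdir -p /tmp/git-demo && cd /tmp/git-demo
git init -q                                 # 初始化本地库
git config user.email "demo@example.com"    # 演示用身份,实际请用自己的信息
git config user.name "demo"
echo "hello" > README.md
git add .                                   # 暂存全部文件
git commit -q -m "first commit"             # 首次提交
git log --oneline                           # 查看提交记录
```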
|
||||
|
||||
@ -87,9 +88,9 @@
|
||||
|
||||
请把以上这条URL地址更改成你想要克隆的地址。
|
||||
|
||||
### 更新改动 ###
|
||||
### 推送改动 ###
|
||||
|
||||
如果我们对我们的代码做了更改并想把它们提交至我们的远程库中,我们应该在该目录下运行以下命令。
|
||||
如果我们对我们的代码做了更改并想把它们推送至我们的远程库中,我们应该在该目录下运行以下命令。
|
||||
|
||||
$ git add .
|
||||
$ git commit -m "Updating"
|
||||
@ -97,7 +98,7 @@
|
||||
|
||||
### 结论 ###
|
||||
|
||||
啊哈!我们已经成功地管理我们在Github库中的项目源代码了。快速灵活的Github基于web的托管服务,分布式修改控制系统使用起来方便容易。数百万个非常棒的开源项目驻扎在github上。所以,如果你有任何问题、建议或反馈,请在评论中告诉我们。谢谢大家!好好享受吧 :-)
|
||||
啊哈!我们已经成功地将我们的项目源代码托管到Github的库中了。Github是快速灵活的基于web的托管服务,分布式版本控制系统使用起来方便容易。数百万个非常棒的开源项目驻扎在github上。所以,如果你有任何问题、建议或反馈,请在评论中告诉我们。谢谢大家!好好享受吧 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -105,7 +106,7 @@ via: http://linoxide.com/usr-mgmt/host-open-source-code-repository-github/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,57 @@
|
||||
两种方式创建你自己的 Docker 基本映像
|
||||
================================================================================
|
||||
|
||||
欢迎大家,今天我们学习一下 docker 基本映像以及如何构建我们自己的 docker 基本映像。[Docker][1] 是一个开源项目,提供了一个可以打包、装载和运行任何应用的轻量级容器的开放平台。它没有语言支持、框架和打包系统的限制,从小型的家用电脑到高端服务器,在何时何地都可以运行。这使它们可以不依赖于特定软件栈和供应商,像一块块积木一样部署和扩展网络应用、数据库和后端服务。
|
||||
|
||||
Docker 映像是不可更改的只读层。Docker 使用 **Union File System** 在只读文件系统上增加可读写的文件系统,但所有更改都发生在最顶层的可写层,而其下的只读映像上的原始文件仍然不会改变。由于映像不会改变,也就没有状态。基本映像是没有父类的那些映像。Docker 基本映像主要的好处是它允许我们有一个独立运行的 Linux 操作系统。
|
||||
|
||||
下面是我们如何可以创建自定义的基本映像的方式。
|
||||
|
||||
### 1. 使用 Tar 创建 Docker 基本映像 ###
|
||||
|
||||
我们可以使用 tar 构建我们自己的基本映像,我们从一个运行中的 Linux 发行版开始,将其打包为基本映像。这过程可能会有些不同,它取决于我们打算构建的发行版。在 Debian 发行版中,已经预带了 debootstrap。在开始下面的步骤之前,我们需要安装 debootstrap。debootstrap 用来获取构建基本系统需要的包。这里,我们构建基于 Ubuntu 14.04 "Trusty" 的映像。要完成这些,我们需要在终端或者 shell 中运行以下命令。
|
||||
|
||||
$ sudo debootstrap trusty trusty > /dev/null
|
||||
$ sudo tar -C trusty -c . | sudo docker import - trusty
|
||||
|
||||

|
||||
|
||||
上面的命令为当前文件夹创建了一个 tar 文件并输出到标准输出中,"docker import - trusty" 通过管道从标准输入中获取这个 tar 文件并根据它创建一个名为 trusty 的基本映像。然后,如下所示,我们将运行映像内部的一条测试命令。
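其中“把目录打成 tar 流”这一步可以单独验证(docker import 部分需要 Docker 环境,这里只演示打包;目录内容均为示意):

```shell
mkdir -p /tmp/rootfs-demo/etc
echo "demo" > /tmp/rootfs-demo/etc/issue
# 与 "sudo tar -C trusty -c . | sudo docker import - trusty" 的前半段相同:
# 把目录打成 tar 流,再列出流中的内容,确认根文件系统被完整打包
tar -C /tmp/rootfs-demo -c . | tar -tf -
```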
|
||||
|
||||
$ docker run trusty cat /etc/lsb-release
|
||||
|
||||
[Docker GitHub Repo][2] 中有一些允许我们快速构建基本映像的事例脚本.
|
||||
|
||||
### 2. 使用Scratch构建基本映像 ###
|
||||
|
||||
在 Docker registry 中,有一个被称为 Scratch 的使用空 tar 文件构建的特殊库:
|
||||
|
||||
$ tar cv --files-from /dev/null | docker import - scratch
|
||||
|
||||

|
||||
|
||||
我们可以使用这个映像构建新的小容器:
|
||||
|
||||
FROM scratch
|
||||
ADD script.sh /usr/local/bin/run.sh
|
||||
CMD ["/usr/local/bin/run.sh"]
|
||||
|
||||
上面的 Dockerfile 构建出一个很小的映像。这里,它首先从一个完全空的文件系统开始,然后将 script.sh 复制为镜像内的 /usr/local/bin/run.sh,最后以 /usr/local/bin/run.sh 作为容器的启动命令。
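上述 Dockerfile 引用的 script.sh 可以是任意可执行脚本,例如(内容纯属假设,路径用 /tmp 代替构建目录演示):

```shell
# 创建一个最简单的 script.sh 并执行它
cat > /tmp/script.sh <<'EOF'
#!/bin/sh
echo "hello from scratch container"
EOF
chmod +x /tmp/script.sh
/tmp/script.sh    # 输出: hello from scratch container
```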
|
||||
|
||||
### 结尾 ###
|
||||
|
||||
这这个教程中,我们学习了如何构建一个开箱即用的自定义 Docker 基本映像。构建一个 docker 基本映像是一个很简单的任务,因为这里有很多已经可用的包和脚本。如果我们想要在里面安装想要的东西,构建 docker 基本映像非常有用。如果有任何疑问,建议或者反馈,请在下面的评论框中写下来。非常感谢!享受吧 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/2-ways-create-docker-base-image/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://www.docker.com/
|
||||
[2]:https://github.com/docker/docker/blob/master/contrib/mkimage-busybox.sh
|
@ -0,0 +1,41 @@
|
||||
EvilAP_Defender:可以警示和攻击 WIFI 热点陷阱的工具
|
||||
===============================================================================
|
||||
|
||||
**开发人员称,EvilAP_Defender甚至可以攻击流氓Wi-Fi接入点**
|
||||
|
||||
这是一个新的开源工具,可以定期扫描一个区域,以防出现恶意 Wi-Fi 接入点,同时如果发现情况会提醒网络管理员。
|
||||
|
||||
这个工具叫做 EvilAP_Defender,是为监测攻击者所配置的恶意接入点而专门设计的,这些接入点冒用合法的名字诱导用户连接上。
|
||||
|
||||
这类接入点被称做假面猎手(evil twin),使得黑客们可以从所接入的设备上监听互联网信息流。这可以被用来窃取凭证、实施网络钓鱼等等。
|
||||
|
||||
大多数用户设置他们的计算机和设备可以自动连接一些无线网络,比如家里的或者工作地方的网络。通常,当面对两个同名的无线网络时,即SSID相同,有时候甚至连MAC地址(BSSID)也相同,这时候大多数设备会自动连接信号较强的一个。
|
||||
|
||||
这使得假面猎手攻击容易实现,因为SSID和BSSID都可以伪造。
|
||||
|
||||
[EvilAP_Defender][1]是一个叫Mohamed Idris的人用Python语言编写,公布在GitHub上面。它可以使用一个计算机的无线网卡来发现流氓接入点,这些坏蛋们复制了一个真实接入点的SSID,BSSID,甚至是其他的参数如通道,密码,隐私协议和认证信息等等。
|
||||
|
||||
该工具首先以学习模式运行,以便发现合法的接入点[AP],并且将其加入白名单。然后可以切换到正常模式,开始扫描未认证的接入点。
|
||||
|
||||
如果一个恶意[AP]被发现了,该工具会用电子邮件提醒网络管理员,但是开发者也打算在未来加入短信提醒功能。
|
||||
|
||||
该工具还有一个保护模式,在这种模式下,应用会对恶意接入点发起拒绝服务(DoS)攻击,为管理员采取防卫措施赢得一些时间。
|
||||
|
||||
“DoS 将仅仅针对有着相同SSID而BSSID(AP的MAC地址)不同或者信道不同的流氓 AP,”Idris在这款工具的文档中说道。“这是为了避免攻击到你的正常网络。”
|
||||
|
||||
尽管如此,用户应该切记,在许多国家攻击别人的接入点往往是非法的,即使它是一个看起来由攻击者操控的恶意接入点。
|
||||
|
||||
要能够运行这款工具,需要 Aircrack-ng 无线工具套件、一个支持 Aircrack-ng 的无线网卡、MySQL 和 Python 运行环境。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.infoworld.com/article/2905725/security0/this-tool-can-alert-you-about-evil-twin-access-points-in-the-area.html
|
||||
|
||||
作者:[Lucian Constantin][a]
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.infoworld.com/author/Lucian-Constantin/
|
||||
[1]:https://github.com/moha99sa/EvilAP_Defender/blob/master/README.TXT
|
@ -1,12 +1,12 @@
|
||||
一些重要Docker命令的简单介绍
|
||||
一些重要 Docker 命令的简单介绍
|
||||
================================================================================
|
||||
大家好,今天我们来学习一些在你使用Docker之前需要了解的重要的 Docker 命令。Docker 是一个提供开发平台去打包,装载和运行任何应用的轻量级容器开源项目。它没有语言支持,框架和打包系统的限制,能从一个小的家庭电脑到高端服务器,在任何地方任何时间运行。这使得它们成为不依赖于一个特定的栈或供应商,部署和扩展web应用,数据库和后端服务很好的构建块。
|
||||
大家好,今天我们来学习一些在你使用 Docker 之前需要了解的重要的 Docker 命令。[Docker][1] 是一个开源项目,提供了一个可以打包、装载和运行任何应用的轻量级容器的开放平台。它没有语言支持、框架和打包系统的限制,从小型的家用电脑到高端服务器,在何时何地都可以运行。这使它们可以不依赖于特定软件栈和供应商,像一块块积木一样部署和扩展网络应用、数据库和后端服务。
|
||||
|
||||
Docker 命令简单易学,也很容易实现或实践。这是一些你运行 Docker 并充分利用它需要知道的简单 Docker 命令。
|
||||
|
||||
### 1. 拉取一个 Docker 镜像 ###
|
||||
### 1. 拉取 Docker 镜像 ###
|
||||
|
||||
由于容器是由 Docker 镜像构建的,首先我们需要拉取一个 docker 镜像来开始。我们可以从 Docker 注册 Hub 获取需要的 docker 镜像。在我们使用 pull 命令拉取任何镜像之前,由于pull命令被标识为恶意命令,我们需要保护我们的系统。为了保护我们的系统不受这个问题影响,我们需要添加 **127.0.0.1 index.docker.io** 到 /etc/hosts 条目。我们可以通过使用喜欢的文本编辑器完成。
|
||||
由于容器是由 Docker 镜像构建的,首先我们需要拉取一个 docker 镜像来开始。我们可以从 Docker Registry Hub 获取所需的 docker 镜像。在我们使用 pull 命令拉取任何镜像之前,为了避免 pull 命令的一些恶意风险,我们需要保护我们的系统。为了保护我们的系统不受这个风险影响,我们需要添加 **127.0.0.1 index.docker.io** 到 /etc/hosts 条目。我们可以通过使用喜欢的文本编辑器完成。
|
||||
|
||||
# nano /etc/hosts
|
||||
|
||||
@ -16,7 +16,7 @@ Docker 命令简单易学,也很容易实现或实践。这是一些你运行
|
||||
|
||||

|
||||
|
||||
要拉取一个 docker 进行,我们需要运行下面的命令。
|
||||
要拉取一个 docker 镜像,我们需要运行下面的命令。
|
||||
|
||||
# docker pull registry.hub.docker.com/busybox
|
||||
|
||||
@ -28,9 +28,9 @@ Docker 命令简单易学,也很容易实现或实践。这是一些你运行
|
||||
|
||||

|
||||
|
||||
### 2. 运行一个 Docker 容器 ###
|
||||
### 2. 运行 Docker 容器 ###
|
||||
|
||||
现在,成功地拉取要求或需要的 Docker 镜像之后,我们当然想运行这个 Docker 镜像。我们可以用 docker run 命令在镜像上运行一个 docker 容器。在 Docker 镜像之上运行一个 docker 容易时我们有很多选项和标记。我们使用 -t 和 -i 标记运行一个 docker 镜像并进入容器,如下面所示。
|
||||
现在,成功地拉取要求的或所需的 Docker 镜像之后,我们当然想运行这个 Docker 镜像。我们可以用 docker run 命令在镜像上运行一个 docker 容器。在 Docker 镜像上运行一个 docker 容器时我们有很多选项和标记。我们使用 -t 和 -i 选项来运行一个 docker 镜像并进入容器,如下面所示。
|
||||
|
||||
# docker run -it busybox
|
||||
|
||||
@ -50,7 +50,7 @@ Docker 命令简单易学,也很容易实现或实践。这是一些你运行
|
||||
|
||||

|
||||
|
||||
### 3. 查看容器 ###
|
||||
### 3. 检查容器运行 ###
|
||||
|
||||
不论容器是否运行,查看日志文件都很简单。我们可以使用下面的命令去检查是否有 docker 容器在实时运行。
|
||||
|
||||
@ -62,17 +62,17 @@ Docker 命令简单易学,也很容易实现或实践。这是一些你运行
|
||||
|
||||

|
||||
|
||||
### 4. 检查 Docker 容器 ###
|
||||
### 4. 查看容器信息 ###
|
||||
|
||||
我们可以使用 inspect 命令检查一个 Docker 容器的每条信息。
|
||||
我们可以使用 inspect 命令查看一个 Docker 容器的各种信息。
|
||||
|
||||
# docker inspect <container id>
|
||||
|
||||

|
||||
|
||||
### 5. 杀死或删除命令 ###
|
||||
### 5. 杀死或删除 ###
|
||||
|
||||
我们可以使用 docker id 杀死或者停止进程或 docker 容器,如下所示。
|
||||
我们可以使用容器 id 杀死或者停止 docker 容器(进程),如下所示。
|
||||
|
||||
# docker stop <container id>
|
||||
|
||||
@ -90,7 +90,7 @@ Docker 命令简单易学,也很容易实现或实践。这是一些你运行
|
||||
|
||||
### 结论 ###
|
||||
|
||||
这些都是学习充分实现和利用 Docker 很基本的 docker 命令。有了这些命令,Docker 变得很简单,提供给端用户一个简单的计算平台。根据上面的教程,任何人学习 Docker 命令都非常简单。如果你有任何问题,建议,反馈,请写到下面的评论框中以便我们改进和更新内容。多谢!享受吧 :-)
|
||||
这些都是充分学习和使用 Docker 很基本的 docker 命令。有了这些命令,Docker 变得很简单,可以提供给最终用户一个易用的计算平台。根据上面的教程,任何人学习 Docker 命令都非常简单。如果你有任何问题,建议,反馈,请写到下面的评论框中以便我们改进和更新内容。多谢! 希望你喜欢 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -98,8 +98,9 @@ via: http://linoxide.com/linux-how-to/important-docker-commands/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://www.docker.com/
|
@ -0,0 +1,181 @@
|
||||
Web缓存基础:术语、HTTP报头和缓存策略
|
||||
=====================================================================
|
||||
|
||||
### 简介
|
||||
|
||||
对于您的站点的访问者来说,智能化的内容缓存是提高用户体验最有效的方式之一。缓存,或者对之前的请求的临时存储,是HTTP协议实现中最核心的内容分发策略之一。分发路径中的组件均可以缓存内容来加速后续的请求,这受控于对该内容所声明的缓存策略。
|
||||
|
||||
在这份指南中,我们将讨论一些Web内容缓存的基本概念。这主要包括如何选择缓存策略以保证互联网范围内的缓存能够正确的处理您的内容。我们将谈一谈缓存带来的好处、副作用以及不同的策略能带来的性能和灵活性的最大结合。
|
||||
|
||||
###什么是缓存(caching)?
|
||||
|
||||
缓存(caching)是一个描述存储可重用资源以便加快后续请求的行为的术语。有许多不同类型的缓存,每种都有其自身的特点,应用程序缓存和内存缓存由于其对特定回复的加速,都很常用。
|
||||
|
||||
这份指南的主要讲述的Web缓存是一种不同类型的缓存。Web缓存是HTTP协议的一个核心特性,它能最小化网络流量,并且提升用户所感知的整个系统响应速度。内容从服务器到浏览器的传输过程中,每个层面都可以找到缓存的身影。
|
||||
|
||||
Web缓存根据特定的规则缓存相应HTTP请求的响应。对于缓存内容的后续请求便可以直接由缓存满足而不是重新发送请求到Web服务器。
|
||||
|
||||
###好处
|
||||
|
||||
有效的缓存技术不仅可以帮助用户,还可以帮助内容的提供者。缓存对内容分发带来的好处有:
|
||||
|
||||
- **减少网络开销**:内容可以在从内容提供者到内容消费者网络路径之间的许多不同的地方被缓存。当内容在距离内容消费者更近的地方被缓存时,由于缓存的存在,请求将不会消耗额外的网络资源。
|
||||
- **加快响应速度**:由于并不是必须通过整个网络往返,缓存可以使内容的获得变得更快。缓存放在距用户更近的地方,例如浏览器缓存,使得内容的获取几乎是瞬时的。
|
||||
- **在同样的硬件上提高速度**:对于保存原始内容的服务器来说,更多的性能可以通过允许激进的缓存策略从硬件上压榨出来。内容拥有者们可以利用分发路径上某个强大的服务器来应对特定内容负载的冲击。
|
||||
- **网络中断时内容依旧可用**:使用某种策略,缓存可以保证在原始服务器变得不可用时,相应的内容对用户依旧可用。
|
||||
|
||||
###术语
|
||||
|
||||
在面对缓存时,您可能对一些经常遇到的术语可能不太熟悉。一些常见的术语如下:
|
||||
|
||||
- **原始服务器**:原始服务器是内容的原始存放地点。如果您是Web服务器管理员,它就是您所管理的机器。它负责为任何不能从缓存中得到的内容进行回复,并且负责设置所有内容的缓存策略。
|
||||
- **缓存命中率**:一个缓存的有效性依照缓存的命中率进行度量。它是可以从缓存中得到数据的请求数与所有请求数的比率。缓存命中率高意味着有很高比例的数据可以从缓存中获得。这通常是大多数管理员想要的结果。
|
||||
- **新鲜度**:新鲜度用来描述一个缓存中的项目是否依旧适合返回给客户端。缓存中的内容只有在由缓存策略指定的新鲜期内才会被返回。
|
||||
- **过期内容**:缓存中根据缓存策略的新鲜期设置已过期的内容。过期的内容被标记为“陈旧”。通常,过期内容不能用于回复客户端的请求。必须重新从原始服务器请求新的内容或者至少验证缓存的内容是否仍然准确。
|
||||
- **校验**:缓存中的过期内容可以验证是否有效以便刷新过期时间。验证过程包括联系原始服务器以检查缓存的数据是否依旧代表了最近的版本。
|
||||
- **失效**:失效是依据过期日期从缓存中移除内容的过程。当内容在原始服务器上已被改变时就必须这样做,缓存中过期的内容会导致客户端发生问题。
|
||||
|
||||
还有许多其他的缓存术语,不过上面的这些应该能帮助您开始。
|
||||
|
||||
###什么能被缓存?
|
||||
|
||||
某些特定的内容比其他内容更容易被缓存。对大多数站点来说,一些适合缓存的内容如下:
|
||||
|
||||
- Logo和商标图像
|
||||
- 普通的不变化的图像(例如,导航图标)
|
||||
- CSS样式表
|
||||
- 普通的Javascript文件
|
||||
- 可下载的内容
|
||||
- 媒体文件
|
||||
|
||||
这些文件更倾向于不经常改变,所以长时间的对它们进行缓存能获得好处。
|
||||
|
||||
一些项目在缓存中必须加以注意:
|
||||
|
||||
- HTML页面
|
||||
- 会替换改变的图像
|
||||
- 经常修改的Javascript和CSS文件
|
||||
- 需要有认证后的cookies才能访问的内容
|
||||
|
||||
一些内容从来不应该被缓存:
|
||||
|
||||
- 与敏感信息相关的资源(银行数据,等)
|
||||
- 用户相关且经常更改的数据
|
||||
|
||||
除上面的通用规则外,通常您需要指定一些规则以便于更好地缓存不同种类的内容。例如,如果登录的用户都看到的是同样的网站视图,就应该在任何地方缓存这个页面。如果登录的用户会在一段时间内看到站点中用户特定的视图,您应该让用户的浏览器缓存该数据而不应让任何中介节点缓存该视图。
|
||||
|
||||
### Web内容缓存的位置
|
||||
|
||||
Web内容会在整个分发路径中的许多不同的位置被缓存:
|
||||
|
||||
- **浏览器缓存**:Web浏览器自身会维护一个小型缓存。典型地,浏览器使用一种策略指示缓存最重要的内容。这可能是用户相关的内容或可能会再次请求且下载代价较高。
|
||||
- **中间缓存代理**:任何在客户端和您的基础架构之间的服务器都可以按期望缓存一些内容。这些缓存可能由ISP(网络服务提供者)或者其他独立组织提供。
|
||||
- **反向缓存**:您的服务器基础架构可以为后端的服务实现自己的缓存。如果实现了缓存,那么便可以在处理请求的位置返回相应的内容而不用每次请求都使用后端服务。
|
||||
|
||||
上面的这些位置通常都可以根据它们自身的缓存策略和内容源的缓存策略缓存一些相应的内容。
|
||||
|
||||
### 缓存头部
|
||||
|
||||
缓存策略依赖于两个不同的因素。缓存实体自身可以决定是否缓存可以缓存的内容:它可以缓存少于允许范围的内容,但绝不能缓存超出允许范围的内容。
|
||||
|
||||
缓存行为主要由缓存策略决定,而缓存策略由内容拥有者设置。这些策略主要通过特定的HTTP头部来清晰地表达。
|
||||
|
||||
经过几个不同HTTP协议的变化,出现了一些不同的针对缓存方面的头部,它们的复杂度各不相同。下面列出了那些你也许应该注意的:
|
||||
|
||||
- **`Expires`**:尽管使用范围相当有限,但`Expires`头部是非常简洁明了的。通常它设置一个未来的时间,内容会在此时间过期。这时,任何对同样内容的请求都应该回到原始服务器处。这个头部或许仅仅最适合回退模式(fall back)。
|
||||
- **`Cache-Control`**:这是`Expires`的一个更加现代化的替换物。它已被很好的支持,且拥有更加灵活的实现。在大多数案例中,它比`Expires`更好,但同时设置两者的值也无妨。稍后我们将讨论您可以设置的`Cache-Control`的详细选项。
|
||||
- **`ETag`**:`ETag`用于缓存验证。源服务器可以在首次服务一个内容时为该内容提供一个独特的`ETag`。当一个缓存需要验证这个内容是否即将过期,他会将相应的`ETag`发送回服务器。源服务器或者告诉缓存内容是一致的,或者发送更新后的内容(带着新的`ETag`)。
|
||||
- **`Last-Modified`**:这个头部指明了相应的内容最后一次被修改的时间。它可能会作为保证内容新鲜度的验证策略的一部分被使用。
|
||||
- **`Content-Length`**:尽管并没有在缓存中明确涉及,`Content-Length`头部在设置缓存策略时很重要。某些软件如果不提前获知内容的大小以留出足够空间,则会拒绝缓存该内容。
|
||||
- **`Vary`**:缓存系统通常使用请求的主机和路径作为存储该资源的键。当判断一个请求是否是请求同样内容时,`Vary`头部可以被用来提醒缓存系统需要注意另一个附加头部。它通常被用来告诉缓存系统同样注意`Accept-Encoding`头部,以便缓存系统能够区分压缩和未压缩的内容。
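顺带一提,`Expires`、`Last-Modified` 等头部都使用 HTTP 规范规定的 GMT 日期格式。下面用 GNU `date` 生成这种格式的一个小示例(假设系统提供 GNU date 的 `-d` 选项):

```shell
# 生成符合 HTTP 规范的日期字符串,可用作 Expires 等头部的值
http_date() {
    # "$1" 是传给 GNU date 的时间描述,例如 "+1 hour"
    date -u -d "$1" +"%a, %d %b %Y %H:%M:%S GMT"
}

# 设置一小时后过期的 Expires 头部
echo "Expires: $(http_date '+1 hour')"
```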
|
||||
|
||||
### Vary头部的隐语
|
||||
|
||||
`Vary`头部提供给您存储同一个内容的不同版本的能力,代价是降低了缓存的容量。
|
||||
|
||||
在使用`Accept-Encoding`时,设置`Vary`头部允许明确区分压缩和未压缩的内容。这在服务某些不能处理压缩数据的浏览器时很重要,它可以保证基本的可用性。`Vary`的一个典型的值是`Accept-Encoding`,它只有两到三个可选的值。
|
||||
|
||||
一开始看上去`User-Agent`这样的头部可以用于区分移动浏览器和桌面浏览器,以便您的站点提供差异化的服务。但`User-Agent`字符串是非标准的,结果将会造成在中间缓存中保存同一内容的许多不同版本的缓存,这会导致缓存命中率的降低。`Vary`头部应该谨慎使用,尤其是您不具备在您控制的中间缓存中使请求标准化的能力(也许可以,比如您可以控制CDN的话)。
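缓存区分同一资源不同版本的方式,可以粗略地理解为把 `Vary` 所指明的头部取值一并拼进缓存键。下面是一个极简的示意(假设性的草图,并非任何真实缓存的实现方式):

```shell
# 缓存键 = 主机 + 路径 + Vary 指定头部的取值(这里以 Accept-Encoding 为例)
cache_key() {
    printf '%s|%s|%s' "$1" "$2" "$3" | sha256sum | cut -d' ' -f1
}

cache_key example.com /style.css gzip
cache_key example.com /style.css identity   # 编码不同,对应不同的缓存条目
```

可以看到,同样的URL会因 `Accept-Encoding` 取值不同而得到不同的键;如果换成取值繁多的 `User-Agent`,键的数量就会爆炸,这正是上文所说命中率下降的原因。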
|
||||
|
||||
### 缓存控制标志怎样影响缓存
|
||||
|
||||
上面我们提到了`Cache-Control`头部如何被用于现代的缓存策略标准。通过这个头部可以设定许多不同的缓存指令,多个指令之间用逗号分隔。
|
||||
|
||||
一些您可以使用的指示内容缓存策略的`Cache-Control`的选项如下:
|
||||
|
||||
- **`no-cache`**:这个指令指示所有缓存的内容在新的请求到达时必须先重新验证,再发送给客户端。这条指令实际将内容立刻标记为过期的,但允许通过验证手段重新验证以避免重新下载整个内容。
|
||||
- **`no-store`**:这条指令指示缓存的内容不能以任何方式被缓存。它适合在回复敏感信息时设置。
|
||||
- **`public`**:它将内容标记为公有的,这意味着它能被浏览器和其他任何中间节点缓存。通常,对于使用了HTTP验证的请求,其回复被默认标记为`private`。`public`标记将会覆盖这个设置。
|
||||
- **`private`**:它将内容标记为私有的。私有数据可以被用户的浏览器缓存,但*不能*被任何中间节点缓存。它通常用于用户相关的数据。
|
||||
- **`max-age`**:这个设置指示了缓存内容的最大生存期,它在最大生存期后必须在源服务器处被验证或被重新下载。在现代浏览器中这个选项大体上取代了`Expires`头部,浏览器也将其作为决定内容的新鲜度的基础。这个选项的值以秒为单位表示,最大可以表示一年的新鲜期(31536000秒)。
|
||||
- **`s-maxage`**:这个选项非常类似于`max-age`,它指明了内容能够被缓存的时间。区别是这个选项只在中间节点的缓存中有效。结合这两个选项可以构建更加灵活的缓存策略。
|
||||
- **`must-revalidate`**:它指明了由`max-age`、`s-maxage`或`Expires`头部指明的新鲜度信息必须被严格的遵守。它避免了缓存的数据在网络中断等类似的场景中被使用。
|
||||
- **`proxy-revalidate`**:它和上面的选项有着一样的作用,但只应用于中间的代理节点。在这种情况下,用户的浏览器可以在网络中断时使用过期内容,但中间缓存内容不能用于此目的。
|
||||
- **`no-transform`**:这个选项告诉缓存,在任何情况下都不能出于性能原因修改接收到的内容。例如,如果从原始服务器收到的是未压缩的内容,缓存就不允许将其压缩后再发送。
|
||||
|
||||
这些选项能够以不同的方式结合以获得不同的缓存行为。一些互斥的值如下:
|
||||
|
||||
- `no-cache`、`no-store`,以及两者都不设置时的常规缓存行为
|
||||
- `public`和`private`
|
||||
|
||||
如果`no-store`和`no-cache`都被设置,那么`no-store`会取代`no-cache`。对于非授权的请求的回复,`public`是隐含的设置。对于授权的请求的回复,`private`选项是隐含的。他们可以通过在`Cache-Control`头部中指明相应的相反的选项以覆盖。
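作为参考,下面是一段在 nginx 中为不同类型内容设置 `Cache-Control` 头部的配置示意(假设使用 nginx;其中的路径和时长均为示例,需按站点实际情况调整):

```nginx
# 带文件摘要的静态资源:公有缓存,一年的新鲜期
location ~* \.(css|js|png|jpg|gif|ico)$ {
    add_header Cache-Control "public, max-age=31536000";
}

# HTML 页面:每次使用前都必须重新验证
location / {
    add_header Cache-Control "no-cache";
}

# 含敏感信息的接口:完全禁止缓存
location /account/ {
    add_header Cache-Control "no-store";
}
```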
|
||||
|
||||
### 开发一种缓存策略
|
||||
|
||||
在理想情况下,任何内容都可以被尽可能缓存,而您的服务器只需要偶尔的提供一些验证内容即可。但这在现实中很少发生,因此您应该尝试设置一些明智的缓存策略,以在长期缓存和站点改变的需求间达到平衡。
|
||||
|
||||
### 常见问题
|
||||
|
||||
在许多情况中,由于内容被产生的方式(如根据每个用户动态地产生)或者内容的特性(例如银行的敏感数据),这些内容不应该被缓存。另一个许多管理员在设置缓存时可能面对的问题是:外部缓存的数据尚未过期,但新版本的数据已经产生。
|
||||
|
||||
这些都是经常遇到的问题,它们会影响缓存的性能和您提供的数据的准确性。然而,我们可以通过开发提前预见这些问题的缓存策略来缓解这些问题。
|
||||
|
||||
### 一般性建议
|
||||
|
||||
尽管您的实际情况会指导您选择的缓存策略,但是下面的建议能帮助您获得一些合理的决定。
|
||||
|
||||
在您担心使用哪一个特定的头部之前,有一些特定的步骤可以帮助您提高您的缓存命中率。一些建议如下:
|
||||
|
||||
- **为图像、CSS和共享的内容建立特定的文件夹**:将内容放到特定的文件夹内使得您可以方便的从您的站点中的任何页面引用这些内容。
|
||||
- **使用同样的URL来表示同样的内容**:由于缓存使用内容请求中的主机名和路径作为键,因此应保证您的所有页面中的该内容的引用方式相同,前一个建议能让这点更加容易做到。
|
||||
- **尽可能使用CSS图像拼接**:对于像图标和导航等内容,使用CSS图像拼接能够减少渲染您页面所需要的请求往返,并且允许对拼接缓存很长一段时间。
|
||||
- **尽可能将主机脚本和外部资源本地化**:如果您使用Javascript脚本和其他外部资源,如果上游没有提供合适的缓存头部,那么您应考虑将这些内容放在您自己的服务器上。您应该注意上游的任何更新,以便更新本地的拷贝。
|
||||
- **对缓存内容收集文件摘要**:静态的内容比如CSS和Javascript文件等通常比较适合收集文件摘要。这意味着为文件名增加一个独特的标志符(通常是这个文件的哈希值)可以在文件修改后绕开缓存保证新的内容被重新获取。有很多工具可以帮助您创建文件摘要并且修改HTML文档中的引用。
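上面所说的“文件摘要”可以用几行 shell 粗略演示(仅为示意,实际项目中通常由构建工具自动完成;其中的文件名均为假设):

```shell
# 复制出一个文件名中带内容哈希的版本,返回新文件名
fingerprint() {
    hash=$(sha256sum "$1" | cut -c1-8)    # 取内容哈希的前 8 位
    new="${1%.*}.${hash}.${1##*.}"
    cp "$1" "$new"
    echo "$new"
}

echo 'body { color: red; }' > style.css
new=$(fingerprint style.css)
# 把 HTML 中对 style.css 的引用替换为带摘要的文件名
echo '<link href="style.css">' | sed "s/style\.css/$new/"
```

文件内容一旦改变,哈希随之改变,新文件名就会绕过旧的缓存条目。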
|
||||
|
||||
对于不同的文件正确地选择不同的头部这件事,下面的内容可以作为一般性的参考:
|
||||
|
||||
- **允许所有的缓存存储一般内容**:静态内容以及非用户相关的内容应该在分发链的所有节点被缓存。这使得中间节点可以将该内容回复给多个用户。
|
||||
- **允许浏览器缓存用户相关的内容**:对于每个用户的数据,通常在用户自己的浏览器中缓存是可以被接受且有益的。缓存在用户自身的浏览器能够使得用户在接下来的浏览中能够瞬时读取,但这些内容不适合在任何中间代理节点缓存。
|
||||
- **将时间敏感的内容作为特例**:如果您的数据是时间敏感的,那么相对上面两条参考,应该将这些数据作为特例,以保证过期的数据不会在关键的情况下被使用。例如,您的站点有一个购物车,它应该立刻反映购物车里面的物品。依据内容的特点,可以在`Cache-Control`头部中使用`no-cache`或`no-store`选项。
|
||||
- **总是提供验证器**:验证器使得过期的内容可以无需重新下载而得到刷新。设置`ETag`和`Last-Modified`头部将允许缓存向原始服务器验证内容,并在内容未修改时刷新该内容新鲜度以减少负载。
|
||||
- **对于支持的内容设置长的新鲜期**:为了更加有效的利用缓存,一些作为支持性的内容应该被设置较长的新鲜期。这通常比较适合图像和CSS等由用户请求用来渲染HTML页面的内容。和文件摘要一起,设置延长的新鲜期将允许缓存长时间的存储这些资源。如果资源发生改变,修改的文件摘要将会使缓存的数据无效并触发对新的内容的下载。那时,新的支持的内容会继续被缓存。
|
||||
- **对父内容设置短的新鲜期**:为了使前面的模式正常工作,容器类的内容应该相应地设置较短的新鲜期,或者干脆不缓存。这通常就是引用其他辅助内容的HTML页面。HTML页面会被频繁地下载,使得改动能被快速反映出来,而支持性的内容则可以被尽量缓存。
|
||||
|
||||
关键之处便在于达到平衡:一方面尽可能地进行缓存,另一方面保留在内容发生变化时能让整套内容得到更新的途径。您的站点应该同时具有:
|
||||
|
||||
- 尽量缓存的内容
|
||||
- 拥有短的新鲜期的缓存内容,可以被重新验证
|
||||
- 完全不被缓存的内容
|
||||
|
||||
这样做的目的便是将内容尽可能的移动到第一个分类(尽量缓存)中的同时,维持可以接受的缓存命中率。
|
||||
|
||||
结论
|
||||
----
|
||||
|
||||
花时间确保您的站点使用了合适的缓存策略将对您的站点产生重要的影响。缓存使得您可以在保证服务同样内容的同时减少带宽的使用。您的服务器因此可以靠同样的硬件处理更多的流量。或许更重要的是,客户们能在您的网站中获得更快的体验,这会使得他们更愿意频繁的访问您的站点。尽管有效的Web缓存并不是银弹,但设置合适的缓存策略会使您以最小的代价获得可观的收获。
|
||||
|
||||
---
|
||||
|
||||
via: https://www.digitalocean.com/community/tutorials/web-caching-basics-terminology-http-headers-and-caching-strategies
|
||||
|
||||
作者: [Justin Ellingwood](https://www.digitalocean.com/community/users/jellingwood)
|
||||
译者:[wwy-hust](https://github.com/wwy-hust)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
推荐:[royaso](https://github.com/royaso)
|
||||
|
||||
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
|
||||
|
@ -0,0 +1,63 @@
|
||||
在Ubuntu中安装Visual Studio Code
|
||||
================================================================================
|
||||

|
||||
|
||||
微软令人意外地[发布了Visual Studio Code][1],并支持主要的桌面平台,当然也包括Linux。如果你是一名需要在Ubuntu上工作的web开发人员,你可以**非常轻松地安装Visual Studio Code**。
|
||||
|
||||
我将要使用[Ubuntu Make][2]来安装Visual Studio Code。Ubuntu Make,就是以前的Ubuntu开发者工具中心,是一个命令行工具,帮助用户快速安装各种开发工具、语言和IDE。也可以使用Ubuntu Make轻松[安装Android Studio][3] 和其他IDE,如Eclipse。本文将展示**如何在Ubuntu中使用Ubuntu Make安装Visual Studio Code**。(译注:也可以直接去微软官网下载安装包)
|
||||
|
||||
### 安装微软Visual Studio Code ###
|
||||
|
||||
开始之前,首先需要安装Ubuntu Make。虽然Ubuntu Make存在Ubuntu15.04官方库中,**但是需要Ubuntu Make 0.7以上版本才能安装Visual Studio**。所以,需要通过官方PPA更新到最新的Ubuntu Make。此PPA支持Ubuntu 14.04, 14.10 和 15.04。
|
||||
|
||||
注意,**仅支持64位版本**。
|
||||
|
||||
打开终端,使用下列命令,通过官方PPA来安装Ubuntu Make:
|
||||
|
||||
sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
|
||||
sudo apt-get update
|
||||
sudo apt-get install ubuntu-make
|
||||
|
||||
安装Ubuntu Make完后,接着使用下列命令安装Visual Studio Code:
|
||||
|
||||
umake web visual-studio-code
|
||||
|
||||
安装过程中,将会询问安装路径,如下图:
|
||||
|
||||

|
||||
|
||||
在显示了一长串条款和条件之后,它会询问你是否确认安装Visual Studio Code。输入‘a’来确定:
|
||||
|
||||

|
||||
|
||||
确定之后,安装程序会开始下载并安装。安装完成后,你可以发现Visual Studio Code 图标已经出现在了Unity启动器上。点击图标开始运行!下图是Ubuntu 15.04 Unity的截图:
|
||||
|
||||

|
||||
|
||||
### 卸载Visual Studio Code ###
|
||||
|
||||
卸载Visual Studio Code,同样使用Ubuntu Make命令。如下:
|
||||
|
||||
umake web visual-studio-code --remove
|
||||
|
||||
如果你不打算使用Ubuntu Make,也可以通过微软官方下载安装文件。
|
||||
|
||||
- [下载Visual Studio Code Linux版][4]
|
||||
|
||||
怎样!是不是超级简单就可以安装Visual Studio Code,这都归功于Ubuntu Make。我希望这篇文章能帮助到你。如果您有任何问题或建议,欢迎给我留言。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/install-visual-studio-code-ubuntu/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[Vic020/VicYu](http://vicyu.net)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:https://linux.cn/article-5376-1.html
|
||||
[2]:https://wiki.ubuntu.com/ubuntu-make
|
||||
[3]:http://itsfoss.com/install-android-studio-ubuntu-linux/
|
||||
[4]:https://code.visualstudio.com/Download
|
@ -0,0 +1,164 @@
|
||||
如何使用Vault安全的存储密码和API密钥
|
||||
=======================================================================
|
||||
Vault是用来安全的获取秘密信息的工具,它可以保存密码、API密钥、证书等信息。Vault提供了一个统一的接口来访问秘密信息,其具有健壮的访问控制机制和丰富的事件日志。
|
||||
|
||||
对关键信息的授权访问是一个困难的问题,尤其是当有许多用户角色,并且用户请求不同的关键信息时,例如用不同权限登录数据库的登录配置,用于外部服务的API密钥,SOA通信的证书等。当保密信息由不同的平台进行管理,并使用一些自定义的配置时,情况变得更糟,因此,安全的存储、管理审计日志几乎是不可能的。但Vault为这种复杂情况提供了一个解决方案。
|
||||
|
||||
### 突出特点 ###
|
||||
|
||||
**数据加密**:Vault能够在不存储数据的情况下对数据进行加密、解密。开发者们便可以存储加密后的数据而无需开发自己的加密技术,Vault还允许安全团队自定义安全参数。
|
||||
|
||||
**安全密码存储**:Vault在将秘密信息(API密钥、密码、证书)存储到持久化存储之前对数据进行加密。因此,如果有人偶尔拿到了存储的数据,这也没有任何意义,除非加密后的信息能被解密。
|
||||
|
||||
**动态密码**:Vault可以随时为AWS、SQL数据库等类似的系统产生密码。比如,如果应用需要访问AWS S3 桶,它向Vault请求AWS密钥对,Vault将给出带有租期的所需秘密信息。一旦租用期过期,这个秘密信息就不再存储。
|
||||
|
||||
**租赁和更新**:Vault给出的秘密信息带有租期,一旦租用期过期,它便立刻收回秘密信息,如果应用仍需要该秘密信息,则可以通过API更新租用期。
|
||||
|
||||
**撤销**:在租用期到期之前,Vault可以撤销一个秘密信息或者一个秘密信息树。
|
||||
|
||||
### 安装Vault ###
|
||||
|
||||
有两种方式来安装使用Vault。
|
||||
|
||||
**1. 预编译的Vault二进制** 能用于所有的Linux发行版,下载地址如下,下载之后,解压并将它放在系统PATH路径下,以方便调用。
|
||||
|
||||
- [下载预编译的二进制 Vault (32-bit)][1]
|
||||
- [下载预编译的二进制 Vault (64-bit)][2]
|
||||
- [下载预编译的二进制 Vault (ARM)][3]
|
||||
|
||||

|
||||
|
||||
*下载相应的预编译的Vault二进制版本。*
|
||||
|
||||

|
||||
|
||||
*解压下载到本地的二进制版本。*
|
||||
|
||||
祝贺你!您现在可以使用Vault了。
|
||||
|
||||

|
||||
|
||||
**2. 从源代码编译**是另一种在系统中安装Vault的方式。在安装Vault之前需要安装GO和GIT。
|
||||
|
||||
在 **Redhat系统中安装GO** 使用下面的指令:
|
||||
|
||||
sudo yum install go
|
||||
|
||||
在 **Debin系统中安装GO** 使用下面的指令:
|
||||
|
||||
sudo apt-get install golang
|
||||
|
||||
或者
|
||||
|
||||
sudo add-apt-repository ppa:gophers/go
|
||||
|
||||
sudo apt-get update
|
||||
|
||||
sudo apt-get install golang-stable
|
||||
|
||||
在 **Redhat系统中安装GIT** 使用下面的命令:
|
||||
|
||||
sudo yum install git
|
||||
|
||||
在 **Debian系统中安装GIT** 使用下面的命令:
|
||||
|
||||
sudo apt-get install git
|
||||
|
||||
一旦GO和GIT都已被安装好,我们便可以开始从源码编译安装Vault。
|
||||
|
||||
> 将下列的Vault仓库拷贝至GOPATH
|
||||
|
||||
https://github.com/hashicorp/vault
|
||||
|
||||
> 测试下面的文件是否存在,如果它不存在,那么Vault没有被克隆到合适的路径。
|
||||
|
||||
$GOPATH/src/github.com/hashicorp/vault/main.go
|
||||
|
||||
> 执行下面的指令来编译Vault,并将二进制文件放到系统bin目录下。
|
||||
|
||||
make dev
|
||||
|
||||

|
||||
|
||||
### 一份Vault入门教程 ###
|
||||
|
||||
我们已经编制了一份Vault的官方交互式教程,并带有它在SSH上的输出信息。
|
||||
|
||||
**概述**
|
||||
|
||||
这份教程包括下列步骤:
|
||||
|
||||
- 初始化并启封您的Vault
|
||||
- 在Vault中对您的请求授权
|
||||
- 读写秘密信息
|
||||
- 密封您的Vault
|
||||
|
||||
#### **初始化您的Vault**
|
||||
|
||||
首先,我们需要为您初始化一个Vault的工作实例。在初始化过程中,您可以配置Vault的密封行为。简单起见,现在使用一个启封密钥来初始化Vault,命令如下:
|
||||
|
||||
vault init -key-shares=1 -key-threshold=1
|
||||
|
||||
您会注意到Vault在这里输出了几个密钥。不要清除您的终端,这些密钥在后面的步骤中会使用到。
|
||||
|
||||

|
||||
|
||||
#### **启封您的Vault**
|
||||
|
||||
当一个Vault服务器启动时,它处于密封状态。在这种状态下,Vault知道物理存储在哪里及如何存取,但不知道如何对其进行解密。Vault使用加密密钥来加密数据,这个密钥又由"主密钥"加密,而主密钥本身并不保存。解密主密钥需要启封密钥。在这个例子中,我们使用一个启封密钥来解密主密钥。
|
||||
|
||||
vault unseal <key 1>
|
||||
|
||||

|
||||
|
||||
#### **为您的请求授权**
|
||||
|
||||
在执行任何操作之前,连接的客户端必须是被授权的。授权的过程是检验一个人或者机器是否如其所申明的那样具有正确的身份。这个身份用在向Vault发送请求时。为简单起见,我们将使用在步骤2中生成的root令牌,这个信息可以回滚终端屏幕看到。使用一个客户端令牌进行授权:
|
||||
|
||||
vault auth <root token>
|
||||
|
||||

|
||||
|
||||
#### **读写保密信息**
|
||||
|
||||
现在Vault已经被设置妥当,我们可以开始读写默认挂载的秘密后端里面的秘密信息了。写在Vault中的秘密信息首先被加密,然后被写入后端存储中。后端存储机制绝不会看到未加密的信息,并且也没有在Vault之外解密的需要。
|
||||
|
||||
vault write secret/hello value=world
|
||||
|
||||
当然,您接下来便可以读这个保密信息了:
|
||||
|
||||
vault read secret/hello
|
||||
|
||||

|
||||
|
||||
#### **密封您的Vault**
|
||||
|
||||
还有一个用来密封Vault的API。它将丢弃当前的加密密钥,并需要再次执行启封过程才能恢复使用。密封操作仅需要一个拥有root权限的操作者,这是一种罕见的"打破玻璃(break glass)"应急流程的典型组成部分。
|
||||
|
||||
这种方式中,如果检测到一个入侵,Vault数据将会立刻被锁住,以便最小化损失。如果不能访问到主密钥碎片的话,就不能再次获取数据。
|
||||
|
||||
vault seal
|
||||
|
||||

|
||||
|
||||
这便是入门教程的结尾。
|
||||
|
||||
### 总结 ###
|
||||
|
||||
Vault是一个非常有用的应用,它提供了一个可靠且安全的存储关键信息的方式。另外,它在存储前加密关键信息、审计日志维护、以租期的方式获取秘密信息,且一旦租用期过期它将立刻收回秘密信息。Vault是平台无关的,并且可以免费下载和安装。要发掘Vault的更多信息,请访问其[官方网站][4]。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/how-tos/secure-secret-store-vault/
|
||||
|
||||
作者:[Aun Raza][a]
|
||||
译者:[wwy-hust](https://github.com/wwy-hust)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunrz/
|
||||
[1]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_386.zip
|
||||
[2]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_amd64.zip
|
||||
[3]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_arm.zip
|
||||
[4]:https://vaultproject.io/
|
@ -0,0 +1,86 @@
|
||||
Linux 有问必答:如何在Ubuntu上配置网桥
|
||||
===============================================================================
|
||||
> **Question**: 我需要在我的Ubuntu主机上建立一个Linux网桥,共享一个网卡给其他一些虚拟主机或在主机上创建的容器。我目前正在Ubuntu上使用网络管理器(Network Manager),所以最好能使用网络管理器来配置一个网桥。我该怎么做?
|
||||
|
||||
网桥是一种硬件设备,用来将两个或多个数据链路层(OSI七层模型中的第二层)网段互联,使得不同网段上的网络设备可以互相访问。当你想要互联一个主机里的多个虚拟机或者以太网接口时,就需要在Linux主机里用到类似的桥接概念,这里使用的是一种软件实现的网桥。
|
||||
|
||||
有很多的方法来配置一个Linux网桥。举个例子,在一个无外接显示/键盘的服务器环境里,你可以使用[brctl][1]手动地配置一个网桥。而在桌面环境下,网络管理器里也支持网桥设置。那就让我们测试一下如何用网络管理器配置一个网桥吧。
|
||||
|
||||
### 要求 ###
|
||||
|
||||
为了避免[任何问题][2],建议你的网络管理器版本为0.9.9或者更高,它用在 Ubuntu 15.04或者更新的版本。
|
||||
|
||||
$ apt-cache show network-manager | grep Version
|
||||
|
||||
----------
|
||||
|
||||
Version: 0.9.10.0-4ubuntu15.1
|
||||
Version: 0.9.10.0-4ubuntu15
|
||||
|
||||
### 创建一个网桥 ###
|
||||
|
||||
使用网络管理器创建网桥最简单的方式就是通过nm-connection-editor。这款GUI(图形用户界面)的工具允许你傻瓜式地配置一个网桥。
|
||||
|
||||
首先,启动nm-connection-editor。
|
||||
|
||||
$ nm-connection-editor
|
||||
|
||||
该编辑器的窗口会显示给你一个列表,列出目前配置好的网络连接。点击右上角的“添加”按钮,创建一个网桥。
|
||||
|
||||

|
||||
|
||||
接下来,选择“Bridge”(网桥)作为连接类型。
|
||||
|
||||

|
||||
|
||||
现在,开始配置网桥,包括它的名字和所桥接的连接。如果没有创建过其他网桥,那么默认的网桥接口会被命名为bridge0。
|
||||
|
||||
回顾一下,创建网桥的目的是为了通过网桥共享你的以太网卡接口,所以你需要添加以太网卡接口到网桥。在图形界面添加一个新的“桥接的连接”可以实现上述目的。点击“Add”按钮。
|
||||
|
||||

|
||||
|
||||
选择“以太网”作为连接类型。
|
||||
|
||||

|
||||
|
||||
在“设备的 MAC 地址”区域,选择你想要从属于网桥的接口。本例中,假设该接口是eth0。
|
||||
|
||||

|
||||
|
||||
点击“常规”标签,并且选中两个复选框,分别是“当其可用时自动连接到该网络”和“所有用户都可以连接到该网络”。
|
||||
|
||||

|
||||
|
||||
切换到“IPv4 设置”标签,为网桥配置DHCP或者是静态IP地址。注意,你应该为从属的以太网卡接口eth0使用相同的IPv4设定。本例中,我们假设eth0是用过DHCP配置的。因此,此处选择“自动(DHCP)”。如果eth0被指定了一个静态IP地址,那么你也应该指定相同的IP地址给网桥。
|
||||
|
||||

|
||||
|
||||
最后,保存网桥的设置。
|
||||
|
||||
现在,你会看见一个新增的网桥连接出现在“网络连接”窗口里。因为已经从属于网桥,以前配置好的有线连接 eth0 就不再需要了,所以去删除原来的有线连接吧。
|
||||
|
||||

|
||||
|
||||
这时候,网桥连接会被自动激活。由于指定给eth0的IP地址被网桥接管,你会暂时失去连接。当IP地址赋给了网桥之后,你将会通过网桥重新连接到你的以太网卡接口。你可以通过“Network”设置确认一下。
|
||||
|
||||

|
||||
|
||||
同时,检查可用的接口。提醒一下,网桥接口必须已经取代了任何你的以太网卡接口拥有的IP地址。
|
||||
|
||||

|
||||
|
||||
就这么多了,现在,网桥已经可以用了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/configure-linux-bridge-network-manager-ubuntu.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
||||
[1]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html
|
||||
[2]:https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1273201
|
@ -0,0 +1,76 @@
|
||||
Linux有问必答:如何安装autossh
|
||||
================================================================================
|
||||
> **提问**: 我打算在linux上安装autossh,我应该怎么做呢?
|
||||
|
||||
[autossh][1] 是一款开源工具,可以帮助管理SSH会话、自动重连和停止转发流量。autossh会假定目标主机已经设定[无密码SSH登陆][2],以便autossh可以重连断开的SSH会话而不用用户操作。
|
||||
|
||||
只要你建立[反向SSH隧道][3]或者[挂载基于SSH的远程文件夹][4],autossh迟早会派上用场。基本上只要需要维持SSH会话,autossh肯定是有用的。
|
||||
|
||||

|
||||
|
||||
下面有许多linux发行版autossh的安装方法。
|
||||
|
||||
### Debian 或 Ubuntu 系统 ###
|
||||
|
||||
autossh已经加入基于Debian系统的基础库,所以可以很方便的安装。
|
||||
|
||||
$ sudo apt-get install autossh
|
||||
|
||||
### Fedora 系统 ###
|
||||
|
||||
Fedora库同样包含autossh包,使用yum安装。
|
||||
|
||||
$ sudo yum install autossh
|
||||
|
||||
### CentOS 或 RHEL 系统 ###
|
||||
|
||||
CentOS/RHEL 6 或更早版本,需要先启用第三方库 [Repoforge库][5],然后才能使用yum安装。
|
||||
|
||||
$ sudo yum install autossh
|
||||
|
||||
从CentOS/RHEL 7开始,autossh 已经不在Repoforge库中。你需要从源码编译安装(例子在下面)。
|
||||
|
||||
### Arch Linux 系统 ###
|
||||
|
||||
$ sudo pacman -S autossh
|
||||
|
||||
### Debian 或 Ubuntu 系统中从源码编译安装###
|
||||
|
||||
如果你想要使用最新版本的autossh,你可以自己编译源码安装
|
||||
|
||||
$ sudo apt-get install gcc make
|
||||
$ wget http://www.harding.motd.ca/autossh/autossh-1.4e.tgz
|
||||
$ tar -xf autossh-1.4e.tgz
|
||||
$ cd autossh-1.4e
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
|
||||
### CentOS, Fedora 或 RHEL 系统中从源码编译安装###
|
||||
|
||||
在CentOS/RHEL 7及以后的版本中,autossh不再提供预编译包,所以你只能从源码编译安装。
|
||||
|
||||
$ sudo yum install wget gcc make
|
||||
$ wget http://www.harding.motd.ca/autossh/autossh-1.4e.tgz
|
||||
$ tar -xf autossh-1.4e.tgz
|
||||
$ cd autossh-1.4e
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
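安装完成后,下面给出一个用 autossh 维持反向 SSH 隧道的典型命令作为参考(其中的主机名、用户名和端口都是假设的示例;`-M 0` 表示不用 autossh 自带的监控端口,而改用 SSH 自身的保活选项):

```shell
# 把远程主机 2222 端口的连接转发回本机 22 端口,断线后自动重连
AUTOSSH_CMD="autossh -M 0 -f -N \
  -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
  -R 2222:localhost:22 user@remote.example.com"

# 这里只打印命令本身;实际使用前请先配置好无密码 SSH 登录
echo "$AUTOSSH_CMD"
```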
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/install-autossh-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[Vic020/VicYu](http://vicyu.net)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
||||
[1]:http://www.harding.motd.ca/autossh/
|
||||
[2]:https://linux.cn/article-5444-1.html
|
||||
[3]:http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
|
||||
[4]:http://xmodulo.com/how-to-mount-remote-directory-over-ssh-on-linux.html
|
||||
[5]:http://xmodulo.com/how-to-set-up-rpmforge-repoforge-repository-on-centos.html
|
@ -0,0 +1,185 @@
|
||||
监控 Linux 容器性能的命令行神器
|
||||
================================================================================
|
||||
ctop是一个新的基于命令行的工具,它可用于在容器层级监控进程。容器通过利用控制器组(cgroup)的资源管理功能,提供了操作系统层级的虚拟化环境。该工具从cgroup收集与内存、CPU、块输入输出的相关数据,以及拥有者、开机时间等元数据,并以人性化的格式呈现给用户,这样就可以快速对系统健康状况进行评估。基于所获得的数据,它可以尝试推测下层的容器技术。ctop也有助于在低内存环境中检测出谁在消耗大量的内存。
|
||||
|
||||
### 功能 ###
|
||||
|
||||
ctop的一些功能如下:
|
||||
|
||||
- 收集CPU、内存和块输入输出的度量值
|
||||
- 收集与拥有者、容器技术和任务统计相关的信息
|
||||
- 通过任意栏对信息排序
|
||||
- 以树状视图显示信息
|
||||
- 折叠/展开cgroup树
|
||||
- 选择并跟踪cgroup/容器
|
||||
- 选择显示数据刷新的时间窗口
|
||||
- 暂停刷新数据
|
||||
- 检测基于systemd、Docker和LXC的容器
|
||||
- 基于Docker和LXC的容器的高级特性
|
||||
- 打开/连接shell以进行深度诊断
|
||||
- 停止/杀死容器类型
|
||||
|
||||
### 安装 ###
|
||||
|
||||
**ctop**是由Python写成的,因此,除了需要Python 2.6或其更高版本外(带有内建的光标支持),别无其它外部依赖。推荐使用Python的pip进行安装,如果还没有安装pip,请先安装,然后使用pip安装ctop。
|
||||
|
||||
*注意:本文样例来自Ubuntu(14.10)系统*
|
||||
|
||||
$ sudo apt-get install python-pip
|
||||
|
||||
使用pip安装ctop:
|
||||
|
||||
poornima@poornima-Lenovo:~$ sudo pip install ctop
|
||||
|
||||
[sudo] password for poornima:
|
||||
|
||||
Downloading/unpacking ctop
|
||||
|
||||
Downloading ctop-0.4.0.tar.gz
|
||||
|
||||
Running setup.py (path:/tmp/pip_build_root/ctop/setup.py) egg_info for package ctop
|
||||
|
||||
Installing collected packages: ctop
|
||||
|
||||
Running setup.py install for ctop
|
||||
|
||||
changing mode of build/scripts-2.7/ctop from 644 to 755
|
||||
|
||||
changing mode of /usr/local/bin/ctop to 755
|
||||
|
||||
Successfully installed ctop
|
||||
|
||||
Cleaning up...
|
||||
|
||||
如果不选择使用pip安装,你也可以使用wget直接从github安装:
|
||||
|
||||
poornima@poornima-Lenovo:~$ wget https://raw.githubusercontent.com/yadutaf/ctop/master/cgroup_top.py -O ctop
|
||||
|
||||
--2015-04-29 19:32:53-- https://raw.githubusercontent.com/yadutaf/ctop/master/cgroup_top.py
|
||||
|
||||
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.27.78.133
|
||||
|
||||
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.78.133|:443... connected.
|
||||
|
||||
HTTP request sent, awaiting response... 200 OK Length: 27314 (27K) [text/plain]
|
||||
|
||||
Saving to: ctop
|
||||
|
||||
100%[======================================>] 27,314 --.-K/s in 0s
|
||||
|
||||
2015-04-29 19:32:59 (61.0 MB/s) - ctop saved [27314/27314]
|
||||
|
||||
----------
|
||||
|
||||
poornima@poornima-Lenovo:~$ chmod +x ctop
|
||||
|
||||
如果cgroup-bin包没有安装,你可能会碰到一个错误消息,你可以通过安装需要的包来解决。
|
||||
|
||||
poornima@poornima-Lenovo:~$ ./ctop
|
||||
|
||||
[ERROR] Failed to locate cgroup mountpoints.
|
||||
|
||||
poornima@poornima-Lenovo:~$ sudo apt-get install cgroup-bin
|
||||
|
||||
下面是ctop的输出样例:
|
||||
|
||||

|
||||
|
||||
*ctop屏幕*
|
||||
|
||||
### 用法选项 ###
|
||||
|
||||
ctop [--tree] [--refresh=\<seconds>] [--columns=\<columns>] [--sort-col=\<sort-col>] [--follow=\<cgroup-path>] [--fold=\<name>, ...] ctop (-h | --help)
|
||||
|
||||
当你进入ctop屏幕,可使用上(↑)和下(↓)箭头键在容器间导航。点击某个容器就选定了该容器,按q或Ctrl+C可以退出ctop。
|
||||
|
||||
现在,让我们来看看上面列出的那一堆选项究竟是怎么用的吧。
|
||||
|
||||
**-h / --help - 显示帮助信息**
|
||||
|
||||
poornima@poornima-Lenovo:~$ ctop -h
|
||||
Usage: ctop [options]
|
||||
|
||||
Options:
|
||||
-h, --help show this help message and exit
|
||||
--tree show tree view by default
|
||||
--refresh=REFRESH Refresh display every <seconds>
|
||||
--follow=FOLLOW Follow cgroup path
|
||||
--columns=COLUMNS List of optional columns to display. Always includes
|
||||
'name'
|
||||
--sort-col=SORT_COL Select column to sort by initially. Can be changed
|
||||
dynamically.
|
||||
|
||||
|
||||
**--tree - 显示容器的树形视图**
|
||||
|
||||
默认情况下,会显示列表视图
|
||||
|
||||
当你进入ctop窗口,你可以使用F5按钮在树状/列表视图间切换。
|
||||
|
||||
**--fold=<name> - 在树形视图中折叠名为 \<name> 的 cgroup 路径**
|
||||
|
||||
该选项需要与 --tree 选项组合使用。
|
||||
|
||||
例子: ctop --tree --fold=/user.slice
|
||||
|
||||

|
||||
|
||||
*'ctop --fold'的输出*
|
||||
|
||||
在ctop窗口中,使用+/-键来展开或折叠子cgroup。
|
||||
|
||||
注意:在写本文时,pip仓库中还没有最新版的ctop,还不支持命令行的‘--fold’选项
|
||||
|
||||
**--follow=\<cgroup-path> - 跟踪/高亮 cgroup 路径**
|
||||
|
||||
例子: ctop --follow=/user.slice/user-1000.slice
|
||||
|
||||
正如你在下面屏幕中所见到的那样,带有“/user.slice/user-1000.slice”路径的cgroup被高亮显示,这让用户易于跟踪,就算显示位置变了也一样。
|
||||
|
||||

|
||||
|
||||
*'ctop --follow'的输出*
|
||||
|
||||
你也可以使用‘f’按钮来让高亮的行跟踪选定的容器。默认情况下,跟踪是关闭的。
|
||||
|
||||
**--refresh=\<seconds> - 按指定频率刷新显示,默认1秒**
|
||||
|
||||
这在需要按照用户的需求调整刷新频率时很有用。使用‘p’按钮可以暂停刷新并选择文本。
|
||||
|
||||
**--columns=<columns> - 限定只显示选定的列。'name' 需要是第一个字段,其后跟着其它字段。默认情况下,字段包括:owner, processes,memory, cpu-sys, cpu-user, blkio, cpu-time**
|
||||
|
||||
例子: ctop --columns=name,owner,type,memory
|
||||
|
||||

|
||||
|
||||
*'ctop --column'的输出*
|
||||
|
||||
**--sort-col=\<sort-col> - 按指定的列排序。默认使用 cpu-user 排序**
|
||||
|
||||
例子: ctop --sort-col=blkio
|
||||
|
||||
如果有Docker和LXC支持的额外容器,跟踪选项也是可用的:
|
||||
|
||||
press 'a' - 接驳到终端输出
|
||||
|
||||
press 'e' - 打开容器中的一个 shell
|
||||
|
||||
press 's' - 停止容器 (SIGTERM)
|
||||
|
||||
press 'k' - 杀死容器 (SIGKILL)
|
||||
|
||||
目前 Jean-Tiare Le Bigot 还在积极开发 [ctop][1] 中,希望我们能在该工具中见到像本地 top 命令一样的特性 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/how-tos/monitor-linux-containers-performance/
|
||||
|
||||
作者:[B N Poornima][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/bnpoornima/
|
||||
[1]:https://github.com/yadutaf/ctop
|
@ -0,0 +1,406 @@
|
||||
建立你自己的 CA 服务:OpenSSL 命令行 CA 操作快速指南
|
||||
================================================================================
|
||||
|
||||
这些是关于使用 OpenSSL 生成证书授权(CA)、中间证书授权和末端证书的速记随笔,内容包括 OCSP、CRL 和 CA 颁发者信息,以及指定颁发和有效期限等。
|
||||
|
||||
我们将建立我们自己的根 CA,我们将使用根 CA 来生成一个中间 CA 的例子,我们将使用中间 CA 来签署末端用户证书。
|
||||
|
||||
### 根 CA ###
|
||||
|
||||
创建根 CA 授权目录并切换到该目录:
|
||||
|
||||
mkdir ~/SSLCA/root/
|
||||
cd ~/SSLCA/root/
|
||||
|
||||
为我们的根 CA 生成一个8192位长的 SHA-256 RSA 密钥:
|
||||
|
||||
openssl genrsa -aes256 -out rootca.key 8192
|
||||
|
||||
样例输出:
|
||||
|
||||
Generating RSA private key, 8192 bit long modulus
|
||||
.........++
|
||||
....................................................................................................................++
|
||||
e is 65537 (0x10001)
|
||||
|
||||
上面的命令带有 `-aes256` 选项,会用密码保护该密钥;如果不需要密码保护,去掉该选项即可。
|
||||
|
||||
创建自签名根 CA 证书 `ca.crt`;你需要为你的根 CA 提供一个身份:
|
||||
|
||||
openssl req -sha256 -new -x509 -days 1826 -key rootca.key -out rootca.crt
|
||||
|
||||
样例输出:
|
||||
|
||||
You are about to be asked to enter information that will be incorporated
|
||||
into your certificate request.
|
||||
What you are about to enter is what is called a Distinguished Name or a DN.
|
||||
There are quite a few fields but you can leave some blank
|
||||
For some fields there will be a default value,
|
||||
If you enter '.', the field will be left blank.
|
||||
-----
|
||||
Country Name (2 letter code) [AU]:NL
|
||||
State or Province Name (full name) [Some-State]:Zuid Holland
|
||||
Locality Name (eg, city) []:Rotterdam
|
||||
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Sparkling Network
|
||||
Organizational Unit Name (eg, section) []:Sparkling CA
|
||||
Common Name (e.g. server FQDN or YOUR name) []:Sparkling Root CA
|
||||
Email Address []:
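顺带一提,`-subj` 选项可以跳过上面的交互式问答;证书生成后可用 `openssl x509` 查看其主体与有效期。下面是一个可独立运行的小例子(密钥长度和名称仅为演示;真正的根 CA 请按正文使用 8192 位密钥并妥善保管私钥):

```shell
# 非交互地生成一份自签名的测试证书(-nodes 表示不给私钥加密码,仅用于演示)
openssl req -x509 -sha256 -newkey rsa:2048 -nodes -days 365 \
    -subj "/C=NL/O=Sparkling Network/CN=Demo Root CA" \
    -keyout demo.key -out demo.crt

# 查看证书的主体与有效期
openssl x509 -in demo.crt -noout -subject -dates
```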
|
||||
|
||||
创建一个存储 CA 序列的文件:
|
||||
|
||||
touch certindex
|
||||
echo 1000 > certserial
|
||||
echo 1000 > crlnumber
|
||||
|
||||
放置 CA 配置文件,该文件持有 CRL 和 OCSP 末端的存根。
|
||||
|
||||
# vim ca.conf
|
||||
[ ca ]
|
||||
default_ca = myca
|
||||
|
||||
[ crl_ext ]
|
||||
issuerAltName=issuer:copy
|
||||
authorityKeyIdentifier=keyid:always
|
||||
|
||||
[ myca ]
|
||||
dir = ./
|
||||
new_certs_dir = $dir
|
||||
unique_subject = no
|
||||
certificate = $dir/rootca.crt
|
||||
database = $dir/certindex
|
||||
private_key = $dir/rootca.key
|
||||
serial = $dir/certserial
|
||||
default_days = 730
|
||||
default_md = sha1
|
||||
policy = myca_policy
|
||||
x509_extensions = myca_extensions
|
||||
crlnumber = $dir/crlnumber
|
||||
default_crl_days = 730
|
||||
|
||||
[ myca_policy ]
|
||||
commonName = supplied
|
||||
stateOrProvinceName = supplied
|
||||
countryName = optional
|
||||
emailAddress = optional
|
||||
organizationName = supplied
|
||||
organizationalUnitName = optional
|
||||
|
||||
[ myca_extensions ]
|
||||
basicConstraints = critical,CA:TRUE
|
||||
keyUsage = critical,any
|
||||
subjectKeyIdentifier = hash
|
||||
authorityKeyIdentifier = keyid:always,issuer
|
||||
keyUsage = digitalSignature,keyEncipherment,cRLSign,keyCertSign
|
||||
extendedKeyUsage = serverAuth
|
||||
crlDistributionPoints = @crl_section
|
||||
subjectAltName = @alt_names
|
||||
authorityInfoAccess = @ocsp_section
|
||||
|
||||
[ v3_ca ]
|
||||
basicConstraints = critical,CA:TRUE,pathlen:0
|
||||
keyUsage = critical,any
|
||||
subjectKeyIdentifier = hash
|
||||
authorityKeyIdentifier = keyid:always,issuer
|
||||
keyUsage = digitalSignature,keyEncipherment,cRLSign,keyCertSign
|
||||
extendedKeyUsage = serverAuth
|
||||
crlDistributionPoints = @crl_section
|
||||
subjectAltName = @alt_names
|
||||
authorityInfoAccess = @ocsp_section
|
||||
|
||||
[alt_names]
|
||||
DNS.0 = Sparkling Intermidiate CA 1
|
||||
DNS.1 = Sparkling CA Intermidiate 1
|
||||
|
||||
[crl_section]
|
||||
URI.0 = http://pki.sparklingca.com/SparklingRoot.crl
|
||||
URI.1 = http://pki.backup.com/SparklingRoot.crl
|
||||
|
||||
[ocsp_section]
|
||||
caIssuers;URI.0 = http://pki.sparklingca.com/SparklingRoot.crt
|
||||
caIssuers;URI.1 = http://pki.backup.com/SparklingRoot.crt
|
||||
OCSP;URI.0 = http://pki.sparklingca.com/ocsp/
|
||||
OCSP;URI.1 = http://pki.backup.com/ocsp/
|
||||
|
||||
如果你需要设置某个特定的证书生效/过期日期,请添加以下内容到`[myca]`:
|
||||
|
||||
# format: YYYYMMDDHHMMSS
|
||||
default_enddate = 20191222035911
|
||||
default_startdate = 20181222035911
|
||||
|
||||
### 创建中间 CA ###
|
||||
|
||||
生成中间 CA (名为 intermediate1)的私钥:
|
||||
|
||||
openssl genrsa -out intermediate1.key 4096
|
||||
|
||||
生成中间 CA 的 CSR:
|
||||
|
||||
openssl req -new -sha256 -key intermediate1.key -out intermediate1.csr
|
||||
|
||||
样例输出:
|
||||
|
||||
You are about to be asked to enter information that will be incorporated
|
||||
into your certificate request.
|
||||
What you are about to enter is what is called a Distinguished Name or a DN.
|
||||
There are quite a few fields but you can leave some blank
|
||||
For some fields there will be a default value,
|
||||
If you enter '.', the field will be left blank.
|
||||
-----
|
||||
Country Name (2 letter code) [AU]:NL
|
||||
State or Province Name (full name) [Some-State]:Zuid Holland
|
||||
Locality Name (eg, city) []:Rotterdam
|
||||
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Sparkling Network
|
||||
Organizational Unit Name (eg, section) []:Sparkling CA
|
||||
Common Name (e.g. server FQDN or YOUR name) []:Sparkling Intermediate CA
|
||||
Email Address []:
|
||||
|
||||
Please enter the following 'extra' attributes
|
||||
to be sent with your certificate request
|
||||
A challenge password []:
|
||||
An optional company name []:
|
||||
|
||||
确保中间 CA 的主体(CN)和根 CA 不同。
|
||||
|
||||
用根 CA 签署 中间 CA 的 CSR:
|
||||
|
||||
openssl ca -batch -config ca.conf -notext -in intermediate1.csr -out intermediate1.crt
|
||||
|
||||
样例输出:
|
||||
|
||||
Using configuration from ca.conf
|
||||
Check that the request matches the signature
|
||||
Signature ok
|
||||
The Subject's Distinguished Name is as follows
|
||||
countryName :PRINTABLE:'NL'
|
||||
stateOrProvinceName :ASN.1 12:'Zuid Holland'
|
||||
localityName :ASN.1 12:'Rotterdam'
|
||||
organizationName :ASN.1 12:'Sparkling Network'
|
||||
organizationalUnitName:ASN.1 12:'Sparkling CA'
|
||||
commonName :ASN.1 12:'Sparkling Intermediate CA'
|
||||
Certificate is to be certified until Mar 30 15:07:43 2017 GMT (730 days)
|
||||
|
||||
Write out database with 1 new entries
|
||||
Data Base Updated
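签署完成后,可以用 `openssl verify` 确认证书链,正文场景下对应的命令是 `openssl verify -CAfile rootca.crt intermediate1.crt`。下面用一对临时生成的证书演示同样的验证流程(tmproot、tmpleaf 等名称均为临时示例):

```shell
# 临时生成一个根证书,再用它签署一份证书,最后验证证书链
openssl req -x509 -sha256 -newkey rsa:2048 -nodes -days 30 \
    -subj "/CN=Tmp Root" -keyout tmproot.key -out tmproot.crt
openssl req -new -sha256 -newkey rsa:2048 -nodes \
    -subj "/CN=Tmp Leaf" -keyout tmpleaf.key -out tmpleaf.csr
openssl x509 -req -sha256 -days 30 -in tmpleaf.csr \
    -CA tmproot.crt -CAkey tmproot.key -CAcreateserial -out tmpleaf.crt

# 用根证书验证所签发的证书
openssl verify -CAfile tmproot.crt tmpleaf.crt
```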
|
||||
|
||||
生成 CRL(同时采用 PEM 和 DER 格式):
|
||||
|
||||
openssl ca -config ca.conf -gencrl -keyfile rootca.key -cert rootca.crt -out rootca.crl.pem
|
||||
|
||||
openssl crl -inform PEM -in rootca.crl.pem -outform DER -out rootca.crl
|
||||
|
||||
每次使用该 CA 签署证书后,请生成 CRL。
|
||||
|
||||
如果你需要撤销该中间证书:
|
||||
|
||||
openssl ca -config ca.conf -revoke intermediate1.crt -keyfile rootca.key -cert rootca.crt
|
||||
|
||||
### 配置中间 CA ###
|
||||
|
||||
为该中间 CA 创建一个新文件夹,然后进入该文件夹:
|
||||
|
||||
mkdir ~/SSLCA/intermediate1/
|
||||
cd ~/SSLCA/intermediate1/
|
||||
|
||||
从根 CA 拷贝中间证书和密钥:
|
||||
|
||||
cp ~/SSLCA/root/intermediate1.key ./
|
||||
cp ~/SSLCA/root/intermediate1.crt ./
|
||||
|
||||
创建索引文件:
|
||||
|
||||
touch certindex
|
||||
echo 1000 > certserial
|
||||
echo 1000 > crlnumber
|
||||
|
||||
创建一个新的 `ca.conf` 文件:
|
||||
|
||||
# vim ca.conf
|
||||
[ ca ]
|
||||
default_ca = myca
|
||||
|
||||
[ crl_ext ]
|
||||
issuerAltName=issuer:copy
|
||||
authorityKeyIdentifier=keyid:always
|
||||
|
||||
[ myca ]
|
||||
dir = ./
|
||||
new_certs_dir = $dir
|
||||
unique_subject = no
|
||||
certificate = $dir/intermediate1.crt
|
||||
database = $dir/certindex
|
||||
private_key = $dir/intermediate1.key
|
||||
serial = $dir/certserial
|
||||
default_days = 365
|
||||
default_md = sha1
|
||||
policy = myca_policy
|
||||
x509_extensions = myca_extensions
|
||||
crlnumber = $dir/crlnumber
|
||||
default_crl_days = 365
|
||||
|
||||
[ myca_policy ]
|
||||
commonName = supplied
|
||||
stateOrProvinceName = supplied
|
||||
countryName = optional
|
||||
emailAddress = optional
|
||||
organizationName = supplied
|
||||
organizationalUnitName = optional
|
||||
|
||||
[ myca_extensions ]
|
||||
basicConstraints = critical,CA:FALSE
|
||||
keyUsage = critical,any
|
||||
subjectKeyIdentifier = hash
|
||||
authorityKeyIdentifier = keyid:always,issuer
|
||||
keyUsage = digitalSignature,keyEncipherment
|
||||
extendedKeyUsage = serverAuth
|
||||
crlDistributionPoints = @crl_section
|
||||
subjectAltName = @alt_names
|
||||
authorityInfoAccess = @ocsp_section
|
||||
|
||||
[alt_names]
|
||||
DNS.0 = example.com
|
||||
DNS.1 = example.org
|
||||
|
||||
[crl_section]
|
||||
URI.0 = http://pki.sparklingca.com/SparklingIntermidiate1.crl
|
||||
URI.1 = http://pki.backup.com/SparklingIntermidiate1.crl
|
||||
|
||||
[ocsp_section]
|
||||
caIssuers;URI.0 = http://pki.sparklingca.com/SparklingIntermediate1.crt
|
||||
caIssuers;URI.1 = http://pki.backup.com/SparklingIntermediate1.crt
|
||||
OCSP;URI.0 = http://pki.sparklingca.com/ocsp/
|
||||
OCSP;URI.1 = http://pki.backup.com/ocsp/
|
||||
|
||||
修改 `[alt_names]` 部分,添加你需要的主体备用名(SAN)。如果你不需要主体备用名,请移除该部分以及包含 `subjectAltName = @alt_names` 的那一行。
|
||||
|
||||
如果你需要设置一个指定的生效/到期日期,请添加以下内容到 `[myca]`:
|
||||
|
||||
# format: YYYYMMDDHHMMSS
|
||||
default_enddate = 20191222035911
|
||||
default_startdate = 20181222035911
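时间戳必须严格是 14 位的 `YYYYMMDDHHMMSS`。手写容易出错,可以让 `date` 代劳(下面只是一个小技巧示例;`-d` 选项是 GNU date 的特性,BSD 系统上语法不同):

```shell
# 按 YYYYMMDDHHMMSS 格式生成 UTC 时间戳:
date -u +%Y%m%d%H%M%S                  # 当前时间
date -u -d '+2 years' +%Y%m%d%H%M%S    # 两年之后(GNU date)
```

把输出直接粘贴到 `default_startdate` / `default_enddate` 即可。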
|
||||
|
||||
生成一个空白 CRL(同时以 PEM 和 DER 格式):
|
||||
|
||||
openssl ca -config ca.conf -gencrl -keyfile intermediate1.key -cert intermediate1.crt -out intermediate1.crl.pem
|
||||
|
||||
openssl crl -inform PEM -in intermediate1.crl.pem -outform DER -out intermediate1.crl
|
||||
|
||||
### 生成末端用户证书 ###
|
||||
|
||||
我们使用这个新的中间 CA 来生成末端用户证书。为每个需要用该 CA 签名的末端用户重复以下操作。
|
||||
|
||||
mkdir enduser-certs
|
||||
|
||||
生成末端用户的私钥:
|
||||
|
||||
openssl genrsa -out enduser-certs/enduser-example.com.key 4096
|
||||
|
||||
生成末端用户的 CSR:
|
||||
|
||||
openssl req -new -sha256 -key enduser-certs/enduser-example.com.key -out enduser-certs/enduser-example.com.csr
|
||||
|
||||
样例输出:
|
||||
|
||||
You are about to be asked to enter information that will be incorporated
|
||||
into your certificate request.
|
||||
What you are about to enter is what is called a Distinguished Name or a DN.
|
||||
There are quite a few fields but you can leave some blank
|
||||
For some fields there will be a default value,
|
||||
If you enter '.', the field will be left blank.
|
||||
-----
|
||||
Country Name (2 letter code) [AU]:NL
|
||||
State or Province Name (full name) [Some-State]:Noord Holland
|
||||
Locality Name (eg, city) []:Amsterdam
|
||||
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Example Inc
|
||||
Organizational Unit Name (eg, section) []:IT Dept
|
||||
Common Name (e.g. server FQDN or YOUR name) []:example.com
|
||||
Email Address []:
|
||||
|
||||
Please enter the following 'extra' attributes
|
||||
to be sent with your certificate request
|
||||
A challenge password []:
|
||||
An optional company name []:
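如果不想交互式地回答上面这些问题,也可以用 `-subj` 一次性给出完整的 DN(下面的字段值与 `/tmp` 下的文件名均为演示假设,并非上文的文件):

```shell
# 非交互式地生成私钥和 CSR:
openssl genrsa -out /tmp/demo-user.key 2048 2>/dev/null
openssl req -new -sha256 -key /tmp/demo-user.key \
    -subj "/C=NL/ST=Noord Holland/O=Example Inc/CN=example.com" \
    -out /tmp/demo-user.csr
# 检查 CSR 中的主体信息:
openssl req -in /tmp/demo-user.csr -noout -subject
```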
|
||||
|
||||
使用中间 CA 签署末端用户的 CSR:
|
||||
|
||||
openssl ca -batch -config ca.conf -notext -in enduser-certs/enduser-example.com.csr -out enduser-certs/enduser-example.com.crt
|
||||
|
||||
样例输出:
|
||||
|
||||
Using configuration from ca.conf
|
||||
Check that the request matches the signature
|
||||
Signature ok
|
||||
The Subject's Distinguished Name is as follows
|
||||
countryName :PRINTABLE:'NL'
|
||||
stateOrProvinceName :ASN.1 12:'Noord Holland'
|
||||
localityName :ASN.1 12:'Amsterdam'
|
||||
organizationName :ASN.1 12:'Example Inc'
|
||||
organizationalUnitName:ASN.1 12:'IT Dept'
|
||||
commonName :ASN.1 12:'example.com'
|
||||
Certificate is to be certified until Mar 30 15:18:26 2016 GMT (365 days)
|
||||
|
||||
Write out database with 1 new entries
|
||||
Data Base Updated
|
||||
|
||||
生成 CRL(同时以 PEM 和 DER 格式):
|
||||
|
||||
openssl ca -config ca.conf -gencrl -keyfile intermediate1.key -cert intermediate1.crt -out intermediate1.crl.pem
|
||||
|
||||
openssl crl -inform PEM -in intermediate1.crl.pem -outform DER -out intermediate1.crl
|
||||
|
||||
每次你使用该 CA 签署证书后,都需要生成 CRL。
|
||||
|
||||
如果你需要撤销该末端用户证书:
|
||||
|
||||
openssl ca -config ca.conf -revoke enduser-certs/enduser-example.com.crt -keyfile intermediate1.key -cert intermediate1.crt
|
||||
|
||||
样例输出:
|
||||
|
||||
Using configuration from ca.conf
|
||||
Revoking Certificate 1000.
|
||||
Data Base Updated
|
||||
|
||||
通过连接根证书和中间证书来创建证书链文件。
|
||||
|
||||
cat ../root/rootca.crt intermediate1.crt > enduser-certs/enduser-example.com.chain
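可以顺手检查一下拼接结果(文件名沿用上文;用 grep 统计 PEM 证书块数量是一个通用小技巧):

```shell
# 链文件应恰好包含两张证书(根 CA + 中间 CA):
grep -c 'BEGIN CERTIFICATE' enduser-certs/enduser-example.com.chain
# 期望输出:2
```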
|
||||
|
||||
发送以下文件给末端用户:
|
||||
|
||||
enduser-example.com.crt
|
||||
enduser-example.com.key
|
||||
enduser-example.com.chain
|
||||
|
||||
你也可以让末端用户提供他们自己的 CSR,而只发送给他们这个 .crt 文件。不要把它从服务器删除,否则你就不能撤销了。
|
||||
|
||||
### 校验证书 ###
|
||||
|
||||
你可以对证书链使用以下命令来验证末端用户证书:
|
||||
|
||||
openssl verify -CAfile enduser-certs/enduser-example.com.chain enduser-certs/enduser-example.com.crt
|
||||
enduser-certs/enduser-example.com.crt: OK
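如果想先单独熟悉一下 `openssl verify` 的行为,可以用一张一次性的自签名证书做个最小演示(`demo.*` 等名称均为假设,与上文的 CA 文件无关):

```shell
d=$(mktemp -d)
# 生成一张一次性的自签名证书:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.example" \
    -keyout "$d/demo.key" -out "$d/demo.crt" -days 1 2>/dev/null
# 自签名证书可以用它自身作为 CA 来验证,输出应以 OK 结尾:
openssl verify -CAfile "$d/demo.crt" "$d/demo.crt"
```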
|
||||
|
||||
你也可以针对 CRL 来验证。首先,将 PEM 格式的 CRL 和证书链相连接:
|
||||
|
||||
cat ../root/rootca.crt intermediate1.crt intermediate1.crl.pem > enduser-certs/enduser-example.com.crl.chain
|
||||
|
||||
验证证书:
|
||||
|
||||
openssl verify -crl_check -CAfile enduser-certs/enduser-example.com.crl.chain enduser-certs/enduser-example.com.crt
|
||||
|
||||
没有撤销时的输出:
|
||||
|
||||
enduser-certs/enduser-example.com.crt: OK
|
||||
|
||||
撤销后的输出如下:
|
||||
|
||||
enduser-certs/enduser-example.com.crt: CN = example.com, ST = Noord Holland, C = NL, O = Example Inc, OU = IT Dept
|
||||
error 23 at 0 depth lookup:certificate revoked
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://raymii.org/s/tutorials/OpenSSL_command_line_Root_and_Intermediate_CA_including_OCSP_CRL%20and_revocation.html
|
||||
|
||||
作者:Remy van Elst
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,8 +1,8 @@
|
||||
在 RedHat/CentOS 7.x 中使用 cmcli 命令管理网络
|
||||
在 RedHat/CentOS 7.x 中使用 nmcli 命令管理网络
|
||||
===============
|
||||
[**Red Hat Enterprise Linux 7** 与 **CentOS 7**][1] 中默认的网络服务由 **NetworkManager** 提供,这是动态控制及配置网络的守护进程,它用于保持当前网络设备及连接处于工作状态,同时也支持传统的 ifcfg 类型的配置文件。
|
||||
NetworkManager 可以用于以下类型的连接:
|
||||
Ethernet,VLANS,Bridges,Bonds,Teams,Wi-Fi,mobile boradband(如移动3G)以及 IP-over-InfiniBand。针对与这些网络类型,NetworkManager 可以配置他们的网络别名,IP 地址,静态路由,DNS,VPN连接以及很多其它的特殊参数。
|
||||
|
||||
NetworkManager 可以用于以下类型的连接:Ethernet、VLAN、Bridge、Bond、Team、Wi-Fi、移动宽带(mobile broadband,如移动 3G)以及 IP-over-InfiniBand。针对这些网络类型,NetworkManager 可以配置它们的网络别名、IP 地址、静态路由、DNS、VPN 连接以及很多其它的特殊参数。
|
||||
|
||||
可以用命令行工具 nmcli 来控制 NetworkManager。
|
||||
|
||||
@ -24,19 +24,21 @@ Ethernet,VLANS,Bridges,Bonds,Teams,Wi-Fi,mobile boradband(如移
|
||||
|
||||
显示所有连接。
|
||||
|
||||
# nmcli connection show -a
|
||||
# nmcli connection show -a
|
||||
|
||||
仅显示当前活动的连接。
|
||||
|
||||
# nmcli device status
|
||||
|
||||
列出通过 NetworkManager 验证的设备列表及他们的状态。
|
||||
列出 NetworkManager 识别出的设备列表及他们的状态。
|
||||
|
||||

|
||||
|
||||
### 启动/停止 网络接口###
|
||||
|
||||
使用 nmcli 工具启动或停止网络接口,与 ifconfig 的 up/down 是一样的。使用下列命令停止某个接口:
|
||||
使用 nmcli 工具启动或停止网络接口,与 ifconfig 的 up/down 是一样的。
|
||||
|
||||
使用下列命令停止某个接口:
|
||||
|
||||
# nmcli device disconnect eno16777736
|
||||
|
||||
@ -50,7 +52,7 @@ Ethernet,VLANS,Bridges,Bonds,Teams,Wi-Fi,mobile boradband(如移
|
||||
|
||||
# nmcli connection add type ethernet con-name NAME_OF_CONNECTION ifname interface-name ip4 IP_ADDRESS gw4 GW_ADDRESS
|
||||
|
||||
根据你需要的配置更改 NAME_OF_CONNECTION,IP_ADDRESS, GW_ADDRESS参数(如果不需要网关的话可以省略最后一部分)。
|
||||
根据你需要的配置更改 NAME\_OF\_CONNECTION,IP\_ADDRESS, GW\_ADDRESS参数(如果不需要网关的话可以省略最后一部分)。
|
||||
|
||||
# nmcli connection add type ethernet con-name NEW ifname eno16777736 ip4 192.168.1.141 gw4 192.168.1.1
|
||||
|
||||
@ -68,9 +70,11 @@ Ethernet,VLANS,Bridges,Bonds,Teams,Wi-Fi,mobile boradband(如移
|
||||
|
||||

|
||||
|
||||
### 增加一个使用 DHCP 的新连接 ###
|
||||
|
||||
要增加一个使用 DHCP 自动分配 IP 地址、网关、DNS 等参数的新连接,你只需把命令中的 ip/gw 地址部分去掉,DHCP 会自动分配这些参数。
|
||||
|
||||
例,在 eno 16777736 设备上配置一个 名为 NEW_DHCP 的 DHCP 连接
|
||||
例如,在 eno16777736 设备上配置一个名为 NEW\_DHCP 的 DHCP 连接:
|
||||
|
||||
# nmcli connection add type ethernet con-name NEW_DHCP ifname eno16777736
|
||||
|
||||
@ -79,8 +83,8 @@ Ethernet,VLANS,Bridges,Bonds,Teams,Wi-Fi,mobile boradband(如移
|
||||
via: http://linoxide.com/linux-command/nmcli-tool-red-hat-centos-7/
|
||||
|
||||
作者:[Adrian Dinu][a]
|
||||
译者:[SPccman](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[SPccman](https://github.com/SPccman)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
@ -1,23 +1,23 @@
|
||||
translating by cvsher
|
||||
Linux grep command with 14 different examples
|
||||
================================================================================
|
||||
### Overview : ###
|
||||
14 个 grep 命令的例子
|
||||
===========
|
||||
|
||||
Linux like operating system provides a searching tool known as **grep (global regular expression print)**. grep command is useful for searching the content of one more files based on the pattern. A pattern may be a single character, bunch of characters, single word or a sentence.
|
||||
###概述:###
|
||||
|
||||
When we execute the grep command with specified pattern, if its is matched, then it will display the line of file containing the pattern without modifying the contents of existing file.
|
||||
所有的类 Linux 系统都会提供一个名为 **grep(global regular expression print,全局正则表达式输出)** 的搜索工具。grep 命令在基于模式对一个或多个文件的内容进行搜索时非常有用。模式可以是单个字符、多个字符、单个单词,或者是一个句子。
|
||||
|
||||
In this tutorial we will discuss 14 different examples of grep command
|
||||
当 grep 匹配到执行命令时指定的模式时,它会输出文件中包含该模式的行,但并不会修改原文件的内容。
|
||||
|
||||
### Example:1 Search the pattern (word) in a file ###
|
||||
在本文中,我们将会讨论到14个grep命令的例子。
|
||||
|
||||
Search the “linuxtechi” word in the file /etc/passwd file
|
||||
###例1 在文件中查找模式(单词)###
|
||||
|
||||
在/etc/passwd文件中查找单词“linuxtechi”
|
||||
|
||||
root@Linux-world:~# grep linuxtechi /etc/passwd
|
||||
linuxtechi:x:1000:1000:linuxtechi,,,:/home/linuxtechi:/bin/bash
|
||||
root@Linux-world:~#
|
||||
|
||||
### Example:2 Search the pattern in the multiple files. ###
|
||||
###例2 在多个文件中查找模式。###
|
||||
|
||||
root@Linux-world:~# grep linuxtechi /etc/passwd /etc/shadow /etc/gshadow
|
||||
/etc/passwd:linuxtechi:x:1000:1000:linuxtechi,,,:/home/linuxtechi:/bin/bash
|
||||
@ -31,14 +31,14 @@ Search the “linuxtechi” word in the file /etc/passwd file
|
||||
/etc/gshadow:sambashare:!::linuxtechi
|
||||
root@Linux-world:~#
|
||||
|
||||
### Example:3 List the name of those files which contain a specified pattern using -l option. ###
|
||||
###例3 使用-l参数列出包含指定模式的文件的文件名。###
|
||||
|
||||
root@Linux-world:~# grep -l linuxtechi /etc/passwd /etc/shadow /etc/fstab /etc/mtab
|
||||
/etc/passwd
|
||||
/etc/shadow
|
||||
root@Linux-world:~#
|
||||
|
||||
### Example:4 Search the pattern in the file along with associated line number(s) using the -n option ###
|
||||
###例4 使用-n参数,在文件中查找指定模式并显示匹配行的行号###
|
||||
|
||||
root@Linux-world:~# grep -n linuxtechi /etc/passwd
|
||||
39:linuxtechi:x:1000:1000:linuxtechi,,,:/home/linuxtechi:/bin/bash
|
||||
@ -48,34 +48,34 @@ root@Linux-world:~# grep -n root /etc/passwd /etc/shadow
|
||||
|
||||

|
||||
|
||||
### Example:5 Print the line excluding the pattern using -v option ###
|
||||
###例5 使用-v参数输出不包含指定模式的行###
|
||||
|
||||
List all the lines of the file /etc/passwd that does not contain specific word “linuxtechi”.
|
||||
输出/etc/passwd文件中所有不含单词“linuxtechi”的行
|
||||
|
||||
root@Linux-world:~# grep -v linuxtechi /etc/passwd
|
||||
|
||||

|
||||
|
||||
### Example:6 Display all the lines that starts with specified pattern using ^ symbol ###
|
||||
###例6 使用 ^ 符号输出所有以某指定模式开头的行###
|
||||
|
||||
Bash shell treats carrot symbol (^) as a special character which marks the beginning of line or a word. Let’s display the lines which starts with “root” word in the file /etc/passwd.
|
||||
Bash shell 将脱字符(^)视作特殊字符,用于标记一行或者一个单词的开始。例如,输出 /etc/passwd 文件中所有以“root”开头的行:
|
||||
|
||||
root@Linux-world:~# grep ^root /etc/passwd
|
||||
root:x:0:0:root:/root:/bin/bash
|
||||
root@Linux-world:~#
|
||||
|
||||
### Example: 7 Display all the lines that ends with specified pattern using $ symbol. ###
|
||||
###例7 使用 $ 符号输出所有以指定模式结尾的行。###
|
||||
|
||||
List all the lines of /etc/passwd that ends with “bash” word.
|
||||
输出/etc/passwd文件中所有以“bash”结尾的行。
|
||||
|
||||
root@Linux-world:~# grep bash$ /etc/passwd
|
||||
root:x:0:0:root:/root:/bin/bash
|
||||
linuxtechi:x:1000:1000:linuxtechi,,,:/home/linuxtechi:/bin/bash
|
||||
root@Linux-world:~#
|
||||
|
||||
Bash shell treats dollar ($) symbol as a special character which marks the end of line or word.
|
||||
Bash脚本将美元($)符号视作特殊字符,用于指定一行或者一个单词的结尾。
|
||||
|
||||
### Example:8 Search the pattern recursively using -r option ###
|
||||
###例8 使用 -r 参数递归地查找特定模式###
|
||||
|
||||
root@Linux-world:~# grep -r linuxtechi /etc/
|
||||
/etc/subuid:linuxtechi:100000:65536
|
||||
@ -91,37 +91,37 @@ Bash shell treats dollar ($) symbol as a special character which marks the end o
|
||||
/etc/passwd:linuxtechi:x:1000:1000:linuxtechi,,,:/home/linuxtechi:/bin/bash
|
||||
............................................................................
|
||||
|
||||
Above command will search linuxtechi in the “/etc” directory recursively.
|
||||
上面的命令将会递归地在 /etc 目录中查找“linuxtechi”单词。
|
||||
|
||||
### Example:9 Search all the empty or blank lines of a file using grep ###
|
||||
###例9 使用 grep 查找文件中所有的空行###
|
||||
|
||||
root@Linux-world:~# grep ^$ /etc/shadow
|
||||
root@Linux-world:~#
|
||||
|
||||
As there is no empty line in /etc/shadow file , so nothing is displayed.
|
||||
由于/etc/shadow文件中没有空行,所以没有任何输出
|
||||
|
||||
### Example:10 Search the pattern using ‘grep -i’ option. ###
|
||||
###例10 使用 -i 参数查找模式###
|
||||
|
||||
-i option in the grep command ignores the letter case i.e it will ignore upper case or lower case letters while searching
|
||||
grep命令的-i参数在查找时忽略字符的大小写。
|
||||
|
||||
Lets take an example , i want to search “LinuxTechi” word in the passwd file.
|
||||
我们来看一个例子,在 passwd 文件中查找“LinuxTechi”单词。
|
||||
|
||||
nextstep4it@localhost:~$ grep -i LinuxTechi /etc/passwd
|
||||
linuxtechi:x:1001:1001::/home/linuxtechi:/bin/bash
|
||||
nextstep4it@localhost:~$
|
||||
|
||||
### Example:11 Search multiple patterns using -e option ###
|
||||
###例11 使用 -e 参数查找多个模式###
|
||||
|
||||
For example i want to search ‘linuxtechi’ and ‘root’ word in a single grep command , then using -e option we can search multiple patterns .
|
||||
例如,我想在一条grep命令中查找‘linuxtechi’和‘root’单词,使用-e参数,我们可以查找多个模式。
|
||||
|
||||
root@Linux-world:~# grep -e "linuxtechi" -e "root" /etc/passwd
|
||||
root:x:0:0:root:/root:/bin/bash
|
||||
linuxtechi:x:1000:1000:linuxtechi,,,:/home/linuxtechi:/bin/bash
|
||||
root@Linux-world:~#
|
||||
|
||||
### Example:12 Getting Search pattern from a file using “grep -f” ###
|
||||
###例12 使用 -f 用文件指定待查找的模式###
|
||||
|
||||
First create a search pattern file “grep_pattern” in your current working directory. In my case i have put the below contents.
|
||||
首先,在当前目录中创建一个搜索模式文件“grep_pattern”,我在文件中输入了如下内容。
|
||||
|
||||
root@Linux-world:~# cat grep_pattern
|
||||
^linuxtechi
|
||||
@ -129,35 +129,35 @@ First create a search pattern file “grep_pattern” in your current working di
|
||||
false$
|
||||
root@Linux-world:~#
|
||||
|
||||
Now try to search using grep_pattern file.
|
||||
现在,试试使用grep_pattern文件进行搜索
|
||||
|
||||
root@Linux-world:~# grep -f grep_pattern /etc/passwd
|
||||
|
||||

|
||||
|
||||
### Example:13 Count the number of matching patterns using -c option ###
|
||||
###例13 使用 -c 参数计算模式匹配到的数量###
|
||||
|
||||
Let take the above example , we can count the number of matching patterns using -c option in grep command.
|
||||
继续上面的例子,我们在 grep 命令中使用 -c 参数计算匹配指定模式的行数。
|
||||
|
||||
root@Linux-world:~# grep -c -f grep_pattern /etc/passwd
|
||||
22
|
||||
root@Linux-world:~#
|
||||
|
||||
### Example:14 Display N number of lines before & after pattern matching ###
|
||||
###例14 输出匹配指定模式行的前或者后面N行###
|
||||
|
||||
a) Display Four lines before patten matching using -B option
|
||||
a)使用-B参数输出匹配行的前4行
|
||||
|
||||
root@Linux-world:~# grep -B 4 "games" /etc/passwd
|
||||
|
||||

|
||||
|
||||
b) Display Four lines after pattern matching using -A option
|
||||
b)使用-A参数输出匹配行的后4行
|
||||
|
||||
root@Linux-world:~# grep -A 4 "games" /etc/passwd
|
||||
|
||||

|
||||
|
||||
c) Display Four lines around the pattern matching using -C option
|
||||
c)使用-C参数输出匹配行的前后各4行
|
||||
|
||||
root@Linux-world:~# grep -C 4 "games" /etc/passwd
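为了更直观,下面用一个小文件在本地复现上下文选项的效果(文件名与内容均为演示假设):

```shell
printf 'a\nb\ngames\nc\nd\n' > /tmp/grep_ctx_demo.txt
# -C 1:输出匹配行及其前后各 1 行
grep -C 1 games /tmp/grep_ctx_demo.txt
# 输出:
# b
# games
# c
```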
|
||||
|
||||
@ -168,8 +168,8 @@ c) Display Four lines around the pattern matching using -C option
|
||||
via: http://www.linuxtechi.com/linux-grep-command-with-14-different-examples/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[cvsher](https://github.com/cvsher)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,31 @@
|
||||
Ubuntu Devs Propose Stateless Persistent Network Interface Names for Ubuntu and Debian
|
||||
======================================================================================
|
||||
*Networks are detected in an unpredictable and unstable order*
|
||||
|
||||
**Martin Pitt, a renown Ubuntu and Debian developer, came with the proposal of enabling stateless persistent network interface names in the upcoming versions of the Ubuntu Linux and Debian GNU/Linux operating systems.**
|
||||
|
||||
According to Mr. Pitt, the problem lies in the automatic detection of network interfaces within the Linux kernel: network interfaces are detected in an unstable and unpredictable order. However, in order to connect to a certain network interface in ifupdown or networkd, users first need to identify it using a stable name.
|
||||
|
||||
"The general schema for this is to have an udev rule which does some matches to identify a particular interface, and assings a NAME="foo" to it," says Martin Pitt in an email to the Ubuntu mailinglist. "Interfaces with an explicit NAME= get called just like this, and others just get a kernel driver default, usually ethN, wlanN, or sometimes others (some wifi drivers have their own naming schemas)."
|
||||
|
||||
**Several solutions have appeared over the years: MAC, biosdevname, and ifnames**
|
||||
|
||||
Apparently, several solutions are available for this problem, including a udev rule installed in /lib/udev/rules.d/75-persistent-net-generator.rules that records each interface's MAC address at first boot and writes it to /etc/udev/rules.d/70-persistent-net.rules; this is currently used by default in Ubuntu and applies to most hardware components.
|
||||
|
||||
Other solutions include biosdevname, a package that reads port or index numbers, and slot names from the BIOS and writes them to /lib/udev/rules.d/71-biosdevname.rules, and ifnames, a persistent name generator that automatically checks the BIOS and/or firmware for index numbers or slot names, similar to biosdevname.
|
||||
|
||||
However, the difference between ifnames and biosdevname is that the latter falls back to slot names, such as PCI numbers, and then to the MAC address and writes to /lib/udev/rules.d/80-net-setup-link.rules. All of these solutions can be combined, and Martin Pitt proposes to replace the first solution that is now used by default with the ifnames one.
|
||||
|
||||
If a new solution is implemented, a lot of networking issues will be resolved in Ubuntu, especially the cloud version. In addition, it will provide for stable network interface names for all new Ubuntu installations, and resolve many other problems related to system-image, etc.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://news.softpedia.com/news/Ubuntu-Devs-Propose-Stateless-Persistent-Network-Interface-Names-for-Ubuntu-and-Debian-480730.shtml
|
||||
|
||||
作者:[Marius Nestor][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://news.softpedia.com/editors/browse/marius-nestor
|
@ -1,56 +0,0 @@
|
||||
Square 2.0 Icon Pack Is Twice More Beautiful
|
||||
================================================================================
|
||||

|
||||
|
||||
Elegant, modern looking [Square icon theme][1] has recently been upgraded to version 2.0, which makes it more beautiful than ever. Square icon packs are compatible with all major desktop environments such as **Unity, GNOME, KDE, MATE** etc., which means that you can use them on all popular Linux distributions such as Ubuntu, Fedora, Linux Mint, elementary OS etc. The vastness of this icon pack can be estimated from the fact that it contains over 15,000 icons.
|
||||
|
||||
### Install and use Square icon pack 2.0 in Linux ###
|
||||
|
||||
There are two variants of Square icons, dark and light. Based on your preference, you can choose either of the two. For experimentation sake, I would advise you to download both variants of the icon theme.
|
||||
|
||||
You can download the icon pack from the link below. The files are stored in Google Drive, so don’t be suspicious if you don’t see a standard website like [SourceForge][2].
|
||||
|
||||
- [Square Dark Icons][3]
|
||||
- [Square Light Icons][4]
|
||||
|
||||
To use the icon theme, extract the downloaded files in ~/.icons directory. If this doesn’t exist, create it. Once you have the files in the right place, based on your desktop environment, use a tool to change the icon theme. I have written some small tutorials in the past on this topic. Feel free to refer to them if you need further help:
|
||||
|
||||
- [How to change themes in Ubuntu Unity][5]
|
||||
- [How to change themes in GNOME Shell][6]
|
||||
- [How to change themes in Linux Mint][7]
|
||||
- [How to change theme in Elementary OS Freya][8]
|
||||
|
||||
### Give it a try ###
|
||||
|
||||
Here is what my Ubuntu 14.04 looks like with Square icons. I am using [Ubuntu 15.04 default wallpaper][9] in the background.
|
||||
|
||||

|
||||
|
||||
A quick look at several icons in the Square theme:
|
||||
|
||||

|
||||
|
||||
How do you find it? Do you think it can be considered as one of the [best icon themes for Ubuntu 14.04][10]? Do share your thoughts and stay tuned for more articles on customizing your Linux desktop.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/square-2-0-icon-pack-linux/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://gnome-look.org/content/show.php/Square?content=163513
|
||||
[2]:http://sourceforge.net/
|
||||
[3]:http://gnome-look.org/content/download.php?content=163513&id=1&tan=62806435
|
||||
[4]:http://gnome-look.org/content/download.php?content=163513&id=2&tan=19789941
|
||||
[5]:http://itsfoss.com/how-to-install-themes-in-ubuntu-13-10/
|
||||
[6]:http://itsfoss.com/install-switch-themes-gnome-shell/
|
||||
[7]:http://itsfoss.com/install-icon-linux-mint/
|
||||
[8]:http://itsfoss.com/install-themes-icons-elementary-os-freya/
|
||||
[9]:http://itsfoss.com/default-wallpapers-ubuntu-1504/
|
||||
[10]:http://itsfoss.com/best-icon-themes-ubuntu-1404/
|
@ -1,41 +0,0 @@
|
||||
Synfig Studio 1.0 — Open Source Animation Gets Serious
|
||||
================================================================================
|
||||

|
||||
|
||||
**A brand new version of the free, open-source 2D animation software Synfig Studio is now available to download.**
|
||||
|
||||
The first release of the cross-platform software in well over a year, Synfig Studio 1.0 builds on its claim of offering an “industrial-strength solution for creating film-quality animation” with a suite of new and improved features.
|
||||
|
||||
Among them is an improved user interface that the project developers say is ‘easier’ and ‘more intuitive’ to use. The client adds a new **single-window mode** for tidy working and has been **reworked to use the latest GTK3 libraries**.
|
||||
|
||||
On the features front there are several notable changes, including the addition of a fully-featured bone system.
|
||||
|
||||
This **joint-and-pivot ‘skeleton’ framework** is well suited to 2D cut-out animation and should prove super efficient when coupled with the complex deformations new to this release, or used with Synfig’s popular ‘automatic interpolated keyframes’ (read: frame-to-frame morphing).
|
||||
|
||||
注:youtube视频
|
||||
<iframe width="750" height="422" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/M8zW1qCq8ng?feature=oembed"></iframe>
|
||||
|
||||
New non-destructive cutout tools, friction effects and initial support for full frame-by-frame bitmap animation, may help unlock the creativity of open-source animators, as might the addition of a sound layer for syncing the animation timeline with a soundtrack!
|
||||
|
||||
### Download Synfig Studio 1.0 ###
|
||||
|
||||
Synfig Studio is not a tool suited for everyone, though the latest batch of improvements in this latest release should help persuade some animators to give the free animation software a try.
|
||||
|
||||
If you want to find out what open-source animation software is like for yourself, you can grab an installer for Ubuntu for the latest release direct from the project’s Sourceforge page using the links below.
|
||||
|
||||
- [Download Synfig 1.0 (64bit) .deb Installer][1]
|
||||
- [Download Synfig 1.0 (32bit) .deb Installer][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/04/synfig-studio-new-release-features
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:http://sourceforge.net/projects/synfig/files/releases/1.0/linux/synfigstudio_1.0_amd64.deb/download
|
||||
[2]:http://sourceforge.net/projects/synfig/files/releases/1.0/linux/synfigstudio_1.0_x86.deb/download
|
@ -1,111 +0,0 @@
|
||||
translating wi-cuckoo
|
||||
What are good command line HTTP clients?
|
||||
================================================================================
|
||||
"The whole is greater than the sum of its parts" is a famous quote from Aristotle, the Greek philosopher and scientist. This quote is particularly pertinent to Linux. In my view, one of Linux's biggest strengths is its synergy. The usefulness of Linux doesn't derive only from the huge raft of open source (command line) utilities. Instead, it's the synergy generated by using them together, sometimes in conjunction with larger applications.
|
||||
|
||||
The Unix philosophy spawned a "software tools" movement which focused on developing concise, basic, clear, modular and extensible code that can be used for other projects. This philosophy remains an important element for many Linux projects.
|
||||
|
||||
Good open source developers writing utilities seek to make sure the utility does its job as well as possible, and work well with other utilities. The goal is that users have a handful of tools, each of which seeks to excel at one thing. Some utilities work well independently.
|
||||
|
||||
This article looks at 3 open source command line HTTP clients. These clients let you download files off the internet from a command line. But they can also be used for many more interesting purposes such as testing, debugging and interacting with HTTP servers and web applications. Working with HTTP from the command-line is a worthwhile skill for HTTP architects and API designers. If you need to play around with an API, HTTPie and cURL will be invaluable.
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
HTTPie (pronounced aych-tee-tee-pie) is an open source command line HTTP client. It is a command line, cURL-like tool for humans.
|
||||
|
||||
The goal of this software is to make CLI interaction with web services as human-friendly as possible. It provides a simple http command that allows for sending arbitrary HTTP requests using a simple and natural syntax, and displays colorized output. HTTPie can be used for testing, debugging, and generally interacting with HTTP servers.
|
||||
|
||||
#### Features include: ####
|
||||
|
||||
- Expressive and intuitive syntax
|
||||
- Formatted and colorized terminal output
|
||||
- Built-in JSON support
|
||||
- Forms and file uploads
|
||||
- HTTPS, proxies, and authentication
|
||||
- Arbitrary request data
|
||||
- Custom headers
|
||||
- Persistent sessions
|
||||
- Wget-like downloads
|
||||
- Python 2.6, 2.7 and 3.x support
|
||||
- Linux, Mac OS X and Windows support
|
||||
- Plugins
|
||||
- Documentation
|
||||
- Test coverage
|
||||
|
||||
- Website: [httpie.org][1]
|
||||
- Developer: Jakub Roztočil
|
||||
- License: Open Source
|
||||
- Version Number: 0.9.2
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
cURL is an open source command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, TELNET and TFTP.
|
||||
|
||||
curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos...), file transfer resume, proxy tunneling and a busload of other useful tricks.
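As a quick, offline-friendly illustration (the file name below is hypothetical), curl also speaks the file:// scheme, so you can try it without any network access:

```shell
echo 'hello' > /tmp/demo.txt
curl -s file:///tmp/demo.txt
# prints: hello
```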
|
||||
|
||||
#### Features include: ####
|
||||
|
||||
- Config file support
|
||||
- Multiple URLs in a single command line
|
||||
- Range "globbing" support: [0-13], {one,two,three}
|
||||
- Multiple file upload on a single command line
|
||||
- Custom maximum transfer rate
|
||||
- Redirectable stderr
|
||||
- Metalink support
|
||||
|
||||
- Website: [curl.haxx.se][2]
|
||||
- Developer: Daniel Stenberg
|
||||
- License: MIT/X derivate license
|
||||
- Version Number: 7.42.0
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
Wget is open source software that retrieves content from web servers. Its name is derived from World Wide Web and get. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.
|
||||
|
||||
Wget can follow links in HTML pages and create local versions of remote web sites, fully recreating the directory structure of the original site. This is known as "recursive downloading."
|
||||
|
||||
Wget has been designed for robustness over slow or unstable network connections.
|
||||
|
||||
Features include:
|
||||
|
||||
- Resume aborted downloads, using REST and RANGE
|
||||
- Use filename wild cards and recursively mirror directories
|
||||
- NLS-based message files for many different languages
|
||||
- Optionally converts absolute links in downloaded documents to relative, so that downloaded documents may link to each other locally
|
||||
- Runs on most UNIX-like operating systems as well as Microsoft Windows
|
||||
- Supports HTTP proxies
|
||||
- Supports HTTP cookies
|
||||
- Supports persistent HTTP connections
|
||||
- Unattended / background operation
|
||||
- Uses local file timestamps to determine whether documents need to be re-downloaded when mirroring

- Website: [www.gnu.org/software/wget/][3]
- Developer: Hrvoje Niksic, Gordon Matzigkeit, Junio Hamano, Dan Harkless, and many others
- License: GNU GPL v3
- Version Number: 1.16.3

--------------------------------------------------------------------------------

via: http://www.linuxlinks.com/article/20150425174537249/HTTPclients.html

Author: Frazer Kline
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](http://linux.cn/).

[1]:http://httpie.org/
[2]:http://curl.haxx.se/
[3]:https://www.gnu.org/software/wget/
@ -0,0 +1,117 @@

translating by wwy-hust

Guake 0.7.0 Released – A Drop-Down Terminal for Gnome Desktops
================================================================================
The Linux command line is the best and most powerful thing: it fascinates new users and gives extreme power to experienced users and geeks. Those who work on servers and in production already know this. It is interesting to note that the Linux console was one of the first features of the kernel, written by Linus Torvalds back in 1991.

A terminal is a powerful tool, and it is very reliable as it has no moving parts. A terminal emulator serves as an intermediary between the console and the GUI environment; terminal emulators themselves are GUI applications that run on top of a desktop environment. There are a lot of terminal applications, some of which are desktop-environment specific while the rest are universal. Terminator, Konsole, Gnome-Terminal, Terminology, the XFCE terminal, and xterm are a few terminal emulators worth naming.

You may get a list of the most widely used terminal emulators by following the link below.

- [20 Useful Terminals for Linux][1]

The other day, while surfing the web, I came across a terminal named 'Guake', a terminal for GNOME. Though this is not the first time I have learned about Guake (I had known of this application for nearly a year), somehow I could not write about it, and later it slipped my mind until I heard of it again. So the article is finally here. We will take you through Guake's features and its installation on Debian, Ubuntu, and Fedora, followed by a quick test.

#### What is Guake? ####

Guake is a drop-down terminal for the GNOME environment. Written from scratch, mostly in Python and a little in C, this application is released under GPLv2+ and is available for Linux and similar systems. Guake is inspired by the console in the computer game Quake, which slides down from the top of the screen when a special key is pressed (F12 by default) and slides back up when the same key is pressed again.

It is important to mention that Guake is not the first of its kind. Yakuake, which stands for Yet Another Kuake, a terminal emulator for the KDE desktop environment, and Tilda, a GTK+ terminal emulator, are also inspired by the same slide-up/down console of the computer game Quake.

#### Features of Guake ####

- Lightweight
- Simple, easy, and elegant
- Functional
- Powerful
- Good looking
- Smooth integration of the terminal into the GUI
- Appears when you summon it and disappears once you are done, by pressing a predefined hotkey
- Support for hotkeys, tabs, and background transparency makes it a brilliant application, a must for every GNOME user
- Extremely configurable
- Plenty of color palettes included, both fixed and recognized
- Shortcut for the transparency level
- Can run a script when Guake starts, via Guake Preferences
- Able to run on more than one monitor

Guake 0.7.0 was released recently, bringing numerous fixes as well as some new features, as discussed above. The complete Guake 0.7.0 changelog and source tarballs can be found [here][2].

### Installing Guake Terminal in Linux ###

If you are interested in compiling Guake from source, you may download the source from the link above and build it yourself before installing.

However, Guake is available for most distributions, either from the default repositories or by adding an additional repository. Here, we will install Guake on Debian, Ubuntu, Linux Mint, and Fedora systems.

First get the latest software package list from the repository, and then install Guake from the default repository as shown below.

    ---------------- On Debian, Ubuntu and Linux Mint ----------------
    $ sudo apt-get update
    $ sudo apt-get install guake

----------

    ---------------- On Fedora 19 Onwards ----------------
    # yum update
    # yum install guake

After installation, start Guake from another terminal with:

    $ guake

After starting it, use F12 (the default) to roll the terminal down and up on your GNOME desktop.

It looks very beautiful, especially the transparent background. Roll down… roll up… roll down… roll up… run a command. Open another tab, run a command… roll up… roll down…


Guake Terminal in Action

If your wallpaper or the colors of your working windows don't match, you may want to change your wallpaper or reduce the transparency of the Guake terminal.

Next, look into Guake's preferences to edit the settings as required. Run Guake Preferences either from the Application Menu or with the command below.

    $ guake --preferences


Guake Terminal Properties

Scrolling properties:


Guake Scrolling Settings

Appearance properties: here you can modify the text and background colors, as well as tune the transparency.


Appearance Properties

Keyboard shortcuts: here you may edit and modify the toggle key for Guake's visibility (F12 by default).


Keyboard Shortcuts

Compatibility settings: you probably won't need to edit these.


Compatibility Setting

### Conclusion ###

This project is neither too young nor too old; it has reached a certain level of maturity, is quite solid, and works out of the box. For someone like me, who needs to switch between the GUI and the console very often, Guake is a boon. I no longer need to manage an extra window, open and close it frequently, tab through a huge pool of opened applications to find the terminal, or switch to a different workspace to manage a terminal; all I need now is F12.

I think this is a must-have tool for any Linux user who uses the GUI and the console equally. I would recommend it to anyone who wants to work on a system where interaction between the GUI and the console is smooth and hassle-free.

That's all for now. Let us know if you have any problems installing or running it; we will be here to help you. Also tell us about your experience with Guake. Provide us with your valuable feedback in the comments below, and like and share us to help us spread the word.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/install-guake-terminal-ubuntu-mint-fedora/

Author: [Avishek Kumar][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/linux-terminal-emulators/
[2]:https://github.com/Guake/guake/releases/tag/0.7.0
@ -0,0 +1,103 @@

New to Linux? 5 Apps You Didn’t Know You Were Missing
================================================================================


When you moved to Linux, you went straight for the obvious browsers, cloud clients, music players, email clients, and perhaps image editors, right? As a result, you’ve missed several vital, productive tools. Here’s a roundup of five unmissable Linux apps that you really need to install.

### [Synergy][1] ###

Synergy is a godsend if you use multiple desktops. It’s an open-source app that allows you to use a single mouse and keyboard across multiple computers, displays, and operating systems. Switching the mouse and keyboard between the desktops is easy: just move the mouse off the edge of one screen and onto another.


When you open Synergy for the first time, it will run you through the setup wizard. The primary desktop is the one whose input devices you’ll be sharing with the other desktops. Configure that as the server, and add the remaining computers as clients.


Synergy maintains a common clipboard across all connected desktops. It also merges the lock-screen setup, i.e. you need to bypass the lock screen just once to log in to all the computers together. Under **Edit > Settings**, you can make a few more tweaks, such as adding a password and setting Synergy to launch on startup.
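Besides the wizard, Synergy's server can also read its screen layout from a plain-text configuration file. The following is a minimal sketch; the host names `desktop1` and `laptop1` are placeholders, not names from this article.

```
section: screens
    desktop1:
    laptop1:
end
section: links
    desktop1:
        right = laptop1
    laptop1:
        left = desktop1
end
```

With a file like this, the server is typically started as `synergys -c <config-file>`, and each client connects with `synergyc <server-address>`; the GUI wizard remains the simpler route.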

### [BasKet Note Pads][2] ###

Using BasKet Note Pads is somewhat like mapping your brain onto a computer. It helps you make sense of all the ideas floating around in your head by allowing you to organize them in digestible chunks. You can use BasKet Note Pads for various tasks, such as taking notes, creating idea maps and to-do lists, saving links, managing research, and keeping track of project data.

Each main idea or project goes into a section called a basket. To split ideas further, you can have one or more sub-baskets or sibling baskets. The baskets are in turn broken down into notes, which hold all the bits and pieces of a project. You can group them, tag them, and filter them.

The left pane in the application’s two-pane layout displays a tree-like view of all the baskets you have created.


BasKet Note Pads might seem a little complex on day one, but you’ll get the hang of it soon. When you’re not using it, the app sits in the system tray, ready for quick access.

Want a [simpler note-taking alternative][3] on Linux? Try [Springseed][4].
### [Caffeine][5] ###

How do you ensure that your computer doesn’t go to sleep right in the middle of an [interesting movie][6]? Caffeine is the answer. No, you don’t need to brew a cup of coffee for your computer. You just need to install a lightweight indicator applet called Caffeine. It prevents the screensaver, lock screen, or sleep mode from being activated when the computer is idle, but only while the current window is in full-screen mode.

To install the applet, [download its latest version][7]. If you want to go [the PPA way][8], here’s how:

    $ sudo add-apt-repository ppa:caffeine-developers/ppa
    $ sudo apt-get update
    $ sudo apt-get install caffeine

On Ubuntu 14.10 and 15.04 (and their derivatives), you’ll also need to install certain dependency packages:

    $ sudo apt-get install libappindicator3-1 gir1.2-appindicator3-0.1

After finishing the installation, add **caffeine-indicator** to your list of startup applications to make the indicator appear in the system tray. You can turn Caffeine’s functionality on and off via the app’s context menu, which pops up when you right-click on the tray icon.


### Easystroke ###

Easystroke makes an excellent [Linux mouse hack][9]. Use it to set up a series of customized mouse, touchpad, or pen gestures to simulate common actions such as keystrokes, commands, and scrolls. Setting up Easystroke gestures is straightforward enough, thanks to the clear instructions that appear at all the right moments as you navigate the UI.


Begin by choosing the mouse button you’d like to use for performing gestures, and throw in a modifier if you like. You’ll find this setting under **Preferences > Behavior > Gesture Button**. Now head to the **Actions** tab and record strokes for your most commonly used actions.


Using the **Preferences** and **Advanced** tabs, you can make other tweaks, like setting Easystroke to autostart, adding a system tray icon, and changing the scroll speed.
### Guake ###

I saved my favorite Linux find for last. Guake is a drop-down command line modeled after the one in the first-person shooter video game [Quake][10]. Whether you’re [learning terminal commands][11] or executing them on a regular basis, Guake is a great way to keep the terminal handy. You can bring it up or hide it with a single keystroke.

As you can see in the image below, when in action, Guake appears as an overlay on the current window. Right-click within the terminal to access the **Preferences** section, from where you can change Guake’s appearance, its scroll action, keyboard shortcuts, and more.


If KDE is your [Linux desktop of choice][12], do check out [Yakuake][13], which provides similar functionality.

### Name Your Favorite Linux Discovery! ###

There are many more [super useful Linux apps][14] waiting to be discovered. Rest assured that we’ll keep introducing you to them.

Which Linux app were you happiest to learn about? Which one do you consider a must-have? Tell us in the comments.

--------------------------------------------------------------------------------

via: http://www.makeuseof.com/tag/new-linux-5-apps-didnt-know-missing/

Author: [Akshata][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]:http://www.makeuseof.com/tag/author/akshata/
[1]:http://synergy-project.org/
[2]:http://basket.kde.org/
[3]:http://www.makeuseof.com/tag/try-these-3-beautiful-note-taking-apps-that-work-offline/
[4]:http://getspringseed.com/
[5]:https://launchpad.net/caffeine
[6]:http://www.makeuseof.com/tag/popular-apps-movies-according-google/
[7]:http://ppa.launchpad.net/caffeine-developers/ppa/ubuntu/pool/main/c/caffeine/
[8]:http://www.makeuseof.com/tag/ubuntu-ppa-technology-explained/
[9]:http://www.makeuseof.com/tag/4-astounding-linux-mouse-hacks/
[10]:http://en.wikipedia.org/wiki/Quake_%28video_game%29
[11]:http://www.makeuseof.com/tag/4-ways-teach-terminal-commands-linux-si/
[12]:http://www.makeuseof.com/tag/10-top-linux-desktop-environments-available/
[13]:https://yakuake.kde.org/
[14]:http://www.makeuseof.com/tag/linux-treasures-x-sublime-native-linux-apps-will-make-want-switch/
@ -0,0 +1,71 @@

This Ubuntu App Applies Instagram Style Filters to Your Photos
================================================================================

**Looking for an Ubuntu app to apply Instagram-style filters to your photos?**

Grab your selfie stick and step this way…


XnRetro is a photo editing app

### XnRetro Photo Editor ###

**XnRetro** is a simple image editing application that lets you quickly add “Instagram-like” effects to your photos.

You know the sort of effects I’m talking about: scratches, noise, frames, over-processing, vintage washes, and nostalgic tints (because in this age of digital transience we must know that endless selfies and sandwich snaps are unlikely to ever become nostalgic in themselves).

Whether you consider such effects to be of asinine artistic value or a shortcut to being creative, these kinds of filters are popular and can help add a splash of personality to an otherwise so-so photo.

#### XnRetro Features ####

**XnRetro features the following:**

- 20 color filters
- 15 light effects (bokeh, leaks, etc.)
- 28 frames and borders
- 5 vignettes (with strength control)
- Image adjustments for contrast, gamma, saturation, etc.
- Square crop option


Small tweak to make light effects work

You can save edited images (in theory) as .jpg or .png files and share them straight to social media from within the app.

I say “in theory” because .jpg saving doesn’t actually work in the Linux version of the app (you can save edited images as .png files, though). Similarly, most of the built-in social networking links are borked or just flat out fail on export.

To get the **15 light leaks** to work, you will need to re-save each .jpg image in XnRetro’s ‘light’ folder as a .png file. Edit ‘light.xml’ to match the new file names, hit save, and the light effects will load up in XnRetro without issue.
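That fix can be sketched in the shell. This is a hypothetical sketch run on a scratch copy: the `/tmp` path and the stand-in `light.xml` contents are illustrative, and an image converter such as ImageMagick is assumed for the actual .jpg-to-.png step.

```shell
# Work on a scratch copy rather than XnRetro's real 'light' folder
mkdir -p /tmp/xnretro-light
cd /tmp/xnretro-light

# Stand-in for XnRetro's light.xml, which references the .jpg effect images
printf '<lights><file>leak01.jpg</file></lights>\n' > light.xml

# After converting each leak .jpg to .png (e.g. `mogrify -format png *.jpg`
# with ImageMagick), point light.xml at the new file names:
sed -i 's/\.jpg/.png/g' light.xml
cat light.xml
```

Applied to the real folder, the same rename-and-`sed` pass is all the “edit light.xml to match” step amounts to.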

> ‘For user-friendly image editing XnRetro is hard to beat — once you make it work.’

**Is XnRetro Worth Installing?**

XnRetro is not perfect. It is pretty old-looking, difficult to install properly, and has not been updated for several years.

It does still work, barring .jpg saving, and is a nimble alternative to an advanced app like GIMP or to Shotwell’s set of ‘serious’ image adjustment tools.

While web apps and Chrome apps like [Pixlr Touch Up][1] and [Polarr][2] offer similar features, you may be looking for a truly native solution.

And for that, for user-friendly image editing based around easy-to-apply filters, XnRetro is hard to beat.

### Download XnRetro for Ubuntu ###

XnRetro is not available as an installable .deb package. It is distributed as a binary file, meaning you need to double-click on the program file to run it each and every time. It’s also 32-bit only.

You can download XnRetro using the link below. Once the download has completed, extract the archive, enter the folder it creates, and double-click on the ‘xnretro’ program binary inside.

- [Download XnRetro for Linux (32bit, tar.gz)][3]

--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2015/05/instagram-photo-filters-ubuntu-desktop-app

Author: [Joey-Elijah Sneddon][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgchrome.com/?s=pixlr
[2]:http://www.omgchrome.com/the-best-chrome-apps-of-2014/
[3]:http://www.xnview.com/en/xnretro/#downloads
@ -0,0 +1,73 @@

Open Source History: Why Did Linux Succeed?
================================================================================

> Why did Linux, the Unix-like operating system kernel started by Linus Torvalds in 1991 that became central to the open source world, succeed where so many similar projects, including GNU HURD and the BSDs, failed?


One of the most puzzling questions about the history of free and open source software is this: Why did Linux succeed so spectacularly, whereas similar attempts to build a free or open source, Unix-like operating system kernel met with considerably less success? I don't know the answer to that question. But I have rounded up some theories, which I'd like to lay out here.

First, though, let me make clear what I mean when I write that Linux was a great success. I am defining it in opposition primarily to the variety of other Unix-like operating system kernels, some of them open and some not, that proliferated around the time Linux was born. [GNU][1] HURD, the free-as-in-freedom kernel whose development began in [May 1991][2], is one of them. Others include Unices that most people today have never heard of, such as various derivatives of BSD, the Unix variant developed at the University of California at Berkeley; Xenix, Microsoft's take on Unix; academic Unix clones including Minix; and the original Unix developed under the auspices of AT&T, which was vitally important in academic and commercial computing circles during earlier decades, but had virtually disappeared from the scene by the 1990s.

#### Related ####

- [Open Source History: Tracing the Origins of Hacker Culture and the Hacker Ethic][3]
- [Unix and Personal Computers: Reinterpreting the Origins of Linux][4]

I'd also like to make clear that I'm writing here about kernels, not complete operating systems. To a great extent, the Linux kernel owes its success to the GNU project as a whole, which produced the crucial tools, including compilers, a debugger, and a BASH shell implementation, that are necessary to build a Unix-like operating system. But GNU developers never created a viable version of the HURD kernel (although they are [still trying][5]). Instead, Linux ended up as the kernel that glued the rest of the GNU pieces together, even though that had never been in the GNU plans.

So it's worth asking why Linux, a kernel launched by Linus Torvalds, an obscure programmer in Finland, in 1991—the same year as HURD—endured and thrived within a niche where so many other Unix-like kernels, many of which enjoyed strong commercial backing and association with the leading Unix hackers of the day, failed to take off. To that end, here are a few theories pertaining to that question that I've come across as I've researched the history of the free and open source software worlds, along with the respective strengths and weaknesses of these explanations.

### Linux Adopted a Decentralized Development Approach ###

This is the argument that comes out of Eric S. Raymond's essay, "[The Cathedral and the Bazaar][6]," and related works, which make the case that software develops best when a large number of contributors collaborate continuously within a relatively decentralized organizational structure. That was generally true of Linux, in contrast to, for instance, GNU HURD, which took a more centrally directed approach to code development—and, as a result, "had been evidently failing" to build a complete operating system for a decade, in Raymond's view.

To an extent, this explanation makes sense, but it has some significant flaws. For one, Torvalds arguably assumed a more authoritative role in directing Linux code development—deciding which contributions to include and reject—than Raymond and others have wanted to recognize. For another, this reasoning does not explain why GNU succeeded in producing so much software besides a working kernel. If only decentralized development works well in the free/open source software world, then all of GNU's programming efforts should have been a bust—which they most certainly were not.

### Linux is Pragmatic; GNU is Ideological ###

Personally, I find this explanation—which supposes that Linux grew so rapidly because its founder was a pragmatist who initially wrote the kernel just to be able to run a tailored Unix OS on his computer at home, not as part of a crusade to change the world through free software, as the GNU project aimed to do—the most compelling.

Still, it has some weaknesses that make it less than completely satisfying. In particular, while Torvalds himself adopted pragmatic principles, not all members of the community that coalesced around his project, then or today, have done the same. Yet Linux has succeeded all the same.

Moreover, if pragmatism was the key to Linux's endurance, then why, again, was GNU successful in building so many other tools besides a kernel? If having strong political beliefs about software prevents you from pursuing successful projects, GNU should have been an outright failure, not an endeavor that produced a number of software packages that remain foundational to the IT world today.

Last but not least, many of the other Unix variants of the late 1980s and early 1990s, especially several BSD off-shoots, were the products of pragmatism. Their developers aimed to build Unix variants that could be more freely shared than those restricted by expensive commercial licenses, but they were not deeply ideological about programming or sharing code. Neither was Torvalds, and it is therefore difficult to explain Linux's success, and the failure of other Unix projects, in terms of ideological zeal.

### Operating System Design ###

There are technical differences between Linux and some other Unix variants that are important to keep in mind when considering the success of Linux. Richard Stallman, the founder of the GNU project, pointed to these in explaining, in an email to me, why HURD development had lagged: "It is true that the GNU Hurd is not a practical success. Part of the reason is that its basic design made it somewhat of a research project. (I chose that design thinking it was a shortcut to get a working kernel in a hurry.)"

Linux is also different from other Unix variants in the sense that Torvalds wrote all of the Linux code himself. Having a Unix of his own, free of other people's code, was one of his stated intentions when he [first announced Linux][7] in August 1991. This characteristic sets Linux apart from most of the other Unix variants that existed at that time, which derived their code bases from either AT&T Unix or Berkeley's BSD.

I'm not a computer scientist, so I'm not qualified to decide whether the Linux code was simply superior to that of the other Unices, explaining why Linux succeeded. But that's an argument someone might make—although it does not account for the disparity in culture and personnel between Linux and other Unix kernels, which, to me, seem more important than code in understanding Linux's success.

### The "Community" Put Its Support Behind Linux ###

Stallman also wrote that "mainly the reason" for Linux's success was that "Torvalds made Linux free software, and since then more of the community's effort has gone into Linux than into the Hurd." That's not exactly a complete explanation for Linux's trajectory, since it does not account for why the community of free software developers followed Torvalds instead of HURD or another Unix. But it nonetheless highlights this shift as a large part of how Linux prevailed.

A fuller account of the free software community's decision to endorse Linux would have to explain why developers did so even though, at first, Linux was a very obscure project—much more so, by any measure, than some of the other attempts at the time to create a freer Unix, such as NetBSD and 386BSD—as well as one whose affinity with the goals of the free software movement was not at first clear. Originally, Torvalds released Linux under a license that simply prevented its commercial use. It was considerably later that he switched to the GNU General Public License, which protects the openness of source code.

So, those are the explanations I've found for Linux's success as an open source operating system kernel—a success which, to be sure, has been measured in some respects (desktop Linux never became what its proponents hoped, for instance). But Linux has also become foundational to the computing world in ways that no other Unix-like OS has. Maybe Apple's OS X and iOS, which derive from BSD, come close, but they don't play such a central role as Linux in powering the Internet, among other things.

Have other ideas on why Linux became what it did, or why its counterparts in the Unix world have now almost all sunk into obscurity? (I know: BSD variants still have a following today, and some commercial Unices remain important enough for [Red Hat][8] (RHT) to be [courting their users][9]. But none of these Unix holdouts have conquered everything from Web servers to smartphones in the way Linux has.) I'd be delighted to hear them.

--------------------------------------------------------------------------------

via: http://thevarguy.com/open-source-application-software-companies/050415/open-source-history-why-did-linux-succeed

Author: [Christopher Tozzi][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]:http://thevarguy.com/author/christopher-tozzi
[1]:http://gnu.org/
[2]:http://www.gnu.org/software/hurd/history/hurd-announce
[3]:http://thevarguy.com/open-source-application-software-companies/042915/open-source-history-tracing-origins-hacker-culture-and-ha
[4]:http://thevarguy.com/open-source-application-software-companies/042715/unix-and-personal-computers-reinterpreting-origins-linux
[5]:http://thevarguy.com/open-source-application-software-companies/042015/30-years-hurd-lives-gnu-updates-open-source-
[6]:http://www.catb.org/esr/writings/cathedral-bazaar/cathedral-bazaar/
[7]:https://groups.google.com/forum/#!topic/comp.os.minix/dlNtH7RRrGA[1-25]
[8]:http://www.redhat.com/
[9]:http://thevarguy.com/open-source-application-software-companies/032614/red-hat-grants-certification-award-unix-linux-migration-a
@ -1,444 +0,0 @@

Web Caching Basics: Terminology, HTTP Headers, and Caching Strategies
=====================================================================

### Introduction

Intelligent content caching is one of the most effective ways to improve the experience for your site's visitors. Caching, or temporarily storing content from previous requests, is part of the core content delivery strategy implemented within the HTTP protocol. Components throughout the delivery path can all cache items to speed up subsequent requests, subject to the caching policies declared for the content.

In this guide, we will discuss some of the basic concepts of web content caching. This will mainly cover how to select caching policies to ensure that caches throughout the internet can correctly process your content. We will talk about the benefits that caching affords, the side effects to be aware of, and the different strategies to employ to provide the best mixture of performance and flexibility.

What Is Caching?
----------------

Caching is the term for storing reusable responses in order to make subsequent requests faster. There are many different types of caching available, each of which has its own characteristics. Application caches and memory caches are both popular for their ability to speed up certain responses.

Web caching, the focus of this guide, is a different type of cache. Web caching is a core design feature of the HTTP protocol meant to minimize network traffic while improving the perceived responsiveness of the system as a whole. Caches are found at every level of a content's journey from the original server to the browser.

Web caching works by caching the HTTP responses for requests according to certain rules. Subsequent requests for cached content can then be fulfilled from a cache closer to the user instead of sending the request all the way back to the web server.
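As a concrete illustration, the caching policy travels with the response itself in its HTTP headers. The header values below are made-up examples, not taken from this guide:

```
HTTP/1.1 200 OK
Content-Type: image/png
Cache-Control: public, max-age=86400
ETag: "d3b07384d113edec49eaa6238ad5ff00"
```

With a response like this, any cache along the path may answer repeat requests for the object without contacting the origin until `max-age` (here, one day) expires, and may then revalidate cheaply using the `ETag`.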
|
||||
|
||||
Benefits
--------

Effective caching aids both content consumers and content providers.
Some of the benefits that caching brings to content delivery are:

- **Decreased network costs**: Content can be cached at various points
  in the network path between the content consumer and content origin.
  When the content is cached closer to the consumer, requests will not
  cause much additional network activity beyond the cache.
- **Improved responsiveness**: Caching enables content to be retrieved
  faster because an entire network round trip is not necessary. Caches
  maintained close to the user, like the browser cache, can make this
  retrieval nearly instantaneous.
- **Increased performance on the same hardware**: For the server where
  the content originated, more performance can be squeezed from the
  same hardware by allowing aggressive caching. The content owner can
  leverage the powerful servers along the delivery path to take the
  brunt of certain content loads.
- **Availability of content during network interruptions**: With
  certain policies, caching can be used to serve content to end users
  even when it may be unavailable for short periods of time from the
  origin servers.

Terminology
-----------

When dealing with caching, there are a few terms that you are likely to
come across that might be unfamiliar. Some of the more common ones are
below:

- **Origin server**: The origin server is the original location of the
  content. If you are acting as the web server administrator, this is
  the machine that you control. It is responsible for serving any
  content that could not be retrieved from a cache along the request
  route and for setting the caching policy for all content.
- **Cache hit ratio**: A cache's effectiveness is measured in terms of
  its cache hit ratio or hit rate. This is the ratio of requests that
  can be served from a cache to the total number of requests made. A
  high cache hit ratio means that a high percentage of the content was
  able to be retrieved from the cache. This is usually the desired
  outcome for most administrators.
- **Freshness**: Freshness is a term used to describe whether an item
  within a cache is still considered a candidate to serve to a client.
  Content in a cache will only be used to respond if it is within the
  freshness time frame specified by the caching policy.
- **Stale content**: Items in the cache expire according to the cache
  freshness settings in the caching policy. Expired content is
  "stale". In general, expired content cannot be used to respond to
  client requests. The origin server must be re-contacted to retrieve
  the new content, or at least to verify that the cached content is
  still accurate.
- **Validation**: Stale items in the cache can be validated in order
  to refresh their expiration time. Validation involves checking in
  with the origin server to see if the cached content still represents
  the most recent version of the item.
- **Invalidation**: Invalidation is the process of removing content
  from the cache before its specified expiration date. This is
  necessary if the item has been changed on the origin server and
  having an outdated item in the cache would cause significant issues
  for the client.

There are plenty of other caching terms, but the ones above should help
you get started.

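Two of the terms above reduce to simple arithmetic. As a small sketch (the function names are our own), freshness compares an item's age against its lifetime, and the hit ratio is cache hits over total requests:

```python
from datetime import datetime, timedelta, timezone

def is_fresh(stored_at, max_age_seconds, now=None):
    """An item is fresh while its age is below its freshness lifetime."""
    now = now or datetime.now(timezone.utc)
    return (now - stored_at) < timedelta(seconds=max_age_seconds)

def hit_ratio(hits, total_requests):
    """Cache hit ratio: requests served from cache / total requests."""
    return hits / total_requests if total_requests else 0.0

stored = datetime.now(timezone.utc) - timedelta(seconds=30)
assert is_fresh(stored, 60)        # 30s old with a 60s lifetime: fresh
assert not is_fresh(stored, 10)    # lifetime exceeded: stale
assert hit_ratio(80, 100) == 0.8
```
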
What Can be Cached?
-------------------

Certain content lends itself more readily to caching than other
content. Some very cache-friendly content for most sites includes:

- Logos and brand images
- Non-rotating images in general (navigation icons, for example)
- Style sheets
- General JavaScript files
- Downloadable content
- Media files

These tend to change infrequently, so they can benefit from being cached
for longer periods of time.

Some items that you have to be careful about caching are:

- HTML pages
- Rotating images
- Frequently modified JavaScript and CSS
- Content requested with authentication cookies

Some items that should almost never be cached are:

- Assets related to sensitive data (banking info, etc.)
- Content that is user-specific and frequently changed

In addition to the above general rules, it's possible to specify
policies that allow you to cache different types of content
appropriately. For instance, if authenticated users all see the same
view of your site, it may be possible to cache that view anywhere. If
authenticated users see a user-sensitive view of the site that will be
valid for some time, you may tell the user's browser to cache it, but
tell any intermediary caches not to store the view.

Locations Where Web Content Is Cached
-------------------------------------

Content can be cached at many different points throughout the delivery
chain:

- **Browser cache**: Web browsers themselves maintain a small cache.
  Typically, the browser sets a policy that dictates the most
  important items to cache. This may be user-specific content or
  content deemed expensive to download and likely to be requested
  again.
- **Intermediary caching proxies**: Any server in between the client
  and your infrastructure can cache certain content as desired. These
  caches may be maintained by ISPs or other independent parties.
- **Reverse cache**: Your server infrastructure can implement its own
  cache for backend services. This way, content can be served from the
  point of contact instead of hitting backend servers on each request.

Each of these locations can, and often does, cache items according to
its own caching policies and the policies set at the content origin.

Caching Headers
---------------

Caching policy is dependent upon two different factors. The caching
entity itself gets to decide whether or not to cache acceptable content.
It can decide to cache less than it is allowed to cache, but never more.

The majority of caching behavior is determined by the caching policy,
which is set by the content owner. These policies are mainly articulated
through the use of specific HTTP headers.

Through various iterations of the HTTP protocol, a few different
cache-focused headers have arisen with varying levels of sophistication.
The ones you probably still need to pay attention to are below:

- **`Expires`**: The `Expires` header is very straightforward,
  although fairly limited in scope. Basically, it sets a time in the
  future when the content will expire. At this point, any requests for
  the same content will have to go back to the origin server. This
  header is probably best used only as a fallback.
- **`Cache-Control`**: This is the more modern replacement for the
  `Expires` header. It is well supported and implements a much more
  flexible design. In almost all cases, this is preferable to
  `Expires`, but it may not hurt to set both values. We will discuss
  the specifics of the options you can set with `Cache-Control` a bit
  later.
- **`Etag`**: The `Etag` header is used with cache validation. The
  origin can provide a unique `Etag` for an item when it initially
  serves the content. When a cache needs to validate the content it
  has on hand upon expiration, it can send back the `Etag` it has for
  the content. The origin will either tell the cache that the content
  is the same, or send the updated content (with the new `Etag`).
- **`Last-Modified`**: This header specifies the last time that the
  item was modified. This may be used as part of the validation
  strategy to ensure fresh content.
- **`Content-Length`**: While not specifically involved in caching,
  the `Content-Length` header is important to set when defining
  caching policies. Certain software will refuse to cache content if
  it does not know in advance the size of the content it will need to
  reserve space for.
- **`Vary`**: A cache typically uses the requested host and the path
  to the resource as the key with which to store the cache item. The
  `Vary` header can be used to tell caches to pay attention to an
  additional header when deciding whether a request is for the same
  item. This is most commonly used to tell caches to key by the
  `Accept-Encoding` header as well, so that the cache will know to
  differentiate between compressed and uncompressed content.

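The `Etag` exchange described above can be sketched as follows (a hypothetical helper, not a real library call): the cache presents the tag it holds, and the origin answers with a 304 if nothing changed, or a 200 with the new body otherwise:

```python
def revalidate(cached_etag, origin_etag, origin_body):
    """Sketch of cache validation: the cache sends the Etag it holds
    (as an If-None-Match request); the origin replies 304 Not Modified
    if the content is unchanged, else 200 with the new body."""
    if cached_etag == origin_etag:
        return 304, None          # unchanged: refresh freshness only
    return 200, origin_body       # changed: new body (and new Etag)

status, body = revalidate('"abc123"', '"abc123"', b"new version")
assert (status, body) == (304, None)       # cached copy is still good

status, body = revalidate('"abc123"', '"def456"', b"new version")
assert (status, body) == (200, b"new version")  # must re-download
```
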
### An Aside about the Vary Header

The `Vary` header provides you with the ability to store different
versions of the same content at the expense of diluting the entries in
the cache.

In the case of `Accept-Encoding`, setting the `Vary` header allows for a
critical distinction to take place between compressed and uncompressed
content. This is needed to correctly serve these items to browsers that
cannot handle compressed content and is necessary in order to provide
basic usability. One characteristic that tells you that
`Accept-Encoding` may be a good candidate for `Vary` is that it only has
two or three possible values.

Items like `User-Agent` might at first glance seem to be a good way to
differentiate between mobile and desktop browsers to serve different
versions of your site. However, since `User-Agent` strings are
non-standard, the result will likely be many versions of the same
content on intermediary caches, with a very low cache hit ratio. The
`Vary` header should be used sparingly, especially if you do not have
the ability to normalize the requests in intermediate caches that you
control (which may be possible, for instance, if you leverage a content
delivery network).

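The effect of `Vary` on cache keys can be sketched like this (a simplified model; real caches also normalize header values before keying, which this sketch skips):

```python
def cache_key(host, path, request_headers, vary_headers):
    """Build a cache key from host + path plus any headers named in Vary.
    With `Vary: Accept-Encoding`, compressed and uncompressed variants
    get distinct keys, and therefore distinct cache entries."""
    varied = tuple(
        (h, request_headers.get(h, "")) for h in sorted(vary_headers)
    )
    return (host, path, varied)

k_gzip = cache_key("example.com", "/app.js",
                   {"Accept-Encoding": "gzip"}, ["Accept-Encoding"])
k_plain = cache_key("example.com", "/app.js",
                    {}, ["Accept-Encoding"])
assert k_gzip != k_plain   # two stored variants of the same resource

# Without Vary, both requests share one entry:
assert cache_key("example.com", "/app.js", {"Accept-Encoding": "gzip"}, []) \
    == cache_key("example.com", "/app.js", {}, [])
```

This also makes the `User-Agent` warning concrete: with thousands of distinct `User-Agent` strings, each would produce its own key and its own rarely-hit entry.
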
How Cache-Control Flags Impact Caching
--------------------------------------

Above, we mentioned how the `Cache-Control` header is used for modern
cache policy specification. A number of different policy instructions
can be set using this header, with multiple instructions being separated
by commas.

Some of the `Cache-Control` options you can use to dictate your
content's caching policy are:

- **`no-cache`**: This instruction specifies that any cached content
  must be re-validated on each request before being served to a
  client. This, in effect, marks the content as stale immediately, but
  allows revalidation techniques to be used to avoid re-downloading
  the entire item again.
- **`no-store`**: This instruction indicates that the content cannot
  be cached in any way. This is appropriate to set if the response
  represents sensitive data.
- **`public`**: This marks the content as public, which means that it
  can be cached by the browser and any intermediate caches. For
  requests that utilized HTTP authentication, responses are marked
  `private` by default. This header overrides that setting.
- **`private`**: This marks the content as `private`. Private content
  may be stored by the user's browser, but must *not* be cached by any
  intermediate parties. This is often used for user-specific data.
- **`max-age`**: This setting configures the maximum age that the
  content may be cached before it must be revalidated or re-downloaded
  from the origin server. In essence, this replaces the `Expires`
  header for modern browsing and is the basis for determining a piece
  of content's freshness. This option takes its value in seconds, with
  a maximum valid freshness time of one year (31536000 seconds).
- **`s-maxage`**: This is very similar to the `max-age` setting, in
  that it indicates the amount of time that the content can be cached.
  The difference is that this option is applied only to intermediary
  caches. Combining this with the above allows for more flexible
  policy construction.
- **`must-revalidate`**: This indicates that the freshness information
  indicated by `max-age`, `s-maxage` or the `Expires` header must be
  obeyed strictly. Stale content cannot be served under any
  circumstance. This prevents cached content from being used in case
  of network interruptions and similar scenarios.
- **`proxy-revalidate`**: This operates the same as the above setting,
  but only applies to intermediary proxies. In this case, the user's
  browser can potentially be used to serve stale content in the event
  of a network interruption, but intermediate caches cannot be used
  for this purpose.
- **`no-transform`**: This option tells caches that they are not
  allowed to modify the received content for performance reasons under
  any circumstances. This means, for instance, that the cache is not
  allowed to send compressed versions of content that it did not
  receive from the origin server in compressed form.

These can be combined in different ways to achieve various caching
behaviors. Some mutually exclusive values are:

- `no-cache`, `no-store`, and the regular caching behavior indicated
  by the absence of either
- `public` and `private`

The `no-store` option supersedes `no-cache` if both are present. For
responses to unauthenticated requests, `public` is implied. For
responses to authenticated requests, `private` is implied. These can be
overridden by including the opposite option in the `Cache-Control`
header.

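Since `Cache-Control` is a comma-separated list of directives, a cache's first step is to split the header apart. A minimal parser sketch (real implementations also handle quoted values, which this one ignores):

```python
def parse_cache_control(header_value):
    """Parse a Cache-Control header into a directive dict.
    Flag directives map to True; valued directives keep their value."""
    directives = {}
    for part in header_value.split(","):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            name, _, value = part.partition("=")
            directives[name.strip().lower()] = value.strip()
        else:
            directives[part.lower()] = True
    return directives

cc = parse_cache_control("public, max-age=86400, must-revalidate")
assert cc["public"] is True
assert cc["max-age"] == "86400"
assert cc["must-revalidate"] is True
```
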
Developing a Caching Strategy
-----------------------------

In a perfect world, everything could be cached aggressively and your
servers would only be contacted to validate content occasionally. This
doesn't often happen in practice, though, so you should try to set some
sane caching policies that aim to balance long-term caching against
responding to the demands of a changing site.

### Common Issues

There are many situations where caching cannot or should not be
implemented due to how the content is produced (dynamically generated
per user) or the nature of the content (sensitive banking information,
for example). Another problem that many administrators face when setting
up caching is the situation where older versions of your content are out
in the wild, not yet stale, even though new versions have been
published.

These are both frequently encountered issues that can have serious
impacts on cache performance and the accuracy of the content you are
serving. However, you can mitigate these issues by developing caching
policies that anticipate these problems.

### General Recommendations

While your situation will dictate the caching strategy you use, the
following recommendations can help guide you towards some reasonable
decisions.

There are certain steps that you can take to increase your cache hit
ratio before worrying about the specific headers you use. Some ideas
are:

- **Establish specific directories for images, CSS, and shared
  content**: Placing content into dedicated directories will allow you
  to easily refer to it from any page on your site.
- **Use the same URL to refer to the same items**: Since caches key
  off of both the host and the path to the content requested, ensure
  that you refer to your content in the same way on all of your pages.
  The previous recommendation makes this significantly easier.
- **Use CSS image sprites where possible**: CSS image sprites for
  items like icons and navigation decrease the number of round trips
  needed to render your site and allow your site to cache that single
  sprite for a long time.
- **Host scripts and external resources locally where possible**: If
  you utilize JavaScript and other external resources, consider
  hosting those resources on your own servers if the correct headers
  are not being provided upstream. Note that you will have to be aware
  of any updates made to the resource upstream so that you can update
  your local copy.
- **Fingerprint cache items**: For static content like CSS and
  JavaScript files, it may be appropriate to fingerprint each item.
  This means adding a unique identifier to the filename (often a hash
  of the file) so that if the resource is modified, the new resource
  name can be requested, causing the requests to correctly bypass the
  cache. There are a variety of tools that can assist in creating
  fingerprints and modifying the references to them within HTML
  documents.

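Fingerprinting can be sketched in a few lines: hash the file contents and splice a short digest into the name, so a changed file yields a changed URL (the naming scheme here is just one possible convention):

```python
import hashlib

def fingerprint(filename, content):
    """Insert a short content hash into the filename so that any change
    to the file produces a new name (and URL), bypassing stale cached
    copies of the old version."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, _, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}"

name_v1 = fingerprint("app.css", b"body { color: black; }")
name_v2 = fingerprint("app.css", b"body { color: navy; }")
assert name_v1 != name_v2      # changed content -> new resource name
assert name_v1.endswith(".css")
```
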
In terms of selecting the correct headers for different items, the
following can serve as a general reference:

- **Allow all caches to store generic assets**: Static content and
  content that is not user-specific can and should be cached at all
  points in the delivery chain. This will allow intermediary caches to
  respond with the content for multiple users.
- **Allow browsers to cache user-specific assets**: For per-user
  content, it is often acceptable and useful to allow caching within
  the user's browser. While this content would not be appropriate to
  cache on any intermediary caching proxies, caching in the browser
  will allow for instant retrieval for users during subsequent visits.
- **Make exceptions for essential time-sensitive content**: If you
  have content that is time-sensitive, make an exception to the above
  rules so that outdated content is not served in critical
  situations. For instance, if your site has a shopping cart, it
  should reflect the items in the cart immediately. Depending on the
  nature of the content, the `no-cache` or `no-store` options can be
  set in the `Cache-Control` header to achieve this.
- **Always provide validators**: Validators allow stale content to be
  refreshed without having to download the entire resource again.
  Setting the `Etag` and `Last-Modified` headers allows caches to
  validate their content and re-serve it if it has not been modified
  at the origin, further reducing load.
- **Set long freshness times for supporting content**: In order to
  leverage caching effectively, elements that are requested as
  supporting content to fulfill a request should often have a long
  freshness setting. This is generally appropriate for items like
  images and CSS that are pulled in to render the HTML page requested
  by the user. Setting extended freshness times, combined with
  fingerprinting, allows caches to store these resources for long
  periods of time. If the assets change, the modified fingerprint will
  invalidate the cached item and will trigger a download of the new
  content. Until then, the supporting items can be cached far into the
  future.
- **Set short freshness times for parent content**: In order to make
  the above scheme work, the containing item must have relatively
  short freshness times, or may not be cached at all. This is
  typically the HTML page that calls in the other assisting content.
  The HTML itself will be downloaded frequently, allowing it to
  respond to changes rapidly. The supporting content can then be
  cached aggressively.

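Put together, such a reference can be expressed as a small policy table. The asset classes and lifetimes below are illustrative assumptions, not values prescribed by this guide:

```python
# Hypothetical policy table mapping asset classes to Cache-Control
# values, in the spirit of the recommendations above. The class names
# and lifetimes are assumptions chosen for illustration.
POLICIES = {
    "fingerprinted-asset": "public, max-age=31536000",  # cache ~1 year
    "html-page":           "no-cache",                  # revalidate each time
    "user-private-data":   "private, max-age=300",      # browser only
    "sensitive-data":      "no-store",                  # never cached
}

def cache_control_for(asset_class):
    # Fall back to the conservative choice when the class is unknown.
    return POLICIES.get(asset_class, "no-store")

assert cache_control_for("html-page") == "no-cache"
assert cache_control_for("unknown-thing") == "no-store"
```
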
The key is to strike a balance that favors aggressive caching where
possible while leaving opportunities to invalidate entries in the future
when changes are made. Your site will likely have a combination of:

- Aggressively cached items
- Cached items with a short freshness time and the ability to
  re-validate
- Items that should not be cached at all

The goal is to move content into the first categories when possible
while maintaining an acceptable level of accuracy.

Conclusion
----------

Taking the time to ensure that your site has proper caching policies in
place can have a significant impact on your site. Caching allows you to
cut down on the bandwidth costs associated with serving the same content
repeatedly. Your server will also be able to handle a greater amount of
traffic with the same hardware. Perhaps most importantly, clients will
have a faster experience on your site, which may lead them to return
more frequently. While effective web caching is not a silver bullet,
setting up appropriate caching policies can give you measurable gains
with minimal work.

---

Author: [Justin Ellingwood](https://www.digitalocean.com/community/users/jellingwood)

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

Recommended by: [royaso](https://github.com/royaso)

via: https://www.digitalocean.com/community/tutorials/web-caching-basics-terminology-http-headers-and-caching-strategies

This article is an original translation by [LCTT](https://github.com/LCTT/TranslateProject), proudly presented by [Linux中国](http://linux.cn/).

@ -1,135 +0,0 @@
What are useful command-line network monitors on Linux
================================================================================

Network monitoring is a critical IT function for businesses of all sizes. The goal of network monitoring can vary. For example, the monitoring activity can be part of long-term network provisioning, security protection, performance troubleshooting, network usage accounting, and so on. Depending on its goal, network monitoring is done in many different ways, such as performing packet-level sniffing, collecting flow-level statistics, actively injecting probes into the network, parsing server logs, etc.

While there are many dedicated network monitoring systems capable of 24/7/365 monitoring, you can also leverage command-line network monitors in certain situations where a dedicated monitor is overkill. If you are a system admin, you are expected to have hands-on experience with some of the well-known CLI network monitors. Here is a list of **popular and useful command-line network monitors on Linux**.

### Packet-Level Sniffing ###

In this category, monitoring tools capture individual packets on the wire, dissect their content, and display decoded packet content or packet-level statistics. These tools conduct network monitoring from the lowest level, and as such, can possibly do the most fine-grained monitoring, at the cost of network I/O and analysis effort.

1. **dhcpdump**: a command-line DHCP traffic sniffer which captures DHCP request/response traffic and displays dissected DHCP protocol messages in a human-friendly format. It is useful when you are troubleshooting DHCP-related issues.

2. **[dsniff][1]**: a collection of command-line based sniffing, spoofing and hijacking tools designed for network auditing and penetration testing. They can sniff various information such as passwords, NFS traffic, email messages, website URLs, and so on.

3. **[httpry][2]**: an HTTP packet sniffer which captures and decodes HTTP request and response packets, and displays them in a human-readable format.

4. **IPTraf**: a console-based network statistics viewer. It displays packet-level, connection-level, interface-level and protocol-level packet/byte counters in real time. Packet capturing can be controlled by protocol filters, and its operation is fully menu-driven.



5. **[mysql-sniffer][3]**: a packet sniffer which captures and decodes packets associated with MySQL queries. It displays the most frequent or all queries in a human-readable format.

6. **[ngrep][4]**: grep over network packets. It can capture live packets, and match (filter) packets against regular expressions or hexadecimal expressions. It is useful for detecting and storing any anomalous traffic, or for sniffing particular patterns of information from live traffic.

7. **[p0f][5]**: a passive fingerprinting tool which, based on packet sniffing, reliably identifies operating systems, NAT or proxy settings, network link types and various other properties associated with an active TCP connection.

8. **pktstat**: a command-line tool which analyzes live packets to display connection-level bandwidth usage as well as descriptive information on the protocols involved (e.g., HTTP GET/POST, FTP, X11).



9. **Snort**: an intrusion detection and prevention tool which can detect/prevent a variety of backdoor, botnet, phishing and spyware attacks in live traffic, based on rule-driven protocol analysis and content matching.

10. **tcpdump**: a command-line packet sniffer which is capable of capturing network packets on the wire based on filter expressions, dissecting the packets, and dumping the packet content for packet-level analysis. It is widely used for all kinds of network-related troubleshooting, network application debugging, or [security][6] monitoring.

11. **tshark**: a command-line packet sniffing tool that comes with the Wireshark GUI program. It can capture and decode live packets on the wire, and show decoded packet content in a human-friendly fashion.

### Flow-/Process-/Interface-Level Monitoring ###

In this category, network monitoring is done by classifying network traffic into flows, associated processes or interfaces, and collecting per-flow, per-process or per-interface statistics. The source of information can be the libpcap packet capture library or the sysfs kernel virtual filesystem. The monitoring overhead of these tools is low, but packet-level inspection capabilities are missing.

12. **bmon**: a console-based bandwidth monitoring tool which shows various per-interface information, including not only aggregate/average RX/TX statistics, but also a historical view of bandwidth usage.



13. **[iftop][7]**: a bandwidth usage monitoring tool that can show the bandwidth usage of individual network connections in real time. It comes with an ncurses-based interface to visualize the bandwidth usage of all connections in sorted order. It is useful for monitoring which connections are consuming the most bandwidth.

14. **nethogs**: a process monitoring tool which offers a real-time view of the upload/download bandwidth usage of individual processes or programs in an ncurses-based interface. It is useful for detecting bandwidth-hogging processes.

15. **netstat**: a command-line tool that shows various statistics and properties of the networking stack, such as open TCP/UDP connections, network interface RX/TX statistics, routing tables, and protocol/socket statistics. It is useful when you diagnose performance and resource usage related problems in the networking stack.

16. **[speedometer][8]**: a console-based traffic monitor which visualizes the historical trend of an interface's RX/TX bandwidth usage with ncurses-drawn bar charts.



17. **[sysdig][9]**: a comprehensive system-level debugging tool with a unified interface for investigating different Linux subsystems. Its network monitoring module is capable of monitoring, either online or offline, various per-process/per-host networking statistics such as bandwidth usage, number of connections/requests, etc.

18. **tcptrack**: a TCP connection monitoring tool which displays information on active TCP connections, including source/destination IP addresses/ports, TCP state, and bandwidth usage.



19. **vnStat**: a command-line traffic monitor which maintains a historical view of RX/TX bandwidth usage (e.g., current, daily, monthly) on a per-interface basis. Running as a background daemon, it collects and stores interface statistics on bandwidth rate and total bytes transferred.

### Active Network Monitoring ###

Unlike the passive monitoring tools presented so far, tools in this category perform network monitoring by actively "injecting" probes into the network and collecting the corresponding responses. Monitoring targets include routing paths, available bandwidth, loss rates, delay, jitter, system settings or vulnerabilities, and so on.

20. **[dnsyo][10]**: a DNS monitoring tool which can conduct DNS lookups from open resolvers scattered across more than 1,500 different networks. It is useful when you check DNS propagation or troubleshoot DNS configuration.

21. **[iperf][11]**: a TCP/UDP bandwidth measurement utility which can measure the maximum available bandwidth between two end points. It measures available bandwidth by having two hosts pump out TCP/UDP probe traffic between them, either unidirectionally or bidirectionally. It is useful when you test network capacity or tune the parameters of the network stack. A variant called [netperf][12] exists with more features and better statistics.

22. **[netcat][13]/socat**: versatile network debugging tools capable of reading from, writing to, or listening on TCP/UDP sockets. They are often used alongside other programs or scripts for backend network transfer or port listening.

23. **nmap**: a command-line port scanning and network discovery utility. It relies on a number of TCP/UDP based scanning techniques to detect open ports, live hosts, or operating systems on the local network. It is useful when you audit local hosts for vulnerabilities or build a host map for maintenance purposes. [zmap][14] is an alternative scanning tool with Internet-wide scanning capability.

24. **ping**: a network testing tool which works by exchanging ICMP echo and reply packets with a remote host. It is useful when you measure the round-trip time (RTT) delay and loss rate of a routing path, as well as test the status or firewall rules of a remote system. Variations of ping exist with a fancier interface (e.g., [noping][15]), multi-protocol support (e.g., [hping][16]) or parallel probing capability (e.g., [fping][17]).



25. **[sprobe][18]**: a command-line tool that heuristically infers the bottleneck bandwidth between a local host and any arbitrary remote IP address. It uses TCP three-way handshake tricks to estimate the bottleneck bandwidth. It is useful when troubleshooting wide-area network performance and routing-related problems.

26. **traceroute**: a network discovery tool which reveals a layer-3 routing/forwarding path from a local host to a remote host. It works by sending TTL-limited probe packets and collecting ICMP responses from intermediate routers. It is useful when troubleshooting slow network connections or routing-related problems. Variations of traceroute exist with better RTT statistics (e.g., [mtr][19]).

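The netcat/socat pattern — one side listening on a TCP socket, the other connecting and exchanging bytes — can be sketched in a few lines of Python (a toy localhost demo, not a replacement for the tools themselves):

```python
import socket
import threading

def shout_server(server_sock):
    """Accept one connection, read some bytes, reply uppercased.
    The trivial transformation makes the round trip observable."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

# Listening side (what `nc -l` does): bind to port 0 so the OS
# picks a free port, keeping the example self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=shout_server, args=(server,), daemon=True).start()

# Connecting side (what `nc host port` does): send and receive bytes.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
assert reply == b"HELLO"
```
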
### Application Log Parsing ###

In this category, network monitoring is targeted at a specific server application (e.g., a web server or database server). Network traffic generated or consumed by a server application is monitored by analyzing its log file. Unlike the network-level monitors presented in earlier categories, tools in this category analyze and monitor network traffic at the application level.
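A miniature version of this log-driven approach shows the mechanism such tools build on; the sample log lines below are invented, and only a single "top requests" metric is computed.

```shell
# Tally the most-requested paths from (made-up) common-log-format lines;
# with a real server you would feed the actual access.log instead.
printf '%s\n' \
  '1.2.3.4 - - [10/May/2015:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 512' \
  '5.6.7.8 - - [10/May/2015:13:55:36 +0000] "GET /about.html HTTP/1.1" 200 256' \
  '1.2.3.4 - - [10/May/2015:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 512' |
awk '{print $7}' | sort | uniq -c | sort -rn
```

In the common log format, `$7` is the request path; `sort | uniq -c | sort -rn` ranks the paths by request count, printing `/index.html` first with a count of 2.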
27. **[GoAccess][20]**: a console-based interactive viewer for Apache and Nginx web server traffic. Based on access log analysis, it presents real-time statistics for a number of metrics, including daily visits, top requests, client operating systems, client locations and client browsers, in a scrollable view.



28. **[mtop][21]**: a command-line MySQL/MariaDB server monitor which visualizes the most expensive queries and the current database server load. It is useful when you optimize MySQL server performance and tune server configurations.



29. **[ngxtop][22]**: a traffic monitoring tool for the Nginx and Apache web servers, which visualizes web server traffic in a top-like interface. It works by parsing a web server's access log file and collecting traffic statistics for individual destinations or requests.
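Per-status-code counts, another metric these viewers chart, reduce to the same pattern; the log lines here are again invented examples.

```shell
# Count requests per HTTP status code from sample common-log-format lines.
# In that format the status code is the second-to-last field, $(NF-1).
printf '%s\n' \
  '1.2.3.4 - - [10/May/2015:13:55:36 +0000] "GET / HTTP/1.1" 200 512' \
  '5.6.7.8 - - [10/May/2015:13:55:37 +0000] "GET /missing HTTP/1.1" 404 168' \
  '1.2.3.4 - - [10/May/2015:13:55:38 +0000] "GET / HTTP/1.1" 200 512' |
awk '{print $(NF-1)}' | sort | uniq -c | sort -rn
```

The dedicated tools add live tailing, aggregation windows and a curses interface on top of exactly this kind of field extraction.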
### Conclusion ###

In this article, I presented a wide variety of command-line network monitoring tools, ranging from the lowest packet-level monitors to the highest application-level network monitors. Knowing which tool does what is one thing, and choosing which tool to use is another, as no single tool can be a universal solution for your every need. A good system admin should be able to decide which tool is right for the circumstance at hand. Hopefully the list helps with that.

You are always welcome to improve the list with your comments!
--------------------------------------------------------------------------------

via: http://xmodulo.com/useful-command-line-network-monitors-linux.html

作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/nanni
[1]:http://www.monkey.org/~dugsong/dsniff/
[2]:http://xmodulo.com/monitor-http-traffic-command-line-linux.html
[3]:https://github.com/zorkian/mysql-sniffer
[4]:http://ngrep.sourceforge.net/
[5]:http://lcamtuf.coredump.cx/p0f3/
[6]:http://xmodulo.com/recommend/firewallbook
[7]:http://xmodulo.com/how-to-install-iftop-on-linux.html
[8]:https://excess.org/speedometer/
[9]:http://xmodulo.com/monitor-troubleshoot-linux-server-sysdig.html
[10]:http://xmodulo.com/check-dns-propagation-linux.html
[11]:https://iperf.fr/
[12]:http://www.netperf.org/netperf/
[13]:http://xmodulo.com/useful-netcat-examples-linux.html
[14]:https://zmap.io/
[15]:http://noping.cc/
[16]:http://www.hping.org/
[17]:http://fping.org/
[18]:http://sprobe.cs.washington.edu/
[19]:http://xmodulo.com/better-alternatives-basic-command-line-utilities.html#mtr_link
[20]:http://goaccess.io/
[21]:http://mtop.sourceforge.net/
[22]:http://xmodulo.com/monitor-nginx-web-server-command-line-real-time.html
@ -1,3 +1,5 @@

[Translating by DongShuaike]

Installing Cisco Packet Tracer in Linux
================================================================================


@ -194,4 +196,4 @@ via: http://www.unixmen.com/installing-cisco-packet-tracer-linux/

[1]:https://www.netacad.com/
[2]:https://www.dropbox.com/s/5evz8gyqqvq3o3v/Cisco%20Packet%20Tracer%206.1.1%20Linux.tar.gz?dl=0
[3]:http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html
[4]:https://www.netacad.com/
@ -1,3 +1,5 @@

[Translating by DongShuaike]

iptraf: A TCP/UDP Network Monitoring Utility
================================================================================
[iptraf][1] is an ncurses-based IP LAN monitor that generates various network statistics, including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others.
@ -1,71 +0,0 @@

Install Linux-Dash (Web Based Monitoring Tool) on Ubuntu 14.10
================================================================================

Linux Dash is a low-overhead monitoring web dashboard for a GNU/Linux machine. Simply drop in the app and go! Linux Dash's interface provides a detailed overview of all vital aspects of your server, including RAM and disk usage, network, installed software, users, and running processes. All information is organized into sections, and you can jump to a specific section using the buttons in the main toolbar. Linux Dash is not the most advanced monitoring tool out there, but it might be a good fit for users looking for a slick, lightweight, and easy-to-deploy application.

### Linux-Dash Features ###

- A beautiful web-based dashboard for monitoring server info
- Live, on-demand monitoring of RAM, load, uptime, disk allocation, users and many more system stats
- Drop-in install for servers with Apache2/nginx + PHP
- Click and drag to re-arrange widgets
- Support for a wide range of Linux server flavors

### List of Current Widgets ###

- General info
- Load Average
- RAM
- Disk Usage
- Users
- Software
- IP
- Internet Speed
- Online
- Processes
- Logs

### Install Linux-dash on Ubuntu Server 14.10 ###

First you need to make sure you have [Ubuntu LAMP server 14.10][1] installed. Now install the following packages:

    sudo apt-get install php5-json unzip

After the installation, this module will be enabled for Apache2, so you need to restart the Apache2 server using the following command:

    sudo service apache2 restart

Now you need to download the linux-dash package and install it:

    wget https://github.com/afaqurk/linux-dash/archive/master.zip
    unzip master.zip
    sudo mv linux-dash-master/ /var/www/html/linux-dash-master/

Now change the permissions using the following command:

    sudo chmod 755 /var/www/html/linux-dash-master/

Now go to http://serverip/linux-dash-master/ and you should see output similar to the following:





--------------------------------------------------------------------------------

via: http://www.ubuntugeek.com/install-linux-dash-web-based-monitoring-tool-on-ubntu-14-10.html

作者:[ruchi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.ubuntugeek.com/author/ubuntufix
[1]:http://www.ubuntugeek.com/step-by-step-ubuntu-14-10-utopic-unicorn-lamp-server-setup.html
@ -1,3 +1,5 @@

translating by createyuan

How to Test Your Internet Speed Bidirectionally from Command Line Using ‘Speedtest-CLI’ Tool
================================================================================
We always need to check the speed of the Internet connection at home and the office. What do we do for this? We go to websites like Speedtest.net and begin a test. The site loads JavaScript in the web browser, selects the best server based upon ping, and outputs the result. It also uses a Flash player to produce graphical results.

@ -129,4 +131,4 @@ via: http://www.tecmint.com/check-internet-speed-from-command-line-in-linux/

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/speedtest-mini-server-to-test-bandwidth-speed/
@ -1,147 +0,0 @@

Conky – The Ultimate X Based System Monitor Application
================================================================================
Conky is a system monitor application written in the C programming language and released under the GNU General Public License and BSD License. It is available for the Linux and BSD operating systems. The application is X (GUI) based and was originally forked from [Torsmo][1].

#### Features ####

- Simple user interface
- High degree of configurability
- Can show system stats using built-in objects (300+) as well as external scripts, either on the desktop or in its own container
- Low resource utilization
- Shows system stats for a wide range of system variables, including but not restricted to CPU, memory, swap, temperature, processes, disk, network, battery, email, system messages, music player, weather, breaking news, updates and more
- Available in the default installation of distributions like CrunchBang Linux and Pinguy OS

#### Lesser Known Facts about Conky ####

- The name Conky was derived from a Canadian television show.
- It has already been ported to the Nokia N900.
- It is no longer maintained officially.
### Conky Installation and Usage in Linux ###

Before we install Conky, we need to install packages like lm-sensors, curl and hddtemp using the following command:

    # apt-get install lm-sensors curl hddtemp

Time to detect sensors:

    # sensors-detect

**Note**: Answer ‘Yes‘ when prompted!

Check all the detected sensors:

    # sensors

#### Sample Output ####

    acpitz-virtual-0
    Adapter: Virtual device
    temp1: +49.5°C (crit = +99.0°C)

    coretemp-isa-0000
    Adapter: ISA adapter
    Physical id 0: +49.0°C (high = +100.0°C, crit = +100.0°C)
    Core 0: +49.0°C (high = +100.0°C, crit = +100.0°C)
    Core 1: +49.0°C (high = +100.0°C, crit = +100.0°C)

Conky can be installed from the repositories as well as compiled from source:

    # yum install conky [On RedHat systems]
    # apt-get install conky-all [On Debian systems]

**Note**: Before you install Conky on Fedora/CentOS, you must have enabled the [EPEL repository][2].

After Conky has been installed, just issue the following command to start it:

    $ conky &


Conky Monitor in Action
It will run Conky in a popup-like window. It uses the basic Conky configuration file located at /etc/conky/conky.conf.

You may want to integrate Conky with the desktop and won’t want a popup-like window every time. Here is what you need to do.

Copy the configuration file /etc/conky/conky.conf to your home directory and rename it as ‘`.conkyrc`‘. The dot (.) at the beginning ensures that the configuration file is hidden.

    $ cp /etc/conky/conky.conf /home/$USER/.conkyrc

Now restart Conky to apply the new changes:

    $ killall -SIGUSR1 conky


Conky Monitor Window

You may edit the Conky configuration file located in your home directory. The configuration file is very easy to understand.

Here is a sample configuration of Conky.


Conky Configuration

From the above window you can modify color, borders, size, scale, background, alignment and several other properties. By setting different alignments for different Conky windows, we can run more than one Conky script at a time.

**Using a script other than the default for Conky, and where to find scripts**

You may write your own Conky script or use one that is available on the Internet. We don’t suggest you use any script you find on the web, which can be potentially dangerous, unless you know what you are doing. However, a few famous threads and pages have Conky scripts that you can trust, as mentioned below.

- [http://ubuntuforums.org/showthread.php?t=281865][3]
- [http://conky.sourceforge.net/screenshots.html][4]

At the above URLs, you will find that every screenshot has a hyperlink which redirects to the script file.
#### Testing a Conky Script ####

Here I will be running a third-party Conky script on my Debian Jessie machine, to test:

    $ wget https://github.com/alexbel/conky/archive/master.zip
    $ unzip master.zip

Change the current working directory to the just-extracted directory:

    $ cd conky-master

Rename secrets.yml.example to secrets.yml:

    $ mv secrets.yml.example secrets.yml

Install Ruby before you run this (Ruby) script:

    $ sudo apt-get install ruby
    $ ruby starter.rb


Conky Fancy Look

**Note**: This script can be modified to show your current weather, temperature, etc.

If you want to start Conky at boot, add the one-liner below to Startup Applications:

    conky --pause 10

Save and exit.

And finally… such a lightweight and useful GUI eye-candy package is no longer in active development and is not officially maintained anymore. The last stable release was Conky 1.9.0, released on May 03, 2012. A thread on the Ubuntu forums where users share configurations has gone over 2k pages. (Link to forum: [http://ubuntuforums.org/showthread.php?t=281865/][5])

- [Conky Homepage][6]

That’s all for now. Keep connected. Keep commenting. Share your thoughts and configurations in the comments below.
--------------------------------------------------------------------------------

via: http://www.tecmint.com/install-conky-in-ubuntu-debian-fedora/

作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:http://torsmo.sourceforge.net/
[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
[3]:http://ubuntuforums.org/showthread.php?t=281865
[4]:http://conky.sourceforge.net/screenshots.html
[5]:http://ubuntuforums.org/showthread.php?t=281865/
[6]:http://conky.sourceforge.net/
@ -1,96 +0,0 @@

Translating by ZTinoZ
Install Inkscape - Open Source Vector Graphic Editor
================================================================================
Inkscape is an open source vector graphic editing tool which uses Scalable Vector Graphics (SVG), which makes it different from its competitors such as Xara X, CorelDRAW and Adobe Illustrator. SVG is a widely-deployed, royalty-free graphics format developed and maintained by the W3C SVG Working Group. It is a cross-platform tool which runs fine on Linux, Windows and Mac OS.

Inkscape development started in 2003. Inkscape's bug tracking system was hosted on SourceForge initially but was migrated to Launchpad afterwards. Its current latest stable version is 0.91. It is under continuous development and bug fixing, and we will be reviewing its prominent features and installation process in this article.

### Salient Features ###

Let's review the outstanding features of this application categorically.

#### Creating Objects ####

- Drawing freehand lines of different colors, sizes and shapes with the pencil tool, straight lines and curves with the Bezier (pen) tool, applying freehand calligraphic strokes with the calligraphy tool, etc.
- Creating, selecting, editing and formatting text through the text tool. Manipulating text in plain text boxes, on paths or in shapes
- Helps draw various shapes like rectangles, ellipses, circles, arcs, polygons, stars, spirals etc. and then resize, rotate and modify (turn sharp edges round) them
- Create and embed bitmaps with simple commands

#### Object manipulation ####

- Skewing, moving, scaling and rotating objects through interactive manipulations or by specifying the numeric values
- Performing raising and lowering Z-order operations
- Grouping and ungrouping objects to create a virtual scope for editing or manipulation
- Layers form a hierarchical tree and can be locked or rearranged for various manipulations
- Distribution and alignment commands
#### Fill and Stroke ####

- Copy/paste styles
- Pick Color tool
- Selecting colors on a continuous plot based on vectors of RGB, HSL, CMS, CMYK and a color wheel
- Gradient editor helps create and manage multi-stop gradients
- Define an image or selection and use it as a pattern fill
- Dashed strokes can be used with a few predefined dash patterns
- Beginning, middle and ending marks through path markers

#### Operation on Paths ####

- Node editing: moving nodes and Bezier handles, node alignment and distribution, etc.
- Boolean operations on paths (union, difference, intersection, etc.)
- Simplifying paths with variable levels or thresholds
- Path insetting and outsetting along with linked and offset objects
- Converting bitmap images into paths (color and monochrome paths) through path tracing

#### Text manipulation ####

- All installed outlined fonts can be used, even for right-to-left aligned objects
- Formatting text, letter spacing, line spacing or kerning
- Text on a path and in shapes, where both text and path or shapes can be edited or modified

#### Rendering ####

- Inkscape fully supports anti-aliased display, a technique that reduces or eliminates aliasing by shading the pixels along the border.
- Support for alpha transparency display and PNG export
### Install Inkscape on Ubuntu 14.04 and 14.10 ###

In order to install Inkscape on Ubuntu, we will first need to [add its stable Personal Package Archive][1] (PPA) to the Advanced Package Tool (APT) repository. Launch the terminal and run the following command to add the PPA:

    sudo add-apt-repository ppa:inkscape.dev/stable



Once the PPA has been added to the APT repository, we need to update it using the following command:

    sudo apt-get update



After updating the repository we are ready to install Inkscape, which is accomplished using the following command:

    sudo apt-get install inkscape



Congratulations, Inkscape is now installed and you are all set for image editing and making full use of this feature-rich application.



### Conclusion ###

Inkscape is a feature-rich graphic editing tool which empowers its users with state-of-the-art capabilities. It is an open source application which is freely available for installation and customization, and supports a wide range of file formats, including but not limited to JPEG, PNG, GIF and PDF. Visit its [official website][2] for more news and updates regarding this application.
--------------------------------------------------------------------------------

via: http://linoxide.com/tools/install-inkscape-open-source-vector-graphic-editor/

作者:[Aun Raza][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunrz/
[1]:https://launchpad.net/~inkscape.dev/+archive/ubuntu/stable
[2]:https://inkscape.org/en/
@ -1,3 +1,4 @@

Translating by ZTinoZ
15 Things to Do After Installing Ubuntu 15.04 Desktop
================================================================================
This tutorial is intended for beginners and covers some basic steps on what to do after you have installed Ubuntu 15.04 “Vivid Vervet” Desktop version on your computer in order to customize the system and install basic programs for daily usage.

@ -295,4 +296,4 @@ via: http://www.tecmint.com/things-to-do-after-installing-ubuntu-15-04-desktop/

[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.viber.com/en/products/linux
[2]:http://ubuntu-tweak.com/
@ -1,86 +0,0 @@

KDE Plasma 5.3 Released, Here’s How To Upgrade in Kubuntu 15.04
================================================================================
**KDE [has announced][1] the stable release of Plasma 5.3, which comes charged with a slate of new power management features.**

Having impressed and excited [with an earlier beta release in April][2], the latest update to the Plasma 5 desktop environment is now considered stable and ready for download.

Plasma 5.3 continues to refine and finesse the new-look KDE desktop. It sees plenty of feature additions for desktop users to enjoy, and the **almost 400 bug fixes** packed into it should also improve performance and overall stability.

### What’s New in Plasma 5.3 ###


Better Bluetooth Management in Plasma 5.3

While we touched on the majority of the **new features** [in Plasma 5.3 in an earlier article][3], many are worth reiterating.

**Enhanced power management** features and configuration options, including a **new battery applet, energy usage monitor** and **animated changes in screen brightness**, will help KDE last longer on portable devices.

Closing a laptop when an external monitor is connected no longer triggers ‘suspend’. This new behaviour is called ‘**cinema mode**‘ and comes enabled by default, but can be disabled using an option in power management settings.

**Bluetooth functionality is improved**, with a brand new panel applet making connecting and configuring paired bluetooth devices like smartphones, keyboards and speakers easier than ever.

Similarly, **trackpad configuration in KDE is easier** with Plasma 5.3 thanks to a new set-up and settings module.


Trackpad, Touchpad. Tomato, Tomayto.

For Plasma widget fans there is a new **Press and Hold** gesture. When enabled, this hides the settings handle that appears on mouseover, instead making it appear only when long-clicking on the widget.

On the topic of widget-y things, several **old Plasmoid favourites are reintroduced** with this release, including a useful system monitor, handy hard-drive stats and a comic reader.
### Learning More & Trying It Out ###



A full list of everything — and I mean everything — that is new and improved in Plasma 5.3 is listed [in the official changelog][4].

Live images that let you try Plasma 5.3 on a Kubuntu base **without affecting your own system** are available from the KDE community:

- [Download KDE Plasma Live Images][5]

If you need a super-stable system you can use these live images to try the features, but stick with the version of KDE that comes with your distribution on your main computer.

However, if you’re happy to experiment — read: can handle any package conflicts or system issues resulting from attempting to upgrade your desktop environment — you can.

### Install Plasma 5.3 in Kubuntu 15.04 ###



To **install Plasma 5.3 in Kubuntu 15.04** you need to add the Kubuntu Backports PPA, run the Software Updater tool and install any available updates.

The Kubuntu Backports PPA may also upgrade parts of the KDE Platform other than Plasma that are installed on your system, including KDE applications, frameworks and Kubuntu-specific configuration files.

Using the command line is by far the fastest way to upgrade to Plasma 5.3 in Kubuntu:

    sudo add-apt-repository ppa:kubuntu-ppa/backports
    sudo apt-get update && sudo apt-get dist-upgrade

After the upgrade process has completed, and assuming everything went well, you should reboot your computer.

If you’re using an alternative desktop environment, like LXDE, Unity or GNOME, you will need to install the Kubuntu desktop package (you’ll find it in the Ubuntu Software Centre) after running both of the commands above.

To downgrade to the stock version of Plasma in 15.04 you can use the PPA-Purge tool:

    sudo apt-get install ppa-purge
    sudo ppa-purge ppa:kubuntu-ppa/backports

Let us know how your upgrade/testing goes in the comments below, and don’t forget to mention the features you hope to see added to the Plasma 5 desktop next.
--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2015/04/kde-plasma-5-3-released-heres-how-to-upgrade-in-kubuntu-15-04

作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://www.kde.org/announcements/plasma-5.3.0.php
[2]:http://www.omgubuntu.co.uk/2015/04/beta-plasma-5-3-features
[3]:http://www.omgubuntu.co.uk/2015/04/beta-plasma-5-3-features
[4]:https://www.kde.org/announcements/plasma-5.2.2-5.3.0-changelog.php
[5]:https://community.kde.org/Plasma/Live_Images
@ -1,61 +0,0 @@

How To Install Visual Studio Code On Ubuntu
================================================================================


Microsoft has done the unexpected by [releasing Visual Studio Code][1] for all major desktop platforms, including Linux. If you are a web developer who happens to be using Ubuntu, you can **easily install Visual Studio Code in Ubuntu**.

We will be using [Ubuntu Make][2] for installing Visual Studio Code in Ubuntu. Ubuntu Make, previously known as Ubuntu Developer Tools Center, is a command line utility that allows you to easily install various development tools, languages and IDEs. You can easily [install Android Studio][3] and other popular IDEs, such as Eclipse, with Ubuntu Make. In this tutorial we shall see **how to install Visual Studio Code in Ubuntu with Ubuntu Make**.

### Install Microsoft Visual Studio Code in Ubuntu ###

Before installing Visual Studio Code, we need to install Ubuntu Make first. Though Ubuntu Make is available in the Ubuntu 15.04 repository, **you’ll need Ubuntu Make 0.7 for Visual Studio Code**. You can get the latest Ubuntu Make by using the official PPA. The PPA is available for Ubuntu 14.04, 14.10 and 15.04. Also, it **is only available for the 64-bit platform**.

Open a terminal and use the following commands to install Ubuntu Make via the official PPA:

    sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
    sudo apt-get update
    sudo apt-get install ubuntu-make

Once you have installed Ubuntu Make, use the command below to install Visual Studio Code:

    umake web visual-studio-code

You’ll be asked to provide a path where it will be installed:



After throwing a whole lot of terms and conditions at you, it will ask for your permission to install Visual Studio Code. Press ‘a’ at this screen:



Once you do that, it will start downloading and installing it. Once it is installed, you can see that the Visual Studio Code icon has already been locked to the Unity Launcher. Just click on it to run it. This is how Visual Studio Code looks in Ubuntu 15.04 Unity:



### Uninstall Visual Studio Code from Ubuntu ###

To uninstall Visual Studio Code, we’ll use the same command line tool, umake. Just use the following command in the terminal:

    umake web visual-studio-code --remove

If you do not want to use Ubuntu Make, you can install Visual Studio Code by downloading the files from Microsoft:

- [Download Visual Studio Code for Linux][4]

See how easy it is to install Visual Studio Code in Ubuntu, all thanks to Ubuntu Make. I hope this tutorial helped you. Feel free to drop a comment if you have any questions or suggestions.
--------------------------------------------------------------------------------

via: http://itsfoss.com/install-visual-studio-code-ubuntu/

作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://www.geekwire.com/2015/microsofts-visual-studio-expands-to-mac-and-linux-with-new-code-development-tool/
[2]:https://wiki.ubuntu.com/ubuntu-make
[3]:http://itsfoss.com/install-android-studio-ubuntu-linux/
[4]:https://code.visualstudio.com/Download
@ -1,183 +0,0 @@
|
||||
Useful Commands to Create Commandline Chat Server and Remove Unwanted Packages in Linux
|
||||
================================================================================
|
||||
Here we are with the next part of Linux Command Line Tips and Tricks. If you missed our previous post on Linux Tricks you may find it here.
|
||||
|
||||
- [5 Linux Command Line Tricks][1]
|
||||
|
||||
In this post we will be introducing 6 command Line tips namely create Linux Command line chat using Netcat command, perform addition of a column on the fly from the output of a command, remove orphan packages from Debian and CentOS, get local and remote IP from command Line, get colored output in terminal and decode various color code and last but not the least hash tags implementation in Linux command Line. Lets check them one by one.
|
||||
|
||||

|
||||
6 Useful Commandline Tricks and Tips
|
||||
|
||||
### 1. Create Linux Commandline Chat Server ###
|
||||
|
||||
We all have been using chat service since a long time. We are familiar with Google chat, Hangout, Facebook chat, Whatsapp, Hike and several other application and integrated chat services. Do you know Linux nc command can make your Linux box a chat server with just one line of command.
|
||||
What is nc command in Linux and what it does?
|
||||
|
||||
nc is the depreciation of Linux netcat command. The nc utility is often referred as Swiss army knife based upon the number of its built-in capabilities. It is used as debugging tool, investigation tool, reading and writing to network connection using TCP/UDP, DNS forward/reverse checking.
|
||||
|
||||
It is prominently used for port scanning, file transferring, backdoor and port listening. nc has the ability to use any local unused port and any local network source address.
|
||||
|
||||
Use the nc command (on the server with IP address 192.168.0.7) to create a command-line messaging server instantly.
|
||||
|
||||
$ nc -l -vv -p 11119
|
||||
|
||||
Explanation of the above command switches.
|
||||
|
||||
- -v : means Verbose
|
||||
- -vv : more verbose
|
||||
- -p : The local port Number
|
||||
|
||||
You may replace 11119 with any other local port number.
|
||||
|
||||
Next, on the client machine (IP address: 192.168.0.15), run the following command to initialize a chat session with the machine where the messaging server is running.
|
||||
|
||||
$ nc 192.168.0.7 11119
|
||||
|
||||

|
||||
|
||||
**Note**: You can terminate the chat session by hitting ctrl+c. Also note that nc chat is a one-to-one service.
|
||||
|
||||
### 2. How to Sum Values in a Column in Linux ###
|
||||
|
||||
How do you sum the numerical values of a column, generated as the output of a command, on the fly in the terminal?
|
||||
|
||||
The output of the ‘ls -l‘ command.
|
||||
|
||||
$ ls -l
|
||||
|
||||

|
||||
|
||||
Notice that the second column is numerical and represents the number of symbolic links, and the 5th column is numerical and represents the size of the file. Say we need to sum the values of the fifth column on the fly.
|
||||
|
||||
List the contents of the 5th column without printing anything else. We will use the ‘awk‘ command to do this. ‘$5‘ represents the 5th column.
|
||||
|
||||
$ ls -l | awk '{print $5}'
|
||||
|
||||

|
||||
|
||||
Now pipe that output into another awk invocation to print the sum of the 5th column.
|
||||
|
||||
$ ls -l | awk '{print $5}' | awk '{total = total + $1}END{print total}'
|
||||
|
||||

|
||||
|
||||
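The two awk stages can also be collapsed into a single pass. A quick sanity check, where the file names f1 and f2 are throwaway examples created just for the demonstration:

```shell
# create two files of known size, then sum the 5th column of ls -l in one awk pass
printf 'aaaa' > f1    # 4 bytes
printf 'bb'   > f2    # 2 bytes
ls -l f1 f2 | awk '{total += $5} END {print total}'
# prints 6
```

Listing the files explicitly avoids the "total" header line that `ls -l` prints for a directory.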
### 3. How to Remove Orphan Packages in Linux? ###
|
||||
|
||||
Orphan packages are packages that were installed as dependencies of another package and are no longer required once the original package is removed.
|
||||
|
||||
Say we installed a package gtprogram which depends on gtdependency. We can’t install gtprogram unless gtdependency is installed.
|
||||
|
||||
When we remove gtprogram, it won’t remove gtdependency by default. And if we don’t remove gtdependency, it will remain as an orphan package with no connection to any other package.
|
||||
|
||||
# yum autoremove [On RedHat Systems]
|
||||
|
||||

|
||||
|
||||
# apt-get autoremove [On Debian Systems]
|
||||
|
||||

|
||||
|
||||
You should always remove orphan packages to keep the Linux box loaded with just the necessary stuff and nothing else.
|
||||
|
||||
### 4. How to Get Local and Public IP Address of Linux Server ###
|
||||
|
||||
To get your local IP address, run the one-liner below.
|
||||
|
||||
$ ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:
|
||||
|
||||
You must have ifconfig installed; if not, install the required package with apt or yum. Here we pipe the output of ifconfig into grep to find the string “inet addr:”.
|
||||
|
||||
We know the ifconfig command is sufficient to output the local IP address, but it generates lots of other output too, and our concern here is to output only the local IP address and nothing else.
|
||||
|
||||
# ifconfig | grep "inet addr:"
|
||||
|
||||

|
||||
|
||||
Although the output is more customised now, we still need to filter out our local IP address only and nothing else. For this we will use awk to print only the second column, by piping it into the above script.
|
||||
|
||||
# ifconfig | grep "inet addr:" | awk '{print $2}'
|
||||
|
||||

|
||||
|
||||
It is clear from the above image that we have customised the output very much, but it is still not what we want: the loopback address 127.0.0.1 is still in the result.
|
||||
|
||||
We use the -v flag with grep, which prints only the lines that don’t match the pattern provided as an argument. Every machine has the same loopback address 127.0.0.1, so use grep -v to print the lines that don’t contain this string, by piping it with the above output.
|
||||
|
||||
# ifconfig | grep "inet addr" | awk '{print $2}' | grep -v '127.0.0.1'
|
||||
|
||||

|
||||
|
||||
We have almost generated the desired output; we just need to remove the string `(addr:)` from the beginning. We will use the cut command to print only field two. Field 1 and field 2 are not separated by a tab but by `(:)`, so we need to set the delimiter `(-d)` while piping the above output.
|
||||
|
||||
# ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:
|
||||
|
||||

|
||||
|
||||
Finally! The desired result has been generated.
|
||||
|
||||
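The grep -v and cut steps are easy to verify on canned input, independent of the machine's actual interfaces. The two addresses below are made up just for the demonstration:

```shell
# simulate two "inet addr:" lines and run the same filter chain on them
printf 'inet addr:127.0.0.1\ninet addr:192.168.0.7\n' \
    | grep -v '127.0.0.1' | cut -f2 -d:
# prints 192.168.0.7
```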
### 5. How to Color Linux Terminal ###
|
||||
|
||||
You might have seen colored output in the terminal, and you may already know how to enable or disable it. If not, you may follow the steps below.
|
||||
|
||||
In Linux every user has a `'.bashrc'` file; this file is used to handle your terminal output. Open and edit this file with the editor of your choice. Note that this file is hidden (a dot at the beginning of a file name means it is hidden).
|
||||
|
||||
$ vi /home/$USER/.bashrc
|
||||
|
||||
Make sure that the following lines are uncommented, i.e., that they don’t start with a #.
|
||||
|
||||
if [ -x /usr/bin/dircolors ]; then
|
||||
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dirc$
|
||||
alias ls='ls --color=auto'
|
||||
#alias dir='dir --color=auto'
|
||||
#alias vdir='vdir --color=auto'
|
||||
|
||||
alias grep='grep --color=auto'
|
||||
alias fgrep='fgrep --color=auto'
|
||||
alias egrep='egrep --color=auto'
|
||||
fi
|
||||
|
||||

|
||||
|
||||
Once done, save and exit. To make the changes take effect, log out and log in again.
|
||||
|
||||
Now you will see files and folders listed in various colors based upon the type of file. To decode the color codes, run the command below.
|
||||
|
||||
$ dircolors -p
|
||||
|
||||
Since the output is too long, let’s pipe it through the less command so that we get the output one screen at a time.
|
||||
|
||||
$ dircolors -p | less
|
||||
|
||||

|
||||
|
||||
### 6. How to Hash Tag Linux Commands and Scripts ###
|
||||
|
||||
We use hash tags on Twitter, Facebook and Google Plus (and maybe other places I have not noticed). These hash tags make it easier for others to search for them. Very few know that we can use hash tags on the Linux command line.
|
||||
|
||||
We already know that `#` in configuration files and most programming languages is treated as a comment and is excluded from execution.
|
||||
|
||||
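Because the shell treats everything after `#` as a comment, a trailing tag changes nothing about what the command does; for example:

```shell
# the trailing hash tag is ignored at execution time
echo "hello" #mytag
# prints hello
```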
Run a command and then add a hash tag to it so that we can find it later. Say we have the long script that was executed in point 4 above; now add a hash tag to it. We know ifconfig can be run by sudo or root, hence we act as root.
|
||||
|
||||
# ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d: #myip
|
||||
|
||||
The command above has been hash tagged with ‘myip‘. Now search for the hash tag in reverse-i-search (press ctrl+r) in the terminal and type ‘myip‘. You may execute it from there as well.
|
||||
|
||||

|
||||
|
||||
You may create hash tags for as many commands as you like and find them later using reverse-i-search.
|
||||
|
||||
That’s all for now. We have been working hard to produce interesting and informative content for you. What do you think of how we are doing? Any suggestion is welcome. You may comment in the box below. Keep connected! Kudos.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/linux-commandline-chat-server-and-remove-unwanted-packages/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/5-linux-command-line-tricks/
|
@ -1,40 +0,0 @@
|
||||
Bodhi Linux Introduces Moksha Desktop
|
||||
================================================================================
|
||||

|
||||
|
||||
Ubuntu based lightweight Linux distribution [Bodhi Linux][1] is working on a desktop environment of its own. This new desktop environment will be called Moksha (Sanskrit for ‘complete freedom’). Moksha will be replacing the usual [Enlightenment desktop environment][2].
|
||||
|
||||
### Why Moksha instead of Enlightenment? ###
|
||||
|
||||
Jeff Hoogland of Bodhi Linux [says][3] that he has been unhappy with the newer versions of Enlightenment in the recent past. Until E17, Enlightenment was very stable and complemented the needs of a lightweight Linux OS well, but E18 was so full of bugs that Bodhi Linux skipped it altogether.
|
||||
|
||||
While the latest [Bodhi Linux 3.0.0 release][4] uses E19 (except the legacy mode, meant for older hardware, which still uses E17), Jeff is not happy with E19 either. He writes:
|
||||
|
||||
> On top of the performance issues, E19 did not allow for me personally to have the same workflow I enjoyed under E17 due to features it no longer had. Because of this I had changed to using the E17 on all of my Bodhi 3 computers – even my high end ones. This got me to thinking how many of our existing Bodhi users felt the same way, so I [opened a discussion about it on our user forums][5].
|
||||
|
||||
### Moksha is a continuation of the E17 desktop ###
|
||||
|
||||
Moksha will be a continuation of Bodhi’s favorite E17 desktop. Jeff further mentions:
|
||||
|
||||
> We will start by integrating all of the Bodhi changes we have simply been patching into the source code over the years and fixing the few issues the desktop has. Once this is done we will begin back porting a few of the more useful features E18 and E19 introduced to the Enlightenment desktop and finally, we will introduce a few new things we think will improve the end user experience.
|
||||
|
||||
### When will Moksha release? ###
|
||||
|
||||
The next update to Bodhi will be Bodhi 3.1.0, in August this year. This new release will bring Moksha on all of its default ISOs. Let’s wait and see whether Moksha turns out to be a good decision or not.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/bodhi-linux-introduces-moksha-desktop/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://www.bodhilinux.com/
|
||||
[2]:https://www.enlightenment.org/
|
||||
[3]:http://www.bodhilinux.com/2015/04/28/introducing-the-moksha-desktop/
|
||||
[4]:http://itsfoss.com/bodhi-linux-3/
|
||||
[5]:http://forums.bodhilinux.com/index.php?/topic/12322-e17-vs-e19-which-are-you-using-and-why/
|
@ -1,459 +0,0 @@
|
||||
First Step Guide for Learning Shell Scripting
|
||||
================================================================================
|
||||

|
||||
|
||||
Usually when people say "shell scripting" they have in mind bash, ksh, sh, ash or a similar Linux/Unix scripting language. Scripting is another way to communicate with the computer. Using a graphical window interface (no matter whether Windows or Linux), a user can move the mouse and click on various objects like buttons, lists, check boxes and so on. But this way is very inconvenient, requiring the user's participation and accuracy every time he would like the computer/server to do the same tasks (let's say converting photos, or downloading new movies, mp3s, etc). To make all these things easily accessible and automated, we can use shell scripts.
|
||||
|
||||
Some programming languages, like Pascal, FoxPro, C and Java, need to be compiled before they can be executed. They need an appropriate compiler to make our code do some job.
|
||||
|
||||
Other programming languages, like PHP, JavaScript and VisualBasic, do not need a compiler. They need an interpreter instead, so we can run our program without compiling the code.
|
||||
|
||||
Shell scripts are also interpreted, but a shell script is usually used to call external compiled programs, then capture their outputs and exit codes and act accordingly.
|
||||
|
||||
One of the most popular shell scripting languages in the Linux world is bash. And I think (this is my own opinion) this is because the bash shell lets the user easily navigate through the history of previously executed commands by default, in contrast to ksh, which requires some tuning in .profile or remembering a "magic" key combination to walk through history and amend commands.
|
||||
|
||||
OK, I think this is enough of an introduction, and I leave it to you to judge which environment is most comfortable for you. From now on I will speak only about bash and scripting. In the following examples I will use CentOS 6.6 and bash-4.1.2. Just make sure you have the same or a greater version.
|
||||
|
||||
### Shell Script Streams ###
|
||||
|
||||
Shell scripting is something like a conversation among several persons. Just imagine that all commands are like persons who are able to do something if you ask them properly. Let's say you would like to write a document. First of all you need the paper; then you need to dictate the content to someone to write it down; and finally you would like to store it somewhere. Or say you would like to build a house: first you will ask the appropriate persons to clean up the space. After they say "it's done", other engineers can build the walls for you. And finally, when the engineers also say "it's done", you can ask the painters to paint your house. And what would happen if you asked the painters to paint your walls before they were built? I think they would start to complain. Almost all commands, like those persons, can speak: if they did their job without any issues, they speak to "standard output"; if they can't do what you are asking, they speak to "standard error". Finally, all commands listen to you through "standard input".
|
||||
|
||||
A quick example: when you open a Linux terminal and write some text, you are speaking to bash through "standard input". So let's ask the bash shell **who am i**:
|
||||
|
||||
[root@localhost ~]# who am i <--- you speak to the bash shell through standard input
|
||||
root pts/0 2015-04-22 20:17 (192.168.1.123) <--- the bash shell answers you through standard output
|
||||
|
||||
Now let's ask bash something it will not understand:
|
||||
|
||||
[root@localhost ~]# blablabla <--- and again, you speak through standard input
|
||||
-bash: blablabla: command not found <--- bash complains through standard error
|
||||
|
||||
The first word before the ":" is usually the command that is complaining to you. Actually each of these streams has its own index number:
|
||||
|
||||
- standard input (**stdin**) - 0
|
||||
- standard output (**stdout**) - 1
|
||||
- standard error (**stderr**) - 2
|
||||
|
||||
If you really would like to know to which output a command said something, you need to redirect that speech to a file (use the "greater than" > symbol after the command and the stream index):
|
||||
|
||||
[root@localhost ~]# blablabla 1> output.txt
|
||||
-bash: blablabla: command not found
|
||||
|
||||
In this example we tried to redirect stream 1 (**stdout**) to a file named output.txt. Let's look at the content of that file. We use the command cat for that:
|
||||
|
||||
[root@localhost ~]# cat output.txt
|
||||
[root@localhost ~]#
|
||||
|
||||
Seems it is empty. OK, now let's try to redirect stream 2 (**stderr**):
|
||||
|
||||
[root@localhost ~]# blablabla 2> error.txt
|
||||
[root@localhost ~]#
|
||||
|
||||
OK, we see that the complaints are gone. Let's check the file:
|
||||
|
||||
[root@localhost ~]# cat error.txt
|
||||
-bash: blablabla: command not found
|
||||
[root@localhost ~]#
|
||||
|
||||
Exactly! We see that all the complaints were recorded to the error.txt file.
|
||||
|
||||
Sometimes commands produce **stdout** and **stderr** simultaneously. To redirect them to separate files we can use the following syntax:
|
||||
|
||||
command 1>out.txt 2>err.txt
|
||||
|
||||
To shorten this syntax a bit, we can skip the "1", as the **stdout** stream is redirected by default:
|
||||
|
||||
command >out.txt 2>err.txt
|
||||
|
||||
OK, let's try to do something "bad": let's remove file1 and folder1 with the rm command:
|
||||
|
||||
[root@localhost ~]# rm -vf folder1 file1 > out.txt 2>err.txt
|
||||
|
||||
Now check our output files:
|
||||
|
||||
[root@localhost ~]# cat out.txt
|
||||
removed `file1'
|
||||
[root@localhost ~]# cat err.txt
|
||||
rm: cannot remove `folder1': Is a directory
|
||||
[root@localhost ~]#
|
||||
|
||||
As we see, the streams were separated into different files. Sometimes this is not handy, as usually we want to see the sequence in which the errors appeared, before or after some actions. For that we can redirect both streams to the same file:
|
||||
|
||||
command >>out_err.txt 2>>out_err.txt
|
||||
|
||||
Note: Please notice that I use ">>" instead of ">". It allows us to append to the file instead of overwriting it.
|
||||
|
||||
We can redirect one stream to another:
|
||||
|
||||
command >out_err.txt 2>&1
|
||||
|
||||
Let me explain: all stdout of the command will be redirected to out_err.txt, and stderr will be redirected to the 1st stream, which (as I already explained above) will be redirected to the same file. Let's see an example:
|
||||
|
||||
[root@localhost ~]# rm -fv folder2 file2 >out_err.txt 2>&1
|
||||
[root@localhost ~]# cat out_err.txt
|
||||
rm: cannot remove `folder2': Is a directory
|
||||
removed `file2'
|
||||
[root@localhost ~]#
|
||||
|
||||
Looking at the combined output, we can state that first of all the **rm** command tried to remove folder2 and did not succeed, as Linux requires the **-r** key for the **rm** command to allow removing folders. Second, file2 was removed. By providing the **-v** (verbose) key to the **rm** command, we ask it to inform us about each removed file or folder.
|
||||
|
||||
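As a side note, bash (though not plain sh) also offers `&>` as a shorthand for `>file 2>&1`. A small sketch, assuming a bash shell; both.txt is a throwaway example name:

```shell
# bash-only shorthand: &> sends stdout and stderr to the same file
# (equivalent to >both.txt 2>&1)
ls /nonexistent-path &> both.txt
grep -c 'nonexistent-path' both.txt   # the error message landed in the file
# prints 1
```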
This is almost all you need to know about redirection. I say almost, because there is one more very important redirection, called "piping". Using the | (pipe) symbol we usually redirect the **stdout** stream.
|
||||
|
||||
Let's say we have a text file:
|
||||
|
||||
[root@localhost ~]# cat text_file.txt
|
||||
This line does not contain H e l l o word
|
||||
This lilne contains Hello
|
||||
This also containd Hello
|
||||
This one no due to HELLO all capital
|
||||
Hello bash world!
|
||||
|
||||
and we need to find the lines in it with the word "Hello". Linux has the **grep** command for that:
|
||||
|
||||
[root@localhost ~]# grep Hello text_file.txt
|
||||
This lilne contains Hello
|
||||
This also containd Hello
|
||||
Hello bash world!
|
||||
[root@localhost ~]#
|
||||
|
||||
This is OK when we have a file and would like to search in it. But what do we do if we need to find something in the output of another command? Yes, of course, we can redirect the output to a file and then look in it:
|
||||
|
||||
[root@localhost ~]# fdisk -l>fdisk.out
|
||||
[root@localhost ~]# grep "Disk /dev" fdisk.out
|
||||
Disk /dev/sda: 8589 MB, 8589934592 bytes
|
||||
Disk /dev/mapper/VolGroup-lv_root: 7205 MB, 7205814272 bytes
|
||||
Disk /dev/mapper/VolGroup-lv_swap: 855 MB, 855638016 bytes
|
||||
[root@localhost ~]#
|
||||
|
||||
If you are going to grep something containing white space, enclose it in " quotes!
|
||||
|
||||
Note: The fdisk command shows information about Linux OS disk drives.
|
||||
|
||||
As we see, this way is not very handy, as we would soon litter the place with temporary files. For that we can use pipes. They allow us to redirect one command's **stdout** to another command's **stdin** stream:
|
||||
|
||||
[root@localhost ~]# fdisk -l | grep "Disk /dev"
|
||||
Disk /dev/sda: 8589 MB, 8589934592 bytes
|
||||
Disk /dev/mapper/VolGroup-lv_root: 7205 MB, 7205814272 bytes
|
||||
Disk /dev/mapper/VolGroup-lv_swap: 855 MB, 855638016 bytes
|
||||
[root@localhost ~]#
|
||||
|
||||
As we see, we get the same result without any temporary files. We have redirected **fdisk's stdout** to **grep's stdin**.
|
||||
|
||||
**Note** : Pipe redirection is always from left to right.
|
||||
|
||||
There are several other redirections but we will speak about them later.
|
||||
|
||||
### Displaying custom messages in the shell ###
|
||||
|
||||
As we already know, communication with and within the shell usually goes as a dialog. So let's create a real script that will also speak with us. It will let you learn some simple commands and better understand the scripting concept.
|
||||
|
||||
Imagine we work at some company as help desk managers and we would like to create a shell script to register call information: phone number, user name and a brief description of the issue. We are going to store it in the plain text file data.txt for future statistics. The script itself should work in a dialog fashion to make life easy for help desk workers. So first of all we need to display the questions. For displaying messages there are the echo and printf commands. Both of them display messages, but printf is more powerful, as we can nicely format the output to align it to the right or left or leave dedicated space for a message. Let's start with a simple one. For file creation please use your favorite text editor (kate, nano, vi, ...) and create a file named note.sh with this command inside:
|
||||
|
||||
echo "Phone number ?"
|
||||
|
||||
### Script execution ###
|
||||
|
||||
After you have saved the file, we can run it with the bash command, providing our file as an argument:
|
||||
|
||||
[root@localhost ~]# bash note.sh
|
||||
Phone number ?
|
||||
|
||||
Actually, executing the script this way is not handy. It would be more comfortable to execute the script without the **bash** command as a prefix. To make it executable we can use the **chmod** command:
|
||||
|
||||
[root@localhost ~]# ls -la note.sh
|
||||
-rw-r--r--. 1 root root 22 Apr 23 20:52 note.sh
|
||||
[root@localhost ~]# chmod +x note.sh
|
||||
[root@localhost ~]# ls -la note.sh
|
||||
-rwxr-xr-x. 1 root root 22 Apr 23 20:52 note.sh
|
||||
[root@localhost ~]#
|
||||
|
||||

|
||||
|
||||
**Note**: The ls command displays the files in the current folder. By adding the -la keys it displays a bit more information about the files.
|
||||
|
||||
As we see, before the **chmod** command was executed, the script had only read (r) and write (w) permissions. After **chmod +x** it got execute (x) permission. (More details about permissions I am going to describe in the next article.) Now we can simply run it:
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number ?
|
||||
|
||||
Before the script name I have added the ./ combination. In the Unix world, . (dot) means the current position (current folder), and / (slash) is the folder separator. (In Windows we use \ (backslash) for the same.) So this whole combination means: "from the current folder, execute the note.sh script". I think it will be clearer for you if I run this script with the full path:
|
||||
|
||||
[root@localhost ~]# /root/note.sh
|
||||
Phone number ?
|
||||
[root@localhost ~]#
|
||||
|
||||
It also works.
|
||||
|
||||
Everything would be OK if all Linux users had the same default shell. If we simply execute this script, the default user shell will be used to parse the script content and run the commands. Different shells have slightly different syntax, internal commands, etc. So to guarantee that **bash** will be used for our script, we should add **#!/bin/bash** as the first line. In this way the default user shell will call **/bin/bash**, and only then will the shell commands in the script be executed:
|
||||
|
||||
[root@localhost ~]# cat note.sh
|
||||
#!/bin/bash
|
||||
echo "Phone number ?"
|
||||
|
||||
Only now can we be 100% sure that **bash** will be used to parse our script content. Let's move on.
|
||||
|
||||
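The whole cycle, from shebang to execution, can be sketched in a few lines (hello.sh is a throwaway example name):

```shell
# write a minimal script with a shebang line, make it executable and run it
printf '#!/bin/bash\necho "hello from bash"\n' > hello.sh
chmod +x hello.sh
./hello.sh
# prints: hello from bash
```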
### Reading the inputs ###
|
||||
|
||||
After we have displayed the message, the script should wait for an answer from the user. For that there is the **read** command:
|
||||
|
||||
#!/bin/bash
|
||||
echo "Phone number ?"
|
||||
read phone
|
||||
|
||||
After execution, the script will wait for the user's input until they press the [ENTER] key:
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number ?
|
||||
12345 <--- here is my input
|
||||
[root@localhost ~]#
|
||||
|
||||
Everything you have input will be stored in the variable **phone**. To display the value of the variable we can use the same **echo** command:
|
||||
|
||||
[root@localhost ~]# cat note.sh
|
||||
#!/bin/bash
|
||||
echo "Phone number ?"
|
||||
read phone
|
||||
echo "You have entered $phone as a phone number"
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number ?
|
||||
123456
|
||||
You have entered 123456 as a phone number
|
||||
[root@localhost ~]#
|
||||
|
||||
In the **bash** shell we use the **$** (dollar) sign to indicate a variable, except when reading into a variable and in a few other cases (which I will describe later).
|
||||
|
||||
OK, now we are ready to add the remaining questions:
|
||||
|
||||
#!/bin/bash
|
||||
echo "Phone number?"
|
||||
read phone
|
||||
echo "Name?"
|
||||
read name
|
||||
echo "Issue?"
|
||||
read issue
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number?
|
||||
123
|
||||
Name?
|
||||
Jim
|
||||
Issue?
|
||||
script is not working.
|
||||
[root@localhost ~]#
|
||||
|
||||
### Using stream redirection ###
|
||||
|
||||
Perfect! All that is left is to redirect everything to the file data.txt. As the field separator we are going to use the / (slash) symbol.
|
||||
|
||||
**Note**: You can choose any separator you think is best, but be sure the content will not have these symbols inside. That would cause extra fields in the line.
|
||||
|
||||
Do not forget to use ">>" instead of ">", as we would like to append the output to the end of the file!
|
||||
|
||||
[root@localhost ~]# tail -2 note.sh
|
||||
read issue
|
||||
echo "$phone/$name/$issue">>data.txt
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number?
|
||||
987
|
||||
Name?
|
||||
Jimmy
|
||||
Issue?
|
||||
Keybord issue.
|
||||
[root@localhost ~]# cat data.txt
|
||||
987/Jimmy/Keybord issue.
|
||||
[root@localhost ~]#
|
||||
|
||||
**Note** : The command **tail** displays the last **-n** lines of the file.
|
||||
|
||||
Bingo. Let's run it once again:
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number?
|
||||
556
|
||||
Name?
|
||||
Janine
|
||||
Issue?
|
||||
Mouse was broken.
|
||||
[root@localhost ~]# cat data.txt
|
||||
987/Jimmy/Keybord issue.
|
||||
556/Janine/Mouse was broken.
|
||||
[root@localhost ~]#
|
||||
|
||||
Our file is growing. Let's add the date at the front of each line. This will be useful later when playing with the data while calculating statistics. For that we can use the date command and give it a format, as I do not like the default one:
|
||||
|
||||
[root@localhost ~]# date
|
||||
Thu Apr 23 21:33:14 EEST 2015 <---- default output of the date command
|
||||
[root@localhost ~]# date "+%Y.%m.%d %H:%M:%S"
|
||||
2015.04.23 21:33:18 <---- formatted output
|
||||
|
||||
There are several ways to read a command's output into a variable. In this simple situation we will use ` (back quotes):
|
||||
|
||||
[root@localhost ~]# cat note.sh
|
||||
#!/bin/bash
|
||||
now=`date "+%Y.%m.%d %H:%M:%S"`
|
||||
echo "Phone number?"
|
||||
read phone
|
||||
echo "Name?"
|
||||
read name
|
||||
echo "Issue?"
|
||||
read issue
|
||||
echo "$now/$phone/$name/$issue">>data.txt
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number?
|
||||
123
|
||||
Name?
|
||||
Jim
|
||||
Issue?
|
||||
Script hanging.
|
||||
[root@localhost ~]# cat data.txt
|
||||
2015.04.23 21:38:56/123/Jim/Script hanging.
|
||||
[root@localhost ~]#
|
||||
|
||||
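Back quotes work fine here; as a side note, the `$( )` form of command substitution is equivalent and nests more cleanly:

```shell
# $( ) is equivalent to back quotes for command substitution
now=$(date "+%Y.%m.%d %H:%M:%S")
echo "$now"
```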
Hmmm... our script looks a bit ugly. Let's prettify it a bit. If you read the manual for the **read** command, you will find that it can also display messages. For this we use the -p key and a message:
|
||||
|
||||
[root@localhost ~]# cat note.sh
|
||||
#!/bin/bash
|
||||
now=`date "+%Y.%m.%d %H:%M:%S"`
|
||||
read -p "Phone number: " phone
|
||||
read -p "Name: " name
|
||||
read -p "Issue: " issue
|
||||
echo "$now/$phone/$name/$issue">>data.txt
|
||||
|
||||
You can find a lot of interesting information about each command directly from the console. Just type: **man read, man echo, man date, man ...**
|
||||
|
||||
Agree, it looks much better!
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number: 321
|
||||
Name: Susane
|
||||
Issue: Mouse was stolen
|
||||
[root@localhost ~]# cat data.txt
|
||||
2015.04.23 21:38:56/123/Jim/Script hanging.
|
||||
2015.04.23 21:43:50/321/Susane/Mouse was stolen
|
||||
[root@localhost ~]#
|
||||
|
||||
And the cursor is right after the message (not on a new line), which makes more sense.
|
||||
### Loop ###
|
||||
|
||||
Time to improve our script. If the user works with calls all day, it is not very handy to run it each time. Let's put all these actions in a never-ending loop:
|
||||
|
||||
[root@localhost ~]# cat note.sh
|
||||
#!/bin/bash
|
||||
while true
|
||||
do
|
||||
read -p "Phone number: " phone
|
||||
now=`date "+%Y.%m.%d %H:%M:%S"`
|
||||
read -p "Name: " name
|
||||
read -p "Issue: " issue
|
||||
echo "$now/$phone/$name/$issue">>data.txt
|
||||
done
|
||||
|
||||
I have swapped the **read phone** and **now=`date** lines. This is because I would like to get the time right after the phone number is entered. If I left it as the first line in the loop, the **now** variable would get the time right after the previous data was stored in the file. And that is not good, as the next call could come 20 minutes later or so.
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number: 123
|
||||
Name: Jim
|
||||
Issue: Script still not works.
|
||||
Phone number: 777
|
||||
Name: Daniel
|
||||
Issue: I broke my monitor
|
||||
Phone number: ^C
|
||||
[root@localhost ~]# cat data.txt
|
||||
2015.04.23 21:38:56/123/Jim/Script hanging.
|
||||
2015.04.23 21:43:50/321/Susane/Mouse was stolen
|
||||
2015.04.23 21:47:55/123/Jim/Script still not works.
|
||||
2015.04.23 21:48:16/777/Daniel/I broke my monitor
|
||||
[root@localhost ~]#
|
||||
|
||||
NOTE: You can exit the never-ending loop by pressing the [Ctrl]+[C] keys. The shell will display ^C.
|
||||
|
||||
### Using pipe redirection ###
|
||||
|
||||
Let's add more functionality to our "Frankenstein". I would like the script to display some statistics after each call. Let's say we want to see how many times each number has called us. For that we should cat the data.txt file:
|
||||
|
||||
[root@localhost ~]# cat data.txt
|
||||
2015.04.23 21:38:56/123/Jim/Script hanging.
|
||||
2015.04.23 21:43:50/321/Susane/Mouse was stolen
|
||||
2015.04.23 21:47:55/123/Jim/Script still not works.
|
||||
2015.04.23 21:48:16/777/Daniel/I broke my monitor
|
||||
2015.04.23 22:02:14/123/Jimmy/New script also not working!!!
|
||||
[root@localhost ~]#
|
||||
|
||||
Now we can redirect all this output to the **cut** command to chop each line into chunks (our delimiter is "/") and print the second field:
|
||||
|
||||
[root@localhost ~]# cat data.txt | cut -d"/" -f2
|
||||
123
|
||||
321
|
||||
123
|
||||
777
|
||||
123
|
||||
[root@localhost ~]#
|
||||
|
||||
Now we can redirect this output to the **sort** command:
|
||||
|
||||
[root@localhost ~]# cat data.txt | cut -d"/" -f2|sort
|
||||
123
|
||||
123
|
||||
123
|
||||
321
|
||||
777
|
||||
[root@localhost ~]#
|
||||
|
||||
and keep only the unique lines with **uniq**. To count the unique entries, just add the **-c** key to the **uniq** command:
|
||||
|
||||
[root@localhost ~]# cat data.txt | cut -d"/" -f2 | sort | uniq -c
|
||||
3 123
|
||||
1 321
|
||||
1 777
|
||||
[root@localhost ~]#
|
||||
|
||||
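The same counts can be produced in a single awk pass over the file, though the line order may differ from that of sort | uniq -c:

```shell
# count occurrences of the 2nd "/"-separated field in one awk pass
cut -d"/" -f2 data.txt | awk '{c[$1]++} END {for (n in c) print c[n], n}'
```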
Just add this to the end of our loop:
|
||||
|
||||
    #!/bin/bash
    while true
    do
        read -p "Phone number: " phone
        now=`date "+%Y.%m.%d %H:%M:%S"`
        read -p "Name: " name
        read -p "Issue: " issue
        echo "$now/$phone/$name/$issue">>data.txt
        echo "===== We got calls from ====="
        cat data.txt | cut -d"/" -f2 | sort | uniq -c
        echo "--------------------------------"
    done
Run it:

    [root@localhost ~]# ./note.sh
    Phone number: 454
    Name: Malini
    Issue: Windows license expired.
    ===== We got calls from =====
    3 123
    1 321
    1 454
    1 777
    --------------------------------
    Phone number: ^C
The current scenario goes through well-known steps:

- Display a message
- Get user input
- Store values to the file
- Do something with the stored data

But what if the user has several responsibilities, and sometimes needs to input data, sometimes to do statistics calculations, and sometimes to find something in the stored data? For that we need to implement switches/cases. In the next article I will show you how to use them and how to nicely format the output, which is useful when "drawing" tables in the shell.
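As a small teaser, here is a minimal sketch of such a switch (the action names are made up for illustration; the temporary data file only exists so the sketch is self-contained):

```shell
#!/bin/bash
# Toy dispatcher: choose a behavior based on an action keyword.
# "stats" reuses the counting pipeline from above on a sample file.
workdir=$(mktemp -d)
printf '2015.04.23 21:38:56/123/Jim/Script hanging.\n2015.04.23 21:43:50/321/Susane/Mouse was stolen\n' > "$workdir/data.txt"

action="stats"   # imagine this came from: read -p "Action: " action
case "$action" in
    add)
        echo "would prompt for a new call record here" ;;
    stats)
        cut -d"/" -f2 "$workdir/data.txt" | sort | uniq -c ;;
    *)
        echo "unknown action: $action" ;;
esac
rm -r "$workdir"
```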
--------------------------------------------------------------------------------

via: http://linoxide.com/linux-shell-script/guide-start-learning-shell-scripting-scratch/

作者:[Petras Liumparas][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/petrasl/
Translating by wwy-hust

How to Securely Store Passwords and API Keys Using Vault
================================================================================

Vault is a tool for accessing secret information securely, be it a password, API key, certificate or anything else. Vault provides a unified interface to secret information through strong access control mechanisms and extensive logging of events.

Granting access to critical information is quite a difficult problem when we have multiple roles, and individuals across different roles require various pieces of critical information: login details to databases with different privileges, API keys for external services, credentials for service-oriented architecture communication, and so on. The situation gets even worse when access to secret information is managed across different platforms with custom settings, so rolling, secure storage and managing the audit logs become almost impossible. Vault provides a solution to such a complex situation.

### Salient Features ###

**Data Encryption**: Vault can encrypt and decrypt data with no requirement to store it. Developers can now store encrypted data without developing their own encryption techniques, and it allows security teams to define security parameters.

**Secure Secret Storage**: Vault encrypts the secret information (API keys, passwords or certificates) before storing it on persistent (secondary) storage. So even if somebody gets access to the stored information by chance, it will be of no use until it is decrypted.

**Dynamic Secrets**: On-demand secrets are generated for systems like AWS and SQL databases. For instance, if an application needs to access an S3 bucket, it requests an AWS keypair from Vault, which grants the required secret information along with a lease time. The secret information won't work once the lease time has expired.

**Leasing and Renewal**: Vault grants secrets with a lease limit and revokes them as soon as the lease expires; the lease can be renewed through APIs if required.

**Revocation**: Upon expiry of the lease period, Vault can revoke a single secret or a tree of secrets.
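The storage model behind these features can be illustrated with a toy envelope-encryption sketch using openssl. This is purely an illustration of the idea, not Vault's actual scheme, and every file name and key value below is made up:

```shell
#!/bin/bash
# Toy envelope encryption: the secret is encrypted with a data key,
# and the data key is itself encrypted with a master key that is
# never stored alongside the data.
workdir=$(mktemp -d); cd "$workdir"

echo "s3cret-api-key" > secret.txt
openssl rand -hex 32 > data.key                     # generate a data key
openssl enc -aes-256-cbc -salt -pass file:data.key \
    -in secret.txt -out secret.enc                  # encrypt secret with data key
openssl enc -aes-256-cbc -salt -pass pass:master-key \
    -in data.key -out data.key.enc                  # wrap the data key with the master key
rm secret.txt data.key                              # only ciphertext remains at rest

# Recovery requires the master key first, then the unwrapped data key;
# the last command prints the recovered secret.
openssl enc -d -aes-256-cbc -pass pass:master-key -in data.key.enc -out data.key
openssl enc -d -aes-256-cbc -pass file:data.key -in secret.enc
cd /; rm -r "$workdir"
```

An attacker who steals `secret.enc` and `data.key.enc` still cannot read the secret without the master key, which is the property the "Secure Secret Storage" feature describes.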
### Installing Vault ###

There are two ways to use Vault.

**1. Pre-compiled Vault Binary**: a binary can be downloaded for all Linux flavors from the sources below. Once downloaded, unzip it and place it on the system PATH where other binaries are kept, so that it can be accessed/invoked easily.

- [Download Precompiled Vault Binary (32-bit)][1]
- [Download Precompiled Vault Binary (64-bit)][2]
- [Download Precompiled Vault Binary (ARM)][3]

Download the desired precompiled Vault binary and unzip it.

Congratulations! Vault is ready to be used.
**2. Compiling from source** is another way of installing Vault on the system. Go and Git are required to be installed and configured properly on the system before we start the installation process.

To **install Go on Red Hat systems** use the following command.

    sudo yum install go

To **install Go on Debian systems** use the following commands.

    sudo apt-get install golang

OR

    sudo add-apt-repository ppa:gophers/go
    sudo apt-get update
    sudo apt-get install golang-stable

To **install Git on Red Hat systems** use the following command.

    sudo yum install git

To **install Git on Debian systems** use the following command.

    sudo apt-get install git

Once both Go and Git are installed, we start the Vault installation process by compiling from the source.

> Clone the following Vault repository into the GOPATH

    https://github.com/hashicorp/vault

> Verify that the following file exists; if it doesn't, then Vault wasn't cloned to the proper path.

    $GOPATH/src/github.com/hashicorp/vault/main.go

> Run the following command to build Vault on the current system and put the binary in the bin directory.

    make dev
### An introductory tutorial of Vault ###

We have compiled Vault's official interactive tutorial along with its output over SSH.

**Overview**

This tutorial will cover the following steps:

- Initializing and unsealing your Vault
- Authorizing your requests to Vault
- Reading and writing secrets
- Sealing your Vault

**Initialize your Vault**

To get started, we need to initialize an instance of Vault for you to work with.
While initializing, you can configure the seal behavior of Vault.
Initialize Vault now, with 1 unseal key for simplicity, using the command:

    vault init -key-shares=1 -key-threshold=1

You'll notice Vault prints out several keys here. Don't clear your terminal, as these are needed in the next few steps.

**Unsealing your Vault**

When a Vault server is started, it starts in a sealed state. In this state, Vault is configured to know where and how to access the physical storage, but doesn't know how to decrypt any of it.
Vault encrypts data with an encryption key. This key is encrypted with the "master key", which isn't stored. Decrypting the master key requires a threshold of shards. In this example, we use one shard to decrypt this master key.

    vault unseal <key 1>

**Authorize your requests**

Before performing any operation with Vault, the connecting client must be authenticated. Authentication is the process of verifying that a person or machine is who they say they are, and assigning an identity to them. This identity is then used when making requests to Vault.
For simplicity, we'll use the root token we generated on init in Step 2. This output should be available in the scrollback.
Authorize with a client token:

    vault auth <root token>

**Read and write secrets**

Now that Vault has been set up, we can start reading and writing secrets with the default mounted secret backend. Secrets written to Vault are encrypted and then written to the backend storage. The backend storage mechanism never sees the unencrypted value and doesn't have the means necessary to decrypt it without Vault.

    vault write secret/hello value=world

Of course, you can then read this data too:

    vault read secret/hello

**Seal your Vault**

There is also an API to seal the Vault. This will throw away the encryption key and require another unseal process to restore it. Sealing only requires a single operator with root privileges. It is typically part of a rare "break glass" procedure.
This way, if an intrusion is detected, the Vault data can be locked quickly to try to minimize damage. It can't be accessed again without access to the master key shards.

    vault seal

That is the end of the introductory tutorial.
### Summary ###

Vault is a very useful application, mainly because it provides a reliable and secure way of storing critical information. Furthermore, it encrypts the critical information before storing it, maintains audit logs, grants secret information for a limited lease time and revokes it once the lease has expired. It is platform independent and freely available to download and install. To discover more about Vault, readers are encouraged to visit the official website.
--------------------------------------------------------------------------------

via: http://linoxide.com/how-tos/secure-secret-store-vault/

作者:[Aun Raza][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunrz/
[1]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_386.zip
[2]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_amd64.zip
[3]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_arm.zip
How to Setup OpenERP (Odoo) on CentOS 7.x
================================================================================

Hi everyone, this tutorial is all about how we can set up Odoo (formerly known as OpenERP) on our CentOS 7 server. Are you thinking of getting an awesome ERP (Enterprise Resource Planning) app for your business? Then OpenERP is the best app you are searching for, as it is Free and Open Source Software which provides outstanding features for your business or company.

[OpenERP][1] is a free and open source ERP (Enterprise Resource Planning) app which includes Open Source CRM, Website Builder, eCommerce, Project Management, Billing & Accounting, Point of Sale, Human Resources, Marketing, Manufacturing, Purchase Management and many more modules for a better way to boost productivity and sales. Odoo apps can be used as stand-alone applications, but they also integrate seamlessly, so you get a full-featured Open Source ERP when you install several apps.

So, here are some quick and easy steps to get your copy of OpenERP installed on your CentOS machine.
### 1. Installing PostgreSQL ###

First of all, we'll want to update the packages installed on our CentOS 7 machine to ensure that the latest packages, patches and security updates are in place. To update our system, we should run the following commands in a shell or terminal.

    # yum clean all
    # yum update

Now, we'll want to install the PostgreSQL database system, as OpenERP uses PostgreSQL for its database. To install it, we'll need to run the following command.

    # yum install postgresql postgresql-server postgresql-libs

After it is installed, we'll need to initialize the database with the following command.

    # postgresql-setup initdb

We'll then set PostgreSQL to start on every boot and start the PostgreSQL database server.

    # systemctl enable postgresql
    # systemctl start postgresql

As we haven't set a password for the user "postgres", we'll want to set it now.

    # su - postgres
    $ psql
    postgres=# \password postgres
    postgres=# \q
    # exit
### 2. Configuring Odoo Repository ###

After our database server has been installed correctly, we'll want to add EPEL (Extra Packages for Enterprise Linux) to our CentOS server. Odoo (or OpenERP) depends on the Python run-time and many other packages that are not included in the default standard repository, so we'll want to add the EPEL repository so that Odoo can get the required dependencies. To install it, we'll need to run the following command.

    # yum install epel-release

Now, after we install EPEL, we'll add the repository of Odoo (OpenERP) using yum-config-manager.

    # yum install yum-utils
    # yum-config-manager --add-repo=https://nightly.odoo.com/8.0/nightly/rpm/odoo.repo
### 3. Installing Odoo 8 (OpenERP) ###

Finally, after adding the repository of Odoo 8 (OpenERP) to our CentOS 7 machine, we can install Odoo 8 (OpenERP) using the following command.

    # yum install -y odoo

The above command will install odoo along with the necessary dependency packages.

Now, we'll enable automatic startup of Odoo on every boot and start the Odoo service, using the commands below.

    # systemctl enable odoo
    # systemctl start odoo
### 4. Allowing Firewall ###

As Odoo uses port 8069, we'll need to open the firewall for remote access. We can allow port 8069 through the firewall by running the following commands.

    # firewall-cmd --zone=public --add-port=8069/tcp --permanent
    # firewall-cmd --reload

**Note: By default, only connections from localhost are allowed. If we want to allow remote access to the PostgreSQL databases, we'll need to add a host entry for the remote network to the pg_hba.conf configuration file:**

    # nano /var/lib/pgsql/data/pg_hba.conf
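A typical entry has this shape (a hedged example; prefer a tighter address range than 0.0.0.0/0 in production):

```
host    all             all             0.0.0.0/0               md5
```

For such an entry to matter, PostgreSQL must also listen on non-local interfaces (listen_addresses in postgresql.conf), and the service needs a restart afterwards.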
### 5. Web Interface ###

Finally, as we have successfully installed the latest Odoo 8 (OpenERP) on our CentOS 7 server, we can now access Odoo by browsing to http://ip-address:8069 or http://my-site.com:8069 with our favorite web browser. The first thing we'll do is create a new database and set a new password for it. Note that the master password is admin by default. Then, we can log in to our panel with that username and password.

### Conclusion ###

Odoo 8 (formerly OpenERP) is the best ERP app available in the world of Open Source. We did an excellent job installing it, because OpenERP is a set of many modules which together form a complete ERP app for business and company use. So, if you have any questions, suggestions or feedback, please write them in the comment box below. Thank you! Enjoy OpenERP (Odoo 8) :-)
--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/setup-openerp-odoo-centos-7/

作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:https://www.odoo.com/
Linux FAQs with Answers--How to configure a Linux bridge with Network Manager on Ubuntu
================================================================================

> **Question**: I need to set up a Linux bridge on my Ubuntu box to share a NIC with several other virtual machines or containers created on the box. I am currently using Network Manager on my Ubuntu, so preferably I would like to configure a bridge using Network Manager. How can I do that?

A network bridge is a piece of hardware used to interconnect two or more Layer-2 network segments, so that network devices on different segments can talk to each other. A similar bridging concept is needed within a Linux host when you want to interconnect multiple VMs or Ethernet interfaces within the host. That is one use case of a software Linux bridge.

There are several different ways to configure a Linux bridge. For example, in a headless server environment, you can use [brctl][1] to manually configure a bridge. In a desktop environment, bridge support is available in Network Manager. Let's examine how to configure a bridge with Network Manager.

### Requirement ###

To avoid [any issue][2], it is recommended that you have Network Manager 0.9.9 or higher, which is the case for Ubuntu 15.04 and later.

    $ apt-cache show network-manager | grep Version

----------

    Version: 0.9.10.0-4ubuntu15.1
    Version: 0.9.10.0-4ubuntu15
### Create a Bridge ###

The easiest way to create a bridge with Network Manager is via nm-connection-editor. This GUI tool allows you to configure a bridge in easy-to-follow steps.

To start, invoke nm-connection-editor.

    $ nm-connection-editor

The editor window will show you a list of currently configured network connections. Click on the "Add" button in the top right to create a bridge.

Next, choose "Bridge" as a connection type.

Now it's time to configure the bridge, including its name and bridged connection(s). With no other bridges created, the default bridge interface will be named bridge0.

Recall that the goal of creating a bridge is to share your Ethernet interface via the bridge. So you need to add the Ethernet interface to the bridge. This is achieved by adding a new "bridged connection" in the GUI. Click on the "Add" button.

Choose "Ethernet" as a connection type.

In the "Device MAC address" field, choose the interface that you want to enslave into the bridge. In this example, assume that this interface is eth0.

Click on the "General" tab, and enable both checkboxes that say "Automatically connect to this network when it is available" and "All users may connect to this network".

Save the change.

Now you will see a new slave connection created in the bridge.

Click on the "General" tab of the bridge, and make sure that the top-most two checkboxes are enabled.

Go to the "IPv4 Settings" tab, and configure either DHCP or a static IP address for the bridge. Note that you should use the same IPv4 settings as the enslaved Ethernet interface eth0. In this example, we assume that eth0 is configured via DHCP, so choose "Automatic (DHCP)" here. If eth0 is assigned a static IP address, you should assign the same IP address to the bridge.

Finally, save the bridge settings.

Now you will see an additional bridge connection created in the "Network Connections" window. You no longer need the previously configured wired connection for the enslaved interface eth0, so go ahead and delete the original wired connection.

At this point, the bridge connection will automatically be activated. You will momentarily lose connectivity, since the IP address assigned to eth0 is taken over by the bridge. Once an IP address is assigned to the bridge, you will be connected back to your Ethernet interface via the bridge. You can confirm that by checking the "Network" settings.

Also, check the list of available interfaces. As mentioned, the bridge interface must have taken over whatever IP address was possessed by your Ethernet interface.

That's it, and now the bridge is ready to use!
--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/configure-linux-bridge-network-manager-ubuntu.html

作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ask.xmodulo.com/author/nanni
[1]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html
[2]:https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1273201
Linux FAQs with Answers--How to install Shrew Soft IPsec VPN client on Linux
================================================================================

> **Question**: I need to connect to an IPSec VPN gateway. For that, I'm trying to use Shrew Soft VPN client, which is available for free. How can I install the Shrew Soft VPN client on [insert your Linux distro]?

There are many commercial VPN gateways available, which come with their own proprietary VPN client software. While there are also open-source VPN server/client alternatives, they typically lack sophisticated IPsec support, such as Internet Key Exchange (IKE), a standard IPsec protocol used to secure VPN key exchange and authentication. Shrew Soft VPN is a free IPsec VPN client supporting a number of authentication methods, key exchange, encryption and firewall traversal options.

Here is how you can install the Shrew Soft VPN client on Linux platforms.

First, download its source code from the [official website][1].
### Install Shrew VPN Client on Debian, Ubuntu or Linux Mint ###

The Shrew Soft VPN client GUI requires Qt 4.x. So you will need to install its development files as part of the dependencies.

    $ sudo apt-get install cmake libqt4-core libqt4-dev libqt4-gui libedit-dev libssl-dev checkinstall flex bison
    $ wget https://www.shrew.net/download/ike/ike-2.2.1-release.tbz2
    $ tar xvjf ike-2.2.1-release.tbz2
    $ cd ike
    $ cmake -DCMAKE_INSTALL_PREFIX=/usr -DQTGUI=YES -DETCDIR=/etc -DNATT=YES .
    $ make
    $ sudo make install
    $ cd /etc/
    $ sudo mv iked.conf.sample iked.conf

### Install Shrew VPN Client on CentOS, Fedora or RHEL ###

Similar to Debian based systems, you will need to install a number of dependencies, including Qt4, before compiling it.

    $ sudo yum install qt-devel cmake gcc-c++ openssl-devel libedit-devel flex bison
    $ wget https://www.shrew.net/download/ike/ike-2.2.1-release.tbz2
    $ tar xvjf ike-2.2.1-release.tbz2
    $ cd ike
    $ cmake -DCMAKE_INSTALL_PREFIX=/usr -DQTGUI=YES -DETCDIR=/etc -DNATT=YES .
    $ make
    $ sudo make install
    $ cd /etc/
    $ sudo mv iked.conf.sample iked.conf
On Red Hat based systems, one last step is to open /etc/ld.so.conf with a text editor and add the directory that contains the installed libraries, as shown below.

    $ sudo vi /etc/ld.so.conf

----------

    /usr/lib/

Reload the run-time bindings of shared libraries to incorporate the newly installed libraries:

    $ sudo ldconfig
### Launch Shrew VPN Client ###

First, launch the IKE daemon (iked). This daemon speaks the IKE protocol to communicate with a remote host over IPSec as a VPN client.

    $ sudo iked

Now start qikea, which is the IPsec VPN client front end. This GUI application allows you to manage remote site configurations and to initiate VPN connections.

To create a new VPN configuration, click on the "Add" button, and fill out the VPN site configuration. Once you create a configuration, you can initiate a VPN connection simply by clicking on it.
### Troubleshooting ###

1. I am getting the following error while running iked.

    iked: error while loading shared libraries: libss_ike.so.2.2.1: cannot open shared object file: No such file or directory

To solve this problem, you need to update the dynamic linker to incorporate the libss_ike library. For that, add to /etc/ld.so.conf the path where the library is located (e.g., /usr/lib), and then run the ldconfig command.

    $ sudo ldconfig

Verify that libss_ike is added to the library path:

    $ ldconfig -p | grep ike

----------

    libss_ike.so.2.2.1 (libc6,x86-64) => /lib/libss_ike.so.2.2.1
    libss_ike.so (libc6,x86-64) => /lib/libss_ike.so
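The same `ldconfig -p` check works for any shared library. A small self-contained sketch, using libc as a stand-in since libss_ike may not be present on your machine:

```shell
#!/bin/bash
# Ask the dynamic linker cache whether a library can be resolved.
lib="libc.so"   # stand-in; use libss_ike.so after installing Shrew Soft
if ldconfig -p | grep -q "$lib"; then
    echo "$lib: found in linker cache"
else
    echo "$lib: missing - add its directory to /etc/ld.so.conf and run ldconfig"
fi
```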
--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/install-shrew-soft-ipsec-vpn-client-linux.html

作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ask.xmodulo.com/author/nanni
[1]:https://www.shrew.net/download/ike
Vic020

Linux FAQs with Answers--How to install autossh on Linux
================================================================================

> **Question**: I would like to install autossh on [insert your Linux distro]. How can I do that?

[autossh][1] is an open-source tool that allows you to monitor an SSH session and restart it automatically should it get disconnected or stop forwarding traffic. autossh assumes that [passwordless SSH login][2] to the destination host is already set up, so that it can restart a broken SSH session without the user's involvement.

autossh comes in handy when you want to set up [reverse SSH tunnels][3] or [mount remote folders over SSH][4]. Essentially, in any situation where persistent SSH sessions are required, autossh can be useful.
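As a taste of how autossh is typically deployed, here is a hedged sketch of a persistent reverse tunnel wrapped in a systemd unit. All host names, ports and paths below are placeholders for illustration, not from the original article:

```
[Unit]
Description=Persistent reverse SSH tunnel via autossh (example)
After=network-online.target

[Service]
# Disable autossh's own monitoring port and rely on OpenSSH keepalives instead.
Environment=AUTOSSH_GATETIME=0
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -R 2222:localhost:22 user@relay.example.com
Restart=always

[Install]
WantedBy=multi-user.target
```

With `Restart=always`, systemd restarts autossh if it exits, and autossh itself restarts ssh when keepalives fail, which together give the "persistent session" behavior described above.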
Here is how to install autossh on various Linux distributions.

### Install Autossh on Debian or Ubuntu ###

autossh is available in the base repositories of Debian based systems, so installation is easy.

    $ sudo apt-get install autossh

### Install Autossh on Fedora ###

Fedora repositories also carry the autossh package, so simply use the yum command.

    $ sudo yum install autossh

### Install Autossh on CentOS or RHEL ###

For CentOS/RHEL 6 or earlier, enable the [Repoforge repository][5] first, and then use the yum command.

    $ sudo yum install autossh

For CentOS/RHEL 7, autossh is no longer available in the Repoforge repository. You will need to build it from the source (explained below).

### Install Autossh on Arch Linux ###

    $ sudo pacman -S autossh

### Compile Autossh from the Source on Debian or Ubuntu ###

If you would like to try the latest version of autossh, you can build it from the source as follows.

    $ sudo apt-get install gcc make
    $ wget http://www.harding.motd.ca/autossh/autossh-1.4e.tgz
    $ tar -xf autossh-1.4e.tgz
    $ cd autossh-1.4e
    $ ./configure
    $ make
    $ sudo make install

### Compile Autossh from the Source on CentOS, Fedora or RHEL ###

On CentOS/RHEL 7, autossh is not available as a pre-built package, so you'll need to compile it from the source as follows.

    $ sudo yum install wget gcc make
    $ wget http://www.harding.motd.ca/autossh/autossh-1.4e.tgz
    $ tar -xf autossh-1.4e.tgz
    $ cd autossh-1.4e
    $ ./configure
    $ make
    $ sudo make install
--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/install-autossh-linux.html

作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ask.xmodulo.com/author/nanni
[1]:http://www.harding.motd.ca/autossh/
[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html
[3]:http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
[4]:http://xmodulo.com/how-to-mount-remote-directory-over-ssh-on-linux.html
[5]:http://xmodulo.com/how-to-set-up-rpmforge-repoforge-repository-on-centos.html
zhangboyue 翻译中

45 Zypper Commands to Manage ‘Suse’ Linux Package Management
================================================================================

SUSE (from the German "Software und System-Entwicklung", meaning Software and Systems Development) Linux lies on top of the Linux kernel and is brought to you by Novell. SUSE comes in two flavors. One of them is openSUSE, which is freely available (free as in speech as well as free as in beer). It is a community-driven project packed with the latest application support; the latest stable release of openSUSE Linux is 13.2.

The other is SUSE Linux Enterprise, a commercial Linux distribution designed specially for enterprise and production use. SUSE Linux Enterprise comes with a variety of enterprise applications and features suited for production environments; the latest stable release of SUSE Linux Enterprise is 12.

You may like to check the detailed installation instructions for SUSE Linux Enterprise Server at:

- [Installation of SUSE Linux Enterprise Server 12][1]

Zypper and YaST are the package managers for SUSE Linux, and they work on top of RPM.

YaST, which stands for Yet another Setup Tool, is a tool that works on openSUSE and SUSE Enterprise edition to administer, set up and configure SUSE Linux.

Zypper is the command line interface of the ZYpp package manager, used for installing, removing and updating packages on SUSE. ZYpp is the package management engine that powers both Zypper and YaST.

Here in this article we will see Zypper in action, installing, updating, removing and doing everything else a package manager can do. Here we go…

**Important**: Remember that all these commands are meant for system-wide changes, hence they must be run as root; otherwise the commands will fail.
### Getting Basic Help with Zypper ###
|
||||
|
||||
1. Run zypper without any option, will give you a list of all global options and commands.
|
||||
|
||||
# zypper
|
||||
|
||||
Usage:
|
||||
zypper [--global-options]
|
||||
|
||||
2. To get help on a specific command say ‘in’ (install), run the below commands.

    # zypper help in
    OR
    # zypper help install

    install (in) [options] <capability|rpm_file_uri> ...

    Install packages with specified capabilities or RPM files with specified
    location. A capability is NAME[.ARCH][OP<EDITION>], where OP is one
    of <, <=, =, >=, >.

    Command options:
        --from <alias|#|URI>    Select packages from the specified repository.
    -r, --repo <alias|#|URI>    Load only the specified repository.
    -t, --type <type>           Type of package (package, patch, pattern, product, srcpackage).
                                Default: package.
    -n, --name                  Select packages by plain name, not by capability.
    -C, --capability            Select packages by capability.
    -f, --force                 Install even if the item is already installed (reinstall),
                                downgraded or changes vendor or architecture.
        --oldpackage            Allow to replace a newer item with an older one.
                                Handy if you are doing a rollback. Unlike --force
                                it will not enforce a reinstall.
        --replacefiles          Install the packages even if they replace files from other,
                                already installed, packages. Default is to treat file conflicts
                                as an error. --download-as-needed disables the fileconflict check.
    ......

3. Search for a package (say gnome-desktop) before installing.

    # zypper se gnome-desktop

    Retrieving repository 'openSUSE-13.2-Debug' metadata ............................................................[done]
    Building repository 'openSUSE-13.2-Debug' cache .................................................................[done]
    Retrieving repository 'openSUSE-13.2-Non-Oss' metadata ..........................................................[done]
    Building repository 'openSUSE-13.2-Non-Oss' cache ...............................................................[done]
    Retrieving repository 'openSUSE-13.2-Oss' metadata ..............................................................[done]
    Building repository 'openSUSE-13.2-Oss' cache ...................................................................[done]
    Retrieving repository 'openSUSE-13.2-Update' metadata ...........................................................[done]
    Building repository 'openSUSE-13.2-Update' cache ................................................................[done]
    Retrieving repository 'openSUSE-13.2-Update-Non-Oss' metadata ...................................................[done]
    Building repository 'openSUSE-13.2-Update-Non-Oss' cache ........................................................[done]
    Loading repository data...
    Reading installed packages...

    S | Name | Summary | Type
    --+---------------------------------------+-----------------------------------------------------------+-----------
      | gnome-desktop2-lang | Languages for package gnome-desktop2 | package
      | gnome-desktop2 | The GNOME Desktop API Library | package
      | libgnome-desktop-2-17 | The GNOME Desktop API Library | package
      | libgnome-desktop-3-10 | The GNOME Desktop API Library | package
      | libgnome-desktop-3-devel | The GNOME Desktop API Library -- Development Files | package
      | libgnome-desktop-3_0-common | The GNOME Desktop API Library -- Common data files | package
      | gnome-desktop-debugsource | Debug sources for package gnome-desktop | package
      | gnome-desktop-sharp2-debugsource | Debug sources for package gnome-desktop-sharp2 | package
      | gnome-desktop2-debugsource | Debug sources for package gnome-desktop2 | package
      | libgnome-desktop-2-17-debuginfo | Debug information for package libgnome-desktop-2-17 | package
      | libgnome-desktop-3-10-debuginfo | Debug information for package libgnome-desktop-3-10 | package
      | libgnome-desktop-3_0-common-debuginfo | Debug information for package libgnome-desktop-3_0-common | package
      | libgnome-desktop-2-17-debuginfo-32bit | Debug information for package libgnome-desktop-2-17 | package
      | libgnome-desktop-3-10-debuginfo-32bit | Debug information for package libgnome-desktop-3-10 | package
      | gnome-desktop-sharp2 | Mono bindings for libgnome-desktop | package
      | libgnome-desktop-2-devel | The GNOME Desktop API Library -- Development Files | package
      | gnome-desktop-lang | Languages for package gnome-desktop | package
      | libgnome-desktop-2-17-32bit | The GNOME Desktop API Library | package
      | libgnome-desktop-3-10-32bit | The GNOME Desktop API Library | package
      | gnome-desktop | The GNOME Desktop API Library | srcpackage
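When the search output gets long, it can be handy to filter it by the last (Type) column. A small awk sketch, with two hard-coded sample rows so it can be tried anywhere (on a real system you would pipe `zypper se ...` into `only_type`; the function name is our own):

```shell
# Filter `zypper se` style table rows by the last (Type) column and
# print just the package name (second field, spaces stripped).
only_type() {
    awk -F'|' -v t="$1" '$NF ~ t { gsub(/ /, "", $2); print $2 }'
}

# Demo on hard-coded sample rows; prints only the srcpackage entry's name.
printf '%s\n' \
  '  | gnome-desktop2 | The GNOME Desktop API Library | package' \
  '  | gnome-desktop | The GNOME Desktop API Library | srcpackage' |
only_type srcpackage
```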

4. Get information on a pattern package (say lamp_server) using the following command.

    # zypper info -t pattern lamp_server

    Loading repository data...
    Reading installed packages...

    Information for pattern lamp_server:
    ------------------------------------
    Repository: openSUSE-13.2-Update
    Name: lamp_server
    Version: 20141007-5.1
    Arch: x86_64
    Vendor: openSUSE
    Installed: No
    Visible to User: Yes
    Summary: Web and LAMP Server
    Description:
    Software to set up a Web server that is able to serve static, dynamic, and interactive content (like a Web shop). This includes Apache HTTP Server, the database management system MySQL,
    and scripting languages such as PHP, Python, Ruby on Rails, or Perl.
    Contents:

    S | Name | Type | Dependency
    --+-------------------------------+---------+-----------
      | apache2-mod_php5 | package |
      | php5-iconv | package |
    i | patterns-openSUSE-base | package |
    i | apache2-prefork | package |
      | php5-dom | package |
      | php5-mysql | package |
    i | apache2 | package |
      | apache2-example-pages | package |
      | mariadb | package |
      | apache2-mod_perl | package |
      | php5-ctype | package |
      | apache2-doc | package |
      | yast2-http-server | package |
      | patterns-openSUSE-lamp_server | package |

5. To open a zypper shell session, run the below command.

    # zypper shell
    OR
    # zypper sh

    zypper> help
    Usage:
        zypper [--global-options]

**Note**: In the zypper shell, type ‘help‘ to get a list of global options and commands.

### Zypper Repository Management ###

#### Listing Defined Repositories ####

6. Use the zypper repos or zypper lr command to list all the defined repositories.

    # zypper repos
    OR
    # zypper lr

    # | Alias | Name | Enabled | Refresh
    --+---------------------------+------------------------------------+---------+--------
    1 | openSUSE-13.2-0 | openSUSE-13.2-0 | Yes | No
    2 | repo-debug | openSUSE-13.2-Debug | Yes | Yes
    3 | repo-debug-update | openSUSE-13.2-Update-Debug | No | Yes
    4 | repo-debug-update-non-oss | openSUSE-13.2-Update-Debug-Non-Oss | No | Yes
    5 | repo-non-oss | openSUSE-13.2-Non-Oss | Yes | Yes
    6 | repo-oss | openSUSE-13.2-Oss | Yes | Yes
    7 | repo-source | openSUSE-13.2-Source | No | Yes
    8 | repo-update | openSUSE-13.2-Update | Yes | Yes
    9 | repo-update-non-oss | openSUSE-13.2-Update-Non-Oss | Yes | Yes

7. List the defined repositories along with their URIs.

    # zypper lr -u

    # | Alias | Name | Enabled | Refresh | URI
    --+---------------------------+------------------------------------+---------+---------+----------------------------------------------------------------
    1 | openSUSE-13.2-0 | openSUSE-13.2-0 | Yes | No | cd:///?devices=/dev/disk/by-id/ata-VBOX_CD-ROM_VB2-01700376
    2 | repo-debug | openSUSE-13.2-Debug | Yes | Yes | http://download.opensuse.org/debug/distribution/13.2/repo/oss/
    3 | repo-debug-update | openSUSE-13.2-Update-Debug | No | Yes | http://download.opensuse.org/debug/update/13.2/
    4 | repo-debug-update-non-oss | openSUSE-13.2-Update-Debug-Non-Oss | No | Yes | http://download.opensuse.org/debug/update/13.2-non-oss/
    5 | repo-non-oss | openSUSE-13.2-Non-Oss | Yes | Yes | http://download.opensuse.org/distribution/13.2/repo/non-oss/
    6 | repo-oss | openSUSE-13.2-Oss | Yes | Yes | http://download.opensuse.org/distribution/13.2/repo/oss/
    7 | repo-source | openSUSE-13.2-Source | No | Yes | http://download.opensuse.org/source/distribution/13.2/repo/oss/
    8 | repo-update | openSUSE-13.2-Update | Yes | Yes | http://download.opensuse.org/update/13.2/
    9 | repo-update-non-oss | openSUSE-13.2-Update-Non-Oss | Yes | Yes | http://download.opensuse.org/update/13.2-non-oss/
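Since the output is a plain pipe-separated table, it is easy to post-process. A sketch that pulls out just the URIs of enabled repositories, with two hard-coded sample rows so it runs anywhere (on a real system you would pipe `zypper lr -u` into `enabled_repo_uris`; the function name is our own):

```shell
# Print only the URIs of enabled repositories from `zypper lr -u` style
# output: fields are '|'-separated, field 4 is Enabled, field 6 is URI.
enabled_repo_uris() {
    awk -F'|' '$4 ~ /Yes/ { gsub(/ /, "", $6); print $6 }'
}

# Demo on hard-coded sample rows; only the enabled repo's URI is printed.
printf '%s\n' \
  '5 | repo-non-oss | openSUSE-13.2-Non-Oss | Yes | Yes | http://download.opensuse.org/distribution/13.2/repo/non-oss/' \
  '7 | repo-source | openSUSE-13.2-Source | No | Yes | http://download.opensuse.org/source/distribution/13.2/repo/oss/' |
enabled_repo_uris
```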

8. List repositories along with their priorities.

    # zypper lr -P

    # | Alias | Name | Enabled | Refresh | Priority
    --+---------------------------+------------------------------------+---------+---------+---------
    1 | openSUSE-13.2-0 | openSUSE-13.2-0 | Yes | No | 99
    2 | repo-debug | openSUSE-13.2-Debug | Yes | Yes | 99
    3 | repo-debug-update | openSUSE-13.2-Update-Debug | No | Yes | 99
    4 | repo-debug-update-non-oss | openSUSE-13.2-Update-Debug-Non-Oss | No | Yes | 99
    5 | repo-non-oss | openSUSE-13.2-Non-Oss | Yes | Yes | 85
    6 | repo-oss | openSUSE-13.2-Oss | Yes | Yes | 99
    7 | repo-source | openSUSE-13.2-Source | No | Yes | 99
    8 | repo-update | openSUSE-13.2-Update | Yes | Yes | 99
    9 | repo-update-non-oss | openSUSE-13.2-Update-Non-Oss | Yes | Yes | 99
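Note that in zypper a LOWER priority number wins and the default is 99, so ‘repo-non-oss‘ at priority 85 above is preferred over the others. A sketch that sorts `zypper lr -P` style rows so the highest-precedence repo comes first (sample rows hard-coded; on a real system pipe `zypper lr -P` in):

```shell
# Sort pipe-separated repo rows numerically by the 6th (Priority) field,
# so the repository zypper prefers (lowest number) is printed first.
sort_by_priority() {
    sort -t'|' -k6 -n
}

# Demo: the priority-85 row sorts ahead of the priority-99 row.
printf '%s\n' \
  '6 | repo-oss | openSUSE-13.2-Oss | Yes | Yes | 99' \
  '5 | repo-non-oss | openSUSE-13.2-Non-Oss | Yes | Yes | 85' |
sort_by_priority
```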

#### Refreshing Repositories ####

9. Use the zypper refresh or zypper ref command to refresh zypper repositories.

    # zypper refresh
    OR
    # zypper ref

    Repository 'openSUSE-13.2-0' is up to date.
    Repository 'openSUSE-13.2-Debug' is up to date.
    Repository 'openSUSE-13.2-Non-Oss' is up to date.
    Repository 'openSUSE-13.2-Oss' is up to date.
    Repository 'openSUSE-13.2-Update' is up to date.
    Repository 'openSUSE-13.2-Update-Non-Oss' is up to date.
    All repositories have been refreshed.

10. To refresh a specific repository, say ‘repo-non-oss‘, type:

    # zypper refresh repo-non-oss

    Repository 'openSUSE-13.2-Non-Oss' is up to date.
    Specified repositories have been refreshed.

11. To force a refresh of a repository, say ‘repo-non-oss‘, type:

    # zypper ref -f repo-non-oss

    Forcing raw metadata refresh
    Retrieving repository 'openSUSE-13.2-Non-Oss' metadata ............................................................[done]
    Forcing building of repository cache
    Building repository 'openSUSE-13.2-Non-Oss' cache ............................................................[done]
    Specified repositories have been refreshed.

#### Modifying Repositories ####

Here, we use the ‘zypper modifyrepo‘ or ‘zypper mr‘ command to enable or disable zypper repositories.

12. Before disabling a repository, you should know that in Zypper every repository has its own unique number, which is used to disable or enable it.

Let’s say you want to disable the repository ‘repo-oss‘. To do so, you first need to find its number by typing the following command.

    # zypper lr

    # | Alias | Name | Enabled | Refresh
    --+---------------------------+------------------------------------+---------+--------
    1 | openSUSE-13.2-0 | openSUSE-13.2-0 | Yes | No
    2 | repo-debug | openSUSE-13.2-Debug | Yes | Yes
    3 | repo-debug-update | openSUSE-13.2-Update-Debug | No | Yes
    4 | repo-debug-update-non-oss | openSUSE-13.2-Update-Debug-Non-Oss | No | Yes
    5 | repo-non-oss | openSUSE-13.2-Non-Oss | Yes | Yes
    6 | repo-oss | openSUSE-13.2-Oss | No | Yes
    7 | repo-source | openSUSE-13.2-Source | No | Yes
    8 | repo-update | openSUSE-13.2-Update | Yes | Yes
    9 | repo-update-non-oss | openSUSE-13.2-Update-Non-Oss | Yes | Yes

As you can see in the above output, the repository ‘repo-oss‘ has number 6. To disable it, specify number 6 with the following command.

    # zypper mr -d 6

    Repository 'repo-oss' has been successfully disabled.

13. To enable the same repository ‘repo-oss‘ again, which appears at number 6 (as shown in the above example), run:

    # zypper mr -e 6

    Repository 'repo-oss' has been successfully enabled.
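Besides numbers, `zypper mr` also accepts repository aliases, which makes it easy to toggle several repos in one loop. A sketch using the debug-repo aliases from the listing above; `echo` is kept in so the loop only previews the commands, so remove it (and run as root) to apply them:

```shell
# Preview disabling a group of repositories by alias instead of by number.
# The aliases match the debug repos listed earlier; drop `echo` to run.
for alias in repo-debug repo-debug-update repo-debug-update-non-oss; do
    echo zypper mr -d "$alias"
done
```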

14. Enable auto-refresh and rpm file ‘caching‘ for a repo, say ‘repo-non-oss‘, and set its priority to, say, 85.

    # zypper mr -rk -p 85 repo-non-oss

    Repository 'repo-non-oss' priority has been left unchanged (85)
    Nothing to change for repository 'repo-non-oss'.

15. Disable rpm file caching for all the repositories.

    # zypper mr -Ka

    RPM files caching has been disabled for repository 'openSUSE-13.2-0'.
    RPM files caching has been disabled for repository 'repo-debug'.
    RPM files caching has been disabled for repository 'repo-debug-update'.
    RPM files caching has been disabled for repository 'repo-debug-update-non-oss'.
    RPM files caching has been disabled for repository 'repo-non-oss'.
    RPM files caching has been disabled for repository 'repo-oss'.
    RPM files caching has been disabled for repository 'repo-source'.
    RPM files caching has been disabled for repository 'repo-update'.
    RPM files caching has been disabled for repository 'repo-update-non-oss'.

16. Enable rpm file caching for all the repositories.

    # zypper mr -ka

    RPM files caching has been enabled for repository 'openSUSE-13.2-0'.
    RPM files caching has been enabled for repository 'repo-debug'.
    RPM files caching has been enabled for repository 'repo-debug-update'.
    RPM files caching has been enabled for repository 'repo-debug-update-non-oss'.
    RPM files caching has been enabled for repository 'repo-non-oss'.
    RPM files caching has been enabled for repository 'repo-oss'.
    RPM files caching has been enabled for repository 'repo-source'.
    RPM files caching has been enabled for repository 'repo-update'.
    RPM files caching has been enabled for repository 'repo-update-non-oss'.

17. Disable rpm file caching for remote repositories.

    # zypper mr -Kt

    RPM files caching has been disabled for repository 'repo-debug'.
    RPM files caching has been disabled for repository 'repo-debug-update'.
    RPM files caching has been disabled for repository 'repo-debug-update-non-oss'.
    RPM files caching has been disabled for repository 'repo-non-oss'.
    RPM files caching has been disabled for repository 'repo-oss'.
    RPM files caching has been disabled for repository 'repo-source'.
    RPM files caching has been disabled for repository 'repo-update'.
    RPM files caching has been disabled for repository 'repo-update-non-oss'.

18. Enable rpm file caching for remote repositories.

    # zypper mr -kt

    RPM files caching has been enabled for repository 'repo-debug'.
    RPM files caching has been enabled for repository 'repo-debug-update'.
    RPM files caching has been enabled for repository 'repo-debug-update-non-oss'.
    RPM files caching has been enabled for repository 'repo-non-oss'.
    RPM files caching has been enabled for repository 'repo-oss'.
    RPM files caching has been enabled for repository 'repo-source'.
    RPM files caching has been enabled for repository 'repo-update'.
    RPM files caching has been enabled for repository 'repo-update-non-oss'.

#### Adding Repositories ####

You may use either of the two commands, ‘zypper addrepo‘ or ‘zypper ar‘. You may use a repo URL or alias to add a repository.

19. Add a repository, say “http://download.opensuse.org/update/11.1/”, with the alias ‘update’.

    # zypper ar http://download.opensuse.org/update/11.1/ update

    Adding repository 'update' .............................................................................................................................................................[done]
    Repository 'update' successfully added
    Enabled : Yes
    Autorefresh : No
    GPG check : Yes
    URI : http://download.opensuse.org/update/11.1/

20. Rename a repository. This changes the alias only. You may use the ‘zypper namerepo‘ or ‘zypper nr‘ command. To change the alias of the repo that appears at number 10 (in zypper lr) to upd8, run the below command.

    # zypper nr 10 upd8

    Repository 'update' renamed to 'upd8'.

#### Removing Repositories ####

21. Remove a repository. This removes the repository from the system. You may use the ‘zypper removerepo‘ or ‘zypper rr‘ command. To remove a repo, say ‘upd8‘, run the below command.

    # zypper rr upd8

    Removing repository 'upd8' .........................................................................................[done]
    Repository 'upd8' has been removed.

### Package Management using Zypper ###

#### Install a Package with Zypper ####

22. With Zypper, we can install packages based upon a capability name. For example, to install a package (say Mozilla Firefox) using its capability name:

    # zypper in MozillaFirefox

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...

    The following 128 NEW packages are going to be installed:
    adwaita-icon-theme at-spi2-atk-common at-spi2-atk-gtk2 at-spi2-core cantarell-fonts cups-libs desktop-file-utils fontconfig gdk-pixbuf-query-loaders gstreamer gstreamer-fluendo-mp3
    gstreamer-plugins-base gtk2-branding-openSUSE gtk2-data gtk2-immodule-amharic gtk2-immodule-inuktitut gtk2-immodule-thai gtk2-immodule-vietnamese gtk2-metatheme-adwaita
    gtk2-theming-engine-adwaita gtk2-tools gtk3-data gtk3-metatheme-adwaita gtk3-tools hicolor-icon-theme hicolor-icon-theme-branding-openSUSE libasound2 libatk-1_0-0 libatk-bridge-2_0-0
    libatspi0 libcairo2 libcairo-gobject2 libcanberra0 libcanberra-gtk0 libcanberra-gtk2-module libcanberra-gtk3-0 libcanberra-gtk3-module libcanberra-gtk-module-common libcdda_interface0
    libcdda_paranoia0 libcolord2 libdrm2 libdrm_intel1 libdrm_nouveau2 libdrm_radeon1 libFLAC8 libfreebl3 libgbm1 libgdk_pixbuf-2_0-0 libgraphite2-3 libgstapp-1_0-0 libgstaudio-1_0-0
    libgstpbutils-1_0-0 libgstreamer-1_0-0 libgstriff-1_0-0 libgsttag-1_0-0 libgstvideo-1_0-0 libgthread-2_0-0 libgtk-2_0-0 libgtk-3-0 libharfbuzz0 libjasper1 libjbig2 libjpeg8 libjson-c2
    liblcms2-2 libLLVM libltdl7 libnsssharedhelper0 libogg0 liborc-0_4-0 libpackagekit-glib2-18 libpango-1_0-0 libpciaccess0 libpixman-1-0 libpulse0 libsndfile1 libsoftokn3 libspeex1
    libsqlite3-0 libstartup-notification-1-0 libtheoradec1 libtheoraenc1 libtiff5 libvisual libvorbis0 libvorbisenc2 libvorbisfile3 libwayland-client0 libwayland-cursor0 libwayland-server0
    libX11-xcb1 libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-render0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libXcomposite1 libXcursor1 libXdamage1 libXevie1
    libXfixes3 libXft2 libXi6 libXinerama1 libxkbcommon-0_4_3 libXrandr2 libXrender1 libxshmfence1 libXtst6 libXv1 libXxf86vm1 Mesa Mesa-libEGL1 Mesa-libGL1 Mesa-libglapi0
    metatheme-adwaita-common MozillaFirefox MozillaFirefox-branding-openSUSE mozilla-nss mozilla-nss-certs PackageKit-gstreamer-plugin pango-tools sound-theme-freedesktop

    The following 10 recommended packages were automatically selected:
    gstreamer-fluendo-mp3 gtk2-branding-openSUSE gtk2-data gtk2-immodule-amharic gtk2-immodule-inuktitut gtk2-immodule-thai gtk2-immodule-vietnamese libcanberra0 libpulse0
    PackageKit-gstreamer-plugin

    128 new packages to install.
    Overall download size: 77.2 MiB. Already cached: 0 B After the operation, additional 200.0 MiB will be used.
    Continue? [y/n/? shows all options] (y): y
    Retrieving package cantarell-fonts-0.0.16-1.1.noarch (1/128), 74.1 KiB (115.6 KiB unpacked)
    Retrieving: cantarell-fonts-0.0.16-1.1.noarch.rpm .........................................................................................................................[done (63.4 KiB/s)]
    Retrieving package hicolor-icon-theme-0.13-2.1.2.noarch (2/128), 40.1 KiB ( 50.5 KiB unpacked)
    Retrieving: hicolor-icon-theme-0.13-2.1.2.noarch.rpm ...................................................................................................................................[done]
    Retrieving package sound-theme-freedesktop-0.8-7.1.2.noarch (3/128), 372.6 KiB (460.3 KiB unpacked)

23. Install a specific version of a package (say gcc older than 5.1).

    # zypper in 'gcc<5.1'

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...

    The following 13 NEW packages are going to be installed:
    cpp cpp48 gcc gcc48 libasan0 libatomic1-gcc49 libcloog-isl4 libgomp1-gcc49 libisl10 libitm1-gcc49 libmpc3 libmpfr4 libtsan0-gcc49

    13 new packages to install.
    Overall download size: 14.5 MiB. Already cached: 0 B After the operation, additional 49.4 MiB will be used.
    Continue? [y/n/? shows all options] (y): y

24. Install a package (say gcc) for a specific architecture (say i586).

    # zypper in gcc.i586

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...

    The following 13 NEW packages are going to be installed:
    cpp cpp48 gcc gcc48 libasan0 libatomic1-gcc49 libcloog-isl4 libgomp1-gcc49 libisl10 libitm1-gcc49 libmpc3 libmpfr4 libtsan0-gcc49

    13 new packages to install.
    Overall download size: 14.5 MiB. Already cached: 0 B After the operation, additional 49.4 MiB will be used.
    Continue? [y/n/? shows all options] (y): y
    Retrieving package libasan0-4.8.3+r212056-2.2.4.x86_64 (1/13), 74.2 KiB (166.9 KiB unpacked)
    Retrieving: libasan0-4.8.3+r212056-2.2.4.x86_64.rpm .......................................................................................................................[done (79.2 KiB/s)]
    Retrieving package libatomic1-gcc49-4.9.0+r211729-2.1.7.x86_64 (2/13), 14.3 KiB ( 26.1 KiB unpacked)
    Retrieving: libatomic1-gcc49-4.9.0+r211729-2.1.7.x86_64.rpm ...............................................................................................................[done (55.3 KiB/s)]

25. Install a package (say gcc) for a specific architecture (say i586) and a specific version (say <5.1).

    # zypper in 'gcc.i586<5.1'

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...

    The following 13 NEW packages are going to be installed:
    cpp cpp48 gcc gcc48 libasan0 libatomic1-gcc49 libcloog-isl4 libgomp1-gcc49 libisl10 libitm1-gcc49 libmpc3 libmpfr4 libtsan0-gcc49

    13 new packages to install.
    Overall download size: 14.4 MiB. Already cached: 129.5 KiB After the operation, additional 49.4 MiB will be used.
    Continue? [y/n/? shows all options] (y): y
    In cache libasan0-4.8.3+r212056-2.2.4.x86_64.rpm (1/13), 74.2 KiB (166.9 KiB unpacked)
    In cache libatomic1-gcc49-4.9.0+r211729-2.1.7.x86_64.rpm (2/13), 14.3 KiB ( 26.1 KiB unpacked)
    In cache libgomp1-gcc49-4.9.0+r211729-2.1.7.x86_64.rpm (3/13), 41.1 KiB ( 90.7 KiB unpacked)

26. Install a package from a specific repository, e.g. libxine1 from the ‘upd’ repository, together with amarok.

    # zypper in amarok upd:libxine1

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...
    The following 202 NEW packages are going to be installed:
    amarok bundle-lang-kde-en clamz cups-libs enscript fontconfig gdk-pixbuf-query-loaders ghostscript-fonts-std gptfdisk gstreamer gstreamer-plugins-base hicolor-icon-theme
    hicolor-icon-theme-branding-openSUSE htdig hunspell hunspell-tools icoutils ispell ispell-american kde4-filesystem kdebase4-runtime kdebase4-runtime-branding-openSUSE kdelibs4
    kdelibs4-branding-openSUSE kdelibs4-core kdialog libakonadi4 l
    .....

27. Install a package (say git) by its plain name (-n).

    # zypper in -n git

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...

    The following 35 NEW packages are going to be installed:
    cvs cvsps fontconfig git git-core git-cvs git-email git-gui gitk git-svn git-web libserf-1-1 libsqlite3-0 libXft2 libXrender1 libXss1 perl-Authen-SASL perl-Clone perl-DBD-SQLite perl-DBI
    perl-Error perl-IO-Socket-SSL perl-MLDBM perl-Net-Daemon perl-Net-SMTP-SSL perl-Net-SSLeay perl-Params-Util perl-PlRPC perl-SQL-Statement perl-Term-ReadKey subversion subversion-perl tcl
    tk xhost

    The following 13 recommended packages were automatically selected:
    git-cvs git-email git-gui gitk git-svn git-web perl-Authen-SASL perl-Clone perl-MLDBM perl-Net-Daemon perl-Net-SMTP-SSL perl-PlRPC perl-SQL-Statement

    The following package is suggested, but will not be installed:
    git-daemon

    35 new packages to install.
    Overall download size: 15.6 MiB. Already cached: 0 B After the operation, additional 56.7 MiB will be used.
    Continue? [y/n/? shows all options] (y): y

28. Install a package using wildcards. For example, install all php5 packages.

    # zypper in php5*

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...

    Problem: php5-5.6.1-18.1.x86_64 requires smtp_daemon, but this requirement cannot be provided
    uninstallable providers: exim-4.83-3.1.8.x86_64[openSUSE-13.2-0]
                   postfix-2.11.0-5.2.2.x86_64[openSUSE-13.2-0]
                   sendmail-8.14.9-2.2.2.x86_64[openSUSE-13.2-0]
                   exim-4.83-3.1.8.i586[repo-oss]
                   msmtp-mta-1.4.32-2.1.3.i586[repo-oss]
                   postfix-2.11.0-5.2.2.i586[repo-oss]
                   sendmail-8.14.9-2.2.2.i586[repo-oss]
                   exim-4.83-3.1.8.x86_64[repo-oss]
                   msmtp-mta-1.4.32-2.1.3.x86_64[repo-oss]
                   postfix-2.11.0-5.2.2.x86_64[repo-oss]
                   sendmail-8.14.9-2.2.2.x86_64[repo-oss]
                   postfix-2.11.3-5.5.1.i586[repo-update]
                   postfix-2.11.3-5.5.1.x86_64[repo-update]
    Solution 1: Following actions will be done:
    do not install php5-5.6.1-18.1.x86_64
    do not install php5-pear-Auth_SASL-1.0.6-7.1.3.noarch
    do not install php5-pear-Horde_Http-2.0.1-6.1.3.noarch
    do not install php5-pear-Horde_Image-2.0.1-6.1.3.noarch
    do not install php5-pear-Horde_Kolab_Format-2.0.1-6.1.3.noarch
    do not install php5-pear-Horde_Ldap-2.0.1-6.1.3.noarch
    do not install php5-pear-Horde_Memcache-2.0.1-7.1.3.noarch
    do not install php5-pear-Horde_Mime-2.0.2-6.1.3.noarch
    do not install php5-pear-Horde_Oauth-2.0.0-6.1.3.noarch
    do not install php5-pear-Horde_Pdf-2.0.1-6.1.3.noarch
    ....

29. Install a package (say lamp_server) using a pattern (group of packages).

    # zypper in -t pattern lamp_server

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...

    The following 29 NEW packages are going to be installed:
    apache2 apache2-doc apache2-example-pages apache2-mod_perl apache2-prefork patterns-openSUSE-lamp_server perl-Data-Dump perl-Encode-Locale perl-File-Listing perl-HTML-Parser
    perl-HTML-Tagset perl-HTTP-Cookies perl-HTTP-Daemon perl-HTTP-Date perl-HTTP-Message perl-HTTP-Negotiate perl-IO-HTML perl-IO-Socket-SSL perl-libwww-perl perl-Linux-Pid
    perl-LWP-MediaTypes perl-LWP-Protocol-https perl-Net-HTTP perl-Net-SSLeay perl-Tie-IxHash perl-TimeDate perl-URI perl-WWW-RobotRules yast2-http-server

    The following NEW pattern is going to be installed:
    lamp_server

    The following 10 recommended packages were automatically selected:
    apache2 apache2-doc apache2-example-pages apache2-mod_perl apache2-prefork perl-Data-Dump perl-IO-Socket-SSL perl-LWP-Protocol-https perl-TimeDate yast2-http-server

    29 new packages to install.
    Overall download size: 7.2 MiB. Already cached: 1.2 MiB After the operation, additional 34.7 MiB will be used.
    Continue? [y/n/? shows all options] (y):

30. Install a package (say nano) and remove a package (say vi) in one go.

    # zypper in nano -vi

    Loading repository data...
    Reading installed packages...
    '-vi' not found in package names. Trying capabilities.
    Resolving package dependencies...

    The following 2 NEW packages are going to be installed:
    nano nano-lang

    The following package is going to be REMOVED:
    vim

    The following recommended package was automatically selected:
    nano-lang

    2 new packages to install, 1 to remove.
    Overall download size: 550.0 KiB. Already cached: 0 B After the operation, 463.3 KiB will be freed.
    Continue? [y/n/? shows all options] (y):
    ...
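In the command above, ‘-vi’ is read as a capability to remove. Zypper also accepts explicit ‘+’ and ‘-’ prefixes on package names, and a ‘--’ separator keeps a leading ‘-’ from being parsed as an option. A preview sketch (`echo` is kept in so nothing is actually installed; drop it and run as root to apply):

```shell
# Preview of the same install-and-remove request written with explicit
# +/- prefixes; `--` guards the leading '-' of the removal argument.
echo zypper in -- +nano -vi
```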

31. Install an rpm package (say teamviewer).

    # zypper in teamviewer*.rpm

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...

    The following 24 NEW packages are going to be installed:
    alsa-oss-32bit fontconfig-32bit libasound2-32bit libexpat1-32bit libfreetype6-32bit libgcc_s1-gcc49-32bit libICE6-32bit libjpeg62-32bit libpng12-0-32bit libpng16-16-32bit libSM6-32bit
    libuuid1-32bit libX11-6-32bit libXau6-32bit libxcb1-32bit libXdamage1-32bit libXext6-32bit libXfixes3-32bit libXinerama1-32bit libXrandr2-32bit libXrender1-32bit libXtst6-32bit
    libz1-32bit teamviewer

    The following recommended package was automatically selected:
    alsa-oss-32bit

    24 new packages to install.
    Overall download size: 41.2 MiB. Already cached: 0 B After the operation, additional 119.7 MiB will be used.
    Continue? [y/n/? shows all options] (y):
    ..

#### Remove a Package with Zypper ####

32. To remove any package, you can use the ‘zypper remove‘ or ‘zypper rm‘ command. For example, to remove a package (say apache2), run:

    # zypper remove apache2
    OR
    # zypper rm apache2

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...

    The following 2 packages are going to be REMOVED:
    apache2 apache2-prefork

    2 packages to remove.
    After the operation, 4.2 MiB will be freed.
    Continue? [y/n/? shows all options] (y): y
    (1/2) Removing apache2-2.4.10-19.1 ........................................................................[done]
    (2/2) Removing apache2-prefork-2.4.10-19.1 ................................................................[done]

#### Updating Packages using Zypper ####

33. Update all packages. You may use the ‘zypper update‘ or ‘zypper up‘ command.

    # zypper up
    OR
    # zypper update

    Loading repository data...
    Reading installed packages...
    Nothing to do.
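For unattended runs (say, a nightly cron job), zypper's global --non-interactive option answers every prompt with the default. A preview sketch (`echo` is kept in so nothing is changed; drop it and run as root to perform the update):

```shell
# Preview of an unattended full update; --non-interactive takes the
# default answer for every prompt, so no terminal input is needed.
echo zypper --non-interactive up
```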

34. Update specific packages (say apache2 and openssh).

    # zypper up apache2 openssh

    Loading repository data...
    Reading installed packages...
    No update candidate for 'apache2-2.4.10-19.1.x86_64'. The highest available version is already installed.
    No update candidate for 'openssh-6.6p1-5.1.3.x86_64'. The highest available version is already installed.
    Resolving package dependencies...

    Nothing to do.

35. Install a package (say mariadb) if it is not installed; if it is already installed, update it.

    # zypper in mariadb

    Loading repository data...
    Reading installed packages...
    'mariadb' is already installed.
    No update candidate for 'mariadb-10.0.13-2.6.1.x86_64'. The highest available version is already installed.
    Resolving package dependencies...

    Nothing to do.
|
||||
|
||||
#### Install Source and Build Dependencies ####
|
||||
|
||||
You may use ‘zypper source-install‘ or ‘zypper si‘ commands to build packages from source.
|
||||
|
||||
36. Install source packages and build their dependencies for a package (say mariadb).
|
||||
|
||||
# zypper si mariadb
|
||||
|
||||
Reading installed packages...
|
||||
Loading repository data...
|
||||
Resolving package dependencies...
|
||||
|
||||
The following 36 NEW packages are going to be installed:
|
||||
autoconf automake bison cmake cpp cpp48 gcc gcc48 gcc48-c++ gcc-c++ libaio-devel libarchive13 libasan0 libatomic1-gcc49 libcloog-isl4 libedit-devel libevent-devel libgomp1-gcc49 libisl10
|
||||
libitm1-gcc49 libltdl7 libmpc3 libmpfr4 libopenssl-devel libstdc++48-devel libtool libtsan0-gcc49 m4 make ncurses-devel pam-devel readline-devel site-config tack tcpd-devel zlib-devel
|
||||
|
||||
The following source package is going to be installed:
|
||||
mariadb
|
||||
|
||||
36 new packages to install, 1 source package.
|
||||
Overall download size: 71.5 MiB. Already cached: 129.5 KiB After the operation, additional 183.9 MiB will be used.
|
||||
Continue? [y/n/? shows all options] (y): y
|
||||
|
||||
37. Install only the source for a package (say mariadb).
|
||||
|
||||
# zypper in -D mariadb
|
||||
|
||||
Loading repository data...
|
||||
Reading installed packages...
|
||||
'mariadb' is already installed.
|
||||
No update candidate for 'mariadb-10.0.13-2.6.1.x86_64'. The highest available version is already installed.
|
||||
Resolving package dependencies...
|
||||
|
||||
Nothing to do.
|
||||
|
||||
38. Install only the build dependencies for a packages (say mariadb).
|
||||
|
||||
# zypper si -d mariadb
|
||||
|
||||
Reading installed packages...
|
||||
Loading repository data...
|
||||
Resolving package dependencies...
|
||||
|
||||
The following 36 NEW packages are going to be installed:
|
||||
autoconf automake bison cmake cpp cpp48 gcc gcc48 gcc48-c++ gcc-c++ libaio-devel libarchive13 libasan0 libatomic1-gcc49 libcloog-isl4 libedit-devel libevent-devel libgomp1-gcc49 libisl10
|
||||
libitm1-gcc49 libltdl7 libmpc3 libmpfr4 libopenssl-devel libstdc++48-devel libtool libtsan0-gcc49 m4 make ncurses-devel pam-devel readline-devel site-config tack tcpd-devel zlib-devel
|
||||
|
||||
The following package is recommended, but will not be installed due to conflicts or dependency issues:
|
||||
readline-doc
|
||||
|
||||
36 new packages to install.
|
||||
Overall download size: 33.7 MiB. Already cached: 129.5 KiB After the operation, additional 144.3 MiB will be used.
|
||||
Continue? [y/n/? shows all options] (y): y
|
||||
|
||||
#### Zypper in Scripts and Applications ####
|
||||
|
||||
39. Install a Package (say mariadb) without interaction of user.
|
||||
|
||||
# zypper --non-interactive in mariadb
|
||||
|
||||
Loading repository data...
|
||||
Reading installed packages...
|
||||
'mariadb' is already installed.
|
||||
No update candidate for 'mariadb-10.0.13-2.6.1.x86_64'. The highest available version is already installed.
|
||||
Resolving package dependencies...
|
||||
|
||||
Nothing to do.
|
||||
|
||||
40. Remove a Package (say mariadb) without interaction of user.
|
||||
|
||||
# zypper --non-interactive rm mariadb
|
||||
|
||||
Loading repository data...
|
||||
Reading installed packages...
|
||||
Resolving package dependencies...
|
||||
|
||||
The following package is going to be REMOVED:
|
||||
mariadb
|
||||
|
||||
1 package to remove.
|
||||
After the operation, 71.8 MiB will be freed.
|
||||
Continue? [y/n/? shows all options] (y): y
|
||||
(1/1) Removing mariadb-10.0.13-2.6.1 .............................................................................[done]
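For provisioning scripts or cron jobs, the non-interactive mode can be wrapped in a small helper. This is only a sketch of my own; the `ensure_installed` function and the `ZYPPER_CMD` override are assumptions, not part of zypper itself.

```shell
# Hypothetical helper for unattended scripts. ZYPPER_CMD is overridable so
# the function can be exercised without a real zypper (e.g., in tests).
ZYPPER_CMD=${ZYPPER_CMD:-"zypper --non-interactive --quiet"}

ensure_installed() {
    # Install (or update) each named package without prompting.
    # Stop at the first failure so the calling script notices.
    for pkg in "$@"; do
        if ! $ZYPPER_CMD in "$pkg"; then
            echo "failed to install $pkg" >&2
            return 1
        fi
    done
}
```

Because ‘zypper in‘ also updates an already-installed package (see item 35), the same helper covers both the install and the update case.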

41. Output zypper results in XML.

    # zypper --xmlout

    Usage:
        zypper [--global-options] <command> [--command-options] [arguments]

    Global Options
    ....

42. Generate quiet output during installation.

    # zypper --quiet in mariadb

    The following NEW package is going to be installed:
      mariadb

    1 new package to install.
    Overall download size: 0 B. Already cached: 7.8 MiB After the operation, additional 71.8 MiB will be used.
    Continue? [y/n/? shows all options] (y):
    ...

43. Generate quiet output during removal.

    # zypper --quiet rm mariadb

44. Auto-agree to licenses/agreements.

    # zypper patch --auto-agree-with-licenses

    Loading repository data...
    Reading installed packages...
    Resolving package dependencies...

    Nothing to do.
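Combined with the non-interactive switches shown earlier, this makes zypper suitable for scheduled maintenance. As an illustration only (the file path, schedule, and log location below are assumptions of mine, not from the article), a cron entry could apply patches nightly:

```
# /etc/cron.d/zypper-patch  (hypothetical file)
# Apply pending patches unattended at 03:30 every night and keep a log.
30 3 * * * root zypper --non-interactive patch --auto-agree-with-licenses >> /var/log/zypper-patch.log 2>&1
```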

#### Clean Zypper Cache and View History ####

45. If you want to clean only the zypper cache, you can use the following command.

    # zypper clean

    All repositories have been cleaned up.

If you want to clean the metadata and package caches at once, pass ‘--all/-a‘ to zypper:

    # zypper clean -a

    All repositories have been cleaned up.

46. Every package installed, updated, or removed through zypper is logged in /var/log/zypp/history. You may cat the file to view it, or filter it to get a custom output.

    # cat /var/log/zypp/history

    2015-05-07 15:43:03|install|boost-license1_54_0|1.54.0-10.1.3|noarch||openSUSE-13.2-0|0523b909d2aae5239f9841316dafaf3a37b4f096|
    2015-05-07 15:43:03|install|branding-openSUSE|13.2-3.6.1|noarch||openSUSE-13.2-0|6609def94b1987bf3f90a9467f4f7ab8f8d98a5c|
    2015-05-07 15:43:03|install|bundle-lang-common-en|13.2-3.3.1|noarch||openSUSE-13.2-0|ca55694e6fdebee6ce37ac7cf3725e2aa6edc342|
    2015-05-07 15:43:03|install|insserv-compat|0.1-12.2.2|noarch||openSUSE-13.2-0|6160de7fbf961a279591a83a1550093a581214d9|
    2015-05-07 15:43:03|install|libX11-data|1.6.2-5.1.2|noarch||openSUSE-13.2-0|f1cb58364ba9016c1f93b1a383ba12463c56885a|
    2015-05-07 15:43:03|install|libnl-config|3.2.25-2.1.2|noarch||openSUSE-13.2-0|aab2ded312a781e93b739b418e3d32fe4e187020|
    2015-05-07 15:43:04|install|wireless-regdb|2014.06.13-1.2|noarch||openSUSE-13.2-0|be8cb16f3e92af12b5ceb977e37e13f03c007bd1|
    2015-05-07 15:43:04|install|yast2-trans-en_US|3.1.0-2.1|noarch||openSUSE-13.2-0|1865754e5e0ec3c149ac850b340bcca55a3c404d|
    2015-05-07 15:43:04|install|yast2-trans-stats|2.19.0-16.1.3|noarch||openSUSE-13.2-0|b107d2b3e702835885b57b04d12d25539f262d1a|
    2015-05-07 15:43:04|install|cracklib-dict-full|2.8.12-64.1.2|x86_64||openSUSE-13.2-0|08bd45dbba7ad44e3a4837f730be76f55ad5dcfa|
    ......
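The pipe-separated history format is easy to filter. Here is a small sketch of my own (the `installed_on` helper and the `HISTORY` override are not part of zypper) that lists the packages installed on a given date:

```shell
# Field 1 is the timestamp, field 2 the action, field 3 the package name.
# HISTORY is overridable so the function can be tried on a copy of the log.
HISTORY=${HISTORY:-/var/log/zypp/history}

installed_on() {
    awk -F'|' -v day="$1" \
        'index($1, day) == 1 && $2 == "install" { print $3 }' "$HISTORY"
}
```

For example, `installed_on 2015-05-07` would print the package names from the sample lines above.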

#### Upgrade SUSE Using Zypper ####

47. You can use the ‘dist-upgrade‘ option with the zypper command to upgrade your current SUSE Linux to the most recent version.

    # zypper dist-upgrade

    You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
    Building repository 'openSUSE-13.2-0' cache .....................................................................[done]
    Retrieving repository 'openSUSE-13.2-Debug' metadata ............................................................[done]
    Building repository 'openSUSE-13.2-Debug' cache .................................................................[done]
    Retrieving repository 'openSUSE-13.2-Non-Oss' metadata ..........................................................[done]
    Building repository 'openSUSE-13.2-Non-Oss' cache ...............................................................[done]

That’s all for now. We hope this article helps you manage your SUSE systems and servers, especially if you are new to them. If you feel we left out certain commands (humans are error-prone), you may provide feedback in the comments so that we can update the article. Keep connected, keep commenting, stay tuned. Kudos!

--------------------------------------------------------------------------------

via: http://www.tecmint.com/zypper-commands-to-manage-suse-linux-package-management/

Author: [Avishek Kumar][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/installation-of-suse-linux-enterprise-server-12/
@ -0,0 +1,188 @@

How to Install Percona Server on CentOS 7
================================================================================
In this article we are going to learn about Percona Server, an open-source drop-in replacement for MySQL and also for MariaDB. Its InnoDB database engine makes it very attractive and a good alternative if you need performance, reliability, and a cost-efficient solution.

In the following sections I am going to cover the installation of Percona Server on CentOS 7. I will also cover the steps needed to back up your current data and configuration, and how to restore your backup.

### Table of contents ###

1. What is Percona and why use it
1. Backup your databases
1. Remove the previous SQL server
1. Installing the Percona binaries
1. Configuring Percona
1. Securing your environment
1. Restore your backup

### 1. What is Percona and why use it ###

Percona is an open-source alternative to the MySQL and MariaDB databases. It is a fork of MySQL with many improvements and unique features that make it more reliable, powerful, and faster than MySQL, yet fully compatible with it; you can even use replication between Oracle's MySQL and Percona.

#### Features exclusive to Percona ####

- Partitioned Adaptive Hash Search
- Fast Checksum Algorithm
- Buffer Pool Pre-Load
- Support for FlashCache

#### MySQL Enterprise and Percona specific features ####

- Import Tables From Different Servers
- PAM authentication
- Audit Log
- Threadpool

Now that you are pretty excited to see all these good things together, we are going to show you how to install and do a basic configuration of Percona Server.

### 2. Backup your databases ###

The following command creates a mydatabases.sql file with the SQL commands to recreate/restore the salesdb and employeedb databases. Replace the database names to reflect your setup; skip this step if this is a brand-new setup.

    mysqldump -u root -p --databases employeedb salesdb > mydatabases.sql

Copy the current configuration file; you can also skip this on fresh setups.

    cp my.cnf my.cnf.bkp
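The dump step above can be wrapped in a small, date-stamped helper. This is a sketch of my own, not part of Percona's tooling; the `DUMP_CMD` override exists so the function can be tried without a running server.

```shell
# Dump the named databases into a file stamped with today's date,
# e.g. mydatabases-2015-06-01.sql, and print the file name on success.
DUMP_CMD=${DUMP_CMD:-"mysqldump -u root -p"}

backup_databases() {
    out="mydatabases-$(date +%F).sql"
    $DUMP_CMD --databases "$@" > "$out" && echo "$out"
}
```

A dated file name means repeated runs keep older dumps around instead of silently overwriting the last one.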

### 3. Remove your previous SQL Server ###

Stop MySQL/MariaDB if it is running.

    systemctl stop mysql.service

Uninstall MariaDB and MySQL.

    yum remove MariaDB-server MariaDB-client MariaDB-shared mysql mysql-server

Move/rename the MariaDB files in **/var/lib/mysql**. This is safer and faster than just removing them; think of it as a second-level instant backup. :)

    mv /var/lib/mysql /var/lib/mysql_mariadb

### 4. Installing Percona binaries ###

You can choose from a number of options for installing Percona. On a CentOS system it is generally a better idea to use yum or RPM, so these are the ways covered in this article; compiling and installing from source is not covered here.

Installing from the yum repository:

First you need to set up Percona's yum repository with this:

    yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm

And then install Percona with:

    yum install Percona-Server-client-56 Percona-Server-server-56

The above command installs the Percona server and clients, shared libraries, possibly Perl and Perl modules such as DBD::mysql if they are not already installed, and also other dependencies as needed.

Installing from RPM packages:

We can download all the rpm packages with the help of wget:

    wget -r -l 1 -nd -A rpm -R "*devel*,*debuginfo*" \
      http://www.percona.com/downloads/Percona-Server-5.5/Percona-Server-5.5.42-37.1/binary/redhat/7/x86_64/

And with the rpm utility, you install all the packages at once:

    rpm -ivh Percona-Server-server-55-5.5.42-rel37.1.el7.x86_64.rpm \
      Percona-Server-client-55-5.5.42-rel37.1.el7.x86_64.rpm \
      Percona-Server-shared-55-5.5.42-rel37.1.el7.x86_64.rpm

Note the backslash '\' at the end of each line in the above commands; it continues the command on the next line. If you install individual packages, remember that to meet dependencies, the shared package must be installed before the client, and the client before the server.

### 5. Configuring Percona Server ###

#### Restoring previous configuration ####

As we are moving from MariaDB, you can just restore the backup of the my.cnf file that you made in the earlier steps.

    cp /etc/my.cnf.bkp /etc/my.cnf

#### Creating a new my.cnf ####

If you need a new configuration file that fits your needs, or if you did not make a copy of my.cnf, you can use a configuration wizard, which will generate one for you through a few simple steps.

Here is the sample my.cnf file that comes with the Percona-Server package:

    # Percona Server template configuration

    [mysqld]
    #
    # Remove leading # and set to the amount of RAM for the most important data
    # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
    # innodb_buffer_pool_size = 128M
    #
    # Remove leading # to turn on a very important data integrity option: logging
    # changes to the binary log between backups.
    # log_bin
    #
    # Remove leading # to set options mainly useful for reporting servers.
    # The server defaults are faster for transactions and fast SELECTs.
    # Adjust sizes as needed, experiment to find the optimal values.
    # join_buffer_size = 128M
    # sort_buffer_size = 2M
    # read_rnd_buffer_size = 2M
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock

    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0

    [mysqld_safe]
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid

After making your my.cnf file fit your needs, it's time to start the service:

    systemctl restart mysql.service

If everything goes fine, your server is now up and ready to receive SQL commands. You can try the following command to check:

    mysql -u root -p -e 'SHOW VARIABLES LIKE "version_comment"'

If you can't start the service, you can look for the reason in **/var/log/mysql/mysqld.log**; this file is set by the **log-error** option in the **[mysqld_safe]** section of my.cnf.

    tail /var/log/mysql/mysqld.log

You can also take a look at a file inside **/var/lib/mysql/** named in the form **[hostname].err**, as in the following example:

    tail /var/lib/mysql/centos7.err

If this also fails to show what is wrong, you can try strace:

    yum install strace && systemctl stop mysql.service && strace -f mysqld_safe

The above command is extremely verbose and its output is quite low level, but in most cases it can show you the reason the service won't start.

### 6. Securing your environment ###

OK, you now have your RDBMS ready to receive SQL queries, but it's not a good idea to put your precious data on a server without minimum security. It's better to make it safer with mysql_secure_installation; this utility helps in removing unused default features, sets the main root password, and restricts access for that user. Just invoke it from the shell and follow the instructions on the screen.

    mysql_secure_installation

### 7. Restore your backup ###

If you are coming from a previous setup, now you can restore your databases; just feed the dump file to the mysql client.

    mysql -u root -p < mydatabases.sql

Congratulations! You just installed Percona on your CentOS Linux, and your server is now fully ready for use. You can use it as if it were MySQL, and your services remain fully compatible with it.

### Conclusion ###

There are a lot of things to configure in order to achieve better performance, but here are some straightforward options to improve your setup. When using the InnoDB engine it's also a good idea to turn the **innodb_file_per_table** option **on**; it distributes table indexes on a file-per-table basis, so that each table has its own index file. This makes the overall system more robust and easier to repair.

Another option to keep in mind is **innodb_buffer_pool_size**. The InnoDB buffer pool should be large enough for your datasets, and some value **between 70% and 80%** of the total available memory is reasonable on a dedicated server.
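As a rough aid for that rule of thumb, total memory can be read from /proc/meminfo. The helper below is an illustration of my own (the 75% figure simply splits the 70 to 80 percent range); the `MEMINFO` override exists so it can be tried on a sample file.

```shell
# Print a suggested innodb_buffer_pool_size in MiB: 75% of MemTotal.
MEMINFO=${MEMINFO:-/proc/meminfo}

buffer_pool_mb() {
    awk '/^MemTotal:/ { printf "%d\n", ($2 * 3 / 4) / 1024 }' "$MEMINFO"
}
```

On a host with 8 GiB of RAM this suggests roughly 6144 MiB, which you would then set as `innodb_buffer_pool_size = 6144M` in the [mysqld] section of my.cnf.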

By setting **innodb-flush-method** to **O_DIRECT** you bypass the operating system's write cache. If you have a **RAID** controller with its own cache, this should be set, as caching is already done at a lower level, and it tends to improve performance.

If your data is not that critical and you don't need fully **ACID**-compliant transactions, you can set the **innodb_flush_log_at_trx_commit** option to 2; this will also improve performance.

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/percona-server-centos-7/

Author: [Carlos Alberto][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://linoxide.com/author/carlosal/
@ -0,0 +1,179 @@

Install ‘Tails 1.4′ Linux Operating System to Preserve Privacy and Anonymity
================================================================================
In today's Internet world we perform most of our tasks online, be it ticket booking, money transfers, studies, business, entertainment, social networking, and what not. We spend a major part of our time online every day. It has been getting harder to remain anonymous with each passing day, especially when backdoors are being planted by organizations like the NSA (National Security Agency), which pokes its nose into everything we come across online. We have little or no privacy online. All searches are logged on the basis of the user's Internet surfing and machine activity.

The wonderful Tor browser is used by millions to surf the web anonymously; however, it is not difficult to trace your browsing habits, and hence Tor alone is not a guarantee of your safety online. You may like to check Tor's features and installation instructions here:

- [Anonymous Web Browsing using Tor][1]

There is an operating system named Tails from the Tor Project. Tails (The Amnesic Incognito Live System) is a live operating system based on the Debian Linux distribution, mainly focused on preserving privacy and anonymity on the web: all its outgoing connections are forced to pass through Tor, and direct (non-anonymous) requests are blocked. The system is designed to run from any bootable media, be it a USB stick or a DVD.

The latest stable release of Tails OS is 1.4, which was released on May 12, 2015. Powered by the open-source monolithic Linux kernel and built on top of Debian GNU/Linux, Tails is aimed at the personal-computer market and includes GNOME 3 as the default user interface.

#### Features of Tails OS 1.4 ####

- Tails is a free operating system, free as in beer and free as in speech.
- Built on top of Debian GNU/Linux, the most widely used universal OS.
- A security-focused distribution.
- Windows 8 camouflage.
- Needs no installation: browse the Internet anonymously using the live Tails CD/DVD.
- Leaves no trace on the computer while Tails is running.
- Advanced cryptographic tools to encrypt everything that matters: files, emails, etc.
- Sends and receives traffic through the Tor network.
- In a true sense, it provides privacy for anyone, anywhere.
- Comes with several applications ready to be used from the live environment.
- All the software comes pre-configured to connect to the Internet only through the Tor network.
- Any application that tries to connect to the Internet without the Tor network is blocked automatically.
- Restricts anyone watching what sites you visit, and prevents sites from learning your geographical location.
- Connects to websites that are blocked and/or censored.
- Specially designed not to use the parent OS's disk space, even when there is free swap space.
- The whole OS loads into RAM and is flushed when we reboot/shutdown, so no trace of the session remains.
- Advanced security implementation: USB disk encryption, HTTPS, and encrypting and signing of emails and documents.

#### What can you expect in Tails 1.4 ####

- Tor Browser 4.5 with a security slider.
- Tor upgraded to version 0.2.6.7.
- Several security holes fixed.
- Many bugs fixed and patches applied to applications like curl, OpenJDK 7, the Tor network, openldap, etc.

To get a complete list of changes, you may visit the changelog [HERE][2].

**Note**: It is strongly recommended to upgrade to Tails 1.4 if you’re using any older version of Tails.

#### Why should I use the Tails operating system ####

You need Tails because you need:

- Freedom from network surveillance
- To defend freedom, privacy, and confidentiality
- Security against traffic analysis

This tutorial will walk through the installation of Tails 1.4 OS with a short review.

### Tails 1.4 Installation Guide ###

1. To download the latest Tails OS 1.4, you may use the wget command to download it directly.

    $ wget http://dl.amnesia.boum.org/tails/stable/tails-i386-1.4/tails-i386-1.4.iso

Alternatively you may download the Tails 1.4 ISO image directly, or use a torrent client to pull the ISO image file for you. Here are links to both downloads:

- [tails-i386-1.4.iso][3]
- [tails-i386-1.4.torrent][4]

2. After downloading, verify the ISO's integrity by matching its SHA256 checksum against the SHA256SUM provided on the official website.

    $ sha256sum tails-i386-1.4.iso

    339c8712768c831e59c4b1523002b83ccb98a4fe62f6a221fee3a15e779ca65d

If you are interested in OpenPGP, checking the Tails signing key against the Debian keyring, or anything else related to Tails' cryptographic signatures, you may like to point your browser [HERE][5].
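The comparison with the published value can be scripted so that a mismatch fails loudly instead of relying on eyeballing a long hex string. A small sketch (the function name is my own):

```shell
# Compare a file's SHA256 against an expected value; non-zero exit on mismatch.
verify_sha256() {
    file=$1
    expected=$2
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK: $file"
    else
        echo "MISMATCH: $file (got $actual)" >&2
        return 1
    fi
}

# Example with the checksum published for Tails 1.4:
# verify_sha256 tails-i386-1.4.iso 339c8712768c831e59c4b1523002b83ccb98a4fe62f6a221fee3a15e779ca65d
```

Note that a matching checksum only proves download integrity; the OpenPGP signature check linked above is what ties the image to the Tails developers.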

3. Next you need to write the image to a USB stick or DVD. You may like to check the article [How to Create a Live Bootable USB][6] for details on how to make a flash drive bootable and write the ISO to it.

4. Insert the Tails OS bootable flash drive or DVD into the drive and boot from it (select it in the BIOS boot menu). The first screen shows two options to select from, ‘Live‘ and ‘Live (failsafe)‘. Select ‘Live‘ and press Enter.

Tails Boot Menu

5. Just before login you have two options. Click ‘More Options‘ if you want to configure and set advanced options; otherwise click ‘No‘.

Tails Welcome Screen

6. After clicking the advanced options, you need to set up a root password. This is important if you want to upgrade the system. This root password is valid only until you shut down/reboot the machine.

You may also enable Windows camouflage if you want to run this OS in a public place, so that it appears as if you are running the Windows 8 operating system. A good option indeed, isn't it? You also have options to configure the network and the MAC address. Click ‘Login‘ when done!

Tails OS Configuration

7. This is the Tails GNU/Linux OS camouflaged with the Windows skin.

Tails Windows Look

8. Tails will start the Tor network in the background. Check the notification in the top-right corner of the screen: Tor is Ready / You are now connected to the Internet.

Also check what it contains under the Internet menu. Notice that it has a Tor Browser (safe) and an Unsafe Web Browser (where incoming and outgoing data don’t pass through the Tor network), along with other applications.

Tails Menu and Tools

9. Click Tor and check your IP address. It confirms that my physical location is not shared and my privacy is intact.

Check Privacy on Tails

10. You may invoke the Tails Installer to Clone & Install, Clone & Upgrade, or Upgrade from ISO.

Tails Installer Options

11. The other option was to select Tails without any advanced options, just before login (check step #5 above).

Tails Without Advanced Options

12. You will be logged in to the GNOME 3 desktop environment.

Tails Gnome Desktop

13. If you click to launch the unsafe browser, whether in camouflage or not, you will be notified.

Tails Browsing Notification

If you proceed, this is what you get in the browser.

Tails Browsing Alert

#### Is Tails for me? ####

To answer that question, first answer a few others.

- Do you need your privacy to stay intact while you are online?
- Do you want to remain hidden from identity thieves?
- Do you want to keep others from poking their nose into your private chats online?
- Do you want to keep your geographical location hidden from everyone out there?
- Do you carry out banking transactions online?
- Are you unhappy with the censorship imposed by governments and ISPs?

If the answer to any of the above questions is ‘YES‘, you probably need Tails. If the answer to all of them is ‘NO‘, you perhaps don’t need it.

To know more about Tails, point your browser to the user documentation: [https://tails.boum.org/doc/index.en.html][7]

### Conclusion ###

Tails is a must-have OS for those who work in unsafe environments. It is a security-focused OS that nevertheless ships a bundle of applications: the GNOME desktop, Tor, Firefox (Iceweasel), Network Manager, Pidgin, Claws Mail, the Liferea feed aggregator, Gobby, Aircrack-ng, I2P.

It also contains several tools for encryption and privacy under the hood, viz., LUKS, GnuPG, PWGen, Shamir’s Secret Sharing, a virtual keyboard (against hardware keylogging), MAT, the KeePassX password manager, etc.

That’s all for now. Keep connected to Tecmint. Share your thoughts on the Tails GNU/Linux operating system. What do you think about the future of the project? Also test it locally and let us know your experience.

You may run it in [VirtualBox][8] as well. Remember, Tails loads the whole OS into RAM, so give the VM enough RAM to run it.

I tested it in a 1 GB environment and it worked without lagging. Thanks to all our readers for their support. Your cooperation is needed in making Tecmint a one-stop place for everything Linux. Kudos!

--------------------------------------------------------------------------------

via: http://www.tecmint.com/install-tails-1-4-linux-operating-system-to-preserve-privacy-and-anonymity/

Author: [Avishek Kumar][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/tor-browser-for-anonymous-web-browsing/
[2]:https://tails.boum.org/news/version_1.4/index.en.html
[3]:http://dl.amnesia.boum.org/tails/stable/tails-i386-1.4/tails-i386-1.4.iso
[4]:https://tails.boum.org/torrents/files/tails-i386-1.4.torrent
[5]:https://tails.boum.org/download/index.en.html#verify
[6]:http://www.tecmint.com/install-linux-from-usb-device/
[7]:https://tails.boum.org/doc/index.en.html
[8]:http://www.tecmint.com/install-virtualbox-on-redhat-centos-fedora/
@ -0,0 +1,56 @@

Square 2.0 Icon Pack Is Now More Beautiful
================================================================================

The elegant, modern [Square icon theme][1] was recently updated to version 2.0, which makes it more beautiful than ever. The Square icon pack is compatible with all the major desktop environments such as **Unity, GNOME, KDE, MATE**, and so on. This means you can use it on all popular Linux distributions such as Ubuntu, Fedora, Linux Mint, elementary OS, etc. The icon pack is estimated to contain more than 15,000 icons.

### Install the Square 2.0 icon pack in Linux ###

There are two variants of the Square icons, dark and light. Based on your preference, you can choose either of the two. For the sake of experimenting, I suggest you download both theme packs.

You can download the icon packs from the links below. The files are hosted on Google Drive, so don't be suspicious if you don't see a standard download website like [SourceForge][2].

- [Square Dark Icons][3]
- [Square Light Icons][4]

To use the icon theme, extract the downloaded files into the ~/.icons directory; create it if it doesn't exist. Once the files are in the right place, use a tool to change the icon theme, depending on your desktop environment. I have written a few tutorials on this topic before; feel free to refer to them if you need extra help:

- [How to change themes in Ubuntu Unity][5]
- [How to change themes in GNOME Shell][6]
- [How to change themes in Linux Mint][7]
- [How to change themes in elementary OS Freya][8]
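The manual steps (create ~/.icons if missing, then extract the archive into it) can be sketched as two commands. `ICONS_DIR` and the archive argument below are placeholders of my own: use the file you actually downloaded, and substitute `unzip` for `tar` if it is a zip archive.

```shell
# Unpack a downloaded icon-theme archive into the per-user icons directory.
ICONS_DIR=${ICONS_DIR:-"$HOME/.icons"}

install_icon_pack() {
    mkdir -p "$ICONS_DIR"
    tar -xf "$1" -C "$ICONS_DIR"
}
```

After this, the theme shows up in whatever theme-switching tool your desktop environment provides.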

### Try it out ###

Here is how the Square icons look on my Ubuntu 14.04. I used the [default Ubuntu 15.04 wallpaper][9] as the background.

A look at a few of the icons in the Square theme:

How do you find it? Do you think it is one of the [best icon themes for Ubuntu 14.04][10]? Would you share it, and look forward to more articles on customizing the Linux desktop?

--------------------------------------------------------------------------------

via: http://itsfoss.com/square-2-0-icon-pack-linux/

Author: [Abhishek][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)

[a]:http://itsfoss.com/author/abhishek/
[1]:http://gnome-look.org/content/show.php/Square?content=163513
[2]:http://sourceforge.net/
[3]:http://gnome-look.org/content/download.php?content=163513&id=1&tan=62806435
[4]:http://gnome-look.org/content/download.php?content=163513&id=2&tan=19789941
[5]:http://itsfoss.com/how-to-install-themes-in-ubuntu-13-10/
[6]:http://itsfoss.com/install-switch-themes-gnome-shell/
[7]:http://itsfoss.com/install-icon-linux-mint/
[8]:http://itsfoss.com/install-themes-icons-elementary-os-freya/
[9]:http://itsfoss.com/default-wallpapers-ubuntu-1504/
[10]:http://itsfoss.com/best-icon-themes-ubuntu-1404/
@ -1,40 +0,0 @@
|
||||
这个工具可以提醒你一个区域内的假面猎手接入点 (注:evil twin暂无相关翻译)
|
||||
===============================================================================
|
||||
**开发人员称,EvilAP_Defender甚至可以攻击流氓Wi-Fi接入点**
|
||||
|
||||
一个新的开源工具可以定期扫描一个区域,寻找流氓 Wi-Fi 接入点,并在发现时提醒网络管理员。
|
||||
|
||||
这个工具叫做 EvilAP_Defender,是专门设计用于监测攻击者配置的恶意接入点的,这些接入点冒用合法接入点的名字来诱导用户连接。
|
||||
|
||||
这类接入点被称做假面猎手(evil twin),黑客们可以借助它监听接入设备的互联网流量。这可以被用来窃取凭证、入侵网站等等。
|
||||
|
||||
大多数用户会设置他们的计算机和设备自动连接某些无线网络,比如家里或者工作单位的网络。尽管如此,当面对两个同名(即 SSID 相同,有时候甚至 MAC 地址也相同)的无线网络时,大多数设备会自动连接信号较强的那一个。
|
||||
|
||||
这使得假面猎手的攻击容易实现,因为SSID和BSSID都可以伪造。
|
||||
|
||||
[EvilAP_Defender][1] 由一位名叫 Mohamed Idris 的开发者用 Python 语言编写,发布在 GitHub 上。它可以使用一台计算机的无线网卡来发现流氓接入点,这些接入点会复制真实接入点的 SSID、BSSID,甚至是其他参数,如信道、加密算法、隐私协议和认证信息等。
|
||||
|
||||
该工具首先以学习模式运行,以便发现合法的接入点(AP),并将其加入白名单。然后切换到正常模式,开始扫描未授权的接入点。
|
||||
|
||||
如果一个恶意[AP]被发现了,该工具会用电子邮件提醒网络管理员,但是开发者也打算在未来加入短信提醒功能。
|
||||
|
||||
该工具还有一个保护模式,在这种模式下,它会对恶意接入点发起拒绝服务(DoS)攻击,为管理员采取防御措施赢得一些时间。
|
||||
|
||||
“DoS 攻击只会针对那些拥有相同 SSID、但 BSSID(AP 的 MAC 地址)不同或工作在不同信道上的恶意 AP,”Idris 在这款工具的文档中说道,“这是为了避免攻击到你自己的合法网络。”
|
||||
|
||||
尽管如此,用户应该切记:在许多国家,攻击别人的接入点,哪怕是疑似由攻击者操控的恶意接入点,很多时候都是非法的。
|
||||
|
||||
为了运行这款工具,需要 Aircrack-ng 无线网络工具套件、一块 Aircrack-ng 支持的无线网卡、MySQL 和 Python 运行环境。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.infoworld.com/article/2905725/security0/this-tool-can-alert-you-about-evil-twin-access-points-in-the-area.html
|
||||
|
||||
作者:[Lucian Constantin][a]
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.infoworld.com/author/Lucian-Constantin/
|
||||
[1]:https://github.com/moha99sa/EvilAP_Defender/blob/master/README.TXT
|
@ -0,0 +1,43 @@
|
||||
|
||||
|
||||
Synfig Studio 1.0 —— 开源动画动真格的了
|
||||
================================================================================
|
||||

|
||||
|
||||
**现在可以下载 Synfig Studio 这个自由、开源的2D动画软件的全新版本了。 **
|
||||
|
||||
在这个跨平台软件首次发行一年之后,Synfig Studio 1.0 带来了一套全新和改进过的功能,兑现了它所承诺的“创造电影级别的动画的产业级解决方案”。
|
||||
|
||||
众多功能之中最显眼的是一个改进过的用户界面,据项目开发者说,它用起来‘更简单’、‘更直观’。客户端添加了新的**单窗口模式**,让界面更整洁,而且**为了使用最新的 GTK3 库而被重新制作**。
|
||||
|
||||
在功能方面有几个值得注意的变化,包括新加的全功能骨骼系统。
|
||||
|
||||
这套**关节和转轴的‘骨骼’构架**非常适合 2D 剪纸动画,再配上这个版本新加的复杂的变形控制系统,或是 Synfig 广受欢迎的‘关键帧自动插值’(即画面与画面之间的自动过渡),制作动画应该会变得非常有效率。
|
||||
|
||||
注:youtube视频
|
||||
<iframe width="750" height="422" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/M8zW1qCq8ng?feature=oembed"></iframe>
|
||||
|
||||
新的无损剪切工具、摩擦力效果和对逐帧位图动画的支持,可能会有助于释放开源动画师们的创造力,更别说新加的可将动画时间线和声音同步的声音层了!
|
||||
|
||||
### 下载 Synfig Studio 1.0 ###
|
||||
|
||||
Synfig Studio 并不是面向所有人的工具套件,但这个最新发行版的一系列改进,应该能吸引一些动画制作者来试一试这个软件。
|
||||
|
||||
如果你想亲自体验一下开源动画制作软件是什么样的,你可以通过下面的链接,直接从项目的 Sourceforge 页面下载适用于 Ubuntu 的最新版本安装包。
|
||||
|
||||
- [Download Synfig 1.0 (64bit) .deb Installer][1]
|
||||
- [Download Synfig 1.0 (32bit) .deb Installer][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/04/synfig-studio-new-release-features
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[H-mudcup](https://github.com/H-mudcup)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:http://sourceforge.net/projects/synfig/files/releases/1.0/linux/synfigstudio_1.0_amd64.deb/download
|
||||
[2]:http://sourceforge.net/projects/synfig/files/releases/1.0/linux/synfigstudio_1.0_x86.deb/download
|
@ -0,0 +1,110 @@
|
||||
什么是好的命令行HTTP客户端?
|
||||
==============================================================================
|
||||
整体大于各部分之和,这是出自希腊哲学家和科学家亚里士多德的名言。这句话尤其适用于 Linux。在我看来,Linux 最强大的地方之一就是它的协作性。Linux 的实用性并不仅仅源自大量的开源(命令行)程序,而是源自这些程序的协同工作,有时是与更大型的应用结合使用。
|
||||
|
||||
Unix 哲学引发了一场“软件工具”运动,关注开发简洁、基础、干净、模块化和扩展性好的代码,并可以运用于其他的项目。这种哲学为许多 Linux 项目留下了重要的印记。
|
||||
|
||||
好的开源开发者在编写程序时,会确保该程序尽可能运行正确,同时能与其他程序很好地协作。目标就是让使用者拥有一堆方便的工具,每一个都力求只把一件事做好。许多程序本身也能独立工作得很好。
|
||||
|
||||
这篇文章讨论 3 个开源命令行 HTTP 客户端。这些客户端可以让你使用命令行从互联网上下载文件;但同时,它们也可以用于许多有意思的场合,如测试、调试 HTTP 服务器或网络应用,以及与之交互。对于 HTTP 架构师和 API 设计人员来说,使用命令行操作 HTTP 是一个值得花时间学习的技能。如果你需要频繁地调试 API,那么 HTTPie 和 cURL 将是无价之宝。
|
||||
|
||||
-------------
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
HTTPie(发音 aych-tee-tee-pie)是一款开源命令行HTTP客户端。它是一个命令行界面,类cURL的工具。
|
||||
|
||||
该软件的目标是使与 Web 服务器的交互尽可能人性化。它提供了一个简单的 http 命令,允许使用简单且自然的语法发送任意的 HTTP 请求,并显示带颜色的输出。HTTPie 可以用于测试、调试以及与 HTTP 服务器的日常交互。
|
||||
|
||||
#### 功能包括:####
|
||||
|
||||
- 表达力强、直观的语法
|
||||
- 格式化的、带颜色区分的终端输出
|
||||
- 内建JSON支持
|
||||
- 表单和文件上传
|
||||
- HTTPS,代理和认证
|
||||
- 任意数据请求
|
||||
- 自定义 HTTP 头(header)
|
||||
- 持久会话
|
||||
- 类Wget下载
|
||||
- Python 2.6,2.7和3.x支持
|
||||
- Linux,Mac OS X 和 Windows支持
|
||||
- 支持插件
|
||||
- 帮助文档
|
||||
- 测试覆盖率
|
||||
|
||||
- 网站:[httpie.org][1]
|
||||
- 开发者: Jakub Roztočil
|
||||
- 许可证: 开源
|
||||
- 版本号: 0.9.2
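作为参考,下面是一段示意性的 HTTPie 会话(其中的 httpbin.org 只是常用的公共 HTTP 测试服务,此处仅作演示;命令能否实际运行取决于是否已安装 HTTPie 并能联网):

```
$ http GET httpbin.org/get                 # 发送 GET 请求,响应会被格式化并带颜色高亮
$ http POST httpbin.org/post name=tux      # 键=值 会被自动编码为 JSON 请求体
$ http --download httpbin.org/image/png    # 类 Wget 的下载模式
```

可以看到,它的语法确实比同类工具更接近自然语言。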
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
cURL 是一个开源命令行工具,用于使用 URL 语法传输数据,支持 DICT、FILE、FTP、FTPS、GOPHER、HTTP、HTTPS、IMAP、IMAPS、LDAP、LDAPS、POP3、POP3S、RTMP、RTSP、SCP、SFTP、SMTP、SMTPS、TELNET 和 TFTP。
|
||||
|
||||
cURL 支持 SSL 证书、HTTP POST、HTTP PUT、FTP 上传、基于 HTTP 表单的上传、代理、Cookie、用户名+密码认证(Basic、Digest、NTLM、Negotiate、Kerberos 等)、断点续传、代理隧道以及大量其他实用技巧。
|
||||
|
||||
#### 功能包括:####
|
||||
|
||||
- 配置文件支持
|
||||
- 单个命令行中可指定多个 URL
|
||||
- 支持 URL 通配(“globbing”): [0-13],{one,two,three}
|
||||
- 一个命令上传多个文件
|
||||
- 自定义最大传输速度
|
||||
- 重定向标准错误输出
|
||||
- Metalink支持
|
||||
|
||||
- 网站: [curl.haxx.se][2]
|
||||
- 开发者: Daniel Stenberg
|
||||
- 许可证: MIT/X 衍生许可证
|
||||
- 版本号: 7.42.0
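作为补充,下面用本地 file:// URL 给出一个 cURL 最基本用法的小例子(选用 file:// 只是为了演示时不依赖网络,换成 http:// 等 URL 用法相同):

```shell
# 准备一个演示用的本地文件
printf 'hello from curl\n' > /tmp/curl-demo.txt

# 按 URL 语法读取数据并输出到标准输出(-s 表示静默模式,不显示进度)
curl -s file:///tmp/curl-demo.txt

# -o 把响应保存到文件,相当于下载
curl -s -o /tmp/curl-demo-copy.txt file:///tmp/curl-demo.txt
cat /tmp/curl-demo-copy.txt
```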
|
||||
|
||||
----------
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
Wget 是一个从 Web 服务器获取信息的开源软件,其名字来源于 World Wide Web 和 get。Wget 支持 HTTP、HTTPS 和 FTP 协议,同时也可以通过 HTTP 代理获取信息。
|
||||
|
||||
Wget 可以根据 HTML 页面中的链接,创建远程网站的本地版本,完整重建原始站点的目录结构。这种方式被称为“递归下载(recursive downloading)”。
|
||||
|
||||
Wget 在设计上即使在低速或者不稳定的网络连接下也能稳定地工作。
|
||||
|
||||
功能包括:
|
||||
|
||||
- 使用REST和RANGE恢复中断的下载
|
||||
- 支持文件名通配符,可递归镜像目录
|
||||
- 多语言的基于NLS的消息文件
|
||||
- 可选择性地将下载文档里的绝对链接转换为相对链接,使得下载的文档之间可以在本地相互链接
|
||||
- 在大多数类UNIX操作系统和微软Windows上运行
|
||||
- 支持HTTP代理
|
||||
- 支持 HTTP Cookie
|
||||
- 支持持久的 HTTP 连接
|
||||
- 可无人值守地在后台运行
|
||||
- 镜像时,使用本地文件时间戳与远程文件对比,来决定是否需要重新下载文档
|
||||
|
||||
- 站点: [www.gnu.org/software/wget/][3]
|
||||
- 开发者: Hrvoje Niksic, Gordon Matzigkeit, Junio Hamano, Dan Harkless, and many others
|
||||
- 许可证: GNU GPL v3
|
||||
- 版本号: 1.16.3
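作为补充,下面是一个可以离线运行的 Wget 小演示:先用 python3 在本地起一个临时 HTTP 服务(这里的端口号和目录只是演示用的假设,且假设环境中有 python3),再用 -N 按时间戳下载页面:

```shell
# 准备演示站点并在 8123 端口启动一个临时 HTTP 服务
mkdir -p /tmp/wget-demo/site /tmp/wget-demo/out
echo 'hello wget' > /tmp/wget-demo/site/index.html
( cd /tmp/wget-demo/site && python3 -m http.server 8123 >/dev/null 2>&1 & echo $! > /tmp/wget-demo/srv.pid )
sleep 1

# -q 静默;-N 按时间戳决定是否重新下载;-P 指定保存目录
wget -q -N -P /tmp/wget-demo/out http://127.0.0.1:8123/index.html
cat /tmp/wget-demo/out/index.html

# 收尾:停掉临时服务
kill "$(cat /tmp/wget-demo/srv.pid)"
```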
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxlinks.com/article/20150425174537249/HTTPclients.html
|
||||
|
||||
作者:Frazer Kline
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://httpie.org/
|
||||
[2]:http://curl.haxx.se/
|
||||
[3]:https://www.gnu.org/software/wget/
|
@ -1,172 +0,0 @@
|
||||
|
||||
使用Observium来监控你的网络和服务器
|
||||
================================================================================
|
||||
### 简介###
|
||||
|
||||
在监控你的服务器、交换机或者物理设备时遇到过问题吗?**Observium** 可以满足你的需求。作为一个免费的监控系统,它可以帮助你远程监控你的服务器。它是一个由 PHP 编写的、基于自动发现 SNMP 的网络监控平台,支持非常广泛的网络硬件和操作系统,包括 Cisco、Windows、Linux、HP、NetApp 等。在此我会介绍如何在 Ubuntu 12.04 上搭建一个 **Observium** 服务器,并给出相应的步骤。
|
||||
|
||||

|
||||
|
||||
目前存在两种不同的 **Observium** 版本。
|
||||
|
||||
- Observium 社区版是一个使用 QPL 开源许可证的免费工具,这个版本是较小规模部署的最佳解决方案。该版本每 6 个月进行一次安全性更新。
|
||||
- 第二个版本是 Observium Professional,该版本通过基于 SVN 的机制发布,可以得到每日安全性更新。该工具适用于服务提供商和企业级部署。
|
||||
|
||||
更多信息可以通过 [Observium 官网][1] 获得。
|
||||
|
||||
### 系统需求###
|
||||
|
||||
为了安装 **Observium**,需要一台新安装好的服务器。**Observium** 是在 Ubuntu LTS 和 Debian 系统上进行开发的,所以推荐在 Ubuntu 或 Debian 上安装 **Observium**,因为在别的平台上可能会有一些小问题。
|
||||
|
||||
该文章会指导你如何在 Ubuntu 12.04 上安装 **Observium**。对于小型的 **Observium** 安装,推荐的基础配置是 256MB 内存和双核处理器。
|
||||
|
||||
### 安装需求 ###
|
||||
|
||||
在安装 **Observium** 之前,你需要确认安装了所有的依赖包。
|
||||
|
||||
首先,使用下面的命令更新你的服务器:
|
||||
|
||||
sudo apt-get update
|
||||
|
||||
然后你需要安装运行 Observium 所需的全部软件包。
|
||||
|
||||
Observium需要使用下面所列出的软件才能正确的运行:
|
||||
|
||||
- LAMP server
|
||||
- fping
|
||||
- Net-SNMP 5.4+
|
||||
- RRDtool 1.3+
|
||||
- Graphviz
|
||||
|
||||
对于可选特性的要求:
|
||||
|
||||
- Ipmitool - 只有当你想要探测 IPMI(Intelligent Platform Management Interface,智能平台管理接口)基板管理控制器时才需要。
|
||||
- Libvirt-bin - 只有当你想要使用 libvirt 进行远程 VM 主机监控时才需要。
|
||||
|
||||
sudo apt-get install libapache2-mod-php5 php5-cli php5-mysql php5-gd php5-mcrypt php5-json php-pear snmp fping mysql-server mysql-client python-mysqldb rrdtool subversion whois mtr-tiny ipmitool graphviz imagemagick libvirt ipmitool
|
||||
|
||||
### 为 Observium 创建 MySQL 数据库和用户 ###
|
||||
|
||||
现在你需要登录到MySQL中并为**Observium**创建数据库:
|
||||
mysql -u root -p
|
||||
|
||||
在用户验证成功之后,你需要按照下面的命令创建该数据库。
|
||||
|
||||
CREATE DATABASE observium;
|
||||
|
||||
数据库名为 **observium**,稍后你会用到这个信息。
|
||||
|
||||
现在你需要创建数据库管理员用户。
|
||||
|
||||
CREATE USER observiumadmin@localhost IDENTIFIED BY 'observiumpassword';
|
||||
|
||||
接下来,你需要给该管理员用户相应的权限来管理创建的数据库。
|
||||
|
||||
GRANT ALL PRIVILEGES ON observium.* TO observiumadmin@localhost;
|
||||
|
||||
你需要刷新权限信息,以使新的 MySQL 用户生效:
|
||||
|
||||
FLUSH PRIVILEGES;
|
||||
exit
|
||||
|
||||
### 下载并安装 Observium###
|
||||
|
||||
现在我们的系统已经准备好了, 可以开始Observium的安装了。
|
||||
|
||||
第一步,创建Observium将要使用的文件目录:
|
||||
mkdir -p /opt/observium && cd /opt
|
||||
|
||||
为了达到本教程的目的,我们将会使用Observium的社区/开源版本。使用下面的命令下载并解压:
|
||||
|
||||
wget http://www.observium.org/observium-community-latest.tar.gz
|
||||
tar zxvf observium-community-latest.tar.gz
|
||||
|
||||
现在进入到Observium目录。
|
||||
|
||||
cd observium
|
||||
|
||||
将默认的配置文件'**config.php.default**'复制到'**config.php**',并将数据库配置选项填充到配置文件中:
|
||||
|
||||
cp config.php.default config.php
|
||||
nano config.php
|
||||
|
||||
----------
|
||||
|
||||
// Database config
|
||||
$config['db_host'] = 'localhost';
|
||||
$config['db_user'] = 'observiumadmin';
|
||||
$config['db_pass'] = 'observiumpassword';
|
||||
$config['db_name'] = 'observium';
|
||||
|
||||
现在为MySQL数据库设置默认的数据库模式:
|
||||
php includes/update/update.php
|
||||
|
||||
现在你需要创建一个目录来存储 rrd 文件,并修改其属主,以便让 apache 能够写入文件:
|
||||
|
||||
mkdir rrd
|
||||
chown apache:apache rrd
|
||||
|
||||
为了便于在出现问题时进行排查,你需要创建日志文件目录:
|
||||
|
||||
mkdir -p /var/log/observium
|
||||
chown apache:apache /var/log/observium
|
||||
|
||||
现在你需要为Observium创建虚拟主机配置。
|
||||
|
||||
<VirtualHost *:80>
|
||||
DocumentRoot /opt/observium/html/
|
||||
ServerName observium.domain.com
|
||||
CustomLog /var/log/observium/access_log combined
|
||||
ErrorLog /var/log/observium/error_log
|
||||
<Directory "/opt/observium/html/">
|
||||
AllowOverride All
|
||||
Options FollowSymLinks MultiViews
|
||||
</Directory>
|
||||
</VirtualHost>
|
||||
|
||||
下一步你需要让你的Apache服务器的rewrite(重写)功能生效。
|
||||
|
||||
为了让'mod_rewrite'生效,输入以下命令:
|
||||
|
||||
sudo a2enmod rewrite
|
||||
|
||||
该模块在下一次Apache服务重启之后就会生效。
|
||||
|
||||
sudo service apache2 restart
|
||||
|
||||
### 配置 Observium ###
|
||||
|
||||
在登录 Web 界面之前,你需要为 Observium 创建一个管理员账户(级别 10)。
|
||||
|
||||
# cd /opt/observium
|
||||
# ./adduser.php admin adminpassword 10
|
||||
User admin added successfully.
|
||||
|
||||
下一步,为发现和轮询任务设置 cron 任务:创建一个新文件 ‘**/etc/cron.d/observium**’,并在其中添加以下内容。
|
||||
|
||||
33 */6 * * * root /opt/observium/discovery.php -h all >> /dev/null 2>&1
|
||||
*/5 * * * * root /opt/observium/discovery.php -h new >> /dev/null 2>&1
|
||||
*/5 * * * * root /opt/observium/poller-wrapper.py 1 >> /dev/null 2>&1
|
||||
|
||||
重新加载 cron 进程,以使新的计划任务生效。
|
||||
|
||||
# /etc/init.d/cron reload
|
||||
|
||||
好啦,你已经完成了 Observium 服务器的安装啦!使用你的浏览器访问 **http://<服务器 IP>**,然后开始使用吧。
|
||||
|
||||

|
||||
|
||||
尽情享受吧!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.unixmen.com/monitoring-network-servers-observium/
|
||||
|
||||
作者:[anismaj][a]
|
||||
译者:[theo-l](https://github.com/theo-l)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.unixmen.com/author/anis/
|
||||
[1]:http://www.observium.org/
|
@ -1,57 +0,0 @@
|
||||
创建你自己的 Docker 基本映像的 2 种方式
|
||||
================================================================================
|
||||
欢迎大家,今天我们学习一下 Docker 基本映像以及如何构建我们自己的 Docker 基本映像。[Docker][1] 是一个开源项目,为打包、装载和运行任何应用提供了一个轻量级容器的开放平台。它没有语言支持、框架和打包系统的限制,从小型的家用电脑到高端的服务器,随时随地都可以运行。这使它们成为不依赖于特定栈和供应商的、用于部署和扩展网络应用、数据库和后端服务的很好的构建块。
|
||||
|
||||
Docker 映像是不可更改的只读层。Docker 使用 **Union File System** 在只读文件系统上增加读写文件系统。所有更改都发生在最顶层的可写层,而最底部的只读映像上的原始文件仍然不会改变。由于映像不会改变,所以也就没有状态。基本映像是没有父映像的那些映像。Docker 基本映像的主要好处是它允许我们有一个独立运行的 Linux 操作系统。
|
||||
|
||||
下面是我们如何可以创建自定义基本映像的方式。
|
||||
|
||||
### 1. 使用Tar创建Docker基本映像 ###
|
||||
|
||||
我们可以使用 tar 构建我们自己的基本映像,从一个正在运行的、将要打包为基本映像的 Linux 发行版开始构建。这个过程可能会因发行版的不同而有些差异。在 Debian 系发行版中,已经预装了 debootstrap;在开始下面的步骤之前,我们需要安装 debootstrap,它用来获取构建基本系统所需要的软件包。这里,我们构建基于 Ubuntu 14.04 "Trusty" 的映像。为此,我们需要在终端或者 shell 中运行以下命令。
|
||||
|
||||
$ sudo debootstrap trusty trusty > /dev/null
|
||||
$ sudo tar -C trusty -c . | sudo docker import - trusty
|
||||
|
||||

|
||||
|
||||
上面的命令为当前文件夹创建了一个tar文件并输出到STDOUT中,"docker import - trusty"从STDIN中获取这个tar文件并根据它创建一个名为trusty的基本映像。然后,如下所示,我们将运行映像内部的一条测试命令。
|
||||
|
||||
$ docker run trusty cat /etc/lsb-release
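在没有 docker 环境时,也可以先用一个小例子单独体会 tar 这一步的行为:把一个目录打包成归档,归档里的内容就是将来 `docker import -` 读到的整个根文件系统(目录名 rootfs-demo 只是演示用的假设):

```shell
# 构造一个最小的“根文件系统”目录
mkdir -p /tmp/rootfs-demo/usr/local/bin
printf '#!/bin/sh\necho hello\n' > /tmp/rootfs-demo/usr/local/bin/run.sh
chmod +x /tmp/rootfs-demo/usr/local/bin/run.sh

# -C 先切换到该目录再打包,这样归档里的路径都相对于映像的根
tar -C /tmp/rootfs-demo -cf /tmp/rootfs-demo.tar .

# 查看归档内容;真实场景中这个 tar 流就是通过管道喂给 `docker import -` 的输入
tar -tf /tmp/rootfs-demo.tar | grep run.sh
```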
|
||||
|
||||
[Docker GitHub 仓库][2] 中有一些可以让我们快速构建基本映像的示例脚本。
|
||||
|
||||
### 2. 使用Scratch构建基本映像 ###
|
||||
|
||||
在 Docker 的注册库(registry)中,有一个被称为 scratch 的特殊仓库,它使用一个空的 tar 文件构建:
|
||||
|
||||
$ tar cv --files-from /dev/null | docker import - scratch
|
||||
|
||||

|
||||
|
||||
|
||||
我们可以使用这个映像构建新的小容器:
|
||||
|
||||
FROM scratch
|
||||
ADD script.sh /usr/local/bin/run.sh
|
||||
CMD ["/usr/local/bin/run.sh"]
|
||||
|
||||
上面的 Dockerfile 可以构建出一个很小的映像。它首先从一个完全空的文件系统开始,然后将 script.sh 复制为 /usr/local/bin/run.sh,最后运行脚本 /usr/local/bin/run.sh。
|
||||
|
||||
### 结尾 ###
|
||||
|
||||
在这个教程中,我们学习了如何构建一个自定义的 Docker 基本映像。构建一个 Docker 基本映像是一个很简单的任务,因为这里有很多已经可用的软件包和脚本。如果我们想要在里面只安装想要的东西,构建 Docker 基本映像会非常有用。如果有任何疑问、建议或者反馈,请在下面的评论框中写下来。非常感谢!享受吧 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/2-ways-create-docker-base-image/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://www.docker.com/
|
||||
[2]:https://github.com/docker/docker/blob/master/contrib/mkimage-busybox.sh
|
@ -0,0 +1,146 @@
|
||||
Conky - 终极的 X 视窗系统监视器应用
|
||||
================================================================================
|
||||
Conky 是一个用 C 语言写就的系统监视器,在 GNU 通用公共许可证和 BSD 许可证下发布,在 Linux 和 BSD 操作系统中都可以使用。这个应用是基于 X 视窗系统的,最初是从 [Torsmo][1] fork 而来。
|
||||
|
||||
#### 特点 ####
|
||||
|
||||
- 简洁的用户界面;
|
||||
- 高配置性;
|
||||
- 它既可使用内置的部件(超过 300 多个) 也可使用外部脚本,来在桌面或其自有容器中展示系统的状态;
|
||||
- 低资源消耗;
|
||||
- 它可显示范围广泛的系统参数,包括但不限于 CPU,内存,swap 分区 ,温度,进程,磁盘使用情况,网络状态,电池电量,邮件收发,系统消息,音乐播放器的控制,天气信息,最新新闻,升级信息等等;
|
||||
- 在许多操作系统中被默认安装,如 CrunchBang Linux 和 Pinguy OS;
|
||||
|
||||
#### 关于 Conky 的少有人知的事实 ####
|
||||
|
||||
- conky 这个名称来自于一个加拿大电视节目;
|
||||
- 它已被移植到 Nokia N900 上;
|
||||
- 它已不再被官方维护;
|
||||
|
||||
### 在 Linux 中 Conky 的安装和使用 ###
|
||||
|
||||
在我们安装 conky 之前,我们需要使用下面的命令来安装诸如 `lm-sensors`, `curl` 和 `hddtemp` 之类的软件包:
|
||||
|
||||
# apt-get install lm-sensors curl hddtemp
|
||||
|
||||
然后是检测传感器:
|
||||
|
||||
# sensors-detect
|
||||
|
||||
**注**: 在被系统提示时,回答 ‘Yes’ 。
|
||||
|
||||
检测所有探测到的传感器:
|
||||
|
||||
# sensors
|
||||
|
||||
#### 样例输出 ####
|
||||
|
||||
acpitz-virtual-0
|
||||
Adapter: Virtual device
|
||||
temp1: +49.5°C (crit = +99.0°C)
|
||||
|
||||
coretemp-isa-0000
|
||||
Adapter: ISA adapter
|
||||
Physical id 0: +49.0°C (high = +100.0°C, crit = +100.0°C)
|
||||
Core 0: +49.0°C (high = +100.0°C, crit = +100.0°C)
|
||||
Core 1: +49.0°C (high = +100.0°C, crit = +100.0°C)
|
||||
|
||||
Conky 既可以从软件仓库中安装,也可从源代码编译得到:
|
||||
|
||||
# yum install conky [在 RedHat 系的系统上]
|
||||
# apt-get install conky-all [在 Debian 系的系统上]
|
||||
|
||||
**注**: 在 Fedora/CentOS 上安装 conky 之前,你必须启用 [EPEL 软件仓库][2]。
|
||||
|
||||
在安装完 conky 之后,只需输入如下命令来开启它:
|
||||
|
||||
$ conky &
|
||||
|
||||

|
||||
正在运行的 Conky 监视器
|
||||
|
||||
这使得 conky 以一个弹窗的形式运行,并使用位于 `/etc/conky/conky.conf` 的 conky 基本配置文件。
|
||||
|
||||
你可能想将 conky 集成到桌面上,并不想让它每次以弹窗的形式出现,下面就是你需要做的:
|
||||
|
||||
将配置文件 `/etc/conky/conky.conf` 复制到你的家目录中,并将它重命名为 `.conkyrc`,开头的点号 (.) 是为了确保这个配置文件是隐藏的。
|
||||
|
||||
$ cp /etc/conky/conky.conf /home/$USER/.conkyrc
|
||||
|
||||
现在重启 conky 来应用新的更改:
|
||||
|
||||
$ killall -SIGUSR1 conky
|
||||
|
||||

|
||||
Conky 监视器窗口
|
||||
|
||||
你可能想编辑位于你家目录的 conky 的配置文件,这个配置文件的内容是非常容易理解的。
|
||||
|
||||
下面是 conky 配置文件的一个样例:
|
||||
|
||||

|
||||
Conky 的配置
|
||||
|
||||
从上面的窗口中,你可以更改颜色,边框,大小,缩放比例,背景,对齐方式及几个其他属性。通过为不同的 conky 窗口设定不同的对齐方式,我们可以同时运行超过一个 conky 脚本。
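作为参考,下面是一个极简的 .conkyrc 片段草样(基于 conky 1.9 的旧式语法;其中的取值只是演示用的假设,具体可用的变量请以你本机版本的文档为准):

```
# 窗口与刷新设置
alignment top_right
background yes
double_buffer yes
update_interval 2.0

TEXT
${time %H:%M:%S}
CPU: ${cpu}%  MEM: $mem / $memmax
Uptime: $uptime
```

把类似内容保存为 ~/.conkyrc 后重启 conky,即可看到效果。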
|
||||
|
||||
**如何为 conky 使用脚本而不是默认配置,以及在哪里找到这些脚本?**
|
||||
|
||||
你可以编写你自己的 conky 脚本或使用来自于互联网的脚本;我们并不建议你使用你从互联网中找到的具有潜在危险的任何脚本,除非你清楚你正在做什么。然而,有一些著名的主题和网页包含可信赖的 conky 脚本,例如下面所提及的:
|
||||
|
||||
- [http://ubuntuforums.org/showthread.php?t=281865][3]
|
||||
- [http://conky.sourceforge.net/screenshots.html][4]
|
||||
|
||||
在上面的 URL 地址中,你将发现每个截图都有一个超链接,它们将重定向到脚本文件。
|
||||
|
||||
#### 测试 Conky 脚本 ####
|
||||
|
||||
这里我将在我的 Debian Jessie 机子中运行一个由第三方写的 conky 脚本,以此来进行测试:
|
||||
|
||||
$ wget https://github.com/alexbel/conky/archive/master.zip
|
||||
$ unzip master.zip
|
||||
|
||||
切换当前工作目录到刚才解压的目录:
|
||||
|
||||
$ cd conky-master
|
||||
|
||||
将 `secrets.yml.example` 重命名为 `secrets.yml`:
|
||||
|
||||
$ mv secrets.yml.example secrets.yml
|
||||
|
||||
在你能够运行这个(ruby)脚本之前安装 Ruby:
|
||||
|
||||
$ sudo apt-get install ruby
|
||||
$ ruby starter.rb
|
||||
|
||||

|
||||
华丽的 conky 外观
|
||||
|
||||
**注**: 这个脚本可以被修改以展示你当前的天气,温度等;
|
||||
|
||||
假如你想让 conky 开机自启,请在开机启动应用设置(Startup Applications)中添加如下命令:
|
||||
|
||||
conky --pause 10
|
||||
然后保存并退出。
|
||||
|
||||
最后,如此轻量级且吸引眼球的实用 GUI 软件包已经不再活跃开发,官方也不再进行维护了。最新的稳定版本为 conky 1.9.0,于 2012 年 5 月 3 日发布。在 Ubuntu 论坛上,一个用户分享 conky 配置的主题已经超过了 2000 页。(这个论坛主题的链接为:[http://ubuntuforums.org/showthread.php?t=281865/][5])
|
||||
|
||||
- [Conky 主页][6]
|
||||
|
||||
这就是全部内容了。保持联系,保持评论。请在下面的评论框里分享你的想法和配置。
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/install-conky-in-ubuntu-debian-fedora/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://torsmo.sourceforge.net/
|
||||
[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
|
||||
[3]:http://ubuntuforums.org/showthread.php?t=281865
|
||||
[4]:http://conky.sourceforge.net/screenshots.html
|
||||
[5]:http://ubuntuforums.org/showthread.php?t=281865/
|
||||
[6]:http://conky.sourceforge.net/
|
@ -1,26 +1,26 @@
|
||||
How to Install WordPress with Nginx in a Docker Container
|
||||
如何在 Docker 容器里的 Nginx 中安装 WordPress
|
||||
================================================================================
|
||||
Hi all, today we'll learn how to install WordPress running Nginx Web Server in a Docker Container. WordPress is an awesome free and open source Content Management System running thousands of websites throughout the globe. [Docker][1] is an Open Source project that provides an open platform to pack, ship and run any application as a lightweight container. It has no boundaries of Language support, Frameworks or packaging system and can be run anywhere, anytime from a small home computers to high-end servers. It makes them great building blocks for deploying and scaling web apps, databases, and back-end services without depending on a particular stack or provider.
|
||||
大家好,今天我们来学习一下如何在 Docker 容器上运行的 Nginx Web 服务器中安装 WordPress。WordPress 是一个很好的免费开源的内容管理系统,全球成千上万的网站都在使用它。[Docker][1] 是一个提供开放平台来打包,分发和运行任何应用的开源轻量级容器项目。它没有语言支持,框架或打包系统的限制,可以在从小的家用电脑到高端服务器的任何地方任何时间运行。这让它们成为可以用于部署和扩展网络应用,数据库和后端服务而不必依赖于特定的栈或者提供商的很好的构建块。
|
||||
|
||||
Today, we'll deploy a docker container with the latest WordPress package with necessary prerequisites ie Nginx Web Server, PHP5, MariaDB Server, etc. Here are some short and sweet steps to successfully install a WordPress running Nginx in a Docker Container.
|
||||
今天,我们会在 docker 容器上部署最新的 WordPress 软件包,包括需要的前提条件,例如 Nginx Web 服务器、PHP5、MariaDB 服务器等。下面是在运行在 Docker 容器上成功安装 WordPress 的简单步骤。
|
||||
|
||||
### 1. Installing Docker ###
|
||||
### 1. 安装 Docker ###
|
||||
|
||||
Before we really start, we'll need to make sure that we have Docker installed in our Linux machine. Here, we are running CentOS 7 as host so, we'll be running yum manager to install docker using the below command.
|
||||
在我们真正开始之前,我们需要确保在我们的 Linux 机器上已经安装了 Docker。我们使用的主机是 CentOS 7,因此我们用下面的命令使用 yum 管理器安装 docker。
|
||||
|
||||
# yum install docker
|
||||
|
||||

|
||||

|
||||
|
||||
# systemctl restart docker.service
|
||||
|
||||
### 2. Creating WordPress Dockerfile ###
|
||||
### 2. 创建 WordPress Docker 文件 ###
|
||||
|
||||
We'll need to create a Dockerfile which will automate the installation of the wordpress and its necessary pre-requisites. This Dockerfile will be used to build the image of WordPress installation we created. This WordPress Dockerfile fetches a CentOS 7 image from the Docker Registry Hub and updates the system with the latest available packages. It then installs the necessary softwares like Nginx Web Server, PHP, MariaDB, Open SSH Server and more which are essential for the Docker Container to work. It then executes a script which will initialize the installation of WordPress out of the box.
|
||||
我们需要创建用于自动安装 wordpress 以及前提条件的 docker 文件。这个 docker 文件将用于构建 WordPress 的安装镜像。这个 WordPress docker 文件会从 Docker 库中心获取 CentOS 7 镜像并用最新的可用更新升级系统。然后它会安装必要的软件,例如 Nginx Web 服务器、PHP、MariaDB、Open SSH 服务器以及其它保证 Docker 容器正常运行不可缺少的组件。最后它会执行一个初始化 WordPress 安装的脚本。
|
||||
|
||||
# nano Dockerfile
|
||||
|
||||
Then, we'll need to add the following lines of configuration inside that Dockerfile.
|
||||
然后,我们需要将下面的配置行添加到 Docker 文件中。
|
||||
|
||||
FROM centos:centos7
|
||||
MAINTAINER The CentOS Project <cloud-ops@centos.org>
|
||||
@ -48,15 +48,15 @@ Then, we'll need to add the following lines of configuration inside that Dockerf
|
||||
|
||||
CMD ["/bin/bash", "/start.sh"]
|
||||
|
||||

|
||||

|
||||
|
||||
### 3. Creating Start script ###
|
||||
### 3. 创建启动 script ###
|
||||
|
||||
After we create our Dockerfile, we'll need to create a script named start.sh which will run and configure our WordPress installation. It will create and configure database, passwords for wordpress. To create it, we'll need to open start.sh with our favorite text editor.
|
||||
我们创建了 docker 文件之后,我们需要创建用于运行和配置 WordPress 安装的脚本,名称为 start.sh。它会为 WordPress 创建并配置数据库和密码。用我们喜欢的文本编辑器打开 start.sh。
|
||||
|
||||
# nano start.sh
|
||||
|
||||
After opening start.sh, we'll need to add the following lines of configuration into it.
|
||||
打开 start.sh 之后,我们要添加下面的配置行到文件中。
|
||||
|
||||
#!/bin/bash
|
||||
|
||||
@ -67,7 +67,7 @@ After opening start.sh, we'll need to add the following lines of configuration i
|
||||
}
|
||||
|
||||
__create_user() {
|
||||
# Create a user to SSH into as.
|
||||
# 创建用于 SSH 登录的用户
|
||||
SSH_USERPASS=`pwgen -c -n -1 8`
|
||||
useradd -G wheel user
|
||||
echo user:$SSH_USERPASS | chpasswd
|
||||
@ -75,7 +75,7 @@ After opening start.sh, we'll need to add the following lines of configuration i
|
||||
}
|
||||
|
||||
__mysql_config() {
|
||||
# Hack to get MySQL up and running... I need to look into it more.
|
||||
# 启用并运行 MySQL
|
||||
yum -y erase mariadb mariadb-server
|
||||
rm -rf /var/lib/mysql/ /etc/my.cnf
|
||||
yum -y install mariadb mariadb-server
|
||||
@ -86,18 +86,18 @@ After opening start.sh, we'll need to add the following lines of configuration i
|
||||
}
|
||||
|
||||
__handle_passwords() {
|
||||
# Here we generate random passwords (thank you pwgen!). The first two are for mysql users, the last batch for random keys in wp-config.php
|
||||
# 在这里我们生成随机密码(感谢 pwgen)。前面两个用于 mysql 用户,最后一个用于 wp-config.php 的随机密钥。
|
||||
WORDPRESS_DB="wordpress"
|
||||
MYSQL_PASSWORD=`pwgen -c -n -1 12`
|
||||
WORDPRESS_PASSWORD=`pwgen -c -n -1 12`
|
||||
# This is so the passwords show up in logs.
|
||||
# 这是在日志中显示的密码。
|
||||
echo mysql root password: $MYSQL_PASSWORD
|
||||
echo wordpress password: $WORDPRESS_PASSWORD
|
||||
echo $MYSQL_PASSWORD > /mysql-root-pw.txt
|
||||
echo $WORDPRESS_PASSWORD > /wordpress-db-pw.txt
|
||||
# There used to be a huge ugly line of sed and cat and pipe and stuff below,
|
||||
# but thanks to @djfiander's thing at https://gist.github.com/djfiander/6141138
|
||||
# there isn't now.
|
||||
# 这里原来是一个包括 sed、cat、pipe 和 stuff 的很长的行,但多亏了
|
||||
# @djfiander 的 https://gist.github.com/djfiander/6141138
|
||||
# 现在没有了
|
||||
sed -e "s/database_name_here/$WORDPRESS_DB/
|
||||
s/username_here/$WORDPRESS_DB/
|
||||
s/password_here/$WORDPRESS_PASSWORD/
|
||||
@ -116,7 +116,7 @@ After opening start.sh, we'll need to add the following lines of configuration i
|
||||
}
|
||||
|
||||
__start_mysql() {
|
||||
# systemctl start mysqld.service
|
||||
# systemctl 启动 mysqld 服务
|
||||
mysqladmin -u root password $MYSQL_PASSWORD
|
||||
mysql -uroot -p$MYSQL_PASSWORD -e "CREATE DATABASE wordpress; GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'localhost' IDENTIFIED BY '$WORDPRESS_PASSWORD'; FLUSH PRIVILEGES;"
|
||||
killall mysqld
|
||||
@ -127,7 +127,7 @@ After opening start.sh, we'll need to add the following lines of configuration i
|
||||
supervisord -n
|
||||
}
|
||||
|
||||
# Call all functions
|
||||
# 调用所有函数
|
||||
__check
|
||||
__create_user
|
||||
__mysql_config
|
||||
@ -136,17 +136,17 @@ After opening start.sh, we'll need to add the following lines of configuration i
|
||||
__start_mysql
|
||||
__run_supervisor
|
||||
|
||||

|
||||

|
||||
|
||||
After adding the above configuration, we'll need to save it and then exit.
|
||||
增加完上面的配置之后,保存并关闭文件。
|
||||
|
||||
### 4. Creating Configuration files ###
|
||||
### 4. 创建配置文件 ###
|
||||
|
||||
Now, we'll need to create configuration file for Nginx Web Server named nginx-site.conf .
|
||||
现在,我们需要创建 Nginx Web 服务器的配置文件,命名为 nginx-site.conf。
|
||||
|
||||
# nano nginx-site.conf
|
||||
|
||||
Then, we'll add the following configuration to the config file.
|
||||
然后,增加下面的配置信息到配置文件。
|
||||
|
||||
user nginx;
|
||||
worker_processes 1;
|
||||
@ -230,13 +230,13 @@ Then, we'll add the following configuration to the config file.
|
||||
}
|
||||
}
|
||||
|
||||

|
||||

|
||||
|
||||
Now, we'll create supervisord.conf file and add the following lines as shown below.
|
||||
现在,创建 supervisord.conf 文件并添加下面的行。
|
||||
|
||||
# nano supervisord.conf
|
||||
|
||||
Then, add the following lines.
|
||||
然后,添加以下行。
|
||||
|
||||
[unix_http_server]
|
||||
file=/tmp/supervisor.sock ; (the path to the socket file)
|
||||
@ -286,60 +286,60 @@ Then, add the following lines.
|
||||
events = PROCESS_LOG
|
||||
result_handler = supervisor_stdout:event_handler
|
||||
|
||||

|
||||

|
||||
|
||||
After adding, we'll save and exit the file.
|
||||
添加完后,保存并关闭文件。
|
||||
|
||||
### 5. Building WordPress Container ###
|
||||
### 5. 构建 WordPress 容器 ###
|
||||
|
||||
Now, after done with creating configurations and scripts, we'll now finally use the Dockerfile to build our desired container with the latest WordPress CMS installed and configured according to the configuration. To do so, we'll run the following command in that directory.
|
||||
现在,完成了创建配置文件和脚本之后,我们终于要使用 docker 文件来创建安装最新的 WordPress CMS(译者注:Content Management System,内容管理系统)所需要的容器,并根据配置文件进行配置。做到这点,我们需要在对应的目录中运行以下命令。
|
||||
|
||||
# docker build --rm -t wordpress:centos7 .
|
||||
|
||||

|
||||

|
||||
|
||||
### 6. Running WordPress Container ###
|
||||
### 6. 运行 WordPress 容器 ###
|
||||
|
||||
Now, to run our newly built container and open port 80 and 22 for Nginx Web Server and SSH access respectively, we'll run the following command.
|
||||
现在,执行以下命令运行新构建的容器,并为 Nginx Web 服务器和 SSH 访问分别打开 80 和 22 号端口。
|
||||
|
||||
# CID=$(docker run -d -p 80:80 wordpress:centos7)
|
||||
|
||||

|
||||

|
||||
|
||||
To check the process and commands executed inside the container, we'll run the following command.
|
||||
运行以下命令检查进程以及容器内部执行的命令。
|
||||
|
||||
# echo "$(docker logs $CID )"
|
||||
|
||||
TO check if the port mapping is correct or not, run the following command.
|
||||
运行以下命令检查端口映射是否正确。
|
||||
|
||||
# docker ps
|
||||
|
||||

|
||||

|
||||
|
||||
### 7. Web Interface ###
|
||||
### 7. Web 界面 ###
|
||||
|
||||
Finally if everything went accordingly, we'll be welcomed with WordPress when pointing the browser to http://ip-address/ or http://mywebsite.com/ .
|
||||
最后如果一切正常的话,当我们用浏览器打开 http://ip-address/ 或者 http://mywebsite.com/ 的时候会看到 WordPress 的欢迎界面。
|
||||
|
||||

|
||||

|
||||
|
||||
Now, we'll go step wise through the web interface and setup wordpress configuration, username and password for the WordPress Panel.
|
||||
现在,我们将通过 Web 界面为 WordPress 面板设置 WordPress 的配置、用户名和密码。
|
||||
|
||||

|
||||

|
||||
|
||||
Then, use the username and password entered above into the WordPress Login page.
|
||||
然后,在 WordPress 登录界面输入上面设置的用户名和密码。
|
||||
|
||||

|
||||

|
||||
|
||||
### Conclusion ###
|
||||
### 总结 ###
|
||||
|
||||
We successfully built and run WordPress CMS under LEMP Stack running in CentOS 7 Operating System as the docker OS. Running WordPress inside a container makes a lot safe and secure to the host system from the security perspective. This article enables one to completely configure WordPress to run under Docker Container with Nginx Web Server. If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! Enjoy :-)
|
||||
我们已经成功地在以 CentOS 7 作为 docker OS 的 LEMP 栈上构建并运行了 WordPress CMS。从安全层面来说,在容器中运行 WordPress 对于宿主系统更加安全可靠。这篇文章介绍了在 Docker 容器中运行的 Nginx Web 服务器上使用 WordPress 的完整配置。如果你有任何问题、建议、反馈,请在下面的评论框中写下来,让我们可以改进和更新我们的内容。非常感谢!Enjoy :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/install-wordpress-nginx-docker-container/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
@ -0,0 +1,95 @@
|
||||
安装 Inkscape - 开源矢量图形编辑器
|
||||
================================================================================
|
||||
Inkscape 是一款开源矢量图形编辑工具,与 Xara X、Corel Draw 和 Adobe Illustrator 等竞争对手不同的是,它使用的是可缩放矢量图形(SVG)格式。SVG 是一个被广泛部署、免版税使用的图形格式,由 W3C SVG 工作组开发和维护。它是一个跨平台工具,可以完美运行于 Linux、Windows 和 Mac OS 上。
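顺带一提,SVG 本质上是一种基于 XML 的文本格式。下面是一个极简的 SVG 文件草样(内容纯属演示;保存为 .svg 文件后即可用 Inkscape 打开编辑):

```
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <!-- 一个红色圆形和一行文字 -->
  <circle cx="50" cy="40" r="30" fill="red"/>
  <text x="10" y="90" font-size="12">Hello SVG</text>
</svg>
```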
|
||||
|
||||
Inkscape 始于 2003 年,起初它的 bug 跟踪系统托管于 Sourceforge 上,但后来迁移到了 Launchpad 上。当前最新的稳定版本是 0.91,它仍在不断地发展和改进中。我们将在本文里了解一下它的突出特点和安装过程。
|
||||
|
||||
### 显著特性 ###
|
||||
|
||||
让我们直接来了解这款应用程序的显著特性。
|
||||
|
||||
#### 创建对象 ####
|
||||
|
||||
- 用铅笔工具来画出不同颜色、大小和形状的手绘线,用贝塞尔曲线(笔式)工具来画出直线和曲线,通过书法工具来应用到手写的书法笔画上等等
|
||||
- 用文本工具来创建、选择、编辑和格式化文本。在纯文本框、在路径上或在形状里操作文本
|
||||
- 有效绘制各种形状,像矩形、椭圆形、圆形、弧线、多边形、星形和螺旋形等等并调整其大小、旋转并修改(圆角化)它们
|
||||
- 用简单地命令创建并嵌入位图
|
||||
|
||||
#### 对象处理 ####
|
||||
|
||||
- 通过交互式操作和调整数值来扭曲、移动、测量、旋转目标
|
||||
- 提升或降低对象的 Z 轴次序(Z-order)
|
||||
- 对象群组化或取消群组化可以去创建一个虚拟层阶用来编辑或处理
|
||||
- 图层采用层次结构树的结构并且能锁定或以各式各样的处理方式来重新布置
|
||||
- 分布与对齐指令
|
||||
|
||||
#### 填充与边框 ####
|
||||
|
||||
- 复制/粘贴风格
|
||||
- 取色工具
|
||||
- 可以用 RGB、HSL、CMYK、色盘和 CMS 等多种不同的方式选色
|
||||
- 渐层编辑器能创建和管理多停点渐层
|
||||
- 定义一个图像或其它选择用来进行花纹填充
|
||||
- 用一些预定义泼洒花纹可对边框进行花纹泼洒
|
||||
- 通过路径标示器来开始、对折和结束标示
|
||||
|
||||
#### 路径上的操作 ####
|
||||
|
||||
- 节点编辑:移动节点和贝塞尔曲线掌控,节点的对齐和分布等等
|
||||
- 布尔运算(并集、差集、交集等)
|
||||
- 运用可变的路径起迄点可简化路径
|
||||
- 路径插入和增设连同动态和链接偏移对象
|
||||
- 通过路径追踪把位图图像转换成路径(彩色或单色路径)
|
||||
|
||||
#### 文本处理 ####
|
||||
|
||||
- 可以使用所有已安装的轮廓字体,甚至支持从右至左书写的文字
|
||||
- 格式化文本、调整字母间距、行间距或列间距
|
||||
- 路径上的文本和形状上的文本和路径或形状都可以被编辑和修改
|
||||
|
||||
#### 渲染 ####
|
||||
|
||||
- Inkscape完全支持抗锯齿显示,这是一种通过柔化边界上的像素从而减少或消除凹凸锯齿的技术。
|
||||
- 支持alpha透明显示和PNG格式图片的导出
|
||||
|
||||
### 在Ubuntu 14.04和14.10上安装Inkscape ###
|
||||
|
||||
为了在Ubuntu上安装Inkscape,我们首先需要 [添加它的稳定版Personal Package Archive][1] (PPA) 至Advanced Package Tool (APT) 库中。打开终端并运行以下命令来添加它的PPA:
|
||||
|
||||
sudo add-apt-repository ppa:inkscape.dev/stable
|
||||
|
||||

|
||||
|
||||
PPA添加到APT库中后,我们要用以下命令进行更新:
|
||||
|
||||
sudo apt-get update
|
||||
|
||||

|
||||
|
||||
更新好库之后,我们准备用以下命令来完成安装:
|
||||
|
||||
sudo apt-get install inkscape
|
||||
|
||||

|
||||
|
||||
恭喜,现在Inkscape已经被安装好了,我们可以充分利用它的丰富功能特点来编辑制作图像了。
|
||||
|
||||

|
||||
|
||||
### 结论 ###
|
||||
|
||||
Inkscape是一款特点鲜明的图形编辑工具,让用户可以充分发挥自己的艺术创造力。它是一款可自由安装和定制的开源应用,并且支持包括JPEG、PNG、GIF和PDF在内的多种文件类型。访问它的 [官方网站][2] 来获取更多新闻和应用更新。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/tools/install-inkscape-open-source-vector-graphic-editor/
|
||||
|
||||
作者:[Aun Raza][a]
|
||||
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunrz/
|
||||
[1]:https://launchpad.net/~inkscape.dev/+archive/ubuntu/stable
|
||||
[2]:https://inkscape.org/en/
|
@ -0,0 +1,85 @@
|
||||
|
||||
KDE Plasma 5.3已发布,Kubuntu 15.04升级攻略
|
||||
================================================================================
|
||||
**KDE[已经宣布][1]Plasma 5.3的稳定版已经准备就绪,它包含了一个新的电源管理方面的稳定特性。**
|
||||
|
||||
[先前四月份的beta版][2]已经让我们印象深刻,甚至跃跃欲试了,如今Plasma 5桌面环境的最新稳定版更新已经就绪,可以下载了。
|
||||
|
||||
Plasma 5.3继续改善和细化了全新的KDE桌面,它添加了大量的特性供桌面用户体验。同时也修复了**多达400个错误**,这对性能和稳定性方面也进行了大量改善。
|
||||
|
||||
### Plasma 5.3中的新东西 ###
|
||||
|
||||

|
||||
Plasma 5.3中更好的蓝牙管理
|
||||
|
||||
而[在早期关于Plasma 5.3的文章][3]中,我们触及了大量**新特性**,这其中很多都值得反复说道说道。
|
||||
|
||||
**加强的电源管理**特性和配置选项,包括**新的电源小程序、能源使用监控**和**动态屏幕亮度变化**,将有助于让KDE在移动设备上加强续航能力。
|
||||
|
||||
在连接了外部监视器的时候合上笔记本盖子,不会再触发“挂起”操作。这个新的行为被称之为“**影院模式**”,并且默认开启。当然,也可以通过电源管理设置中的相关选项禁用。
|
||||
|
||||
**蓝牙功能得到改善**,带来了一个全新的面板小程序,使得连接并配置已配对的蓝牙设备(如智能手机、键盘和扬声器)比以往更为便捷。
|
||||
|
||||
同样,对于Plasma 5.3,**KDE中的轨迹板配置更为方便**,这多亏了新的安装和设置模块。
|
||||

|
||||
轨迹板、触控板,叫法不同,说的是同一样东西。
|
||||
|
||||
对于Plasma小部件爱好者,新增了一个**按住并锁定**手势。启用该功能后,鼠标悬停时不再显示小部件的设置句柄,只有长按小部件时才会显示。
|
||||
|
||||
还是小部件方面,该发布版**重新引入了几个最受欢迎的旧式Plasmoid**,包括一个有用的系统监视器、便利的硬盘驱动器统计和一个漫画阅读器。
|
||||
|
||||
### 了解更多&尝试 ###
|
||||
|
||||

|
||||
|
||||
Plasma 5.3中全部新增和改进的内容——我是说全部——都列在了[官方修改日志][4]中。
|
||||
|
||||
你可以从KDE社区获取Live镜像,试用Kubuntu上的Plasma 5.3,**而不会影响到你自己的系统**:
|
||||
|
||||
- [下载KDE Plasma Live镜像][5]
|
||||
|
||||
如果你需要一个超级稳定的系统,可以用这些Live镜像来尝试新特性,而在你的主力计算机上继续使用与你的发行版对应的KDE版本。
|
||||
|
||||
但是,如果你乐于尝试实验性的软件——也就是说,能够自行处理包冲突,或者因升级桌面环境而导致的系统问题——那么你可以安装它。
|
||||
|
||||
### 安装Plasma 5.3到Kubuntu 15.04 ###
|
||||
|
||||

|
||||
|
||||
要**安装Plasma 5.3到Kubuntu 15.04**中,你需要添加KDE 移植PPA,运行软件更新器工具并安装任何可用的更新。
|
||||
|
||||
Kubuntu移植PPA可能也会升级除了安装在你系统上的Plasma外的其它KDE平台组件,包括KDE应用程序、框架和Kubuntu特定配置文件。
|
||||
|
||||
目前为止,使用命令行将Kubuntu中的桌面升级到Plasma 5.3是最快速的方法:
|
||||
|
||||
sudo add-apt-repository ppa:kubuntu-ppa/backports
|
||||
|
||||
sudo apt-get update && sudo apt-get dist-upgrade
|
||||
|
||||
在升级过程完成后,如果一切顺利,你应该重启计算机。
|
||||
|
||||
如果你正在使用一个备用桌面环境,比如LXDE、Unity或者GNOME,则你需要在运行完上面的两个命令后安装Kubuntu桌面包(你可以在Ubuntu软件中心找到)。
|
||||
要想降级回15.04自带的Plasma版本,你可以使用PPA-Purge工具:
|
||||
|
||||
sudo apt-get install ppa-purge
|
||||
|
||||
sudo ppa-purge ppa:kubuntu-ppa/backports
|
||||
|
||||
请在下面的评论中留言,让我们知道你的升级/测试过程进行得怎么样,也别忘了告诉我们你希望在下一个Plasma 5桌面版本中看到的特性。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/04/kde-plasma-5-3-released-heres-how-to-upgrade-in-kubuntu-15-04
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:https://www.kde.org/announcements/plasma-5.3.0.php
|
||||
[2]:http://www.omgubuntu.co.uk/2015/04/beta-plasma-5-3-features
|
||||
[3]:http://www.omgubuntu.co.uk/2015/04/beta-plasma-5-3-features
|
||||
[4]:https://www.kde.org/announcements/plasma-5.2.2-5.3.0-changelog.php
|
||||
[5]:https://community.kde.org/Plasma/Live_Images
|
@ -0,0 +1,177 @@
|
||||
Linux中,创建聊天服务器、移除冗余软件包的实用命令
|
||||
=============================================================================
|
||||
这里,我们来看Linux命令行实用技巧的下一个部分。如果你错过了Linux Tracks之前的文章,可以从这里找到。
|
||||
|
||||
- [5 Linux Command Line Tracks][1]
|
||||
|
||||
本篇中,我们将会介绍6个命令行小技巧,包括使用Netcat命令创建Linux命令行聊天,从某个命令的输出中对某一列做加法,移除Debian和CentOS上多余的包,从命令行中获取本地与远程的IP地址,在终端获得彩色的输出与解码各样的颜色,最后是Linux命令行里井号标签的使用。让我们来一个一个地看一下。
|
||||
|
||||

|
||||
6个实用的命令行技巧
|
||||
|
||||
### 1. 创建Linux命令行聊天服务 ###
|
||||
我们使用聊天服务都有很长一段时间了。对于Google Chat、Hangout、Facebook Chat、Whatsapp、Hike和其他一些应用与集成的聊天服务,我们都很熟悉了。那你知道Linux的nc命令可以让你的Linux机器变成一个聊天服务器,而且仅仅只需要一行命令吗?什么是nc命令,它又是怎么工作的呢?
|
||||
|
||||
nc是Linux netcat命令的旧版。nc就像瑞士军刀一样,内建了大量的功能。nc可用作调试工具、调查工具,使用TCP/UDP读写网络连接,进行DNS正向/反向查询。
|
||||
|
||||
nc主要用在端口扫描,文件传输,后台和端口监听。nc可以使用任何闲置的端口和任何本地网络源地址。
|
||||
|
||||
使用nc命令(在192.168.0.7的服务器上)创建一个命令行即时信息传输服务器。
|
||||
|
||||
$ nc -l -vv 11119
|
||||
|
||||
对上述命令的解释。
|
||||
|
||||
- -v : 表示 Verbose
|
||||
- -vv : 更多的 Verbose
|
||||
- -l : 监听模式;11119 是监听的本地端口号
|
||||
|
||||
你可以用任何其他的本地端口号替换11119。
|
||||
|
||||
接下来在客户端机器(IP地址:192.168.0.15),运行下面的命令初始化聊天会话(信息传输服务正在运行)。
|
||||
|
||||
$ nc 192.168.0.7 11119
|
||||
|
||||

|
||||
|
||||
**注意**:你可以按下ctrl+c终止会话,同时nc聊天是一个一对一的服务。
|
||||
|
||||
### 2. Linux中如何统计某一列的总值 ###
|
||||
|
||||
如何统计某个命令在终端输出中某一列的数值总和?
|
||||
|
||||
以‘ls -l’命令的输出为例。
|
||||
|
||||
$ ls -l
|
||||
|
||||

|
||||
|
||||
注意到第二列代表硬链接的数量,第五列则是文件的大小。假设我们需要汇总第五列的数值。
|
||||
|
||||
仅仅列出第五列的内容。我们会使用‘awk’命令做到这点。‘$5’即代表第五列。
|
||||
|
||||
$ ls -l | awk '{print $5}'
|
||||
|
||||

|
||||
|
||||
现在,通过管道连接,使用awk打印出第五列数值的总和。
|
||||
|
||||
$ ls -l | awk '{print $5}' | awk '{total = total + $1}END{print total}'
|
||||
|
||||

|
||||
|
||||
### 3. 在Linux里如何移除废弃包 ###
|
||||
|
||||
废弃包是指那些作为其他包的依赖而被安装,但是当源包被移除之后就不再需要的包。
|
||||
|
||||
假设我们安装了gtprogram,依赖是gtdependency。除非我们安装了gtdependency,否则安装不了gtprogram。
|
||||
|
||||
当我们移除gtprogram的时候,默认并不会移除gtdependency。并且如果我们不移除gtdependency的话,它就会遗留下来成为废弃包,与其他任何包再无联系。
|
||||
|
||||
# yum autoremove [On RedHat Systems]
|
||||
|
||||

|
||||
|
||||
# apt-get autoremove [On Debian Systems]
|
||||
|
||||

|
||||
|
||||
你应该经常移除废弃包,保持Linux机器仅仅加载一些需要的东西。
|
||||
|
||||
### 4. 如何获得Linux服务器本地的与公网的IP地址 ###
|
||||
|
||||
为了获得本地IP地址,运行下面的一行脚本。
|
||||
|
||||
$ ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:
|
||||
|
||||
你必须安装了ifconfig,如果没有,使用apt或者yum工具安装需要的包。这里我们将会用管道连接ifconfig的输出,并且结合grep命令找到包含“inet addr:”的字符串。
|
||||
|
||||
我们知道对于输出本地IP地址,ifconfig命令足够用了。但是ifconfig生成了许多的输出,而我们关注的地方仅仅是本地IP地址,不是其他的。
|
||||
|
||||
# ifconfig | grep "inet addr:"
|
||||
|
||||

|
||||
|
||||
尽管目前的输出好多了,但是我们需要过滤出本地的IP地址,不含其他东西。针对这个,我们将会使用awk打印出第二列输出,通过管道连接上述的脚本。
|
||||
|
||||
# ifconfig | grep "inet addr:" | awk '{print $2}'
|
||||
|
||||

|
||||
|
||||
上面图片清楚地表明,我们已经很大程度上定制了输出,但仍然不是我们想要的。本地环路地址 127.0.0.1 仍然在结果中。
|
||||
|
||||
我们可以使用grep的-v选项,这样会打印出不匹配给定参数的其他行。每个机器都有同样的环路地址 127.0.0.1,所以使用grep -v打印出不包含127.0.0.1的行,通过管道连接前面的脚本。
|
||||
|
||||
# ifconfig | grep "inet addr" | awk '{print $2}' | grep -v '127.0.0.1'
|
||||
|
||||

|
||||
|
||||
我们差不多得到想要的输出了,仅仅需要从开头替换掉字符串`(addr:)`。我们将会使用cut命令单独打印出第二列。一二列之间并不是用tab分割,而是`(:)`,所以我们需要使用到域分割符选项`(-d)`,通过管道连接上面的输出。
|
||||
|
||||
# ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:
|
||||
|
||||

|
||||
|
||||
最后!期望的结果出来了。
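顺带说明:在较新的发行版上,ifconfig 可能已被 iproute2 工具集取代。如果你的系统提供 hostname 或 ip 命令,可以用更简短的方式得到本地 IP。下面是一个示意(假设系统支持 hostname 的 -I 选项;第二条命令用固定输入演示去掉子网掩码的 cut 用法):

```shell
# 直接输出本机所有非环路 IP 地址(并非所有系统都支持 -I 选项)
hostname -I 2>/dev/null | awk '{print $1}'

# ip 命令输出的地址形如 "192.168.0.7/24",同样可以用 cut 去掉掩码部分
echo "192.168.0.7/24" | cut -d/ -f1
```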
|
||||
|
||||
### 5.如何在Linux终端彩色输出 ###
|
||||
|
||||
你可能在终端见过彩色的输出,也可能知道如何在终端里启用/禁用彩色输出。如果都不知道的话,你可以参考下面的步骤。
|
||||
|
||||
在Linux中,每个用户都有`'.bashrc'`文件,被用来管理你的终端输出。打开并且编辑该文件,用你喜欢的编辑器。注意一下,这个文件是隐藏的(文件开头为点的代表隐藏文件)。
|
||||
|
||||
$ vi /home/$USER/.bashrc
|
||||
|
||||
确保以下的行没有被注释掉,即行开头没有#。
|
||||
|
||||
if [ -x /usr/bin/dircolors ]; then
|
||||
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
|
||||
alias ls='ls --color=auto'
|
||||
#alias dir='dir --color=auto'
|
||||
#alias vdir='vdir --color=auto'
|
||||
|
||||
alias grep='grep --color=auto'
|
||||
alias fgrep='fgrep --color=auto'
|
||||
alias egrep='egrep --color=auto'
|
||||
fi
|
||||
|
||||

|
||||
|
||||
完成后!保存并退出。为了让改动生效,需要注销账户后再次登录。
|
||||
|
||||
现在,你会看见列出的文件和文件夹名字有着不同的颜色,根据文件类型来决定。为了解码颜色,可以运行下面的命令。
|
||||
|
||||
$ dircolors -p | less
|
||||
|
||||

|
||||
|
||||
### 6. 如何用井号标签标记Linux命令和脚本 ###
|
||||
|
||||
我们一直在Twitter,Facebook和Google Plus(可能是其他我们没有提到的地方)上使用井号标签。那些井号标签使得其他人搜索一个标签更加容易。可是很少人知道,我们可以在Linux命令行使用井号标签。
|
||||
|
||||
我们已经知道配置文件里的`#`,在大多数的编程语言中,这个符号被用作注释行,即不被执行。
|
||||
|
||||
运行一个命令,然后为这个命令创建一个井号标签,这样之后我们就可以找到它。假设我们有一个很长的脚本,就上面第四点被执行的命令。现在为它创建一个井号标签。我们知道ifconfig可以被sudo或者root执行,因此用root来执行。
|
||||
|
||||
# ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d: #myip
|
||||
|
||||
上述命令被‘#myip’给标记了。现在,在终端里按下ctrl+r进入reverse-i-search搜索,并输入‘myip’,就可以找到并直接执行它。
|
||||
|
||||

|
||||
|
||||
你可以为每个命令创建各自的井号标签,之后使用reverse-i-search找到它。
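这个技巧之所以可行,是因为 shell 会把 # 之后的内容当作注释忽略——井号标签并不会影响命令本身的执行。一个最小的演示:

```shell
# “#myip”部分会被 shell 忽略,命令照常执行,输出 192.168.0.7
echo "192.168.0.7" #myip
```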
|
||||
|
||||
目前就这么多了。我们一直在努力为你创造有趣且有知识性的内容。你觉得我们做得怎么样?欢迎咨询任何问题,你可以在下面评论。保持联络!谢谢!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/linux-commandline-chat-server-and-remove-unwanted-packages/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/5-linux-command-line-tricks/
|
@ -0,0 +1,39 @@
|
||||
Bodhi Linux引入Moksha桌面
|
||||
================================================================================
|
||||

|
||||
|
||||
基于Ubuntu的轻量级Linux发行版[Bodhi Linux][1]致力于构建其自家的桌面环境,这个全新桌面环境被称之为Moksha(梵文意为‘完全自由’)。Moksha将替换常用的[Enlightenment桌面环境][2]。
|
||||
|
||||
### 为何用Moksha替换Englightenment? ###
|
||||
|
||||
Bodhi Linux的Jeff Hoogland最近[表示][3]了他对新版Enlightenment的不满。直到E17,Enlightenment都十分稳定,并且能满足轻量级Linux的部署需求。而E18则到处都充满了问题,Bodhi Linux只好弃之不用了。
|
||||
|
||||
虽然最新的[Bodhi Linux 3.0发行版][4]仍然使用了E19作为其桌面(除传统模式外,这意味着,对于旧的硬件,仍然会使用E17),Jeff对E19也十分不满。他说道:
|
||||
|
||||
>除了性能问题外,对于我个人而言,E19并没有带给我与E17相同的工作流程,因为它移除了很多E17的特性。鉴于此,我不得不把我3台Bodhi计算机的桌面全部换回E17——而这3台可都是我的高端机器。这不由得让我想到,现有的Bodhi用户中还有多少人怀着和我同样的感受,所以,我[在我们的用户论坛上开启了一个与此相关的讨论][5]。
|
||||
|
||||
### Moksha是E17桌面的延续 ###
|
||||
|
||||
Moksha将会是Bodhi所热衷的E17桌面的延续。Jeff进一步提到:
|
||||
>我们将从整合所有Bodhi修改开始。多年来我们一直都只是给源代码打补丁,并修复桌面所具有的问题。如果该工作完成,我们将开始移植一些E18和E19引入的更为有用的特性,最后,我们将引入一些我们认为会改善最终用户体验的东西。
|
||||
|
||||
### Moksha何时发布? ###
|
||||
|
||||
下一个Bodhi更新将会是Bodhi 3.1.0,就在今年八月。这个新版本将为所有其缺省ISO带来Moksha。让我们拭目以待,看看Moksha是否是一个好的决定。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/bodhi-linux-introduces-moksha-desktop/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://www.bodhilinux.com/
|
||||
[2]:https://www.enlightenment.org/
|
||||
[3]:http://www.bodhilinux.com/2015/04/28/introducing-the-moksha-desktop/
|
||||
[4]:http://itsfoss.com/bodhi-linux-3/
|
||||
[5]:http://forums.bodhilinux.com/index.php?/topic/12322-e17-vs-e19-which-are-you-using-and-why/
|
@ -0,0 +1,460 @@
|
||||
Shell脚本学习初次操作指南
|
||||
================================================================================
|
||||

|
||||
|
||||
通常,当人们提到“shell脚本语言”时,浮现在他们脑海中是bash,ksh,sh或者其它相类似的linux/unix脚本语言。脚本语言是与计算机交流的另外一种途径。使用图形化窗口界面(不管是windows还是linux都无所谓)用户可以移动鼠标并点击各种对象,比如按钮、列表、选框等等。但这种方式在每次用户想要计算机/服务器完成相同任务时(比如说批量转换照片,或者下载新的电影、mp3等)却是十分不方便。要想让所有这些事情变得简单并且自动化,我们可以使用shell脚本。
|
||||
|
||||
某些编程语言,像pascal、foxpro、C、java之类,在执行前需要先进行编译。它们需要合适的编译器来让我们的代码完成某个任务。
|
||||
|
||||
而其它一些编程语言,像php、javascript、visualbasic之类,则不需要编译器,因此它们需要解释器,而我们不需要编译代码就可以运行程序。
|
||||
|
||||
shell脚本也像解释器一样,但它通常用于调用外部已编译的程序。然后,它会捕获输出结果、退出代码并根据情况进行处理。
|
||||
|
||||
Linux世界中最为流行的shell脚本语言之一,就是bash。而我认为(这是我自己的看法)原因在于,默认情况下bash shell可以让用户便捷地通过历史命令(先前执行过的)导航,与之相反的是,ksh则要求对.profile进行一些调整,或者记住一些“魔术”组合键来查阅历史并修正命令。
|
||||
|
||||
好了,我想这些介绍已经足够了,剩下来哪个环境最适合你,就留给你自己去判断吧。从现在开始,我将只讲bash及其脚本。在下面的例子中,我将使用CentOS 6.6和bash-4.1.2。请确保你有相同版本,或者更高版本。
|
||||
|
||||
### Shell脚本流 ###
|
||||
|
||||
shell脚本语言就跟和几个人聊天类似。你只需把所有命令想象成能帮你做事的人,只要你用正确的方式请求,他们就会去做。比如说,你想要写文档。首先,你需要纸。然后,你需要把内容说给某个人听,让他帮你写下来。最后,你想要把它存放到某个地方。或者说,你想要造一所房子,因而你需要请合适的人来清空场地。在他们说“事情干完了”之后,另外一些工程师就可以帮你来砌墙。最后,当这些工程师们也告诉你“事情干完了”的时候,你就可以叫油漆工来给房子粉饰了。如果你让油漆工在墙砌好前就来粉饰,会发生什么呢?我想,他们会开始发牢骚了。几乎所有这些像人一样的命令都会说话:如果它们完成了工作而没有发生什么问题,它们就会告诉“标准输出”;如果它们不能做你叫它们做的事,它们会告诉“标准错误”。最后,所有的命令都通过“标准输入”来听你说话。
|
||||
|
||||
快速实例——当你打开linux终端并写一些文本时——你正通过“标准输入”和bash说话。那么,让我们来问问bash shell **who am i**吧。
|
||||
|
||||
[root@localhost ~]# who am i <--- 你正通过标准输入对bash shell说话
|
||||
root pts/0 2015-04-22 20:17 (192.168.1.123) <--- bash shell通过标准输出回答你
|
||||
|
||||
现在,让我们说一些bash听不懂的问题:
|
||||
|
||||
[root@localhost ~]# blablabla <--- 哈,你又在和标准输入说话了
|
||||
-bash: blablabla: command not found <--- bash通过标准错误在发牢骚了
|
||||
|
||||
“:”之前的第一个单词通常是向你发牢骚的命令。实际上,这些流中的每一个都有它们自己的索引号:
|
||||
|
||||
- 标准输入(**stdin**) - 0
|
||||
- 标准输出(**stdout**) - 1
|
||||
- 标准错误(**stderr**) - 2
|
||||
|
||||
如果你真的想要知道哪个输出命令说了些什么——你需要重定向(在命令后使用大于号“>”和流索引)那次发言到文件:
|
||||
|
||||
[root@localhost ~]# blablabla 1> output.txt
|
||||
-bash: blablabla: command not found
|
||||
|
||||
在本例中,我们试着重定向1(**stdout**)流到名为output.txt的文件。让我们用cat命令查看一下该文件的内容:
|
||||
|
||||
[root@localhost ~]# cat output.txt
|
||||
[root@localhost ~]#
|
||||
|
||||
看起来似乎是空的。好吧,现在让我们来重定向2(**stderr**)流:
|
||||
|
||||
[root@localhost ~]# blablabla 2> error.txt
|
||||
[root@localhost ~]#
|
||||
|
||||
好吧,我们看到牢骚话没了。让我们检查一下那个文件:
|
||||
|
||||
[root@localhost ~]# cat error.txt
|
||||
-bash: blablabla: command not found
|
||||
[root@localhost ~]#
|
||||
|
||||
果然如此!我们看到,所有牢骚话都被记录到error.txt文件里头去了。
|
||||
|
||||
有时候,命令会同时产生**stdout**和**stderr**。要重定向它们到不同的文件,我们可以使用以下语句:
|
||||
|
||||
command 1>out.txt 2>err.txt
|
||||
|
||||
要缩短一点语句,我们可以忽略“1”,因为默认情况下**stdout**会被重定向:
|
||||
|
||||
command >out.txt 2>err.txt
|
||||
|
||||
好吧,让我们试试做些“坏事”。让我们用rm命令把file1和folder1给删了吧:
|
||||
|
||||
[root@localhost ~]# rm -vf folder1 file1 > out.txt 2>err.txt
|
||||
|
||||
现在来检查以下输出文件:
|
||||
|
||||
[root@localhost ~]# cat out.txt
|
||||
removed `file1'
|
||||
[root@localhost ~]# cat err.txt
|
||||
rm: cannot remove `folder1': Is a directory
|
||||
[root@localhost ~]#
|
||||
|
||||
正如我们所看到的,不同的流被分离到了不同的文件。有时候,这也不是很方便,因为我们想要查看出现错误时,在某些操作前后所连续发生的事情。要实现这一目的,我们可以重定向两个流到同一个文件:
|
||||
|
||||
command >>out_err.txt 2>>out_err.txt
|
||||
|
||||
注意:请注意,我使用“>>”替代了“>”。它允许我们附加到文件,而不是覆盖文件。
|
||||
|
||||
我们可以重定向一个流到另一个:
|
||||
|
||||
command >out_err.txt 2>&1
|
||||
|
||||
让我来解释一下吧。命令的标准输出将被重定向到out_err.txt,而错误输出会被重定向到第1个流(即上面解释过的标准输出),第1个流又被重定向到了同一个文件。让我们看这个实例:
|
||||
|
||||
[root@localhost ~]# rm -fv folder2 file2 >out_err.txt 2>&1
|
||||
[root@localhost ~]# cat out_err.txt
|
||||
rm: cannot remove `folder2': Is a directory
|
||||
removed `file2'
|
||||
[root@localhost ~]#
|
||||
|
||||
看着这些组合的输出,我们可以将其说明为:首先,**rm**命令试着将folder2删除,而它不会成功,因为linux要求**-r**键来允许**rm**命令删除文件夹,而第二个file2会被删除。通过为**rm**提供**-v**(详情)键,我们让rm命令告诉我们每个被删除的文件或文件夹。
|
||||
|
||||
这些就是你需要知道的,关于重定向的几乎所有内容了。我是说几乎,因为还有一个更为重要的重定向工具,它称之为“管道”。通过使用|(管道)符号,我们通常重定向**stdout**流。
|
||||
|
||||
比如说,我们有这样一个文本文件:
|
||||
|
||||
[root@localhost ~]# cat text_file.txt
|
||||
This line does not contain H e l l o word
|
||||
This lilne contains Hello
|
||||
This also containd Hello
|
||||
This one no due to HELLO all capital
|
||||
Hello bash world!
|
||||
|
||||
而我们需要找到其中某些带有“Hello”的行,Linux中有个**grep**命令可以完成该工作:
|
||||
|
||||
[root@localhost ~]# grep Hello text_file.txt
|
||||
This lilne contains Hello
|
||||
This also containd Hello
|
||||
Hello bash world!
|
||||
[root@localhost ~]#
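顺带一提,如果只关心匹配行的数量而不是内容,grep 的 -c 选项可以直接计数:

```shell
# -c 输出匹配行的数量而不是匹配内容本身,这里输出 2
printf 'one Hello\ntwo\nthree Hello\n' | grep -c Hello
```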
|
||||
|
||||
当我们有个文件,想要在里头搜索的时候,这用起来很不错。但如果我们需要在另一个命令的输出中查找某些东西,这又该怎么办呢?是的,当然,我们可以重定向输出到文件,然后再在文件里头查找:
|
||||
|
||||
[root@localhost ~]# fdisk -l>fdisk.out
|
||||
[root@localhost ~]# grep "Disk /dev" fdisk.out
|
||||
Disk /dev/sda: 8589 MB, 8589934592 bytes
|
||||
Disk /dev/mapper/VolGroup-lv_root: 7205 MB, 7205814272 bytes
|
||||
Disk /dev/mapper/VolGroup-lv_swap: 855 MB, 855638016 bytes
|
||||
[root@localhost ~]#
|
||||
|
||||
注意:如果你要grep的内容含有空格,记得用双引号把它引起来!
|
||||
|
||||
注意: fdisk命令显示关于Linux操作系统磁盘驱动器的信息
|
||||
|
||||
就像我们看到的,这种方式很不方便,因为我们不一会儿就把临时文件空间给搞乱了。要完成该任务,我们可以使用管道。它们允许我们重定向一个命令的**stdout**到另一个命令的**stdin**流:
|
||||
|
||||
[root@localhost ~]# fdisk -l | grep "Disk /dev"
|
||||
Disk /dev/sda: 8589 MB, 8589934592 bytes
|
||||
Disk /dev/mapper/VolGroup-lv_root: 7205 MB, 7205814272 bytes
|
||||
Disk /dev/mapper/VolGroup-lv_swap: 855 MB, 855638016 bytes
|
||||
[root@localhost ~]#
|
||||
|
||||
如你所见,我们不需要任何临时文件就获得了相同的结果。我们把**fdisk stdout**重定向到了**grep stdin**。
|
||||
|
||||
**注意** : 管道重定向总是从左至右的。
|
||||
|
||||
还有几个其它重定向,但是我们将把它们放在后面讲。
|
||||
|
||||
### 在shell中显示自定义信息 ###
|
||||
|
||||
正如我们所知道的,通常,与shell的交流以及shell内的交流是以对话的方式进行的。因此,让我们创建一些真正的脚本吧,这些脚本也会和我们讲话。这会让你学到一些简单的命令,并对脚本的概念有一个更好的理解。
|
||||
|
||||
假设我们是某个公司的总服务台经理,我们想要创建一个shell脚本来登记呼叫信息:电话号码、用户名以及问题的简要描述。我们打算把这些信息存储到普通文本文件data.txt中,以便今后统计。脚本本身以对话的方式工作,这会让总服务台工作人员的日子过得轻松点。那么,首先我们需要显示问题。对于显示信息,我们可以用echo和printf命令。这两个都是用来显示信息的,但是printf更为强大,因为我们可以通过它很好地格式化输出,可以让它右对齐、左对齐或者为信息留出专门的空间。让我们从一个简单的例子开始吧。要创建文件,请使用你喜欢的文本编辑器(kate,nano,vi,……),然后创建名为note.sh的文件,里面写入这些命令:
|
||||
|
||||
echo "Phone number ?"
|
||||
|
||||
### 脚本执行 ###
|
||||
|
||||
在保存文件后,我们可以使用bash命令来运行,把我们的文件作为它的参数:
|
||||
|
||||
[root@localhost ~]# bash note.sh
|
||||
Phone number ?
|
||||
|
||||
实际上,这样来执行脚本是很不方便的。如果不使用**bash**命令作为前缀来执行,会更舒服一些。要让脚本可执行,我们可以使用**chmod**命令:
|
||||
|
||||
[root@localhost ~]# ls -la note.sh
|
||||
-rw-r--r--. 1 root root 22 Apr 23 20:52 note.sh
|
||||
[root@localhost ~]# chmod +x note.sh
|
||||
[root@localhost ~]# ls -la note.sh
|
||||
-rwxr-xr-x. 1 root root 22 Apr 23 20:52 note.sh
|
||||
[root@localhost ~]#
|
||||
|
||||

|
||||
|
||||
**注意** : ls命令显示了当前文件夹内的文件。通过添加-la键,它会显示更多文件信息。
|
||||
|
||||
如我们所见,在**chmod**命令执行前,脚本只有读(r)和写(w)权限。在执行**chmod +x**后,它就获得了执行(x)权限。(关于权限的更多细节,我会在下一篇文章中讲述。)现在,我们只需这么来运行:
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number ?
|
||||
|
||||
在脚本名前,我添加了./组合。.(点)在unix世界中意味着当前位置(当前文件夹),/(斜线)是文件夹分隔符。(在Windows系统中,我们使用\(反斜线)实现同样功能)所以,这整个组合的意思是说:“从当前文件夹执行note.sh脚本”。我想,如果我用完整路径来运行这个脚本的话,你会更加清楚一些:
|
||||
|
||||
[root@localhost ~]# /root/note.sh
|
||||
Phone number ?
|
||||
[root@localhost ~]#
|
||||
|
||||
它也能工作。
|
||||
|
||||
如果所有linux用户都有相同的默认shell,那就万事OK。如果我们只是执行该脚本,默认的用户shell就会用于解析脚本内容并运行命令。不同的shell有着一丁点不同的语法、内部命令等等,所以,为了保证我们的脚本会使用**bash**,我们应该添加**#!/bin/bash**到文件首行。这样,默认的用户shell将调用**/bin/bash**,而只有在那时候,脚本中的命令才会被执行:
|
||||
|
||||
[root@localhost ~]# cat note.sh
|
||||
#!/bin/bash
|
||||
echo "Phone number ?"
|
||||
|
||||
直到现在,我们才100%确信**bash**会用来解析我们的脚本内容。让我们继续。
|
||||
|
||||
### 读取输入 ###
|
||||
|
||||
在显示信息后,脚本会等待用户回答。我们可以用**read**命令来接收用户的回答:
|
||||
|
||||
#!/bin/bash
|
||||
echo "Phone number ?"
|
||||
read phone
|
||||
|
||||
在执行后,脚本会等待用户输入,直到用户按[ENTER]键:
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number ?
|
||||
12345 <--- 这儿是我输入的内容
|
||||
[root@localhost ~]#
|
||||
|
||||
你输入的所有东西都会被存储到变量**phone**中,要显示变量的值,我们同样可以使用**echo**命令:
|
||||
|
||||
[root@localhost ~]# cat note.sh
|
||||
#!/bin/bash
|
||||
echo "Phone number ?"
|
||||
read phone
|
||||
echo "You have entered $phone as a phone number"
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number ?
|
||||
123456
|
||||
You have entered 123456 as a phone number
|
||||
[root@localhost ~]#
|
||||
|
||||
在**bash** shell中,我们使用**$**(美元)符号作为变量标示,除了读入到变量和其它为数不多的时候(将在今后说明)。
|
||||
|
||||
好了,现在我们准备添加剩下的问题了:
|
||||
|
||||
#!/bin/bash
|
||||
echo "Phone number?"
|
||||
read phone
|
||||
echo "Name?"
|
||||
read name
|
||||
echo "Issue?"
|
||||
read issue
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number?
|
||||
123
|
||||
Name?
|
||||
Jim
|
||||
Issue?
|
||||
script is not working.
|
||||
[root@localhost ~]#
|
||||
|
||||
### 使用流重定向 ###
|
||||
|
||||
太完美了!剩下来就是重定向所有东西到文件data.txt了。作为字段分隔符,我们将使用/(斜线)符号。
|
||||
|
||||
**注意** : 你可以选择任何你认为最好的分隔符,但要确保文件内容中不会包含该符号,否则会在文本行中产生额外字段。
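下面的小实验可以直观地看到分隔符冲突的后果(假设某条记录的问题描述里恰好含有“/”):

```shell
# 描述 "a/b broken" 中的 / 被 cut 当成了字段分隔符,
# 取第 4 个字段只会得到 "a",而不是完整的描述
echo "2015.04.23/123/Jim/a/b broken" | cut -d"/" -f4
```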
|
||||
|
||||
别忘了使用“>>”来代替“>”,因为我们想要将输出内容附加到文件末!
|
||||
|
||||
[root@localhost ~]# tail -2 note.sh
|
||||
read issue
|
||||
echo "$phone/$name/$issue">>data.txt
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number?
|
||||
987
|
||||
Name?
|
||||
Jimmy
|
||||
Issue?
|
||||
Keybord issue.
|
||||
[root@localhost ~]# cat data.txt
|
||||
987/Jimmy/Keybord issue.
|
||||
[root@localhost ~]#
|
||||
|
||||
**注意** : **tail**命令显示文件的最后几行,行数由**-n**指定(这里是最后2行)。
|
||||
|
||||
搞定。让我们再来运行一次看看:
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number?
|
||||
556
|
||||
Name?
|
||||
Janine
|
||||
Issue?
|
||||
Mouse was broken.
|
||||
[root@localhost ~]# cat data.txt
|
||||
987/Jimmy/Keybord issue.
|
||||
556/Janine/Mouse was broken.
|
||||
[root@localhost ~]#
|
||||
|
||||
我们的文件在增长,让我们在每行前面加个日期吧,这对于今后摆弄这些统计数据时会很有用。要实现这功能,我们可以使用date命令,并指定某种格式,因为我不喜欢默认格式:
|
||||
|
||||
[root@localhost ~]# date
|
||||
Thu Apr 23 21:33:14 EEST 2015 <---- date命令的默认输出
|
||||
[root@localhost ~]# date "+%Y.%m.%d %H:%M:%S"
|
||||
2015.04.23 21:33:18 <---- 格式化后的输出
|
||||
|
||||
有几种方式可以读取命令输出到变量,在这种简单的情况下,我们将使用`(反引号):
|
||||
|
||||
[root@localhost ~]# cat note.sh
|
||||
#!/bin/bash
|
||||
now=`date "+%Y.%m.%d %H:%M:%S"`
|
||||
echo "Phone number?"
|
||||
read phone
|
||||
echo "Name?"
|
||||
read name
|
||||
echo "Issue?"
|
||||
read issue
|
||||
echo "$now/$phone/$name/$issue">>data.txt
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number?
|
||||
123
|
||||
Name?
|
||||
Jim
|
||||
Issue?
|
||||
Script hanging.
|
||||
[root@localhost ~]# cat data.txt
|
||||
2015.04.23 21:38:56/123/Jim/Script hanging.
|
||||
[root@localhost ~]#
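补充一点:反引号如今一般推荐写成 $( ) 形式,两者等价,但 $( ) 可以嵌套而且更易读。一个等价写法的示意:

```shell
# $() 与反引号等价,可读性更好且支持嵌套
now=$(date "+%Y.%m.%d %H:%M:%S")
echo "$now"
```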
|
||||
|
||||
嗯……我们的脚本看起来有点丑,让我们来美化一下。如果你查看过**read**命令的手册,就会发现read命令自己也可以显示提示信息。要实现该功能,我们应该使用-p键加上提示信息:
|
||||
|
||||
[root@localhost ~]# cat note.sh
|
||||
#!/bin/bash
|
||||
now=`date "+%Y.%m.%d %H:%M:%S"`
|
||||
read -p "Phone number: " phone
|
||||
read -p "Name: " name
|
||||
read -p "Issue: " issue
|
||||
echo "$now/$phone/$name/$issue">>data.txt
|
||||
|
||||
你可以直接从控制台查找到各个命令的大量有趣的信息,只需输入:**man read, man echo, man date, man ……**
|
||||
|
||||
同意吗?它看上去是好多了!
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number: 321
|
||||
Name: Susane
|
||||
Issue: Mouse was stolen
|
||||
[root@localhost ~]# cat data.txt
|
||||
2015.04.23 21:38:56/123/Jim/Script hanging.
|
||||
2015.04.23 21:43:50/321/Susane/Mouse was stolen
|
||||
[root@localhost ~]#
|
||||
|
||||
光标在消息的后面(不是在新的一行中),这有点意思。
|
||||
|
||||
### 循环 ###
|
||||
|
||||
是时候来改进我们的脚本了。如果用户一整天都在接电话,如果每次都要去运行,这岂不是很麻烦?让我们让这些活动都永无止境地循环去吧:
|
||||
|
||||
[root@localhost ~]# cat note.sh
|
||||
#!/bin/bash
|
||||
while true
|
||||
do
|
||||
read -p "Phone number: " phone
|
||||
now=`date "+%Y.%m.%d %H:%M:%S"`
|
||||
read -p "Name: " name
|
||||
read -p "Issue: " issue
|
||||
echo "$now/$phone/$name/$issue">>data.txt
|
||||
done
|
||||
|
||||
我已经交换了**read phone**和**now=`date`**两行。这是因为我想要在输入电话号码之后再获取时间。如果我把它放在循环的首行,变量now会在上一条记录存入文件后就取得时间。这并不好,因为下一次呼叫可能在20分钟后,甚至更晚。
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number: 123
|
||||
Name: Jim
|
||||
Issue: Script still not works.
|
||||
Phone number: 777
|
||||
Name: Daniel
|
||||
Issue: I broke my monitor
|
||||
Phone number: ^C
|
||||
[root@localhost ~]# cat data.txt
|
||||
2015.04.23 21:38:56/123/Jim/Script hanging.
|
||||
2015.04.23 21:43:50/321/Susane/Mouse was stolen
|
||||
2015.04.23 21:47:55/123/Jim/Script still not works.
|
||||
2015.04.23 21:48:16/777/Daniel/I broke my monitor
|
||||
[root@localhost ~]#
|
||||
|
||||
注意: 要从无限循环中退出,你可以按[Ctrl]+[C]键。Shell会显示^表示Ctrl键。
|
||||
|
||||
### 使用管道重定向 ###
|
||||
|
||||
让我们给我们的“弗兰肯斯坦”添加更多功能吧。我想要脚本在每次呼叫后显示某个统计数据。比如说,我想要查看各个号码呼叫了我几次。为此,我们先cat一下data.txt文件:
|
||||
|
||||
[root@localhost ~]# cat data.txt
|
||||
2015.04.23 21:38:56/123/Jim/Script hanging.
|
||||
2015.04.23 21:43:50/321/Susane/Mouse was stolen
|
||||
2015.04.23 21:47:55/123/Jim/Script still not works.
|
||||
2015.04.23 21:48:16/777/Daniel/I broke my monitor
|
||||
2015.04.23 22:02:14/123/Jimmy/New script also not working!!!
|
||||
[root@localhost ~]#
|
||||
|
||||
现在,所有输出我们都可以重定向到**cut**命令,让**cut**来把每行切成一块一块(我们使用分隔符“/”),然后打印第二个字段:
|
||||
|
||||
[root@localhost ~]# cat data.txt | cut -d"/" -f2
|
||||
123
|
||||
321
|
||||
123
|
||||
777
|
||||
123
|
||||
[root@localhost ~]#
|
||||
|
||||
现在,我们可以把这个输出重定向到另外一个命令**sort**:
|
||||
|
||||
[root@localhost ~]# cat data.txt | cut -d"/" -f2|sort
|
||||
123
|
||||
123
|
||||
123
|
||||
321
|
||||
777
|
||||
[root@localhost ~]#
|
||||
|
||||
然后用**uniq**命令只留下不重复的行。要统计每个条目出现的次数,只需添加**-c**键到**uniq**命令:
|
||||
|
||||
[root@localhost ~]# cat data.txt | cut -d"/" -f2 | sort | uniq -c
|
||||
3 123
|
||||
1 321
|
||||
1 777
|
||||
[root@localhost ~]#
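如果还想把来电次数最多的号码排在最前面,可以在管道末尾再接一个 sort -rn(按数字逆序排序):

```shell
# 按出现次数从高到低排序;-n 按数字比较,-r 逆序,
# 次数最多的 123 会排在第一行
printf '123\n321\n123\n777\n123\n' | sort | uniq -c | sort -rn
```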
|
||||
|
||||
只要把这个添加到我们的循环的最后:
|
||||
|
||||
#!/bin/bash
|
||||
while true
|
||||
do
|
||||
read -p "Phone number: " phone
|
||||
now=`date "+%Y.%m.%d %H:%M:%S"`
|
||||
read -p "Name: " name
|
||||
read -p "Issue: " issue
|
||||
echo "$now/$phone/$name/$issue">>data.txt
|
||||
echo "===== We got calls from ====="
|
||||
cat data.txt | cut -d"/" -f2 | sort | uniq -c
|
||||
echo "--------------------------------"
|
||||
done
|
||||
|
||||
运行:
|
||||
|
||||
[root@localhost ~]# ./note.sh
|
||||
Phone number: 454
|
||||
Name: Malini
|
||||
Issue: Windows license expired.
|
||||
===== We got calls from =====
|
||||
3 123
|
||||
1 321
|
||||
1 454
|
||||
1 777
|
||||
--------------------------------
|
||||
Phone number: ^C
|
||||
|
||||

|
||||
|
||||
当前场景贯穿了几个熟知的步骤:
|
||||
|
||||
- 显示消息
|
||||
- 获取用户输入
|
||||
- 存储值到文件
|
||||
- 处理存储的数据
|
||||
|
||||
但是,如果接线员有时需要输入数据,有时需要看统计,或者可能要在存储的数据中查找一些东西呢?对于这些事情,我们需要使用switch/case语句,并且得知道怎样很好地格式化输出。这在shell中“画”表格的时候很有用。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-shell-script/guide-start-learning-shell-scripting-scratch/
|
||||
|
||||
作者:[Petras Liumparas][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/petrasl/
|
@ -0,0 +1,112 @@
|
||||
如何在CentOS 7.x中安装OpenERP(Odoo)
|
||||
================================================================================
|
||||
各位好,这篇教程介绍的是如何在CentOS 7中安装Odoo(也就是我们所知的OpenERP)。你是不是在考虑为你的业务安装一款不错的ERP(企业资源规划)软件?那么Odoo就是你要找的最好的程序,因为它是一款能为你的业务提供杰出特性的自由开源软件。
|
||||
|
||||
[Odoo][1](即以前的OpenERP)是一款自由开源的企业资源规划(ERP)软件,它包含了开源CRM、网站构建、电子商务、项目管理、计费账务、销售点、人力资源、市场、生产、采购管理以及其他用于提高效率及销售的模块。Odoo的各个模块可以作为独立程序使用,也可以无缝集成,因此你在安装几个模块后就能得到一个功能完整的开源ERP。
|
||||
|
||||
因此,下面是在你的CentOS上安装OpenERP的步骤。
|
||||
|
||||
### 1. 安装 PostgreSQL ###
|
||||
|
||||
首先,我们需要更新CentOS 7的软件包,确保最新的软件包、补丁和安全更新都已安装。要更新系统,我们要在shell下运行下面的命令。
|
||||
|
||||
# yum clean all
|
||||
# yum update
|
||||
|
||||
现在我们要安装PostgreSQL,因为OpenERP使用PostgreSQL作为他的数据库。要安装它,我们需要运行下面的命令。
|
||||
|
||||
# yum install postgresql postgresql-server postgresql-libs
|
||||
|
||||

|
||||
|
||||
安装完成后,我们需要用下面的命令初始化数据库。
|
||||
|
||||
# postgresql-setup initdb
|
||||
|
||||

|
||||
|
||||
我们接着设置PostgreSQL,使它开机自动启动。
|
||||
|
||||
# systemctl enable postgresql
|
||||
# systemctl start postgresql
|
||||
|
||||
因为我们还没有为用户“postgres”设置密码,我们现在来设置。
|
||||
|
||||
# su - postgres
|
||||
$ psql
|
||||
postgres=# \password postgres
|
||||
postgres=# \q
|
||||
# exit
|
||||
|
||||

|
||||
|
||||
### 2. 设置Odoo仓库 ###
|
||||
|
||||
在数据库初始化完成后,我们要添加EPEL(企业版Linux额外软件包)仓库到我们的CentOS中。Odoo(或者说OpenERP)依赖的Python运行时以及其他一些包没有包含在标准仓库中,因此我们需要为企业版Linux添加额外的软件包仓库来解决Odoo所需的依赖。要完成安装,我们需要运行下面的命令。
|
||||
|
||||
# yum install epel-release
|
||||
|
||||

|
||||
|
||||
现在,安装EPEL后,我们现在使用yum-config-manager添加Odoo(OpenERp)的仓库。
|
||||
|
||||
# yum install yum-utils
|
||||
|
||||
# yum-config-manager --add-repo=https://nightly.odoo.com/8.0/nightly/rpm/odoo.repo
|
||||
|
||||

|
||||
|
||||
### 3. 安装Odoo 8 (OpenERP) ###
|
||||
|
||||
在CentOS 7中添加Odoo 8(OpenERP)的仓库后。我们使用下面的命令来安装Odoo 8(OpenERP)。
|
||||
|
||||
# yum install -y odoo
|
||||
|
||||
上面的命令会安装odoo以及必需的依赖包。
|
||||
|
||||

|
||||
|
||||
现在我们使用下面的命令启动Odoo服务,并让它开机自动启动。
|
||||
|
||||
# systemctl enable odoo
|
||||
# systemctl start odoo
|
||||
|
||||

|
||||
|
||||
### 4. 防火墙允许 ###
|
||||
|
||||
因为Odoo使用8069端口,我们需要在防火墙中允许远程访问。我们使用下面的命令在防火墙中放行8069端口。
|
||||
|
||||
# firewall-cmd --zone=public --add-port=8069/tcp --permanent
|
||||
# firewall-cmd --reload
|
||||
|
||||

|
||||
|
||||
**注意:默认上,只有本地的连接才允许。如果我们要允许PostgreSQL的远程访问,我们需要在pg_hba.conf添加下面图片中一行**
|
||||
|
||||
# nano /var/lib/pgsql/data/pg_hba.conf
|
||||
|
||||

|
||||
|
||||
### 5. Web接口 ###
|
||||
|
||||
我们已经在CentOS 7中安装了最新的Odoo 8(OpenERP),我们可以在浏览器中输入http://ip-address:8069来访问Odoo。 接着,我们要做的第一件事就是创建一个新的数据库和新的密码。注意,主密码默认是管理员密码。接着,我们可以在面板中输入用户名和密码。
|
||||
|
||||

|
||||
|
||||
### 总结 ###
|
||||
|
||||
Odoo 8(OpenERP)是世界上最好的开源ERP程序之一。安装它非常值得,因为OpenERP由许多模块组成,是针对商务和公司的完整ERP程序。因此,如果你有任何问题、建议、反馈,请写在下面的评论栏中。谢谢!享受OpenERP(Odoo 8)吧 :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/setup-openerp-odoo-centos-7/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:https://www.odoo.com/
|
@ -0,0 +1,98 @@
|
||||
Linux有问必答——Linux上如何安装Shrew Soft IPsec VPN
|
||||
================================================================================
|
||||
|
||||
> **问题**:我需要连接到一个IPSec VPN网关,鉴于此,我尝试使用Shrew Soft VPN客户端,它是一个免费版本。我怎样才能安装Shrew Soft VPN客户端到[插入你的Linux发行版]?
|
||||
|
||||
市面上有许多商业VPN网关,同时附带有他们自己的专有VPN客户端软件。虽然也有许多开源的VPN服务器/客户端备选方案,但它们通常缺乏复杂的IPsec支持,比如互联网密钥交换(IKE),这是一个标准的IPsec协议,用于加固VPN密钥交换和验证安全。Shrew Soft VPN是一个免费的IPsec VPN客户端,它支持多种验证方法、密钥交换、加密以及防火墙穿越选项。
|
||||
|
||||
下面介绍如何安装Shrew Soft VPN客户端到Linux平台。
|
||||
|
||||
首先,从[官方站点][1]下载它的源代码。
|
||||
|
||||
### 安装Shrew VPN客户端到Debian, Ubuntu或者Linux Mint ###
|
||||
|
||||
Shrew Soft VPN客户端图形界面要求使用Qt 4.x。所以,作为依赖,你需要安装其开发文件。
|
||||
|
||||
$ sudo apt-get install cmake libqt4-core libqt4-dev libqt4-gui libedit-dev libssl-dev checkinstall flex bison
|
||||
$ wget https://www.shrew.net/download/ike/ike-2.2.1-release.tbz2
|
||||
$ tar xvfvj ike-2.2.1-release.tbz2
|
||||
$ cd ike
|
||||
$ cmake -DCMAKE_INSTALL_PREFIX=/usr -DQTGUI=YES -DETCDIR=/etc -DNATT=YES .
|
||||
$ make
|
||||
$ sudo make install
|
||||
$ cd /etc/
|
||||
$ sudo mv iked.conf.sample iked.conf
|
||||
|
||||
### 安装Shrew VPN客户端到CentOS, Fedora或者RHEL ###
|
||||
|
||||
与基于Debian的系统类似,在编译前你需要安装一堆依赖包,包括Qt4。
|
||||
|
||||
$ sudo yum install qt-devel cmake gcc-c++ openssl-devel libedit-devel flex bison
|
||||
$ wget https://www.shrew.net/download/ike/ike-2.2.1-release.tbz2
|
||||
$ tar xvjf ike-2.2.1-release.tbz2
|
||||
$ cd ike
|
||||
$ cmake -DCMAKE_INSTALL_PREFIX=/usr -DQTGUI=YES -DETCDIR=/etc -DNATT=YES .
|
||||
$ make
|
||||
$ sudo make install
|
||||
$ cd /etc/
|
||||
$ sudo mv iked.conf.sample iked.conf
|
||||
|
||||
在基于Red Hat的系统中,最后一步需要用文本编辑器打开/etc/ld.so.conf文件,并添加以下行。
|
||||
|
||||
$ sudo vi /etc/ld.so.conf
|
||||
|
||||
----------
|
||||
|
||||
/usr/lib/
|
||||
|
||||
重新加载运行时绑定的共享库文件,以容纳新安装的共享库:
|
||||
|
||||
$ sudo ldconfig
|
||||
|
||||
### 启动Shrew VPN客户端 ###
|
||||
|
||||
首先,启动IKE守护进程(iked)。该守护进程作为VPN客户端,通过IKE协议与远程主机协商,经由IPSec进行通信。
|
||||
|
||||
$ sudo iked
|
||||
|
||||

|
||||
|
||||
现在,启动qikea,它是一个IPsec VPN客户端前端。该GUI应用允许你管理远程站点配置并发起VPN连接。
|
||||
|
||||

|
||||
|
||||
要创建一个新的VPN配置,点击“添加”按钮,然后填入VPN站点配置。创建配置后,你可以通过点击该配置来发起VPN连接。
|
||||
|
||||

|
||||
|
||||
### 故障排除 ###
|
||||
|
||||
1. 我在运行iked时碰到了如下错误。
|
||||
|
||||
iked: error while loading shared libraries: libss_ike.so.2.2.1: cannot open shared object file: No such file or directory
|
||||
|
||||
要解决该问题,你需要更新动态链接器来容纳libss_ike库。对于此,请添加库文件的位置路径到/etc/ld.so.conf文件中,然后运行ldconfig命令。
|
||||
|
||||
$ sudo ldconfig
|
||||
|
||||
验证libss_ike是否添加到了库路径:
|
||||
|
||||
$ ldconfig -p | grep ike
|
||||
|
||||
----------
|
||||
|
||||
libss_ike.so.2.2.1 (libc6,x86-64) => /lib/libss_ike.so.2.2.1
|
||||
libss_ike.so (libc6,x86-64) => /lib/libss_ike.so
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/install-shrew-soft-ipsec-vpn-client-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
||||
[1]:https://www.shrew.net/download/ike
|
@ -0,0 +1,134 @@
|
||||
在 Debian, Ubuntu, Linux Mint 及 Fedora 中安装 uGet 下载管理器 2.0
|
||||
================================================================================
|
||||
在经历了一段漫长的开发期后,期间发布了超过 11 个开发版本,最终 uGet 项目小组高兴地宣布 uGet 的最新稳定版本 uGet 2.0 已经可以下载使用了。最新版本包含许多吸引人的特点,例如一个新的设定对话框,改进了 aria2 插件对 BitTorrent 和 Metalink 协议的支持,同时对位于横栏中的 uGet RSS 信息提供了更好的支持,其他特点包括:
|
||||
|
||||
- 一个新的 “检查更新” 按钮,提醒您有关新的发行版本的信息;
|
||||
- 增添新的语言支持并升级了现有的语言;
|
||||
- 增加了一个新的 “信息横栏” ,允许开发者轻松地向所有的用户提供有关 uGet 的信息;
|
||||
- 通过对文档、提交反馈和错误报告等内容的链接,增强了帮助菜单;
|
||||
- 将 uGet 下载管理器集成到了 Linux 平台下的两个主要的浏览器 Firefox 和 Google Chrome 中;
|
||||
- 改进了对 Firefox 插件 ‘FlashGot’ 的支持;
|
||||
|
||||
### 何为 uGet ###
|
||||
|
||||
uGet(先前名为 UrlGfe)是一个开源、免费且极其强大的基于 GTK 的多平台下载管理器,它用 C 语言写成,以 GPL 协议发布。它提供了一大批功能,如恢复先前的下载任务、支持多重下载、用独立配置支持分类、剪贴板监视、下载队列、从 HTML 文件中导出 URL 地址、通过 Firefox 的 FlashGot 插件与浏览器集成,以及使用内置的 aria2(一个命令行下载管理器)来下载 torrent 和 metalink 文件。
|
||||
|
||||
我已经在下面罗列出了 uGet 下载管理器的所有关键特点,并附带了详细的解释。
|
||||
|
||||
#### uGet 下载管理器的关键特点 ####
|
||||
|
||||
- 下载队列: 可以将你的下载任务放入一个队列中。当某些下载任务完成后,将会自动开始下载队列中余下的文件;
|
||||
- 恢复下载: 假如在某些情况下,你的网络中断了,不要担心,你可以从先前停止的地方继续下载或重新开始;
|
||||
- 下载分类: 支持多种分类来管理下载;
|
||||
- 剪贴板监视: 将要下载的文件类型复制到剪贴板中,便会自动弹出下载提示框以下载刚才复制的文件;
|
||||
- 批量下载: 允许你轻松地一次性下载多个文件;
|
||||
- 支持多种协议: 允许你轻松地使用 aria2 命令行插件通过 HTTP, HTTPS, FTP, BitTorrent 及 Metalink 等协议下载文件;
|
||||
- 多连接: 使用 aria2 插件,每个下载同时支持多达 20 个连接;
|
||||
- 支持 FTP 登录或匿名 FTP 登录: 同时支持使用用户名和密码来登录 FTP 或匿名 FTP ;
|
||||
- 队列下载: 新增队列下载,现在你可以对你的所有下载进行安排调度;
|
||||
- 通过 FlashGot 与 FireFox 集成: 与作为一个独立支持的 Firefox 插件的 FlashGot 集成,从而可以处理单个或大量的下载任务;
|
||||
- CLI 界面或虚拟终端支持: 提供命令行或虚拟终端选项来下载文件;
|
||||
- 自动创建目录: 假如你提供了一个先前并不存在的保存路径,uGet 将会自动创建这个目录;
|
||||
- 下载历史管理: 跟踪记录已下载和已删除的下载任务的条目,每个列表支持 9999 个条目,比当前默认支持条目数目更早的条目将会被自动删除;
|
||||
- 多语言支持: uGet 默认使用英语,但它可支持多达 23 种语言;
|
||||
- Aria2 插件: uGet 集成了 Aria2 插件,来为 aria2 提供更友好的 GUI 界面;
|
||||
|
||||
如若你想了解更加完整的特点描述,请访问 uGet 官方的 [特点页面][1].
|
||||
|
||||
### 在 Debian, Ubuntu, Linux Mint 及 Fedora 中安装 uGet ###
|
||||
|
||||
uGet 开发者在 Linux 平台下的各种软件仓库中添加了 uGet 的最新版本,所以你可以在你使用的 Linux 发行版本下使用受支持的软件仓库来安装或升级 uGet 。
|
||||
|
||||
当前,一些 Linux 发行版本下的 uGet 可能不是最新的,但你可以到 [uGet 下载页面][2] 去了解你所用发行版本的支持状态,在那里选择你喜爱的发行版本来了解更多的信息。
|
||||
|
||||
#### 在 Debian 下 ####
|
||||
|
||||
在 Debian 的测试版本(Jessie)和不稳定版本(Sid)中,你可以通过官方软件仓库轻松可靠地安装和升级 uGet。
|
||||
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install uget
|
||||
|
||||
#### 在 Ubuntu 和 Linux Mint 下 ####
|
||||
|
||||
在 Ubuntu 和 Linux Mint 下,你可以使用官方的 PPA `ppa:plushuang-tw/uget-stable` 安装和升级 uGet ,通过使用这个 PPA,你可以自动地与最新版本保持同步。
|
||||
|
||||
$ sudo add-apt-repository ppa:plushuang-tw/uget-stable
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install uget
|
||||
|
||||
#### 在 Fedora 下 ####
|
||||
|
||||
在 Fedora 20 – 21 下,最新版本的 uGet(2.0) 可以从官方软件仓库中获得,从这些软件仓库中安装是非常值得信赖的。
|
||||
|
||||
$ sudo yum install uget
|
||||
|
||||
**注**: 在旧版本的 Debian, Ubuntu, Linux Mint 和 Fedora 下,用户也可以安装 uGet , 但可获取的版本为 1.10.4 。假如你期待使用升级版本(例如 2.0 版本),你需要升级你的系统并添加 uGet 的 PPA 以此来获取最新的稳定版本。
|
||||
|
||||
### 安装 aria2 插件 ###
|
||||
|
||||
[aria2][3] 是一个卓越的命令行下载管理应用,它作为 aria2 插件集成进 uGet,为 uGet 增添了更为强大的功能,如下载 torrent、metalink 文件,支持多种协议和多来源下载等。
|
||||
|
||||
默认情况下,uGet 在当今大多数的 Linux 系统中使用 `curl` 来作为后端,但 aria2 插件将 curl 替换为 aria2 来作为 uGet 的后端。
|
||||
|
||||
aria2 是一个单独的软件包,需要独立安装。你可以在你的 Linux 发行版本下,使用受支持的软件仓库来轻易地安装 aria2 的最新版本,或根据 [下载 aria2 页面][4] 来安装它,该页面详细解释了在各个发行版本中如何安装 aria2 。
|
||||
|
||||
#### 在 Debian, Ubuntu 和 Linux Mint 下 ####
|
||||
|
||||
利用下面的命令,使用 aria2 的个人软件仓库来安装最新版本的 aria2 :
|
||||
|
||||
$ sudo add-apt-repository ppa:t-tujikawa/ppa
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install aria2
|
||||
|
||||
#### 在 Fedora 下 ####
|
||||
|
||||
Fedora 的官方软件仓库中已经添加了 aria2 软件包,所以你可以轻易地使用下面的 yum 命令来安装它:
|
||||
|
||||
$ sudo yum install aria2
|
||||
|
||||
#### 开启 uGet ####
|
||||
|
||||
为了启动 uGet,从桌面菜单的搜索栏中键入 "uGet"。可参考如下的截图:
|
||||
|
||||

|
||||
开启 uGet 下载管理器
|
||||
|
||||

|
||||
uGet 版本: 2.0
|
||||
|
||||
#### 在 uGet 中激活 aria2 插件 ####
|
||||
|
||||
为了激活 aria2 插件, 从 uGet 菜单接着到 `编辑 –> 设置 –> 插件` , 从下拉菜单中选择 "aria2"。
|
||||
|
||||

|
||||
为 uGet 启用 Aria2 插件
|
||||
|
||||
### uGet 2.0 截图赏析 ###
|
||||
|
||||

|
||||
使用 Aria2 下载文件
|
||||
|
||||

|
||||
使用 uGet 下载 Torrent 文件
|
||||
|
||||

|
||||
使用 uGet 进行批量下载
|
||||
|
||||
针对其他 Linux 发行版本和 Windows 平台的 RPM 包和 uGet 的源文件都可以在 uGet 的[下载页面][5] 下找到。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/install-uget-download-manager-in-linux/
|
||||
|
||||
作者:[Ravi Saive][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/admin/
|
||||
[1]:http://uget.visuex.com/features
|
||||
[2]:http://ugetdm.com/downloads
|
||||
[3]:http://www.tecmint.com/install-aria2-a-multi-protocol-command-line-download-manager-in-rhel-centos-fedora/
|
||||
[4]:http://ugetdm.com/downloads-aria2
|
||||
[5]:http://ugetdm.com/downloads
|
@ -0,0 +1,151 @@
|
||||
修复Ubuntu 14.04中各种更新错误
|
||||
================================================================================
|
||||

|
||||
|
||||
在Ubuntu更新中,谁没有碰见个错误?在Ubuntu和其它基于Ubuntu的Linux发行版中,更新错误很常见,也为数不少。这些错误出现的原因多种多样,修复起来也很简单。在本文中,我们将见到Ubuntu中各种类型频繁发生的更新错误以及它们的修复方法。
|
||||
|
||||
### 合并列表问题 ###
|
||||
|
||||
当你在终端中运行更新命令时,你可能会碰到这个错误“[合并列表错误][1]”,就像下面这样:
|
||||
|
||||
> E:Encountered a section with no Package: header,
|
||||
>
|
||||
> E:Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-i386_Packages,
|
||||
>
|
||||
> E:The package lists or status file could not be parsed or opened.
|
||||
|
||||
可以使用以下命令来修复该错误:
|
||||
|
||||
sudo rm -r /var/lib/apt/lists/*
|
||||
sudo apt-get clean && sudo apt-get update
|
||||
|
||||
### 下载仓库信息失败 -1 ###
|
||||
|
||||
实际上,有两种类型的[下载仓库信息失败错误][2]。如果你的错误是这样的:
|
||||
|
||||
> W:Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_oneiric_restricted_binary-i386_Packages Hash Sum mismatch,
|
||||
>
|
||||
> W:Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_binary-i386_Packages Hash Sum mismatch,
|
||||
>
|
||||
> E:Some index files failed to download. They have been ignored, or old ones used instead
|
||||
|
||||
那么,你可以用以下命令修复:
|
||||
|
||||
sudo rm -rf /var/lib/apt/lists/*
|
||||
sudo apt-get update
|
||||
|
||||
### 下载仓库信息失败 -2 ###
|
||||
|
||||
下载仓库信息失败的另外一种类型是由于PPA过时导致的。通常,当你运行更新管理器,并看到这样的错误时:
|
||||
|
||||

|
||||
|
||||
你可以运行sudo apt-get update来查看哪个PPA更新失败,然后把它从源列表中删除。你可以按照这个截图指南来[修复下载仓库信息失败错误][3]。
|
||||
|
||||
### 下载包文件失败错误 ###
|
||||
|
||||
一个类似的错误是[下载包文件失败错误][4],像这样:
|
||||
|
||||

|
||||
|
||||
该错误很容易修复,只需修改软件源为主服务器即可。转到软件和更新,在那里你可以修改下载服务器为主服务器:
|
||||
|
||||

|
||||
|
||||
### 部分更新错误 ###
|
||||
|
||||
在终端中运行更新会出现[部分更新错误][5]:
|
||||
|
||||
> Not all updates can be installed
|
||||
>
|
||||
> Run a partial upgrade, to install as many updates as possible
|
||||
|
||||
在终端中运行以下命令来修复该错误:
|
||||
|
||||
sudo apt-get install -f
|
||||
|
||||
### 加载共享库时发生错误 ###
|
||||
|
||||
该错误更多是安装错误,而不是更新错误。如果尝试从源码安装程序,你可能会碰到这个错误:
|
||||
|
||||
> error while loading shared libraries:
|
||||
>
|
||||
> cannot open shared object file: No such file or directory
|
||||
|
||||
该错误可以通过在终端中运行以下命令来修复:
|
||||
|
||||
sudo /sbin/ldconfig -v
|
||||
|
||||
你可以在这里查找到更多详细内容[加载共享库时发生错误][6]。
|
||||
|
||||
### 无法获取锁/var/cache/apt/archives/lock ###
|
||||
|
||||
在另一个程序在使用APT时,会发生该错误。假定你正在Ubuntu软件中心安装某个东西,然后你又试着在终端中运行apt。
|
||||
|
||||
> E: Could not get lock /var/cache/apt/archives/lock – open (11: Resource temporarily unavailable)
|
||||
>
|
||||
> E: Unable to lock directory /var/cache/apt/archives/
|
||||
|
||||
通常,只要你把所有其它使用apt的程序关了,这个问题就会好的。但是,如果问题持续,可以使用以下命令:
|
||||
|
||||
sudo rm /var/cache/apt/archives/lock
|
||||
|
||||
如果上面的命令不起作用,可以试试这个命令:
|
||||
|
||||
sudo killall apt-get
|
||||
|
||||
关于该错误的更多信息,可以在[这里][7]找到。
|
||||
|
||||
### GPG错误: 下列签名无法验证 ###
|
||||
|
||||
在添加一个PPA时,可能会导致以下错误[GPG错误: 下列签名无法验证][8],这通常发生在终端中运行更新时:
|
||||
|
||||
> W: GPG error: http://repo.mate-desktop.org saucy InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 68980A0EA10B4DE8
|
||||
|
||||
我们所要做的,就是获取系统中的这个公钥,从信息中获取密钥号。在上述信息中,密钥号为68980A0EA10B4DE8。该密钥可通过以下方式使用:
|
||||
|
||||
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 68980A0EA10B4DE8
|
||||
|
||||
在添加密钥后,再次运行更新就没有问题了。
|
||||
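当报错中出现多个不同的密钥号时,逐个手工复制很容易出错。下面是一个假设性的 shell 示意,演示如何从报错文本中自动提取密钥号(err 变量中的内容模拟 `sudo apt-get update` 的报错输出,密钥号 68980A0EA10B4DE8 取自文中示例):

```shell
# 从 apt 的 GPG 报错文本中提取缺失的公钥号(示意,err 为模拟输出)
err='W: GPG error: http://repo.mate-desktop.org saucy InRelease: The following signatures could not be verified because the public key is not available: NO_PUBKEY 68980A0EA10B4DE8'
key=$(echo "$err" | grep -oE 'NO_PUBKEY [0-9A-F]+' | awk '{print $2}')
echo "$key"
# 提取之后即可导入:sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key"
```

实际使用时,可以把 `apt-get update 2>&1` 的输出通过管道接入同样的 grep/awk 过滤。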
|
||||
### BADSIG错误 ###
|
||||
|
||||
另外一个与签名相关的Ubuntu更新错误是[BADSIG错误][9],它看起来像这样:
|
||||
|
||||
> W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com precise Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key
|
||||
>
|
||||
> W: GPG error: http://ppa.launchpad.net precise Release:
|
||||
>
|
||||
> The following signatures were invalid: BADSIG 4C1CBC1B69B0E2F4 Launchpad PPA for Jonathan French W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/Release
|
||||
|
||||
要修复该BADSIG错误,请在终端中使用以下命令:
|
||||
|
||||
sudo apt-get clean
|
||||
cd /var/lib/apt
|
||||
sudo mv lists oldlist
|
||||
sudo mkdir -p lists/partial
|
||||
sudo apt-get clean
|
||||
sudo apt-get update
|
||||
|
||||
本文汇集了你可能会碰到的**Ubuntu更新错误**,我希望这会对你处理这些错误有所帮助。你在Ubuntu中是否也碰到过其它更新错误呢?请在下面的评论中告诉我,我会试着写个快速指南。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://itsfoss.com/fix-update-errors-ubuntu-1404/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://itsfoss.com/author/abhishek/
|
||||
[1]:http://itsfoss.com/how-to-fix-problem-with-mergelist/
|
||||
[2]:http://itsfoss.com/solve-ubuntu-error-failed-to-download-repository-information-check-your-internet-connection/
|
||||
[3]:http://itsfoss.com/failed-to-download-repository-information-ubuntu-13-04/
|
||||
[4]:http://itsfoss.com/fix-failed-download-package-files-error-ubuntu/
|
||||
[5]:http://itsfoss.com/fix-partial-upgrade-error-elementary-os-luna-quick-tip/
|
||||
[6]:http://itsfoss.com/solve-open-shared-object-file-quick-tip/
|
||||
[7]:http://itsfoss.com/fix-ubuntu-install-error/
|
||||
[8]:http://itsfoss.com/solve-gpg-error-signatures-verified-ubuntu/
|
||||
[9]:http://itsfoss.com/solve-badsig-error-quick-tip/
|
@ -0,0 +1,96 @@
|
||||
Linux中用于监控网络、磁盘使用、开机时间、平均负载和内存使用率的shell脚本
|
||||
================================================================================
|
||||
系统管理员的任务真的很艰难,因为他/她必须监控服务器、用户、日志,还得创建备份,等等等等。对于大多数重复性的任务,多数管理员都会写一个自动化脚本,日复一日地重复这些任务。这里,我们给大家写了一个 shell 脚本,用来自动化完成系统管理员的这些常规任务。它在多数情况下都很有用,尤其是对新手而言:他们能通过该脚本获取到大多数想要的信息,包括系统、网络、用户、负载、内存、主机、内部IP、外部IP、开机时间等。
|
||||
|
||||
我们已经注意并进行了格式化输出(在一定程度上哦)。此脚本不包含任何恶意内容,并且它能以普通用户帐号运行。事实上,我们也推荐你以普通用户运行该脚本,而不是root。
|
||||
|
||||

|
||||
监控Linux系统健康的Shell脚本
|
||||
|
||||
只要为Tecmint和脚本作者保留适当的署名,你就可以自由使用/修改/再分发下面的代码。我们已经试着在一定程度上自定义了输出结果,除了要求的输出内容外,其它内容都不会生成。我们也尽量使用了Linux系统中通常不会用到的变量名,以避免与系统变量冲突。
|
||||
|
||||
#### 最小系统要求 ####
|
||||
|
||||
你所需要的一切,就是一台正常运转的 Linux 机器。
|
||||
|
||||
#### 依赖性 ####
|
||||
|
||||
对于一个标准的Linux发行版,使用此包时没有任何依赖。此外,该脚本不需要root权限来执行。但是,如果你想要安装,则必须输入一次root密码。
|
||||
|
||||
#### 安全性 ####
|
||||
|
||||
我们也关注到了系统安全问题,所以在安装此包时,不需要安装任何额外包,也不需要root访问权限来运行。此外,源代码是采用Apache 2.0许可证发布的,这意味着只要你保留Tecmint的版权,你可以自由地编辑、修改并再分发该代码。
|
||||
|
||||
### 如何安装和运行脚本? ###
|
||||
|
||||
首先,使用[wget命令][1]下载监控脚本`“tecmint_monitor.sh”`,给它赋予合适的执行权限。
|
||||
|
||||
# wget http://tecmint.com/wp-content/scripts/tecmint_monitor.sh
|
||||
# chmod 755 tecmint_monitor.sh
|
||||
|
||||
强烈建议你以普通用户身份安装该脚本,而不是root。安装过程中会询问root密码,并且在需要的时候安装必要的组件。
|
||||
|
||||
要安装`“tecmint_monitor.sh”`脚本,只需像下面这样使用-i(安装)选项就可以了。
|
||||
|
||||
./tecmint_monitor.sh -i
|
||||
|
||||
在提示你输入root密码时输入该密码。如果一切顺利,你会看到像下面这样的安装成功信息。
|
||||
|
||||
Password:
|
||||
Congratulations! Script Installed, now run monitor Command
|
||||
|
||||
安装完毕后,你可以通过在任何位置,以任何用户调用命令`‘monitor’`来运行该脚本。如果你不喜欢安装,你需要在每次运行时输入路径。
|
||||
|
||||
# ./Path/to/script/tecmint_monitor.sh
|
||||
|
||||
现在,以任何用户从任何地方运行monitor命令,就是这么简单:
|
||||
|
||||
$ monitor
|
||||
|
||||

|
||||
|
||||
你一运行命令,就会获得下面这些各种各样和系统相关的信息:
|
||||
|
||||
- 互联网连通性
|
||||
- 操作系统类型
|
||||
- 操作系统名称
|
||||
- 操作系统版本
|
||||
- 架构
|
||||
- 内核版本
|
||||
- 主机名
|
||||
- 内部IP
|
||||
- 外部IP
|
||||
- 域名服务器
|
||||
- 已登录用户
|
||||
- 内存使用率
|
||||
- 交换分区使用率
|
||||
- 磁盘使用率
|
||||
- 平均负载
|
||||
- 系统开机时间
|
||||
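上面列出的这些信息,大多可以用几条标准命令获得。下面是一个极简的示意脚本(仅为演示获取这类信息的思路,并非 tecmint_monitor.sh 的实际实现),打印其中几项:

```shell
#!/bin/bash
# 极简示意:打印主机名、内核版本、平均负载与内存使用率
# (仅演示思路,并非 tecmint_monitor.sh 的实际代码)
echo "主机名    : $(hostname)"
echo "内核版本  : $(uname -r)"
echo "平均负载  : $(cut -d' ' -f1-3 /proc/loadavg)"
total_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
avail_kb=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)
avail_kb=${avail_kb:-0}   # 旧内核(3.14 之前)可能没有 MemAvailable 字段
echo "内存使用率: $(( (total_kb - avail_kb) * 100 / total_kb ))%"
```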
|
||||
使用-v(版本)开关来检查安装的脚本的版本。
|
||||
|
||||
$ monitor -v
|
||||
|
||||
tecmint_monitor version 0.1
|
||||
Designed by Tecmint.com
|
||||
Released Under Apache 2.0 License
|
||||
|
||||
### 小结 ###
|
||||
|
||||
该脚本在一些机器上可以开箱即用,这一点我已经检查过。相信对于你而言,它也会正常工作。如果你发现了什么毛病,可以在评论中告诉我。这个脚本还没有到此为止,这仅仅是个开始,你可以把它提升到任何水平。如果你想编辑脚本,把它带入一个更深的层次,尽管随意去做,别忘了给我们适当的署名,也别忘了把你更新后的脚本拿出来和我们分享,这样,我们也能注明你的贡献来更新此文。
|
||||
|
||||
别忘了和我们分享你的想法或者脚本,我们会在这儿帮助你。谢谢你们给予的所有挚爱。保持连线,不要走开哦。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/linux-server-health-monitoring-script/
|
||||
|
||||
作者:[Avishek Kumar][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/avishek/
|
||||
[1]:http://www.tecmint.com/10-wget-command-examples-in-linux/
|
@ -0,0 +1,107 @@
|
||||
如何在 Windows 操作系统中运行 Docker 客户端
|
||||
================================================================================
|
||||
大家好,今天我们来了解一下 Windows 操作系统中的 Docker,以及在其中安装 Docker Windows 客户端的知识。Docker 引擎使用了 Linux 特有的内核特性,因此不能直接在 Windows 内核上运行;为此,它会创建一个小型虚拟机来运行 Linux,并利用其资源和内核。Windows 的 Docker 客户端借助这个虚拟化的 Docker 引擎,实现开箱即用地构建、运行以及管理 Docker 容器。Boot2Docker 团队开发了名为 Boot2Docker 的应用程序,它创建的虚拟机运行一个基于 [Tiny Core Linux][1] 的小型 Linux 系统,专门用于在 Windows 上运行 [Docker][2] 容器。它完全运行在内存中,只需要大约 27MB 内存,并能在 5 秒(因人而异)内启动。因此,在用于 Windows 的 Docker 引擎被开发出来之前,我们在 Windows 机器里只能这样运行 Linux 容器。
|
||||
|
||||
下面是安装 Docker 客户端并在上面运行容器的简单步骤。
|
||||
|
||||
### 1. 下载 Boot2Docker ###
|
||||
|
||||
在我们开始安装之前,我们需要 Boot2Docker 的可执行文件。可以从 [它的 Github][3] 下载最新版本的 Boot2Docker。在这篇指南中,我们从网站中下载版本 v1.6.1。我们从那网页中用我们喜欢的浏览器或者下载管理器下载了名为 [docker-install.exe][4] 的文件。
|
||||
|
||||

|
||||
|
||||
### 2. 安装 Boot2Docker ###
|
||||
|
||||
现在我们运行安装文件,它会安装 Windows Docker 客户端、用于 Windows 的 Git(MSYS-git)、VirtualBox、Boot2Docker Linux ISO 以及 Boot2Docker 管理工具,这些对于开箱即用地运行 Docker 引擎都至关重要。
|
||||
|
||||

|
||||
|
||||
### 3. 运行 Boot2Docker ###
|
||||
|
||||

|
||||
|
||||
安装完成必要的组件之后,我们从桌面 Boot2Docker 快捷方式启动 Boot2Docker。它会要求你输入以后用于验证的 SSH 密钥。然后会启动一个配置好的用于管理在虚拟机中运行的 Docker 的 unix shell。
|
||||
|
||||

|
||||
|
||||
为了检查是否正确配置,运行下面的 docker version 命令。
|
||||
|
||||
docker version
|
||||
|
||||

|
||||
|
||||
### 4. 运行 Docker ###
|
||||
|
||||
由于 **Boot2Docker Start** 自动启动了一个已经正确设置好环境变量的 shell,我们可以马上开始使用 Docker。**请注意,如果我们将 Boot2Docker 作为一个远程 Docker 守护进程,那么不要在 docker 命令之前加 sudo。**
|
||||
|
||||
现在,让我们来试试 **hello-world** 例子镜像,它会下载 hello-world 镜像,运行并输出 "Hello from Docker" 信息。
|
||||
|
||||
$ docker run hello-world
|
||||
|
||||

|
||||
|
||||
### 5. 使用命令提示符(CMD) 运行 Docker###
|
||||
|
||||
现在,如果你想开始用命令提示符使用 Docker,你可以打开命令提示符(CMD.exe)。由于 Boot2Docker 要求 ssh.exe 在 PATH 中,我们需要在命令提示符中输入以下命令使得 %PATH% 环境变量中包括 Git 安装目录下的 bin 文件夹。
|
||||
|
||||
set PATH=%PATH%;"c:\Program Files (x86)\Git\bin"
|
||||
|
||||

|
||||
|
||||
运行上面的命令之后,我们可以在命令提示符中运行 **boot2docker start** 启动 Boot2Docker 虚拟机。
|
||||
|
||||
boot2docker start
|
||||
|
||||

|
||||
|
||||
**注意**: 如果你看到 machine does not exist 的错误信息,就运行 **boot2docker init** 命令。
|
||||
|
||||
然后复制控制台中的命令到 cmd.exe 中为控制台窗口设置环境变量,然后我们就可以像平常一样运行 docker 容器了。
|
||||
|
||||
### 6. 使用 PowerShell 运行 Docker ###
|
||||
|
||||
为了能在 PowerShell 中运行 Docker,我们需要启动一个 PowerShell 窗口并添加 ssh.exe 到 PATH 变量。
|
||||
|
||||
$Env:Path = "${Env:Path};c:\Program Files (x86)\Git\bin"
|
||||
|
||||
运行完上面的命令,我们还需要运行
|
||||
|
||||
boot2docker start
|
||||
|
||||

|
||||
|
||||
这会打印出用于设置环境变量、以便连接到虚拟机内运行的 Docker 的 PowerShell 命令。我们只需在 PowerShell 中运行这些命令,就可以和平常一样运行 docker 容器了。
|
||||
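boot2docker 的 shellinit 子命令打印的正是这类环境变量。下面是一个假设性的手动设置示意(IP 192.168.59.103、端口 2376 与证书路径均为 Boot2Docker 的常见默认约定,实际值请以 shellinit 的输出为准):

```shell
# 假设性示意:手动设置 Docker 客户端连接 Boot2Docker 虚拟机所需的环境变量
# (IP/端口/证书路径为 Boot2Docker 默认约定,实际以 `boot2docker shellinit` 输出为准)
export DOCKER_HOST='tcp://192.168.59.103:2376'
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.boot2docker/certs/boot2docker-vm"
echo "$DOCKER_HOST"
```

在 unix shell 中,也可以直接用 `eval "$(boot2docker shellinit)"` 一步完成同样的设置。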
|
||||
### 7. 用 PUTTY 登录 ###
|
||||
|
||||
Boot2Docker 在 %USERPROFILE%\.ssh 目录生成并使用用于登录的公钥和私钥,我们需要使用这个文件夹中的私钥。私钥需要先转换为 PuTTY 的格式,这可以通过 puttygen.exe 实现。
|
||||
|
||||
我们需要打开 puttygen.exe 并从 %USERPROFILE%\.ssh\id_boot2docker 中导入("File"->"Load" 菜单)私钥,然后点击 "Save Private Key"。然后用保存的文件通过 PuTTY 用 docker@127.0.0.1:2022 登录。
|
||||
|
||||
### 8. Boot2Docker 选项 ###
|
||||
|
||||
Boot2Docker 管理工具提供了一些命令,如下所示。
|
||||
|
||||
$ boot2docker
|
||||
|
||||
Usage: boot2docker.exe [<options>] {help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|ip|shellinit|delete|download|upgrade|version} [<args>]
|
||||
|
||||
### 总结 ###
|
||||
|
||||
通过 Docker Windows 客户端使用 Docker 很有趣。Boot2Docker 管理工具是一个很棒的应用程序,它能让任何 Docker 容器像在 Linux 主机上一样平稳运行。如果你细心的话,会发现 boot2docker 默认用户的用户名是 docker,密码是 tcuser。最新版本的 boot2docker 设置了一个 host-only 的网络适配器来提供对容器端口的访问,其地址一般是 192.168.59.103,但可能因 VirtualBox 的 DHCP 分配而改变。如果你有任何问题、建议、反馈,请在下面的评论框中写下来,然后我们可以改进或者更新我们的内容。非常感谢!Enjoy :-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/run-docker-client-inside-windows-os/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|
||||
[1]:http://tinycorelinux.net/
|
||||
[2]:https://www.docker.io/
|
||||
[3]:https://github.com/boot2docker/windows-installer/releases/latest
|
||||
[4]:https://github.com/boot2docker/windows-installer/releases/download/v1.6.1/docker-install.exe
|
@ -0,0 +1,62 @@
|
||||
Linux有问必答——Linux上如何查看torrent文件内容
|
||||
================================================================================
|
||||
> **问题**: 我从网站上下载了一个torrent文件。Linux上有没有工具让我查看torrent文件的内容?例如,我想知道torrent里面都有什么文件。
|
||||
|
||||
torrent文件(也就是扩展名为**.torrent**的文件)是BitTorrent元数据文件,里面存储了BitTorrent客户端用来从BitTorrent点对点网络下载共享文件的信息(如,追踪器URL、文件列表、大小、校验和、创建日期等)。在单个torrent文件里面,可以列出一个或多个文件用于共享。
|
||||
|
||||
torrent文件内容由BEncode编码为BitTorrent数据序列化格式,因此,要查看torrent文件的内容,你需要相应的解码器。
|
||||
|
||||
事实上,任何图形化的BitTorrent客户端(如Transmission或uTorrent)都带有BEncode解码器,所以,你可以用它们直接打开来查看torrent文件的内容。然而,如果你不想要使用BitTorrent客户端来检查torrent文件,你可以试试这个命令行torrent查看器,它叫[dumptorrent][1]。
|
||||
|
||||
**dumptorrent**命令可以使用内建的BEncode解码器打印torrent文件的详细信息(如,文件名、大小、跟踪器URL、创建日期、信息散列等等)。
|
||||
|
||||
### 安装DumpTorrent到Linux ###
|
||||
|
||||
要安装dumptorrent到Linux,你可以从源代码来构建它。
|
||||
|
||||
在Debian、Ubuntu或Linux Mint上:
|
||||
|
||||
$ sudo apt-get install gcc make
|
||||
$ wget http://downloads.sourceforge.net/project/dumptorrent/dumptorrent/1.2/dumptorrent-1.2.tar.gz
|
||||
$ tar -xvf dumptorrent-1.2.tar.gz
|
||||
$ cd dumptorrent-1.2
|
||||
$ make
|
||||
$ sudo cp dumptorrent /usr/local/bin
|
||||
|
||||
在CentOS、Fedora或RHEL上:
|
||||
|
||||
$ sudo yum install gcc make
|
||||
$ wget http://downloads.sourceforge.net/project/dumptorrent/dumptorrent/1.2/dumptorrent-1.2.tar.gz
|
||||
$ tar -xvf dumptorrent-1.2.tar.gz
|
||||
$ cd dumptorrent-1.2
|
||||
$ make
|
||||
$ sudo cp dumptorrent /usr/local/bin
|
||||
|
||||
确保你的路径中[包含][2]了/usr/local/bin。
|
||||
|
||||
### 查看torrent的内容 ###
|
||||
|
||||
要检查torrent的内容,只需要运行dumptorrent,并将torrent文件作为参数执行。这会打印出torrent的概要,包括文件名、大小和跟踪器URL。
|
||||
|
||||
$ dumptorrent <torrent-file>
|
||||
|
||||

|
||||
要查看torrent的完整内容,请添加“-v”选项。它会打印更多关于torrent的详细信息,包括信息散列(info hash)、片长度(piece length)、创建日期、创建者,以及完整的跟踪器地址列表(announce list)。
|
||||
|
||||
$ dumptorrent -v <torrent-file>
|
||||
|
||||

|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/view-torrent-file-content-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://ask.xmodulo.com/author/nanni
|
||||
[1]:http://dumptorrent.sourceforge.net/
|
||||
[2]:http://ask.xmodulo.com/change-path-environment-variable-linux.html
|
@ -0,0 +1,106 @@
|
||||
关于Docker容器的基础网络命令
|
||||
================================================================================
|
||||
各位好,今天我们将学习一些Docker容器的基础网络命令。Docker是一个开放平台,用于打包、发布并以轻量级容器运行任意程序。它不限制语言、框架或打包系统,可在任何时间、任何地方运行,小到家用电脑,大到高端服务器。这使得在部署和扩展网络应用、数据库和后端服务时,不必依赖特定的技术栈或服务商。Docker天生适合网络场景,目前已被应用于数据中心、ISP和越来越多的网络服务中。
|
||||
|
||||
因此,这里有一些你在管理Docker容器的时候会用到的一些命令。
|
||||
|
||||
### 1. 找到Docker接口 ###
|
||||
|
||||
Docker默认会创建一个名为docker0的网桥接口来连接外部世界,docker容器运行时直接连接到这个网桥接口。默认情况下,docker会把172.17.42.1/16分配给docker0,它是所有运行中容器ip地址的子网。得到Docker接口的ip地址非常简单:要找出docker0网桥接口和连接到其上的docker容器,我们可以在终端或者安装了docker的shell中运行ip命令。
|
||||
|
||||
# ip a
|
||||
|
||||

|
||||
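如果只想取出 docker0 的 IPv4 地址而不看完整输出,可以用 awk 过滤 inet 行。下面是一个假设性示意(ip_out 变量模拟 `ip -4 addr show docker0` 输出中的 inet 行):

```shell
# 从 ip 命令输出中提取 docker0 的 IPv4 地址(示意,ip_out 为模拟输出)
ip_out='    inet 172.17.42.1/16 scope global docker0'
addr=$(echo "$ip_out" | awk '/inet /{print $2}')
echo "$addr"    # 172.17.42.1/16
# 实际使用:ip -4 addr show docker0 | awk '/inet /{print $2}'
```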
|
||||
### 2. 得到Docker容器的ip地址 ###
|
||||
|
||||
如我们上面所说,docker在主机中创建了一个叫docker0的网桥接口。当我们创建一个新的docker容器时,它会自动被分配一个该子网范围内的ip地址。因此,要查看运行中的Docker容器的ip地址,我们需要进入一个正在运行的容器,并用下面的命令检查ip地址。首先,我们运行一个新的容器并进入其中。如果你已经有一个正在运行的容器,可以跳过这个步骤。
|
||||
|
||||
# docker run -it ubuntu
|
||||
|
||||
现在,我们可以运行ip a来得到容器的ip地址了。
|
||||
|
||||
# ip a
|
||||
|
||||

|
||||
|
||||
### 3. 映射暴露的端口 ###
|
||||
|
||||
要映射Dockerfile中用EXPOSE声明的端口,我们只需使用带-P标志的命令。它会把Dockerfile中定义的容器端口映射到宿主机的随机端口上。下面是使用-P来打开/映射已定义端口的例子。
|
||||
|
||||
# docker run -itd -P httpd
|
||||
|
||||

|
||||
|
||||
上面的命令会把容器内httpd的80端口(Dockerfile中定义)映射到宿主机的一个随机端口上。我们用下面的命令来查看正在运行的容器映射出的端口。
|
||||
|
||||
# docker ps
|
||||
|
||||
并且可以用下面的curl命令来检查。
|
||||
|
||||
# curl http://localhost:49153
|
||||
|
||||

|
||||
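由于 -P 分配的宿主机端口是随机的,不应把 49153 写死;`docker port` 命令可以查询实际的映射。下面是一个假设性示意(mapping 变量模拟 `docker port <容器> 80` 的输出,容器名需按实际替换):

```shell
# 从 docker port 的输出中提取宿主机端口号(示意,mapping 为模拟输出)
mapping='0.0.0.0:49153'
port=${mapping##*:}
echo "$port"    # 49153
# 实际使用:port=$(docker port <容器名> 80 | cut -d: -f2)
# 之后即可:curl "http://localhost:${port}"
```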
|
||||
### 4. 映射到特定的端口上 ###
|
||||
|
||||
我们也可以映射暴露端口或者docker容器端口到我们指定的端口上。要实现这个,我们用-p标志来定义我们的需要。这里是我们的一个例子。
|
||||
|
||||
# docker run -itd -p 8080:80 httpd
|
||||
|
||||
上面的命令会将宿主机的8080端口映射到容器的80端口上。我们可以运行curl来验证这点。
|
||||
|
||||
# curl http://localhost:8080
|
||||
|
||||

|
||||
|
||||
### 5. 创建自己的网桥 ###
|
||||
|
||||
要给容器分配自定义的IP地址,本篇中我们会创建一个名为br0的新网桥。要分配需要的ip地址,我们需要在运行docker的主机中运行下面的命令。
|
||||
|
||||
# stop docker.io
|
||||
# ip link add br0 type bridge
|
||||
# ip addr add 172.30.1.1/20 dev br0
|
||||
# ip link set br0 up
|
||||
# docker -d -b br0
|
||||
|
||||

|
||||
|
||||
创建完docker网桥之后,我们要让docker的守护进程知道它。
|
||||
|
||||
# echo 'DOCKER_OPTS="-b=br0"' >> /etc/default/docker
|
||||
# service docker.io start
|
||||
|
||||

|
||||
|
||||
到这里,该网桥接口将会给容器分配桥接子网内新的ip地址。
|
||||
|
||||
### 6. 链接到另外一个容器上 ###
|
||||
|
||||
我们可以用Docker将一个容器连接到另外一个上。我们可以在不同的容器上运行不同的程序,并且相互连接或链接。链接允许容器间相互连接,并安全地从一个容器向另一个容器传输信息。要做到这点,我们可以使用--link标志。首先,我们使用--name标志把training/postgres镜像启动的容器命名为db。
|
||||
|
||||
# docker run -d --name db training/postgres
|
||||
|
||||

|
||||
|
||||
完成之后,我们将容器db与training/webapp链接来形成新的叫web的容器。
|
||||
|
||||
# docker run -d -P --name web --link db:db training/webapp python app.py
|
||||
|
||||

|
||||
|
||||
### 总结 ###
|
||||
|
||||
Docker网络很神奇也很好玩,我们可以对docker容器的网络做很多事情。这里是一些简单、基础的docker网络命令,供大家把玩。docker的网络功能非常高级,我们可以用它做很多事情。如果你有任何问题、建议、反馈,请在下面的评论栏写下来,以便于我们改进或更新文章的内容。谢谢!玩得开心!:-)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/networking-commands-docker-containers/
|
||||
|
||||
作者:[Arun Pyasi][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/arunp/
|