![00_lead_image_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x300x00_lead_image_aptik.png.pagespeed.ic.n3TJwp8YK_.png)

当你想重装 Ubuntu,或者仅仅是想安装它的一个新版本的时候,如果有个便捷的方法来重新安装之前的应用并且重置其设置,会很方便。此时 *Aptik* 粉墨登场,它可以帮助你轻松实现。

Aptik(自动包备份和恢复)是一个可以用在 Ubuntu、Linux Mint 和其他基于 Debian 以及 Ubuntu 的 Linux 发行版上的应用,它允许你将已经安装过的包括软件库、下载包、安装的应用和主题、用户设置在内的 PPA(个人软件包存档)备份到外部的 U 盘、网络存储或者类似于 Dropbox 的云服务上。

注意:当我们在此文章中说到输入某些东西的时候,如果被输入的内容被引号包裹,请不要将引号一起输入进去,除非我们有特殊说明。
![01_command_to_add_repository](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x99x01_command_to_add_repository.png.pagespeed.ic.UfVC9QLj54.png)

在命令行提示符下输入下面的命令,以确保软件库是最新版本:

    sudo apt-get update
接下来的 “Downloaded Packages (APT Cache)” 项目只对重装同样版本的 Ubuntu 有用处。它会备份你系统缓存(/var/cache/apt/archives)中的包。如果你是升级系统的话,可以跳过这个条目,因为针对新系统的包会比现有系统缓存中的包更新一些。

备份和恢复下载过的包,可以在重装 Ubuntu 并重装软件包的时候节省时间和网络带宽。因为一旦你把这些包恢复到系统缓存中之后,它们可以被重新利用,这样下载过程就免了,包的安装会更加快捷。

如果你是重装相同版本的 Ubuntu 系统的话,点击 “Downloaded Packages (APT Cache)” 右侧的 “Backup” 按钮来备份系统缓存中的包。

注意:备份下载过的包的时候不会出现二级对话框。你系统缓存(/var/cache/apt/archives)中的包会被拷贝到备份目录下一个名叫 “archives” 的文件夹中,当整个过程完成后会出现一个对话框来告诉你备份已经完成。

![16_downloaded_packages_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x544x16_downloaded_packages_backed_up.png.pagespeed.ic.z8ysuwzQAK.png)
![18_clicking_backup_for_software_selections](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x18_clicking_backup_for_software_selections.png.pagespeed.ic.QI5D-IgnP_.png)

备份目录中出现了两个名为 “packages.list” 和 “packages-installed.list” 的文件,并且会弹出一个通知你备份完成的对话框。点击 “OK” 关闭它。

注意:“packages-installed.list” 文件包含了所有的包,而 “packages.list” 在包含所有包的同时,还标出了哪些包被选中了。
![21_zipping_settings_files](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x21_zipping_settings_files.png.pagespeed.ic.dgoBj7egqv.png)

当打包完成后,打包后的文件会被拷贝到备份目录下,并弹出另外一个备份成功的对话框。点击 “OK” 关掉。

![22_app_settings_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22_app_settings_backed_up.png.pagespeed.ic.Mb6utyLJ3W.png)

放在 “/usr/share/themes” 目录的主题和放在 “/usr/share/icons” 目录的图标也可以备份。点击 “Themes and Icons” 右侧的 “Backup” 来进行此操作。“Backup Themes” 对话框默认选择了所有的主题和图标。你可以按照需要取消一些,然后点击 “Backup” 进行备份。

![22a_backing_up_themes_and_icons](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22a_backing_up_themes_and_icons.png.pagespeed.ic.KXa8W3YhyF.png)

主题被打包拷贝到备份目录下的 “themes” 文件夹中,图标被打包拷贝到备份目录下的 “icons” 文件夹中。然后成功提示对话框出现,点击 “OK” 关闭它。

![22b_themes_and_icons_backed_up](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x530x22b_themes_and_icons_backed_up.png.pagespeed.ic.ejjRaymD39.png)

一旦你完成了需要的备份,点击主界面左上角的 “X” 关闭 Aptik。

![23_closing_aptik](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x542x23_closing_aptik.png.pagespeed.ic.pNk9Vt3--l.png)

备份过的文件已存在于你选择的备份目录中,可以随时查看。

![24_backup_files_in_directory](http://cdn5.howtogeek.com/wp-content/uploads/2014/12/650x374x24_backup_files_in_directory.png.pagespeed.ic.vwblOfN915.png)

当你重装 Ubuntu 或者安装新版本的 Ubuntu 后,在新的系统中安装 Aptik,并将备份好的文件放到新系统中使用。运行 Aptik,并使用每个条目的 “Restore” 按钮来恢复你的软件源、应用、包、设置、主题以及图标。
--------------------------------------------------------------------------------

via: http://www.howtogeek.com/206454/how-to-backup-and-restore-your-apps-and-ppa

作者:Lori Kaufman
译者:[Ping](https://github.com/mr-ping)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
使用 HAProxy 配置 HTTP 负载均衡器
================================================================================

随着基于 Web 的应用和服务的增多,IT 系统管理员肩上的责任也越来越重。当遇到不可预期的事件,如流量高峰、流量增长,或者内部的挑战比如硬件损坏或紧急维修时,你的 Web 应用都必须要保持可用。甚至现在流行的 devops 和持续交付(CD)也可能威胁到你的 Web 服务的可靠性和性能的一致性。

不可预测、不一致的性能表现是你无法接受的。但是我们怎样消除这些缺点呢?大多数情况下一个合适的负载均衡解决方案可以解决这个问题。今天我会给你们介绍如何使用 [HAProxy][1] 配置 HTTP 负载均衡器。

### 什么是 HTTP 负载均衡? ###
HTTP 负载均衡是一个网络解决方案,它将进入的 HTTP 或 HTTPS 请求分配至一组提供相同 Web 应用内容的服务器来响应。通过将请求在这样的多个服务器间进行均衡,负载均衡器可以防止服务器出现单点故障,提升整体的可用性和响应速度。它还可以让你能够简单地通过添加或者移除服务器来进行横向扩展或收缩,对工作负载进行调整。
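上面说的“将请求在多个服务器间进行均衡”,其中最简单的轮询方式可以用一小段 shell 脚本来示意(这只是一个概念草图,服务器地址沿用本文后面的示例环境,并非 HAProxy 的实现):

```shell
#!/bin/sh
# 两台后端服务器,轮流接收“请求”
servers="192.168.100.2 192.168.100.3"

i=0
for request in req1 req2 req3 req4; do
    # 按请求序号取模,轮流选择一台服务器
    set -- $servers
    shift $(( i % 2 ))
    echo "$request -> $1"
    i=$(( i + 1 ))
done
```

运行后可以看到 4 个请求被交替分配到两台服务器上,这正是后文测试 HAProxy 时会观察到的轮询效果。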
### 什么时候,什么情况下需要使用负载均衡? ###

负载均衡可以提升服务器的使用性能和最大可用性,当你的服务器开始出现高负载时就可以使用负载均衡。或者你在为一个大型项目设计架构时,在前端使用负载均衡是一个很好的习惯,当你的环境需要扩展的时候它会很有用。

### 什么是 HAProxy? ###

HAProxy 是一个流行的开源的 GNU/Linux 平台下的 TCP/HTTP 服务器的负载均衡和代理软件。HAProxy 采用单线程、事件驱动架构,可以轻松地处理 [10 Gbps 速率][2] 的流量,在生产环境中被广泛使用。它的功能包括自动健康状态检查、自定义负载均衡算法、HTTPS/SSL 支持、会话速率限制等等。

### 准备条件 ###
你至少要有一台,最好是两台 Web 服务器来验证你的负载均衡器的功能。我们假设后端的 HTTP Web 服务器已经配置好并[可以运行][3]。

## 在 Linux 中安装 HAProxy ##

对于大多数的发行版,我们可以使用发行版的包管理器来安装 HAProxy。

### 在 Debian 中安装 HAProxy ###

在 Debian Wheezy 中我们需要添加源:在 /etc/apt/sources.list.d 下创建一个文件 "backports.list",写入相应的 backports 源。然后更新软件库并安装 HAProxy:

    # apt-get update
    # apt-get install haproxy
### 在 Ubuntu 中安装 HAProxy ###

    # apt-get install haproxy

### 在 CentOS 和 RHEL 中安装 HAProxy ###

    # yum install haproxy

## 配置 HAProxy ##

本教程假设有两台运行的 HTTP Web 服务器,它们的 IP 地址是 192.168.100.2 和 192.168.100.3。我们将负载均衡器配置在 192.168.100.4 的这台服务器上。
为了让 HAProxy 正常工作,你需要修改 /etc/haproxy/haproxy.cfg 中的一些选项。我们会在这一节中解释这些修改。一些配置可能因 GNU/Linux 发行版的不同而变化,这些会被标注出来。

### 1. 配置日志功能 ###

你要做的第一件事是为 HAProxy 配置日志功能,在排错时日志将很有用。日志配置可以在 /etc/haproxy/haproxy.cfg 的 global 段中找到。下面是针对不同 Linux 发行版的 HAProxy 日志配置。

#### CentOS 或 RHEL: ####

在 CentOS/RHEL 中启用日志,需要先修改 rsyslog 的相关配置,然后重启 rsyslog:

    # service rsyslog restart
#### Debian 或 Ubuntu: ####

在 Debian 或 Ubuntu 中启用日志,同样需要修改 rsyslog 的相关配置,然后重启 rsyslog:

    # service rsyslog restart
### 2. 设置默认选项 ###

下一步是设置 HAProxy 的默认选项:在 /etc/haproxy/haproxy.cfg 的 defaults 段中替换为推荐的配置。这样的配置是 HAProxy 用于 HTTP 负载均衡时建议使用的,但并不一定是你的环境的最优方案。你可以自己研究 HAProxy 的手册并配置它。
### 3. Web 集群配置 ###

Web 集群配置定义了一组可用的 HTTP 服务器。我们的负载均衡的大多数设置都在这里。现在我们会创建一些基本配置,定义我们的节点。将配置文件中从 frontend 段开始的内容全部替换为下面的配置,其中后端节点定义为:

        server web01 192.168.100.2:80 cookie node1 check
        server web02 192.168.100.3:80 cookie node2 check

"listen webfarm \*:80" 定义了负载均衡器监听的地址和端口。为了教程的需要,我设置为 "\*" 表示监听在所有接口上。在真实的场景中,这样设置可能不太合适,应该替换为可以从 internet 访问的那个网卡接口。
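把本节引用到的各行(监听地址、stats、cookie 与 server 各行)汇总起来,一个完整的 listen 段大致如下。这只是一个示意性的草图,具体内容请以你自己的环境为准:

```
listen webfarm *:80
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Haproxy\ Statistics
    stats auth haproxy:stats
    balance roundrobin
    cookie LBN insert indirect nocache
    option forwardfor
    server web01 192.168.100.2:80 cookie node1 check
    server web02 192.168.100.3:80 cookie node2 check
```

其中 balance roundrobin 对应后文测试时观察到的轮询行为,option forwardfor 对应测试输出中的 X-Forwarded-for 首部。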
    stats enable
    stats uri /haproxy?stats
    stats realm Haproxy\ Statistics
    stats auth haproxy:stats

上面的设置定义了,负载均衡器的状态统计信息可以通过 http://\<load-balancer-IP>/haproxy?stats 访问。访问需要简单的 HTTP 认证,用户名为 "haproxy",密码为 "stats"。这些设置可以替换为你自己的认证方式。如果你不需要状态统计信息,可以完全禁用掉。

下面是一个 HAProxy 统计信息的例子
负载均衡算法(balance)可以有多种选择,例如:

- **source**:对请求的客户端 IP 地址进行哈希计算,根据哈希值和服务器的权重将请求调度至后端服务器。
- **uri**:对 URI 的左半部分(问号之前的部分)进行哈希,根据哈希结果和服务器的权重对请求进行调度。
- **url_param**:根据每个 HTTP GET 请求的 URL 查询参数进行调度,使用固定的请求参数将会被调度至指定的服务器上。
- **hdr(name)**:根据 HTTP 首部中的 \<name> 字段来进行调度。
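其中 source 这类基于哈希的算法,其效果可以用下面的 shell 草图来理解(仅为示意:用 cksum 代替真实的哈希函数,并不是 HAProxy 的实现):

```shell
#!/bin/sh
# 按客户端 IP 的哈希值选择后端服务器:
# 同一个客户端 IP 总是落在同一台服务器上
pick_by_source() {
    hash=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    if [ $(( hash % 2 )) -eq 0 ]; then
        echo "192.168.100.2"
    else
        echo "192.168.100.3"
    fi
}

first=$(pick_by_source "10.0.0.7")
second=$(pick_by_source "10.0.0.7")
echo "$first $second"
```

对同一个客户端 IP 多次调用,结果总是同一台服务器,这就是这类算法能把会话“粘”在固定节点上的原因。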
"cookie LBN insert indirect nocache" 这一行表示我们的负载均衡器会存储 cookie 信息,可以将后端服务器池中的节点与某个特定会话绑定。节点的 cookie 存储为一个自定义的名字。这里,我们使用的是 "LBN",你可以指定其他的名称。后端节点会保存这个 cookie 的会话。

上面是我们的 Web 服务器节点的定义。每台服务器由内部名称(如 web01、web02)、IP 地址和唯一的 cookie 字符串表示。cookie 字符串可以自定义,我这里使用的是简单的 node1、node2 ... node(n)。

## 启动 HAProxy ##

如果你完成了配置,现在启动 HAProxy 并验证是否运行正常。

### 在 CentOS/RHEL 中启动 HAProxy ###
让 HAProxy 开机自启并启动服务,使用下面的命令:

    # chkconfig haproxy on
    # service haproxy start

当然,防火墙需要开放 80 端口,像下面这样:

#### CentOS/RHEL 7 的防火墙 ####

    # firewall-cmd --permanent --zone=public --add-port=80/tcp
    # firewall-cmd --reload

#### CentOS/RHEL 6 的防火墙 ####
把下面内容加至 /etc/sysconfig/iptables 中的 ":OUTPUT ACCEPT" 段中,然后重启 iptables:

    # service iptables restart

### 在 Debian 中启动 HAProxy ###

启动 HAProxy:

    # service haproxy start
    -A INPUT -p tcp --dport 80 -j ACCEPT

### 在 Ubuntu 中启动 HAProxy ###

要让 HAProxy 开机自动启动,需要在 /etc/default/haproxy 中进行配置。同样,防火墙需要开放 80 端口:

    # ufw allow 80
## 测试 HAProxy ##

检查 HAProxy 是否工作正常,我们可以这样做:

    $ curl http://192.168.100.4/test.php

我们多次运行这个命令时,会发现交替输出下面的内容(因为使用了轮询算法):

    Server IP: 192.168.100.2
    X-Forwarded-for: 192.168.100.4
如果我们停掉一台后端 Web 服务器,curl 命令仍然正常工作,请求会被分发至另一台可用的 Web 服务器。

## 总结 ##

现在你有了一个完全可用的负载均衡器,以轮询的模式对你的 Web 节点进行负载均衡。还可以去实验其他的配置选项以适应你的环境。希望这个教程可以帮助你们的 Web 项目有更好的可用性。

你可能已经发现了,这个教程只包含单台负载均衡器的设置。这意味着我们仍然有单点故障的问题。在真实场景中,你应该至少部署两三台负载均衡器以防止意外发生,但这不是本教程的范围。

如果你有任何问题或建议,请在评论中提出,我会尽我的努力回答。

--------------------------------------------------------------------------------
via: http://xmodulo.com/haproxy-http-load-balancer-linux.html

作者:[Jaroslav Štěpánek][a]
译者:[Liao](https://github.com/liaoishere)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/jaroslav
[1]:http://www.haproxy.org/
[2]:http://www.haproxy.org/10g.html
[3]:http://linux.cn/article-1567-1.html
Docker 的现状与未来
================================================================================

### Docker - 迄今为止发生的那些事情 ###

Docker 是一个专为 Linux 容器而设计的工具集,用于“构建、交付和运行”分布式应用。它最初是 DotCloud 的一个开源项目,于2013年3月发布。这个项目越来越受欢迎,以至于 DotCloud 公司都更名为 Docker 公司(并最终[出售了原有的 PaaS 业务][1])。[Docker 1.0][2]是在2014年6月发布的,而且延续了之前每月更新一个版本的传统。

Docker 1.0版本的发布标志着 Docker 公司认为该平台已经充分成熟,足以用于生产环境中(由该公司与合作伙伴提供付费支持选择)。每个月发布的更新表明该项目正在迅速发展,比如增添一些新特性、解决一些他们发现的问题。该项目已经成功地分离了“运行”和“交付”两件事,所以来自任何版本的 Docker 镜像源都可以与其它版本共同使用(具备向前和向后兼容的特性),这为 Docker 应对快速变化提供了稳定的保障。

Docker 之所以能够成为最受欢迎的开源项目之一,可能会被很多人看做是炒作,但也是由其坚实的基础所决定的。Docker 的影响力已经得到整个行业许多大企业的支持,包括亚马逊、Canonical 公司、CenturyLink、谷歌、IBM、微软、New Relic、Pivotal、红帽和 VMware。这使得只要有 Linux 的地方,Docker 就可以无处不在。除了这些鼎鼎有名的大公司以外,许多初创公司也在围绕着 Docker 发展,或者改变他们的发展方向来与 Docker 更好地结合起来。这些合作伙伴们(无论大或小)都将帮助推动 Docker 核心项目及其周边生态环境的快速发展。

### Docker 技术简要综述 ###
Docker 利用 Linux 的一些内核机制例如 [cGroups][3]、命名空间和 [SELinux][4] 来实现容器之间的隔离。起初 Docker 只是 [LXC][5] 容器管理器子系统的前端,但是在 0.9 版本中引入了 [libcontainer][6],这是一个原生的 Go 语言库,提供了用户空间和内核之间的接口。

容器是基于 [AUFS][7] 这样的联合文件系统的,它允许跨多个容器共享组件,如操作系统镜像和已安装的相关库。这种文件系统的分层方法也被 [Dockerfile][8] 的 DevOps 工具所利用,这些工具能够缓存成功完成的操作。这就省下了安装操作系统和相关应用程序依赖包的时间,极大地加速测试周期。另外,在容器之间共享库也能够减少内存的占用。

一个容器是从一个镜像开始运行的,它可以来自本地创建、本地缓存,或者从一个注册库(registry)下载。Docker 公司运营的 [Docker Hub 公有注册库][9],为各种操作系统、中间件和数据库提供了官方仓库存储。各个组织和个人都可以在 Docker Hub 上发布镜像的公有库,也可以注册成私有仓库。由于上传的镜像可以包含几乎任何内容,所以 Docker 提供了一种自动构建工具(以往称为“可信构建”),镜像可以从一种称之为 Dockerfile 的镜像内容清单构建而成。

### 容器 vs. 虚拟机 ###

容器会比虚拟机更高效,因为它们能够共享一个内核和共享应用程序库。相比虚拟机系统,这也将使得 Docker 使用的内存更小,即便虚拟机利用了内存超量使用的技术。部署容器时共享底层的镜像层也可以减少存储占用。IBM 的 Boden Russel 已经做了一些[基准测试][10]来说明两者之间的不同。

相比虚拟机系统,容器具有较低系统开销的优势,所以在容器中,应用程序的运行效率将会等效于同样的应用程序在虚拟机中运行,甚至效果更佳。IBM 的一个研究团队已经发表了一篇名为[虚拟机与 Linux 容器的性能比较][11]的文章。

容器只是在隔离特性上要比虚拟机逊色。虚拟机可以利用如 Intel 的 VT-d 和 VT-x 技术的 ring-1 [硬件隔离][12]技术。这种隔离可以防止虚拟机突破和彼此交互。而容器至今还没有任何形式的硬件隔离,这使它容易受到攻击。一个称为 [Shocker][13] 的概念攻击验证表明,在 Docker 1.0 之前的版本是存在这种脆弱性的。尽管 Docker 1.0 修复了许多由 Shocker 漏洞带来的较为严重的问题,Docker 的 CTO Solomon Hykes 仍然[说][14],“当我们可以放心宣称 Docker 的开箱即用是安全的,即便是不可信的 uid0 程序(超级用户权限程序),我们将会很明确地告诉大家。”Hykes 的声明承认,其漏洞及相关的风险依旧存在,所以在容器成为受信任的工具之前将有更多的工作要做。

对于许多用户场景而言,在容器和虚拟机之间二者选择其一是种错误的二分法。Docker 同样可以在虚拟机中工作得很好,这让它可以用在现有的虚拟基础设施、私有云或者公有云中。同样也可以在容器里跑虚拟机,这也类似于谷歌在其云平台的使用方式。像 IaaS 服务这样普遍可用的基础设施,能够即时提供所需的虚拟机,可以预期容器与虚拟机一起使用的情景将会在数年后出现。容器管理和虚拟机技术也有可能被集成到一起提供一个两全其美的方案;这样,一个由硬件信任锚微虚拟化所支撑的 libcontainer 容器,可与前端 Docker 工具链和生态系统整合,而使用提供更好隔离性的不同后端。微虚拟化(例如 Bromium 的 [vSentry][15] 和 VMware 的 [Project Fargo][16])已经用于在桌面环境中提供基于硬件的应用程序隔离,所以类似的方法也可以用于 libcontainer,作为 Linux 内核中的容器机制的替代技术。
### “容器化”的应用程序 ###

几乎所有 Linux 应用程序都可以在 Docker 容器中运行,并没有编程语言或框架的限制。唯一的实际限制是以操作系统的角度来允许容器做什么。即使如此,也可以在特权模式下运行容器,从而大大减少了限制(与之对应的是容器中的应用程序的风险增加,可能导致损坏主机操作系统)。

容器都是从镜像开始运行的,而镜像也可以从运行中的容器获取。本质上说,有两种方法可以将应用程序放到容器中,分别是手动构建和 Dockerfile。

#### 手动构建 ####

手动构建从启动一个基础的操作系统镜像开始,然后在交互式终端中用你所选的 Linux 发行版提供的包管理器安装应用程序及其依赖项。Zef Hemel 在“[使用 Linux 容器来支持便携式应用程序部署][17]”的文章中讲述了他部署的过程。一旦应用程序被安装之后,容器就可以被推送至注册库(例如 Docker Hub)或者导出为一个 tar 文件。

#### Dockerfile ####

Dockerfile 是一个用于构建 Docker 容器的脚本化系统。每一个 Dockerfile 定义了开始的基础镜像,以及一系列在容器中运行的命令或者一些被添加到容器中的文件。Dockerfile 也可以指定对外的端口和当前工作目录,以及容器启动时默认执行的命令。用 Dockerfile 构建的容器可以像手工构建的镜像一样推送或导出。Dockerfile 也可以用于 Docker Hub 的自动构建系统,即在 Docker 公司的控制下从头构建,并且该镜像的源代码是任何需要使用它的人可见的。
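下面是一个假设性的 Dockerfile 小例子,用来说明上述各个要素(基础镜像、在容器中运行的命令、对外端口、默认启动命令);其中的软件包名只是示意,并非本文作者的例子:

```dockerfile
# 开始的基础镜像
FROM debian:wheezy

# 在容器中运行的命令:安装应用及其依赖
RUN apt-get update && apt-get install -y nginx

# 指定对外的端口
EXPOSE 80

# 容器启动时默认执行的命令
CMD ["nginx", "-g", "daemon off;"]
```

用 `docker build` 对这样的文件构建镜像时,每条指令会生成一个可缓存的镜像层,这正是上文说的分层文件系统带来的加速效果。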
#### 单进程? ####

无论镜像是手动构建还是通过 Dockerfile 构建,有一个要考虑的关键因素是当容器启动时仅启动一个进程。对于一个单一用途的容器,例如运行一个应用服务器,运行一个单一的进程不是一个问题(有些关于容器应该只有一个单独的进程的争议)。对于一些容器需要启动多个进程的情况,必须先启动 [supervisor][18] 进程,才能生成其它内部所需的进程。由于容器内没有初始化系统,所以任何依赖于 systemd、upstart 或类似初始化系统的东西不经修改是无法工作的。
### 容器和微服务 ###

全面介绍使用微服务结构体系的原理和好处已经超出了这篇文章的范畴(在 [InfoQ eMag: Microservices][19] 有全面阐述)。然而容器是绑定和部署微服务实例的捷径。

大规模微服务部署的多数案例都是部署在虚拟机上,容器只是用于较小规模的部署上。容器具有共享操作系统和公用库的内存和硬盘存储的能力,这也意味着它可以非常有效地并行部署多个版本的服务。

### 连接容器 ###

一些小的应用程序适合放在单独的容器中,但在许多案例中应用程序需要分布在多个容器中。Docker 的成功包括催生了一连串新的应用程序组合工具、编制工具及平台即服务(PaaS)的实现。在这些努力的背后,是希望简化从一组相互连接的容器来创建应用的过程。很多工具也在扩展、容错、性能管理以及对已部署资产进行版本控制方面提供了帮助。

#### 连通性 ####

Docker 的网络功能是相当原始的。在同一主机内,容器间的服务可以互相访问,而且 Docker 也可以通过端口映射到主机操作系统,使服务可以通过网络访问。官方支持的提供连接能力的库叫做 [libchan][20],这是一个提供给 Go 语言的网络服务库,类似于 [channels][21]。在 libchan 找到进入应用的方法之前,第三方应用仍然有很大空间可提供配套的网络服务。例如,[Flocker][22] 已经采取了基于代理的方法使服务实现跨主机(以及底层存储)的移植。

#### 合成 ####

Docker 本身拥有把容器连接在一起的机制,与元数据相关的依赖项可以被传递到相依赖的容器中,并用于环境变量和主机入口。如 [Fig][23] 和 [geard][24] 这样的应用合成工具可以在单一文件中展示出这种依赖关系图,这样多个容器就可以汇聚成一个连贯的系统。CenturyLink 公司的 [Panamax][25] 合成工具类似 Fig 和 geard 的底层实现方法,但新增了一些基于 Web 的用户接口,并直接与 GitHub 相结合,以便于应用程序分享。

#### 编制 ####

像 [Decking][26]、New Relic 公司的 [Centurion][27] 和谷歌公司的 [Kubernetes][28] 这样的编制系统都是旨在协助容器的部署和管理其生命周期的系统。也有许多 [Apache Mesos][30](特别是 [Marathon][31],一个持续运行很久的框架)配合 Docker 一起使用的案例(例如 [Mesosphere][29])。通过为应用程序与底层基础架构之间(例如传递 CPU 核数和内存的需求)提供一个抽象的模型,编制工具提供了两者的解耦,简化了应用程序开发和数据中心操作。有很多各种各样的编制系统,因为许多以前开发的用于大规模容器部署的内部系统工具浮现出来了;如 Kubernetes 是基于谷歌的 [Omega][32] 系统的,Omega 是用于管理遍布谷歌云环境中容器的系统。

虽然从某种程度上来说合成工具和编制工具的功能存在重叠,但这也是它们之间互补的一种方式。例如 Fig 可以被用于描述容器间如何实现功能交互,而 Kubernetes pods(容器组)可用于提供监控和扩展。

#### 平台(即服务)####

有一些 Docker 原生的 PaaS 服务实现,例如 [Deis][33] 和 [Flynn][34] 已经显现出 Linux 容器在开发上的灵活性(而不是那些“自以为是”地给出一套语言和框架)。其它平台,例如 CloudFoundry、OpenShift 和 Apcera Continuum 都已经采取将 Docker 基础功能融入其现有系统的技术路线,这样基于 Docker 镜像(或者基于 Dockerfile)的应用程序也可以与之前用所支持的语言和框架开发的应用一同部署和管理。

### 所有的云 ###

由于 Docker 能够运行在任何正常更新内核的 Linux 虚拟机中,它几乎可以用在所有提供 IaaS 服务的云上。大多数的主流云厂商已经宣布提供对 Docker 及其生态系统的支持。

亚马逊已经把 Docker 引入它们的 Elastic Beanstalk 系统(这是在底层 IaaS 上的一个编制系统)。谷歌使 Docker 成为了“可管理的 VM”,它提供了 GAE PaaS 和 GCE IaaS 之间的中转站。微软和 IBM 也都已经宣布了基于 Kubernetes 的服务,这样可以在它们的云上部署和管理多容器应用程序。

为了给现有种类繁多的后端提供可用的一致接口,Docker 团队已经引进 [libswarm][35],它可以集成于众多的云和资源管理系统。libswarm 所阐明的目标之一是“通过切换服务来源避免被特定供应商套牢”。这是通过呈现一组一致的服务(与 API 相关联的)来完成的,该服务会通过特定的后端服务所实现。例如 Docker 服务器将支持本地 Docker 命令行工具的 Docker 远程 API 调用,这样就可以管理一组服务供应商的容器了。

基于 Docker 的新服务类型仍在起步阶段。总部位于伦敦的 Orchard 实验室提供了 Docker 的托管服务,但是 Docker 公司表示,收购 Orchard 后,其相关服务不会置于优先位置。Docker 公司也出售了之前 DotCloud 的 PaaS 业务给 cloudControl。基于更早的容器管理系统的服务例如 [OpenVZ][36] 已经司空见惯了,所以在一定程度上 Docker 需要向主机托管商们证明其价值。

### Docker 及其发行版 ###

Docker 已经成为大多数 Linux 发行版例如 Ubuntu、Red Hat 企业版(RHEL)和 CentOS 的一个标准功能。遗憾的是这些发行版的步调和 Docker 项目并不一致,所以在发行版中找到的版本总是远远落后于最新版本。例如 Ubuntu 14.04 版本中的版本是 Docker 0.9.1,而当 Ubuntu 升级至 14.04.1 时 Docker 版本并没有随之升级(此时 Docker 已经升至 1.1.2 版本)。在发行版的软件仓库中还有一个名字空间的冲突,因为 “Docker” 也是 KDE 系统托盘的名字;所以在 Ubuntu 14.04 版本中相关安装包的名字和命令行工具都是使用 “docker.io” 的名字。

在企业级 Linux 的世界中,情况也并没有因此而不同。CentOS 7 中的 Docker 版本是 0.11.1,这是 Docker 公司宣布准备发行 Docker 1.0 产品版本之前的开发版。Linux 发行版用户如果希望使用最新版本以保障其稳定、性能和安全,那么最好按照 Docker 的[安装说明][37]进行,使用 Docker 公司所提供的软件库而不是采用发行版的。

Docker 的到来也催生了新的 Linux 发行版,如 [CoreOS][38] 和红帽的 [Project Atomic][39],它们被设计为能运行容器的最小环境。这些发行版相比传统的发行版,带着更新的内核及 Docker 版本,对内存的使用和硬盘占用率也更低。新发行版也配备了用于大型部署的新工具,例如 [fleet][40](一个分布式初始化系统)和 [etcd][41](用于元数据管理)。这些发行版也有新的自我更新机制,以便可以使用最新的内核和 Docker。这也意味着使用 Docker 的影响之一是它抛开了对发行版和相关的包管理解决方案的关注,而对 Linux 内核(及使用它的 Docker 子系统)更加关注。

这些新发行版也许是运行 Docker 的最好方式,但是传统的发行版和它们的包管理器对容器来说仍然是非常重要的。Docker Hub 托管的官方镜像有 Debian、Ubuntu 和 CentOS,以及一个“半官方”的 Fedora 镜像库。RHEL 镜像在 Docker Hub 中不可用,因为它是 Red Hat 直接发布的。这意味着 Docker Hub 的自动构建机制仅仅可用于那些纯开源发行版(并愿意信任那些源于 Docker 公司团队提供的基础镜像)。

Docker Hub 集成了如 GitHub 和 Bitbucket 这样的源代码控制系统来自动构建,它扮演了包管理器的角色,管理构建过程中创建的构建规范(在 Dockerfile 中)和生成的镜像之间的复杂关系。构建过程的不确定结果并非是 Docker 的特定问题,而与软件包管理器如何工作有关。今天构建完成的是一个版本,明天构建的可能就是更新的版本,这就是为什么软件包管理器需要升级的原因。容器抽象(较少关注容器中的内容)以及容器扩展(因为轻量级资源利用率)有可能让这种不确定性成为 Docker 的痛点。

### Docker 的未来 ###

Docker 公司对核心功能(libcontainer)、跨服务管理(libswarm)和容器间的信息传递(libchan)的发展上提出了明确的路线。与此同时,该公司已经表明愿意收购 Orchard 实验室,将其纳入自身生态系统。然而 Docker 不仅仅是 Docker 公司的,这个项目的贡献者也来自许多大牌贡献者,其中不乏像谷歌、IBM 和 Red Hat 这样的大公司。在仁慈独裁者、CTO Solomon Hykes 掌舵的形势下,为公司和项目明确了技术领导关系。在前18个月的项目中通过成果输出展现了其快速行动的能力,而且这种趋势并没有减弱的迹象。

许多投资者正在研究10年前 VMware 公司的 ESX/vSphere 平台的特性对照表,并试图找出由虚拟机的普及而带动的企业预期与当前 Docker 生态系统两者的距离(和机会)。目前 Docker 生态系统正缺乏类似网络、存储和(对于容器的内容的)细粒度版本管理,这些都为初创企业和创业者提供了机会。

随着时间的推移,在虚拟机和容器(Docker 的“运行”部分)之间的区别将变得没那么重要了,而关注点将会转移到“构建”和“交付”方面。这些变化将会使“Docker 会发生什么?”变得不如“Docker 将会给 IT 产业带来什么?”那么重要了。
--------------------------------------------------------------------------------

via: http://www.infoq.com/articles/docker-future

作者:[Chris Swan][a]
译者:[disylee](https://github.com/disylee)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.infoq.com/author/Chris-Swan
[1]:http://blog.dotcloud.com/dotcloud-paas-joins-cloudcontrol
[2]:http://www.infoq.com/news/2014/06/docker_1.0
[3]:https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt
[4]:http://selinuxproject.org/page/Main_Page
[5]:https://linuxcontainers.org/
[6]:http://blog.docker.com/2014/03/docker-0-9-introducing-execution-drivers-and-libcontainer/
[7]:http://aufs.sourceforge.net/aufs.html
[8]:https://docs.docker.com/reference/builder/
[9]:https://registry.hub.docker.com/
[10]:http://bodenr.blogspot.co.uk/2014/05/kvm-and-docker-lxc-benchmarking-with.html?m=1
[11]:http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/$File/rc25482.pdf
[12]:https://en.wikipedia.org/wiki/X86_virtualization#Hardware-assisted_virtualization
[13]:http://stealth.openwall.net/xSports/shocker.c
[14]:https://news.ycombinator.com/item?id=7910117
[15]:http://www.bromium.com/products/vsentry.html
[16]:http://cto.vmware.com/vmware-docker-better-together/
[17]:http://www.infoq.com/articles/docker-containers
[18]:http://docs.docker.com/articles/using_supervisord/
[19]:http://www.infoq.com/minibooks/emag-microservices
[20]:https://github.com/docker/libchan
[21]:https://gobyexample.com/channels
[22]:http://www.infoq.com/news/2014/08/clusterhq-launch-flocker
[23]:http://www.fig.sh/
[24]:http://openshift.github.io/geard/
[25]:http://panamax.io/
[26]:http://decking.io/
[27]:https://github.com/newrelic/centurion
[28]:https://github.com/GoogleCloudPlatform/kubernetes
[29]:https://mesosphere.io/2013/09/26/docker-on-mesos/
[30]:http://mesos.apache.org/
[31]:https://github.com/mesosphere/marathon
[32]:http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41684.pdf
[33]:http://deis.io/
[34]:https://flynn.io/
[35]:https://github.com/docker/libswarm
[36]:http://openvz.org/Main_Page
[37]:https://docs.docker.com/installation/#installation
[38]:https://coreos.com/
[39]:http://www.projectatomic.io/
[40]:https://github.com/coreos/fleet
[41]:https://github.com/coreos/etcd
如何在终端下以后台模式运行 Linux 程序
===

![Linux Terminal Window.](http://0.tqn.com/y/linux/1/W/r/G/1/terminal.JPG)

*Linux 终端窗口*

这是一个简短但是非常有用的教程:它向你展示从终端运行 Linux 应用程序的同时,如何保证终端仍然可以操作。

在 Linux 中有许多方式可以打开一个终端,这主要取决于你的发行版的选择和桌面环境。

使用 Ubuntu 的话,你可以使用 CTRL + ALT + T 组合键打开一个终端。你也可以按下超级键(Windows 键)打开一个终端窗口:[打开 Ubuntu Dash][1],然后搜索 "TERM",点击 "Term" 图标将会打开一个终端窗口。

在其他诸如 XFCE、KDE、LXDE、Cinnamon 和 MATE 的桌面环境中,你将会在菜单中发现“终端”这个应用。还有一些发行版会把终端图标放在菜单项,或者在面板上放置终端启动器。

你可以在终端里面输入一个程序的名字来启动一个应用。举例,你可以通过输入 "firefox" 启动火狐浏览器。

从终端运行程序的好处是可以使用额外的选项。

举个例子,如果你输入下面的命令,一个新的火狐浏览器将会打开,而且默认的搜索引擎将会搜索引号之间的词语:

    firefox -search "Linux.About.Com"

你会发现,如果你运行火狐浏览器,应用程序打开后,控制权将会回到终端(重新出现了命令提示符),这意味着你可以继续在终端工作。

通常情况下,如果你通过终端运行一个程序,程序打开后,直到那个程序关闭结束,你都将不会获得终端的控制权。这是因为你是在前台打开程序的。

想要从终端运行一个程序,并且立即将终端的控制权返回给你,你需要以后台进程的方式打开程序。
要以后台进程方式启动程序,只需在命令后面加上 & 符号,例如:

    libreoffice &

在终端中仅仅提供程序的名字,应用程序可能运行不了。如果程序不在 PATH 环境变量所包含的文件夹中,你需要指定完整的路径名来运行程序:

    /path/to/yourprogram &
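下面这个小脚本演示了 & 的效果(示例):后台任务启动后,脚本立即继续往下执行,需要时还可以用 wait 等待它结束。

```shell
#!/bin/sh
# 在后台启动一个耗时命令
sleep 2 &
bgpid=$!          # $! 保存最近一个后台进程的 PID
echo "后台进程已启动,PID 为 $bgpid"
echo "终端立即拿回了控制权,可以继续做别的事情"
wait "$bgpid"     # 如果需要,等待后台进程结束
echo "后台进程已结束"
```

在交互式终端中同样可以用 `jobs` 查看后台任务,用 `fg` 把它调回前台。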
如果你并不确定一个程序是否存在于 Linux 文件系统中,可以使用 find 或者 locate 命令来查找该应用程序。例如,用 find 从根目录开始按名字查找 firefox:

    find / -name firefox

输出会很快滚动出很多,所以你可以用管道控制输出的多少:

    find / -name firefox | more
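下面是一个可以放心试验的完整例子:在临时目录里模拟查找(目录和文件名都只是示意):

```shell
#!/bin/sh
# 建一个临时目录,放一个名为 firefox 的文件
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/apps"
touch "$tmpdir/apps/firefox"

# 从该目录开始按名字查找
find "$tmpdir" -name firefox

# 清理
rm -r "$tmpdir"
```

输出会是 `$tmpdir/apps/firefox` 这样的完整路径,正是可以配合 & 在后台启动的那种路径。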
find 命令会因为权限不足而报告许多访问被拒绝的文件夹。这时可以用 sudo 提升权限:

    sudo find / -name firefox | more

如果你知道你想寻找的文件就在当前文件夹下,你可以用一个点代替先前的斜线,如下:

    sudo find . -name firefox | more

你可能不需要 sudo 来提升权限。如果你是在 home 文件夹中寻找文件,就不需要 sudo。

一些应用程序需要提升用户权限来运行。你可能会得到一个权限不足的错误,除非你使用一个具有足够权限的用户,或者使用 sudo 提升你的权限。

下面是一个小技巧。如果你运行了一个程序,而它需要提升权限才能运行,输入下面命令以提升权限重新执行:

    sudo !!

---
via: http://linux.about.com/od/commands/fl/How-To-Run-Linux-Programs-From-The-Terminal-In-Background-Mode.htm

作者:[Gary Newell][a]
译者:[su-kaiyao](https://github.com/su-kaiyao)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
检测 Linux 内存使用情况的 free 命令的10个例子
===

**Linux** 是最有名的开源操作系统之一,它拥有着极其巨大的命令集。确定**物理内存**和**交换内存**所有可用空间的最重要、也是唯一的方法是使用 "**free**" 命令。

Linux 的 "**free**" 命令可以给出类 **Linux/Unix** 操作系统中**物理内存**和**交换内存**的总使用量、可用量及内核使用的**缓冲区**情况。

![10 Linux Free Command Examples](http://www.tecmint.com/wp-content/uploads/2012/09/Linux-Free-commands.png)

这篇文章提供一些带有各种参数选项的 "**free**" 命令,这些命令对于你更好地利用你的内存会有帮助。
### 1. 显示你的系统内存 ###

free 命令用于检测**物理内存**和**交换内存**的已使用量和可用量(默认单位为 **KB**)。下面演示命令的使用情况。

    # free

                 total       used       free     shared    buffers     cached
    Mem:       1021628     912548     109080          0     120368     655548
    -/+ buffers/cache:     136632     884996
    Swap:      4194296          0    4194296
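如果想在脚本里取出其中的某个字段,可以把 free 的输出交给 awk 处理。下面的例子用上面的样本输出演示(实际使用时把 here-doc 样本换成 free 命令的实际输出即可):

```shell
#!/bin/sh
# 样本输出,与上面 free 的结果一致
sample='             total       used       free     shared    buffers     cached
Mem:       1021628     912548     109080          0     120368     655548
-/+ buffers/cache:     136632     884996
Swap:      4194296          0    4194296'

# 取出 Mem 行的 used(第3列)和 free(第4列)
used=$(echo "$sample" | awk '/^Mem:/ {print $3}')
free_kb=$(echo "$sample" | awk '/^Mem:/ {print $4}')
echo "已用: ${used} KB, 空闲: ${free_kb} KB"
```

这种做法在监控脚本里很常见:按行匹配 "Mem:",再按列号取数值。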
### 2. 以字节为单位显示内存 ###

加上 **-b** 参数的 free 命令,以**字节**为单位显示内存大小。

    # free -b

                  total        used        free      shared     buffers      cached
    Mem:     1046147072   934420480   111726592           0   123256832   671281152
    -/+ buffers/cache:    139882496   906264576
    Swap:    4294959104           0  4294959104

### 3. 以千字节为单位显示内存 ###

加上 **-k** 参数的 free 命令(默认单位,所以可以不用它),以**千字节**(KB)为单位显示内存大小。

    # free -k

                 total       used       free     shared    buffers     cached
    Mem:       1021628     912520     109108          0     120368     655548
    -/+ buffers/cache:     136604     885024
    Swap:      4194296          0    4194296
### 4. 以兆字节为单位显示内存 ###

加上 **-m** 参数的 free 命令,以**兆字节**(MB)为单位显示内存大小。

    # free -m

                 total       used       free     shared    buffers     cached
    Mem:           997        891        106          0        117        640
    -/+ buffers/cache:        133        864
    Swap:         4095          0       4095

### 5. 以吉字节为单位显示内存 ###

加上 **-g** 参数的 free 命令,以**吉字节**(GB)为单位显示内存大小。

    # free -g

                 total       used       free     shared    buffers     cached
    Mem:             0          0          0          0          0          0
    -/+ buffers/cache:          0          0
    Swap:            3          0          3
### 6. 显示总计行 ###

加上 **-t** 参数的 free 命令,会在输出的最后额外显示一行总计(物理内存与交换内存之和)。

    # free -t

                 total       used       free     shared    buffers     cached
    Mem:       1021628     912520     109108          0     120368     655548
    -/+ buffers/cache:     136604     885024
    Swap:      4194296          0    4194296
    Total:     5215924     912520    4303404
### 7. 禁用显示缓冲调整行 ###

加上 **-o** 参数的 free 命令,不显示 "-/+ buffers/cache" 这一行。

    # free -o

                 total       used       free     shared    buffers     cached
    Mem:       1021628     912520     109108          0     120368     655548
    Swap:      4194296          0    4194296
|
||||
|
||||
### 8. 定期时间间隔更新内存状态 ###
    # free -s 5

                 total       used       free     shared    buffers     cached
    Mem:       1021628     912368     109260          0     120368     655548
    -/+ buffers/cache:     136452     885176
    Swap:      4194296          0    4194296
    # free -l

                 total       used       free     shared    buffers     cached
    Mem:       1021628     912368     109260          0     120368     655548
    Low:        890036     789064     100972
    High:       131592     123304       8288
    -/+ buffers/cache:     136452     885176
via: http://www.tecmint.com/check-memory-usage-in-linux/

作者:[Ravi Saive][a]
译者:[su-kaiyao](https://github.com/su-kaiyao)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
Jetty被广泛用于多种项目和产品,在开发环境和生产环境中都可以使用。

- 灵活、可扩展
- 体积小、资源占用少
- 可嵌入
- 支持异步
- 企业级的弹性扩展
- Apache和Eclipse双重许可证

### Ubuntu 14.10 server上安装Jetty 9 ###
#### ERROR: JETTY_HOME not set, you need to set it or install in a standard location ####

你需要确保在/etc/default/jetty文件中设置了正确的Jetty主目录(JETTY_HOME)路径,然后就可以使用以下URL来测试jetty了。
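作为参考,下面是 /etc/default/jetty 的一个最小示例片段;其中的路径、用户和端口都是假设值,请按你的实际安装情况调整:

```shell
# /etc/default/jetty 的示例片段(各值均为假设值,按实际安装调整)
JETTY_HOME=/opt/jetty    # Jetty 的安装目录
JETTY_USER=jetty         # 运行 Jetty 的用户
JETTY_PORT=8085          # 监听端口,对应正文中的 http://服务器IP:8085
JETTY_HOST=0.0.0.0       # 绑定的地址
```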
Jetty现在应该运行在8085端口。打开浏览器并访问http://服务器IP:8085,你应该可以看到Jetty的欢迎页面。

#### Jetty服务检查 ####
via: http://www.ubuntugeek.com/install-jetty-9-java-servlet-engine-and-webserver

作者:[ruchi][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
如何在Linux的命令行中使用Evernote
================================================================================
这周让我们继续学习如何使用Linux命令行管理和组织信息。在命令行中管理[你的个人花费][1]之后,我建议你在命令行中管理你的笔记,特别是当你用Evernote记录笔记时。要是你从来没有听说过它,[Evernote][2]是一个用户友好的在线服务,可以在不同的设备间同步笔记。除了提供花哨的基于Web的API,Evernote还发布了Windows、Mac、[Android][3]和iOS上的客户端,然而至今还没有官方的Linux客户端可用。老实说,在众多非官方的Linux客户端中,有一个程序一出现就吸引了所有命令行爱好者的目光,它就是[Geeknote][4]。

### Geeknote 的安装 ###

Geeknote是使用Python开发的。因此,在开始之前请确保你已经安装了Python(最好是2.7版本)和git。

#### 在 Debian、 Ubuntu 和 Linux Mint 中 ####
### Geeknote 的基本使用 ###

安装完Geeknote后,你需要先将Geeknote与你的Evernote账号关联:

    $ geeknote login

接着输入你的email地址、密码和你的二步验证码。如果你没有启用二步验证的话,留空并按下回车即可。

![](https://farm8.staticflickr.com/7525/15761947888_7bc71bf216_o.jpg)

显然你需要一个Evernote账号来完成这些,因此先去注册吧。

完成这些之后,你就可以开始创建新的笔记并编辑它们了。

不过首先,你还需要设置你最喜欢的文本编辑器:

    $ geeknote settings --editor vim

然后,创建一条新笔记的一般语法是:

    $ geeknote create --title [title of the new note] (--content [content] --tags [comma-separated tags] --notebook [comma-separated notebooks])

上面的命令中,只有'title'是必须的,它会成为新笔记的标题。其他选项可以为笔记添加额外的元数据:为笔记关联标签、指定放在哪个笔记本里。同样,如果你的标题或者内容中有空格,不要忘记将它们放在引号中。

比如:

    $ geeknote create --title "My note" --content "This is a test note" --tags "finance, business, important" --notebook "Family"
接下来,你可以编辑你的笔记。语法很相似:

    $ geeknote edit --note [title of the note to edit] (--title [new title] --tags [new tags] --notebook [new notebooks])

注意其中可选的参数,如新的标题、标签和笔记本,用来修改笔记的元数据。比如,你可以用下面的命令重命名笔记:

    $ geeknote edit --note [old title] --title [new title]

    $ geeknote find --search [text-to-search] --tags [comma-separated tags] --notebook [comma-separated notebooks] --date [date-or-date-range] --content-search

默认情况下,上面的命令会按标题搜索笔记。加上"--content-search"选项,就可以按内容搜索。

比如:

    $ geeknote find --search "*restaurant" --notebooks "Family" --date 31.03.2014-31.08.2014

显示指定标题的笔记:

    $ geeknote show [title]
小心,这是真正的删除。它会从云存储中删除这条笔记。

最后,还有很多管理标签和笔记本的选项。我想最有用的就是显示笔记本列表:

    $ geeknote notebook-list

![](https://farm8.staticflickr.com/7472/15762063420_43e3ee17da_o.jpg)

下面的命令与之非常相似。你可以猜到,列出所有标签的命令是:

    $ geeknote tag-list

    $ geeknote tag-create --title [tag title]

一旦你了解了窍门,就会发现这些语法是非常自然明确的。

如果你想要了解更多,不要忘记查看[官方文档][6]。
### 福利 ###

作为福利,Geeknote自带的gnsync工具可以让你在Evernote和本地计算机之间同步文件。不过,我发现它的语法有点生硬:

    $ gnsync --path [where to sync] (--mask [what kind of file to sync] --format [in which format] --logpath [where to write the log] --notebook [which notebook to use])

下面是这些参数的含义。

- **--path /home/adrien/Documents/notes/**: 与Evernote同步笔记的本地位置。
- **--mask "*.txt"**: 只同步纯文本文件。默认情况下,gnsync会尝试同步所有文件。
- **--format markdown**: 同步为纯文本或者markdown格式(默认是纯文本)。
- **--logpath /home/adrien/gnsync.log**: 同步日志的位置。出错时,gnsync会在这里写入日志信息。
- **--notebook "Family"**: 同步哪个笔记本中的笔记。如果留空,程序会创建一个以同步文件夹命名的笔记本。
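把上面这些参数拼在一起,一次完整的调用大致如下(其中的路径和笔记本名都只是示例,并且需要先用 geeknote login 登录):

```
$ gnsync --path /home/adrien/Documents/notes/ \
         --mask "*.txt" \
         --format markdown \
         --logpath /home/adrien/gnsync.log \
         --notebook "Family"
```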
总的来说,Geeknote是一款漂亮的Evernote命令行客户端。我个人不常使用Evernote,但它仍然很漂亮和有用。命令行一方面让它显得很极客,另一方面也让它很容易与shell脚本结合。此外,Git上还有Geeknote的一个分支项目,在ArchLinux AUR上称为[geeknote-improved-git][7],貌似它有更多的特性,开发也比其他分支更活跃,值得去看看。

你认为Geeknote怎么样?有什么你想用的功能么?或者你更喜欢使用传统的程序?在评论区告诉我们吧。

via: http://xmodulo.com/evernote-command-line-linux.html

作者:[Adrien Brochard][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
CentOS 7.x中正确设置时间与时钟服务器同步
================================================================================
**Chrony**是一个开源的自由软件,它能帮助你保持系统时钟与时钟服务器(NTP)同步,让你的时间保持精确。它由两个程序组成:chronyd和chronyc。chronyd是一个后台运行的守护进程,用于调整内核中运行的系统时钟,使之与时钟服务器同步。它会确定计算机增减时间的速率,并对此进行补偿。chronyc提供了一个用户界面,用于监控性能并进行多样化的配置。它既可以在运行chronyd实例的计算机上工作,也可以在另一台远程计算机上工作。

在像CentOS 7之类基于RHEL的操作系统上,已经默认安装了Chrony。
**server** - 该参数可以多次使用,用于添加时钟服务器,必须以"server "开头的格式使用(后面跟上服务器地址)。一般而言,你想添加多少服务器,就可以添加多少服务器。

例如:

    server 0.centos.pool.ntp.org
    server 3.europe.pool.ntp.org

**stratumweight** - stratumweight指令设置当chronyd从可用源中选择同步源时,每多一个层级应该在同步距离上增加多少距离。默认情况下,CentOS中设置为0,让chronyd在选择源时忽略源的层级。

**driftfile** - chronyd程序的主要行为之一,就是根据实际时间计算出计算机增减时间的速率,将它记录到一个文件中是最合理的,它会在重启后为系统时钟作出补偿,甚至可能的话,直接从时钟服务器获得较好的估值。

**rtcsync** - rtcsync指令将启用一个内核模式,在该模式中,系统时间每11分钟会拷贝一次到实时时钟(RTC)。

**allow / deny** - 这里你可以指定一台主机、一个子网或者一个网络,以允许或拒绝它们通过NTP连接到这台扮演时钟服务器的机器。

例如:

    allow 192.168.4.5
    deny 192.168/16
**bindcmdaddress** - 该指令允许你限制chronyd在哪个网络接口上监听命令包(由chronyc发出)。该指令在cmddeny机制之上,提供了一个额外的访问控制层级。

例如:

    bindcmdaddress 127.0.0.1
    bindcmdaddress ::1

**makestep** - 通常,chronyd会根据需要通过减慢或加速时钟,使系统逐步纠正所有时间偏差。在某些特定情况下,系统时钟可能漂移过快,导致这种逐步调整需要很长时间才能纠正系统时钟。该指令强制chronyd在时间偏差大于指定阈值时直接步进调整系统时钟,但只在启动后的前几次时钟更新中生效(更新次数超过指定限制后便不再步进,可使用负值来禁用该限制)。
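把上面介绍的各条指令放在一起,一个最小的 /etc/chrony.conf 示例大致如下(其中的服务器、网段和阈值都只是示例值):

```
# /etc/chrony.conf 的一个最小示例(各值均为示例,按需调整)
server 0.centos.pool.ntp.org iburst
server 3.europe.pool.ntp.org iburst
stratumweight 0                    # 选择源时忽略源的层级
driftfile /var/lib/chrony/drift    # 记录时钟漂移速率的文件
rtcsync                            # 每11分钟同步一次实时时钟(RTC)
makestep 10 3                      # 前3次更新中偏差超过10秒则步进调整
allow 192.168.4.0/24               # 允许该网段的NTP客户端
bindcmdaddress 127.0.0.1           # 只在本地回环地址监听chronyc命令
```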
### 使用chronyc ###

via: http://linoxide.com/linux-command/chrony-time-sync/

作者:[Adrian Dinu][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
Linux 有问必答:如何在Ubuntu或者Debian中启动后进入命令行
================================================================================
> **提问**:我运行的是Ubuntu桌面,但是我希望启动后临时进入命令行。有什么简便的方法可以启动进入终端?

Linux桌面自带了一个显示管理器(比如:GDM、KDM、LightDM),它可以让计算机启动后自动进入一个基于GUI的登录环境。然而,如果你要直接启动进入终端怎么办?比如,你在排查桌面相关的问题,或者想要运行一个不需要GUI的应用程序。

注意,虽然你可以通过按下Ctrl+Alt+F1到F6,临时从桌面GUI切换到虚拟终端,但此时你的桌面GUI仍在后台运行,这不同于纯文本模式启动。

在Ubuntu或者Debian桌面中,你可以通过传递合适的内核参数,让系统在启动时进入文本模式。
### 启动临时进入命令行 ###

如果你只是想禁用桌面GUI、一次性地进入文本模式,你可以使用GRUB菜单。

首先,打开你的电脑。当你看到初始的GRUB菜单时,按下'e'。

![](https://farm8.staticflickr.com/7490/16112246542_bc1875a397_z.jpg)

接着会进入下一屏,这里你可以修改内核启动选项。向下滚动到以"linux"开始的行,这里就是内核参数的列表。删除参数列表中的"quiet"和"splash",并在参数列表中添加"text"。

![](https://farm8.staticflickr.com/7471/15493282603_8a70f70af2_z.jpg)

更新后的内核选项列表看上去像这样。按下Ctrl+x继续启动。这会以详细模式启动控制台一次(LCTT译注:由于没有保存修改,所以下次重启还会进入GUI)。

![](https://farm8.staticflickr.com/7570/15925676530_b11af59243_z.jpg)
### 永久启动进入命令行 ###

如果你想要永久启动进入命令行,你需要[更新定义内核启动参数的GRUB设置][1]。

    $ sudo vi /etc/default/grub

查找以GRUB\_CMDLINE\_LINUX\_DEFAULT开头的行,并用"#"注释掉这行。这会禁用初始启动画面,而启用详细模式(也就是说,显示详细的启动过程)。

更改 GRUB_CMDLINE_LINUX="" 成:
    $ sudo update-grub

这时,你的系统应该就从GUI启动切换到控制台启动了。可以重启一下来验证。

![](https://farm8.staticflickr.com/7518/16106378151_81ac6b5a49_b.jpg)
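综合上面的修改,/etc/default/grub 中与文本模式启动相关的部分大致如下(这只是一个示例草稿,"text" 等具体取值请以正文和你的系统为准):

```shell
# /etc/default/grub 的相关片段(示例)
#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"   # 注释掉以禁用启动画面、启用详细模式
GRUB_CMDLINE_LINUX="text"                    # 以文本模式启动
```

修改之后别忘了运行 sudo update-grub 使其生效。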
via: http://ask.xmodulo.com/boot-into-command-line-ubuntu-debian.html

译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
交友网站的2000万用户数据遭泄露
----------
*泄露数据包括Gmail、Hotmail以及Yahoo邮箱*

![泄露的数据很可能来自于在线交友网站Topface](http://i1-news.softpedia-static.com/images/news2/Data-of-20-Million-Users-Stolen-from-Dating-Website-471179-2.jpg)

**一名黑客非法窃取了在线交友网站Topface一个包含2000万用户资料的数据库。**

目前并不清楚这些数据是否已经公开,但是根据某个未公开页面的消息,一个网名为"Mastermind"的人声称掌握着这些数据。

### 泄露数据列表涵盖了全世界数百个域名 ###

此人号称泄露数据的内容100%真实有效。Easy Solutions的CTO Daniel Ingevaldson周日在一篇博客中说,泄露数据包括Hotmail、Yahoo和Gmail等邮箱地址。

Easy Solutions是一家位于美国的公司,提供多个不同平台的网络欺诈检测与安全防护产品。

据Ingevaldson所说,泄露的数据中,700万来自于Hotmail,250万来自于Yahoo,220万来自于Gmail.com。

我们并不清楚这些数据是可以直接登录邮箱账户的用户名和密码,还是登录交友网站的账户凭证。另外,也不清楚这些数据在数据库中是加密存储还是明文存储的。

邮箱地址常常被用作在线网站的登录用户名,用户凭借唯一密码进行登录。然而,重复使用同一个密码是许多用户的习惯,同一个密码往往可以登录多个在线账户。

[Ingevaldson还说][1]:"看起来,这些数据事实上涵盖了全世界数百个域名。除了原始被黑的网站,黑客和不法分子很可能利用窃取的帐号密码,通过自动扫描和撞库,危害包括银行业、旅游业以及email提供商在内的多个网站。"

### 预计将披露更多信息 ###

据我们的多个消息源爆料,数据的泄露源就是Topface,一个号称拥有9000万用户的在线交友网站。其总部位于俄罗斯圣彼得堡,超过50%的用户来自于俄罗斯以外的国家。

我们联系了Topface,向他们求证最近是否遭受了可能导致如此大量数据泄露的网络攻击;但目前我们仍未收到该公司的回复。

攻击者也可能无需获得非法访问权限就窃取了这些数据,Easy Solutions推测攻击者很可能针对网站用户使用钓鱼邮件,直接获取到了用户数据。

我们无法通过Easy Solutions网站上的在线表单联系到他们,但我们已经尝试了其他沟通方式,目前正在等待更多信息的披露。

--------------------------------------------------------------------------------

via: http://news.softpedia.com/news/Data-of-20-Million-Users-Stolen-from-Dating-Website-471179.shtml

本文发布时间:26 Jan 2015, 10:20 GMT

作者:[Ionut Ilascu][a]

译者:[Mr小眼儿](https://github.com/tinyeyeser)

校对:[Caroline](https://github.com/carolinewuyan)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://news.softpedia.com/editors/browse/ionut-ilascu
[1]:http://newblog.easysol.net/dating-site-breached/
Linux下如何过滤、分割以及合并 pcap 文件
=============

如果你是一个需要测试[入侵侦测系统][1]或一些网络访问控制策略的网络管理员,那么你经常需要抓取数据包并在离线状态下分析这些文件。当需要保存捕获的数据包时,我们一般会存储为 libpcap 的数据包格式 pcap,这是一种被许多开源的嗅探工具以及捕包程序广泛使用的格式。如果 pcap 文件被用于入侵测试或离线分析的话,那么在将它们[注入][2]网络之前,通常要先对 pcap 文件进行一些操作。

![](https://farm8.staticflickr.com/7570/15425147404_a69f416673_c.jpg)
### Editcap 与 Mergecap ###

Wireshark 是最受欢迎的 GUI 嗅探工具,实际上它还带有一套非常有用的命令行工具集,其中包括 editcap 与 mergecap。editcap 是一个万能的 pcap 编辑器,它可以过滤 pcap 文件,并能以多种方式来分割 pcap 文件;mergecap 则可以将多个 pcap 文件合并为一个。这篇文章就是基于这些 Wireshark 命令行工具的。

如果你已经安装过 Wireshark 了,那么这些工具已经在你的系统中了。如果还没装的话,那么我们接下来就安装 Wireshark 命令行工具。需要注意的是,在基于 Debian 的发行版上,我们可以不安装 Wireshark GUI 而仅安装命令行工具;但是在 Red Hat 及基于它的发行版中,则需要安装整个 Wireshark 包。

**Debian, Ubuntu 或 Linux Mint**
通过 editcap,我们能以很多不同的规则来过滤 pcap 文件中的内容,并且将过滤结果保存到新文件中。

首先,以"起止时间"来过滤 pcap 文件。"-A < start-time >" 和 "-B < end-time >" 选项可以过滤出在这个时间段内到达的数据包(如,从 2:30 ~ 2:35)。时间的格式为 "YYYY-MM-DD HH:MM:SS"。

    $ editcap -A '2014-12-10 10:11:01' -B '2014-12-10 10:21:01' input.pcap output.pcap

也可以从某个文件中提取指定的 N 个包。下面的命令行从 input.pcap 文件中提取100个包(从 401 到 500)并将它们保存到 output.pcap 中:

    $ editcap input.pcap output.pcap 401-500

使用 "-D < dup-window >" 选项可以去除重复包(dup-window 可以看成是对比的窗口大小,仅与此范围内的包进行对比)。每个包都依次与它之前的 < dup-window > - 1 个包对比长度与 MD5 值,如果有匹配的则丢弃。

    $ editcap -D 10 input.pcap output.pcap
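这些选项也可以串联起来使用。下面是一个假设的小流程,先按时间段过滤,再在结果上去重(文件名均为示例,需要已安装 editcap):

```
$ editcap -A '2014-12-10 10:11:01' -B '2014-12-10 10:21:01' capture.pcap step1.pcap
$ editcap -D 10 step1.pcap clean.pcap
```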
如果要忽略时间戳,仅仅想以命令行中的顺序来合并文件,那么使用 -a 选项即可。

例如,下列命令会将 input.pcap 文件的内容写入到 output.pcap,并且将 input2.pcap 的内容追加在后面:

    $ mergecap -a -w output.pcap input.pcap input2.pcap
### 总结 ###

在这篇指导中,我演示了多个用 editcap、mergecap 操作 pcap 文件的例子。除此之外,还有其它相关的工具,如 [reordercap][3] 用于将数据包重新排序,[text2pcap][4] 用于将 pcap 文件转换为文本格式,[pcap-diff][5] 用于比较 pcap 文件的异同,等等。当进行网络入侵测试及解决网络问题时,这些工具与[包注入工具][6]非常实用,所以最好了解它们。

你是否使用过 pcap 工具?如果用过的话,你用它来做过什么呢?

via: http://xmodulo.com/filter-split-merge-pcap-linux.html

作者:[Dan Nanni][a]
译者:[SPccman](https://github.com/SPccman)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
This App Can Write a Single ISO to 20 USB Drives Simultaneously
================================================================================
**If I were to ask you to burn a single Linux ISO to 17 USB thumb drives, how would you go about doing it?**

Code-savvy folks would write a little bash script to automate the process, and a large number would use a GUI tool like the USB Startup Disk Creator to burn the ISO to each drive in turn, one by one. But the rest of us would quickly conclude that neither method is ideal.

### Problem > Solution ###

![GNOME MultiWriter in action](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/gnome-multi-writer.jpg)

GNOME MultiWriter in action

Richard Hughes, a GNOME developer, faced a similar dilemma. He wanted to create a number of USB drives pre-loaded with an OS, but wanted a tool simple enough for someone like his dad to use.

His response was to create a **brand new app** that combines both approaches into one easy-to-use tool.

It's called "[GNOME MultiWriter][1]" and lets you write a single ISO or IMG to multiple USB drives at the same time.

It nixes the need to customize or create a command-line script and relinquishes the need to waste an afternoon performing an identical set of actions on repeat.

All you need is this app, an ISO, some thumb drives and lots of empty USB ports.

### Use Cases and Installing ###

![The app can be installed on Ubuntu](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/mutli-writer-on-ubuntu.jpg)

The app can be installed on Ubuntu

The app has a pretty well-defined usage scenario, that being situations where USB sticks pre-loaded with an OS or live image are being distributed.

That being said, it should work just as well for anyone wanting to create a solitary bootable USB stick, too — and since I've never once successfully created a bootable image from Ubuntu's built-in disk creator utility, working alternatives are welcome news to me!

Hughes, the developer, says it **supports up to 20 USB drives**, each being between 1GB and 32GB in size.

The drawback (for now) is that GNOME MultiWriter is not a finished, stable product. It works, but at this early stage there are no pre-built binaries to install or a PPA to add to your overstocked software sources.

If you know your way around the usual configure/make process you can get it up and running in no time. On Ubuntu 14.10 you may also need to install the following packages first:

    sudo apt-get install gnome-common yelp-tools libcanberra-gtk3-dev libudisks2-dev gobject-introspection

If you get it up and running, give it a whirl and let us know what you think!

Bugs and pull requests can be logged on the GitHub page for the project, which is where you'll also find tarball downloads for manual installation.

- [GNOME MultiWriter on Github][2]

--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2015/01/gnome-multiwriter-iso-usb-utility

作者:[Joey-Elijah Sneddon][a]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://github.com/hughsie/gnome-multi-writer/
[2]:https://github.com/hughsie/gnome-multi-writer/
su-kaiyao translating

4 Best Modern Open Source Code Editors For Linux
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/01/Best_Open_Source_Editors.jpeg)

via: http://itsfoss.com/best-modern-open-source-code-editors-for-linux/

[12]:http://lighttable.com/
[13]:https://github.com/LightTable/LightTable
[14]:http://itsfoss.com/notepadqq-notepad-for-linux/
[15]:http://itsfoss.com/scite-the-notepad-for-linux/
@ -0,0 +1,60 @@
|
||||
Meet Vivaldi — A New Web Browser Built for Power Users
|
||||
================================================================================
|
||||
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/Screen-Shot-2015-01-27-at-17.36.jpg)
|
||||
|
||||
**A brand new web browser has arrived this week that aims to meet the needs of power users — and it’s already available for Linux.**
|
||||
|
||||
Vivaldi is the name of this new browser and it has been launched as a tech preview (read: a beta without the responsibility) for 64-bit Linux machines, Windows and Mac. It is built — shock — on the tried-and-tested open-source frameworks of Chromium, Blink and Google’s open-source V8 JavaScript engine (among other projects).
|
||||
|
||||
Does the world really want another browser? Vivaldi, the brain child of former Opera Software CEO Jon von Tetzchner, is less concerned about want and more about need.
|
||||
|
||||
Vivaldi is being built with the sort of features that keyboard preferring tab addicts need. It is not being pitched at users who find Firefox perplexing or whose sole criticism of Chrome is that it moved the bookmarks button.
|
||||
|
||||
That’s not tacky marketing spiel either. Despite the ‘technical preview’ badge it comes with, Vivaldi is already packed with features that demonstrate its power user slant.
|
||||
|
||||
Plenty of folks feel left behind and underserved by the simplified, paired back offerings other software companies are producing. Vivaldi, even at this early juncture, looks well placed to succeed in winning them over.
|
||||
|
||||
### Vivaldi Features ###
|
||||
|
||||
A few of Vivaldi’s key features already present include:
|
||||
|
||||
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/quick.jpg)
|
||||
|
||||
**Quick Commands** (Ctrl + Q) is an in-app HUD that lets you quickly filter through settings, options and features, be it opening a bookmark or hiding the status bar, using your keyboard. No clicks needed.
|
||||
|
||||
**Tab Stacks** let you clean up your workspace by grouping separate tabs into one, and then using a keyboard command or the tab preview picker to switch between them.
|
||||
|
||||
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/tab-stacks.jpg)
A collapsible **side panel** that houses extra features (just like old Opera) including a (not yet working) mail client, contacts, bookmarks browser and note taking section that lets you take and annotate screenshots.

A bunch of other features are on offer too, including customizable keyboard shortcuts, a tabs bar that can be set on any edge of the browser (or hidden entirely), privacy options and a speed dial with folders.

### Opera Mark II ###

![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/vivaldi-settings-in-ubuntu-750x434.jpg)

It’s not a leap to see Vivaldi as the true successor to Opera post-Presto (Opera’s old, proprietary rendering engine). Opera (which also pushed out a minor new update today) has split out many of its “power user” features as it chases a lighter, more manageable set of features.

Vivaldi wants to pick up the baggage Opera has been so keen to offload. And while that might not help it grab marketshare it will see it grab the attention of power users, many of whom will no doubt already be using Linux.

### Download ###

Interested in taking it for a spin? You can. Vivaldi is available to download for Windows, Mac and 64-bit Linux distributions. On the latter you have a choice of Debian or RPM installer.

Bear in mind that it’s not finished and that more features (including extensions, sync and more) are planned for future builds.

- [Download Vivaldi Tech Preview for Linux][1]

--------------------------------------------------------------------------------

via: http://www.omgubuntu.co.uk/2015/01/vivaldi-web-browser-linux-download-power-users

作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://vivaldi.com/#Download
@ -1,144 +0,0 @@
Translating by ZTinoZ

20 Linux Commands Interview Questions & Answers
================================================================================

**Q:1 How to check current run level of a linux server ?**
Ans: ‘who -r’ & ‘runlevel’ commands are used to check the current runlevel of a linux box.

**Q:2 How to check the default gateway in linux ?**

Ans: Using the commands “route -n” and “netstat -nr”, we can check the default gateway. Apart from the default gateway info, these commands also display the current routing tables.

**Q:3 How to rebuild initrd image file on Linux ?**

Ans: In case of CentOS 5.X / RHEL 5.X, the mkinitrd command is used to create the initrd file, an example is shown below:

    # mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)

If you want to create an initrd for a specific kernel version, then replace ‘uname -r’ with the desired kernel version.

In case of CentOS 6.X / RHEL 6.X, the dracut command is used to create the initrd file, an example is shown below:

    # dracut -f

The above command will create the initrd file for the current kernel version. To rebuild the initrd file for a specific kernel, use the below command:

    # dracut -f initramfs-2.x.xx-xx.el6.x86_64.img 2.x.xx-xx.el6.x86_64

**Q:4 What is cpio command ?**

Ans: cpio stands for copy in and copy out. cpio copies files, lists, and extracts files to and from an archive (or a single file).

**Q:5 What is patch command and where to use it ?**

Ans: As the name suggests, the patch command is used to apply changes (or patches) to a text file. The patch command generally accepts output from diff and converts older versions of files into newer versions. For example, the Linux kernel source code consists of a number of files with millions of lines, so whenever a contributor makes changes, he/she sends only the changes instead of sending the whole source code. The receiver then applies the changes to the original source code with the patch command.

Create a diff file for use with patch:

    # diff -Naur old_file new_file > diff_file

Where old_file and new_file are either single files or directories containing files. The -r option supports recursion of a directory tree.

Once the diff file has been created, we can apply it to patch the old file into the new file:

    # patch < diff_file
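As a quick sketch of that workflow (the file names and /tmp paths below are just examples, not part of the original answer):

```shell
# Create a scratch directory with an "old" and a "new" version of a file.
demo=$(mktemp -d)
cd "$demo"
printf 'hello\nworld\n' > old_file
printf 'hello\nlinux\n' > new_file

# Record the changes in unified format; diff exits 1 when files differ,
# so ignore its exit status.
diff -Naur old_file new_file > diff_file || true

# Apply the recorded changes: old_file now matches new_file.
patch old_file < diff_file

# Remember the scratch location so it can be inspected afterwards.
echo "$demo" > /tmp/patchdemo.dir
```

After this runs, old_file contains the "new" content and can be compared against new_file to confirm the patch applied cleanly.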

**Q:6 What is use of aspell ?**

Ans: As the name suggests, aspell is an interactive spelling checker for the linux operating system. The aspell command is the successor to an earlier program named ispell, and can be used, for the most part, as a drop-in replacement. While the aspell program is mostly used by other programs that require spell-checking capability, it can also be used very effectively as a stand-alone tool from the command line.

**Q:7 How to check the SPF record of domain from command line ?**

Ans: We can check the SPF record of a domain using the dig command. An example is shown below:

    linuxtechi@localhost:~$ dig -t TXT google.com

**Q:8 How to identify which package the specified file (/etc/fstab) is associated with in linux ?**

Ans: # rpm -qf /etc/fstab

The above command will list the package which provides the file “/etc/fstab”.

**Q:9 Which command is used to check the status of bond0 ?**

Ans: cat /proc/net/bonding/bond0

**Q:10 What is the use of /proc file system in linux ?**

Ans: The /proc file system is a RAM based file system which maintains information about the current state of the running kernel, including details on CPU, memory, partitioning, interrupts, I/O addresses, DMA channels, and running processes. This file system is represented by various files which do not actually store the information; they point to the information in memory. The /proc file system is maintained automatically by the system.
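A quick illustration (the exact values will differ per machine): entries under /proc read like ordinary files even though the kernel generates their contents on the fly.

```shell
# Number of logical CPUs, straight from the kernel's view of the hardware.
cpus=$(grep -c '^processor' /proc/cpuinfo)

# Seconds since boot; /proc/uptime holds two space-separated numbers.
uptime_seconds=$(cut -d ' ' -f 1 /proc/uptime)

# Keep a copy of the summary in /tmp (path is just an example).
echo "CPUs: $cpus, up for $uptime_seconds seconds" | tee /tmp/procdemo.out
```

Note that `ls -l` shows most /proc files with size 0; the content only exists at the moment it is read.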

**Q:11 How to find files larger than 10MB in size in /usr directory ?**

Ans: # find /usr -size +10M

**Q:12 How to find files in the /home directory that were modified more than 120 days ago ?**

Ans: # find /home -mtime +120

**Q:13 How to find files in the /var directory that have not been accessed in the last 90 days ?**

Ans: # find /var -atime +90

**Q:14 Search for core files in the entire directory tree and delete them as found without prompting for confirmation**

Ans: # find / -name core -exec rm {} \;
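The four find answers above can be tried safely in a scratch directory instead of /, /usr, /home or /var (all file names below are examples, and the `touch -d` date syntax assumes GNU coreutils):

```shell
demo=$(mktemp -d)

# An 11 MB file, a file last modified 130 days ago, and a fake "core" file.
dd if=/dev/zero of="$demo/big.img" bs=1M count=11 2>/dev/null
touch -d '130 days ago' "$demo/stale.log"
touch "$demo/core"

find "$demo" -size +10M  > /tmp/find_size.out    # larger than 10MB
find "$demo" -mtime +120 > /tmp/find_mtime.out   # modified >120 days ago
find "$demo" -name core -exec rm {} \;           # delete without prompting

# Remember the scratch location for later inspection.
echo "$demo" > /tmp/finddemo.dir
```

Only big.img matches the size test, only stale.log matches the mtime test, and the core file is gone afterwards because -exec rm ran without any confirmation prompt.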

**Q:15 What is the purpose of strings command ?**

Ans: The strings command is used to extract and display the legible contents of a non-text file.

**Q:16 What is the use of tee filter ?**

Ans: The tee filter is used to send an output to more than one destination. It can send one copy of the output to a file and another to the screen (or some other program) if used with a pipe.

    linuxtechi@localhost:~$ ll /etc | nl | tee /tmp/ll.out

In the above example, the output from ll is numbered and captured in the /tmp/ll.out file. The output is also displayed on the screen.
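Since ll is usually just an alias for ls -l (and aliases are not available inside scripts), the same pipeline can be sketched with ls -l directly; the -a flag is the other option worth knowing, making tee append instead of overwrite:

```shell
# Copy the numbered listing to a file; the screen copy is discarded here
# (drop the >/dev/null to see it, as you would at an interactive prompt).
ls -l /etc | nl | tee /tmp/ll.out > /dev/null

# -a appends to the file instead of truncating it.
echo "== end of listing ==" | tee -a /tmp/ll.out > /dev/null
```

After this, /tmp/ll.out holds the numbered listing followed by the appended marker line.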

**Q:17 What would the command export PS1="$LOGNAME@`hostname`:\$PWD: " do ?**

Ans: The export command provided will change the login prompt to display username, hostname, and the current working directory.

**Q:18 What would the command ll | awk '{print $3,"owns",$9}' do ?**

Ans: The ll command provided will display file names and their owners.
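In expanded form, again using ls -l in place of the ll alias: $3 is the owner column and $9 the file name in the usual long-listing layout (column positions are an assumption about your ls output format):

```shell
# Skip the "total N" summary line (NR > 1), then print owner and name.
ls -l /etc | awk 'NR > 1 {print $3, "owns", $9}' | tee /tmp/owners.out > /dev/null
```

Each output line reads like "root owns passwd", one per directory entry.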

**Q:19 What is the use of at command in linux ?**

Ans: The at command is used to schedule a one-time execution of a program in the future. All submitted jobs are spooled in the /var/spool/at directory and executed by the atd daemon when the scheduled time arrives.

**Q:20 What is the role of lspci command in linux ?**

Ans: The lspci command displays information about PCI buses and the devices attached to your system. Specify -v, -vv, or -vvv for detailed output. With the -m option, the command produces machine-readable output.

--------------------------------------------------------------------------------

via: http://www.linuxtechi.com/20-linux-commands-interview-questions-answers/

作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.linuxtechi.com/author/pradeep/
31
sources/talk/20150127 Windows 10 versus Linux.md
Normal file
@ -0,0 +1,31 @@
Windows 10 versus Linux
================================================================================

![](https://farm4.staticflickr.com/3852/14863156322_e4edbae70e_t.jpg)

Windows 10 seemed to dominate the headlines today, even in many Linux circles. Leading the pack is Brian Fagioli at betanews.com saying Windows 10 is ringing the death knell for Linux desktops. Microsoft announced today that Windows 10 will be free for loyal Windows users and Steven J. Vaughan-Nichols said it's the newest Open Source company. Then Matt Hartley compares Windows 10 to Ubuntu and Jesse Smith reviews Windows 10 from a Linux user's perspective.

**Windows 10** was the talk around water coolers today with Microsoft's [announcement][1] that it would be free for Windows 7 and up users. Here in Linuxland, that didn't go unnoticed. Brian Fagioli at betanews.com, a self-proclaimed Linux fan, said today, "Windows 10 closes the door entirely. The year of the Linux desktop will never happen. Rest in peace." [Fagioli explained][2] that Microsoft listened to user complaints and not only addressed them but improved way beyond that. He said Linux missed the boat by failing to capitalize on the Windows 8 unpopularity and ultimate failure. Then he concluded that we on the fringe must accept our "shattered dreams" thanks to Windows 10.

However, Jesse Smith, of Distrowatch.com fame, said Microsoft isn't making it easy to find the download, but it is possible and he did it. The installer was simple enough except for the partitioner, which was quite limited and almost scary. After finally getting into Windows 10, Smith said the layout was "sparse" without a lot of the distractions folks hated about 7. The menu is back and the start screen is gone. A new package manager looks a lot like Ubuntu's and Android's according to Smith, but requires an online Microsoft account to use. [Smith concludes][3] in part, "Windows 10 feels like a beta for an early version of Android, a consumer operating system that is designed to be on-line all the time. It does not feel like an operating system I would use to get work done."

Smith's [full article][4] compares Windows 10 to Linux quite a bit, but Matt Hartley today posted an actual Windows 10 vs Linux report. [He said][5] both installers were straightforward and easy, but Windows still doesn't dual boot easily, and Windows provides encryption by default while Ubuntu offers it as an option. At the desktop Hartley said Windows 10 "is struggling to let go of its Windows 8 roots." He thought the Windows Store looks more polished than Ubuntu's but didn't really like the "tile everything" approach to newly installed apps. In conclusion, Hartley said, "The first issue is that it's going to be a free upgrade for a lot of Windows users. This means the barrier to entry and upgrade is largely removed. Second, it seems this time Microsoft has really buckled down on listening to what their users want."

Steven J. Vaughan-Nichols today said that Microsoft is the newest Open Source company; not because it's going to be releasing Windows 10 as a free upgrade but because Microsoft is changing itself from a software company to a software as a service company. And, according to Vaughan-Nichols, Microsoft needs Open Source to do it. They've been working on it for years beginning with Novell/SUSE. Not only that, they've been releasing software as Open Source as well (whatever the motives). [Vaughan-Nichols concluded][6], "Most people won't see it, but Microsoft -- yes Microsoft -- has become an open-source company."

--------------------------------------------------------------------------------

via: http://ostatic.com/blog/windows-10-versus-linux

作者:[Susan Linton][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://ostatic.com/member/susan-linton
[1]:https://news.google.com/news/section?q=microsoft+windows+10+free&ie=UTF-8&oe=UTF-8
[2]:http://betanews.com/2015/01/25/windows-10-is-the-final-nail-in-the-coffin-for-the-linux-desktop/
[3]:http://blowingupbits.com/2015/01/an-outsiders-perspective-on-windows-10-preview/
[4]:http://blowingupbits.com/2015/01/an-outsiders-perspective-on-windows-10-preview/
[5]:http://www.datamation.com/open-source/windows-vs-linux-the-2015-version-1.html
[6]:http://www.zdnet.com/article/microsoft-the-open-source-company/
@ -0,0 +1,86 @@
7 communities driving open source development
================================================================================

Not so long ago, the open source model was the rebellious kid on the block, viewed with suspicion by established industry players. Today, open initiatives and foundations are flourishing with long lists of vendor committers who see the model as a key to innovation.
![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg)

### Open Development of Tech Drives Innovation ###

Over the past two decades, open development of technology has come to be seen as a key to driving innovation. Even companies that once saw open source as a threat have come around — Microsoft, for example, is now active in a number of open source initiatives. To date, most open development has focused on software. But even that is changing as communities have begun to coalesce around open hardware initiatives. Here are seven organizations that are successfully promoting and developing open technologies, both hardware and software.

### OpenPOWER Foundation ###

![](http://images.techhive.com/images/article/2015/01/1_openpower-100539100-orig.jpg)

The [OpenPOWER Foundation][1] was founded by IBM, Google, Mellanox, Tyan and NVIDIA in 2013 to drive open collaboration hardware development in the same spirit as the open source software development which has found fertile ground in the past two decades.

IBM seeded the foundation by opening up its Power-based hardware and software technologies, offering licenses to use Power IP in independent hardware products. More than 70 members now work together to create custom open servers, components and software for Linux-based data centers.

In April, OpenPOWER unveiled a technology roadmap based on new POWER8 processor-based servers capable of analyzing data 50 times faster than the latest x86-based systems. In July, IBM and Google released a firmware stack. October saw the availability of NVIDIA GPU accelerated POWER8 systems and the first OpenPOWER reference server from Tyan.

### The Linux Foundation ###

![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg)

Founded in 2000, [The Linux Foundation][2] is now the host for the largest open source, collaborative development effort in history, with more than 180 corporate members and many individual and student members. It sponsors the work of key Linux developers and promotes, protects and advances the Linux operating system and collaborative software development.

Some of its most successful collaborative projects include Code Aurora Forum (a consortium of companies with projects serving the mobile wireless industry), MeeGo (a project to build a Linux kernel-based operating system for mobile devices and IVI) and the Open Virtualization Alliance (which fosters the adoption of free and open source software virtualization solutions).

### Open Virtualization Alliance ###

![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg)

The [Open Virtualization Alliance (OVA)][3] exists to foster the adoption of free and open source software virtualization solutions like Kernel-based Virtual Machine (KVM) through use cases and support for the development of interoperable common interfaces and APIs. KVM turns the Linux kernel into a hypervisor.

Today, KVM is the most commonly used hypervisor with OpenStack.

### The OpenStack Foundation ###

![](http://images.techhive.com/images/article/2015/01/4_the-openstack-foundation-100539096-orig.jpg)

Originally launched as an Infrastructure-as-a-Service (IaaS) product by NASA and Rackspace hosting in 2010, the [OpenStack Foundation][4] has become the home for one of the biggest open source projects around. It boasts more than 200 member companies, including AT&T, AMD, Avaya, Canonical, Cisco, Dell and HP.

Organized around a six-month release cycle, the foundation's OpenStack projects are developed to control pools of processing, storage and networking resources through a data center — all managed or provisioned through a Web-based dashboard, command-line tools or a RESTful API. So far, the collaborative development supported by the foundation has resulted in the creation of OpenStack components including OpenStack Compute (a cloud computing fabric controller that is the main part of an IaaS system), OpenStack Networking (a system for managing networks and IP addresses) and OpenStack Object Storage (a scalable redundant storage system).

### OpenDaylight ###

![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg)

Another collaborative project to come out of the Linux Foundation, [OpenDaylight][5] is a joint initiative of industry vendors like Dell, HP, Oracle and Avaya, founded in April 2013. Its mandate is the creation of a community-led, open, industry-supported framework consisting of code and blueprints for Software-Defined Networking (SDN). The idea is to provide a fully functional SDN platform that can be deployed directly, without requiring other components, though vendors can offer add-ons and enhancements.

### Apache Software Foundation ###

![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg)

The [Apache Software Foundation (ASF)][6] is home to nearly 150 top level projects ranging from open source enterprise automation software to a whole ecosystem of distributed computing projects related to Apache Hadoop. These projects deliver enterprise-grade, freely available software products, while the Apache License is intended to make it easy for users, whether commercial or individual, to deploy Apache products.

ASF was incorporated in 1999 as a membership-based, not-for-profit corporation with meritocracy at its heart — to become a member you must first be actively contributing to one or more of the foundation's collaborative projects.

### Open Compute Project ###

![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg)

An outgrowth of Facebook's redesign of its Oregon data center, the [Open Compute Project (OCP)][7] aims to develop open hardware solutions for data centers. The OCP is an initiative made up of cheap, vanity-free servers, modular I/O storage for Open Rack (a rack standard designed for data centers to integrate the rack into the data center infrastructure) and a relatively "green" data center design.

OCP board members include representatives from Facebook, Intel, Goldman Sachs, Rackspace and Microsoft.

OCP recently announced two options for licensing: an Apache 2.0-like license that allows for derivative works and a more prescriptive license that encourages changes to be rolled back into the original software.
--------------------------------------------------------------------------------

via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities-driving-open-source-development.html

作者:[Thor Olavsrud][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.networkworld.com/author/Thor-Olavsrud/
[1]:http://openpowerfoundation.org/
[2]:http://www.linuxfoundation.org/
[3]:https://openvirtualizationalliance.org/
[4]:http://www.openstack.org/foundation/
[5]:http://www.opendaylight.org/
[6]:http://www.apache.org/
[7]:http://www.opencompute.org/
153
sources/talk/20150128 The top 10 rookie open source projects.md
Normal file
@ -0,0 +1,153 @@
The top 10 rookie open source projects
================================================================================

Black Duck presents its Open Source Rookies of the Year -- the 10 most exciting, active new projects germinated by the global open source community

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_01-100564902-orig.jpeg)

### Open Source Rookies of the Year ###

Each year sees the start of thousands of new open source projects. Only a handful gets real traction. Some projects gain momentum by building on existing, well-known technologies; others truly break new ground. Many projects are created to solve a simple development problem, while others begin with loftier intentions shared by like-minded developers around the world.

Since 2009, the open source software logistics company Black Duck has identified the [Open Source Rookies of the Year][1], based on activity tracked by its [Open Hub][2] (formerly Ohloh) site. This year, we're delighted to present 10 winners and two honorable mentions for 2015, selected from thousands of open source projects. Using a weighted scoring system, points were awarded based on project activity, the pace of commits, and several other factors.

Open source has become the industry's engine of innovation. This year, for example, growth in projects related to Docker containerization trumped every other rookie area -- and not coincidentally reflected the most exciting area of enterprise technology overall. At the very least, the projects described here provide a window on what the global open source developer community is thinking, which is fast becoming a good indicator of where we're headed.

### 2015 Open Source Rookie of the Year: DebOps ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_02-100564903-orig.jpeg)

[DebOps][3] is a collection of [Ansible][4] playbooks and roles, scalable from one container to an entire data center. Founder Maciej Delmanowski open-sourced DebOps to ensure his work outlived his current work environment and could grow in strength and depth from outside contributors.

DebOps began at a small university in Poland that ran its own data center, where everything was configured by hand. Crashes sometimes led to days of downtime -- and Delmanowski realized that a configuration management system was needed. Starting with a Debian base, DebOps is a group of Ansible playbooks that configure an entire data infrastructure. The project has been implemented in many different working environments, and the founders plan to continue supporting and improving it as time goes on.

### 2015 Open Source Rookie of the Year: Code Combat ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_03-100564904-gallery.idge.jpg)

The traditional pen-and-paper way of learning falls short for technical subjects. Games, however, are all about engagement -- which is why the founders of [CodeCombat][5] went about creating a multiplayer programming game to teach people how to code.

At its inception, CodeCombat was an idea for a startup, but the founders decided to create an open source project instead. The idea blossomed within the community, and the project gained contributors at a steady rate. A mere two months after its launch, the game was accepted into Google’s Summer of Code. The game reaches a broad audience and is available in 45 languages. CodeCombat hopes to become the standard for people who want to learn to code and have fun at the same time.

### 2015 Open Source Rookie of the Year: Storj ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_04-100564905-gallery.idge.jpg)

[Storj][6] is a peer-to-peer cloud storage network that implements end-to-end encryption, enabling users to transfer and share data without reliance on a third party. Based on bitcoin blockchain technology and peer-to-peer protocols, Storj provides secure, private, and encrypted cloud storage.

Opponents of cloud-based data storage worry about cost efficiencies and vulnerability to attack. Intended to address both concerns, Storj is a private cloud storage marketplace where space is purchased and traded via Storjcoin X (SJCX). Files uploaded to Storj are shredded, encrypted, and stored across the community. File owners are the sole individuals who possess keys to the encrypted information.

The proof of concept for this decentralized cloud storage marketplace was first presented at the Texas Bitcoin Conference Hackathon in 2014. After winning first place in the hackathon, the project founders and leaders used open forums, Reddit, bitcoin forums, and social media to grow an active community, now an essential part of the Storj decision-making process.

### 2015 Open Source Rookie of the Year: Neovim ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_05-100564906-orig.jpg)

Since its inception in 1991, Vim has been a beloved text editor adopted by millions of software developers. [Neovim][7] is the next generation.

The software development ecosystem has experienced exponential growth and innovation over the past 23 years. Neovim founder Thiago de Arruda knew that Vim was lacking in modern-day features and development speed. Although determined to preserve the signature features of Vim, the community behind Neovim seeks to improve and evolve the technology of its favorite text editor. Crowdfunding initially enabled de Arruda to focus six uninterrupted months on launching this endeavor. He credits the Neovim community for supporting the project and for inspiring him to continue contributing.

### 2015 Open Source Rookie of the Year: CockroachDB ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_06-100564907-orig.jpg)

Former Googlers are bringing a big-company data solution to open source in the form of [CockroachDB][8], a scalable, geo-replicated, transactional data store.

To maintain the terabytes of data transacted over its global online properties, Google developed Spanner. This powerful tool provides Google with scalability, survivability, and transactionality -- qualities that the team behind CockroachDB is serving up to the open source community. Like an actual cockroach, CockroachDB can survive without its head, tolerating the failure of any node. This open source project has a devoted community of experienced contributors, actively cultivated by the founders via social media, GitHub, networking, conferences, and meet-ups.

### 2015 Open Source Rookie of the Year: Kubernetes ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_07-100564908-orig.jpg)

In introducing containerized software development to the open source community, [Docker][9] has become the backbone of a strong, innovative set of tools and technologies. [Kubernetes][10], which Google introduced last June, is an open source container management tool used to accelerate development and simplify operations.

Google has been using containers for years in its internal operations. At the summer 2014 DockerCon, the Internet giant open-sourced Kubernetes, which was developed to meet the needs of the exponentially growing Docker ecosystem. Through collaborations with other organizations and projects, such as Red Hat and CoreOS, Kubernetes project managers have grown their project to be the No. 1 downloaded tool on the Docker Hub. The Kubernetes team hopes to expand the project and grow the community, so software developers can spend less time managing infrastructure and more time building the apps they want.

### 2015 Open Source Rookie of the Year: Open Bazaar ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_08-100564909-orig.jpg)

[OpenBazaar][11] is a decentralized marketplace for trading with anyone using bitcoin. The proof of concept for OpenBazaar was born at a hackathon, where its founders combined BitTorrent, bitcoin, and traditional financial server methodologies to create a censorship-resistant trading platform. The OpenBazaar team sought new members, and before long they were able to expand the OpenBazaar community immensely. The table stakes of OpenBazaar -- transparency and a common goal to revolutionize trade and commerce -- are helping founders and contributors work toward a real-world, uncontrolled, and decentralized marketplace.

### 2015 Open Source Rookie of the Year: IPFS ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_09-100564910-orig.jpg)

[IPFS (InterPlanetary File System)][12] is a global, versioned, peer-to-peer file system. It synthesizes many of the ideas behind Git, BitTorrent, and HTTP to bring a new data and data structure transport protocol to the open Web.

Open source is known for developing simple solutions to complex problems that result in many innovations, but these powerful projects represent only one slice of the open source community. IPFS belongs to a more radical group whose proof of concept seems daring, outrageous, and even unattainable -- in this case, a peer-to-peer distributed file system that seeks to connect all computing devices. This possible HTTP replacement maintains a community through multiple mediums, including the Git community and an IRC channel that has more than 100 current contributors. This “crazy” idea will be available for alpha testing in 2015.

### 2015 Open Source Rookie of the Year: cAdvisor ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_10-100564911-orig.jpg)

[cAdvisor (Container Advisor)][13] is a daemon that collects, aggregates, processes, and exports information about running containers, providing container users with an understanding of resource usage and performance characteristics. For each container, cAdvisor keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage, and network statistics. This data is exported by container and across machines.

cAdvisor can run on most Linux distros and supports many container types, including Docker. It has become the de facto monitoring agent for containers, has been integrated into many systems, and is one of the most downloaded images on the Docker Hub. The team hopes to grow cAdvisor to understand application performance more deeply and to integrate this information into clusterwide systems.

### 2015 Open Source Rookie of the Year: Terraform ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_11-100564912-orig.jpg)

[Terraform][14] provides a common configuration to launch infrastructure, from physical and virtual servers to email and DNS providers. The idea is to encompass everything from custom in-house solutions to services offered by public cloud platforms. Once launched, Terraform enables ops to change infrastructure safely and efficiently as the configuration evolves.

Working at a devops company, Terraform.io's founders identified a pain point in codifying the knowledge required to build a complete data center, from plugged-in servers to a fully networked and functional data center. Infrastructure is described using a high-level configuration syntax, which allows a blueprint of your data center to be versioned and treated as you would any other code. Sponsorship from the well-respected open source company HashiCorp helped launch the project.

### Honorable mention: Docker Fig ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_12-100564913-orig.jpg)

[Fig][15] provides fast, isolated development environments using [Docker][16]. It moves the configuration required to orchestrate Docker into a simple fig.yml file. It handles all the work of building and running containers and forwarding their ports, as well as sharing volumes and linking them.

Orchard formed Fig last year to create a new system of tools to make Docker work. It was developed as a way of setting up development environments with Docker, enabling users to define the exact environment for their apps, while also running databases and caches inside Docker. Fig solved a major pain point for developers. Docker fully supports this open source project and [recently purchased Orchard][17] to expand the reach of Fig.

### Honorable mention: Drone ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_13-100564916-orig.jpg)

[Drone][18] is a Continuous Integration platform built on Docker and [written in Go][19]. The Drone project grew out of frustration with existing available technologies and processes for setting up development environments.

Drone provides a simple approach to automated testing and continuous delivery: Simply pick a Docker image tailored to your needs, connect GitHub, and commit. Drone uses Docker containers to provision isolated testing environments, giving every project complete control over its stack without the burden of traditional server administration. The community behind Drone is 100 contributors strong and hopes to bring this project to the enterprise and to mobile app development.

### Open source rookies ###

![](http://images.techhive.com/images/article/2015/01/open_source_rookies_14-100564941-orig.jpg)

- [Open Source Rookies of the 2014 Year][20]
|
||||
- [InfoWorld's 2015 Technology of the Year Award winners][21]
|
||||
- [Bossies: The Best of Open Source Software Awards][22]
|
||||
- [15 essential open source tools for Windows admins][23]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.infoworld.com/article/2875439/open-source-software/the-top-10-rookie-open-source-projects.html
|
||||
|
||||
作者:[Black Duck Software][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.infoworld.com/author/Black-Duck-Software/
|
||||
[1]:https://www.blackducksoftware.com/open-source-rookies
|
||||
[2]:https://www.openhub.net/
|
||||
[3]:https://github.com/debops/debops
|
||||
[4]:http://www.infoworld.com/article/2612397/data-center/review--ansible-orchestration-is-a-veteran-unix-admin-s-dream.html
|
||||
[5]:https://codecombat.com/
|
||||
[6]:http://storj.io/
|
||||
[7]:http://neovim.org/
|
||||
[8]:https://github.com/cockroachdb/cockroach
|
||||
[9]:http://www.infoworld.com/resources/16373/application-virtualization/the-beginners-guide-to-docker
|
||||
[10]:http://kubernetes.io/
|
||||
[11]:https://openbazaar.org/
|
||||
[12]:http://ipfs.io/
|
||||
[13]:https://github.com/google/cadvisor
|
||||
[14]:https://www.terraform.io/
|
||||
[15]:http://www.fig.sh/
|
||||
[16]:http://www.infoworld.com/resources/16373/application-virtualization/the-beginners-guide-to-docker
|
||||
[17]:http://www.infoworld.com/article/2608546/application-virtualization/docker-acquires-orchard-in-a-sign-of-rising-ambitions.html
|
||||
[18]:https://drone.io/
|
||||
[19]:http://www.infoworld.com/article/2683845/google-go/164121-Fast-guide-to-Go-programming.html
|
||||
[20]:https://www.blackducksoftware.com/open-source-rookies
|
||||
[21]:http://www.infoworld.com/article/2871935/application-development/infoworlds-2015-technology-of-the-year-award-winners.html
|
||||
[22]:http://www.infoworld.com/article/2688104/open-source-software/article.html
|
||||
[23]:http://www.infoworld.com/article/2854954/microsoft-windows/15-essential-open-source-tools-for-windows-admins.html
|
@ -1,98 +0,0 @@
dupeGuru – Find And Remove Duplicate Files Instantly From Hard Drive
================================================================================
### Introduction ###

A full disk is one of the big troubles for us. No matter how careful we are, sometimes we might copy the same file to multiple locations, or download the same file twice unknowingly. Therefore, sooner or later we will end up with a disk-full error message, which is worse when we really need some space to store important data. If you believe your system has multiple duplicate files, then **dupeGuru** might help you.

The dupeGuru team has also developed applications called **dupeGuru Music Edition** to remove duplicate music files, and **dupeGuru Picture Edition** to remove duplicate pictures.

### 1. dupeGuru (Standard Edition) ###

For those who don't know about [dupeGuru][1], it is a free, open source, cross-platform application that can be used to find and remove the duplicate files in your system. It runs under Linux, Windows, and Mac OS X. It uses a quick fuzzy matching algorithm to find the duplicate files in minutes. Also, you can tweak dupeGuru to find exactly the kind of duplicate files you want, and to exclude certain kinds of files from deletion. It supports English, French, German, Chinese (Simplified), Czech, Italian, Armenian, Russian, Ukrainian, Brazilian, and Vietnamese.
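If you only need an exact-match pass and don't mind staying in the terminal, the same basic idea can be sketched in one pipeline (an illustration only, assuming GNU coreutils/findutils; the `~/Downloads` path is just an example):

```shell
# List files whose MD5 checksums collide, i.e. byte-identical duplicates.
# Unlike dupeGuru's fuzzy matching, this only catches exact copies.
find ~/Downloads -type f -exec md5sum {} + \
    | sort \
    | uniq -w32 --all-repeated=separate
```

`-w32` tells uniq to compare only the first 32 characters (the MD5 hash), so files with the same content are grouped together regardless of their names.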
#### Install dupeGuru On Ubuntu 14.10/14.04/13.10/13.04/12.04 ####

The dupeGuru developers have created an Ubuntu PPA to ease the installation. To install dupeGuru, enter the following commands one by one in your Terminal.

    sudo apt-add-repository ppa:hsoft/ppa
    sudo apt-get update
    sudo apt-get install dupeguru-se

#### Usage ####

Usage is very simple. Launch dupeGuru either from the Unity Dash or the Menu.

![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/dupeGuru_007.png)

Click the + button on the bottom, and add the folder you want to scan. Click the Scan button to start finding the duplicate files.

![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/dupeGuru_008.png)

If the selected folder contains any duplicate files, it will display them. As you can see in the screenshot below, I have a duplicate file in the Downloads directory.

![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/dupeGuru-Results_009.png)

Now, you can decide what to do. You can either delete the duplicate file, or rename it, or copy/move it to another location. To do that, select the duplicate files, or check the box that says "**Dupes only**" on the menu bar. If you selected the Dupes only option, only the duplicate files will be visible, so you can select and delete them easily. Click on the **Actions** drop-down box. Finally, select the action you want to perform. Here, I just want to delete the duplicate file, so I selected the option: **Send marked to Recycle bin**.

![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/Menu_010.png)

Then, click **Proceed** to delete the duplicate files.

![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/Deletion-Options_011.png)

### 2. dupeGuru Music Edition ###

[dupeGuru Music Edition][2], or dupeGuru ME in short, is just like dupeGuru. It does everything dupeGuru does, but it has more information columns (such as bitrate, duration, tags, etc.) and more scan types (filename with fields, tags, and audio content). Like dupeGuru, dupeGuru ME also runs on Linux, Windows, and Mac OS X.

It supports a variety of formats such as MP3, WMA, AAC (iTunes format), OGG, FLAC, lossless AAC, lossless WMA, etc.

#### Install dupeGuru ME On Ubuntu 14.10/14.04/13.10/13.04/12.04 ####

Now, we don't have to add any PPA, because it was already added in the previous steps. So, enter the following command in your Terminal to install it.

    sudo apt-get install dupeguru-me

#### Usage ####

Launch it either from the Unity Dash or the Menu. The usage, interface, and look of dupeGuru ME are similar to the normal dupeGuru. Add the folder you want to scan and select the action you want to perform. The duplicate music files will be deleted.

![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/dupeGuru-Music-Edition-Results_012.png)

### 3. dupeGuru Picture Edition ###

[dupeGuru Picture Edition][3], or dupeGuru PE in short, is a tool to find duplicate pictures on your computer. It is just like dupeGuru, but is specialized for duplicate picture matching. dupeGuru PE runs on Linux, Windows, and Mac OS X.

dupeGuru PE supports JPG, PNG, TIFF, GIF and BMP formats. All these formats can be compared together. The Mac OS X version of dupeGuru PE also supports PSD and RAW (CR2 and NEF) formats.

#### Install dupeGuru PE On Ubuntu 14.10/14.04/13.10/13.04/12.04 ####

As we have already added the PPA, we don't need to add it again for dupeGuru PE. Just run the following command to install it.

    sudo apt-get install dupeguru-pe

#### Usage ####

It also looks like dupeGuru and dupeGuru ME in terms of usage and interface. I wonder why the developer has created three separate versions for each category; it would be better if a single application combined all three of the above features.

Launch it, add the folder you want to scan, and select the action you want to perform. That's it. Your duplicate files will be gone.

![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/dupeGuru-Picture-Edition-Results_014.png)

If you can't remove them because of permission problems, note down the location of the files, and manually delete them either from the Terminal or your file manager.

Cheers!

--------------------------------------------------------------------------------

via: http://www.unixmen.com/dupeguru-find-remove-duplicate-files-instantly-hard-drive/

作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.unixmen.com/author/sk/
[1]:http://www.hardcoded.net/dupeguru/
[2]:http://www.hardcoded.net/dupeguru_me/
[3]:http://www.hardcoded.net/dupeguru_pe/
@ -1,3 +1,6 @@
Translating by shipsw

Auditd - Tool for Security Auditing on Linux Server
================================================================================
First of all, we wish all our readers a **Happy & Prosperous New Year 2015** from our Linoxide team. So let's start this new year by explaining the Auditd tool.
@ -200,4 +203,4 @@ via: http://linoxide.com/how-tos/auditd-tool-security-auditing/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/pungki/
[1]:http://linoxide.com/tools/wajig-package-management-debian/
@ -1,3 +1,4 @@
[bazz2222222]
How to Configure Chroot Environment in Ubuntu 14.04
================================================================================
There are many instances when you may wish to isolate certain applications, users, or environments within a Linux system. Different operating systems have different methods of achieving isolation, and in Linux, a classic way is through a `chroot` environment.
@ -143,4 +144,4 @@ via: http://linoxide.com/ubuntu-how-to/configure-chroot-environment-ubuntu-14-04
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:https://launchpad.net/ubuntu/+archivemirrors
@ -1,3 +1,4 @@
Translating by ZTinoZ
Linux FAQs with Answers--How to check CPU info on Linux
================================================================================
> **Question**: I would like to know detailed information about the CPU processor of my computer. What are the available methods to check CPU information on Linux?
@ -112,4 +113,4 @@ via: http://ask.xmodulo.com/check-cpu-info-linux.html
[1]:http://xmodulo.com/how-to-find-number-of-cpu-cores-on.html
[2]:http://en.wikipedia.org/wiki/CPUID
[3]:http://xmodulo.com/identify-cpu-processor-architecture-linux.html
[4]:http://xmodulo.com/identify-cpu-processor-architecture-linux.html
@ -1,113 +0,0 @@
Ping Translating

Linux FAQs with Answers--How to check memory usage on Linux
================================================================================
> **Question**: I would like to monitor memory usage on my Linux system. What are the available GUI-based or command-line tools for checking current memory usage of Linux?

When it comes to optimizing the performance of a Linux system, physical memory is the single most important factor. Naturally, Linux offers a wealth of options to monitor the usage of the precious memory resource. Different tools vary in terms of their monitoring granularity (e.g., system-wide, per-process, per-user), interface (e.g., GUI, command-line, ncurses) or running mode (e.g., interactive, batch mode).

Here is a non-exhaustive list of GUI or command-line tools to choose from to check used and free memory on the Linux platform.

### 1. /proc/meminfo ###

The simplest method to check RAM usage is via /proc/meminfo. This dynamically updated virtual file is actually the source of the information displayed by many other memory-related tools such as free, top and ps. From the amount of available/free physical memory to the amount of buffers waiting to be or being written back to disk, /proc/meminfo has everything you want to know about system memory usage. Process-specific memory information is also available from /proc/&lt;pid&gt;/statm and /proc/&lt;pid&gt;/status.

    $ cat /proc/meminfo

![](https://farm8.staticflickr.com/7483/15989497899_bb6afede11_b.jpg)
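Since /proc/meminfo is plain text, it is easy to pull out just the fields you care about. A small sketch (assuming awk is available and the standard kB units of /proc/meminfo; MemAvailable requires kernel 3.14 or later):

```shell
# Print the headline memory figures from /proc/meminfo, converted to MiB
awk '/^(MemTotal|MemFree|MemAvailable|Buffers|Cached):/ {
    printf "%-13s %8.0f MiB\n", $1, $2 / 1024
}' /proc/meminfo
```

The same pattern works for any other field in the file, such as SwapTotal or Dirty.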
### 2. atop ###

The atop command is an ncurses-based interactive system and process monitor for terminal environments. It shows a dynamically-updated summary of system resources (CPU, memory, network, I/O, kernel), with colorized warnings in case of high system load. It also offers a top-like view of processes (or users) along with their resource usage, so that a system admin can tell which processes or users are responsible for system load. Reported memory statistics include total/free memory, cached/buffer memory and committed virtual memory.

    $ sudo atop

![](https://farm8.staticflickr.com/7552/16149756146_893773b84c_b.jpg)

### 3. free ###

The free command is a quick and easy way to get an overview of memory usage gleaned from /proc/meminfo. It shows a snapshot of total/free physical memory and swap space of the system, as well as used/free buffer space in the kernel.

    $ free -h

![](https://farm8.staticflickr.com/7531/15988117988_ba8c6b7b63_b.jpg)

### 4. GNOME System Monitor ###

GNOME System Monitor is a GUI application that shows a short history of system resource utilization for CPU, memory, swap space and network. It also offers a process view of CPU and memory usage.

    $ gnome-system-monitor

![](https://farm8.staticflickr.com/7539/15988118078_279f0da494_c.jpg)

### 5. htop ###

The htop command is an ncurses-based interactive process viewer which shows per-process memory usage in real time. It can report resident memory size (RSS), total program size in memory, library size, shared page size, and dirty page size for all running processes. You can scroll the (sorted) list of processes horizontally or vertically.

    $ htop

![](https://farm9.staticflickr.com/8236/8599814378_de071de408_c.jpg)

### 6. KDE System Monitor ###

While the GNOME desktop has GNOME System Monitor, the KDE desktop has its own counterpart: KDE System Monitor. Its functionality is mostly similar to the GNOME version, i.e., showing a real-time history of system resource usage, as well as a process list along with per-process CPU/memory consumption.

    $ ksysguard

![](https://farm8.staticflickr.com/7479/15991397329_ec5d786ffd_c.jpg)

### 7. memstat ###

The memstat utility is useful to identify which executable(s), process(es) and shared libraries are consuming virtual memory. Given a process ID, memstat identifies how much virtual memory is used by the process' associated executable, data, and shared libraries.

    $ memstat -p &lt;PID&gt;

![](https://farm8.staticflickr.com/7518/16175635905_1880e50055_b.jpg)

### 8. nmon ###

The nmon utility is an ncurses-based system benchmark tool which can monitor CPU, memory, disk I/O, kernel, filesystem and network resources in interactive mode. As for memory usage, it can show information such as total/free memory, swap space, buffer/cached memory, and virtual memory page in/out statistics, all in real time.

    $ nmon

![](https://farm9.staticflickr.com/8648/15989760117_30f62f4aba_b.jpg)

### 9. ps ###

The ps command can show per-process memory usage in real time. Reported memory usage information includes %MEM (percent of physical memory used), VSZ (total amount of virtual memory used), and RSS (total amount of physical memory used). You can sort the process list by using the "--sort" option. For example, to sort in decreasing order of RSS:

    $ ps aux --sort -rss

![](https://farm9.staticflickr.com/8602/15989881547_ca40839c19_c.jpg)
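The same ps output can feed simple aggregations. For example, a sketch that sums resident memory per user (it assumes the BSD-style `ps aux` column layout, where field 1 is the user and field 6 is RSS in KiB):

```shell
# Total resident set size (RSS) per user, largest consumers first
ps aux | awk 'NR > 1 { rss[$1] += $6 }
    END { for (u in rss) printf "%-12s %8.1f MiB\n", u, rss[u] / 1024 }' \
    | sort -k2 -rn
```

This is a coarse accounting (RSS double-counts shared pages), but it is often enough to spot which user is eating the machine.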
### 10. smem ###

The [smem][1] command allows you to measure physical memory usage by different processes and users based on information available from /proc. It utilizes the proportional set size (PSS) metric to accurately quantify the effective memory usage of Linux processes. Memory usage analysis can be exported to graphical charts such as bar and pie graphs.

    $ sudo smem --pie name -c "pss"

![](https://farm8.staticflickr.com/7466/15614838428_eed7426cfe_c.jpg)

### 11. top ###

The top command offers a real-time view of running processes, along with various process-specific resource usage statistics. Memory-related information includes %MEM (memory utilization percentage), VIRT (total amount of virtual memory used), SWAP (amount of swapped-out virtual memory), CODE (amount of physical memory allocated for code execution), DATA (amount of physical memory allocated to non-executable data), RES (total amount of physical memory used; CODE+DATA), and SHR (amount of memory potentially shared with other processes). You can sort the process list based on memory usage or size.

![](https://farm8.staticflickr.com/7464/15989760047_eb8d51d9f2_c.jpg)

### 12. vmstat ###

The vmstat command-line utility displays instantaneous and average statistics of various system activities covering CPU, memory, interrupts, and disk I/O. As for memory information, the command shows not only physical memory usage (e.g., total/used memory and buffer/cache memory), but also virtual memory statistics (e.g., memory paged in/out, swapped in/out).

    $ vmstat -s

![](https://farm9.staticflickr.com/8582/15988236860_3f142008d2_b.jpg)

--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/check-memory-usage-linux.html

译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://xmodulo.com/visualize-memory-usage-linux.html
@ -1,46 +0,0 @@
Ping Translating

Linux FAQs with Answers--How to set a custom HTTP header in curl
================================================================================
> **Question**: I am trying to fetch a URL with the curl command, but want to set a few custom header fields in the outgoing HTTP request. How can I use a custom HTTP header with curl?

curl is a powerful command-line tool that can transfer data to and from a server over the network. It supports a number of transfer protocols, notably HTTP/HTTPS, and many others such as FTP/FTPS, RTSP, POP3/POP3S, SCP, IMAP/IMAPS, etc. When you send out an HTTP request for a URL with curl, it uses a default HTTP header with only essential header fields (e.g., User-Agent, Host, and Accept).

![](https://farm8.staticflickr.com/7568/16225032086_fb8f1c508a_b.jpg)

In some cases, however, you may want to override the default header or even add a custom header field to an HTTP request. For example, you may want to override the "Host" field to test a [load balancer][1], or spoof the "User-Agent" string to get around browser-specific access restrictions. In other cases, you may be accessing a website which requires a specific cookie, or testing a RESTful API with various custom parameters in the header.

To handle all these cases, curl provides an easy way to fully control the HTTP header of outgoing HTTP requests. The parameter you want to use is "-H" or equivalently "--header".

The "-H" option can be specified multiple times with the curl command to define more than one HTTP header field.

For example, the following command sets three HTTP header fields, i.e., overriding the "Host" field, and adding two fields ("Accept-Language" and "Cookie").

    $ curl -H 'Host: 157.166.226.25' -H 'Accept-Language: es' -H 'Cookie: ID=1234' http://cnn.com

![](https://farm8.staticflickr.com/7520/16250111432_de39638ec0_c.jpg)

For standard HTTP header fields such as "User-Agent", "Cookie", and "Host", there is actually another way to set them. The curl command offers dedicated options for setting these header fields:

- **-A (or --user-agent)**: set the "User-Agent" field.
- **-b (or --cookie)**: set the "Cookie" field.
- **-e (or --referer)**: set the "Referer" field.

For example, the following two commands are equivalent. Both of them change the "User-Agent" string in the HTTP header.

    $ curl -H "User-Agent: my browser" http://cnn.com
    $ curl -A "my browser" http://cnn.com
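To convince yourself that the custom fields really go out on the wire, you can point curl at a throwaway local server that echoes back whatever headers it receives. This is only a test sketch — the port (8031) and the X-Api-Key header name are arbitrary choices, and it assumes python3 is installed:

```shell
# Start a tiny throwaway HTTP server on localhost that replies with
# the request headers it received
python3 - <<'EOF' &
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHeaders(BaseHTTPRequestHandler):
    def do_GET(self):
        body = str(self.headers).encode()
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

HTTPServer(('127.0.0.1', 8031), EchoHeaders).serve_forever()
EOF
server=$!
sleep 1

# Both the -H field and the -A shortcut show up in what the server saw
response=$(curl -s -H 'X-Api-Key: 1234' -A 'my browser' http://127.0.0.1:8031/)
printf '%s\n' "$response"

kill "$server"
```

The echoed output should include both `X-Api-Key: 1234` and `User-Agent: my browser`.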
wget is another command-line tool which you can use to fetch a URL, similar to curl, and wget also allows you to use a custom HTTP header. Check out [this post][2] for details on the wget command.

--------------------------------------------------------------------------------

via: http://ask.xmodulo.com/custom-http-header-curl.html

译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[1]:http://xmodulo.com/haproxy-http-load-balancer-linux.html
[2]:http://xmodulo.com/how-to-use-custom-http-headers-with-wget.html
@ -1,78 +0,0 @@
How to Boot Linux ISO Images Directly From Your Hard Drive
================================================================================
Hi all, today we'll teach you an awesome, interesting trick related to operating system disk images and booting. You can try many OSes you like without installing them on your physical hard drive and without burning DVDs or USB drives.

We can boot Linux ISO files directly from the hard drive with Linux's GRUB2 boot loader. We can boot any Linux distribution using this method without creating bootable USBs, burning DVDs, etc., but the changes made will be temporary.

![boot iso files directly from hard drive in grub2](http://blog.linoxide.com/wp-content/uploads/2015/01/boot-iso-files-directly-from-hard-drive-in-grub2.png)

### 1. Get the ISO of the Linux Distributions: ###

Here, we're going to create menu entries for Ubuntu 14.04 LTS "Trusty" and Linux Mint 17.1 LTS "Rebecca", so we downloaded them from their official sites:

Ubuntu from: [http://ubuntu.com/][1] And Linux Mint from: [http://linuxmint.com/][2]

You can download the ISO files of the required Linux distributions from their respective websites. If a mirror of the ISO files is hosted near your area or country, using it is recommended when you don't have sufficient Internet download speed.

### 2. Determine the Hard Drive Partition's Path ###

GRUB uses a different "device name" scheme than Linux does. On a Linux system, /dev/sda1 is the first partition on the first hard disk — **a** means the first hard disk and **1** means its first partition. In GRUB, (hd0,1) is equivalent to /dev/sda1. The **0** means the first hard disk, while **1** means the first partition on it. In other words, in a GRUB device name, the disk numbers start counting at 0 and the partition numbers start counting at 1. For example, (hd3,6) refers to the sixth partition on the fourth hard disk.

You can use the **fdisk -l** command to view this information. On Ubuntu, open a Terminal and run the following command:

    $ sudo fdisk -l

![fdisk-l view the list of the hard disk and its partitions](http://blog.linoxide.com/wp-content/uploads/2015/01/fdisk-l.png)

You'll see a list of Linux device paths, which you can convert to GRUB device names on your own. For example, below we can see the system partition is /dev/sda1 — so that's (hd0,1) for GRUB.
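The naming rule above is mechanical, so the conversion can be sketched as a tiny shell function. This is a toy helper for illustration only — it assumes simple MBR-style /dev/sdXN names and does not handle GPT partition labels or NVMe device names:

```shell
# Convert /dev/sdXN to GRUB's (hdM,N): the drive letter becomes a 0-based
# disk index, and the partition number is carried over as-is
to_grub() {
    local dev=${1#/dev/sd}        # e.g. "a1"
    local disk=${dev%%[0-9]*}     # drive letter, e.g. "a"
    local part=${dev#"$disk"}     # partition number, e.g. "1"
    printf '(hd%d,%d)\n' "$(( $(printf '%d' "'$disk") - 97 ))" "$part"
}

to_grub /dev/sda1   # -> (hd0,1)
to_grub /dev/sdd6   # -> (hd3,6)
```

The `printf '%d' "'$disk"` trick converts the drive letter to its ASCII code (a = 97), so subtracting 97 yields the 0-based disk number GRUB expects.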
### 3. Adding boot menu to Grub2 ###

The easiest way to add a custom boot entry is to edit the /etc/grub.d/40_custom script. This file is designed for user-added custom boot entries. After editing the file, the contents of your /etc/default/grub file and the /etc/grub.d/ scripts will be combined to create the /boot/grub/grub.cfg file. You shouldn't edit that file by hand; it's designed to be automatically generated from settings you specify in other files.

So we'll need to open the /etc/grub.d/40_custom file for editing with root privileges. On Ubuntu, you can do this by opening a Terminal window and running the following command:

    $ sudo nano /etc/grub.d/40_custom

Unless we've added other custom boot entries, we should see a mostly empty file. We'll need to add one or more ISO-booting sections to the file below the commented lines.

    menuentry "Ubuntu 14.04 ISO" {
        set isofile="/home/linoxide/Downloads/ubuntu-14.04.1-desktop-amd64.iso"
        loopback loop (hd0,1)$isofile
        linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash
        initrd (loop)/casper/initrd.lz
    }
    menuentry "Linux Mint 17.1 Cinnamon ISO" {
        set isofile="/home/linoxide/Downloads/mint-17.1-desktop-amd64.iso"
        loopback loop (hd0,1)$isofile
        linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=${isofile} quiet splash
        initrd (loop)/casper/initrd.lz
    }

![Grub2 Menu configuration for adding the ISOs](http://blog.linoxide.com/wp-content/uploads/2015/01/grub-added-iso.png)

**Important Note**: Different Linux distributions require different boot entries with different boot options. The GRUB Live ISO Multiboot project offers a variety of [menu entries for different Linux distributions][3]. You should be able to adapt these example menu entries for the ISO file you want to boot. You can also just perform a web search for the name and release number of the Linux distribution you want to boot along with "boot from ISO in GRUB" to find more information.

### 4. Updating Grub2 ###

To make the custom menu entries active, we'll regenerate the GRUB configuration:

    sudo update-grub

Hurray, we have successfully added our brand-new Linux distributions' ISOs to our GRUB menu. Now we'll be able to boot them and enjoy trying them out. You can add many distributions and try them all. Note that the changes made in those OSes won't be preserved, which means you'll lose the changes made in those distros after a restart.

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/boot-linux-iso-images-directly-hard-drive/

作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:http://ubuntu.com/
[2]:http://linuxmint.com/
[3]:http://git.marmotte.net/git/glim/tree/grub2
@ -1,3 +1,5 @@
Translating by Medusar

How to make a file immutable on Linux
================================================================================
Suppose you want to write-protect some important files on Linux, so that they cannot be deleted or tampered with by accident or otherwise. In other cases, you may want to prevent certain configuration files from being overwritten automatically by software. While changing their ownership or permission bits using chown or chmod is one way to deal with this situation, it is not a perfect solution as it cannot prevent any action done with root privilege. That is where chattr comes in handy.
@ -69,4 +71,4 @@ via: http://xmodulo.com/make-file-immutable-linux.html

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/nanni
@ -1,3 +1,4 @@
zpl1025
A Shell Primer: Master Your Linux, OS X, Unix Shell Environment
================================================================================
On Linux or Unix-like systems each user and process runs in a specific environment. An environment includes variables, settings, aliases, functions and more. Following is a very brief introduction to some useful shell environment commands, including examples of how to use each command and set up your own environment to increase productivity at the command prompt.
@ -689,4 +690,4 @@ via: http://www.cyberciti.biz/howto/shell-primer-configuring-your-linux-unix-osx
[12]:http://www.cyberciti.biz/faq/fedora-redhat-scientific-linuxenable-bash-completion/
[13]:http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html
[14]:http://bash.cyberciti.biz/guide/Setting_shell_options
[15]:http://www.cyberciti.biz/write-for-nixcraft/
@ -1,163 +0,0 @@
Cleaning up Ubuntu 14.10, 14.04, 13.10 system
================================================================================
We have already discussed [Cleaning up a Ubuntu GNU/Linux system][1]; this tutorial is updated for new Ubuntu versions, with more tools added.

If you want to clean your Ubuntu machine, follow these simple steps to remove all unnecessary junk files.

### Remove partial packages ###

This is yet another built-in feature, but this time it is not used from the Synaptic Package Manager; it is used in the Terminal. In the Terminal, key in the following command:

    sudo apt-get autoclean

Then run the package clean command. What this command does is remove the .deb packages that apt caches when you install or update programs. To use the clean command, type the following in a terminal window:

    sudo apt-get clean

You can then use the autoremove command. What the autoremove command does is remove packages that were installed as dependencies and are no longer needed after the original package is removed from the system. To use autoremove, type the following in a terminal window:

    sudo apt-get autoremove

### Remove unnecessary locale data ###

For this we need to install localepurge, which automatically removes unnecessary locale data. It is a simple script to recover disk space wasted on unneeded locale files and localized man pages. It will automatically be invoked upon completion of any apt installation run.

Install localepurge in Ubuntu:

    sudo apt-get install localepurge

After installing anything with apt-get install, localepurge will remove all translation files and translated man pages in languages you cannot read.

If you want to configure localepurge, you need to edit /etc/locale.nopurge.

This can save you several megabytes of disk space, depending on the packages you have installed.

Example:

I am trying to install discus using apt-get:

    sudo apt-get install discus

At the end of this installation you will see something like below:

    localepurge: Disk space freed in /usr/share/locale: 41860K
### Remove "orphaned" packages ###

If you want to remove orphaned packages, you need to install the deborphan package.

Install deborphan in Ubuntu:

    sudo apt-get install deborphan

### Using deborphan ###

Open your terminal and enter the following command:

    sudo deborphan | xargs sudo apt-get -y remove --purge
### Remove "orphaned" packages Using GtkOrphan ###

GtkOrphan (a Perl/Gtk2 application for Debian systems) is a graphical tool which analyzes the status of your installations, looking for orphaned libraries. It implements a GUI front-end for deborphan, adding the package-removal capability.

### Install GtkOrphan in Ubuntu ###

Open the terminal and run the following command:

    sudo apt-get install gtkorphan

#### Screenshot ####

![](http://www.ubuntugeek.com/wp-content/uploads/2015/01/41.png)
### Remove Orphan packages using Wajig ###

Wajig is a simplified Debian package management front end. It is a single command-line wrapper around apt, apt-cache, dpkg, /etc/init.d scripts and more, intended to be easy to use and providing extensive documentation for all of its functions.

With a suitable sudo configuration, most (if not all) package installation as well as creation tasks can be done from a user shell. Wajig is also suitable for general system administration. A GNOME GUI command 'gjig' is also included in the package.

### Install Wajig in Ubuntu ###

Open the terminal and run the following command:

    sudo apt-get install wajig
### Debfoster --- Keep track of what you did install ###

debfoster maintains a list of installed packages that were explicitly requested rather than installed as a dependency. Arguments are entirely optional; debfoster can be invoked per se after each run of dpkg and/or apt-get.

Alternatively you can use debfoster to install and remove packages by specifying the packages on the command line. Packages suffixed with a minus sign are removed while packages without a suffix are installed.

If a new package is encountered, or if debfoster notices that a package that used to be a dependency is now an orphan, it will ask you what to do with it. If you decide to keep it, debfoster will just take note and continue. If you decide that this package is not interesting enough, it will be removed as soon as debfoster is done asking questions. If your choices cause other packages to become orphaned, more questions will ensue.

### Install debfoster in Ubuntu ###

Open the terminal and run the following command:

    sudo apt-get install debfoster

### Using debfoster ###

To create the initial keepers file, use the following command:

    sudo debfoster -q

You can always edit the file /var/lib/debfoster/keepers, which defines the packages you want to remain on your system.

To edit the keepers file, type:

    sudo vi /var/lib/debfoster/keepers

To force debfoster to remove all packages that aren't listed in this list (or dependencies of packages that are listed in this list), and to install all packages in this list that aren't installed — making your system comply with this list — do this:

    sudo debfoster -f

To keep track of what you have additionally installed, run once in a while:

    sudo debfoster
### xdiskusage -- Check where the space on your hard drive goes ###

xdiskusage is a user-friendly program that displays a graphic of your disk usage with du, to show you what is using up all your disk space. It is based on the design of the "xdu" program written by Phillip C. Dykstra. Changes have been made so that it runs "du" for you, can display the free space left on the disk, and can produce a PostScript version of the display. xdiskusage is nice if you want to easily see where the space on your hard drive goes.

### Install xdiskusage in Ubuntu ###

    sudo apt-get install xdiskusage

If you want to open this application, you need to use the following command:

    sudo xdiskusage

Once it opens you should see something similar to the following screen:

![](http://www.ubuntugeek.com/wp-content/uploads/2015/01/5.png)
### Bleachbit ###

BleachBit quickly frees disk space and tirelessly guards your privacy. Free cache, delete cookies, clear Internet history, shred temporary files, delete logs, and discard junk you didn't know was there. Designed for Linux and Windows systems, it wipes clean a thousand applications including Firefox, Internet Explorer, Adobe Flash, Google Chrome, Opera, Safari, and more. Beyond simply deleting files, BleachBit includes advanced features such as shredding files to prevent recovery, wiping free disk space to hide traces of files deleted by other applications, and vacuuming Firefox to make it faster. Better than free, BleachBit is open source.

### Install Bleachbit in Ubuntu ###

Open the terminal and run the following command:

    sudo apt-get install bleachbit

![](http://www.ubuntugeek.com/wp-content/uploads/2015/01/6.png)
### Using Ubuntu-Tweak ###

You can also use [Ubuntu-Tweak][2] to clean up your system.

--------------------------------------------------------------------------------

via: http://www.ubuntugeek.com/cleaning-up-a-ubuntu-gnulinux-system-updated-with-ubuntu-14-10-and-more-tools-added.html

作者:[ruchi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.ubuntugeek.com/author/ubuntufix
[1]:http://www.ubuntugeek.com/cleaning-up-all-unnecessary-junk-files-in-ubuntu.html
[2]:http://www.ubuntugeek.com/install-ubuntu-tweak-on-ubuntu-14-10.html
@ -1,3 +1,5 @@
Ping -- Translating

iptraf: A TCP/UDP Network Monitoring Utility
================================================================================
[iptraf][1] is an ncurses-based IP LAN monitor that generates various network statistics, including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others.

@ -61,4 +63,4 @@ via: http://www.unixmen.com/iptraf-tcpudp-network-monitoring-utility/

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.unixmen.com/author/seth/
[1]:http://iptraf.seul.org/about.html
@ -0,0 +1,422 @@
25 Useful Apache '.htaccess' Tricks to Secure and Customize Websites
================================================================================
Websites are an important part of our lives. They serve as a means to expand businesses, share knowledge and much more. Earlier restricted to serving only static content, with the introduction of dynamic client- and server-side scripting languages and the continued advancement of static languages like HTML to HTML5, websites can now be made as dynamic as desired, and whatever is still missing can be expected to follow in the near future.

With websites comes the need for a unit that can display these websites to a huge audience all over the globe. This need is fulfilled by servers that provide the means to host a website: server software such as the Apache HTTP Server, together with applications like Joomla and WordPress, allows one to host a website.

![Apache htaccess Tricks](http://www.tecmint.com/wp-content/uploads/2015/01/htaccess-tricks.jpg)

25 htaccess Tricks

One who wants to host a website can create a local server of his own, or can contact a server administrator to host his website. But the actual issues start from this point. The performance of a website depends mainly on the following factors:

- Bandwidth consumed by the website.
- How secure the website is against hackers.
- Optimization of data searches through the database.
- User-friendliness when it comes to displaying navigation menus and providing more UI features.

Alongside this, various factors that govern the success of servers in hosting websites are:

- The amount of data compression achieved for a particular website.
- The ability to simultaneously serve multiple clients asking for the same or different websites.
- Securing confidential data entered on the websites, such as emails, credit card details and so on.
- Allowing more and more options to enhance the dynamicity of a website.

This article deals with one such feature provided by servers that helps enhance the performance of websites and secure them from bad bots, hotlinks etc.: the '.htaccess' file.
### What is .htaccess? ###

htaccess (or hypertext access) files provide a way for website owners to control the server environment variables and other parameters to enhance the functionality of their websites. These files can reside in any directory in the directory tree of the website and provide features for that directory and the files and folders inside it.

What are these features? Well, these are server directives, i.e. lines that instruct the server to perform a specific task, and these directives apply only to the files and folders inside the folder in which the file is placed. These files are hidden by default, as all operating systems and web servers are configured to ignore them, but making hidden files visible will let you see this very special file. What type of parameters can be controlled is the topic of discussion of the subsequent sections.

Note: If a .htaccess file is placed in the /apache/home/www/Gunjit/ directory, then it will provide directives for all the files and folders in that directory; but if this directory contains another folder, say /Gunjit/images/, which has a .htaccess file of its own, then the directives in that folder will override those provided by the master .htaccess file (i.e. the file in the folder higher up in the hierarchy).
### Apache Server and .htaccess files ###

The Apache HTTP Server, colloquially called Apache, was named after the Native American tribe Apache, in respect of its superior skills in warfare strategy. Built on C/C++ and XML, it is a cross-platform web server which is based on the NCSA HTTPd server and has had a key role in the growth and advancement of the World Wide Web.

Most commonly used on UNIX, Apache is available for a wide variety of platforms including FreeBSD, Linux, Windows, Mac OS, Novell NetWare etc. In 2009, Apache became the first server to serve more than 100 million websites.

The Apache server can have one .htaccess file per directory under the www/ directory. Although these files are hidden, they can be made visible if required. In the www/ directory there are a number of folders, each pertaining to a website, named after the user's or owner's name. Apart from this you can have one .htaccess file in each folder, which configures the files in that folder as stated above.

How to configure a htaccess file on the Apache server is as follows...
### Configuration on Apache Server ###

There can be two cases:

#### Hosting a website on your own server ####

In this case, if .htaccess files are not enabled, you can enable them by going to httpd.conf (the default configuration file for the Apache HTTP daemon) and finding the <Directory> section:

    <Directory "/var/www/htdocs">

Locate the line that says:

    AllowOverride None

and correct it to:

    AllowOverride All

Now, on restarting Apache, .htaccess will work.

#### Hosting a website on a hosting provider's server ####

In this case it is better to consult the hosting admins and ask whether they allow access to .htaccess files.
### 25 '.htaccess' Tricks of Apache Web Server for Websites ###

#### 1. How to enable mod_rewrite in .htaccess file ####

The mod_rewrite option allows you to use redirections and to hide your true URL by redirecting to some other URL. This option can prove very useful, allowing you to replace lengthy URLs with short, easy-to-remember ones.

To allow mod_rewrite, make it a practice to add the following line as the first line of your .htaccess file:

    Options +FollowSymLinks

This option allows you to follow symbolic links and thus enables the mod_rewrite option on the website. Replacing a URL with a short and crisp one is presented later on.
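As a sketch of what a minimal rewrite setup might look like once symbolic links are allowed (the rule and target path here are made-up examples, not part of the original article):

```apache
Options +FollowSymLinks
RewriteEngine On
# Serve the short URL /about via the longer about-us.php script
RewriteRule ^about$ /about-us.php [L]
```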

#### 2. How to Allow or Deny Access to Websites ####

The htaccess file can allow or deny access to the website, or to a folder or the files in the directory in which it is placed, by using the Order, Allow and Deny directives.

**Allowing access to only the 192.168.3.1 IP**

    Order Deny,Allow
    Deny from All
    Allow from 192.168.3.1

OR

    Order Allow,Deny
    Allow from 192.168.3.1

The Order directive here specifies the order in which the Allow and Deny directives are processed, and which of the two acts as the default for requests matching neither.

**Denying access to only one IP Address**

The lines below provide the means to allow access to the website for all users except the one with IP address 192.168.3.1:

    Order Allow,Deny
    Deny from 192.168.3.1
    Allow from All

OR

    Order Deny,Allow
    Deny from 192.168.3.1
#### 3. Generate Apache Error documents for different error codes. ####

With some simple lines, we can set the error document that is shown for different error codes generated by the server when the user/client requests a page not available on the website, as most of us have seen with the '404 Page not found' page in the web browser. '.htaccess' files can specify what action to take in case of such error conditions.

To do this, the following line needs to be added to the '.htaccess' file:

    ErrorDocument <error-code> <path-of-document/string-representing-html-file-content>

'ErrorDocument' is a keyword; error-code can be 401, 403, 404, 500 or any other valid error code; and lastly, 'path-of-document' represents the path on the local machine (in case you are using your own local server) or on the server (in case you are using someone else's server to host your website).

**Example:**

    ErrorDocument 404 /error-docs/error-404.html

The above line sets the document 'error-404.html', placed in the error-docs folder, to be displayed in case a 404 error is reported by the server for any invalid request for a page by the client.

    ErrorDocument 404 "<html><head><title>404 Page not found</title></head><body><p>The page you request is not present. Check the URL you have typed</p></body></html>"

The above representation is also correct; it places a string representing a usual html file.
#### 4. Setting/Unsetting Apache server environment variables ####

In a .htaccess file you can set or unset the global environment variables that the server allows to be modified by the hosters of the website. For setting or unsetting the environment variables you need to add the following lines to your .htaccess file.

**Setting the environment variables**

    SetEnv OWNER "Gunjit Khera"

**Unsetting the environment variables**

    UnsetEnv OWNER
#### 5. Defining different MIME types for files ####

MIME (Multipurpose Internet Mail Extensions) types are the file types that are recognized by the browser by default when loading any web page. You can define MIME types for your website in .htaccess files, so that different types of files, as defined by you, can be recognized and served by the server.

    <IfModule mod_mime.c>
        AddType application/javascript js
        AddType application/x-font-ttf ttf ttc
    </IfModule>

Here, mod_mime.c is the module for controlling definitions of different MIME types; if you have this module installed on your system, then you can use it to define different MIME types for the different extensions used in your website so that the server can understand them.
#### 6. How to Limit the size of Uploads and Downloads in Apache ####

.htaccess files allow you to control the amount of data being uploaded or downloaded by a particular client from your website. For this you just need to append the following lines to your .htaccess file:

    php_value upload_max_filesize 20M
    php_value post_max_size 20M
    php_value max_execution_time 200
    php_value max_input_time 200

The above lines set, respectively, the maximum upload file size, the maximum size of data that can be posted, the maximum execution time (i.e. the maximum time a script is allowed to run on the server), and the maximum time allowed for parsing input data.
#### 7. Making Users download .mp3 and other files before playing on your website. ####

Mostly, people play songs on websites before downloading them to check the quality etc. Being a smart seller, you can add a feature that can come in very handy for you: not letting any user play songs or videos online, so that users have to download them in order to play them. This is very useful, as online playing of songs and videos consumes a lot of bandwidth.

The following line needs to be added to your .htaccess file:

    AddType application/octet-stream .mp3 .zip
#### 8. Setting Directory Index for Website ####

Most website developers already know that the first page that is displayed, i.e. the home page of a website, is named 'index.html'. Many of us have seen this as well. But how is this set?

A .htaccess file provides a way to list a set of pages which will be scanned in order when a client requests to visit the home page of the website; accordingly, the first of the listed pages that is found will be set as the home page of the website and displayed to the user.

The following line needs to be added to produce the desired effect:

    DirectoryIndex index.html index.php yourpage.php

The above line specifies that if any request for visiting the home page comes from a visitor, then the above listed pages will be searched for in order: firstly index.html, which if found will be displayed as the site's home page; otherwise the list will proceed to the next page, i.e. index.php, and so on, until the last page you have entered in the list.
#### 9. How to enable GZip compression for Files to save site's bandwidth. ####

It is a common observation that heavy sites generally run a bit slower than lightweight sites that take less space. This is because, for a heavy site, it takes time to load the huge script files and images before displaying them in the client's web browser.

The common mechanism is that when a browser requests a web page, the server provides the browser with that page; to display the page locally, the browser then has to download it and run the scripts inside it.

What GZip compression does here is save the time required to serve a single customer, thus saving bandwidth. The source files of the website are kept on the server in compressed form, and when a request comes from a user, these files are transferred in compressed form; they are then uncompressed and executed on the client. This eases the bandwidth constraint.

The following lines allow you to compress the source files of your website, but this requires the mod_deflate module to be installed on your server:

    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/plain
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/xml
        AddOutputFilterByType DEFLATE application/xhtml+xml
        AddOutputFilterByType DEFLATE application/javascript
        AddOutputFilterByType DEFLATE application/x-javascript
    </IfModule>
#### 10. Playing with the File types. ####

There are certain conditions that the server assumes by default. For example, .php files are run on the server, while .txt files are simply meant to be displayed. Likewise we can make certain executable cgi-scripts or files be simply displayed as source code on our website instead of being executed.

To do this, observe the following lines from a .htaccess file:

    RemoveHandler cgi-script .php .pl .py
    AddType text/plain .php .pl .py

These lines tell the server that .pl (Perl script), .php (PHP file) and .py (Python file) are meant to just be displayed and not executed as cgi-scripts.
#### 11. Setting the Time Zone for Apache server ####

The power and importance of .htaccess files can be seen from the fact that they can be used to set the time zone of the server accordingly. This is done by setting 'TZ', one of the global environment variables that the server allows each hosted website to modify.

It is for this reason that we can see the time on websites (that display it) according to our time zone. Maybe some other person hosting his website on the server has the timezone set according to the location where he lives.

The following line sets the time zone of the server:

    SetEnv TZ Asia/Kolkata
#### 12. How to enable Cache Control on Website ####

A very interesting feature of browsers, which most have observed, is that on opening a website more than once, the later visits load faster than the first. But how is this possible? Well, in this case, the browser stores frequently visited pages in its cache for faster access later on.

But for how long? Well, the answer depends on you, i.e. on the time you set in your .htaccess file for cache control. The .htaccess file can specify the amount of time for which the pages of the website may stay in the browser's cache; after that time expires, the cache must revalidate, i.e. the pages are deleted from the cache and re-fetched the next time the user visits the site.

The following lines implement cache control for your website:

    <FilesMatch "\.(ico|png|jpeg|svg|ttf)$">
        Header Set Cache-Control "max-age=3600, public"
    </FilesMatch>
    <FilesMatch "\.(js|css)$">
        Header Set Cache-Control "public"
        Header Set Expires "Sat, 24 Jan 2015 16:00:00 GMT"
    </FilesMatch>

The above lines allow caching of the pages which are inside the directory in which the .htaccess file is placed for 1 hour.
#### 13. Configuring a single file, the <files> option. ####

Usually the contents of a .htaccess file apply to all the files and folders inside the directory in which the file is placed, but you can also give special permissions to a particular file, such as denying access to that file only.

For this you need to add a <Files> tag to your file in a way like this:

    <Files conf.html>
        Order allow,deny
        Deny from 188.100.100.0
    </Files>

This is a simple case of denying access to the file 'conf.html' from the IP 188.100.100.0, but you can add any feature described for the .htaccess file so far, including the features yet to be described, such as cache control and GZip compression.

This feature is used by most servers to secure .htaccess files, which is the reason why we are not able to see .htaccess files in our browsers. How the files are authenticated is demonstrated under a subsequent heading.
#### 14. Enabling CGI scripts to run outside of cgi-bin folder. ####

Usually servers run CGI scripts that are located inside the cgi-bin folder, but you can enable running of CGI scripts located in a folder of your choice by adding the following lines to the .htaccess file located in that folder (creating the file if it does not exist):

    AddHandler cgi-script .cgi
    Options +ExecCGI
#### 15. How to enable SSI on Website with .htaccess ####

Server-side includes, as the name suggests, are related to something included at the server side. But what? Generally, when we have many pages in our website and we have a navigation menu on our home page that displays links to the other pages, we can enable the SSI (Server Side Includes) option, which allows all of the pages displayed in the navigation menu to be fully included with the home page.

SSI allows inclusion of multiple pages as if the content they contain were part of a single page, so that any editing that needs to be done is done in one file only, which saves a lot of disk space. This option is enabled by default on servers, but only for .shtml files.

In case you want to enable it for .html files you need to add the following line:

    AddHandler server-parsed .html

After this, the following in an html file leads to SSI:

    <!--#include virtual="gk/document.html" -->
#### 16. How to Prevent website Directory Listing ####

To prevent any client from being able to list the directories of the website on the server, add the following line to the .htaccess file inside the directory you don't want to get listed:

    Options -Indexes
#### 17. Changing Default charset and language headers. ####

.htaccess files allow you to modify the character set used for your website, e.g. ASCII or UTF-8, along with the default language used for the display of content.

The following directives allow you to achieve the above:

    AddDefaultCharset UTF-8
    DefaultLanguage en-US
**Re-writing URLs: Redirection Rules**

The re-writing feature simply means replacing long, hard-to-remember URLs with short, easy-to-remember ones. But before going into this topic, here are the rules and conventions for the special symbols used later on in this article.

**Special Symbols:**

    Symbol        Meaning
    ^         -   Start of the string
    $         -   End of the string
    |         -   Or; [a|b] matches a or b
    [a-z]     -   Any of the letters between a and z
    +         -   One or more occurrences of the previous character
    *         -   Zero or more occurrences of the previous character
    ?         -   Zero or one occurrence of the previous character

**Constants and their meaning:**

    Constant        Meaning
    NC          -   No-case, i.e. case-insensitive matching
    L           -   Last rule – stop processing further rules
    R           -   Temporary redirect to new URL
    R=301       -   Permanent redirect to new URL
    F           -   Forbidden, send 403 header to the user
    P           -   Proxy – grab remote content in substitution section and return it
    G           -   Gone, no longer exists
    S=x         -   Skip next x rules
    T=mime-type -   Force specified MIME type
    E=var:value -   Set environment variable var to value
    H=handler   -   Set handler
    PT          -   Pass through – in case of URLs with additional headers
    QSA         -   Append query string from requested to substituted URL
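To see the symbols and flags above working together, here is an illustrative rule (the domain and paths are made-up examples):

```apache
RewriteEngine On
# ^ and $ anchor the pattern; [0-9]+ matches one or more digits;
# NC makes the match case-insensitive and L stops further rule processing.
RewriteRule ^article([0-9]+)$ /index.php?id=$1 [NC,L]
```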

#### 18. Redirecting a non-www URL to a www URL. ####

Before starting with the explanation, let's first see the lines that need to be added to the .htaccess file to enable this feature:

    RewriteEngine ON
    RewriteCond %{HTTP_HOST} ^abc\.net$
    RewriteRule (.*) http://www.abc.net/$1 [R=301,L]

The above lines enable the rewrite engine and then, in the second line, check all those URLs that pertain to the host abc.net, i.e. that have the HTTP_HOST environment variable set to "abc.net".

For all such URLs, the code permanently redirects them (as the R=301 flag is enabled) to the new URL http://www.abc.net/$1, where $1 is the path requested on the non-www host abc.net. The requested path is the part matched in brackets and is referred to by $1.
#### 19. Redirecting entire website to https. ####
|
||||
|
||||
Following lines will help you transfer entire website to https:
|
||||
|
||||
RewriteEngine ON
|
||||
RewriteCond %{HTTPS} !on
|
||||
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
|
||||
|
||||
The above lines enable the rewrite engine and then check the value of the HTTPS variable. If it is not on, i.e. the request came in over plain HTTP, the rule rewrites the request to the https:// version of the same host and URI.
|
||||
|
||||
#### 20. A custom redirection example ####
|
||||
|
||||
For example, redirect the URL 'http://www.abc.net/?p=100&q=20' to 'http://www.abc.net/10020pq'.
|
||||
|
||||
RewriteEngine ON
|
||||
RewriteCond %{QUERY_STRING} ^p=([0-9]+)&q=([0-9]+)$
RewriteRule ^$ /%1%2pq? [R,L]
|
||||
|
||||
Here %1 and %2 refer to the first and second bracketed groups captured by the RewriteCond. RewriteRule itself only matches the URL path and never the scheme, host, or query string, so the query string has to be examined in a RewriteCond; the trailing ? drops the original query string from the redirect target.
|
||||
|
||||
#### 21. Renaming the htaccess file ####
|
||||
|
||||
For preventing the .htaccess file from the intruders and other people from viewing those files you can rename that file so that it is not accessed by client’s browser. The line that does this is:
|
||||
|
||||
AccessFileName htac.cess
|
||||
|
||||
#### 22. How to Prevent Image Hotlinking for your Website ####
|
||||
|
||||
Another problem that is a major factor in large bandwidth consumption is hotlinking: other websites linking directly to images hosted on your site, so that your server and your bandwidth are used to display images on their pages. This problem is also called 'bandwidth theft'.
|
||||
|
||||
A common observation is that a site displays an image hosted on some other site; due to this hotlinking, the other site's pages are rendered at the expense of your site's bandwidth. To prevent this for image types such as .gif, .jpeg, and .png, the following lines of code help:
|
||||
|
||||
RewriteEngine ON
|
||||
RewriteCond %{HTTP_REFERER} !^$
|
||||
RewriteCond %{HTTP_REFERER} !^http://(www\.)?mydomain\.com/.*$ [NC]
|
||||
RewriteRule \.(gif|jpeg|png)$ - [F]
|
||||
|
||||
The above lines check that HTTP_REFERER is neither blank nor any link within your own website. If a request comes from anywhere else, the matching image requests are answered with a 403 Forbidden instead of the image.
|
||||
|
||||
#### 23. How to Redirect Users to Maintenance Page. ####
|
||||
|
||||
If your website is down for maintenance and you want to notify all clients trying to access it, you can add the following lines to your .htaccess file. They allow access only to the admin area and to static assets (.jpg, .css, .gif, .js, etc.), and rewrite every other request to a maintenance page.
|
||||
|
||||
RewriteCond %{REQUEST_URI} !^/admin/ [NC]
|
||||
RewriteCond %{REQUEST_URI} !^((.*).css|(.*).js|(.*).png|(.*).jpg) [NC]
|
||||
RewriteRule ^(.*)$ /ErrorDocs/Maintainence_Page.html [NC,L,QSA]
|
||||
|
||||
These lines check whether the requested URL is an admin page (one starting with '/admin/') or a request for a '.png, .jpg, .js, .css' asset; every other request is rewritten to 'ErrorDocs/Maintainence_Page.html'.
|
||||
|
||||
#### 24. Mapping IP Address to Domain Name ####
|
||||
|
||||
Name servers resolve domain names to IP addresses. The reverse mapping, redirecting a request that arrives addressed to a bare IP to the corresponding domain name, can also be specified in .htaccess files in the following manner.
|
||||
|
||||
To map the address L.M.N.O to the domain name www.hellovisit.com:
|
||||
RewriteCond %{HTTP_HOST} ^L\.M\.N\.O$ [NC]
|
||||
RewriteRule ^(.*)$ http://www.hellovisit.com/$1 [L,R=301]
|
||||
|
||||
The above lines check whether the host of the request is the IP address L.M.N.O; if so, the second line permanently redirects (R=301) the request to the corresponding page on http://www.hellovisit.com.
|
||||
|
||||
#### 25. FilesMatch Tag ####
|
||||
|
||||
Like the <Files> tag, which is used to apply conditions to a single file, <FilesMatch> can be used to match a group of files via a regular expression and apply conditions to that group, as below:
|
||||
|
||||
<FilesMatch "\.(png|jpg)$">
|
||||
Order Allow,Deny
|
||||
Deny from All
|
||||
</FilesMatch>
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
The list of tricks that can be done with .htaccess files goes well beyond this article. It gives us an idea of how powerful this file is, and how much security, dynamism, and flexibility it can give your website.
|
||||
|
||||
We've tried our best to cover as many htaccess tricks as possible in this article, but in case we've missed any important trick, you're most welcome to post the htaccess ideas and tricks you know via the comments section below, and we will include them in the article too.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/apache-htaccess-tricks/
|
||||
|
||||
作者:[Gunjit Khera][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/gunjitk94/
|
|
||||
How to limit network bandwidth on Linux
|
||||
================================================================================
|
||||
If you often run multiple networking applications on your Linux desktop, or share bandwidth among multiple computers at home, you will want to have better control over bandwidth usage. Otherwise, when you are downloading a big file with a downloader, your interactive SSH session may become sluggish to the point where it's unusable. Or when you sync a big folder over Dropbox, your roommate may complain that video streaming on her computer gets choppy.
|
||||
|
||||
In this tutorial, I am going to describe two different ways to rate limit network traffic on Linux.
|
||||
|
||||
### Rate Limit an Application on Linux ###
|
||||
|
||||
One way to rate limit network traffic is via a command-line tool called [trickle][1]. The trickle command allows you to shape the traffic of any particular program by "pre-loading" a rate-limited socket library at run-time. A nice thing about trickle is that it runs purely in user-space, meaning you don't need root privilege to restrict the bandwidth usage of a program. To be compatible with trickle, the program must use the socket interface and be dynamically linked, since trickle cannot intercept statically linked binaries. trickle can be handy when you want to rate limit a program which does not have built-in bandwidth control functionality.
|
||||
|
||||
To install trickle on Ubuntu, Debian and their derivatives:
|
||||
|
||||
$ sudo apt-get install trickle
|
||||
|
||||
To install trickle on Fedora or CentOS/RHEL (with [EPEL repository][2]):
|
||||
|
||||
$ sudo yum install trickle
|
||||
|
||||
Basic usage of trickle is as follows. Simply put, you prepend trickle (with rate) in front of the command you are trying to run.
|
||||
|
||||
$ trickle -d <download-rate> -u <upload-rate> <command>
|
||||
|
||||
This will limit the download and upload rate of <command> to specified values (in KBytes/s).
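If you find yourself typing the same rates over and over, the pattern can be wrapped in a tiny shell function. This is just an illustration, not part of trickle itself; the function name `tl` and the `TRICKLE_DOWN`/`TRICKLE_UP` environment variables are invented here:

```shell
# Hypothetical convenience wrapper around trickle. Rates are in KB/s,
# as trickle expects; override the defaults via TRICKLE_DOWN/TRICKLE_UP.
tl() {
    trickle -d "${TRICKLE_DOWN:-300}" -u "${TRICKLE_UP:-100}" "$@"
}
# usage: tl wget http://example.com/big.iso
```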
|
||||
|
||||
For example, set the maximum upload bandwidth of your scp session to 100 KB/s:
|
||||
|
||||
$ trickle -u 100 scp backup.tgz alice@remote_host.com:
|
||||
|
||||
If you want, you can set the maximum download speed (e.g., 300 KB/s) of your Firefox browser by creating a [custom launcher][3] with the following command.
|
||||
|
||||
trickle -d 300 firefox %u
|
||||
|
||||
Finally, trickle can run in daemon mode, where it can restrict the "aggregate" bandwidth usage of all running programs launched via trickle. To launch trickle as a daemon (i.e., trickled):
|
||||
|
||||
$ sudo trickled -d 1000
|
||||
|
||||
Once the trickled daemon is running in the background, you can launch other programs via trickle. If you launch one program with trickle, its maximum download rate is 1000 KB/s. If you launch another program with trickle, each of them will be rate limited to 500 KB/s, etc.
|
||||
|
||||
### Rate Limit a Network Interface on Linux ###
|
||||
|
||||
Another way to control your bandwidth is to enforce a bandwidth limit on a per-interface basis. This is useful when you are sharing your upstream Internet connection with someone else. Like anything else, Linux has a tool for this: [wondershaper][4] does exactly that, rate-limiting a network interface.
|
||||
|
||||
wondershaper is in fact a shell script which uses [tc][5] to define traffic shaping and QoS for a specific network interface. Outgoing traffic is shaped by being placed in queues with different priorities, while incoming traffic is rate-limited by packet dropping.
|
||||
|
||||
In fact, the stated goal of wondershaper is much more than just adding bandwidth cap to an interface. wondershaper tries to maintain low latency for interactive sessions such as SSH while bulk download or upload is going on. Also, it makes sure that bulk upload (e.g., Dropbox sync) does not suffocate download, and vice versa.
|
||||
|
||||
To install wondershaper on Ubuntu, Debian and their derivatives:
|
||||
|
||||
$ sudo apt-get install wondershaper
|
||||
|
||||
To install wondershaper on Fedora or CentOS/RHEL (with [EPEL repository][6]):
|
||||
|
||||
$ sudo yum install wondershaper
|
||||
|
||||
Basic usage of wondershaper is as follows.
|
||||
|
||||
$ sudo wondershaper <interface> <download-rate> <upload-rate>
|
||||
|
||||
For example, to set the maximum download/upload bandwidth for eth0 to 1000Kbit/s and 500Kbit/s, respectively:
|
||||
|
||||
$ sudo wondershaper eth0 1000 500
|
||||
|
||||
You can remove the rate limit by running:
|
||||
|
||||
$ sudo wondershaper clear eth0
|
||||
|
||||
If you are interested in how wondershaper works, you can read its shell script (/sbin/wondershaper).
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
In this tutorial, I introduced two different ways to control your bandwidth usage on a Linux desktop, on a per-application or per-interface basis. Both tools are extremely user-friendly, offering you a quick and easy way to shape otherwise unconstrained traffic. For those of you who want to know more about rate control on Linux, refer to [the Linux bible][7].
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/limit-network-bandwidth-linux.html
|
||||
|
||||
作者:[Dan Nanni][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/nanni
|
||||
[1]:http://monkey.org/~marius/trickle
|
||||
[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
|
||||
[3]:http://xmodulo.com/create-desktop-shortcut-launcher-linux.html
|
||||
[4]:http://lartc.org/wondershaper/
|
||||
[5]:http://lartc.org/manpages/tc.txt
|
||||
[6]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
|
||||
[7]:http://www.lartc.org/lartc.html
|
81
sources/tech/20150128 Docker-1 Moving to Docker.md
Normal file
|
||||
Moving to Docker
|
||||
================================================================================
|
||||
![](http://cocoahunter.com/content/images/2015/01/docker1.jpeg)
|
||||
|
||||
[TL;DR] This is the first post in a series of 3 on how my company moved its infrastructure from PaaS to Docker based deployment. If you want, you can skip the intro (this post) and head directly to the technical topics (links at the bottom of the page).
|
||||
|
||||
----------
|
||||
|
||||
In the last month I've been struggling with devops. This is my very personal story and experience in trying to streamline the deployment process of a Rails app with Docker.
|
||||
|
||||
When I started my company – [Touchware][1] – in 2012 I was a lone developer. Things were small and uncomplicated, they didn't require a lot of maintenance, nor did they need to scale all that much. During the course of the last year though, we grew quite a lot (we are now a team of 10 people) and our server-side applications and APIs grew both in terms of scope and scale.
|
||||
|
||||
### Step 1 - Heroku ###
|
||||
|
||||
We are still a very small team and we need to keep things moving and running as smoothly as possible. When we looked for possible solutions, we decided to stick with something that would remove from our shoulders the burden of managing hardware. Since we develop mainly Rails based applications and Heroku has great support for RoR and various kinds of DBs and caches (Postgres / Mongo / Redis etc.), the smartest choice seemed to be going with [Heroku][2]. And that's what we did.
|
||||
|
||||
Heroku has great support and great documentation, and deploying apps is just so snappy! The only problem is that when you start growing, you need to have piles of cash around to pay the bills. Not the best deal, really.
|
||||
|
||||
### Step 2 - Dokku ###
|
||||
|
||||
In a rush to cut costs, we decided to give Dokku a try. [Dokku][3], quoting the GitHub repo, is a
|
||||
|
||||
> Docker powered mini-Heroku in around 100 lines of Bash
|
||||
|
||||
We launched some instances on [DigitalOcean][4] with Dokku pre-installed and we gave it a spin. Dokku is very much like Heroku, but when you have complex applications for which you need to tweak params, or where you need certain dependencies, it's just not gonna work out. We had an app where we needed to apply multiple transformations on images and we couldn't find a way to install the correct version of imagemagick into the dokku-based Docker container that was hosting our Rails app. We still have a couple of very simple apps running on Dokku, but we had to move some of the others back to Heroku.
|
||||
|
||||
### Step 3 - Docker ###
|
||||
|
||||
A couple of months ago, since the problem of devops and managing production apps was resurfacing, I decided to try out [Docker][5]. Docker, in simple terms, allows developers to containerize applications and ease deployment. Since a Docker container basically has all the dependencies it needs to run your app, if everything runs fine on your laptop, you can be sure it'll also run like a champ in production on a remote server, be it an AWS EC2 instance or a VPS on DigitalOcean.
|
||||
|
||||
Docker IMHO is particularly interesting for the following reasons:
|
||||
|
||||
- it promotes modularization and separation of concerns: you need to start thinking about your apps in terms of logical components (load balancer: 1 container, DB: 1 container, webapp: 1 container etc.);
|
||||
- it's very flexible in terms of deployment options: containers can be deployed to a wide variety of HW and can be easily redeployed to different servers / providers;
|
||||
- it allows for very fine grained tuning of your app environment: you build the images your containers run from, so you have plenty of options for configuring your environment exactly as you would like to.
|
||||
|
||||
There are however some downsides:
|
||||
|
||||
- the learning curve is quite steep (this is probably a very personal problem, but I'm talking as a software dev and not as a skilled operations professional);
|
||||
- setup is not simple, especially if you want to have a private registry / repository (more about this later).
|
||||
|
||||
Following are some tips I put together during the course of the last week: the findings of someone who is new to the game.
|
||||
|
||||
----------
|
||||
|
||||
In the following articles we'll see how to setup a semi-automated Docker based deployment system.
|
||||
|
||||
- [Setting up a private Docker registry][6]
|
||||
- [Configuring a Rails app for semi-automated deployment][7]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://cocoahunter.com/2015/01/23/docker-1/
|
||||
|
||||
作者:[Michelangelo Chasseur][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://cocoahunter.com/author/michelangelo/
|
||||
[1]:http://www.touchwa.re/
|
||||
[2]:http://cocoahunter.com/2015/01/23/docker-1/www.heroku.com
|
||||
[3]:https://github.com/progrium/dokku
|
||||
[4]:http://cocoahunter.com/2015/01/23/docker-1/www.digitalocean.com
|
||||
[5]:http://www.docker.com/
|
||||
[6]:http://cocoahunter.com/2015/01/23/docker-2/
|
||||
[7]:http://cocoahunter.com/2015/01/23/docker-3/
|
|
||||
Setting up a private Docker registry
|
||||
================================================================================
|
||||
![](http://cocoahunter.com/content/images/2015/01/docker2.jpg)
|
||||
|
||||
[TL;DR] This is the second post in a series of 3 on how my company moved its infrastructure from PaaS to Docker based deployment.
|
||||
|
||||
- [First part][1]: where I talk about the process we went thru before approaching Docker;
|
||||
- [Third part][2]: where I show how to automate the entire process of building images and deploying a Rails app with Docker.
|
||||
|
||||
----------
|
||||
|
||||
Why would you want to set up a private registry? Well, for starters, Docker Hub only allows you one free private repo. Other companies are beginning to offer similar services, but none of them is particularly cheap. In addition, if you need to deploy production-ready applications built with Docker, you might not want to publish those images on the public Docker Hub.
|
||||
|
||||
This is a very pragmatic approach to dealing with the intricacies of setting up a private Docker registry. For the tutorial we will be using a small 512MB instance on DigitalOcean (from now on DO). I also assume you already know the basics of Docker since I will be concentrating on some more complicated stuff.
|
||||
|
||||
### Local set up ###
|
||||
|
||||
First of all you need to install **boot2docker** and docker CLI. If you already have your basic Docker environment up and running, you can just skip to the next section.
|
||||
|
||||
From the terminal run the following command[1][3]:
|
||||
|
||||
brew install boot2docker docker
|
||||
|
||||
If everything is ok[2][4], you will now be able to start the VM inside which Docker will run with the following command:
|
||||
|
||||
boot2docker up
|
||||
|
||||
Follow the instructions, copy and paste the export commands that boot2docker will print in the terminal. If you now run `docker ps` you should be greeted by the following line
|
||||
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
|
||||
Ok, Docker is ready to go. This will be enough for the moment. Let's go back to setting up the registry.
|
||||
|
||||
### Creating the server ###
|
||||
|
||||
Log into your DO account and create a new Droplet by selecting an image with Docker pre-installed[^n].
|
||||
|
||||
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-18-26-14.png)
|
||||
|
||||
You should receive your root credentials via email. Log into your instance and run `docker ps` to see if everything is ok.
|
||||
|
||||
### Setting up AWS S3 ###
|
||||
|
||||
We are going to use Amazon Simple Storage Service (S3) as the storage layer for our registry / repository. We will need to create a bucket and user credentials to allow our Docker container to access it.
|
||||
|
||||
Log into your AWS account (if you don't have one you can set one up at [http://aws.amazon.com/][5]) and from the console select S3 (Simple Storage Service).
|
||||
|
||||
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-29-21.png)
|
||||
|
||||
Click on **Create Bucket**, enter a unique name for your bucket (and write it down, we're gonna need it later), then click on **Create**.
|
||||
|
||||
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-22-50.png)
|
||||
|
||||
That's it! We're done setting up the storage part.
|
||||
|
||||
### Setup AWS access credentials ###
|
||||
|
||||
We are now going to create a new user. Go back to your AWS console and select IAM (Identity & Access Management).
|
||||
|
||||
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-29-08.png)
|
||||
|
||||
In the dashboard, on the left side of the webpage, you should click on Users. Then select **Create New Users**.
|
||||
|
||||
You should be presented with the following screen:
|
||||
|
||||
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-31-42.png)
|
||||
|
||||
Enter a name for your user (e.g. docker-registry) and click on Create. Write down (or download the csv file with) your Access Key and Secret Access Key that we'll need when running the Docker container. Go back to your users list and select the one you just created.
|
||||
|
||||
Under the Permission section, click on Attach User Policy. In the next screen, you will be presented with multiple choices: select Custom Policy.
|
||||
|
||||
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-41-21.png)
|
||||
|
||||
Here's the content of the custom policy:
|
||||
|
||||
{
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Sid": "SomeStatement",
|
||||
"Effect": "Allow",
|
||||
"Action": [
|
||||
"s3:*"
|
||||
],
|
||||
"Resource": [
|
||||
"arn:aws:s3:::docker-registry-bucket-name/*",
|
||||
"arn:aws:s3:::docker-registry-bucket-name"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
This will allow the user (i.e. the registry) to manage (read/write) content on the bucket (make sure to use the bucket name you previously defined when setting up AWS S3). To sum it up: when you'll be pushing Docker images from your local machine to your repository, the server will be able to upload them to S3.
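A stray character in the policy JSON is a common source of silent IAM failures, so it can be worth validating the snippet locally before pasting it into the console. A minimal check, assuming python3 is available and using the placeholder bucket name from above:

```shell
# Validate the IAM policy JSON with python3's json module before pasting
# it into the AWS console; json.tool exits non-zero on malformed JSON.
cat > /tmp/registry-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SomeStatement",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::docker-registry-bucket-name/*",
        "arn:aws:s3:::docker-registry-bucket-name"
      ]
    }
  ]
}
EOF
python3 -m json.tool /tmp/registry-policy.json > /dev/null && echo "policy JSON is valid"
```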
|
||||
|
||||
### Installing the registry ###
|
||||
|
||||
Now let's head back to our DO server and SSH into it. We are going to use[^n] one of the [official Docker registry images][6].
|
||||
|
||||
Let's start our registry with the following command:
|
||||
|
||||
docker run \
|
||||
-e SETTINGS_FLAVOR=s3 \
|
||||
-e AWS_BUCKET=bucket-name \
|
||||
-e STORAGE_PATH=/registry \
|
||||
-e AWS_KEY=your_aws_key \
|
||||
-e AWS_SECRET=your_aws_secret \
|
||||
-e SEARCH_BACKEND=sqlalchemy \
|
||||
-p 5000:5000 \
|
||||
--name registry \
|
||||
-d \
|
||||
registry
|
||||
|
||||
Docker should pull the required fs layers from the Docker Hub and eventually start the daemonised container.
|
||||
|
||||
### Testing the registry ###
|
||||
|
||||
If everything worked out, you should now be able to test the registry by pinging it and by searching its content (though for the time being it's still empty).
|
||||
|
||||
Our registry is very basic and does not provide any means of authentication. Since there are no easy ways of adding authentication (at least none that I'm aware of that is easy enough to implement to justify the effort), I've decided that the easiest way of querying / pulling / pushing the registry is an insecure (over HTTP) connection tunnelled through SSH.
|
||||
|
||||
Opening an SSH tunnel from your local machine is straightforward:
|
||||
|
||||
ssh -N -L 5000:localhost:5000 root@your_registry.com
|
||||
|
||||
The command tunnels connections to port 5000 on your localhost over SSH to port 5000 of the registry server (the one we exposed with the `docker run` command in the previous paragraph).
|
||||
|
||||
If you now browse to the following address [http://localhost:5000/v1/_ping][7] you should get the following very simple response
|
||||
|
||||
{}
|
||||
|
||||
This just means that the registry is working correctly. You can also list the whole content of the registry by browsing to [http://localhost:5000/v1/search][8] that will get you a similar response:
|
||||
|
||||
{
|
||||
"num_results": 2,
|
||||
"query": "",
|
||||
"results": [
|
||||
{
|
||||
"description": "",
|
||||
"name": "username/first-repo"
|
||||
},
|
||||
{
|
||||
"description": "",
|
||||
"name": "username/second-repo"
|
||||
}
|
||||
]
|
||||
}
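If you'd rather inspect that response from the terminal than from the browser, the repository names can be pulled out with a few lines of python3 (shown here against the sample response above; against a live registry you would pipe `curl -s http://localhost:5000/v1/search` into the same command instead):

```shell
# Extract the repository names from a /v1/search response using only
# python3; pipe curl output into the same command for a live registry.
cat <<'EOF' | python3 -c 'import json,sys
for r in json.load(sys.stdin)["results"]:
    print(r["name"])'
{"num_results": 2, "query": "", "results": [
  {"description": "", "name": "username/first-repo"},
  {"description": "", "name": "username/second-repo"}]}
EOF
# → username/first-repo
# → username/second-repo
```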
|
||||
|
||||
### Building an image ###
|
||||
|
||||
Let's now try and build a very simple Docker image to test our newly installed registry. On your local machine, create a Dockerfile with the following content[^n]:
|
||||
|
||||
# Base image with ruby 2.2.0
|
||||
FROM ruby:2.2.0
|
||||
|
||||
MAINTAINER Michelangelo Chasseur <michelangelo.chasseur@touchwa.re>
|
||||
|
||||
...and build it:
|
||||
|
||||
docker build -t localhost:5000/username/repo-name .
|
||||
|
||||
The `localhost:5000` part is especially important: the first part of the name of a Docker image will tell the `docker push` command the endpoint towards which we are trying to push our image. In our case, since we are connecting to our remote private registry via an SSH tunnel, `localhost:5000` represents exactly the reference to our registry.
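The endpoint really is nothing more than the text before the first slash of the image name; plain shell parameter expansion shows the split:

```shell
# docker push derives the registry endpoint from everything before the
# first "/" in the image name; shell parameter expansion shows the split.
name="localhost:5000/username/repo-name"
echo "registry: ${name%%/*}"   # → registry: localhost:5000
echo "repo:     ${name#*/}"    # → repo:     username/repo-name
```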
|
||||
|
||||
If everything works as expected, when the command returns, you should be able to list your newly created image with the `docker images` command. Run it and see it for yourself.
|
||||
|
||||
### Pushing to the registry ###
|
||||
|
||||
Now comes the trickier part. It took me a while to realize what I'm about to describe, so just be patient if you don't get it the first time you read it, and try to follow along. I know that all this stuff will seem pretty complicated (and it would be if you didn't automate the process), but I promise in the end it will all make sense. In the next post I will show a couple of shell scripts and Rake tasks that will automate the whole process and let you deploy a Rails app to your registry with a single easy command.
|
||||
|
||||
The docker command you are running from your terminal is actually using the boot2docker VM to run the containers and do all the magic stuff. So when we run a command like `docker push some_repo`, what is actually happening is that the boot2docker VM is reaching out to the registry, not our localhost.
|
||||
|
||||
This is an extremely important point to understand: in order to push the Docker image to the remote private registry, the SSH tunnel needs to be established from the boot2docker VM and not from your local machine.
|
||||
|
||||
There are a couple of ways to go about it. I will show you the shortest one (which is probably not the easiest to understand, but it's the one that will let us automate the process with shell scripts).
|
||||
|
||||
First of all though we need to sort one last thing with SSH.
|
||||
|
||||
### Setting up SSH ###
|
||||
|
||||
Let's add our boot2docker SSH key to our remote server's (registry's) authorized keys. We can do so using the ssh-copy-id utility, which you can install with the following command should you not already have it:
|
||||
|
||||
brew install ssh-copy-id
|
||||
|
||||
Then run:
|
||||
|
||||
ssh-copy-id -i /Users/username/.ssh/id_boot2docker root@your-registry.com
|
||||
|
||||
Make sure to substitute `/Users/username/.ssh/id_boot2docker` with the correct path of your ssh key.
|
||||
|
||||
This will allow us to connect via SSH to our remote registry without being prompted for the password.
|
||||
|
||||
Finally let's test it out:
|
||||
|
||||
boot2docker ssh "ssh -o 'StrictHostKeyChecking no' -i /Users/michelangelo/.ssh/id_boot2docker -N -L 5000:localhost:5000 root@registry.touchwa.re &" &
|
||||
|
||||
To break things out a little bit:
|
||||
|
||||
- `boot2docker ssh` lets you pass a command as a parameter that will be executed by the boot2docker VM;
|
||||
- the final `&` indicates that we want our command to be executed in the background;
|
||||
- `ssh -o 'StrictHostKeyChecking no' -i /Users/michelangelo/.ssh/id_boot2docker -N -L 5000:localhost:5000 root@registry.touchwa.re &` is the actual command our boot2docker VM will run;
|
||||
- the `-o 'StrictHostKeyChecking no'` will make sure that we are not prompted with security questions;
|
||||
- the `-i /Users/michelangelo/.ssh/id_boot2docker` indicates which SSH key we want our VM to use for authentication purposes (note that this should be the key you added to your remote registry in the previous step);
|
||||
- finally, we are opening a tunnel mapping port 5000 on the boot2docker VM to port 5000 on the registry server.
|
||||
|
||||
### Pushing the image ###
|
||||
|
||||
You should now be able to push your image to the remote registry by simply issuing the following command:
|
||||
|
||||
docker push localhost:5000/username/repo_name
|
||||
|
||||
In the [next post][9] we'll see how to automate some of this stuff and we'll containerize a real Rails application. Stay tuned!
|
||||
|
||||
P.S. Please use the comments to let me know of any inconsistencies or fallacies in my tutorial. Hope you enjoyed it!
|
||||
|
||||
1. I'm also assuming you are running on OS X.
|
||||
1. For a complete list of instructions to set up your docker environment and requirements, please visit [http://boot2docker.io/][10]
|
||||
1. Select Image > Applications > Docker 1.4.1 on 14.04 at the time of this writing.
|
||||
1. [https://github.com/docker/docker-registry/][11]
|
||||
1. This is just a stub, in the next post I will show you how to bundle a Rails application into a Docker container.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://cocoahunter.com/2015/01/23/docker-2/
|
||||
|
||||
作者:[Michelangelo Chasseur][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://cocoahunter.com/author/michelangelo/
|
||||
[1]:http://cocoahunter.com/2015/01/23/docker-1/
|
||||
[2]:http://cocoahunter.com/2015/01/23/docker-3/
|
||||
[3]:http://cocoahunter.com/2015/01/23/docker-2/#fn:1
|
||||
[4]:http://cocoahunter.com/2015/01/23/docker-2/#fn:2
|
||||
[5]:http://aws.amazon.com/
|
||||
[6]:https://registry.hub.docker.com/_/registry/
|
||||
[7]:http://localhost:5000/v1/_ping
|
||||
[8]:http://localhost:5000/v1/search
|
||||
[9]:http://cocoahunter.com/2015/01/23/docker-3/
|
||||
[10]:http://boot2docker.io/
|
||||
[11]:https://github.com/docker/docker-registry/
|
|
||||
Automated Docker-based Rails deployments
|
||||
================================================================================
|
||||
![](http://cocoahunter.com/content/images/2015/01/docker3.jpeg)
|
||||
|
||||
[TL;DR] This is the third post in a series of 3 on how my company moved its infrastructure from PaaS to Docker based deployment.
|
||||
|
||||
- [First part][1]: where I talk about the process we went thru before approaching Docker;
|
||||
- [Second part][2]: where I explain how setting up a private registry for in house secure deployments.
|
||||
|
||||
----------
|
||||
|
||||
In this final part we will see how to automate the whole deployment process with a real world (though very basic) example.
|
||||
|
||||
### Basic Rails app ###
|
||||
|
||||
Let's dive into the topic right away and bootstrap a basic Rails app. For the purpose of this demonstration I'm going to use Ruby 2.2.0 and Rails 4.1.1
|
||||
|
||||
From the terminal run:
|
||||
|
||||
$ rvm use 2.2.0
|
||||
$ rails new docker-test && cd docker-test
|
||||
|
||||
Let's create a basic controller:
|
||||
|
||||
$ rails g controller welcome index
|
||||
|
||||
...and edit `routes.rb` so that the root of the project will point to our newly created welcome#index method:
|
||||
|
||||
root 'welcome#index'
|
||||
|
||||
Running `rails s` from the terminal and browsing to [http://localhost:3000][3] should bring you to the index page. We're not going to make the app any fancier; it's just a basic example to prove that everything works when we build and deploy the container.
|
||||
|
||||
### Setup the webserver ###
|
||||
|
||||
We are going to use Unicorn as our webserver. Add `gem 'unicorn'` and `gem 'foreman'` to the Gemfile and bundle it up (run `bundle install` from the command line).
|
||||
|
||||
Unicorn needs to be configured when the Rails app launches, so let's put a **unicorn.rb** file inside the **config** directory. [Here is an example][4] of a Unicorn configuration file. You can just copy & paste the content of the Gist.
|
||||
|
||||
Let's also add a Procfile with the following content inside the root of the project so that we will be able to start the app with foreman:
|
||||
|
||||
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
|
||||
|
||||
If you now try to run the app with **foreman start** everything should work as expected and you should have a running app on [http://localhost:5000][5]
|
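The Procfile format foreman reads is just one `name: command` entry per line. As an illustrative sketch (not foreman's actual code), a parser for it could look like this:

```python
# Illustrative sketch (not foreman's actual implementation): a Procfile
# maps process names to shell commands, one "name: command" per line.
def parse_procfile(text):
    procs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, cmd = line.partition(":")
        procs[name.strip()] = cmd.strip()
    return procs

procfile = "web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb"
print(parse_procfile(procfile))
```

Foreman starts one process per entry, so our single `web` entry yields exactly one unicorn master.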
||||
|
||||
### Building a Docker image ###
|
||||
|
||||
Now let's build the image inside which our app is going to live. In the root of our Rails project, create a file named **Dockerfile** and paste in it the following:
|
||||
|
||||
# Base image with ruby 2.2.0
|
||||
FROM ruby:2.2.0
|
||||
|
||||
# Install required libraries and dependencies
|
||||
RUN apt-get update && apt-get install -qy nodejs postgresql-client sqlite3 --no-install-recommends && rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Set Rails version
|
||||
ENV RAILS_VERSION 4.1.1
|
||||
|
||||
# Install Rails
|
||||
RUN gem install rails --version "$RAILS_VERSION"
|
||||
|
||||
# Create directory from where the code will run
|
||||
RUN mkdir -p /usr/src/app
|
||||
WORKDIR /usr/src/app
|
||||
|
||||
# Make webserver reachable to the outside world
|
||||
EXPOSE 3000
|
||||
|
||||
# Set ENV variables
|
||||
ENV PORT=3000
|
||||
|
||||
# Start the web app
|
||||
CMD ["foreman","start"]
|
||||
|
||||
# Install the necessary gems
|
||||
ADD Gemfile /usr/src/app/Gemfile
|
||||
ADD Gemfile.lock /usr/src/app/Gemfile.lock
|
||||
RUN bundle install --without development test
|
||||
|
||||
# Add rails project (from same dir as Dockerfile) to project directory
|
||||
ADD ./ /usr/src/app
|
||||
|
||||
# Run rake tasks
|
||||
RUN RAILS_ENV=production rake db:create db:migrate
|
||||
|
||||
Using the provided Dockerfile, let's try and build an image with the following command[1][7]:
|
||||
|
||||
$ docker build -t localhost:5000/your_username/docker-test .
|
||||
|
||||
And again, if everything worked out correctly, the last line of the long log output should read something like:
|
||||
|
||||
Successfully built 82e48769506c
|
||||
$ docker images
|
||||
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
|
||||
localhost:5000/your_username/docker-test latest 82e48769506c About a minute ago 884.2 MB
|
||||
|
||||
Let's try and run the container!
|
||||
|
||||
$ docker run -d -p 3000:3000 --name docker-test localhost:5000/your_username/docker-test
|
||||
|
||||
You should be able to reach your Rails app running inside the Docker container at port 3000 of your boot2docker VM[2][8] (in my case [http://192.168.59.103:3000][6]).
|
||||
|
||||
### Automating with shell scripts ###
|
||||
|
||||
Since you should already know from the previous post[3] how to push your newly created image to a private registry and deploy it on a server, let's skip this part and go straight to automating the process.
|
||||
|
||||
We are going to define 3 shell scripts and finally tie it all together with rake.
|
||||
|
||||
### Clean ###
|
||||
|
||||
Every time we build our image and deploy, we are better off cleaning everything first. That means the following:
|
||||
|
||||
- stop (if running) and restart boot2docker;
|
||||
- remove orphaned Docker images (images that are without tags and that are no longer used by your containers).
|
||||
|
||||
Put the following into a **clean.sh** file in the root of your project.
|
||||
|
||||
echo Restarting boot2docker...
|
||||
boot2docker down
|
||||
boot2docker up
|
||||
|
||||
echo Exporting Docker variables...
|
||||
sleep 1
|
||||
export DOCKER_HOST=tcp://192.168.59.103:2376
|
||||
export DOCKER_CERT_PATH=/Users/user/.boot2docker/certs/boot2docker-vm
|
||||
export DOCKER_TLS_VERIFY=1
|
||||
|
||||
sleep 1
|
||||
echo Removing orphaned images without tags...
|
||||
docker images | grep "<none>" | awk '{print $3}' | xargs docker rmi
|
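To make the last pipeline in clean.sh concrete, here is a minimal sketch in Python of what `docker images | grep "<none>" | awk '{print $3}' | xargs docker rmi` selects: the IMAGE ID (third column) of every row whose repository or tag is `<none>`. The sample listing below is hypothetical.

```python
# Sketch of the orphan-selection logic from clean.sh: pick the
# IMAGE ID column for rows whose repository or tag is "<none>".
def orphaned_image_ids(listing):
    ids = []
    for row in listing.splitlines()[1:]:  # skip the header row
        cols = row.split()
        if len(cols) >= 3 and "<none>" in cols[:2]:
            ids.append(cols[2])
    return ids

# Hypothetical `docker images` output, for illustration only.
sample = """\
REPOSITORY                                TAG     IMAGE ID      CREATED        VIRTUAL SIZE
localhost:5000/your_username/docker-test  latest  82e48769506c  2 minutes ago  884.2 MB
<none>                                    <none>  4a5e6f7a8b9c  3 days ago     884.0 MB
"""
print(orphaned_image_ids(sample))  # ['4a5e6f7a8b9c']
```

Each collected ID is then handed to `docker rmi` by `xargs`, reclaiming the disk space held by untagged layers.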
||||
|
||||
Also make sure to make the script executable:
|
||||
|
||||
$ chmod +x clean.sh
|
||||
|
||||
### Build ###
|
||||
|
||||
The build process basically consists of reproducing what we just did before (docker build). Create a **build.sh** script at the root of your project with the following content:
|
||||
|
||||
docker build -t localhost:5000/your_username/docker-test .
|
||||
|
||||
Make the script executable.
|
||||
|
||||
### Deploy ###
|
||||
|
||||
Finally, create a **deploy.sh** script with this content:
|
||||
|
||||
# Open SSH connection from boot2docker to private registry
|
||||
boot2docker ssh "ssh -o 'StrictHostKeyChecking no' -i /Users/username/.ssh/id_boot2docker -N -L 5000:localhost:5000 root@your-registry.com &" &
|
||||
|
||||
# Wait to make sure the SSH tunnel is open before pushing...
|
||||
echo Waiting 5 seconds before pushing image.
|
||||
|
||||
echo 5...
|
||||
sleep 1
|
||||
echo 4...
|
||||
sleep 1
|
||||
echo 3...
|
||||
sleep 1
|
||||
echo 2...
|
||||
sleep 1
|
||||
echo 1...
|
||||
sleep 1
|
||||
|
||||
# Push image onto remote registry / repo
|
||||
echo Starting push!
|
||||
docker push localhost:5000/username/docker-test
|
||||
|
||||
If you don't understand what's going on here, please make sure you've thoroughly read [part 2][9] of this series of posts.
|
||||
|
||||
Make the script executable.
|
||||
|
||||
### Tying it all together with rake ###
|
||||
|
||||
Having 3 scripts would now require you to run them individually each time you decide to deploy your app:
|
||||
|
||||
1. clean
|
||||
1. build
|
||||
1. deploy / push
|
||||
|
||||
That wouldn't be much of an effort, if it weren't for the fact that developers are lazy! And lazy be it, then!
|
||||
|
||||
The final step to wrap things up, is tying the 3 parts together with rake.
|
||||
|
||||
To make things even simpler you can just append a bunch of lines of code to the end of the already present Rakefile in the root of your project. Open the Rakefile file - pun intended :) - and paste the following:
|
||||
|
||||
namespace :docker do
|
||||
desc "Remove docker container"
|
||||
task :clean do
|
||||
sh './clean.sh'
|
||||
end
|
||||
|
||||
desc "Build Docker image"
|
||||
task :build => [:clean] do
|
||||
sh './build.sh'
|
||||
end
|
||||
|
||||
desc "Deploy Docker image"
|
||||
task :deploy => [:build] do
|
||||
sh './deploy.sh'
|
||||
end
|
||||
end
|
||||
|
||||
Even if you don't know rake syntax (which you should, because it's pretty awesome!), it's pretty obvious what we are doing. We have declared 3 tasks inside a namespace (docker).
|
||||
|
||||
This will create the following 3 tasks:
|
||||
|
||||
- rake docker:clean
|
||||
- rake docker:build
|
||||
- rake docker:deploy
|
||||
|
||||
Deploy is dependent on build, build is dependent on clean. So every time we run from the command line
|
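The prerequisite chain can be sketched in a few lines: a hedged illustration (not rake's actual implementation) of depth-first prerequisite resolution, where invoking `deploy` runs each dependency first and each task only once.

```python
# Hedged sketch of rake-style prerequisite resolution:
# deploy depends on build, build depends on clean, so invoking
# deploy runs the chain clean -> build -> deploy, once each.
DEPS = {"deploy": ["build"], "build": ["clean"], "clean": []}

def run_order(task, deps, done=None):
    done = [] if done is None else done
    for dep in deps[task]:
        run_order(dep, deps, done)  # resolve prerequisites first
    if task not in done:
        done.append(task)           # then run the task itself, once
    return done

print(run_order("deploy", DEPS))  # ['clean', 'build', 'deploy']
```

Rake memoizes tasks the same way, so shared prerequisites never run twice in one invocation.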
||||
|
||||
$ rake docker:deploy
|
||||
|
||||
All the scripts will be executed in the required order.
|
||||
|
||||
### Test it ###
|
||||
|
||||
To see if everything is working, you just need to make a small change in the code of your app and run
|
||||
|
||||
$ rake docker:deploy
|
||||
|
||||
and see the magic happening. Once the image has been uploaded (and the first time it could take quite a while), you can ssh into your production server, pull (through an SSH tunnel) the Docker image onto the server and run it. It's that easy!
|
||||
|
||||
Well, maybe it takes a while to get accustomed to how everything works, but once it does, it's almost (almost) as easy as deploying with Heroku.
|
||||
|
||||
P.S. As always, please let me have your ideas. I'm not sure this is the best, or the fastest, or the safest way of doing devops with Docker, but it certainly worked out for us.
|
||||
|
||||
- make sure to have **boot2docker** up and running.
|
||||
- If you don't know your boot2docker VM address, just run `$ boot2docker ip`
|
||||
- if you don't, you can read it [here][10]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://cocoahunter.com/2015/01/23/docker-3/
|
||||
|
||||
作者:[Michelangelo Chasseur][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://cocoahunter.com/author/michelangelo/
|
||||
[1]:http://cocoahunter.com/docker-1
|
||||
[2]:http://cocoahunter.com/2015/01/23/docker-2/
|
||||
[3]:http://localhost:3000/
|
||||
[4]:https://gist.github.com/chasseurmic/0dad4d692ff499761b20
|
||||
[5]:http://localhost:5000/
|
||||
[6]:http://192.168.59.103:3000/
|
||||
[7]:http://cocoahunter.com/2015/01/23/docker-3/#fn:1
|
||||
[8]:http://cocoahunter.com/2015/01/23/docker-3/#fn:2
|
||||
[9]:http://cocoahunter.com/2015/01/23/docker-2/
|
||||
[10]:http://cocoahunter.com/2015/01/23/docker-2/
|
@ -0,0 +1,86 @@
|
||||
How To Monitor Access Point Signal Strength With wifi-linux
|
||||
================================================================================
|
||||
As a Python geek I love exploring new Python tools on GitHub that target Linux users. Today I discovered a simple application written in the Python programming language that can be used to monitor access point signal strength.
|
||||
|
||||
I have been experimenting for about two hours with **wifi-linux** and it works great but I would like to see some unittests in the near future from the author as the command **plot** is not working on my machine and is also causing some errors.
|
||||
|
||||
### What is wifi-linux ###
|
||||
|
||||
According to the official readme.md file on author's github account wifi-linux is a very simple python script which collects RSSI information about wifi access points around you and draws graphics showing RSSI activity.
|
||||
|
||||
The author states that the program also draws an RSSI activity graphic, which can be generated with the command plot, but unfortunately it is not working for me. wifi-linux supports other commands such as **bp** to add a breakpoint, **print** to print some statistics and **start changer**.
|
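The core idea behind the tool is simple: keep a history of RSSI samples per access point so that statistics can later be printed or plotted. Here is an illustrative sketch (not wifi-linux's actual implementation, and with made-up readings) of that idea:

```python
# Illustrative sketch (NOT wifi-linux's actual code): collect RSSI
# samples per access point and compute simple statistics from them.
from collections import defaultdict

class RssiMonitor:
    def __init__(self):
        self.samples = defaultdict(list)  # AP name -> list of dBm readings

    def record(self, ap, rssi_dbm):
        self.samples[ap].append(rssi_dbm)

    def average(self, ap):
        readings = self.samples[ap]
        return sum(readings) / len(readings)

mon = RssiMonitor()
for reading in (-42, -40, -44):      # hypothetical dBm readings
    mon.record("home-wifi", reading)
print(mon.average("home-wifi"))  # -42.0
```

The real script gets its readings over D-Bus (hence the dbus-python dependency) and feeds the history to gnuplot for the plot command.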
||||
|
||||
The wifi-linux application has the following dependencies:
|
||||
|
||||
- dbus-python
|
||||
- gnuplot-py
|
||||
|
||||
So first we have to install all the package dependencies for our project in order to run it in our linux machine.
|
||||
|
||||
### Install pakages required by wifi-linux ###
|
||||
|
||||
I tried to install python-dbus by using the pip tool which is used to manage python packages but it did not work and the reason for this is that pip looks for setup.py, which dbus-python doesn't have. So the following command is not going to work.
|
||||
|
||||
pip install dbus-python
|
||||
|
||||
And to see for yourself that it does not work, give it a try. There is a very high probability that you will get the following error displayed on your console.
|
||||
|
||||
IOError: [Errno 2] No such file or directory: '/tmp/pip_build_oltjano/dbus-python/setup.py'
|
||||
|
||||
How did I manage to solve this problem? It is very simple. I installed the system package for the Python DBUS bindings using the following command.
|
||||
|
||||
sudo apt-get install python-dbus
|
||||
|
||||
The above command will work only in machines that make use of the apt-get package manager such as Debian and Ubuntu.
|
||||
|
||||
Then the second dependency we have to take care of is gnuplot-py. Download it, extract it using the tar utility and then run setup.py install to install the Python package.
|
||||
|
||||
First step is to download gnuplot-py.
|
||||
|
||||
wget http://prdownloads.sourceforge.net/gnuplot-py/gnuplot-py-1.8.tar.gz
|
||||
|
||||
Then use the tar utility to extract it.
|
||||
|
||||
tar xvf gnuplot-py-1.8.tar.gz
|
||||
|
||||
Then use the cd command to change directory.
|
||||
|
||||
cd gnuplot-py-1.8
|
||||
|
||||
Once there, run the following command to install the package gnuplot-py on your system.
|
||||
|
||||
    sudo python setup.py install
|
||||
|
||||
Once the installation is finished you are ready to run wifi-linux on your machine.
|
||||
|
||||
Download wifi-linux on your local machine by using the following command.
|
||||
|
||||
wget https://github.com/dixel/wifi-linux/archive/master.zip
|
||||
|
||||
Extract the master.zip archive and then use the following command to run the python script list_rssi.py
|
||||
|
||||
python list_rssi.py
|
||||
|
||||
The following screenshot shows wifi-linux in action.
|
||||
|
||||
![wifi-linux to monitor wifi signal strength](http://blog.linoxide.com/wp-content/uploads/2015/01/wifi-linux.png)
|
||||
|
||||
Then the command **bp** is executed to add a breakpoint like shown below.
|
||||
|
||||
![the bp command in wifi-linux](http://blog.linoxide.com/wp-content/uploads/2015/01/wifi-linux2.png)
|
||||
|
||||
The command **print** can be used to display stats on the console of your machine. An example of its usage is shown below.
|
||||
|
||||
![the print command](http://blog.linoxide.com/wp-content/uploads/2015/01/wifi-linux3.png)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/linux-how-to/monitor-access-point-signal-strength-wifi-linux/
|
||||
|
||||
作者:[Oltjano Terpollari][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/oltjano/
|
@ -1,111 +0,0 @@
|
||||
|
||||
|
||||
How to debug a C/C++ program with Nemiver debugger
|
||||
================================================================================
|
||||
If you read [my post on GDB][1], you know how important and useful I think a debugger can be for a C/C++ program. However, if a command line debugger like GDB sounds more like a problem than a solution to you, you might be more interested in Nemiver. [Nemiver][2] is a GTK+-based standalone graphical debugger for C/C++ programs, using GDB as its back-end. Admirable for its speed and stability, Nemiver is a very reliable debugger filled with goodies.
|
||||
|
||||
### Installation of Nemiver ###
|
||||
|
||||
For Debian based distributions, it should be pretty straightforward:
|
||||
|
||||
$ sudo apt-get install nemiver
|
||||
|
||||
For Arch Linux:
|
||||
|
||||
$ sudo pacman -S nemiver
|
||||
|
||||
For Fedora:
|
||||
|
||||
$ sudo yum install nemiver
|
||||
|
||||
If you prefer compiling yourself, the latest sources are available from [GNOME website][3].
|
||||
|
||||
As a bonus, it integrates very well with the GNOME environment.
|
||||
|
||||
### Basic Usage of Nemiver ###
|
||||
|
||||
Start Nemiver with the command:
|
||||
|
||||
$ nemiver
|
||||
|
||||
You can also summon it with an executable:
|
||||
|
||||
$ nemiver [path to executable to debug]
|
||||
|
||||
Note that Nemiver will be much more helpful if the executable is compiled in debug mode (the -g flag with GCC).
|
||||
|
||||
A good thing is that Nemiver is really fast to load, so you should instantly see the main screen in the default layout.
|
||||
|
||||
![](https://farm9.staticflickr.com/8679/15535277554_d320f6692c_c.jpg)
|
||||
|
||||
By default, a breakpoint has been placed in the first line of the main function. This gives you the time to recognize the basic debugger functions:
|
||||
|
||||
![](https://farm9.staticflickr.com/8669/16131832596_bc68ae18a8_o.jpg)
|
||||
|
||||
- Next line (mapped to F6)
|
||||
- Step inside a function (F7)
|
||||
- Step out of a function (Shift+F7)
|
||||
|
||||
But maybe my personal favorite is the option "Run to cursor" which makes the program run until a precise line under your cursor, and is by default mapped to F11.
|
||||
|
||||
Next, the breakpoints are also easy to use. The quick way to lay a breakpoint at a line is using F8. But Nemiver also has a more complex menu under "Debug" which allows you to set up a breakpoint at a particular function, line number, location of binary file, or even at an event like an exception, a fork, or an exec.
|
||||
|
||||
![](https://farm8.staticflickr.com/7579/16157622315_d680a63896_z.jpg)
|
||||
|
||||
You can also watch a variable by tracking it. In "Debug" you can inspect an expression by giving its name and examining it. It is then possible to add it to the list of controlled variables for easy access. This is probably one of the most useful aspects as I have never been a huge fan of hovering over a variable to get its value. Note that hovering does work though. And to make it even better, Nemiver is capable of watching a struct, and giving you the values of all the member variables.
|
||||
|
||||
![](https://farm8.staticflickr.com/7465/15970310470_7ed020c613.jpg)
|
||||
|
||||
Talking about easy access to information, I also really appreciate the layout of the program. By default, the code is in the upper half and the tabs in the lower part. This grants you access to a terminal for output, a context tracker, a breakpoints list, register addresses, memory map, and variable control. But note that under "Edit" "Preferences" "Layout" you can select different layouts, including a dynamic one for you to modify.
|
||||
|
||||
![](https://farm9.staticflickr.com/8606/15971551549_00e4cdd32e_c.jpg)
|
||||
|
||||
![](https://farm8.staticflickr.com/7525/15535277594_026fef17c1_z.jpg)
|
||||
|
||||
And naturally, once you set up all your breakpoints, watch-points, and layout, you can save your session under “File” for easy retrieval in case you close Nemiver.
|
||||
|
||||
### Advanced Usage of Nemiver ###
|
||||
|
||||
So far, we talked about the basic features of Nemiver, i.e., what you need to get started and debug simple programs immediately. If you have more advanced needs, and especially more complex programs, you might be more interested in some of these features mentioned here.
|
||||
|
||||
#### Debugging a running process ####
|
||||
|
||||
Nemiver allows you to attach to a running process for debugging. Under the "File" menu, you can filter the list of running processes, and connect to a process.
|
||||
|
||||
![](https://farm9.staticflickr.com/8593/16155720571_00e4cdd32e_z.jpg)
|
||||
|
||||
#### Debugging a program remotely over a TCP connection ####
|
||||
|
||||
Nemiver supports remote-debugging, where you set up a lightweight debug server on a remote machine, and launch Nemiver from another machine to debug a remote target hosted by the debug server. Remote debugging can be useful if you cannot run full-fledged Nemiver or GDB on the remote machine for some reason. Under the "File" menu, specify the binary, shared library location, and the address and port.
|
||||
|
||||
![](https://farm8.staticflickr.com/7469/16131832746_c47dee4ef1.jpg)
|
||||
|
||||
#### Using your own GDB binary to debug ####
|
||||
|
||||
In case you compiled Nemiver yourself, you can specify a new location for GDB under "Edit" "Preferences" "Debug". This option can be useful if you want to use a custom version of GDB in Nemiver for some reason.
|
||||
|
||||
#### Follow a child or parent process ####
|
||||
|
||||
Nemiver is capable of following a child or parent process in case your program forks. To enable this feature, go to "Preferences" under "Debugger" tab.
|
||||
|
||||
![](https://farm8.staticflickr.com/7512/16131832716_5724ff434c_z.jpg)
|
||||
|
||||
To conclude, Nemiver is probably my favorite program for debugging without an IDE. It even beats GDB in my opinion, and [command line][4] programs generally have a good grip on me. So if you have never used it, I really recommend it. I can only congratulate the team behind it for giving us such a reliable and stable program.
|
||||
|
||||
What do you think of Nemiver? Would you consider it for standalone debugging? Or do you still stick to an IDE? Let us know in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/debug-program-nemiver-debugger.html
|
||||
|
||||
作者:[Adrien Brochard][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/adrien
|
||||
[1]:http://xmodulo.com/gdb-command-line-debugger.html
|
||||
[2]:https://wiki.gnome.org/Apps/Nemiver
|
||||
[3]:https://download.gnome.org/sources/nemiver/0.9/
|
||||
[4]:http://xmodulo.com/recommend/linuxclibook
|
@ -0,0 +1,59 @@
|
||||
支持同时把单个 ISO 文件写入 20 个 USB 驱动盘的应用程序
|
||||
================================================================================
|
||||
**我的问题是如何把一个Linux ISO 文件烧录到 17 个 USB 拇指驱动盘?**
|
||||
|
||||
精通代码的人会写一个 bash 脚本来自动化处理,而大部分的人会使用像 USB 启动盘创建器这样的图形用户界面工具来把 ISO 文件一个一个地烧录到驱动盘中。但剩下的还有一些人会很快得出结论:两种方法都不太理想。
|
||||
|
||||
### 问题 > 解决 ###
|
||||
|
||||
![GNOME MultiWriter in action](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/gnome-multi-writer.jpg)
|
||||
|
||||
GNOME MultiWriter 在运行当中
|
||||
|
||||
Richard Hughes,一个 GNOME 开发者,也面临着类似的困境。他要创建一批预装操作系统的 USB 驱动盘,需要一个足够简单的工具,使得像他父亲这样的用户也能使用。
|
||||
|
||||
他的做法是开发一款**全新的应用程序**,将上面两种方法的优点合二为一,创造出一款易用的工具。
|
||||
|
||||
它的名字就叫 “[GNOME MultiWriter][1]”。同时可以把单个的 ISO 或 IMG 文件写入多个 USB 驱动盘。
|
||||
|
||||
它不支持个性化定制,也没有命令行功能,但用它就可以省下一下午重复相同操作的时间。
|
||||
|
||||
您需要的就是这款应用程序、一个 ISO 镜像文件、一些 U 盘以及足够多的空闲 USB 接口。
|
||||
|
||||
### 用例和安装 ###
|
||||
|
||||
![The app can be installed on Ubuntu](http://www.omgubuntu.co.uk/wp-content/uploads/2015/01/mutli-writer-on-ubuntu.jpg)
|
||||
|
||||
该应用程序可以在 Ubuntu 上安装
|
||||
|
||||
这款应用程序所设定的使用场景很不错,正适合为即将发布的操作系统或 live 镜像批量制作预装 U 盘。
|
||||
|
||||
也就是说,任何想创建单个可启动 U 盘的人也同样适用:我用 Ubuntu 内置的磁盘创建工具制作可引导镜像从来没有成功过,所以这个方案对我来说是个好消息!
|
||||
|
||||
它的开发者 Hughes 说它**最高能支持 20 个 USB 驱动盘**,每个盘的大小在 1GB 到 32GB 之间。
|
||||
|
||||
GNOME MultiWriter 目前的不足就是它还不是一个完整、稳定的成品。它能正常工作,但尚处于早期阶段,还没有可安装的二进制包,也没有可供添加到软件源的 PPA。
|
||||
|
||||
如果您知道通常的 configure/make 的操作流程的话,可以获取其源码并随时都可以编译运行。在 Ubuntu14.10 系统上,你可能还需要首先安装以下软件包:
|
||||
|
||||
sudo apt-get install gnome-common yelp-tools libcanberra-gtk3-dev libudisks2-dev gobject-introspection
|
||||
|
||||
如果您编译运行并玩转了它,欢迎和我们分享您的感受!
|
||||
|
||||
此项目托管在 GitHub 上,欢迎提交缺陷报告和发起 pull 请求;在上面也可以找到压缩包下载,以便手动安装。
|
||||
|
||||
- [Github 上的 GNOME MultiWriter][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/01/gnome-multiwriter-iso-usb-utility
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[runningwater](https://github.com/runningwater)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:https://github.com/hughsie/gnome-multi-writer/
|
||||
[2]:https://github.com/hughsie/gnome-multi-writer/
|
@ -1,168 +0,0 @@
|
||||
Docker的现状与未来
|
||||
================================================================================
|
||||
|
||||
### Docker - 源远流长的故事 ###
|
||||
|
||||
Docker是一个专为Linux容器而设计的工具集,用于"构建、交付和运行"分布式应用。它最初由DotCloud于2013年3月作为开源项目发布。这个项目越来越受欢迎,DotCloud也因此更名为Docker公司(并最终[出售了原有的PaaS业务][1])。[Docker 1.0][2]于2014年6月发布,并延续了之前每月更新一个版本的节奏。
|
||||
|
||||
|
||||
1.0版本的发布标志着Docker公司认为该平台已经足够成熟,可以用于生产环境(并由该公司与合作伙伴提供付费支持选项)。每月发布的更新显示,该项目正在迅速发展,不断增添新特性、解决发现的问题。该项目已经成功地将"交付"与"运行"解耦,因此来自任何版本的Docker镜像可以与其它版本配合使用(具备向前和向后兼容的特性),这为Docker在快速变化中提供了稳定的保障。
|
||||
|
||||
Docker之所以能够成为最受欢迎的开源项目之一,除了不少人认为的炒作成分之外,也有坚实的基础。Docker的影响力已经得到整个行业许多品牌的支持,包括亚马逊、Canonical公司、CenturyLink、谷歌、IBM、微软、New Relic、Pivotal、红帽和VMware。这使得只要有Linux的地方,几乎都能使用Docker。除了这些鼎鼎有名的大公司以外,许多初创公司也在围绕着Docker成长,或者调整发展方向来与Docker更好地结合。这些合作关系(无论大小)都将帮助推动Docker核心项目及其周边生态系统的快速发展。
|
||||
|
||||
|
||||
### Docker技术的简要综述 ###
|
||||
|
||||
Docker利用Linux内核的一些设施,例如[cGroups][3]、命名空间和[SElinux][4],来实现容器之间的隔离。起初Docker只是[LXC][5]容器管理子系统的前端,但在0.9版本中引入了[libcontainer][6],这是一个原生Go语言库,用于提供用户空间和内核之间的接口。
|
||||
|
||||
容器位于联合文件系统之上,例如[AUFS][7],它允许多个容器共享操作系统镜像和已安装的库等组件。文件系统的分层方法也被使用[Dockerfile][8]的DevOps工具利用,来缓存构建过程中的各个步骤。省去安装操作系统和应用程序依赖包的等待时间,可以极大地加速测试周期。容器之间共享库也能够减少内存的占用。
|
||||
|
||||
一个容器是从一个镜像开始运行的,镜像可以本地创建、本地缓存,或者从注册库下载。Docker公司运营着[Docker公共注册库][9],它为各种操作系统、中间件和数据库提供了官方仓库。组织和个人可以在其中为镜像创建公共库,也有托管私有仓库的订阅服务。除了上传构建好的镜像,注册库还提供自动化构建工具(以往称为"受信任的构建"),这类镜像从Dockerfile构建,而Dockerfile就是镜像内容的清单。
|
||||
|
||||
### 容器 vs 虚拟机 ###
|
||||
|
||||
容器比虚拟机更高效,因为它们能够共享同一个内核以及共享应用程序库。相应地,容器的内存占用比虚拟机系统更小,即使虚拟机使用了内存超量分配技术。部署容器时共享底层镜像层也可以减少存储占用。IBM的Boden Russell已经做了一些说明两者差异的[基准测试][10]。
|
||||
|
||||
相比虚拟机,容器的系统开销更低,所以应用程序在容器中的运行效率等同于甚至优于在虚拟机中运行。IBM的一个研究团队发表了一篇名为[虚拟机与Linux容器的性能比较][11]的论文。
|
||||
|
||||
|
||||
容器在隔离特性上要比虚拟机逊色。虚拟机可以利用ring-1[硬件隔离][12]技术,例如Intel的VT-d和VT-x。这种隔离可以防止虚拟机逃逸和相互干扰。而容器至今还没有任何形式的硬件隔离,这使它容易受到攻击。一个名为[Shocker][13]的概念验证攻击表明,1.0之前的Docker版本存在这种脆弱性。尽管Docker 1.0修复了Shocker所利用的那个较为严重的漏洞,Docker的CTO Solomon Hykes仍然[表态][14]:"当我们可以自然而然地说Docker开箱即用是安全的、即便其中包含不受信任的uid0程序时,我们会很明确地这样宣布。"Hykes的声明承认,其它漏洞及相关风险依旧存在,所以在容器成为受信任的工具之前还有更多工作要做。
|
||||
|
||||
对于许多用户场景而言,在容器和虚拟机之间二选一是一种错误的二分法。Docker可以在虚拟机中很好地工作,因此它可以用于现有的虚拟基础设施、私有云或公有云。同样也可以在容器里运行虚拟机,这也是谷歌云平台用法的一部分。给定一个像IaaS这样广泛可用的基础设施,可以合理预期,容器与虚拟机一起使用的情景将持续数年。容器管理和虚拟机技术也有可能被集成到一起,提供两全其美的方案:一个以硬件为信任锚的微虚拟化实现,在libcontainer之后充当后端,与前端的Docker工具链和生态系统整合,同时提供更好的隔离性。微虚拟化(例如Bromium的[vSentry][15]和VMware的[Project Fargo][16])已经在桌面环境中用于提供应用程序之间基于硬件的隔离,所以类似的方法可以与libcontainer结合,替代Linux内核中的容器机制。
|
||||
|
||||
### ‘Dockerizing’ 应用程序 ###
|
||||
|
||||
几乎所有Linux应用程序都可以在Docker容器中运行,不受编程语言或框架的限制。实践中唯一的限制是操作系统允许容器做什么。即便这个限制也可以通过以特权模式运行容器来放宽,但这会相应减少一些控制(并相应增加容器中的应用程序损坏主机操作系统的风险)。
|
||||
|
||||
|
||||
容器都是从镜像开始运行的,而镜像也可以从运行中的容器获取。通常使用两种方法把应用程序放入容器,分别是手动构建和Dockerfile。
|
||||
|
||||
#### 手动构建 ####
|
||||
|
||||
手动构建从启动一个基础操作系统镜像开始,然后在交互式终端中用所选Linux发行版的包管理器安装应用程序及其依赖项。Zef Hemel在"[使用Linux容器来支持便携式应用程序部署][17]"一文中讲述了他的部署过程。一旦应用程序安装完毕,容器就可以被推送至注册中心(例如Docker Hub)或者导出为一个tar文件。
|
||||
|
||||
#### Dockerfile ####
|
||||
|
||||
Dockerfile是一个用于构建Docker容器的脚本化系统。每一个Dockerfile定义了起始的基础镜像,以及一系列在容器中运行的命令或添加到容器中的文件。Dockerfile还可以指定对外暴露的端口和当前工作目录,以及容器启动时默认执行的命令。用Dockerfile构建的容器和手工构建的一样,可以推送或导出。Dockerfile也可以用于Docker Hub的自动化构建系统,这样构建过程在Docker公司的控制下进行,并且镜像的源代码对任何人可见。
|
||||
|
||||
|
||||
#### 仅仅一个进程? ####
|
||||
|
||||
无论镜像是手动构建还是通过Dockerfile构建,一个关键的考虑因素是容器启动时只有一个进程被启动。对于提供单一服务的容器,例如运行一个应用服务器,运行单一进程不是问题(关于容器是否应该只有一个进程还存在一些争议)。对于需要启动多个进程的容器,必须先启动[supervisor][18]进程,再由它生成其它所需的进程。
|
||||
|
||||
### 容器和微服务 ###
|
||||
|
||||
关于微服务架构的原理和好处的完整讨论已经远远超出了这篇文章的范围(这个话题已在[InfoQ eMag: Microservices][19]中覆盖)。然而,容器是打包和部署微服务实例的便捷方式。
|
||||
|
||||
尽管目前大多数大规模微服务部署实例仍然运行在虚拟机上,但容器适用于更小规模的部署。容器能够共享操作系统,使内存和硬盘占用更小,应用程序的公共代码也可以放在共享库中,这意味着并排部署多个版本的服务非常高效。
|
||||
|
||||
### 连接容器 ###
|
||||
|
||||
一些小的应用程序适合放在单独的容器中,但在许多案例中应用程序将遍布多个容器。Docker的成功包括催生了一连串的新应用程序组合工具、业务流程工具和实现平台作为服务(PaaS)过程。许多工具还帮助实现缩放、容错、业务管理以及对已部署资产进行版本控制。
|
||||
|
||||
|
||||
#### 连接 ####
|
||||
|
||||
Docker的网络功能相当原始。在同一主机内,容器之间的服务可以互相访问,而且Docker也可以把端口映射到主机操作系统,使服务可以通过网络被调用。官方支持的连接方式是[libchan][20],这是一个为Go语言提供类似[channels][21]的网络服务的库。在libchan进入应用程序之前,第三方仍有很大空间提供配套的网络服务。例如,[Flocker][22]已经采取了基于代理的方法使服务实现跨主机(以及底层存储)的移植。
|
||||
|
||||
#### 合成 ####
|
||||
|
||||
Docker本身拥有把容器连接在一起的机制,与依赖项相关的元数据可以被传递到相依赖的容器,并作为环境变量和hosts条目被使用。应用合成工具例如[Fig][23]和[geard][24]在一个独立的文件中表达依赖关系图,于是多个容器可以组合成一个连贯的系统。CenturyLink公司的[Panamax][25]合成工具采取了与Fig和geard类似的底层方法,但新增了基于Web的用户界面,并直接与GitHub相集成,以便于应用程序分享。
|
||||
|
||||
#### 业务流程 ####
|
||||
|
||||
业务流程系统例如[Decking][26]、New Relic公司的[Centurion][27]和谷歌公司的[Kubernetes][28],都旨在帮助部署容器并管理其生命周期。也有无数[Apache Mesos][30](特别是其[Marathon][31]长期运行应用框架)与Docker一起使用的例子(例如[Mesosphere][29])。通过在应用程序需求(例如CPU核数和内存)与底层基础架构之间提供一层抽象,业务流程工具实现了解耦,旨在简化应用程序开发和数据中心运维。业务流程系统之所以种类繁多,是因为许多人把以前内部开发的系统公开了出来;例如Kubernetes就是基于谷歌用于管理其数据中心内容器的[Omega][32]系统。
|
||||
|
||||
虽然合成工具和业务流程工具的功能在某种程度上存在重叠,但这也是它们互补的一种方式。例如Fig可以用于描述容器间如何交互,而Kubernetes pods可能用于提供监控和伸缩。
|
||||
|
||||
|
||||
#### 平台(即服务) ####
|
||||
|
||||
已经出现了大量原生支持Docker的PaaS实现,例如[Deis][33]和[Flynn][34],它们利用了Linux容器在开发上的灵活性(而不是"固执己见"地给出一组语言和框架)。其它平台例如CloudFoundry、OpenShift和Apcera Continuum则把基于Docker的功能融入其现有系统,这样基于Docker镜像(或其Dockerfile)的应用程序也可以与之前支持的语言和框架一起部署和管理。
|
||||
|
||||
### 支持所有的云 ###
|
||||
|
||||
由于Docker能够在任何具有较新内核的Linux虚拟机中运行,它几乎可以用于所有提供IaaS服务的云。大多数云厂商已经宣布对Docker及其生态系统提供额外支持。
|
||||
|
||||
亚马逊已经把Docker引入它们的Elastic Beanstalk系统(这是在底层IaaS之上的一个业务流程系统)。谷歌已经启用了"managed VMs",这是介于App Engine PaaS和Compute Engine IaaS之间的中间形态。微软和IBM都已经宣布了基于Kubernetes的服务,所以多容器应用程序可以在它们的云上被部署和管理。
|
||||
|
||||
为了给现有种类繁多的后端提供一致的接口,Docker团队已经引入[libswarm][35],它能用于集成众多的云和资源管理系统。libswarm阐明的目标之一是"通过切换服务来源避免被任何供应商锁定"。这是通过呈现一组一致的服务(与API相关联)来完成的,这些服务会接驳到特定后端的具体实现。例如,Docker服务器服务向Docker命令行工具展示Docker远程API,这样容器就可以被托管在一系列服务提供商那里。
|
||||
|
||||
基于Docker的新服务类型仍在起步阶段。总部位于伦敦的Orchard实验室曾提供Docker的托管服务,但Docker公司表示,收购Orchard后,该服务将不是优先事项。Docker公司也把之前DotCloud的PaaS业务出售给了cloudControl。基于更早的容器管理系统的服务(例如[OpenVZ][36])已经司空见惯,所以在一定程度上Docker需要向托管供应商证明其价值。
|
||||
|
||||
### Docker 及其发行版 ###
|
||||
|
||||
Docker已经成为大多数Linux发行版例如Ubuntu、Red Hat企业版(RHEL)和CentOS的一个标准功能。遗憾的是,发行版的更新节奏与Docker项目并不同步,所以发行版中的版本总是远远落后于最新可用版本。例如Ubuntu 14.04发布时携带的是Docker 0.9.1,Ubuntu升级至14.04.1时版本也没有变化(此时Docker已经升至1.1.2)。由于KDE的系统托盘也叫docker,官方库中还存在命名冲突;所以在Ubuntu 14.04中,相关安装包和命令行工具都使用"docker.io"命名。
|
||||
|
||||
在企业版Linux的世界中,情况也并无不同。CentOS 7携带的是Docker 0.11.1,这是Docker公司宣布Docker 1.0产品级版本之前的开发版。希望获得最新版本所承诺的稳定性、性能和安全性的Linux发行版用户,最好遵循[安装说明][37],使用Docker公司托管的软件库,而不是采用发行版自带的版本。
|
||||
|
||||
Docker的到来催生了新的Linux发行版,例如[CoreOS][38]和红帽的[Project Atomic][39],它们被设计为运行容器的最小环境。相比传统发行版,这些发行版携带更新的内核和Docker版本,对内存的使用和硬盘占用也更小。新的发行版还配备了用于大型部署的新工具,例如[fleet][40](一个分布式init系统)和用于元数据管理的[etcd][41]。它们也有更新发行版自身的新机制,使内核和Docker得以更新。这也意味着使用Docker的影响之一,是它淡化了对发行版及相关包管理方案的关注,而使Linux内核(即Docker子系统所使用的)变得更加重要。
|
||||
|
||||
新的发行版将是运行Docker的最好方式,但传统发行版及其包管理器对容器来说仍然非常重要。Docker Hub托管的官方镜像有Debian、Ubuntu和CentOS,也有一个"半官方"的Fedora镜像库。RHEL镜像在Docker Hub中不可用,因为它是由Red Hat直接发布的。这意味着Docker Hub的自动构建机制仅面向那些纯粹的开源发行版(并且这些发行版愿意信任Docker公司团队所维护镜像的出处)。
|
||||
|
||||
|
||||
Docker Hub集成了GitHub和Bitbucket等源代码控制系统用于自动构建,而构建过程中的包管理器会在构建规范(Dockerfile)与构建出的镜像之间产生一种复杂的关系。构建结果的非确定性并非Docker特有的问题,而是由包管理器的工作方式导致的:今天构建会得到某个版本,改天构建就可能得到更新的版本,这正是包管理器需要升级措施的原因。容器抽象(较少关注容器的内容)以及容器扩散(得益于轻量级的资源利用率)更有可能成为与Docker关联的痛点。
|
||||
|
||||
### Docker的未来 ###
|
||||
|
||||
Docker公司对核心功能(libcontainer)、跨服务管理(libswarm)和容器间的消息传递(libchan)的发展提出了明确的路线图。与此同时,公司已经表明愿意通过收购Orchard实验室来吸纳其生态系统。然而Docker不仅仅意味着Docker公司:随着项目的壮大,来自谷歌、IBM和Red Hat等大牌厂商的贡献者越来越多。在"仁慈的独裁者"、CTO Solomon Hykes掌舵下,公司和项目有着明确的技术领导关系。项目在前18个月通过成果输出展现了快速行动的能力,而且这种趋势没有减弱的迹象。
|
||||
|
||||
许多投资者正在关注10年前VMware公司ESX/vSphere平台的特性矩阵,试图找出由虚拟机普及所推动的企业预期与当前Docker生态系统之间的差距(和机会)。目前Docker生态系统在网络、存储和(对容器内容的)细粒度版本管理方面仍有欠缺,这些都为初创企业和现有厂商提供了机会。
|
||||
|
||||
随着时间的推移,虚拟机和容器(Docker的"运行"部分)之间的区别将变得不再重要,而关注点将会转移到"构建"和"交付"环节。这些变化将使"Docker会发生什么?"这个问题,远不如"Docker将会给IT产业带来什么?"重要。
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------

via: http://www.infoq.com/articles/docker-future

作者:[Chris Swan][a]
译者:[disylee](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.infoq.com/author/Chris-Swan
[1]:http://blog.dotcloud.com/dotcloud-paas-joins-cloudcontrol
[2]:http://www.infoq.com/news/2014/06/docker_1.0
[3]:https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt
[4]:http://selinuxproject.org/page/Main_Page
[5]:https://linuxcontainers.org/
[6]:http://blog.docker.com/2014/03/docker-0-9-introducing-execution-drivers-and-libcontainer/
[7]:http://aufs.sourceforge.net/aufs.html
[8]:https://docs.docker.com/reference/builder/
[9]:https://registry.hub.docker.com/
[10]:http://bodenr.blogspot.co.uk/2014/05/kvm-and-docker-lxc-benchmarking-with.html?m=1
[11]:http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/$File/rc25482.pdf
[12]:https://en.wikipedia.org/wiki/X86_virtualization#Hardware-assisted_virtualization
[13]:http://stealth.openwall.net/xSports/shocker.c
[14]:https://news.ycombinator.com/item?id=7910117
[15]:http://www.bromium.com/products/vsentry.html
[16]:http://cto.vmware.com/vmware-docker-better-together/
[17]:http://www.infoq.com/articles/docker-containers
[18]:http://docs.docker.com/articles/using_supervisord/
[19]:http://www.infoq.com/minibooks/emag-microservices
[20]:https://github.com/docker/libchan
[21]:https://gobyexample.com/channels
[22]:http://www.infoq.com/news/2014/08/clusterhq-launch-flocker
[23]:http://www.fig.sh/
[24]:http://openshift.github.io/geard/
[25]:http://panamax.io/
[26]:http://decking.io/
[27]:https://github.com/newrelic/centurion
[28]:https://github.com/GoogleCloudPlatform/kubernetes
[29]:https://mesosphere.io/2013/09/26/docker-on-mesos/
[30]:http://mesos.apache.org/
[31]:https://github.com/mesosphere/marathon
[32]:http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/41684.pdf
[33]:http://deis.io/
[34]:https://flynn.io/
[35]:https://github.com/docker/libswarm
[36]:http://openvz.org/Main_Page
[37]:https://docs.docker.com/installation/#installation
[38]:https://coreos.com/
[39]:http://www.projectatomic.io/
[40]:https://github.com/coreos/fleet
[41]:https://github.com/coreos/etcd
@ -0,0 +1,143 @@
20条Linux命令面试问答
================================================================================
**问:1 如何查看当前的Linux服务器的运行级别?**

答: ‘who -r’ 和 ‘runlevel’ 命令可以用来查看当前的Linux服务器的运行级别。

**问:2 如何查看Linux的默认网关?**

答: 用 “route -n” 和 “netstat -nr” 命令,我们可以查看默认网关。除了默认的网关信息,这两个命令还可以显示当前的路由表。

**问:3 如何在Linux上重建初始化内存盘镜像文件?**

答: 在CentOS 5.X / RHEL 5.X中,可以用mkinitrd命令来创建初始化内存盘文件,举例如下:

    # mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)

如果你想要给特定的内核版本创建初始化内存盘,就用所需的内核名替换掉 ‘uname -r’ 。

在CentOS 6.X / RHEL 6.X中,则用dracut命令来创建初始化内存盘文件,举例如下:

    # dracut -f

以上命令给当前的内核版本创建初始化内存盘;给特定的内核版本重建初始化内存盘文件则使用以下命令:

    # dracut -f initramfs-2.x.xx-xx.el6.x86_64.img 2.x.xx-xx.el6.x86_64

**问:4 cpio命令是什么?**

答: cpio 意为“copy in and copy out”(复制入和复制出)。它可以把文件复制到归档文件中、列出归档的内容,也可以从归档文件(或单个文件)中提取文件。

**问:5 patch命令是什么?如何使用?**

答: 顾名思义,patch命令就是用来给文件打补丁的。patch命令通常接收diff的输出,把文件的旧版本转换为新版本。举个例子,Linux内核源代码由上百万行代码构成,所以任何代码贡献者只需发送改动的部分而不是整个源代码,接收者再用patch命令把改动应用到原始的源代码上。

创建一个供patch使用的diff文件:

    # diff -Naur old_file new_file > diff_file

旧文件和新文件要么都是单个文件,要么都是包含文件的目录,-r参数支持目录树递归。

一旦diff文件创建好,我们就能在旧的文件上打上补丁,把它变成新文件:

    # patch < diff_file
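下面用一个极简的可运行示例演示上述 diff/patch 流程(其中的目录 /tmp/patch-demo 和文件名均为示例):

```shell
# 准备一个演示目录,以及一对旧/新文件
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
printf 'line1\nline2\n' > old.txt
printf 'line1\nline2 changed\n' > new.txt

# 生成补丁;diff 在文件有差异时返回 1,属于正常情况
diff -Naur old.txt new.txt > demo.patch || true

# 把补丁应用到旧文件上,old.txt 的内容随之变成新版本
patch old.txt < demo.patch

# 验证:打补丁后的 old.txt 与 new.txt 内容完全一致
cmp old.txt new.txt && echo "patched OK"
```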

**问:6 aspell有什么用?**

答: 顾名思义,aspell就是Linux操作系统上的一款交互式拼写检查器。aspell继承自更早的一个名为ispell的程序,可以作为它的替代品,而且非常好用。虽然aspell主要被其它一些需要拼写检查能力的程序调用,但它作为命令行中的独立工具同样十分有效。

**问:7 如何从命令行查看域名的SPF记录?**

答: 我们可以用dig命令来查看域名的SPF记录。举例如下:

    linuxtechi@localhost:~$ dig -t TXT google.com

**问:8 如何识别Linux系统中指定文件(/etc/fstab)的关联包?**

答: # rpm -qf /etc/fstab

以上命令能列出提供“/etc/fstab”文件的包。

**问:9 哪条命令用来查看bond0的状态?**

答: cat /proc/net/bonding/bond0

**问:10 Linux系统中的/proc文件系统有什么用?**

答: /proc文件系统是一个基于内存(RAM)的虚拟文件系统,它维护着当前运行内核的状态信息,包括CPU、内存、分区、I/O地址、直接内存访问(DMA)通道和正在运行的进程。这个文件系统中的各种文件并不实际存储信息,而是指向内存里的信息。/proc文件系统由系统自动维护。

**问:11 如何在/usr目录下找出大小超过10MB的文件?**

答: # find /usr -size +10M

**问:12 如何在/home目录下找出120天之前被修改过的文件?**

答: # find /home -mtime +120

**问:13 如何在/var目录下找出90天之内未被访问过的文件?**

答: # find /var \! -atime -90

**问:14 在整个目录树下查找core文件,如发现则删除它们且不提示确认信息。**

答: # find / -name core -exec rm {} \;
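上面这些 find 用法可以在一个临时目录里安全地验证(目录和文件名均为示例):

```shell
# 构造测试目录:一个 11MB 的大文件和一个小文件
mkdir -p /tmp/find-demo
dd if=/dev/zero of=/tmp/find-demo/big.bin bs=1M count=11 2>/dev/null
touch /tmp/find-demo/small.txt

# 找出大于 10MB 的文件(对应问题 11),只会列出 big.bin
find /tmp/find-demo -size +10M

# 把 small.txt 的修改时间改到 200 天前,再按 -mtime 查找(对应问题 12)
touch -d "200 days ago" /tmp/find-demo/small.txt
find /tmp/find-demo -type f -mtime +120
```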

**问:15 strings命令有什么作用?**

答: strings命令用来提取并显示非文本文件中可打印的字符串内容。

**问:16 tee 过滤器有什么作用?**

答: tee 过滤器用来把输出发送到多个目标。配合管道使用时,它可以把输出的一份拷贝写入文件,同时把另一份显示在屏幕上(或传给其它程序)。

    linuxtechi@localhost:~$ ll /etc | nl | tee /tmp/ll.out

在以上例子中,ll 的输出被保存到了 /tmp/ll.out 文件中,同时也显示在了屏幕上。
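tee 的效果可以用一条更简单的管道验证(/tmp/tee-demo.out 为示例文件名):

```shell
# echo 的输出一份写入文件,一份继续沿管道传给 wc 统计行数
echo "hello tee" | tee /tmp/tee-demo.out | wc -l

# 文件里保存了同样的内容
cat /tmp/tee-demo.out
```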

**问:17 export PS1=”$LOGNAME@`hostname`:\$PWD: ” 这条命令是在做什么?**

答: 这条export命令会更改登录提示符,用来显示用户名、本机名和当前工作目录。

**问:18 ll | awk ‘{print $3,”owns”,$9}’ 这条命令是在做什么?**

答: 这条命令会显示这些文件的属主和文件名。
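awk 按列取字段的行为可以脱离 ll 单独验证——下面用 printf 模拟两行“长格式”输出(其中的用户名和文件名为虚构示例):

```shell
# 模拟 ls -l 风格的输出:第3列是属主,第9列是文件名
printf '%s\n' \
  '-rw-r--r-- 1 alice users 120 Jan 1 10:00 notes.txt' \
  '-rw-r--r-- 1 bob   users 300 Jan 2 11:00 todo.txt' \
  | awk '{print $3,"owns",$9}'
# 输出:
# alice owns notes.txt
# bob owns todo.txt
```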

**问:19 Linux中的at命令有什么用?**

答: at命令用来安排一个任务在未来的某个时间执行一次。所有提交的任务都被放在 /var/spool/at 目录下,到了执行时间再由atd守护进程执行。

**问:20 Linux中lspci命令的作用是什么?**

答: lspci命令用来显示你的系统上PCI总线和附加设备的信息。指定-v、-vv或-vvv可以获取更详细的输出,加上-r参数的话,命令的输出则更具可读性。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.linuxtechi.com/20-linux-commands-interview-questions-answers/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxtechi.com/author/pradeep/
|
@ -0,0 +1,105 @@
|
||||
dupeGuru - 直接从硬盘中查找并移除重复文件
|
||||
================================================================================
|
||||
|
||||
### 简介 ###
|
||||
|
||||
对我们来说,磁盘被装满是一个较大的困扰。无论我们如何小心谨慎,我们总可能将相同的文件复制到多个不同的地方,或者在不知情的情况下,重复下载了同一个文件。因此,迟早你会看到“磁盘已满”的错误提示,若此时我们确实需要一些磁盘空间来存储重要数据,以上情形无疑是最糟糕的。假如你确信自己的系统中有重复文件,那么 **dupeGuru** 可能会帮助到你。
|
||||
|
||||
dupeGuru 团队也开发了名为 **dupeGuru 音乐版** 的应用来移除重复的音乐文件,和名为 **dupeGuru 图片版** 的应用来移除重复的图片文件。
|
||||
|
||||
### 1. dupeGuru (标准版) ###
|
||||
|
||||
对于那些不熟悉 [dupeGuru][1] 的人来说,它是一个自由、开源、跨平台的应用,用途是在系统中查找和移除重复文件。它可以在 Linux、Windows 和 Mac OS X 等平台下使用。通过使用快速的模糊匹配算法,它可以在几分钟内找到重复文件。同时,你还可以调整 dupeGuru,让它精确查找特定文件类型的重复文件,并从要删除的文件中排除某些特定文件。它支持英语、法语、德语、中文(简体)、捷克语、意大利语、亚美尼亚语、俄语、乌克兰语、巴西葡萄牙语和越南语。
|
||||
|
||||
#### 在 Ubuntu 14.10/14.04/13.10/13.04/12.04 中安装 dupeGuru ####
|
||||
|
||||
dupeGuru 开发者已经构建了一个 Ubuntu PPA (Personal Package Archives)来简化安装过程。为了安装 dupeGuru,依次在终端中键入以下命令:
|
||||
|
||||
```
|
||||
sudo apt-add-repository ppa:hsoft/ppa
|
||||
sudo apt-get update
|
||||
sudo apt-get install dupeguru-se
|
||||
```
|
||||
|
||||
### 使用 ###
|
||||
|
||||
使用非常简单,可从 Unity 面板或菜单中启动 dupeGuru 。
|
||||
|
||||
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/dupeGuru_007.png)
|
||||
|
||||
点击位于底部的 `+` 按钮来添加你想扫描的文件目录。点击 `扫描` 按钮开始查找重复文件。
|
||||
|
||||
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/dupeGuru_008.png)
|
||||
|
||||
一旦所选目录中含有重复文件,则它将在窗口中展示重复文件。正如你所看到的,在下面的截图中,我的下载目录中有一个重复文件。
|
||||
|
||||
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/dupeGuru-Results_009.png)
|
||||
|
||||
现在,你可以决定下一步如何操作。你可以删除这个重复的文件,或者对它进行重命名,抑或是 复制/移动 这个文件到另一个位置。为此,选定该重复文件,或 在菜单栏中选定写有“**仅显示重复**”选项 ,如果你选择了“**仅显示重复**”选项,则只有重复文件在窗口中可见,这样你便可以轻易地选择并删除这些文件。点击“操作”下拉菜单,最后选择你将执行的操作。在这里,我只想删除重复文件,所以我选择了“移动标记文件到垃圾箱”这个选项。
|
||||
|
||||
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/Menu_010.png)
|
||||
|
||||
接着,点击“继续”选项来移除重复文件。
|
||||
|
||||
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/Deletion-Options_011.png)
|
||||
|
||||
### 2. dupeGuru 音乐版 ###
|
||||
|
||||
[dupeGuru 音乐版][2] 或 简称 dupeGuru ME ,它的功能与 dupeGuru 类似。它拥有 dupeGuru 的所有功能,但它包含更多的信息列 (如比特率,持续时间,标签等)和更多的扫描类型(如带有字段的文件名,标签以及音频内容)。同 dupeGuru 一样, dupeGuru ME 也运行在 Linux, Windows, 和 Mac OS X 中。
|
||||
|
||||
它支持众多的格式,诸如 MP3、WMA、AAC(iTunes 格式)、OGG、FLAC,以及无损的 AAC 和 WMA 格式等。
|
||||
|
||||
#### 在 Ubuntu 14.10/14.04/13.10/13.04/12.04 中安装 dupeGuru ME ####
|
||||
|
||||
现在,我们不必再添加任何 PPA,因为在前面的步骤中,我们已经进行了添加。所以在终端中键入以下命令来安装它:
|
||||
|
||||
```
|
||||
sudo apt-get install dupeguru-me
|
||||
```
|
||||
|
||||
### 使用 ###
|
||||
|
||||
你可以从 Unity 面板或菜单中启动它。dupeGuru ME 的使用方法,操作界面和外观与正常的 dupeGuru 类似。添加你想扫描的目录并选择你想执行的操作。重复的音乐文件就会被删除。
|
||||
|
||||
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/dupeGuru-Music-Edition-Results_012.png)
|
||||
|
||||
### 3. dupeGuru 图片版 ###
|
||||
|
||||
[dupeGuru 图片版][3],或简称为 dupeGuru PE,是一个在你的电脑中查找重复图片的工具。它与 dupeGuru 类似,但专门针对图片的重复匹配做了增强。dupeGuru PE 可运行在 Linux、Windows 和 Mac OS X 中。
|
||||
|
||||
dupeGuru PE 支持 JPG, PNG, TIFF, GIF 和 BMP 等图片格式。所有的这些格式可以被同时比较。Mac OS X 版的 dupeGuru PE 还支持 PSD 和 RAW (CR2 和 NEF) 格式。
|
||||
|
||||
#### 在 Ubuntu 14.10/14.04/13.10/13.04/12.04 中安装 dupeGuru PE ####
|
||||
|
||||
由于我们已经添加了 PPA, 我们也不必为 dupeGuru PE 添加 PPA。只需运行如下命令来安装它。
|
||||
|
||||
```
|
||||
sudo apt-get install dupeguru-pe
|
||||
```
|
||||
|
||||
#### 使用 ####
|
||||
|
||||
就使用方法,操作界面和外观而言,它与 dupeGuru ,dupeGuru ME 类似。我就纳闷为什么开发者为不同的类别开发了不同的版本。我想如果开发一个结合以上三个版本功能的应用,或许会更好。
|
||||
|
||||
启动它,添加你想扫描的目录,并选择你想执行的操作。就这样,你的重复文件将消失。
|
||||
|
||||
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/11/dupeGuru-Picture-Edition-Results_014.png)
|
||||
|
||||
如若因为任何的安全问题而不能移除某些重复文件,请记下这些文件的位置,通过终端或文件管理器来手动删除它们。
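如果你更习惯命令行,也可以不借助 dupeGuru,用校验和粗略地找出内容完全相同的文件。下面是一个基于 md5sum 的最小示意(目录 /tmp/dupe-demo 为示例;它只按内容哈希分组,仅作演示,删除前请自行核对结果):

```shell
# 构造一对内容相同的文件和一个内容不同的文件
mkdir -p /tmp/dupe-demo
echo "same content" > /tmp/dupe-demo/a.txt
echo "same content" > /tmp/dupe-demo/b.txt
echo "unique" > /tmp/dupe-demo/c.txt

# 计算每个文件的 MD5,排序后只打印哈希(前32个字符)重复的行
find /tmp/dupe-demo -type f -exec md5sum {} + \
  | sort | uniq -w32 -D
```

输出中成组出现的行就是内容相同的文件(这里是 a.txt 和 b.txt)。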
|
||||
|
||||
欢呼吧!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/dupeguru-find-remove-duplicate-files-instantly-hard-drive/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/sk/
|
||||
[1]:http://www.hardcoded.net/dupeguru/
|
||||
[2]:http://www.hardcoded.net/dupeguru_me/
|
||||
[3]:http://www.hardcoded.net/dupeguru_pe/
|
@ -1,33 +1,32 @@
|
||||
[bazz2222222222]
|
||||
Linux Namespaces
|
||||
Linux 命名空间
|
||||
================================================================================
|
||||
### Background ###
|
||||
### 背景 ###
|
||||
|
||||
Starting from kernel 2.6.24, Linux supports 6 different types of namespaces. Namespaces are useful in creating processes that are more isolated from the rest of the system, without needing to use full low level virtualization technology.
|
||||
从2.6.24版的内核开始,Linux 就支持6种不同类型的命名空间。它们的出现,使用户创建的进程能够与系统分离得更加彻底,从而不需要考虑太多底层的虚拟化技术。
|
||||
|
||||
- **CLONE_NEWIPC**: IPC Namespaces: SystemV IPC and POSIX Message Queues can be isolated.
|
||||
- **CLONE_NEWPID**: PID Namespaces: PIDs are isolated, meaning that a virtual PID inside of the namespace can conflict with a PID outside of the namespace. PIDs inside the namespace will be mapped to other PIDs outside of the namespace. The first PID inside the namespace will be '1' which outside of the namespace is assigned to init
|
||||
- **CLONE_NEWNET**: Network Namespaces: Networking (/proc/net, IPs, interfaces and routes) are isolated. Services can be run on the same ports within namespaces, and "duplicate" virtual interfaces can be created.
|
||||
- **CLONE_NEWNS**: Mount Namespaces. We have the ability to isolate mount points as they appear to processes. Using mount namespaces, we can achieve similar functionality to chroot() however with improved security.
|
||||
- **CLONE_NEWUTS**: UTS Namespaces. This namespaces primary purpose is to isolate the hostname and NIS name.
|
||||
- **CLONE_NEWUSER**: User Namespaces. Here, user and group IDs are different inside and outside of namespaces and can be duplicated.
|
||||
- **CLONE_NEWIPC**: 进程间通信(IPC)的命名空间,可以将 SystemV 的 IPC 和 POSIX 的消息队列独立出来。
|
||||
- **CLONE_NEWPID**: 进程 ID 的命名空间,进程 ID 独立,意思就是命名空间内的进程 ID 可能会与命名空间外的进程 ID 冲突,于是命名空间内的进程 ID 映射到命名空间外时会使用另外一个进程 ID。比如说,命名空间内 ID 为1的进程,在命名空间外就是指 init 进程。
|
||||
- **CLONE_NEWNET**: 网络命名空间,用于隔离网络资源(/proc/net、IP 地址、网卡、路由等)。后台进程可以运行在不同命名空间内的相同端口上,用户还可以虚拟出一块网卡。
|
||||
- **CLONE_NEWNS**: 挂载命名空间,进程运行时可以将挂载点与系统分离,使用这个功能时,我们可以达到 chroot 的功能,而在安全性方面比 chroot 更高。
|
||||
- **CLONE_NEWUTS**: UTS 命名空间,主要目的是独立出主机名和网络信息服务(NIS)。
|
||||
- **CLONE_NEWUSER**: 用户命名空间,同进程 ID 一样,用户 ID 和组 ID 在命名空间内外是不一样的,并且在不同命名空间内可以存在相同的 ID。
|
||||
|
||||
Let's look first at the structure of a C program, required to demonstrate process namespaces. The following has been tested on Debian 6 and 7. First, we need to allocate a page of memory on the stack, and set a pointer to the end of that memory page. We use **alloca** to allocate stack memory rather than malloc which would allocate memory on the heap.
|
||||
本文用 C 语言介绍上述概念,因为演示进程命名空间的时候需要用到 C 语言。下面的测试过程在 Debian 6 和 Debian 7 上执行。首先,在栈内分配一页内存空间,并将指针指向内存页的末尾。这里我们使用 **alloca()** 函数来分配内存,不要用 malloc() 函数,它会把内存分配在堆上。
|
||||
|
||||
void *mem = alloca(sysconf(_SC_PAGESIZE)) + sysconf(_SC_PAGESIZE);
|
||||
|
||||
Next, we use **clone** to create a child process, passing the location of our child stack 'mem', as well as the required flags to specify a new namespace. We specify 'callee' as the function to execute within the child space:
|
||||
然后使用 **clone()** 函数创建子进程,传入栈空间的地址 "mem",以及指定命名空间的标记。同时我们还指定“callee”作为子进程运行的函数。
|
||||
|
||||
mypid = clone(callee, mem, SIGCHLD | CLONE_NEWIPC | CLONE_NEWPID | CLONE_NEWNS | CLONE_FILES, NULL);
|
||||
|
||||
After calling **clone** we then wait for the child process to finish, before terminating the parent. If not, the parent execution flow will continue and terminate immediately after, clearing up the child with it:
|
||||
调用 **clone** 之后,父进程要等待子进程先退出,否则父进程会继续往下执行并随即终止,把子进程也一并清理掉:
|
||||
|
||||
while (waitpid(mypid, &r, 0) < 0 && errno == EINTR)
|
||||
{
|
||||
continue;
|
||||
}
|
||||
|
||||
Lastly, we'll return to the shell with the exit code of the child:
|
||||
最后当子进程退出后,我们会回到 shell 界面。
|
||||
|
||||
if (WIFEXITED(r))
|
||||
{
|
||||
@ -35,7 +34,7 @@ Lastly, we'll return to the shell with the exit code of the child:
|
||||
}
|
||||
return EXIT_FAILURE;
|
||||
|
||||
Now, let's look at the **callee** function:
|
||||
上文介绍的 **callee** 函数功能如下:
|
||||
|
||||
static int callee()
|
||||
{
|
||||
@ -48,7 +47,7 @@ Now, let's look at the **callee** function:
|
||||
return ret;
|
||||
}
|
||||
|
||||
Here, we mount a **/proc** filesystem, and then set the uid (User ID) and gid (Group ID) to the value of 'u' before spawning the **/bin/bash** shell. [LXC][1] is an OS level virtualization tool utilizing cgroups and namespaces for resource isolation. Let's put it all together, setting 'u' to 65534 which is user "nobody" and group "nogroup" on Debian:
|
||||
程序挂载 **/proc** 文件系统,设置用户 ID 和组 ID,值都为“u”,然后运行 **/bin/bash** 程序,[LXC][1] 是操作系统级的虚拟化工具,使用 cgroups 和命名空间来完成资源的分离。现在我们把所有代码放在一起,变量“u”的值设为65534,在 Debian 系统中,这是“nobody”和“nogroup”:
|
||||
|
||||
#define _GNU_SOURCE
|
||||
#include <unistd.h>
|
||||
@ -90,7 +89,7 @@ Here, we mount a **/proc** filesystem, and then set the uid (User ID) and gid (G
|
||||
return ret;
|
||||
}
|
||||
|
||||
To execute the code produces the following:
|
||||
执行以下命令来运行上面的代码:
|
||||
|
||||
root@w:~/pen/tmp# gcc -O -Wall -Werror -ansi -o ns ns.c
|
||||
root@w:~/pen/tmp# ./ns
|
||||
@ -102,18 +101,18 @@ To execute the code produces the following:
|
||||
nobody 5 0.0 0.0 2784 1064 pts/1 R+ 21:21 0:00 ps auxw
|
||||
nobody@w:~/pen/tmp$
|
||||
|
||||
Notice that the UID and GID are set to that of nobody and nogroup. Specifically notice that the full ps output shows only two running processes and that their PIDs are 1 and 5 respectively. Now, let's move on to using ip netns to work with network namespaces. First, let's confirm that no namespaces exist currently:
|
||||
注意上面的结果:UID 和 GID 被设置成了 nobody 和 nogroup,而且 ps 只输出了两个进程,它们的 ID 分别是1和5(LCTT注:这就是上文介绍 CLONE_NEWPID 时提到的功能,在新的命名空间内,第一个进程的 ID 是1;而命名空间外 ID 为1的进程一直是 init)。接下来轮到使用 ip netns 来操作网络命名空间。第一步先确认当前系统没有命名空间:
|
||||
|
||||
root@w:~# ip netns list
|
||||
Object "netns" is unknown, try "ip help".
|
||||
|
||||
In this case, either ip needs an upgrade, or the kernel does. Assuming you have a kernel newer than 2.6.24, it's most likely **ip**. After upgrading, **ip netns list** should by default return nothing. Let's add a new namespace called 'ns1':
|
||||
这种情况下,要么需要升级内核,要么需要升级 ip 工具。这里假设你的内核版本高于2.6.24,那么多半是 **ip** 工具需要升级(LCTT注:ip 工具由 iproute 安装包提供,此安装包版本与内核版本相近)。升级好后,在没有命名空间存在的情况下,**ip netns list** 不会输出任何信息。添加一个名为“ns1”的命名空间看看:
|
||||
|
||||
root@w:~# ip netns add ns1
|
||||
root@w:~# ip netns list
|
||||
ns1
|
||||
|
||||
First, let's list the current interfaces:
|
||||
列出网卡:
|
||||
|
||||
root@w:~# ip link list
|
||||
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
|
||||
@ -121,7 +120,7 @@ First, let's list the current interfaces:
|
||||
2: eth0: mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000
|
||||
link/ether 00:0c:29:65:25:9e brd ff:ff:ff:ff:ff:ff
|
||||
|
||||
Now to create a new virtual interface, and add it to our new namespace. Virtual interfaces are created in pairs, and are linked to each other - imagine a virtual crossover cable:
|
||||
创建新的虚拟网卡,加到命名空间。虚拟网卡需要成对创建,互相关联——想想交叉电缆吧:
|
||||
|
||||
root@w:~# ip link add veth0 type veth peer name veth1
|
||||
root@w:~# ip link list
|
||||
@ -134,9 +133,9 @@ Now to create a new virtual interface, and add it to our new namespace. Virtual
|
||||
4: veth0: mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
|
||||
link/ether f2:f7:5e:e2:22:ac brd ff:ff:ff:ff:ff:ff
|
||||
|
||||
**ifconfig** -a will also now show the addition of both veth0 and veth1.
|
||||
这个时候 **ifconfig** -a 命令也能显示新添加的 veth0 和 veth1 两块网卡。
|
||||
|
||||
Great, now to assign our new interfaces to the namespace. Note that ip **netns exec** is used to execute commands within the namespace:
|
||||
很好,现在将这两块网卡加入命名空间。注意一下,下面的 ip **netns exec** 命令用于将后面的命令放到指定的命名空间中执行(LCTT注:下面的结果显示,在 ns1 这个网络命名空间中,只存在 lo 和 veth1 两块网卡):
|
||||
|
||||
root@w:~# ip link set veth1 netns ns1
|
||||
root@w:~# ip netns exec ns1 ip link list
|
||||
@ -145,21 +144,21 @@ Great, now to assign our new interfaces to the namespace. Note that ip **netns e
|
||||
3: veth1: mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
|
||||
link/ether d2:e9:52:18:19:ab brd ff:ff:ff:ff:ff:ff
|
||||
|
||||
**ifconfig** -a will now only show veth0, as veth1 is in the ns1 namespace.
|
||||
这个时候 **ifconfig** -a 命令只能显示 veth0,不能显示 veth1,因为后者现在在 ns1 命名空间中。
|
||||
|
||||
Should we want to delete veth0/veth1:
|
||||
如果想删除 veth1,可以执行下面的命令:
|
||||
|
||||
ip netns exec ns1 ip link del veth1
|
||||
|
||||
We can now assign IP address 192.168.5.5/24 to veth0 on our host:
|
||||
为 veth0 分配 IP 地址:
|
||||
|
||||
ifconfig veth0 192.168.5.5/24
|
||||
|
||||
And assign veth1 192.168.5.10/24 within ns1:
|
||||
在命名空间内为 veth1 分配 IP 地址:
|
||||
|
||||
ip netns exec ns1 ifconfig veth1 192.168.5.10/24 up
|
||||
|
||||
To execute ip addr **list** on both our host and within our namespace:
|
||||
在命名空间内外执行 ip addr **list** 命令:
|
||||
|
||||
root@w:~# ip addr list
|
||||
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
|
||||
@ -186,7 +185,7 @@ To execute ip addr **list** on both our host and within our namespace:
|
||||
inet6 fe80::10bd:b6ff:fe76:a6eb/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
|
||||
To view routing tables inside and outside of the namespace:
|
||||
在命名空间内外查看路由表:
|
||||
|
||||
root@w:~# ip route list
|
||||
default via 192.168.3.1 dev eth0 proto static
|
||||
@ -195,7 +194,7 @@ To view routing tables inside and outside of the namespace:
|
||||
root@w:~# ip netns exec ns1 ip route list
|
||||
192.168.5.0/24 dev veth1 proto kernel scope link src 192.168.5.10
|
||||
|
||||
Lastly, to connect our physical and virtual interfaces, we'll require a bridge. Let's bridge eth0 and veth0 on the host, and then use DHCP to gain an IP within the ns1 namespace:
|
||||
最后,将虚拟网卡连到物理网卡上,我们需要用到桥接。这里做的是将 veth0 桥接到 eth0,而 ns1 命名空间内则使用 DHCP 自动获取 IP 地址:
|
||||
|
||||
root@w:~# brctl addbr br0
|
||||
root@w:~# brctl addif br0 eth0
|
||||
@ -210,7 +209,7 @@ Lastly, to connect our physical and virtual interfaces, we'll require a bridge.
|
||||
inet6 fe80::20c:29ff:fe65:259e/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
|
||||
br0 has been assigned an IP of 192.168.3.122/24. Now for the namespace:
|
||||
为网桥 br0 分配的 IP 地址为192.168.3.122/24。接下来为命名空间分配地址:
|
||||
|
||||
root@w:~# ip netns exec ns1 dhclient veth1
|
||||
root@w:~# ip netns exec ns1 ip addr list
|
||||
@ -222,17 +221,19 @@ br0 has been assigned an IP of 192.168.3.122/24. Now for the namespace:
|
||||
inet6 fe80::10bd:b6ff:fe76:a6eb/64 scope link
|
||||
valid_lft forever preferred_lft forever
|
||||
|
||||
Excellent! veth1 has been assigned 192.168.3.248/24
|
||||
现在, veth1 的 IP 被设置成 192.168.3.248/24 了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.howtoforge.com/linux-namespaces
|
||||
|
||||
作者:[aziods][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[bazz2](https://github.com/bazz2)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.howtoforge.com/forums/private.php?do=newpm&u=138952
|
||||
[1]:http://en.wikipedia.org/wiki/LXC
|
||||
|
||||
|
@ -1,21 +1,21 @@
|
||||
Ubuntu 14.04 Apache2.2迁移2.4问题解决
|
||||
小贴士:在 Ubuntu 14.04 中Apache从2.2迁移到2.4的问题
|
||||
================================================================================
|
||||
如果你进行了一次**Ubuntu**从12.04到14.04的升级,那么这次升级还包括了一个重大的升级--**Apache**从2.2版本到2.4版本。**Apache**的这次升级带来了许多性能提升,但是如果继续使用2.2的配置会导致很多错误。
|
||||
如果你进行了一次**Ubuntu**从12.04到14.04的升级,那么它还包括了一个重大的升级--**Apache**从2.2版本升级到2.4版本。**Apache**的这次升级带来了许多性能提升,但是如果继续使用2.2的配置文件会导致很多错误。
|
||||
|
||||
### 访问控制的改变 ###
|
||||
|
||||
从**Apache 2.4**起,授权(authorization)开始启用,比起2.2的一个检查一个数据存储,授权更加灵活。过去很难确定那些命令授权应用了,但是授权(authorization)的引入解决了这些问题,现在,配置可以控制什么时候授权方法被调用,什么条件决定何时授权访问。
|
||||
从**Apache 2.4**起,授权(authorization)开始启用,比起2.2的一个检查一个数据存储,授权更加灵活。过去很难确定授权如何并且以什么样的顺序被应用,但是授权容器指令的介绍解决了这些问题,现在,配置可以控制什么时候授权方法被调用,什么条件决定何时授权访问。
|
||||
|
||||
这就是为什么大多数升级失败源于配置错误:2.2 的访问控制基于客户端的 IP 地址、主机名等特征,使用 Order 指令配合 Allow、Deny 或 Satisfy 来设置;而在 2.4 中,这些检查全部由新的授权(authorization)机制来完成。
|
||||
|
||||
为了弄清楚这些,可以来看一些虚拟主机的例子,这些可以在/etc/apache2/sites-enabled/default 或者 /etc/apache2/sites-enabled/网页名称 中找到:
|
||||
为了弄清楚这些,可以来看一些虚拟主机的例子,这些可以在/etc/apache2/sites-enabled/default 或者 /etc/apache2/sites-enabled/你的网页名称 中找到:
|
||||
|
||||
老2.2虚拟主机配置:
|
||||
旧的2.2虚拟主机配置:
|
||||
|
||||
Order allow,deny
|
||||
Allow from all
|
||||
|
||||
新2.4虚拟主机配置:
|
||||
新的2.4虚拟主机配置:
|
||||
|
||||
Require all granted
|
||||
|
||||
@ -23,24 +23,24 @@ Ubuntu 14.04 Apache2.2迁移2.4问题解决
|
||||
|
||||
### .htaccess 问题 ###
|
||||
|
||||
升级后如果一些设置不执行或者得到重定向错误,检查是否这些设置是在.htaccess文件中。如果是,2.4已经不再使用.htaccess文件,在2.4中默认使用AllowOverride指令来设置,因此忽略了.htaccess文件。你需要做的就是改变和增加AllowOverride All命令到你的页面配置文件中。
|
||||
升级后如果一些设置不执行或者得到重定向错误,检查是否这些设置是在.htaccess文件中。如果是,2.4已经不再使用.htaccess文件,在2.4中默认使用AllowOverride指令来设置,因此忽略了.htaccess文件。你需要做的全部就是改变或者添加AllowOverride All命令到你的网站配置文件中。
|
||||
|
||||
上面截图中,可以看见AllowOverride All指令。
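这里给出一个迁移后虚拟主机配置的简单示意(其中的域名和路径仅为示例,请按你的站点实际情况调整):

    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /var/www/example

        <Directory /var/www/example>
            # 2.4 中要让 .htaccess 生效需要显式开启
            AllowOverride All
            # 替代 2.2 的 "Order allow,deny" + "Allow from all"
            Require all granted
        </Directory>
    </VirtualHost>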
|
||||
|
||||
### 丢失配置文件或者模块 ###
|
||||
|
||||
根据我的经验,这次升级带了其他问题就是老模块和配置文件不再需要或者不被支持了。所以你必须十分清楚Apache不再支持的各种文件,并且在老配置中移除这些老模块来解决问题。之后你可以搜索和安装相似的模块来替代。
|
||||
根据我的经验,这次升级带来的另一个问题就是在2.4中旧模块和配置文件不再需要或者不被支持了。你将会收到一条“Apache不能包含这个相应文件”的明确警告,你需要做的是在配置文件中移除这些导致问题的命令行。之后你可以搜索和安装相似的模块来替代。
|
||||
|
||||
### 其他需要的知道的小改变 ###
|
||||
### 其他需要了解的小改变 ###
|
||||
|
||||
这里还有一些其他改变的需要考虑,虽然这些通常只会发生警告,而不是错误。
|
||||
|
||||
- MaxClients重命名为MaxRequestWorkers,使之有更准确的描述。而异步MPM,如event,客服端最大连接数不量比与工作线程数。老名字依然支持。
|
||||
- MaxClients重命名为MaxRequestWorkers,使之有更准确的描述。对于异步MPM(如event),客户端最大连接数并不等同于工作线程数。旧的名字依然支持。
|
||||
- DefaultType指令已被禁用,使用它不再有任何效果,需要用其他配置设定来替代它
|
||||
- EnableSendfile默认关闭
|
||||
- FileETag 默认"MTime Size"(没有INode)
|
||||
- KeepAlive 只接受On或Off值。之前的任何值不是Off或者0都认为是On
|
||||
- Mutex 替代 Directives AcceptMutex, LockFile, RewriteLock, SSLMutex, SSLStaplingMutex, 和 WatchdogMutexPath 。需要删除或者替代所有2.2老配置的设置。
|
||||
- KeepAlive 只接受“On”或“Off”值。之前的任何不是“Off”或者“0”的值都被认为是“On”
|
||||
- Mutex 替代了 AcceptMutex、LockFile、RewriteLock、SSLMutex、SSLStaplingMutex 和 WatchdogMutexPath 等指令。你需要评估这些被替换的指令在2.2配置中的使用情况,来决定是删除它们还是用 Mutex 替代。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -48,7 +48,7 @@ via: http://linoxide.com/linux-how-to/apache-migration-2-2-to-2-4-ubuntu-14-04/
|
||||
|
||||
作者:[Adrian Dinu][a]
|
||||
译者:[Vic020/VicYu](http://vicyu.net)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[Caroline](https://github.com/carolinewuyan)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
|
@ -1,17 +1,16 @@
|
||||
translating by mtunique
|
||||
Linux FAQs with Answers--How to check disk space on Linux with df command
|
||||
Linux有问必答:在Linux下如何用df命令检查磁盘空间?
|
||||
================================================================================
|
||||
> **Question**: I know I can use df command to check a file system's disk space usage on Linux. Can you show me practical examples of the df command so that I can make the most out of it?
|
||||
> **问题**: 我知道在Linux上可以用df命令来查看文件系统的磁盘空间使用情况。你能给出一些df命令的实际例子,好让我能充分地利用它吗?
|
||||
|
||||
As far as disk storage is concerned, there are many command-line or GUI-based tools that can tell you about current disk space usage. These tools report on detailed disk utilization in various human-readable formats, such as easy-to-understand summary, detailed statistics, or [intuitive visualization][1]. If you simply want to know how much free disk space is available for different file systems, then df command is probably all you need.
|
||||
在磁盘存储方面,有许多命令行或基于GUI的工具可以告诉你当前的磁盘空间使用情况。这些工具以各种人类可读的格式展示磁盘利用率的详细信息,比如易于理解的摘要、详细的统计信息或直观的[可视化报告][1]。如果你只想知道不同文件系统还有多少空闲磁盘空间,那么df命令可能正是你所需要的。
|
||||
|
||||
![](https://farm9.staticflickr.com/8632/15505309473_51bffec3f1_b.jpg)
|
||||
|
||||
The df command can report on disk utilization of any "mounted" file system. There are different ways this command can be invoked. Here are some **useful** df **command examples**.
|
||||
df命令可以展示任何“已挂载”文件系统的磁盘利用率。该命令可以用不同的方式调用,这里有一些**有用的** df **命令例子**。
|
||||
|
||||
### Display in Human-Readable Format ###
|
||||
### 用人类可读的格式展示 ###
|
||||
|
||||
By default, the df command reports disk space in 1K blocks, which is not easily interpretable. The "-h" parameter will make df print disk space in a more human-readable format (e.g., 100K, 200M, 3G).
|
||||
默认情况下,df命令以1K大小的块来展示磁盘空间,不便于阅读。“-h”参数使df以更易读的格式打印磁盘空间(例如 100K、200M、3G)。
|
||||
|
||||
$ df -h
|
||||
|
||||
@ -27,9 +26,9 @@ By default, the df command reports disk space in 1K blocks, which is not easily
|
||||
none 100M 48K 100M 1% /run/user
|
||||
/dev/sda1 228M 98M 118M 46% /boot
|
||||
|
||||
### Display Inode Usage ###
|
||||
### 展示Inode使用情况 ###
|
||||
|
||||
When you monitor disk usage, you must watch out for not only disk space, but also "inode" usage. In Linux, inode is a data structure used to store metadata of a particular file, and when a file system is created, a pre-defined number of inodes are allocated. This means that a file system can run out of space not only because big files use up all available space, but also because many small files use up all available inodes. To display inode usage, use "-i" option.
|
||||
当你监视磁盘使用情况时,不仅要注意磁盘空间,还要注意“inode”的使用情况。在Linux中,inode是用来存储特定文件的元数据的一种数据结构,文件系统创建时会分配预定义数量的inode。这意味着,文件系统耗尽空间不一定是因为大文件用完了所有可用空间,也可能是因为很多小文件用完了所有可用的inode。用“-i”选项展示inode使用情况。
|
||||
|
||||
$ df -i
|
||||
|
||||
@ -45,10 +44,9 @@ When you monitor disk usage, you must watch out for not only disk space, but als
|
||||
none 1004417 28 1004389 1% /run/user
|
||||
/dev/sda1 124496 346 124150 1% /boot
|
||||
|
||||
### Display Disk Usage Grant Total ###
|
||||
|
||||
By default, the df command shows disk utilization of individual file systems. If you want to know the total disk usage over all existing file systems, add "--total" option.
|
||||
### 展示磁盘总利用率 ###
|
||||
|
||||
默认情况下,df命令显示每个文件系统各自的磁盘利用率。如果你想知道所有文件系统的磁盘使用总量,增加“--total”选项。
|
||||
$ df -h --total
|
||||
|
||||
----------
|
||||
@ -64,9 +62,9 @@ By default, the df command shows disk utilization of individual file systems. If
|
||||
/dev/sda1 228M 98M 118M 46% /boot
|
||||
total 918G 565G 307G 65% -
|
||||
|
||||
### Display File System Types ###
|
||||
### 展示文件系统类型 ###
|
||||
|
||||
By default, the df command does not show file system type information. Use "-T" option to add file system types to the output.
|
||||
默认情况下,df命令不显示文件系统类型信息。用“-T”选项来添加文件系统类型到输出中。
|
||||
|
||||
$ df -T
|
||||
|
||||
@ -82,9 +80,9 @@ By default, the df command does not show file system type information. Use "-T"
|
||||
none tmpfs 102400 48 102352 1% /run/user
|
||||
/dev/sda1 ext2 233191 100025 120725 46% /boot
|
||||
|
||||
### Include or Exclude a Specific File System Type ###
|
||||
### 包含或排除特定的文件系统类型 ###
|
||||
|
||||
If you want to know free space of a specific file system type, use "-t <type>" option. You can use this option multiple times to include more than one file system types.
|
||||
如果你想知道特定文件系统类型的剩余空间,用“-t <type>”选项。你可以多次使用这个选项来包含更多的文件系统类型。
|
||||
|
||||
$ df -t ext2 -t ext4
|
||||
|
||||
@ -94,13 +92,13 @@ If you want to know free space of a specific file system type, use "-t <type>" o
|
||||
/dev/mapper/ubuntu-root 952893348 591583380 312882756 66% /
|
||||
/dev/sda1 233191 100025 120725 46% /boot
|
||||
|
||||
To exclude a specific file system type, use "-x <type>" option. You can use this option multiple times as well.
|
||||
排除特定的文件系统类型,用“-x <type>”选项。同样,这个选项也可以多次使用。
|
||||
|
||||
$ df -x tmpfs
|
||||
|
||||
### Display Disk Usage of a Specific Mount Point ###
|
||||
### 显示一个具体的挂载点磁盘使用情况 ###
|
||||
|
||||
If you specify a mount point with df, it will report disk usage of the file system mounted at that location. If you specify a regular file (or a directory) instead of a mount point, df will display disk utilization of the file system which contains the file (or the directory).
|
||||
如果你给df指定一个挂载点,它将报告挂载在那里的文件系统的磁盘使用情况。如果你指定一个普通文件(或一个目录)而不是挂载点,df将显示包含这个文件(或目录)的文件系统的磁盘利用率。
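df 的输出也很容易交给脚本处理。下面是一个简单的示意(假设在普通的 Linux 系统上运行):“-P”选项保证输出为 POSIX 兼容的单行格式,再用 awk 取出根文件系统的使用率:

```shell
# NR==2 跳过表头行;$5 是 Use% 一列
df -P / | awk 'NR==2 {print "root usage: " $5}'
```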
|
||||
|
||||
$ df /
|
||||
|
||||
@ -118,9 +116,9 @@ If you specify a mount point with df, it will report disk usage of the file syst
|
||||
Filesystem 1K-blocks Used Available Use% Mounted on
|
||||
/dev/mapper/ubuntu-root 952893348 591583528 312882608 66% /
|
||||
|
||||
### Display Information about Dummy File Systems ###
|
||||
### 显示虚拟文件系统的信息 ###
|
||||
|
||||
If you want to display disk space information for all existing file systems including dummy file systems, use "-a" option. Here, dummy file systems refer to pseudo file systems which do not have corresponding physical devices, e.g., tmpfs, cgroup virtual file system or FUSE file systems. These dummy filesystems have size of 0, and are not reported by df without "-a" option.
|
||||
如果你想显示所有文件系统(包括虚拟文件系统)的磁盘空间信息,用“-a”选项。这里,虚拟文件系统是指没有对应物理设备的伪文件系统,例如 tmpfs、cgroup 虚拟文件系统或 FUSE 文件系统。这些虚拟文件系统大小为0,不加“-a”选项时不会被报告出来。
|
||||
|
||||
$ df -a
|
||||
|
||||
@ -150,7 +148,7 @@ If you want to display disk space information for all existing file systems incl
|
||||
|
||||
via: http://ask.xmodulo.com/check-disk-space-linux-df-command.html
|
||||
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[mtunique](https://github.com/mtunique)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
@ -0,0 +1,113 @@
|
||||
Linux有问必答:如何检查Linux的内存使用状况
|
||||
================================================================================
|
||||
|
||||
>**问题**:我想要监测Linux系统的内存使用状况。有哪些可用的图形界面或者命令行工具来检查当前内存使用情况?
|
||||
|
||||
当涉及到Linux系统性能优化的时候,物理内存是一个最重要的因素。自然的,Linux提供了丰富的选择来监测对于珍贵的内存资源的使用。不同的工具,在监测粒度(例如:全系统范围, 每个进程, 每个用户),接口(例如:图形用户界面, 命令行, ncurses)或者运行模式(交互模式, 批量处理模式)上都不尽相同。
|
||||
|
||||
下面是一个可供选择的但并不全面的图形或命令行工具列表,这些工具用来检查Linux平台中已用和空闲的内存。
|
||||
|
||||
### 1. /proc/meminfo ###
|
||||
|
||||
一种最简单的方法是通过“/proc/meminfo”来检查内存使用状况。这个动态更新的虚拟文件事实上是许多信息资源的集中展示,这些资源来自于诸如free,top和ps这些与内存相关的工具。从可用/闲置物理内存数量到等待被写入缓存的数量或者已写回磁盘的数量,只要是你想要的关于内存使用的信息,“/proc/meminfo”应有尽有。特定进程的内存信息也可以通过“/proc/<pid>/statm”和“/proc/<pid>/status”来获取。
|
||||
|
||||
$ cat /proc/meminfo
|
||||
|
||||
![](https://farm8.staticflickr.com/7483/15989497899_bb6afede11_b.jpg)
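“/proc/meminfo”的格式很规整,用 awk 就能轻松解析。下面是一个简单的示例脚本(假设内核版本不低于 3.14,提供了 MemAvailable 字段),提取总内存和可用内存:

```shell
#!/bin/sh
# /proc/meminfo 每行形如 "MemTotal:  16326428 kB",第 2 列即 kB 数值
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
echo "MemTotal:     ${total_kb} kB"
echo "MemAvailable: ${avail_kb} kB"
```

这也是 free 等工具获取数据的来源。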
|
||||
|
||||
### 2. atop ###
|
||||
|
||||
atop命令是用于终端环境的基于ncurses的交互系统和进程监测工具。它展示了动态更新的系统资源(中央处理器, 内存, 网络, 输入/输出, 内核)摘要,并且用醒目的颜色将高系统负载的警告信息标注出来。它同样提供了类似于top的线程(或用户)资源使用视图,因此系统管理员可以指出哪个进程或者用户对系统负载负责。内存统计报告包括了总计/闲置内存,缓存的/缓冲的内存 和 提交的虚拟内存。
|
||||
|
||||
$ sudo atop
|
||||
|
||||
![](https://farm8.staticflickr.com/7552/16149756146_893773b84c_b.jpg)
|
||||
|
||||
### 3. free ###
|
||||
|
||||
free命令是一个用来获得内存使用概况的快速简单的方法,这些信息从“/proc/meminfo”获取。它提供了一个快照用于展示总计/闲置的物理内存和系统交换区,以及已使用/闲置的内核缓冲区。
|
||||
|
||||
$ free -h
|
||||
|
||||
![](https://farm8.staticflickr.com/7531/15988117988_ba8c6b7b63_b.jpg)
|
||||
|
||||
### 4. GNOME System Monitor ###
|
||||
|
||||
GNOME System Monitor 是一个图形界面应用,它展示了包括中央处理器、内存、交换区和网络在内的系统资源近期使用率的历史记录。它同时也可以提供一个带有中央处理器和内存使用情况的进程视图。
|
||||
|
||||
$ gnome-system-monitor
|
||||
|
||||
![](https://farm8.staticflickr.com/7539/15988118078_279f0da494_c.jpg)
|
||||
|
||||
### 5. htop ###
|
||||
|
||||
htop命令是一个基于ncurses的交互式进程查看器,它实时展示每个进程的内存使用情况。它可以报告所有运行中进程的常驻内存大小(RSS)、程序总内存大小、库大小、共享页大小和脏页大小。你可以横向或者纵向滚动进程列表进行查看。
|
||||
|
||||
$ htop
|
||||
|
||||
![](https://farm9.staticflickr.com/8236/8599814378_de071de408_c.jpg)
|
||||
|
||||
### 6. KDE System Monitor ###
|
||||
|
||||
就像GNOME桌面拥有GNOME System Monitor一样,KDE桌面也有它自己的对口应用:KDE System Monitor。这个工具的功能与GNOME版本极其相似,也就是说,它同样展示了一个关于系统资源使用情况,以及带有每个进程的中央处理器/内存消耗情况的实时历史记录。
|
||||
|
||||
$ ksysguard
|
||||
|
||||
![](https://farm8.staticflickr.com/7479/15991397329_ec5d786ffd_c.jpg)
|
||||
|
||||
### 7. memstat ###
|
||||
|
||||
memstat工具对于识别正在消耗虚拟内存的可执行文件、进程和共享库非常有用。给出一个进程识别号,memstat即可识别出与之相关联的可执行文件、数据和共享库究竟使用了多少虚拟内存。
|
||||
|
||||
$ memstat -p <PID>
|
||||
|
||||
![](https://farm8.staticflickr.com/7518/16175635905_1880e50055_b.jpg)
|
||||
|
||||
### 8. nmon ###
|
||||
|
||||
nmon工具是一个基于ncurses的系统基准测试工具,它能够以交互方式监测中央处理器、内存、磁盘输入/输出、内核、文件系统以及网络资源。就内存使用状况而言,它能够实时展示总计/闲置内存、交换区、缓冲的/缓存的内存,以及虚拟内存页换入/换出的统计数据。
|
||||
|
||||
$ nmon
|
||||
|
||||
![](https://farm9.staticflickr.com/8648/15989760117_30f62f4aba_b.jpg)
|
||||
|
||||
### 9. ps ###
|
||||
|
||||
ps命令能够实时展示每个进程的内存使用状况。报告的内存相关信息包括 %MEM(物理内存使用百分比)、VSZ(虚拟内存使用总量)和 RSS(物理内存使用总量)。你可以使用“--sort”选项对进程列表排序。例如,按照RSS降序排序:
|
||||
|
||||
$ ps aux --sort -rss
|
||||
|
||||
![](https://farm9.staticflickr.com/8602/15989881547_ca40839c19_c.jpg)
|
||||
|
||||
### 10. smem ###
|
||||
|
||||
[smem][1]命令允许你测定不同进程和用户的物理内存使用状况,这些信息来源于“/proc”目录。它利用比例设置大小(PSS)指标来精确量化Linux进程的有效内存使用情况。内存使用分析能够扩展成为柱状图或者饼图类的图形化图表。
|
||||
|
||||
$ sudo smem --pie name -c "pss"
|
||||
|
||||
![](https://farm8.staticflickr.com/7466/15614838428_eed7426cfe_c.jpg)
|
||||
|
||||
### 11. top ###
|
||||
|
||||
top命令提供了一个运行中进程的实时视图,以及特定进程的各种资源使用统计信息。与内存相关的信息包括 %MEM(内存使用率)、VIRT(虚拟内存使用总量)、SWAP(被交换出去的虚拟内存使用量)、CODE(分配给代码执行的物理内存数量)、DATA(分配给非执行数据的物理内存数量)、RES(物理内存使用总量,即 CODE+DATA)和 SHR(可能与其他进程共享的内存数量)。你能够基于内存使用情况或者大小对进程列表进行排序。
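top 也可以在非交互的批处理模式下运行,方便在脚本中抓取一次快照。下面是一个示例(假设系统上的 top 来自 procps-ng,支持 `-o` 排序选项),按 %MEM 降序输出前几行:

```shell
# -b: 批处理模式; -n 1: 只刷新一次; -o %MEM: 按内存占用降序排序
top -b -n 1 -o %MEM | head -n 15
```

输出可以再用 grep/awk 继续加工,用于定时记录内存占用最高的进程。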
|
||||
|
||||
![](https://farm8.staticflickr.com/7464/15989760047_eb8d51d9f2_c.jpg)
|
||||
|
||||
### 12. vmstat ###
|
||||
|
||||
vmstat命令行工具显示涵盖了中央处理器、内存、中断和磁盘输入/输出在内的各种系统活动的瞬时和平均统计数据。就内存信息而言,该命令不仅展示了物理内存使用情况(例如总计/已使用内存和缓冲的/缓存的内存),还同样展示了虚拟内存统计数据(例如内存的页换入/换出、交换换入/换出)。
|
||||
|
||||
$ vmstat -s
|
||||
|
||||
![](https://farm9.staticflickr.com/8582/15988236860_3f142008d2_b.jpg)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/check-memory-usage-linux.html
|
||||
|
||||
译者:[Ping](https://github.com/mr-ping)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://xmodulo.com/visualize-memory-usage-linux.html
|
@ -0,0 +1,144 @@
|
||||
Linux 有问必答: 如何在Ubuntu或者Debian中下载和安装ixgbe驱动
|
||||
================================================================================
|
||||
> **提问**: 我想为我的Intel 10G网卡下载安装最新的ixgbe。我该如何在Ubuntu(或者Debian)中安装ixgbe驱动?
|
||||
|
||||
Intel的10G网卡(比如 82598、82599、x540)由ixgbe驱动支持。现代的Linux发行版已经将ixgbe作为一个可加载模块自带了。然而,有些情况下你并不想用机器上已经编译安装好的ixgbe驱动。比如,你想要体验ixgbe驱动的最新特性。此外,内核自带的ixgbe驱动有一个问题:它不允许你自定义驱动参数。如果你想要完全定制ixgbe驱动(比如 RSS、多队列、中断阈值等等),你需要手动从源码编译ixgbe驱动。
|
||||
|
||||
这里是如何在Ubuntu、Debian或者它们的衍生版中下载安装ixgbe驱动。
|
||||
|
||||
### 第一步: 安装前提 ###
|
||||
|
||||
安装之前,需要安装匹配的内核头文件和开发工具包。
|
||||
|
||||
$ sudo apt-get install linux-headers-$(uname -r)
|
||||
$ sudo apt-get install gcc make
|
||||
|
||||
### 第二步: 编译Ixgbe驱动 ###
|
||||
|
||||
从[最新的ixgbe驱动][1]中下载源码。
|
||||
|
||||
$ wget http://sourceforge.net/projects/e1000/files/ixgbe%20stable/3.23.2/ixgbe-3.23.2.tar.gz
|
||||
|
||||
如下编译ixgbe驱动。
|
||||
|
||||
$ tar xvfz ixgbe-3.23.2.tar.gz
|
||||
$ cd ixgbe-3.23.2/src
|
||||
$ make
|
||||
|
||||
### 第三步: 检查Ixgbe驱动 ###
|
||||
|
||||
编译之后,你会看到在ixgbe-3.23.2/src目录下创建了**ixgbe.ko**。这就是会加载到内核之中的ixgbe驱动。
|
||||
|
||||
用modinfo命令检查内核模块的信息。注意你需要指定模块的绝对路径(比如 ./ixgbe.ko 或者 /home/xmodulo/ixgbe/ixgbe-3.23.2/src/ixgbe.ko)。输出中会显示ixgbe内核的版本。
|
||||
|
||||
$ modinfo ./ixgbe.ko
|
||||
|
||||
----------
|
||||
|
||||
filename: /home/xmodulo/ixgbe/ixgbe-3.23.2/src/ixgbe.ko
|
||||
version: 3.23.2
|
||||
license: GPL
|
||||
description: Intel(R) 10 Gigabit PCI Express Network Driver
|
||||
author: Intel Corporation,
|
||||
srcversion: 2ADA5E537923E983FA9DAE2
|
||||
alias: pci:v00008086d00001560sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d00001558sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d0000154Asv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d00001557sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d0000154Fsv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d0000154Dsv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d00001528sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010F8sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d0000151Csv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d00001529sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d0000152Asv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010F9sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d00001514sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d00001507sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010FBsv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d00001517sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010FCsv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010F7sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d00001508sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010DBsv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010F4sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010E1sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010F1sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010ECsv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010DDsv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d0000150Bsv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010C8sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010C7sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010C6sv*sd*bc*sc*i*
|
||||
alias: pci:v00008086d000010B6sv*sd*bc*sc*i*
|
||||
depends: ptp,dca
|
||||
vermagic: 3.11.0-19-generic SMP mod_unload modversions
|
||||
parm: InterruptType:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default IntMode (deprecated) (array of int)
|
||||
parm: IntMode:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2 (array of int)
|
||||
parm: MQ:Disable or enable Multiple Queues, default 1 (array of int)
|
||||
parm: DCA:Disable or enable Direct Cache Access, 0=disabled, 1=descriptor only, 2=descriptor and data (array of int)
|
||||
parm: RSS:Number of Receive-Side Scaling Descriptor Queues, default 0=number of cpus (array of int)
|
||||
parm: VMDQ:Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default=8) (array of int)
|
||||
parm: max_vfs:Number of Virtual Functions: 0 = disable (default), 1-63 = enable this many VFs (array of int)
|
||||
parm: VEPA:VEPA Bridge Mode: 0 = VEB (default), 1 = VEPA (array of int)
|
||||
parm: InterruptThrottleRate:Maximum interrupts per second, per vector, (0,1,956-488281), default 1 (array of int)
|
||||
parm: LLIPort:Low Latency Interrupt TCP Port (0-65535) (array of int)
|
||||
parm: LLIPush:Low Latency Interrupt on TCP Push flag (0,1) (array of int)
|
||||
parm: LLISize:Low Latency Interrupt on Packet Size (0-1500) (array of int)
|
||||
parm: LLIEType:Low Latency Interrupt Ethernet Protocol Type (array of int)
|
||||
parm: LLIVLANP:Low Latency Interrupt on VLAN priority threshold (array of int)
|
||||
parm: FdirPballoc:Flow Director packet buffer allocation level:
|
||||
1 = 8k hash filters or 2k perfect filters
|
||||
2 = 16k hash filters or 4k perfect filters
|
||||
3 = 32k hash filters or 8k perfect filters (array of int)
|
||||
parm: AtrSampleRate:Software ATR Tx packet sample rate (array of int)
|
||||
parm: FCoE:Disable or enable FCoE Offload, default 1 (array of int)
|
||||
parm: LRO:Large Receive Offload (0,1), default 1 = on (array of int)
|
||||
parm: allow_unsupported_sfp:Allow unsupported and untested SFP+ modules on 82599 based adapters, default 0 = Disable (array of int)
|
||||
|
||||
### 第四步: 测试Ixgbe驱动 ###
|
||||
|
||||
在测试新的模块之前,如果你内核中已存在旧版本ixgbe模块的话你需要先移除它。
|
||||
|
||||
$ sudo rmmod ixgbe
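如果不确定旧模块是否已经加载,可以先查看 /proc/modules,只在模块确实存在时才卸载。下面是一个简单的示例脚本(假设要处理的模块名就是 ixgbe):

```shell
#!/bin/sh
# /proc/modules 中每行以模块名开头,后跟一个空格
if grep -qs '^ixgbe ' /proc/modules; then
    echo "ixgbe is loaded, removing it"
    sudo rmmod ixgbe
else
    echo "ixgbe is not loaded"
fi
```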
|
||||
|
||||
接着使用insmod命令插入新编译的ixgbe模块。确保指定一个模块的绝对路径。
|
||||
|
||||
$ sudo insmod ./ixgbe.ko
|
||||
|
||||
如果上面的命令成功运行,就不会显示任何的信息。
|
||||
|
||||
如果你需要,你可以尝试加入额外的参数。比如,设置RSS的队列数量为16:
|
||||
|
||||
$ sudo insmod ./ixgbe.ko RSS=16
|
||||
|
||||
检查**/var/log/kern.log**来确认ixgbe驱动是否已成功激活。在日志中查找“Intel(R) 10 Gigabit PCI Express Network Driver”。ixgbe的版本信息应该和之前modinfo显示的相同。
|
||||
|
||||
Sep 18 14:48:52 spongebob kernel: [684717.906254] Intel(R) 10 Gigabit PCI Express Network Driver - version 3.22.3
|
||||
|
||||
![](https://farm8.staticflickr.com/7583/16056721867_f06e152076_c.jpg)
|
||||
|
||||
### 第五步: 安装Ixgbe驱动 ###
|
||||
|
||||
一旦你验证新的ixgbe驱动已经成功加载,最后一步是在你的系统中安装驱动。
|
||||
|
||||
$ sudo make install
|
||||
|
||||
**ixgbe.ko** 接着会安装在/lib/modules/<kernel-version>/kernel/drivers/net/ethernet/intel/ixgbe 下。
|
||||
|
||||
从这一步起,你可以用下面的modprobe命令加载ixgbe驱动了。注意你不必再指定绝对路径。
|
||||
|
||||
$ sudo modprobe ixgbe
|
||||
|
||||
如果你希望在启动时加载ixgbe驱动,你可以在/etc/modules的最后加入“ixgbe”。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/download-install-ixgbe-driver-ubuntu-debian.html
|
||||
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://sourceforge.net/projects/e1000/files/ixgbe%20stable/
|
@ -0,0 +1,44 @@
|
||||
Linux有问必答:如何在curl中设置自定义的HTTP头
|
||||
================================================================================
|
||||
> **问题**:我正尝试使用curl命令获取一个URL,但除此之外我还想在传出的HTTP请求中设置一些自定义的头部字段。我如何能够在curl中使用自定义的HTTP头呢?
|
||||
|
||||
curl是一个强大的命令行工具,它可以通过网络将信息传递给服务器或者从服务器获取数据。它支持很多传输协议,尤其是HTTP/HTTPS,以及FTP/FTPS、RTSP、POP3/POP3S、SCP、IMAP/IMAPS等其他协议。当你使用curl向一个URL发送HTTP请求的时候,它会使用一个默认的HTTP头,其中只包含必要的头部字段(如 User-Agent、Host 和 Accept)。
|
||||
|
||||
![](https://farm8.staticflickr.com/7568/16225032086_fb8f1c508a_b.jpg)
|
||||
|
||||
在某些情况下,或许你想要在HTTP请求中覆盖掉默认的HTTP头,或者添加新的自定义头部字段。例如,你或许想要重写“Host”字段来测试一个[负载均衡器][1],或者通过伪造“User-Agent”字符串来假冒特定浏览器,以绕过某些访问限制。
|
||||
|
||||
为了解决所有这些问题,curl提供了一个简单的方法来完全控制传出HTTP请求的HTTP头。你需要的这个参数是“-H” 或者 “--header”。
|
||||
|
||||
为了定义多个HTTP头部字段,"-H"选项可以在curl命令中被多次指定。
|
||||
|
||||
例如:以下命令设置了3个HTTP头部字段。也就是说,重写了“HOST”字段,并且添加了两个字段("Accept-Language" 和 "Cookie")
|
||||
|
||||
$ curl -H 'Host: 157.166.226.25' -H 'Accept-Language: es' -H 'Cookie: ID=1234' http://cnn.com
|
||||
|
||||
![](https://farm8.staticflickr.com/7520/16250111432_de39638ec0_c.jpg)
|
||||
|
||||
对于"User-Agent", "Cookie", "Host"这类标准的HTTP头部字段,通常会有另外一种设置方法。curl命令提供了特定的选项来对这些头部字段进行设置:
|
||||
|
||||
- **-A (or --user-agent)**: 设置 "User-Agent" 字段.
|
||||
- **-b (or --cookie)**: 设置 "Cookie" 字段.
|
||||
- **-e (or --referer)**: 设置 "Referer" 字段.
|
||||
|
||||
例如,以下两个命令是等效的。这两个命令同样都对HTTP头的"User-Agent"字符串进行了更改。
|
||||
|
||||
$ curl -H "User-Agent: my browser" http://cnn.com
|
||||
$ curl -A "my browser" http://cnn.com
|
||||
|
||||
wget是另外一个类似于curl,可以用来获取URL的命令行工具。并且wget也一样允许你使用一个自定义的HTTP头。点击[这里][2]查看wget命令的详细信息。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://ask.xmodulo.com/custom-http-header-curl.html
|
||||
|
||||
译者:[Ping](http://mr-ping.com)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://xmodulo.com/haproxy-http-load-balancer-linux.html
|
||||
[2]:http://xmodulo.com/how-to-use-custom-http-headers-with-wget.html
|
@ -0,0 +1,204 @@
|
||||
清理 Ubuntu 14.10,14.04,13.10 系统
|
||||
================================================================================
|
||||
前面我们已经讨论了[如何清理 Ubuntu GNU/Linux 系统][1],这篇教程将在原有教程的基础上,增加对新的 Ubuntu 发行版本的支持,并介绍更多的工具。
|
||||
|
||||
假如你想清理你的 Ubuntu 主机,你可以按照以下的一些简单步骤来移除所有不需要的垃圾文件。
|
||||
|
||||
### 移除多余软件包 ###
|
||||
|
||||
这又是一个内置功能,但这次我们不必使用新立得包管理器(Synaptic Package Manager),而是在终端中达到目的。
|
||||
|
||||
现在,在终端窗口中键入如下命令:
|
||||
|
||||
```
|
||||
sudo apt-get autoclean
|
||||
```
|
||||
|
||||
这便激活了包清理命令。这个命令所做的工作是:自动清除那些当你安装或升级程序时系统所缓存的 `.deb` 包(即清理 `/var/cache/apt/archives` 目录,不过只清理过时的包)。如果想使用完全清理命令,只需在终端窗口中键入以下命令:
|
||||
|
||||
```
|
||||
sudo apt-get clean
|
||||
```
|
||||
|
||||
然后你就可以使用自动移除命令了。这个命令所做的工作是:清除那些作为其他软件的依赖被安装、但随着那些软件的卸载而不再被需要的软件包。要使用自动移除命令,在终端窗口中键入以下命令:
|
||||
|
||||
```
|
||||
sudo apt-get autoremove
|
||||
```
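清理之前,你可以先看看这些缓存实际占用了多少空间。下面是一个简单的示例(假设系统是 Debian/Ubuntu,缓存位于默认路径):

```shell
# 查看 APT 包缓存占用的磁盘空间;在非 Debian 系的系统上该目录可能不存在
du -sh /var/cache/apt/archives 2>/dev/null || echo "no apt cache found"
```

对比清理前后的输出,就能直观看到回收了多少磁盘空间。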
|
||||
|
||||
### 移除不需要的本地数据 ###
|
||||
|
||||
为达到此目的,我们需要安装 `localepurge` 软件,它将自动移除一些不需要的本地化数据。这个软件是一个简单的脚本,它回收那些不再需要的本地化文件和本地化联机手册(man pages)所占用的磁盘空间。它将在任何 apt 安装命令运行时被自动激活。
|
||||
|
||||
在 Ubuntu 中安装 `localepurge`
|
||||
|
||||
```
|
||||
sudo apt-get install localepurge
|
||||
```
|
||||
|
||||
在通过 `apt-get install` 安装任意软件后,`localepurge` 将移除所有不属于你系统所设定语言的翻译文件和翻译的联机手册。
|
||||
|
||||
假如你想配置 `localepurge`,你需要编辑 `/etc/locale.nopurge` 文件。
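下面是一个假想的 `/etc/locale.nopurge` 配置片段,示意如何只保留英文和中文的本地化数据(具体格式和可用的语言代码以你系统上该文件中的注释为准):

```
# 保留的 locale,每行一个;其余的翻译文件会被 localepurge 清除
en
en_US.UTF-8
zh
zh_CN.UTF-8
```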
|
||||
|
||||
根据你已经安装的软件,这将为你节省几兆的磁盘空间。
|
||||
|
||||
例子:-
|
||||
|
||||
假如我试着使用 `apt-get` 来安装 `discus` 软件:
|
||||
|
||||
```
|
||||
sudo apt-get install discus
|
||||
```
|
||||
|
||||
在软件安装完毕之后,你将看到如下提示:
|
||||
|
||||
> localepurge: Disk space freed in /usr/share/locale: 41860K
|
||||
|
||||
### 移除 孤包 ###
|
||||
|
||||
假如你想移除孤包,你需要安装 `deborphan` 软件:
|
||||
|
||||
在 Ubuntu 中安装 `deborphan` :
|
||||
|
||||
```
|
||||
sudo apt-get install deborphan
|
||||
```
|
||||
|
||||
### 使用 deborphan ###
|
||||
|
||||
打开终端并键入如下命令即可:
|
||||
|
||||
```
|
||||
sudo deborphan | xargs sudo apt-get -y remove --purge
|
||||
```
|
||||
|
||||
### 使用 GtkOrphan 来移除 孤包 ###
|
||||
|
||||
`GtkOrphan`(一个针对 debian 系发行版本的 Perl/Gtk2 应用)是一个分析用户安装过程状态、查找孤立库文件的图形化工具,它为 `deborphan` 提供了一个 GUI 前端,并具备移除软件包的功能。
|
||||
|
||||
### 在 Ubuntu 中安装 GtkOrphan ###
|
||||
|
||||
打开终端并运行如下命令:
|
||||
|
||||
```
|
||||
sudo apt-get install gtkorphan
|
||||
```
|
||||
|
||||
#### 一张截图 ####
|
||||
|
||||
![](http://www.ubuntugeek.com/wp-content/uploads/2015/01/41.png)
|
||||
|
||||
### 使用 Wajig 移除孤包 ###
|
||||
|
||||
`Wajig` 是 Debian 包管理系统中一个简单的软件包管理前端。它将 apt、apt-cache、dpkg、/etc/init.d 中的脚本等通过一个单一命令集成在一起,它的设计初衷是使用简单,并为它包含的所有功能提供丰富的文档。
|
||||
|
||||
通过适当的 `sudo` 配置,大多数(如果不是全部)的软件包安装和删除等任务都可以在一个普通用户 shell 中完成。`Wajig` 也适用于一般的系统管理。另外,这个软件包中还包含了一个 Gnome GUI 命令 `gjig`。
|
||||
|
||||
### 在 Ubuntu 中安装 Wajig ###
|
||||
|
||||
打开终端并运行如下命令:
|
||||
|
||||
```
|
||||
sudo apt-get install wajig
|
||||
```
|
||||
|
||||
### Debfoster --- 跟踪你在安装过程中的操作 ###
|
||||
|
||||
debfoster 会维护一个列表,记录那些被明确要求安装的软件包,但不包括那些仅作为依赖而被安装的软件包。参数是完全可选的,你甚至可以在每次运行 dpkg 和/或 apt-get 之后立即运行 debfoster。
|
||||
|
||||
另外,你还可以在命令行中使用 debfoster 来安装或移除某些特定的软件包。后缀为 `-` 的软件包将会被移除,而没有后缀的软件包将会被安装。
|
||||
|
||||
假如安装了一个新的软件包,或者 debfoster 注意到某个作为依赖安装的软件包变成了孤包,debfoster 将会询问你下一步如何操作。若你决定保留它,debfoster 将只会进行记录并继续安装过程;若你觉得这个软件包不值得保留,debfoster 将会在询问后移除它。进一步的,如果你的决定使得其他软件包变为孤包,更多的询问将会接踵而来。
|
||||
|
||||
### 在 Ubuntu 中安装 debfoster ###
|
||||
|
||||
打开终端并运行如下命令:
|
||||
|
||||
```
|
||||
sudo apt-get install debfoster
|
||||
```
|
||||
|
||||
### 使用 debfoster ###
|
||||
|
||||
为了创建一个初始跟踪文件,可以使用如下命令:
|
||||
|
||||
```
|
||||
sudo debfoster -q
|
||||
```
|
||||
|
||||
你随时可以编辑 `/var/lib/debfoster/keepers` 文件,来定义那些你想留在系统中的软件包。
|
||||
|
||||
为了编辑这个文件,可以键入:
|
||||
|
||||
```
|
||||
sudo vi /var/lib/debfoster/keepers
|
||||
```
|
||||
|
||||
若要强制 debfoster 移除所有没有被列在这个文件中的软件包、安装列表中软件包的依赖,并把列表中尚未安装的软件包一并安装,使系统与该列表保持一致,只需执行:
|
||||
|
||||
```
|
||||
sudo debfoster -f
|
||||
```
|
||||
|
||||
若需要跟踪你新安装的软件包,你需要时不时地执行如下命令:
|
||||
|
||||
```
|
||||
sudo debfoster
|
||||
```
|
||||
|
||||
### xdiskusage -- 查看你的硬盘空间都去哪儿了 ###
|
||||
|
||||
xdiskusage 是一个用户友好的程序,它以图形化的方式展示你的磁盘空间都被什么占用了,相当于图形化版本的 du。它是在 Phillip C. Dykstra 所写的“xdu”程序的基础上设计的。它做了一些改进,比如可以替你运行“du”命令、显示磁盘的剩余空间,并且假如你想留存一份磁盘空间都去哪儿了的清晰记录,它还可以把显示结果生成为 PostScript 文件。
|
||||
|
||||
### 在 Ubuntu 中安装 xdiskusage ###
|
||||
|
||||
只需使用如下命令:
|
||||
|
||||
```
|
||||
sudo apt-get install xdiskusage
|
||||
```
|
||||
|
||||
若你想打开这个应用,你需要使用如下命令:
|
||||
|
||||
```
|
||||
sudo xdiskusage
|
||||
```
|
||||
|
||||
一旦这个应用被打开,你将看到如下图所示的界面:
|
||||
|
||||
![](http://www.ubuntugeek.com/wp-content/uploads/2015/01/5.png)
|
||||
|
||||
|
||||
### Bleachbit ###
|
||||
|
||||
BleachBit 能快速地释放磁盘空间,并不知疲倦地保护你的隐私。它可以释放缓存、删除 cookie、清除上网历史、粉碎临时文件、删除日志,并丢弃那些你所不知道存在何处的垃圾。它为 Linux 和 Windows 系统设计,支持擦除清理数以千计的应用程序,如 Firefox、Internet Explorer、Adobe Flash、Google Chrome、Opera、Safari 等等。除了简单地删除文件,BleachBit 还包括许多高级功能,诸如粉碎文件以防止恢复、擦除空闲磁盘空间来隐藏被其他应用程序所删除文件的痕迹、为火狐“除尘”以使其速度更快等。比免费更好的是,BleachBit 还是一个开源软件。
|
||||
|
||||
### 在 Ubuntu 中安装 Bleachbit ###
|
||||
|
||||
打开终端并运行如下命令:
|
||||
|
||||
```
|
||||
sudo apt-get install bleachbit
|
||||
```
|
||||
|
||||
### 一张截图 ###
|
||||
|
||||
![](http://www.ubuntugeek.com/wp-content/uploads/2015/01/6.png)
|
||||
|
||||
### 使用 Ubuntu-Tweak ###
|
||||
|
||||
最后,你也可以使用 [Ubuntu-Tweak][2] 来清理你的系统。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.ubuntugeek.com/cleaning-up-a-ubuntu-gnulinux-system-updated-with-ubuntu-14-10-and-more-tools-added.html
|
||||
|
||||
作者:[ruchi][a]
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.ubuntugeek.com/author/ubuntufix
|
||||
[1]:http://www.ubuntugeek.com/cleaning-up-all-unnecessary-junk-files-in-ubuntu.html
|
||||
[2]:http://www.ubuntugeek.com/www.ubuntugeek.com/install-ubuntu-tweak-on-ubuntu-14-10.html
|
@ -0,0 +1,78 @@
|
||||
在CentOS 7中安装Jetty服务器
|
||||
================================================================================
|
||||
[Jetty][1] 是一款纯Java的HTTP **(Web)服务器**和Java Servlet容器。虽然Web服务器通常用于为人提供文档,但Jetty如今经常在较大的软件框架中用于机器与机器之间的通信。Jetty是Eclipse基金会的一个免费开源项目。这个Web服务器被用于 Apache ActiveMQ、Alfresco、Apache Geronimo、Apache Maven、Apache Spark、Google App Engine、Eclipse、FUSE、Twitter 的 Streaming API 和 Zimbra 等产品中。
|
||||
|
||||
这篇文章会介绍如何在CentOS服务器中安装Jetty服务器。
|
||||
|
||||
**首先我们要用下面的命令安装JDK:**
|
||||
|
||||
yum -y install java-1.7.0-openjdk wget
|
||||
|
||||
**JDK安装之后,我们就可以下载最新版本的Jetty了:**
|
||||
|
||||
wget http://download.eclipse.org/jetty/stable-9/dist/jetty-distribution-9.2.5.v20141112.tar.gz
|
||||
|
||||
**解压并移动下载的包到/opt:**
|
||||
|
||||
tar zxvf jetty-distribution-9.2.5.v20141112.tar.gz -C /opt/
|
||||
|
||||
**重命名文件夹名为jetty:**
|
||||
|
||||
mv /opt/jetty-distribution-9.2.5.v20141112/ /opt/jetty
|
||||
|
||||
**创建一个jetty用户:**
|
||||
|
||||
useradd -m jetty
|
||||
|
||||
**改变jetty文件夹的所属用户:**
|
||||
|
||||
chown -R jetty:jetty /opt/jetty/
|
||||
|
||||
**为jetty.sh创建一个指向 /etc/init.d 目录的软链接,作为启动脚本:**
|
||||
|
||||
ln -s /opt/jetty/bin/jetty.sh /etc/init.d/jetty
|
||||
|
||||
**添加脚本:**
|
||||
|
||||
chkconfig --add jetty
|
||||
|
||||
**使jetty在系统启动时启动:**
|
||||
|
||||
chkconfig --level 345 jetty on
|
||||
|
||||
**使用你最喜欢的文本编辑器打开 /etc/default/jetty 并修改端口和监听地址:**
|
||||
|
||||
vi /etc/default/jetty
|
||||
|
||||
----------
|
||||
|
||||
JETTY_HOME=/opt/jetty
|
||||
JETTY_USER=jetty
|
||||
JETTY_PORT=8080
|
||||
JETTY_HOST=50.116.24.78
|
||||
JETTY_LOGS=/opt/jetty/logs/
|
||||
|
||||
**我们完成了安装,现在可以启动jetty服务了:**
|
||||
|
||||
service jetty start
|
||||
|
||||
完成了!
|
||||
|
||||
现在你可以访问 **http://<youripaddress>:8080** 了
|
||||
|
||||
就是这样。
|
||||
|
||||
干杯!!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/install-jetty-web-server-centos-7/
|
||||
|
||||
作者:[Jijo][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/jijo/
|
||||
[1]:http://eclipse.org/jetty/
|
@ -0,0 +1,62 @@
|
||||
LinSSID - 一款Linux下的图形化Wi-Fi扫描器
|
||||
================================================================================
|
||||
### 介绍 ###
|
||||
|
||||
如你所知,**LinSSID** 是一款可以用于寻找可用无线网络的图形化软件。它完全开源,用C++写成,使用了Linux无线工具、Qt5、Qwt6.1,它在外观和功能上与**Inssider** (MS Windows)相近。
|
||||
|
||||
### 安装 ###
|
||||
|
||||
你可以使用源码安装,如果你使用的是基于DEB的系统比如Ubuntu和LinuxMint等等,你也可以使用PPA安装。
|
||||
|
||||
你可以从[这个链接][1]下载并安装LinSSID。
|
||||
|
||||
这里我们将使用PPA来安装并测试这个软件。
|
||||
|
||||
添加LinSSID的PPA并输入下面的命令安装。
|
||||
|
||||
sudo add-apt-repository ppa:wseverin/ppa
|
||||
sudo apt-get update
|
||||
sudo apt-get install linssid
|
||||
|
||||
### 用法 ###
|
||||
|
||||
安装完成之后,你可以从菜单或者unity中启动。
|
||||
|
||||
你将被要求输入管理员密码。
|
||||
|
||||
![Password required for iwlist scan_001](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Password-required-for-iwlist-scan_001.png)
|
||||
|
||||
这就是LinSSID的界面。
|
||||
|
||||
![LinSSID_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/LinSSID_002.png)
|
||||
|
||||
现在选择你想要用来搜寻无线网络的网卡,比如这里是wlan0。点击Play按钮来搜寻wi-fi网络列表。
|
||||
|
||||
几秒钟之后,LinSSID就会显示wi-fi网络了。
|
||||
|
||||
![LinSSID_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/LinSSID_003.png)
|
||||
|
||||
如你在上面的截屏中所见,LinSSID显示SSID名、MAC ID、通道、隐私、加密方式、信号和协议等等信息。当然,你也可以让LinSSID显示更多的信息,比如安全性、带宽等等。要显示这些,进入**View**菜单并选择需要的选项。同样,它还能以图形方式展示不同通道中信号强度随时间的变化。最后,它支持2.4GHz和5GHz通道。
|
||||
|
||||
|
||||
就是这样。希望这个工具对你有用。
|
||||
|
||||
干杯!!
|
||||
|
||||
参考链接:
|
||||
|
||||
- [LinSSID 主页][2]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/linssid-graphical-wi-fi-scanner-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/sk/
|
||||
[1]:http://sourceforge.net/projects/linssid/files/
|
||||
[2]:http://sourceforge.net/projects/linssid/
|
@ -0,0 +1,137 @@
|
||||
Linux 基础:如何在Ubuntu上检查是否已经安装了一个包
|
||||
================================================================================
|
||||
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/04/ubuntu-790x558.png)
|
||||
|
||||
如果你正在管理Debian或者Ubuntu服务器,你也许会经常使用**dpkg** 或者 **apt-get**命令。这两个命令用来安装、卸载和更新包。
|
||||
|
||||
在本篇中,让我们看下如何在基于DEB的系统下检查是否安装了一个包。
|
||||
|
||||
要检查特定的包,比如firefox是否安装了,使用这个命令:
|
||||
|
||||
dpkg -s firefox
|
||||
|
||||
示例输出:
|
||||
|
||||
Package: firefox
|
||||
Status: install ok installed
|
||||
Priority: optional
|
||||
Section: web
|
||||
Installed-Size: 93339
|
||||
Maintainer: Ubuntu Mozilla Team <ubuntu-mozillateam@lists.ubuntu.com>
|
||||
Architecture: amd64
|
||||
Version: 35.0+build3-0ubuntu0.14.04.2
|
||||
Replaces: kubuntu-firefox-installer
|
||||
Provides: gnome-www-browser, iceweasel, www-browser
|
||||
Depends: lsb-release, libasound2 (>= 1.0.16), libatk1.0-0 (>= 1.12.4), libc6 (>= 2.17), libcairo2 (>= 1.2.4), libdbus-1-3 (>= 1.0.2), libdbus-glib-1-2 (>= 0.78), libfontconfig1 (>= 2.9.0), libfreetype6 (>= 2.2.1), libgcc1 (>= 1:4.1.1), libgdk-pixbuf2.0-0 (>= 2.22.0), libglib2.0-0 (>= 2.37.3), libgtk2.0-0 (>= 2.24.0), libpango-1.0-0 (>= 1.22.0), libpangocairo-1.0-0 (>= 1.14.0), libstartup-notification0 (>= 0.8), libstdc++6 (>= 4.6), libx11-6, libxcomposite1 (>= 1:0.3-1), libxdamage1 (>= 1:1.1), libxext6, libxfixes3, libxrender1, libxt6
|
||||
Recommends: xul-ext-ubufox, libcanberra0, libdbusmenu-glib4, libdbusmenu-gtk4
|
||||
Suggests: ttf-lyx
|
||||
Conffiles:
|
||||
/etc/firefox/syspref.js 09e457e65435a1a043521f2bd19cd2a1
|
||||
/etc/apport/blacklist.d/firefox ee63264f847e671832d42255912ce144
|
||||
/etc/apport/native-origins.d/firefox 7c26b75c7c2b715c89cc6d85338252a4
|
||||
/etc/apparmor.d/usr.bin.firefox f54f7a43361c7ecfa3874abca2f292cf
|
||||
Description: Safe and easy web browser from Mozilla
|
||||
Firefox delivers safe, easy web browsing. A familiar user interface,
|
||||
enhanced security features including protection from online identity theft,
|
||||
and integrated search let you get the most out of the web.
|
||||
Xul-Appid: {ec8030f7-c20a-464f-9b0e-13a3a9e97384}
|
||||
|
||||
如上所见,firefox已经安装了。
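在 shell 脚本中,这种检查经常被封装成一个小函数,直接利用 `dpkg -s` 的退出状态而不是人工阅读输出。下面是一个简单的示例(函数名 `is_installed` 是为演示起的,并非 dpkg 自带;这只是粗略检查,不区分“已安装”与“仅剩配置文件”等状态):

```shell
#!/bin/sh
# 若包已安装则返回 0,否则返回非 0(丢弃 dpkg 的输出,只用退出码)
is_installed() {
    dpkg -s "$1" >/dev/null 2>&1
}

if is_installed coreutils; then
    echo "coreutils is installed"
else
    echo "coreutils is not installed"
fi
```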
|
||||
|
||||
同样,你可以使用**dpkg-query** 命令。这个命令的输出更简洁,而且你还可以使用通配符。
|
||||
|
||||
dpkg-query -l firefox
|
||||
|
||||
示例输出:
|
||||
|
||||
Desired=Unknown/Install/Remove/Purge/Hold
|
||||
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|
||||
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
|
||||
||/ Name Version Architecture Description
|
||||
+++-====================================-=======================-=======================-=============================================================================
|
||||
ii firefox 35.0+build3-0ubuntu0.14 amd64 Safe and easy web browser from Mozilla
|
||||
|
||||
要列出你系统中安装的包,输入下面的命令:
|
||||
|
||||
dpkg --get-selections
|
||||
|
||||
示例输出:
|
||||
|
||||
abiword install
|
||||
abiword-common install
|
||||
accountsservice install
|
||||
acl install
|
||||
adduser install
|
||||
alsa-base install
|
||||
alsa-utils install
|
||||
anacron install
|
||||
app-install-data install
|
||||
apparmor install
|
||||
.
|
||||
.
|
||||
.
|
||||
zeitgeist install
|
||||
zeitgeist-core install
|
||||
zeitgeist-datahub install
|
||||
zenity install
|
||||
zenity-common install
|
||||
zip install
|
||||
zlib1g:amd64 install
|
||||
zlib1g:i386 install
|
||||
|
||||
上面的输出可能会非常长,这依赖于你的系统已安装的包。
|
||||
|
||||
你同样可以通过**grep**来过滤得到更精确的结果。比如,我想要使用**dpkg**命令查看系统中安装的gcc相关的包:
|
||||
|
||||
dpkg --get-selections | grep gcc
|
||||
|
||||
示例输出:
|
||||
|
||||
gcc install
|
||||
gcc-4.8 install
|
||||
gcc-4.8-base:amd64 install
|
||||
gcc-4.8-base:i386 install
|
||||
gcc-4.9-base:amd64 install
|
||||
gcc-4.9-base:i386 install
|
||||
libgcc-4.8-dev:amd64 install
|
||||
libgcc1:amd64 install
|
||||
libgcc1:i386 install
|
||||
|
||||
额外的,你可以使用“**-L**”参数来列出某个包所安装的文件的位置。
|
||||
|
||||
dpkg -L gcc-4.8
|
||||
|
||||
示例输出:
|
||||
|
||||
/.
|
||||
/usr
|
||||
/usr/share
|
||||
/usr/share/doc
|
||||
/usr/share/doc/gcc-4.8-base
|
||||
/usr/share/doc/gcc-4.8-base/README.Bugs
|
||||
/usr/share/doc/gcc-4.8-base/NEWS.html
|
||||
/usr/share/doc/gcc-4.8-base/quadmath
|
||||
/usr/share/doc/gcc-4.8-base/quadmath/changelog.gz
|
||||
/usr/share/doc/gcc-4.8-base/gcc
|
||||
.
|
||||
.
|
||||
.
|
||||
/usr/bin/x86_64-linux-gnu-gcc-4.8
|
||||
/usr/bin/x86_64-linux-gnu-gcc-ar-4.8
|
||||
/usr/bin/x86_64-linux-gnu-gcov-4.8
|
||||
|
||||
就是这样了。希望这篇对你有用。
|
||||
|
||||
美好的一天!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.unixmen.com/linux-basics-check-package-installed-not-ubuntu/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.unixmen.com/author/sk/
|