mirror of https://github.com/LCTT/TranslateProject.git
synced 2025-02-25 00:50:15 +08:00
This commit is contained in: merge commit 1b0a58fb21
@@ -194,7 +194,7 @@ mod\_evasive被配置为使用/etc/httpd/conf.d/mod\_evasive.conf中的指令。

DOSSystemCommand "sudo /usr/local/bin/scripts-tecmint/ban_ip.sh %s"
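原文没有给出 ban_ip.sh 的内容。下面是一个**假设性的**草图,演示这类脚本大致可以如何处理 mod_evasive 通过 %s 传入的攻击 IP:其中 BAN_LIST、DRY_RUN 等变量名均为本文虚构,仅作示意;真实环境中该脚本应由 apache 用户通过 sudo 以 root 权限调用 iptables。

```shell
# 假设性的 ban_ip.sh 草图:mod_evasive 用 %s 把攻击源 IP 作为第一个参数传入。
# DRY_RUN=1 时只打印将要执行的 iptables 命令,便于在普通环境里演示。
workdir=$(mktemp -d)
cat > "$workdir/ban_ip.sh" <<'EOF'
#!/bin/sh
IP=$1
BAN_LIST=${BAN_LIST:-/var/run/banned_ips.txt}
# 避免重复封禁同一 IP
grep -qx "$IP" "$BAN_LIST" 2>/dev/null && exit 0
echo "$IP" >> "$BAN_LIST"
if [ "$DRY_RUN" = 1 ]; then
    echo "iptables -I INPUT -s $IP -j DROP"
else
    iptables -I INPUT -s "$IP" -j DROP
fi
EOF
chmod +x "$workdir/ban_ip.sh"
out=$(BAN_LIST="$workdir/banned.txt" DRY_RUN=1 "$workdir/ban_ip.sh" 192.168.0.103)
echo "$out"
```

把 DRY_RUN 去掉即为真正的封禁动作;脚本是否解除封禁、何时解除,由各自环境自行决定。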

上面一行的%s代表了由mod\_evasive检测到的攻击IP地址。
上面一行的%s代表了由mod_evasive检测到的攻击IP地址。

#####将apache用户添加到sudoers文件#####

@@ -233,7 +233,7 @@ mod\_evasive被配置为使用/etc/httpd/conf.d/mod\_evasive.conf中的指令。

我们的测试环境由一个CentOS 7服务器[IP 192.168.0.17]和一个Windows组成,在Windows[IP 192.168.0.103]上我们发起攻击:



*确认主机IP地址*

请播放下面的视频(YT 视频,请自备梯子: https://www.youtube.com/-U_mdet06Jk ),并跟从列出的步骤来模拟一个Dos攻击:

@@ -257,7 +257,7 @@ mod\_evasive被配置为使用/etc/httpd/conf.d/mod\_evasive.conf中的指令。

--------------------------------------------------------------------------------

via: http://www.tecmint.com/protect-apache-using-mod\_security-and-mod\_evasive-on-rhel-centos-fedora/
via: http://www.tecmint.com/protect-apache-using-mod_security-and-mod_evasive-on-rhel-centos-fedora/

作者:[Gabriel Cánepa][a]
译者:[wwy-hust](https://github.com/wwy-hust)
@@ -1,37 +1,37 @@

在 Debian, Ubuntu, Linux Mint 及 Fedora 中安装 uGet 下载管理器 2.0
================================================================================
在经历了一段漫长的开发期后,期间发布了超过 11 个开发版本,最终 uGet 项目小组高兴地宣布 uGet 的最新稳定版本 uGet 2.0 已经可以下载使用了。最新版本包含许多吸引人的特点,例如一个新的设定对话框,改进了 aria2 插件对 BitTorrent 和 Metalink 协议的支持,同时对位于横栏中的 uGet RSS 信息提供了更好的支持,其他特点包括:
在经历了一段漫长的开发期后,并发布了超过 11 个开发版本,最终 uGet 项目小组高兴地宣布 uGet 的最新稳定版本 uGet 2.0 已经可以下载使用了。最新版本包含许多吸引人的特点,例如一个新的设定对话框,改进了 aria2 插件对 BitTorrent 和 Metalink 协议的支持,同时对位于横幅中的 uGet RSS 信息提供了更好的支持,其他特点包括:

- 一个新的 “检查更新” 按钮,提醒您有关新的发行版本的信息;
- 新增一个 “检查更新” 按钮,提醒您有关新的发行版本的信息;
- 增添新的语言支持并升级了现有的语言;
- 增加了一个新的 “信息横栏” ,允许开发者轻松地向所有的用户提供有关 uGet 的信息;
- 通过对文档、提交反馈和错误报告等内容的链接,增强了帮助菜单;
- 新增一个 “信息横幅” ,可以让开发者轻松地向所有的用户提供有关 uGet 的信息;
- 增强了帮助菜单,包括文档、提交反馈和错误报告等内容的链接;
- 将 uGet 下载管理器集成到了 Linux 平台下的两个主要的浏览器 Firefox 和 Google Chrome 中;
- 改进了对 Firefox 插件 ‘FlashGot’ 的支持;

### 何为 uGet ###

uGet (先前名为 UrlGfe) 是一个开源,免费,且极其强大的基于 GTK 的多平台下载管理器应用程序,它用 C 语言写就,在 GPL 协议下发布。它提供了一大类的功能,如恢复先前的下载任务,支持多重下载,使用一个独立的配置来支持分类,剪贴板监视,下载队列,从 HTML 文件中导出 URL 地址,集成在 Firefox 中的 Flashgot 插件中,使用集成在 uGet 中的 aria2(一个命令行下载管理器) 来下载 torrent 和 metalink 文件。
uGet (先前名为 UrlGfe) 是一个开源,免费,且极其强大的基于 GTK 的多平台下载管理器应用程序,它用 C 语言写就,在 GPL 协议下发布。它提供了大量功能,如恢复先前的下载任务,支持多点下载,使用一个独立的配置来支持分类,剪贴板监视,下载队列,从 HTML 文件中导出 URL 地址,集成在 Firefox 中的 Flashgot 插件中,使用集成在 uGet 中的 aria2(一个命令行下载管理器) 来下载 torrent 和 metalink 文件。

我已经在下面罗列出了 uGet 下载管理器的所有关键特点,并附带了详细的解释。

#### uGet 下载管理器的关键特点 ####

- 下载队列: 可以将你的下载任务放入一个队列中。当某些下载任务完成后,将会自动开始下载队列中余下的文件;
- 下载队列: 将你的下载任务放入一个队列中。当某些下载任务完成后,将会自动开始下载队列中余下的文件;
- 恢复下载: 假如在某些情况下,你的网络中断了,不要担心,你可以从先前停止的地方继续下载或重新开始;
- 下载分类: 支持多种分类来管理下载;
- 剪贴板监视: 将要下载的文件类型复制到剪贴板中,便会自动弹出下载提示框以下载刚才复制的文件;
- 批量下载: 允许你轻松地一次性下载多个文件;
- 支持多种协议: 允许你轻松地使用 aria2 命令行插件通过 HTTP, HTTPS, FTP, BitTorrent 及 Metalink 等协议下载文件;
- 多连接: 使用 aria2 插件,每个下载同时支持多达 20 个连接;
- 支持 FTP 登录或匿名 FTP 登录: 同时支持使用用户名和密码来登录 FTP 或匿名 FTP ;
- 支持 FTP 登录或 FTP 匿名登录: 同时支持使用用户名和密码来登录 FTP 或匿名 FTP ;
- 队列下载: 新增队列下载,现在你可以对你的所有下载进行安排调度;
- 通过 FlashGot 与 FireFox 集成: 与作为一个独立支持的 Firefox 插件的 FlashGot 集成,从而可以处理单个或大量的下载任务;
- CLI 界面或虚拟终端支持: 提供命令行或虚拟终端选项来下载文件;
- 自动创建目录: 假如你提供了一个先前并不存在的保存路径,uGet 将会自动创建这个目录;
- 下载历史管理: 跟踪记录已下载和已删除的下载任务的条目,每个列表支持 9999 个条目,比当前默认支持条目数目更早的条目将会被自动删除;
- 多语言支持: uGet 默认使用英语,但它可支持多达 23 种语言;
- Aria2 插件: uGet 集成了 Aria2 插件,来为 aria2 提供更友好的 GUI 界面;
- Aria2 插件: uGet 集成了 Aria2 插件,来为你提供更友好的 GUI 界面;

如若你想了解更加完整的特点描述,请访问 uGet 官方的 [特点页面][1].

@@ -43,7 +43,7 @@ uGet 开发者在 Linux 平台下的各种软件仓库中添加了 uGet 的最

#### 在 Debian 下 ####

在 Debian 的测试版本 (Jessie) 和不稳定版本 (Sid) 中,你可以在一个可信赖的基础上,使用官方的软件仓库轻易地安装和升级 uGet 。
在 Debian Jessie 和 Sid 中,你可以使用官方软件仓库轻易地安装和升级可靠的 uGet 软件包。

    $ sudo apt-get update
    $ sudo apt-get install uget

@@ -58,7 +58,7 @@ uGet 开发者在 Linux 平台下的各种软件仓库中添加了 uGet 的最

#### 在 Fedora 下 ####

在 Fedora 20 – 21 下,最新版本的 uGet(2.0) 可以从官方软件仓库中获得,从这些软件仓库中安装是非常值得信赖的。
在 Fedora 20 – 21 下,最新版本的 uGet(2.0) 可以从官方软件仓库中获得可靠的软件包。

    $ sudo yum install uget

@@ -70,7 +70,7 @@ uGet 开发者在 Linux 平台下的各种软件仓库中添加了 uGet 的最

默认情况下,uGet 在当今大多数的 Linux 系统中使用 `curl` 来作为后端,但 aria2 插件将 curl 替换为 aria2 来作为 uGet 的后端。

aria2 是一个单独的软件包,需要独立安装。你可以在你的 Linux 发行版本下,使用受支持的软件仓库来轻易地安装 aria2 的最新版本,或根据 [下载 aria2 页面][4] 来安装它,该页面详细解释了在各个发行版本中如何安装 aria2 。
aria2 是一个单独的软件包,需要独立安装。你可以在你的 Linux 发行版下,使用受支持的软件仓库来轻易地安装 aria2 的最新版本,或根据 [下载 aria2 页面][4] 来安装它,该页面详细解释了在各个发行版本中如何安装 aria2 。

#### 在 Debian, Ubuntu 和 Linux Mint 下 ####

@@ -91,28 +91,34 @@ Fedora 的官方软件仓库中已经添加了 aria2 软件包,所以你可以

为了启动 uGet,从桌面菜单的搜索栏中键入 "uGet"。可参考如下的截图:


开启 uGet 下载管理器

*开启 uGet 下载管理器*


uGet 版本: 2.0

*uGet 版本: 2.0*

#### 在 uGet 中激活 aria2 插件 ####

为了激活 aria2 插件, 从 uGet 菜单接着到 `编辑 –> 设置 –> 插件` , 从下拉菜单中选择 "aria2"。


为 uGet 启用 Aria2 插件

*为 uGet 启用 Aria2 插件*

### uGet 2.0 截图赏析 ###


使用 Aria2 下载文件

*使用 Aria2 下载文件*


使用 uGet 下载 Torrent 文件

*使用 uGet 下载 Torrent 文件*


使用 uGet 进行批量下载

*使用 uGet 进行批量下载*

针对其他 Linux 发行版本和 Windows 平台的 RPM 包和 uGet 的源文件都可以在 uGet 的[下载页面][5] 下找到。

@@ -122,7 +128,7 @@ via: http://www.tecmint.com/install-uget-download-manager-in-linux/

作者:[Ravi Saive][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,133 +1,99 @@

如何在CentOS 7上安装Percona Server

如何在 CentOS 7 上安装 Percona服务器
================================================================================
在这篇文章中我们将了解关于Percona Server,一个开源简易的MySQL,MariaDB的替代。InnoDB的数据库引擎使得Percona Server非常有吸引力,如果你需要的高性能,高可靠性和高性价比的解决方案,它将是一个很好的选择。

在下文中将介绍在CentOS 7上Percona的服务器的安装,以及备份当前数据,配置的步骤和如何恢复备份。

###目录###

1.什么是Percona,为什么使用它
2.备份你的数据库
3.删除之前的SQL服务器
4.使用二进制包安装Percona
5.配置Percona
6.保护你的数据
7.恢复你的备份

在这篇文章中我们将了解关于 Percona 服务器,一个开源的MySQL,MariaDB的替代品。InnoDB的数据库引擎使得Percona 服务器非常有吸引力,如果你需要的高性能,高可靠性和高性价比的解决方案,它将是一个很好的选择。

在下文中将介绍在CentOS 7上Percona 服务器的安装,以及备份当前数据,配置的步骤和如何恢复备份。

### 1.什么是Percona,为什么使用它 ###

Percona是一个开源简易的MySQL,MariaDB数据库的替代,它是MYSQL的一个分支,相当多的改进和独特的功能使得它比MYSQL更可靠,性能更强,速度更快,它与MYSQL完全兼容,你甚至可以在Oracle的MYSQL与Percona之间使用复制命令。
Percona是一个MySQL,MariaDB数据库的开源替代品,它是MySQL的一个分支,相当多的改进和独特的功能使得它比MYSQL更可靠,性能更强,速度更快,它与MYSQL完全兼容,你甚至可以在Oracle的MySQL与Percona之间使用复制。

#### 在Percona中独具特色的功能 ####

- 分区适应哈希搜索
- 快速校验算法
- 缓冲池预加载
- 支持FlashCache

-分段自适应哈希搜索
-快速校验算法
-缓冲池预加载
-支持FlashCache

#### MySQL企业版和Percona中的特有功能 ####

#### MySQL企业版和Percona的特定功能 ####
- 从不同的服务器导入表
- PAM认证
- 审计日志
- 线程池

-从不同的服务器导入表
-PAM认证
-审计日志
-线程池

现在,你肯定很兴奋地看到这些好的东西整理在一起,我们将告诉你如何安装和做些的Percona Server的基本配置。
现在,你肯定很兴奋地看到这些好的东西整合在一起,我们将告诉你如何安装和对Percona Server做基本配置。

### 2. 备份你的数据库 ###

接下来,在命令行下使用SQL命令创建一个mydatabases.sql文件来重建/恢复salesdb和employeedb数据库,重命名数据库以便反映你的设置,如果没有安装MYSQL跳过此步
接下来,在命令行下使用SQL命令创建一个mydatabases.sql文件,来重建或恢复salesdb和employeedb数据库,根据你的设置替换数据库名称,如果没有安装MySQL则跳过此步:

    mysqldump -u root -p --databases employeedb salesdb > mydatabases.sql

复制当前的配置文件,如果你没有安装MYSQL也可跳过

复制当前的配置文件,如果你没有安装MYSQL也可跳过:

    cp my.cnf my.cnf.bkp

### 3.删除之前的SQL服务器 ###

停止MYSQL/MariaDB如果它们还在运行

停止MYSQL/MariaDB,如果它们还在运行:

    systemctl stop mysql.service

卸载MariaDB和MYSQL

卸载MariaDB和MYSQL:

    yum remove MariaDB-server MariaDB-client MariaDB-shared mysql mysql-server

移动重命名在/var/lib/mysql当中的MariaDB文件,这比仅仅只是移除更为安全快速,这就像2级即时备份。:)

移动重命名放在/var/lib/mysql当中的MariaDB文件。这比仅仅只是移除更为安全快速,这就像2级即时备份。:)

    mv /var/lib/mysql /var/lib/mysql_mariadb

### 4.使用二进制包安装Percona ###

你可以在众多Percona安装方法中选择,在CentOS中使用Yum或者RPM包安装通常是更好的主意,所以这些是本文介绍的方式,下载源文件编译后安装在本文中并没有介绍。

从Yum仓库中安装:

从Yum仓库中安装:

首先,你需要设置的Percona的Yum库:

首先,你需要设置Percona的Yum库:

    yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm

接下来安装Percona:

    yum install Percona-Server-client-56 Percona-Server-server-56

上面的命令安装Percona的服务器和客户端,共享库,可能需要Perl和Perl模块,以及其他依赖的需要。如DBI::MySQL的,如果这些尚未安装,

使用RPM包安装:
上面的命令安装Percona的服务器和客户端、共享库,可能需要Perl和Perl模块,以及其他依赖的需要,如DBI::MySQL。如果这些尚未安装,可能需要安装更多的依赖包。

使用RPM包安装:

我们可以使用wget命令下载所有的rpm包:

    wget -r -l 1 -nd -A rpm -R "*devel*,*debuginfo*" \ http://www.percona.com/downloads/Percona-Server-5.5/Percona-Server-5.5.42-37.1/binary/redhat/7/x86_64/
    wget -r -l 1 -nd -A rpm -R "*devel*,*debuginfo*" \
http://www.percona.com/downloads/Percona-Server-5.5/Percona-Server-5.5.42-37.1/binary/redhat/7/x86_64/
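上面 wget 命令中,-A rpm 只接受 .rpm 文件,-R "*devel*,*debuginfo*" 则拒绝 devel 和 debuginfo 包。下面用一小段纯 shell 模拟这一“接受/拒绝”过滤逻辑(文件名列表为虚构示例,仅作示意,并不真正下载任何东西):

```shell
# 模拟 wget 的 -A rpm -R "*devel*,*debuginfo*" 过滤规则
kept=""
for f in \
    Percona-Server-server-55-5.5.42-rel37.1.el7.x86_64.rpm \
    Percona-Server-devel-55-5.5.42-rel37.1.el7.x86_64.rpm \
    Percona-Server-55-debuginfo-5.5.42-rel37.1.el7.x86_64.rpm \
    index.html
do
    case "$f" in
        *devel*|*debuginfo*) ;;   # 对应 -R:拒绝
        *.rpm) kept="$kept $f" ;; # 对应 -A:接受
        *) ;;                     # 其余文件同样被丢弃
    esac
done
echo "$kept"
```

可以看到,只有 server 包会被保留,devel/debuginfo 包和 HTML 索引都被过滤掉。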

使用rpm工具,一次性安装所有的rpm包:

    rpm -ivh Percona-Server-server-55-5.5.42-rel37.1.el7.x86_64.rpm \
    Percona-Server-client-55-5.5.42-rel37.1.el7.x86_64.rpm \
    Percona-Server-shared-55-5.5.42-rel37.1.el7.x86_64.rpm

    rpm -ivh Percona-Server-server-55-5.5.42-rel37.1.el7.x86_64.rpm \ Percona-Server-client-55-5.5.42-rel37.1.el7.x86_64.rpm \ Percona-Server-shared-55-5.5.42-rel37.1.el7.x86_64.rpm

注意在上面命令语句中最后的反斜杠'\',如果您安装单独的软件包,记住要解决依赖关系,在安装客户端之前要先安装共享包,在安装服务器之前请先安装客户端。
注意在上面命令语句中最后的反斜杠'\'(只是为了换行方便)。如果您安装单独的软件包,记住要解决依赖关系,在安装客户端之前要先安装共享包,在安装服务器之前请先安装客户端。

### 5.配置Percona服务器 ###

#### 恢复之前的配置 ####

当我们从MariaDB迁移过来时,你可以将之前的my.cnf的备份文件恢复回来。

    cp /etc/my.cnf.bkp /etc/my.cnf

#### 创建一个新的my.cnf文件 ####

如果你需要一个适合你需求的新的配置文件或者你并没有备份配置文件,你可以使用以下方法,通过简单的几步生成新的配置文件。

下面是Percona-server软件包自带的my.cnf文件

    # Percona Server template configuration

    [mysqld]

@@ -158,33 +124,29 @@ Percona是一个开源简易的MySQL,MariaDB数据库的替代,它是MYSQL

根据你的需要配置好my.cnf后,就可以启动该服务了:

    systemctl restart mysql.service

如果一切顺利的话,它已经准备好执行SQL命令了,你可以用以下命令检查它是否已经正常启动:

    mysql -u root -p -e 'SHOW VARIABLES LIKE "version_comment"'

如果你不能够正常启动它,你可以在**/var/log/mysql/mysqld.log**中查找原因,该文件可在my.cnf的[mysqld_safe]的log-error中设置。

    tail /var/log/mysql/mysqld.log

你也可以在/var/lib/mysql/文件夹下查找格式为[hostname].err的文件,就像下面这个例子样:

你也可以在/var/lib/mysql/文件夹下查找格式为[主机名].err的文件,就像下面这个例子:

    tail /var/lib/mysql/centos7.err

如果还是没找出原因,你可以试试strace:

    yum install strace && systemctl stop mysql.service && strace -f -f mysqld_safe

上面的命令挺长的,输出的结果也相对简单,但绝大多数时候你都能找到无法启动的原因。

### 6.保护你的数据 ###

好了,你的关系数据库管理系统已经准备好接收SQL查询,但是把你宝贵的数据放在没有最起码安全保护的服务器上并不可取,为了更为安全最好使用mysql_secure_instalation,这个工具可以帮助删除未使用的默认功能,还设置root的密码,并限制使用此用户进行访问。
只需要在shell中执行,并参照屏幕上的说明。
好了,你的关系数据库管理系统已经准备好接收SQL查询,但是把你宝贵的数据放在没有最起码安全保护的服务器上并不可取,为了更为安全最好运行mysql_secure_installation,这个工具可以帮助你删除未使用的默认功能,并设置root的密码,限制使用此用户进行访问。只需要在shell中执行该命令,并参照屏幕上的说明操作。

    mysql_secure_installation

@@ -192,28 +154,27 @@

如果您参照之前的设置,现在你可以恢复数据库,只需用 mysql 将之前导出的备份文件导入即可:

    mysql -u root -p < mydatabases.sql

恭喜你,你刚刚已经在你的CentOS上成功安装了Percona,你的服务器已经可以正式投入使用;你可以像使用MYSQL一样使用它,你的服务器与他完全兼容。

恭喜你,你刚刚已经在你的CentOS上成功安装了Percona,你的服务器已经可以正式投入使用;你可以像使用MySQL一样使用它,你的服务器与它完全兼容。

### 总结 ###

为了获得更强的性能你需要对配置文件做大量的修改,但这里也有一些简单的选项来提高机器的性能。当使用InnoDB引擎时,将innodb_file_per_table设置为on,它将在一个文件中为每个表创建索引表,这意味着每个表都有它自己的索引文件,它使系统更强大和更容易维修。
为了获得更强的性能你需要对配置文件做大量的修改,但这里也有一些简单的选项来提高机器的性能。当使用InnoDB引擎时,将innodb_file_per_table设置为on,它将在一个文件中为每个表创建索引表,这意味着每个表都有它自己的索引文件,它使系统更强大和更容易维修。

可以修改innodb_buffer_pool_size选项,InnoDB应该有足够的缓存池来应对你的数据集,大小应该为当前可用内存的70%到80%。

通过将innodb-flush-method设置为O_DIRECT,关闭写入高速缓存,如果你使用了RAID,这可以提升性能因为在底层已经完成了缓存操作。
将innodb-flush-method设置为O_DIRECT,关闭写入高速缓存,如果你使用了RAID,这可以提升性能,因为在底层已经完成了缓存操作。

如果你的数据并不是十分关键并且并不需要对数据库事务正确执行的四个基本要素完全兼容,可以将innodb_flush_log_at_trx_commit设置为2,这也能提升系统的性能。

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/percona-server-centos-7/

作者:[Carlos Alberto][a]
译者:[FatJoe123](https://github.com/FatJoe123)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,8 +1,8 @@

Linux有问必答——Linux上Apache错误日志的位置在哪里?

Linux有问必答:Linux上Apache错误日志的位置在哪里?
================================================================================
> **问题**: 我尝试着解决我 Linux 系统上的 Apache 网络服务器的错误,Apache的错误日志文件放在[你的 Linux 版本]的哪个位置呢?
> **问题**: 我尝试着解决我 Linux 系统上的 Apache Web 服务器的错误,Apache的错误日志文件放在[XX Linux 版本]的哪个位置呢?

错误日志和访问日志文件为系统管理员提供了有用的信息,比如,为网络服务器排障,[保护][1]系统不受各种各样的恶意活动侵犯,或者只是进行[各种各样的][2][分析][3]以监控 HTTP 服务器。根据你网络服务器配置的不同,其错误/访问日志可能放在你系统中不同位置。
错误日志和访问日志文件为系统管理员提供了有用的信息,比如,为 Web 服务器排障,[保护][1]系统不受各种各样的恶意活动侵犯,或者只是进行[各种各样的][2][分析][3]以监控 HTTP 服务器。根据你 Web 服务器配置的不同,其错误/访问日志可能放在你系统中不同位置。

本文可以帮助你**找到Linux上的Apache错误日志**。

@@ -28,7 +28,7 @@ Linux有问必答——Linux上Apache错误日志的位置在哪里?

#### 使用虚拟主机自定义的错误日志 ####

如果在 Apache 网络服务器中使用了虚拟主机, ErrorLog 指令可能会在虚拟主机容器内指定,在这种情况下,上面所说的系统范围的错误日志位置将被忽略。
如果在 Apache Web 服务器中使用了虚拟主机, ErrorLog 指令可能会在虚拟主机容器内指定,在这种情况下,上面所说的系统范围的错误日志位置将被忽略。

启用了虚拟主机后,各个虚拟主机可以定义其自身的自定义错误日志位置。要找出某个特定虚拟主机的错误日志位置,你可以打开 /etc/apache2/sites-enabled/<your-site>.conf,然后查找 ErrorLog 指令,该指令会显示站点指定的错误日志文件。

@@ -40,11 +40,11 @@ Linux有问必答——Linux上Apache错误日志的位置在哪里?

#### 自定义的错误日志 ####

要找出 Apache 错误日志的自定义位置,请用文本编辑器打开 /etc/httpd/conf/httpd.conf,然后查找 ServerRoot,该参数显示了 Apache 服务器目录树的顶层,日志文件和配置都位于该目录树中。例如:
要找出 Apache 错误日志的自定义位置,请用文本编辑器打开 /etc/httpd/conf/httpd.conf,然后查找 ServerRoot,该参数显示了 Apache Web 服务器目录树的顶层,日志文件和配置都位于该目录树中。例如:

    ServerRoot "/etc/httpd"

现在,查找 ErrorLog 开头的行,该行指出了 Apache 网络服务器将错误日志写到了哪里去。注意,指定的位置是 ServerRoot 值的相对位置。例如:
现在,查找 ErrorLog 开头的行,该行指出了 Apache Web 服务器将错误日志写到了哪里去。注意,指定的位置是 ServerRoot 值的相对位置。例如:
ErrorLog "log/error_log"
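也就是说,日志的最终位置是由 ServerRoot 与 ErrorLog 的取值拼接出来的(当 ErrorLog 写成绝对路径时则按原样使用)。下面的小片段按上文的示例值演示这一解析规则:

```shell
# 演示 ErrorLog 相对路径如何基于 ServerRoot 解析(取值来自上文示例)
server_root="/etc/httpd"
error_log="log/error_log"

case "$error_log" in
    /*) log_path="$error_log" ;;              # 绝对路径:原样使用
    *)  log_path="$server_root/$error_log" ;; # 相对路径:拼接到 ServerRoot 之下
esac
echo "$log_path"
```

对上文的示例值,拼接结果即 /etc/httpd/log/error_log。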

@@ -71,11 +71,11 @@ via: http://ask.xmodulo.com/apache-error-log-location-linux.html

作者:[Dan Nanni][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://ask.xmodulo.com/author/nanni
[1]:http://xmodulo.com/configure-fail2ban-apache-http-server.html
[2]:http://xmodulo.com/interactive-apache-web-server-log-analyzer-linux.html
[3]:http://xmodulo.com/sql-queries-apache-log-files-linux.html
[1]:https://linux.cn/article-5068-1.html
[2]:https://linux.cn/article-5352-1.html
[3]:https://linux.cn/article-4405-1.html
@@ -1,7 +1,7 @@

如何在云服务提供商的机器使用Docker Machine

如何在云服务提供商的平台上使用Docker Machine
================================================================================
大家好,今天我们来学习如何使用Docker Machine在各种云服务提供商的平台部署Docker。Docker Machine是一个可以帮助我们在自己的电脑、云服务提供商的机器以及我们数据中心的机器上创建Docker机器的应用程序。它为创建服务器、在服务器中安装Docker、根据用户需求配置Docker客户端提供了简单的解决方案。驱动API对本地机器、数据中心的虚拟机或者公用云机器都适用。Docker Machine支持Windows、OSX和Linux,并且提供一个独立的二进制文件,可以直接使用。它让我们可以充分利用支持Docker的基础设施的生态环境合作伙伴,并且使用相同的接口进行访问。它让人们可以使用一个命令来简单而迅速地在不同的云平台部署Docker容器。

大家好,今天我们来了解如何使用Docker Machine在各种云服务提供商的平台上部署Docker。Docker Machine是一个可以帮助我们在自己的电脑、云服务提供商的平台以及我们数据中心的机器上创建Docker机器的应用程序。它为创建服务器、在服务器中安装Docker、根据用户需求配置Docker客户端提供了简单的解决方案。驱动API对本地机器、数据中心的虚拟机或者公用云机器都适用。Docker Machine支持Windows、OSX和Linux,并且提供一个独立的二进制文件,可以直接使用。它让我们可以充分利用支持Docker的基础设施的生态环境合作伙伴,并且使用相同的接口进行访问。它让人们可以使用一个命令来简单而迅速地在不同的云平台部署Docker容器。

### 1. 安装Docker Machine ###

@@ -25,14 +25,14 @@ Docker Machine可以很好地支持每一种Linux发行版。首先,我们需



另外机器上需要有docker命令,可以使用如下命令安装:
要在我们的机器上启用docker命令,需要使用如下命令安装Docker客户端:

    # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker
    # chmod +x /usr/local/bin/docker

### 2. 创建机器 ###

在自己的Linux机器上安装好了Docker Machine之后,我们想要将一个docker虚拟机部署到云服务器上。Docker Machine支持几个流行的云平台,如Digital Ocean、Amazon Web Services(AWS)、Microsoft Azure、Google Cloud Computing等等,所以我们可以在不同的平台使用相同的接口来部署Docker。本文中我们会使用digitalocean驱动在Digital Ocean的服务器上部署Docker,--driver选项指定digitalocean驱动,--digitalocean-access-token选项指定[Digital Ocean Control Panel][1]提供的API Token,命令最后的是我们创建的Docker虚拟机的机器名。运行如下命令:
在自己的Linux机器上安装好了Docker Machine之后,我们想要将一个docker虚拟机部署到云服务器上。Docker Machine支持几个流行的云平台,如Digital Ocean、Amazon Web Services(AWS)、Microsoft Azure、Google Cloud Computing及其它等等,所以我们可以在不同的平台使用相同的接口来部署Docker。本文中我们会使用digitalocean驱动在Digital Ocean的服务器上部署Docker,--driver选项指定digitalocean驱动,--digitalocean-access-token选项指定[Digital Ocean Control Panel][1]提供的API Token,命令最后的是我们创建的Docker虚拟机的机器名。运行如下命令:

    # docker-machine create --driver digitalocean --digitalocean-access-token <API-Token> linux-dev

@@ -40,7 +40,7 @@ Docker Machine可以很好地支持每一种Linux发行版。首先,我们需



**注意**: 这里linux-dev是我们将要创建的机器的名称。`<API-Token>`是一个安全key,可以在Digtal Ocean Control Panel生成。要找到这个key,我们只需要登录到我们的Digital Ocean Control Panel,然后点击API,再点击Generate New Token,填写一个名称,选上Read和Write。然后我们就会得到一串十六进制的key,那就是`<API-Token>`,简单地替换到上边的命令中即可。
**注意**: 这里linux-dev是我们将要创建的机器的名称。`<API-Token>`是一个安全key,可以在Digtal Ocean Control Panel生成。要找到这个key,我们只需要登录到我们的Digital Ocean Control Panel,然后点击API,再点击 Generate New Token,填写一个名称,选上Read和Write。然后我们就会得到一串十六进制的key,那就是`<API-Token>`,简单地替换到上边的命令中即可。

运行如上命令后,我们可以在Digital Ocean Droplet Panel中看到一个具有默认配置的droplet已经被创建出来了。

@@ -48,35 +48,35 @@ Docker Machine可以很好地支持每一种Linux发行版。首先,我们需

简便起见,docker-machine会使用默认配置来部署Droplet。我们可以通过增加选项来定制我们的Droplet。这里是一些digitalocean相关的选项,我们可以使用它们来覆盖Docker Machine所使用的默认配置。

    --digitalocean-image "ubuntu-14-04-x64" 是选择Droplet的镜像
    --digitalocean-ipv6 enable 是启用IPv6网络支持
    --digitalocean-private-networking enable 是启用专用网络
    --digitalocean-region "nyc3" 是选择部署Droplet的区域
    --digitalocean-size "512mb" 是选择内存大小和部署的类型

- --digitalocean-image "ubuntu-14-04-x64" 用于选择Droplet的镜像
- --digitalocean-ipv6 enable 启用IPv6网络支持
- --digitalocean-private-networking enable 启用专用网络
- --digitalocean-region "nyc3" 选择部署Droplet的区域
- --digitalocean-size "512mb" 选择内存大小和部署的类型

如果你想在其他云服务使用docker-machine,并且想覆盖默认的配置,可以运行如下命令来获取Docker Mackine默认支持的对每种平台适用的参数。

    # docker-machine create -h

### 3. 选择活跃机器 ###
### 3. 选择活跃主机 ###

部署Droplet后,我们想马上运行一个Docker容器,但在那之前,我们需要检查下活跃机器是否是我们需要的机器。可以运行如下命令查看。
部署Droplet后,我们想马上运行一个Docker容器,但在那之前,我们需要检查下活跃主机是否是我们需要的机器。可以运行如下命令查看。

    # docker-machine ls



ACTIVE一列有“*”标记的是活跃机器。
ACTIVE一列有“*”标记的是活跃主机。

现在,如果我们想将活跃机器切换到需要的机器,运行如下命令:
现在,如果我们想将活跃主机切换到需要的主机,运行如下命令:

    # docker-machine active linux-dev

**注意**:这里,linux-dev是机器名,我们打算激活这个机器,并且在其中运行Docker容器。
**注意**:这里,linux-dev是机器名,我们打算激活这个机器,并且在其上运行Docker容器。

### 4. 运行一个Docker容器 ###

现在,我们已经选择了活跃机器,就可以运行Docker容器了。可以测试一下,运行一个busybox容器来执行`echo hello word`命令,这样就可以得到输出:
现在,我们已经选择了活跃主机,就可以运行Docker容器了。可以测试一下,运行一个busybox容器来执行`echo hello word`命令,这样就可以得到输出:

    # docker run busybox echo hello world

@@ -98,9 +98,9 @@ SSH到机器上之后,我们可以在上边运行任何Docker容器。这里

    # exit

### 5. 删除机器 ###
### 5. 删除主机 ###

删除在运行的机器以及它的所有镜像和容器,我们可以使用docker-machine rm命令:
删除在运行的主机以及它的所有镜像和容器,我们可以使用docker-machine rm命令:

    # docker-machine rm linux-dev

@@ -112,15 +112,15 @@ SSH到机器上之后,我们可以在上边运行任何Docker容器。这里



### 6. 在不使用驱动的情况新增一个机器 ###
### 6. 在不使用驱动的情况新增一个主机 ###

我们可以在不使用驱动的情况往Docker增加一台机器,只需要一个URL。它可以使用一个已有机器的别名,所以我们就不需要每次在运行docker命令时输入完整的URL了。
我们可以在不使用驱动的情况往Docker增加一台主机,只需要一个URL。它可以使用一个已有机器的别名,所以我们就不需要每次在运行docker命令时输入完整的URL了。

    $ docker-machine create --url=tcp://104.131.50.36:2376 custombox

### 7. 管理机器 ###
### 7. 管理主机 ###

如果你已经让Docker运行起来了,可以使用简单的**docker-machine stop**命令来停止所有正在运行的机器,如果需要再启动的话可以运行**docker-machine start**:
如果你已经让Docker运行起来了,可以使用简单的**docker-machine stop**命令来停止所有正在运行的主机,如果需要再启动的话可以运行**docker-machine start**:

    # docker-machine stop
    # docker-machine start

@@ -140,7 +140,7 @@ via: http://linoxide.com/linux-how-to/use-docker-machine-cloud-provider/

作者:[Arun Pyasi][a]
译者:[goreliu](https://github.com/goreliu)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,16 +1,13 @@

如何在Ubuntu/Debian/Linux Mint中编译和安装wxWidgets
================================================================================

### wxWidgets ###

wxWidgets是一个程序开发框架/库, 允许你在Windows、Mac、Linux中使用相同的代码跨平台开发。

它主要用C++写成,但也可以与其他语言绑定比如Python、Perl、Ruby。
wxWidgets是一个程序开发框架/库, 允许你在Windows、Mac、Linux中使用相同的代码跨平台开发。它主要用C++写成,但也可以与其他语言绑定比如Python、Perl、Ruby。

本教程中我将向你展示如何在基于Debian的linux中如Ubuntu和Linux Mint中编译wxwidgets 3.0+。

从源码编译wxWidgets并不困难,仅仅需要几分钟。

库可以按不同的方式来编译,比如静态或者动态库。
从源码编译wxWidgets并不困难,仅仅需要几分钟。库可以按不同的方式来编译,比如静态或者动态库。

### 1. 下载 wxWidgets ###

@@ -20,13 +17,13 @@ wxWidgets是一个程序开发框架/库, 允许你在Windows、Mac、Linux中

### 2. 设置编译环境 ###

要编译wxwidgets,我们需要一些工具包括C++编译器, 在Linux上是g++。所有这些可以通过apt-get工具从仓库中安装。
要编译wxwidgets,我们需要一些工具包括C++编译器,在Linux上是g++。所有这些可以通过apt-get工具从仓库中安装。

我们还需要wxWidgets依赖的GTK开发库。

    $ sudo apt-get install libgtk-3-dev build-essential checkinstall

>checkinstall工具允许我们为wxwidgets创建一个安装包,这样之后就可以轻松的使用包管理器来卸载。
> 这个叫做checkinstall的工具允许我们为wxwidgets创建一个安装包,这样之后就可以轻松的使用包管理器来卸载。

### 3. 编译 wxWidgets ###

@@ -42,7 +39,7 @@ wxWidgets是一个程序开发框架/库, 允许你在Windows、Mac、Linux中

"--disable-shared"选项将会编译静态库而不是动态库。

make命令完成后,编译也成功了。是时候安装wxWidgets到正确的目录。
make命令完成后,编译就成功了。是时候安装wxWidgets到正确的目录。

更多信息请参考install.txt和readme.txt,这可在wxwidgets中的/docs/gtk/目录下找到。

@@ -58,7 +55,7 @@ checkinstall会询问几个问题,请保证在提问后提供一个版本号

### 5. 追踪安装的文件 ###

如果你想要检查文件安装的位置,使用dpkg命令后面跟上checkinstall提供的报名。
如果你想要检查文件安装的位置,使用dpkg命令后面跟上checkinstall提供的包名。

    $ dpkg -L package_name
    /.

@@ -85,17 +82,17 @@ checkinstall会询问几个问题,请保证在提问后提供一个版本号

    $ cd samples/
    $ make

make命令完成后,进入sampl子目录,这里就有一个可以马上运行的Demo程序了。
make命令完成后,进入sample 子目录,这里就有一个可以马上运行的Demo程序了。

### 7. 编译你的第一个程序 ###

你完成编译demo程序后,可以写你自己的程序来编译了。这个也很简单。

假设你用的是C++这样你可以使用编辑器的高亮特性。比如gedit、kate、kwrite等等。或者用全功能的IDE像Geany、Codelite、Codeblocks等等。
假设你用的是C++,这样的话你还可以使用编辑器的高亮特性。比如gedit、kate、kwrite等等。或者用全功能的IDE像Geany、Codelite、Codeblocks等等。

然而你的第一个程序只需要用一个文本编辑器来快速完成。

这里就是
如下:

    #include <wx/wx.h>

@@ -155,7 +152,7 @@ via: http://www.binarytides.com/install-wxwidgets-ubuntu/

作者:[Silver Moon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,14 +1,14 @@

Linux 有问必答--如何在桌面版 Ubuntu 中用命令行更改系统代理设置
Linux 有问必答:如何在桌面版 Ubuntu 中用命令行更改系统代理设置
================================================================================
> **问题**: 我经常需要在桌面版 Ubuntu 中更改系统代理设置,但我不想通过繁琐的 GUI 菜单链:"系统设置" -> "网络" -> "网络代理"。在命令行中有更方便的方法更改桌面版的代理设置吗?
> **问题**: 我经常需要在桌面版 Ubuntu 中更改系统代理设置,但我不想通过繁琐的 GUI 菜单点击:"系统设置" -> "网络" -> "网络代理"。在命令行中有更方便的方法更改桌面版的代理设置吗?

在桌面版 Ubuntu 中,它的桌面环境设置,包括系统代理设置,都存储在 DConf 数据库,这是简单的键值对存储。如果你想通过系统设置菜单修改桌面属性,更改会持久保存在后端的 DConf 数据库。在 Ubuntu 中更改 DConf 数据库有基于图像用户界面和非图形用户界面的两种方式。系统设置或者 dconf-editor 是访问 DConf 数据库的图形方法,而 gsettings 或 dconf 就是能更改数据库的命令行工具。
在桌面版 Ubuntu 中,它的桌面环境设置,包括系统代理设置,都存储在 DConf 数据库,这是简单的键值对存储。如果你想通过系统设置菜单修改桌面属性,更改会持久保存在后端的 DConf 数据库。在 Ubuntu 中更改 DConf 数据库有基于图像用户界面和非图形用户界面的两种方式。系统设置或者 `dconf-editor` 是访问 DConf 数据库的图形方法,而 `gsettings` 或 `dconf` 就是能更改数据库的命令行工具。

下面介绍如何用 gsettings 从命令行更改系统代理设置。
下面介绍如何用 `gsettings` 从命令行更改系统代理设置。



gsetting 读写特定 Dconf 设置的基本用法如下:
`gsettings` 读写特定 Dconf 设置的基本用法如下:

更改 DConf 设置:

@@ -53,7 +53,7 @@ gsetting 读写特定 Dconf 设置的基本用法如下:

### 在命令行中清除系统代理设置 ###

最后,清除所有 手动/自动 代理设置,还原为无代理设置:
最后,清除所有“手动/自动”代理设置,还原为无代理设置:
$ gsettings set org.gnome.system.proxy mode 'none'
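如果想把这类 gsettings 调用封装成可以反复使用的小脚本,可以参考下面的草图。为了能在没有 GNOME 会话的环境里演示,这里用变量把 gsettings 换成了 echo(只打印将要执行的命令而不真正执行);键名沿用本文中的 org.gnome.system.proxy 模式,set_http_proxy 这个函数名是本文虚构的:

```shell
# 用变量把 gsettings 替换成 echo,便于在没有 GNOME 会话的环境里演示;
# 真实使用时改成 GSET="gsettings" 即可
GSET="echo gsettings"

set_http_proxy() {
    # $1=代理主机 $2=端口
    $GSET set org.gnome.system.proxy mode "'manual'"
    $GSET set org.gnome.system.proxy.http host "'$1'"
    $GSET set org.gnome.system.proxy.http port "$2"
}

out=$(set_http_proxy proxy.example.com 8000)
echo "$out"
```

清除代理时,同样只需一条 `gsettings set org.gnome.system.proxy mode 'none'`,与上文一致。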

@@ -63,7 +63,7 @@ via: http://ask.xmodulo.com/change-system-proxy-settings-command-line-ubuntu-des

作者:[Dan Nanni][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -1,8 +1,8 @@

Linux ntopng——网络监控工具的安装(附截图)

Linux 上网络监控工具 ntopng 的安装
================================================================================
当今世界,人们的计算机都相互连接,互联互通。小到你的家庭局域网(LAN),大到最大的一个被我们称为的——互联网。当你管理一台联网的计算机时,你就是在管理最关键的组件之一。由于大多数开发出的应用程序都基于网络,网络就连接起了这些关键点。
当今世界,人们的计算机都相互连接,互联互通。小到你的家庭局域网(LAN),大到最大的一个被我们称为互联网。当你管理一台联网的计算机时,你就是在管理最关键的组件之一。由于大多数开发出的应用程序都基于网络,网络就连接起了这些关键点。

这就是为什么我们需要网络监控工具。最好的网络监控工具之一,它叫作ntop。来自[维基百科][1]的知识“ntop是一个网络探测器,它以与top显示进程般类似的方式显示网络使用率。在交互模式中,它显示了用户终端上的网络状态。在网页模式中,它作为网络服务器,创建网络状态的HTML转储文件。它支持NetFlow/sFlowemitter/collector,这是一个基于HTTP的客户端界面,用于创建ntop为中心的监控应用和RRD用于持续地存储通信数据”
这就是为什么我们需要网络监控工具。ntop 是最好的网络监控工具之一。来自[维基百科][1]的知识“ntop是一个网络探测器,它以与top显示进程般类似的方式显示网络使用率。在交互模式中,它显示了用户终端上的网络状态。在网页模式中,它作为网络服务器,创建网络状态的HTML转储文件。它支持NetFlow/sFlowemitter/collector,这是一个基于HTTP的客户端界面,用于创建ntop为中心的监控应用,并使用RRD来持续存储通信数据”。

15年后的今天,你将见到ntopng——下一代ntop。

@@ -15,7 +15,7 @@ Ntopng是一个基于网页的高速通信分析器和流量收集器。Ntopng

从[ntopng网站][2]上,我们可以看到他们说它有众多的特性。这里列出了其中一些:

- 按各种协议对网络通信排序
- 显示网络通信和IPv4/v6激活的主机
- 显示网络通信和IPv4/v6的激活主机
- 持续不断以RRD格式存储定位主机的通信数据到磁盘
- 通过nDPI,ntop的DPI框架,发现应用协议
- 显示各种协议间的IP通信分布
@@ -24,11 +24,9 @@ Ntopng是一个基于网页的高速通信分析器和流量收集器。Ntopng
- 报告按协议类型排序的IP协议使用率
- 生成HTML5/AJAX网络通信数据

### 安装 ###
### 安装的先决条件 ###

Ntop为CentOS和**基于64位**Ubuntu预编译好了包,你可以在[他们的下载页面][3]找到这些包。对于32位操作系统,你必须从源代码编译。本文在**CentOS 6.4 32位**版本上**测试过**。但是,它也可以在其它基于CentOS/RedHat的Linux版本上工作。让我们开始吧。

#### 先决条件 ####
Ntop为CentOS和**基于64位**Ubuntu预编译好了包,你可以在[它们的下载页面][3]找到这些包。对于32位操作系统,你必须从源代码编译。本文在**CentOS 6.4 32位**版本上**测试过**。但是,它也可以在其它基于CentOS/RedHat的Linux版本上工作。让我们开始吧。

#### 开发工具 ####

@@ -78,7 +76,7 @@ Ntop为CentOS和**基于64位**Ubuntu预编译好了包,你可以在[他们的

    # make
    # make install

*由于ntopng是一个基于网页的应用,你的系统必须安装有工作良好的网络服务器*
*由于ntopng是一个基于网页的应用,你的系统必须安装有工作良好的 Web 服务器*

### 为ntopng创建配置文件 ###

@@ -89,13 +87,16 @@ Ntop为CentOS和**基于64位**Ubuntu预编译好了包,你可以在[他们的

    # cd ntopng
    # vi ntopng.start

放入这些行:
    --local-network “10.0.2.0/24”
放入这些行:

    --local-network "10.0.2.0/24"
    --interface 1

---
    # vi ntopng.pid

放入该行:
放入该行:

    -G=/var/run/ntopng.pid

保存这些文件,然后继续下一步。

@@ -128,7 +129,7 @@ Ntop为CentOS和**基于64位**Ubuntu预编译好了包,你可以在[他们的



在**主机菜单**上,你可以看到连接到流的所有主机
在**主机菜单**上,你可以看到连接到流的所有主机。



@@ -146,8 +147,7 @@ Ntop为CentOS和**基于64位**Ubuntu预编译好了包,你可以在[他们的



**界面菜单**将引领你进入更多内部菜单。
包菜单将给你显示包的分布大小。
**界面菜单**将引领你进入更多内部菜单。包菜单将给你显示包的大小分布。



@@ -157,7 +157,7 @@ Ntop为CentOS和**基于64位**Ubuntu预编译好了包,你可以在[他们的



你也可以通过使用**历史活跃度菜单**查看活跃度
你也可以通过使用**历史活跃度菜单**查看活跃度。



@@ -167,7 +167,7 @@ Ntop为CentOS和**基于64位**Ubuntu预编译好了包,你可以在[他们的



Ntopng为你提供了一个范围宽广的时间线,从5分钟到1年都可以。你只需要点击你想要现实的时间线。图标本身是可以点击的,你可以点击它来进行缩放。
Ntopng为你提供了一个范围宽广的时间线,从5分钟到1年都可以。你只需要点击你想要显示的时间线。图表本身是可以点击的,你可以点击它来进行缩放。

当然,ntopng能做的事比上面图片中展示的还要多得多。你也可以将定位和电子地图服务整合进来。在ntopng自己的网站上,有已付费的模块可供使用,如nprobe可以扩展ntopng可以提供给你的信息。更多关于ntopng的信息,你可以访问[ntopng网站][5]。

@@ -177,7 +177,7 @@ via: http://linoxide.com/monitoring-2/ntopng-network-monitoring-tool/

作者:[Pungki Arianto][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
@@ -2,9 +2,9 @@
================================================================================


笔记本过热是最近一个常见的问题。监控硬件温度或许可以帮助你诊断笔记本为什么会过热。本篇中,我们会**了解如何在Ubuntu中检查CPU的温度**。
夏天到了,笔记本过热是最近一个常见的问题。监控硬件温度或许可以帮助你诊断笔记本为什么会过热。本篇中,我们会**了解如何在Ubuntu中检查CPU的温度**。

我们将使用一个GUI工具[Psensor][1],它允许你在Linux中监控硬件温度。用Psensor你可以:
我们将使用一个GUI工具[Psensor][1],它允许你在Linux中监控硬件温度。用Psensor你可以:

- 监控cpu和主板的温度
- 监控NVidia GPU的温度
@@ -17,7 +17,7 @@ Psensor最新的版本同样提供了Ubuntu中的指示小程序,这样使得

### 如何在Ubuntu 15.04 和 14.04中安装Psensor ###

在安装Psensor前,你需要安装和配置[lm-sensors][2],一个用于硬件监控的命令行工具。如果你想要测量磁盘温度,你还需要安装[hddtemp][3]。要安装这些工具,运行下面的这些命令:
在安装Psensor前,你需要安装和配置[lm-sensors][2],这是一个用于硬件监控的命令行工具。如果你想要测量磁盘温度,你还需要安装[hddtemp][3]。要安装这些工具,运行下面的这些命令:
sudo apt-get install lm-sensors hddtemp
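lm-sensors 和 Psensor 底层读取的内核接口(例如 /sys/class/thermal/thermal_zone*/temp)以“毫摄氏度”的整数输出温度。下面的小片段用一个固定的样例值演示这一换算;真实读取路径因机器而异,所以这里不直接读取 /sys:

```shell
# /sys/class/thermal/thermal_zone0/temp 这类文件以毫摄氏度为单位输出整数,
# 例如 42000 表示 42.0°C。这里用固定样例值演示换算:
millideg=42000                      # 真实环境可改为:millideg=$(cat /sys/class/thermal/thermal_zone0/temp)
celsius=$((millideg / 1000))        # 整数部分
frac=$(( (millideg % 1000) / 100 )) # 小数点后一位
echo "CPU 温度: ${celsius}.${frac}°C"
```

上例会输出 `CPU 温度: 42.0°C`。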
|
||||
|
||||
@ -45,7 +45,7 @@ Psensor最新的版本同样提供了Ubuntu中的指示小程序,这样使得
|
||||
|
||||
sudo apt-get install psensor
|
||||
|
||||
安装完成后,在Unity Dash中运行程序。第一次运行时,你应该配置Psensor该监控什么状态。
|
||||
安装完成后,在Unity Dash中运行程序。第一次运行时,你应该配置Psensor该监控什么状态。
|
||||
|
||||

|
||||
|
||||
@ -73,7 +73,7 @@ via: http://itsfoss.com/check-laptop-cpu-temperature-ubuntu/
|
||||
|
||||
作者:[Abhishek][a]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,6 +1,6 @@
|
||||
11个让人惊叹的Linux终端彩蛋
|
||||
11个无用而有趣的Linux终端彩蛋
|
||||
================================================================================
|
||||
这里有一些很酷的Linux终端彩蛋,其中的每一个看上去并没有实际用途,但很精彩。
|
||||
这里有一些很酷的Linux终端彩蛋,其中的每一个看上去并没有实际用途,但很有趣。
|
||||
|
||||

|
||||
|
||||
@ -8,7 +8,7 @@
|
||||
|
||||
当我们使用命令行工作时,Linux是功能和实用性最好的操作系统之一。想要执行一个特殊任务?可能一个程序或者脚本就可以帮你搞定。但就像一本书中说到的,只工作不玩耍,聪明的孩子也会变傻。下面是我最喜欢的、可以在终端做的没有实际用途的、傻傻的、恼人的、可笑的事情。
|
||||
|
||||
### 给终端一个态度 ###
|
||||
### 让终端成为一个有态度的人 ###
|
||||
|
||||
* 第一步)敲入`sudo visudo`
|
||||
* 第二步)在“Defaults”末尾(文件的前半部分)添加一行“Defaults insults”。
|
||||
@ -20,13 +20,13 @@
|
||||
|
||||
### apt-get moo ###
|
||||
|
||||
你看过这张截图?那就是运行`apt-get moo`(在基于Debian的系统)的结果。对,就是它了。不要对它抱太多幻想,你会失望的,我不骗你。但是这是Linux世界最被人熟知的彩蛋之一。所以我把它包含进来,并且放在前排,然后我也就不会收到5千封邮件,指责我把它遗漏了。
|
||||
|
||||

|
||||
|
||||
你看过这张截图?那就是运行`apt-get moo`(在基于Debian的系统)的结果。对,就是它了。不要对它抱太多幻想,你会失望的,我不骗你。但是这是Linux世界最被人熟知的彩蛋之一。所以我把它包含进来,并且放在前排,然后我也就不会收到5千封邮件,指责我把它遗漏了。
|
||||
|
||||
### aptitude moo ###
|
||||
|
||||
更有趣的是将moo应用到aptitude上。敲入`aptitude moo`(在Ubuntu及其衍生版),你对`moo`可以做什么事情的看法会有所变化。你还还会知道更多事情,尝试重新输入这条命令,但这次添加一个`-v`参数。这还没有结束,试着添加更多`v`,一次添加一个,直到aptitude给了你想要的东西。
|
||||
更有趣的是将moo应用到aptitude上。敲入`aptitude moo`(在Ubuntu及其衍生版),你对`moo`可以做什么事情的看法会有所变化。你还还会知道更多事情,尝试重新输入这条命令,但这次添加一个`-v`参数。这还没有结束,试着添加更多`v`,一次添加一个,直到抓狂的aptitude给了你想要的东西。
|
||||
|
||||

|
||||
|
||||
@ -38,25 +38,25 @@
|
||||
* 第二步)在“# Misc options”部分,去掉“Color”前的“#”。
|
||||
* 第三步)添加“ILoveCandy”。
|
||||
|
||||
现在我们使用pacman安装新软件包时,进度条里会出现一个小吃豆人。真应该默认就是这样的。
|
||||
现在我们使用pacman安装新软件包时,进度条里会出现一个小吃豆人。真应该默认就这样的。
|
||||
|
||||

|
||||
|
||||
### Cowsay! ###
|
||||
|
||||
`aptitude moo`的输出格式很漂亮,但我想你苦于不能自由自在地使用。输入`cowsay`,它会做到你想做的事情。你可以让牛说任何你喜欢的东西。而且不只可以用牛,还可以用Calvin、Beavis和Ghostbusters的ASCII logo——输入`cowsay -l`可以得到所有可用的logo。它是Linux世界的强大工具。像很多其他命令一样,你可以使用管道把其他程序的输出输送给它,比如`fortune | cowsay`。
|
||||
`aptitude moo`的输出格式很漂亮,但我想你苦于不能自由自在地使用它。输入`cowsay`,它会做到你想做的事情。你可以让牛说任何你喜欢的东西。而且不只可以用牛,还可以用Calvin、Beavis和Ghostbusters等ASCII艺术形象,输入`cowsay -l`可以列出所有可用的形象。它是Linux世界的强大工具。像很多其他命令一样,你可以使用管道把其他程序的输出输送给它,比如`fortune | cowsay`,让这头牛变成哲学家。
|
||||
|
||||

|
||||
|
||||
### 变成3l33t h@x0r ###
|
||||
|
||||
`nmap`并不是我们平时经常使用的基本命令。但如果你想蹂躏`nmap`的话,可能想在它的输出中看到l33t。在任何`nmap`命令(比如`nmap -oS - google.com`)后添加`-oS`。现在你的`nmap`已经处于官方名称是“[Script Kiddie Mode][1]”的模式了。Angelina Jolie和Keanu Reeves会为此骄傲的。
|
||||
`nmap`并不是我们平时经常使用的基本命令。但如果你想让`nmap`的输出看起来很l33t的话,可以在任何`nmap`命令后添加`-oS`(比如`nmap -oS - google.com`)。现在你的`nmap`已经处于官方名称为“[脚本小子模式][1]”的模式了。Angelina Jolie和Keanu Reeves会为此骄傲的。
|
||||
|
||||

|
||||
|
||||
### 获得所有的Discordian日期 ###
|
||||
|
||||
如果你们曾经坐在一起思考,“嗨!我想使用无用但异想天开的方式来书写今天的日期……”试试运行`ddate`。结果类似于“Today is Setting Orange, the 72nd day of Discord in the YOLD 3181”,这会让你的服务树日志平添不少香料。
|
||||
如果你曾经闲坐着想,“嗨!我想用无用但异想天开的方式来书写今天的日期……”,那就试试运行`ddate`。结果类似于“Today is Setting Orange, the 72nd day of Discord in the YOLD 3181”,这会让你的服务器日志平添不少趣味。
|
||||
|
||||
注意:在技术层面,确实有一个[Discordian Calendar][2],理论上被[Discordianism][3]追随者所使用。这意味着我可能得罪某些人。或者不会,我不确定。不管怎样,`ddate`是一个方便的工具。
|
||||
|
||||
@ -76,7 +76,7 @@
|
||||
|
||||
### 将任何文本逆序输出 ###
|
||||
|
||||
将任何文本使用管道输送给`rev`命令,它就会将文本内容逆序输出。`fortune | rev`会给你好运。当然,这不意味着rev会将幸运转换成不幸。
|
||||
将任何文本使用管道输送给`rev`命令,它就会将文本内容逆序输出。`fortune | rev`会给你好运。当然,这不意味着rev会将幸运(fortune)转换成不幸。
|
||||
|
||||

|
||||
|
||||
@ -94,7 +94,7 @@ via: http://www.networkworld.com/article/2926630/linux/11-pointless-but-awesome-
|
||||
|
||||
作者:[Bryan Lunduke][a]
|
||||
译者:[goreliu](https://github.com/goreliu)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,175 @@
|
||||
深入 NGINX:我们如何为性能和扩展而设计
|
||||
================================================================================
|
||||
|
||||
NGINX 能在 web 性能上取得领先地位,是由其软件设计所决定的。许多 web 服务器和应用服务器使用简单的基于线程或进程的架构,而 NGINX 立足于一个复杂的事件驱动体系结构,使它能够在现代硬件上扩展到成千上万的并发连接。
|
||||
|
||||
这张[深入 NGINX][1]的信息图,从高层的进程架构开始,深入挖掘并说明了 NGINX 如何在单一进程里处理多个连接。这篇博客进一步详细地解释了这一切是如何工作的。
|
||||
|
||||
### 场景设定:NGINX 进程模型 ###
|
||||
|
||||

|
||||
|
||||
为了更好地理解这个设计,你需要先理解 NGINX 是如何运行的。NGINX 有一个主进程(它执行特权操作,如读取配置和绑定端口)和一些工作进程与辅助进程。
|
||||
|
||||
# service nginx restart
|
||||
* Restarting nginx
|
||||
# ps -ef --forest | grep nginx
|
||||
root 32475 1 0 13:36 ? 00:00:00 nginx: master process /usr/sbin/nginx \
|
||||
-c /etc/nginx/nginx.conf
|
||||
nginx 32476 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
|
||||
nginx 32477 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
|
||||
nginx 32479 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
|
||||
nginx 32480 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
|
||||
nginx 32481 32475 0 13:36 ? 00:00:00 \_ nginx: cache manager process
|
||||
nginx 32482 32475 0 13:36 ? 00:00:00 \_ nginx: cache loader process
|
||||
|
||||
在这台四核服务器上,NGINX 主进程创建了 4 个工作进程和两个管理磁盘内容缓存的缓存辅助进程。
|
||||
|
||||
### 为什么架构很重要? ###
|
||||
|
||||
任何 Unix 应用程序的根本基础是线程或进程。(从 Linux 操作系统的角度来看,线程和进程大多是相同的,主要区别是它们共享内存的程度。)一个线程或进程是一个自包含的指令集,操作系统可以在一个 CPU 核心上调度运行它们。大多数复杂的应用程序并行运行多个线程或进程有两个原因:
|
||||
|
||||
- 它们可以同时使用更多的计算核心。
|
||||
- 线程或进程可以轻松实现并行操作。(例如,在同一时刻保持多连接)。
|
||||
|
||||
进程和线程会消耗资源。它们每个都使用内存和其他系统资源,还会在 CPU 核心上被换入和换出(这种操作叫做上下文切换)。大多数现代服务器可以同时保持上百个小型的、活动的线程或进程,但是一旦内存耗尽,或者高 I/O 压力引起大量的上下文切换,性能就会严重下降。
|
||||
|
||||
网络应用程序设计的常用方法是为每个连接分配一个线程或进程。此体系结构简单、容易实现,但是当应用程序需要处理成千上万的并发连接时这种结构就不具备扩展性。
|
||||
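为了直观理解这种“每连接一线程”的阻塞模型,下面是一个极简的 Python 草图(仅为示意,回显逻辑等均为演示假设,并非任何真实服务器的实现):

```python
# 每连接一线程的阻塞式回显服务器草图:
# 每个客户端占用一个专用线程,线程在 recv 上阻塞等待。
import socket
import threading

def handle_connection(conn):
    # 该线程的大部分时间都“阻塞”在 recv 上,等待客户端的下一步动作
    with conn:
        data = conn.recv(1024)
        if data:
            conn.sendall(b"echo:" + data)

def blocking_server(listen_sock, expected_clients):
    # 每当有新连接到来,就为它分配一个新线程
    for _ in range(expected_clients):
        conn, _addr = listen_sock.accept()
        threading.Thread(target=handle_connection, args=(conn,)).start()
```

成千上万的并发连接就意味着成千上万个线程,内存和上下文切换的开销会迅速失控,这正是正文所说的扩展性问题。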
|
||||
### NGINX 如何工作? ###
|
||||
|
||||
NGINX 使用一种可预测的进程模式来分配可使用的硬件资源:
|
||||
|
||||
- 主进程(master)执行特权操作,如读取配置和绑定端口,然后创建少量的子进程(如下的三种类型)。
|
||||
- 缓存加载器进程(cache loader)在启动时运行,把基于磁盘的缓存加载到内存中,然后退出。它被妥善地调度,所以其资源需求很低。
|
||||
- 缓存管理器进程(cache manager)定期裁剪磁盘缓存中的记录来保持他们在配置的大小之内。
|
||||
- 工作进程(worker)做所有的工作!他们保持网络连接、读写内容到磁盘,与上游服务器通信。
|
||||
|
||||
在大多数情况下,NGINX 建议的配置是:每个 CPU 核心运行一个工作进程,以最有效地利用硬件资源。你可以在配置中使用 [worker_processes auto][2] 指令来设置:
|
||||
|
||||
worker_processes auto;
|
||||
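作为补充,下面给出一个使用该指令的最小 nginx.conf 片段(示意性草图:除 `worker_processes auto;` 出自上文外,监听端口、路径等均为演示假设):

```nginx
worker_processes auto;           # 每个 CPU 核心运行一个工作进程

events {
    worker_connections 1024;     # 每个工作进程可同时保持的连接数上限
}

http {
    server {
        listen 80;               # 假设的监听端口
        root   /var/www/html;    # 假设的站点根目录
    }
}
```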
|
||||
当 NGINX 服务处于活动状态时,只有工作进程在忙碌。每个工作进程以非阻塞方式处理多个连接,以减少上下文切换。
|
||||
|
||||
每个工作进程是一个单一线程并且独立运行,它们会获取新连接并处理之。这些进程可以使用共享内存通信来共享缓存数据、会话持久性数据及其它共享资源。(在 NGINX 1.7.11 及其以后版本,还有一个可选的线程池,工作进程可以转让阻塞的操作给它。更多的细节,参见“[NGINX 线程池可以爆增9倍性能!][16]”。对于 NGINX Plus 用户,该功能计划在今年晚些时候加入到 R7 版本中。)
|
||||
|
||||
### NGINX 工作进程内部 ###
|
||||
|
||||

|
||||
|
||||
每个 NGINX 工作进程按照 NGINX 配置初始化,并由主进程提供一组监听端口。
|
||||
|
||||
NGINX 工作进程首先在监听套接字上等待事件([accept_mutex][3] 和[内核套接字分片][4])。事件由新进来的连接触发。这些连接被分配给一个状态机:HTTP 状态机是最常用的,但 NGINX 也为流式(原始 TCP)流量和几种邮件协议(SMTP、IMAP 和 POP3)实现了状态机。
|
||||
|
||||

|
||||
|
||||
状态机本质上是一组指令,告诉 NGINX 如何处理一个请求。大多数 web 服务器像 NGINX 一样使用类似的状态机来实现相同的功能 - 区别在于实现。
|
||||
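“状态机本质上是一组指令”这个说法可以用一个极简的草图来体会。下面的 Python 示例按状态逐行推进地解析一个 HTTP 请求(仅为示意状态机的思想,与 NGINX 实际的 HTTP 状态机实现无关):

```python
# 极简 HTTP 请求“状态机”:每读一行,根据当前状态决定如何处理,
# 并迁移到下一个状态。
def http_state_machine(raw_request):
    state = "READ_REQUEST_LINE"
    method = path = None
    headers = {}
    for line in raw_request.split("\r\n"):
        if state == "READ_REQUEST_LINE":
            # 第一行是请求行:方法、路径、协议版本
            method, path, _version = line.split(" ")
            state = "READ_HEADERS"
        elif state == "READ_HEADERS":
            if line == "":
                state = "DONE"       # 空行表示头部结束
            else:
                name, value = line.split(": ", 1)
                headers[name] = value
    return state, method, path, headers
```

对“GET /index.html HTTP/1.1”加上若干头部的输入,它会依次经过读请求行、读头部、完成三个状态。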
|
||||
### 调度状态机 ###
|
||||
|
||||
把状态机想象成国际象棋的规则。每个 HTTP 事务就是一局棋。棋盘的一边是 web 服务器,一位可以非常迅速做出决定的大师;另一边是远程客户端,即在相对较慢的网络上访问网站或应用程序的 web 浏览器。
|
||||
|
||||
不管怎样,这个游戏规则很复杂。例如,web 服务器可能需要与各方沟通(代理一个上游的应用程序)或与身份验证服务器对话。web 服务器的第三方模块甚至可以扩展游戏规则。
|
||||
|
||||
#### 一个阻塞状态机 ####
|
||||
|
||||
回忆我们之前的描述,一个进程或线程就像一套独立的指令集,操作系统可以在一个 CPU 核心上调度运行它。大多数 web 服务器和 web 应用使用每个连接一个进程或者每个连接一个线程的模式来玩这个“象棋游戏”。每个进程或线程都包含玩完“一个游戏”的指令。在服务器运行该进程的期间,其大部分的时间都是“阻塞的” —— 等待客户端完成它的下一步行动。
|
||||
|
||||

|
||||
|
||||
1. web 服务器进程在监听套接字上监听新连接(客户端发起新“游戏”)
|
||||
1. 当它获得一个新游戏,就玩这个游戏,每走一步去等待客户端响应时就阻塞了。
|
||||
1. 游戏完成后,web 服务器进程可能会等待是否有客户机想要开始一个新游戏(这里指的是一个“保持的”连接)。如果这个连接关闭了(客户端断开或者发生超时),web 服务器进程会返回并监听一个新“游戏”。
|
||||
|
||||
要记住最重要的一点是,每个活动的 HTTP 连接(每局棋)都需要一个专用的进程或线程(象棋高手)。这个结构简单,并且容易用第三方模块(“新规则”)扩展。然而,这里存在巨大的不平衡:一个相当轻量级的 HTTP 连接,其实只是一个文件描述符和一小块内存,却被映射到一个单独的线程或进程这样非常重量级的系统对象上。这种方式易于编程,但太过浪费。
|
||||
|
||||
#### NGINX是一个真正的象棋大师 ####
|
||||
|
||||
也许你听说过[同时对弈表演赛][5],即一位象棋大师同时对战许多对手?
|
||||
|
||||

|
||||
|
||||
*[列夫·吉奥吉夫在保加利亚的索非亚同时对阵360人][6]。他的最终成绩是284胜70平6负。*
|
||||
|
||||
这就是 NGINX 工作进程如何“下棋”的。每个工作进程(记住 - 通常每个CPU核心上有一个工作进程)是一个可同时对战上百人(事实是,成百上千)的象棋大师。
|
||||
|
||||

|
||||
|
||||
1. 工作进程在监听和连接套接字上等待事件。
|
||||
1. 事件发生在套接字上,并且由工作进程处理它们:
|
||||
- 在监听套接字的事件意味着一个客户端已经开始了一局新棋局。工作进程创建了一个新连接套接字。
|
||||
- 在连接套接字的事件意味着客户端已经下了一步棋。工作进程及时响应。
|
||||
|
||||
一个工作进程从不会在网络流量上阻塞着等待它的“对手”(客户端)做出反应。当它走了一步之后,会立即转向其他正在等待落子的棋局,或者在门口迎接新的玩家。
|
||||
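上面的“同时下多盘棋”可以用一个单进程的事件循环来示意。下面的 Python 草图用标准库 selectors 实现一个非阻塞的回显服务(仅为演示事件驱动模型的思想,并非 NGINX 的实现):

```python
# 单个“工作进程”的事件循环:在监听套接字与连接套接字上等待事件,
# 从不为等待某一个客户端而阻塞。
import selectors
import socket

def worker_event_loop(listen_sock, expected_clients):
    sel = selectors.DefaultSelector()
    listen_sock.setblocking(False)
    sel.register(listen_sock, selectors.EVENT_READ, data="listen")
    served = 0
    while served < expected_clients:
        for key, _mask in sel.select(timeout=5):
            if key.data == "listen":
                # 监听套接字上的事件:一位新“玩家”开始了一局新棋
                conn, _addr = key.fileobj.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, data="conn")
            else:
                # 连接套接字上的事件:客户端“走了一步”,立即回应
                conn = key.fileobj
                chunk = conn.recv(1024)
                if chunk:
                    conn.sendall(b"echo:" + chunk)
                sel.unregister(conn)
                conn.close()
                served += 1
    sel.close()
    return served
```

无论有多少客户端同时在线,这个循环始终只有一个线程:事件到来才处理,处理完立即转向下一个事件。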
|
||||
#### 为什么这个比阻塞式多进程架构更快? ####
|
||||
|
||||
NGINX 的每个工作进程都能很好地扩展,支撑成百上千的连接。每个新连接只在工作进程中创建一个文件描述符,并消耗一小部分额外内存,也就是说每个连接的额外开销非常少。NGINX 进程可以固定在某个 CPU 上,上下文切换非常罕见,一般只发生在没有工作可做时。
|
||||
|
||||
而在阻塞式的、每个连接一个进程的方法中,每个连接都需要大量额外的资源和开销,并且上下文切换(从一个进程切换到另一个)非常频繁。
|
||||
|
||||
更详细的解释,可以看看这篇关于 NGINX 架构的[文章][7],它由 NGINX 公司开发副总裁及联合创始人 Andrew Alexeev 所写。
|
||||
|
||||
通过适当的[系统优化][8],NGINX 的每个工作进程可以扩展到处理成千上万的并发 HTTP 连接,并能脸不红心不跳地承受峰值流量(大量涌入的新“游戏”)。
|
||||
|
||||
### 更新配置和升级 NGINX ###
|
||||
|
||||
NGINX 的进程体系架构使用少量的工作进程,有助于高效地更新配置文件,甚至是 NGINX 程序本身。
|
||||
|
||||

|
||||
|
||||
更新 NGINX 配置文件是非常简单、轻量、可靠的操作。典型的做法就是运行命令 `nginx -s reload`,它会检查磁盘上的配置并发送 SIGHUP 信号给主进程。
|
||||
|
||||
当主进程接收到一个 SIGHUP 信号,它会做两件事:
|
||||
|
||||
- 重载配置文件和分支出一组新的工作进程。这些新的工作进程立即开始接受连接和处理流量(使用新的配置设置)
|
||||
- 通知旧的工作进程优雅地退出。这些工作进程会停止接受新的连接,当前的 HTTP 请求一旦完成,就彻底关闭连接(也就是说,不会残留“保持的”连接)。一旦所有连接都关闭,工作进程就退出。
|
||||
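主进程对 SIGHUP 的“两步动作”可以用下面的 Python 草图来示意(仅为演示信号驱动的平滑重载思路:这里用字符串列表代替真实的工作进程,并非 NGINX 实现):

```python
# 模拟主进程:收到 SIGHUP 时,先分支出新一代“工作进程”,
# 再把旧一代标记为“正在优雅退出”。
import os
import signal
import time

class MiniMaster:
    def __init__(self, workers_per_gen=2):
        self.workers_per_gen = workers_per_gen
        self.generation = 0
        self.active = []    # 正在处理流量的工作进程
        self.draining = []  # 被通知优雅退出、等待连接关闭的旧进程
        self._spawn()
        signal.signal(signal.SIGHUP, self._on_sighup)

    def _spawn(self):
        self.generation += 1
        self.active = [f"gen{self.generation}-worker{i}"
                       for i in range(self.workers_per_gen)]

    def _on_sighup(self, signum, frame):
        old = self.active
        self._spawn()               # 第一步:按新配置分支出新一代工作进程
        self.draining.extend(old)   # 第二步:通知旧工作进程优雅退出

def reload_demo():
    master = MiniMaster()
    os.kill(os.getpid(), signal.SIGHUP)  # 相当于 `nginx -s reload` 发出的信号
    time.sleep(0.05)                     # 给信号处理器一个运行的机会
    return master.generation, master.active, master.draining
```

运行 `reload_demo()` 后,第二代工作进程在处理流量,而第一代进入“等待退出”列表,这正对应上文描述的两件事。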
|
||||
这个重载过程会引发一个 CPU 和内存使用的小峰值,但是跟为活动连接提供服务的负载相比,它一般不易察觉。你可以每秒多次重载配置(很多 NGINX 用户都这么做)。非常罕见的情况下,当有很多代的工作进程在等待连接关闭时会发生问题,但即便那样也能很快解决。
|
||||
|
||||
NGINX 的程序升级过程堪称拿到了高可用性的圣杯:你可以随时更新这个软件,而不会丢失连接、不用停机,也不会中断服务。
|
||||
|
||||

|
||||
|
||||
程序升级过程类似于平滑重载配置的方法。一个新的 NGINX 主进程与原主进程并行运行,它们共享监听套接字。两个进程都是活动的,各自的工作进程都在处理流量。然后你可以通知旧的主进程和它的工作进程优雅地退出。
|
||||
|
||||
整个过程的详细描述参见 [NGINX 管理][9]。
|
||||
|
||||
### 结论 ###
|
||||
|
||||
[深入 NGINX 信息图][10]提供了 NGINX 功能实现的高层面概览,但在这简单解释的背后,是超过十年的创新和优化,它们使得 NGINX 在广泛的硬件上提供尽可能好的性能,同时保持了现代 Web 应用程序所需要的安全性和可靠性。
|
||||
|
||||
如果你想阅读更多关于NGINX的优化,查看这些优秀的资源:
|
||||
|
||||
- [安装和调优 NGINX][11](在线讲座;Speaker Deck 上的[讲义][12])
|
||||
- [NGINX 性能调优][13]
|
||||
- [开源应用架构: NGINX 篇][14]
|
||||
- [NGINX 1.9.1 中的套接字分片][15] (使用 SO_REUSEPORT 套接字选项)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/
|
||||
|
||||
作者:[Owen Garrett][a]
|
||||
译者:[wyangsun](https://github.com/wyangsun)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://nginx.com/author/owen/
|
||||
[1]:http://nginx.com/resources/library/infographic-inside-nginx/
|
||||
[2]:http://nginx.org/en/docs/ngx_core_module.html#worker_processes
|
||||
[3]:http://nginx.org/en/docs/ngx_core_module.html#accept_mutex
|
||||
[4]:http://nginx.com/blog/socket-sharding-nginx-release-1-9-1/
|
||||
[5]:http://en.wikipedia.org/wiki/Simultaneous_exhibition
|
||||
[6]:http://gambit.blogs.nytimes.com/2009/03/03/in-chess-records-were-made-to-be-broken/
|
||||
[7]:http://www.aosabook.org/en/nginx.html
|
||||
[8]:http://nginx.com/blog/tuning-nginx/
|
||||
[9]:http://nginx.org/en/docs/control.html
|
||||
[10]:http://nginx.com/resources/library/infographic-inside-nginx/
|
||||
[11]:http://nginx.com/resources/webinars/installing-tuning-nginx/
|
||||
[12]:https://speakerdeck.com/nginx/nginx-installation-and-tuning
|
||||
[13]:http://nginx.com/blog/tuning-nginx/
|
||||
[14]:http://www.aosabook.org/en/nginx.html
|
||||
[15]:http://nginx.com/blog/socket-sharding-nginx-release-1-9-1/
|
||||
[16]:http://nginx.com/blog/thread-pools-boost-performance-9x/
|
@ -0,0 +1,84 @@
|
||||
PHP 20岁了:从玩具到巨头
|
||||
=============================================================================
|
||||
|
||||
> 曾经的‘丑小鸭工程’已经转变为一个互联网巨头,感谢灵活、务实和充满活力的开发者社区。
|
||||
|
||||
当Rasmus Lerdorf发布“[一个用C写的小型紧凑的CGI可执行程序集合][2]”时,他没有想到他的创造会对网络发展产生多大的影响。今年在Miami举行的SunshinePHP大会上,Lerdorf做了开场演讲,他自嘲道,“在1995年的时候,我以为我已经在 Web 上解除了C API的束缚。显然,事情并非那样,我们全成了C程序员了。”
|
||||
|
||||

|
||||
|
||||
题图来自: [Steve Jurvetson via Flickr][1]
|
||||
|
||||
实际上,当Lerdorf发布个人主页工具(Personal Home Page Tools,即 PHP 名字的来源)的1.0版本时,那时的网络还是如此的年轻。直到那年的十一月份HTML 2.0还没有公布,而且HTTP/1.0也是次年的五月份才出现。那时,NCSA HTTPd是使用最广泛的网络服务器,而网景的Navigator则是最流行的网络浏览器,八月份的时候,IE1.0才刚刚出现。换句话说,PHP的开端刚好撞上了浏览器战争的前夜。
|
||||
|
||||
早些时候,我们谈论了一大堆关于PHP对网络发展的影响。回到那时候,当说到用于网络应用的服务器端处理时,我们的选择是有限的。PHP满足了我们对这样一种工具的需求:它可以让我们在网络上做一些动态的事情。它实用的灵活性只受限于我们的想象力,PHP从那时起便与网络共同成长。现在,PHP在网络服务端语言中占据了超过80%的份额,已经是成熟的脚本语言,特别适合解决网络问题。她独一无二的血统讲述了一个故事:实用高于理论,解决问题高于纯正。
|
||||
|
||||
### 把我们钩住的网络魔力 ###
|
||||
|
||||
PHP一开始并不是一门编程语言,从她的设计就很明显不是 -- 或者说她本来就缺乏相关特性,正如那些贬低者指出的那样。最初,她是作为一种API,帮助网络开发者接入底层的C语言封装库。第一个版本是一组小的CGI可执行程序,提供表单处理功能,可以访问查询参数和mSQL数据库。而她能够如此容易地处理网络应用的数据库,对激发我们对PHP的兴趣乃至PHP后来的支配地位,起到了关键作用。
|
||||
|
||||
到了第二版 -- 即 PHP/FI -- 数据库的支持已经扩展到包括PostgreSQL、MySQL、Oracle、Sybase等等。她通过封装他们的C语言库来支持各种数据库,将他们作为PHP库的一部分。PHP/FI也封装了GD库,可以创建并管理GIF图像。她可以作为一个Apache模块运行,或者编译进FastCGI支持,并且她引入的 PHP 编程语言支持变量、数组、语言结构和函数。对于那个时候大多数在网络这块工作的人来说,PHP是我们一直在寻求的那款“胶水”。
|
||||
|
||||
当PHP吸纳越来越多的编程语言功能,演变为第三版和之后的版本时,她从来没有失去这种黏合的特性。通过仓库如PECL(PHP Extension Community Library),PHP可以把各种库都连在一起,将他们的函数引入到PHP层面。这种将组件结合在一起的能力,成为PHP之美的一个重要方面,使之不会受限于其源代码上。
|
||||
|
||||
### 网络,一个码农们的社区 ###
|
||||
|
||||
PHP在网络发展上的持续影响并不局限于能用这种语言干什么。PHP如何完成工作,谁参与进来 -- 这些都是PHP传奇中很重要的部分。
|
||||
|
||||
早在1997年,PHP的用户群体开始形成。其中最早的是美国中西部PHP用户组(后来叫做 Chicago PHP),并[于1997年二月份举行了第一次聚会][4]。这是一个充满生气、饱含激情的开发者社区形成的开端,他们被一种吸引力聚合在一起 -- 网络上的一个小工具就可以帮助他们解决问题。PHP这种无处不在的特性使得她成为网络开发的一个很自然的选择。在一个分享主导的世界里,她开始盛行,而且较低的入门门槛对许多早期的网络开发者来说十分有吸引力。
|
||||
|
||||
伴随着社区的成长,为开发者带来了一堆工具和资源。2000年是PHP的一个转折点,它见证了第一次PHP开发者大会,编程语言的核心开发者们在Tel Aviv见面,讨论即将到来的4.0版本的发布。PHP扩展和应用仓库(PEAR)也于2000年发起,它提供了遵循标准和最佳实践的高质量用户代码包。第一届PHP大会PHP Kongress不久之后在德国举行。[PHPDeveloper.org][5]也随后上线,直到今天,这都是PHP社区里最权威的新闻资源。
|
||||
|
||||
这个社区的势头表明了接下来几年里PHP成长的关键所在。随着网络开发产业的爆发,PHP也获得发展。PHP开始为更多、更大的网站提供动力。越来越多的用户组在世界各地开花。邮件列表、在线论坛、IRC、大会,以及如php[architect]、德国PHP杂志、国际PHP杂志等商业杂志 -- PHP社区的活力对完成网络工作的方式产生了极其重要的影响:协作地、开放地、提倡代码共享。
|
||||
|
||||
然后,在10年前,PHP 5发布后不久,网络发展史上一件有趣的事情发生了,它导致了PHP社区构建库和应用的方式的转变:Ruby on Rails发布了。
|
||||
|
||||
### 框架的异军突起 ###
|
||||
|
||||
用于Ruby编程语言的Ruby on Rails框架使MVC(模型-视图-控制器)架构模型受到越来越多的关注。Mojavi PHP框架几年前就已经使用MVC模型了,但是Ruby on Rails的高明之处在于巩固了MVC。框架引爆了PHP社区,并且框架已经改变了开发者构建PHP应用程序的方式。
|
||||
|
||||
得益于PHP社区中框架的成长,许多重要的项目和进展相继出现。[PHP框架互用性组织][6]成立于2009年,致力于在框架间建立编码标准、命名约定与最佳实践。编纂这些标准和实践,帮助开发者在使用成员项目的代码时获得了越来越多的可互用的软件。互用性意味着每个框架可以拆分为组件和独立的库,也可以作为整体的框架在一起使用。互用性带来了另一个重要的里程碑:Composer项目于2011年诞生了。
|
||||
|
||||
从Node.js的NPM和Ruby的Bundler获得灵感,Composer开辟了PHP应用开发的新纪元,创造了一次PHP“文艺复兴”。它激发了包的互用性、标准的命名约定、编码标准的采用以及测试覆盖率的提升。它是任何现代PHP应用中的一个基本工具。
|
||||
|
||||
### 加速和创新的需要 ###
|
||||
|
||||
如今,PHP社区有一个由应用和库构成的生机勃勃的生态系统,其中被广泛安装的PHP应用包括WordPress、Drupal、Joomla和MediaWiki。从小型的夫妻店站点到whitehouse.gov和Wikipedia,这些应用支撑了各种规模的业务的网站。在Alexa前十的站点中,有6个使用PHP,在一天内为数十亿的页面访问提供服务。因此,PHP应用已成为需要加速的首选,并且许多创新也加入到PHP的核心来提升性能。
|
||||
|
||||
在2010年,Facebook公开了其用作PHP源到源编译器的HipHop,它可以把PHP代码翻译为C++代码,并编译为一个单独的可执行二进制应用。Facebook的规模和成长需要从标准的解释执行的PHP代码迁移到更快、更优化的可执行代码。尽管如此,由于PHP的易用性和快速的开发周期,Facebook还想继续使用PHP。HipHop后来进化为HHVM,这是一个针对PHP的、基于JIT(即时)编译的执行引擎,其中包含一个基于PHP的新语言:[Hack][7]。
|
||||
|
||||
Facebook的创新以及其他的虚拟机项目引起了引擎层面的比较,也引起了关于Zend引擎未来的讨论。Zend引擎依然是PHP的内核和语言参考实现。在2014年,一个语言规范项目被创建,以“提供一个完整的、简明的语句定义,以及PHP语言的语义学”,使得编译器项目可以创建可互用的PHP实现。
|
||||
|
||||
下一个PHP主要版本也成为了激烈争论的话题。开发者们提出了一个叫做phpng(下一代)的项目,来清理、重构、优化和改进PHP代码基础,这也展示了对实际应用性能的实质提升。由于之前有一个未发布的PHP 6.0版本,在决定将下一个主要版本命名为“PHP 7”后,phpng分支被合并,并制定了开发PHP 7的计划,以增加许多语言已拥有的功能,如标量类型和返回类型声明。
|
||||
|
||||
随着[今天第一版PHP 7 alpha的发布][8],基准测试显示她在许多方面[与HHVM一样好甚至拥有更好的性能][9],PHP正跟上现代网络开发需求的步伐。同样地,PHP-FIG继续创新和推动框架与库的协作 -- 最近采纳的[PSR-7][10]将会改变PHP项目处理HTTP的方式。用户组、大会、出版物和如[PHPMentoring.org][11]这样的布道活动,继续在PHP开发者社区提倡最佳实践、编码标准和测试。
|
||||
|
||||
PHP从各个方面见证了网络的成熟,而且PHP自己也成熟了。曾经只是一个对低级C语言库的简单API封装,PHP已经以她自己的方式,成为一门羽翼丰满的编程语言。她的开发者社区充满生气、乐于助人,以务实为荣,并且欢迎新人。PHP已经经受了20年的考验,而目前语言与社区里的活力,会保证她在接下来的几年里仍将是一门意义重大、积极有用的语言。
|
||||
|
||||
在Rasmus Lerdorf的SunshinePHP演讲中,他回忆道,“我何曾想过20年之后还会在讨论当初做的这个愚蠢的小项目?完全没有。”
|
||||
|
||||
这里向Lerdorf和PHP社区的其他人致敬,感谢他们把这个“愚蠢的小项目”变成了一个如今网络上持久、强大的组件。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.infoworld.com/article/2933858/php/php-at-20-from-pet-project-to-powerhouse.html
|
||||
|
||||
作者:[Ben Ramsey][a]
|
||||
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.infoworld.com/author/Ben-Ramsey/
|
||||
[1]:https://www.flickr.com/photos/jurvetson/13049862325
|
||||
[2]:https://groups.google.com/d/msg/comp.infosystems.www.authoring.cgi/PyJ25gZ6z7A/M9FkTUVDfcwJ
|
||||
[3]:http://w3techs.com/technologies/overview/programming_language/all
|
||||
[4]:http://web.archive.org/web/20061215165756/http://chiphpug.php.net/mpug.htm
|
||||
[5]:http://www.phpdeveloper.org/
|
||||
[6]:http://www.php-fig.org/
|
||||
[7]:http://www.infoworld.com/article/2610885/facebook-q-a--hack-brings-static-typing-to-php-world.html
|
||||
[8]:https://wiki.php.net/todo/php70#timetable
|
||||
[9]:http://talks.php.net/velocity15
|
||||
[10]:http://www.php-fig.org/psr/psr-7/
|
||||
[11]:http://phpmentoring.org/
|
||||
|
@ -0,0 +1,62 @@
|
||||
Linux Kernel 4.1 Released, This Is What’s New
|
||||
================================================================================
|
||||
**A brand new version of the Linux Kernel — the heartbeat of the modern world (if you want us to be poetic about it) — has been released.**
|
||||
|
||||

|
||||
|
||||
The arrival [has been announced][1] by Linus Torvalds (who else?) on the Linux Kernel Mailing List (where else?) and comes almost two months after the [first entry in the new 4.x series][2].
|
||||
|
||||
Levity aside, and like every release before it, Linux Kernel 4.1 features a big set of changes. These touch everything from hardware compatibility to power management to file-system performance and technical fixes for obscure processors you’ve never heard of.
|
||||
|
||||
Linux 4.1 is already being tracked in Ubuntu 15.10, due for release in October.
|
||||
|
||||
### What’s New In Linux 4.1? ###
|
||||
|
||||

|
||||
Tux got mail
|
||||
|
||||
The sub-heading is on your lips and we’re not here simply to serve up an announcement of an announcement.
|
||||
|
||||
We’ve gone through the (vast, long, lengthy and at times technically unintelligible) change-log to pick out some highlights that may not feed hyperbole but may impact on you, a desktop user.
|
||||
|
||||
#### Power Improvements ####
|
||||
|
||||
The big headline user-facing feature you’ll find in Linux 4.1 is the wealth of performance and power efficiency improvements committed for Intel’s Cherry Trail and Bay Trail chips, SoCs and devices, such as the Intel Compute Stick.
|
||||
|
||||
Anecdotal suggestions are that Linux Kernel 4.1 gives select combinations of newer Intel hardware as much as an extra hour of battery life. Such high gains are not likely to apply to anything but a very specific sub-set of chips and systems (and high-end ones at that) but it’s still exciting to hear of.
|
||||
|
||||
**Highlights of Linux 4.1 include:**
|
||||
|
||||
- EXT4 gains file-system level encryption (thanks to Google)
|
||||
- Logitech lg4ff driver improves ‘force feedback’ for gaming wheels
|
||||
- Toshiba laptop driver gains USB sleep charging and backlight improvements
|
||||
- Rumble support for Xbox One controller
|
||||
- Better battery reporting in Wacom tablet driver
|
||||
- Various misc. power improvements for both ARM and x86 devices
|
||||
- Samsung Exynos 3250 power management improvements
|
||||
- Support for the Bamboo Pad
|
||||
- Lenovo OneLink Pro Dock gains USB support
|
||||
- Support for Realtek 8723A, 8723B, 8761A, 8821 Wi-Fi cards
|
||||
|
||||
### Install Linux Kernel 4.1 on Ubuntu ###
|
||||
|
||||
Although this release of the kernel is classed as stable there is no pressing need for Ubuntu desktop users to go out of their way to install it.
|
||||
|
||||
Not that you can’t; if you’re impatient and skilled enough to do so you can take a crack at installing Linux 4.1 on Ubuntu by grabbing the appropriate set of packages from [Canonical’s mainline kernel archive][3] (or by risking a third-party PPA).
|
||||
|
||||
Ubuntu 15.10 Wily Werewolf, due for release in October, is to be based on the Ubuntu Kernel 4.1.x (the Ubuntu kernel is the Linux Kernel plus Ubuntu-specific patches that have not been accepted upstream).
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.omgubuntu.co.uk/2015/06/linux-4-1-kernel-new-features
|
||||
|
||||
作者:[Joey-Elijah Sneddon][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://plus.google.com/117485690627814051450/?rel=author
|
||||
[1]:https://lkml.org/lkml/2015/6/22/8
|
||||
[2]:http://www.omgubuntu.co.uk/2015/04/linux-kernel-4-0-new-features
|
||||
[3]:http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=N;O=D
|
@ -1,3 +1,4 @@
|
||||
translating wi-cuckoo
|
||||
What will be the future of Linux without Linus?
|
||||
================================================================================
|
||||

|
||||
|
@ -1,268 +0,0 @@
|
||||
How to Manage and Use LVM (Logical Volume Management) in Ubuntu
|
||||
================================================================================
|
||||

|
||||
|
||||
In our [previous article we told you what LVM is and what you may want to use it for][1], and today we are going to walk you through some of the key management tools of LVM so you will be confident when setting up or expanding your installation.
|
||||
|
||||
As stated before, LVM is an abstraction layer between your operating system and physical hard drives. What that means is your physical hard drives and partitions are no longer tied to the hard drives and partitions they reside on. Rather, the hard drives and partitions that your operating system sees can be any number of separate hard drives pooled together or in a software RAID.
|
||||
|
||||
To manage LVM there are GUI tools available but to really understand what is happening with your LVM configuration it is probably best to know what the command line tools are. This will be especially useful if you are managing LVM on a server or distribution that does not offer GUI tools.
|
||||
|
||||
Most of the commands in LVM are very similar to each other. Each valid command is preceded by one of the following:
|
||||
|
||||
- Physical Volume = pv
|
||||
- Volume Group = vg
|
||||
- Logical Volume = lv
|
||||
|
||||
The physical volume commands are for adding or removing hard drives in volume groups. Volume group commands are for changing what abstracted set of physical partitions are presented to your operating system in logical volumes. Logical volume commands will present the volume groups as partitions so that your operating system can use the designated space.
|
||||
|
||||
### Downloadable LVM Cheat Sheet ###
|
||||
|
||||
To help you understand what commands are available for each prefix we made a LVM cheat sheet. We will cover some of the commands in this article, but there is still a lot you can do that won’t be covered here.
|
||||
|
||||
All commands on this list will need to be run as root because you are changing system wide settings that will affect the entire machine.
|
||||
|
||||

|
||||
|
||||
### How to View Current LVM Information ###
|
||||
|
||||
The first thing you may need to do is check how your LVM is set up. The s and display commands work with physical volumes (pv), volume groups (vg), and logical volumes (lv) so it is a good place to start when trying to figure out the current settings.
|
||||
|
||||
The display command will format the information so it’s easier to understand than the s command. For each command you will see the name and path of the pv/vg and it should also give information about free and used space.
|
||||
|
||||

|
||||
|
||||
The most important information will be the PV name and VG name. With those two pieces of information we can continue working on the LVM setup.
|
||||
|
||||
### Creating a Logical Volume ###
|
||||
|
||||
Logical volumes are the partitions that your operating system uses in LVM. To create a logical volume we first need to have a physical volume and volume group. Here are all of the steps necessary to create a new logical volume.
|
||||
|
||||
#### Create physical volume ####
|
||||
|
||||
We will start from scratch with a brand new hard drive with no partitions or information on it. Start by finding which disk you will be working with. (/dev/sda, sdb, etc.)
|
||||
|
||||
> Note: Remember all of the commands will need to be run as root or by adding ‘sudo’ to the beginning of the command.
|
||||
|
||||
fdisk -l
|
||||
|
||||
If your hard drive has never been formatted or partitioned before you will probably see something like this in the fdisk output. This is completely fine because we are going to create the needed partitions in the next steps.
|
||||
|
||||

|
||||
|
||||
Our new disk is located at /dev/sdb so lets use fdisk to create a new partition on the drive.
|
||||
|
||||
There are a plethora of tools that can create a new partition with a GUI, [including Gparted][2], but since we have the terminal open already, we will use fdisk to create the needed partition.
|
||||
|
||||
From a terminal type the following commands:
|
||||
|
||||
fdisk /dev/sdb
|
||||
|
||||
This will put you in a special fdisk prompt.
|
||||
|
||||

|
||||
|
||||
Enter the commands in the order given to create a new primary partition that uses 100% of the new hard drive and is ready for LVM. If you need to change the partition size or want multiple partitions I suggest using GParted or reading about fdisk on your own.
|
||||
|
||||
**Warning: The following steps will format your hard drive. Make sure you don’t have any information on this hard drive before following these steps.**
|
||||
|
||||
- n = create new partition
|
||||
- p = creates primary partition
|
||||
- 1 = makes partition the first on the disk
|
||||
|
||||
Push enter twice to accept the default first cylinder and last cylinder.
|
||||
|
||||

|
||||
|
||||
To prepare the partition to be used by LVM use the following two commands.
|
||||
|
||||
- t = change partition type
|
||||
- 8e = changes to LVM partition type
|
||||
|
||||
Verify and write the information to the hard drive.
|
||||
|
||||
- p = view partition setup so we can review before writing changes to disk
|
||||
- w = write changes to disk
|
||||
|
||||

|
||||
|
||||
After those commands, the fdisk prompt should exit and you will be back to the bash prompt of your terminal.
|
||||
|
||||
Enter pvcreate /dev/sdb1 to create a LVM physical volume on the partition we just created.
|
||||
|
||||
You may be asking why we didn’t format the partition with a file system but don’t worry, that step comes later.
|
||||
|
||||

|
||||
|
||||
#### Create volume Group ####
|
||||
|
||||
Now that we have a partition designated and physical volume created we need to create the volume group. Luckily this only takes one command.
|
||||
|
||||
vgcreate vgpool /dev/sdb1
|
||||
|
||||

|
||||
|
||||
Vgpool is the name of the new volume group we created. You can name it whatever you’d like but it is recommended to put vg at the front of the label so if you reference it later you will know it is a volume group.
|
||||
|
||||
#### Create logical volume ####
|
||||
|
||||
To create the logical volume that LVM will use:
|
||||
|
||||
lvcreate -L 3G -n lvstuff vgpool
|
||||
|
||||

|
||||
|
||||
The -L command designates the size of the logical volume, in this case 3 GB, and the -n command names the volume. Vgpool is referenced so that the lvcreate command knows what volume to get the space from.
|
||||
|
||||
#### Format and Mount the Logical Volume ####
|
||||
|
||||
One final step is to format the new logical volume with a file system. If you want help choosing a Linux file system, read our [how to that can help you choose the best file system for your needs][3].
|
||||
|
||||
mkfs -t ext3 /dev/vgpool/lvstuff
|
||||
|
||||

|
||||
|
||||
Create a mount point and then mount the volume somewhere you can use it.
|
||||
|
||||
mkdir /mnt/stuff
|
||||
mount -t ext3 /dev/vgpool/lvstuff /mnt/stuff
|
||||
|
||||

|
||||
|
||||
#### Resizing a Logical Volume ####
|
||||
|
||||
One of the benefits of logical volumes is you can make your shares physically bigger or smaller without having to move everything to a bigger hard drive. Instead, you can add a new hard drive and extend your volume group on the fly. Or if you have a hard drive that isn’t used you can remove it from the volume group to shrink your logical volume.
|
||||
|
||||
There are three basic tools for making physical volumes, volume groups, and logical volumes bigger or smaller.
|
||||
|
||||
Note: Each of these commands will need to be preceded by pv, vg, or lv depending on what you are working with.
|
||||
|
||||
- resize – can shrink or expand physical volumes and logical volumes but not volume groups
|
||||
- extend – can make volume groups and logical volumes bigger but not smaller
|
||||
- reduce – can make volume groups and logical volumes smaller but not bigger
|
||||
|
||||
Let’s walk through an example of how to add a new hard drive to the logical volume “lvstuff” we just created.
|
||||
|
||||
#### Install and Format new Hard Drive ####
|
||||
|
||||
To install a new hard drive follow the steps above to create a new partition and change its partition type to LVM (8e). Then use pvcreate to create a physical volume that LVM can recognize.
|
||||
|
||||
#### Add New Hard Drive to Volume Group ####
|
||||
|
||||
To add the new hard drive to a volume group you just need to know what your new partition is, /dev/sdc1 in our case, and the name of the volume group you want to add it to.
|
||||
|
||||
This will add the new physical volume to the existing volume group.
|
||||
|
||||
vgextend vgpool /dev/sdc1
|
||||
|
||||

|
||||
|
||||
#### Extend Logical Volume ####
|
||||
|
||||
To resize the logical volume we need to say how much we want to extend by size instead of by device. In our example we just added a 8 GB hard drive to our 3 GB vgpool. To make that space usable we can use lvextend or lvresize.
|
||||
|
||||
lvextend -L8G /dev/vgpool/lvstuff
|
||||
|
||||

|
||||
|
||||
While this command will work you will see that it will actually resize our logical volume to 8 GB instead of adding 8 GB to the existing volume like we wanted. To add the last 3 available gigabytes you need to use the following command.
|
||||
|
||||
lvextend -L+3G /dev/vgpool/lvstuff
|
||||
|
||||

|
||||
|
||||
Now our logical volume is 11 GB in size.
|
||||
|
||||
#### Extend File System ####
|
||||
|
||||
The logical volume is 11 GB but the file system on that volume is still only 3 GB. To make the file system use the entire 11 GB available you have to use the command resize2fs. Just point resize2fs to the 11 GB logical volume and it will do the magic for you.
|
||||
|
||||
resize2fs /dev/vgpool/lvstuff
|
||||
|
||||

|
||||
|
||||
**Note: If you are using a different file system besides ext3/4 please see your file systems resize tools.**
|
||||
|
||||
#### Shrink Logical Volume ####
|
||||
|
||||
If you wanted to remove a hard drive from a volume group you would need to follow the above steps in reverse order and use lvreduce and vgreduce instead.
|
||||
|
||||
1. resize file system (make sure to move files to a safe area of the hard drive before resizing)
|
||||
1. reduce logical volume (instead of + to extend you can also use – to reduce by size)
|
||||
1. remove hard drive from volume group with vgreduce
|
||||
|
||||
#### Backing up a Logical Volume ####
|
||||
|
||||
Snapshots is a feature that some newer advanced file systems come with but ext3/4 lacks the ability to do snapshots on the fly. One of the coolest things about LVM snapshots is your file system is never taken offline and you can have as many as you want without taking up extra hard drive space.
|
||||
|
||||

|
||||
|
||||
When LVM takes a snapshot, a picture is taken of exactly how the logical volume looks and that picture can be used to make a copy on a different hard drive. While a copy is being made, any new information that needs to be added to the logical volume is written to the disk just like normal, but changes are tracked so that the original picture never gets destroyed.
|
||||
|
||||
To create a snapshot we need to create a new logical volume with enough free space to hold any new information that will be written to the logical volume while we make a backup. If the drive is not actively being written to you can use a very small amount of storage. Once we are done with our backup we just remove the temporary logical volume and the original logical volume will continue on as normal.
|
||||
|
||||
#### Create New Snapshot ####
|
||||
|
||||
To create a snapshot of lvstuff use the lvcreate command like before but use the -s flag.
|
||||
|
||||
lvcreate -L512M -s -n lvstuffbackup /dev/vgpool/lvstuff
|
||||
|
||||

|
||||
|
||||
Here we created a logical volume with only 512 MB because the drive isn’t being actively used. The 512 MB will store any new writes while we make our backup.
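If writes do occur while the backup runs, you can watch how full the snapshot is getting with lvs; a sketch (for snapshots, the Data% column reports how much of the 512 MB has been used):

```shell
# List the logical volumes in the group; the Data% column shows
# how much of the snapshot's space is currently in use
lvs vgpool

# If the snapshot is filling up, it can be grown on the fly
lvextend -L+256M /dev/vgpool/lvstuffbackup
```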
|
||||
|
||||
#### Mount New Snapshot ####
|
||||
|
||||
Just like before we need to create a mount point and mount the new snapshot so we can copy files from it.
|
||||
|
||||
mkdir /mnt/lvstuffbackup
|
||||
mount /dev/vgpool/lvstuffbackup /mnt/lvstuffbackup
|
||||
|
||||

|
||||
|
||||
#### Copy Snapshot and Delete Logical Volume ####
|
||||
|
||||
All you have left to do is copy all of the files from /mnt/lvstuffbackup/ to an external hard drive, or tar them up so they are all in one file.
|
||||
|
||||
**Note: tar -c will create an archive and -f will say the location and file name of the archive. For help with the tar command use man tar in the terminal.**
|
||||
|
||||
tar -cf /home/rothgar/Backup/lvstuff-ss /mnt/lvstuffbackup/
|
||||
|
||||

|
||||
|
||||
Remember that while the backup is taking place all of the files that would be written to lvstuff are being tracked in the temporary logical volume we created earlier. Make sure you have enough free space while the backup is happening.
|
||||
|
||||
Once the backup finishes, unmount the volume and remove the temporary snapshot.
|
||||
|
||||
umount /mnt/lvstuffbackup
|
||||
    lvremove /dev/vgpool/lvstuffbackup
|
||||
|
||||

|
||||
|
||||
#### Deleting a Logical Volume ####
|
||||
|
||||
To delete a logical volume you need to first make sure the volume is unmounted, and then you can use lvremove to delete it. You can also remove a volume group once the logical volumes have been deleted and a physical volume after the volume group is deleted.
|
||||
|
||||
Here are all the commands using the volumes and groups we’ve created.
|
||||
|
||||
umount /mnt/lvstuff
|
||||
lvremove /dev/vgpool/lvstuff
|
||||
vgremove vgpool
|
||||
pvremove /dev/sdb1 /dev/sdc1
|
||||
|
||||

|
||||
|
||||
That should cover most of what you need to know to use LVM. If you’ve got some experience on the topic, be sure to share your wisdom in the comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
||||
|
||||
译者:[runningwater](https://github.com/runningwater)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-how-do-you-enable-it-in-ubuntu/
|
||||
[2]:http://www.howtogeek.com/howto/17001/how-to-format-a-usb-drive-in-ubuntu-using-gparted/
|
||||
[3]:http://www.howtogeek.com/howto/33552/htg-explains-which-linux-file-system-should-you-choose/
|
@ -1,301 +0,0 @@
|
||||
Translating by ZTinoZ
|
||||
20 Useful Terminal Emulators for Linux
|
||||
================================================================================
|
||||
A terminal emulator is a computer program that reproduces a video terminal within some other display structure. In other words, a terminal emulator can make a dumb machine appear like a client computer networked to a server. It allows an end user to access the console as well as its applications, such as text user interfaces and the command line interface.
|
||||
|
||||

|
||||
|
||||
20 Linux Terminal Emulators
|
||||
|
||||
You will find a huge number of terminal emulators to choose from in the open source world. Some of them offer a large range of features, while others offer fewer. To give a better idea of the quality of the software available, we have gathered a list of marvelous terminal emulators for Linux. Each entry provides a description and feature list, along with a screenshot of the software and a relevant download link.
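Most of the emulators below are packaged by the major distributions, so you can usually try one straight from your package manager; for example (package names may vary slightly between distributions):

```shell
# Debian / Ubuntu
sudo apt-get install terminator tilda guake

# Fedora / CentOS (with EPEL enabled)
sudo yum install terminator tilda guake
```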
|
||||
|
||||
### 1. Terminator ###
|
||||
|
||||
Terminator is an advanced and powerful terminal emulator which supports multiple terminal windows. This emulator is fully customizable: you can change the size and colour, and give different shapes to the terminal. It is very user friendly and fun to use.
|
||||
|
||||
#### Features of Terminator ####
|
||||
|
||||
- Customize your profiles and colour schemes, set the size to fit your needs.
|
||||
- Use plugins to get even more functionality.
|
||||
- Several key-shortcuts are available to speed up common activities.
|
||||
- Split the terminal window into several virtual terminals and re-size them as needed.
|
||||
|
||||

|
||||
|
||||
Terminator Terminal
|
||||
|
||||
- [Terminator Homepage][1]
|
||||
- [Download and Installation Instructions][2]
|
||||
|
||||
### 2. Tilda ###
|
||||
|
||||
Tilda is a stylish drop-down terminal based on GTK+. With a single key press you can launch a new Tilda window or hide it again. You can also add colors of your choice to change the look of the text and the terminal background.
|
||||
|
||||
#### Features of Tilda ####
|
||||
|
||||
- Interface with highly customizable options.
|
||||
- You can set the transparency level for the Tilda window.
|
||||
- Excellent built-in colour schemes.
|
||||
|
||||

|
||||
|
||||
Tilda Terminal
|
||||
|
||||
- [Tilda Homepage][3]
|
||||
|
||||
### 3. Guake ###
|
||||
|
||||
Guake is a Python-based drop-down terminal created for the GNOME Desktop Environment. It is invoked by pressing a single keystroke, and hidden by pressing the same keystroke again. Its design was inspired by FPS (First Person Shooter) games such as Quake, and one of its main goals is to be easy to reach.
|
||||
|
||||
Guake is very similar to Yakuake and Tilda, but it is an experiment to mix the best of them into a single GTK-based program. Guake has been written in Python from scratch, with a little piece in C (the global hotkey handling).
|
||||
|
||||

|
||||
|
||||
Guake Terminal
|
||||
|
||||
- [Guake Homepage][4]
|
||||
|
||||
### 4. Yakuake ###
|
||||
|
||||
Yakuake (Yet Another Kuake) is a KDE-based drop-down terminal emulator very similar in functionality to Guake. Its design was inspired by FPS console games such as Quake.
|
||||
|
||||
Yakuake is basically a KDE application, which can be easily installed on a KDE desktop, but if you try to install Yakuake on a GNOME desktop, it will prompt you to install a huge number of dependency packages.
|
||||
|
||||
#### Yakuake Features ####
|
||||
|
||||
- Fluently turn down from the top of your screen
|
||||
- Tabbed interface
|
||||
- Configurable dimensions and animation speed
|
||||
- Customizable
|
||||
|
||||

|
||||
|
||||
Yakuake Terminal
|
||||
|
||||
- [Yakuake Homepage][5]
|
||||
|
||||
### 5. ROXTerm ###
|
||||
|
||||
ROXTerm is yet another lightweight terminal emulator designed to provide features similar to gnome-terminal. It was originally built to have a smaller footprint and faster start-up time by not using the GNOME libraries and by using an independent applet to provide the configuration interface (GUI), but over time its role has shifted to offering a wider range of features for power users.
|
||||
|
||||
However, it is more customizable than gnome-terminal and aimed more at “power” users who make heavy use of terminals. It integrates easily with the GNOME desktop environment and provides features like drag & drop of items into the terminal.
|
||||
|
||||

|
||||
|
||||
Roxterm Terminal
|
||||
|
||||
- [ROXTerm Homepage][6]
|
||||
|
||||
### 6. Eterm ###
|
||||
|
||||
Eterm is a lightweight color terminal emulator designed as a replacement for xterm. It is developed with a Freedom of Choice ideology, leaving as much power, flexibility, and freedom as workable in the hands of the user.
|
||||
|
||||

|
||||
|
||||
Eterm Terminal
|
||||
|
||||
- [Eterm Homepage][7]
|
||||
|
||||
### 7. Rxvt ###
|
||||
|
||||
Rxvt, which stands for extended virtual terminal, is a color terminal emulator for Linux intended as an xterm replacement for power users who don’t need features such as Tektronix 4014 emulation and toolkit-style configurability.
|
||||
|
||||

|
||||
|
||||
Rxvt Terminal
|
||||
|
||||
- [Rxvt Homepage][8]
|
||||
|
||||
### 8. Wterm ###
|
||||
|
||||
Wterm is another lightweight color terminal emulator based on the rxvt project. Its features include background images, transparency, and reverse transparency, and a considerable set of runtime options is available, resulting in a highly customizable terminal emulator.
|
||||
|
||||

|
||||
|
||||
wterm Terminal
|
||||
|
||||
- [Wterm Homepage][9]
|
||||
|
||||
### 9. LXTerminal ###
|
||||
|
||||
LXTerminal is the default VTE-based terminal emulator for LXDE (Lightweight X Desktop Environment), without any unnecessary dependencies. The terminal has some nice features, such as:
|
||||
#### LXTerminal Features ####
|
||||
|
||||
- Multiple tabs support
|
||||
- Supports common commands like cp, cd, dir, mkdir, mvdir.
|
||||
- Feature to hide the menu bar for saving space
|
||||
- Change the color scheme.
|
||||
|
||||

|
||||
|
||||
lxterminal Terminal
|
||||
|
||||
- [LXTerminal Homepage][10]
|
||||
|
||||
### 10. Konsole ###
|
||||
|
||||
Konsole is yet another powerful, free, KDE-based terminal emulator, originally created by Lars Doelle.
|
||||
#### Konsole Features ####
|
||||
|
||||
- Multiple Tabbed terminals.
|
||||
- Translucent backgrounds.
|
||||
- Support for Split-view mode.
|
||||
- Directory and SSH bookmarking.
|
||||
- Customizable color schemes.
|
||||
- Customizable key bindings.
|
||||
- Notification alerts about activity in a terminal.
|
||||
- Incremental search
|
||||
- Support for Dolphin file manager
|
||||
- Export of output in plain text or HTML format.
|
||||
|
||||

|
||||
|
||||
Konsole Terminal
|
||||
|
||||
- [Konsole Homepage][11]
|
||||
|
||||
### 11. TermKit ###
|
||||
|
||||
TermKit is an elegant terminal that aims to blend aspects of a GUI with command line applications, using the WebKit rendering engine found in web browsers like Google Chrome and Chromium. TermKit was originally designed for Mac and Windows, but thanks to a TermKit fork by Floby you can now install it on Linux-based distributions and experience the power of TermKit.
|
||||
|
||||

|
||||
|
||||
TermKit Terminal
|
||||
|
||||
- [TermKit Homepage][12]
|
||||
|
||||
### 12. st ###
|
||||
|
||||
st is a simple terminal implementation for X Window.
|
||||
|
||||

|
||||
|
||||
st terminal
|
||||
|
||||
- [st Homepage][13]
|
||||
|
||||
### 13. Gnome-Terminal ###
|
||||
|
||||
GNOME Terminal is the built-in terminal emulator for the GNOME desktop environment, developed by Havoc Pennington and others. It allows users to run commands using a real Linux shell while remaining in the GNOME environment. GNOME Terminal emulates the xterm terminal emulator and offers a few similar features.
|
||||
|
||||
GNOME Terminal supports multiple profiles: users can create several profiles for their account, customize configuration options such as fonts, colors, background image, and behavior per profile, and give each profile a name. It also supports mouse events, URL detection, multiple tabs, and more.
|
||||
|
||||

|
||||
|
||||
Gnome Terminal
|
||||
|
||||
- [Gnome Terminal][14]
|
||||
|
||||
### 14. Final Term ###
|
||||
|
||||
Final Term is an open source, stylish terminal emulator that combines some exciting capabilities and handy features into one single beautiful interface. It is still under development, but provides significant features such as semantic text menus, smart command completion, GUI terminal controls, omnipotent keybindings, color support, and many more. The following animated screen grab demonstrates some of these features; please click on the image to view the demo.
|
||||
|
||||

|
||||
|
||||
FinalTerm Terminal
|
||||
|
||||
- [Final Term][15]
|
||||
|
||||
### 15. Terminology ###
|
||||
|
||||
Terminology is yet another modern terminal emulator, created for the Enlightenment desktop but usable in other desktop environments as well. It has some awesome unique features which no other terminal emulator has.
|
||||
|
||||
Apart from these features, Terminology offers even more things that you wouldn’t expect from other terminal emulators, like preview thumbnails of images, videos, and documents; it also allows you to view those files directly from Terminology.
|
||||
|
||||
You can watch the following demonstration video created by the Terminology developer (the video quality isn’t great, but it is still enough to get an idea of Terminology).
|
||||
|
||||
<iframe width="630" height="480" frameborder="0" allowfullscreen="" src="//www.youtube.com/embed/ibPziLRGvkg"></iframe>
|
||||
|
||||
- [Terminology][16]
|
||||
|
||||
### 16. Xfce4 terminal ###
|
||||
|
||||
Xfce Terminal is a lightweight, modern, and easy-to-use terminal emulator specially designed for the Xfce desktop environment. The latest release of Xfce Terminal has some cool new features such as a search dialog, a tab color changer, a drop-down console like Guake or Yakuake, and many more.
|
||||
|
||||

|
||||
|
||||
Xfce Terminal
|
||||
|
||||
- [Xfce4 Terminal][17]
|
||||
|
||||
### 17. xterm ###
|
||||
|
||||
The xterm application is the standard terminal emulator for the X Window System. It provides DEC VT102 and Tektronix 4014 compatible terminals for applications that can’t use the window system directly.
|
||||
|
||||

|
||||
|
||||
xterm Terminal
|
||||
|
||||
- [xterm][18]
|
||||
|
||||
### 18. LilyTerm ###
|
||||
|
||||
LilyTerm is another lesser-known open source terminal emulator, based on libvte, that aims to be fast and lightweight. LilyTerm also includes key features such as:
|
||||
|
||||
- Support for tabbing, coloring and reordering tabs
|
||||
- Ability to manage tabs through keybindings
|
||||
- Support for background transparency and saturation.
|
||||
- Support for user specific profile creation.
|
||||
- Several customization options for profiles.
|
||||
- Extensive UTF-8 support.
|
||||
|
||||

|
||||
|
||||
Lilyterm Terminal
|
||||
|
||||
- [LilyTerm][19]
|
||||
|
||||
### 19. Sakura ###
|
||||
|
||||
Sakura is another lesser-known Unix-style terminal emulator developed for command line use as well as text-based terminal programs. Sakura is based on GTK and libvte, and while it does not provide many advanced features, it offers some customization options such as multiple tab support, custom text colors, fonts and background images, speedy command processing, and a few more.
|
||||
|
||||

|
||||
|
||||
Sakura Terminal
|
||||
|
||||
- [Sakura][20]
|
||||
|
||||
### 20. rxvt-unicode ###
|
||||
|
||||
rxvt-unicode (also known as urxvt) is yet another highly customizable, lightweight, and fast terminal emulator with Xft and Unicode support, developed by Marc Lehmann. It has some outstanding features such as support for international languages via Unicode, the ability to display multiple font types, and support for Perl extensions.
|
||||
|
||||

|
||||
|
||||
rxvt unicode
|
||||
|
||||
- [rxvt-unicode][21]
|
||||
|
||||
If you know any other capable Linux terminal emulators that I’ve not included in the above list, please do share with me using our comment section.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/linux-terminal-emulators/
|
||||
|
||||
作者:[Ravi Saive][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/admin/
|
||||
[1]:https://launchpad.net/terminator
|
||||
[2]:http://www.tecmint.com/terminator-a-linux-terminal-emulator-to-manage-multiple-terminal-windows/
|
||||
[3]:http://tilda.sourceforge.net/tildaabout.php
|
||||
[4]:https://github.com/Guake/guake
|
||||
[5]:http://extragear.kde.org/apps/yakuake/
|
||||
[6]:http://roxterm.sourceforge.net/index.php?page=index&lang=en
|
||||
[7]:http://www.eterm.org/
|
||||
[8]:http://sourceforge.net/projects/rxvt/
|
||||
[9]:http://sourceforge.net/projects/wterm/
|
||||
[10]:http://wiki.lxde.org/en/LXTerminal
|
||||
[11]:http://konsole.kde.org/
|
||||
[12]:https://github.com/unconed/TermKit
|
||||
[13]:http://st.suckless.org/
|
||||
[14]:https://help.gnome.org/users/gnome-terminal/stable/
|
||||
[15]:http://finalterm.org/
|
||||
[16]:http://www.enlightenment.org/p.php?p=about/terminology
|
||||
[17]:http://docs.xfce.org/apps/terminal/start
|
||||
[18]:http://invisible-island.net/xterm/
|
||||
[19]:http://lilyterm.luna.com.tw/
|
||||
[20]:https://launchpad.net/sakura
|
||||
[21]:http://software.schmorp.de/pkg/rxvt-unicode
|
@ -1,175 +0,0 @@
|
||||
wyangsun 申领
|
||||
Inside NGINX: How We Designed for Performance & Scale
|
||||
================================================================================
|
||||
NGINX leads the pack in web performance, and it’s all due to the way the software is designed. Whereas many web servers and application servers use a simple threaded or process-based architecture, NGINX stands out with a sophisticated event-driven architecture that enables it to scale to hundreds of thousands of concurrent connections on modern hardware.
|
||||
|
||||
The [Inside NGINX][1] infographic drills down from the high-level process architecture to illustrate how NGINX handles multiple connections within a single process. This blog explains how it all works in further detail.
|
||||
|
||||
### Setting the Scene – the NGINX Process Model ###
|
||||
|
||||

|
||||
|
||||
To better understand this design, you need to understand how NGINX runs. NGINX has a master process (which performs the privileged operations such as reading configuration and binding to ports) and a number of worker and helper processes.
|
||||
|
||||
# service nginx restart
|
||||
* Restarting nginx
|
||||
# ps -ef --forest | grep nginx
|
||||
root 32475 1 0 13:36 ? 00:00:00 nginx: master process /usr/sbin/nginx \
|
||||
-c /etc/nginx/nginx.conf
|
||||
nginx 32476 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
|
||||
nginx 32477 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
|
||||
nginx 32479 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
|
||||
nginx 32480 32475 0 13:36 ? 00:00:00 \_ nginx: worker process
|
||||
nginx 32481 32475 0 13:36 ? 00:00:00 \_ nginx: cache manager process
|
||||
nginx 32482 32475 0 13:36 ? 00:00:00 \_ nginx: cache loader process
|
||||
|
||||
On this 4-core server, the NGINX master process creates 4 worker processes and a couple of cache helper processes which manage the on-disk content cache.
|
||||
|
||||
### Why Is Architecture Important? ###
|
||||
|
||||
The fundamental basis of any Unix application is the thread or process. (From the Linux OS perspective, threads and processes are mostly identical; the major difference is the degree to which they share memory.) A thread or process is a self-contained set of instructions that the operating system can schedule to run on a CPU core. Most complex applications run multiple threads or processes in parallel for two reasons:
|
||||
|
||||
- They can use more compute cores at the same time.
|
||||
- Threads and processes make it very easy to do operations in parallel (for example, to handle multiple connections at the same time).
|
||||
|
||||
Processes and threads consume resources. They each use memory and other OS resources, and they need to be swapped on and off the cores (an operation called a context switch). Most modern servers can handle hundreds of small, active threads or processes simultaneously, but performance degrades seriously once memory is exhausted or when high I/O load causes a large volume of context switches.
|
||||
|
||||
The common way to design network applications is to assign a thread or process to each connection. This architecture is simple and easy to implement, but it does not scale when the application needs to handle thousands of simultaneous connections.
|
||||
|
||||
### How Does NGINX Work? ###
|
||||
|
||||
NGINX uses a predictable process model that is tuned to the available hardware resources:
|
||||
|
||||
- The master process performs the privileged operations such as reading configuration and binding to ports, and then creates a small number of child processes (the next three types).
|
||||
- The cache loader process runs at startup to load the disk-based cache into memory, and then exits. It is scheduled conservatively, so its resource demands are low.
|
||||
- The cache manager process runs periodically and prunes entries from the disk caches to keep them within the configured sizes.
|
||||
- The worker processes do all of the work! They handle network connections, read and write content to disk, and communicate with upstream servers.
|
||||
|
||||
The NGINX configuration recommended in most cases – running one worker process per CPU core – makes the most efficient use of hardware resources. You configure it by including the [worker_processes auto][2] directive in the configuration:
|
||||
|
||||
worker_processes auto;
|
||||
|
||||
When an NGINX server is active, only the worker processes are busy. Each worker process handles multiple connections in a non-blocking fashion, reducing the number of context switches.
|
||||
|
||||
Each worker process is single-threaded and runs independently, grabbing new connections and processing them. The processes can communicate using shared memory for shared cache data, session persistence data, and other shared resources.
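Putting those directives together, a minimal sketch of the relevant part of nginx.conf might look like the following (the worker_connections value is illustrative, not a recommendation from this article):

```nginx
# One worker process per CPU core
worker_processes auto;

events {
    # Maximum simultaneous connections handled by each worker process
    worker_connections 1024;
}
```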
|
||||
|
||||
### Inside the NGINX Worker Process ###
|
||||
|
||||

|
||||
|
||||
Each NGINX worker process is initialized with the NGINX configuration and is provided with a set of listen sockets by the master process.
|
||||
|
||||
The NGINX worker processes begin by waiting for events on the listen sockets ([accept_mutex][3] and [kernel socket sharding][4]). Events are initiated by new incoming connections. These connections are assigned to a state machine – the HTTP state machine is the most commonly used, but NGINX also implements state machines for stream (raw TCP) traffic and for a number of mail protocols (SMTP, IMAP, and POP3).
|
||||
|
||||

|
||||
|
||||
The state machine is essentially the set of instructions that tell NGINX how to process a request. Most web servers that perform the same functions as NGINX use a similar state machine – the difference lies in the implementation.
|
||||
|
||||
### Scheduling the State Machine ###
|
||||
|
||||
Think of the state machine like the rules for chess. Each HTTP transaction is a chess game. On one side of the chessboard is the web server – a grandmaster who can make decisions very quickly. On the other side is the remote client – the web browser that is accessing the site or application over a relatively slow network.
|
||||
|
||||
However, the rules of the game can be very complicated. For example, the web server might need to communicate with other parties (proxying to an upstream application) or talk to an authentication server. Third-party modules in the web server can even extend the rules of the game.
|
||||
|
||||
#### A Blocking State Machine ####
|
||||
|
||||
Recall our description of a process or thread as a self-contained set of instructions that the operating system can schedule to run on a CPU core. Most web servers and web applications use a process-per-connection or thread-per-connection model to play the chess game. Each process or thread contains the instructions to play one game through to the end. During the time the process is run by the server, it spends most of its time ‘blocked’ – waiting for the client to complete its next move.
|
||||
|
||||

|
||||
|
||||
1. The web server process listens for new connections (new games initiated by clients) on the listen sockets.
|
||||
1. When it gets a new game, it plays that game, blocking after each move to wait for the client’s response.
|
||||
1. Once the game completes, the web server process might wait to see if the client wants to start a new game (this corresponds to a keepalive connection). If the connection is closed (the client goes away or a timeout occurs), the web server process returns to listening for new games.
|
||||
|
||||
The important point to remember is that every active HTTP connection (every chess game) requires a dedicated process or thread (a grandmaster). This architecture is simple and easy to extend with third-party modules (‘new rules’). However, there’s a huge imbalance: the rather lightweight HTTP connection, represented by a file descriptor and a small amount of memory, maps to a separate thread or process, a very heavyweight operating system object. It’s a programming convenience, but it’s massively wasteful.
|
||||
|
||||
#### NGINX is a True Grandmaster ####
|
||||
|
||||
Perhaps you’ve heard of [simultaneous exhibition][5] games, where one chess grandmaster plays dozens of opponents at the same time?
|
||||
|
||||

|
||||
|
||||
[Kiril Georgiev played 360 people simultaneously in Sofia, Bulgaria][6]. His final score was 284 wins, 70 draws and 6 losses.
|
||||
|
||||
That’s how an NGINX worker process plays “chess.” Each worker (remember – there’s usually one worker for each CPU core) is a grandmaster that can play hundreds (in fact, hundreds of thousands) of games simultaneously.
|
||||
|
||||

|
||||
|
||||
1. The worker waits for events on the listen and connection sockets.
|
||||
1. Events occur on the sockets and the worker handles them:
|
||||
|
||||
- An event on the listen socket means that a client has started a new chess game. The worker creates a new connection socket.
|
||||
- An event on a connection socket means that the client has made a new move. The worker responds promptly.
|
||||
|
||||
A worker never blocks on network traffic, waiting for its “opponent” (the client) to respond. When it has made its move, the worker immediately proceeds to other games where moves are waiting to be processed, or welcomes new players in the door.
|
||||
|
||||
### Why Is This Faster than a Blocking, Multi-Process Architecture? ###
|
||||
|
||||
NGINX scales very well to support hundreds of thousands of connections per worker process. Each new connection creates another file descriptor and consumes a small amount of additional memory in the worker process. There is very little additional overhead per connection. NGINX processes can remain pinned to CPUs. Context switches are relatively infrequent and occur when there is no work to be done.
|
||||
|
||||
In the blocking, connection-per-process approach, each connection requires a large amount of additional resources and overhead, and context switches (swapping from one process to another) are very frequent.
|
||||
|
||||
For a more detailed explanation, check out this [article][7] about NGINX architecture, by Andrew Alexeev, VP of Corporate Development and Co-Founder at NGINX, Inc.
|
||||
|
||||
With appropriate [system tuning][8], NGINX can scale to handle hundreds of thousands of concurrent HTTP connections per worker process, and can absorb traffic spikes (an influx of new games) without missing a beat.
|
||||
|
||||
### Updating Configuration and Upgrading NGINX ###
|
||||
|
||||
NGINX’s process architecture, with a small number of worker processes, makes for very efficient updating of the configuration and even the NGINX binary itself.
|
||||
|
||||

|
||||
|
||||
Updating NGINX configuration is a very simple, lightweight, and reliable operation. It typically just means running the `nginx -s reload` command, which checks the configuration on disk and sends the master process a SIGHUP signal.
|
||||
|
||||
When the master process receives a SIGHUP, it does two things:
|
||||
|
||||
- Reloads the configuration and forks a new set of worker processes. These new worker processes immediately begin accepting connections and processing traffic (using the new configuration settings).
|
||||
- Signals the old worker processes to gracefully exit. The worker processes stop accepting new connections. As soon as each current HTTP request completes, the worker process cleanly shuts down the connection (that is, there are no lingering keepalives). Once all connections are closed, the worker processes exit.
|
||||
|
||||
This reload process can cause a small spike in CPU and memory usage, but it’s generally imperceptible compared to the resource load from active connections. You can reload the configuration multiple times per second (and many NGINX users do exactly that). Very rarely, issues arise when there are many generations of NGINX worker processes waiting for connections to close, but even those are quickly resolved.
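In practice, a reload is usually a two-step habit; a sketch (the pid file path depends on how your NGINX was built or packaged):

```shell
# Validate the configuration on disk first
nginx -t

# Then ask the master process to reload (equivalent to sending SIGHUP)
nginx -s reload

# Or send the signal by hand
kill -HUP $(cat /var/run/nginx.pid)
```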
|
||||
|
||||
NGINX’s binary upgrade process achieves the holy grail of high-availability – you can upgrade the software on the fly, without any dropped connections, downtime, or interruption in service.
|
||||
|
||||

|
||||
|
||||
The binary upgrade process is similar in approach to the graceful reload of configuration. A new NGINX master process runs in parallel with the original master process, and they share the listening sockets. Both processes are active, and their respective worker processes handle traffic. You can then signal the old master and its workers to gracefully exit.
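The signals involved are documented NGINX behavior; a sketch of a typical on-the-fly binary upgrade after installing the new executable (again, the pid file path depends on your build):

```shell
# Start a new master process with the new binary; the old master's
# pid file is renamed to nginx.pid.oldbin
kill -USR2 $(cat /var/run/nginx.pid)

# Gracefully stop the old master's worker processes
kill -WINCH $(cat /var/run/nginx.pid.oldbin)

# Once satisfied with the new binary, gracefully shut down the old master
kill -QUIT $(cat /var/run/nginx.pid.oldbin)
```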
|
||||
|
||||
The entire process is described in more detail in [Controlling NGINX][9].
|
||||
|
||||
### Conclusion ###
|
||||
|
||||
The [Inside NGINX infographic][10] provides a high-level overview of how NGINX functions, but behind this simple explanation is over ten years of innovation and optimization that enable NGINX to deliver the best possible performance on a wide range of hardware while maintaining the security and reliability that modern web applications require.
|
||||
|
||||
If you’d like to read more about the optimizations in NGINX, check out these great resources:
|
||||
|
||||
- [Installing and Tuning NGINX for Performance][11] (webinar; [slides][12] at Speaker Deck)
|
||||
- [Tuning NGINX for Performance][13]
|
||||
- [The Architecture of Open Source Applications – NGINX][14]
|
||||
- [Socket Sharding in NGINX Release 1.9.1][15] (using the SO_REUSEPORT socket option)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/
|
||||
|
||||
作者:[Owen Garrett][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://nginx.com/author/owen/
|
||||
[1]:http://nginx.com/resources/library/infographic-inside-nginx/
|
||||
[2]:http://nginx.org/en/docs/ngx_core_module.html#worker_processes
|
||||
[3]:http://nginx.org/en/docs/ngx_core_module.html#accept_mutex
|
||||
[4]:http://nginx.com/blog/socket-sharding-nginx-release-1-9-1/
|
||||
[5]:http://en.wikipedia.org/wiki/Simultaneous_exhibition
|
||||
[6]:http://gambit.blogs.nytimes.com/2009/03/03/in-chess-records-were-made-to-be-broken/
|
||||
[7]:http://www.aosabook.org/en/nginx.html
|
||||
[8]:http://nginx.com/blog/tuning-nginx/
|
||||
[9]:http://nginx.org/en/docs/control.html
|
||||
[10]:http://nginx.com/resources/library/infographic-inside-nginx/
|
||||
[11]:http://nginx.com/resources/webinars/installing-tuning-nginx/
|
||||
[12]:https://speakerdeck.com/nginx/nginx-installation-and-tuning
|
||||
[13]:http://nginx.com/blog/tuning-nginx/
|
||||
[14]:http://www.aosabook.org/en/nginx.html
|
||||
[15]:http://nginx.com/blog/socket-sharding-nginx-release-1-9-1/
|
@ -1,91 +0,0 @@
|
||||
How to combine two graphs on Cacti
|
||||
================================================================================
|
||||
[Cacti][1] is a fantastic open source network monitoring system that is widely used to graph network elements like bandwidth, storage, processor and memory utilization. Using its web based interface, you can create and organize graphs easily. However, some advanced features like merging graphs, creating aggregate graphs using multiple sources, and migrating Cacti to another server are not provided by default. You might need some experience with Cacti to pull these off. In this tutorial, we will see how we can merge two Cacti graphs into one.
|
||||
|
||||
Consider this example. Client-A has been connected to port 5 of switch-A for the last six months. Port 5 becomes faulty, and so the client is migrated to Port 6. As Cacti uses different graphs for each interface/element, the bandwidth history of the client would be split into port 5 and port 6. So we end up with two graphs for one client - one with six months' worth of old data, and the other that contains ongoing data.
|
||||
|
||||
In such cases, we can actually combine the two graphs so the old data is appended to the new graph, and we get to keep a single graph containing historic and new data for one customer. This tutorial will explain exactly how we can achieve that.
|
||||
|
||||
Cacti stores the data of each graph in its own RRD (round robin database) file. When a graph is requested, the values stored in a corresponding RRD file are used to generate the graph. RRD files are stored in `/var/lib/cacti/rra` in Ubuntu/Debian systems and in `/var/www/cacti/rra` in CentOS/RHEL systems.
|
||||
|
||||
The idea behind merging graphs is to alter these RRD files so the values from the old RRD file are appended to the new RRD file.
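Because the merge rewrites RRD files in place, it is wise to back them up first. A minimal sketch, where the rra path defaults to the Debian/Ubuntu location from above and the backup directory name is an assumption:

```sh
# Copy every RRD file aside before altering anything; paths are assumptions.
RRA_DIR=${RRA_DIR:-/var/lib/cacti/rra}
BACKUP_DIR=${BACKUP_DIR:-./rrd-backup}
mkdir -p "$BACKUP_DIR"
for f in "$RRA_DIR"/*.rrd; do
    [ -e "$f" ] && cp -a "$f" "$BACKUP_DIR"/   # preserve ownership/timestamps
done
ls "$BACKUP_DIR"
```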
|
||||
|
||||
### Scenario ###
|
||||
|
||||
The services for a client have been running on eth0 for over a year. Because of hardware failure, the client has been migrated to the eth1 interface of another server. We want to graph the bandwidth of the new interface, while retaining the historic data for over a year. The client would see only one graph.
|
||||
|
||||
### Identifying the RRD for the Graph ###
|
||||
|
||||
The first step during graph merging is to identify the RRD file associated with a graph. We can check the file by opening the graph in debug mode. To do this, go to Cacti's menu: Console > Graph Management > Select Graph > Turn On Graph Debug Mode.
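Alternatively, recently modified files in the rra directory can hint at which RRD belongs to an actively polling graph. This is a sketch; adjust the path for your distribution:

```sh
# List the most recently written RRD files first; path is an assumption.
RRA_DIR=${RRA_DIR:-/var/lib/cacti/rra}
if [ -d "$RRA_DIR" ]; then
    ls -lt "$RRA_DIR" | head -n 5
else
    echo "no such directory: $RRA_DIR"
fi
```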
|
||||
|
||||
#### Old graph: ####
|
||||
|
||||

|
||||
|
||||
#### New graph: ####
|
||||
|
||||

|
||||
|
||||
From the example output (which is based on a Debian system), we can identify the RRD files for two graphs:
|
||||
|
||||
- **Old graph**: /var/lib/cacti/rra/old_graph_traffic_in_8.rrd
|
||||
- **New graph**: /var/lib/cacti/rra/new_graph_traffic_in_10.rrd
|
||||
|
||||
### Preparing a Script ###
|
||||
|
||||
We will merge two RRD files using a [RRD splice script][2]. Download this PHP script, and install it as /var/lib/cacti/rra/rrdsplice.php (for Debian/Ubuntu) or /var/www/cacti/rra/rrdsplice.php (for CentOS/RHEL).
|
||||
|
||||
Next, make sure that the file is owned by the Apache user.
|
||||
|
||||
On Debian or Ubuntu, run the following command:
|
||||
|
||||
# chown www-data:www-data rrdsplice.php
|
||||
|
||||
and update rrdsplice.php accordingly. Look for the following line:
|
||||
|
||||
chown($finrrd, "apache");
|
||||
|
||||
and replace it with:
|
||||
|
||||
chown($finrrd, "www-data");
|
||||
|
||||
On CentOS or RHEL, run the following command:
|
||||
|
||||
# chown apache:apache rrdsplice.php
|
||||
|
||||
### Merging Two Graphs ###
|
||||
|
||||
The script's usage syntax can easily be found by running it without any parameters.
|
||||
|
||||
# cd /path/to/rrdsplice.php
|
||||
# php rrdsplice.php
|
||||
|
||||
----------
|
||||
|
||||
USAGE: rrdsplice.php --oldrrd=file --newrrd=file --finrrd=file
|
||||
|
||||
Now we are ready to merge two RRD files. Simply supply the names of an old RRD file and a new RRD file. We will overwrite the merged result back to the new RRD file.
|
||||
|
||||
# php rrdsplice.php --oldrrd=old_graph_traffic_in_8.rrd --newrrd=new_graph_traffic_in_10.rrd --finrrd=new_graph_traffic_in_10.rrd
|
||||
|
||||
Now the data from the old RRD file should be appended to the new RRD. Any new data will continue to be written by Cacti to the new RRD file. If we click on the graph, we should be able to verify that the weekly, monthly and yearly records have also been added from the old graph. The second graph in the following diagram shows weekly records from the old graph.
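The merge can also be sanity-checked from the command line with rrdtool itself. This is a sketch: the file name is the one from this example, and the block skips itself if rrdtool or the file is missing:

```sh
# After merging, the oldest timestamp should predate the migration.
RRD=new_graph_traffic_in_10.rrd
if command -v rrdtool >/dev/null 2>&1 && [ -f "$RRD" ]; then
    rrdtool first "$RRD"   # oldest timestamp in the RRD
    rrdtool last  "$RRD"   # newest timestamp, should keep advancing
else
    echo "skipping check: rrdtool or $RRD not available"
fi
```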
|
||||
|
||||

|
||||
|
||||
To sum up, this tutorial showed how we can easily merge two Cacti graphs into one. This trick is useful when a service is migrated to another device/interface and we want to deal with only one graph instead of two. The script is very handy as it can join graphs regardless of the source device e.g., Cisco 1800 router and Cisco 2960 switch.
|
||||
|
||||
Hope this helps.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/combine-two-graphs-cacti.html
|
||||
|
||||
作者:[Sarmed Rahman][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/sarmed
|
||||
[1]:http://xmodulo.com/install-configure-cacti-linux.html
|
||||
[2]:http://svn.cacti.net/viewvc/developers/thewitness/rrdsplice/rrdsplice.php
|
@ -1,436 +0,0 @@
|
||||
Translating by GOLinux!
|
||||
The Art of Command Line
|
||||
================================================================================
|
||||
- [Basics](#basics)
|
||||
- [Everyday use](#everyday-use)
|
||||
- [Processing files and data](#processing-files-and-data)
|
||||
- [System debugging](#system-debugging)
|
||||
- [One-liners](#one-liners)
|
||||
- [Obscure but useful](#obscure-but-useful)
|
||||
- [More resources](#more-resources)
|
||||
- [Disclaimer](#disclaimer)
|
||||
|
||||
|
||||

|
||||
|
||||
Fluency on the command line is a skill often neglected or considered arcane, but it improves your flexibility and productivity as an engineer in both obvious and subtle ways. This is a selection of notes and tips on using the command-line that I've found useful when working on Linux. Some tips are elementary, and some are fairly specific, sophisticated, or obscure. This page is not long, but if you can use and recall all the items here, you know a lot.
|
||||
|
||||
Much of this
|
||||
[originally](http://www.quora.com/What-are-some-lesser-known-but-useful-Unix-commands)
|
||||
[appeared](http://www.quora.com/What-are-the-most-useful-Swiss-army-knife-one-liners-on-Unix)
|
||||
on [Quora](http://www.quora.com/What-are-some-time-saving-tips-that-every-Linux-user-should-know),
|
||||
but given the interest there, it seems it's worth using Github, where people more talented than I can readily suggest improvements. If you see an error or something that could be better, please submit an issue or PR!
|
||||
|
||||
Scope:
|
||||
|
||||
- The goals are breadth and brevity. Every tip is essential in some situation or significantly saves time over alternatives.
|
||||
- This is written for Linux. Many but not all items apply equally to MacOS (or even Cygwin).
|
||||
- The focus is on interactive Bash, though many tips apply to other shells and to general Bash scripting.
|
||||
- Descriptions are intentionally minimal, with the expectation you'll use `man`, `apt-get`/`yum`/`dnf` to install, and Google for more background.
|
||||
|
||||
|
||||
## Basics
|
||||
|
||||
- Learn basic Bash. Actually, type `man bash` and at least skim the whole thing; it's pretty easy to follow and not that long. Alternate shells can be nice, but Bash is powerful and always available (learning *only* zsh, fish, etc., while tempting on your own laptop, restricts you in many situations, such as using existing servers).
|
||||
|
||||
- Learn at least one text-based editor well. Ideally Vim (`vi`), as there's really no competition for random editing in a terminal (even if you use Emacs, a big IDE, or a modern hipster editor most of the time).
|
||||
|
||||
- Learn about redirection of output and input using `>` and `<` and pipes using `|`. Learn about stdout and stderr.
|
||||
|
||||
- Learn about file glob expansion with `*` (and perhaps `?` and `{`...`}`) and quoting and the difference between double `"` and single `'` quotes. (See more on variable expansion below.)
|
||||
|
||||
- Be familiar with Bash job management: `&`, **ctrl-z**, **ctrl-c**, `jobs`, `fg`, `bg`, `kill`, etc.
|
||||
|
||||
- Know `ssh`, and the basics of passwordless authentication, via `ssh-agent`, `ssh-add`, etc.
|
||||
|
||||
- Basic file management: `ls` and `ls -l` (in particular, learn what every column in `ls -l` means), `less`, `head`, `tail` and `tail -f` (or even better, `less +F`), `ln` and `ln -s` (learn the differences and advantages of hard versus soft links), `chown`, `chmod`, `du` (for a quick summary of disk usage: `du -sk *`), `df`, `mount`.
|
||||
|
||||
- Basic network management: `ip` or `ifconfig`, `dig`.
|
||||
|
||||
- Know regular expressions well, and the various flags to `grep`/`egrep`. The `-i`, `-o`, `-A`, and `-B` options are worth knowing.
|
||||
|
||||
- Learn to use `apt-get`, `yum`, or `dnf` (depending on distro) to find and install packages. And make sure you have `pip` to install Python-based command-line tools (a few below are easiest to install via `pip`).
|
||||
|
||||
|
||||
## Everyday use
|
||||
|
||||
- In Bash, use **ctrl-r** to search through command history.
|
||||
|
||||
- In Bash, use **ctrl-w** to delete the last word, and **ctrl-u** to delete the whole line. Use **alt-b** and **alt-f** to move by word, and **ctrl-k** to kill to the end of the line. See `man readline` for all the default keybindings in Bash. There are a lot. For example **alt-.** cycles through previous arguments, and **alt-*** expands a glob.
|
||||
|
||||
- To go back to the previous working directory: `cd -`
|
||||
|
||||
- If you are halfway through typing a command but change your mind, hit **alt-#** to add a `#` at the beginning and enter it as a comment (or use **ctrl-a**, **#**, **enter**). You can then return to it later via command history.
|
||||
|
||||
- Use `xargs` (or `parallel`). It's very powerful. Note you can control how many items execute per line (`-L`) as well as parallelism (`-P`). If you're not sure if it'll do the right thing, use `xargs echo` first. Also, `-I{}` is handy. Examples:
|
||||
```bash
|
||||
find . -name '*.py' | xargs grep some_function
|
||||
cat hosts | xargs -I{} ssh root@{} hostname
|
||||
```
|
||||
|
||||
- `pstree -p` is a helpful display of the process tree.
|
||||
|
||||
- Use `pgrep` and `pkill` to find or signal processes by name (`-f` is helpful).
|
||||
|
||||
- Know the various signals you can send processes. For example, to suspend a process, use `kill -STOP [pid]`. For the full list, see `man 7 signal`
|
||||
|
||||
- Use `nohup` or `disown` if you want a background process to keep running forever.
|
||||
|
||||
- Check what processes are listening via `netstat -lntp`.
|
||||
|
||||
- See also `lsof` for open sockets and files.
|
||||
|
||||
- In Bash scripts, use `set -x` for debugging output. Use strict modes whenever possible. Use `set -e` to abort on errors. Use `set -o pipefail` as well, to be strict about errors (though this topic is a bit subtle). For more involved scripts, also use `trap`.
|
||||
|
||||
- In Bash scripts, subshells (written with parentheses) are convenient ways to group commands. A common example is to temporarily move to a different working directory, e.g.
|
||||
```bash
|
||||
# do something in current dir
|
||||
(cd /some/other/dir; other-command)
|
||||
# continue in original dir
|
||||
```
|
||||
|
||||
- In Bash, note there are lots of kinds of variable expansion. Checking a variable exists: `${name:?error message}`. For example, if a Bash script requires a single argument, just write `input_file=${1:?usage: $0 input_file}`. Arithmetic expansion: `i=$(( (i + 1) % 5 ))`. Sequences: `{1..10}`. Trimming of strings: `${var%suffix}` and `${var#prefix}`. For example if `var=foo.pdf`, then `echo ${var%.pdf}.txt` prints `foo.txt`.
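A couple of these expansions in action (POSIX-sh compatible; expected output noted in comments):

```sh
var=archive.tar.gz
echo "${var#archive.}"                 # prints: tar.gz  (trim shortest prefix)
i=3; i=$(( (i + 1) % 5 )); echo "$i"   # prints: 4       (arithmetic expansion)
echo "${undefined_var:-fallback}"      # prints: fallback (default when unset)
```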
|
||||
|
||||
- The output of a command can be treated like a file via `<(some command)`. For example, compare local `/etc/hosts` with a remote one:
|
||||
```sh
|
||||
diff /etc/hosts <(ssh somehost cat /etc/hosts)
|
||||
```
|
||||
|
||||
- Know about "here documents" in Bash, as in `cat <<EOF ...`.
|
||||
|
||||
- In Bash, redirect both standard output and standard error via: `some-command >logfile 2>&1`. Often, to ensure a command does not leave an open file handle to standard input, tying it to the terminal you are in, it is also good practice to add `</dev/null`.
|
||||
|
||||
- Use `man ascii` for a good ASCII table, with hex and decimal values. For general encoding info, `man unicode`, `man utf-8`, and `man latin1` are helpful.
|
||||
|
||||
- Use `screen` or `tmux` to multiplex the screen, especially useful on remote ssh sessions and to detach and re-attach to a session. A more minimal alternative for session persistence only is `dtach`.
|
||||
|
||||
- In ssh, knowing how to port tunnel with `-L` or `-D` (and occasionally `-R`) is useful, e.g. to access web sites from a remote server.
|
||||
|
||||
- It can be useful to make a few optimizations to your ssh configuration; for example, this `~/.ssh/config` contains settings to avoid dropped connections in certain network environments, use compression (which is helpful with scp over low-bandwidth connections), and multiplex channels to the same server with a local control file:
|
||||
```
|
||||
TCPKeepAlive=yes
|
||||
ServerAliveInterval=15
|
||||
ServerAliveCountMax=6
|
||||
Compression=yes
|
||||
ControlMaster auto
|
||||
ControlPath /tmp/%r@%h:%p
|
||||
ControlPersist yes
|
||||
```
|
||||
|
||||
- A few other options relevant to ssh are security sensitive and should be enabled with care, e.g. per subnet or host or in trusted networks: `StrictHostKeyChecking=no`, `ForwardAgent=yes`
|
||||
|
||||
- To get the permissions on a file in octal form, which is useful for system configuration but not available in `ls` and easy to bungle, use something like
|
||||
```sh
|
||||
stat -c '%A %a %n' /etc/timezone
|
||||
```
|
||||
|
||||
- For interactive selection of values from the output of another command, use [`percol`](https://github.com/mooz/percol).
|
||||
|
||||
- For interaction with files based on the output of another command (like `git`), use `fpp` ([PathPicker](https://github.com/facebook/PathPicker)).
|
||||
|
||||
- For a simple web server for all files in the current directory (and subdirs), available to anyone on your network, use:
|
||||
`python -m SimpleHTTPServer 7777` (for port 7777 and Python 2).
|
||||
|
||||
|
||||
## Processing files and data
|
||||
|
||||
- To locate a file by name in the current directory, `find . -iname '*something*'` (or similar). To find a file anywhere by name, use `locate something` (but bear in mind `updatedb` may not have indexed recently created files).
|
||||
|
||||
- For general searching through source or data files (more advanced than `grep -r`), use [`ag`](https://github.com/ggreer/the_silver_searcher).
|
||||
|
||||
- To convert HTML to text: `lynx -dump -stdin`
|
||||
|
||||
- For Markdown, HTML, and all kinds of document conversion, try [`pandoc`](http://pandoc.org/).
|
||||
|
||||
- If you must handle XML, `xmlstarlet` is old but good.
|
||||
|
||||
- For JSON, use `jq`.
|
||||
|
||||
- For Excel or CSV files, [csvkit](https://github.com/onyxfish/csvkit) provides `in2csv`, `csvcut`, `csvjoin`, `csvgrep`, etc.
|
||||
|
||||
- For Amazon S3, [`s3cmd`](https://github.com/s3tools/s3cmd) is convenient and [`s4cmd`](https://github.com/bloomreach/s4cmd) is faster. Amazon's [`aws`](https://github.com/aws/aws-cli) is essential for other AWS-related tasks.
|
||||
|
||||
- Know about `sort` and `uniq`, including uniq's `-u` and `-d` options -- see one-liners below.
|
||||
|
||||
- Know about `cut`, `paste`, and `join` to manipulate text files. Many people use `cut` but forget about `join`.
|
||||
|
||||
- Know that locale affects a lot of command line tools in subtle ways, including sorting order (collation) and performance. Most Linux installations will set `LANG` or other locale variables to a local setting like US English. But be aware sorting will change if you change locale. And know i18n routines can make sort or other commands run *many times* slower. In some situations (such as the set operations or uniqueness operations below) you can safely ignore slow i18n routines entirely and use traditional byte-based sort order, using `export LC_ALL=C`.
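To see the difference concretely, byte order under `LC_ALL=C` sorts all uppercase letters before lowercase:

```sh
# C locale compares raw bytes: 'A' (65) < 'a' (97) < 'b' (98)
printf 'banana\nApple\napple\n' | LC_ALL=C sort   # prints: Apple, apple, banana
```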
|
||||
|
||||
- Know basic `awk` and `sed` for simple data munging. For example, summing all numbers in the third column of a text file: `awk '{ x += $3 } END { print x }'`. This is probably 3X faster and 3X shorter than equivalent Python.
|
||||
|
||||
- To replace all occurrences of a string in place, in one or more files:
|
||||
```sh
|
||||
perl -pi.bak -e 's/old-string/new-string/g' my-files-*.txt
|
||||
```
|
||||
|
||||
- To rename many files at once according to a pattern, use `rename`. For complex renames, [`repren`](https://github.com/jlevy/repren) may help.
|
||||
```sh
|
||||
# Recover backup files foo.bak -> foo:
|
||||
rename 's/\.bak$//' *.bak
|
||||
# Full rename of filenames, directories, and contents foo -> bar:
|
||||
repren --full --preserve-case --from foo --to bar .
|
||||
```
|
||||
|
||||
- Use `shuf` to shuffle or select random lines from a file.
|
||||
|
||||
- Know `sort`'s options. Know how keys work (`-t` and `-k`). In particular, watch out that you need to write `-k1,1` to sort by only the first field; `-k1` means sort according to the whole line.
|
||||
|
||||
- Stable sort (`sort -s`) can be useful. For example, to sort first by field 2, then secondarily by field 1, you can use `sort -k1,1 | sort -s -k2,2`
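A small illustration of the stable-sort behavior (GNU sort assumed for the non-stable case, where ties fall back to a whole-line comparison):

```sh
# Equal keys: without -s, the whole-line last-resort compare puts "1 a" first;
# with -s, the original input order ("1 b" first) is preserved.
printf '1 b\n1 a\n' | sort -k1,1
printf '1 b\n1 a\n' | sort -s -k1,1
```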
|
||||
|
||||
- If you ever need to write a tab literal in a command line in Bash (e.g. for the -t argument to sort), press **ctrl-v** **[Tab]** or write `$'\t'` (the latter is better as you can copy/paste it).
|
||||
|
||||
- For binary files, use `hd` for simple hex dumps and `bvi` for binary editing.
|
||||
|
||||
- Also for binary files, `strings` (plus `grep`, etc.) lets you find bits of text.
|
||||
|
||||
- To convert text encodings, try `iconv`. Or `uconv` for more advanced use; it supports some advanced Unicode things. For example, this command lowercases and removes all accents (by expanding and dropping them):
|
||||
```sh
|
||||
uconv -f utf-8 -t utf-8 -x '::Any-Lower; ::Any-NFD; [:Nonspacing Mark:] >; ::Any-NFC; ' < input.txt > output.txt
|
||||
```
|
||||
|
||||
- To split files into pieces, see `split` (to split by size) and `csplit` (to split by a pattern).
|
||||
|
||||
- Use `zless`, `zmore`, `zcat`, and `zgrep` to operate on compressed files.
|
||||
|
||||
|
||||
## System debugging
|
||||
|
||||
- For web debugging, `curl` and `curl -I` are handy, or their `wget` equivalents, or the more modern [`httpie`](https://github.com/jakubroztocil/httpie).
|
||||
|
||||
- To know disk/cpu/network status, use `iostat`, `netstat`, `top` (or the better `htop`), and (especially) `dstat`. Good for getting a quick idea of what's happening on a system.
|
||||
|
||||
- For a more in-depth system overview, use [`glances`](https://github.com/nicolargo/glances). It presents you with several system level statistics in one terminal window. Very helpful for quickly checking on various subsystems.
|
||||
|
||||
- To know memory status, run and understand the output of `free` and `vmstat`. In particular, be aware the "cached" value is memory held by the Linux kernel as file cache, so effectively counts toward the "free" value.
|
||||
|
||||
- Java system debugging is a different kettle of fish, but a simple trick on Oracle's and some other JVMs is that you can run `kill -3 <pid>` and a full stack trace and heap summary (including generational garbage collection details, which can be highly informative) will be dumped to stderr/logs.
|
||||
|
||||
- Use `mtr` as a better traceroute, to identify network issues.
|
||||
|
||||
- For looking at why a disk is full, `ncdu` saves time over the usual commands like `du -sh *`.
|
||||
|
||||
- To find which socket or process is using bandwidth, try `iftop` or `nethogs`.
|
||||
|
||||
- The `ab` tool (comes with Apache) is helpful for quick-and-dirty checking of web server performance. For more complex load testing, try `siege`.
|
||||
|
||||
- For more serious network debugging, `wireshark`, `tshark`, or `ngrep`.
|
||||
|
||||
- Know about `strace` and `ltrace`. These can be helpful if a program is failing, hanging, or crashing, and you don't know why, or if you want to get a general idea of performance. Note the profiling option (`-c`), and the ability to attach to a running process (`-p`).
|
||||
|
||||
- Know about `ldd` to check shared libraries etc.
|
||||
|
||||
- Know how to connect to a running process with `gdb` and get its stack traces.
|
||||
|
||||
- Use `/proc`. It's amazingly helpful sometimes when debugging live problems. Examples: `/proc/cpuinfo`, `/proc/xxx/cwd`, `/proc/xxx/exe`, `/proc/xxx/fd/`, `/proc/xxx/smaps`.
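For example (Linux-only; the block skips itself if `/proc` is absent):

```sh
if [ -r /proc/cpuinfo ]; then
    grep -c '^processor' /proc/cpuinfo   # number of logical CPUs
    readlink /proc/self/cwd              # working directory of the current process
else
    echo "/proc not available on this system"
fi
```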
|
||||
|
||||
- When debugging why something went wrong in the past, `sar` can be very helpful. It shows historic statistics on CPU, memory, network, etc.
|
||||
|
||||
- For deeper systems and performance analyses, look at `stap` ([SystemTap](https://sourceware.org/systemtap/wiki)), [`perf`](http://en.wikipedia.org/wiki/Perf_(Linux)), and [`sysdig`](https://github.com/draios/sysdig).
|
||||
|
||||
- Confirm what Linux distribution you're using (works on most distros): `lsb_release -a`
|
||||
|
||||
- Use `dmesg` whenever something's acting really funny (it could be hardware or driver issues).
|
||||
|
||||
|
||||
## One-liners
|
||||
|
||||
A few examples of piecing together commands:
|
||||
|
||||
- It is remarkably helpful sometimes that you can do set intersection, union, and difference of text files via `sort`/`uniq`. Suppose `a` and `b` are text files that are already uniqued. This is fast, and works on files of arbitrary size, up to many gigabytes. (Sort is not limited by memory, though you may need to use the `-T` option if `/tmp` is on a small root partition.) See also the note about `LC_ALL` above.
|
||||
```sh
|
||||
cat a b | sort | uniq > c # c is a union b
|
||||
cat a b | sort | uniq -d > c # c is a intersect b
|
||||
cat a b b | sort | uniq -u > c # c is set difference a - b
|
||||
```
|
||||
|
||||
- Summing all numbers in the third column of a text file (this is probably 3X faster and 3X less code than equivalent Python):
|
||||
```sh
|
||||
awk '{ x += $3 } END { print x }' myfile
|
||||
```
|
||||
|
||||
- If you want to see sizes/dates on a tree of files, this is like a recursive `ls -l` but is easier to read than `ls -lR`:
|
||||
```sh
|
||||
find . -type f -ls
|
||||
```
|
||||
|
||||
- Use `xargs` or `parallel` whenever you can. Note you can control how many items execute per line (`-L`) as well as parallelism (`-P`). If you're not sure if it'll do the right thing, use xargs echo first. Also, `-I{}` is handy. Examples:
|
||||
```sh
|
||||
find . -name '*.py' | xargs grep some_function
|
||||
cat hosts | xargs -I{} ssh root@{} hostname
|
||||
```
|
||||
|
||||
- Say you have a text file, like a web server log, and a certain value that appears on some lines, such as an `acct_id` parameter that is present in the URL. If you want a tally of how many requests for each `acct_id`:
|
||||
```sh
|
||||
cat access.log | egrep -o 'acct_id=[0-9]+' | cut -d= -f2 | sort | uniq -c | sort -rn
|
||||
```
|
||||
|
||||
- Run this function to get a random tip from this document (parses Markdown and extracts an item):
|
||||
```sh
|
||||
function taocl() {
|
||||
curl -s https://raw.githubusercontent.com/jlevy/the-art-of-command-line/master/README.md |
|
||||
pandoc -f markdown -t html |
|
||||
xmlstarlet fo --html --dropdtd |
|
||||
xmlstarlet sel -t -v "(html/body/ul/li[count(p)>0])[$RANDOM mod last()+1]" |
|
||||
xmlstarlet unesc | fmt -80
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## Obscure but useful
|
||||
|
||||
- `expr`: perform arithmetic or boolean operations or evaluate regular expressions
|
||||
|
||||
- `m4`: simple macro processor
|
||||
|
||||
- `screen`: powerful terminal multiplexing and session persistence
|
||||
|
||||
- `yes`: print a string a lot
|
||||
|
||||
- `cal`: nice calendar
|
||||
|
||||
- `env`: run a command (useful in scripts)
|
||||
|
||||
- `look`: find English words (or lines in a file) beginning with a string
|
||||
|
||||
- `cut`, `paste`, and `join`: data manipulation
|
||||
|
||||
- `fmt`: format text paragraphs
|
||||
|
||||
- `pr`: format text into pages/columns
|
||||
|
||||
- `fold`: wrap lines of text
|
||||
|
||||
- `column`: format text into columns or tables
|
||||
|
||||
- `expand` and `unexpand`: convert between tabs and spaces
|
||||
|
||||
- `nl`: add line numbers
|
||||
|
||||
- `seq`: print numbers
|
||||
|
||||
- `bc`: calculator
|
||||
|
||||
- `factor`: factor integers
|
||||
|
||||
- `gpg`: encrypt and sign files
|
||||
|
||||
- `toe`: table of terminfo entries
|
||||
|
||||
- `nc`: network debugging and data transfer
|
||||
|
||||
- `ngrep`: grep for the network layer
|
||||
|
||||
- `dd`: moving data between files or devices
|
||||
|
||||
- `file`: identify type of a file
|
||||
|
||||
- `stat`: file info
|
||||
|
||||
- `tac`: print files in reverse
|
||||
|
||||
- `shuf`: random selection of lines from a file
|
||||
|
||||
- `comm`: compare sorted files line by line
|
||||
|
||||
- `hd` and `bvi`: dump or edit binary files
|
||||
|
||||
- `strings`: extract text from binary files
|
||||
|
||||
- `tr`: character translation or manipulation
|
||||
|
||||
- `iconv` or `uconv`: conversion for text encodings
|
||||
|
||||
- `split` and `csplit`: splitting files
|
||||
|
||||
- `7z`: high-ratio file compression
|
||||
|
||||
- `ldd`: dynamic library info
|
||||
|
||||
- `nm`: symbols from object files
|
||||
|
||||
- `ab`: benchmarking web servers
|
||||
|
||||
- `strace`: system call debugging
|
||||
|
||||
- `mtr`: better traceroute for network debugging
|
||||
|
||||
- `cssh`: visual concurrent shell
|
||||
|
||||
- `wireshark` and `tshark`: packet capture and network debugging
|
||||
|
||||
- `host` and `dig`: DNS lookups
|
||||
|
||||
- `lsof`: process file descriptor and socket info
|
||||
|
||||
- `dstat`: useful system stats
|
||||
|
||||
- [`glances`](https://github.com/nicolargo/glances): high level, multi-subsystem overview
|
||||
|
||||
- `iostat`: CPU and disk usage stats
|
||||
|
||||
- `htop`: improved version of top
|
||||
|
||||
- `last`: login history
|
||||
|
||||
- `w`: who's logged on
|
||||
|
||||
- `id`: user/group identity info
|
||||
|
||||
- `sar`: historic system stats
|
||||
|
||||
- `iftop` or `nethogs`: network utilization by socket or process
|
||||
|
||||
- `ss`: socket statistics
|
||||
|
||||
- `dmesg`: boot and system error messages
|
||||
|
||||
- `hdparm`: SATA/ATA disk manipulation/performance
|
||||
|
||||
- `lsb_release`: Linux distribution info
|
||||
|
||||
- `lshw`: hardware information
|
||||
|
||||
- `fortune`, `ddate`, and `sl`: um, well, it depends on whether you consider steam locomotives and Zippy quotations "useful"
|
||||
|
||||
|
||||
## More resources
|
||||
|
||||
- [awesome-shell](https://github.com/alebcay/awesome-shell): A curated list of shell tools and resources.
|
||||
- [Strict mode](http://redsymbol.net/articles/unofficial-bash-strict-mode/) for writing better shell scripts.
|
||||
|
||||
|
||||
## Disclaimer
|
||||
|
||||
With the exception of very small tasks, code is written so others can read it. With power comes responsibility. The fact you *can* do something in Bash doesn't necessarily mean you should! ;)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://github.com/jlevy/the-art-of-command-line
|
||||
|
||||
作者:[jlevy][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/jlevy
|
||||
@ -1,265 +0,0 @@
|
||||
translating by NearTan
|
||||
|
||||
How to Setup Node.JS on Ubuntu 15.04 with Different Methods
|
||||
================================================================================
|
||||
This article guides you through the installation and setup of Node.js on Ubuntu 15.04. Node.js is a server-side JavaScript runtime that provides bindings for sockets and streams, delivering high throughput with non-blocking I/O in a single-threaded event loop. It is also a platform layer for interacting with the operating system: writing files, reading files, and performing networking operations. In this article we will walk through several different methods of setting up Node.js on an Ubuntu 15.04 server.
|
||||
|
||||
### Methods to Install Node.JS ###
|
||||
|
||||
There are several different ways to install Node.js, and you can choose whichever one you prefer. Below are the methods we will walk through on Ubuntu 15.04; make sure to remove any previously installed Node.js packages first to avoid package conflicts.
|
||||
|
||||
- Installation of Node.JS from Source Code
|
||||
- Installation of Node.JS from Package Manager
|
||||
- Installation of Node.JS from Github Repository
|
||||
- Installation of Node.JS with NVM
|
||||
|
||||
### 1) Installing from Source Code ###
|
||||
|
||||
Let's start with installing Node.js from source code. Make sure your system is up to date and the build dependencies are installed, then follow the steps below to start the setup.
|
||||
|
||||
#### STEP 1: System Update ####
|
||||
|
||||
Use the following commands to update the operating system and then install the packages required for the Node.js setup.
|
||||
|
||||
root@ubuntu-15:~# apt-get update
|
||||
|
||||
root@ubuntu-15:~# apt-get install python gcc make g++
|
||||
|
||||
#### STEP 2: Get Source Code of Node.JS ####
|
||||
|
||||
After installing the dependencies, download the source code package from the official Node.js website and extract it, as shown in the following commands.
|
||||
|
||||
root@ubuntu-15:~# wget http://nodejs.org/dist/v0.12.4/node-v0.12.4.tar.gz
|
||||
root@ubuntu-15:~# tar zxvf node-v0.12.4.tar.gz
|
||||
|
||||
#### STEP 3: Starting Installation ####
|
||||
|
||||
Now move into the source code directory, run the configure script, and then build and install:
|
||||
|
||||

|
||||
|
||||
root@ubuntu-15:~# ls
|
||||
node-v0.12.4 node-v0.12.4.tar.gz
|
||||
root@ubuntu-15:~# cd node-v0.12.4/
|
||||
root@ubuntu-15:~/node-v0.12.4# ./configure
|
||||
root@ubuntu-15:~/node-v0.12.4# make install
|
||||
|
||||
### Testing after Installation of Node.Js from Source Code ###
|
||||
|
||||
Once the installation is done after running make installa command, we check its version and test that Node.Js is working fine by executing some test out puts.
|
||||
|
||||
root@ubuntu-15:~/node-v0.12.4# node -v
|
||||
v0.12.4
|
||||
|
||||

|
||||
|
||||
Create a new file with .js extension execute it with Node command.
|
||||
|
||||
root@ubuntu-15:~/node-v0.12.4# touch helo_test.js
|
||||
root@ubuntu-15:~/node-v0.12.4# vim helo_test.js
|
||||
console.log('Hello World');
|
||||
|
||||
Now execute it with Node command.
|
||||
|
||||
root@ubuntu-15:~/node-v0.12.4# node helo_test.js
|
||||
Hello World
|
||||
|
||||
The out put shows that we had successfully installed Node.JS on Ubuntu 15.04 which is now ready to create applications on it using java scripts.

### 2) Installing from Package Manager ###

Installing Node.js through the package manager is also a very simple way to set it up on Ubuntu, using NodeSource by adding its Personal Package Archive (PPA).

We will go through the following steps to install Node.js through the PPA.

#### STEP 1: Get the Source Code using Curl ####

Before fetching the setup script with the curl command, we must update the system, then run curl to fetch the NodeSource setup into the local repository.

    root@ubuntu-15:~# apt-get update
    root@ubuntu-15:~# curl -sL https://deb.nodesource.com/setup | sudo bash -

The curl command will execute the following tasks.

    ## Installing the NodeSource Node.js 0.10 repo...
    ## Populating apt-get cache...
    ## Confirming "vivid" is supported...
    ## Adding the NodeSource signing key to your keyring...
    ## Creating apt sources list file for the NodeSource Node.js 0.10 repo...
    ## Running `apt-get update` for you...
    Fetched 6,411 B in 5s (1,077 B/s)
    Reading package lists... Done
    ## Run `apt-get install nodejs` (as root) to install Node.js 0.10 and npm

#### STEP 2: Installation of NodeJS and NPM ####

As shown in the output of curl, we have fetched the NodeSource repository locally; now we will install the NodeJS and NPM packages with apt-get.

    root@ubuntu-15:~# apt-get install nodejs

![NodeJS Install](http://blog.linoxide.com/wp-content/uploads/2015/06/31.png)

#### STEP 3: Installing Build Essentials Tool ####

To compile and install native addons from npm, we may also need to install the build tools with the following command.

    root@ubuntu-15:~# apt-get install -y build-essential

### Testing With Node.JS Shell ###

Testing Node.js here is similar to what we did after installing from source code. Let's start the node shell as follows and check its output to see whether Node.js is fully functional.

    root@ubuntu-15:~# node
    > console.log('Node.js Installed Using Package Manager');
    Node.js Installed Using Package Manager

----------

    root@ubuntu-15:~# node
    > a = [1,2,3,4,5]
    [ 1, 2, 3, 4, 5 ]
    > typeof a
    'object'
    > 5 + 2
    7
    >
    (^C again to quit)
    >
    root@ubuntu-15:~#
### REPL For Your NodeJS Apps ###

REPL is the Node.js shell; any valid JavaScript that can be written in a script can also be passed to the REPL. Let's see how the REPL works with NodeJS.

    root@ubuntu-15:~# node
    > var repl = require("repl");
    undefined
    > repl.start("> ");

Press Enter and it will show output like this:

    > { domain: null,
    _events: {},
    _maxListeners: 10,
    useGlobal: false,
    ignoreUndefined: false,
    eval: [Function],
    inputStream:
    { _connecting: false,
    _handle:
    { fd: 0,
    writeQueueSize: 0,
    owner: [Circular],
    onread: [Function: onread],
    reading: true },
    _readableState:
    { highWaterMark: 0,
    buffer: [],
    length: 0,
    pipes: null,
    ...
    ...

Here is the list of command line help that we can use to work with the REPL.

![REPL Manual](http://blog.linoxide.com/wp-content/uploads/2015/06/4.png)

### Working with the NodeJS Package Manager ###

NPM is a simple command line tool for installing and managing Node.js packages; it installs and manages an application's dependencies through the package.json file. We will start with its init command as follows.

    root@ubuntu-15:~# npm init

![NPM starting](http://blog.linoxide.com/wp-content/uploads/2015/06/51.png)
### 3) Installing from Github Repository ###

In this method we will follow a few simple steps to clone the Node.js repository directly from Github.

Before starting the configuration from the cloned Node.js package, make sure that the following dependent packages are installed.

    root@ubuntu-15:~# apt-get install g++ curl make libssl-dev apache2-utils git-core

Now start the clone with the git command and change into its directory.

    root@ubuntu-15:~# git clone git://github.com/ry/node.git
    root@ubuntu-15:~# cd node/

![Git Clone](http://blog.linoxide.com/wp-content/uploads/2015/06/61.png)

To install from the cloned repository, let's run the configure script and then make install to complete the Node.js setup.

    root@ubuntu-15:~# ./configure

![Configure](http://blog.linoxide.com/wp-content/uploads/2015/06/7.png)

Run the make install command; it takes a few minutes, so be patient while it finalizes the Node.js setup for you.

    root@ubuntu-15:~/node# make install

    root@ubuntu-15:~/node# node -v
    v0.13.0-pre

### Testing with Node.JS ###

    root@ubuntu-15:~/node# node
    > a = [1,2,3,4,5,6,7]
    [ 1, 2, 3, 4, 5, 6, 7 ]
    > typeof a
    'object'
    > 6 + 5
    11
    >
    (^C again to quit)
    >
    root@ubuntu-15:~/node#
### 4) Installing Node.JS Using Node Version Manager ###

In this last method we will show you how to install Node.js with NVM. This is one of the best methods to install and configure Node.js, as it lets us control which versions are installed and active.

Before starting with this method, make sure that any previously installed package has been removed if you are working on the same machine.

#### STEP 1: Installing Prerequisites ####

Let's start with the system update, then install the following prerequisites on the Ubuntu server for setting up Node.js with the Node Version Manager. Using the curl command, we fetch the NVM install script from its git repository.

    root@ubuntu-15:~# apt-get install build-essential libssl-dev
    root@ubuntu-15:~# curl https://raw.githubusercontent.com/creationix/nvm/v0.16.1/install.sh | sh

![NVM Curl](http://blog.linoxide.com/wp-content/uploads/2015/06/81.png)

#### STEP 2: Update Home Environment ####

After the curl command downloads the necessary NVM files into the user's home directory, it also makes changes to the bash profile. So, in order to pick up those changes and get access to NVM, we must either log in to the terminal again or reload the profile with the source command as below.

    root@ubuntu-15:~# source ~/.profile

Now we can use NVM to choose the default Node.js version, or switch to a previously installed version, using the following commands.

    root@ubuntu-15:~# nvm ls
    root@ubuntu-15:~# nvm alias default 0.12.4

![NVM](http://blog.linoxide.com/wp-content/uploads/2015/06/91.png)

#### STEP 3: Using NodeJS Versions Manager ####

We have successfully set up Node.js with the NVM package and can now use its handy commands.

![Node NVM](http://blog.linoxide.com/wp-content/uploads/2015/06/101.png)

### CONCLUSION ###

We are now ready to build server-side applications with Node.js. We covered four optional installation methods on the latest Ubuntu 15.04, in order of preference, so you are free to choose the setup that suits you best and start coding with NodeJS.

--------------------------------------------------------------------------------

via: http://linoxide.com/ubuntu-how-to/setup-node-js-ubuntu-15-04-different-methods/

作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/kashifs/
@ -0,0 +1,348 @@

Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management
================================================================================
**Shilpa Nair has just graduated in the year 2015. She went to apply for a Trainee position at a National News Television network located in Noida, Delhi. When she was in the last year of graduation and searching for help on her assignments, she came across Tecmint. Since then she has been visiting Tecmint regularly.**

![Linux Interview Questions on RPM](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Interview-Questions-on-RPM.jpeg)

Linux Interview Questions on RPM

All the questions and answers are rewritten based upon the memory of Shilpa Nair.

> “Hi friends! I am Shilpa Nair from Delhi. I completed my graduation very recently and was hunting for a Trainee role soon after my degree. I have had a passion for UNIX since my early days in college, and I was looking for a role that suits me and satisfies my soul. I was asked a lot of questions, and most of them were basic questions related to RedHat Package Management.”

Here are the questions I was asked and their corresponding answers. I am posting only the questions related to RedHat GNU/Linux Package Management, as they were the ones mainly asked.

### 1. How will you find if a package is installed or not? Say you have to find if ‘nano’ is installed or not, what will you do? ###

> **Answer** : To find whether the package nano is installed or not, we can use the rpm command with the options -q (query) and -a (all installed packages).
>
>     # rpm -qa nano
>     OR
>     # rpm -qa | grep -i nano
>
>     nano-2.3.1-10.el7.x86_64
>
> Also the package name must be complete; an incomplete package name will return the prompt without printing anything, which means that the package (with that incomplete name) is not installed. This can be understood easily from the example below:
>
> We generally substitute the vim command with vi. But if we search for the package vi/vim we will get no result on the standard output.
>
>     # vi
>     # vim
>
> However, we can clearly see that the package is installed by firing the vi/vim command. The culprit here is the incomplete package name. If we are not sure of the exact name we can use a wildcard:
>
>     # rpm -qa vim*
>
>     vim-minimal-7.4.160-1.el7.x86_64
>
> This way we can find information about any package, whether installed or not.
### 2. How will you install a package XYZ using rpm? ###

> **Answer** : We can install any package (*.rpm) using the rpm command as shown below, with the options -i (install), -v (verbose, display additional information) and -h (print hash marks during package installation).
>
>     # rpm -ivh peazip-1.11-1.el6.rf.x86_64.rpm
>
>     Preparing... ################################# [100%]
>     Updating / installing...
>     1:peazip-1.11-1.el6.rf ################################# [100%]
>
> If upgrading a package from an earlier version, the -U switch should be used, with options -v and -h following to make sure we get verbose output along with hash marks, which makes it readable.

### 3. You have installed a package (say httpd) and now you want to see all the files and directories installed and created by the above package. What will you do? ###

> **Answer** : We can list all the files (Linux treats everything as a file, including directories) installed by the package httpd using the options -l (list all the files) and -q (query).
>
>     # rpm -ql httpd
>
>     /etc/httpd
>     /etc/httpd/conf
>     /etc/httpd/conf.d
>     ...

### 4. You are supposed to remove a package, say postfix. What will you do? ###

> **Answer** : First we need to find the full name of the installed package that provides postfix, and then remove it using the options -e (erase/uninstall a package) and -v (verbose output).
>
>     # rpm -qa postfix*
>
>     postfix-2.10.1-6.el7.x86_64
>
> and then remove postfix as:
>
>     # rpm -ev postfix-2.10.1-6.el7.x86_64
>
>     Preparing packages...
>     postfix-2:3.0.1-2.fc22.x86_64

### 5. Get detailed information about an installed package, i.e., information like Version, Release, Install Date, Size, Summary and a brief description. ###

> **Answer** : We can get detailed information about an installed package by using the option -qi with rpm, followed by the package name.
>
> For example, to find details of the package openssh, all I need to do is:
>
>     # rpm -qi openssh
>
>     [root@tecmint tecmint]# rpm -qi openssh
>     Name : openssh
>     Version : 6.8p1
>     Release : 5.fc22
>     Architecture: x86_64
>     Install Date: Thursday 28 May 2015 12:34:50 PM IST
>     Group : Applications/Internet
>     Size : 1542057
>     License : BSD
>     ....
### 6. You are not sure about the configuration files provided by a specific package, say httpd. How will you find the list of all the configuration files provided by httpd and their locations? ###

> **Answer** : We need to run the option -c followed by the package name with the rpm command, and it will list the names of all the configuration files and their locations.
>
>     # rpm -qc httpd
>
>     /etc/httpd/conf.d/autoindex.conf
>     /etc/httpd/conf.d/userdir.conf
>     /etc/httpd/conf.d/welcome.conf
>     /etc/httpd/conf.modules.d/00-base.conf
>     /etc/httpd/conf/httpd.conf
>     /etc/sysconfig/httpd
>
> Similarly we can list all the associated document files as:
>
>     # rpm -qd httpd
>
>     /usr/share/doc/httpd/ABOUT_APACHE
>     /usr/share/doc/httpd/CHANGES
>     /usr/share/doc/httpd/LICENSE
>     ...
>
> Also, we can list the associated License file as:
>
>     # rpm -qL openssh
>
>     /usr/share/licenses/openssh/LICENCE
>
> Not to mention that the options -d and -L in the above commands stand for ‘documents‘ and ‘License‘, respectively.

### 7. You came across a configuration file located at ‘/usr/share/alsa/cards/AACI.conf’ and you are not sure which package this configuration file is associated with. How will you find out the parent package name? ###

> **Answer** : When a package is installed, the relevant information gets stored in the rpm database. So it is easy to trace which package provides the above file using the option -qf (-f queries the package owning a file).
>
>     # rpm -qf /usr/share/alsa/cards/AACI.conf
>     alsa-lib-1.0.28-2.el7.x86_64
>
> Similarly, we can find (what provides) information about any sub-package, document file or License file.

### 8. How will you find the list of recently installed software using rpm? ###

> **Answer** : As said earlier, everything that is installed is logged in the rpm database. So it is not difficult to query the rpm database and find the list of recently installed software.
>
> We can do this by running the below command using the option --last (prints the most recently installed software).
>
>     # rpm -qa --last
>
> The above command will print all the installed packages in an order such that the last installed software appears at the top.
>
> If our concern is to find out a specific package, we can grep for that package (say sqlite) from the list, simply as:
>
>     # rpm -qa --last | grep -i sqlite
>
>     sqlite-3.8.10.2-1.fc22.x86_64 Thursday 18 June 2015 05:05:43 PM IST
>
> We can also get a list of the 10 most recently installed packages simply as:
>
>     # rpm -qa --last | head
>
> We can refine the result to output a more custom result simply as:
>
>     # rpm -qa --last | head -n 2
>
> In the above command, -n is followed by a numeric value. The above command prints the 2 most recently installed packages.
### 9. Before installing a package, you are supposed to check its dependencies. What will you do? ###

> **Answer** : To check the dependencies of an rpm package (XYZ.rpm), we can use the switches -q (query package), -p (query a package file) and -R (requires, i.e., list the packages on which this package depends).
>
>     # rpm -qpR gedit-3.16.1-1.fc22.i686.rpm
>
>     /bin/sh
>     /usr/bin/env
>     glib2(x86-32) >= 2.40.0
>     gsettings-desktop-schemas
>     gtk3(x86-32) >= 3.16
>     gtksourceview3(x86-32) >= 3.16
>     gvfs
>     libX11.so.6
>     ...

### 10. Is rpm a front-end Package Management Tool? ###

> **Answer** : No! rpm is a back-end package manager for RPM-based Linux distributions.
>
> [YUM][1], which stands for Yellowdog Updater Modified, is the front-end for rpm. YUM automates the overall process of resolving dependencies and everything else.
>
> Very recently [DNF][2] (Dandified YUM) replaced YUM in Fedora 22. Though YUM is still available to be used in RHEL and CentOS, we can install dnf and use it alongside YUM. DNF is said to have a lot of improvements over YUM.
>
> Good to know, you keep yourself updated. Let's move to the front-end part.

### 11. How will you list all the enabled repositories on a system? ###

> **Answer** : We can list all the enabled repos on a system simply using the following commands.
>
>     # yum repolist
>     or
>     # dnf repolist
>
>     Last metadata expiration check performed 0:30:03 ago on Mon Jun 22 16:50:00 2015.
>     repo id repo name status
>     *fedora Fedora 22 - x86_64 44,762
>     ozonos Repository for Ozon OS 61
>     *updates Fedora 22 - x86_64 - Updates
>
> The above command will only list those repos that are enabled. If we need to list all the repos, enabled or not, we can do:
>
>     # yum repolist all
>     or
>     # dnf repolist all
>
>     Last metadata expiration check performed 0:29:45 ago on Mon Jun 22 16:50:00 2015.
>     repo id repo name status
>     *fedora Fedora 22 - x86_64 enabled: 44,762
>     fedora-debuginfo Fedora 22 - x86_64 - Debug disabled
>     fedora-source Fedora 22 - Source disabled
>     ozonos Repository for Ozon OS enabled: 61
>     *updates Fedora 22 - x86_64 - Updates enabled: 5,018
>     updates-debuginfo Fedora 22 - x86_64 - Updates - Debug
### 12. How will you list all the available and installed packages on a system? ###

> **Answer** : To list all the available packages on a system, we can do:
>
>     # yum list available
>     or
>     # dnf list available
>
>     Last metadata expiration check performed 0:34:09 ago on Mon Jun 22 16:50:00 2015.
>     Available Packages
>     0ad.x86_64 0.0.18-1.fc22 fedora
>     0ad-data.noarch 0.0.18-1.fc22 fedora
>     0install.x86_64 2.6.1-2.fc21 fedora
>     0xFFFF.x86_64 0.3.9-11.fc22 fedora
>     2048-cli.x86_64 0.9-4.git20141214.723738c.fc22 fedora
>     2048-cli-nocurses.x86_64 0.9-4.git20141214.723738c.fc22 fedora
>     ....
>
> To list all the installed packages on a system, we can do:
>
>     # yum list installed
>     or
>     # dnf list installed
>
>     Last metadata expiration check performed 0:34:30 ago on Mon Jun 22 16:50:00 2015.
>     Installed Packages
>     GeoIP.x86_64 1.6.5-1.fc22 @System
>     GeoIP-GeoLite-data.noarch 2015.05-1.fc22 @System
>     NetworkManager.x86_64 1:1.0.2-1.fc22 @System
>     NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System
>     aajohan-comfortaa-fonts.noarch 2.004-4.fc22 @System
>     ....
>
> To list all the available and installed packages on a system, we can do:
>
>     # yum list
>     or
>     # dnf list
>
>     Last metadata expiration check performed 0:32:56 ago on Mon Jun 22 16:50:00 2015.
>     Installed Packages
>     GeoIP.x86_64 1.6.5-1.fc22 @System
>     GeoIP-GeoLite-data.noarch 2015.05-1.fc22 @System
>     NetworkManager.x86_64 1:1.0.2-1.fc22 @System
>     NetworkManager-libnm.x86_64 1:1.0.2-1.fc22 @System
>     aajohan-comfortaa-fonts.noarch 2.004-4.fc22 @System
>     acl.x86_64 2.2.52-7.fc22 @System
>     ....

### 13. How will you install and update a package and a group of packages separately on a system using YUM/DNF? ###

> **Answer** : To install a package (say nano), we can do:
>
>     # yum install nano
>
> To install a group of packages (say Haskell), we can do:
>
>     # yum groupinstall 'haskell'
>
> To update a package (say nano), we can do:
>
>     # yum update nano
>
> To update a group of packages (say Haskell), we can do:
>
>     # yum groupupdate 'haskell'

### 14. How will you SYNC all the installed packages on a system to the stable release? ###

> **Answer** : We can sync all the packages on a system (say CentOS or Fedora) to the stable release as:
>
>     # yum distro-sync [On CentOS/RHEL]
>     or
>     # dnf distro-sync [On Fedora 20 Onwards]
Seems you have done good homework before coming for the interview. Good! Before proceeding further I just want to ask one more question.

### 15. Are you familiar with YUM local repositories? Have you tried making a local YUM repository? Let me know in brief what you would do to create a local YUM repo. ###

> **Answer** : First I would like to thank you, Sir, for the appreciation. Coming to the question, I must admit that I am quite familiar with local YUM repositories, and I have already implemented one for testing purposes on my local machine.
>
> 1. To set up a local YUM repository, we need to install the below three packages:
>
>     # yum install deltarpm python-deltarpm createrepo
>
> 2. Create a directory (say /home/$USER/rpm) and copy all the RPMs from the RedHat/CentOS DVD to that folder.
>
>     # mkdir /home/$USER/rpm
>     # cp /path/to/rpm/on/DVD/*.rpm /home/$USER/rpm
>
> 3. Create the base repository headers as:
>
>     # createrepo -v /home/$USER/rpm
>
> 4. Create the .repo file (say abc.repo) at the location /etc/yum.repos.d, simply as:
>
>     cd /etc/yum.repos.d && cat << EOF > abc.repo
>     [local-installation]
>     name=yum-local
>     baseurl=file:///home/$USER/rpm
>     enabled=1
>     gpgcheck=0
>     EOF

**Important**: Make sure to replace $USER with the actual user name.

That’s all we need to do to create a local YUM repository. We can now install applications from it, which is relatively fast and secure, and most importantly doesn’t need an Internet connection.

Okay! It was nice interviewing you. I am done. I am going to suggest your name to HR. You are a young and brilliant candidate we would like to have in our organization. If you have any questions, you may ask me.

**Me**: Sir, it was really a very nice interview and I feel very lucky today to have cracked it.

Obviously it didn’t end here. I asked a lot of questions, such as about the projects they are handling, what my role and responsibilities would be, and so on.

Friends, by the time all this was documented I had been called for the HR round, which is 3 days from now. I hope I do my best there as well. All your blessings will count.

Thank you, friends and Tecmint, for taking the time to document my experience. Mates, I believe Tecmint is doing something really extraordinary which must be praised. When we share our experiences with others, others get to know many things from us and we get to know our mistakes.

It enhances our confidence level. If you have given any such interview recently, don’t keep it to yourself. Spread it! Let all of us know it. You may use the below form to share your experience with us.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/linux-rpm-package-management-interview-questions/

作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/
@ -1,81 +0,0 @@

PHP 20岁了:从兴趣项目到网络的强大力量
=============================================================================
![](http://core0.staticworld.net/images/article/2015/06/php_anniversary-100590256-primary.idge.jpg)

图片来源:[Steve Jurvetson via Flickr][1]

> 曾经的“丑小鸭项目”如今已成为网络世界的一股强大力量,这要归功于它的灵活、实用与充满活力的社区

当 Rasmus Lerdorf 发布“[一个用C写的紧凑的CGI可执行程序集合][2]”时,他没有想到他的创造会对网络的发展产生多大的影响。今年在 Miami 举行的 SunshinePHP 大会上,Lerdorf 做了开场演讲,他自嘲道:“在1995年的时候,我以为我已经在网络之上解除了C API的束缚。显然,事情并非那样,否则我们现在都还是C程序员。”

实际上,当 Lerdorf 发布个人主页工具(即后来以PHP闻名的1.0版本)时,网络还非常年轻。HTML 2.0 直到那年的十一月份才公布,而 HTTP/1.0 则是次年五月份才出现。NCSA HTTPd 是当时使用最广泛的网络服务器,而网景的 Navigator 则是最流行的网络浏览器,IE 1.0 要到八月份才会到来。换句话说,PHP的开端刚好撞上了浏览器战争的前夜。

从早期开始,人们就谈论了很多PHP对网络发展的影响。回到那个时候,说到服务器端的网络软件,我们的选择是有限的。PHP满足了我们对这样一个工具的需求:它可以让我们在网络上做一些动态的事情。它实用的灵活性抓住了我们的想象力,PHP从那时起便与网络共同成长。如今,PHP占据了[网络服务器端语言超过80%的份额][3],已经是一门成熟的脚本语言,特别适合解决网络问题。她独一无二的血统讲述了一个故事:实用高于理论,解决问题高于纯粹性。

### 把我们钩住的网络魔力 ###

PHP一开始并不是一门编程语言,从她的设计上就很明显——或者如那些贬低者所指出的,她本来就缺乏设计。最初,她是一种帮助网络开发者接入底层C语言封装库的API。第一个版本是一组小的CGI可执行程序,提供表单处理功能,可以访问请求参数和mSQL数据库。她让处理网络应用的数据库变得如此容易,这对激发我们对PHP的兴趣、以及造就PHP后来的统治地位,起到了关键作用。

到了第二版(即 PHP/FI),数据库的支持已经扩展到包括 PostgreSQL、MySQL、Oracle、Sybase 等等。她通过将相应的C语言库封装为PHP库的一部分来支持这些数据库。PHP/FI 也封装了GD库,用于创建并管理GIF图像。她可以作为一个Apache模块运行,或者编译进FastCGI支持,并且她引入了支持变量、数组、语言结构和函数的PHP脚本语言。对于那个时候大多数从事网络工作的人来说,PHP是我们一直在寻找的那款“胶水”。

当PHP吸纳越来越多编程语言的功能、演变为第三版和之后的版本时,她从来没有失去这种“胶水”的特性。通过像PECL(PHP Extension Community Library)这样的仓库,PHP可以把各种库连接在一起,把它们的函数暴露给PHP层。这种将组件结合在一起的能力,成为PHP之美的一个重要方面,尽管她并不局限于其源代码。

### 网络,一个码农们的社区 ###

PHP在网络发展上的持续影响并不局限于能用这种语言做什么。PHP如何完成工作,谁参与进来——这些都是PHP传奇的重要部分。

早在1997年,PHP的用户群体开始形成。其中最早的是美国中西部PHP用户组(后来以 Chicago PHP 闻名),并于[1997年2月举行了第一次聚会][4]。这是一个充满生气、饱含激情的开发者社区形成的开端,他们因一种有吸引力的工具而聚集——网络上几乎无处不在的PHP可以帮助他们解决问题。PHP这种无处不在的特性使得她成为网络开发的一个很自然的选择;她在共享主机上的盛行,以及低门槛,对许多早期的网络开发者来说都十分有吸引力。

伴随着社区的成长,开发者们获得了一堆工具和资源。2000年——PHP的一个转折点——见证了第一次PHP开发者大会,这门语言的核心开发者们在 Tel Aviv 见面,讨论即将到来的4.0版本的发布。PHP扩展和应用仓库(PEAR)也于2000年发起,提供遵循标准和最佳实践的高质量用户代码包。第一届PHP大会 PHP Kongress 不久之后在德国举行。[PHPDeveloper.org][5] 也随后上线,直到今天,它依然是PHP社区里最权威的新闻资源。

这种社区的势头证明了接下来几年里PHP成长的关键所在。随着网络开发产业的爆发,PHP也获得了发展。PHP开始为更多、更大的网站提供动力。越来越多的用户组在世界范围内形成。邮件列表、在线论坛、IRC、大会,以及 php|architect、德国PHP杂志、国际PHP杂志等行业期刊——PHP社区的活力对完成网络工作的方式产生了极其重要的影响:共同地、开放地、倡导代码共享。

然后,10年前,就在PHP 5发布后不久,网络发展史上一件有趣的事情发生了,它导致了PHP社区构建库和应用方式的转变:Ruby on Rails 发布了。
### 框架的异军突起 ###

基于Ruby语言的 Ruby on Rails 框架使MVC(模型-视图-控制器)架构模式获得了不断增长的关注。Mojavi PHP框架几年前就已经使用该模式了,但是 Ruby on Rails 的高明之处在于巩固了MVC的地位。框架在PHP社区炸开了锅,并且框架已经改变了开发者构建PHP应用程序的方式。

许多重要的项目和进展因PHP社区框架的生长而起势。[PHP框架互操作性组织(PHP-FIG)][6]成立于2009年,致力于在框架间建立编码标准、命名约定与最佳实践。编纂这些标准和实践,帮助开发者在使用成员项目的代码时获得越来越好的互操作性。互操作性意味着每个框架可以被拆分为组件和独立的库,与整体框架搭配使用。互操作性带来了另一个重要的里程碑:Composer项目于2011年诞生了。

从 Node.js 的NPM和Ruby的Bundler获得灵感,Composer开辟了PHP应用开发的新纪元,带来了一次PHP“文艺复兴”。它推动了包的互操作性、标准命名约定、编码标准的采用,以及不断增长的测试覆盖率。它是任何现代PHP应用中的一个基本工具。

### 对加速和创新的需求 ###

如今,PHP社区有一个生机勃勃的应用和库的生态系统,被广泛安装的PHP应用包括 WordPress、Drupal、Joomla 和 MediaWiki。这些应用支撑起了各种规模的商业网站,从小型的夫妻店到 whitehouse.gov 和 Wikipedia 这样的站点。Alexa 前十的站点中,有6个使用PHP,一天内就提供数十亿的页面访问。因此,提速成为了PHP应用的首要需求——许多创新也加入到PHP的核心中来提升性能。

在2010年,Facebook 公开了其用作PHP源到源编译器的 HipHop,它将PHP代码翻译为C++代码,并编译为一个单独的可执行二进制应用。Facebook 的规模和成长需要从标准的解释执行的PHP代码迁移到更快、更优化的可执行代码。尽管如此,由于PHP的易用性和快速的开发周期,Facebook 还想继续使用PHP。HipHop 后来进化为 HHVM,一个针对PHP的基于JIT(即时)编译的执行引擎,其中包含一个基于PHP的新语言:[Hack][7]。

Facebook 的创新,以及其他的VM项目,引发了引擎层面上的比较,也引起了关于Zend引擎未来的讨论。Zend引擎依然是PHP的核心,而语言规范的问题也被提了出来。在2014年,一个语言规范项目被创建,“为了提供一个完整的、简明的PHP语言的语句定义和语义学描述”,使得创建可互操作的PHP实现对编译器项目来说成为可能。

下一个PHP主要版本成为了激烈争论的话题,其中 phpng(下一代)项目被提出来,作为清理、重构、优化和提升PHP代码基础的一个选项,它也展示了对真实世界应用性能的实质提升。由于之前存在一个未发布的PHP 6.0版本,在决定将下一个主要版本命名为“PHP 7”后,phpng分支被合并,并制定了开发PHP 7的计划,引入许多最早出现在Hack中的语言功能,例如标量类型与返回类型提示。

随着[今天第一个PHP 7 alpha版本的发布][8],基准测试显示它在许多方面[与HHVM的性能相当甚至更好][9],PHP正与现代网络开发的需求保持一致的步伐。同样地,PHP-FIG继续创新并推动框架与库之间的协作——最近采纳的[PSR-7][10]将会改变PHP项目处理HTTP的方式。用户组、会议、出版物和像[PHPMentoring.org][11]这样的倡议,继续在PHP开发者社区提倡最佳实践、编码标准和测试。

PHP从各个方面见证了网络的成熟,而且PHP自己也成熟了。曾经只是一个底层C语言库的API包装,PHP已经以她自己的方式,成为一门羽翼丰满的编程语言。她的开发者社区充满生气、乐于助人,以实用为傲,并且欢迎新人。PHP已经经受了20年的考验,语言与社区目前的活力,将保证她在接下来的岁月里仍是一门切题而有用的语言。

在 SunshinePHP 的主题演讲中,Rasmus Lerdorf 回忆道:“我曾想过20年后我还会在谈论我当初做的这个愚蠢的小项目吗?没有。”

--------------------------------------------------------------------------------

via: http://www.infoworld.com/article/2933858/php/php-at-20-from-pet-project-to-powerhouse.html

作者:[Ben Ramsey][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.infoworld.com/author/Ben-Ramsey/
[1]:https://www.flickr.com/photos/jurvetson/13049862325
[2]:https://groups.google.com/d/msg/comp.infosystems.www.authoring.cgi/PyJ25gZ6z7A/M9FkTUVDfcwJ
[3]:http://w3techs.com/technologies/overview/programming_language/all
[4]:http://web.archive.org/web/20061215165756/http://chiphpug.php.net/mpug.htm
[5]:http://www.phpdeveloper.org/
[6]:http://www.php-fig.org/
[7]:http://www.infoworld.com/article/2610885/facebook-q-a--hack-brings-static-typing-to-php-world.html
[8]:https://wiki.php.net/todo/php70#timetable
[9]:http://talks.php.net/velocity15
[10]:http://www.php-fig.org/psr/psr-7/
[11]:http://phpmentoring.org/

向 Lerdorf 和PHP社区的其他人致敬,感谢他们把这个“愚蠢的小项目”变成了如今网络世界一股持久而强大的力量。
@ -0,0 +1,268 @@
|
||||
如何在 Ubuntu 中管理和使用 LVM(Logical Volume Management,逻辑卷管理)
|
||||
================================================================================
|
||||

|
||||
|
||||
在我们之前的文章中,我们介绍了[什么是 LVM 以及能用 LVM 做什么][1],今天我们会给你介绍一些 LVM 的主要管理工具,使得你在设置和扩展安装时更游刃有余。
|
||||
|
||||
正如之前所述,LVM 是介于你的操作系统和物理硬盘驱动器之间的抽象层。这意味着你的物理硬盘驱动器和分区不再依赖于他们所在的硬盘驱动和分区。而是,你的操作系统所见的硬盘驱动和分区可以是由任意数目的独立硬盘驱动汇集而成或是一个软件磁盘阵列。
|
||||
|
||||
要管理 LVM,这里有很多可用的 GUI 工具,但要真正理解 LVM 配置发生的事情,最好要知道一些命令行工具。这当你在一个服务器或不提供 GUI 工具的发行版上管理 LVM 时尤为有用。
|
||||
|
||||
LVM 的大部分命令和彼此都非常相似。每个可用的命令都由以下其中之一开头:
|
||||
|
||||
- Physical Volume = pv
|
||||
- Volume Group = vg
|
||||
- Logical Volume = lv
|
||||
|
||||
物理卷命令用于在卷组中添加或删除硬盘驱动器。卷组命令把物理分区抽象成存储池,供逻辑卷使用。逻辑卷命令则把卷组中的空间以分区的形式呈现出来,使你的操作系统能使用指定的空间。
|
||||
|
||||
### 可下载的 LVM 备忘单 ###
|
||||
|
||||
为了帮助你理解每个前缀可用的命令,我们制作了一个备忘单。我们会在该文章中介绍一些命令,但仍有很多你可用但没有介绍到的命令。
|
||||
|
||||
该列表中的所有命令都要以 root 身份运行,因为你更改的是会影响整个机器系统级设置。
|
||||
|
||||

|
||||
|
||||
### 如何查看当前 LVM 信息 ###
|
||||
|
||||
你首先需要做的是检查你的 LVM 设置。以 s 和 display 结尾的命令可以和物理卷(pv)、卷组(vg)以及逻辑卷(lv)搭配使用,是了解当前设置的一个好的起点。
|
||||
|
||||
display 命令会格式化输出信息,因此比 s 命令更易于理解。对每个命令你会看到名称和 pv/vg 的路径,它还会给出空闲和已使用空间的信息。
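作为快速参考,下面是这些查看命令的一个简单示意(需要 root 权限;以 s 结尾的命令输出紧凑的摘要,以 display 结尾的命令输出详细信息):

    pvs         # 物理卷摘要
    vgs         # 卷组摘要
    lvs         # 逻辑卷摘要
    pvdisplay   # 物理卷详细信息
    vgdisplay   # 卷组详细信息
    lvdisplay   # 逻辑卷详细信息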
|
||||
|
||||

|
||||
|
||||
最重要的信息是 PV 名称和 VG 名称。用这两部分信息我们可以继续进行 LVM 设置。
|
||||
|
||||
### 创建一个逻辑卷 ###
|
||||
|
||||
逻辑卷是你的操作系统在 LVM 中使用的分区。创建一个逻辑卷,首先需要拥有一个物理卷和卷组。下面是创建一个新的逻辑卷所需要的全部命令。
|
||||
|
||||
#### 创建物理卷 ####
|
||||
|
||||
我们会从一个完全新的没有任何分区和信息的硬盘驱动开始。首先找出你将要使用的磁盘。(/dev/sda, sdb, 等)
|
||||
|
||||
> 注意:记住所有的命令都要以 root 身份运行或者在命令前面添加 'sudo' 。
|
||||
|
||||
fdisk -l
|
||||
|
||||
如果之前你的硬盘驱动从没有格式化或分区,在 fdisk 的输出中你很可能看到类似下面的信息。这完全正常,因为我们会在下面的步骤中创建需要的分区。
|
||||
|
||||

|
||||
|
||||
我们的新磁盘位置是 /dev/sdb,让我们用 fdisk 命令在驱动上创建一个新的分区。
|
||||
|
||||
这里有大量能创建新分区的 GUI 工具,包括 [Gparted][2],但由于我们已经打开了终端,我们将使用 fdisk 命令创建需要的分区。
|
||||
|
||||
在终端中输入以下命令:
|
||||
|
||||
fdisk /dev/sdb
|
||||
|
||||
这会使你进入到一个特殊的 fdisk 提示符中。
|
||||
|
||||

|
||||
|
||||
以指定的顺序输入命令,创建一个使用新硬盘驱动器 100% 空间的主分区,为 LVM 做好准备。如果你需要更改分区的大小或需要多个分区,我建议使用 GParted 或自己了解 fdisk 命令的使用。
|
||||
|
||||
**警告:下面的步骤会格式化你的硬盘驱动。确保在进行下面步骤之前你的硬盘驱动中没有任何信息。**
|
||||
|
||||
- n = 创建新分区
|
||||
- p = 创建主分区
|
||||
- 1 = 成为磁盘上的首个分区
|
||||
|
||||
输入 enter 键两次以接受默认的第一个和最后一个柱面。
|
||||
|
||||

|
||||
|
||||
用下面的命令准备 LVM 所使用的分区。
|
||||
|
||||
- t = 更改分区类型
|
||||
- 8e = 更改为 LVM 分区类型
|
||||
|
||||
核实并将信息写入硬盘驱动器。
|
||||
|
||||
- p = 查看分区设置使得写入更改到磁盘之前可以回看
|
||||
- w = 写入更改到磁盘
|
||||
|
||||

|
||||
|
||||
运行这些命令之后,会退出 fdisk 提示符并返回到终端的 bash 提示符中。
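如果希望在脚本中自动完成上面的交互步骤,也可以把按键序列通过管道传给 fdisk。下面只是一个示意(假设磁盘为 /dev/sdb,执行后会清除其分区表,请再三确认):

    # n=新建分区, p=主分区, 1=第一个分区, 两个空行=接受默认柱面,
    # t=更改分区类型, 8e=LVM, w=写入更改
    printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/sdb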
|
||||
|
||||
输入 pvcreate /dev/sdb1 在刚创建的分区上新建一个 LVM 物理卷。
|
||||
|
||||
你也许会问为什么我们不用一个文件系统格式化分区,不用担心,该步骤在后面。
|
||||
|
||||

|
||||
|
||||
#### 创建卷组 ####
|
||||
|
||||
现在我们有了一个指定的分区和创建好的物理卷,我们需要创建一个卷组。很幸运这只需要一个命令。
|
||||
|
||||
vgcreate vgpool /dev/sdb1
|
||||
|
||||

|
||||
|
||||
Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称,但建议标签以 vg 开头,以便后面你使用它时能意识到这是一个卷组。
|
||||
|
||||
#### 创建逻辑卷 ####
|
||||
|
||||
创建 LVM 将使用的逻辑卷:
|
||||
|
||||
lvcreate -L 3G -n lvstuff vgpool
|
||||
|
||||

|
||||
|
||||
-L 选项指定逻辑卷的大小,在这里是 3 GB;-n 选项指定卷的名称。指定 vgpool 是为了让 lvcreate 命令知道从哪个卷组获取空间。
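除了用 -L 指定绝对大小,lvcreate 还支持用 -l 按百分比分配空间,下面是两个示意(卷组名沿用上文的 vgpool):

    lvcreate -l 100%FREE -n lvstuff vgpool    # 使用卷组的全部空闲空间
    lvcreate -l 50%VG -n lvstuff vgpool       # 使用卷组总容量的一半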
|
||||
|
||||
#### 格式化并挂载逻辑卷 ####
|
||||
|
||||
最后一步是用一个文件系统格式化新的逻辑卷。如果你需要帮助来选择一个 Linux 文件系统,请阅读[如何根据需要选取最合适的文件系统][3]。
|
||||
|
||||
mkfs -t ext3 /dev/vgpool/lvstuff
|
||||
|
||||

|
||||
|
||||
创建挂载点并将卷挂载到你可以使用的地方。
|
||||
|
||||
mkdir /mnt/stuff
|
||||
mount -t ext3 /dev/vgpool/lvstuff /mnt/stuff
|
||||
|
||||

|
||||
|
||||
#### 重新设置逻辑卷大小 ####
|
||||
|
||||
逻辑卷的一个好处是你能使分区变大或变小,而不需要把所有东西迁移到一个更大的硬盘驱动器。另外,你可以添加新的硬盘驱动器来扩展你的卷组;或者如果你有一个不使用的硬盘驱动器,也可以从卷组中移除它,使逻辑卷变小。
|
||||
|
||||
这里有三个用于使物理卷、卷组和逻辑卷变大或变小的基础工具。
|
||||
|
||||
注意:这些命令中的每个都要以 pv、vg 或 lv 开头,取决于你的工作对象。
|
||||
|
||||
- resize – 能压缩或扩展物理卷和逻辑卷,但卷组不能
|
||||
- extend – 能使卷组和逻辑卷变大但不能变小
|
||||
- reduce – 能使卷组和逻辑卷变小但不能变大
|
||||
|
||||
让我们来看一个如何向刚创建的逻辑卷 "lvstuff" 添加新硬盘驱动的例子。
|
||||
|
||||
#### 安装并格式化新硬盘驱动 ####
|
||||
|
||||
按照上面创建新分区并更改分区类型为 LVM(8e) 的步骤安装一个新硬盘驱动。然后用 pvcreate 命令创建一个 LVM 能识别的物理卷。
|
||||
|
||||
#### 添加新硬盘驱动到卷组 ####
|
||||
|
||||
要添加新的硬盘驱动到一个卷组,你只需要知道你的新分区,在我们的例子中是 /dev/sdc1,以及想要添加到的卷组的名称。
|
||||
|
||||
这会添加新物理卷到已存在的卷组中。
|
||||
|
||||
vgextend vgpool /dev/sdc1
|
||||
|
||||

|
||||
|
||||
#### 扩展逻辑卷 ####
|
||||
|
||||
调整逻辑卷大小时,我们需要指定的是扩展后的大小而不是设备。在我们的例子中,我们会添加一个 8GB 的硬盘驱动器到 3GB 的 vgpool。我们可以用 lvextend 或 lvresize 命令使该空间可用。
|
||||
|
||||
lvextend -L8G /dev/vgpool/lvstuff
|
||||
|
||||

|
||||
|
||||
运行这个命令后你会发现,它实际上是把逻辑卷大小重新设置为 8GB,而不是像我们期望的那样把 8GB 添加到已有的卷上。要添加剩余的可用 3GB,你需要用下面的命令。
|
||||
|
||||
lvextend -L+3G /dev/vgpool/lvstuff
|
||||
|
||||

|
||||
|
||||
现在我们的逻辑卷已经是 11GB 大小了。
|
||||
|
||||
#### 扩展文件系统 ####
|
||||
|
||||
逻辑卷是 11GB 大小但是上面的文件系统仍然只有 3GB。要使文件系统使用整个的 11GB 可用空间你需要用 resize2fs 命令。你只需要指定 resize2fs 到 11GB 逻辑卷它就会帮你完成其余的工作。
|
||||
|
||||
resize2fs /dev/vgpool/lvstuff
|
||||
|
||||

|
||||
|
||||
**注意:如果你使用除 ext3/4 之外的文件系统,请查看调整你的文件系统大小的工具。**
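另外,较新版本的 LVM 工具提供了 -r(--resizefs)选项,可以在扩展逻辑卷的同时自动调整其上的文件系统,相当于把 lvextend 和 resize2fs 合并成一步(前提是该文件系统受支持):

    lvextend -r -L+3G /dev/vgpool/lvstuff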
|
||||
|
||||
#### 压缩逻辑卷 ####
|
||||
|
||||
如果你想从卷组中移除一个硬盘驱动你可以按照上面的步骤反向操作,并用 lvreduce 或 vgreduce 命令代替。
|
||||
|
||||
1. 调整文件系统大小 (调整之前确保已经移动文件到硬盘驱动安全的地方)
|
||||
1. 减小逻辑卷 (除了 + 可以扩展大小,你也可以用 - 压缩大小)
|
||||
1. 用 vgreduce 从卷组中移除硬盘驱动
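下面是与上述步骤对应的一个命令序列示意(假设要把 lvstuff 缩小 3GB 并从卷组移除 /dev/sdc1;缩小文件系统有数据风险,请先备份):

    umount /mnt/stuff
    e2fsck -f /dev/vgpool/lvstuff         # 缩小前必须先检查文件系统
    resize2fs /dev/vgpool/lvstuff 8G      # 先把文件系统缩小到目标大小
    lvreduce -L-3G /dev/vgpool/lvstuff    # 再缩小逻辑卷
    pvmove /dev/sdc1                      # 把 /dev/sdc1 上的数据移到其它物理卷
    vgreduce vgpool /dev/sdc1             # 从卷组中移除该硬盘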
|
||||
|
||||
#### 备份逻辑卷 ####
|
||||
|
||||
快照是一些新的高级文件系统提供的功能,但是 ext3/4 文件系统并没有快照的功能。LVM 快照最棒的是你的文件系统永不掉线,你可以拥有你想要的任何大小而不需要额外的硬盘空间。
|
||||
|
||||

|
||||
|
||||
LVM 获取快照的时候,会生成一份逻辑卷当时状态的完全相同的副本,该副本可以用于在不同的硬盘驱动器上进行备份。在生成备份的同时,任何需要写入逻辑卷的新信息会如往常一样写入磁盘,但这些更改会被跟踪,使得原始快照永远不会损毁。
|
||||
|
||||
要创建一个快照,我们需要创建拥有足够空闲空间的逻辑卷,用于保存我们备份的时候会写入该逻辑卷的任何新信息。如果驱动并不是经常写入,你可以使用很小的一个存储空间。备份完成的时候我们只需要移除临时逻辑卷,原始逻辑卷会和往常一样。
|
||||
|
||||
#### 创建新快照 ####
|
||||
|
||||
创建 lvstuff 的快照,用带 -s 标记的 lvcreate 命令。
|
||||
|
||||
lvcreate -L512M -s -n lvstuffbackup /dev/vgpool/lvstuff
|
||||
|
||||

|
||||
|
||||
这里我们创建了一个只有 512MB 的逻辑卷,因为该驱动器并不会被频繁写入。这 512MB 的空间会保存备份期间产生的任何新数据。
|
||||
|
||||
#### 挂载新快照 ####
|
||||
|
||||
和之前一样,我们需要创建一个挂载点并挂载新快照,然后才能从中复制文件。
|
||||
|
||||
mkdir /mnt/lvstuffbackup
|
||||
mount /dev/vgpool/lvstuffbackup /mnt/lvstuffbackup
|
||||
|
||||

|
||||
|
||||
#### 复制快照和删除逻辑卷 ####
|
||||
|
||||
你剩下需要做的是从 /mnt/lvstuffbackup/ 中复制所有文件到一个外部的硬盘驱动或者打包所有文件到一个文件。
|
||||
|
||||
**注意:tar -c 会创建一个归档文件,-f 要指出归档文件的名称和路径。要获取 tar 命令的帮助信息,可以在终端中输入 man tar。**
|
||||
|
||||
tar -cf /home/rothgar/Backup/lvstuff-ss /mnt/lvstuffbackup/
|
||||
|
||||

|
||||
|
||||
记住备份发生的时候写到 lvstuff 的所有文件都会在我们之前创建的临时逻辑卷中被跟踪。确保备份的时候你有足够的空闲空间。
|
||||
|
||||
备份完成后,卸载卷并移除临时快照。
|
||||
|
||||
umount /mnt/lvstuffbackup
|
||||
lvremove /dev/vgpool/lvstuffbackup
|
||||
|
||||

|
||||
|
||||
#### 删除逻辑卷 ####
|
||||
|
||||
要删除一个逻辑卷,你首先需要确保卷已经卸载,然后你可以用 lvremove 命令删除它。逻辑卷删除后你可以移除卷组,卷组删除后你可以删除物理卷。
|
||||
|
||||
这是所有移除我们创建的卷和组的命令。
|
||||
|
||||
umount /mnt/lvstuff
|
||||
lvremove /dev/vgpool/lvstuff
|
||||
vgremove vgpool
|
||||
pvremove /dev/sdb1 /dev/sdc1
|
||||
|
||||

|
||||
|
||||
这些已经囊括了关于 LVM 你需要了解的大部分知识。如果你有任何关于这些讨论的经验,请在下面的评论框中和大家分享。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
|
||||
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
|
||||
|
||||
[1]:http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-how-do-you-enable-it-in-ubuntu/
|
||||
[2]:http://www.howtogeek.com/howto/17001/how-to-format-a-usb-drive-in-ubuntu-using-gparted/
|
||||
[3]:http://www.howtogeek.com/howto/33552/htg-explains-which-linux-file-system-should-you-choose/
|
@@ -0,0 +1,302 @@
|
||||
20款优秀的Linux终端仿真器
|
||||
================================================================================
|
||||
终端仿真器是一种在其它显示架构中重现视频终端功能的计算机程序。换句话说,终端仿真器能让一台哑终端看起来像是一台连接到服务器的客户机。终端仿真器允许最终用户像使用文本用户界面和命令行界面一样连接控制台和应用程序。
|
||||
|
||||

|
||||
|
||||
20款Linux终端仿真器
|
||||
|
||||
你能从开源世界中找到大量可用的终端仿真器,有些功能丰富,有些则比较简单。为了帮助大家更好地了解它们的品质,我们收集了一份精彩的Linux终端仿真器清单。每一款都列出了各自的描述和特性,以及软件界面截图和下载链接。
|
||||
|
||||
### 1. Terminator ###
|
||||
|
||||
Terminator是一款先进且强大的终端仿真器,它支持多终端窗口。这款仿真器可以完全自定义。你可以更改它的界面尺寸、颜色,给它设置不同的形状。拥有高用户友好性且使用起来很有乐趣。
|
||||
|
||||
#### Terminator的特性 ####
|
||||
|
||||
- 自定义外形和配色方案,根据你的需要设置尺寸。
|
||||
- 使用插件来获取更多功能。
|
||||
- 快捷键可以加快普通操作。
|
||||
- 可以把终端窗口分裂成几个虚拟终端并把它们重新设置成你需要的尺寸。
|
||||
|
||||

|
||||
|
||||
Terminator Terminal
|
||||
|
||||
- [Terminator Homepage][1]
|
||||
- [Download and Installation Instructions][2]
|
||||
|
||||
### 2. Tilda ###
|
||||
|
||||
Tilda是一款漂亮的基于GTK+的下拉式终端,敲击一个键你就可以呼出一个新的或隐藏着的Tilda窗口。你也可以添加你所选择的颜色来更改文本颜色和终端背景颜色。
|
||||
|
||||
#### Tilda的特性 ####
|
||||
|
||||
- 高度定制的选项界面设置。
|
||||
- 你可以给Tilda设置透明度。
|
||||
- 优秀的嵌入式配色方案。
|
||||
|
||||

|
||||
|
||||
Tilda Terminal
|
||||
|
||||
- [Tilda Homepage][3]
|
||||
|
||||
### 3. Guake ###
|
||||
|
||||
Guake是一款基于python的下拉式终端,诞生于GNOME桌面环境。按一个键就能调出,再按一下就能隐藏。它的设计构思来源于FPS (第一人称射击) 游戏例如Quake,其目标显而易见。
|
||||
|
||||
Guake与Yakuake和Tilda非常相似,不过它是一个集上述二者的优点于一体的基于GTK的程序。Guake几乎完全是用Python写成的,只有一小段C代码(全局热键部分)。
|
||||
|
||||

|
||||
|
||||
Guake Terminal
|
||||
|
||||
- [Guake Homepage][4]
|
||||
|
||||
### 4. Yakuake ###
|
||||
|
||||
Yakuake (Yet Another Kuake) 是一款基于KDE的下拉式终端仿真器,它与Guake在功能上非常相似。它的设计构思同样受到了FPS游戏(例如Quake)的启发。
|
||||
|
||||
Yakuake主要是一款KDE应用程序,它能非常轻松地安装在KDE桌面上,但是如果你试着将它安装在GNOME桌面上,你将会安装大量的依赖包。
|
||||
|
||||
#### Yakuake的特性 ####
|
||||
|
||||
- 从屏幕顶端弹回顺畅
|
||||
- 选项卡式界面
|
||||
- 可配置的尺寸和动画速度
|
||||
- 可定制
|
||||
|
||||

|
||||
|
||||
Yakuake Terminal
|
||||
|
||||
- [Yakuake Homepage][5]
|
||||
|
||||
### 5. ROXTerm ###
|
||||
|
||||
ROXTerm又是一款轻量级终端仿真器,旨在提供与GNOME终端相似的特性。它最初的目标是不依赖Gnome库以减少资源占用、加快启动速度,并用独立的小程序来构建配置界面(GUI);但是随着时间的推移,它的定位转变为给高级用户提供更丰富的特性。
|
||||
|
||||
然而,它比GNOME终端更加具有可制定性并且对于那些经常使用终端的高级用户更令人期望。它能和GNOME桌面环境完美结合并在终端中提供像拖拽项目那样的特性。
|
||||
|
||||

|
||||
|
||||
Roxterm Terminal
|
||||
|
||||
- [ROXTerm Homepage][6]
|
||||
|
||||
### 6. Eterm ###
|
||||
|
||||
Eterm是最轻量级的一款彩色终端仿真器,它是作为xterm的替代品而设计的。它的开发理念是:选择的自由、避免臃肿,以及把灵活性和自由交到用户手中。
|
||||
|
||||

|
||||
|
||||
Eterm Terminal
|
||||
|
||||
- [Eterm Homepage][7]
|
||||
|
||||
### 7. Rxvt ###
|
||||
|
||||
Rxvt意为扩展虚拟终端(extended virtual terminal),是一款彩色终端仿真器,作为xterm的替代品,面向那些不需要Tektronix 4014仿真和toolkit风格可配置性等特性的高级用户。
|
||||
|
||||

|
||||
|
||||
Rxvt Terminal
|
||||
|
||||
- [Rxvt Homepage][8]
|
||||
|
||||
### 8. Wterm ###
|
||||
|
||||
Wterm是另一款以rxvt项目为基础的轻量级彩色终端仿真器。它所包含的特性包括设置背景图片、透明度、反向透明度和大量的设置或运行环境选项让它成为一款可高度自定义的终端仿真器。
|
||||
|
||||

|
||||
|
||||
wterm Terminal
|
||||
|
||||
- [Wterm Homepage][9]
|
||||
|
||||
### 9. LXTerminal ###
|
||||
|
||||
LXTerminal是一款基于VTE的终端仿真器,默认运行于没有任何多余依赖的LXDE(轻量级X桌面环境)下。这款终端有很多很棒的特性。
|
||||
|
||||
#### LXTerminal的特性 ####
|
||||
|
||||
- 多标签支持
|
||||
- 支持普通命令如cp, cd, dir, mkdir, mvdir
|
||||
- 隐藏菜单栏以保证足够界面空间
|
||||
- 更改配色方案
|
||||
|
||||

|
||||
|
||||
lxterminal Terminal
|
||||
|
||||
- [LXTerminal Homepage][10]
|
||||
|
||||
### 10. Konsole ###
|
||||
|
||||
Konsole是另一款强大的基于KDE的免费终端仿真器,最初由Lars Doelle创造。
|
||||
|
||||
#### Konsole的特性 ####
|
||||
|
||||
- 多标签式终端
|
||||
- 半透明背景
|
||||
- 支持拆分视图模式
|
||||
- 目录和SSH书签化
|
||||
- 可定制的配色方案
|
||||
- 可定制的按键绑定
|
||||
- 终端中的活动通知警告
|
||||
- 增量搜索
|
||||
- 支持Dolphin文件管理器
|
||||
- 普通文本和HTML格式输出出口
|
||||
|
||||

|
||||
|
||||
Konsole Terminal
|
||||
|
||||
- [Konsole Homepage][11]
|
||||
|
||||
### 11. TermKit ###
|
||||
|
||||
TermKit是一款漂亮简洁的终端,其目标是使用WebKit渲染引擎(广泛用于Google Chrome和Chromium)在基于命令行的应用程序中构建出GUI视图。TermKit起初是为Mac和Windows设计的,但是Floby对TermKit进行了fork和修改,现在你可以将它安装在Linux发行版上并感受TermKit带来的魅力。
|
||||
|
||||

|
||||
|
||||
TermKit Terminal
|
||||
|
||||
- [TermKit Homepage][12]
|
||||
|
||||
### 12. st ###
|
||||
|
||||
st是一款简单的X Window终端实现接口。
|
||||
|
||||

|
||||
|
||||
st terminal
|
||||
|
||||
- [st Homepage][13]
|
||||
|
||||
### 13. Gnome-Terminal ###
|
||||
|
||||
GNOME终端是GNOME桌面环境自带的终端仿真器,由Havoc Pennington和其他一些人共同开发。它允许用户在GNOME环境下使用一个真实的Linux shell来运行命令。GNOME终端仿真了xterm终端,并提供了一些相似的特性。
|
||||
|
||||
Gnome终端支持多个配置方案,用户可以为自己的账户创建多个配置方案,每个方案都能自定义配置选项,如字体、颜色、背景图片、行为习惯等等,并能分别给它们命名。它也支持鼠标事件、URL探测、多标签等。
|
||||
|
||||

|
||||
|
||||
Gnome Terminal
|
||||
|
||||
- [Gnome Terminal][14]
|
||||
|
||||
### 14. Final Term ###
|
||||
|
||||
Final Term是一款漂亮的开源终端仿真器,在这一个界面里蕴藏着一些令人激动的功能和特性。虽然它仍然有待改进,但是它提供了一些重要的特性比如Semantic文本菜单、智能的命令行实现、GUI终端控制、全能的快捷键、彩色支持等等。以下动图抓取并演示了它们的一些特性,点开来看看吧。
|
||||
|
||||

|
||||
|
||||
FinalTerm Terminal
|
||||
|
||||
- [Final Term][15]
|
||||
|
||||
### 15. Terminology ###
|
||||
|
||||
Terminology是一款新的现代化终端仿真器,为Enlightenment桌面创造,但也能用于其它桌面环境。它有一些独一无二的棒极了的特性,这是其它终端仿真器所不具备的。
|
||||
|
||||
抛开这些特性,terminology甚至还提供了你无法从其它仿真器看到的东西,比如图像、视频和文档的缩略图预览,它允许你从Terminology直接就能看到那些文件。
|
||||
|
||||
你可以来看看Terminology的开发人员制作的小视频(视频画质不太清晰,但足以让你了解Terminology了)。
|
||||
|
||||
<iframe width="630" height="480" frameborder="0" allowfullscreen="" src="//www.youtube.com/embed/ibPziLRGvkg"></iframe>
|
||||
|
||||
- [Terminology][16]
|
||||
|
||||
### 16. Xfce4 terminal ###
|
||||
|
||||
Xfce终端是一款轻量级的现代化终端仿真器,它简单易用,为Xfce桌面环境设计。它的最新版本有许多新的炫酷特性,比如搜索对话框、标签颜色更改、像Guake或Yakuake一样的下拉式控制台等等。
|
||||
|
||||

|
||||
|
||||
Xfce Terminal
|
||||
|
||||
- [Xfce4 Terminal][17]
|
||||
|
||||
### 17. xterm ###
|
||||
|
||||
xterm应用程序是X Window系统上的标准终端仿真器。它兼容DEC VT102和Tektronix 4014终端,让那些不能直接使用窗口系统的程序也能在X Window系统下使用。
|
||||
|
||||

|
||||
|
||||
xterm Terminal
|
||||
|
||||
- [xterm][18]
|
||||
|
||||
### 18. LilyTerm ###
|
||||
|
||||
LilyTerm是一款基于libvte的开源终端仿真器,这款不太出名的仿真器追求的是快速和轻量级。LilyTerm也包括一些关键特性:
|
||||
|
||||
- 支持标签移动、着色以及标签重新排序
|
||||
- 通过快捷键管理标签
|
||||
- 支持背景透明化和饱和度调整
|
||||
- 支持为特定用户创建配置文件
|
||||
- 若干个自定义选项
|
||||
- 广泛支持UTF-8
|
||||
|
||||

|
||||
|
||||
Lilyterm Terminal
|
||||
|
||||
- [LilyTerm][19]
|
||||
|
||||
### 19. Sakura ###
|
||||
|
||||
Sakura是另一款不太知名的Unix风格终端仿真器,为命令行模式和基于文本的终端程序而开发。Sakura基于GTK和libvte,自身特性不多,不过还是有一些自定义选项,比如多标签支持、自定义文本颜色、字体和背景图片、快速命令处理等等。
|
||||
|
||||

|
||||
|
||||
Sakura Terminal
|
||||
|
||||
- [Sakura][20]
|
||||
|
||||
### 20. rxvt-unicode ###
|
||||
|
||||
rxvt-unicode (也称为urxvt) 是另一个高度可定制、轻量级和快速的终端仿真器,支持xft和unicode,由Marc Lehmann开发。它有许多显著特性比如支持Unicode上的国际语言,能显示多种字体类型并支持Perl扩展。
|
||||
|
||||

|
||||
|
||||
rxvt unicode
|
||||
|
||||
- [rxvt-unicode][21]
|
||||
|
||||
如果你知道任何其它强大的Linux终端仿真器而上文未提及,欢迎在评论中与我们分享。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/linux-terminal-emulators/
|
||||
|
||||
作者:[Ravi Saive][a]
|
||||
译者:[ZTinoZ](https://github.com/ZTinoZ)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.tecmint.com/author/admin/
|
||||
[1]:https://launchpad.net/terminator
|
||||
[2]:http://www.tecmint.com/terminator-a-linux-terminal-emulator-to-manage-multiple-terminal-windows/
|
||||
[3]:http://tilda.sourceforge.net/tildaabout.php
|
||||
[4]:https://github.com/Guake/guake
|
||||
[5]:http://extragear.kde.org/apps/yakuake/
|
||||
[6]:http://roxterm.sourceforge.net/index.php?page=index&lang=en
|
||||
[7]:http://www.eterm.org/
|
||||
[8]:http://sourceforge.net/projects/rxvt/
|
||||
[9]:http://sourceforge.net/projects/wterm/
|
||||
[10]:http://wiki.lxde.org/en/LXTerminal
|
||||
[11]:http://konsole.kde.org/
|
||||
[12]:https://github.com/unconed/TermKit
|
||||
[13]:http://st.suckless.org/
|
||||
[14]:https://help.gnome.org/users/gnome-terminal/stable/
|
||||
[15]:http://finalterm.org/
|
||||
[16]:http://www.enlightenment.org/p.php?p=about/terminology
|
||||
[17]:http://docs.xfce.org/apps/terminal/start
|
||||
[18]:http://invisible-island.net/xterm/
|
||||
[19]:http://lilyterm.luna.com.tw/
|
||||
[20]:https://launchpad.net/sakura
|
||||
[21]:http://software.schmorp.de/pkg/rxvt-unicode
|
@@ -0,0 +1,91 @@
|
||||
如何在 Cacti 中合并两幅图片
|
||||
================================================================================
|
||||
[Cacti][1] 是一个很棒的开源网络监视系统,它广泛用于以图形方式展示网络元素,例如带宽、存储、处理器和内存使用。使用它的基于网页的界面,你可以轻松地创建和组织各种图。然而,它默认并没有提供一些高级功能,例如合并图、使用多个来源创建聚合图、将 Cacti 迁移到另一台服务器等。使用 Cacti 的这些功能还需要一些经验。在该教程中,我们会看到如何将两幅 Cacti 图合并为一幅。
|
||||
|
||||
考虑这个例子。在过去的 6 个月中,客户端 A 连接到了交换机 A 的端口 5。端口 5 发生了错误,因此客户端迁移到了端口 6。由于 Cacti 为每个接口/元素使用不同的图,客户端的带宽历史会分成端口 5 和端口 6。结果是对于一个客户端我们有两幅图片 - 一幅是 6 个月的旧数据,另一幅保存了后续的数据。
|
||||
|
||||
在这种情况下,我们实际上可以合并两幅图片将旧数据加到新的图中,使得用一个单独的图为一个用户保存历史的和新数据。该教程将会解释如何做到这一点。
|
||||
|
||||
Cacti 将每幅图片的数据保存在它自己的 RRD(round robin database,循环数据库) 文件中。当请求一幅图片时,根据保存在对应 RRD 文件中的值生成图。在 Ubuntu/Debian 系统中,RRD 文件保存在 `/var/lib/cacti/rra`,在 CentOS/RHEL 系统中则是 `/var/www/cacti/rra`。
|
||||
|
||||
合并图片背后的思想是更改这些 RRD 文件使得旧 RRD 文件中的值能追加到新的 RRD 文件中。
|
||||
|
||||
### 情景 ###
|
||||
|
||||
一个客户端的服务在 eth0 上运行了超过一年。由于硬件损坏,客户端迁移到了另一台服务器的 eth1 接口。我们想图示新接口的带宽,同时保留超过一年的历史数据。只在一幅图中显示客户端。
|
||||
|
||||
### 确定图的 RRD 文件 ###
|
||||
|
||||
图合并的首个步骤是确定和图关联的 RRD 文件。我们可以通过以调试模式打开图检查文件。要做到这点,在 Cacti 的菜单中: 控制台 > 管理图 > 选择图 > 打开图调试模式。
|
||||
|
||||
#### 旧图: ####
|
||||
|
||||

|
||||
|
||||
#### 新图: ####
|
||||
|
||||

|
||||
|
||||
从样例输出(基于 Debian 系统)中,我们可以确定两幅图片的 RRD 文件:
|
||||
|
||||
- **旧图**: /var/lib/cacti/rra/old_graph_traffic_in_8.rrd
|
||||
- **新图**: /var/lib/cacti/rra/new_graph_traffic_in_10.rrd
|
||||
|
||||
### 准备脚本 ###
|
||||
|
||||
我们会用一个 [RRD 剪接脚本][2] 合并两个 RRD 文件。下载该 PHP 脚本,并安装为 /var/lib/cacti/rra/rrdsplice.php (Debian/Ubuntu 系统) 或 /var/www/cacti/rra/rrdsplice.php (CentOS/RHEL 系统)。
|
||||
|
||||
下一步,确认 Apache 用户拥有该文件。
|
||||
|
||||
在 Debian 或 Ubuntu 系统中,运行下面的命令:
|
||||
|
||||
# chown www-data:www-data rrdsplice.php
|
||||
|
||||
并更新 rrdsplice.php。查找下面的行:
|
||||
|
||||
chown($finrrd, "apache");
|
||||
|
||||
用下面的语句替换:
|
||||
|
||||
chown($finrrd, "www-data");
|
||||
|
||||
在 CentOS 或 RHEL 系统中,运行下面的命令:
|
||||
|
||||
# chown apache:apache rrdsplice.php
|
||||
|
||||
### 合并两幅图 ###
|
||||
|
||||
通过不带任何参数运行该脚本可以获得脚本的使用语法。
|
||||
|
||||
# cd /path/to/rrdsplice.php
|
||||
# php rrdsplice.php
|
||||
|
||||
----------
|
||||
|
||||
USAGE: rrdsplice.php --oldrrd=file --newrrd=file --finrrd=file
|
||||
|
||||
现在我们准备好合并两个 RRD 文件了。只需要指定旧 RRD 文件和新 RRD 文件的名称。我们会将合并后的结果重写到新 RRD 文件中。
|
||||
|
||||
# php rrdsplice.php --oldrrd=old_graph_traffic_in_8.rrd --newrrd=new_graph_traffic_in_10.rrd --finrrd=new_graph_traffic_in_10.rrd
|
||||
|
||||
现在旧 RRD 文件中的数据已经追加到了新 RRD 文件中。Cacti 会将任何新数据写到新 RRD 文件中。如果我们点击图,我们可以发现也已经添加了旧图的周、月、年记录。下面图表中的第二幅图显示了旧图的周记录。
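如果需要合并的图不止一对,也可以用一个简单的 shell 循环批量处理。下面只是一个示意,其中 pairs.txt 是假设的文件,每行包含一对“旧RRD文件 新RRD文件”:

    while read old new; do
        php rrdsplice.php --oldrrd="$old" --newrrd="$new" --finrrd="$new"
    done < pairs.txt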
|
||||
|
||||

|
||||
|
||||
总之,该教程显示了如何简单地将两幅 Cacti 图片合并为一幅。当服务迁移到另一个设备/接口,我们希望只处理一幅图片而不是两幅时,这个小技巧非常有用。该脚本非常方便,因为它可以不管源设备合并图片,例如 Cisco 1800 路由器和 Cisco 2960 交换机。
|
||||
|
||||
希望这些能对你有所帮助。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://xmodulo.com/combine-two-graphs-cacti.html
|
||||
|
||||
作者:[Sarmed Rahman][a]
|
||||
译者:[ictlyh](https://github.com/ictlyh)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://xmodulo.com/author/sarmed
|
||||
[1]:http://xmodulo.com/install-configure-cacti-linux.html
|
||||
[2]:http://svn.cacti.net/viewvc/developers/thewitness/rrdsplice/rrdsplice.php
|
@@ -1,64 +1,64 @@
|
||||
Installing LAMP (Linux, Apache, MariaDB, PHP/PhpMyAdmin) in RHEL/CentOS 7.0
|
||||
在RHEL/CentOS 7.0中安装LAMP(Linux、 Apache、 MariaDB、 PHP/PhpMyAdmin)
|
||||
================================================================================
|
||||
Skipping the LAMP introduction, as I’m sure that most of you know what is all about. This tutorial will concentrate on how to install and configure famous LAMP stack – Linux Apache, MariaDB, PHP, PhpMyAdmin – on the last release of Red Hat Enterprise Linux 7.0 and CentOS 7.0, with the mention that both distributions have upgraded httpd daemon to Apache HTTP 2.4.
|
||||
跳过LAMP的介绍,因为我相信你们大多数人已经知道它是什么了。这个教程会集中在如何在最新发布的Red Hat Enterprise Linux 7.0和CentOS 7.0中安装和配置著名的LAMP组合:Linux、Apache、 MariaDB、 PHP、PhpMyAdmin。需要提及的是,这两个发行版都已将httpd守护进程升级到了Apache HTTP 2.4。
|
||||
|
||||

|
||||
|
||||
Install LAMP in RHEL/CentOS 7.0
|
||||
在RHEL/CentOS 7.0中安装LAMP
|
||||
|
||||
#### Requirements ####
|
||||
#### 要求 ####
|
||||
|
||||
Depending on the used distribution, RHEL or CentOS 7.0, use the following links to perform a minimal system installation, using a static IP Address for network configuration.
|
||||
根据使用的发行版(RHEL 或者 CentOS 7.0),使用下面的链接来执行最小系统安装,并为网络配置使用静态IP地址。
|
||||
|
||||
**For RHEL 7.0**
|
||||
**对于RHEL 7.0**
|
||||
|
||||
- [RHEL 7.0 Installation Procedure][1]
|
||||
- [Register and Enable Subscriptions/Repositories on RHEL 7.0][2]
|
||||
- [RHEL 7.0安装过程][1]
|
||||
- [在RHEL 7.0中注册和启用订阅仓库][2]
|
||||
|
||||
**For CentOS 7.0**
|
||||
**对于 CentOS 7.0**
|
||||
|
||||
- [CentOS 7.0 Installation Procedure][3]
|
||||
- [CentOS 7.0 安装过程][3]
|
||||
|
||||
### Step 1: Install Apache Server with Basic Configurations ###
|
||||
### 第一步: 使用基本配置安装apache ###
|
||||
|
||||
**1. After performing a minimal system installation and configure your server network interface with a [Static IP Address on RHEL/CentOS 7.0][4], go ahead and install Apache 2.4 httpd service binary package provided form official repositories using the following command.**
|
||||
**1. 在执行最小系统安装并[在RHEL/CentOS 7.0中为服务器网络接口配置静态IP][4]之后**,就可以使用下面的命令从官方仓库安装最新的Apache 2.4 httpd服务。
|
||||
|
||||
# yum install httpd
|
||||
|
||||

|
||||
|
||||
Install Apache Web Server
|
||||
安装apache服务
|
||||
|
||||
**2. After yum manager finish installation, use the following commands to manage Apache daemon, since RHEL and CentOS 7.0 both migrated their init scripts from SysV to systemd – you can also use SysV and Apache scripts the same time to manage the service.**
|
||||
**2. 安装完成后,使用下面的命令来管理apache守护进程,因为RHEL和CentOS 7.0都将init脚本从SysV升级到了systemd - 你也可以同时使用SysV和Apache脚本来管理服务。**
|
||||
|
||||
# systemctl status|start|stop|restart|reload httpd
|
||||
|
||||
OR
|
||||
或者
|
||||
|
||||
# service httpd status|start|stop|restart|reload
|
||||
|
||||
OR
|
||||
或者
|
||||
|
||||
# apachectl configtest| graceful
|
||||
|
||||

|
||||
|
||||
Start Apache Web Server
|
||||
启动apache服务
|
||||
|
||||
**3. On the next step start Apache service using systemd init script and open RHEL/CentOS 7.0 Firewall rules using firewall-cmd, which is the default command to manage iptables through firewalld daemon.**
|
||||
**3. 下一步使用systemd初始化脚本来启动apache服务并用firewall-cmd打开RHEL/CentOS 7.0防火墙规则, 这是通过firewalld守护进程管理iptables的默认命令。**
|
||||
|
||||
# firewall-cmd --add-service=http
|
||||
|
||||
**NOTE**: Make notice that using this rule will lose its effect after a system reboot or firewalld service restart, because it opens on-fly rules, which are not applied permanently. To apply consistency iptables rules on firewall use –permanent option and restart firewalld service to take effect.
|
||||
**注意**:上面的命令会在系统重启或者firewalld服务重启后失效,因为它是即时规则,不会永久生效。要使iptables规则在防火墙中持久化,使用--permanent选项并重启firewalld服务来生效。
|
||||
|
||||
# firewall-cmd --permanent --add-service=http
|
||||
# systemctl restart firewalld
|
||||
|
||||

|
||||
|
||||
Enable Firewall in CentOS 7
|
||||
在CentOS 7中启用Firewall
|
||||
|
||||
Other important Firewalld options are presented below:
|
||||
下面是firewalld其他的重要选项:
|
||||
|
||||
# firewall-cmd --state
|
||||
# firewall-cmd --list-all
|
||||
@@ -67,37 +67,38 @@ Other important Firewalld options are presented below:
|
||||
# firewall-cmd --query-service service_name
|
||||
# firewall-cmd --add-port=8080/tcp
|
||||
|
||||
**4. To verify Apache functionality open a remote browser and type your server IP Address using HTTP protocol on URL (http://server_IP), and a default page should appear like in the screenshot below.**
|
||||
**4. 要验证apache的功能,打开一个远程浏览器并使用http协议输入你服务器的ip地址(http://server_IP), 应该会显示下图中的默认页面。**
|
||||
|
||||

|
||||
|
||||
Apache Default Page
|
||||
Apache默认页
|
||||
|
||||
**5. For now, Apache DocumentRoot path it’s set to /var/www/html system path, which by default doesn’t provide any index file. If you want to see a directory list of your DocumentRoot path open Apache welcome configuration file and set Indexes statement from – to + on <LocationMach> directive, using the below screenshot as an example.**
|
||||
**5. 目前,Apache的DocumentRoot路径设置在/var/www/html,该目录中默认没有提供任何index文件。如果你想要看见该目录下的文件列表,打开apache的欢迎配置文件,把 <LocationMatch> 指令中 Indexes 前的符号从 - 改成 +,下面的截图就是一个例子。**
|
||||
|
||||
# nano /etc/httpd/conf.d/welcome.conf
|
||||
|
||||

|
||||
|
||||
Apache Directory Listing
|
||||
Apache目录列表
|
||||
|
||||
**6. Close the file, restart Apache service to reflect changes and reload your browser page to see the final result.**
|
||||
**6. 关闭文件,重启apache服务来使设置生效,重载页面来看最终效果。**
|
||||
|
||||
# systemctl restart httpd
|
||||
|
||||

|
||||
|
||||
Apache Index File
|
||||
Apache Index 文件
|
||||
|
||||
### Step 2: Install PHP5 Support for Apache ###
|
||||
### 第二步: 为Apache安装php5支持 ###
|
||||
|
||||
**7. Before installing PHP5 dynamic language support for Apache, get a full list of available PHP modules and extensions using the following command.**
|
||||
|
||||
**7. 在为apache安装php支持之前,使用下面的命令得到所有可用的php模块和扩展。**
|
||||
|
||||
# yum search php
|
||||
|
||||

|
||||
|
||||
Install PHP in CentOS 7
|
||||
在CentOS 7中安装PHP
|
||||
|
||||
**8. Depending on what type of applications you want to use, install the required PHP modules from the above list, but for a basic MariaDB support in PHP and PhpMyAdmin you need to install the following modules.**
|
||||
|
||||
@@ -140,22 +141,22 @@ Set Timezone in PHP
|
||||
|
||||

|
||||
|
||||
Install MariaDB in CentOS 7
|
||||
在CentOS 7中安装MariaDB
|
||||
|
||||
**12. After MariaDB package is installed, start database daemon and use mysql_secure_installation script to secure database (set root password, disable remotely logon from root, remove test database and remove anonymous users).**
|
||||
**12. 安装MariaDB后,启动数据库守护进程并使用mysql_secure_installation脚本来保护数据库(设置root密码、禁止root远程登录、移除测试数据库、移除匿名用户)。**
|
||||
|
||||
# systemctl start mariadb
|
||||
# mysql_secure_installation
|
||||
|
||||

|
||||
|
||||
Start MariaDB Database
|
||||
启动MariaDB数据库
|
||||
|
||||

|
||||
|
||||
Secure MySQL Installation
|
||||
MySQL安全设置
|
||||
|
||||
**13. To test database functionality login to MariaDB using its root account and exit using quit statement.**
|
||||
**13. 要测试数据库功能,使用root账户登录MariaDB并用quit退出。**
|
||||
|
||||
mysql -u root -p
|
||||
MariaDB > SHOW VARIABLES;
|
||||
@@ -163,27 +164,27 @@ Secure MySQL Installation
|
||||
|
||||

|
||||
|
||||
Connect MySQL Database
|
||||
连接MySQL数据库
|
||||
|
||||
### Step 4: Install PhpMyAdmin ###
|
||||
### 第四步: 安装PhpMyAdmin ###
|
||||
|
||||
**14. By default official RHEL 7.0 or CentOS 7.0 repositories doesn’t provide any binary package for PhpMyAdmin Web Interface. If you are uncomfortable using MySQL command line to manage your database you can install PhpMyAdmin package by enabling CentOS 7.0 rpmforge repositories using the following command.**
|
||||
**14. RHEL 7.0 或者 CentOS 7.0仓库默认没有提供PhpMyAdmin二进制安装包。如果你不习惯使用MySQL命令行来管理你的数据库,你可以通过下面的命令启用CentOS 7.0 rpmforge仓库来安装PhpMyAdmin。**
|
||||
|
||||
# yum install http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
|
||||
|
||||
After enabling rpmforge repository, next install PhpMyAdmin.
|
||||
启用rpmforge仓库后,下面安装PhpMyAdmin。
|
||||
|
||||
# yum install phpmyadmin
|
||||
|
||||

|
||||
|
||||
Enable RPMForge Repository
|
||||
启用RPMForge仓库
|
||||
|
||||
**15. Next configure PhpMyAdmin to allow connections from remote hosts by editing phpmyadmin.conf file, located on Apache conf.d directory, commenting the following lines.**
|
||||
**15. 下面配置PhpMyAdmin的phpmyadmin.conf来允许远程连接,它位于Apache conf.d目录下,并注释掉下面的行。**
|
||||
|
||||
# nano /etc/httpd/conf.d/phpmyadmin.conf
|
||||
|
||||
Use a # and comment this lines.
|
||||
使用#来注释掉行。
|
||||
|
||||
# Order Deny,Allow
|
||||
# Deny from all
|
||||
@@ -191,40 +192,40 @@ Use a # and comment this lines.
|
||||
|
||||

|
||||
|
||||
Allow Remote PhpMyAdmin Access
|
||||
允许远程PhpMyAdmin访问
|
||||
|
||||
**16. To be able to login to PhpMyAdmin Web interface using cookie authentication method add a blowfish string to phpmyadmin config.inc.php file like in the screenshot below using the [generate a secret string][6], restart Apache Web service and direct your browser to the URL address http://server_IP/phpmyadmin/.**
|
||||
**16. 要使用cookie验证方式登录PhpMyAdmin网页界面,请像下面的截图那样,使用[生成的秘密字符串][6]添加一个blowfish字符串到phpmyadmin的config.inc.php文件中,然后重启apache服务,并用浏览器打开 http://server_IP/phpmyadmin/。**
|
||||
|
||||
# nano /etc/httpd/conf.d/phpmyadmin.conf
|
||||
# systemctl restart httpd
|
||||
|
||||

|
||||
|
||||
Add Blowfish in PhpMyAdmin
|
||||
在PhpMyAdmin中添加Blowfish
|
||||
|
||||

|
||||
|
||||
PhpMyAdmin Dashboard
|
||||
PhpMyAdmin面板
|
||||
|
||||
### Step 5: Enable LAMP System-wide ###
|
||||
### 第五步: 系统范围启用LAMP ###
|
||||
|
||||
**17. If you need MariaDB and Apache services to be automatically started after reboot issue the following commands to enable them system-wide.**
|
||||
**17. 如果你需要在重启后自动运行MariaDB和Apache服务,你需要系统级地启用它们。**
|
||||
|
||||
# systemctl enable mariadb
|
||||
# systemctl enable httpd
|
||||
|
||||

|
||||
|
||||
Enable Services System Wide
|
||||
系统级启用服务
|
||||
|
||||
That’s all it takes for a basic LAMP installation on Red Hat Enterprise 7.0 or CentOS 7.0. The next series of articles related to LAMP stack on CentOS/RHEL 7.0 will discuss how to create Virtual Hosts, generate SSL Certificates and Keys and add SSL transaction support for Apache HTTP Server.
|
||||
这就是在Red Hat Enterprise 7.0或者CentOS 7.0中安装基本LAMP环境的全部过程。接下来关于CentOS/RHEL 7.0上LAMP的系列文章将会讨论如何在Apache中创建虚拟主机,生成SSL证书和密钥,以及为Apache HTTP服务器添加SSL事务支持。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://www.tecmint.com/install-lamp-in-centos-7/
|
||||
|
||||
作者:[Matei Cezar][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
431
translated/tech/20150617 The Art of Command Line.md
Normal file
431
translated/tech/20150617 The Art of Command Line.md
Normal file
@ -0,0 +1,431 @@
|
||||
命令行艺术
|
||||
================================================================================
|
||||
- [基础](#basics)
|
||||
- [日常使用](#everyday-use)
|
||||
- [处理文件和数据](#processing-files-and-data)
|
||||
- [系统调试](#system-debugging)
|
||||
- [单行程序](#one-liners)
|
||||
- [晦涩难懂,但却有用](#obscure-but-useful)
|
||||
- [更多资源](#more-resources)
|
||||
- [免责声明](#disclaimer)
|
||||
|
||||
|
||||

|
||||
|
||||
流畅地使用命令行是一个常被忽略的技能,或被认为是神秘的奥义。但是,它会以明显而微妙的方式改善你作为工程师的灵活度和生产力。这是我在Linux上工作时发现的有用的命令行使用小窍门和笔记的精粹。有些小窍门是很基础的,而有些是相当地特别、相当地复杂、或者相当地晦涩难懂。这一页不长,但是如果你能使用并记住这里的所有条目,那么你已经懂得不少了。
|
||||
|
||||
其中大部分内容[最初](http://www.quora.com/What-are-some-lesser-known-but-useful-Unix-commands)[出现](http://www.quora.com/What-are-the-most-useful-Swiss-army-knife-one-liners-on-Unix)在[Quora](http://www.quora.com/What-are-some-time-saving-tips-that-every-Linux-user-should-know)上,但是鉴于大家的关注,似乎更值得放在Github上,这里的人比我更能提出改进建议。如果你发现一个错误,或者有更好的内容,请提交issue或PR!
|
||||
|
||||
范围:
|
||||
|
||||
- 目标宽广而简洁。每个小窍门在某种情形下都很基础,或者比替代品大大节省时间。
|
||||
- 这是为Linux写的。大多数,但并非全部项目可以同样应用到MacOS(或者甚至Cygwin)。
|
||||
- 焦点集中在交互的Bash上,尽管大多数小窍门也可以应用到其它shell,以及常规Bash脚本。
|
||||
- 意在作最少说明,要想期待更多,你可以使用`man`、使用`apt-get`/`yum`/`dnf`来安装,还可以使用Google来获得更多背景知识。
|
||||
|
||||
|
||||
## 基础
|
||||
|
||||
- 学习基本Bash技能。实际上,键入`man bash`,然后至少浏览一遍所有内容;它很容易理解,并不太长。其它shell也不错,但是Bash很强大,而且总是可用(如果*只*学习zsh、fish之类,虽然在你自己的笔记本上用起来很诱人,但在很多情形下会受到限制,比如使用现有的服务器时)。
|
||||
|
||||
- 至少学好一种基于文本的编辑器。理想的一个是Vim(`vi`),因为在终端中用于随机编辑时它没有竞争者(即使大多数时候你使用Emacs,一个大型的IDE,或一个现代的时髦编辑器)。
|
||||
|
||||
- 学习使用`>`和`<`来进行输出和输入重定向,以及使用`|`来管道重定向,学习关于stdout和stderr的东西。
|
||||
|
||||
- 学习`*`(也许还有`?`和`{`...`}`)文件通配扩展和应用,以及双引号`"`和单引号`'`之间的区别。(更多内容请参看下面关于变量扩展部分)。
|
||||
|
||||
- 熟悉Bash作业管理:`&`, **ctrl-z**, **ctrl-c**, `jobs`, `fg`, `bg`, `kill`等等。
|
||||
|
||||
- 掌握`ssh`,以及通过`ssh-agent`,`ssh-add`等进行无密码验证的基础技能。
|
||||
|
||||
- 基本的文件管理:`ls`和`ls -l`(特别是,知道`ls -l`各个栏目的意义),`less`, `head`, `tail` 和`tail -f`(或者更好的`less +F`),`ln`和`ln -s`(知道硬链接和软链接的区别,以及硬链接相对于软链接的优势),`chown`,`chmod`,`du`(用于查看磁盘使用率的快速摘要:`du -sk *`),`df`, `mount`。
|
||||
|
||||
- 基本的网络管理: `ip`或`ifconfig`,`dig`。
|
||||
|
||||
- 熟知正则表达式,以及各种标识来使用`grep`/`egrep`。`-i`,`-o`,`-A`和`-B`选项值得掌握。
|
||||
|
||||
- 学会使用`apt-get`,`yum`或`dnf`(这取决于你的发行版)来查找并安装软件包。确保你可以用`pip`来安装基于Python的命令行工具(下面的一些东西可以很容易地通过`pip`安装)。
|
||||
|
||||
|
||||
## 日常使用
|
||||
|
||||
- 在Bash中,使用**ctrl-r**来搜索命令历史。
|
||||
|
||||
- 在Bash中,使用 **ctrl-w** 来删除最后一个单词,使用 **ctrl-u** 来删除整行。使用 **alt-b** 和 **alt-f** 来逐词移动,使用 **ctrl-k** 删除到行尾。请使用 `man readline` 来查看Bash中所有默认的键绑定,有很多。例如,**alt-.** 可以循环显示先前命令的参数,而 **alt-*** 可以扩展通配符。
|
||||
|
||||
- 返回先前的工作目录: `cd -`
|
||||
|
||||
- 如果你命令输入到一半,但是改变主意了,可以敲 **alt-#** 来添加一个 `#` 到开头,然后将该命令作为注释输入(或者使用 **ctrl-a**, **#**,**enter**)。然后,你可以在后面通过命令历史来返回到该命令。
|
||||
|
||||
- 使用`xargs`(或`parallel`),它很强大。注意,你可以控制每行执行多少个项目(`-L`),还可以并行执行(`-P`)。如果你不确定它是否会做正确的事情,可以先使用`xargs echo`。另外,`-I{}`也很方便。样例:
|
||||
```bash
|
||||
find . -name '*.py' | xargs grep some_function
|
||||
cat hosts | xargs -I{} ssh root@{} hostname
|
||||
```
|
||||
|
||||
- `pstree -p`对于显示进程树很有帮助。
|
||||
|
||||
- 使用`pgrep`和`pkill`来按名称查找或用信号通知进程(`-f`很有帮助)。
|
||||
|
||||
- 掌握各种可以发送给进程的信号。例如,要挂起进程,可以使用`kill -STOP [pid]`。完整的列表可以查阅`man 7 signal`。
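一个简单的示意:挂起一个后台的`sleep`,查看其状态,然后恢复并终止它:

```sh
sleep 60 &                         # 启动一个后台进程
pid=$!
kill -STOP "$pid"                  # 发送 SIGSTOP,挂起进程
state=$(ps -o stat= -p "$pid")     # 状态栏应包含 T(stopped)
echo "$state"
kill -CONT "$pid"                  # 发送 SIGCONT,恢复运行
kill "$pid"                        # 默认发送 SIGTERM 终止
wait "$pid" 2>/dev/null || true    # 回收进程,忽略非零退出码
```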
|
||||
|
||||
- 如果你想要一个后台进程一直保持运行,使用`nohup`或`disown`。
|
||||
|
||||
- 通过`netstat -lntp`检查什么进程在监听。
|
||||
|
||||
- 使用`lsof`来查看打开的套接口和文件。
|
||||
|
||||
- 在Bash脚本中,使用`set -x`调试脚本输出。每当可能时,使用严格模式。使用`set -e`在遇到错误时退出。也可以使用`set -o pipefail`,对错误严格(虽然该话题有点敏感)。对于更复杂的脚本,也可以使用`trap`。
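一个最小的严格模式脚本骨架示意(调试时还可以再加上`set -x`;`trap ... ERR`是Bash的特性):

```sh
set -euo pipefail                             # 遇错退出、未定义变量报错、管道中任一环节出错即失败
trap 'echo "error at line $LINENO" >&2' ERR   # 出错时把行号报告到 stderr

tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT                  # 退出时清理临时文件
echo "hello" > "$tmpfile"
grep hello "$tmpfile"                         # 打印 hello;若失败,上面的 ERR trap 会触发
```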
|
||||
|
||||
- 在Bash脚本中,子shell(写在括号中的)是集合命令的便利的方式。一个常见的例子是临时移动到一个不同的工作目录,如:
|
||||
```bash
|
||||
# do something in current dir
|
||||
(cd /some/other/dir; other-command)
|
||||
# continue in original dir
|
||||
```
|
||||
|
||||
- 注意,在Bash中有大量各种各样的变量扩展。检查一个变量是否存在:`${name:?error message}`。例如,如果一个Bash脚本要求一个单一参数,只需写`input_file=${1:?usage: $0 input_file}`。算术扩展:`i=$(( (i + 1) % 5 ))`。序列:`{1..10}`。修剪字符串:`${var%suffix}`和`${var#prefix}`。例如,如果`var=foo.pdf`,那么`echo ${var%.pdf}.txt`会打印出`foo.txt`。
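上面几种扩展可以用下面的片段快速验证(`set -- report.csv`只是模拟脚本收到了一个位置参数):

```sh
var=foo.pdf
echo "${var%.pdf}.txt"           # 修剪后缀:foo.txt
path=src/main/app.py
echo "${path#*/}"                # 修剪最短的前缀匹配:main/app.py
i=4
i=$(( (i + 1) % 5 ))             # 算术扩展:i 变为 0
echo "$i"
echo {1..5}                      # 序列:1 2 3 4 5
set -- report.csv                # 模拟脚本的第一个位置参数
input_file=${1:?usage: $0 input_file}
echo "$input_file"               # report.csv;若 $1 为空则报错退出
```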
|
||||
|
||||
- 命令的输出可以通过`<(some command)`作为一个文件来处理。例如,将本地的`/etc/hosts`和远程的比较:
|
||||
```sh
|
||||
diff /etc/hosts <(ssh somehost cat /etc/hosts)
|
||||
```
|
||||
|
||||
- 知道Bash中的“嵌入文档”,就像在`cat <<EOF ...`中。
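“嵌入文档”(here document)的一个小示意;注意带引号的`<<'EOF'`会禁止其中的变量扩展:

```sh
answer=42
cat <<EOF > expanded.txt
value=$answer
EOF
cat <<'EOF' > literal.txt
value=$answer
EOF
cat expanded.txt    # value=42
cat literal.txt     # value=$answer
```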
|
||||
|
||||
- 在Bash中,通过`some-command >logfile 2>&1`同时重定向标准输出和标准错误。通常,为确保某个命令不会在标准输入上遗留一个绑定到当前终端的打开文件句柄,添加`</dev/null`是个不错的做法。
|
||||
|
||||
- 使用带有十六进制和十进制值的`man ascii`作为一个好的ASCII表。对于常规编码信息,`man unicode`,`man utf-8`和`man latin1`将很有帮助。
|
||||
|
||||
- 使用`screen`或`tmux`来复用终端会话,这对于远程ssh会话尤为有用,还可以用它们来分离并重连会话。另一个只做会话保持的极简替代方案是`dtach`。
|
||||
|
||||
- 在ssh中,知道如何使用`-L`或`-D`(偶尔也用`-R`)来进行端口转发很有用,比如从一台远程服务器访问网站。
|
||||
|
||||
- 优化你的ssh配置很有用;例如,下面这个`~/.ssh/config`包含的设置可以避免在某些网络环境中连接被断开,启用压缩(这对于通过低带宽连接使用scp很有用),并使用一个本地控制文件来复用到同一台服务器的多个连接:
|
||||
```
|
||||
TCPKeepAlive=yes
|
||||
ServerAliveInterval=15
|
||||
ServerAliveCountMax=6
|
||||
Compression=yes
|
||||
ControlMaster auto
|
||||
ControlPath /tmp/%r@%h:%p
|
||||
ControlPersist yes
|
||||
```
|
||||
|
||||
- 其它一些ssh选项与安全相关,启用时要小心,比如只针对特定子网、主机或可信网络启用:`StrictHostKeyChecking=no`, `ForwardAgent=yes`
|
||||
|
||||
- 要获得八进制格式的文件权限(系统配置中常用这种形式,但`ls`不显示它,人工换算又容易出错),可以使用类似这样的命令:
|
||||
```sh
|
||||
stat -c '%A %a %n' /etc/timezone
|
||||
```
|
||||
|
||||
- 对于从另一个命令的输出结果中交互选择值,可以使用[`percol`](https://github.com/mooz/percol)。
|
||||
|
||||
- 对于基于另一个命令(如`git`)输出的文件进行交互操作,可以使用`fpp`([PathPicker](https://github.com/facebook/PathPicker))。
|
||||
|
||||
- 对于为当前目录(及子目录)中的所有文件构建一个简单的网络服务器,让网络中的任何人都可以获取,可以使用:
|
||||
`python -m SimpleHTTPServer 7777` (对于端口7777和Python 2)。
|
||||
|
||||
|
||||
## 处理文件和数据
|
||||
|
||||
- 要在当前目录中按名称定位文件,`find . -iname '*something*'`(或者相类似的)。要按名称查找任何地方的文件,使用`locate something`(但请记住,`updatedb`可能还没有索引最近创建的文件)。
|
||||
|
||||
- 对于通过源或数据文件进行的常规搜索(比`grep -r`更高级),使用[`ag`](https://github.com/ggreer/the_silver_searcher)。
|
||||
|
||||
- 要将HTML转成文本:`lynx -dump -stdin`
|
||||
|
||||
- 对于Markdown、HTML,以及所有类型的文档转换,可以试试[`pandoc`](http://pandoc.org/)。
|
||||
|
||||
- 如果你必须处理XML,`xmlstarlet`虽然有点老旧,但是很好用。
|
||||
|
||||
- 对于JSON,使用`jq`。
|
||||
|
||||
- 对于Excel或CSV文件,[csvkit](https://github.com/onyxfish/csvkit)提供了`in2csv`,`csvcut`,`csvjoin`,`csvgrep`等工具。
|
||||
|
||||
- 对于亚马逊 S3 [`s3cmd`](https://github.com/s3tools/s3cmd)会很方便,而[`s4cmd`](https://github.com/bloomreach/s4cmd)则更快速。亚马逊的[`aws`](https://github.com/aws/aws-cli)则是其它AWS相关任务的必备。
|
||||
|
||||
- 知道`sort`和`uniq`,包括uniq的`-u`和`-d`选项——参见下面的单行程序。
|
||||
|
||||
- 掌握`cut`,`paste`和`join`,用于处理文本文件。很多人会使用`cut`,但常常忘了`join`。
|
||||
|
||||
- 知道本地环境以微妙的方式对命令行工具产生大量的影响,包括排序的顺序(整理)以及性能。大多数Linux安装会设置`LANG`或其它本地环境变量为本地设置,比如像美国英语。但是,你要明白,如果改变了本地环境,那么排序也将改变。而且i18n例行程序可以让排序或其它命令的运行慢*好多倍*。在某些情形中(如像下面那样的设置操作或唯一性操作),你完全可以安全地忽略慢的i18n例行程序,然后使用传统的基于字节的排序顺序`export LC_ALL=C`。
|
||||
|
||||
- 知道基本的用以数据改动的`awk`和`sed`技能。例如,计算某个文本文件第三列所有数字的和:`awk '{ x += $3 } END { print x }'`。这可能比Python的同等操作要快3倍,而且要短3倍。
|
||||
|
||||
- 在一个或多个文件中,就地替换某个字符串的所有出现:
|
||||
```sh
|
||||
perl -pi.bak -e 's/old-string/new-string/g' my-files-*.txt
|
||||
```
|
||||
|
||||
- 要立即根据某个样式对大量文件重命名,使用`rename`。对于复杂的重命名,[`repren`](https://github.com/jlevy/repren)可以帮助你达成。
|
||||
```sh
|
||||
# Recover backup files foo.bak -> foo:
|
||||
rename 's/\.bak$//' *.bak
|
||||
# Full rename of filenames, directories, and contents foo -> bar:
|
||||
repren --full --preserve-case --from foo --to bar .
|
||||
```
|
||||
|
||||
- 使用`shuf`来打乱文件中的行,或从文件中随机选取行。
|
||||
|
||||
- 知道`sort`的选项。知道这些键是怎么工作的(`-t`和`-k`)。特别是,注意你需要写`-k1,1`来只通过第一个字段排序;`-k1`意味着根据整行排序。
|
||||
|
||||
- 稳定排序(`sort -s`)会很有用。例如,要首先按字段2排序,然后再按字段1排序,你可以使用`sort -k1,1 | sort -s -k2,2`
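一个小示意,展示`-k`的写法以及两次排序的稳定组合(数据是临时编造的):

```sh
printf 'b 2\na 1\nb 1\na 2\n' > data.txt
sort -k1,1 data.txt                  # 只按第一个字段排序
sort -t' ' -k2,2n data.txt           # 按第二个字段做数值排序,-t 指定分隔符
sort -k2,2 data.txt | sort -s -k1,1  # 先按字段2排,再稳定地按字段1排
```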
|
||||
|
||||
- 如果你需要在Bash命令行中输入一个制表符字面量(例如,作为`sort`的`-t`参数),可以按**ctrl-v** **[Tab]**,或者写`$'\t'`(后者更好,因为你可以复制/粘贴)。
|
||||
|
||||
- 对于二进制文件,使用`hd`进行简单十六进制转储,以及`bvi`用于二进制编辑。
|
||||
|
||||
- 同样对于二进制文件,`strings`(配合`grep`等)可以让你从中找出文本片段。
|
||||
|
||||
- 要转换文本编码,可以试试`iconv`,更高级的需求则用`uconv`;它支持一些高级的Unicode功能。例如,这个命令可以把文本全部转为小写,并(通过展开再丢弃的方式)移除所有重音符号:
|
||||
```sh
|
||||
uconv -f utf-8 -t utf-8 -x '::Any-Lower; ::Any-NFD; [:Nonspacing Mark:] >; ::Any-NFC; ' < input.txt > output.txt
|
||||
```
|
||||
|
||||
- 要将文件分割成各个部分,来看看`split`(按大小分割)和`csplit`(按模式分割)吧。
|
||||
|
||||
- 使用`zless`,`zmore`,`zcat`和`zgrep`来操作压缩文件。
|
||||
|
||||
|
||||
## 系统调试
|
||||
|
||||
- 对于网络调试,`curl`和`curl -I`很方便灵活,或者也可以使用同类的`wget`,或者更现代的[`httpie`](https://github.com/jakubroztocil/httpie)。
|
||||
|
||||
- 要知道disk/cpu/network的状态,使用`iostat`,`netstat`,`top`(或更好的`htop`)和(特别是)`dstat`。它们对于快速获知系统中发生的状况很好用。
|
||||
|
||||
- 对于更深层次的系统总览,可以使用[`glances`](https://github.com/nicolargo/glances)。它会在一个终端窗口中为你呈现几个系统层次的统计数据,对于快速检查各个子系统很有帮助。
|
||||
|
||||
- 要了解内存状态,可以运行`free`和`vmstat`,并看懂它们的输出。特别是,要知道“cached”值是Linux内核用作文件缓存的内存,因此它实际上应算入“free”值。
|
||||
|
||||
- Java系统调试则是另一回事,但对于Oracle及其它一些JVM有个简单的小技巧:运行`kill -3 <pid>`,完整的堆栈追踪和堆摘要(包括分代垃圾收集的细节,信息量很大)将被转储到stderr/日志。
|
||||
|
||||
- 使用`mtr`作为更好的路由追踪,来识别网络问题。
|
||||
|
||||
- 对于查看磁盘满载的原因,`ncdu`会比常规命令如`du -sh *`更节省时间。
|
||||
|
||||
- 要查找占用带宽的套接口和进程,试试`iftop`或`nethogs`吧。
|
||||
|
||||
- (Apache附带的)`ab`工具对于临时应急检查网络服务器性能很有帮助。对于更复杂的负载测试,可以试试`siege`。
|
||||
|
||||
- 对于更重型的网络调试,可以用`wireshark`,`tshark`或`ngrep`。
|
||||
|
||||
- 掌握`strace`和`ltrace`。如果某个程序失败、挂起或崩溃,而你又不知道原因,或者如果你想要获得性能的大概信息,这些工具会很有帮助。注意,分析选项(`-c`)和关联运行进程的能力(`-p`)。
|
||||
|
||||
- 掌握`ldd`来检查共享库等。
|
||||
|
||||
- 知道如何使用`gdb`来连接到一个运行着的进程并获取其堆栈追踪信息。
|
||||
|
||||
- 使用`/proc`。在调试现场问题时,它有时会出奇地有帮助。样例:`/proc/cpuinfo`,`/proc/xxx/cwd`,`/proc/xxx/exe`,`/proc/xxx/fd/`,`/proc/xxx/smaps`。
|
||||
|
||||
- 当调试过去某个东西出错时,`sar`会非常有帮助。它显示了CPU、内存、网络等的历史统计数据。
|
||||
|
||||
- 对于更深层的系统和性能分析,看看`stap` ([SystemTap](https://sourceware.org/systemtap/wiki)),[`perf`](http://en.wikipedia.org/wiki/Perf_(Linux))和[`sysdig`](https://github.com/draios/sysdig)吧。
|
||||
|
||||
- 确认正在使用的是哪个Linux发行版(大多数发行版可用):`lsb_release -a`。
|
||||
|
||||
- 每当某个东西的行为异常时(可能是硬件或驱动问题),使用`dmesg`。
|
||||
|
||||
|
||||
## 单行程序
|
||||
|
||||
将命令拼凑在一起的一些样例:
|
||||
|
||||
- 当需要通过`sort`/`uniq`对文本文件做集合的并、交、差运算时,下面的例子会相当有帮助。假定`a`和`b`是已经去重的文本文件。这会很快,而且可以处理任意大小的文件,总计可达数千兆字节。(sort不受内存限制,不过如果`/tmp`位于一个容量小的根分区上,你可能需要使用`-T`选项。)也可参见上面关于`LC_ALL`的注解。
|
||||
```sh
|
||||
cat a b | sort | uniq > c # c is a union b
|
||||
cat a b | sort | uniq -d > c # c is a intersect b
|
||||
cat a b b | sort | uniq -u > c # c is set difference a - b
|
||||
```
|
||||
|
||||
- 对某个文本文件的第三列中所有数据进行求和(该例子可能比同等功能的Python要快3倍,而且代码也少于其3倍):
|
||||
```sh
|
||||
awk '{ x += $3 } END { print x }' myfile
|
||||
```
|
||||
|
||||
- 如果想要查看某个文件树的大小/日期,该例子就像一个递归`ls -l`,但是比`ls -lR`要更容易读懂:
|
||||
```sh
|
||||
find . -type f -ls
|
||||
```
|
||||
|
||||
- 只要可以,请使用`xargs`或`parallel`。注意,你可以控制每行执行多少个项目(`-L`),以及并行执行的数量(`-P`)。如果你不确定它是否会做正确的事,可以先使用`xargs echo`预览。同时,`-I{}`很灵便。样例:
|
||||
```sh
|
||||
find . -name '*.py' | xargs grep some_function
|
||||
cat hosts | xargs -I{} ssh root@{} hostname
|
||||
```
|
||||
|
||||
- 比如说,你有一个文本文件,如网络服务器的日志,其中某些行包含某个特定的值,比如URL中出现的`acct_id`参数。如果你想统计每个`acct_id`的请求数:
|
||||
```sh
|
||||
cat access.log | egrep -o 'acct_id=[0-9]+' | cut -d= -f2 | sort | uniq -c | sort -rn
|
||||
```
|
||||
|
||||
- 运行该函数来获得来自文档的随机提示(解析Markdown并从中提取某个项目):
|
||||
```sh
|
||||
function taocl() {
|
||||
curl -s https://raw.githubusercontent.com/jlevy/the-art-of-command-line/master/README.md |
|
||||
pandoc -f markdown -t html |
|
||||
xmlstarlet fo --html --dropdtd |
|
||||
xmlstarlet sel -t -v "(html/body/ul/li[count(p)>0])[$RANDOM mod last()+1]" |
|
||||
xmlstarlet unesc | fmt -80
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## 晦涩难懂,但却有用
|
||||
|
||||
- `expr`:执行算术或布尔运算,或者求正则表达式的值
|
||||
|
||||
- `m4`:简单宏处理器
|
||||
|
||||
- `screen`:强大的终端多路复用和会话保持
|
||||
|
||||
- `yes`:大量打印一个字符串
|
||||
|
||||
- `cal`:漂亮的日历
|
||||
|
||||
- `env`:运行一个命令(脚本中很有用)
|
||||
|
||||
- `look`:查找以某个字符串开头的英文单词(或文件中的行)
|
||||
|
||||
- `cut `和`paste`以及`join`:数据处理
|
||||
|
||||
- `fmt`:格式化文本段落
|
||||
|
||||
- `pr`:格式化文本为页/栏
|
||||
|
||||
- `fold`:将文本折行
|
||||
|
||||
- `column`:格式化文本为栏或表
|
||||
|
||||
- `expand`和`unexpand`:在制表和空格间转换
|
||||
|
||||
- `nl`:添加行号
|
||||
|
||||
- `seq`:打印数字
|
||||
|
||||
- `bc`:计算器
|
||||
|
||||
- `factor`:把整数因子分解
|
||||
|
||||
- `gpg`:加密并为文件签名
|
||||
|
||||
- `toe`:terminfo条目表
|
||||
|
||||
- `nc`:网络调试和数据传输
|
||||
|
||||
- `ngrep`:在网络层中进行grep
|
||||
|
||||
- `dd`:在文件或设备间移动数据
|
||||
|
||||
- `file`:识别文件类型
|
||||
|
||||
- `stat`:文件信息
|
||||
|
||||
- `tac`:逆序打印文件
|
||||
|
||||
- `shuf`:从文件中随机选择行
|
||||
|
||||
- `comm`:逐行对比分类排序的文件
|
||||
|
||||
- `hd`和`bvi`:转储或编辑二进制文件
|
||||
|
||||
- `strings`:从二进制文件提取文本
|
||||
|
||||
- `tr`:字符转译或处理
|
||||
|
||||
- `iconv `或`uconv`:文本编码转换
|
||||
|
||||
- `split `和`csplit`:分割文件
|
||||
|
||||
- `7z`:高比率文件压缩
|
||||
|
||||
- `ldd`:动态库信息
|
||||
|
||||
- `nm`:目标文件的符号
|
||||
|
||||
- `ab`:网络服务器基准测试
|
||||
|
||||
- `strace`:系统调用调试
|
||||
|
||||
- `mtr`:用于网络调试的更好的路由追踪
|
||||
|
||||
- `cssh`:可视化并发shell
|
||||
|
||||
- `wireshark`和`tshark`:抓包和网络调试
|
||||
|
||||
- `host`和`dig`:DNS查询
|
||||
|
||||
- `lsof`:进程的文件描述符和套接口信息
|
||||
|
||||
- `dstat`:有用的系统统计数据
|
||||
|
||||
- [`glances`](https://github.com/nicolargo/glances):高级,多子系统概览
|
||||
|
||||
- `iostat`:CPU和磁盘使用率统计
|
||||
|
||||
- `htop`:top的改进版
|
||||
|
||||
- `last`:登录历史
|
||||
|
||||
- `w`:谁登录进来了
|
||||
|
||||
- `id`:用户/组身份信息
|
||||
|
||||
- `sar`:历史系统统计数据
|
||||
|
||||
- `iftop`或`nethogs`:按套接口或进程的网络使用率
|
||||
|
||||
- `ss`:套接口统计数据
|
||||
|
||||
- `dmesg`:启动和系统错误信息
|
||||
|
||||
- `hdparm`:SATA/ATA磁盘操作/性能
|
||||
|
||||
- `lsb_release`:Linux发行版信息
|
||||
|
||||
- `lshw`:硬件信息
|
||||
|
||||
- `fortune`,`ddate`和`sl`:嗯,好吧,这取决于你是否认为蒸汽火车和Zippy语录“有用”
|
||||
|
||||
|
||||
## 更多资源
|
||||
|
||||
- [超棒的shell](https://github.com/alebcay/awesome-shell): 一个shell工具和资源一览表。
|
||||
- [严格模式](http://redsymbol.net/articles/unofficial-bash-strict-mode/) 用于写出更佳的shell脚本。
|
||||
|
||||
|
||||
## 免责声明
|
||||
|
||||
除了非常小的任务外,代码都是写出来给别人读的。伴随力量而来的是责任。你*能*在Bash中做某件事,并不意味着你就*应该*去做!;)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://github.com/jlevy/the-art-of-command-line
|
||||
|
||||
作者:[jlevy][a]
|
||||
译者:[GOLinux](https://github.com/GOLinux)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://github.com/jlevy
|
||||
@ -0,0 +1,264 @@
|
||||
在Ubuntu下用不同方式安装Node.JS
|
||||
================================================================================
|
||||
|
||||
如果你要在Ubuntu 15.04上安装Node.js,这篇教程正适合你。Node.js本质上是一个运行在服务端的JavaScript运行环境,它巧妙地使用单线程事件循环(event loop)来处理异步IO,并在平台层面提供了非常实用的文件读写与网络操作功能。本文将展示在Ubuntu 15.04服务器上安装Node.js的几种不同方式。
|
||||
|
||||
### 安装Node.JS 的方法###
|
||||
|
||||
有许多不同的方法安装Node.JS,我们可以选择其一。通过本篇文章我们将手把手带着你在Ubuntu 15.04上安装Node.Js,在此之前请卸载旧版本的包以免发生包冲突。
|
||||
|
||||
- 从源代码安装Node.JS
|
||||
- 用包管理器安装Node.JS
|
||||
- 从Github远程库安装Node.JS
|
||||
- 用NVM安装Node.JS
|
||||
|
||||
### 1) 从源代码安装Node.JS ###
|
||||
|
||||
在开始从源代码安装Node.JS之前,请确认系统上所有的依赖包都已经更新到最新版本。然后跟着以下步骤开始安装:
|
||||
|
||||
#### 步骤1: 升级系统 ####
|
||||
|
||||
用以下命令来升级系统,并且安装一些Node.JS必要的包。
|
||||
|
||||
root@ubuntu-15:~# apt-get update
|
||||
|
||||
root@ubuntu-15:~# apt-get install python gcc make g++
|
||||
|
||||
#### 步骤2: 获取Node.JS的源代码 ####
|
||||
|
||||
安装好依赖包之后我们可以从官方网站上下载Node.JS的源代码。下载以及解压的命令如下:
|
||||
|
||||
root@ubuntu-15:~# wget http://nodejs.org/dist/v0.12.4/node-v0.12.4.tar.gz
|
||||
root@ubuntu-15:~# tar zxvf node-v0.12.4.tar.gz
|
||||
|
||||
#### 步骤3: 开始安装 ####
|
||||
|
||||
现在我们进入源代码目录,然后运行`./configure`脚本
|
||||
|
||||

|
||||
|
||||
root@ubuntu-15:~# ls
|
||||
node-v0.12.4 node-v0.12.4.tar.gz
|
||||
root@ubuntu-15:~# cd node-v0.12.4/
|
||||
root@ubuntu-15:~/node-v0.12.4# ./configure
|
||||
root@ubuntu-15:~/node-v0.12.4# make install
|
||||
|
||||
### 安装后测试 ###
|
||||
|
||||
只要运行完上面的命令,Node.JS就安装好了。现在我们来确认一下版本信息,并测试一下Node.JS是否可以正常运行。
|
||||
|
||||
root@ubuntu-15:~/node-v0.12.4# node -v
|
||||
v0.12.4
|
||||
|
||||

|
||||
|
||||
创建一个以.js为扩展名的文件然后用Node的命令运行
|
||||
|
||||
root@ubuntu-15:~/node-v0.12.4# touch helo_test.js
|
||||
root@ubuntu-15:~/node-v0.12.4# vim helo_test.js
|
||||
console.log('Hello World');
|
||||
|
||||
现在我们用Node的命令运行文件
|
||||
|
||||
root@ubuntu-15:~/node-v0.12.4# node helo_test.js
|
||||
Hello World
|
||||
|
||||
输出的结果证明我们已经成功的在Ubuntu 15.04安装好了Node.JS,同时我们也能运行JavaScript文件。
|
||||
|
||||
### 2) 利用包管理器安装Node.JS ###
|
||||
|
||||
在Ubuntu下用包管理器安装Node.JS是非常简单的,只要增加NodeSource的个人软件包档案(PPA)即可。
|
||||
|
||||
下面我们将通过PPA安装Node.JS
|
||||
|
||||
#### 步骤1: 用curl获取源代码 ####
|
||||
|
||||
在用curl获取源代码之前,我们必须先升级操作系统,然后用curl命令将NodeSource添加到本地仓库。
|
||||
|
||||
root@ubuntu-15:~#apt-get update
|
||||
root@ubuntu-15:~# curl -sL https://deb.nodesource.com/setup | sudo bash -
|
||||
|
||||
curl将运行以下任务
|
||||
|
||||
## Installing the NodeSource Node.js 0.10 repo...
|
||||
## Populating apt-get cache...
|
||||
## Confirming "vivid" is supported...
|
||||
## Adding the NodeSource signing key to your keyring...
|
||||
## Creating apt sources list file for the NodeSource Node.js 0.10 repo...
|
||||
## Running `apt-get update` for you...
|
||||
Fetched 6,411 B in 5s (1,077 B/s)
|
||||
Reading package lists... Done
|
||||
## Run `apt-get install nodejs` (as root) to install Node.js 0.10 and npm
|
||||
|
||||
#### 步骤2: 安装NodeJS和NPM ####
|
||||
|
||||
运行以上命令之后如果输出如上所示,我们可以用apt-get命令来安装NodeJS和NPM包。
|
||||
|
||||
root@ubuntu-15:~# apt-get install nodejs
|
||||
|
||||

|
||||
|
||||
#### 步骤3: 安装一些必备的工具 ####
|
||||
|
||||
通过以下命令安装构建基础工具,编译安装本地插件时会用到它们。
|
||||
|
||||
root@ubuntu-15:~# apt-get install -y build-essential
|
||||
|
||||
### 通过Node.JS Shell来测试 ###
|
||||
|
||||
测试Node.JS的步骤与之前使用源代码安装相似,通过以下node命令来确认Node.JS是否完全安装好:
|
||||
|
||||
root@ubuntu-15:~# node
|
||||
> console.log('Node.js Installed Using Package Manager');
|
||||
Node.js Installed Using Package Manager
|
||||
|
||||
----------
|
||||
|
||||
root@ubuntu-15:~# node
|
||||
> a = [1,2,3,4,5]
|
||||
[ 1, 2, 3, 4, 5 ]
|
||||
> typeof a
|
||||
'object'
|
||||
> 5 + 2
|
||||
7
|
||||
>
|
||||
(^C again to quit)
|
||||
>
|
||||
root@ubuntu-15:~#
|
||||
|
||||
### 使用NodeJS应用进行简单的测试 ###
|
||||
|
||||
REPL是一个Node.js的shell,任何有效的JavaScript代码都能在REPL中运行。下面让我们看看Node.JS的REPL是什么样子吧。
|
||||
|
||||
root@ubuntu-15:~# node
|
||||
> var repl = require("repl");
|
||||
undefined
|
||||
> repl.start("> ");
|
||||
|
||||
按回车后会显示如下输出:
|
||||
> { domain: null,
|
||||
_events: {},
|
||||
_maxListeners: 10,
|
||||
useGlobal: false,
|
||||
ignoreUndefined: false,
|
||||
eval: [Function],
|
||||
inputStream:
|
||||
{ _connecting: false,
|
||||
_handle:
|
||||
{ fd: 0,
|
||||
writeQueueSize: 0,
|
||||
owner: [Circular],
|
||||
onread: [Function: onread],
|
||||
reading: true },
|
||||
_readableState:
|
||||
{ highWaterMark: 0,
|
||||
buffer: [],
|
||||
length: 0,
|
||||
pipes: null,
|
||||
...
|
||||
...
|
||||
|
||||
以下是可以在REPL下使用的命令列表
|
||||
|
||||

|
||||
|
||||
### 使用NodeJS的包管理器 ###
|
||||
|
||||
NPM是Node.js的包管理命令行工具,它通过package.json来安装和管理包及其依赖。我们从初始化命令`npm init`开始
|
||||
|
||||
root@ubuntu-15:~# npm init
|
||||
|
||||

|
||||
|
||||
### 3) 从Github远程库安装Node.JS ###
|
||||
|
||||
在这个方法中,我们需要几个步骤把Node.JS从Github的远程仓库克隆到本地目录
|
||||
|
||||
在开始克隆(clone)并配置之前,我们要先安装以下依赖包
|
||||
|
||||
root@ubuntu-15:~# apt-get install g++ curl make libssl-dev apache2-utils git-core
|
||||
|
||||
现在我们用git命令将仓库克隆到本地,并进入该目录
|
||||
|
||||
root@ubuntu-15:~# git clone git://github.com/ry/node.git
|
||||
root@ubuntu-15:~# cd node/
|
||||
|
||||

|
||||
|
||||
克隆(clone)仓库之后,运行`./configure`命令来生成编译配置。
|
||||
|
||||
root@ubuntu-15:~# ./configure
|
||||
|
||||

|
||||
|
||||
运行make install命令之后耐心等待几分钟,程序将会安装好Node.JS
|
||||
|
||||
root@ubuntu-15:~/node# make install
|
||||
|
||||
root@ubuntu-15:~/node# node -v
|
||||
v0.13.0-pre
|
||||
|
||||
### 测试Node.JS ###
|
||||
|
||||
root@ubuntu-15:~/node# node
|
||||
> a = [1,2,3,4,5,6,7]
|
||||
[ 1, 2, 3, 4, 5, 6, 7 ]
|
||||
> typeof a
|
||||
'object'
|
||||
> 6 + 5
|
||||
11
|
||||
>
|
||||
(^C again to quit)
|
||||
>
|
||||
root@ubuntu-15:~/node#
|
||||
|
||||
### 4) 通过NVM安装Node.JS ###
|
||||
|
||||
在最后一种方法中,我们将用NVM来安装Node.JS,这也比较容易。这是安装和配置Node.JS最好的方法之一,它允许我们选择要安装的版本。
|
||||
|
||||
在安装之前,请确认本机以前的安装包已经被卸载
|
||||
|
||||
#### 步骤1: 安装依赖包 ####
|
||||
|
||||
首先升级Ubuntu Server系统,然后安装Node.JS和NVM所需的依赖包,再用curl命令从git上下载NVM的安装脚本并执行:
|
||||
|
||||
root@ubuntu-15:~# apt-get install build-essential libssl-dev
|
||||
root@ubuntu-15:~# curl https://raw.githubusercontent.com/creationix/nvm/v0.16.1/install.sh | sh
|
||||
|
||||

|
||||
|
||||
#### 步骤2: 修改Home环境 ####
|
||||
|
||||
用curl将NVM必需的包下载到用户的home目录之后,我们需要修改bash的配置文件来加入NVM;之后只要重新登录终端,或者用如下命令重新加载配置即可
|
||||
|
||||
root@ubuntu-15:~# source ~/.profile
|
||||
|
||||
现在我们可以用如下命令列出已安装的版本,并设置默认使用的Node版本:
|
||||
|
||||
root@ubuntu-15:~# nvm ls
|
||||
root@ubuntu-15:~# nvm alias default 0.12.4
|
||||
|
||||

|
||||
|
||||
#### 步骤3: 使用NVM ####
|
||||
|
||||
我们已经通过NVM成功的安装了Node.JS,所以我们现在可以使用各种有用的命令。
|
||||
|
||||

|
||||
|
||||
### 总结 ###
|
||||
|
||||
现在我们已经准备好了在服务端安装Node.JS,你可以从我们说的四种方式中选择最合适你的方式在最新的Ubuntu 15.04上来安装Node.JS,安装好之后你就可以利用Node.JS来编写你的代码。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: http://linoxide.com/ubuntu-how-to/setup-node-js-ubuntu-15-04-different-methods/
|
||||
|
||||
作者:[Kashif Siddique][a]
|
||||
译者:[NearTan](https://github.com/NearTan)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://linoxide.com/author/kashifs/
|