Merge pull request #37 from LCTT/master

update 2015/12/12
This commit is contained in:
Yu Ye 2015-12-12 15:47:08 +08:00
commit dcad07e3ec
123 changed files with 8225 additions and 4130 deletions

README.md

@ -51,113 +51,117 @@ LCTT的组成
* 2014/12/25 提升runningwater为Core Translators成员。
* 2015/04/19 发起 LFS-BOOK-7.7-systemd 项目。
* 2015/06/09 提升ictlyh和dongfengweixiao为Core Translators成员。
* 2015/11/10 提升strugglingyouth、FSSlc、Vic020、alim0x为Core Translators成员。
活跃成员
-------------------------------
目前 TP 活跃成员有:
- CORE @wxy,
- CORE @carolinewuyan,
- CORE @DeadFire,
- CORE @geekpi,
- CORE @GOLinux,
- CORE @reinoir,
- CORE @bazz2,
- CORE @zpl1025,
- CORE @ictlyh,
- CORE @strugglingyouth,
- CORE @FSSlc
- CORE @runningwater,
- CORE @Vic020,
- CORE @dongfengweixiao,
- CORE @alim0x,
- Senior @reinoir,
- Senior @tinyeyeser,
- Senior @vito-L,
- Senior @jasminepeng,
- Senior @willqian,
- Senior @vizv,
- @ZTinoZ,
- @Vic020,
- @runningwater,
- @KayGuoWhu,
- @luoxcat,
- @alim0x,
- @2q1w2007,
- @theo-l,
- @FSSlc,
- @su-kaiyao,
- @blueabysm,
- @flsf,
- @martin2011qi,
- @SPccman,
- @wi-cuckoo,
- @Linchenguang,
- @linuhap,
- @crowner,
- @Linux-pdz,
- @H-mudcup,
- @yechunxiao19,
- @woodboow,
- @Stevearzh,
- @disylee,
- @cvsher,
- @wwy-hust,
- @johnhoow,
- @felixonmars,
- @TxmszLou,
- @shipsw,
- @scusjs,
- @wangjiezhe,
- @hyaocuk,
- @MikeCoder,
- @ZhouJ-sh,
- @boredivan,
- @goreliu,
- @l3b2w1,
- @JonathanKang,
- @NearTan,
- @jiajia9linuxer,
- @Love-xuan,
- @coloka,
- @owen-carter,
- @luoyutiantang,
- @JeffDing,
- @icybreaker,
- @tenght,
- @liuaiping,
- @mtunique,
- @rogetfan,
- @nd0104,
- @mr-ping,
- @szrlee,
- @lfzark,
- @CNprober,
- @DongShuaike,
- @ggaaooppeenngg,
- @haimingfg,
- @213edu,
- @Tanete,
- @guodongxiaren,
- @zzlyzq,
- @FineFan,
- @yujianxuechuan,
- @Medusar,
- @shaohaolin,
- @ailurus1991,
- @liaoishere,
- @CHINAANSHE,
- @stduolc,
- @yupmoon,
- @tomatoKiller,
- @zhangboyue,
- @kingname,
- @KevinSJ,
- @zsJacky,
- @willqian,
- @Hao-Ding,
- @JygjHappy,
- @Maclauring,
- @small-Wood,
- @cereuz,
- @fbigun,
- @lijhg,
- @soooogreen,
- ZTinoZ,
- theo-l,
- luoxcat,
- disylee,
- wi-cuckoo,
- haimingfg,
- KayGuoWhu,
- wwy-hust,
- martin2011qi,
- cvsher,
- su-kaiyao,
- flsf,
- SPccman,
- Stevearzh
- Linchenguang,
- oska874
- Linux-pdz,
- 2q1w2007,
- felixonmars,
- wyangsun,
- MikeCoder,
- mr-ping,
- xiqingongzi
- H-mudcup,
- zhangboyue,
- goreliu,
- DongShuaike,
- TxmszLou,
- ZhouJ-sh,
- wangjiezhe,
- NearTan,
- icybreaker,
- shipsw,
- johnhoow,
- linuhap,
- boredivan,
- blueabysm,
- liaoishere,
- yechunxiao19,
- l3b2w1,
- XLCYun,
- KevinSJ,
- tenght,
- coloka,
- luoyutiantang,
- yupmoon,
- jiajia9linuxer,
- scusjs,
- tnuoccalanosrep,
- woodboow,
- 1w2b3l,
- crowner,
- mtunique,
- dingdongnigetou,
- CNprober,
- JonathanKang,
- Medusar,
- hyaocuk,
- szrlee,
- Xuanwo,
- nd0104,
- xiaoyu33,
- guodongxiaren,
- zzlyzq,
- yujianxuechuan,
- ailurus1991,
- ggaaooppeenngg,
- Ricky-Gong,
- lfzark,
- 213edu,
- Tanete,
- liuaiping,
- jerryling315,
- tomatoKiller,
- stduolc,
- shaohaolin,
- Timeszoro,
- rogetfan,
- FineFan,
- kingname,
- jasminepeng,
- JeffDing,
- CHINAANSHE,
(按提交行数排名前百)
LFS 项目活跃成员有:
@ -169,7 +173,7 @@ LFS 项目活跃成员有:
- @KevinSJ
- @Yuking-net
更新于2015/06/09以Github contributors列表排名
更新于2015/11/29
谢谢大家的支持!


@ -0,0 +1,435 @@
如何在 Ubuntu 15.04 中安装 puppet
================================================================================
大家好,本教程将学习如何在 ubuntu 15.04 上面安装 puppet它可以用来管理你的服务器基础环境。puppet 是由 puppet 实验室Puppet Labs开发并维护的一款开源的配置管理软件它能够帮我们自动化供给、配置和管理服务器的基础环境。不管我们管理的是几个服务器还是数以千计的计算机组成的业务报表体系puppet 都能够使管理员从繁琐的手动配置调整中解放出来,腾出时间和精力去提升系统的整体效率。它能够确保所有自动化流程作业的一致性、可靠性以及稳定性。它让管理员和开发者更紧密地联系在一起,使开发者更容易产出设计良好、简洁清晰的代码。puppet 提供了配置管理和数据中心自动化的两个解决方案。这两个解决方案分别是 **puppet 开源版**和 **puppet 企业版**。puppet 开源版以 Apache 2.0 许可证发布它是一个非常灵活、可定制的解决方案设计初衷是帮助管理员去完成那些重复性操作工作。puppet 企业版是一个面向复杂 IT 环境的全平台成熟解决方案,它除了拥有开源版本所有优势以外还有移动端 apps、只有商业版才有的加强支持以及模块化和集成管理等。Puppet 使用 SSL 证书来认证主控服务器与代理节点之间的通信。
本教程将要介绍如何在运行 ubuntu 15.04 的主控服务器和代理节点上面安装开源版的 puppet。在这里我们用一台服务器做主控服务器master管理和控制剩余的当作 puppet 代理节点agent node的服务器这些代理节点将依据主控服务器来进行配置。在 ubuntu 15.04 上只需要简单的几步就能安装配置好 puppet用它来管理我们的服务器基础环境非常方便。LCTT 译注puppet 采用 C/S 架构,所以至少必须有一台作为服务器,其他作为客户端处理)
### 1. 设置主机文件 ###
在本教程里,我们将使用 2 台运行 ubuntu 15.04 “Vivid Vervet” 的主机,一台作为主控服务器,另一台作为 puppet 的代理节点。下面是我们将用到的服务器的基础信息。
- puppet 主控服务器 IP44.55.88.6主机名puppetmaster
- puppet 代理节点 IP45.55.86.39主机名puppetnode
我们要在代理节点和服务器这两台机器的 hosts 文件里面都添加上相应的条目,使用 root 或是 sudo 访问权限来编辑 /etc/hosts 文件,命令如下:
# nano /etc/hosts
44.55.88.6 puppetmaster.example.com puppetmaster
45.55.86.39 puppetnode.example.com puppetnode
注意puppet 主控服务器必须使用 8140 端口来运行,所以请务必保证 8140 端口是开启的。
### 2. 用 NTP 更新时间 ###
puppet 代理节点所使用系统时间必须要准确,这样可以避免代理证书出现问题。如果有时间差异,那么证书将过期失效,所以服务器与代理节点的系统时间必须互相同步。我们使用 NTPNetwork Time Protocol网络时间协议来同步时间。**在服务器与代理节点上面分别**运行以下命令来同步时间。
# ntpdate pool.ntp.org
17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec
LCTT 译注:显示类似的输出结果表示运行正常)
如果没有安装 ntp请使用下面的命令更新你的软件仓库安装并运行 ntp 服务:
# apt-get update && sudo apt-get -y install ntp ; service ntp restart
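安装之后,还可以用 ntpq 命令确认 ntp 服务已经和上游时间服务器开始通信(这只是一个可选的检查,输出的服务器列表因环境而异):
# ntpq -p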
### 3. 安装主控服务器软件 ###
安装开源版本的 puppet 有很多种方法。在本教程中我们从 puppet 实验室官网下载一个名为 puppetlabs-release 的软件包,安装后它会为我们添加一个软件源,从中可以安装 puppetmaster-passenger。puppetmaster-passenger 包括了带有 apache 的 puppet 主控服务器。我们开始下载这个软件包:
# cd /tmp/
# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
--2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7384 (7.2K) [application/x-debian-package]
Saving to: puppetlabs-release-trusty.deb
puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s
2015-06-17 00:19:26 (130 KB/s) - puppetlabs-release-trusty.deb saved [7384/7384]
下载完成,我们来安装它:
# dpkg -i puppetlabs-release-trusty.deb
Selecting previously unselected package puppetlabs-release.
(Reading database ... 85899 files and directories currently installed.)
Preparing to unpack puppetlabs-release-trusty.deb ...
Unpacking puppetlabs-release (1.0-11) ...
Setting up puppetlabs-release (1.0-11) ...
使用 apt 包管理命令更新一下本地的软件源:
# apt-get update
现在我们就可以安装 puppetmaster-passenger 了
# apt-get install puppetmaster-passenger
**提示**: 在安装的时候可能会报错:
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')
不过不用担心,忽略掉它就好,我们只需要在设置配置文件的时候把这一项禁用就行了。
如何来查看puppet 主控服务器是否已经安装成功了呢?非常简单,只需要使用下面的命令查看它的版本就可以了。
# puppet --version
3.8.1
现在我们已经安装好了 puppet 主控服务器。因为我们使用的是配合 apache 的 passenger由 apache 来控制 puppet 主控服务器,当 apache 运行时 puppet 主控才运行。
在开始之前,我们需要通过停止 apache 服务来让 puppet 主控服务器停止运行。
# systemctl stop apache2
### 4. 使用 Apt 工具锁定主控服务器的版本 ###
现在已经安装了 3.8.1 版的 puppet我们锁定这个版本不让它随意升级因为升级会造成配置文件混乱。 使用 apt 工具来锁定它,这里我们需要使用文本编辑器来创建一个新的文件 **/etc/apt/preferences.d/00-puppet.pref**
# nano /etc/apt/preferences.d/00-puppet.pref
在新创建的文件里面添加以下内容:
# /etc/apt/preferences.d/00-puppet.pref
Package: puppet puppet-common puppetmaster-passenger
Pin: version 3.8*
Pin-Priority: 501
这样在以后的系统软件升级中puppet 主控服务器将不会跟随系统软件一起升级。
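如果想确认锁定是否生效,可以用 apt-cache policy 查看软件包的候选版本(示例,具体输出因环境而异,候选版本应停留在 3.8 系列上):
# apt-cache policy puppetmaster-passenger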
### 5. 配置 Puppet 主控服务器 ###
Puppet 主控服务器作为一个证书发行机构,需要生成它自己的证书,用于签署所有代理的证书的请求。首先我们要删除所有在该软件包安装过程中创建出来的 ssl 证书。本地默认的 puppet 证书放在 /var/lib/puppet/ssl。因此我们只需要使用 rm 命令来整个移除这些证书就可以了。
# rm -rf /var/lib/puppet/ssl
现在来配置该证书,在创建 puppet 主控服务器证书时,我们需要包括代理节点与主控服务器沟通所用的每个 DNS 名称。使用文本编辑器来修改服务器的配置文件 puppet.conf
# nano /etc/puppet/puppet.conf
文件内容看起来像下面这样:
[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates
[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
在这里,我们需要注释掉 templatedir 这一行使它失效。然后在文件的 `[main]` 小节的结尾添加下面的信息。
server = puppetmaster
environment = production
runinterval = 1h
strict_variables = true
certname = puppetmaster
dns_alt_names = puppetmaster, puppetmaster.example.com
还有很多你可能用得到的配置选项。如果你有需要,在 Puppet 实验室有一份详细的描述文件供你阅读:[Main Config File (puppet.conf)][1]。
编辑完成后保存退出。
使用下面的命令来生成一个新的证书。
# puppet master --verbose --no-daemonize
Info: Creating a new SSL key for ca
Info: Creating a new SSL certificate request for ca
Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78
...
Notice: puppetmaster has a waiting certificate request
Notice: Signed certificate request for puppetmaster
Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem'
Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem'
Notice: Starting Puppet master version 3.8.1
^CNotice: Caught INT; storing stop
Notice: Processing stop
至此,证书已经生成。一旦我们看到 **Notice: Starting Puppet master version 3.8.1**,就表明证书已经制作好了。我们按下 CTRL-C 回到 shell 命令行。
查看新生成证书的信息,可以使用下面的命令。
# puppet cert list --all
+ "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
### 6. 创建一个 Puppet 清单 ###
默认的主要清单Manifest是 /etc/puppet/manifests/site.pp。 这个主要清单文件包括了用于在代理节点执行的配置定义。现在我们来创建一个清单文件:
# nano /etc/puppet/manifests/site.pp
在刚打开的文件里面添加下面这几行:
# execute 'apt-get update'
exec { 'apt-update': # exec resource named 'apt-update'
command => '/usr/bin/apt-get update' # command this resource will run
}
# install apache2 package
package { 'apache2':
require => Exec['apt-update'], # require 'apt-update' before installing
ensure => installed,
}
# ensure apache2 service is running
service { 'apache2':
ensure => running,
}
以上这几行的意思是给代理节点部署 apache web 服务。
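在继续之前,还可以用 puppet 自带的语法检查来验证这个清单文件(可选步骤):
# puppet parser validate /etc/puppet/manifests/site.pp
没有任何输出就表示语法正确。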
### 7. 运行 puppet 主控服务 ###
已经准备好运行 puppet 主控服务器了,那么开启 apache 服务来让它启动:
# systemctl start apache2
我们的 puppet 主控服务器已经运行,不过它还不能管理任何代理节点。现在我们给 puppet 主控服务器添加代理节点。
**提示**: 如果报错
Job for apache2.service failed. see "systemctl status apache2.service" and "journalctl -xe" for details.
肯定是 apache 服务器有一些问题,我们可以使用 root 或是 sudo 访问权限来运行 **apachectl start** 查看它输出的日志。在本教程执行过程中,我们发现一个 **/etc/apache2/sites-enabled/puppetmaster.conf** 的证书配置问题。修改其中的 **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem** 为 **SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem**,然后注释掉后面这行 **SSLCertificateKeyFile**。然后在命令行重新启动 apache。
### 8. 安装 Puppet 代理节点的软件包 ###
我们已经准备好了 puppet 的服务器,现在需要一个可以管理的代理节点,我们将安装 puppet 代理软件到节点上去。这里我们要给每一个需要管理的节点安装代理软件,并且确保这些节点能够通过 DNS 查询到服务器主机。下面将安装最新的代理软件到节点 puppetnode.example.com 上。
在代理节点上使用下面的命令下载 puppet 实验室提供的软件包:
# cd /tmp/
# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
--2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7384 (7.2K) [application/x-debian-package]
Saving to: puppetlabs-release-trusty.deb
puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s
2015-06-17 00:54:42 (162 KB/s) - puppetlabs-release-trusty.deb saved [7384/7384]
在 ubuntu 15.04 上我们使用debian包管理系统来安装它命令如下
# dpkg -i puppetlabs-release-trusty.deb
使用 apt 包管理命令更新一下本地的软件源:
# apt-get update
通过远程仓库安装:
# apt-get install puppet
Puppet 代理默认是不启动的。这里我们需要使用文本编辑器修改 /etc/default/puppet 文件,使它正常工作:
# nano /etc/default/puppet
将 **START** 的值改成 “yes”。
START=yes
最后保存并退出。
### 9. 使用 Apt 工具锁定代理软件的版本 ###
和上面的步骤一样为防止随意升级造成的配置文件混乱,我们要使用 apt 工具来把它锁定。具体做法是使用文本编辑器创建一个文件 **/etc/apt/preferences.d/00-puppet.pref**
# nano /etc/apt/preferences.d/00-puppet.pref
在新建的文件里面加入如下内容
# /etc/apt/preferences.d/00-puppet.pref
Package: puppet puppet-common
Pin: version 3.8*
Pin-Priority: 501
这样 puppet 就不会随着系统软件升级而随意升级了。
### 10. 配置 puppet 代理节点 ###
我们需要编辑一下代理节点的 puppet.conf 文件,来使它运行。
# nano /etc/puppet/puppet.conf
它看起来和服务器的配置文件完全一样。同样注释掉**templatedir**这行。不同的是在这里我们需要删除掉所有关于`[master]` 的部分。
假定主控服务器可以通过名字“puppet-master”访问我们的客户端应该可以和它相互连接通信。如果不行的话我们需要使用完整的主机域名 puppetmaster.example.com
[agent]
server = puppetmaster.example.com
certname = puppetnode.example.com
在文件的结尾增加上面 3 行,增加之后文件内容像下面这样:
[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
#templatedir=$confdir/templates
[agent]
server = puppetmaster.example.com
certname = puppetnode.example.com
最后保存并退出。
使用下面的命令来启动客户端软件:
# systemctl start puppet
如果一切顺利的话,我们不会看到命令行有任何输出。 第一次运行的时候,代理节点会生成一个 ssl 证书并且给服务器发送一个请求,经过签名确认后,两台机器就可以互相通信了。
**提示** 如果这是你添加的第一个代理节点,建议你在添加其他节点前先给这个证书签名。一旦能够通过并正常运行,回过头来再添加其他代理节点。
### 11. 在主控服务器上对证书请求进行签名 ###
第一次运行的时候,代理节点会生成一个 ssl 证书并且给服务器发送一个签名请求。在主控服务器给代理节点服务器证书签名之后,主服务器才能和代理服务器通信并且控制代理服务器。
在主控服务器上使用下面的命令来列出当前的证书请求:
# puppet cert list
"puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2
因为只设置了一台代理节点服务器,所以我们将只看到一个请求。看起来类似如上,代理节点的完整域名即其主机名。
注意前面有没有 “+” 号,它代表这个证书是否已被签名。
使用带有主机名的**puppet cert sign**这个命令来签署这个签名请求,如下:
# puppet cert sign puppetnode.example.com
Notice: Signed certificate request for puppetnode.example.com
Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem'
主控服务器现在可以与它签名过的代理节点进行通讯并控制它们了。
如果想签署所有的当前请求,可以使用 --all 选项,如下所示:
# puppet cert sign --all
### 12. 删除一个 Puppet 证书 ###
如果我们想移除一个主机,或者想重建一个主机然后再添加它,就需要删除对应的证书。下面的例子里我们将展示如何删除 puppet 主控服务器上面的一个证书。使用的命令如下:
# puppet cert clean hostname
Notice: Revoked certificate with serial 5
Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem'
Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem'
如果我们想查看所有的签署和未签署的请求,使用下面这条命令:
# puppet cert list --all
+ "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
### 13. 部署 Puppet 清单 ###
配置并完成 puppet 清单之后,我们需要将清单部署到代理节点服务器上。要应用并加载主 puppet 清单,我们可以在代理节点服务器上面使用下面的命令:
# puppet agent --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppetnode.example.com
Info: Applying configuration version '1434563858'
Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully
Notice: Finished catalog run in 10.53 seconds
这里向我们展示了主清单如何立即影响到了一个单一的服务器。
如果我们打算运行的 puppet 清单与主清单没有什么关联,我们可以简单使用 puppet apply 带上相应的清单文件的路径即可。它仅将清单应用到我们运行该清单的代理节点上。
# puppet apply /etc/puppet/manifests/test.pp
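这里的 test.pp 只是一个假设的例子,它的内容可以非常简单,比如只包含一条 notify 资源,用来确认 puppet apply 本身工作正常(内容纯属示意):
# /etc/puppet/manifests/test.pp内容纯属示意
notify { 'hello':
message => 'puppet apply 运行正常',
}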
### 14. 为特定节点配置清单 ###
如果我们想部署一个清单到某个特定的节点,我们需要如下配置清单。
在主控服务器上面使用文本编辑器编辑 /etc/puppet/manifests/site.pp
# nano /etc/puppet/manifests/site.pp
添加下面的内容进去
node 'puppetnode', 'puppetnode1' {
# execute 'apt-get update'
exec { 'apt-update': # exec resource named 'apt-update'
command => '/usr/bin/apt-get update' # command this resource will run
}
# install apache2 package
package { 'apache2':
require => Exec['apt-update'], # require 'apt-update' before installing
ensure => installed,
}
# ensure apache2 service is running
service { 'apache2':
ensure => running,
}
}
这里的配置显示我们将在名为 puppetnode 和 puppetnode1 的2个指定的节点上面安装 apache 服务。这里可以添加其他我们需要安装部署的具体节点进去。
### 15. 配置清单模块 ###
模块对于组合任务是非常有用的,在 Puppet 社区有很多人贡献了自己的模块组件。
在主控服务器上, 我们将使用 puppet module 命令来安装 **puppetlabs-apache** 模块。
# puppet module install puppetlabs-apache
**警告**: 千万不要在一个已经部署 apache 环境的机器上面使用这个模块,否则它将清空你没有被 puppet 管理的 apache 配置。
现在用文本编辑器来修改 **site.pp**
# nano /etc/puppet/manifests/site.pp
添加下面的内容进去,在 puppetnode 上面安装 apache 服务。
node 'puppetnode' {
class { 'apache': } # use apache module
apache::vhost { 'example.com': # define vhost resource
port => '80',
docroot => '/var/www/html'
}
}
保存退出。然后重新运行该清单来为我们的代理节点部署 apache 配置。
### 总结 ###
现在我们已经成功地在 ubuntu 15.04 上面部署并运行 puppet 来管理代理节点服务器的基础运行环境。我们学习了 puppet 是如何工作的、怎样编写清单文件,以及节点与主控服务器间使用 ssl 证书认证的过程。使用 puppet 这个开源的配置管理工具,在众多的代理节点上控制、管理和配置重复性任务是非常容易的。如果你有任何的问题、建议、反馈,请与我们取得联系,我们将第一时间完善更新,谢谢。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/
作者:[Arun Pyasi][a]
译者:[ivo-wang](https://github.com/ivo-wang)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html


@ -1,6 +1,7 @@
如何在 CentOS 7.x 上安装 Zephyr 测试管理工具
================================================================================
测试管理工具包括作为测试人员需要的任何东西。测试管理工具用来记录测试执行的结果、计划测试活动以及报告质量保证活动的情况。在这篇文章中我们会向你介绍如何配置 Zephyr 测试管理工具,它包括了管理测试活动需要的所有东西,不需要单独安装测试活动所需要的应用程序从而降低测试人员不必要的麻烦。一旦你安装完它,你就看可以用它跟踪 bug、缺陷和你的团队成员协作项目任务因为你可以轻松地共享和访问测试过程中多个项目团队的数据。
测试管理Test Management指测试人员所需要的所有东西。测试管理工具用来记录测试执行的结果、计划测试活动以及汇报质量控制活动的情况。在这篇文章中我们会向你介绍如何配置 Zephyr 测试管理工具,它包括了管理测试活动需要的所有东西,不需要单独安装测试活动所需要的应用程序,从而减少测试人员不必要的麻烦。一旦你安装完它,你就可以用它跟踪 bug 和缺陷,和你的团队成员协作项目任务,因为你可以轻松地共享和访问测试过程中多个项目团队的数据。
### Zephyr 要求 ###
@ -19,21 +20,21 @@
</tr>
<tr>
<td width="140"><strong>Packages</strong></td>
<td width="312">JDK 7 or above ,&nbsp; Oracle JDK 6 update</td>
<td width="209">No Prior Tomcat, MySQL installed</td>
<td width="312">JDK 7 或更高 ,&nbsp; Oracle JDK 6 update</td>
<td width="209">没有事先安装的 Tomcat 和 MySQL</td>
</tr>
<tr>
<td width="140"><strong>RAM</strong></td>
<td width="312">4 GB</td>
<td width="209">Preferred 8 GB</td>
<td width="209">推荐 8 GB</td>
</tr>
<tr>
<td width="140"><strong>CPU</strong></td>
<td width="521" colspan="2">2.0 GHZ or Higher</td>
<td width="521" colspan="2">2.0 GHZ 或更高</td>
</tr>
<tr>
<td width="140"><strong>Hard Disk</strong></td>
<td width="521" colspan="2">30 GB , Atleast 5GB must be free</td>
<td width="521" colspan="2">30 GB , 至少 5GB </td>
</tr>
</tbody>
</table>
@ -48,8 +49,6 @@
[root@centos-007 ~]# yum install java-1.7.0-openjdk-1.7.0.79-2.5.5.2.el7_1
----------
[root@centos-007 ~]# yum install java-1.7.0-openjdk-devel-1.7.0.85-2.6.1.2.el7_1.x86_64
安装完 java 和它的所有依赖后,运行下面的命令设置 JAVA_HOME 环境变量。
@ -61,8 +60,6 @@
[root@centos-007 ~]# java -version
----------
java version "1.7.0_79"
OpenJDK Runtime Environment (rhel-2.5.5.2.el7_1-x86_64 u79-b14)
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)
@ -71,7 +68,7 @@
### 安装 MySQL 5.6.x ###
如果的机器上有其它的 MySQL建议你先卸载它们并安装这个版本或者升级它们的模式到指定的版本。因为 Zephyr 前提要求这个指定的主要/最小 MySQL 5.6.x版本要有 root 用户名。
如果你的机器上有其它的 MySQL建议你先卸载它们并安装这个版本或者升级它们的模式schemas到指定的版本。因为 Zephyr 前提要求这个指定的 5.6.x 版本的 MySQL 要有 root 用户名。
可以按照下面的步骤在 CentOS-7.1 上安装 MySQL 5.6
@ -93,10 +90,7 @@
[root@centos-007 ~]# service mysqld start
[root@centos-007 ~]# service mysqld status
对于全新安装的 MySQL 服务器MySQL root 用户的密码为空。
为了安全起见,我们应该重置 MySQL root 用户的密码。
用自动生成的空密码连接到 MySQL 并更改 root 用户密码。
对于全新安装的 MySQL 服务器MySQL root 用户的密码为空。为了安全起见,我们应该重置 MySQL root 用户的密码。用自动生成的空密码连接到 MySQL 并更改 root 用户密码。
[root@centos-007 ~]# mysql
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your_password');
@ -224,7 +218,7 @@ via: http://linoxide.com/linux-how-to/setup-zephyr-tool-centos-7-x/
作者:[Kashif Siddique][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,101 @@
UNIX 家族小史
================================================================================
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/05/linux-712x445.png)
要记住,当一扇门在你面前关闭的时候,另一扇门就会打开。肯·汤普森([Ken Thompson][1])和丹尼斯·里奇([Dennis Richie][2])两个人就是这句名言很好的实例。他们俩是 **20 世纪**最优秀的信息技术专家中的两位,因为他们创造了最具影响力和创新性的软件之一:**UNIX**。
### UNIX 系统诞生于贝尔实验室 ###
**UNIX** 最开始的名字是 **UNICS** (**UN**iplexed **I**nformation and **C**omputing **S**ervice)它有一个大家庭并不是从石头缝里蹦出来的。UNIX的祖父是 **CTSS** (**C**ompatible **T**ime **S**haring **S**ystem),它的父亲是 **Multics** (**MULT**iplexed **I**nformation and **C**omputing **S**ervice)这个系统能支持大量用户通过交互式分时timesharing的方式使用大型机。
UNIX 诞生于 **1969** 年,由**肯·汤普森**以及后来加入的**丹尼斯·里奇**共同完成。这两位优秀的研究员和科学家在一个**通用电器 GE**和**麻省理工学院**的合作项目里工作,项目目标是开发一个叫 Multics 的交互式分时系统。
Multics 的目标是整合分时技术以及当时其他先进技术,允许用户在远程终端通过电话(拨号)登录到主机,然后可以编辑文档,阅读电子邮件,运行计算器,等等。
在之后的五年里AT&T 公司为 Multics 项目投入了数百万美元。他们购买了 GE-645 大型机,聚集了贝尔实验室的顶级研究人员,例如肯·汤普森、 Stuart Feldman、丹尼斯·里奇、道格拉斯·麦克罗伊M. Douglas McIlroy、 Joseph F. Ossanna 以及 Robert Morris。但是项目目标太过激进进度严重滞后。最后AT&T 高层决定放弃这个项目。
贝尔实验室的管理层决定停止这个让许多研究人员无比纠结的操作系统上的所有遗留工作。不过要感谢汤普森,里奇和一些其他研究员,他们把老板的命令丢到一边,并继续在实验室里满怀热心地忘我工作,最终孵化出前无古人后无来者的 UNIX。
UNIX 的第一声啼哭是在一台 PDP-7 微型机上,它是汤普森测试自己在操作系统设计上的点子的机器,也是汤普森和里奇一起玩 Space Travel 游戏的模拟器。
> “我们想要的不仅是一个优秀的编程环境,而是能围绕这个系统形成团体。按我们自己的经验,通过远程访问和分时主机实现的公共计算,本质上不只是用终端输入程序代替打孔机而已,而是鼓励密切沟通。”丹尼斯·里奇说。
UNIX 是第一个靠近理想的系统,在这里程序员可以坐在机器前自由摆弄程序,探索各种可能性并随手测试。在 UNIX 整个生命周期里,它吸引了大量因其他操作系统限制而投身过来的高手做出无私贡献,因此它的功能模型一直保持上升趋势。
UNIX 在 1970 年因为 PDP-11/20 获得了首次资金注入,之后正式更名为 UNIX 并支持在 PDP-11/20 上运行。UNIX 带来的第一次用于实际场景中是在 1971 年,贝尔实验室的专利部门配备来做文字处理。
### UNIX 上的 C 语言革命 ###
丹尼斯·里奇在 1972 年发明了一种叫 “**C**” 的高级编程语言 ,之后他和肯·汤普森决定用 “C” 重写 UNIX 系统,来支持更好的移植性。他们在那一年里编写和调试了差不多 100,000 行代码。在迁移到 “C” 语言后,系统可移植性非常好,只需要修改一小部分机器相关的代码就可以将 UNIX 移植到其他计算机平台上。
UNIX 第一次公开露面是 1973 年丹尼斯·里奇和肯·汤普森在操作系统原理Operating Systems Principles会议上发表的一篇论文然后 AT&T 发布了 UNIX 系统第 5 版,并授权给教育机构使用,之后在 1975 年第一次以 **$20,000** 的价格授权企业使用 UNIX 第 6 版。应用最广泛的是 1980 年发布的 UNIX 第 7 版,任何人都可以购买授权,只是授权条款非常严格。授权内容包括源代码,以及用 PDP-11 汇编语言写的内核。总之,各种版本的 UNIX 系统完全由它的用户手册来确定。
### AIX 系统 ###
**1983** 年,**微软**计划开发 **Xenix** 作为 MS-DOS 的多用户版继任者,他们在那一年花了 $8,000 搭建了一台拥有 **512 KB** 内存以及 **10 MB**硬盘并运行 Xenix 的 Altos 586。而到 1984 年为止,全世界 UNIX System V 第二版的安装数量已经超过了 100,000 。在 1986 年发布了包含因特网域名服务的 4.3BSD,而且 **IBM** 宣布 **AIX 系统**的安装数已经超过 250,000。AIX 基于 Unix System V 开发,这套系统拥有 BSD 风格的根文件系统,是两者的结合。
AIX 第一次引入了 **日志文件系统 (JFS)** 以及集成逻辑卷管理器 (Logical Volume Manager LVM)。IBM 在 1989 年将 AIX 移植到自己的 RS/6000 平台。2001 年发布的 5L 版是一个突破性的版本,提供了 Linux 友好性以及支持 Power4 服务器的逻辑分区。
在 2004 年发布的 AIX 5.3 引入了支持高级电源虚拟化( Advanced Power VirtualizationAPV的虚拟化技术支持对称多线程、微分区以及共享处理器池。
在 2007 年IBM 同时发布 AIX 6.1 和 Power6 架构,开始加强自己的虚拟化产品。他们还将高级电源虚拟化重新包装成 PowerVM。
这次改进包括被称为 WPARs 的负载分区形式,类似于 Solaris 的 zones/Containers但是功能更强。
### HP-UX 系统 ###
**惠普 UNIX (Hewlett-Packards UNIXHP-UX)** 源于 System V 第 3 版。这套系统一开始只支持 PA-RISC HP 9000 平台。HP-UX 第 1 版发布于 1984 年。
HP-UX 第 9 版引入了 SAM一个基于字符的图形用户界面 (GUI),用户可以用来管理整个系统。在 1995 年发布的第 10 版,调整了系统文件分布以及目录结构,变得有点类似 AT&T SVR4。
第 11 版发布于 1997 年。这是 HP 第一个支持 64 位寻址的版本。不过在 2000 年重新发布成 11i因为 HP 为特定的信息技术用途引入了操作环境operating environments和分级应用layered applications的捆绑组bundled groups
在 2001 年发布的 11.20 版宣称支持安腾Itanium系统。HP-UX 是第一个使用 ACLs访问控制列表Access Control Lists管理文件权限的 UNIX 系统也是首先支持内建逻辑卷管理器Logical Volume Manager的系统之一。
如今HP-UX 因为 HP 和 Veritas 的合作关系使用了 Veritas 作为主文件系统。
HP-UX 目前的最新版本是 11iv3, update 4。
### Solaris 系统 ###
Sun 的 UNIX 版本是 **Solaris**,用来接替 1992 年创建的 **SunOS**。SunOS 一开始基于 BSD伯克利软件发行版Berkeley Software Distribution风格的 UNIX但是 SunOS 5.0 版以及之后的版本都是基于重新包装为 Solaris 的 Unix System V 第 4 版。
SunOS 1.0 版于 1983 年发布,用于支持 Sun-1 和 Sun-2 平台。随后在 1985 年发布了 2.0 版。在 1987 年Sun 和 AT&T 宣布合作一个项目以 SVR4 为基础将 System V 和 BSD 合并成一个版本。
Solaris 2.4 是 Sun 发布的第一个 Sparc/x86 版本。1994 年 11 月份发布的 SunOS 4.1.4 版是最后一个版本。Solaris 7 是首个 64 位 Ultra Sparc 版本,加入了对文件系统元数据记录的原生支持。
Solaris 9 发布于 2002 年,支持 Linux 特性以及 Solaris 卷管理器Solaris Volume Manager。之后2005 年发布了 Solaris 10带来许多创新比如支持 Solaris Containers新的 ZFS 文件系统以及逻辑域Logical Domains
目前 Solaris 最新的版本是第 10 版,最后的更新发布于 2008 年。
### Linux ###
到了 1991 年用来替代商业操作系统的自由free操作系统的需求日渐高涨。因此**Linus Torvalds** 开始构建一个自由的操作系统,最终成为 **Linux**。Linux 最开始只有一些 “C” 文件并且使用了阻止商业发行的授权。Linux 是一个类 UNIX 系统但又不尽相同。
2015 年发布了基于 GNU Public LicenseGPL授权的 3.18 版内核。IBM 声称有超过 1800 万行代码开源给了开发者。
如今 GNU Public License 是应用最广泛的自由软件授权方式。根据开源软件原则,这份授权允许个人和企业自由分发、运行、通过拷贝共享、学习,以及修改软件源码。
### UNIX vs. Linux技术概要 ###
- Linux 鼓励多样性Linux 的开发人员来自各种背景,有更多不同经验和意见。
- Linux 比 UNIX 支持更多的平台和架构。
- UNIX 商业版本的开发人员针对特定目标平台以及用户设计他们的操作系统。
- **Linux 比 UNIX 有更好的安全性**更少受病毒或恶意软件攻击。截止到现在Linux 上大约有 60-100 种病毒但是没有任何一种还在传播。另一方面UNIX 上大约有 85-120 种病毒,但是其中有一些还在传播中。
- 由于 UNIX 命令、工具和元素很少改变,甚至很多接口和命令行参数在后续 UNIX 版本中一直沿用。
- 有些 Linux 开发项目以自愿为基础进行资助,比如 Debian。其他项目会维护一个和商业 Linux 的社区版,比如 SUSE 的 openSUSE 以及红帽的 Fedora。
- 传统 UNIX 是纵向扩展,而另一方面 Linux 是横向扩展。
--------------------------------------------------------------------------------
via: http://www.unixmen.com/brief-history-aix-hp-ux-solaris-bsd-linux/
作者:[M.el Khamlichi][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/pirat9/
[1]:http://www.unixmen.com/ken-thompson-unix-systems-father/
[2]:http://www.unixmen.com/dennis-m-ritchie-father-c-programming-language/


@ -8,45 +8,44 @@ Copyright (C) 2015 greenbytes GmbH
### 源码 ###
你可以从[这里][1]得到 Apache 发行版。Apache 2.4.17 及其更高版本都支持 HTTP/2。我不会再重复介绍如何构建服务器的指令。在很多地方有很好的指南例如[这里][2]。
你可以从[这里][1]得到 Apache 发布版。Apache 2.4.17 及其更高版本都支持 HTTP/2。我不会再重复介绍如何构建服务器的指令。在很多地方有很好的指南例如[这里][2]。
(有任何试验的链接?在 Twitter 上告诉我吧 @icing
(有任何这个试验性软件包相关链接?在 Twitter 上告诉我吧 @icing
#### 编译支持 HTTP/2 ####
#### 编译支持 HTTP/2 ####
在你编译发行版之前,你要进行一些**配置**。这里有成千上万的选项。和 HTTP/2 相关的是:
在你编译发布版之前,你要进行一些**配置**。这里有成千上万的选项。和 HTTP/2 相关的是:
- **--enable-http2**
启用在 Apache 服务器内部实现协议的 http2 模块。
- **--with-nghttp2=<dir>**
- **--with-nghttp2=\<dir>**
指定 http2 模块需要的 libnghttp2 模块的非默认位置。如果 nghttp2 是在默认的位置,配置过程会自动采用。
- **--enable-nghttp2-staticlib-deps**
很少用到的选项,你可能用来静态链接 nghttp2 库到服务器。在大部分平台上,只有在找不到共享 nghttp2 库时才有效
很少用到的选项,你可能想将 nghttp2 库静态链接到服务器里。在大部分平台上,只有在找不到共享 nghttp2 库时才有用
如果你想自己编译 nghttp2你可以到 [nghttp2.org][3] 查看文档。最新的 Fedora 以及其它发行版已经附带了这个库。
如果你想自己编译 nghttp2你可以到 [nghttp2.org][3] 查看文档。最新的 Fedora 以及其它发行版已经附带了这个库。
#### TLS 支持 ####
大部分人想在浏览器上使用 HTTP/2 而浏览器只在 TLS 连接(**https:// 开头的 url时支持它。你需要一些我下面介绍的配置。但首先你需要的是支持 ALPN 扩展的 TLS 库。
大部分人想在浏览器上使用 HTTP/2 而浏览器只在使用 TLS 连接(**https:// 开头的 url时才支持 HTTP/2。你需要一些我下面介绍的配置。但首先你需要的是支持 ALPN 扩展的 TLS 库。
ALPN 用来协商negotiate服务器和客户端之间的协议。如果你服务器上 TLS 库还没有实现 ALPN客户端只能通过 HTTP/1.1 通信。那么,可以和 Apache 链接并支持它的是什么库呢?
ALPN 用来屏蔽服务器和客户端之间的协议。如果你服务器上 TLS 库还没有实现 ALPN客户端只能通过 HTTP/1.1 通信。那么,和 Apache 连接的到底是什么?又是什么支持它呢?
- **OpenSSL 1.0.2** 及其以后。
- ??? (别的我也不知道了)
- **OpenSSL 1.0.2** 即将到来。
- ???
如果你的 OpenSSL 库是 Linux 发行版自带的,这里使用的版本号可能和官方 OpenSSL 发行版的不同。如果不确定的话检查一下你的 Linux 发行版吧。
如果你的 OpenSSL 库是你的 Linux 发行版自带的,这里使用的版本号可能和官方 OpenSSL 发布版的不同。如果不确定的话,检查一下你的 Linux 发行版吧。
### 配置 ###
另一个给服务器的好建议是为 http2 模块设置合适的日志等级。添加下面的配置:
# 某个地方有这样一行
# 放在某个地方的这样一行
LoadModule http2_module modules/mod_http2.so
<IfModule http2_module>
@ -62,38 +61,37 @@ ALPN 用来屏蔽服务器和客户端之间的协议。如果你服务器上 TL
那么,假设你已经编译部署好了服务器, TLS 库也是最新的,你启动了你的服务器,打开了浏览器。。。你怎么知道它在工作呢?
如果除此之外你没有添加其它服务器配置,很可能它没有工作。
如果除此之外你没有添加其它服务器配置,很可能它没有工作。
你需要告诉服务器在哪里使用协议。默认情况下,你的服务器并没有启动 HTTP/2 协议。因为这是安全路由,你可能要有一套部署了才能继续
你需要告诉服务器在哪里使用协议。默认情况下,你的服务器并没有启动 HTTP/2 协议。因为这样比较安全,也许才能让你已有的部署可以继续工作
**Protocols**令启用 HTTP/2 协议:
可以用新的 **Protocols**令启用 HTTP/2 协议:
# for a https server
# 对于 https 服务器
Protocols h2 http/1.1
...
# for a http server
# 对于 http 服务器
Protocols h2c http/1.1
你可以给一般服务器或者指定的 **vhosts** 添加这个配置。
你可以给整个服务器或者指定的 **vhosts** 添加这个配置。
#### SSL 参数 ####
对于 TLS SSLHTTP/2 有一些特殊的要求。阅读 [https:// 连接][4]了解更详细的信息。
对于 TLS SSLHTTP/2 有一些特殊的要求。阅读下面的“ https:// 连接”一节了解更详细的信息。
### http:// 连接 (h2c) ###
尽管现在还没有浏览器支持 HTTP/2 协议, http:// 这样的 url 也能正常工作, 因为有 mod_h[ttp]2 的支持。启用它你只需要做的一件事是在 **httpd.conf** 配置 Protocols
尽管现在还没有浏览器支持,但是 HTTP/2 协议也工作在 http:// 这样的 url 上, 而且 mod_h[ttp]2 也支持。启用它你唯一所要做的是在 Protocols 配置中启用它
# for a http server
# 对于 http 服务器
Protocols h2c http/1.1
这里有一些支持 **h2c** 的客户端(和客户端库)。我会在下面介绍:
#### curl ####
Daniel Stenberg 维护的网络资源命令行客户端 curl 当然支持。如果你的系统上有 curl有一个简单的方法检查它是否支持 http/2
Daniel Stenberg 维护的用于访问网络资源命令行客户端 curl 当然支持。如果你的系统上有 curl有一个简单的方法检查它是否支持 http/2
sh> curl -V
curl 7.43.0 (x86_64-apple-darwin15.0) libcurl/7.43.0 SecureTransport zlib/1.2.5
@ -126,11 +124,11 @@ Daniel Stenberg 维护的网络资源命令行客户端 curl 当然支持。如
恭喜,如果看到了有 **...101 Switching...** 的行就表示它正在工作!
有一些情况不会发生到 HTTP/2 的 Upgrade 。如果你的第一个请求没有内容,例如你上传一个文件,就不会触发 Upgrade。[h2c 限制][5]部分有详细的解释。
有一些情况不会发生 HTTP/2 的升级切换Upgrade。如果你的第一个请求有内容数据body例如你上传一个文件时就不会触发升级切换。[h2c 限制][5]部分有详细的解释。
#### nghttp ####
nghttp2 有能一起编译的客户端和服务器。如果你的系统中有客户端,你可以简单地通过获取资源验证你的安装:
nghttp2 可以一同编译它自己的客户端和服务器。如果你的系统中有客户端,你可以简单地通过获取一个资源验证你的安装:
sh> nghttp -uv http://<yourserver>/
[ 0.001] Connected
@ -151,7 +149,7 @@ nghttp2 有能一起编译的客户端和服务器。如果你的系统中有客
这和我们上面 **curl** 例子中看到的 Upgrade 输出很相似。
在命令行参数中隐藏着一种可以使用 **h2c**:的参数:**-u**。这会指示 **nghttp** 进行 HTTP/1 Upgrade 过程。但如果我们不使用呢?
有另外一种在命令行参数中不用 **-u** 参数而使用 **h2c** 的方法。这个参数会指示 **nghttp** 进行 HTTP/1 升级切换过程。但如果我们不使用呢?
sh> nghttp -v http://<yourserver>/
[ 0.002] Connected
@ -166,36 +164,33 @@ nghttp2 有能一起编译的客户端和服务器。如果你的系统中有客
:scheme: http
...
连接马上显示出了 HTTP/2这就是协议中所谓的直接模式,当客户端发送一些特殊的 24 字节到服务器时就会发生:
连接马上使用了 HTTP/2这就是协议中所谓的直接direct模式,当客户端发送一些特殊的 24 字节到服务器时就会发生:
0x505249202a20485454502f322e300d0a0d0a534d0d0a0d0a
or in ASCII: PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n
用 ASCII 表示是:
PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n
支持 **h2c** 的服务器在一个新的连接中看到这些信息就会马上切换到 HTTP/2。HTTP/1.1 服务器则认为是一个可笑的请求,响应并关闭连接。
因此 **直接** 模式只适合于那些确定服务器支持 HTTP/2 的客户端。例如,前一个 Upgrade 过程是成功的
因此**直接**模式只适合于那些确定服务器支持 HTTP/2 的客户端。例如,当前一个升级切换过程成功了的时候
**直接** 模式的魅力是零开销,它支持所有请求,即使没有 body 部分(查看[h2c 限制][6])。任何支持 h2c 协议的服务器默认启用了直接模式。如果你想停用它,可以添加下面的配置指令到你的服务器:
**直接**模式的魅力是零开销,它支持所有请求,即使带有请求数据部分(查看[h2c 限制][6])。
注:下面这行打删除线
H2Direct off
注:下面这行打删除线
对于 2.4.17 发行版,默认明文连接时启用 **H2Direct** 。但是有一些模块和这不兼容。因此,在下一发行版中,默认会设置为**off**,如果你希望你的服务器支持它,你需要设置它为:
对于 2.4.17 版本,明文连接时默认启用 **H2Direct** 。但是有一些模块和这不兼容。因此,在下一版本中,默认会设置为**off**,如果你希望你的服务器支持它,你需要设置它为:
H2Direct on
### https:// 连接 (h2) ###
一旦你的 mod_h[ttp]2 支持 h2c 连接,就是时候一同启用 **h2**,因为现在的浏览器支持它和 **https:** 一同使用。
当你的 mod_h[ttp]2 可以支持 h2c 连接时,那就可以一同启用 **h2** 兄弟了,现在的浏览器仅支持它和 **https:** 一同使用。
HTTP/2 标准对 https:TLS连接增加了一些额外的要求。上面已经提到了 ALNP 扩展。另外的一个要求是不会使用特定[黑名单][7]中的密码
HTTP/2 标准对 https:TLS连接增加了一些额外的要求。上面已经提到了 ALNP 扩展。另外的一个要求是不能使用特定[黑名单][7]中的加密算法
尽管现在版本的 **mod_h[ttp]2** 不增强这些密码(以后可能会),大部分客户端会这么做。如果你用不切当的密码在浏览器中打开 **h2** 服务器,你会看到模糊警告**INADEQUATE_SECURITY**,浏览器会拒接连接。
尽管现在版本的 **mod_h[ttp]2** 不增强这些算法(以后可能会),但大部分客户端会这么做。如果让你的浏览器使用不恰当的算法打开 **h2** 服务器,你会看到不明确的警告**INADEQUATE_SECURITY**,浏览器会拒接连接。
一个可接受的 Apache SSL 配置类似:
一个可的 Apache SSL 配置类似:
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
SSLProtocol All -SSLv2 -SSLv3
@ -203,11 +198,11 @@ HTTP/2 标准对 https:TLS连接增加了一些额外的要求。上面已
(是的,这确实很长。)
这里还有一些应该调整的 SSL 配置参数,但不是必须**SSLSessionCache** **SSLUseStapling** 等,其它地方也有介绍这些。例如 Ilya Grigorik 写的一篇博客 [高性能浏览器网络][8]。
这里还有一些应该调整,但不是必须调整的 SSL 配置参数:**SSLSessionCache** **SSLUseStapling** 等,其它地方也有介绍这些。例如 Ilya Grigorik 写的一篇超赞的博客 [高性能浏览器网络][8]。
#### curl ####
再次回到 shell 并使用 curl查看 [curl h2c 章节][9] 了解要求)你也可以通过 curl 用简单的命令检测你的服务器:
再次回到 shell 使用 curl查看上面的“curl h2c”章节了解要求你也可以通过 curl 用简单的命令检测你的服务器:
sh> curl -v --http2 https://<yourserver>/
...
@ -220,9 +215,9 @@ HTTP/2 标准对 https:TLS连接增加了一些额外的要求。上面已
恭喜你,能正常工作啦!如果还不能,可能原因是:
- 你的 curl 不支持 HTTP/2。查看[检测][10]
- 你的 curl 不支持 HTTP/2。查看上面的“检测 curl”一节
- 你的 openssl 版本太低不支持 ALPN。
- 不能验证你的证书,或者不接受你的密码配置。尝试添加命令行选项 -k 停用 curl 中的检查。如果那能工作,还要重新配置你的 SSL 和证书。
- 不能验证你的证书,或者不接受你的算法配置。尝试添加命令行选项 -k 停用 curl 中的这些检查。如果可以工作,就重新配置你的 SSL 和证书。
#### nghttp ####
@ -246,11 +241,11 @@ HTTP/2 标准对 https:TLS连接增加了一些额外的要求。上面已
The negotiated protocol: http/1.1
[ERROR] HTTP/2 protocol was not selected. (nghttp2 expects h2)
这表示 ALPN 能正常工作,但并没有用 h2 协议。你需要像上面介绍的那样在服务器上选中那个协议。如果一开始在 vhost 部分选中不能正常工作,试着在通用部分选中它。
这表示 ALPN 能正常工作,但并没有用 h2 协议。你需要像上面介绍的那样检查你服务器上的 Protocols 配置。如果一开始在 vhost 部分设置不能正常工作,试着在通用部分设置它。
#### Firefox ####
Update [Apache Lounge][11] 的 Steffen Land 告诉我 [Firefox HTTP/2 指示插件][12]。你可以看到有多少地方用到了 h2提示Apache Lounge 用 h2 已经有一段时间了。。。)
更新 [Apache Lounge][11] 的 Steffen Land 告诉我 [Firefox 上有个 HTTP/2 指示插件][12]。你可以看到有多少地方用到了 h2提示Apache Lounge 用 h2 已经有一段时间了。。。)
你可以在 Firefox 浏览器中打开开发者工具,在那里的网络标签页查看 HTTP/2 连接。当你打开了 HTTP/2 并重新刷新 html 页面时,你会看到类似下面的东西:
@ -260,9 +255,9 @@ Update [Apache Lounge][11] 的 Steffen Land 告诉我 [Firefox HTTP/2 指示
#### Google Chrome ####
在 Google Chrome 中,你在开发者工具中看不到 HTTP/2 指示器。相反Chrome 用特殊的地址 **chrome://net-internals/#http2** 给出了相关信息。
在 Google Chrome 中,你在开发者工具中看不到 HTTP/2 指示器。相反Chrome 用特殊的地址 **chrome://net-internals/#http2** 给出了相关信息。LCTT 译注Chrome 已经有一个 “HTTP/2 and SPDY indicator” 可以很好的在地址栏识别 HTTP/2 连接)
如果你在服务器中打开了一个页面并在 Chrome 那个页面查看,你可以看到类似下面这样:
如果你打开了一个服务器的页面,可以在 Chrome 中查看那个 net-internals 页面,你可以看到类似下面这样:
![](https://icing.github.io/mod_h2/images/chrome-h2.png)
@ -276,21 +271,21 @@ Windows 10 中 Internet Explorer 的继任者 Edge 也支持 HTTP/2。你也可
#### Safari ####
在 Apple 的 Safari 中,打开开发者工具,那里有个网络标签页。重新加载你的服务器页面并在开发者工具中选择显示了加载的行。如果你启用了在右边显示详细试图,看 **状态** 部分。那里显示了 **HTTP/2.0 200**,类似
在 Apple 的 Safari 中,打开开发者工具,那里有个网络标签页。重新加载你的服务器上的页面,并在开发者工具中选择显示了加载的那行。如果你启用了在右边显示详细视图,看 **Status** 部分。那里显示了 **HTTP/2.0 200**,像这样
![](https://icing.github.io/mod_h2/images/safari-h2.png)
#### 重新协商 ####
https 连接重新协商是指正在运行的连接中特定的 TLS 参数会发生变化。在 Apache httpd 中,你可以通过目录中的配置文件修改 TLS 参数。如果一个要获取特定位置资源的请求到来,配置的 TLS 参数会和当前的 TLS 参数进行对比。如果它们不相同,就会触发重新协商。
https 连接重新协商是指正在运行的连接中特定的 TLS 参数会发生变化。在 Apache httpd 中,你可以在 directory 配置中改变 TLS 参数。如果进来一个获取特定位置资源的请求,配置的 TLS 参数会和当前的 TLS 参数进行对比。如果它们不相同,就会触发重新协商。
这种最常见的情形是密码变化和客户端验证。你可以要求客户访问特定位置时需要通过验证,或者对于特定资源,你可以使用更安全的 CPU 敏感的密码
这种最常见的情形是算法变化和客户端证书。你可以要求客户访问特定位置时需要通过验证,或者对于特定资源,你可以使用更安全的、对 CPU 压力更大的算法
不管你的想法有多么好HTTP/2 中都**不可以**发生重新协商。如果有 100 多个请求到同一个地方,什么时候哪个会发生重新协商呢?
不管你的想法有多么好HTTP/2 中都**不可以**发生重新协商。在同一个连接上会有 100 多个请求,那么重新协商该什么时候做呢?
对于这种配置,现有的 **mod_h[ttp]2**不能保证你的安全。如果你有一个站点使用了 TLS 重新协商,别在上面启用 h2
对于这种配置,现有的 **mod_h[ttp]2**没有办法。如果你有一个站点使用了 TLS 重新协商,别在上面启用 h2
当然,我们会在后面的发行版中解决这个问题然后你就可以安全地启用了。
当然,我们会在后面的版中解决这个问题然后你就可以安全地启用了。
### 限制 ###
@ -298,45 +293,45 @@ https 连接重新协商是指正在运行的连接中特定的 TLS 参数会
实现除 HTTP 之外协议的模块可能和 **mod_http2** 不兼容。这在其它协议要求服务器首先发送数据时无疑会发生。
**NNTP** 就是这种协议的一个例子。如果你在服务器中配置了 **mod_nntp_like_ssl**,甚至都不要加载 mod_http2。等待下一个发行版
**NNTP** 就是这种协议的一个例子。如果你在服务器中配置了 **mod\_nntp\_like\_ssl**,那么就不要加载 mod_http2。等待下一个版本
#### h2c 限制 ####
**h2c** 的实现还有一些限制,你应该注意:
#### 在虚拟主机中拒绝 h2c ####
##### 在虚拟主机中拒绝 h2c #####
你不能对指定的虚拟主机拒绝 **h2c 直连**。连接建立而没有看到请求时会触发**直连**,这使得不可能预先知道 Apache 需要查找哪个虚拟主机。
#### 升级请求体 ####
##### 有请求数据时的升级切换 #####
对于有 body 部分的请求,**h2c** 升级不能正常工作。那些是 PUT 和 POST 请求(用于提交和上传)。如果你写了一个客户端,你可能会用一个简单的 GET 去处理请求或者用选项 * 去触发升级
对于有数据的请求,**h2c** 升级切换不能正常工作。那些是 PUT 和 POST 请求(用于提交和上传)。如果你写了一个客户端,你可能会用一个简单的 GET 或者 OPTIONS * 来处理那些请求以触发升级切换
原因从技术层面来看显而易见,但如果你想知道:升级过程中,连接处于半疯状态。请求按照 HTTP/1.1 的格式,而响应使用 HTTP/2。如果请求有一个 body 部分,服务器在发送响应之前需要读取整个 body。因为响应可能需要从客户端处得到应答用于流控制。但如果仍在发送 HTTP/1.1 请求,客户端就还不能处理 HTTP/2 连接。
原因从技术层面来看显而易见,但如果你想知道:升级切换过程中,连接处于半疯状态。请求按照 HTTP/1.1 的格式,而响应使用 HTTP/2 帧。如果请求有一个数据部分,服务器在发送响应之前需要读取整个数据。因为响应可能需要从客户端处得到应答用于流控制及其它东西。但如果仍在发送 HTTP/1.1 请求,客户端就仍然不能以 HTTP/2 连接。
为了使行为可预测,几个服务器实现商决定不要在任何请求体中进行升级,即使 body 很小。
为了使行为可预测,几个服务器在实现上决定不在任何带有请求数据的请求中进行升级切换,即使请求数据很小。
#### 升级 302s ####
##### 302 时的升级切换 #####
有重定向发生时当前 h2c 升级也不能工作。看起来 mod_http2 之前的重写有可能发生。这当然不会导致断路,但你测试这样的站点也许会让你迷惑。
有重定向发生时当前 h2c 升级切换也不能工作。看起来 mod_http2 之前的重写有可能发生。这当然不会导致断路,但你测试这样的站点也许会让你迷惑。
#### h2 限制 ####
这里有一些你应该意识到的 h2 实现限制:
#### 连接重用 ####
##### 连接重用 #####
HTTP/2 协议允许在特定条件下重用 TLS 连接:如果你有带通配符的证书或者多个 AltSubject 名称,浏览器可能会重用现有的连接。例如:
你有一个 **a.example.org** 的证书,它还有另外一个名称 **b.example.org**。你在浏览器中打开 url **https://a.example.org/**,用另一个标签页加载 **https://b.example.org/**
你有一个 **a.example.org** 的证书,它还有另外一个名称 **b.example.org**。你在浏览器中打开 URL **https://a.example.org/**,用另一个标签页加载 **https://b.example.org/**
在重新打开一个新的连接之前,浏览器看到它有一个到 **a.example.org** 的连接并且证书对于 **b.example.org** 也可用。因此,它在第一个连接上面向第二个标签页发送请求。
在重新打开一个新的连接之前,浏览器看到它有一个到 **a.example.org** 的连接并且证书对于 **b.example.org** 也可用。因此,它在第一个连接上面发送第二个标签页的请求。
这种连接重用是刻意设计的,它使得致力于 HTTP/1 切分效率的站点能够不需要太多变化就能利用 HTTP/2。
这种连接重用是刻意设计的,它使得使用了 HTTP/1 切分sharding来提高效率的站点能够不需要太多变化就能利用 HTTP/2。
Apache **mod_h[ttp]2** 还没有完全实现这点。如果 **a.example.org****b.example.org** 是不同的虚拟主机, Apache 不会允许这样的连接重用,并会告知浏览器状态码**421 错误请求**。浏览器会意识到它需要重新打开一个到 **b.example.org** 的连接。这仍然能工作,只是会降低一些效率。
Apache **mod_h[ttp]2** 还没有完全实现这点。如果 **a.example.org****b.example.org** 是不同的虚拟主机, Apache 不会允许这样的连接重用,并会告知浏览器状态码 **421 Misdirected Request**。浏览器会意识到它需要重新打开一个到 **b.example.org** 的连接。这仍然能工作,只是会降低一些效率。
我们期望下一次的发布中能有切当的检查。
我们期望下一次的发布中能有合适的检查。
Münster, 12.10.2015,
@ -355,7 +350,7 @@ via: https://icing.github.io/mod_h2/howto.html
作者:[icing][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,44 @@
好奇 Linux试试云端的 Linux 桌面
================================================================================
Linux 在桌面操作系统市场上只占据了非常小的份额,从目前的调查结果来看,估计只有 2% 的市场份额;对比来看,接近 90% 的市场份额被各种版本的 Windows 系统占据。对于 Linux 来说,要挑战 Windows 在桌面操作系统市场的垄断,需要有一个让用户轻松尝试不同操作系统的简单方式。如果你相信传统的 Windows 用户会再买一台机器来使用 Linux那你就太天真了。难道要让用户重新分区、设置引导程序来使用双系统吗或者我们可以跳过所有这些步骤选择一个最简单的方法。
![](http://www.linuxlinks.com/portal/content/reviews/Cloud/CloudComputing.png)
我们实验过一系列可以让用户无风险地试用 Linux 的方法,不涉及任何分区管理,包括 CD/DVD 光盘、USB 存储棒和桌面虚拟化软件等等。通过实验,我强烈推荐使用 VMware 的 VMware Player 或者 Oracle VirtualBox 虚拟机,对于台式机或者便携式电脑的用户,这是一种安装运行多操作系统的相对简单而且免费的方法。每一台虚拟机和其他虚拟机相隔离,但是共享 CPU、内存、网络接口等等。虚拟机仍需要一定的资源来安装运行 Linux也需要一台相当强劲的主机。但对于一个好奇心不大的人这样做实在是太麻烦了。
要打破用户传统的使用观念是非常困难的。很多 Windows 用户可以尝试使用 Linux 提供的自由软件,但也有太多要学习的 Linux 系统知识。这会花掉他们相当一部分时间才能习惯 Linux 的工作方式。
当然了,对于一个第一次在 Linux 上操作的新手,有没有一个更高效的方法呢?答案是肯定的,接着往下看看云实验平台。
### LabxNow ###
![LabxNow](http://www.linuxlinks.com/portal/content/reviews/Cloud/Screenshot-LabxNow.png)
LabxNow 提供了一个免费服务,方便广大用户通过浏览器来访问远程 Linux 桌面。开发者将其加强为一个用户个人远程实验室(用户可以在系统里运行、开发任何程序),用户可以在任何地方通过互联网登入远程实验室。
这项服务现在可以为个人用户提供 2 核处理器、4GB 内存和 10GB 固态硬盘的环境,它运行在配备 128GB 内存的 4 路 AMD 6272 处理器的主机上。
#### 配置参数: ####
- 系统镜像:基于 Ubuntu 14.04 的 Xfce 4.10、RHEL 6.5、CentOSGnome 桌面、Oracle
- 硬件CPU1 核或者 2 核;内存512MB、1GB、2GB 或 4GB
- 超快的网络数据传输
- 可以运行在所有流行的浏览器上
- 可以安装任意程序、运行任何程序,这是一个非常棒的方法,可以随意做实验学习你想学的任何知识,没有一点风险
- 添加、删除、管理、定制虚拟机非常方便
- 支持虚拟机共享,远程桌面
你所需要的只是一台有稳定网络的设备。不用担心虚拟专用系统VPS、域名、或者硬件带来的高费用。LabxNow提供了一个在 Ubuntu、RHEL 和 CentOS 上实验的非常好的方法。它给 Windows 用户提供一个极好的环境,让他们探索美妙的 Linux 世界。说得深入一点,它可以让用户随时随地在里面工作,而没有了要在每台设备上安装 Linux 的压力。点击下面这个链接进入 [www.labxnow.org/labxweb/][1]。
另外还有一些其它服务(大部分是收费服务)可以让用户使用 Linux包括提供 7 天使用权的 Cloudsigma 环境,以及通过 HTML5 提供 root 权限的 Icebergs.io。但是现在我推荐 LabxNow。
--------------------------------------------------------------------------------
来自: http://www.linuxlinks.com/article/20151003095334682/LinuxCloud.html
译者:[sevenot](https://github.com/sevenot)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://www.labxnow.org/labxweb/


@ -0,0 +1,113 @@
用浏览器管理 Docker
================================================================================
Docker 越来越流行了。在一个容器里面而不是虚拟机里运行一个完整的操作系统是一种非常棒的技术和想法。docker 已经通过节省工作时间来拯救了成千上万的系统管理员和开发人员。这是一个开源技术,提供一个平台来把应用程序当作容器来打包、分发、共享和运行,而不用关注主机上运行的操作系统是什么。它没有开发语言、框架或打包系统的限制,并且可以在任何时间、任何地点运行,从小型计算机到高端服务器都可以。运行 docker 容器和管理它们可能会花费一点点努力和时间,所以现在有一款基于 web 的应用程序DockerUI可以让管理和运行容器变得很简单。DockerUI 是一个对那些不熟悉 Linux 命令行但又很想运行容器化程序的人很有帮助的工具。DockerUI 是一个开源的基于 web 的应用程序,它最值得称道的是它华丽的设计和用来运行和管理 docker 的简洁的操作界面。
下面会介绍如何在 Linux 上安装配置 DockerUI。
### 1. 安装 docker ###
首先,我们需要安装 docker。我们得感谢 docker 的开发者,让我们可以简单的在主流 linux 发行版上安装 docker。为了安装 docker我们得在对应的发行版上使用下面的命令。
#### Ubuntu/Fedora/CentOS/RHEL/Debian ####
docker 维护者已经写了一个非常棒的脚本,用它可以在 Ubuntu 15.04/14.10/14.04、CentOS 6.x/7、Fedora 22、RHEL 7 和 Debian 8.x 这几个 linux 发行版上安装 docker。这个脚本可以识别出我们的机器上运行的 linux 发行版本,然后将需要的软件源添加到系统里,并更新本地的软件源索引,最后安装 docker 及其依赖库。要使用这个脚本安装 docker我们需要在 root 用户或者 sudo 权限下运行如下的命令:
# curl -sSL https://get.docker.com/ | sh
#### OpenSuse/SUSE Linux 企业版 ####
要在运行了 OpenSuse 13.1/13.2 或者 SUSE Linux Enterprise Server 12 的机器上安装 docker我们只需要简单的执行zypper 命令。运行下面的命令就可以安装最新版本的docker
# zypper in docker
#### ArchLinux ####
docker 在 ArchLinux 的官方源和社区维护的 AUR 库中可以找到。所以在 ArchLinux 上我们有两种方式来安装 docker。使用官方源安装需要执行下面的 pacman 命令:
# pacman -S docker
如果要从社区源 AUR 安装 docker需要执行下面的命令
# yaourt -S docker-git
### 2. 启动 ###
安装好 docker 之后,我们需要运行 docker 守护进程,然后才能运行并管理 docker 容器。我们需要使用下列命令来确认 docker 守护进程已经安装并运行了。
#### 在 SysVinit 上 ####
# service docker start
#### 在 Systemd 上 ####
# systemctl start docker
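如果希望 docker 随系统开机自动运行,在使用 systemd 的系统上还可以顺便启用它:
# systemctl enable docker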
### 3. 安装 DockerUI ###
安装 DockerUI 比安装 docker 要简单很多。我们仅仅需要从 docker 注册库上拉取 dockerui ,然后在容器里面运行。要完成这些,我们只需要简单的执行下面的命令:
# docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock dockerui/dockerui
![Starting DockerUI Container](http://blog.linoxide.com/wp-content/uploads/2015/09/starting-dockerui-container.png)
在上面的命令里dockerui 使用的默认端口是 9000我们需要使用 `-p` 选项把它映射出来。使用 `-v` 选项我们可以指定 docker 的 socket。如果主机使用了 SELinux那么就得加上 `--privileged` 选项。
执行完上面的命令后,我们要检查 DockerUI 容器是否运行了,或者使用下面的命令检查:
# docker ps
![Running Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/09/running-docker-containers.png)
### 4. 拉取 docker 镜像 ###
现在我们还不能直接使用 DockerUI 拉取镜像,所以我们需要在命令行下拉取 docker 镜像。要完成这些我们需要执行下面的命令。
# docker pull ubuntu
![Docker Image Pull](http://blog.linoxide.com/wp-content/uploads/2015/10/docker-image-pull.png)
上面的命令将会从 docker 官方源 [Docker Hub][1]拉取一个标志为 ubuntu 的镜像。类似的我们可以从 Hub 拉取需要的其它镜像。
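拉取完成后,可以用下面的命令确认镜像已经保存在本地:
# docker images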
### 5. 管理 ###
启动了 DockerUI 容器之后,我们可以用它来执行启动、暂停、终止、删除以及 DockerUI 提供的其它操作 docker 容器的命令。
首先,我们需要在 web 浏览器里面打开 dockerui在浏览器里面输入 http://ip-address:9000 或者 http://mydomain.com:9000具体要根据你的系统配置。默认情况下登录不需要认证但是可以配置我们的 web 服务器来要求登录认证。要启动一个容器,我们需要有包含我们要运行的程序的镜像。
#### 创建 ####
创建容器我们需要在 Images 页面里,点击我们想创建的容器的镜像 id。然后点击 `Create` 按钮,接下来我们就会被要求输入创建容器所需要的属性。这些都完成之后,我们需要点击按钮`Create` 完成最终的创建。
![Creating Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/10/creating-docker-container.png)
#### 停止 ####
要停止一个容器,我们只需要跳转到`Containers` 页面,然后选取要停止的容器。然后在 Action 的子菜单里面按下 Stop 就行了。
![Managing Container](http://blog.linoxide.com/wp-content/uploads/2015/10/managing-container.png)
#### 暂停与恢复 ####
要暂停一个容器,只需要简单的选取目标容器,然后点击 Pause 就行了。恢复一个容器只需要在 Actions 的子菜单里面点击 Unpause 就行了。
#### 删除 ####
类似于我们上面完成的任务,杀掉或者删除一个容器或镜像也是很简单的。只需要检查、选择容器或镜像,然后点击 Kill 或者 Remove 就行了。
### 结论 ###
DockerUI 使用了 docker 远程 API 提供了一个很棒的管理 docker 容器的 web 界面。它的开发者们完全使用 HTML 和 JS 设计、开发了这个应用。目前这个程序还处于开发中,并且还有大量的工作要完成,所以我们并不推荐将它应用在生产环境。它可以帮助用户简单的完成管理容器和镜像,而且只需要一点点工作。如果想要为 DockerUI 做贡献,可以访问它们的 [Github 仓库][2]。如果有问题、建议、反馈,请写在下面的评论框,这样我们就可以修改或者更新我们的内容。谢谢。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/setup-dockerui-web-interface-docker/
作者:[Arun Pyasi][a]
译者:[oska874](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://hub.docker.com/
[2]:https://github.com/crosbymichael/dockerui/


@ -1,4 +1,3 @@
如何在 Linux 终端下创建新的文件系统/分区
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-feature-image.png)
@ -13,8 +12,7 @@
![cfdisk-lsblk](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-lsblk.png)
一旦你运行了 `lsblk`,你应该会看到当前系统上每个磁盘的详细列表。看看这个列表,然后找出你想要使用的磁盘。在本文中,我将使用 `sdb` 来进行演示。
当你运行了 `lsblk`,你应该会看到当前系统上每个磁盘的详细列表。看看这个列表,然后找出你想要使用的磁盘。在本文中,我将使用 `sdb` 来进行演示。
在终端输入这个命令。它会显示一个功能强大的基于终端的分区编辑程序。
@ -26,9 +24,7 @@
当输入此命令后,你将进入分区编辑器中,然后访问你想改变的磁盘。
Since hard drive partitions are different, depending on a users needs, this part of the guide will go over **how to set up a split Linux home/root system layout**.
由于磁盘分区的不同,这取决于用户的需求,这部分的指南将在 **如何建立一个分布的 Linux home/root 文件分区**
由于用户的需求不同,磁盘分区的方式也会不同,这部分的指南将介绍**如何建立一个分离的 Linux home/root 分区布局**。
首先,需要创建根分区。这需要根据磁盘的字节数来进行分割。我测试的磁盘是 32 GB。
@ -38,7 +34,7 @@ Since hard drive partitions are different, depending on a users needs, this p
该程序会要求你输入分区大小。一旦你指定好大小后,按 Enter 键。这将被称为根分区(或 /dev/sdb1
接下来该创建用户分区(/dev/sdb2了。你需要在 CFdisk 中再选择一些空闲分区。使用箭头选择 [ NEW ] 选项,然后按 Enter 键。输入你用户分区的大小,然后按 Enter 键来创建它。
接下来该创建 home 分区(/dev/sdb2了。你需要在 CFdisk 中再选择一些空闲分区。使用箭头选择 [ NEW ] 选项,然后按 Enter 键。输入你的 home 分区的大小,然后按 Enter 键来创建它。
![cfdisk-create-home-partition](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-create-home-partition.png)
@ -48,7 +44,7 @@ Since hard drive partitions are different, depending on a users needs, this p
![cfdisk-specify-partition-type-swap](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-specify-partition-type-swap.png)
现在,交换分区被创建了,该指定其类型。使用上下箭头来选择它。之后,使用左右箭头选择 [ TYPE ] 。找到 Linux swap 选项,然后按 Enter 键。
现在,交换分区创建好了,该指定它的类型了。使用上下箭头来选择它。之后,使用左右箭头选择 [ TYPE ]。找到 Linux swap 选项,然后按 Enter 键。
![cfdisk-write-partition-table](https://www.maketecheasier.com/assets/uploads/2015/03/cfdisk-write-partition-table.jpg)
@ -56,13 +52,13 @@ Since hard drive partitions are different, depending on a users needs, this p
### 使用 mkfs 创建文件系统 ###
有时候,你并不需要一个完整的分区,你只想要创建一个文件系统而已。你可以在终端直接使用 `mkfs` 命令来实现。
有时候,你并不需要重新进行整个分区,只想要创建一个文件系统而已。你可以在终端直接使用 `mkfs` 命令来实现。
![cfdisk-mkfs-list-partitions-lsblk](https://www.maketecheasier.com/assets/uploads/2015/10/cfdisk-mkfs-list-partitions-lsblk.png)
首先,找出你要使用的磁盘。在终端输入 `lsblk` 找出来。它会打印出列表,之后只要找到你想制作文件系统的分区或盘符。
首先,找出你要使用的磁盘。在终端输入 `lsblk` 找出来。它会打印出列表,之后只要找到你想创建文件系统的分区或盘符。
在这个例子中,我将使用 `/dev/sdb1` 的第一个分区。只对 `/dev/sdb` 使用 mkfs将会使用整个分区)。
在这个例子中,我将使用第二个硬盘的第一个分区 `/dev/sdb1`。(如果直接对 `/dev/sdb` 使用 mkfs将会使用整个磁盘。
![cfdisk-mkfs-make-file-system-ext4](https://www.maketecheasier.com/assets/uploads/2015/10/cfdisk-mkfs-make-file-system-ext4.png)
@ -70,13 +66,13 @@ Since hard drive partitions are different, depending on a users needs, this p
sudo mkfs.ext4 /dev/sdb1
在终端。应当指出的是,`mkfs.ext4` 可以将你指定的任何文件系统改变
在终端输入。应当指出的是,`mkfs.ext4` 可以换成任何你想要使用的文件系统。
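创建好文件系统之后,通常还需要把分区挂载到某个目录下才能使用(示例,挂载点可以按需选择):
sudo mount /dev/sdb1 /mnt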
### 结论 ###
虽然使用图形工具编辑文件系统和分区更容易,但终端可以说是更有效率的:终端加载速度更快,敲击几个按键即可完成操作。GParted 和其它工具一样,也是一个完整的工具。我希望在本教程的帮助下,你会明白如何在终端中高效地编辑文件系统。
你是否更喜欢使用基于终端的方法在 Linux 上编辑分区?为什么或为什么不?在下面告诉我们!
你是否更喜欢使用基于终端的方法在 Linux 上编辑分区?不管是不是,请在下面告诉我们。
--------------------------------------------------------------------------------
@ -84,7 +80,7 @@ via: https://www.maketecheasier.com/create-file-systems-partitions-terminal-linu
作者:[Derrik Diener][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,5 +1,4 @@
Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell
Linux 有问必答:如何知道当前正在使用的 shell 是哪个?
================================================================================
> **问题**: 我经常在命令行中切换 shell。是否有一个快速简便的方法来找出我当前正在使用的 shell 呢?此外,我怎么能找到当前 shell 的版本?
@ -7,36 +6,30 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell
有多种方式可以查看你目前在使用什么 shell最简单的方法就是通过使用 shell 的特殊参数。
其一,[一个名为 "$$" 的特殊参数][1] 表示当前你正在运行的 shell 的 PID。此参数是只读的不能被修改。所以下面的命令也将显示你正在运行的 shell 的名字:
其一,[一个名为 "$$" 的特殊参数][1] 表示当前你正在运行的 shell 实例的 PID。此参数是只读的不能被修改。所以下面的命令也将显示你正在运行的 shell 的名字:
$ ps -p $$
----------
PID TTY TIME CMD
21666 pts/4 00:00:00 bash
上述命令可在所有可用的 shell 中工作。
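如果只想得到 shell 的名字本身,也可以让 ps 只输出命令名(示例):
$ ps -p $$ -o comm=
bash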
如果你不使用 csh使用 shell 的特殊参数 “$$” 可以找出当前的 shell表示当前正在运行的 shell 或 shell 脚本的名称。这是 Bash 的一个特殊参数,但也可用在其他 shells 中,如 sh, zsh, tcsh or dash。使用 echo 命令也可以查看你目前正在使用的 shell 的名称。
如果你不使用 csh找到当前使用的 shell 的另外一个办法是使用特殊参数 “$0” ,它表示当前正在运行的 shell 或 shell 脚本的名称。这是 Bash 的一个特殊参数,但也可用在其他 shell 中,如 sh、zsh、tcsh 或 dash。使用 echo 命令可以查看你目前正在使用的 shell 的名称。
$ echo $0
----------
bash
不要将 $SHELL 看成是一个单独的环境变量,它被设置为整个路径下的默认 shell。因此,这个变量并不一定指向你当前使用的 shell。例如即使你在终端中调用不同的 shell$SHELL 也保持不变。
不要被一个叫做 $SHELL 的单独的环境变量所迷惑,它被设置为你的默认 shell 的完整路径。因此,这个变量并不一定指向你当前使用的 shell。例如即使你在终端中调用不同的 shell$SHELL 也保持不变。
$ echo $SHELL
----------
/bin/shell
![](https://c2.staticflickr.com/6/5688/22544087680_4a9c180485_c.jpg)
因此找出当前的shell你应该使用 $$ 或 $0但不是 $ SHELL。
因此,要找出当前的 shell你应该使用 $$ 或 $0而不是 $SHELL。
### 找出当前 Shell 的版本 ###
@ -46,8 +39,6 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell
$ bash --version
----------
GNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
@ -59,23 +50,17 @@ Linux 有问必答 - 如何在 Linux 上找到当前正在使用的 shell
$ zsh --version
----------
zsh 5.0.7 (x86_64-pc-linux-gnu)
**对于** tcsh **shell**:
$ tcsh --version
----------
tcsh 6.18.01 (Astron) 2012-02-14 (x86_64-unknown-linux) options wide,nls,dl,al,kan,rh,nd,color,filec
对于一些 shells,你还可以使用 shell 特定的变量(例如,$ BASH_VERSION 或 $ ZSH_VERSION
对于某些 shell,你还可以使用 shell 特定的变量(例如,$BASH_VERSION 或 $ZSH_VERSION
$ echo $BASH_VERSION
----------
4.3.8(1)-release
--------------------------------------------------------------------------------
@ -84,7 +69,7 @@ via: http://ask.xmodulo.com/which-shell-am-i-using.html
作者:[Dan Nanni][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,17 +1,16 @@
如何在 Ubuntu 15.10/14.04 中安装 NVIDIA 358.16 驱动程序
================================================================================
![nvidia-logo-1](http://ubuntuhandbook.org/wp-content/uploads/2015/06/nvidia-logo-1.png)
[NVIDIA 358.16][1], NVIDIA 358 系列的第一个稳定版本已经发布并在 358.09 中(测试版)做了一些修正,以及一些小的改进。
[NVIDIA 358.16][1] —— NVIDIA 358 系列的第一个稳定版本已经发布,它对 358.09(测试版)做了一些修正,并带来一些小的改进。
NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块并配合 nvidia.ko 内核模块工作来显示 GPU 引擎。在以后发布版本中,**nvidia-modeset.ko** 内核驱动程序将被用于基本的模式接口由内核直接传递管理DRM
NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块,可以配合 nvidia.ko 内核模块工作来调用 GPU 显示引擎。在以后发布版本中,**nvidia-modeset.ko** 内核驱动程序将被用于模式设置接口的基础该接口由内核的直接渲染管理器DRM所提供
在 OpenGL 驱动中,新的驱动程序也有了新的 GLX 扩展协议,对于分配大量内存也有了一种新的系统内存分配机制。新的 GPU **GeForce 805A****GeForce GTX 960A** 也被支持了。NVIDIA 358.16 也支持 X.Org 1.18 服务器和 OpenGL 4.3。
新的驱动程序也有新的 GLX 协议扩展,以及在 OpenGL 驱动中分配大量内存的系统内存分配新机制。新的 GPU **GeForce 805A****GeForce GTX 960A** 都支持。NVIDIA 358.16 也支持 X.Org 1.18 服务器和 OpenGL 4.3。
### 如何在 Ubuntu 中安装 NVIDIA 358.16 : ###
> 请不要在生产设备上安装,除非你知道自己在做什么以及如何才能恢复。
> **请不要在生产设备上安装,除非你知道自己在做什么以及如何才能恢复。**
对于官方的二进制文件,请到 [nvidia.com/object/unix.html][1] 查看。
@ -19,7 +18,7 @@ NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块并配合 nvi
**1. 添加 PPA.**
通过按 Ctrl+Alt+T 快捷键来从 Unity 桌面打开终端。当打启动应用后,粘贴下面的命令并按回车键:
通过按 `Ctrl+Alt+T` 快捷键来从 Unity 桌面打开终端。当终端启动后,粘贴下面的命令并按回车键:
sudo add-apt-repository ppa:graphics-drivers/ppa
@ -35,7 +34,7 @@ NVIDIA 358 增加了一个新的 **nvidia-modeset.ko** 内核模块并配合 nvi
sudo apt-get install nvidia-358 nvidia-settings
### (可选) 卸载: ###
### (如果需要的话,) 卸载: ###
开机从 GRUB 菜单进入恢复模式,进入根控制台。然后逐一运行下面的命令:
@ -59,7 +58,7 @@ via: http://ubuntuhandbook.org/index.php/2015/11/install-nvidia-358-16-driver-ub
作者:[Ji m][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,46 @@
在 Ubuntu 15.10 上安装 Intel Graphics 安装器
================================================================================
![Intel graphics installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel_logo.jpg)
Intel 最近发布了一个新版本的 Linux Graphics 安装器。在新版本中,将不支持 Ubuntu 15.04,而必须用 Ubuntu 15.10 Wily。
> Linux 版 Intel® Graphics 安装器可以让你很容易的为你的 Intel Graphics 硬件安装最新版的图形与视频驱动。它能保证你一直使用最新的增强与优化功能,并能够安装到 Intel Graphics Stack 中,来保证你在你的 Intel 图形硬件下,享受到最佳的用户体验。*现在 Linux 版的 Intel® Graphics 安装器支持最新版的 Ubuntu。*
![intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel-graphics-installer.jpg)
### 安装 ###
**1.** 从[这个链接页面][1]中下载该安装器。当前支持 Ubuntu 15.10 的版本是 1.2.1 版。你可以在**系统设置 -> 详细信息**中查看你的操作系统是 32 位还是 64 位。
![download-intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/download-intel-graphics-installer.jpg)
**2.** 一旦下载完成,到下载目录中点击 .deb 安装包,用 Ubuntu 软件中心打开它,最后点击“安装”按钮。
![install-via-software-center](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-via-software-center.jpg)
**3.** 为了让系统信任 Intel Graphics 安装器,你需要通过下面的命令来为它添加密钥。
用快捷键`Ctrl+Alt+T`或者在 Unity Dash 中的“应用程序启动器”中打开终端。依次粘贴运行下面的命令。
wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg -O - | sudo apt-key add -
wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg-2 -O - | sudo apt-key add -
![trust-intel](http://ubuntuhandbook.org/wp-content/uploads/2015/11/trust-intel.jpg)
注意:在运行第一个命令的过程中,如果密钥下载完成后,光标停住不动并且一直闪烁的话,就像上面图片显示的那样,输入你的密码(输入时不会看到什么有变化)然后回车就行了。
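如果想确认密钥已经添加成功,可以列出 apt 当前信任的所有密钥,在输出中应该能找到刚添加的条目(具体名称以实际输出为准):
sudo apt-key list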
最后通过 Unity Dash 或应用程序启动器打开 Intel Graphics 安装器。
--------------------------------------------------------------------------------
via: http://ubuntuhandbook.org/index.php/2015/11/install-intel-graphics-installer-in-ubuntu-15-10/
作者:[Ji m][a]
译者:[XLCYun](https://github.com/XLCYun)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ubuntuhandbook.org/index.php/about/
[1]:https://01.org/linuxgraphics/downloads


@ -0,0 +1,239 @@
如何在 CentOS 7 上安装 Redis 服务器
================================================================================
大家好,本文的主题是 Redis我们将要在 CentOS 7 上安装它:编译源代码、安装二进制文件、创建并安装相关文件。在安装了它的组件之后,我们还会配置 redis 和一些操作系统参数,目标就是让 redis 运行得更加可靠和快速。
![Runnins Redis](http://blog.linoxide.com/wp-content/uploads/2015/10/run-redis-standalone.jpg)
*Redis 服务器*
Redis 是一个开源的多平台数据存储软件,使用 ANSI C 编写,直接在内存中使用数据集,这使得它得以实现非常高的效率。Redis 支持多种编程语言,包括 Lua、C、Java、Python、Perl、PHP 和其他很多语言。redis 的代码量很小,只有约 3 万行,它只做“很少”的事,但是做得很好。尽管是在内存里工作,它也支持数据的持久化保存,因而 redis 的可靠性很高,同时也支持集群,这些可以很好地保证你的数据安全。
### 构建 Redis ###
redis 目前没有官方 RPM 安装包,我们需要从源代码编译,而为了要编译就需要安装 Make 和 GCC。
如果没有安装过 GCC 和 Make那么就使用 yum 安装。
yum install gcc make
从[官网][1]下载 tar 压缩包。
curl http://download.redis.io/releases/redis-3.0.4.tar.gz -o redis-3.0.4.tar.gz
解压缩。
tar zxvf redis-3.0.4.tar.gz
进入解压后的目录。
cd redis-3.0.4
使用Make 编译源文件。
make
### 安装 ###
进入源文件的目录。
cd src
复制 Redis 的服务器和客户端到 /usr/local/bin。
cp redis-server redis-cli /usr/local/bin
最好也把 sentinel、benchmark 和 check 复制过去。
cp redis-sentinel redis-benchmark redis-check-aof redis-check-dump /usr/local/bin
创建redis 配置文件夹。
mkdir /etc/redis
在`/var/lib/redis` 下创建有效的保存数据的目录
mkdir -p /var/lib/redis/6379
#### 系统参数 ####
为了让 redis 正常工作需要配置一些内核参数。
配置 `vm.overcommit_memory` 为1这可以避免数据被截断详情[见此][2]。
sysctl -w vm.overcommit_memory=1
将 backlog 连接数的最大值修改为超过 redis.conf 中 `tcp-backlog` 的值(默认值是 511。你可以在 [kernel.org][3] 找到更多有关基于 sysctl 的 IP 网络调优的信息。
sysctl -w net.core.somaxconn=512
取消对透明巨页内存transparent huge pages的支持因为它会在 redis 使用过程中造成延时和内存访问问题。
echo never > /sys/kernel/mm/transparent_hugepage/enabled
### redis.conf ###
redis.conf 是 redis 的配置文件,然而你会看到这个文件的名字是 6379.conf ,而这个数字就是 redis 监听的网络端口。如果你想要运行超过一个的 redis 实例,推荐用这样的名字。
复制示例的 redis.conf 到 **/etc/redis/6379.conf**。
cp redis.conf /etc/redis/6379.conf
现在编辑这个文件并且配置参数。
vi /etc/redis/6379.conf
#### daemonize ####
设置 `daemonize` 为 nosystemd 需要它运行在前台,否则 redis 会突然挂掉。
daemonize no
#### pidfile ####
设置 `pidfile` 为 /var/run/redis_6379.pid。
pidfile /var/run/redis_6379.pid
#### port ####
如果不准备用默认端口,可以修改。
port 6379
#### loglevel ####
设置日志级别。
loglevel notice
#### logfile ####
修改日志文件路径。
logfile /var/log/redis_6379.log
#### dir ####
设置目录为 /var/lib/redis/6379
dir /var/lib/redis/6379
### 安全 ###
下面有几个可以提高安全性的操作。
#### Unix sockets ####
在很多情况下,客户端程序和服务器端程序运行在同一个机器上,所以不需要监听网络上的 socket。如果这和你的使用情况类似你就可以使用 unix socket 替代网络 socket为此你需要配置 `port` 为0然后配置下面的选项来启用 unix socket。
设置 unix socket 的套接字文件。
unixsocket /tmp/redis.sock
限制 socket 文件的权限。
unixsocketperm 700
现在为了让 redis-cli 可以访问,应该使用 -s 参数指向该 socket 文件。
redis-cli -s /tmp/redis.sock
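可以用一个简单的 PING 来验证通过 socket 的连接是否正常(示例):
redis-cli -s /tmp/redis.sock ping
PONG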
#### requirepass ####
你可能需要远程访问,如果是,那么你应该设置密码,这样子每次操作之前要求输入密码。
requirepass "bTFBx1NYYWRMTUEyNHhsCg"
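设置之后,客户端必须先通过认证才能执行其它命令,例如用 -a 参数带上上面设置的密码(示例):
redis-cli -s /tmp/redis.sock -a "bTFBx1NYYWRMTUEyNHhsCg" ping
PONG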
#### rename-command ####
想象一下如下指令的输出。是的,这会输出服务器的配置,所以你应该在任何可能的情况下拒绝这种访问。
CONFIG GET *
为了限制甚至禁止这条或者其他指令,可以使用 `rename-command` 命令。你必须提供一个命令名和替代的名字。要禁止一条命令,可以将替代的名字设置为空字符串;用难以猜测的名字来重命名命令,也会比较安全。
rename-command FLUSHDB "FLUSHDB_MY_SALT_G0ES_HERE09u09u"
rename-command FLUSHALL ""
rename-command CONFIG "CONFIG_MY_S4LT_GO3S_HERE09u09u"
![Access Redis through unix with password and command changes](http://blog.linoxide.com/wp-content/uploads/2015/10/redis-security-test.jpg)
*使用密码通过 unix socket 访问,和修改命令*
#### 快照 ####
默认情况下redis 会周期性的将数据集转储到我们设置的目录下的 **dump.rdb** 文件。你可以使用 `save` 命令配置转储的频率,它的第一个参数是以秒为单位的时间帧,第二个参数是在数据文件上进行修改的数量。
每隔15分钟并且最少修改过一次键。
save 900 1
每隔5分钟并且最少修改过10次键。
save 300 10
每隔1分钟并且最少修改过10000次键。
save 60 10000
文件 `/var/lib/redis/6379/dump.rdb` 包含了从上次保存以来内存里数据集的转储数据。因为它先创建临时文件然后替换之前的转储文件,这里不存在数据破坏的问题,你不用担心,可以直接复制这个文件。
### 开机时启动 ###
你可以使用 systemd 将 redis 添加到系统开机启动列表。
复制示例的 init_script 文件到 `/etc/init.d`,注意脚本名所代表的端口号。
cp utils/redis_init_script /etc/init.d/redis_6379
现在我们要使用 systemd所以在 `/etc/systemd/system` 下创建一个单元文件,名字为 `redis_6379.service`。
vi /etc/systemd/system/redis_6379.service
填写下面的内容,详情可见 systemd.service。
[Unit]
Description=Redis on port 6379
[Service]
Type=forking
ExecStart=/etc/init.d/redis_6379 start
ExecStop=/etc/init.d/redis_6379 stop
[Install]
WantedBy=multi-user.target
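单元文件写好之后,让 systemd 重新加载配置,并设置 redis 开机自启(示例):
systemctl daemon-reload
systemctl enable redis_6379.service
systemctl start redis_6379.service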
现在,将之前修改过的内存过量使用和 backlog 最大值的选项添加到 `/etc/sysctl.conf` 里,使其永久生效:
vm.overcommit_memory = 1
net.core.somaxconn=512
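保存之后,可以运行下面的命令立即加载 `/etc/sysctl.conf` 中的设置,而无需重启:

sysctl -p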
对于透明巨页内存的支持,并没有直接的 sysctl 参数可以控制,所以需要将下面的命令放到 `/etc/rc.local` 的结尾。
echo never > /sys/kernel/mm/transparent_hugepage/enabled
### 总结 ###
这样就可以启动了。通过设置这些选项,你就可以在很多简单的场景下部署 redis 服务,然而在 redis.conf 中还有很多为复杂环境准备的选项。在一些情况下,你可以使用 [replication][4] 和 [Sentinel][5] 来提高可用性,或者[将数据分散][6]在多个服务器上,创建服务器集群。
谢谢阅读。
--------------------------------------------------------------------------------
via: http://linoxide.com/storage/install-redis-server-centos-7/
作者:[Carlos Alberto][a]
译者:[ezio](https://github.com/oska874)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/carlosal/
[1]:http://redis.io/download
[2]:https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
[3]:https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
[4]:http://redis.io/topics/replication
[5]:http://redis.io/topics/sentinel
[6]:http://redis.io/topics/partitioning

View File

@ -0,0 +1,35 @@
开源开发者提交不安全代码,遭 Linus 炮轰
================================================================================
![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg)
Linus 上个月骂了一个 Linux 开发者,原因是他向 kernel 提交了一份不安全的代码。
Linus 是个 Linux 内核项目非官方的“仁慈的独裁者(benevolent dictator)”(LCTT 译注:英国《卫报》曾将乔布斯评价为仁慈的独裁者),这意味着他有权决定将哪些代码合入内核,哪些代码直接丢掉。
在 10 月 28 号,一个开源开发者提交的代码未能符合 Torvalds 的要求,于是招来了[一顿臭骂][1]。Torvalds 在他提交的代码下评论道:“你提交的是什么东西。”
接着他说这个开发者是“毫无能力的神经病”。
Torvalds 为什么会这么生气?他觉得那段代码可以写得更有效率一点,可读性更强一点,编译器编译后跑得更好一点(编译器的作用就是将让人看的代码翻译成让电脑看的代码)。
Torvalds 重新写了一版代码将原来的那份替换掉,并建议所有开发者都应该按照他的这种风格来写代码。
Torvalds 一直在嘲讽那些不符合他观点的人。早在1991年他就攻击过 [Andrew Tanenbaum][2]——那个 Minix 操作系统的作者,而那个 Minix 操作系统被 Torvalds 描述为“脑残”。
但是 Torvalds 在这次嘲讽中表现得更有战略性了:“我想让*每个人*都知道,像他这种代码是完全不能被接收的。”他说他的目的是提醒每个 Linux 开发者,而不是针对那个开发者。
Torvalds 也用这个机会强调了烂代码的安全问题。现在的企业对安全问题很重视所以安全问题需要在开源开发者心中得到足够重视甚至需要在代码中表现为最高等级LCTT 译注:操作系统必须权衡许多因素:安全、处理速度、灵活性、易用性等,而这里 Torvalds 将安全提升为最高优先级了)。骂一下那些提交不安全代码的开发者可以帮助提高 Linux 系统的安全性。
--------------------------------------------------------------------------------
via: http://thevarguy.com/open-source-application-software-companies/110415/linus-torvalds-lambasts-open-source-programmers-over-inse
作者:[Christopher Tozzi][a]
译者:[bazz2](https://github.com/bazz2)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://thevarguy.com/author/christopher-tozzi
[1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html
[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate

View File

@ -1,16 +1,14 @@
如何在 Ubuntu 服务器中配置 AWStats
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/10/Apache_awstats_featured.jpg)
AWStats 是一个开源的网站分析报告工具,自带网络,流媒体FTP 或邮件服务器统计图。此日志分析器以 CGI 或命令行方式进行工作,并在网页中以图表的形式尽可能的显示你日志中所有的信息。它采用的是部分信息文件,以便能够频繁并快速处理大量的日志文件。它支持绝大多数 Web 服务器日志文件格式,包括 ApacheIIS 等。
AWStats 是一个开源的网站分析报告工具可以生成强大的网站、流媒体、FTP 或邮件服务器的访问统计图。此日志分析器以 CGI 或命令行方式进行工作,并在网页中以图表的形式尽可能的显示你日志中所有的信息。它可以“部分”读取信息文件,以便能够频繁并快速处理大量的日志文件。它支持绝大多数 Web 服务器日志文件格式,包括 ApacheIIS 等。
本文将帮助你在 Ubuntu 上安装配置 AWStats。
### 安装 AWStats 包 ###
默认情况下AWStats 的包在 Ubuntu 仓库中。
默认情况下AWStats 的包可以在 Ubuntu 仓库中找到
可以通过运行下面的命令来安装:
@ -18,7 +16,7 @@ AWStats 是一个开源的网站分析报告工具,自带网络,流媒体FT
接下来,你需要启用 Apache 的 CGI 模块。
运行以下命令来启动:
运行以下命令来启动 CGI
sudo a2enmod cgi
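启用 CGI 模块后,一般还需要重启 Apache 才能生效(以 Ubuntu 上的服务名为例):

sudo service apache2 restart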
@ -38,7 +36,7 @@ AWStats 是一个开源的网站分析报告工具,自带网络,流媒体FT
sudo nano /etc/awstats/awstats.test.com.conf
像下面这样修改下:
像下面这样修改下:
# Change to Apache log file, by default it's /var/log/apache2/access.log
LogFile="/var/log/apache2/access.log"
@ -73,6 +71,7 @@ AWStats 是一个开源的网站分析报告工具,自带网络,流媒体FT
### 测试 AWStats ###
现在,您可以通过访问 url “http://your-server-ip/cgi-bin/awstats.pl?config=test.com” 来查看 AWStats 的页面。
它的页面像下面这样:
![awstats_page](https://www.maketecheasier.com/assets/uploads/2015/10/awstats_page.jpg)
@ -101,7 +100,7 @@ via: https://www.maketecheasier.com/set-up-awstats-ubuntu/
作者:[Hitesh Jethva][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,81 @@
LNAV基于 Ncurses 的日志文件阅读器
================================================================================
日志文件导航器(Logfile Navigator,简称 lnav),是一个基于 curses 的,用于查看和分析日志文件的工具。和文本阅读器/编辑器相比,lnav 的好处是它充分利用了可以从日志文件中获取的语义信息,例如时间戳和日志等级。利用这些额外的语义信息,lnav 可以处理像这样的事情:交错显示来自不同文件的信息;按照时间生成信息直方图;提供在文件中导航的快捷键。它希望借助这些功能,使用户可以快速有效地定位和解决问题。
### lnav 功能 ###
#### 支持以下日志文件格式: ####
Syslog、Apache 访问日志、strace、tcsh 历史以及常见的带时间戳的日志文件。读入文件的时候会自动检测文件格式。
#### 直方图视图: ####
以时间区划来显示日志信息数量。这对于大概了解在一长段时间内发生了什么非常有用。
#### 过滤器: ####
只显示那些匹配或不匹配一些正则表达式的行。对于移除大量你不感兴趣的日志行非常有用。
#### 即时操作: ####
在你输入的时候会同时完成检索;当添加了新日志行的时候会自动加载和搜索;加载行的时候会应用过滤器;另外,还会在你输入 SQL 查询的时候检查其正确性。
#### 自动显示后文: ####
日志文件视图会自动往下滚动到新添加到文件中的行。只需要向上滚动就可以锁定当前视图,然后向下滚动到底部恢复显示后文。
#### 按照日期顺序排序行: ####
从所有文件中加载的日志行会按照日期进行排序。使得你不需要手动从不同文件中收集日志信息。
#### 语法高亮: ####
错误和警告会用红色和黄色显示。高亮还可用于: SQL 关键字、XML 标签、Java 文件行号和括起来的字符串。
#### 导航: ####
有快捷键用于跳转到下一个或上一个错误或警告,按照指定的时间向后或向前翻页。
#### 用 SQL 查询日志: ####
每个日志文件行都相当于数据库中的一行,可以使用 SQL 进行查询。可以使用的列取决于查看的日志文件类型。
#### 命令和搜索历史: ####
会自动保存你之前输入的命令和搜索,因此你可以在会话之间使用它们。
#### 压缩文件: ####
会实时自动检测和解压压缩的日志文件。
### 在 Ubuntu 15.10 上安装 lnav ###
打开终端并运行下面的命令:
sudo apt-get install lnav
### 使用 lnav ###
如果你想使用 lnav 查看日志,你可以使用下面的命令,默认它会显示 syslogs
lnav
![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/51.png)
如果你想查看特定的日志,那么需要指定路径。如果你想看 CUPS 打印服务的日志,在你的终端里运行下面的命令:
lnav /var/log/cups
![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/6.png)
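得益于前面介绍的按日期排序功能,你也可以一次传入多个日志文件,lnav 会把它们按时间交错显示(下面的文件路径只是常见的例子):

lnav /var/log/syslog /var/log/auth.log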
--------------------------------------------------------------------------------
via: http://www.ubuntugeek.com/lnav-ncurses-based-log-file-viewer.html
作者:[ruchi][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.ubuntugeek.com/author/ubuntufix

View File

@ -0,0 +1,59 @@
如何在 Ubuntu 16.0415.1014.04 中安装 GIMP 2.8.16
================================================================================
![GIMP 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-icon.png)
GIMP 图像编辑器在其 20 岁生日之际发布了 2.8.16 版本。下面介绍如何在 Ubuntu 16.04、Ubuntu 15.10、Ubuntu 14.04、Ubuntu 12.04 及其衍生版本(如 Linux Mint 17.x/13、Elementary OS Freya)中安装或升级 GIMP。
GIMP 2.8.16 支持 OpenRaster 文件中的层组,修复了 PSD 中的层组支持以及各种用户界面改进,修复了 OSX 上的构建系统,以及更多新的变化。请阅读 [官方声明][1]。
![GIMP image editor 2.8,16](http://ubuntuhandbook.org/wp-content/uploads/2014/08/gimp-2-8-14.jpg)
### 如何安装或升级: ###
多亏了 Otto Meier,[Ubuntu PPA][2] 中最新的 GIMP 包可用于当前所有的 Ubuntu 版本和其衍生版。
**1. 添加 GIMP PPA**
从 Unity Dash 中打开终端,或通过 Ctrl+Alt+T 快捷键打开。打开它后,粘贴下面的命令并回车:
sudo add-apt-repository ppa:otto-kesselgulasch/gimp
![add GIMP PPA](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-ppa.jpg)
输入你的密码,密码不会在终端显示,然后回车继续。
**2. 安装或升级编辑器**
在添加了 PPA 后,启动 **Software Updater**(在 Mint 中是 Software Manager)。检查更新后,你将看到 GIMP 的更新列表。点击 “Install Now” 进行升级。
![upgrade-gimp2816](http://ubuntuhandbook.org/wp-content/uploads/2015/11/upgrade-gimp2816.jpg)
对于那些喜欢 Linux 命令的人,可以按顺序执行下面的命令,先刷新仓库的缓存,然后安装 GIMP:
sudo apt-get update
sudo apt-get install gimp
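安装或升级完成后,可以查看版本号确认是否成功:

gimp --version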
**3. (可选的) 卸载**
如果你想卸载或降级 GIMP 图像编辑器。从软件中心直接删除它,或者按顺序运行下面的命令来将 PPA 清除并降级软件:
sudo apt-get install ppa-purge
sudo ppa-purge ppa:otto-kesselgulasch/gimp
就这样。玩的愉快!
--------------------------------------------------------------------------------
via: http://ubuntuhandbook.org/index.php/2015/11/how-to-install-gimp-2-8-16-in-ubuntu-16-04-15-10-14-04/
作者:[Ji m][a]
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ubuntuhandbook.org/index.php/about/
[1]:http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/
[2]:https://launchpad.net/~otto-kesselgulasch/+archive/ubuntu/gimp

View File

@ -0,0 +1,143 @@
tar 命令使用介绍
================================================================================
Linux [tar][1] 命令是归档或分发文件时的强大武器。GNU tar 归档包可以包含多个文件和目录还能保留其文件权限它还支持多种压缩格式。Tar 表示 "**T**ape **Ar**chiver",这种格式是 POSIX 标准。
### Tar 文件格式 ###
tar 压缩等级简介:
- **无压缩** 没有压缩的文件用 .tar 结尾。
- **Gzip 压缩** Gzip 格式是 tar 使用最广泛的压缩格式,它能快速压缩和提取文件。用 gzip 压缩的文件通常用 .tar.gz 或 .tgz 结尾。这里有一些如何[创建][2]和[解压][3] tar.gz 文件的例子。
- **Bzip2 压缩** 和 Gzip 格式相比 Bzip2 提供了更好的压缩比。创建压缩文件也比较慢,通常采用 .tar.bz2 结尾。
- **Lzip(LZMA)压缩:** Lzip 压缩结合了 Gzip 的速度优势,以及和 Bzip2 类似(甚至更好)的压缩率。尽管有这些好处,这个格式并没有得到广泛使用。
- **Lzop 压缩** 这个压缩选项也许是 tar 最快的压缩格式,它的压缩率和 gzip 类似,但也没有广泛使用。
常见的格式是 tar.gz 和 tar.bz2。如果你想快速压缩,那么就使用 gzip;如果归档文件大小比较重要,就使用 tar.bz2。
### tar 命令用来干什么? ###
下面是一些使用 tar 命令的常见情形。
- 备份服务器或桌面系统
- 文档归档
- 软件分发
### 安装 tar ###
大部分 Linux 系统默认都安装了 tar。如果没有这里有安装 tar 的命令。
#### CentOS ####
在 CentOS 中,以 root 用户在 shell 中执行下面的命令安装 tar。
yum install tar
#### Ubuntu ####
下面的命令会在 Ubuntu 上安装 tar。“sudo” 命令确保 apt 命令是以 root 权限运行的。
sudo apt-get install tar
#### Debian ####
下面的 apt 命令在 Debian 上安装 tar。
apt-get install tar
#### Windows ####
tar 命令在 Windows 也可以使用,你可以从 GnuWin 项目 [http://gnuwin32.sourceforge.net/packages/gtar.htm][4] 中下载它。
### 创建 tar.gz 文件 ###
下面是在 shell 中运行 [tar 命令][5] 的一些例子。下面我会解释这些命令行选项。
tar pczf myarchive.tar.gz /home/till/mydocuments
这个命令会创建归档文件 myarchive.tar.gz,其中包括了路径 /home/till/mydocuments 中的文件和目录。

**命令行选项解释:**
- **[p]** 这个选项表示 “preserve”它指示 tar 在归档文件中保留文件属主和权限信息。
- **[c]** 表示创建。要创建文件时不能缺少这个选项。
- **[z]** z 选项启用 gzip 压缩。
- **[f]** file 选项告诉 tar 创建一个归档文件。如果没有这个选项 tar 会把输出发送到标准输出( LCTT 译注:如果没有指定,标准输出默认是屏幕,显然你不会想在屏幕上显示一堆乱码,通常你可以用管道符号送到其它程序去)。
#### Tar 命令示例 ####
**示例 1 备份 /etc 目录**
创建 /etc 配置目录的一个备份。备份保存在 root 目录。
tar pczvf /root/etc.tar.gz /etc
![用 tar 备份 /etc 目录](https://www.howtoforge.com/images/linux-tar-command/big/create-tar.png)
要以 root 用户运行该命令,以确保 /etc 中的所有文件都会被包含在备份中。这次,我在命令中添加了 [v] 选项。这个选项表示 verbose(详细),它告诉 tar 显示所有被包含到归档文件中的文件名。
**示例 2 备份你的 /home 目录**
创建你的 home 目录的备份。备份会被保存到 /backup 目录。
tar czf /backup/myuser.tar.gz /home/myuser
用你的用户名替换 myuser。这个命令中我省略了 [p] 选项,也就不会保存权限。
**示例 3 基于文件的 MySQL 数据库备份**
在大部分 Linux 发行版中MySQL 数据库保存在 /var/lib/mysql。你可以使用下面的命令来查看
ls /var/lib/mysql
![使用 tar 基于文件备份 MySQL](https://www.howtoforge.com/images/linux-tar-command/big/tar_backup_mysql.png)
用 tar 备份 MySQL 数据文件时为了保持数据一致性,首先停用数据库服务器。备份会被写到 /backup 目录。
1 创建 backup 目录
mkdir /backup
chmod 600 /backup
2 停止 MySQL用 tar 进行备份并重新启动数据库。
service mysql stop
tar pczf /backup/mysql.tar.gz /var/lib/mysql
service mysql start
ls -lah /backup
![基于文件的 MySQL 备份](https://www.howtoforge.com/images/linux-tar-command/big/tar-backup-mysql2.png)
### 提取 tar.gz 文件###
提取 tar.gz 文件的命令是:
tar xzf myarchive.tar.gz
#### tar 命令选项解释 ####
- **[x]** x 表示提取,提取 tar 文件时这个命令不可缺少。
- **[z]** z 选项告诉 tar 要解压的归档文件是 gzip 格式。
- **[f]** 该选项告诉 tar 从一个文件中读取归档内容,本例中是 myarchive.tar.gz。
上面的 tar 命令会安静地提取 tar.gz 文件,除非有错误信息。如果你想要看提取了哪些文件,那么添加 “v” 选项。
tar xzvf myarchive.tar.gz
**[v]** 选项表示 verbose它会向你显示解压的文件名。
![提取 tar.gz 文件](https://www.howtoforge.com/images/linux-tar-command/big/tar-xfz.png)
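另外,如果你只想查看归档里有哪些文件而不实际解压,可以把 [x] 换成 [t](列出)选项:

tar tzvf myarchive.tar.gz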
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/linux-tar-command/
作者:[howtoforge][a]
译者:[ictlyh](http://mutouxiaogui.cn/blog/)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/
[1]:https://en.wikipedia.org/wiki/Tar_(computing)
[2]:http://www.faqforge.com/linux/create-tar-gz/
[3]:http://www.faqforge.com/linux/extract-tar-gz/
[4]:http://gnuwin32.sourceforge.net/packages/gtar.htm
[5]:http://www.faqforge.com/linux/tar-command/

View File

@ -0,0 +1,60 @@
如何在 CentOS 6/7 上移除被 Fail2ban 禁止的 IP
================================================================================
![](http://www.ehowstuff.com/wp-content/uploads/2015/12/security-265130_1280.jpg)
[fail2ban][1] 是一款用于保护你的服务器免于暴力攻击的入侵保护软件。fail2ban 用 python 写成,并广泛用于很多服务器上。fail2ban 会扫描日志文件,找出恶意行为的迹象,比如过多的密码失败尝试、针对 web 服务器的漏洞利用、wordpress 插件攻击等,并据此封禁 IP。如果你已经安装并使用了 fail2ban 来保护你的 web 服务器,你也许会想知道如何在 CentOS 6、CentOS 7、RHEL 6、RHEL 7 和 Oracle Linux 6/7 中找到被 fail2ban 阻止的 IP,或者你想将 IP 从 fail2ban 监狱中移除。
### 如何列出被禁止的 IP ###
要查看所有被禁止的 ip 地址,运行下面的命令:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
f2b-AccessForbidden tcp -- anywhere anywhere tcp dpt:http
f2b-WPLogin tcp -- anywhere anywhere tcp dpt:http
f2b-ConnLimit tcp -- anywhere anywhere tcp dpt:http
f2b-ReqLimit tcp -- anywhere anywhere tcp dpt:http
f2b-NoAuthFailures tcp -- anywhere anywhere tcp dpt:http
f2b-SSH tcp -- anywhere anywhere tcp dpt:ssh
f2b-php-url-open tcp -- anywhere anywhere tcp dpt:http
f2b-nginx-http-auth tcp -- anywhere anywhere multiport dports http,https
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere tcp dpt:EtherNet/IP-1
ACCEPT tcp -- anywhere anywhere tcp dpt:http
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain f2b-NoAuthFailures (1 references)
target prot opt source destination
REJECT all -- 64.68.50.128 anywhere reject-with icmp-port-unreachable
REJECT all -- 104.194.26.205 anywhere reject-with icmp-port-unreachable
RETURN all -- anywhere anywhere
### 如何从 Fail2ban 中移除 IP ###
要解除封禁,可以从对应的 iptables 链中删除相应规则,把 banned_ip 替换为你要移除的实际 IP 地址:
# iptables -D f2b-NoAuthFailures -s banned_ip -j REJECT
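如果你的 fail2ban 版本较新(0.8.8 及以上),也可以用 fail2ban-client 来解封,这样 fail2ban 自身记录的封禁状态也会同步更新(下面的 jail 名称 ssh-iptables 只是示例,请替换成你实际使用的 jail,IP 沿用上面列表中的例子):

# fail2ban-client set ssh-iptables unbanip 64.68.50.128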
我希望这篇教程可以给你在 CentOS 6、CentOS 7、RHEL 6、RHEL 7 和 Oracle Linux 6/7 中移除被禁止的 ip 一些指导。
--------------------------------------------------------------------------------
via: http://www.ehowstuff.com/how-to-remove-banned-ip-from-fail2ban-on-centos/
作者:[skytech][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.ehowstuff.com/author/skytech/
[1]:http://www.fail2ban.org/wiki/index.php/Main_Page

View File

@ -1,12 +1,13 @@
第 10 部分:在 RHEL/CentOS 7 中设置 “NTP网络时间协议 服务器”
RHCE 系列(十):在 RHEL/CentOS 7 中设置 NTP网络时间协议服务器
================================================================================
网络时间协议 - NTP - 是运行在传输层 123 号端口允许计算机通过网络同步准确时间的协议。随着时间的流逝,计算机内部时间会出现漂移,这会导致时间不一致问题,尤其是对于服务器和客户端日志文件,或者你想要备份服务器资源或数据库。
网络时间协议 - NTP - 是运行在传输层 123 号端口的 UDP 协议,它允许计算机通过网络同步准确时间。随着时间的流逝,计算机内部时间会出现漂移,这会导致时间不一致问题,尤其是对于服务器和客户端日志文件,或者你想要复制服务器的资源或数据库。
![在 CentOS 上安装 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Server-Install-in-CentOS.png)
在 CentOS 和 RHEL 7 上安装 NTP 服务器
*在 CentOS 和 RHEL 7 上安装 NTP 服务器*
#### 要求: ####
#### 前置要求: ####
- [CentOS 7 安装过程][1]
- [RHEL 安装过程][2]
@ -17,62 +18,62 @@
- [在 CentOS/RHCE 7 上配置静态 IP][4]
- [在 CentOS/RHEL 7 上停用并移除不需要的服务][5]
这篇指南会告诉你如何在 CentOS/RHCE 7 上安装和配置 NTP 服务器,并使用 NTP 公共时间服务器池列表中和你服务器地理位置最近的可用节点中同步时间。
这篇指南会告诉你如何在 CentOS/RHCE 7 上安装和配置 NTP 服务器,并使用 NTP 公共时间服务器池NTP Public Pool Time Servers列表中和你服务器地理位置最近的可用节点中同步时间。
#### 步骤一:安装和配置 NTP 守护进程 ####
1. 官方 CentOS /RHEL 7 库默认提供 NTP 服务器安装包,可以通过使用下面的命令安装。
1 官方 CentOS /RHEL 7 库默认提供 NTP 服务器安装包,可以通过使用下面的命令安装。
# yum install ntp
![在 CentOS 上安装 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/Install-NTP-in-CentOS.png)
安装 NTP 服务器
*安装 NTP 服务器*
2. 安装完服务器之后,首先到官方 [NTP 公共时间服务器池][6],选择你服务器物理位置所在的洲,然后搜索你的国家位置,然后会出现 NTP 服务器列表。
2 安装完服务器之后,首先到官方 [NTP 公共时间服务器池NTP Public Pool Time Servers][6],选择你服务器物理位置所在的洲,然后搜索你的国家位置,然后会出现 NTP 服务器列表。
![NTP 服务器池](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Pool-Server.png)
NTP 服务器池
*NTP 服务器池*
3. 然后打开编辑 NTP 守护进程主要配置文件,从 pool.ntp.org 中注释掉默认的公共服务器列表并用类似下面截图提供给你国家的列表替换。
3、 然后打开编辑 NTP 守护进程的主配置文件,注释掉来自 pool.ntp.org 项目的公共服务器默认列表并用类似下面截图中提供给你所在国家的列表替换。LCTT 译注:中国使用 0.cn.pool.ntp.org 等)
![在 CentOS 中配置 NTP 服务器](http://www.tecmint.com/wp-content/uploads/2014/09/Configure-NTP-Server.png)
配置 NTP 服务器
*配置 NTP 服务器*
4. 下一步,你需要允许客户端从你的网络中和这台服务器同步时间。为了做到这点,添加下面一行到 NTP 配置文件,其中限制语句控制允许哪些网络查询和同步时间 - 根据需要替换网络 IP。
4、 下一步,你需要允许来自你的网络的客户端和这台服务器同步时间。为了做到这点,添加下面一行到 NTP 配置文件,其中 **restrict** 语句控制允许哪些网络查询和同步时间 - 根据需要替换网络 IP。
restrict 192.168.1.0 netmask 255.255.255.0 nomodify notrap
nomodify notrap 语句意味着不允许你的客户端配置服务器或者作为同步时间的节点。
5. 如果你需要额外的信息用于错误处理,以防你的 NTP 守护进程出现问题,添加一个 logfile 语句,用于记录所有 NTP 服务器问题到一个指定的日志文件。
5、 如果你需要用于错误处理的额外信息,以防你的 NTP 守护进程出现问题,添加一个 logfile 语句,用于记录所有 NTP 服务器问题到一个指定的日志文件。
logfile /var/log/ntp.log
![在 CentOS 中启用 NTP 日志](http://www.tecmint.com/wp-content/uploads/2014/09/Enable-NTP-Log.png)
启用 NTP 日志
*启用 NTP 日志*
6. 你编辑完所有上面解释的配置并保存关闭 ntp.conf 文件后,你最终的配置看起来像下面的截图。
6、 在你编辑完所有上面解释的配置并保存关闭 ntp.conf 文件后,你最终的配置看起来像下面的截图。
![CentOS 中 NTP 服务器的配置](http://www.tecmint.com/wp-content/uploads/2014/09/NTP-Server-Configuration.png)
NTP 服务器配置
*NTP 服务器配置*
### 步骤二:添加防火墙规则并启动 NTP 守护进程 ###
7. NTP 服务在传输层(第四层)使用 123 号 UDP 端口。它是针对限制可变延迟的影响特别设计的。要在 RHEL/CentOS 7 中开放这个端口,可以对 Firewalld 服务使用下面的命令。
7、 NTP 服务使用 OSI 传输层(第四层)的 123 号 UDP 端口。它是为了避免可变延迟的影响所特别设计的。要在 RHEL/CentOS 7 中开放这个端口,可以对 Firewalld 服务使用下面的命令。
# firewall-cmd --add-service=ntp --permanent
# firewall-cmd --reload
![在 Firewall 中开放 NTP 端口](http://www.tecmint.com/wp-content/uploads/2014/09/Open-NTP-Port.png)
在 Firewall 中开放 NTP 端口
*在 Firewall 中开放 NTP 端口*
8. 你在防火墙中开放了 123 号端口之后,启动 NTP 服务器并确保系统范围内可用。用下面的命令管理服务。
8 你在防火墙中开放了 123 号端口之后,启动 NTP 服务器并确保系统范围内可用。用下面的命令管理服务。
# systemctl start ntpd
# systemctl enable ntpd
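如果想确认服务确实已经在运行,可以再查看一下它的状态(标准的 systemd 用法):

# systemctl status ntpd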
@ -80,34 +81,34 @@ NTP 服务器配置
![启动 NTP 服务](http://www.tecmint.com/wp-content/uploads/2014/09/Start-NTP-Service.png)
启动 NTP 服务
*启动 NTP 服务*
### 步骤三:验证服务器时间同步 ###
9. 启动了 NTP 守护进程后,用几分钟等服务器和它的服务器池列表同步时间,然后运行下面的命令验证 NTP 节点同步状态和你的系统时间。
9 启动了 NTP 守护进程后,用几分钟等服务器和它的服务器池列表同步时间,然后运行下面的命令验证 NTP 节点同步状态和你的系统时间。
# ntpq -p
# date -R
![验证 NTP 服务器时间](http://www.tecmint.com/wp-content/uploads/2014/09/Verify-NTP-Time-Sync.png)
验证 NTP 时间同步
*验证 NTP 时间同步*
10. 如果你想查询或者和你选择的服务器池同步,你可以使用 ntpdate 命令,后面跟服务器名或服务器地址,类似下面建议的命令行例。
10 如果你想查询或者和你选择的服务器池同步,你可以使用 ntpdate 命令,后面跟服务器名或服务器地址,类似下面建议的命令行例。
# ntpdate -q 0.ro.pool.ntp.org 1.ro.pool.ntp.org
![同步 NTP 同步](http://www.tecmint.com/wp-content/uploads/2014/09/Synchronize-NTP-Time.png)
同步 NTP 时间
*同步 NTP 时间*
### 步骤四:设置 Windows NTP 客户端 ###
11. 如果你的 windows 机器不是域名控制器的一部分,你可以配置 Windows 和你的 NTP服务器同步时间。在任务栏右边 -> 时间 -> 更改日期和时间设置 -> 网络时间标签 -> 更改设置 -> 和一个网络时间服务器检查同步 -> 在 Server 空格输入服务器 IP 或 FQDN -> 马上更新 -> OK。
11、 如果你的 windows 机器不是域控制器的一部分,你可以配置 Windows 和你的 NTP 服务器同步时间。在任务栏右边 -> 时间 -> 更改日期和时间设置 -> 网络时间标签 -> 更改设置 -> 和一个网络时间服务器检查同步 -> 在 Server 空格输入服务器 IP 或 FQDN -> 马上更新 -> OK。
![和 NTP 同步 Windows 时间](http://www.tecmint.com/wp-content/uploads/2014/09/Synchronize-Windows-Time-with-NTP.png)
和 NTP 同步 Windows 时间
*和 NTP 同步 Windows 时间*
就是这些。在你的网络中配置一个本地 NTP 服务器,能确保你所有的服务器和客户端有相同的时间设置,即使互联网连接出现故障,它们彼此之间也能保持同步。
@ -117,7 +118,7 @@ via: http://www.tecmint.com/install-ntp-server-in-centos/
作者:[Matei Cezar][a]
译者:[ictlyh](http://motouxiaogui.cn/blog)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -1,11 +1,13 @@
RHCE 系列: 使用网络安全服务NSS为 Apache 通过 TLS 实现 HTTPS
RHCE 系列(八) Apache 上使用网络安全服务NSS实现 HTTPS
================================================================================
如果你是一个负责维护和确保 web 服务器安全的系统管理员,你不能不花费最大的精力确保服务器中处理和通过的数据任何时候都受到保护。
如果你是一个负责维护和确保 web 服务器安全的系统管理员,你需要花费最大的精力确保服务器中处理和通过的数据任何时候都受到保护。
![使用 SSL/TLS 设置 Apache HTTPS](http://www.tecmint.com/wp-content/uploads/2015/09/Setup-Apache-SSL-TLS-Server.png)
RHCE 系列:第八部分 - 使用网络安全服务NSS为 Apache 通过 TLS 实现 HTTPS
*RHCE 系列:第八部分 - 使用网络安全服务NSS为 Apache 通过 TLS 实现 HTTPS*
为了在客户端和服务器之间提供更安全的连接,作为 HTTP 和 SSL安全套接层或者最近称为 TLS传输层安全的组合产生了 HTTPS 协议。
为了在客户端和服务器之间提供更安全的连接,作为 HTTP 和 SSLSecure Sockets Layer安全套接层)或者最近称为 TLSTransport Layer Security传输层安全)的组合,产生了 HTTPS 协议。
由于一些严重的安全漏洞SSL 已经被更健壮的 TLS 替代。由于这个原因,在这篇文章中我们会解析如何通过 TLS 实现你 web 服务器和客户端之间的安全连接。
@ -22,11 +24,11 @@ RHCE 系列:第八部分 - 使用网络安全服务NSS为 Apache 通过
# firewall-cmd --permanent --add-service=http
# firewall-cmd --permanent --add-service=https
然后安装一些必软件包:
然后安装一些必需的软件包:
# yum update && yum install openssl mod_nss crypto-utils
**重要**:请注意如果你想使用 OpenSSL 库而不是 NSS网络安全服务实现 TLS你可以在上面的命令中用 mod\_ssl 替换 mod\_nss使用哪一个取决于你但在这篇文章中由于更加健壮我们会使用 NSS例如,它支持最新的加密标准,比如 PKCS #11)。
**重要**:请注意如果你想使用 OpenSSL 库而不是 NSSNetwork Security Service网络安全服务)实现 TLS你可以在上面的命令中用 mod\_ssl 替换 mod\_nss使用哪一个取决于你但在这篇文章中我们会使用 NSS因为它更加安全比如说,它支持最新的加密标准,比如 PKCS #11)。
如果你使用 mod\_nss首先要卸载 mod\_ssl反之如此。
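例如,决定使用 mod\_nss 时,可以像下面这样移除 mod\_ssl(假设系统中已经安装了它):

# yum remove mod_ssl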
@ -54,15 +56,15 @@ nss.conf 配置文件
下一步,在 `/etc/httpd/conf.d/nss.conf` 配置文件中做以下更改:
1. 指定 NSS 数据库目录。你可以使用默认的目录或者新建一个。本文中我们使用默认的:
1 指定 NSS 数据库目录。你可以使用默认的目录或者新建一个。本文中我们使用默认的:
NSSCertificateDatabase /etc/httpd/alias
2. 通过保存密码到数据库目录中的 /etc/httpd/nss-db-password.conf 文件避免每次系统启动时要手动输入密码:
2、 通过保存密码到数据库目录中的 `/etc/httpd/nss-db-password.conf` 文件来避免每次系统启动时要手动输入密码:
NSSPassPhraseDialog file:/etc/httpd/nss-db-password.conf
其中 /etc/httpd/nss-db-password.conf 只包含以下一行,其中 mypassword 是后面你为 NSS 数据库设置的密码:
其中 `/etc/httpd/nss-db-password.conf` 只包含以下一行,其中 mypassword 是后面你为 NSS 数据库设置的密码:
internal:mypassword
@ -71,27 +73,27 @@ nss.conf 配置文件
# chmod 640 /etc/httpd/nss-db-password.conf
# chgrp apache /etc/httpd/nss-db-password.conf
3. 由于 POODLE SSLv3 漏洞,红帽建议停用 SSL 和 TLSv1.0 之前所有版本的 TLS更多信息可以查看[这里][2])。
3 由于 POODLE SSLv3 漏洞,红帽建议停用 SSL 和 TLSv1.0 之前所有版本的 TLS更多信息可以查看[这里][2])。
确保 NSSProtocol 指令的每个实例都类似下面一样(如果你没有托管其它虚拟主机,很可能只有一条):
NSSProtocol TLSv1.0,TLSv1.1
4. 由于这是一个自签名证书Apache 会拒绝重启,并不会识别为有效发行人。由于这个原因,对于这种特殊情况我们还需要添加:
4 由于这是一个自签名证书Apache 会拒绝重启,并不会识别为有效发行人。由于这个原因,对于这种特殊情况我们还需要添加:
NSSEnforceValidCerts off
5. 虽然并不是严格要求,为 NSS 数据库设置一个密码同样很重要:
5 虽然并不是严格要求,为 NSS 数据库设置一个密码同样很重要:
# certutil -W -d /etc/httpd/alias
![为 NSS 数据库设置密码](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Password-for-NSS-Database.png)
为 NSS 数据库设置密码
*为 NSS 数据库设置密码*
### 创建一个 Apache SSL 自签名证书 ###
下一步,我们会创建一个自签名证书为我们的客户机识别服务器(请注意这个方法对于生产环境并不是最好的选择;对于生产环境你应该考虑购买第三方可信证书机构验证的证书,例如 DigiCert
下一步,我们会创建一个自签名证书来让我们的客户机可以识别服务器(请注意这个方法对于生产环境并不是最好的选择;对于生产环境你应该考虑购买第三方可信证书机构验证的证书,例如 DigiCert
我们用 genkey 命令为 box1 创建有效期为 365 天的 NSS 兼容证书。完成这一步后:
@ -101,19 +103,19 @@ nss.conf 配置文件
![创建 Apache SSL 密钥](http://www.tecmint.com/wp-content/uploads/2015/09/Create-Apache-SSL-Key.png)
创建 Apache SSL 密钥
*创建 Apache SSL 密钥*
你可以使用默认的密钥大小2048然后再次选择 Next
![选择 Apache SSL 密钥大小](http://www.tecmint.com/wp-content/uploads/2015/09/Select-Apache-SSL-Key-Size.png)
选择 Apache SSL 密钥大小
*选择 Apache SSL 密钥大小*
等待系统生成随机比特:
![生成随机密钥比特](http://www.tecmint.com/wp-content/uploads/2015/09/Generating-Random-Bits.png)
生成随机密钥比特
*生成随机密钥比特*
为了加快速度,会提示你在控制台输入随机字符,正如下面的截图所示。请注意当没有从键盘接收到输入时进度条是如何停止的。然后,会让你选择:
@ -124,35 +126,35 @@ nss.conf 配置文件
youtube 视频
<iframe width="720" height="405" frameborder="0" src="//www.youtube.com/embed/mgsfeNfuurA" allowfullscreen="allowfullscreen"></iframe>
最后,会提示你输入之前设置的密码到 NSS 证书
最后,会提示你输入之前给 NSS 证书设置的密码
# genkey --nss --days 365 box1
![Apache NSS 证书密码](http://www.tecmint.com/wp-content/uploads/2015/09/Apache-NSS-Password.png)
Apache NSS 证书密码
*Apache NSS 证书密码*
在任何时候你都可以用以下命令列出现有的证书:
需要的话,你可以用以下命令列出现有的证书:
# certutil -L -d /etc/httpd/alias
![列出 Apache NSS 证书](http://www.tecmint.com/wp-content/uploads/2015/09/List-Apache-Certificates.png)
列出 Apache NSS 证书
*列出 Apache NSS 证书*
然后通过名字删除(除非严格要求,用你自己的证书名称替换 box1
然后通过名字删除(如果你真的需要删除的,用你自己的证书名称替换 box1
# certutil -d /etc/httpd/alias -D -n "box1"
如果你需要继续的话:
如果你需要继续进行的话,请继续阅读。
### 测试 Apache SSL HTTPS 连接 ###
最后,是时候测试到我们服务器的安全连接了。当你用浏览器打开 https://<web 服务器 IP 或主机名\>你会看到著名的信息 This connection is untrusted”:
最后,是时候测试到我们服务器的安全连接了。当你用浏览器打开 https://\<web 服务器 IP 或主机名\>,你会看到著名的信息 “This connection is untrusted”
![检查 Apache SSL 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Connection.png)
检查 Apache SSL 连接
*检查 Apache SSL 连接*
在上面的情况中你可以点击添加例外Add Exception 然后确认安全例外Confirm Security Exception - 但先不要这么做。让我们首先来看看证书看它的信息是否和我们之前输入的相符(如截图所示)。
@ -160,37 +162,37 @@ Apache NSS 证书密码
![确认 Apache SSL 证书详情](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Apache-SSL-Certificate-Details.png)
确认 Apache SSL 证书详情
*确认 Apache SSL 证书详情*
现在你继续,确认例外(限于此次或永久),然后会通过 https 把你带到你 web 服务器的 DocumentRoot 目录,在这里你可以使用你浏览器自带的开发者工具检查连接详情:
现在你可以继续,确认例外(限于此次或永久),然后会通过 https 把你带到你 web 服务器的 DocumentRoot 目录,在这里你可以使用你浏览器自带的开发者工具检查连接详情:
在火狐浏览器中你可以通过在屏幕中右击然后从上下文菜单中选择检查元素Inspect Element启动,尤其是通过网络选项卡:
在火狐浏览器中,你可以通过在屏幕中右击然后从上下文菜单中选择检查元素Inspect Element启动开发者工具,尤其要看“网络”选项卡:
![检查 Apache HTTPS 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Inspect-Apache-HTTPS-Connection.png)
检查 Apache HTTPS 连接
*检查 Apache HTTPS 连接*
请注意这和之前显示的在验证过程中输入的信息一致。还有一种方式通过使用命令行工具测试连接:
(测试 SSLv3
(测试 SSLv3
# openssl s_client -connect localhost:443 -ssl3
(测试 TLS
(测试 TLS
# openssl s_client -connect localhost:443 -tls1
![测试 Apache SSL 和 TLS 连接](http://www.tecmint.com/wp-content/uploads/2015/09/Testing-Apache-SSL-and-TLS.png)
测试 Apache SSL 和 TLS 连接
*测试 Apache SSL 和 TLS 连接*
参考上面的截图了解更相信信息。
参考上面的截图了解更详细信息。
### 总结 ###
确信你已经知道,使用 HTTPS 会增加会在你站点中输入个人信息的访客的信任(从用户名和密码到任何商业/银行账户信息)。
你已经知道,使用 HTTPS 会增加会在你站点中输入个人信息的访客的信任(从用户名和密码到任何商业/银行账户信息)。
在那种情况下,你会希望获得由可信验证机构签名的证书,正如我们之前解释的(启用的步骤和发送 CSR 到 CA 然后获得签名证书的例子相同);另外的情况,就是像我们的例子中一样使用自签名证书
在那种情况下,你会希望获得由可信验证机构签名的证书,正如我们之前解释的(步骤和设置需要启用例外的证书的步骤相同,发送 CSR 到 CA 然后获得返回的签名证书);否则,就像我们的例子中一样使用自签名证书即可
要获取更多关于使用 NSS 的详情,可以参考关于 [mod-nss][3] 的在线帮助。如果你有任何疑问或评论,请告诉我们。
@ -200,11 +202,11 @@ via: http://www.tecmint.com/create-apache-https-self-signed-certificate-using-ns
作者:[Gabriel Cánepa][a]
译者:[ictlyh](http://www.mutouxiaogui.cn/blog/)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/install-lamp-in-centos-7/
[1]:http://www.tecmint.com/author/gacanepa/
[a]:http://www.tecmint.com/author/gacanepa/
[1]:https://linux.cn/article-5789-1.html
[2]:https://access.redhat.com/articles/1232123
[3]:https://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html

View File

@ -1,25 +1,25 @@
第九部分 - 如果使用零客户端配置 Postfix 邮件服务器SMTP
RHCE 系列(九):如何使用无客户端配置 Postfix 邮件服务器SMTP
================================================================================
尽管现在有很多在线联系方式,邮件仍然是一个人传递信息给远在世界尽头或办公室里坐在我们旁边的另一个人的有效方式。
尽管现在有很多在线联系方式,电子邮件仍然是一个人传递信息给远在世界尽头或办公室里坐在我们旁边的另一个人的有效方式。
下面的图描述了邮件从发送者发出直到信息到达接收者收件箱的传递过程。
下面的图描述了电子邮件从发送者发出直到信息到达接收者收件箱的传递过程。
![邮件如何工作](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png)
![电子邮件如何工作](http://www.tecmint.com/wp-content/uploads/2015/09/How-Mail-Setup-Works.png)
邮件如何工作
*电子邮件如何工作*
使这成为可能,背后发生了好多事情。为了使邮件信息从一个客户端应用程序(例如 [Thunderbird][1]、Outlook或者网络邮件服务例如 Gmail 或 Yahoo 邮件)到一个邮件服务器,并从其到目标服务器并最终到目标接收人,每个服务器上都必须有 SMTP简单邮件传输协议服务。
实现这一切,背后发生了好多事情。为了使电子邮件信息从一个客户端应用程序(例如 [Thunderbird][1]、Outlook或者 web 邮件服务,例如 Gmail 或 Yahoo 邮件)投递到一个邮件服务器,并从其投递到目标服务器并最终到目标接收人,每个服务器上都必须有 SMTP简单邮件传输协议服务。
这就是为什么我们要在这篇博文中介绍如何在 RHEL 7 中设置 SMTP 服务器,从本地用户发送的邮件(甚至发送到本地用户)被转发到一个中央邮件服务器以便于访问。
这就是为什么我们要在这篇博文中介绍如何在 RHEL 7 中设置 SMTP 服务器,从本地用户发送的邮件(甚至发送到另外一个本地用户)被转发forward到一个中央邮件服务器以便于访问。
实际需求中这称为零客户端安装。
这个考试的要求中这称为无客户端null-client安装。
在我们的测试环境中将包括一个原始邮件服务器和一个中央服务器或中继主机。
在我们的测试环境中将包括一个起源originating邮件服务器和一个中央服务器或中继主机relayhost
原始邮件服务器: (主机名: box1.mydomain.com / IP: 192.168.0.18
中央邮件服务器: (主机名: mail.mydomain.com / IP: 192.168.0.20
- 起源邮件服务器: (主机名: box1.mydomain.com / IP: 192.168.0.18
- 中央邮件服务器: (主机名: mail.mydomain.com / IP: 192.168.0.20
为了域名解析我们在两台机器中都会使用有名的 /etc/hosts 文件
我们在两台机器中都会使用你熟知的 `/etc/hosts` 文件做名字解析
192.168.0.18 box1.mydomain.com box1
192.168.0.20 mail.mydomain.com mail
@ -28,34 +28,29 @@
首先,我们需要(在两台机器上):
**1. 安装 Postfix**
**1 安装 Postfix**
# yum update && yum install postfix
**2. 启动服务并启用开机自动启动:**
**2 启动服务并启用开机自动启动:**
# systemctl start postfix
# systemctl enable postfix
**3. 允许邮件流量通过防火墙:**
**3 允许邮件流量通过防火墙:**
# firewall-cmd --permanent --add-service=smtp
# firewall-cmd --add-service=smtp
![在防火墙中开通邮件服务器端口](http://www.tecmint.com/wp-content/uploads/2015/09/Allow-Traffic-through-Firewall.png)
在防火墙中开通邮件服务器端口
*在防火墙中开通邮件服务器端口*
**4. 在 box1.mydomain.com 配置 Postfix**
**4 在 box1.mydomain.com 配置 Postfix**
Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一个很大的文本因为其中包含的注释解析了程序设置的目的
Postfix 的主要配置文件是 `/etc/postfix/main.cf`。这个文件本身是一个很大的文本文件,因为其中包含了解释程序设置的用途的注释
为了简洁,我们只显示了需要编辑的行(是的,在原始服务器中你需要保留 mydestination 为空;否则邮件会被保存到本地而不是我们实际想要的中央邮件服务器):
**在 box1.mydomain.com 配置 Postfix**
----------
为了简洁,我们只显示了需要编辑的行(没错,在起源服务器中你需要保留 `mydestination` 为空;否则邮件会被存储到本地,而不是我们实际想要发往的中央邮件服务器):
myhostname = box1.mydomain.com
mydomain = mydomain.com
@ -64,11 +59,7 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一
mydestination =
relayhost = 192.168.0.20
**5. 在 mail.mydomain.com 配置 Postfix**
** 在 mail.mydomain.com 配置 Postfix **
----------
**5、 在 mail.mydomain.com 配置 Postfix**
myhostname = mail.mydomain.com
mydomain = mydomain.com
@ -83,23 +74,23 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一
![设置 Postfix SELinux 权限](http://www.tecmint.com/wp-content/uploads/2015/09/Set-Postfix-SELinux-Permission.png)
设置 Postfix SELinux 权限
*设置 Postfix SELinux 权限*
上面的 SELinux 布尔值会允许 Postfix 在中央服务器写入邮件池
上面的 SELinux 布尔值会允许中央服务器上的 Postfix 可以写入邮件池mail spool
**6. 在两台机子上重启服务以使更改生效:**
**6 在两台机子上重启服务以使更改生效:**
# systemctl restart postfix
如果 Postfix 没有正确启动,你可以使用下面的命令进行错误处理。
# systemctl l status postfix
# journalctl xn
# postconf n
# systemctl -l status postfix
# journalctl -xn
# postconf -n
### 测试 Postfix 邮件服务 ###
为了测试邮件服务器,你可以使用任何邮件用户代理(最常见的简称为 MUA例如 [mail 或 mutt][2]。
要测试邮件服务器你可以使用任何邮件用户代理Mail User Agent常简称为 MUA例如 [mail 或 mutt][2]。
由于我个人喜欢 mutt我会在 box1 中使用它发送邮件给用户 tecmint并把现有文件mailbody.txt作为信息内容
@ -107,7 +98,7 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一
![测试 Postfix 邮件服务器](http://www.tecmint.com/wp-content/uploads/2015/09/Test-Postfix-Mail-Server.png)
测试 Postfix 邮件服务器
*测试 Postfix 邮件服务器*
现在到中央邮件服务器mail.mydomain.com以 tecmint 用户登录,并检查是否收到了邮件:
@ -116,15 +107,15 @@ Postfix 的主要配置文件是 /etc/postfix/main.cf。这个文件本身是一
![检查 Postfix 邮件服务器发送](http://www.tecmint.com/wp-content/uploads/2015/09/Check-Postfix-Mail-Server-Delivery.png)
检查 Postfix 邮件服务器发送
*检查 Postfix 邮件服务器发送*
如果没有收到邮件,检查 root 用户的邮件池查看警告或者错误提示。你也需要使用 [nmap 命令][3]确保两台服务器运行了 SMTP 服务,并在中央邮件服务器中 打开了 25 号端口:
如果没有收到邮件,检查 root 用户的邮件池看看是否有警告或者错误提示。你也许需要使用 [nmap 命令][3]确保两台服务器运行了 SMTP 服务,并在中央邮件服务器中打开了 25 号端口:
# nmap -PN 192.168.0.20
![Postfix 邮件服务器错误处理](http://www.tecmint.com/wp-content/uploads/2015/09/Troubleshoot-Postfix-Mail-Server.png)
Postfix 邮件服务器错误处理
*Postfix 邮件服务器错误处理*
### 总结 ###
@ -134,7 +125,7 @@ Postfix 邮件服务器错误处理
- [在 CentOS/RHEL 07 上配置仅缓存的 DNS 服务器][4]
最后,我强烈建议你熟悉 Postfix 的配置文件main.cf和这个程序的帮助手册。如果有任何疑问别犹豫使用下面的评论框或者我们的论坛 Linuxsay.com 告诉我们吧,你会从世界各地的 Linux 高手中获得几乎及时的帮助。
最后,我强烈建议你熟悉 Postfix 的配置文件main.cf和这个程序的帮助手册。如果有任何疑问别犹豫使用下面的评论框或者我们的论坛 Linuxsay.com 告诉我们吧,你会从世界各地的 Linux 高手中获得几乎及时的帮助。
--------------------------------------------------------------------------------
@ -142,7 +133,7 @@ via: http://www.tecmint.com/setup-postfix-mail-server-smtp-using-null-client-on-
作者:[Gabriel Cánepa][a]
译者:[ictlyh](https//www.mutouxiaogui.cn/blog/)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,43 @@
Flowsnow translating
Apple Swift Programming Language Comes To Linux
================================================================================
![](http://itsfoss.com/wp-content/uploads/2015/12/Apple-Swift-Open-Source.jpg)
Apple and Open Source together? Yes! Apple's Swift programming language is now open source. This should not come as a surprise because [Apple had already announced it six months back][1].
The launch of the open source Swift community was announced this week. A [new website][2] dedicated to the open source Swift community has been put in place with the following message:
> We are excited by this new chapter in the story of Swift. After Apple unveiled the Swift programming language, it quickly became one of the fastest growing languages in history. Swift makes it easy to write software that is incredibly fast and safe by design. Now that Swift is open source, you can help make the best general purpose programming language available everywhere.
[swift.org][2] will work as the one stop shop providing downloads for various platforms, community guidelines, news, getting started tutorials, instructions for contribution to open source Swift, documentation and other guidelines. If you are looking forward to learning Swift, this website must be bookmarked.
In this announcement, a new package manager for easy sharing and building code has been made available as well.
Most important of all for Linux users, the source code is now available at [Github][3]. You can check it out from the link below:
- [Apple Swift Source Code][3]
In addition to that, there are prebuilt binaries for Ubuntu 14.04 and 15.10.
- [Swift binaries for Ubuntu][4]
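Trying out a snapshot is, roughly, a matter of unpacking the tarball and putting its `usr/bin` directory on your PATH. A minimal sketch, where the snapshot file name below is just a placeholder for whatever you actually download:

$ tar xzf swift-2.2-SNAPSHOT-ubuntu15.10.tar.gz

$ export PATH=$(pwd)/swift-2.2-SNAPSHOT-ubuntu15.10/usr/bin:"${PATH}"

$ swift --version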
Don't rush to use them because these are development branches and will not be suitable for production machines. So avoid them for now. Once a stable version of Swift for Linux is released, I hope that Ubuntu will include it in [umake][5] along the lines of [Visual Studio][6].
--------------------------------------------------------------------------------
via: http://itsfoss.com/swift-open-source-linux/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/apple-open-sources-swift-programming-language-linux/
[2]:https://swift.org/
[3]:https://github.com/apple
[4]:https://swift.org/download/#latest-development-snapshots
[5]:https://wiki.ubuntu.com/ubuntu-make
[6]:http://itsfoss.com/install-visual-studio-code-ubuntu/

View File

@ -1,3 +1,4 @@
translating by tastynoodle
5 best open source board games to play online
================================================================================
I have always had a fascination with board games, in part because they are a device of social interaction, they challenge the mind and, most importantly, they are great fun to play. In my misspent youth, myself and a group of friends gathered together to escape the horrors of the classroom, and indulge in a little escapism. The time provided an outlet for tension and rivalry. Board games help teach diplomacy, how to make and break alliances, bring families and friends together, and learn valuable lessons.

View File

@ -1,605 +0,0 @@
translation by strugglingyouth
80 Linux Monitoring Tools for SysAdmins
================================================================================
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-monitoring.jpg)
The industry is hotting up at the moment, and there are more tools than you can shake a stick at. Here lies the most comprehensive list on the Internet (of Tools). Featuring over 80 ways to monitor your machines. Within this article we outline:
- Command line tools
- Network related
- System related monitoring
- Log monitoring tools
- Infrastructure monitoring tools
Its hard work monitoring and debugging performance problems, but its easier with the right tools at the right time. Here are some tools youve probably heard of, some you probably havent and when to use them:
### Top 10 System Monitoring Tools ###
#### 1. Top ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/top.jpg)
This is a small tool which is pre-installed in many unix systems. When you want an overview of all the processes or threads running in the system: top is a good tool. You can order these processes on different criteria and the default criteria is CPU.
#### 2. [htop][1] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/htop.jpg)
Htop is essentially an enhanced version of top. Its easier to sort by processes. Its visually easier to understand and has built in commands for common things you would like to do. Plus its fully interactive.
#### 3. [atop][2] ####
Atop monitors all processes much like top and htop, unlike top and htop however it has daily logging of the processes for long-term analysis. It also shows resource consumption by all processes. It will also highlight resources that have reached a critical load.
#### 4. [apachetop][3] ####
Apachetop monitors the overall performance of your apache webserver. Its largely based on mytop. It displays current number of reads, writes and the overall number of requests processed.
#### 5. [ftptop][4] ####
ftptop gives you basic information of all the current ftp connections to your server such as the total amount of sessions, how many are uploading and downloading and who the client is.
#### 6. [mytop][5] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mytop.jpg)
mytop is a neat tool for monitoring threads and performance of mysql. It gives you a live look into the database and what queries its processing in real time.
#### 7. [powertop][6] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/powertop.jpg)
powertop helps you diagnose issues that has to do with power consumption and power management. It can also help you experiment with power management settings to achieve the most efficient settings for your server. You switch tabs with the tab key.
#### 8. [iotop][7] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iotop.jpg)
iotop checks the I/O usage information and gives you a top-like interface to that. It displays columns on read and write and each row represents a process. It also displays the percentage of time the process spent while swapping in and while waiting on I/O.
### Network related monitoring ###
#### 9. [ntopng][8] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ntopng.jpg)
ntopng is the next generation of ntop and the tool provides a graphical user interface via the browser for network monitoring. It can do stuff such as: geolocate hosts, get network traffic and show ip traffic distribution and analyze it.
#### 10. [iftop][9] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iftop.jpg)
iftop is similar to top, but instead of mainly checking for cpu usage it listens to network traffic on selected network interfaces and displays a table of current usage. It can be handy for answering questions such as “Why on earth is my internet connection so slow?!”.
#### 11. [jnettop][10] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/jnettop.jpg)
jnettop visualises network traffic in much the same way as iftop does. It also supports customizable text output and a machine-friendly mode to support further analysis.
#### 12. [bandwidthd][11] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bandwidthd.jpg)
BandwidthD tracks usage of TCP/IP network subnets and visualises that in the browser by building a html page with graphs in png. There is a database driven system that supports searching, filtering, multiple sensors and custom reports.
#### 13. [EtherApe][12] ####
EtherApe displays network traffic graphically, the more talkative the bigger the node. It either captures live traffic or can read it from a tcpdump. The displayed can also be refined using a network filter with pcap syntax.
#### 14. [ethtool][13] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ethtool.jpg)
ethtool is used for displaying and modifying some parameters of the network interface controllers. It can also be used to diagnose Ethernet devices and get more statistics from the devices.
#### 15. [NetHogs][14] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nethogs.jpg)
NetHogs breaks down network traffic per protocol or per subnet. It then groups by process. So if theres a surge in network traffic you can fire up NetHogs and see which process is causing it.
#### 16. [iptraf][15] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iptraf.jpg)
iptraf gathers a variety of metrics such as TCP connection packet and byte count, interface statistics and activity indicators, TCP/UDP traffic breakdowns and station packet and byte counts.
#### 17. [ngrep][16] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ngrep.jpg)
ngrep is grep but for the network layer. It's pcap aware and will allow you to specify extended regular or hexadecimal expressions to match against the data payloads of packets.
#### 18. [MRTG][17] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mrtg.jpg)
MRTG was originally developed to monitor router traffic, but now it's able to monitor other network related things as well. It typically collects every five minutes and then generates a html page. It also has the capability of sending warning emails.
#### 19. [bmon][18] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/bmon.jpg)
Bmon monitors and helps you debug networks. It captures network related statistics and presents it in human friendly way. You can also interact with bmon through curses or through scripting.
#### 20. traceroute ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/traceroute.jpg)
Traceroute is a built-in tool for displaying the route and measuring the delay of packets across a network.
#### 21. [IPTState][19] ####
IPTState allows you to watch where traffic that crosses your iptables is going and then sort that by different criteria as you please. The tool also allows you to delete states from the table.
#### 22. [darkstat][20] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/darkstat.jpg)
Darkstat captures network traffic and calculates statistics about usage. The reports are served over a simple HTTP server and gives you a nice graphical user interface of the graphs.
#### 23. [vnStat][21] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vnstat.jpg)
vnStat is a network traffic monitor that uses statistics provided by the kernel which ensures light use of system resources. The gathered statistics persists through system reboots. It has color options for the artistic sysadmins.
#### 24. netstat ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/netstat.jpg)
Netstat is a built-in tool that displays TCP network connections, routing tables and a number of network interfaces. Its used to find problems in the network.
#### 25. ss ####
Instead of using netstat, its however preferable to use ss. The ss command is capable of showing more information than netstat and is actually faster. If you want a summary statistics you can use the command `ss -s`.
#### 26. [nmap][22] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmap.jpg)
Nmap allows you to scan your server for open ports or detect which OS is being used. But you could also use this for SQL injection vulnerabilities, network discovery and other means related to penetration testing.
#### 27. [MTR][23] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mtr.jpg)
MTR combines the functionality of traceroute and the ping tool into a single network diagnostic tool. When using the tool it will limit the number of hops individual packets have to travel while also listening to their expiry. It then repeats this every second.
#### 28. [Tcpdump][24] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/tcpdump.jpg)
Tcpdump will output a description of the contents of the packet it just captured which matches the expression that you provided in the command. You can also save this data for further analysis.
#### 29. [Justniffer][25] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/justniffer.jpg)
Justniffer is a tcp packet sniffer. You can choose whether you would like to collect low-level data or high-level data with this sniffer. It also allows you to generate logs in customizable way. You could for instance mimic the access log that apache has.
### System related monitoring ###
#### 30. [nmon][26] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nmon.jpg)
nmon either outputs the data on screen or saves it in a comma separated file. You can display CPU, memory, network, filesystems, top processes. The data can also be added to a RRD database for further analysis.
#### 31. [conky][27] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cpulimit.jpg)
Conky monitors a plethora of different OS stats. It has support for IMAP and POP3 and even support for many popular music players! For the handy person you could extend it with your own scripts or programs using Lua.
#### 32. [Glances][28] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/glances.jpg)
Glances monitors your system and aims to present a maximum amount of information in a minimum amount of space. It has the capability to function in a client/server mode as well as monitoring remotely. It also has a web interface.
#### 33. [saidar][29] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/saidar.jpg)
Saidar is a very small tool that gives you basic information about your system resources. It displays a full screen of the standard system resources. The emphasis for saidar is being as simple as possible.
#### 34. [RRDtool][30] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/rrdtool.jpg)
RRDtool is a tool developed to handle round-robin databases or RRD. RRD aims to handle time-series data like CPU load, temperatures etc. This tool provides a way to extract RRD data in a graphical format.
#### 35. [monit][31] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/monit.jpg)
Monit has the capability of sending you alerts as well as restarting services if they run into trouble. Its possible to perform any type of check you could write a script for with monit and it has a web user interface to ease your eyes.
#### 36. [Linux process explorer][32] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/linux-process-monitor.jpg)
Linux process explorer is akin to the activity monitor for OSX or the windows equivalent. It aims to be more usable than top or ps. You can view each process and see how much memory usage or CPU it uses.
#### 37. df ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/df.jpg)
df is an abbreviation for disk free and is a pre-installed program in all unix systems used to display the amount of available disk space for filesystems which the user has access to.
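The flag worth memorizing is -h, which prints the sizes in human readable form:

df -h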
#### 38. [discus][33] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/discus.jpg)
Discus is similar to df however it aims to improve df by making it prettier using fancy features as colors, graphs and smart formatting of numbers.
#### 39. [xosview][34] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/xosview.jpg)
xosview is a classic system monitoring tool and it gives you a simple overview of all the different parts of the system, including IRQ.
#### 40. [Dstat][35] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dstat.jpg)
Dstat aims to be a replacement for vmstat, iostat, netstat and ifstat. It allows you to view all of your system resources in real-time. The data can then be exported into csv. Most importantly dstat allows for plugins and could thus be extended into areas not yet known to mankind.
#### 41. [Net-SNMP][36] ####
SNMP is the protocol simple network management protocol and the Net-SNMP tool suite helps you collect accurate information about your servers using this protocol.
#### 42. [incron][37] ####
Incron allows you to monitor a directory tree and then take action on those changes. If you wanted to copy files to directory b once new files appeared in directory a, that's exactly what incron does.
#### 43. [monitorix][38] ####
Monitorix is lightweight system monitoring tool. It helps you monitor a single machine and gives you a wealth of metrics. It also has a built-in HTTP server to view graphs and a reporting mechanism of all metrics.
#### 44. vmstat ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/vmstat.jpg)
vmstat or virtual memory statistics is a small built-in tool that monitors and displays a summary about the memory in the machine.
#### 45. uptime ####
This small command that quickly gives you information about how long the machine has been running, how many users currently are logged on and the system load average for the past 1, 5 and 15 minutes.
#### 46. mpstat ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/mpstat.jpg)
mpstat is a built-in tool that monitors cpu usage. The most common command is using `mpstat -P ALL` which gives you the usage of all the cores. You can also get an interval update of the CPU usage.
#### 47. pmap ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pmap.jpg)
pmap is a built-in tool that reports the memory map of a process. You can use this command to find out causes of memory bottlenecks.
#### 48. ps ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ps.jpg)
The ps command will give you an overview of all the current processes. You can easily select all processes using the command `ps -A`
#### 49. [sar][39] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sar.jpg)
sar is a part of the sysstat package and helps you to collect, report and save different system metrics. With different commands it will give you CPU, memory and I/O usage among other things.
#### 50. [collectl][40] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/collectl.jpg)
Similar to sar collectl collects performance metrics for your machine. By default it shows cpu, network and disk stats but it collects a lot more. The difference to sar is collectl is able to deal with times below 1 second, it can be fed into a plotting tool directly and collectl monitors processes more extensively.
#### 51. [iostat][41] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/iostat.jpg)
iostat is also part of the sysstat package. This command is used for monitoring system input/output. The reports themselves can be used to change system configurations to better balance input/output load between hard drives in your machine.
#### 52. free ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/free.jpg)
This is a built-in command that displays the total amount of free and used physical memory on your machine. It also displays the buffers used by the kernel at that given moment.
#### 53. /Proc file system ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/procfile.jpg)
The proc file system gives you a peek into kernel statistics. From these statistics you can get detailed information about the different hardware devices on your machine. Take a look at the [full list of the proc file statistics][42]
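For instance, two entries that are present on any Linux system and worth peeking at:

cat /proc/cpuinfo

cat /proc/meminfo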
#### 54. [GKrellM][43] ####
GKrellm is a gui application that monitor the status of your hardware such CPU, main memory, hard disks, network interfaces and many other things. It can also monitor and launch a mail reader of your choice.
#### 55. [Gnome system monitor][44] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/gnome-system-monitor.jpg)
Gnome system monitor is a basic system monitoring tool that has features looking at process dependencies from a tree view, kill or renice processes and graphs of all server metrics.
### Log monitoring tools ###
#### 56. [GoAccess][45] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/goaccess.jpg)
GoAccess is a real-time web log analyzer which analyzes the access log from either apache, nginx or amazon cloudfront. Its also possible to output the data into HTML, JSON or CSV. It will give you general statistics, top visitors, 404s, geolocation and many other things.
#### 57. [Logwatch][46] ####
Logwatch is a log analysis system. It parses through your systems logs and creates a report analyzing the areas that you specify. It can give you daily reports with short digests of the activities taking place on your machine.
#### 58. [Swatch][47] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/swatch.jpg)
Much like Logwatch Swatch also monitors your logs, but instead of giving reports it watches for regular expression and notifies you via mail or the console when there is a match. It could be used for intruder detection for example.
#### 59. [MultiTail][48] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/multitail.jpg)
MultiTail helps you monitor logfiles in multiple windows. You can merge two or more of these logfiles into one. It will also use colors to display the logfiles for easier reading with the help of regular expressions.
#### System tools ####
#### 60. [acct or psacct][49] ####
acct or psacct (depending on if you use apt-get or yum) allows you to monitor all the commands a users executes inside the system including CPU and memory time. Once installed you get that summary with the command sa.
#### 61. [whowatch][50] ####
Similar to acct this tool monitors users on your system and allows you to see in real time what commands and processes they are using. It gives you a tree structure of all the processes and so you can see exactly whats happening.
#### 62. [strace][51] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/strace.jpg)
strace is used to diagnose, debug and monitor interactions between processes. The most common thing to do is making strace print a list of system calls made by the program which is useful if the program does not behave as expected.
#### 63. [DTrace][52] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/dtrace.jpg)
DTrace is the big brother of strace. It dynamically patches live running instructions with instrumentation code. This allows you to do in-depth performance analysis and troubleshooting. However, it's not for the weak of heart as there is a 1200 page book written on the topic.
#### 64. [webmin][53] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/webmin.jpg)
Webmin is a web-based system administration tool. It removes the need to manually edit unix configuration files and lets you manage the system remotely if need be. It has a couple of monitoring modules that you can attach to it.
#### 65. stat ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/stat.jpg)
Stat is a built-in tool for displaying status information of files and file systems. It will give you information such as when the file was modified, accessed or changed.
#### 66. ifconfig ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/ifconfig.jpg)
ifconfig is a built-in tool used to configure the network interfaces. Behind the scenes network monitor tools use ifconfig to set it into promiscuous mode to capture all packets. You can do it yourself with `ifconfig eth0 promisc` and return to normal mode with `ifconfig eth0 -promisc`.
#### 67. [ulimit][54] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/unlimit.jpg)
ulimit is a built-in tool that monitors system resources and keeps a limit so any of the monitored resources dont go overboard. For instance making a fork bomb where a properly configured ulimit is in place would be totally fine.
#### 68. [cpulimit][55] ####
CPUlimit is a small tool that monitors and then limits the CPU usage of a process. Its particularly useful to make batch jobs not eat up too many CPU cycles.
#### 69. lshw ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lshw.jpg)
lshw is a small built-in tool that extracts detailed information about the hardware configuration of the machine. It can output everything from CPU version and speed to mainboard configuration.
#### 70. w ####
W is a built-in command that displays information about the users currently using the machine and their processes.
#### 71. lsof ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/lsof.jpg)
lsof is a built-in tool that gives you a list of all open files and network connections. From there you can narrow it down to files opened by processes, based on the process name, by a specific user or perhaps kill all processes that belongs to a specific user.
### Infrastructure monitoring tools ###
#### 72. Server Density ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/server-density-monitoring.png)
Our [server monitoring tool][56]! It has a web interface that allows you to set alerts and view graphs for all system and network metrics. You can also set up monitoring of websites whether they are up or down. Server Density allows you to set permissions for users and you can extend your monitoring with our plugin infrastructure or api. The service already supports Nagios plugins.
#### 73. [OpenNMS][57] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/opennms.jpg)
OpenNMS has four main functional areas: event management and notifications; discovery and provisioning; service monitoring; and data collection. It's designed to be customizable so it can work in a variety of network environments.
#### 74. [SysUsage][58] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/sysusage.jpg)
SysUsage monitors your system continuously via Sar and other system commands. It can also notify you once a threshold is reached. SysUsage itself can be run from a centralized place where all the collected statistics are stored. It has a web interface where you can view all the stats.
#### 75. [brainypdm][59] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/brainypdm.jpg)
brainypdm is a data management and monitoring tool that can gather data from Nagios or other generic sources to make graphs. It's cross-platform, supports custom graphs and is web-based.
#### 76. [PCP][60] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/pcp.jpg)
PCP has the capability of collating metrics from multiple hosts and does so efficiently. It also has a plugin framework, so you can make it collect the specific metrics that are important to you. You can access graph data through either a web interface or a GUI. Good for monitoring large systems.
#### 77. [KDE system guard][61] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/kdesystemguard.jpg)
This tool is both a system monitor and task manager. You can view server metrics from several machines through the worksheet and if a process needs to be killed or if you need to start a process it can be done within KDE system guard.
#### 78. [Munin][62] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/munin.jpg)
Munin is both a network and a system monitoring tool which offers alerts for when metrics go beyond a given threshold. It uses RRDtool to create the graphs and it has a web interface to display them. Its emphasis is on plug-and-play capabilities, with a number of plugins available.
#### 79. [Nagios][63] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/nagios.jpg)
Nagios is a system and network monitoring tool that helps you keep an eye on your many servers. It supports alerting when things go wrong, and there are many plugins written for the platform.
#### 80. [Zenoss][64] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zenoss.jpg)
Zenoss provides a web interface that allows you to monitor all system and network metrics. Moreover it discovers network resources and changes in network configurations. It has alerts for you to take action on and it supports the Nagios plugins.
#### 81. [Cacti][65] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/cacti.jpg)
(And one for luck!) Cacti is a network graphing solution that uses the RRDtool data storage. It allows a user to poll services at predetermined intervals and graph the results. Cacti can be extended to monitor a source of your choice through shell scripts.
#### 82. [Zabbix][66] ####
![](https://serverdensity-wpengine.netdna-ssl.com/wp-content/uploads/2015/02/zabbix-monitoring.png)
Zabbix is an open source infrastructure monitoring solution. It can use most databases out there to store the monitoring statistics. The core is written in C and it has a frontend in PHP. If you don't like installing an agent, Zabbix might be an option for you.
### Bonus section: ###
Thanks for your suggestions. It's an oversight on our part that we'll have to go back through and renumber all the headings. In light of that, here's a short section at the end for some of the Linux monitoring tools recommended by you:
#### 83. [collectd][67] ####
Collectd is a Unix daemon that collects all your monitoring statistics. It uses a modular design and plugins to fill in any monitoring niche. This way collectd stays as lightweight and customizable as possible.
#### 84. [Observium][68] ####
Observium is an auto-discovering network monitoring platform supporting a wide range of hardware platforms and operating systems. Observium focuses on providing a beautiful and powerful yet simple and intuitive interface to the health and status of your network.
#### 85. Nload ####
It's a command-line tool that monitors network throughput. It's neat because it visualizes the incoming and outgoing traffic using two graphs, along with additional useful data such as the total amount of transferred data. You can install it with
yum install nload
or
sudo apt-get install nload
#### 86. [SmokePing][69] ####
SmokePing keeps track of the network latencies of your network and visualizes them too. There is a wide range of latency measurement plugins developed for SmokePing. If a GUI is important to you, there is ongoing development to make that happen.
#### 87. [MobaXterm][70] ####
If you're working in a Windows environment day in and day out, you may feel limited by the terminal Windows provides. MobaXterm comes to the rescue and allows you to use many of the terminal commands commonly found in Linux, which will help you tremendously with your monitoring needs!
#### 88. [Shinken monitoring][71] ####
Shinken is a monitoring framework and a total rewrite of Nagios in Python. It aims to enhance flexibility and the management of large environments, while still keeping all your Nagios configuration and plugins.
--------------------------------------------------------------------------------
via: https://blog.serverdensity.com/80-linux-monitoring-tools-know/
作者:[Jonathan Sundqvist][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.serverdensity.com/
[1]:http://hisham.hm/htop/
[2]:http://www.atoptool.nl/
[3]:https://github.com/JeremyJones/Apachetop
[4]:http://www.proftpd.org/docs/howto/Scoreboard.html
[5]:http://jeremy.zawodny.com/mysql/mytop/
[6]:https://01.org/powertop
[7]:http://guichaz.free.fr/iotop/
[8]:http://www.ntop.org/products/ntop/
[9]:http://www.ex-parrot.com/pdw/iftop/
[10]:http://jnettop.kubs.info/wiki/
[11]:http://bandwidthd.sourceforge.net/
[12]:http://etherape.sourceforge.net/
[13]:https://www.kernel.org/pub/software/network/ethtool/
[14]:http://nethogs.sourceforge.net/
[15]:http://iptraf.seul.org/
[16]:http://ngrep.sourceforge.net/
[17]:http://oss.oetiker.ch/mrtg/
[18]:https://github.com/tgraf/bmon/
[19]:http://www.phildev.net/iptstate/index.shtml
[20]:https://unix4lyfe.org/darkstat/
[21]:http://humdi.net/vnstat/
[22]:http://nmap.org/
[23]:http://www.bitwizard.nl/mtr/
[24]:http://www.tcpdump.org/
[25]:http://justniffer.sourceforge.net/
[26]:http://nmon.sourceforge.net/pmwiki.php
[27]:http://conky.sourceforge.net/
[28]:https://github.com/nicolargo/glances
[29]:https://packages.debian.org/sid/utils/saidar
[30]:http://oss.oetiker.ch/rrdtool/
[31]:http://mmonit.com/monit
[32]:http://sourceforge.net/projects/procexp/
[33]:http://packages.ubuntu.com/lucid/utils/discus
[34]:http://www.pogo.org.uk/~mark/xosview/
[35]:http://dag.wiee.rs/home-made/dstat/
[36]:http://www.net-snmp.org/
[37]:http://inotify.aiken.cz/?section=incron&page=about&lang=en
[38]:http://www.monitorix.org/
[39]:http://sebastien.godard.pagesperso-orange.fr/
[40]:http://collectl.sourceforge.net/
[41]:http://sebastien.godard.pagesperso-orange.fr/
[42]:http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html
[43]:http://members.dslextreme.com/users/billw/gkrellm/gkrellm.html
[44]:http://freecode.com/projects/gnome-system-monitor
[45]:http://goaccess.io/
[46]:http://sourceforge.net/projects/logwatch/
[47]:http://sourceforge.net/projects/swatch/
[48]:http://www.vanheusden.com/multitail/
[49]:http://www.gnu.org/software/acct/
[50]:http://whowatch.sourceforge.net/
[51]:http://sourceforge.net/projects/strace/
[52]:http://dtrace.org/blogs/about/
[53]:http://www.webmin.com/
[54]:http://ss64.com/bash/ulimit.html
[55]:https://github.com/opsengine/cpulimit
[56]:https://www.serverdensity.com/server-monitoring/
[57]:http://www.opennms.org/
[58]:http://sysusage.darold.net/
[59]:http://sourceforge.net/projects/brainypdm/
[60]:http://www.pcp.io/
[61]:https://userbase.kde.org/KSysGuard
[62]:http://munin-monitoring.org/
[63]:http://www.nagios.org/
[64]:http://www.zenoss.com/
[65]:http://www.cacti.net/
[66]:http://www.zabbix.com/
[67]:https://collectd.org/
[68]:http://www.observium.org/
[69]:http://oss.oetiker.ch/smokeping/
[70]:http://mobaxterm.mobatek.net/
[71]:http://www.shinken-monitoring.org/

View File

@ -1,3 +1,4 @@
Translating by ZTinoZ
7 ways hackers can use Wi-Fi against you
================================================================================
![Image courtesy Thinkstock](http://core0.staticworld.net/images/article/2015/11/intro_title-100626673-orig.jpg)
@ -66,4 +67,4 @@ via: http://www.networkworld.com/article/3003170/mobile-security/7-ways-hackers-
[3]:http://news.yahoo.com/blogs/upgrade-your-life/banking-online-not-hacked-182159934.html
[4]:http://pocketnow.com/2014/10/15/should-you-leave-your-smartphones-wifi-on-or-turn-it-off
[5]:http://www.cnet.com/news/chrome-becoming-tool-in-googles-push-for-encrypted-web/
[6]:https://twitter.com/JoshAlthuser

View File

@ -0,0 +1,65 @@
Review EXT4 vs. Btrfs vs. XFS
================================================================================
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2015/09/1385698302_funny_linux_wallpapers-593x445.jpg)
To be honest, one of the last things people think about is which file system their PC is using. Windows users as well as Mac OS X users have even less reason to look, as they really have only one choice for their operating system: NTFS and HFS+ respectively. Linux, on the other hand, has plenty of file system options, with the current default being the widely used ext4. However, there is a push to change the default file system to something else, called btrfs. But what makes btrfs better, what are the other file systems, and when can we see distributions making the change?
Let's first have a general look at file systems and what they really do, then we will make a small comparison between the most famous file systems.
### So, What Do File Systems Do? ###
In case you are unfamiliar with what file systems really do, it is actually simple when summarized. File systems are mainly used to control how data is stored after a program is no longer using it, how access to that data is controlled, what other information (metadata) is attached to the data itself, and so on. I know it does not sound like an easy thing to program, and it is definitely not. File systems are continually being revised to include more functionality while becoming more efficient at what they need to do. So while it is a basic need of all computers, it is not quite as basic as it sounds.
### Why Partitioning? ###
Many people have a vague knowledge of what partitions are, since every operating system has the ability to create or remove them. It can seem strange that Linux uses more than one partition on the same disk even when using the standard installation procedure, so a few explanations are called for. One of the main goals of having different partitions is to achieve higher data security in case of disaster.
By dividing your hard disk into partitions, data may be grouped and separated. When an accident occurs, only the data stored in the partition that got hit will be damaged, while the data on the other partitions will most likely survive. These principles date from the days when Linux didn't have a journaled file system and any power failure might have led to disaster.
Partitions will remain in use for security and robustness reasons, so that a breach in one part of the operating system does not automatically mean that the whole computer is at risk. This is currently the most important factor for partitioning. For example, users create scripts, programs or web applications that start filling up the disk. If that disk contains only one big partition, the entire system may stop functioning when the disk is full. If users store data on separate partitions, then only that data partition is affected, while the system partitions and any other data partitions keep functioning.
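A quick way to check how full each mounted partition is, and thus whether a runaway script is about to fill one, is df (the paths below are just examples):
    df -h               # usage per mounted file system, in human-readable sizes
    df -h /var /home    # only the file systems holding /var and /home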
Mind that a journaled file system only provides data security in the case of a power failure or sudden disconnection of the storage devices. It does not protect the data against bad blocks or logical errors in the file system. In such cases, the user should use a Redundant Array of Inexpensive Disks (RAID) solution.
### Why Switch File Systems? ###
The ext4 file system was an improvement on ext3, which was itself an improvement on ext2. While ext4 is a very solid file system that has been the default choice of almost all distributions for the past few years, it is built on an aging code base. Additionally, Linux users are seeking many new features that ext4 does not handle on its own. There is software that takes care of some of these needs, but from a performance standpoint, being able to do such things at the file system level could be faster.
### Ext4 File System ###
Ext4 has limits that are still quite impressive. The maximum file size is 16 tebibytes (roughly 17.6 terabytes), which is much bigger than any hard drive a regular consumer can currently buy, while the largest volume/partition you can make with ext4 is 1 exbibyte (roughly 1,152,921.5 terabytes). Ext4 is known to bring speed improvements over ext3 through multiple techniques. Like most modern file systems, it is a journaling file system, meaning it keeps a journal of where the files are located on the disk and of any other changes that happen to the disk. Despite all its features, it doesn't support transparent compression, data deduplication, or transparent encryption. Snapshots are technically supported, but the feature is experimental at best.
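For reference, creating an ext4 volume and inspecting its parameters takes two commands; the device name below is a placeholder, and mkfs will destroy any data on it:
    mkfs.ext4 /dev/sdb1    # format the partition with ext4 defaults
    tune2fs -l /dev/sdb1   # list the superblock parameters (features, reserved blocks, and so on)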
### Btrfs File System ###
Btrfs, which many of us pronounce in different ways (for example "Better FS", "Butter FS", or "B-Tree FS"), is a file system made completely from scratch. Btrfs exists because its developers wanted to expand file system functionality to include snapshots, pooling, and checksums, among other things. While it is independent from ext4, it also wants to build on the ideas present in ext4 that are great for consumers and businesses alike, and to incorporate those additional features that will benefit everybody, but especially enterprises. For enterprises running very large programs with very large databases, having a seemingly continuous file system across multiple hard drives can be very beneficial, as it makes consolidation of the data much easier. Data deduplication can reduce the amount of space the data actually occupies, and data mirroring also becomes easier with btrfs when there is a single, broad file system that needs to be mirrored.
The user can certainly still choose to create multiple partitions so that he does not need to mirror everything. Considering that btrfs can span multiple hard drives, it is a very good thing that it supports 16 times more drive space than ext4. The maximum partition size of a btrfs file system is 16 exbibytes, and the maximum file size is 16 exbibytes too.
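As a rough sketch of what pooling and snapshots look like in practice, assuming the btrfs-progs tools are installed (device names and mount point are placeholders; mkfs destroys any data on those disks):
    mkfs.btrfs -L pool /dev/sdb /dev/sdc                            # one file system spanning two drives
    mount /dev/sdb /mnt/pool                                        # mounting any member device mounts the whole pool
    btrfs subvolume snapshot /mnt/pool /mnt/pool/snap-$(date +%F)   # cheap copy-on-write snapshot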
### XFS File System ###
The XFS file system is an extension of the Extent File System. XFS is a high-performance 64-bit journaling file system. Support for XFS was merged into the Linux kernel around 2002, and in 2009 Red Hat Enterprise Linux 5.4 added support for it. XFS supports a maximum file system size of 8 exbibytes for the 64-bit file system. The main criticisms of XFS are that it cannot be shrunk and that it performs poorly when deleting large numbers of files. RHEL 7.0 now uses XFS as its default file system.
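The grow-but-never-shrink behaviour shows up in the tooling itself: there is an xfs_growfs, but no shrink counterpart. A minimal sketch with placeholder device and mount point:
    mkfs.xfs /dev/sdb1    # create an XFS file system (destroys existing data on the partition)
    mount /dev/sdb1 /data
    xfs_growfs /data      # after enlarging the underlying device, grow the file system to fill it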
### Final Thoughts ###
Unfortunately, the arrival date of btrfs as a default is not quite known. Officially, the next-generation file system is still classified as "unstable", but if a user downloads the latest version of Ubuntu, he will be able to choose to install onto a btrfs partition. When btrfs will actually be classified as "stable" is still a mystery, but users shouldn't expect Ubuntu to use btrfs by default until it is indeed considered "stable". It has been reported that Fedora 18 would use btrfs as its default file system, as by the time of its release a file system checker for btrfs should have existed. There is a good amount of work still left for btrfs, as not all the features are implemented yet and the performance is a little sluggish compared to ext4.
So, which is better to use? Until now, ext4 is the winner, despite otherwise near-identical performance. But why? The answer is convenience as well as ubiquity. Ext4 is still an excellent file system for desktop or workstation use. It is provided by default, so the user can install the operating system right onto it. Also, ext4 supports volumes up to 1 exbibyte and files up to 16 tebibytes in size, so there is still plenty of room for growth where space is concerned.
Btrfs might offer greater volumes, up to 16 exbibytes, and improved fault tolerance, but until now it feels more like an add-on file system than one integrated into Linux. For example, the btrfs-tools have to be present before a drive can be formatted with btrfs, which means that btrfs is not an option during many Linux installations, though that can vary with the distribution.
Even though transfer rates are important, there is more to a file system than the speed of file transfers. Btrfs has many useful features, such as copy-on-write (CoW), extensive checksums, snapshots, scrubbing, self-healing data, and deduplication, as well as many other improvements that ensure data integrity. Btrfs lacks the RAID-Z features of ZFS, so RAID is still in an experimental state with btrfs. For pure data storage, however, btrfs is the winner over ext4, but time will tell.
For the moment, ext4 seems to be the better choice on the desktop, since it comes as the default file system and it is faster than btrfs when transferring files. Btrfs is definitely worth looking into, but a complete switch away from ext4 on desktop Linux may still be a few years away. Data farms and large storage pools could reveal different stories and show the real differences between ext4, XFS, and btrfs.
If you have a different or additional opinion, kindly let us know by commenting on this article.
--------------------------------------------------------------------------------
via: http://www.unixmen.com/review-ext4-vs-btrfs-vs-xfs/
作者:[M.el Khamlichi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/pirat9/

View File

@ -1,147 +0,0 @@
Why did you start using Linux?
================================================================================
> In today's open source roundup: What got you started with Linux? Plus: IBM's Linux-only mainframe. And why you should skip Windows 10 and go with Linux
### Why did you start using Linux? ###
Linux has become quite popular over the years, with many users defecting to it from OS X or Windows. But have you ever wondered what got people started with Linux? A redditor asked that question and got some very interesting answers.
SilverKnight asked his question on the Linux subreddit:
> I know this has been asked before, but I wanted to hear more from the younger generation why it is that they started using linux and what keeps them here.
>
> I dont want to discourage others from giving their linux origin stories, because those are usually pretty good, but I was mostly curious about our younger population since there isn't much out there from them yet.
>
> I myself am 27 and am a linux dabbler. I have installed quite a few different distros over the years but I haven't made the plunge to full time linux. I guess I am looking for some more reasons/inspiration to jump on the bandwagon.
>
> [More at Reddit][1]
Fellow redditors in the Linux subreddit responded with their thoughts:
> **DoublePlusGood**: "I started using Backtrack Linux (now Kali) at 12 because I wanted to be a "1337 haxor". I've stayed with Linux (Archlinux currently) because it lets me have the endless freedom to make my computer do what I want."
>
> **Zack**: "I'm a Linux user since, I think, the age of 12 or 13, I'm 15 now.
>
> It started when I got tired with Windows XP at 11 and the waiting, dammit am I impatient sometimes, but waiting for a basic task such as shutting down just made me tired of Windows all together.
>
> A few months previously I had started participating in discussions in a channel on the freenode IRC network which was about a game, and as freenode usually goes, it was open source and most of the users used Linux.
>
> I kept on hearing about this Linux but wasn't that interested in it at the time. However, because the channel (and most of freenode) involved quite a bit of programming I started learning Python.
>
> A year passed and I was attempting to install GNU/Linux (specifically Ubuntu) on my new (technically old, but I had just got it for my birthday) PC, unfortunately it continually froze, for reasons unknown (probably a bad hard drive, or a lot of dust or something else...).
>
> Back then I was the type to give up on things, so I just continually nagged my dad to try and install Ubuntu, he couldn't do it for the same reasons.
>
> After wanting Linux for a while I became determined to get Linux and ditch windows for good. So instead of Ubuntu I tried Linux Mint, being a derivative of Ubuntu(?) I didn't have high hopes, but it worked!
>
> I continued using it for another 6 months.
>
> During that time a friend on IRC gave me a virtual machine (which ran Ubuntu) on their server, I kept it for a year a bit until my dad got me my own server.
>
> After the 6 months I got a new PC (which I still use!) I wanted to try something different.
>
> I decided to install openSUSE.
>
> I liked it a lot, and on the same Christmas I obtained a Raspberry Pi, and stuck with Debian on it for a while due to the lack of support other distros had for it."
>
> **Cqz**: "Was about 9 when the Windows 98 machine handed down to me stopped working for reasons unknown. We had no Windows install disk, but Dad had one of those magazines that comes with demo programs and stuff on CDs. This one happened to have install media for Mandrake Linux, and so suddenly I was a Linux user. Had no idea what I was doing but had a lot of fun doing it, and although in following years I often dual booted with various Windows versions, the FLOSS world always felt like home. Currently only have one Windows installation, which is a virtual machine for games."
>
> **Tosmarcel**: "I was 15 and was really curious about this new concept called 'programming' and then I stumbled upon this Harvard course, CS50. They told users to install a Linux vm to use the command line. But then I asked myself: "Why doesn't windows have this command line?!". I googled 'linux' and Ubuntu was the top result -Ended up installing Ubuntu and deleted the windows partition accidentally... It was really hard to adapt because I knew nothing about linux. Now I'm 16 and running arch linux, never looked back and I love it!"
>
> **Micioonthet**: "First heard about Linux in the 5th grade when I went over to a friend's house and his laptop was running MEPIS (an old fork of Debian) instead of Windows XP.
>
> Turns out his dad was a socialist (in America) and their family didn't trust Microsoft. This was completely foreign to me, and I was confused as to why he would bother using an operating system that didn't support the majority of software that I knew.
>
> Fast forward to when I was 13 and without a laptop. Another friend of mine was complaining about how slow his laptop was, so I offered to buy it off of him so I could fix it up and use it for myself. I paid $20 and got a virus filled, unusable HP Pavilion with Windows Vista. Instead of trying to clean up the disgusting Windows install, I remembered that Linux was a thing and that it was free. I burned an Ubuntu 12.04 disc and installed it right away, and was absolutely astonished by the performance.
>
> Minecraft (one of the few early Linux games because it ran on Java), which could barely run at 5 FPS on Vista, ran at an entirely playable 25 FPS on a clean install of Ubuntu.
>
> I actually still have that old laptop and use it occasionally, because why not? Linux doesn't care how old your hardware is.
>
> I since converted my dad to Linux and we buy old computers at lawn sales and thrift stores for pennies and throw Linux Mint or some other lightweight distros on them."
>
> **Webtm**: "My dad had every computer in the house with some distribution on it, I think a couple with OpenSUSE and Debian, and his personal computer had Slackware on it. So I remember being little and playing around with Debian and not really getting into it much. So I had a Windows laptop for a few years and my dad asked me if I wanted to try out Debian. It was a fun experience and ever since then I've been using Debian and trying out distributions. I currently moved away from Linux and have been using FreeBSD for around 5 months now, and I am absolutely happy with it.
>
> The control over your system is fantastic. There are a lot of cool open source projects. I guess a lot of the fun was figuring out how to do the things I want by myself and tweaking those things in ways to make them do something else. Stability and performance is also a HUGE plus. Not to mention the level of privacy when switching."
>
> **Wyronaut**: "I'm currently 18, but I first started using Linux when I was 13. Back then my first distro was Ubuntu. The reason why I wanted to check out Linux, was because I was hosting little Minecraft game servers for myself and a couple of friends, back then Minecraft was pretty new-ish. I read that the defacto operating system for hosting servers was Linux.
>
> I was a big newbie when it came to command line work, so Linux scared me a little, because I had to take care of a lot of things myself. But thanks to google and a few wiki pages I managed to get up a couple of simple servers running on a few older PC's I had lying around. Great use for all that older hardware no one in the house ever uses.
>
> After running a few game servers I started running a few web servers as well. Experimenting with HTML, CSS and PHP. I worked with those for a year or two. Afterwards, took a look at Java. I made the terrible mistake of watching TheNewBoston video's.
>
> So after like a week I gave up on Java and went to pick up a book on Python instead. That book was Learn Python The Hard Way by Zed A. Shaw. After I finished that at the fast pace of two weeks, I picked up the book C++ Primer, because at the time I wanted to become a game developer. Went trough about half of the book (~500 pages) and burned out on learning. At that point I was spending a sickening amount of time behind my computer.
>
> After taking a bit of a break, I decided to pick up JavaScript. Read like 2 books, made like 4 different platformers and called it a day.
>
> Now we're arriving at the present. I had to go through the horrendous process of finding a school and deciding what job I wanted to strive for when I graduated. I ruled out anything in the gaming sector as I didn't want anything to do with graphics programming anymore, I also got completely sick of drawing and modelling. And I found this bachelor that had something to do with netsec and I instantly fell in love. I picked up a couple books on C to shred this vacation period and brushed up on some maths and I'm now waiting for the new school year to commence.
>
> Right now, I am having loads of fun with Arch Linux, made couple of different arrangements on different PC's and it's going great!
>
> In a sense Linux is what also got me into programming and ultimately into what I'm going to study in college starting this september. I probably have my future life to thank for it."
>
> **Linuxllc**: "You also can learn from old farts like me.
>
> The crutch, The crutch, The crutch. Getting rid of the crutch will inspired you and have good reason to stick with Linux.
>
> I got rid of my crutch(Windows XP) back in 2003. Took me only 5 days to get all my computer task back and running at a 100% workflow. Including all my peripheral devices. Minus any Windows games. I just play native Linux games."
>
> **Highclass**: "Hey I'm 28 not sure if this is the age group you are looking for.
>
> To be honest, I was always interested in computers and the thought of a free operating system was intriguing even though at the time I didn't fully grasp the free software philosophy, to me it was free as in no cost. I also did not find the CLI too intimidating as from an early age I had exposure to DOS.
>
> I believe my first distro was Mandrake, I was 11 or 12, I messed up the family computer on several occasions.... I ended up sticking with it always trying to push myself to the next level. Now I work in the industry with Linux everyday.
>
> /shrug"
>
> **Matto**: "My computer couldn't run fast enough for XP (got it at a garage sale), so I started looking for alternatives. Ubuntu came up in Google. I was maybe 15 or 16 at the time. Now I'm 23 and have a job working on a product that uses Linux internally."
>
> [More at Reddit][2]
### IBM's Linux only Mainframe ###
IBM has a long history with Linux, and now the company has created a mainframe that features Ubuntu Linux. The new machine is named LinuxONE.
Ron Miller reports for TechCrunch:
> The new mainframes come in two flavors, named for penguins (Linux — penguins — get it?). The first is called Emperor and runs on the IBM z13, which we wrote about in January. The other is a smaller mainframe called the Rockhopper designed for a more “entry level” mainframe buyer.
>
> You may have thought that mainframes went the way of the dinosaur, but they are still alive and well and running in large institutions throughout the world. IBM as part of its broader strategy to promote the cloud, analytics and security is hoping to expand the potential market for mainframes by running Ubuntu Linux and supporting a range of popular open source enterprise software such as Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL and Chef.
>
> The metered mainframe will still sit inside the customers on-premises data center, but billing will be based on how much the customer uses the system, much like a cloud model, Mauri explained.
>
> ...IBM is looking for ways to increase those sales. Partnering with Canonical and encouraging use of open source tools on a mainframe gives the company a new way to attract customers to a small, but lucrative market.
>
> [More at TechCrunch][3]
### Why you should skip Windows 10 and opt for Linux ###
Since Windows 10 has been released there has been quite a bit of media coverage about its potential to spy on users. ZDNet has listed some reasons why you should skip Windows 10 and opt for Linux instead on your computer.
SJVN reports for ZDNet:
> You can try to turn Windows 10's data-sharing ways off, but, bad news: Windows 10 will keep sharing some of your data with Microsoft anyway. There is an alternative: Desktop Linux.
>
> You can do a lot to keep Windows 10 from blabbing, but you can't always stop it from talking. Cortana, Windows 10's voice activated assistant, for example, will share some data with Microsoft, even when it's disabled. That data includes a persistent computer ID to identify your PC to Microsoft.
>
> So, if that gives you a privacy panic attack, you can either stick with your old operating system, which is likely Windows 7, or move to Linux. Eventually, when Windows 7 is no longer supported, if you want privacy you'll have no other viable choice but Linux.
>
> There are other, more obscure desktop operating systems that are also desktop-based and private. These include the BSD Unix family such as FreeBSD, PCBSD, and NetBSD and eComStation, OS/2 for the 21st century. Your best choice, though, is a desktop-based Linux with a low learning curve.
>
> [More at ZDNet][4]
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.html
作者:[Jim Lynch][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Jim-Lynch/
[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/
[2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/
[3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/
[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/

View File

@ -1,4 +1,3 @@
icybreaker translating...
14 tips for teaching open source development
================================================================================
Academia is an excellent platform for training and preparing the open source developers of tomorrow. In research, we occasionally open source software we write. We do this for two reasons. One, to promote the use of the tools we produce. And two, to learn more about the impact and issues other people face when using them. With this background of writing research software, I was tasked with redesigning the undergraduate software engineering course for second-year students at the University of Bradford.

View File

@ -1,35 +0,0 @@
Linus Torvalds Lambasts Open Source Programmers over Insecure Code
================================================================================
![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/11/linus-torvalds.jpg)
Linus Torvalds's latest rant underscores the high expectations the Linux developer places on open source programmers, as well as the importance of security for Linux kernel code.
Torvalds is the unofficial "benevolent dictator" of the Linux kernel project. That means he gets to decide which code contributions go into the kernel, and which ones land in the reject pile.
On Oct. 28, open source coders whose work did not meet Torvalds's expectations faced an [angry rant][1]. "Christ people," Torvalds wrote about the code. "This is just sh*t."
He went on to call the coders "just incompetent and out to lunch."
What made Torvalds so angry? He believed the code could have been written more efficiently. It could have been easier for other programmers to understand and would run better through a compiler, the program that translates human-readable code into the binaries that computers understand.
Torvalds posted his own substitution for the code in question and suggested that the programmers should have written it his way.
Torvalds has a history of lashing out against people with whom he disagrees. It stretches back to 1991, when he famously [flamed Andrew Tanenbaum][2]—whose Minix operating system he later described as a series of "brain-damages." No doubt this latest criticism of fellow open source coders will go down as another example of Torvalds's confrontational personality.
But Torvalds may also have been acting strategically during this latest rant. "I want to make it clear to *everybody* that code like this is completely unacceptable," he wrote, suggesting that his goal was to send a message to all Linux programmers, not just vent his anger at particular ones.
Torvalds also used the incident as an opportunity to highlight the security concerns that arise from poorly written code. Those are issues dear to open source programmers' hearts in an age when enterprises are finally taking software security seriously, and demanding top-notch performance from their code in this regard. Lambasting open source programmers who write insecure code thus helps Linux's image.
--------------------------------------------------------------------------------
via: http://thevarguy.com/open-source-application-software-companies/110415/linus-torvalds-lambasts-open-source-programmers-over-inse
作者:[Christopher Tozzi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://thevarguy.com/author/christopher-tozzi
[1]:http://lkml.iu.edu/hypermail/linux/kernel/1510.3/02866.html
[2]:https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate

View File

@ -1,3 +1,4 @@
translating。。。
Review: 5 memory debuggers for Linux coding
================================================================================
![](http://images.techhive.com/images/article/2015/11/penguinadmin-2400px-100627186-primary.idge.jpg)
@ -281,4 +282,4 @@ via: http://www.computerworld.com/article/3003957/linux/review-5-memory-debugger
[41]:http://linux.die.net/man/1/gcc
[42]:http://linux.die.net/man/1/g++
[43]:https://sourceware.org/ml/libc-help/2014-05/msg00008.html
[44]:https://en.wikipedia.org/wiki/Thread_safety

View File

@ -0,0 +1,87 @@
Cinnamon 2.8 Review
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2-8-featured.jpg)
Other than GNOME and KDE, Cinnamon is another desktop environment that is used by many people. It is made by the same team that produces Linux Mint (and ships with Linux Mint) and can also be installed on several other distributions. The latest version of this DE, Cinnamon 2.8, was released earlier this month, and it brings a host of bug fixes and improvements as well as some new features.
I'm going to go over the major improvements made in this release, as well as how to update to Cinnamon 2.8 or install it for the first time.
### Improvements to Applets ###
There are several improvements to already existing applets for the panel.
#### Sound Applet ####
![cinnamon-28-sound-applet](https://www.maketecheasier.com/assets/uploads/2015/11/rsz_cinnamon-28-sound-applet.jpg)
The Sound applet was revamped and now displays track information as well as the media controls on top of the cover art of the audio file. For music players with seeking support (such as Banshee), a progress bar will be displayed in the same region which you can use to change the position of the audio track. Right-clicking on the applet in the panel will display the options to mute input and output devices.
#### Power Applet ####
The Power applet now displays the status of each of the connected batteries and devices using the manufacturer's data instead of generic names.
#### Window Thumbnails ####
![cinnamon-2.8-window-thumbnails](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2.8-window-thumbnails.png)
Cinnamon 2.8 brings the option to show window thumbnails when hovering over the window list in the panel. You can turn it off if you don't like it, though.
#### Workspace Switcher Applet ####
![cinnamon-2.8-workspace-switcher](https://www.maketecheasier.com/assets/uploads/2015/11/cinnamon-2.8-workspace-switcher.png)
Adding the Workspace switcher applet to your panel will show you a visual representation of your workspaces with little rectangles embedded inside to show the position of your windows.
#### System Tray ####
Cinnamon 2.8 brings support for app indicators in the system tray. You can easily disable this in the settings, which will force affected apps to fall back to using status icons instead.
### Visual Improvements ###
A host of visual improvements were made in Cinnamon 2.8. The classic and preview Alt + Tab switchers were polished with noticeable improvements, while the Alt + F2 dialog received bug fixes and better auto completion for commands.
Also, the issue with the traditional animation effect for minimizing windows is now sorted and works with multiple panels.
### Nemo Improvements ###
![cinnamon-2.8-nemo](https://www.maketecheasier.com/assets/uploads/2015/11/rsz_cinnamon-28-nemo.jpg)
The default file manager for Cinnamon also received several bug fixes and has a new "Quick-rename" feature for renaming files and directories. This works by clicking the file or directory twice, with a short pause in between, to rename it.
Nemo also detects issues with thumbnails automatically and prompts you to quickly fix them.
### Other Notable improvements ###
- Applets now reload themselves automatically once they are updated.
- Support for multiple monitors was improved significantly.
- Dialog windows have been improved and now attach themselves to their parent windows.
- HiDPI detection has been improved.
- Qt 5 applications now look more native and use the default GTK theme.
- Window management and rendering performance has been improved.
- There are various bug fixes.
### How to Get Cinnamon 2.8 ###
If you're running Linux Mint, you will get Cinnamon 2.8 as part of the upgrade to Linux Mint 17.3 "Rosa" Cinnamon Edition. The BETA release is already out, so you can grab that if you'd like to get your hands on the new software immediately.
For Arch users, Cinnamon 2.8 is already in the official Arch repositories, so you can just update your packages and do a system-wide upgrade to get the latest version.
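On Arch that amounts to something like the following; pacman pulls Cinnamon from the official repositories as part of a normal system upgrade:
    sudo pacman -Syu          # sync and upgrade every installed package, including Cinnamon
    sudo pacman -S cinnamon   # or install the desktop for the first time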
Finally, for Ubuntu users, you can install or upgrade to Cinnamon 2.8 by running the following commands in turn:
sudo add-apt-repository -y ppa:moorkai/cinnamon
sudo apt-get update
sudo apt-get install cinnamon
Have you tried Cinnamon 2.8? What do you think of it?
--------------------------------------------------------------------------------
via: https://www.maketecheasier.com/cinnamon-2-8-review/
作者:[Ayo Isaiah][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.maketecheasier.com/author/ayoisaiah/

View File

@ -0,0 +1,53 @@
KDE vs GNOME vs XFCE Desktop
================================================================================
![](http://1426826955.rsc.cdn77.org/wp-content/uploads/2013/07/300px-Xfce_logo.svg_.png)
Over the years, many people have spent a long time on the Linux desktop using either KDE or GNOME. These two environments have grown through the years and each continues to expand its user base. Meanwhile, the sleeper among desktop environments has been XFCE, as it offers more robustness than LXDE, which lacks much of XFCE's polish in the default configuration. XFCE provides all the benefits users enjoyed in GNOME 2, but with a lightweight experience that made it a hit on older computers.
### The Desktop Theming ###
After a fresh installation, XFCE will be a bit boring and lack a certain visual attractiveness. So, don't misunderstand my words here: XFCE is still a nice-looking desktop, but it may seem like plain vanilla to most people who are new to it. The good news is that installing a new theme on XFCE is a reasonably easy process: find an XFCE theme that appeals to you, then extract it to the proper directory. From that point, XFCE provides an important tool, located under Appearance, to help the user select the chosen theme easily through the graphical user interface (GUI). No other tools are required, and if the user follows the above directions, it will be simple for anyone who cares to try.
On the GNOME desktop, the user should follow a similar approach. The key difference is that the user has to download and install the GNOME Tweak Tool before proceeding. That is not a huge barrier by any means, but it is a valid criticism when you consider that XFCE does not require any tweak tool in order to install and activate new desktop themes. On GNOME, especially after installing the Tweak Tool mentioned above, you will also need to make sure that you have the User Themes extension installed.
As with XFCE, the user will want to search for and download the theme that most appeals to him. Then he can revisit the GNOME Tweak Tool, click on the Appearance option on the left side of the tool, look at the bottom of the page and click the file browse button to the right of Shell Theme, then browse to the zipped folder and click open. If this process was successful, the user will see an alert telling him the theme was installed without any problems. From there, he can simply use the pull-down menu to select the theme he wants to use. As with XFCE, theme activation is very easy; however, the need to download a non-included application to use a new theme leaves much to be desired.
Finally, there is KDE desktop theming. As with XFCE, there is no need to install any extra tools to make it work. This is one area where KDE beats XFCE. Not only is installing themes in KDE accomplished entirely within the graphical user interface, it's even possible to click the "Get New Themes" button to locate, view, and install new themes automatically.
However, it should be noted that KDE is a more robust desktop environment than XFCE, so it is reasonable to see why such extra functionality might be missing from desktops that are mainly designed to be minimalist. Still, we all have to give KDE props for such outstanding functionality.
### MATE is not Lightweight Desktop ###
Before continuing with the comparison between XFCE, GNOME 3 and KDE, it should be made clear that we can't include the MATE desktop as an option in the comparison. MATE can be considered the next incarnation of the GNOME 2 desktop, but it is not mainly marketed as a lightweight or fast desktop. Instead, its primary goal is to be a more traditional and comfortable desktop environment where users can feel right at home.
XFCE, on the other hand, comes with a completely different goal set. XFCE offers its users a more lightweight yet still visually appealing desktop experience. So, for everyone who points out that MATE is a lightweight desktop too: it isn't really targeting the lightweight desktop crowd. Both options may be dressed up to look quite attractive with the proper theme installed.
### The Navigation of Desktop ###
XFCE honestly offers obvious navigation out of the box. Anyone who is used to the traditional Windows or GNOME 2/MATE desktop experience will be able to navigate around a new XFCE installation without any kind of help. Straight away, adding applets to the panel is very obvious. The same goes for locating installed applications: just use the launcher and simply click on the desired application. With the exception of LXDE and MATE, no other desktop makes navigation that simple. What is even better is that the control panel is very easy to use, which is a really big benefit to everyone who is new to the desktop environment. If the user prefers older methods of using his desktop, then GNOME is not an option. With the hot corners, the lack of a minimize button, and the different application layout, it will take most newcomers some time to get used to it.
If the user is coming from, for example, a Windows environment, then he is going to be put off by the inability to add applets to the top of his workspace with a mere right-click. Instead, this is handled by using extensions. Installing extensions in GNOME is brain-dead easy thanks to the easy-to-use on/off toggle switches on the GNOME extensions page. Sadly, users have to actually know about that page to enjoy this functionality.
On the other side, GNOME shares the desire to provide a straightforward and easy-to-use control panel, which many of you may think is not a big deal, but it is really something that I personally find commendable and worth mentioning. KDE offers its users a more traditional desktop experience, with familiar launchers and the ability to get to software in a more familiar way if they are coming from a Windows desktop. Adding widgets or applets to the KDE desktop is an easy matter of just right-clicking on the bottom of the desktop. The only problem with KDE's approach is that, as with many things in KDE, the feature users are actually looking for is hidden. KDE users might berate me for this opinion, but I still stand by my statement.
In order to add a widget, right-clicking on "my panel" shows the panel options, but not an immediate method to install widgets. You will not actually see Add Widgets until you select Panel Options, then Add Widgets. This is not a big deal to me, but for some users it becomes an unnecessary bit of confusion. To make things more convoluted, after users manage to locate the Widgets area, they discover a brand new term called "Activities". It is in the same area as the Widgets, yet it is somehow its own area as to what it does.
Now don't misunderstand me, the Activities feature in KDE is totally great and actually valued. But from a usability standpoint, I think it would be better suited to another menu option so as not to confuse newbies. You are welcome to differ, but testing this with newbies for extended periods of time proves the point over and over again. The rant against the Activities placement aside, KDE's approach to adding new widgets is really great. As with KDE themes, the user can browse through and install widgets automatically via the provided graphical user interface. It is a fantastic bit of functionality, and it should be celebrated as such. The control panel of KDE is not as easy to use as the user might like, yet it is clear that this is something they are still working on.
### So, the XFCE is the best desktop, right? ###
I actually run GNOME, KDE, and XFCE on my computers in the office and at home. I also have some older machines running OpenBox and LXDE. Each desktop experience offers something useful to me and helps me use each machine as I see fit. For me, I have a soft spot in my heart for XFCE, as it is one of the desktop environments I have stuck with for years. But for this article, I am writing it on my daily-use computer, which in fact runs GNOME.
The main idea here is that I still feel XFCE provides a slightly better user experience for users who are looking for a stable, traditional, and easy-to-understand desktop environment. You are welcome to share your opinion with us in the comments section.
--------------------------------------------------------------------------------
via: http://www.unixmen.com/kde-vs-gnome-vs-xfce-desktop/
作者:[M.el Khamlichi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/pirat9/

View File

@ -0,0 +1,155 @@
10 tools for visual effects in Linux with Kdenlive
================================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life-uploads/kdenlivetoolssummary.png)
Image credits : Seth Kenlon. [CC BY-SA 4.0.][1]
[Kdenlive][2] is one of those applications; you can use it daily for a year and wake up one morning only to realize that you still have only grazed the surface of all of its potential. That's why it's nice every once in a while to sit back and look over some of the lesser-used tricks and tools in Kdenlive. Even though something's not used as often as, say, the Spacer or Razor tools, it still may end up being just the right finishing touch on your latest masterpiece.
Most of the tools I'll discuss here are not officially part of Kdenlive; they are plugins from the [Frei0r][3] package. These are ubiquitous parts of video processing on Linux and Unix, and they usually get installed along with Kdenlive as distributed by most Linux distributions, so they often seem like part of the application. If your install of Kdenlive does not feature some of the tools mentioned here, make sure that you have Frei0r plugins installed.
Since many of the tools in this article affect the look of an image, here is the base image, without effects or adjustment:
![](https://opensource.com/sites/default/files/images/life-uploads/before_0.png)
Still image grabbed from a video by Footage Firm, Inc. [CC BY-SA 4.0.][1]
Let's get started.
### 1. Color effect ###
![](https://opensource.com/sites/default/files/images/life-uploads/coloreffect.png)
You can find the **Color Effect** filter in **Add Effect > Misc** context menu. As filters go, it's mostly just a preset; the only controls it has are which filter you want to use.
![](https://opensource.com/sites/default/files/images/life-uploads/coloreffect_ctl_0.png)
Normally that's the kind of filter I avoid, but I have to be honest: Sometimes a plug-and-play solution is exactly what you want. This filter has a few different settings, but the two that make it worthwhile (at least for me) are the Sepia and XPro effects. Admittedly, controls to adjust how strong the sepia effect is would be nice, but no matter what, when you need a quick and familiar color effect, this is the filter to throw onto a clip. It's immediate, it's easy, and if your client asks for that look, this does the trick every time.
### 2. Colorize ###
![](https://opensource.com/sites/default/files/images/life-uploads/colorize.png)
The simplicity of the **Colorize** filter in **Add Effect > Misc** is also its strength. In some editing applications, it takes two filters and some compositing to achieve this simple color-wash effect. It's refreshing that in Kdenlive, it's a matter of one filter with three possible controls (only one of which, strictly speaking, is necessary to achieve the look).
![](https://opensource.com/sites/default/files/images/life-uploads/colorize_ctl.png)
Its use is intuitive; use the **Hue** slider to set the color. Use the other controls to adjust the luma of the base image as needed.
This is not a filter I use every day, but for ad spots, bumpers, dreamy sequences, or titles, it's the easiest and quickest path to a commonly needed look. Get a company's color, use it as the colorize effect, slap a logo over the top of the screen, and you've just created a winning corporate intro.
### 3. Dynamic Text ###
![](https://opensource.com/sites/default/files/images/life-uploads/dyntext.png)
For the assistant editor, the **Add Effect > Misc > Dynamic Text** effect is worth the price of Kdenlive. With one mostly pre-set filter, you can add a running timecode burn-in to your project, which is an absolute must-have safety feature when round-tripping your footage through effects and sound.
The controls look more complex than they actually are.
![](https://opensource.com/sites/default/files/images/life-uploads/dyntext_ctl.png)
The font settings are self-explanatory. Placement of the text is controlled by the Horizontal and Vertical Alignment settings; steer clear of the **Size** setting (it controls the size of the "canvas" upon which you are compositing the burn-in, not the size of the burn-in itself).
The text itself doesn't have to be timecode. From the dropdown menu, you can choose from a list of useful text, including frame count (useful for VFX, since animators work in frames), source frame rate, source dimensions, and more.
You are not limited to just one choice. The text field in the control panel will take whatever arbitrary text you put into it, so if you want to burn in more information than just timecode and frame rate (such as **Sc 3 - #timecode# - #meta.media.0.stream.frame_rate#**), then have at it.
### 4. Luminance ###
![](https://opensource.com/sites/default/files/images/life-uploads/luminance.png)
The **Add Effect > Misc > Luminance** filter is a no-options filter. Luminance does one thing and it does it well: It drops the chroma values of all pixels in an image so that they are displayed by their luma values. In simpler terms, it's a grayscale filter.
The nice thing about this filter is that it's quick, easy, efficient, and effective. This filter combines particularly well with other related filters (meaning that yes, I'm cheating and including three filters for one).
![](https://opensource.com/sites/default/files/images/life-uploads/luminance_ctl.png)
Combining, in this order, the **RGB Noise** for emulated grain, **Luminance** for grayscale, and **LumaLiftGainGamma** for levels can render a textured image that suggests the classic look and feel of [Kodak Tri-X][4] film.
### 5. Mask0mate ###
![](https://opensource.com/sites/default/files/images/life-uploads/mask0mate.png)
Image by Footage Firm, Inc.
Better known as a four-point garbage mask, the **Add Effect > Alpha Manipulation > Mask0mate** tool is a quick, no-frills way to ditch parts of your frame that you don't need. There isn't much to say about it; it is what it is.
![](https://opensource.com/sites/default/files/images/life-uploads/mask0mate_ctl.png)
The confusing thing about the effect is that it does not imply compositing. You can pull in the edges all you want, but you won't see it unless you add the **Composite** transition to reveal what's underneath the clip (even if that's nothing). Also, use the **Invert** function for the filter to act like you think it should act (without it, the controls will probably feel backward to you).
### 6. Pr0file ###
![](https://opensource.com/sites/default/files/images/life-uploads/pr0file.png)
The **Add Effect > Misc > Pr0file** filter is an analytical tool, not something you would actually leave on a clip for final export (unless, of course, you do). Pr0file consists of two components: the Marker, which dictates what area of the image is being analyzed, and the Graph, which displays information about the marked region.
Set the marker using the **X, Y, Tilt**, and **Length** controls. The graphical readout of all the relevant color channel information is displayed as a graph, superimposed over your image.
![](https://opensource.com/sites/default/files/images/life-uploads/pr0file_ctl.jpg)
The readout displays a profile of the colors within the region marked. The result is a sort of hyper-specific vectorscope (or oscilloscope, as the case may be) that can help you zero in on problem areas during color correction, or compare regions while color matching.
In other editors, the way to get the same information was simply to temporarily scale your image up to the region you want to analyze, look at your readout, and then hit undo to scale back. Both ways work, but the Pr0file filter does feel a little more elegant.
### 7. Vectorscope ###
![](https://opensource.com/sites/default/files/images/life-uploads/vectorscope.jpg)
Kdenlive features an inbuilt vectorscope, available from the **View** menu in the main menu bar. A vectorscope is not a filter; it's just another view of the footage in your Project Monitor, specifically a view of the color saturation in the current frame. If you are color correcting an image and you're not sure what colors you need to boost or counteract, looking at the vectorscope can be a huge help.
There are several different views available. You can render the vectorscope in traditional green monochrome (like the hardware vectorscopes you'd find in a broadcast control room), or a chromatic view (my personal preference), or subtracted from a color-wheel background, and more.
The vectorscope reads the entire frame, so unlike the Pr0file filter, you are not just getting a reading of one area in the frame. The result is a consolidated view of what colors are most prominent within a frame. Technically, the same sort of information can be intuited by several trial-and-error passes with color correction, or you can just leave your vectorscope open and watch the colors float along the color wheel and make adjustments accordingly.
Aside from how you want the vectorscope to look, there are no controls for this tool. It is a readout only.
### 8. Vertigo ###
![](https://opensource.com/sites/default/files/images/life-uploads/vertigo.jpg)
There's no way around it; **Add Effect > Misc > Vertigo** is a gimmicky special effect filter. So unless you're remaking [Fear and Loathing][5] or the movie adaptation of [Dead Island][6], you probably aren't going to use it that much; however, it's one of those high-quality filters that does the exact trick you want when you happen to be looking for it.
The controls are simple. You can adjust how distorted the image becomes and the rate at which it distorts. The overall effect is probably more drunk or vision-quest than vertigo, but it's good.
![](https://opensource.com/sites/default/files/images/life-uploads/vertigo_ctl.png)
### 9. Vignette ###
![](https://opensource.com/sites/default/files/images/life-uploads/vignette.jpg)
Another beautiful effect, the **Add Effect > Misc > Vignette** filter darkens the outer edges of the frame to provide a sort of portrait, soft-focus nouveau look. Combined with the Color Effect or the Luminance faux Tri-X trick, this can be a powerful and emotional look.
The softness of the border and the aspect ratio of the iris can be adjusted. The **Clear Center Size** attribute controls the size of the clear area, which has the effect of adjusting the intensity of the vignette effect.
![](https://opensource.com/sites/default/files/images/life-uploads/vignette_ctl.png)
### 10. Volume ###
![](https://opensource.com/sites/default/files/images/life-uploads/vol.jpg)
I don't believe in mixing sound within the video editing application, but I do acknowledge that sometimes it's just necessary for a quick fix or, sometimes, even for a tight production schedule. And that's when the **Audio correction > Volume (Keyframable)** effect comes in handy.
The control panel is clunky, and no one really wants to adjust volume that way, so the effect is best when used directly in the timeline. To create a volume change, double-click the volume line over the audio clip, and then click and drag to adjust. It's that simple.
Should you use it? Not really. Sound mixing should be done in a sound mixing application. Will you use it? Absolutely. At some point, you'll get audio that is too loud to play as you edit, or you'll be up against a deadline without a sound engineer in sight. Use it judiciously, watch your levels, and get the show finished.
### Everything else ###
This has been 10 (OK, 13 or 14) effects and tools that Kdenlive has quietly lying around to help your edits become great. Obviously there's a lot more to Kdenlive than just these little tricks. Some are obvious, some are cliché, some are obtuse, but they're all in your toolkit. Get to know them, explore your options, and you might be surprised what a few cheap tricks will get you.
--------------------------------------------------------------------------------
via: https://opensource.com/life/15/12/10-kdenlive-tools
作者:[Seth Kenlon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/seth
[1]:https://creativecommons.org/licenses/by-sa/4.0/
[2]:https://kdenlive.org/
[3]:http://frei0r.dyne.org/
[4]:http://www.kodak.com/global/en/professional/products/films/bw/triX2.jhtml
[5]:https://en.wikipedia.org/wiki/Fear_and_Loathing_in_Las_Vegas_(film)
[6]:https://en.wikipedia.org/wiki/Dead_Island


@ -0,0 +1,96 @@
5 great Raspberry Pi projects for the classroom
================================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-open-source-yearbook-lead3.png)
Image by : opensource.com
### 1. Minecraft Pi ###
Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0.][1]
Minecraft is the favorite game of pretty much every teenager in the world—and it's one of the most creative games ever to capture the attention of young people. The version that comes with every Raspberry Pi is not only a creative building game, but also comes with a programming interface that allows further interaction with the Minecraft world through Python code.
Minecraft: Pi Edition is a great way for teachers to engage students with problem solving and writing code to perform tasks. You can use the Python API to build a house and have it follow you wherever you go, build a bridge wherever you walk, make it rain lava, show the temperature in the sky, and anything else your imagination can create.
Read more in "[Getting Started with Minecraft Pi][2]."
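As a small taste of what that API looks like, here is a minimal sketch using the bundled `mcpi` Python library. Run it while the game is open; the offsets and block choice are arbitrary:

    from mcpi.minecraft import Minecraft
    from mcpi import block

    mc = Minecraft.create()                # connect to the running game
    mc.postToChat("Hello from Python!")    # message appears in the chat window

    pos = mc.player.getTilePos()           # the player's current block position
    # Build a 5x5x5 stone cube a couple of blocks away from the player
    mc.setBlocks(pos.x + 2, pos.y, pos.z + 2,
                 pos.x + 6, pos.y + 4, pos.z + 6,
                 block.STONE.id)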
### 2. Reaction game and traffic lights ###
![](https://opensource.com/sites/default/files/pi_traffic_installed_yellow_led_on.jpg)
Courtesy of [Low Voltage Labs][3]. [CC BY-SA 4.0][1].
It's really easy to get started with physical computing on Raspberry Pi—just connect up LEDs and buttons to the GPIO pins, and with a few lines of code you can turn lights on and control things with button presses. Once you know the code to do the basics, it's down to your imagination as to what you do next!
If you know how to flash one light, you can flash three. Pick out three LEDs in traffic light colors and you can code the traffic light sequence. If you know how to use a button to trigger an event, then you have a pedestrian crossing! Also look out for great pre-built traffic light add-ons like [PI-TRAFFIC][4], [PI-STOP][5], [Traffic HAT][6], and more.
It's not always about the code—this can be used as an exercise in understanding how real world systems are devised. Computational thinking is a useful skill in any walk of life.
![](https://opensource.com/sites/default/files/reaction-game.png)
Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0][1].
Next, try wiring up two buttons and an LED and making a two-player reaction game—let the light come on after a random amount of time and see who can press the button first!
To learn more, check out "[GPIO Zero recipes][7]." Everything you need is in [CamJam EduKit 1][8].
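A rough sketch of that two-player reaction game with the GPIO Zero library follows; the BCM pin numbers are assumptions, so match them to your own wiring:

    from random import uniform
    from time import sleep
    from gpiozero import LED, Button

    led = LED(17)
    player_1 = Button(14)
    player_2 = Button(15)

    sleep(uniform(5, 10))      # wait a random time so nobody can anticipate it
    led.on()

    while True:                # the first press after the light comes on wins
        if player_1.is_pressed:
            print("Player 1 wins!")
            break
        if player_2.is_pressed:
            print("Player 2 wins!")
            break

    led.off()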
### 3. Sense HAT Pixel Pet ###
The Astro Pi—an augmented Raspberry Pi—is going to space this December, but you haven't missed your chance to get your hands on the hardware. The Sense HAT is the sensor board add-on used in the Astro Pi mission and it's available for anyone to buy. You can use it for data collection, science experiments, games and more. Watch this Geek Gurl Diaries video from Raspberry Pi's Carrie Anne for a great way to get started—by bringing to life an animated pixel pet of your own design on the Sense HAT display:
YouTube video
<iframe width="520" height="315" frameborder="0" src="https://www.youtube.com/embed/gfRDFvEVz-w" allowfullscreen=""></iframe>
Learn more in "[Exploring the Sense HAT][9]."
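A pixel pet boils down to pushing a list of 64 colors to the 8x8 LED matrix. A minimal sketch with the `sense_hat` Python library, where the design itself is just a stand-in for your own:

    from sense_hat import SenseHat

    sense = SenseHat()
    g = (0, 255, 0)   # green
    b = (0, 0, 0)     # off
    # One entry per pixel, eight entries per row of the matrix
    pet = [
        b, b, g, g, g, g, b, b,
        b, g, g, g, g, g, g, b,
        g, g, b, g, g, b, g, g,
        g, g, g, g, g, g, g, g,
        g, b, g, g, g, g, b, g,
        g, g, b, b, b, b, g, g,
        b, g, g, g, g, g, g, b,
        b, b, g, g, g, g, b, b,
    ]
    sense.set_pixels(pet)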
### 4. Infrared bird box ###
![](https://opensource.com/sites/default/files/ir-bird-box.png)
Courtesy of the Raspberry Pi Foundation. [CC BY-SA 4.0.][1]
A great exercise for the whole class to get involved with—place a Raspberry Pi and the NoIR camera module inside a bird box along with some infra-red lights so you can see in the dark, then stream video from the Pi over the network or on the internet. Wait for birds to nest and you can observe them without disturbing them in their habitat.
Learn all about infrared and the light spectrum, and how to adjust the camera focus and control the camera in software.
Learn more in "[Make an infrared bird box.][10]"
### 5. Robotics ###
![](https://opensource.com/sites/default/files/edukit3_1500-alex-eames-sm.jpg)
Courtesy of Low Voltage Labs. [CC BY-SA 4.0][1].
With a Raspberry Pi and as little as a couple of motors and a motor controller board, you can build your own robot. There is a vast range of robots you can make, from basic buggies held together by sellotape and a homemade chassis, all the way to self-aware, sensor-laden metallic stallions with camera attachments driven by games controllers.
Learn how to control individual motors with something straightforward like the RTK Motor Controller Board (£8/$12), or dive into the new CamJam robotics kit (£17/$25) which comes with motors, wheels and a couple of sensors—great value and plenty of learning potential.
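As a sketch of how approachable the buggy end of that range is, GPIO Zero's `Robot` class drives a pair of motors directly; the pin pairs below are assumptions, so use the pins your motor board is wired to:

    from time import sleep
    from gpiozero import Robot

    # Each motor is driven by a pair of GPIO pins (forward, backward)
    robot = Robot(left=(4, 14), right=(17, 18))

    robot.forward()   # drive straight ahead
    sleep(2)
    robot.left()      # spin on the spot
    sleep(1)
    robot.stop()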
Alternatively, if you'd like something more hardcore, try PiBorg's [4Borg][11] (£99/$150) or [DiddyBorg][12] (£180/$273) or go the whole hog and treat yourself to their DoodleBorg Metal edition (£250/$380)—and build a mini version of their infamous [DoodleBorg tank][13] (unfortunately not for sale).
Check out the [CamJam robotics kit worksheets][14].
--------------------------------------------------------------------------------
via: https://opensource.com/education/15/12/5-great-raspberry-pi-projects-classroom
作者:[Ben Nuttall][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bennuttall
[1]:https://creativecommons.org/licenses/by-sa/4.0/
[2]:https://opensource.com/life/15/5/getting-started-minecraft-pi
[3]:http://lowvoltagelabs.com/
[4]:http://lowvoltagelabs.com/products/pi-traffic/
[5]:http://4tronix.co.uk/store/index.php?rt=product/product&product_id=390
[6]:https://ryanteck.uk/hats/1-traffichat-0635648607122.html
[7]:http://pythonhosted.org/gpiozero/recipes/
[8]:http://camjam.me/?page_id=236
[9]:https://opensource.com/life/15/10/exploring-raspberry-pi-sense-hat
[10]:https://www.raspberrypi.org/learning/infrared-bird-box/
[11]:https://www.piborg.org/4borg
[12]:https://www.piborg.org/diddyborg
[13]:https://www.piborg.org/doodleborg
[14]:http://camjam.me/?page_id=1035#worksheets


@ -0,0 +1,94 @@
6 creative ways to use ownCloud
================================================================================
![Yearbook cover 2015](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/osdc-open-source-yearbook-lead1-inc0335020sw-201511-01.png)
Image by : Opensource.com
[ownCloud][1] is a self-hosted open source file sync and share server. Like "big boys" Dropbox, Google Drive, Box, and others, ownCloud lets you access your files, calendar, contacts, and other data. You can synchronize everything (or part of it) between your devices and share files with others. But ownCloud can do much more than its proprietary, [hosted-on-somebody-else's-computer competitors][2].
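Because ownCloud exposes your files over the standard WebDAV protocol, even a short script can work with them. A minimal sketch in Python; the server URL, folder, and credentials are placeholders:

    import requests

    BASE = "https://cloud.example.com/remote.php/webdav"  # placeholder server
    AUTH = ("alice", "secret-password")                   # placeholder account

    # Upload a local file into the Documents folder with a single WebDAV PUT
    with open("report.pdf", "rb") as f:
        response = requests.put(BASE + "/Documents/report.pdf", data=f, auth=AUTH)
    response.raise_for_status()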
Let's look at six creative things ownCloud can do. Some of these are possible because ownCloud is open source, whereas others are just unique features it offers.
### 1. A scalable ownCloud Pi cluster ###
Because ownCloud is open source, you can choose between self-hosting on your own server or renting space from a provider you trust—no need to put your files at a big company that stores it who knows where. [Find some ownCloud providers here][3] or grab packages or a virtual machine for [your own server here][4].
![](https://opensource.com/sites/default/files/images/life-uploads/banana-pi-owncloud-cluster.jpg)
Photo by Jörn Friedrich Dreyer. [CC BY-SA 4.0.][5]
The most creative things we've seen are a [Banana Pi cluster][6] and a [Raspberry Pi cluster][7]. Although ownCloud's scalability is often used to deploy to hundreds of thousands of users, some folks out there take it in a different direction, bringing multiple tiny systems together to make a super-fast ownCloud. Kudos!
### 2. Keep your passwords synced ###
To make ownCloud easier to extend, we have made it extremely modular and have an [ownCloud app store][8]. There you can find things like music and video players, calendars, contacts, productivity apps, games, a sketching app, and much more.
Picking only one app from the almost 200 available is hard, but managing passwords is certainly a unique feature. There are no less than three apps providing this functionality: [Passwords][9], [Secure Container][10], and [Passman][11].
![](https://opensource.com/sites/default/files/images/life-uploads/password.png)
### 3. Store your files where you want ###
External storage allows you to hook your existing data storage into ownCloud, letting you access files stored on FTP, WebDAV, Amazon S3, and even Dropbox and Google Drive through one interface.
YouTube video
<iframe width="520" height="315" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/uezzFDRnoPY"></iframe>
The "big boys" like to create their own little walled gardens—Box user can only collaborate with other Box users; and if you want to share your files from Google Drive, your mate needs a Google account or they can't do much. With ownCloud's external storage, you can break these barriers.
A very creative solution is adding Google Drive and Dropbox as external storage. You can work with files on both seamlessly and share them with others through a simple link—no account needed to work with you!
### 4. Get files uploaded ###
Because ownCloud is open source, people contribute interesting features without being limited by corporate requirements. Our contributors have always cared about security and privacy, so ownCloud introduced features such as protecting a public link with a password and setting an expire date [years before anybody else did][12].
Today, ownCloud has the ability to configure a shared link as read-write, which means visitors can seamlessly edit the files you share with them (protected with a password or not) or upload new files to your server without being forced to sign up to another web service that wants their private data.
YouTube video
<iframe width="520" height="315" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/3GSppxEhmZY"></iframe>
This is great for when people want to share a large file with you. Rather than having to upload it to a third-party site, send you a link, and make you go there and download it (often requiring a login), they can just upload it to a shared folder you provide, and you can get to work right away.
### 5. Get free secure storage ###
We already talked about how many of our contributors care about security and privacy. That's why ownCloud has an app that can encrypt and decrypt stored data.
Using ownCloud to store your files on Dropbox or Google Drive defeats the whole idea of retaking control of your data and keeping it private. The Encryption app changes that. By encrypting data before sending it to these providers and decrypting it upon retrieval, your data is safe as kittens.
### 6. Share your files and stay in control ###
As an open source project, ownCloud has no stake in building walled gardens. Enter Federated Cloud Sharing: a protocol [developed and published by ownCloud][13] that enables different file sync and share servers to talk to one another and exchange files securely. Federated Cloud Sharing has an interesting history. [Twenty-two German universities][14] decided to build a huge cloud for their 500,000 students. But as each university wanted to stay in control of the data of their own students, a creative solution was needed: Federated Cloud Sharing. The solution now connects all these universities so the students can seamlessly work together. At the same time, the system administrators at each university stay in control of the files their students have created and can apply policies, such as storage restrictions, or limitations on what, with whom, and how files can be shared.
YouTube video
<iframe width="520" height="315" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/9-JEmlH2DEg"></iframe>
And this awesome technology isn't limited to German universities: Every ownCloud user can find their [Federated Cloud ID][15] in their user settings and share it with others.
So there you have it. Six ways ownCloud enables people to do special and unique things, all made possible because it is open source and designed to help you liberate your data.
--------------------------------------------------------------------------------
via: https://opensource.com/life/15/12/6-creative-ways-use-owncloud
作者:[Jos Poortvliet][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jospoortvliet
[1]:https://owncloud.com/
[2]:https://blogs.fsfe.org/mk/new-stickers-and-leaflets-no-cloud-and-e-mail-self-defense/
[3]:https://owncloud.org/providers
[4]:https://owncloud.org/install/#instructions-server
[5]:https://creativecommons.org/licenses/by-sa/4.0/
[6]:http://www.owncluster.de/
[7]:https://christopherjcoleman.wordpress.com/2013/01/05/host-your-owncloud-on-a-raspberry-pi-cluster/
[8]:https://apps.owncloud.com/
[9]:https://apps.owncloud.com/content/show.php/Passwords?content=170480
[10]:https://apps.owncloud.com/content/show.php/Secure+Container?content=167268
[11]:https://apps.owncloud.com/content/show.php/Passman?content=166285
[12]:https://owncloud.com/owncloud45-community/
[13]:http://karlitschek.de/2015/08/announcing-the-draft-federated-cloud-sharing-api/
[14]:https://owncloud.com/customer/sciebo/
[15]:https://owncloud.org/federation/


@ -0,0 +1,79 @@
6 useful LibreOffice extensions
================================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/yearbook2015-osdc-lead-2.png)
Image by : Opensource.com
LibreOffice is the best free office suite around, and as such has been adopted by all major Linux distributions. Although LibreOffice is already packed with features, it can be extended by using specific add-ons, called extensions.
The main LibreOffice extensions website is [extensions.libreoffice.org][1]. Extensions are tools that can be added or removed independently from the main installation, and may add new functionality or make existing functionality easier to use.
### 1. MultiFormatSave ###
MultiFormatSave lets users save a document in the OpenDocument, Microsoft Office (old and new), and/or PDF formats simultaneously, according to user settings. This extension is extremely useful during the migration from Microsoft Office document formats to the [Open Document Format][2] standard, because it offers the option to save in both flavors: ODF for interoperability, and Microsoft Office for compatibility with all users sticking to legacy formats. This makes the migration process smoother and easier to administer.
**[Download MultiFormatSave][3]**
![Multiformatsave extension](https://opensource.com/sites/default/files/images/business-uploads/multiformatsave.png)
### 2. Alternative dialog Find & Replace for Writer (AltSearch) ###
This extension adds many new features to Writer's find & replace function:
- searched or replaced text can contain one or more paragraphs
- multiple searches and replacements in one step
- searching bookmarks, notes, text fields, cross-references, and reference marks by their content, name, or mark, and inserting them
- searching and inserting footnotes and endnotes
- searching for tables, pictures, and text frames by their names
- finding manual page and column breaks, and setting them up or deactivating them
- searching for similarly formatted text, based on the text at the cursor
It is also possible to save and load search and replacement parameters, and execute the batch on several opened documents at the same time.
**[Download Alternative dialog Find & Replace for Writer (AltSearch)][4]**
![Alternative Find&amp;amp;Replace add-on](https://opensource.com/sites/default/files/images/business-uploads/alternativefindreplace.png)
### 3. Pepito Cleaner ###
Pepito Cleaner is an extension of LibreOffice created to quickly resolve the most common formatting mistakes of old scans, PDF imports, and every digital text file. By clicking the Pepito Cleaner icon on the LibreOffice toolbar, users will open a window that will analyze the document and show the results broken down by category. This is extremely useful when converting PDF documents to ODF, as it cleans all the cruft left in place by the automatic process.
**[Download Pepito Cleaner][5]**
![Pepito cleaner screenshot](https://opensource.com/sites/default/files/images/business-uploads/pepitocleaner.png)
### 4. ImpressRunner ###
Impress Runner is a simple extension that transforms an [Impress][6] presentation into an auto-running file. The extension adds two icons, to set and remove the autostart function, which can also be added manually by editing the File | Properties | Custom Properties menu, and adding the term autostart in one of the first four text fields. This extension is especially useful for booths at conferences and events, where the slides are supposed to run unattended.
**[Download ImpressRunner][7]**
### 5. Export as Images ###
The Export as Images extension adds a File menu entry export as Images... in Impress and [Draw][8], to export all slides or pages as images in JPG, PNG, GIF, BMP, and TIFF format, and allows users to choose a file name for exported images, the image size, and other parameters.
**[Download Export as Images][9]**
![Export as images extension](https://opensource.com/sites/default/files/images/business-uploads/exportasimages.png)
### 6. Anaphraseus ###
Anaphraseus is a CAT (Computer-Aided Translation) tool for creating, managing, and using bilingual Translation Memories. Anaphraseus is a LibreOffice macro set available as an extension or a standalone document. Originally, Anaphraseus was developed to work with the Wordfast format, but it can also export and import files in TMX format. Anaphraseus' main features are: text segmentation, fuzzy search in Translation Memory, terminology recognition, and TMX Export/Import (OmegaT translation memory format).
**[Download Anaphraseus][10]**
![Anaphraseus screenshot](https://opensource.com/sites/default/files/images/business-uploads/anaphraseus.png)
Do you have a favorite LibreOffice extension to recommend? Let us know about it in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/business/15/12/6-useful-libreoffice-extensions
作者:[Italo Vignoli][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/italovignoli
[1]:http://extensions.libreoffice.org/
[2]:http://www.opendocumentformat.org/
[3]:http://extensions.libreoffice.org/extension-center/multisave-1
[4]:http://extensions.libreoffice.org/extension-center/alternative-dialog-find-replace-for-writer
[5]:http://pepitoweb.altervista.org/pepito_cleaner/index.php
[6]:https://www.libreoffice.org/discover/impress/
[7]:http://extensions.libreoffice.org/extension-center/impressrunner
[8]:https://www.libreoffice.org/discover/draw/
[9]:http://extensions.libreoffice.org/extension-center/export-as-images
[10]:http://anaphraseus.sourceforge.net/


@ -0,0 +1,79 @@
Top 5 open source community metrics to track
================================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/yearbook2015-osdc-lead-1.png)
So you decided to use metrics to track your free, open source software (FOSS) community. Now comes the big question: Which metrics should I be tracking?
To answer this question, you must have an idea of what information you need. For example, you may want to know about the sustainability of the project community. How quickly does the community react to problems? How is the community attracting, retaining, or losing contributors? Once you decide which information you need, you can figure out which traces of community activity are available to provide it. Fortunately, FOSS projects following an open development model tend to leave loads of public data in their software development repositories, which can be analyzed to gather useful data.
In this article, I'll introduce metrics that help provide a multi-faceted view of your project community.
### 1. Activity ###
The overall activity of the community and how it evolves over time is a useful metric for all open source communities. Activity provides a first view of how much the community is doing, and can be used to track different kinds of activity. For example, the number of commits gives a first idea about the volume of the development effort. The number of tickets opened provides insight into how many bugs are reported or new features are proposed. The number of messages in mailing lists or posts in forums gives an idea of how much discussion is being held in public.
![Activity metrics chart](https://opensource.com/sites/default/files/images/business-uploads/activity-metrics.png)
Number of commits and number of merged changes after code review in the OpenStack project, as found in the [OpenStack Activity Dashboard][1]. Evolution over time (weekly data).
### 2. Size ###
The size of the community is the number of people participating in it, but, depending on the kind of participation, size numbers may vary. Usually you're interested in active contributors, which is good news. Active people may leave traces in the repositories of the project, which means you can count contributors who are active in producing code by looking at the **Author** field in git repositories, or count people participating in the resolution of tickets by looking at who is contributing to them.
This basic idea of "activity" (somebody did something) can be extended in many ways. One common way to track activity is to look at how many people did a sizable chunk of the activity. Generally most of a project's code contributions, for example, are from a small fraction of the people in the project's community. Knowing about that fraction helps provide an idea of the core group (i.e., the people who help lead the community).
![Size metrics chart](https://opensource.com/sites/default/files/images/business-uploads/size-metrics.png)
Number of authors and number of posters in mailing lists in the Xen project, as found in the [Xen Project Development Dashboard][2]. Evolution over time (monthly data).
### 3. Performance ###
So far, I have focused on measuring quantities of activities and contributors. You also can analyze how processes and people are performing. For example, you can measure how long processes take to finish. Time to resolve or close tickets shows how the project is reacting to new information that requires action, such as fixing a reported bug or implementing a requested new feature. Time spent in code review—from the moment when a change to the code is proposed to the moment it is accepted—shows how long upgrading a proposed change to the quality standards expected by the community takes.
Other metrics deal with how well the project is coping with pending work, such as the ratio of new to closed tickets, or the backlog of still non-completed code reviews. Those parameters tell us, for example, whether or not the resources put into solving issues are enough.
![Efficiency metrics chart](https://opensource.com/sites/default/files/images/business-uploads/efficiency-metrics.png)
Ratio of tickets closed by tickets opened, and ratio of change proposals accepted or abandoned by new change proposals per quarter. OpenStack project, as shown in the [OpenStack Development Report, 2015-Q3][3] (PDF).
### 4. Demographics ###
Communities change as contributors move in and out. Depending on how people enter and leave a community over time, the age (time since members joined the community) of the community varies. The [community aging chart][4] nicely illustrates these exchanges over time. The chart is structured as a set of horizontal bars, two per "generation" of people joining the community. For each generation, the attracted bar shows how many new people joined the community during the corresponding period of time. The retained bar shows how many people are still active in the community.
The relationship between the two bars for each generation is the retention rate: the fraction of people of that generation who are still in the project. The complete set of attracted bars shows how attractive the project was in the past, and the complete set of retained bars shows the current age structure of the community.
![Demographics metrics chart](https://opensource.com/sites/default/files/images/business-uploads/demography-metrics.png)
Community aging chart for the Eclipse community, as shown in the [Eclipse Development Dashboard][5]. Generations are defined every six months.
### 5. Diversity ###
Diversity is an important factor in the resiliency of communities. In general, the more diverse communities are—in terms of people or organizations participating—the more resilient they are. For example, when a company decides to leave a FOSS community, the potential problems the departure may cause are much smaller if its employees were contributing 5% of the work rather than 85%.
The [Pony Factor][6] is a term defined by [Daniel Gruno][7] for the minimum number of developers performing 50% of the commits. Based on the Pony Factor, the Elephant Factor is the minimum number of companies whose employees perform 50% of the commits. Both numbers provide an indication of how many people or companies the community depends on.
![Diversity metrics chart](https://opensource.com/sites/default/files/images/business-uploads/diversity-metrics.png)
Pony and Elephant Factor for several FOSS projects in the area of cloud computing, as presented in [The quantitative state of the open cloud 2015][8] (slides).
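Both factors come down to the same small computation: sort contributors by activity and count how many it takes to cover half of it. A sketch in Python, with made-up commit counts:

    def pony_factor(commits_by_author):
        """Minimum number of developers accounting for 50% of all commits."""
        counts = sorted(commits_by_author.values(), reverse=True)
        half = sum(counts) / 2.0
        covered = 0
        for devs, count in enumerate(counts, start=1):
            covered += count
            if covered >= half:
                return devs
        return 0

    # Hypothetical per-developer commit counts
    print(pony_factor({"ana": 120, "bob": 60, "eve": 15, "dan": 5}))  # -> 1

Feed the same function per-company commit counts instead and it yields the Elephant Factor.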
There are many other metrics to help measure a community. When determining which metrics to collect, think about the goals of your community, and which metrics will help you reach them.
--------------------------------------------------------------------------------
via: https://opensource.com/business/15/12/top-5-open-source-community-metrics-track
作者:[Jesus M. Gonzalez-Barahona][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jgbarah
[1]:http://activity.openstack.org/
[2]:http://projects.bitergia.com/xen-project-dashboard/
[3]:http://activity.openstack.org/dash/reports/2015-q3/pdf/2015-q3_OpenStack_report.pdf
[4]:http://radar.oreilly.com/2014/10/measure-your-open-source-communitys-age-to-keep-it-healthy.html
[5]:http://dashboard.eclipse.org/demographics.html
[6]:https://ke4qqq.wordpress.com/2015/02/08/pony-factor-math/
[7]:https://twitter.com/humbedooh
[8]:https://speakerdeck.com/jgbarah/the-quantitative-state-of-the-open-cloud-2015-edition

View File

@ -1,159 +0,0 @@
How to Install and Configure Multihomed ISC DHCP Server on Debian Linux
================================================================================
Dynamic Host Configuration Protocol (DHCP) offers an expedited method for network administrators to provide network layer addressing to hosts on a constantly changing, or dynamic, network. One of the most common server utilities to offer DHCP functionality is ISC DHCP Server. The goal of this service is to provide hosts with the necessary network information to be able to communicate on the networks to which the host is connected. Information that is typically served by this service can include: DNS server information, network address (IP), subnet mask, default gateway information, hostname, and much more.
This tutorial will cover ISC-DHCP-Server version 4.2.4 on a Debian 7.7 server that will manage multiple virtual local area networks (VLAN) but can very easily be applied to a single network setup as well.
The test network that this server was setup on has traditionally relied on a Cisco router to manage the DHCP address leases. The network currently has 12 VLANs needing to be managed by one centralized server. By moving this responsibility to a dedicated server, the router can regain resources for more important tasks such as routing, access control lists, traffic inspection, and network address translation.
The other benefit to moving DHCP to a dedicated server will, in a later guide, involve setting up Dynamic Domain Name Service (DDNS) so that new hosts host-names will be added to the DNS system when the host requests a DHCP address from the server.
### Step 1: Installing and Configuring ISC DHCP Server ###
1. To start the process of creating this multi-homed server, the ISC software needs to be installed via the Debian repositories using the apt utility. As with all tutorials, root or sudo access is assumed. Please make the appropriate modifications to the following commands.
# apt-get install isc-dhcp-server [Installs the ISC DHCP Server software]
# dpkg --get-selections isc-dhcp-server [Confirms successful installation]
# dpkg -s isc-dhcp-server [Alternative confirmation of installation]
![Install ISC DHCP Server in Debian](http://www.tecmint.com/wp-content/uploads/2015/04/Install-ISC-DHCP-Server.jpg)
2. Now that the server software is confirmed installed, it is now necessary to configure the server with the network information that it will need to hand out. At the bare minimum, the administrator needs to know the following information for a basic DHCP scope:
- The network addresses
- The subnet masks
- The range of addresses to be dynamically assigned
Other useful information to have the server dynamically assign includes:
- Default gateway
- DNS server IP addresses
- The Domain Name
- Host name
- Network Broadcast addresses
These are merely a few of the many options that the ISC DHCP server can handle. To get a complete list as well as a description of each option, enter the following command after installing the package:
# man dhcpd.conf
3. Once the administrator has gathered all the necessary information for this server to hand out, it is time to configure the DHCP server as well as the necessary pools. Before creating any pools or server configurations though, the DHCP service must be configured to listen on one of the server's interfaces.
On this particular server, a NIC team has been set up and DHCP will listen on the teamed interface, which was given the name `'bond0'`. Be sure to make the appropriate changes given the server and environment in which everything is being configured. The defaults in this file are okay for this tutorial.
![Configure ISC DHCP Network](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DHCP-Network.jpg)
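On Debian, that setting lives in /etc/default/isc-dhcp-server; for this server it would look roughly like the following, matching the bonded interface described above:

    INTERFACES="bond0"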
This line will instruct the DHCP service to listen for DHCP traffic on the specified interface(s). At this point, it is time to modify the main configuration file to enable the DHCP pools on the necessary networks. The main configuration file is located at /etc/dhcp/dhcpd.conf. Open the file with a text editor to begin:
# nano /etc/dhcp/dhcpd.conf
This file is the configuration for the DHCP server's specific options as well as all of the pools/hosts one wishes to configure. The top of the file starts off with a ddns-update-style clause, and for this tutorial it will remain set to none. However, in a future article Dynamic DNS will be covered, and ISC-DHCP-Server will be integrated with BIND9 to enable host name to IP address updates.
4. The next section is typically the area where an administrator can configure global network settings such as the DNS domain name, default lease time for IP addresses, subnet-masks, and much more. Again, to know more about all the options be sure to read the man page for the dhcpd.conf file.
# man dhcpd.conf
For this server install, there were a couple of global network options that were configured at the top of the configuration file so that they wouldn't have to be implemented in every single pool created.
![Configure ISC DDNS](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ISC-DDNS.png)
Let's take a moment to explain some of these options. While they are configured globally in this example, all of them can be configured on a per pool basis as well.
- option domain-name “comptech.local”; All hosts that this DHCP server serves will be members of the DNS domain name “comptech.local”
- option domain-name-servers 172.27.10.6; DHCP will hand out the DNS server IP 172.27.10.6 to all of the hosts on all of the networks it is configured to serve.
- option subnet-mask 255.255.255.0; The subnet mask handed out to every network will be a 255.255.255.0 or a /24
- default-lease-time 3600; This is the time in seconds that a lease will automatically be valid. The host can re-request the same lease if time runs out, or, if the host is done with the lease, it can hand the address back early.
- max-lease-time 86400; This is the maximum amount of time in seconds a lease can be held by a host.
- ping-check true; This is an extra test to ensure that the address the server wants to assign out isn't already in use by another host on the network.
- ping-timeout; This is how long in seconds the server will wait for a response to a ping before assuming the address isn't in use.
- ignore client-updates; For now this option is irrelevant since DDNS has been disabled earlier in the configuration file, but when DDNS is operating, this option will make the server ignore a host's request to update its own host name in DNS.
5. The next line in this file is the authoritative DHCP server line. This line means that if this server is to be the server that hands out addresses for the networks configured in this file, then uncomment the authoritative stanza.
This server will be the only authority on all the networks it manages so the global authoritative stanza was un-commented by removing the # in front of the keyword authoritative.
![Enable ISC Authoritative](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-authoritative.png)
Enable ISC Authoritative
By default the server is assumed to NOT be an authority on the network. The rationale behind this is security. If someone unknowingly configures the DHCP server improperly or on a network they shouldn't, it could cause serious connectivity issues. This line can also be used on a per network basis. This means that if the server is not the DHCP server for the entire network, the authoritative line can instead be used on a per network basis rather than in the global configuration as seen in the above screen-shot.
6. The next step is to configure all of the DHCP pools/networks that this server will manage. For brevity's sake, this guide will only walk through one of the pools configured. The administrator will need to have gathered all of the necessary network information (i.e. domain name, network addresses, how many addresses can be handed out, etc.).
For this pool the following information was obtained from the network administrator: network id of 172.27.60.0, subnet mask of 255.255.255.0 or a /24, the default gateway for the subnet is 172.27.60.1, and a broadcast address of 172.27.60.255.
This information is important for building the appropriate network stanza in the dhcpd.conf file. Without further ado, let's open the configuration file again using a text editor and then add the new network to the server. This must be done with root/sudo!
# nano /etc/dhcp/dhcpd.conf
![Configure DHCP Pools and Networks](http://www.tecmint.com/wp-content/uploads/2015/04/ISC-network.png)
Configure DHCP Pools and Networks
This is the sample stanza created to hand out IP addresses to a network that is used for the creation of VMware virtual practice servers. The first line indicates the network as well as the subnet mask for that network. Then inside the brackets are all the options that the DHCP server should provide to hosts on this network.
The first stanza, range 172.27.60.50 172.27.60.254;, is the range of dynamically assignable addresses that the DHCP server can hand out to hosts on this network. Notice that the first 49 addresses aren't in the pool and can be assigned statically to hosts if needed.
The second stanza, option routers 172.27.60.1;, hands out the default gateway address for all hosts on this network.
The last stanza, option broadcast-address 172.27.60.255;, indicates the network's broadcast address. This address SHOULD NOT be a part of the range stanza, as the broadcast address can't be assigned to a host.
Some pointers: be sure to always end the option lines with a semi-colon (;), and always make sure each network created is enclosed in curly braces { }.
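Putting those stanzas together, the block for this network comes out roughly like this:

    subnet 172.27.60.0 netmask 255.255.255.0 {
        range 172.27.60.50 172.27.60.254;
        option routers 172.27.60.1;
        option broadcast-address 172.27.60.255;
    }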
7. If there are more networks to create, continue creating them with their appropriate options and then save the text file. Once all configurations have been completed, the ISC-DHCP-Server process will need to be restarted in order to apply the new changes. This can be accomplished with the following command:
# service isc-dhcp-server restart
This will restart the DHCP service and then the administrator can check to see if the server is ready for DHCP requests several different ways. The easiest is to simply see if the server is listening on port 67 via the [lsof command][1]:
# lsof -i :67
![Check DHCP Listening Port](http://www.tecmint.com/wp-content/uploads/2015/04/lsof.png)
Check DHCP Listening Port
This output indicates that the DHCPD (DHCP Server daemon) is running and listening on port 67. Port 67 in this output was actually converted to bootps due to a port number mapping for port 67 in the /etc/services file.
This is very common on most systems. At this point, the server should be ready for network connectivity and can be confirmed by connecting a machine to the network and having it request a DHCP address from the server.
### Step 2: Testing Client Connectivity ###
8. Most systems nowadays are using Network Manager to maintain network connections, and as such the device should be pre-configured to pull DHCP when the interface is active.
However on machines that aren't using Network Manager, it may be necessary to manually attempt to pull a DHCP address. The next few steps will show how to do this as well as how to see whether the server is handing out addresses.
The [ifconfig][2] utility can be used to check an interface's configuration. The machine used to test the DHCP server only has one network adapter and it is called eth0.
# ifconfig eth0
![Check Network Interface IP Address](http://www.tecmint.com/wp-content/uploads/2015/04/No-ip.png)
Check Network Interface IP Address
From this output, this machine currently doesn't have an IPv4 address. Great! Let's instruct this machine to reach out to the DHCP server and request an address. This machine has the DHCP client utility known as dhclient installed. The DHCP client utility may vary from system to system.
# dhclient eth0
![Request IP Address from DHCP](http://www.tecmint.com/wp-content/uploads/2015/04/IP.png)
Request IP Address from DHCP
Now the `'inet addr:'` field shows an IPv4 address that falls within the scope of what was configured for the 172.27.60.0 network. Also notice that the proper broadcast address was handed out as well as subnet mask for this network.
Things are looking promising, but let's check the server to see if it was actually the place where this machine received this new IP address. To accomplish this task, the server's system log file will be consulted. While the entire log file may contain hundreds of thousands of entries, only a few are necessary for confirming that the server is working properly. Rather than using a full text editor, this time a utility known as tail will be used to only show the last few lines of the log file.
# tail /var/log/syslog
![Check DHCP Logs](http://www.tecmint.com/wp-content/uploads/2015/04/DHCP-Log.png)
Check DHCP Logs
Voila! The server recorded handing out an address to this host (HRTDEBXENSRV). It is a safe assumption at this point that the server is working as intended and handing out the appropriate addresses for the networks for which it is an authority. At this point the DHCP server is up and running. Configure the other networks, troubleshoot, and secure as necessary.
Enjoy the newly functioning ISC-DHCP-Server and tune in later for more Debian tutorials. In the not too distant future there will be an article on Bind9 and DDNS that will tie into this article.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-and-configure-multihomed-isc-dhcp-server-on-debian-linux/
作者:[Rob Turner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/robturner/
[1]:http://www.tecmint.com/10-lsof-command-examples-in-linux/
[2]:http://www.tecmint.com/ifconfig-command-examples/


@ -1,429 +0,0 @@
Installation Guide for Puppet on Ubuntu 15.04
================================================================================
Hi everyone, today in this article we'll learn how to install puppet to manage your server infrastructure running Ubuntu 15.04. Puppet is an open source configuration management tool, developed and maintained by Puppet Labs, that allows us to automate the provisioning, configuration and management of a server infrastructure. Whether we're managing just a few servers or thousands of physical and virtual machines, puppet automates tasks that system administrators often do manually, which frees up time and mental space so sysadmins can work on improving other aspects of the overall setup. It ensures consistency, reliability and stability of the automated jobs it processes, and it facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available as two solutions for configuration management and data center automation: **puppet open source and puppet enterprise**. Puppet open source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. The puppet enterprise edition is a proven commercial solution for diverse enterprise IT environments, which gives us all the benefits of open source puppet plus puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes.
In this tutorial, we will cover how to install open source puppet in a master and agent setup running the Ubuntu 15.04 Linux distribution. Here, the puppet master is the server from which all configurations will be controlled and managed, and all our remaining servers will be puppet agent nodes, which are configured according to the configuration of the puppet master server. Here are some easy steps to install and configure puppet to manage our server infrastructure running Ubuntu 15.04.
### 1. Setting up Hosts ###
In this tutorial, we'll use two machines, one as the puppet master server and another as the puppet agent node, both running Ubuntu 15.04 "Vivid Vervet". Here is the server infrastructure that we're going to use for this tutorial.
puppet master server with IP 45.55.88.6 and hostname: puppetmaster
puppet node agent with IP 45.55.86.39 and hostname: puppetnode
Now we'll add entries for both machines to /etc/hosts, on both the agent node and the master server.
# nano /etc/hosts
45.55.88.6 puppetmaster.example.com puppetmaster
45.55.86.39 puppetnode.example.com puppetnode
Please note that the Puppet Master server must be reachable on port 8140. So, we'll need to open port 8140 on it.
### 2. Updating Time with NTP ###
Puppet nodes need to maintain accurate system time to avoid problems when the master issues agent certificates: certificates can appear to be expired if there is a time difference, so the clocks of both the master and the agent node must be synced with each other. To sync the time, we'll update it with NTP. To do so, here's the command that we need to run on both the master and the agent node.
# ntpdate pool.ntp.org
17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec
Now, we'll update our local repository index and install ntp as follows.
# apt-get update && sudo apt-get -y install ntp ; service ntp restart
### 3. Puppet Master Package Installation ###
There are many ways to install open source puppet. In this tutorial, we'll download and install a Debian binary package named **puppetlabs-release**, packaged by Puppet Labs, which will add the repository source for the **puppetmaster-passenger** package. The puppetmaster-passenger package includes the puppet master with the Apache web server. So, we'll now download the Puppet Labs package.
# cd /tmp/
# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
--2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7384 (7.2K) [application/x-debian-package]
Saving to: puppetlabs-release-trusty.deb
puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s
2015-06-17 00:19:26 (130 KB/s) - puppetlabs-release-trusty.deb saved [7384/7384]
After the download has been completed, we'll install the package.
# dpkg -i puppetlabs-release-trusty.deb
Selecting previously unselected package puppetlabs-release.
(Reading database ... 85899 files and directories currently installed.)
Preparing to unpack puppetlabs-release-trusty.deb ...
Unpacking puppetlabs-release (1.0-11) ...
Setting up puppetlabs-release (1.0-11) ...
Then, we'll update the local repository index using the apt package manager.
# apt-get update
Then, we'll install the puppetmaster-passenger package by running the below command.
# apt-get install puppetmaster-passenger
**Note**: While installing we may get an error **Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')** but we don't need to worry; we'll simply ignore it, as it just says that the templatedir setting is deprecated, and we'll disable that setting in the configuration later. :)
To check whether puppetmaster has been installed successfully on our master server or not, we'll check its version.
# puppet --version
3.8.1
We have successfully installed the puppet master package on our puppet master box. As we are using passenger with apache, the puppet master process is controlled by the apache server, which means it runs when apache is running.
Before continuing, we'll need to stop the Puppet master by stopping the apache2 service.
# systemctl stop apache2
### 4. Master version lock with Apt ###
As we have puppet version 3.8.1, we need to lock the puppet version so that updates don't mess up our configuration. We'll use apt's pinning feature for that. To do so, we'll create a new file **/etc/apt/preferences.d/00-puppet.pref** using our favorite text editor.
# nano /etc/apt/preferences.d/00-puppet.pref
Then, we'll add the following entries to the newly created file:
# /etc/apt/preferences.d/00-puppet.pref
Package: puppet puppet-common puppetmaster-passenger
Pin: version 3.8*
Pin-Priority: 501
Now, apt will not update puppet while running system updates.
### 5. Configuring Puppet Config ###
Puppet master acts as a certificate authority and must generate its own certificates, which are used to sign agent certificate requests. First of all, we'll need to remove any existing SSL certificates that were created during the installation of the package. The default location of puppet's SSL certificates is /var/lib/puppet/ssl, so we'll remove the entire ssl directory using the rm command.
# rm -rf /var/lib/puppet/ssl
Then, we'll configure the certificate. While creating the puppet master's certificate, we need to include every DNS name at which agent nodes can contact the master. So, we'll edit the master's puppet.conf using our favorite text editor.
# nano /etc/puppet/puppet.conf
The file will look as shown below.
[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
templatedir=$confdir/templates
[master]
# These are needed when the puppetmaster is run by passenger
# and can safely be removed if webrick is used.
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
Here, we'll comment out the templatedir line to disable that setting, as it has been deprecated. After that, we'll add the following lines to the [main] section.
server = puppetmaster
environment = production
runinterval = 1h
strict_variables = true
certname = puppetmaster
dns_alt_names = puppetmaster, puppetmaster.example.com
This configuration file has many more options which might be useful for setting up your own configuration. A full description of the file is available at Puppet Labs [Main Config File (puppet.conf)][1].
After editing the file, we'll save it and exit.
Now, we'll generate new CA certificates by running the following command.
# puppet master --verbose --no-daemonize
Info: Creating a new SSL key for ca
Info: Creating a new SSL certificate request for ca
Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78
...
Notice: puppetmaster has a waiting certificate request
Notice: Signed certificate request for puppetmaster
Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem'
Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem'
Notice: Starting Puppet master version 3.8.1
^CNotice: Caught INT; storing stop
Notice: Processing stop
Here the certificates are generated. Once we see **Notice: Starting Puppet master version 3.8.1**, the certificate setup is complete. Then we press CTRL-C to return to the shell.
If we want to look at the information of the certificate that was just created, we can list it by running the following command.
# puppet cert list --all
+ "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
### 6. Creating a Puppet Manifest ###
The default location of the main manifest is /etc/puppet/manifests/site.pp. The main manifest file contains the configuration definitions that are executed on the puppet agent nodes. Now, we'll create the manifest file by running the following command.
# nano /etc/puppet/manifests/site.pp
Then, we'll add the following lines of configuration in the file that we just opened.
# execute 'apt-get update'
exec { 'apt-update': # exec resource named 'apt-update'
command => '/usr/bin/apt-get update' # command this resource will run
}
# install apache2 package
package { 'apache2':
require => Exec['apt-update'], # require 'apt-update' before installing
ensure => installed,
}
# ensure apache2 service is running
service { 'apache2':
ensure => running,
}
The above configuration takes care of deploying the installation of the Apache web server across the agent nodes.
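Before relying on the manifest, it's a good idea to syntax-check it; Puppet ships a parser subcommand for this. If the command below prints nothing, the manifest parsed cleanly:

    # puppet parser validate /etc/puppet/manifests/site.pp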
### 7. Starting Master Service ###
We are now ready to start the puppet master. We can start it by running the apache2 service.
# systemctl start apache2
Here, our puppet master is running, but it isn't managing any agent nodes yet. Now, we'll add the puppet agent nodes to the master.
**Note**: If you get the error **Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.**, there is some problem with the Apache server, and we can see what exactly happened by running **apachectl start** as root or via sudo. While preparing this tutorial, we ran into a certificate misconfiguration in the **/etc/apache2/sites-enabled/puppetmaster.conf** file: we replaced **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem** with **SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** and commented out the **SSLCertificateKeyFile** line. After fixing that, we reran the above command to start the Apache server.
### 8. Puppet Agent Package Installation ###
Now that our puppet master is ready, it needs agents to manage, so we'll install the puppet agent on the nodes. We'll need to install the puppet agent on every node in our infrastructure that we want the puppet master to manage, and we'll need to make sure our agent nodes are resolvable in DNS (if you have no DNS server, see the hosts-file sketch below). Now, we'll install the latest puppet agent on our agent node, i.e. puppetnode.example.com.
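If there is no DNS server on the network, an entry in /etc/hosts on each machine works as well. The addresses below are placeholders for illustration only:

    # /etc/hosts
    10.0.0.10    puppetmaster    puppetmaster.example.com
    10.0.0.11    puppetnode      puppetnode.example.com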
We'll run the following command to download the Puppet Labs package in our puppet agent nodes.
# cd /tmp/
# wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
--2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7384 (7.2K) [application/x-debian-package]
Saving to: puppetlabs-release-trusty.deb
puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s
2015-06-17 00:54:42 (162 KB/s) - puppetlabs-release-trusty.deb saved [7384/7384]
Then, as we're running Ubuntu 15.04, we'll use the Debian package manager to install it.
# dpkg -i puppetlabs-release-trusty.deb
Now, we'll update the repository index using apt-get.
# apt-get update
Finally, we'll install the puppet agent directly from the remote repository.
# apt-get install puppet
The puppet agent is disabled by default, so we'll need to enable it. To do so, we'll edit the /etc/default/puppet file using a text editor.
# nano /etc/default/puppet
Then, we'll change the value of **START** to "yes" as shown below.
START=yes
Then, we'll need to save and exit the file.
### 9. Locking the Agent Version with Apt ###
As we have Puppet version 3.8.1, we'll lock the version on the agent as well, so that an upgrade can't silently break the configuration. We'll again use apt's pinning feature. To do so, we'll create the file /etc/apt/preferences.d/00-puppet.pref using our favorite text editor.
# nano /etc/apt/preferences.d/00-puppet.pref
Then, we'll add the following entries to the newly created file:
# /etc/apt/preferences.d/00-puppet.pref
Package: puppet puppet-common
Pin: version 3.8*
Pin-Priority: 501
Now apt will hold Puppet at the 3.8 series when running updates on this system, too.
### 10. Configuring Puppet Node Agent ###
Next, we must make a few configuration changes before running the agent. To do so, we'll edit the agent's puppet.conf.
# nano /etc/puppet/puppet.conf
It will look exactly like the Puppet master's initial configuration file.
This time we'll also comment out the **templatedir** line. Then we'll delete the [master] section and all of the lines below it.
If the puppet master is reachable at the short hostname "puppetmaster", the agent can use that name to connect. Otherwise we'll need to use the master's fully qualified domain name, i.e. puppetmaster.example.com.
[agent]
server = puppetmaster.example.com
certname = puppetnode.example.com
After adding this, the file will look like this.
[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
#templatedir=$confdir/templates
[agent]
server = puppetmaster.example.com
certname = puppetnode.example.com
Once done with that, we'll save and exit.
Next, we'll start the puppet agent on our Ubuntu 15.04 nodes by running the following command.
# systemctl start puppet
If everything is configured properly, the above command displays no output. When an agent runs for the first time, it generates an SSL certificate and sends a signing request to the puppet master; once the master signs the agent's certificate, the master and the agent node can communicate.
**Note**: If you are adding your first node, it is recommended that you sign the certificate on the puppet master before adding the other agents. Once you have verified that everything works properly, you can go back and add the remaining agent nodes.
### 11. Signing certificate Requests on Master ###
While puppet agent runs for the first time, it generates an SSL certificate and sends a request for signing to the master server. Before the master will be able to communicate and control the agent node, it must sign that specific agent node's certificate.
To get the list of the certificate requests, we'll run the following command in the puppet master server.
# puppet cert list
"puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2
As we have just set up our first agent node, we see one request. It looks like the output above, with the agent node's domain name as the hostname.
Note that there is no + in front of it, which indicates that it has not been signed yet.
Now, we'll sign the certificate request. To do so, we simply run **puppet cert sign** with the **hostname**, as shown below.
# puppet cert sign puppetnode.example.com
Notice: Signed certificate request for puppetnode.example.com
Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem'
The Puppet master can now communicate and control the node that the signed certificate belongs to.
If we want to sign all of the current requests, we can use the --all option as shown below.
# puppet cert sign --all
### Removing a Puppet Certificate ###
If we want to remove a host from Puppet, or rebuild a host and then add it back, we'll need to revoke the host's certificate on the puppet master. To do this, we use the clean action as follows.
# puppet cert clean hostname
Notice: Revoked certificate with serial 5
Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem'
Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem'
If we want to view all of the requests, signed and unsigned, we run the following command:
# puppet cert list --all
+ "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")
### 12. Deploying a Puppet Manifest ###
After we have completed the puppet manifest, we'll deploy it to the agent nodes. To apply and load the main manifest, we can simply run the following command on an agent node.
# puppet agent --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppetnode.example.com
Info: Applying configuration version '1434563858'
Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully
Notice: Finished catalog run in 10.53 seconds
This immediately shows us how the main manifest affects a single server.
If we want to run a puppet manifest that is not related to the main manifest, we can simply use puppet apply followed by the manifest file path. It applies the manifest only to the node that we run the apply from.
# puppet apply /etc/puppet/manifests/test.pp
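If we only want to preview what would change without modifying anything, we can add Puppet's --noop (dry-run) flag; a quick sketch using the same hypothetical test manifest:

    # puppet apply --noop /etc/puppet/manifests/test.pp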
### 13. Configuring Manifest for a Specific Node ###
If we want to deploy a manifest only to a specific node, we'll need to configure the manifest as follows.
We'll need to edit the manifest on the master server using a text editor.
# nano /etc/puppet/manifests/site.pp
Now, we'll add the following lines there.
node 'puppetnode', 'puppetnode1' {
# execute 'apt-get update'
exec { 'apt-update': # exec resource named 'apt-update'
command => '/usr/bin/apt-get update' # command this resource will run
}
# install apache2 package
package { 'apache2':
require => Exec['apt-update'], # require 'apt-update' before installing
ensure => installed,
}
# ensure apache2 service is running
service { 'apache2':
ensure => running,
}
}
Here, the above configuration installs and deploys the Apache web server only on the two specified nodes, whose short names are puppetnode and puppetnode1. We can add more nodes that should receive this manifest; a regex-based sketch follows.
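Node definitions also accept regular expressions, which saves listing every hostname by hand. A hypothetical sketch matching any node whose short name starts with puppetnode:

    node /^puppetnode\d*$/ {
        # same resources as in the block above
    }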
### 14. Configuring Manifest with a Module ###
Modules are useful for grouping tasks together. Many modules are available from the Puppet community, and anyone can contribute more.
On the puppet master, we'll install the **puppetlabs-apache** module using the puppet module command.
# puppet module install puppetlabs-apache
**Warning**: Do not use this module on an existing Apache setup, or it will purge any Apache configuration that is not managed by Puppet.
Now we'll edit the main manifest, i.e. **site.pp**, using a text editor.
# nano /etc/puppet/manifests/site.pp
Now add the following lines to install Apache on puppetnode.
node 'puppetnode' {
class { 'apache': } # use apache module
apache::vhost { 'example.com': # define vhost resource
port => '80',
docroot => '/var/www/html'
}
}
Then we'll save and exit. Finally, we'll rerun **puppet agent --test** on the agents (or wait for the next scheduled run) to deploy the configuration across our infrastructure.
### Conclusion ###
Finally, we have successfully installed Puppet to manage our server infrastructure running the Ubuntu 15.04 "Vivid Vervet" Linux operating system. We learned how Puppet works, how to write a manifest configuration, how to communicate with nodes, and how to deploy the manifest on agent nodes with secure SSL certificates. Controlling, managing and configuring repeated tasks across any number of nodes is very easy with the open source Puppet configuration management tool. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
View File
@ -1,3 +1,4 @@
translated by iov-wang
How to Install OsTicket Ticketing System in Fedora 22 / Centos 7
================================================================================
In this article, we'll learn how to set up a help desk ticketing system with osTicket on a machine or server running Fedora 22 or CentOS 7. osTicket is a popular free and open source customer support ticketing system developed and maintained by [Enhancesoft][1] and its contributors. osTicket is an excellent solution for a help and support ticketing system, enabling better communication and support assistance with clients and customers. It can easily integrate inquiries created via email, phone and web-based forms into a beautiful multi-user web interface. osTicket makes it easy to manage, organize and log all our support requests and responses in one single place. It is a simple, lightweight, reliable, open source, web-based help desk ticketing system that is easy to set up and use.
@ -176,4 +177,4 @@ via: http://linoxide.com/linux-how-to/install-osticket-fedora-22-centos-7/
[a]:http://linoxide.com/author/arunp/
[1]:http://www.enhancesoft.com/
[2]:http://osticket.com/download
[3]:https://github.com/osTicket/osTicket-1.8/releases
View File
@ -1,3 +1,4 @@
translated by ivo-wang
How to Configure OpenNMS on CentOS 7.x
================================================================================
Systems management and monitoring services are very important: they provide information that allows us to make decisions about the systems we manage. To make sure the network is running at its best and to minimize network downtime, we need to improve application performance. So, in this article we will walk you through the step-by-step procedure to set up OpenNMS in your IT infrastructure. OpenNMS is a free, open source, enterprise-grade network monitoring and management platform that provides information that allows us to make decisions about future network and capacity planning.
@ -216,4 +217,4 @@ via: http://linoxide.com/monitoring-2/install-configure-opennms-centos-7-x/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/kashifs/
View File
@ -1,3 +1,4 @@
translated by ivo-wang
How to install Suricata intrusion detection system on Linux
================================================================================
With incessant security threats, an intrusion detection system (IDS) has become one of the most critical requirements in today's data center environments. However, as more and more servers upgrade their NICs to 10GB/40GB Ethernet, it is increasingly difficult to implement compute-intensive intrusion detection on commodity hardware at line rates. One approach to scaling IDS performance is **multi-threaded IDS**, where the CPU-intensive deep packet inspection workload is parallelized into multiple concurrent tasks. Such parallelized inspection can exploit multi-core hardware to scale up IDS throughput easily. Two well-known open-source efforts in this area are [Suricata][1] and [Bro][2].
@ -194,4 +195,4 @@ via: http://xmodulo.com/install-suricata-intrusion-detection-system-linux.html
[6]:https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Runmodes
[7]:http://ask.xmodulo.com/view-threads-process-linux.html
[8]:http://xmodulo.com/how-to-compile-and-install-snort-from-source-code-on-ubuntu.html
[9]:https://redmine.openinfosecfoundation.org/projects/suricata/wiki
View File
@ -1,3 +1,4 @@
translating wi-cuckoo
A Repository with 44 Years of Unix Evolution
================================================================================
### Abstract ###
@ -199,4 +200,4 @@ via: http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.htm
[50]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAI
[51]:http://ftp.netbsd.org/pub/NetBSD/NetBSD-current/src/share/misc/bsd-family-tree
[52]:http://www.dmst.aueb.gr/dds/pubs/conf/2015-MSR-Unix-History/html/Spi15c.html#tthFrefAAJ
[53]:https://github.com/dspinellis/unix-history-make
View File
@ -1,330 +0,0 @@
translating by ezio
Going Beyond Hello World Containers is Hard Stuff
================================================================================
In [my previous post][1], I provided the basic concepts behind Linux container technology. I wrote as much for you as I did for me. Containers are new to me. And I figured having the opportunity to blog about the subject would provide the motivation to really learn the stuff.
I intend to learn by doing. First get the concepts down, then get hands-on and write about it as I go. I assumed there must be a lot of Hello World type stuff out there to get me up to speed with the basics. Then, I could take things a bit further and build a microservice container or something.
I mean, it cant be that hard, right?
Wrong.
Maybe its easy for someone who spends significant amount of their life immersed in operations work. But for me, getting started with this stuff turned out to be hard to the point of posting my frustrations to Facebook...
But, there is good news: I got it to work! And its always nice being able to make lemonade from lemons. So I am going to share the story of how I made my first microservice container with you. Maybe my pain will save you some time.
If you've ever found yourself in a situation like this, fear not: folks like me are here to deal with the problems so you don't have to!
Lets begin.
### A Thumbnail Micro Service ###
The microservice I designed was simple in concept. Post a digital image in JPG or PNG format to an HTTP endpoint and get back a 100px-wide thumbnail.
Heres what that looks like:
![container-diagram-0](https://deis.com/images/blog-images/containers-hard-0.png)
I decided to use NodeJS for my code and a version of [ImageMagick][2] to do the thumbnail transformation.
I did my first version of the service, using the logic shown here:
![container-diagram-1](https://deis.com/images/blog-images/containers-hard-1.png)
I downloaded the [Docker Toolbox][3], which installs the Docker Quickstart Terminal. The Docker Quickstart Terminal makes creating containers easier: it fires up a Linux virtual machine that has Docker installed, allowing you to run Docker commands from within a terminal.
In my case, I am running on OS X. But theres a Windows version too.
I am going to use Docker Quickstart Terminal to build a container image for my microservice and run a container from that image.
The Docker Quickstart Terminal runs in your regular terminal, like so:
![container-diagram-2](https://deis.com/images/blog-images/containers-hard-2.png)
### The First Little Problem and the First Big Problem ###
So I fiddled around with NodeJS and ImageMagick and I got the service to work on my local machine.
Then, I created the Dockerfile, which is the configuration script Docker uses to build your container. (Ill go more into builds and Dockerfile more later on.)
Heres the build command I ran on the Docker Quickstart Terminal:
$ docker build -t thumbnailer:0.1
I got this response:
docker: "build" requires 1 argument.
Huh.
After 15 minutes I realized: I forgot to put a period . as the last argument!
It needs to be:
$ docker build -t thumbnailer:0.1 .
But this wasnt the end of my problems.
I got the image to build and then I typed [the `run` command][4] in the Docker Quickstart Terminal to fire up a container based on the image, called `thumbnailer:0.1`:
$ docker run -d -p 3001:3000 thumbnailer:0.1
The `-p 3001:3000` argument makes the NodeJS microservice running on port 3000 within the container bind to port 3001 on the host virtual machine.
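(As an aside, a quick way to sanity-check a mapping like this from the host, without a browser, is curl. The IP address is whatever docker-machine reports for your VM, which is determined just below, so treat this one as a placeholder:)

    $ curl -i http://192.168.99.100:3001/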
Looks good so far, right?
Wrong. Things are about to get pretty bad.
I determined the IP address of the virtual machine created by Docker Quickstart Terminal by running the `docker-machine` command:
$ docker-machine ip default
This returns the IP address of the default virtual machine, the one that is run under the Docker Quickstart Terminal. For me, this IP address was 192.168.99.100.
I browsed to http://192.168.99.100:3001/ and got the file upload page I built:
![container-diagram-3](https://deis.com/images/blog-images/containers-hard-3.png)
I selected a file and clicked the Upload Image button.
But it didnt work.
The terminal is telling me it cant find the `/upload` directory my microservice requires.
Now, keep in mind, I had been at this for about a day—between the fiddling and research. Im feeling a little frustrated by this point.
Then, a brain spark flew. Somewhere along the line I remembered reading that a microservice should not do any data persistence on its own! Saving data should be the job of another service.
So what if the container cant find the `/upload` directory? The real issue is: my microservice has a fundamentally flawed design.
Lets take another look:
![container-diagram-4](https://deis.com/images/blog-images/containers-hard-4.png)
Why am I saving a file to disk? Microservices are supposed to be fast. Why not do all my work in memory? Using memory buffers will make the "I cant find no stickin directory" error go away and will increase the performance of my app dramatically.
So thats what I did. And heres what the plan was:
![container-diagram-5](https://deis.com/images/blog-images/containers-hard-5.png)
Heres the NodeJS I wrote to do all the in-memory work for creating a thumbnail:
// Bind to the packages
var express = require('express');
var router = express.Router();
var path = require('path'); // used for file path
var im = require("imagemagick");
// Simple get that allows you test that you can access the thumbnail process
router.get('/', function (req, res, next) {
res.status(200).send('Thumbnailer processor is up and running');
});
// This is the POST handler. It will take the uploaded file and make a thumbnail from the
// submitted byte array. I know, it's not rocket science, but it serves a purpose
router.post('/', function (req, res, next) {
req.pipe(req.busboy);
req.busboy.on('file', function (fieldname, file, filename) {
var ext = path.extname(filename)
// Make sure that only png and jpg is allowed
if(ext.toLowerCase() != '.jpg' && ext.toLowerCase() != '.png'){
res.status(406).send("Service accepts only jpg or png files");
return; // stop here so we don't keep processing a rejected upload
}
var bytes = [];
// put the bytes from the request into a byte array
file.on('data', function(data) {
for (var i = 0; i < data.length; ++i) {
bytes.push(data[i]);
}
console.log('File [' + fieldname + '] got bytes ' + bytes.length + ' bytes');
});
// Once the request is finished pushing the file bytes into the array, put the bytes in
// a buffer and process that buffer with the imagemagick resize function
file.on('end', function() {
var buffer = new Buffer(bytes,'binary');
console.log('Bytes got ' + bytes.length + ' bytes');
//resize
im.resize({
srcData: buffer,
height: 100
}, function(err, stdout, stderr){
if (err){
throw err;
}
// get the extension without the period
var typ = path.extname(filename).replace('.','');
res.setHeader("content-type", "image/" + typ);
res.status(200);
// send the image back as a response
res.send(new Buffer(stdout,'binary'));
});
});
});
});
module.exports = router;
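For reference, here is one way to exercise the POST handler from a shell once the app is listening locally on port 3000. This assumes the router is mounted at the root path; the form field name is arbitrary (the busboy handler above accepts any field), and cat.jpg is a placeholder file:

    $ curl -X POST -F "image=@cat.jpg" http://localhost:3000/ -o thumbnail.jpg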
Okay, so we're back on track and everything is hunky dory on my local machine. I go to sleep.
But before I do, I test the microservice code running as a standard Node app on localhost...
![Containers Hard](https://deis.com/images/blog-images/containers-hard-6.png)
It works fine. Now all I needed to do was get it working in a container.
The next day I woke up, grabbed some coffee, and built an image—not forgetting to put in the period!
$ docker build -t thumbnailer:01 .
I am building from the root directory of my thumbnailer project. The build command uses the Dockerfile that is in the root directory. Thats how it goes: put the Dockerfile in the same place you want to run build and the Dockerfile will be used by default.
Here is the text of the Dockerfile I was using:
FROM ubuntu:latest
MAINTAINER bob@CogArtTech.com
RUN apt-get update
RUN apt-get install -y nodejs nodejs-legacy npm
RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev
RUN apt-get clean
COPY ./package.json src/
RUN cd src && npm install
COPY . /src
WORKDIR src/
CMD npm start
What could go wrong?
### The Second Big Problem ###
I ran the `build` command and I got this error:
Do you want to continue? [Y/n] Abort.
The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1
I figured something was wrong with the microservice. I went back to my machine, fired up the service on localhost, and uploaded a file.
Then I got this error from NodeJS:
Error: spawn convert ENOENT
What's going on? This worked the other night!
I searched and searched, for every permutation of the error I could think of. After about four hours of replacing different node modules here and there, I figured: why not restart the machine?
I did. And guess what? The error went away!
Go figure.
### Putting the Genie Back in the Bottle ###
So, back to the original quest: I needed to get this build working.
I removed all of the containers running on the VM, using [the `rm` command][5]:
$ docker rm -f $(docker ps -a -q)
The `-f` flag here force-removes running containers.
Then I removed all of my Docker images, using [the `rmi` command][6]:
$ docker rmi -f $(docker images | tail -n +2 | awk '{print $3}')
I went through the whole process of rebuilding the image, running the container, and trying to get the microservice working. Then, after about an hour of self-doubt and accompanying frustration, I thought to myself: maybe this isn't a problem with the microservice.
So, I looked at the error again:
Do you want to continue? [Y/n] Abort.
The command '/bin/sh -c apt-get install imagemagick libmagickcore-dev libmagickwand-dev' returned a non-zero code: 1
Then it hit me: the build is looking for a Y input from the keyboard! But, this is a non-interactive Dockerfile script. There is no keyboard.
I went back to the Dockerfile, and there it was:
RUN apt-get update
RUN apt-get install -y nodejs nodejs-legacy npm
RUN apt-get install imagemagick libmagickcore-dev libmagickwand-dev
RUN apt-get clean
The second `apt-get install` command is missing the `-y` flag, which automatically answers "yes" to the prompts that apt would otherwise wait for.
I added the missing `-y` to the command:
RUN apt-get update
RUN apt-get install -y nodejs nodejs-legacy npm
RUN apt-get install -y imagemagick libmagickcore-dev libmagickwand-dev
RUN apt-get clean
And guess what: after two days of trial and tribulation, it worked! Two whole days!
So, I did my build:
$ docker build -t thumbnailer:0.1 .
I fired up the container:
$ docker run -d -p 3001:3000 thumbnailer:0.1
Got the IP address of the Virtual Machine:
$ docker-machine ip default
Went to my browser and entered http://192.168.99.100:3001/ into the address bar.
The upload page loaded.
I selected an image, and this is what I got:
![container-diagram-7](https://deis.com/images/blog-images/containers-hard-7.png)
It worked!
Inside a container, for the first time!
### So What Does It All Mean? ###
A long time ago, I accepted the fact that when it comes to tech, sometimes even the easy stuff is hard. Along with that, I abandoned the desire to be the smartest guy in the room. Still, the last few days spent trying to get basic competency with containers have been, at times, a journey of self-doubt.
But, you wanna know something? Its 2 AM on an early morning as I write this, and every nerve wracking hour has been worth it. Why? Because you gotta put in the time. This stuff is hard and it does not come easy for anyone. And dont forget: youre learning tech and tech runs the world!
P.S. For a two-part video on Hello World containers, check out [Raziel Tabib's][7] excellent work below...
YouTube video
<iframe width="560" height="315" src="https://www.youtube.com/embed/PJ95WY2DqXo" frameborder="0" allowfullscreen></iframe>
And don't miss part two...
YouTube video
<iframe width="560" height="315" src="https://www.youtube.com/embed/lss2rZ3Ppuk" frameborder="0" allowfullscreen></iframe>
--------------------------------------------------------------------------------
via: https://deis.com/blog/2015/beyond-hello-world-containers-hard-stuff
作者:[Bob Reselman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://deis.com/blog
[1]:http://deis.com/blog/2015/developer-journey-linux-containers
[2]:https://github.com/rsms/node-imagemagick
[3]:https://www.docker.com/toolbox
[4]:https://docs.docker.com/reference/commandline/run/
[5]:https://docs.docker.com/reference/commandline/rm/
[6]:https://docs.docker.com/reference/commandline/rmi/
[7]:http://twitter.com/RazielTabib
View File
@ -1,257 +0,0 @@
Data Structures in the Linux Kernel
================================================================================
Doubly linked list
--------------------------------------------------------------------------------
The Linux kernel provides its own implementation of the doubly linked list, which you can find in [include/linux/list.h](https://github.com/torvalds/linux/blob/master/include/linux/list.h). We will start `Data Structures in the Linux kernel` with the doubly linked list data structure. Why? Because it is very popular in the kernel; just try to [search](http://lxr.free-electrons.com/ident?i=list_head) for it.
First of all, let's look at the main structure in [include/linux/types.h](https://github.com/torvalds/linux/blob/master/include/linux/types.h):
```C
struct list_head {
struct list_head *next, *prev;
};
```
You may note that it is different from many doubly linked list implementations you have seen. For example, the doubly linked list structure from the [glib](http://www.gnu.org/software/libc/) library looks like:
```C
struct GList {
gpointer data;
GList *next;
GList *prev;
};
```
Usually a linked list structure contains a pointer to the item. The linked list implementation in the Linux kernel does not. So the main question is: `where does the list store the data?` The kernel actually implements an `intrusive list`. An intrusive linked list does not contain data in its nodes; a node just contains pointers to the next and previous nodes, and the list nodes are embedded in the data items that are added to the list. This makes the data structure generic, so it does not care about the entry's data type.
For example:
```C
struct nmi_desc {
spinlock_t lock;
struct list_head head;
};
```
Let's look at some examples to understand how `list_head` is used in the kernel. As I already wrote, there are many, really many, different places where lists are used in the kernel. Let's look for an example in the miscellaneous character drivers. The misc character driver API from [drivers/char/misc.c](https://github.com/torvalds/linux/blob/master/drivers/char/misc.c) is used for writing small drivers that handle simple hardware or virtual devices. These drivers share the same major number:
```C
#define MISC_MAJOR 10
```
but each has its own minor number. For example, you can see them with:
```
ls -l /dev | grep 10
crw------- 1 root root 10, 235 Mar 21 12:01 autofs
drwxr-xr-x 10 root root 200 Mar 21 12:01 cpu
crw------- 1 root root 10, 62 Mar 21 12:01 cpu_dma_latency
crw------- 1 root root 10, 203 Mar 21 12:01 cuse
drwxr-xr-x 2 root root 100 Mar 21 12:01 dri
crw-rw-rw- 1 root root 10, 229 Mar 21 12:01 fuse
crw------- 1 root root 10, 228 Mar 21 12:01 hpet
crw------- 1 root root 10, 183 Mar 21 12:01 hwrng
crw-rw----+ 1 root kvm 10, 232 Mar 21 12:01 kvm
crw-rw---- 1 root disk 10, 237 Mar 21 12:01 loop-control
crw------- 1 root root 10, 227 Mar 21 12:01 mcelog
crw------- 1 root root 10, 59 Mar 21 12:01 memory_bandwidth
crw------- 1 root root 10, 61 Mar 21 12:01 network_latency
crw------- 1 root root 10, 60 Mar 21 12:01 network_throughput
crw-r----- 1 root kmem 10, 144 Mar 21 12:01 nvram
brw-rw---- 1 root disk 1, 10 Mar 21 12:01 ram10
crw--w---- 1 root tty 4, 10 Mar 21 12:01 tty10
crw-rw---- 1 root dialout 4, 74 Mar 21 12:01 ttyS10
crw------- 1 root root 10, 63 Mar 21 12:01 vga_arbiter
crw------- 1 root root 10, 137 Mar 21 12:01 vhci
```
Now let's have a closer look at how lists are used in the misc device drivers. First of all, let's look at the `miscdevice` structure:
```C
struct miscdevice
{
int minor;
const char *name;
const struct file_operations *fops;
struct list_head list;
struct device *parent;
struct device *this_device;
const char *nodename;
mode_t mode;
};
```
We can see the fourth field of the `miscdevice` structure, `list`, which is the list of registered devices. At the beginning of the source code file we can see the definition of `misc_list`:
```C
static LIST_HEAD(misc_list);
```
which expands to the definition of a variable with `list_head` type:
```C
#define LIST_HEAD(name) \
struct list_head name = LIST_HEAD_INIT(name)
```
and initializes it with the `LIST_HEAD_INIT` macro, which sets the previous and next entries to the address of the variable `name`:
```C
#define LIST_HEAD_INIT(name) { &(name), &(name) }
```
Now let's look at the `misc_register` function, which registers a miscellaneous device. At the start it initializes `miscdevice->list` with the `INIT_LIST_HEAD` function:
```C
INIT_LIST_HEAD(&misc->list);
```
which does the same as the `LIST_HEAD_INIT` macro:
```C
static inline void INIT_LIST_HEAD(struct list_head *list)
{
list->next = list;
list->prev = list;
}
```
In the next step after a device is created by the `device_create` function, we add it to the miscellaneous devices list with:
```
list_add(&misc->list, &misc_list);
```
Kernel `list.h` provides this API for the addition of a new entry to the list. Let's look at its implementation:
```C
static inline void list_add(struct list_head *new, struct list_head *head)
{
__list_add(new, head, head->next);
}
```
It just calls the internal function `__list_add` with three parameters:
* new - new entry.
* head - list head after which the new item will be inserted.
* head->next - next item after list head.
The implementation of `__list_add` is pretty simple:
```C
static inline void __list_add(struct list_head *new,
struct list_head *prev,
struct list_head *next)
{
next->prev = new;
new->next = next;
new->prev = prev;
prev->next = new;
}
```
Here we add the new item between `prev` and `next`. So the `misc_list` list, which we defined at the start with the `LIST_HEAD` macro, will contain previous and next pointers to the `miscdevice->list`.
There is still one question: how do we get back to the structure that contains a given list node? There is a special macro for that:
```C
#define list_entry(ptr, type, member) \
container_of(ptr, type, member)
```
which gets three parameters:
* ptr - the structure list_head pointer;
* type - structure type;
* member - the name of the list_head within the structure;
For example:
```C
const struct miscdevice *p = list_entry(v, struct miscdevice, list)
```
After this we can access any `miscdevice` field with `p->minor`, `p->name`, etc. Let's look at the `list_entry` implementation:
```C
#define list_entry(ptr, type, member) \
container_of(ptr, type, member)
```
As we can see it just calls `container_of` macro with the same arguments. At first sight, the `container_of` looks strange:
```C
#define container_of(ptr, type, member) ({ \
const typeof( ((type *)0)->member ) *__mptr = (ptr); \
(type *)( (char *)__mptr - offsetof(type,member) );})
```
First of all you can note that it consists of two expressions in curly brackets. The compiler will evaluate the whole block in the curly braces and use the value of the last expression.
For example:
```
#include <stdio.h>
int main() {
int i = 0;
printf("i = %d\n", ({++i; ++i;}));
return 0;
}
```
will print `2`.
The next point is `typeof`, and it's simple: as you can understand from its name, it just returns the type of the given variable. When I first saw the implementation of the `container_of` macro, the strangest thing I found was the zero in the `((type *)0)` expression. Actually, this pointer trick computes the address of the given field relative to the structure's base address, and since the base address here is `0`, the result is just the field's offset within the structure. Let's look at a simple example:
```C
#include <stdio.h>
struct s {
int field1;
char field2;
char field3;
};
int main() {
printf("%p\n", &((struct s*)0)->field3);
return 0;
}
```
will print `0x5`.
The next `offsetof` macro calculates offset from the beginning of the structure to the given structure's field. Its implementation is very similar to the previous code:
```C
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
```
Let's summarize the `container_of` macro. Given the address of a structure field with `list_head` type, the name of that field, and the type of the containing structure, the `container_of` macro returns the address of the containing structure. On the first line the macro declares the `__mptr` pointer, which points to the field that `ptr` points to, and assigns `ptr` to it. Now `ptr` and `__mptr` point to the same address. Technically we don't need this line, but it's useful for type checking: it ensures that the given structure (the `type` parameter) actually has a member called `member`. On the second line it calculates the offset of the field within the structure with the `offsetof` macro and subtracts it from the field's address. That's all.
Of course `list_add` and `list_entry` are not the only functions which `<linux/list.h>` provides. The implementation of the doubly linked list also provides the following API:
* list_add
* list_add_tail
* list_del
* list_replace
* list_move
* list_is_last
* list_empty
* list_cut_position
* list_splice
* list_for_each
* list_for_each_entry
and many more.
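To tie all of this together, here is a small userspace sketch of my own (not from the kernel sources) that re-implements just enough of `list_head`, `list_add` and `container_of` to walk an intrusive list:

```C
#include <stdio.h>
#include <stddef.h>

/* minimal userspace re-implementation of the kernel's intrusive list */
struct list_head {
	struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }

/* insert new right after head, exactly like the kernel's list_add */
static inline void list_add(struct list_head *new, struct list_head *head)
{
	struct list_head *next = head->next;

	next->prev = new;
	new->next = next;
	new->prev = head;
	head->next = new;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* the data embeds the list node, not the other way around */
struct item {
	int value;
	struct list_head list;
};

int main(void)
{
	struct list_head head = LIST_HEAD_INIT(head);
	struct item a = { .value = 1 }, b = { .value = 2 };
	struct list_head *p;

	list_add(&a.list, &head);
	list_add(&b.list, &head); /* b ends up right after head */

	/* walk the nodes and recover each containing struct item */
	for (p = head.next; p != &head; p = p->next) {
		struct item *it = container_of(p, struct item, list);
		printf("value = %d\n", it->value); /* prints 2, then 1 */
	}
	return 0;
}
```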
via: https://github.com/0xAX/linux-insides/edit/master/DataStructures/dlist.md
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
View File
@ -1,3 +1,5 @@
Translating by DongShuaike
Data Structures in the Linux Kernel
================================================================================
View File
@ -1,113 +0,0 @@
How To Install Microsoft Visual Studio Code on Linux
================================================================================
Visual Studio Code (VSCode) is a cross-platform, Chromium-based code editor that is being open sourced today by Microsoft. How do I install Microsoft Visual Studio Code on a Debian, Ubuntu or Fedora Linux desktop?
Visual Studio Code supports debugging Linux apps, and the code editor is now open sourced by Microsoft. It is a preview (beta) version, but you can test it and use it on your own Linux-based desktop.
### Why use Visual Studio Code? ###
From the project website:
> Visual Studio Code provides developers with a new choice of developer tool that combines the simplicity and streamlined experience of a code editor with the best of what developers need for their core code-edit-debug cycle. Visual Studio Code is the first code editor, and first cross-platform development tool - supporting OS X, Linux, and Windows - in the Visual Studio family. If you use Unity, ASP.NET 5, NODE.JS or related tool, give it a try.
### Requirements for Visual Studio Code on Linux ###
1. Ubuntu Desktop version 14.04
1. GLIBCXX version 3.4.15 or later
1. GLIBC version 2.15 or later
The following installation instructions are tested on:
1. Fedora Linux 22 and 23
1. Debian Linux 8
1. Ubuntu Linux 14.04 LTS
### Download Visual Studio Code ###
Visit [this page][1] to grab the latest version and save it to ~/Downloads/ folder on Linux desktop:
![Fig.01: Download Visual Studio Code For Linux](http://s0.cyberciti.org/uploads/faq/2015/11/download-visual-studio-code.jpg)
Fig.01: Download Visual Studio Code For Linux
Make a new folder (say $HOME/VSCode) and extract VSCode-linux64.zip inside that folder, or into the /usr/local/ folder.
### Alternate install method ###
You can use the wget command to download VScode as follows:
$ wget 'https://az764295.vo.msecnd.net/public/0.10.1-release/VSCode-linux64.zip'
Sample outputs:
--2015-11-18 13:55:23-- https://az764295.vo.msecnd.net/public/0.10.1-release/VSCode-linux64.zip
Resolving az764295.vo.msecnd.net (az764295.vo.msecnd.net)... 93.184.215.200, 2606:2800:11f:179a:1972:2405:35b:459
Connecting to az764295.vo.msecnd.net (az764295.vo.msecnd.net)|93.184.215.200|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 64638315 (62M) [application/octet-stream]
Saving to: 'VSCode-linux64.zip'
100%[======================================>] 64,638,315 84.9MB/s in 0.7s
2015-11-18 13:55:23 (84.9 MB/s) - 'VSCode-linux64.zip' saved [64638315/64638315]
### Install VScode using the command line ###
Cd to the ~/Downloads/ location, enter:
$ cd ~/Downloads/
$ ls -l
Sample outputs:
![Fig.02: VSCode downloaded to my ~/Downloads/ folder](http://s0.cyberciti.org/uploads/faq/2015/11/list-vscode-linux.jpg)
Fig.02: VSCode downloaded to my ~/Downloads/ folder
Unzip VSCode-linux64.zip in /usr/local/ directory, enter:
$ sudo unzip VSCode-linux64.zip -d /usr/local/
Cd into /usr/local/ and create a soft link to the Code executable using the ln command as follows. This is useful for running VSCode from the terminal application:
$ su -
# cd /usr/local/
# ls -l
# cd bin/
# ln -s ../VSCode-linux-x64/Code code
# exit
Sample session:
![Fig.03 Create the sym-link with the absolute path to the Code executable](http://s0.cyberciti.org/uploads/faq/2015/11/verify-and-ln-vscode.jpg)
Fig.03 Create the sym-link with the absolute path to the Code executable
### How do I use VSCode on Linux? ###
Open the Terminal app and type the following command:
$ /usr/local/bin/code
Sample outputs:
![Fig.04: VSCode in action on Linux](http://s0.cyberciti.org/uploads/faq/2015/11/vscode-welcome.jpg)
Fig.04: VSCode in action on Linux
And there you have it: VSCode installed and working correctly on the latest versions of Debian, Ubuntu and Fedora Linux. I suggest that you read the [getting started pages from Microsoft][2] to understand the core concepts that will make you more productive at writing and navigating your code.
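Optionally, to launch the editor from your desktop menu instead of a terminal, you can drop in a small launcher file. This is just a convenience sketch of mine, not part of the official instructions (the Icon line is omitted since the icon path can vary):

    $ cat > ~/.local/share/applications/vscode.desktop <<'EOF'
    [Desktop Entry]
    Name=Visual Studio Code
    Exec=/usr/local/bin/code
    Type=Application
    Terminal=false
    EOF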
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/faq/debian-ubuntu-fedora-linux-installing-visual-studio-code/
作者Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://code.visualstudio.com/Download
[2]:https://code.visualstudio.com/docs
View File
@ -1,97 +0,0 @@
How to access Dropbox from the command line in Linux
================================================================================
Cloud storage is everywhere in today's multi-device environment, where people want to access content across multiple devices wherever they go. Dropbox is the most widely used cloud storage service thanks to its elegant UI and flawless multi-platform compatibility. The popularity of Dropbox has led to a flurry of official or unofficial Dropbox clients that are available across different operating system platforms.
Linux has its own share of Dropbox clients: CLI clients as well as GUI-based clients. [Dropbox Uploader][1] is an easy-to-use Dropbox CLI client written in the BASH scripting language. In this tutorial, I describe **how to access Dropbox from the command line in Linux by using Dropbox Uploader**.
### Install and Configure Dropbox Uploader on Linux ###
To use Dropbox Uploader, download the script and make it executable.
$ wget https://raw.github.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh
$ chmod +x dropbox_uploader.sh
Make sure that you have installed curl on your system, since Dropbox Uploader runs Dropbox APIs via curl.
To configure Dropbox Uploader, simply run dropbox_uploader.sh. When you run the script for the first time, it will ask you to grant the script access to your Dropbox account.
$ ./dropbox_uploader.sh
![](https://c2.staticflickr.com/6/5739/22860931599_10c08ff15f_c.jpg)
As instructed above, go to [https://www.dropbox.com/developers/apps][2] on your web browser, and create a new Dropbox app. Fill in the information of the new app as shown below, and enter the app name as generated by Dropbox Uploader.
![](https://c2.staticflickr.com/6/5745/22932921350_4123d2dbee_c.jpg)
After you have created a new app, you will see app key/secret on the next page. Make a note of them.
![](https://c1.staticflickr.com/1/736/22932962610_7db51aa718_c.jpg)
Enter the app key and secret in the terminal window where dropbox_uploader.sh is running. dropbox_uploader.sh will then generate an oAUTH URL (e.g., https://www.dropbox.com/1/oauth/authorize?oauth_token=XXXXXXXXXXXX).
![](https://c1.staticflickr.com/1/563/22601635533_423738baed_c.jpg)
Go to the oAUTH URL generated above on your web browser, and allow access to your Dropbox account.
![](https://c1.staticflickr.com/1/675/23202598606_6110c1a31b_c.jpg)
This completes Dropbox Uploader configuration. To check whether Dropbox Uploader is successfully authenticated, run the following command.
$ ./dropbox_uploader.sh info
----------
Dropbox Uploader v0.12
> Getting info...
Name: Dan Nanni
UID: XXXXXXXXXX
Email: my@email_address
Quota: 2048 Mb
Used: 13 Mb
Free: 2034 Mb
### Dropbox Uploader Examples ###
To list all contents in the top-level directory:
$ ./dropbox_uploader.sh list
To list all contents in a specific folder:
$ ./dropbox_uploader.sh list Documents/manuals
To upload a local file to a remote Dropbox folder:
$ ./dropbox_uploader.sh upload snort.pdf Documents/manuals
To download a remote file from Dropbox to a local file:
$ ./dropbox_uploader.sh download Documents/manuals/mysql.pdf ./mysql.pdf
To download an entire remote folder from Dropbox to a local folder:
$ ./dropbox_uploader.sh download Documents/manuals ./manuals
To create a new remote folder on Dropbox:
$ ./dropbox_uploader.sh mkdir Documents/whitepapers
To delete an entire remote folder (including all its contents) on Dropbox:
$ ./dropbox_uploader.sh delete Documents/manuals
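Because Dropbox Uploader is just a shell script, it folds neatly into cron jobs. As an illustration only (the paths and the Backups folder name are hypothetical), a nightly backup could look like this:

    #!/bin/sh
    # archive /etc and push the archive to a Dropbox folder
    tar czf /tmp/etc-backup.tar.gz /etc
    /path/to/dropbox_uploader.sh upload /tmp/etc-backup.tar.gz Backups/etc-backup.tar.gz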
--------------------------------------------------------------------------------
via: http://xmodulo.com/access-dropbox-command-line-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://www.andreafabrizi.it/?dropbox_uploader
[2]:https://www.dropbox.com/developers/apps
View File
@ -1,139 +0,0 @@
How to install Android Studio on Ubuntu 15.04 / CentOS 7
================================================================================
With the advancement of smart phones in the recent years, Android has become one of the biggest phone platforms and all the tools required to build Android applications are also freely available. Android Studio is an Integrated Development Environment (IDE) for developing Android applications based on [IntelliJ IDEA][1]. It is a free and open source software by Google released in 2014 and succeeds Eclipse as the main IDE.
In this article, we will learn how to install Android Studio on Ubuntu 15.04 and CentOS 7.
### Installation on Ubuntu 15.04 ###
We can install Android Studio in two ways. One is to set up the required repository and install it; the other is to download it from the official Android site and install it locally. In the following example, we will set up the repo using the command line and install it. Before proceeding, we need to make sure that we have JDK version 1.6 or greater installed.
Here, I'm installing JDK 1.8.
$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer oracle-java8-set-default
Verify if java installation was successful:
poornima@poornima-Lenovo:~$ java -version
Now, setup the repo for installing Android Studio
$ sudo apt-add-repository ppa:paolorotolo/android-studio
![Android-Studio-repo](http://blog.linoxide.com/wp-content/uploads/2015/11/Android-studio-repo.png)
$ sudo apt-get update
$ sudo apt-get install android-studio
The above install command will install android-studio in the /opt directory.
Now, run the following command to start the setup wizard:
$ /opt/android-studio/bin/studio.sh
This will invoke the setup screen. Following are the screen shots that follow to set up Android studio:
![Android Studio setup](http://blog.linoxide.com/wp-content/uploads/2015/11/Studio-setup.png)
![Install-type](Android Studio setup)
![Emulator Settings](http://blog.linoxide.com/wp-content/uploads/2015/11/Emulator-settings.png)
Once you press the Finish button, Licence agreement will be displayed. After you accept the licence, it starts downloading the required components.
![Download components](http://blog.linoxide.com/wp-content/uploads/2015/11/Download.png)
Android studio installation will be complete after this step. When you relaunch Android studio, you will be shown the following welcome screen from where you will be able to start working with your Android Studio.
![Welcome screen](http://blog.linoxide.com/wp-content/uploads/2015/11/Welcome-screen.png)
### Installation on CentOS 7 ###
Let us now learn how to install Android Studio on CentOS 7. Here also, you need to install JDK 1.6 or later. Remember to use 'sudo' before the commands if you are not a root user. You can download the [latest version][2] of the JDK. In case you already have an older version installed, remove it before installing the new one. In the example below, I will install JDK version 1.8.0_65 by downloading the required rpm.
[root@li1260-39 ~]# rpm -ivh jdk-8u65-linux-x64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:jdk1.8.0_65-2000:1.8.0_65-fcs ################################# [100%]
Unpacking JAR files...
tools.jar...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...
jfxrt.jar...
If Java path is not set properly, you will get error messages. Hence, set the correct path:
export JAVA_HOME=/usr/java/jdk1.8.0_65/
export PATH=$PATH:$JAVA_HOME/bin
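To make these settings survive new shells and reboots, they can be appended to a profile script; a quick sketch (the file name is just a convention of mine):

    # echo 'export JAVA_HOME=/usr/java/jdk1.8.0_65/' >> /etc/profile.d/java.sh
    # echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile.d/java.sh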
Check if the correct version has been installed:
[root@li1260-39 ~]# java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
If you notice any error message of the sort "unable-to-run-mksdcard-sdk-tool:" while trying to install Android Studio, you might also have to install the following packages on CentOS 7 64-bit:
glibc.i686
glibc-devel.i686
libstdc++.i686
zlib-devel.i686
ncurses-devel.i686
libX11-devel.i686
libXrender.i686
libXrandr.i686
Let us now install the studio by downloading the IDE file from the [Android site][3] and unzipping it.
[root@li1260-39 tmp]# unzip android-studio-ide-141.2343393-linux.zip
Move android-studio directory to /opt directory
[root@li1260-39 tmp]# mv /tmp/android-studio/ /opt/
You can create a symlink to the studio executable to quickly start it whenever you need it.
[root@li1260-39 tmp]# ln -s /opt/android-studio/bin/studio.sh /usr/local/bin/android-studio
Now launch the studio from a terminal:
[root@localhost ~]# android-studio
The screens that follow for completing the installation are same as the ones shown above for Ubuntu. When the installation completes, you can start creating your own Android applications.
### Conclusion ###
Within a year of its release, Android Studio has taken over as the primary IDE for Android development by eclipsing Eclipse. It is the only official IDE tool that will support future Android SDKs and other Android features that will be provided by Google. So, what are you waiting for? Go install Android Studio and have fun developing Android apps.
--------------------------------------------------------------------------------
via: http://linoxide.com/tools/install-android-studio-ubuntu-15-04-centos-7/
作者:[B N Poornima][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/bnpoornima/
[1]:https://www.jetbrains.com/idea/
[2]:http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
[3]:http://developer.android.com/sdk/index.html
View File
@ -1,47 +0,0 @@
Translating by XLCYun
Install Intel Graphics Installer in Ubuntu 15.10
================================================================================
![Intel graphics installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel_logo.jpg)
Intel has recently announced a new release of its Linux graphics installer. The new release requires Ubuntu 15.10 Wily; support for Ubuntu 15.04 is deprecated.
> The Intel® Graphics Installer for Linux* allows you to easily install the latest graphics and video drivers for your Intel graphics hardware. This allows you to stay current with the latest enhancements, optimizations, and fixes to the Intel® Graphics Stack to ensure the best user experience with your Intel® graphics hardware. The Intel® Graphics Installer for Linux* is available for the latest version of Ubuntu*.
![intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/intel-graphics-installer.jpg)
### How to Install: ###
**1.** Download the installer from [the link page][1]. The current version is 1.2.1 for Ubuntu 15.10. Check your OS type, 32-bit or 64-bit, via **System Settings -> Details**.
![download-intel-graphics-installer](http://ubuntuhandbook.org/wp-content/uploads/2015/11/download-intel-graphics-installer.jpg)
**2.** Once the download has finished, go to your Downloads folder, open the .deb package with Ubuntu Software Center, and finally click the Install button.
![install-via-software-center](http://ubuntuhandbook.org/wp-content/uploads/2015/11/install-via-software-center.jpg)
**3.** In order to trust the Intel Graphics Installer, you will need to add keys via the commands below.
Open a terminal from the Unity Dash, App Launcher, or via the Ctrl+Alt+T shortcut key. When it opens, paste the commands below and run them one by one:
wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg -O - | sudo apt-key add -
wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg-2 -O - | sudo apt-key add -
![trust-intel](http://ubuntuhandbook.org/wp-content/uploads/2015/11/trust-intel.jpg)
NOTE: While running the first command, if the cursor is stuck blinking after the key has downloaded, as the picture above shows, type your password (there is no visual feedback) and hit Enter to continue.
Finally launch Intel Graphics Installer via Unity Dash or Application launcher.
--------------------------------------------------------------------------------
via: http://ubuntuhandbook.org/index.php/2015/11/install-intel-graphics-installer-in-ubuntu-15-10/
作者:[Ji m][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ubuntuhandbook.org/index.php/about/
[1]:https://01.org/linuxgraphics/downloads

View File

@ -1,83 +0,0 @@
LNAV Ncurses based log file viewer
================================================================================
The Logfile Navigator, lnav for short, is a curses-based tool for viewing and analyzing log files. The value added by lnav over text viewers/editors is that it takes advantage of any semantic information that can be gleaned from the log file, such as timestamps and log levels. Using this extra semantic information, lnav can do things like interleave messages from different files, generate histograms of messages over time, and provide hotkeys for navigating through the file. It is hoped that these features will allow the user to quickly and efficiently zero in on problems.
### lnav Features ###
#### Support for the following log file formats: ####
Syslog, Apache access log, strace, tcsh history, and generic log files with timestamps. The file format is automatically detected when the file is read in.
#### Histogram view: ####
Displays the number of log messages per bucket-of-time. Useful for getting an overview of what was happening over a long period of time.
#### Filters: ####
Display only lines that match or do not match a set of regular expressions. Useful for removing extraneous log lines that you are not interested in.
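For example, once a file is open you can press ':' and type a filter command. The two lines below are a sketch based on lnav's documented command set (exact names may vary by version): the first hides every line matching the regex "DEBUG", the second keeps only lines matching "ERROR|WARNING".
:filter-out DEBUG
:filter-in ERROR|WARNING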
#### "Live" operation: ####
Searches are done as you type; new log lines are automatically loaded and searched as they are added; filters apply to lines as they are loaded; and, SQL queries are checked for correctness as you type.
#### Automatic tailing: ####
The log file view automatically scrolls down to follow new lines that are added to files. Simply scroll up to lock the view in place and then scroll down to the bottom to resume tailing.
#### Time-of-day ordering of lines: ####
The log lines from all the files are loaded and then sorted by time-of-day. Relieves you of having to manually line up log messages from different files.
#### Syntax highlighting: ####
Errors and warnings are colored in red and yellow, respectively. Highlights are also applied to: SQL keywords, XML tags, file and line numbers in Java backtraces, and quoted strings.
#### Navigation: ####
There are hotkeys for jumping to the next or previous error or warning and moving forward or backward by an amount of time.
#### Use SQL to query logs: ####
Each log file line is treated as a row in a database that can be queried using SQL. The columns that are available depend on the log file types being viewed.
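As a sketch, assuming an Apache access log is loaded (which lnav exposes as a table named access_log with a c_ip column, as documented upstream), pressing ';' opens the SQL prompt where you could count requests per client IP:
;SELECT c_ip, count(*) AS total FROM access_log GROUP BY c_ip ORDER BY total DESC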
#### Command and search history: ####
Your previously entered commands and searches are saved so you can access them between sessions.
#### Compressed files: ####
Compressed log files are automatically detected and uncompressed on the fly.
### Install lnav on ubuntu 15.10 ###
Open the terminal and run the following command
sudo apt-get install lnav
### Using lnav ###
If you want to view logs using lnav, you can do so with the following command; by default it shows syslog:
lnav
![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/51.png)
If you want to view specific logs, provide the path. For example, to view the CUPS logs run the following command from your terminal:
lnav /var/log/cups
![](http://www.ubuntugeek.com/wp-content/uploads/2015/11/6.png)
--------------------------------------------------------------------------------
via: http://www.ubuntugeek.com/lnav-ncurses-based-log-file-viewer.html
作者:[ruchi][a]
译者:[zky001](https://github.com/zky001)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.ubuntugeek.com/author/ubuntufix

View File

@ -1,60 +0,0 @@
translation by strugglingyouth
How to Install GIMP 2.8.16 in Ubuntu 16.04, 15.10, 14.04
================================================================================
![GIMP 2.8.16](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-icon.png)
GIMP image editor 2.8.16 was released on its 20th birthday. Here's how to install or upgrade in Ubuntu 16.04, Ubuntu 15.10, Ubuntu 14.04, Ubuntu 12.04 and their derivatives, e.g., Linux Mint 17.x/13, Elementary OS Freya.
GIMP 2.8.16 features support for layer groups in OpenRaster files, fixes for layer groups support in PSD, various user interface improvements, OSX build system fixes, translation updates, and more changes. Read the [official announcement][1].
![GIMP image editor 2.8,16](http://ubuntuhandbook.org/wp-content/uploads/2014/08/gimp-2-8-14.jpg)
### How to Install or Upgrade: ###
Thanks to Otto Meier, an [Ubuntu PPA][2] with the latest GIMP packages is available for all current Ubuntu releases and derivatives.
**1. Add GIMP PPA**
Open a terminal from the Unity Dash, App launcher, or via the Ctrl+Alt+T shortcut key. When it opens, paste the command below and hit Enter:
sudo add-apt-repository ppa:otto-kesselgulasch/gimp
![add GIMP PPA](http://ubuntuhandbook.org/wp-content/uploads/2015/11/gimp-ppa.jpg)
Type in your password when it asks; there is no visual feedback, so just type it carefully and hit Enter to continue.
**2. Install or Upgrade the editor.**
After adding the PPA, launch **Software Updater** (or Software Manager in Mint). After checking for updates, you'll see GIMP in the update list. Click “Install Now” to upgrade it.
![upgrade-gimp2816](http://ubuntuhandbook.org/wp-content/uploads/2015/11/upgrade-gimp2816.jpg)
For those who prefer Linux commands, run the commands below one by one to refresh your repository caches and install GIMP:
sudo apt-get update
sudo apt-get install gimp
**3. (Optional) Uninstall.**
In case you want to uninstall or downgrade the GIMP image editor, use Software Center to remove it, or run the commands below one by one to purge the PPA and downgrade the software:
sudo apt-get install ppa-purge
sudo ppa-purge ppa:otto-kesselgulasch/gimp
That's it. Enjoy!
--------------------------------------------------------------------------------
via: http://ubuntuhandbook.org/index.php/2015/11/how-to-install-gimp-2-8-16-in-ubuntu-16-04-15-10-14-04/
作者:[Ji m][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ubuntuhandbook.org/index.php/about/
[1]:http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/
[2]:https://launchpad.net/~otto-kesselgulasch/+archive/ubuntu/gimp

View File

@ -1,137 +0,0 @@
The tar command explained
================================================================================
The Linux [tar][1] command is the Swiss Army knife of the Linux admin when it comes to archiving or distributing files. GNU tar archives can contain multiple files and directories, file permissions can be preserved, and it supports multiple compression formats. The name tar stands for "**T**ape **Ar**chiver"; the format is an official POSIX standard.
### Tar file formats ###
A short introduction into tar compression levels.
- **No compression** Uncompressed files have the file ending .tar.
- **Gzip Compression** The Gzip format is the most widely used compression format for tar; it is fast for creating and extracting files. Files with gz compression normally have the file ending .tar.gz or .tgz. Here are some examples on how to [create][2] and [extract][3] a tar.gz file.
- **Bzip2 Compression** The Bzip2 format offers better compression than the Gzip format. Creating files is slower; the file ending is usually .tar.bz2.
- **Lzip (LZMA) Compression** The Lzip compression combines the speed of Gzip with a compression level that is similar to Bzip2 (or even better). Despite these good attributes, this format is not widely used.
- **Lzop Compression** This is probably the fastest compression format for tar; it has a compression level similar to gzip and is not widely used.
The common formats are tar.gz and tar.bz2. If your goal is fast compression, then use gzip. When the archive file size is critical, then use tar.bz2.
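For example, creating a bzip2-compressed archive works exactly like the gzip examples shown below, with the z option swapped for j:
tar pcjf myarchive.tar.bz2 /home/till/mydocuments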
### What is the tar command used for? ###
Here are a few common use cases of the tar command.
- Backup of Servers and Desktops.
- Document archiving.
- Software Distribution.
### Installing tar ###
The command is installed on most Linux systems by default. Here are the instructions to install tar in case the command is missing.
#### CentOS ####
Execute the following command as root user on the shell to install tar on CentOS.
yum install tar
#### Ubuntu ####
This command will install tar on Ubuntu. The "sudo" command ensures that the apt command is run with root privileges.
sudo apt-get install tar
#### Debian ####
The following apt command installs tar on Debian.
apt-get install tar
#### Windows ####
The tar command is available for Windows as well; you can download it from the GnuWin project. [http://gnuwin32.sourceforge.net/packages/gtar.htm][4]
### Create tar.gz Files ###
Here is the [tar command][5] that has to be run on the shell. I will explain the command line options below.
tar pczf myarchive.tar.gz /home/till/mydocuments
This command creates the archive myarchive.tar.gz which contains the files and folders from the path /home/till/mydocuments. **The command line options explained**:
- **[p]** This option stands for "preserve"; it instructs tar to store details on file owner and file permissions in the archive.
- **[c]** Stands for create. This option is mandatory when a file is created.
- **[z]** The z option enables gzip compression.
- **[f]** The file option tells tar to create an archive file. Tar will send the output to stdout if this option is omitted.
#### Tar command examples ####
**Example 1: Backup the /etc Directory** Create a backup of the /etc config directory. The backup is stored in the root folder.
tar pczvf /root/etc.tar.gz /etc
![Backup the /etc directory with tar.](https://www.howtoforge.com/images/linux-tar-command/big/create-tar.png)
The command should be run as root to ensure that all files in /etc are included in the backup. This time, I've added the [v] option in the command. This option stands for verbose; it tells tar to show all file names that get added into the archive.
**Example 2: Backup your /home directory** Create a backup of your home directory. The backup will be stored in a directory /backup.
tar czf /backup/myuser.tar.gz /home/myuser
Replace myuser with your username. In this command, I've omitted the [p] switch, so the permissions are not preserved.
**Example 3: A file-based backup of MySQL databases** The MySQL databases are stored in /var/lib/mysql on most Linux distributions. You can check that with the command:
ls /var/lib/mysql
![File based MySQL backup with tar.](https://www.howtoforge.com/images/linux-tar-command/big/tar_backup_mysql.png)
Stop the database server to get a consistent MySQL file backup with tar. The backup will be written to the /backup folder.
1) Create the backup folder
mkdir /backup
chmod 600 /backup
2) Stop MySQL, run the backup with tar and start the database server again.
service mysql stop
tar pczf /backup/mysql.tar.gz /var/lib/mysql
service mysql start
ls -lah /backup
![File based MySQL backup.](https://www.howtoforge.com/images/linux-tar-command/big/tar-backup-mysql2.png)
### Extract tar.gz Files ###
The command to extract tar.gz files is:
tar xzf myarchive.tar.gz
#### The tar command options explained ####
- **[x]** The x stands for extract; it is mandatory when a tar file shall be extracted.
- **[z]** The z option tells tar that the archive that shall be unpacked is in gzip format.
- **[f]** This option instructs tar to read the archive content from a file, in this case the file myarchive.tar.gz.
The above tar command will silently extract the tar.gz file; it will show only error messages. If you would like to see which files get extracted, add the "v" option.
tar xzvf myarchive.tar.gz
The **[v]** option stands for verbose; it will show the file names as they get unpacked.
![Extract a tar.gz file.](https://www.howtoforge.com/images/linux-tar-command/big/tar-xfz.png)
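Two more extraction variants often come in handy. Both use standard GNU tar options; the file path in the second example is just an illustration:
## Unpack into /tmp/target instead of the current directory ##
tar xzf myarchive.tar.gz -C /tmp/target
## Extract only a single file from the archive ##
tar xzf myarchive.tar.gz mydocuments/notes.txt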
--------------------------------------------------------------------------------
via: https://www.howtoforge.com/tutorial/linux-tar-command/
作者:[howtoforge][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.howtoforge.com/
[1]:https://en.wikipedia.org/wiki/Tar_(computing)
[2]:http://www.faqforge.com/linux/create-tar-gz/
[3]:http://www.faqforge.com/linux/extract-tar-gz/
[4]:http://gnuwin32.sourceforge.net/packages/gtar.htm
[5]:http://www.faqforge.com/linux/tar-command/

View File

@ -1,326 +0,0 @@
How to Install Nginx as Reverse Proxy for Apache on FreeBSD 10.2
================================================================================
Nginx is a free and open source HTTP server and reverse proxy, as well as a mail proxy server for IMAP/POP3. Nginx is a high performance web server, rich in features, with simple configuration and low memory usage. Originally written by Igor Sysoev in 2002, it has since been used by big technology companies including Netflix, GitHub, Cloudflare, WordPress.com etc.
In this tutorial we will **install and configure the Nginx web server as a reverse proxy for Apache on FreeBSD 10.2**. Apache will run with PHP on port 8080, and we will configure Nginx to run on port 80 and receive requests from users/visitors. If a user requests a web page from the browser on port 80, Nginx will pass the request to the Apache web server and PHP running on port 8080.
#### Prerequisite ####
- FreeBSD 10.2.
- Root privileges.
### Step 1 - Update the System ###
Log in to your FreeBSD server with your SSH credentials and update the system with the commands below :
freebsd-update fetch
freebsd-update install
### Step 2 - Install Apache ###
Apache is an open source HTTP server and the most widely used web server. Apache is not installed by default on FreeBSD, but we can install it from the ports collection at "/usr/ports/www/apache24" or install it from the FreeBSD repository with the pkg command. In this tutorial we will use the pkg command to install from the FreeBSD repository :
pkg install apache24
### Step 3 - Install PHP ###
Once Apache is installed, we follow up by installing PHP to handle PHP file requests from users. We will install PHP with the pkg command as below :
pkg install php56 mod_php56 php56-mysql php56-mysqli
### Step 4 - Configure Apache and PHP ###
Once everything is installed, we will configure Apache to run on port 8080 and PHP to work with Apache. To configure Apache, we can edit the configuration file "httpd.conf"; for PHP we just need to copy the PHP configuration file php.ini into the "/usr/local/etc/" directory.
Go to "/usr/local/etc/" directory and copy php.ini-production file to php.ini :
cd /usr/local/etc/
cp php.ini-production php.ini
Next, configure apache by editing file "httpd.conf" on apache directory :
cd /usr/local/etc/apache24
nano -c httpd.conf
Port configuration on line **52** :
Listen 8080
ServerName configuration on line **219** :
ServerName 127.0.0.1:8080
Add the DirectoryIndex files that Apache will serve when a directory is requested, on line **277** :
DirectoryIndex index.php index.html
Configure Apache to work with PHP by adding the block below under line **287** :
<FilesMatch "\.php$">
SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
SetHandler application/x-httpd-php-source
</FilesMatch>
Save and exit.
Now add apache to start at boot time with sysrc command :
sysrc apache24_enable=yes
And test apache configuration with command below :
apachectl configtest
If there is no error, start apache :
service apache24 start
If all is done, verify that PHP is running well with Apache by creating a phpinfo file in the "/usr/local/www/apache24/data" directory :
cd /usr/local/www/apache24/data
echo "<?php phpinfo(); ?>" > info.php
Now visit the freebsd server IP : 192.168.1.123:8080/info.php.
![Apache and PHP on Port 8080](http://blog.linoxide.com/wp-content/uploads/2015/11/Apache-and-PHP-on-Port-8080.png)
Apache is working with php on port 8080.
### Step 5 - Install Nginx ###
Nginx is a high performance web server and reverse proxy with low memory consumption. In this step we will use Nginx as a reverse proxy for Apache, so let's install it with the pkg command :
pkg install nginx
### Step 6 - Configure Nginx ###
Once Nginx is installed, we must configure it by replacing the "**nginx.conf**" file with the new configuration below. Change to the "/usr/local/etc/nginx/" directory and back up the default nginx.conf :
cd /usr/local/etc/nginx/
mv nginx.conf nginx.conf.original
Now create new nginx configuration file :
nano -c nginx.conf
and paste configuration below :
user www;
worker_processes 1;
error_log /var/log/nginx/error.log;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log;
sendfile on;
keepalive_timeout 65;
# Nginx cache configuration
proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
proxy_temp_path /var/nginx/cache/tmp;
proxy_cache_key "$scheme$host$request_uri";
gzip on;
server {
#listen 80;
server_name _;
location /nginx_status {
stub_status on;
access_log off;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/local/www/nginx-dist;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:8080
#
location ~ \.php$ {
proxy_pass http://127.0.0.1:8080;
include /usr/local/etc/nginx/proxy.conf;
}
}
include /usr/local/etc/nginx/vhost/*;
}
Save and exit.
Next, create a new file called **proxy.conf** for the reverse proxy configuration in the nginx directory :
cd /usr/local/etc/nginx/
nano -c proxy.conf
Paste configuration below :
proxy_buffering on;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 100 8k;
add_header X-Cache $upstream_cache_status;
Save and exit.
And last, create a new directory for the Nginx cache at "/var/nginx/cache" :
mkdir -p /var/nginx/cache
### Step 7 - Configure Nginx VirtualHost ###
In this step we will create a new virtualhost for the domain "saitama.me", with the document root at "/usr/local/www/saitama.me" and the log files in the "/var/log/nginx" directory.
The first thing we must do is create a new directory to store the virtualhost file; here we use a new directory called "**vhost**". Let's create it :
cd /usr/local/etc/nginx/
mkdir vhost
The vhost directory has been created; now go to the directory and create a new virtualhost file. Here I will create the new file "**saitama.conf**" :
cd vhost/
nano -c saitama.conf
Paste virtualhost configuration below :
server {
# Replace with your freebsd IP
listen 192.168.1.123:80;
# Document Root
root /usr/local/www/saitama.me;
index index.php index.html index.htm;
# Domain
server_name www.saitama.me saitama.me;
# Error and Access log file
error_log /var/log/nginx/saitama-error.log;
access_log /var/log/nginx/saitama-access.log main;
# Reverse Proxy Configuration
location ~ \.php$ {
proxy_pass http://127.0.0.1:8080;
include /usr/local/etc/nginx/proxy.conf;
# Cache configuration
proxy_cache my-cache;
proxy_cache_valid 10s;
proxy_no_cache $cookie_PHPSESSID;
proxy_cache_bypass $cookie_PHPSESSID;
proxy_cache_key "$scheme$host$request_uri";
}
# Disable Cache for the file type html, json
location ~* .(?:manifest|appcache|html?|xml|json)$ {
expires -1;
}
# Enable Cache the file 30 days
location ~* .(jpg|png|gif|jpeg|css|mp3|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ {
proxy_cache_valid 200 120m;
expires 30d;
proxy_cache my-cache;
access_log off;
}
}
Save and exit.
Next, create a new log directory for Nginx and the virtualhost under "/var/log/" :
mkdir -p /var/log/nginx/
If all is done, let's create the document root directory for saitama.me :
cd /usr/local/www/
mkdir saitama.me
### Step 8 - Testing ###
In this step we just test our Nginx configuration and the Nginx virtualhost.
Test the Nginx configuration with the command below :
nginx -t
If there is no problem, enable Nginx at boot time with the sysrc command, and then start it and restart Apache:
sysrc nginx_enable=yes
service nginx start
service apache24 restart
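Before testing in the browser, you can confirm that both daemons are listening on the expected ports. This is just an optional sanity check; sockstat is the standard FreeBSD tool for listing listening sockets :
sockstat -4 -l | egrep '80|8080'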
All is done; now verify that PHP is working by adding a new phpinfo file in the saitama.me directory :
cd /usr/local/www/saitama.me
echo "<?php phpinfo(); ?>" > info.php
Visit the domain : **www.saitama.me/info.php**.
![Virtualhost Configured saitamame](http://blog.linoxide.com/wp-content/uploads/2015/11/Virtualhost-Configured-saitamame.png)
Nginx as a reverse proxy for Apache is working, and PHP is working too.
Here are some more results :
Test .html file with no-cache.
curl -I www.saitama.me
![html with no-cache](http://blog.linoxide.com/wp-content/uploads/2015/11/html-with-no-cache.png)
Test .css file with 30day cache.
curl -I www.saitama.me/test.css
![css file 30day cache](http://blog.linoxide.com/wp-content/uploads/2015/11/css-file-30day-cache.png)
Test .php file with cache :
curl -I www.saitama.me/info.php
![PHP file cached](http://blog.linoxide.com/wp-content/uploads/2015/11/PHP-file-cached.png)
All is done.
### Conclusion ###
Nginx is a very popular HTTP server and reverse proxy. It is rich in features, with high performance and low memory/RAM usage. Nginx can also be used for caching: we can cache static web files to make pages load faster, and cache PHP responses when users request them. Nginx is easy to configure and use, either as an HTTP server or as a reverse proxy for Apache.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-nginx-reverse-proxy-apache-freebsd-10-2/
作者:[Arul][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arulm/

View File

@ -1,53 +0,0 @@
Linux FAQs with Answers--How to remove trailing whitespaces in a file on Linux
================================================================================
> Question: I have a text file in which I need to remove all trailing whitespaces (e.g., spaces and tabs) in each line for formatting purposes. Is there a quick and easy Linux command line tool I can use for this?
When you are writing code for your program, you must understand that there are standard coding styles to follow. For example, "trailing whitespaces" are typically considered evil because when they get into a code repository for revision control, they can cause a lot of problems and confusion (e.g., "false diffs"). Many IDEs and text editors are capable of highlighting and automatically trimming trailing whitespaces at the end of each line.
Here are a few ways to **remove trailing whitespaces in Linux command-line environment**.
### Method One ###
A simple command line approach to remove unwanted whitespaces is via sed.
The following command deletes all spaces and tabs at the end of each line in input.java.
$ sed -i 's/[[:space:]]*$//' input.java
If there are multiple files that need trailing whitespaces removed, you can use a combination of find and sed. For example, the following command deletes trailing whitespaces in all *.java files recursively found in the current directory as well as all its sub-directories.
$ find . -name "*.java" -type f -print0 | xargs -0 sed -i 's/[[:space:]]*$//'
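If you want to preview the changes before editing a file in place, you can run the same substitution without the -i flag and diff the result against the original:
$ sed 's/[[:space:]]*$//' input.java | diff input.java -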
### Method Two ###
Vim text editor is able to highlight and trim whitespaces in a file as well.
To highlight all trailing whitespaces in a file, open the file with Vim editor and enable text highlighting by typing the following in Vim command line mode.
:set hlsearch
Then search for trailing whitespaces by typing:
/\s\+$
This will show all trailing spaces and tabs found throughout the file.
![](https://c1.staticflickr.com/1/757/23198657732_bc40e757b4_b.jpg)
Then to clean up all trailing whitespaces in a file with Vim, type the following Vim command.
:%s/\s\+$//
This command means substituting all whitespace characters found at the end of the line (\s\+$) with no character.
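If you want Vim to trim trailing whitespace automatically on every save, you can add an autocommand like the following to your ~/.vimrc (the trailing e flag keeps the substitution silent when a line has nothing to trim):
autocmd BufWritePre * :%s/\s\+$//e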
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/remove-trailing-whitespaces-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni

View File

@ -0,0 +1,425 @@
15 Useful Linux and Unix Tape Management Commands For Sysadmins
================================================================================
Tape devices should be used on a regular basis only for archiving files or for transferring data from one server to another. Usually, tape devices are all hooked up to Unix boxes, and controlled with mt or mtx. You must back up all data to both disks (possibly in the cloud) and a tape device. In this tutorial you will learn about:
- Tape device names
- Basic commands to manage tape drive
- Basic backup and restore commands
### Why backup? ###
A backup plan is important for several reasons:
- Ability to recover from disk failure
- Accidental file deletion
- File or file system corruption
- Complete server destruction, including destruction of on-site backups due to fire or other problems.
You can use tape based archives to backup the whole server and move tapes off-site.
### Understanding tape file marks and block size ###
![Fig.01: Tape file marks](http://s0.cyberciti.org/uploads/cms/2015/10/tape-format.jpg)
Fig.01: Tape file marks
Each tape device can store multiple tape backup files. Tape backup files are created using cpio, tar, dd, and so on. However, a tape device can be opened, written to, and closed by various programs. You can store several backups (tapes) on a single physical tape. Between each tape file is a "tape file mark". This is used to indicate where one tape file ends and another begins on the physical tape. You need to use the mt command to position the tape (wind forward, rewind, and write marks).
#### How data is stored on a tape ####
![Fig.02: How data is stored on a tape](http://s0.cyberciti.org/uploads/cms/2015/10/how-data-is-stored-on-a-tape.jpg)
Fig.02: How data is stored on a tape
All data is stored sequentially in tape archive format using tar. The first tape archive will start at the physical beginning of the tape (tar #0). The next will be tar #1 and so on.
### Tape device names on Unix ###
1. /dev/rmt/0 or /dev/rmt/1 or /dev/rmt/[0-127] : Regular tape device name on Unix. The tape is rewound.
1. /dev/rmt/0n : This is known as no-rewind, i.e., after use it leaves the tape in its current position for the next command.
1. /dev/rmt/0b : Use the magtape interface, i.e., BSD behavior. More readable by a variety of OS's such as AIX, Windows, Linux, FreeBSD, and more.
1. /dev/rmt/0l : Set density to low.
1. /dev/rmt/0m : Set density to medium.
1. /dev/rmt/0u : Set density to high.
1. /dev/rmt/0c : Set density to compressed.
1. /dev/st[0-9] : Linux specific SCSI tape device name.
1. /dev/sa[0-9] : FreeBSD specific SCSI tape device name.
1. /dev/esa0 : FreeBSD specific SCSI tape device name that eject on close (if capable).
#### Tape device name examples ####
- The /dev/rmt/1cn indicates that I'm using unit 1, compressed density and no rewind.
- The /dev/rmt/0hb indicates that I'm using unit 0, high density and BSD behavior.
- The auto rewind SCSI tape device name on Linux : /dev/st0
- The non-rewind SCSI tape device name on Linux : /dev/nst0
- The auto rewind SCSI tape device name on FreeBSD: /dev/sa0
- The non-rewind SCSI tape device name on FreeBSD: /dev/nsa0
#### How do I list installed scsi tape devices? ####
Type the following commands:
## Linux (read man pages for more info) ##
lsscsi
lsscsi -g
## IBM AIX ##
lsdev -Cc tape
lsdev -Cc adsm
lscfg -vl rmt*
## Solaris Unix ##
cfgadm -a
cfgadm -al
luxadm probe
iostat -En
## HP-UX Unix ##
ioscan -fC tape
ioscan -funC tape
ioscan -fnC tape
ioscan -kfC tape
Sample outputs from my Linux server:
![Fig.03: Installed tape devices on Linux server](http://s0.cyberciti.org/uploads/cms/2015/10/linux-find-tape-devices-command.jpg)
Fig.03: Installed tape devices on Linux server
### mt command examples ###
In Linux and Unix-like systems, the mt command is used to control tape drive operations, such as checking status, seeking through files on a tape, or writing tape control marks to the tape. You must run most of the following commands as the root user. The syntax is:
mt -f /tape/device/name operation
#### Setting up environment ####
You can set the TAPE shell variable. This is the pathname of the tape drive. The default (if the variable is unset, but not if it is null) is /dev/nsa0 on FreeBSD. It may be overridden with the -f option passed to the mt command, as explained below.
## Add to your shell startup file ##
TAPE=/dev/st1 #Linux
TAPE=/dev/rmt/2 #Unix
TAPE=/dev/nsa3 #FreeBSD
export TAPE
### 1: Display status of the tape/drive ###
mt status #Use default
mt -f /dev/rmt/0 status #Unix
mt -f /dev/st0 status #Linux
mt -f /dev/nsa0 status #FreeBSD
mt -f /dev/rmt/1 status #Unix unit 1 i.e. tape device no. 1
You can use a shell loop as follows to poll a system and locate all of its tape drives:
for d in 0 1 2 3 4 5
do
mt -f "/dev/rmt/${d}" status
done
### 2: Rewinds the tape ###
mt rew
mt rewind
mt -f /dev/mt/0 rewind
mt -f /dev/st0 rewind
### 3: Eject the tape ###
mt off
mt offline
mt eject
mt -f /dev/mt/0 off
mt -f /dev/st0 eject
### 4: Erase the tape (rewind the tape and, if applicable, unload the tape) ###
mt erase
mt -f /dev/st0 erase #Linux
mt -f /dev/rmt/0 erase #Unix
### 5: Retensioning a magnetic tape cartridge ###
If errors occur when a tape is being read, you can retension the tape, clean the tape drive, and then try again as follows:
mt retension
mt -f /dev/rmt/1 retension #Unix
mt -f /dev/st0 retension #Linux
### 6: Write n EOF marks at the current position of the tape ###
mt eof
mt weof
mt -f /dev/st0 eof
### 7: Forward space count files i.e. jumps n EOF marks ###
The tape is positioned on the first block of the next file, i.e., the tape will be positioned on the first block of the file after the skipped EOF marks (see fig.01):
mt fsf
mt -f /dev/rmt/0 fsf
mt -f /dev/rmt/1 fsf 1 #go 1 forward file/tape (see fig.01)
### 8: Backward space count files i.e. rewinds n EOF marks ###
The tape rewinds past EOF marks and is positioned just after the previous EOF mark (see fig.01):
mt bsf
mt -f /dev/rmt/1 bsf
mt -f /dev/rmt/1 bsf 1 #go 1 backward file/tape (see fig.01)
Here is a list of the tape position commands:
fsf Forward space count files. The tape is positioned on the first block of the next file.
fsfm Forward space count files. The tape is positioned on the last block of the previous file.
bsf Backward space count files. The tape is positioned on the last block of the previous file.
bsfm Backward space count files. The tape is positioned on the first block of the next file.
asf The tape is positioned at the beginning of the count file. Positioning is done by first rewinding the tape and then spacing forward over count filemarks.
fsr Forward space count records.
bsr Backward space count records.
fss (SCSI tapes) Forward space count setmarks.
bss (SCSI tapes) Backward space count setmarks.
### Basic backup commands ###
Let us see the commands to back up and restore files.
### 9: To backup directory (tar format) ###
tar cvf /dev/rmt/0n /etc
tar cvf /dev/st0 /etc
### 10: To restore directory (tar format) ###
tar xvf /dev/rmt/0n -C /path/to/restore
tar xvf /dev/st0 -C /tmp
### 11: List or check tape contents (tar format) ###
mt -f /dev/st0 rewind; dd if=/dev/st0 | tar tvf -
## tar format ##
tar tvf {DEVICE} {Directory-FileName}
tar tvf /dev/st0
tar tvf /dev/st0 desktop
tar tvf /dev/rmt/0 foo > list.txt
### 12: Backup partition with dump or ufsdump ###
## Unix backup c0t0d0s2 partition ##
ufsdump 0uf /dev/rmt/0 /dev/rdsk/c0t0d0s2
## Linux backup /home partition ##
dump 0uf /dev/nst0 /dev/sda5
dump 0uf /dev/nst0 /home
## FreeBSD backup /usr partition ##
dump -0aL -b64 -f /dev/nsa0 /usr
### 12: Restore partition with ufsrestore or restore ###
## Unix ##
ufsrestore xf /dev/rmt/0
## Unix interactive restore ##
ufsrestore if /dev/rmt/0
## Linux ##
restore rf /dev/nst0
## Restore interactive from the 6th backup on the tape media ##
restore isf 6 /dev/nst0
## FreeBSD restore ufsdump format ##
restore -i -f /dev/nsa0
### 13: Start writing at the beginning of the tape (see fig.02) ###
## This will overwrite all data on tape ##
mt -f /dev/st0 rewind
## Backup home ##
tar cvf /dev/st0 /home
## Offline and unload tape ##
mt -f /dev/st0 offline
To restore from the beginning of the tape:
mt -f /dev/st0 rewind
tar xvf /dev/st0
mt -f /dev/st0 offline
### 14: Start writing after the last tar (see fig.02) ###
## This will keep all data written so far ##
mt -f /dev/st0 eom
## Backup home ##
tar cvf /dev/st0 /home
## Unload ##
mt -f /dev/st0 offline
### 15: Start writing after tar number 2 (see fig.02) ###
## To write after tar number 2 (should be 2+1) ##
mt -f /dev/st0 asf 3
tar cvf /dev/st0 /usr
## asf equivalent command done using fsf ##
mt -f /dev/st0 rewind
mt -f /dev/st0 fsf 2
To restore tar from tar number 2:
mt -f /dev/st0 asf 3
tar xvf /dev/st0
mt -f /dev/st0 offline
### How do I verify backup tapes created using tar? ###
It is important that you do regular full system restorations and service testing; it's the only way to know for sure that the entire system is working correctly. See our [tutorial on verifying tar command tape backups][1] for more information.
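As a minimal sketch, assuming a Linux SCSI tape on /dev/st0 holding a GNU tar archive, you can rewind and list the archive to confirm it is readable, or use tar's d (diff) option to compare the tape contents against the files on disk:
mt -f /dev/st0 rewind
## List the archive; a clean exit means it is readable ##
tar tvf /dev/st0 > /dev/null && echo "archive is readable"
## Compare the archive against the filesystem ##
tar dvf /dev/st0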
### Sample shell script ###
#!/bin/bash
# A UNIX / Linux shell script to backup dirs to tape device like /dev/st0 (linux)
# This script make both full and incremental backups.
# You need at least two sets of five tapes. Label each tape as Mon, Tue, Wed, Thu and Fri.
# You can run the script at midnight or early morning each day using cron jobs.
# The operator or sysadmin can replace the tape every day after the script has finished.
# Script must run as root or configure permission via sudo.
# -------------------------------------------------------------------------
# Copyright (c) 1999 Vivek Gite <vivek@nixcraft.com>
# This script is licensed under GNU GPL version 2.0 or above
# -------------------------------------------------------------------------
# This script is part of nixCraft shell script collection (NSSC)
# Visit http://bash.cyberciti.biz/ for more information.
# -------------------------------------------------------------------------
# Last updated on : March-2003 - Added log file support.
# Last updated on : Feb-2007 - Added support for excluding files / dirs.
# -------------------------------------------------------------------------
LOGBASE=/root/backup/log
# Backup dirs; do not prefix /
BACKUP_ROOT_DIR="home sales"
# Get todays day like Mon, Tue and so on
NOW=$(date +"%a")
# Tape device name
TAPE="/dev/st0"
# Exclude file
TAR_ARGS=""
EXCLUDE_CONF=/root/.backup.exclude.conf
# Backup Log file
LOGFILE=$LOGBASE/$NOW.backup.log
# Path to binaries
TAR=/bin/tar
MT=/bin/mt
MKDIR=/bin/mkdir
# ------------------------------------------------------------------------
# Excluding files when using tar
# Create a file called $EXCLUDE_CONF using a text editor
# Add files matching patterns such as follows (regex allowed):
# home/vivek/iso
# home/vivek/*.cpp~
# ------------------------------------------------------------------------
[ -f $EXCLUDE_CONF ] && TAR_ARGS="-X $EXCLUDE_CONF"
#### Custom functions #####
# Make a full backup
full_backup(){
local old=$(pwd)
cd /
$TAR $TAR_ARGS -cvpf $TAPE $BACKUP_ROOT_DIR
$MT -f $TAPE rewind
$MT -f $TAPE offline
cd $old
}
# Make a partial backup
partial_backup(){
local old=$(pwd)
cd /
$TAR $TAR_ARGS -cvpf $TAPE -N "$(date -d '1 day ago')" $BACKUP_ROOT_DIR
$MT -f $TAPE rewind
$MT -f $TAPE offline
cd $old
}
# Make sure all dirs exist
verify_backup_dirs(){
local s=0
for d in $BACKUP_ROOT_DIR
do
if [ ! -d /$d ];
then
echo "Error : /$d directory does not exits!"
s=1
fi
done
# if not; just die
[ $s -eq 1 ] && exit 1
}
#### Main logic ####
# Make sure log dir exists
[ ! -d $LOGBASE ] && $MKDIR -p $LOGBASE
# Verify dirs
verify_backup_dirs
# Okay let us start backup procedure
# If it is Monday make a full backup;
# For Tue to Fri make a partial backup
# Weekend no backups
case $NOW in
Mon) full_backup;;
Tue|Wed|Thu|Fri) partial_backup;;
*) ;;
esac > $LOGFILE 2>&1
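To run the script automatically, you could install a crontab entry for root along these lines; the path /root/backup.sh and the schedule are examples only, and the script must be saved and made executable first:
## Run the backup at 00:30 from Monday to Friday ##
30 0 * * 1-5 /root/backup.sh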
### A note about third party backup utilities ###
Both Linux and Unix-like system provides many third-party utilities which you can use to schedule the creation of backups including tape backups such as:
- Amanda
- Bacula
- rsync
- duplicity
- rsnapshot
See also
- Man pages - [mt(1)][2], [mtx(1)][3], [tar(1)][4], [dump(8)][5], [restore(8)][6]
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/hardware/unix-linux-basic-tape-management-commands/
作者:Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://www.cyberciti.biz/faq/unix-verify-tape-backup/
[2]:http://www.manpager.com/linux/man1/mt.1.html
[3]:http://www.manpager.com/linux/man1/mtx.1.html
[4]:http://www.manpager.com/linux/man1/tar.1.html
[5]:http://www.manpager.com/linux/man8/dump.8.html
[6]:http://www.manpager.com/linux/man8/restore.8.html

View File

@ -0,0 +1,42 @@
Translating by DongShuaike
Backup (System Restore Point) your Ubuntu/Linux Mint with SystemBack
================================================================================
System Restore is a must-have feature for any OS. It allows the user to revert their computer's state (including system files, installed applications, and system settings) to that of a previous point in time, which can be used to recover from system malfunctions or other problems.
Sometimes installing a program or driver can make your OS go to blank screen. System Restore can return your PC's system files and programs to a time when everything was working fine, potentially preventing hours of troubleshooting headaches. It won't affect your documents, pictures, or other data.
[Systemback][1] is a simple system backup and restore application with extra features. It makes it easy to create backups of system and user configuration files. In case of problems you can easily restore the previous state of the system. There are extra features like system copying, system installation and live system creation.
Screenshots
![systemback](http://2.bp.blogspot.com/-2UPS3yl3LHw/VlilgtGAlvI/AAAAAAAAGts/ueRaAghXNvc/s1600/systemback-1.jpg)
![systemback](http://2.bp.blogspot.com/-7djBLbGenxE/Vlilgk-FZHI/AAAAAAAAGtk/2PVNKlaPO-c/s1600/systemback-2.jpg)
![](http://3.bp.blogspot.com/-beZYwKrsT4o/VlilgpThziI/AAAAAAAAGto/cwsghXFNGRA/s1600/systemback-3.jpg)
![](http://1.bp.blogspot.com/-t_gmcoQZrvM/VlilhLP--TI/AAAAAAAAGt0/GWBg6bGeeaI/s1600/systemback-5.jpg)
**Note**: Using System Restore will not restore documents, music, emails, or personal files of any kind. Depending on your perspective, this is both a positive and negative feature. The bad news is that it won't restore that accidentally deleted file you wish you could get back, though a file recovery program might solve that problem.
If no restore point exists on your computer, System Restore has nothing to revert to so the tool won't work for you. If you're trying to recover from a major problem, you'll need to move on to another troubleshooting step.
>>> Available for Ubuntu 15.10 Wily/16.04/15.04 Vivid/14.04 Trusty/Linux Mint 17.x/other Ubuntu derivatives
To install the Systemback application in Ubuntu/Linux Mint, open a terminal (press Ctrl+Alt+T) and run the following commands:
Terminal Commands:
sudo add-apt-repository ppa:nemh/systemback
sudo apt-get update
sudo apt-get install systemback
That's it
--------------------------------------------------------------------------------
via: http://www.noobslab.com/2015/11/backup-system-restore-point-your.html
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://launchpad.net/systemback

View File

@ -0,0 +1,108 @@
8 things to do after installing openSUSE Leap 42.1
================================================================================
![Credit: Metropolitan Transportation/Flickr](http://images.techhive.com/images/article/2015/11/things-to-do-100626947-primary.idge.jpg)
Credit: [Metropolitan Transportation/Flickr][1]
> You've installed openSUSE on your PC. Here's what to do next.
[openSUSE Leap is indeed a huge leap][2], allowing users to run a distro that has the same DNA as SUSE Linux Enterprise. Like any other operating system, some work is needed to get it set up for optimal use.
Following are some of the things that I did after installing openSUSE Leap on my PC (these are not applicable for server installations). None of them are mandatory, and you may be fine with the basic install. But if you need more out of your openSUSE Leap, follow me.
### 1. Adding Packman repository ###
Due to software patents and licences, openSUSE, like many Linux distributions, doesn't offer many applications, codecs, and drivers through official repositories (repos). Instead, these are made available through 3rd party or community repos. The first and most important repository is 'Packman'. Since these repos are not enabled by default, we have to add them. You can do so either using YaST (one of the gems of openSUSE) or by command line (instructions below).
![o42 yast repo](http://images.techhive.com/images/article/2015/11/o42-yast-repo-100626952-large970.idge.png)
Adding Packman repositories.
Using YaST, go to the Software Repositories section. Click on the 'Add' button and select 'Community Repositories.' Click 'Next.' And once the repos are loaded, select the Packman Repository. Click 'OK,' then import the trusted GnuPG key by clicking on the 'Trust' button.
Or, using the terminal you can add and enable the Packman repo using the following command:
zypper ar -f -n packman http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.1/ packman
Once the repo is added, you have access to many more packages. To install any application or package, open YaST Software Manager, search for the package and install it.
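If you prefer the command line for this as well, refresh the repository metadata once after adding the repo, then install packages directly with zypper (the package name below is a placeholder):
sudo zypper refresh
sudo zypper install <package-name>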
### 2. Install VLC ###
VLC is the Swiss Army knife of media players and can play virtually any media file. You can install VLC from YaST Software Manager or from software.opensuse.org. You will need to install two packages: vlc and vlc-codecs.
If using terminal, run the following command:
sudo zypper install vlc vlc-codecs
### 3. Install Handbrake ###
If you need to transcode or convert your video files from one format to another, [Handbrake is the tool for you][3]. Handbrake is available through the repositories we enabled, so just search for it in YaST and install it.
If you are using the terminal, run the following command:
sudo zypper install handbrake-cli handbrake-gtk
(Pro tip: VLC can also transcode audio and video files.)
### 4. Install Chrome ###
OpenSUSE comes with Firefox as the default browser. But since Firefox isn't capable of playing restricted media such as Netflix, I recommend installing Chrome. This takes some extra work. First you need to import the trusted key from Google. Open the terminal app and run the 'wget' command to download the key:
wget https://dl.google.com/linux/linux_signing_key.pub
Then import the key:
sudo rpm --import linux_signing_key.pub
Now head over to the [Google Chrome website][4] and download the 64 bit .rpm file. Once downloaded run the following command to install the browser:
sudo zypper install /PATH_OF_GOOGLE_CHROME.rpm
### 5. Install Nvidia drivers ###
OpenSUSE Leap will work out of the box even if you have Nvidia or ATI graphics cards. However, if you do need the proprietary drivers for gaming or any other purpose, you can install such drivers, but some extra work is needed.
First you need to add the Nvidia repositories; it's the same procedure we used to add Packman repositories using YaST. The only difference is that you will choose Nvidia from the Community Repositories section. Once it's added, go to **Software Management > Extras** and select 'Extras/Install All Matching Recommended Packages'.
![o42 nvidia](http://images.techhive.com/images/article/2015/11/o42-nvidia-100626950-large.idge.png)
It will open a dialogue box showing all the packages it's going to install, click OK and follow the instructions. You can also run the following command after adding the Nvidia repository to install the needed Nvidia drivers:
sudo zypper inr
(Note: I have never used AMD/ATI cards so I have no experience with them.)
### 6. Install media codecs ###
Once you have VLC installed you won't need to install media codecs, but if you are using other apps for media playback you will need to install such codecs. Some developers have written scripts/tools which make it a much easier process. Just go to [this page][5] and install the entire pack by clicking on the appropriate button. It will open YaST and install the packages automatically (of course you will have to give the root password and trust the GnuPG key, as usual).
### 7. Install your preferred email client ###
OpenSUSE comes with Kmail or Evolution, depending on the Desktop Environment you installed on the system. I run Plasma, which comes with Kmail, and this email client leaves a lot to be desired. I suggest trying Thunderbird or Evolution mail. All major email clients are available through official repositories. You can also check my [handpicked list of the best email clients for Linux][6].
### 8. Enable Samba services from Firewall ###
OpenSUSE offers a much more secure system out of the box, compared to other distributions. But it also requires a little bit more work for a new user. If you are using Samba protocol to share files within your local network then you will have to allow that service from the Firewall.
![o42 firewall](http://images.techhive.com/images/article/2015/11/o42-firewall-100626948-large970.idge.png)
Allow Samba Client and Server from Firewall settings.
Open YaST and search for Firewall. Once in Firewall settings, go to 'Allowed Services' where you will see a drop down list under 'Service to allow.' Select 'Samba Client,' then click 'Add.' Do the same with the 'Samba Server' option. Once both are added, click 'Next,' then click 'Finish,' and now you will be able to share folders from your openSUSE system and also access other machines over the local network.
That's pretty much all that I did on my new openSUSE system to set it up just the way I like it. If you have any questions, please feel free to ask in the comments below.
--------------------------------------------------------------------------------
via: http://www.itworld.com/article/3003865/open-source-tools/8-things-to-do-after-installing-opensuse-leap-421.html
作者:[Swapnil Bhartiya][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
[1]:https://www.flickr.com/photos/mtaphotos/11200079265/
[2]:https://www.linux.com/news/software/applications/865760-opensuse-leap-421-review-the-most-mature-linux-distribution
[3]:https://www.linux.com/learn/tutorials/857788-how-to-convert-videos-in-linux-using-the-command-line
[4]:https://www.google.com/intl/en/chrome/browser/desktop/index.html#brand=CHMB&utm_campaign=en&utm_source=en-ha-na-us-sk&utm_medium=ha
[5]:http://opensuse-community.org/
[6]:http://www.itworld.com/article/2875981/the-5-best-open-source-email-clients-for-linux.html

View File

@ -0,0 +1,43 @@
A new Mindcraft moment?
=======================
Credit: Jonathan Corbet
It is not often that Linux kernel development attracts the attention of a mainstream newspaper like The Washington Post; lengthy features on the kernel community's approach to security are even more uncommon. So when just such a feature hit the net, it attracted a lot of attention. This article has gotten mixed reactions, with many seeing it as a direct attack on Linux. The motivations behind the article are hard to know, but history suggests that we may look back on it as having given us a much-needed push in a direction we should have been going for some time.
Think back, a moment, to the dim and distant past — April 1999, to be specific. An analyst company named Mindcraft issued a report showing that Windows NT greatly outperformed Red Hat Linux 5.2 and Apache for web-server workloads. The outcry from the Linux community, including from a very young LWN, was swift and strong. The report was a piece of Microsoft-funded FUD trying to cut off an emerging threat to its world-domination plans. The Linux system had been deliberately configured for poor performance. The hardware chosen was not well supported by Linux at the time. And so on.
Once people calmed down a bit, though, one other fact came clear: the Mindcraft folks, whatever their motivations, had a point. Linux did, indeed, have performance problems that were reasonably well understood even at the time. The community then did what it does best: we sat down and fixed the problems. The scheduler got exclusive wakeups, for example, to put an end to the thundering-herd problem in the acceptance of connection requests. Numerous other little problems were fixed. Within a year or so, the kernel's performance on this kind of workload had improved considerably.
The Mindcraft report, in other words, was a much-needed kick in the rear that got the community to deal with issues that had been neglected until then.
The Washington Post article seems clearly slanted toward a negative view of the Linux kernel and its contributors. It freely mixes kernel problems with other issues (the AshleyMadison.com break-in, for example) that were not kernel vulnerabilities at all. The fact that vendors seem to have little interest in getting security fixes to their customers is danced around like a huge elephant in the room. There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true, but it should not be allowed to overshadow the simple fact that the article has a valid point.
We do a reasonable job of finding and fixing bugs. Problems, whether they are security-related or not, are patched quickly, and the stable-update mechanism makes those patches available to kernel users. Compared to a lot of programs out there (free and proprietary alike), the kernel is quite well supported. But pointing at our ability to fix bugs is missing a crucial point: fixing security bugs is, in the end, a game of whack-a-mole. There will always be more moles, some of which we will not know about (and will thus be unable to whack) for a long time after they are discovered and exploited by attackers. These bugs leave our users vulnerable, even if the commercial side of Linux did a perfect job of getting fixes to users — which it decidedly does not.
The point that developers concerned about security have been trying to make for a while is that fixing bugs is not enough. We must instead realize that we will never fix them all and focus on making bugs harder to exploit. That means restricting access to information about the kernel, making it impossible for the kernel to execute code in user-space memory, instrumenting the kernel to detect integer overflows, and all the other things laid out in Kees Cook's Kernel Summit talk at the end of October. Many of these techniques are well understood and have been adopted by other operating systems; others will require innovation on our part. But, if we want to adequately defend our users from attackers, these changes need to be made.
Why hasn't the kernel adopted these technologies already? The Washington Post article puts the blame firmly on the development community, and on Linus Torvalds in particular. The culture of the kernel community prioritizes performance and functionality over security and is unwilling to make compromises if they are needed to improve the security of the kernel. There is some truth to this claim; the good news is that attitudes appear to be shifting as the scope of the problem becomes clear. Kees's talk was well received, and it clearly got developers thinking and talking about the issues.
The point that has been missed is that we do not just have a case of Linus fending off useful security patches. There simply are not many such patches circulating in the kernel community. In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream. Getting any large, intrusive patch set merged requires working with the kernel community, making the case for the changes, splitting the changes into reviewable pieces, dealing with review comments, and so on. It can be tiresome and frustrating, but it's how the kernel works, and it clearly results in a more generally useful, more maintainable kernel in the long run.
Almost nobody is doing that work to get new security technologies into the kernel. One might cite a "chilling effect" from the hostile reaction such patches can receive, but that is an inadequate answer: developers have managed to merge many changes over the years despite a difficult initial reaction. Few security developers are even trying.
Why aren't they trying? One fairly obvious answer is that almost nobody is being paid to try. Almost all of the work going into the kernel is done by paid developers and has been for many years. The areas that companies see fit to support get a lot of work and are well advanced in the kernel. The areas that companies think are not their problem are rather less so. The difficulties in getting support for realtime development are a clear case in point. Other areas, such as documentation, tend to languish as well. Security is clearly one of those areas. There are a lot of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
There are signs that things might be changing a bit. More developers are showing interest in security-related issues, though commercial support for their work is still less than it should be. The reaction against security-related changes might be less knee-jerk negative than it used to be. Efforts like the Kernel Self Protection Project are starting to work on integrating existing security technologies into the kernel.
We have a long way to go, but, with some support and the right mindset, a lot of progress can be made in a short time. The kernel community can do amazing things when it sets its mind to it. With luck, the Washington Post article will help to provide the needed impetus for that sort of setting of mind. History suggests that we will eventually see this moment as a turning point, when we were finally embarrassed into doing work that has clearly needed doing for a while. Linux should not have a substandard security story for much longer.
---------------------------
via: https://lwn.net/Articles/663474/
作者Jonathan Corbet
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,175 @@
How to Install Laravel PHP Framework on CentOS 7 / Ubuntu 15.04
================================================================================
Hi all, in this article we are going to set up Laravel on CentOS 7 and Ubuntu 15.04. If you are a PHP web developer, you don't need to worry: of all the modern PHP frameworks, Laravel is the easiest to get up and running, saving you time and effort and making web development a joy. Laravel embraces a general development philosophy that sets a high priority on creating maintainable code; by following a few simple guidelines you can keep a rapid pace of development and be free to change your code with little fear of breaking existing functionality.
Installing the Laravel PHP framework is not a big deal. You can simply follow the step-by-step guide in this article on your CentOS 7 or Ubuntu 15.04 server.
### 1) Server Requirements ###
Laravel depends upon a number of prerequisites that must be set up before installing it. These prerequisites include some basic server preparation, such as a system update, sudo rights and the installation of required packages.
Once you are connected to your server, make sure the fully qualified domain name is configured, then run the commands below to enable the EPEL repository and update your server.
#### CentOS-7 ####
# yum install epel-release
----------
# rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
----------
# yum update
#### Ubuntu ####
# apt-get install python-software-properties
# add-apt-repository ppa:ondrej/php5
----------
# apt-get update
----------
# apt-get install -y php5 mcrypt php5-mcrypt php5-gd
### 2) Firewall Setup ###
System firewall and SELinux setup is an important part of securing your applications in production. You can turn the firewall off if you are working on a test server, and set SELinux to permissive mode using the command below, so that your installation won't be affected by it.
# setenforce 0
### 3) Apache, MariaDB, PHP Setup ###
Laravel installation requires a complete LAMP stack with the OpenSSL, PDO, Mbstring and Tokenizer PHP extensions. If you are already running a LAMP server, you can skip this step; just make sure that the required PHP extensions are installed.
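If you want to check which of these extensions are already present, you can list the loaded PHP modules; a quick sketch of such a check (module names may differ slightly between distributions) is:

# php -m | grep -iE 'openssl|pdo|mbstring|tokenizer'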
To install the AMP stack, you can use the commands below on your respective server.
#### CentOS ####
# yum install httpd mariadb-server php56w php56w-mysql php56w-mcrypt php56w-dom php56w-mbstring
To start the Apache web and MySQL/MariaDB services and enable them at boot on CentOS 7, we will use the commands below.
# systemctl start httpd
# systemctl enable httpd
----------
# systemctl start mariadb
# systemctl enable mariadb
After starting the MariaDB service, we will configure its secure root password with the command below.
# mysql_secure_installation
#### Ubuntu ####
# apt-get install mysql-server apache2 libapache2-mod-php5 php5-mysql
### 4) Install Composer ###
Now we are going to install Composer, which is one of the most important requirements before starting the Laravel installation, as it helps to install Laravel's dependencies.
#### CentOS/Ubuntu ####
Run the commands below to set up 'composer' on CentOS/Ubuntu.
# curl -sS https://getcomposer.org/installer | php
# mv composer.phar /usr/local/bin/composer
# chmod +x /usr/local/bin/composer
![composer installation](http://blog.linoxide.com/wp-content/uploads/2015/11/14.png)
### 5) Installing Laravel ###
Laravel's installation package can be downloaded from GitHub using the command below.
# wget https://github.com/laravel/laravel/archive/develop.zip
To extract the archived package and move it into the document root directory, use the commands below.
# unzip develop.zip
----------
# mv laravel-develop /var/www/
Now use the following composer command, which will install all the required dependencies for Laravel within its directory.
# cd /var/www/laravel-develop/
# composer install
![compose laravel](http://blog.linoxide.com/wp-content/uploads/2015/11/25.png)
### 6) Key Encryption ###
For the encrypter service, we will generate a 32-character encryption key using the command below.
# php artisan key:generate
Application key [Lf54qK56s3qDh0ywgf9JdRxO2N0oV9qI] set successfully
Now put this key into the 'app.php' file as shown below.
# vim /var/www/laravel-develop/config/app.php
![Key encryption](http://blog.linoxide.com/wp-content/uploads/2015/11/45.png)
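As a quick check, grepping for the key entry in the config file should show something like the following (the key value is just the one generated in the example above):

# grep "'key'" /var/www/laravel-develop/config/app.php
'key' => env('APP_KEY', 'Lf54qK56s3qDh0ywgf9JdRxO2N0oV9qI'),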
### 7) Virtual Host and Ownership ###
After the composer installation, assign the permissions and Apache user ownership to the document root directory as shown.
# chmod 775 /var/www/laravel-develop/app/storage
----------
# chown -R apache:apache /var/www/laravel-develop
Open the default configuration file of the Apache web server using any editor and add the following lines at the end of the file as a new virtual host entry.
# vim /etc/httpd/conf/httpd.conf
----------
ServerName laravel-develop
DocumentRoot /var/www/laravel-develop/public
<Directory /var/www/laravel-develop>
AllowOverride All
</Directory>
Now it is time to restart the Apache web server as shown below, then open your web browser to check your localhost page.
#### CentOS ####
# systemctl restart httpd
#### Ubuntu ####
# service apache2 restart
### 8) Laravel 5 Web Access ###
Open your web browser, enter your server IP or fully qualified domain name, and you will see the default web page of the Laravel 5 framework.
![Laravel Default](http://blog.linoxide.com/wp-content/uploads/2015/11/35.png)
### Conclusion ###
The Laravel framework is a great tool to develop your web applications. At the end of this article you have learned how to set it up on Ubuntu 15.04 and CentOS 7, so now you can start using this awesome PHP framework that provides you with a lot of features and comfort in your development work. Feel free to comment back with your valuable suggestions and feedback.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-laravel-php-centos-7-ubuntu-15-04/
作者:[Kashif][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/kashifs/


@ -0,0 +1,145 @@
Install and Configure Munin monitoring server in Linux
================================================================================
![](http://www.linuxnix.com/wp-content/uploads/2015/12/munin_page.jpg)
Munin is an excellent system monitoring tool, similar to [RRD tool][1], which will give you ample information about system performance on multiple fronts like **disk, network, process, system and users**. These are some of the properties Munin monitors by default.
### How does Munin work? ###
Munin works in a client-server model. The Munin server process on the main server tries to collect data from client daemons running locally (Munin can monitor its own resources) or from remote clients (Munin can monitor hundreds of machines), and displays the data in graphs on its web interface.
### Configuring Munin in a nutshell ###
This consists of two steps, as we have to configure both the server and the client.
1) Install the Munin server package and configure it so that it gets data from the clients.
2) Configure the Munin client so that the server can connect to the client daemon for data collection.
### Install munin server in Linux ###
Munin server installation on Ubuntu/Debian based machines
apt-get install munin apache2
Munin server installation on Redhat/Centos based machines. Make sure that you [enable EPEL repo][2] before installing Munin on Redhat based machines as by default Redhat based machines do not have Munin in their repos.
yum install munin httpd
### Configuring Munin server in Linux ###
Below are the steps we have to do in order to bring server up.
1. Add the details of the hosts which need monitoring in /etc/munin/munin.conf.
1. Configure the Apache web server to include the Munin details.
1. Create a username and password for the web interface.
1. Restart the Apache server.
**Step 1**: Add a host entry in **/etc/munin/munin.conf**. Go to the end of the file and add the client to monitor. In this example, I added my DB server and its IP address to monitor.
Example:
[db.linuxnix.com]
address 192.168.1.25
use_node_name yes
Save the file and exit.
**Step 2**: Edit/create the munin.conf file in the /etc/apache2/conf.d folder to include the Munin-related Apache configuration. On another note, by default the other Munin web-related files are kept in the /var/www/munin folder.
vi /etc/apache2/conf.d/munin.conf
Content:
Alias /munin /var/www/munin
<Directory /var/www/munin>
Order allow,deny
Allow from localhost 127.0.0.0/8 ::1
AllowOverride None
Options ExecCGI FollowSymlinks
AddHandler cgi-script .cgi
DirectoryIndex index.cgi
AuthUserFile /etc/munin/munin.passwd
AuthType basic
AuthName "Munin stats"
require valid-user
<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault M310
</IfModule>
</Directory>
Save the file and exit
**Step 3**: Now create a username and password for viewing the Munin graphs (the file name matches the AuthUserFile setting above):
htpasswd -c /etc/munin/munin.passwd munin
**Note**: For Redhat/Centos machines replace “**apache2**” with “**httpd**” in each path to access your config files.
**Step 4**: Restart the Apache server so that the Munin configuration is picked up by Apache.
#### Ubuntu/Debian based: ####
service apache2 restart
#### Centos/Redhat based: ####
service httpd restart
### Install and configure Munin client in Linux ###
**Step 1**: Install Munin client in Linux
apt-get install munin-node
**Note**: If you want to monitor your Munin server, then you have to install munin-node on that as well.
**Step 2**: Configure the client by editing the munin-node.conf file.
vi /etc/munin/munin-node.conf
Example:
allow ^127\.0\.0\.1$
allow ^10\.10\.20\.20$
----------
# Which address to bind to;
host *
----------
# And which port
port 4949
**Note**: 10.10.20.20 is my Munin server; it connects to port 4949 on the client to get its data.
**Step 3**: Restart munin-node on client server
service munin-node restart
### Testing connection ###
Check if you are able to connect to the client from the server on port 4949; otherwise, you have to open that port on the client machine:
telnet db.linuxnix.com 4949
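If the connection succeeds, munin-node normally greets you with a banner similar to this (the hostname and IP will be those of your client):

Trying 192.168.1.25...
Connected to db.linuxnix.com.
Escape character is '^]'.
# munin node at db.linuxnix.com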
### Accessing the Munin web interface ###
http://munin.linuxnix.com/munin/index.html
Hope this helps you to configure a basic Munin server.
--------------------------------------------------------------------------------
via: http://www.linuxnix.com/install-and-configure-munin-monitoring-server-in-linux/
作者:[Surendra Anne][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxnix.com/author/surendra/
[1]:http://www.linuxnix.com/network-monitoringinfo-gathering-tools-in-linux/
[2]:http://www.linuxnix.com/how-to-install-and-enable-epel-repo-in-rhel-centos-oracle-scentific-linux/


@ -0,0 +1,196 @@
translation by strugglingyouth
Linux / Unix: jobs Command Examples
================================================================================
I am a new Linux and Unix user. How do I show the active jobs on Linux or Unix-like systems using a BASH/KSH/TCSH or POSIX based shell? How can I display the status of jobs in the current session on Unix/Linux?
Job control is nothing but the ability to stop/suspend the execution of processes (commands) and continue/resume their execution as per your requirements. This is done using your operating system and a shell such as bash/ksh or a POSIX shell.
Your shell keeps a table of currently executing jobs, which can be displayed with the jobs command.
### Purpose ###
> Displays status of jobs in the current shell session.
### Syntax ###
The basic syntax is as follows:
jobs
OR
jobs jobID
OR
jobs [options] jobID
### Starting few jobs for demonstration purpose ###
Before you start using the jobs command, you need to start a couple of jobs on your system. Type the following commands to start jobs:
### Start xeyes, calculator, and gedit text editor ###
xeyes &
gnome-calculator &
gedit fetch-stock-prices.py &
Finally, run ping command in foreground:
ping www.cyberciti.biz
To suspend the ping command job, hit the **Ctrl-Z** key sequence.
### jobs command examples ###
To display the status of jobs in the current shell, enter:
$ jobs
Sample outputs:
[1] 7895 Running gpass &
[2] 7906 Running gnome-calculator &
[3]- 7910 Running gedit fetch-stock-prices.py &
[4]+ 7946 Stopped ping cyberciti.biz
To display the process ID or jobs for the job whose name begins with "p," enter:
$ jobs -p %p
OR
$ jobs %p
Sample outputs:
[4]- Stopped ping cyberciti.biz
The character % introduces a job specification. In this example, the string p matches the suspended command whose name begins with it; you could equally use a longer specification such as %ping.
### How do I show process IDs in addition to the normal information? ###
Pass the -l (lowercase L) option to the jobs command for more information about each job listed:
$ jobs -l
Sample outputs:
![Fig.01: Displaying the status of jobs in the shell](http://s0.cyberciti.org/uploads/faq/2013/02/jobs-command-output.jpg)
Fig.01: Displaying the status of jobs in the shell
### How do I list only processes that have changed status since the last notification? ###
First, start a new job as follows:
$ sleep 100 &
Now, to show only the jobs that have stopped or exited since the last notification, type:
$ jobs -n
Sample outputs:
[5]- Running sleep 100 &
### Display process IDs (PIDs) only ###
Pass the -p option to jobs command to display PIDs only:
$ jobs -p
Sample outputs:
7895
7906
7910
7946
7949
### How do I display only running jobs? ###
Pass the -r option to the jobs command to display running jobs only:
$ jobs -r
Sample outputs:
[1] Running gpass &
[2] Running gnome-calculator &
[3]- Running gedit fetch-stock-prices.py &
### How do I display only jobs that have stopped? ###
Pass the -s option to the jobs command to display stopped jobs only:
$ jobs -s
Sample outputs:
[4]+ Stopped ping cyberciti.biz
To resume the ping cyberciti.biz job, enter the following bg command:
$ bg %4
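Sample outputs (the job number matches the example above):

[4]+ ping cyberciti.biz &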
### jobs command options ###
From the [bash(1)][1] command man page:
注:表格
<table border="1">
<tbody>
<tr>
<td>Option</td>
<td>Description</td>
</tr>
<tr>
<td><kbd><strong>-l</strong></kbd></td>
<td>Show process IDs in addition to the normal information.</td>
</tr>
<tr>
<td><kbd><strong>-p</strong></kbd></td>
<td>Show process IDs only.</td>
</tr>
<tr>
<td><kbd><strong>-n</strong></kbd></td>
<td>Show only processes that have changed status since the last notification.</td>
</tr>
<tr>
<td><kbd><strong>-r</strong></kbd></td>
<td>Restrict output to running jobs only.</td>
</tr>
<tr>
<td><kbd><strong>-s</strong></kbd></td>
<td>Restrict output to stopped jobs only.</td>
</tr>
<tr>
<td><kbd><strong>-x</strong></kbd></td>
<td>COMMAND is run after all job specifications that appear in ARGS have been replaced with the process ID of that job's process group leader.</td>
</tr>
</tbody>
</table>
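As a small illustration of the -x option (using job 1 from the samples above), the following substitutes the PID of that job's process group leader for %1 and runs echo with it:

$ jobs -x echo %1

Sample outputs:

7895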
### A note about /usr/bin/jobs and shell builtin ###
Type the following type command to find out whether jobs is a shell built-in, an external command, or both:
$ type -a jobs
Sample outputs:
jobs is a shell builtin
jobs is /usr/bin/jobs
In almost all cases you need to use the jobs command that is implemented as a BASH/KSH/POSIX shell built-in. The /usr/bin/jobs command cannot be used in the current shell. The /usr/bin/jobs command operates in a different environment and does not share the parent bash/ksh shell's understanding of jobs.
--------------------------------------------------------------------------------
via:
作者Vivek Gite
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://www.manpager.com/linux/man1/bash.1.html


@ -0,0 +1,54 @@
NetworkManager and privacy in the IPv6 internet
======================
IPv6 is gaining momentum. With the growing use of the protocol, concerns about privacy that were not initially anticipated have arisen. The Internet community actively publishes solutions to them. What's the current state and how does NetworkManager catch up? Let's figure it out!
![](https://blogs.gnome.org/lkundrak/files/2015/12/cameras1.jpg)
## The identity of an IPv6-connected host
IPv6-enabled nodes don't need a central authority similar to IPv4 [DHCP](https://tools.ietf.org/html/rfc2132) servers to configure their addresses. They discover the networks they are in and [complete the addresses themselves](https://tools.ietf.org/html/rfc4862) by generating the host part. This makes the network configuration simpler and scales better to larger networks. However, there are some drawbacks to this approach. Firstly, the node needs to ensure that its address doesn't collide with the address of any other node on the network. Secondly, if the node uses the same host part of the address in every network it enters, then its movements can be tracked and its privacy is at risk.
The Internet Engineering Task Force (IETF), the organization behind the Internet standards, [acknowledged this problem](https://tools.ietf.org/html/draft-iesg-serno-privacy-00) and recommends against the use of hardware serial numbers to identify a node in the network.
But what does the actual implementation look like?
The problem of address uniqueness is addressed with the [Duplicate Address Detection](https://tools.ietf.org/html/rfc4862#section-5.4) (DAD) mechanism. When a node creates an address for itself, it first checks whether another node uses the same address using the [Neighbor Discovery Protocol](https://tools.ietf.org/html/rfc4861) (a mechanism not unlike the IPv4 [ARP](https://tools.ietf.org/html/rfc826) protocol). When it discovers the address is already used, it must discard it.
The other problem (privacy) is a bit harder to solve. An IP address (be it IPv4 or IPv6) consists of a network part and a host part. The host discovers the relevant network parts and is supposed to generate the host part. Traditionally it just uses an Interface Identifier derived from the network hardware's (MAC) address. The MAC address is set at manufacturing time and can uniquely identify the machine. This guarantees the address is stable and unique. That's a good thing for address collision avoidance but a bad thing for privacy. The host part remaining constant in different networks means that the machine can be uniquely identified as it enters different networks. This seemed like a non-issue at the time the protocol was designed, but the privacy concerns arose as IPv6 gained popularity. Fortunately, there's a solution to this problem.
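To make the traditional scheme concrete, here is a sketch with a made-up MAC address: the modified EUI-64 procedure flips the universal/local bit of the first octet and inserts ff:fe in the middle.

MAC address: 52:54:00:12:34:56
Interface identifier: 5054:00ff:fe12:3456
Link-local address: fe80::5054:ff:fe12:3456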
## Enter privacy extensions
It's no secret that the biggest problem with IPv4 is that the addresses are scarce. This is no longer true with IPv6, and in fact an IPv6-enabled host can use addresses quite liberally. There's absolutely nothing wrong with having multiple IPv6 addresses attached to the same interface. On the contrary, it's a pretty standard situation. At the very minimum each node has an address that is used for contacting nodes on the same hardware link, called a link-local address. When the network contains a router that connects it to other networks in the internet, a node has an address for every network it's directly connected to. If a host has more addresses in the same network, the node accepts incoming traffic for all of them. For the outgoing connections, which, of course, reveal the address to the remote host, the kernel picks the fittest one. But which one is it?
With privacy extensions enabled, as defined by [RFC4941](https://tools.ietf.org/html/rfc4941), a new address with a random host part is generated every now and then. The newest one is used for new outgoing connections while the older ones are deprecated when they're unused. This is a nifty trick: the host does not reveal the stable address as it's not used for outgoing connections, but still accepts connections to it from the hosts that are aware of it.
There's a downside to this. Certain applications tie the address to the user's identity. Consider a web application that issues an HTTP cookie for the user during authentication but only accepts it for connections that come from the address that conducted the authentication. As the kernel generates a new temporary address, the server would reject the requests that use it, effectively logging the user out. It could be argued that the address is not an appropriate mechanism for establishing a user's identity, but that's what some real-world applications do.
## Privacy stable addressing to the rescue
Another approach is needed to cope with this. There's a need for an address that is unique (of course), stable for a particular network, but still changes when the user enters another network so that tracking is not possible. RFC7217 introduces a mechanism that provides exactly this.
Creation of a privacy stable address relies on a pseudo-random key that's only known to the host itself and never revealed to other hosts in the network. This key is then hashed using a cryptographically secure algorithm along with values specific to a particular network connection. These include an identifier of the network interface, the network prefix and possibly other values specific to the network, such as the wireless SSID. The use of the secret key makes it impossible for the other hosts to predict the resulting address, while the network-specific data causes it to be different when entering a different network.
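Schematically, RFC7217 derives the random identifier (RID) that becomes the host part roughly as follows (F is the cryptographically secure hash mentioned above; the argument names follow the RFC):

RID = F(Prefix, Net_Iface, Network_ID, DAD_Counter, secret_key)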
This also solves the duplicate address problem nicely. The random key makes collisions unlikely. If, in spite of this, a collision occurs, then the hash can be salted with a DAD failure counter and a different address can be generated instead of failing the network connectivity. Now that's clever.
Using privacy stable addressing doesn't interfere with the privacy extensions at all. You can use the [RFC7217](https://tools.ietf.org/html/rfc7217) stable address while still employing the RFC4941 temporary addresses at the same time.
## Where does NetworkManager stand?
We've already enabled the privacy extensions with the NetworkManager 1.0.4 release. They're turned on by default; you can control them with the ipv6.ip6-privacy property.
With the release of NetworkManager 1.2, we're adding the stable privacy addressing. It's supposed to address the situations where the privacy extensions don't make the cut. The use of the feature is controlled with the ipv6.addr-gen-mode property. If it's set to "stable-privacy", stable privacy addressing is used. Setting it to "eui64" or not setting it at all preserves the traditional default behavior.
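As a sketch of how these properties might be set on a particular connection with nmcli (the connection name "Wired connection 1" is just an example; the second command requires the 1.2 release discussed here):

$ nmcli connection modify "Wired connection 1" ipv6.ip6-privacy 2
$ nmcli connection modify "Wired connection 1" ipv6.addr-gen-mode stable-privacy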
Stay tuned for the NetworkManager 1.2 release in early 2016! If you want to try the bleeding-edge snapshot, give Fedora Rawhide a try. It will eventually become Fedora 24.
*I'd like to thank Hannes Frederic Sowa for his valuable feedback. The article would make less sense without his corrections. Hannes also created the in-kernel implementation of the RFC7217 mechanism, which can be used when the networking is not managed by NetworkManager.*
--------------------------------------------------------------------------------
via: https://blogs.gnome.org/lkundrak/2015/12/03/networkmanager-and-privacy-in-the-ipv6-internet/
作者:[Lubomir Rintel]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -0,0 +1,46 @@
Supporting secure DNS in glibc
========================
Credit: Jonathan Corbet
One of the many weak links in Internet security is the domain name system (DNS); it is subject to attacks that, among other things, can mislead applications regarding the IP address of a system they wish to connect to. That, in turn, can cause connections to go to the wrong place, facilitating man-in-the-middle attacks and more. The DNSSEC protocol extensions are meant to address this threat by setting up a cryptographically secure chain of trust for DNS information. When DNSSEC is set up properly, applications should be able to trust the results of domain lookups. As the discussion over an attempt to better integrate DNSSEC into the GNU C Library shows, though, ensuring that DNS lookups are safe is still not a straightforward problem.
In a sense, the problem was solved years ago; one can configure a local nameserver to perform full DNSSEC verification and use that server via glibc calls in applications. DNSSEC can even be used to increase security in other areas; it can, for example, carry SSH or TLS key fingerprints, allowing applications to verify that they are talking to the right server. Things get tricky, though, when one wants to be sure that DNS results claiming to have DNSSEC verification are actually what they claim to be — when one wants the security that DNSSEC is meant to provide, in other words.
The /etc/resolv.conf problem
Part of the problem, from the glibc perspective, is that glibc itself does not do DNSSEC verification. Instead, it consults /etc/resolv.conf and asks the servers found therein to do the lookup and verification; the results are then returned to the application. If the application is using the low-level res_query() interface, those results may include the "authenticated data" (AD) flag (if the nameserver has set it) indicating that DNSSEC verification has been successfully performed. But glibc knows nothing about the trustworthiness of the nameserver that has provided those results, so it cannot tell the application anything about whether they should really be trusted.
One of the first steps suggested by glibc maintainer Carlos O'Donell is to add an option (dns-strip-dnssec-ad-bit) to the resolv.conf file telling glibc to unconditionally remove the AD bit. This option could be set by distributions to indicate that the DNS lookup results cannot be trusted at a DNSSEC level. Once things have been set up so that the results can be trusted, that option can be removed. In the meantime, though, applications would have a way to judge the DNS lookup results they get from glibc, something that does not exist now.
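As a purely illustrative sketch, a distribution that cannot yet vouch for its resolver might ship something like the following in /etc/resolv.conf (the option name comes from the proposal above; the nameserver address is made up):

nameserver 192.168.1.1
options dns-strip-dnssec-ad-bit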
What would a trustworthy setup look like? The standard picture looks something like this: there is a local nameserver, accessed via the loopback interface, as the only entry in /etc/resolv.conf. That nameserver would be configured to do verification and, in the case that verification fails, simply return no results at all. There would, in almost all cases, be no need to worry about whether applications see the AD bit or not; if the results are not trustworthy, applications will simply not see them at all. A number of distributions are moving toward this model, but the situation is still not as simple as some might think.
One problem is that this scheme makes /etc/resolv.conf into a central point of trust for the system. But, in a typical Linux system, there are no end of DHCP clients, networking scripts, and more that will make changes to that file. As Paul Wouters pointed out, locking down this file in the short term is not really an option. Sometimes those changes are necessary: when a diskless system is booting, it may need name-resolution service before it is at a point where it can start up its own nameserver. A system's entire DNS environment may change depending on which network it is attached to. Systems in containers may be best configured to talk to a nameserver on the host. And so on.
So there seems to be a general belief that /etc/resolv.conf cannot really be trusted on current systems. Ideas to add secondary configuration files (/etc/secure-resolv.conf or whatever) have been floated, but they don't much change the basic nature of the situation. Beyond that, some participants felt that even a local nameserver running on the loopback interface is not really trustworthy; Zack Weinberg suggested that administrators might intentionally short out DNSSEC validation, for example.
Since the configuration cannot be trusted on current systems, the reasoning goes, glibc needs to have a way to indicate to applications when the situation has improved and things can be trusted. That could include the AD-stripping option described above (or, conversely, an explicit "this nameserver is trusted" option); that, of course, would require that the system be locked down to a level where surprising changes to /etc/resolv.conf no longer happen. A variant, as suggested by Petr Spacek, is to have a way for an application to ask glibc whether it is talking to a local nameserver or not.
Do it in glibc?
An alternative would be to dispense with the nameserver and have glibc do DNSSEC validation itself. There is, however, resistance to putting a big pile of cryptographic code into glibc itself. That would increase the size of the library and, it is felt, increase the attack surface of any application using it. A variant of this idea, suggested by Zack, would be to put the validation code into the name-service caching daemon (nscd) instead. Since nscd is part of glibc, it is under the control of the glibc developers and there could be a certain amount of confidence that DNSSEC validation is being performed properly. The location of the nscd socket is well known, so the /etc/resolv.conf issues don't come into play. Carlos worried, though, that this approach might deter adoption by users who do not want the caching features of nscd; in his mind, that seems to rule out the nscd option.
So, in the short term, at least, it seems unlikely that glibc will take on the full task of performing validated DNSSEC lookups. That means that, if security-conscious applications are going to use glibc for their name lookups, the library will have to provide an indication of how trustworthy the results received from a separate nameserver are. And that will almost certainly require explicit action on the part of the distributor and/or system administrator. As Simo Sorce put it:
A situation in which glibc does not use an explicit configuration option to signal applications that it is using a trusted resolver is not useful ... no scratch that, it is actively harmful, because applications developers will quickly realize they cannot trust any information coming from glibc and will simply not use it for DNSSEC related information.
Configuring a system to properly use DNSSEC involves change to many of the components of that system — it is a distribution-wide problem that will take time to solve fully. The role that glibc plays in this transition is likely to be relatively small, but it is an important one: glibc is probably the only place where applications can receive some assurance that their DNS results are trustworthy without implementing their own resolver code. Running multiple DNSSEC implementations on a system seems like an unlikely path to greater security, so it would be good to get this right.
The glibc project has not yet chosen a path by which it intends to get things right, though some sort of annotation in /etc/resolv.conf looks like a likely outcome. Any such change would then have to get into a release; given the conservative nature of glibc development, it may already be late for the 2.23 release, which is likely to happen in February. So higher DNSSEC awareness in glibc may not happen right away, but there is at least some movement in that direction.
---------------------------
via: https://lwn.net/Articles/663474/
作者Jonathan Corbet
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,65 @@
How to Customize Time & Date Format in Ubuntu Panel
================================================================================
![Time & Date format](http://ubuntuhandbook.org/wp-content/uploads/2015/08/ubuntu_tips1.png)
This quick tutorial is going to show you how to customize the Time & Date indicator in the Ubuntu panel, though there are already a few options available in the settings page.
![custom-timedate](http://ubuntuhandbook.org/wp-content/uploads/2015/12/custom-timedate.jpg)
To get started, search for and install **dconf Editor** in Ubuntu Software Center. Then launch the software and follow the steps below:
**1.** When dconf Editor launches, navigate to **com -> canonical -> indicator -> datetime**. Set the value of **time-format** to **custom**.
![custom time format](http://ubuntuhandbook.org/wp-content/uploads/2015/12/time-format.jpg)
You can also do this via a command in terminal:
gsettings set com.canonical.indicator.datetime time-format 'custom'
**2.** Now you can customize the Time & Date format by editing the value of **custom-time-format**.
![customize-timeformat](http://ubuntuhandbook.org/wp-content/uploads/2015/12/customize-timeformat.jpg)
You can also do this via command:
gsettings set com.canonical.indicator.datetime custom-time-format 'FORMAT_VALUE_HERE'
Interpreted sequences are:
- %a = abbreviated weekday name
- %A = full weekday name
- %b = abbreviated month name
- %B = full month name
- %d = day of month
- %l = hour ( 1..12), %I = hour (01..12)
- %k = hour ( 0..23), %H = hour (00..23)
- %M = minute (00..59)
- %p = AM or PM, %P = am or pm.
- %S = second (00..59)
- Open a terminal and run the command `man date` for more details.
Some examples:
custom time format value: **%a %H:%M %m/%d/%Y**
![exam-1](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-1.jpg)
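For instance, the first example can be applied from a terminal by plugging the format string into the command shown earlier:

gsettings set com.canonical.indicator.datetime custom-time-format '%a %H:%M %m/%d/%Y'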
**%a %r %b %d or %a %I:%M:%S %p %b %d**
![exam-2](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-2.jpg)
**%a %-d %b %l:%M %P %z**
![exam-3](http://ubuntuhandbook.org/wp-content/uploads/2015/12/exam-3.jpg)
--------------------------------------------------------------------------------
via: http://ubuntuhandbook.org/index.php/2015/12/time-date-format-ubuntu-panel/
作者:[Ji m][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ubuntuhandbook.org/index.php/about/


@ -0,0 +1,267 @@
How to Install Bugzilla with Apache and SSL on FreeBSD 10.2
================================================================================
Bugzilla is an open source, web-based bug tracking and testing tool, developed by the Mozilla project and licensed under the Mozilla Public License. It is used by high-tech projects and companies like Mozilla, Red Hat and GNOME. Bugzilla was originally created by Terry Weissman in 1998. It is written in Perl and uses MySQL as the database back-end. It is server software designed to help you manage software development. Bugzilla has a lot of features: an optimized database, excellent security, an advanced search tool, integrated email capabilities, etc.
In this tutorial we will install Bugzilla 5.0 with Apache as the web server and enable SSL for it. Then we will install MySQL 5.1 as the database system, all on FreeBSD 10.2.
#### Prerequisite ####
FreeBSD 10.2 - 64bit.
Root privileges.
### Step 1 - Update System ###
Log in to the FreeBSD server with SSH and update the repository database:
sudo su
freebsd-update fetch
freebsd-update install
### Step 2 - Install and Configure Apache ###
In this step we will install Apache from the FreeBSD repositories with the pkg command. Then we will configure Apache by editing the "httpd.conf" file in the apache24 directory, enabling SSL and CGI support.
Install Apache with the pkg command:
pkg install apache24
Go to the apache directory and edit the "httpd.conf" file with the nano editor:
cd /usr/local/etc/apache24
nano -c httpd.conf
Uncomment the lines listed below:
#Line 70
LoadModule authn_socache_module libexec/apache24/mod_authn_socache.so
#Line 89
LoadModule socache_shmcb_module libexec/apache24/mod_socache_shmcb.so
# Line 117
LoadModule expires_module libexec/apache24/mod_expires.so
#Line 141 to enabling SSL
LoadModule ssl_module libexec/apache24/mod_ssl.so
# Line 162 for cgi support
LoadModule cgi_module libexec/apache24/mod_cgi.so
# Line 174 to enable mod_rewrite
LoadModule rewrite_module libexec/apache24/mod_rewrite.so
# Line 219 for the servername configuration
ServerName 127.0.0.1:80
Save and exit.
Next, we need to install mod_perl from the FreeBSD repository, and then enable it:
pkg install ap24-mod_perl2
To enable mod_perl, edit httpd.conf and add the "LoadModule" line below:
nano -c httpd.conf
Add the line below:
# Line 175
LoadModule perl_module libexec/apache24/mod_perl.so
Save and exit.
Before starting Apache, enable it at boot time with the sysrc command, then start the service:
sysrc apache24_enable=yes
service apache24 start
### Step 3 - Install and Configure MySQL Database ###
We will use MySQL 5.1 for the database back-end, along with the Perl DBD driver for MySQL. Install the packages with the pkg command below:
pkg install p5-DBD-mysql51 mysql51-server mysql51-client
Now we must enable MySQL at boot time, then start the service and configure the root password for MySQL.
Run the commands below to do it all:
sysrc mysql_enable=yes
service mysql-server start
mysqladmin -u root password aqwe123
Note :
mysql password : aqwe123
![Configure MySQL Password](http://blog.linoxide.com/wp-content/uploads/2015/12/Configure-MySQL-Password.png)
Next, we will log in to the MySQL shell with the user root and the password that we configured above, then we will create a new database and user for the Bugzilla installation.
Log in to the mysql shell with command below :
mysql -u root -p
password: aqwe123
Add the database :
create database bugzilladb;
create user bugzillauser@localhost identified by 'bugzillauser@';
grant all privileges on bugzilladb.* to bugzillauser@localhost identified by 'bugzillauser@';
flush privileges;
\q
![Creating Database for Bugzilla](http://blog.linoxide.com/wp-content/uploads/2015/12/Creating-Database-for-Bugzilla.png)
The database for Bugzilla has been created: database "bugzilladb" with user "bugzillauser" and password "bugzillauser@".
### Step 4 - Generate New SSL Certificate ###
Generate a new self-signed SSL certificate in the directory "ssl" for the Bugzilla site.
Go to the apache24 directory and create a new directory "ssl" in it:
cd /usr/local/etc/apache24/
mkdir ssl; cd ssl
Next, generate the certificate file with the openssl command, then change the permissions of the certificate and key files:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /usr/local/etc/apache24/ssl/bugzilla.key -out /usr/local/etc/apache24/ssl/bugzilla.crt
chmod 600 *
### Step 5 - Configure Virtualhost ###
We will install Bugzilla in the directory "/usr/local/www/bugzilla", so we must create a new virtual host configuration for it.
Go to the apache directory and create a new directory called "vhost" for the virtual host file:
cd /usr/local/etc/apache24/
mkdir vhost; cd vhost
Now create new file "bugzilla.conf" for the virtualhost file :
nano -c bugzilla.conf
Paste configuration below :
<VirtualHost *:80>
ServerName mybugzilla.me
ServerAlias www.mybuzilla.me
DocumentRoot /usr/local/www/bugzilla
Redirect permanent / https://mybugzilla.me/
</VirtualHost>
Listen 443
<VirtualHost _default_:443>
ServerName mybugzilla.me
DocumentRoot /usr/local/www/bugzilla
ErrorLog "/var/log/mybugzilla.me-error_log"
CustomLog "/var/log/mybugzilla.me-access_log" common
SSLEngine On
SSLCertificateFile /usr/local/etc/apache24/ssl/bugzilla.crt
SSLCertificateKeyFile /usr/local/etc/apache24/ssl/bugzilla.key
<Directory "/usr/local/www/bugzilla">
AddHandler cgi-script .cgi
Options +ExecCGI
DirectoryIndex index.cgi index.html
AllowOverride Limit FileInfo Indexes Options
Require all granted
</Directory>
</VirtualHost>
Save and exit.
When all that is done, create a new directory for the Bugzilla installation and then enable the Bugzilla virtual host by adding its configuration to the httpd.conf file.
Run the commands below in the "apache24" directory:
mkdir -p /usr/local/www/bugzilla
cd /usr/local/etc/apache24/
nano -c httpd.conf
At the end of the file, add the configuration below:
Include etc/apache24/vhost/*.conf
Save and exit.
Now test the Apache configuration with the "apachectl" command and restart the service:
apachectl configtest
service apache24 restart
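If the virtual host configuration is valid, apachectl configtest should report something like:

Syntax OK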
### Step 6 - Install Bugzilla ###
We can install Bugzilla manually by downloading the source, or install it from the FreeBSD repository. In this step we will install Bugzilla from the FreeBSD repository with the pkg command:
pkg install bugzilla50
Once that is done, go to the Bugzilla installation directory and install all the Perl modules needed by Bugzilla:
cd /usr/local/www/bugzilla
./install-module --all
Wait until everything is finished; it can take some time.
Next, generate the configuration file "localconfig" by executing the "checksetup.pl" script in the Bugzilla installation directory:
./checksetup.pl
You will see an error message about the database configuration, so edit the "localconfig" file with the nano editor:
nano -c localconfig
Now add the database details that were created in step 3:
#Line 57
$db_name = 'bugzilladb';
#Line 60
$db_user = 'bugzillauser';
#Line 67
$db_pass = 'bugzillauser@';
Save and exit.
Then run "checksetup.pl" again :
./checksetup.pl
You will be prompted for a mail and administrator account; fill everything in with your email address, username and password.
![Admin Setup](http://blog.linoxide.com/wp-content/uploads/2015/12/Admin-Setup.png)
Finally, we need to change the owner of the installation directory to the user "www", then restart Apache with the service command:
cd /usr/local/www/
chown -R www:www bugzilla
service apache24 restart
Bugzilla is now installed. You can see it by visiting mybugzilla.me, and you will be redirected to the HTTPS connection.
Bugzilla home page.
![Bugzilla Home](http://blog.linoxide.com/wp-content/uploads/2015/12/Bugzilla-Home.png)
Bugzilla admin panel.
![Bugzilla Admin Page](http://blog.linoxide.com/wp-content/uploads/2015/12/Bugzilla-Admin-Page.png)
### Conclusion ###
Bugzilla is a web-based application that helps you manage software development. It is written in Perl and uses MySQL as the database system. Bugzilla is used by Mozilla, Red Hat, GNOME and others to support their software development. Bugzilla has a lot of features and is easy to configure and install.
--------------------------------------------------------------------------------
via: http://linoxide.com/tools/install-bugzilla-apache-ssl-freebsd-10-2/
作者:[Arul][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arulm/


@ -0,0 +1,59 @@
How to renew the ISPConfig 3 SSL Certificate
================================================================================
This tutorial describes the steps to renew the SSL Certificate of the ISPConfig 3 control panel. There are two alternative ways to achieve that:
- Create a new OpenSSL Certificate and CSR on the command line with OpenSSL.
- Renew the SSL Certificate with the ISPConfig updater
I'll start with the manual way to renew the SSL cert.
### 1) Create a new ISPConfig 3 SSL Certificate with OpenSSL ###
Log in to your server shell as the root user. Before we create a new SSL cert, back up the current ones. SSL certs are security sensitive, so I'll store the backup in the /root/ folder.
tar pcfz /root/ispconfig_ssl_backup.tar.gz /usr/local/ispconfig/interface/ssl
chmod 600 /root/ispconfig_ssl_backup.tar.gz
Now create a new SSL certificate key, a certificate request (csr) and a self-signed certificate.
cd /usr/local/ispconfig/interface/ssl
openssl genrsa -des3 -out ispserver.key 4096
openssl req -new -key ispserver.key -out ispserver.csr
openssl x509 -req -days 3650 -in ispserver.csr \
-signkey ispserver.key -out ispserver.crt
openssl rsa -in ispserver.key -out ispserver.key.insecure
mv ispserver.key ispserver.key.secure
mv ispserver.key.insecure ispserver.key
Restart Apache to load the new SSL Certificate.
service apache2 restart
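To verify that the renewed certificate is in place, you can inspect its validity dates, for example:

openssl x509 -in /usr/local/ispconfig/interface/ssl/ispserver.crt -noout -dates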
### 2) Renew the SSL Certificate with the ISPConfig installer ###
The alternative way to get a new SSL Certificate is to use the ISPConfig update script.
Download ISPConfig to the /tmp folder, unpack the archive and start the update script.
cd /tmp
wget http://www.ispconfig.org/downloads/ISPConfig-3-stable.tar.gz
tar xvfz ISPConfig-3-stable.tar.gz
cd ispconfig3_install/install
php -q update.php
The update script will ask the following question during update:
Create new ISPConfig SSL certificate (yes,no) [no]:
Answer "yes" here and the SSL Certificate creation dialog will start.
--------------------------------------------------------------------------------
via: http://www.faqforge.com/linux/how-to-renew-the-ispconfig-3-ssl-certificate/
作者:[Till][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.faqforge.com/author/till/


@ -0,0 +1,450 @@
Getting started with Docker by Dockerizing this Blog
======================
>This article covers the basic concepts of Docker and how to Dockerize an application by creating a custom Dockerfile
>Written by Benjamin Cane on 2015-12-01 10:00:00
Docker is an interesting technology that over the past 2 years has gone from an idea, to being used by organizations all over the world to deploy applications. In today's article I am going to cover how to get started with Docker by "Dockerizing" an existing application. The application in question is actually this very blog!
## What is Docker
Before we dive into learning the basics of Docker let's first understand what Docker is and why it is so popular. Docker, is an operating system container management tool that allows you to easily manage and deploy applications by making it easy to package them within operating system containers.
### Containers vs. Virtual Machines
Containers may not be as familiar as virtual machines but they are another method to provide **Operating System Virtualization**. However, they differ quite a bit from standard virtual machines.
Standard virtual machines generally include a full Operating System, OS Packages and eventually an Application or two. This is made possible by a Hypervisor which provides hardware virtualization to the virtual machine. This allows for a single server to run many standalone operating systems as virtual guests.
Containers are similar to virtual machines in that they allow a single server to run multiple operating environments, these environments however, are not full operating systems. Containers generally only include the necessary OS Packages and Applications. They do not generally contain a full operating system or hardware virtualization. This also means that containers have a smaller overhead than traditional virtual machines.
Containers and Virtual Machines are often seen as conflicting technology, however, this is often a misunderstanding. Virtual Machines are a way to take a physical server and provide a fully functional operating environment that shares those physical resources with other virtual machines. A Container is generally used to isolate a running process within a single host to ensure that the isolated processes cannot interact with other processes within that same system. In fact containers are closer to **BSD Jails** and `chroot`'ed processes than full virtual machines.
### What Docker provides on top of containers
Docker itself is not a container runtime environment; in fact Docker is actually container technology agnostic with efforts planned for Docker to support [Solaris Zones](https://blog.docker.com/2015/08/docker-oracle-solaris-zones/) and [BSD Jails](https://wiki.freebsd.org/Docker). What Docker provides is a method of managing, packaging, and deploying containers. While these types of functions may exist to some degree for virtual machines, they traditionally have not existed for most container solutions, and the ones that did exist were not as easy to use or as fully featured as Docker.
Now that we know what Docker is, let's start learning how Docker works by first installing Docker and deploying a public pre-built container.
## Starting with Installation
As Docker is not installed by default, step 1 will be to install the Docker package; since our example system is running Ubuntu 14.04 we will do this using the Apt package manager.
```
# apt-get install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
aufs-tools cgroup-lite git git-man liberror-perl
Suggested packages:
btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc
git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki
git-svn
The following NEW packages will be installed:
aufs-tools cgroup-lite docker.io git git-man liberror-perl
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 7,553 kB of archives.
After this operation, 46.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
```
To check if any containers are running we can execute the `docker` command using the `ps` option.
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
The `ps` function of the `docker` command works similarly to the Linux `ps` command. It will show available Docker containers and their current status. Since we have not started any Docker containers yet, the command shows no running containers.
## Deploying a pre-built nginx Docker container
One of my favorite features of Docker is the ability to deploy a pre-built container in the same way you would deploy a package with `yum` or `apt-get`. To explain this better let's deploy a pre-built container running the nginx web server. We can do this by executing the `docker` command again, however, this time with the `run` option.
```
# docker run -d nginx
Unable to find image 'nginx' locally
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete
```
The `run` function of the `docker` command tells Docker to find a specified Docker image and start a container running that image. By default, Docker containers run in the foreground, meaning when you execute `docker run` your shell will be bound to the container's console and the process running within the container. In order to launch this Docker container in the background I included the `-d` (**detach**) flag.
By executing `docker ps` again we can see the nginx container running.
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6d31ab01fc9 nginx:latest nginx -g 'daemon off 4 seconds ago Up 3 seconds 443/tcp, 80/tcp desperate_lalande
```
In the above output we can see the running container `desperate_lalande` and that this container has been built from the `nginx:latest` image.
### Docker Images
Images are one of Docker's key features and are similar to virtual machine images. Like virtual machine images, a Docker image is a container that has been saved and packaged. Docker, however, doesn't just stop with the ability to create images. Docker also includes the ability to distribute those images via Docker repositories, which are a similar concept to package repositories. This is what gives Docker the ability to deploy an image like you would deploy a package with `yum`. To get a better understanding of how this works let's look back at the output of the `docker run` execution.
```
# docker run -d nginx
Unable to find image 'nginx' locally
```
The first message we see is that `docker` could not find an image named nginx locally. The reason we see this message is that when we executed `docker run` we told Docker to start up a container, a container based on an image named **nginx**. Since Docker is starting a container based on a specified image it needs to first find that image. Before checking any remote repository Docker first checks locally to see if there is a local image with the specified name.
Since this system is brand new there is no Docker image with the name nginx, which means Docker will need to download it from a Docker repository.
```
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete
```
This is exactly what the second part of the output is showing us. By default, Docker uses the [Docker Hub](https://hub.docker.com/) repository, which is a repository service that Docker (the company) runs.
Like GitHub, Docker Hub is free for public repositories but requires a subscription for private repositories. It is possible however, to deploy your own Docker repository, in fact it is as easy as `docker run registry`. For this article we will not be deploying a custom registry service.
### Stopping and Removing the Container
Before moving on to building a custom Docker container let's first clean up our Docker environment. We will do this by stopping the container from earlier and removing it.
To start a container we executed `docker` with the `run` option, in order to stop this same container we simply need to execute the `docker` with the `kill` option specifying the container name.
```
# docker kill desperate_lalande
desperate_lalande
```
If we execute `docker ps` again we will see that the container is no longer running.
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
However, at this point we have only stopped the container; while it may no longer be running it still exists. By default, `docker ps` will only show running containers, if we add the `-a` (all) flag it will show all containers running or not.
```
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6d31ab01fc9 5c82215b03d1 nginx -g 'daemon off 4 weeks ago Exited (-1) About a minute ago desperate_lalande
```
In order to fully remove the container we can use the `docker` command with the `rm` option.
```
# docker rm desperate_lalande
desperate_lalande
```
While this container has been removed, we still have an **nginx** image available. If we were to re-run `docker run -d nginx` again the container would be started without having to fetch the nginx image again. This is because Docker already has a saved copy on our local system.
To see a full list of local images we can simply run the `docker` command with the `images` option.
```
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
nginx latest 9fab4090484a 5 days ago 132.8 MB
```
## Building our own custom image
At this point we have used a few basic Docker commands to start, stop and remove a common pre-built image. In order to "Dockerize" this blog however, we are going to have to build our own Docker image and that means creating a **Dockerfile**.
With most virtual machine environments if you wish to create an image of a machine you need to first create a new virtual machine, install the OS, install the application and then finally convert it to a template or image. With Docker however, these steps are automated via a Dockerfile. A Dockerfile is a way of providing build instructions to Docker for the creation of a custom image. In this section we are going to build a custom Dockerfile that can be used to deploy this blog.
### Understanding the Application
Before we can jump into creating a Dockerfile we first need to understand what is required to deploy this blog.
The blog itself is actually static HTML pages generated by a custom static site generator that I wrote, named **hamerkop**. The generator is very simple and more about getting the job done for this blog specifically. All the code and source files for this blog are available via a public [GitHub](https://github.com/madflojo/blog) repository. In order to deploy this blog we simply need to grab the contents of the GitHub repository, install **Python** along with some **Python** modules, and execute the `hamerkop` application. To serve the generated content we will use **nginx**; which means we will also need **nginx** to be installed.
So far this should be a pretty simple Dockerfile, but it will show us quite a bit of the [Dockerfile Syntax](https://docs.docker.com/v1.8/reference/builder/). To get started we can clone the GitHub repository and create a Dockerfile with our favorite editor; `vi` in my case.
```
# git clone https://github.com/madflojo/blog.git
Cloning into 'blog'...
remote: Counting objects: 622, done.
remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622
Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done.
Resolving deltas: 100% (242/242), done.
Checking connectivity... done.
# cd blog/
# vi Dockerfile
```
### FROM - Inheriting a Docker image
The first instruction of a Dockerfile is the `FROM` instruction. This is used to specify an existing Docker image to use as our base image, which basically provides us with a way to inherit another Docker image. In this case we will be starting with the same **nginx** image we were using before; if we wanted to start with a blank slate we could use the **Ubuntu** Docker image by specifying `ubuntu:latest`.
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
```
In addition to the `FROM` instruction, I also included a `MAINTAINER` instruction, which is used to show the author of the Dockerfile.
As Docker supports using `#` as a comment marker, I will be using this syntax quite a bit to explain the sections of this Dockerfile.
### Running a test build
Since we inherited the **nginx** Docker image, our current Dockerfile also inherited all the instructions within the [Dockerfile](https://github.com/nginxinc/docker-nginx/blob/08eeb0e3f0a5ee40cbc2bc01f0004c2aa5b78c15/Dockerfile) used to build that **nginx** image. What this means is that even at this point we are able to build a Docker image from this Dockerfile and run a container from that image. The resulting image will essentially be the same as the **nginx** image, but we will run through a build of this Dockerfile now, and a few more times as we go, to help explain the Docker build process.
In order to start the build from a Dockerfile we can simply execute the `docker` command with the **build** option.
```
# docker build -t blog /root/blog
Sending build context to Docker daemon 23.6 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <ben@bencane.com>
---> Running in c97f36450343
---> 60a44f78d194
Removing intermediate container c97f36450343
Successfully built 60a44f78d194
```
In the above example I used the `-t` (**tag**) flag to "tag" the image as "blog". This essentially allows us to name the image; without specifying a tag, the image would only be callable via the **Image ID** that Docker assigns. In this case the **Image ID** is `60a44f78d194`, which we can see from the `docker` command's build success message.
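As a side note, the name can also carry an explicit version using the `name:tag` syntax; without one, Docker applies the default tag `latest`. For example, to keep versioned builds of this image we could run something like:

```
# docker build -t blog:v1 /root/blog
```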
In addition to the `-t` flag, I also specified the directory `/root/blog`. This directory is the "build directory", which is the directory that contains the Dockerfile and any other files necessary to build this container.
Now that we have run through a successful build, let's start customizing this image.
### Using RUN to execute apt-get
The static site generator used to generate the HTML pages is written in **Python**, and because of this the first custom task we should perform within this `Dockerfile` is to install Python. To install the Python packages we will use the Apt package manager. This means we will need to specify within the Dockerfile that `apt-get update` and `apt-get install python-dev` are executed; we can do this with the `RUN` instruction.
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
```
In the above we are simply using the `RUN` instruction to tell Docker that when it builds this image it will need to execute the specified `apt-get` commands. The interesting part of this is that these commands are only executed within the context of this container. What this means is that even though `python-dev` and `python-pip` are being installed within the container, they are not being installed for the host itself. Or to put it more simply: within the container the `pip` command will execute; outside the container, the `pip` command does not exist.
It is also important to note that the Docker build process does not accept user input during the build. This means that any commands being executed by the `RUN` instruction must complete without user input. This adds a bit of complexity to the build process as many applications require user input during installation. For our example, none of the commands executed by `RUN` require user input.
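As a side note, a common Dockerfile pattern (a sketch, not what this article's Dockerfile uses) is to chain the update and install commands into a single `RUN` instruction, which guarantees the install never runs against a stale package index left behind by a cached layer:

```
## Chain update and install into one build step
RUN apt-get update && apt-get install -y python-dev python-pip
```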
### Installing Python modules
With **Python** installed we now need to install some Python modules. To do this outside of Docker, we would generally use the `pip` command and reference a file within the blog's Git repository named `requirements.txt`. In an earlier step we used the `git` command to "clone" the blog's GitHub repository to the `/root/blog` directory; this also happens to be the directory in which we created the `Dockerfile`. This is important, as it means the contents of the Git repository are accessible to Docker during the build process.
When executing a build, Docker will set the context of the build to the specified "build directory". This means that any files within that directory and below can be used during the build process; files outside of that directory (outside of the build context) are inaccessible.
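A related detail: because the entire build directory is sent to the Docker daemon as the build context, large or irrelevant files (the `.git` directory, for example) can slow builds down. Docker supports a `.dockerignore` file in the build directory to exclude such paths from the context; a minimal, hypothetical example for this repository might contain:

```
.git
*.pyc
```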
In order to install the required Python modules we will need to copy the `requirements.txt` file from the build directory into the container. We can do this using the `COPY` instruction within the `Dockerfile`.
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
## Create a directory for required files
RUN mkdir -p /build/
## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt
```
Within the `Dockerfile` we added 3 instructions. The first instruction uses `RUN` to create a `/build/` directory within the container. This directory will be used to copy any application files needed to generate the static HTML pages. The second instruction is the `COPY` instruction, which copies the `requirements.txt` file from the "build directory" (`/root/blog`) into the `/build` directory within the container. The third uses the `RUN` instruction to execute the `pip` command, installing all the modules specified within the `requirements.txt` file.
`COPY` is an important instruction to understand when building custom images. Without specifically copying the file within the Dockerfile, this Docker image would not contain the requirements.txt file. With Docker containers everything is isolated; unless a file is specifically copied in within a Dockerfile, a container will not include required dependencies.
### Re-running a build
Now that we have a few customization tasks for Docker to perform, let's run another build of the blog image.
```
# docker build -t blog /root/blog
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <ben@bencane.com>
---> Using cache
---> 8e0f1899d1eb
Step 2 : RUN apt-get update
---> Using cache
---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
---> Using cache
---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
---> Running in bde05cf1e8fe
---> f4b66e09fa61
Removing intermediate container bde05cf1e8fe
Step 5 : COPY requirements.txt /build/
---> cef11c3fb97c
Removing intermediate container 9aa8ff43f4b0
Step 6 : RUN pip install -r /build/requirements.txt
---> Running in c50b15ddd8b1
Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1))
Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2))
<truncated to reduce noise>
Successfully installed jinja2 PyYaml mistune markdown MarkupSafe
Cleaning up...
---> abab55c20962
Removing intermediate container c50b15ddd8b1
Successfully built abab55c20962
```
From the above build output we can see the build was successful, but we can also see another interesting message: `---> Using cache`. What this message is telling us is that Docker was able to use its build cache during the build of this image.
#### Docker build cache
When Docker is building an image, it doesn't just build a single image; it actually builds multiple images throughout the build process. In fact we can see from the above output that after each "Step" Docker is creating a new image.
```
Step 5 : COPY requirements.txt /build/
---> cef11c3fb97c
```
The last line from the above snippet is actually Docker informing us of the creation of a new image; it does this by printing the **Image ID**: `cef11c3fb97c`. The useful thing about this approach is that Docker is able to use these images as a cache during subsequent builds of the **blog** image. This is useful because it allows Docker to speed up the build process for new builds of the same container. If we look at the example above we can actually see that rather than installing the `python-dev` and `python-pip` packages again, Docker was able to use a cached image. However, since Docker was unable to find a build that executed the `mkdir` command, each subsequent step was executed.
The Docker build cache is a bit of a gift and a curse; the reason for this is that the decision to use the cache or to rerun the instruction is made within a very narrow scope. For example, if there was a change to the `requirements.txt` file, Docker would detect this change during the build and start fresh from that point forward. It does this because it can view the contents of the `requirements.txt` file. The execution of the `apt-get` commands, however, is another story. If the **Apt** repository that provides the Python packages were to contain a newer version of the python-pip package, Docker would not be able to detect the change and would simply use the build cache. This means that an older package may be installed. While this may not be a major issue for the `python-pip` package, it could be a problem if the cached layer contained a package with a known vulnerability.
For this reason it is useful to periodically rebuild the image without using Docker's cache. To do this you can simply specify `--no-cache=True` when executing a Docker build.
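For example, a periodic cache-free rebuild of the blog image would look like:

```
# docker build --no-cache=True -t blog /root/blog
```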
## Deploying the rest of the blog
With the Python packages and modules installed, we are left with copying the required application files and running the `hamerkop` application. To do this we will simply use more `COPY` and `RUN` instructions.
```
## Dockerfile that generates an instance of http://bencane.com
FROM nginx:latest
MAINTAINER Benjamin Cane <ben@bencane.com>
## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip
## Create a directory for required files
RUN mkdir -p /build/
## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt
## Add blog code and required files
COPY static /build/static
COPY templates /build/templates
COPY hamerkop /build/
COPY config.yml /build/
COPY articles /build/articles
## Run Generator
RUN /build/hamerkop -c /build/config.yml
```
Now that we have the rest of the build instructions, let's run through another build and verify that the image builds successfully.
```
# docker build -t blog /root/blog/
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <ben@bencane.com>
---> Using cache
---> 8e0f1899d1eb
Step 2 : RUN apt-get update
---> Using cache
---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
---> Using cache
---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
---> Using cache
---> f4b66e09fa61
Step 5 : COPY requirements.txt /build/
---> Using cache
---> cef11c3fb97c
Step 6 : RUN pip install -r /build/requirements.txt
---> Using cache
---> abab55c20962
Step 7 : COPY static /build/static
---> 15cb91531038
Removing intermediate container d478b42b7906
Step 8 : COPY templates /build/templates
---> ecded5d1a52e
Removing intermediate container ac2390607e9f
Step 9 : COPY hamerkop /build/
---> 59efd1ca1771
Removing intermediate container b5fbf7e817b7
Step 10 : COPY config.yml /build/
---> bfa3db6c05b7
Removing intermediate container 1aebef300933
Step 11 : COPY articles /build/articles
---> 6b61cc9dde27
Removing intermediate container be78d0eb1213
Step 12 : RUN /build/hamerkop -c /build/config.yml
---> Running in fbc0b5e574c5
Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux
Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux
<truncated to reduce noise>
Successfully created file /usr/share/nginx/html//archive.html
Successfully created file /usr/share/nginx/html//sitemap.xml
---> 3b25263113e1
Removing intermediate container fbc0b5e574c5
Successfully built 3b25263113e1
```
### Running a custom container
With a successful build we can now start our custom container by running the `docker` command with the `run` option, similar to how we started the nginx container earlier.
```
# docker run -d -p 80:80 --name=blog blog
5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1
```
Once again the `-d` (**detach**) flag was used to tell Docker to run the container in the background. However, there are also two new flags. The first new flag is `--name`, which is used to give the container a user-specified name. In the earlier example we did not specify a name, and because of that Docker randomly generated one. The second new flag is `-p`; this flag allows users to map a port from the host machine to a port within the container.
The base **nginx** image we used exposes port 80 for the HTTP service. By default, ports bound within a Docker container are not bound on the host system as a whole. In order for external systems to access ports exposed within a container, the ports must be mapped from a host port to a container port using the `-p` flag. The command above maps port 80 on the host to port 80 within the container. If we wished to map port 8080 on the host to port 80 within the container, we could do so with the syntax `-p 8080:80`, as in the sketch below.
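As a sketch, the same container published on host port 8080 would be started like this (note that only one container can own a given name and host port at a time):

```
# docker run -d -p 8080:80 --name=blog blog
```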
From the above command it appears that our container was started successfully; we can verify this by executing `docker ps`.
```
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d264c7ef92bd blog:latest nginx -g 'daemon off 3 seconds ago Up 3 seconds 443/tcp, 0.0.0.0:80->80/tcp blog
```
## Wrapping up
At this point we now have a running custom Docker container. While we touched on a few Dockerfile instructions within this article, we have yet to discuss all of them. For a full list of Dockerfile instructions you can check out [Docker's reference page](https://docs.docker.com/v1.8/reference/builder/), which explains the instructions very well.
Another good resource is their [Dockerfile Best Practices page](https://docs.docker.com/engine/articles/dockerfile_best-practices/), which contains quite a few best practices for building custom Dockerfiles. Some of these tips are very useful, such as strategically ordering the commands within the Dockerfile. In the above examples our Dockerfile has the `COPY` instruction for the `articles` directory as the last `COPY` instruction. The reason for this is that the `articles` directory will change quite often. It's best to put instructions that will change often at the lowest point possible within the Dockerfile to maximize the number of steps that can be cached.
In this article we covered how to start a pre-built container and how to build, then deploy, a custom container. While there is quite a bit to learn about Docker, this article should give you a good idea of how to get started. Of course, as always, if you think there is anything that should be added, drop it in the comments below.
--------------------------------------
via: http://bencane.com/2015/12/01/getting-started-with-docker-by-dockerizing-this-blog/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+bencane%2FSAUo+%28Benjamin+Cane%29
Author: Benjamin Cane
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/)

View File

@ -1,229 +0,0 @@
Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper
================================================================================
Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, and, when needed, issue escalation to engineering support teams.
![Linux Package Management](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-9.png)
Linux Foundation Certified Sysadmin Part 9
Watch the following video, which explains the Linux Foundation Certification Program.
YouTube video
<iframe width="720" height="405" frameborder="0" allowfullscreen="allowfullscreen" src="//www.youtube.com/embed/Y29qZ71Kicg"></iframe>
This article is Part 9 of a 10-tutorial series. In this article we will guide you through Linux package management, which is required for the LFCS certification exam.
### Package Management ###
In a few words, package management is a method of installing and maintaining (which includes updating and probably removing as well) software on the system.
In the early days of Linux, programs were only distributed as source code, along with the required man pages, the necessary configuration files, and more. Nowadays, most Linux distributions use pre-built programs or sets of programs called packages by default, which are presented to users ready for installation on that distribution. However, one of the wonders of Linux is still the possibility to obtain the source code of a program to be studied, improved, and compiled.
**How package management systems work**
If a certain package requires a certain resource such as a shared library, or another package, it is said to have a dependency. All modern package management systems provide some method of dependency resolution to ensure that when a package is installed, all of its dependencies are installed as well.
**Packaging Systems**
Almost all the software that is installed on a modern Linux system will be found on the Internet. It can either be provided by the distribution vendor through central repositories (which can contain several thousand packages, each of which has been specifically built, tested, and maintained for the distribution) or be available as source code that can be downloaded and installed manually.
Because different distribution families use different packaging systems (Debian: *.deb / CentOS: *.rpm / openSUSE: *.rpm built specially for openSUSE), a package intended for one distribution will not be compatible with another distribution. However, most distributions are likely to fall into one of the three distribution families covered by the LFCS certification.
**High and low-level package tools**
In order to perform the task of package management effectively, you need to be aware that you will have two types of available utilities: low-level tools (which handle, in the backend, the actual installation, upgrade, and removal of package files), and high-level tools (which are in charge of ensuring that the tasks of dependency resolution and metadata searching ("data about the data") are performed).
| DISTRIBUTION | LOW-LEVEL TOOL | HIGH-LEVEL TOOL |
|------------------------|----------------|--------------------|
| Debian and derivatives | dpkg | apt-get / aptitude |
| CentOS | rpm | yum |
| openSUSE | rpm | zypper |
Let us look at a description of the low-level and high-level tools.
dpkg is a low-level package manager for Debian-based systems. It can install, remove, provide information about, and build *.deb packages, but it can't automatically download and install their corresponding dependencies.
- Read More: [15 dpkg Command Examples][1]
apt-get is a high-level package manager for Debian and derivatives, and provides a simple way to retrieve and install packages, including dependency resolution, from multiple sources using the command line. Unlike dpkg, apt-get does not work directly with *.deb files, but with the package's proper name.
- Read More: [25 apt-get Command Examples][2]
aptitude is another high-level package manager for Debian-based systems, and can be used to perform management tasks (installing, upgrading, and removing packages, also handling dependency resolution automatically) in a fast and easy way. It provides the same functionality as apt-get plus additional features, such as offering access to several versions of a package.
rpm is the package management system used by Linux Standard Base (LSB)-compliant distributions for low-level handling of packages. Just like dpkg, it can query, install, verify, upgrade, and remove packages, and is more frequently used by Fedora-based distributions, such as RHEL and CentOS.
- Read More: [20 rpm Command Examples][3]
yum adds the functionality of automatic updates and package management with dependency management to RPM-based systems. As a high-level tool, like apt-get or aptitude, yum works with repositories.
- Read More: [20 yum Command Examples][4]
### Common Usage of Low-Level Tools ###
The most frequent tasks that you will do with low level tools are as follows:
**1. Installing a package from a compiled (*.deb or *.rpm) file**
The downside of this installation method is that no dependency resolution is provided. You will most likely choose to install a package from a compiled file when such a package is not available in the distribution's repositories and therefore cannot be downloaded and installed through a high-level tool. Since low-level tools do not perform dependency resolution, they will exit with an error if we try to install a package with unmet dependencies.
# dpkg -i file.deb [Debian and derivative]
# rpm -i file.rpm [CentOS / openSUSE]
**Note**: Do not attempt to install on CentOS a *.rpm file that was built for openSUSE, or vice-versa!
**2. Upgrading a package from a compiled file**
Again, you will only upgrade an installed package manually when it is not available in the central repositories.
# dpkg -i file.deb [Debian and derivative]
# rpm -U file.rpm [CentOS / openSUSE]
**3. Listing installed packages**
When you first get your hands on an already working system, chances are you'll want to know what packages are installed.
# dpkg -l [Debian and derivative]
# rpm -qa [CentOS / openSUSE]
If you want to know whether a specific package is installed, you can pipe the output of the above commands to grep, as explained in [manipulate files in Linux Part 1][5] of this series. Suppose we need to verify whether the package mysql-common is installed on an Ubuntu system.
# dpkg -l | grep mysql-common
![Check Installed Packages in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Installed-Package.png)
Check Installed Packages
Another way to determine whether a package is installed:
# dpkg --status package_name [Debian and derivative]
# rpm -q package_name [CentOS / openSUSE]
For example, let's find out whether the package sysdig is installed on our system.
# rpm -qa | grep sysdig
![Check sysdig Package](http://www.tecmint.com/wp-content/uploads/2014/11/Check-sysdig-Package.png)
Check sysdig Package
**4. Finding out which package installed a file**
# dpkg --search file_name
# rpm -qf file_name
For example, which package installed pw_dict.hwm?
# rpm -qf /usr/share/cracklib/pw_dict.hwm
![Query File in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Query-File-in-Linux.png)
Query File in Linux
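On a Debian-based system the equivalent query uses dpkg --search; a quick sketch (the output shown is what a typical Ubuntu system prints for /bin/ls):

# dpkg --search /bin/ls
coreutils: /bin/ls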
### Common Usage of High-Level Tools ###
The most frequent tasks that you will do with high level tools are as follows.
**1. Searching for a package**
aptitude update will update the list of available packages, and aptitude search will perform the actual search for package_name.
# aptitude update && aptitude search package_name
With the search all option, yum will search for package_name not only in package names, but also in package descriptions.
# yum search package_name
# yum search all package_name
# yum whatprovides "*/package_name"
Let's suppose we need a file whose name is sysdig. To find out which package we would have to install, let's run:
# yum whatprovides "*/sysdig"
![Check Package Description in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Description.png)
Check Package Description
whatprovides tells yum to search for the package that will provide a file matching the above regular expression.
# zypper refresh && zypper search package_name [On openSUSE]
**2. Installing a package from a repository**
While installing a package, you may be prompted to confirm the installation after the package manager has resolved all dependencies. Note that running update or refresh (according to the package manager being used) is not strictly necessary, but keeping installed packages up to date is a good sysadmin practice for security and dependency reasons.
# aptitude update && aptitude install package_name [Debian and derivatives]
# yum update && yum install package_name [CentOS]
# zypper refresh && zypper install package_name [openSUSE]
**3. Removing a package**
The remove option will uninstall the package but leave configuration files intact, whereas purge will erase every trace of the program from your system.
# aptitude remove / purge package_name
# yum erase package_name
**Note**: notice the minus sign in front of the package that will be uninstalled on openSUSE:
# zypper remove -package_name
Most (if not all) package managers will, by default, ask you whether you're sure about proceeding with the uninstallation before actually performing it. So read the onscreen messages carefully to avoid running into unnecessary trouble!
**4. Displaying information about a package**
The following command will display information about the birthday package.
# aptitude show birthday
# yum info birthday
# zypper info birthday
![Check Package Information in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Information.png)
Check Package Information
### Summary ###
Package management is something you just can't sweep under the rug as a system administrator. You should be prepared to use the tools described in this article at a moment's notice. Hope you find it useful in your preparation for the LFCS exam and for your daily tasks. Feel free to leave your comments or questions below. We will be more than glad to get back to you as soon as possible.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-package-management/
Author: [Gabriel Cánepa][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/dpkg-command-examples/
[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/
[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/

View File

@ -1,8 +1,9 @@
Learn with Linux: Physics Simulation
[bazz222222]
Linux 学习系列之物理模拟
================================================================================
![](https://www.maketecheasier.com/assets/uploads/2015/07/physics-fetured.jpg)
This article is part of the [Learn with Linux][1] series:
[Linux 学习系列][1]的所有文章:
- [Learn with Linux: Learning to Type][2]
- [Learn with Linux: Physics Simulation][3]
@ -104,4 +105,4 @@ via: https://www.maketecheasier.com/linux-physics-simulation/
[7]:https://edu.kde.org/applications/all/step
[8]:https://edu.kde.org/
[9]:http://lightspeed.sourceforge.net/
[10]:http://www.physion.net/
[10]:http://www.physion.net/

View File

@ -0,0 +1,41 @@
(Translating by runningwater) Search Multiple Words / String Pattern Using grep Command
================================================================================
How do I search for multiple strings or words using the grep command? For example, I'd like to search for word1, word2, word3 and so on within /path/to/file. How do I force grep to search for multiple words?
The [grep command supports regular expression][1] pattern. To search multiple words, use following syntax:
grep 'word1\|word2\|word3' /path/to/file
In this example, search warning, error, and critical words in a text log file called /var/log/messages, enter:
$ grep 'warning\|error\|critical' /var/log/messages
To match just words, add the -w switch:
$ grep -w 'warning\|error\|critical' /var/log/messages
The egrep command can skip the backslash escaping and use the following syntax:
$ egrep -w 'warning|error|critical' /var/log/messages
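On systems with GNU grep, egrep is simply shorthand for grep -E (extended regular expressions), so the same search can also be written as:

$ grep -E -w 'warning|error|critical' /var/log/messages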
I recommend that you pass the -i (ignore case) and --color options as follows:
$ egrep -wi --color 'warning|error|critical' /var/log/messages
Sample outputs:
![Fig.01: Linux / Unix egrep Command Search Multiple Words Demo Output](http://s0.cyberciti.org/uploads/faq/2008/04/egrep-words-output.png)
Fig.01: Linux / Unix egrep Command Search Multiple Words Demo Output
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/faq/searching-multiple-words-string-using-grep/
Author: Vivek Gite
Translator: [runningwater](https://github.com/runningwater)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[1]:http://www.cyberciti.biz/faq/grep-regular-expressions/

View File

@ -0,0 +1,33 @@
Grep Count Lines If a String / Word Matches
================================================================================
How do I count lines if a given word or string matches, for each input file, under Linux or UNIX operating systems?
You need to pass the -c or --count option to suppress normal output. It will display a count of matching lines for each input file:
$ grep -c vivek /etc/passwd
OR
$ grep -w -c vivek /etc/passwd
Sample outputs:
1
However, with the -v or --invert-match option it will count non-matching lines, enter:
$ grep -c -v vivek /etc/passwd
Sample outputs:
45
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/faq/grep-count-lines-if-a-string-word-matches/
Author: Vivek Gite
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

View File

@ -0,0 +1,67 @@
Grep From Files and Display the File Name
================================================================================
How do I grep from a number of files and display the file name only?
When there is more than one file to search, grep will display the file name by default:
grep "word" filename
grep root /etc/*
Sample outputs:
/etc/bash.bashrc: See "man sudo_root" for details.
/etc/crontab:17 * * * * root cd / && run-parts --report /etc/cron.hourly
/etc/crontab:25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
/etc/crontab:47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
/etc/crontab:52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
/etc/group:root:x:0:
grep: /etc/gshadow: Permission denied
/etc/logrotate.conf: create 0664 root utmp
/etc/logrotate.conf: create 0660 root utmp
The first part of each output line is the file name (e.g., /etc/crontab, /etc/group). The -l option will print only the name of each file that contains a match:
grep -l "string" filename
grep -l root /etc/*
Sample outputs:
/etc/aliases
/etc/arpwatch.conf
grep: /etc/at.deny: Permission denied
/etc/bash.bashrc
/etc/bash_completion
/etc/ca-certificates.conf
/etc/crontab
/etc/group
You can also suppress normal output and instead print the name of each input file from **which no output would normally have been** printed:
grep -L "word" filename
grep -L root /etc/*
Sample outputs:
/etc/apm
/etc/apparmor
/etc/apparmor.d
/etc/apport
/etc/apt
/etc/avahi
/etc/bash_completion.d
/etc/bindresvport.blacklist
/etc/blkid.conf
/etc/bluetooth
/etc/bogofilter.cf
/etc/bonobo-activation
/etc/brlapi.key
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/faq/grep-from-files-and-display-the-file-name/
Author: Vivek Gite
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

View File

@ -0,0 +1,66 @@
How To Find Files by Content Under UNIX
================================================================================
I had written lots of code in C for my school work and saved it as source code under /home/user/c/*.c and *.h. How do I find files by content, such as a string or words (e.g., a function name such as main()), under a UNIX shell prompt?
You need to use the following tools:
[a] **grep command** : print lines matching a pattern.
[b] **find command**: search for files in a directory hierarchy.
### [grep Command To Find Files By][1] Content ###
Type the command as follows:
grep 'string' *.txt
grep 'main(' *.c
grep '#include<example.h>' *.c
grep 'getChar*' *.c
grep -i 'ultra' *.conf
grep -iR 'ultra' *.conf
Where,
- **-i** : Ignore case distinctions in both the PATTERN (match valid, VALID, ValID strings) and the input files (match file.c FILE.c FILE.C filenames).
- **-R** : Read all files under each directory, recursively
### Highlighting searched patterns ###
You can highlight patterns easily while searching a large number of files:
$ grep --color=auto -iR 'getChar();' *.c
### Displaying file names and line number for searched patterns ###
You may also need to display filenames and numbers:
$ grep --color=auto -iRnH 'getChar();' *.c
Where,
- **-n** : Prefix each line of output with the 1-based line number within its input file.
- **-H** : Print the file name for each match. This is the default when there is more than one file to search.
$ grep --color=auto -nH 'DIR' *
Sample output:
![Fig.01: grep command displaying searched pattern](http://www.cyberciti.biz/faq/wp-content/uploads/2008/09/grep-command.png)
Fig.01: grep command displaying searched pattern
You can also use the find command:
$ find . -name "*.c" -print | xargs grep "main("
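With GNU grep you can also skip find entirely and recurse on your own, limiting the search to *.c files via the --include option; a sketch:

$ grep -rnH --include='*.c' 'main(' .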
--------------------------------------------------------------------------------
via: http://www.cyberciti.biz/faq/unix-linux-finding-files-by-content/
Author: Vivek Gite
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[1]:http://www.cyberciti.biz/faq/howto-search-find-file-for-text-string/

Some files were not shown because too many files have changed in this diff.