Merge pull request #5 from LCTT/master

Updated as of July 17, 2015
This commit is contained in:
struggling 2015-07-17 19:51:32 +08:00
commit da569ecab5
33 changed files with 2951 additions and 916 deletions

View File

@ -1,9 +1,11 @@
Translating by H-mudcup
10 Absolutely Amusing Easter Eggs on Linux
Ten Truly Amusing Linux Easter Eggs
================================================================================
*The programmer who created Adventure quietly tucked a secret feature into the game. Rather than getting angry, Atari decided to give this kind of “secret feature” a name — “Easter eggs” — because, you know, you hunt for them the way you hunt for Easter eggs.*
![](http://en.wikipedia.org/wiki/File:Adventure_Easteregg.PNG)
The programmer who created Adventure quietly tucked a secret feature into the game. Rather than getting angry, Atari decided to give this kind of “secret feature” a name — “Easter eggs” — because, you know, you hunt for them the way you hunt for Easter eggs. Image credit: Wikipedia.
*Image credit: Wikipedia*
Back in 1979, the company developed a video game for the Atari 2600 — [Adventure][1].
@ -13,9 +15,9 @@ Atari had a policy against authors putting their own names in their games
These “hidden features” in software were not appearing for the first time (the first appearance of this kind of feature was in 1966, on the [PDP-10][3]'s operating system), but it was the first time they had a name, and the first time they were truly noticed by large numbers of computer users and gamers.
Linux and Linux-related software have not been left out. Over the years, people have created a great many delightful Easter eggs for this beloved operating system. Below I'll introduce my personal favorites — and how to find them.
You'll soon find that most of these Easter eggs have to be experienced through the terminal. That's deliberate. Terminals are cool. [I should take this opportunity to remind you that if you want to run the apps I list but haven't installed them, they absolutely will not run. You should install them first. Because... computers.]
### Arch: Pac-Man in pacman, the package manager ###
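The gist of the trick is a one-line addition to /etc/pacman.conf (a hedged sketch — ILoveCandy is an undocumented pacman option, so double-check it against your pacman version before relying on it):

    # add ILoveCandy right below the [options] section header
    sudo sed -i '/^\[options\]/a ILoveCandy' /etc/pacman.conf
    # any transfer now shows a Pac-Man munching across the progress bar
    sudo pacman -Syu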
@ -28,7 +30,8 @@ Linux and Linux-related software have not been left out. Over the years
### GNU Emacs: Tetris and more... ###
![emacs Tetris](http://www.linux.com/images/stories/41373/emacsTetris.jpg)
I don't like emacs. Not even a little. But it can play Tetris.
*I don't like emacs. Not even a little. But it can play Tetris.*
I have a confession to make: I don't like [emacs][7]. Not even a little.
@ -36,7 +39,7 @@ Linux and Linux-related software have not been left out. Over the years
But it really can play Tetris. And that's no small thing. Here's how:
Step one) Open emacs. (In doubt? Type “emacs”.)
Step two) Press Esc and then X on your keyboard.
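If you'd rather not fumble with keystrokes, the same game can be launched straight from the shell (a hedged one-liner; -f simply calls the named Emacs Lisp function, so it assumes your Emacs build ships the bundled games):

    # open Tetris directly, in the terminal instead of a GUI frame
    emacs -nw -f tetris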
@ -51,21 +54,24 @@ Linux and Linux-related software have not been left out. Over the years
If you're on a Debian-based distro, try typing “apt-get moo”.
![apt-get moo](http://www.linux.com/images/stories/41373/AptGetMoo.jpg)
apt-get moo
*apt-get moo*
Simple, sure. But it's a talking cow, and that's why we like it. Now try “aptitude moo”. It will inform you that “There are no Easter Eggs in this program”.
One thing you should know about [aptitude][9]: it's a dirty, filthy liar. If aptitude were Pinocchio, its nose could pierce the moon. Add a “-v” flag to that command. Keep adding more v's until you force it to surrender.
![](http://www.linux.com/images/stories/41373/AptitudeMoo.jpg)
I think we can all agree that this is the most important feature in aptitude.
*I think we can all agree that this is the most important feature in aptitude.*
I think we can all agree that this is the most important feature in aptitude. But what if you want to put your own words into a cow's mouth? That's where “cowsay” comes in.
And don't let the name “cowsay” fool you. You can have your words come out of the mouth of all sorts of things: an elephant, Calvin, Beavis, even the Ghostbusters logo. Just type “cowsay -l” in the terminal to see the full list of options.
![](http://www.linux.com/images/stories/41373/cowsay.jpg)
You can have your words come out of the mouth of all sorts of things
*You can have your words come out of the mouth of all sorts of things*
Want to get fancy? You can pipe the output of other applications into cowsay. Try “fortune | cowsay”. Loads of fun.
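A few hedged one-liners to get you going (tux and dragon ship with most cowsay packages, but the output of “cowsay -l” on your machine is authoritative):

    # the basics
    cowsay "I am a talking cow."
    # list every available creature
    cowsay -l
    # put your words in a different mouth entirely
    fortune | cowsay -f tux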
@ -75,7 +81,7 @@ apt-get moo
Type “sudo visudo” to open the “sudoers” file. Near the top of the file you'll most likely see a few lines beginning with “Defaults”. Below those, add “Defaults insults” and save the file.
Now, whenever you mistype your sudo password, your system will hurl abuse at you. These confidence-boosting gems include “Listen, pancake-head, I don't have time for this garbage.”, “Are you on drugs?”, and “Your brain hasn't been the same since the electroshock, has it?”
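The relevant fragment of the sudoers file ends up looking roughly like this (a sketch; the env_reset line is the stock Ubuntu default and may differ on your system):

    Defaults        env_reset
    Defaults        insults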
Setting this up on a co-worker's machine makes for endless fun.
@ -85,10 +91,11 @@ apt-get moo
Open Firefox. Type “about:about” into the address bar. You'll get a list of all of Firefox's “about” pages. Not flashy at all, right?
Now try “about:mozilla” and the browser will answer with a quotation from the “[Book of Mozilla][10]” — the holy book of web browsing. Another of my favorites is “about:robots”, which is also a hoot.
![](http://www.linux.com/images/stories/41373/About-Mozilla550.jpg)
The “[Book of Mozilla][10]” — the holy book of web browsing.
*The “[Book of Mozilla][10]” — the holy book of web browsing.*
### A carefully concocted mashup calendar ###
@ -105,26 +112,28 @@ apt-get moo
For example: “nmap -oS - google.com”
Go ahead and try it. I know how badly you want to. You're sure to leave Angelina Jolie [impressed][15].
### lolcat rainbows ###
Having all these Easter eggs in your Linux terminal is truly wonderful... but what if you want things to be... a little more fabulous? Type: lolcat. Pipe the text output of any program into lolcat, and you'll get a super-duper rainbow version of it.
![](http://www.linux.com/images/stories/41373/lolcat.jpg)
Pipe the text output of any program into lolcat, and you'll get a super-duper rainbow version of it.
*Pipe the text output of any program into lolcat, and you'll get a super-duper rainbow version of it.*
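For instance (a hedged pair of examples, assuming the fortune and cowsay packages from earlier are installed):

    # a rainbow-colored directory listing
    ls -l | lolcat
    # the obligatory triple combo
    fortune | cowsay | lolcat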
### A little critter that chases your cursor ###
![oneko cat](http://www.linux.com/images/stories/41373/onekocat.jpg)
“Oneko” -- a Linux port of the classic “[Neko][16]”.
*“Oneko” -- a Linux port of the classic “Neko”.*
Next up is “Oneko” -- a Linux port of the classic “[Neko][16]”. It's basically a little cat that chases your cursor around the screen.
Strictly speaking it doesn't really count as an “Easter egg”, but it's fun all the same. And it certainly feels Easter-egg-ish.
You can also use different options (such as “oneko -dog”) to replace the kitten with a puppy, or switch to other styles. The possibilities for tormenting annoying co-workers are endless.
There you have it! A list of my favorite Linux Easter Eggs (and things of that ilk). Feel free to add your own favorite in the comments section below. Because this is the Internet. And you can do that sort of thing.
--------------------------------------------------------------------------------
@ -132,7 +141,7 @@ via: http://www.linux.com/news/software/applications/820944-10-truly-amusing-lin
Author: [Bryan Lunduke][a]
Translator: [H-mudcup](https://github.com/H-mudcup)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](http://linux.cn/).

View File

@ -1,14 +1,12 @@
Translating by Love-xuan
Live Wallpaper Adds Animated Backgrounds to Your Linux Distro
================================================================================
**We know you want a stylish Ubuntu desktop to show off**
![Live Wallpaper](http://i.imgur.com/9JIUw5p.gif)
Live Wallpaper
*Live Wallpaper*
Putting together an impressive work environment on Linux takes only a little bit of effort. Today we ([once again][2]) turn the spotlight on something that has long lingered in the back of your mind: a free, open-source tool that can add some sparkle to your screenshots.
It's called **Live Wallpaper** and (just as you'd guess) it replaces the standard static desktop background with an animated one, powered by OpenGL.
@ -25,13 +23,13 @@ Live Wallpaper is not the only app of its kind, but it is one of the best
They run the gamut from the subtle (noise) to the frenetic (nexus), and there is even an obligatory lock-screen wallpaper inspired by the Ubuntu Phone welcome screen.
- Circles — a clock in an evolving-circles style, inspired by Ubuntu Phone
- Galaxy — a galaxy with customizable size and position
- Gradient Clock — a clock overlaid on a basic gradient
- Galaxy — a spinning galaxy with customizable size and position
- Gradient Clock — a polar clock resting on a sloped surface
- Nexus — bright particle sparks shooting across the screen
- Noise — a bokeh design similar to the iOS dynamic wallpaper
- Photoslide — an animated grid slideshow built from the photos in a folder (~/Photos by default)
Live Wallpaper is **fully open source**, so there's nothing to stop imaginative artists with the know-how (and the patience) from creating gorgeous themes of their own.
### Settings & Features ###
@ -39,24 +37,24 @@ Live Wallpaper is **fully open source**, so there is nothing to stop imaginative
Although some themes come with more options than others, every theme can be configured or customized in one way or another.
For example, in the Nexus theme (pictured above) you can change the number, color, size and frequency of the pulsing particles.
The preferences also provide **general options** that apply to all themes, including:
- Setting a live wallpaper on the login screen
- Custom animated backgrounds
- Adjusting the FPS (including an on-screen FPS display)
- Specifying multi-monitor behavior
With so many options, it's easy to DIY a desktop background that suits you.
### Drawbacks ###
#### No desktop icons ####
While Live Wallpaper is running, you cannot add, open or edit files and folders on the desktop.
The Preferences app does offer an option that supposedly lets you do this. Maybe it only works on older releases; in our testing (on Ubuntu 14.10) it did not work. What we did find in testing is that the option works when the desktop wallpaper is set to a .png image file — it doesn't have to be a transparent .png, just any .png will do.
#### Resource usage ####
@ -68,9 +66,9 @@ While Live Wallpaper is running, you cannot add, open or edit files
For me the biggest “bug” is definitely the absence of a “quit” option.
Sure, the animated wallpaper itself can be shut down completely via the tray icon and the preferences — but how do you quit the tray icon itself? There's no way, except running the command “pkill livewallpaper” in a terminal.
### How to install Live Wallpaper on Ubuntu 14.04 LTS+ ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/terminal-command-750x146.jpg)
@ -81,7 +79,7 @@ While Live Wallpaper is running, you cannot add, open or edit files
sudo apt-get update && sudo apt-get install livewallpaper
You'll also want to install the indicator applet, which makes it quick and easy to switch the live wallpaper on or off and to pick a theme from the menu, plus the graphical configuration tool that lets you tune each theme to your taste.
sudo apt-get install livewallpaper-config livewallpaper-indicator
@ -89,11 +87,11 @@ While Live Wallpaper is running, you cannot add, open or edit files
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/live-wallpaper-app-launcher.png)
Annoyingly, after installation the program won't open its tray icon automatically; it merely adds itself to the startup items. So do a quick log out -> log in, and it will appear.
### Summary ###
If your desktop is stuck in a dull, static rut and you dream of something livelier, why not give it a try? Also, tell us what kind of live wallpaper you'd like to see!
--------------------------------------------------------------------------------
@ -101,7 +99,7 @@ via: http://www.omgubuntu.co.uk/2015/05/animated-wallpaper-adds-live-backgrounds
Author: [Joey-Elijah Sneddon][a]
Translator: [Love-xuan](https://github.com/Love-xuan)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

View File

@ -1,6 +1,7 @@
Installing RUby on Rails on Ubuntu 15.04
Installing Ruby on Rails on Ubuntu 15.04
================================================================================
In this post we will learn how to install Ruby on Rails on Ubuntu 15.04 using rbenv. We chose Ubuntu as the operating system because Ubuntu is the Linux distribution that ships with plenty of packages and thorough documentation, so I think it is the right choice. If you haven't installed the latest Ubuntu yet, you can start by [downloading the ISO file][1].
### 安装 Ruby ###
@ -9,9 +10,9 @@
sudo apt-get update
sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
There are three ways to install Ruby: rbenv, rvm, or from source. Each has its own advantages, but these days developers tend to prefer rbenv over rvm and source builds. We will install the latest Ruby version, 2.2.2.
Installation with rbenv takes only two simple steps. The first step installs rbenv, then ruby-build:
cd
git clone git://github.com/sstephenson/rbenv.git .rbenv
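For completeness, the usual next steps look roughly like this (a hedged sketch based on the standard rbenv/ruby-build instructions; adjust ~/.bashrc to whatever shell you use):

    echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(rbenv init -)"' >> ~/.bashrc
    git clone git://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
    exec $SHELL
    rbenv install 2.2.2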
@ -28,23 +29,23 @@
rbenv global 2.2.2
ruby -v
We need to install Bundler, but before doing so we will tell rubygems not to install local documentation for every gem:
echo "gem: --no-ri --no-rdoc" > ~/.gemrc
gem install bundler
### Configuring Git ###
Before configuring git you need to create a GitHub account; you can sign up for [a GitHub account][2]. We need git as our version control system, so we will set it up to match our GitHub account.
Replace **Name** and **Email address** below with those of your GitHub account:
git config --global color.ui true
git config --global user.name "YOUR NAME"
git config --global user.email "YOUR@EMAIL.com"
ssh-keygen -t rsa -C "YOUR@EMAIL.com"
Next, add the newly generated ssh key to your GitHub account. To do so, copy the output of the command below and [paste it into GitHub's settings page][3]:
cat ~/.ssh/id_rsa.pub
@ -58,7 +59,7 @@
### Installing Rails ###
We need to install a JavaScript runtime environment such as NodeJS, because these days Rails has come to depend on one more and more. It lets Rails combine and minify your JavaScript, giving you a faster production environment.
We need to add a PPA to install NodeJS:
@ -66,7 +67,7 @@
sudo apt-get update
sudo apt-get install nodejs
If you hit a problem while updating, you can try this command:
# Note the new setup script name for Node.js v0.12
curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
@ -74,15 +75,15 @@
# Then install with:
sudo apt-get install -y nodejs
Next, install Rails with this command:
gem install rails -v 4.2.1
Since we are using rbenv, run the following command to make the rails executable available:
rbenv rehash
To make sure Rails has installed correctly, run rails -v; it should display the following:
rails -v
# Rails 4.2.1
@ -91,25 +92,25 @@
### Setting up MySQL ###
You may already be familiar with MySQL. You can install both the MySQL client and server from Ubuntu's repositories, setting the root user's password during installation. That information will later go into your Rails application's database.yml file. Install MySQL with the command below:
sudo apt-get install mysql-server mysql-client libmysqlclient-dev
Installing libmysqlclient-dev provides the files needed to compile the mysql2 gem, which is what Rails uses to connect to MySQL when you set up your application.
### The Final Step ###
Let's try creating your first Rails application:
# Use MySQL
rails new myapp -d mysql
# Move into the application directory
cd myapp
# Create the database
rake db:create
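At this point you can boot the development server to confirm everything is wired together (a hedged example; Rails serves on port 3000 by default):

    # Start the development server, then browse to http://localhost:3000
    rails server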
@ -125,7 +126,7 @@
nano config/database.yml
Then enter your MySQL root user's password.
![](http://blog.linoxide.com/wp-content/uploads/2015/05/root_passw.png)
@ -133,7 +134,7 @@
### Summary ###
Rails is written in Ruby, which is also the programming language you use with Rails. On Ubuntu 15.04, Ruby on Rails can be installed with rbenv, rvm, or from source. In this post we used the rbenv route, with MySQL as the database. Please leave any questions or suggestions in the comments.
--------------------------------------------------------------------------------
@ -141,7 +142,7 @@ via: http://linoxide.com/ubuntu-how-to/installing-ruby-rails-using-rbenv-ubuntu-
Author: [Obet][a]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

View File

@ -0,0 +1,69 @@
A Short Video Introduction to the Atom Code Editor
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/06/Atom_stable.png)
The [Atom 1.0][1] era has arrived. One of the [best open source code editors][2], Atom has been available for public use for almost a year, but the recent release of its first stable version has users talking. Promoted under the banner of a “hackable text editor for the 21st century”, this [Github][4] project has already been downloaded more than 1.5 million times and has accumulated 350,000 active users.
### It has been a long journey ###
Constant dripping wears away the stone, and Atom likewise went through a long journey. From the concept first proposed in 2008 to the release of the first stable version this month, the core team and contributors from around the world have worked on Atom's core for years. Let's get a sense of Atom's progress from the picture below:
![Image credit: Atom](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/06/Atom_stable_timeline.jpeg)
*Image credit: Atom*
### Back to the future ###
Atom 1.0 shows off the editor's potential in the now-popular launch-video fashion. The video looks like a 70s science-fiction TV series — it's the coolest video you'll see today!
(Note: YouTube video; if it cannot be embedded, make it a link instead)
<iframe width="640" height="390" frameborder="0" allowfullscreen="true" src="http://www.youtube.com/embed/Y7aEiVwBAdk?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" type="text/html" class="youtube-player"></iframe>
### Features of the Atom editor ###
- Cross-platform editing
- Built-in package management
- Smart autocompletion
- File system view
- Multiple panes
- Find and replace support
- Highly customizable
- A fresh, modern interface
### Get Atom 1.0 ###
Atom 1.0 supports Linux, Windows and Mac OS X. For Debian-based Linux distributions, such as Ubuntu and Linux Mint, Atom provides a deb package. For Fedora there is an rpm package as well. You can also download the source code if you like. Download the latest version through the links below.
- [Atom .deb][5]
- [Atom .rpm][6]
- [Atom Source Code][7]
If you prefer, you can [install Atom on Ubuntu via a PPA][8]. The PPA is not an official solution, though.
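On a Debian-based system the .deb route is just a download plus a dpkg call (a hedged sketch; the URL is the redirecting download link listed above, and the final line pulls in any missing dependencies):

    wget -O atom-amd64.deb https://atom.io/download/deb
    sudo dpkg -i atom-amd64.deb
    sudo apt-get -f install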
(Note: below is a poll — run the poll inside the article when publishing)
#### Are you interested in Atom? ####
- Oh, absolutely! It's a blessing for programmers.
- Not really. I've seen better editors.
- Don't care; my default editor handles my work just fine.
--------------------------------------------------------------------------------
via: http://itsfoss.com/atom-stable-released/
Author: [Abhishek][a]
Translator: [sevenot](https://github.com/sevenot)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:http://itsfoss.com/author/abhishek/
[1]:http://blog.atom.io/2015/06/25/atom-1-0.html
[2]:http://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[3]:https://atom.io/
[4]:https://github.com/
[5]:https://atom.io/download/deb
[6]:https://atom.io/download/rpm
[7]:https://github.com/atom/atom/blob/master/docs/build-instructions/linux.md
[8]:http://itsfoss.com/install-atom-text-editor-ubuntu-1404-linux-mint-17/

View File

@ -1,6 +1,7 @@
Linux FAQ: How to fix "tar: Exiting with failure status due to previous errors"
================================================================================
> **Question**: When I try to create an archive with the tar command, it keeps failing partway through and throws the error: "tar: Exiting with failure status due to previous errors". What causes this error, and how do I solve it?
![](https://farm9.staticflickr.com/8863/17631029953_1140fe2dd3_b.jpg)
If you encounter the following error when running the tar command, the most likely cause is that you lack read permission on one of the files you are asking tar to archive.
@ -13,21 +14,20 @@ Linux FAQ: How to fix "tar: Exiting with failure status due to previous errors"
$ tar cvzfz backup.tgz my_program/ > /dev/null
You will then see the standard error (stderr) messages printed by tar. (LCTT translator's note: naturally, this works without the v option as well.)
tar: my_program/src/lib/.conf.db.~lock~: Cannot open: Permission denied
tar: Exiting with failure status due to previous errors
As you can see from the example above, the cause of the error is indeed “denied read permission”. To solve the problem, simply change (or remove) the permissions on the problem file and then re-run the tar command.
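A hedged sketch of that fix, reusing the paths from the example above (note that ! -readable is a GNU find extension):

    # list the files the current user cannot read
    find my_program/ ! -readable
    # grant yourself read access to the offending file
    chmod +r my_program/src/lib/.conf.db.~lock~
    # and re-run the backup
    tar cvzf backup.tgz my_program/ > /dev/null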
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/tar-exiting-with-failure-status-due-to-previous-errors.html
Author: [Dan Nanni][a]
Translator: [XLCYun(袖里藏云)](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
Translator: [XLCYun(袖里藏云)](https://github.com/XLCYun)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

View File

@ -1,8 +1,8 @@
Linux Q&A -- How to open multiple tabs in GNOME Terminal on Ubuntu 15.04
Linux FAQ: How to open multiple tabs in GNOME Terminal on Ubuntu 15.04
================================================================================
> **Q**: I used to be able to open multiple tabs inside gnome-terminal on my Ubuntu desktop. After upgrading to Ubuntu 15.04, however, I can no longer open a new tab in a gnome-terminal window. What do I have to do to open tabs in gnome-terminal on Ubuntu 15.04?
On Ubuntu 14.10 and earlier, gnome-terminal let you open either a new tab or a new terminal window. Starting with Ubuntu 15.04, though, gnome-terminal removed the “New Tab” option. This is not actually a bug, but a move to merge new tabs and new windows: GNOME 3.12 introduced a [single “Open Terminal” option][1], and the ability to open a new terminal tab moved from the terminal menu into Preferences.
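If you prefer the command line, the same preference can be flipped with gsettings (a hedged example — the schema and key below come from the upstream GNOME Terminal 3.12+ sources, so verify them on your build); either way, Ctrl+Shift+T still opens a new tab directly:

    # make the single "Open Terminal" action open a tab instead of a window
    gsettings set org.gnome.Terminal.Legacy.Settings new-terminal-mode tab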
![](https://farm1.staticflickr.com/562/19286510971_f0abe3e7fb_b.jpg)
@ -29,8 +29,8 @@ Linux 答疑--如何在 Ubuntu 15.04 的 GNOME 终端中开启多个标签
via: http://ask.xmodulo.com/open-multiple-tabs-gnome-terminal-ubuntu.html
Author: [Dan Nanni][a]
Translator: [KevSJ](https://github.com/KevSJ)
Proofreader: [校对者ID](https://github.com/校对者ID)
Translator: [KevinSJ](https://github.com/KevinSJ)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

View File

@ -0,0 +1,114 @@
Why does the ibdata1 file keep growing in MySQL?
================================================================================
![ibdata1 file](https://www.percona.com/blog/wp-content/uploads/2013/08/ibdata1-file.jpg)
At [Percona Support][1] we often receive this question about MySQL's ibdata1 file.
The panic starts when the monitoring server sends an alert about the storage on the MySQL server — saying that the disk is about to fill up.
After a bit of investigation you realize that most of the disk space is used by InnoDB's shared tablespace, ibdata1 — and you already have [innodb_file_per_table][2] enabled, so the question is:
### What does ibdata1 store? ###
When you have `innodb_file_per_table` enabled, tables are stored in their own tablespaces, but the shared tablespace is still used to store other internal InnoDB data:
- the data dictionary, i.e. the metadata of InnoDB tables
- the change buffer
- the doublewrite buffer
- the undo logs
Some of these can be configured on [Percona Server][3] to avoid growing overly large. For example, you can set a maximum change buffer size with [innodb_ibuf_max_size][4], or store the doublewrite buffer in a separate file with [innodb_doublewrite_file][5].
In MySQL 5.6 you can also create external undo tablespaces, so they can sit in their own files instead of being stored inside ibdata1. See the [documentation][6].
### What causes ibdata1 to grow rapidly? ###
When there is a MySQL problem, usually the first command we need to run is:
SHOW ENGINE INNODB STATUS\G
This will show us some very valuable information. We start by examining the **TRANSACTIONS** section, and we find this:
---TRANSACTION 36E, ACTIVE 1256288 sec
MySQL thread id 42, OS thread handle 0x7f8baaccc700, query id 7900290 localhost root
show engine innodb status
Trx read view will not see trx with id >= 36F, sees < 36F
This is the most common cause: a fairly old transaction, created 14 days ago. Its status is **ACTIVE**, which means InnoDB has created a snapshot of the data, so it must maintain the old pages in the **undo** log to guarantee a consistent view of the database since the transaction began. If your database is write-heavy, that means a very large number of undo pages are being stored.
If you can't find any long-running transactions, you can also monitor another variable in the INNODB STATUS: the “**History list length**” shows the number of pending purge operations. In that case the problem often arises because the purge thread (or the master thread in older versions) cannot process undo records as fast as they come in.
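To hunt down the long-running transactions themselves, the information_schema can be queried directly (a hedged example; the INNODB_TRX table exists in MySQL 5.5 and later):

    # the ten oldest currently running InnoDB transactions
    mysql -e "SELECT trx_id, trx_started, trx_mysql_thread_id
              FROM information_schema.INNODB_TRX
              ORDER BY trx_started LIMIT 10;"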
### How can I check what is stored in ibdata1? ###
Unfortunately, MySQL provides no information about what is stored in the ibdata1 shared tablespace, but two tools will be very helpful. The first is a modified version of innochecksum made by Mark Callaghan, published in [this bug report][7].
It is quite easy to use:
# ./innochecksum /var/lib/mysql/ibdata1
0 bad checksum
13 FIL_PAGE_INDEX
19272 FIL_PAGE_UNDO_LOG
230 FIL_PAGE_INODE
1 FIL_PAGE_IBUF_FREE_LIST
892 FIL_PAGE_TYPE_ALLOCATED
2 FIL_PAGE_IBUF_BITMAP
195 FIL_PAGE_TYPE_SYS
1 FIL_PAGE_TYPE_TRX_SYS
1 FIL_PAGE_TYPE_FSP_HDR
1 FIL_PAGE_TYPE_XDES
0 FIL_PAGE_TYPE_BLOB
0 FIL_PAGE_TYPE_ZBLOB
0 other
3 max index_id
Of the 20608 pages in total, 19272 are undo log pages. **That is 93% of the tablespace.**
The second way to examine the contents of the tablespace is the [InnoDB Ruby tools][8] made by Jeremy Cole. It is a more advanced tool for examining InnoDB's internal structures. For example, we can use the space-summary parameter to get a list of each page and its data type, and then use standard Unix tools to count the number of **undo log** pages:
# innodb_space -f /var/lib/mysql/ibdata1 space-summary | grep UNDO_LOG | wc -l
19272
Although innochecksum is faster and easier to use in this particular case, I recommend you use Jeremy's tools to learn more about how InnoDB data is distributed and about its internal structures.
Good, now we know where the problem lies. The next question:
### How do I solve the problem? ###
The answer is simple. If you can still commit the statements, do it. If you can't, you will have to kill the thread to start the rollback process. That will stop ibdata1 from growing but, clearly, your software has a bug and somebody is going to hit errors. Now that you know how to identify where the problem is, find who or what is causing it using your own debugging tools or the general query log.
If the problem is happening in the purge thread, the solution is usually to upgrade to a newer version, where a dedicated purge thread is used instead of the master thread. More information in the [documentation][9].
### Is there any way to reclaim the used space? ###
No, there is no easy and fast way at this time. InnoDB tablespaces never shrink... see this [10-year-old bug report][10], most recently updated by James Day (thanks):
When you delete rows, the pages are marked as deleted for later reuse, but the space is never reclaimed. The only way is to start the database with a fresh ibdata1. To do that you need to take a full logical backup with mysqldump, then stop MySQL and remove all the databases plus the ib_logfile* and ibdata1* files. When you start MySQL again, it will create a brand new shared tablespace. Then restore the logical backup.
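Condensed into commands, that procedure looks roughly like this (a hedged sketch — do not run it without a tested backup; the paths assume a Debian/Ubuntu-style datadir):

    # 1. take a full logical backup
    mysqldump --all-databases --routines --triggers > full_backup.sql
    # 2. stop MySQL, remove the shared tablespace and redo logs
    sudo service mysql stop
    sudo rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
    # (remove the database directories too, as described above)
    # 3. start MySQL -- a fresh ibdata1 is created -- and restore
    sudo service mysql start
    mysql < full_backup.sql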
### Summary ###
When the ibdata1 file is growing too fast, it is usually caused by a long-running transaction that has been forgotten about. Try to fix the problem as soon as possible (by committing or killing the transaction), because you cannot reclaim the wasted disk space without the painfully slow mysqldump process.
Monitoring the database to avoid these problems is also highly recommended. Our [MySQL monitoring plugins][11] include a Nagios script that can alert you if it finds a transaction that has been running for too long.
--------------------------------------------------------------------------------
via: https://www.percona.com/blog/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/
Author: [Miguel Angel Nieto][a]
Translator: [wyangsun](https://github.com/wyangsun)
Proofreader: [wxy](https://github.com/wxy)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://www.percona.com/blog/author/miguelangelnieto/
[1]:https://www.percona.com/products/mysql-support
[2]:http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
[3]:https://www.percona.com/software/percona-server
[4]:https://www.percona.com/doc/percona-server/5.5/scalability/innodb_insert_buffer.html#innodb_ibuf_max_size
[5]:https://www.percona.com/doc/percona-server/5.5/performance/innodb_doublewrite_path.html?id=percona-server:features:percona_innodb_doublewrite_path#innodb_doublewrite_file
[6]:http://dev.mysql.com/doc/refman/5.6/en/innodb-performance.html#innodb-undo-tablespace
[7]:http://bugs.mysql.com/bug.php?id=57611
[8]:https://github.com/jeremycole/innodb_ruby
[9]:http://dev.mysql.com/doc/innodb/1.1/en/innodb-improved-purge-scheduling.html
[10]:http://bugs.mysql.com/bug.php?id=1341
[11]:https://www.percona.com/software/percona-monitoring-plugins

View File

@ -1,160 +0,0 @@
FSSlc Translating
Backup with these DeDuplicating Encryption Tools
================================================================================
Data is growing both in volume and value. It is becoming increasingly important to be able to back up and restore this information quickly and reliably. As society has adapted to technology and learned how to depend on computers and mobile devices, few can deal with the reality of losing important data. Of firms that suffer the loss of data, 30% fold within a year, and 70% cease trading within five years. This highlights the value of data.
With data growing in volume, improving storage utilization is pretty important. In computing, data deduplication is a specialized data compression technique for eliminating duplicate copies of repeating data. This technique therefore improves storage utilization.
Data is not only of interest to its creator. Governments, competitors, criminals, and snoopers may be very keen to access your data. They might want to steal your data, extort money from you, or see what you are up to. Encryption is essential to protect your data.
So the solution is deduplicating, encrypting backup software.
Making file backups is an essential activity for all users, yet many users do not take adequate steps to protect their data. Whether a computer is being used in a corporate environment, or for private use, the machine's hard disk may fail without any warning signs. Alternatively, some data loss occurs as a result of human error. Without regular backups being made, data will inevitably be lost even if the services of a specialist recovery organisation are used.
This article provides a quick roundup of 6 deduplicating encryption backup tools.
----------
### Attic ###
Attic is a deduplicating, encrypted, authenticated and compressed backup program written in Python. The main goal of Attic is to provide an efficient and secure way to backup data. The data deduplication technique used makes Attic suitable for daily backups since only the changes are stored.
Features include:
- Easy to use
- Space efficient storage — variable block size deduplication is used to reduce the number of bytes stored by detecting redundant data
- Optional data encryption using 256-bit AES encryption. Data integrity and authenticity is verified using HMAC-SHA256
- Off-site backups over SSH
- Backups mountable as filesystems
Website: [attic-backup.org][1]
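A hedged taste of the workflow (the command shapes follow Attic's quick-start; the repository path is arbitrary):

    # create a repository, then a first deduplicated archive
    attic init /media/backup/documents.attic
    attic create /media/backup/documents.attic::monday ~/Documents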
----------
### Borg ###
Borg is a fork of Attic. It is a secure open source backup program designed for efficient data storage where only new or modified data is stored.
The main goal of Borg is to provide an efficient and secure way to backup data. The data deduplication technique used makes Borg suitable for daily backups since only the changes are stored. The authenticated encryption makes it suitable for backups to not fully trusted targets.
Borg is written in Python. Borg was created in May 2015 in response to the difficulty of getting new code or larger changes incorporated into Attic.
Features include:
- Easy to use
- Space efficient storage — variable block size deduplication is used to reduce the number of bytes stored by detecting redundant data
- Optional data encryption using 256-bit AES encryption. Data integrity and authenticity is verified using HMAC-SHA256
- Off-site backups over SSH
- Backups mountable as filesystems
Borg is not compatible with Attic.
Website: [borgbackup.github.io/borgbackup][2]
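Being a fork, Borg keeps essentially the same command set (a hedged sketch — flags vary between early releases, so check borg --help on yours):

    borg init /media/backup/documents.borg
    borg create /media/backup/documents.borg::tuesday ~/Documents
    # archives can also be mounted read-only via FUSE
    borg mount /media/backup/documents.borg::tuesday /mnt/restore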
----------
### Obnam ###
Obnam (OBligatory NAMe) is an easy to use, secure Python based backup program. Backups can be stored on local hard disks, or online via the SSH SFTP protocol. The backup server, if used, does not require any special software, on top of SSH.
Obnam performs de-duplication by splitting up file data into chunks, and storing those individually. Generations are incremental backups; every backup generation looks like a fresh snapshot, but is really incremental. Obnam is developed by Lars Wirzenius.
Features include:
- Easy to use
- Snapshot backups
- Data de-duplication, across files, and backup generations
- Encrypted backups, using GnuPG
- Backup multiple clients to a single repository
- Backup checkpoints (creates a "save" every 100MBs or so)
- Number of options for performance tuning including lru-size and/or upload-queue-size
- MD5 checksum algorithm for recognising duplicate data chunks
- Store backups to a server via SFTP
- Supports both push (i.e. run on the client) and pull (i.e. run on the server) methods
Website: [obnam.org][3]
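A minimal hedged invocation (the --repository and --to flags are quoted from memory of the Obnam manual — verify them with obnam --help):

    obnam backup --repository /media/backup/repo ~/Documents
    obnam restore --repository /media/backup/repo --to /tmp/restored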
----------
### Duplicity ###
Duplicity incrementally backs up files and directories by encrypting tar-format volumes with GnuPG and uploading them to a remote (or local) file server. To transmit data it can use ssh/scp, local file access, rsync, ftp, and Amazon S3.
Because duplicity uses librsync, the incremental archives are space efficient and only record the parts of files that have changed since the last backup. As the software uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.
Currently duplicity supports deleted files, full unix permissions, directories, symbolic links, fifos, etc.
The duplicity package also includes the rdiffdir utility. Rdiffdir is an extension of librsync's rdiff to directories; it can be used to produce signatures and deltas of directories as well as regular files.
Features include:
- Simple to use
- Encrypted and signed archives (using GnuPG)
- Bandwidth and space efficient, using the rsync algorithm
- Standard file format
- Choice of remote protocol
- Local storage
- scp/ssh
- ftp
- rsync
- HSI
- WebDAV
- Amazon S3
Website: [duplicity.nongnu.org][4]
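A hedged pair of examples (file:// targets a local directory; GnuPG will prompt for a passphrase unless you pass --no-encryption):

    duplicity ~/Documents file:///media/backup/documents
    # restoring runs the other way around
    duplicity restore file:///media/backup/documents ~/Documents.restored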
----------
### ZBackup ###
ZBackup is a versatile globally-deduplicating backup tool.
Features include:
- Parallel LZMA or LZO compression of the stored data. You can mix LZMA and LZO in a repository
- Built-in AES encryption of the stored data
- Possibility to delete old backup data
- Use of a 64-bit rolling hash, keeping the amount of soft collisions to zero
- Repository consists of immutable files. No existing files are ever modified
- Written in C++ with only modest library dependencies
- Safe to use in production
- Possibility to exchange data between repos without recompression
- Uses a 64-bit modified Rabin-Karp rolling hash
Website: [zbackup.org][5]
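zbackup acts as a pipe target, so the hedged example below reads a little differently from the other tools (--non-encrypted skips the key setup for brevity):

    zbackup init --non-encrypted /media/backup/zrepo
    tar c ~/Documents | zbackup backup /media/backup/zrepo/backups/docs.tar
    zbackup restore /media/backup/zrepo/backups/docs.tar > docs.tar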
----------
### bup ###
bup is a program written in Python that backs things up. It's short for "backup". It provides an efficient way to backup a system based on the git packfile format, providing fast incremental saves and global deduplication (among and within files, including virtual machine images).
bup is released under the LGPL version 2 license.
Features include:
- Global deduplication (among and within files, including virtual machine images)
- Uses a rolling checksum algorithm (similar to rsync) to split large files into chunks
- Uses the packfile format from git
- Writes packfiles directly offering fast incremental saves
- Can use "par2" redundancy to recover corrupted backups
- Mount your bup repository as a FUSE filesystem
Website: [bup.github.io][6]
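A hedged sketch of bup's index-then-save cycle (the repository defaults to ~/.bup unless BUP_DIR says otherwise):

    bup init
    bup index ~/Documents
    bup save -n documents ~/Documents
    # list what the repository now contains
    bup ls documents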
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20150628060000607/BackupTools.html
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[1]:https://attic-backup.org/
[2]:https://borgbackup.github.io/borgbackup/
[3]:http://obnam.org/
[4]:http://duplicity.nongnu.org/
[5]:http://zbackup.org/
[6]:https://bup.github.io/

View File

@ -1,69 +0,0 @@
sevenot translating
First Stable Version Of Atom Code Editor Has Been Released
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/06/Atom_stable.png)
[Atom 1.0][1] is here. One of the [best open source code editors][2], [Atom][3] was available for public use for almost a year, but this is the first stable version of the most talked-about text/code editor of recent times. Promoted as the “hackable text editor for the 21st century”, this project of [Github][4] has already been downloaded 1.5 million times, and it currently has over 350,000 monthly active users.
### It's been a long time ###
Rome was not built in a day and neither was Atom. From its first conceptualization in 2008 to the first stable release this month, it has taken several years and hundreds of contributors from across the globe, along with the main developers working on Atom's core. A quick look at the journey of Atom can be seen in the picture below:
![Image credit: Atom](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/06/Atom_stable_timeline.jpeg)
Image credit: Atom
### Back to the future ###
This launch of Atom 1.0 is announced with a retro video showing the capabilities of the editor. Resembling a 70s science fiction TV series, this will be the coolest video you are going to watch today :)
(Note: YouTube video — if it can't be embedded, turn it into a link)
<iframe width="640" height="390" frameborder="0" allowfullscreen="true" src="http://www.youtube.com/embed/Y7aEiVwBAdk?version=3&amp;rel=1&amp;fs=1&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" type="text/html" class="youtube-player"></iframe>
### Features of Atom text editor ###
- Cross-platform editing
- Built-in package manager
- Smart autocompletion
- File system browser
- Multiple panes
- Find and replace
- Highly customizable
- Modern look
### Get Atom 1.0 ###
Atom 1.0 is available for Linux, Windows and Mac OS X. For Debian based Linux distributions such as Ubuntu and Linux Mint, Atom provides .deb binaries. For Fedora, it also has .rpm binaries. You can also get the source code, if you like. The links below will let you download the latest stable version.
- [Atom .deb][5]
- [Atom .rpm][6]
- [Atom Source Code][7]
If you prefer, you can [install Atom in Ubuntu using PPA][8]. The PPA is not official though.
(Note: below is a poll — run the poll inside the article when publishing)
#### Are you excited about Atom? ####
- Oh Yes! This is the best thing that could happen to programmers.
- Not really. I have seen better editors.
- Don't care. My default text editor does the job just fine.
--------------------------------------------------------------------------------
via: http://itsfoss.com/atom-stable-released/
Author: [Abhishek][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:http://itsfoss.com/author/abhishek/
[1]:http://blog.atom.io/2015/06/25/atom-1-0.html
[2]:http://itsfoss.com/best-modern-open-source-code-editors-for-linux/
[3]:https://atom.io/
[4]:https://github.com/
[5]:https://atom.io/download/deb
[6]:https://atom.io/download/rpm
[7]:https://github.com/atom/atom/blob/master/docs/build-instructions/linux.md
[8]:http://itsfoss.com/install-atom-text-editor-ubuntu-1404-linux-mint-17/

View File

@ -0,0 +1,118 @@
4 CCleaner Alternatives For Ubuntu Linux
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/ccleaner-10-700x393.jpg)
Back in my Windows days, [CCleaner][1] was my favorite tool for freeing up space, deleting junk files and speeding up Windows. I know I am not the only one who looked for a CCleaner for Linux when I switched from Windows. If you are looking for a CCleaner alternative in Linux, I am going to list here four such applications that you can use to clean up Ubuntu or Ubuntu-based Linux distributions. But before we see the list, let's ponder over whether Linux requires system clean up tools or not.
### Does Linux need system clean up utilities like CCleaner? ###
To get this answer, let's first see what CCleaner does. As per [How-To Geek][2]:
> CCleaner has two main uses. One, it scans for and deletes useless files, freeing up space. Two, it erases private data like your browsing history and list of most recently opened files in various programs.
So in short, it performs a system-wide clean up of temporary files, be it in your web browser or in your media player. You might know that Windows has had an affection for keeping junk files in the system since forever, but what about Linux? What does it do with the temporary files?
Unlike Windows, Linux cleans up all the temporary files (stored in /tmp) automatically. You don't have a registry in Linux, which further reduces the headache. At worst, you might have some broken packages, packages that are not needed anymore, and internet browsing history, cookies and cache.
### Does it mean that Linux does not need system clean up utilities? ###
- The answer is no, if you can run a few commands for occasional package cleaning, manually delete browser history, etc.
- The answer is yes, if you don't want to run from place to place and want one tool to rule them all, where you can clean up all the suggested things in one (or a few) click(s).
If your answer is yes, let's move on and see some CCleaner-like utilities to clean up your Ubuntu Linux.
### CCleaner alternatives for Ubuntu ###
Please note that I am using Ubuntu here, because some tools discussed here exist only for Ubuntu-based Linux distributions while some are available for all Linux distributions.
#### 1. BleachBit ####
![BleachBit System Cleaning Tool for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/BleachBit_Cleaning_Tool_Ubuntu.jpeg)
[BleachBit][3] is a cross-platform app available for both Windows and Linux. It has a long list of applications that it supports for cleaning, thus giving you the option of cleaning cache, cookies and log files. A quick look at its features:
- Simple GUI — check the boxes you want, preview it and delete it.
- Multi-platform: Linux and Windows
- Free and open source
- Shred files to hide their contents and prevent data recovery
- Overwrite free disk space to hide previously deleted files
- Command line interface also available
BleachBit is available by default in Ubuntu 14.04 and 15.04. You can install it using the command below in terminal:
sudo apt-get install bleachbit
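The command line interface mentioned above can be scripted too (a hedged example; the option names are from bleachbit --help, and --preview reports what would be deleted without touching anything):

    # list every cleaner:option pair BleachBit knows about
    bleachbit --list-cleaners
    # dry-run, then actually clean, the system cache
    bleachbit --preview system.cache
    bleachbit --clean system.cache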
BleachBit has binaries available for all major Linux distributions. You can download BleachBit from the link below:
- [Download BleachBit for Linux][4]
#### 2. Sweeper ####
![Sweeper system clean up tool for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/sweeper.jpeg)
Sweeper is a system clean up utility that is part of the [KDE SC utilities][5] module. Its main features are:
- remove web-related traces: cookies, history, cache
- remove the image thumbnails cache
- clean the applications and documents history
Sweeper is available by default in Ubuntu repository. Use the command below in a terminal to install Sweeper:
sudo apt-get install sweeper
#### 3. Ubuntu Tweak ####
![Ubuntu Tweak Tool for cleaning up Ubuntu system](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Tweak_Janitor.jpeg)
As the name suggests, [Ubuntu Tweak][6] is more of a tweaking tool than a cleaning utility. But along with tweaking things like compiz settings, panel configuration, start up program control, power management etc, Ubuntu Tweak also provides a Janitor tab that lets you:
- clean browser cache
- clean Ubuntu Software Center cache
- clean thumbnail cache
- clean apt repository cache
- clean old kernel files
- clean package configs
You can get the .deb installer for Ubuntu Tweak from the link below:
- [Download Ubuntu Tweak][7]
#### 4. GCleaner (beta) ####
![GCleaner CCleaner like tool](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/GCleaner.jpeg)
One of the third party apps for elementaryOS Freya, GCleaner aims to be the CCleaner of the GNU world. Its interface strongly resembles CCleaner's. Some of the main features of GCleaner are:
- clean browser history
- clean app cache
- clean packages and configs
- clean recent document history
- empty recycle bin
At the time of writing this article, GCleaner is in heavy development. You can check the project website and get the source code to build and use GCleaner.
- [Know More About GCleaner][8]
### Your choice? ###
I have listed the possibilities for you, and I'll let you decide which tool you would use to clean Ubuntu 14.04. But I am certain that if you were looking for a CCleaner-like application, one of these four will end your search.
--------------------------------------------------------------------------------
via: http://itsfoss.com/ccleaner-alternatives-ubuntu-linux/
Author: [Abhishek][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:http://itsfoss.com/author/abhishek/
[1]:https://www.piriform.com/ccleaner/download
[2]:http://www.howtogeek.com/172820/beginner-geek-what-does-ccleaner-do-and-should-you-use-it/
[3]:http://bleachbit.sourceforge.net/
[4]:http://bleachbit.sourceforge.net/download/linux
[5]:https://www.kde.org/applications/utilities/
[6]:http://ubuntu-tweak.com/
[7]:http://ubuntu-tweak.com/
[8]:https://quassy.github.io/elementary-apps/GCleaner/

View File

@ -1,3 +1,5 @@
Translating by H-mudcup
Defending the Free Linux World
================================================================================
![](http://www.linuxinsider.com/ai/908455/open-invention-network.jpg)
@ -122,4 +124,4 @@ via: http://www.linuxinsider.com/story/Defending-the-Free-Linux-World-81512.html
[2]:http://www.redhat.com/
[3]:http://www.law.uh.edu/
[4]:http://www.chaoticmoon.com/
[5]:http://www.ieee.org/
[5]:http://www.ieee.org/

View File

@ -1,3 +1,4 @@
sevenot translating
10 Top Distributions in Demand to Get Your Dream Job
================================================================================
We are coming up with a series of five articles which aims at making you aware of the top skills that will help you get your dream job. In this competitive world you cannot rely on one skill. You need to have a balanced set of skills. There is no measure of a balanced skill set except a few conventions and statistics, which change from time to time.

View File

@ -0,0 +1,125 @@
Interview: Larry Wall
================================================================================
> Perl 6 has been 15 years in the making, and is now due to be released at the end of this year. We speak to its creator to find out what's going on.
Larry Wall is a fascinating man. He's the creator of Perl, a programming language that's widely regarded as the glue holding the internet together, and mocked by some as being a “write-only” language due to its density and liberal use of non-alphanumeric characters. Larry also has a background in linguistics, and is well known for delivering entertaining “State of the Onion” presentations about the future of Perl.
At FOSDEM 2015 in Brussels, we caught up with Larry to ask him why Perl 6 has taken so long (Perl 5 was released in 1994), how difficult it is to manage a project when everyone has strong opinions and is pulling in different directions, and how his background in linguistics influenced the design of Perl from the start. Get ready for some intriguing diversions…
![](http://www.linuxvoice.com/wp-content/uploads/2015/07/wall1.jpg)
**Linux Voice: You once had a plan to go and find an undocumented language somewhere in the world and create a written script for it, but you never had the opportunity to fulfil this plan. Is that something you'd like to go back and do now?**
Larry Wall: You have to be kind of young to be able to carry that off! It's actually a lot of hard work, and organisations that do these things don't tend to take people in when they're over a certain age. Partly this is down to health and vigour, but also because people are much better at picking up new languages when they're younger, and you have to learn the language before making a script for it.
I started trying to teach myself Japanese about 10 years ago, and I could speak it quite well, because of my phonology and phonetics training — but it's very hard for me to understand what anybody says. So I can go to Japan and ask for directions, but I can't really understand the answers!
> “With Perl 6, we found some ways to make the computer more sure about what the user is talking about.”
So usually learning a language well enough to develop a writing system, and to at least be conversational in the language, takes some period of years before you can get to the point where you can actually do literacy and start educating people on their own culture, as it were. And then you teach them to write about their own culture as well.
Of course, if you have language helpers — and we were told not to call them “language informants”, or everyone would think we were working for the CIA! — if you have these people, you can get them to come in and help you learn the foreign language. They are not teachers but there are ways of eliciting things from someone who's not a language teacher — they can still teach you how to speak. They can take a stick and point to it and say “that's a stick”, and drop it and say “the stick falls”. Then you start writing things down and systematising things.
The motivation that most people have, going out to these groups, is to translate the Bible into their languages. But that's only one part of it; the other is also culture preservation. Missionaries get kind of a bad rep on that, because anthropologists think they should be left to sit there in their own culture. But somebody is probably going to change their culture anyway — it's usually the army, or businesses coming in, like Coca Cola or the sewing machine people, or missionaries. And of those three, the missionaries are the least damaging, if they're doing their job right.
**LV: Many writing systems are based on existing scripts, and then you have invented ones like Greenlandic…**
LW: The Cherokee invented their own just by copying letters, and they have no mapping much to what we think of letters, and it's fairly arbitrary in that sense. It just has to represent how the people themselves think of the language, and sufficiently well to communicate. Often there will be variations on Western orthography, using characters from Latin where possible. Tonal languages have to mark the tones somehow, by accents or by numbers.
As soon as you start leaning towards a phonetic or phonological representation, then you also start to lose dialectical differences — or you have to write the dialectal differences. Or you have conventional spelling like we have in English, but pronunciation that doesn't really match it.
**LV: When you started working on Perl, what did you take from your background in linguistics that made you think: “this is really important in a programming language”?**
LW: I thought a lot about how people use languages. In real languages, you have a system of nouns and verbs and adjectives, and you kind of know which words are which type. And in real natural languages, you have a lot of instances of shoving one word into a different slot. The linguistic theory I studied was called tagmemics, and it accounts for how this works in a natural language — that you could have something that you think of as a noun, but you can verb it, and people do that all the time.
You can pretty much shove anything in any slot, and you can communicate. One of my favourite examples is shoving an entire sentence in as an adjective. The sentence goes like this: “I don't like your I-can-use-anything-as-an-adjective attitude”!
So natural language is very flexible this way because you have a very intelligent listener — or at least, compared with a computer — who you can rely on to figure out what you must have meant, in case of ambiguity. Of course, in a computer language you have to manage the ambiguity much more closely.
Arguably in Perl 1 through to 5 we didn't manage it quite adequately enough. Sometimes the computer was confused when it really shouldn't be. With Perl 6, we discovered some ways to make the computer more sure about what the user is talking about, even if the user is confused about whether something is really a string or a number. The computer knows the exact type of it. We figured out ways of having stronger typing internally but still have the allomorphic “you can use this as that” idea.
![](http://www.linuxvoice.com/wp-content/uploads/2015/07/wall2.jpg)
**LV: For a long time Perl was seen as the “glue” language of the internet, for fitting bits and pieces together. Do you see Perl 6 as a release to satisfy the needs of existing users, or as a way to bring in new people, and bring about a resurgence in the language?**
LW: The initial intent was to make a better Perl for Perl programmers. But as we looked at some of the inadequacies of Perl 5, it became apparent that if we fixed these inadequacies, Perl 6 would be more applicable, as I mentioned in my talk — like how J. R. R. Tolkien talked about applicability [see http://tinyurl.com/nhpr8g2].
The idea that “easy things should be easy and hard things should be possible” goes way back, to the boundary between Perl 2 and Perl 3. In Perl 2, we couldn't handle binary data or embedded nulls — it was just C-style strings. I said then that “Perl is just a text processing language — you don't need those things in a text processing language”.
But it occurred to me at the time that there were a large number of problems that were mostly text, and had a little bit of binary data in them — network addresses and things like that. You use binary data to open the socket but then text to process it. So the applicability of the language more than doubled by making it possible to handle binary data.
That began a trade-off about what things should be easy in a language. Nowadays we have a principle in Perl, and we stole the phrase Huffman coding for it, from the bit encoding system where you have different sizes for characters. Common characters are encoded in fewer bits, and rarer characters are encoded in more bits.
> “There had to be a very careful balancing act. There were just so many good ideas at the beginning.”
We stole that idea as a general principle for Perl, for things that are commonly used, or when you have to type them very often — the common things need to be shorter or more succinct. Another bit of that, however, is that they're allowed to be more irregular. In natural language, it's actually the most commonly used verbs that tend to be the most irregular.
And there's a reason for that, because you need more differentiation of them. One of my favourite books is called The Search for the Perfect Language by Umberto Eco, and it's not about computer languages; it's about philosophical languages, and the whole idea that maybe some ancient language was the perfect language and we should get back to it.
All of those languages make the mistake of thinking that similar things should always be encoded similarly. But that's not how you communicate. If you have a bunch of barnyard animals, and they all have related names, and you say “Go out and kill the Blerfoo”, but you really wanted them to kill the Blerfee, you might get a cow killed when you want a chicken killed.
So in realms like that it's actually better to differentiate the words, for more redundancy in the communication channel. The common words need to have more of that differentiation. It's all about communicating efficiently, and then there's also this idea of self-clocking codes. If you look at a UPC label on a product — a barcode — that's actually a self-clocking code where each pair of bars and spaces is always in a unit of seven columns wide. You rely on that — you know the width of the bars will always add up to that. So it's self-clocking.
There are other self-clocking codes used in electronics. In the old transmission serial protocols there were stop and start bits so you could keep things synced up. Natural languages also do this. For instance, in the writing of Japanese, they don't use spaces. Because of the way they write it, they will have a Kanji character from Chinese at the head of each phrase, and then the endings are written in a syllabary.
**LV: Hiragana, right?**
LW: Yes, Hiragana. So naturally the head of each phrase really stands out with this system. Similarly, in ancient Greek, most of the verbs were declined or conjugated. So their standard endings were a sort of clocking mechanism. Spaces were optional in their writing system as well — it was a more modern invention to put the spaces in.
So similarly in computer languages, there's value in having a self-clocking code. We rely on this heavily in Perl, and even more heavily in Perl 6 than in previous releases. The idea is that when you're parsing an expression, you're either expecting a term or an infix operator. When you're expecting a term you might also get a prefix operator — that's kind-of in the same expectation slot — and when you're expecting an infix you might also get a postfix for the previous term.
But it flips back and forth. And if the compiler actually knows which it is expecting, you can overload those a little bit, and Perl does this. So a slash when it's expecting a term will introduce a regular expression, whereas a slash when you're expecting an infix will be division. On the other hand, we don't want to overload everything, because then you lose the self-clocking redundancy.
Most of our best error messages, for syntax errors, actually come out of noticing that you have two terms in a row. And then we try to figure out why there are two terms in a row — “oh, you must have left a semicolon out on the previous line”. So we can produce much better error messages than the more ad-hoc parsers.
![](http://www.linuxvoice.com/wp-content/uploads/2015/07/wall3.jpg)
**LV: Why has Perl 6 taken fifteen years? It must be hard overseeing a language when everyone has different opinions about things, and there's not always a right way to do things, and a wrong way.**
LW: There had to be a very careful balancing act. There were just so many good ideas at the beginning — well, I don't want to say they were all good ideas. There were so many pain points, like there were 361 RFCs [feature proposal documents] when I expected maybe 20. We had to sit back and actually look at them all, and ignore the proposed solutions, because they were all over the map and all had tunnel vision. Each one may have just changed one thing, but if we had done them all, it would've been a complete mess.
So we had to re-rationalise based on how people were actually hurting when they tried to use Perl 5. We started to look at the unifying, underlying ideas. Many of these RFCs were based on the fact that we had an inadequate type system. By introducing a more coherent type system we could fix many problems in a sane fashion and a cohesive fashion.
And we started noticing other ways how we could unify the featuresets and start reusing ideas in different areas. Not necessarily that they were the same thing underneath. We have a standard way of writing pairs — well, two ways in Perl! But the way of writing pairs with a colon could also be reused for radix notation, or for literal numbers in any base. It could also be used for various alternative forms of quoting. We say in Perl that it's “strangely consistent”.
> “People who made early implementations of Perl 6 came back to me, cap in hand, and said “We really need a language designer.””
Similar ideas pop up, and you say “I'm already familiar with how that syntax works, but I see it's being used for something else”. So it took some unity of vision to find these unifications. People who had the various ideas and made early implementations of Perl 6 came back to me, cap-in-hand, and said “We really need a language designer. Could you be our benevolent dictator?”
So I was the language designer, but I was almost explicitly told: “Stay out of the implementation! We saw what you did made out of Perl 5, and we don't like it!” It was really funny because the innards of the new implementation started looking a whole lot like Perl 5 inside, and maybe that's why some of the early implementations didn't work well.
Because we were still feeling our way into the whole design, the implementations made a lot of assumptions about what a VM should do and shouldn't do, so we ended up with something like an object oriented assembly language. That sort of problem was fairly pervasive at the beginning. Then the Pugs guys came along and said “Let's use Haskell, because it makes you think very clearly about what you're doing. Let's use it to clarify our semantic model underneath.”
So we nailed down some of those semantic models, but more importantly, we started building the test suite at that point, to be consistent with those semantic models. Then after that, the Parrot VM continued developing, and then another implementation, Niecza, came along and it was based on .NET. It was by a young fellow who was very smart and implemented a large subset of Perl 6, but he was kind of a loner, and didn't really figure out a way to get other people involved in his project.
At the same time the Parrot project was getting too big for anyone to really manage it inside, and very difficult to refactor. At that point the fellows working on Rakudo decided that we probably needed to be on more platforms than just the Parrot VM. So they invented a portability layer called NQP which stands for “Not Quite Perl”. They ported it to first of all run on the JVM (Java Virtual Machine), and while they were doing that they were also secretly working on a new VM called MoarVM. That became public a little over a year ago.
Both MoarVM and JVM run a pretty much equivalent set of regression tests — Parrot is kind-of trailing back in some areas. So that has been very good to flush out VM-specific assumptions, and we're starting to think about NQP targeting other things. There was a Google Summer of Code project one year to target NQP to JavaScript, and that might fit right in, because MoarVM also uses Node.js for much of its more mundane processing.
We probably need to concentrate on MoarVM for the rest of this year, until we actually define 6.0, and then the rest will catch up.
**LV: Last year in the UK, the government kicked off the Year of Code, an attempt to get young people interested in programming. There are lots of opinions about how this should be done — like whether you should teach low-level languages at the start, so that people really understand memory usage, or a high-level language. What's your take on that?**
LW: Up until now, the Python community has done a much better job of getting into the lower levels of education than we have. We'd like to do something in that space too, and that's partly why we have the butterfly logo, because it's going to be appealing to seven year old girls!
But we do think that Perl 6 will be learnable as a first language. A number of people have surprised us by learning Perl 5 as their first language. And you know, there are a number of fairly powerful concepts even in Perl 5, like closures, lexical scoping, and features you generally get from functional programming. Even more so in Perl 6.
> “Until now, the Python community has done a much better job of getting into the lower levels of education.”
Part of the reason the Perl 6 has taken so long is that we have around 50 different principles we try to stick to, and in language design youre end up juggling everything and saying “whats really the most important principle here”? There has been a lot of discussion about a lot of different things. Sometimes we commit to a decision, work with it for a while, and then realise it wasnt quite the right decision.
We didn't design or specify pretty much anything about concurrent programming until someone came along who was smart enough about it and knew what the different trade-offs were, and that's Jonathan Worthington. He has blended together ideas from other languages like Go and C#, with concurrent primitives that compose well. Composability is important in the rest of the language.
There are an awful lot of concurrent and parallel programming systems that don't compose well (like threads and locks), and there have been lots of ways to do it poorly. So in one sense, it's been worth waiting this extra time to see some of these languages like Go and C# develop really good high-level primitives (that's sort of a contradiction in terms) that compose well.
--------------------------------------------------------------------------------
via: http://www.linuxvoice.com/interview-larry-wall/
作者:[Mike Saunders][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.linuxvoice.com/author/mike/

View File

@ -1,3 +1,4 @@
XLCYun translating.
Syncthing: A Private, And Secure Tool To Sync Files/Folders Between Computers
================================================================================
### Introduction ###
@ -204,4 +205,4 @@ via: http://www.unixmen.com/syncthing-private-secure-tool-sync-filesfolders-comp
[a]:http://www.unixmen.com/author/sk/
[1]:https://github.com/syncthing/syncthing/releases/tag/v0.10.20
[2]:http://syncthing.net/
[2]:http://syncthing.net/

View File

@ -1,359 +0,0 @@
PHP Security
================================================================================
![](http://www.codeproject.com/KB/PHP/363897/php_security.jpg)
### Introduction ###
When offering an Internet service, you must always keep security in mind as you develop your code. It may appear that most PHP scripts aren't sensitive to security concerns; this is mainly due to the large number of inexperienced programmers working in the language. However, there is no reason for you to have an inconsistent security policy based on a rough guess at your code's significance. The moment you put anything financially interesting on your server, it becomes likely that someone will try to casually hack it. Create a forum program or any sort of shopping cart, and the probability of attack rises to a dead certainty.
### Background ###
Here are a few general security guidelines for securing your web content:
#### Don't trust forms ####
Hacking forms is trivial. Yes, by using a silly JavaScript trick, you may be able to limit your form to allow only the numbers 1 through 5 in a rating field. The moment someone turns JavaScript off in their browser or posts custom form data, your client-side validation flies out the window.
Users interact with your scripts primarily through form parameters, and therefore they're the biggest security risk. What's the lesson? Always validate the data that gets passed to any PHP script in the PHP script. In this article, we show you how to analyze and protect against cross-site scripting (XSS) attacks, which can hijack your user's credentials (and worse). You'll also see how to prevent the MySQL injection attacks that can taint or destroy your data.
#### Don't trust users ####
Assume that every piece of data your website gathers is laden with harmful code. Sanitize every piece, even if you're positive that nobody would ever try to attack your site.
#### Turn off global variables ####
The biggest security hole you can have is having the register_globals configuration parameter enabled. Mercifully, it's turned off by default in PHP 4.2 and later. If **register_globals** is on, you can disable it by setting register_globals to Off in your server's php.ini file:
register_globals = Off
Novice programmers view registered globals as a convenience, but they don't realize how dangerous this setting is. A server with global variables enabled automatically assigns global variables to any form parameters. For an idea of how this works and why this is dangerous, let's look at an example.
Let's say that you have a script named process.php that enters form data into your user database. The original form looked like this:
<input name="username" type="text" size="15" maxlength="64">
When running process.php, PHP with registered globals enabled places the value of this parameter into the $username variable. This saves some typing over accessing them through **$_POST['username']** or **$_GET['username']**. Unfortunately, this also leaves you open to security problems, because PHP sets a variable for any value sent to the script via a GET or POST parameter, and that is a big problem if you didn't explicitly initialize the variable and you don't want someone to manipulate it.
Take the script below, for example—if the $authorized variable is true, it shows confidential data to the user. Under normal circumstances, the $authorized variable is set to true only if the user has been properly authenticated via the hypothetical authenticated_user() function. But if you have **register_globals** active, anyone could send a GET parameter such as authorized=1 to override this:
<?php
// Define $authorized = true only if user is authenticated
if (authenticated_user()) {
$authorized = true;
}
?>
The moral of the story is that you should pull form data from predefined server variables. All data passed on to your web page via a posted form is automatically stored in a large array called $_POST, and all GET data is stored in a large array called **$_GET**. File upload information is stored in a special array called $_FILES. In addition, there is a combined variable called $_REQUEST.
To access the username field from a POST method form, use **$_POST['username']**. Use **$_GET['username']** if the username is in the URL. If you don't care where the value came from, use **$_REQUEST['username']**.
<?php
$post_value = $_POST['post_value'];
$get_value = $_GET['get_value'];
$some_variable = $_REQUEST['some_value'];
?>
$_REQUEST is a union of the $_GET, $_POST, and $_COOKIE arrays. If you have two or more values of the same parameter name, be careful of which one PHP uses. The default order is cookie, POST, then GET.
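As a small sketch of why that order matters (the 'id' parameter name here is hypothetical): a stale cookie named id would silently shadow a GET parameter of the same name in $_REQUEST, so when the source matters, read from the specific array you expect:

<?php
// Hypothetical 'id' parameter: a cookie named 'id' takes precedence
// over ?id=42 in $_REQUEST under the default cookie, POST, GET order.
$id = isset($_GET['id']) ? $_GET['id'] : null; // accept GET only
if ($id === null || !ctype_digit($id)) {
die('Invalid id');
}
?>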
#### Recommended Security Configuration Options ####
There are several PHP configuration settings that affect security features. Here are the ones that should obviously be used for production servers:
- **register_globals** set to off
- **safe_mode** set to off
- **display_errors** set to off. This is the setting that controls visible error reporting, which sends a message to the user's browser if something goes wrong. For production servers, use error logging instead. Development servers can enable visible errors as long as they're behind a firewall.
- Disable these functions: system(), exec(), passthru(), shell_exec(), proc_open(), and popen().
- **open_basedir** set for both the /tmp directory (so that session information can be stored) and the web root so that scripts cannot access files outside a selected area.
- **expose_php** set to off. This feature adds a PHP signature that includes the version number to the Apache headers.
- **allow_url_fopen** set to off. This isn't strictly necessary if you're careful about how you access files in your code—that is, you validate all input parameters.
- **allow_url_include** set to off. There's really no sane reason for anyone to want to access include files via HTTP.
In general, if you find code that wants to use these features, you shouldn't trust it. Be especially careful of anything that wants to use a function such as system()—it's almost certainly flawed.
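Collected into one place, those recommendations look roughly like this in php.ini (a sketch only: the open_basedir paths are assumptions you must adapt to your own layout):

register_globals = Off
safe_mode = Off
display_errors = Off
log_errors = On
expose_php = Off
allow_url_fopen = Off
allow_url_include = Off
open_basedir = "/tmp:/var/www/html"
disable_functions = "system,exec,passthru,shell_exec,proc_open,popen"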
With these settings now behind us, let's look at some specific attacks and the methods that will help you protect your server.
### SQL Injection Attacks ###
Because the queries that PHP passes to MySQL databases are written in the powerful SQL programming language, you run the risk of someone attempting an SQL injection attack by using MySQL in web query parameters. By inserting malicious SQL code fragments into form parameters, an attacker attempts to break into (or disable) your server.
Let's say that you have a form parameter that you eventually place into a variable named $product, and you create some SQL like this:
$sql = "select * from pinfo where product = '$product'";
If that parameter came straight from the form, use database-specific escapes with PHP's native functions, like this:
$sql = "SELECT * FROM pinfo WHERE product = '" .
mysql_real_escape_string($product) . "'";
If you don't, someone might just decide to throw this fragment into the form parameter:
39'; DROP pinfo; SELECT 'FOO
Then the result of $sql is:
select * from pinfo where product = '39'; DROP pinfo; SELECT 'FOO'
Because the semicolon is MySQL's statement delimiter, the database processes these three statements:
select * from pinfo where product = '39'
DROP pinfo
SELECT 'FOO'
Well, there goes your table.
Note that this particular syntax won't actually work with PHP and MySQL, because the **mysql_query()** function allows just one statement to be processed per request. However, a subquery will still work.
To prevent SQL injection attacks, do two things:
- Always validate all parameters. For example, if something needs to be a number, make sure that it's a number.
- Always use the mysql_real_escape_string() function on data to escape any quotes or double quotes in your data.
**Note: To automatically escape any form data, you can turn on Magic Quotes (though Magic Quotes is deprecated in modern PHP and is no substitute for explicit escaping).**
Some MySQL damage can be avoided by restricting your MySQL user privileges. Any MySQL account can be restricted to only do certain kinds of queries on selected tables. For example, you could create a MySQL user who can select rows but nothing else. However, this is not terribly useful for dynamic data, and, furthermore, if you have sensitive customer information, it might be possible for someone to have access to some data that you didn't intend to make available. For example, a user accessing account data could try to inject some code that accesses another account number instead of the one assigned to the current session.
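Putting both rules together, here is a minimal sketch (pinfo is the article's table; the numeric id column and the form field names are hypothetical):

<?php
// Rule 1: validate. A numeric parameter must really be a number.
$id = intval($_POST['id']); // hypothetical numeric field
if ($id <= 0) {
die('Invalid id');
}
// Rule 2: escape. Free-text values go through mysql_real_escape_string().
$product = mysql_real_escape_string($_POST['product']);
$sql = "SELECT * FROM pinfo WHERE product = '$product' AND id = $id";
?>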
### Preventing Basic XSS Attacks ###
XSS stands for cross-site scripting. Unlike most attacks, this exploit works on the client side. The most basic form of XSS is to put some JavaScript in user-submitted content to steal the data in a user's cookie. Since most sites use cookies and sessions to identify visitors, the stolen data can then be used to impersonate that user—which is deeply troublesome when it's a typical user account, and downright disastrous if it's the administrative account. If you don't use cookies or session IDs on your site, your users aren't vulnerable, but you should still be aware of how this attack works.
Unlike MySQL injection attacks, XSS attacks are difficult to prevent. Yahoo!, eBay, Apple, and Microsoft have all been affected by XSS. Although the attack doesn't involve PHP, you can use PHP to strip user data in order to prevent attacks. To stop an XSS attack, you have to restrict and filter the data a user submits to your site. It is for this precise reason that most online bulletin boards don't allow the use of HTML tags in posts and instead replace them with custom tag formats such as **[b]** and **[linkto]**.
Let's look at a simple script that illustrates how to prevent some of these attacks. For a more complete solution, use SafeHTML, discussed later in this article.
function transform_HTML($string, $length = null) {
// Helps prevent XSS attacks
// Remove dead space.
$string = trim($string);
// Prevent potential Unicode codec problems.
$string = utf8_decode($string);
// HTMLize HTML-specific characters.
$string = htmlentities($string, ENT_NOQUOTES);
$string = str_replace("#", "&#35;", $string);
$string = str_replace("%", "&#37;", $string);
$length = intval($length);
if ($length > 0) {
$string = substr($string, 0, $length);
}
return $string;
}
This function transforms HTML-specific characters into HTML literals. A browser renders any HTML run through this script as text with no markup. For example, consider this HTML string:
<STRONG>Bold Text</STRONG>
Normally, this HTML would render as follows:
Bold Text
However, when run through **transform_HTML()**, it renders as the original input. The reason is that the tag characters are HTML entities in the processed string. The resulting string from **transform_HTML()** in plaintext looks like this:
&lt;STRONG&gt;Bold Text&lt;/STRONG&gt;
The essential piece of this function is the htmlentities() function call that transforms <, >, and & into their entity equivalents of **&lt;**, **&gt;**, and **&amp;**. Although this takes care of the most common attacks, experienced XSS hackers have another sneaky trick up their sleeve: Encoding their malicious scripts in hexadecimal or UTF-8 instead of normal ASCII text, hoping to circumvent your filters. They can send the code along as a GET variable in the URL, saying, "Hey, this is hexadecimal code, but could you run it for me anyway?" A hexadecimal example looks something like this:
<a href="http://host/a.php?variable=%22%3e %3c%53%43%52%49%50%54%3e%44%6f%73%6f%6d%65%74%68%69%6e%67%6d%61%6c%69%63%69%6f%75%73%3c%2f%53%43%52%49%50%54%3e">
But when the browser renders that information, it turns out to be:
<a href="http://host/a.php?variable="> <SCRIPT>Dosomethingmalicious</SCRIPT>
To prevent this, transform_HTML() takes the additional steps of converting # and % signs into their entities, shutting down hex attacks, and converting UTF-8-encoded data.
Finally, just in case someone tries to overload a string with a very long input, hoping to crash something, you can add an optional $length parameter to trim the string to the maximum length you specify.
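A minimal usage sketch (the comment field name and the 2,000-character cap are hypothetical):

<?php
// Any markup in the submitted comment is rendered as plain text.
$comment = transform_HTML($_POST['comment'], 2000);
echo $comment;
?>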
### Using SafeHTML ###
The problem with the previous script is that it is simple, and it does not allow for any kind of user markup. Unfortunately, there are hundreds of ways to try to sneak JavaScript past someone's filters, and short of stripping all HTML from someone's input, there's no way of stopping it.
Currently, there's no single script that's guaranteed to be unbreakable, though there are some that are better than most. There are two approaches to security, whitelisting and blacklisting, and whitelisting tends to be less complicated and more effective.
One whitelisting solution is the SafeHTML anti-XSS parser from PixelApes.
SafeHTML is smart enough to recognize valid HTML, so it can hunt and strip any dangerous tags. It does its parsing with another package called HTMLSax.
To install and use SafeHTML, do the following:
1. Go to [http://pixel-apes.com/safehtml/?page=safehtml][1] and download the latest version of SafeHTML.
1. Put the files in the classes directory on your server. This directory contains everything that SafeHTML and HTMLSax need to function.
1. Include the SafeHTML class file (safehtml.php) in your script.
1. Create a new SafeHTML object called $safehtml.
1. Sanitize your data with the $safehtml->parse() method.
Here's a complete example:
<?php
/* If you're storing the HTMLSax3.php in the /classes directory, along
with the safehtml.php script, define XML_HTMLSAX3 as a null string. */
define('XML_HTMLSAX3', '');
// Include the class file.
require_once('classes/safehtml.php');
// Define some sample bad code.
$data = "This data would raise an alert <script>alert('XSS Attack')</script>";
// Create a safehtml object.
$safehtml = new safehtml();
// Parse and sanitize the data.
$safe_data = $safehtml->parse($data);
// Display result.
echo 'The sanitized data is <br />' . $safe_data;
?>
If you want to sanitize any other data in your script, you don't have to create a new object; just use the $safehtml->parse() method throughout your script.
#### What Can Go Wrong? ####
The biggest mistake you can make is assuming that this class completely shuts down XSS attacks. SafeHTML is a fairly complex script that checks for almost everything, but nothing is guaranteed. You still want to do the parameter validation that applies to your site. For example, this class doesn't check the length of a given variable to ensure that it fits into a database field. It doesn't check for buffer overflow problems.
XSS hackers are creative and use a variety of approaches to try to accomplish their objectives. Just look at RSnake's XSS tutorial at [http://ha.ckers.org/xss.html][2] to see how many ways there are to try to sneak code past someone's filters. The SafeHTML project has good programmers working overtime to try to stop XSS attacks, and it has a solid approach, but there's no guarantee that someone won't come up with some weird and fresh approach that could short-circuit its filters.
**Note: For an example of the powerful effects of XSS attacks, check out [http://namb.la/popular/tech.html][3], which shows a step-by-step approach to creating the JavaScript XSS worm that overloaded the MySpace servers. **
### Protecting Data with a One-Way Hash ###
This script performs a one-way transformation on data—in other words, it can make a hash signature of someone's password, but you can't ever decrypt it and go back to the original password. Why would you want to do that? The application is in storing passwords. An administrator doesn't need to know users' passwords—in fact, it's a good idea that only the user knows his or her password. The system (and the system alone) should be able to identify a correct password; this has been the Unix password security model for years. One-way password security works as follows:
1. When a user or administrator creates or changes an account password, the system hashes the password and stores the result. The host system discards the plaintext password.
1. When the user logs in to a system via any means, the entered password is again hashed.
1. The host system throws away the plaintext password entered.
1. This newly hashed password is compared against the stored hash.
1. If the hashed passwords match, then the system grants access.
The host system does this without ever knowing the original password; in fact, the original value is completely irrelevant. As a side effect, should someone break into your system and steal your password database, the intruder will have a bunch of hashed passwords without any way of reversing them to find the originals. Of course, given enough time, computer power, and poorly chosen user passwords, an attacker could probably use a dictionary attack to figure out the passwords. Therefore, don't make it easy for people to get their hands on your password database, and if someone does, have everyone change their passwords.
#### Encryption Vs Hashing ####
Technically speaking, this process is not encryption. It is a hash, which is different from encryption for two reasons:
- Unlike in encryption, data cannot be decrypted.
- It's possible (but extremely unlikely) that two different strings will produce the same hash. There's no guarantee that a hash is unique, so don't try to use a hash as something like a unique key in a database.
function hash_ish($string) {
return md5($string);
}
The md5() function returns a 32-character hexadecimal string, based on the RSA Data Security Inc. Message-Digest Algorithm (also known, conveniently enough, as MD5). You can then insert that 32-character string into your database, compare it against other md5'd strings, or just adore its 32-character perfection.
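Tying **hash_ish()** back to the five login steps above, a minimal sketch (the storage and access helpers are hypothetical):

<?php
// Step 1: at account creation, store only the hash.
$stored_hash = hash_ish($_POST['new_password']);
store_hash_for_user($username, $stored_hash); // hypothetical helper
// Steps 2-5: at login, hash the attempt and compare.
if (hash_ish($_POST['password']) === $stored_hash) {
grant_access(); // hypothetical helper
} else {
deny_access(); // hypothetical helper
}
?>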
#### Hacking the Script ####
It is virtually impossible to decrypt MD5 data. That is, it's very hard. However, you still need good passwords, because it's still easy to make a database of hashes for the entire dictionary. There are online MD5 dictionaries where you can enter **06d80eb0c50b49a509b49f2424e8c805** and get a result of "dog." Thus, even though MD5s can't technically be decrypted, they're still vulnerable—and if someone gets your password database, you can be sure that they'll be consulting an MD5 dictionary. Thus, it's in your best interests when creating password-based systems that the passwords are long (a minimum of six characters and preferably eight) and contain both letters and numbers. And make sure that the password isn't in the dictionary.
### Encrypting Data with Mcrypt ###
MD5 hashes work just fine if you never need to see your data in readable form. Unfortunately, that's not always an option—if you offer to store someone's credit card information in encrypted format, you need to decrypt it at some later point.
One of the easiest solutions is the Mcrypt module, an add-in for PHP that allows high-grade encryption. The Mcrypt library offers more than 30 ciphers to use in encryption and the possibility of a passphrase that ensures that only you (or, optionally, your users) can decrypt data.
Let's see some hands-on use. The following script contains functions that use Mcrypt to encrypt and decrypt data:
<?php
$data = "Stuff you want encrypted";
$key = "Secret passphrase used to encrypt your data";
$cipher = MCRYPT_SERPENT_256; // a constant, not a string (libmcrypt 2.2.x; use MCRYPT_SERPENT with 2.4+)
$mode = MCRYPT_MODE_CBC; // likewise a constant, not a string
function encrypt($data, $key, $cipher, $mode) {
// Encrypt data
return (string)
base64_encode
(
mcrypt_encrypt
(
$cipher,
substr(md5($key),0,mcrypt_get_key_size($cipher, $mode)),
$data,
$mode,
substr(md5($key),0,mcrypt_get_block_size($cipher, $mode))
)
);
}
function decrypt($data, $key, $cipher, $mode) {
// Decrypt data
return (string)
mcrypt_decrypt
(
$cipher,
substr(md5($key),0,mcrypt_get_key_size($cipher, $mode)),
base64_decode($data),
$mode,
substr(md5($key),0,mcrypt_get_block_size($cipher, $mode))
);
}
?>
The **mcrypt_encrypt()** function requires several pieces of information:
- The data to be encrypted.
- The passphrase used to encrypt and unlock your data, also known as the key.
- The cipher used to encrypt the data, which is the specific algorithm used to encrypt the data. This script uses **MCRYPT_SERPENT_256**, but you can choose from an array of fancy-sounding ciphers, including **MCRYPT_TWOFISH192**, **MCRYPT_RC2**, **MCRYPT_DES**, and **MCRYPT_LOKI97**.
- The mode used to encrypt the data. There are several modes you can use, including Electronic Codebook and Cipher Feedback. This script uses **MCRYPT_MODE_CBC**, Cipher Block Chaining.
- An **initialization vector**—also known as an IV, or a seed—an additional bit of binary data used to seed the encryption algorithm. That is, it's something extra thrown in to make the algorithm harder to crack.
- The length of the string needed for the key and IV, which vary by cipher and block. Use the **mcrypt_get_key_size()** and **mcrypt_get_block_size()** functions to find the appropriate length; then trim the key value to the appropriate length with a handy **substr()** function. (If the key is shorter than the required value, don't worry—Mcrypt pads it with zeros.)
If someone steals both your data and your passphrase, they can just cycle through the ciphers until finding the one that works. Thus, we apply the additional security of using the **md5()** function on the key before we use it, so even having both data and passphrase won't get the intruder what she wants.
An intruder would need the function, the data, and the passphrase all at once—and if that is the case, they probably have complete access to your server, and you're hosed anyway.
There's a small data storage format problem here. Mcrypt returns its encrypted data in an ugly binary format that causes horrific errors when you try to store it in certain MySQL fields. Therefore, we use the **base64_encode()** and **base64_decode()** functions to transform the data into an SQL-compatible alphabetical format for storing and retrieving rows.
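A minimal usage sketch of the two functions above, assuming the $data, $key, $cipher, and $mode variables defined at the top of the script:

<?php
$encrypted = encrypt($data, $key, $cipher, $mode);
// Store $encrypted in a VARCHAR/TEXT column; later:
$plaintext = decrypt($encrypted, $key, $cipher, $mode);
?>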
#### Hacking the Script ####
In addition to experimenting with various encryption methods, you can add some convenience to this script. For example, rather than providing the key and mode every time, you could declare them as global constants in an included file.
### Generating Random Passwords ###
Random (but difficult-to-guess) strings are important in user security. For example, if someone loses a password and you're using MD5 hashes, you won't be able to, nor should you want to, look it up. Instead, you should generate a secure random password and send that to the user. Another application for random number generation is creating activation links in order to access your site's services. Here is a function that creates a password:
<?php
function make_password($num_chars) {
if ((is_numeric($num_chars)) &&
($num_chars > 0) &&
(! is_null($num_chars))) {
$password = '';
$accepted_chars = 'abcdefghijklmnopqrstuvwxyz1234567890';
// Seed the generator if necessary.
srand(((int)((double)microtime()*1000003)) );
for ($i=0; $i < $num_chars; $i++) {
$random_number = rand(0, (strlen($accepted_chars) -1));
$password .= $accepted_chars[$random_number] ;
}
return $password;
}
}
?>
#### Using the Script ####
The **make_password()** function returns a string, so all you need to do is supply the length of the string as an argument:
<?php
$fifteen_character_password = make_password(15);
?>
The function works as follows:
- The function makes sure that **$num_chars** is a positive nonzero integer.
- The function initializes the **$password** variable to an empty string.
- The function initializes the **$accepted_chars** variable to the list of characters the password may contain. This script uses all lowercase letters and the numbers 0 through 9, but you can choose any set of characters you like.
- The random number generator needs a seed, so it gets a bunch of random-like values. (This isn't strictly necessary on PHP 4.2 and later.)
- The function loops **$num_chars** times, one iteration for each character in the password to generate.
- For each new character, the script looks at the length of **$accepted_chars**, chooses a number between 0 and the length, and adds the character at that index in **$accepted_chars** to $password.
- After the loop completes, the function returns **$password**.
### License ###
This article, along with any associated source code and files, is licensed under [The Code Project Open License (CPOL)][4]
--------------------------------------------------------------------------------
via: http://www.codeproject.com/Articles/363897/PHP-Security
作者:[SamarRizvi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.codeproject.com/script/Membership/View.aspx?mid=7483622
[1]:http://pixel-apes.com/safehtml/?page=safehtml
[2]:http://ha.ckers.org/xss.html
[3]:http://namb.la/popular/tech.html
[4]:http://www.codeproject.com/info/cpol10.aspx

View File

@ -1,140 +0,0 @@
XLCYun translating.
How to manage Vim plugins
================================================================================
Vim is a versatile, lightweight text editor on Linux. While its initial learning curve can be overwhelming for an average Linux user, the benefits are well worth the effort. As far as functionality goes, Vim is fully customizable by means of plugins. Due to its high level of configuration, though, you need to spend some time with its plugin system to be able to personalize Vim in an effective way. Luckily, we have several tools that make life with Vim plugins easier. The one I use on a daily basis is Vundle.
### What is Vundle? ###
[Vundle][1], which stands for Vim Bundle, is a Vim plugin manager. Vundle allows you to install, update, search and clean up Vim plugins very easily. It can also manage your runtime and help with tags. In this tutorial, I am going to show how to install and use Vundle.
### Installing Vundle ###
First, [install Git][2] if you don't have it on your Linux system.
Next, create a directory where Vim plugins will be downloaded and installed. By default, this directory is located at ~/.vim/bundle.
$ mkdir -p ~/.vim/bundle
Now go ahead and install Vundle as follows. Note that Vundle itself is another Vim plugin. Thus we install Vundle under the ~/.vim/bundle directory we created earlier.
$ git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim
### Configuring Vundle ###
Now set up your .vimrc file as follows:
set nocompatible " This is required
filetype off " This is required
" Here you set up the runtime path
set rtp+=~/.vim/bundle/Vundle.vim
" Initialize vundle
call vundle#begin()
" This should always be the first
Plugin 'gmarik/Vundle.vim'
" This examples are from https://github.com/gmarik/Vundle.vim README
Plugin 'tpope/vim-fugitive'
" Plugin from http://vim-scripts.org/vim/scripts.html
Plugin 'L9'
"Git plugin not hosted on GitHub
Plugin 'git://git.wincent.com/command-t.git'
"git repos on your local machine (i.e. when working on your own plugin)
Plugin 'file:///home/gmarik/path/to/plugin'
" The sparkup vim script is in a subdirectory of this repo called vim.
" Pass the path to set the runtimepath properly.
Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}
" Avoid a name conflict with L9
Plugin 'user/L9', {'name': 'newL9'}
"Every Plugin should be before this line
call vundle#end() " required
Let me explain the above configuration a bit. By default, Vundle downloads and installs Vim plugins from github.com or vim-scripts.org. You can modify the default behavior.
To install from Github:
Plugin 'user/plugin'
To install from http://vim-scripts.org/vim/scripts.html:
Plugin 'plugin_name'
To install from another git repo:
Plugin 'git://git.another_repo.com/plugin'
To install from a local file:
Plugin 'file:///home/user/path/to/plugin'
Also, you can customize other options, such as the runtime path of your plugins, which is really useful if you are developing a plugin yourself or just want to load it from a directory other than ~/.vim.
Plugin 'rstacruz/sparkup', {'rtp': 'another_vim_path/'}
If you have plugins with the same names, you can rename your plugin so that it doesn't conflict.
Plugin 'user/plugin', {'name': 'newPlugin'}
### Using Vundle Commands ###
Once you have set up your plugins with Vundle, you can install, update, search for, and clean unused plugins using several Vundle commands.
#### Installing a new plugin ####
The PluginInstall command will install all plugins listed in your .vimrc file. You can also install just one specific plugin by passing its name.
:PluginInstall
:PluginInstall <plugin-name>
![](https://farm1.staticflickr.com/559/18998707843_438cd55463_c.jpg)
#### Cleaning up an unused plugin ####
If you have any unused plugin, you can remove it by using the PluginClean command.
:PluginClean
![](https://farm4.staticflickr.com/3814/19433047689_17d9822af6_c.jpg)
#### Searching for a plugin ####
If you want to install a plugin from the provided plugin list, the search functionality can be useful.
:PluginSearch <text-list>
![](https://farm1.staticflickr.com/541/19593459846_75b003443d_c.jpg)
While searching, you can install, clean, re-search, or reload the same list in the interactive split. Installing plugins won't load them automatically. To load them, add them to your .vimrc file.
### Conclusion ###
Vim is an amazing tool. It can not only be a great default text editor that can make your work flow faster and smoother, but also be turned into an IDE for almost any programming language available. Vundle can be a big help in personalizing the powerful Vim environment quickly and easily.
Note that there are several sites that allow you to find the right Vim plugins for you. Always check [http://www.vim-scripts.org][3], GitHub, or [http://www.vimawesome.com][4] for new scripts or plugins. Also remember to use the help provided for your plugin.
Keep rocking with your favorite text editor!
--------------------------------------------------------------------------------
via: http://xmodulo.com/manage-vim-plugins.html
作者:[Christopher Valerio][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/valerio
[1]:https://github.com/VundleVim/Vundle.vim
[2]:http://ask.xmodulo.com/install-git-linux.html
[3]:http://www.vim-scripts.org/
[4]:http://www.vimawesome.com/

View File

@ -0,0 +1,54 @@
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 1 - Introduction
================================================================================
*Author's Note: If by some miracle you managed to click this article without reading the title then I want to reiterate something... This is an editorial. These are my opinions. They are not representative of Phoronix or Michael; these are my own thoughts.*
Additionally, yes... This is quite possibly a flame-bait article. I hope the community is better than that, because I do want to start a discussion and give feedback to both the KDE and Gnome communities. For that reason, when I point out what I see as a flaw, I will try to be specific and direct so that any discussion can be equally specific and direct. For the record: The alternative title for this article was "Death By A Thousand [Paper Cuts][1]".
Now, with that out of the way... Onto the article.
![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920)
When I sent the [Fedora 22 KDE Review][2] off to Michael I did it with a bit of a bad taste in my mouth. It wasn't because I didn't like KDE, or hadn't been enjoying Fedora, far from it. In fact, I started to transition my T450s over to Arch Linux but quickly decided against that, as I enjoyed the level of convenience that Fedora brings to me for many things.
The reason I had a bad taste in my mouth was because the Fedora developers put a lot of time and effort into their "Workstation" product and I wasn't seeing any of it. I wasn't using Fedora the way the main developers had intended it to be used and therefore wasn't getting the "Fedora Experience." It felt like someone reviewing Ubuntu by using Kubuntu, using a Hackintosh to review OS X, or reviewing Gentoo by using Sabayon. A lot of readers in the forums bash on Michael for reviewing distributions in their default configurations-- myself included. While I still do believe that reviews should be done under 'real-world' configurations, I do see the value in reviewing something in the condition it was given to you-- for better or worse.
It was with that attitude in mind that I decided to take a dip in the Gnome pool.
I do, however, need to add one more disclaimer... I am looking at KDE and Gnome as they are packaged in Fedora. OpenSUSE, Kubuntu, Arch, etc, might all have different implementations of each desktop that will change whether my specific 'pain points' are relevant to your distribution. Furthermore, despite the title, this is going to be a VERY KDE heavy article. I called the article what I did because it was actually USING Gnome that made me realize how many "paper cuts" KDE actually has.
### Login Screen ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920)
I normally don't mind distributions shipping distro-specific themes, because most of them make the desktop look nicer. I finally found my exception.
First impressions count for a lot, right? Well, GDM definitely gets this one right. The login screen is incredibly clean with consistent design language through every single part of it. The use of common-language icons instead of text boxes helps in that regard.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920)
That is not to say that the Fedora 22 KDE login screen-- now SDDM rather than KDM-- looks 'bad' per se, but it's definitely more jarring.
Where's the fault? The top bar. Look at the Gnome screenshot-- you select a user and you get a tiny little gear symbol for selecting what session you want to log into. The design is clean, it gets out of your way, you could honestly miss it completely if you weren't paying attention. Now look at the blue KDE screenshot, the bar doesn't look like it was even rendered using the same widgets, and its entire placement feels like an afterthought of "Well shit, we need to throw this option somewhere..."
The same can be said for the Reboot and Shutdown options in the top right. Why not just a power button that opens a drop-down menu with Reboot, Shutdown, and Suspend? Having the buttons be different colors than the background certainly makes them stick out and be noticeable... but I don't think in a good way. Again, they feel like an afterthought.
GDM is also far more useful from a practical standpoint. Look again along the top row: the time is listed, there's a volume control so that if you are trying to be quiet you can mute all sounds before you even log in, and there's an accessibility button for things like high contrast, zooming, text to speech, etc, all available via simple toggle buttons.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920)
Swap it to upstream's Breeze theme and... suddenly most of my complaints are fixed. Common-language icons, everything is in the center of the screen, but the less important stuff is off to the sides. This creates a nice harmony between the top and bottom of the screen since they are equally empty. You still have a text box for the session switcher, but I can forgive that since the power buttons are now common-language icons. The current time is available, which is a nice touch, as is a battery life indicator. Sure, Gnome still has a few nice additions, such as the volume applet and the accessibility buttons, but Breeze is a step up from Fedora's KDE theme.
Go to Windows (pre-Windows 8 & 10...) or OS X and you will see similar things: very clean, get-out-of-your-way lock screens and login screens that are devoid of text boxes or other widgets that distract the eye. It's a design that works and that is non-distracting. Fedora... Ship Breeze by default. The VDG got the design of the Breeze theme right. Don't mess it up.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts
[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1

View File

@ -0,0 +1,31 @@
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 2 - The GNOME Desktop
================================================================================
### The Desktop ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920)
I spent the first five days of my week logging into Gnome manually-- not turning on automatic login. On the night of the fifth day I got annoyed with having to log in by hand and so I went into the User Manager and turned on automatic login. The next time I logged in I got a prompt: "Your keychain was not unlocked. Please enter your password to unlock your keychain." That was when I realized something... Gnome had been automatically unlocking my keychain-- my wallet, in KDE speak-- every time I logged in via GDM. It was only when I bypassed GDM's login that Gnome had to step in and make me do it manually.
Now, I am under the personal belief that if you enable automatic login then your keychain should be unlocked automatically as well-- otherwise what's the point? Either way you still have to type in your password, and at least if you hit the GDM login screen you have a chance to change your session if you want to.
But, regardless of that, it was at that moment that I realized it was such a simple thing that made the desktop feel so much more like it was working WITH ME. When I log into KDE via SDDM? Before the splash screen is even finished loading, there is a window popping up over top of the splash animation-- thereby disrupting the splash screen-- prompting me to unlock my KDE wallet or GPG keyring.
If a wallet doesn't exist already you get prompted to create one-- why couldn't one have been created for me at user creation?-- and then get asked to pick between two encryption methods, where one is even implied to be insecure (Blowfish). Why are you letting me pick something insecure for my security? Author's Note: If you install the actual KDE spin and don't just install KDE after-the-fact, then a wallet is created for you at user creation. Unfortunately it's not unlocked for you automatically, and it seems to use the older Blowfish method rather than the newer, more secure GPG method.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920)
If you DO pick the secure one (GPG) then it tries to load a GPG key... which I hope you had already created, because if you don't, you get yelled at. How do you create one? Well, it doesn't offer to make one for you... nor does it tell you how... and if you do manage TO figure out that you are supposed to use KGpg to create the key, then you get taken through several menus and prompts that are nothing but confusing to new users. Why are you asking me where the GPG binary is located? How on earth am I supposed to know? Can't you just use the most recent one if there's more than one? And if there IS only one then, I ask again, why are you prompting me?
Why are you asking me what key size and encryption algorithm to use? You select 2048 and RSA/RSA by default, so why not just use those? If you want to have those options available, then throw them under the "Expert mode" button that is right there. This isn't just about having configuration options available, it's about needless things that get thrown in the user's face by default. This is going to be a theme for the rest of the article... KDE needs better sane defaults. Configuration is great, I love the configuration I get with KDE, but it needs to learn when to and when not to prompt. It also needs to learn that "Well, it's configurable" is no excuse for bad defaults. Defaults are what users see initially, and bad defaults will lose users.
Let's move on from the key chain issue though, because I think I made my point.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,61 @@
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 3 - GNOME Applications
================================================================================
### Applications ###
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920)
This is the one area where things are basically a wash. Each environment has a few applications that are really nice, and a few that are not so great. Once again though, Gnome gets the little things right in a way that KDE completely misses. None of KDE's applications are bad or broken, that's not what I'm saying. They function. But that's about it. To use an analogy: they passed the test, but they sure didn't get anywhere close to 100% on it.
Gnome on the left, KDE on the right. Dragon performs perfectly fine; it has clearly marked buttons for playing a file, URL, or a disc, just as you can do under Gnome Videos... but Gnome takes it one extra little step further in the name of convenience and user friendliness: it shows all the videos detected on your system by default, without you having to do anything. KDE has Baloo-- just as it had Nepomuk before that-- why not use them? They've got a list of video files that are freely accessible... but don't make use of the feature.
Moving on... Music Players.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920)
Both of these applications, Rhythmbox on the left and Amarok on the right, were opened up and then a screenshot was immediately taken; nothing was clicked or altered. See the difference? Rhythmbox looks like a music player. It's direct, there are obvious ways to sort the results, it knows what it is trying to be and what its job is: to play music.
Amarok feels like one of those tech demos, or library demos, where someone puts every option and extension they possibly can inside one application in order to show them off-- it's never something that gets shipped as production, it's just there to show off bits and pieces. And that's exactly what Amarok feels like: it's someone trying to show off every single possible cool thing they can shove into a media player without ever stopping to think "Wait, what were we trying to write again? An app to play music?"
Just look at the default layout. What is front and center for the user? A visualizer and Wikipedia integration-- the largest and most prominent column on the page. What's the second largest? Playlist list. Third largest, aka smallest? The actual music listing. How on earth are these sane defaults for a core application?
Software Managers! Something that has seen a lot of push in recent years and will likely only see a bigger push in the months to come. Unfortunately, it's another area where KDE was so close... and then fell on its face right at the finish line.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920)
Gnome Software is probably my new favorite software center, minus one gripe which I will get to in a bit. Muon, I wanted to like you. I really did. But you are a design nightmare. When the VDG was drawing up plans for you (mockup below), you looked pretty slick. Good use of white space, clean design, nice category listing, your whole not-being-split-into-two-applications.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920)
Then someone got around to coding you and doing your actual UI, and I can only guess they were drunk while they did it.
Let's look at Gnome Software. What's smack dab in the middle? The application, its screenshots, its description, etc. What's smack dab in the middle of Muon? A gigantic waste of white space. Gnome Software also includes the lovely convenience feature of putting a "Launch" button right there in case you already have an application installed. Convenience and ease of use are important, people. Honestly, JUST having things in Muon be center-aligned would probably make things look better already.
What's along the top edge of Gnome Software, like a tab listing? All Software, Installed, Updates. Clean language, direct, to the point. Muon? Well, we have "Discover", which works okay as far as language goes, and then we have Installed, and then nothing. Where's updates?
Well.. the developers decided to split updates off into its own application, thus requiring you to open two applications to handle your software-- one to install it, and one to update it-- going against every Software Center paradigm that has ever existed since the Synaptic graphical package manager.
I'm not going to show it in a screenshot just because I don't want to have to clean up my system afterwards, but if you go into Muon and start installing something, the way it shows progress is by adding a little tab to the bottom of your screen with the application's name. That tab doesn't go away when the application is done installing either, so if you're installing a lot of applications at a single time then you'll just slowly accumulate tabs along the bottom that you then have to go through and clean up manually, because if you don't then they grow off the screen and you have to swipe through them all to get to the most recent ones. Think: opening 50 tabs in Firefox. Major annoyance, major inconvenience.
I did say I would bash on Gnome a bit, and I meant it. Muon does get one thing very right that Gnome Software doesn't. Under the settings bar Muon has an option for "Show Technical Packages" aka: compilers, software libraries, non-graphical applications, applications without AppData, etc. Gnome doesn't. If you want to install any of those you have to drop down to the terminal. I think that's wrong. I certainly understand wanting to push AppData but I think they pushed it too soon. What made me realize Gnome didn't have this setting was when I went to install PowerTop and couldn't get Gnome to display it-- no AppData, no "Show Technical Packages" setting.
Doubly unfortunate is the fact that you can't "just use apper" if you're under KDE since...
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920)
Apper's support for installing local packages has been broken since Fedora 19 or so, almost two years. I love the attention to detail and quality.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,51 @@
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 4 - GNOME Settings
================================================================================
### Settings ###
There are a few specific KDE Control modules that I am going to pick at, mostly because they are so laughably horrible compared to their Gnome counterparts that it's honestly pathetic.
First one up? Printers.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920)
Gnome is on the left, KDE is on the right. You know what the difference is between the printer applet on the left, and the one on the right? When I opened up Gnome Control Center and hit "Printers" the applet popped up and nothing happened. When I opened up KDE System Settings and hit "Printers" I got a password prompt. Before I was even allowed to LOOK at the printers I had to give up ROOT'S password.
Let me just reiterate that. In this, the days of PolicyKit and Logind, I am still being asked for root's password for what should be a sudo operation. I didn't even SET UP root's password when I installed the system. I had to drop down to Konsole and run 'sudo passwd root' so that I could GIVE root a password so that I could go back into System Settings' printer applet and then give up root's password to even LOOK at what printers were available. Once I did that, I got prompted for root's password AGAIN when I hit "Add Printer", then I got prompted for root's password AGAIN after I went through and selected a printer and driver. Three times I got asked for ROOT'S password just to add a printer to the system.
When I added a printer under Gnome I didn't get prompted for my SUDO password until I hit "Unlock" in the printer applet. I got asked once, then I never got asked again. KDE, I am begging you... Adopt Gnome's "Unlock" methodology. Do not prompt for a password until you really need one. Furthermore, whatever library is out there that allows KDE applications to bypass PolicyKit / Logind (if it's available) and prompt directly for root... Bin that code. If this was a multi-user system, I'd either have to give up root's password, or be there every second of every day in order to put it in any time a user might have to update, change, or add a new printer. Both options are completely unacceptable.
One more thing...
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920)
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920)
Question to the forums: What looks cleaner to you? I had this realization when I was writing this article: Gnome's applet makes it very clear where any additional printers are going to go, they set aside a column on the left to list them. Before I added a second printer to KDE, and it suddenly grew a left side column, I had this nightmare-image in my head of the applet just shoving another icon into the screen and them being listed out like preview images in a folder of pictures. I was pleasantly surprised to see that I was wrong but the fact that the applet just 'grew' another column that didn't exist before and drastically altered its presentation is not really 'good' either. It's a design that's confusing, shocking, and non-intuitive.
Enough about printers though... Next KDE System Setting that is up for my public stoning? Multimedia, Aka Phonon.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920)
As always, Gnome's on the left, KDE is on the right. Let's just run through the Gnome settings first... The eyes go left to right, top to bottom, right? So let's do the same. First up: the volume control slider. The blue hint against the empty bar, with 100% clearly marked, removes all confusion about which way is "volume up." Immediately after the slider is an easy On/Off toggle that functions as a mute. Points to Gnome for remembering what the volume was set to BEFORE I muted sound, and returning to that same level AFTER I press volume-up to un-mute. Kmixer, you amnesiac piece of crap, I wish I could say as much about you.
Moving on! Tabbed options for Output, Input and Applications? With per application volume controls within easy reach? Gnome I love you more and more with every passing second. Balance options, sound profiles, and a clearly marked "Test Speakers" option.
I'm not sure how this could have been implemented in a cleaner, more concise way. Yes, it's just a Gnome-ized Pavucontrol but I think that's the point. Pavucontrol got it mostly right to begin with, the Sound applet in Gnome Control Center just refines it slightly to make it even closer to perfect.
Phonon, you're up. And let me start by saying: What the fsck am I looking at? -I- get that I am looking at the priority list for the audio devices on the system, but the way it is presented is a bit of a nightmare. Also where are the things the user probably cares about? A priority list is a great thing to have, it SHOULD be available, but it's something the user messes with once or twice and then never touches again. It's not important, or common, enough to warrant being front and center. Where's the volume slider? Where's per application controls? The things that users will be using more frequently? Well.. those are under Kmix, a separate program, with its own settings and configuration... not under the System Settings... which kind of makes System Settings a bit of a misnomer. And in that same vein, Let's hop over to network settings.
![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920)
Presented above is the Gnome Network Settings. KDE's isn't included because of the reason I'm about to hit on. If you go to KDE's System Settings and hit any of the three options under the "Network" section you get tons of options: Bluetooth settings, default username and password for Samba shares (Seriously, "Connectivity" only has 2 options: Username and password for SMB shares. How the fsck does THAT deserve the all-inclusive title "Connectivity"?), controls for Browser Identification (which only works for Konqueror... a dead project), proxy settings, etc... Where are my wifi settings? They aren't there. Where are they? Well, they are in the network applet's private settings... not under Network Settings...
KDE, you're killing me. You have "System Settings" USE IT!
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,39 @@
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 5 - Conclusion
================================================================================
### User Experience and Closing Thoughts ###
When Gnome 2.x and KDE 4.x were going head to head... I jumped between the two quite happily. Some things I loved, some things I hated, but overall they were both a pleasure to use. Then Gnome 3.x came around and all of the drama with Gnome Shell. I swore off Gnome and avoided it every chance I could. It wasn't user friendly, it was non-intuitive, it broke an established paradigm in preparation for tablets taking over the world... A future that, judging from the dropping sales of tablets, will never come.
Eight releases of Gnome 3 later and the unimaginable happened. Gnome got user friendly. Gnome got intuitive. Is it perfect? Of course not. I still hate the paradigm it tries to push, I hate how it tries to force a work flow onto me, but both of those things can be gotten used to with time and patience. Once you have managed to look past Gnome Shell's alien appearance and you start interacting with it and the other parts of Gnome (Control Center especially) you see what Gnome has definitely gotten right: the little things. The attention to detail.
People can adapt to new paradigms, people can adapt to new work flows-- the iPhone and iPad proved that-- but what will always bother them are the paper cuts.
Which brings up an important distinction between KDE and Gnome. Gnome feels like a product. It feels like a singular experience. When you use it, it feels like it is complete and that everything you need is at your fingertips. It feels like THE Linux desktop in the same way that Windows or OS X have THE desktop experience: what you need is there and it was all written by the same guys working on the same team towards the same goal. Hell, even an application prompting for sudo access feels like an intentional part of the desktop under Gnome, much the way that it is under Windows. In KDE it's just some random-looking window popup that any application could have created. It doesn't feel like a part of the system stopping and going "Hey! Something has requested administrative rights! Do you want to let it go through?" in an official capacity.
KDE doesn't feel like a cohesive experience. KDE doesn't feel like it has a direction it's moving in; it doesn't feel like a full experience. KDE feels like it's a bunch of pieces that are moving in a bunch of different directions, that just happen to have a shared toolkit beneath them. If that's what the developers are happy with, then fine, good for them, but if the developers still have the hope of offering the best experience possible then the little stuff needs to matter. The user experience and being intuitive need to be at the forefront of every single application, and there needs to be a vision of what KDE wants to offer -and- how it should look.
Is there anything stopping me from using Gnome Disks under KDE? Rhythmbox? Evolution? Nope. Nope. Nope. But that misses the point. Gnome and KDE both market themselves as "Desktop Environments." They are supposed to be full -environments-, which means all the pieces come and fit together, that you use that environment's tools because they are saying "We support everything you need to have a full desktop." Honestly? Only Gnome seems to fit the bill of being complete. KDE feels half-finished when it comes to the "coming together" part, let alone offering everything you need for a "full experience". There's no counterpart to Gnome Disks-- kpartitionmanager prompts for root. No "First Time User" run-through, and it only just now got a user manager in Kubuntu. Hell, Gnome even provides Maps, Notes, Calendar and Clock applications. Do all of these applications matter 100%? No, of course not. But the fact that Gnome has them helps to push the idea that Gnome is a full and complete experience.
My complaints about KDE are not impossible to fix, not by a long shot. But it requires people to care. It requires developers to take pride in their work beyond just function-- form counts for a whole hell of a lot. Don't take away the user's ability to configure things-- the lack of configuration is one of my biggest gripes with GNOME 3.x, but don't use "Well you can configure it however you want," as an excuse for not providing sane defaults. The defaults are what users are going to see, they are what the users are going to judge from the first moment they open your application. Make it a good impression.
I know the KDE developers know design matters; that is WHY the Visual Design Group exists. But it feels like they aren't using the VDG to its fullest. And therein lies KDE's hamartia. It's not that KDE can't be complete, it's not that it can't come together and fix the downfalls, it's just that they haven't. They aimed for the bullseye... but they missed.
And before anyone says it... don't say "Patches are welcome." Because while I can happily submit patches for the individual annoyances, more will just keep coming as developers keep on their merry way of doing things in non-intuitive ways. This isn't about Muon not being center-aligned. This isn't about Amarok having an ugly UI. This isn't about the volume and brightness pop-up notifiers taking up a large chunk of my screen real estate every time I hit my hotkeys (seriously, someone shrink those things).
This is about a mentality of apathy, about developers apparently not thinking things through when they make the UI for their applications. Everything the KDE Community does works fine. Amarok plays music. Dragon Player plays videos. Kwin / Qt & kdelibs is seemingly more power efficient than Mutter / gtk (according to my battery life times; non-scientific testing). Those things are all well and good, and important... but the presentation matters too. Arguably, the presentation matters the most, because that is what users see and interact with.
To KDE application developers... get the VDG involved. Make every single 'core' application get its design vetted and approved by the VDG; have a UI/UX expert from the VDG go through the usage patterns and usage flow of your application to make sure it's intuitive. Hell, even just posting a mock-up to the VDG forums and asking for feedback would probably get you some nice pointers for whatever application you're working on. You have this great resource there; now actually use it.
I am not trying to sound ungrateful. I love KDE, I love the work and effort that volunteers put into giving Linux users a viable desktop, and an alternative to Gnome. And it is because I care that I write this article. Because I want to see KDE excel, I want to see it go further and farther than it has before. But doing that requires work on everyone's part, and it requires that people don't hold back criticism. It requires that people are honest about their interaction with the system and where it falls apart. If we can't give direct criticism, if we can't say "This sucks!" then it will never get better.
Will I still use Gnome after this week? Probably not, no. Gnome is still trying to force a work flow on me that I don't want to follow or abide by, and I feel less productive when I'm using it because it doesn't follow my paradigm. For my friends, though, when they ask me "What desktop environment should I use?" I'm probably going to recommend Gnome, especially if they are less technical users who want things to "just work." And that is probably the most damning assessment I could make in regards to the current state of KDE.
--------------------------------------------------------------------------------
via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5
作者Eric Griffith
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

View File

@ -0,0 +1,314 @@
How to Configure Chef (server/client) on Ubuntu 14.04 / 15.04
================================================================================
Chef is a configuration management and automation tool for IT professionals that configures and manages your infrastructure, whether it is on-premises or in the cloud. It can be used to speed up application deployment and to coordinate the work of multiple system administrators and developers involving hundreds, or even thousands, of servers and applications supporting a large customer base. The key to Chef's power is that it turns infrastructure into code. Once you master Chef, you will be able to enable web-scale IT, with first-class support for managing your cloud infrastructure and easy automation of your internal deployments and end users' systems.
Here are the major components of Chef that we are going to set up and configure in this tutorial.
![](http://blog.linoxide.com/wp-content/uploads/2015/07/chef.png)

*Chef components*
### Chef Prerequisites and Versions ###
We are going to set up the Chef configuration management system in the following basic environment.
注:表格
<table width="701" style="height: 284px;">
<tbody>
<tr>
<td width="660" colspan="2"><strong>Chef, Configuration Management Tool</strong></td>
</tr>
<tr>
<td width="220"><strong>Base Operating System</strong></td>
<td width="492">Ubuntu 14.04.1 LTS&nbsp;(x86_64)</td>
</tr>
<tr>
<td width="220"><strong>Chef Server</strong></td>
<td width="492">Version 12.1.0</td>
</tr>
<tr>
<td width="220"><strong>Chef Manage</strong></td>
<td width="492">Version 1.17.0</td>
</tr>
<tr>
<td width="220"><strong>Chef Development Kit</strong></td>
<td width="492">Version 0.6.2</td>
</tr>
<tr>
<td width="220"><strong>RAM and CPU</strong></td>
<td width="492">4 GB&nbsp; , 2.0+2.0 GHZ</td>
</tr>
</tbody>
</table>
### Chef Server's Installation and Configurations ###
Chef Server is the central core component that stores the recipes as well as other configuration data, and that interacts with the workstations and nodes. Let's download the installation media by selecting the latest version of chef server from its official web link.

We will get its installation package and install it by using the following commands.
**1) Downloading Chef Server**
root@ubuntu-14-chef:/tmp# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/chef-server-core_12.1.0-1_amd64.deb
**2) To install Chef Server**
root@ubuntu-14-chef:/tmp# dpkg -i chef-server-core_12.1.0-1_amd64.deb
**3) Reconfigure Chef Server**
Now run the following command to start all of the chef server services. This step may take a few minutes to complete, as the server is composed of many different services that work together to create a functioning system.
root@ubuntu-14-chef:/tmp# chef-server-ctl reconfigure
The chef server startup command 'chef-server-ctl reconfigure' needs to be run twice, so that the installation ends with the following completion output:
Chef Client finished, 342/350 resources updated in 113.71139964 seconds
opscode Reconfigured!
**4) Reboot OS**
Once the installation is complete, reboot the operating system for best operation. Without doing this, you might get the following SSL_connect error during user creation.
ERROR: Errno::ECONNRESET: Connection reset by peer - SSL_connect
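For completeness, the reboot itself is just the standard command, run as root like the rest of this setup:

    root@ubuntu-14-chef:/tmp# reboot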
**5) Create new Admin User**
Run the following command to create a new administrator user with its profile settings. During creation, the user's RSA private key is generated automatically and should be saved to a safe location. The --filename option will save the RSA private key to the specified path.
root@ubuntu-14-chef:/tmp# chef-server-ctl user-create kashi kashi kashi kashif.fareedi@gmail.com kashi123 --filename /root/kashi.pem
### Chef Manage Setup on Chef Server ###
Chef Manage is a management console for Enterprise Chef that provides a web-based user interface for visualizing and managing nodes, data bags, roles, environments, cookbooks, and role-based access control (RBAC).
**1) Downloading Chef Manage**
Copy the link for Chef Manage from the official web site and download the chef manage package.
root@ubuntu-14-chef:~# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/opscode-manage_1.17.0-1_amd64.deb
**2) Installing Chef Manage**
Let's install it into the root user's home directory with the command below.
root@ubuntu-14-chef:~# chef-server-ctl install opscode-manage --path /root
**3) Restart Chef Manage and Server**
Once the installation is complete, we need to restart the chef manage and chef server services by executing the following commands.
root@ubuntu-14-chef:~# opscode-manage-ctl reconfigure
root@ubuntu-14-chef:~# chef-server-ctl reconfigure
### Chef Manage Web Console ###
We can access the chef manage web console from localhost as well as via its FQDN, and log in with the already created admin user account.
![chef amanage](http://blog.linoxide.com/wp-content/uploads/2015/07/5-chef-web.png)
**1) Create New Organization with Chef Manage**
You will be asked to create a new organization or to accept an invitation from an existing organization. Let's create a new organization by providing its short and full name as shown.
![Create Org](http://blog.linoxide.com/wp-content/uploads/2015/07/7-create-org.png)
**2) Create New Organization with Command line**
We can also create a new organization from the command line by executing the following command.
root@ubuntu-14-chef:~# chef-server-ctl org-create linux Linoxide Linux Org. --association_user kashi --filename linux.pem
### Configuration and setup of Workstation ###
Now that we have a successful installation of the chef server, we are going to set up a workstation to create and configure any recipes, cookbooks, attributes, and other changes that we want to make to our Chef configuration.
**1) Create New User and Organization on Chef Server**
In order to set up the workstation, we create a new user and an organization for it from the command line.
root@ubuntu-14-chef:~# chef-server-ctl user-create bloger Bloger Kashif bloger.kashif@gmail.com bloger123 --filename bloger.pem
root@ubuntu-14-chef:~# chef-server-ctl org-create blogs Linoxide Blogs Inc. --association_user bloger --filename blogs.pem
**2) Download Starter Kit for Workstation**
Now download and save the starter kit from the chef manage web console on the workstation; it will be used to work with the Chef server.
![Starter Kit](http://blog.linoxide.com/wp-content/uploads/2015/07/8-download-kit.png)
**3) Click to "Proceed" with starter kit download**
![starter kit](http://blog.linoxide.com/wp-content/uploads/2015/07/9-download-kit.png)
### Chef Development Kit Setup for Workstation ###
The Chef Development Kit is a software package suite with all the development tools needed to code with Chef. It combines the best-of-breed tools developed by the Chef community with the Chef Client.
**1) Downloading Chef DK**
We can download the chef development kit from its official web link; choose the required operating system to get the appropriate package.
![Chef DK](http://blog.linoxide.com/wp-content/uploads/2015/07/10-CDK.png)
Copy the link and download it with the wget command.
root@ubuntu-15-WKS:~# wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.6.2-1_amd64.deb
**2) Installing Chef Development Kit**

Install the chef development kit using the dpkg command.
root@ubuntu-15-WKS:~# dpkg -i chefdk_0.6.2-1_amd64.deb
**3) Chef DK Verification**

Verify that the client was installed properly by using the command below.
root@ubuntu-15-WKS:~# chef verify
----------
Running verification for component 'berkshelf'
Running verification for component 'test-kitchen'
Running verification for component 'chef-client'
Running verification for component 'chef-dk'
Running verification for component 'chefspec'
Running verification for component 'rubocop'
Running verification for component 'fauxhai'
Running verification for component 'knife-spork'
Running verification for component 'kitchen-vagrant'
Running verification for component 'package installation'
Running verification for component 'openssl'
..............
---------------------------------------------
Verification of component 'rubocop' succeeded.
Verification of component 'knife-spork' succeeded.
Verification of component 'openssl' succeeded.
Verification of component 'berkshelf' succeeded.
Verification of component 'chef-dk' succeeded.
Verification of component 'fauxhai' succeeded.
Verification of component 'test-kitchen' succeeded.
Verification of component 'kitchen-vagrant' succeeded.
Verification of component 'chef-client' succeeded.
Verification of component 'chefspec' succeeded.
Verification of component 'package installation' succeeded.
**Connecting to Chef Server**
We will create a .chef directory on the workstation and copy the user and organization pem files (four in total) into it from the chef server.
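Note that scp will not create the destination directory, so it should exist on the workstation first; a minimal sketch, assuming the /.chef path that the workstation prompts below use:

    root@ubuntu-15-WKS:~# mkdir /.chef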
root@ubuntu-14-chef:~# scp bloger.pem blogs.pem kashi.pem linux.pem root@172.25.10.172:/.chef/
----------
root@172.25.10.172's password:
bloger.pem 100% 1674 1.6KB/s 00:00
blogs.pem 100% 1674 1.6KB/s 00:00
kashi.pem 100% 1678 1.6KB/s 00:00
linux.pem 100% 1678 1.6KB/s 00:00
**Knife Configurations to Manage your Chef Environment**
Now create "~/.chef/knife.rb" with following content as configured in previous steps.
root@ubuntu-15-WKS:/.chef# vim knife.rb
current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "kashi"
client_key "#{current_dir}/kashi.pem"
validation_client_name "kashi-linux"
validation_key "#{current_dir}/linux.pem"
chef_server_url "https://172.25.10.173/organizations/linux"
cache_type 'BasicFile'
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
cookbook_path ["#{current_dir}/../cookbooks"]
Create "~/cookbooks" folder for cookbooks as specified knife.rb file.
root@ubuntu-15-WKS:/# mkdir cookbooks
**Testing with Knife Configurations**
Run "knife user list" and "knife client list" commands to verify whether knife configuration is working.
root@ubuntu-15-WKS:/.chef# knife user list
You might get the following error the first time you run this command. This occurs because we do not yet have our Chef server's SSL certificate on our workstation.
ERROR: SSL Validation failure connecting to host: 172.25.10.173 - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
ERROR: Could not establish a secure connection to the server.
Use `knife ssl check` to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
`knife ssl fetch` to make knife trust the server's certificates.
To recover from the above error, run the following command to fetch the SSL certificates, and then run the knife user and client list commands again; this time they should succeed.
root@ubuntu-15-WKS:/.chef# knife ssl fetch
WARNING: Certificates from 172.25.10.173 will be fetched and placed in your trusted_cert
directory (/.chef/trusted_certs).
Knife has no means to verify these are the correct certificates. You should
verify the authenticity of these certificates after downloading.
Adding certificate for ubuntu-14-chef.test.com in /.chef/trusted_certs/ubuntu-14-chef_test_com.crt
Now, after fetching the SSL certs with the above command, let's run the below command again.
root@ubuntu-15-WKS:/.chef# knife client list
kashi-linux
### New Node Configuration to interact with chef-server ###
Nodes run chef-client, which performs all the infrastructure automation. So, having configured the chef-server and knife workstation combination, it's time to begin adding new servers to our chef environment by configuring a new node to interact with the chef-server.

To configure a new node to work with the chef server, use the below command.
root@ubuntu-15-WKS:~# knife bootstrap 172.25.10.170 --ssh-user root --ssh-password kashi123 --node-name mydns
----------
Doing old-style registration with the validation key at /.chef/linux.pem...
Delete your validation key in order to use your user credentials instead
Connecting to 172.25.10.170
172.25.10.170 Installing Chef Client...
172.25.10.170 --2015-07-04 22:21:16-- https://www.opscode.com/chef/install.sh
172.25.10.170 Resolving www.opscode.com (www.opscode.com)... 184.106.28.91
172.25.10.170 Connecting to www.opscode.com (www.opscode.com)|184.106.28.91|:443... connected.
172.25.10.170 HTTP request sent, awaiting response... 200 OK
172.25.10.170 Length: 18736 (18K) [application/x-sh]
172.25.10.170 Saving to: STDOUT
172.25.10.170
100%[======================================>] 18,736 --.-K/s in 0s
172.25.10.170
172.25.10.170 2015-07-04 22:21:17 (200 MB/s) - written to stdout [18736/18736]
172.25.10.170
172.25.10.170 Downloading Chef 12 for ubuntu...
172.25.10.170 downloading https://www.opscode.com/chef/metadata?v=12&prerelease=false&nightlies=false&p=ubuntu&pv=14.04&m=x86_64
172.25.10.170 to file /tmp/install.sh.26024/metadata.txt
172.25.10.170 trying wget...
Afterwards, we can see the newly created node in the knife node list, and a new client in the client list as well, since bootstrapping also creates a new client along with the node.
root@ubuntu-15-WKS:~# knife node list
mydns
Similarly, we can add any number of nodes to our chef infrastructure by providing their SSH credentials to the same knife bootstrap command, for example with a loop like the one sketched below.
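This is a minimal sketch, not part of the original walkthrough; the IP addresses and node names are hypothetical placeholders, and it assumes the same root password used above:

    # bootstrap several nodes in one pass (hypothetical IPs and node names)
    for ip in 172.25.10.174 172.25.10.175; do
        knife bootstrap "$ip" --ssh-user root --ssh-password kashi123 --node-name "node-$ip"
    done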
### Conclusion ###
In this detailed article we learned the basics of the Chef configuration management tool, got an overview of its components, and walked through its installation and configuration. We hope you have enjoyed learning about the installation and configuration of the Chef server along with its workstation and client nodes.
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/install-configure-chef-ubuntu-14-04-15-04/
作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/kashifs/

View File

@ -0,0 +1,237 @@
How to collect NGINX metrics - Part 2
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png)
### How to get the NGINX metrics you need ###
How you go about capturing metrics depends on which version of NGINX you are using, as well as which metrics you wish to access. (See [the companion article][1] for an in-depth exploration of NGINX metrics.) Free, open-source NGINX and the commercial product NGINX Plus both have status modules that report metrics, and NGINX can also be configured to report certain metrics in its logs:
注:表格
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: center;">
<col style="text-align: center;">
<col style="text-align: center;"> </colgroup>
<thead>
<tr>
<th rowspan="2" style="text-align: left;">Metric</th>
<th colspan="3" style="text-align: center;">Availability</th>
</tr>
<tr>
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source">NGINX (open-source)</a></th>
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus">NGINX Plus</a></th>
<th style="text-align: center;"><a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#logs">NGINX logs</a></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">accepts / accepted</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">handled</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">dropped</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">active</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">requests / total</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: left;">4xx codes</td>
<td style="text-align: center;"></td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
</tr>
<tr>
<td style="text-align: left;">5xx codes</td>
<td style="text-align: center;"></td>
<td style="text-align: center;">x</td>
<td style="text-align: center;">x</td>
</tr>
<tr>
<td style="text-align: left;">request time</td>
<td style="text-align: center;"></td>
<td style="text-align: center;"></td>
<td style="text-align: center;">x</td>
</tr>
</tbody>
</table>
#### Metrics collection: NGINX (open-source) ####
Open-source NGINX exposes several basic metrics about server activity on a simple status page, provided that you have the HTTP [stub status module][2] enabled. To check if the module is already enabled, run:
nginx -V 2>&1 | grep -o with-http_stub_status_module
The status module is enabled if you see with-http_stub_status_module as output in the terminal.
If that command returns no output, you will need to enable the status module. You can use the --with-http_stub_status_module configuration parameter when [building NGINX from source][3]:
./configure \
… \
--with-http_stub_status_module
make
sudo make install
After verifying the module is enabled or enabling it yourself, you will also need to modify your NGINX configuration to set up a locally accessible URL (e.g., /nginx_status) for the status page:
server {
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
}
Note: The server blocks of the NGINX config are usually found not in the master configuration file (e.g., /etc/nginx/nginx.conf) but in supplemental configuration files that are referenced by the master config. To find the relevant configuration files, first locate the master config by running:
nginx -t
Open the master configuration file listed, and look for lines beginning with include near the end of the http block, such as:
include /etc/nginx/conf.d/*.conf;
In one of the referenced config files you should find the main server block, which you can modify as above to configure NGINX metrics reporting. After changing any configurations, reload the configs by executing:
nginx -s reload
Now you can view the status page to see your metrics:
Active connections: 24
server accepts handled requests
1156958 1156958 4491319
Reading: 0 Writing: 18 Waiting: 6
Note that if you are trying to access the status page from a remote machine, you will need to whitelist the remote machine's IP address in your status configuration, just as 127.0.0.1 is whitelisted in the configuration snippet above.
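For example, the status location block extended for remote access might look like the following; the addresses here are placeholders to replace with your own monitoring host or subnet:

    server {
        location /nginx_status {
            stub_status on;
            access_log off;
            allow 127.0.0.1;
            allow 192.0.2.10;     # a single remote monitoring host (placeholder)
            allow 10.0.0.0/24;    # or an entire subnet (placeholder)
            deny all;
        }
    }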
The NGINX status page is an easy way to get a quick snapshot of your metrics, but for continuous monitoring you will need to automatically record that data at regular intervals. Parsers for the NGINX status page already exist for monitoring tools such as [Nagios][4] and [Datadog][5], as well as for the statistics collection daemon [collectD][6].
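If you want a quick, makeshift recorder before adopting one of those tools, a small shell loop is enough; this sketch assumes the locally accessible /nginx_status URL configured above and appends one CSV row per sample:

    # log "timestamp,accepts,handled,requests" every 10 seconds
    while true; do
        counters=$(curl -s http://127.0.0.1/nginx_status | awk 'NR==3 {print $1","$2","$3}')
        echo "$(date +%s),$counters" >> nginx_status.csv
        sleep 10
    done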
#### Metrics collection: NGINX Plus ####
The commercial NGINX Plus provides [many more metrics][7] through its ngx_http_status_module than are available in open-source NGINX. Among the additional metrics exposed by NGINX Plus are bytes streamed, as well as information about upstream systems and caches. NGINX Plus also reports counts of all HTTP status code types (1xx, 2xx, 3xx, 4xx, 5xx). A sample NGINX Plus status board is available [here][8].
![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png)
*Note: the “Active” connections on the NGINX Plus status dashboard are defined slightly differently than the Active state connections in the metrics collected via the open-source NGINX stub status module. In NGINX Plus metrics, Active connections do not include connections in the Waiting state (aka Idle connections).*
NGINX Plus also reports [metrics in JSON format][9] for easy integration with other monitoring systems. With NGINX Plus, you can see the metrics and health status [for a given upstream grouping of servers][10], or drill down to get a count of just the response codes [from a single server][11] in that upstream:
{"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055}
To enable the NGINX Plus metrics dashboard, you can add a status server block inside the http block of your NGINX configuration. ([See the section above][12] on collecting metrics from open-source NGINX for instructions on locating the relevant config files.) For example, to set up a status dashboard at http://your.ip.address:8080/status.html and a JSON interface at http://your.ip.address:8080/status, you would add the following server block:
server {
listen 8080;
root /usr/share/nginx/html;
location /status {
status;
}
location = /status.html {
}
}
The status pages should be live once you reload your NGINX configuration:
nginx -s reload
The official NGINX Plus docs have [more details][13] on how to configure the expanded status module.
#### Metrics collection: NGINX logs ####
NGINX's [log module][14] writes configurable access logs to a destination of your choosing. You can customize the format of your logs and the data they contain by [adding or subtracting variables][15]. The simplest way to capture detailed logs is to add the following line in the server block of your config file (see [the section][16] on collecting metrics from open-source NGINX for instructions on locating your config files):
access_log logs/host.access.log combined;
After changing any NGINX configurations, reload the configs by executing:
nginx -s reload
The “combined” log format, included by default, captures [a number of key data points][17], such as the actual HTTP request and the corresponding response code. In the example logs below, NGINX logged a 200 (success) status code for a request for /index.html and a 404 (not found) error for the nonexistent /fail.
127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari 537.36"
127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
You can log request processing time as well by adding a new log format to the http block of your NGINX config file:
log_format nginx '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent $request_time '
'"$http_referer" "$http_user_agent"';
And by adding or modifying the access_log line in the server block of your config file:
access_log logs/host.access.log nginx;
After reloading the updated configs (by running nginx -s reload), your access logs will include response times, as seen below. The units are seconds, with millisecond resolution. In this instance, the server received a request for /big.pdf, returning a 206 (success) status code after sending 33973115 bytes. Processing the request took 0.202 seconds (202 milliseconds):
127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
You can use a variety of tools and services to parse and analyze NGINX logs. For instance, [rsyslog][18] can monitor your logs and pass them to any number of log-analytics services; you can use a free, open-source tool such as [logstash][19] to collect and analyze logs; or you can use a unified logging layer such as [Fluentd][20] to collect and parse your NGINX logs.
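As a quick sanity check before wiring up a full pipeline, a one-liner can summarize the logs directly; this sketch assumes the custom 'nginx' log format defined above, in which $request_time is the 11th whitespace-separated field:

    # average request processing time across the access log
    awk '{ sum += $11; n++ } END { if (n) printf "avg request time: %.3f s over %d requests\n", sum/n, n }' logs/host.access.log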
### Conclusion ###
Which NGINX metrics you monitor will depend on the tools available to you, and whether the insight provided by a given metric justifies the overhead of monitoring that metric. For instance, is measuring error rates important enough to your organization to justify investing in NGINX Plus or implementing a system to capture and analyze logs?
At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][21], and get started right away with a [free trial of Datadog][22].
----------
Source Markdown for this post is available [on GitHub][23]. Questions, corrections, additions, etc.? Please [let us know][24].
--------------------------------------------------------------------------------
via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
作者K Young
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
[3]:http://wiki.nginx.org/InstallOptions
[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx
[5]:http://docs.datadoghq.com/integrations/nginx/
[6]:https://collectd.org/wiki/index.php/Plugin:nginx
[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data
[8]:http://demo.nginx.com/status.html
[9]:http://demo.nginx.com/status
[10]:http://demo.nginx.com/status/upstreams/demoupstreams
[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses
[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example
[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html
[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
[18]:http://www.rsyslog.com/
[19]:https://www.elastic.co/products/logstash
[20]:http://www.fluentd.org/
[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up
[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md
[24]:https://github.com/DataDog/the-monitor/issues

View File

@ -0,0 +1,150 @@
How to monitor NGINX with Datadog - Part 3
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)
If you've already read [our post on monitoring NGINX][1], you know how much information you can gain about your web environment from just a handful of metrics. And you've also seen just how easy it is to start collecting metrics from NGINX on an ad hoc basis. But to implement comprehensive, ongoing NGINX monitoring, you will need a robust monitoring system to store and visualize your metrics, and to alert you when anomalies happen. In this post, we'll show you how to set up NGINX monitoring in Datadog so that you can view your metrics on customizable dashboards like this:
![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png)
Datadog allows you to build graphs and alerts around individual hosts, services, processes, metrics—or virtually any combination thereof. For instance, you can monitor all of your NGINX hosts, or all hosts in a certain availability zone, or you can monitor a single key metric being reported by all hosts with a certain tag. This post will show you how to:
- Monitor NGINX metrics on Datadog dashboards, alongside all your other systems
- Set up automated alerts to notify you when a key metric changes dramatically
### Configuring NGINX ###
To collect metrics from NGINX, you first need to ensure that NGINX has an enabled status module and a URL for reporting its status metrics. Step-by-step instructions [for configuring open-source NGINX][2] and [NGINX Plus][3] are available in our companion post on metric collection.
### Integrating Datadog and NGINX ###
#### Install the Datadog Agent ####
The Datadog Agent is [the open-source software][4] that collects and reports metrics from your hosts so that you can view and monitor them in Datadog. Installing the agent usually takes [just a single command][5].
As soon as your Agent is up and running, you should see your host reporting metrics [in your Datadog account][6].
![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png)
#### Configure the Agent ####
Next you'll need to create a simple NGINX configuration file for the Agent. The location of the Agent's configuration directory for your OS can be found [here][7].
Inside that directory, at conf.d/nginx.yaml.example, you will find [a sample NGINX config file][8] that you can edit to provide the status URL and optional tags for each of your NGINX instances:
init_config:
instances:
- nginx_status_url: http://localhost/nginx_status/
tags:
- instance:foo
Once you have supplied the status URLs and any tags, save the config file as conf.d/nginx.yaml.
#### Restart the Agent ####
You must restart the Agent to load your new configuration file. The restart command varies somewhat by platform—see the specific commands for your platform [here][9].
#### Verify the configuration settings ####
To check that Datadog and NGINX are properly integrated, run the Datadog info command. The command for each platform is available [here][10].
If the configuration is correct, you will see a section like this in the output:
Checks
======
[...]
nginx
-----
- instance #0 [OK]
- Collected 8 metrics & 0 events
#### Install the integration ####
Finally, switch on the NGINX integration inside your Datadog account. It's as simple as clicking the “Install Integration” button under the Configuration tab in the [NGINX integration settings][11].
![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png)
### Metrics! ###
Once the Agent begins reporting NGINX metrics, you will see [an NGINX dashboard][12] among your list of available dashboards in Datadog.
The basic NGINX dashboard displays a handful of graphs encapsulating most of the key metrics highlighted [in our introduction to NGINX monitoring][13]. (Some metrics, notably request processing time, require log analysis and are not available in Datadog.)
You can easily create a comprehensive dashboard for monitoring your entire web stack by adding additional graphs with important metrics from outside NGINX. For example, you might want to monitor host-level metrics on your NGINX hosts, such as system load. To start building a custom dashboard, simply clone the default NGINX dashboard by clicking on the gear near the upper right of the dashboard and selecting “Clone Dash”.
![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png)
You can also monitor your NGINX instances at a higher level using Datadog's [Host Maps][14]—for instance, color-coding all your NGINX hosts by CPU usage to identify potential hotspots.
![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png)
### Alerting on NGINX metrics ###
Once Datadog is capturing and visualizing your metrics, you will likely want to set up some monitors to automatically keep tabs on your metrics—and to alert you when there are problems. Below we'll walk through a representative example: a metric monitor that alerts on sudden drops in NGINX throughput.
#### Monitor your NGINX throughput ####
Datadog metric alerts can be threshold-based (alert when the metric exceeds a set value) or change-based (alert when the metric changes by a certain amount). In this case we'll take the latter approach, alerting when our incoming requests per second drop precipitously. Such drops are often indicative of problems.
1. **Create a new metric monitor**. Select “New Monitor” from the “Monitors” dropdown in Datadog. Select “Metric” as the monitor type.
![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png)
2. **Define your metric monitor**. We want to know when our total NGINX requests per second drop by a certain amount. So we define the metric of interest to be the sum of nginx.net.request_per_s across our infrastructure.
![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png)
3. **Set metric alert conditions**. Since we want to alert on a change, rather than on a fixed threshold, we select “Change Alert.” We'll set the monitor to alert us whenever the request volume drops by 30 percent or more. Here we use a one-minute window of data to represent the metric's value “now” and alert on the average change across that interval, as compared to the metric's value 10 minutes prior.
![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png)
4. **Customize the notification**. If our NGINX request volume drops, we want to notify our team. In this case we will post a notification in the ops team's chat room and page the engineer on call. In “Say what's happening”, we name the monitor and add a short message that will accompany the notification to suggest a first step for investigation. We @mention the Slack channel that we use for ops and use @pagerduty to [route the alert to PagerDuty][15].
![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png)
5. **Save the integration monitor**. Click the “Save” button at the bottom of the page. You're now monitoring a key NGINX [work metric][16], and your on-call engineer will be paged anytime it drops rapidly.
### Conclusion ###
In this post we've walked you through integrating NGINX with Datadog to visualize your key metrics and notify your team when your web infrastructure shows signs of trouble.
If you've followed along using your own Datadog account, you should now have greatly improved visibility into what's happening in your web environment, as well as the ability to create automated monitors tailored to your environment, your usage patterns, and the metrics that are most valuable to your organization.
If you don't yet have a Datadog account, you can sign up for [a free trial][17] and start monitoring your infrastructure, your applications, and your services today.
----------
Source Markdown for this post is available [on GitHub][18]. Questions, corrections, additions, etc.? Please [let us know][19].
------------------------------------------------------------
via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
作者K Young
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus
[4]:https://github.com/DataDog/dd-agent
[5]:https://app.datadoghq.com/account/settings#agent
[6]:https://app.datadoghq.com/infrastructure
[7]:http://docs.datadoghq.com/guides/basic_agent_usage/
[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example
[9]:http://docs.datadoghq.com/guides/basic_agent_usage/
[10]:http://docs.datadoghq.com/guides/basic_agent_usage/
[11]:https://app.datadoghq.com/account/settings#integrations/nginx
[12]:https://app.datadoghq.com/dash/integration/nginx
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/
[15]:https://www.datadoghq.com/blog/pagerduty/
[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up
[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md
[19]:https://github.com/DataDog/the-monitor/issues

View File

@ -0,0 +1,408 @@
How to monitor NGINX - Part 1
================================================================================
![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png)
### What is NGINX? ###
[NGINX][1] (pronounced “engine X”) is a popular HTTP server and reverse proxy server. As an HTTP server, NGINX serves static content very efficiently and reliably, using relatively little memory. As a [reverse proxy][2], it can be used as a single, controlled point of access for multiple back-end servers or for additional applications such as caching and load balancing. NGINX is available as a free, open-source product or in a more full-featured, commercially distributed version called NGINX Plus.
NGINX can also be used as a mail proxy and a generic TCP proxy, but this article does not directly address NGINX monitoring for these use cases.
### Key NGINX metrics ###
By monitoring NGINX you can catch two categories of issues: resource issues within NGINX itself, and also problems developing elsewhere in your web infrastructure. Some of the metrics most NGINX users will benefit from monitoring include **requests per second**, which provides a high-level view of combined end-user activity; **server error rate**, which indicates how often your servers are failing to process seemingly valid requests; and **request processing time**, which describes how long your servers are taking to process client requests (and which can point to slowdowns or other problems in your environment).
More generally, there are at least three key categories of metrics to watch:
- Basic activity metrics
- Error metrics
- Performance metrics
Below we'll break down a few of the most important NGINX metrics in each category, as well as metrics for a fairly common use case that deserves special mention: using NGINX Plus for reverse proxying. We will also describe how you can monitor all of these metrics with your graphing or monitoring tools of choice.
This article references metric terminology [introduced in our Monitoring 101 series][3], which provides a framework for metric collection and alerting.
#### Basic activity metrics ####
Whatever your NGINX use case, you will no doubt want to monitor how many client requests your servers are receiving and how those requests are being processed.
NGINX Plus can report basic activity metrics exactly like open-source NGINX, but it also provides a secondary module that reports metrics slightly differently. We discuss open-source NGINX first, then the additional reporting capabilities provided by NGINX Plus.
**NGINX**
The diagram below shows the lifecycle of a client connection and how the open-source version of NGINX collects metrics during a connection.
![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png)
Accepts, handled, and requests are ever-increasing counters. Active, waiting, reading, and writing grow and shrink with request volume.
注:表格
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;"><strong>Name</strong></th>
<th style="text-align: left;"><strong>Description</strong></th>
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">accepts</td>
<td style="text-align: left;">Count of client connections attempted by NGINX</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">handled</td>
<td style="text-align: left;">Count of successful client connections</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">active</td>
<td style="text-align: left;">Currently active client connections</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">dropped (calculated)</td>
<td style="text-align: left;">Count of dropped connections (accepts &ndash; handled)</td>
<td style="text-align: left;">Work: Errors*</td>
</tr>
<tr>
<td style="text-align: left;">requests</td>
<td style="text-align: left;">Count of client requests</td>
<td style="text-align: left;">Work: Throughput</td>
</tr>
<tr>
<td colspan="3" style="text-align: left;">*<em>Strictly speaking, dropped connections is <a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics">a metric of resource saturation</a>, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as <a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics">a work metric</a>.</em></td>
</tr>
</tbody>
</table>
The **accepts** counter is incremented when an NGINX worker picks up a request for a connection from the OS, whereas **handled** is incremented when the worker actually gets a connection for the request (by establishing a new connection or reusing an open one). These two counts are usually the same—any divergence indicates that connections are being **dropped**, often because a resource limit, such as NGINX's [worker_connections][4] limit, has been reached.
Once NGINX successfully handles a connection, the connection moves to an **active** state, where it remains as client requests are processed:
Active state
- **Waiting**: An active connection may also be in a Waiting substate if there is no active request at the moment. New connections can bypass this state and move directly to Reading, most commonly when using “accept filter” or “deferred accept”, in which case NGINX does not receive notice of work until it has enough data to begin working on the response. Connections will also be in the Waiting state after sending a response if the connection is set to keep-alive.
- **Reading**: When a request is received, the connection moves out of the waiting state, and the request itself is counted as Reading. In this state NGINX is reading a client request header. Request headers are lightweight, so this is usually a fast operation.
- **Writing**: After the request is read, it is counted as Writing, and remains in that state until a response is returned to the client. That means that the request is Writing while NGINX is waiting for results from upstream systems (systems “behind” NGINX), and while NGINX is operating on the response. Requests will often spend the majority of their time in the Writing state.
Often a connection will only support one request at a time. In this case, the number of Active connections == Waiting connections + Reading requests + Writing requests. However, the newer SPDY and HTTP/2 protocols allow multiple concurrent requests/responses to be multiplexed over a connection, so Active may be less than the sum of Waiting, Reading, and Writing. (As of this writing, NGINX does not support HTTP/2, but expects to add support during 2015.)
**NGINX Plus**
As mentioned above, all of open-source NGINX's metrics are available within NGINX Plus, but Plus can also report additional metrics. This section covers the metrics that are only available from NGINX Plus.
![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png)
Accepted, dropped, and total are ever-increasing counters. Active, idle, and current track the current number of connections or requests in each of those states, so they grow and shrink with request volume.
注:表格
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;"><strong>Name</strong></th>
<th style="text-align: left;"><strong>Description</strong></th>
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">accepted</td>
<td style="text-align: left;">Count of client connections attempted by NGINX</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">dropped</td>
<td style="text-align: left;">Count of dropped connections</td>
<td style="text-align: left;">Work: Errors*</td>
</tr>
<tr>
<td style="text-align: left;">active</td>
<td style="text-align: left;">Currently active client connections</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">idle</td>
<td style="text-align: left;">Client connections with zero current requests</td>
<td style="text-align: left;">Resource: Utilization</td>
</tr>
<tr>
<td style="text-align: left;">total</td>
<td style="text-align: left;">Count of client requests</td>
<td style="text-align: left;">Work: Throughput</td>
</tr>
<tr>
<td colspan="3" style="text-align: left;">*<em>Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.</em></td>
</tr>
</tbody>
</table>
The **accepted** counter is incremented when an NGINX Plus worker picks up a request for a connection from the OS. If the worker fails to get a connection for the request (by establishing a new connection or reusing an open one), then the connection is dropped and **dropped** is incremented. Ordinarily connections are dropped because a resource limit, such as NGINX Plus's [worker_connections][4] limit, has been reached.
**Active** and **idle** are the same as “active” and “waiting” states in open-source NGINX as described [above][5], with one key exception: in open-source NGINX, “waiting” falls under the “active” umbrella, whereas in NGINX Plus “idle” connections are excluded from the “active” count. **Current** is the same as the combined “reading + writing” states in open-source NGINX.
**Total** is a cumulative count of client requests. Note that a single client connection can involve multiple requests, so this number may be significantly larger than the cumulative number of connections. In fact, (total / accepted) yields the average number of requests per connection.
**Metric differences between Open-Source and Plus**
注:表格
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;">NGINX (open-source)</th>
<th style="text-align: left;">NGINX Plus</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">accepts</td>
<td style="text-align: left;">accepted</td>
</tr>
<tr>
<td style="text-align: left;">dropped must be calculated</td>
<td style="text-align: left;">dropped is reported directly</td>
</tr>
<tr>
<td style="text-align: left;">reading + writing</td>
<td style="text-align: left;">current</td>
</tr>
<tr>
<td style="text-align: left;">waiting</td>
<td style="text-align: left;">idle</td>
</tr>
<tr>
<td style="text-align: left;">active (includes “waiting” states)</td>
<td style="text-align: left;">active (excludes “idle” states)</td>
</tr>
<tr>
<td style="text-align: left;">requests</td>
<td style="text-align: left;">total</td>
</tr>
</tbody>
</table>
**Metric to alert on: Dropped connections**
The number of connections that have been dropped is equal to the difference between accepts and handled (NGINX) or is exposed directly as a standard metric (NGINX Plus). Under normal circumstances, dropped connections should be zero. If your rate of dropped connections per unit time starts to rise, look for possible resource saturation.
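For open-source NGINX, a minimal sketch of that calculation, assuming a locally accessible /nginx_status stub status URL (see the companion post on [NGINX metrics collection][6] for setup):

    # dropped = accepts - handled, read from the stub status page's counter line
    curl -s http://127.0.0.1/nginx_status | awk 'NR==3 {print "dropped connections so far:", $1 - $2}'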
![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png)
**Metric to alert on: Requests per second**
Sampling your request data (**requests** in open-source, or **total** in Plus) with a fixed time interval provides you with the number of requests you're receiving per unit of time—often minutes or seconds. Monitoring this metric can alert you to spikes in incoming web traffic, whether legitimate or nefarious, or sudden drops, which are usually indicative of problems. A drastic change in requests per second can alert you to problems brewing somewhere in your environment, even if it cannot tell you exactly where those problems lie. Note that all requests are counted the same, regardless of their URLs.
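As a rough illustration for open-source NGINX, you can sample the requests counter twice and divide by the interval; again, the local /nginx_status URL is an assumption:

    # approximate requests per second, averaged over a 60-second window
    r1=$(curl -s http://127.0.0.1/nginx_status | awk 'NR==3 {print $3}')
    sleep 60
    r2=$(curl -s http://127.0.0.1/nginx_status | awk 'NR==3 {print $3}')
    echo "requests/sec: $(( (r2 - r1) / 60 ))"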
![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png)
**Collecting activity metrics**
Open-source NGINX exposes these basic server metrics on a simple status page. Because the status information is displayed in a standardized form, virtually any graphing or monitoring tool can be configured to parse the relevant data for analysis, visualization, or alerting. NGINX Plus provides a JSON feed with much richer data. Read the companion post on [NGINX metrics collection][6] for instructions on enabling metrics collection.
#### Error metrics ####
注:表格
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;"><strong>Name</strong></th>
<th style="text-align: left;"><strong>Description</strong></th>
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
<th style="text-align: left;"><strong>Availability</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">4xx codes</td>
<td style="text-align: left;">Count of client errors</td>
<td style="text-align: left;">Work: Errors</td>
<td style="text-align: left;">NGINX logs, NGINX Plus</td>
</tr>
<tr>
<td style="text-align: left;">5xx codes</td>
<td style="text-align: left;">Count of server errors</td>
<td style="text-align: left;">Work: Errors</td>
<td style="text-align: left;">NGINX logs, NGINX Plus</td>
</tr>
</tbody>
</table>
NGINX error metrics tell you how often your servers are returning errors instead of producing useful work. Client errors are represented by 4xx status codes, server errors by 5xx status codes.
**Metric to alert on: Server error rate**
Your server error rate is equal to the number of 5xx errors divided by the total number of [status codes][7] (1xx, 2xx, 3xx, 4xx, 5xx), per unit of time (often one to five minutes). If your error rate starts to climb over time, investigation may be in order. If it spikes suddenly, urgent action may be required, as clients are likely to report errors to the end user.
![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png)
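If you log response codes (see “Collecting error metrics” below), a one-liner can approximate this rate over a whole log file; this sketch assumes the default “combined” log format, in which the status code is the ninth field:

awk '{ total++; if ($9 ~ /^5/) errors++ } END { if (total > 0) printf "5xx rate: %.4f\n", errors / total }' /var/log/nginx/access.log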
A note on client errors: while it is tempting to monitor 4xx, there is limited information you can derive from that metric since it measures client behavior without offering any insight into particular URLs. In other words, a change in 4xx could be noise, e.g. web scanners blindly looking for vulnerabilities.
**Collecting error metrics**
Although open-source NGINX does not make error rates immediately available for monitoring, there are at least two ways to capture that information:
- Use the expanded status module available with commercially supported NGINX Plus
- Configure NGINX's log module to write response codes in access logs
Read the companion post on NGINX metrics collection for detailed instructions on both approaches.
#### Performance metrics ####
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;"><strong>Name</strong></th>
<th style="text-align: left;"><strong>Description</strong></th>
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
<th style="text-align: left;"><strong>Availability</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">request time</td>
<td style="text-align: left;">Time to process each request, in seconds</td>
<td style="text-align: left;">Work: Performance</td>
<td style="text-align: left;">NGINX logs</td>
</tr>
</tbody>
</table>
**Metric to alert on: Request processing time**
The request time metric logged by NGINX records the processing time for each request, from the reading of the first client bytes to fulfilling the request. Long response times can point to problems upstream.
**Collecting processing time metrics**
NGINX and NGINX Plus users can capture data on processing time by adding the $request_time variable to the access log format. More details on configuring logs for monitoring are available in our companion post on [NGINX metrics collection][8].
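A sketch of the relevant directives (the format name “timed” is arbitrary) might look like this inside the http block:

log_format timed '$remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent $request_time';
access_log /var/log/nginx/access.log timed;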
#### Reverse proxy metrics ####
<table>
<colgroup>
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;">
<col style="text-align: left;"> </colgroup>
<thead>
<tr>
<th style="text-align: left;"><strong>Name</strong></th>
<th style="text-align: left;"><strong>Description</strong></th>
<th style="text-align: left;"><strong><a target="_blank" href="https://www.datadoghq.com/blog/monitoring-101-collecting-data/">Metric type</a></strong></th>
<th style="text-align: left;"><strong>Availability</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Active connections by upstream server</td>
<td style="text-align: left;">Currently active client connections</td>
<td style="text-align: left;">Resource: Utilization</td>
<td style="text-align: left;">NGINX Plus</td>
</tr>
<tr>
<td style="text-align: left;">5xx codes by upstream server</td>
<td style="text-align: left;">Server errors</td>
<td style="text-align: left;">Work: Errors</td>
<td style="text-align: left;">NGINX Plus</td>
</tr>
<tr>
<td style="text-align: left;">Available servers per upstream group</td>
<td style="text-align: left;">Servers passing health checks</td>
<td style="text-align: left;">Resource: Availability</td>
<td style="text-align: left;">NGINX Plus</td>
</tr>
</tbody>
</table>
One of the most common ways to use NGINX is as a [reverse proxy][9]. The commercially supported NGINX Plus exposes a large number of metrics about backend (or “upstream”) servers, which are relevant to a reverse proxy setup. This section highlights a few of the key upstream metrics that are available to users of NGINX Plus.
NGINX Plus segments its upstream metrics first by group, and then by individual server. So if, for example, your reverse proxy is distributing requests to five upstream web servers, you can see at a glance whether any of those individual servers is overburdened, and also whether you have enough healthy servers in the upstream group to ensure good response times.
**Activity metrics**
The number of **active connections per upstream server** can help you verify that your reverse proxy is properly distributing work across your server group. If you are using NGINX as a load balancer, significant deviations in the number of connections handled by any one server can indicate that the server is struggling to process requests in a timely manner or that the load-balancing method (e.g., [round-robin or IP hashing][10]) you have configured is not optimal for your traffic patterns.
**Error metrics**
Recall from the error metric section above that 5xx (server error) codes are a valuable metric to monitor, particularly as a share of total response codes. NGINX Plus allows you to easily extract the number of **5xx codes per upstream server**, as well as the total number of responses, to determine that particular server's error rate.
**Availability metrics**
For another view of the health of your web servers, NGINX Plus also makes it simple to monitor the health of your upstream groups via the total number of **servers currently available within each group**. In a large reverse proxy setup, you may not care very much about the current state of any one server, just as long as your pool of available servers is capable of handling the load. But monitoring the total number of servers that are up within each upstream group can provide a very high-level view of the aggregate health of your web servers.
**Collecting upstream metrics**
NGINX Plus upstream metrics are exposed on the internal NGINX Plus monitoring dashboard, and are also available via a JSON interface that can serve up metrics to virtually any external monitoring platform. See examples in our companion post on [collecting NGINX metrics][11].
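For a quick look at the raw feed, one command is enough (the URI depends on how your status module is configured; /status is a common choice, used here as an assumption):

# pretty-print the full NGINX Plus JSON status feed
curl -s http://localhost:8080/status | python -m json.tool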
### Conclusion ###
In this post we've touched on some of the most useful metrics you can monitor to keep tabs on your NGINX servers. If you are just getting started with NGINX, monitoring most or all of the metrics in the list below will provide good visibility into the health and activity levels of your web infrastructure:
- [Dropped connections][12]
- [Requests per second][13]
- [Server error rate][14]
- [Request processing time][15]
Eventually you will recognize additional, more specialized metrics that are particularly relevant to your own infrastructure and use cases. Of course, what you monitor will depend on the tools you have and the metrics available to you. See the companion post for [step-by-step instructions on metric collection][16], whether you use NGINX or NGINX Plus.
At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][17], and get started right away with [a free trial of Datadog][18].
### Acknowledgments ###
Many thanks to the NGINX team for reviewing this article prior to publication and providing important feedback and clarifications.
----------
Source Markdown for this post is available [on GitHub][19]. Questions, corrections, additions, etc.? Please [let us know][20].
--------------------------------------------------------------------------------
via: https://www.datadoghq.com/blog/how-to-monitor-nginx/
作者K Young
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:http://nginx.org/en/
[2]:http://nginx.com/resources/glossary/reverse-proxy-server/
[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/
[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections
[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state
[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[9]:https://en.wikipedia.org/wiki/Reverse_proxy
[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/
[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second
[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate
[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up
[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md
[20]:https://github.com/DataDog/the-monitor/issues

View File

@ -0,0 +1,188 @@
zpl1025
Howto Configure FTP Server with Proftpd on Fedora 22
================================================================================
In this article, we'll learn about setting up an FTP server with ProFTPD on a machine or server running Fedora 22. [ProFTPD][1] is a free and open source FTP daemon licensed under the GPL. It is among the most popular FTP servers on machines running Linux. Its primary design goal is an FTP server with many advanced features, giving users plenty of configuration options for easy customization. It includes a number of configuration options that are still not available in many other FTP daemons. It was initially developed as a more secure and more configurable alternative to the wu-ftpd server. An FTP server is a program that allows us to upload or download files and folders from a remote server, where it is set up, using an FTP client. Some of the features of the ProFTPD daemon are as follows; you can check out more features at [http://www.proftpd.org/features.html][2].
- It includes a per directory ".ftpaccess" access configuration similar to Apache's ".htaccess"
- It features multiple virtual FTP servers, multiple user logins and anonymous FTP services.
- It can be run either as a stand-alone server or from inetd/xinetd.
- Its ownership, file/folder attributes and file/folder permissions are UNIX-based.
- It can be run in standalone mode in order to protect the system from damage that can be caused by root access.
- The modular design of it makes it easily extensible with modules like LDAP servers, SSL/TLS encryption, RADIUS support, etc.
- IPv6 support is also included in the ProFTPD server.
Here are some easy-to-perform steps for setting up an FTP server with ProFTPD on the Fedora 22 operating system.
### 1. Installing ProFTPD ###
First of all, we'll wanna install the ProFTPD server on our box running Fedora 22 as its operating system. As the yum package manager has been deprecated, we'll use the latest and greatest package manager called dnf. DNF is a pretty easy to use and highly user-friendly package manager available in Fedora 22. We'll simply use it to install the proftpd daemon. To do so, we'll need to run the following command in a terminal or console in sudo mode.
$ sudo dnf -y install proftpd proftpd-utils
### 2. Configuring ProFTPD ###
Now, we'll make changes to some configurations in the daemon. To configure the daemon, we will need to edit /etc/proftpd.conf with a text editor. The main configuration file of the ProFTPD daemon is **/etc/proftpd.conf**, so any changes made to this file will affect the FTP server. Here are some changes we'll make in this initial step.
$ sudo vi /etc/proftpd.conf
Next, after we open the file using a text editor, we'll wanna change ServerName and ServerAdmin to our hostname and email address respectively. Here's what we changed them to.
ServerName "ftp.linoxide.com"
ServerAdmin arun@linoxide.com
After that, we'll wanna add the following lines to the configuration file so that it logs access and auth events to their specified log files.
ExtendedLog /var/log/proftpd/access.log WRITE,READ default
ExtendedLog /var/log/proftpd/auth.log AUTH auth
![Configuring ProFTPD Config](http://blog.linoxide.com/wp-content/uploads/2015/06/configuring-proftpd-config.png)
### 3. Adding FTP users ###
After configuring the basics of the configuration file, we'll surely wanna create an FTP user which is rooted at a specific directory of our choosing. The current users that we use to log in to our machine are automatically enabled for the FTP service, so we can even use them to log in to the FTP server. But in this tutorial, we're gonna create a new user with a specified home directory for the ftp server.
Here, we'll create a new group named ftpgroup.
$ sudo groupadd ftpgroup
Then, we're gonna add a new user arunftp into the group with the home directory specified as /ftp-dir/
$ sudo useradd -G ftpgroup arunftp -s /sbin/nologin -d /ftp-dir/
After the user has been created and added to the group, we'll wanna set a password to the user arunftp.
$ sudo passwd arunftp
Changing password for user arunftp.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Now, we'll enable the SELinux booleans that give the FTP users full read and write access to their home directories by executing the following commands.
$ sudo setsebool -P allow_ftpd_full_access=1
$ sudo setsebool -P ftp_home_dir=1
Then, we'll wanna set the sticky bit on that directory so that its contents can't be removed or renamed by any other users.
$ sudo chmod -R 1777 /ftp-dir/
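Note that the configuration above does not jail users to their home directories. If you want that behavior, ProFTPD's standard DefaultRoot directive handles it; an optional addition to /etc/proftpd.conf (not part of the original setup shown here) would be:

# jail members of ftpgroup to their home directory ("~")
DefaultRoot ~ ftpgroup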
### 4. Enabling TLS Support ###
FTP is considered less secure in comparison to the latest encryption methods used these days, as anybody sniffing the network can read the data passing through FTP. So, we'll enable TLS encryption support in our FTP server. To do so, we'll need to edit the /etc/proftpd.conf configuration file. Before that, we'll wanna back up our existing configuration file to make sure we can revert our configuration if anything unexpected happens.
$ sudo cp /etc/proftpd.conf /etc/proftpd.conf.bak
Then, we'll wanna edit the configuration file using our favorite text editor.
$ sudo vi /etc/proftpd.conf
Then, we'll wanna add the following lines just below the lines we configured in step 2.
TLSEngine on
TLSRequired on
TLSProtocol SSLv23
TLSLog /var/log/proftpd/tls.log
TLSRSACertificateFile /etc/pki/tls/certs/proftpd.pem
TLSRSACertificateKeyFile /etc/pki/tls/certs/proftpd.pem
![Enabling TLS Configuration](http://blog.linoxide.com/wp-content/uploads/2015/06/tls-configuration.png)
After finishing up with the configuration, we'll wanna save and exit it.
Next, we'll need to generate the SSL certificates inside **/etc/pki/tls/certs/** directory as proftpd.pem. To do so, first we'll need to install openssl in our Fedora 22 machine.
$ sudo dnf install openssl
Then, we're gonna generate the SSL certificate by running the following command.
$ sudo openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/pki/tls/certs/proftpd.pem -out /etc/pki/tls/certs/proftpd.pem
We'll be asked for some information that will be incorporated into the certificate. After we complete the required information, it will generate a 2048-bit RSA private key.
Generating a 2048 bit RSA private key
...................+++
...................+++
writing new private key to '/etc/pki/tls/certs/proftpd.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:NP
State or Province Name (full name) []:Narayani
Locality Name (eg, city) [Default City]:Bharatpur
Organization Name (eg, company) [Default Company Ltd]:Linoxide
Organizational Unit Name (eg, section) []:Linux Freedom
Common Name (eg, your name or your server's hostname) []:ftp.linoxide.com
Email Address []:arun@linoxide.com
After that, we're gonna change the permissions of the generated certificate file in order to secure it.
$ sudo chmod 600 /etc/pki/tls/certs/proftpd.pem
### 5. Allowing FTP through Firewall ###
Now, we'll need to allow the FTP ports that are usually blocked by the firewall by default. So, we'll open those ports and enable access to FTP through the firewall.
If **TLS/SSL Encryption is enabled** run the following command.
$ sudo firewall-cmd --add-port=1024-65534/tcp
$ sudo firewall-cmd --add-port=1024-65534/tcp --permanent
If **TLS/SSL Encryption is disabled** run the following command.
$ sudo firewall-cmd --permanent --zone=public --add-service=ftp
success
Then, we'll need to reload the firewall configuration.
$ sudo firewall-cmd --reload
success
### 6. Starting and Enabling ProFTPD ###
After everything is set, we'll finally start our ProFTPD and give it a try. To start the proftpd ftp daemon, we'll need to run the following command.
$ sudo systemctl start proftpd.service
Then, we'll wanna enable proftpd to start on every boot.
$ sudo systemctl enable proftpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/proftpd.service to /usr/lib/systemd/system/proftpd.service.
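Before reaching for a GUI client, we can quickly verify that the daemon is running and offering TLS (assuming openssl is installed and the hostname resolves to our server):

$ sudo systemctl status proftpd.service
$ openssl s_client -starttls ftp -connect ftp.linoxide.com:21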
### 7. Logging into the FTP server ###
Now, if everything was configured and done as expected, we should be able to connect to the FTP server and log in with the details we set above. Here, we're gonna configure our FTP client, FileZilla, with the hostname as **server's ip or url**, protocol as **FTP**, user as **arunftp** and password as the one we set in step 3 above. If you followed step 4 to enable TLS support, you'll need to set the encryption type as **Require explicit FTP over TLS**, but if you didn't follow step 4 and don't wanna use TLS encryption, then set the encryption type as **Plain FTP**.
![FTP Login Details](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-login-details.png)
To set up the above configuration, we'll need to go to File in the menu, then click on Site Manager, where we can click on New Site and then configure as illustrated above.
![FTP SSL Certificate](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-ssl-certificate.png)
Then, we're asked to accept the SSL certificate, which can be done by clicking OK. After that, we are able to upload and download the required files and folders from our FTP server.
### Conclusion ###
Finally, we have successfully installed and configured our Fedora 22 box with the ProFTPD FTP server. ProFTPD is an awesome, powerful, highly configurable and extensible FTP daemon. The above tutorial shows how we can configure a secure FTP server with TLS encryption. It is highly recommended to configure the FTP server with TLS encryption, as it adds SSL certificate security to data transfers and logins. Here, we haven't configured anonymous access to the FTP server, because it is usually not recommended in a protected FTP system. FTP access makes it pretty easy for people to upload and download with good, efficient performance. We can even change the ports for the users for additional security. So, if you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://www.proftpd.org/
[2]:http://www.proftpd.org/features.html

View File

@ -0,0 +1,159 @@
Setting Up XR (Crossroads) Load Balancer for Web Servers on RHEL/CentOS
================================================================================
Crossroads is a service-independent, open source load balancing and fail-over utility for Linux and TCP-based services. It can be used for HTTP, HTTPS, SSH, SMTP, DNS, etc. It is also a multi-threaded utility that runs in a single memory space, which improves performance when balancing load.
Let's have a look at how XR works. We place XR between network clients and a pool of servers, and it dispatches client requests to the servers while balancing the load.
If a server is down, XR forwards the next client request to the next server in line, so clients notice no downtime. Have a look at the diagram below to understand what kind of situation we are going to handle with XR.
![Install XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Install-XR-Crossroads-Load-Balancer.jpg)
Install XR Crossroads Load Balancer
There are two web-servers, one gateway server which we install and setup XR to receive client requests and distribute them among the servers.
XR Crossroads Gateway Server : 172.16.1.204
Web Server 01 : 172.16.1.222
Web Server 02 : 192.168.1.161
In the above scenario, my gateway server (i.e. XR Crossroads) bears the IP address 172.16.1.204, webserver01 is 172.16.1.222 and listens on port 8888, and webserver02 is 192.168.1.161 and listens on port 5555.
Now all I need to do is balance the load of all the requests received by the XR gateway from the internet and distribute them between the two web servers.
### Step1: Install XR Crossroads Load Balancer on Gateway Server ###
**1. Unfortunately, there aren't any binary RPM packages available for Crossroads; the only way to install XR Crossroads is from the source tarball.**
To compile XR, you must have a C++ compiler and the GNU make utilities installed on the system in order to continue the installation error-free.
# yum install gcc gcc-c++ make
Next, download the source tarball by going to their official site ([https://crossroads.e-tunity.com][1]), and grab the archived package (i.e. crossroads-stable.tar.gz).
Alternatively, you can use the wget utility as follows to download the package, extract it to any location (e.g. /usr/src/), go to the unpacked directory and issue the “make install” command.
# wget https://crossroads.e-tunity.com/downloads/crossroads-stable.tar.gz
# tar -xvf crossroads-stable.tar.gz
# cd crossroads-2.74/
# make install
![Install XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Install-XR-Crossroads-Load-Balancer.png)
Install XR Crossroads Load Balancer
After the installation finishes, the binary files are created under /usr/sbin/ and the XR configuration file within /etc, namely “xrctl.xml”.
**2. As the last prerequisite, you need two web-servers. For ease of use, I have created two python SimpleHTTPServer instances in one server.**
To see how to setup a python SimpleHTTPServer, read our article at [Create Two Web Servers Easily Using SimpleHTTPServer][2].
As I said, we're using two web-servers, and they are webserver01 running on 172.16.1.222 through port 8888 and webserver02 running on 192.168.1.161 through port 5555.
![XR WebServer 01](http://www.tecmint.com/wp-content/uploads/2015/07/XR-WebServer01.jpg)
XR WebServer 01
![XR WebServer 02](http://www.tecmint.com/wp-content/uploads/2015/07/XR-WebServer02.jpg)
XR WebServer 02
### Step 2: Configure XR Crossroads Load Balancer ###
**3. All requisites are in place. Now what we have to do is configure the `xrctl.xml` file to distribute the load, which the XR server receives from the internet, among the web-servers.**
Now open `xrctl.xml` file with [vi/vim editor][3].
# vim /etc/xrctl.xml
and make the changes as suggested below.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system>
<uselogger>true</uselogger>
<logdir>/tmp</logdir>
</system>
<service>
<name>Tecmint</name>
<server>
<address>172.16.1.204:8080</address>
<type>tcp</type>
<webinterface>0:8010</webinterface>
<verbose>yes</verbose>
<clientreadtimeout>0</clientreadtimeout>
<clientwritetimeout>0</clientwritetimeout>
<backendreadtimeout>0</backendreadtimeout>
<backendwritetimeout>0</backendwritetimeout>
</server>
<backend>
<address>172.16.1.222:8888</address>
</backend>
<backend>
<address>192.168.1.161:5555</address>
</backend>
</service>
</configuration>
![Configure XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Configure-XR-Crossroads-Load-Balancer.jpg)
Configure XR Crossroads Load Balancer
Here, you can see a very basic XR configuration done within xrctl.xml. I have defined the XR server, the back-end servers with their ports, and the web interface port for XR.
**4. Now you need to start the XR daemon by issuing the commands below.**
# xrctl start
# xrctl status
![Start XR Crossroads](http://www.tecmint.com/wp-content/uploads/2015/07/Start-XR-Crossroads.jpg)
Start XR Crossroads
**5. Okay great. Now it's time to check whether the configs are working fine. Open two web browsers, enter the IP address of the XR server with the port, and see the output.**
![Verify Web Server Load Balancing](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Web-Server-Load-Balancing.jpg)
Verify Web Server Load Balancing
Fantastic. It works fine. Now it's time to play with XR.
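If you prefer the command line, you can confirm the alternation with a quick loop from any machine that can reach the gateway (assuming curl is installed):

# responses should alternate between webserver01 and webserver02
$ for i in 1 2 3 4; do curl -s http://172.16.1.204:8080/; done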
**6. Now it's time to log in to the XR Crossroads dashboard and see the port we've configured for the web interface. Enter your XR server's IP address with the port number for the web interface you configured in xrctl.xml.**
http://172.16.1.204:8010
![XR Crossroads Dashboard](http://www.tecmint.com/wp-content/uploads/2015/07/XR-Crossroads-Dashboard.jpg)
XR Crossroads Dashboard
This is what it looks like. It's easy to understand, user-friendly and easy to use. It shows in the top right corner how many connections each back-end server received, along with additional details about the incoming requests. You can even set the load weight each server should bear, the maximum number of connections, the load average, etc.
The best part is, you actually can do this even without configuring xrctl.xml. The only thing you have to do is issue a command with the following syntax, and it will get the job done.
# xr --verbose --server tcp:172.16.1.204:8080 --backend 172.16.1.222:8888 --backend 192.168.1.161:5555
Explanation of the above syntax in detail:
- --verbose shows what happens as the command executes.
- --server defines the XR server you have installed the package on.
- --backend defines the web servers you need to balance the traffic to.
- tcp defines that the service uses TCP.
For more details, about documentations and configuration of CROSSROADS, please visit their official site at: [https://crossroads.e-tunity.com/][4].
XR Crossroads offers many ways to enhance your server performance, guard against downtime, and make your admin tasks easier and handier. Hope you enjoyed the guide; feel free to comment below with suggestions and requests for clarification. Keep in touch with Tecmint for handy how-tos.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/setting-up-xr-crossroads-load-balancer-for-web-servers-on-rhel-centos/
作者:[Thilina Uvindasiri][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/thilidhanushka/
[1]:https://crossroads.e-tunity.com/
[2]:http://www.tecmint.com/python-simplehttpserver-to-create-webserver-or-serve-files-instantly/
[3]:http://www.tecmint.com/vi-editor-usage/
[4]:https://crossroads.e-tunity.com/

View File

@ -1,3 +1,6 @@
XLCYun translating.
How To Fix System Program Problem Detected In Ubuntu 14.04
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/system_program_Problem_detected.jpeg)
@ -79,4 +82,4 @@ via: http://itsfoss.com/how-to-fix-system-program-problem-detected-ubuntu/
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/how-to-solve-sorry-ubuntu-12-04-has-experienced-an-internal-error/
[2]:https://launchpad.net/
[2]:https://launchpad.net/

View File

@ -0,0 +1,157 @@
使用去重加密工具来备份
================================================================================
在体积和价值方面数据都在增长。快速而可靠地备份和恢复数据正变得越来越重要。社会已经适应了技术的广泛使用并懂得了如何依靠电脑和移动设备但很少有人能够处理丢失重要数据的现实。在遭受数据损失的公司中30% 的公司将在一年内损失一半市值70% 的公司将在五年内停止交易。这更加凸显了数据的价值。
随着数据在体积上的增长,提高存储利用率尤为重要。在计算机领域,数据去重是一种特殊的数据压缩技术,因为它可以消除重复数据的拷贝,所以这个技术可以提高存储利用率。
数据并不仅仅只有其创造者感兴趣。政府、竞争者、犯罪分子、偷窥者可能都热衷于获取你的数据。他们或许想偷取你的数据,从你那里进行敲诈,或看你正在做什么。对于保护你的数据,加密是非常必要的。
所以,解决方法是我们需要一个去重加密备份软件。
对于所有的用户而言,做文件备份都是一件非常必要的事,但至今为止许多用户还没有采取足够的措施来保护他们的数据。一台电脑不论是工作在协作环境中,还是供私人使用,机器的硬盘都可能在没有任何警告的情况下挂掉。另外,有些数据丢失是人为错误所引发的。如果没有做经常性的备份,数据就可能无可挽回地丢失,即使请了专业的数据恢复公司来帮忙也无济于事。
这篇文章将对 6 个去重加密备份工具进行简要的介绍。
----------
### Attic ###
Attic 是一个用 Python 写的、支持去重、加密和完整性验证的压缩备份程序。Attic 的主要目标是提供一个高效且安全的方式来备份数据。Attic 使用的数据去重技术使得它适用于每日备份,因为只需存储改变的数据。
其特点有:
- 易用
- 可高效利用存储空间:通过按数据块检查冗余,去重可以减少存储所用的空间
- 可选的数据加密,使用 256 位的 AES 加密算法。数据的完整性和可靠性使用 HMAC-SHA256 来检查
- 通过 SSH 进行异地off-site备份
- 备份可作为文件系统来挂载
网站: [attic-backup.org][1]
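下面是一个最简单的使用流程示意(仓库路径和归档名仅为示例):

# 初始化一个新的备份仓库
$ attic init /mnt/backup/docs.attic
# 创建名为 monday 的归档,备份 ~/Documents
$ attic create /mnt/backup/docs.attic::monday ~/Documents
# 列出仓库中已有的归档
$ attic list /mnt/backup/docs.attic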
----------
### Borg ###
Borg 是 Attic 的分支。它是一个安全的开源备份程序,被设计用来高效地存储那些新的或修改过的数据。
Borg 的主要目标是提供一个高效、安全的方式来存储数据。Borg 使用的数据去重技术使得它适用于每日备份,因为只需存储改变的数据。认证加密使得它适用于不完全可信的目标的存储。
Borg 由 Python 写成。Borg 于 2015 年 5 月被创造出来,为了回应让新的代码或重大的改变带入 Attic 的困难。
其特点包括:
- 易用
- 可高效利用存储空间:通过按数据块检查冗余,去重可以减少存储所用的空间
- 可选的数据加密,使用 256 位的 AES 加密算法。数据的完整性和可靠性使用 HMAC-SHA256 来检查
- 通过 SSH 进行异地off-site备份
- 备份可作为文件系统来挂载
Borg 与 Attic 不兼容。
网站: [borgbackup.github.io/borgbackup][2]
----------
### Obnam ###
Obnam (OBligatory NAMe) 是一个易用、安全的基于 Python 的备份程序。备份可被存储在本地硬盘或通过 SSH SFTP 协议存储到网上。若使用了备份服务器,它并不需要任何特殊的软件,只需要使用 SSH 即可。
Obnam 通过将数据分成数据块并单独存储它们来达到去重的目的,每次备份都以增量方式生成,看起来像一次新的快照,但实际上是真正的增量备份。Obnam 由 Lars Wirzenius 开发。
其特点有:
- 易用
- 快照备份
- 数据去重,跨文件,生成备份
- 可使用 GnuPG 来加密备份
- 向一个单独的仓库中备份多个客户端的数据
- 备份检查点 (创建一个保存点,以每 100MB 或其他容量)
- 包含多个选项来调整性能,包括调整 lru-size 或 upload-queue-size
- 支持 MD5 校验和算法来识别重复的数据块
- 通过 SFTP 将备份存储到一个服务器上
- 同时支持 push(即在客户端上运行) 和 pull(即在服务器上运行)
网站: [obnam.org][3]
----------
### Duplicity ###
Duplicity 持续地以 tar 文件格式备份文件和目录,并使用 GnuPG 来进行加密,同时将它们上传到远程(或本地)的文件服务器上。它可以使用 ssh/scp、本地文件访问、rsync、ftp 和 Amazon S3 等来传递数据。
因为 duplicity 使用了 librsync生成的增量存档可以高效地利用存储空间且只记录自上次备份以来改变的那部分文件。由于该软件使用 GnuPG 来加密或对这些归档文件进行签名,这使得它们免于服务器的监视或修改。
当前 duplicity 支持备份删除的文件,全部的 unix 权限,目录,符号链接, fifo 等。
duplicity 软件包还包含有 rdiffdir 工具。 Rdiffdir 是 librsync 的 rdiff 针对目录的扩展。它可以用来生成对目录的签名和差异,对普通文件也有效。
其特点有:
- 使用简单
- 对归档进行加密和签名(使用 GnuPG)
- 高效使用带宽和存储空间,使用 rsync 的算法
- 标准的文件格式
- 可选择多种远程协议
- 本地存储
- scp/ssh
- ftp
- rsync
- HSI
- WebDAV
- Amazon S3
网站: [duplicity.nongnu.org][4]
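一个典型的调用示意如下(远程地址仅为示例,备份会经 GnuPG 加密后上传到 SFTP 服务器):

# 将 ~/Documents 增量备份到远程主机
$ duplicity ~/Documents sftp://user@backup.example.com/backups/docs
# 从备份中恢复到本地目录
$ duplicity restore sftp://user@backup.example.com/backups/docs ~/restored-docs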
----------
### ZBackup ###
ZBackup 是一个通用的全局去重备份工具。
其特点包括:
- 存储数据的并行 LZMA 或 LZO 压缩,在一个仓库中,你还可以混合使用 LZMA 和 LZO
- 内置对存储数据的 AES 加密
- 可选择地删除旧的备份数据
- 可以使用 64 位的滚动哈希算法,使得软哈希冲突的数量几乎为零
- 仓库由不可变的文件组成,已有的文件永远不会被修改
- 用 C++ 写成,只需少量的库文件依赖
- 在生产环境中可以安全使用
- 可以在不同仓库中进行数据交换而不必再进行压缩
- 可以使用 64 位改进型 Rabin-Karp 滚动哈希算法
网站: [zbackup.org][5]
----------
### bup ###
bup 是一个用 Python 写的备份程序,其名称是 "backup" 的缩写。基于 git packfile 文件格式bup 提供了一个高效的方式来备份一个系统,提供快速的增量备份和全局去重(在文件之间和文件内部,甚至包括虚拟机镜像)。
bup 在 LGPL 版本 2 协议下发行。
其特点包括:
- 全局去重(在文件之间和文件内部,甚至包括虚拟机镜像)
- 使用一个滚动的校验和算法(类似于 rsync) 来将大文件分为多个数据块
- 使用来自 git 的 packfile 格式
- 直接写入 packfile 文件,以此提供快速的增量备份
- 可以使用 "par2" 冗余来恢复冲突的备份
- 可以作为一个 FUSE 文件系统来挂载你的 bup 仓库
网站: [bup.github.io][6]
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20150628060000607/BackupTools.html
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[1]:https://attic-backup.org/
[2]:https://borgbackup.github.io/borgbackup/
[3]:http://obnam.org/
[4]:http://duplicity.nongnu.org/
[5]:http://zbackup.org/
[6]:https://bup.github.io/

View File

@ -0,0 +1,358 @@
PHP 安全
================================================================================
![](http://www.codeproject.com/KB/PHP/363897/php_security.jpg)
### 简介 ###
为提供互联网服务,当你在开发代码的时候必须时刻保持安全意识。可能大部分 PHP 脚本都忽视了安全问题;这很大程度上是因为有大量的无经验程序员在使用这门语言。但是,没有理由因为你对自己代码的影响范围只有粗略的估计,就采用前后不一的安全策略。当你在服务器上放任何涉及金钱的东西时,就有可能会有人尝试破解它。创建一个论坛程序或者任何形式的购物车,被攻击的可能性就上升到了无穷大。
### 背景 ###
为了确保你的 web 内容安全,这里有一些一般的安全准则:
#### 别相信表单 ####
攻击表单很简单。通过使用一个简单的 JavaScript 技巧,你可以限制你的表单只允许在评分域中填写 1 到 5 的数字。如果有人关闭了他们浏览器的 JavaScript 功能或者提交自定义的表单数据,你客户端的验证就失败了。
用户主要通过表单参数和你的脚本交互,因此他们是最大的安全风险。你应该学到什么呢?总是要验证 PHP 脚本中传递到其它任何 PHP 脚本的数据。在本文中我们向你演示了如何分析和防范跨站点脚本XSS攻击它可能劫持用户凭据甚至更严重。你也会看到如何防止会玷污或毁坏你数据的 MySQL 注入攻击。
#### 别相信用户 ####
假设你网站获取的每一份数据都充满了有害的代码。清理每一部分,就算你相信没有人会尝试攻击你的站点。
#### 关闭全局变量 ####
你可能会有的最大安全漏洞是启用了 register\_globals 配置参数。幸运的是PHP 4.2 及以后版本默认关闭了这个配置。如果打开了 **register\_globals**,你可以在你的 php.ini 文件中通过改变 register\_globals 变量为 Off 关闭该功能:
register_globals = Off
新手程序员觉得注册全局变量很方便,但他们不会意识到这个设置有多么危险。一个启用了全局变量的服务器会自动为全局变量赋任何形式的参数。为了了解它如何工作以及为什么有危险,让我们来看一个例子。
假设你有一个称为 process.php 的脚本,它会向你的数据库插入表单数据。初始的表单像下面这样:
<input name="username" type="text" size="15" maxlength="64">
运行 process.php 的时候,启用了注册全局变量的 PHP 会将该参数的值赋给 $username 变量。这会比通过 **$\_POST['username']** 或 **$\_GET['username']** 访问它节省敲击次数。不幸的是,这也会给你留下安全问题,因为 PHP 会将该变量的值设置为通过 GET 或 POST 参数发送到脚本的任何值,如果你没有显式地初始化该变量并且你不希望任何人去操作它,这就会是一个大问题。
看下面的脚本,假如 $authorized 变量的值为 true它会给用户显示验证数据。正常情况下只有当用户正确通过了假想的 authenticated\_user() 函数验证,$authorized 变量的值才会被设置为真。但是如果你启用了 **register\_globals**,任何人都可以发送一个 GET 参数,例如 authorized=1 去覆盖它:
<?php
// Define $authorized = true only if user is authenticated
if (authenticated_user()) {
$authorized = true;
}
?>
这个故事的寓意是,你应该从预定义的服务器变量中获取表单数据。所有通过 post 表单传递到你 web 页面的数据都会自动保存到一个称为 **$\_POST** 的大数组中,所有的 GET 数据都保存在 **$\_GET** 大数组中。文件上传信息保存在一个称为 **$\_FILES** 的特殊数组中。另外,还有一个称为 **$\_REQUEST** 的复合变量。
要从一个 POST 方法表单中访问 username 域,可以使用 **$\_POST['username']**。如果 username 在 URL 中就使用 **$\_GET['username']**。如果你不确定值来自哪里,用 **$\_REQUEST['username']**。
<?php
$post_value = $_POST['post_value'];
$get_value = $_GET['get_value'];
$some_variable = $_REQUEST['some_value'];
?>
$\_REQUEST 是 $\_GET、$\_POST、和 $\_COOKIE 数组的结合。如果你有两个或多个值有相同的参数名称,注意 PHP 会使用哪个。默认的顺序是 cookie、POST、然后是 GET。
#### 推荐安全配置选项 ####
这里有几个会影响安全功能的 PHP 配置设置。下面是一些显然应该用于生产服务器的:
- **register\_globals** 设置为 off
- **safe\_mode** 设置为 off
- **error\_reporting** 设置为 off。如果出现错误了这会向用户浏览器发送可见的错误报告信息。对于生产服务器使用错误日志代替。开发服务器如果在防火墙后面就可以启用错误日志。
- 停用这些函数system()、exec()、passthru()、shell\_exec()、proc\_open()、和 popen()。
- **open\_basedir** 为 /tmp以便保存会话信息目录和 web 根目录设置值,以便脚本不能访问选定区域外的文件。
- **expose\_php** 设置为 off。该功能会向 Apache 头添加包含版本数字的 PHP 签名。
- **allow\_url\_fopen** 设置为 off。如果你小心在你代码中访问文件的方式-也就是你验证所有输入参数,这并不严格需要。
- **allow\_url\_include** 设置为 off。这实在没有明智的理由任何人会想要通过 HTTP 访问包含的文件。
一般来说,如果你发现想要使用这些功能的代码,你就不应该相信它。尤其要小心会使用类似 system() 函数的代码-它几乎肯定有缺陷。
启用了这些设置后,让我们来看看一些特定的攻击以及能帮助你保护你服务器的方法。
### SQL 注入攻击 ###
由于 PHP 传递到 MySQL 数据库的查询语句是按照强大的 SQL 编程语言编写的,你就有某些人通过在 web 查询参数中使用 MySQL 语句尝试 SQL 注入攻击的风险。通过在参数中插入有害的 SQL 代码片段,攻击者会尝试进入(或破坏)你的服务器。
假如说你有一个最终会放入变量 $product 的表单参数,你使用了类似下面的 SQL 语句:
$sql = "select * from pinfo where product = '$product'";
如果参数是直接从表单中获得的,使用 PHP 自带的数据库特定转义函数,类似:
$sql = "select * from pinfo where product = '" .
mysql_real_escape_string($product) . "'";
如果不这样做的话,有人也许会把下面的代码段放到表单参数中:
39'; DROP pinfo; SELECT 'FOO
$sql 的结果就是:
select * from pinfo where product = '39'; DROP pinfo; SELECT 'FOO'
由于分号是 MySQL 的语句分隔符,数据库会运行下面三条语句:
select * from pinfo where product = '39'
DROP pinfo
SELECT 'FOO'
好了,你丢失了你的表。
注意实际上 PHP 和 MySQL 不会运行这种特殊语法,因为 **mysql\_query()** 函数只允许每个请求处理一个语句。但是,一个子查询仍然会生效。
要防止 SQL 注入攻击,做这两件事:
- 总是验证所有参数。例如,如果需要一个数字,就要确保它是一个数字。
- 总是对数据使用 mysql\_real\_escape\_string() 函数转义数据中的任何引号和双引号。
**注意要自动转义任何表单数据可以启用魔术引号Magic Quotes。**
一些 MySQL 破坏可以通过限制 MySQL 用户权限避免。任何 MySQL 账户可以限制为只允许对选定的表进行特定类型的查询。例如,你可以创建只能选择行的 MySQL 用户。但是,这对于动态数据并不十分有用,另外,如果你有敏感的用户信息,可能某些人能访问一些数据,但你并不希望如此。例如,一个访问账户数据的用户可能会尝试注入访问另一个账户号码的代码,而不是为当前会话指定的号码。
### 防止基本的 XSS 攻击 ###
XSS 表示跨站点脚本。不像大部分攻击该漏洞发生在客户端。XSS 最常见的基本形式是在用户提交的内容中放入 JavaScript以便偷取用户 cookie 中的数据。由于大部分站点使用 cookie 和 session 验证访客,偷取的数据可用于冒充该用户。如果是一个普通用户账户,这已经够麻烦了;如果是管理员账户,那更是彻底的灾难。如果你不在站点中使用 cookie 和 session ID你的用户就不容易被这样攻击但你仍然应该明白这种攻击是如何工作的。
不像 MySQL 注入攻击XSS 攻击很难预防。Yahoo、eBay、Apple、以及 Microsoft 都曾经受 XSS 影响。尽管攻击不包含 PHP你可以使用 PHP 来剥离用户数据以防止攻击。为了防止 XSS 攻击,你应该限制和过滤用户提交给你站点的数据。正是因为这个原因大部分在线公告板都不允许在提交的数据中使用 HTML 标签,而是用自定义的标签格式代替,例如 **[b]** 和 **[linkto]**。
让我们来看一个如何防止这类攻击的简单脚本。对于更完善的解决办法,可以使用 SafeHTML本文的后面部分会讨论到。
function transform_HTML($string, $length = null) {
// Helps prevent XSS attacks
// Remove dead space.
$string = trim($string);
// Prevent potential Unicode codec problems.
$string = utf8_decode($string);
// HTMLize HTML-specific characters.
$string = htmlentities($string, ENT_NOQUOTES);
$string = str_replace("#", "&#35;", $string);
$string = str_replace("%", "&#37;", $string);
$length = intval($length);
if ($length > 0) {
$string = substr($string, 0, $length);
}
return $string;
}
这个函数将 HTML 特定字符转换为 HTML 字面字符。浏览器会将任何经过这个脚本处理的 HTML 以无标记的纯文本呈现。例如,考虑下面的 HTML 字符串:
<STRONG>Bold Text</STRONG>
一般情况下HTML 会显示为:
Bold Text
但是,通过 **transform\_HTML()** 处理后,它就像初始输入一样呈现。原因是处理后的字符串中的标签字符变成了 HTML 实体。**transform\_HTML()** 结果字符串的纯文本看起来像下面这样:
&lt;STRONG&gt;Bold Text&lt;/STRONG&gt;
该函数的实质是 htmlentities() 函数调用,它会将 <、>、和 & 转换为 **&lt;**、**&gt;**、和 **&amp;**。尽管这会处理大部分的普通攻击,有经验的 XSS 攻击者有另一种把戏:用十六进制或 UTF-8 编码恶意脚本,而不是采用普通的 ASCII 文本,从而希望能饶过你的过滤器。他们可以在 URL 的 GET 变量中发送代码,例如,“这是十六进制代码,你能帮我运行吗?” 一个十六进制例子看起来像这样:
<a href="http://host/a.php?variable=%22%3e %3c%53%43%52%49%50%54%3e%44%6f%73%6f%6d%65%74%68%69%6e%67%6d%61%6c%69%63%69%6f%75%73%3c%2f%53%43%52%49%50%54%3e">
浏览器渲染这信息的时候,结果就是:
<a href="http://host/a.php?variable="> <SCRIPT>Dosomethingmalicious</SCRIPT>
为了防止这种情况transform\_HTML() 采用额外的步骤把 # 和 % 符号转换为它们的实体,从而避免十六进制攻击,并转换 UTF-8 编码的数据。
最后,为了防止某些人用很长的输入超载字符串从而导致某些东西崩溃,你可以添加一个可选的 $length 参数来截取你指定最大长度的字符串。
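该函数的用法很直接,比如在输出用户提交的评论之前先做清理(这里的表单字段名仅为示例):

<?php
// 清理用户提交的评论,并截断为最多 500 个字符后再输出
echo transform_HTML($_POST['comment'], 500);
?>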
### 使用 SafeHTML ###
之前脚本的问题比较简单它不允许任何类型的用户标记。不幸的是有上百种方法能让 JavaScript 绕过用户的过滤器,没有哪种从用户输入中剥离 HTML 的方法可以完全防止这种情况。
当前,没有任何一个脚本能保证无法被破解,尽管有一些确实比大部分要好。有白名单和黑名单两种方法加固安全,白名单比较简单而且更加有效。
一个白名单解决方案是 PixelApes 的 SafeHTML 反跨站点脚本解析器。
SafeHTML 能识别有效 HTML能追踪并剥离任何危险标签。它用另一个称为 HTMLSax 的软件包进行解析。
按照下面步骤安装和使用 SafeHTML
1. 到 [http://pixel-apes.com/safehtml/?page=safehtml][1] 下载最新版本的 SafeHTML。
1. 把文件放到你服务器的类文件夹。该文件夹包括 SafeHTML 和 HTMLSax 起作用需要的所有东西。
1. 在脚本中包含 SafeHTML 类文件safehtml.php
1. 创建称为 $safehtml 的新 SafeHTML 对象。
1. 用 $safehtml->parse() 方法清理你的数据。
这是一个完整的例子:
<?php
/* If you're storing the HTMLSax3.php in the /classes directory, along
with the safehtml.php script, define XML_HTMLSAX3 as a null string. */
define(XML_HTMLSAX3, '');
// Include the class file.
require_once('classes/safehtml.php');
// Define some sample bad code.
$data = "This data would raise an alert <script>alert('XSS Attack')</script>";
// Create a safehtml object.
$safehtml = new safehtml();
// Parse and sanitize the data.
$safe_data = $safehtml->parse($data);
// Display result.
echo 'The sanitized data is <br />' . $safe_data;
?>
如果你想清理脚本中的任何其它数据,你不需要创建一个新的对象;在你的整个脚本中只需要使用 $safehtml->parse() 方法。
#### 什么可能会出现问题? ####
你可能犯的最大错误是假设这个类能完全避免 XSS 攻击。SafeHTML 是一个相当复杂的脚本,几乎能检查所有事情,但没有什么是能保证的。你仍然需要对你的站点做参数验证。例如,该类不能检查给定变量的长度以确保能适应数据库的字段。它也不检查缓冲溢出问题。
XSS 攻击者很有创造力,他们使用各种各样的方法来尝试达到他们的目标。可以阅读 RSnake 的 XSS 教程[http://ha.ckers.org/xss.html][2] 看一下这里有多少种方法尝试使代码跳过过滤器。SafeHTML 项目有很好的程序员一直在尝试阻止 XSS 攻击,但无法保证某些人不会想起一些奇怪和新奇的方法来跳过过滤器。
**注意XSS 攻击严重影响的一个例子 [http://namb.la/popular/tech.html][3],其中显示了如何一步一步创建会超载 MySpace 服务器的 JavaScript XSS 蠕虫。**
### 用单向哈希保护数据 ###
该脚本对输入的数据进行单向转换-换句话说,它能对某人的密码产生哈希签名,但不能解码获得原始密码。为什么你希望这样呢?应用程序会存储密码。一个管理员不需要知道用户的密码-事实上,只有用户知道他的/她的密码是个好主意。系统(也仅有系统)应该能识别一个正确的密码;这是 Unix 多年来的密码安全模型。单向密码安全按照下面的方式工作:
1. 当一个用户或管理员创建或更改一个账户密码时,系统对密码进行哈希并保存结果。主机系统忽视明文密码。
2. 当用户通过任何方式登录到系统时,再次对输入的密码进行哈希。
3. 主机系统抛弃输入的明文密码。
4. 当前新哈希的密码和之前保存的哈希相比较。
5. 如果哈希的密码相匹配,系统就会授予访问权限。
主机系统完成这些并不需要知道原始密码;事实上,原始值完全不相关。一个副作用是,如果某人侵入系统并盗取了密码数据库,入侵者会获得很多哈希后的密码,但无法把它们反向转换为原始密码。当然,给足够时间、计算能力,以及弱用户密码,一个攻击者还是有可能采用字典攻击找出密码。因此,别轻易让人碰你的密码数据库,如果确实有人这样做了,让每个用户更改他们的密码。
#### 加密 Vs 哈希 ####
技术上来来说,这过程并不是加密。哈希和加密是不相同的,这有两个理由:
- 不像加密,数据不能被解密。
- 有可能(但很不常见)两个不同的字符串会产生相同的哈希。并不能保证哈希是唯一的,因此别像数据库中的唯一键那样使用哈希。
function hash_ish($string) {
return md5($string);
}
md5() 函数基于 RSA 数据安全公司的消息摘要算法(即 MD5返回一个由 32 个字符组成的十六进制串。然后你可以将那个 32 位字符串插入到数据库中,和另一个 md5 字符串相比较,或者就用这 32 个字符。
#### 破解脚本 ####
几乎不可能解密 MD5 数据,或者说很难。但是,你仍然需要好的密码,因为根据整个字典生成哈希数据库仍然很简单。这里有在线 MD5 字典,当你输入 **06d80eb0c50b49a509b49f2424e8c805** 后会得到结果 “dog”。因此尽管技术上 MD5 不能被解密,这里仍然有漏洞:如果某人获得了你的密码数据库,你可以肯定他们会使用 MD5 字典破译。因此,当你创建基于密码的系统的时候,尤其要注意密码长度(最少 6 个字符8 个或许会更好),并要求同时包含字母和数字,还要确保这个密码不在字典中。
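补充一点:如果你的环境是 PHP 5.5 或更高版本,内置的 password\_hash() 和 password\_verify() 函数会自动生成随机盐,比直接使用 MD5 更能抵抗字典攻击。一个最小示意如下(表单字段名仅为示例):

<?php
// 注册时:生成带随机盐的哈希PHP 5.5+
$hash = password_hash($_POST['password'], PASSWORD_DEFAULT);
// 登录时:验证用户提交的密码是否与存储的哈希匹配
if (password_verify($_POST['password'], $hash)) {
    echo '密码正确';
}
?>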
### 用 Mcrypt 加密数据 ###
如果你不需要以可阅读的形式查看密码,采用 MD5 就足够了。不幸的是,这种选择并不总是存在:如果你承诺以加密形式存储某人的信用卡信息,你可能需要在之后的某个时刻把它解密出来。
最早的一个解决方案是 Mcrypt 模块,这是一个让 PHP 能进行高强度加密的扩展。Mcrypt 库提供了超过 30 种用于加密的算法,并且可以用口令保护,确保只有你(或者你的用户)可以解密数据。
让我们来看看使用方法。下面的脚本包含了使用 Mcrypt 加密和解密数据的函数:
<?php
$data = "Stuff you want encrypted";
$key = "Secret passphrase used to encrypt your data";
$cipher = "MCRYPT_SERPENT_256";
$mode = "MCRYPT_MODE_CBC";
function encrypt($data, $key, $cipher, $mode) {
// Encrypt data
return (string)
base64_encode
(
mcrypt_encrypt
(
$cipher,
substr(md5($key),0,mcrypt_get_key_size($cipher, $mode)),
$data,
$mode,
substr(md5($key),0,mcrypt_get_block_size($cipher, $mode))
)
);
}
function decrypt($data, $key, $cipher, $mode) {
// Decrypt data
return (string)
mcrypt_decrypt
(
$cipher,
substr(md5($key),0,mcrypt_get_key_size($cipher, $mode)),
base64_decode($data),
$mode,
substr(md5($key),0,mcrypt_get_block_size($cipher, $mode))
);
}
?>
**mcrypt\_encrypt()** 函数需要以下几个信息:
- 需要加密的数据
- 用于加密和解锁数据的短语也称为密钥key
- 用于加密数据的计算方法,也就是用于加密数据的算法。该脚本使用了 **MCRYPT\_SERPENT\_256**,但你可以从很多算法中选择,包括 **MCRYPT\_TWOFISH192**、**MCRYPT\_RC2**、**MCRYPT\_DES**、和 **MCRYPT\_LOKI97**
- 加密数据的模式。这里有几个你可以使用的模式包括电子密码本Electronic Codebook 和加密反馈Cipher Feedback。该脚本使用 **MCRYPT\_MODE\_CBC** 密码块链接。
- 一个**初始化向量**也称为 IV或者一个种子用于为加密算法设置种子的额外二进制位。也就是使算法更难于破解的额外信息。
- 密钥和 IV 字符串的长度,这可能随加密算法和分组而不同。使用 **mcrypt\_get\_key\_size()****mcrypt\_get\_block\_size()** 函数获取合适的长度;然后用 **substr()** 函数将密钥的值截取为合适的长度。如果密钥的长度比要求的短别担心Mcrypt 会用 0 填充。)
如果有人窃取了你的数据和短语,他们只能一个个尝试加密算法,直到找到正确的那一个。因此,我们在使用密钥之前对它应用了 **md5()** 函数来增加安全性,这样即使他们同时获取了数据和短语,入侵者也不能直接得到想要的东西。
入侵者同时需要函数,数据和短语-如果真是如此,他们可能获得了对你服务器的完整访问,你只能大清洗了。
这里还有一个数据存储格式的小问题。Mcrypt 以难懂的二进制形式返回加密后的数据,如果直接存储到 MySQL 字段中,可能会导致可怕的错误。因此,我们使用 **base64encode()****base64decode()** 函数将其转换为与 SQL 兼容的字母格式,便于存储和检索。
#### 破解脚本 ####
除了实验多种加密方法,你还可以在脚本中添加一些便利。例如,不是每次都提供键和模式,而是在包含的文件中声明为全局常量。
### 生成随机密码 ###
随机(但难以猜测)的字符串在用户安全中很重要。例如,如果某人丢失了密码并且你使用了 MD5 哈希,你不可能,也不希望把原密码找回来,而是应该生成一个安全的随机密码并发送给用户。随机数的另一个应用场景,是为访问你站点服务的用户创建激活链接。下面是创建密码的一个函数:
<?php
function make_password($num_chars) {
if ((is_numeric($num_chars)) &&
($num_chars > 0) &&
(! is_null($num_chars))) {
$password = '';
$accepted_chars = 'abcdefghijklmnopqrstuvwxyz1234567890';
// Seed the generator if necessary.
srand(((int)((double)microtime()*1000003)) );
for ($i=0; $i<$num_chars; $i++) {
$random_number = rand(0, (strlen($accepted_chars) -1));
$password .= $accepted_chars[$random_number] ;
}
return $password;
}
}
?>
#### 使用脚本 ####
**make_password()** 函数返回一个字符串,因此你需要做的就是提供字符串的长度作为参数:
<?php
$fifteen_character_password = make_password(15);
?>
函数按照下面步骤工作:
- 函数确保 **$num\_chars** 是非零的正整数。
- 函数初始化 **$accepted\_chars** 变量为密码可能包含的字符列表。该脚本使用所有小写字母和数字 0 到 9但你可以使用你喜欢的任何字符集合。
- 随机数生成器需要一个种子从而获得一系列类随机值PHP 4.2 及之后版本中并不严格要求)。
- 函数循环 **$num\_chars** 次,每次迭代生成密码中的一个字符。
- 对于每个新字符,脚本查看 **$accepted_chars** 的长度,选择 0 和长度之间的一个数字,然后添加 **$accepted\_chars** 中该数字为索引值的字符到 $password。
- 循环结束后,函数返回 **$password**。
### 许可证 ###
本篇文章,包括相关的源代码和文件,都是在 [The Code Project Open License (CPOL)][4] 协议下发布。
--------------------------------------------------------------------------------
via: http://www.codeproject.com/Articles/363897/PHP-Security
作者:[SamarRizvi][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.codeproject.com/script/Membership/View.aspx?mid=7483622
[1]:http://pixel-apes.com/safehtml/?page=safehtml
[2]:http://ha.ckers.org/xss.html
[3]:http://namb.la/popular/tech.html
[4]:http://www.codeproject.com/info/cpol10.aspx

View File

@ -1,114 +0,0 @@
为什么 MySQL 里的 ibdata1 文件不断增长?
================================================================================
![ibdata1 file](https://www.percona.com/blog/wp-content/uploads/2013/08/ibdata1-file.jpg)
我们在[Percona支持][1]经常收到关于MySQL的ibdata1文件的这个问题。
当监控服务器发来一个关于 MySQL 服务器存储的报警时,恐慌就开始了:磁盘快要满了。
一番调查后你意识到大多数磁盘空间被 InnoDB 的共享表空间 ibdata1 使用。你已经启用了[innodb_file_per_table][2],所以问题是:
### ibdata1存了什么 ###
当你启用了 innodb_file_per_table表会存储在它们各自的表空间里但是共享表空间仍然用来存储其它的 InnoDB 内部数据:
- 数据字典也就是InnoDB表的元数据
- 插入缓冲区insert buffer
- 双写缓冲区
- 撤销日志
在 [Percona Server][3] 上,有一些部分可以通过配置来避免增长过大。例如,你可以通过 [innodb_ibuf_max_size][4] 设置插入缓冲区的最大大小,或者通过 [innodb_doublewrite_file][5] 将双写缓冲区存储到一个单独的文件。
MySQL 5.6 版中你也可以创建额外的撤销undo表空间让它们使用自己的文件而不是都存储到 ibdata1 里。详情请查看[文档链接][6]。
### 什么引起ibdata1增长迅速 ###
当MySQL出现问题通常我们需要执行的第一个命令是
SHOW ENGINE INNODB STATUS\G
这将展示给我们一些很有价值的信息。我们开始检查**事务**部分,然后我们会发现这个:
---TRANSACTION 36E, ACTIVE 1256288 sec
MySQL thread id 42, OS thread handle 0x7f8baaccc700, query id 7900290 localhost root
show engine innodb status
Trx read view will not see trx with id >= 36F, sees < 36F
这是一个最常见的原因:一个 14 天前创建的相当老的事务。这个状态是**活动的**,这意味着 InnoDB 已经创建了一个数据快照,因此需要在**撤销**日志中保留旧页面,以便维持自事务开始以来数据库的一致性视图。如果你的数据库写负载很大,那就意味着存储了大量的撤销页。
如果你找不到任何长时间运行的事务,你也可以监控 INNODB STATUS 中的另一个变量“**History list length历史记录列表长度**”,它展示了待清除purge操作的数量。这种情况下问题经常发生是因为清除线程或者老版本中的主线程处理撤销记录的速度跟不上这些记录进来的速度。
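你可以用类似下面的命令快速查看这个值(假设当前用户有执行该语句的权限):

$ mysql -e "SHOW ENGINE INNODB STATUS\G" | grep "History list length"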
### 我怎么检查什么被存储到了ibdata1里了 ###
很不幸MySQL 不提供关于哪些数据存储在 ibdata1 共享表空间里的信息但有两个工具会很有帮助。第一个是马克·卡拉汉Mark Callaghan制作的一个修订版本的 innochecksum它发布在[这个漏洞报告][7]中。
它相当易于使用:
# ./innochecksum /var/lib/mysql/ibdata1
0 bad checksum
13 FIL_PAGE_INDEX
19272 FIL_PAGE_UNDO_LOG
230 FIL_PAGE_INODE
1 FIL_PAGE_IBUF_FREE_LIST
892 FIL_PAGE_TYPE_ALLOCATED
2 FIL_PAGE_IBUF_BITMAP
195 FIL_PAGE_TYPE_SYS
1 FIL_PAGE_TYPE_TRX_SYS
1 FIL_PAGE_TYPE_FSP_HDR
1 FIL_PAGE_TYPE_XDES
0 FIL_PAGE_TYPE_BLOB
0 FIL_PAGE_TYPE_ZBLOB
0 other
3 max index_id
全部的 20608 个页中,有 19272 个撤销日志页。**这占了表空间的 93%**。
第二个检查表空间内容的方式是杰里米·科尔Jeremy Cole制作的 [InnoDB Ruby 工具][8]。它是一个更先进的检查 InnoDB 内部结构的工具。例如,我们可以使用 space-summary 参数来得到每个页面及其数据类型的列表,然后用标准的 Unix 工具统计**撤销日志**页的数量:
# innodb_space -f /var/lib/mysql/ibdata1 space-summary | grep UNDO_LOG | wc -l
19272
尽管在这种特殊的情况下 innochecksum 更快、更容易使用,但我还是推荐你使用杰里米的工具去学习更多 InnoDB 内部的数据分布和结构。
好,现在我们知道问题所在。下一个问题:
### 我能怎么解决问题? ###
这个问题的答案很简单:如果你还能提交事务,就提交吧。如果不能,你就必须杀掉线程,让回滚开始。这将停止 ibdata1 的增长,但很明显,你的软件存在漏洞或者出了一些错误。现在你知道了如何鉴定问题所在,接下来需要用你自己的调试工具或者通用查询日志找出是谁或者什么引起的问题。
如果问题发生在清除线程,解决方法通常是升级到新版本,新版中使用一个独立的清除线程替代主线程。更多信息查看[文档链接][9]
### 有什么方法恢复已使用的空间么? ###
没有目前不可能有一个容易并且快速的方法。InnoDB表空间从不收缩...见[10年老漏洞报告][10]最新更新自詹姆斯(谢谢):
当你删除一些行时,这些页会被标记为已删除,稍后可以重用,但是这些空间永远不会被回收。唯一的办法是用新的 ibdata1 重建数据库:要做到这一点,你需要用 mysqldump 做一个完整的逻辑备份,然后停止 MySQL 并删除所有数据库、ib_logfile*、ibdata1* 文件。当你再启动 MySQL 的时候,它会创建一个新的共享表空间。最后,恢复逻辑备份。
### 总结 ###
当 ibdata1 文件增长太快时,这通常是 MySQL 里长时间运行的被遗忘的事务引起的。尝试尽快解决这个问题(提交或者杀死事务),因为如果不经历痛苦而缓慢的 mysqldump 过程,你就无法回收浪费的磁盘空间。
监控数据库避免这些问题也是非常推荐的。我们的[MySQL监控插件][11]包括一个Nagios脚本如果发现了一个太老的运行事务它可以提醒你。
--------------------------------------------------------------------------------
via: https://www.percona.com/blog/2013/08/20/why-is-the-ibdata1-file-continuously-growing-in-mysql/
作者:[Miguel Angel Nieto][a]
译者:[wyangsun](https://github.com/wyangsun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.percona.com/blog/author/miguelangelnieto/
[1]:https://www.percona.com/products/mysql-support
[2]:http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
[3]:https://www.percona.com/software/percona-server
[4]:https://www.percona.com/doc/percona-server/5.5/scalability/innodb_insert_buffer.html#innodb_ibuf_max_size
[5]:https://www.percona.com/doc/percona-server/5.5/performance/innodb_doublewrite_path.html?id=percona-server:features:percona_innodb_doublewrite_path#innodb_doublewrite_file
[6]:http://dev.mysql.com/doc/refman/5.6/en/innodb-performance.html#innodb-undo-tablespace
[7]:http://bugs.mysql.com/bug.php?id=57611
[8]:https://github.com/jeremycole/innodb_ruby
[9]:http://dev.mysql.com/doc/innodb/1.1/en/innodb-improved-purge-scheduling.html
[10]:http://bugs.mysql.com/bug.php?id=1341
[11]:https://www.percona.com/software/percona-monitoring-plugins

View File

@ -0,0 +1,80 @@
如何修复ubuntu 14.04中检测到系统程序错误的问题
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/system_program_Problem_detected.jpeg)
在过去的几个星期,(几乎)每次都有消息 **Ubuntu 15.04在启动时检测到系统程序错误system program problem detected on startup in Ubuntu 15.04)** 跑出来“欢迎”我。那时我是直接忽略掉它的,但是这种情况到了某个时刻,它就让人觉得非常烦人了!
> 检测到系统程序错误(System program problem detected)
>
> 你想立即报告这个问题吗?
>
> ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/System_Program_Problem_Detected.png)
我肯定地知道如果你是一个Ubuntu用户你可能曾经也遇到过这个恼人的弹窗。在本文中我们将探讨在Ubuntu 14.04和15.04中遇到"检测到系统程序错误(system program problem detected)"时 应该怎么办。
### 怎么解决Ubuntu中"检测到系统程序错误"的错误 ###
#### 那么这个通知到底是关于什么的? ####
大体上讲它是在告知你你的系统的一部分崩溃了。可别因为“崩溃”这个词而恐慌。这不是一个严重的问题你的系统还是完完全全可用的。只是在以前的某个时刻某个程序崩溃了而Ubuntu想让你决定要不要把这个问题报告给开发者这样他们就能够修复这个问题。
#### 那么,我们点了“报告错误”的按钮后,它以后就不再显示了?####
不,不是的!即使你点了“报告错误”按钮,最后你还是会被一个如下的弹窗再次“欢迎”:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Internal_error.png)
[对不起Ubuntu 发生了一个内部错误Sorry, Ubuntu has experienced an internal error][1]的弹窗来自 ApportApport 是 Ubuntu 中错误信息的收集报告系统,详见 Ubuntu Wiki 中的 Apport 篇,译者注),它会进一步打开网页浏览器,然后你可以通过登录或创建 [Launchpad][2] 帐户来填写一份漏洞Bug报告文件。你看这是一个复杂的过程它要花整整四步来完成。
#### 但是我想帮助开发者,让他们知道这个漏洞啊 !####
你这样想的确非常地周到体贴,而且这样做也是正确的。但是这样做的话,存在两个问题。第一,存在非常高的概率,这个漏洞已经被报告过了;第二,即使你报告了个这次崩溃,也无法保证你不会再看到它。
#### 那么,你的意思就是说别报告这次崩溃了?####
也不对。如果你想的话在你第一次看到它的时候报告它。你可以在上面图片显示的“显示细节Show Details)”中查看崩溃的程序。但是如果你总是看到它或者你不想报告漏洞Bug),那么我建议你还是一次性摆脱这个问题吧。
### 修复Ubuntu中“检测到系统程序错误”的错误 ###
这些错误报告被存放在Ubuntu中目录/var/crash中。如果你翻看这个目录的话应该可以看到有一些以crash结尾的文件。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Crash_reports_Ubuntu.jpeg)
我的建议是删除这些错误报告。打开一个终端,执行下面的命令:
sudo rm /var/crash/*
这个操作会删除所有在/var/crash目录下的所有内容。这样你就不会再被这些报告以前程序错误的弹窗所扰。但是如果有一个程序又崩溃了你就会再次看到“检测到系统程序错误”的错误。你可以再次删除这些报告文件或者你可以禁用Apport来彻底地摆脱这个错误弹窗。
#### 彻底地摆脱Ubuntu中的系统错误弹窗 ####
如果你这样做,系统中任何程序崩溃时,系统都不会再通知你。如果你想问问我的看法的话,我会说,这不是一件坏事,除非你愿意填写错误报告。如果你不想填写错误报告,那么这些错误通知存不存在都不会有什么区别。
要禁止Apport并且彻底地摆脱Ubuntu系统中的程序崩溃报告打开一个终端输入以下命令
gksu gedit /etc/default/apport
这个文件的内容是:
# set this to 0 to disable apport, or to 1 to enable it
# 设置0表示禁用Apportw或者1开启它。译者注下同。
# you can temporarily override this with
# 你可以用下面的命令暂时关闭它:
# sudo service apport start force_start=1
enabled=1
把**enabled=1**改为**enabled=0**,保存并关闭文件。完成之后,你就再也不会看到弹窗报告错误了。很显然,如果我们想重新开启错误报告功能,只要再打开这个文件,把 enabled 设置为 1 就可以了。
#### 对你有效吗? ####
我希望这篇教程能够帮助你修复Ubuntu 14.04和Ubuntu 15.04中检测到系统程序错误的问题。如果这个小窍门帮你摆脱了这个烦人的问题,请让我知道。
--------------------------------------------------------------------------------
via: http://itsfoss.com/how-to-fix-system-program-problem-detected-ubuntu/
作者:[Abhishek][a]
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/how-to-solve-sorry-ubuntu-12-04-has-experienced-an-internal-error/
[2]:https://launchpad.net/

View File

@ -0,0 +1,149 @@
如何管理Vim插件
================================================================================
Vim 是 Linux 上一个轻量级的通用文本编辑器。虽然它开始时的学习曲线对于一般的 Linux 用户来说可能很困难但比起它的好处这些付出完全是值得的。随着功能的增长在插件工具的帮助下Vim 是完全可定制的。但是,由于它的高级功能配置,你需要花一些时间去了解它的插件系统,然后才能够有效地个性化定制 Vim。幸运的是我们已经有一些工具能够使我们在使用 Vim 插件时更加轻松,而我日常所使用的就是 Vundle。
### 什么是Vundle ###
[Vundle][1]是一个vim插件管理器用于支持Vim包。Vundle能让你很简单地实现插件的安装升级搜索或者清除。它还能管理你的运行环境并且在标签方面提供帮助。
### 安装Vundle ###
首先如果你的Linux系统上没有Git的话先[安装Git][2].
接着创建一个目录Vim的插件将会被下载并且安装在这个目录上。默认情况下这个目录为~/.vim/bundle。
$ mkdir -p ~/.vim/bundle
现在安装Vundle如下。注意Vundle本身也是一个vim插件。因此我们同样把vundle安装到之前创建的目录~/.vim/bundle下。
$ git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim
### 配置Vundle ###
现在配置你的.vimrc文件如下
set nocompatible " This is required
" 这是被要求的。(译注:中文注释为译者所加,下同。)
filetype off " This is required
" 这是被要求的。
" Here you set up the runtime path 
" 在这里设置你的运行时环境的路径。
set rtp+=~/.vim/bundle/Vundle.vim
" Initialize vundle 
" 初始化vundle
call vundle#begin()
" This should always be the first 
" 这一行应该永远放在前面。
Plugin 'gmarik/Vundle.vim'
" This examples are from https://github.com/gmarik/Vundle.vim README
" 这个示范来自https://github.com/gmarik/Vundle.vim README
Plugin 'tpope/vim-fugitive'
" Plugin from http://vim-scripts.org/vim/scripts.html
" 取自http://vim-scripts.org/vim/scripts.html的插件
Plugin 'L9'
" Git plugin not hosted on GitHub
" Git插件但并不在GitHub上。
Plugin 'git://git.wincent.com/command-t.git'
"git repos on your local machine (i.e. when working on your own plugin)
"本地计算机上的Git仓库路径 例如当你在开发你自己的插件时
Plugin 'file:///home/gmarik/path/to/plugin'
" The sparkup vim script is in a subdirectory of this repo called vim.
" Pass the path to set the runtimepath properly.
" vim脚本sparkup存放在这个名叫vim的仓库下的一个子目录中。
" 提交这个路径来正确地设置运行时路径
Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}
" Avoid a name conflict with L9
" 避免与L9发生名字上的冲突
Plugin 'user/L9', {'name': 'newL9'}
"Every Plugin should be before this line
"所有的插件都应该在这一行之前。
call vundle#end() " required 被要求的
容我简单解释一下上面的设置默认情况下Vundle将从github.com或者vim-scripts.org下载和安装vim插件。你也可以改变这个默认行为。
要从 GitHub 安装插件(译者注:下同):
Plugin 'user/plugin'
要从http://vim-scripts.org/vim/scripts.html处安装:
Plugin 'plugin_name'
要从另外一个git仓库中安装:
Plugin 'git://git.another_repo.com/plugin'
从本地文件中安装:
Plugin 'file:///home/user/path/to/plugin'
你同样可以定制其它东西,例如你的插件运行时路径,当你自己在编写一个插件时,或者你只是想从其它目录——而不是~/.vim——中加载插件时这样做就非常有用。
Plugin 'rstacruz/sparkup', {'rtp': 'another_vim_path/'}
如果你有同名的插件,你可以重命名你的插件,这样它们就不会发生冲突了。
Plugin 'user/plugin', {'name': 'newPlugin'}
### 使用 Vundle 命令 ###
一旦你用vundle设置好你的插件你就可以通过几个vundle命令来安装升级搜索插件或者清除没有用的插件。
#### 安装一个新的插件 ####
所有列在你的 .vimrc 文件中的插件都会被 PluginInstall 命令安装。你也可以传递一个插件名给它,来安装某个特定的插件。
:PluginInstall
:PluginInstall <插件名>
![](https://farm1.staticflickr.com/559/18998707843_438cd55463_c.jpg)
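你也可以不进入交互界面,直接在 shell 中完成安装Vundle 支持以命令行参数方式调用 Vim

$ vim +PluginInstall +qall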
#### 清除没有用的插件 ####
如果你有任何没有用到的插件,你可以通过 PluginClean 命令来删除它。
:PluginClean
![](https://farm4.staticflickr.com/3814/19433047689_17d9822af6_c.jpg)
#### 查找一个插件 ####
如果你想从提供的插件清单中安装一个插件,搜索功能会很有用
:PluginSearch <文本>
![](https://farm1.staticflickr.com/541/19593459846_75b003443d_c.jpg)
在搜索的时候,你可以在交互式分割窗口中安装、清除、重新搜索或者重新加载插件清单。安装后的插件不会自动加载生效,要使其加载生效,可以将它们添加进你的 .vimrc 文件中。
### 总结 ###
Vim 是一个妙不可言的工具。它不单单是一个能够使你的工作更加顺畅高效的默认文本编辑器,同时它还能够摇身一变,成为现存的几乎任何一门编程语言的 IDE。
注意,有一些网站能帮你找到合适的 vim 插件。猛击 [http://www.vim-scripts.org][3]、GitHub 或者 [http://www.vimawesome.com][4] 获取新的脚本或插件。同时记得查阅你所用插件的帮助文档。
和你最爱的编辑器一起嗨起来吧!
--------------------------------------------------------------------------------
via: http://xmodulo.com/manage-vim-plugins.html
作者:[Christopher Valerio][a]
译者:[XLCYun(袖里藏云)](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/valerio
[1]:https://github.com/VundleVim/Vundle.vim
[2]:http://ask.xmodulo.com/install-git-linux.html
[3]:http://www.vim-scripts.org/
[4]:http://www.vimawesome.com/