+
+### Chef Server Installation and Configuration ###
+
+Chef Server is the central core component; it stores recipes as well as other configuration data and interacts with the workstations and nodes. Let's download the installation media by selecting the latest version of Chef server from its official web link.
+
+We will fetch the installation package and install it using the following commands.
+
+**1) Downloading Chef Server**
+
+ root@ubuntu-14-chef:/tmp# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/chef-server-core_12.1.0-1_amd64.deb
+
+**2) Installing Chef Server**
+
+ root@ubuntu-14-chef:/tmp# dpkg -i chef-server-core_12.1.0-1_amd64.deb
+
+**3) Reconfigure Chef Server**
+
+Now run the following command to start all of the Chef server services. This step may take a few minutes to complete, as the server is composed of many different services that work together to create a functioning system.
+
+ root@ubuntu-14-chef:/tmp# chef-server-ctl reconfigure
+
+The 'chef-server-ctl reconfigure' startup command needs to be run twice, so that the installation ends with the following completion output.
+
+ Chef Client finished, 342/350 resources updated in 113.71139964 seconds
+ opscode Reconfigured!
+
+**4) Reboot OS**
+
+Once the installation is complete, reboot the operating system. Without this step you might get the following SSL_connect error during user creation.
+
+ ERROR: Errno::ECONNRESET: Connection reset by peer - SSL_connect
+
+**5) Create new Admin User**
+
+Run the following command to create a new administrator user with its profile settings. The user's RSA private key is generated automatically during creation and should be saved to a safe location; the --filename option writes it to the specified path.
+
+ root@ubuntu-14-chef:/tmp# chef-server-ctl user-create kashi kashi kashi kashif.fareedi@gmail.com kashi123 --filename /root/kashi.pem
+
+### Chef Manage Setup on Chef Server ###
+
+Chef Manage is a management console for Enterprise Chef that enables a web-based user interface for visualizing and managing nodes, data bags, roles, environments, cookbooks and role-based access control (RBAC).
+
+**1) Downloading Chef Manage**
+
+Copy the link for Chef Manage from the official web site and download the Chef Manage package.
+
+ root@ubuntu-14-chef:~# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/opscode-manage_1.17.0-1_amd64.deb
+
+**2) Installing Chef Manage**
+
+Let's install it with the command below, pointing the --path option at the directory that holds the downloaded package (root's home directory here).
+
+ root@ubuntu-14-chef:~# chef-server-ctl install opscode-manage --path /root
+
+**3) Reconfigure Chef Manage and Chef Server**
+
+Once the installation is complete, we need to reconfigure the Chef Manage and Chef server services by executing the following commands.
+
+ root@ubuntu-14-chef:~# opscode-manage-ctl reconfigure
+ root@ubuntu-14-chef:~# chef-server-ctl reconfigure
+
+### Chef Manage Web Console ###
+
+We can access the Chef Manage web console from localhost as well as its FQDN, and log in with the admin user account created earlier.
+
+![chef amanage](http://blog.linoxide.com/wp-content/uploads/2015/07/5-chef-web.png)
+
+**1) Create New Organization with Chef Manage**
+
+You will be asked to create a new organization or accept an invitation from an existing one. Let's create a new organization by providing its short and full names as shown.
+
+![Create Org](http://blog.linoxide.com/wp-content/uploads/2015/07/7-create-org.png)
+
+**2) Create New Organization from the Command Line**
+
+We can also create a new organization from the command line by executing the following command.
+
+ root@ubuntu-14-chef:~# chef-server-ctl org-create linux Linoxide Linux Org. --association_user kashi --filename linux.pem
+
+### Configuration and Setup of the Workstation ###
+
+Having completed the Chef server installation, we will now set up a workstation to create and configure recipes, cookbooks, attributes, and any other changes we want to make to our Chef configuration.
+
+**1) Create New User and Organization on Chef Server**
+
+To set up the workstation, we create a dedicated user and organization for it from the command line.
+
+ root@ubuntu-14-chef:~# chef-server-ctl user-create bloger Bloger Kashif bloger.kashif@gmail.com bloger123 --filename bloger.pem
+
+ root@ubuntu-14-chef:~# chef-server-ctl org-create blogs Linoxide Blogs Inc. --association_user bloger --filename blogs.pem
+
+**2) Download Starter Kit for Workstation**
+
+Now download and save the starter kit from the Chef Manage web console onto the workstation; we will use it to work with the Chef server.
+
+![Starter Kit](http://blog.linoxide.com/wp-content/uploads/2015/07/8-download-kit.png)
+
+**3) Click to "Proceed" with starter kit download**
+
+![starter kit](http://blog.linoxide.com/wp-content/uploads/2015/07/9-download-kit.png)
+
+### Chef Development Kit Setup for Workstation ###
+
+The Chef Development Kit is a software suite containing all the development tools needed to code for Chef. It combines the best-of-breed tools developed by the Chef community with the Chef client.
+
+**1) Downloading Chef DK**
+
+We can download the Chef Development Kit from its official web link, choosing the package for the required operating system.
+
+![Chef DK](http://blog.linoxide.com/wp-content/uploads/2015/07/10-CDK.png)
+
+Copy the link and download the package with the wget command.
+
+ root@ubuntu-15-WKS:~# wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.6.2-1_amd64.deb
+
+**2) Installing Chef Development Kit**
+
+Install the Chef Development Kit using the dpkg command.
+
+ root@ubuntu-15-WKS:~# dpkg -i chefdk_0.6.2-1_amd64.deb
+
+**3) Chef DK Verification**
+
+Verify that the client installed properly by using the command below.
+
+ root@ubuntu-15-WKS:~# chef verify
+
+----------
+
+ Running verification for component 'berkshelf'
+ Running verification for component 'test-kitchen'
+ Running verification for component 'chef-client'
+ Running verification for component 'chef-dk'
+ Running verification for component 'chefspec'
+ Running verification for component 'rubocop'
+ Running verification for component 'fauxhai'
+ Running verification for component 'knife-spork'
+ Running verification for component 'kitchen-vagrant'
+ Running verification for component 'package installation'
+ Running verification for component 'openssl'
+ ..............
+ ---------------------------------------------
+ Verification of component 'rubocop' succeeded.
+ Verification of component 'knife-spork' succeeded.
+ Verification of component 'openssl' succeeded.
+ Verification of component 'berkshelf' succeeded.
+ Verification of component 'chef-dk' succeeded.
+ Verification of component 'fauxhai' succeeded.
+ Verification of component 'test-kitchen' succeeded.
+ Verification of component 'kitchen-vagrant' succeeded.
+ Verification of component 'chef-client' succeeded.
+ Verification of component 'chefspec' succeeded.
+ Verification of component 'package installation' succeeded.
+
+**Connecting to Chef Server**
+
+Create the ~/.chef directory and copy the user and organization .pem files into it from the Chef server.
+
+ root@ubuntu-14-chef:~# scp bloger.pem blogs.pem kashi.pem linux.pem root@172.25.10.172:/.chef/
+
+----------
+
+ root@172.25.10.172's password:
+ bloger.pem 100% 1674 1.6KB/s 00:00
+ blogs.pem 100% 1674 1.6KB/s 00:00
+ kashi.pem 100% 1678 1.6KB/s 00:00
+ linux.pem 100% 1678 1.6KB/s 00:00
+
+**Knife Configurations to Manage your Chef Environment**
+
+Now create "~/.chef/knife.rb" with following content as configured in previous steps.
+
+ root@ubuntu-15-WKS:/.chef# vim knife.rb
+ current_dir = File.dirname(__FILE__)
+
+ log_level :info
+ log_location STDOUT
+ node_name "kashi"
+ client_key "#{current_dir}/kashi.pem"
+ validation_client_name "kashi-linux"
+ validation_key "#{current_dir}/linux.pem"
+ chef_server_url "https://172.25.10.173/organizations/linux"
+ cache_type 'BasicFile'
+ cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
+ cookbook_path ["#{current_dir}/../cookbooks"]
+
+Create "~/cookbooks" folder for cookbooks as specified knife.rb file.
+
+ root@ubuntu-15-WKS:/# mkdir cookbooks
+
+**Testing with Knife Configurations**
+
+Run "knife user list" and "knife client list" commands to verify whether knife configuration is working.
+
+ root@ubuntu-15-WKS:/.chef# knife user list
+
+You might get the following error the first time you run this command. It occurs because we do not have the Chef server's SSL certificate on our workstation.
+
+ ERROR: SSL Validation failure connecting to host: 172.25.10.173 - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
+ ERROR: Could not establish a secure connection to the server.
+ Use `knife ssl check` to troubleshoot your SSL configuration.
+ If your Chef Server uses a self-signed certificate, you can use
+ `knife ssl fetch` to make knife trust the server's certificates.
+
+To recover from the above error, run the following command to fetch the SSL certificates, and then run the knife user and client list commands again; this time they should succeed.
+
+ root@ubuntu-15-WKS:/.chef# knife ssl fetch
+ WARNING: Certificates from 172.25.10.173 will be fetched and placed in your trusted_cert
+ directory (/.chef/trusted_certs).
+
+ Knife has no means to verify these are the correct certificates. You should
+ verify the authenticity of these certificates after downloading.
+
+ Adding certificate for ubuntu-14-chef.test.com in /.chef/trusted_certs/ubuntu-14-chef_test_com.crt
+
+Now that the SSL certificates have been fetched with the above command, let's run the command again.
+
+    root@ubuntu-15-WKS:/.chef# knife client list
+ kashi-linux
+
+### Configuring a New Node to Interact with the Chef Server ###
+
+Nodes run chef-client, which performs all of the infrastructure automation. Now that the Chef server and knife workstation are configured, it's time to start adding new servers to our Chef environment by configuring a node to interact with the Chef server.
+
+To configure a new node to work with the Chef server, use the command below.
+
+ root@ubuntu-15-WKS:~# knife bootstrap 172.25.10.170 --ssh-user root --ssh-password kashi123 --node-name mydns
+
+----------
+
+ Doing old-style registration with the validation key at /.chef/linux.pem...
+ Delete your validation key in order to use your user credentials instead
+
+ Connecting to 172.25.10.170
+ 172.25.10.170 Installing Chef Client...
+ 172.25.10.170 --2015-07-04 22:21:16-- https://www.opscode.com/chef/install.sh
+ 172.25.10.170 Resolving www.opscode.com (www.opscode.com)... 184.106.28.91
+ 172.25.10.170 Connecting to www.opscode.com (www.opscode.com)|184.106.28.91|:443... connected.
+ 172.25.10.170 HTTP request sent, awaiting response... 200 OK
+ 172.25.10.170 Length: 18736 (18K) [application/x-sh]
+ 172.25.10.170 Saving to: ‘STDOUT’
+ 172.25.10.170
+ 100%[======================================>] 18,736 --.-K/s in 0s
+ 172.25.10.170
+ 172.25.10.170 2015-07-04 22:21:17 (200 MB/s) - written to stdout [18736/18736]
+ 172.25.10.170
+ 172.25.10.170 Downloading Chef 12 for ubuntu...
+ 172.25.10.170 downloading https://www.opscode.com/chef/metadata?v=12&prerelease=false&nightlies=false&p=ubuntu&pv=14.04&m=x86_64
+ 172.25.10.170 to file /tmp/install.sh.26024/metadata.txt
+ 172.25.10.170 trying wget...
+
+Afterwards we can see the newly created node in the knife node list, and a new entry in the client list as well, since bootstrapping also creates a client for the node.
+
+ root@ubuntu-15-WKS:~# knife node list
+ mydns
+
+Similarly, we can add any number of nodes to our Chef infrastructure by providing their SSH credentials to the same knife bootstrap command.
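+
+The per-host bootstrap step can also be scripted for a batch of machines. Below is a minimal sketch that only assembles the commands for review; the IP addresses and the node-name scheme are placeholders, and a real invocation would also pass SSH credentials as shown above.

```shell
# Illustrative only: build one knife bootstrap command per host. The IPs
# and node-name scheme are placeholders; to actually run the commands,
# invoke knife directly inside the loop (adding --ssh-user/--ssh-password).
hosts='172.25.10.170 172.25.10.171'

cmds=''
for ip in $hosts; do
    # Derive a node name from the last octet of the address.
    cmds="${cmds}knife bootstrap $ip --node-name node-${ip##*.}
"
done

printf '%s' "$cmds"
```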
+
+### Conclusion ###
+
+In this detailed article we covered the basics of the Chef configuration management tool: an overview of its components along with their installation and configuration. We hope you have enjoyed learning about the setup of the Chef server with its workstation and client nodes.
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/ubuntu-how-to/install-configure-chef-ubuntu-14-04-15-04/
+
+作者:[Kashif Siddique][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
\ No newline at end of file
diff --git a/sources/tech/20150717 How to collect NGINX metrics - Part 2.md b/sources/tech/20150717 How to collect NGINX metrics - Part 2.md
new file mode 100644
index 0000000000..8d83b3a0f6
--- /dev/null
+++ b/sources/tech/20150717 How to collect NGINX metrics - Part 2.md
@@ -0,0 +1,237 @@
+How to collect NGINX metrics - Part 2
+================================================================================
+![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png)
+
+### How to get the NGINX metrics you need ###
+
+How you go about capturing metrics depends on which version of NGINX you are using, as well as which metrics you wish to access. (See [the companion article][1] for an in-depth exploration of NGINX metrics.) Free, open-source NGINX and the commercial product NGINX Plus both have status modules that report metrics, and NGINX can also be configured to report certain metrics in its logs.
+
+#### Metrics collection: NGINX (open-source) ####
+
+Open-source NGINX exposes several basic metrics about server activity on a simple status page, provided that you have the HTTP [stub status module][2] enabled. To check if the module is already enabled, run:
+
+ nginx -V 2>&1 | grep -o with-http_stub_status_module
+
+The status module is enabled if you see with-http_stub_status_module as output in the terminal.
+
+If that command returns no output, you will need to enable the status module. You can use the --with-http_stub_status_module configuration parameter when [building NGINX from source][3]:
+
+ ./configure \
+ … \
+ --with-http_stub_status_module
+ make
+ sudo make install
+
+After verifying the module is enabled or enabling it yourself, you will also need to modify your NGINX configuration to set up a locally accessible URL (e.g., /nginx_status) for the status page:
+
+ server {
+ location /nginx_status {
+ stub_status on;
+
+ access_log off;
+ allow 127.0.0.1;
+ deny all;
+ }
+ }
+
+Note: The server blocks of the NGINX config are usually found not in the master configuration file (e.g., /etc/nginx/nginx.conf) but in supplemental configuration files that are referenced by the master config. To find the relevant configuration files, first locate the master config by running:
+
+ nginx -t
+
+Open the master configuration file listed, and look for lines beginning with include near the end of the http block, such as:
+
+ include /etc/nginx/conf.d/*.conf;
+
+In one of the referenced config files you should find the main server block, which you can modify as above to configure NGINX metrics reporting. After changing any configurations, reload the configs by executing:
+
+ nginx -s reload
+
+Now you can view the status page to see your metrics:
+
+ Active connections: 24
+ server accepts handled requests
+ 1156958 1156958 4491319
+    Reading: 0 Writing: 18 Waiting: 6
+
+Note that if you are trying to access the status page from a remote machine, you will need to whitelist the remote machine’s IP address in your status configuration, just as 127.0.0.1 is whitelisted in the configuration snippet above.
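+
+For example, a variant of the earlier location block that also admits a monitoring host's network might look like this (the 192.168.1.0/24 range below is purely illustrative; substitute your own addresses):

```nginx
server {
    location /nginx_status {
        stub_status on;

        access_log off;
        allow 127.0.0.1;
        allow 192.168.1.0/24;  # example monitoring subnet; replace with yours
        deny all;
    }
}
```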
+
+The NGINX status page is an easy way to get a quick snapshot of your metrics, but for continuous monitoring you will need to automatically record that data at regular intervals. Parsers for the NGINX status page already exist for monitoring tools such as [Nagios][4] and [Datadog][5], as well as for the statistics collection daemon [collectD][6].
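+
+The status page is also simple enough to parse with standard shell tools if you want a quick, script-based reading. The sketch below runs on the sample output shown above; on a live server you would feed it the body of the status URL instead (e.g., the output of `curl -s http://127.0.0.1/nginx_status`, assuming that is where you configured the page).

```shell
# Parse stub_status output into name=value pairs. The sample text mirrors
# the status page shown above; replace it with the fetched page body on a
# real server.
status='Active connections: 24
server accepts handled requests
 1156958 1156958 4491319
Reading: 0 Writing: 18 Waiting: 6'

metrics="$(printf '%s\n' "$status" | awk '
    NR == 1 { print "active=" $3 }
    NR == 3 { print "accepts=" $1; print "handled=" $2; print "requests=" $3 }
    NR == 4 { print "reading=" $2; print "writing=" $4; print "waiting=" $6 }')"

printf '%s\n' "$metrics"
```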
+
+#### Metrics collection: NGINX Plus ####
+
+The commercial NGINX Plus provides [many more metrics][7] through its ngx_http_status_module than are available in open-source NGINX. Among the additional metrics exposed by NGINX Plus are bytes streamed, as well as information about upstream systems and caches. NGINX Plus also reports counts of all HTTP status code types (1xx, 2xx, 3xx, 4xx, 5xx). A sample NGINX Plus status board is available [here][8].
+
+![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png)
+
+*Note: the “Active” connections on the NGINX Plus status dashboard are defined slightly differently than the Active state connections in the metrics collected via the open-source NGINX stub status module. In NGINX Plus metrics, Active connections do not include connections in the Waiting state (aka Idle connections).*
+
+NGINX Plus also reports [metrics in JSON format][9] for easy integration with other monitoring systems. With NGINX Plus, you can see the metrics and health status [for a given upstream grouping of servers][10], or drill down to get a count of just the response codes [from a single server][11] in that upstream:
+
+ {"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055}
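+
+When jq or another JSON tool is not available, a flat, single-line response like the one above can be picked apart with awk. This is only a sketch that assumes exactly the flat format shown; for nested status output, use a real JSON parser.

```shell
# Extract the 5xx count from the flat JSON shown above. This assumes the
# single-line format of a per-server responses query; for anything more
# nested, prefer a real JSON parser such as jq.
json='{"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055}'

errors_5xx="$(printf '%s' "$json" | awk -F'"5xx":' '{ split($2, a, ","); print a[1] }')"
echo "5xx responses: $errors_5xx"
```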
+
+To enable the NGINX Plus metrics dashboard, you can add a status server block inside the http block of your NGINX configuration. ([See the section above][12] on collecting metrics from open-source NGINX for instructions on locating the relevant config files.) For example, to set up a status dashboard at http://your.ip.address:8080/status.html and a JSON interface at http://your.ip.address:8080/status, you would add the following server block:
+
+ server {
+ listen 8080;
+ root /usr/share/nginx/html;
+
+ location /status {
+ status;
+ }
+
+ location = /status.html {
+ }
+ }
+
+The status pages should be live once you reload your NGINX configuration:
+
+ nginx -s reload
+
+The official NGINX Plus docs have [more details][13] on how to configure the expanded status module.
+
+#### Metrics collection: NGINX logs ####
+
+NGINX’s [log module][14] writes configurable access logs to a destination of your choosing. You can customize the format of your logs and the data they contain by [adding or subtracting variables][15]. The simplest way to capture detailed logs is to add the following line in the server block of your config file (see [the section][16] on collecting metrics from open-source NGINX for instructions on locating your config files):
+
+ access_log logs/host.access.log combined;
+
+After changing any NGINX configurations, reload the configs by executing:
+
+ nginx -s reload
+
+The “combined” log format, included by default, captures [a number of key data points][17], such as the actual HTTP request and the corresponding response code. In the example logs below, NGINX logged a 200 (success) status code for a request for /index.html and a 404 (not found) error for the nonexistent /fail.
+
+ 127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari 537.36"
+
+ 127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
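+
+Once logs in this format are being written, simple aggregations fall out of standard tools. As a sketch, the snippet below counts responses per status code (the ninth whitespace-separated field in the combined format) over two sample lines with abbreviated user-agent strings; pointing the same awk program at your real access log works the same way.

```shell
# Count responses per HTTP status code in combined-format access logs.
# $9 is the status-code field; against a real log you would run
# `awk '{ n[$9]++ } END { ... }' logs/host.access.log` instead.
log='127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "UA"
127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "UA"'

counts="$(printf '%s\n' "$log" | awk '{ n[$9]++ } END { for (c in n) print c, n[c] }' | sort)"
printf '%s\n' "$counts"
```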
+
+You can log request processing time as well by adding a new log format to the http block of your NGINX config file:
+
+ log_format nginx '$remote_addr - $remote_user [$time_local] '
+ '"$request" $status $body_bytes_sent $request_time '
+ '"$http_referer" "$http_user_agent"';
+
+And by adding or modifying the access_log line in the server block of your config file:
+
+ access_log logs/host.access.log nginx;
+
+After reloading the updated configs (by running nginx -s reload), your access logs will include response times, as seen below. The units are seconds, with millisecond resolution. In this instance, the server received a request for /big.pdf, returning a 206 (success) status code after sending 33973115 bytes. Processing the request took 0.202 seconds (202 milliseconds):
+
+ 127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
+
+You can use a variety of tools and services to parse and analyze NGINX logs. For instance, [rsyslog][18] can monitor your logs and pass them to any number of log-analytics services; you can use a free, open-source tool such as [logstash][19] to collect and analyze logs; or you can use a unified logging layer such as [Fluentd][20] to collect and parse your NGINX logs.
+
+### Conclusion ###
+
+Which NGINX metrics you monitor will depend on the tools available to you, and whether the insight provided by a given metric justifies the overhead of monitoring that metric. For instance, is measuring error rates important enough to your organization to justify investing in NGINX Plus or implementing a system to capture and analyze logs?
+
+At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][21], and get started right away with a [free trial of Datadog][22].
+
+----------
+
+Source Markdown for this post is available [on GitHub][23]. Questions, corrections, additions, etc.? Please [let us know][24].
+
+--------------------------------------------------------------------------------
+
+via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
+
+作者:K Young
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
+[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
+[3]:http://wiki.nginx.org/InstallOptions
+[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx
+[5]:http://docs.datadoghq.com/integrations/nginx/
+[6]:https://collectd.org/wiki/index.php/Plugin:nginx
+[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data
+[8]:http://demo.nginx.com/status.html
+[9]:http://demo.nginx.com/status
+[10]:http://demo.nginx.com/status/upstreams/demoupstreams
+[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses
+[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
+[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example
+[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html
+[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
+[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
+[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
+[18]:http://www.rsyslog.com/
+[19]:https://www.elastic.co/products/logstash
+[20]:http://www.fluentd.org/
+[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
+[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up
+[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md
+[24]:https://github.com/DataDog/the-monitor/issues
\ No newline at end of file
diff --git a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md
new file mode 100644
index 0000000000..949fd3d949
--- /dev/null
+++ b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md
@@ -0,0 +1,150 @@
+How to monitor NGINX with Datadog - Part 3
+================================================================================
+![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)
+
+If you’ve already read [our post on monitoring NGINX][1], you know how much information you can gain about your web environment from just a handful of metrics. And you’ve also seen just how easy it is to start collecting metrics from NGINX on an ad hoc basis. But to implement comprehensive, ongoing NGINX monitoring, you will need a robust monitoring system to store and visualize your metrics, and to alert you when anomalies happen. In this post, we’ll show you how to set up NGINX monitoring in Datadog so that you can view your metrics on customizable dashboards like this:
+
+![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png)
+
+Datadog allows you to build graphs and alerts around individual hosts, services, processes, metrics—or virtually any combination thereof. For instance, you can monitor all of your NGINX hosts, or all hosts in a certain availability zone, or you can monitor a single key metric being reported by all hosts with a certain tag. This post will show you how to:
+
+- Monitor NGINX metrics on Datadog dashboards, alongside all your other systems
+- Set up automated alerts to notify you when a key metric changes dramatically
+
+### Configuring NGINX ###
+
+To collect metrics from NGINX, you first need to ensure that NGINX has an enabled status module and a URL for reporting its status metrics. Step-by-step instructions [for configuring open-source NGINX][2] and [NGINX Plus][3] are available in our companion post on metric collection.
+
+### Integrating Datadog and NGINX ###
+
+#### Install the Datadog Agent ####
+
+The Datadog Agent is [the open-source software][4] that collects and reports metrics from your hosts so that you can view and monitor them in Datadog. Installing the agent usually takes [just a single command][5].
+
+As soon as your Agent is up and running, you should see your host reporting metrics [in your Datadog account][6].
+
+![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png)
+
+#### Configure the Agent ####
+
+Next you’ll need to create a simple NGINX configuration file for the Agent. The location of the Agent’s configuration directory for your OS can be found [here][7].
+
+Inside that directory, at conf.d/nginx.yaml.example, you will find [a sample NGINX config file][8] that you can edit to provide the status URL and optional tags for each of your NGINX instances:
+
+ init_config:
+
+ instances:
+
+ - nginx_status_url: http://localhost/nginx_status/
+ tags:
+ - instance:foo
+
+Once you have supplied the status URLs and any tags, save the config file as conf.d/nginx.yaml.
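+
+If the Agent should watch several NGINX instances, each one gets its own entry under instances. The fragment below is illustrative only; the URLs and tag names are placeholders for your own hosts and tagging scheme.

```yaml
init_config:

instances:
  # One entry per NGINX instance; URLs and tags below are examples.
  - nginx_status_url: http://localhost/nginx_status/
    tags:
      - instance:web-1
  - nginx_status_url: http://10.0.0.2/nginx_status/
    tags:
      - instance:web-2
```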
+
+#### Restart the Agent ####
+
+You must restart the Agent to load your new configuration file. The restart command varies somewhat by platform—see the specific commands for your platform [here][9].
+
+#### Verify the configuration settings ####
+
+To check that Datadog and NGINX are properly integrated, run the Datadog info command. The command for each platform is available [here][10].
+
+If the configuration is correct, you will see a section like this in the output:
+
+ Checks
+ ======
+
+ [...]
+
+ nginx
+ -----
+ - instance #0 [OK]
+ - Collected 8 metrics & 0 events
+
+#### Install the integration ####
+
+Finally, switch on the NGINX integration inside your Datadog account. It’s as simple as clicking the “Install Integration” button under the Configuration tab in the [NGINX integration settings][11].
+
+![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png)
+
+### Metrics! ###
+
+Once the Agent begins reporting NGINX metrics, you will see [an NGINX dashboard][12] among your list of available dashboards in Datadog.
+
+The basic NGINX dashboard displays a handful of graphs encapsulating most of the key metrics highlighted [in our introduction to NGINX monitoring][13]. (Some metrics, notably request processing time, require log analysis and are not available in Datadog.)
+
+You can easily create a comprehensive dashboard for monitoring your entire web stack by adding additional graphs with important metrics from outside NGINX. For example, you might want to monitor host-level metrics on your NGINX hosts, such as system load. To start building a custom dashboard, simply clone the default NGINX dashboard by clicking on the gear near the upper right of the dashboard and selecting “Clone Dash”.
+
+![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png)
+
+You can also monitor your NGINX instances at a higher level using Datadog’s [Host Maps][14]—for instance, color-coding all your NGINX hosts by CPU usage to identify potential hotspots.
+
+![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png)
+
+### Alerting on NGINX metrics ###
+
+Once Datadog is capturing and visualizing your metrics, you will likely want to set up some monitors to automatically keep tabs on your metrics—and to alert you when there are problems. Below we’ll walk through a representative example: a metric monitor that alerts on sudden drops in NGINX throughput.
+
+#### Monitor your NGINX throughput ####
+
+Datadog metric alerts can be threshold-based (alert when the metric exceeds a set value) or change-based (alert when the metric changes by a certain amount). In this case we’ll take the latter approach, alerting when our incoming requests per second drop precipitously. Such drops are often indicative of problems.
+
+1. **Create a new metric monitor**. Select “New Monitor” from the “Monitors” dropdown in Datadog. Select “Metric” as the monitor type.
+
+![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png)
+
+2. **Define your metric monitor**. We want to know when our total NGINX requests per second drop by a certain amount. So we define the metric of interest to be the sum of nginx.net.request_per_s across our infrastructure.
+
+![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png)
+
+3. **Set metric alert conditions**. Since we want to alert on a change, rather than on a fixed threshold, we select “Change Alert.” We’ll set the monitor to alert us whenever the request volume drops by 30 percent or more. Here we use a one-minute window of data to represent the metric’s value “now” and alert on the average change across that interval, as compared to the metric’s value 10 minutes prior.
+
+![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png)
+
+4. **Customize the notification**. If our NGINX request volume drops, we want to notify our team. In this case we will post a notification in the ops team’s chat room and page the engineer on call. In “Say what’s happening”, we name the monitor and add a short message that will accompany the notification to suggest a first step for investigation. We @mention the Slack channel that we use for ops and use @pagerduty to [route the alert to PagerDuty][15].
+
+![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png)
+
+5. **Save the integration monitor**. Click the “Save” button at the bottom of the page. You’re now monitoring a key NGINX [work metric][16], and your on-call engineer will be paged anytime it drops rapidly.
+
+### Conclusion ###
+
+In this post we’ve walked you through integrating NGINX with Datadog to visualize your key metrics and notify your team when your web infrastructure shows signs of trouble.
+
+If you’ve followed along using your own Datadog account, you should now have greatly improved visibility into what’s happening in your web environment, as well as the ability to create automated monitors tailored to your environment, your usage patterns, and the metrics that are most valuable to your organization.
+
+If you don’t yet have a Datadog account, you can sign up for [a free trial][17] and start monitoring your infrastructure, your applications, and your services today.
+
+----------
+
+Source Markdown for this post is available [on GitHub][18]. Questions, corrections, additions, etc.? Please [let us know][19].
+
+------------------------------------------------------------
+
+via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
+
+作者:K Young
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
+[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
+[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus
+[4]:https://github.com/DataDog/dd-agent
+[5]:https://app.datadoghq.com/account/settings#agent
+[6]:https://app.datadoghq.com/infrastructure
+[7]:http://docs.datadoghq.com/guides/basic_agent_usage/
+[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example
+[9]:http://docs.datadoghq.com/guides/basic_agent_usage/
+[10]:http://docs.datadoghq.com/guides/basic_agent_usage/
+[11]:https://app.datadoghq.com/account/settings#integrations/nginx
+[12]:https://app.datadoghq.com/dash/integration/nginx
+[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
+[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/
+[15]:https://www.datadoghq.com/blog/pagerduty/
+[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics
+[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up
+[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md
+[19]:https://github.com/DataDog/the-monitor/issues
\ No newline at end of file
diff --git a/sources/tech/20150717 How to monitor NGINX- Part 1.md b/sources/tech/20150717 How to monitor NGINX- Part 1.md
new file mode 100644
index 0000000000..25270eb5cb
--- /dev/null
+++ b/sources/tech/20150717 How to monitor NGINX- Part 1.md
@@ -0,0 +1,408 @@
+How to monitor NGINX - Part 1
+================================================================================
+![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png)
+
+### What is NGINX? ###
+
+[NGINX][1] (pronounced “engine X”) is a popular HTTP server and reverse proxy server. As an HTTP server, NGINX serves static content very efficiently and reliably, using relatively little memory. As a [reverse proxy][2], it can be used as a single, controlled point of access for multiple back-end servers or for additional applications such as caching and load balancing. NGINX is available as a free, open-source product or in a more full-featured, commercially distributed version called NGINX Plus.
+
+NGINX can also be used as a mail proxy and a generic TCP proxy, but this article does not directly address NGINX monitoring for these use cases.
+
+### Key NGINX metrics ###
+
+By monitoring NGINX you can catch two categories of issues: resource issues within NGINX itself, and also problems developing elsewhere in your web infrastructure. Some of the metrics most NGINX users will benefit from monitoring include **requests per second**, which provides a high-level view of combined end-user activity; **server error rate**, which indicates how often your servers are failing to process seemingly valid requests; and **request processing time**, which describes how long your servers are taking to process client requests (and which can point to slowdowns or other problems in your environment).
+
+More generally, there are at least three key categories of metrics to watch:
+
+- Basic activity metrics
+- Error metrics
+- Performance metrics
+
+Below we’ll break down a few of the most important NGINX metrics in each category, as well as metrics for a fairly common use case that deserves special mention: using NGINX Plus for reverse proxying. We will also describe how you can monitor all of these metrics with your graphing or monitoring tools of choice.
+
+This article references metric terminology [introduced in our Monitoring 101 series][3], which provides a framework for metric collection and alerting.
+
+#### Basic activity metrics ####
+
+Whatever your NGINX use case, you will no doubt want to monitor how many client requests your servers are receiving and how those requests are being processed.
+
+NGINX Plus can report basic activity metrics exactly like open-source NGINX, but it also provides a secondary module that reports metrics slightly differently. We discuss open-source NGINX first, then the additional reporting capabilities provided by NGINX Plus.
+
+**NGINX**
+
+The diagram below shows the lifecycle of a client connection and how the open-source version of NGINX collects metrics during a connection.
+
+![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png)
+
+Accepts, handled, and requests are ever-increasing counters. Active, waiting, reading, and writing grow and shrink with request volume.
+
+*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
+
+The **accepts** counter is incremented when an NGINX worker picks up a request for a connection from the OS, whereas **handled** is incremented when the worker actually gets a connection for the request (by establishing a new connection or reusing an open one). These two counts are usually the same—any divergence indicates that connections are being **dropped**, often because a resource limit, such as NGINX’s [worker_connections][4] limit, has been reached.
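
To make that relationship concrete, here is a minimal sketch that derives dropped connections from a stub_status sample (the status text and counter values are illustrative; on a live server you would fetch the page, e.g. with curl, rather than use a here-string):

```shell
# Illustrative stub_status output; the counters are made-up sample values.
status='Active connections: 291
server accepts handled requests
 16630948 16630912 31070465
Reading: 6 Writing: 179 Waiting: 106'

# The third line carries the three ever-increasing counters;
# dropped connections = accepts - handled.
read -r accepts handled requests <<< "$(printf '%s\n' "$status" | sed -n '3p')"
dropped=$((accepts - handled))
echo "dropped=$dropped"
```

With these sample counters the sketch prints `dropped=36`; on a healthy server the difference stays at zero.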
+
+Once NGINX successfully handles a connection, the connection moves to an **active** state, where it remains as client requests are processed:
+
+Active state
+
+- **Waiting**: An active connection may also be in a Waiting substate if there is no active request at the moment. New connections can bypass this state and move directly to Reading, most commonly when using “accept filter” or “deferred accept”, in which case NGINX does not receive notice of work until it has enough data to begin working on the response. Connections will also be in the Waiting state after sending a response if the connection is set to keep-alive.
+- **Reading**: When a request is received, the connection moves out of the waiting state, and the request itself is counted as Reading. In this state NGINX is reading a client request header. Request headers are lightweight, so this is usually a fast operation.
+- **Writing**: After the request is read, it is counted as Writing, and remains in that state until a response is returned to the client. That means that the request is Writing while NGINX is waiting for results from upstream systems (systems “behind” NGINX), and while NGINX is operating on the response. Requests will often spend the majority of their time in the Writing state.
+
+Often a connection will only support one request at a time. In this case, the number of Active connections == Waiting connections + Reading requests + Writing requests. However, the newer SPDY and HTTP/2 protocols allow multiple concurrent requests/responses to be multiplexed over a connection, so Active may be less than the sum of Waiting, Reading, and Writing. (As of this writing, NGINX does not support HTTP/2, but expects to add support during 2015.)
+
+**NGINX Plus**
+
+As mentioned above, all of open-source NGINX’s metrics are available within NGINX Plus, but Plus can also report additional metrics. This section covers the metrics that are only available from NGINX Plus.
+
+![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png)
+
+Accepted, dropped, and total are ever-increasing counters. Active, idle, and current track the current number of connections or requests in each of those states, so they grow and shrink with request volume.
+
+*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
+
+The **accepted** counter is incremented when an NGINX Plus worker picks up a request for a connection from the OS. If the worker fails to get a connection for the request (by establishing a new connection or reusing an open one), then the connection is dropped and **dropped** is incremented. Ordinarily connections are dropped because a resource limit, such as NGINX Plus’s [worker_connections][4] limit, has been reached.
+
+**Active** and **idle** are the same as “active” and “waiting” states in open-source NGINX as described [above][5], with one key exception: in open-source NGINX, “waiting” falls under the “active” umbrella, whereas in NGINX Plus “idle” connections are excluded from the “active” count. **Current** is the same as the combined “reading + writing” states in open-source NGINX.
+
+**Total** is a cumulative count of client requests. Note that a single client connection can involve multiple requests, so this number may be significantly larger than the cumulative number of connections. In fact, (total / accepted) yields the average number of requests per connection.
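
As a quick illustration of that ratio, using made-up counter values:

```shell
# Illustrative NGINX Plus counters (not real measurements).
total=31070465      # cumulative client requests
accepted=16630948   # cumulative accepted connections

# Average number of requests served per connection.
awk -v t="$total" -v a="$accepted" \
    'BEGIN { printf "requests per connection: %.2f\n", t / a }'
```

Here the ratio works out to about 1.87 requests per connection, reflecting keep-alive connections carrying more than one request each.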
+
+**Metric differences between Open-Source and Plus**
+
+| NGINX (open-source) | NGINX Plus |
+|---------------------|------------|
+| accepts | accepted |
+| dropped must be calculated | dropped is reported directly |
+| reading + writing | current |
+| waiting | idle |
+| active (includes “waiting” states) | active (excludes “idle” states) |
+| requests | total |
+
+**Metric to alert on: Dropped connections**
+
+The number of connections that have been dropped is equal to the difference between accepts and handled (NGINX) or is exposed directly as a standard metric (NGINX Plus). Under normal circumstances, dropped connections should be zero. If your rate of dropped connections per unit time starts to rise, look for possible resource saturation.
+
+![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png)
+
+**Metric to alert on: Requests per second**
+
+Sampling your request data (**requests** in open-source, or **total** in Plus) with a fixed time interval provides you with the number of requests you’re receiving per unit of time—often minutes or seconds. Monitoring this metric can alert you to spikes in incoming web traffic, whether legitimate or nefarious, or sudden drops, which are usually indicative of problems. A drastic change in requests per second can alert you to problems brewing somewhere in your environment, even if it cannot tell you exactly where those problems lie. Note that all requests are counted the same, regardless of their URLs.
+
+![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png)
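
The computation behind a requests-per-second graph is just a rate over a counter, as in this sketch (the counter samples are made up):

```shell
# Two samples of the ever-increasing request counter, 60 seconds apart
# (made-up values for illustration).
requests_prev=31070465   # counter value at time t
requests_now=31072325    # counter value at time t + 60s
interval=60

rps=$(( (requests_now - requests_prev) / interval ))
echo "requests/sec: $rps"
```

With these samples the monitor would report 31 requests per second for that interval.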
+
+**Collecting activity metrics**
+
+Open-source NGINX exposes these basic server metrics on a simple status page. Because the status information is displayed in a standardized form, virtually any graphing or monitoring tool can be configured to parse the relevant data for analysis, visualization, or alerting. NGINX Plus provides a JSON feed with much richer data. Read the companion post on [NGINX metrics collection][6] for instructions on enabling metrics collection.
+
+#### Error metrics ####
+
+NGINX error metrics tell you how often your servers are returning errors instead of producing useful work. Client errors are represented by 4xx status codes, server errors by 5xx status codes.
+
+**Metric to alert on: Server error rate**
+
+Your server error rate is equal to the number of 5xx errors divided by the total number of [status codes][7] (1xx, 2xx, 3xx, 4xx, 5xx), per unit of time (often one to five minutes). If your error rate starts to climb over time, investigation may be in order. If it spikes suddenly, urgent action may be required, as clients are likely to report errors to the end user.
+
+![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png)
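
As a sketch of the arithmetic, using illustrative per-interval counts (e.g. tallied from the access log over a five-minute window):

```shell
# Response counts by status class for one interval (illustrative numbers).
c1xx=0; c2xx=9700; c3xx=150; c4xx=120; c5xx=30

total=$((c1xx + c2xx + c3xx + c4xx + c5xx))
awk -v e="$c5xx" -v t="$total" \
    'BEGIN { printf "server error rate: %.2f%%\n", 100 * e / t }'
```

With these numbers, 30 errors out of 10,000 responses give a 0.30% server error rate.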
+
+A note on client errors: while it is tempting to monitor 4xx, there is limited information you can derive from that metric since it measures client behavior without offering any insight into particular URLs. In other words, a change in 4xx could be noise, e.g. web scanners blindly looking for vulnerabilities.
+
+**Collecting error metrics**
+
+Although open-source NGINX does not make error rates immediately available for monitoring, there are at least two ways to capture that information:
+
+- Use the expanded status module available with commercially supported NGINX Plus
+- Configure NGINX’s log module to write response codes in access logs
+
+Read the companion post on NGINX metrics collection for detailed instructions on both approaches.
+
+#### Performance metrics ####
+
+**Metric to alert on: Request processing time**
+
+The request time metric logged by NGINX records the processing time for each request, from the reading of the first client bytes to fulfilling the request. Long response times can point to problems upstream.
+
+**Collecting processing time metrics**
+
+NGINX and NGINX Plus users can capture data on processing time by adding the $request_time variable to the access log format. More details on configuring logs for monitoring are available in our companion post on [NGINX metrics collection][8].
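
For example, if $request_time is appended as the last field of each log line (the sample line and format below are illustrative, not NGINX’s default “combined” format), the values are easy to pull out with standard tools:

```shell
# A sample access-log line whose log_format appends $request_time
# (illustrative; not NGINX's default "combined" format).
logline='192.0.2.10 - - [17/Jul/2015:10:00:00 +0000] "GET / HTTP/1.1" 200 612 0.087'

# $request_time is the last whitespace-separated field here.
request_time=$(printf '%s\n' "$logline" | awk '{ print $NF }')
echo "request_time=${request_time}s"
```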
+
+#### Reverse proxy metrics ####
+
+One of the most common ways to use NGINX is as a [reverse proxy][9]. The commercially supported NGINX Plus exposes a large number of metrics about backend (or “upstream”) servers, which are relevant to a reverse proxy setup. This section highlights a few of the key upstream metrics that are available to users of NGINX Plus.
+
+NGINX Plus segments its upstream metrics first by group, and then by individual server. So if, for example, your reverse proxy is distributing requests to five upstream web servers, you can see at a glance whether any of those individual servers is overburdened, and also whether you have enough healthy servers in the upstream group to ensure good response times.
+
+**Activity metrics**
+
+The number of **active connections per upstream server** can help you verify that your reverse proxy is properly distributing work across your server group. If you are using NGINX as a load balancer, significant deviations in the number of connections handled by any one server can indicate that the server is struggling to process requests in a timely manner or that the load-balancing method (e.g., [round-robin or IP hashing][10]) you have configured is not optimal for your traffic patterns.
+
+**Error metrics**
+
+Recall from the error metric section above that 5xx (server error) codes are a valuable metric to monitor, particularly as a share of total response codes. NGINX Plus allows you to easily extract the number of **5xx codes per upstream server**, as well as the total number of responses, to determine that particular server’s error rate.
+
+**Availability metrics**
+
+For another view of the health of your web servers, NGINX also makes it simple to monitor the health of your upstream groups via the total number of **servers currently available within each group**. In a large reverse proxy setup, you may not care very much about the current state of any one server, just as long as your pool of available servers is capable of handling the load. But monitoring the total number of servers that are up within each upstream group can provide a very high-level view of the aggregate health of your web servers.
+
+**Collecting upstream metrics**
+
+NGINX Plus upstream metrics are exposed on the internal NGINX Plus monitoring dashboard, and are also available via a JSON interface that can serve up metrics into virtually any external monitoring platform. See examples in our companion post on [collecting NGINX metrics][11].
+
+### Conclusion ###
+
+In this post we’ve touched on some of the most useful metrics you can monitor to keep tabs on your NGINX servers. If you are just getting started with NGINX, monitoring most or all of the metrics in the list below will provide good visibility into the health and activity levels of your web infrastructure:
+
+- [Dropped connections][12]
+- [Requests per second][13]
+- [Server error rate][14]
+- [Request processing time][15]
+
+Eventually you will recognize additional, more specialized metrics that are particularly relevant to your own infrastructure and use cases. Of course, what you monitor will depend on the tools you have and the metrics available to you. See the companion post for [step-by-step instructions on metric collection][16], whether you use NGINX or NGINX Plus.
+
+At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][17], and get started right away with [a free trial of Datadog][18].
+
+### Acknowledgments ###
+
+Many thanks to the NGINX team for reviewing this article prior to publication and providing important feedback and clarifications.
+
+----------
+
+Source Markdown for this post is available [on GitHub][19]. Questions, corrections, additions, etc.? Please [let us know][20].
+
+--------------------------------------------------------------------------------
+
+via: https://www.datadoghq.com/blog/how-to-monitor-nginx/
+
+作者:K Young
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://nginx.org/en/
+[2]:http://nginx.com/resources/glossary/reverse-proxy-server/
+[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/
+[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections
+[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state
+[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
+[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
+[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
+[9]:https://en.wikipedia.org/wiki/Reverse_proxy
+[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/
+[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
+[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections
+[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second
+[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate
+[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time
+[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
+[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
+[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up
+[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md
+[20]:https://github.com/DataDog/the-monitor/issues
\ No newline at end of file
diff --git a/sources/tech/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md b/sources/tech/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md
new file mode 100644
index 0000000000..416696bc91
--- /dev/null
+++ b/sources/tech/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md
@@ -0,0 +1,188 @@
+zpl1025
+Howto Configure FTP Server with Proftpd on Fedora 22
+================================================================================
+In this article, we'll learn how to set up an FTP server with ProFTPD on a machine or server running Fedora 22. [ProFTPD][1] is a free and open source FTP daemon licensed under the GPL, and one of the most popular FTP servers on Linux machines. It is designed to be a feature-rich FTP server, giving users many configuration options for easy customization, and it includes a number of options that are still not available in many other FTP daemons. It was initially developed as an alternative to the wu-ftpd server, with better security and configurability. An FTP server is a program that allows us to upload and download files and folders to and from a remote server using an FTP client. Some of the features of the ProFTPD daemon are listed below; you can see more features at [http://www.proftpd.org/features.html][2].
+
+- It includes a per-directory ".ftpaccess" access configuration similar to Apache's ".htaccess".
+- It supports multiple virtual FTP servers, multiple user logins and anonymous FTP services.
+- It can be run either as a stand-alone server or from inetd/xinetd.
+- Ownership, file/folder attributes and file/folder permissions are UNIX-based.
+- It can be run in standalone mode to protect the system from damage that could be caused by root access.
+- Its modular design makes it easily extensible with modules for LDAP servers, SSL/TLS encryption, RADIUS support, etc.
+- IPv6 support is also included in the ProFTPD server.
+
+Here are some easy steps for setting up an FTP server with ProFTPD on the Fedora 22 operating system.
+
+### 1. Installing ProFTPD ###
+
+First of all, we'll install the ProFTPD server on our box running Fedora 22. As the yum package manager has been deprecated, we'll use the latest and greatest package manager, dnf. DNF is an easy to use and highly user-friendly package manager available in Fedora 22. We'll simply use it to install the proftpd daemon by running the following command in a terminal or console with sudo.
+
+ $ sudo dnf -y install proftpd proftpd-utils
+
+### 2. Configuring ProFTPD ###
+
+Now, we'll make some changes to the daemon's configuration by editing **/etc/proftpd.conf** with a text editor. It is the main configuration file of the ProFTPD daemon, so any changes made to this file will affect the FTP server. Here are some changes we make in this initial step.
+
+ $ sudo vi /etc/proftpd.conf
+
+Next, after we open the file in a text editor, we'll set ServerName and ServerAdmin to our hostname and email address, respectively. Here's what we changed those configs to.
+
+ ServerName "ftp.linoxide.com"
+ ServerAdmin arun@linoxide.com
+
+After that, we'll add the following lines to the configuration file so that access and auth events are logged to the specified log files.
+
+ ExtendedLog /var/log/proftpd/access.log WRITE,READ default
+ ExtendedLog /var/log/proftpd/auth.log AUTH auth
+
+![Configuring ProFTPD Config](http://blog.linoxide.com/wp-content/uploads/2015/06/configuring-proftpd-config.png)
+
+### 3. Adding FTP users ###
+
+After configuring the basics of the configuration file, we'll create an FTP user rooted at a specific directory. The existing users on the machine are automatically enabled for the FTP service and can be used to log into the FTP server, but in this tutorial we'll create a new user with a specified home directory on the FTP server.
+
+Here, we'll create a new group named ftpgroup.
+
+ $ sudo groupadd ftpgroup
+
+Then, we'll add a new user, arunftp, to the group, with the home directory specified as /ftp-dir/ and no login shell.
+
+ $ sudo useradd -G ftpgroup arunftp -s /sbin/nologin -d /ftp-dir/
+
+After the user has been created and added to the group, we'll set a password for the user arunftp.
+
+ $ sudo passwd arunftp
+
+ Changing password for user arunftp.
+ New password:
+ Retype new password:
+ passwd: all authentication tokens updated successfully.
+
+Now, we'll set the SELinux booleans that give the FTP users read and write access to their home directories by executing the following commands.
+
+ $ sudo setsebool -P allow_ftpd_full_access=1
+ $ sudo setsebool -P ftp_home_dir=1
+
+Then, we'll make the directory world-writable, with the sticky bit set so that its contents cannot be removed or renamed by anyone but their owner.
+
+ $ sudo chmod -R 1777 /ftp-dir/
+
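A quick way to see what mode 1777 does, tried on a throwaway directory rather than /ftp-dir/ (a sketch):

```shell
# Mode 1777: world-writable directory with the sticky bit set, so files
# inside can only be removed or renamed by their owners (or root).
demo_dir=$(mktemp -d)
chmod 1777 "$demo_dir"

mode=$(stat -c '%a' "$demo_dir")   # numeric mode; the leading 1 is the sticky bit
echo "mode=$mode"
rmdir "$demo_dir"
```
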
+### 4. Enabling TLS Support ###
+
+FTP is considered less secure than the encrypted protocols used these days, as anybody sniffing the network can read the data passing through FTP. So, we'll enable TLS encryption support in our FTP server. To do so, we'll need to edit the /etc/proftpd.conf configuration file. Before that, we'll back up our existing configuration file to make sure we can revert if anything unexpected happens.
+
+ $ sudo cp /etc/proftpd.conf /etc/proftpd.conf.bak
+
+Then, we'll edit the configuration file using our favorite text editor.
+
+ $ sudo vi /etc/proftpd.conf
+
+Then, we'll add the following lines just below the lines we configured in step 2.
+
+ TLSEngine on
+ TLSRequired on
+ TLSProtocol SSLv23
+ TLSLog /var/log/proftpd/tls.log
+ TLSRSACertificateFile /etc/pki/tls/certs/proftpd.pem
+ TLSRSACertificateKeyFile /etc/pki/tls/certs/proftpd.pem
+
+![Enabling TLS Configuration](http://blog.linoxide.com/wp-content/uploads/2015/06/tls-configuration.png)
+
+After finishing the configuration, we'll save and exit.
+
+Next, we'll need to generate an SSL certificate as proftpd.pem inside the **/etc/pki/tls/certs/** directory. To do so, we'll first need to install openssl on our Fedora 22 machine.
+
+ $ sudo dnf install openssl
+
+Then, we'll generate the SSL certificate by running the following command.
+
+ $ sudo openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/pki/tls/certs/proftpd.pem -out /etc/pki/tls/certs/proftpd.pem
+
+We'll be asked for some information that will be incorporated into the certificate. After we complete the required fields, it will generate a 2048-bit RSA private key.
+
+ Generating a 2048 bit RSA private key
+ ...................+++
+ ...................+++
+ writing new private key to '/etc/pki/tls/certs/proftpd.pem'
+ -----
+ You are about to be asked to enter information that will be incorporated
+ into your certificate request.
+ What you are about to enter is what is called a Distinguished Name or a DN.
+ There are quite a few fields but you can leave some blank
+ For some fields there will be a default value,
+ If you enter '.', the field will be left blank.
+ -----
+ Country Name (2 letter code) [XX]:NP
+ State or Province Name (full name) []:Narayani
+ Locality Name (eg, city) [Default City]:Bharatpur
+ Organization Name (eg, company) [Default Company Ltd]:Linoxide
+ Organizational Unit Name (eg, section) []:Linux Freedom
+ Common Name (eg, your name or your server's hostname) []:ftp.linoxide.com
+ Email Address []:arun@linoxide.com
+
+After that, we'll restrict the permissions of the generated certificate file in order to secure it.
+
+ $ sudo chmod 600 /etc/pki/tls/certs/proftpd.pem
+
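To double-check what ended up in the PEM, here is an optional sketch that generates a throwaway self-signed certificate the same way as above (with -subj pre-filling the prompts) and inspects its subject and validity dates:

```shell
# Generate a disposable self-signed PEM in a temp dir and inspect it.
tmp=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -subj "/CN=ftp.linoxide.com" \
    -keyout "$tmp/proftpd.pem" -out "$tmp/proftpd.pem" 2>/dev/null

# -subject shows who the certificate identifies; -dates shows its validity.
subject=$(openssl x509 -in "$tmp/proftpd.pem" -noout -subject)
echo "$subject"
openssl x509 -in "$tmp/proftpd.pem" -noout -dates
rm -rf "$tmp"
```
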
+### 5. Allowing FTP through Firewall ###
+
+Now, we'll need to open the FTP ports, which are usually blocked by the firewall by default, to enable access to FTP through the firewall.
+
+If **TLS/SSL encryption is enabled**, run the following commands.
+
+    $ sudo firewall-cmd --add-port=1024-65534/tcp
+ $ sudo firewall-cmd --add-port=1024-65534/tcp --permanent
+
+If **TLS/SSL encryption is disabled**, run the following command.
+
+ $ sudo firewall-cmd --permanent --zone=public --add-service=ftp
+
+ success
+
+Then, we'll need to reload the firewall configuration.
+
+ $ sudo firewall-cmd --reload
+
+ success
+
+### 6. Starting and Enabling ProFTPD ###
+
+After everything is set, we'll finally start ProFTPD and give it a try. To start the proftpd daemon, we'll need to run the following command.
+
+ $ sudo systemctl start proftpd.service
+
+Then, we'll enable proftpd to start on every boot.
+
+ $ sudo systemctl enable proftpd.service
+
+ Created symlink from /etc/systemd/system/multi-user.target.wants/proftpd.service to /usr/lib/systemd/system/proftpd.service.
+
+### 7. Logging into the FTP server ###
+
+Now, if everything was configured as expected, we should be able to connect to the FTP server and log in with the details we set above. Here, we'll configure our FTP client, FileZilla, with the hostname as the **server's IP or URL**, the protocol as **FTP**, the user as **arunftp** and the password as the one we set in step 3 above. If you followed step 4 to enable TLS support, you'll need to set the encryption type as **Require explicit FTP over TLS**, but if you didn't follow step 4 and don't want to use TLS encryption, set the encryption type as **Plain FTP**.
+
+![FTP Login Details](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-login-details.png)
+
+To enter the above configuration, we'll need to go to File in the menu, then click on Site Manager, where we can click on New Site and configure it as illustrated above.
+
+![FTP SSL Certificate](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-ssl-certificate.png)
+
+Then, we're asked to accept the SSL certificate, which can be done by clicking OK. After that, we are able to upload and download the required files and folders from our FTP server.
+
+### Conclusion ###
+
+Finally, we have successfully installed and configured our Fedora 22 box with the ProFTPD FTP server. ProFTPD is a powerful, highly configurable and extensible FTP daemon. The tutorial above illustrates how we can configure a secure FTP server with TLS encryption. It is highly recommended to configure the FTP server with TLS encryption, as it adds SSL certificate security to data transfer and login. Here, we haven't configured anonymous access to the FTP server, because it is usually not recommended in a protected FTP system. FTP access makes it pretty easy for people to upload and download with good, efficient performance. We can even change the ports the users connect on for additional security. So, if you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/
+
+作者:[Arun Pyasi][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arunp/
+[1]:http://www.proftpd.org/
+[2]:http://www.proftpd.org/features.html
diff --git a/sources/tech/20150717 Setting Up 'XR' (Crossroads) Load Balancer for Web Servers on RHEL or CentOS.md b/sources/tech/20150717 Setting Up 'XR' (Crossroads) Load Balancer for Web Servers on RHEL or CentOS.md
new file mode 100644
index 0000000000..13312b6272
--- /dev/null
+++ b/sources/tech/20150717 Setting Up 'XR' (Crossroads) Load Balancer for Web Servers on RHEL or CentOS.md
@@ -0,0 +1,160 @@
+translation by strugglingyouth
+Setting Up ‘XR’ (Crossroads) Load Balancer for Web Servers on RHEL/CentOS
+================================================================================
+Crossroads is a service-independent, open source load balancing and fail-over utility for Linux and TCP-based services. It can be used for HTTP, HTTPS, SSH, SMTP, DNS, etc. It is also a multi-threaded utility that runs in a single memory space, which increases performance when balancing load.
+
+Let’s have a look at how XR works. XR sits between network clients and a nest of servers, and dispatches client requests to those servers, balancing the load.
+
+If a server is down, XR forwards the next client request to the next server in line, so the client experiences no downtime. Have a look at the diagram below to understand what kind of situation we are going to handle with XR.
+
+![Install XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Install-XR-Crossroads-Load-Balancer.jpg)
+
+Install XR Crossroads Load Balancer
+
+There are two web servers and one gateway server, on which we install and set up XR to receive client requests and distribute them among the web servers.
+
+ XR Crossroads Gateway Server : 172.16.1.204
+ Web Server 01 : 172.16.1.222
+ Web Server 02 : 192.168.1.161
+
+In the above scenario, my gateway server (i.e. XR Crossroads) has the IP address 172.16.1.204, webserver01 is 172.16.1.222 and listens on port 8888, and webserver02 is 192.168.1.161 and listens on port 5555.
+
+Now all I need to do is balance the load of all requests that the XR gateway receives from the internet and distribute them between the two web servers.
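
To make the scheduling idea concrete, the round-robin dispatch that XR performs can be sketched in a few lines of shell. This is only an illustration of the concept, reusing the example backend addresses from this setup; it is not XR's own code:

```shell
# Toy round-robin scheduler: hand each request to the next backend in line.
backends="172.16.1.222:8888 192.168.1.161:5555"
set -- $backends          # positional parameters $1, $2 become the backends
count=$#
i=0
for req in 1 2 3 4; do
    idx=$(( i % count + 1 ))    # cycle 1, 2, 1, 2, ...
    eval "picked=\$$idx"
    echo "request $req -> $picked"
    i=$(( i + 1 ))
done
```

Requests alternate between the two backends, which is the behaviour you will also observe when testing from a browser later on.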
+
+### Step1: Install XR Crossroads Load Balancer on Gateway Server ###
+
+**1. Unfortunately, there are no binary RPM packages available for Crossroads; the only way to install XR Crossroads is from the source tarball.**
+
+To compile XR, you must have the C++ compiler and GNU make utilities installed on the system for the installation to proceed error-free.
+
+ # yum install gcc gcc-c++ make
+
+Next, download the source tarball by going to their official site ([https://crossroads.e-tunity.com][1]), and grab the archived package (i.e. crossroads-stable.tar.gz).
+
+Alternatively, you can use the wget utility as follows to download the package, extract it to any location (e.g. /usr/src/), change into the unpacked directory and issue the “make install” command.
+
+ # wget https://crossroads.e-tunity.com/downloads/crossroads-stable.tar.gz
+ # tar -xvf crossroads-stable.tar.gz
+ # cd crossroads-2.74/
+ # make install
+
+![Install XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Install-XR-Crossroads-Load-Balancer.png)
+
+Install XR Crossroads Load Balancer
+
+After installation finishes, the binary files are created under /usr/sbin/ and the XR configuration file, “xrctl.xml”, under /etc.
+
+**2. As the last prerequisite, you need two web servers. For ease of use, I have created two Python SimpleHTTPServer instances on one server.**
+
+To see how to set up a Python SimpleHTTPServer, read our article at [Create Two Web Servers Easily Using SimpleHTTPServer][2].
+
+As I said, we’re using two web servers: webserver01 running on 172.16.1.222 port 8888, and webserver02 running on 192.168.1.161 port 5555.
+
+![XR WebServer 01](http://www.tecmint.com/wp-content/uploads/2015/07/XR-WebServer01.jpg)
+
+XR WebServer 01
+
+![XR WebServer 02](http://www.tecmint.com/wp-content/uploads/2015/07/XR-WebServer02.jpg)
+
+XR WebServer 02
+
+### Step 2: Configure XR Crossroads Load Balancer ###
+
+**3. All prerequisites are in place. Now we have to configure the `xrctl.xml` file to distribute the load the XR server receives from the internet among the web servers.**
+
+Now open `xrctl.xml` file with [vi/vim editor][3].
+
+ # vim /etc/xrctl.xml
+
+and make the changes as suggested below.
+
+ <?xml version="1.0" encoding="UTF-8"?>
+ <configuration>
+ <system>
+ <uselogger>true</uselogger>
+ <logdir>/tmp</logdir>
+ </system>
+ <service>
+ <name>Tecmint</name>
+ <server>
+ <address>172.16.1.204:8080</address>
+ <type>tcp</type>
+ <webinterface>0:8010</webinterface>
+ <verbose>yes</verbose>
+ <clientreadtimeout>0</clientreadtimeout>
+ <clientwritetimeout>0</clientwritetimeout>
+ <backendreadtimeout>0</backendreadtimeout>
+ <backendwritetimeout>0</backendwritetimeout>
+ </server>
+ <backend>
+ <address>172.16.1.222:8888</address>
+ </backend>
+ <backend>
+ <address>192.168.1.161:5555</address>
+ </backend>
+ </service>
+ </configuration>
+
+![Configure XR Crossroads Load Balancer](http://www.tecmint.com/wp-content/uploads/2015/07/Configure-XR-Crossroads-Load-Balancer.jpg)
+
+Configure XR Crossroads Load Balancer
+
+Here you can see a very basic XR configuration in xrctl.xml. I have defined the XR server address, the back-end servers with their ports, and the web-interface port for XR.
+
+**4. Now start the XR daemon by issuing the commands below.**
+
+ # xrctl start
+ # xrctl status
+
+![Start XR Crossroads](http://www.tecmint.com/wp-content/uploads/2015/07/Start-XR-Crossroads.jpg)
+
+Start XR Crossroads
+
+**5. Okay, great. Now it’s time to check whether the configuration works. Open two web browsers, enter the IP address of the XR server with the port, and observe the output.**
+
+![Verify Web Server Load Balancing](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Web-Server-Load-Balancing.jpg)
+
+Verify Web Server Load Balancing
+
+Fantastic, it works fine. Now it’s time to play with XR.
+
+**6. Now it’s time to log in to the XR Crossroads dashboard through the web-interface port we configured. Enter your XR server’s IP address with the web-interface port number you set in xrctl.xml.**
+
+ http://172.16.1.204:8010
+
+![XR Crossroads Dashboard](http://www.tecmint.com/wp-content/uploads/2015/07/XR-Crossroads-Dashboard.jpg)
+
+XR Crossroads Dashboard
+
+This is what it looks like: easy to understand, user-friendly and easy to use. The top right corner shows how many connections each back-end server has received, along with additional details about incoming requests. You can even set the load weight each server should bear, the maximum number of connections, the load average, and so on.
+
+The best part is that you can actually do all of this without configuring xrctl.xml at all. All you have to do is issue a command with the following syntax and it will get the job done.
+
+ # xr --verbose --server tcp:172.16.1.204:8080 --backend 172.16.1.222:8888 --backend 192.168.1.161:5555
+
+Explanation of the above syntax in detail:
+
+- --verbose shows what happens as the command executes.
+- --server defines the XR server you have installed the package on.
+- --backend defines the web servers you want to balance the traffic to.
+- tcp specifies that the TCP protocol is used.
+
+For more details about the documentation and configuration of Crossroads, please visit the official site: [https://crossroads.e-tunity.com/][4].
+
+XR Crossroads provides many ways to enhance your server performance, protect against downtime and make your admin tasks easier. Hope you enjoyed the guide; feel free to comment below with suggestions and questions. Keep in touch with Tecmint for handy how-tos.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/setting-up-xr-crossroads-load-balancer-for-web-servers-on-rhel-centos/
+
+作者:[Thilina Uvindasiri][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/thilidhanushka/
+[1]:https://crossroads.e-tunity.com/
+[2]:http://www.tecmint.com/python-simplehttpserver-to-create-webserver-or-serve-files-instantly/
+[3]:http://www.tecmint.com/vi-editor-usage/
+[4]:https://crossroads.e-tunity.com/
diff --git a/sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md b/sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md
new file mode 100644
index 0000000000..0f393fd7c4
--- /dev/null
+++ b/sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md
@@ -0,0 +1,144 @@
+struggling 翻译中
+Introduction to RAID, Concepts of RAID and RAID Levels – Part 1
+================================================================================
+RAID stands for Redundant Array of Inexpensive Disks, although nowadays it is read as Redundant Array of Independent Drives. In the past even a small disk was very costly; nowadays we can buy a large disk for the same price. RAID is simply a collection of disks pooled together to form a logical volume.
+
+![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg)
+
+Understanding RAID Setups in Linux
+
+RAID is organized in groups, sets or arrays: a combination of drives forms a RAID array or RAID set. A minimum of 2 disks connected to a RAID controller can form a logical volume, and more drives can be part of a group. Only one RAID level can be applied to a group of disks. RAID is used when we need excellent performance; depending on the selected RAID level, performance will differ. RAID also protects our data through fault tolerance and high availability.
+
+This series, Parts 1-9, covers setting up RAID and includes the following topics:
+
+- Part 1: Introduction to RAID, Concepts of RAID and RAID Levels
+- Part 2: How to setup RAID0 (Stripe) in Linux
+- Part 3: How to setup RAID1 (Mirror) in Linux
+- Part 4: How to setup RAID5 (Striping with Distributed Parity) in Linux
+- Part 5: How to setup RAID6 (Striping with Double Distributed Parity) in Linux
+- Part 6: Setting Up RAID 10 or 1+0 (Nested) in Linux
+- Part 7: Growing an Existing RAID Array and Removing Failed Disks in Raid
+- Part 8: Recovering (Rebuilding) failed drives in RAID
+- Part 9: Managing RAID in Linux
+
+This is Part 1 of a 9-tutorial series; here we cover the introduction to RAID, RAID concepts and the RAID levels that are required for setting up RAID in Linux.
+
+### Software RAID and Hardware RAID ###
+
+Software RAID has lower performance because it consumes resources from the host. The RAID software must be loaded before data can be read from software RAID volumes, and the OS must boot before the RAID software can load. Software RAID needs no physical hardware, so it is a zero-cost investment.
+
+Hardware RAID has high performance. A dedicated RAID controller is physically built using PCI Express cards, so it doesn’t consume host resources. It has NVRAM cache for reads and writes, and preserves the cache on battery backup even during a power failure, so a rebuild can continue. It requires a costly investment at large scale.
+
+A hardware RAID card looks like this:
+
+![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg)
+
+Hardware RAID
+
+#### Featured Concepts of RAID ####
+
+- Parity regenerates lost content from the saved parity information. RAID 5 and RAID 6 are based on parity.
+- Striping shares data across multiple disks, so no single disk holds the full data. With 2 disks, half of our data is on each disk.
+- Mirroring is used in RAID 1 and RAID 10. Mirroring makes a copy of the same data; in RAID 1 the same content is saved to the other disk as well.
+- A hot spare is simply a spare drive in the server that can automatically replace a failed drive: if any drive in the array fails, the hot spare is used and the array is rebuilt automatically.
+- Chunks are just units of data, from a minimum of 4KB upwards. By tuning the chunk size we can improve I/O performance.
+
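The parity concept from the list above can be demonstrated with a quick XOR calculation in the shell. This is a simplified sketch of the idea, not the actual on-disk layout:

```shell
# Parity is the XOR of the data blocks. If one block is lost,
# XOR-ing the parity with the surviving block regenerates it.
d1=5                            # data block on disk 1
d2=9                            # data block on disk 2
parity=$(( d1 ^ d2 ))           # stored on the parity chunk
rebuilt_d1=$(( parity ^ d2 ))   # rebuild disk 1's block after a failure
echo "parity=$parity rebuilt_d1=$rebuilt_d1"
```

This is why RAID 5 (and RAID 6, with two parity blocks) can lose a drive without losing data.
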
+RAID comes in various levels. Here we will look only at the RAID levels most used in real environments.
+
+- RAID0 = Striping
+- RAID1 = Mirroring
+- RAID5 = Single Disk Distributed Parity
+- RAID6 = Double Disk Distributed Parity
+- RAID10 = Combination of Mirror & Stripe (Nested RAID)
+
+RAID is managed using the mdadm package in most Linux distributions. Let us take a brief look at each RAID level.
+
+#### RAID 0 (or) Striping ####
+
+Striping has excellent performance. In RAID 0 (striping), data is written across the disks in a shared fashion: half of the content ends up on one disk and the other half on the other.
+
+Assume we have 2 disk drives. If we write the data “TECMINT” to the logical volume, ‘T‘ is saved on the first disk, ‘E‘ on the second, ‘C‘ on the first again, ‘M‘ on the second, and so on in a round-robin process.
+
+In this situation, if any one of the drives fails we lose our data, because half of the data from one disk cannot be used to rebuild the array. But in terms of write speed and performance, RAID 0 is excellent. We need a minimum of 2 disks to create RAID 0 (striping). If your data is valuable, don’t use this RAID level.
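
The “TECMINT” example above can be simulated with plain shell variables standing in for the two disks; this is purely illustrative:

```shell
# Toy striping: alternate the characters of "TECMINT" between two "disks".
data="TECMINT"
disk1=""
disk2=""
i=0
while [ "$i" -lt "${#data}" ]; do
    ch=$(printf '%s' "$data" | cut -c $(( i + 1 )))
    if [ $(( i % 2 )) -eq 0 ]; then
        disk1="$disk1$ch"       # gets T, C, I, T
    else
        disk2="$disk2$ch"       # gets E, M, N
    fi
    i=$(( i + 1 ))
done
echo "disk1=$disk1 disk2=$disk2"
```

Neither “disk” alone contains readable data, which is why a single failure destroys the whole array.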
+
+- High performance.
+- Zero capacity loss in RAID 0.
+- Zero fault tolerance.
+- Good write and read performance.
+
+#### RAID 1 (or) Mirroring ####
+
+Mirroring has good performance. Mirroring makes a copy of the same data we have. Assume we have two 2TB hard drives, 4TB in total; with mirroring, the drives sit behind the RAID controller to form one logical drive, and we see only 2TB of logical drive.
+
+When we save any data, it is written to both 2TB drives. A minimum of two drives is needed to create RAID 1 (mirror). If a disk failure occurs, we can restore the RAID set by swapping in a new disk. If either disk fails in RAID 1, we can get the data from the other one, since it holds a copy of the same content, so there is zero data loss.
+
+- Good performance.
+- Half of the total capacity is lost.
+- Full fault tolerance.
+- Rebuilds are faster.
+- Write performance is slower.
+- Read performance is good.
+- Can be used for operating systems and small-scale databases.
+
+#### RAID 5 (or) Distributed Parity ####
+
+RAID 5 is mostly used at the enterprise level. RAID 5 works with a distributed-parity method: parity information is used to rebuild data from the information left on the remaining good drives. This protects our data from drive failure.
+
+Assume we have 4 drives: if one drive fails, we can rebuild the replacement drive from the parity information. The parity is distributed across all 4 drives. With four 1TB hard drives, 256GB of parity is stored on each drive and the remaining 768GB per drive is available to users, so one drive’s worth of capacity in total goes to parity. RAID 5 can survive a single drive failure; if more than one drive fails, data is lost.
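
The capacity split quoted above follows from the rule usable = (n - 1) * drive size; a quick shell check:

```shell
# RAID 5 reserves one drive's worth of capacity for parity,
# spread evenly across all members. Example: four 1TB (1024GB) drives.
n=4
size_gb=1024
usable_gb=$(( (n - 1) * size_gb ))    # space left for user data
parity_per_drive=$(( size_gb / n ))   # parity share on each drive
data_per_drive=$(( size_gb - parity_per_drive ))
echo "usable=${usable_gb}GB parity/drive=${parity_per_drive}GB data/drive=${data_per_drive}GB"
```

This reproduces the 256GB parity and 768GB data per drive mentioned above.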
+
+- Excellent performance.
+- Read speed is extremely good.
+- Write speed is average; slow if we don’t use a hardware RAID controller.
+- Rebuilds from the parity information on all drives.
+- Full fault tolerance.
+- One disk’s worth of space goes to parity.
+- Can be used for file servers, web servers and very important backups.
+
+#### RAID 6 Two Parity Distributed Disk ####
+
+RAID 6 is the same as RAID 5 but with two distributed parity blocks. It is mostly used in large arrays. We need a minimum of 4 drives, and even if 2 drives fail we can rebuild the data after replacing them.
+
+It is slower than RAID 5 because it writes two parity blocks for every write; speed is average when using a hardware RAID controller. With six 1TB hard drives, 4 drives’ worth of capacity is used for data and 2 drives’ worth for parity.
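
Likewise, for RAID 6 the rule is usable = (n - 2) * drive size:

```shell
# RAID 6 reserves two drives' worth of capacity for parity.
# Example: six 1TB (1024GB) drives.
n=6
size_gb=1024
usable_gb=$(( (n - 2) * size_gb ))
parity_gb=$(( 2 * size_gb ))
echo "usable=${usable_gb}GB parity=${parity_gb}GB"
```

With six drives, four drives’ worth of capacity remains usable, as stated above.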
+
+- Poor write performance.
+- Read performance is good.
+- Write performance is poor if we are not using a hardware RAID controller.
+- Rebuilds from 2 parity drives’ worth of information.
+- Full fault tolerance.
+- 2 disks’ worth of space goes to parity.
+- Can be used in large arrays.
+- Suitable for backup purposes, video streaming and other large-scale uses.
+
+#### RAID 10 (or) Mirror & Stripe ####
+
+RAID 10 can be called 1+0 or 0+1. It does the work of both mirroring and striping. In RAID 10, mirroring comes first and striping second; in RAID 01, striping comes first and mirroring second. RAID 10 is better than 01.
+
+Assume we have 4 drives. When I write some data to my logical volume, it is saved across all 4 drives using the mirror and stripe methods.
+
+If I write the data “TECMINT” to RAID 10, it is saved as follows. First “T” is written to both disks of a pair, then “E” is written to both disks of a pair, and this applies to everything written: a copy of every piece of data is kept on a second disk.
+
+At the same time it uses the RAID 0 method: “T” is written to the first mirrored pair and “E” to the second pair, then “C” to the first pair again and “M” to the second.
+
+- Good read and write performance.
+- Half of the total capacity is lost.
+- Fault Tolerance.
+- Fast rebuild from copying data.
+- Can be used in Database storage for high performance and availability.
+
+### Conclusion ###
+
+In this article we have seen what RAID is and which RAID levels are most used in real environments. Before setting up RAID, one must know the basics, and the above content should give you that basic understanding.
+
+In the upcoming articles I am going to cover how to set up and create RAID arrays at various levels, grow an existing RAID group (array), troubleshoot failed drives, and much more.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/understanding-raid-setup-in-linux/
+
+作者:[Babin Lonston][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/babinlonston/
\ No newline at end of file
diff --git a/sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md b/sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md
new file mode 100644
index 0000000000..8057e4828e
--- /dev/null
+++ b/sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md
@@ -0,0 +1,219 @@
+struggling 翻译中
+Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2
+================================================================================
+RAID stands for Redundant Array of Inexpensive Disks and is used for high availability and reliability in large-scale environments, where data needs better protection than in normal use. RAID is just a collection of disks pooled into a logical volume; a combination of drives makes an array, also called a set or group.
+
+RAID can be created with a minimum of 2 disks connected to a RAID controller to form a logical volume, and more drives can be added to an array according to the chosen RAID level. RAID implemented without physical hardware is called software RAID, sometimes nicknamed “poor man’s RAID”.
+
+![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg)
+
+Setup RAID0 in Linux
+
+The main reason for using RAID is to protect data from a single point of failure: if we store data on a single disk and it fails, there is no chance of getting the data back. To prevent such data loss we need a fault-tolerance method, so we use a collection of disks to form a RAID set.
+
+#### What is Stripe in RAID 0? ####
+
+Striping writes data across multiple disks at the same time by dividing the contents. Assume we have two disks: if we save content to the logical volume, it is divided and saved across both physical disks. RAID 0 is used for better performance, but we can’t recover the data if one of the drives fails, so it isn’t good practice to keep anything important on RAID 0 alone; if you do install the operating system on RAID 0 logical volumes, keep copies of your important files elsewhere.
+
+- RAID 0 has high performance.
+- Zero capacity loss in RAID 0; no space is wasted.
+- Zero fault tolerance (data cannot be recovered if either disk fails).
+- Excellent write and read performance.
+
+#### Requirements ####
+
+The minimum number of disks required to create RAID 0 is 2, and you can add more; if you have a physical RAID card with enough ports, you can add many more disks.
+
+Here we are not using hardware RAID; this setup depends only on software RAID. If we have a physical hardware RAID card, we can access it from its utility UI. Some motherboards have a built-in RAID feature, whose UI can be accessed using the Ctrl+I keys.
+
+If you’re new to RAID setups, please read our earlier article, where we’ve covered a basic introduction to RAID.
+
+- [Introduction to RAID and RAID Concepts][1]
+
+**My Server Setup**
+
+ Operating System : CentOS 6.5 Final
+ IP Address : 192.168.0.225
+ Two Disks : 20 GB each
+
+This article is Part 2 of a 9-tutorial RAID series. In this part we are going to see how to create and set up software RAID 0 (striping) on Linux systems or servers using two 20GB disks named sdb and sdc.
+
+### Step 1: Updating System and Installing mdadm for Managing RAID ###
+
+1. Before setting up RAID 0 in Linux, let’s do a system update and then install the ‘mdadm‘ package. mdadm is a small program that allows us to configure and manage RAID devices in Linux.
+
+ # yum clean all && yum update
+ # yum install mdadm -y
+
+![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png)
+
+Install mdadm Tool
+
+### Step 2: Verify Attached Two 20GB Drives ###
+
+2. Before creating RAID 0, make sure the two attached hard drives are detected, using the following command.
+
+ # ls -l /dev | grep sd
+
+![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png)
+
+Check Hard Drives
+
+3. Once the new hard drives are detected, check whether the attached drives are already used in any existing RAID, with the help of the following ‘mdadm’ command.
+
+ # mdadm --examine /dev/sd[b-c]
+
+![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png)
+
+Check RAID Devices
+
+From the above output, we know that no RAID has been applied to the sdb and sdc drives yet.
+
+### Step 3: Creating Partitions for RAID ###
+
+4. Now create partitions on sdb and sdc for RAID with the help of the following fdisk command. Here I will show how to create a partition on the sdb drive.
+
+ # fdisk /dev/sdb
+
+Follow the instructions below to create the partition.
+
+- Press ‘n‘ to create a new partition.
+- Then choose ‘p‘ for primary partition.
+- Next select partition number 1.
+- Accept the default values by just pressing the Enter key twice.
+- Next press ‘p‘ to print the defined partition.
+
+![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png)
+
+Create Partitions
+
+Follow the instructions below to set the partition type to Linux raid auto.
+
+- Type ‘t‘ to change the partition type.
+- Press ‘L‘ to list all available types.
+- Choose ‘fd‘ for Linux raid auto and press Enter to apply.
+- Then again use ‘p‘ to print the changes we have made.
+- Use ‘w‘ to write the changes.
+
+![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png)
+
+Create RAID Partitions in Linux
+
+**Note**: Please follow the same instructions above to create a partition on the sdc drive now.
+
+5. After creating the partitions, verify that both drives are correctly defined for RAID using the following commands.
+
+ # mdadm --examine /dev/sd[b-c]
+ # mdadm --examine /dev/sd[b-c]1
+
+![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png)
+
+Verify RAID Partitions
+
+### Step 4: Creating RAID md Devices ###
+
+6. Now create the md device (i.e. /dev/md0) and apply the RAID level using either of the commands below (they are equivalent).
+
+ # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1
+ # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1
+
+- -C – create
+- -l – level
+- -n – number of raid devices
+
+7. Once the md device has been created, verify the status of the RAID level, the devices and the array used, with the help of the following series of commands.
+
+ # cat /proc/mdstat
+
+![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png)
+
+Verify RAID Level
+
+ # mdadm -E /dev/sd[b-c]1
+
+![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png)
+
+Verify RAID Device
+
+ # mdadm --detail /dev/md0
+
+![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png)
+
+Verify RAID Array
+
+### Step 5: Assigning RAID Devices to Filesystem ###
+
+8. Create an ext4 filesystem on the RAID device /dev/md0 and mount it under /mnt/raid0.
+
+ # mkfs.ext4 /dev/md0
+
+![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png)
+
+Create ext4 Filesystem
+
+9. Once the ext4 filesystem has been created for the RAID device, create a mount point directory (i.e. /mnt/raid0) and mount the device /dev/md0 under it.
+
+ # mkdir /mnt/raid0
+ # mount /dev/md0 /mnt/raid0/
+
+10. Next, verify that the device /dev/md0 is mounted under the /mnt/raid0 directory using the df command.
+
+ # df -h
+
+11. Next, create a file called ‘tecmint.txt‘ under the mount point /mnt/raid0, add some content to it, and view the content of the file and directory.
+
+ # touch /mnt/raid0/tecmint.txt
+ # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt
+ # cat /mnt/raid0/tecmint.txt
+ # ls -l /mnt/raid0/
+
+![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png)
+
+Verify Mount Device
+
+12. Once you’ve verified the mount point, it’s time to create an fstab entry in the /etc/fstab file.
+
+ # vim /etc/fstab
+
+Add the following entry as described; it may vary according to your mount location and the filesystem you use.
+
+ /dev/md0 /mnt/raid0 ext4 defaults 0 0
+
+![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png)
+
+Add Device to Fstab
+
+13. Run ‘mount -a‘ to check whether there are any errors in the fstab entry.
+
+ # mount -av
+
+![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png)
+
+Check Errors in Fstab
+
+### Step 6: Saving RAID Configurations ###
+
+14. Finally, save the RAID configuration to a file to keep it for future use. Again we use the ‘mdadm’ command with the ‘-s‘ (scan) and ‘-v‘ (verbose) options as shown.
+
+ # mdadm -E -s -v >> /etc/mdadm.conf
+ # mdadm --detail --scan --verbose >> /etc/mdadm.conf
+ # cat /etc/mdadm.conf
+
+![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png)
+
+Save RAID Configurations
+
+That’s it. We have seen here how to configure RAID 0 (striping) using two hard disks. In the next article, we will see how to set up RAID 1.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-raid0-in-linux/
+
+作者:[Babin Lonston][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/babinlonston/
+[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
\ No newline at end of file
diff --git a/sources/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/sources/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md
new file mode 100644
index 0000000000..4acfe4366b
--- /dev/null
+++ b/sources/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md
@@ -0,0 +1,213 @@
+struggling 翻译中
+Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3
+================================================================================
+RAID mirroring means an exact clone (or mirror) of the same data written to two drives. A minimum of two disks is required in an array to create RAID 1, and it is useful mainly when read performance and reliability matter more than storage capacity.
+
+![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg)
+
+Setup Raid1 in Linux
+
+Mirrors are created to protect against data loss due to disk failure. Each disk in a mirror holds an exact copy of the data. When one disk fails, the same data can be retrieved from the other functioning disk, and the failed drive can be replaced in the running computer without interrupting users.
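
The mirror behaviour can be sketched with two files standing in for disks; this is only an illustration, not an mdadm operation:

```shell
# Toy mirror: every write goes to both "disks" (temporary files in the
# current directory), so either copy alone can recover the data.
: > diskA
: > diskB
write_mirror() {
    printf '%s' "$1" >> diskA
    printf '%s' "$1" >> diskB
}
write_mirror "tecmint raid setups"
rm -f diskA                 # simulate a disk failure
recovered=$(cat diskB)      # the surviving mirror still has everything
echo "recovered: $recovered"
rm -f diskB                 # clean up
```

Because every write lands on both members, losing one member loses nothing.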
+
+### Features of RAID 1 ###
+
+- Mirroring has good performance.
+- 50% of the space is lost: if we have two disks of 500GB each, 1TB in total, mirroring shows us only 500GB.
+- No data loss if one disk fails, because we have the same content on both disks.
+- Reading is faster than writing to the drives.
+
+#### Requirements ####
+
+A minimum of two disks is required to create RAID 1; to add more disks, your system must have a physical RAID adapter (hardware card).
+
+Here we’re using software RAID, not hardware RAID; if your system has an inbuilt physical hardware RAID card, you can access it from its utility UI or by using the Ctrl+I keys.
+
+Read Also: [Basic Concepts of RAID in Linux][1]
+
+#### My Server Setup ####
+
+ Operating System : CentOS 6.5 Final
+ IP Address : 192.168.0.226
+ Hostname : rd1.tecmintlocal.com
+ Disk 1 [20GB] : /dev/sdb
+ Disk 2 [20GB] : /dev/sdc
+
+This article will guide you through step-by-step instructions on how to set up software RAID 1 (mirror) using mdadm (which creates and manages RAID) on the Linux platform. The same instructions also work on other Linux distributions such as RedHat, CentOS, Fedora, etc.
+
+### Step 1: Installing Prerequisites and Examining Drives ###
+
+1. As I said above, we’re using the mdadm utility for creating and managing RAID in Linux. So, let’s install the mdadm software package using the yum or apt-get package manager.
+
+ # yum install mdadm [on RedHat systems]
+ # apt-get install mdadm [on Debian systems]
+
+2. Once the ‘mdadm‘ package has been installed, we need to examine our disk drives to see whether there is already any RAID configured, using the following command.
+
+ # mdadm -E /dev/sd[b-c]
+
+![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png)
+
+Check RAID on Disks
+
+As you can see from the above screen, no super-block has been detected yet, which means no RAID is defined.
+
+### Step 2: Drive Partitioning for RAID ###
+
+3. As I mentioned above, we’re using a minimum of two partitions, /dev/sdb and /dev/sdc, for creating RAID 1. Let’s create partitions on these two drives using the ‘fdisk‘ command and change their type to raid during partition creation.
+
+ # fdisk /dev/sdb
+
+Follow the instructions below.
+
+- Press ‘n‘ to create a new partition.
+- Then choose ‘p‘ for primary partition.
+- Next select partition number 1.
+- Accept the default full size by just pressing the Enter key twice.
+- Next press ‘p‘ to print the defined partition.
+- Type ‘t‘ to change the partition type.
+- Press ‘L‘ to list all available types.
+- Choose ‘fd‘ for Linux raid auto and press Enter to apply.
+- Then again use ‘p‘ to print the changes we have made.
+- Use ‘w‘ to write the changes.
+
+![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png)
+
+Create Disk Partitions
+
+After the ‘/dev/sdb‘ partition has been created, follow the same instructions to create a new partition on the /dev/sdc drive.
+
+ # fdisk /dev/sdc
+
+![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png)
+
+Create Second Partitions
+
+4. Once both partitions have been created successfully, verify the changes on both the sdb and sdc drives using the same ‘mdadm‘ command, and also confirm the RAID type as shown in the following screen grabs.
+
+ # mdadm -E /dev/sd[b-c]
+
+![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png)
+
+Verify Partitions Changes
+
+![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png)
+
+Check RAID Type
+
+**Note**: As you can see in the above picture, no RAID is defined on the sdb1 and sdc1 partitions so far; that’s why we see no super-blocks detected.
+
+### Step 3: Creating RAID1 Devices ###
+
+5. Next, create a RAID 1 device called ‘/dev/md0‘ using the following command and verify it.
+
+ # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1
+ # cat /proc/mdstat
+
+![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png)
+
+Create RAID Device
+
+6. Next, check the RAID device type and the RAID array using the following commands.
+
+ # mdadm -E /dev/sd[b-c]1
+ # mdadm --detail /dev/md0
+
+![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png)
+
+Check RAID Device type
+
+![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png)
+
+Check RAID Device Array
+
+From the pictures above, you can see that the RAID 1 array has been created using the /dev/sdb1 and /dev/sdc1 partitions, and that its status is shown as resyncing.
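The ‘[2/2] [UU]‘ fields in /proc/mdstat are the quickest health check: every ‘U‘ is an up member and an ‘_‘ marks a failed or missing one. A small sketch against a hypothetical mdstat line (the block count is illustrative, not from the article):

```shell
# Hypothetical status line for a healthy two-disk RAID 1:
state='      20955136 blocks super 1.2 [2/2] [UU]'

# Pull out just the member map; a degraded mirror would show [U_]
member_map=$(printf '%s\n' "$state" | grep -o '\[[U_]\{1,\}\]')
echo "$member_map"   # prints: [UU]
```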
+
+### Step 4: Creating File System on RAID Device ###
+
+7. Create an ext4 file system on md0 and mount it under /mnt/raid1.
+
+ # mkfs.ext4 /dev/md0
+
+![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png)
+
+Create RAID Device Filesystem
+
+8. Next, mount the newly created filesystem under ‘/mnt/raid1‘, create some files, and verify the contents under the mount point.
+
+ # mkdir /mnt/raid1
+ # mount /dev/md0 /mnt/raid1/
+ # touch /mnt/raid1/tecmint.txt
+ # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt
+
+![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png)
+
+Mount Raid Device
+
+9. To auto-mount RAID 1 on system reboot, you need to make an entry in the fstab file. Open the ‘/etc/fstab‘ file and add the following line at the bottom.
+
+ /dev/md0 /mnt/raid1 ext4 defaults 0 0
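Each fstab line has six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number (the trailing ‘0 0‘ here disables dump backups and boot-time fsck for the array). A quick sketch that splits the entry into its fields:

```shell
# The exact entry added above, split into its six fstab fields:
entry='/dev/md0 /mnt/raid1 ext4 defaults 0 0'
set -- $entry
echo "device=$1 mountpoint=$2 fstype=$3 options=$4 dump=$5 pass=$6"
```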
+
+![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png)
+
+Raid Automount Device
+
+10. Run ‘mount -av‘ to check whether there are any errors in the fstab entry.
+
+ # mount -av
+
+![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png)
+
+Check Errors in fstab
+
+11. Next, save the RAID configuration manually to the ‘mdadm.conf‘ file using the command below.
+
+ # mdadm --detail --scan --verbose >> /etc/mdadm.conf
+
+![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png)
+
+Save Raid Configuration
+
+The system reads this configuration file at boot time to assemble the RAID devices.
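What actually lands in mdadm.conf is one ARRAY line per array, keyed by a UUID; at assembly time mdadm matches members by that UUID rather than by device names, which can change. A sketch with an invented UUID (yours will differ):

```shell
# Hypothetical ARRAY line of the kind 'mdadm --detail --scan' appends:
conf='ARRAY /dev/md0 level=raid1 num-devices=2 UUID=9af5f3f4:1c2a7d9b:3e4c5d6e:7f8a9b0c'

# Strip everything up to and including 'UUID=' to recover the identifier:
uuid=${conf##*UUID=}
echo "$uuid"
```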
+
+### Step 5: Verify Data After Disk Failure ###
+
+12. Our main purpose is for the data to remain available even if a hard disk fails or crashes. Let's see what happens when one of the disks in the array becomes unavailable.
+
+ # mdadm --detail /dev/md0
+
+![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png)
+
+Raid Device Verify
+
+In the image above, we can see that there are 2 devices in our RAID and that both are active. Now let's see what happens when a disk is unplugged (here, the sdc disk was removed) or fails.
+
+ # ls -l /dev | grep sd
+ # mdadm --detail /dev/md0
+
+![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png)
+
+Test RAID Devices
+
+In the image above, you can see that one of our drives is missing. I unplugged one of the drives from my virtual machine. Now let's check our precious data.
+
+ # cd /mnt/raid1/
+ # cat tecmint.txt
+
+![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png)
+
+Verify RAID Data
+
+As you can see, our data is still available. This shows the advantage of RAID 1 (mirror). In the next article, we will see how to set up RAID 5 striping with distributed parity. I hope this helps you understand how RAID 1 (mirror) works.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-raid1-in-linux/
+
+Author: [Babin Lonston][a]
+Translator: [译者ID](https://github.com/译者ID)
+Proofreader: [校对者ID](https://github.com/校对者ID)
+
+This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
+
+[a]:http://www.tecmint.com/author/babinlonston/
+[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
\ No newline at end of file
diff --git a/sources/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md b/sources/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md
new file mode 100644
index 0000000000..dafdf514aa
--- /dev/null
+++ b/sources/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md
@@ -0,0 +1,286 @@
+Translation in progress by struggling
+Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4
+================================================================================
+In RAID 5, data is striped across multiple drives with distributed parity. Striping with distributed parity means that both the data and the parity information are split and striped across multiple disks, which provides good data redundancy.
+
+![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg)
+
+Setup Raid 5 in Linux
+
+This RAID level requires at least three hard drives. RAID 5 is used in large-scale production environments because it is cost-effective and provides both performance and redundancy.
+
+#### What is Parity? ####
+
+Parity is the simplest common method of detecting errors in data storage. With distributed parity, the parity information is stored across all the disks: if we have 4 disks, the equivalent of one disk's capacity, spread over all of them, is used to store parity. If any one of the disks fails, we can still recover the data by rebuilding it from the parity information after replacing the failed disk.
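RAID 5 parity is plain XOR: the parity block is the XOR of the data blocks in a stripe, so any single lost block equals the XOR of everything that survives. A minimal sketch with three one-byte "disks" (the byte values are arbitrary):

```shell
# One data byte per disk, standing in for whole blocks:
d1=173; d2=92; d3=240

# The parity block is the XOR of all data blocks in the stripe:
parity=$(( d1 ^ d2 ^ d3 ))

# Disk 2 dies; rebuild its block from the survivors plus parity:
rebuilt_d2=$(( d1 ^ d3 ^ parity ))
echo "$rebuilt_d2"   # prints: 92
```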
+
+#### Pros and Cons of RAID 5 ####
+
+- Gives better performance.
+- Supports redundancy and fault tolerance.
+- Supports hot spare options.
+- Loses a single disk's worth of capacity to parity information.
+- No data loss if a single disk fails; we can rebuild from parity after replacing the failed disk.
+- Suits transaction-oriented environments, as reads are faster.
+- Due to parity overhead, writes are slower.
+- Rebuilds take a long time.
+
+#### Requirements ####
+
+A minimum of 3 hard drives is required to create RAID 5. You can add more disks, but only if you have a dedicated hardware RAID controller with multiple ports. Here, we are using software RAID and the ‘mdadm‘ package to create the array.
+
+mdadm is a package that allows us to configure and manage RAID devices in Linux. By default there is no configuration file for RAID; after creating and configuring the RAID setup, we must save the configuration manually to a separate file called mdadm.conf.
+
+Before moving further, I suggest you go through the following articles to understand the basics of RAID in Linux.
+
+- [Basic Concepts of RAID in Linux – Part 1][1]
+- [Creating RAID 0 (Stripe) in Linux – Part 2][2]
+- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3]
+
+#### My Server Setup ####
+
+ Operating System : CentOS 6.5 Final
+ IP Address : 192.168.0.227
+ Hostname : rd5.tecmintlocal.com
+ Disk 1 [20GB] : /dev/sdb
+ Disk 2 [20GB] : /dev/sdc
+ Disk 3 [20GB] : /dev/sdd
+
+This article is Part 4 of a 9-tutorial RAID series. Here we are going to set up software RAID 5 with distributed parity on a Linux system or server using three 20GB disks named /dev/sdb, /dev/sdc and /dev/sdd.
+
+### Step 1: Installing mdadm and Verify Drives ###
+
+1. As we said earlier, we're using the CentOS 6.5 Final release for this RAID setup, but the same steps can be followed on any Linux-based distribution.
+
+ # lsb_release -a
+ # ifconfig | grep inet
+
+![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png)
+
+CentOS 6.5 Summary
+
+2. If you’re following our RAID series, we assume that you’ve already installed the ‘mdadm‘ package; if not, use the following command according to your Linux distribution to install it.
+
 # yum install mdadm [on RedHat systems]
 # apt-get install mdadm [on Debian systems]
+
+3. After installing the ‘mdadm‘ package, let’s list the three 20GB disks we have added to our system using the ‘fdisk‘ command.
+
+ # fdisk -l | grep sd
+
+![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png)
+
+Install mdadm Tool
+
+4. Now it’s time to examine the three attached drives for any existing RAID blocks, using the following commands.
+
+ # mdadm -E /dev/sd[b-d]
+ # mdadm --examine /dev/sdb /dev/sdc /dev/sdd
+
+![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png)
+
+Examine Drives For Raid
+
+**Note**: The image above shows that no super-block has been detected yet, so no RAID is defined on any of the three drives. Let us start creating one now.
+
+### Step 2: Partitioning the Disks for RAID ###
+
+5. First and foremost, we have to partition the disks (/dev/sdb, /dev/sdc and /dev/sdd) before adding them to the RAID, so let us define the partitions using the ‘fdisk’ command before moving on to the next steps.
+
+ # fdisk /dev/sdb
+ # fdisk /dev/sdc
+ # fdisk /dev/sdd
+
+#### Create /dev/sdb Partition ####
+
+Please follow the instructions below to create a partition on the /dev/sdb drive.
+
+- Press ‘n‘ to create a new partition.
+- Then choose ‘P‘ for Primary partition. Here we choose Primary because no partitions are defined yet.
+- Then choose ‘1‘ as the partition number. By default it will be 1.
+- We don’t have to specify a cylinder size here, because we need the whole disk for RAID, so just press Enter twice to accept the default full size.
+- Next press ‘p‘ to print the created partition.
+- Press ‘t‘ to change the partition type; if you need to see all available type codes, press ‘L‘.
+- Here, we select ‘fd‘ as the type, since this partition is for RAID.
+- Press ‘p‘ again to print the changes we have made.
+- Use ‘w‘ to write the changes and exit.
+
+![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png)
+
+Create sdb Partition
+
+**Note**: We have to follow the same steps to create partitions on the sdc and sdd drives too.
+
+#### Create /dev/sdc Partition ####
+
+Now partition the sdc and sdd drives by following the steps shown in the screenshots, or simply repeat the steps above.
+
+ # fdisk /dev/sdc
+
+![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png)
+
+Create sdc Partition
+
+#### Create /dev/sdd Partition ####
+
+ # fdisk /dev/sdd
+
+![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png)
+
+Create sdd Partition
+
+6. After creating the partitions, check the changes on all three drives: sdb, sdc, and sdd.
+
+ # mdadm --examine /dev/sdb /dev/sdc /dev/sdd
+
+ or
+
 # mdadm -E /dev/sd[b-d]
+
+![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png)
+
+Check Partition Changes
+
+**Note**: The picture above shows that the partition type is fd, i.e. Linux RAID auto.
+
+7. Now check for RAID blocks on the newly created partitions. If no super-blocks are detected, we can move forward and create a new RAID 5 setup on these drives.
+
+![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png)
+
+Check Raid on Partition
+
+### Step 3: Creating md device md0 ###
+
+8. Now create the RAID device ‘md0‘ (i.e. /dev/md0) with the chosen RAID level across all the newly created partitions (sdb1, sdc1 and sdd1) using the command below.
+
+ # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
+
+ or
+
+ # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1
+
+9. After creating the RAID device, check and verify the RAID, the devices included, and the RAID level from the mdstat output.
+
+ # cat /proc/mdstat
+
+![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png)
+
+Verify Raid Device
+
+If you want to monitor the current build process, you can use the ‘watch‘ command: pass it ‘cat /proc/mdstat‘ and it will refresh the screen every second.
+
+ # watch -n1 cat /proc/mdstat
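In a script, you may prefer to block until the initial build finishes rather than watch it. A sketch, not from the article: the function takes the mdstat path as an argument so it can be exercised here against a snapshot file instead of the real /proc/mdstat.

```shell
# Poll the given mdstat file until no resync/recovery is in progress.
wait_for_sync() {
    while grep -qE 'resync|recovery' "$1"; do
        sleep 5
    done
}

# Exercise it against a snapshot of an already-synced array:
printf 'md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]\n' > /tmp/mdstat.snapshot
wait_for_sync /tmp/mdstat.snapshot && echo "array is in sync"
```

Against the live file you would call ‘wait_for_sync /proc/mdstat‘.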
+
+![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png)
+
+Monitor Raid 5 Process
+
+![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png)
+
+Raid 5 Process Summary
+
+10. After the RAID has been created, verify the RAID devices using the following command.
+
+ # mdadm -E /dev/sd[b-d]1
+
+![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png)
+
+Verify Raid Level
+
+**Note**: The output of the above command will be a little long, as it prints the information of all three drives.
+
+11. Next, verify the RAID array to confirm that the devices we’ve included in it are running and have started to re-sync.
+
+ # mdadm --detail /dev/md0
+
+![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png)
+
+Verify Raid Array
+
+### Step 4: Creating file system for md0 ###
+
+12. Create an ext4 file system on the ‘md0‘ device before mounting.
+
+ # mkfs.ext4 /dev/md0
+
+![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png)
+
+Create md0 Filesystem
+
+13. Now create a directory under ‘/mnt‘, mount the newly created filesystem under /mnt/raid5, and check the files under the mount point; you will see a lost+found directory.
+
+ # mkdir /mnt/raid5
+ # mount /dev/md0 /mnt/raid5/
+ # ls -l /mnt/raid5/
+
+14. Create a few files under the mount point /mnt/raid5 and append some text to one of them to verify the content.
+
+ # touch /mnt/raid5/raid5_tecmint_{1..5}
+ # ls -l /mnt/raid5/
+ # echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1
+ # cat /mnt/raid5/raid5_tecmint_1
+ # cat /proc/mdstat
+
+![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png)
+
+Mount Raid Device
+
+15. We need to add an entry in fstab, otherwise our mount point will not come back after a system reboot. To add the entry, edit the fstab file and append the following line as shown below. The mount point may differ according to your environment.
+
+ # vim /etc/fstab
+
+ /dev/md0 /mnt/raid5 ext4 defaults 0 0
+
+![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png)
+
+Raid 5 Automount
+
+16. Next, run the ‘mount -av‘ command to check whether there are any errors in the fstab entry.
+
+ # mount -av
+
+![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png)
+
+Check Fstab Errors
+
+### Step 5: Save Raid 5 Configuration ###
+
+17. As mentioned earlier in the requirements section, RAID has no config file by default; we have to save it manually. If this step is skipped, the RAID device may not come back as md0 after a reboot, but under some other random name.
+
+So, we must save the configuration before the system reboots. If the configuration is saved, it will be loaded by the kernel during the reboot and the RAID array will be assembled.
+
+ # mdadm --detail --scan --verbose >> /etc/mdadm.conf
+
+![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png)
+
+Save Raid 5 Configuration
+
+Note: Saving the configuration keeps the array stable on the md0 device across reboots.
+
+### Step 6: Adding Spare Drives ###
+
+18. What is the use of adding a spare drive? If any one of the disks in our array fails, the spare drive becomes active, the rebuild process starts, and the data is synced from the other disks; this is the redundancy at work.
+
+For more instructions on how to add a spare drive and check RAID 5 fault tolerance, read #Step 6 and #Step 7 in the following article.
+
+- [Add Spare Drive to Raid 5 Setup][4]
+
+### Conclusion ###
+
+In this article, we have seen how to set up RAID 5 using three disks. In upcoming articles, we will see how to troubleshoot when a disk fails in RAID 5 and how to replace it for recovery.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-raid-5-in-linux/
+
+Author: [Babin Lonston][a]
+Translator: [译者ID](https://github.com/译者ID)
+Proofreader: [校对者ID](https://github.com/校对者ID)
+
+This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
+
+[a]:http://www.tecmint.com/author/babinlonston/
+[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
+[2]:http://www.tecmint.com/create-raid0-in-linux/
+[3]:http://www.tecmint.com/create-raid1-in-linux/
+[4]:http://www.tecmint.com/create-raid-6-in-linux/
\ No newline at end of file
diff --git a/sources/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md b/sources/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md
new file mode 100644
index 0000000000..ea1d5993c0
--- /dev/null
+++ b/sources/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md
@@ -0,0 +1,321 @@
+Translation in progress by struggling
+Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5
+================================================================================
+RAID 6 is an upgraded version of RAID 5 with two blocks of distributed parity, providing fault tolerance even after two drives fail. Mission-critical systems remain operational in the case of two concurrent disk failures. It is similar to RAID 5, but more robust, because it uses one more disk for parity.
+
+In our earlier article, we saw distributed parity in RAID 5; in this article we are going to see RAID 6 with double distributed parity. Don’t expect extra performance over other RAID levels unless you also install a dedicated RAID controller. In RAID 6, even if we lose 2 disks, we can get the data back by replacing them with spare drives and rebuilding from parity.
+
+![Setup RAID 6 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Setup-RAID-6-in-Linux.jpg)
+
+Setup RAID 6 in Linux
+
+To set up RAID 6, a minimum of 4 disks in a set is required. While reading, RAID 6 reads from all the drives, so reads are faster, whereas writes are poorer because parity has to be striped over multiple disks.
+
+Many of us may wonder why we should use RAID 6 when it doesn’t perform as well as other RAID levels. Those who raise this question need to know that if they need high fault tolerance, they should choose RAID 6. Environments requiring high availability for databases use RAID 6, because the database is the most important asset and needs to be safe at any cost; it can also be useful for video streaming environments.
+
+#### Pros and Cons of RAID 6 ####
+
+- Performance is good.
+- RAID 6 is expensive, as two independent drives are used for parity functions.
+- Loses two disks’ worth of capacity to parity information (double parity).
+- No data loss, even after two disks fail; we can rebuild from parity after replacing the failed disks.
+- Reads are better than RAID 5, because it reads from multiple disks, but write performance is very poor without a dedicated RAID controller.
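The double-parity cost listed above is easy to quantify: usable capacity is (number of disks − 2) × disk size. For the four 20GB disks used later in this article, that works out as:

```shell
disks=4; size_gb=20

# Two disks' worth of capacity goes to parity in RAID 6:
usable=$(( (disks - 2) * size_gb ))
raw=$(( disks * size_gb ))
echo "${usable}GB usable out of ${raw}GB raw"   # prints: 40GB usable out of 80GB raw
```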
+
+#### Requirements ####
+
+A minimum of 4 disks is required to create a RAID 6. You can add more disks, but you must have a dedicated hardware RAID controller. Software RAID will not give better performance in RAID 6, so a physical RAID controller is recommended.
+
+If you are new to RAID setup, we recommend going through the RAID articles below.
+
+- [Basic Concepts of RAID in Linux – Part 1][1]
+- [Creating Software RAID 0 (Stripe) in Linux – Part 2][2]
+- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3]
+
+#### My Server Setup ####
+
+ Operating System : CentOS 6.5 Final
+ IP Address : 192.168.0.228
+ Hostname : rd6.tecmintlocal.com
+ Disk 1 [20GB] : /dev/sdb
+ Disk 2 [20GB] : /dev/sdc
+ Disk 3 [20GB] : /dev/sdd
+ Disk 4 [20GB] : /dev/sde
+
+This article is Part 5 of a 9-tutorial RAID series. Here we are going to see how to create and set up software RAID 6 (striping with double distributed parity) on a Linux system or server using four 20GB disks named /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde.
+
+### Step 1: Installing mdadm Tool and Examine Drives ###
+
+1. If you’re following our last two RAID articles (Part 2 and Part 3), you’ve already seen how to install the ‘mdadm‘ tool. If you’re new to this series, ‘mdadm‘ is a tool to create and manage RAID in Linux systems. Let’s install the tool using the following command according to your Linux distribution.
+
 # yum install mdadm [on RedHat systems]
 # apt-get install mdadm [on Debian systems]
+
+2. After installing the tool, it’s time to verify the four attached drives that we are going to use for RAID creation, using the following ‘fdisk‘ command.
+
+ # fdisk -l | grep sd
+
+![Check Hard Disk in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Linux-Disks.png)
+
+Check Disks in Linux
+
+3. Before creating the RAID drives, always examine the disks to check whether any RAID is already created on them.
+
+ # mdadm -E /dev/sd[b-e]
+ # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
+
+![Check Raid on Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Disk-Raid.png)
+
+Check Raid on Disk
+
+**Note**: The image above shows that no super-block is detected, i.e. no RAID is defined on the four disk drives. We can move on and start creating RAID 6.
+
+### Step 2: Drive Partitioning for RAID 6 ###
+
+4. Now create partitions for RAID on ‘/dev/sdb‘, ‘/dev/sdc‘, ‘/dev/sdd‘ and ‘/dev/sde‘ with the help of the following fdisk commands. Here we show how to create a partition on the sdb drive; the same steps are to be followed for the rest of the drives.
+
+**Create /dev/sdb Partition**
+
+ # fdisk /dev/sdb
+
+Please follow the instructions below to create the partition.
+
+- Press ‘n‘ to create a new partition.
+- Then choose ‘P‘ for Primary partition.
+- Next choose the partition number as 1.
+- Accept the default full size by pressing the Enter key twice.
+- Next press ‘p‘ to print the defined partition.
+- Press ‘t‘ to change the partition type.
+- Press ‘L‘ to list all available type codes, then choose ‘fd‘ for Linux raid auto and press Enter to apply.
+- Press ‘p‘ again to print the changes we have made.
+- Use ‘w‘ to write the changes and exit.
+
+![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition.png)
+
+Create /dev/sdb Partition
+
+**Create /dev/sdc Partition**
+
+ # fdisk /dev/sdc
+
+![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition.png)
+
+Create /dev/sdc Partition
+
+**Create /dev/sdd Partition**
+
+ # fdisk /dev/sdd
+
+![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition.png)
+
+Create /dev/sdd Partition
+
+**Create /dev/sde Partition**
+
+ # fdisk /dev/sde
+
+![Create sde Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sde-Partition.png)
+
+Create /dev/sde Partition
+
+5. After creating the partitions, it’s always a good habit to examine the drives for super-blocks. If no super-blocks exist, we can go ahead and create a new RAID setup.
+
+ # mdadm -E /dev/sd[b-e]1
+
+
+ or
+
+ # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
+
+![Check Raid on New Partitions](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Partitions.png)
+
+Check Raid on New Partitions
+
+### Step 3: Creating md device (RAID) ###
+
+6. Now it’s time to create the RAID device ‘md0‘ (i.e. /dev/md0), applying the RAID level across all the newly created partitions, and to confirm the RAID using the following commands.
+
+ # mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
+ # cat /proc/mdstat
+
+![Create Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Raid-6-Device.png)
+
+Create Raid 6 Device
+
+7. You can also check the current progress of the RAID build using the watch command, as shown in the screen grab below.
+
+ # watch -n1 cat /proc/mdstat
+
+![Check Raid 6 Process](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Process.png)
+
+Check Raid 6 Process
+
+8. Verify the raid devices using the following command.
+
 # mdadm -E /dev/sd[b-e]1
+
+**Note**: The above command displays the information of all four disks, which is quite long, so it is not possible to post the full output or a screen grab here.
+
+9. Next, verify the RAID array to confirm that re-syncing has started.
+
+ # mdadm --detail /dev/md0
+
+![Check Raid 6 Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Array.png)
+
+Check Raid 6 Array
+
+### Step 4: Creating FileSystem on Raid Device ###
+
+10. Create a filesystem using ext4 on ‘/dev/md0‘ and mount it under /mnt/raid6. Here we’ve used ext4, but you can use any type of filesystem of your choice.
+
+ # mkfs.ext4 /dev/md0
+
+![Create File System on Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Create-File-System-on-Raid.png)
+
+Create File System on Raid 6
+
+11. Mount the created filesystem under /mnt/raid6 and verify the files under mount point, we can see lost+found directory.
+
+ # mkdir /mnt/raid6
+ # mount /dev/md0 /mnt/raid6/
+ # ls -l /mnt/raid6/
+
+12. Create some files under the mount point and append some text to one of them to verify the content.
+
+ # touch /mnt/raid6/raid6_test.txt
+ # ls -l /mnt/raid6/
+ # echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt
+ # cat /mnt/raid6/raid6_test.txt
+
+![Verify Raid Content](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Content.png)
+
+Verify Raid Content
+
+13. Add an entry in /etc/fstab to auto-mount the device at system startup, appending the entry below; the mount point may differ according to your environment.
+
+ # vim /etc/fstab
+
+ /dev/md0 /mnt/raid6 ext4 defaults 0 0
+
+![Automount Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Automount-Raid-Device.png)
+
+Automount Raid 6 Device
+
+14. Next, execute the ‘mount -av‘ command to verify whether there are any errors in the fstab entry.
+
+ # mount -av
+
+![Verify Raid Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Automount-Raid-Devices.png)
+
+Verify Raid Automount
+
+### Step 5: Save RAID 6 Configuration ###
+
+15. Please note that by default RAID has no config file; we have to save it manually using the command below, and then verify the status of the device ‘/dev/md0‘.
+
+ # mdadm --detail --scan --verbose >> /etc/mdadm.conf
+ # mdadm --detail /dev/md0
+
+![Save Raid 6 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png)
+
+Save Raid 6 Configuration
+
+![Check Raid 6 Status](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png)
+
+Check Raid 6 Status
+
+### Step 6: Adding a Spare Drive ###
+
+16. Our array now has 4 disks with two sets of parity information available. Even if one or two of the disks fail, we can recover the data, because of the double parity in RAID 6.
+
+If a second disk fails, we can add a new one before losing a third disk. It is possible to define a spare drive while creating the RAID set, but I did not do so here; a spare drive can be added either at creation time or after a drive failure. Since we have already created the RAID set, let me add a spare drive for demonstration.
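For reference, mdadm can reserve the spare at creation time via its --spare-devices (-x) option. The sketch below only assembles the command string instead of running it, since it needs five real partitions; the device names follow this article's layout.

```shell
# Build (but do not execute) a create command with one hot spare;
# --raid-devices takes the first four partitions, --spare-devices the fifth:
cmd="mdadm --create /dev/md0 --level=6 --raid-devices=4 --spare-devices=1 \
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1"
echo "$cmd"
```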
+
+For demonstration purposes, I’ve hot-plugged a new HDD (i.e. /dev/sdf); let’s verify the attached disk.
+
+ # ls -l /dev/ | grep sd
+
+![Check New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-New-Disk.png)
+
+Check New Disk
+
+17. Now confirm whether any RAID is already configured on the newly attached disk, using the same mdadm command.
+
+ # mdadm --examine /dev/sdf
+
+![Check Raid on New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Disk.png)
+
+Check Raid on New Disk
+
+**Note**: As usual, just as we created partitions for the four disks earlier, we have to create a new partition on the newly plugged disk using the fdisk command.
+
+ # fdisk /dev/sdf
+
+![Create sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Partition-on-sdf.png)
+
+Create /dev/sdf Partition
+
+18. After creating the new partition on /dev/sdf, confirm there is no RAID on the partition, add the spare drive to the /dev/md0 RAID device, and verify the added device.
+
+ # mdadm --examine /dev/sdf
+ # mdadm --examine /dev/sdf1
+ # mdadm --add /dev/md0 /dev/sdf1
+ # mdadm --detail /dev/md0
+
+![Verify Raid on sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-on-sdf.png)
+
+Verify Raid on sdf Partition
+
+![Add sdf Partition to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Add-sdf-Partition-to-Raid.png)
+
+Add sdf Partition to Raid
+
+![Verify sdf Partition Details](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-sdf-Details.png)
+
+Verify sdf Partition Details
+
+### Step 7: Check Raid 6 Fault Tolerance ###
+
+19. Now, let us check whether the spare drive takes over automatically if one of the disks in our array fails. For testing, I will manually mark one of the drives as failed.
+
+Here, we’re going to mark /dev/sdd1 as the failed drive.
+
+ # mdadm --manage --fail /dev/md0 /dev/sdd1
+
+![Check Raid 6 Fault Tolerance](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Failover.png)
+
+Check Raid 6 Fault Tolerance
+
+20. Let me get the details of the RAID set now and check whether our spare has started to sync.
+
+ # mdadm --detail /dev/md0
+
+![Check Auto Raid Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Auto-Raid-Syncing.png)
+
+Check Auto Raid Syncing
+
+**Hurray!** Here we can see that the spare was activated and the rebuilding process started. At the bottom we can see the faulty drive /dev/sdd1 listed as faulty. We can monitor the build process using the following command.
+
+ # cat /proc/mdstat
+
+![Raid 6 Auto Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-6-Auto-Syncing.png)
+
+Raid 6 Auto Syncing
+
+### Conclusion: ###
+
+Here, we have seen how to set up RAID 6 using four disks. This RAID level is one of the more expensive setups, with high redundancy. We will see how to set up nested RAID 10 and much more in the next articles. Till then, stay connected with TECMINT.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-raid-6-in-linux/
+
+Author: [Babin Lonston][a]
+Translator: [译者ID](https://github.com/译者ID)
+Proofreader: [校对者ID](https://github.com/校对者ID)
+
+This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
+
+[a]:http://www.tecmint.com/author/babinlonston/
+[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
+[2]:http://www.tecmint.com/create-raid0-in-linux/
+[3]:http://www.tecmint.com/create-raid1-in-linux/
\ No newline at end of file
diff --git a/sources/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md b/sources/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md
new file mode 100644
index 0000000000..a08903e00e
--- /dev/null
+++ b/sources/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md
@@ -0,0 +1,276 @@
+Translation in progress by struggling
+Setting Up RAID 10 or 1+0 (Nested) in Linux – Part 6
+================================================================================
+RAID 10 is a combination of RAID 0 and RAID 1. To set up RAID 10, we need at least 4 disks. In our earlier articles, we’ve seen how to set up RAID 0 and RAID 1, each with a minimum of 2 disks.
+
+Here we will use both RAID 0 and RAID 1 to perform a RAID 10 setup with a minimum of 4 drives. Assume that we have some data saved to a logical volume created with RAID 10. For example, if we save the data “apple”, it will be saved across all 4 disks by the following method.
+
+![Create Raid 10 in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/raid10.jpg)
+
+Create Raid 10 in Linux
+
+Using RAID 0, “A” is written to the first disk and “p” to the second disk, then the next “p” to the first disk and “l” to the second disk, then “e” to the first disk, and so on in round-robin fashion. From this we can see that RAID 0 writes half of the data to the first disk and the other half to the second disk.
+
+In the RAID 1 method, the same data is written to the other 2 disks as follows: “A” is mirrored to both the first and second disks, “p” is mirrored to both disks, the next “p” is mirrored to both disks, and so on. Thus RAID 1 writes every chunk to both disks of a pair, and this too continues in round-robin fashion.
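+
+The layout described above can be sketched with a few lines of shell. This is purely illustrative: real RAID striping works on fixed-size chunks rather than single characters, and the pair assignment below is an assumption made for the demo.
+
```shell
#!/bin/sh
# Toy model of RAID 1+0: stripe the string "apple" character by character
# across two mirrored pairs (disk1/disk2 and disk3/disk4).
data="apple"
d1=""; d2=""   # first mirrored pair
d3=""; d4=""   # second mirrored pair
i=0
while [ "$i" -lt "${#data}" ]; do
    c=$(printf '%s' "$data" | cut -c"$((i + 1))")
    if [ $((i % 2)) -eq 0 ]; then
        d1="$d1$c"; d2="$d2$c"   # RAID 1: the chunk lands on both disks of the pair
    else
        d3="$d3$c"; d4="$d4$c"
    fi
    i=$((i + 1))
done
echo "pair1: $d1/$d2  pair2: $d3/$d4"
```
+
+Each mirrored pair ends up holding half of the characters, and both disks of a pair hold identical copies.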
+
+Now you know how RAID 10 works by combining RAID 0 and RAID 1. If we have 4 disks of 20 GB each, that is 80 GB in total, but we will get only 40 GB of storage capacity; half of the total capacity is lost to building RAID 10.
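+
+The capacity math is easy to check with shell arithmetic; the numbers below are the 4 x 20 GB disks used in this setup.
+
```shell
# RAID 10 usable capacity = (number of disks / 2) * disk size,
# since every chunk is stored twice.
disks=4
size_gb=20
raw=$((disks * size_gb))
usable=$((disks / 2 * size_gb))
echo "raw: ${raw} GB, usable: ${usable} GB"
```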
+
+#### Pros and Cons of RAID 10 ####
+
+- Gives better performance.
+- We lose half of the total disk capacity in RAID 10.
+- Read and write performance is very good, because it reads from and writes to all 4 disks at the same time.
+- It can be used for database solutions that need high disk-write I/O.
+
+#### Requirements ####
+
+In RAID 10, we need a minimum of 4 disks: the first 2 disks for RAID 0 and the other 2 disks for RAID 1. As I said before, RAID 10 is just a combination of RAID 0 and 1. If we need to extend the RAID group, we must add disks in multiples of 4.
+
+**My Server Setup**
+
+ Operating System : CentOS 6.5 Final
+ IP Address : 192.168.0.229
+ Hostname : rd10.tecmintlocal.com
+ Disk 1 [20GB] : /dev/sdb
+ Disk 2 [20GB] : /dev/sdc
+ Disk 3 [20GB] : /dev/sdd
+ Disk 4 [20GB] : /dev/sde
+
+There are two ways to set up RAID 10. I’m going to show you both methods here, but I recommend you follow the first one, which makes the work a lot easier.
+
+### Method 1: Setting Up Raid 10 ###
+
+1. First, verify that all 4 added disks are detected using the following command.
+
+ # ls -l /dev | grep sd
+
+2. Once the four disks are detected, check whether any RAID already exists on the drives before creating a new one.
+
+ # mdadm -E /dev/sd[b-e]
+ # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
+
+![Verify 4 Added Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-4-Added-Disks.png)
+
+Verify 4 Added Disks
+
+**Note**: In the above output, you can see that no super-block has been detected yet, which means no RAID is defined on any of the 4 drives.
+
+#### Step 1: Drive Partitioning for RAID ####
+
+3. Now create a new partition on all 4 disks (/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde) using the ‘fdisk’ tool.
+
+ # fdisk /dev/sdb
+ # fdisk /dev/sdc
+ # fdisk /dev/sdd
+ # fdisk /dev/sde
+
+**Create /dev/sdb Partition**
+
+Let me show you how to partition one of the disks (/dev/sdb) using fdisk; these steps are the same for all the other disks.
+
+ # fdisk /dev/sdb
+
+Please use the below steps for creating a new partition on /dev/sdb drive.
+
+- Press ‘n‘ to create a new partition.
+- Then choose ‘p‘ for a primary partition.
+- Then choose ‘1‘ to make it the first partition.
+- Next press ‘p‘ to print the created partition.
+- Press ‘t‘ to change the partition type; if you need to see every available type, press ‘L‘.
+- Here we select ‘fd‘, as the type we need is Linux raid autodetect.
+- Press ‘p‘ again to verify the changes we have made.
+- Use ‘w‘ to write the changes.
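+
+If you have to repeat these keystrokes on several disks, they can be fed to fdisk non-interactively. The sketch below only builds and prints the keystroke sequence; the device name is an example, and actually piping it into fdisk is destructive, so treat this as a starting point rather than a ready-made command.
+
```shell
# Keystroke sequence for the steps above: new primary partition 1,
# accept default start/end sectors (blank lines), type fd, write.
DISK=/dev/sdb   # example device -- double-check before using for real
keys='n
p
1


t
fd
w
'
printf '%s' "$keys"
# To actually apply it (DESTROYS DATA):  printf '%s' "$keys" | fdisk "$DISK"
```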
+
+![Disk sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-sdb-Partition.png)
+
+Disk sdb Partition
+
+**Note**: Please use the same instructions above to create partitions on the other disks (sdc, sdd and sde).
+
+4. After creating all 4 partitions, you again need to examine the drives for any existing RAID using the following commands.
+
+ # mdadm -E /dev/sd[b-e]
+ # mdadm -E /dev/sd[b-e]1
+
+ OR
+
+ # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
+ # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
+
+![Check All Disks for Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Check-All-Disks-for-Raid.png)
+
+Check All Disks for Raid
+
+**Note**: The above output shows that no super-block has been detected on any of the four newly created partitions, which means we can move forward and create RAID 10 on these drives.
+
+#### Step 2: Creating ‘md’ RAID Device ####
+
+5. Now it’s time to create an ‘md’ (i.e. /dev/md0) device using the ‘mdadm’ RAID management tool. Before creating the device, your system must have the ‘mdadm’ tool installed; if not, install it first.
+
+ # yum install mdadm [on RedHat systems]
+ # apt-get install mdadm [on Debian systems]
+
+Once the ‘mdadm’ tool is installed, you can create the ‘md’ RAID device using the following command.
+
+ # mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1
+
+6. Next, verify the newly created RAID device using the ‘cat’ command.
+
+ # cat /proc/mdstat
+
+![Create md raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-raid-Device.png)
+
+Create md raid Device
+
+7. Next, examine all 4 drives using the command below. Its output will be long, as it displays the information of all 4 disks.
+
+ # mdadm --examine /dev/sd[b-e]1
+
+8. Next, check the details of the RAID array with the help of the following command.
+
+ # mdadm --detail /dev/md0
+
+![Check Raid Array Details](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Array-Details.png)
+
+Check Raid Array Details
+
+**Note**: You can see in the above results that the RAID status is active and re-syncing.
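+
+While the array re-syncs, you can pull the progress figure out of /proc/mdstat with a little awk. The sample text below is a made-up excerpt in the usual mdstat format; on a live system you would read /proc/mdstat itself.
+
```shell
# Extract the resync percentage from mdstat-style output.
sample='md0 : active raid10 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41909248 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [=>...................]  resync =  8.3% (3494272/41909248) finish=3.1min'
progress=$(printf '%s\n' "$sample" |
    awk '/resync/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i }')
echo "resync progress: $progress"
```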
+
+#### Step 3: Creating Filesystem ####
+
+9. Create an ext4 file system on ‘md0‘ and mount it under ‘/mnt/raid10‘. Here I’ve used ext4, but you can use any filesystem type you want.
+
+ # mkfs.ext4 /dev/md0
+
+![Create md Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-Filesystem.png)
+
+Create md Filesystem
+
+10. After creating the filesystem, mount it under ‘/mnt/raid10‘ and list the contents of the mount point using the ‘ls -l’ command.
+
+ # mkdir /mnt/raid10
+ # mount /dev/md0 /mnt/raid10/
+ # ls -l /mnt/raid10/
+
+Next, create some files under the mount point, append some text to one of them and check the content.
+
+ # touch /mnt/raid10/raid10_files.txt
+ # ls -l /mnt/raid10/
+ # echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt
+ # cat /mnt/raid10/raid10_files.txt
+
+![Mount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-md-Device.png)
+
+Mount md Device
+
+11. For automounting, open the ‘/etc/fstab‘ file and append the entry below; the mount point may differ according to your environment. Save and quit using ‘:wq‘.
+
+ # vim /etc/fstab
+
+ /dev/md0 /mnt/raid10 ext4 defaults 0 0
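+
+An fstab entry is six whitespace-separated fields: device, mount point, filesystem type, options, dump flag and fsck order. A quick way to sanity-check the line before rebooting is to split it and count the fields; the entry below mirrors the one above.
+
```shell
# Split the fstab entry (unquoted on purpose, so the shell
# word-splits it into fields) and verify all six are present.
entry='/dev/md0 /mnt/raid10 ext4 defaults 0 0'
set -- $entry
echo "fields: $#  device: $1  mountpoint: $2  type: $3"
```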
+
+![AutoMount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/AutoMount-md-Device.png)
+
+AutoMount md Device
+
+12. Next, verify the ‘/etc/fstab‘ entry for any errors using the ‘mount -av‘ command before restarting the system.
+
+ # mount -av
+
+![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Errors-in-Fstab.png)
+
+Check Errors in Fstab
+
+#### Step 4: Save RAID Configuration ####
+
+13. By default RAID doesn’t have a config file, so we need to save it manually after completing all the above steps, in order to preserve these settings across system boots.
+
+ # mdadm --detail --scan --verbose >> /etc/mdadm.conf
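+
+One caveat with ‘>>‘ is that running the command twice leaves duplicate ARRAY lines in /etc/mdadm.conf. A guarded append avoids that; the sketch below rehearses the idea on a temp file with a made-up ARRAY line rather than touching the real config.
+
```shell
# Append the ARRAY line only if md0 is not already recorded.
conf=$(mktemp)
scan='ARRAY /dev/md0 level=raid10 num-devices=4 UUID=0000:0000:0000:0000'
for run in 1 2; do   # the second run must be a no-op
    grep -q '^ARRAY /dev/md0 ' "$conf" || printf '%s\n' "$scan" >> "$conf"
done
lines=$(wc -l < "$conf")
echo "lines in conf: $lines"
rm -f "$conf"
```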
+
+![Save Raid10 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid10-Configuration.png)
+
+Save Raid10 Configuration
+
+That’s it, we have created RAID 10 using method 1; this method is the easier one. Now let’s move forward and set up RAID 10 using method 2.
+
+### Method 2: Creating RAID 10 ###
+
+1. In method 2, we have to define 2 sets of RAID 1 and then define a RAID 0 over those RAID 1 sets. Here, what we will do is first create 2 mirrors (RAID 1) and then stripe over them (RAID 0).
+
+First, list all the disks available for creating RAID 10.
+
+ # ls -l /dev | grep sd
+
+![List 4 Devices](http://www.tecmint.com/wp-content/uploads/2014/11/List-4-Devices.png)
+
+List 4 Devices
+
+2. Partition all 4 disks using the ‘fdisk’ command. For partitioning, you can follow step 3 above.
+
+ # fdisk /dev/sdb
+ # fdisk /dev/sdc
+ # fdisk /dev/sdd
+ # fdisk /dev/sde
+
+3. After partitioning all 4 disks, examine them for any existing RAID blocks.
+
+ # mdadm --examine /dev/sd[b-e]
+ # mdadm --examine /dev/sd[b-e]1
+
+![Examine 4 Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-4-Disks.png)
+
+Examine 4 Disks
+
+#### Step 1: Creating RAID 1 ####
+
+4. First let me create 2 sets of RAID 1: one set using the ‘sdb1‘ and ‘sdc1‘ partitions and the other set using ‘sdd1‘ and ‘sde1‘.
+
+ # mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1
+ # mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1
+ # cat /proc/mdstat
+
+![Creating Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png)
+
+Creating Raid 1
+
+![Check Details of Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png)
+
+Check Details of Raid 1
+
+#### Step 2: Creating RAID 0 ####
+
+5. Next, create the RAID 0 using md1 and md2 devices.
+
+ # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
+ # cat /proc/mdstat
+
+![Creating Raid 0](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-0.png)
+
+Creating Raid 0
+
+#### Step 3: Save RAID Configuration ####
+
+6. We need to save the configuration in ‘/etc/mdadm.conf‘ so that all RAID devices are loaded at every reboot.
+
+ # mdadm --detail --scan --verbose >> /etc/mdadm.conf
+
+After this, follow step 3 (creating a file system) of method 1.
+
+That’s it! We have created RAID 1+0 using method 2. We lose two disks’ worth of space here, but the performance is excellent compared to any other RAID setup.
+
+### Conclusion ###
+
+Here we have created RAID 10 using two methods. RAID 10 provides both good performance and redundancy. Hope this helps you understand the RAID 10 nested RAID level. We will see how to grow an existing RAID array and much more in my upcoming articles.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-raid-10-in-linux/
+
+作者:[Babin Lonston][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/babinlonston/
\ No newline at end of file
diff --git a/sources/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md b/sources/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md
new file mode 100644
index 0000000000..76039f4371
--- /dev/null
+++ b/sources/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md
@@ -0,0 +1,180 @@
+struggling 翻译中
+Growing an Existing RAID Array and Removing Failed Disks in Raid – Part 7
+================================================================================
+Every newbie gets confused by the word array. An array is just a collection of disks; in other words, we can call an array a set or group, just like a carton containing 6 eggs. Likewise, a RAID array contains a number of disks: 2, 4, 6, 8, 12, 16 and so on. Hope now you know what an array is.
+
+Here we will see how to grow (extend) an existing array or RAID group. For example, if we are using 2 disks in an array to form a RAID 1 set and we need more space in that group, we can extend the size of the array using the mdadm --grow command, just by adding a disk to the existing array. After growing (adding a disk to an existing array), we will see how to remove a failed disk from the array.
+
+![Grow Raid Array in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Growing-Raid-Array.jpg)
+
+Growing Raid Array and Removing Failed Disks
+
+Assume that one of the disks has become a little weak and we need to remove it. Until it fails we can leave it in use, but we should add a spare drive and grow the mirror before it fails, because we need to protect our data. Once the weak disk fails, we can remove it from the array; this is the concept we are going to cover in this topic.
+
+#### Features of RAID Growth ####
+
+- We can grow (extend) the size of any RAID set.
+- We can remove a faulty disk after growing the RAID array with a new disk.
+- We can grow the RAID array without any downtime.
+
+#### Requirements ####
+
+- To grow a RAID array, we need an existing RAID set (array).
+- We need extra disks to grow the array.
+- Here I’m using 1 disk to grow the existing array.
+
+Before we learn about growing and recovering an array, we have to know the basics of RAID levels and setups. Follow the links below to learn about those setups.
+
+- [Understanding Basic RAID Concepts – Part 1][1]
+- [Creating a Software Raid 0 in Linux – Part 2][2]
+
+#### My Server Setup ####
+
+ Operating System : CentOS 6.5 Final
+ IP Address : 192.168.0.230
+ Hostname : grow.tecmintlocal.com
+ 2 Existing Disks : 1 GB
+ 1 Additional Disk : 1 GB
+
+Here, my existing RAID array has 2 disks of 1GB each, and we are now adding one more 1GB disk to it.
+
+### Growing an Existing RAID Array ###
+
+1. Before growing an array, first list the existing RAID array using the following command.
+
+ # mdadm --detail /dev/md0
+
+![Check Existing Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Existing-Raid-Array.png)
+
+Check Existing Raid Array
+
+**Note**: The above output shows that I already have two disks in the RAID array at level raid1. Now we will add one more disk to this existing array.
+
+2. Now let’s add the new disk “sdd” and create a partition on it using the ‘fdisk‘ command.
+
+ # fdisk /dev/sdd
+
+Please use the instructions below to create a partition on the /dev/sdd drive.
+
+- Press ‘n‘ to create a new partition.
+- Then choose ‘p‘ for a primary partition.
+- Then choose ‘1‘ to make it the first partition.
+- Next press ‘p‘ to print the created partition.
+- Press ‘t‘ to change the partition type; here we select ‘fd‘, as the type we need is Linux raid autodetect.
+- Press ‘p‘ again to verify the changes we have made.
+- Use ‘w‘ to write the changes.
+
+![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Create-New-sdd-Partition.png)
+
+Create New sdd Partition
+
+3. Once the new sdd partition is created, you can verify it using the command below.
+
+ # ls -l /dev/ | grep sd
+
+![Confirm sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-sdd-Partition.png)
+
+Confirm sdd Partition
+
+4. Next, examine the newly created disk for any existing RAID before adding it to the array.
+
+ # mdadm --examine /dev/sdd1
+
+![Check Raid on sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-sdd-Partition.png)
+
+Check Raid on sdd Partition
+
+**Note**: The above output shows that the disk has no super-blocks detected, which means we can move forward and add the new disk to the existing array.
+
+5. To add the new partition /dev/sdd1 to the existing array md0, use the following command.
+
+ # mdadm --manage /dev/md0 --add /dev/sdd1
+
+![Add Disk To Raid-Array](http://www.tecmint.com/wp-content/uploads/2014/11/Add-Disk-To-Raid-Array.png)
+
+Add Disk To Raid-Array
+
+6. Once the new disk has been added, check for it in our array using the following command.
+
+ # mdadm --detail /dev/md0
+
+![Confirm Disk Added to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Disk-Added-To-Raid.png)
+
+Confirm Disk Added to Raid
+
+**Note**: In the above output, you can see the drive has been added as a spare. We already have 2 disks in the array, but what we are expecting is 3 active devices; for that we need to grow the array.
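+
+The device counts can also be checked in a script by parsing the `mdadm --detail` output. The excerpt below is a hypothetical fragment of that output, matching the state described above (2 active RAID devices plus the new spare):
+
```shell
# Pull the "Raid Devices" and "Spare Devices" counts out of a --detail excerpt.
detail='   Raid Devices : 2
  Total Devices : 3
  Spare Devices : 1'
raid=$(printf '%s\n' "$detail"  | awk -F' : ' '/Raid Devices/  {print $2}')
spare=$(printf '%s\n' "$detail" | awk -F' : ' '/Spare Devices/ {print $2}')
echo "raid devices: $raid, spares: $spare"
```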
+
+7. To grow the array, we have to use the command below.
+
+ # mdadm --grow --raid-devices=3 /dev/md0
+
+![Grow Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Raid-Array.png)
+
+Grow Raid Array
+
+Now we can see that the third disk (sdd1) has been added to the array; after it is added, the data is synced to it from the other two disks.
+
+ # mdadm --detail /dev/md0
+
+![Confirm Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Raid-Array.png)
+
+Confirm Raid Array
+
+**Note**: For large disks it can take hours to sync the contents. Here I have used a 1GB virtual disk, so it finished very quickly, within seconds.
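+
+A rough back-of-the-envelope estimate for sync time is array size divided by rebuild speed. The speed below is an assumed figure for illustration only; real rebuild speed depends on the drives and on the kernel's sync speed limits.
+
```shell
# Estimated resync time = size / speed (integer seconds).
size_kib=$((1 * 1024 * 1024))   # 1 GB disk, as in this setup
speed_kib_s=50000               # assumed ~50 MB/s rebuild speed
secs=$((size_kib / speed_kib_s))
echo "approx resync time: ${secs}s"
```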
+
+### Removing Disks from Array ###
+
+8. After the data has been synced to the new disk ‘sdd1‘ from the other two disks, all three disks now have the same contents.
+
+As I said earlier, let’s assume that one of the disks is weak and needs to be removed before it fails. So now assume disk ‘sdc1‘ is weak and has to be removed from the existing array.
+
+Before removing a disk we have to mark it as failed; only then can we remove it.
+
+ # mdadm --fail /dev/md0 /dev/sdc1
+ # mdadm --detail /dev/md0
+
+![Disk Fail in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-Fail-in-Raid-Array.png)
+
+Disk Fail in Raid Array
+
+From the above output, we can clearly see that the disk is marked as faulty at the bottom. Even though it is faulty, we can see that the raid devices count is 3, failed is 1 and the state is degraded.
+
+Now we have to remove the faulty drive from the array and shrink the array to 2 devices, so that the raid devices count is set back to 2 as before.
+
+ # mdadm --remove /dev/md0 /dev/sdc1
+
+![Remove Disk in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Remove-Disk-in-Raid-Array.png)
+
+Remove Disk in Raid Array
+
+9. Once the faulty drive is removed, we resize the RAID array back to 2 devices using the --grow option.
+
+ # mdadm --grow --raid-devices=2 /dev/md0
+ # mdadm --detail /dev/md0
+
+![Grow Disks in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Disks-in-Raid-Array.png)
+
+Grow Disks in Raid Array
+
+From the above output, you can see that our array now has only 2 devices. If you need to grow the array again, follow the same steps as described above. If you add a drive as a spare, then when a disk fails it will automatically become active and rebuild.
+
+### Conclusion ###
+
+In this article, we’ve seen how to grow an existing RAID set and how to remove a faulty disk from an array after re-syncing its contents. All these steps can be done without any downtime. During data syncing, system users, files and applications are not affected in any way.
+
+In the next article I will show you how to manage RAID arrays; till then stay tuned for updates and don’t forget to add your comments.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/grow-raid-array-in-linux/
+
+作者:[Babin Lonston][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/babinlonston/
+[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
+[2]:http://www.tecmint.com/create-raid0-in-linux/
\ No newline at end of file
diff --git a/sources/tech/XLCYun translating 20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md b/sources/tech/XLCYun translating 20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md
new file mode 100644
index 0000000000..4de779a599
--- /dev/null
+++ b/sources/tech/XLCYun translating 20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md
@@ -0,0 +1,85 @@
+XLCYun translating.
+
+
+How To Fix System Program Problem Detected In Ubuntu 14.04
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/system_program_Problem_detected.jpeg)
+
+For the last couple of weeks, (almost) every startup I was greeted with **system program problem detected** in Ubuntu 15.04. I ignored it for some time, but it became quite annoying after a certain point. You won’t be too happy either if you are greeted by a pop-up displaying this every time you boot into the system:
+
+> System program problem detected
+>
+> Do you want to report the problem now?
+>
+> ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/System_Program_Problem_Detected.png)
+
+If you are an Ubuntu user, you might surely have faced this annoying pop-up at some point. In this post we are going to see what to do with the “system program problem detected” report in Ubuntu 14.04 and 15.04.
+
+### What to do with “system program problem detected” error in Ubuntu? ###
+
+#### So what exactly is this notifier all about? ####
+
+Basically, this notifies you of a crash in your system. Don’t panic at the word ‘crash’. It’s not a major issue and your system is very much usable. It’s just that some program crashed at some time in the past, and Ubuntu wants you to decide whether or not you want to send the crash report to the developers so that they can fix the issue.
+
+#### So, we click on Report problem and it will vanish? ####
+
+No, not really. Even if you click on report problem, you’ll be ultimately greeted with a pop up like this:
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Internal_error.png)
+
+[Sorry, Ubuntu has experienced an internal error][1] is the Apport dialog that will further open a web browser, where you can file a bug report by logging in to or creating an account with [Launchpad][2]. You see, it is a complicated procedure which takes around four steps to complete.
+
+#### But, I want to help developers and let them know of the bugs! ####
+
+That’s very thoughtful of you and the right thing to do. But there are two issues here. First, chances are high that the bug has already been reported. Second, even if you take the pain of reporting the crash, there’s no guarantee that you won’t see it again.
+
+#### So, you suggesting to not report the crash? ####
+
+Yes and no. Report the crash when you see it the first time, if you want. You can see the crashing program under “Show Details” in the above picture. But if you see it repeatedly, or if you do not want to report the bug, I advise you to get rid of the system crash notifications once and for all.
+
+### Fix “system program problem detected” error in Ubuntu ###
+
+The crash reports are stored in the /var/crash directory in Ubuntu. If you look into this directory, you should see some files ending with .crash.
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Crash_reports_Ubuntu.jpeg)
+
+What I suggest is that you delete these crash reports. Open a terminal and use the following command:
+
+ sudo rm /var/crash/*
+
+This will delete all the contents of the /var/crash directory. This way you won’t be annoyed by pop-ups for program crashes that happened in the past. But if a program crashes again, you’ll again see the “system program problem detected” error. You can either remove the crash reports again, like we just did, or you can disable Apport (the debug tool) and permanently get rid of the pop-ups.
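+
+If you want to see the effect of that glob before touching the real directory, you can rehearse it on throwaway files in a temp directory; the file names below merely imitate Apport’s naming scheme.
+
```shell
# Dry run of the cleanup on fake .crash files in a temp directory.
dir=$(mktemp -d)
touch "$dir/_usr_bin_foo.1000.crash" "$dir/_usr_bin_bar.1000.crash"
before=$(ls "$dir" | wc -l)
rm -f "$dir"/*.crash
after=$(ls "$dir" | wc -l)
echo "crash files: $before -> $after"
rmdir "$dir"
```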
+
+#### Permanently get rid of system error pop up in Ubuntu ####
+
+If you do this, you’ll never be notified about any program crash that happens in the system. If you ask my view, I would say it’s not that bad a thing unless you are willing to file bug reports. If you have no intention of filing a bug report, the absence of crash notifications will make no difference.
+
+To disable Apport and get rid of system crash reports completely, open a terminal and use the following command to edit the Apport settings file:
+
+ gksu gedit /etc/default/apport
+
+The content of the file is:
+
+ # set this to 0 to disable apport, or to 1 to enable it
+ # you can temporarily override this with
+ # sudo service apport start force_start=1
+ enabled=1
+
+Change **enabled=1** to **enabled=0**. Save and close the file. You won’t see any pop-ups for crash reports after doing this. Obviously, if you want to enable the crash reports again, you just need to edit the same file and set enabled to 1 again.
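+
+The same edit can be done non-interactively with sed. Doing it on a copy first, as sketched below, lets you inspect the result before replacing the real /etc/default/apport.
+
```shell
# Flip enabled=1 to enabled=0 on a scratch copy of the settings file.
copy=$(mktemp)
printf '# set this to 0 to disable apport, or to 1 to enable it\nenabled=1\n' > "$copy"
sed -i 's/^enabled=1$/enabled=0/' "$copy"
state=$(grep '^enabled=' "$copy")
echo "$state"
rm -f "$copy"
```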
+
+#### Did it work for you? ####
+
+I hope this tutorial helped you to fix the “system program problem detected” error in Ubuntu 14.04 and Ubuntu 15.04. Let me know if this tip helped you to get rid of this annoyance.
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/how-to-fix-system-program-problem-detected-ubuntu/
+
+作者:[Abhishek][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:http://itsfoss.com/how-to-solve-sorry-ubuntu-12-04-has-experienced-an-internal-error/
+[2]:https://launchpad.net/
diff --git a/translated/share/20150527 How to Develop Own Custom Linux Distribution From Scratch.md b/translated/share/20150527 How to Develop Own Custom Linux Distribution From Scratch.md
deleted file mode 100644
index 059f07b195..0000000000
--- a/translated/share/20150527 How to Develop Own Custom Linux Distribution From Scratch.md
+++ /dev/null
@@ -1,65 +0,0 @@
-δԼLinuxа
-================================================================================
-ǷԼLinuxа棿ÿLinuxûʹLinuxĹжһԼķа棬һΡҲ⣬ΪһLinuxҲǹһԼLinuxа档һLinuxа汻Linux From Scratch (LFS)
-
-ڿʼ֮ǰܽһЩLFSݣ£
-
-### 1. ЩҪԼLinuxаӦ˽һLinuxа棨ζŴͷʼһеLinuxаIJͬ ###
-
-ֻĻʾƵ¼Լӵиõʹ顣ѡκһLinuxа沢ҰϲýиԻá⣬ù߿
-
-бļboot-loadersںˣѡʲôñȻԼһжôҪLinux From Scratch (LFS)
-
-**ע**ֻҪLinuxϵͳ飬ָϲʺһLinuxа棬˽ôʼԼһЩϢôָΪд
-
-### 2. һLinuxа棨LFSĺô ###
-
-- ˽Linuxϵͳڲ
-- һӦϵͳ
-- ϵͳLFSdzգΪԸð/ðʲôӵоԵƿ
-- ϵͳLFSڰȫϻ
-
-### 3. һLinuxа棨LFSĻ ###
-
-һLinuxϵͳζŽҪĶһұ֮Ҫġĺʱ䡣ҪһõLinuxϵͳ㹻Ĵ̿ռLinuxϵͳ
-
-### 4. ȤǣGentoo/GNU LinuxijӽLFSGentooLFSȫԴĶƵLinuxϵͳ ###
-
-### 5. ӦһоLinuxûԱ൱˽⣬Ǹshellűרҡ˽һűԣCãʹЩһֻ֣ҪһѧϰߣԺ֪ܿʶҲԿʼҪDzҪLFSжʧ顣 ###
-
-ᶨ»LFSеһʱ
-
-### 6. ҪһһָһLinuxLFSǴLinuxĹٷָϡǵĴվtradepubҲΪǵĶLFSָϣͬѵġ ###
-
-ԴLinux From Scratch鼮
-
-[![](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-From-Scratch.gif)][1]
-
-: [Linux From Scratch][1]
-
-### ڣLinux From Scratch ###
-
-ⱾLFSĿͷGerard BeekmansģMatthew BurgessBruse Dubbs༭˶LFSĿ쵼ˡⱾݺܹ㷺338ҳ
-
-ݰLFSLinuxLFSűʹLFS¼к֪LFSĿж
-
-Ȿ黹˱һԤʱ䡣ԤʱԱһʱΪοеĶķʽ֣˵
-
-гԣʱ䲢ԹԼLinuxаȤôԲ飨أĻᡣҪģⱾһLinuxϵͳκLinuxа棬㹻Ĵ̿ռ伴ɣпʼԼLinuxϵͳʱ顣
-
-LinuxʹԣԼֹһԼLinuxа棬ֽӦ֪ȫˣϢԲοӵеݡ
-
-˽Ķ/ʹⱾľⱾ꾡LFSָϵʹǷ㹻ѾһLFSǵĶһЩ飬ӭԺͷ
-
---------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/create-custom-linux-distribution-from-scratch/
-
-ߣ[Avishek Kumar][a]
-ߣ[wwy-hust](https://github.com/wwy-hust)
-Уԣ[УID](https://github.com/УID)
-
- [LCTT](https://github.com/LCTT/TranslateProject) ԭ룬[Linuxй](https://linux.cn/) Ƴ
-
-[a]:http://www.tecmint.com/author/avishek/
-[1]:http://tecmint.tradepub.com/free/w_linu01/prgm.cgi
diff --git a/translated/share/20150629 Backup with these DeDuplicating Encryption Tools.md b/translated/share/20150629 Backup with these DeDuplicating Encryption Tools.md
new file mode 100644
index 0000000000..366be7dd32
--- /dev/null
+++ b/translated/share/20150629 Backup with these DeDuplicating Encryption Tools.md
@@ -0,0 +1,157 @@
+使用去重加密工具来备份
+================================================================================
+在体积和价值方面,数据都在增长。快速而可靠地备份和恢复数据正变得越来越重要。社会已经适应了技术的广泛使用,并懂得了如何依靠电脑和移动设备,但很少有人能够处理丢失重要数据的现实。在遭受数据损失的公司中,30% 的公司将在一年内损失一半市值,70% 的公司将在五年内停止交易。这更加凸显了数据的价值。
+
+随着数据在体积上的增长,提高存储利用率尤为重要。在计算领域,数据去重是一种特别的数据压缩技术,因为它可以消除重复数据的拷贝,所以这个技术可以提高存储利用率。
+
+数据并不仅仅只有其创造者感兴趣。政府、竞争者、犯罪分子、偷窥者可能都热衷于获取你的数据。他们或许想偷取你的数据,从你那里进行敲诈,或看你正在做什么。对于保护你的数据,加密是非常必要的。
+
+所以,解决方法是我们需要一个去重加密备份软件。
+
+对于所有的用户而言,做文件备份是一件非常必要的事,但至今为止许多用户还没有采取足够的措施来保护他们的数据。不论一台电脑是用于工作环境,还是供私人使用,机器的硬盘都可能在没有任何警告的情况下损坏。另外,有些数据丢失可能是人为错误所引发的。如果没有做经常性的备份,数据的丢失将不可避免,即使请了专业的数据恢复公司来帮忙也无济于事。
+
+这篇文章将对 6 个去重加密备份工具进行简要的介绍。
+----------
+
+### Attic ###
+
+Attic 是一个可用于去重、加密,验证完整性的用 Python 写的压缩备份程序。Attic 的主要目标是提供一个高效且安全的方式来备份数据。Attic 使用的数据去重技术使得它适用于每日备份,因为只需存储改变的数据。
+
+其特点有:
+
+- 易用
+- 可高效利用存储空间,通过检查冗余的数据,数据块大小的去重被用来减少存储所用的空间
+- 可选的数据加密,使用 256 位的 AES 加密算法。数据的完整性和可靠性使用 HMAC-SHA256 来检查
+- 使用 SDSH 来进行离线备份
+- 备份可作为文件系统来挂载
+
+网站: [attic-backup.org][1]
+
+----------
+
+### Borg ###
+
+Borg 是 Attic 的分支。它是一个安全的开源备份程序,被设计用来高效地存储那些新的或修改过的数据。
+
+Borg 的主要目标是提供一个高效、安全的方式来存储数据。Borg 使用的数据去重技术使得它适用于每日备份,因为只需存储改变的数据。认证加密使得它适用于不完全可信的目标的存储。
+
+Borg 由 Python 写成。Borg 于 2015 年 5 月被创造出来,为了回应让新的代码或重大的改变带入 Attic 的困难。
+
+其特点包括:
+
+- 易用
+- 可高效利用存储空间,通过检查冗余的数据,数据块大小的去重被用来减少存储所用的空间
+- 可选的数据加密,使用 256 位的 AES 加密算法。数据的完整性和可靠性使用 HMAC-SHA256 来检查
+- 使用 SDSH 来进行离线备份
+- 备份可作为文件系统来挂载
+
+Borg 与 Attic 不兼容。
+
+网站: [borgbackup.github.io/borgbackup][2]
+
+----------
+
+### Obnam ###
+
+Obnam (OBligatory NAMe) 是一个易用、安全的基于 Python 的备份程序。备份可被存储在本地硬盘或通过 SSH SFTP 协议存储到网上。若使用了备份服务器,它并不需要任何特殊的软件,只需要使用 SSH 即可。
+
+Obnam 通过将数据分成数据块并单独存储它们来达到去重的目的,每次通过增量备份来生成备份,每次生成的备份就像是一次新的快照,但事实上是真正的增量备份。Obnam 由 Lars Wirzenius 开发。
+
+其特点有:
+
+- 易用
+- 快照备份
+- 数据去重,跨文件,生成备份
+- 可使用 GnuPG 来加密备份
+- 向一个单独的仓库中备份多个客户端的数据
+- 备份检查点 (创建一个保存点,以每 100MB 或其他容量)
+- 包含多个选项来调整性能,包括调整 lru-size 或 upload-queue-size
+- 支持 MD5 校验和算法来识别重复的数据块
+- 通过 SFTP 将备份存储到一个服务器上
+- 同时支持 push(即在客户端上运行) 和 pull(即在服务器上运行)
+
+网站: [obnam.org][3]
+
+----------
+
+### Duplicity ###
+
+Duplicity 持续地以 tar 文件格式备份文件和目录,并使用 GnuPG 来进行加密,同时将它们上传到远程(或本地)的文件服务器上。它可以使用 ssh/scp, 本地文件获取, rsync, ftp, 和 Amazon S3 等来传递数据。
+
+因为 duplicity 使用了 librsync,增量存档可以高效地利用存储空间,且只记录自上次备份以来改变的那部分文件。由于该软件使用 GnuPG 来加密或对这些归档文件进行签名,这使得它们免于服务器的监视或修改。
+
+当前 duplicity 支持备份删除的文件,全部的 unix 权限,目录,符号链接, fifo 等。
+
+duplicity 软件包还包含有 rdiffdir 工具。 Rdiffdir 是 librsync 的 rdiff 针对目录的扩展。它可以用来生成对目录的签名和差异,对普通文件也有效。
+
+其特点有:
+
+- 使用简单
+- 对归档进行加密和签名(使用 GnuPG)
+- 高效使用带宽和存储空间,使用 rsync 的算法
+- 标准的文件格式
+- 可选择多种远程协议
+ - 本地存储
+ - scp/ssh
+ - ftp
+ - rsync
+ - HSI
+ - WebDAV
+ - Amazon S3
+
+网站: [duplicity.nongnu.org][4]
+
+----------
+
+### ZBackup ###
+
+ZBackup 是一个通用的全局去重备份工具。
+
+其特点包括:
+
+- 存储数据的并行 LZMA 或 LZO 压缩,在一个仓库中,你还可以混合使用 LZMA 和 LZO
+- 内置对存储数据的 AES 加密
+- 可选择地删除旧的备份数据
+- 可以使用 64 位的滚动哈希算法,使得文件冲突的数量几乎为零
+- 仓库由不可变的文件组成,已有的文件永远不会被修改
+- 用 C++ 写成,只需少量的库文件依赖
+- 在生产环境中可以安全使用
+- 可以在不同仓库中进行数据交换而不必再进行压缩
+- 可以使用 64 位改进型 Rabin-Karp 滚动哈希算法
+
+网站: [zbackup.org][5]
+
+----------
+
+### bup ###
+
+bup 是一个用 Python 写的备份程序,其名称是 "backup" 的缩写。在 git packfile 文件的基础上, bup 提供了一个高效的方式来备份一个系统,提供快速的增量备份和全局去重(在文件中或文件里,甚至包括虚拟机镜像)。
+
+bup 在 LGPL 版本 2 协议下发行。
+
+其特点包括:
+
+- 全局去重 (在文件中或文件里,甚至包括虚拟机镜像)
+- 使用一个滚动的校验和算法(类似于 rsync) 来将大文件分为多个数据块
+- 使用来自 git 的 packfile 格式
+- 直接写入 packfile 文件,以此提供快速的增量备份
+- 可以使用 "par2" 冗余来恢复冲突的备份
+- 可以作为一个 FUSE 文件系统来挂载你的 bup 仓库
+
+网站: [bup.github.io][6]
+
+--------------------------------------------------------------------------------
+
+via: http://www.linuxlinks.com/article/20150628060000607/BackupTools.html
+
+译者:[FSSlc](https://github.com/FSSlc)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://attic-backup.org/
+[2]:https://borgbackup.github.io/borgbackup/
+[3]:http://obnam.org/
+[4]:http://duplicity.nongnu.org/
+[5]:http://zbackup.org/
+[6]:https://bup.github.io/
\ No newline at end of file
diff --git a/translated/tech/20150706 PHP Security.md b/translated/tech/20150706 PHP Security.md
new file mode 100644
index 0000000000..8d14bf3bb9
--- /dev/null
+++ b/translated/tech/20150706 PHP Security.md
@@ -0,0 +1,358 @@
+PHP 安全
+================================================================================
+![](http://www.codeproject.com/KB/PHP/363897/php_security.jpg)
+
+### 简介 ###
+
+为提供互联网服务,当你在开发代码的时候必须时刻保持安全意识。可能大部分 PHP 脚本都对安全问题不敏感;这很大程度上是因为有大量的无经验程序员在使用这门语言。但是,没有理由让你基于粗略估计你代码的影响性而有不一致的安全策略。当你在服务器上放任何经济相关的东西时,就有可能会有人尝试破解它。创建一个论坛程序或者任何形式的购物车,被攻击的可能性就上升到了无穷大。
+
+### 背景 ###
+
+为了确保你的 web 内容安全,这里有一些一般的安全准则:
+
+#### 别相信表单 ####
+
+攻击表单很简单。通过使用一个简单的 JavaScript 技巧,你可以限制你的表单只允许在评分域中填写 1 到 5 的数字。如果有人关闭了他们浏览器的 JavaScript 功能或者提交自定义的表单数据,你客户端的验证就失败了。
+
+用户主要通过表单参数和你的脚本交互,因此他们是最大的安全风险。你应该学到什么呢?总是要验证 PHP 脚本中传递到其它任何 PHP 脚本的数据。在本文中,我们向你演示了如何分析和防范跨站点脚本(XSS)攻击,它可能劫持用户凭据(甚至更严重)。你也会看到如何防止会玷污或毁坏你数据的 MySQL 注入攻击。
+
+#### 别相信用户 ####
+
+假设你网站获取的每一份数据都充满了有害的代码。清理每一部分,就算你相信没有人会尝试攻击你的站点。
+
+#### 关闭全局变量 ####
+
+你可能会有的最大安全漏洞是启用了 register\_globals 配置参数。幸运的是,PHP 4.2 及以后版本默认关闭了这个配置。如果打开了 **register\_globals**,你可以在你的 php.ini 文件中通过改变 register\_globals 变量为 Off 关闭该功能:
+
+ register_globals = Off
+
+新手程序员觉得注册全局变量很方便,但他们不会意识到这个设置有多么危险。一个启用了全局变量的服务器会自动为全局变量赋任何形式的参数。为了了解它如何工作以及为什么有危险,让我们来看一个例子。
+
+假设你有一个称为 process.php 的脚本,它会向你的数据库插入表单数据。初始的表单像下面这样:
+
+    <form method="POST" action="process.php">
+    <input type="text" name="username">
+    <input type="submit" value="Submit">
+    </form>
+
+运行 process.php 的时候,启用了注册全局变量的 PHP 会把该参数的值赋给 $username 变量。这会比通过 **$\_POST['username']** 或 **$\_GET['username']** 访问它少敲几下键盘。不幸的是,这也会给你留下安全问题,因为 PHP 会把通过 GET 或 POST 参数发送给脚本的任何值都赋给该变量,如果你没有显式地初始化该变量,又不希望任何人去操作它,这就是一个大问题。
+
+看下面的脚本,假如 $authorized 变量的值为 true,它会给用户显示验证数据。正常情况下,只有当用户正确通过了假想的 authenticated\_user() 函数验证,$authorized 变量的值才会被设置为真。但是如果你启用了 **register\_globals**,任何人都可以发送一个 GET 参数,例如 authorized=1 去覆盖它:
+
+    <?php
+    // Define $authorized = true only if user is authenticated
+    if (authenticated_user()) {
+        $authorized = true;
+    }
+    ?>
+
+这个故事的寓意是,你应该从预定义的服务器变量中获取表单数据。所有通过 post 表单传递到你 web 页面的数据都会自动保存到一个称为 **$\_POST** 的大数组中,所有的 GET 数据都保存在 **$\_GET** 大数组中。文件上传信息保存在一个称为 **$\_FILES** 的特殊数组中。另外,还有一个称为 **$\_REQUEST** 的复合变量。
+
+要从一个 POST 方法表单中访问 username 域,可以使用 **$\_POST['username']**。如果 username 在 URL 中就使用 **$\_GET['username']**。如果你不确定值来自哪里,用 **$\_REQUEST['username']**。
+
+
+
+$\_REQUEST 是 $\_GET、$\_POST、和 $\_COOKIE 数组的结合。如果你有两个或多个值有相同的参数名称,注意 PHP 会使用哪个。默认的顺序是 cookie、POST、然后是 GET。
+
+#### 推荐安全配置选项 ####
+
+这里有几个会影响安全功能的 PHP 配置设置。下面是一些显然应该用于生产服务器的:
+
+- **register\_globals** 设置为 off
+- **safe\_mode** 设置为 off
+- **error\_reporting** 设置为 off。如果出现错误了,这会向用户浏览器发送可见的错误报告信息。对于生产服务器,使用错误日志代替。开发服务器如果在防火墙后面就可以启用错误日志。
+- 停用这些函数:system()、exec()、passthru()、shell\_exec()、proc\_open()、和 popen()。
+- **open\_basedir** 为 /tmp(以便保存会话信息)目录和 web 根目录设置值,以便脚本不能访问选定区域外的文件。
+- **expose\_php** 设置为 off。该功能会向 Apache 头添加包含版本数字的 PHP 签名。
+- **allow\_url\_fopen** 设置为 off。如果你在代码中访问文件的方式足够小心-也就是说你会验证所有输入参数-那么并不严格需要关闭它。
+- **allow\_url\_include** 设置为 off。实在没有什么明智的理由,让任何人想要通过 HTTP 访问被包含的文件。
+
+一般来说,如果你发现想要使用这些功能的代码,你就不应该相信它。尤其要小心会使用类似 system() 函数的代码-它几乎肯定有缺陷。
+
+启用了这些设置后,让我们来看看一些特定的攻击以及能帮助你保护你服务器的方法。
+
+### SQL 注入攻击 ###
+
+由于 PHP 传递到 MySQL 数据库的查询语句是按照强大的 SQL 编程语言编写的,你就有某些人通过在 web 查询参数中使用 MySQL 语句尝试 SQL 注入攻击的风险。通过在参数中插入有害的 SQL 代码片段,攻击者会尝试进入(或破坏)你的服务器。
+
+假如说你有一个最终会放入变量 $product 的表单参数,你使用了类似下面的 SQL 语句:
+
+ $sql = "select * from pinfo where product = '$product'";
+
+如果参数是直接从表单中获得的,使用 PHP 自带的数据库特定转义函数,类似:
+
+    $sql = "select * from pinfo where product = '" .
+        mysql_real_escape_string($product) . "'";
+
+如果不这样做的话,有人也许会把下面的代码段放到表单参数中:
+
+ 39'; DROP pinfo; SELECT 'FOO
+
+$sql 的结果就是:
+
+    select * from pinfo where product = '39'; DROP pinfo; SELECT 'FOO'
+
+由于分号是 MySQL 的语句分隔符,数据库会运行下面三条语句:
+
+ select * from pinfo where product = '39'
+ DROP pinfo
+ SELECT 'FOO'
+
+好了,你丢失了你的表。
+
+注意实际上 PHP 和 MySQL 不会运行这种特殊语法,因为 **mysql\_query()** 函数只允许每个请求处理一个语句。但是,一个子查询仍然会生效。
+
+要防止 SQL 注入攻击,做这两件事:
+
+- 总是验证所有参数。例如,如果需要一个数字,就要确保它是一个数字。
+- 总是对数据使用 mysql\_real\_escape\_string() 函数转义数据中的任何引号和双引号。
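
除了转义,更稳妥的通用做法是参数化查询(预处理语句):驱动会把输入严格当作数据,而不是 SQL 文本的一部分。下面用 Python 的 sqlite3 演示这个思路(仅为跨语言的概念示意,表名沿用正文的 pinfo,并非原文的 PHP 代码):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pinfo (product TEXT)")
conn.execute("INSERT INTO pinfo VALUES ('39')")

# 正文中的注入载荷,试图让数据库执行 DROP
evil = "39'; DROP TABLE pinfo; SELECT 'FOO"

# 占位符 ? 保证 evil 只会被当作一个普通字符串值来比较
rows = conn.execute(
    "SELECT * FROM pinfo WHERE product = ?", (evil,)
).fetchall()

# 查询匹配不到任何行,pinfo 表也完好无损
count = conn.execute("SELECT COUNT(*) FROM pinfo").fetchone()[0]
```

在如今的 PHP 中,对应的做法是 PDO 或 mysqli 的预处理语句。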
+
+**注意:要自动转义任何表单数据,可以启用魔术引号(Magic Quotes)。**
+
+一些 MySQL 破坏可以通过限制 MySQL 用户权限避免。任何 MySQL 账户可以限制为只允许对选定的表进行特定类型的查询。例如,你可以创建只能选择行的 MySQL 用户。但是,这对于动态数据并不十分有用,另外,如果你有敏感的用户信息,可能某些人能访问一些数据,但你并不希望如此。例如,一个访问账户数据的用户可能会尝试注入访问另一个账户号码的代码,而不是为当前会话指定的号码。
+
+### 防止基本的 XSS 攻击 ###
+
+XSS 表示跨站脚本(cross-site scripting)。不像大部分攻击,该漏洞发生在客户端。XSS 最常见的基本形式,是在用户提交的内容中放入 JavaScript 以便偷取用户 cookie 中的数据。由于大部分站点使用 cookie 和 session 验证访客,偷取的数据可用于冒充该用户-如果是普通用户账户,会造成严重麻烦;如果是管理员账户,则是彻底的灾难。如果你不在站点中使用 cookie 和 session ID,你的用户就不容易被攻击,但你仍然应该明白这种攻击是如何工作的。
+
+不像 MySQL 注入攻击,XSS 攻击很难预防。Yahoo、eBay、Apple、以及 Microsoft 都曾经受 XSS 影响。尽管攻击不包含 PHP,你可以使用 PHP 来剥离用户数据以防止攻击。为了防止 XSS 攻击,你应该限制和过滤用户提交给你站点的数据。正是因为这个原因大部分在线公告板都不允许在提交的数据中使用 HTML 标签,而是用自定义的标签格式代替,例如 **[b]** 和 **[linkto]**。
+
+让我们来看一个如何防止这类攻击的简单脚本。对于更完善的解决办法,可以使用 SafeHTML,本文的后面部分会讨论到。
+
+ function transform_HTML($string, $length = null) {
+ // Helps prevent XSS attacks
+ // Remove dead space.
+ $string = trim($string);
+ // Prevent potential Unicode codec problems.
+ $string = utf8_decode($string);
+ // HTMLize HTML-specific characters.
+ $string = htmlentities($string, ENT_NOQUOTES);
+ $string = str_replace("#", "#", $string);
+ $string = str_replace("%", "%", $string);
+ $length = intval($length);
+ if ($length > 0) {
+ $string = substr($string, 0, $length);
+ }
+ return $string;
+ }
+
+这个函数将 HTML 的特殊字符转换为 HTML 实体字面量。经过这个脚本处理的任何 HTML,浏览器都会把它当作无标记的纯文本来呈现。例如,考虑下面的 HTML 字符串:
+
+    <STRONG>Bold Text</STRONG>
+
+一般情况下,HTML 会显示为:
+
+ Bold Text
+
+但是,经过 **transform\_HTML()** 处理后,它会原样呈现为输入时的文本。原因是经过处理的字符串中,标签字符变成了 HTML 实体。**transform\_HTML()** 结果字符串的纯文本看起来像下面这样:
+
+ <STRONG>Bold Text</STRONG>
+
+该函数的实质是 htmlentities() 函数调用,它会将 <、>、和 & 转换为 **&lt;**、**&gt;** 和 **&amp;**。尽管这能应付大部分的普通攻击,但有经验的 XSS 攻击者还有另一种把戏:用十六进制或 UTF-8 编码恶意脚本,而不是采用普通的 ASCII 文本,从而希望绕过你的过滤器。他们可以把代码放在 URL 的 GET 变量中发送,相当于说,“嘿,这是十六进制代码,你能帮我运行吗?” 一个十六进制的例子看起来像这样:
+
+
+
+浏览器渲染这信息的时候,结果就是:
+
+
+
+为了防止这种情况,transform\_HTML() 采用额外的步骤把 # 和 % 符号转换为它们的实体,从而避免十六进制攻击,并转换 UTF-8 编码的数据。
+
+最后,为了防止某些人用很长的输入超载字符串从而导致某些东西崩溃,你可以添加一个可选的 $length 参数来截取你指定最大长度的字符串。
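
transform\_HTML() 的核心思路——先做 HTML 实体化,再额外转义 # 和 %,最后按需截断——并不限于 PHP。下面用 Python 标准库把同样的流程复现一遍,便于验证效果(示意代码,并非原文脚本):

```python
import html

def transform_html(s, length=None):
    s = s.strip()                       # 去掉首尾空白
    s = html.escape(s, quote=False)     # 对应 htmlentities(..., ENT_NOQUOTES)
    # 额外转义 # 和 %,挫败十六进制/URL 编码绕过
    s = s.replace("#", "&#35;").replace("%", "&#37;")
    if length:
        s = s[:int(length)]
    return s

out = transform_html("<STRONG>Bold Text</STRONG>")
print(out)   # &lt;STRONG&gt;Bold Text&lt;/STRONG&gt;
```

和正文的 PHP 版本一样,# 和 % 的替换放在实体化之后,这样替换结果里的 & 不会被再次转义。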
+
+### 使用 SafeHTML ###
+
+之前脚本的问题比较简单,它不允许任何类型的用户标记。不幸的是,有上百种方法能让 JavaScript 绕过用户的过滤器,除非把用户输入中的 HTML 全部剥离,否则没有办法完全防止这种情况。
+
+当前,没有任何一个脚本能保证无法被破解,尽管有一些确实比大部分要好。有白名单和黑名单两种方法加固安全,白名单比较简单而且更加有效。
+
+一个白名单解决方案是 PixelApes 的 SafeHTML 反跨站点脚本解析器。
+
+SafeHTML 能识别有效 HTML,能追踪并剥离任何危险标签。它用另一个称为 HTMLSax 的软件包进行解析。
+
+按照下面步骤安装和使用 SafeHTML:
+
+1. 到 [http://pixel-apes.com/safehtml/?page=safehtml][1] 下载最新版本的 SafeHTML。
+1. 把文件放到你服务器的类文件夹。该文件夹包括 SafeHTML 和 HTMLSax 起作用需要的所有东西。
+1. 在脚本中包含 SafeHTML 类文件(safehtml.php)。
+1. 创建称为 $safehtml 的新 SafeHTML 对象。
+1. 用 $safehtml->parse() 方法清理你的数据。
+
+这是一个完整的例子:
+
+ <?php
+ require_once('safehtml.php');
+ // Define some sample bad code.
+ $data = "This data would raise an alert <script>alert('XSS Attack')</script>";
+ // Create a safehtml object.
+ $safehtml = new safehtml();
+ // Parse and sanitize the data.
+ $safe_data = $safehtml->parse($data);
+ // Display result.
+ echo 'The sanitized data is ' . $safe_data;
+ ?>
+
+如果你想清理脚本中的任何其它数据,你不需要创建一个新的对象;在你的整个脚本中只需要使用 $safehtml->parse() 方法。
+
+#### 什么可能会出现问题? ####
+
+你可能犯的最大错误是假设这个类能完全避免 XSS 攻击。SafeHTML 是一个相当复杂的脚本,几乎能检查所有事情,但没有什么是能保证的。你仍然需要对你的站点做参数验证。例如,该类不能检查给定变量的长度以确保能适应数据库的字段。它也不检查缓冲溢出问题。
+
+XSS 攻击者很有创造力,他们使用各种各样的方法来尝试达到他们的目标。可以阅读 RSnake 的 XSS 教程[http://ha.ckers.org/xss.html][2] 看一下这里有多少种方法尝试使代码跳过过滤器。SafeHTML 项目有很好的程序员一直在尝试阻止 XSS 攻击,但无法保证某些人不会想起一些奇怪和新奇的方法来跳过过滤器。
+
+**注意:XSS 攻击严重影响的一个例子 [http://namb.la/popular/tech.html][3],其中显示了如何一步一步创建会超载 MySpace 服务器的 JavaScript XSS 蠕虫。**
+
+### 用单向哈希保护数据 ###
+
+这类脚本对输入的数据进行单向转换-换句话说,它能根据某人的密码生成哈希签名,但无法反过来解码出原始密码。为什么要这样做呢?因为应用程序要存储密码。管理员不需要知道用户的密码-事实上,只有用户自己知道他/她的密码才是好事。系统(也仅有系统)应该能识别一个正确的密码;这正是 Unix 多年来的密码安全模型。单向密码安全按照下面的方式工作:
+
+1. 当一个用户或管理员创建或更改一个账户密码时,系统对密码进行哈希并保存结果。主机系统忽视明文密码。
+2. 当用户通过任何方式登录到系统时,再次对输入的密码进行哈希。
+3. 主机系统抛弃输入的明文密码。
+4. 当前新哈希的密码和之前保存的哈希相比较。
+5. 如果哈希的密码相匹配,系统就会授予访问权限。
+
+主机系统完成这些并不需要知道原始密码;事实上,原始值完全不相关。一个副作用是,如果某人侵入系统并盗取了密码数据库,入侵者会获得很多哈希后的密码,但无法把它们反向转换为原始密码。当然,给足够时间、计算能力,以及弱用户密码,一个攻击者还是有可能采用字典攻击找出密码。因此,别轻易让人碰你的密码数据库,如果确实有人这样做了,让每个用户更改他们的密码。
+
+#### 加密 Vs 哈希 ####
+
+技术上来说,这个过程并不是加密。哈希和加密是不同的,原因有两个:
+
+- 不像加密,数据不能被解密。
+- 有可能(但很不常见)两个不同的字符串产生相同的哈希。因此不能保证哈希值唯一,所以别把哈希值当作数据库中的唯一键使用。
+
+下面是 PHP 中的一个简单哈希函数:
+
+ function hash_ish($string) {
+ return md5($string);
+ }
+
+md5() 函数基于 RSA 数据安全公司的消息摘要算法(即 MD5)返回一个由 32 个字符组成的十六进制串。然后你可以将那个 32 位字符串插入到数据库中,和另一个 md5 字符串相比较,或者就用这 32 个字符。
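
正文提到的 "dog" 例子可以直接验证:对同一输入,MD5 总是给出同一个 32 位十六进制摘要,但无法从摘要反推输入。用 Python 的 hashlib 示意(与 PHP 的 md5() 结果一致):

```python
import hashlib

# 单向哈希:同样的输入永远得到同样的摘要
digest = hashlib.md5(b"dog").hexdigest()
print(digest)        # 06d80eb0c50b49a509b49f2424e8c805,即下文字典例子里的条目
print(len(digest))   # 32 个十六进制字符
```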
+
+#### 破解脚本 ####
+
+几乎不可能解密 MD5 数据,或者说非常难。但是,你仍然需要好的密码,因为用整本字典生成一个哈希数据库仍然很简单。网上有现成的 MD5 字典,当你输入 **06d80eb0c50b49a509b49f2424e8c805** 后会得到结果 “dog”。因此,尽管技术上 MD5 不能被解密,这里仍然有漏洞-如果某人获得了你的密码数据库,可以肯定他们会拿 MD5 字典来破译。因此,当你创建基于密码的系统时,尤其要注意密码长度(最少 6 个字符,8 个或许更好),并且要同时包括字母和数字。还要确保密码不在字典中。
+
+### 用 Mcrypt 加密数据 ###
+
+如果你不需要以可读的形式查看密码,采用 MD5 就足够了。不幸的是,并不总是有这种选择-如果你决定以加密形式存储某人的信用卡信息,你可能需要在之后的某个时刻把它解密出来。
+
+最早的解决方案之一是 Mcrypt 模块,这是一个让 PHP 能进行高级加密的扩展。Mcrypt 库提供了超过 30 种用于加密的加密算法,并且支持口令短语,确保只有你(或者你的用户)可以解密数据。
+
+让我们来看看使用方法。下面的脚本包含了使用 Mcrypt 加密和解密数据的函数:
+
+
+
+**mcrypt()** 函数需要几个信息:
+
+- 需要加密的数据
+- 用于加密和解锁数据的短语,也称为键。
+- 用于加密数据的计算方法,也就是用于加密数据的算法。该脚本使用了 **MCRYPT\_SERPENT\_256**,但你可以从很多算法中选择,包括 **MCRYPT\_TWOFISH192**、**MCRYPT\_RC2**、**MCRYPT\_DES**、和 **MCRYPT\_LOKI97**。
+- 加密数据的模式。这里有几个你可以使用的模式,包括电子密码本(Electronic Codebook) 和加密反馈(Cipher Feedback)。该脚本使用 **MCRYPT\_MODE\_CBC** 密码块链接。
+- 一个 **初始化向量**-也称为 IV,或着一个种子-用于为加密算法设置种子的额外二进制位。也就是使算法更难于破解的额外信息。
+- 键和 IV 字符串的长度,这可能随着加密和块而不同。使用 **mcrypt\_get\_key\_size()** 和 **mcrypt\_get\_block\_size()** 函数获取合适的长度;然后用 **substr()** 函数将键的值截取为合适的长度。(如果键的长度比要求的短,别担心-Mcrypt 会用 0 填充。)
+
+如果有人窃取了你的数据和短语,他们只能一个个尝试加密算法直到找到正确的那一个。因此,在使用它之前我们通过对键使用 **md5()** 函数增加安全,就算他们获取了数据和短语,入侵者也不能获得想要的东西。
+
+入侵者同时需要函数,数据和短语-如果真是如此,他们可能获得了对你服务器的完整访问,你只能大清洗了。
+
+这里还有一个数据存储格式的小问题。Mcrypt 以难懂的二进制形式返回加密后的数据,这使得当你将其存储到 MySQL 字段的时候可能出现可怕的错误。因此,我们使用 **base64\_encode()** 和 **base64\_decode()** 函数将数据转换为与 SQL 兼容的字母格式,以便存储和检索行。
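
“二进制密文先 base64 再入库”的做法与具体加密库无关,可以用几行 Python 验证这个往返是无损的(概念示意):

```python
import base64

ciphertext = bytes([0, 155, 255, 10, 39])              # 假想的一段二进制密文
stored = base64.b64encode(ciphertext).decode("ascii")  # 变成可安全存入文本字段的字母格式
restored = base64.b64decode(stored)                    # 取回时无损还原
print(stored)
```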
+
+#### 破解脚本 ####
+
+除了实验多种加密方法,你还可以在脚本中添加一些便利。例如,不是每次都提供键和模式,而是在包含的文件中声明为全局常量。
+
+### 生成随机密码 ###
+
+随机(但难以猜测)的字符串在用户安全中很重要。例如,如果某人丢失了密码而你使用的是 MD5 哈希,你不可能,也不希望把它找回来,而是应该生成一个安全的随机密码并发送给用户。随机数的另一个用途,是为访问你站点的服务创建激活链接。下面是创建密码的一个函数:
+
+ <?php
+ function make_password($num_chars) {
+ if ((is_numeric($num_chars)) &&
+ ($num_chars > 0) &&
+ (! is_null($num_chars))) {
+ $password = '';
+ $accepted_chars = 'abcdefghijklmnopqrstuvwxyz1234567890';
+ // Seed the generator if necessary.
+ srand(((int)((double)microtime()*1000003)) );
+ for ($i=0; $i<$num_chars; $i++) {
+ $random_number = rand(0, (strlen($accepted_chars) -1));
+ $password .= $accepted_chars[$random_number] ;
+ }
+ return $password;
+ }
+ }
+ ?>
+
+#### 使用脚本 ####
+
+**make_password()** 函数返回一个字符串,因此你需要做的就是提供字符串的长度作为参数:
+
+    <?php
+    $new_password = make_password(8);
+    ?>
+
+函数按照下面步骤工作:
+
+- 函数确保 **$num\_chars** 是非零的正整数。
+- 函数初始化 **$accepted\_chars** 变量为密码可能包含的字符列表。该脚本使用所有小写字母和数字 0 到 9,但你可以使用你喜欢的任何字符集合。
+- 随机数生成器需要一个种子,从而获得一系列类随机值(PHP 4.2 及之后版本中并不严格要求)。
+- 函数循环 **$num\_chars** 次,每次迭代生成密码中的一个字符。
+- 对于每个新字符,脚本查看 **$accepted_chars** 的长度,选择 0 和长度之间的一个数字,然后添加 **$accepted\_chars** 中该数字为索引值的字符到 $password。
+- 循环结束后,函数返回 **$password**。
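
原文使用的 rand()/srand() 并不是密码学安全的随机源;按同样的流程,现代代码应改用 CSPRNG。下面用 Python 的 secrets 模块示意(字符集沿用正文的小写字母加数字;函数名与原文同名只是为了对照,属于示意代码):

```python
import secrets
import string

ACCEPTED_CHARS = string.ascii_lowercase + string.digits  # 与正文相同的字符集

def make_password(num_chars):
    # 与原文一样,先确保参数是非零正整数
    if not (isinstance(num_chars, int) and num_chars > 0):
        return ""
    # secrets.choice 使用操作系统提供的密码学安全随机源
    return "".join(secrets.choice(ACCEPTED_CHARS) for _ in range(num_chars))

pw = make_password(8)
print(len(pw))   # 8
```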
+
+### 许可证 ###
+
+本篇文章,包括相关的源代码和文件,都是在 [The Code Project Open License (CPOL)][4] 协议下发布。
+
+--------------------------------------------------------------------------------
+
+via: http://www.codeproject.com/Articles/363897/PHP-Security
+
+作者:[SamarRizvi][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.codeproject.com/script/Membership/View.aspx?mid=7483622
+[1]:http://pixel-apes.com/safehtml/?page=safehtml
+[2]:http://ha.ckers.org/xss.html
+[3]:http://namb.la/popular/tech.html
+[4]:http://www.codeproject.com/info/cpol10.aspx
\ No newline at end of file
diff --git a/translated/tech/20150709 Install Google Hangouts Desktop Client In Linux.md b/translated/tech/20150709 Install Google Hangouts Desktop Client In Linux.md
new file mode 100644
index 0000000000..e8257cbedf
--- /dev/null
+++ b/translated/tech/20150709 Install Google Hangouts Desktop Client In Linux.md
@@ -0,0 +1,67 @@
+在 Linux 中安装 Google 环聊桌面客户端
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/google-hangouts-header-664x374.jpg)
+
+先前,我们已经介绍了如何[在 Linux 中安装 Facebook Messenger][1] 和 [WhatsApp 桌面客户端][2]。这些应用都是非官方的应用。今天,我将为你推荐另一款非官方的应用,它就是 [Google 环聊][3]。
+
+当然,你可以在 Web 浏览器中使用 Google 环聊,但相比于此,使用桌面客户端会更加有趣。好奇吗?那就跟着我看看如何 **在 Linux 中安装 Google 环聊** 以及如何使用它吧。
+
+### 在 Linux 中安装 Google 环聊 ###
+
+我们将使用一个名为 [yakyak][4] 的开源项目,它是一个针对 Linux,Windows 和 OS X 平台的非官方 Google 环聊客户端。我将向你展示如何在 Ubuntu 中使用 yakyak,但我相信在其他的 Linux 发行版本中,你可以使用同样的方法来使用它。在了解如何使用它之前,让我们先看看 yakyak 的主要特点:
+
+- 发送和接受聊天信息
+- 创建和更改对话 (重命名, 添加人物)
+- 离开或删除对话
+- 桌面提醒通知
+- 打开或关闭通知
+- 针对图片上传,支持拖放,复制粘贴或使用上传按钮
+- Hangupsbot 聊天室同步(显示真实的发送者)
+- 展示行内图片
+- 历史回放
+
+听起来不错吧,你可以从下面的链接下载到该软件的安装文件:
+
+- [下载 Google 环聊客户端 yakyak][5]
+
+下载的文件是压缩的。解压后,你将看到一个名称类似于 linux-x64 或 linux-x32 的目录,其名称取决于你的系统。进入这个目录,你应该可以看到一个名为 yakyak 的文件。双击这个文件来启动它。
+
+![在 Linux 中运行 Google 环聊](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_3.jpeg)
+
+当然,你需要键入你的 Google 账号来认证。
+
+![在 Ubuntu 中设置 Google 环聊](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_2.jpeg)
+
+一旦你通过认证后,你将看到如下的画面,在这里你可以和你的 Google 联系人进行聊天。
+
+![Google_Hangout_Linux_4](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_4.jpeg)
+
+假如你想看看对话的缩略图,你可以选择 `查看 -> 展示对话缩略图`。
+
+![Google 环聊缩略图](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_5.jpeg)
+
+当有新的信息时,你将得到桌面提醒。
+
+![在 Ubuntu 中 Google 环聊的桌面提醒](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_1.jpeg)
+
+### 值得一试吗? ###
+
+我让你尝试一下,并决定 **在 Linux 中安装 Google 环聊客户端** 是否值得。若你想要官方的应用,你可以看看这些 [拥有原生 Linux 客户端的即时消息应用程序][6]。不要忘记分享你在 Linux 中使用 Google 环聊的体验。
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/install-google-hangouts-linux/
+
+作者:[Abhishek][a]
+译者:[FSSlc](https://github.com/FSSlc)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:http://itsfoss.com/facebook-messenger-linux/
+[2]:http://itsfoss.com/whatsapp-linux-desktop/
+[3]:http://www.google.com/+/learnmore/hangouts/
+[4]:https://github.com/yakyak/yakyak
+[5]:https://github.com/yakyak/yakyak
+[6]:http://itsfoss.com/best-messaging-apps-linux/
diff --git a/translated/tech/20150709 Linux FAQs with Answers--How to install a Brother printer on Linux.md b/translated/tech/20150709 Linux FAQs with Answers--How to install a Brother printer on Linux.md
new file mode 100644
index 0000000000..ad90ec75a4
--- /dev/null
+++ b/translated/tech/20150709 Linux FAQs with Answers--How to install a Brother printer on Linux.md
@@ -0,0 +1,108 @@
+Linux有问必答--如何在Linux中安装兄弟(Brother)打印机
+================================================================================
+> **提问**: 我有一台兄弟HL-2270DW激光打印机,我想从我的Linux机器上打印文档。我该如何在我的电脑上安装合适的驱动并使用它?
+
+兄弟牌以买得起的[紧凑型激光打印机][1]而闻名。你可以用低于200美元的价格得到高质量的WiFi/双工激光打印机,而且价格还在下降。最棒的是,它们还提供良好的Linux支持,因此你可以在Linux中下载并安装它们的打印机驱动。我在一年前买了台[HL-2270DW][2],我对它的性能和可靠性都很满意。
+
+下面是如何在Linux中安装和配置兄弟打印机驱动。本篇教程中,我会演示安装HL-2270DW激光打印机的USB驱动。首先通过USB线连接你的打印机到Linux上。
+
+### 准备 ###
+
+在准备阶段,进入[兄弟官方支持网站][3],输入你的型号(比如:HL-2270DW)搜索你的兄弟打印机型号。
+
+![](https://farm1.staticflickr.com/301/18970034829_6f3a48d817_c.jpg)
+
+进入下面页面后,选择你的Linux平台。对于Debian、Ubuntu或者其他衍生版,选择“Linux (deb)”。对于Fedora、CentOS或者RHEL选择“Linux (rpm)”。
+
+![](https://farm1.staticflickr.com/380/18535558583_cb43240f8a_c.jpg)
+
+下一页,你会找到你打印机的LPR驱动和CUPS包装器驱动。前者是命令行驱动,后者允许你通过网页管理和配置你的打印机。尤其是基于CUPS的GUI,对(本地、远程)打印机的维护非常有用。建议你安装这两个驱动。点击“Driver Install Tool”下载安装文件。
+
+![](https://farm1.staticflickr.com/329/19130013736_1850b0d61e_c.jpg)
+
+运行安装文件之前,你需要在64位的Linux系统上做另外一件事情。
+
+因为兄弟打印机驱动是为32位的Linux系统开发的,因此你需要按照下面的方法安装32位的库。
+
+在早期的Debian(6.0或者更早期)或者Ubuntu(11.04或者更早期),安装下面的包。
+
+ $ sudo apt-get install ia32-libs
+
+对于已经引入多架构的新的Debian或者Ubuntu而言,你可以安装下面的包:
+
+ $ sudo apt-get install lib32z1 lib32ncurses5
+
+上面的包代替了ia32-libs包。或者你只需要安装:
+
+ $ sudo apt-get install lib32stdc++6
+
+如果你使用的是基于Red Hat的Linux,你可以安装:
+
+ $ sudo yum install glibc.i686
+
+### 驱动安装 ###
+
+现在解压下载的驱动文件。
+
+ $ gunzip linux-brprinter-installer-2.0.0-1.gz
+
+接下来像下面这样运行安装文件。
+
+ $ sudo sh ./linux-brprinter-installer-2.0.0-1
+
+你会被要求输入打印机的型号。输入你打印机的型号,比如“HL-2270DW”。
+
+![](https://farm1.staticflickr.com/292/18535599323_1a94f6dae5_b.jpg)
+
+同意GPL协议之后,接受接下来的任何默认问题。
+
+![](https://farm1.staticflickr.com/526/19130014316_5835939501_b.jpg)
+
+现在LPR/CUPS打印机驱动已经安装好了。接下来要配置你的打印机了。
+
+### 打印机配置 ###
+
+我接下来就要通过基于CUPS的网页管理和配置兄弟打印机了。
+
+首先验证CUPS守护进程已经启动。
+
+ $ sudo netstat -nap | grep 631
+
+打开浏览器,输入 http://localhost:631 ,你会看到下面的打印机管理界面。
+
+![](https://farm1.staticflickr.com/324/18968588688_202086fc72_c.jpg)
+
+进入“Administration”选项卡,点击打印机选项下的“Manage Printers”。
+
+![](https://farm1.staticflickr.com/484/18533632074_0526cccb86_c.jpg)
+
+你应该可以在下面的页面中看到你的打印机(HL-2270DW)。点击打印机名。
+
+在下拉菜单“Administration”中,选择“Set As Server Default”。这会把你的打印机设置为系统默认打印机。
+
+![](https://farm1.staticflickr.com/472/19150412212_b37987c359_c.jpg)
+
+当被要求验证时,输入你的Linux登录信息。
+
+![](https://farm1.staticflickr.com/511/18968590168_807e807f73_c.jpg)
+
+现在基础配置已经基本完成了。为了测试打印,打开任何文档浏览程序(比如:PDF浏览器)并打印。你会看到“HL-2270DW”被列出并被作为默认的打印机设置。
+
+![](https://farm4.staticflickr.com/3872/18970034679_6d41d75bf9_c.jpg)
+
+打印机应该可以工作了。你可以通过CUPS的网页看到打印机状态和管理打印机任务。
+
+--------------------------------------------------------------------------------
+
+via: http://ask.xmodulo.com/install-brother-printer-linux.html
+
+作者:[Dan Nanni][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://ask.xmodulo.com/author/nanni
+[1]:http://xmodulo.com/go/brother_printers
+[2]:http://xmodulo.com/go/hl_2270dw
+[3]:http://support.brother.com/
diff --git a/translated/tech/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md b/translated/tech/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md
new file mode 100644
index 0000000000..3658528e77
--- /dev/null
+++ b/translated/tech/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md
@@ -0,0 +1,80 @@
+
+如何修复Ubuntu 14.04中检测到系统程序错误的问题
+================================================================================
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/system_program_Problem_detected.jpeg)
+
+
+在过去的几个星期,(几乎)每次都有消息 **Ubuntu 15.04在启动时检测到系统程序错误(system program problem detected on startup in Ubuntu 15.04)** 跑出来“欢迎”我。那时我是直接忽略掉它的,但是这种情况到了某个时刻,它就让人觉得非常烦人了!
+
+> 检测到系统程序错误(System program problem detected)
+>
+> 你想立即报告这个问题吗?
+>
+> ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/System_Program_Problem_Detected.png)
+
+
+我肯定地知道如果你是一个Ubuntu用户,你可能曾经也遇到过这个恼人的弹窗。在本文中,我们将探讨在Ubuntu 14.04和15.04中遇到"检测到系统程序错误(system program problem detected)"时 应该怎么办。
+
+### 怎么解决Ubuntu中“检测到系统程序错误”的错误 ###
+
+#### 那么这个通知到底是关于什么的? ####
+
+大体上讲,它是在告知你,你的系统的一部分崩溃了。可别因为“崩溃”这个词而恐慌。这不是一个严重的问题,你的系统还是完完全全可用的。只是在以前的某个时刻某个程序崩溃了,而Ubuntu想让你决定要不要把这个问题报告给开发者,这样他们就能够修复这个问题。
+
+#### 那么,我们点了“报告错误”的按钮后,它以后就不再显示了?####
+
+
+不,不是的!即使你点了“报告错误”按钮,最后你还是会被一个如下的弹窗再次“欢迎”:
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Internal_error.png)
+
+[对不起,Ubuntu发生了一个内部错误(Sorry, Ubuntu has experienced an internal error)][1]是Apport弹出的报告窗口(Apport是Ubuntu中的错误信息收集报告系统,详见Ubuntu Wiki中的Apport篇,译者注),它会进一步打开网页浏览器,然后你可以通过登录或创建[Launchpad][2]帐户来填写一份漏洞(Bug)报告。你看,这是一个复杂的过程,要花整整四步才能完成。
+#### 但是我想帮助开发者,让他们知道这个漏洞啊! ####
+
+你这样想的确非常周到体贴,而且这样做也是正确的。但是这样做的话,存在两个问题。第一,这个漏洞有非常高的概率已经被报告过了;第二,即使你报告了这次崩溃,也无法保证你不会再看到它。
+
+#### 那么,你的意思就是说别报告这次崩溃了?####
+
+对,也不对。如果你想的话,在你第一次看到它的时候报告它。你可以在上面图片显示的“显示细节(Show Details)”中,查看崩溃的程序。但是如果你总是看到它,或者你不想报告漏洞(Bug),那么我建议你还是一次性摆脱这个问题吧。
+
+### 修复Ubuntu中“检测到系统程序错误”的错误 ###
+
+这些错误报告被存放在Ubuntu中目录/var/crash中。如果你翻看这个目录的话,应该可以看到有一些以crash结尾的文件。
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Crash_reports_Ubuntu.jpeg)
+
+我的建议是删除这些错误报告。打开一个终端,执行下面的命令:
+
+ sudo rm /var/crash/*
+
+这个操作会删除所有在/var/crash目录下的所有内容。这样你就不会再被这些报告以前程序错误的弹窗所扰。但是如果有一个程序又崩溃了,你就会再次看到“检测到系统程序错误”的错误。你可以再次删除这些报告文件,或者你可以禁用Apport来彻底地摆脱这个错误弹窗。
+
+#### 彻底地摆脱Ubuntu中的系统错误弹窗 ####
+
+如果你这样做,系统中任何程序崩溃时,系统都不会再通知你。如果你想问问我的看法的话,我会说,这不是一件坏事,除非你愿意填写错误报告。如果你不想填写错误报告,那么这些错误通知存不存在都不会有什么区别。
+
+要禁止Apport,并且彻底地摆脱Ubuntu系统中的程序崩溃报告,打开一个终端,输入以下命令:
+
+    gksu gedit /etc/default/apport
+
+这个文件的内容是:
+
+ # set this to 0 to disable apport, or to 1 to enable it
+ # 设置0表示禁用Apportw,或者1开启它。译者注,下同。
+ # you can temporarily override this with
+ # 你可以用下面的命令暂时关闭它:
+ # sudo service apport start force_start=1
+ enabled=1
+
+把**enabled=1**改为**enabled=0**。保存并关闭文件。完成之后你就再也不会看到错误报告的弹窗了。很显然,如果我们想重新开启错误报告功能,只要再打开这个文件,把enabled设置为1就可以了。
+
+#### 这对你有效吗? ####
+
+我希望这篇教程能够帮助你修复Ubuntu 14.04和Ubuntu 15.04中检测到系统程序错误的问题。如果这个小窍门帮你摆脱了这个烦人的问题,请让我知道。
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/how-to-fix-system-program-problem-detected-ubuntu/
+
+作者:[Abhishek][a]
+译者:[XLCYun](https://github.com/XLCYun)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:http://itsfoss.com/how-to-solve-sorry-ubuntu-12-04-has-experienced-an-internal-error/
+[2]:https://launchpad.net/
diff --git a/translated/tech/20150713 How to manage Vim plugins.md b/translated/tech/20150713 How to manage Vim plugins.md
new file mode 100644
index 0000000000..c575d426e3
--- /dev/null
+++ b/translated/tech/20150713 How to manage Vim plugins.md
@@ -0,0 +1,149 @@
+
+如何管理Vim插件
+================================================================================
+
+
+
+Vim是Linux上一个轻量级的通用文本编辑器。虽然它一开始的学习曲线对于一般的Linux用户来说可能很陡,但比起它带来的好处,这些付出完全是值得的。随着功能的增长,在插件工具的帮助下,Vim是完全可定制的。但是,由于它的配置比较高级,你需要花一些时间去了解它的插件系统,然后才能够有效地个性化定制Vim。幸运的是,我们已经有一些工具能够使我们在使用Vim插件时更加轻松。而我日常使用的就是Vundle。
+### 什么是Vundle ###
+
+[Vundle][1](Vim Bundle 的缩写)是一个Vim插件管理器。Vundle能让你很简单地实现插件的安装、升级、搜索或者清除。它还能管理你的运行环境,并且在标签方面提供帮助。
+### 安装Vundle ###
+
+首先,如果你的Linux系统上没有Git的话,先[安装Git][2]。
+
+接着,创建一个目录,Vim的插件将会被下载并且安装在这个目录下。默认情况下,这个目录为~/.vim/bundle。
+
+ $ mkdir -p ~/.vim/bundle
+
+现在,按如下方式安装Vundle。注意Vundle本身也是一个Vim插件,因此我们同样把Vundle安装到之前创建的目录~/.vim/bundle下。
+
+ $ git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim
+
+### 配置Vundle ###
+
+现在,像下面这样配置你的.vimrc文件:
+
+ set nocompatible " This is required
+ " 这是被要求的。(译注:中文注释为译者所加,下同。)
+ filetype off " This is required
+ " 这是被要求的。
+
+ " Here you set up the runtime path
+ " 在这里设置你的运行时环境的路径。
+ set rtp+=~/.vim/bundle/Vundle.vim
+
+ " Initialize vundle
+ " 初始化vundle
+ call vundle#begin()
+
+ " This should always be the first
+ " 这一行应该永远放在前面。
+ Plugin 'gmarik/Vundle.vim'
+
+ " This examples are from https://github.com/gmarik/Vundle.vim README
+ " 这个示范来自https://github.com/gmarik/Vundle.vim README
+ Plugin 'tpope/vim-fugitive'
+
+ " Plugin from http://vim-scripts.org/vim/scripts.html
+ " 取自http://vim-scripts.org/vim/scripts.html的插件
+ Plugin 'L9'
+
+ " Git plugin not hosted on GitHub
+ " Git插件,但并不在GitHub上。
+ Plugin 'git://git.wincent.com/command-t.git'
+
+ "git repos on your local machine (i.e. when working on your own plugin)
+ "本地计算机上的Git仓库路径 (例如,当你在开发你自己的插件时)
+ Plugin 'file:///home/gmarik/path/to/plugin'
+
+ " The sparkup vim script is in a subdirectory of this repo called vim.
+ " Pass the path to set the runtimepath properly.
+ " vim脚本sparkup存放在这个名叫vim的仓库下的一个子目录中。
+ " 提交这个路径来正确地设置运行时路径
+ Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}
+
+ " Avoid a name conflict with L9
+ " 避免与L9发生名字上的冲突
+ Plugin 'user/L9', {'name': 'newL9'}
+
+ "Every Plugin should be before this line
+ "所有的插件都应该在这一行之前。
+ call vundle#end() " required 被要求的
+
+容我简单解释一下上面的设置:默认情况下,Vundle将从github.com或者vim-scripts.org下载和安装vim插件。你也可以改变这个默认行为。
+
+要从github安装(安装插件,译者注,下同):
+
+    Plugin 'user/plugin'
+
+要从http://vim-scripts.org/vim/scripts.html处安装:
+
+    Plugin 'plugin_name'
+
+要从另外一个git仓库中安装:
+
+ Plugin 'git://git.another_repo.com/plugin'
+
+从本地文件中安装:
+
+ Plugin 'file:///home/user/path/to/plugin'
+
+
+你同样可以定制其它东西,例如插件的运行时路径。当你自己在编写一个插件,或者只是想从其它目录(而不是~/.vim)加载插件时,这样做就非常有用。
+
+ Plugin 'rstacruz/sparkup', {'rtp': 'another_vim_path/'}
+
+如果你有同名的插件,你可以重命名你的插件,这样它们就不会发生冲突了。
+
+ Plugin 'user/plugin', {'name': 'newPlugin'}
+
+### 使用Vundle命令 ###
+
+一旦你用Vundle设置好你的插件,你就可以通过几个Vundle命令来安装、升级、搜索插件,或者清除没有用的插件。
+
+#### 安装一个新的插件 ####
+
+所有列在你的.vimrc文件中的插件,都会被PluginInstall命令安装。你也可以传递一个插件名给它,来安装某个特定的插件。
+
+    :PluginInstall
+    :PluginInstall <插件名>
+
+ ![](https://farm1.staticflickr.com/559/18998707843_438cd55463_c.jpg)
+
+#### 清除没有用的插件 ####
+
+如果你有任何没有用到的插件,你可以通过PluginClean命令来删除它:
+
+    :PluginClean
+
+ ![](https://farm4.staticflickr.com/3814/19433047689_17d9822af6_c.jpg)
+
+#### 查找一个插件 ####
+
+如果你想从提供的插件清单中安装一个插件,搜索功能会很有用:
+
+    :PluginSearch <文本>
+
+ ![](https://farm1.staticflickr.com/541/19593459846_75b003443d_c.jpg)
+
+
+在搜索的时候,你可以在交互式分割窗口中安装、清除、重新搜索或者重新加载插件清单。安装后的插件不会自动加载生效,要使其加载生效,可以将它们添加进你的.vimrc文件中。
+### 总结 ###
+
+Vim是一个妙不可言的工具。它不仅是一个能够使你的工作更加顺畅高效的默认文本编辑器,同时它还能够摇身一变,成为现存的几乎任何一门编程语言的IDE。
+
+注意,有一些网站能帮你找到适合的Vim插件。猛击 [http://www.vim-scripts.org][3]、Github 或者 [http://www.vimawesome.com][4] 获取新的脚本或插件。同时记得使用你的插件的帮助文档。
+
+和你最爱的编辑器一起嗨起来吧!
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/manage-vim-plugins.html
+
+作者:[Christopher Valerio][a]
+译者:[XLCYun(袖里藏云)](https://github.com/XLCYun)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/valerio
+[1]:https://github.com/VundleVim/Vundle.vim
+[2]:http://ask.xmodulo.com/install-git-linux.html
+[3]:http://www.vim-scripts.org/
+[4]:http://www.vimawesome.com/
+