From 97e33d764177e7d95bba17e48334705058771534 Mon Sep 17 00:00:00 2001 From: joeren Date: Mon, 27 Jul 2015 08:16:44 +0800 Subject: [PATCH 001/207] Update 20150612 How to Configure Swarm Native Clustering for Docker.md --- ...0612 How to Configure Swarm Native Clustering for Docker.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150612 How to Configure Swarm Native Clustering for Docker.md b/sources/tech/20150612 How to Configure Swarm Native Clustering for Docker.md index d07cd26428..0cf3d9cbb7 100644 --- a/sources/tech/20150612 How to Configure Swarm Native Clustering for Docker.md +++ b/sources/tech/20150612 How to Configure Swarm Native Clustering for Docker.md @@ -1,3 +1,4 @@ +Translating by GOLinux! How to Configure Swarm Native Clustering for Docker ================================================================================ Hi everyone, today we'll learn about Swarm and how we can create native clusters using Docker with Swarm. [Docker Swarm][1] is a native clustering program for Docker which turns a pool of Docker hosts into a single virtual host. Swarm serves the standard Docker API, so any tool which can communicate with a Docker daemon can use Swarm to transparently scale to multiple hosts. Swarm follows the "batteries included but removable" principle as other Docker Projects. It ships with a simple scheduling backend out of the box, and as initial development settles, an API will develop to enable pluggable backends. The goal is to provide a smooth out-of-box experience for simple use cases, and allow swapping in more powerful backends, like Mesos, for large scale production deployments. Swarm is extremely easy to setup and get started. @@ -92,4 +93,4 @@ via: http://linoxide.com/linux-how-to/configure-swarm-clustering-docker/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/arunp/ -[1]:https://docs.docker.com/swarm/ \ No newline at end of file +[1]:https://docs.docker.com/swarm/ From a41db0b179ec43989557861872c2ec8204a4f656 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Mon, 27 Jul 2015 09:31:05 +0800 Subject: [PATCH 002/207] [Translated]20150612 How to Configure Swarm Native Clustering for Docker.md --- ...gure Swarm Native Clustering for Docker.md | 96 ------------------- ...gure Swarm Native Clustering for Docker.md | 95 ++++++++++++++++++ 2 files changed, 95 insertions(+), 96 deletions(-) delete mode 100644 sources/tech/20150612 How to Configure Swarm Native Clustering for Docker.md create mode 100644 translated/tech/20150612 How to Configure Swarm Native Clustering for Docker.md diff --git a/sources/tech/20150612 How to Configure Swarm Native Clustering for Docker.md b/sources/tech/20150612 How to Configure Swarm Native Clustering for Docker.md deleted file mode 100644 index 0cf3d9cbb7..0000000000 --- a/sources/tech/20150612 How to Configure Swarm Native Clustering for Docker.md +++ /dev/null @@ -1,96 +0,0 @@ -Translating by GOLinux! -How to Configure Swarm Native Clustering for Docker -================================================================================ -Hi everyone, today we'll learn about Swarm and how we can create native clusters using Docker with Swarm. [Docker Swarm][1] is a native clustering program for Docker which turns a pool of Docker hosts into a single virtual host. Swarm serves the standard Docker API, so any tool which can communicate with a Docker daemon can use Swarm to transparently scale to multiple hosts. 
Swarm follows the "batteries included but removable" principle, as do other Docker projects. It ships with a simple scheduling backend out of the box, and as initial development settles, an API will be developed to enable pluggable backends. The goal is to provide a smooth out-of-box experience for simple use cases, and to allow swapping in more powerful backends, like Mesos, for large-scale production deployments. Swarm is extremely easy to set up and get started with.

So, here are some features of Swarm 0.2 out of the box.

1. Swarm 0.2.0 is about 85% compatible with the Docker Engine.
2. It supports resource management.
3. It has advanced scheduling features with constraints and affinities.
4. It supports multiple discovery backends (hubs, consul, etcd, zookeeper).
5. It uses TLS encryption for security and authentication.

So, here are some very simple and easy steps showing how we can use Swarm.

### 1. Pre-requisites to run Swarm ###

We must install Docker 1.4.0 or later on all nodes. While each node's IP need not be public, the Swarm manager must be able to access each node across the network.

Note: Swarm is currently in beta, so things are likely to change. We don't recommend that you use it in production yet.

### 2. Creating the Swarm Cluster ###

Now, we'll create the Swarm cluster by running the command below. Each node will run a Swarm node agent. The agent registers the referenced Docker daemon, monitors it, and updates the discovery backend with the node's status. The command returns a token which is a unique cluster ID; it will be used when starting the Swarm agent on the nodes.

    # docker run swarm create

![Creating Swarm Cluster](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-swarm-cluster.png)

### 3. Starting the Docker Daemon on each node ###

We'll need to log in to each node that we'll use in the cluster and start the Docker daemon there with the -H flag. This ensures that the Docker remote API on the node is available over TCP for the Swarm manager. To start the Docker daemon, we'll need to run the following command on each node.

    # docker -H tcp://0.0.0.0:2375 -d

![Starting Docker Daemon](http://blog.linoxide.com/wp-content/uploads/2015/05/starting-docker-daemon.png)

### 4. Adding the Nodes ###

After enabling the Docker daemon, we'll need to add the Swarm nodes to the discovery service. We must ensure that the node's IP is accessible from the Swarm manager. To do so, we'll need to run the following command.

    # docker run -d swarm join --addr=<node_ip>:2375 token://<cluster_id>

![Adding Nodes to Cluster](http://blog.linoxide.com/wp-content/uploads/2015/05/adding-nodes-to-cluster.png)

**Note**: Here, we'll need to replace <node_ip> and <cluster_id> with the IP address of the node and the cluster ID we got from step 2.

### 5. Starting the Swarm Manager ###

Now that we have the nodes connected to the cluster, we'll start the Swarm manager. We'll need to run the following command on the manager node.

    # docker run -d -p <swarm_port>:2375 swarm manage token://<cluster_id>

![Starting Swarm Manager](http://blog.linoxide.com/wp-content/uploads/2015/05/starting-swarm-manager.png)

### 6. Checking the Configuration ###

Once the manager is running, we can check the configuration by running the following command.

    # docker -H tcp://<swarm_ip:swarm_port> info

![Accessing Swarm Clusters](http://blog.linoxide.com/wp-content/uploads/2015/05/accessing-swarm-cluster.png)

**Note**: We'll need to replace <swarm_ip:swarm_port> with the IP address and port of the host running the Swarm manager.
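To tie steps 2-6 together, here is a minimal end-to-end session on a hypothetical two-host setup; the IP addresses (192.168.1.10 for the manager, 192.168.1.11 for a node), the published port 3375 and the token value are illustrative placeholders, not values produced by this walkthrough:

    ## on any Docker host: create the cluster and note the returned token
    # docker run --rm swarm create
    ## on every node: expose the Docker API over TCP, then join the cluster
    # docker -H tcp://0.0.0.0:2375 -d
    # docker run -d swarm join --addr=192.168.1.11:2375 token://6856663cdefdec325839a4b7e1de38e8
    ## on the manager host: start managing the cluster and verify it
    # docker run -d -p 3375:2375 swarm manage token://6856663cdefdec325839a4b7e1de38e8
    # docker -H tcp://192.168.1.10:3375 info

### 7.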
Using the docker CLI to access nodes ### - -After everything is done perfectly as explained above, this part is the most important part of the Docker Swarm. We can use Docker CLI to access the nodes and run containers on them. - - # docker -H tcp:// info - # docker -H tcp:// run ... - -### 8. Listing nodes in the cluster ### - -We can get a list of all of the running nodes using the swarm list command. - - # docker run --rm swarm list token:// - -![Listing Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/listing-swarm-nodes.png) - -### Conclusion ### - -Swarm is really an awesome feature of docker that can be used for creating and managing clusters. It is pretty easy to setup and use. It is more beautiful when we use constraints and affinities on top of it. Advanced Scheduling is an awesome feature of it which applies filters to exclude nodes with ports, labels, health and it uses strategies to pick the best node. So, if you have any questions, comments, feedback please do write on the comment box below and let us know what stuffs needs to be added or improved. Thank You! Enjoy :-) - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/configure-swarm-clustering-docker/ - -作者:[Arun Pyasi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:https://docs.docker.com/swarm/ diff --git a/translated/tech/20150612 How to Configure Swarm Native Clustering for Docker.md b/translated/tech/20150612 How to Configure Swarm Native Clustering for Docker.md new file mode 100644 index 0000000000..82849b4661 --- /dev/null +++ b/translated/tech/20150612 How to Configure Swarm Native Clustering for Docker.md @@ -0,0 +1,95 @@ +为Docker配置Swarm本地集群 +================================================================================ +嗨,大家好。今天我们来学一学Swarm相关的内容吧,我们将学习通过Swarm来创建Docker本地集群。[Docker Swarm][1]是用于Docker的本地集群项目,它可以将Docker主机池转换成单个的虚拟主机。Swarm提供了标准的Docker API,所以任何可以和Docker守护进程通信的工具都可以使用Swarm来透明地规模化多个主机。Swarm遵循“包含电池并可拆卸”的原则,就像其它Docker项目一样。它附带有一个开箱即用的简单的后端调度程序,而且作为初始开发套件,也为其开发了一个可启用即插即用后端的API。其目标在于为一些简单的使用情况提供一个平滑的、开箱即用的体验,并且它允许在更强大的后端,如Mesos,中开启交换,以达到大量生产部署的目的。Swarm配置和使用极其简单。 + +这里给大家提供Swarm 0.2开箱的即用一些特性。 + +1. Swarm 0.2.0大约85%与Docker引擎兼容。 +2. 它支持资源管理。 +3. 它具有一些带有限制器和类同器高级调度特性。 +4. 它支持多个发现后端(hubs,consul,etcd,zookeeper) +5. 它使用TLS加密方法进行安全通信和验证。 + +那么,我们来看一看Swarm的一些相当简单而简易的使用步骤吧。 + +### 1. 运行Swarm的先决条件 ### + +我们必须在所有节点安装Docker 1.4.0或更高版本。虽然哥哥节点的IP地址不需要要公共地址,但是Swarm管理器必须可以通过网络访问各个节点。 + +注意:Swarm当前还处于beta版本,因此功能特性等还有可能发生改变,我们不推荐你在生产环境中使用。 + +### 2. 创建Swarm集群 ### + +现在,我们将通过运行下面的命令来创建Swarm集群。各个节点都将运行一个swarm节点代理,该代理会注册、监控相关的Docker守护进程,并更新发现后端获取的节点状态。下面的命令会返回一个唯一的集群ID标记,在启动节点上的Swarm代理时会用到它。 + + # docker run swarm create + +![Creating Swarm Cluster](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-swarm-cluster.png) + +### 3. 启动各个节点上的Docker守护进程 ### + +我们需要使用-H标记登陆进我们将用来创建几圈和启动Docker守护进程的各个节点,它会保证Swarm管理器能够通过TCP访问到各个节点上的Docker远程API。要启动Docker守护进程,我们需要在各个节点内部运行以下命令。 + + # docker -H tcp://0.0.0.0:2375 -d + +![Starting Docker Daemon](http://blog.linoxide.com/wp-content/uploads/2015/05/starting-docker-daemon.png) + +### 4. 
添加节点 ### + +在启用Docker守护进程后,我们需要添加Swarm节点到发现服务,我们必须确保节点IP可从Swarm管理器访问到。要完成该操作,我们需要运行以下命令。 + + # docker run -d swarm join --addr=:2375 token:// + +![Adding Nodes to Cluster](http://blog.linoxide.com/wp-content/uploads/2015/05/adding-nodes-to-cluster.png) + +** 注意**:我们需要用步骤2中获取到的节点IP地址和集群ID替换这里的。 + +### 5. 开启Swarm管理器 ### + +现在,由于我们已经获得了连接到集群的节点,我们将启动swarm管理器。我们需要在节点中运行以下命令。 + + # docker run -d -p :2375 swarm manage token:// + +![Starting Swarm Manager](http://blog.linoxide.com/wp-content/uploads/2015/05/starting-swarm-manager.png) + +### 6. 检查配置 ### + +一旦管理运行起来后,我们可以通过运行以下命令来检查配置。 + + # docker -H tcp:// info + +![Accessing Swarm Clusters](http://blog.linoxide.com/wp-content/uploads/2015/05/accessing-swarm-cluster.png) + +** 注意**:我们需要替换为运行swarm管理器的主机的IP地址和端口。 + +### 7. 使用docker CLI来访问节点 ### + +在一切都像上面说得那样完美地完成后,这一部分是Docker Swarm最为重要的部分。我们可以使用Docker CLI来访问节点,并在节点上运行容器。 + + # docker -H tcp:// info + # docker -H tcp:// run ... + +### 8. 监听集群中的节点 ### + +我们可以使用swarm list命令来获取所有运行中节点的列表。 + + # docker run --rm swarm list token:// + +![Listing Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/listing-swarm-nodes.png) + +### 尾声 ### + +Swarm真的是一个有着相当不错的功能的docker,它可以用于创建和管理集群。它相当易于配置和使用,当我们在它上面使用限制器和类同器师它更为出色。高级调度程序是一个相当不错的特性,它可以应用过滤器来通过端口、标签、健康状况来排除节点,并且它使用策略来挑选最佳节点。那么,如果你有任何问题、评论、反馈,请在下面的评论框中写出来吧,好让我们知道哪些材料需要补充或改进。谢谢大家了!尽情享受吧 :-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/configure-swarm-clustering-docker/ + +作者:[Arun Pyasi][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:https://docs.docker.com/swarm/ From 4106ff98c00ea84ea431bb254c5699d09671790d Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Mon, 27 Jul 2015 13:24:26 +0800 Subject: [PATCH 003/207] Delete 20150515 Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完毕 --- ...er On Ubuntu or CentOS 7.1 or Fedora 22.md | 193 ------------------ 1 file changed, 193 deletions(-) delete mode 100644 sources/tech/20150515 Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md diff --git a/sources/tech/20150515 Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md b/sources/tech/20150515 Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md deleted file mode 100644 index 1567211cd5..0000000000 --- a/sources/tech/20150515 Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md +++ /dev/null @@ -1,193 +0,0 @@ -Translating by dingdongnigetou - -Install Plex Media Server On Ubuntu / CentOS 7.1 / Fedora 22 -================================================================================ -In this article we will show you how easily you can setup Plex Home Media Server on major Linux distributions with their latest releases. After its successful installation of Plex you will be able to use your centralized home media playback system that streams its media to many Plex player Apps and the Plex Home will allows you to setup your environment by adding your devices and to setup a group of users that all can use Plex Together. So let’s start its installation first on Ubuntu 15.04. - -### Basic System Resources ### - -System resources mainly depend on the type and number of devices that you are planning to connect with the server. 
So, according to our requirements, we will use the following system resources and software for a standalone server.

注:表格

| Plex Home Media Server | |
|---|---|
| Base Operating System | Ubuntu 15.04 / CentOS 7.1 / Fedora 22 Workstation |
| Plex Media Server | Version 0.9.12.3.1173-937aac3 |
| RAM and CPU | 1 GB, 2.0 GHz |
| Hard Disk | 30 GB |
- -### Plex Media Server 0.9.12.3 on Ubuntu 15.04 ### - -We are now ready to start the installations process of Plex Media Server on Ubuntu so let’s start with the following steps to get it ready. - -#### Step 1: System Update #### - -Login to your server with root privileges Make your that your system is upto date if not then do by using below command. - - root@ubuntu-15:~#apt-get update - -#### Step 2: Download the Latest Plex Media Server Package #### - -Create a new directory and download .deb plex Media Package in it from the official website of Plex for Ubuntu using wget command. - - root@ubuntu-15:~# cd /plex/ - root@ubuntu-15:/plex# - root@ubuntu-15:/plex# wget https://downloads.plex.tv/plex-media-server/0.9.12.3.1173-937aac3/plexmediaserver_0.9.12.3.1173-937aac3_amd64.deb - -#### Step 3: Install the Debian Package of Plex Media Server #### - -Now within the same directory run following command to start installation of debian package and then check the status of plekmediaserver. - - root@ubuntu-15:/plex# dpkg -i plexmediaserver_0.9.12.3.1173-937aac3_amd64.deb - ----------- - - root@ubuntu-15:~# service plexmediaserver status - -![Plexmediaserver Service](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-status.png) - -### Plex Home Media Web App Setup on Ubuntu 15.04 ### - -Let's open your web browser within your localhost network and open the Web Interface with your localhost IP and port 32400 and do following steps to configure it: - - http://172.25.10.179:32400/web - http://localhost:32400/web - -#### Step 1:Sign UP before Login #### - -After you have access to the web interface of Plesk Media Server make sure to Sign Up and set your username email ID and Password to login as. - -![Plex Sign In](http://blog.linoxide.com/wp-content/uploads/2015/06/PMS-Login.png) - -#### Step 2: Enter Your Pin to Secure Your Plex Media Home User #### - -![Plex User Pin](http://blog.linoxide.com/wp-content/uploads/2015/06/333.png) - -Now you have successfully configured your user under Plex Home Media. - -![Welcome To Plex](http://blog.linoxide.com/wp-content/uploads/2015/06/3333.png) - -### Opening Plex Web App on Devices Other than Localhost Server ### - -As we have seen in our Plex media home page that it indicates that "You do not have permissions to access this server". Its because of we are on a different network than the Server computer. - -![Plex Server Permissions](http://blog.linoxide.com/wp-content/uploads/2015/06/33.png) - -Now we need to resolve this permissions issue so that we can have access to server on the devices other than the hosted server by doing following setup. - -### Setup SSH Tunnel for Windows System to access Linux Server ### - -First we need to set up a SSH tunnel so that we can access things as if they were local. This is only necessary for the initial setup. - -If you are using Windows as your local system and server on Linux then we can setup SSH-Tunneling using Putty as shown. - -![Plex SSH Tunnel](http://blog.linoxide.com/wp-content/uploads/2015/06/ssh-tunnel.png) - -**Once you have the SSH tunnel set up:** - -Open your Web browser window and type following URL in the address bar. - - http://localhost:8888/web - -The browser will connect to the server and load the Plex Web App with same functionality as on local. -Agree to the terms of Services and start - -![Agree to Plex term](http://blog.linoxide.com/wp-content/uploads/2015/06/5.png) - -Now a fully functional Plex Home Media Server is ready to add new media libraries, channels, playlists etc. 
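For the SSH tunnel step above: if your local system is Linux or OS X rather than Windows, PuTTY is not needed; the same tunnel can be opened with the stock ssh client. The user name and host below are placeholders for your own Plex server:

    $ ssh -L 8888:localhost:32400 user@plex-server-ip

This forwards local port 8888 to port 32400 on the server, so http://localhost:8888/web reaches the Plex Web App exactly as described.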
- -![PMS Settings](http://blog.linoxide.com/wp-content/uploads/2015/06/8.png) - -### Plex Media Server 0.9.12.3 on CentOS 7.1 ### - -We will follow the same steps on CentOS-7.1 that we did for the installation of Plex Home Media Server on Ubuntu 15.04. - -So lets start with Plex Media Servers Package Installation. - -#### Step 1: Plex Media Server Installation #### - -To install Plex Media Server on centOS 7.1 we need to download the .rpm package from the official website of Plex. So we will use wget command to download .rpm package for this purpose in a new directory. - - [root@linux-tutorials ~]# cd /plex - [root@linux-tutorials plex]# wget https://downloads.plex.tv/plex-media-server/0.9.12.3.1173-937aac3/plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm - -#### Step 2: Install .RPM Package #### - -After completion of complete download package we will install this package using rpm command within the same direcory where we installed the .rpm package. - - [root@linux-tutorials plex]# ls - plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm - [root@linux-tutorials plex]# rpm -i plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm - -#### Step 3: Start Plexmediaservice #### - -We have successfully installed Plex Media Server Now we just need to restart its service and then enable it permanently. - - [root@linux-tutorials plex]# systemctl start plexmediaserver.service - [root@linux-tutorials plex]# systemctl enable plexmediaserver.service - [root@linux-tutorials plex]# systemctl status plexmediaserver.service - -### Plex Home Media Web App Setup on CentOS-7.1 ### - -Now we just need to repeat all steps that we performed during the Web app setup of Ubuntu. -So let's Open a new window in your web browser and access the Plex Media Server Web app using localhost or IP or your Plex server. - - http://172.20.3.174:32400/web - http://localhost:32400/web - -Then to get full permissions on the server you need to repeat the steps to create the SSH-Tunnel. -After signing up with new user account we will be able to access its all features and can add new users, add new libraries and setup it per our needs. - -![Plex Device Centos](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-devices-centos.png) - -### Plex Media Server 0.9.12.3 on Fedora 22 Work Station ### - -The Basic steps to download and install Plex Media Server are the same as its we did for in CentOS 7.1. -We just need to download its .rpm package and then install it with rpm command. - -![PMS Installation](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-on-fedora.png) - -### Plex Home Media Web App Setup on Fedora 22 Work Station ### - -We had setup Plex Media Server on the same host so we don't need to setup SSH-Tunnel in this time scenario. Just open the web browser in your Fedora 22 Workstation with default port 32400 of Plex Home Media Server and accept the Plex Terms of Services Agreement. - -![Plex Agreement](http://blog.linoxide.com/wp-content/uploads/2015/06/Plex-Terms.png) - -**Welcome to Plex Home Media Server on Fedora 22 Workstation** - -Lets login with your plex account and start with adding your libraries for your favorite movie channels , create your playlists, add your photos and enjoy with many other features of Plex Home Media Server. - -![Plex Add Libraries](http://blog.linoxide.com/wp-content/uploads/2015/06/create-library.png) - -### Conclusion ### - -We had successfully installed and configured Plex Media Server on Major Linux Distributions. 
So, Plex Home Media Server has always been a best choice for media management. Its so simple to setup on cross platform as we did for Ubuntu, CentOS and Fedora. It has simplifies the tasks of organizing your media content and streaming to other computers and devices then to share it with your friends. - --------------------------------------------------------------------------------- - -via: http://linoxide.com/tools/install-plex-media-server-ubuntu-centos-7-1-fedora-22/ - -作者:[Kashif Siddique][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/kashifs/ From baa2ea46dafd6edd8c1dd34ca455e034abdfb7af Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Mon, 27 Jul 2015 13:27:22 +0800 Subject: [PATCH 004/207] Create Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完毕 --- ...er On Ubuntu or CentOS 7.1 or Fedora 22.md | 190 ++++++++++++++++++ 1 file changed, 190 insertions(+) create mode 100644 translated/tech/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md diff --git a/translated/tech/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md b/translated/tech/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md new file mode 100644 index 0000000000..813057798b --- /dev/null +++ b/translated/tech/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md @@ -0,0 +1,190 @@ + +如何在 Ubuntu/CentOS7.1/Fedora22 上安装 Plex Media Server ? +================================================================================ +在本文中我们将会向你展示如何容易地在主流的最新发布的Linux发行版上安装Plex Home Media Server。在Plex安装成功后你将可以使用你的集中式家庭媒体播放系统,该系统能让多个Plex播放器App共享它的媒体资源,并且该系统允许你设置你的环境,通过增加你的设备以及设置一个可以一起使用Plex的用户组。让我们首先在Ubuntu15.04上开始Plex的安装。 + +### 基本的系统资源 ### + +系统资源主要取决于你打算用来连接服务的设备类型和数量, 所以根据我们的需求我们将会在一个单独的服务器上使用以下系统资源。 + +注:表格 + + + + + + + + + + + + + + + + + + + + + + +
| Plex Home Media Server | |
|---|---|
| Base Operating System | Ubuntu 15.04 / CentOS 7.1 / Fedora 22 Workstation |
| Plex Media Server | Version 0.9.12.3.1173-937aac3 |
| RAM and CPU | 1 GB, 2.0 GHz |
| Hard Disk | 30 GB |
+ +### 在Ubuntu 15.04上安装Plex Media Server 0.9.12.3 ### + +我们现在准备开始在Ubuntu上安装Plex Media Server,让我们从下面的步骤开始来让Plex做好准备。 + +#### 步骤 1: 系统更新 #### + +用root权限登陆你的服务器。确保你的系统是最新的,如果不是就使用下面的命令。 + + root@ubuntu-15:~#apt-get update + +#### 步骤 2: 下载最新的Plex Media Server包 #### + +创建一个新目录,用wget命令从Plex官网下载为Ubuntu提供的.deb包并放入该目录中。 + + root@ubuntu-15:~# cd /plex/ + root@ubuntu-15:/plex# + root@ubuntu-15:/plex# wget https://downloads.plex.tv/plex-media-server/0.9.12.3.1173-937aac3/plexmediaserver_0.9.12.3.1173-937aac3_amd64.deb + +#### 步骤 3: 安装Plex Media Server的Debian包 #### + +现在在相同的目录下执行下面的命令来开始debian包的安装, 然后检查plexmediaserver(译者注: 原文plekmediaserver, 明显笔误)的状态。 + + root@ubuntu-15:/plex# dpkg -i plexmediaserver_0.9.12.3.1173-937aac3_amd64.deb + +---------- + + root@ubuntu-15:~# service plexmediaserver status + +![Plexmediaserver Service](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-status.png) + +### 在Ubuntu 15.04上设置Plex Home Media Web应用 ### + +让我们在你的本地网络主机中打开web浏览器, 并用你的本地主机IP以及端口32400来打开Web界面并完成以下步骤来配置Plex。 + + http://172.25.10.179:32400/web + http://localhost:32400/web + +#### 步骤 1: 登陆前先注册 #### + +在你访问到Plex Media Server的Web界面之后(译者注: 原文是Plesk, 应该是笔误), 确保注册并填上你的用户名(译者注: 原文username email ID感觉怪怪:))和密码来登陆。 + +![Plex Sign In](http://blog.linoxide.com/wp-content/uploads/2015/06/PMS-Login.png) + +#### 输入你的PIN码来保护你的Plex Home Media用户(译者注: 原文Plex Media Home, 个人觉得专业称谓应该保持一致) #### + +![Plex User Pin](http://blog.linoxide.com/wp-content/uploads/2015/06/333.png) + +现在你已经成功的在Plex Home Media下配置你的用户。 + +![Welcome To Plex](http://blog.linoxide.com/wp-content/uploads/2015/06/3333.png) + +### 在设备上而不是本地服务器上打开Plex Web应用 ### + +正如我们在Plex Media主页看到的表明"你没有权限访问这个服务"。 这是因为我们跟服务器计算机不在同个网络。 + +![Plex Server Permissions](http://blog.linoxide.com/wp-content/uploads/2015/06/33.png) + +现在我们需要解决这个权限问题以便我们通过设备访问服务器而不是通过托管服务器(Plex服务器), 通过完成下面的步骤。 + +### 设置SSH隧道使Windows系统访问到Linux服务器 ### + +首先我们需要建立一条SSH隧道以便我们访问远程服务器资源,就好像资源在本地一样。 这仅仅是必要的初始设置。 + +如果你正在使用Windows作为你的本地系统,Linux作为服务器,那么我们可以参照下图通过Putty来设置SSH隧道。 +(译者注: 首先要在Putty的Session中用Plex服务器IP配置一个SSH的会话,才能进行下面的隧道转发规则配置。 +然后点击“Open”,输入远端服务器用户名密码, 来保持SSH会话连接。) + +![Plex SSH Tunnel](http://blog.linoxide.com/wp-content/uploads/2015/06/ssh-tunnel.png) + +**一旦你完成SSH隧道设置:** + +打开你的Web浏览器窗口并在地址栏输入下面的URL。 + + http://localhost:8888/web + +浏览器将会连接到Plex服务器并且加载与服务器本地功能一致的Plex Web应用。 同意服务条款并开始。 + +![Agree to Plex term](http://blog.linoxide.com/wp-content/uploads/2015/06/5.png) + +现在一个功能齐全的Plex Home Media Server已经准备好添加新的媒体库、频道、播放列表等资源。 + +![PMS Settings](http://blog.linoxide.com/wp-content/uploads/2015/06/8.png) + +### 在CentOS 7.1上安装Plex Media Server 0.9.12.3 ### + +我们将会按照上述在Ubuntu15.04上安装Plex Home Media Server的步骤来将Plex安装到CentOS 7.1上。 + +让我们从安装Plex Media Server开始。 + +#### 步骤1: 安装Plex Media Server #### + +为了在CentOS7.1上安装Plex Media Server,我们需要从Plex官网下载rpm安装包。 因此我们使用wget命令来将rpm包下载到一个新的目录下。 + + [root@linux-tutorials ~]# cd /plex + [root@linux-tutorials plex]# wget https://downloads.plex.tv/plex-media-server/0.9.12.3.1173-937aac3/plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm + +#### 步骤2: 安装RPM包 #### + +在完成安装包完整的下载之后, 我们将会使用rpm命令在相同的目录下安装这个rpm包。 + + [root@linux-tutorials plex]# ls + plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm + [root@linux-tutorials plex]# rpm -i plexmediaserver-0.9.12.3.1173-937aac3.x86_64.rpm + +#### 步骤3: 启动Plexmediaservice #### + +我们已经成功地安装Plex Media Server, 现在我们只需要重启它的服务然后让它永久地启用。 + + [root@linux-tutorials plex]# systemctl start plexmediaserver.service + [root@linux-tutorials plex]# systemctl enable plexmediaserver.service + [root@linux-tutorials plex]# systemctl status 
plexmediaserver.service + +### 在CentOS-7.1上设置Plex Home Media Web应用 ### + +现在我们只需要重复在Ubuntu上设置Plex Web应用的所有步骤就可以了。 让我们在Web浏览器上打开一个新窗口并用localhost或者Plex服务器的IP(译者注: 原文为or your Plex server, 明显的笔误)来访问Plex Home Media Web应用(译者注:称谓一致)。 + + http://172.20.3.174:32400/web + http://localhost:32400/web + +为了获取服务的完整权限你需要重复创建SSH隧道的步骤。 在你用新账户注册后我们将可以访问到服务的所有特性,并且可以添加新用户、添加新的媒体库以及根据我们的需求来设置它。 + +![Plex Device Centos](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-devices-centos.png) + +### 在Fedora 22工作站上安装Plex Media Server 0.9.12.3 ### + +基本的下载和安装Plex Media Server步骤跟在CentOS 7.1上安装的步骤一致。我们只需要下载对应的rpm包然后用rpm命令来安装它。 + +![PMS Installation](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-on-fedora.png) + +### 在Fedora 22工作站上配置Plex Home Media Web应用 ### + +我们在(与Plex服务器)相同的主机上配置Plex Media Server,因此不需要设置SSH隧道。只要在你的Fedora 22工作站上用Plex Home Media Server的默认端口号32400打开Web浏览器并同意Plex的服务条款即可。 + +![Plex Agreement](http://blog.linoxide.com/wp-content/uploads/2015/06/Plex-Terms.png) + +**欢迎来到Fedora 22工作站上的Plex Home Media Server** + +让我们用你的Plex账户登陆,并且开始将你喜欢的电影频道添加到媒体库、创建你的播放列表、添加你的图片以及享用更多其他的特性。 + +![Plex Add Libraries](http://blog.linoxide.com/wp-content/uploads/2015/06/create-library.png) + +### 总结 ### + +我们已经成功完成Plex Media Server在主流Linux发行版上安装和配置。Plex Home Media Server永远都是媒体管理的最佳选择。 它在跨平台上的设置是如此的简单,就像我们在Ubuntu,CentOS以及Fedora上的设置一样。它简化了你组织媒体内容的工作,并将媒体内容“流”向其他计算机以及设备以便你跟你的朋友分享媒体内容。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/tools/install-plex-media-server-ubuntu-centos-7-1-fedora-22/ + +作者:[Kashif Siddique][a] +译者:[dingdongnigetou](https://github.com/dingdongnigetou) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/kashifs/ From 235d39a2694ec8f4e10e835f7a0041a3767113e8 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 27 Jul 2015 13:30:38 +0800 Subject: [PATCH 005/207] PUB:20150709 7 command line tools for monitoring your Linux system @ZTinoZ --- ... tools for monitoring your Linux system.md | 83 +++++++++++++++++++ ... 
tools for monitoring your Linux system.md | 79 ------------------ 2 files changed, 83 insertions(+), 79 deletions(-) create mode 100644 published/20150709 7 command line tools for monitoring your Linux system.md delete mode 100644 translated/talk/20150709 7 command line tools for monitoring your Linux system.md diff --git a/published/20150709 7 command line tools for monitoring your Linux system.md b/published/20150709 7 command line tools for monitoring your Linux system.md new file mode 100644 index 0000000000..da46bd124e --- /dev/null +++ b/published/20150709 7 command line tools for monitoring your Linux system.md @@ -0,0 +1,83 @@ +监控 Linux 系统的 7 个命令行工具 +================================================================================ +**这里有一些基本的命令行工具,让你能更简单地探索和操作Linux。** + +![Image courtesy Meltys-stock](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-1-100591899-orig.png) + +### 深入 ### + +关于Linux最棒的一件事之一是你能深入操作系统,来探索它是如何工作的,并寻找机会来微调性能或诊断问题。这里有一些基本的命令行工具,让你能更简单地探索和操作Linux。大多数的这些命令是在你的Linux系统中已经内建的,但假如它们没有的话,就用谷歌搜索命令名和你的发行版名吧,你会找到哪些包需要安装(注意,一些命令是和其它命令捆绑起来打成一个包的,你所找的包可能写的是其它的名字)。如果你知道一些你所使用的其它工具,欢迎评论。 + + +### 我们怎么开始 ### + +![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-2-100591901-orig.png) + +须知: 本文中的截图取自一台[Debian Linux 8.1][1] (“Jessie”),其运行在[OS X 10.10.3][3] (“Yosemite”)操作系统下的[Oracle VirtualBox 4.3.28][2]中的一台虚拟机里。想要建立你的Debian虚拟机,可以看看我的这篇教程——“[如何在 VirtualBox VM 下安装 Debian][4]”。 + + +### Top ### + +![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-3-100591902-orig.png) + +作为Linux系统监控工具中比较易用的一个,**top命令**能带我们一览Linux中的几乎每一处。以下这张图是它的默认界面,但是按“z”键可以切换不同的显示颜色。其它热键和命令则有其它的功能,例如显示概要信息和内存信息(第四行第二个),根据各种不一样的条件排序、终止进程任务等等(你可以在[这里][5]找到完整的列表)。 + + +### htop ### + +![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-4-100591904-orig.png) + +相比top,它的替代品Htop则更为精致。维基百科是这样描述的:“用户经常会部署htop以免Unix top不能提供关于系统进程的足够信息,比如说当你在尝试发现应用程序里的一个小的内存泄露问题,Htop一般也能作为一个系统监听器来使用。相比top,它提供了一个更方便的光标控制界面来向进程发送信号。” (想了解更多细节猛戳[这里][6]) + + +### Vmstat ### + +![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-5-100591903-orig.png) + +Vmstat是一款监控Linux系统性能数据的简易工具,这让它更合适使用在shell脚本中。使出你的正则表达式绝招,用vmstat和cron作业来做一些激动人心的事情吧。“后面的报告给出的是上一次系统重启之后的均值,另外一份报告给出的则是从前一个报告起间隔周期中的信息。其它的进程和内存报告是那个瞬态的情况”(猛戳[这里][7]获取更多信息)。 + +### ps ### + +![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-6-100591905-orig.png) + +ps命令展现的是正在运行中的进程列表。在这种情况下,我们用“-e”选项来显示每个进程,也就是所有正在运行的进程了(我把列表滚动到了前面,否则列名就看不到了)。这个命令有很多选项允许你去按需格式化输出。只要使用上述一点点的正则表达式技巧,你就能得到一个强大的工具了。猛戳[这里][8]获取更多信息。 + +### Pstree ### + +![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-7-100591906-orig.png) + +Pstree“以树状图显示正在运行中的进程。这个进程树是以某个 pid 为根节点的,如果pid被省略的话那树是以init为根节点的。如果指定用户名,那所有进程树都会以该用户所属的进程为父进程进行显示。”以树状图来帮你将进程之间的所属关系进行分类,这的确是个很有效的工具(戳[这里][9])。 + +### pmap ### + +![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-8-100591907-orig.png) + +在调试过程中,理解一个应用程序如何使用内存是至关重要的,而pmap的作用就是当给出一个进程ID时显示出相关信息。上面的截图展示的是使用“-x”选项所产生的部分输出,你也可以用pmap的“-X”选项来获取更多的细节信息,但是前提是你要有个更宽的终端窗口。 + +### iostat ### + +![](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-9-100591900-orig.png) + +Linux系统的一个至关重要的性能指标是处理器和存储的使用率,它也是iostat命令所报告的内容。如同ps命令一样,iostat有很多选项允许你选择你需要的输出格式,除此之外还可以在某一段时间范围内的重复采样几次。详情请戳[这里][10]。 + 
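Before wrapping up, here is a compact cheat sheet that pulls the seven tools together; the sampling intervals and counts are arbitrary examples rather than required values, and `<pid>` is a placeholder:

    $ top                          # interactive overview; press 'z' to toggle colors
    $ htop                         # a friendlier top with cursor-based process control
    $ vmstat 5 3                   # memory/CPU statistics every 5 seconds, 3 samples
    $ ps -e --sort=-pcpu | head    # all processes, busiest CPU consumers first
    $ pstree -p                    # running processes as a tree, with PIDs
    $ pmap -x <pid>                # memory map of a single process
    $ iostat -x 2 4                # extended CPU/disk statistics every 2 seconds, 4 samples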
+-------------------------------------------------------------------------------- + +via: http://www.networkworld.com/article/2937219/linux/7-command-line-tools-for-monitoring-your-linux-system.html + +作者:[Mark Gibbs][a] +译者:[ZTinoZ](https://github.com/ZTinoZ) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.networkworld.com/author/Mark-Gibbs/ +[1]:https://www.debian.org/releases/stable/ +[2]:https://www.virtualbox.org/ +[3]:http://www.apple.com/osx/ +[4]:http://www.networkworld.com/article/2937148/how-to-install-debian-linux-8-1-in-a-virtualbox-vm +[5]:http://linux.die.net/man/1/top +[6]:http://linux.die.net/man/1/htop +[7]:http://linuxcommand.org/man_pages/vmstat8.html +[8]:http://linux.die.net/man/1/ps +[9]:http://linux.die.net/man/1/pstree +[10]:http://linux.die.net/man/1/iostat diff --git a/translated/talk/20150709 7 command line tools for monitoring your Linux system.md b/translated/talk/20150709 7 command line tools for monitoring your Linux system.md deleted file mode 100644 index e33c259d5c..0000000000 --- a/translated/talk/20150709 7 command line tools for monitoring your Linux system.md +++ /dev/null @@ -1,79 +0,0 @@ -监控你的Linux系统的7个命令行工具 -================================================================================ -**这里有一些基本的命令行工具,让你能更简单地探索和操作Linux。** - -![Image courtesy Meltys-stock](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-1-100591899-orig.png) - -### 深入 ### - -关于Linux最棒的一件事之一是你能深入操作系统多深,来探索它是如何工作的并寻找机会来微调性能或诊断问题。这里有一些基本的命令行工具,让你能更简单地探索和操作Linux。大多数的这些命令是在你的Linux系统中已经内建的,但假设它们不是,就用谷歌搜索命令名和你的发行版名吧,你会找到哪些包需要安装(注意,一些命令是和其它命令捆绑起来打成一个包的,你所找的包可能写的是其它的名字)。如果你知道一些你所使用的其它工具,欢迎评论。 - -![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-2-100591901-orig.png) - -### 我们怎么做 ### - -须知: 本文中的截图取自[Debian Linux 8.1][1] (“Jessie”),其运行在[OS X 10.10.3][3] (“Yosemite”)操作系统下[Oracle VirtualBox 4.3.28][2]中的一台虚拟机里。想要建立你的Debian虚拟机,可以看看我的这篇教程——“[How to install Debian Linux in a VirtualBox VM][4]”。 - -![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-3-100591902-orig.png) - -### Top ### - -作为Linux系统监控工具中比较易用的一个,**top命令**能带我们一览Linux中的几乎每一处。以下这张图是它的默认界面,但是按“z”键可以切换不同的显示颜色。其它热键和命令则有其它的功能,例如显示概要信息和内存信息(第四行第二个),根据各种不一样的条件排序、终止进程任务等等(你可以在[这里][5]找到完整的列表)。 - -![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-4-100591904-orig.png) - -### htop ### - -相比top,它的替代品Htop则更为精致。维基百科是这样描述的:“用户经常会部署htop以防Unix top不能提供关于系统进程的足够信息,比如说当你在尝试发现应用程序里的一个小的内存泄露问题,Htop一般也能作为一个系统监听器来使用。相比top,它提供了一个更方便的光标控制界面来向进程发送信号。” (想了解更多细节猛戳[这里][6].) 
- -![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-5-100591903-orig.png) - -### Vmstat ### - -Vmstat是一款监控Linux系统性能数据的简易工具,这让它在shell脚本中使用更合适。打开你的regex-fu,用vmstat和cron作业来做一些激动人心的事情吧。“产出的第一份报告给出的是上一次系统重启之后的均值,另外其一份报告给出的则是从前一个报告起间隔周期中的信息。进程和内存报告在任何情况下都是不停更新的”(猛戳[这里][7]获取更多信息)。 - -![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-6-100591905-orig.png) - -### ps ### - -ps命令展现的是正在运行中的进程列表。在这种情况下,我们用“-e”选项来显示每个进程,也就是所有正在运行的进程了(我把列表滚动到了头部否则列名就看不到了)。这个命令有很多选项允许你去按需格式化输出。只要使用上述一点点的regex-fu你就能得到一个强大的工具了。猛戳[这里][8]获取更多信息。 - -![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-7-100591906-orig.png) - -### Pstree ### - -Pstree“以树状图显示正在运行中的进程。如果pid被省略的话那树结构是以pid或init为父进程,如果用户名指定,那所有进程树都会以该用户所属的进程为父进程进行显示。”以树状图来帮你将进程之间的所属关系进行分类,这的确是个很有效的工具(戳[这里][9])。 - -![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-8-100591907-orig.png) - -### pmap ### - -理解一个应用程序在调试过程中如何使用内存是至关重要的,而pmap的作用就是当给出一个进程ID(PID)时显示出相关信息。上面的截图展示的是使用“-x”选项所产生的部分输出,你也可以用pmap的“-X”选项来获取更多的细节信息但是前提是你要有个更宽的终端窗口。 - -![Image courtesy Mark Gibbs](http://images.techhive.com/images/article/2015/06/command-line-tools-monitoring-linux-system-9-100591900-orig.png) - -### iostat ### - -Linux系统的一个至关重要的性能指标是处理器和存储的使用率,它也是iostat命令所报告的内容。如同ps命令一样,iostat有很多选项允许你选择你需要的输出格式,除此之外还有某一段时间范围内的简单性能输出并在报告之前重复抽样多次。详情戳[这里][10]。 - --------------------------------------------------------------------------------- - -via: http://www.networkworld.com/article/2937219/linux/7-command-line-tools-for-monitoring-your-linux-system.html - -作者:[Mark Gibbs][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.networkworld.com/author/Mark-Gibbs/ -[1]:https://www.debian.org/releases/stable/ -[2]:https://www.virtualbox.org/ -[3]:http://www.apple.com/osx/ -[4]:http://www.networkworld.com/article/2937148/how-to-install-debian-linux-8-1-in-a-virtualbox-vm -[5]:http://linux.die.net/man/1/top -[6]:http://linux.die.net/man/1/htop -[7]:http://linuxcommand.org/man_pages/vmstat8.html -[8]:http://linux.die.net/man/1/ps -[9]:http://linux.die.net/man/1/pstree -[10]:http://linux.die.net/man/1/iostat From 6e389ee8d08ba743b98c15634d197ffe908ebf7c Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 27 Jul 2015 16:45:43 +0800 Subject: [PATCH 006/207] =?UTF-8?q?20150727-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...estore and Migrate Containers in Docker.md | 90 +++++++++++++++++++ 1 file changed, 90 insertions(+) create mode 100644 sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md diff --git a/sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md b/sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md new file mode 100644 index 0000000000..fc21489ec9 --- /dev/null +++ b/sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md @@ -0,0 +1,90 @@ +Easy Backup, Restore and Migrate Containers in Docker +================================================================================ +Today we'll learn how we can easily backup, restore and migrate docker containers out of the box. 
[Docker][1] is an open source platform that automates the deployment of applications, with a fast and easy way to pack, ship and run them under a lightweight layer of software called a container. It makes applications platform independent, as it acts as an additional layer of abstraction and automation of operating-system-level virtualization on Linux. It utilizes the resource isolation features of the Linux kernel, namely cgroups and namespaces, to avoid the overhead of virtual machines. It provides great building blocks for deploying and scaling web apps, databases, and back-end services without depending on a particular stack or provider. Containers are software layers created from a docker image that contains the respective Linux filesystem and applications out of the box. If we have docker containers running in our box and need to back them up for future use, or wanna migrate them, then this tutorial will show you how to backup, restore and migrate docker containers on a Linux operating system.

Here are some easy steps on how we can backup, restore and migrate docker containers in Linux.

### 1. Backing up the Containers ###

First of all, in order to backup the containers in docker, we'll wanna see the list of containers that we wanna backup. To do so, we'll need to run docker ps in our Linux machine running the docker engine, with some containers already created.

    # docker ps

![Docker Containers List](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-containers-list.png)

After that, we'll choose the containers we wanna backup and then create a snapshot of each container. We can use the docker commit command in order to create the snapshot.

    # docker commit -p 30b8f18f20b4 container-backup

![Docker Commit](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-commit.png)

This will generate a snapshot of the container as a docker image. We can see the docker image by running the command docker images as shown below.

    # docker images

![Docker Images](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-images.png)

As we can see, the snapshot that was taken above has been preserved as a docker image. Now, in order to backup that snapshot, we have two options: one is that we can log in to the docker registry hub and push the image, and the other is that we can backup the docker image as a tarball for further use.

If we wanna upload or backup the image to the [docker registry hub][2], we can simply run the docker login command to log in to the docker registry hub and then push the required image.

    # docker login

![Docker Login](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-login.png)

    # docker tag a25ddfec4d2a arunpyasi/container-backup:test
    # docker push arunpyasi/container-backup

![Docker Push](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-push.png)

If we don't wanna backup to the docker registry hub and wanna save the image locally for future use, then we can backup the image as a tarball. To do so, we'll need to run the following docker save command.

    # docker save -o ~/container-backup.tar container-backup

![taking tarball backup](http://blog.linoxide.com/wp-content/uploads/2015/07/taking-tarball-backup.png)

To verify that the tarball has been generated, we can simply run ls inside the directory where we saved the tarball.
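As a small convenience on top of the two commands above, the snapshot and tarball steps can be chained; the date-stamped tag is just an illustrative naming scheme, and 30b8f18f20b4 is the example container ID used earlier:

    # docker commit -p 30b8f18f20b4 container-backup:$(date +%F)
    # docker save -o ~/container-backup-$(date +%F).tar container-backup:$(date +%F)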
### 2. Restoring the Containers ###

Next, after we have successfully backed up our docker containers, we'll now go for restoring those containers which were snapshotted as docker images. If we have pushed those docker images to the registry hub, then we can simply pull the docker image back and run it out of the box.

    # docker pull arunpyasi/container-backup:test

![Docker Pull](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-pull.png)

But if we have backed up those docker images locally as a tarball file, then we can easily load that docker image using the docker load command followed by the backed-up tarball.

    # docker load -i ~/container-backup.tar

Now, to ensure that those docker images have been loaded successfully, we'll run the docker images command.

    # docker images

After the images have been loaded, we'll run a docker container from the loaded image.

    # docker run -d -p 80:80 container-backup

![Restoring Docker Tarball](http://blog.linoxide.com/wp-content/uploads/2015/07/restoring-docker-tarballs.png)

### 3. Migrating the Docker Containers ###

Migrating containers involves both of the processes above, i.e. backup and restore. We can migrate any docker container from one machine to another. In the process of migration, we first take a backup of the container as a snapshot docker image. Then, that docker image is either pushed to the docker registry hub or saved locally as a tarball file. If we have pushed the image to the docker registry hub, we can easily restore and run the container using the docker run command from any machine we want. But if we have saved the image locally as a tarball, we can simply copy or move the image to the machine where we want to load it and run the required container.

### Conclusion ###

Finally, we have learned how we can backup, restore and migrate docker containers out of the box. This tutorial works the same on every operating system platform where docker runs successfully. Really, docker is a pretty simple and easy to use, yet very powerful, tool. Its commands are short and easy to remember, with many simple but powerful flags and parameters. The above methods make it easy to back up our containers so that we can restore them when needed in the future. This can help us recover our containers and images even if our host system crashes or gets wiped out accidentally. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you!
Enjoy :-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/backup-restore-migrate-containers-docker/ + +作者:[Arun Pyasi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://docker.com/ +[2]:https://registry.hub.docker.com/ \ No newline at end of file From 54c0e5492e53abfadb87df7bc727714942a47e55 Mon Sep 17 00:00:00 2001 From: zpl1025 Date: Mon, 27 Jul 2015 21:18:53 +0800 Subject: [PATCH 007/207] [translating] 20150709 Interviews--Linus Torvalds Answers Your Question.md --- ...0150709 Interviews--Linus Torvalds Answers Your Question.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md b/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md index e723658787..f1420fd0e4 100644 --- a/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md +++ b/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md @@ -1,3 +1,4 @@ +zpl1025 Interviews: Linus Torvalds Answers Your Question ================================================================================ Last Thursday you had a chance to [ask Linus Torvalds][1] about programming, hardware, and all things Linux. You can read his answers to those questions below. If you'd like to see what he had to say the last time we sat down with him, [you can do so here][2]. @@ -181,4 +182,4 @@ via: http://linux.slashdot.org/story/15/06/30/0058243/interviews-linus-torvalds- [a]:samzenpus@slashdot.org [1]:http://interviews.slashdot.org/story/15/06/24/1718247/interview-ask-linus-torvalds-a-question [2]:http://meta.slashdot.org/story/12/10/11/0030249/linus-torvalds-answers-your-questions -[3]:https://lwn.net/Articles/604695/ \ No newline at end of file +[3]:https://lwn.net/Articles/604695/ From 439bc9a7b9cc721341ec2eb5c23d8ec9fe3821c8 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 27 Jul 2015 23:57:15 +0800 Subject: [PATCH 008/207] PUB:20150713 How To Fix System Program Problem Detected In Ubuntu 14.04 @XLCYun --- ...rogram Problem Detected In Ubuntu 14.04.md | 29 ++++++++++--------- 1 file changed, 16 insertions(+), 13 deletions(-) rename {translated/tech => published}/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md (74%) diff --git a/translated/tech/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md b/published/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md similarity index 74% rename from translated/tech/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md rename to published/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md index 3658528e77..92aa82ac03 100644 --- a/translated/tech/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md +++ b/published/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md @@ -1,10 +1,9 @@ - -如何修复ubuntu 14.04中检测到系统程序错误的问题 +如何修复 ubuntu 中检测到系统程序错误的问题 ================================================================================ + ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/system_program_Problem_detected.jpeg) - -在过去的几个星期,(几乎)每次都有消息 **Ubuntu 15.04在启动时检测到系统程序错误(system program problem detected on startup in Ubuntu 15.04)** 跑出来“欢迎”我。那时我是直接忽略掉它的,但是这种情况到了某个时刻,它就让人觉得非常烦人了! 
+在过去的几个星期,(几乎)每次都有消息 **Ubuntu 15.04在启动时检测到系统程序错误** 跑出来“欢迎”我。那时我是直接忽略掉它的,但是这种情况到了某个时刻,它就让人觉得非常烦人了! > 检测到系统程序错误(System program problem detected) > @@ -18,15 +17,16 @@ #### 那么这个通知到底是关于什么的? #### -大体上讲,它是在告知你,你的系统的一部分崩溃了。可别因为“崩溃”这个词而恐慌。这不是一个严重的问题,你的系统还是完完全全可用的。只是在以前的某个时刻某个程序崩溃了,而Ubuntu想让你决定要不要把这个问题报告给开发者,这样他们就能够修复这个问题。 +大体上讲,它是在告知你,你的系统的一部分崩溃了。可别因为“崩溃”这个词而恐慌。这不是一个严重的问题,你的系统还是完完全全可用的。只是在之前的某个时刻某个程序崩溃了,而Ubuntu想让你决定要不要把这个问题报告给开发者,这样他们就能够修复这个问题。 #### 那么,我们点了“报告错误”的按钮后,它以后就不再显示了?#### +不,不是的!即使你点了“报告错误”按钮,最后你还是会被一个如下的弹窗再次“欢迎”一下: -不,不是的!即使你点了“报告错误”按钮,最后你还是会被一个如下的弹窗再次“欢迎”: ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Ubuntu_Internal_error.png) -[对不起,Ubuntu发生了一个内部错误(Sorry, Ubuntu has experienced an internal error)][1]是一个Apport(Apport是Ubuntu中错误信息的收集报告系统,详见Ubuntu Wiki中的Apport篇,译者注),它将会进一步的打开网页浏览器,然后你可以通过登录或创建[Launchpad][2]帐户来填写一份漏洞(Bug)报告文件。你看,这是一个复杂的过程,它要花整整四步来完成. +[对不起,Ubuntu发生了一个内部错误][1]是个Apport(LCTT 译注:Apport是Ubuntu中错误信息的收集报告系统,详见Ubuntu Wiki中的Apport篇),它将会进一步的打开网页浏览器,然后你可以通过登录或创建[Launchpad][2]帐户来填写一份漏洞(Bug)报告文件。你看,这是一个复杂的过程,它要花整整四步来完成。 + #### 但是我想帮助开发者,让他们知道这个漏洞啊 !#### 你这样想的确非常地周到体贴,而且这样做也是正确的。但是这样做的话,存在两个问题。第一,存在非常高的概率,这个漏洞已经被报告过了;第二,即使你报告了个这次崩溃,也无法保证你不会再看到它。 @@ -34,35 +34,38 @@ #### 那么,你的意思就是说别报告这次崩溃了?#### 对,也不对。如果你想的话,在你第一次看到它的时候报告它。你可以在上面图片显示的“显示细节(Show Details)”中,查看崩溃的程序。但是如果你总是看到它,或者你不想报告漏洞(Bug),那么我建议你还是一次性摆脱这个问题吧。 + ### 修复Ubuntu中“检测到系统程序错误”的错误 ### 这些错误报告被存放在Ubuntu中目录/var/crash中。如果你翻看这个目录的话,应该可以看到有一些以crash结尾的文件。 + ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Crash_reports_Ubuntu.jpeg) 我的建议是删除这些错误报告。打开一个终端,执行下面的命令: sudo rm /var/crash/* -这个操作会删除所有在/var/crash目录下的所有内容。这样你就不会再被这些报告以前程序错误的弹窗所扰。但是如果有一个程序又崩溃了,你就会再次看到“检测到系统程序错误”的错误。你可以再次删除这些报告文件,或者你可以禁用Apport来彻底地摆脱这个错误弹窗。 +这个操作会删除所有在/var/crash目录下的所有内容。这样你就不会再被这些报告以前程序错误的弹窗所扰。但是如果又有一个程序崩溃了,你就会再次看到“检测到系统程序错误”的错误。你可以再次删除这些报告文件,或者你可以禁用Apport来彻底地摆脱这个错误弹窗。 + #### 彻底地摆脱Ubuntu中的系统错误弹窗 #### 如果你这样做,系统中任何程序崩溃时,系统都不会再通知你。如果你想问问我的看法的话,我会说,这不是一件坏事,除非你愿意填写错误报告。如果你不想填写错误报告,那么这些错误通知存不存在都不会有什么区别。 要禁止Apport,并且彻底地摆脱Ubuntu系统中的程序崩溃报告,打开一个终端,输入以下命令: + gksu gedit /etc/default/apport 这个文件的内容是: - # set this to 0 to disable apport, or to 1 to enable it - # 设置0表示禁用Apportw,或者1开启它。译者注,下同。 - # you can temporarily override this with + # 设置0表示禁用Apportw,或者1开启它。 # 你可以用下面的命令暂时关闭它: # sudo service apport start force_start=1 enabled=1 -把**enabled=1**改为**enabled=0**.保存并关闭文件。完成之后你就再也不会看到弹窗报告错误了。很显然,如果我们想重新开启错误报告功能,只要再打开这个文件,把enabled设置为1就可以了。 +把**enabled=1**改为**enabled=0**。保存并关闭文件。完成之后你就再也不会看到弹窗报告错误了。很显然,如果我们想重新开启错误报告功能,只要再打开这个文件,把enabled设置为1就可以了。 #### 你的有效吗? 
#### + 我希望这篇教程能够帮助你修复Ubuntu 14.04和Ubuntu 15.04中检测到系统程序错误的问题。如果这个小窍门帮你摆脱了这个烦人的问题,请让我知道。 -------------------------------------------------------------------------------- @@ -71,7 +74,7 @@ via: http://itsfoss.com/how-to-fix-system-program-problem-detected-ubuntu/ 作者:[Abhishek][a] 译者:[XLCYun](https://github.com/XLCYun) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 3340d5e78d492c8950aee6ac30e9ec157654d46d Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 28 Jul 2015 00:05:54 +0800 Subject: [PATCH 009/207] PUB:20150709 Install Google Hangouts Desktop Client In Linux @FSSlc --- ...all Google Hangouts Desktop Client In Linux.md | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) rename {translated/tech => published}/20150709 Install Google Hangouts Desktop Client In Linux.md (89%) diff --git a/translated/tech/20150709 Install Google Hangouts Desktop Client In Linux.md b/published/20150709 Install Google Hangouts Desktop Client In Linux.md similarity index 89% rename from translated/tech/20150709 Install Google Hangouts Desktop Client In Linux.md rename to published/20150709 Install Google Hangouts Desktop Client In Linux.md index e8257cbedf..4adca83c52 100644 --- a/translated/tech/20150709 Install Google Hangouts Desktop Client In Linux.md +++ b/published/20150709 Install Google Hangouts Desktop Client In Linux.md @@ -1,24 +1,25 @@ 在 Linux 中安装 Google 环聊桌面客户端 ================================================================================ + ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/google-hangouts-header-664x374.jpg) 先前,我们已经介绍了如何[在 Linux 中安装 Facebook Messenger][1] 和[WhatsApp 桌面客户端][2]。这些应用都是非官方的应用。今天,我将为你推荐另一款非官方的应用,它就是 [Google 环聊][3] -当然,你可以在 Web 浏览器中使用 Google 环聊,但相比于此,使用桌面客户端会更加有趣。好奇吗?那就跟着我看看如何 **在 Linux 中安装 Google 环聊** 以及如何使用它把。 +当然,你可以在 Web 浏览器中使用 Google 环聊,但相比于此,使用桌面客户端会更加有趣。好奇吗?那就跟着我看看如何 **在 Linux 中安装 Google 环聊** 以及如何使用它吧。 ### 在 Linux 中安装 Google 环聊 ### 我们将使用一个名为 [yakyak][4] 的开源项目,它是一个针对 Linux,Windows 和 OS X 平台的非官方 Google 环聊客户端。我将向你展示如何在 Ubuntu 中使用 yakyak,但我相信在其他的 Linux 发行版本中,你可以使用同样的方法来使用它。在了解如何使用它之前,让我们先看看 yakyak 的主要特点: - 发送和接受聊天信息 -- 创建和更改对话 (重命名, 添加人物) +- 创建和更改对话 (重命名, 添加参与者) - 离开或删除对话 - 桌面提醒通知 - 打开或关闭通知 -- 针对图片上传,支持拖放,复制粘贴或使用上传按钮 -- Hangupsbot 房间同步(实际的用户图片) (注: 这里翻译不到位,希望改善一下) +- 对于图片上传,支持拖放,复制粘贴或使用上传按钮 +- Hangupsbot 房间同步(使用用户实际的图片) - 展示行内图片 -- 历史回放 +- 翻阅历史 听起来不错吧,你可以从下面的链接下载到该软件的安装文件: @@ -36,7 +37,7 @@ ![Google_Hangout_Linux_4](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_4.jpeg) -假如你想看看对话的配置图,你可以选择 `查看-> 展示对话缩略图` +假如你想在联系人里面显示用户头像,你可以选择 `查看-> 展示对话缩略图` ![Google 环聊缩略图](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Google_Hangout_Linux_5.jpeg) @@ -54,7 +55,7 @@ via: http://itsfoss.com/install-google-hangouts-linux/ 作者:[Abhishek][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 2f8bb078e09a6ad0807e328d24d71fd780e00a10 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 28 Jul 2015 00:22:44 +0800 Subject: [PATCH 010/207] PUB:20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1 @GOLinux --- ...PHP Codes in Linux Command Line--Part 1.md | 57 +++++++++++-------- 1 file changed, 33 insertions(+), 24 deletions(-) rename {translated/tech => published}/20150722 How to Use and Execute PHP Codes in Linux Command 
Line--Part 1.md (77%) diff --git a/translated/tech/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md b/published/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md similarity index 77% rename from translated/tech/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md rename to published/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md index 1d69a7c746..79fa6b6b12 100644 --- a/translated/tech/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md +++ b/published/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md @@ -1,10 +1,12 @@ -Linux命令行中使用和执行PHP代码——第一部分 +在 Linux 命令行中使用和执行 PHP 代码(一) ================================================================================ -PHP是一个开元服务器端脚本语言,最初这三个字母代表的是“Personal Home Page”,而现在则代表的是“PHP:Hypertext Preprocessor”,它是个递归首字母缩写。它是一个跨平台脚本语言,深受C、C++和Java的影响。 -![Run PHP Codes in Linux Command Line](http://www.tecmint.com/wp-content/uploads/2015/07/php-command-line-usage.jpeg) -Linux命令行中运行PHP代码——第一部分 +PHP是一个开源服务器端脚本语言,最初这三个字母代表的是“Personal Home Page”,而现在则代表的是“PHP:Hypertext Preprocessor”,它是个递归首字母缩写。它是一个跨平台脚本语言,深受C、C++和Java的影响。 -PHP的语法和C、Java以及带有一些PHP特性的Perl变成语言中的语法十分相似,它眼下大约正被2.6亿个网站所使用,当前最新的稳定版本是PHP版本5.6.10。 +![Run PHP Codes in Linux Command Line](http://www.tecmint.com/wp-content/uploads/2015/07/php-command-line-usage.jpeg) + +*在 Linux 命令行中运行 PHP 代码* + +PHP的语法和C、Java以及带有一些PHP特性的Perl变成语言中的语法十分相似,它当下大约正被2.6亿个网站所使用,当前最新的稳定版本是PHP版本5.6.10。 PHP是HTML的嵌入脚本,它便于开发人员快速写出动态生成的页面。PHP主要用于服务器端(而Javascript则用于客户端)以通过HTTP生成动态网页,然而,当你知道可以在Linux终端中不需要网页浏览器来执行PHP时,你或许会大为惊讶。 @@ -12,40 +14,44 @@ PHP是HTML的嵌入脚本,它便于开发人员快速写出动态生成的页 **1. 在安装完PHP和Apache2后,我们需要安装PHP命令行解释器。** - # apt-get install php5-cli [Debian and alike System) - # yum install php-cli [CentOS and alike System) + # apt-get install php5-cli [Debian 及类似系统] + # yum install php-cli [CentOS 及类似系统] -接下来我们通常要做的是,在‘/var/www/html‘(这是 Apache2 在大多数发行版中的工作目录)这个位置创建一个内容为 ‘‘,名为 ‘infophp.php‘ 的文件来测试(是否安装正确),执行以下命令即可。 +接下来我们通常要做的是,在`/var/www/html`(这是 Apache2 在大多数发行版中的工作目录)这个位置创建一个内容为 ``,名为 `infophp.php` 的文件来测试(PHP是否安装正确),执行以下命令即可。 # echo '' > /var/www/html/infophp.php -然后,将浏览器指向http://127.0.0.1/infophp.php, 这将会在网络浏览器中打开该文件。 +然后,将浏览器访问 http://127.0.0.1/infophp.php ,这将会在网络浏览器中打开该文件。 ![Check PHP Info](http://www.tecmint.com/wp-content/uploads/2015/07/Check-PHP-Info.png) -检查PHP信息 -不需要任何浏览器,在Linux终端中也可以获得相同的结果。在Linux命令行中执行‘/var/www/html/infophp.php‘,如: +*检查PHP信息* + +不需要任何浏览器,在Linux终端中也可以获得相同的结果。在Linux命令行中执行`/var/www/html/infophp.php`,如: # php -f /var/www/html/infophp.php ![Check PHP info from Commandline](http://www.tecmint.com/wp-content/uploads/2015/07/Check-PHP-info-from-Commandline.png) -从命令行检查PHP信息 -由于输出结果太大,我们可以通过管道将上述输出结果输送给 ‘less‘ 命令,这样就可以一次输出一屏了,命令如下: +*从命令行检查PHP信息* + +由于输出结果太大,我们可以通过管道将上述输出结果输送给 `less` 命令,这样就可以一次输出一屏了,命令如下: # php -f /var/www/html/infophp.php | less ![Check All PHP Info](http://www.tecmint.com/wp-content/uploads/2015/07/Check-All-PHP-Info.png) -检查所有PHP信息 -这里,‘-f‘选项解析病执行命令后跟随的文件。 +*检查所有PHP信息* + +这里,‘-f‘选项解析并执行命令后跟随的文件。 **2. 我们可以直接在Linux命令行使用`phpinfo()`这个十分有价值的调试工具而不需要从文件来调用,只需执行以下命令:** # php -r 'phpinfo();' ![PHP Debugging Tool](http://www.tecmint.com/wp-content/uploads/2015/07/PHP-Debugging-Tool.png) -PHP调试工具 + +*PHP调试工具* 这里,‘-r‘ 选项会让PHP代码在Linux终端中不带`<`和`>`标记直接执行。 @@ -74,13 +80,14 @@ PHP调试工具 输入 ‘exit‘ 或者按下 ‘ctrl+c‘ 来关闭PHP交互模式。 ![Enable PHP Interactive Mode](http://www.tecmint.com/wp-content/uploads/2015/07/Enable-PHP-interactive-mode1.png) -启用PHP交互模式 + +*启用PHP交互模式* **4. 
你可以仅仅将PHP脚本作为shell脚本来运行。首先,在你当前工作目录中创建一个PHP样例脚本。**

     # echo -e '#!/usr/bin/php\n<?php phpinfo(); ?>' > phpscript.php

-注意,我们在该PHP脚本的第一行使用#!/usr/bin/php,就像在shell脚本中那样(/bin/bash)。第一行的#!/usr/bin/php告诉Linux命令行将该脚本文件解析到PHP解释器中。
+注意,我们在该PHP脚本的第一行使用`#!/usr/bin/php`,就像在shell脚本中那样(`/bin/bash`)。第一行的`#!/usr/bin/php`告诉Linux命令行用 PHP 解释器来解析该脚本文件。

 其次,让该脚本可执行:

@@ -96,7 +103,7 @@ PHP调试工具

     # php -a

-创建一个函授,将它命名为 addition。同时,声明两个变量 $a 和 $b。
+创建一个函数,将它命名为 `addition`。同时,声明两个变量 `$a` 和 `$b`。

     php > function addition ($a, $b)
@@ -133,7 +140,8 @@ PHP调试工具
     12.3NULL

 ![Create PHP Functions](http://www.tecmint.com/wp-content/uploads/2015/07/Create-PHP-Functions.png)
-创建PHP函数
+
+*创建PHP函数*

 你可以一直运行该函数,直至退出交互模式(ctrl+z)。同时,你也应该注意到了,上面输出结果中返回的数据类型为 NULL。这个问题可以通过要求 php 交互 shell用 return 代替 echo 返回结果来修复。
@@ -152,11 +160,12 @@ PHP调试工具
 这里是一个样例,在该样例的输出结果中返回了正确的数据类型。

 ![PHP Functions](http://www.tecmint.com/wp-content/uploads/2015/07/PHP-Functions.png)
-PHP函数
+
+*PHP函数*

 永远都记住,用户定义的函数不会从一个shell会话保留到下一个shell会话,因此,一旦你退出交互shell,它就会丢失了。

-希望你喜欢此次会话。保持连线,你会获得更多此类文章。保持关注,保持健康。请在下面的评论中为我们提供有价值的反馈。点赞并分享,帮助我们扩散。
+希望你喜欢此次教程。保持连线,你会获得更多此类文章。保持关注,保持健康。请在下面的评论中为我们提供有价值的反馈。点赞并分享,帮助我们扩散。

 还请阅读: [12个Linux终端中有用的PHP命令行用法——第二部分][1]

@@ -164,9 +173,9 @@ PHP函数

 via: http://www.tecmint.com/run-php-codes-from-linux-commandline/

-作者:[vishek Kumar][a]
+作者:[Avishek Kumar][a]
 译者:[GOLinux](https://github.com/GOLinux)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

From 9bd072c35fe1925c440fd8215c0d74d3126d4418 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Tue, 28 Jul 2015 09:43:05 +0800
Subject: [PATCH 011/207] =?UTF-8?q?20150728-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...28 Process of the Linux kernel building.md | 674 ++++++++++++++++++
 1 file changed, 674 insertions(+)
 create mode 100644 sources/tech/20150728 Process of the Linux kernel building.md

diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md
new file mode 100644
index 0000000000..cb7ec19b45
--- /dev/null
+++ b/sources/tech/20150728 Process of the Linux kernel building.md
@@ -0,0 +1,674 @@
Process of the Linux kernel building
================================================================================
Introduction
--------------------------------------------------------------------------------

I will not tell you how to build and install a custom Linux kernel on your machine; you can find many [resources](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) that will help you do that. Instead, in this part we will see what happens when you type `make` in the root directory of the Linux kernel source code. When I just started to learn the source code of the Linux kernel, its [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) was the first file that I opened. And it was scary :) This [makefile](https://en.wikipedia.org/wiki/Make_%28software%29) contained `1591` lines of code at the time of writing, when the kernel was at its [third](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) release candidate.

This makefile is the top makefile in the Linux kernel source code, and the kernel build starts here. 
Yes, it is big, and moreover, if you've read the source code of the Linux kernel you may have noticed that every directory containing source code has its own makefile. Of course, it is not feasible to describe how every source file is compiled and linked, so we will look only at the compilation of the standard case. You will not find here the building of the kernel documentation, the cleaning of the kernel source code, [tags](https://en.wikipedia.org/wiki/Ctags) generation, [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler) related stuff, etc. We will start from the `make` execution with the standard kernel configuration file and will finish with the building of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage).

It would be good if you're already familiar with the [make](https://en.wikipedia.org/wiki/Make_%28software%29) util, but I will try anyway to describe every piece of code in this part.

So let's start.

Preparation before the kernel compilation
---------------------------------------------------------------------------------

There are many things to prepare before the kernel compilation can be started. The main points here are to find and configure the type of compilation, and to parse the command line arguments that are passed to the `make` util. So let's dive into the top `Makefile` of the Linux kernel.

The Linux kernel top `Makefile` is responsible for building two major products: [vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (the resident kernel image) and the modules (any module files). The [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel starts with the definition of the following variables:

```Makefile
VERSION = 4
PATCHLEVEL = 2
SUBLEVEL = 0
EXTRAVERSION = -rc3
NAME = Hurr durr I'ma sheep
```

These variables determine the current version of the Linux kernel and are used in different places, for example in forming the `KERNELVERSION` variable:

```Makefile
KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
```

After this we can see a couple of `ifeq` conditions that check some of the parameters passed to `make`. The Linux kernel makefiles provide a special `make help` target that prints all available targets and some of the command line arguments that can be passed to `make`. For example, `make V=1` provides verbose builds. The first `ifeq` condition checks whether the `V=n` option was passed to `make`:

```Makefile
ifeq ("$(origin V)", "command line")
  KBUILD_VERBOSE = $(V)
endif
ifndef KBUILD_VERBOSE
  KBUILD_VERBOSE = 0
endif

ifeq ($(KBUILD_VERBOSE),1)
  quiet =
  Q =
else
  quiet=quiet_
  Q = @
endif

export quiet Q KBUILD_VERBOSE
```

If this option is passed to `make`, we set the `KBUILD_VERBOSE` variable to the value of the `V` option; otherwise we set `KBUILD_VERBOSE` to zero. After this we check the value of `KBUILD_VERBOSE` and set the `quiet` and `Q` variables accordingly. The `@` symbol suppresses the echoing of a command, so when it is set before a command we will see something like `CC scripts/mod/empty.o` instead of the full `Compiling .... scripts/mod/empty.o` command line. In the end we just export all of these variables.

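The `quiet`/`Q` idiom is easy to poke at outside the kernel tree. Below is a stripped-down sketch of the same mechanism in a throwaway makefile; the file path, target name and message are invented purely for illustration (note that the recipe line must start with a tab):

```
$ cat > /tmp/Makefile.demo <<'EOF'
ifeq ("$(origin V)", "command line")
  KBUILD_VERBOSE = $(V)
endif
ifndef KBUILD_VERBOSE
  KBUILD_VERBOSE = 0
endif
ifeq ($(KBUILD_VERBOSE),1)
  Q =
else
  Q = @
endif
demo.o:
	$(Q)echo "  CC      demo.o"
EOF
$ make -f /tmp/Makefile.demo demo.o        # prints only:   CC      demo.o
$ make -f /tmp/Makefile.demo demo.o V=1    # also echoes the command itself
```
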
The next `ifeq` statement checks whether the `O=/dir` option was passed to `make`. This option allows you to locate all output files in the given `dir`:

```Makefile
ifeq ($(KBUILD_SRC),)

ifeq ("$(origin O)", "command line")
  KBUILD_OUTPUT := $(O)
endif

ifneq ($(KBUILD_OUTPUT),)
saved-output := $(KBUILD_OUTPUT)
KBUILD_OUTPUT := $(shell mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) \
								&& /bin/pwd)
$(if $(KBUILD_OUTPUT),, \
     $(error failed to create output directory "$(saved-output)"))

sub-make: FORCE
	$(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \
	-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))

skip-makefile := 1
endif # ifneq ($(KBUILD_OUTPUT),)
endif # ifeq ($(KBUILD_SRC),)
```

We check the `KBUILD_SRC` variable, which represents the top directory of the Linux kernel source code; if it is empty (it is empty the first time the makefile is executed), we set the `KBUILD_OUTPUT` variable to the value passed with the `O` option (if this option was passed). In the next step we check this `KBUILD_OUTPUT` variable, and if it is set, we do the following things:

* Store the value of `KBUILD_OUTPUT` in the temporary `saved-output` variable;
* Try to create the given output directory;
* Check that the directory was created, and print an error otherwise;
* If the custom output directory was created successfully, execute `make` again with the new directory (see the `-C` option).

The next `ifeq` statements check whether the `C` or `M` options were passed to `make`:

```Makefile
ifeq ("$(origin C)", "command line")
  KBUILD_CHECKSRC = $(C)
endif
ifndef KBUILD_CHECKSRC
  KBUILD_CHECKSRC = 0
endif

ifeq ("$(origin M)", "command line")
  KBUILD_EXTMOD := $(M)
endif
```

The first option, `C`, tells the makefile that all `c` source code needs to be checked with the tool provided by the `$CHECK` environment variable, which is [sparse](https://en.wikipedia.org/wiki/Sparse) by default. The second option, `M`, provides a build for external modules (we will not see this case in this part). After setting these variables we check the `KBUILD_SRC` variable, and if it is not set we set the `srctree` variable to `.`:

```Makefile
ifeq ($(KBUILD_SRC),)
        srctree := .
endif

objtree	:= .
src		:= $(srctree)
obj		:= $(objtree)

export srctree objtree VPATH
```

That tells the `Makefile` that the source tree of the Linux kernel is the current directory, where the `make` command was executed. After this we set `objtree` and other variables to this directory and export them. The next step is getting the value for the `SUBARCH` variable, which represents what the underlying architecture is:

```Makefile
SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
				  -e s/sun4u/sparc64/ \
				  -e s/arm.*/arm/ -e s/sa110/arm/ \
				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
```

As you can see, it executes the [uname](https://en.wikipedia.org/wiki/Uname) util that prints information about the machine, operating system and architecture. It takes the output of `uname`, parses it, and assigns the result to the `SUBARCH` variable. Once we have `SUBARCH`, we set the `SRCARCH` variable, which provides the directory of the given architecture, and `hdr-arch`, which provides the directory for the header files:

```Makefile
ifeq ($(ARCH),i386)
        SRCARCH := x86
endif
ifeq ($(ARCH),x86_64)
        SRCARCH := x86
endif

hdr-arch  := $(SRCARCH)
```

Note that `ARCH` is an alias for `SUBARCH`.

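You can reproduce the `SUBARCH` detection by hand. The pipeline below is the same one the makefile runs, shortened here to the two `x86` rules; on an `x86_64` machine it prints `x86`:

```
$ uname -m
x86_64
$ uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/
x86
```
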
In the next step we set the `KCONFIG_CONFIG` variable, which represents the path to the kernel configuration file; if it was not set before, it will be `.config` by default:

```Makefile
KCONFIG_CONFIG	?= .config
export KCONFIG_CONFIG
```

and the [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) that will be used during kernel compilation:

```Makefile
CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
	  else if [ -x /bin/bash ]; then echo /bin/bash; \
	  else echo sh; fi ; fi)
```

The next set of variables is related to the compilers that will be used during the Linux kernel compilation. We set the host compilers for `c` and `c++` and the flags for them:

```Makefile
HOSTCC       = gcc
HOSTCXX      = g++
HOSTCFLAGS   = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -std=gnu89
HOSTCXXFLAGS = -O2
```

Later we will meet the `CC` variable, which represents a compiler too, so why do we need the `HOST*` variables? `CC` is the target compiler that will be used during the kernel compilation, while `HOSTCC` will be used during the compilation of the set of `host` programs (we will see this soon). After this we can see the definition of the `KBUILD_MODULES` and `KBUILD_BUILTIN` variables, which are used to determine what to compile (kernel, modules or both):

```Makefile
KBUILD_MODULES :=
KBUILD_BUILTIN := 1

ifeq ($(MAKECMDGOALS),modules)
  KBUILD_BUILTIN := $(if $(CONFIG_MODVERSIONS),1)
endif
```

Here we can see the definition of these variables; if we pass only `modules` to `make`, the value of `KBUILD_BUILTIN` will depend on the `CONFIG_MODVERSIONS` kernel configuration parameter. The next step is the inclusion of the:

```Makefile
include scripts/Kbuild.include
```

`kbuild` file. [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt), or the `Kernel Build System`, is the special infrastructure that manages the building of the kernel and its modules. The `kbuild` files have the same syntax as makefiles. The [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) file provides some generic definitions for the `kbuild` system. After including this `kbuild` file we can see the definitions of the variables related to the different tools that will be used during the kernel and module compilation (like the linker, compilers, utils from [binutils](http://www.gnu.org/software/binutils/), etc.):

```Makefile
AS		= $(CROSS_COMPILE)as
LD		= $(CROSS_COMPILE)ld
CC		= $(CROSS_COMPILE)gcc
CPP		= $(CC) -E
AR		= $(CROSS_COMPILE)ar
NM		= $(CROSS_COMPILE)nm
STRIP		= $(CROSS_COMPILE)strip
OBJCOPY		= $(CROSS_COMPILE)objcopy
OBJDUMP		= $(CROSS_COMPILE)objdump
AWK		= awk
...
...
...
```

After the definition of these variables we define two more: `USERINCLUDE` and `LINUXINCLUDE`. They contain the paths of the directories with header files (public for users in the first case and for the kernel in the second case):

```Makefile
USERINCLUDE    := \
		-I$(srctree)/arch/$(hdr-arch)/include/uapi \
		-Iarch/$(hdr-arch)/include/generated/uapi \
		-I$(srctree)/include/uapi \
		-Iinclude/generated/uapi \
        -include $(srctree)/include/linux/kconfig.h

LINUXINCLUDE    := \
		-I$(srctree)/arch/$(hdr-arch)/include \
		...
```

And the standard flags for the C compiler:

```Makefile
KBUILD_CFLAGS   := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
		   -fno-strict-aliasing -fno-common \
		   -Werror-implicit-function-declaration \
		   -Wno-format-security \
		   -std=gnu89
```

These are not the final compiler flags; they can be updated by other makefiles (for example, kbuild files from `arch/`). After all of this, all the variables are exported to be available in the other makefiles. The following two variables, `RCS_FIND_IGNORE` and `RCS_TAR_IGNORE`, contain files that will be ignored in the version control system:

```Makefile
export RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o    \
			  -name CVS -o -name .pc -o -name .hg -o -name .git \) \
			  -prune -o
export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \
			 --exclude CVS --exclude .pc --exclude .hg --exclude .git
```

That's all. We have finished with all the preparations; the next point is the building of `vmlinux`.

Directly to the kernel build
--------------------------------------------------------------------------------

As we have finished all the preparations, the next step in the root makefile is related to the kernel build. Up to this moment we saw nothing in our terminal after the execution of the `make` command, but now the first steps of the compilation start. At this point we need to go to line [598](https://github.com/torvalds/linux/blob/master/Makefile#L598) of the Linux kernel top makefile, where we will find the `vmlinux` target:

```Makefile
all: vmlinux
	include arch/$(SRCARCH)/Makefile
```

Don't worry that we have skipped many lines of the Makefile placed after `export RCS_FIND_IGNORE.....` and before `all: vmlinux.....`. That part of the makefile is responsible for the `make *.config` targets, and as I wrote in the beginning of this part, we will see only the building of the kernel in a general way.

The `all:` target is the default when no target is given on the command line. You can see here that we include the architecture-specific makefile (in our case it will be [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). From this moment we will continue from this makefile. As we can see, the `all` target depends on the `vmlinux` target, which is defined a little lower in the top makefile:

```Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
```

`vmlinux` is the Linux kernel in a statically linked executable file format. The [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script combines the different compiled subsystems into vmlinux. The second target is `vmlinux-deps`, which is defined as:

```Makefile
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN)
```

and consists of the set of `built-in.o` files from each top directory of the Linux kernel. Later, when we go through all the directories of the Linux kernel, `Kbuild` will compile all the `$(obj-y)` files. It then calls `$(LD) -r` to merge these files into one `built-in.o` file.

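A relocatable (`-r`) link is worth seeing once in isolation. Here is a minimal sketch of that per-directory merge step; the two source files are invented for the demonstration, and the `file` output is abridged:

```
$ echo 'int foo(void) { return 1; }' > foo.c && gcc -c foo.c
$ echo 'int bar(void) { return 2; }' > bar.c && gcc -c bar.c
$ ld -r -o built-in.o foo.o bar.o
$ file built-in.o
built-in.o: ELF 64-bit LSB relocatable, x86-64 ...
```
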
At this point we have no `vmlinux-deps` yet, so the `vmlinux` target will not be executed now. For me, `vmlinux-deps` contains the following files:

```
arch/x86/kernel/vmlinux.lds arch/x86/kernel/head_64.o
arch/x86/kernel/head64.o    arch/x86/kernel/head.o
init/built-in.o             usr/built-in.o
arch/x86/built-in.o         kernel/built-in.o
mm/built-in.o               fs/built-in.o
ipc/built-in.o              security/built-in.o
crypto/built-in.o           block/built-in.o
lib/lib.a                   arch/x86/lib/lib.a
lib/built-in.o              arch/x86/lib/built-in.o
drivers/built-in.o          sound/built-in.o
firmware/built-in.o         arch/x86/pci/built-in.o
arch/x86/power/built-in.o   arch/x86/video/built-in.o
net/built-in.o
```

The next targets that can be executed are the following:

```Makefile
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
$(vmlinux-dirs): prepare scripts
	$(Q)$(MAKE) $(build)=$@
```

As we can see, `vmlinux-dirs` depends on two targets: `prepare` and `scripts`. The first, `prepare`, is defined in the top `Makefile` of the Linux kernel and executes three stages of preparations:

```Makefile
prepare: prepare0
prepare0: archprepare FORCE
	$(Q)$(MAKE) $(build)=.
archprepare: archheaders archscripts prepare1 scripts_basic

prepare1: prepare2 $(version_h) include/generated/utsrelease.h \
                   include/config/auto.conf
	$(cmd_crmodverdir)
prepare2: prepare3 outputmakefile asm-generic
```

The first, `prepare0`, expands to `archprepare`, which expands to `archheaders` and `archscripts`, defined in the `x86_64`-specific [Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look at it. The `x86_64`-specific makefile starts with the definition of the variables related to the architecture-specific configs ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs), etc.). After this it defines flags for compiling the [16-bit](https://en.wikipedia.org/wiki/Real_mode) code, calculates the `BITS` variable, which can be `32` for `i386` or `64` for `x86_64`, and sets flags for the assembly source code, flags for the linker and many more (you can find all the definitions in [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). The first target in this makefile is `archheaders`, and it generates the syscall table:

```Makefile
archheaders:
	$(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all
```

The second target in this makefile is `archscripts`:

```Makefile
archscripts: scripts_basic
	$(Q)$(MAKE) $(build)=arch/x86/tools relocs
```

We can see that it depends on the `scripts_basic` target from the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile). First of all we can see the `scripts_basic` target, which executes make for the [scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) makefile:

```Makefile
scripts_basic:
	$(Q)$(MAKE) $(build)=scripts/basic
```

The `scripts/basic/Makefile` contains targets for the compilation of two host programs: `fixdep` and `bin2c`:

```Makefile
hostprogs-y	:= fixdep
hostprogs-$(CONFIG_BUILD_BIN2C)     += bin2c
always		:= $(hostprogs-y)

$(addprefix $(obj)/,$(filter-out fixdep,$(always))): $(obj)/fixdep
```

The first program, `fixdep`, optimizes the list of dependencies generated by [gcc](https://gcc.gnu.org/), which tells make when a source code file needs to be remade. The second program, `bin2c`, depends on the value of the `CONFIG_BUILD_BIN2C` kernel configuration option and is a very small C program that converts a binary on stdin to a C include on stdout.

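If you want to see what such a conversion looks like without building `bin2c` itself, the widely available `xxd -i` does essentially the same job; the file name here is a throwaway example:

```
$ printf 'OK' > /tmp/blob.bin && xxd -i /tmp/blob.bin
unsigned char _tmp_blob_bin[] = {
  0x4f, 0x4b
};
unsigned int _tmp_blob_bin_len = 2;
```
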
You may notice the strange notation here: `hostprogs-y`, etc. This notation is used in all `kbuild` files, and you can read more about it in the [documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt). In our case `hostprogs-y` tells `kbuild` that there is one host program named `fixdep` that will be built from `fixdep.c`, located in the same directory as its `Makefile`. The first output after we execute `make` in our terminal will be the result of this `kbuild` file:

```
$ make
  HOSTCC  scripts/basic/fixdep
```

As the `scripts_basic` target was executed, the `archscripts` target will execute `make` for the [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) makefile with the `relocs` target:

```Makefile
$(Q)$(MAKE) $(build)=arch/x86/tools relocs
```

The `relocs_32.c` and `relocs_64.c` files, which contain [relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29) information, will be compiled, and we will see this in the `make` output:

```
  HOSTCC  arch/x86/tools/relocs_32.o
  HOSTCC  arch/x86/tools/relocs_64.o
  HOSTCC  arch/x86/tools/relocs_common.o
  HOSTLD  arch/x86/tools/relocs
```

After compiling `relocs.c`, the `version.h` file is checked:

```Makefile
$(version_h): $(srctree)/Makefile FORCE
	$(call filechk,version.h)
	$(Q)rm -f $(old_version_h)
```

We can see it in the output:

```
CHK     include/config/kernel.release
```

Then come the `generic` assembly headers, built with the `asm-generic` target into `arch/x86/include/generated/asm`, as arranged in the top Makefile of the Linux kernel. After the `asm-generic` target, `archprepare` is done, so the `prepare0` target will be executed. As I wrote above:

```Makefile
prepare0: archprepare FORCE
	$(Q)$(MAKE) $(build)=.
```

Note the `build` variable here. It is defined in [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) and looks like this:

```Makefile
build := -f $(srctree)/scripts/Makefile.build obj
```

or, in our case, it points to the current source directory, `.`:

```Makefile
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=.
```

The [scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) file finds the `Kbuild` file in the directory given via the `obj` parameter and includes it:

```Makefile
include $(kbuild-file)
```

and builds the targets from it. In our case `.` contains the [Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild) file that generates `kernel/bounds.s` and `arch/x86/kernel/asm-offsets.s`. After this the `prepare` target has finished its work. The `vmlinux-dirs` target also depends on a second target, `scripts`, which compiles the following programs: `file2alias`, `mk_elfconfig`, `modpost`, etc. After the compilation of these scripts/host programs, our `vmlinux-dirs` target can be executed.

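Since `$(build)` is only shorthand, you can drive a single directory through `scripts/Makefile.build` by hand, which is handy when debugging kbuild problems. A sketch, with `scripts/basic` chosen just as an example:

```
$ make -f scripts/Makefile.build obj=scripts/basic
  HOSTCC  scripts/basic/fixdep
```
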
First of all, let's try to understand what `vmlinux-dirs` contains. In my case it contains the paths of the following kernel directories:

```
init usr arch/x86 kernel mm fs ipc security crypto block
drivers sound firmware arch/x86/pci arch/x86/power
arch/x86/video net lib arch/x86/lib
```

We can find the definition of `vmlinux-dirs` in the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel:

```Makefile
vmlinux-dirs	:= $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \
		     $(core-y) $(core-m) $(drivers-y) $(drivers-m) \
		     $(net-y) $(net-m) $(libs-y) $(libs-m)))

init-y		:= init/
drivers-y	:= drivers/ sound/ firmware/
net-y		:= net/
libs-y		:= lib/
...
...
...
```

Here we strip the trailing `/` from each directory name with the help of the `patsubst` and `filter` functions and put the result into `vmlinux-dirs`. So we have the list of directories in `vmlinux-dirs`, and the following code:

```Makefile
$(vmlinux-dirs): prepare scripts
	$(Q)$(MAKE) $(build)=$@
```

The `$@` represents each member of `vmlinux-dirs` here, which means that make will go recursively over all the directories from `vmlinux-dirs` and their internal directories (depending on the configuration) and execute `make` in each of them. We can see it in the output:

```
  CC      init/main.o
  CHK     include/generated/compile.h
  CC      init/version.o
  CC      init/do_mounts.o
  ...
  CC      arch/x86/crypto/glue_helper.o
  AS      arch/x86/crypto/aes-x86_64-asm_64.o
  CC      arch/x86/crypto/aes_glue.o
  ...
  AS      arch/x86/entry/entry_64.o
  AS      arch/x86/entry/thunk_64.o
  CC      arch/x86/entry/syscall_64.o
```

The source code in each directory is compiled and linked into its `built-in.o`:

```
$ find . -name built-in.o
./arch/x86/crypto/built-in.o
./arch/x86/crypto/sha-mb/built-in.o
./arch/x86/net/built-in.o
./init/built-in.o
./usr/built-in.o
...
...
```

OK, all the `built-in.o` files are built; now we can go back to the `vmlinux` target. As you remember, the `vmlinux` target is in the top Makefile of the Linux kernel. Before linking `vmlinux` it also builds [samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation), etc., but I will not describe those steps in this part, as I wrote in the beginning.

```Makefile
vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
    ...
    ...
    +$(call if_changed,link-vmlinux)
```

As you can see, its main purpose is to call the [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script, which links all the `built-in.o` files into one statically linked executable and creates the [System.map](https://en.wikipedia.org/wiki/System.map). In the end we will see the following output:

```
  LINK    vmlinux
  LD      vmlinux.o
  MODPOST vmlinux.o
  GEN     .version
  CHK     include/generated/compile.h
  UPD     include/generated/compile.h
  CC      init/version.o
  LD      init/built-in.o
  KSYM    .tmp_kallsyms1.o
  KSYM    .tmp_kallsyms2.o
  LD      vmlinux
  SORTEX  vmlinux
  SYSMAP  System.map
```

and `vmlinux` and `System.map` in the root of the Linux kernel source tree:

```
$ ls vmlinux System.map
System.map  vmlinux
```

That's all, `vmlinux` is ready. The next step is the creation of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage).

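At this point it is worth sanity-checking the result from the root of the source tree; the addresses and `file` details below are illustrative and will differ on your machine:

```
$ file vmlinux
vmlinux: ELF 64-bit LSB executable, x86-64, statically linked ...
$ grep -E ' (_text|_end)$' System.map
ffffffff81000000 T _text
ffffffff82ab0000 B _end
```
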
Building bzImage
--------------------------------------------------------------------------------

The `bzImage` is the compressed Linux kernel image. We can get it by executing `make bzImage` after `vmlinux` is built. Alternatively, we can just execute `make` without arguments and we will get `bzImage` anyway, because it is the default image:

```Makefile
all: bzImage
```

in [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look at this target; it will help us understand how this image is built. As I already said, the `bzImage` target is defined in [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) and looks like this:

```Makefile
bzImage: vmlinux
	$(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE)
	$(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot
	$(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@
```

We can see here that, first of all, `make` is called for the boot directory; in our case it is:

```Makefile
boot := arch/x86/boot
```

The main goal now is to build the source code in the `arch/x86/boot` and `arch/x86/boot/compressed` directories, build `setup.bin` and `vmlinux.bin`, and finally build the `bzImage` from them. The first target in the [arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) is `$(obj)/setup.elf`:

```Makefile
$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
	$(call if_changed,ld)
```

We already have the `setup.ld` linker script in the `arch/x86/boot` directory, and the `SETUP_OBJS` variable expands to all the source files from the `boot` directory. We can see the first output:

```
  AS      arch/x86/boot/bioscall.o
  CC      arch/x86/boot/cmdline.o
  AS      arch/x86/boot/copy.o
  HOSTCC  arch/x86/boot/mkcpustr
  CPUSTR  arch/x86/boot/cpustr.h
  CC      arch/x86/boot/cpu.o
  CC      arch/x86/boot/cpuflags.o
  CC      arch/x86/boot/cpucheck.o
  CC      arch/x86/boot/early_serial_console.o
  CC      arch/x86/boot/edd.o
```

The next source code file is [arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S), but we can't build it yet, because this target depends on the following two header files:

```Makefile
$(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h
```

The first is `voffset.h`, generated by a `sed` script that gets two addresses from `vmlinux` with the `nm` util:

```C
#define VO__end 0xffffffff82ab0000
#define VO__text 0xffffffff81000000
```

They are the start and the end addresses of the kernel. The second is `zoffset.h`, which depends on the `vmlinux` target from the [arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile):

```Makefile
$(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
	$(call if_changed,zoffset)
```

The `$(obj)/compressed/vmlinux` target depends on `vmlinux-objs-y`, which compiles the source code files from the [arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) directory, generates `vmlinux.bin` and `vmlinux.bin.bz2`, and compiles the `mkpiggy` program. We can see this in the output:

```
  LDS     arch/x86/boot/compressed/vmlinux.lds
  AS      arch/x86/boot/compressed/head_64.o
  CC      arch/x86/boot/compressed/misc.o
  CC      arch/x86/boot/compressed/string.o
  CC      arch/x86/boot/compressed/cmdline.o
  OBJCOPY arch/x86/boot/compressed/vmlinux.bin
  BZIP2   arch/x86/boot/compressed/vmlinux.bin.bz2
  HOSTCC  arch/x86/boot/compressed/mkpiggy
```

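You can check how much the compression saves right after these steps; the exact sizes depend on your configuration:

```
$ ls -lh arch/x86/boot/compressed/vmlinux.bin arch/x86/boot/compressed/vmlinux.bin.bz2
```
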
Here `vmlinux.bin` is the `vmlinux` file with the debugging information and comments stripped, and `vmlinux.bin.bz2` is the compressed `vmlinux.bin.all` plus the `u32` size of `vmlinux.bin.all`. `vmlinux.bin.all` is `vmlinux.bin + vmlinux.relocs`, where `vmlinux.relocs` is the `vmlinux` that was processed by the `relocs` program (see above). As we now have these files, the `piggy.S` assembly file will be generated with the `mkpiggy` program and compiled:

```
  MKPIGGY arch/x86/boot/compressed/piggy.S
  AS      arch/x86/boot/compressed/piggy.o
```

This assembly file contains the computed offsets into the compressed kernel. After this we can see that `zoffset.h` is generated:

```
  ZOFFSET arch/x86/boot/zoffset.h
```

Now that `zoffset.h` and `voffset.h` are generated, the compilation of the source code files from [arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) can continue:

```
  AS      arch/x86/boot/header.o
  CC      arch/x86/boot/main.o
  CC      arch/x86/boot/mca.o
  CC      arch/x86/boot/memory.o
  CC      arch/x86/boot/pm.o
  AS      arch/x86/boot/pmjump.o
  CC      arch/x86/boot/printf.o
  CC      arch/x86/boot/regs.o
  CC      arch/x86/boot/string.o
  CC      arch/x86/boot/tty.o
  CC      arch/x86/boot/video.o
  CC      arch/x86/boot/video-mode.o
  CC      arch/x86/boot/video-vga.o
  CC      arch/x86/boot/video-vesa.o
  CC      arch/x86/boot/video-bios.o
```

Once all the source code files are compiled, they are linked into `setup.elf`:

```
  LD      arch/x86/boot/setup.elf
```

or:

```
ld -m elf_x86_64 -T arch/x86/boot/setup.ld arch/x86/boot/a20.o arch/x86/boot/bioscall.o arch/x86/boot/cmdline.o arch/x86/boot/copy.o arch/x86/boot/cpu.o arch/x86/boot/cpuflags.o arch/x86/boot/cpucheck.o arch/x86/boot/early_serial_console.o arch/x86/boot/edd.o arch/x86/boot/header.o arch/x86/boot/main.o arch/x86/boot/mca.o arch/x86/boot/memory.o arch/x86/boot/pm.o arch/x86/boot/pmjump.o arch/x86/boot/printf.o arch/x86/boot/regs.o arch/x86/boot/string.o arch/x86/boot/tty.o arch/x86/boot/video.o arch/x86/boot/video-mode.o arch/x86/boot/version.o arch/x86/boot/video-vga.o arch/x86/boot/video-vesa.o arch/x86/boot/video-bios.o -o arch/x86/boot/setup.elf
```

The last two steps are the creation of `setup.bin`, which contains the compiled code from the `arch/x86/boot/*` directory:

```
objcopy -O binary arch/x86/boot/setup.elf arch/x86/boot/setup.bin
```

and the creation of `vmlinux.bin` from `vmlinux`:

```
objcopy -O binary -R .note -R .comment -S arch/x86/boot/compressed/vmlinux arch/x86/boot/vmlinux.bin
```

In the end we compile the host program [arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c), which creates our `bzImage` from `setup.bin` and `vmlinux.bin`:

```
arch/x86/boot/tools/build arch/x86/boot/setup.bin arch/x86/boot/vmlinux.bin arch/x86/boot/zoffset.h arch/x86/boot/bzImage
```

Actually, the `bzImage` is just the concatenation of `setup.bin` and `vmlinux.bin`. In the end we will see the output familiar to everyone who has ever built the Linux kernel from source:

```
Setup is 16268 bytes (padded to 16384 bytes).
System is 4704 kB
CRC 94a88f9a
Kernel: arch/x86/boot/bzImage is ready  (#5)
```

That's all.

Conclusion
================================================================================

This is the end of this part. Here we saw all the steps from the execution of the `make` command to the generation of the `bzImage`. I know the Linux kernel makefiles and the kernel build process may seem confusing at first glance, but it is not so hard. I hope this part helps you understand the process of building the Linux kernel.

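If you want to watch the freshly built image boot without touching your machine, QEMU is convenient. Without a root filesystem the kernel will panic at the end of boot, but you will still see the whole early boot log; the options shown here are just one common setup:

```
$ qemu-system-x86_64 -kernel arch/x86/boot/bzImage -append "console=ttyS0" -nographic
```
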
Links
================================================================================

* [GNU make util](https://en.wikipedia.org/wiki/Make_%28software%29)
* [Linux kernel top Makefile](https://github.com/torvalds/linux/blob/master/Makefile)
* [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler)
* [Ctags](https://en.wikipedia.org/wiki/Ctags)
* [sparse](https://en.wikipedia.org/wiki/Sparse)
* [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage)
* [uname](https://en.wikipedia.org/wiki/Uname)
* [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29)
* [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt)
* [binutils](http://www.gnu.org/software/binutils/)
* [gcc](https://gcc.gnu.org/)
* [Documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt)
* [System.map](https://en.wikipedia.org/wiki/System.map)
* [Relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29)

--------------------------------------------------------------------------------

via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled.md

译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file
From 0c6b380f27e7308dbef79d5a736f7a6efc9f71e6 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Tue, 28 Jul 2015 10:53:51 +0800
Subject: [PATCH 012/207] =?UTF-8?q?20150728-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...vity and Check Memory Usages of Browser.md | 129 ++++++++++++++++++
 ...y Using 'Explain Shell' Script in Linux.md | 118 ++++++++++++++++
 2 files changed, 247 insertions(+)
 create mode 100644 sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md
 create mode 100644 sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md

diff --git a/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md b/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md
new file mode 100644
index 0000000000..6adc5abaa6
--- /dev/null
+++ b/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md
@@ -0,0 +1,129 @@
Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser
================================================================================
Here again, I have written another post in the [Linux Tips and Tricks][1] series. As always, the objective of this post is to make you aware of those small tips and hacks that let you manage your system/server efficiently.

![Create Cdrom ISO Image and Monitor Users in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/creating-cdrom-iso-watch-users-in-linux.jpg)

Create Cdrom ISO Image and Monitor Users in Linux

In this post we will see how to create an ISO image from the contents of a CD/DVD loaded in the drive, open random man pages for learning, check the details of other logged-in users and what they are doing, and monitor the memory usage of a browser, all of it using native tools/commands without any third-party application/utility. Here we go…

### Create ISO image from a CD ###

Often we need to back up or copy the contents of a CD/DVD. 
If you are on a Linux platform, you do not need any additional software; all you need is access to a Linux console.

To create an ISO image of the files on your CD/DVD-ROM, you need two things. First, you need to find the name of your CD/DVD drive; to do so, you may choose any of the three methods below.

**1. Run the command lsblk (list block devices) from your terminal/console.**

    $ lsblk

![Find Block Devices in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Block-Devices.png)

Find Block Devices

**2. To see information about the CD-ROM, you may use a pager like less or more.**

    $ less /proc/sys/dev/cdrom/info

![Check Cdrom Information](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Cdrom-Inforamtion.png)

Check Cdrom Information

**3. You may get the same information from the [dmesg command][2] and customize the output using egrep.**

The ‘dmesg‘ command prints/controls the kernel ring buffer. The ‘egrep‘ command is used to print lines that match a pattern. The -i and --color options make egrep ignore case and highlight the matching string, respectively.

    $ dmesg | egrep -i --color 'cdrom|dvd|cd/rw|writer'

![Find Device Information](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Device-Information.png)

Find Device Information

Once you know the name of your CD/DVD drive, you can use the following command to create an ISO image of your cdrom in Linux.

    $ cat /dev/sr0 > /path/to/output/folder/iso_name.iso

Here ‘sr0‘ is the name of my CD/DVD drive. You should replace this with the name of your CD/DVD drive. This lets you create an ISO image and back up the contents of a CD/DVD without any third-party application.

![Create ISO Image of CDROM in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Create-ISO-Image-of-CDROM.png)

Create ISO Image of CDROM

### Open a man page randomly for Reading ###

If you are new to Linux and want to learn commands and switches, this tweak is for you. Put the below line of code at the end of your `~/.bashrc` file.

    /usr/bin/man $(ls /bin | shuf | head -1)

Remember to put the above one-line script in a user’s `.bashrc` file and not in the .bashrc file of root. The next time you log in, either locally or remotely using SSH, you will see a man page randomly opened for you to read. For newbies who want to learn commands and command-line switches, this will prove helpful.

Here is what I got in my terminal after logging in twice, back-to-back.

![LoadKeys Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/LoadKeys-Man-Pages.png)

LoadKeys Man Pages

![Zgrep Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/Zgrep-Man-Pages.png)

Zgrep Man Pages

### Check Activity of Logged-in Users ###

Know what other users are doing on your shared server.

In the most general case, you are either a user of a shared Linux server or its admin. If you are concerned about your server and want to check what other users are doing, you may try the command ‘w‘.

This command lets you know if someone is executing malicious code or tampering with the server, slowing it down, or anything else. ‘w‘ is the preferred way of keeping an eye on logged-on users and what they are doing.

To see logged-on users and what they’re doing, run the command ‘w’ from the terminal, preferably as root. 
    # w

![Check Linux User Activity](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Linux-User-Activity.png)

Check Linux User Activity

### Check Memory Usage by a Browser ###

These days a lot of jokes are cracked about Google Chrome and its appetite for memory. If you want to know the memory usage of a browser, you can list the name of each process, its PID and its memory usage. To check the memory usage of a browser, just enter “about:memory” in the address bar, without the quotes.

I have tested it on the Google Chrome and Mozilla Firefox web browsers. If you check it on any other browser and it works well, you may let us know in the comments below. You can also kill a browser process exactly as you would any other Linux process/service from the terminal.

In Google Chrome, type `about:memory` in the address bar; you should get something similar to the image below.

![Check Chrome Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Chrome-Memory-Usage.png)

Check Chrome Memory Usage

In Mozilla Firefox, type `about:memory` in the address bar; you should get something similar to the image below.

![Check Firefox Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firefox-Memory-Usage.png)

Check Firefox Memory Usage

You may select any of these options, if you understand what it does. To check memory usage, click the leftmost option, ‘Measure‘.

![Firefox Main Process](http://www.tecmint.com/wp-content/uploads/2015/07/Firefox-Main-Processes.png)

Firefox Main Process

It shows a tree-like view of the browser’s per-process memory usage.

That’s all for now. I hope all the above tips will help you at some point. If you have one (or more) lesser-known tips/tricks that help Linux users manage their Linux system/server more efficiently, you may like to share them with us.

I’ll be back with another post soon; till then, stay tuned and connected to TecMint. Provide us with your valuable feedback in the comments below. Like and share us and help us spread.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/creating-cdrom-iso-image-watch-user-activity-in-linux/

作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/tag/linux-tricks/
[2]:http://www.tecmint.com/dmesg-commands/

diff --git a/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md
new file mode 100644
index 0000000000..fe76f160cb
--- /dev/null
+++ b/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md
@@ -0,0 +1,118 @@
Understanding Shell Commands Easily Using “Explain Shell” Script in Linux
================================================================================
While working on the Linux platform, all of us need help with shell commands at some point. Although built-in help like man pages and the whatis command is useful, the output of man pages is too lengthy, and until one has some experience with Linux, it is very difficult to get any help from the massive man pages. 
+ +![Explain Shell Commands in Linux Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Explain-Shell-Commands-in-Linux-Shell.jpeg) + +Explain Shell Commands in Linux Shell + +There are third-party application like ‘cheat‘, which we have covered here “[Commandline Cheat Sheet for Linux Users][1]. Although Cheat is an exceptionally good application which shows help on shell command even when computer is not connected to Internet, it shows help on predefined commands only. + +There is a small piece of code written by Jackson which is able to explain shell commands within the bash shell very effectively and guess what the best part is you don’t need to install any third party package. He named the file containing this piece of code as `'explain.sh'`. + +#### Features of Explain Utility #### + +- Easy Code Embedding. +- No third-party utility needed to be installed. +- Output just enough information in course of explanation. +- Requires internet connection to work. +- Pure command-line utility. +- Able to explain most of the shell commands in bash shell. +- No root Account involvement required. + +**Prerequisite** + +The only requirement is `'curl'` package. In most of the today’s latest Linux distributions, curl package comes pre-installed, if not you can install it using package manager as shown below. + + # apt-get install curl [On Debian systems] + # yum install curl [On CentOS systems] + +### Installation of explain.sh Utility in Linux ### + +We have to insert the below piece of code as it is in the `~/.bashrc` file. The code should be inserted for each user and each `.bashrc` file. It is suggested to insert the code to the user’s .bashrc file only and not in the .bashrc of root user. + +Notice the first line of code that starts with hash `(#)` is optional and added just to differentiate rest of the codes of .bashrc. + +# explain.sh marks the beginning of the codes, we are inserting in .bashrc file at the bottom of this file. + + # explain.sh begins + explain () { + if [ "$#" -eq 0 ]; then + while read -p "Command: " cmd; do + curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$cmd" + done + echo "Bye!" + elif [ "$#" -eq 1 ]; then + curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$1" + else + echo "Usage" + echo "explain interactive mode." + echo "explain 'cmd -o | ...' one quoted command to explain it." + fi + } + +### Working of explain.sh Utility ### + +After inserting the code and saving it, you must logout of the current session and login back to make the changes taken into effect. Every thing is taken care of by the ‘curl’ command which transfer the input command and flag that need explanation to the mankier server and then print just necessary information to the Linux command-line. Not to mention to use this utility you must be connected to internet always. + +Let’s test few examples of command which I don’t know the meaning with explain.sh script. + +**1. I forgot what ‘du -h‘ does. All I need to do is:** + + $ explain 'du -h' + +![Get Help on du Command](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-du-Command.png) + +Get Help on du Command + +**2. If you forgot what ‘tar -zxvf‘ does, you may simply do:** + + $ explain 'tar -zxvf' + +![Tar Command Help](http://www.tecmint.com/wp-content/uploads/2015/07/Tar-Command-Help.png) + +Tar Command Help + +**3. 
**3. One of my friends often confuses the ‘whatis‘ and ‘whereis‘ commands, so I advised him.**

Go to interactive mode by simply typing the explain command in the terminal.

    $ explain

and then type the commands one after another to see what they do in one window, as:

    Command: whatis
    Command: whereis

![Whatis Whereis Commands Help](http://www.tecmint.com/wp-content/uploads/2015/07/Whatis-Whereis-Commands-Help.png)

Whatis Whereis Commands Help

To exit interactive mode, he just needs to press Ctrl + C.

**4. You can ask it to explain more than one command chained by a pipeline.**

    $ explain 'ls -l | grep -i Desktop'

![Get Help on Multiple Commands](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-Multiple-Commands.png)

Get Help on Multiple Commands

Similarly, you can ask your shell to explain any shell command. All you need is a working Internet connection. The output is generated based on the explanation fetched from the server, and hence the output is not customizable.

For me this utility is really helpful, and it has earned a place in my .bashrc. Let me know your thoughts on this project. How can it be useful for you? Is the explanation satisfactory?

Provide us with your valuable feedback in the comments below. Like and share us and help us spread.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/explain-shell-commands-in-the-linux-shell/

作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/cheat-command-line-cheat-sheet-for-linux-users/

From beebe0747c11b66ca82060586fb19a68ec826dbb Mon Sep 17 00:00:00 2001
From: Jindong Huang
Date: Tue, 28 Jul 2015 12:40:29 +0800
Subject: =?UTF-8?q?=E3=80=90Translating=20by=20dingdongnig?=
 =?UTF-8?q?etou=E3=80=9120150728=20Understanding=20Shell=20Commands=20Easi?=
 =?UTF-8?q?ly=20Using=20'Explain=20Shell'=20Script=20in=20Linux.md?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

【Translating by dingdongnigetou】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md
---
 ... Commands Easily Using 'Explain Shell' Script in Linux.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md
index fe76f160cb..ab7572cd7a 100644
--- a/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md
+++ b/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md
@@ -1,3 +1,6 @@
+
+Translating by dingdongnigetou
+
 Understanding Shell Commands Easily Using “Explain Shell” Script in Linux
 ================================================================================
 While working on the Linux platform, all of us need help with shell commands at some point. Although built-in help like man pages and the whatis command is useful, the output of man pages is too lengthy, and until one has some experience with Linux, it is very difficult to get any help from the massive man pages. 
The output of the whatis command is rarely more than one line, which is not sufficient for newbies.
@@ -115,4 +118,4 @@ via: http://www.tecmint.com/explain-shell-commands-in-the-linux-shell/
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
 
 [a]:http://www.tecmint.com/author/avishek/
-[1]:http://www.tecmint.com/cheat-command-line-cheat-sheet-for-linux-users/ \ No newline at end of file
+[1]:http://www.tecmint.com/cheat-command-line-cheat-sheet-for-linux-users/

From 77d5ab6c1a57922d59f0f878d011269099f212a4 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Tue, 28 Jul 2015 15:27:20 +0800
Subject: [PATCH 014/207] Update 20150728 Tips to Create ISO from CD, Watch
 User Activity and Check Memory Usages of Browser.md

---
 ..., Watch User Activity and Check Memory Usages of Browser.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md b/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md
index 6adc5abaa6..2219e5e25e 100644
--- a/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md
+++ b/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md
@@ -1,3 +1,4 @@
+translation by strugglingyouth
 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser
 ================================================================================
 Here again, I have written another post in the [Linux Tips and Tricks][1] series. As always, the objective of this post is to make you aware of those small tips and hacks that let you manage your system/server efficiently.
@@ -126,4 +127,4 @@ via: http://www.tecmint.com/creating-cdrom-iso-image-watch-user-activity-in-linu
 
 [a]:http://www.tecmint.com/author/avishek/
 [1]:http://www.tecmint.com/tag/linux-tricks/
-[2]:http://www.tecmint.com/dmesg-commands/ \ No newline at end of file
+[2]:http://www.tecmint.com/dmesg-commands/

From 8b32002911ade88fd15d5e51d103fd813e781ca0 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Tue, 28 Jul 2015 15:48:00 +0800
Subject: [PATCH 015/207] =?UTF-8?q?20150728-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...mmand installed for 7-zip archive files.md | 49 +++++++
 ... Kernel for Improved System Performance.md | 127 ++++++++++++++++++
 2 files changed, 176 insertions(+)
 create mode 100644 sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md
 create mode 100644 sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md

diff --git a/sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md b/sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md
new file mode 100644
index 0000000000..8c9b781117
--- /dev/null
+++ b/sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md
@@ -0,0 +1,49 @@
How To Fix: There is no command installed for 7-zip archive files
================================================================================
### Problem ###

I was trying to install the Emerald icon theme in Ubuntu, and the theme came in a .7z archive. As always, I tried to extract it in the GUI, using right click and “extract here”. 
Instead of extracting the file, Ubuntu 15.04 threw an error which read:

> Could not open this file
> 
> There is no command installed for 7-zip archive files. Do you want to search for a command to open this file?

The error looked like this:

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Install_7zip_ubuntu_1.png)

### Reason ###

The reason is quite evident from the error message itself: the 7Z program, better known as [7-zip][1], is not installed, and hence 7Z compressed files cannot be extracted. This also hints that Ubuntu doesn’t support 7-Zip files by default.

### Solution: Install 7zip in Ubuntu ###

The solution is quite simple as well: install the 7-Zip package in Ubuntu. Now you might wonder how to install 7Zip in Ubuntu. Well, if you click on “Search Command” in the previous error dialogue box, it will look for the available p7zip package. Just click on “Install” here:

![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Install_7zip_ubuntu.png)

### Alternative: Install 7zip in terminal ###

If you prefer the terminal, you can install 7zip using the following command:

    sudo apt-get install p7zip-full

Note: You’ll find three 7zip packages in Ubuntu: p7zip, p7zip-full and p7zip-rar. The difference between p7zip and p7zip-full is that p7zip is a lighter version providing support only for .7z and .7za files, while the full version provides support for more 7z compression algorithms (for audio files, etc.). p7zip-rar provides support for .rar files along with 7z.

In fact, a similar error can be encountered with [RAR files in Ubuntu][2]. The solution is the same: install the correct program.

I hope this quick post helped you solve the mystery of **how to open 7zip in Ubuntu 14.04**. Any questions or suggestions are always welcome.

--------------------------------------------------------------------------------

via: http://itsfoss.com/fix-there-is-no-command-installed-for-7-zip-archive-files/

作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://www.7-zip.org/
[2]:http://itsfoss.com/fix-there-is-no-command-installed-for-rar-archive-files/

diff --git a/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md b/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md
new file mode 100644
index 0000000000..4e238de09d
--- /dev/null
+++ b/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md
@@ -0,0 +1,127 @@
How to Update Linux Kernel for Improved System Performance
================================================================================
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/update-linux-kernel-644x373.jpg?2c3c1f)

The rate of development for [the Linux kernel][1] is unprecedented, with a new major release approximately every two to three months. Each release offers several new features and improvements that a lot of people could take advantage of to make their computing experience faster, more efficient, or better in other ways.

The problem, however, is that you usually can’t take advantage of these new kernel releases as soon as they come out; you have to wait until your distribution comes out with a new release that packs the newer kernel with it. 
I hope this quick post helped you to solve the mystery of **how to open 7zip files in Ubuntu**. Any questions or suggestions are always welcome.

--------------------------------------------------------------------------------

via: http://itsfoss.com/fix-there-is-no-command-installed-for-7-zip-archive-files/

作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://itsfoss.com/author/abhishek/
[1]:http://www.7-zip.org/
[2]:http://itsfoss.com/fix-there-is-no-command-installed-for-rar-archive-files/ \ No newline at end of file
diff --git a/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md b/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md
new file mode 100644
index 0000000000..4e238de09d
--- /dev/null
+++ b/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md
@@ -0,0 +1,127 @@
+How to Update Linux Kernel for Improved System Performance
================================================================================
![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/update-linux-kernel-644x373.jpg?2c3c1f)

The rate of development of [the Linux kernel][1] is unprecedented, with a new major release approximately every two to three months. Each release offers several new features and improvements that a lot of people could take advantage of to make their computing experience faster, more efficient, or better in other ways.

The problem, however, is that you usually can’t take advantage of these new kernel releases as soon as they come out — you have to wait until your distribution comes out with a new release that packs a newer kernel with it.

We’ve previously laid out [the benefits of regularly updating your kernel][2], and you don’t have to wait to get your hands on them. We’ll show you how.

> Disclaimer: As some of our literature may have mentioned before, updating your kernel does carry a (small) risk of breaking your system. If this is the case, it’s usually easy to pick an older kernel at boot time that works, but something may always go wrong. Therefore, we’re not responsible for any damage to your system — use at your own risk!

### Prep Work ###

![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/linux_kernel_arch.jpg?2c3c1f)

To update your kernel, you’ll first need to determine whether you’re using a 32-bit or 64-bit system. Open up a terminal window and run

    uname -a

Then check to see if the output says x86_64 or i686. If it’s x86_64, then you’re running the 64-bit version; otherwise, you’re running the 32-bit version. Remember this, because it will be important.

![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/kernel_latest_version.jpg?2c3c1f)

Next, visit the [official Linux kernel website][3]. This will tell you what the current stable version of the kernel is. You can try out release candidates if you’d like, but they are a lot less tested than the stable releases. Stick with the stable kernel unless you are certain you need a release candidate version.

### Ubuntu Instructions ###

It’s quite easy for Ubuntu and Ubuntu-derivative users to update their kernel, thanks to the Ubuntu Mainline Kernel PPA. Although it’s officially called a PPA, you cannot use it like other PPAs by adding it to your software sources list and expecting it to automatically update the kernel for you. Instead, it’s simply a webpage you navigate through to download the kernel you want.

![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_new_kernels.jpg?2c3c1f)

Now, visit the [kernel PPA webpage][4] and scroll all the way to the bottom. The absolute bottom of the list will probably contain some release candidate versions (which you can recognize by the “rc” in the name), but just above them should be the latest stable kernel (to make this easier to explain, at the time of writing the stable version was 4.1.2). Click on that, and you’ll be presented with several options. You’ll need to grab three files and save them in their own folder (within the Downloads folder if you’d like) so that they’re isolated from all other files:

- The “generic” header file for your architecture (in my case, 64-bit or “amd64”)
- The header file in the middle that has “all” towards the end of the filename
- The “generic” kernel file for your architecture (again, I would pick “amd64”, but if you use 32-bit you’ll need “i686”)

You’ll notice that there are also “lowlatency” files available to download, but it’s fine to ignore these. They are relatively unstable and are only made available for people who need their low-latency benefits when the generic files don’t suffice for tasks such as audio recording. Again, the recommendation is to always use generic first and only try lowlatency if your performance isn’t good enough for certain tasks. No, gaming or Internet browsing aren’t excuses to try lowlatency.

![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_install_kernel.jpg?2c3c1f)

You put these files into their own folder, right?
Now open up the Terminal, use the

    cd

command to go to your newly-created folder, such as

    cd /home/user/Downloads/Kernel

and then run:

    sudo dpkg -i *.deb

This command marks all .deb files within the folder as “to be installed” and then performs the installation. This is the recommended way to install these files, because otherwise it’s easy to pick one file to install and have it complain about dependency issues. This approach avoids that problem. If you’re not sure what cd or sudo are, get a quick crash course on [essential Linux commands][5].

Once the installation is complete, **Restart** your system and you should be running the just-installed kernel! You can check this by running uname -a in the Terminal and checking the output.

### Fedora Instructions ###

If you use Fedora or one of its derivatives, the process is very similar to Ubuntu. There’s just a different location to grab different files, and a different command to install them.

![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/fedora_new_kernels.jpg?2c3c1f)

View the list of the most [recent kernel builds for Fedora][6]. Pick the latest stable version out of the list, and then scroll down to either the i686 or x86_64 section, depending on your system’s architecture. In this section, you’ll need to grab the following files and save them in their own folder (such as “Kernel” within your Downloads folder, as an example):

- kernel
- kernel-core
- kernel-headers
- kernel-modules
- kernel-modules-extra
- kernel-tools
- perf and python-perf (optional)

If your system is i686 (32-bit) and you have 4GB of RAM or more, you’ll need to grab the PAE version of all of these files where available. PAE is an address extension technique used on 32-bit systems to allow them to use more than 3GB of RAM.

Now, use the

    cd

command to go to that folder, such as

    cd /home/user/Downloads/Kernel

and then run the following command to install all the files:

    yum --nogpgcheck localinstall *.rpm

Finally, **Restart** your computer and you should be running a new kernel!

### Using Rawhide ###

Alternatively, Fedora users can also simply [switch to Rawhide][7] and it’ll automatically update every package to the latest version, including the kernel. However, Rawhide is known to break quite often (especially early on in the development cycle) and should **not** be used on a system that you need to rely on.

### Arch Instructions ###

[Arch users][8] should always have the latest and greatest stable kernel available (or one pretty close to it). If you want to get even closer to the latest-released stable kernel, you can enable the testing repo, which will give you access to major new releases roughly two to four weeks early.

To do this, open the file located at

    /etc/pacman.conf

with sudo permissions in [your favorite terminal text editor][9], and then uncomment (delete the pound symbols from the front of each line) the three lines associated with testing. If you have the multilib repository enabled, do the same for the multilib-testing repository. See [this Arch Linux wiki page][10] if you need more information.
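For reference, once uncommented, the testing entries in /etc/pacman.conf look roughly like this (a sketch based on the stock Arch Linux configuration; check the wiki page linked above for specifics):

    [testing]
    Include = /etc/pacman.d/mirrorlist

    [multilib-testing]
    Include = /etc/pacman.d/mirrorlist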
Upgrading your kernel isn’t easy (intentionally so), but it can give you a lot of benefits. So long as your new kernel didn’t break anything, you can now enjoy improved performance, better efficiency, support for more hardware, and potential new features. Especially if you’re running relatively new hardware, upgrading the kernel can really help out.
+

**How has upgrading the kernel helped you? Do you think your favorite distribution’s policy on kernel releases is what it should be?** Let us know in the comments!

--------------------------------------------------------------------------------

via: http://www.makeuseof.com/tag/update-linux-kernel-improved-system-performance/

作者:[Danny Stieben][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.makeuseof.com/tag/author/danny/
[1]:http://www.makeuseof.com/tag/linux-kernel-explanation-laymans-terms/
[2]:http://www.makeuseof.com/tag/5-reasons-update-kernel-linux/
[3]:http://www.kernel.org/
[4]:http://kernel.ubuntu.com/~kernel-ppa/mainline/
[5]:http://www.makeuseof.com/tag/an-a-z-of-linux-40-essential-commands-you-should-know/
[6]:http://koji.fedoraproject.org/koji/packageinfo?packageID=8
[7]:http://www.makeuseof.com/tag/bleeding-edge-linux-fedora-rawhide/
[8]:http://www.makeuseof.com/tag/arch-linux-letting-you-build-your-linux-system-from-scratch/
[9]:http://www.makeuseof.com/tag/nano-vs-vim-terminal-text-editors-compared/
[10]:https://wiki.archlinux.org/index.php/Pacman#Repositories \ No newline at end of file
From b91b501850e195c679f207773b0a7b515ef87239 Mon Sep 17 00:00:00 2001
From: wxy
Date: Tue, 28 Jul 2015 20:24:55 +0800
Subject: [PATCH 016/207] =?UTF-8?q?=E5=9B=9E=E6=94=B6=E5=92=8C=E6=B8=85?=
 =?UTF-8?q?=E9=99=A4=E6=96=87=E7=AB=A0?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

回收:@barney-ro @2q1w2007 @zhangboyue
补充删除:@wwy-hust
---
 ...er the enterprise and other predictions.md |  74 ------
 ...urious Case of the Disappearing Distros.md | 120 ----------
 ...ff -u--What's New in Kernel Development.md |  36 ---
 ...1 Did this JavaScript break the console.md |  91 --------
 ... Revealed--The best and worst of Docker.md |  66 ------
 ...erver behind NAT via reverse SSH tunnel.md |   1 -
 sources/tech/20150522 Analyzing Linux Logs.md |   1 -
 ...Experience on Linux 'iptables' Firewall.md | 209 ------------------
 8 files changed, 598 deletions(-)
 delete mode 100644 sources/talk/20141219 2015 will be the year Linux takes over the enterprise and other predictions.md
 delete mode 100644 sources/talk/20141224 The Curious Case of the Disappearing Distros.md
 delete mode 100644 sources/talk/20150112 diff -u--What's New in Kernel Development.md
 delete mode 100644 sources/talk/20150121 Did this JavaScript break the console.md
 delete mode 100644 sources/talk/20150320 Revealed--The best and worst of Docker.md
 delete mode 100644 sources/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md

diff --git a/sources/talk/20141219 2015 will be the year Linux takes over the enterprise and other predictions.md b/sources/talk/20141219 2015 will be the year Linux takes over the enterprise and other predictions.md
deleted file mode 100644
index 0d2b26cc98..0000000000
--- a/sources/talk/20141219 2015 will be the year Linux takes over the enterprise and other predictions.md
+++ /dev/null
@@ -1,74 +0,0 @@
-2015 will be the year Linux takes over the enterprise (and other predictions)
-================================================================================
-> Jack Wallen removes his rose-colored glasses and peers into the crystal ball to predict what 2015 has in store for Linux.
- -![](http://tr1.cbsistatic.com/hub/i/r/2014/12/15/f79d21fe-f1d1-416d-ba22-7e757dfcdb31/resize/620x485/52a10d26d34c3fc4201c5daa8ff277ff/linux2015hero.jpg) - -The crystal ball has been vague and fuzzy for quite some time. Every pundit and voice has opined on what the upcoming year will mean to whatever topic it is they hold dear to their heart. In my case, we're talking Linux and open source. - -In previous years, I'd don the rose-colored glasses and make predictions that would shine a fantastic light over the Linux landscape and proclaim 20** will be the year of Linux on the _____ (name your platform). Many times, those predictions were wrong, and Linux would wind up grinding on in the background. - -This coming year, however, there are some fairly bold predictions to be made, some of which are sure things. Read on and see if you agree. - -### Linux takes over big data ### - -This should come as no surprise, considering the advancements Linux and open source has made over the previous few years. With the help of SuSE, Red Hat, and SAP Hana, Linux will hold powerful sway over big data in 2015. In-memory computing and live kernel patching will be the thing that catapults big data into realms of uptime and reliability never before known. SuSE will lead this charge like a warrior rushing into a battle it cannot possibly lose. - -This rise of Linux in the world of big data will have serious trickle down over the rest of the business world. We already know how fond enterprise businesses are of Linux and big data. What we don't know is how this relationship will alter the course of Linux with regards to the rest of the business world. - -My prediction is that the success of Linux with big data will skyrocket the popularity of Linux throughout the business landscape. More contracts for SuSE and Red Hat will equate to more deployments of Linux servers that handle more tasks within the business world. This will especially apply to the cloud, where OpenStack should easily become an overwhelming leader. - -As the end of 2015 draws to a close, Linux will continue its take over of more backend services, which may include the likes of collaboration servers, security, and much more. - -### Smart machines ### - -Linux is already leading the trend for making homes and autos more intelligent. With improvements in the likes of Nest (which currently uses an embedded Linux), the open source platform is poised to take over your machines. Because 2015 should see a massive rise in smart machines, it goes without saying that Linux will be a huge part of that growth. I firmly believe more homes and businesses will take advantage of such smart controls, and that will lead to more innovations (all of which will be built on Linux). - -One of the issues facing Nest, however, is that it was purchased by Google. What does this mean for the thermostat controller? Will Google continue using the Linux platform -- or will it opt to scrap that in favor of Android? Of course, a switch would set the Nest platform back a bit. - -The upcoming year will see Linux lead the rise in popularity of home automation. Wink, Iris, Q Station, Staples Connect, and more (similar) systems will help to bridge Linux and home users together. - -### The desktop ### - -The big question, as always, is one that tends to hang over the heads of the Linux community like a dark cloud. That question is in relation to the desktop. Unfortunately, my predictions here aren't nearly as positive. 
I believe that the year 2015 will remain quite stagnant for Linux on the desktop. That complacency will center around Ubuntu. - -As much as I love Ubuntu (and the Unity desktop), this particular distribution will continue to drag the Linux desktop down. Why? - -Convergence... or the lack thereof. - -Canonical has been so headstrong about converging the desktop and mobile experience that they are neglecting the current state of the desktop. The last two releases of Ubuntu (one being an LTS release) have been stagnant (at best). The past year saw two of the most unexciting releases of Ubuntu that I can recall. The reason? Because the developers of Ubuntu are desperately trying to make Unity 8/Mir and the ubiquitous Ubuntu Phone a reality. The vaporware that is the Ubuntu Phone will continue on through 2015, and Unity 8/Mir may or may not be released. - -When the new iteration of the Ubuntu Unity desktop is finally released, it will suffer a serious setback, because there will be so little hardware available to truly show it off. [System76][1] will sell their outstanding [Sable Touch][2], which will probably become the flagship system for Unity 8/Mir. As for the Ubuntu Phone? How many reports have you read that proclaimed "Ubuntu Phone will ship this year"? - -I'm now going on the record to predict that the Ubuntu Phone will not ship in 2015. Why? Canonical created partnerships with two OEMs over a year ago. Those partnerships have yet to produce a single shippable product. The closest thing to a shippable product is the Meizu MX4 phone. The "Pro" version of that phone was supposed to have a formal launch of Sept 25. Like everything associated with the Ubuntu Phone, it didn't happen. - -Unless Canonical stops putting all of its eggs in one vaporware basket, desktop Linux will take a major hit in 2015. Ubuntu needs to release something major -- something to make heads turn -- otherwise, 2015 will be just another year where we all look back and think "we could have done something special." - -Outside of Ubuntu, I do believe there are some outside chances that Linux could still make some noise on the desktop. I think two distributions, in particular, will bring something rather special to the table: - -- [Evolve OS][3] -- a ChromeOS-like Linux distribution -- [Quantum OS][4] -- a Linux distribution that uses Android's Material Design specs - -Both of these projects are quite exciting and offer unique, user-friendly takes on the Linux desktop. This is quickly become a necessity in a landscape being dragged down by out-of-date design standards (think the likes of Cinnamon, Mate, XFCE, LXCE -- all desperately clinging to the past). - -This is not to say that Linux on the desktop doesn't have a chance in 2015. It does. In order to grasp the reins of that chance, it will have to move beyond the past and drop the anchors that prevent it from moving out to deeper, more viable waters. - -Linux stands to make more waves in 2015 than it has in a very long time. From enterprise to home automation -- the world could be the oyster that Linux uses as a springboard to the desktop and beyond. - -What are your predictions for Linux and open source in 2015? Share your thoughts in the discussion thread below. 
- --------------------------------------------------------------------------------- - -via: http://www.techrepublic.com/article/2015-will-be-the-year-linux-takes-over-the-enterprise-and-other-predictions/ - -作者:[Jack Wallen][a] -译者:[barney-ro](https://github.com/barney-ro) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.techrepublic.com/search/?a=jack+wallen -[1]:https://system76.com/ -[2]:https://system76.com/desktops/sable -[3]:https://evolve-os.com/ -[4]:http://quantum-os.github.io/ \ No newline at end of file diff --git a/sources/talk/20141224 The Curious Case of the Disappearing Distros.md b/sources/talk/20141224 The Curious Case of the Disappearing Distros.md deleted file mode 100644 index b9fc7875d7..0000000000 --- a/sources/talk/20141224 The Curious Case of the Disappearing Distros.md +++ /dev/null @@ -1,120 +0,0 @@ -The Curious Case of the Disappearing Distros -================================================================================ -![](http://www.linuxinsider.com/ai/828896/linux-distros.jpg) - -"Linux is a big game now, with billions of dollars of profit, and it's the best thing since sliced bread, but corporations are taking control, and slowly but systematically, community distros are being killed," said Google+ blogger Alessandro Ebersol. "Linux is slowly becoming just like BSD, where companies use and abuse it and give very little in return." - -Well the holidays are pretty much upon us at last here in the Linux blogosphere, and there's nowhere left to hide. The next two weeks or so promise little more than a blur of forced social occasions and too-large meals, punctuated only -- for the luckier ones among us -- by occasional respite down at the Broken Windows Lounge. - -Perhaps that's why Linux bloggers seized with such glee upon the good old-fashioned mystery that came up recently -- delivered in the nick of time, as if on cue. - -"Why is the Number of Linux Distros Declining?" is the [question][1] posed over at Datamation, and it's just the distraction so many FOSS fans have been needing. - -"Until about 2011, the number of active distributions slowly increased by a few each year," wrote author Bruce Byfield. "By contrast, the last three years have seen a 12 percent decline -- a decrease too high to be likely to be coincidence. - -"So what's happening?" Byfield wondered. - -It would be difficult to imagine a more thought-provoking question with which to spend the Northern hemisphere's shortest days. - -### 'There Are Too Many Distros' ### - -![](http://www.linuxinsider.com/images/article_images/linuxgirl_bg_pinkswirl_150x245.jpg) - -"That's an easy question," began blogger [Robert Pogson][2]. "There are too many distros." - -After all, "if a fanatic like me can enjoy life having sampled only a dozen distros, why have any more?" Pogson explained. "If someone has a concept different from the dozen or so most common distros, that concept can likely be demonstrated by documenting the tweaks and package-lists and, perhaps, some code." - -Trying to compete with some 40,000 package repositories like Debian's, however, is "just silly," he said. - -"No startup can compete with such a distro," Pogson asserted. "Why try? Just use it to do what you want and tell the world about it." - -### 'I Don't Distro-Hop Anymore' ### - -The major existing distros are doing a good job, so "we don't need so many derivative works," Google+ blogger Kevin O'Brien agreed. 
- -"I know I don't 'distro-hop' anymore, and my focus is on using my computer to get work done," O'Brien added. - -"If my apps run fine every day, that is all that I need," he said. "Right now I am sticking with Ubuntu LTS 14.04, and probably will until 2016." - -### 'The More Distros, the Better' ### - -It stands to reason that "as distros get better, there will be less reasons to roll your own," concurred [Linux Rants][3] blogger Mike Stone. - -"I think the modern Linux distros cover the bases of a larger portion of the Linux-using crowd, so fewer and fewer people are starting their own distribution to compensate for something that the others aren't satisfying," he explained. "Add to that the fact that corporations are more heavily involved in the development of Linux now than they ever have been, and they're going to focus their resources." - -So, the decline isn't necessarily a bad thing, as it only points to the strength of the current offerings, he asserted. - -At the same time, "I do think there are some negative consequences as well," Stone added. "Variation in the distros is a way that Linux grows and evolves, and with a narrower field, we're seeing less opportunity to put new ideas out there. In my mind, the more distros, the better -- hopefully the trend reverses soon." - -### 'I Hope Some Diversity Survives' ### - -Indeed, "the era of novelty and experimentation is over," Google+ blogger Gonzalo Velasco C. told Linux Girl. - -"Linux is 20+ years old and got professional," he noted. "There is always room for experimentation, but the top 20 are here since more than a decade ago. - -"Godspeed GNU/Linux," he added. "I hope some diversity survives -- especially distros without Systemd; on the other hand, some standards are reached through consensus." - -### A Question of Package Managers ### - -There are two trends at work here, suggested consultant and [Slashdot][4] blogger Gerhard Mack. - -First, "there are fewer reasons to start a new distro," he said. "The basic nuts and bolts are mostly done, installation is pretty easy across most distros, and it's not difficult on most hardware to get a working system without having to resort to using the command line." - -The second thing is that "we are seeing a reduction of distros with inferior package managers," Mack suggested. "It is clear that .deb-based distros had fewer losses and ended up with a larger overall share." - -### Survival of the Fittest ### - -It's like survival of the fittest, suggested consultant Rodolfo Saenz, who is certified in Linux, IBM Tivoli Storage Manager and Microsoft Active Directory. - -"I prefer to see a strong Linux with less distros," Saenz added. "Too many distros dilutes development efforts and can confuse potential future users." - -Fewer distros, on the other hand, "focuses development efforts into the stronger distros and also attracts new potential users with clear choices for their needs," he said. - -### All About the Money ### - -Google+ blogger Alessandro Ebersol also saw survival of the fittest at play, but he took a darker view. - -"Linux is a big game now, with billions of dollars of profit, and it's the best thing since sliced bread," Ebersol began. "But corporations are taking control, and slowly but systematically, community distros are being killed." - -It's difficult for community distros to keep pace with the ever-changing field, and cash is a necessity, he conceded. 
- -Still, "Linux is slowly becoming just like BSD, where companies use and abuse it and give very little in return," Ebersol said. "It saddens me, but GNU/Linux's best days were 10 years ago, circa 2002 to 2004. Now, it's the survival of the fittest -- and of course, the ones with more money will prevail." - -### 'Fewer Devs Care' ### - -SoylentNews blogger hairyfeet focused on today's altered computing landscape. - -"The reason there are fewer distros is simple: With everybody moving to the Google Playwall of Android, and Windows 10 looking to be the next XP, fewer devs care," hairyfeet said. - -"Why should they?" he went on. "The desktop wars are over, MSFT won, and the mobile wars are gonna be proprietary Google, proprietary Apple and proprietary MSFT. The money is in apps and services, and with a slow economy, there just isn't time for pulling a Taco Bell and rerolling yet another distro. - -"For the few that care about Linux desktops you have Ubuntu, Mint and Cent, and that is plenty," hairyfeet said. - -### 'No Less Diversity' ### - -Last but not least, Chris Travers, a [blogger][5] who works on the [LedgerSMB][6] project, took an optimistic view. - -"Ever since I have been around Linux, there have been a few main families -- [SuSE][7], [Red Hat][8], Debian, Gentoo, Slackware -- and a number of forks of these," Travers said. "The number of major families of distros has been declining for some time -- Mandrake and Connectiva merging, for example, Caldera disappearing -- but each of these families is ending up with fewer members as well. - -"I think this is a good thing," he concluded. - -"The big community distros -- Debian, Slackware, Gentoo, Fedora -- are going strong and picking up a lot of the niche users that other distros catered to," he pointed out. "Many of these distros are making it easier to come up with customized variants for niche markets. So what you have is a greater connectedness within the big distros, and no less diversity." - --------------------------------------------------------------------------------- - -via: http://www.linuxinsider.com/story/The-Curious-Case-of-the-Disappearing-Distros-81518.html - -作者:Katherine Noyes -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:http://www.datamation.com/open-source/why-is-the-number-of-linux-distros-declining.html -[2]:http://mrpogson.com/ -[3]:http://linuxrants.com/ -[4]:http://slashdot.org/ -[5]:http://ledgersmbdev.blogspot.com/ -[6]:http://www.ledgersmb.org/ -[7]:http://www.novell.com/linux -[8]:http://www.redhat.com/ diff --git a/sources/talk/20150112 diff -u--What's New in Kernel Development.md b/sources/talk/20150112 diff -u--What's New in Kernel Development.md deleted file mode 100644 index 2e6f8e3480..0000000000 --- a/sources/talk/20150112 diff -u--What's New in Kernel Development.md +++ /dev/null @@ -1,36 +0,0 @@ -diff -u: What's New in Kernel Development -================================================================================ -**David Drysdale** wanted to add Capsicum security features to Linux after he noticed that FreeBSD already had Capsicum support. Capsicum defines fine-grained security privileges, not unlike filesystem capabilities. But as David discovered, Capsicum also has some controversy surrounding it. 
- -Capsicum has been around for a while and was described in a USENIX paper in 2010: [http://www.cl.cam.ac.uk/research/security/capsicum/papers/2010usenix-security-capsicum-website.pdf][1]. - -Part of the controversy is just because of the similarity with capabilities. As Eric Biderman pointed out during the discussion, it would be possible to implement features approaching Capsicum's as an extension of capabilities, but implementing Capsicum directly would involve creating a whole new (and extensive) abstraction layer in the kernel. Although David argued that capabilities couldn't actually be extended far enough to match Capsicum's fine-grained security controls. - -Capsicum also was controversial within its own developer community. For example, as Eric described, it lacked a specification for how to revoke privileges. And, David pointed out that this was because the community couldn't agree on how that could best be done. David quoted an e-mail sent by Ben Laurie to the cl-capsicum-discuss mailing list in 2011, where Ben said, "It would require additional book-keeping to find and revoke outstanding capabilities, which requires knowing how to reach capabilities, and then whether they are derived from the capability being revoked. It also requires an authorization model for revocation. The former two points mean additional overhead in terms of data structure operations and synchronisation." - -Given the ongoing controversy within the Capsicum developer community and the corresponding lack of specification of key features, and given the existence of capabilities that already perform a similar function in the kernel and the invasiveness of Capsicum patches, Eric was opposed to David implementing Capsicum in Linux. - -But, given the fact that capabilities are much coarser-grained than Capsicum's security features, to the point that capabilities can't really be extended far enough to mimic Capsicum's features, and given that FreeBSD already has Capsicum implemented in its kernel, showing that it can be done and that people might want it, it seems there will remain a lot of folks interested in getting Capsicum into the Linux kernel. - -Sometimes it's unclear whether there's a bug in the code or just a bug in the written specification. Henrique de Moraes Holschuh noticed that the Intel Software Developer Manual (vol. 3A, section 9.11.6) said quite clearly that microcode updates required 16-byte alignment for the P6 family of CPUs, the Pentium 4 and the Xeon. But, the code in the kernel's microcode driver didn't enforce that alignment. - -In fact, Henrique's investigation uncovered the fact that some Intel chips, like the Xeon X5550 and the second-generation i5 chips, needed only 4-byte alignment in practice, and not 16. However, to conform to the documented specification, he suggested fixing the kernel code to match the spec. - -Borislav Petkov objected to this. He said Henrique was looking for problems where there weren't any. He said that Henrique simply had discovered a bug in Intel's documentation, because the alignment issue clearly wasn't a problem in the real world. He suggested alerting the Intel folks to the documentation problem and moving on. As he put it, "If the processor accepts the non-16-byte-aligned update, why do you care?" - -But, as H. Peter Anvin remarked, the written spec was Intel's guarantee that certain behaviors would work. If the kernel ignored the spec, it could lead to subtle bugs later on. 
And, Bill Davidsen said that if the kernel ignored the alignment requirement, and "if the requirement is enforced in some future revision, and updates then fail in some insane way, the vendor is justified in claiming 'I told you so'." - -The end result was that Henrique sent in some patches to make the microcode driver enforce the 16-byte alignment requirement. - --------------------------------------------------------------------------------- - -via: http://www.linuxjournal.com/content/diff-u-whats-new-kernel-development-6 - -作者:[Zack Brown][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.linuxjournal.com/user/801501 -[1]:http://www.cl.cam.ac.uk/research/security/capsicum/papers/2010usenix-security-capsicum-website.pdf \ No newline at end of file diff --git a/sources/talk/20150121 Did this JavaScript break the console.md b/sources/talk/20150121 Did this JavaScript break the console.md deleted file mode 100644 index aab924ab33..0000000000 --- a/sources/talk/20150121 Did this JavaScript break the console.md +++ /dev/null @@ -1,91 +0,0 @@ -Did this JavaScript break the console? ---------- - -#Q: - -Just doing some JavaScript stuff in google chrome (don't want to try in other browsers for now, in case this is really doing real damage) and I'm not sure why this seemed to break my console. - -```javascript ->var x = "http://www.foo.bar/q?name=%%this%%"; -x -``` - -After x (and enter) the console stops working... I restarted chrome and now when I do a simple - -```javascript -console.clear(); -``` - -It's giving me - -```javascript -Console was cleared -``` - -And not clearing the console. Now in my scripts console.log's do not register and I'm wondering what is going on. 99% sure it has to do with the double percent signs (%%). - -Anyone know what I did wrong or better yet, how to fix the console? - -[A bug report for this issue has been filed here.][1] -Edit: Feeling pretty dumb, but I had Preserve log checked... That's why the console wasn't clearing. - -#A: - -As discussed in the comments, there are actually many different ways of constructing a string that causes this issue, and it is not necessary for there to be two percent signs in most cases. - -```TXT -http://example.com/% -http://%%% -http://ab% -http://%ab -http://%zz -``` - -However, it's not just the presence of a percent sign that breaks the Chrome console, as when we enter the following well-formed URL, the console continues to work properly and produces a clickable link. - -```TXT -http://ab%20cd -``` - -Additionally, the strings `http://%`, and `http://%%` will also print properly, since Chrome will not auto-link a URL-link string unless the [`http://`][2] is followed by at least 3 characters. - -From here I hypothesized that the issue must be in the process of linking a URL string in the console, likely in the process of decoding a malformed URL. I remembered that the JavaScript function `decodeURI` will throw an exception if given a malformed URL, and since Chrome's developer tools are largely written in JavaScript, could this be the issue that is evidently crashing the developer console? - -To test this theory, I ran Chrome by the command link, to see if any errors were being logged. - -Indeed, the same error you would see if you ran decodeURI on a malformed URL (i.e. 
decodeURI('http://example.com/%')) was being printed to the console: - ->[4810:1287:0107/164725:ERROR:CONSOLE(683)] "Uncaught URIError: URI malformed", source: chrome-devtools://devtools/bundled/devtools.js (683) ->So, I opened the URL 'chrome-devtools://devtools/bundled/devtools.js' in Chrome, and on line 683, I found the following. - -```javascript -{var parsedURL=new WebInspector.ParsedURL(decodeURI(url));var origin;var folderPath;var name;if(parsedURL.isValid){origin=parsedURL.scheme+"://"+parsedURL.host;if(parsedURL.port) -``` - -As we can see, `decodeURI(url)` is being called on the URL without any error checking, thus throwing the exception and crashing the developer console. - -A real fix for this issue will come from adding error handling to the Chrome console code, but in the meantime, one way to avoid the issue would be to wrap the string in a complex data type like an array to prevent parsing when logging. - -```javascript -var x = "http://example.com/%"; -console.log([x]); -``` - -Thankfully, the broken console issue does not persist once the tab is closed, and will not affect other tabs. - -###Update: - -Apparently, the issue can persist across tabs and restarts if Preserve Log is checked. Uncheck this if you are having this issue. - -via:[stackoverflow](http://stackoverflow.com/questions/27828804/did-this-javascript-break-the-console/27830948#27830948) - -作者:[Alexander O'Mara][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://stackoverflow.com/users/3155639/alexander-omara -[1]:https://code.google.com/p/chromium/issues/detail?id=446975 -[2]:https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURI \ No newline at end of file diff --git a/sources/talk/20150320 Revealed--The best and worst of Docker.md b/sources/talk/20150320 Revealed--The best and worst of Docker.md deleted file mode 100644 index 1e188d6cba..0000000000 --- a/sources/talk/20150320 Revealed--The best and worst of Docker.md +++ /dev/null @@ -1,66 +0,0 @@ -Revealed: The best and worst of Docker -================================================================================ -![](http://images.techhive.com/images/article/2015/01/best_worst_places_to_work-100564193-primary.idge.jpg) -Credit: [Shutterstock][1] - -> Docker experts talk about the good, the bad, and the ugly of the ubiquitous application container system - -No question about it: Docker's app container system has made its mark and become a staple in many IT environments. With its accelerating adoption, it's bound to stick around for a good long time. - -But there's no end to the debate about what Docker's best for, where it falls short, or how to most sensibly move it forward without alienating its existing users or damaging its utility. Here, we've turned to a few of the folks who have made Docker their business to get their takes on Docker's good, bad, and ugly sides. - -### The good ### - -One hardly expects Steve Francia, chief of operations of the Docker open source project, to speak of Docker in anything less than glowing terms. When asked by email about Docker's best attributes, he didn't disappoint: "I think the best thing about Docker is that it enables people, enables developers, enables users to very easily run an application anywhere," he said. 
"It's almost like the Holy Grail of development in that you can run an application on your desktop, and the exact same application without any changes can run on the server. That's never been done before." - -Alexis Richardson of [Weaveworks][2], a virtual networking product, praised Docker for enabling simplicity. "Docker offers immense potential to radically simplify and speed up how software gets built," he replied in an email. "This is why it has delivered record-breaking initial mind share and traction." - -Bob Quillin, CEO of [StackEngine][3], which makes Docker management and automation solutions, noted in an email that Docker (the company) has done a fine job of maintaining Docker's (the product) appeal to its audience. "Docker has been best at delivering strong developer support and focused investment in its product," he wrote. "Clearly, they know they have to keep the momentum, and they are doing that by putting intense effort into product functionality." He also mentioned that Docker's commitment to open source has accelerated adoption by "[allowing] people to build around their features as they are being built." - -Though containerization itself isn't new, as Rob Markovich of IT monitoring-service makers [Moogsoft][4] pointed out, Docker's implementation makes it new. "Docker is considered a next-generation virtualization technology given its more modern, lightweight form [of containerization]," he wrote in an email. "[It] brings an opportunity for an order-of-magnitude leap forward for software development teams seeking to deploy code faster." - -### The bad ### - -What's less appealing about Docker boils down to two issues: the complexity of using the product, and the direction of the company behind it. - -Samir Ghosh, CEO of enterprise PaaS outfit [WaveMaker][5], gave Docker a thumbs-up for simplifying the complex scripting typically needed for continuous delivery. That said, he added, "That doesn't mean Docker is simple. Implementing Docker is complicated. There are a lot of supporting technologies needed for things like container management, orchestration, app stack packaging, intercontainer networking, data snapshots, and so on." - -Ghosh noted the ones who feel the most of that pain are enterprises that want to leverage Docker for continuous delivery, but "it's even more complicated for enterprises that have diverse workloads, various app stacks, heterogenous infrastructures, and limited resources, not to mention unique IT needs for visibility, control and security." - -Complexity also becomes an issue in troubleshooting and analysis, and Markovich cited the fact that Docker provides application abstraction as the reason why. "It is nearly impossible to relate problems with application performance running on Docker to the performance of the underlying infrastructure domains," he said in an email. "IT teams are going to need visibility -- a new class of monitoring and analysis tools that can correlate across and relate how everything is working up and down the Docker stack, from the applications down to the private or public infrastructure." - -Quillin is most concerned about Docker's direction vis-à-vis its partner community: "Where will Docker make money, and where will their partners? If [Docker] wants to be the next VMware, it will need to take a page out of VMware's playbook in how to build and support a thriving partner ecosystem. 
- -"Additionally, to drive broader adoption, especially in the enterprise, Docker needs to start acting like a market leader by releasing more fully formed capabilities that organizations can count on, versus announcements of features with 'some assembly required,' that don't exist yet, or that require you to 'submit a pull request' to fix it yourself." - -Francia pointed to Docker's rapid ascent for creating its own difficulties. "[Docker] caught on so quickly that there's definitely places that we're focused on to add some features that a lot of users are looking forward to." - -One such feature, he noted, was having a GUI. "Right now to use Docker," he said, "you have to be comfortable with the command line. There's no visual interface to using Docker. Right now it's all command line-based. And we know if we want to really be as successful as we think we can be, we need to be more approachable and a lot of people when they see a command line, it's a bit intimidating for a lot of users." - -### The future ### - -In that last respect, Docker recently started to make advances. Last week it [bought the startup Kitematic][6], whose product gave Docker a convenient GUI on Mac OS X (and will eventually do the same for Windows). Another acqui-hire, [SocketPlane][7], is being spun in to work on Docker's networking. - -What remains to be seen is whether Docker's proposed solutions to its problems will be adopted, or whether another party -- say, [Red Hat][8] -- will provide a more immediately useful solution for enterprise customers who can't wait around for the chips to stop falling. - -"Good technology is hard and takes time to build," said Richardson. "The big risk is that expectations spin wildly out of control and customers are disappointed." - --------------------------------------------------------------------------------- - -via: http://www.infoworld.com/article/2896895/application-virtualization/best-and-worst-about-docker.html - -作者:[Serdar Yegulalp][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.infoworld.com/author/Serdar-Yegulalp/ -[1]:http://shutterstock.com/ -[2]:http://weave.works/ -[3]:http://stackengine.com/ -[4]:http://www.moogsoft.com/ -[5]:http://www.wavemaker.com/ -[6]:http://www.infoworld.com/article/2896099/application-virtualization/dockers-new-acquisition-does-containers-on-the-desktop.html -[7]:http://www.infoworld.com/article/2892916/application-virtualization/docker-snaps-up-socketplane-to-fix-networking-flaws.html -[8]:http://www.infoworld.com/article/2895804/application-virtualization/red-hat-wants-to-do-for-containers-what-its-done-for-linux.html \ No newline at end of file diff --git a/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md b/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md index 7eeb33676b..b67f5aee26 100644 --- a/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md +++ b/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md @@ -1,4 +1,3 @@ -2q1w2007申领 How to access a Linux server behind NAT via reverse SSH tunnel ================================================================================ You are running a Linux server at home, which is behind a NAT router or restrictive firewall. Now you want to SSH to the home server while you are away from home. 
How would you set that up? SSH port forwarding will certainly be an option. However, port forwarding can become tricky if you are dealing with multiple nested NAT environments. Besides, it can be interfered with under various ISP-specific conditions, such as restrictive ISP firewalls which block forwarded ports, or carrier-grade NAT which shares IPv4 addresses among users.
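The technique the rest of this article describes is a reverse SSH tunnel. As a rough sketch of its general shape with stock OpenSSH (the relay host, the account names and port 10022 are hypothetical, not taken from the article):

    # On the home server behind NAT: hold a tunnel open to a public relay host.
    ssh -fN -R 10022:localhost:22 relayuser@relay.example.com

    # On the relay host: reach the home server's sshd through that tunnel.
    ssh -p 10022 homeuser@localhost

With -R, the relay's sshd listens on port 10022 and forwards each incoming connection back through the tunnel to port 22 on the home server.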
diff --git a/sources/tech/20150522 Analyzing Linux Logs.md b/sources/tech/20150522 Analyzing Linux Logs.md
index 38d5b4636e..aebc28deae 100644
--- a/sources/tech/20150522 Analyzing Linux Logs.md
+++ b/sources/tech/20150522 Analyzing Linux Logs.md
@@ -1,4 +1,3 @@
-translating by zhangboyue
 Analyzing Linux Logs
 ================================================================================
 There’s a great deal of information waiting for you within your logs, although it’s not always as easy as you’d like to extract it. In this section we will cover some examples of basic analysis you can do with your logs right away (just search what’s there). We’ll also cover more advanced analysis that may take some upfront effort to set up properly, but will save you time on the back end. Examples of advanced analysis you can do on parsed data include generating summary counts, filtering on field values, and more.
diff --git a/sources/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md b/sources/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md
deleted file mode 100644
index ca51791986..0000000000
--- a/sources/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md
+++ /dev/null
@@ -1,209 +0,0 @@
-translating by wwy-hust
-
-Nishita Agarwal Shares Her Interview Experience on Linux ‘iptables’ Firewall
-================================================================================
-Nishita Agarwal, a frequent Tecmint visitor, shared her experience (Question and Answer) with us regarding the job interview she had just given at a privately owned hosting company in Pune, India. She was asked a lot of questions on a variety of topics; however, as she is an expert in iptables, she wanted to share the iptables-related questions and the answers she gave with others who may be going for an interview in the near future.

-![Linux Firewall Iptables Interview Questions](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg)
-
-All the questions and their answers are rewritten based upon Nishita Agarwal’s memory.
-
-> “Hello Friends! My name is **Nishita Agarwal**. I have pursued a Bachelor’s Degree in Technology. My area of specialization is UNIX, and variants of UNIX (BSD, Linux) have fascinated me since the time I first heard of them. I have 1+ years of experience in storage. I was looking for a job change, which ended with a hosting company in Pune, India.”
-
-Here is the collection of what I was asked during the interview. I’ve documented only those questions and answers that were related to iptables, based upon my memory. Hope this will help you in cracking your interview.
-
-### 1. Have you heard of iptables and firewalls in Linux? Any idea of what they are and what they are used for? ###
-
-> **Answer** : I’ve been using iptables for quite a long time and I am aware of both iptables and firewalld. Iptables is an application program, mostly written in the C programming language, and is released under the GNU General Public License. From a system administration point of view, the latest stable release is iptables 1.4.21. iptables may be considered the firewall for UNIX-like operating systems, which can more accurately be called iptables/netfilter. The administrator interacts with iptables via console/GUI front-end tools to add and define firewall rules in predefined tables. Netfilter is a module built into the kernel that does the actual job of filtering.
->
-> Firewalld is the latest implementation of filtering rules in RHEL/CentOS 7 (it may be implemented in other distributions which I may not be aware of). It has replaced the iptables interface and connects to netfilter.
-
-### 2. Have you used some kind of GUI based front end tool for iptables or the Linux Command Line? ###
-
-> **Answer** : I have used both the GUI-based front-end tools for iptables, like Shorewall in conjunction with [Webmin][1], and direct access to iptables via the console. And I must admit that direct access to iptables via the Linux console gives a user immense power in the form of a higher degree of flexibility and a better understanding of what is going on in the background, if nothing else. GUI is for the novice administrator while the console is for the experienced.
-
-### 3. What are the basic differences between iptables and firewalld? ###
-
-> **Answer** : iptables and firewalld serve the same purpose (packet filtering) but with different approaches. iptables flushes the entire rule set each time a change is made, unlike firewalld. Typically the iptables configuration lies at ‘/etc/sysconfig/iptables‘, whereas the firewalld configuration lies at ‘/etc/firewalld/‘, which is a set of XML files. Configuring XML-based firewalld is easier compared to configuring iptables; however, the same task can be achieved with either packet filtering application, i.e., iptables or firewalld. Firewalld runs iptables under its hood, along with its own command line interface and the XML-based configuration files mentioned above.
-
-### 4. Would you replace iptables with firewalld on all your servers, if given a chance? ###
-
-> **Answer** : I am familiar with iptables and how it works, and if there is nothing that requires the dynamic aspect of firewalld, there seems to be no reason to migrate all my configuration from iptables to firewalld. In most of the cases, so far, I have never seen iptables creating an issue. Also, the general rule of information technology says “why fix it if it is not broken”. However, this is my personal thought, and I would never mind implementing firewalld if the organization were going to replace iptables with firewalld.
-
-### 5. You seem confident with iptables, and the plus point is that we are even using iptables on our server. ###
-
-What are the tables used in iptables? Give a brief description of the tables used in iptables and the chains they support.
-
-> **Answer** : Thanks for the recognition. Moving to the question part, there are four tables used in iptables, namely:
->
-> Nat Table
-> Mangle Table
-> Filter Table
-> Raw Table
->
-> Nat Table : The nat table is primarily used for Network Address Translation. Masqueraded packets get their IP address altered as per the rules in the table. Packets in a stream traverse the nat table only once, i.e., if the first packet of a stream is masqueraded, the rest of the packets in the stream will not traverse this table again. It is recommended not to filter in this table. Chains supported by the nat table are the PREROUTING Chain, POSTROUTING Chain and OUTPUT Chain.
->
-> Mangle Table : As the name suggests, this table serves for mangling packets. It is used for special packet alteration. It can be used to alter the content of different packets and their headers. The mangle table can’t be used for masquerading. Supported chains are the PREROUTING Chain, OUTPUT Chain, FORWARD Chain, INPUT Chain and POSTROUTING Chain.
->
-> Filter Table : The filter table is the default table used in iptables. It is used for filtering packets. If no table is specified, the filter table is taken as the default, and filtering is done on the basis of this table. Supported chains are the INPUT Chain, OUTPUT Chain and FORWARD Chain.
->
-> Raw Table : The raw table comes into action when we want to exempt packets from connection tracking before the other tables see them. It supports the PREROUTING Chain and OUTPUT Chain.
-
-### 6. What are the target values (that can be specified in a target) in iptables, and what do they do? Be brief! ###
-
-> **Answer** : Following are the target values that we can specify in a target in iptables:
->
-> ACCEPT : Accept the packet
-> QUEUE : Pass the packet to user space (where applications and drivers reside)
-> DROP : Drop the packet
-> RETURN : Return control to the calling chain and stop executing the next set of rules for the current packet in the chain.
-
-### 7. Let’s move to the technical aspects of iptables; by technical I mean practical. ###
-
-How will you check for the iptables rpm that is required to install iptables in CentOS?
-
-> **Answer** : The iptables rpm is included in the standard CentOS installation and we do not need to install it separately. We can check the rpm as:
->
-> # rpm -qa iptables
->
-> iptables-1.4.21-13.el7.x86_64
->
-> If you need to install it, you may use yum to get it.
->
-> # yum install iptables-services
-
-### 8. How do you check and ensure that the iptables service is running? ###
-
-> **Answer** : To check the status of iptables, you may run the following command on the terminal.
->
-> # service iptables status [On CentOS 6/5]
-> # systemctl status iptables [On CentOS 7]
->
-> If it is not running, the below commands may be executed.
->
-> ---------------- On CentOS 6/5 ----------------
-> # chkconfig --level 35 iptables on
-> # service iptables start
->
-> ---------------- On CentOS 7 ----------------
-> # systemctl enable iptables
-> # systemctl start iptables
->
-> We may also check whether the iptables module is loaded or not, as:
->
-> # lsmod | grep ip_tables
-
-### 9. How will you review the current rules defined in iptables? ###
-
-> **Answer** : The current rules in iptables can be reviewed as simply as:
->
-> # iptables -L
->
-> Sample Output
->
-> Chain INPUT (policy ACCEPT)
-> target prot opt source destination
-> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
-> ACCEPT icmp -- anywhere anywhere
-> ACCEPT all -- anywhere anywhere
-> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
-> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
->
-> Chain FORWARD (policy ACCEPT)
-> target prot opt source destination
-> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
->
-> Chain OUTPUT (policy ACCEPT)
-> target prot opt source destination
-
-### 10. How will you flush all iptables rules or a particular chain? ###
-
-> **Answer** : To flush a particular iptables chain, you may use the following command.
->
-> # iptables --flush OUTPUT
->
-> To flush all the iptables rules:
->
-> # iptables --flush
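-
-*(An editor’s footnote, not part of the original interview: flushing only clears the rules in the running kernel. On CentOS 6/5, the active rule set can be persisted to /etc/sysconfig/iptables with either of the following standard commands, so that it survives a reboot.)*
-
-    # service iptables save
-    # iptables-save > /etc/sysconfig/iptables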
-
-### 11. Add a rule in iptables to accept packets from a trusted IP address (say 192.168.0.7) ###
-
-> **Answer** : The above scenario can be achieved simply by running the below command.
->
-> # iptables -A INPUT -s 192.168.0.7 -j ACCEPT
->
-> We may also include a standard slash or a subnet mask in the source, as:
->
-> # iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT
-> # iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT
-
-### 12. How do you add rules to ACCEPT, REJECT, DENY and DROP the ssh service in iptables? ###
-
-> **Answer** : Assuming ssh is running on port 22, which is also the default port for ssh, we can add rules to iptables as follows (the bracketed source is a placeholder for the address the rule should match).
->
-> To ACCEPT tcp packets for the ssh service (port 22):
->
-> # iptables -A INPUT -s [source] -p tcp --dport 22 -j ACCEPT
->
-> To REJECT tcp packets for the ssh service (port 22):
->
-> # iptables -A INPUT -s [source] -p tcp --dport 22 -j REJECT
->
-> To DENY tcp packets for the ssh service (port 22):
->
-> # iptables -A INPUT -s [source] -p tcp --dport 22 -j DENY
->
-> To DROP tcp packets for the ssh service (port 22):
->
-> # iptables -A INPUT -s [source] -p tcp --dport 22 -j DROP
-
-### 13. Let me give you a scenario. Say there is a machine whose local IP address is 192.168.0.6. You need to block connections on ports 22, 23, 80 and 8080 to your machine. What will you do? ###
-
-> **Answer** : Well, all I need to use is the ‘multiport‘ option with iptables, followed by the port numbers to be blocked, and the above scenario can be achieved in a single go as:
->
-> # iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dports 22,23,80,8080 -j DROP
->
-> The written rules can be checked using the below command.
->
-> # iptables -L
->
-> Chain INPUT (policy ACCEPT)
-> target prot opt source destination
-> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
-> ACCEPT icmp -- anywhere anywhere
-> ACCEPT all -- anywhere anywhere
-> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
-> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
-> DROP tcp -- 192.168.0.6 anywhere multiport dports ssh,telnet,http,webcache
->
-> Chain FORWARD (policy ACCEPT)
-> target prot opt source destination
-> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
->
-> Chain OUTPUT (policy ACCEPT)
-> target prot opt source destination
-
-**Interviewer** : That’s all I wanted to ask. You are a valuable candidate we wouldn’t like to miss. I will recommend your name to HR. If you have any questions, you may ask me.
-
-As a candidate, I didn’t want to kill the conversation, hence I kept asking about the projects I would be handling if selected and what other openings there were in the company. Not to mention, the HR round was not difficult to crack, and I got the opportunity.
-
-Also, I would like to thank Avishek and Ravi (whom I have been friends with for a long time) for taking the time to document my interview.
-
-Friends! If you have given any such interview and would like to share your interview experience with millions of Tecmint readers around the globe, then send your questions and answers to admin@tecmint.com.
-
-Thank you! Keep connected. Also, let me know if I could have answered any question more correctly than I did.
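-
-*(A second editor’s footnote, again not from the interview: if a rule such as the multiport DROP in question 13 later needs to be removed, iptables can list rules with line numbers and delete by number. The rule number below is hypothetical; use whatever number the listing shows for the rule you want to drop.)*
-
-    # iptables -L INPUT -n --line-numbers
-    # iptables -D INPUT 6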
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/ - -作者:[Avishek Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/ From f70052ce39a0a68d2b265f29c5901294130447fd Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 28 Jul 2015 22:10:15 +0800 Subject: [PATCH 017/207] PUB:20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux @martin2011qi --- ...ne editing is supported GRUB Error In Linux.md | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) rename {translated/tech => published}/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md (83%) diff --git a/translated/tech/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md b/published/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md similarity index 83% rename from translated/tech/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md rename to published/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md index 4edaa333d8..66ab15e999 100644 --- a/translated/tech/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md +++ b/published/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md @@ -1,8 +1,11 @@ -修复Linux中的提供最小化类BASH命令行编辑GRUB错误 +修复Linux中的“提供类似行编辑的袖珍BASH...”的GRUB错误 ================================================================================ + 这两天我[安装了Elementary OS和Windows双系统][1],在启动的时候遇到了一个Grub错误。命令行中呈现如下信息: -**提供最小化类BASH命令行编辑。对于第一个词,TAB键补全可以使用的命令。除此之外,TAB键补全可用的设备或文件。** +**Minimal BASH like line editing is supported. For the first word, TAB lists possible command completions. 
anywhere else TAB lists possible device or file completions.** + +**提供类似行编辑的袖珍 BASH。TAB键补全第一个词,列出可以使用的命令。除此之外,TAB键补全可以列出可用的设备或文件。** ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/02/Boot_Repair_Ubuntu_Linux_1.jpeg) @@ -10,7 +13,7 @@ 通过这篇文章里我们可以学到基于Linux系统**如何修复Ubuntu中出现的“minimal BASH like line editing is supported” Grub错误**。 -> 你可以参阅这篇教程来修复类似的高频问题,[错误:分区未找到Linux grub救援模式][3]。 +> 你可以参阅这篇教程来修复类似的常见问题,[错误:分区未找到Linux grub救援模式][3]。 ### 先决条件 ### @@ -19,11 +22,11 @@ - 一个包含相同版本、相同OS的LiveUSB或磁盘 - 当前会话的Internet连接正常工作 -在确认了你拥有先决条件了之后,让我们看看如何修复Linux的死亡黑屏(如果我可以这样的称呼它的话;))。 +在确认了你拥有先决条件了之后,让我们看看如何修复Linux的死亡黑屏(如果我可以这样的称呼它的话 ;) )。 ### 如何在基于Ubuntu的Linux中修复“minimal BASH like line editing is supported” Grub错误 ### -我知道你一定疑问这种Grub错误并不局限于在基于Ubuntu的Linux发行版上发生,那为什么我要强调在基于Ubuntu的发行版上呢?原因是,在这里我们将采用一个简单的方法并叫作**Boot Repair**的工具来修复我们的问题。我并不确定在其他的诸如Fedora的发行版中是否有这个工具可用。不再浪费时间,我们来看如何修复minimal BASH like line editing is supported Grub错误。 +我知道你一定疑问这种Grub错误并不局限于在基于Ubuntu的Linux发行版上发生,那为什么我要强调在基于Ubuntu的发行版上呢?原因是,在这里我们将采用一个简单的方法,用个叫做**Boot Repair**的工具来修复我们的问题。我并不确定在其他的诸如Fedora的发行版中是否有这个工具可用。不再浪费时间,我们来看如何修复“minimal BASH like line editing is supported” Grub错误。 ### 步骤 1: 引导进入lives会话 ### @@ -75,7 +78,7 @@ via: http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux 作者:[Abhishek][a] 译者:[martin2011qi](https://github.com/martin2011qi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 62e6cbbd139246c280b1e4dc5c1c314606481265 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Tue, 28 Jul 2015 22:27:26 +0800 Subject: [PATCH 018/207] [Translating] sources/tech/20150522 Analyzing Linux Logs.md --- sources/tech/20150522 Analyzing Linux Logs.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150522 Analyzing Linux Logs.md b/sources/tech/20150522 Analyzing Linux Logs.md index aebc28deae..832ea369ec 100644 --- a/sources/tech/20150522 Analyzing Linux Logs.md +++ b/sources/tech/20150522 Analyzing Linux Logs.md @@ -1,3 +1,4 @@ +ictlyh Translating Analyzing Linux Logs ================================================================================ There’s a great deal of information waiting for you within your logs, although it’s not always as easy as you’d like to extract it. In this section we will cover some examples of basic analysis you can do with your logs right away (just search what’s there). We’ll also cover more advanced analysis that may take some upfront effort to set up properly, but will save you time on the back end. Examples of advanced analysis you can do on parsed data include generating summary counts, filtering on field values, and more. 
From 7634a479357b9e584bb6dc8fe30d6b8b7bfa77b3 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 29 Jul 2015 00:03:39 +0800 Subject: [PATCH 019/207] PUB:20150616 LINUX 101--POWER UP YOUR SHELL MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @FSSlc 辛苦啦。 --- ...20150616 LINUX 101--POWER UP YOUR SHELL.md | 177 ++++++++++++++++++ ...20150616 LINUX 101--POWER UP YOUR SHELL.md | 177 ------------------ 2 files changed, 177 insertions(+), 177 deletions(-) create mode 100644 published/20150616 LINUX 101--POWER UP YOUR SHELL.md delete mode 100644 translated/tech/20150616 LINUX 101--POWER UP YOUR SHELL.md diff --git a/published/20150616 LINUX 101--POWER UP YOUR SHELL.md b/published/20150616 LINUX 101--POWER UP YOUR SHELL.md new file mode 100644 index 0000000000..dd7b985b01 --- /dev/null +++ b/published/20150616 LINUX 101--POWER UP YOUR SHELL.md @@ -0,0 +1,177 @@ +LINUX 101: 让你的 SHELL 更强大 +================================================================================ +> 在我们的关于 shell 基础的指导下, 得到一个更灵活,功能更强大且多彩的命令行界面 + +**为何要这样做?** + +- 使得在 shell 提示符下过得更轻松,高效 +- 在失去连接后恢复先前的会话 +- Stop pushing around that fiddly rodent! + +![bash1](http://www.linuxvoice.com/wp-content/uploads/2015/02/bash1-large15.png) + +这是我的命令行提示符的设置。对于这个小的终端窗口来说,这或许有些长。但你可以根据你的喜好来调整它。 + +作为一个 Linux 用户, 你可能熟悉 shell (又名为命令行)。 或许你需要时不时的打开终端来完成那些不能在 GUI 下处理的必要任务,抑或是因为你处在一个将窗口铺满桌面的环境中,而 shell 是你与你的 linux 机器交互的主要方式。 + +在上面那些情况下,你可能正在使用你所使用的发行版本自带的 Bash 配置。 尽管对于大多数的任务而言,它足够好了,但它可以更加强大。 在本教程中,我们将向你展示如何使得你的 shell 提供更多有用信息、更加实用且更适合工作。 我们将对提示符进行自定义,让它比默认情况下提供更好的反馈,并向你展示如何使用炫酷的 `tmux` 工具来管理会话并同时运行多个程序。 并且,为了让眼睛舒服一点,我们还将关注配色方案。那么,进击吧,少女! + +### 让提示符更美妙 ### + +大多数的发行版本配置有一个非常简单的提示符,它们大多向你展示了一些基本信息, 但提示符可以为你提供更多的内容。例如,在 Debian 7 下,默认的提示符是这样的: + + mike@somebox:~$ + +上面的提示符展示出了用户、主机名、当前目录和账户类型符号(假如你切换到 root 账户, **$** 会变为 **#**)。 那这些信息是在哪里存储的呢? 答案是:在 **PS1** 环境变量中。 假如你键入 `echo $PS1`, 你将会在这个命令的输出字符串的最后有如下的字符: + + \u@\h:\w$ + +这看起来有一些丑陋,并在瞥见它的第一眼时,你可能会开始尖叫,认为它是令人恐惧的正则表达式,但我们不打算用这些复杂的字符来煎熬我们的大脑。这不是正则表达式,这里的斜杠是转义序列,它告诉提示符进行一些特别的处理。 例如,上面的 **u** 部分,告诉提示符展示用户名, 而 w 则展示工作路径. + +下面是一些你可以在提示符中用到的字符的列表: + +- d 当前的日期 +- h 主机名 +- n 代表换行的字符 +- A 当前的时间 (HH:MM) +- u 当前的用户 +- w (小写) 整个工作路径的全称 +- W (大写) 工作路径的简短名称 +- $ 一个提示符号,对于 root 用户为 # 号 +- ! 当前命令在 shell 历史记录中的序号 + +下面解释 **w** 和 **W** 选项的区别: 对于前者,你将看到你所在的工作路径的完整地址,(例如 **/usr/local/bin**),而对于后者, 它则只显示 **bin** 这一部分。 + +现在,我们该怎样改变提示符呢? 你需要更改 **PS1** 环境变量的内容,试试下面这个: + + export PS1="I am \u and it is \A $" + +现在,你的提示符将会像下面这样: + + I am mike and it is 11:26 $ + +从这个例子出发,你就可以按照你的想法来试验一下上面列出的其他转义序列。 但等等 – 当你登出后,你的这些努力都将消失,因为在你每次打开终端时,**PS1** 环境变量的值都会被重置。解决这个问题的最简单方式是打开 **.bashrc** 配置文件(在你的家目录下) 并在这个文件的最下方添加上完整的 `export` 命令。在每次你启动一个新的 shell 会话时,这个 **.bashrc** 会被 `Bash` 读取, 所以你的加强的提示符就可以一直出现。你还可以使用额外的颜色来装扮提示符。刚开始,这将有点棘手,因为你必须使用一些相当奇怪的转义序列,但结果是非常漂亮的。 将下面的字符添加到你的 **PS1**字符串中的某个位置,最终这将把文本变为红色: + + \[\e[31m\] + +你可以将这里的 31 更改为其他的数字来获得不同的颜色: + +- 30 黑色 +- 32 绿色 +- 33 黄色 +- 34 蓝色 +- 35 洋红色 +- 36 青色 +- 37 白色 + +所以,让我们使用先前看到的转义序列和颜色来创造一个提示符,以此来结束这一小节的内容。深吸一口气,弯曲你的手指,然后键入下面这只“野兽”: + + export PS1="(\!) 
\[\e[31m\] \[\A\] \[\e[32m\]\u@\h \[\e[34m\]\w \[\e[30m\]$" + +上面的命令提供了一个 Bash 命令历史序号、当前的时间、彩色的用户或主机名组合、以及工作路径。假如你“野心勃勃”,利用一些惊人的组合,你还可以更改提示符的背景色和前景色。非常有用的 Arch wiki 有一个关于颜色代码的完整列表:[http://tinyurl.com/3gvz4ec][1]。 + +> **Shell 精要** +> +> 假如你是一个彻底的 Linux 新手并第一次阅读这份杂志,或许你会发觉阅读这些教程有些吃力。 所以这里有一些基础知识来让你熟悉一些 shell。 通常在你的菜单中, shell 指的是 Terminal、 XTerm 或 Konsole, 当你启动它后, 最为实用的命令有这些: +> +> **ls** (列出文件名); **cp one.txt two.txt** (复制文件); **rm file.txt** (移除文件); **mv old.txt new.txt** (移动或重命名文件); +> +> **cd /some/directory** (改变目录); **cd ..** (回到上级目录); **./program** (在当前目录下运行一个程序); **ls > list.txt** (重定向输出到一个文件)。 +> +> 几乎每个命令都有一个手册页用来解释其选项(例如 **man ls** – 按 Q 来退出)。在那里,你可以知晓命令的选项,这样你就知道 **ls -la** 展示一个详细的列表,其中也列出了隐藏文件, 并且在键入一个文件或目录的名字的一部分后, 可以使用 Tab 键来自动补全。 + +### Tmux: 针对 shell 的窗口管理器 ### + +在文本模式的环境中使用一个窗口管理器 – 这听起来有点不可思议, 是吧? 然而,你应该记得当 Web 浏览器第一次实现分页浏览的时候吧? 在当时, 这是在可用性上的一个重大进步,它减少了桌面任务栏的杂乱无章和繁多的窗口列表。 对于你的浏览器来说,你只需要一个按钮便可以在浏览器中切换到你打开的每个单独网站, 而不是针对每个网站都有一个任务栏或导航图标。 这个功能非常有意义。 + +若有时你同时运行着几个虚拟终端,你便会遇到相似的情况; 在这些终端之间跳转,或每次在任务栏或窗口列表中找到你所需要的那一个终端,都可能会让你觉得麻烦。 拥有一个文本模式的窗口管理器不仅可以让你像在同一个终端窗口中运行多个 shell 会话,而且你甚至还可以将这些窗口排列在一起。 + +另外,这样还有另一个好处:可以将这些窗口进行分离和重新连接。想要看看这是如何运行的最好方式是自己尝试一下。在一个终端窗口中,输入 `screen` (在大多数发行版本中,它已经默认安装了或者可以在软件包仓库中找到)。 某些欢迎的文字将会出现 – 只需敲击 Enter 键这些文字就会消失。 现在运行一个交互式的文本模式的程序,例如 `nano`, 并关闭这个终端窗口。 + +在一个正常的 shell 对话中, 关闭窗口将会终止所有在该终端中运行的进程 – 所以刚才的 Nano 编辑对话也就被终止了, 但对于 screen 来说,并不是这样的。打开一个新的终端并输入如下命令: + + screen -r + +瞧,你刚开打开的 Nano 会话又回来了! + +当刚才你运行 **screen** 时, 它会创建了一个新的独立的 shell 会话, 它不与某个特定的终端窗口绑定在一起,所以可以在后面被分离并重新连接(即 **-r** 选项)。 + +当你正使用 SSH 去连接另一台机器并做着某些工作时, 但并不想因为一个脆弱的连接而影响你的进度,这个方法尤其有用。假如你在一个 **screen** 会话中做着某些工作,并且你的连接突然中断了(或者你的笔记本没电了,又或者你的电脑报废了——不是这么悲催吧),你只需重新连接或给电脑充电或重新买一台电脑,接着运行 **screen -r** 来重新连接到远程的电脑,并在刚才掉线的地方接着开始。 + +现在,我们都一直在讨论 GNU 的 **screen**,但这个小节的标题提到的是 tmux。 实质上, **tmux** (terminal multiplexer) 就像是 **screen** 的一个进阶版本,带有许多有用的额外功能,所以现在我们开始关注 tmux。 某些发行版本默认包含了 **tmux**; 在其他的发行版本上,通常只需要一个 **apt-get、 yum install** 或 **pacman -S** 命令便可以安装它。 + +一旦你安装了它过后,键入 **tmux** 来启动它。接着你将注意到,在终端窗口的底部有一条绿色的信息栏,它非常像传统的窗口管理器中的任务栏: 上面显示着一个运行着的程序的列表、机器的主机名、当前时间和日期。 现在运行一个程序,同样以 Nano 为例, 敲击 Ctrl+B 后接着按 C 键, 这将在 tmux 会话中创建一个新的窗口,你便可以在终端的底部的任务栏中看到如下的信息: + + 0:nano- 1:bash* + +每一个窗口都有一个数字,当前呈现的程序被一个星号所标记。 Ctrl+B 是与 tmux 交互的标准方式, 所以若你敲击这个按键组合并带上一个窗口序号, 那么就会切换到对应的那个窗口。你也可以使用 Ctrl+B 再加上 N 或 P 来分别切换到下一个或上一个窗口 – 或者使用 Ctrl+B 加上 L 来在最近使用的两个窗口之间来进行切换(有点类似于桌面中的经典的 Alt+Tab 组合键的效果)。 若需要知道窗口列表,使用 Ctrl+B 再加上 W。 + +目前为止,一切都还好:现在你可以在一个单独的终端窗口中运行多个程序,避免混乱(尤其是当你经常与同一个远程主机保持多个 SSH 连接时)。 当想同时看两个程序又该怎么办呢? + +针对这种情况, 可以使用 tmux 中的窗格。 敲击 Ctrl+B 再加上 % , 则当前窗口将分为两个部分:一个在左一个在右。你可以使用 Ctrl+B 再加上 O 来在这两个部分之间切换。 这尤其在你想同时看两个东西时非常实用, – 例如一个窗格看指导手册,另一个窗格里用编辑器看一个配置文件。 + +有时,你想对一个单独的窗格进行缩放,而这需要一定的技巧。 首先你需要敲击 Ctrl+B 再加上一个 :(冒号),这将使得位于底部的 tmux 栏变为深橙色。 现在,你进入了命令模式,在这里你可以输入命令来操作 tmux。 输入 **resize-pane -R** 来使当前窗格向右移动一个字符的间距, 或使用 **-L** 来向左移动。 对于一个简单的操作,这些命令似乎有些长,但请注意,在 tmux 的命令模式(前面提到的一个分号开始的模式)下,可以使用 Tab 键来补全命令。 另外需要提及的是, **tmux** 同样也有一个命令历史记录,所以若你想重复刚才的缩放操作,可以先敲击 Ctrl+B 再跟上一个分号,并使用向上的箭头来取回刚才输入的命令。 + +最后,让我们看一下分离和重新连接 - 即我们刚才介绍的 screen 的特色功能。 在 tmux 中,敲击 Ctrl+B 再加上 D 来从当前的终端窗口中分离当前的 tmux 会话。这使得这个会话的一切工作都在后台中运行、使用 `tmux a` 可以再重新连接到刚才的会话。但若你同时有多个 tmux 会话在运行时,又该怎么办呢? 我们可以使用下面的命令来列出它们: + + tmux ls + +这个命令将为每个会话分配一个序号; 假如你想重新连接到会话 1, 可以使用 `tmux a -t 1`. 
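+
+作为小结,下面是一个极简的会话管理流程(在会话中按 Ctrl+B 再加 D 即可分离;其中的会话名 work 只是为了演示而假定的):
+
+    tmux new -s work      # 新建一个名为 work 的会话
+    tmux ls               # 列出正在运行的会话
+    tmux a -t work        # 按名字重新连接到该会话
+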
tmux 是可以高度定制的,你可以自定义按键绑定并更改配色方案, 所以一旦你适应了它的主要功能,请钻研指导手册以了解更多的内容。 + + +![tmux](http://www.linuxvoice.com/wp-content/uploads/2015/02/tmux-large13.jpg) + +上图中, tmux 开启了两个窗格: 左边是 Vim 正在编辑一个配置文件,而右边则展示着指导手册页。 + +> **Zsh: 另一个 shell** +> +> 选择是好的,但标准同样重要。 你要知道几乎每个主流的 Linux 发行版本都默认使用 Bash shell – 尽管还存在其他的 shell。 Bash 为你提供了一个 shell 能够给你提供的几乎任何功能,包括命令历史记录,文件名补全和许多脚本编程的能力。它成熟、可靠并文档丰富 – 但它不是你唯一的选择。 +> +> 许多高级用户热衷于 Zsh, 即 Z shell。 这是 Bash 的一个替代品并提供了 Bash 的几乎所有功能,另外还提供了一些额外的功能。 例如, 在 Zsh 中,你输入 **ls** ,并敲击 Tab 键可以得到 **ls** 可用的各种不同选项的一个大致描述。 而不需要再打开 man page 了! +> +> Zsh 还支持其他强大的自动补全功能: 例如,输入 **cd /u/lo/bi** 再敲击 Tab 键, 则完整的路径名 **/usr/local/bin** 就会出现(这里假设没有其他的路径包含 **u**, **lo** 和 **bi** 等字符)。 或者只输入 **cd** 再跟上 Tab 键,则你将看到着色后的路径名的列表 – 这比 Bash 给出的简单的结果好看得多。 +> +> Zsh 在大多数的主要发行版本上都可以得到了; 安装它后并输入 **zsh** 便可启动它。 要将你的默认 shell 从 Bash 改为 Zsh, 可以使用 **chsh** 命令。 若需了解更多的信息,请访问 [www.zsh.org][2]。 + +### “未来”的终端 ### + +你或许会好奇为什么包含你的命令行提示符的应用被叫做终端。 这需要追溯到 Unix 的早期, 那时人们一般工作在一个多用户的机器上,这个巨大的电脑主机将占据一座建筑中的一个房间, 人们通过某些线路,使用屏幕和键盘来连接到这个主机, 这些终端机通常被称为“哑终端”, 因为它们不能靠自己做任何重要的执行任务 – 它们只展示通过线路从主机传来的信息,并输送回从键盘的敲击中得到的输入信息。 + +今天,我们在自己的机器上执行几乎所有的实际操作,所以我们的电脑不是传统意义下的终端,这就是为什么诸如 **XTerm**、 Gnome Terminal、 Konsole 等程序被称为“终端模拟器” 的原因 – 他们提供了同昔日的物理终端一样的功能。事实上,在许多方面它们并没有改变多少。诚然,现在我们有了反锯齿字体,更好的颜色和点击网址的能力,但总的来说,几十年来我们一直以同样的方式在工作。 + +所以某些程序员正尝试改变这个状况。 **Terminology** ([http://tinyurl.com/osopjv9][3]), 它来自于超级时髦的 Enlightenment 窗口管理器背后的团队,旨在让终端步入到 21 世纪,例如带有在线媒体显示功能。你可以在一个充满图片的目录里输入 **ls** 命令,便可以看到它们的缩略图,或甚至可以直接在你的终端里播放视频。 这使得一个终端有点类似于一个文件管理器,意味着你可以快速地检查媒体文件的内容而不必用另一个应用来打开它们。 + +接着还有 Xiki ([www.xiki.org][4]),它自身的描述为“命令的革新”。它就像是一个传统的 shell、一个 GUI 和一个 wiki 之间的过渡;你可以在任何地方输入命令,并在后面将它们的输出存储为笔记以作为参考,并可以创建非常强大的自定义命令。用几句话是很能描述它的,所以作者们已经创作了一个视频来展示它的潜力是多么的巨大(请看 **Xiki** 网站的截屏视频部分)。 + +并且 Xiki 绝不是那种在几个月之内就消亡的昙花一现的项目,作者们成功地进行了一次 Kickstarter 众筹,在七月底已募集到超过 $84,000。 是的,你没有看错 – $84K 来支持一个终端模拟器。这可能是最不寻常的集资活动了,因为某些疯狂的家伙已经决定开始创办它们自己的 Linux 杂志 ...... 
+ +### 下一代终端 ### + +许多命令行和基于文本的程序在功能上与它们的 GUI 程序是相同的,并且常常更加快速和高效。我们的推荐有: +**Irssi** (IRC 客户端); **Mutt** (mail 客户端); **rTorrent** (BitTorrent); **Ranger** (文件管理器); **htop** (进程监视器)。 若给定在终端的限制下来进行 Web 浏览, Elinks 确实做的很好,并且对于阅读那些以文字为主的网站例如 Wikipedia 来说。它非常实用。 + +> **微调配色方案** +> +> 在《Linux Voice》杂志社中,我们并不迷恋那些养眼的东西,但当你每天花费几个小时盯着屏幕看东西时,我们确实认识到美学的重要性。我们中的许多人都喜欢调整我们的桌面和窗口管理器来达到完美的效果,调整阴影效果、摆弄不同的配色方案,直到我们 100% 的满意(然后出于习惯,摆弄更多的东西)。 +> +> 但我们倾向于忽视终端窗口,它理应也获得我们的喜爱,并且在 [http://ciembor.github.io/4bit][5] 你将看到一个极其棒的配色方案设计器,对于所有受欢迎的终端模拟器(**XTerm, Gnome Terminal, Konsole 和 Xfce4 Terminal 等都是支持的应用。**),它可以输出其设定。移动滑块直到你看到配色方案最佳, 然后点击位于该页面右上角的 `得到方案` 按钮。 +> +> 相似的,假如你在一个文本编辑器,如 Vim 或 Emacs 上花费了很多的时间,使用一个精心设计的调色板也是非常值得的。 **Solarized** [http://ethanschoonover.com/solarized][6] 是一个卓越的方案,它不仅漂亮,而且因追求最大的可用性而设计,在其背后有着大量的研究和测试。 + +-------------------------------------------------------------------------------- + +via: http://www.linuxvoice.com/linux-101-power-up-your-shell-8/ + +作者:[Ben Everard][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxvoice.com/author/ben_everard/ +[1]:http://tinyurl.com/3gvz4ec +[2]:http://www.zsh.org/ +[3]:http://tinyurl.com/osopjv9 +[4]:http://www.xiki.org/ +[5]:http://ciembor.github.io/4bit +[6]:http://ethanschoonover.com/solarized \ No newline at end of file diff --git a/translated/tech/20150616 LINUX 101--POWER UP YOUR SHELL.md b/translated/tech/20150616 LINUX 101--POWER UP YOUR SHELL.md deleted file mode 100644 index fac7fa2e1b..0000000000 --- a/translated/tech/20150616 LINUX 101--POWER UP YOUR SHELL.md +++ /dev/null @@ -1,177 +0,0 @@ -LINUX 101: 让你的 SHELL 更强大 -================================================================================ -> 在我们的有关 shell 基础的指导下, 得到一个更灵活,功能更强大且多彩的命令行界面 - -**为何要这样做?** - -- 使得在 shell 提示符下过得更轻松,高效 -- 在失去连接后恢复先前的会话 -- Stop pushing around that fiddly rodent! (注: 我不知道这句该如何翻译) - -![bash1](http://www.linuxvoice.com/wp-content/uploads/2015/02/bash1-large15.png) - -Here’s our souped-up prompt on steroids.(注: 我不知道该如何翻译这句)对于这个细小的终端窗口来说,这或许有些长.但你可以根据你的喜好来调整它的大小. - -作为一个 Linux 用户, 对 shell (又名为命令行),你可能会熟悉. 或许你需要时不时的打开终端来完成那些不能在 GUI 下处理的必要任务,抑或是因为你处在一个平铺窗口管理器的环境中, 而 shell 是你与你的 linux 机器交互的主要方式. - -在上面的任一情况下,你可能正在使用你所使用的发行版本自带的 Bash 配置. 尽管对于大多数的任务而言,它足够强大,但它可以更加强大. 在本教程中,我们将向你展示如何使得你的 shell 更具信息性,更加实用且更适于在其中工作. 我们将对提示符进行自定义,让它比默认情况下提供更好的反馈,并向你展示如何使用炫酷的 `tmux` 工具来管理会话并同时运行多个程序. 并且,为了让眼睛舒服一点,我们还将关注配色方案. 接着,就让我们向前吧! - -### 让提示符 "唱歌" ### - -大多数的发行版本配置有一个非常简单的提示符 – 它们大多向你展示了一些基本信息, 但提示符可以为你提供更多的内容.例如,在 Debian 7 下,默认的提示符是这样的: - - mike@somebox:~$ - -上面的提示符展示出了用户,主机名,当前目录和账户类型符号(假如你切换到 root 账户, **$** 会变为 # ). 那这些信息是在哪里存储的呢? 答案是:在 **PS1** 环境变量中. 假如你键入 **echo $PS1**, 你将会在这个命令的输出字符串的最后有如下的字符: - - \u@\h:\w$ (注:这里没有加上斜杠 \,应该是没有转义 ,下面的有些命令也一样,我把 \ 都加上了,发表的时候也得注意一下) - -这看起来有一些丑陋,并在瞥见它的第一眼时,你可能会开始尖叫,认为它是令人恐惧的正则表达式,但我们不打算用这些复杂的字符来煎熬我们的大脑. 这不是正则表达式, 这里的斜杠是转义序列,它告诉提示符进行一些特别的处理. 例如,上面的 **u** 部分,告诉提示符展示用户名, 而 w 则展示工作路径. - -下面是一些你可以在提示符中用到的字符的列表: - -- d 当前的日期. -- h 主机名. -- n 代表新的一行的字符. -- A 当前的时间 (HH:MM). -- u 当前的用户. -- w (小写) 整个工作路径的全称. -- W (大写) 工作路径的简短名称. -- $ 一个提示符号,对于 root 用户为 # 号. -- ! 当前命令在 shell 历史记录中的序号. - -下面解释 **w** 和 **W** 选项的区别: 对于前者,你将看到你所在的工作路径的完整地址,(例如 **/usr/local/bin**), 而对于后者, 它则只显示 **bin** 这一部分. - -现在, 我们该怎样改变提示符呢? 你需要更改 **PS1** 环境变量的内容, 试试下面这个: - - export PS1=”I am \u and it is \A $” - -现在, 你的提示符将会像下面这样: - - I am mike and it is 11:26 $ - -从这个例子出发, 你就可以按照你的想法来试验一下上面列出的其他转义序列. 
但稍等片刻 – 当你登出后,你的这些努力都将消失,因为在你每次打开终端时, **PS1** 环境变量的值都会被重置. 解决这个问题的最简单方式是打开 **.bashrc** 配置文件(在你的家目录下) 并在这个文件的最下方添加上完整的 `export` 命令.在每次你启动一个新的 shell 会话时,这个 **.bashrc** 会被 `Bash` 读取, 所以你的被加强了的提示符就可以一直出现.你还可以使用额外的颜色来装扮提示符.刚开始,这将有点棘手,因为你必须使用一些相当奇怪的转义序列,但结果是非常漂亮的. 将下面的字符添加到你的 **PS1**字符串中的某个位置,最终这将把文本变为红色: - - \[\e[31m\] - -你可以将这里的 31 更改为其他的数字来获得不同的颜色: - -- 30 黑色 -- 32 绿色 -- 33 黄色 -- 34 蓝色 -- 35 洋红色 -- 36 青色 -- 37 白色 - -所以,让我们使用先前看到的转义序列和颜色来创造一个提示符,以此来结束这一小节的内容. 深吸一口气,弯曲你的手指,然后键入下面这只"野兽": - - export PS1="(\!) \[\e[31m\] \[\A\] \[\e[32m\]\u@\h \[\e[34m\]\w \[\e[30m\]$" - -上面的命令提供了一个 Bash 命令历史序号, 当前的时间,用户或主机名与颜色之间的组合,以及工作路径.假如你"野心勃勃",利用一些惊人的组合,你还可以更改提示符的背景色和前景色.先前实用的 Arch wiki 有一个关于颜色代码的完整列表:[http://tinyurl.com/3gvz4ec][1]. - -> ### Shell 精要 ### -> -> 假如你是一个彻底的 Linux 新手并第一次阅读这份杂志,或许你会发觉阅读这些教程有些吃力. 所以这里有一些基础知识来让你熟悉一些 shell. 通常在你的菜单中, shell 指的是 Terminal, XTerm 或 Konsole, 但你启动它后, 最为实用的命令有这些: -> -> **ls** (列出文件名); **cp one.txt two.txt** (复制文件); **rm file.txt** (移除文件); **mv old.txt new.txt** (移动或重命名文件); -> -> **cd /some/directory** (改变目录); **cd ..** (回到上级目录); **./program** (在当前目录下运行一个程序); **ls > list.txt** (重定向输出到一个文件). -> -> 几乎每个命令都有一个手册页用来解释其选项(例如 **man ls** – 按 Q 来退出).在那里,你可以知晓命令的选项,这样你就知道 **ls -la** 展示一个详细的列表,其中也列出了隐藏文件, 并且在键入一个文件或目录的名字的一部分后, 可以使用 Tab 键来自动补全. - -### Tmux: 针对 shell 的窗口管理器 ### - -在文本模式的环境中使用一个窗口管理器 – 这听起来有点不可思议, 是吧? 然而,你应该记得当 Web 浏览器第一次实现分页浏览的时候吧? 在当时, 这是在可用性上的一个重大进步,它减少了桌面任务栏的杂乱无章和繁多的窗口列表. 对于你的浏览器来说,你只需要一个按钮便可以在浏览器中切换到你打开的每个单独网站, 而不是针对每个网站都有一个任务栏或导航图标. 这个功能非常有意义. - -若有时你同时运行着几个虚拟终端,你便会遇到相似的情况; 在这些终端之间跳转,或每次在任务栏或窗口列表中找到你所需要的那一个终端,都可能会让你觉得麻烦. 拥有一个文本模式的窗口管理器不仅可以让你像在同一个终端窗口中运行多个 shell 会话,而且你甚至还可以将这些窗口排列在一起. - -另外,这样还有另一个好处:可以将这些窗口进行分离和重新连接.想要看看这是如何运行的最好方式是自己尝试一下. 在一个终端窗口中,输入 **screen** (在大多数发行版本中,它被默认安装了或者可以在软件包仓库中找到). 某些欢迎的文字将会出现 – 只需敲击 Enter 键这些文字就会消失. 现在运行一个交互式的文本模式的程序,例如 **nano**, 并关闭这个终端窗口. - -在一个正常的 shell 对话中, 关闭窗口将会终止所有在该终端中运行的进程 – 所以刚才的 Nano 编辑对话也就被终止了, 但对于 screen 来说,并不是这样的. 打开一个新的终端并输入如下命令: - - screen -r - -瞧, 你刚开打开的 Nano 会话又回来了! - -当刚才你运行 **screen** 时, 它会创建了一个新的独立的 shell 会话, 它不与某个特定的终端窗口绑定在一起,所以可以在后面被分离并重新连接( 即 **-r** 选项). - -当你正使用 SSH 去连接另一台机器并做着某些工作, 但并不想因为一个单独的连接而毁掉你的所有进程时,这个方法尤其有用.假如你在一个 **screen** 会话中做着某些工作,并且你的连接突然中断了(或者你的笔记本没电了,又或者你的电脑报废了),你只需重新连接一个新的电脑或给电脑充电或重新买一台电脑,接着运行 **screen -r** 来重新连接到远程的电脑,并在刚才掉线的地方接着开始. - -现在,我们都一直在讨论 GNU 的 **screen**,但这个小节的标题提到的是 tmux. 实质上, **tmux** (terminal multiplexer) 就像是 **screen** 的一个进阶版本,带有许多有用的额外功能,所以现在我们开始关注 tmux. 某些发行版本默认包含了 **tmux**; 在其他的发行版本上,通常只需要一个 **apt-get, yum install** 或 **pacman -S** 命令便可以安装它. - -一旦你安装了它过后,键入 **tmux** 来启动它.接着你将注意到,在终端窗口的底部有一条绿色的信息栏,它非常像传统的窗口管理器中的任务栏: 上面显示着一个运行着的程序的列表,机器的主机名,当前时间和日期. 现在运行一个程序,又以 Nano 为例, 敲击 Ctrl+B 后接着按 C 键, 这将在 tmux 会话中创建一个新的窗口,你便可以在终端的底部的任务栏中看到如下的信息: - - 0:nano- 1:bash* - -每一个窗口都有一个数字,当前呈现的程序被一个星号所标记. Ctrl+B 是与 tmux 交互的标准方式, 所以若你敲击这个按键组合并带上一个窗口序号, 那么就会切换到对应的那个窗口.你也可以使用 Ctrl+B 再加上 N 或 P 来分别切换到下一个或上一个窗口 – 或者使用 Ctrl+B 加上 L 来在最近使用的两个窗口之间来进行切换(有点类似于桌面中的经典的 Alt+Tab 组合键的效果). 若需要知道窗口列表,使用 Ctrl+B 再加上 W. - -目前为止,一切都还好:现在你可以在一个单独的终端窗口中运行多个程序,避免混乱(尤其是当你经常与同一个远程主机保持多个 SSH 连接时.). 当想同时看两个程序又该怎么办呢? - -针对这种情况, 可以使用 tmux 中的窗格. 敲击 Ctrl+B 再加上 % , 则当前窗口将分为两个部分,一个在左一个在右.你可以使用 Ctrl+B 再加上 O 来在这两个部分之间切换. 这尤其在你想同时看两个东西时非常实用, – 例如一个窗格看指导手册,另一个窗格里用编辑器看一个配置文件. - -有时,你想对一个单独的窗格进行缩放,而这需要一定的技巧. 首先你需要敲击 Ctrl+B 再加上一个 :(分号),这将使得位于底部的 tmux 栏变为深橙色. 现在,你进入了命令模式,在这里你可以输入命令来操作 tmux. 输入 **resize-pane -R** 来使当前窗格向右移动一个字符的间距, 或使用 **-L** 来向左移动. 对于一个简单的操作,这些命令似乎有些长,但请注意,在 tmux 的命令模式(以前面提到的一个分号开始的模式)下,可以使用 Tab 键来补全命令. 另外需要提及的是, **tmux** 同样也有一个命令历史记录,所以若你想重复刚才的缩放操作,可以先敲击 Ctrl+B 再跟上一个分号并使用向上的箭头来取回刚才输入的命令. 
- -最后,让我们看一下分离和重新连接 - 即我们刚才介绍的 screen 的特色功能. 在 tmux 中,敲击 Ctrl+B 再加上 D 来从当前的终端窗口中分离当前的 tmux 会话, 这使得这个会话的一切工作都在后台中运行.使用 **tmux a** 可以再重新连接到刚才的会话. 但若你同时有多个 tmux 会话在运行时,又该怎么办呢? 我们可以使用下面的命令来列出它们: - - tmux ls - -这个命令将为每个会话分配一个序号; 假如你想重新连接到会话 1, 可以使用 `tmux a -t 1`. tmux 是可以高度定制的,你可以自定义按键绑定并更改配色方案, 所以一旦你适应了它的主要功能,请钻研指导手册以了解更多的内容. - -tmux: 一个针对 shell 的窗口管理器 - -![tmux](http://www.linuxvoice.com/wp-content/uploads/2015/02/tmux-large13.jpg) - -上图中, tmux 开启了两个窗格: 左边是 Vim 正在编辑一个配置文件,而右边则展示着指导手册页. - -> ### Zsh: 另一个 shell ### -> -> 选择是好的,但标准同样重要. 你要知道几乎每个主流的 Linux 发行版本都默认使用 Bash shell – 尽管还存在其他的 shell. Bash 为你提供了一个 shell 能够给你提供的几乎任何功能,包括命令历史记录,文件名补全和许多脚本编程的能力.它成熟,可靠并文档丰富 – 但它不是你唯一的选择. -> -> 许多高级用户热衷于 Zsh, 即 Z shell. 这是 Bash 的一个替代品并提供了 Bash 的几乎所有功能,令外还提供了一些额外的功能. 例如, 在 Zsh 中,你输入 **ls** - 并敲击 Tab 键可以得到 **ls** 可用的各种不同选项的一个大致描述. 而不需要再打开 man page 了! -> -> Zsh 还支持其他强大的自动补全功能: 例如,输入 **cd /u/lo/bi** 再敲击 Tab 键, 则完整的路径名 **/usr/local/bin** 就会出现(这里假设没有其他的路径包含 **u**, **lo** 和 **bi** 等字符.). 或者只输入 **cd** 再跟上 Tab 键,则你将看到着色后的路径名的列表 – 这比 Bash 给出的简单的结果好看得多. -> -> Zsh 在大多数的主要发行版本上都可以得到; 安装它后并输入 **zsh** 便可启动它. 要将你的默认 shell 从 Bash 改为 Zsh, 可以使用 **chsh** 命令. 若需了解更多的信息,请访问 [www.zsh.org][2]. - -### "未来" 的终端 ### - -你或许会好奇为什么包含你的命令行提示符的应用被叫做终端. 这需要追溯到 Unix 的早期, 那时人们一般工作在一个多用户的机器上,这个巨大的电脑主机将占据一座建筑中的一个房间, 人们在某些线路的配合下,使用屏幕和键盘来连接到这个主机, 这些终端机通常被称为 "哑终端", 因为它们不能靠自己做任何重要的执行任务 – 它们只展示通过线路从主机传来的信息,并输送回从键盘的敲击中得到的输入信息. - -今天,几乎所有的我们在自己的机器上执行实际的操作,所以我们的电脑不是传统意义下的终端, 这就是为什么诸如 **XTerm**, Gnome Terminal, Konsole 等程序被称为 "终端模拟器" 的原因 – 他们提供了同昔日的物理终端一样的功能.事实上,在许多方面它们并没有改变多少.诚然,现在我们有了反锯齿字体,更好的颜色和点击网址的能力,但总的来说,几十年来我们一直以同样的方式在工作. - -所以某些程序员正尝试改变这个状况. **Terminology** ([http://tinyurl.com/osopjv9][3]), 它来自于超级时髦的 Enlightenment 窗口管理器背后的团队,旨在将终端引入 21 世纪,例如带有在线媒体显示功能.你可以在一个充满图片的目录里输入 **ls** 命令,便可以看到它们的缩略图,或甚至可以直接在你的终端里播放视频. 这使得一个终端有点类似于一个文件管理器,意味着你可以快速地检查媒体文件的内容而不必用另一个应用来打开它们. - -接着还有 Xiki ([www.xiki.org][4]),它自身的描述为 "命令的革新".它就像是一个传统的 shell, 一个 GUI 和一个 wiki 之间的过渡; 你可以在任何地方输入命令,并在后面将它们的输出存储为笔记以作为参考,并可以创建非常强大的自定义命令.用几句话是很能描述它的,所以作者们已经创作了一个视频来展示它的潜力是多么的巨大(请看 **Xiki** 网站的截屏视频部分). - -并且 Xiki 绝不是那种在几个月之内就消亡的昙花一现的项目,作者们成功地进行了一次 Kickstarter 众筹,在七月底已募集到超过 $84,000. 是的,你没有看错 – $84K 来支持一个终端模拟器.这可能是最不寻常的集资活动,因为某些疯狂的家伙已经决定开始创办它们自己的 Linux 杂志 ...... - -### 下一代终端 ### - -许多命令行和基于文本的程序在功能上与它们的 GUI 程序是相同的,并且常常更加快速和高效. 我们的推荐有: -**Irssi** (IRC 客户端); **Mutt** (mail 客户端); **rTorrent** (BitTorrent); **Ranger** (文件管理器); **htop** (进程监视器). 若给定在终端的限制下来进行 Web 浏览, Elinks 确实做的很好,并且对于阅读那些以文字为主的网站例如 Wikipedia 来说,它非常实用. - -> ### 微调配色方案 ### -> -> 在 Linux Voice 中,我们并不迷恋养眼的东西,但当你每天花费几个小时盯着屏幕看东西时,我们确实认识到美学的重要性.我们中的许多人都喜欢调整我们的桌面和窗口管理器来达到完美的效果,调整阴影效果,摆弄不同的配色方案,直到我们 100% 的满意.(然后出于习惯,摆弄更多的东西.) -> -> 但我们倾向于忽视终端窗口,它理应也获得我们的喜爱, 并且在 [http://ciembor.github.io/4bit][5] 你将看到一个极其棒的配色方案设计器,对于所有受欢迎的终端模拟器(**XTerm, Gnome Terminal, Konsole and Xfce4 Terminal are among the apps supported.**),它可以色设定.移动滑动条直到你看到配色方案 norvana, 然后点击位于该页面右上角的 `得到方案` 按钮. -> -> 相似的,假如你在一个文本编辑器,如 Vim 或 Emacs 上花费很多的时间,使用一个精心设计的调色板也是非常值得的. **Solarized at** [http://ethanschoonover.com/solarized][6] 是一个卓越的方案,它不仅漂亮,而且因追求最大的可用性而设计,在其背后有着大量的研究和测试. 
--------------------------------------------------------------------------------- - -via: http://www.linuxvoice.com/linux-101-power-up-your-shell-8/ - -作者:[Ben Everard][a] -译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.linuxvoice.com/author/ben_everard/ -[1]:http://tinyurl.com/3gvz4ec -[2]:http://www.zsh.org/ -[3]:http://tinyurl.com/osopjv9 -[4]:http://www.xiki.org/ -[5]:http://ciembor.github.io/4bit -[6]:http://ethanschoonover.com/solarized \ No newline at end of file From b0df98351b4c4b573e0045e942b202005f148035 Mon Sep 17 00:00:00 2001 From: joeren Date: Wed, 29 Jul 2015 08:46:27 +0800 Subject: [PATCH 020/207] Update 20150728 How To Fix--There is no command installed for 7-zip archive files.md --- ...x--There is no command installed for 7-zip archive files.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md b/sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md index 8c9b781117..dd3f7211ce 100644 --- a/sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md +++ b/sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md @@ -1,3 +1,4 @@ +Translating by GOLinux! How To Fix: There is no command installed for 7-zip archive files ================================================================================ ### Problem ### @@ -46,4 +47,4 @@ via: http://itsfoss.com/fix-there-is-no-command-installed-for-7-zip-archive-file [a]:http://itsfoss.com/author/abhishek/ [1]:http://www.7-zip.org/ -[2]:http://itsfoss.com/fix-there-is-no-command-installed-for-rar-archive-files/ \ No newline at end of file +[2]:http://itsfoss.com/fix-there-is-no-command-installed-for-rar-archive-files/ From db5aa16eb7ba418bba279f4d795cf80d0fdb303a Mon Sep 17 00:00:00 2001 From: GOLinux Date: Wed, 29 Jul 2015 09:21:53 +0800 Subject: [PATCH 021/207] [Translated]20150728 How To Fix--There is no command installed for 7-zip archive files.md --- ...mmand installed for 7-zip archive files.md | 50 ------------------ ...mmand installed for 7-zip archive files.md | 51 +++++++++++++++++++ 2 files changed, 51 insertions(+), 50 deletions(-) delete mode 100644 sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md create mode 100644 translated/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md diff --git a/sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md b/sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md deleted file mode 100644 index dd3f7211ce..0000000000 --- a/sources/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md +++ /dev/null @@ -1,50 +0,0 @@ -Translating by GOLinux! -How To Fix: There is no command installed for 7-zip archive files -================================================================================ -### Problem ### - -I was trying to install Emerald icon theme in Ubuntu and the theme came in .7z archive. As always, I tried to extract it, in GUI, using the right click and “extract here”. Instead of extracting the file, Ubuntu 15.04 threw an error which read: - -> Could not open this file -> -> There is no command installed for 7-zip archive files. 
Do you want to search for a command to open this file? - -The error looked like this: - -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Install_7zip_ubuntu_1.png) - -### Reason ### - -Reason is quite evident from the error message itself. The 7Z, better to call it [7-zip][1], program is not installed and hence 7Z compressed files are not being extracted. This also hints that Ubuntu doesn’t support 7-Zip files by default. - -### Solution: Install 7zip in Ubuntu ### - -Solution is quite simple as well. Install the 7-Zip package in Ubuntu. Now you might wonder how to install 7Zip in Ubuntu? Well, in the previous error dialogue box if you click on “Search Command”, it will look for available p7zip package. Just click on “Install” here: - -![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Install_7zip_ubuntu.png) - -### Alternative: Install 7zip in terminal ### - -If you prefer terminal, you can install 7zip in terminal using the following command: - - sudo apt-get install p7zip-full - -Note: You’ll find three 7zip packages in Ubuntu: p7zip, p7zip-full and p7zip-rar. The difference between p7zip and p7zip-full is that p7zip is a lighter version providing support only for .7z and .7za files while the full version provides support for more 7z compression algorithms (for audio files etc). For p7zip-rar, it provides support for .rar files along with 7z. - -In fact similar error can be encountered with [RAR files in Ubuntu][2]. Solution is the same, install the correct program. - -I hope this quick post helped you to solve the mystery of **how to open 7zip in Ubuntu 14.04**. Any questions or suggestions are always welcomed. - --------------------------------------------------------------------------------- - -via: http://itsfoss.com/fix-there-is-no-command-installed-for-7-zip-archive-files/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:http://www.7-zip.org/ -[2]:http://itsfoss.com/fix-there-is-no-command-installed-for-rar-archive-files/ diff --git a/translated/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md b/translated/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md new file mode 100644 index 0000000000..61237467ca --- /dev/null +++ b/translated/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md @@ -0,0 +1,51 @@ +如何修复:There is no command installed for 7-zip archive files +================================================================================ +### 问题 ### + +我试着在Ubuntu中安装Emerald图标主题,而这个主题被打包成了.7z归档包。和以往一样,我试着通过在GUI中右击并选择“提取到这里”来将它解压缩。但是Ubuntu 15.04却并没有解压文件,取而代之的,却是丢给了我一个下面这样的错误信息: + +> Could not open this file +> 无法打开该文件 +> +> There is no command installed for 7-zip archive files. Do you want to search for a command to open this file? +> 没有安装用于7-zip归档文件的命令。你是否想要搜索命令来打开该文件? 
+
+错误信息看上去是这样的:
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Install_7zip_ubuntu_1.png)
+
+### 原因 ###
+
+发生该错误的原因从错误信息本身来看就十分明了。7Z,称之为[7-zip][1]更好,该程序没有安装,因此7Z压缩文件就无法解压缩。这也暗示着Ubuntu默认不支持7-zip文件。
+
+### 解决方案:在Ubuntu中安装 7zip ###
+
+要解决该问题也十分简单,在Ubuntu中安装7-Zip包即可。现在,你也许想知道如何在Ubuntu中安装 7Zip吧?好吧,在前面的错误对话框中,如果你点击“Search Command”搜索命令,它会查找可用的 p7zip 包。只要点击“Install”安装,如下图:
+
+![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Install_7zip_ubuntu.png)
+
+### 可选方案:在终端中安装 7Zip ###
+
+如果偏好使用终端,你可以使用以下命令在终端中安装 7zip:
+
+    sudo apt-get install p7zip-full
+
+注意:在Ubuntu中,你会发现有3个7zip包:p7zip,p7zip-full 和 p7zip-rar。p7zip和p7zip-full的区别在于,p7zip是一个更轻量化的版本,仅仅提供了对 .7z 和 .7za 文件的支持,而完整版则提供了对更多(用于音频文件等的) 7z 压缩算法的支持。对于 p7zip-rar,它除了对 7z 文件的支持外,也提供了对 .rar 文件的支持。
+
+事实上,相同的错误也会发生在[Ubuntu中的RAR文件][2]身上。解决方案也一样,安装正确的程序即可。
+
+希望这篇快文帮助你解决了**Ubuntu 15.04中如何打开 7zip**的谜团。如有任何问题或建议,我们都非常欢迎。
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/fix-there-is-no-command-installed-for-7-zip-archive-files/
+
+作者:[Abhishek][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:http://www.7-zip.org/
+[2]:http://itsfoss.com/fix-there-is-no-command-installed-for-rar-archive-files/
From e60fc38a7eceff5263a59b72534684bc910f2ffd Mon Sep 17 00:00:00 2001
From: joeren
Date: Wed, 29 Jul 2015 09:24:22 +0800
Subject: [PATCH 022/207] Update 20150727 Easy Backup Restore and Migrate Containers in Docker.md
---
 ...727 Easy Backup Restore and Migrate Containers in Docker.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md b/sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md
index fc21489ec9..7607fe58f7 100644
--- a/sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md
+++ b/sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md
@@ -1,3 +1,4 @@
+Translating by GOLinux!
Easy Backup, Restore and Migrate Containers in Docker
================================================================================
Today we'll learn how we can easily backup, restore and migrate docker containers out of the box. [Docker][1] is an open source platform that automates the deployment of applications with fast and easy way to pack, ship and run it under a lightweight layer of software called container. It makes the applications platform independent as it acts an additional layer of abstraction and automation of operating system level virtualization on Linux. It utilizes resource isolation features of Linux Kernel with its components cgroups and namespace for avoiding the overhead of virtual machines. It makes the great building blocks for deploying and scaling web apps, databases, and back-end services without depending on a particular stack or provider. Containers are those software layers which are created from a docker image that contains the respective linux filesystem and applications out of the box. If we have a docker container running in our box and need to backup those containers for future use or wanna migrate those containers, then this tutorial will help you how we can backup, restore and migrate docker containers in linux operating system. 
@@ -87,4 +88,4 @@ via: http://linoxide.com/linux-how-to/backup-restore-migrate-containers-docker/ [a]:http://linoxide.com/author/arunp/ [1]:http://docker.com/ -[2]:https://registry.hub.docker.com/ \ No newline at end of file +[2]:https://registry.hub.docker.com/ From e695b053546e6c6f30930a7e91500a7671dcbb09 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Wed, 29 Jul 2015 11:16:25 +0800 Subject: [PATCH 023/207] [Translated]20150727 Easy Backup Restore and Migrate Containers in Docker.md --- ...estore and Migrate Containers in Docker.md | 91 ------------------ ...estore and Migrate Containers in Docker.md | 92 +++++++++++++++++++ 2 files changed, 92 insertions(+), 91 deletions(-) delete mode 100644 sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md create mode 100644 translated/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md diff --git a/sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md b/sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md deleted file mode 100644 index 7607fe58f7..0000000000 --- a/sources/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md +++ /dev/null @@ -1,91 +0,0 @@ -Translating by GOLinux! -Easy Backup, Restore and Migrate Containers in Docker -================================================================================ -Today we'll learn how we can easily backup, restore and migrate docker containers out of the box. [Docker][1] is an open source platform that automates the deployment of applications with fast and easy way to pack, ship and run it under a lightweight layer of software called container. It makes the applications platform independent as it acts an additional layer of abstraction and automation of operating system level virtualization on Linux. It utilizes resource isolation features of Linux Kernel with its components cgroups and namespace for avoiding the overhead of virtual machines. It makes the great building blocks for deploying and scaling web apps, databases, and back-end services without depending on a particular stack or provider. Containers are those software layers which are created from a docker image that contains the respective linux filesystem and applications out of the box. If we have a docker container running in our box and need to backup those containers for future use or wanna migrate those containers, then this tutorial will help you how we can backup, restore and migrate docker containers in linux operating system. - -Here are some easy steps on how we can backup, restore and migrate the docker containers in linux. - -### 1. Backing up the Containers ### - -First of all, in order to backup the containers in docker, we'll wanna see the list of containers that we wanna backup. To do so, we'll need to run docker ps in our linux machine running docker engine with containers already created. - - # docker ps - -![Docker Containers List](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-containers-list.png) - -After that, we'll choose the containers we wanna backup and then we'll go for creating the snapshot of the container. We can use docker commit command in order to create the snapshot. - - # docker commit -p 30b8f18f20b4 container-backup - -![Docker Commit](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-commit.png) - -This will generated a snapshot of the container as the docker image. We can see the docker image by running the command docker images as shown below. 
- - # docker images - -![Docker Images](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-images.png) - -As we can see the snapshot that was taken above has been preserved as docker image. Now, inorder to backup that snapshot, we have two options, one is that we can login into the docker registry hub and push the image and the next is that we can backup the docker image as tarballs for further use. - -If we wanna upload or backup the image in the [docker registry hub][2], we can simply run docker login command to login into the docker registry hub and then push the required image. - - # docker login - -![Docker Login](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-login.png) - - # docker tag a25ddfec4d2a arunpyasi/container-backup:test - # docker push arunpyasi/container-backup - -![Docker Push](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-push.png) - -If we don't wanna backup to the docker registry hub and wanna save the image for future use in the machine locally then we can backup the image as tarballs. To do so, we'll need to run the following docker save command. - - # docker save -o ~/container-backup.tar container-backup - -![taking tarball backup](http://blog.linoxide.com/wp-content/uploads/2015/07/taking-tarball-backup.png) - -To verify if the tarball has been generated or not, we can simply run docker ls inside the directory where we saved the tarball. - -### 2. Restoring the Containers ### - -Next, after we have successfully backed up our docker containers, we'll now go for restoring those contianers which are snapshotted as docker images. If we have pushed those docker images in the registry hub, then we can simply pull that docker image and run it out of the box. - - # docker pull arunpyasi/container-backup:test - -![Docker Pull](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-pull.png) - -But if we have backed up those docker images locally as tarball file, then we can easy load that docker image using docker load command followed by the backed up tarball. - - # docker load -i ~/container-backup.tar - -Now, to ensure that those docker images have been loaded successfully, we'll run docker images command. - - # docker images - -After the images have been loaded, we'll gonna run the docker container from the loaded image. - - # docker run -d -p 80:80 container-backup - -![Restoring Docker Tarball](http://blog.linoxide.com/wp-content/uploads/2015/07/restoring-docker-tarballs.png) - -### 3. Migrating the Docker Containers ### - -Migrating the containers involve both the above process ie Backup and Restore. We can migrate any docker container from one machine to another. In the process of migration, first we take the backup of the container as snapshot docker image. Then, that docker image is either pushed to the docker registry hub or saved as tarball files in the locally. If we have pushed the image to the docker registry hub, we can easily restore and run the container using docker run command from any machine we want. But if we have saved the image as tarballs locally, we can simply copy or move the image to the machine where we want to load image and run the required container. - -### Conclusion ### - -Finally, we have learned how we can backup, restore and migrate the docker containers out of the box. This tutorial is exactly same for every platform of operating system where docker runs successfully. Really, docker is pretty simple and easy to use but very powerful tool. 
It has pretty easy to remember commands which are short enough with many simple but powerful flags and parameters. The above methods makes us comfortable to backup our containers so that we can restore them when needed in future. This can help us recover our containers and images even if our host system crashes or gets wiped out accidentally. If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! Enjoy :-)
-
--------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/backup-restore-migrate-containers-docker/
-
-作者:[Arun Pyasi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arunp/
-[1]:http://docker.com/
-[2]:https://registry.hub.docker.com/
diff --git a/translated/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md b/translated/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md
new file mode 100644
index 0000000000..420430cca8
--- /dev/null
+++ b/translated/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md
@@ -0,0 +1,92 @@
+无忧之道:Docker中容器的备份、恢复和迁移
+================================================================================
+今天,我们将学习如何快捷地对docker容器进行备份、恢复和迁移。[Docker][1]是一个开源平台,用于自动化部署应用,以通过快捷的途径在称之为容器的轻量级软件层下打包、发布和运行这些应用。它使得应用平台独立,因为它扮演了Linux上一个额外的操作系统级虚拟化的自动化抽象层。它通过其组件cgroups和命名空间利用Linux内核的资源分离特性,达到避免虚拟机开销的目的。它使得用于部署和扩展web应用、数据库和后端服务的大规模构建块无需依赖于特定的堆栈或供应者。
+
+所谓的容器,就是那些创建自Docker镜像的软件层,它包含了独立的Linux文件系统和开箱即用的应用程序。如果我们有一个在机器中运行着的Docker容器,并且想要备份这些容器以便今后使用,或者想要迁移这些容器,那么,本教程将帮助你掌握在Linux操作系统中备份、恢复和迁移Docker容器的方法。
+
+我们怎样才能在Linux中备份、恢复和迁移Docker容器呢?这里为您提供了一些便捷的步骤。
+
+### 1. 备份容器 ###
+
+首先,为了备份Docker中的容器,我们要先看看想要备份的容器列表。要达成该目的,我们需要在运行着Docker引擎并已创建了容器的Linux机器上运行 docker ps 命令。
+
+    # docker ps
+
+![Docker Containers List](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-containers-list.png)
+
+在此之后,我们要选择我们想要备份的容器,然后我们会去创建该容器的快照。我们可以使用 docker commit 命令来创建快照。
+
+    # docker commit -p 30b8f18f20b4 container-backup
+
+![Docker Commit](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-commit.png)
+
+该命令会生成一个作为Docker镜像的容器快照,我们可以通过运行 docker images 命令来查看Docker镜像,如下。
+
+    # docker images
+
+![Docker Images](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-images.png)
+
+正如我们所看见的,上面做的快照已经作为Docker镜像保存了。现在,为了备份该快照,我们有两个选择,一个是我们可以登陆进Docker注册中心,并推送该镜像;另一个是我们可以将Docker镜像打包成tarball备份,以供今后使用。
+
+如果我们想要在[Docker注册中心][2]上传或备份镜像,我们只需要运行 docker login 命令来登录进Docker注册中心,然后推送所需的镜像即可。
+
+    # docker login
+
+![Docker Login](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-login.png)
+
+    # docker tag a25ddfec4d2a arunpyasi/container-backup:test
+    # docker push arunpyasi/container-backup
+
+![Docker Push](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-push.png)
+
+如果我们不想备份到docker注册中心,而是想要将此镜像保存在本地机器中,以供日后使用,那么我们可以将其作为tarball备份。要完成该操作,我们需要运行以下 docker save 命令。
+
+    # docker save -o ~/container-backup.tar container-backup
+
+![taking tarball backup](http://blog.linoxide.com/wp-content/uploads/2015/07/taking-tarball-backup.png)
+
+要验证tarball是否已经生成,我们只需要在保存tarball的目录中运行 ls 命令。
+
+### 2. 
恢复容器 ### + +接下来,在我们成功备份了我们的Docker容器后,我们现在来恢复这些被快照成Docker镜像的容器。如果我们已经在注册中心推送了这些Docker镜像,那么我们仅仅需要把那个Docker镜像拖回并直接运行即可。 + + # docker pull arunpyasi/container-backup:test + +![Docker Pull](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-pull.png) + +但是,如果我们将这些Docker镜像作为tarball文件备份到了本地,那么我们只要使用 docker load 命令,后面加上tarball的备份路径,就可以加载该Docker镜像了。 + + # docker load -i ~/container-backup.tar + +现在,为了确保这些Docker镜像已经加载成功,我们来运行 docker images 命令。 + + # docker images + +在镜像被加载后,我们将从加载的镜像去运行Docker容器。 + + # docker run -d -p 80:80 container-backup + +![Restoring Docker Tarball](http://blog.linoxide.com/wp-content/uploads/2015/07/restoring-docker-tarballs.png) + +### 3. 迁移Docker容器 ### + +迁移容器同时涉及到了上面两个操作,备份和恢复。我们可以将任何一个Docker容器从一台机器迁移到另一台机器。在迁移过程中,首先我们将容器的备份作为快照Docker镜像。然后,该Docker镜像或者是被推送到了Docker注册中心,或者被作为tarball文件保存到了本地。如果我们将镜像推送到了Docker注册中心,我们简单地从任何我们想要的机器上使用 docker run 命令来恢复并运行该容器。但是,如果我们将镜像打包成tarball备份到了本地,我们只需要拷贝或移动该镜像到我们想要的机器上,加载该镜像并运行需要的容器即可。 + +### 尾声 ### + +最后,我们已经学习了如何快速地备份、恢复和迁移Docker容器,本教程适用于各个成功运行Docker的操作系统平台。真的,Docker是一个相当简单易用,然而功能却十分强大的工具。它的命令相当易记,这些命令都非常短,带有许多简单而强大的标记和参数。上面的方法让我们备份容器时很是安逸,使得我们可以在日后很轻松地恢复它们。这会帮助我们恢复我们的容器和镜像,即便主机系统崩溃,甚至意外地被清除。如果你还有很多问题、建议、反馈,请在下面的评论框中写出来吧,可以帮助我们改进或更新我们的内容。谢谢大家!享受吧 :-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/backup-restore-migrate-containers-docker/ + +作者:[Arun Pyasi][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://docker.com/ +[2]:https://registry.hub.docker.com/ From c135e226aad8457e323f83d79c2264dc3dff4da9 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 29 Jul 2015 14:32:23 +0800 Subject: [PATCH 024/207] PUB:20150612 How to Configure Swarm Native Clustering for Docker @GOLinux --- ...gure Swarm Native Clustering for Docker.md | 25 +++++++++++-------- 1 file changed, 14 insertions(+), 11 deletions(-) rename {translated/tech => published}/20150612 How to Configure Swarm Native Clustering for Docker.md (65%) diff --git a/translated/tech/20150612 How to Configure Swarm Native Clustering for Docker.md b/published/20150612 How to Configure Swarm Native Clustering for Docker.md similarity index 65% rename from translated/tech/20150612 How to Configure Swarm Native Clustering for Docker.md rename to published/20150612 How to Configure Swarm Native Clustering for Docker.md index 82849b4661..66ff94367e 100644 --- a/translated/tech/20150612 How to Configure Swarm Native Clustering for Docker.md +++ b/published/20150612 How to Configure Swarm Native Clustering for Docker.md @@ -1,34 +1,37 @@ -为Docker配置Swarm本地集群 +如何配置一个 Docker Swarm 原生集群 ================================================================================ -嗨,大家好。今天我们来学一学Swarm相关的内容吧,我们将学习通过Swarm来创建Docker本地集群。[Docker Swarm][1]是用于Docker的本地集群项目,它可以将Docker主机池转换成单个的虚拟主机。Swarm提供了标准的Docker API,所以任何可以和Docker守护进程通信的工具都可以使用Swarm来透明地规模化多个主机。Swarm遵循“包含电池并可拆卸”的原则,就像其它Docker项目一样。它附带有一个开箱即用的简单的后端调度程序,而且作为初始开发套件,也为其开发了一个可启用即插即用后端的API。其目标在于为一些简单的使用情况提供一个平滑的、开箱即用的体验,并且它允许在更强大的后端,如Mesos,中开启交换,以达到大量生产部署的目的。Swarm配置和使用极其简单。 + +嗨,大家好。今天我们来学一学Swarm相关的内容吧,我们将学习通过Swarm来创建Docker原生集群。[Docker Swarm][1]是用于Docker的原生集群项目,它可以将一个Docker主机池转换成单个的虚拟主机。Swarm工作于标准的Docker API,所以任何可以和Docker守护进程通信的工具都可以使用Swarm来透明地伸缩到多个主机上。就像其它Docker项目一样,Swarm遵循“内置电池,并可拆卸”的原则(LCTT 译注:batteries included,内置电池原来是 Python 圈里面对 Python 的一种赞誉,指自给自足,无需外求的丰富环境;but 
removable,并可拆卸应该指的是非强制耦合)。它附带有一个开箱即用的简单的后端调度程序,而且作为初始开发套件,也为其开发了一个可插拔不同后端的API。其目标在于为一些简单的使用情况提供一个平滑的、开箱即用的体验,并且它允许切换为更强大的后端,如Mesos,以用于大规模生产环境部署。Swarm配置和使用极其简单。 这里给大家提供Swarm 0.2开箱的即用一些特性。 1. Swarm 0.2.0大约85%与Docker引擎兼容。 2. 它支持资源管理。 -3. 它具有一些带有限制器和类同器高级调度特性。 +3. 它具有一些带有限制和类同功能的高级调度特性。 4. 它支持多个发现后端(hubs,consul,etcd,zookeeper) 5. 它使用TLS加密方法进行安全通信和验证。 -那么,我们来看一看Swarm的一些相当简单而简易的使用步骤吧。 +那么,我们来看一看Swarm的一些相当简单而简用的使用步骤吧。 ### 1. 运行Swarm的先决条件 ### -我们必须在所有节点安装Docker 1.4.0或更高版本。虽然哥哥节点的IP地址不需要要公共地址,但是Swarm管理器必须可以通过网络访问各个节点。 +我们必须在所有节点安装Docker 1.4.0或更高版本。虽然各个节点的IP地址不需要要公共地址,但是Swarm管理器必须可以通过网络访问各个节点。 -注意:Swarm当前还处于beta版本,因此功能特性等还有可能发生改变,我们不推荐你在生产环境中使用。 +**注意**:Swarm当前还处于beta版本,因此功能特性等还有可能发生改变,我们不推荐你在生产环境中使用。 ### 2. 创建Swarm集群 ### 现在,我们将通过运行下面的命令来创建Swarm集群。各个节点都将运行一个swarm节点代理,该代理会注册、监控相关的Docker守护进程,并更新发现后端获取的节点状态。下面的命令会返回一个唯一的集群ID标记,在启动节点上的Swarm代理时会用到它。 +在集群管理器中: + # docker run swarm create ![Creating Swarm Cluster](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-swarm-cluster.png) ### 3. 启动各个节点上的Docker守护进程 ### -我们需要使用-H标记登陆进我们将用来创建几圈和启动Docker守护进程的各个节点,它会保证Swarm管理器能够通过TCP访问到各个节点上的Docker远程API。要启动Docker守护进程,我们需要在各个节点内部运行以下命令。 +我们需要登录进我们将用来创建集群的每个节点,并在其上使用-H标记启动Docker守护进程。它会保证Swarm管理器能够通过TCP访问到各个节点上的Docker远程API。要启动Docker守护进程,我们需要在各个节点内部运行以下命令。 # docker -H tcp://0.0.0.0:2375 -d @@ -42,7 +45,7 @@ ![Adding Nodes to Cluster](http://blog.linoxide.com/wp-content/uploads/2015/05/adding-nodes-to-cluster.png) -** 注意**:我们需要用步骤2中获取到的节点IP地址和集群ID替换这里的。 +**注意**:我们需要用步骤2中获取到的节点IP地址和集群ID替换这里的。 ### 5. 开启Swarm管理器 ### @@ -60,7 +63,7 @@ ![Accessing Swarm Clusters](http://blog.linoxide.com/wp-content/uploads/2015/05/accessing-swarm-cluster.png) -** 注意**:我们需要替换为运行swarm管理器的主机的IP地址和端口。 +**注意**:我们需要替换为运行swarm管理器的主机的IP地址和端口。 ### 7. 使用docker CLI来访问节点 ### @@ -79,7 +82,7 @@ ### 尾声 ### -Swarm真的是一个有着相当不错的功能的docker,它可以用于创建和管理集群。它相当易于配置和使用,当我们在它上面使用限制器和类同器师它更为出色。高级调度程序是一个相当不错的特性,它可以应用过滤器来通过端口、标签、健康状况来排除节点,并且它使用策略来挑选最佳节点。那么,如果你有任何问题、评论、反馈,请在下面的评论框中写出来吧,好让我们知道哪些材料需要补充或改进。谢谢大家了!尽情享受吧 :-) +Swarm真的是一个有着相当不错的功能的docker,它可以用于创建和管理集群。它相当易于配置和使用,当我们在它上面使用限制器和类同器时它更为出色。高级调度程序是一个相当不错的特性,它可以应用过滤器来通过端口、标签、健康状况来排除节点,并且它使用策略来挑选最佳节点。那么,如果你有任何问题、评论、反馈,请在下面的评论框中写出来吧,好让我们知道哪些材料需要补充或改进。谢谢大家了!尽情享受吧 :-) -------------------------------------------------------------------------------- @@ -87,7 +90,7 @@ via: http://linoxide.com/linux-how-to/configure-swarm-clustering-docker/ 作者:[Arun Pyasi][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From e1cfb34b0b010db96d7f11f02504fd3187ed8b12 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 29 Jul 2015 14:54:46 +0800 Subject: [PATCH 025/207] PUB:20150722 12 Useful PHP Commandline Usage Every Linux User Must Know @GOLinux --- ...ndline Usage Every Linux User Must Know.md | 70 +++++++++++-------- 1 file changed, 39 insertions(+), 31 deletions(-) rename {translated/tech => published}/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md (71%) diff --git a/translated/tech/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md b/published/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md similarity index 71% rename from translated/tech/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md rename to published/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md index 8c00c6a75c..2b5a6e9cf9 100644 --- a/translated/tech/20150722 12 Useful PHP Commandline Usage Every Linux User Must 
Know.md +++ b/published/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md @@ -1,14 +1,12 @@ -每个Linux人应知应会的12个有用的PHP命令行用法 +在 Linux 命令行中使用和执行 PHP 代码(二):12 个 PHP 交互性 shell 的用法 ================================================================================ -在我上一篇文章“[在Linux命令行中使用并执行PHP代码][1]”中,我同时着重讨论了直接在Linux命令行中运行PHP代码以及在Linux终端中执行PHP脚本文件。 +在上一篇文章“[在 Linux 命令行中使用和执行 PHP 代码(一)][1]”中,我同时着重讨论了直接在Linux命令行中运行PHP代码以及在Linux终端中执行PHP脚本文件。 ![Run PHP Codes in Linux Commandline](http://www.tecmint.com/wp-content/uploads/2015/07/Run-PHP-Codes-in-Linux-Commandline.jpeg) -在Linux命令行运行PHP代码——第二部分 +本文旨在让你了解一些相当不错的Linux终端中的PHP交互性 shell 的用法特性。 -本文旨在让你了解一些相当不错的Linux终端中的PHP用法特性。 - -让我们先在PHP交互shell中来对`php.ini`设置进行一些配置吧。 +让我们先在PHP 的交互shell中来对`php.ini`设置进行一些配置吧。 **6. 设置PHP命令行提示符** @@ -21,7 +19,8 @@ php > #cli.prompt=Hi Tecmint :: ![Enable PHP Interactive Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Enable-PHP-Interactive-Shell.png) -启用PHP交互Shell + +*启用PHP交互Shell* 同时,你也可以设置当前时间作为你的命令行提示符,操作如下: @@ -31,20 +30,22 @@ **7. 每次输出一屏** -在我们上一篇文章中,我们已经在原始命令中通过管道在很多地方使用了‘less‘命令。通过该操作,我们可以在那些不能一次满屏输出的地方获得每次一屏的输出。但是,我们可以通过配置php.ini文件,设置pager的值为less以每次输出一屏,操作如下: +在我们上一篇文章中,我们已经在原始命令中通过管道在很多地方使用了`less`命令。通过该操作,我们可以在那些不能一屏全部输出的地方获得分屏显示。但是,我们可以通过配置php.ini文件,设置pager的值为less以每次输出一屏,操作如下: $ php -a php > #cli.pager=less ![Fix PHP Screen Output](http://www.tecmint.com/wp-content/uploads/2015/07/Fix-PHP-Screen-Output.png) -固定PHP屏幕输出 + +*限制PHP屏幕输出* 这样,下次当你运行一个命令(比如说条调试器`phpinfo();`)的时候,而该命令的输出内容又太过庞大而不能固定在一屏,它就会自动产生适合你当前屏幕的输出结果。 php > phpinfo(); ![PHP Info Output](http://www.tecmint.com/wp-content/uploads/2015/07/PHP-Info-Output.png) -PHP信息输出 + +*PHP信息输出* **8. 建议和TAB补全** @@ -58,50 +59,53 @@ PHP shell足够智能,它可以显示给你建议和进行TAB补全,你可 php > #cli.pager [TAB] -你可以一直按TAB键来获得选项,直到选项值满足要求。所有的行为都将记录到`~/.php-history`文件。 +你可以一直按TAB键来获得建议的补全,直到该值满足要求。所有的行为都将记录到`~/.php-history`文件。 要检查你的PHP交互shell活动日志,你可以执行: $ nano ~/.php_history | less ![Check PHP Interactive Shell Logs](http://www.tecmint.com/wp-content/uploads/2015/07/Check-PHP-Interactive-Shell-Logs.png) -检查PHP交互Shell日志 + +*检查PHP交互Shell日志* **9. 你可以在PHP交互shell中使用颜色,你所需要知道的仅仅是颜色代码。** -使用echo来打印各种颜色的输出结果,看我信手拈来: +使用echo来打印各种颜色的输出结果,类似这样: - php > echo “color_code1 TEXT second_color_code”; + php > echo "color_code1 TEXT second_color_code"; -一个更能说明问题的例子是: +具体来说是: php > echo "\033[0;31m Hi Tecmint \x1B[0m"; ![Enable Colors in PHP Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Enable-Colors-in-PHP-Shell.png) -在PHP Shell中启用彩色 + +*在PHP Shell中启用彩色* 到目前为止,我们已经看到,按回车键意味着执行命令,然而PHP Shell中各个命令结尾的分号是必须的。 -**10. PHP shell中的用以打印后续组件的路径名称** +**10. 在PHP shell中用basename()输出路径中最后一部分** -PHP shell中的basename函数从给出的包含有到文件或目录路径的后续组件的路径名称。 +PHP shell中的basename函数可以从给出的包含有到文件或目录路径的最后部分。 basename()样例#1和#2。 php > echo basename("/var/www/html/wp/wp-content/plugins"); php > echo basename("www.tecmint.com/contact-us.html"); -上述两个样例都将输出: +上述两个样例将输出: plugins contact-us.html ![Print Base Name in PHP](http://www.tecmint.com/wp-content/uploads/2015/07/Print-Base-Name-in-PHP.png) -在PHP中打印基本名称 + +*在PHP中打印基本名称* **11. 你可以使用PHP交互shell在你的桌面创建文件(比如说test1.txt),就像下面这么简单** - $ touch("/home/avi/Desktop/test1.txt"); + php> touch("/home/avi/Desktop/test1.txt"); 我们已经见识了PHP交互shell在数学运算中有多优秀,这里还有更多一些例子会令你吃惊。 @@ -112,7 +116,8 @@ strlen函数用于获取指定字符串的长度。 php > echo strlen("tecmint.com"); ![Print Length String in PHP](http://www.tecmint.com/wp-content/uploads/2015/07/Print-Length-String-in-PHP.png) -在PHP中打印字符串长度 + +*在PHP中打印字符串长度* **13. 
PHP交互shell可以对数组排序,是的,你没听错** @@ -137,9 +142,10 @@ strlen函数用于获取指定字符串的长度。 ) ![Sort Arrays in PHP](http://www.tecmint.com/wp-content/uploads/2015/07/Sort-Arrays-in-PHP.png) -在PHP中对数组排序 -**14. 在PHP交互Shell中获取Pi的值** +*在PHP中对数组排序* + +**14. 在PHP交互Shell中获取π的值** php > echo pi(); @@ -151,14 +157,15 @@ strlen函数用于获取指定字符串的长度。 12.247448713916 -**16. 从0-10的范围内回显一个随机数** +**16. 从0-10的范围内挑选一个随机数** php > echo rand(0, 10); ![Get Random Number in PHP](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Random-Number-in-PHP.png) -在PHP中获取随机数 -**17. 获取某个指定字符串的md5sum和sha1sum,例如,让我们在PHP Shell中检查某个字符串(比如说avi)的md5sum和sha1sum,并交叉检查这些带有bash shell生成的md5sum和sha1sum的结果。** +*在PHP中获取随机数* + +**17. 获取某个指定字符串的md5校验和sha1校验,例如,让我们在PHP Shell中检查某个字符串(比如说avi)的md5校验和sha1校验,并交叉校验bash shell生成的md5校验和sha1校验的结果。** php > echo md5(avi); 3fca379b3f0e322b7b7967bfcfb948ad @@ -175,9 +182,10 @@ strlen函数用于获取指定字符串的长度。 8f920f22884d6fea9df883843c4a8095a2e5ac6f - ![Check md5sum and sha1sum in PHP](http://www.tecmint.com/wp-content/uploads/2015/07/Check-md5sum-and-sha1sum.png) -在PHP中检查md5sum和sha1sum -这里只是PHP Shell中所能获取的功能和PHP Shell的交互特性的惊鸿一瞥,这些就是到现在为止我所讨论的一切。保持和tecmint的连线,在评论中为我们提供你有价值的反馈吧。为我们点赞并分享,帮助我们扩散哦。 +*在PHP中检查md5校验和sha1校验* + +这里只是PHP Shell中所能获取的功能和PHP Shell的交互特性的惊鸿一瞥,这些就是到现在为止我所讨论的一切。保持连线,在评论中为我们提供你有价值的反馈吧。为我们点赞并分享,帮助我们扩散哦。 -------------------------------------------------------------------------------- @@ -185,9 +193,9 @@ via: http://www.tecmint.com/execute-php-codes-functions-in-linux-commandline/ 作者:[Avishek Kumar][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/run-php-codes-from-linux-commandline/ +[1]:https://linux.cn/article-5906-1.html From 685a9945742f54247a9d66c842f01521b85cf0e1 Mon Sep 17 00:00:00 2001 From: DongShuaike Date: Wed, 29 Jul 2015 16:39:35 +0800 Subject: [PATCH 026/207] Translated by DongShuaike --- ...ion Swarm Clusters using Docker Machine.md | 125 ++++++++++++++++++ 1 file changed, 125 insertions(+) create mode 100644 translated/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md diff --git a/translated/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md b/translated/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md new file mode 100644 index 0000000000..940c68b55d --- /dev/null +++ b/translated/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md @@ -0,0 +1,125 @@ +如何使用Docker Machine部署Swarm集群 +================================================================================ +大家好,今天我们来研究一下如何使用Docker Machine部署Swarm集群。Docker Machine提供了独立的Docker API,所以任何与Docker守护进程进行交互的工具都可以使用Swarm来(透明地)扩增到多台主机上。Docker Machine可以用来在个人电脑、云端以及的数据中心里创建Docker主机。它为创建服务器,安装Docker以及根据用户设定配置Docker客户端提供了便捷化的解决方案。我们可以使用任何驱动来部署swarm集群,并且swarm集群将由于使用了TLS加密具有极好的安全性。 + +下面是我提供的简便方法。 +### 1. 
安装Docker Machine ###

Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Github上下载最新版本的Docker Machine。这里我们使用curl命令来下载最新版本的Docker Machine,即 0.2.0 版。

64位操作系统:

    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine

32位操作系统:

    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine

下载了最新版本的Docker Machine之后,我们需要对 /usr/local/bin/ 目录下的docker-machine文件的权限进行修改。命令如下:

    # chmod +x /usr/local/bin/docker-machine

在做完上面的事情以后,我们必须确保docker-machine已经安装好。怎么检查呢?运行 docker-machine -v 指令,该指令将会给出我们系统上所安装的docker-machine版本。

    # docker-machine -v

![Installing Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png)

为了让Docker命令能够在我们的机器上运行,还必须在机器上安装Docker客户端。命令如下。

    # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker
    # chmod +x /usr/local/bin/docker

### 2. 创建Machine ###

在将Docker Machine安装到我们的设备上之后,我们需要使用Docker Machine创建一个machine。在这篇文章中,我们会将其部署在Digital Ocean Platform上。所以我们将使用“digitalocean”作为它的Driver API,然后将docker swarm运行在其中。这个Droplet会被设置为Swarm主节点,我们还要创建另外一个Droplet,并将其设定为Swarm节点代理。

创建machine的命令如下:

    # docker-machine create --driver digitalocean --digitalocean-access-token <API-Token> linux-dev

**Note**: 假设我们要创建一个名为“linux-dev”的machine。<API-Token>是用户在Digital Ocean Cloud Platform的Digital Ocean控制面板中生成的密钥。为了获取这个密钥,我们需要登录我们的Digital Ocean控制面板,然后点击API选项,之后点击Generate New Token,起个名字,然后在Read和Write两个选项上打钩。之后我们将得到一个很长的十六进制密钥,这个就是<API-Token>了。用其替换上面那条命令中的<API-Token>字段。

现在,运行下面的指令,将Machine configuration装载进shell。

    # eval "$(docker-machine env linux-dev)"

![Docker Machine Digitalocean Cloud](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-digitalocean-cloud.png)

然后,我们使用如下命令将我们的machine标记为ACTIVE状态。

    # docker-machine active linux-dev

现在,我们检查它(指machine)是否被标记为了 ACTIVE “*”。

    # docker-machine ls

![Docker Machine Active List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-active-list.png)

### 3. 运行Swarm Docker镜像 ###

现在,在我们创建完成了machine之后,我们需要将swarm docker镜像部署上去。这个machine将会运行这个docker镜像,并且控制Swarm主节点和从节点。使用下面的指令运行镜像:

    # docker run swarm create

![Docker Machine Swarm Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-create.png)

如果你想要在**32位操作系统**上运行swarm docker镜像,你需要SSH登录到Droplet当中。

    # docker-machine ssh
    # docker run swarm create
    # exit

### 4. 创建Swarm主节点 ###

在我们的swarm镜像已经运行在machine当中之后,我们将要创建一个Swarm主节点。使用下面的命令添加主节点,它同时也会作为一个节点加入集群。

    # docker-machine create \
    -d digitalocean \
    --digitalocean-access-token <API-Token> \
    --swarm \
    --swarm-master \
    --swarm-discovery token://<CLUSTER-ID> \
    swarm-master

![Docker Machine Swarm Master Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-master-create.png)

### 5. 创建Swarm节点群 ###

现在,我们将要创建一个swarm节点,此节点将与Swarm主节点相连接。下面的指令将创建一个新的名为swarm-node的droplet,其与Swarm主节点相连。到此,我们就拥有了一个两节点的swarm集群了。

    # docker-machine create \
    -d digitalocean \
    --digitalocean-access-token <API-Token> \
    --swarm \
    --swarm-discovery token://<CLUSTER-ID> \
    swarm-node

![Docker Machine Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-nodes.png)

### 6. 
与Swarm主节点连接 ###

现在,我们连接Swarm主节点,以便我们可以依照需求和配置文件在节点间部署Docker容器。运行下列命令将Swarm主节点的Machine配置文件加载到环境当中。

    # eval "$(docker-machine env --swarm swarm-master)"

然后,我们就可以跨节点地运行我们所需的容器了。在这里,我们还要检查一下是否一切正常。所以,运行**docker info**命令来检查Swarm集群的信息。

    # docker info

### 总结 ###

我们可以用Docker Machine轻而易举地创建Swarm集群。这种方法有非常高的效率,因为它极大地减少了系统管理员和用户的时间消耗。在这篇文章中,我们以Digital Ocean作为驱动,通过创建一个主节点和一个从节点成功地部署了集群。其他可用的驱动还有VirtualBox、Google Cloud Computing、Amazon Web Service、Microsoft Azure等等。这些连接都是通过TLS进行加密的,具有很高的安全性。如果你有任何疑问、建议、反馈,欢迎在下面的评论框中注明,以便我们可以更好地提高文章的质量!

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/provision-swarm-clusters-using-docker-machine/

作者:[Arun Pyasi][a]
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/

From d2a812c5715c967171b989f64f1389b39cb5f547 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Wed, 29 Jul 2015 17:40:16 +0800
Subject: [PATCH 027/207] =?UTF-8?q?20150729-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ment and How Do You Enable It in Ubuntu.md | 72 +++++++++++++++++
 1 file changed, 72 insertions(+)
 create mode 100644 sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md

diff --git a/sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md b/sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md
new file mode 100644
index 0000000000..1641bd8f20
--- /dev/null
+++ b/sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md
@@ -0,0 +1,72 @@
What is Logical Volume Management and How Do You Enable It in Ubuntu?
================================================================================
> Logical Volume Management (LVM) is a disk management option that every major Linux distribution includes. Whether you need to set up storage pools or just need to dynamically create partitions, LVM is probably what you are looking for.

### What is LVM? ###

Logical Volume Manager allows for a layer of abstraction between your operating system and the disks/partitions it uses. In traditional disk management your operating system looks for what disks are available (/dev/sda, /dev/sdb, etc.) and then looks at what partitions are available on those disks (/dev/sda1, /dev/sda2, etc.).

With LVM, disks and partitions can be abstracted so that multiple disks and partitions are combined into one device. Your operating system will never know the difference because LVM only shows the OS the volume groups (disks) and logical volumes (partitions) that you have set up.

Because volume groups and logical volumes aren’t physically tied to a hard drive, it is easy to dynamically resize and create new disks and partitions. In addition, LVM can give you features that your file system is not capable of providing. For example, Ext3 does not have support for live snapshots, but if you’re using LVM you have the ability to take a snapshot of your logical volumes without unmounting the disk.

### When Should You Use LVM? ###

The first thing you should consider before setting up LVM is what you want to accomplish with your disks and partitions. Some distributions, like Fedora, install with LVM by default. 
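Before deciding, it can also help to see whether your current system is already using LVM. A minimal check, assuming the standard LVM2 userspace tools are installed (package names vary slightly by distribution), is to ask LVM what it knows about:

    sudo pvs   # physical volumes, if any
    sudo vgs   # volume groups
    sudo lvs   # logical volumes

If all three print nothing, your disks are almost certainly using traditional partitions only.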
+ +If you are using Ubuntu on a laptop with only one internal hard drive and you don’t need extended features like live snapshots, then you may not need LVM. If you need easy expansion or want to combine multiple hard drives into a single pool of storage then LVM may be what you have been looking for. + +### Setting up LVM in Ubuntu ### + +First thing to know about using LVM is there is no easy way to convert your existing traditional partitions to logical volumes. It is possible to move to a new partition that uses LVM, but that won’t be something that we will cover in this article; instead we are going to take the approach of setting up LVM on a fresh installation of Ubuntu 10.10. + +![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/ubuntu-10-banner.png) + +To install Ubuntu using LVM you need to use the alternate install CD. Download it from the link below and burn a CD or [use unetbootin to create a USB drive][1]. + +![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/download-web.png) + +Boot your computer from the alternate install disk and select your options up until the partition disks screen and select guided – use entire disk and set up LVM. + +*Note: This will format your entire hard drive so if you are trying to dual boot or have another installation select manual instead.* + +![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-1.png) + +Select the main disk you want to use, typically your largest drive, and then go to the next step. + +![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-2.png) + +You will immediately need to write the changes to disk so make sure you selected the right disk and then write the changes. + +![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-3.png) + +Select the size you want the first logical volume to be and then continue. + +![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-4.png) + +Confirm your disk partitions and continue with the installation. + +![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-5.png) + +The final step will write the GRUB bootloader to the hard drive. It is important to note that GRUB cannot be on an LVM partition because computer BIOSes cannot directly read from a logical volume. Ubuntu will automatically create a 255 MB ext2 partition for your bootloader. + +![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-6.png) + +After the installation is complete, reboot the machine and boot into Ubuntu as normal. There should be no perceivable difference between using LVM or traditional disk management with this type of installation. + +![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/disk-manager.png) + +To use LVM to its full potential, stay tuned for our upcoming article on managing your LVM installation. 
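As a small preview of that management, growing a logical volume later on could look roughly like the following sketch; the volume group and logical volume names are purely illustrative, so substitute your own:

    sudo lvextend -L +5G /dev/your-vg/your-lv   # grow the logical volume by 5 GB (hypothetical names)
    sudo resize2fs /dev/your-vg/your-lv         # grow the ext filesystem to fill the new space

Both commands come from the standard LVM2 and e2fsprogs tools, and an ext3/ext4 filesystem can typically be grown this way even while mounted.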
+ +-------------------------------------------------------------------------------- + +via: http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-how-do-you-enable-it-in-ubuntu/ + +作者:[How-To Geek][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://plus.google.com/+howtogeek?prsrc=5 +[1]:http://www.howtogeek.com/howto/13379/create-a-bootable-ubuntu-9.10-usb-flash-drive/ \ No newline at end of file From 07d57a8f363881dc7d8b235803819e907bc43f1b Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 29 Jul 2015 22:24:39 +0800 Subject: [PATCH 028/207] =?UTF-8?q?=E8=A1=A5=E5=AE=8C=20PR?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @DongShuaike --- ...ion Swarm Clusters using Docker Machine.md | 127 ------------------ 1 file changed, 127 deletions(-) delete mode 100644 sources/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md diff --git a/sources/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md b/sources/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md deleted file mode 100644 index 092eb3dbbd..0000000000 --- a/sources/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md +++ /dev/null @@ -1,127 +0,0 @@ -Translating by DongShuaike - -How to Provision Swarm Clusters using Docker Machine -================================================================================ -Hi all, today we'll learn how we can deploy Swarm Clusters using Docker Machine. It serves the standard Docker API, so any tool which can communicate with a Docker daemon can use Swarm to transparently scale to multiple hosts. Docker Machine is an application that helps to create Docker hosts on our computer, on cloud providers and inside our own data center. It provides easy solution for creating servers, installing Docker on them and then configuring the Docker client according the users configuration and requirements. We can provision swarm clusters with any driver we need and is highly secured with TLS Encryption. - -Here are some quick and easy steps on how to provision swarm clusters with Docker Machine. - -### 1. Installing Docker Machine ### - -Docker Machine supports awesome on every Linux Operating System. First of all, we'll need to download the latest version of Docker Machine from the Github site . Here, we'll use curl to download the latest version of Docker Machine ie 0.2.0 . - -For 64 Bit Operating System - - # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine - -For 32 Bit Operating System - - # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine - -After downloading the latest release of Docker Machine, we'll make the file named docker-machine under /usr/local/bin/ executable using the command below. - - # chmod +x /usr/local/bin/docker-machine - -After doing the above, we'll wanna ensure that we have successfully installed docker-machine. To check it, we can run the docker-machine -v which will give output of the version of docker-machine installed in our system. 
- - # docker-machine -v - -![Installing Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png) - -To enable Docker commands on our machines, make sure to install the Docker client as well by running the command below. - - # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker - # chmod +x /usr/local/bin/docker - -### 2. Creating Machine ### - -After installing Machine into our working PC or device, we'll wanna go forward to create a machine using Docker Machine. Here, in this tutorial we'll gonna deploy a machine in the Digital Ocean Platform so we'll gonna use "digitalocean" as its Driver API then, docker swarm will be running into that Droplet which will be further configured as Swarm Master and another droplet will be created which will be configured as Swarm Node Agent. - -So, to create the machine, we'll need to run the following command. - - # docker-machine create --driver digitalocean --digitalocean-access-token linux-dev - -**Note**: Here, linux-dev is the name of the machine we are wanting to create. is a security key which can be generated from the Digital Ocean Control Panel of the account holder of Digital Ocean Cloud Platform. To retrieve that key, we simply need to login to our Digital Ocean Control Panel then click on API, then click on Generate New Token and give it a name tick on both Read and Write. Then we'll get a long hex key, thats the now, simply replace it into the command above. - -Now, to load the Machine configuration into the shell we are running the comamands, run the following command. - - # eval "$(docker-machine env linux-dev)" - -![Docker Machine Digitalocean Cloud](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-digitalocean-cloud.png) - -Then, we'll mark our machine as ACTIVE by running the below command. - - # docker-machine active linux-dev - -Now, we'll check whether its been marked as ACTIVE "*" or not. - - # docker-machine ls - -![Docker Machine Active List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-active-list.png) - -### 3. Running Swarm Docker Image ### - -Now, after we finish creating the required machine. We'll gonna deploy swarm docker image in our active machine. This machine will run the docker image and will control over the Swarm master and node. To run the image, we can simply run the below command. - - # docker run swarm create - -![Docker Machine Swarm Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-create.png) - -If you are trying to run swarm docker image using **32 bit Operating System** in the computer where Docker Machine is running, we'll need to SSH into the Droplet. - - # docker-machine ssh - #docker run swarm create - #exit - -### 4. Creating Swarm Master ### - -Now, after our machine and swarm image is running into the machine, we'll now create a Swarm Master. This will also add the master as a node. To do so, here's the command below. - - # docker-machine create \ - -d digitalocean \ - --digitalocean-access-token - --swarm \ - --swarm-master \ - --swarm-discovery token:// \ - swarm-master - -![Docker Machine Swarm Master Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-master-create.png) - -### 5. Creating Swarm Nodes ### - -Now, we'll create a swarm node which will get connected with the Swarm Master. The command below will create a new droplet which will be named as swarm-node and will get connected with the Swarm Master as node. 
This will create a Swarm cluster across the two nodes. - - # docker-machine create \ - -d digitalocean \ - --digitalocean-access-token - --swarm \ - --swarm-discovery token:// \ - swarm-node - -![Docker Machine Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-nodes.png) - -### 6. Connecting to the Swarm Master ### - -We, now connect to the Swarm Master so that we can deploy Docker containers across the nodes as per the requirement and configuration. To load the Swarm Master's Machine configuration into our environment, we can run the below command. - - # eval "$(docker-machine env --swarm swarm-master)" - -After that, we can run the required containers of our choice across the nodes. Here, we'll check if everything went fine or not. So, we'll run **docker info** to check the information about the Swarm Clusters. - - # docker info - -### Conclusion ### - -We can pretty easily create Swarm Cluster with Docker Machine. This method is a lot productive because it reduces a lot of time of a system admin or users. In this article, we successfully provisioned clusters by creating a master and a node using a machine with Digital Ocean as driver. It can be created using any driver like VirtualBox, Google Cloud Computing, Amazon Web Service, Microsoft Azure and more according to the need and requirement of the user and the connection is highly secured with TLS Encryption. If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! Enjoy :-) - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/provision-swarm-clusters-using-docker-machine/ - -作者:[Arun Pyasi][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ From 92dc1ab34b83aaf48e3b5a864a372d8fc1984827 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 30 Jul 2015 17:48:22 +0800 Subject: [PATCH 029/207] =?UTF-8?q?20150730-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...TOP (IT Operational Portal) on CentOS 7.md | 174 ++++++++++++++++++ ... or Load Balancer with Weave and Docker.md | 126 +++++++++++++ 2 files changed, 300 insertions(+) create mode 100644 sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md create mode 100644 sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md diff --git a/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md b/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md new file mode 100644 index 0000000000..38477bb662 --- /dev/null +++ b/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md @@ -0,0 +1,174 @@ +How to Setup iTOP (IT Operational Portal) on CentOS 7 +================================================================================ +iTOP is a simple, Open source web based IT Service Management tool. It has all of ITIL functionality that includes with Service desk, Configuration Management, Incident Management, Problem Management, Change Management and Service Management. iTop relays on Apache/IIS, MySQL and PHP, so it can run on any operating system supporting these applications. 
Since iTop is a web based application, you don’t need to deploy any client software on each user’s PC; a simple web browser is enough to perform the day-to-day operations of an IT environment with iTOP.

To install and configure iTOP we will be using CentOS 7 as the base operating system, with a basic LAMP stack environment installed on it that covers almost all of its prerequisites.

### Downloading iTOP ###

The iTop download package is hosted on SourceForge; we can get its link from the official website [link][1].

![itop download](http://blog.linoxide.com/wp-content/uploads/2015/07/1-itop-download.png)

We will take the download link from here and fetch the zipped file onto the server with the wget command as below.

    [root@centos-007 ~]# wget http://downloads.sourceforge.net/project/itop/itop/2.1.0/iTop-2.1.0-2127.zip

### iTop Extensions and Web Setup ###

Using the unzip command, we will extract the downloaded package into a new directory named itop under the document root of our Apache web server.

    [root@centos-7 ~]# ls
    iTop-2.1.0-2127.zip
    [root@centos-7 ~]# unzip iTop-2.1.0-2127.zip -d /var/www/html/itop/

List the folder to view the installation packages in it.

    [root@centos-7 ~]# ls -lh /var/www/html/itop/
    total 68K
    -rw-r--r--. 1 root root 1.4K Dec 17 2014 INSTALL
    -rw-r--r--. 1 root root 35K Dec 17 2014 LICENSE
    -rw-r--r--. 1 root root 23K Dec 17 2014 README
    drwxr-xr-x. 19 root root 4.0K Jul 14 13:10 web

Here are all the extensions that we can install.

    [root@centos-7 2.x]# ls
    authent-external itop-backup itop-config-mgmt itop-problem-mgmt itop-service-mgmt-provider itop-welcome-itil
    authent-ldap itop-bridge-virtualization-storage itop-datacenter-mgmt itop-profiles-itil itop-sla-computation version.xml
    authent-local itop-change-mgmt itop-endusers-devices itop-request-mgmt itop-storage-mgmt wizard-icons
    installation.xml itop-change-mgmt-itil itop-incident-mgmt-itil itop-request-mgmt-itil itop-tickets
    itop-attachments itop-config itop-knownerror-mgmt itop-service-mgmt itop-virtualization-mgmt

Now, moving into the datamodels directory under the extracted web directory, we will copy the required extensions from the datamodels into the web extensions directory of the document root with the copy command.

    [root@centos-7 2.x]# pwd
    /var/www/html/itop/web/datamodels/2.x
    [root@centos-7 2.x]# cp -r itop-request-mgmt itop-service-mgmt itop-config itop-change-mgmt /var/www/html/itop/web/extensions/

### Installing iTop Web Interface ###

Most of our server side settings and configurations are done. Finally, we need to complete the web interface installation process to finalize the setup.

Open your favorite web browser and access the iTop web directory using your server IP or FQDN, like:

    http://servers_ip_address/itop/web/

You will be redirected to the web installation process for iTop. Let’s configure it as per your requirements, as we did in this tutorial.

#### Prerequisites Validation ####

At this stage you will be presented with a welcome screen showing the prerequisites validation. If you get a warning, you have to resolve it by installing the missing prerequisites.

![mcrypt missing](http://blog.linoxide.com/wp-content/uploads/2015/07/2-itop-web-install.png)

At this stage one optional package named php-mcrypt will be missing. Download the following rpm packages, then install the php-mcrypt package.

    [root@centos-7 ~]# yum localinstall php-mcrypt-5.3.3-1.el6.x86_64.rpm libmcrypt-2.5.8-9.el6.x86_64.rpm

After successfully installing the php-mcrypt library, we need to restart the Apache web service, then reload the web page; this time the prerequisites validation should be OK.

#### Install or Upgrade iTop ####

Here we will choose the fresh installation, as we have not installed iTop previously on our server.

![Install New iTop](http://blog.linoxide.com/wp-content/uploads/2015/07/3.png)

#### iTop License Agreement ####

Choose the option to accept the terms of the licenses of all the components of iTop and click "NEXT".

![License Agreement](http://blog.linoxide.com/wp-content/uploads/2015/07/4.png)

#### Database Configuration ####

Here we configure the database connection by giving our database server’s credentials, and then choose the option to create a new database, as shown.

![DB Connection](http://blog.linoxide.com/wp-content/uploads/2015/07/5.png)

#### Administrator Account ####

In this step we will configure an Admin account by filling out its login details, as shown.

![Admin Account](http://blog.linoxide.com/wp-content/uploads/2015/07/6.png)

#### Miscellaneous Parameters ####

Let's choose the additional parameters, i.e. whether you want to install with demo contents or with a fresh database, and proceed forward.

![Misc Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/7.png)

### iTop Configuration Management ###

The options below allow you to configure the types of elements that are to be managed inside iTop, such as the base objects that are mandatory in the iTop CMDB, data center devices, storage devices and virtualization.

![Conf Management](http://blog.linoxide.com/wp-content/uploads/2015/07/8.png)

#### Service Management ####

Select the choice that best describes the relationships between the services and the IT infrastructure in your IT environment. We are choosing Service Management for Service Providers here.

![Service Management](http://blog.linoxide.com/wp-content/uploads/2015/07/9.png)

#### iTop Tickets Management ####

From the different available options we will select the ITIL Compliant Tickets Management option to have different types of tickets for managing user requests and incidents.

![Ticket Management](http://blog.linoxide.com/wp-content/uploads/2015/07/10.png)

#### Change Management Options ####

Select the type of tickets you want to use in order to manage changes to the IT infrastructure from the available options. We are going to choose the ITIL change management option here.

![ITIL Change](http://blog.linoxide.com/wp-content/uploads/2015/07/11.png)

#### iTop Extensions ####

In this section we can select additional extensions to install, or uncheck the ones that we want to skip.

![iTop Extensions](http://blog.linoxide.com/wp-content/uploads/2015/07/13.png)

### Ready to Start Web Installation ###

Now we are ready to start installing the components that we chose in the previous steps. We can also expand the installation parameters to review our configuration from the drop-down. Once you have confirmed the installation parameters, click on the install button.

![Installation Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/16.png)

Let's wait for the progress bar to complete the installation process. It may take a few minutes. 
![iTop Installation Process](http://blog.linoxide.com/wp-content/uploads/2015/07/17.png)

### iTop Installation Done ###

Our iTop installation setup is complete; we just need to perform a simple manual operation as shown, and then click to enter iTop.

![iTop Done](http://blog.linoxide.com/wp-content/uploads/2015/07/18.png)

### Welcome to iTop (IT Operational Portal) ###

![itop welcome note](http://blog.linoxide.com/wp-content/uploads/2015/07/20.png)

### iTop Dashboard ###

You can manage the configuration of everything from here: servers, computers, contacts, locations, contracts, network devices and more, and you can create your own types. The installed CMDB module alone is great, and a CMDB is an essential part of every bigger IT environment.

![iTop Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/07/19.png)

### Conclusion ###

iTOP is one of the best open source service desk solutions. We have successfully installed and configured it on our CentOS 7 cloud host. The most powerful aspect of iTop is the ease with which it can be customized via its “extensions”. Feel free to comment if you face any trouble during its setup.

--------------------------------------------------------------------------------

via: http://linoxide.com/tools/setup-itop-centos-7/

作者:[Kashif Siddique][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/kashifs/
[1]:http://www.combodo.com/spip.php?page=rubrique&id_rubrique=8
\ No newline at end of file
diff --git a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md
new file mode 100644
index 0000000000..82c592d3b4
--- /dev/null
+++ b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md
@@ -0,0 +1,126 @@
How to Configure Nginx as Reverse Proxy / Load Balancer with Weave and Docker
================================================================================
Hi everyone, today we'll learn how to configure Nginx as a reverse proxy / load balancer with Weave and Docker. Weave creates a virtual network that connects Docker containers with each other, deploys across multiple hosts and enables their automatic discovery. It allows us to focus on developing our application, rather than our infrastructure. It provides such an awesome environment that the applications use the network as if their containers were all plugged into the same network, without any need to configure ports, mappings, links, etc. The services of the application containers on the network are easily accessible to the external world, no matter where they are running. Here, in this tutorial we'll be using Weave to quickly and easily deploy the nginx web server as a load balancer for a simple php application running in Docker containers on multiple nodes in Amazon Web Services. Here, we will be introduced to WeaveDNS, which provides a simple way for containers to find each other using hostnames with no changes in code, and tells other containers to connect to those names.

Here, in this tutorial, we will use Nginx to load balance requests to a set of containers running Apache. Here are the simple and easy steps for using Weave to configure nginx as a load balancer running in an Ubuntu docker container.

### 1. Setting up AWS Instances ###

First of all, we'll need to set up Amazon Web Service instances so that we can run docker containers with Weave, with Ubuntu as the operating system. We will use the [AWS CLI][1] to set up and configure two AWS EC2 instances. Here, in this tutorial, we'll use the smallest available instance type, t1.micro. We will need to have a valid **Amazon Web Services account** with the AWS CLI set up and configured. We'll first clone the weave repository from GitHub by running the following command on the machine with the AWS CLI.

    $ git clone http://github.com/fintanr/weave-gs
    $ cd weave-gs/aws-nginx-ubuntu-simple

After cloning the repository, we run the script that will deploy two t1.micro instances running Weave and Docker on Ubuntu.

    $ sudo ./demo-aws-setup.sh

For this tutorial, we'll need the IP addresses of these instances later on. These are stored in an environment file, weavedemo.env, which is created during the execution of demo-aws-setup.sh. To get those IP addresses, we run the following command, which will give output similar to that below.

    $ cat weavedemo.env

    export WEAVE_AWS_DEMO_HOST1=52.26.175.175
    export WEAVE_AWS_DEMO_HOST2=52.26.83.141
    export WEAVE_AWS_DEMO_HOSTCOUNT=2
    export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141)

Please note these will not be the IP addresses for your run of this tutorial; AWS dynamically allocates IP addresses to instances.

As we are using bash, we will just source this file using the command below.

    . ./weavedemo.env

### 2. Launching Weave and WeaveDNS ###

After deploying the instances, we'll want to launch weave and weavedns on each host. Weave and weavedns allow us to easily deploy our containers to a new infrastructure and configuration without changing the code and without needing to understand concepts such as ambassador containers and links. Here are the commands to launch them on the first host.

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
    $ sudo weave launch
    $ sudo weave launch-dns 10.2.1.1/24

Next, we'll also launch them on our second host.

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2
    $ sudo weave launch $WEAVE_AWS_DEMO_HOST1
    $ sudo weave launch-dns 10.2.1.2/24

### 3. Launching Application Containers ###

Now, we want to launch six containers across our two hosts, each running an Apache2 web server instance with our simple php site. So, we'll run the following commands, which start 3 containers running the Apache2 web server on our 1st instance.

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
    $ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache
    $ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache
    $ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache

After that, we'll launch 3 more containers running the apache2 web server on our 2nd instance, as shown below.

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2
    $ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache
    $ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache
    $ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache

Note: Here, the --with-dns option tells the container to use weavedns to resolve names, and -h x.weave.local allows the host to be resolvable via WeaveDNS.
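Before wiring up the load balancer, a quick optional sanity check can confirm that the weave network and the containers came up as expected. These are just standard status commands, run on either host:

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
    $ sudo weave status
    $ sudo docker ps

weave status summarizes the state of the weave router and weavedns, while docker ps should list the three application containers launched on that host.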
### 4. Launching Nginx Container ###

After our application containers are running as expected, we'll launch an nginx container which contains the nginx configuration that will round-robin across the servers for the reverse proxying or load balancing. To run the nginx container, we'll need to run the following command.

    ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1
    $ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple

Hence, our Nginx container is publicly exposed as an HTTP server on $WEAVE_AWS_DEMO_HOST1.

### 5. Testing the Load Balancer ###

To test whether our load balancer is working, we'll run a script that makes HTTP requests to our nginx container. We'll make six requests so that we can see nginx moving through each of the webservers in round-robin turn.

    $ ./access-aws-hosts.sh

    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws1.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws2.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws3.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws4.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws5.weave.local",
        "date" : "2015-06-26 12:24:23"
    }
    {
        "message" : "Hello Weave - nginx example",
        "hostname" : "ws6.weave.local",
        "date" : "2015-06-26 12:24:23"
    }

### Conclusion ###

Finally, we've successfully configured nginx as a reverse proxy / load balancer with Weave and Docker on Ubuntu servers in AWS (Amazon Web Services) EC2. From the output in the step above, it is clear that we have configured it correctly: the requests are sent to the 6 application containers in round-robin turn, each running a PHP app hosted in an Apache web server. Here, weave and weavedns did great work deploying a containerised PHP application using nginx across multiple hosts on AWS EC2, with no changes to the code, connecting the containers to each other by hostname using weavedns. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/

作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:http://console.aws.amazon.com/
\ No newline at end of file
From b0452f98fac85ec842786a5e91371484ba4b16fe Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Thu, 30 Jul 2015 18:19:51 +0800
Subject: [PATCH 030/207] =?UTF-8?q?20150730-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 .../20150730 Compare PDF Files on Ubuntu.md   |  47 +++++
 ... Must-Know Linux Commands For New Users.md | 185 ++++++++++++++++++
 2 files changed, 232 insertions(+)
 create mode 100644 sources/tech/20150730 Compare PDF Files on Ubuntu.md
 create mode 100644 sources/tech/20150730 Must-Know Linux Commands For New Users.md

diff --git a/sources/tech/20150730 Compare PDF Files on Ubuntu.md b/sources/tech/20150730 Compare PDF Files on Ubuntu.md
new file mode 100644
index 0000000000..9612f0430e
--- /dev/null
+++ b/sources/tech/20150730 Compare PDF Files on Ubuntu.md
@@ -0,0 +1,47 @@
Compare PDF Files on Ubuntu
================================================================================
If you want to compare PDF files you can use one of the following utilities.

### Comparepdf ###

comparepdf is a command line application used to compare two PDF files. The default comparison mode is text mode, where the text of each corresponding pair of pages is compared. As soon as a difference is detected the program terminates with a message (unless -v0 is set) and an indicative return code.

The OPTIONS are -ct or --compare=text (the default) for text mode comparisons, or -ca or --compare=appearance for visual comparisons (useful if diagrams or other images have changed), and -v=1 or --verbose=1 for reporting differences (and saying nothing for matching files): use -v=0 for no reporting or -v=2 for reporting both different and matching files.

### Install Comparepdf on ubuntu ###

Open the terminal and run the following command:

    sudo apt-get install comparepdf

**Comparepdf syntax**

    comparepdf [OPTIONS] file1.pdf file2.pdf

**Diffpdf**

DiffPDF is a GUI application used to compare two PDF files. By default the comparison is of the text on each pair of pages, but comparing the visual appearance of pages is also supported (for example, if a diagram is changed or if a paragraph is reformatted). It is also possible to compare particular pages or page ranges. For example, if there are two versions of a PDF file, one with pages 1-12 and the other with pages 1-13 because of an extra page having been added as page 4, they can be compared by specifying two page ranges, 1-12 for the first and 1-3, 5-13 for the second. This will make DiffPDF compare pages in the pairs (1, 1), (2, 2), (3, 3), (4, 5), (5, 6), and so on, to (12, 13).
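Since comparepdf reports its verdict through the exit status, the two tools combine nicely in a small script: compare quietly first, then open the GUI viewer only when something changed. A rough sketch follows; the file names are made up, and it assumes diffpdf accepts the two files on its command line:

    #!/bin/bash
    # Compare appearance silently; a nonzero exit status indicates a difference (or an error).
    comparepdf -ca -v=0 report-v1.pdf report-v2.pdf
    if [ $? -ne 0 ]; then
        diffpdf report-v1.pdf report-v2.pdf   # inspect the changes visually
    fi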
+ +### Install diffpdf on ubuntu ### + +Open the terminal and run the following command + + sudo apt-get install diffpdf + +### Screenshots ### + +![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/14.png) + +![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/23.png) + +-------------------------------------------------------------------------------- + +via: http://www.ubuntugeek.com/compare-pdf-files-on-ubuntu.html + +作者:[ruchi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.ubuntugeek.com/author/ubuntufix \ No newline at end of file diff --git a/sources/tech/20150730 Must-Know Linux Commands For New Users.md b/sources/tech/20150730 Must-Know Linux Commands For New Users.md new file mode 100644 index 0000000000..ea21c001e0 --- /dev/null +++ b/sources/tech/20150730 Must-Know Linux Commands For New Users.md @@ -0,0 +1,185 @@ +Must-Know Linux Commands For New Users +================================================================================ +![Manage system updates via the command line with dnf on Fedora.](http://www.linux.com/images/stories/41373/fedora-cli.png) +Manage system updates via the command line with dnf on Fedora. + +One of the beauties of Linux-based systems is that you can manage your entire system right from the terminal using the command line. The advantage of using the command line is that you can use the same knowledge and skills to manage any Linux distribution. + +This is not possible through the graphical user interface (GUI) as each distro, and desktop environment (DE), offers its own user interfaces. To be clear, there are cases in which you will need different commands to perform certain tasks on different distributions, but more or less the concept and ideas remain the same. + +In this article, we are going to talk about some of the basic commands that a new Linux user should know. I will show you how to update your system, manage software, manipulate files and switch to root using the command line on three major distributions: Ubuntu (which also includes its flavors and derivatives, and Debian), openSUSE and Fedora. + +*Let's get started!* + +### Keep your system safe and up-to-date ### + +Linux is secure by design, but the fact is that all software has bugs and there could be security holes. So it's very important to keep your system updated. Think of it this way: Running an out-of-date operating system is like being in an armored tank with the doors unlocked. Will the armor protect you? Anyone can enter through the open doors and cause harm. Similarly there can be un-patched holes in your OS which can compromise your systems. Open source communities, unlike the proprietary world, are extremely quick at patching holes, so if you keep your system updated you'll stay safe. + +Keep an eye on news sites to be aware of security vulnerabilities. If there is a hole discovered, read about it and update your system as soon as a patch is out. Either way you must make it a practice to run the update commands at least once a week on production machines. If you are running a complicated server then be extra careful and go through the changelog to ensure updates won't break your customization. + +**Ubuntu**: Bear one thing in mind: you must always refresh repositories (aka repos) before upgrading the system or installing any software. On Ubuntu, you can update your system with the following commands. 
The first command refreshes the repositories:

    sudo apt-get update

Once the repos are updated, you can run the system update command:

    sudo apt-get upgrade

However, this command doesn't update the kernel and some other packages, so you must also run this command:

    sudo apt-get dist-upgrade

**openSUSE**: If you are on openSUSE, you can update the system using these commands (as usual, the first command refreshes the repos):

    sudo zypper refresh
    sudo zypper up

**Fedora**: If you are on Fedora, you can use the 'dnf' command, which is kind of equivalent to zypper and apt-get:

    sudo dnf update
    sudo dnf upgrade

### Software installation and removal ###

You can install only those packages which are available in the repositories enabled on your system. Every distro comes with some official or third-party repos enabled by default.

**Ubuntu**: To install any package on Ubuntu, first update the repo and then use this syntax:

    sudo apt-get install [package_name]

Example:

    sudo apt-get install gimp

**openSUSE**: The commands would be:

    sudo zypper install [package_name]

**Fedora**: Fedora has dropped 'yum' and now uses 'dnf', so the command would be:

    sudo dnf install [package_name]

The procedure to remove software is the same; just exchange 'install' with 'remove'.

**Ubuntu**:

    sudo apt-get remove [package_name]

**openSUSE**:

    sudo zypper remove [package_name]

**Fedora**:

    sudo dnf remove [package_name]

### How to manage third party software? ###

There is a huge community of developers who offer their software to users. Different distributions use different mechanisms to make third party software available to their users. It also depends on how a developer offers their software to users; some offer binaries and others offer it through repositories.

Ubuntu heavily relies on PPAs (personal package archives) but, unfortunately, there is no built-in tool which can assist a user in searching PPAs. You will need to Google the PPA and then add the repository manually before installing the software. This is how you would add any PPA to your system:

    sudo add-apt-repository ppa:<repository-name>

Example: Let's say I want to add the LibreOffice PPA to my system. I would Google the PPA and then acquire the repo name from Launchpad, which in this case is "libreoffice/ppa". Then add the PPA using the following command:

    sudo add-apt-repository ppa:libreoffice/ppa

It will ask you to hit the Enter key in order to import the keys. Once it's done, refresh the repos with the 'update' command and then install the software.

openSUSE has an elegant solution for third-party apps. You can visit software.opensuse.org, search for the package and install it with one click. It will automatically add the repo to your system. If you want to add any repo manually, use this command:

    sudo zypper ar -f url_of_the_repo name_of_repo
    sudo zypper ar -f http://download.opensuse.org/repositories/LibreOffice:Factory/openSUSE_13.2/LibreOffice:Factory.repo LOF

Then refresh the repo and install the software:

    sudo zypper refresh
    sudo zypper install libreoffice

Fedora users can simply add RPMFusion (both free and non-free repos), which contains a majority of such applications. In case you do need to add a repo manually, this is the command:

    dnf config-manager --add-repo http://www.example.com/example.repo

### Some basic commands ###

I have written a few [articles][1] on how to manage files on your system using the CLI; here are some of the basic commands which are common across all distributions.

Copy files or directories to a new location:

    cp path_of_file_1 path_of_the_directory_where_you_want_to_copy/

Copy all files from a directory to a new location (notice the slash and asterisk, which implies all files within that directory):

    cp path_of_files/* path_of_the_directory_where_you_want_to_copy/

Move a file from one location to another (the trailing slash means inside that directory):

    mv path_of_file_1 path_of_the_directory_where_you_want_to_move/

Move all files from one location to another:

    mv path_of_directory_where_files_are/* path_of_the_directory_where_you_want_to_move/

Delete a file:

    rm path_of_file

Delete a directory:

    rm -r path_of_directory

Remove all content from the directory, leaving the directory folder intact:

    rm -r path_of_directory/*

### Create new directory ###

To create a new directory, first enter the location where you want to create a directory. Let's say you want to create a 'foundation' folder inside your Documents directory. Let's change the directory using the cd (aka change directory) command:

    cd /home/swapnil/Documents

(exchange 'swapnil' with the user on your system)

Then create the directory with the mkdir command:

    mkdir foundation

You can also create a directory from anywhere, by giving the path of the directory. For example:

    mkdir /home/swapnil/Documents/foundation

If you want to create parent-child directories, which means directories within other directories, then use the -p option. It will create all directories in the given path:

    mkdir -p /home/swapnil/Documents/linux/foundation

### Become root ###

You either need to be root, or the user should have sudo powers, to perform some administrative tasks such as managing packages or making changes to the root directories or files. An example would be to edit the 'fstab' file which keeps a record of mounted hard drives. It's inside the 'etc' directory, which is within the root directory. You can make changes to this file only as a super user. In most distros you can become root by 'switching user'. Let's say on openSUSE I want to become root as I am going to work inside the root directory. You can use either command:

    sudo su -

Or

    su -

That will ask for the password and then you will have root privileges. Keep one point in mind: never run your system as root user unless you know what you are doing. Another important point to note is that the files or directories you modify as root also change ownership of those files from that user or specific service to root. You may have to revert the ownership of those files, otherwise the services or users won't be able to access or write to those files. To change ownership, this is the command:

    sudo chown -R user:user /path_of_file_or_directory

You may often need this when you have partitions from other distros mounted on the system. When you try to access files on such partitions, you may come across a permission denied error. You can simply change the ownership of such partitions to access them. Just be extra careful; don't change permissions or ownership of root directories.

These are the basic commands any new Linux user needs.
If you have any questions or if you want us to cover a specific topic, please mention them in the comments below. + +-------------------------------------------------------------------------------- + +via: http://www.linux.com/learn/tutorials/842251-must-know-linux-commands-for-new-users + +作者:[Swapnil Bhartiya][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linux.com/community/forums/person/61003 +[1]:http://www.linux.com/learn/tutorials/828027-how-to-manage-your-files-from-the-command-line \ No newline at end of file From 29c54a558a994752cf69ae812866e79eab6ea135 Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 30 Jul 2015 21:03:28 +0800 Subject: [PATCH 031/207] PUB:20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux @GOLinux --- ...es and Units Using 'Systemctl' in Linux.md | 33 +++++++++---------- 1 file changed, 16 insertions(+), 17 deletions(-) rename {translated/tech => published}/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md (95%) diff --git a/translated/tech/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md b/published/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md similarity index 95% rename from translated/tech/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md rename to published/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md index a78dc01820..e8b8466f90 100644 --- a/translated/tech/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md +++ b/published/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md @@ -1,13 +1,14 @@ -在Linux中使用‘Systemctl’管理‘Systemd’服务和单元 +systemctl 完全指南 ================================================================================ Systemctl是一个systemd工具,主要负责控制systemd系统和服务管理器。 Systemd是一个系统管理守护进程、工具和库的集合,用于取代System V初始进程。Systemd的功能是用于集中管理和配置类UNIX系统。 -在Linux生态系统中,Systemd被部署到了大多数的标准Linux发行版中,只有位数不多的几个尚未部署。Systemd通常是所有其它守护进程的父进程,但并非总是如此。 +在Linux生态系统中,Systemd被部署到了大多数的标准Linux发行版中,只有为数不多的几个发行版尚未部署。Systemd通常是所有其它守护进程的父进程,但并非总是如此。 ![Manage Linux Services Using Systemctl](http://www.tecmint.com/wp-content/uploads/2015/04/Manage-Linux-Services-Using-Systemctl.jpg) -使用Systemctl管理Linux服务 + +*使用Systemctl管理Linux服务* 本文旨在阐明在运行systemd的系统上“如何控制系统和服务”。 @@ -41,11 +42,9 @@ Systemd是一个系统管理守护进程、工具和库的集合,用于取代S root 555 1 0 16:27 ? 00:00:00 /usr/lib/systemd/systemd-logind dbus 556 1 0 16:27 ? 00:00:00 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation -**注意**:systemd是作为父进程(PID=1)运行的。在上面带(-e)参数的ps命令输出中,选择所有进程,(- +**注意**:systemd是作为父进程(PID=1)运行的。在上面带(-e)参数的ps命令输出中,选择所有进程,(-a)选择除会话前导外的所有进程,并使用(-f)参数输出完整格式列表(即 -eaf)。 -a)选择除会话前导外的所有进程,并使用(-f)参数输出完整格式列表(如 -eaf)。 - -也请注意上例中后随的方括号和样例剩余部分。方括号表达式是grep的字符类表达式的一部分。 +也请注意上例中后随的方括号和例子中剩余部分。方括号表达式是grep的字符类表达式的一部分。 #### 4. 分析systemd启动进程 #### @@ -147,7 +146,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完 1 loaded units listed. Pass --all to see loaded but inactive units, too. To show all installed unit files use 'systemctl list-unit-files'. -#### 10. 检查某个单元(cron.service)是否启用 #### +#### 10. 检查某个单元(如 cron.service)是否启用 #### # systemctl is-enabled crond.service @@ -187,7 +186,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完 dbus-org.fedoraproject.FirewallD1.service enabled .... -#### 13. Linux中如何启动、重启、停止、重载服务以及检查服务(httpd.service)状态 #### +#### 13. 
Linux中如何启动、重启、停止、重载服务以及检查服务(如 httpd.service)状态 #### # systemctl start httpd.service # systemctl restart httpd.service @@ -214,15 +213,15 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完 Apr 28 17:21:30 tecmint systemd[1]: Started The Apache HTTP Server. Hint: Some lines were ellipsized, use -l to show in full. -**注意**:当我们使用systemctl的start,restart,stop和reload命令时,我们不会不会从终端获取到任何输出内容,只有status命令可以打印输出。 +**注意**:当我们使用systemctl的start,restart,stop和reload命令时,我们不会从终端获取到任何输出内容,只有status命令可以打印输出。 -#### 14. 如何激活服务并在启动时启用或禁用服务(系统启动时自动启动服务) #### +#### 14. 如何激活服务并在启动时启用或禁用服务(即系统启动时自动启动服务) #### # systemctl is-active httpd.service # systemctl enable httpd.service # systemctl disable httpd.service -#### 15. 如何屏蔽(让它不能启动)或显示服务(httpd.service) #### +#### 15. 如何屏蔽(让它不能启动)或显示服务(如 httpd.service) #### # systemctl mask httpd.service ln -s '/dev/null' '/etc/systemd/system/httpd.service' @@ -297,7 +296,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完 # systemctl enable tmp.mount # systemctl disable tmp.mount -#### 20. 在Linux中屏蔽(让它不能启动)或显示挂载点 #### +#### 20. 在Linux中屏蔽(让它不能启用)或可见挂载点 #### # systemctl mask tmp.mount @@ -375,7 +374,7 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完 CPUShares=2000 -**注意**:当你为某个服务设置CPUShares,会自动创建一个以服务名命名的目录(httpd.service),里面包含了一个名为90-CPUShares.conf的文件,该文件含有CPUShare限制信息,你可以通过以下方式查看该文件: +**注意**:当你为某个服务设置CPUShares,会自动创建一个以服务名命名的目录(如 httpd.service),里面包含了一个名为90-CPUShares.conf的文件,该文件含有CPUShare限制信息,你可以通过以下方式查看该文件: # vi /etc/systemd/system/httpd.service.d/90-CPUShares.conf @@ -528,13 +527,13 @@ a)选择除会话前导外的所有进程,并使用(-f)参数输出完 #### 35. 启动运行等级5,即图形模式 #### # systemctl isolate runlevel5.target - OR + 或 # systemctl isolate graphical.target #### 36. 启动运行等级3,即多用户模式(命令行) #### # systemctl isolate runlevel3.target - OR + 或 # systemctl isolate multiuser.target #### 36. 设置多用户模式或图形模式为默认运行等级 #### @@ -572,7 +571,7 @@ via: http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux 作者:[Avishek Kumar][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From dc8ba02e754ba942f2b89627d05a4044aa97e7af Mon Sep 17 00:00:00 2001 From: joeren Date: Fri, 31 Jul 2015 08:11:39 +0800 Subject: [PATCH 032/207] Update 20150730 Compare PDF Files on Ubuntu.md --- sources/tech/20150730 Compare PDF Files on Ubuntu.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150730 Compare PDF Files on Ubuntu.md b/sources/tech/20150730 Compare PDF Files on Ubuntu.md index 9612f0430e..6319508af5 100644 --- a/sources/tech/20150730 Compare PDF Files on Ubuntu.md +++ b/sources/tech/20150730 Compare PDF Files on Ubuntu.md @@ -1,3 +1,4 @@ +Translating by GOLinux! 
Compare PDF Files on Ubuntu ================================================================================ If you want to compare PDF files you can use one of the following utility @@ -44,4 +45,4 @@ via: http://www.ubuntugeek.com/compare-pdf-files-on-ubuntu.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.ubuntugeek.com/author/ubuntufix \ No newline at end of file +[a]:http://www.ubuntugeek.com/author/ubuntufix From 2fe341d9fc32764011f35ecb459afbba39f066b4 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Fri, 31 Jul 2015 08:41:12 +0800 Subject: [PATCH 033/207] [Translated]20150730 Compare PDF Files on Ubuntu.md --- .../20150730 Compare PDF Files on Ubuntu.md | 48 ------------------- .../20150730 Compare PDF Files on Ubuntu.md | 48 +++++++++++++++++++ 2 files changed, 48 insertions(+), 48 deletions(-) delete mode 100644 sources/tech/20150730 Compare PDF Files on Ubuntu.md create mode 100644 translated/tech/20150730 Compare PDF Files on Ubuntu.md diff --git a/sources/tech/20150730 Compare PDF Files on Ubuntu.md b/sources/tech/20150730 Compare PDF Files on Ubuntu.md deleted file mode 100644 index 6319508af5..0000000000 --- a/sources/tech/20150730 Compare PDF Files on Ubuntu.md +++ /dev/null @@ -1,48 +0,0 @@ -Translating by GOLinux! -Compare PDF Files on Ubuntu -================================================================================ -If you want to compare PDF files you can use one of the following utility - -### Comparepdf ### - -comparepdf is a command line application used to compare two PDF files.The default comparison mode is text mode where the text of each corresponding pair of pages is compared. As soon as a difference is detected the program terminates with a message (unless -v0 is set) and an indicative return code. - -The OPTIONS are -ct or --compare=text (the default) for text mode comparisons or -ca or --compare=appearance for visual comparisons (useful if diagrams or other images have changed), and -v=1 or --verbose=1 for reporting differences (and saying nothing for matching files): use -v=0 for no reporting or -v=2 for reporting both different and matching files. - -### Install Comparepdf on ubuntu ### - -Open the terminal and run the following command - - sudo apt-get install comparepdf - -**Comparepdf syntax** - - comparepdf [OPTIONS] file1.pdf file2.pdf - -**Diffpdf** - -DiffPDF is a GUI application used to compare two PDF files.By default the comparison is of the text on each pair of pages, but comparing the visual appearance of pages is also supported (for example, if a diagram is changed or if a paragraph is reformatted). It is also possible to compare pticular pages or page ranges. For example, if there are two versions of a PDF file, one with pages 1-12 and the other with pages 1-13 because of an extra page having been added as page 4, they can be compared by specifying two page ranges, 1-12 for the first and 1-3, 5-13 for the second. This will make DiffPDF compare pages in the pairs (1, 1), (2, 2), (3, 3), (4, 5), (5, 6), and so on, to (12, 13). 
-
-### Install diffpdf on ubuntu ###
-
-Open the terminal and run the following command
-
-    sudo apt-get install diffpdf
-
-### Screenshots ###
-
-![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/14.png)
-
-![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/23.png)
-
---------------------------------------------------------------------------------
-
-via: http://www.ubuntugeek.com/compare-pdf-files-on-ubuntu.html
-
-作者:[ruchi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.ubuntugeek.com/author/ubuntufix
diff --git a/translated/tech/20150730 Compare PDF Files on Ubuntu.md b/translated/tech/20150730 Compare PDF Files on Ubuntu.md
new file mode 100644
index 0000000000..3215caf23f
--- /dev/null
+++ b/translated/tech/20150730 Compare PDF Files on Ubuntu.md
@@ -0,0 +1,48 @@
+Ubuntu上比较PDF文件
+================================================================================
+
+如果你想要对PDF文件进行比较,你可以使用下面工具之一。
+
+### Comparepdf ###
+
+comparepdf是一个命令行应用,用于将两个PDF文件进行对比。默认对比模式是文本模式,该模式会对各对相关页面进行文字对比。只要一检测到差异,该程序就会终止,并显示一条信息(除非设置了-v0)和一个指示性的返回码。
+
+用于文本模式对比的选项有 -ct 或 --compare=text(默认),用于视觉对比(这在图表或其它图像发生改变时很有用)的选项有 -ca 或 --compare=appearance。而 -v=1 或 --verbose=1 选项则用于报告差异(对匹配的文件则不作任何输出):使用 -v=0 选项取消报告,或者 -v=2 来同时报告不同的和匹配的文件。
+
+### 安装comparepdf到Ubuntu ###
+
+打开终端,然后运行以下命令
+
+    sudo apt-get install comparepdf
+
+**Comparepdf 语法**
+
+    comparepdf [OPTIONS] file1.pdf file2.pdf
+
+**Diffpdf**
+
+DiffPDF是一个图形化应用程序,用于对两个PDF文件进行对比。默认情况下,它只会对比两个相关页面的文字,但是也支持对图形化页面进行对比(例如,如果图表被修改过,或者段落被重新格式化过)。它也可以对特定的页面或者页面范围进行对比。例如,如果同一个PDF文件有两个版本,其中一个有页面1-12,而另一个则有页面1-13,因为这里添加了一个额外的页面4,它们可以通过指定两个页面范围来进行对比,第一个是1-12,而1-3,5-13则可以作为第二个页面范围。这将使得DiffPDF成对地对比这些页面(1,1),(2,2),(3,3),(4,5),(5,6),以此类推,直到(12,13)。
+
+### 安装 diffpdf 到 Ubuntu ###
+
+打开终端,然后运行以下命令
+
+    sudo apt-get install diffpdf
+
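+这里顺带补充一个示意性的用法(假设 file1.pdf 和 file2.pdf 位于当前目录,文件名仅为示例;实际参数请以 diffpdf --help 的输出为准):diffpdf 也可以在启动时直接预载两个待比较的文件:
+
+    # 示意:直接预载两个文件启动 DiffPDF(文件名仅为示例)
+    diffpdf file1.pdf file2.pdf
+
+### 截图 ###
+
+![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/14.png)
+
+![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/23.png)
+
+--------------------------------------------------------------------------------
+
+via: http://www.ubuntugeek.com/compare-pdf-files-on-ubuntu.html
+
+作者:[ruchi][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.ubuntugeek.com/author/ubuntufix
From 26b3b9d09f839328314e3cf37fa56537ae6bb759 Mon Sep 17 00:00:00 2001
From: joeren
Date: Fri, 31 Jul 2015 08:43:39 +0800
Subject: [PATCH 034/207] Update 20150730 Must-Know Linux Commands For New Users.md

---
 .../tech/20150730 Must-Know Linux Commands For New Users.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150730 Must-Know Linux Commands For New Users.md b/sources/tech/20150730 Must-Know Linux Commands For New Users.md
index ea21c001e0..55f1b0dbfe 100644
--- a/sources/tech/20150730 Must-Know Linux Commands For New Users.md
+++ b/sources/tech/20150730 Must-Know Linux Commands For New Users.md
@@ -1,3 +1,4 @@
+Translating by GOLinux!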
Must-Know Linux Commands For New Users ================================================================================ ![Manage system updates via the command line with dnf on Fedora.](http://www.linux.com/images/stories/41373/fedora-cli.png) @@ -182,4 +183,4 @@ via: http://www.linux.com/learn/tutorials/842251-must-know-linux-commands-for-ne 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.linux.com/community/forums/person/61003 -[1]:http://www.linux.com/learn/tutorials/828027-how-to-manage-your-files-from-the-command-line \ No newline at end of file +[1]:http://www.linux.com/learn/tutorials/828027-how-to-manage-your-files-from-the-command-line From 0ba2c6878f0407180b357ef62e776996a3896903 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 31 Jul 2015 10:11:58 +0800 Subject: [PATCH 035/207] Delete 20150717 How to collect NGINX metrics - Part 2.md --- ...7 How to collect NGINX metrics - Part 2.md | 239 ------------------ 1 file changed, 239 deletions(-) delete mode 100644 sources/tech/20150717 How to collect NGINX metrics - Part 2.md diff --git a/sources/tech/20150717 How to collect NGINX metrics - Part 2.md b/sources/tech/20150717 How to collect NGINX metrics - Part 2.md deleted file mode 100644 index eb627649a7..0000000000 --- a/sources/tech/20150717 How to collect NGINX metrics - Part 2.md +++ /dev/null @@ -1,239 +0,0 @@ -translation by strugglingyouth - -How to collect NGINX metrics - Part 2 -================================================================================ -![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png) - -### How to get the NGINX metrics you need ### - -How you go about capturing metrics depends on which version of NGINX you are using, as well as which metrics you wish to access. (See [the companion article][1] for an in-depth exploration of NGINX metrics.) Free, open-source NGINX and the commercial product NGINX Plus both have status modules that report metrics, and NGINX can also be configured to report certain metrics in its logs: - -注:表格 - ----- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
MetricAvailability
NGINX (open-source)NGINX PlusNGINX logs
accepts / acceptedxx
handledxx
droppedxx
activexx
requests / totalxx
4xx codesxx
5xx codesxx
request timex
- -#### Metrics collection: NGINX (open-source) #### - -Open-source NGINX exposes several basic metrics about server activity on a simple status page, provided that you have the HTTP [stub status module][2] enabled. To check if the module is already enabled, run: - - nginx -V 2>&1 | grep -o with-http_stub_status_module - -The status module is enabled if you see with-http_stub_status_module as output in the terminal. - -If that command returns no output, you will need to enable the status module. You can use the --with-http_stub_status_module configuration parameter when [building NGINX from source][3]: - - ./configure \ - … \ - --with-http_stub_status_module - make - sudo make install - -After verifying the module is enabled or enabling it yourself, you will also need to modify your NGINX configuration to set up a locally accessible URL (e.g., /nginx_status) for the status page: - - server { - location /nginx_status { - stub_status on; - - access_log off; - allow 127.0.0.1; - deny all; - } - } - -Note: The server blocks of the NGINX config are usually found not in the master configuration file (e.g., /etc/nginx/nginx.conf) but in supplemental configuration files that are referenced by the master config. To find the relevant configuration files, first locate the master config by running: - - nginx -t - -Open the master configuration file listed, and look for lines beginning with include near the end of the http block, such as: - - include /etc/nginx/conf.d/*.conf; - -In one of the referenced config files you should find the main server block, which you can modify as above to configure NGINX metrics reporting. After changing any configurations, reload the configs by executing: - - nginx -s reload - -Now you can view the status page to see your metrics: - - Active connections: 24 - server accepts handled requests - 1156958 1156958 4491319 - Reading: 0 Writing: 18 Waiting : 6 - -Note that if you are trying to access the status page from a remote machine, you will need to whitelist the remote machine’s IP address in your status configuration, just as 127.0.0.1 is whitelisted in the configuration snippet above. - -The NGINX status page is an easy way to get a quick snapshot of your metrics, but for continuous monitoring you will need to automatically record that data at regular intervals. Parsers for the NGINX status page already exist for monitoring tools such as [Nagios][4] and [Datadog][5], as well as for the statistics collection daemon [collectD][6]. - -#### Metrics collection: NGINX Plus #### - -The commercial NGINX Plus provides [many more metrics][7] through its ngx_http_status_module than are available in open-source NGINX. Among the additional metrics exposed by NGINX Plus are bytes streamed, as well as information about upstream systems and caches. NGINX Plus also reports counts of all HTTP status code types (1xx, 2xx, 3xx, 4xx, 5xx). A sample NGINX Plus status board is available [here][8]. - -![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png) - -*Note: the “Active” connections on the NGINX Plus status dashboard are defined slightly differently than the Active state connections in the metrics collected via the open-source NGINX stub status module. In NGINX Plus metrics, Active connections do not include connections in the Waiting state (aka Idle connections).* - -NGINX Plus also reports [metrics in JSON format][9] for easy integration with other monitoring systems. 
With NGINX Plus, you can see the metrics and health status [for a given upstream grouping of servers][10], or drill down to get a count of just the response codes [from a single server][11] in that upstream: - - {"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055} - -To enable the NGINX Plus metrics dashboard, you can add a status server block inside the http block of your NGINX configuration. ([See the section above][12] on collecting metrics from open-source NGINX for instructions on locating the relevant config files.) For example, to set up a status dashboard at http://your.ip.address:8080/status.html and a JSON interface at http://your.ip.address:8080/status, you would add the following server block: - - server { - listen 8080; - root /usr/share/nginx/html; - - location /status { - status; - } - - location = /status.html { - } - } - -The status pages should be live once you reload your NGINX configuration: - - nginx -s reload - -The official NGINX Plus docs have [more details][13] on how to configure the expanded status module. - -#### Metrics collection: NGINX logs #### - -NGINX’s [log module][14] writes configurable access logs to a destination of your choosing. You can customize the format of your logs and the data they contain by [adding or subtracting variables][15]. The simplest way to capture detailed logs is to add the following line in the server block of your config file (see [the section][16] on collecting metrics from open-source NGINX for instructions on locating your config files): - - access_log logs/host.access.log combined; - -After changing any NGINX configurations, reload the configs by executing: - - nginx -s reload - -The “combined” log format, included by default, captures [a number of key data points][17], such as the actual HTTP request and the corresponding response code. In the example logs below, NGINX logged a 200 (success) status code for a request for /index.html and a 404 (not found) error for the nonexistent /fail. - - 127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari 537.36" - - 127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" - -You can log request processing time as well by adding a new log format to the http block of your NGINX config file: - - log_format nginx '$remote_addr - $remote_user [$time_local] ' - '"$request" $status $body_bytes_sent $request_time ' - '"$http_referer" "$http_user_agent"'; - -And by adding or modifying the access_log line in the server block of your config file: - - access_log logs/host.access.log nginx; - -After reloading the updated configs (by running nginx -s reload), your access logs will include response times, as seen below. The units are seconds, with millisecond resolution. In this instance, the server received a request for /big.pdf, returning a 206 (success) status code after sending 33973115 bytes. Processing the request took 0.202 seconds (202 milliseconds): - - 127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" - -You can use a variety of tools and services to parse and analyze NGINX logs. 
For instance, [rsyslog][18] can monitor your logs and pass them to any number of log-analytics services; you can use a free, open-source tool such as [logstash][19] to collect and analyze logs; or you can use a unified logging layer such as [Fluentd][20] to collect and parse your NGINX logs. - -### Conclusion ### - -Which NGINX metrics you monitor will depend on the tools available to you, and whether the insight provided by a given metric justifies the overhead of monitoring that metric. For instance, is measuring error rates important enough to your organization to justify investing in NGINX Plus or implementing a system to capture and analyze logs? - -At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][21], and get started right away with a [free trial of Datadog][22]. - ----------- - -Source Markdown for this post is available [on GitHub][23]. Questions, corrections, additions, etc.? Please [let us know][24]. - --------------------------------------------------------------------------------- - -via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ - -作者:K Young -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ -[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html -[3]:http://wiki.nginx.org/InstallOptions -[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx -[5]:http://docs.datadoghq.com/integrations/nginx/ -[6]:https://collectd.org/wiki/index.php/Plugin:nginx -[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data -[8]:http://demo.nginx.com/status.html -[9]:http://demo.nginx.com/status -[10]:http://demo.nginx.com/status/upstreams/demoupstreams -[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses -[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source -[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example -[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html -[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format -[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source -[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format -[18]:http://www.rsyslog.com/ -[19]:https://www.elastic.co/products/logstash -[20]:http://www.fluentd.org/ -[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ -[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up -[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md -[24]:https://github.com/DataDog/the-monitor/issues From 1d8b52392ab4df0d1aaeb22c601c5565a2434d66 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 31 Jul 2015 10:14:58 +0800 Subject: [PATCH 036/207] Create How to collect NGINX metrics - Part 2 --- .../How to collect NGINX metrics - Part 2 | 237 ++++++++++++++++++ 1 file changed, 237 insertions(+) create mode 100644 translated/tech/How to collect NGINX metrics - Part 2 diff --git a/translated/tech/How to collect NGINX metrics - Part 2 b/translated/tech/How to collect NGINX metrics - Part 2 new file mode 100644 index 0000000000..848042bf2c --- /dev/null +++ b/translated/tech/How to collect NGINX metrics - Part 2 @@ -0,0 +1,237 @@ + 
+如何收集NGINX指标 - 第2部分 +================================================================================ +![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png) + +### 如何获取你所需要的NGINX指标 ### + +如何获取需要的指标取决于你正在使用的 NGINX 版本。(参见 [the companion article][1] 将深入探索NGINX指标。)免费,开源版的 NGINX 和商业版的 NGINX 都有指标度量的状态模块,NGINX 也可以在其日志中配置指标模块: + +注:表格 + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
MetricAvailability
NGINX (open-source)NGINX PlusNGINX logs
accepts / acceptedxx
handledxx
droppedxx
activexx
requests / totalxx
4xx codesxx
5xx codesxx
request timex
+ +#### 指标收集:NGINX(开源版) #### + +开源版的 NGINX 会显示几个与服务器状态有关的指标在状态页面上,只要你启用了 HTTP [stub status module][2] 。要检查模块是否被加载,运行以下命令: + + nginx -V 2>&1 | grep -o with-http_stub_status_module + +如果你看到 http_stub_status_module 被输出在终端,说明状态模块已启用。 + +如果该命令没有输出,你需要启用状态模块。你可以使用 --with-http_stub_status_module 参数去配置 [building NGINX from source][3]: + + ./configure \ + … \ + --with-http_stub_status_module + make + sudo make install + +验证模块已经启用或你自己启用它后,你还需要修改 NGINX 配置文件为状态页面设置本地访问的 URL(例如,/ nginx_status): + + server { + location /nginx_status { + stub_status on; + + access_log off; + allow 127.0.0.1; + deny all; + } + } + +注:nginx 配置中的 server 块通常并不在主配置文件中(例如,/etc/nginx/nginx.conf),但主配置中会加载补充的配置文件。要找到主配置文件,首先运行以下命令: + + nginx -t + +打开主配置文件,在以 http 模块结尾的附近查找以 include 开头的行包,如: + + include /etc/nginx/conf.d/*.conf; + +在所包含的配置文件中,你应该会找到主服务器模块,你可以如上所示修改 NGINX 的指标报告。更改任何配置后,通过执行以下命令重新加载配置文件: + + nginx -s reload + +现在,你可以查看指标的状态页: + + Active connections: 24 + server accepts handled requests + 1156958 1156958 4491319 + Reading: 0 Writing: 18 Waiting : 6 + +请注意,如果你正试图从远程计算机访问状态页面,则需要将远程计算机的 IP 地址添加到你的状态配置文件的白名单中,在上面的配置文件中 127.0.0.1 仅在白名单中。 + +nginx 的状态页面是一中查看指标快速又简单的方法,但当连续监测时,你需要每隔一段时间自动记录该数据。然后通过监控工具箱 [Nagios][4] 或者 [Datadog][5],以及收集统计信息的服务 [collectD][6] 来分析已保存的 NGINX 状态信息。 + +#### 指标收集: NGINX Plus #### + +商业版的 NGINX Plus 通过 ngx_http_status_module 提供的可用指标比开源版 NGINX 更多 [many more metrics][7] 。NGINX Plus 附加了更多的字节流指标,以及负载均衡系统和高速缓存的信息。NGINX Plus 还报告所有的 HTTP 状态码类型(1XX,2XX,3XX,4XX,5XX)的计数。一个简单的 NGINX Plus 状态报告 [here][8]。 + +![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png) + +*注: NGINX Plus 在状态仪表盘"Active”连接定义的收集指标的状态模块和开源 NGINX 的略有不同。在 NGINX Plus 指标中,活动连接不包括等待状态(又叫空闲连接)连接。* + +NGINX Plus 也集成了其他监控系统的报告 [JSON格式指标][9] 。用 NGINX Plus 时,你可以看到 [负载均衡服务器组的][10]指标和健康状况,或着再向下能取得的仅是响应代码计数[从单个服务器][11]在负载均衡服务器中: + {"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055} + +启动 NGINX Plus 指标仪表盘,你可以在 NGINX 配置文件的 http 块内添加状态 server 块。 ([参见上一页][12]查找相关的配置文件,收集开源 NGINX 版指标的说明。)例如,设立以下一个状态仪表盘在http://your.ip.address:8080/status.html 和一个 JSON 接口 http://your.ip.address:8080/status,可以添加以下 server block 来设定: + + server { + listen 8080; + root /usr/share/nginx/html; + + location /status { + status; + } + + location = /status.html { + } + } + +一旦你重新加载 NGINX 配置,状态页就会被加载: + + nginx -s reload + +关于如何配置扩展状态模块,官方 NGINX Plus 文档有 [详细介绍][13] 。 + +#### 指标收集:NGINX日志 #### + +NGINX 的 [日志模块][14] 写到配置可以自定义访问日志到指定文件。你可以自定义日志的格式和时间通过 [添加或移除变量][15]。捕获日志的详细信息,最简单的方法是添加下面一行在你配置文件的server 块中(参见[此节][16] 通过加载配置文件的信息来收集开源 NGINX 的指标): + + access_log logs/host.access.log combined; + +更改 NGINX 配置文件后,必须要重新加载配置文件: + + nginx -s reload + +“combined” 的日志格式,只包含默认参数,包括[一些关键数据][17],如实际的 HTTP 请求和相应的响应代码。在下面的示例日志中,NGINX 记录了200(成功)状态码当请求 /index.html 时和404(未找到)错误不存在的请求文件 /fail。 + + 127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari 537.36" + + 127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" + +你可以记录请求处理的时间通过添加一个新的日志格式在 NGINX 配置文件中的 http 块: + + log_format nginx '$remote_addr - $remote_user [$time_local] ' + '"$request" $status $body_bytes_sent $request_time ' + '"$http_referer" "$http_user_agent"'; + +通过修改配置文件中 server 块的 access_log 行: + + access_log logs/host.access.log nginx; + +重新加载配置文件(运行 nginx -s 
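reload)后,新的日志格式即会生效。等日志里积累了带处理时间的记录之后,还可以直接用 shell 做一些简单的统计。下面是一个示意性的例子(假设日志位于 logs/host.access.log,并使用了上面的 nginx 日志格式,此时按空白切分后第 9 个字段是状态码、第 11 个字段是 $request_time;路径和字段位置请按实际情况调整),用 awk 按状态码汇总请求数和平均处理时间,仅供参考:
+
+    # 示意:按状态码统计请求数与平均处理时间(字段位置基于上面的日志格式)
+    awk '{ sum[$9] += $11; cnt[$9]++ } END { for (s in cnt) printf "%s: %d requests, avg %.3f s\n", s, cnt[s], sum[s]/cnt[s] }' logs/host.access.log
+
+如前文所说,重新加载配置文件(运行 nginx -s 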
reload)后,你的访问日志将包括响应时间,如下图所示。单位为秒,可精确到毫秒。在本例中,服务器收到一个对 /big.pdf 的请求,发送 33973115 字节后返回了 206(成功)状态码,处理该请求用时 0.202 秒(202 毫秒):
+
+    127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36"
+
+你可以使用各种工具和服务来收集和分析 NGINX 日志。例如,[rsyslog][18] 可以监视你的日志,并将其传递给多个日志分析服务;你也可以使用免费的开源工具,如 [logstash][19] 来收集和分析日志;或者使用统一日志层,如 [Fluentd][20] 来收集和解析你的 NGINX 日志。
+
+### 结论 ###
+
+要监控 NGINX 的哪些指标,取决于你手头有哪些工具,以及某个指标提供的信息是否值得为监控它付出开销。例如,对你的组织而言,测量错误率是否重要到值得投资 NGINX Plus,或者值得搭建一套采集并分析日志的系统?
+
+在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,只需极少的设置,你就可以开始收集并监控所有 Web 服务器的指标。[在这篇文章中][21]了解如何用 Datadog 监控 NGINX,并立即开始 [Datadog 免费试用][22]。
+
+----------
+
+本文的原始 Markdown 文件在 [GitHub 上][23]。有问题、更正或补充?请[让我们知道][24]。
+
+--------------------------------------------------------------------------------
+
+via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
+
+作者:K Young
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/
+[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
+[3]:http://wiki.nginx.org/InstallOptions
+[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx
+[5]:http://docs.datadoghq.com/integrations/nginx/
+[6]:https://collectd.org/wiki/index.php/Plugin:nginx
+[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data
+[8]:http://demo.nginx.com/status.html
+[9]:http://demo.nginx.com/status
+[10]:http://demo.nginx.com/status/upstreams/demoupstreams
+[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses
+[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
+[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example
+[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html
+[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
+[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source
+[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
+[18]:http://www.rsyslog.com/
+[19]:https://www.elastic.co/products/logstash
+[20]:http://www.fluentd.org/
+[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
+[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up
+[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md
+[24]:https://github.com/DataDog/the-monitor/issues
From 059f098f30616efcd0809e7f157a2d2d1277c008 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Fri, 31 Jul 2015 10:23:53 +0800
Subject: [PATCH 037/207] Delete 20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md

---
 ...vity and Check Memory Usages of Browser.md | 130 ------------------
 1 file changed, 130 deletions(-)
 delete mode 100644 sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md

diff --git a/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md b/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md
deleted file mode 100644
index 2219e5e25e..0000000000
--- a/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md
+++ /dev/null
@@ -1,130 +0,0 @@
-translation by 
strugglingyouth -Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser -================================================================================ -Here again, I have written another post on [Linux Tips and Tricks][1] series. Since beginning the objective of this post is to make you aware of those small tips and hacks that lets you manage your system/server efficiently. - -![Create Cdrom ISO Image and Monitor Users in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/creating-cdrom-iso-watch-users-in-linux.jpg) - -Create Cdrom ISO Image and Monitor Users in Linux - -In this post we will see how to create ISO image from the contents of CD/DVD loaded in the drive, Open random man pages for learning, know details of other logged-in users and what they are doing and monitoring the memory usages of a browser, and all these using native tools/commands without any third-party application/utility. Here we go… - -### Create ISO image from a CD ### - -Often we need to backup/copy the content of CD/DVD. If you are on Linux platform you do not need any additional software. All you need is the access to Linux console. - -To create ISO image of the files in your CD/DVD ROM, you need two things. The first thing is you need to find the name of your CD/DVD drive. To find the name of your CD/DVD drive, you may choose any of the below three methods. - -**1. Run command lsblk (list block devices) from your terminal/console.** - - $ lsblk - -![Find Block Devices in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Block-Devices.png) - -Find Block Devices - -**2. To see information about CD-ROM, you may use commands like less or more.** - - $ less /proc/sys/dev/cdrom/info - -![Check Cdrom Information](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Cdrom-Inforamtion.png) - -Check Cdrom Information - -**3. You may get the same information from [dmesg command][2] and customize the output using egrep.** - -The command ‘dmesg‘ print/control the kernel buffer ring. ‘egrep‘ command is used to print lines that matches a pattern. Option -i and –color with egrep is used to ignore case sensitive search and highlight the matching string respectively. - - $ dmesg | egrep -i --color 'cdrom|dvd|cd/rw|writer' - -![Find Device Information](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Device-Information.png) - -Find Device Information - -Once you know the name of your CD/DVD, you can use following command to create a ISO image of your cdrom in Linux. - - $ cat /dev/sr0 > /path/to/output/folder/iso_name.iso - -Here ‘sr0‘ is the name of my CD/DVD drive. You should replace this with the name of your CD/DVD. This will help you in creating ISO image and backup contents of CD/DVD without any third-party application. - -![Create ISO Image of CDROM in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Create-ISO-Image-of-CDROM.png) - -Create ISO Image of CDROM - -### Open a man page randomly for Reading ### - -If you are new to Linux and want to learn commands and switches, this tweak is for you. Put the below line of code at the end of your `~/.bashrc` file. - - /use/bin/man $(ls /bin | shuf | head -1) - -Remember to put the above one line script in users’s `.bashrc` file and not in the .bashrc file of root. So when the next you login either locally or remotely using SSH you will see a man page randomly opened for you to read. For the newbies who want to learn commands and command-line switches, this will prove helpful. 
- -Here is what I got in my terminal after logging in to session for two times back-to-back. - -![LoadKeys Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/LoadKeys-Man-Pages.png) - -LoadKeys Man Pages - -![Zgrep Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/Zgrep-Man-Pages.png) - -Zgrep Man Pages - -### Check Activity of Logged-in Users ### - -Know what other users are doing on your shared server. - -In most general case, either you are a user of Shared Linux Server or the Admin. If you are concerned about your server and want to check what other users are doing, you may try command ‘w‘. - -This command lets you know if someone is executing any malicious code or tampering the server, slowing it down or anything else. ‘w‘ is the preferred way of keeping an eye on logged on users and what they are doing. - -To see logged on users and what they re doing, run command ‘w’ from terminal, preferably as root. - - # w - -![Check Linux User Activity](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Linux-User-Activity.png) - -Check Linux User Activity - -### Check Memory usages by Browser ### - -These days a lot of jokes are cracked on Google-chrome and its demand of memory. If you want to know the memory usages of a browser, you can list the name of the process, its PID and Memory usages of it. To check memory usages of a browser, just enter the “about:memory” in the address bar without quotes. - -I have tested it on Google-Chrome and Mozilla Firefox web browser. If you can check it on any other browser and it works well you may acknowledge us in the comments below. Also you may kill the browser process simply as if you have done for any Linux terminal process/service. - -In Google Chrome, type `about:memory` in address bar, you should get something similar to below image. - -![Check Chrome Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Chrome-Memory-Usage.png) - -Check Chrome Memory Usage - -In Mozilla Firefox, type `about:memory` in address bar, you should get something similar to below image. - -![Check Firefox Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firefox-Memory-Usage.png) - -Check Firefox Memory Usage - -Out of these options you may select any of them, if you understand what it is. To check memory usages, click the left most option ‘Measure‘. - -![Firefox Main Process](http://www.tecmint.com/wp-content/uploads/2015/07/Firefox-Main-Processes.png) - -Firefox Main Process - -It shows tree like process-memory usages by browser. - -That’s all for now. Hope all the above tips will help you at some point of time. If you have one (or more) tips/tricks that will help Linux Users to manage their Linux System/Server more efficiently ans is lesser known, you may like to share it with us. - -I’ll be here with another post soon, till then stay tuned and connected to TecMint. Provide us with your valuable feedback in the comments below. Like and share us and help us get spread. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/creating-cdrom-iso-image-watch-user-activity-in-linux/ - -作者:[Avishek Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/tag/linux-tricks/ -[2]:http://www.tecmint.com/dmesg-commands/ From ad35cb507dd4985b31bcb82bd359776ca201be6b Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 31 Jul 2015 10:24:34 +0800 Subject: [PATCH 038/207] Delete 20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md --- ...vity and Check Memory Usages of Browser.md | 130 ------------------ 1 file changed, 130 deletions(-) delete mode 100644 sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md diff --git a/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md b/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md deleted file mode 100644 index 2219e5e25e..0000000000 --- a/sources/tech/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md +++ /dev/null @@ -1,130 +0,0 @@ -translation by strugglingyouth -Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser -================================================================================ -Here again, I have written another post on [Linux Tips and Tricks][1] series. Since beginning the objective of this post is to make you aware of those small tips and hacks that lets you manage your system/server efficiently. - -![Create Cdrom ISO Image and Monitor Users in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/creating-cdrom-iso-watch-users-in-linux.jpg) - -Create Cdrom ISO Image and Monitor Users in Linux - -In this post we will see how to create ISO image from the contents of CD/DVD loaded in the drive, Open random man pages for learning, know details of other logged-in users and what they are doing and monitoring the memory usages of a browser, and all these using native tools/commands without any third-party application/utility. Here we go… - -### Create ISO image from a CD ### - -Often we need to backup/copy the content of CD/DVD. If you are on Linux platform you do not need any additional software. All you need is the access to Linux console. - -To create ISO image of the files in your CD/DVD ROM, you need two things. The first thing is you need to find the name of your CD/DVD drive. To find the name of your CD/DVD drive, you may choose any of the below three methods. - -**1. Run command lsblk (list block devices) from your terminal/console.** - - $ lsblk - -![Find Block Devices in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Block-Devices.png) - -Find Block Devices - -**2. To see information about CD-ROM, you may use commands like less or more.** - - $ less /proc/sys/dev/cdrom/info - -![Check Cdrom Information](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Cdrom-Inforamtion.png) - -Check Cdrom Information - -**3. You may get the same information from [dmesg command][2] and customize the output using egrep.** - -The command ‘dmesg‘ print/control the kernel buffer ring. ‘egrep‘ command is used to print lines that matches a pattern. 
Option -i and –color with egrep is used to ignore case sensitive search and highlight the matching string respectively. - - $ dmesg | egrep -i --color 'cdrom|dvd|cd/rw|writer' - -![Find Device Information](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Device-Information.png) - -Find Device Information - -Once you know the name of your CD/DVD, you can use following command to create a ISO image of your cdrom in Linux. - - $ cat /dev/sr0 > /path/to/output/folder/iso_name.iso - -Here ‘sr0‘ is the name of my CD/DVD drive. You should replace this with the name of your CD/DVD. This will help you in creating ISO image and backup contents of CD/DVD without any third-party application. - -![Create ISO Image of CDROM in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Create-ISO-Image-of-CDROM.png) - -Create ISO Image of CDROM - -### Open a man page randomly for Reading ### - -If you are new to Linux and want to learn commands and switches, this tweak is for you. Put the below line of code at the end of your `~/.bashrc` file. - - /use/bin/man $(ls /bin | shuf | head -1) - -Remember to put the above one line script in users’s `.bashrc` file and not in the .bashrc file of root. So when the next you login either locally or remotely using SSH you will see a man page randomly opened for you to read. For the newbies who want to learn commands and command-line switches, this will prove helpful. - -Here is what I got in my terminal after logging in to session for two times back-to-back. - -![LoadKeys Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/LoadKeys-Man-Pages.png) - -LoadKeys Man Pages - -![Zgrep Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/Zgrep-Man-Pages.png) - -Zgrep Man Pages - -### Check Activity of Logged-in Users ### - -Know what other users are doing on your shared server. - -In most general case, either you are a user of Shared Linux Server or the Admin. If you are concerned about your server and want to check what other users are doing, you may try command ‘w‘. - -This command lets you know if someone is executing any malicious code or tampering the server, slowing it down or anything else. ‘w‘ is the preferred way of keeping an eye on logged on users and what they are doing. - -To see logged on users and what they re doing, run command ‘w’ from terminal, preferably as root. - - # w - -![Check Linux User Activity](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Linux-User-Activity.png) - -Check Linux User Activity - -### Check Memory usages by Browser ### - -These days a lot of jokes are cracked on Google-chrome and its demand of memory. If you want to know the memory usages of a browser, you can list the name of the process, its PID and Memory usages of it. To check memory usages of a browser, just enter the “about:memory” in the address bar without quotes. - -I have tested it on Google-Chrome and Mozilla Firefox web browser. If you can check it on any other browser and it works well you may acknowledge us in the comments below. Also you may kill the browser process simply as if you have done for any Linux terminal process/service. - -In Google Chrome, type `about:memory` in address bar, you should get something similar to below image. - -![Check Chrome Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Chrome-Memory-Usage.png) - -Check Chrome Memory Usage - -In Mozilla Firefox, type `about:memory` in address bar, you should get something similar to below image. 
- -![Check Firefox Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firefox-Memory-Usage.png) - -Check Firefox Memory Usage - -Out of these options you may select any of them, if you understand what it is. To check memory usages, click the left most option ‘Measure‘. - -![Firefox Main Process](http://www.tecmint.com/wp-content/uploads/2015/07/Firefox-Main-Processes.png) - -Firefox Main Process - -It shows tree like process-memory usages by browser. - -That’s all for now. Hope all the above tips will help you at some point of time. If you have one (or more) tips/tricks that will help Linux Users to manage their Linux System/Server more efficiently ans is lesser known, you may like to share it with us. - -I’ll be here with another post soon, till then stay tuned and connected to TecMint. Provide us with your valuable feedback in the comments below. Like and share us and help us get spread. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/creating-cdrom-iso-image-watch-user-activity-in-linux/ - -作者:[Avishek Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/tag/linux-tricks/ -[2]:http://www.tecmint.com/dmesg-commands/ From 6d077f1f911fb9810de044db8898ca2c9be3b5a9 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 31 Jul 2015 10:25:36 +0800 Subject: [PATCH 039/207] Create Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser --- ...ctivity and Check Memory Usages of Browser | 131 ++++++++++++++++++ 1 file changed, 131 insertions(+) create mode 100644 translated/Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser diff --git a/translated/Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser b/translated/Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser new file mode 100644 index 0000000000..02805f62ff --- /dev/null +++ b/translated/Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser @@ -0,0 +1,131 @@ + +用 CD 创建 ISO,观察用户活动和检查浏览器内存的技巧 +================================================================================ +我已经写过 [Linux 提示和技巧][1] 系列的一篇文章。写这篇文章的目的是让你知道这些小技巧可以有效地管理你的系统/服务器。 + +![Create Cdrom ISO Image and Monitor Users in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/creating-cdrom-iso-watch-users-in-linux.jpg) + +在Linux中创建 Cdrom ISO 镜像和监控用户 + +在这篇文章中,我们将看到如何使用 CD/DVD 驱动器中加载到的内容来创建 ISO 镜像,打开随机手册页学习,看到登录用户的详细情况和查看浏览器内存使用量,而所有这些完全使用本地工具/命令无任何第三方应用程序/组件。让我们开始吧... + +### 用 CD 中创建 ISO 映像 ### + +我们经常需要备份/复制 CD/DVD 的内容。如果你是在 Linux 平台上,不需要任何额外的软件。所有需要的是进入 Linux 终端。 + +要从 CD/DVD 上创建 ISO 镜像,你需要做两件事。第一件事就是需要找到CD/DVD 驱动器的名称。要找到 CD/DVD 驱动器的名称,可以使用以下三种方法。 + +**1. 从终端/控制台上运行 lsblk 命令(单个驱动器).** + + $ lsblk + +![Find Block Devices in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Block-Devices.png) + +找驱动器 + +**2.要查看有关 CD-ROM 的信息,可以使用以下命令。** + + $ less /proc/sys/dev/cdrom/info + +![Check Cdrom Information](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Cdrom-Inforamtion.png) + +检查 Cdrom 信息 + +**3. 
使用 [dmesg 命令][2] 也会得到相同的信息,并使用 egrep 来自定义输出。**
+
+‘dmesg‘ 命令用于打印/控制内核环形缓冲区。‘egrep‘ 命令用于打印匹配某个模式的行,其 -i 和 --color 选项分别用于忽略大小写和高亮显示匹配的字符串。
+
+    $ dmesg | egrep -i --color 'cdrom|dvd|cd/rw|writer'
+
+![Find Device Information](http://www.tecmint.com/wp-content/uploads/2015/07/Find-Device-Information.png)
+
+查找设备信息
+
+一旦知道了 CD/DVD 驱动器的名称,在 Linux 上就可以用下面的命令来创建它的 ISO 镜像。
+
+    $ cat /dev/sr0 > /path/to/output/folder/iso_name.iso
+
+这里的‘sr0‘是我的 CD/DVD 驱动器的名称,你应该把它替换成你自己的 CD/DVD 驱动器名。这样,无需任何第三方应用程序,你就可以创建 ISO 镜像并备份 CD/DVD 的内容。
+
+![Create ISO Image of CDROM in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Create-ISO-Image-of-CDROM.png)
+
+创建 CDROM 的 ISO 映像
+
+### 随机打开一个手册页 ###
+
+如果你是 Linux 新手,想学习命令及命令行开关的用法,这个小技巧正适合你。把下面的代码行添加到你的 `~/.bashrc` 文件末尾。
+
+    /usr/bin/man $(ls /bin | shuf | head -1)
+
+记得把上面这行脚本添加到普通用户的 `.bashrc` 文件中,而不是 root 用户的 .bashrc 文件。这样,下次你在本地登录或通过 SSH 远程登录时,都会看到一个随机打开的手册页供你阅读。对想学习命令和命令行开关的新手来说,这会很有帮助。
+
+下面是我连续两次登录会话后在终端里看到的内容。
+
+![LoadKeys Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/LoadKeys-Man-Pages.png)
+
+LoadKeys 手册页
+
+![Zgrep Man Pages](http://www.tecmint.com/wp-content/uploads/2015/07/Zgrep-Man-Pages.png)
+
+Zgrep 手册页
+
+### 查看登录用户的状态 ###
+
+了解其他用户正在共享服务器上做什么。
+
+一般情况下,你要么是共享 Linux 服务器的用户,要么是它的管理员。如果你担心自己服务器的安全,想查看其他用户在做什么,可以使用命令 'w'。
+
+这个命令可以让你知道是否有人在执行恶意代码、篡改服务器、拖慢服务器,或者在做其他事情。'w' 是盯着已登录用户及其行为的首选方式。
+
+要查看已登录的用户正在做什么,从终端运行命令 'w',最好以 root 身份运行。
+
+    # w
+
+![Check Linux User Activity](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Linux-User-Activity.png)
+
+检查 Linux 用户状态
+
+### 查看浏览器的内存使用状况 ###
+
+最近有不少关于 Google Chrome 及其内存需求的调侃。如果你想知道浏览器的内存用量,可以列出进程名、PID 以及它的内存使用情况。要检查浏览器的内存使用情况,只需在地址栏输入 “about:memory”(不带引号)。
+
+我已经在 Google Chrome 和 Mozilla Firefox 网页浏览器上进行了测试。如果你在其他浏览器上测试也工作良好,可以在下面的评论中告诉我们。另外,你也可以像对待任何 Linux 终端进程/服务那样,直接杀掉浏览器进程。
+
+在 Google Chrome 中,在地址栏输入 `about:memory`,你应该会得到类似下图的东西。
+
+![Check Chrome Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Chrome-Memory-Usage.png)
+
+查看 Chrome 内存使用状况
+
+在 Mozilla Firefox 浏览器中,在地址栏输入 `about:memory`,你应该会得到类似下图的东西。
+
+![Check Firefox Memory Usage](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firefox-Memory-Usage.png)
+
+查看 Firefox 内存使用状况
+
+在这些选项中,如果你明白它们的含义,可以任选其一。要查看内存用量,点击最左边的 ‘Measure‘ 选项即可。
+
+![Firefox Main Process](http://www.tecmint.com/wp-content/uploads/2015/07/Firefox-Main-Processes.png)
+
+Firefox 主进程
+
+它会以树状形式展示浏览器各进程的内存使用量。
+
+目前为止就这样了。希望上述所有的提示能在某些时候帮到你。如果你有一个(或多个)鲜为人知、能帮助 Linux 用户更有效地管理 Linux 系统/服务器的技巧,欢迎分享给我们。
+
+我会很快在这里发布另一篇文章,到时候敬请关注。请在下面的评论里提供你的宝贵意见。喜欢请分享,帮助我们传播。
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/creating-cdrom-iso-image-watch-user-activity-in-linux/
+
+作者:[Avishek Kumar][a]
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/avishek/
+[1]:http://www.tecmint.com/tag/linux-tricks/
+[2]:http://www.tecmint.com/dmesg-commands/
From 106b5e487e43838d84df99a557be5b9a6801b5ea Mon Sep 17 00:00:00 2001
From: GOLinux
Date: Fri, 31 Jul 2015 10:54:26 +0800
Subject: [PATCH 040/207] [Translated]20150730 Must-Know Linux Commands For New Users.md

---
 ... Must-Know Linux Commands For New Users.md | 186 ------------------
 ... 
Must-Know Linux Commands For New Users.md | 185 +++++++++++++++++ 2 files changed, 185 insertions(+), 186 deletions(-) delete mode 100644 sources/tech/20150730 Must-Know Linux Commands For New Users.md create mode 100644 translated/tech/20150730 Must-Know Linux Commands For New Users.md diff --git a/sources/tech/20150730 Must-Know Linux Commands For New Users.md b/sources/tech/20150730 Must-Know Linux Commands For New Users.md deleted file mode 100644 index 55f1b0dbfe..0000000000 --- a/sources/tech/20150730 Must-Know Linux Commands For New Users.md +++ /dev/null @@ -1,186 +0,0 @@ -Translating by GOLinux! -Must-Know Linux Commands For New Users -================================================================================ -![Manage system updates via the command line with dnf on Fedora.](http://www.linux.com/images/stories/41373/fedora-cli.png) -Manage system updates via the command line with dnf on Fedora. - -One of the beauties of Linux-based systems is that you can manage your entire system right from the terminal using the command line. The advantage of using the command line is that you can use the same knowledge and skills to manage any Linux distribution. - -This is not possible through the graphical user interface (GUI) as each distro, and desktop environment (DE), offers its own user interfaces. To be clear, there are cases in which you will need different commands to perform certain tasks on different distributions, but more or less the concept and ideas remain the same. - -In this article, we are going to talk about some of the basic commands that a new Linux user should know. I will show you how to update your system, manage software, manipulate files and switch to root using the command line on three major distributions: Ubuntu (which also includes its flavors and derivatives, and Debian), openSUSE and Fedora. - -*Let's get started!* - -### Keep your system safe and up-to-date ### - -Linux is secure by design, but the fact is that all software has bugs and there could be security holes. So it's very important to keep your system updated. Think of it this way: Running an out-of-date operating system is like being in an armored tank with the doors unlocked. Will the armor protect you? Anyone can enter through the open doors and cause harm. Similarly there can be un-patched holes in your OS which can compromise your systems. Open source communities, unlike the proprietary world, are extremely quick at patching holes, so if you keep your system updated you'll stay safe. - -Keep an eye on news sites to be aware of security vulnerabilities. If there is a hole discovered, read about it and update your system as soon as a patch is out. Either way you must make it a practice to run the update commands at least once a week on production machines. If you are running a complicated server then be extra careful and go through the changelog to ensure updates won't break your customization. - -**Ubuntu**: Bear one thing in mind: you must always refresh repositories (aka repos) before upgrading the system or installing any software. On Ubuntu, you can update your system with the following commands. 
The first command refreshes repositories: - - sudo apt-get update - -Once the repos are updated you can now run the system update command: - - sudo apt-get upgrade - -However this command doesn't update the kernel and some other packages, so you must also run this command: - - sudo apt-get dist-upgrade - -**openSUSE**: If you are on openSUSE, you can update the system using these commands (as usual, the first command is meant to update repos) - - sudo zypper refresh - sudo zypper up - -**Fedora**: If you are on Fedora, you can use the 'dnf' command which is 'kind' of equivalent to zypper and apt-get: - - sudo dnf update - sudo dnf upgrade - -### Software installation and removal ### - -You can install only those packages which are available in the repositories enabled on your system. Every distro comes with some official or third-party repos enabled by default. - -**Ubuntu**: To install any package on Ubuntu, first update the repo and then use this syntax: - - sudo apt-get install [package_name] - -Example: - - sudo apt-get install gimp - -**openSUSE**: The commands would be: - - sudo zypper install [package_name] - -**Fedora**: Fedora has dropped 'yum' and now uses 'dnf' so the command would be: - - sudo dnf install [package_name] - -The procedure to remove the software is the same, just exchange 'install' with 'remove'. - -**Ubuntu**: - - sudo apt-get remove [package_name] - -**openSUSE**: - - sudo zypper remove [package_name] - -**Fedora**: - - sudo dnf remove [package_name] - -### How to manage third party software? ### - -There is a huge community of developers who offer their software to users. Different distributions use different mechanisms to make third party software available to their users. It also depends on how a developer offers their software to users; some offer binaries and others offer it through repositories. - -Ubuntu heavily relies on PPAs (personal package archives) but, unfortunately, there is no built-in tool which can assist a user in searching PPAs. You will need to Google the PPA and then add the repository manually before installing the software. This is how you would add any PPA to your system: - - sudo add-apt-repository ppa: - -Example: Let's say I want to add LibreOffice PPA to my system. I would Google the PPA and then acquire the repo name from Launchpad, which in this case is "libreoffice/ppa". Then add the ppa using the following command: - - sudo add-apt-repository ppa:libreoffice/ppa - -It will ask you to hit the Enter key in order to import the keys. Once it's done, refresh the repos with the 'update' command and then install the package. - -openSUSE has an elegant solution for third-party apps. You can visit software.opensuse.org, search for the package and install it with one click. It will automatically add the repo to your system. If you want to add any repo manually, use this command:. - - sudo zypper ar -f url_of_the_repo name_of_repo - sudo zypper ar -f http://download.opensuse.org/repositories/LibreOffice:Factory/openSUSE_13.2/LibreOffice:Factory.repo LOF - -Then refresh the repo and install software: - - sudo zypper refresh - sudo zypper install libreoffice - -Fedora users can simply add RPMFusion (both free and non-free repos) which contain a majority of applications. 
In case you do need to add a repo, this is the command: - -dnf config-manager --add-repo http://www.example.com/example.repo - -### Some basic commands ### - -I have written a few [articles][1] on how to manage files on your system using the CLI, here are some of the basic commands which are common across all distributions. - -Copy files or directories to a new location: - - cp path_of_file_1 path_of_the_directory_where_you_want_to_copy/ - -Copy all files from a directory to a new location (notice the slash and asterisk, which implies all files within that directory): - - cp path_of_files/* path_of_the_directory_where_you_want_to_copy/ - -Move a file from one location to another (the trailing slash means inside that directory): - - mv path_of_file_1 path_of_the_directory_where_you_want_to_move/ - -Move all file from one location to another: - - mv path_of_directory_where_files_are/* path_of_the_directory_where_you_want_to_move/ - -Delete a file: - - rm path_of_file - -Delete a directory: - - rm -r path_of_directory - -Remove all content from the directory, leaving the directory folder intact: - - rm -r path_of_directory/* - -### Create new directory ### - -To create a new directory, first enter the location where you want to create a directory. Let's say you want to create a 'foundation' folder inside your Documents directory. Let's change the directory using the cd (aka change directory) command: - - cd /home/swapnil/Documents - -(exchange 'swapnil with the user on your system) - -Then create the directory with mkdir command: - - mkdir foundation - -You can also create a directory from anywhere, by giving the path of the directory. For example: - - mdkir /home/swapnil/Documents/foundation - -If you want to create parent-child directories, which means directories within other directories then use the -p option. It will create all directories in the given path: - - mdkir -p /home/swapnil/Documents/linux/foundation - -### Become root ### - -You either need to be root or the user should have sudo powers to perform some administrative tasks such as managing packages or making changes to the root directories or files. An example would be to edit the 'fstab' file which keeps a record of mounted hard drives. It's inside the 'etc' directory which is within the root directory. You can make changes to this file only as a super user. In most distros you can become root by 'switching user'. Let's say on openSUSE I want to become root as I am going to work inside the root directory. You can use either command: - - sudo su - - -Or - - su - - -That will ask for the password and then you will have root privileges. Keep one point in mind: never run your system as root user unless you know what you are doing. Another important point to note is that the files or directories you modify as root also change ownership of those files from that user or specific service to root. You may have to revert the ownership of those files otherwise the services or users won't be able to to access or write to those files. To change users, this is the command: - - sudo chown -R user:user /path_of_file_or_directory - -You may often need this when you have partitions from other distros mounted on the system. When you try to access files on such partitions, you may come across a permission denied error. You can simply change the ownership of such partitions to access them. Just be extra careful, don't change permissions or ownership of root directories. - -These are the basic commands any new Linux user needs. 
If you have any questions or if you want us to cover a specific topic, please mention them in the comments below.
-
--------------------------------------------------------------------------------
-
-via: http://www.linux.com/learn/tutorials/842251-must-know-linux-commands-for-new-users
-
-作者:[Swapnil Bhartiya][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.linux.com/community/forums/person/61003
-[1]:http://www.linux.com/learn/tutorials/828027-how-to-manage-your-files-from-the-command-line
diff --git a/translated/tech/20150730 Must-Know Linux Commands For New Users.md b/translated/tech/20150730 Must-Know Linux Commands For New Users.md
new file mode 100644
index 0000000000..230cecf736
--- /dev/null
+++ b/translated/tech/20150730 Must-Know Linux Commands For New Users.md
@@ -0,0 +1,185 @@
+新手应知应会的Linux命令
+================================================================================
+![Manage system updates via the command line with dnf on Fedora.](http://www.linux.com/images/stories/41373/fedora-cli.png)
+在Fedora上通过命令行使用dnf来管理系统更新
+
+基于Linux的系统的优点之一,就是你可以通过终端使用命令行来管理整个系统。使用命令行的优势在于,你可以使用相同的知识和技能来管理随便哪个Linux发行版。
+
+对于各个发行版以及桌面环境(DE)而言,要一致地使用图形化用户界面(GUI)却几乎是不可能的,因为它们都提供了各自的用户界面。要明确的是,有那么些情况,你需要在不同的发行版上使用不同的命令来部署某些特定的任务,但是,或多或少它们的概念和意图却仍然是一致的。
+
+在本文中,我们打算讨论Linux用户应当掌握的一些基本命令。我将给大家演示怎样使用命令行来更新系统、管理软件、操作文件以及切换到root,这些操作将在三个主要发行版上进行:Ubuntu(也包括其定制版和衍生版,还有Debian),openSUSE,以及Fedora。
+
+*让我们开始吧!*
+
+### 保持系统安全和最新 ###
+
+Linux是基于安全设计的,但事实上是,任何软件都有缺陷,会导致安全漏洞。所以,保持你的系统更新到最新是十分重要的。这么想吧:运行过时的操作系统,就像是你坐在全副武装的坦克里头,而门却没有锁。武器会保护你吗?任何人都可以进入开放的大门,对你造成伤害。同样,在你的系统中也有没有打补丁的漏洞,这些漏洞会危害到你的系统。开源社区,不像专有软件世界,在漏洞补丁方面反应是相当快的,所以,如果你保持系统最新,你也获得了安全保证。
+
+留意新闻站点,了解安全漏洞。如果发现了一个漏洞,请阅读之,然后在补丁出来的第一时间更新。不管怎样,在生产机器上,你每星期必须至少运行一次更新命令。如果你运行的是一台复杂的服务器,那么就要额外当心了。仔细阅读变更日志,以确保更新不会搞坏你的自定义服务。
+
+**Ubuntu**:牢记一点:你在升级系统或安装不管什么软件之前,都必须要刷新仓库(也就是repos)。在Ubuntu上,你可以使用下面的命令来更新系统,第一个命令用于刷新仓库:
+
+    sudo apt-get update
+
+仓库更新后,现在你可以运行系统更新命令了:
+
+    sudo apt-get upgrade
+
+然而,这个命令不会更新内核和其它一些包,所以你也必须要运行下面这个命令:
+
+    sudo apt-get dist-upgrade
+
+**openSUSE**:如果你是在openSUSE上,你可以使用以下命令来更新系统(照例,第一个命令的意思是更新仓库):
+
+    sudo zypper refresh
+    sudo zypper up
+
+**Fedora**:如果你是在Fedora上,你可以使用'dnf'命令,它是zypper和apt-get的'同类':
+
+    sudo dnf update
+    sudo dnf upgrade
+
+### 软件安装与移除 ###
+
+你只可以安装那些你系统上启用的仓库中可用的包,各个发行版默认都附带有并启用了一些官方或者第三方仓库。
+
+**Ubuntu**:要在Ubuntu上安装包,首先更新仓库,然后使用下面的语句:
+
+    sudo apt-get install [package_name]
+
+样例:
+
+    sudo apt-get install gimp
+
+**openSUSE**:命令是这样的:
+
+    sudo zypper install [package_name]
+
+**Fedora**:Fedora已经丢弃了'yum',现在换成了'dnf',所以命令是这样的:
+
+    sudo dnf install [package_name]
+
+移除软件的过程也一样,只要把'install'改成'remove'。
+
+**Ubuntu**:
+
+    sudo apt-get remove [package_name]
+
+**openSUSE**:
+
+    sudo zypper remove [package_name]
+
+**Fedora**:
+
+    sudo dnf remove [package_name]
+
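+如果你不确定要安装的包的确切名称,各发行版也都自带了搜索命令。下面是一个简单的示意(假设我们要查找与 gimp 相关的包,包名仅作演示):
+
+    apt-cache search gimp    # Ubuntu:在已启用的仓库中搜索包
+    sudo zypper search gimp  # openSUSE
+    sudo dnf search gimp     # Fedora
+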
+### 如何管理第三方软件? ###
+
+在一个庞大的开发者社区中,这些开发者们为用户提供了许多的软件。不同的发行版有不同的机制来使用这些第三方软件,将它们提供给用户。这同时也取决于开发者怎样将这些软件提供给用户:有些开发者会提供二进制包,而另外一些开发者则将软件发布到仓库中。
+
+Ubuntu严重依赖于PPA(个人包归档),但是,不幸的是,它却没有提供一个内建工具来帮助用户搜索这些PPA仓库。在安装软件前,你将需要通过Google搜索PPA,然后手工添加该仓库。下面就是添加PPA到系统的方法:
+
+    sudo add-apt-repository ppa:<仓库名>
+
+样例:比如说,我想要添加LibreOffice PPA到我的系统中。我应该Google该PPA,然后从Launchpad获得该仓库的名称,在本例中它是"libreoffice/ppa"。然后,使用下面的命令来添加该PPA:
+
+    sudo add-apt-repository ppa:libreoffice/ppa
+
+它会要你按下回车键来导入密钥。完成后,使用'update'命令来刷新仓库,然后安装该包。
+
+openSUSE拥有一个针对第三方应用的优雅的解决方案。你可以访问software.opensuse.org,一键点击搜索并安装相应包,它会自动将对应的仓库添加到你的系统中。如果你想要手工添加仓库,可以使用该命令:
+
+    sudo zypper ar -f url_of_the_repo name_of_repo
+    sudo zypper ar -f http://download.opensuse.org/repositories/LibreOffice:Factory/openSUSE_13.2/LibreOffice:Factory.repo LOF
+
+然后,刷新仓库并安装软件:
+
+    sudo zypper refresh
+    sudo zypper install libreoffice
+
+Fedora用户只需要添加RPMFusion(包括free和non-free仓库),该仓库包含了大量的应用。如果你需要添加其它仓库,命令如下:
+
+    dnf config-manager --add-repo http://www.example.com/example.repo
+
+### 一些基本命令 ###
+
+我已经写了一些关于使用CLI来管理你系统上的文件的[文章][1],下面介绍一些基本命令,这些命令在所有发行版上都经常会用到。
+
+拷贝文件或目录到一个新的位置:
+
+    cp path_of_file_1 path_of_the_directory_where_you_want_to_copy/
+
+将某个目录中的所有文件拷贝到一个新的位置(注意斜线和星号,它指的是该目录下的所有文件):
+
+    cp path_of_files/* path_of_the_directory_where_you_want_to_copy/
+
+将一个文件从某个位置移动到另一个位置(尾斜杠是说在该目录中):
+
+    mv path_of_file_1 path_of_the_directory_where_you_want_to_move/
+
+将所有文件从一个位置移动到另一个位置:
+
+    mv path_of_directory_where_files_are/* path_of_the_directory_where_you_want_to_move/
+
+删除一个文件:
+
+    rm path_of_file
+
+删除一个目录:
+
+    rm -r path_of_directory
+
+移除目录中所有内容,完整保留目录文件夹:
+
+    rm -r path_of_directory/*
+
+### 创建新目录 ###
+
+要创建一个新目录,首先输入你要创建的目录的位置。比如说,你想要在你的Documents目录中创建一个名为'foundation'的文件夹。让我们使用 cd (即change directory,改变目录)命令来改变目录:
+
+    cd /home/swapnil/Documents
+
+(替换'swapnil'为你系统中的用户)
+
+然后,使用 mkdir 命令来创建该目录:
+
+    mkdir foundation
+
+你也可以从任何地方创建一个目录,通过指定该目录的路径即可。例如:
+
+    mkdir /home/swapnil/Documents/foundation
+
+如果你想要创建父-子目录,那是指目录中的目录,那么可以使用 -p 选项。它会在指定路径中创建所有目录:
+
+    mkdir -p /home/swapnil/Documents/linux/foundation
+
+### 成为root ###
+
+你或许需要成为root,或者具有sudo权限的用户,来实施一些管理任务,如管理软件包或者对根目录或其下的文件进行一些修改。其中一个例子就是编辑'fstab'文件,该文件记录了挂载的硬盘驱动器。它在'etc'目录中,而该目录又在根目录中,你只能作为超级用户来修改该文件。在大多数的发行版中,你可以通过'切换用户'来成为root。比如说,在openSUSE上,我想要成为root,因为我要在根目录中工作,你可以使用下面的命令之一:
+
+    sudo su -
+
+或
+
+    su -
+
+该命令会要求输入密码,然后你就具有root特权了。记住一点:千万不要以root用户来运行系统,除非你知道你正在做什么。另外重要的一点需要注意的是,你以root身份对目录或文件进行修改后,会将它们的拥有关系从该用户或特定的服务改变为root。你必须恢复这些文件的拥有关系,否则该服务或用户就不能访问或写入到那些文件。要改回拥有关系,命令如下:
+
+    sudo chown -R user:user /path_of_file_or_directory
+
+当你将其它发行版上的分区挂载到系统中时,你可能经常需要该操作。当你试着访问这些分区上的文件时,你可能会碰到权限拒绝错误,你只需要改变这些分区的拥有关系就可以访问它们了。需要额外当心的是,不要改变根目录的权限或者拥有关系。
+
+这些就是Linux新手们需要的基本命令。如果你有任何问题,或者如果你想要我们涵盖一个特定的话题,请在下面的评论中告诉我们吧。
+
+--------------------------------------------------------------------------------
+
+via: http://www.linux.com/learn/tutorials/842251-must-know-linux-commands-for-new-users
+
+作者:[Swapnil Bhartiya][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.linux.com/community/forums/person/61003
+[1]:http://www.linux.com/learn/tutorials/828027-how-to-manage-your-files-from-the-command-line
From bc558997ab70ad174767f013c6a2dac05e959e9b Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 30 Jul 2015 22:23:23 +0800
Subject: [PATCH 041/207] PUB:20150309 Comparative Introduction To FreeBSD For
 Linux Users
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
@wwy-hust
翻译的不错! --- ...Introduction To FreeBSD For Linux Users.md | 101 ++++++++++++++++++ ...Introduction To FreeBSD For Linux Users.md | 98 ----------------- 2 files changed, 101 insertions(+), 98 deletions(-) create mode 100644 published/20150309 Comparative Introduction To FreeBSD For Linux Users.md delete mode 100644 translated/talk/20150309 Comparative Introduction To FreeBSD For Linux Users.md diff --git a/published/20150309 Comparative Introduction To FreeBSD For Linux Users.md b/published/20150309 Comparative Introduction To FreeBSD For Linux Users.md new file mode 100644 index 0000000000..01ee52d26a --- /dev/null +++ b/published/20150309 Comparative Introduction To FreeBSD For Linux Users.md @@ -0,0 +1,101 @@ +FreeBSD 和 Linux 有什么不同? +================================================================================ + +![](https://1102047360.rsc.cdn77.org/wp-content/uploads/2015/03/FreeBSD-790x494.jpg) + +### 简介 ### + +BSD最初从UNIX继承而来,目前,有许多的类Unix操作系统是基于BSD的。FreeBSD是使用最广泛的开源的伯克利软件发行版(即 BSD 发行版)。就像它隐含的意思一样,它是一个自由开源的类Unix操作系统,并且是公共服务器平台。FreeBSD源代码通常以宽松的BSD许可证发布。它与Linux有很多相似的地方,但我们得承认它们在很多方面仍有不同。 + +本文的其余部分组织如下:FreeBSD的描述在第一部分,FreeBSD和Linux的相似点在第二部分,它们的区别将在第三部分讨论,对他们功能的讨论和总结在最后一节。 + +### FreeBSD描述 ### + +#### 历史 #### + +- FreeBSD的第一个版本发布于1993年,它的第一张CD-ROM是FreeBSD1.0,发行于1993年12月。接下来,FreeBSD 2.1.0在1995年发布,并且获得了所有用户的青睐。实际上许多IT公司都使用FreeBSD并且很满意,我们可以列出其中的一些:IBM、Nokia、NetApp和Juniper Network。 + +#### 许可证 #### + +- 关于它的许可证,FreeBSD以多种开源许可证进行发布,它的名为Kernel的最新代码以两句版BSD许可证进行了发布,给予使用和重新发布FreeBSD的绝对自由。其它的代码则以三句版或四句版BSD许可证进行发布,有些是以GPL和CDDL的许可证发布的。 + +(LCTT 译注:BSD 许可证与 GPL 许可证相比,相当简短,最初只有四句规则;1999年应 RMS 请求,删除了第三句,新的许可证称作“新 BSD”或三句版BSD;原来的 BSD 许可证称作“旧 BSD”、“修订的 BSD”或四句版BSD;也有一种删除了第三、第四两句的版本,称之为两句版 BSD,等价于 MIT 许可证。) + +#### 用户 #### + +- FreeBSD的重要特点之一就是它的用户多样性。实际上,FreeBSD可以作为邮件服务器、Web 服务器、FTP 服务器以及路由器等,您只需要在它上运行服务相关的软件即可。而且FreeBSD还支持ARM、PowerPC、MIPS、x86、x86-64架构。 + +### FreeBSD和Linux的相似处 ### + +FreeBSD和Linux是两个自由开源的软件。实际上,它们的用户可以很容易的检查并修改源代码,用户拥有绝对的自由。而且,FreeBSD和Linux都是类Unix系统,它们的内核、内部组件、库程序都使用从历史上的AT&T Unix继承来的算法。FreeBSD从根基上更像Unix系统,而Linux是作为自由的类Unix系统发布的。许多工具应用都可以在FreeBSD和Linux中找到,实际上,他们几乎有同样的功能。 + +此外,FreeBSD能够运行大量的Linux应用。它可以安装一个Linux的兼容层,这个兼容层可以在编译FreeBSD时加入AAC Compact Linux得到,或通过下载已编译了Linux兼容层的FreeBSD系统,其中会包括兼容程序:aac_linux.ko。不同于FreeBSD的是,Linux无法运行FreeBSD的软件。 + +最后,我们注意到虽然二者有同样的目标,但二者还是有一些不同之处,我们在下一节中列出。 + +### FreeBSD和Linux的区别 ### + +目前对于大多数用户来说并没有一个选择FreeBSD还是Linux的明确的准则。因为他们有着很多同样的应用程序,因为他们都被称作类Unix系统。 + +在这一章,我们将列出这两种系统的一些重要的不同之处。 + +#### 许可证 #### + +- 两个系统的区别首先在于它们的许可证。Linux以GPL许可证发行,它为用户提供阅读、发行和修改源代码的自由,GPL许可证帮助用户避免仅仅发行二进制。而FreeBSD以BSD许可证发布,BSD许可证比GPL更宽容,因为其衍生著作不需要仍以该许可证发布。这意味着任何用户能够使用、发布、修改代码,并且不需要维持之前的许可证。 +- 您可以依据您的需求,在两种许可证中选择一种。首先是BSD许可证,由于其特殊的条款,它更受用户青睐。实际上,这个许可证使用户在保证源代码的封闭性的同时,可以售卖以该许可证发布的软件。再说说GPL,它需要每个使用以该许可证发布的软件的用户多加注意。 +- 如果想在以不同许可证发布的两种软件中做出选择,您需要了解他们各自的许可证,以及他们开发中的方法论,从而能了解他们特性的区别,来选择更适合自己需求的。 + +#### 控制 #### + +- 由于FreeBSD和Linux是以不同的许可证发布的,Linus Torvalds控制着Linux的内核,而FreeBSD却与Linux不同,它并未被控制。我个人更倾向于使用FreeBSD而不是Linux,这是因为FreeBSD才是绝对自由的软件,没有任何控制许可证的存在。Linux和FreeBSD还有其他的不同之处,我建议您先不急着做出选择,等读完本文后再做出您的选择。 + +#### 操作系统 #### + +- Linux主要指内核系统,这与FreeBSD不同,FreeBSD的整个系统都被维护着。FreeBSD的内核和一组由FreeBSD团队开发的软件被作为一个整体进行维护。实际上,FreeBSD开发人员能够远程且高效的管理核心操作系统。 +- 而Linux方面,在管理系统方面有一些困难。由于不同的组件由不同的源维护,Linux开发者需要将它们汇集起来,才能达到同样的功能。 +- FreeBSD和Linux都给了用户大量的可选软件和发行版,但他们管理的方式不同。FreeBSD是统一的管理方式,而Linux需要被分别维护。 + +#### 硬件支持 #### + +- 说到硬件支持,Linux比FreeBSD做的更好。但这不意味着FreeBSD没有像Linux那样支持硬件的能力。他们只是在管理的方式不同,这通常还依赖于您的需求。因此,如果您在寻找最新的解决方案,FreeBSD更适应您;但如果您在寻找更多的普适性,那最好使用Linux。 + +#### 原生FreeBSD Vs 原生Linux #### + +- 
两者的原生系统的区别又有不同。就像我之前说的,Linux是一个Unix的替代系统,由Linus Torvalds编写,并由网络上的许多极客一起协助实现的。Linux有一个现代系统所需要的全部功能,诸如虚拟内存、共享库、动态加载、优秀的内存管理等。它以GPL许可证发布。
+- FreeBSD也继承了Unix的许多重要的特性。FreeBSD是在加州大学开发的BSD的一种发行版。开发BSD的最重要的原因是用一个开源的系统来替代AT&T操作系统,从而给用户无需AT&T许可证便可使用的能力。
+- 许可证的问题是开发者们最关心的问题。他们试图提供一个最大化克隆Unix的开源系统。这影响了用户的选择,由于FreeBSD使用BSD许可证进行发布,因而相比Linux更加自由。
+
+#### 支持的软件包 ####
+
+- 从用户的角度来看,另一个二者不同的地方便是软件包以及从源码安装的软件的可用性和支持。Linux只提供了预编译的二进制包,这与FreeBSD不同,它不但提供预编译的包,而且还提供从源码编译和安装的构建系统。使用它的 ports 工具,FreeBSD给了您选择使用预编译的软件包(默认)和在编译时定制您软件的能力。(LCTT 译注:此处说明有误。Linux 也提供了源代码方式的包,并支持自己构建。)
+- 这些 ports 允许您构建所有支持FreeBSD的软件。而且,它们的管理还是层次化的,您可以在/usr/ports下找到源文件的地址以及一些正确使用FreeBSD的文档。
+- 这些提到的 ports 给予你产生不同软件包版本的可能性。FreeBSD给了您通过源代码构建以及预编译的两种软件,而不是像Linux一样只有预编译的软件包。您可以使用这两种安装方式管理您的系统。
+
+#### FreeBSD 和 Linux 常用工具比较 ####
+
+- 有大量的常用工具在FreeBSD上可用,并且有趣的是它们由FreeBSD的团队所维护。相反的,Linux工具来自GNU,这就是为什么在使用中有一些限制。(LCTT 译注:这也是 Linux 正式的名称被称作“GNU/Linux”的原因,因为本质上 Linux 其实只是指内核。)
+- 实际上FreeBSD采用的BSD许可证非常有益且有用。因此,您有能力维护核心操作系统,控制这些应用程序的开发。有一些工具类似于它们的祖先 - BSD和Unix的工具,但不同于GNU的套件,GNU套件只想做到最小的向后兼容。
+
+#### 标准 Shell ####
+
+- FreeBSD默认使用tcsh。它是csh的改进版,由于FreeBSD以BSD许可证发行,因此不建议您在其中使用GNU的组件 bash shell。bash和tcsh的区别仅仅在于tcsh的脚本功能。实际上,我们更推荐在FreeBSD中使用sh shell,因为它更加可靠,可以避免一些使用tcsh和csh时出现的脚本问题。
+
+#### 一个更加层次化的文件系统 ####
+
+- 像之前提到的一样,使用FreeBSD时,基础操作系统以及可选组件可以被很容易的区别开来。这形成了一些管理它们的标准。在Linux下,/bin,/sbin,/usr/bin或者/usr/sbin都是存放可执行文件的目录。FreeBSD不同,它有一些附加的对其进行组织的规范:可选组件被放在/usr/local/bin或者/usr/local/sbin目录下。这种方法可以帮助管理和区分基础操作系统和可选组件。
+
+### 结论 ###
+
+FreeBSD和Linux都是自由且开源的系统,它们有相似点也有不同点。上面列出的内容并不能说哪个系统比另一个更好。实际上,FreeBSD和Linux都有自己的特点和技术规格,这使它们与别的系统区别开来。那么,您有什么看法呢?您已经有在使用它们中的某个系统了么?如果答案为是的话,请给我们您的反馈;如果答案是否的话,在读完我们的描述后,您怎么看?请在留言处发表您的观点。
+
+--------------------------------------------------------------------------------
+
+via: http://www.unixmen.com/comparative-introduction-freebsd-linux-users/
+
+作者:[anismaj][a]
+译者:[wwy-hust](https://github.com/wwy-hust)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:https://www.unixmen.com/author/anis/
\ No newline at end of file
diff --git a/translated/talk/20150309 Comparative Introduction To FreeBSD For Linux Users.md b/translated/talk/20150309 Comparative Introduction To FreeBSD For Linux Users.md
deleted file mode 100644
index 76368e1033..0000000000
--- a/translated/talk/20150309 Comparative Introduction To FreeBSD For Linux Users.md
+++ /dev/null
@@ -1,98 +0,0 @@
-以比较的方式向Linux用户介绍FreeBSD
-================================================================================
-![](https://1102047360.rsc.cdn77.org/wp-content/uploads/2015/03/FreeBSD-790x494.jpg)
-
-### 简介 ###
-
-BSD最初从UNIX继承而来,目前,有许多的类Unix操作系统是基于BSD的。FreeBSD是使用最广泛的开源的伯克利软件发行版(即 BSD 发行版)。就像它隐含的意思一样,它是一个自由开源的类Unix操作系统,并且是公共服务器平台。FreeBSD源代码通常以宽松的BSD许可证发布。它与Linux有很多相似的地方,但我们得承认它们在很多方面仍有不同。
-
-本文的其余部分组织如下:FreeBSD的描述在第一部分,FreeBSD和Linux的相似点在第二部分,它们的区别将在第三部分讨论,对他们功能的讨论和总结在最后一节。
-
-### FreeBSD描述 ###
-
-#### 历史 ####
-
-- FreeBSD的第一个版本发布于1993年,它的第一张CD-ROM是FreeBSD1.0,发行于1993年12月。接下来,FreeBSD 2.1.0在1995年发布,并且获得了所有用户的青睐。实际上许多IT公司都使用FreeBSD并且很满意,我们可以列出其中的一些:IBM、Nokia、NetApp和Juniper Network。
-
-#### 许可证 ####
-
-- 关于它的许可证,FreeBSD以多种开源许可证进行发布,它的名为Kernel的最新代码以两句版BSD许可证进行了发布,给予使用和重新发布FreeBSD的绝对自由。其它的代码则以三句版或四句版BSD许可证进行发布,有些是以GPL和CDDL的许可证发布的。
-
-#### 用户 ####
-
-- FreeBSD的重要特点之一就是它的用户多样性。实际上,FreeBSD可以作为邮件服务器、Web 服务器、FTP 服务器以及路由器等,您只需要在它上面运行相关服务的软件即可。而且FreeBSD还支持ARM、PowerPC、MIPS、x86、x86-64架构。
-
-### FreeBSD和Linux的相似处 ###
-
-FreeBSD和Linux是两个自由开源的软件。实际上,它们的用户可以很容易的检查并修改源代码,用户拥有绝对的自由。而且,FreeBSD和Linux都是类Unix系统,它们的内核、内部组件、库程序都使用从历史上的AT&T Unix继承来的算法。FreeBSD从根基上更像Unix系统,而Linux是作为自由的类Unix系统发布的。许多工具应用都可以在FreeBSD和Linux中找到,实际上,他们几乎有同样的功能。
-
-此外,FreeBSD能够运行大量的Linux应用。它可以安装一个Linux的兼容层,这个兼容层可以在编译FreeBSD时加入AAC Compact Linux得到,或通过下载已编译了Linux兼容层的FreeBSD系统,其中会包括兼容程序:aac_linux.ko。不同于FreeBSD的是,Linux无法运行FreeBSD的软件。
-
-最后,我们注意到虽然二者有同样的目标,但二者还是有一些不同之处,我们在下一节中列出。
-
-### FreeBSD和Linux的区别 ###
-
-目前对于大多数用户来说并没有一个选择FreeBSD还是Linux的明确的准则。因为他们有着很多同样的应用程序,因为他们都被称作类Unix系统。
-
-在这一章,我们将列出这两种系统的一些重要的不同之处。
-
-#### 许可证 ####
-
-- 两个系统的区别首先在于它们的许可证。Linux以GPL许可证发行,它为用户提供阅读、发行和修改源代码的自由,GPL许可证帮助用户避免仅仅发行二进制。而FreeBSD以BSD许可证发布,BSD许可证比GPL更宽容,因为其衍生著作不需要仍以该许可证发布。这意味着任何用户能够使用、发布、修改代码,并且不需要维持之前的许可证。
-- 您可以依据您的需求,在两种许可证中选择一种。首先是BSD许可证,由于其特殊的条款,它更受用户青睐。实际上,这个许可证使用户在保证源代码的封闭性的同时,可以售卖以该许可证发布的软件。再说说GPL,它需要每个使用以该许可证发布的软件的用户多加注意。
-- 如果想在以不同许可证发布的两种软件中做出选择,您需要了解他们各自的许可证,以及他们开发中的方法论,从而能了解他们特性的区别,来选择更适合自己需求的。
-
-#### 控制 ####
-
-- 由于FreeBSD和Linux是以不同的许可证发布的,Linus Torvalds控制着Linux的内核,而FreeBSD却与Linux不同,它并未被控制。我个人更倾向于使用FreeBSD而不是Linux,这是因为FreeBSD才是绝对自由的软件,没有任何控制许可证的存在。Linux和FreeBSD还有其他的不同之处,我建议您先不急着做出选择,等读完本文后再做出您的选择。
-
-#### 操作系统 ####
-
-- Linux主要指内核系统,这与FreeBSD不同,FreeBSD的整个系统都被维护着。FreeBSD的内核和一组由FreeBSD团队开发的软件被作为一个整体进行维护。实际上,FreeBSD开发人员能够远程且高效的管理核心操作系统。
-- 而Linux方面,在管理系统方面有一些困难。由于不同的组件由不同的源维护,Linux开发者需要将它们汇集起来,才能达到同样的功能。
-- FreeBSD和Linux都给了用户大量的可选软件和发行版,但他们管理的方式不同。FreeBSD是统一的管理方式,而Linux需要被分别维护。
-
-#### 硬件支持 ####
-
-- 说到硬件支持,Linux比FreeBSD做的更好。但这不意味着FreeBSD没有像Linux那样支持硬件的能力。他们只是在管理的方式不同,这通常还依赖于您的需求。因此,如果您在寻找最新的解决方案,FreeBSD更适应您;但如果您在寻找更多的普适性,那最好使用Linux。
-
-#### 原生FreeBSD Vs 原生Linux ####
-
-- 两者的原生系统的区别又有不同。就像我之前说的,Linux是一个Unix的替代系统,由Linus Torvalds编写,并由网络上的许多极客一起协助实现的。Linux有一个现代系统所需要的全部功能,诸如虚拟内存、共享库、动态加载、优秀的内存管理等。它以GPL许可证发布。
-- FreeBSD也继承了Unix的许多重要的特性。FreeBSD是在加州大学开发的BSD的一种发行版。开发BSD的最重要的原因是用一个开源的系统来替代AT&T操作系统,从而给用户无需AT&T许可证便可使用的能力。
-- 许可证的问题是开发者们最关心的问题。他们试图提供一个最大化克隆Unix的开源系统。这影响了用户的选择,由于FreeBSD使用BSD许可证进行发布,因而相比Linux更加自由。
-
-#### 支持的软件包 ####
-
-- 从用户的角度来看,另一个二者不同的地方便是软件包以及从源码安装的软件的可用性和支持。Linux只提供了预编译的二进制包,这与FreeBSD不同,它不但提供预编译的包,而且还提供从源码编译和安装的构建系统。使用它的 ports 工具,FreeBSD给了您选择使用预编译的软件包(默认)和在编译时定制您软件的能力。
-- 这些 ports 允许您构建所有支持FreeBSD的软件。而且,它们的管理还是层次化的,您可以在/usr/ports下找到源文件的地址以及一些正确使用FreeBSD的文档。
-- 这些提到的 ports 给予你产生不同软件包版本的可能性。FreeBSD给了您通过源代码构建以及预编译的两种软件,而不是像Linux一样只有预编译的软件包。您可以使用这两种安装方式管理您的系统。
-
-#### FreeBSD 和 Linux 常用工具比较 ####
-
-- 有大量的常用工具在FreeBSD上可用,并且有趣的是它们由FreeBSD的团队所维护。相反的,Linux工具来自GNU,这就是为什么在使用中有一些限制。
-- 实际上FreeBSD采用的BSD许可证非常有益且有用。因此,您有能力维护核心操作系统,控制这些应用程序的开发。有一些工具类似于它们的祖先 - BSD和Unix的工具,但不同于GNU的套件,GNU套件只想做到最小的向后兼容。
-
-#### 标准 Shell ####
-
-- FreeBSD默认使用tcsh。它是csh的改进版,由于FreeBSD以BSD许可证发行,因此不建议您在其中使用GNU的组件 bash shell。bash和tcsh的区别仅仅在于tcsh的脚本功能。实际上,我们更推荐在FreeBSD中使用sh shell,因为它更加可靠,可以避免一些使用tcsh和csh时出现的脚本问题。
-
-#### 一个更加层次化的文件系统 ####
-
-- 像之前提到的一样,使用FreeBSD时,基础操作系统以及可选组件可以被很容易的区别开来。这形成了一些管理它们的标准。在Linux下,/bin,/sbin,/usr/bin或者/usr/sbin都是存放可执行文件的目录。FreeBSD不同,它有一些附加的对其进行组织的规范:可选组件被放在/usr/local/bin或者/usr/local/sbin目录下。这种方法可以帮助管理和区分基础操作系统和可选组件。
-
-### 结论 ###
-
-FreeBSD和Linux都是自由且开源的系统,它们有相似点也有不同点。上面列出的内容并不能说哪个系统比另一个更好。实际上,FreeBSD和Linux都有自己的特点和技术规格,这使它们与别的系统区别开来。那么,您有什么看法呢?您已经有在使用它们中的某个系统了么?如果答案为是的话,请给我们您的反馈;如果答案是否的话,在读完我们的描述后,您怎么看?请在留言处发表您的观点。
-
---------------------------------------------------------------------------------
-
-via: https://www.unixmen.com/comparative-introduction-freebsd-linux-users/
-
-作者:[anismaj][a]
-译者:[wwy-hust](https://github.com/wwy-hust)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[a]:https://www.unixmen.com/author/anis/
\ No newline at end of file
From 471b7f98d2364433e49d47db97c1070b4ecbdd8a Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 31 Jul 2015 13:57:30 +0800
Subject: [PATCH 042/207] =?UTF-8?q?=E8=A1=A5=E5=AE=8C=20PR?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @dingdongnigetou 原文中多次混杂 Plex Media Server 和 Plex Home Media Server,经过到官网的查看,并无 Home 产品及名称,所以统一了。 --- ...er On Ubuntu or CentOS 7.1 or Fedora 22.md | 62 +++++++++---------- 1 file changed, 30 insertions(+), 32 deletions(-) rename {translated/tech => published}/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md (63%) diff --git a/translated/tech/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md b/published/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md similarity index 63% rename from translated/tech/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md rename to published/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md index 813057798b..f8f1a26a3b 100644 --- a/translated/tech/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md +++ b/published/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md @@ -1,20 +1,18 @@ - -如何在 Ubuntu/CentOS7.1/Fedora22 上安装 Plex Media Server ? +如何在 Ubuntu/CentOS7.1/Fedora22 上安装 Plex Media Server ================================================================================ -在本文中我们将会向你展示如何容易地在主流的最新发布的Linux发行版上安装Plex Home Media Server。在Plex安装成功后你将可以使用你的集中式家庭媒体播放系统,该系统能让多个Plex播放器App共享它的媒体资源,并且该系统允许你设置你的环境,通过增加你的设备以及设置一个可以一起使用Plex的用户组。让我们首先在Ubuntu15.04上开始Plex的安装。 +在本文中我们将会向你展示如何容易地在主流的最新Linux发行版上安装Plex Media Server。在Plex安装成功后你将可以使用你的中央式家庭媒体播放系统,该系统能让多个Plex播放器App共享它的媒体资源,并且该系统允许你设置你的环境,增加你的设备以及设置一个可以一起使用Plex的用户组。让我们首先在Ubuntu15.04上开始Plex的安装。 ### 基本的系统资源 ### 系统资源主要取决于你打算用来连接服务的设备类型和数量, 所以根据我们的需求我们将会在一个单独的服务器上使用以下系统资源。 -注:表格 - + - + @@ -22,11 +20,11 @@ - + - + @@ -38,13 +36,13 @@ #### 步骤 1: 系统更新 #### -用root权限登陆你的服务器。确保你的系统是最新的,如果不是就使用下面的命令。 +用root权限登录你的服务器。确保你的系统是最新的,如果不是就使用下面的命令。 root@ubuntu-15:~#apt-get update #### 步骤 2: 下载最新的Plex Media Server包 #### -创建一个新目录,用wget命令从Plex官网下载为Ubuntu提供的.deb包并放入该目录中。 +创建一个新目录,用wget命令从[Plex官网](https://plex.tv/)下载为Ubuntu提供的.deb包并放入该目录中。 root@ubuntu-15:~# cd /plex/ root@ubuntu-15:/plex# @@ -52,7 +50,7 @@ #### 步骤 3: 安装Plex Media Server的Debian包 #### -现在在相同的目录下执行下面的命令来开始debian包的安装, 然后检查plexmediaserver(译者注: 原文plekmediaserver, 明显笔误)的状态。 +现在在相同的目录下执行下面的命令来开始debian包的安装, 然后检查plexmediaserver服务的状态。 root@ubuntu-15:/plex# dpkg -i plexmediaserver_0.9.12.3.1173-937aac3_amd64.deb @@ -62,41 +60,41 @@ ![Plexmediaserver Service](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-status.png) -### 在Ubuntu 15.04上设置Plex Home Media Web应用 ### +### 在Ubuntu 15.04上设置Plex Media Web应用 ### -让我们在你的本地网络主机中打开web浏览器, 并用你的本地主机IP以及端口32400来打开Web界面并完成以下步骤来配置Plex。 +让我们在你的本地网络主机中打开web浏览器, 并用你的本地主机IP以及端口32400来打开Web界面,并完成以下步骤来配置Plex。 http://172.25.10.179:32400/web http://localhost:32400/web -#### 步骤 1: 登陆前先注册 #### +#### 步骤 1: 登录前先注册 #### -在你访问到Plex Media Server的Web界面之后(译者注: 原文是Plesk, 应该是笔误), 确保注册并填上你的用户名(译者注: 原文username email ID感觉怪怪:))和密码来登陆。 +在你访问到Plex Media Server的Web界面之后, 确保注册并填上你的用户名和密码来登录。 ![Plex Sign In](http://blog.linoxide.com/wp-content/uploads/2015/06/PMS-Login.png) -#### 输入你的PIN码来保护你的Plex Home Media用户(译者注: 原文Plex Media Home, 个人觉得专业称谓应该保持一致) #### +#### 输入你的PIN码来保护你的Plex Media用户#### ![Plex User Pin](http://blog.linoxide.com/wp-content/uploads/2015/06/333.png) -现在你已经成功的在Plex Home Media下配置你的用户。 +现在你已经成功的在Plex Media下配置你的用户。 ![Welcome To Plex](http://blog.linoxide.com/wp-content/uploads/2015/06/3333.png) ### 在设备上而不是本地服务器上打开Plex Web应用 ### -正如我们在Plex Media主页看到的表明"你没有权限访问这个服务"。 这是因为我们跟服务器计算机不在同个网络。 +如我们在Plex Media主页看到的提示“你没有权限访问这个服务”。 这说明我们跟服务器计算机不在同个网络。 ![Plex Server 
Permissions](http://blog.linoxide.com/wp-content/uploads/2015/06/33.png) -现在我们需要解决这个权限问题以便我们通过设备访问服务器而不是通过托管服务器(Plex服务器), 通过完成下面的步骤。 +现在我们需要解决这个权限问题,以便我们通过设备访问服务器而不是只能在服务器上访问。通过完成下面的步骤完成。 -### 设置SSH隧道使Windows系统访问到Linux服务器 ### +### 设置SSH隧道使Windows系统可以访问到Linux服务器 ### 首先我们需要建立一条SSH隧道以便我们访问远程服务器资源,就好像资源在本地一样。 这仅仅是必要的初始设置。 如果你正在使用Windows作为你的本地系统,Linux作为服务器,那么我们可以参照下图通过Putty来设置SSH隧道。 -(译者注: 首先要在Putty的Session中用Plex服务器IP配置一个SSH的会话,才能进行下面的隧道转发规则配置。 +(LCTT译注: 首先要在Putty的Session中用Plex服务器IP配置一个SSH的会话,才能进行下面的隧道转发规则配置。 然后点击“Open”,输入远端服务器用户名密码, 来保持SSH会话连接。) ![Plex SSH Tunnel](http://blog.linoxide.com/wp-content/uploads/2015/06/ssh-tunnel.png) @@ -111,13 +109,13 @@ ![Agree to Plex term](http://blog.linoxide.com/wp-content/uploads/2015/06/5.png) -现在一个功能齐全的Plex Home Media Server已经准备好添加新的媒体库、频道、播放列表等资源。 +现在一个功能齐全的Plex Media Server已经准备好添加新的媒体库、频道、播放列表等资源。 ![PMS Settings](http://blog.linoxide.com/wp-content/uploads/2015/06/8.png) ### 在CentOS 7.1上安装Plex Media Server 0.9.12.3 ### -我们将会按照上述在Ubuntu15.04上安装Plex Home Media Server的步骤来将Plex安装到CentOS 7.1上。 +我们将会按照上述在Ubuntu15.04上安装Plex Media Server的步骤来将Plex安装到CentOS 7.1上。 让我们从安装Plex Media Server开始。 @@ -144,9 +142,9 @@ [root@linux-tutorials plex]# systemctl enable plexmediaserver.service [root@linux-tutorials plex]# systemctl status plexmediaserver.service -### 在CentOS-7.1上设置Plex Home Media Web应用 ### +### 在CentOS-7.1上设置Plex Media Web应用 ### -现在我们只需要重复在Ubuntu上设置Plex Web应用的所有步骤就可以了。 让我们在Web浏览器上打开一个新窗口并用localhost或者Plex服务器的IP(译者注: 原文为or your Plex server, 明显的笔误)来访问Plex Home Media Web应用(译者注:称谓一致)。 +现在我们只需要重复在Ubuntu上设置Plex Web应用的所有步骤就可以了。 让我们在Web浏览器上打开一个新窗口并用localhost或者Plex服务器的IP来访问Plex Media Web应用。 http://172.20.3.174:32400/web http://localhost:32400/web @@ -157,25 +155,25 @@ ### 在Fedora 22工作站上安装Plex Media Server 0.9.12.3 ### -基本的下载和安装Plex Media Server步骤跟在CentOS 7.1上安装的步骤一致。我们只需要下载对应的rpm包然后用rpm命令来安装它。 +下载和安装Plex Media Server步骤基本跟在CentOS 7.1上安装的步骤一致。我们只需要下载对应的rpm包然后用rpm命令来安装它。 ![PMS Installation](http://blog.linoxide.com/wp-content/uploads/2015/06/plex-on-fedora.png) -### 在Fedora 22工作站上配置Plex Home Media Web应用 ### +### 在Fedora 22工作站上配置Plex Media Web应用 ### -我们在(与Plex服务器)相同的主机上配置Plex Media Server,因此不需要设置SSH隧道。只要在你的Fedora 22工作站上用Plex Home Media Server的默认端口号32400打开Web浏览器并同意Plex的服务条款即可。 +我们在(与Plex服务器)相同的主机上配置Plex Media Server,因此不需要设置SSH隧道。只要在你的Fedora 22工作站上用Plex Media Server的默认端口号32400打开Web浏览器并同意Plex的服务条款即可。 ![Plex Agreement](http://blog.linoxide.com/wp-content/uploads/2015/06/Plex-Terms.png) -**欢迎来到Fedora 22工作站上的Plex Home Media Server** +*欢迎来到Fedora 22工作站上的Plex Media Server* -让我们用你的Plex账户登陆,并且开始将你喜欢的电影频道添加到媒体库、创建你的播放列表、添加你的图片以及享用更多其他的特性。 +让我们用你的Plex账户登录,并且开始将你喜欢的电影频道添加到媒体库、创建你的播放列表、添加你的图片以及享用更多其他的特性。 ![Plex Add Libraries](http://blog.linoxide.com/wp-content/uploads/2015/06/create-library.png) ### 总结 ### -我们已经成功完成Plex Media Server在主流Linux发行版上安装和配置。Plex Home Media Server永远都是媒体管理的最佳选择。 它在跨平台上的设置是如此的简单,就像我们在Ubuntu,CentOS以及Fedora上的设置一样。它简化了你组织媒体内容的工作,并将媒体内容“流”向其他计算机以及设备以便你跟你的朋友分享媒体内容。 +我们已经成功完成Plex Media Server在主流Linux发行版上安装和配置。Plex Media Server永远都是媒体管理的最佳选择。 它在跨平台上的设置是如此的简单,就像我们在Ubuntu,CentOS以及Fedora上的设置一样。它简化了你组织媒体内容的工作,并将媒体内容“流”向其他计算机以及设备以便你跟你的朋友分享媒体内容。 -------------------------------------------------------------------------------- @@ -183,7 +181,7 @@ via: http://linoxide.com/tools/install-plex-media-server-ubuntu-centos-7-1-fedor 作者:[Kashif Siddique][a] 译者:[dingdongnigetou](https://github.com/dingdongnigetou) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 
e90526660c916abce2730f20217ab36b1ff9840c Mon Sep 17 00:00:00 2001
From: ictlyh
Date: Sat, 1 Aug 2015 10:05:00 +0800
Subject: [PATCH 044/207] [Translated] tech/20150522 Analyzing Linux Logs.md

---
 sources/tech/20150522 Analyzing Linux Logs.md | 182 ------------------
 .../tech/20150522 Analyzing Linux Logs.md     | 181 +++++++++++++++++
 2 files changed, 181 insertions(+), 182 deletions(-)
 delete mode 100644 sources/tech/20150522 Analyzing Linux Logs.md
 create mode 100644 translated/tech/20150522 Analyzing Linux Logs.md

diff --git a/sources/tech/20150522 Analyzing Linux Logs.md b/sources/tech/20150522 Analyzing Linux Logs.md
deleted file mode 100644
index 832ea369ec..0000000000
--- a/sources/tech/20150522 Analyzing Linux Logs.md
+++ /dev/null
@@ -1,182 +0,0 @@
-ictlyh Translating
-Analyzing Linux Logs
-================================================================================
-There's a great deal of information waiting for you within your logs, although it's not always as easy as you'd like to extract it. In this section we will cover some examples of basic analysis you can do with your logs right away (just search what's there). We'll also cover more advanced analysis that may take some upfront effort to set up properly, but will save you time on the back end. Examples of advanced analysis you can do on parsed data include generating summary counts, filtering on field values, and more.
-
-We'll show you first how to do this yourself on the command line using several different tools and then show you how a log management tool can automate much of the grunt work and make this so much more streamlined.
-
-### Searching with Grep ###
-
-Searching for text is the most basic way to find what you're looking for. The most common tool for searching text is [grep][1]. This command line tool, available on most Linux distributions, allows you to search your logs using regular expressions. A regular expression is a pattern written in a special language that can identify matching text. The simplest pattern is to put the string you're searching for in quotes.
-
-#### Regular Expressions ####
-
-Here's an example to find authentication logs for "user hoover" on an Ubuntu system:
-
-    $ grep "user hoover" /var/log/auth.log
-    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
-    pam_unix(sshd:session): session opened for user hoover by (uid=0)
-    pam_unix(sshd:session): session closed for user hoover
-
-It can be hard to construct regular expressions that are accurate. For example, if we searched for a number like the port "4792" it could also match timestamps, URLs, and other undesired data. In the below example for Ubuntu, it matched an Apache log that we didn't want.
-
-    $ grep "4792" /var/log/auth.log
-    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
-    74.91.21.46 - - [31/Mar/2015:19:44:32 +0000] "GET /scripts/samples/search?q=4972 HTTP/1.0" 404 545 "-" "-"
-
-#### Surround Search ####
-
-Another useful tip is that you can do surround search with grep. This will show you what happened a few lines before or after a match. It can help you debug what led up to a particular error or problem. The B flag gives you lines before, and A gives you lines after. For example, we can see that when someone failed to log in as an admin, they also failed the reverse mapping which means they might not have a valid domain name. This is very suspicious!
- - $ grep -B 3 -A 2 'Invalid user' /var/log/auth.log - Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: reverse mapping checking getaddrinfo for 216-19-2-8.commspeed.net [216.19.2.8] failed - POSSIBLE BREAK-IN ATTEMPT! - Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth] - Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Invalid user admin from 216.19.2.8 - Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: input_userauth_request: invalid user admin [preauth] - Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth] - -#### Tail #### - -You can also pair grep with [tail][2] to get the last few lines of a file, or to follow the logs and print them in real time. This is useful if you are making interactive changes like starting a server or testing a code change. - - $ tail -f /var/log/auth.log | grep 'Invalid user' - Apr 30 19:49:48 ip-172-31-11-241 sshd[6512]: Invalid user ubnt from 219.140.64.136 - Apr 30 19:49:49 ip-172-31-11-241 sshd[6514]: Invalid user admin from 219.140.64.136 - -A full introduction on grep and regular expressions is outside the scope of this guide, but [Ryan’s Tutorials][3] include more in-depth information. - -Log management systems have higher performance and more powerful searching abilities. They often index their data and parallelize queries so you can quickly search gigabytes or terabytes of logs in seconds. In contrast, this would take minutes or in extreme cases hours with grep. Log management systems also use query languages like [Lucene][4] which offer an easier syntax for searching on numbers, fields, and more. - -### Parsing with Cut, AWK, and Grok ### - -#### Command Line Tools #### - -Linux offers several command line tools for text parsing and analysis. They are great if you want to quickly parse a small amount of data but can take a long time to process large volumes of data - -#### Cut #### - -The [cut][5] command allows you to parse fields from delimited logs. Delimiters are characters like equal signs or commas that break up fields or key value pairs. - -Let’s say we want to parse the user from this log: - - pam_unix(su:auth): authentication failure; logname=hoover uid=1000 euid=0 tty=/dev/pts/0 ruser=hoover rhost= user=root - -We can use the cut command like this to get the text after the eighth equal sign. This example is on an Ubuntu system: - - $ grep "authentication failure" /var/log/auth.log | cut -d '=' -f 8 - root - hoover - root - nagios - nagios - -#### AWK #### - -Alternately, you can use [awk][6], which offers more powerful features to parse out fields. It offers a scripting language so you can filter out nearly everything that’s not relevant. - -For example, let’s say we have the following log line on an Ubuntu system and we want to extract the username that failed to login: - - Mar 24 08:28:18 ip-172-31-11-241 sshd[32701]: input_userauth_request: invalid user guest [preauth] - -Here’s how you can use the awk command. First, put a regular expression /sshd.*invalid user/ to match the sshd invalid user lines. Then print the ninth field using the default delimiter of space using { print $9 }. This outputs the usernames. - - $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log - guest - admin - info - test - ubnt - -You can read more about how to use regular expressions and print fields in the [Awk User’s Guide][7]. 
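If you also want summary counts for each username, which is one of the advanced analyses mentioned at the beginning, you can pipe the awk output through sort and uniq. The pipeline below is a sketch building on the example above:

    $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log | sort | uniq -c | sort -rn

Each line of the output starts with the number of occurrences, so the most-tried usernames float to the top.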
#### Log Management Systems ####
-
-Log management systems make parsing easier and enable users to quickly analyze large collections of log files. They can automatically parse standard log formats like common Linux logs or web server logs. This saves a lot of time because you don't have to think about writing your own parsing logic when troubleshooting a system problem.
-
-Here you can see an example log message from sshd which has each of the fields remoteHost and user parsed out. This is a screenshot from Loggly, a cloud-based log management service.
-
-![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.25.09-AM.png)
-
-You can also do custom parsing for non-standard formats. A common tool to use is [Grok][8] which uses a library of common regular expressions to parse raw text into structured JSON. Here is an example configuration for Grok to parse kernel log files inside Logstash:
-
-    filter {
-      grok {
-        match => { "message" => "%{CISCOTIMESTAMP:timestamp} %{HOST:host} %{WORD:program}%{NOTSPACE} %{NOTSPACE}%{NUMBER:duration}%{NOTSPACE} %{GREEDYDATA:kernel_logs}" }
-      }
-    }
-
-And here is what the parsed output looks like from Grok:
-
-![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.30.37-AM.png)
-
-### Filtering with Rsyslog and AWK ###
-
-Filtering allows you to search on a specific field value instead of doing a full text search. This makes your log analysis more accurate because it will ignore undesired matches from other parts of the log message. In order to search on a field value, you need to parse your logs first or at least have a way of searching based on the event structure.
-
-#### How to Filter on One App ####
-
-Often, you just want to see the logs from one application. This is easy if your application always logs to a single file. It's more complicated if you need to filter one application among many in an aggregated or centralized log. Here are several ways to do this:
-
-1. Use the rsyslog daemon to parse and filter logs. This example writes logs from the sshd application to a file named sshd-messages, then discards the event so it's not repeated elsewhere. You can try this example by adding it to your rsyslog.conf file.
-
-    :programname, isequal, "sshd" /var/log/sshd-messages
-    &~
-
-2. Use command line tools like awk to extract the values of a particular field like the sshd username. This example is from an Ubuntu system.
-
-    $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
-    guest
-    admin
-    info
-    test
-    ubnt
-
-3. Use a log management system that automatically parses your logs, then click to filter on the desired application name. Here is a screenshot showing the syslog fields in a log management service called Loggly. We are filtering on the appName "sshd" as indicated by the Venn diagram icon.
-
-![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.02-AM.png)
-
-#### How to Filter on Errors ####
-
-One of the most common things people want to see in their logs is errors. Unfortunately, the default syslog configuration doesn't output the severity of errors directly, making it difficult to filter on them.
-
-There are two ways you can solve this problem. First, you can modify your rsyslog configuration to output the severity in the log file to make it easier to read and search.
In your rsyslog configuration you can add a [template][9] with pri-text such as the following:
-
-    "<%pri-text%> : %timegenerated%,%HOSTNAME%,%syslogtag%,%msg%n"
-
-This example gives you output in the following format. You can see that the severity in this message is err.
-
-    <authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
-
-You can use awk or grep to search for just the error messages. In this example for Ubuntu, we're including some surrounding syntax like the . and the > which match only this field.
-
-    $ grep '.err>' /var/log/auth.log
-    <authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
-
-Your second option is to use a log management system. Good log management systems automatically parse syslog messages and extract the severity field. They also allow you to filter on log messages of a certain severity with a single click.
-
-Here is a screenshot from Loggly showing the syslog fields with the error severity highlighted to show we are filtering for errors:
-
-![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.00.36-AM.png)
-
---------------------------------------------------------------------------------
-
-via: http://www.loggly.com/ultimate-guide/logging/analyzing-linux-logs/
-
-作者:[Jason Skowronski][a] [Amy Echeverri][b] [Sadequl Hussain][c]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linkedin.com/in/jasonskowronski
-[b]:https://www.linkedin.com/in/amyecheverri
-[c]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
-[1]:http://linux.die.net/man/1/grep
-[2]:http://linux.die.net/man/1/tail
-[3]:http://ryanstutorials.net/linuxtutorial/grep.php
-[4]:https://lucene.apache.org/core/2_9_4/queryparsersyntax.html
-[5]:http://linux.die.net/man/1/cut
-[6]:http://linux.die.net/man/1/awk
-[7]:http://www.delorie.com/gnu/docs/gawk/gawk_26.html#IDX155
-[8]:http://logstash.net/docs/1.4.2/filters/grok
-[9]:http://www.rsyslog.com/doc/v8-stable/configuration/templates.html
diff --git a/translated/tech/20150522 Analyzing Linux Logs.md b/translated/tech/20150522 Analyzing Linux Logs.md
new file mode 100644
index 0000000000..c037fc60aa
--- /dev/null
+++ b/translated/tech/20150522 Analyzing Linux Logs.md
@@ -0,0 +1,181 @@
+Linux 日志分析
+==============================================================================
+日志中有大量的信息需要你处理,尽管有时想要提取它们并没有想象中那么容易。在这篇文章中我们会介绍一些你现在就能做的基本日志分析例子。我们还将涉及一些更高级的分析,但这些需要你前期努力做出恰当的设置,后期就能节省很多时间。对数据进行高级分析的例子包括生成汇总计数、对有效值进行过滤,等等。
+
+我们首先会向你展示如何在命令行中使用多个不同的工具,然后展示一个日志管理工具如何能自动完成大部分繁重工作,从而使日志分析变得简单。
+
+### 用 Grep 搜索 ###
+
+搜索文本是查找信息最基本的方式。搜索文本最常用的工具是 [grep][1]。这个命令行工具在大部分 Linux 发行版中都有,它允许你用正则表达式搜索日志。正则表达式是用特殊的语言写的、能识别匹配文本的模式。最简单的模式就是用引号把你想要查找的字符串括起来。
+
+#### 正则表达式 ####
+
+这是一个在 Ubuntu 系统中查找 "user hoover" 认证日志的例子:
+
+    $ grep "user hoover" /var/log/auth.log
+    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
+    pam_unix(sshd:session): session opened for user hoover by (uid=0)
+    pam_unix(sshd:session): session closed for user hoover
+
+构建精确的正则表达式可能很难。例如,如果我们想要搜索一个类似端口 "4792" 的数字,它可能也会匹配时间戳、URL 以及其它不需要的数据。下面 Ubuntu 中的例子,它匹配了一个我们不想要的 Apache 日志。
+
+    $ grep "4792" /var/log/auth.log
+    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
+    74.91.21.46 - - [31/Mar/2015:19:44:32 +0000] "GET /scripts/samples/search?q=4972 HTTP/1.0" 404 545 "-" "-"
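要避免这类误匹配,一个简单的办法是把搜索模式写得更具体一些,比如连同端口关键字一起搜索(下面的写法只是一个示意):

    $ grep "port 4792" /var/log/auth.log

这样时间戳或 URL 里的数字就不会再被匹配到了。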
+#### 环绕搜索 ####
+
+另一个有用的小技巧是:你可以用 grep 做环绕搜索。这会向你展示匹配行之前或之后几行的内容,帮助你调试引发某个错误或问题的前因后果。B 选项展示匹配行前面几行,A 选项展示后面几行。举个例子,我们知道当一个人以管理员身份登录失败时,其反向映射检查也失败了,也就意味着他们可能没有有效的域名。这非常可疑!
+
+    $ grep -B 3 -A 2 'Invalid user' /var/log/auth.log
+    Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: reverse mapping checking getaddrinfo for 216-19-2-8.commspeed.net [216.19.2.8] failed - POSSIBLE BREAK-IN ATTEMPT!
+    Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth]
+    Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Invalid user admin from 216.19.2.8
+    Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: input_userauth_request: invalid user admin [preauth]
+    Apr 28 17:06:20 ip-172-31-11-241 sshd[12547]: Received disconnect from 216.19.2.8: 11: Bye Bye [preauth]
+
+#### Tail ####
+
+你也可以把 grep 和 [tail][2] 结合使用来获取一个文件的最后几行,或者跟踪日志并实时打印。这在你做交互式更改的时候非常有用,例如启动服务器或者测试代码更改。
+
+    $ tail -f /var/log/auth.log | grep 'Invalid user'
+    Apr 30 19:49:48 ip-172-31-11-241 sshd[6512]: Invalid user ubnt from 219.140.64.136
+    Apr 30 19:49:49 ip-172-31-11-241 sshd[6514]: Invalid user admin from 219.140.64.136
+
+关于 grep 和正则表达式的详细介绍并不在该指南的范围,但 [Ryan's Tutorials][3] 有更深入的介绍。
+
+日志管理系统有更高的性能和更强大的搜索能力。它们通常会索引数据并进行并行查询,因此你可以在几秒内搜索 GB 或 TB 量级的日志。相比之下,grep 就需要几分钟,在极端情况下甚至可能需要几小时。日志管理系统也使用类似 [Lucene][4] 的查询语言,它提供更简单的语法来检索数字、域以及其它内容。
+
+### 用 Cut、AWK 和 Grok 解析 ###
+
+#### 命令行工具 ####
+
+Linux 提供了多个命令行工具用于文本解析和分析。它们在你想要快速解析少量数据时非常有用,但处理大量数据时可能需要很长时间。
+
+#### Cut ####
+
+[cut][5] 命令允许你从带分隔符的日志中解析出域。分隔符是指能分开域或键值对的字符,例如等号或逗号。
+
+假设我们想从下面的日志中解析出用户:
+
+    pam_unix(su:auth): authentication failure; logname=hoover uid=1000 euid=0 tty=/dev/pts/0 ruser=hoover rhost= user=root
+
+我们可以像下面这样用 cut 命令获取第八个等号后面的文本。这是一个 Ubuntu 系统上的例子:
+
+    $ grep "authentication failure" /var/log/auth.log | cut -d '=' -f 8
+    root
+    hoover
+    root
+    nagios
+    nagios
+
+#### AWK ####
+
+另外,你也可以使用 [awk][6],它提供了更强大的域解析功能。它提供了一个脚本语言,你可以过滤出几乎任何不相干的东西。
+
+例如,假设在 Ubuntu 系统中我们有下面的一行日志,我们想要提取登录失败的用户名称:
+
+    Mar 24 08:28:18 ip-172-31-11-241 sshd[32701]: input_userauth_request: invalid user guest [preauth]
+
+你可以像下面这样使用 awk 命令。首先,用一个正则表达式 /sshd.*invalid user/ 来匹配 sshd 无效用户行。然后用 { print $9 } 根据默认的分隔符空格打印第九个域。这样就输出了用户名。
+
+    $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
+    guest
+    admin
+    info
+    test
+    ubnt
+
+你可以在 [Awk 用户指南][7] 中阅读更多关于如何使用正则表达式和输出域的信息。
+
+#### 日志管理系统 ####
+
+日志管理系统使得解析变得更加简单,使用户能快速地分析大量的日志文件。它们能自动解析标准的日志格式,比如常见的 Linux 日志和 Web 服务器日志。这能节省很多时间,因为当处理系统问题的时候你不需要考虑自己编写解析逻辑。
+
+下面是一个 sshd 日志消息的例子,解析出了每个 remoteHost 和 user。这是 Loggly 中的一张截图,它是一个基于云的日志管理服务。
+
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.25.09-AM.png)
+
+你也可以对非标准格式自定义解析。一个常用的工具是 [Grok][8],它用一个常见正则表达式库解析原始文本为结构化 JSON。下面是一个 Grok 在 Logstash 中解析内核日志文件的示例配置:
+
+    filter {
+      grok {
+        match => { "message" => "%{CISCOTIMESTAMP:timestamp} %{HOST:host} %{WORD:program}%{NOTSPACE} %{NOTSPACE}%{NUMBER:duration}%{NOTSPACE} %{GREEDYDATA:kernel_logs}" }
+      }
+    }
+
+下图是 Grok 解析后输出的结果:
+
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.30.37-AM.png)
+
+### 用 Rsyslog 和 AWK 过滤 ###
+
+过滤使得你能检索一个特定的域值而不是进行全文检索。这使你的日志分析更加准确,因为它会忽略来自日志消息其它部分的不需要的匹配。为了对一个域值进行搜索,你首先需要解析日志,或者至少有对事件结构进行检索的方式。
+
+#### 如何对 App 进行过滤 ####
+
+通常,你可能只想看一个应用的日志。如果你的应用把记录都保存到一个文件中就会很容易。如果你需要在一个聚集或集中式日志中过滤一个应用就会比较复杂。下面有几种方法来实现:
+
+1. 用 rsyslog 守护进程解析和过滤日志。下面的例子将 sshd 应用的日志写入一个名为 sshd-messages 的文件,然后丢弃这个事件,这样它就不会在其它地方重复出现。你可以将它添加到你的 rsyslog.conf 文件中测试这个例子。
+
+    :programname, isequal, "sshd" /var/log/sshd-messages
+    &~
+
+2. 用类似 awk 的命令行工具提取特定域的值,例如 sshd 用户名。下面是 Ubuntu 系统中的一个例子。
+
+    $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log
+    guest
+    admin
+    info
+    test
+    ubnt
+
+3. 用日志管理系统自动解析日志,然后在需要的应用名称上点击过滤。下面是在 Loggly 日志管理服务中提取 syslog 域的截图。我们对应用名称 "sshd" 进行过滤,如维恩图图标所示。
+
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.02-AM.png)
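在第 2 种方法的基础上再进一步:如果你还想统计每个用户名出现的次数(也就是开头提到的"汇总计数"),可以把 awk 的输出再交给 sort 和 uniq 处理。下面是一个假设的组合用法:

    $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log | sort | uniq -c | sort -rn

输出的每一行会以出现次数开头,便于快速找出被尝试最多的用户名。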
+#### 如何过滤错误 ####
+
+人们最希望在日志中看到的内容之一就是错误。不幸的是,默认的 syslog 配置不会直接输出错误的严重等级,这使得难以对它们进行过滤。
+
+这里有两个解决该问题的方法。首先,你可以修改你的 rsyslog 配置,在日志文件中输出错误的严重等级,使其便于查看和检索。在你的 rsyslog 配置中你可以用 pri-text 添加一个 [模板][9],像下面这样:
+
+    "<%pri-text%> : %timegenerated%,%HOSTNAME%,%syslogtag%,%msg%n"
+
+这个例子会按照下面的格式输出。你可以看到该信息中指示错误的 err。
+
+    <authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
+
+你可以用 awk 或者 grep 检索错误信息。在 Ubuntu 中,对这个例子,我们可以用一些环绕语法,例如 . 和 >,它们只会匹配这个域。
+
+    $ grep '.err>' /var/log/auth.log
+    <authpriv.err> : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure
+
+你的第二个选择是使用日志管理系统。好的日志管理系统能自动解析 syslog 消息并抽取错误域。它们也允许你通过简单的点击过滤出特定严重等级的日志消息。
+
+下面是 Loggly 中的一个截图,显示了高亮错误严重等级的 syslog 域,表示我们正在过滤错误:
+
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.00.36-AM.png)
+
+--------------------------------------------------------------------------------
+
+via: http://www.loggly.com/ultimate-guide/logging/analyzing-linux-logs/
+
+作者:[Jason Skowronski][a] [Amy Echeverri][b] [Sadequl Hussain][c]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://www.linkedin.com/in/jasonskowronski
+[b]:https://www.linkedin.com/in/amyecheverri
+[c]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
+[1]:http://linux.die.net/man/1/grep
+[2]:http://linux.die.net/man/1/tail
+[3]:http://ryanstutorials.net/linuxtutorial/grep.php
+[4]:https://lucene.apache.org/core/2_9_4/queryparsersyntax.html
+[5]:http://linux.die.net/man/1/cut
+[6]:http://linux.die.net/man/1/awk
+[7]:http://www.delorie.com/gnu/docs/gawk/gawk_26.html#IDX155
+[8]:http://logstash.net/docs/1.4.2/filters/grok
+[9]:http://www.rsyslog.com/doc/v8-stable/configuration/templates.html
From dba1249da1d325f56aa736e5216d6ac2b0f12eb1 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Sat, 1 Aug 2015 10:06:00 +0800
Subject: [PATCH 045/207] translating

---
 ...
to Update Linux Kernel for Improved System Performance.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md b/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md index 4e238de09d..2ee2ff4f15 100644 --- a/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md +++ b/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md @@ -1,3 +1,5 @@ +Translating---geekpi + How to Update Linux Kernel for Improved System Performance ================================================================================ ![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/update-linux-kernel-644x373.jpg?2c3c1f) @@ -124,4 +126,4 @@ via: http://www.makeuseof.com/tag/update-linux-kernel-improved-system-performanc [7]:http://www.makeuseof.com/tag/bleeding-edge-linux-fedora-rawhide/ [8]:http://www.makeuseof.com/tag/arch-linux-letting-you-build-your-linux-system-from-scratch/ [9]:http://www.makeuseof.com/tag/nano-vs-vim-terminal-text-editors-compared/ -[10]:https://wiki.archlinux.org/index.php/Pacman#Repositories \ No newline at end of file +[10]:https://wiki.archlinux.org/index.php/Pacman#Repositories From 5b0ae3268ba031a4eb9295f5b27c712c278484a9 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sat, 1 Aug 2015 11:49:49 +0800 Subject: [PATCH 046/207] translated --- ... Kernel for Improved System Performance.md | 129 ------------------ ... Kernel for Improved System Performance.md | 129 ++++++++++++++++++ 2 files changed, 129 insertions(+), 129 deletions(-) delete mode 100644 sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md create mode 100644 translated/tech/20150728 How to Update Linux Kernel for Improved System Performance.md diff --git a/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md b/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md deleted file mode 100644 index 2ee2ff4f15..0000000000 --- a/sources/tech/20150728 How to Update Linux Kernel for Improved System Performance.md +++ /dev/null @@ -1,129 +0,0 @@ -Translating---geekpi - -How to Update Linux Kernel for Improved System Performance -================================================================================ -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/update-linux-kernel-644x373.jpg?2c3c1f) - -The rate of development for [the Linux kernel][1] is unprecedented, with a new major release approximately every two to three months. Each release offers several new features and improvements that a lot of people could take advantage of to make their computing experience faster, more efficient, or better in other ways. - -The problem, however, is that you usually can’t take advantage of these new kernel releases as soon as they come out — you have to wait until your distribution comes out with a new release that packs a newer kernel with it. We’ve previously laid out [the benefits for regularly updating your kernel][2], and you don’t have to wait to get your hands on them. We’ll show you how. - -> Disclaimer: As some of our literature may have mentioned before, updating your kernel does carry a (small) risk of breaking your system. If this is the case, it’s usually easy to pick an older kernel at boot time that works, but something may always go wrong. Therefore, we’re not responsible for any damage to your system — use at your own risk! 
- -### Prep Work ### - -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/linux_kernel_arch.jpg?2c3c1f) - -To update your kernel, you’ll first need to determine whether you’re using a 32-bit or 64-bit system. Open up a terminal window and run - - uname -a - -Then check to see if the output says x86_64 or i686. If it’s x86_64, then you’re running the 64-bit version; otherwise, you’re running the 32-bit version. Remember this, because it will be important. - -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/kernel_latest_version.jpg?2c3c1f) - -Next, visit the [official Linux kernel website][3]. This will tell you what the current stable version of the kernel is. You can try out release candidates if you’d like, but they are a lot less tested than the stable releases. Stick with the stable kernel unless you are certain you need a release candidate version. - -### Ubuntu Instructions ### - -It’s quite easy for Ubuntu and Ubuntu-derivative users to update their kernel, thanks to the Ubuntu Mainline Kernel PPA. Although it’s officially called a PPA, you cannot use it like other PPAs by adding them to your software sources list and expecting it to automatically update the kernel for you. Instead, it’s simply a webpage you navigate through to download the kernel you want. - -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_new_kernels.jpg?2c3c1f) - -Now, visit the [kernel PPA webpage][4] and scroll all the way to the bottom. The absolute bottom of the list will probably contain some release candidate versions (which you can see by the “rc” in the name), but just above them should be the latest stable kernel (to make this easier to explain, at the time of writing the stable version was 4.1.2). Click on that, and you’ll be presented with several options. You’ll need to grab three files and save them in their own folder (within the Downloads folder if you’d like) so that they’re isolated from all other files: - -- The “generic” header file for your architecture (in my case, 64-bit or “amd64″) -- The header file in the middle that has “all” towards the end of the filename -- The “generic” kernel file for your architecture (again, I would pick “amd64″ but if you use 32-bit you’ll need “i686″) - -You’ll notice that there are also “lowlatency” files available to download, but it’s fine to ignore this. These files are relatively unstable and are only made available for people who need their low-latency benefits if the general files don’t suffice for tasks such as audio recording. Again, the recommendation is to always use generic first and only try lowlatency if your performance isn’t good enough for certain tasks. No, gaming or Internet browsing aren’t excuses to try lowlatency. - -![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_install_kernel.jpg?2c3c1f) - -You put these files into their own folder, right? Now open up the Terminal, use the - - cd - -command to go to your newly-created folder, such as - - cd /home/user/Downloads/Kernel - -and then run: - - sudo dpkg -i *.deb - -This command marks all .deb files within the folder as “to be installed” and then performs the installation. This is the recommended way to install these files because otherwise it’s easy to pick one file to install and it’ll complain about dependency issues. This approach avoids that problem. If you’re not sure what cd or sudo are, get a quick crash course on [essential Linux commands][5]. 
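One more safety note before you reboot: if the new kernel misbehaves later, you can boot the previous kernel from the GRUB menu and then remove the new packages. A hypothetical cleanup could look like this (the version string below is only an illustration; check your actual package names first):

    # find the packages the three .deb files installed
    dpkg -l | grep 4.1.2
    # then remove them by name (placeholder names shown here)
    sudo dpkg -r linux-image-4.1.2-040102-generic linux-headers-4.1.2-040102-generic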
Once the installation is complete, **Restart** your system and you should be running the just-installed kernel! You can check this by running uname -a in the Terminal and checking the output.
-
-### Fedora Instructions ###
-
-If you use Fedora or one of its derivatives, the process is very similar to Ubuntu. There's just a different location to grab different files, and a different command to install them.
-
-![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/fedora_new_kernels.jpg?2c3c1f)
-
-View the list of the most [recent kernel builds for Fedora][6]. Pick the latest stable version out of the list, and then scroll down to either the i686 or x86_64 section, depending on your system's architecture. In this section, you'll need to grab the following files and save them in their own folder (such as "Kernel" within your Downloads folder, as an example):
-
-- kernel
-- kernel-core
-- kernel-headers
-- kernel-modules
-- kernel-modules-extra
-- kernel-tools
-- perf and python-perf (optional)
-
-If your system is i686 (32-bit) and you have 4GB of RAM or more, you'll need to grab the PAE version of all of these files where available. PAE is an address extension technique used for 32-bit systems to allow them to use more than 3GB of RAM.
-
-Now, use the
-
-    cd
-
-command to go to that folder, such as
-
-    cd /home/user/Downloads/Kernel
-
-and then run the following command to install all the files:
-
-    yum --nogpgcheck localinstall *.rpm
-
-Finally, **Restart** your computer and you should be running a new kernel!
-
-### Using Rawhide ###
-
-Alternatively, Fedora users can also simply [switch to Rawhide][7] and it'll automatically update every package to the latest version, including the kernel. However, Rawhide is known to break quite often (especially early on in the development cycle) and should **not** be used on a system that you need to rely on.
-
-### Arch Instructions ###
-
-[Arch users][8] should always have the latest and greatest stable kernel available (or one pretty close to it). If you want to get even closer to the latest-released stable kernel, you can enable the testing repo which will give you access to major new releases roughly two to four weeks early.
-
-To do this, open the file located at
-
-    /etc/pacman.conf
-
-with sudo permissions in [your favorite terminal text editor][9], and then uncomment (delete the pound symbols from the front of each line) the three lines associated with testing. If you have the multilib repository enabled, then do the same for the multilib-testing repository. See [this Arch Linux wiki page][10] if you need more information.
-
-Upgrading your kernel isn't easy (done so intentionally), but it can give you a lot of benefits. So long as your new kernel didn't break anything, you can now enjoy improved performance, better efficiency, support for more hardware, and potential new features. Especially if you're running relatively new hardware, upgrading the kernel can really help out.
-
-**How has upgrading the kernel helped you? Do you think your favorite distribution's policy on kernel releases is what it should be?** Let us know in the comments!
-
---------------------------------------------------------------------------------
-
-via: http://www.makeuseof.com/tag/update-linux-kernel-improved-system-performance/
-
-作者:[Danny Stieben][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.makeuseof.com/tag/author/danny/
-[1]:http://www.makeuseof.com/tag/linux-kernel-explanation-laymans-terms/
-[2]:http://www.makeuseof.com/tag/5-reasons-update-kernel-linux/
-[3]:http://www.kernel.org/
-[4]:http://kernel.ubuntu.com/~kernel-ppa/mainline/
-[5]:http://www.makeuseof.com/tag/an-a-z-of-linux-40-essential-commands-you-should-know/
-[6]:http://koji.fedoraproject.org/koji/packageinfo?packageID=8
-[7]:http://www.makeuseof.com/tag/bleeding-edge-linux-fedora-rawhide/
-[8]:http://www.makeuseof.com/tag/arch-linux-letting-you-build-your-linux-system-from-scratch/
-[9]:http://www.makeuseof.com/tag/nano-vs-vim-terminal-text-editors-compared/
-[10]:https://wiki.archlinux.org/index.php/Pacman#Repositories
diff --git a/translated/tech/20150728 How to Update Linux Kernel for Improved System Performance.md b/translated/tech/20150728 How to Update Linux Kernel for Improved System Performance.md
new file mode 100644
index 0000000000..2114549452
--- /dev/null
+++ b/translated/tech/20150728 How to Update Linux Kernel for Improved System Performance.md
@@ -0,0 +1,129 @@
+如何更新Linux内核提升系统性能
+================================================================================
+![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/update-linux-kernel-644x373.jpg?2c3c1f)
+
+[Linux内核][1]的开发速度目前是空前的,大概每2到3个月就会有一个主要的版本发布。每个版本都带来一些新的功能和改进,可以让很多人的计算体验更快、更高效,或者在其它方面更好。
+
+问题是,你通常不能在这些内核刚发布时就用上它们,而是要等到你的发行版带来新内核的发布。我们先前介绍过[定期更新内核的好处][2],所以你不必苦等,下面我们就来向你展示该怎么做。
+
+> 免责声明: 我们先前的一些文章已经提到过,升级内核会带来(很小的)破坏你系统的风险。在这种情况下,通常可以通过旧内核来使系统工作,但是有时还是不行。因此我们对系统的任何损坏都不负责,请自行承担风险!
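动手之前还可以先确认一下,系统里已经装有哪些可以回退的旧内核;在基于 Debian/Ubuntu 的系统上可以这样快速查看(其他发行版有各自对应的包查询命令,这里仅作示意):

    dpkg --list | grep linux-image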
### 预备工作 ###
+
+![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/linux_kernel_arch.jpg?2c3c1f)
+
+要更新你的内核,你首先要确定你使用的是32位还是64位的系统。打开终端并运行:
+
+    uname -a
+
+检查一下输出的是x86_64还是i686。如果是x86_64,你就运行64位的版本,否则就运行32位的版本。记住这一点,因为它很重要。
+
+![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/kernel_latest_version.jpg?2c3c1f)
+
+接下来,访问[官方Linux内核网站][3],它会告诉你目前稳定内核的版本。如果你愿意,也可以尝试候选发布版(RC),但是它们比稳定版少了很多测试。除非你确定需要候选发布版,否则请使用稳定内核。
+
+### Ubuntu指导 ###
+
+对Ubuntu及其衍生版的用户而言升级内核非常简单,这要感谢Ubuntu主线内核PPA。虽然它被官方称为PPA,但是你不能像其他PPA一样把它添加到你的软件源列表中,并指望它自动升级你的内核。它其实只是一个简单的网页,你可以在上面下载想要的内核。
+
+![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_new_kernels.jpg?2c3c1f)
+
+现在,访问[内核PPA网页][4],并滚到底部。列表的最下面会含有最新发布的候选版本(你可以在名字中看到"rc"字样),但紧挨着它们上面的就是最新的稳定版(为了方便说明,写这篇文章时最新的稳定版是4.1.2)。点击它,你会看到几个选项。你需要下载3个文件,并保存到一个专门的文件夹中(比如放在下载文件夹里),使它们与其它文件隔离开:
+
+- 针对你系统架构的含"generic"字样的头文件(我这里是64位,即"amd64")
+- 文件名末尾含"all"字样的那个头文件
+- 针对你系统架构的含"generic"字样的内核文件(再说一次,我会用"amd64",但是如果你用32位的,就需要"i686")
+
+你会看到还有含有"lowlatency"字样的文件可以下载,但最好忽略它们。这些文件相对不稳定,只是为那些通用文件不能很好满足音频录制等低延迟需求的人准备的。再说一次,请首选通用版,只有在特定任务的性能确实不够时才尝试低延迟版。一般的游戏和网页浏览可不是使用低延迟版的理由。
+
+![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/ubuntu_install_kernel.jpg?2c3c1f)
+
+你把它们放到专门的文件夹下了,对么?现在打开终端,使用
+
+    cd
+
+命令进入新创建的文件夹下,像
+
+    cd /home/user/Downloads/Kernel
+
+接着运行:
+
+    sudo dpkg -i *.deb
+
+这个命令会将文件夹中所有的".deb"文件标记为"待安装",接着执行安装。这是推荐的安装方法,因为如果单独挑一个文件安装,它很容易报出依赖问题,而这个方法可以避免该问题。如果你不清楚cd和sudo是什么,可以快速地看一下[Linux基本命令][5]这篇文章。
+
+安装完成后,**重启**你的系统,这时应该就会运行刚安装的内核了!你可以在命令行中使用uname -a来检查输出。
+
+### Fedora指导 ###
+
+如果你使用的是Fedora或者它的衍生版,过程跟Ubuntu很类似。不同的只是获取文件的位置和安装的命令。
+
+![](http://cdn.makeuseof.com/wp-content/uploads/2015/07/fedora_new_kernels.jpg?2c3c1f)
+
+查看[Fedora最新内核编译][6]列表。选取列表中最新的稳定版,并滚到下面选择i686或者x86_64版,这取决于你的系统架构。这时你需要下载下面这些文件,并保存到一个专门的文件夹下(例如下载目录中的"Kernel"文件夹):
+
+- kernel
+- kernel-core
+- kernel-headers
+- kernel-modules
+- kernel-modules-extra
+- kernel-tools
+- perf 和 python-perf(可选)
+
+如果你的系统是i686(32位),同时你有4GB或者更大的内存,你需要下载所有这些文件的PAE版本。PAE是一种用于32位系统的地址扩展技术,它允许你使用超过3GB的内存。
+
+现在使用
+
+    cd
+
+命令进入文件夹,像这样
+
+    cd /home/user/Downloads/Kernel
+
+接着运行下面的命令来安装所有的文件:
+
+    yum --nogpgcheck localinstall *.rpm
+
+最后,**重启**你的电脑,你应该就运行着新的内核了!
+
+### 使用 Rawhide ###
+
+另外一个方案是,Fedora用户也可以[切换到Rawhide][7],它会自动更新所有的包到最新版本,包括内核。然而,Rawhide经常会破坏系统(尤其是在开发周期的早期),它**不应该**用在你需要依赖的系统上。
+
+### Arch指导 ###
+
+[Arch用户][8]应该总是能用上最新最好的稳定版内核(或者非常接近的版本)。如果你想要更接近最新发布的稳定版,可以启用 testing 仓库,提前2到4周获取到主要的新版本。
+
+要这么做,用[你喜欢的编辑器][9]以sudo权限打开下面的文件
+
+    /etc/pacman.conf
+
+接着取消注释带有testing的三行(删除行前面的井号)。如果你启用了multilib仓库,就对multilib-testing仓库做相同的事情。如果需要更多信息,请参考[这个Arch的wiki页面][10]。
+
+升级内核并不容易(这是有意设计的),但是它会给你带来很多好处。只要你的新内核没有搞坏任何东西,你就可以享受它带来的性能提升、更高的效率、对更多硬件的支持和潜在的新特性。尤其是当你使用较新的硬件时,升级内核会有很大帮助。
+
+
+**这篇关于升级内核的文章对你有帮助吗?你认为你喜欢的发行版采取的内核发布策略合理吗?**在评论栏里告诉我们吧!
+ +-------------------------------------------------------------------------------- + +via: http://www.makeuseof.com/tag/update-linux-kernel-improved-system-performance/ + +作者:[Danny Stieben][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.makeuseof.com/tag/author/danny/ +[1]:http://www.makeuseof.com/tag/linux-kernel-explanation-laymans-terms/ +[2]:http://www.makeuseof.com/tag/5-reasons-update-kernel-linux/ +[3]:http://www.kernel.org/ +[4]:http://kernel.ubuntu.com/~kernel-ppa/mainline/ +[5]:http://www.makeuseof.com/tag/an-a-z-of-linux-40-essential-commands-you-should-know/ +[6]:http://koji.fedoraproject.org/koji/packageinfo?packageID=8 +[7]:http://www.makeuseof.com/tag/bleeding-edge-linux-fedora-rawhide/ +[8]:http://www.makeuseof.com/tag/arch-linux-letting-you-build-your-linux-system-from-scratch/ +[9]:http://www.makeuseof.com/tag/nano-vs-vim-terminal-text-editors-compared/ +[10]:https://wiki.archlinux.org/index.php/Pacman#Repositories From fc1fe1c59b1dc55bb6d7999166a40203a803d870 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 1 Aug 2015 22:55:13 +0800 Subject: [PATCH 047/207] PUB:20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04 @GOLinux --- ...e OpenVPN Server-Client on Ubuntu 15.04.md | 96 +++++++++---------- 1 file changed, 48 insertions(+), 48 deletions(-) rename {translated/tech => published}/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md (69%) diff --git a/translated/tech/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md b/published/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md similarity index 69% rename from translated/tech/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md rename to published/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md index 5b6c2fc6bd..7ec20f794e 100644 --- a/translated/tech/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md +++ b/published/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md @@ -1,18 +1,18 @@ -Ubuntu 15.04上配置OpenVPN服务器-客户端 +在 Ubuntu 15.04 上配置 OpenVPN 服务器和客户端 ================================================================================ -虚拟专用网(VPN)是几种用于建立与其它网络连接的网络技术中常见的一个名称。它被称为虚拟网,因为各个节点的连接不是通过物理线路实现的。而由于没有网络所有者的正确授权是不能通过公共线路访问到网络,所以它是专用的。 +虚拟专用网(VPN)常指几种通过其它网络建立连接技术。它之所以被称为“虚拟”,是因为各个节点间的连接不是通过物理线路实现的,而“专用”是指如果没有网络所有者的正确授权是不能被公开访问到。 ![](http://blog.linoxide.com/wp-content/uploads/2015/05/vpn_custom_illustration.jpg) -[OpenVPN][1]软件通过TUN/TAP驱动的帮助,使用TCP和UDP协议来传输数据。UDP协议和TUN驱动允许NAT后的用户建立到OpenVPN服务器的连接。此外,OpenVPN允许指定自定义端口。它提额外提供了灵活的配置,可以帮助你避免防火墙限制。 +[OpenVPN][1]软件借助TUN/TAP驱动使用TCP和UDP协议来传输数据。UDP协议和TUN驱动允许NAT后的用户建立到OpenVPN服务器的连接。此外,OpenVPN允许指定自定义端口。它提供了更多的灵活配置,可以帮助你避免防火墙限制。 OpenVPN中,由OpenSSL库和传输层安全协议(TLS)提供了安全和加密。TLS是SSL协议的一个改进版本。 -OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示了如何配置OpenVPN的服务器端,以及如何预备使用带有公共密钥非对称加密和TLS协议基础结构(PKI)。 +OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示了如何配置OpenVPN的服务器端,以及如何配置使用带有公共密钥基础结构(PKI)的非对称加密和TLS协议。 ### 服务器端配置 ### -首先,我们必须安装OpenVPN。在Ubuntu 15.04和其它带有‘apt’报管理器的Unix系统中,可以通过如下命令安装: +首先,我们必须安装OpenVPN软件。在Ubuntu 15.04和其它带有‘apt’包管理器的Unix系统中,可以通过如下命令安装: sudo apt-get install openvpn @@ -20,7 +20,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 sudo apt-get unstall easy-rsa -**注意**: 所有接下来的命令要以超级用户权限执行,如在“sudo -i”命令后;此外,你可以使用“sudo -E”作为接下来所有命令的前缀。 +**注意**: 所有接下来的命令要以超级用户权限执行,如在使用`sudo -i`命令后执行,或者你可以使用`sudo -E`作为接下来所有命令的前缀。 开始之前,我们需要拷贝“easy-rsa”到openvpn文件夹。 @@ -32,15 +32,15 @@ 
OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 cd /etc/openvpn/easy-rsa/2.0 -这里,我们开启了一个密钥生成进程。 +这里,我们开始密钥生成进程。 -首先,我们编辑一个“var”文件。为了简化生成过程,我们需要在里面指定数据。这里是“var”文件的一个样例: +首先,我们编辑一个“vars”文件。为了简化生成过程,我们需要在里面指定数据。这里是“vars”文件的一个样例: - export KEY_COUNTRY="US" - export KEY_PROVINCE="CA" - export KEY_CITY="SanFrancisco" - export KEY_ORG="Fort-Funston" - export KEY_EMAIL="my@myhost.mydomain" + export KEY_COUNTRY="CN" + export KEY_PROVINCE="BJ" + export KEY_CITY="Beijing" + export KEY_ORG="Linux.CN" + export KEY_EMAIL="open@vpn.linux.cn" export KEY_OU=server 希望这些字段名称对你而言已经很清楚,不需要进一步说明了。 @@ -61,7 +61,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 ./build-ca -在对话中,我们可以看到默认的变量,这些变量是我们先前在“vars”中指定的。我们可以检查以下,如有必要进行编辑,然后按回车几次。对话如下 +在对话中,我们可以看到默认的变量,这些变量是我们先前在“vars”中指定的。我们可以检查一下,如有必要进行编辑,然后按回车几次。对话如下 Generating a 2048 bit RSA private key .............................................+++ @@ -75,14 +75,14 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 For some fields there will be a default value, If you enter '.', the field will be left blank. ----- - Country Name (2 letter code) [US]: - State or Province Name (full name) [CA]: - Locality Name (eg, city) [SanFrancisco]: - Organization Name (eg, company) [Fort-Funston]: - Organizational Unit Name (eg, section) [MyOrganizationalUnit]: - Common Name (eg, your name or your server's hostname) [Fort-Funston CA]: + Country Name (2 letter code) [CN]: + State or Province Name (full name) [BJ]: + Locality Name (eg, city) [Beijing]: + Organization Name (eg, company) [Linux.CN]: + Organizational Unit Name (eg, section) [Tech]: + Common Name (eg, your name or your server's hostname) [Linux.CN CA]: Name [EasyRSA]: - Email Address [me@myhost.mydomain]: + Email Address [open@vpn.linux.cn]: 接下来,我们需要生成一个服务器密钥 @@ -102,14 +102,14 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 For some fields there will be a default value, If you enter '.', the field will be left blank. ----- - Country Name (2 letter code) [US]: - State or Province Name (full name) [CA]: - Locality Name (eg, city) [SanFrancisco]: - Organization Name (eg, company) [Fort-Funston]: - Organizational Unit Name (eg, section) [MyOrganizationalUnit]: - Common Name (eg, your name or your server's hostname) [server]: + Country Name (2 letter code) [CN]: + State or Province Name (full name) [BJ]: + Locality Name (eg, city) [Beijing]: + Organization Name (eg, company) [Linux.CN]: + Organizational Unit Name (eg, section) [Tech]: + Common Name (eg, your name or your server's hostname) [Linux.CN server]: Name [EasyRSA]: - Email Address [me@myhost.mydomain]: + Email Address [open@vpn.linux.cn]: Please enter the following 'extra' attributes to be sent with your certificate request @@ -119,14 +119,14 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 Check that the request matches the signature Signature ok The Subject's Distinguished Name is as follows - countryName :PRINTABLE:'US' - stateOrProvinceName :PRINTABLE:'CA' - localityName :PRINTABLE:'SanFrancisco' - organizationName :PRINTABLE:'Fort-Funston' - organizationalUnitName:PRINTABLE:'MyOrganizationalUnit' - commonName :PRINTABLE:'server' + countryName :PRINTABLE:'CN' + stateOrProvinceName :PRINTABLE:'BJ' + localityName :PRINTABLE:'Beijing' + organizationName :PRINTABLE:'Linux.CN' + organizationalUnitName:PRINTABLE:'Tech' + commonName :PRINTABLE:'Linux.CN server' name :PRINTABLE:'EasyRSA' - emailAddress :IA5STRING:'me@myhost.mydomain' + emailAddress :IA5STRING:'open@vpn.linux.cn' Certificate is to be certified until May 22 19:00:25 2025 GMT (3650 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? 
[y/n]y @@ -143,7 +143,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 Generating DH parameters, 2048 bit long safe prime, generator 2 This is going to take a long time - ................................+................ + ................................+................<许多的点> 在漫长的等待之后,我们可以继续生成最后的密钥了,该密钥用于TLS验证。命令如下: @@ -176,7 +176,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 ### Unix的客户端配置 ### -假定我们有一台装有类Unix操作系统的设备,比如Ubuntu 15.04,并安装有OpenVPN。我们想要从先前的部分连接到OpenVPN服务器。首先,我们需要为客户端生成密钥。为了生成该密钥,请转到服务器上的目录中: +假定我们有一台装有类Unix操作系统的设备,比如Ubuntu 15.04,并安装有OpenVPN。我们想要连接到前面建立的OpenVPN服务器。首先,我们需要为客户端生成密钥。为了生成该密钥,请转到服务器上的对应目录中: cd /etc/openvpn/easy-rsa/2.0 @@ -211,7 +211,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 dev tun proto udp - # IP and Port of remote host with OpenVPN server + # 远程 OpenVPN 服务器的 IP 和 端口号 remote 111.222.333.444 1194 resolv-retry infinite @@ -243,7 +243,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 安卓设备上的OpenVPN配置和Unix系统上的十分类似,我们需要一个含有配置文件、密钥和证书的包。文件列表如下: -- configuration file (.ovpn), +- 配置文件 (扩展名 .ovpn), - ca.crt, - dh2048.pem, - client.crt, @@ -257,7 +257,7 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 dev tun proto udp - # IP and Port of remote host with OpenVPN server + # 远程 OpenVPN 服务器的 IP 和 端口号 remote 111.222.333.444 1194 resolv-retry infinite @@ -274,21 +274,21 @@ OpenSSL提供了两种加密方法:对称和非对称。下面,我们展示 所有这些文件我们必须移动我们设备的SD卡上。 -然后,我们需要安装[OpenVPN连接][2]。 +然后,我们需要安装一个[OpenVPN Connect][2] 应用。 接下来,配置过程很是简单: - open setting of OpenVPN and select Import options - select Import Profile from SD card option - in opened window go to folder with prepared files and select .ovpn file - application offered us to create a new profile - tap on the Connect button and wait a second +- 打开 OpenVPN 并选择“Import”选项 +- 选择“Import Profile from SD card” +- 在打开的窗口中导航到我们放置好文件的目录,并选择那个 .ovpn 文件 +- 应用会要求我们创建一个新的配置文件 +- 点击“Connect”按钮并稍等一下 搞定。现在,我们的安卓设备已经通过安全的VPN连接连接到我们的专用网。 ### 尾声 ### -虽然OpenVPN初始配置花费不少时间,但是简易客户端配置为我们弥补了时间上的损失,也提供了从任何设备连接的能力。此外,OpenVPN提供了一个很高的安全等级,以及从不同地方连接的能力,包括位于NAT后面的客户端。因此,OpenVPN可以同时在家和在企业中使用。 +虽然OpenVPN初始配置花费不少时间,但是简易的客户端配置为我们弥补了时间上的损失,也提供了从任何设备连接的能力。此外,OpenVPN提供了一个很高的安全等级,以及从不同地方连接的能力,包括位于NAT后面的客户端。因此,OpenVPN可以同时在家和企业中使用。 -------------------------------------------------------------------------------- @@ -296,7 +296,7 @@ via: http://linoxide.com/ubuntu-how-to/configure-openvpn-server-client-ubuntu-15 作者:[Ivan Zabrovskiy][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 820aed5474fc757f233339713410d0767678ec0c Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 1 Aug 2015 23:33:38 +0800 Subject: [PATCH 048/207] PUB:20150522 Analyzing Linux Logs @ictlyh --- .../20150522 Analyzing Linux Logs.md | 66 ++++++++++--------- 1 file changed, 34 insertions(+), 32 deletions(-) rename {translated/tech => published}/20150522 Analyzing Linux Logs.md (71%) diff --git a/translated/tech/20150522 Analyzing Linux Logs.md b/published/20150522 Analyzing Linux Logs.md similarity index 71% rename from translated/tech/20150522 Analyzing Linux Logs.md rename to published/20150522 Analyzing Linux Logs.md index c037fc60aa..5c7e53c629 100644 --- a/translated/tech/20150522 Analyzing Linux Logs.md +++ b/published/20150522 Analyzing Linux Logs.md @@ -1,31 +1,33 @@ -Linux 日志分析 +如何分析 Linux 日志 ============================================================================== -日志中有大量的信息需要你处理,尽管有时候想要提取并非想象中的容易。在这篇文章中我们会介绍一些你现在就能做的基本日志分析例子。我们还将设计一些更高级的分析,但这些需要你前期努力做出切当的设置,后期就能节省很多时间。对数据进行高级分析的例子包括生成汇总计数、对有效值进行过滤,等等。 
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-Copy@2x1.png)

日志中有大量的信息需要你处理,尽管有时候想要提取并非想象中的容易。在这篇文章中我们会介绍一些你现在就能做的基本日志分析例子(只需要搜索即可)。我们还将涉及一些更高级的分析,但这些需要你前期努力做出适当的设置,后期就能节省很多时间。对数据进行高级分析的例子包括生成汇总计数、对有效值进行过滤,等等。

我们首先会向你展示如何在命令行中使用多个不同的工具,然后再展示一个日志管理工具如何能自动完成大部分繁重工作,从而使日志分析变得简单。

### 用 Grep 搜索 ###

搜索文本是查找信息最基本的方式。搜索文本最常用的工具是 [grep][1]。这个命令行工具在大部分 Linux 发行版中都有,它允许你用正则表达式搜索日志。正则表达式是一种用特殊的语言写的、能识别匹配文本的模式。最简单的模式就是用引号把你想要查找的字符串括起来。

#### 正则表达式 ####

这是一个在 Ubuntu 系统的认证日志中查找 “user hoover” 的例子:

    $ grep "user hoover" /var/log/auth.log
    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
    pam_unix(sshd:session): session opened for user hoover by (uid=0)
    pam_unix(sshd:session): session closed for user hoover

构建精确的正则表达式可能很难。例如,如果我们想要搜索一个类似端口 “4792” 的数字,它可能也会匹配时间戳、URL 以及其它不需要的数据。下面这个 Ubuntu 中的例子,就匹配了一行我们并不想要的 Apache 日志。

    $ grep "4792" /var/log/auth.log
    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
    74.91.21.46 - - [31/Mar/2015:19:44:32 +0000] "GET /scripts/samples/search?q=4972 HTTP/1.0" 404 545 "-" "-"

#### 环绕搜索 ####

另一个有用的小技巧是你可以用 grep 做环绕搜索,它会向你展示匹配行前面或后面几行的内容。它能帮助你调试导致错误或问题的东西。`B` 选项展示前面几行,`A` 选项展示后面几行。举个例子,我们知道当一个人以管理员身份登录失败时,同时他们的 IP 也没有反向解析,也就意味着他们可能没有有效的域名。这非常可疑!

    $ grep -B 3 -A 2 'Invalid user' /var/log/auth.log
    Apr 28 17:06:20 ip-172-31-11-241 sshd[12545]: reverse mapping checking getaddrinfo for 216-19-2-8.commspeed.net [216.19.2.8] failed - POSSIBLE BREAK-IN ATTEMPT!
@@ -42,7 +44,7 @@ Linux 日志分析 Apr 30 19:49:48 ip-172-31-11-241 sshd[6512]: Invalid user ubnt from 219.140.64.136 Apr 30 19:49:49 ip-172-31-11-241 sshd[6514]: Invalid user admin from 219.140.64.136 -关于 grep 和正则表达式的详细介绍并不在该指南的范围,但 [Ryan’s Tutorials][3] 有更深入的介绍。 +关于 grep 和正则表达式的详细介绍并不在本指南的范围,但 [Ryan’s Tutorials][3] 有更深入的介绍。 日志管理系统有更高的性能和更强大的搜索能力。它们通常会索引数据并进行并行查询,因此你可以很快的在几秒内就能搜索 GB 或 TB 的日志。相比之下,grep 就需要几分钟,在极端情况下可能甚至几小时。日志管理系统也使用类似 [Lucene][4] 的查询语言,它提供更简单的语法来检索数字、域以及其它。 @@ -54,13 +56,13 @@ Linux 提供了多个命令行工具用于文本解析和分析。当你想要 #### Cut #### -[cut][5] 命令允许你从分隔的日志解析域。分隔符是指能分开域或键值对的等号或逗号。 +[cut][5] 命令允许你从有分隔符的日志解析字段。分隔符是指能分开字段或键值对的等号或逗号等。 假设我们想从下面的日志中解析出用户: pam_unix(su:auth): authentication failure; logname=hoover uid=1000 euid=0 tty=/dev/pts/0 ruser=hoover rhost= user=root -我们可以像下面这样用 cut 命令获取第八个等号后面的文本。这是一个 Ubuntu 系统上的例子: +我们可以像下面这样用 cut 命令获取用等号分割后的第八个字段的文本。这是一个 Ubuntu 系统上的例子: $ grep "authentication failure" /var/log/auth.log | cut -d '=' -f 8 root @@ -71,13 +73,13 @@ Linux 提供了多个命令行工具用于文本解析和分析。当你想要 #### AWK #### -另外,你也可以使用 [awk][6],它能提供解析域更强大的功能。它提供了一个脚本语言,你可以过滤出几乎任何不相干的东西。 +另外,你也可以使用 [awk][6],它能提供更强大的解析字段功能。它提供了一个脚本语言,你可以过滤出几乎任何不相干的东西。 例如,假设在 Ubuntu 系统中我们有下面的一行日志,我们想要提取登录失败的用户名称: Mar 24 08:28:18 ip-172-31-11-241 sshd[32701]: input_userauth_request: invalid user guest [preauth] -你可以像下面这样使用 awk 命令。首先,用一个正则表达式 /sshd.*invalid user/ 来匹配 sshd 无效用户行。然后用 { print $9 } 根据默认的分隔符空格打印第九个域。然后就输出了用户名。 +你可以像下面这样使用 awk 命令。首先,用一个正则表达式 /sshd.*invalid user/ 来匹配 sshd invalid user 行。然后用 { print $9 } 根据默认的分隔符空格打印第九个字段。这样就输出了用户名。 $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log guest @@ -86,17 +88,17 @@ Linux 提供了多个命令行工具用于文本解析和分析。当你想要 test ubnt -你可以在 [Awk 用户指南][7] 中阅读更多关于如何使用正则表达式和输出域的信息。 +你可以在 [Awk 用户指南][7] 中阅读更多关于如何使用正则表达式和输出字段的信息。 #### 日志管理系统 #### -日志管理系统使得解析变得更加简单,使用户能快速的分析很多的日志文件。他们能自动解析标准的日志格式,比如常见的 Linux 日志和 Web 服务器日志。这能节省很多时间,因为当处理系统问题的时候你不需要考虑写自己的解析逻辑。 +日志管理系统使得解析变得更加简单,使用户能快速的分析很多的日志文件。他们能自动解析标准的日志格式,比如常见的 Linux 日志和 Web 服务器日志。这能节省很多时间,因为当处理系统问题的时候你不需要考虑自己写解析逻辑。 下面是一个 sshd 日志消息的例子,解析出了每个 remoteHost 和 user。这是 Loggly 中的一张截图,它是一个基于云的日志管理服务。 ![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.25.09-AM.png) -你也可以对非标准格式自定义解析。一个常用的工具是 [Grok][8],它用一个常见正则表达式库解析原始文本为结构化 JSON。下面是一个 Grok 在 Logstash 中解析内核日志文件的事例配置: +你也可以对非标准格式自定义解析。一个常用的工具是 [Grok][8],它用一个常见正则表达式库,可以解析原始文本为结构化 JSON。下面是一个 Grok 在 Logstash 中解析内核日志文件的事例配置: filter{ grok { @@ -110,29 +112,29 @@ Linux 提供了多个命令行工具用于文本解析和分析。当你想要 ### 用 Rsyslog 和 AWK 过滤 ### -过滤使得你能检索一个特定的域值而不是进行全文检索。这使你的日志分析更加准确,因为它会忽略来自其它部分日志信息不需要的匹配。为了对一个域值进行搜索,你首先需要解析日志或者至少有对事件结构进行检索的方式。 +过滤使得你能检索一个特定的字段值而不是进行全文检索。这使你的日志分析更加准确,因为它会忽略来自其它部分日志信息不需要的匹配。为了对一个字段值进行搜索,你首先需要解析日志或者至少有对事件结构进行检索的方式。 -#### 如果对 App 进行过滤 #### +#### 如何对应用进行过滤 #### 通常,你可能只想看一个应用的日志。如果你的应用把记录都保存到一个文件中就会很容易。如果你需要在一个聚集或集中式日志中过滤一个应用就会比较复杂。下面有几种方法来实现: -1. 用 rsyslog 守护进程解析和过滤日志。下面的例子将 sshd 应用的日志写入一个名为 sshd-message 的文件,然后丢弃事件也就不会在其它地方重复。你可以将它添加到你的 rsyslog.conf 文件中测试这个例子。 +1. 用 rsyslog 守护进程解析和过滤日志。下面的例子将 sshd 应用的日志写入一个名为 sshd-message 的文件,然后丢弃事件以便它不会在其它地方重复出现。你可以将它添加到你的 rsyslog.conf 文件中测试这个例子。 - :programname, isequal, “sshd” /var/log/sshd-messages - &~ + :programname, isequal, “sshd” /var/log/sshd-messages + &~ -2. 用类似 awk 的命令行工具提取特定域的值,例如 sshd 用户名。下面是 Ubuntu 系统中的一个例子。 +2. 用类似 awk 的命令行工具提取特定字段的值,例如 sshd 用户名。下面是 Ubuntu 系统中的一个例子。 - $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log - guest - admin - info - test - ubnt + $ awk '/sshd.*invalid user/ { print $9 }' /var/log/auth.log + guest + admin + info + test + ubnt 3. 
用日志管理系统自动解析日志,然后在需要的应用名称上点击过滤。下面是在 Loggly 日志管理服务中提取 syslog 域的截图。我们对应用名称 “sshd” 进行过滤,如维恩图图标所示。 -![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.02-AM.png) + ![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.02-AM.png) #### 如何过滤错误 #### @@ -146,7 +148,7 @@ Linux 提供了多个命令行工具用于文本解析和分析。当你想要 : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure -你可以用 awk 或者 grep 检索错误信息。在 Ubuntu 中,对这个例子,我们可以用一些环绕语法,例如 . 和 >,它们只会匹配这个域。 +你可以用 awk 或者 grep 检索错误信息。在 Ubuntu 中,对这个例子,我们可以用一些语法特征,例如 . 和 >,它们只会匹配这个域。 $ grep '.err>' /var/log/auth.log : Mar 11 18:18:00,hoover-VirtualBox,su[5026]:, pam_authenticate: Authentication failure @@ -161,9 +163,9 @@ Linux 提供了多个命令行工具用于文本解析和分析。当你想要 via: http://www.loggly.com/ultimate-guide/logging/analyzing-linux-logs/ -作者:[Jason Skowronski][a] [Amy Echeverri][b] [ Sadequl Hussain][c] +作者:[Jason Skowronski][a],[Amy Echeverri][b],[ Sadequl Hussain][c] 译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From d643100f422d15c3d3858304b8aa3a65369d3a82 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 1 Aug 2015 23:35:42 +0800 Subject: [PATCH 049/207] =?UTF-8?q?=E5=BD=92=E6=A1=A3=20201507?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... And Secure Tool To Sync Files or Folders Between Computers.md | 0 ... Apache '.htaccess' Tricks to Secure and Customize Websites.md | 0 ...mal BASH like line editing is supported GRUB Error In Linux.md | 0 ...0150309 Comparative Introduction To FreeBSD For Linux Users.md | 0 published/{ => 201507}/20150401 ZMap Documentation.md | 0 .../20150407 10 Truly Amusing Easter Eggs in Linux.md | 0 ...150410 10 Top Distributions in Demand to Get Your Dream Job.md | 0 ...age 'Systemd' Services and Units Using 'Systemctl' in Linux.md | 0 ...and Line Tool to Output Rainbow Of Colors in Linux Terminal.md | 0 ...Linux Better than OS X GNU Open Source and Apple in History.md | 0 .../20150526 20 Useful Terminal Emulators for Linux.md | 0 ...7 Animated Wallpaper Adds Live Backgrounds To Linux Distros.md | 0 ...7 How to Develop Own Custom Linux Distribution From Scratch.md | 0 ...0150527 How to edit your documents collaboratively on Linux.md | 0 .../20150601 How to monitor Linux servers with SNMP and Cacti.md | 0 .../20150601 How to monitor common services with Nagios.md | 0 ...150603 Installing Ruby on Rails using rbenv on Ubuntu 15.04.md | 0 .../20150604 How to access SQLite database in Perl.md | 0 ...ate Filenames Having Spaces and Special Characters in Linux.md | 0 .../{ => 201507}/20150610 How to secure your Linux server.md | 0 ...0610 watch--Repeat Linux or Unix Commands Regular Intervals.md | 0 ...How to Configure Apache Containers with Docker on Fedora 22.md | 0 ...0150612 How to Configure Swarm Native Clustering for Docker.md | 0 ... 
Line Tool to Print Color ANSI Logos of Linux Distributions.md | 0 .../{ => 201507}/20150615 How to combine two graphs on Cacti.md | 0 ...x, Apache, MariaDB, PHP or PhpMyAdmin in RHEL or CentOS 7.0.md | 0 published/{ => 201507}/20150616 LINUX 101--POWER UP YOUR SHELL.md | 0 .../{ => 201507}/20150616 Linux Humor on the Command-line.md | 0 published/{ => 201507}/20150616 XBMC--build a remote control.md | 0 ...An Ultimate Web Browser for Anonymous Web Browsing in Linux.md | 0 ...How to Setup Node.JS on Ubuntu 15.04 with Different Methods.md | 0 .../20150618 What will be the future of Linux without Linus.md | 0 ...0150625 Screen Capture Made Easy with these Dedicated Tools.md | 0 ...0629 4 Useful Tips on mkdir, tar and kill Commands in Linux.md | 0 .../20150629 Backup with these DeDuplicating Encryption Tools.md | 0 ... First Stable Version Of Atom Code Editor Has Been Released.md | 0 published/{ => 201507}/20150706 PHP Security.md | 0 ...50709 7 command line tools for monitoring your Linux system.md | 0 .../20150709 Install Google Hangouts Desktop Client In Linux.md | 0 ...alds Says People Who Believe in AI Singularity Are on Drugs.md | 0 ...x 'tar--Exiting with failure status due to previous errors'.md | 0 ...AQs with Answers--How to install a Brother printer on Linux.md | 0 ...w to open multiple tabs in a GNOME terminal on Ubuntu 15.04.md | 0 ...50709 Why is the ibdata1 file continuously growing in MySQL.md | 0 ... How To Fix System Program Problem Detected In Ubuntu 14.04.md | 0 published/{ => 201507}/20150713 How to manage Vim plugins.md | 0 .../20150716 4 CCleaner Alternatives For Ubuntu Linux.md | 0 ...Crossroads) Load Balancer for Web Servers on RHEL or CentOS.md | 0 ... 12 Useful PHP Commandline Usage Every Linux User Must Know.md | 0 ... to Use and Execute PHP Codes in Linux Command Line--Part 1.md | 0 ...tall Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md | 0 published/{ => 201507}/PHP 7 upgrading.md | 0 52 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => 201507}/20150121 Syncthing--A Private And Secure Tool To Sync Files or Folders Between Computers.md (100%) rename published/{ => 201507}/20150127 25 Useful Apache '.htaccess' Tricks to Secure and Customize Websites.md (100%) rename published/{ => 201507}/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md (100%) rename published/{ => 201507}/20150309 Comparative Introduction To FreeBSD For Linux Users.md (100%) rename published/{ => 201507}/20150401 ZMap Documentation.md (100%) rename published/{ => 201507}/20150407 10 Truly Amusing Easter Eggs in Linux.md (100%) rename published/{ => 201507}/20150410 10 Top Distributions in Demand to Get Your Dream Job.md (100%) rename published/{ => 201507}/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md (100%) rename published/{ => 201507}/20150515 Lolcat--A Command Line Tool to Output Rainbow Of Colors in Linux Terminal.md (100%) rename published/{ => 201507}/20150520 Is Linux Better than OS X GNU Open Source and Apple in History.md (100%) rename published/{ => 201507}/20150526 20 Useful Terminal Emulators for Linux.md (100%) rename published/{ => 201507}/20150527 Animated Wallpaper Adds Live Backgrounds To Linux Distros.md (100%) rename published/{ => 201507}/20150527 How to Develop Own Custom Linux Distribution From Scratch.md (100%) rename published/{ => 201507}/20150527 How to edit your documents collaboratively on Linux.md (100%) rename published/{ => 201507}/20150601 How to monitor Linux servers 
with SNMP and Cacti.md (100%) rename published/{ => 201507}/20150601 How to monitor common services with Nagios.md (100%) rename published/{ => 201507}/20150603 Installing Ruby on Rails using rbenv on Ubuntu 15.04.md (100%) rename published/{ => 201507}/20150604 How to access SQLite database in Perl.md (100%) rename published/{ => 201507}/20150610 How to Manipulate Filenames Having Spaces and Special Characters in Linux.md (100%) rename published/{ => 201507}/20150610 How to secure your Linux server.md (100%) rename published/{ => 201507}/20150610 watch--Repeat Linux or Unix Commands Regular Intervals.md (100%) rename published/{ => 201507}/20150612 How to Configure Apache Containers with Docker on Fedora 22.md (100%) rename published/{ => 201507}/20150612 How to Configure Swarm Native Clustering for Docker.md (100%) rename published/{ => 201507}/20150612 Linux_Logo--A Command Line Tool to Print Color ANSI Logos of Linux Distributions.md (100%) rename published/{ => 201507}/20150615 How to combine two graphs on Cacti.md (100%) rename published/{ => 201507}/20150616 Installing LAMP Linux, Apache, MariaDB, PHP or PhpMyAdmin in RHEL or CentOS 7.0.md (100%) rename published/{ => 201507}/20150616 LINUX 101--POWER UP YOUR SHELL.md (100%) rename published/{ => 201507}/20150616 Linux Humor on the Command-line.md (100%) rename published/{ => 201507}/20150616 XBMC--build a remote control.md (100%) rename published/{ => 201507}/20150617 Tor Browser--An Ultimate Web Browser for Anonymous Web Browsing in Linux.md (100%) rename published/{ => 201507}/20150618 How to Setup Node.JS on Ubuntu 15.04 with Different Methods.md (100%) rename published/{ => 201507}/20150618 What will be the future of Linux without Linus.md (100%) rename published/{ => 201507}/20150625 Screen Capture Made Easy with these Dedicated Tools.md (100%) rename published/{ => 201507}/20150629 4 Useful Tips on mkdir, tar and kill Commands in Linux.md (100%) rename published/{ => 201507}/20150629 Backup with these DeDuplicating Encryption Tools.md (100%) rename published/{ => 201507}/20150629 First Stable Version Of Atom Code Editor Has Been Released.md (100%) rename published/{ => 201507}/20150706 PHP Security.md (100%) rename published/{ => 201507}/20150709 7 command line tools for monitoring your Linux system.md (100%) rename published/{ => 201507}/20150709 Install Google Hangouts Desktop Client In Linux.md (100%) rename published/{ => 201507}/20150709 Linus Torvalds Says People Who Believe in AI Singularity Are on Drugs.md (100%) rename published/{ => 201507}/20150709 Linux FAQs with Answers--How to fix 'tar--Exiting with failure status due to previous errors'.md (100%) rename published/{ => 201507}/20150709 Linux FAQs with Answers--How to install a Brother printer on Linux.md (100%) rename published/{ => 201507}/20150709 Linux FAQs with Answers--How to open multiple tabs in a GNOME terminal on Ubuntu 15.04.md (100%) rename published/{ => 201507}/20150709 Why is the ibdata1 file continuously growing in MySQL.md (100%) rename published/{ => 201507}/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md (100%) rename published/{ => 201507}/20150713 How to manage Vim plugins.md (100%) rename published/{ => 201507}/20150716 4 CCleaner Alternatives For Ubuntu Linux.md (100%) rename published/{ => 201507}/20150717 Setting Up 'XR' (Crossroads) Load Balancer for Web Servers on RHEL or CentOS.md (100%) rename published/{ => 201507}/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md (100%) rename published/{ 
=> 201507}/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md (100%) rename published/{ => 201507}/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md (100%) rename published/{ => 201507}/PHP 7 upgrading.md (100%) diff --git a/published/20150121 Syncthing--A Private And Secure Tool To Sync Files or Folders Between Computers.md b/published/201507/20150121 Syncthing--A Private And Secure Tool To Sync Files or Folders Between Computers.md similarity index 100% rename from published/20150121 Syncthing--A Private And Secure Tool To Sync Files or Folders Between Computers.md rename to published/201507/20150121 Syncthing--A Private And Secure Tool To Sync Files or Folders Between Computers.md diff --git a/published/20150127 25 Useful Apache '.htaccess' Tricks to Secure and Customize Websites.md b/published/201507/20150127 25 Useful Apache '.htaccess' Tricks to Secure and Customize Websites.md similarity index 100% rename from published/20150127 25 Useful Apache '.htaccess' Tricks to Secure and Customize Websites.md rename to published/201507/20150127 25 Useful Apache '.htaccess' Tricks to Secure and Customize Websites.md diff --git a/published/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md b/published/201507/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md similarity index 100% rename from published/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md rename to published/201507/20150227 Fix Minimal BASH like line editing is supported GRUB Error In Linux.md diff --git a/published/20150309 Comparative Introduction To FreeBSD For Linux Users.md b/published/201507/20150309 Comparative Introduction To FreeBSD For Linux Users.md similarity index 100% rename from published/20150309 Comparative Introduction To FreeBSD For Linux Users.md rename to published/201507/20150309 Comparative Introduction To FreeBSD For Linux Users.md diff --git a/published/20150401 ZMap Documentation.md b/published/201507/20150401 ZMap Documentation.md similarity index 100% rename from published/20150401 ZMap Documentation.md rename to published/201507/20150401 ZMap Documentation.md diff --git a/published/20150407 10 Truly Amusing Easter Eggs in Linux.md b/published/201507/20150407 10 Truly Amusing Easter Eggs in Linux.md similarity index 100% rename from published/20150407 10 Truly Amusing Easter Eggs in Linux.md rename to published/201507/20150407 10 Truly Amusing Easter Eggs in Linux.md diff --git a/published/20150410 10 Top Distributions in Demand to Get Your Dream Job.md b/published/201507/20150410 10 Top Distributions in Demand to Get Your Dream Job.md similarity index 100% rename from published/20150410 10 Top Distributions in Demand to Get Your Dream Job.md rename to published/201507/20150410 10 Top Distributions in Demand to Get Your Dream Job.md diff --git a/published/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md b/published/201507/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md similarity index 100% rename from published/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md rename to published/201507/20150505 How to Manage 'Systemd' Services and Units Using 'Systemctl' in Linux.md diff --git a/published/20150515 Lolcat--A Command Line Tool to Output Rainbow Of Colors in Linux Terminal.md b/published/201507/20150515 Lolcat--A Command Line Tool to Output Rainbow Of Colors in Linux Terminal.md 
similarity index 100% rename from published/20150515 Lolcat--A Command Line Tool to Output Rainbow Of Colors in Linux Terminal.md rename to published/201507/20150515 Lolcat--A Command Line Tool to Output Rainbow Of Colors in Linux Terminal.md diff --git a/published/20150520 Is Linux Better than OS X GNU Open Source and Apple in History.md b/published/201507/20150520 Is Linux Better than OS X GNU Open Source and Apple in History.md similarity index 100% rename from published/20150520 Is Linux Better than OS X GNU Open Source and Apple in History.md rename to published/201507/20150520 Is Linux Better than OS X GNU Open Source and Apple in History.md diff --git a/published/20150526 20 Useful Terminal Emulators for Linux.md b/published/201507/20150526 20 Useful Terminal Emulators for Linux.md similarity index 100% rename from published/20150526 20 Useful Terminal Emulators for Linux.md rename to published/201507/20150526 20 Useful Terminal Emulators for Linux.md diff --git a/published/20150527 Animated Wallpaper Adds Live Backgrounds To Linux Distros.md b/published/201507/20150527 Animated Wallpaper Adds Live Backgrounds To Linux Distros.md similarity index 100% rename from published/20150527 Animated Wallpaper Adds Live Backgrounds To Linux Distros.md rename to published/201507/20150527 Animated Wallpaper Adds Live Backgrounds To Linux Distros.md diff --git a/published/20150527 How to Develop Own Custom Linux Distribution From Scratch.md b/published/201507/20150527 How to Develop Own Custom Linux Distribution From Scratch.md similarity index 100% rename from published/20150527 How to Develop Own Custom Linux Distribution From Scratch.md rename to published/201507/20150527 How to Develop Own Custom Linux Distribution From Scratch.md diff --git a/published/20150527 How to edit your documents collaboratively on Linux.md b/published/201507/20150527 How to edit your documents collaboratively on Linux.md similarity index 100% rename from published/20150527 How to edit your documents collaboratively on Linux.md rename to published/201507/20150527 How to edit your documents collaboratively on Linux.md diff --git a/published/20150601 How to monitor Linux servers with SNMP and Cacti.md b/published/201507/20150601 How to monitor Linux servers with SNMP and Cacti.md similarity index 100% rename from published/20150601 How to monitor Linux servers with SNMP and Cacti.md rename to published/201507/20150601 How to monitor Linux servers with SNMP and Cacti.md diff --git a/published/20150601 How to monitor common services with Nagios.md b/published/201507/20150601 How to monitor common services with Nagios.md similarity index 100% rename from published/20150601 How to monitor common services with Nagios.md rename to published/201507/20150601 How to monitor common services with Nagios.md diff --git a/published/20150603 Installing Ruby on Rails using rbenv on Ubuntu 15.04.md b/published/201507/20150603 Installing Ruby on Rails using rbenv on Ubuntu 15.04.md similarity index 100% rename from published/20150603 Installing Ruby on Rails using rbenv on Ubuntu 15.04.md rename to published/201507/20150603 Installing Ruby on Rails using rbenv on Ubuntu 15.04.md diff --git a/published/20150604 How to access SQLite database in Perl.md b/published/201507/20150604 How to access SQLite database in Perl.md similarity index 100% rename from published/20150604 How to access SQLite database in Perl.md rename to published/201507/20150604 How to access SQLite database in Perl.md diff --git a/published/20150610 How to Manipulate 
Filenames Having Spaces and Special Characters in Linux.md b/published/201507/20150610 How to Manipulate Filenames Having Spaces and Special Characters in Linux.md similarity index 100% rename from published/20150610 How to Manipulate Filenames Having Spaces and Special Characters in Linux.md rename to published/201507/20150610 How to Manipulate Filenames Having Spaces and Special Characters in Linux.md diff --git a/published/20150610 How to secure your Linux server.md b/published/201507/20150610 How to secure your Linux server.md similarity index 100% rename from published/20150610 How to secure your Linux server.md rename to published/201507/20150610 How to secure your Linux server.md diff --git a/published/20150610 watch--Repeat Linux or Unix Commands Regular Intervals.md b/published/201507/20150610 watch--Repeat Linux or Unix Commands Regular Intervals.md similarity index 100% rename from published/20150610 watch--Repeat Linux or Unix Commands Regular Intervals.md rename to published/201507/20150610 watch--Repeat Linux or Unix Commands Regular Intervals.md diff --git a/published/20150612 How to Configure Apache Containers with Docker on Fedora 22.md b/published/201507/20150612 How to Configure Apache Containers with Docker on Fedora 22.md similarity index 100% rename from published/20150612 How to Configure Apache Containers with Docker on Fedora 22.md rename to published/201507/20150612 How to Configure Apache Containers with Docker on Fedora 22.md diff --git a/published/20150612 How to Configure Swarm Native Clustering for Docker.md b/published/201507/20150612 How to Configure Swarm Native Clustering for Docker.md similarity index 100% rename from published/20150612 How to Configure Swarm Native Clustering for Docker.md rename to published/201507/20150612 How to Configure Swarm Native Clustering for Docker.md diff --git a/published/20150612 Linux_Logo--A Command Line Tool to Print Color ANSI Logos of Linux Distributions.md b/published/201507/20150612 Linux_Logo--A Command Line Tool to Print Color ANSI Logos of Linux Distributions.md similarity index 100% rename from published/20150612 Linux_Logo--A Command Line Tool to Print Color ANSI Logos of Linux Distributions.md rename to published/201507/20150612 Linux_Logo--A Command Line Tool to Print Color ANSI Logos of Linux Distributions.md diff --git a/published/20150615 How to combine two graphs on Cacti.md b/published/201507/20150615 How to combine two graphs on Cacti.md similarity index 100% rename from published/20150615 How to combine two graphs on Cacti.md rename to published/201507/20150615 How to combine two graphs on Cacti.md diff --git a/published/20150616 Installing LAMP Linux, Apache, MariaDB, PHP or PhpMyAdmin in RHEL or CentOS 7.0.md b/published/201507/20150616 Installing LAMP Linux, Apache, MariaDB, PHP or PhpMyAdmin in RHEL or CentOS 7.0.md similarity index 100% rename from published/20150616 Installing LAMP Linux, Apache, MariaDB, PHP or PhpMyAdmin in RHEL or CentOS 7.0.md rename to published/201507/20150616 Installing LAMP Linux, Apache, MariaDB, PHP or PhpMyAdmin in RHEL or CentOS 7.0.md diff --git a/published/20150616 LINUX 101--POWER UP YOUR SHELL.md b/published/201507/20150616 LINUX 101--POWER UP YOUR SHELL.md similarity index 100% rename from published/20150616 LINUX 101--POWER UP YOUR SHELL.md rename to published/201507/20150616 LINUX 101--POWER UP YOUR SHELL.md diff --git a/published/20150616 Linux Humor on the Command-line.md b/published/201507/20150616 Linux Humor on the Command-line.md similarity index 100% 
rename from published/20150616 Linux Humor on the Command-line.md rename to published/201507/20150616 Linux Humor on the Command-line.md diff --git a/published/20150616 XBMC--build a remote control.md b/published/201507/20150616 XBMC--build a remote control.md similarity index 100% rename from published/20150616 XBMC--build a remote control.md rename to published/201507/20150616 XBMC--build a remote control.md diff --git a/published/20150617 Tor Browser--An Ultimate Web Browser for Anonymous Web Browsing in Linux.md b/published/201507/20150617 Tor Browser--An Ultimate Web Browser for Anonymous Web Browsing in Linux.md similarity index 100% rename from published/20150617 Tor Browser--An Ultimate Web Browser for Anonymous Web Browsing in Linux.md rename to published/201507/20150617 Tor Browser--An Ultimate Web Browser for Anonymous Web Browsing in Linux.md diff --git a/published/20150618 How to Setup Node.JS on Ubuntu 15.04 with Different Methods.md b/published/201507/20150618 How to Setup Node.JS on Ubuntu 15.04 with Different Methods.md similarity index 100% rename from published/20150618 How to Setup Node.JS on Ubuntu 15.04 with Different Methods.md rename to published/201507/20150618 How to Setup Node.JS on Ubuntu 15.04 with Different Methods.md diff --git a/published/20150618 What will be the future of Linux without Linus.md b/published/201507/20150618 What will be the future of Linux without Linus.md similarity index 100% rename from published/20150618 What will be the future of Linux without Linus.md rename to published/201507/20150618 What will be the future of Linux without Linus.md diff --git a/published/20150625 Screen Capture Made Easy with these Dedicated Tools.md b/published/201507/20150625 Screen Capture Made Easy with these Dedicated Tools.md similarity index 100% rename from published/20150625 Screen Capture Made Easy with these Dedicated Tools.md rename to published/201507/20150625 Screen Capture Made Easy with these Dedicated Tools.md diff --git a/published/20150629 4 Useful Tips on mkdir, tar and kill Commands in Linux.md b/published/201507/20150629 4 Useful Tips on mkdir, tar and kill Commands in Linux.md similarity index 100% rename from published/20150629 4 Useful Tips on mkdir, tar and kill Commands in Linux.md rename to published/201507/20150629 4 Useful Tips on mkdir, tar and kill Commands in Linux.md diff --git a/published/20150629 Backup with these DeDuplicating Encryption Tools.md b/published/201507/20150629 Backup with these DeDuplicating Encryption Tools.md similarity index 100% rename from published/20150629 Backup with these DeDuplicating Encryption Tools.md rename to published/201507/20150629 Backup with these DeDuplicating Encryption Tools.md diff --git a/published/20150629 First Stable Version Of Atom Code Editor Has Been Released.md b/published/201507/20150629 First Stable Version Of Atom Code Editor Has Been Released.md similarity index 100% rename from published/20150629 First Stable Version Of Atom Code Editor Has Been Released.md rename to published/201507/20150629 First Stable Version Of Atom Code Editor Has Been Released.md diff --git a/published/20150706 PHP Security.md b/published/201507/20150706 PHP Security.md similarity index 100% rename from published/20150706 PHP Security.md rename to published/201507/20150706 PHP Security.md diff --git a/published/20150709 7 command line tools for monitoring your Linux system.md b/published/201507/20150709 7 command line tools for monitoring your Linux system.md similarity index 100% rename from 
published/20150709 7 command line tools for monitoring your Linux system.md rename to published/201507/20150709 7 command line tools for monitoring your Linux system.md diff --git a/published/20150709 Install Google Hangouts Desktop Client In Linux.md b/published/201507/20150709 Install Google Hangouts Desktop Client In Linux.md similarity index 100% rename from published/20150709 Install Google Hangouts Desktop Client In Linux.md rename to published/201507/20150709 Install Google Hangouts Desktop Client In Linux.md diff --git a/published/20150709 Linus Torvalds Says People Who Believe in AI Singularity Are on Drugs.md b/published/201507/20150709 Linus Torvalds Says People Who Believe in AI Singularity Are on Drugs.md similarity index 100% rename from published/20150709 Linus Torvalds Says People Who Believe in AI Singularity Are on Drugs.md rename to published/201507/20150709 Linus Torvalds Says People Who Believe in AI Singularity Are on Drugs.md diff --git a/published/20150709 Linux FAQs with Answers--How to fix 'tar--Exiting with failure status due to previous errors'.md b/published/201507/20150709 Linux FAQs with Answers--How to fix 'tar--Exiting with failure status due to previous errors'.md similarity index 100% rename from published/20150709 Linux FAQs with Answers--How to fix 'tar--Exiting with failure status due to previous errors'.md rename to published/201507/20150709 Linux FAQs with Answers--How to fix 'tar--Exiting with failure status due to previous errors'.md diff --git a/published/20150709 Linux FAQs with Answers--How to install a Brother printer on Linux.md b/published/201507/20150709 Linux FAQs with Answers--How to install a Brother printer on Linux.md similarity index 100% rename from published/20150709 Linux FAQs with Answers--How to install a Brother printer on Linux.md rename to published/201507/20150709 Linux FAQs with Answers--How to install a Brother printer on Linux.md diff --git a/published/20150709 Linux FAQs with Answers--How to open multiple tabs in a GNOME terminal on Ubuntu 15.04.md b/published/201507/20150709 Linux FAQs with Answers--How to open multiple tabs in a GNOME terminal on Ubuntu 15.04.md similarity index 100% rename from published/20150709 Linux FAQs with Answers--How to open multiple tabs in a GNOME terminal on Ubuntu 15.04.md rename to published/201507/20150709 Linux FAQs with Answers--How to open multiple tabs in a GNOME terminal on Ubuntu 15.04.md diff --git a/published/20150709 Why is the ibdata1 file continuously growing in MySQL.md b/published/201507/20150709 Why is the ibdata1 file continuously growing in MySQL.md similarity index 100% rename from published/20150709 Why is the ibdata1 file continuously growing in MySQL.md rename to published/201507/20150709 Why is the ibdata1 file continuously growing in MySQL.md diff --git a/published/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md b/published/201507/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md similarity index 100% rename from published/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md rename to published/201507/20150713 How To Fix System Program Problem Detected In Ubuntu 14.04.md diff --git a/published/20150713 How to manage Vim plugins.md b/published/201507/20150713 How to manage Vim plugins.md similarity index 100% rename from published/20150713 How to manage Vim plugins.md rename to published/201507/20150713 How to manage Vim plugins.md diff --git a/published/20150716 4 CCleaner Alternatives For Ubuntu Linux.md 
b/published/201507/20150716 4 CCleaner Alternatives For Ubuntu Linux.md similarity index 100% rename from published/20150716 4 CCleaner Alternatives For Ubuntu Linux.md rename to published/201507/20150716 4 CCleaner Alternatives For Ubuntu Linux.md diff --git a/published/20150717 Setting Up 'XR' (Crossroads) Load Balancer for Web Servers on RHEL or CentOS.md b/published/201507/20150717 Setting Up 'XR' (Crossroads) Load Balancer for Web Servers on RHEL or CentOS.md similarity index 100% rename from published/20150717 Setting Up 'XR' (Crossroads) Load Balancer for Web Servers on RHEL or CentOS.md rename to published/201507/20150717 Setting Up 'XR' (Crossroads) Load Balancer for Web Servers on RHEL or CentOS.md diff --git a/published/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md b/published/201507/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md similarity index 100% rename from published/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md rename to published/201507/20150722 12 Useful PHP Commandline Usage Every Linux User Must Know.md diff --git a/published/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md b/published/201507/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md similarity index 100% rename from published/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md rename to published/201507/20150722 How to Use and Execute PHP Codes in Linux Command Line--Part 1.md diff --git a/published/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md b/published/201507/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md similarity index 100% rename from published/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md rename to published/201507/Install Plex Media Server On Ubuntu or CentOS 7.1 or Fedora 22.md diff --git a/published/PHP 7 upgrading.md b/published/201507/PHP 7 upgrading.md similarity index 100% rename from published/PHP 7 upgrading.md rename to published/201507/PHP 7 upgrading.md From 7a48a26cd5e38cfab5140f261777e84ce39351f1 Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 2 Aug 2015 07:38:30 +0800 Subject: [PATCH 050/207] translating --- ...al Volume Management and How Do You Enable It in Ubuntu.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md b/sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md index 1641bd8f20..2a95f127d0 100644 --- a/sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md +++ b/sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md @@ -1,3 +1,5 @@ +translating----geekpi + What is Logical Volume Management and How Do You Enable It in Ubuntu? ================================================================================ > Logical Volume Management (LVM) is a disk management option that every major Linux distribution includes. Whether you need to set up storage pools or just need to dynamically create partitions, LVM is probably what you are looking for. 
@@ -69,4 +71,4 @@ via: http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and- 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:https://plus.google.com/+howtogeek?prsrc=5 -[1]:http://www.howtogeek.com/howto/13379/create-a-bootable-ubuntu-9.10-usb-flash-drive/ \ No newline at end of file +[1]:http://www.howtogeek.com/howto/13379/create-a-bootable-ubuntu-9.10-usb-flash-drive/ From 6719850a73e7d0985b14cacac51b88052e1be9cf Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 2 Aug 2015 08:20:22 +0800 Subject: [PATCH 051/207] translated --- ...ment and How Do You Enable It in Ubuntu.md | 74 ------------------- ...ment and How Do You Enable It in Ubuntu.md | 72 ++++++++++++++++++ 2 files changed, 72 insertions(+), 74 deletions(-) delete mode 100644 sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md create mode 100644 translated/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md diff --git a/sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md b/sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md deleted file mode 100644 index 2a95f127d0..0000000000 --- a/sources/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md +++ /dev/null @@ -1,74 +0,0 @@ -translating----geekpi - -What is Logical Volume Management and How Do You Enable It in Ubuntu? -================================================================================ -> Logical Volume Management (LVM) is a disk management option that every major Linux distribution includes. Whether you need to set up storage pools or just need to dynamically create partitions, LVM is probably what you are looking for. - -### What is LVM? ### - -Logical Volume Manager allows for a layer of abstraction between your operating system and the disks/partitions it uses. In traditional disk management your operating system looks for what disks are available (/dev/sda, /dev/sdb, etc.) and then looks at what partitions are available on those disks (/dev/sda1, /dev/sda2, etc.). - -With LVM, disks and partitions can be abstracted to contain multiple disks and partitions into one device. Your operating systems will never know the difference because LVM will only show the OS the volume groups (disks) and logical volumes (partitions) that you have set up. - -Because volume groups and logical volumes aren’t physically tied to a hard drive, it makes it easy to dynamically resize and create new disks and partitions. In addition, LVM can give you features that your file system is not capable of doing. For example, Ext3 does not have support for live snapshots, but if you’re using LVM you have the ability to take a snapshot of your logical volumes without unmounting the disk. - -### When Should You Use LVM? ### - -The first thing your should consider before setting up LVM is what you want to accomplish with your disks and partitions. Some distributions, like Fedora, install with LVM by default. - -If you are using Ubuntu on a laptop with only one internal hard drive and you don’t need extended features like live snapshots, then you may not need LVM. If you need easy expansion or want to combine multiple hard drives into a single pool of storage then LVM may be what you have been looking for. 
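-
-To give a rough idea of the live snapshot capability mentioned above, here is approximately what it looks like with the standard LVM command-line tools (the volume group “vg0” and logical volume “home” below are made-up names for this sketch; substitute your own):
-
-    # Create a 1 GB copy-on-write snapshot of a mounted logical volume
-    sudo lvcreate --size 1G --snapshot --name home-snap /dev/vg0/home
-
-    # List logical volumes to confirm the snapshot exists
-    sudo lvs
-
-    # Remove the snapshot once you are done with it
-    sudo lvremove /dev/vg0/home-snap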
-
-### Setting up LVM in Ubuntu ###
-
-First thing to know about using LVM is there is no easy way to convert your existing traditional partitions to logical volumes. It is possible to move to a new partition that uses LVM, but that won’t be something that we will cover in this article; instead we are going to take the approach of setting up LVM on a fresh installation of Ubuntu 10.10.
-
-![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/ubuntu-10-banner.png)
-
-To install Ubuntu using LVM you need to use the alternate install CD. Download it from the link below and burn a CD or [use unetbootin to create a USB drive][1].
-
-![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/download-web.png)
-
-Boot your computer from the alternate install disk and select your options up until the partition disks screen and select guided – use entire disk and set up LVM.
-
-*Note: This will format your entire hard drive so if you are trying to dual boot or have another installation select manual instead.*
-
-![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-1.png)
-
-Select the main disk you want to use, typically your largest drive, and then go to the next step.
-
-![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-2.png)
-
-You will immediately need to write the changes to disk so make sure you selected the right disk and then write the changes.
-
-![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-3.png)
-
-Select the size you want the first logical volume to be and then continue.
-
-![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-4.png)
-
-Confirm your disk partitions and continue with the installation.
-
-![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-5.png)
-
-The final step will write the GRUB bootloader to the hard drive. It is important to note that GRUB cannot be on an LVM partition because computer BIOSes cannot directly read from a logical volume. Ubuntu will automatically create a 255 MB ext2 partition for your bootloader.
-
-![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-6.png)
-
-After the installation is complete, reboot the machine and boot into Ubuntu as normal. There should be no perceivable difference between using LVM or traditional disk management with this type of installation.
-
-![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/disk-manager.png)
-
-To use LVM to its full potential, stay tuned for our upcoming article on managing your LVM installation.
-
---------------------------------------------------------------------------------
-
-via: http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-how-do-you-enable-it-in-ubuntu/
-
-作者:[How-To Geek][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://plus.google.com/+howtogeek?prsrc=5
-[1]:http://www.howtogeek.com/howto/13379/create-a-bootable-ubuntu-9.10-usb-flash-drive/
diff --git a/translated/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md b/translated/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md
new file mode 100644
index 0000000000..05f07b74e6
--- /dev/null
+++ b/translated/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md
@@ -0,0 +1,72 @@
+什么是逻辑卷管理?如何在Ubuntu中启用它?
+================================================================================
+> 逻辑卷管理(LVM)是每一个主流Linux发行版都含有的磁盘管理选项。无论你是需要设置存储池,还是只需要动态地创建分区,LVM都可能是你正在寻找的。
+
+### 什么是 LVM? ###
+
+逻辑卷管理是存在于磁盘/分区和操作系统之间的一个抽象层。在传统的磁盘管理中,你的操作系统会先查找有哪些磁盘可用(/dev/sda、/dev/sdb等等),再查找这些磁盘上有哪些可用的分区(如/dev/sda1、/dev/sda2等等)。
+
+在LVM下,磁盘和分区可以被抽象,把多个磁盘和分区组合进同一个设备中。你的操作系统不会察觉到任何区别,因为LVM只会给操作系统展示你设置好的卷组(磁盘)和逻辑卷(分区)。
+
+由于卷组和逻辑卷并不与物理硬盘绑定,因此可以很容易地动态调整大小和创建新的磁盘和分区。除此之外,LVM还能带来你的文件系统所不具备的功能。比如,ext3不支持实时快照,但如果你正在使用LVM,你就可以在不卸载磁盘的情况下对逻辑卷做一个快照。
+
+### 你什么时候该使用LVM? ###
+
+在使用LVM之前,首先要考虑的是你想用你的磁盘和分区实现什么。一些发行版(如Fedora)在安装时默认就使用了LVM。
+
+如果你使用的是一台只有一块内置硬盘的Ubuntu笔记本电脑,并且不需要实时快照这样的扩展功能,那么你或许不需要LVM。如果你想要轻松地扩展存储,或者想把多块磁盘组成一个存储池,那么LVM或许正是你要寻找的。
+
+### 在Ubuntu中设置LVM ###
+
+使用LVM首先要了解的一件事是,没有简单的方法可以把已有的传统分区转换成逻辑卷。可以把数据迁移到一个使用LVM的新分区,但这不在本篇的讨论范围内;我们将在一台全新安装的Ubuntu 10.10上设置LVM。
+
+![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/ubuntu-10-banner.png)
+
+要使用LVM安装Ubuntu,你需要使用替代(alternate)安装CD。从下面的链接中下载并烧录成CD,或者[使用unetbootin创建一个USB盘][1]。
+
+![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/download-web.png)
+
+从安装盘启动你的电脑,一路选择你需要的选项,直到磁盘分区界面,然后选择“向导式分区——使用整个磁盘并设置LVM”。
+
+*注意:这会格式化你的整个磁盘,因此如果你想要双启动或者有其他安装需求,请选择手动分区。*
+
+![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-1.png)
+
+选择你想用的主磁盘,通常是你最大的那块,然后进入下一步。
+
+![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-2.png)
+
+接下来就要将改动写入磁盘了,所以在写入之前,请确保你选择的是正确的磁盘。
+
+![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/setup-3.png)
+
+选择第一个逻辑卷的大小并继续。
+
+![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-4.png)
+
+确认你的磁盘分区并继续安装。
+
+![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-5.png)
+
+最后一步是将GRUB bootloader写入磁盘。需要重点注意的是,GRUB不能位于LVM分区上,因为计算机的BIOS不能直接从逻辑卷中读取数据。Ubuntu会自动创建一个255MB的ext2分区,用于存放bootloader。
+
+![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/setup-6.png)
+
+安装完成之后,重启电脑,像往常一样进入Ubuntu。使用这种方式安装之后,应该感受不到LVM和传统磁盘管理之间有什么区别。
+
+![](http://cdn3.howtogeek.com/wp-content/uploads/2011/01/disk-manager.png)
+
+要使用LVM的全部功能,敬请期待我们下一篇关于管理LVM的文章。
+
+--------------------------------------------------------------------------------
+
+via: http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-how-do-you-enable-it-in-ubuntu/
+
+作者:[How-To Geek][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/+howtogeek?prsrc=5
+[1]:http://www.howtogeek.com/howto/13379/create-a-bootable-ubuntu-9.10-usb-flash-drive/
From 110ba8016c7041b6c478c5b3c20e7ca435b75a3d Mon Sep 17 00:00:00 2001
From: ictlyh
Date: Sun, 2 Aug 2015 12:51:45 +0800
Subject: [PATCH 052/207] [Translating]
 sources/tech/20150504 How to access a Linux server behind NAT via reverse
 SSH tunnel.md

---
 ...to access a Linux server behind NAT via reverse SSH tunnel.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md b/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md
index b67f5aee26..4239073013 100644
--- a/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md
+++ b/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md
@@ -1,3 +1,4 @@
+ictlyh Translating
 How to access a Linux server behind NAT via reverse SSH tunnel
 ================================================================================
 You are running a Linux server at home, which is behind a NAT router or restrictive firewall.
Now you want to SSH to the home server while you are away from home. How would you set that up? SSH port forwarding will certainly be an option. However, port forwarding can become tricky if you are dealing with multiple nested NAT environment. Besides, it can be interfered with under various ISP-specific conditions, such as restrictive ISP firewalls which block forwarded ports, or carrier-grade NAT which shares IPv4 addresses among users. From 1bc3516185f27417f9a750a8cd6b756010c72929 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sun, 2 Aug 2015 13:58:34 +0800 Subject: [PATCH 053/207] Update 20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md [Translating] 20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md --- ...0150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md index 4b49e3acca..99b2b3acc1 100644 --- a/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md +++ b/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md @@ -1,3 +1,5 @@ ++Translating by Ezio + How to run Ubuntu Snappy Core on Raspberry Pi 2 ================================================================================ The Internet of Things (IoT) is upon us. In a couple of years some of us might ask ourselves how we ever survived without it, just like we question our past without cellphones today. Canonical is a contender in this fast growing, but still wide open market. The company wants to claim their stakes in IoT just as they already did for the cloud. At the end of January, the company launched a small operating system that goes by the name of [Ubuntu Snappy Core][1] which is based on Ubuntu Core. @@ -86,4 +88,4 @@ via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html [1]:http://www.ubuntu.com/things [2]:http://www.raspberrypi.org/downloads/ [3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html -[4]:https://developer.ubuntu.com/en/snappy/ \ No newline at end of file +[4]:https://developer.ubuntu.com/en/snappy/ From 70694119830a191fbcd888e946c9b7ecb3d0619b Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:13:22 +0800 Subject: [PATCH 054/207] Delete Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md --- ... RAID, Concepts of RAID and RAID Levels.md | 144 ------------------ 1 file changed, 144 deletions(-) delete mode 100644 sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md diff --git a/sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md b/sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md deleted file mode 100644 index 0f393fd7c4..0000000000 --- a/sources/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md +++ /dev/null @@ -1,144 +0,0 @@ -struggling 翻译中 -Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 -================================================================================ -RAID is a Redundant Array of Inexpensive disks, but nowadays it is called Redundant Array of Independent drives. Earlier it is used to be very costly to buy even a smaller size of disk, but nowadays we can buy a large size of disk with the same amount like before. Raid is just a collection of disks in a pool to become a logical volume. 
- -![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) - -Understanding RAID Setups in Linux - -Raid contains groups or sets or Arrays. A combine of drivers make a group of disks to form a RAID Array or RAID set. It can be a minimum of 2 number of disk connected to a raid controller and make a logical volume or more drives can be in a group. Only one Raid level can be applied in a group of disks. Raid are used when we need excellent performance. According to our selected raid level, performance will differ. Saving our data by fault tolerance & high availability. - -This series will be titled Preparation for the setting up RAID ‘s through Parts 1-9 and covers the following topics. - -- Part 1: Introduction to RAID, Concepts of RAID and RAID Levels -- Part 2: How to setup RAID0 (Stripe) in Linux -- Part 3: How to setup RAID1 (Mirror) in Linux -- Part 4: How to setup RAID5 (Striping with Distributed Parity) in Linux -- Part 5: How to setup RAID6 (Striping with Double Distributed Parity) in Linux -- Part 6: Setting Up RAID 10 or 1+0 (Nested) in Linux -- Part 7: Growing an Existing RAID Array and Removing Failed Disks in Raid -- Part 8: Recovering (Rebuilding) failed drives in RAID -- Part 9: Managing RAID in Linux - -This is the Part 1 of a 9-tutorial series, here we will cover the introduction of RAID, Concepts of RAID and RAID Levels that are required for the setting up RAID in Linux. - -### Software RAID and Hardware RAID ### - -Software RAID have low performance, because of consuming resource from hosts. Raid software need to load for read data from software raid volumes. Before loading raid software, OS need to get boot to load the raid software. No need of Physical hardware in software raids. Zero cost investment. - -Hardware RAID have high performance. They are dedicated RAID Controller which is Physically built using PCI express cards. It won’t use the host resource. They have NVRAM for cache to read and write. Stores cache while rebuild even if there is power-failure, it will store the cache using battery power backups. Very costly investments needed for a large scale. - -Hardware RAID Card will look like below: - -![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) - -Hardware RAID - -#### Featured Concepts of RAID #### - -- Parity method in raid regenerate the lost content from parity saved information’s. RAID 5, RAID 6 Based on Parity. -- Stripe is sharing data randomly to multiple disk. This won’t have full data in a single disk. If we use 3 disks half of our data will be in each disks. -- Mirroring is used in RAID 1 and RAID 10. Mirroring is making a copy of same data. In RAID 1 it will save the same content to the other disk too. -- Hot spare is just a spare drive in our server which can automatically replace the failed drives. If any one of the drive failed in our array this hot spare drive will be used and rebuild automatically. -- Chunks are just a size of data which can be minimum from 4KB and more. By defining chunk size we can increase the I/O performance. - -RAID’s are in various Levels. Here we will see only the RAID Levels which is used mostly in real environment. - -- RAID0 = Striping -- RAID1 = Mirroring -- RAID5 = Single Disk Distributed Parity -- RAID6 = Double Disk Distributed Parity -- RAID10 = Combine of Mirror & Stripe. (Nested RAID) - -RAID are managed using mdadm package in most of the Linux distributions. Let us get a Brief look into each RAID Levels. 
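-
-As a quick, purely illustrative aside (treat /dev/md0 below as a placeholder for whatever md device you create later in this series), mdadm can already show you what the kernel knows about any existing arrays:
-
-    # cat /proc/mdstat
-    # mdadm --detail /dev/md0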
- -#### RAID 0 (or) Striping #### - -Striping have a excellent performance. In Raid 0 (Striping) the data will be written to disk using shared method. Half of the content will be in one disk and another half will be written to other disk. - -Let us assume we have 2 Disk drives, for example, if we write data “TECMINT” to logical volume it will be saved as ‘T‘ will be saved in first disk and ‘E‘ will be saved in Second disk and ‘C‘ will be saved in First disk and again ‘M‘ will be saved in Second disk and it continues in round-robin process. - -In this situation if any one of the drive fails we will loose our data, because with half of data from one of the disk can’t use to rebuilt the raid. But while comparing to Write Speed and performance RAID 0 is Excellent. We need at least minimum 2 disks to create a RAID 0 (Striping). If you need your valuable data don’t use this RAID LEVEL. - -- High Performance. -- There is Zero Capacity Loss in RAID 0 -- Zero Fault Tolerance. -- Write and Reading will be good performance. - -#### RAID 1 (or) Mirroring #### - -Mirroring have a good performance. Mirroring can make a copy of same data what we have. Assuming we have two numbers of 2TB Hard drives, total there we have 4TB, but in mirroring while the drives are behind the RAID Controller to form a Logical drive Only we can see the 2TB of logical drive. - -While we save any data, it will write to both 2TB Drives. Minimum two drives are needed to create a RAID 1 or Mirror. If a disk failure occurred we can reproduce the raid set by replacing a new disk. If any one of the disk fails in RAID 1, we can get the data from other one as there was a copy of same content in the other disk. So there is zero data loss. - -- Good Performance. -- Here Half of the Space will be lost in total capacity. -- Full Fault Tolerance. -- Rebuilt will be faster. -- Writing Performance will be slow. -- Reading will be good. -- Can be used for operating systems and database for small scale. - -#### RAID 5 (or) Distributed Parity #### - -RAID 5 is mostly used in enterprise levels. RAID 5 work by distributed parity method. Parity info will be used to rebuild the data. It rebuilds from the information left on the remaining good drives. This will protect our data from drive failure. - -Assume we have 4 drives, if one drive fails and while we replace the failed drive we can rebuild the replaced drive from parity informations. Parity information’s are Stored in all 4 drives, if we have 4 numbers of 1TB hard-drive. The parity information will be stored in 256GB in each drivers and other 768GB in each drives will be defined for Users. RAID 5 can be survive from a single Drive failure, If drives fails more than 1 will cause loss of data’s. - -- Excellent Performance -- Reading will be extremely very good in speed. -- Writing will be Average, slow if we won’t use a Hardware RAID Controller. -- Rebuild from Parity information from all drives. -- Full Fault Tolerance. -- 1 Disk Space will be under Parity. -- Can be used in file servers, web servers, very important backups. - -#### RAID 6 Two Parity Distributed Disk #### - -RAID 6 is same as RAID 5 with two parity distributed system. Mostly used in a large number of arrays. We need minimum 4 Drives, even if there 2 Drive fails we can rebuild the data while replacing new drives. - -Very slower than RAID 5, because it writes data to all 4 drivers at same time. Will be average in speed while we using a Hardware RAID Controller. 
If we have 6 numbers of 1TB hard-drives 4 drives will be used for data and 2 drives will be used for Parity. - -- Poor Performance. -- Read Performance will be good. -- Write Performance will be Poor if we not using a Hardware RAID Controller. -- Rebuild from 2 Parity Drives. -- Full Fault tolerance. -- 2 Disks space will be under Parity. -- Can be Used in Large Arrays. -- Can be use in backup purpose, video streaming, used in large scale. - -#### RAID 10 (or) Mirror & Stripe #### - -RAID 10 can be called as 1+0 or 0+1. This will do both works of Mirror & Striping. Mirror will be first and stripe will be the second in RAID 10. Stripe will be the first and mirror will be the second in RAID 01. RAID 10 is better comparing to 01. - -Assume, we have 4 Number of drives. While I’m writing some data to my logical volume it will be saved under All 4 drives using mirror and stripe methods. - -If I’m writing a data “TECMINT” in RAID 10 it will save the data as follow. First “T” will write to both disks and second “E” will write to both disk, this step will be used for all data write. It will make a copy of every data to other disk too. - -Same time it will use the RAID 0 method and write data as follow “T” will write to first disk and “E” will write to second disk. Again “C” will write to first Disk and “M” to second disk. - -- Good read and write performance. -- Here Half of the Space will be lost in total capacity. -- Fault Tolerance. -- Fast rebuild from copying data. -- Can be used in Database storage for high performance and availability. - -### Conclusion ### - -In this article we have seen what is RAID and which levels are mostly used in RAID in real environment. Hope you have learned the write-up about RAID. For RAID setup one must know about the basic Knowledge about RAID. The above content will fulfil basic understanding about RAID. - -In the next upcoming articles I’m going to cover how to setup and create a RAID using Various Levels, Growing a RAID Group (Array) and Troubleshooting with failed Drives and much more. 
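-
-As a small, illustrative preview of those upcoming setup articles (the device names /dev/sd[b-e]1 are placeholders for your own RAID-type partitions), a four-disk RAID 10 set could be created with a single mdadm call along these lines:
-
-    # mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1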
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/understanding-raid-setup-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ \ No newline at end of file From 4e630c4b50c5c68b42fb2706cad309db1d8320b8 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:13:38 +0800 Subject: [PATCH 055/207] Delete Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md --- ...wo Devices' Using 'mdadm' Tool in Linux.md | 219 ------------------ 1 file changed, 219 deletions(-) delete mode 100644 sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md diff --git a/sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md b/sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md deleted file mode 100644 index 8057e4828e..0000000000 --- a/sources/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on 'Two Devices' Using 'mdadm' Tool in Linux.md +++ /dev/null @@ -1,219 +0,0 @@ -struggling 翻译中 -Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 -================================================================================ -RAID is Redundant Array of Inexpensive disks, used for high availability and reliability in large scale environments, where data need to be protected than normal use. Raid is just a collection of disks in a pool to become a logical volume and contains an array. A combine drivers makes an array or called as set of (group). - -RAID can be created, if there are minimum 2 number of disk connected to a raid controller and make a logical volume or more drives can be added in an array according to defined RAID Levels. Software Raid are available without using Physical hardware those are called as software raid. Software Raid will be named as Poor man raid. - -![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) - -Setup RAID0 in Linux - -Main concept of using RAID is to save data from Single point of failure, means if we using a single disk to store the data and if it’s failed, then there is no chance of getting our data back, to stop the data loss we need a fault tolerance method. So, that we can use some collection of disk to form a RAID set. - -#### What is Stripe in RAID 0? #### - -Stripe is striping data across multiple disk at the same time by dividing the contents. Assume we have two disks and if we save content to logical volume it will be saved under both two physical disks by dividing the content. For better performance RAID 0 will be used, but we can’t get the data if one of the drive fails. So, it isn’t a good practice to use RAID 0. The only solution is to install operating system with RAID0 applied logical volumes to safe your important files. - -- RAID 0 has High Performance. -- Zero Capacity Loss in RAID 0. No Space will be wasted. -- Zero Fault Tolerance ( Can’t get back the data if any one of disk fails). -- Write and Reading will be Excellent. - -#### Requirements #### - -Minimum number of disks are allowed to create RAID 0 is 2, but you can add more disk but the order should be twice as 2, 4, 6, 8. 
If you have a Physical RAID card with enough ports, you can add more disks. - -Here we are not using a Hardware raid, this setup depends only on Software RAID. If we have a physical hardware raid card we can access it from it’s utility UI. Some motherboard by default in-build with RAID feature, there UI can be accessed using Ctrl+I keys. - -If you’re new to RAID setups, please read our earlier article, where we’ve covered some basic introduction of about RAID. - -- [Introduction to RAID and RAID Concepts][1] - -**My Server Setup** - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.225 - Two Disks : 20 GB each - -This article is Part 2 of a 9-tutorial RAID series, here in this part, we are going to see how we can create and setup Software RAID0 or striping in Linux systems or servers using two 20GB disks named sdb and sdc. - -### Step 1: Updating System and Installing mdadm for Managing RAID ### - -1. Before setting up RAID0 in Linux, let’s do a system update and then install ‘mdadm‘ package. The mdadm is a small program, which will allow us to configure and manage RAID devices in Linux. - - # yum clean all && yum update - # yum install mdadm -y - -![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) - -Install mdadm Tool - -### Step 2: Verify Attached Two 20GB Drives ### - -2. Before creating RAID 0, make sure to verify that the attached two hard drives are detected or not, using the following command. - - # ls -l /dev | grep sd - -![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) - -Check Hard Drives - -3. Once the new hard drives detected, it’s time to check whether the attached drives are already using any existing raid with the help of following ‘mdadm’ command. - - # mdadm --examine /dev/sd[b-c] - -![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) - -Check RAID Devices - -In the above output, we come to know that none of the RAID have been applied to these two sdb and sdc drives. - -### Step 3: Creating Partitions for RAID ### - -4. Now create sdb and sdc partitions for raid, with the help of following fdisk command. Here, I will show how to create partition on sdb drive. - - # fdisk /dev/sdb - -Follow below instructions for creating partitions. - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. -- Next select the partition number as 1. -- Give the default value by just pressing two times Enter key. -- Next press ‘P‘ to print the defined partition. - -![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) - -Create Partitions - -Follow below instructions for creating Linux raid auto on partitions. - -- Press ‘L‘ to list all available types. -- Type ‘t‘to choose the partitions. -- Choose ‘fd‘ for Linux raid auto and press Enter to apply. -- Then again use ‘P‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) - -Create RAID Partitions in Linux - -**Note**: Please follow same above instructions to create partition on sdc drive now. - -5. After creating partitions, verify both the drivers are correctly defined for RAID using following command. 
- - # mdadm --examine /dev/sd[b-c] - # mdadm --examine /dev/sd[b-c]1 - -![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) - -Verify RAID Partitions - -### Step 4: Creating RAID md Devices ### - -6. Now create md device (i.e. /dev/md0) and apply raid level using below command. - - # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 - # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 - -- -C – create -- -l – level -- -n – No of raid-devices - -7. Once md device has been created, now verify the status of RAID Level, Devices and Array used, with the help of following series of commands as shown. - - # cat /proc/mdstat - -![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) - -Verify RAID Level - - # mdadm -E /dev/sd[b-c]1 - -![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) - -Verify RAID Device - - # mdadm --detail /dev/md0 - -![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) - -Verify RAID Array - -### Step 5: Assiging RAID Devices to Filesystem ### - -8. Create a ext4 filesystem for a RAID device /dev/md0 and mount it under /dev/raid0. - - # mkfs.ext4 /dev/md0 - -![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) - -Create ext4 Filesystem - -9. Once ext4 filesystem has been created for Raid device, now create a mount point directory (i.e. /mnt/raid0) and mount the device /dev/md0 under it. - - # mkdir /mnt/raid0 - # mount /dev/md0 /mnt/raid0/ - -10. Next, verify that the device /dev/md0 is mounted under /mnt/raid0 directory using df command. - - # df -h - -11. Next, create a file called ‘tecmint.txt‘ under the mount point /mnt/raid0, add some content to the created file and view the content of a file and directory. - - # touch /mnt/raid0/tecmint.txt - # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt - # cat /mnt/raid0/tecmint.txt - # ls -l /mnt/raid0/ - -![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) - -Verify Mount Device - -12. Once you’ve verified mount points, it’s time to create an fstab entry in /etc/fstab file. - - # vim /etc/fstab - -Add the following entry as described. May vary according to your mount location and filesystem you using. - - /dev/md0 /mnt/raid0 ext4 deaults 0 0 - -![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) - -Add Device to Fstab - -13. Run mount ‘-a‘ to check if there is any error in fstab entry. - - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) - -Check Errors in Fstab - -### Step 6: Saving RAID Configurations ### - -14. Finally, save the raid configuration to one of the file to keep the configurations for future use. Again we use ‘mdadm’ command with ‘-s‘ (scan) and ‘-v‘ (verbose) options as shown. - - # mdadm -E -s -v >> /etc/mdadm.conf - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - # cat /etc/mdadm.conf - -![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) - -Save RAID Configurations - -That’s it, we have seen here, how to configure RAID0 striping with raid levels by using two hard disks. In next article, we will see how to setup RAID5. 
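-
-As an optional, purely illustrative sanity check (the mount point matches the one used above, and the test file can be removed afterwards), dd gives a rough feel for the striped volume's sequential write speed:
-
-    # dd if=/dev/zero of=/mnt/raid0/testfile bs=1M count=1024 oflag=direct
-    # rm /mnt/raid0/testfile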
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid0-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ \ No newline at end of file From 5a2d8d351fa17cb222c3ab9429902a75352d9804 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:15:47 +0800 Subject: [PATCH 056/207] =?UTF-8?q?Create=20Introduction=20to=20RAID,=20Co?= =?UTF-8?q?ncepts=20of=20RAID=20and=20RAID=20Levels=20=E2=80=93=20Part=201?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Concepts of RAID and RAID Levels – Part 1 | 146 ++++++++++++++++++ 1 file changed, 146 insertions(+) create mode 100644 translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 diff --git a/translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 b/translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 new file mode 100644 index 0000000000..8ca0ecbd7e --- /dev/null +++ b/translated/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 @@ -0,0 +1,146 @@ + +RAID的级别和概念的介绍 - 第1部分 +================================================================================ +RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 + + +![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) + +在 Linux 中理解 RAID 的设置 + +RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 + +这个系列被命名为RAID的构建共包含9个部分包括以下主题。 + +- 第1部分:RAID的级别和概念的介绍 +- 第2部分:在Linux中如何设置 RAID0(条带化) +- 第3部分:在Linux中如何设置 RAID1(镜像化) +- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) +- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) +- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) +- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 +- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 +- 第9部分:在 Linux 中管理 RAID + +这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 + + +### 软件RAID和硬件RAID ### + +软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 + +硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 + +硬件 RAID 卡如下所示: + +![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) + +硬件RAID + +#### 精选的 RAID 概念 #### + +- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 +- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 +- 镜像被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 +- 在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 +- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 + +RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 + +- RAID0 = 条带化 +- RAID1 = 镜像 +- RAID5 = 单个磁盘分布式奇偶校验 +- RAID6 = 双盘分布式奇偶校验 +- RAID10 = 镜像 + 条带。(嵌套RAID) + +RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 + +#### RAID 0(或)条带化 #### + +条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 + +假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 + +在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 + +- 高性能。 +- 在 
RAID0 上零容量损失。
+- 零容错。
+- 写和读有很高的性能。
+
+#### RAID1(或)镜像化 ####
+
+镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两个2TB的硬盘驱动器,总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成了一个逻辑驱动器,我们只能看到这个2TB的逻辑驱动器。
+
+当我们保存数据时,它将同时写入两个2TB驱动器中。创建 RAID 1(镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以通过更换一个新的磁盘来恢复 RAID。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为另一个磁盘中也有相同的数据。所以是零数据丢失。
+
+- 良好的性能。
+- 总容量的一半将会损失。
+- 完全容错。
+- 重建会更快。
+- 写性能将是缓慢的。
+- 读性能将会很好。
+- 可用于小规模的操作系统和数据库。
+
+#### RAID 5(或)分布式奇偶校验 ####
+
+RAID 5 多用于企业级环境。RAID 5 通过分布式奇偶校验的方式工作。奇偶校验信息将被用于重建数据,它依靠其余正常驱动器上保留的信息进行重建。驱动器发生故障时,这可以保护我们的数据。
+
+假设我们有4个驱动器,如果其中一个驱动器发生故障,在我们更换发生故障的驱动器后,我们可以利用奇偶校验信息把数据重建到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上:如果我们有4个 1TB 的驱动器,奇偶校验信息将占用每个驱动器的256GB,而每个驱动器上其余的768GB供用户使用。单个驱动器故障后,RAID 5 依旧正常工作;如果损坏的驱动器超过1个,就会导致数据丢失。
+
+- 性能卓越。
+- 读速度将非常好。
+- 如果我们不使用硬件 RAID 控制器,写速度会比较缓慢。
+- 从所有驱动器的奇偶校验信息中重建。
+- 完全容错。
+- 1个磁盘的空间将用于奇偶校验。
+- 可以被用在文件服务器、Web服务器和非常重要的备份中。
+
+#### RAID 6 两个分布式奇偶校验磁盘 ####
+
+RAID 6 和 RAID 5 相似,但它有两份分布式奇偶校验。它大多用在磁盘数量较多的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以在更换新的驱动器后重建数据。
+
+它比 RAID 5 慢得多,因为它要将数据同时写到4个驱动器上。使用硬件 RAID 控制器时速度可以达到平均水平。如果我们有6个1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。
+
+- 性能不佳。
+- 读的性能很好。
+- 如果我们不使用硬件 RAID 控制器,写的性能会很差。
+- 从两个奇偶校验驱动器上重建。
+- 完全容错。
+- 2个磁盘的空间将用于奇偶校验。
+- 可用于大型阵列。
+- 大规模用于备份和视频流。
+
+#### RAID 10(或)镜像+条带 ####
+
+RAID 10 可以被称为 1+0 或 0+1。它同时做镜像和条带两种工作。在 RAID 10 中首先做镜像然后做条带;在 RAID 01 中首先做条带,然后做镜像。RAID 10 比 01 好。
+
+假设我们有4个驱动器。当我写一些数据到逻辑卷上时,它会使用镜像和条带的方式将数据保存到这4个驱动器上。
+
+如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存:首先将“T”同时写入两个磁盘,接着将“E”也同时写入两个磁盘,所有数据都按这一方式写入。这样每份数据在另一个磁盘上都有一个副本。
+
+同时它将使用 RAID 0 的方式写入数据:“T”写入第一个盘,“E”写入第二个盘;接着“C”写入第一个盘,“M”写入第二个盘。
+
+- 良好的读写性能。
+- 总容量的一半将会损失。
+- 容错。
+- 从备份数据中快速重建。
+- 它的高性能和高可用性常被用于数据库的存储中。
+
+### 结论 ###
+
+在这篇文章中,我们已经了解了什么是 RAID,以及在实际环境中大多采用哪些 RAID 级别。希望你已经掌握了上面所写的内容。构建 RAID 前必须了解有关 RAID 的基础知识,以上内容基本可以满足这一点。
+
+在接下来的文章中,我将介绍如何使用各种级别设置和创建 RAID、扩展 RAID 组(阵列),以及驱动器故障排除等。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/understanding-raid-setup-in-linux/
+
+作者:[Babin Lonston][a]
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/babinlonston/
From f1ab428b04c8e3d225b0e5bfdde1c28f541282e3 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Sun, 2 Aug 2015 16:16:13 +0800
Subject: [PATCH 057/207] =?UTF-8?q?Delete=20Introduction=20to=20RAID,=20Co?=
 =?UTF-8?q?ncepts=20of=20RAID=20and=20RAID=20Levels=20=E2=80=93=20Part=201?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...
两个分布式奇偶校验磁盘 #### - -RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 - -它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 - -- 性能不佳。 -- 读的性能很好。 -- 如果我们不使用硬件 RAID 控制器写的性能会很差。 -- 从2奇偶校验驱动器上重建。 -- 完全容错。 -- 2个磁盘空间将用于奇偶校验。 -- 可用于大型阵列。 -- 在备份和视频流中大规模使用。 - -#### RAID 10(或)镜像+条带 #### - -RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 - -假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 - -如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 - -同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 - -- 良好的读写性能。 -- 空间的一半将在总容量丢失。 -- 容错。 -- 从备份数据中快速重建。 -- 它的高性能和高可用性常被用于数据库的存储中。 - -### 结论 ### - -在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 的哪个级别。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 - -在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/understanding-raid-setup-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ From 9e4df4271683151f4f04b3cc44bc04b1f16cf1e0 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:16:43 +0800 Subject: [PATCH 058/207] =?UTF-8?q?Create=20Introduction=20to=20RAID,=20Co?= =?UTF-8?q?ncepts=20of=20RAID=20and=20RAID=20Levels=20=E2=80=93=20Part=201?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Concepts of RAID and RAID Levels – Part 1 | 218 ++++++++++++++++++ 1 file changed, 218 insertions(+) create mode 100644 translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 diff --git a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 new file mode 100644 index 0000000000..9feba99609 --- /dev/null +++ b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 @@ -0,0 +1,218 @@ +在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 ‘两个设备’ 上 - 第2部分 +================================================================================ +RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 + +创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 + +![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) + +在 Linux 中创建 RAID0 + +使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 + +#### 在 RAID 0 中条带是什么 #### + +条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 + +- RAID 0 性能较高。 +- 在 RAID 0 上,空间零浪费。 +- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 +- 写和读性能得以提高。 + +#### 要求 #### + +创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 + +在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 + +如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 + +- [Introduction to RAID and RAID Concepts][1] + +**我的服务器设置** + + Operating System : CentOS 6.5 Final + IP 
Address : 192.168.0.225 + Two Disks : 20 GB each + +这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 + +### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### + +1.在 Linux 上设置 RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 + + # yum clean all && yum update + # yum install mdadm -y + +![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) + +安装 mdadm 工具 + +### 第2步:检测并连接两个 20GB 的硬盘 ### + +2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 + + # ls -l /dev | grep sd + +![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) + +检查硬盘 + +3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 + + # mdadm --examine /dev/sd[b-c] + +![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) + +检查 RAID 设备 + +从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 + +### 第3步:创建 RAID 分区 ### + +4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 + + # fdisk /dev/sdb + +请按照以下说明创建分区。 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 ‘P’ 来打印创建好的分区。 + +![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) + +创建分区 + +请按照以下说明将分区创建为 Linux 的 RAID 类型。 + +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 去修改分区。 +- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) + +在 Linux 上创建 RAID 分区 + +**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 + +5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 + + # mdadm --examine /dev/sd[b-c] + # mdadm --examine /dev/sd[b-c]1 + +![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) + +验证 RAID 分区 + +### 第4步:创建 RAID md 设备 ### + +6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 + + # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 + # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 + +- -C – create +- -l – level +- -n – No of raid-devices + +7.一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 + + # cat /proc/mdstat + +![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) + +查看 RAID 级别 + + # mdadm -E /dev/sd[b-c]1 + +![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) + +查看 RAID 设备 + + # mdadm --detail /dev/md0 + +![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) + +查看 RAID 阵列 + +### 第5步:挂载 RAID 设备到文件系统 ### + +8.将 RAID 设备 /dev/md0 创建为 ext4 文件系统并挂载到 /mnt/raid0 下。 + + # mkfs.ext4 /dev/md0 + +![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) + +创建 ext4 文件系统 + +9. ext4 文件系统为 RAID 设备创建好后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 + + # mkdir /mnt/raid0 + # mount /dev/md0 /mnt/raid0/ + +10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 + + # df -h + +11.接下来,创建一个名为 ‘tecmint.txt’ 的文件挂载到 /mnt/raid0 下,为创建的文件添加一些内容,并查看文件和目录的内容。 + + # touch /mnt/raid0/tecmint.txt + # echo "Hi everyone how you doing ?" 
> /mnt/raid0/tecmint.txt + # cat /mnt/raid0/tecmint.txt + # ls -l /mnt/raid0/ + +![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) + +验证挂载的设备 + +12.一旦你验证挂载点后,同时将它添加到 /etc/fstab 文件中。 + + # vim /etc/fstab + +添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 + + /dev/md0 /mnt/raid0 ext4 deaults 0 0 + +![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) + +添加设备到 fstab 文件中 + +13.使用 mount ‘-a‘ 来检查 fstab 的条目是否有误。 + + # mount -av + +![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) + +检查 fstab 文件是否有误 + +### 第6步:保存 RAID 配置 ### + +14.最后,保存 RAID 配置到一个文件中,以供将来使用。同样,我们使用 ‘mdadm’ 命令带有 ‘-s‘ (scan) 和 ‘-v‘ (verbose) 选项,如图所示。 + + # mdadm -E -s -v >> /etc/mdadm.conf + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + # cat /etc/mdadm.conf + +![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) + +保存 RAID 配置 + +就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID0 级别。在接下来的文章中,我们将看到如何设置 RAID5。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid0-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 0b475953acb42284d0e6595285b29c20745b9124 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:17:39 +0800 Subject: [PATCH 059/207] =?UTF-8?q?Update=20Introduction=20to=20RAID,=20Co?= =?UTF-8?q?ncepts=20of=20RAID=20and=20RAID=20Levels=20=E2=80=93=20Part=201?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 
Concepts of RAID and RAID Levels – Part 1 | 250 +++++++----------- 1 file changed, 89 insertions(+), 161 deletions(-) diff --git a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 index 9feba99609..8ca0ecbd7e 100644 --- a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 +++ b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 @@ -1,212 +1,141 @@ -在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 ‘两个设备’ 上 - 第2部分 + +RAID的级别和概念的介绍 - 第1部分 ================================================================================ -RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 +RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 -创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 -![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) +![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) -在 Linux 中创建 RAID0 +在 Linux 中理解 RAID 的设置 -使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 +RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 -#### 在 RAID 0 中条带是什么 #### +这个系列被命名为RAID的构建共包含9个部分包括以下主题。 -条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 +- 第1部分:RAID的级别和概念的介绍 +- 第2部分:在Linux中如何设置 RAID0(条带化) +- 第3部分:在Linux中如何设置 RAID1(镜像化) +- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) +- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) +- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) +- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 +- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 +- 第9部分:在 Linux 中管理 RAID -- RAID 0 性能较高。 -- 在 RAID 0 上,空间零浪费。 -- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 -- 写和读性能得以提高。 +这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 -#### 要求 #### -创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 +### 软件RAID和硬件RAID ### -在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 +软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 -如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 +硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 -- [Introduction to RAID and RAID Concepts][1] +硬件 RAID 卡如下所示: -**我的服务器设置** +![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.225 - Two Disks : 20 GB each +硬件RAID -这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 +#### 精选的 RAID 概念 #### -### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### +- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 +- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 +- 镜像被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 +- 在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 +- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 -1.在 Linux 上设置 RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 +RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 - # yum 
clean all && yum update - # yum install mdadm -y +- RAID0 = 条带化 +- RAID1 = 镜像 +- RAID5 = 单个磁盘分布式奇偶校验 +- RAID6 = 双盘分布式奇偶校验 +- RAID10 = 镜像 + 条带。(嵌套RAID) -![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) +RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 -安装 mdadm 工具 +#### RAID 0(或)条带化 #### -### 第2步:检测并连接两个 20GB 的硬盘 ### +条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 -2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 +假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 - # ls -l /dev | grep sd +在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 -![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) +- 高性能。 +- 在 RAID0 上零容量损失。 +- 零容错。 +- 写和读有很高的性能。 -检查硬盘 +#### RAID1(或)镜像化 #### -3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 +镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两组2TB的硬盘驱动器,我们总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成一个逻辑驱动器,我们只能看到逻辑驱动器有2TB。 - # mdadm --examine /dev/sd[b-c] +当我们保存数据时,它将同时写入2TB驱动器中。创建 RAID 1 (镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以恢复 RAID 通过更换一个新的磁盘。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为其他的磁盘中也有相同的数据。所以是零数据丢失。 -![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) +- 良好的性能。 +- 空间的一半将在总容量丢失。 +- 完全容错。 +- 重建会更快。 +- 写性能将是缓慢的。 +- 读将会很好。 +- 被操作系统和数据库使用的规模很小。 -检查 RAID 设备 +#### RAID 5(或)分布式奇偶校验 #### -从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 +RAID 5 多用于企业的水平。 RAID 5 的工作通过分布式奇偶校验的方法。奇偶校验信息将被用于重建数据。它需要留下的正常驱动器上的信息去重建。驱动器故障时,这会保护我们的数据。 -### 第3步:创建 RAID 分区 ### +假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 -4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 +- 性能卓越 +- 读速度将非常好。 +- 如果我们不使用硬件 RAID 控制器,写速度是缓慢的。 +- 从所有驱动器的奇偶校验信息中重建。 +- 完全容错。 +- 1个磁盘空间将用于奇偶校验。 +- 可以被用在文件服务器,Web服务器,非常重要的备份中。 - # fdisk /dev/sdb +#### RAID 6 两个分布式奇偶校验磁盘 #### -请按照以下说明创建分区。 +RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 只需按两次回车键选择默认值即可。 -- 然后,按 ‘P’ 来打印创建好的分区。 +它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 -![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) +- 性能不佳。 +- 读的性能很好。 +- 如果我们不使用硬件 RAID 控制器写的性能会很差。 +- 从2奇偶校验驱动器上重建。 +- 完全容错。 +- 2个磁盘空间将用于奇偶校验。 +- 可用于大型阵列。 +- 在备份和视频流中大规模使用。 -创建分区 +#### RAID 10(或)镜像+条带 #### -请按照以下说明将分区创建为 Linux 的 RAID 类型。 +RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 去修改分区。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 +假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 -![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) +如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 -在 Linux 上创建 RAID 分区 +同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 -**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 +- 良好的读写性能。 +- 空间的一半将在总容量丢失。 +- 容错。 +- 从备份数据中快速重建。 +- 它的高性能和高可用性常被用于数据库的存储中。 -5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 +### 结论 ### - # mdadm --examine /dev/sd[b-c] - # mdadm --examine /dev/sd[b-c]1 +在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 的哪个级别。希望你已经学会了上面所写的。对于 
RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 -![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) - -验证 RAID 分区 - -### 第4步:创建 RAID md 设备 ### - -6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 - - # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 - # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 - -- -C – create -- -l – level -- -n – No of raid-devices - -7.一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 - - # cat /proc/mdstat - -![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) - -查看 RAID 级别 - - # mdadm -E /dev/sd[b-c]1 - -![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) - -查看 RAID 设备 - - # mdadm --detail /dev/md0 - -![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) - -查看 RAID 阵列 - -### 第5步:挂载 RAID 设备到文件系统 ### - -8.将 RAID 设备 /dev/md0 创建为 ext4 文件系统并挂载到 /mnt/raid0 下。 - - # mkfs.ext4 /dev/md0 - -![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) - -创建 ext4 文件系统 - -9. ext4 文件系统为 RAID 设备创建好后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 - - # mkdir /mnt/raid0 - # mount /dev/md0 /mnt/raid0/ - -10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 - - # df -h - -11.接下来,创建一个名为 ‘tecmint.txt’ 的文件挂载到 /mnt/raid0 下,为创建的文件添加一些内容,并查看文件和目录的内容。 - - # touch /mnt/raid0/tecmint.txt - # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt - # cat /mnt/raid0/tecmint.txt - # ls -l /mnt/raid0/ - -![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) - -验证挂载的设备 - -12.一旦你验证挂载点后,同时将它添加到 /etc/fstab 文件中。 - - # vim /etc/fstab - -添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 - - /dev/md0 /mnt/raid0 ext4 deaults 0 0 - -![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) - -添加设备到 fstab 文件中 - -13.使用 mount ‘-a‘ 来检查 fstab 的条目是否有误。 - - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) - -检查 fstab 文件是否有误 - -### 第6步:保存 RAID 配置 ### - -14.最后,保存 RAID 配置到一个文件中,以供将来使用。同样,我们使用 ‘mdadm’ 命令带有 ‘-s‘ (scan) 和 ‘-v‘ (verbose) 选项,如图所示。 - - # mdadm -E -s -v >> /etc/mdadm.conf - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - # cat /etc/mdadm.conf - -![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) - -保存 RAID 配置 - -就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID0 级别。在接下来的文章中,我们将看到如何设置 RAID5。 +在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 -------------------------------------------------------------------------------- -via: http://www.tecmint.com/create-raid0-in-linux/ +via: http://www.tecmint.com/understanding-raid-setup-in-linux/ 作者:[Babin Lonston][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) @@ -215,4 +144,3 @@ via: http://www.tecmint.com/create-raid0-in-linux/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 958d434bed7380f898c012761a351780b273d9bd Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Sun, 2 Aug 2015 16:18:07 +0800 Subject: [PATCH 060/207] =?UTF-8?q?Create=20Creating=20Software=20RAID0=20?= =?UTF-8?q?(Stripe)=20on=20=E2=80=98Two=20Devices=E2=80=99=20Using=20?= =?UTF-8?q?=E2=80=98mdadm=E2=80=99=20Tool=20in=20Linux=20=E2=80=93=20Part?= =?UTF-8?q?=202?= MIME-Version: 1.0 
Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...evices’ Using ‘mdadm’ Tool in Linux – Part 2 | 218 ++++++++++++++++++ 1 file changed, 218 insertions(+) create mode 100644 translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 diff --git a/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 b/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 new file mode 100644 index 0000000000..9feba99609 --- /dev/null +++ b/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 @@ -0,0 +1,218 @@ +在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 ‘两个设备’ 上 - 第2部分 +================================================================================ +RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 + +创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 + +![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) + +在 Linux 中创建 RAID0 + +使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 + +#### 在 RAID 0 中条带是什么 #### + +条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 + +- RAID 0 性能较高。 +- 在 RAID 0 上,空间零浪费。 +- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 +- 写和读性能得以提高。 + +#### 要求 #### + +创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 + +在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 + +如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 + +- [Introduction to RAID and RAID Concepts][1] + +**我的服务器设置** + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.225 + Two Disks : 20 GB each + +这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 + +### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### + +1.在 Linux 上设置 RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 + + # yum clean all && yum update + # yum install mdadm -y + +![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) + +安装 mdadm 工具 + +### 第2步:检测并连接两个 20GB 的硬盘 ### + +2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 + + # ls -l /dev | grep sd + +![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) + +检查硬盘 + +3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 + + # mdadm --examine /dev/sd[b-c] + +![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) + +检查 RAID 设备 + +从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 + +### 第3步:创建 RAID 分区 ### + +4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 + + # fdisk /dev/sdb + +请按照以下说明创建分区。 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 ‘P’ 来打印创建好的分区。 + +![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) + +创建分区 + +请按照以下说明将分区创建为 Linux 的 RAID 类型。 + +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 去修改分区。 +- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) + +在 Linux 上创建 RAID 
分区 + +**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 + +5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 + + # mdadm --examine /dev/sd[b-c] + # mdadm --examine /dev/sd[b-c]1 + +![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) + +验证 RAID 分区 + +### 第4步:创建 RAID md 设备 ### + +6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 + + # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 + # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 + +- -C – create +- -l – level +- -n – No of raid-devices + +7.一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 + + # cat /proc/mdstat + +![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) + +查看 RAID 级别 + + # mdadm -E /dev/sd[b-c]1 + +![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) + +查看 RAID 设备 + + # mdadm --detail /dev/md0 + +![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) + +查看 RAID 阵列 + +### 第5步:挂载 RAID 设备到文件系统 ### + +8.将 RAID 设备 /dev/md0 创建为 ext4 文件系统并挂载到 /mnt/raid0 下。 + + # mkfs.ext4 /dev/md0 + +![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) + +创建 ext4 文件系统 + +9. ext4 文件系统为 RAID 设备创建好后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 + + # mkdir /mnt/raid0 + # mount /dev/md0 /mnt/raid0/ + +10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 + + # df -h + +11.接下来,创建一个名为 ‘tecmint.txt’ 的文件挂载到 /mnt/raid0 下,为创建的文件添加一些内容,并查看文件和目录的内容。 + + # touch /mnt/raid0/tecmint.txt + # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt + # cat /mnt/raid0/tecmint.txt + # ls -l /mnt/raid0/ + +![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) + +验证挂载的设备 + +12.一旦你验证挂载点后,同时将它添加到 /etc/fstab 文件中。 + + # vim /etc/fstab + +添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 + + /dev/md0 /mnt/raid0 ext4 deaults 0 0 + +![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) + +添加设备到 fstab 文件中 + +13.使用 mount ‘-a‘ 来检查 fstab 的条目是否有误。 + + # mount -av + +![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) + +检查 fstab 文件是否有误 + +### 第6步:保存 RAID 配置 ### + +14.最后,保存 RAID 配置到一个文件中,以供将来使用。同样,我们使用 ‘mdadm’ 命令带有 ‘-s‘ (scan) 和 ‘-v‘ (verbose) 选项,如图所示。 + + # mdadm -E -s -v >> /etc/mdadm.conf + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + # cat /etc/mdadm.conf + +![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) + +保存 RAID 配置 + +就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID0 级别。在接下来的文章中,我们将看到如何设置 RAID5。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid0-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 6a03d892dc65eba50a5390d41dc88241690b109b Mon Sep 17 00:00:00 2001 From: XLCYun Date: Sun, 2 Aug 2015 19:39:53 +0800 Subject: [PATCH 061/207] Delete 20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md --- ...t Right & Wrong - Page 1 - Introduction.md | 55 ------------------- 1 file changed, 55 deletions(-) delete mode 100644 sources/tech/20150716 A Week With GNOME As My Linux 
Desktop--What They Get Right & Wrong - Page 1 - Introduction.md diff --git a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md b/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md deleted file mode 100644 index 43735170c3..0000000000 --- a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md +++ /dev/null @@ -1,55 +0,0 @@ -Translating by XLCYun. -A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 1 - Introduction -================================================================================ -*Author's Note: If by some miracle you managed to click this article without reading the title then I want to re-iterate something... This is an editorial. These are my opinions. They are not representative of Phoronix, or Michael, these are my own thoughts.* - -Additionally, yes... This is quite possibly a flame-bait article. I hope the community is better than that, because I do want to start a discussion and give feedback to both the KDE and Gnome communities. For that reason when I point out, what I see as, a flaw I will try to be specific and direct so that any discussion can be equally specific and direct. For the record: The alternative title for this article was "Death By A Thousand [Paper Cuts][1]". - -Now, with that out of the way... Onto the article. - -![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920) - -When I sent the [Fedora 22 KDE Review][2] off to Michael I did it with a bit of a bad taste in my mouth. It wasn't because I didn't like KDE, or hadn't been enjoying Fedora, far from it. In fact, I started to transition my T450s over to Arch Linux but quickly decided against that, as I enjoyed the level of convenience that Fedora brings to me for many things. - -The reason I had a bad taste in my mouth was because the Fedora developers put a lot of time and effort into their "Workstation" product and I wasn't seeing any of it. I wasn't using Fedora the way the main developers had intended it to be used and therefore wasn't getting the "Fedora Experience." It felt like someone reviewing Ubuntu by using Kubuntu, using a Hackintosh to review OS X, or reviewing Gentoo by using Sabayon. A lot of readers in the forums bash on Michael for reviewing distributions in their default configurations-- myself included. While I still do believe that reviews should be done under 'real-world' configurations, I do see the value in reviewing something in the condition it was given to you-- for better or worse. - -It was with that attitude in mind that I decided to take a dip in the Gnome pool. - -I do, however, need to add one more disclaimer... I am looking at KDE and Gnome as they are packaged in Fedora. OpenSUSE, Kubuntu, Arch, etc, might all have different implementations of each desktop that will change whether my specific 'pain points' are relevant to your distribution. Furthermore, despite the title, this is going to be a VERY KDE heavy article. I called the article what I did because it was actually USING Gnome that made me realize how many "paper cuts" KDE actually has. - -### Login Screen ### - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920) - -I normally don't mind Distributions shipping distro-specific themes, because most of them make the desktop look nicer. I finally found my exception. 
- -First impression's count for a lot, right? Well, GDM definitely gets this one right. The login screen is incredibly clean with consistent design language through every single part of it. The use of common-language icons instead of text boxes helps in that regard. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920) - -That is not to say that the Fedora 22 KDE login screen-- now SDDM rather than KDM-- looks 'bad' per say but its definitely more jarring. - -Where's the fault? The top bar. Look at the Gnome screenshot-- you select a user and you get a tiny little gear simple for selecting what session you want to log into. The design is clean, it gets out of your way, you could honestly miss it completely if you weren't paying attention. Now look at the blue KDE screenshot, the bar doesn't look it was even rendered using the same widgets, and its entire placement feels like an after thought of "Well shit, we need to throw this option somewhere..." - -The same can be said for the Reboot and Shutdown options in the top right. Why not just a power button that creates a drop down menu that has a drop down for Reboot, Shutdown, Suspend? Having the buttons be different colors than the background certainly makes them stick out and be noticeable... but I don't think in a good way. Again, they feel like an after thought. - -GDM is also far more useful from a practical standpoint, look again along the top row. The time is listed, there's a volume control so that if you are trying to be quiet you can mute all sounds before you even login, there's an accessibility button for things like high contrast, zooming, test to speech, etc, all available via simple toggle buttons. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920) - -Swap it to upstream's Breeze theme and... suddenly most of my complaints are fixed. Common-language icons, everything is in the center of the screen, but the less important stuff is off to the sides. This creates a nice harmony between the top and bottom of the screen since they are equally empty. You still have a text box for the session switcher, but I can forgive that since the power buttons are now common language icons. Current time is available which is a nice touch, as is a battery life indicator. Sure gnome still has a few nice additions, such as the volume applet and the accessibility buttons, but Breeze is a step up from Fedora's KDE theme. - -Go to Windows (pre-Windows 8 & 10...) or OS X and you will see similar things – very clean, get-out-of-your-way lock screens and login screens that are devoid of text boxes or other widgets that distract the eye. It's a design that works and that is non-distracting. Fedora... Ship Breeze by default. VDG got the design of the Breeze theme right. Don't mess it up. 
- --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1 - -作者:Eric Griffith -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts -[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1 From b2d7ab2ba05d746334150505746e8c43bbe4f3a8 Mon Sep 17 00:00:00 2001 From: XLCYun Date: Sun, 2 Aug 2015 19:40:51 +0800 Subject: [PATCH 062/207] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit XLCYun 翻译完成 --- ...t Right & Wrong - Page 1 - Introduction.md | 55 +++++++++++++++++++ 1 file changed, 55 insertions(+) create mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md new file mode 100644 index 0000000000..de47f0864e --- /dev/null +++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md @@ -0,0 +1,55 @@ +将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第一节 - 简介 +================================================================================ +*作者声明: 如果你是因为某种神迹而在没看标题的情况下点开了这篇文章,那么我想再重申一些东西...这是一篇评论文章。文中的观点都是我自己的,不代表Phoronix和Michael的观点。它们完全是我自己的想法。 + +另外,没错……这可能是一篇引战的文章。我希望社团成员们更沉稳一些,因为我确实想在KDE和Gnome的社团上发起讨论,反馈。因此当我想指出——我所看到的——一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明:本文另一可选标题为“被[剪纸][1]千刀万剐”(原文剪纸一词为papercuts, 指易修复而烦人的漏洞,译者注)。 + +现在,重申完毕……文章开始。 + +![](http://www.phoronix.net/image.php?id=fedora-22-fan&image=fedora_22_good1_show&w=1920) + +当我把[《评价Fedora 22 KDE》][2]一文发给Michael时,感觉很不是滋味。不是因为我不喜欢KDE,或者不享受Fedora,远非如此。事实上,我刚开始想把我的T450s的系统换为Arch Linux时,马上又决定放弃了,因为我很享受fedora在很多方面所带来的便捷性。 + +我感觉很不是滋味的原因是Fedora的开发者花费了大量的时间和精力在他们的“工作站”产品上,但是我却一点也没看到。在使用Fedora时,我采用的并非那些主要开发者希望用户采用的那种使用方式,因此我也就体验不到所谓的“Fedora体验”。它感觉就像一个人评价Ubuntu时用的却是Kubuntu,评价OS X时用的却是Hackintosh,或者评价Gentoo时用的却是Sabayon。根据大量Michael论坛的读者的说法,它们在评价各种发行版时使用的都是默认设置的发行版——我也不例外。但是我还是认为这些评价应该在“真实”配置下完成,当然我也知道在给定的情况下评论某些东西也的确是有价值的——无论是好是坏。 + +正是在怀着这种态度的情况下,我决定到Gnome这个水坑里来泡泡澡。 + +但是,我还要在此多加一个声明……我在这里所看到的KDE和Gnome都是打包在Fedora中的。OpenSUSE, Kubuntu, Arch等发行版的各个桌面可能有不同的实现方法,使得我这里所说的具体的“痛处”跟你所用的发行版有所不同。还有,虽然用了这个标题,但这篇文章将会是一篇很沉重的非常“KDE”的文章。之所以这样称呼这篇文章,是因为我在使用了Gnome之后,才知道KDE的“剪纸”到底有多多。 + +### 登录界面 ### + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login1_show&w=1920) + +我一般情况下都不会介意发行版装载它们自己的特别主题,因为一般情况下桌面看起来会更好看。可我今天可算是找到了一个例外。 + +第一印象很重要,对吧?那么,GDM(Gnome Display Manage:Gnome显示管理器,译者注,下同。)决对干得漂亮。它的登录界面看起来极度简洁,每一部分都应用了一致的设计风格。使用通用图标而不是输入框为它的简洁加了分。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login2_show&w=1920) + +这并不是说Fedora 22 KDE——现在已经是SDDM而不是KDM了——的登录界面不好看,但是看起来决对没有它这样和谐。 + +问题到底出来在哪?顶部栏。看看Gnome的截图——你选择一个用户,然后用一个很小的齿轮简单地选择想登入哪个会话。设计很简洁,它不挡着你的道儿,实话讲,如果你没注意的话可能完全会看不到它。现在看看那蓝色( blue,有忧郁之意,一语双关,译者注)的KDE截图,顶部栏看起来甚至不像是用同一个工具渲染出来的,它的整个位置的安排好像是某人想着:“哎哟妈呀,我们需要把这个选项扔在哪个地方……”之后决定下来的。 + +对于右上角的重启和关机选项也一样。为什么不单单用一个电源按钮,点击后会下拉出一个菜单,里面包括重启,关机,挂起的功能?按钮的颜色跟背景色不同肯定会让它更加突兀和显眼……但我可不觉得这样子有多好。同样,这看起来可真像“苦思”后的决定。 + 
+从实用观点来看,GDM还要远远实用的多,再看看顶部一栏。时间被列了出来,还有一个音量控制按钮,如果你想保持周围安静,你甚至可以在登录前设置静音,还有一个可用的按钮来实现高对比度,缩放,语音转文字等功能,所有可用的功能通过简单的一个开关按钮就能得到。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920) + +切换到上流的Breeve主题……突然间,我抱怨的大部分问题都被完善了。通用图标,所有东西都放在了屏幕中央,但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白,在中间也就酝酿出了一种美好的和谐。还是有一个输入框来切换会话,但既然电源按钮被做成了通用图标,那么这点还算可以原谅。当然gnome还是有一些很好的附加物,例如音量小程序和可访问按钮,但Breeze总归是Fedora的KDE主题的一个进步。 + +到Windows(Windows 8和10之前)或者OS X中去,你会看到类似的东西——非常简洁的,“不挡你道”的锁屏与登录界面,它们都没有输入框或者其它分散视觉的小工具。这是一种有效的不分散人注意力的设计。Fedora……默认装有Breeze。VDG在Breeze主题设计上干得不错。可别糟蹋了它。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=1 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://wiki.ubuntu.com/One%20Hundred%20Papercuts +[2]:http://www.phoronix.com/scan.php?page=article&item=fedora-22-kde&num=1 +[3]:https://launchpad.net/hundredpapercuts From 5426b4cc3c2b5f0acb6550d8a6ae1cb0686537dc Mon Sep 17 00:00:00 2001 From: martin qi Date: Sun, 2 Aug 2015 21:04:30 +0800 Subject: [PATCH 063/207] Delete 20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md --- ...ring and filtering in Quagga BGP router.md | 260 ------------------ 1 file changed, 260 deletions(-) delete mode 100644 sources/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md diff --git a/sources/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md b/sources/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md deleted file mode 100644 index 033e03be35..0000000000 --- a/sources/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md +++ /dev/null @@ -1,260 +0,0 @@ -translating... - -How to set up IPv6 BGP peering and filtering in Quagga BGP router -================================================================================ -In the previous tutorials, we demonstrated how we can set up a [full-fledged BGP router][1] and configure [prefix filtering][2] with Quagga. In this tutorial, we are going to show you how we can set up IPv6 BGP peering and advertise IPv6 prefixes through BGP. We will also demonstrate how we can filter IPv6 prefixes advertised or received by using prefix-list and route-map features. - -### Topology ### - -For this tutorial, we will be considering the following topology. - -![](https://farm9.staticflickr.com/8599/15944659374_1c65852df2_c.jpg) - -Service providers A and B want to establish an IPv6 BGP peering between them. Their IPv6 and AS information is as follows. - -- Peering IP block: 2001:DB8:3::/64 -- Service provider A: AS 100, 2001:DB8:1::/48 -- Service provider B: AS 200, 2001:DB8:2::/48 - -### Installing Quagga on CentOS/RHEL ### - -If Quagga has not already been installed, we can install it using yum. - - # yum install quagga - -On CentOS/RHEL 7, the default SELinux policy, which prevents /usr/sbin/zebra from writing to its configuration directory, can interfere with the setup procedure we are going to describe. Thus we want to disable this policy as follows. Skip this step if you are using CentOS/RHEL 6. - - # setsebool -P zebra_write_config 1 - -### Creating Configuration Files ### - -After installation, we start the configuration process by creating the zebra/bgpd configuration files. 
- - # cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf - # cp /usr/share/doc/quagga-XXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf - -Next, enable auto-start of these services. - -**On CentOS/RHEL 6:** - - # service zebra start; service bgpd start - # chkconfig zebra on; chkconfig bgpd on - -**On CentOS/RHEL 7:** - - # systemctl start zebra; systemctl start bgpd - # systemctl enable zebra; systmectl enable bgpd - -Quagga provides a built-in shell called vtysh, whose interface is similar to those of major router vendors such as Cisco or Juniper. Launch vtysh command shell: - - # vtysh - -The prompt will be changed to: - - router-a# - -or - - router-b# - -In the rest of the tutorials, these prompts indicate that you are inside vtysh shell of either router. - -### Specifying Log File for Zebra ### - -Let's configure the log file for Zebra, which will be helpful for debugging. - -First, enter the global configuration mode by typing: - - router-a# configure terminal - -The prompt will be changed to: - - router-a(config)# - -Now specify log file location. Then exit the configuration mode: - - router-a(config)# log file /var/log/quagga/quagga.log - router-a(config)# exit - -Save configuration permanently by: - - router-a# write - -### Configuring Interface IP Addresses ### - -Let's now configure the IP addresses for Quagga's physical interfaces. - -First, we check the available interfaces from inside vtysh. - - router-a# show interfaces - ----------- - - Interface eth0 is up, line protocol detection is disabled - ## OUTPUT TRUNCATED ### - Interface eth1 is up, line protocol detection is disabled - ## OUTPUT TRUNCATED ## - -Now we assign necessary IPv6 addresses. - - router-a# conf terminal - router-a(config)# interface eth0 - router-a(config-if)# ipv6 address 2001:db8:3::1/64 - router-a(config-if)# interface eth1 - router-a(config-if)# ipv6 address 2001:db8:1::1/64 - -We use the same method to assign IPv6 addresses to router-B. I am summarizing the configuration below. - - router-b# show running-config - ----------- - - interface eth0 - ipv6 address 2001:db8:3::2/64 - - interface eth1 - ipv6 address 2001:db8:2::1/64 - -Since the eth0 interface of both routers are in the same subnet, i.e., 2001:DB8:3::/64, you should be able to ping from one router to another. Make sure that you can ping successfully before moving on to the next step. - - router-a# ping ipv6 2001:db8:3::2 - ----------- - - PING 2001:db8:3::2(2001:db8:3::2) 56 data bytes - 64 bytes from 2001:db8:3::2: icmp_seq=1 ttl=64 time=3.20 ms - 64 bytes from 2001:db8:3::2: icmp_seq=2 ttl=64 time=1.05 ms - -### Phase 1: IPv6 BGP Peering ### - -In this section, we will configure IPv6 BGP between the two routers. We start by specifying BGP neighbors in router-A. - - router-a# conf t - router-a(config)# router bgp 100 - router-a(config-router)# no auto-summary - router-a(config-router)# no synchronization - router-a(config-router)# neighbor 2001:DB8:3::2 remote-as 200 - -Next, we define the address family for IPv6. Within the address family section, we will define the network to be advertised, and activate the neighbors as well. - - router-a(config-router)# address-family ipv6 - router-a(config-router-af)# network 2001:DB8:1::/48 - router-a(config-router-af)# neighbor 2001:DB8:3::2 activate - -We will go through the same configuration for router-B. I'm providing the summary of the configuration. 
- - router-b# conf t - router-b(config)# router bgp 200 - router-b(config-router)# no auto-summary - router-b(config-router)# no synchronization - router-b(config-router)# neighbor 2001:DB8:3::1 remote-as 100 - router-b(config-router)# address-family ipv6 - router-b(config-router-af)# network 2001:DB8:2::/48 - router-b(config-router-af)# neighbor 2001:DB8:3::1 activate - -If all goes well, an IPv6 BGP session should be up between the two routers. If not already done, please make sure that necessary ports (TCP 179) are [open in your firewall][3]. - -We can check IPv6 BGP session information using the following commands. - -**For BGP summary:** - - router-a# show bgp ipv6 unicast summary - -**For BGP advertised routes:** - - router-a# show bgp ipv6 neighbors advertised-routes - -**For BGP received routes:** - - router-a# show bgp ipv6 neighbors routes - -![](https://farm8.staticflickr.com/7317/16379555088_6e29cb6884_b.jpg) - -### Phase 2: Filtering IPv6 Prefixes ### - -As we can see from the above output, the routers are advertising their full /48 IPv6 prefix. For demonstration purposes, we will consider the following requirements. - -- Router-B will advertise one /64 prefix, one /56 prefix, as well as one full /48 prefix. -- Router-A will accept any IPv6 prefix owned by service provider B, which has a netmask length between /56 and /64. - -We are going to filter the prefix as required, using prefix-list and route-map in router-A. - -![](https://farm8.staticflickr.com/7367/16381297417_6549218289_c.jpg) - -#### Modifying prefix advertisement for Router-B #### - -Currently, router-B is advertising only one /48 prefix. We will modify router-B's BGP configuration so that it advertises additional /56 and /64 prefixes as well. - - router-b# conf t - router-b(config)# router bgp 200 - router-b(config-router)# address-family ipv6 - router-b(config-router-af)# network 2001:DB8:2::/56 - router-b(config-router-af)# network 2001:DB8:2::/64 - -We will verify that all prefixes are received at router-A. - -![](https://farm9.staticflickr.com/8598/16379761980_7c083ae977_b.jpg) - -Great! As we are receiving all prefixes in router-A, we will move forward and create prefix-list and route-map entries to filter these prefixes. - -#### Creating Prefix-List #### - -As described in the [previous tutorial][4], prefix-list is a mechanism that is used to match an IP address prefix with a subnet length. Once a matched prefix is found, we can apply filtering or other actions to the matched prefix. To meet our requirements, we will go ahead and create a necessary prefix-list entry in router-A. - - router-a# conf t - router-a(config)# ipv6 prefix-list FILTER-IPV6-PRFX permit 2001:DB8:2::/56 le 64 - -The above commands will create a prefix-list entry named 'FILTER-IPV6-PRFX' which will match any prefix in the 2001:DB8:2:: pool with a netmask between 56 and 64. - -#### Creating and Applying Route-Map #### - -Now that the prefix-list entry is created, we will create a corresponding route-map rule which uses the prefix-list entry. - - router-a# conf t - router-a(config)# route-map FILTER-IPV6-RMAP permit 10 - router-a(config-route-map)# match ipv6 address prefix-list FILTER-IPV6-PRFX - -The above commands will create a route-map rule named 'FILTER-IPV6-RMAP'. This rule will permit IPv6 addresses matched by the prefix-list 'FILTER-IPV6-PRFX' that we have created earlier. - -Remember that a route-map rule is only effective when it is applied to a neighbor or an interface in a certain direction. 
We will apply the route-map in the BGP neighbor configuration. As the filter is meant for inbound prefixes, we apply the route-map in the inbound direction. - - router-a# conf t - router-a(config)# router bgp 100 - router-a(config-router)# address-family ipv6 - router-a(config-router-af)# neighbor 2001:DB8:3::2 route-map FILTER-IPV6-RMAP in - -Now when we check the routes received at router-A, we should see only two prefixes that are allowed. - -![](https://farm8.staticflickr.com/7337/16379762020_ec2dc39b31_c.jpg) - -**Note**: You may need to reset the BGP session for the route-map to take effect. - -All IPv6 BGP sessions can be restarted using the following command: - - router-a# clear bgp ipv6 * - -I am summarizing the configuration of both routers so you get a clear picture at a glance. - -![](https://farm9.staticflickr.com/8672/16567240165_eee4398dc8_c.jpg) - -### Summary ### - -To sum up, this tutorial focused on how to set up BGP peering and filtering using IPv6. We showed how to advertise IPv6 prefixes to a neighboring BGP router, and how to filter the prefixes advertised or received are advertised. Note that the process described in this tutorial may affect production networks of a service provider, so please use caution. - -Hope this helps. - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/ipv6-bgp-peering-filtering-quagga-bgp-router.html - -作者:[Sarmed Rahman][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/sarmed -[1]:http://xmodulo.com/centos-bgp-router-quagga.html -[2]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html -[3]:http://ask.xmodulo.com/open-port-firewall-centos-rhel.html -[4]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html From 28454501d7beafe8c0a622740e71ce7b160df929 Mon Sep 17 00:00:00 2001 From: martin qi Date: Sun, 2 Aug 2015 21:05:08 +0800 Subject: [PATCH 064/207] Create 20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md --- ...ring and filtering in Quagga BGP router.md | 258 ++++++++++++++++++ 1 file changed, 258 insertions(+) create mode 100644 translated/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md diff --git a/translated/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md b/translated/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md new file mode 100644 index 0000000000..23e2314576 --- /dev/null +++ b/translated/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md @@ -0,0 +1,258 @@ +如何设置在Quagga BGP路由器中设置IPv6的BGP对等体和过滤 +================================================================================ +在之前的教程中,我们演示了如何使用Quagga建立一个[完备的BGP路由器][1]和配置[前缀过滤][2]。在本教程中,我们会向你演示如何创建IPv6 BGP对等体并通过BGP通告IPv6前缀。同时我们也将演示如何使用前缀列表和路由映射特性来过滤通告的或者获取到的IPv6前缀。 + +### 拓扑 ### + +教程中,我们主要参考如下拓扑。 + +![](https://farm9.staticflickr.com/8599/15944659374_1c65852df2_c.jpg) + +服务供应商A和B希望在他们之间建立一个IPv6的BGP对等体。他们的IPv6地址和AS信息如下所示。 + +- 对等体IP块: 2001:DB8:3::/64 +- 供应商A: AS 100, 2001:DB8:1::/48 +- 供应商B: AS 200, 2001:DB8:2::/48 + +### CentOS/RHEL安装Quagga ### + +如果Quagga还没有安装,我们可以先使用yum安装。 + + # yum install quagga + +在CentOS/RHEL 7,SELinux策略会默认的阻止对于/usr/sbin/zebra配置目录的写操作,这会对我们将要介绍的安装操作有所影响。因此我们需要像下面这样关闭这个策略。如果你使用的是CentOS/RHEL 6可以跳过这一步。 + + # setsebool -P zebra_write_config 1 + +### 创建配置文件 ### + 
+在安装过后,我们先创建配置文件zebra/bgpd作为配置流程的开始。 + + # cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf + # cp /usr/share/doc/quagga-XXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf + +然后,允许这些服务开机自启。 + +**在 CentOS/RHEL 6:** + + # service zebra start; service bgpd start + # chkconfig zebra on; chkconfig bgpd on + +**在 CentOS/RHEL 7:** + + # systemctl start zebra; systemctl start bgpd + # systemctl enable zebra; systmectl enable bgpd + +Quagga内部提供一个叫作vtysh的shell,其界面与那些主流路由厂商Cisco或Juniper十分相似。启动vtysh shell命令行: + + # vtysh + +提示将改为: + + router-a# + +或 + + router-b# + +在教程的其余部分,这个提示可以表明你正身处在哪个路由的vtysh shell中。 + +### 为Zebra指定日志文件 ### + +来为Zebra配置日志文件,这会有助于调试。 + +首先,进入全局配置模式通过输入: + + router-a# configure terminal + +提示将变更成: + + router-a(config)# + +指定日志文件的位置。然后退出配置模式: + + router-a(config)# log file /var/log/quagga/quagga.log + router-a(config)# exit + +保存配置通过: + + router-a# write + +### 配置接口IP地址 ### + +现在,让我们为Quagga的物理接口配置IP地址。 + +首先,查看一下vtysh中现有的接口。 + + router-a# show interfaces + +---------- + + Interface eth0 is up, line protocol detection is disabled + ## OUTPUT TRUNCATED ### + Interface eth1 is up, line protocol detection is disabled + ## OUTPUT TRUNCATED ## + +现在我们配置IPv6地址。 + + router-a# conf terminal + router-a(config)# interface eth0 + router-a(config-if)# ipv6 address 2001:db8:3::1/64 + router-a(config-if)# interface eth1 + router-a(config-if)# ipv6 address 2001:db8:1::1/64 + +在路由B上采用同样的方式分配IPv6地址。我将配置汇总成如下。 + + router-b# show running-config + +---------- + + interface eth0 + ipv6 address 2001:db8:3::2/64 + + interface eth1 + ipv6 address 2001:db8:2::1/64 + +由于两台路由的eth0端口同属一个子网,即2001:DB8:3::/64,你应该可以相互ping通。在保证ping通的情况下,我们开始下面的内容。 + + router-a# ping ipv6 2001:db8:3::2 + +---------- + + PING 2001:db8:3::2(2001:db8:3::2) 56 data bytes + 64 bytes from 2001:db8:3::2: icmp_seq=1 ttl=64 time=3.20 ms + 64 bytes from 2001:db8:3::2: icmp_seq=2 ttl=64 time=1.05 ms + +### 步骤 1: IPv6 BGP 对等体 ### + +本段,我们将在两个路由之间配置IPv6 BGP。首先,我们在路由A上指定BGP邻居。 + + router-a# conf t + router-a(config)# router bgp 100 + router-a(config-router)# no auto-summary + router-a(config-router)# no synchronization + router-a(config-router)# neighbor 2001:DB8:3::2 remote-as 200 + +然后,我们定义IPv6的地址族。在地址族中,我们需要定义要通告的网段,并激活邻居。 + + router-a(config-router)# address-family ipv6 + router-a(config-router-af)# network 2001:DB8:1::/48 + router-a(config-router-af)# neighbor 2001:DB8:3::2 activate + +我们在路由B上也实施相同的配置。这里提供我归总后的配置。 + + router-b# conf t + router-b(config)# router bgp 200 + router-b(config-router)# no auto-summary + router-b(config-router)# no synchronization + router-b(config-router)# neighbor 2001:DB8:3::1 remote-as 100 + router-b(config-router)# address-family ipv6 + router-b(config-router-af)# network 2001:DB8:2::/48 + router-b(config-router-af)# neighbor 2001:DB8:3::1 activate + +如果一切顺利,在路由间将会形成一个IPv6 BGP会话。如果失败了,请确保[在防火墙中开启了][3]必要的端口(TCP 179)。 + +我们使用以下命令来确认IPv6 BGP会话的信息。 + +**查看BGP汇总:** + + router-a# show bgp ipv6 unicast summary + +**查看BGP通告的路由:** + + router-a# show bgp ipv6 neighbors advertised-routes + +**查看BGP获得的路由:** + + router-a# show bgp ipv6 neighbors routes + +![](https://farm8.staticflickr.com/7317/16379555088_6e29cb6884_b.jpg) + +### 步骤 2: 过滤IPv6前缀 ### + +正如我们在上面看到的输出信息那样,路由间通告了他们完整的/48 IPv6前缀。出于演示的目的,我们会考虑以下要求。 + +- Router-B将通告一个/64前缀,一个/56前缀,和一个完整的/48前缀. 
+- Router-A将接受任由B提供的何形式的IPv6前缀,其中包含有/56和/64之间的网络掩码长度。 + +我们将根据需要过滤的前缀,来使用路由器的前缀列表和路由映射。 + +![](https://farm8.staticflickr.com/7367/16381297417_6549218289_c.jpg) + +#### 为路由B修改通告的前缀 #### + +目前,路由B只通告一个/48前缀。我们修改路由B的BGP配置使它可以通告额外的/56和/64前缀。 + + router-b# conf t + router-b(config)# router bgp 200 + router-b(config-router)# address-family ipv6 + router-b(config-router-af)# network 2001:DB8:2::/56 + router-b(config-router-af)# network 2001:DB8:2::/64 + +我们将路由A上验证了所有的前缀都获得到了。 + +![](https://farm9.staticflickr.com/8598/16379761980_7c083ae977_b.jpg) + +太好了!我们在路由A上收到了所有的前缀,那么我们可以更进一步创建前缀列表和路由映射来过滤这些前缀。 + +#### 创建前缀列表 #### + +就像在[上则教程中][4]描述的那样,前缀列表是一种机制用来匹配带有子网长度的IP地址前缀。按照我们指定的需求,我们需要在路由A的前缀列表中创建一则必要的条目。 + + router-a# conf t + router-a(config)# ipv6 prefix-list FILTER-IPV6-PRFX permit 2001:DB8:2::/56 le 64 + +以上的命令会创建一个名为'FILTER-IPV6-PRFX'的前缀列表,用以匹配任何2001:DB8:2::池内掩码在56和64之间的所有前缀。 + +#### 创建并应用路由映射 #### + +现在已经在前缀列表中创建了条目,我们也应该相应的创建一条使用此条目的路由映射规则了。 + + router-a# conf t + router-a(config)# route-map FILTER-IPV6-RMAP permit 10 + router-a(config-route-map)# match ipv6 address prefix-list FILTER-IPV6-PRFX + +以上的命令会创建一条名为'FILTER-IPV6-RMAP'的路由映射规则。这则规则将会允许与之前在前缀列表中创建'FILTER-IPV6-PRFX'所匹配的IPv6 + +要记住路由映射规则只有在应用在邻居或者端口的指定方向时才有效。我们将把路由映射应用到BGP的邻居配置中。我们将路由映射应用于入方向,作为进入路由端的前缀过滤器。 + + router-a# conf t + router-a(config)# router bgp 100 + router-a(config-router)# address-family ipv6 + router-a(config-router-af)# neighbor 2001:DB8:3::2 route-map FILTER-IPV6-RMAP in + +现在我们在路由A上再查看一边获得到的路由,我们应该只能看见两个被允许的前缀了。 + +![](https://farm8.staticflickr.com/7337/16379762020_ec2dc39b31_c.jpg) + +**注意**: 你可能需要重置BGP会话来刷新路由表。 + +所有IPv6的BGP会话可以使用以下的命令重启: + + router-a# clear bgp ipv6 * + +我汇总了两个路由的配置,并做成了一张清晰的图片以便阅读。 + +![](https://farm9.staticflickr.com/8672/16567240165_eee4398dc8_c.jpg) + +### 总结 ### + +总结一下,这篇教程重点在于如何创建BGP对等体和IPv6的过滤。我们演示了如何向邻居BGP路由通告IPv6前缀,和如何过滤通告前缀或获得的通告。需要注意,本教程使用的过程可能会对网络供应商的网络运作有所影响,请谨慎参考。 + +希望这些对你有用。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/ipv6-bgp-peering-filtering-quagga-bgp-router.html + +作者:[Sarmed Rahman][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/sarmed +[1]:http://xmodulo.com/centos-bgp-router-quagga.html +[2]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html +[3]:http://ask.xmodulo.com/open-port-firewall-centos-rhel.html +[4]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html From e01d81f5945506d1b0098cdc625a8ab565442e2e Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 2 Aug 2015 22:06:53 +0800 Subject: [PATCH 065/207] PUB:20150722 How To Manage StartUp Applications In Ubuntu @FSSlc --- ...22 How To Manage StartUp Applications In Ubuntu.md | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) rename {translated/tech => published}/20150722 How To Manage StartUp Applications In Ubuntu.md (92%) diff --git a/translated/tech/20150722 How To Manage StartUp Applications In Ubuntu.md b/published/20150722 How To Manage StartUp Applications In Ubuntu.md similarity index 92% rename from translated/tech/20150722 How To Manage StartUp Applications In Ubuntu.md rename to published/20150722 How To Manage StartUp Applications In Ubuntu.md index 745a84860a..3494e90a61 100644 --- a/translated/tech/20150722 How To Manage StartUp Applications In Ubuntu.md +++ b/published/20150722 How To Manage StartUp Applications In Ubuntu.md @@ -6,17 +6,17 @@ 
每当你开机进入一个操作系统,一系列的应用将会自动启动。这些应用被称为‘开机启动应用’ 或‘开机启动程序’。随着时间的推移,当你在系统中安装了足够多的应用时,你将发现有太多的‘开机启动应用’在开机时自动地启动了,它们吃掉了很多的系统资源,并将你的系统拖慢。这可能会让你感觉卡顿,我想这种情况并不是你想要的。 -让 Ubuntu 变得更快的方法之一是对这些开机启动应用进行控制。 Ubuntu 为你提供了一个 GUI 工具来让你发现这些开机启动应用,然后完全禁止或延迟它们的启动,这样就可以不让每个应用在开机时同时运行。 +让 Ubuntu 变得更快的方法之一是对这些开机启动应用进行控制。 Ubuntu 为你提供了一个 GUI 工具来让你找到这些开机启动应用,然后完全禁止或延迟它们的启动,这样就可以不让每个应用在开机时同时运行。 在这篇文章中,我们将看到 **在 Ubuntu 中,如何控制开机启动应用,如何让一个应用在开机时启动以及如何发现隐藏的开机启动应用。**这里提供的指导对所有的 Ubuntu 版本均适用,例如 Ubuntu 12.04, Ubuntu 14.04 和 Ubuntu 15.04。 ### 在 Ubuntu 中管理开机启动应用 ### -默认情况下, Ubuntu 提供了一个`开机启动应用工具`来供你使用,你不必再进行安装。只需到 Unity 面板中就可以查找到该工具。 +默认情况下, Ubuntu 提供了一个`Startup Applications`工具来供你使用,你不必再进行安装。只需到 Unity 面板中就可以查找到该工具。 ![ubuntu 中的开机启动应用工具](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/startup_applications_Ubuntu.jpeg) -点击它来启动。下面是我的`开机启动应用`的样子: +点击它来启动。下面是我的`Startup Applications`的样子: ![在 Ubuntu 中查看开机启动程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Screenshot-from-2015-07-18-122550.png) @@ -84,7 +84,7 @@ 就这样,你将在下一次开机时看到这个程序会自动运行。这就是在 Ubuntu 中你能做的关于开机启动应用的所有事情。 -到现在为止,我们已经讨论在开机时可见的应用,但仍有更多的服务,守护进程和程序并不在`开机启动应用工具`中可见。下一节中,我们将看到如何在 Ubuntu 中查看这些隐藏的开机启动程序。 +到现在为止,我们已经讨论在开机时可见到的应用,但仍有更多的服务,守护进程和程序并不在`开机启动应用工具`中可见。下一节中,我们将看到如何在 Ubuntu 中查看这些隐藏的开机启动程序。 ### 在 Ubuntu 中查看隐藏的开机启动程序 ### @@ -97,13 +97,14 @@ ![在 Ubuntu 中查看隐藏的开机启动程序](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Hidden_startup_program_Ubuntu.jpg) 你可以像先前我们讨论的那样管理这些开机启动应用。我希望这篇教程可以帮助你在 Ubuntu 中控制开机启动程序。任何的问题或建议总是欢迎的。 + -------------------------------------------------------------------------------- via: http://itsfoss.com/manage-startup-applications-ubuntu/ 作者:[Abhishek][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 7f1ac1f1041cd8f0db2d5bee74dbb9b1151a5ccf Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 2 Aug 2015 22:33:58 +0800 Subject: [PATCH 066/207] PUB:20150625 How to Provision Swarm Clusters using Docker Machine @DongShuaike --- ...ion Swarm Clusters using Docker Machine.md | 41 ++++++++++--------- 1 file changed, 21 insertions(+), 20 deletions(-) rename {translated/tech => published}/20150625 How to Provision Swarm Clusters using Docker Machine.md (54%) diff --git a/translated/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md b/published/20150625 How to Provision Swarm Clusters using Docker Machine.md similarity index 54% rename from translated/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md rename to published/20150625 How to Provision Swarm Clusters using Docker Machine.md index 940c68b55d..a36284e6de 100644 --- a/translated/tech/20150625 How to Provision Swarm Clusters using Docker Machine.md +++ b/published/20150625 How to Provision Swarm Clusters using Docker Machine.md @@ -1,11 +1,14 @@ 如何使用Docker Machine部署Swarm集群 ================================================================================ -大家好,今天我们来研究一下如何使用Docker Machine部署Swarm集群。Docker Machine提供了独立的Docker API,所以任何与Docker守护进程进行交互的工具都可以使用Swarm来(透明地)扩增到多台主机上。Docker Machine可以用来在个人电脑、云端以及的数据中心里创建Docker主机。它为创建服务器,安装Docker以及根据用户设定配置Docker客户端提供了便捷化的解决方案。我们可以使用任何驱动来部署swarm集群,并且swarm集群将由于使用了TLS加密具有极好的安全性。 + +大家好,今天我们来研究一下如何使用Docker Machine部署Swarm集群。Docker Machine提供了标准的Docker API 支持,所以任何可以与Docker守护进程进行交互的工具都可以使用Swarm来(透明地)扩增到多台主机上。Docker Machine可以用来在个人电脑、云端以及的数据中心里创建Docker主机。它为创建服务器,安装Docker以及根据用户设定来配置Docker客户端提供了便捷化的解决方案。我们可以使用任何驱动来部署swarm集群,并且swarm集群将由于使用了TLS加密具有极好的安全性。 
下面是我提供的简便方法。 + ### 1. 安装Docker Machine ### -Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Github上下载最新版本的Docker Machine。我们使用curl命令来下载最先版本Docker Machine ie 0.2.0。 +Docker Machine 在各种Linux系统上都支持的很好。首先,我们需要从Github上下载最新版本的Docker Machine。我们使用curl命令来下载最先版本Docker Machine ie 0.2.0。 + 64位操作系统: # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine @@ -18,7 +21,7 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git # chmod +x /usr/local/bin/docker-machine -在做完上面的事情以后,我们必须确保docker-machine已经安装好。怎么检查呢?运行docker-machine -v指令,指令将会给出我们系统上所安装的docker-machine版本。 +在做完上面的事情以后,我们要确保docker-machine已经安装正确。怎么检查呢?运行`docker-machine -v`指令,该指令将会给出我们系统上所安装的docker-machine版本。 # docker-machine -v @@ -31,14 +34,15 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git ### 2. 创建Machine ### -在将Docker Machine安装到我们的设备上之后,我们需要使用Docker Machine创建一个machine。在这片文章中,我们会将其部署在Digital Ocean Platform上。所以我们将使用“digitalocean”作为它的Driver API,然后将docker swarm运行在其中。这个Droplet会被设置为Swarm主节点,我们还要创建另外一个Droplet,并将其设定为Swarm节点代理。 +在将Docker Machine安装到我们的设备上之后,我们需要使用Docker Machine创建一个machine。在这篇文章中,我们会将其部署在Digital Ocean Platform上。所以我们将使用“digitalocean”作为它的Driver API,然后将docker swarm运行在其中。这个Droplet会被设置为Swarm主控节点,我们还要创建另外一个Droplet,并将其设定为Swarm节点代理。 + 创建machine的命令如下: # docker-machine create --driver digitalocean --digitalocean-access-token linux-dev -**Note**: 假设我们要创建一个名为“linux-dev”的machine。是用户在Digital Ocean Cloud Platform的Digital Ocean控制面板中生成的密钥。为了获取这个密钥,我们需要登录我们的Digital Ocean控制面板,然后点击API选项,之后点击Generate New Token,起个名字,然后在Read和Write两个选项上打钩。之后我们将得到一个很长的十六进制密钥,这个就是了。用其替换上面那条命令中的API-Token字段。 +**备注**: 假设我们要创建一个名为“linux-dev”的machine。是用户在Digital Ocean Cloud Platform的Digital Ocean控制面板中生成的密钥。为了获取这个密钥,我们需要登录我们的Digital Ocean控制面板,然后点击API选项,之后点击Generate New Token,起个名字,然后在Read和Write两个选项上打钩。之后我们将得到一个很长的十六进制密钥,这个就是了。用其替换上面那条命令中的API-Token字段。 -现在,运行下面的指令,将Machine configuration装载进shell。 +现在,运行下面的指令,将Machine 的配置变量加载进shell里。 # eval "$(docker-machine env linux-dev)" @@ -48,7 +52,7 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git # docker-machine active linux-dev -现在,我们检查是否它(指machine)被标记为了 ACTIVE "*"。 +现在,我们检查它(指machine)是否被标记为了 ACTIVE "*"。 # docker-machine ls @@ -56,22 +60,21 @@ Docker Machine 在任何Linux系统上都被支持。首先,我们需要从Git ### 3. 运行Swarm Docker镜像 ### -现在,在我们创建完成了machine之后。我们需要将swarm docker镜像部署上去。这个machine将会运行这个docker镜像并且控制Swarm主节点和从节点。使用下面的指令运行镜像: +现在,在我们创建完成了machine之后。我们需要将swarm docker镜像部署上去。这个machine将会运行这个docker镜像,并且控制Swarm主控节点和从节点。使用下面的指令运行镜像: # docker run swarm create ![Docker Machine Swarm Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-create.png) -If you are trying to run swarm docker image using **32 bit Operating System** in the computer where Docker Machine is running, we'll need to SSH into the Droplet. 如果你想要在**32位操作系统**上运行swarm docker镜像。你需要SSH登录到Droplet当中。 # docker-machine ssh #docker run swarm create #exit -### 4. 创建Swarm主节点 ### +### 4. 创建Swarm主控节点 ### -在我们的swarm image已经运行在machine当中之后,我们将要创建一个Swarm主节点。使用下面的语句,添加一个主节点。(这里的感觉怪怪的,好像少翻译了很多东西,是我把Master翻译为主节点的原因吗?) +在我们的swarm image已经运行在machine当中之后,我们将要创建一个Swarm主控节点。使用下面的语句,添加一个主控节点。 # docker-machine create \ -d digitalocean \ @@ -83,9 +86,9 @@ If you are trying to run swarm docker image using **32 bit Operating System** in ![Docker Machine Swarm Master Create](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-master-create.png) -### 5. 创建Swarm结点群 ### +### 5. 
创建Swarm从节点 ### -现在,我们将要创建一个swarm结点,此结点将与Swarm主节点相连接。下面的指令将创建一个新的名为swarm-node的droplet,其与Swarm主节点相连。到此,我们就拥有了一个两节点的swarm集群了。 +现在,我们将要创建一个swarm从节点,此节点将与Swarm主控节点相连接。下面的指令将创建一个新的名为swarm-node的droplet,其与Swarm主控节点相连。到此,我们就拥有了一个两节点的swarm集群了。 # docker-machine create \ -d digitalocean \ @@ -96,21 +99,19 @@ If you are trying to run swarm docker image using **32 bit Operating System** in ![Docker Machine Swarm Nodes](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-swarm-nodes.png) -### 6. Connecting to the Swarm Master ### -### 6. 与Swarm主节点连接 ### +### 6. 与Swarm主控节点连接 ### -现在,我们连接Swarm主节点以便我们可以依照需求和配置文件在节点间部署Docker容器。运行下列命令将Swarm主节点的Machine配置文件加载到环境当中。 +现在,我们连接Swarm主控节点以便我们可以依照需求和配置文件在节点间部署Docker容器。运行下列命令将Swarm主控节点的Machine配置文件加载到环境当中。 # eval "$(docker-machine env --swarm swarm-master)" -然后,我们就可以跨结点地运行我们所需的容器了。在这里,我们还要检查一下是否一切正常。所以,运行**docker info**命令来检查Swarm集群的信息。 +然后,我们就可以跨节点地运行我们所需的容器了。在这里,我们还要检查一下是否一切正常。所以,运行**docker info**命令来检查Swarm集群的信息。 # docker info -### Conclusion ### ### 总结 ### -我们可以用Docker Machine轻而易举地创建Swarm集群。这种方法有非常高的效率,因为它极大地减少了系统管理员和用户的时间消耗。在这篇文章中,我们以Digital Ocean作为驱动,通过创建一个主节点和一个从节点成功地部署了集群。其他类似的应用还有VirtualBox,Google Cloud Computing,Amazon Web Service,Microsoft Azure等等。这些连接都是通过TLS进行加密的,具有很高的安全性。如果你有任何的疑问,建议,反馈,欢迎在下面的评论框中注明以便我们可以更好地提高文章的质量! +我们可以用Docker Machine轻而易举地创建Swarm集群。这种方法有非常高的效率,因为它极大地减少了系统管理员和用户的时间消耗。在这篇文章中,我们以Digital Ocean作为驱动,通过创建一个主控节点和一个从节点成功地部署了集群。其他类似的驱动还有VirtualBox,Google Cloud Computing,Amazon Web Service,Microsoft Azure等等。这些连接都是通过TLS进行加密的,具有很高的安全性。如果你有任何的疑问,建议,反馈,欢迎在下面的评论框中注明以便我们可以更好地提高文章的质量! -------------------------------------------------------------------------------- @@ -118,7 +119,7 @@ via: http://linoxide.com/linux-how-to/provision-swarm-clusters-using-docker-mach 作者:[Arun Pyasi][a] 译者:[DongShuaike](https://github.com/DongShuaike) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 9b61686c047dd14af43d13b1e593789c388b6aaf Mon Sep 17 00:00:00 2001 From: XLCYun Date: Mon, 3 Aug 2015 08:25:33 +0800 Subject: [PATCH 067/207] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=8E=9F=E6=96=87=20?= =?UTF-8?q?=20XLCYun?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ht & Wrong - Page 2 - The GNOME Desktop.md | 32 ------------------- 1 file changed, 32 deletions(-) delete mode 100644 sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md diff --git a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md b/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md deleted file mode 100644 index 1bf684313b..0000000000 --- a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md +++ /dev/null @@ -1,32 +0,0 @@ -Translating by XLCYun. -A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 2 - The GNOME Desktop -================================================================================ -### The Desktop ### - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920) - -I spent the first five days of my week logging into Gnome manually-- not turning on automatic login. 
On night of the fifth day I got annoyed with having to login by hand and so I went into the User Manager and turned on automatic login. The next time I logged in I got a prompt: "Your keychain was not unlocked. Please enter your password to unlock your keychain." That was when I realized something... Gnome had been automatically unlocking my keychain—my wallet in KDE speak-- every time I logged in via GDM. It was only when I bypassed GDM's login that Gnome had to step in and make me do it manually. - -Now, I am under the personal belief that if you enable automatic login then your key chain should be unlocked automatically as well-- otherwise what's the point? Either way you still have to type in your password and at least if you hit the GDM Login screen you have a chance to change your session if you want to. - -But, regardless of that, it was at that moment that I realized it was such a simple thing that made the desktop feel so much more like it was working WITH ME. When I log into KDE via SDDM? Before the splash screen is even finished loading there is a window popping up over top the splash animation-- thereby disrupting the splash screen-- prompting me to unlock my KDE wallet or GPG keyring. - -If a wallet doesn't exist already you get prompted to create a wallet-- why couldn't one have been created for me at user creation?-- and then get asked to pick between two encryption methods, where one is even implied as insecure (Blowfish), why are you letting me pick something that's insecure for my security? Author's Note: If you install the actual KDE spin and don't just install KDE after-the-fact then a wallet is created for you at user creation. Unfortunately it's not unlocked for you automatically, and it seems to use the older Blowfish method rather than the new, and more secure, GPG method. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920) - -If you DO pick the secure one (GPG) then it tries to load an Gpg key... which I hope you had one created already because if you don't you get yelled at. How do you create one? Well, it doesn't offer to make one for you... nor It doesn't tell you... and if you do manage TO figure out that you are supposed to use KGpg to create the key then you get taken through several menus and prompts that are nothing but confusing to new users. Why are you asking me where the GPG binary is located? How on earth am I supposed to know? Can't you just use the most recent one if there's more than one? And if there IS only one then, I ask again, why are you prompting me? - -Why are you asking me what key size and encryption algorithm to use? You select 2048 and RSA/RSA by default, so why not just use those? If you want to have those options available then throw them under the "Expert mode" button that is right there. This isn't just about having configuration options available, its about needless things that get thrown in the user's face by default. This is going to be a theme for the rest of the article... KDE needs better sane defaults. Configuration is great, I love the configuration I get with KDE, but it needs to learn when to and when not to prompt. It also needs to learn that "Well its configurable" is no excuse for bad defaults. Defaults are what users see initially, bad defaults will lose users. - -Let's move on from the key chain issue though, because I think I made my point. 
- --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2 - -作者:Eric Griffith -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 03c44a7869162c8f82d29bf73501ef7984f124bd Mon Sep 17 00:00:00 2001 From: XLCYun Date: Mon, 3 Aug 2015 08:27:06 +0800 Subject: [PATCH 068/207] =?UTF-8?q?XLCYun=20=20Gnome=E7=AC=AC=E4=BA=8C?= =?UTF-8?q?=E8=8A=82=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit translated by XLCYun --- ...ht & Wrong - Page 2 - The GNOME Desktop.md | 31 +++++++++++++++++++ 1 file changed, 31 insertions(+) create mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md new file mode 100644 index 0000000000..5ce4dcd8d5 --- /dev/null +++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md @@ -0,0 +1,31 @@ +将GNOME作为我的Linux桌面的一周:他们做对的与做错的 - 第二节 - GNOME桌面 +================================================================================ +### 桌面 ### + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_gdm_show&w=1920) + +在我这一周的前五天中,我都是直接手动登录进Gnome的——没有打开自动登录功能。在第五天的晚上,每一次都要手动登录让我觉得很厌烦,所以我就到用户管理器中打开了自动登录功能。下一次我登录的时候收到了一个提示:“你的密钥链(keychain)未解锁,请输入你的密码解锁”。在这时我才意识到了什么……每一次我通过GDM登录时——我的KDB钱包提示——Gnome以前一直都在自动解锁我的密钥链!当我绕开GDM的登录程序时,Gnome才不得不介入让我手动解锁。 + +现在,鄙人的陋见是如果你打开了自动登录功能,那么你的密钥链也应当自动解锁——否则,这功能还有何用?无论如何,你还是要输入你的密码,况且在GDM登录界面你还能有机会选择要登录的会话。 + +但是,这点且不提,也就是在那一刻,我意识到要让这桌面感觉是它在**和我**一起工作是多么简单的一件事。当我通过SDDM登录KDE时?甚至连启动界面都还没加载完成,就有一个窗口弹出来遮挡了启动动画——因此启动动画也就被破坏了——提示我解锁我的KDE钱包或GPG钥匙环。 + +如果当前不存在钱包,你就会收到一个创建钱包的提醒——就不能在创建用户的时候同时为我创建一个吗?——接着它又让你在两种加密模式中选择一种,甚至还暗示我们其中一种是不安全的(Blowfish),既然是为了安全,为什么还要我选择一个不安全的东西?作者声明:如果你安装了真正的KDE spin版本而不是仅仅安装了KDE的事后版本,那么在创建用户时,它就会为你创建一个钱包。但很不幸的是,它不会帮你解锁,并且它似乎还使用了更老的Blowfish加密模式,而不是更新而且更安全的GPG模式。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_kgpg_show&w=1920) + +如果你选择了那个安全的加密模式(GPG),那么它会尝试加载GPG密钥……我希望你已经创建过一个了,因为如果你没有,那么你可又要被批一顿了。怎么样才能创建一个?额……它不帮你创建一个……也不告诉你怎么创建……假如你真的搞明白了你应该使用KGpg来创建一个密钥,接着在你就会遇到一层层的菜单和一个个的提示,而这些菜单和提示只能让新手感到困惑。为什么你要问我GPG的二进制文件在哪?天知道在哪!如果不止一个,你就不能为我选择一个最新的吗?如果只有一个,我再问一次,为什么你还要问我? 
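+
+译注:作为对照,在命令行下用 GnuPG 的默认参数一步就能生成密钥,其默认值正是文中提到的 RSA/RSA 和 2048 位。下面是一个仅供参考的示意,实际交互以你所安装的 GnuPG 版本为准:
+
+    $ gpg --gen-key
+    # 示意:一路回车接受默认的算法(RSA/RSA)和密钥长度(2048),
+    # 再按提示填入姓名、邮箱并设置口令即可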
+ +为什么你要问我要使用多大的密钥大小和加密算法?你既然默认选择了2048和RSA/RSA,为什么不直接使用?如果你想让这些选项能够被改变,那就把它们扔在下面的"Expert mode(专家模式)"按钮里去。这不仅仅关于使配置可被用户改变,而是关于默认地把多余的东西扔在了用户面前。这种问题将会成为剩下文章的主题之一……KDE需要更好更理智的默认配置。配置是好的,我很喜欢在使用KDE时的配置,但它还需要知道什么时候应该,什么时候不应该去提示用户。而且它还需要知道“嗯,它是可配置的”不能做为默认配置做得不好的借口。用户最先接触到的就是默认配置,不好的默认配置注定要失去用户。 + +让我们抛开密钥链的问题,因为我想我已经表达出了我的想法。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=2 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 1665f293d1a20b351480b49a9a2a9be67ca16af9 Mon Sep 17 00:00:00 2001 From: joeren Date: Mon, 3 Aug 2015 09:58:33 +0800 Subject: [PATCH 069/207] =?UTF-8?q?github=202.0=E6=B5=8B=E8=AF=95=E6=8E=A8?= =?UTF-8?q?=E9=80=81=E6=B5=8B=E8=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- github 2.0测试.txt | 0 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 github 2.0测试.txt diff --git a/github 2.0测试.txt b/github 2.0测试.txt new file mode 100644 index 0000000000..e69de29bb2 From d90f37fdb59f8130a83c3894ef2c247870979aef Mon Sep 17 00:00:00 2001 From: joeren Date: Mon, 3 Aug 2015 10:11:49 +0800 Subject: [PATCH 070/207] =?UTF-8?q?=E6=B5=8B=E8=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- github 2.0测试.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/github 2.0测试.txt b/github 2.0测试.txt index e69de29bb2..9d07aa0df5 100644 --- a/github 2.0测试.txt +++ b/github 2.0测试.txt @@ -0,0 +1 @@ +111 \ No newline at end of file From f7b881a22862a051f1f2529900989bce239dcf51 Mon Sep 17 00:00:00 2001 From: joeren Date: Mon, 3 Aug 2015 10:15:21 +0800 Subject: [PATCH 071/207] Test --- github 2.0测试.txt | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/github 2.0测试.txt b/github 2.0测试.txt index 9d07aa0df5..7787faa3c1 100644 --- a/github 2.0测试.txt +++ b/github 2.0测试.txt @@ -1 +1,2 @@ -111 \ No newline at end of file +111 +222 \ No newline at end of file From 60689c5fc61a277731adf66298f33409f95dcde2 Mon Sep 17 00:00:00 2001 From: Ezio Date: Mon, 3 Aug 2015 10:57:25 +0800 Subject: [PATCH 072/207] Delete 20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 删除原文 oska874 --- ...un Ubuntu Snappy Core on Raspberry Pi 2.md | 91 ------------------- 1 file changed, 91 deletions(-) delete mode 100644 sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md diff --git a/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md deleted file mode 100644 index 99b2b3acc1..0000000000 --- a/sources/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md +++ /dev/null @@ -1,91 +0,0 @@ -+Translating by Ezio - -How to run Ubuntu Snappy Core on Raspberry Pi 2 -================================================================================ -The Internet of Things (IoT) is upon us. In a couple of years some of us might ask ourselves how we ever survived without it, just like we question our past without cellphones today. Canonical is a contender in this fast growing, but still wide open market. The company wants to claim their stakes in IoT just as they already did for the cloud. 
At the end of January, the company launched a small operating system that goes by the name of [Ubuntu Snappy Core][1] which is based on Ubuntu Core. - -Snappy, the new component in the mix, represents a package format that is derived from DEB, is a frontend to update the system that lends its idea from atomic upgrades used in CoreOS, Red Hat's Atomic and elsewhere. As soon as the Raspberry Pi 2 was marketed, Canonical released Snappy Core for that plattform. The first edition of the Raspberry Pi was not able to run Ubuntu because Ubuntu's ARM images use the ARMv7 architecture, while the first Raspberry Pis were based on ARMv6. That has changed now, and Canonical, by releasing a RPI2-Image of Snappy Core, took the opportunity to make clear that Snappy was meant for the cloud and especially for IoT. - -Snappy also runs on other platforms like Amazon EC2, Microsofts Azure, and Google's Compute Engine, and can also be virtualized with KVM, Virtualbox, or Vagrant. Canonical has embraced big players like Microsoft, Google, Docker or OpenStack and, at the same time, also included small projects from the maker scene as partners. Besides startups like Ninja Sphere and Erle Robotics, there are board manufacturers like Odroid, Banana Pro, Udoo, PCDuino and Parallella as well as Allwinner. Snappy Core will also run in routers soon to help with the poor upgrade policy that vendors perform. - -In this post, let's see how we can test Ubuntu Snappy Core on Raspberry Pi 2. - -The image for Snappy Core for the RPI2 can be downloaded from the [Raspberry Pi website][2]. Unpacked from the archive, the resulting image should be [written to an SD card][3] of at least 8 GB. Even though the OS is small, atomic upgrades and the rollback function eat up quite a bit of space. After booting up your Raspberry Pi 2 with Snappy Core, you can log into the system with the default username and password being 'ubuntu'. - -![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg) - -sudo is already configured and ready for use. For security reasons you should change the username with: - - $ sudo usermod -l - -Alternatively, you can add a new user with the command `adduser`. - -Due to the lack of a hardware clock on the RPI, that the Snappy Core image does not take account of, the image has a small bug that will throw a lot of errors when processing commands. It is easy to fix. - -To find out if the bug affects you, use the command: - - $ date - -If the output is "Thu Jan 1 01:56:44 UTC 1970", you can fix it with: - - $ sudo date --set="Sun Apr 04 17:43:26 UTC 2015" - -adapted to your actual time. - -![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg) - -Now you might want to check if there are any updates available. Note that the usual commands: - - $ sudo apt-get update && sudo apt-get distupgrade - -will not get you very far though, as Snappy uses its own simplified package management system which is based on dpkg. This makes sense, as Snappy will run on a lot of embedded appliances, and you want things to be as simple as possible. - -Let's dive into the engine room for a minute to understand how things work with Snappy. The SD card you run Snappy on has three partitions besides the boot partition. Two of those house a duplicated file system. Both of those parallel file systems are permanently mounted as "read only", and only one is active at any given time. The third partition holds a partial writable file system and the users persistent data. 
With a fresh system, the partition labeled 'system-a' holds one complete file system, called a core, leaving the parallel partition still empty. - -![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg) - -If we run the following command now: - - $ sudo snappy update - -the system will install the update as a complete core, similar to an image, on 'system-b'. You will be asked to reboot your device afterwards to activate the new core. - -After the reboot, run the following command to check if your system is up to date and which core is active. - - $ sudo snappy versions -a - -After rolling out the update and rebooting, you should see that the core that is now active has changed. - -As we have not installed any apps yet, the following command: - - $ sudo snappy update ubuntu-core - -would have been sufficient, and is the way if you want to upgrade just the underlying OS. Should something go wrong, you can rollback by: - - $ sudo snappy rollback ubuntu-core - -which will take you back to the system's state before the update. - -![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg) - -Speaking of apps, they are what makes Snappy useful. There are not that many at this point, but the IRC channel #snappy on Freenode is humming along nicely and with a lot of people involved, the Snappy App Store gets new apps added on a regular basis. You can visit the shop by pointing your browser to http://:4200, and you can install apps right from the shop and then launch them with http://webdm.local in your browser. Building apps yourself for Snappy is not all that hard, and [well documented][4]. You can also port DEB packages into the snappy format quite easily. - -![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg) - -Ubuntu Snappy Core, due to the limited number of available apps, is not overly useful in a productive way at this point in time, although it invites us to dive into the new Snappy package format and play with atomic upgrades the Canonical way. Since it is easy to set up, this seems like a good opportunity to learn something new. 
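-
-To recap, the whole update-and-verify cycle described above boils down to a handful of commands. This is only a sketch stringing together the commands already shown; nothing here goes beyond what the article covered:
-
-    $ sudo snappy versions -a           # note which core is active now
-    $ sudo snappy update ubuntu-core    # stage the new core on the spare partition
-    $ sudo reboot                       # activate the new core
-    $ sudo snappy versions -a           # confirm the active core changed
-    $ sudo snappy rollback ubuntu-core  # only needed if the update misbehaves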
-
--------------------------------------------------------------------------------
-
-via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html
-
-作者:[Ferdinand Thommes][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[a]:http://xmodulo.com/author/ferdinand
-[1]:http://www.ubuntu.com/things
-[2]:http://www.raspberrypi.org/downloads/
-[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html
-[4]:https://developer.ubuntu.com/en/snappy/
From 0b73d40c0e585226303bc4dcff5ed5a172c87ee8 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Mon, 3 Aug 2015 11:01:10 +0800
Subject: [PATCH 073/207] Create 20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

添加翻译过的文章
oska874

---
 ...un Ubuntu Snappy Core on Raspberry Pi 2.md | 89 +++++++++++++++++++
 1 file changed, 89 insertions(+)
 create mode 100644 translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md

diff --git a/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md
new file mode 100644
index 0000000000..c4475f39a2
--- /dev/null
+++ b/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md
@@ -0,0 +1,89 @@
+如何在树莓派2 代运行Ubuntu Snappy Core
+================================================================================
+物联网(Internet of Things, IoT) 时代即将来临。很快,过不了几年,我们就会问自己当初是怎么在没有物联网的情况下生存的,就像我们现在怀疑过去没有手机的年代。Canonical 正是这个快速发展、却仍然完全开放的物联网市场中的一个竞争者。这家公司宣称自己把赌注压到了IoT 上,就像他们已经在“云”上做过的一样。在今年一月底,Canonical 发布了一个基于Ubuntu Core 的小型操作系统,名字叫做 [Ubuntu Snappy Core][1] 。
+
+Snappy 是一种用来替代deb 的新的打包格式,也是一个用来更新系统的前端,它从CoreOS、红帽和其他系统那里借鉴了原子更新的想法。树莓派2 代一上市,Canonical 就发布了用于该平台的Snappy Core 版本。第一代树莓派不能运行Ubuntu,因为它是基于ARMv6 的,而Ubuntu 的ARM 镜像是基于ARMv7 的。不过这种状况现在改变了,Canonical 通过发布用于RPI2 的镜像,抓住机会表明了Snappy 就是一个用于云计算,特别是IoT 的系统。
+
+Snappy 同样可以运行在其它像Amazon EC2、Microsoft Azure、Google Compute Engine 这样的云端上,也可以虚拟化在KVM、VirtualBox 和Vagrant 上。Canonical 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,比如Ninja Sphere、Erle Robotics,还有一些开发板生产商,比如Odroid、Banana Pro、Udoo、PCDuino、Parallella 和全志。Snappy Core 也希望很快能运行在路由器上,来帮助改进路由器生产商目前很少更新固件的策略。
+
+接下来,让我们看看怎么样在树莓派2 上运行Snappy。
+
+用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是原子升级和回滚功能会占用不小的空间。使用Snappy 启动树莓派2 后,你就可以使用默认用户名和密码(都是ubuntu)登录系统。
+
+![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg)
+
+sudo 已经配置好了可以直接使用,安全起见,你应该使用以下命令来修改你的用户名:
+
+    $ sudo usermod -l
+
+或者也可以使用`adduser` 命令为你添加一个新用户。
+
+因为RPI 缺少硬件时钟,而Snappy Core 镜像并不知道这一点,所以系统会有一个小bug:处理命令时会报很多错。不过这个问题很容易解决:
+
+使用这个命令来确认这个bug 是否影响到了你:
+
+    $ date
+
+如果输出是 "Thu Jan 1 01:56:44 UTC 1970", 你可以这样做来改正:
+
+    $ sudo date --set="Sun Apr 04 17:43:26 UTC 2015"
+
+改成你的实际时间。
+
+![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg)
+
+现在你可能打算检查一下,看看有没有可用的更新。注意,通常使用的命令:
+
+    $ sudo apt-get update && sudo apt-get dist-upgrade
+
+在这里行不通,因为Snappy 使用它自己精简过的、基于dpkg 的包管理系统。这样做是因为Snappy 会运行在很多嵌入式设备上,而你也希望一切尽可能简单。
+
+让我们深入内部来了解一下Snappy 是如何工作的。运行Snappy 的SD 卡上除了boot 分区外还有3个分区。其中的两个构成了一对重复的文件系统。这两个平行的文件系统被固定挂载为只读模式,并且任何时刻只有一个是激活的。第三个分区是一个部分可写的文件系统,用来让用户存储数据。在新安装的系统中,标记为 'system-a' 的分区中保存着一个完整的文件系统(被称作核心),而另一个平行分区则是空的。
+
+![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg)
+
+如果我们运行以下命令:
+
+    $ sudo snappy update
+
+系统将会在'system-b' 
上作为一个整体进行更新,这有点像是更新一个镜像文件。接下来你将会被告知要重启系统来激活新核心。
+
+重启之后,运行下面的命令可以检查你的系统是否已经更新到最新版本,以及当前被激活的是哪个核心。
+
+    $ sudo snappy versions -a
+
+经过更新-重启的操作,你应该可以看到被激活的核心已经改变了。
+
+因为到目前为止我们还没有安装任何软件,所以下面的命令:
+
+    $ sudo snappy update ubuntu-core
+
+就已经足够了,而且如果你只打算更新底层的操作系统,这就是要用的方式。如果出了问题,你可以使用下面的命令回滚:
+
+    $ sudo snappy rollback ubuntu-core
+
+这将会把系统状态回滚到更新之前。
+
+![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg)
+
+再来说说那些让Snappy 有用的软件。这里不会过多讲解如何构建软件、如何向Snappy 应用商店添加软件的基础知识,但是你可以通过Freenode 上的IRC 频道#snappy 了解更多信息,那上面有很多人参与。你可以通过浏览器访问http://:4200 来浏览应用商店,然后从商店安装软件,再在浏览器里访问http://webdm.local 来启动程序。如何构建用于Snappy 的软件并不难,而且也有了现成的[参考文档][4] 。你也可以很容易地把DEB 软件包移植成Snappy 格式。
+
+![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg)
+
+尽管Ubuntu Snappy Core 吸引我们去研究新型的Snappy 安装包格式和Canonical 式的原子更新操作,但是因为可用的应用还很有限,它现在在生产环境里还不是很有用。但是既然搭建一个Snappy 环境如此简单,这看起来是一个学点新东西的好机会。
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html
+
+作者:[Ferdinand Thommes][a]
+译者:[oska874](https://github.com/oska874)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/ferdinand
+[1]:http://www.ubuntu.com/things
+[2]:http://www.raspberrypi.org/downloads/
+[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html
+[4]:https://developer.ubuntu.com/en/snappy/
From 9e78da3fe288fdc6346cc4cefff359b1f56d8915 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Mon, 3 Aug 2015 16:00:21 +0800
Subject: [PATCH 074/207] =?UTF-8?q?20150803-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ds for profiling your Unix file systems.md | 64 +++
 sources/tech/20150803 Linux Logging Basics.md | 90 ++++
 sources/tech/20150803 Managing Linux Logs.md | 418 ++++++++++++++++++
 ...0150803 Troubleshooting with Linux Logs.md | 116 +++++
 4 files changed, 688 insertions(+)
 create mode 100644 sources/tech/20150803 Handy commands for profiling your Unix file systems.md
 create mode 100644 sources/tech/20150803 Linux Logging Basics.md
 create mode 100644 sources/tech/20150803 Managing Linux Logs.md
 create mode 100644 sources/tech/20150803 Troubleshooting with Linux Logs.md

diff --git a/sources/tech/20150803 Handy commands for profiling your Unix file systems.md b/sources/tech/20150803 Handy commands for profiling your Unix file systems.md
new file mode 100644
index 0000000000..ae5951b0d7
--- /dev/null
+++ b/sources/tech/20150803 Handy commands for profiling your Unix file systems.md
@@ -0,0 +1,64 @@
+Handy commands for profiling your Unix file systems
+================================================================================
+![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png)
+Credit: Sandra H-S
+
+One way that I have seen to help encourage the owners of all that data detritus to address the problem is to create a summary report or "profile" of a file collection that reports on such things as the number of files; the oldest, newest, and largest of those files; and a count of who owns those files. If someone realizes that a collection of half a million files contains no files less than five years old, they might go ahead and remove them -- or, at least, archive and compress them. The basic problem is that huge collections of files are overwhelming and most people are afraid that they might accidentally delete something important. Having a way to characterize a file collection can help demonstrate the nature of the content and encourage those digital packrats to clean out their nests.
+
+When I prepare a file system summary report on Unix, a handful of Unix commands easily provide some very useful statistics. To count the files in a directory, you can use a find command like this.
+
+    $ find . -type f | wc -l
+    187534
+
+Finding the oldest and newest files is a bit more complicated, but still quite easy. In the commands below, we're using the find command again to find files, displaying the data with a year-month-day format that makes it possible to sort by file age, and then displaying the top -- thus the oldest -- file in that list.
+
+In the second command, we do the same, but print the last line -- thus the newest -- file.
+
+    $ find -type f -printf '%T+ %p\n' | sort | head -n 1
+    2006-02-03+02:40:33 ./skel/.xemacs/init.el
+    $ find -type f -printf '%T+ %p\n' | sort | tail -n 1
+    2015-07-19+14:20:16 ./.bash_history
+
+The %T (file date and time) and %p (file name with path) parameters used with find's -printf option allow this to work.
+
+If we're looking at home directories, we're undoubtedly going to find that history files are the newest files and that isn't likely to be a very interesting bit of information. You can omit those files by "un-grepping" them or you can omit all files that start with dots as shown below.
+
+    $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1
+    2015-07-19+13:02:12 ./isPrime
+
+Finding the largest file involves using the %s (size) parameter and we include the file name (%f) since that's what we want the report to show.
+
+    $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1
+    20183040 project.org.tar
+
+To summarize file ownership, use the %u (owner) parameter.
+
+    $ find -type f -printf '%u \n' | grep -v "\./\." | sort | uniq -c
+    180034 shs
+    7500 jdoe
+
+If your file system also records the last access date, it can be very useful to show that files haven't been accessed in, say, more than two years. This would give your reviewers an important insight into the value of those files. The last access parameter (%a) could be used like this:
+
+    $ find -type f -printf '%a+ %p\n' | sort | head -n 1
+    Fri Dec 15 03:00:30 2006+ ./statreport
+
+Of course, if the most recently accessed file is also in the deep dark past, that's likely to get even more of a reaction.
+
+    $ find -type f -printf '%a+ %p\n' | sort | tail -n 1
+    Wed Nov 26 03:00:27 2007+ ./my-notes
+
+Getting a sense of what's in a file system or large directory by creating a summary report showing the file date ranges, the largest files, the file owners, and the oldest and newest access times can help to demonstrate how current and how important a file collection is and help its owners decide if it's time to clean up.
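+Since these are all stock GNU find commands, they combine easily. As a rough sketch (the profile-files.sh name and the report layout here are just illustrative choices, not from the article), the commands above can be rolled into a single report script and run from the directory being profiled:
+
+    #!/bin/bash
+    # profile-files.sh -- summarize a file collection using the
+    # GNU find commands shown above
+    echo "File count : $(find . -type f | wc -l)"
+    echo "Oldest file: $(find . -type f -printf '%T+ %p\n' | sort | head -n 1)"
+    echo "Newest file: $(find . -type f -printf '%T+ %p\n' | sort | tail -n 1)"
+    echo "Largest    : $(find . -type f -printf '%s %f\n' | sort -n | tail -n 1)"
+    echo "Owners:"
+    find . -type f -printf '%u\n' | sort | uniq -c | sort -nr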
+ +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html + +作者:[Sandra Henry-Stocker][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ \ No newline at end of file diff --git a/sources/tech/20150803 Linux Logging Basics.md b/sources/tech/20150803 Linux Logging Basics.md new file mode 100644 index 0000000000..d20f68f140 --- /dev/null +++ b/sources/tech/20150803 Linux Logging Basics.md @@ -0,0 +1,90 @@ +Linux Logging Basics +================================================================================ +First we’ll describe the basics of what Linux logs are, where to find them, and how they get created. If you already know this stuff, feel free to skip to the next section. + +### Linux System Logs ### + +Many valuable log files are automatically created for you by Linux. You can find them in your /var/log directory. Here is what this directory looks like on a typical Ubuntu system: + +![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-system-log-terminal.png) + +Some of the most important Linux system logs include: + +- /var/log/syslog or /var/log/messages stores all global system activity data, including startup messages. Debian-based systems like Ubuntu store this in /var/log/syslog. RedHat-based systems like RHEL or CentOS store this in /var/log/messages. +- /var/log/auth.log or /var/log/secure stores logs from the Pluggable Authentication Module (pam) including successful logins, failed login attempts, and authentication methods. Ubuntu and Debian store authentication messages in /var/log/auth.log. RedHat and CentOS store this data in /var/log/secure. +- /var/log/kern stores kernel error and warning data, which is particularly helpful for troubleshooting custom kernels. +- /var/log/cron stores information about cron jobs. Use this data to verify that your cron jobs are running successfully. + +Digital Ocean has a thorough [tutorial][1] on these files and how rsyslog creates them on common distributions like RedHat and CentOS. + +Applications also write log files in this directory. For example, popular servers like Apache, Nginx, MySQL, and more can write log files here. Some of these log files are written by the application itself. Others are created through syslog (see below). + +### What’s Syslog? ### + +How do Linux system log files get created? The answer is through the syslog daemon, which listens for log messages on the syslog socket /dev/log and then writes them to the appropriate log file. + +The word “syslog” is an overloaded term and is often used in short to refer to one of these: + +1. **Syslog daemon** — a program to receive, process, and send syslog messages. It can [send syslog remotely][2] to a centralized server or write it to a local file. Common examples include rsyslogd and syslog-ng. In this usage, people will often say “sending to syslog.” +1. **Syslog protocol** — a transport protocol specifying how logs can be sent over a network and a data format definition for syslog messages (below). It’s officially defined in [RFC-5424][3]. The standard ports are 514 for plaintext logs and 6514 for encrypted logs. In this usage, people will often say “sending over syslog.” +1. 
**Syslog messages** — log messages or events in the syslog format, which includes a header with several standard fields. In this usage, people will often say “sending syslog.”
+
+Syslog messages or events include a header with several standard fields, making analysis and routing easier. They include the timestamp, the name of the application, the classification or location in the system where the message originates, and the priority of the issue.
+
+Here is an example log message with the syslog header included. It’s from the sshd daemon, which controls remote logins to the system. This message describes a failed login attempt:
+
+    <34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2
+
+### Syslog Format and Fields ###
+
+Each syslog message includes a header with fields. Fields are structured data that makes it easier to analyze and route the events. Here is the format we used to generate the above syslog example. You can match each value to a specific field name.
+
+    <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%n
+
+Below, you’ll find descriptions of some of the most commonly used syslog fields when searching or troubleshooting issues.
+
+#### Timestamp ####
+
+The [timestamp][4] field (2003-10-11T22:14:15.003Z in the example) indicates the time and date that the message was generated on the system sending the message. That time can be different from when another system receives the message. The example timestamp breaks down like this:
+
+- **2003-10-11** is the year, month, and day.
+- **T** is a required element of the TIMESTAMP field, separating the date and the time.
+- **22:14:15.003** is the 24-hour format of the time, including the number of milliseconds (**003**) into the next second.
+- **Z** is an optional element, indicating UTC time. Instead of Z, the example could have included an offset, such as -08:00, which indicates that the time is offset from UTC by 8 hours, PST.
+
+#### Hostname ####
+
+The [hostname][5] field (server1.com in the example above) indicates the name of the host or system that sent the message.
+
+#### App-Name ####
+
+The [app-name][6] field (sshd in the example) indicates the name of the application that sent the message; the pam_unix(sshd:auth) portion that follows it is already part of the message body.
+
+#### Priority ####
+
+The priority field or [pri][7] for short (<34> in the example above) tells you how urgent or severe the event is. It’s a combination of two numerical fields: the facility and the severity. The severity ranges from the number 7 for debug events all the way to 0, which is an emergency. The facility describes which process created the event. It ranges from 0 for kernel messages to 23 for local application use.
+
+Pri can be output in two ways. The first is as a single number called the prival, calculated as the facility field value multiplied by 8, plus the severity field value: prival = facility * 8 + severity. The second is pri-text, which will output in the string format “facility.severity.” The latter format can often be easier to read and search but takes up more storage space.
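+As a worked example, the <34> prival at the front of the sample message above decodes as follows (the arithmetic is implied by, though not spelled out in, the field definitions):
+
+    prival = facility * 8 + severity
+    34     = 4 * 8 + 2
+
+That is facility 4 (security/authorization messages) and severity 2 (critical), which fits an authentication failure reported by sshd.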
+ +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:https://www.digitalocean.com/community/tutorials/how-to-view-and-configure-linux-logs-on-ubuntu-and-centos +[2]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.y2e9tdfk1cdb +[3]:https://tools.ietf.org/html/rfc5424 +[4]:https://tools.ietf.org/html/rfc5424#section-6.2.3 +[5]:https://tools.ietf.org/html/rfc5424#section-6.2.4 +[6]:https://tools.ietf.org/html/rfc5424#section-6.2.5 +[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1 \ No newline at end of file diff --git a/sources/tech/20150803 Managing Linux Logs.md b/sources/tech/20150803 Managing Linux Logs.md new file mode 100644 index 0000000000..d68adddf52 --- /dev/null +++ b/sources/tech/20150803 Managing Linux Logs.md @@ -0,0 +1,418 @@ +Managing Linux Logs +================================================================================ +A key best practice for logging is to centralize or aggregate your logs in one place, especially if you have multiple servers or tiers in your architecture. We’ll tell you why this is a good idea and give tips on how to do it easily. + +### Benefits of Centralizing Logs ### + +It can be cumbersome to look at individual log files if you have many servers. Modern web sites and services often include multiple tiers of servers, distributed with load balancers, and more. It’d take a long time to hunt down the right file, and even longer to correlate problems across servers. There’s nothing more frustrating than finding the information you are looking for hasn’t been captured, or the log file that could have held the answer has just been lost after a restart. + +Centralizing your logs makes them faster to search, which can help you solve production issues faster. You don’t have to guess which server had the issue because all the logs are in one place. Additionally, you can use more powerful tools to analyze them, including log management solutions. These solutions can [transform plain text logs][1] into fields that can be easily searched and analyzed. + +Centralizing your logs also makes them easier to manage: + +- They are safer from accidental or intentional loss when they are backed up and archived in a separate location. If your servers go down or are unresponsive, you can use the centralized logs to debug the problem. +- You don’t have to worry about ssh or inefficient grep commands requiring more resources on troubled systems. +- You don’t have to worry about full disks, which can crash your servers. +- You can keep your production servers secure without giving your entire team access just to look at logs. It’s much safer to give your team access to logs from the central location. + +With centralized log management, you still must deal with the risk of being unable to transfer logs to the centralized location due to poor network connectivity or of using up a lot of network bandwidth. We’ll discuss how to intelligently address these issues in the sections below. 
+ +### Popular Tools for Centralizing Logs ### + +The most common way of centralizing logs on Linux is by using syslog daemons or agents. The syslog daemon supports local log collection, then transports logs through the syslog protocol to a central server. There are several popular daemons that you can use to centralize your log files: + +- [rsyslog][2] is a light-weight daemon installed on most common Linux distributions. +- [syslog-ng][3] is the second most popular syslog daemon for Linux. +- [logstash][4] is a heavier-weight agent that can do more advanced processing and parsing. +- [fluentd][5] is another agent with advanced processing capabilities. + +Rsyslog is the most popular daemon for centralizing your log data because it’s installed by default in most common distributions of Linux. You don’t need to download it or install it, and it’s lightweight so it won’t take up much of your system resources. + +If you need more advanced filtering or custom parsing capabilities, Logstash is the next most popular choice if you don’t mind the extra system footprint. + +### Configure Rsyslog.conf ### + +Since rsyslog is the most widely used syslog daemon, we’ll show how to configure it to centralize logs. The global configuration file is located at /etc/rsyslog.conf. It loads modules, sets global directives, and has an include for application-specific files located in the directory /etc/rsyslog.d/. This directory contains /etc/rsyslog.d/50-default.conf which instructs rsyslog to write the system logs to file. You can read more about the configuration files in the [rsyslog documentation][6]. + +The configuration language for rsyslog is [RainerScript][7]. You set up specific inputs for logs as well as actions to output them to another destination. Rsyslog already configures standard defaults for syslog input, so you usually just need to add an output to your log server. Here is an example configuration for rsyslog to output logs to an external server. In this example, **BEBOP** is the hostname of the server, so you should replace it with your own server name. + + action(type="omfwd" protocol="tcp" target="BEBOP" port="514") + +You could send your logs to a log server with ample storage to keep a copy for search, backup, and analysis. If you’re storing logs in the file system, then you should set up [log rotation][8] to keep your disk from getting full. + +Alternatively, you can send these logs to a log management solution. If your solution is installed locally you can send it to your local host and port as specified in your system documentation. If you use a cloud-based provider, you will send them to the hostname and port specified by your provider. + +### Log Directories ### + +You can centralize all the files in a directory or matching a wildcard pattern. The nxlog and syslog-ng daemons support both directories and wildcards (*). + +Common versions of rsyslog can’t monitor directories directly. As a workaround, you can setup a cron job to monitor the directory for new files, then configure rsyslog to send those files to a destination, such as your log management system. As an example, the log management vendor Loggly has an open source version of a [script to monitor directories][9]. + +### Which Protocol: UDP, TCP, or RELP? ### + +There are three main protocols that you can choose from when you transmit data over the Internet. The most common is UDP for your local network and TCP for the Internet. If you cannot lose logs, then use the more advanced RELP protocol. 
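+Before looking at each protocol in detail, here is a rough sketch of how the choice shows up in rsyslog configuration (the relay address 1.1.1.1 and the RELP port 2514 below are placeholder values, not settings from this guide):
+
+    # UDP (the legacy syntax uses a single @ before the target)
+    action(type="omfwd" protocol="udp" target="1.1.1.1" port="514")
+
+    # TCP (the legacy syntax uses @@)
+    action(type="omfwd" protocol="tcp" target="1.1.1.1" port="514")
+
+    # RELP needs the omrelp output module loaded first
+    module(load="omrelp")
+    action(type="omrelp" target="1.1.1.1" port="2514")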
+ +[UDP][10] sends a datagram packet, which is a single packet of information. It’s an outbound-only protocol, so it doesn’t send you an acknowledgement of receipt (ACK). It makes only one attempt to send the packet. UDP can be used to smartly degrade or drop logs when the network gets congested. It’s most commonly used on reliable networks like localhost. + +[TCP][11] sends streaming information in multiple packets and returns an ACK. TCP makes multiple attempts to send the packet, but is limited by the size of the [TCP buffer][12]. This is the most common protocol for sending logs over the Internet. + +[RELP][13] is the most reliable of these three protocols but was created for rsyslog and has less industry adoption. It acknowledges receipt of data in the application layer and will resend if there is an error. Make sure your destination also supports this protocol. + +### Reliably Send with Disk Assisted Queues ### + +If rsyslog encounters a problem when storing logs, such as an unavailable network connection, it can queue the logs until the connection is restored. The queued logs are stored in memory by default. However, memory is limited and if the problem persists, the logs can exceed memory capacity. + +**Warning: You can lose data if you store logs only in memory.** + +Rsyslog can queue your logs to disk when memory is full. [Disk-assisted queues][14] make transport of logs more reliable. Here is an example of how to configure rsyslog with a disk-assisted queue: + + $WorkDirectory /var/spool/rsyslog # where to place spool files + $ActionQueueFileName fwdRule1 # unique name prefix for spool files + $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible) + $ActionQueueSaveOnShutdown on # save messages to disk on shutdown + $ActionQueueType LinkedList # run asynchronously + $ActionResumeRetryCount -1 # infinite retries if host is down + +### Encrypt Logs Using TLS ### + +When the security and privacy of your data is a concern, you should consider encrypting your logs. Sniffers and middlemen could read your log data if you transmit it over the Internet in clear text. You should encrypt your logs if they contain private information, sensitive identification data, or government-regulated data. The rsyslog daemon can encrypt your logs using the TLS protocol and keep your data safer. + +To set up TLS encryption, you need to do the following tasks: + +1. Generate a [certificate authority][15] (CA). There are sample certificates in /contrib/gnutls, which are good only for testing, but you need to create your own for production. If you’re using a log management service, it will have one ready for you. +1. Generate a [digital certificate][16] for your server to enable SSL operation, or use one from your log management service provider. +1. Configure your rsyslog daemon to send TLS-encrypted data to your log management system. + +Here’s an example rsyslog configuration with TLS encryption. Replace CERT and DOMAIN_NAME with your own server setting. + + $DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt + $ActionSendStreamDriver gtls + $ActionSendStreamDriverMode 1 + $ActionSendStreamDriverAuthMode x509/name + $ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com + +### Best Practices for Application Logging ### + +In addition to the logs that Linux creates by default, it’s also a good idea to centralize logs from important applications. Almost all Linux-based server class applications write their status information in separate, dedicated log files. 
This includes database products like PostgreSQL or MySQL, web servers like Nginx or Apache, firewalls, print and file sharing services, directory and DNS servers, and so on.
+
+The first thing administrators do after installing an application is to configure it. Linux applications typically have a .conf file somewhere within the /etc directory. It can be somewhere else too, but that’s the first place where people look for configuration files.
+
+Depending on how complex or large the application is, the number of settable parameters can be few or in the hundreds. As mentioned before, most applications write their status to some sort of log file, and the configuration file is where the log settings, among other things, are defined.
+
+If you’re not sure where it is, you can use the locate command to find it:
+
+    [root@localhost ~]# locate postgresql.conf
+    /usr/pgsql-9.4/share/postgresql.conf.sample
+    /var/lib/pgsql/9.4/data/postgresql.conf
+
+#### Set a Standard Location for Log Files ####
+
+Linux systems typically save their log files under the /var/log directory. This works fine, but check if the application saves its logs under a specific directory under /var/log. If it does, great; if not, you may want to create a dedicated directory for the app under /var/log. Why? That’s because other applications save their log files under /var/log too, and if your app saves more than one log file – perhaps once every day or after each service restart – it may be a bit difficult to trawl through a large directory to find the file you want.
+
+If you have more than one instance of the application running in your network, this approach also comes in handy. Think about a situation where you may have a dozen web servers running in your network. When troubleshooting any one of the boxes, you would know exactly where to go.
+
+#### Use A Standard Filename ####
+
+Use a standard filename for the latest logs from your application. This makes it easy because you can monitor and tail a single file. Many applications add some sort of date-time stamp to the file name. This makes it much more difficult to find the latest file and to set up file monitoring by rsyslog. A better approach is to add timestamps to older log files using logrotate. This makes them easier to archive and search historically.
+
+#### Append the Log File ####
+
+Is the log file going to be overwritten after each application restart? If so, we recommend turning that off. After each restart the app should append to the log file. That way, you can always go back to the last log line before the restart.
+
+#### Appending vs. Rotation of Log File ####
+
+Even if the application writes a new log file after each restart, how is it saving the current log? Is it appending to one single, massive file? Linux systems are not known for frequent reboots or crashes: applications can run for very long periods without even blinking, but that can also make the log file very large. If you are trying to analyze the root cause of a connection failure that happened last week, you could easily be searching through tens of thousands of lines.
+
+We recommend you configure the application to rotate its log file once every day, say at midnight.
+
+Why? Well, for starters, it becomes manageable. It’s much easier to find a file name with a specific date-time pattern than to search through one file for that date’s entries. Files are also much smaller: you won’t think vi has frozen when you open a log file.
Secondly, if you are sending the log file over the wire to a different location – perhaps a nightly backup job copying to a centralized log server – it doesn’t chew up your network’s bandwidth. Third and finally, it helps with your log retention. If you want to cull old log entries, it’s easier to delete files older than a particular date than to have an application parsing one single large file.
+
+#### Retention of Log File ####
+
+How long do you keep a log file? That definitely comes down to business requirements. You could be asked to keep one week’s worth of logging information, or it may be a regulatory requirement to keep ten years’ worth of data. Whatever it is, logs need to go from the server at one time or another.
+
+In our opinion, unless otherwise required, keep at least a month’s worth of log files online, plus copy them to a secondary location like a logging server. Anything older than that can be offloaded to separate media. For example, if you are on AWS, your older logs can be copied to Glacier.
+
+#### Separate Disk Location for Log Files ####
+
+Linux best practice usually suggests mounting the /var directory to a separate file system. This is because of the high number of I/Os associated with this directory. We would recommend mounting the /var/log directory on a separate disk system. This avoids I/O contention with the main application’s data. Also, if the number of log files becomes too large or the single log file becomes too big, it doesn’t fill up the entire disk.
+
+#### Log Entries ####
+
+What information should be captured in each log entry?
+
+That depends on what you want to use the log for. Do you want to use it only for troubleshooting purposes, or do you want to capture everything that’s happening? Is it a legal requirement to capture what each user is running or viewing?
+
+If you are using logs for troubleshooting purposes, save only errors, warnings or fatal messages. There’s no reason to capture debug messages, for example. The app may log debug messages by default or another administrator might have turned this on for another troubleshooting exercise, but you need to turn this off because it can definitely fill up the space quickly. At a minimum, capture the date, time, client application name, source IP or client host name, action performed and the message itself.
+
+#### A Practical Example for PostgreSQL ####
+
+As an example, let’s look at the main configuration file for a vanilla PostgreSQL 9.4 installation. It’s called postgresql.conf and, unlike most other config files in Linux systems, it’s not saved under the /etc directory. In the code snippet below, we can see it’s in the /var/lib/pgsql directory of our CentOS 7 server:
+
+    [root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf
+    ...
+    #------------------------------------------------------------------------------
+    # ERROR REPORTING AND LOGGING
+    #------------------------------------------------------------------------------
+    # - Where to Log -
+    log_destination = 'stderr'
+    # Valid values are combinations of
+    # stderr, csvlog, syslog, and eventlog,
+    # depending on platform. csvlog
+    # requires logging_collector to be on.
+    # This is used when logging to stderr:
+    logging_collector = on
+    # Enable capturing of stderr and csvlog
+    # into log files. Required to be on for
+    # csvlogs.
+    # (change requires restart)
+    # These are only used if logging_collector is on:
+    log_directory = 'pg_log'
+    # directory where log files are written,
+    # can be absolute or relative to PGDATA
+    log_filename = 'postgresql-%a.log' # log file name pattern,
+    # can include strftime() escapes
+    # log_file_mode = 0600
+    # creation mode for log files,
+    # begin with 0 to use octal notation
+    log_truncate_on_rotation = on # If on, an existing log file with the
+    # same name as the new log file will be
+    # truncated rather than appended to.
+    # But such truncation only occurs on
+    # time-driven rotation, not on restarts
+    # or size-driven rotation. Default is
+    # off, meaning append to existing files
+    # in all cases.
+    log_rotation_age = 1d
+    # Automatic rotation of logfiles will happen after that time. 0 disables.
+    log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables.
+    # These are relevant when logging to syslog:
+    #syslog_facility = 'LOCAL0'
+    #syslog_ident = 'postgres'
+    # This is only relevant when logging to eventlog (win32):
+    #event_source = 'PostgreSQL'
+    # - When to Log -
+    #client_min_messages = notice # values in order of decreasing detail:
+    # debug5
+    # debug4
+    # debug3
+    # debug2
+    # debug1
+    # log
+    # notice
+    # warning
+    # error
+    #log_min_messages = warning # values in order of decreasing detail:
+    # debug5
+    # debug4
+    # debug3
+    # debug2
+    # debug1
+    # info
+    # notice
+    # warning
+    # error
+    # log
+    # fatal
+    # panic
+    #log_min_error_statement = error # values in order of decreasing detail:
+    # debug5
+    # debug4
+    # debug3
+    # debug2
+    # debug1
+    # info
+    # notice
+    # warning
+    # error
+    # log
+    # fatal
+    # panic (effectively off)
+    #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
+    # and their durations, > 0 logs only
+    # statements running at least this number
+    # of milliseconds
+    # - What to Log -
+    #debug_print_parse = off
+    #debug_print_rewritten = off
+    #debug_print_plan = off
+    #debug_pretty_print = on
+    #log_checkpoints = off
+    #log_connections = off
+    #log_disconnections = off
+    #log_duration = off
+    #log_error_verbosity = default
+    # terse, default, or verbose messages
+    #log_hostname = off
+    log_line_prefix = '< %m >' # special values:
+    # %a = application name
+    # %u = user name
+    # %d = database name
+    # %r = remote host and port
+    # %h = remote host
+    # %p = process ID
+    # %t = timestamp without milliseconds
+    # %m = timestamp with milliseconds
+    # %i = command tag
+    # %e = SQL state
+    # %c = session ID
+    # %l = session line number
+    # %s = session start timestamp
+    # %v = virtual transaction ID
+    # %x = transaction ID (0 if none)
+    # %q = stop here in non-session
+    # processes
+    # %% = '%'
+    # e.g. '<%u%%%d> '
+    #log_lock_waits = off # log lock waits >= deadlock_timeout
+    #log_statement = 'none' # none, ddl, mod, all
+    #log_temp_files = -1 # log temporary files equal or larger
+    # than the specified size in kilobytes; -1 disables, 0 logs all temp files
+    log_timezone = 'Australia/ACT'
+
+Although most parameters are commented out, they assume default values. We can see the log file directory is pg_log (log_directory parameter), the file names should start with postgresql (log_filename parameter), the files are rotated once every day (log_rotation_age parameter) and the log entries start with a timestamp (log_line_prefix parameter). Of particular interest is the log_line_prefix parameter: there is a whole gamut of information you can include there.
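+For illustration only (this particular prefix is assembled from the escapes documented above, not taken from the sample file), a richer setting such as:
+
+    log_line_prefix = '< %m %u@%d %h >'
+
+would stamp every entry with the timestamp, the user and database names, and the remote host, producing entries roughly like < 2015-02-27 01:21:27.020 EST postgres@mydb 10.0.2.2 >LOG: ... The user, database and host fields simply stay empty for messages not tied to a client session, such as startup and shutdown notices.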
+
+Looking under the /var/lib/pgsql/9.4/data/pg_log directory shows us these files:
+
+    [root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log
+    total 20
+    -rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log
+    -rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log
+    -rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log
+    -rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log
+    -rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log
+
+So the log files only have the name of the weekday stamped in the file name. We can change it. How? Configure the log_filename parameter in postgresql.conf.
+
+Looking inside one log file shows its entries start with the date and time only:
+
+    [root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log
+    ...
+    < 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request
+    < 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions
+    < 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down
+    < 2015-02-27 01:21:27.036 EST >LOG: shutting down
+    < 2015-02-27 01:21:27.211 EST >LOG: database system is shut down
+
+### Centralizing Application Logs ###
+
+#### Log File Monitoring with Imfile ####
+
+Traditionally, the most common way for applications to log their data is with files. Files are easy to search on a single machine but don’t scale well with more servers. You can set up log file monitoring and send the events to a centralized server when new logs are appended to the bottom. Create a new configuration file in /etc/rsyslog.d/ then add a file input like this:
+
+    $ModLoad imfile
+    $InputFilePollInterval 10
+    $PrivDropToGroup adm
+
+----------
+
+    # Input for FILE1
+    $InputFileName /FILE1
+    $InputFileTag APPNAME1
+    $InputFileStateFile stat-APPNAME1 # this must be unique for each file being polled
+    $InputFileSeverity info
+    $InputFilePersistStateInterval 20000
+    $InputRunFileMonitor
+
+Replace FILE1 and APPNAME1 with your own file and application names. Rsyslog will send it to the outputs you have configured.
+
+#### Local Socket Logs with Imuxsock ####
+
+A socket is similar to a UNIX file handle except that the socket is read into memory by your syslog daemon and then sent to the destination. No file needs to be written. As an example, the logger command sends its logs to this UNIX socket.
+
+This approach makes efficient use of system resources if your server is constrained by disk I/O or you have no need for local file logs. The disadvantage of this approach is that the socket has a limited queue size. If your syslog daemon goes down or can’t keep up, then you could lose log data.
+
+The rsyslog daemon will read from the /dev/log socket by default, but you can specifically enable it with the [imuxsock input module][17] using the following command:
+
+    $ModLoad imuxsock
+
+#### UDP Logs with Imudp ####
+
+Some applications output log data in UDP format, which is the standard syslog protocol when transferring log files over a network or your localhost. Your syslog daemon receives these logs and can process them or transmit them in a different format. Alternatively, you can send the logs to your log server or to a log management solution.
+
+Use the following command to configure rsyslog to accept syslog data over UDP on the standard port 514:
+
+    $ModLoad imudp
+
+----------
+
+    $UDPServerRun 514
+
+### Manage Logs with Logrotate ###
+
+Log rotation is a process that archives log files automatically when they reach a specified age.
Without intervention, log files keep growing, using up disk space. Eventually they will crash your machine. + +The logrotate utility can truncate your logs as they age, freeing up space. Your new log file retains the filename. Your old log file is renamed with a number appended to the end of it. Each time the logrotate utility runs, a new file is created and the existing file is renamed in sequence. You determine the threshold when old files are deleted or archived. + +When logrotate copies a file, the new file has a new inode, which can interfere with rsyslog’s ability to monitor the new file. You can alleviate this issue by adding the copytruncate parameter to your logrotate cron job. This parameter copies existing log file contents to a new file and truncates these contents from the existing file. The inode never changes because the log file itself remains the same; its contents are in a new file. + +The logrotate utility uses the main configuration file at /etc/logrotate.conf and application-specific settings in the directory /etc/logrotate.d/. DigitalOcean has a detailed [tutorial on logrotate][18]. + +### Manage Configuration on Many Servers ### + +When you have just a few servers, you can manually configure logging on them. Once you have a few dozen or more servers, you can take advantage of tools that make this easier and more scalable. At a basic level, all of these copy your rsyslog configuration to each server, and then restart rsyslog so the changes take effect. + +#### Pssh #### + +This tool lets you run an ssh command on several servers in parallel. Use a pssh deployment for only a small number of servers. If one of your servers fails, then you have to ssh into the failed server and do the deployment manually. If you have several failed servers, then the manual deployment on them can take a long time. + +#### Puppet/Chef #### + +Puppet and Chef are two different tools that can automatically configure all of the servers in your network to a specified standard. Their reporting tools let you know about failures and can resync periodically. Both Puppet and Chef have enthusiastic supporters. If you aren’t sure which one is more suitable for your deployment configuration management, you might appreciate [InfoWorld’s comparison of the two tools][19]. + +Some vendors also offer modules or recipes for configuring rsyslog. Here is an example from Loggly’s Puppet module. It offers a class for rsyslog to which you can add an identifying token: + + node 'my_server_node.example.net' { + # Send syslog events to Loggly + class { 'loggly::rsyslog': + customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000', + } + } + +#### Docker #### + +Docker uses containers to run applications independent of the underlying server. Everything runs from inside a container, which you can think of as a unit of functionality. ZDNet has an in-depth article about [using Docker][20] in your data center. + +There are several ways to log from Docker containers including linking to a logging container, logging to a shared volume, or adding a syslog agent directly inside the container. One of the most popular logging containers is called [logspout][21]. + +#### Vendor Scripts or Agents #### + +Most log management solutions offer scripts or agents to make sending data from one or more servers relatively easy. Heavyweight agents can use up extra system resources. Some vendors like Loggly offer configuration scripts to make using existing syslog daemons easier. 
Here is an example [script][22] from Loggly which can run on any number of servers. + +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl +[2]:http://www.rsyslog.com/ +[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system +[4]:http://logstash.net/ +[5]:http://www.fluentd.org/ +[6]:http://www.rsyslog.com/doc/rsyslog_conf.html +[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html +[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87 +[9]:https://www.loggly.com/docs/file-monitoring/ +[10]:http://www.networksorcery.com/enp/protocol/udp.htm +[11]:http://www.networksorcery.com/enp/protocol/tcp.htm +[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html +[13]:http://www.rsyslog.com/doc/relp.html +[14]:http://www.rsyslog.com/doc/queues.html +[15]:http://www.rsyslog.com/doc/tls_cert_ca.html +[16]:http://www.rsyslog.com/doc/tls_cert_machine.html +[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html +[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10 +[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html +[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/ +[21]:https://github.com/progrium/logspout +[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ \ No newline at end of file diff --git a/sources/tech/20150803 Troubleshooting with Linux Logs.md b/sources/tech/20150803 Troubleshooting with Linux Logs.md new file mode 100644 index 0000000000..8f595427a9 --- /dev/null +++ b/sources/tech/20150803 Troubleshooting with Linux Logs.md @@ -0,0 +1,116 @@ +Troubleshooting with Linux Logs +================================================================================ +Troubleshooting is the main reason people create logs. Often you’ll want to diagnose why a problem happened with your Linux system or application. An error message or a sequence of events can give you clues to the root cause, indicate how to reproduce the issue, and point out ways to fix it. Here are a few use cases for things you might want to troubleshoot in your logs. + +### Cause of Login Failures ### + +If you want to check if your system is secure, you can check your authentication logs for failed login attempts and unfamiliar successes. Authentication failures occur when someone passes incorrect or otherwise invalid login credentials, often to ssh for remote access or su for local access to another user’s permissions. These are logged by the [pluggable authentication module][1], or pam for short. Look in your logs for strings like Failed password and user unknown. Successful authentication records include strings like Accepted password and session opened. 
+
+Failure Examples:
+
+    pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2
+    Failed password for invalid user hoover from 10.0.2.2 port 4791 ssh2
+    pam_unix(sshd:auth): check pass; user unknown
+    PAM service(sshd) ignoring max retries; 6 > 3
+
+Success Examples:
+
+    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
+    pam_unix(sshd:session): session opened for user hoover by (uid=0)
+    pam_unix(sshd:session): session closed for user hoover
+
+You can use grep to find which user accounts have the most failed logins. These are the accounts that potential attackers are trying and failing to access. This example is for an Ubuntu system.
+
+    $ grep "invalid user" /var/log/auth.log | cut -d ' ' -f 10 | sort | uniq -c | sort -nr
+    23 oracle
+    18 postgres
+    17 nagios
+    10 zabbix
+    6 test
+
+You’ll need to write a different command for each application and message because there is no standard format. Log management systems that automatically parse logs will effectively normalize them and help you extract key fields like username.
+
+Log management systems can extract the usernames from your Linux logs using automated parsing. This lets you see an overview of the users and filter on them with a single click. In this example, we can see that the root user logged in over 2,700 times because we are filtering the logs to show login attempts only for the root user.
+
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png)
+
+Log management systems also let you view graphs over time to spot unusual trends. If someone had one or two failed logins within a few minutes, it might be that a real user forgot his or her password. However, if there are hundreds of failed logins or they are all different usernames, it’s more likely that someone is trying to attack the system. Here you can see that on March 12, someone tried to log in as test and nagios several hundred times. This is clearly not a legitimate use of the system.
+
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png)
+
+### Cause of Reboots ###
+
+Sometimes a server can stop due to a system crash or reboot. How do you know when it happened and who did it?
+
+#### Shutdown Command ####
+
+If someone ran the shutdown command manually, you can see it in the auth log file. Here you can see that someone remotely logged in from the IP 50.0.134.125 as the user ubuntu and then shut the system down.
+
+    Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh
+    Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
+    Mar 19 18:37:09 ip-172-31-11-231 sudo: ubuntu : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/sbin/shutdown -r now
+
+#### Kernel Initializing ####
+
+If you want to see when the server restarted regardless of reason (including crashes) you can search logs from the kernel initializing. You’d search for the facility kernel messages and Initializing cpu.
+
+    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset
+    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu
+    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Linux version 3.8.0-44-generic (buildd@tipua) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 (Ubuntu 3.8.0-44.66~precise1-generic 3.8.13.25)
+
+### Detect Memory Problems ###
+
+There are lots of reasons a server might crash, but one common cause is running out of memory.
+
+When your system is low on memory, processes are killed, typically in the order of which ones will release the most resources. The error occurs when your system is using all of its memory and a new or existing process attempts to access additional memory. Look in your log files for strings like Out of Memory or for kernel warnings like to kill. These strings indicate that your system intentionally killed the process or application rather than allowing the process to crash.
+
+Examples:
+
+    [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
+    [29923450.995084] select 5230 (docker), adj 0, size 708, to kill
+
+You can find these logs using a tool like grep. This example is for Ubuntu:
+
+    $ grep "Out of memory" /var/log/syslog
+    [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
+
+Keep in mind that grep itself uses memory, so you might cause an out of memory error just by running grep. This is another reason it’s a fabulous idea to centralize your logs!
+
+### Log Cron Job Errors ###
+
+The cron daemon is a scheduler that runs processes at specified dates and times. If the process fails to run or fails to finish, then a cron error appears in your log files. You can find these files in /var/log/cron, /var/log/messages, and /var/log/syslog depending on your distribution. There are many reasons a cron job can fail. Usually the problems lie with the process rather than the cron daemon itself.
+
+By default, cron sends job output through email using Postfix. Here is a log showing that an email was sent. Unfortunately, you cannot see the contents of the message here.
+
+    Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from=
+    Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110>
+    Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=, size=607, nrcpt=1 (queue active)
+    Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp)
+
+You should consider logging the cron standard output to help debug problems. Here is how you can redirect your cron standard output to syslog using the logger command. Replace the echo command with your own script and helloCron with whatever you want to set the appName to.
+
+    */5 * * * * echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron
+
+This creates the log entries:
+
+    Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron)
+    Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World!
+
+Each cron job will log differently based on the specific type of job and how it outputs data. Hopefully there are clues to the root cause of problems within the logs, or you can add additional logging as needed.
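+If you only care about failures, one variation (the /usr/local/bin/backup.sh path and the backupCron tag here are placeholders for your own script and tag) is to log a message only when the job exits non-zero:
+
+    */5 * * * * /usr/local/bin/backup.sh || /usr/bin/logger -t backupCron -p cron.err "backup job failed with exit code $?"
+
+Because -p sets the syslog facility and severity, these failure events can be filtered or alerted on separately from routine cron output.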
+ +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-logs/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:http://linux.die.net/man/8/pam.d \ No newline at end of file From c06d768d03a95906e23bb18e5f4db16df178c668 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 3 Aug 2015 19:52:52 +0800 Subject: [PATCH 075/207] Update 20150803 Handy commands for profiling your Unix file systems.md --- ...0803 Handy commands for profiling your Unix file systems.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150803 Handy commands for profiling your Unix file systems.md b/sources/tech/20150803 Handy commands for profiling your Unix file systems.md index ae5951b0d7..359aba14c9 100644 --- a/sources/tech/20150803 Handy commands for profiling your Unix file systems.md +++ b/sources/tech/20150803 Handy commands for profiling your Unix file systems.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Handy commands for profiling your Unix file systems ================================================================================ ![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) @@ -61,4 +62,4 @@ via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.ht 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ \ No newline at end of file +[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ From 6660ef7b90c207bfa032b6db21ae448434ecfa47 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 3 Aug 2015 19:55:12 +0800 Subject: [PATCH 076/207] Update 20150803 Troubleshooting with Linux Logs.md --- sources/tech/20150803 Troubleshooting with Linux Logs.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150803 Troubleshooting with Linux Logs.md b/sources/tech/20150803 Troubleshooting with Linux Logs.md index 8f595427a9..9ee0820a9c 100644 --- a/sources/tech/20150803 Troubleshooting with Linux Logs.md +++ b/sources/tech/20150803 Troubleshooting with Linux Logs.md @@ -1,3 +1,4 @@ +translation by strugglingyouth Troubleshooting with Linux Logs ================================================================================ Troubleshooting is the main reason people create logs. Often you’ll want to diagnose why a problem happened with your Linux system or application. An error message or a sequence of events can give you clues to the root cause, indicate how to reproduce the issue, and point out ways to fix it. Here are a few use cases for things you might want to troubleshoot in your logs. 
@@ -113,4 +114,4 @@ via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-log [a1]:https://www.linkedin.com/in/jasonskowronski [a2]:https://www.linkedin.com/in/amyecheverri [a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 -[1]:http://linux.die.net/man/8/pam.d \ No newline at end of file +[1]:http://linux.die.net/man/8/pam.d From 59b2f6c25fc31bc791983d869f02b3f2bf97d78a Mon Sep 17 00:00:00 2001 From: ictlyh Date: Mon, 3 Aug 2015 20:21:50 +0800 Subject: [PATCH 077/207] [Translated] tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md --- ...erver behind NAT via reverse SSH tunnel.md | 131 ------------------ ...erver behind NAT via reverse SSH tunnel.md | 131 ++++++++++++++++++ ...Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md} | 0 ...oncepts of RAID and RAID Levels – Part 1.md} | 0 4 files changed, 131 insertions(+), 131 deletions(-) delete mode 100644 sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md create mode 100644 translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md rename translated/tech/{Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 => Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md} (100%) rename translated/tech/{Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 => Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md} (100%) diff --git a/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md b/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md deleted file mode 100644 index 4239073013..0000000000 --- a/sources/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md +++ /dev/null @@ -1,131 +0,0 @@ -ictlyh Translating -How to access a Linux server behind NAT via reverse SSH tunnel -================================================================================ -You are running a Linux server at home, which is behind a NAT router or restrictive firewall. Now you want to SSH to the home server while you are away from home. How would you set that up? SSH port forwarding will certainly be an option. However, port forwarding can become tricky if you are dealing with multiple nested NAT environment. Besides, it can be interfered with under various ISP-specific conditions, such as restrictive ISP firewalls which block forwarded ports, or carrier-grade NAT which shares IPv4 addresses among users. - -### What is Reverse SSH Tunneling? ### - -One alternative to SSH port forwarding is **reverse SSH tunneling**. The concept of reverse SSH tunneling is simple. For this, you will need another host (so-called "relay host") outside your restrictive home network, which you can connect to via SSH from where you are. You could set up a relay host using a [VPS instance][1] with a public IP address. What you do then is to set up a persistent SSH tunnel from the server in your home network to the public relay host. With that, you can connect "back" to the home server from the relay host (which is why it's called a "reverse" tunnel). As long as the relay host is reachable to you, you can connect to your home server wherever you are, or however restrictive your NAT or firewall is in your home network. 
- -![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg) - -### Set up a Reverse SSH Tunnel on Linux ### - -Let's see how we can create and use a reverse SSH tunnel. We assume the following. We will be setting up a reverse SSH tunnel from homeserver to relayserver, so that we can SSH to homeserver via relayserver from another computer called clientcomputer. The public IP address of **relayserver** is 1.1.1.1. - -On homeserver, open an SSH connection to relayserver as follows. - - homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1 - -Here the port 10022 is any arbitrary port number you can choose. Just make sure that this port is not used by other programs on relayserver. - -The "-R 10022:localhost:22" option defines a reverse tunnel. It forwards traffic on port 10022 of relayserver to port 22 of homeserver. - -With "-fN" option, SSH will go right into the background once you successfully authenticate with an SSH server. This option is useful when you do not want to execute any command on a remote SSH server, and just want to forward ports, like in our case. - -After running the above command, you will be right back to the command prompt of homeserver. - -Log in to relayserver, and verify that 127.0.0.1:10022 is bound to sshd. If so, that means a reverse tunnel is set up correctly. - - relayserver~$ sudo netstat -nap | grep 10022 - ----------- - - tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd - -Now from any other computer (e.g., clientcomputer), log in to relayserver. Then access homeserver as follows. - - relayserver~$ ssh -p 10022 homeserver_user@localhost - -One thing to take note is that the SSH login/password you type for localhost should be for homeserver, not for relayserver, since you are logging in to homeserver via the tunnel's local endpoint. So do not type login/password for relayserver. After successful login, you will be on homeserver. - -### Connect Directly to a NATed Server via a Reverse SSH Tunnel ### - -While the above method allows you to reach **homeserver** behind NAT, you need to log in twice: first to **relayserver**, and then to **homeserver**. This is because the end point of an SSH tunnel on relayserver is binding to loopback address (127.0.0.1). - -But in fact, there is a way to reach NATed homeserver directly with a single login to relayserver. For this, you will need to let sshd on relayserver forward a port not only from loopback address, but also from an external host. This is achieved by specifying **GatewayPorts** option in sshd running on relayserver. - -Open /etc/ssh/sshd_conf of **relayserver** and add the following line. - - relayserver~$ vi /etc/ssh/sshd_conf - ----------- - - GatewayPorts clientspecified - -Restart sshd. - -Debian-based system: - - relayserver~$ sudo /etc/init.d/ssh restart - -Red Hat-based system: - - relayserver~$ sudo systemctl restart sshd - -Now let's initiate a reverse SSH tunnel from homeserver as follows. -homeserver~$ ssh -fN -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1 - -Log in to relayserver and confirm with netstat command that a reverse SSH tunnel is established successfully. - - relayserver~$ sudo netstat -nap | grep 10022 - ----------- - - tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev - -Unlike a previous case, the end point of a tunnel is now at 1.1.1.1:10022 (relayserver's public IP address), not 127.0.0.1:10022. This means that the end point of the tunnel is reachable from an external host. 
-
-Now from any other computer (e.g., clientcomputer), type the following command to gain access to the NATed homeserver.
-
-    clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1
-
-In the above command, while 1.1.1.1 is the public IP address of relayserver, homeserver_user must be the user account associated with homeserver. This is because the real host you are logging in to is homeserver, not relayserver. The latter simply relays your SSH traffic to homeserver.
-
-### Set up a Persistent Reverse SSH Tunnel on Linux ###
-
-Now that you understand how to create a reverse SSH tunnel, let's make the tunnel "persistent", so that the tunnel is up and running all the time (regardless of temporary network congestion, SSH timeouts, relay host reboots, etc.). After all, if the tunnel is not always up, you won't be able to connect to your home server reliably.
-
-For a persistent tunnel, I am going to use a tool called autossh. As the name implies, this program allows you to automatically restart an SSH session should it break for any reason. So it is useful for keeping a reverse SSH tunnel active.
-
-As the first step, let's set up [passwordless SSH login][2] from homeserver to relayserver. That way, autossh can restart a broken reverse SSH tunnel without any user involvement.
-
-Next, [install autossh][3] on homeserver, where the tunnel is initiated.
-
-From homeserver, run autossh with the following arguments to create a persistent SSH tunnel destined for relayserver.
-
-    homeserver~$ autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
-
-The "-M 10900" option specifies a monitoring port on relayserver which will be used to exchange test data to monitor the SSH session. This port should not be used by any other program on relayserver.
-
-The "-fN" option is passed to the ssh command, and lets the SSH tunnel run in the background.
-
-The "-o XXXX" options tell ssh to:
-
-- Use key authentication, not password authentication.
-- Automatically accept (unknown) SSH host keys.
-- Exchange keep-alive messages every 60 seconds.
-- Send up to 3 keep-alive messages without receiving any response back.
-
-The rest of the reverse SSH tunneling related options remain the same as before.
-
-If you want the SSH tunnel to be automatically up upon boot, you can add the above autossh command to /etc/rc.local.
-
-### Conclusion ###
-
-In this post, I talked about how you can use a reverse SSH tunnel to access a Linux server behind a restrictive firewall or NAT gateway from the outside world. While I demonstrated its use case for a home network, you must be careful when applying it to corporate networks. Such a tunnel can be considered a breach of corporate policy, as it circumvents corporate firewalls and can expose corporate networks to outside attacks. There is a great chance it could be misused or abused. So always remember its implications before setting it up.
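-
-As a side note on the boot-time setup mentioned above: on distributions that boot with systemd, /etc/rc.local is not always executed, so a small unit file is a more dependable way to keep the autossh tunnel alive. The following is only a sketch — the unit name, user account and autossh path are assumptions you should adapt to your own system:
-
-    # /etc/systemd/system/reverse-tunnel.service (hypothetical name)
-    [Unit]
-    Description=Persistent reverse SSH tunnel to relayserver
-    Wants=network-online.target
-    After=network-online.target
-
-    [Service]
-    User=homeserver_user
-    # no -f here: under systemd, autossh must stay in the foreground
-    ExecStart=/usr/bin/autossh -M 10900 -N -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1
-    Restart=always
-    RestartSec=10
-
-    [Install]
-    WantedBy=multi-user.target
-
-Enable it once with "systemctl enable reverse-tunnel", and systemd will start the tunnel at boot and restart it whenever it dies, achieving the same effect as the rc.local line.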
- --------------------------------------------------------------------------------- - -via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/nanni -[1]:http://xmodulo.com/go/digitalocean -[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html -[3]:http://ask.xmodulo.com/install-autossh-linux.html diff --git a/translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md b/translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md new file mode 100644 index 0000000000..5f9828e912 --- /dev/null +++ b/translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md @@ -0,0 +1,131 @@ +如何通过反向 SSH 隧道访问 NAT 后面的 Linux 服务器 +================================================================================ +你在家里运行着一台 Linux 服务器,访问它需要先经过 NAT 路由器或者限制性防火墙。现在你想不在家的时候用 SSH 登录到这台服务器。你如何才能做到呢?SSH 端口转发当然是一种选择。但是,如果你需要处理多个嵌套的 NAT 环境,端口转发可能会变得非常棘手。另外,在多种 ISP 特定条件下可能会受到干扰,例如阻塞转发端口的限制性 ISP 防火墙、或者在用户间共享 IPv4 地址的运营商级 NAT。 + +### 什么是反向 SSH 隧道? ### + +SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧道的概念非常简单。对于此,在限制性家庭网络之外你需要另一台主机(所谓的“中继主机”),你能从当前所在地通过 SSH 登录。你可以用有公共 IP 地址的 [VPS 实例][1] 配置一个中继主机。然后要做的就是从你家庭网络服务器中建立一个到公共中继主机的永久 SSH 隧道。有了这个隧道,你就可以从中继主机中连接“回”家庭服务器(这就是为什么称之为 “反向” 隧道)。不管你在哪里、你家庭网络中的 NAT 或 防火墙限制多么严重,只要你可以访问中继主机,你就可以连接到家庭服务器。 + +![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg) + +### 在 Linux 上设置反向 SSH 隧道 ### + +让我们来看看怎样创建和使用反向 SSH 隧道。我们有如下假设。我们会设置一个从家庭服务器到中继服务器的反向 SSH 隧道,然后我们可以通过中继服务器从客户端计算机 SSH 登录到家庭服务器。**中继服务器** 的公共 IP 地址是 1.1.1.1。 + +在家庭主机上,按照以下方式打开一个到中继服务器的 SSH 连接。 + + homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1 + +这里端口 10022 是任何你可以使用的端口数字。只需要确保中继服务器上不会有其它程序使用这个端口。 + +“-R 10022:localhost:22” 选项定义了一个反向隧道。它转发中继服务器 10022 端口的流量到家庭服务器的 22 号端口。 + +用 “-fN” 选项,当你用一个 SSH 服务器成功通过验证时 SSH 会进入后台运行。当你不想在远程 SSH 服务器执行任何命令、就像我们的例子中只想转发端口的时候非常有用。 + +运行上面的命令之后,你就会回到家庭主机的命令行提示框中。 + +登录到中继服务器,确认 127.0.0.1:10022 绑定到了 sshd。如果是的话就表示已经正确设置了反向隧道。 + + relayserver~$ sudo netstat -nap | grep 10022 + +---------- + + tcp 0 0 127.0.0.1:10022 0.0.0.0:* LISTEN 8493/sshd + +现在就可以从任何其它计算机(客户端计算机)登录到中继服务器,然后按照下面的方法访问家庭服务器。 + + relayserver~$ ssh -p 10022 homeserver_user@localhost + +需要注意的一点是你在本地输入的 SSH 登录/密码应该是家庭服务器的,而不是中继服务器的,因为你是通过隧道的本地端点登录到家庭服务器。因此不要输入中继服务器的登录/密码。成功登陆后,你就在家庭服务器上了。 + +### 通过反向 SSH 隧道直接连接到网络地址变换后的服务器 ### + +上面的方法允许你访问 NAT 后面的 **家庭服务器**,但你需要登录两次:首先登录到 **中继服务器**,然后再登录到**家庭服务器**。这是因为中继服务器上 SSH 隧道的端点绑定到了回环地址(127.0.0.1)。 + +事实上,有一种方法可以只需要登录到中继服务器就能直接访问网络地址变换之后的家庭服务器。要做到这点,你需要让中继服务器上的 sshd 不仅转发回环地址上的端口,还要转发外部主机的端口。这通过指定中继服务器上运行的 sshd 的 **网关端口** 实现。 + +打开**中继服务器**的 /etc/ssh/sshd_conf 并添加下面的行。 + + relayserver~$ vi /etc/ssh/sshd_conf + +---------- + + GatewayPorts clientspecified + +重启 sshd。 + +基于 Debian 的系统: + + relayserver~$ sudo /etc/init.d/ssh restart + +基于红帽的系统: + + relayserver~$ sudo systemctl restart sshd + +现在在家庭服务器中按照下面方式初始化一个反向 SSH 隧道。 + + homeserver~$ ssh -fN -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1 + +登录到中继服务器然后用 netstat 命令确认成功建立的一个反向 SSH 隧道。 + + relayserver~$ sudo netstat -nap | grep 10022 + +---------- + + tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev + +不像之前的情况,现在隧道的端点是 1.1.1.1:10022(中继服务器的公共 IP 地址),而不是 127.0.0.1:10022。这就意味着从外部主机可以访问隧道的端点。 + +现在在任何其它计算机(客户端计算机),输入以下命令访问网络地址变换之后的家庭服务器。 + + clientcomputer~$ ssh -p 10022 
homeserver_user@1.1.1.1 + +在上面的命令中,1.1.1.1 是中继服务器的公共 IP 地址,家庭服务器用户必须是和家庭服务器相关联的用户账户。这是因为你真正登录到的主机是家庭服务器,而不是中继服务器。后者只是中继你的 SSH 流量到家庭服务器。 + +### 在 Linux 上设置一个永久反向 SSH 隧道 ### + +现在你已经明白了怎样创建一个反向 SSH 隧道,然后把隧道设置为 “永久”,这样隧道启动后就会一直运行(不管临时的网络拥塞、SSH 超时、中继主机重启,等等)。毕竟,如果隧道不是一直有效,你不可能可靠的登录到你的家庭服务器。 + +对于永久隧道,我打算使用一个叫 autossh 的工具。正如名字暗示的,这个程序允许你不管任何理由自动重启 SSH 会话。因此对于保存一个反向 SSH 隧道有效非常有用。 + +第一步,我们要设置从家庭服务器到中继服务器的[无密码 SSH 登录][2]。这样的话,autossh 可以不需要用户干预就能重启一个损坏的反向 SSH 隧道。 + +下一步,在初始化隧道的家庭服务器上[安装 autossh][3]。 + +在家庭服务器上,用下面的参数运行 autossh 来创建一个连接到中继服务器的永久 SSH 隧道。 + + homeserver~$ autossh -M 10900 -fN -o "PubkeyAuthentication=yes" -o "StrictHostKeyChecking=false" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 1.1.1.1:10022:localhost:22 relayserver_user@1.1.1.1 + +“-M 10900” 选项指定中继服务器上的监视端口,用于交换监视 SSH 会话的测试数据。中继服务器上的其它程序不能使用这个端口。 + +“-fN” 选项传递给 ssh 命令,让 SSH 隧道在后台运行。 + +“-o XXXX” 选项让 ssh: + +- 使用密钥验证,而不是密码验证。 +- 自动接受(未知)SSH 主机密钥。 +- 每 60 秒交换 keep-alive 消息。 +- 没有收到任何响应时最多发送 3 条 keep-alive 消息。 + +其余 SSH 隧道相关的选项和之前介绍的一样。 + +如果你想系统启动时自动运行 SSH 隧道,你可以将上面的 autossh 命令添加到 /etc/rc.local。 + +### 总结 ### + +在这篇博文中,我介绍了你如何能从外部中通过反向 SSH 隧道访问限制性防火墙或 NAT 网关之后的 Linux 服务器。尽管我介绍了家庭网络中的一个使用事例,在企业网络中使用时你尤其要小心。这样的一个隧道可能被视为违反公司政策,因为它绕过了企业的防火墙并把企业网络暴露给外部攻击。这很可能被误用或者滥用。因此在使用之前一定要记住它的作用。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html + +作者:[Dan Nanni][a] +译者:[ictlyh](https://github.com/ictlyh) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/go/digitalocean +[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html +[3]:http://ask.xmodulo.com/install-autossh-linux.html diff --git a/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 b/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md similarity index 100% rename from translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2 rename to translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md diff --git a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md similarity index 100% rename from translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1 rename to translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md From f1e0bd44ae78a3ffd31607226d2d44e846a15e5a Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Mon, 3 Aug 2015 21:31:58 +0800 Subject: [PATCH 078/207] [Translated]20150128 7 communities driving open source development.md --- ...unities driving open source development.md | 56 +++++++++---------- 1 file changed, 27 insertions(+), 29 deletions(-) diff --git a/sources/talk/20150128 7 communities driving open source development.md b/sources/talk/20150128 7 communities driving open source development.md index c3b6df31d2..2074ad9e23 100644 --- a/sources/talk/20150128 7 communities driving open source development.md +++ b/sources/talk/20150128 7 communities driving open source development.md @@ -1,79 +1,77 @@ -FSSlc Translating - -7 communities driving open source development +7 个驱动开源发展的社区 
================================================================================ -Not so long ago, the open source model was the rebellious kid on the block, viewed with suspicion by established industry players. Today, open initiatives and foundations are flourishing with long lists of vendor committers who see the model as a key to innovation. +不久前,开源模式还被成熟的工业厂商以怀疑的态度认作是叛逆小孩的玩物。如今,开放的倡议和基金会在一长列的供应商提供者的支持下正蓬勃发展,而他们将开源模式视作创新的关键。 ![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg) -### Open Development of Tech Drives Innovation ### +### 技术的开放发展驱动着创新 ### -Over the past two decades, open development of technology has come to be seen as a key to driving innovation. Even companies that once saw open source as a threat have come around — Microsoft, for example, is now active in a number of open source initiatives. To date, most open development has focused on software. But even that is changing as communities have begun to coalesce around open hardware initiatives. Here are seven organizations that are successfully promoting and developing open technologies, both hardware and software. +在过去的 20 几年间,技术的开放发展已被视作驱动创新的关键因素。即使那些以前将开源视作威胁的公司也开始接受这个观点 — 例如微软,如今它在一系列的开源倡议中表现活跃。到目前为止,大多数的开放发展都集中在软件方面,但甚至这个也正在改变,因为社区已经开始向开源硬件倡议方面聚拢。这里有 7 个成功地在硬件和软件方面同时促进和发展开源技术的组织。 -### OpenPOWER Foundation ### +### OpenPOWER 基金会 ### ![](http://images.techhive.com/images/article/2015/01/1_openpower-100539100-orig.jpg) -The [OpenPOWER Foundation][2] was founded by IBM, Google, Mellanox, Tyan and NVIDIA in 2013 to drive open collaboration hardware development in the same spirit as the open source software development which has found fertile ground in the past two decades. +[OpenPOWER 基金会][2] 由 IBM, Google, Mellanox, Tyan 和 NVIDIA 于 2013 年共同创建,在与开源软件发展相同的精神下,旨在驱动开放协作硬件的发展,在过去的 20 几年间,开源软件发展已经找到了肥沃的土壤。 -IBM seeded the foundation by opening up its Power-based hardware and software technologies, offering licenses to use Power IP in independent hardware products. More than 70 members now work together to create custom open servers, components and software for Linux-based data centers. +IBM 通过开放其基于 Power 架构的硬件和软件技术,向使用 Power IP 的独立硬件产品提供许可证等方式为基金会的建立播下种子。如今超过 70 个成员共同协作来为基于 Linux 的数据中心提供自定义的开放服务器,组件和硬件。 -In April, OpenPOWER unveiled a technology roadmap based on new POWER8 process-based servers capable of analyzing data 50 times faster than the latest x86-based systems. In July, IBM and Google released a firmware stack. October saw the availability of NVIDIA GPU accelerated POWER8 systems and the first OpenPOWER reference server from Tyan. +今年四月,在比最新基于 x86 系统快 50 倍的数据分析能力的新的 POWER8 处理器的服务器的基础上, OpenPOWER 推出了一个技术路线图。七月, IBM 和 Google 发布了一个固件堆栈。十月见证了 NVIDIA GPU 带来加速 POWER8 系统的能力和来自 Tyan 的第一个 OpenPOWER 参考服务器。 -### The Linux Foundation ### +### Linux 基金会 ### ![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg) -Founded in 2000, [The Linux Foundation][2] is now the host for the largest open source, collaborative development effort in history, with more than 180 corporate members and many individual and student members. It sponsors the work of key Linux developers and promotes, protects and advances the Linux operating system and collaborative software development. 
+于 2000 年建立的 [Linux 基金会][2] 如今成为掌控着历史上最大的开源协同发展成果,它有着超过 180 个合作成员和许多独立成员及学生成员。它赞助核心 Linux 开发者的工作并促进、保护和推进 Linux 操作系统和协作软件的开发。 -Some of its most successful collaborative projects include Code Aurora Forum (a consortium of companies with projects serving the mobile wireless industry), MeeGo (a project to build a Linux kernel-based operating system for mobile devices and IVI) and the Open Virtualization Alliance (which fosters the adoption of free and open source software virtualization solutions). +它最为成功的协作项目包括 Code Aurora Forum (一个拥有为移动无线产业服务的企业财团),MeeGo (一个为移动设备和 IVI (注:指的是车载消息娱乐设备,为 In-Vehicle Infotainment 的简称) 构建一个基于 Linux 内核的操作系统的项目) 和 Open Virtualization Alliance (开放虚拟化联盟,它促进自由和开源软件虚拟化解决方案的采用)。 -### Open Virtualization Alliance ### +### 开放虚拟化联盟 ### ![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg) -The [Open Virtualization Alliance (OVA)][3] exists to foster the adoption of free and open source software virtualization solutions like Kernel-based Virtual Machine (KVM) through use cases and support for the development of interoperable common interfaces and APIs. KVM turns the Linux kernel into a hypervisor. +[开放虚拟化联盟(OVA)][3] 的存在目的为:通过提供使用案例和对具有互操作性的通用接口和 API 的发展提供支持,来促进自由、开源软件的虚拟化解决方案例如 KVM 的采用。KVM 将 Linux 内核转变为一个虚拟机管理程序。 -Today, KVM is the most commonly used hypervisor with OpenStack. +如今, KVM 已成为和 OpenStack 共同使用的最为常见的虚拟机管理程序。 -### The OpenStack Foundation ### +### OpenStack 基金会 ### ![](http://images.techhive.com/images/article/2015/01/4_the-openstack-foundation-100539096-orig.jpg) -Originally launched as an Infrastructure-as-a-Service (IaaS) product by NASA and Rackspace hosting in 2010, the [OpenStack Foundation][4] has become the home for one of the biggest open source projects around. It boasts more than 200 member companies, including AT&T, AMD, Avaya, Canonical, Cisco, Dell and HP. +原本作为一个 IaaS(基础设施即服务) 产品由 NASA 和 Rackspace 于 2010 年启动,[OpenStack 基金会][4] 已成为最大的开源项目聚居地之一。它拥有超过 200 家公司成员,其中包括 AT&T, AMD, Avaya, Canonical, Cisco, Dell 和 HP。 -Organized around a six-month release cycle, the foundation's OpenStack projects are developed to control pools of processing, storage and networking resources through a data center — all managed or provisioned through a Web-based dashboard, command-line tools or a RESTful API. So far, the collaborative development supported by the foundation has resulted in the creation of OpenStack components including OpenStack Compute (a cloud computing fabric controller that is the main part of an IaaS system), OpenStack Networking (a system for managing networks and IP addresses) and OpenStack Object Storage (a scalable redundant storage system). +大约以 6 个月为一个发行周期,基金会的 OpenStack 项目被发展用来通过一个基于 Web 的仪表盘,命令行工具或一个 RESTful 风格的 API 来控制或调配流经一个数据中心的处理存储池和网络资源。至今为止,基金会支持的协作发展已经孕育出了一系列 OpenStack 组件,其中包括 OpenStack Compute(一个云计算网络控制器,它是一个 IaaS 系统的主要部分),OpenStack Networking(一个用以管理网络和 IP 地址的系统) 和 OpenStack Object Storage(一个可扩展的冗余存储系统)。 ### OpenDaylight ### ![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg) -Another collaborative project to come out of the Linux Foundation, [OpenDaylight][5] is a joint initiative of industry vendors, like Dell, HP, Oracle and Avaya founded in April 2013. Its mandate is the creation of a community-led, open, industry-supported framework consisting of code and blueprints for Software-Defined Networking (SDN). 
The idea is to provide a fully functional SDN platform that can be deployed directly, without requiring other components, though vendors can offer add-ons and enhancements. +作为来自 Linux 基金会的另一个协作项目, [OpenDaylight][5] 是一个由诸如 Dell, HP, Oracle 和 Avaya 等行业厂商于 2013 年 4 月建立的联合倡议。它的任务是建立一个由社区主导,开放,有工业支持的针对 Software-Defined Networking (SDN) 的包含代码和蓝图的框架。其思路是提供一个可直接部署的全功能 SDN 平台,而不需要其他组件,供应商可提供附件组件和增强组件。 -### Apache Software Foundation ### +### Apache 软件基金会 ### ![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg) -The [Apache Software Foundation (ASF)][7] is home to nearly 150 top level projects ranging from open source enterprise automation software to a whole ecosystem of distributed computing projects related to Apache Hadoop. These projects deliver enterprise-grade, freely available software products, while the Apache License is intended to make it easy for users, whether commercial or individual, to deploy Apache products. +[Apache 软件基金会 (ASF)][7] 是将近 150 个顶级项目的聚居地,这些项目涵盖从开源企业级自动化软件到与 Apache Hadoop 相关的分布式计算的整个生态系统。这些项目分发企业级、可免费获取的软件产品,而 Apache 协议则是为了让无论是商业用户还是个人用户更方便地部署 Apache 的产品。 -ASF was incorporated in 1999 as a membership-based, not-for-profit corporation with meritocracy at its heart — to become a member you must first be actively contributing to one or more of the foundation's collaborative projects. +ASF 于 1999 年作为一个会员制,非盈利公司注册,其核心为精英 — 要成为它的成员,你必须首先在基金会的一个或多个协作项目中做出积极贡献。 -### Open Compute Project ### +### 开放计算项目 ### ![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg) -An outgrowth of Facebook's redesign of its Oregon data center, the [Open Compute Project (OCP)][7] aims to develop open hardware solutions for data centers. The OCP is an initiative made up of cheap, vanity-free servers, modular I/O storage for Open Rack (a rack standard designed for data centers to integrate the rack into the data center infrastructure) and a relatively "green" data center design. +作为 Facebook 重新设计其 Oregon 数据中心的副产物, [开放计算项目][7] 旨在发展针对数据中心的开放硬件解决方案。 OCP 是一个由廉价、无浪费的服务器,针对 Open Rack(为数据中心设计的机架标准,来让机架集成到数据中心的基础设施中) 的模块化 I/O 存储和一个相对 "绿色" 的数据中心设计方案等构成。 -OCP board members include representatives from Facebook, Intel, Goldman Sachs, Rackspace and Microsoft. +OCP 董事会成员包括来自 Facebook,Intel,Goldman Sachs,Rackspace 和 Microsoft 的代表。 -OCP recently announced two options for licensing: an Apache 2.0-like license that allows for derivative works and a more prescriptive license that encourages changes to be rolled back into the original software. 
+OCP 最近宣布了许可证的两个选择: 一个类似 Apache 2.0 的允许衍生工作的许可证和一个更规范的鼓励回滚到原有软件的更改的许可证。 -------------------------------------------------------------------------------- via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities-driving-open-source-development.html 作者:[Thor Olavsrud][a] -译者:[译者ID](https://github.com/译者ID) +译者:[FSSlc](https://github.com/FSSlc) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 @@ -85,4 +83,4 @@ via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities [4]:http://www.openstack.org/foundation/ [5]:http://www.opendaylight.org/ [6]:http://www.apache.org/ -[7]:http://www.opencompute.org/ +[7]:http://www.opencompute.org/ \ No newline at end of file From 87c50a0cd95d1f25360ef2b85cf47a997aa333e0 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 3 Aug 2015 22:27:09 +0800 Subject: [PATCH 079/207] =?UTF-8?q?=E8=A1=A5=E5=AE=8C=20PR?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @FSSlc --- .../20150128 7 communities driving open source development.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/talk/20150128 7 communities driving open source development.md (100%) diff --git a/sources/talk/20150128 7 communities driving open source development.md b/translated/talk/20150128 7 communities driving open source development.md similarity index 100% rename from sources/talk/20150128 7 communities driving open source development.md rename to translated/talk/20150128 7 communities driving open source development.md From 2ee44522e991b6d9f8be3aab4e341a3c473772dc Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 3 Aug 2015 22:36:14 +0800 Subject: [PATCH 080/207] =?UTF-8?q?=E8=B6=85=E6=9C=9F=E5=9B=9E=E6=94=B6?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @wi-cuckoo @KevinSJ --- .../tech/20150717 How to monitor NGINX with Datadog - Part 3.md | 1 - sources/tech/20150717 How to monitor NGINX- Part 1.md | 1 - 2 files changed, 2 deletions(-) diff --git a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md index 40787cdd96..949fd3d949 100644 --- a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md +++ b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md @@ -1,4 +1,3 @@ -translating wi-cuckoo How to monitor NGINX with Datadog - Part 3 ================================================================================ ![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png) diff --git a/sources/tech/20150717 How to monitor NGINX- Part 1.md b/sources/tech/20150717 How to monitor NGINX- Part 1.md index 97ab822fca..1ae6858792 100644 --- a/sources/tech/20150717 How to monitor NGINX- Part 1.md +++ b/sources/tech/20150717 How to monitor NGINX- Part 1.md @@ -1,4 +1,3 @@ -KevinSJ Translating How to monitor NGINX - Part 1 ================================================================================ ![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) From 42083f4166af08cd49ac5749ca8f770663545f95 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 3 Aug 2015 22:42:03 +0800 Subject: [PATCH 081/207] Update 20150717 How to monitor NGINX with Datadog - Part 3.md --- .../20150717 How to monitor NGINX with Datadog - Part 3.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git 
a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md index 949fd3d949..727c552ed0 100644 --- a/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md +++ b/sources/tech/20150717 How to monitor NGINX with Datadog - Part 3.md @@ -1,3 +1,4 @@ +translation by strugglingyouth How to monitor NGINX with Datadog - Part 3 ================================================================================ ![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png) @@ -147,4 +148,4 @@ via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ [16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics [17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up [18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md -[19]:https://github.com/DataDog/the-monitor/issues \ No newline at end of file +[19]:https://github.com/DataDog/the-monitor/issues From c52f369407f617927b6cc65e9a07937e957acd50 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Mon, 3 Aug 2015 22:44:22 +0800 Subject: [PATCH 082/207] Update 20150717 How to monitor NGINX- Part 1.md --- sources/tech/20150717 How to monitor NGINX- Part 1.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150717 How to monitor NGINX- Part 1.md b/sources/tech/20150717 How to monitor NGINX- Part 1.md index 1ae6858792..690ab192ba 100644 --- a/sources/tech/20150717 How to monitor NGINX- Part 1.md +++ b/sources/tech/20150717 How to monitor NGINX- Part 1.md @@ -1,3 +1,4 @@ +translation by strugglingyouth How to monitor NGINX - Part 1 ================================================================================ ![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) From afcff7a42fd2eb7d06bc5cdf51ca624424c8f75a Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 3 Aug 2015 23:14:11 +0800 Subject: [PATCH 083/207] PUB:20150717 Howto Configure FTP Server with Proftpd on Fedora 22 @zpl1025 --- ...re FTP Server with Proftpd on Fedora 22.md | 20 ++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) rename {translated/tech => published}/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md (83%) diff --git a/translated/tech/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md b/published/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md similarity index 83% rename from translated/tech/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md rename to published/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md index 0ccfe69b8f..d812c1b0ac 100644 --- a/translated/tech/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md +++ b/published/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md @@ -1,11 +1,13 @@ 如何在 Fedora 22 上配置 Proftpd 服务器 ================================================================================ -在本文中,我们将了解如何在运行 Fedora 22 的电脑或服务器上使用 Proftpd 架设 FTP 服务器。[ProFTPD][1] 是一款免费的基于 GPL 授权开源的 FTP 服务器软件,是 Linux 上的主流 FTP 服务器。它的主要设计目标是具备许多高级功能以及能为用户提供丰富的配置选项可以轻松实现定制。它的许多配置选项在其他一些 FTP 服务器软件里仍然没有集成。最初它是被开发作为 wu-ftpd 服务器的一个更安全更容易配置的替代。FTP 服务器是这样一个软件,用户可以通过 FTP 客户端从安装了它的远端服务器上传或下载文件和目录。下面是一些 ProFTPD 服务器的主要功能,更详细的资料可以访问 [http://www.proftpd.org/features.html][2]。 +在本文中,我们将了解如何在运行 Fedora 22 的电脑或服务器上使用 Proftpd 架设 FTP 服务器。[ProFTPD][1] 是一款基于 GPL 授权的自由开源 FTP 服务器软件,是 Linux 上的主流 FTP 服务器。它的主要设计目标是提供许多高级功能以及给用户提供丰富的配置选项以轻松实现定制。它具备许多在其他一些 FTP 
服务器软件里仍然没有的配置选项。最初它是被开发作为 wu-ftpd 服务器的一个更安全更容易配置的替代。 -- 每个目录都包含 ".ftpaccess" 文件用于访问控制,类似 Apache 的 ".htaccess" +FTP 服务器是这样一个软件,用户可以通过 FTP 客户端从安装了它的远端服务器上传或下载文件和目录。下面是一些 ProFTPD 服务器的主要功能,更详细的资料可以访问 [http://www.proftpd.org/features.html][2]。 + +- 每个目录都可以包含 ".ftpaccess" 文件用于访问控制,类似 Apache 的 ".htaccess" - 支持多个虚拟 FTP 服务器以及多用户登录和匿名 FTP 服务。 - 可以作为独立进程启动服务或者通过 inetd/xinetd 启动 -- 它的文件/目录属性、属主和权限采用类 UNIX 方式。 +- 它的文件/目录属性、属主和权限是基于 UNIX 方式的。 - 它可以独立运行,保护系统避免 root 访问可能带来的损坏。 - 模块化的设计让它可以轻松扩展其他模块,比如 LDAP 服务器,SSL/TLS 加密,RADIUS 支持,等等。 - ProFTPD 服务器还支持 IPv6. @@ -38,7 +40,7 @@ ### 3. 添加 FTP 用户 ### -在设定好了基本的配置文件后,我们很自然地希望为指定目录添加 FTP 用户。目前用来登录的用户是 FTP 服务自动生成的,可以用来登录到 FTP 服务器。但是,在这篇教程里,我们将创建一个以 ftp 服务器上指定目录为主目录的新用户。 +在设定好了基本的配置文件后,我们很自然地希望添加一个以特定目录为根目录的 FTP 用户。目前登录的用户自动就可以使用 FTP 服务,可以用来登录到 FTP 服务器。但是,在这篇教程里,我们将创建一个以 ftp 服务器上指定目录为主目录的新用户。 下面,我们将建立一个名字是 ftpgroup 的新用户组。 @@ -57,7 +59,7 @@ Retype new password: passwd: all authentication tokens updated successfully. -现在,我们将通过下面命令为这个 ftp 用户设定主目录的读写权限。 +现在,我们将通过下面命令为这个 ftp 用户设定主目录的读写权限(LCTT 译注:这是SELinux 相关设置,如果未启用 SELinux,可以不用)。 $ sudo setsebool -P allow_ftpd_full_access=1 $ sudo setsebool -P ftp_home_dir=1 @@ -129,7 +131,7 @@ 如果 **打开了 TLS/SSL 加密**,执行下面的命令。 - $sudo firewall-cmd --add-port=1024-65534/tcp + $ sudo firewall-cmd --add-port=1024-65534/tcp $ sudo firewall-cmd --add-port=1024-65534/tcp --permanent 如果 **没有打开 TLS/SSL 加密**,执行下面的命令。 @@ -158,7 +160,7 @@ ### 7. 登录到 FTP 服务器 ### -现在,如果都是按照本教程设置好的,我们一定可以连接到 ftp 服务器并使用以上设置的信息登录上去。在这里,我们将配置一下 FTP 客户端 filezilla,使用 **服务器的 IP 或 URL **作为主机名,协议选择 **FTP**,用户名填入 **arunftp**,密码是在上面第 3 步中设定的密码。如果你按照第 4 步中的方式打开了 TLS 支持,还需要在加密类型中选择 **显式要求基于 TLS 的 FTP**,如果没有打开,也不想使用 TLS 加密,那么加密类型选择 **简单 FTP**。 +现在,如果都是按照本教程设置好的,我们一定可以连接到 ftp 服务器并使用以上设置的信息登录上去。在这里,我们将配置一下 FTP 客户端 filezilla,使用 **服务器的 IP 或名称 **作为主机名,协议选择 **FTP**,用户名填入 **arunftp**,密码是在上面第 3 步中设定的密码。如果你按照第 4 步中的方式打开了 TLS 支持,还需要在加密类型中选择 **要求显式的基于 TLS 的 FTP**,如果没有打开,也不想使用 TLS 加密,那么加密类型选择 **简单 FTP**。 ![FTP 登录细节](http://blog.linoxide.com/wp-content/uploads/2015/06/ftp-login-details.png) @@ -170,7 +172,7 @@ ### 总结 ### -最后,我们成功地在 Fedora 22 机器上安装并配置好了 Proftpd FTP 服务器。Proftpd 是一个超级强大,能高度配置和扩展的 FTP 守护软件。上面的教程展示了如何配置一个采用 TLS 加密的安全 FTP 服务器。强烈建议设置 FTP 服务器支持 TLS 加密,因为它允许使用 SSL 凭证加密数据传输和登录。本文中,我们也没有配置 FTP 的匿名访问,因为一般受保护的 FTP 系统不建议这样做。 FTP 访问让人们的上传和下载变得非常简单也更高效。我们还可以改变用户端口增加安全性。好吧,如果你有任何疑问,建议,反馈,请在下面评论区留言,这样我们就能够改善并更新文章内容。谢谢!玩的开心 :-) +最后,我们成功地在 Fedora 22 机器上安装并配置好了 Proftpd FTP 服务器。Proftpd 是一个超级强大,能高度定制和扩展的 FTP 守护软件。上面的教程展示了如何配置一个采用 TLS 加密的安全 FTP 服务器。强烈建议设置 FTP 服务器支持 TLS 加密,因为它允许使用 SSL 凭证加密数据传输和登录。本文中,我们也没有配置 FTP 的匿名访问,因为一般受保护的 FTP 系统不建议这样做。 FTP 访问让人们的上传和下载变得非常简单也更高效。我们还可以改变用户端口增加安全性。好吧,如果你有任何疑问,建议,反馈,请在下面评论区留言,这样我们就能够改善并更新文章内容。谢谢!玩的开心 :-) -------------------------------------------------------------------------------- @@ -178,7 +180,7 @@ via: http://linoxide.com/linux-how-to/configure-ftp-proftpd-fedora-22/ 作者:[Arun Pyasi][a] 译者:[zpl1025](https://github.com/zpl1025) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 1908ba60a53e99fcfc69f82dba743935707b41d3 Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 3 Aug 2015 23:46:03 +0800 Subject: [PATCH 084/207] PUB:20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall @wwy-hust --- ...Experience on Linux 'iptables' Firewall.md | 205 ++++++++++++++++++ ...Experience on Linux 'iptables' Firewall.md | 205 ------------------ 2 files changed, 205 insertions(+), 205 deletions(-) create mode 100644 published/20150604 
Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md delete mode 100644 translated/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md diff --git a/published/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md b/published/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md new file mode 100644 index 0000000000..9d8d582dfb --- /dev/null +++ b/published/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md @@ -0,0 +1,205 @@ +关于Linux防火墙'iptables'的面试问答 +================================================================================ +Nishita Agarwal是Tecmint的用户,她将分享关于她刚刚经历的一家公司(印度的一家私人公司Pune)的面试经验。在面试中她被问及许多不同的问题,但她是iptables方面的专家,因此她想分享这些关于iptables的问题和相应的答案给那些以后可能会进行相关面试的人。 + +![Linux防火墙Iptables面试问题](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg) + +所有的问题和相应的答案都基于Nishita Agarwal的记忆并经过了重写。 + +> “嗨,朋友!我叫**Nishita Agarwal**。我已经取得了理学学士学位,我的专业集中在UNIX和它的变种(BSD,Linux)。它们一直深深的吸引着我。我在存储方面有1年多的经验。我正在寻求职业上的变化,并将供职于印度的Pune公司。” + +下面是我在面试中被问到的问题的集合。我已经把我记忆中有关iptables的问题和它们的答案记录了下来。希望这会对您未来的面试有所帮助。 + +### 1. 你听说过Linux下面的iptables和Firewalld么?知不知道它们是什么,是用来干什么的? ### + +**答案** : iptables和Firewalld我都知道,并且我已经使用iptables好一段时间了。iptables主要由C语言写成,并且以GNU GPL许可证发布。它是从系统管理员的角度写的,最新的稳定版是iptables 1.4.21。iptables通常被用作类UNIX系统中的防火墙,更准确的说,可以称为iptables/netfilter。管理员通过终端/GUI工具与iptables打交道,来添加和定义防火墙规则到预定义的表中。Netfilter是内核中的一个模块,它执行包过滤的任务。 + +Firewalld是RHEL/CentOS 7(也许还有其他发行版,但我不太清楚)中最新的过滤规则的实现。它已经取代了iptables接口,并与netfilter相连接。 + +### 2. 你用过一些iptables的GUI或命令行工具么? ### + +**答案** : 虽然我既用过GUI工具,比如与[Webmin][1]结合的Shorewall;以及直接通过终端访问iptables,但我必须承认通过Linux终端直接访问iptables能给予用户更高级的灵活性、以及对其背后工作更好的理解的能力。GUI适合初级管理员,而终端适合有经验的管理员。 + +### 3. 那么iptables和firewalld的基本区别是什么呢? ### + +**答案** : iptables和firewalld都有着同样的目的(包过滤),但它们使用不同的方式。iptables与firewalld不同,在每次发生更改时都刷新整个规则集。通常iptables配置文件位于‘/etc/sysconfig/iptables‘,而firewalld的配置文件位于‘/etc/firewalld/‘。firewalld的配置文件是一组XML文件。以XML为基础进行配置的firewalld比iptables的配置更加容易,但是两者都可以完成同样的任务。例如,firewalld可以在自己的命令行界面以及基于XML的配置文件下使用iptables。 + +### 4. 如果有机会的话,你会在你所有的服务器上用firewalld替换iptables么? ### + +**答案** : 我对iptables很熟悉,它也工作的很好。如果没有任何需求需要firewalld的动态特性,那么没有理由把所有的配置都从iptables移动到firewalld。通常情况下,目前为止,我还没有看到iptables造成什么麻烦。IT技术的通用准则也说道“为什么要修一件没有坏的东西呢?”。上面是我自己的想法,但如果组织愿意用firewalld替换iptables的话,我不介意。 + +### 5. 你看上去对iptables很有信心,巧的是,我们的服务器也在使用iptables。 ### + +iptables使用的表有哪些?请简要的描述iptables使用的表以及它们所支持的链。 + +**答案** : 谢谢您的赞赏。至于您问的问题,iptables使用的表有四个,它们是: + +- Nat 表 +- Mangle 表 +- Filter 表 +- Raw 表 + +Nat表 : Nat表主要用于网络地址转换。根据表中的每一条规则修改网络包的IP地址。流中的包仅遍历一遍Nat表。例如,如果一个通过某个接口的包被修饰(修改了IP地址),该流中其余的包将不再遍历这个表。通常不建议在这个表中进行过滤,由NAT表支持的链称为PREROUTING 链,POSTROUTING 链和OUTPUT 链。 + +Mangle表 : 正如它的名字一样,这个表用于校正网络包。它用来对特殊的包进行修改。它能够修改不同包的头部和内容。Mangle表不能用于地址伪装。支持的链包括PREROUTING 链,OUTPUT 链,Forward 链,Input 链和POSTROUTING 链。 + +Filter表 : Filter表是iptables中使用的默认表,它用来过滤网络包。如果没有定义任何规则,Filter表则被当作默认的表,并且基于它来过滤。支持的链有INPUT 链,OUTPUT 链,FORWARD 链。 + +Raw表 : Raw表在我们想要配置之前被豁免的包时被使用。它支持PREROUTING 链和OUTPUT 链。 + +### 6. 简要谈谈什么是iptables中的目标值(能被指定为目标),他们有什么用 ### + +**答案** : 下面是在iptables中可以指定为目标的值: + +- ACCEPT : 接受包 +- QUEUE : 将包传递到用户空间 (应用程序和驱动所在的地方) +- DROP : 丢弃包 +- RETURN : 将控制权交回调用的链并且为当前链中的包停止执行下一调用规则 + +### 7. 让我们来谈谈iptables技术方面的东西,我的意思是说实际使用方面 ### + +你怎么检测在CentOS中安装iptables时需要的iptables的rpm? 
+ +**答案** : iptables已经被默认安装在CentOS中,我们不需要单独安装它。但可以这样检测rpm: + + # rpm -qa iptables + + iptables-1.4.21-13.el7.x86_64 + +如果您需要安装它,您可以用yum来安装。 + + # yum install iptables-services + +### 8. 怎样检测并且确保iptables服务正在运行? ### + +**答案** : 您可以在终端中运行下面的命令来检测iptables的状态。 + + # service status iptables [On CentOS 6/5] + # systemctl status iptables [On CentOS 7] + +如果iptables没有在运行,可以使用下面的语句 + + ---------------- 在CentOS 6/5下 ---------------- + # chkconfig --level 35 iptables on + # service iptables start + + ---------------- 在CentOS 7下 ---------------- + # systemctl enable iptables + # systemctl start iptables + +我们还可以检测iptables的模块是否被加载: + + # lsmod | grep ip_tables + +### 9. 你怎么检查iptables中当前定义的规则呢? ### + +**答案** : 当前的规则可以简单的用下面的命令查看: + + # iptables -L + +示例输出 + + Chain INPUT (policy ACCEPT) + target prot opt source destination + ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED + ACCEPT icmp -- anywhere anywhere + ACCEPT all -- anywhere anywhere + ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + + Chain FORWARD (policy ACCEPT) + target prot opt source destination + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + + Chain OUTPUT (policy ACCEPT) + target prot opt source destination + +### 10. 你怎样刷新所有的iptables规则或者特定的链呢? ### + +**答案** : 您可以使用下面的命令来刷新一个特定的链。 + + # iptables --flush OUTPUT + +要刷新所有的规则,可以用: + + # iptables --flush + +### 11. 请在iptables中添加一条规则,接受所有从一个信任的IP地址(例如,192.168.0.7)过来的包。 ### + +**答案** : 上面的场景可以通过运行下面的命令来完成。 + + # iptables -A INPUT -s 192.168.0.7 -j ACCEPT + +我们还可以在源IP中使用标准的斜线和子网掩码: + + # iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT + # iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT + +### 12. 怎样在iptables中添加规则以ACCEPT,REJECT,DENY和DROP ssh的服务? ### + +**答案** : 但愿ssh运行在22端口,那也是ssh的默认端口,我们可以在iptables中添加规则来ACCEPT ssh的tcp包(在22号端口上)。 + + # iptables -A INPUT -s -p tcp --dport 22 -j ACCEPT + +REJECT ssh服务(22号端口)的tcp包。 + + # iptables -A INPUT -s -p tcp --dport 22 -j REJECT + +DENY ssh服务(22号端口)的tcp包。 + + + # iptables -A INPUT -s -p tcp --dport 22 -j DENY + +DROP ssh服务(22号端口)的tcp包。 + + + # iptables -A INPUT -s -p tcp --dport 22 -j DROP + +### 13. 让我给你另一个场景,假如有一台电脑的本地IP地址是192.168.0.6。你需要封锁在21、22、23和80号端口上的连接,你会怎么做? 
### + +**答案** : 这时,我所需要的就是在iptables中使用‘multiport‘选项,并将要封锁的端口号跟在它后面。上面的场景可以用下面的一条语句搞定: + + # iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dport 22,23,80,8080 -j DROP + +可以用下面的语句查看写入的规则。 + + # iptables -L + + Chain INPUT (policy ACCEPT) + target prot opt source destination + ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED + ACCEPT icmp -- anywhere anywhere + ACCEPT all -- anywhere anywhere + ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + DROP tcp -- 192.168.0.6 anywhere multiport dports ssh,telnet,http,webcache + + Chain FORWARD (policy ACCEPT) + target prot opt source destination + REJECT all -- anywhere anywhere reject-with icmp-host-prohibited + + Chain OUTPUT (policy ACCEPT) + target prot opt source destination + +**面试官** : 好了,我问的就是这些。你是一个很有价值的雇员,我们不会错过你的。我将会向HR推荐你的名字。如果你有什么问题,请问我。 + +作为一个候选人我不愿不断的问将来要做的项目的事以及公司里其他的事,这样会打断愉快的对话。更不用说HR轮会不会比较难,总之,我获得了机会。 + +同时我要感谢Avishek和Ravi(我的朋友)花时间帮我整理我的面试。 + +朋友!如果您有过类似的面试,并且愿意与数百万Tecmint读者一起分享您的面试经历,请将您的问题和答案发送到admin@tecmint.com。 + +谢谢!保持联系。如果我能更好的回答我上面的问题的话,请记得告诉我。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/ + +作者:[Avishek Kumar][a] +译者:[wwy-hust](https://github.com/wwy-hust) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/ diff --git a/translated/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md b/translated/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md deleted file mode 100644 index 1d476d0f18..0000000000 --- a/translated/tech/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md +++ /dev/null @@ -1,205 +0,0 @@ -Nishita Agarwal分享它关于Linux防火墙'iptables'的面试经验 -================================================================================ -Nishita Agarwal是Tecmint的用户,她将分享关于她刚刚经历的一家公司(私人公司Pune,印度)的面试经验。在面试中她被问及许多不同的问题,但她是iptables方面的专家,因此她想分享这些关于iptables的问题和相应的答案给那些以后可能会进行相关面试的人。 - -![Linux防火墙Iptables面试问题](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-iptables-Interview-Questions.jpg) - -所有的问题和相应的答案都基于Nishita Agarwal的记忆并经过了重写。 - -> “嗨,朋友!我叫**Nishita Agarwal**。我已经取得了理学学士学位,我的专业集中在UNIX和它的变种(BSD,Linux)。它们一直深深的吸引着我。我在存储方面有1年多的经验。我正在寻求职业上的变化,并将供职于印度的Pune公司。” - -下面是我在面试中被问到的问题的集合。我已经把我记忆中有关iptables的问题和它们的答案记录了下来。希望这会对您未来的面试有所帮助。 - -### 1. 你听说过Linux下面的iptables和Firewalld么?知不知道它们是什么,是用来干什么的? ### - -> **答案** : iptables和Firewalld我都知道,并且我已经使用iptables好一段时间了。iptables主要由C语言写成,并且以GNU GPL许可证发布。它是从系统管理员的角度写的,最新的稳定版是iptables 1.4.21。iptables通常被认为是类UNIX系统中的防火墙,更准确的说,可以称为iptables/netfilter。管理员通过终端/GUI工具与iptables打交道,来添加和定义防火墙规则到预定义的表中。Netfilter是内核中的一个模块,它执行过滤的任务。 -> -> Firewalld是RHEL/CentOS 7(也许还有其他发行版,但我不太清楚)中最新的过滤规则的实现。它已经取代了iptables接口,并与netfilter相连接。 - -### 2. 你用过一些iptables的GUI或命令行工具么? ### - -> **答案** : 虽然我既用过GUI工具,比如与[Webmin][1]结合的Shorewall;以及直接通过终端访问iptables。但我必须承认通过Linux终端直接访问iptables能给予用户更高级的灵活性、以及对其背后工作更好的理解的能力。GUI适合初级管理员而终端适合有经验的管理员。 - -### 3. 那么iptables和firewalld的基本区别是什么呢? 
### - -> **答案** : iptables和firewalld都有着同样的目的(包过滤),但它们使用不同的方式。iptables与firewalld不同,在每次发生更改时都刷新整个规则集。通常iptables配置文件位于‘/etc/sysconfig/iptables‘,而firewalld的配置文件位于‘/etc/firewalld/‘。firewalld的配置文件是一组XML文件。以XML为基础进行配置的firewalld比iptables的配置更加容易,但是两者都可以完成同样的任务。例如,firewalld可以在自己的命令行界面以及基于XML的配置文件下使用iptables。 - -### 4. 如果有机会的话,你会在你所有的服务器上用firewalld替换iptables么? ### - -> **答案** : 我对iptables很熟悉,它也工作的很好。如果没有任何需求需要firewalld的动态特性,那么没有理由把所有的配置都从iptables移动到firewalld。通常情况下,目前为止,我还没有看到iptables造成什么麻烦。IT技术的通用准则也说道“为什么要修一件没有坏的东西呢?”。上面是我自己的想法,但如果组织愿意用firewalld替换iptables的话,我不介意。 - -### 5. 你看上去对iptables很有信心,巧的是,我们的服务器也在使用iptables。 ### - -iptables使用的表有哪些?请简要的描述iptables使用的表以及它们所支持的链。 - -> **答案** : 谢谢您的赞赏。至于您问的问题,iptables使用的表有四个,它们是: -> -> Nat 表 -> Mangle 表 -> Filter 表 -> Raw 表 -> -> Nat表 : Nat表主要用于网络地址转换。根据表中的每一条规则修改网络包的IP地址。流中的包仅遍历一遍Nat表。例如,如果一个通过某个接口的包被修饰(修改了IP地址),该流中其余的包将不再遍历这个表。通常不建议在这个表中进行过滤,由NAT表支持的链称为PREROUTING Chain,POSTROUTING Chain和OUTPUT Chain。 -> -> Mangle表 : 正如它的名字一样,这个表用于校正网络包。它用来对特殊的包进行修改。它能够修改不同包的头部和内容。Mangle表不能用于地址伪装。支持的链包括PREROUTING Chain,OUTPUT Chain,Forward Chain,InputChain和POSTROUTING Chain。 -> -> Filter表 : Filter表是iptables中使用的默认表,它用来过滤网络包。如果没有定义任何规则,Filter表则被当作默认的表,并且基于它来过滤。支持的链有INPUT Chain,OUTPUT Chain,FORWARD Chain。 -> -> Raw表 : Raw表在我们想要配置之前被豁免的包时被使用。它支持PREROUTING Chain 和OUTPUT Chain。 - -### 6. 简要谈谈什么是iptables中的目标值(能被指定为目标),他们有什么用 ### - -> **答案** : 下面是在iptables中可以指定为目标的值: -> -> ACCEPT : 接受包 -> QUEUE : 将包传递到用户空间 (应用程序和驱动所在的地方) -> DROP : 丢弃包 -> RETURN : 将控制权交回调用的链并且为当前链中的包停止执行下一调规则 - -### 7. 让我们来谈谈iptables技术方面的东西,我的意思是说实际使用方面 ### - -你怎么检测在CentOS中安装iptables时需要的iptables的rpm? - -> **答案** : iptables已经被默认安装在CentOS中,我们不需要单独安装它。但可以这样检测rpm: -> -> # rpm -qa iptables -> -> iptables-1.4.21-13.el7.x86_64 -> -> 如果您需要安装它,您可以用yum来安装。 -> -> # yum install iptables-services - -### 8. 怎样检测并且确保iptables服务正在运行? ### - -> **答案** : 您可以在终端中运行下面的命令来检测iptables的状态。 -> -> # service status iptables [On CentOS 6/5] -> # systemctl status iptables [On CentOS 7] -> -> 如果iptables没有在运行,可以使用下面的语句 -> -> ---------------- 在CentOS 6/5下 ---------------- -> # chkconfig --level 35 iptables on -> # service iptables start -> -> ---------------- 在CentOS 7下 ---------------- -> # systemctl enable iptables -> # systemctl start iptables -> -> 我们还可以检测iptables的模块是否被加载: -> -> # lsmod | grep ip_tables - -### 9. 你怎么检查iptables中当前定义的规则呢? ### - -> **答案** : 当前的规则可以简单的用下面的命令查看: -> -> # iptables -L -> -> 示例输出 -> -> Chain INPUT (policy ACCEPT) -> target prot opt source destination -> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED -> ACCEPT icmp -- anywhere anywhere -> ACCEPT all -- anywhere anywhere -> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh -> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited -> -> Chain FORWARD (policy ACCEPT) -> target prot opt source destination -> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited -> -> Chain OUTPUT (policy ACCEPT) -> target prot opt source destination - -### 10. 你怎样刷新所有的iptables规则或者特定的链呢? ### - -> **答案** : 您可以使用下面的命令来刷新一个特定的链。 -> -> # iptables --flush OUTPUT -> -> 要刷新所有的规则,可以用: -> -> # iptables --flush - -### 11. 请在iptables中添加一条规则,接受所有从一个信任的IP地址(例如,192.168.0.7)过来的包。 ### - -> **答案** : 上面的场景可以通过运行下面的命令来完成。 -> -> # iptables -A INPUT -s 192.168.0.7 -j ACCEPT -> -> 我们还可以在源IP中使用标准的斜线和子网掩码: -> -> # iptables -A INPUT -s 192.168.0.7/24 -j ACCEPT -> # iptables -A INPUT -s 192.168.0.7/255.255.255.0 -j ACCEPT - -### 12. 怎样在iptables中添加规则以ACCEPT,REJECT,DENY和DROP ssh的服务? 
### - -> **答案** : 但愿ssh运行在22端口,那也是ssh的默认端口,我们可以在iptables中添加规则来ACCEPT ssh的tcp包(在22号端口上)。 -> -> # iptables -A INPUT -s -p tcp --dport 22 -j ACCEPT -> -> REJECT ssh服务(22号端口)的tcp包。 -> -> # iptables -A INPUT -s -p tcp --dport 22 -j REJECT -> -> DENY ssh服务(22号端口)的tcp包。 -> -> -> # iptables -A INPUT -s -p tcp --dport 22 -j DENY -> -> DROP ssh服务(22号端口)的tcp包。 -> -> -> # iptables -A INPUT -s -p tcp --dport 22 -j DROP - -### 13. 让我给你另一个场景,假如有一台电脑的本地IP地址是192.168.0.6。你需要封锁在21、22、23和80号端口上的连接,你会怎么做? ### - -> **答案** : 这时,我所需要的就是在iptables中使用‘multiport‘选项,并将要封锁的端口号跟在它后面。上面的场景可以用下面的一条语句搞定: -> -> # iptables -A INPUT -s 192.168.0.6 -p tcp -m multiport --dport 22,23,80,8080 -j DROP -> -> 可以用下面的语句查看写入的规则。 -> -> # iptables -L -> -> Chain INPUT (policy ACCEPT) -> target prot opt source destination -> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED -> ACCEPT icmp -- anywhere anywhere -> ACCEPT all -- anywhere anywhere -> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh -> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited -> DROP tcp -- 192.168.0.6 anywhere multiport dports ssh,telnet,http,webcache -> -> Chain FORWARD (policy ACCEPT) -> target prot opt source destination -> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited -> -> Chain OUTPUT (policy ACCEPT) -> target prot opt source destination - -**面试官** : 好了,我问的就是这些。你是一个很有价值的雇员,我们不会错过你的。我将会向HR推荐你的名字。如果你有什么问题,请问我。 - -作为一个候选人我不愿不断的问将来要做的项目的事以及公司里其他的事,这样会打断愉快的对话。更不用说HR轮会不会比较难,总之,我获得了机会。 - -同时我要感谢Avishek和Ravi(我的朋友)花时间帮我整理我的面试。 - -朋友!如果您有过类似的面试,并且愿意与数百万Tecmint读者一起分享您的面试经历,请将您的问题和答案发送到admin@tecmint.com。 - -谢谢!保持联系。如果我能更好的回答我上面的问题的话,请记得告诉我。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-firewall-iptables-interview-questions-and-answers/ - -作者:[Avishek Kumar][a] -译者:[wwy-hust](https://github.com/wwy-hust) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/install-webmin-web-based-system-administration-tool-for-rhel-centos-fedora/ From f250353b717561555612c15b4fe71910487aec56 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 4 Aug 2015 00:08:13 +0800 Subject: [PATCH 085/207] PUB:20150730 Compare PDF Files on Ubuntu @GOLinux --- .../20150730 Compare PDF Files on Ubuntu.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) rename {translated/tech => published}/20150730 Compare PDF Files on Ubuntu.md (81%) diff --git a/translated/tech/20150730 Compare PDF Files on Ubuntu.md b/published/20150730 Compare PDF Files on Ubuntu.md similarity index 81% rename from translated/tech/20150730 Compare PDF Files on Ubuntu.md rename to published/20150730 Compare PDF Files on Ubuntu.md index 3215caf23f..57b933765f 100644 --- a/translated/tech/20150730 Compare PDF Files on Ubuntu.md +++ b/published/20150730 Compare PDF Files on Ubuntu.md @@ -1,15 +1,15 @@ -Ubuntu上比较PDF文件 +如何在 Ubuntu 上比较 PDF 文件 ================================================================================ 如果你想要对PDF文件进行比较,你可以使用下面工具之一。 ### Comparepdf ### -comparepdf是一个命令行应用,用于将两个PDF文件进行对比。默认对比模式文本模式,该模式会对各对相关页面进行文字对比。只要一检测到差异,该程序就会终止,并显示一条信息(除非设置了-v0)和一个指示性的返回码。 +comparepdf是一个命令行应用,用于将两个PDF文件进行对比。默认对比模式是文本模式,该模式会对各对相关页面进行文字对比。只要一检测到差异,该程序就会终止,并显示一条信息(除非设置了-v0)和一个指示性的返回码。 -用于文本模式对比的选项有 -ct 或 --compare=text(默认),用于视觉对比(这对图标或其它图像发生改变时很有用)的选项有 -ca 或 --compare=appearance。而 -v=1 或 --verbose=1 选项则用于报告差异(或者对匹配文件不作任何回应):使用 -v=0 选项取消报告,或者 -v=2 
来同时报告不同的和匹配的文件。 +用于文本模式对比的选项有 -ct 或 --compare=text(默认),用于视觉对比(这对图标或其它图像发生改变时很有用)的选项有 -ca 或 --compare=appearance。而 -v=1 或 --verbose=1 选项则用于报告差异(或者对匹配文件不作任何回应);使用 -v=0 选项取消报告,或者 -v=2 来同时报告不同的和匹配的文件。 -### 安装comparepdf到Ubuntu ### +#### 安装comparepdf到Ubuntu #### 打开终端,然后运行以下命令 @@ -19,17 +19,17 @@ comparepdf是一个命令行应用,用于将两个PDF文件进行对比。默 comparepdf [OPTIONS] file1.pdf file2.pdf -**Diffpdf** +###Diffpdf### DiffPDF是一个图形化应用程序,用于对两个PDF文件进行对比。默认情况下,它只会对比两个相关页面的文字,但是也支持对图形化页面进行对比(例如,如果图表被修改过,或者段落被重新格式化过)。它也可以对特定的页面或者页面范围进行对比。例如,如果同一个PDF文件有两个版本,其中一个有页面1-12,而另一个则有页面1-13,因为这里添加了一个额外的页面4,它们可以通过指定两个页面范围来进行对比,第一个是1-12,而1-3,5-13则可以作为第二个页面范围。这将使得DiffPDF成对地对比这些页面(1,1),(2,2),(3,3),(4,5),(5,6),以此类推,直到(12,13)。 -### 安装 diffpdf 到 ubuntu ### +#### 安装 diffpdf 到 ubuntu #### 打开终端,然后运行以下命令 sudo apt-get install diffpdf -### 截图 ### +#### 截图 #### ![](http://www.ubuntugeek.com/wp-content/uploads/2015/07/14.png) @@ -41,7 +41,7 @@ via: http://www.ubuntugeek.com/compare-pdf-files-on-ubuntu.html 作者:[ruchi][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 07af5c5aa9a77aeb84e0faa8874660c511116f95 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 4 Aug 2015 06:51:47 +0800 Subject: [PATCH 086/207] =?UTF-8?q?=E7=A7=BB=E5=8A=A8=E8=AF=91=E6=96=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20150128 7 communities driving open source development.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/talk/20150128 7 communities driving open source development.md (100%) diff --git a/sources/talk/20150128 7 communities driving open source development.md b/translated/talk/20150128 7 communities driving open source development.md similarity index 100% rename from sources/talk/20150128 7 communities driving open source development.md rename to translated/talk/20150128 7 communities driving open source development.md From 065cdbb77acac0170f8604631294c1790fd55bb5 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 4 Aug 2015 07:04:09 +0800 Subject: [PATCH 087/207] Update 20150803 Linux Logging Basics.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- sources/tech/20150803 Linux Logging Basics.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150803 Linux Logging Basics.md b/sources/tech/20150803 Linux Logging Basics.md index d20f68f140..6c3c3693a4 100644 --- a/sources/tech/20150803 Linux Logging Basics.md +++ b/sources/tech/20150803 Linux Logging Basics.md @@ -1,3 +1,5 @@ +FSSlc translating + Linux Logging Basics ================================================================================ First we’ll describe the basics of what Linux logs are, where to find them, and how they get created. If you already know this stuff, feel free to skip to the next section. 
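 
 As a first pointer on "where to find them": on most distributions the system logs live under /var/log, and a quick look there shows what your machine records. A hedged example, assuming a syslog-based setup (the catch-all file is /var/log/syslog on Debian/Ubuntu and /var/log/messages on RHEL/CentOS — adjust to your system):
 
     ls /var/log
     tail -n 20 /var/log/syslog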
@@ -87,4 +89,4 @@ via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/ [4]:https://tools.ietf.org/html/rfc5424#section-6.2.3 [5]:https://tools.ietf.org/html/rfc5424#section-6.2.4 [6]:https://tools.ietf.org/html/rfc5424#section-6.2.5 -[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1 \ No newline at end of file +[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1 From 0a2bc302010ada1e2f78027cc3965b655e62aa56 Mon Sep 17 00:00:00 2001 From: joeren Date: Tue, 4 Aug 2015 07:54:44 +0800 Subject: [PATCH 088/207] Change --- github 2.0测试.txt | 2 -- 1 file changed, 2 deletions(-) delete mode 100644 github 2.0测试.txt diff --git a/github 2.0测试.txt b/github 2.0测试.txt deleted file mode 100644 index 7787faa3c1..0000000000 --- a/github 2.0测试.txt +++ /dev/null @@ -1,2 +0,0 @@ -111 -222 \ No newline at end of file From 6811374932148a3ce7bfe446c8670a63ace04b3e Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 09:08:16 +0800 Subject: [PATCH 089/207] =?UTF-8?q?Create=20Setting=20up=20RAID=201=20(Mir?= =?UTF-8?q?roring)=20using=20=E2=80=98Two=20Disks=E2=80=99=20in=20Linux=20?= =?UTF-8?q?=E2=80=93=20Part=203?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...oring) using ‘Two Disks’ in Linux – Part 3 | 217 ++++++++++++++++++ 1 file changed, 217 insertions(+) create mode 100644 translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 diff --git a/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 b/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 new file mode 100644 index 0000000000..948e530ed8 --- /dev/null +++ b/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 @@ -0,0 +1,217 @@ +在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分 +================================================================================ +RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。 + + +![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) + +在 Linux 中设置 RAID1 + +创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。 + +### RAID 1 的特点 ### + +-镜像具有良好的性能。 + +-磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 + +-在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 + +-读取数据会比写入性能更好。 + +#### 要求 #### + + +创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 + +这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。 + +需要阅读: [Basic Concepts of RAID in Linux][1] + +#### 在我的服务器安装 #### + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.226 + Hostname : rd1.tecmintlocal.com + Disk 1 [20GB] : /dev/sdb + Disk 2 [20GB] : /dev/sdc + +本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。 + +### 第1步:安装所需要的并且检查磁盘 ### + +1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 + + # yum install mdadm [on RedHat systems] + # apt-get install mdadm [on Debain systems] + +2. 一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 + + # mdadm -E /dev/sd[b-c] + +![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) + +检查 RAID 的磁盘 + + +正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 + +### 第2步:为 RAID 创建分区 ### + +3. 
正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 + + # fdisk /dev/sdb + +按照下面的说明 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 按两次回车键默认将整个容量分配给它。 +- 然后,按 ‘P’ 来打印创建好的分区。 +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 修改分区类型。 +- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) + +创建磁盘分区 + +在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 + + # fdisk /dev/sdc + +![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) + +创建第二个分区 + +4. 一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 + + # mdadm -E /dev/sd[b-c] + +![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) + +验证分区变化 + +![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) + +检查 RAID 类型 + +**注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 + +### 步骤3:创建 RAID1 设备 ### + +5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 + + # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 + # cat /proc/mdstat + +![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) + +创建RAID设备 + +6. 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 + + # mdadm -E /dev/sd[b-c]1 + # mdadm --detail /dev/md0 + +![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) + +检查 RAID 设备类型 + +![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) + +检查 RAID 设备阵列 + +从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 + +### 第4步:在 RAID 设备上创建文件系统 ### + +7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . + + # mkfs.ext4 /dev/md0 + +![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) + +创建 RAID 设备文件系统 + +8. 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 + + # mkdir /mnt/raid1 + # mount /dev/md0 /mnt/raid1/ + # touch /mnt/raid1/tecmint.txt + # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt + +![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) + +挂载 RAID 设备 + +9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。 + + /dev/md0 /mnt/raid1 ext4 defaults 0 0 + +![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) + +自动挂载 Raid 设备 + +10. 运行“mount -a”,检查 fstab 中的条目是否有错误 + # mount -av + +![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) + +检查 fstab 中的错误 + +11. 
接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + +![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) + +保存 Raid 的配置 + +上述配置文件在系统重启时会读取并加载 RAID 设备。 + +### 第5步:在磁盘故障后检查数据 ### + +12.我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 + + # mdadm --detail /dev/md0 + +![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) + +验证 Raid 设备 + +在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的并且 Active Devices 是2.现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 + + # ls -l /dev | grep sd + # mdadm --detail /dev/md0 + +![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) + +测试 RAID 设备 + +现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。 + + # cd /mnt/raid1/ + # cat tecmint.txt + +![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) + +验证 RAID 数据 + +你有没有看到我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid1-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 2ec2917fdba92c31201a7bdde94ac2e8befb41ed Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 09:09:17 +0800 Subject: [PATCH 090/207] =?UTF-8?q?Create=20Creating=20RAID=205=20(Stripin?= =?UTF-8?q?g=20with=20Distributed=20Parity)=20in=20Linux=20=E2=80=93=20Par?= =?UTF-8?q?t=204?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...with Distributed Parity) in Linux – Part 4 | 285 ++++++++++++++++++ 1 file changed, 285 insertions(+) create mode 100644 translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 diff --git a/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 b/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 new file mode 100644 index 0000000000..7de5199a08 --- /dev/null +++ b/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 @@ -0,0 +1,285 @@ + +在 Linux 中创建 RAID 5(条带化与分布式奇偶校验) - 第4部分 +================================================================================ +在 RAID 5 中,条带化数据跨多个驱磁盘使用分布式奇偶校验。分布式奇偶校验的条带化意味着它将奇偶校验信息和条带中的数据分布在多个磁盘上,它将有很好的数据冗余。 + +![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg) + +在 Linux 中配置 RAID 5 + +对于此 RAID 级别它至少应该有三个或更多个磁盘。RAID 5 通常被用于大规模生产环境中花费更多的成本来提供更好的数据冗余性能。 + +#### 什么是奇偶校验? 
+#### RAID 5 的优点和缺点 ####
+
+- 提供更好的性能。
+- 支持冗余和容错。
+- 支持热备份。
+- 会损失一个磁盘的容量,用于存储奇偶校验信息。
+- 单个磁盘发生故障后不会丢失数据。我们可以更换故障磁盘后,从奇偶校验信息中重建数据。
+- 事务处理和读操作会更快。
+- 由于计算奇偶校验占用资源,写操作会比较慢。
+- 重建需要很长的时间。
+
+#### 要求 ####
+
+创建 RAID 5 最少需要3个磁盘,你也可以添加更多的磁盘,前提是你要有多端口的专用硬件 RAID 控制器。在这里,我们使用“mdadm”包来创建软件 RAID。
+
+mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下 RAID 没有可用的配置文件,我们在创建和配置 RAID 后,必须把配置保存在一个单独的文件中,例如:mdadm.conf。
+
+在进一步学习之前,我建议你通过下面的文章去了解 Linux 中 RAID 的基础知识。
+
+- [Basic Concepts of RAID in Linux – Part 1][1]
+- [Creating RAID 0 (Stripe) in Linux – Part 2][2]
+- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3]
+
+#### 我的服务器设置 ####
+
+    Operating System : CentOS 6.5 Final
+    IP Address       : 192.168.0.227
+    Hostname         : rd5.tecmintlocal.com
+    Disk 1 [20GB]    : /dev/sdb
+    Disk 2 [20GB]    : /dev/sdc
+    Disk 3 [20GB]    : /dev/sdd
+
+这篇文章是 RAID 系列 9 篇教程的第 4 部分,在这里我们将在 Linux 系统或服务器上,使用三块 20GB 的磁盘(分别为 /dev/sdb、/dev/sdc 和 /dev/sdd)建立一个软件 RAID 5(分布式奇偶校验)。
+
+### 第1步:安装 mdadm 并检验磁盘 ###
+
+1. 正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID,但同样的做法也适用于其他 Linux 发行版。
+
+    # lsb_release -a
+    # ifconfig | grep inet
+
+![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png)
+
+CentOS 6.5 摘要
+
+2. 如果你是按照我们的 RAID 系列在配置,我们假设你已经安装了“mdadm”包;如果没有,可根据你的 Linux 发行版使用下面的命令安装。
+
+    # yum install mdadm		[on RedHat systems]
+    # apt-get install mdadm 	[on Debain systems]
+
+3. “mdadm”包安装后,先使用‘fdisk’命令列出我们在系统上增加的三个20GB的硬盘。
+
+    # fdisk -l | grep sd
+
+![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png)
+
+安装 mdadm 工具
+
+4. 现在检查这三个磁盘上是否已经存在 RAID 块,使用下面的命令。
+
+    # mdadm -E /dev/sd[b-d]
+    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd
+
+![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png)
+
+检查 Raid 磁盘
+
+**注意**: 上面的图片说明,没有检测到任何超级块。所以,这三个磁盘中没有定义 RAID。让我们现在开始创建一个吧!
+
+### 第2步:为磁盘创建 RAID 分区 ###
+
+5. 首先,在创建 RAID 前我们要为磁盘(/dev/sdb、/dev/sdc 和 /dev/sdd)分区,在进行下一步之前,先使用‘fdisk’命令进行分区。
+
+    # fdisk /dev/sdb
+    # fdisk /dev/sdc
+    # fdisk /dev/sdd
+
+#### 创建 /dev/sdb 分区 ####
+
+请按照下面的说明在 /dev/sdb 硬盘上创建分区。
+
+- 按 ‘n’ 创建新的分区。
+- 然后按 ‘p’ 选择主分区。选择主分区是因为还没有定义过分区。
+- 接下来选择分区号为1。默认就是1。
+- 这里是选择柱面大小,我们没必要选择指定的大小,因为我们需要为 RAID 使用整个分区,所以只需按两次回车键,默认将整个容量分配给它。
+- 然后,按 ‘p’ 来打印创建好的分区。
+- 改变分区类型,按 ‘l’ 可以列出所有可用的类型。
+- 按 ‘t’ 修改分区类型。
+- 这里使用‘fd’设置为 RAID 的类型。
+- 然后再次使用‘p’查看我们所做的更改。
+- 使用‘w’保存更改。
+
+![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png)
+
+创建 sdb 分区
+
+**注意**: 我们仍要按照上面的步骤来创建 sdc 和 sdd 的分区。
+
+#### 创建 /dev/sdc 分区 ####
+
+现在,下面的截图给出了创建 sdc 和 sdd 磁盘分区的过程,步骤与上面相同。
+
+    # fdisk /dev/sdc
+
+![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png)
+
+创建 sdc 分区
+
+#### 创建 /dev/sdd 分区 ####
+
+    # fdisk /dev/sdd
+
+![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png)
+
+创建 sdd 分区
+
+6. 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。
+
+    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd
+
+    or
+
+    # mdadm -E /dev/sd[b-d]
+
+![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png)
+
+检查磁盘变化
+
+**注意**: 在上面的图片中,磁盘的类型是 fd。
+
+7. 现在检查新创建的分区上有没有 RAID 块。如果没有检测到超级块,我们就可以继续下一步,在这些磁盘上创建新的 RAID 5 配置。
+
+![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png)
+
+在分区中检查 Raid
+
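+顺便说一下,如果不想在每块磁盘上重复交互式的 fdisk 操作,也可以把上面的按键序列通过管道交给 fdisk 批量执行。下面是一个示意脚本(按键序列 n、p、1、回车、回车、t、fd、w 与上文一致,假设磁盘就是 /dev/sdb、/dev/sdc 和 /dev/sdd;它会改写分区表,运行前务必确认磁盘名无误):
+
+    #!/bin/bash
+    # 示意:在三块盘上重复同样的 fdisk 步骤,创建 fd(Linux raid autodetect)类型的主分区
+    for disk in /dev/sdb /dev/sdc /dev/sdd; do
+        printf "n\np\n1\n\n\nt\nfd\nw\n" | fdisk "$disk"
+    done
+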
+### 第3步:创建 md 设备 md0 ###
+
+8. 现在使用所有新创建的分区(sdb1、sdc1 和 sdd1)创建一个名为“md0”(即 /dev/md0)的 RAID 设备,命令如下。
+
+    # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
+
+    or
+
+    # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1
+
+9. 创建 RAID 设备后,检查并确认 RAID,从 mdstat 的输出中可以看到包含的设备和 RAID 级别。
+
+    # cat /proc/mdstat
+
+![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png)
+
+验证 Raid 设备
+
+如果你想监视当前的创建过程,可以使用‘watch’命令,即 watch ‘cat /proc/mdstat’,它会在屏幕上显示并每隔1秒刷新一次。
+
+    # watch -n1 cat /proc/mdstat
+
+![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png)
+
+监控 Raid 5 过程
+
+![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png)
+
+Raid 5 过程概要
+
+10. 创建 RAID 后,使用以下命令验证 RAID 设备:
+
+    # mdadm -E /dev/sd[b-d]1
+
+![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png)
+
+验证 Raid 级别
+
+**注意**: 因为它要显示三个磁盘的信息,上述命令的输出会有点长。
+
+11. 接下来,查看 RAID 阵列的详细信息,确认我们加入阵列的设备都在运行,并且已经开始重新同步。
+
+    # mdadm --detail /dev/md0
+
+![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png)
+
+验证 Raid 阵列
+
+### 第4步:为 md0 创建文件系统 ###
+
+12. 在挂载前为“md0”设备创建 ext4 文件系统。
+
+    # mkfs.ext4 /dev/md0
+
+![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png)
+
+创建 md0 文件系统
+
+13. 现在,在‘/mnt’下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下,并检查挂载点下的文件,你会看到 lost+found 目录。
+
+    # mkdir /mnt/raid5
+    # mount /dev/md0 /mnt/raid5/
+    # ls -l /mnt/raid5/
+
+14. 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。
+
+    # touch /mnt/raid5/raid5_tecmint_{1..5}
+    # ls -l /mnt/raid5/
+    # echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1
+    # cat /mnt/raid5/raid5_tecmint_1
+    # cat /proc/mdstat
+
+![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png)
+
+挂载 Raid 设备
+
+15. 我们需要在 fstab 中添加条目,否则系统重启后将看不到我们的挂载点。编辑 fstab 文件,在文件末尾追加以下行,如下图所示。挂载点会因你的环境不同而不同。
+
+    # vim /etc/fstab
+
+    /dev/md0    /mnt/raid5    ext4    defaults    0 0
+
+![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png)
+
+自动挂载 Raid 5
+
+16. 接下来,运行‘mount -av’命令检查 fstab 条目中是否有错误。
+
+    # mount -av
+
+![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png)
+
+检查 Fstab 错误
+
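+另外,在保存配置或重启系统之前,最好等初次同步(resync/recovery)结束。如果想在脚本里自动等待,可以轮询 /proc/mdstat,下面是一个简单的示意:
+
+    #!/bin/bash
+    # 示意:等待 md0 的初次同步完成后再继续后面的步骤
+    while grep -qE "resync|recovery" /proc/mdstat; do
+        echo "md0 仍在同步中……"
+        sleep 10
+    done
+    echo "同步完成,可以继续了"
+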
+### 第5步:保存 Raid 5 的配置 ###
+
+17. 在前面章节已经说过,默认情况下 RAID 没有配置文件,我们必须手动保存。如果跳过这一步,系统重启后 RAID 设备将不再是 md0,而会被分配其它编号。
+
+所以,我们必须要在系统重新启动之前保存配置。只要保存了配置,系统重新启动时它就会被加载到内核中,RAID 也会随之加载。
+
+    # mdadm --detail --scan --verbose >> /etc/mdadm.conf
+
+![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png)
+
+保存 Raid 5 配置
+
+注意:保存配置可以确保重启后 md0 设备的名称和 RAID 级别保持不变。
+
+### 第6步:添加备用磁盘 ###
+
+18. 备用磁盘有什么用?它非常有用:如果我们配置了一个备用磁盘,那么当阵列中的任何一个磁盘发生故障时,它会被自动加入阵列并启动重建进程,从其他磁盘上同步数据,这就是这里的冗余所在。
+
+更多关于添加备用磁盘和检查 RAID 5 容错的指令,请阅读下面文章中的第6步和第7步。
+
+- [Add Spare Drive to Raid 5 Setup][4]
+
+### 结论 ###
+
+在这篇文章中,我们已经看到了如何使用三个磁盘配置一个 RAID 5。在接下来的文章中,我们将看到当 RAID 5 中的一个磁盘损坏后,如何排除故障并恢复。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-raid-5-in-linux/
+
+作者:[Babin Lonston][a]
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/babinlonston/
+[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
+[2]:http://www.tecmint.com/create-raid0-in-linux/
+[3]:http://www.tecmint.com/create-raid1-in-linux/
+[4]:http://www.tecmint.com/create-raid-6-in-linux/
From 593eb1799e96e40d4a8d88141bf4de850382f253 Mon Sep 17 00:00:00 2001
From: XLCYun
Date: Tue, 4 Aug 2015 09:30:10 +0800
Subject: [PATCH 091/207] Update 20150716 A Week With GNOME As My Linux
 Desktop--What They Get Right & Wrong - Page 1 - Introduction.md
---
 ...ktop--What They Get Right & Wrong - Page 1 - Introduction.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md
index de47f0864e..39f29af147 100644
--- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md
+++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md
@@ -36,7 +36,7 @@
 ![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_login3_show&w=1920)

-切换到上流的Breeve主题……突然间,我抱怨的大部分问题都被完善了。通用图标,所有东西都放在了屏幕中央,但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白,在中间也就酝酿出了一种美好的和谐。还是有一个输入框来切换会话,但既然电源按钮被做成了通用图标,那么这点还算可以原谅。当然gnome还是有一些很好的附加物,例如音量小程序和可访问按钮,但Breeze总归是Fedora的KDE主题的一个进步。
+切换到upstream的Breeze主题……突然间,我抱怨的大部分问题都被完善了。通用图标,所有东西都放在了屏幕中央,但不是那么重要的被放到了一边。因为屏幕顶部和底部都是同样的空白,在中间也就酝酿出了一种美好的和谐。还是有一个输入框来切换会话,但既然电源按钮被做成了通用图标,那么这点还算可以原谅。当然gnome还是有一些很好的附加物,例如音量小程序和可访问按钮,但Breeze总归是Fedora的KDE主题的一个进步。

 到Windows(Windows 8和10之前)或者OS X中去,你会看到类似的东西——非常简洁的,“不挡你道”的锁屏与登录界面,它们都没有输入框或者其它分散视觉的小工具。这是一种有效的不分散人注意力的设计。Fedora……默认装有Breeze。VDG在Breeze主题设计上干得不错。可别糟蹋了它。
From bba6ac1d9e04d9cdf1a2d35a77a549e83873350d Mon Sep 17 00:00:00 2001
From: XLCYun
Date: Tue, 4 Aug 2015 09:34:03 +0800
Subject: =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=8E=9F=E6=96=87=20=20XLCYun?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...t & Wrong - Page 3 - GNOME Applications.md | 62 -------------------
 1 file changed, 62 deletions(-)
 delete mode 100644 sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - 
Page 3 - GNOME Applications.md b/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md deleted file mode 100644 index c70978dc9b..0000000000 --- a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md +++ /dev/null @@ -1,62 +0,0 @@ -Translating by XLCYun. -A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 3 - GNOME Applications -================================================================================ -### Applications ### - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920) - -This is the one area where things are basically a wash. Each environment has a few applications that are really nice, and a few that are not so great. Once again though, Gnome gets the little things right in a way that KDE completely misses. None of KDE's applications are bad or broken, that's not what I'm saying. They function. But that's about it. To use an analogy: they passed the test, but they sure didn't get any where close to 100% on it. - -Gnome on left, KDE on right. Dragon performs perfectly fine, it has clearly marked buttons for playing a file, URL, or a disc, just as you can do under Gnome Videos... but Gnome takes it one extra little step further in the name of convenience and user friendliness: they show all the videos detected under your system by default, without you having to do anything. KDE has Baloo-- just as they had Nepomuk before that-- why not use them? They've got a list video files that are freely accessible... but don't make use of the feature. - -Moving on... Music Players. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920) - -Both of these applications, Rhythmbox on the left and Amarok on the right were opened up and then a screenshot was immediately taken, nothing was clicked, or altered. See the difference? Rhythmbox looks like a music player. It's direct, there's obvious ways to sort the results, it knows what is trying to be and what it's job is: to play music. - -Amarok feels like one of the tech demos, or library demos where someone puts every option and extension they possible can all inside one application in order to show them off-- it's never something that gets shipped as production, it's just there to show off bits and pieces. And that's exactly what Amarok feels like: its someone trying to show off every single possible cool thing they shove into a media player without ever stopping to think "Wait, what were trying to write again? An app to play music?" - -Just look at the default layout. What is front and center for the user? A visualizer and Wikipedia integration-- the largest and most prominent column on the page. What's the second largest? Playlist list. Third largest, aka smallest? The actual music listing. How on earth are these sane defaults for a core application? - -Software Managers! Something that has seen a lot of push in recent years and will likely only see a bigger push in the months to come. Unfortunately, it's another area where KDE was so close... and then fell on its face right at the finish line. 
- -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920) - -Gnome Software is probably my new favorite software center, minus one gripe which I will get to in a bit. Muon, I wanted to like you. I really did. But you are a design nightmare. When the VDG was drawing up plans for you (mockup below), you looked pretty slick. Good use of white space, clean design, nice category listing, your whole not-being-split-into-two-applications. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920) - -Then someone got around to coding you and doing your actual UI, and I can only guess they were drunk while they did it. - -Let's look at Gnome Software. What's smack dab in the middle? The application, its screenshots, its description, etc. What's smack dab in the middle of Muon? Gigantic waste of white space. Gnome Software also includes the lovely convenience feature of putting a "Launch" button right there in case you already have an application installed. Convenience and ease of use are important, people. Honestly, JUST having things in Muon be centered aligned would probably make things look better already. - -What's along the top edge of Gnome Software, like a tab listing? All Software, Installed, Updates. Clean language, direct, to the point. Muon? Well, we have "Discover", which works okay as far as language goes, and then we have Installed, and then nothing. Where's updates? - -Well.. the developers decided to split updates off into its own application, thus requiring you to open two applications to handle your software-- one to install it, and one to update it-- going against every Software Center paradigm that has ever existed since the Synaptic graphical package manager. - -I'm not going to show it in a screenshot just because I don't want to have to clean up my system afterwards, but if you go into Muon and start installing something the way it shows that is by adding a little tab to the bottom of your screen with the application's name. That tab doesn't go away when the application is done installing either, so if you're installing a lot of applications at a single time then you'll just slowly accumulate tabs along the bottom that you then have to go through and clean up manually, because if you don't then they grow off the screen and you have to swipe through them all to get to the most recent ones. Think: opening 50 tabs in Firefox. Major annoyance, major inconvenience. - -I did say I would bash on Gnome a bit, and I meant it. Muon does get one thing very right that Gnome Software doesn't. Under the settings bar Muon has an option for "Show Technical Packages" aka: compilers, software libraries, non-graphical applications, applications without AppData, etc. Gnome doesn't. If you want to install any of those you have to drop down to the terminal. I think that's wrong. I certainly understand wanting to push AppData but I think they pushed it too soon. What made me realize Gnome didn't have this setting was when I went to install PowerTop and couldn't get Gnome to display it-- no AppData, no "Show Technical Packages" setting. - -Doubly unfortunate is the fact that you can't "just use apper" if you're under KDE since... 
-
-![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920)
-
-Apper's support for installing local packages has been broken for since Fedora 19 or so, almost two years. I love the attention to detail and quality.
-
---------------------------------------------------------------------------------
-
-via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3
-
-作者:Eric Griffith
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
From f7e118d42898004ec5ce6ce2c842c94822a2f148 Mon Sep 17 00:00:00 2001
From: XLCYun
Date: Tue, 4 Aug 2015 09:39:34 +0800
Subject: =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...t & Wrong - Page 3 - GNOME Applications.md | 61 +++++++++++++++++++
 1 file changed, 61 insertions(+)
 create mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md

diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md
new file mode 100644
index 0000000000..42539badcc
--- /dev/null
+++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md
@@ -0,0 +1,61 @@
+将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第三节 - GNOME应用
+================================================================================
+### 应用 ###
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_videos_show&w=1920)
+
+这基本上是一个双方打成平手的领域。每个桌面环境都有一些非常好的应用,也有一些不怎么样的。再次强调,Gnome把那些KDE完全错失的小细节给做对了。我不是说KDE中有哪些应用不好,它们都能工作,但也仅此而已。也就是说:它们及格了,但确实还没有达到甚至接近100分。
+
+Gnome的在左边,KDE的在右边。Dragon运行得很好,清晰地标出了播放文件、URL或光盘的按钮,正如你在Gnome Videos中能做到的一样……但为了便利性和用户友好性,Gnome又多走了一小步:它默认显示了在你的电脑上检测到的所有影像文件,不需要你做任何事情。KDE有Baloo——正如之前有Nepomuk——为什么不使用它们?它们能列出可读取的影像文件……但却没被使用。
+
+下一个……音乐播放器。
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_rhythmbox_show&w=1920)
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_amarok_show&w=1920)
+
+这两个应用,左边的Rhythmbox和右边的Amarok,都是打开后没有做任何修改就直接截屏的。看到差别了吗?Rhythmbox看起来就像个音乐播放器,直截了当,排序方式也很清晰,它知道自己应该是什么样、自己的工作是什么:就是播放音乐。
+
+Amarok感觉像是某人为了炫技,把所有能塞的扩展和选项都尽可能塞进一个应用程序里做出来的技术演示产品(tech demo)或者库演示产品(library demo)——这类东西不该作为产品出厂,它们只是用来展示一些零碎的功能。而Amarok给人的感觉正是如此:好像是某个人想把每一个看起来可能很酷的东西都塞进一个媒体播放器里,甚至都不停下来想一想“等等,我们要写的是什么来着?一个播放音乐的应用?”
+
+看看默认布局就知道了。正面和中心呈现给用户的是什么?一个可视化工具和维基百科集成(wikipedia integration)——占了整个页面最大和最显眼的区域。第二大的呢?播放列表。第三大,同时也是最小的呢?真正的音乐列表。这种默认设置对于一个核心应用来说,怎么可能称得上理智?
+
+软件管理器!它在最近几年当中有很大的进步,而且在接下来的几个月中,很可能还会有更大的进步。不幸的是,这是另一个KDE差一点点就能做好……但还是在终点线前摔了脸的地方。
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_software_show&w=1920)
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apper_show&w=1920)
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon_show&w=1920)
+
+Gnome软件中心可能是我的新宠,不过有一点牢骚等下再发。Muon,我想爱上你,真的。但你就是个设计上的梦魇。当VDG给你画设计草稿时(模型在下面),你看起来真漂亮。留白用得很好,设计简洁,类别列表也很好,你的整个“不要分开做成两个应用程序”的思路都很不错。
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_muon1_show&w=1920)
+
+接着就有人为你写代码,实现真正的UI,但是,我猜这些家伙当时一定是喝醉了。
+
+我们来看看Gnome软件中心。正中间是什么?软件本身:软件截图、软件描述等等。Muon的正中心是什么?白白浪费的大块空白。Gnome软件中心还有一个贴心便利的特点,那就是放了一个“运行”按钮在那儿,以防你已经安装了这个软件。便利性和易用性很重要啊,各位。说实话,仅仅让Muon把东西都居中对齐,看起来的效果可能都要好得多。
+
+Gnome软件中心沿着顶部的是什么,像个标签列表?所有软件、已安装软件、软件升级。语言简洁、直接、直指要点。Muon呢,好吧,我们有个“发现”,这个语言表达上还算差强人意;然后我们有一个“已安装软件”;然后,就没有然后了。软件升级哪去了?
+
+好吧……开发者决定把升级独立分成一个应用程序,这样你就得打开两个应用程序才能管理你的软件:一个用来安装,一个用来升级。这违背了自新立得(Synaptic)图形软件包管理器问世以来所有软件中心的设计范式。
+
+我不想贴截图给你们看,因为我不想等下还得清理我的电脑。如果你进入Muon安装了什么,它就会在屏幕下方按应用名创建一个标签。如果你一次安装很多软件,下面的标签就会越积越多,而你不得不手动逐个清除它们;不清除的话,标签会多到超出屏幕,你就得一个个翻过去才能找到最近正在安装的软件。想想:在火狐浏览器里打开50个标签。太烦人,太不方便!
+
+我说过我会给Gnome一点打击,我是认真的。Muon有一点做得比Gnome软件中心好。在Muon的设置栏下面有个“显示技术包”选项,即:编译器、软件库、非图形应用程序、无AppData的应用等等(AppData,软件包中的一个特殊文件,用于专门存储软件的信息,译注)。Gnome则没有。如果你想安装其中任何一项,你必须跑到终端操作。我想这是不对的。我完全理解推行AppData的初衷,但我想他们推得太早了(推行所有软件包带有AppData,是Gnome软件中心的目标之一,译注)。我是在想安装PowerTop、而Gnome不显示这个软件时才发现这点的:没有AppData,也没有“显示技术包”设置。
+
+更不幸的是,你也不能“用Apper就行了”,因为……
+
+![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_apperlocal_show&w=1920)
+
+Apper对安装本地软件包的支持大约在Fedora 19时就坏掉了,到现在几乎两年了。我喜欢这种对细节与质量的关注。
+
+--------------------------------------------------------------------------------
+
+via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=3
+
+作者:Eric Griffith
+译者:[XLCYun](https://github.com/XLCYun)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
From 6c78506ad82f4ca2fa473cb2e7e609813b5637f6 Mon Sep 17 00:00:00 2001
From: Ezio
Date: Tue, 4 Aug 2015 09:48:34 +0800
Subject: =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

修改一些笔误
---
 ...o run Ubuntu Snappy Core on Raspberry Pi 2.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md
index c4475f39a2..f5e6fe60b2 100644
--- a/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md
+++ b/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md
@@ -2,13 +2,13 @@
 ================================================================================
 物联网(Internet of Things, IoT) 时代即将来临。很快,过不了几年,我们就会问自己当初是怎么在没有物联网的情况下生存的,就像我们现在怀疑过去没有手机的年代。Canonical 就是一个物联网快速发展却还是开放市场下的竞争者。这家公司宣称自己把赌注压到了IoT 上,就像他们已经在“云”上做过的一样。。在今年一月底,Canonical 启动了一个基于Ubuntu Core 的小型操作系统,名字叫做 [Ubuntu Snappy Core][1] 。

-Snappy 是一种用来替代deb 的新的打包格式,是一个用来更新系统的前端,从CoreOS、红帽子和其他地方借鉴了原子更新这个想法。很快树莓派2 代投入市场,Canonical 就发布了用于树莓派的Snappy Core 版本。第一代树莓派因为是基于ARMv6 ,而Ubuntu 的ARM 镜像是基于ARMv7 ,所以不能运行ubuntu 。不过这种状况现在改变了,Canonical 通过发布用于RPI2 的镜像,抓住机会澄清了Snappy 就是一个用于云计算,特别是IoT 的系统。
+Snappy 是一种用来替代deb 
的新的打包格式,是一个用来更新系统的前端,从CoreOS、红帽子和其他系统借鉴了**原子更新**这个想法。树莓派2 代投入市场,Canonical 很快就发布了用于树莓派的Snappy Core 版本。而第一代树莓派因为是基于ARMv6 ,Ubuntu 的ARM 镜像是基于ARMv7 ,所以不能运行ubuntu 。不过这种状况现在改变了,Canonical 通过发布用于RPI2 的镜像,抓住机会证明了Snappy 就是一个用于云计算,特别是用于物联网的系统。 -Snappy 同样可以运行在其它像Amazon EC2, Microsofts Azure, Google's Compute Engine 这样的云端上,也可以虚拟化在KVM、Virtuabox 和vagrant 上。Canonical 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,像Ninja Sphere、Erle Robotics,还有一些开发板生产商比如Odroid、Banana Pro, Udoo, PCDuino 和Parallella 、全志。Snappy Core 也希望很快能运行在路由器上,来帮助改进路由器生产商目前很少更新固件的策略。 +Snappy 同样可以运行在其它像Amazon EC2, Microsofts Azure, Google的 Compute Engine 这样的云端上,也可以虚拟化在KVM、Virtuabox 和vagrant 上。Canonical Ubuntu 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,比如Ninja Sphere、Erle Robotics,还有一些开发板生产商,比如Odroid、Banana Pro, Udoo, PCDuino 和Parallella 、全志,Snappy 也提供了支持。Snappy Core 同时也希望尽快运行到路由器上来帮助改进路由器生产商目前很少更新固件的策略。 接下来,让我们看看怎么样在树莓派2 上运行Snappy。 -用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是院子升级和回滚功能会蚕食不小的空间。使用Snappy 启动树莓派2 后你就可以使用默认用户名和密码(都是ubuntu)登录系统。 +用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是原子升级和回滚功能会占用不小的空间。使用Snappy 启动树莓派2 后你就可以使用默认用户名和密码(都是ubuntu)登录系统。 ![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg) @@ -18,7 +18,7 @@ sudo 已经配置好了可以直接用,安全起见,你应该使用以下命 或者也可以使用`adduser` 为你添加一个新用户。 -因为RPI缺少硬件始终,而Snappy 不知道这一点,所以系统会有一个小bug:处理命令时会报很多错。不过这个很容易解决: +因为RPI缺少硬件时钟,而Snappy 并不知道这一点,所以系统会有一个小bug:处理某些命令时会报很多错。不过这个很容易解决: 使用这个命令来确认这个bug 是否影响: @@ -36,7 +36,7 @@ sudo 已经配置好了可以直接用,安全起见,你应该使用以下命 $ sudo apt-get update && sudo apt-get distupgrade -现在将不会让你通过,因为Snappy 会使用它自己精简过的、基于dpkg 的包管理系统。这是做是应为Snappy 会运行很多嵌入式程序,而你也会想着所有事情尽可能的简化。 +不过这时系统不会让你通过,因为Snappy 使用它自己精简过的、基于dpkg 的包管理系统。这么做的原因是Snappy 会运行很多嵌入式程序,而同时你也会想着所有事情尽可能的简化。 让我们来看看最关键的部分,理解一下程序是如何与Snappy 工作的。运行Snappy 的SD 卡上除了boot 分区外还有3个分区。其中的两个构成了一个重复的文件系统。这两个平行文件系统被固定挂载为只读模式,并且任何时刻只有一个是激活的。第三个分区是一个部分可写的文件系统,用来让用户存储数据。通过更新系统,标记为'system-a' 的分区会保持一个完整的文件系统,被称作核心,而另一个平行文件系统仍然会是空的。 @@ -52,13 +52,13 @@ sudo 已经配置好了可以直接用,安全起见,你应该使用以下命 $ sudo snappy versions -a -经过更新-重启的操作,你应该可以看到被激活的核心已经被改变了。 +经过更新-重启两步操作,你应该可以看到被激活的核心已经被改变了。 因为到目前为止我们还没有安装任何软件,下面的命令: $ sudo snappy update ubuntu-core -将会生效,而且如果你打算仅仅更新特定的OS,这也是一个办法。如果出了问题,你可以使用下面的命令回滚: +将会生效,而且如果你打算仅仅更新特定的OS 版本,这也是一个办法。如果出了问题,你可以使用下面的命令回滚: $ sudo snappy rollback ubuntu-core @@ -77,7 +77,7 @@ sudo 已经配置好了可以直接用,安全起见,你应该使用以下命 via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html 作者:[Ferdinand Thommes][a] -译者:[译者ID](https://github.com/oska874) +译者:[Ezio](https://github.com/oska874) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 5e44c4c18b98bbb459008ca388adc4f54a5a59e3 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:29:24 +0800 Subject: [PATCH 095/207] Create Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md --- ... 
(Mirroring) using 'Two Disks' in Linux.md | 217 ++++++++++++++++++ 1 file changed, 217 insertions(+) create mode 100644 translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md diff --git a/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md new file mode 100644 index 0000000000..948e530ed8 --- /dev/null +++ b/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md @@ -0,0 +1,217 @@ +在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分 +================================================================================ +RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。 + + +![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) + +在 Linux 中设置 RAID1 + +创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。 + +### RAID 1 的特点 ### + +-镜像具有良好的性能。 + +-磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 + +-在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 + +-读取数据会比写入性能更好。 + +#### 要求 #### + + +创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 + +这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。 + +需要阅读: [Basic Concepts of RAID in Linux][1] + +#### 在我的服务器安装 #### + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.226 + Hostname : rd1.tecmintlocal.com + Disk 1 [20GB] : /dev/sdb + Disk 2 [20GB] : /dev/sdc + +本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。 + +### 第1步:安装所需要的并且检查磁盘 ### + +1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 + + # yum install mdadm [on RedHat systems] + # apt-get install mdadm [on Debain systems] + +2. 一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 + + # mdadm -E /dev/sd[b-c] + +![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) + +检查 RAID 的磁盘 + + +正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 + +### 第2步:为 RAID 创建分区 ### + +3. 正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 + + # fdisk /dev/sdb + +按照下面的说明 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 按两次回车键默认将整个容量分配给它。 +- 然后,按 ‘P’ 来打印创建好的分区。 +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 修改分区类型。 +- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) + +创建磁盘分区 + +在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 + + # fdisk /dev/sdc + +![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) + +创建第二个分区 + +4. 一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 + + # mdadm -E /dev/sd[b-c] + +![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) + +验证分区变化 + +![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) + +检查 RAID 类型 + +**注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 + +### 步骤3:创建 RAID1 设备 ### + +5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 + + # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 + # cat /proc/mdstat + +![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) + +创建RAID设备 + +6. 
接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 + + # mdadm -E /dev/sd[b-c]1 + # mdadm --detail /dev/md0 + +![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) + +检查 RAID 设备类型 + +![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) + +检查 RAID 设备阵列 + +从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 + +### 第4步:在 RAID 设备上创建文件系统 ### + +7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . + + # mkfs.ext4 /dev/md0 + +![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) + +创建 RAID 设备文件系统 + +8. 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 + + # mkdir /mnt/raid1 + # mount /dev/md0 /mnt/raid1/ + # touch /mnt/raid1/tecmint.txt + # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt + +![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) + +挂载 RAID 设备 + +9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。 + + /dev/md0 /mnt/raid1 ext4 defaults 0 0 + +![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) + +自动挂载 Raid 设备 + +10. 运行“mount -a”,检查 fstab 中的条目是否有错误 + # mount -av + +![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) + +检查 fstab 中的错误 + +11. 接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + +![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) + +保存 Raid 的配置 + +上述配置文件在系统重启时会读取并加载 RAID 设备。 + +### 第5步:在磁盘故障后检查数据 ### + +12.我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 + + # mdadm --detail /dev/md0 + +![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) + +验证 Raid 设备 + +在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的并且 Active Devices 是2.现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 + + # ls -l /dev | grep sd + # mdadm --detail /dev/md0 + +![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) + +测试 RAID 设备 + +现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。 + + # cd /mnt/raid1/ + # cat tecmint.txt + +![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) + +验证 RAID 数据 + +你有没有看到我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid1-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 21d4f77e67c1780b8f0a67defbe4dc8487cd512a Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:31:36 +0800 Subject: [PATCH 096/207] Create Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md --- ...iping with Distributed Parity) in Linux.md | 285 ++++++++++++++++++ 1 file changed, 285 insertions(+) create mode 100644 translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md diff --git a/translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md b/translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with 
Distributed Parity) in Linux.md new file mode 100644 index 0000000000..7de5199a08 --- /dev/null +++ b/translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md @@ -0,0 +1,285 @@ + +在 Linux 中创建 RAID 5(条带化与分布式奇偶校验) - 第4部分 +================================================================================ +在 RAID 5 中,条带化数据跨多个驱磁盘使用分布式奇偶校验。分布式奇偶校验的条带化意味着它将奇偶校验信息和条带中的数据分布在多个磁盘上,它将有很好的数据冗余。 + +![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg) + +在 Linux 中配置 RAID 5 + +对于此 RAID 级别它至少应该有三个或更多个磁盘。RAID 5 通常被用于大规模生产环境中花费更多的成本来提供更好的数据冗余性能。 + +#### 什么是奇偶校验? #### + +奇偶校验是在数据存储中检测错误最简单的一个方法。奇偶校验信息存储在每个磁盘中,比如说,我们有4个磁盘,其中一个磁盘空间被分割去存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,我们可以通过更换故障磁盘后,从奇偶校验信息重建得到原来的数据。 + +#### RAID 5 的优点和缺点 #### + +- 提供更好的性能 +- 支持冗余和容错。 +- 支持热备份。 +- 将失去一个磁盘的容量存储奇偶校验信息。 +- 单个磁盘发生故障后不会丢失数据。我们可以更换故障硬盘后从奇偶校验信息中重建数据。 +- 事务处理读操作会更快。 +- 由于奇偶校验占用资源,写操作将是缓慢的。 +- 重建需要很长的时间。 + +#### 要求 #### +创建 RAID 5 最少需要3个磁盘,你也可以添加更多的磁盘,前提是你要有多端口的专用硬件 RAID 控制器。在这里,我们使用“mdadm”包来创建软件 RAID。 + +mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下 RAID 没有可用的配置文件,我们在创建和配置 RAID 后必须将配置文件保存在一个单独的文件中,例如:mdadm.conf。 + +在进一步学习之前,我建议你通过下面的文章去了解 Linux 中 RAID 的基础知识。 + +- [Basic Concepts of RAID in Linux – Part 1][1] +- [Creating RAID 0 (Stripe) in Linux – Part 2][2] +- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3] + +#### 我的服务器设置 #### + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.227 + Hostname : rd5.tecmintlocal.com + Disk 1 [20GB] : /dev/sdb + Disk 2 [20GB] : /dev/sdc + Disk 3 [20GB] : /dev/sdd + +这篇文章是 RAID 系列9教程的第4部分,在这里我们要建立一个软件 RAID 5(分布式奇偶校验)使用三个20GB(名为/dev/sdb, /dev/sdc 和 /dev/sdd)的磁盘在 Linux 系统或服务器中上。 + +### 第1步:安装 mdadm 并检验磁盘 ### + +1.正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID 设置,但同样的做法也适用于其他 Linux 发行版。 + + # lsb_release -a + # ifconfig | grep inet + +![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png) + +CentOS 6.5 摘要 + +2. 如果你按照我们的 RAID 系列去配置的,我们假设你已经安装了“mdadm”包,如果没有,根据你的 Linux 发行版使用下面的命令安装。 + + # yum install mdadm [on RedHat systems] + # apt-get install mdadm [on Debain systems] + +3. “mdadm”包安装后,先使用‘fdisk‘命令列出我们在系统上增加的三个20GB的硬盘。 + + # fdisk -l | grep sd + +![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png) + +安装 mdadm 工具 + +4. 现在该检查这三个磁盘是否存在 RAID 块,使用下面的命令来检查。 + + # mdadm -E /dev/sd[b-d] + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd + +![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png) + +检查 Raid 磁盘 + +**注意**: 上面的图片说明,没有检测到任何超级块。所以,这三个磁盘中没有定义 RAID。让我们现在开始创建一个吧! + +### 第2步:为磁盘创建 RAID 分区 ### + +5. 首先,在创建 RAID 前我们要为磁盘分区(/dev/sdb, /dev/sdc 和 /dev/sdd),在进行下一步之前,先使用‘fdisk’命令进行分区。 + + # fdisk /dev/sdb + # fdisk /dev/sdc + # fdisk /dev/sdd + +#### 创建 /dev/sdb 分区 #### + +请按照下面的说明在 /dev/sdb 硬盘上创建分区。 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。选择主分区是因为还没有定义过分区。 +- 接下来选择分区号为1。默认就是1. 
+- 这里是选择柱面大小,我们没必要选择指定的大小,因为我们需要为 RAID 使用整个分区,所以只需按两次回车键,默认将整个容量分配给它。
+- 然后,按 ‘p’ 来打印创建好的分区。
+- 改变分区类型,按 ‘l’ 可以列出所有可用的类型。
+- 按 ‘t’ 修改分区类型。
+- 这里使用‘fd’设置为 RAID 的类型。
+- 然后再次使用‘p’查看我们所做的更改。
+- 使用‘w’保存更改。
+
+![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png)
+
+创建 sdb 分区
+
+**注意**: 我们仍要按照上面的步骤来创建 sdc 和 sdd 的分区。
+
+#### 创建 /dev/sdc 分区 ####
+
+现在,下面的截图给出了创建 sdc 和 sdd 磁盘分区的过程,步骤与上面相同。
+
+    # fdisk /dev/sdc
+
+![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png)
+
+创建 sdc 分区
+
+#### 创建 /dev/sdd 分区 ####
+
+    # fdisk /dev/sdd
+
+![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png)
+
+创建 sdd 分区
+
+6. 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。
+
+    # mdadm --examine /dev/sdb /dev/sdc /dev/sdd
+
+    or
+
+    # mdadm -E /dev/sd[b-d]
+
+![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png)
+
+检查磁盘变化
+
+**注意**: 在上面的图片中,磁盘的类型是 fd。
+
+7. 现在检查新创建的分区上有没有 RAID 块。如果没有检测到超级块,我们就可以继续下一步,在这些磁盘上创建新的 RAID 5 配置。
+
+![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png)
+
+在分区中检查 Raid
+
+### 第3步:创建 md 设备 md0 ###
+
+8. 现在使用所有新创建的分区(sdb1、sdc1 和 sdd1)创建一个名为“md0”(即 /dev/md0)的 RAID 设备,命令如下。
+
+    # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
+
+    or
+
+    # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1
+
+9. 创建 RAID 设备后,检查并确认 RAID,从 mdstat 的输出中可以看到包含的设备和 RAID 级别。
+
+    # cat /proc/mdstat
+
+![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png)
+
+验证 Raid 设备
+
+如果你想监视当前的创建过程,可以使用‘watch’命令,即 watch ‘cat /proc/mdstat’,它会在屏幕上显示并每隔1秒刷新一次。
+
+    # watch -n1 cat /proc/mdstat
+
+![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png)
+
+监控 Raid 5 过程
+
+![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png)
+
+Raid 5 过程概要
+
+10. 创建 RAID 后,使用以下命令验证 RAID 设备:
+
+    # mdadm -E /dev/sd[b-d]1
+
+![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png)
+
+验证 Raid 级别
+
+**注意**: 因为它要显示三个磁盘的信息,上述命令的输出会有点长。
+
+11. 接下来,查看 RAID 阵列的详细信息,确认我们加入阵列的设备都在运行,并且已经开始重新同步。
+
+    # mdadm --detail /dev/md0
+
+![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png)
+
+验证 Raid 阵列
+
+### 第4步:为 md0 创建文件系统 ###
+
+12. 在挂载前为“md0”设备创建 ext4 文件系统。
+
+    # mkfs.ext4 /dev/md0
+
+![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png)
+
+创建 md0 文件系统
+
+13. 现在,在‘/mnt’下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下,并检查挂载点下的文件,你会看到 lost+found 目录。
+
+    # mkdir /mnt/raid5
+    # mount /dev/md0 /mnt/raid5/
+    # ls -l /mnt/raid5/
+
+14. 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。
+
+    # touch /mnt/raid5/raid5_tecmint_{1..5}
+    # ls -l /mnt/raid5/
+    # echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1
+    # cat /mnt/raid5/raid5_tecmint_1
+    # cat /proc/mdstat
+
+![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png)
+
+挂载 Raid 设备
+
+15. 我们需要在 fstab 中添加条目,否则系统重启后将看不到我们的挂载点。编辑 fstab 文件,在文件末尾追加以下行,如下图所示。挂载点会因你的环境不同而不同。
+
+    # vim /etc/fstab
+
+    /dev/md0    /mnt/raid5    ext4    defaults    0 0
+
+![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png)
+
+自动挂载 Raid 5
+
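+顺便一提,如果不想打开编辑器,也可以像下面这样把挂载条目直接追加到 fstab(只是一个示意,路径与上文一致;追加前最好先备份该文件):
+
+    # 示意:非交互地追加 fstab 条目
+    cp /etc/fstab /etc/fstab.bak
+    echo "/dev/md0 /mnt/raid5 ext4 defaults 0 0" >> /etc/fstab
+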
+16. 接下来,运行‘mount -av’命令检查 fstab 条目中是否有错误。
+
+    # mount -av
+
+![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png)
+
+检查 Fstab 错误
+
+### 第5步:保存 Raid 5 的配置 ###
+
+17. 在前面章节已经说过,默认情况下 RAID 没有配置文件,我们必须手动保存。如果跳过这一步,系统重启后 RAID 设备将不再是 md0,而会被分配其它编号。
+
+所以,我们必须要在系统重新启动之前保存配置。只要保存了配置,系统重新启动时它就会被加载到内核中,RAID 也会随之加载。
+
+    # mdadm --detail --scan --verbose >> /etc/mdadm.conf
+
+![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png)
+
+保存 Raid 5 配置
+
+注意:保存配置可以确保重启后 md0 设备的名称和 RAID 级别保持不变。
+
+### 第6步:添加备用磁盘 ###
+
+18. 备用磁盘有什么用?它非常有用:如果我们配置了一个备用磁盘,那么当阵列中的任何一个磁盘发生故障时,它会被自动加入阵列并启动重建进程,从其他磁盘上同步数据,这就是这里的冗余所在。
+
+更多关于添加备用磁盘和检查 RAID 5 容错的指令,请阅读下面文章中的第6步和第7步。
+
+- [Add Spare Drive to Raid 5 Setup][4]
+
+### 结论 ###
+
+在这篇文章中,我们已经看到了如何使用三个磁盘配置一个 RAID 5。在接下来的文章中,我们将看到当 RAID 5 中的一个磁盘损坏后,如何排除故障并恢复。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-raid-5-in-linux/
+
+作者:[Babin Lonston][a]
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/babinlonston/
+[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/
+[2]:http://www.tecmint.com/create-raid0-in-linux/
+[3]:http://www.tecmint.com/create-raid1-in-linux/
+[4]:http://www.tecmint.com/create-raid-6-in-linux/
From c16ed6b8063def3f227e5e3f783a1b1520569574 Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Tue, 4 Aug 2015 10:32:00 +0800
Subject: =?UTF-8?q?Delete=20Setting=20up=20RAID=201=20(Mir?=
 =?UTF-8?q?roring)=20using=20=E2=80=98Two=20Disks=E2=80=99=20in=20Linux=20?=
 =?UTF-8?q?=E2=80=93=20Part=203?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 ...oring) using ‘Two Disks’ in Linux – Part 3 | 217 ------------------
 1 file changed, 217 deletions(-)
 delete mode 100644 translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3

diff --git a/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 b/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3
deleted file mode 100644
index 948e530ed8..0000000000
--- a/translated/tech/Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3
+++ /dev/null
@@ -1,217 +0,0 @@
-在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分
-================================================================================
-RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。
-
-
-![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg)
-
-在 Linux 中设置 RAID1
-
-创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。
-
-### RAID 1 的特点 ###
-
--镜像具有良好的性能。
-
--磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。
-
--在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。
-
--读取数据会比写入性能更好。
-
-#### 要求 ####
-
-
-创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。
-
-这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。
-
-需要阅读: [Basic Concepts of RAID in Linux][1]
-
-#### 在我的服务器安装 ####
-
-    Operating System : CentOS 6.5 Final
-    IP Address : 192.168.0.226
-    Hostname : rd1.tecmintlocal.com
-    Disk 1 [20GB] : /dev/sdb
-    Disk 2 [20GB] : /dev/sdc
-
-本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 
Linux 发行版如 RedHat,CentOS,Fedora 等等。 - -### 第1步:安装所需要的并且检查磁盘 ### - -1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -2. 一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 - - # mdadm -E /dev/sd[b-c] - -![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) - -检查 RAID 的磁盘 - - -正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 - -### 第2步:为 RAID 创建分区 ### - -3. 正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 - - # fdisk /dev/sdb - -按照下面的说明 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 按两次回车键默认将整个容量分配给它。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 修改分区类型。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) - -创建磁盘分区 - -在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 - - # fdisk /dev/sdc - -![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) - -创建第二个分区 - -4. 一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 - - # mdadm -E /dev/sd[b-c] - -![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) - -验证分区变化 - -![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) - -检查 RAID 类型 - -**注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 - -### 步骤3:创建 RAID1 设备 ### - -5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 - - # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 - # cat /proc/mdstat - -![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) - -创建RAID设备 - -6. 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 - - # mdadm -E /dev/sd[b-c]1 - # mdadm --detail /dev/md0 - -![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) - -检查 RAID 设备类型 - -![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) - -检查 RAID 设备阵列 - -从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 - -### 第4步:在 RAID 设备上创建文件系统 ### - -7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . - - # mkfs.ext4 /dev/md0 - -![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) - -创建 RAID 设备文件系统 - -8. 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 - - # mkdir /mnt/raid1 - # mount /dev/md0 /mnt/raid1/ - # touch /mnt/raid1/tecmint.txt - # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt - -![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) - -挂载 RAID 设备 - -9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。 - - /dev/md0 /mnt/raid1 ext4 defaults 0 0 - -![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) - -自动挂载 Raid 设备 - -10. 运行“mount -a”,检查 fstab 中的条目是否有错误 - # mount -av - -![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) - -检查 fstab 中的错误 - -11. 
接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。 - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) - -保存 Raid 的配置 - -上述配置文件在系统重启时会读取并加载 RAID 设备。 - -### 第5步:在磁盘故障后检查数据 ### - -12.我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 - - # mdadm --detail /dev/md0 - -![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) - -验证 Raid 设备 - -在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的并且 Active Devices 是2.现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 - - # ls -l /dev | grep sd - # mdadm --detail /dev/md0 - -![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) - -测试 RAID 设备 - -现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。 - - # cd /mnt/raid1/ - # cat tecmint.txt - -![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) - -验证 RAID 数据 - -你有没有看到我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid1-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From e7e25c9cb5858924af23c3ba900b69568250285e Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:32:12 +0800 Subject: [PATCH 098/207] =?UTF-8?q?Delete=20Introduction=20to=20RAID,=20Co?= =?UTF-8?q?ncepts=20of=20RAID=20and=20RAID=20Levels=20=E2=80=93=20Part=201?= =?UTF-8?q?.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ncepts of RAID and RAID Levels – Part 1.md | 146 ------------------ 1 file changed, 146 deletions(-) delete mode 100644 translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md diff --git a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md b/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md deleted file mode 100644 index 8ca0ecbd7e..0000000000 --- a/translated/tech/Introduction to RAID, Concepts of RAID and RAID Levels – Part 1.md +++ /dev/null @@ -1,146 +0,0 @@ - -RAID的级别和概念的介绍 - 第1部分 -================================================================================ -RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 - - -![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) - -在 Linux 中理解 RAID 的设置 - -RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 - -这个系列被命名为RAID的构建共包含9个部分包括以下主题。 - -- 第1部分:RAID的级别和概念的介绍 -- 第2部分:在Linux中如何设置 RAID0(条带化) -- 第3部分:在Linux中如何设置 RAID1(镜像化) -- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) -- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) -- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) -- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 -- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 -- 第9部分:在 Linux 中管理 RAID - -这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 - - -### 软件RAID和硬件RAID ### - -软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 - -硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI 
Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 - -硬件 RAID 卡如下所示: - -![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) - -硬件RAID - -#### 精选的 RAID 概念 #### - -- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 -- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 -- 镜像被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 -- 在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 -- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 - -RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 - -- RAID0 = 条带化 -- RAID1 = 镜像 -- RAID5 = 单个磁盘分布式奇偶校验 -- RAID6 = 双盘分布式奇偶校验 -- RAID10 = 镜像 + 条带。(嵌套RAID) - -RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 - -#### RAID 0(或)条带化 #### - -条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 - -假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 - -在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 - -- 高性能。 -- 在 RAID0 上零容量损失。 -- 零容错。 -- 写和读有很高的性能。 - -#### RAID1(或)镜像化 #### - -镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两组2TB的硬盘驱动器,我们总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成一个逻辑驱动器,我们只能看到逻辑驱动器有2TB。 - -当我们保存数据时,它将同时写入2TB驱动器中。创建 RAID 1 (镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以恢复 RAID 通过更换一个新的磁盘。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为其他的磁盘中也有相同的数据。所以是零数据丢失。 - -- 良好的性能。 -- 空间的一半将在总容量丢失。 -- 完全容错。 -- 重建会更快。 -- 写性能将是缓慢的。 -- 读将会很好。 -- 被操作系统和数据库使用的规模很小。 - -#### RAID 5(或)分布式奇偶校验 #### - -RAID 5 多用于企业的水平。 RAID 5 的工作通过分布式奇偶校验的方法。奇偶校验信息将被用于重建数据。它需要留下的正常驱动器上的信息去重建。驱动器故障时,这会保护我们的数据。 - -假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 - -- 性能卓越 -- 读速度将非常好。 -- 如果我们不使用硬件 RAID 控制器,写速度是缓慢的。 -- 从所有驱动器的奇偶校验信息中重建。 -- 完全容错。 -- 1个磁盘空间将用于奇偶校验。 -- 可以被用在文件服务器,Web服务器,非常重要的备份中。 - -#### RAID 6 两个分布式奇偶校验磁盘 #### - -RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 - -它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 - -- 性能不佳。 -- 读的性能很好。 -- 如果我们不使用硬件 RAID 控制器写的性能会很差。 -- 从2奇偶校验驱动器上重建。 -- 完全容错。 -- 2个磁盘空间将用于奇偶校验。 -- 可用于大型阵列。 -- 在备份和视频流中大规模使用。 - -#### RAID 10(或)镜像+条带 #### - -RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 - -假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 - -如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 - -同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 - -- 良好的读写性能。 -- 空间的一半将在总容量丢失。 -- 容错。 -- 从备份数据中快速重建。 -- 它的高性能和高可用性常被用于数据库的存储中。 - -### 结论 ### - -在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 的哪个级别。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 - -在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/understanding-raid-setup-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ From be241caf54d4564570a4897b5f68aa2e5e2a9842 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:32:27 +0800 Subject: [PATCH 099/207] =?UTF-8?q?Delete=20Creating=20RAID=205=20(Stripin?= 
=?UTF-8?q?g=20with=20Distributed=20Parity)=20in=20Linux=20=E2=80=93=20Par?= =?UTF-8?q?t=204?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...with Distributed Parity) in Linux – Part 4 | 285 ------------------ 1 file changed, 285 deletions(-) delete mode 100644 translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 diff --git a/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 b/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 deleted file mode 100644 index 7de5199a08..0000000000 --- a/translated/tech/Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 +++ /dev/null @@ -1,285 +0,0 @@ - -在 Linux 中创建 RAID 5(条带化与分布式奇偶校验) - 第4部分 -================================================================================ -在 RAID 5 中,条带化数据跨多个驱磁盘使用分布式奇偶校验。分布式奇偶校验的条带化意味着它将奇偶校验信息和条带中的数据分布在多个磁盘上,它将有很好的数据冗余。 - -![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg) - -在 Linux 中配置 RAID 5 - -对于此 RAID 级别它至少应该有三个或更多个磁盘。RAID 5 通常被用于大规模生产环境中花费更多的成本来提供更好的数据冗余性能。 - -#### 什么是奇偶校验? #### - -奇偶校验是在数据存储中检测错误最简单的一个方法。奇偶校验信息存储在每个磁盘中,比如说,我们有4个磁盘,其中一个磁盘空间被分割去存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,我们可以通过更换故障磁盘后,从奇偶校验信息重建得到原来的数据。 - -#### RAID 5 的优点和缺点 #### - -- 提供更好的性能 -- 支持冗余和容错。 -- 支持热备份。 -- 将失去一个磁盘的容量存储奇偶校验信息。 -- 单个磁盘发生故障后不会丢失数据。我们可以更换故障硬盘后从奇偶校验信息中重建数据。 -- 事务处理读操作会更快。 -- 由于奇偶校验占用资源,写操作将是缓慢的。 -- 重建需要很长的时间。 - -#### 要求 #### -创建 RAID 5 最少需要3个磁盘,你也可以添加更多的磁盘,前提是你要有多端口的专用硬件 RAID 控制器。在这里,我们使用“mdadm”包来创建软件 RAID。 - -mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下 RAID 没有可用的配置文件,我们在创建和配置 RAID 后必须将配置文件保存在一个单独的文件中,例如:mdadm.conf。 - -在进一步学习之前,我建议你通过下面的文章去了解 Linux 中 RAID 的基础知识。 - -- [Basic Concepts of RAID in Linux – Part 1][1] -- [Creating RAID 0 (Stripe) in Linux – Part 2][2] -- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3] - -#### 我的服务器设置 #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.227 - Hostname : rd5.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd - -这篇文章是 RAID 系列9教程的第4部分,在这里我们要建立一个软件 RAID 5(分布式奇偶校验)使用三个20GB(名为/dev/sdb, /dev/sdc 和 /dev/sdd)的磁盘在 Linux 系统或服务器中上。 - -### 第1步:安装 mdadm 并检验磁盘 ### - -1.正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID 设置,但同样的做法也适用于其他 Linux 发行版。 - - # lsb_release -a - # ifconfig | grep inet - -![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png) - -CentOS 6.5 摘要 - -2. 如果你按照我们的 RAID 系列去配置的,我们假设你已经安装了“mdadm”包,如果没有,根据你的 Linux 发行版使用下面的命令安装。 - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -3. “mdadm”包安装后,先使用‘fdisk‘命令列出我们在系统上增加的三个20GB的硬盘。 - - # fdisk -l | grep sd - -![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png) - -安装 mdadm 工具 - -4. 现在该检查这三个磁盘是否存在 RAID 块,使用下面的命令来检查。 - - # mdadm -E /dev/sd[b-d] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd - -![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png) - -检查 Raid 磁盘 - -**注意**: 上面的图片说明,没有检测到任何超级块。所以,这三个磁盘中没有定义 RAID。让我们现在开始创建一个吧! - -### 第2步:为磁盘创建 RAID 分区 ### - -5. 首先,在创建 RAID 前我们要为磁盘分区(/dev/sdb, /dev/sdc 和 /dev/sdd),在进行下一步之前,先使用‘fdisk’命令进行分区。 - - # fdisk /dev/sdb - # fdisk /dev/sdc - # fdisk /dev/sdd - -#### 创建 /dev/sdb 分区 #### - -请按照下面的说明在 /dev/sdb 硬盘上创建分区。 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。选择主分区是因为还没有定义过分区。 -- 接下来选择分区号为1。默认就是1. 
-- 这里是选择柱面大小,我们没必要选择指定的大小,因为我们需要为 RAID 使用整个分区,所以只需按两次 Enter 键默认将整个容量分配给它。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 改变分区类型,按 ‘L’可以列出所有可用的类型。 -- 按 ‘t’ 修改分区类型。 -- 这里使用‘fd’设置为 RAID 的类型。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png) - -创建 sdb 分区 - -**注意**: 我们仍要按照上面的步骤来创建 sdc 和 sdd 的分区。 - -#### 创建 /dev/sdc 分区 #### - -现在,通过下面的截图给出创建 sdc 和 sdd 磁盘分区的方法,或者你可以按照上面的步骤。 - - # fdisk /dev/sdc - -![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png) - -创建 sdc 分区 - -#### 创建 /dev/sdd 分区 #### - - # fdisk /dev/sdd - -![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png) - -创建 sdd 分区 - -6. 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。 - - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd - - or - - # mdadm -E /dev/sd[b-c] - -![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png) - -检查磁盘变化 - -**注意**: 在上面的图片中,磁盘的类型是 fd。 - -7.现在在新创建的分区检查 RAID 块。如果没有检测到超级块,我们就能够继续下一步,创建一个新的 RAID 5 的设置在这些磁盘中。 - -![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png) - -在分区中检查 Raid - -### 第3步:创建 md 设备 md0 ### - -8. 现在创建一个 RAID 设备“md0”(即 /dev/md0)使用所有新创建的分区(sdb1, sdc1 and sdd1) ,使用以下命令。 - - # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 - - or - - # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1 - -9. 创建 RAID 设备后,检查并确认 RAID,包括设备和从 mdstat 中输出的 RAID 级别。 - - # cat /proc/mdstat - -![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png) - -验证 Raid 设备 - -如果你想监视当前的创建过程,你可以使用‘watch‘命令,使用 watch ‘cat /proc/mdstat‘,它会在屏幕上显示且每隔1秒刷新一次。 - - # watch -n1 cat /proc/mdstat - -![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png) - -监控 Raid 5 过程 - -![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png) - -Raid 5 过程概要 - -10. 创建 RAID 后,使用以下命令验证 RAID 设备 - - # mdadm -E /dev/sd[b-d]1 - -![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png) - -验证 Raid 级别 - -**注意**: 因为它显示三个磁盘的信息,上述命令的输出会有点长。 - -11. 接下来,验证 RAID 阵列的假设,这包含正在运行 RAID 的设备,并开始重新同步。 - - # mdadm --detail /dev/md0 - -![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png) - -验证 Raid 阵列 - -### 第4步:为 md0 创建文件系统### - -12. 在挂载前为“md0”设备创建 ext4 文件系统。 - - # mkfs.ext4 /dev/md0 - -![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png) - -创建 md0 文件系统 - -13.现在,在‘/mnt‘下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下并检查下挂载点的文件,你会看到 lost+found 目录。 - - # mkdir /mnt/raid5 - # mount /dev/md0 /mnt/raid5/ - # ls -l /mnt/raid5/ - -14. 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。 - - # touch /mnt/raid5/raid5_tecmint_{1..5} - # ls -l /mnt/raid5/ - # echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1 - # cat /mnt/raid5/raid5_tecmint_1 - # cat /proc/mdstat - -![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png) - -挂载 Raid 设备 - -15. 我们需要在 fstab 中添加条目,否则系统重启后将不会显示我们的挂载点。然后编辑 fstab 文件添加条目,在文件尾追加以下行,如下图所示。挂载点会根据你环境的不同而不同。 - - # vim /etc/fstab - - /dev/md0 /mnt/raid5 ext4 defaults 0 0 - -![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png) - -自动挂载 Raid 5 - -16. 
接下来,运行‘mount -av‘命令检查 fstab 条目中是否有错误。 - - # mount -av - -![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png) - -检查 Fstab 错误 - -### 第5步:保存 Raid 5 的配置 ### - -17. 在前面章节已经说过,默认情况下 RAID 没有配置文件。我们必须手动保存。如果此步不跟 RAID 设备将不会存在 md0,它将会跟一些其他数子。 - -所以,我们必须要在系统重新启动之前保存配置。如果配置保存它在系统重新启动时会被加载到内核中然后 RAID 也将被加载。 - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png) - -保存 Raid 5 配置 - -注意:保存配置将保持 RAID 级别的稳定性在 md0 设备中。 - -### 第6步:添加备用磁盘 ### - -18.备用磁盘有什么用?它是非常有用的,如果我们有一个备用磁盘,当我们阵列中的任何一个磁盘发生故障后,这个备用磁盘会主动添加并重建进程,并从其他磁盘上同步数据,所以我们可以在这里看到冗余。 - -更多关于添加备用磁盘和检查 RAID 5 容错的指令,请阅读下面文章中的第6步和第7步。 - -- [Add Spare Drive to Raid 5 Setup][4] - -### 结论 ### - -在这篇文章中,我们已经看到了如何使用三个磁盘配置一个 RAID 5 。在接下来的文章中,我们将看到如何故障排除并且当 RAID 5 中的一个磁盘损坏后如何恢复。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid-5-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ -[3]:http://www.tecmint.com/create-raid1-in-linux/ -[4]:http://www.tecmint.com/create-raid-6-in-linux/ From b1fd032e97a42a7256d951cbc670bdce03cc08cb Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:32:40 +0800 Subject: [PATCH 100/207] Delete Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md --- ... (Mirroring) using 'Two Disks' in Linux.md | 217 ------------------ 1 file changed, 217 deletions(-) delete mode 100644 translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md diff --git a/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md deleted file mode 100644 index 948e530ed8..0000000000 --- a/translated/tech/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md +++ /dev/null @@ -1,217 +0,0 @@ -在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分 -================================================================================ -RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。 - - -![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) - -在 Linux 中设置 RAID1 - -创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。 - -### RAID 1 的特点 ### - --镜像具有良好的性能。 - --磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 - --在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 - --读取数据会比写入性能更好。 - -#### 要求 #### - - -创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 - -这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。 - -需要阅读: [Basic Concepts of RAID in Linux][1] - -#### 在我的服务器安装 #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.226 - Hostname : rd1.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - -本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。 - -### 第1步:安装所需要的并且检查磁盘 ### - -1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 
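-
-(LCTT注:下面这两行检查是译者补充的示意,原文没有。安装之前,可以先确认系统里是否已经自带了 mdadm:)
-
-    # 若 mdadm 已安装则打印其路径和版本,否则什么也不输出(仅为示意)
-    $ which mdadm && mdadm --version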
- - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -2. 一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 - - # mdadm -E /dev/sd[b-c] - -![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) - -检查 RAID 的磁盘 - - -正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 - -### 第2步:为 RAID 创建分区 ### - -3. 正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 - - # fdisk /dev/sdb - -按照下面的说明 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 按两次回车键默认将整个容量分配给它。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 修改分区类型。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) - -创建磁盘分区 - -在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 - - # fdisk /dev/sdc - -![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) - -创建第二个分区 - -4. 一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 - - # mdadm -E /dev/sd[b-c] - -![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) - -验证分区变化 - -![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) - -检查 RAID 类型 - -**注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 - -### 步骤3:创建 RAID1 设备 ### - -5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 - - # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 - # cat /proc/mdstat - -![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) - -创建RAID设备 - -6. 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 - - # mdadm -E /dev/sd[b-c]1 - # mdadm --detail /dev/md0 - -![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) - -检查 RAID 设备类型 - -![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) - -检查 RAID 设备阵列 - -从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 - -### 第4步:在 RAID 设备上创建文件系统 ### - -7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . - - # mkfs.ext4 /dev/md0 - -![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) - -创建 RAID 设备文件系统 - -8. 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 - - # mkdir /mnt/raid1 - # mount /dev/md0 /mnt/raid1/ - # touch /mnt/raid1/tecmint.txt - # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt - -![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) - -挂载 RAID 设备 - -9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。 - - /dev/md0 /mnt/raid1 ext4 defaults 0 0 - -![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) - -自动挂载 Raid 设备 - -10. 运行“mount -a”,检查 fstab 中的条目是否有错误 - # mount -av - -![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) - -检查 fstab 中的错误 - -11. 
接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。 - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) - -保存 Raid 的配置 - -上述配置文件在系统重启时会读取并加载 RAID 设备。 - -### 第5步:在磁盘故障后检查数据 ### - -12.我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 - - # mdadm --detail /dev/md0 - -![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) - -验证 Raid 设备 - -在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的并且 Active Devices 是2.现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 - - # ls -l /dev | grep sd - # mdadm --detail /dev/md0 - -![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) - -测试 RAID 设备 - -现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。 - - # cd /mnt/raid1/ - # cat tecmint.txt - -![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) - -验证 RAID 数据 - -你有没有看到我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid1-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 10ee418d6422f54036002872fd4a328bdf691557 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:32:59 +0800 Subject: [PATCH 101/207] =?UTF-8?q?Delete=20Creating=20Software=20RAID0=20?= =?UTF-8?q?(Stripe)=20on=20=E2=80=98Two=20Devices=E2=80=99=20Using=20?= =?UTF-8?q?=E2=80=98mdadm=E2=80=99=20Tool=20in=20Linux=20=E2=80=93=20Part?= =?UTF-8?q?=202.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ces’ Using ‘mdadm’ Tool in Linux – Part 2.md | 218 ------------------ 1 file changed, 218 deletions(-) delete mode 100644 translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md diff --git a/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md b/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md deleted file mode 100644 index 9feba99609..0000000000 --- a/translated/tech/Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux – Part 2.md +++ /dev/null @@ -1,218 +0,0 @@ -在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 ‘两个设备’ 上 - 第2部分 -================================================================================ -RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 - -创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 - -![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) - -在 Linux 中创建 RAID0 - -使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 - -#### 在 RAID 0 中条带是什么 #### - -条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 - -- RAID 0 性能较高。 -- 在 RAID 0 上,空间零浪费。 -- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 -- 写和读性能得以提高。 - -#### 要求 
#### - -创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 - -在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 - -如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 - -- [Introduction to RAID and RAID Concepts][1] - -**我的服务器设置** - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.225 - Two Disks : 20 GB each - -这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 - -### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### - -1.在 Linux 上设置 RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 - - # yum clean all && yum update - # yum install mdadm -y - -![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) - -安装 mdadm 工具 - -### 第2步:检测并连接两个 20GB 的硬盘 ### - -2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 - - # ls -l /dev | grep sd - -![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) - -检查硬盘 - -3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 - - # mdadm --examine /dev/sd[b-c] - -![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) - -检查 RAID 设备 - -从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 - -### 第3步:创建 RAID 分区 ### - -4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 - - # fdisk /dev/sdb - -请按照以下说明创建分区。 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 只需按两次回车键选择默认值即可。 -- 然后,按 ‘P’ 来打印创建好的分区。 - -![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) - -创建分区 - -请按照以下说明将分区创建为 Linux 的 RAID 类型。 - -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 去修改分区。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) - -在 Linux 上创建 RAID 分区 - -**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 - -5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 - - # mdadm --examine /dev/sd[b-c] - # mdadm --examine /dev/sd[b-c]1 - -![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) - -验证 RAID 分区 - -### 第4步:创建 RAID md 设备 ### - -6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 - - # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 - # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 - -- -C – create -- -l – level -- -n – No of raid-devices - -7.一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 - - # cat /proc/mdstat - -![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) - -查看 RAID 级别 - - # mdadm -E /dev/sd[b-c]1 - -![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) - -查看 RAID 设备 - - # mdadm --detail /dev/md0 - -![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) - -查看 RAID 阵列 - -### 第5步:挂载 RAID 设备到文件系统 ### - -8.将 RAID 设备 /dev/md0 创建为 ext4 文件系统并挂载到 /mnt/raid0 下。 - - # mkfs.ext4 /dev/md0 - -![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) - -创建 ext4 文件系统 - -9. ext4 文件系统为 RAID 设备创建好后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 - - # mkdir /mnt/raid0 - # mount /dev/md0 /mnt/raid0/ - -10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 - - # df -h - -11.接下来,创建一个名为 ‘tecmint.txt’ 的文件挂载到 /mnt/raid0 下,为创建的文件添加一些内容,并查看文件和目录的内容。 - - # touch /mnt/raid0/tecmint.txt - # echo "Hi everyone how you doing ?" 
> /mnt/raid0/tecmint.txt - # cat /mnt/raid0/tecmint.txt - # ls -l /mnt/raid0/ - -![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) - -验证挂载的设备 - -12.一旦你验证挂载点后,同时将它添加到 /etc/fstab 文件中。 - - # vim /etc/fstab - -添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 - - /dev/md0 /mnt/raid0 ext4 deaults 0 0 - -![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) - -添加设备到 fstab 文件中 - -13.使用 mount ‘-a‘ 来检查 fstab 的条目是否有误。 - - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) - -检查 fstab 文件是否有误 - -### 第6步:保存 RAID 配置 ### - -14.最后,保存 RAID 配置到一个文件中,以供将来使用。同样,我们使用 ‘mdadm’ 命令带有 ‘-s‘ (scan) 和 ‘-v‘ (verbose) 选项,如图所示。 - - # mdadm -E -s -v >> /etc/mdadm.conf - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - # cat /etc/mdadm.conf - -![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) - -保存 RAID 配置 - -就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID0 级别。在接下来的文章中,我们将看到如何设置 RAID5。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid0-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From f3d587ede36d68eb11002fb77c4eb66db7af746d Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:33:36 +0800 Subject: [PATCH 102/207] Create Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md --- ... 
(Mirroring) using 'Two Disks' in Linux.md | 217 ++++++++++++++++++ 1 file changed, 217 insertions(+) create mode 100644 translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md diff --git a/translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md new file mode 100644 index 0000000000..948e530ed8 --- /dev/null +++ b/translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md @@ -0,0 +1,217 @@ +在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分 +================================================================================ +RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。 + + +![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) + +在 Linux 中设置 RAID1 + +创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。 + +### RAID 1 的特点 ### + +-镜像具有良好的性能。 + +-磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 + +-在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 + +-读取数据会比写入性能更好。 + +#### 要求 #### + + +创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 + +这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。 + +需要阅读: [Basic Concepts of RAID in Linux][1] + +#### 在我的服务器安装 #### + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.226 + Hostname : rd1.tecmintlocal.com + Disk 1 [20GB] : /dev/sdb + Disk 2 [20GB] : /dev/sdc + +本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。 + +### 第1步:安装所需要的并且检查磁盘 ### + +1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 + + # yum install mdadm [on RedHat systems] + # apt-get install mdadm [on Debain systems] + +2. 一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 + + # mdadm -E /dev/sd[b-c] + +![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) + +检查 RAID 的磁盘 + + +正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 + +### 第2步:为 RAID 创建分区 ### + +3. 正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 + + # fdisk /dev/sdb + +按照下面的说明 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 按两次回车键默认将整个容量分配给它。 +- 然后,按 ‘P’ 来打印创建好的分区。 +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 修改分区类型。 +- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) + +创建磁盘分区 + +在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 + + # fdisk /dev/sdc + +![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) + +创建第二个分区 + +4. 一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 + + # mdadm -E /dev/sd[b-c] + +![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) + +验证分区变化 + +![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) + +检查 RAID 类型 + +**注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 + +### 步骤3:创建 RAID1 设备 ### + +5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 + + # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 + # cat /proc/mdstat + +![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) + +创建RAID设备 + +6. 
接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 + + # mdadm -E /dev/sd[b-c]1 + # mdadm --detail /dev/md0 + +![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) + +检查 RAID 设备类型 + +![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) + +检查 RAID 设备阵列 + +从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 + +### 第4步:在 RAID 设备上创建文件系统 ### + +7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . + + # mkfs.ext4 /dev/md0 + +![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) + +创建 RAID 设备文件系统 + +8. 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 + + # mkdir /mnt/raid1 + # mount /dev/md0 /mnt/raid1/ + # touch /mnt/raid1/tecmint.txt + # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt + +![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) + +挂载 RAID 设备 + +9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。 + + /dev/md0 /mnt/raid1 ext4 defaults 0 0 + +![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) + +自动挂载 Raid 设备 + +10. 运行“mount -a”,检查 fstab 中的条目是否有错误 + # mount -av + +![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) + +检查 fstab 中的错误 + +11. 接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + +![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) + +保存 Raid 的配置 + +上述配置文件在系统重启时会读取并加载 RAID 设备。 + +### 第5步:在磁盘故障后检查数据 ### + +12.我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 + + # mdadm --detail /dev/md0 + +![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) + +验证 Raid 设备 + +在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的并且 Active Devices 是2.现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 + + # ls -l /dev | grep sd + # mdadm --detail /dev/md0 + +![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) + +测试 RAID 设备 + +现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。 + + # cd /mnt/raid1/ + # cat tecmint.txt + +![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) + +验证 RAID 数据 + +你有没有看到我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid1-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 03f40b38babfc323a30e846016f4743afa8ca4d8 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:48:30 +0800 Subject: [PATCH 103/207] =?UTF-8?q?Create=20Part=202=20-=20Creating=20Soft?= =?UTF-8?q?ware=20RAID0=20(Stripe)=20on=20=E2=80=98Two=20Devices=E2=80=99?= =?UTF-8?q?=20Using=20=E2=80=98mdadm=E2=80=99=20Tool=20in=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Two Devices’ Using ‘mdadm’ Tool in Linux.md | 218 ++++++++++++++++++ 1 file changed, 218 insertions(+) create mode 100644 translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) 
on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md diff --git a/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md b/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md new file mode 100644 index 0000000000..9feba99609 --- /dev/null +++ b/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md @@ -0,0 +1,218 @@ +在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 ‘两个设备’ 上 - 第2部分 +================================================================================ +RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 + +创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 + +![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) + +在 Linux 中创建 RAID0 + +使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 + +#### 在 RAID 0 中条带是什么 #### + +条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 + +- RAID 0 性能较高。 +- 在 RAID 0 上,空间零浪费。 +- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 +- 写和读性能得以提高。 + +#### 要求 #### + +创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 + +在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 + +如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 + +- [Introduction to RAID and RAID Concepts][1] + +**我的服务器设置** + + Operating System : CentOS 6.5 Final + IP Address : 192.168.0.225 + Two Disks : 20 GB each + +这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 + +### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### + +1.在 Linux 上设置 RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 + + # yum clean all && yum update + # yum install mdadm -y + +![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) + +安装 mdadm 工具 + +### 第2步:检测并连接两个 20GB 的硬盘 ### + +2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 + + # ls -l /dev | grep sd + +![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) + +检查硬盘 + +3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 + + # mdadm --examine /dev/sd[b-c] + +![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) + +检查 RAID 设备 + +从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 + +### 第3步:创建 RAID 分区 ### + +4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 + + # fdisk /dev/sdb + +请按照以下说明创建分区。 + +- 按 ‘n’ 创建新的分区。 +- 然后按 ‘P’ 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 ‘P’ 来打印创建好的分区。 + +![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) + +创建分区 + +请按照以下说明将分区创建为 Linux 的 RAID 类型。 + +- 按 ‘L’,列出所有可用的类型。 +- 按 ‘t’ 去修改分区。 +- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用‘p’查看我们所做的更改。 +- 使用‘w’保存更改。 + +![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) + +在 Linux 上创建 RAID 分区 + +**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 + +5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 + + # mdadm --examine /dev/sd[b-c] + # mdadm --examine /dev/sd[b-c]1 + +![Verify RAID 
Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) + +验证 RAID 分区 + +### 第4步:创建 RAID md 设备 ### + +6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 + + # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 + # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 + +- -C – create +- -l – level +- -n – No of raid-devices + +7.一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 + + # cat /proc/mdstat + +![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) + +查看 RAID 级别 + + # mdadm -E /dev/sd[b-c]1 + +![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) + +查看 RAID 设备 + + # mdadm --detail /dev/md0 + +![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) + +查看 RAID 阵列 + +### 第5步:挂载 RAID 设备到文件系统 ### + +8.将 RAID 设备 /dev/md0 创建为 ext4 文件系统并挂载到 /mnt/raid0 下。 + + # mkfs.ext4 /dev/md0 + +![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) + +创建 ext4 文件系统 + +9. ext4 文件系统为 RAID 设备创建好后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 + + # mkdir /mnt/raid0 + # mount /dev/md0 /mnt/raid0/ + +10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 + + # df -h + +11.接下来,创建一个名为 ‘tecmint.txt’ 的文件挂载到 /mnt/raid0 下,为创建的文件添加一些内容,并查看文件和目录的内容。 + + # touch /mnt/raid0/tecmint.txt + # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt + # cat /mnt/raid0/tecmint.txt + # ls -l /mnt/raid0/ + +![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) + +验证挂载的设备 + +12.一旦你验证挂载点后,同时将它添加到 /etc/fstab 文件中。 + + # vim /etc/fstab + +添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 + + /dev/md0 /mnt/raid0 ext4 deaults 0 0 + +![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) + +添加设备到 fstab 文件中 + +13.使用 mount ‘-a‘ 来检查 fstab 的条目是否有误。 + + # mount -av + +![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) + +检查 fstab 文件是否有误 + +### 第6步:保存 RAID 配置 ### + +14.最后,保存 RAID 配置到一个文件中,以供将来使用。同样,我们使用 ‘mdadm’ 命令带有 ‘-s‘ (scan) 和 ‘-v‘ (verbose) 选项,如图所示。 + + # mdadm -E -s -v >> /etc/mdadm.conf + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + # cat /etc/mdadm.conf + +![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) + +保存 RAID 配置 + +就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID0 级别。在接下来的文章中,我们将看到如何设置 RAID5。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid0-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ From 81f06d85c3773c901edb69c80c182fd225a8f217 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 4 Aug 2015 10:52:16 +0800 Subject: [PATCH 104/207] Create Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md --- ... 
RAID, Concepts of RAID and RAID Levels.md | 146 ++++++++++++++++++ 1 file changed, 146 insertions(+) create mode 100644 translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md diff --git a/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md b/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md new file mode 100644 index 0000000000..8ca0ecbd7e --- /dev/null +++ b/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md @@ -0,0 +1,146 @@ + +RAID的级别和概念的介绍 - 第1部分 +================================================================================ +RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 + + +![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) + +在 Linux 中理解 RAID 的设置 + +RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 + +这个系列被命名为RAID的构建共包含9个部分包括以下主题。 + +- 第1部分:RAID的级别和概念的介绍 +- 第2部分:在Linux中如何设置 RAID0(条带化) +- 第3部分:在Linux中如何设置 RAID1(镜像化) +- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) +- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) +- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) +- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 +- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 +- 第9部分:在 Linux 中管理 RAID + +这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 + + +### 软件RAID和硬件RAID ### + +软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 + +硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 + +硬件 RAID 卡如下所示: + +![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) + +硬件RAID + +#### 精选的 RAID 概念 #### + +- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 +- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 +- 镜像被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 +- 在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 +- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 + +RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 + +- RAID0 = 条带化 +- RAID1 = 镜像 +- RAID5 = 单个磁盘分布式奇偶校验 +- RAID6 = 双盘分布式奇偶校验 +- RAID10 = 镜像 + 条带。(嵌套RAID) + +RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 + +#### RAID 0(或)条带化 #### + +条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 + +假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 + +在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 + +- 高性能。 +- 在 RAID0 上零容量损失。 +- 零容错。 +- 写和读有很高的性能。 + +#### RAID1(或)镜像化 #### + +镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两组2TB的硬盘驱动器,我们总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成一个逻辑驱动器,我们只能看到逻辑驱动器有2TB。 + +当我们保存数据时,它将同时写入2TB驱动器中。创建 RAID 1 (镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以恢复 RAID 通过更换一个新的磁盘。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为其他的磁盘中也有相同的数据。所以是零数据丢失。 + +- 良好的性能。 +- 空间的一半将在总容量丢失。 +- 完全容错。 +- 重建会更快。 +- 写性能将是缓慢的。 +- 读将会很好。 +- 被操作系统和数据库使用的规模很小。 + +#### RAID 5(或)分布式奇偶校验 #### + +RAID 5 多用于企业的水平。 RAID 5 的工作通过分布式奇偶校验的方法。奇偶校验信息将被用于重建数据。它需要留下的正常驱动器上的信息去重建。驱动器故障时,这会保护我们的数据。 + +假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 + +- 性能卓越 +- 读速度将非常好。 +- 如果我们不使用硬件 RAID 控制器,写速度是缓慢的。 +- 从所有驱动器的奇偶校验信息中重建。 +- 完全容错。 +- 1个磁盘空间将用于奇偶校验。 +- 
可以被用在文件服务器,Web服务器,非常重要的备份中。 + +#### RAID 6 两个分布式奇偶校验磁盘 #### + +RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 + +它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 + +- 性能不佳。 +- 读的性能很好。 +- 如果我们不使用硬件 RAID 控制器写的性能会很差。 +- 从2奇偶校验驱动器上重建。 +- 完全容错。 +- 2个磁盘空间将用于奇偶校验。 +- 可用于大型阵列。 +- 在备份和视频流中大规模使用。 + +#### RAID 10(或)镜像+条带 #### + +RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 + +假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 + +如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 + +同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 + +- 良好的读写性能。 +- 空间的一半将在总容量丢失。 +- 容错。 +- 从备份数据中快速重建。 +- 它的高性能和高可用性常被用于数据库的存储中。 + +### 结论 ### + +在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 的哪个级别。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 + +在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/understanding-raid-setup-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ From 58e17d89a5653abf7d8f4d7315e4f19a684099c0 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Tue, 4 Aug 2015 19:12:16 +0800 Subject: [PATCH 105/207] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 翻译完毕 --- ...y Using 'Explain Shell' Script in Linux.md | 121 ------------------ 1 file changed, 121 deletions(-) delete mode 100644 sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md diff --git a/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md deleted file mode 100644 index ab7572cd7a..0000000000 --- a/sources/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md +++ /dev/null @@ -1,121 +0,0 @@ - -Translating by dingdongnigetou - -Understanding Shell Commands Easily Using “Explain Shell” Script in Linux -================================================================================ -While working on Linux platform all of us need help on shell commands, at some point of time. Although inbuilt help like man pages, whatis command is helpful, but man pages output are too lengthy and until and unless one has some experience with Linux, it is very difficult to get any help from massive man pages. The output of whatis command is rarely more than one line which is not sufficient for newbies. - -![Explain Shell Commands in Linux Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Explain-Shell-Commands-in-Linux-Shell.jpeg) - -Explain Shell Commands in Linux Shell - -There are third-party application like ‘cheat‘, which we have covered here “[Commandline Cheat Sheet for Linux Users][1]. Although Cheat is an exceptionally good application which shows help on shell command even when computer is not connected to Internet, it shows help on predefined commands only. 
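-
-A quick way to feel that gap is to put the two built-in helpers side by side on the same command. The snippet below is only an illustration; 'tar' is an arbitrary example and any command name will do:
-
-    $ whatis tar          # prints just a single summary line
-    $ man tar | wc -l     # counts how long the full manual page runs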
- -There is a small piece of code written by Jackson which is able to explain shell commands within the bash shell very effectively and guess what the best part is you don’t need to install any third party package. He named the file containing this piece of code as `'explain.sh'`. - -#### Features of Explain Utility #### - -- Easy Code Embedding. -- No third-party utility needed to be installed. -- Output just enough information in course of explanation. -- Requires internet connection to work. -- Pure command-line utility. -- Able to explain most of the shell commands in bash shell. -- No root Account involvement required. - -**Prerequisite** - -The only requirement is `'curl'` package. In most of the today’s latest Linux distributions, curl package comes pre-installed, if not you can install it using package manager as shown below. - - # apt-get install curl [On Debian systems] - # yum install curl [On CentOS systems] - -### Installation of explain.sh Utility in Linux ### - -We have to insert the below piece of code as it is in the `~/.bashrc` file. The code should be inserted for each user and each `.bashrc` file. It is suggested to insert the code to the user’s .bashrc file only and not in the .bashrc of root user. - -Notice the first line of code that starts with hash `(#)` is optional and added just to differentiate rest of the codes of .bashrc. - -# explain.sh marks the beginning of the codes, we are inserting in .bashrc file at the bottom of this file. - - # explain.sh begins - explain () { - if [ "$#" -eq 0 ]; then - while read -p "Command: " cmd; do - curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$cmd" - done - echo "Bye!" - elif [ "$#" -eq 1 ]; then - curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$1" - else - echo "Usage" - echo "explain interactive mode." - echo "explain 'cmd -o | ...' one quoted command to explain it." - fi - } - -### Working of explain.sh Utility ### - -After inserting the code and saving it, you must logout of the current session and login back to make the changes taken into effect. Every thing is taken care of by the ‘curl’ command which transfer the input command and flag that need explanation to the mankier server and then print just necessary information to the Linux command-line. Not to mention to use this utility you must be connected to internet always. - -Let’s test few examples of command which I don’t know the meaning with explain.sh script. - -**1. I forgot what ‘du -h‘ does. All I need to do is:** - - $ explain 'du -h' - -![Get Help on du Command](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-du-Command.png) - -Get Help on du Command - -**2. If you forgot what ‘tar -zxvf‘ does, you may simply do:** - - $ explain 'tar -zxvf' - -![Tar Command Help](http://www.tecmint.com/wp-content/uploads/2015/07/Tar-Command-Help.png) - -Tar Command Help - -**3. One of my friend often confuse the use of ‘whatis‘ and ‘whereis‘ command, so I advised him.** - -Go to Interactive Mode by simply typing explain command on the terminal. - - $ explain - -and then type the commands one after another to see what they do in one window, as: - - Command: whatis - Command: whereis - -![Whatis Whereis Commands Help](http://www.tecmint.com/wp-content/uploads/2015/07/Whatis-Whereis-Commands-Help.png) - -Whatis Whereis Commands Help - -To exit interactive mode he just need to do Ctrl + c. - -**4. 
You can ask to explain more than one command chained by pipeline.** - - $ explain 'ls -l | grep -i Desktop' - -![Get Help on Multiple Commands](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-Multiple-Commands.png) - -Get Help on Multiple Commands - -Similarly you can ask your shell to explain any shell command. All you need is a working Internet connection. The output is generated based upon the explanation needed from the server and hence the output result is not customizable. - -For me this utility is really helpful and it has been honored being added to my .bashrc. Let me know what is your thought on this project? How it can useful for you? Is the explanation satisfactory? - -Provide us with your valuable feedback in the comments below. Like and share us and help us get spread. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/explain-shell-commands-in-the-linux-shell/ - -作者:[Avishek Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/cheat-command-line-cheat-sheet-for-linux-users/ From b8127cd50d3c05e938f45795e9f37e617112cc99 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Tue, 4 Aug 2015 19:17:36 +0800 Subject: [PATCH 106/207] =?UTF-8?q?Create=20=E3=80=90=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=E5=AE=8C=E6=AF=95=E3=80=9120150728=20Understanding=20Shell=20C?= =?UTF-8?q?ommands=20Easily=20Using=20'Explain=20Shell'=20Script=20in=20Li?= =?UTF-8?q?nux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Easily Using 'Explain Shell' Script in Linux.md | 118 ++++++++++++++++++ 1 file changed, 118 insertions(+) create mode 100644 translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md diff --git a/translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md new file mode 100644 index 0000000000..b8f993676c --- /dev/null +++ b/translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md @@ -0,0 +1,118 @@ +在Linux中利用"Explain Shell"脚本更容易地理解Shell命令 +================================================================================ +在某些时刻, 当我们在Linux平台上工作时我们所有人都需要shell命令的帮助信息。 尽管内置的帮助像man pages、whatis命令是有帮助的, 但man pages的输出非常冗长, 除非是个有linux经验的人,不然从大量的man pages中获取帮助信息是非常困难的,而whatis命令的输出很少超过一行, 这对初学者来说是不够的。 + +![Explain Shell Commands in Linux Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Explain-Shell-Commands-in-Linux-Shell.jpeg) + +在Linux Shell中解释Shell命令 + +有一些第三方应用程序, 像我们在[Commandline Cheat Sheet for Linux Users][1]提及过的'cheat'命令。Cheat是个杰出的应用程序,即使计算机没有联网也能提供shell命令的帮助, 但是它仅限于预先定义好的命令。 + +Jackson写了一小段代码,它能非常有效地在bash shell里面解释shell命令,可能最美之处就是你不需要安装第三方包了。他把包含这段代码的的文件命名为”explain.sh“。 + +#### Explain工具的特性 #### + +- 易嵌入代码。 +- 不需要安装第三方工具。 +- 在解释过程中输出恰到好处的信息。 +- 需要网络连接才能工作。 +- 纯命令行工具。 +- 可以解释bash shell里面的大部分shell命令。 +- 无需root账户参与。 + +**先决条件** + +唯一的条件就是'curl'包了。 在如今大多数Linux发行版里面已经预安装了culr包, 如果没有你可以按照下面的命令来安装。 + + # apt-get install curl [On Debian systems] + # yum install curl [On CentOS systems] + +### 在Linux上安装explain.sh工具 ### + +我们要将下面这段代码插入'~/.bashrc'文件(LCTT注: 若没有该文件可以自己新建一个)中。我们必须为每个用户以及对应的'.bashrc'文件插入这段代码,笔者建议你不要加在root用户下。 
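+
+(LCTT注:下面这段是译者补充的示意,原文没有。如果不想手动编辑文件,也可以先把后面整段函数保存为一个临时文件,再追加到当前用户的 .bashrc 末尾;其中 explain.snippet 只是译者假设的临时文件名,追加前记得先备份:)
+
+    # 先备份当前用户的 .bashrc,再把保存好的代码段追加到文件末尾(示意写法)
+    $ cp ~/.bashrc ~/.bashrc.bak
+    $ cat explain.snippet >> ~/.bashrc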
我们注意到.bashrc文件的第一行代码以(#)开始,这是可选的,只是为了区分余下的代码。
+
+# explain.sh 标记代码的开始,我们将这段代码插入.bashrc文件的底部。
+
+    # explain.sh begins
+    explain () {
+    if [ "$#" -eq 0 ]; then
+    while read -p "Command: " cmd; do
+    curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$cmd"
+    done
+    echo "Bye!"
+    elif [ "$#" -eq 1 ]; then
+    curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$1"
+    else
+    echo "Usage"
+    echo "explain interactive mode."
+    echo "explain 'cmd -o | ...' one quoted command to explain it."
+    fi
+    }
+
+### explain.sh工具的使用 ###
+
+在插入代码并保存之后,你必须退出当前会话然后重新登录,以使改变生效(LCTT注:你也可以直接使用命令‘source ~/.bashrc’来让改变生效)。所有事情都交由‘curl’命令处理,它负责将需要解释的命令及其选项传送给 mankier 服务,然后将必要的信息打印到Linux命令行。不必多说,使用这个工具时你必须保持网络连接。
+
+让我们用explain.sh脚本测试几个笔者不懂的命令例子。
+
+**1. 我忘了‘du -h’是干嘛用的,我只需要这样做:**
+
+    $ explain 'du -h'
+
+![Get Help on du Command](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-du-Command.png)
+
+获得du命令的帮助
+
+**2. 如果你忘了‘tar -zxvf’的作用,你可以简单地如此做:**
+
+    $ explain 'tar -zxvf'
+
+![Tar Command Help](http://www.tecmint.com/wp-content/uploads/2015/07/Tar-Command-Help.png)
+
+Tar命令帮助
+
+**3. 我的一个朋友经常对‘whatis’和‘whereis’命令的用法感到困惑,所以我建议他:**
+
+在终端简单地敲下explain命令,进入交互模式。
+
+    $ explain
+
+然后一个接一个地输入命令,就能在同一个窗口看到它们各自的作用:
+
+    Command: whatis
+    Command: whereis
+
+![Whatis Whereis Commands Help](http://www.tecmint.com/wp-content/uploads/2015/07/Whatis-Whereis-Commands-Help.png)
+
+Whatis/Whereis命令的帮助
+
+你只需要按‘Ctrl + c’就能退出交互模式。
+
+**4. 你还可以请求解释用管道串联起来的多条命令。**
+
+    $ explain 'ls -l | grep -i Desktop'
+
+![Get Help on Multiple Commands](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-Multiple-Commands.png)
+
+获取多条命令的帮助
+
+同样地,你可以请求你的shell来解释任何shell命令。前提是你需要一个可用的网络。输出的信息是根据需要解释的内容从服务器上生成的,因此输出的结果是不可定制的。
+
+对于我来说这个工具真的很有用,它已经荣幸地加入了我的.bashrc文件。你对这个项目有什么想法?它对你有用么?它的解释令你满意吗?请让我知道吧!
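+
+(LCTT注:最后补充一点,这段提示也是译者加的,原文没有。由于解释完全来自 mankier.com 的在线服务,断网时 explain 函数不会打印任何内容。可以用类似下面的示意写法,在使用前先探测一下服务是否可达:)
+
+    # 向 mankier.com 发送一次 HEAD 请求,探测服务是否可达(仅为示意)
+    $ curl -Is https://www.mankier.com >/dev/null && echo "服务可达" || echo "无法联网,explain 将不可用"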
+ +请在下面评论为我们提供宝贵意见,喜欢并分享我们以及帮助我们得到传播。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/explain-shell-commands-in-the-linux-shell/ + +作者:[Avishek Kumar][a] +译者:[dingdongnigetou](https://github.com/dingdongnigetou) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/cheat-command-line-cheat-sheet-for-linux-users/ From a61412acbc3535975deb7581f54844a83d1f1e99 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Tue, 4 Aug 2015 19:18:09 +0800 Subject: [PATCH 107/207] =?UTF-8?q?Rename=20=E3=80=90=E7=BF=BB=E8=AF=91?= =?UTF-8?q?=E5=AE=8C=E6=AF=95=E3=80=9120150728=20Understanding=20Shell=20C?= =?UTF-8?q?ommands=20Easily=20Using=20'Explain=20Shell'=20Script=20in=20Li?= =?UTF-8?q?nux.md=20to=2020150728=20Understanding=20Shell=20Commands=20Eas?= =?UTF-8?q?ily=20Using=20'Explain=20Shell'=20Script=20in=20Linux.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ding Shell Commands Easily Using 'Explain Shell' Script in Linux.md} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename translated/tech/{【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md => 20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md} (100%) diff --git a/translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/translated/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md similarity index 100% rename from translated/tech/【翻译完毕】20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md rename to translated/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md From c823978d30497a34d2e73d8ace42140ddc4567c1 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 4 Aug 2015 22:32:09 +0800 Subject: [PATCH 108/207] PUB:20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu @geekpi --- ...ment and How Do You Enable It in Ubuntu.md | 21 +++++++++---------- 1 file changed, 10 insertions(+), 11 deletions(-) rename {translated/tech => published}/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md (67%) diff --git a/translated/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md b/published/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md similarity index 67% rename from translated/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md rename to published/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md index 05f07b74e6..86569b0128 100644 --- a/translated/tech/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md +++ b/published/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md @@ -1,26 +1,25 @@ -什么是逻辑分区管理工具,它怎么在Ubuntu启用? +什么是逻辑分区管理 LVM ,如何在Ubuntu中使用? ================================================================================ -> 逻辑分区管理(LVM)是每一个主流Linux发行版都含有的磁盘管理选项。无论你是否需要设置存储池或者只需要动态创建分区,LVM就是你正在寻找的。 + +> 逻辑分区管理(LVM)是每一个主流Linux发行版都含有的磁盘管理选项。无论是你需要设置存储池,还是只想动态创建分区,那么LVM就是你正在寻找的。 ### 什么是 LVM? 
### -逻辑分区管理是一个存在于磁盘/分区和操作系统之间的一个抽象层。在传统的磁盘管理中,你的操作系统寻找有哪些磁盘可用(/dev/sda、/dev/sdb等等)接着这些磁盘有哪些可用的分区(如/dev/sda1、/dev/sda2等等)。 +逻辑分区管理是一个存在于磁盘/分区和操作系统之间的一个抽象层。在传统的磁盘管理中,你的操作系统寻找有哪些磁盘可用(/dev/sda、/dev/sdb等等),并且这些磁盘有哪些可用的分区(如/dev/sda1、/dev/sda2等等)。 -在LVM下,磁盘和分区可以抽象成一个设备中含有多个磁盘和分区。你的操作系统将不会知道这些区别,因为LVM只会给操作系统展示你设置的卷组(磁盘)和逻辑卷(分区) +在LVM下,磁盘和分区可以抽象成一个含有多个磁盘和分区的设备。你的操作系统将不会知道这些区别,因为LVM只会给操作系统展示你设置的卷组(磁盘)和逻辑卷(分区) -,因此可以很容易地动态调整和创建新的磁盘和分区。除此之外,LVM带来你的文件系统不具备的功能。比如,ext3不支持实时快照,但是如果你正在使用LVM你可以不卸载磁盘的情况下做一个逻辑卷的快照。 +因为卷组和逻辑卷并不物理地对应到影片,因此可以很容易地动态调整和创建新的磁盘和分区。除此之外,LVM带来了你的文件系统所不具备的功能。比如,ext3不支持实时快照,但是如果你正在使用LVM你可以不卸载磁盘的情况下做一个逻辑卷的快照。 ### 你什么时候该使用LVM? ### -在使用LVM之前首先得考虑的一件事是你要用你的磁盘和分区完成什么。一些发行版如Fedora已经默认安装了LVM。 +在使用LVM之前首先得考虑的一件事是你要用你的磁盘和分区来做什么。注意,一些发行版如Fedora已经默认安装了LVM。 -如果你使用的是一台只有一块磁盘的Ubuntu笔记本电脑,并且你不需要像实时快照这样的扩展功能,那么你或许不需要LVM。如果I想要轻松地扩展或者想要将多块磁盘组成一个存储池,那么LVM或许正式你郑寻找的。 +如果你使用的是一台只有一块磁盘的Ubuntu笔记本电脑,并且你不需要像实时快照这样的扩展功能,那么你或许不需要LVM。如果你想要轻松地扩展或者想要将多块磁盘组成一个存储池,那么LVM或许正是你所寻找的。 ### 在Ubuntu中设置LVM ### -使用LVM首先要了解的一件事是没有简单的方法将已经存在传统的分区转换成逻辑分区。可以将它移到一个使用LVM的新分区下,但是这并不会在本篇中提到;反之我们将全新安装一台Ubuntu 10.10来设置LVM - -![](http://cdn3.howtogeek.com/wp-content/uploads/2010/12/ubuntu-10-banner.png) +使用LVM首先要了解的一件事是,没有一个简单的方法可以将已有的传统分区转换成逻辑卷。可以将数据移到一个使用LVM的新分区下,但是这并不会在本篇中提到;在这里,我们将全新安装一台Ubuntu 10.10来设置LVM。(LCTT 译注:本文针对的是较老的版本,新的版本已经不需如此麻烦了) 要使用LVM安装Ubuntu你需要使用另外的安装CD。从下面的链接中下载并烧录到CD中或者[使用unetbootin创建一个USB盘][1]。 @@ -64,7 +63,7 @@ via: http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and- 作者:[How-To Geek][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 7c8fd52aa7704ddee0fa04a71fee6b5b06647180 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 4 Aug 2015 23:11:02 +0800 Subject: [PATCH 109/207] PUB:20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu @ictlyh --- ...M (Logical Volume Management) in Ubuntu.md | 52 +++++++++---------- 1 file changed, 26 insertions(+), 26 deletions(-) rename {translated/tech => published}/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md (79%) diff --git a/translated/tech/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md b/published/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md similarity index 79% rename from translated/tech/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md rename to published/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md index c3a84f5fcf..76a2c8d224 100644 --- a/translated/tech/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md +++ b/published/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md @@ -1,20 +1,20 @@ -如何在 Ubuntu 中管理和使用 LVM(Logical Volume Management,逻辑卷管理) +如何在 Ubuntu 中管理和使用 逻辑卷管理 LVM ================================================================================ ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/652x202xbanner-1.png.pagespeed.ic.VGSxDeVS9P.png) 在我们之前的文章中,我们介绍了[什么是 LVM 以及能用 LVM 做什么][1],今天我们会给你介绍一些 LVM 的主要管理工具,使得你在设置和扩展安装时更游刃有余。 -正如之前所述,LVM 是介于你的操作系统和物理硬盘驱动器之间的抽象层。这意味着你的物理硬盘驱动器和分区不再依赖于他们所在的硬盘驱动和分区。而是,你的操作系统所见的硬盘驱动和分区可以是由任意数目的独立硬盘驱动汇集而成或是一个软件磁盘阵列。 +正如之前所述,LVM 是介于你的操作系统和物理硬盘驱动器之间的抽象层。这意味着你的物理硬盘驱动器和分区不再依赖于他们所在的硬盘驱动和分区。而是你的操作系统所见的硬盘驱动和分区可以是由任意数目的独立硬盘汇集而成的或是一个软件磁盘阵列。 要管理 LVM,这里有很多可用的 GUI 工具,但要真正理解 LVM 配置发生的事情,最好要知道一些命令行工具。这当你在一个服务器或不提供 GUI 工具的发行版上管理 LVM 时尤为有用。 LVM 
的大部分命令和彼此都非常相似。每个可用的命令都由以下其中之一开头: -- Physical Volume = pv -- Volume Group = vg -- Logical Volume = lv +- Physical Volume (物理卷) = pv +- Volume Group (卷组)= vg +- Logical Volume (逻辑卷)= lv -物理卷命令用于在卷组中添加或删除硬盘驱动。卷组命令用于为你的逻辑卷操作更改显示的物理分区抽象集。逻辑卷命令会以分区形式显示卷组使得你的操作系统能使用指定的空间。 +物理卷命令用于在卷组中添加或删除硬盘驱动。卷组命令用于为你的逻辑卷操作更改显示的物理分区抽象集。逻辑卷命令会以分区形式显示卷组,使得你的操作系统能使用指定的空间。 ### 可下载的 LVM 备忘单 ### @@ -26,7 +26,7 @@ LVM 的大部分命令和彼此都非常相似。每个可用的命令都由以 ### 如何查看当前 LVM 信息 ### -你首先需要做的事情是检查你的 LVM 设置。s 和 display 命令和物理卷(pv)、卷组(vg)以及逻辑卷(lv)一起使用,是一个找出当前设置好的开始点。 +你首先需要做的事情是检查你的 LVM 设置。s 和 display 命令可以和物理卷(pv)、卷组(vg)以及逻辑卷(lv)一起使用,是一个找出当前设置的好起点。 display 命令会格式化输出信息,因此比 s 命令更易于理解。对每个命令你会看到名称和 pv/vg 的路径,它还会给出空闲和已使用空间的信息。 @@ -40,17 +40,17 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对 #### 创建物理卷 #### -我们会从一个完全新的没有任何分区和信息的硬盘驱动开始。首先找出你将要使用的磁盘。(/dev/sda, sdb, 等) +我们会从一个全新的没有任何分区和信息的硬盘开始。首先找出你将要使用的磁盘。(/dev/sda, sdb, 等) > 注意:记住所有的命令都要以 root 身份运行或者在命令前面添加 'sudo' 。 fdisk -l -如果之前你的硬盘驱动从没有格式化或分区,在 fdisk 的输出中你很可能看到类似下面的信息。这完全正常,因为我们会在下面的步骤中创建需要的分区。 +如果之前你的硬盘从未格式化或分区过,在 fdisk 的输出中你很可能看到类似下面的信息。这完全正常,因为我们会在下面的步骤中创建需要的分区。 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/fdisk.png.pagespeed.ce.AmAEsxm-7Q.png) -我们的新磁盘位置是 /dev/sdb,让我们用 fdisk 命令在驱动上创建一个新的分区。 +我们的新磁盘位置是 /dev/sdb,让我们用 fdisk 命令在磁盘上创建一个新的分区。 这里有大量能创建新分区的 GUI 工具,包括 [Gparted][2],但由于我们已经打开了终端,我们将使用 fdisk 命令创建需要的分区。 @@ -62,9 +62,9 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/627x145xfdisk00.png.pagespeed.ic.I7S8bjoXQG.png) -以指定的顺序输入命令创建一个使用新硬盘驱动 100% 空间的主分区并为 LVM 做好了准备。如果你需要更改分区的大小或相应多个分区,我建议使用 GParted 或自己了解关于 fdisk 命令的使用。 +以指定的顺序输入命令创建一个使用新硬盘 100% 空间的主分区并为 LVM 做好了准备。如果你需要更改分区的大小或想要多个分区,我建议使用 GParted 或自己了解一下关于 fdisk 命令的使用。 -**警告:下面的步骤会格式化你的硬盘驱动。确保在进行下面步骤之前你的硬盘驱动中没有任何信息。** +**警告:下面的步骤会格式化你的硬盘驱动。确保在进行下面步骤之前你的硬盘驱动中没有任何有用的信息。** - n = 创建新分区 - p = 创建主分区 @@ -79,9 +79,9 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对 - t = 更改分区类型 - 8e = 更改为 LVM 分区类型 -核实并将信息写入硬盘驱动器。 +核实并将信息写入硬盘。 -- p = 查看分区设置使得写入更改到磁盘之前可以回看 +- p = 查看分区设置使得在写入更改到磁盘之前可以回看 - w = 写入更改到磁盘 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/560x339xfdisk03.png.pagespeed.ic.FC8foICZsb.png) @@ -102,7 +102,7 @@ display 命令会格式化输出信息,因此比 s 命令更易于理解。对 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/vgcreate.png.pagespeed.ce.fVLzSmPZou.png) -Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称,但建议标签以 vg 开头,以便后面你使用它时能意识到这是一个卷组。 +vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称,但建议标签以 vg 开头,以便后面你使用它时能意识到这是一个卷组。 #### 创建逻辑卷 #### @@ -112,7 +112,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/lvcreate.png.pagespeed.ce.vupLB-LJEW.png) --L 命令指定逻辑卷的大小,在该情况中是 3 GB,-n 命令指定卷的名称。 指定 vgpool 所以 lvcreate 命令知道从什么卷获取空间。 +-L 命令指定逻辑卷的大小,在该情况中是 3 GB,-n 命令指定卷的名称。 指定 vgpool 以便 lvcreate 命令知道从什么卷获取空间。 #### 格式化并挂载逻辑卷 #### @@ -131,7 +131,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称 #### 重新设置逻辑卷大小 #### -逻辑卷的一个好处是你能使你的共享物理变大或变小而不需要移动所有东西到一个更大的硬盘驱动。另外,你可以添加新的硬盘驱动并同时扩展你的卷组。或者如果你有一个不使用的硬盘驱动,你可以从卷组中移除它使得逻辑卷变小。 +逻辑卷的一个好处是你能使你的存储物理地变大或变小,而不需要移动所有东西到一个更大的硬盘。另外,你可以添加新的硬盘并同时扩展你的卷组。或者如果你有一个不使用的硬盘,你可以从卷组中移除它使得逻辑卷变小。 这里有三个用于使物理卷、卷组和逻辑卷变大或变小的基础工具。 @@ -147,9 +147,9 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称 按照上面创建新分区并更改分区类型为 LVM(8e) 的步骤安装一个新硬盘驱动。然后用 pvcreate 命令创建一个 LVM 能识别的物理卷。 -#### 添加新硬盘驱动到卷组 #### +#### 添加新硬盘到卷组 #### -要添加新的硬盘驱动到一个卷组,你只需要知道你的新分区,在我们的例子中是 /dev/sdc1,以及想要添加到的卷组的名称。 +要添加新的硬盘到一个卷组,你只需要知道你的新分区,在我们的例子中是 /dev/sdc1,以及想要添加到的卷组的名称。 这会添加新物理卷到已存在的卷组中。 @@ -189,7 +189,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称 1. 调整文件系统大小 (调整之前确保已经移动文件到硬盘驱动安全的地方) 1. 减小逻辑卷 (除了 + 可以扩展大小,你也可以用 - 压缩大小) -1. 用 vgreduce 从卷组中移除硬盘驱动 +1. 
用 vgreduce 从卷组中移除硬盘 #### 备份逻辑卷 #### @@ -197,7 +197,7 @@ Vgpool 是新创建的卷组的名称。你可以使用任何你喜欢的名称 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/652x202xbanner-2.png.pagespeed.ic.VtOUuqYX1W.png) -LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该照片可以用于在不同的硬盘驱动上进行备份。生成一个备份的时候,任何需要添加到逻辑卷的新信息会如往常一样写入磁盘,但会跟踪更改使得原始快照永远不会损毁。 +LVM 获取快照的时候,会有一张和逻辑卷完全相同的“照片”,该“照片”可以用于在不同的硬盘上进行备份。生成一个备份的时候,任何需要添加到逻辑卷的新信息会如往常一样写入磁盘,但会跟踪更改使得原始快照永远不会损毁。 要创建一个快照,我们需要创建拥有足够空闲空间的逻辑卷,用于保存我们备份的时候会写入该逻辑卷的任何新信息。如果驱动并不是经常写入,你可以使用很小的一个存储空间。备份完成的时候我们只需要移除临时逻辑卷,原始逻辑卷会和往常一样。 @@ -209,7 +209,7 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/597x68xlvcreate-snapshot.png.pagespeed.ic.Rw2ivtcpPg.png) -这里我们创建了一个只有 512MB 的逻辑卷,因为驱动实际上并不会使用。512MB 的空间会保存备份时产生的任何新数据。 +这里我们创建了一个只有 512MB 的逻辑卷,因为该硬盘实际上并不会使用。512MB 的空间会保存备份时产生的任何新数据。 #### 挂载新快照 #### @@ -222,7 +222,7 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该 #### 复制快照和删除逻辑卷 #### -你剩下需要做的是从 /mnt/lvstuffbackup/ 中复制所有文件到一个外部的硬盘驱动或者打包所有文件到一个文件。 +你剩下需要做的是从 /mnt/lvstuffbackup/ 中复制所有文件到一个外部的硬盘或者打包所有文件到一个文件。 **注意:tar -c 会创建一个归档文件,-f 要指出归档文件的名称和路径。要获取 tar 命令的帮助信息,可以在终端中输入 man tar。** @@ -230,7 +230,7 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该 ![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/627x67xsnapshot-backup.png.pagespeed.ic.tw-2AK_lfZ.png) -记住备份发生的时候写到 lvstuff 的所有文件都会在我们之前创建的临时逻辑卷中被跟踪。确保备份的时候你有足够的空闲空间。 +记住备份时候写到 lvstuff 的所有文件都会在我们之前创建的临时逻辑卷中被跟踪。确保备份的时候你有足够的空闲空间。 备份完成后,卸载卷并移除临时快照。 @@ -259,10 +259,10 @@ LVM 获取快照的时候,会有一张和逻辑卷完全相同的照片,该 via: http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ 译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 -[1]:http://www.howtogeek.com/howto/36568/what-is-logical-volume-management-and-how-do-you-enable-it-in-ubuntu/ +[1]:https://linux.cn/article-5953-1.html [2]:http://www.howtogeek.com/howto/17001/how-to-format-a-usb-drive-in-ubuntu-using-gparted/ [3]:http://www.howtogeek.com/howto/33552/htg-explains-which-linux-file-system-should-you-choose/ \ No newline at end of file From 2d90d07755d4b8bcc6786007e1324fdb48f49f22 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 5 Aug 2015 00:06:26 +0800 Subject: [PATCH 110/207] PUB:20150128 7 communities driving open source development @FSSlc --- ...unities driving open source development.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) rename {translated/talk => published}/20150128 7 communities driving open source development.md (58%) diff --git a/translated/talk/20150128 7 communities driving open source development.md b/published/20150128 7 communities driving open source development.md similarity index 58% rename from translated/talk/20150128 7 communities driving open source development.md rename to published/20150128 7 communities driving open source development.md index 2074ad9e23..1f4aac1a09 100644 --- a/translated/talk/20150128 7 communities driving open source development.md +++ b/published/20150128 7 communities driving open source development.md @@ -1,12 +1,12 @@ 7 个驱动开源发展的社区 ================================================================================ -不久前,开源模式还被成熟的工业厂商以怀疑的态度认作是叛逆小孩的玩物。如今,开放的倡议和基金会在一长列的供应商提供者的支持下正蓬勃发展,而他们将开源模式视作创新的关键。 +不久前,开源模式还被成熟的工业级厂商以怀疑的态度认作是叛逆小孩的玩物。如今,开源的促进会和基金会在一长列的供应商提供者的支持下正蓬勃发展,而他们将开源模式视作创新的关键。 ![](http://images.techhive.com/images/article/2015/01/0_opensource-title-100539095-orig.jpg) ### 技术的开放发展驱动着创新 ### -在过去的 20 
几年间,技术的开放发展已被视作驱动创新的关键因素。即使那些以前将开源视作威胁的公司也开始接受这个观点 — 例如微软,如今它在一系列的开源倡议中表现活跃。到目前为止,大多数的开放发展都集中在软件方面,但甚至这个也正在改变,因为社区已经开始向开源硬件倡议方面聚拢。这里有 7 个成功地在硬件和软件方面同时促进和发展开源技术的组织。 +在过去的 20 几年间,技术的开源推进已被视作驱动创新的关键因素。即使那些以前将开源视作威胁的公司也开始接受这个观点 — 例如微软,如今它在一系列的开源的促进会中表现活跃。到目前为止,大多数的开源推进都集中在软件方面,但甚至这个也正在改变,因为社区已经开始向开源硬件倡议方面聚拢。这里介绍 7 个成功地在硬件和软件方面同时促进和发展开源技术的组织。 ### OpenPOWER 基金会 ### @@ -16,21 +16,21 @@ IBM 通过开放其基于 Power 架构的硬件和软件技术,向使用 Power IP 的独立硬件产品提供许可证等方式为基金会的建立播下种子。如今超过 70 个成员共同协作来为基于 Linux 的数据中心提供自定义的开放服务器,组件和硬件。 -今年四月,在比最新基于 x86 系统快 50 倍的数据分析能力的新的 POWER8 处理器的服务器的基础上, OpenPOWER 推出了一个技术路线图。七月, IBM 和 Google 发布了一个固件堆栈。十月见证了 NVIDIA GPU 带来加速 POWER8 系统的能力和来自 Tyan 的第一个 OpenPOWER 参考服务器。 +去年四月,在比最新基于 x86 系统快 50 倍的数据分析能力的新的 POWER8 处理器的服务器的基础上, OpenPOWER 推出了一个技术路线图。七月, IBM 和 Google 发布了一个固件堆栈。去年十月见证了 NVIDIA GPU 带来加速 POWER8 系统的能力和来自 Tyan 的第一个 OpenPOWER 参考服务器。 ### Linux 基金会 ### ![](http://images.techhive.com/images/article/2015/01/2_the-linux-foundation-100539101-orig.jpg) -于 2000 年建立的 [Linux 基金会][2] 如今成为掌控着历史上最大的开源协同发展成果,它有着超过 180 个合作成员和许多独立成员及学生成员。它赞助核心 Linux 开发者的工作并促进、保护和推进 Linux 操作系统和协作软件的开发。 +于 2000 年建立的 [Linux 基金会][2] 如今成为掌控着历史上最大的开源协同开发成果,它有着超过 180 个合作成员和许多独立成员及学生成员。它赞助 Linux 核心开发者的工作并促进、保护和推进 Linux 操作系统,并协调软件的协作开发。 -它最为成功的协作项目包括 Code Aurora Forum (一个拥有为移动无线产业服务的企业财团),MeeGo (一个为移动设备和 IVI (注:指的是车载消息娱乐设备,为 In-Vehicle Infotainment 的简称) 构建一个基于 Linux 内核的操作系统的项目) 和 Open Virtualization Alliance (开放虚拟化联盟,它促进自由和开源软件虚拟化解决方案的采用)。 +它最为成功的协作项目包括 Code Aurora Forum (一个拥有为移动无线产业服务的企业财团),MeeGo (一个为移动设备和 IVI [注:指的是车载消息娱乐设备,为 In-Vehicle Infotainment 的简称] 构建一个基于 Linux 内核的操作系统的项目) 和 Open Virtualization Alliance (开放虚拟化联盟,它促进自由和开源软件虚拟化解决方案的采用)。 ### 开放虚拟化联盟 ### ![](http://images.techhive.com/images/article/2015/01/3_open-virtualization-alliance-100539102-orig.jpg) -[开放虚拟化联盟(OVA)][3] 的存在目的为:通过提供使用案例和对具有互操作性的通用接口和 API 的发展提供支持,来促进自由、开源软件的虚拟化解决方案例如 KVM 的采用。KVM 将 Linux 内核转变为一个虚拟机管理程序。 +[开放虚拟化联盟(OVA)][3] 的存在目的为:通过提供使用案例和对具有互操作性的通用接口和 API 的发展提供支持,来促进自由、开源软件的虚拟化解决方案,例如 KVM 的采用。KVM 将 Linux 内核转变为一个虚拟机管理程序。 如今, KVM 已成为和 OpenStack 共同使用的最为常见的虚拟机管理程序。 @@ -40,31 +40,31 @@ IBM 通过开放其基于 Power 架构的硬件和软件技术,向使用 Power 原本作为一个 IaaS(基础设施即服务) 产品由 NASA 和 Rackspace 于 2010 年启动,[OpenStack 基金会][4] 已成为最大的开源项目聚居地之一。它拥有超过 200 家公司成员,其中包括 AT&T, AMD, Avaya, Canonical, Cisco, Dell 和 HP。 -大约以 6 个月为一个发行周期,基金会的 OpenStack 项目被发展用来通过一个基于 Web 的仪表盘,命令行工具或一个 RESTful 风格的 API 来控制或调配流经一个数据中心的处理存储池和网络资源。至今为止,基金会支持的协作发展已经孕育出了一系列 OpenStack 组件,其中包括 OpenStack Compute(一个云计算网络控制器,它是一个 IaaS 系统的主要部分),OpenStack Networking(一个用以管理网络和 IP 地址的系统) 和 OpenStack Object Storage(一个可扩展的冗余存储系统)。 +大约以 6 个月为一个发行周期,基金会的 OpenStack 项目开发用于通过一个基于 Web 的仪表盘,命令行工具或一个 RESTful 风格的 API 来控制或调配流经一个数据中心的处理存储池和网络资源。至今为止,基金会支持的协同开发已经孕育出了一系列 OpenStack 组件,其中包括 OpenStack Compute(一个云计算网络控制器,它是一个 IaaS 系统的主要部分),OpenStack Networking(一个用以管理网络和 IP 地址的系统) 和 OpenStack Object Storage(一个可扩展的冗余存储系统)。 ### OpenDaylight ### ![](http://images.techhive.com/images/article/2015/01/5_opendaylight-100539097-orig.jpg) -作为来自 Linux 基金会的另一个协作项目, [OpenDaylight][5] 是一个由诸如 Dell, HP, Oracle 和 Avaya 等行业厂商于 2013 年 4 月建立的联合倡议。它的任务是建立一个由社区主导,开放,有工业支持的针对 Software-Defined Networking (SDN) 的包含代码和蓝图的框架。其思路是提供一个可直接部署的全功能 SDN 平台,而不需要其他组件,供应商可提供附件组件和增强组件。 +作为来自 Linux 基金会的另一个协作项目, [OpenDaylight][5] 是一个由诸如 Dell, HP, Oracle 和 Avaya 等行业厂商于 2013 年 4 月建立的联合倡议。它的任务是建立一个由社区主导、开源、有工业支持的针对软件定义网络( SDN: Software-Defined Networking)的包含代码和蓝图的框架。其思路是提供一个可直接部署的全功能 SDN 平台,而不需要其他组件,供应商可提供附件组件和增强组件。 ### Apache 软件基金会 ### ![](http://images.techhive.com/images/article/2015/01/6_apache-software-foundation-100539098-orig.jpg) -[Apache 软件基金会 (ASF)][7] 是将近 150 个顶级项目的聚居地,这些项目涵盖从开源企业级自动化软件到与 
Apache Hadoop 相关的分布式计算的整个生态系统。这些项目分发企业级、可免费获取的软件产品,而 Apache 协议则是为了让无论是商业用户还是个人用户更方便地部署 Apache 的产品。 +[Apache 软件基金会 (ASF)][7] 是将近 150 个顶级项目的聚居地,这些项目涵盖从开源的企业级自动化软件到与 Apache Hadoop 相关的分布式计算的整个生态系统。这些项目分发企业级、可免费获取的软件产品,而 Apache 协议则是为了让无论是商业用户还是个人用户更方便地部署 Apache 的产品。 -ASF 于 1999 年作为一个会员制,非盈利公司注册,其核心为精英 — 要成为它的成员,你必须首先在基金会的一个或多个协作项目中做出积极贡献。 +ASF 是 1999 年成立的一个会员制,非盈利公司,以精英为其核心 — 要成为它的成员,你必须首先在基金会的一个或多个协作项目中做出积极贡献。 ### 开放计算项目 ### ![](http://images.techhive.com/images/article/2015/01/7_open-compute-project-100539099-orig.jpg) -作为 Facebook 重新设计其 Oregon 数据中心的副产物, [开放计算项目][7] 旨在发展针对数据中心的开放硬件解决方案。 OCP 是一个由廉价、无浪费的服务器,针对 Open Rack(为数据中心设计的机架标准,来让机架集成到数据中心的基础设施中) 的模块化 I/O 存储和一个相对 "绿色" 的数据中心设计方案等构成。 +作为 Facebook 重新设计其 Oregon 数据中心的副产物, [开放计算项目][7] 旨在发展针对数据中心的开源硬件解决方案。 OCP 是一个由廉价无浪费的服务器、针对 Open Rack(为数据中心设计的机架标准,来让机架集成到数据中心的基础设施中) 的模块化 I/O 存储和一个相对 "绿色" 的数据中心设计方案等构成。 OCP 董事会成员包括来自 Facebook,Intel,Goldman Sachs,Rackspace 和 Microsoft 的代表。 -OCP 最近宣布了许可证的两个选择: 一个类似 Apache 2.0 的允许衍生工作的许可证和一个更规范的鼓励回滚到原有软件的更改的许可证。 +OCP 最近宣布了有两种可选的许可证: 一个类似 Apache 2.0 的允许衍生工作的许可证,和一个更规范的鼓励将更改回馈到原有软件的许可证。 -------------------------------------------------------------------------------- @@ -72,7 +72,7 @@ via: http://www.networkworld.com/article/2866074/opensource-subnet/7-communities 作者:[Thor Olavsrud][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 77b1de1f10b9e5d07426da4163dd6b57197078ee Mon Sep 17 00:00:00 2001 From: joeren Date: Wed, 5 Aug 2015 07:42:36 +0800 Subject: [PATCH 111/207] Update 20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md --- ...LVM on Ubuntu for Easy Partition Resizing and Snapshots.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md b/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md index 883c5e3203..2cb09193b6 100644 --- a/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md +++ b/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md @@ -1,4 +1,4 @@ - +Translating by GOLinux! 
How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots ================================================================================ ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035707bbd74.png.pagespeed.ic.9_yebxUF1C.png) @@ -65,4 +65,4 @@ via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition [3]:http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/ [4]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ [5]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ -[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ \ No newline at end of file +[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ From d8b9908e0f63f78c4156bedc13c45698bd21f205 Mon Sep 17 00:00:00 2001 From: XLCYun Date: Wed, 5 Aug 2015 08:29:58 +0800 Subject: [PATCH 112/207] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=8E=9F=E6=96=87=20?= =?UTF-8?q?=20XLCYun?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Right & Wrong - Page 4 - GNOME Settings.md | 52 ------------------- 1 file changed, 52 deletions(-) delete mode 100644 sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md diff --git a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md b/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md deleted file mode 100644 index bf233ce5d3..0000000000 --- a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md +++ /dev/null @@ -1,52 +0,0 @@ -Translating by XLCYun. -A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 4 - GNOME Settings -================================================================================ -### Settings ### - -There are a few specific KDE Control modules that I am going to pick at, mostly because they are so laughable horrible compared to their gnome counter-part that its honestly pathetic. - -First one up? Printers. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920) - -Gnome is on the left, KDE is on the right. You know what the difference is between the printer applet on the left, and the one on the right? When I opened up Gnome Control Center and hit "Printers" the applet popped up and nothing happened. When I opened up KDE System Settings and hit "Printers" I got a password prompt. Before I was even allowed to LOOK at the printers I had to give up ROOT'S password. - -Let me just re-iterate that. In this, the days of PolicyKit and Logind, I am still being asked for Root's password for what should be a sudo operation. I didn't even SETUP root's password when I installed the system. I had to drop down to Konsole and run 'sudo passwd root' so that I could GIVE root a password so that I could go back into System Setting's printer applet and then give up root's password to even LOOK at what printers were available. Once I did that I got prompted for root's password AGAIN when I hit "Add Printer" then I got prompted for root's password AGAIN after I went through and selected a printer and driver. Three times I got asked for ROOT'S password just to add a printer to the system. 
- -When I added a printer under Gnome I didn't get prompted for my SUDO password until I hit "Unlock" in the printer applet. I got asked once, then I never got asked again. KDE, I am begging you... Adopt Gnome's "Unlock" methodology. Do not prompt for a password until you really need one. Furthermore, whatever library is out there that allows for KDE applications to bypass PolicyKit / Logind (if its available) and prompt directly for root... Bin that code. If this was a multi-user system I either have to give up root's password, or be there every second of every day in order to put it in any time a user might have to update, change, or add a new printer. Both options are completely unacceptable. - -One more thing... - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920) - -Question to the forums: What looks cleaner to you? I had this realization when I was writing this article: Gnome's applet makes it very clear where any additional printers are going to go, they set aside a column on the left to list them. Before I added a second printer to KDE, and it suddenly grew a left side column, I had this nightmare-image in my head of the applet just shoving another icon into the screen and them being listed out like preview images in a folder of pictures. I was pleasantly surprised to see that I was wrong but the fact that the applet just 'grew' another column that didn't exist before and drastically altered its presentation is not really 'good' either. It's a design that's confusing, shocking, and non-intuitive. - -Enough about printers though... Next KDE System Setting that is up for my public stoning? Multimedia, Aka Phonon. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920) - -As always, Gnome's on the left, KDE is on the right. Let's just run through the Gnome setting first... The eyes go left to right, top to bottom, right? So let's do the same. First up: volume control slider. The blue hint against the empty bar with 100% clearly marked removes all confusion about which way is "volume up." Immediately after the slider is an easy On/Off toggle that functions a mute on/off. Points to Gnome for remembering what the volume was set to BEFORE I muted sound, and returning to that same level AFTER I press volume-up to un-mute. Kmixer, you amnesiac piece of crap, I wish I could say as much about you. - -Moving on! Tabbed options for Output, Input and Applications? With per application volume controls within easy reach? Gnome I love you more and more with every passing second. Balance options, sound profiles, and a clearly marked "Test Speakers" option. - -I'm not sure how this could have been implemented in a cleaner, more concise way. Yes, it's just a Gnome-ized Pavucontrol but I think that's the point. Pavucontrol got it mostly right to begin with, the Sound applet in Gnome Control Center just refines it slightly to make it even closer to perfect. - -Phonon, you're up. And let me start by saying: What the fsck am I looking at? -I- get that I am looking at the priority list for the audio devices on the system, but the way it is presented is a bit of a nightmare. Also where are the things the user probably cares about? A priority list is a great thing to have, it SHOULD be available, but it's something the user messes with once or twice and then never touches again. 
It's not important, or common, enough to warrant being front and center. Where's the volume slider? Where's per application controls? The things that users will be using more frequently? Well.. those are under Kmix, a separate program, with its own settings and configuration... not under the System Settings... which kind of makes System Settings a bit of a misnomer. And in that same vein, Let's hop over to network settings. - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920) - -Presented above is the Gnome Network Settings. KDE's isn't included because of the reason I'm about to hit on. If you go to KDE's System Settings and hit any of the three options under the "Network" Section you get tons of options: Bluetooth settings, default username and password for Samba shares (Seriously, "Connectivity" only has 2 options: Username and password for SMB shares. How the fsck does THAT deserve the all-inclusive title "Connectivity"?), controls for Browser Identification (which only work for Konqueror...a dead project), proxy settings, etc... Where's my wifi settings? They aren't there. Where are they? Well, they are in the network applet's private settings... not under Network Settings... - -KDE, you're killing me. You have "System Settings" USE IT! - --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4 - -作者:Eric Griffith -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 67e1d8db380b6fdbd959f30e473853725945904c Mon Sep 17 00:00:00 2001 From: XLCYun Date: Wed, 5 Aug 2015 08:36:39 +0800 Subject: [PATCH 113/207] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?= =?UTF-8?q?=E7=AC=AC=E5=9B=9B=E8=8A=82?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...Right & Wrong - Page 4 - GNOME Settings.md | 54 +++++++++++++++++++ 1 file changed, 54 insertions(+) create mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md new file mode 100644 index 0000000000..1c0cc4bd86 --- /dev/null +++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md @@ -0,0 +1,54 @@ +将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第四节 - GNOME设置 +================================================================================ +### Settings设置 ### + +在这我要挑一挑几个特定KDE控制模块的毛病,大部分原因是因为相比它们的对手GNOME来说,糟糕得太可笑,实话说,真是悲哀。 + +第一个接招的?打印机。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920) + +GNOME在左,KDE在右。你知道左边跟右边的打印程序有什么区别吗?当我在GNOME控制中心打开“打印机”时,程序窗口弹出来了,之后没有也没发生。而当我在KDE系统设置打开“打印机”时,我收到了一条密码提示。甚至我都没能看一眼打印机呢,我就必须先交出ROOT密码。 + +让我再重复一遍。在今天,PolicyKit和Logind的日子里,对一个应该是sudo的操作,我依然被询问要求ROOT的密码。我安装系统的时候甚至都没设置root密码。所以我必须跑到Konsole去,然后运行'sudo passwd root'命令,这样我才能给root设一个密码,这样我才能回到系统设置中的打印程序,然后交出root密码,然后仅仅是看一看哪些打印机可用。完成了这些工作后,当我点击“添加打印机”时,我再次收到请求ROOT密码的提示,当我解决了它后再选择一个打印机和驱动时,我再次收到请求ROOT密码的提示。仅仅是为了添加一个打印机到系统我就收到三次密码请求。 + 
+而在GNOME下添加打印机,在点击打印机程序中的”解锁“之前,我没有收到任何请求SUDO密码的提示。整个过程我只被请求过一次,仅此而已。KDE,求你了……采用GNOME的”解锁“模式吧。不到一定需要的时候不要发出提示。还有,不管是哪个库,只要它允许KDE应用程序绕过PolicyKit/Logind(如果有的话)并直接请求ROOT权限……那就把它封进箱里吧。如果这是个多用户系统,那我要么必须交出ROOT密码,要么我必须时时刻刻呆着以免有一个用户需要升级、更改或添加一个新的打印机。而这两种情况都是完全无法接受的。 + +有还一件事…… + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920) + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920) + +给论坛的问题:怎么样看起来更简洁?我在写这篇文章时意识到:当有任何的附加打印机准备好时,Gnome打印机程序会把过程做得非常简洁,它们在左边上放了一个竖直栏来列出这些打印机。而我在KDE添加第二台打印机时,它突然增加出一个左边栏来。而在添加之前,我脑海中已经有了一个恐怖的画面它会像图片文件夹显示预览图一样,直接插入另外一个图标到界面里去。我很高兴也很惊讶的看到我是错的。但是事实是它直接”长出”另外一个从末存在的竖直栏,彻底改变了它的界面布局,而这样也称不上“好”。终究还是一种令人困惑,奇怪而又不直观的设计。 + +打印机说得够多了……下一个接受我公开石刑的KDE系统设置是?多媒体,即Phonon。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920) + +一如既往,GNOME在左边,KDE在右边。让我们先看看GNOME的系统设置先……眼睛从左到右,从上到下,对吧?来吧,就这样做。首先:音量控制滑条。滑条中的蓝色条与空白条百分百清晰地消除了哪边是“音量增加”的困惑。在音量控制条后马上就是一个On/Off开关,用来开关静音功能。Gnome的再次得分在于静音后能记住当前设置的音量,而在点击音量增加按钮取消静音后能回到原来设置的音量中来。Kmixer,你个健忘的垃圾,我真的希望我能多讨论你。 + + +继续!输入输出和应用程序的标签选项?每一个应用程序的音量随时可控?Gnome,每过一秒,我爱你越深。均衡的选项设置,声音配置,和清晰地标上标志的“测试麦克风”选项。 + + + +我不清楚它能否以一种更干净更简洁的设计实现。是的,它只是一个Gnome化的Pavucontrol,但我想这就是重要的地方。Pavucontrol在这方面几乎完全做对了,Gnome控制中心中的“声音”应用程序的改善使它向完美更进了一步。 + +Phonon,该你上了。但开始前我想说:我TM看到的是什么?我知道我看到的是音频设备的权限列表,但是它呈现的方式有点太坑。还有,那些用户可能关心的那些东西哪去了?拥有一个权限列表当然很好,它也应该存在,但问题是权限列表属于那种用户乱搞一两次之后就不会再碰的东西。它还不够重要,或者说常用到可以直接放在正中间位置的程度。音量控制滑块呢?对每个应用程序的音量控制功能呢?那些用户使用最频繁的东西呢?好吧,它们在Kmix中,一个分离的程序,拥有它自己的配置选项……而不是在系统设置下……这样真的让“系统设置”这个词变得有点用词不当。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920) + +上面展示的Gnome的网络设置。KDE的没有展示,原因就是我接下来要吐槽的内容了。如果你进入KDE的系统设置里,然后点击“网络”区域中三个选项中的任何一个,你会得到一大堆的选项:蓝牙设置,Samba分享的默认用户名和密码(说真的,“连通性(Connectivity)”下面只有两个选项:SMB的用户名和密码。TMD怎么就配得上“连通性”这么大的词?),浏览器身份验证控制(只有Konqueror能用……一个已经倒闭的项目),代理设置,等等……我的wifi设置哪去了?它们没在这。哪去了?好吧,它们在网络应用程序的设置里面……而不是在网络设置里…… + +KDE,你这是要杀了我啊,你有“系统设置”当凶器,拿着它动手吧! + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From e9411c42a3eeead7b498a8950f1625bcbb326c5b Mon Sep 17 00:00:00 2001 From: joeren Date: Wed, 5 Aug 2015 09:04:29 +0800 Subject: [PATCH 114/207] [Translated]201150318How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md --- ...r Easy Partition Resizing and Snapshots.md | 68 ------------------- ...r Easy Partition Resizing and Snapshots.md | 67 ++++++++++++++++++ 2 files changed, 67 insertions(+), 68 deletions(-) delete mode 100644 sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md create mode 100644 translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md diff --git a/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md b/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md deleted file mode 100644 index 2cb09193b6..0000000000 --- a/sources/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md +++ /dev/null @@ -1,68 +0,0 @@ -Translating by GOLinux! 
-How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots -================================================================================ -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035707bbd74.png.pagespeed.ic.9_yebxUF1C.png) - -Ubuntu’s installer offers an easy “Use LVM” checkbox. The description says it enables Logical Volume Management so you can take snapshots and more easily resize your hard disk partitions — here’s how to do that. - -LVM is a technology that’s similar to [RAID arrays][1] or [Storage Spaces on Windows][2] in some ways. While this technology is particularly useful on servers, it can be used on desktop PCs, too. - -### Should You Use LVM With Your New Ubuntu Installation? ### - -The first question is whether you even want to use LVM with your Ubuntu installation. Ubuntu makes this easy to enable with a quick click, but this option isn’t enabled by default. As the installer says, this allows you to resize partitions, create snapshots, merge multiple disks into a single logical volume, and so on — all while the system is running. Unlike with typical partitions, you don’t have to shut down your system, boot from a live CD or USB drive, and [resize your partitions while they aren’t in use][3]. - -To be perfectly honest, the average Ubuntu desktop user probably won’t realize whether they’re using LVM or not. But, if you want to do more advanced things later, LVM can help. LVM is potentially more complex, which could cause problems if you need to recover your data later — especially if you’re not that experienced with it. There shouldn’t be a noticeable performance penalty here — LVM is implemented right down in the Linux kernel. - -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035cbada6ae.png.pagespeed.ic.cnqyiKfCvi.png) - -### Logical Volume Management Explained ### - -We’re previously [explained what LVM is][4]. In a nutshell, it provides a layer of abstraction between your physical disks and the partitions presented to your operating system. For example, your computer might have two hard drives inside it, each 1 TB in size. You’d have to have at least two partitions on these disks, and each of these partitions would be 1 TB in size. - -LVM provides a layer of abstraction over this. Instead of the traditional partition on a disk, LVM would treat the disks as two separate “physical volumes” after you initialize them. You could then create “logical volumes” based on these physical volumes. For example, you could combine those two 1 TB disks into a single 2 TB partition. Your operating system would just see a 2 TB volume, and LVM would deal with everything in the background. A group of physical volumes and logical volumes is known as a “volume group.” A typical system will just have a single volume group. - -This layer of abstraction makes it possibly to easily resize partitions, combine multiple disks into a single volume, and even take “snapshots” of a partition’s file system while it’s running, all without unmounting it. - -Note that merging multiple disks into a single volume can be a bad idea if you’re not creating backups. It’s like with RAID 0 — if you combine two 1 TB volumes into a single 2 TB volume, you could lose important data on the volume if just one of your hard disks fails. Backups are crucial if you go this route. 
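To make the abstraction concrete, the same three-layer setup can be sketched with the underlying terminal commands. This is only an illustration; the device names (/dev/sdb1, /dev/sdc1) and the volume names (vgpool, lvstuff) are placeholders, not anything your system creates for you:

    sudo pvcreate /dev/sdb1 /dev/sdc1            # initialize two partitions as physical volumes
    sudo vgcreate vgpool /dev/sdb1 /dev/sdc1     # pool them into a single volume group
    sudo lvcreate -l 100%FREE -n lvstuff vgpool  # carve one logical volume from the whole pool

The operating system then sees a single /dev/vgpool/lvstuff device spanning both disks, and LVM handles the mapping in the background.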
- -### Graphical Utilities for Managing Your LVM Volumes ### - -Traditionally, [LVM volumes are managed with Linux terminal commands][5].These will work for you on Ubuntu, but there’s an easier, graphical method anyone can take advantage of. If you’re a Linux user used to using GParted or a similar partition manager, don’t bother — GParted doesn’t have support for LVM disks. - -Instead, you can use the Disks utility included along with Ubuntu for this. This utility is also known as GNOME Disk Utility, or Palimpsest. Launch it by clicking the icon on the dash, searching for Disks, and pressing Enter. Unlike GParted, the Disks utility will display your LVM partitions under “Other Devices,” so you can format them and adjust other options if you need to. This utility will also work from a live CD or USB drive, too. - -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550361b3772f7.png.pagespeed.ic.nZWwLJUywR.png) - -Unfortunately, the Disks utility doesn’t include support for taking advantage of LVM’s most powerful features. There’s no options for managing your volume groups, extending partitions, or taking snapshots. You could do that from the terminal, but you don’t have to. Instead, you can open the Ubuntu Software Center, search for LVM, and install the Logical Volume Management tool. You could also just run the **sudo apt-get install system-config-lvm** command in a terminal window. After it’s installed, you can open the Logical Volume Management utility from the dash. - -This graphical configuration tool was made by Red Hat. It’s a bit dated, but it’s the only graphical way to do this stuff without resorting to terminal commands. - -Let’s say you wanted to add a new physical volume to your volume group. You’d open the tool, select the new disk under Uninitialized Entries, and click the “Initialize Entry” button. You’d then find the new physical volume under Unallocated Volumes, and you could use the “Add to existing Volume Group” button to add it to the “ubuntu-vg” volume group Ubuntu created during the installation process. - -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363106789c.png.pagespeed.ic.drVInt3Weq.png) - -The volume group view shows you a visual overview of your physical volumes and logical volumes. Here, we have two physical partitions across two separate hard drives. We have a swap partition and a root partition, just as Ubuntu sets up its partitioning scheme by default. Because we’ve added a second physical partition from another drive, there’s now a good chunk of unused space. - -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363f631c19.png.pagespeed.ic.54E_Owcq8y.png) - -To expand a logical partition into the physical space, you could select it under Logical View, click Edit Properties, and modify the size to grow the partition. You could also shrink it from here. - -![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55036893712d3.png.pagespeed.ic.ce7y_Mt0uF.png) - -The other options in system-config-lvm allow you to set up snapshots and mirroring. You probably won’t need these features on a typical desktop, but they’re available graphically here. Remember, you can also [do all of this with terminal commands][6]. 
- --------------------------------------------------------------------------------- - -via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition-resizing-and-snapshots/ - -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[1]:http://www.howtogeek.com/162676/how-to-use-multiple-disks-intelligently-an-introduction-to-raid/ -[2]:http://www.howtogeek.com/109380/how-to-use-windows-8s-storage-spaces-to-mirror-combine-drives/ -[3]:http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/ -[4]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ -[5]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ -[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ diff --git a/translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md b/translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md new file mode 100644 index 0000000000..2e66e27f31 --- /dev/null +++ b/translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md @@ -0,0 +1,67 @@ +Ubuntu上使用LVM轻松调整分区并制作快照 +================================================================================ +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035707bbd74.png.pagespeed.ic.9_yebxUF1C.png) + +Ubuntu的安装器提供了一个轻松“使用LVM”的复选框。说明中说,它启用了逻辑卷管理,因此你可以制作快照,并更容易地调整硬盘分区大小——这里将为大家讲述如何完成这些操作。 + +LVM是一种技术,某种程度上和[RAID阵列][1]或[Windows上的存储空间][2]类似。虽然该技术在服务器上更为有用,但是它也可以在桌面端PC上使用。 + +### 你应该在新安装Ubuntu时使用LVM吗? ### + +第一个问题是,你是否想要在安装Ubuntu时使用LVM?如果是,那么Ubuntu让这一切变得很简单,只需要轻点鼠标就可以完成,但是该选项默认是不启用的。正如安装器所说的,它允许你调整分区、创建快照、合并多个磁盘到一个逻辑卷等等——所有这一切都可以在系统运行时完成。不同于传统分区,你不需要关掉你的系统,从Live CD或USB驱动,然后[调整这些不使用的分区][3]。 + +完全坦率地说,普通Ubuntu桌面用户可能不会意识到他们是否正在使用LVM。但是,如果你想要在今后做一些更高深的事情,那么LVM就会有所帮助了。LVM可能更复杂,可能会在你今后恢复数据时会导致问题——尤其是在你经验不足时。这里不会有显著的性能损失——LVM是彻底地在Linux内核中实现的。 + +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035cbada6ae.png.pagespeed.ic.cnqyiKfCvi.png) + +### 逻辑卷管理说明 ### + +前面,我们已经[说明了何谓LVM][4]。概括来讲,它在你的物理磁盘和呈现在你系统中的分区之间提供了一个抽象层。例如,你的计算机可能装有两个硬盘驱动器,它们的大小都是 1 TB。你必须得在这些磁盘上至少分两个区,每个区大小 1 TB。 + +LVM就在这些分区上提供了一个抽象层。用于取代磁盘上的传统分区,LVM将在你对这些磁盘初始化后,将它们当作独立的“物理卷”来对待。然后,你就可以基于这些物理卷创建“逻辑卷”。例如,你可以将这两个 1 TB 的磁盘组合成一个 2 TB 的分区,你的系统将只看到一个 2 TB 的卷,而LVM将会在后台处理这一切。一组物理卷以及一组逻辑卷被称之为“卷组”,一个标准的系统只会有一个卷组。 + +该抽象层使得调整分区、将多个磁盘组合成单个卷、甚至为一个运行着的分区的文件系统创建“快照”变得十分简单,而完成所有这一切都无需先卸载分区。 + +注意,如果你没有创建备份,那么将多个磁盘合并成一个卷将会是个糟糕的想法。它就像RAID 0——如果你将两个 1 TB 的卷组合成一个 2 TB 的卷,只要其中一个硬盘失败,你将丢失该卷上的重要数据。所以,如果你要走这条路,那么备份就及其重要。 + +### 管理LVM卷的图形化工具 ### + +通常,[LVM通过Linux终端命令来管理][5]。这在Ubuntu上也行得通,但是有个更简单的图形化方法可供大家采用。如果你是一个Linux用户,对GParted或者与其类似的分区管理器熟悉,算了,别瞎掰了——GParted根本不支持LVM磁盘。 + +然而,你可以使用Ubuntu附带的磁盘工具。该工具也被称之为GNOME磁盘工具,或者叫Palimpsest。点击停靠盘上的图标来开启它吧,搜索磁盘然后敲击回车。不像GParted,该磁盘工具将会在“其它设备”下显示LVM分区,因此你可以根据需要格式化这些分区,也可以调整其它选项。该工具在Live CD或USB 驱动下也可以使用。 + +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550361b3772f7.png.pagespeed.ic.nZWwLJUywR.png) + +不幸的是,该磁盘工具不支持LVM的大多数强大的特性,没有管理卷组、扩展分区,或者创建快照等选项。对于这些操作,你可以通过终端来实现,但是你没有那个必要。相反,你可以打开Ubuntu软件中心,搜索关键字LVM,然后安装逻辑卷管理工具,你可以在终端窗口中运行**sudo apt-get install system-config-lvm**命令来安装它。安装完之后,你就可以从停靠盘上打开逻辑卷管理工具了。 + +这个图形化配置工具是由红帽公司开发的,虽然有点陈旧了,但却是唯一的图形化方式,你可以通过它来完成上述操作,将那些终端命令抛诸脑后了。 + 
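下面是一个简单的安装示意(假设你的系统使用 apt;包名以软件中心中实际显示的为准):

    sudo apt-get install system-config-lvm   # 安装这个图形化 LVM 管理工具
    sudo system-config-lvm                   # 以管理员权限启动它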
+比如说,你想要添加一个新的物理卷到卷组中。你可以打开该工具,选择未初始化条目下的新磁盘,然后点击“初始化条目”按钮。然后,你就可以在未分配卷下找到新的物理卷了,你可以使用“添加到现存卷组”按钮来将它添加到“ubuntu-vg”卷组,这是Ubuntu在安装过程中创建的卷组。 + +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363106789c.png.pagespeed.ic.drVInt3Weq.png) + +卷组视图会列出你所有物理卷和逻辑卷的总览。这里,我们有两个横跨两个独立硬盘驱动器的物理分区,我们有一个交换分区和一个根分区,就像Ubuntu默认设置的分区图表。由于我们从另一个驱动器添加了第二个物理分区,现在那里有大量未使用空间。 + +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363f631c19.png.pagespeed.ic.54E_Owcq8y.png) + +要扩展逻辑分区到物理空间,你可以在逻辑视图下选择它,点击编辑属性,然后修改大小来扩大分区。你也可以在这里缩减分区。 + +![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55036893712d3.png.pagespeed.ic.ce7y_Mt0uF.png) + +system-config-lvm的其它选项允许你设置快照和镜像。对于传统桌面而言,你或许不需要这些特性,但是在这里也可以通过图形化处理。记住,你也可以[使用终端命令完成这一切][6]。 + +-------------------------------------------------------------------------------- + +via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition-resizing-and-snapshots/ + +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[1]:http://www.howtogeek.com/162676/how-to-use-multiple-disks-intelligently-an-introduction-to-raid/ +[2]:http://www.howtogeek.com/109380/how-to-use-windows-8s-storage-spaces-to-mirror-combine-drives/ +[3]:http://www.howtogeek.com/114503/how-to-resize-your-ubuntu-partitions/ +[4]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ +[5]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ +[6]:http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/ From c9ee20d2d32beefb6ae69c12d2ae696ce352213b Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 5 Aug 2015 13:38:18 +0800 Subject: [PATCH 115/207] Delete 20150717 How to monitor NGINX- Part 1.md --- .../20150717 How to monitor NGINX- Part 1.md | 409 ------------------ 1 file changed, 409 deletions(-) delete mode 100644 sources/tech/20150717 How to monitor NGINX- Part 1.md diff --git a/sources/tech/20150717 How to monitor NGINX- Part 1.md b/sources/tech/20150717 How to monitor NGINX- Part 1.md deleted file mode 100644 index 690ab192ba..0000000000 --- a/sources/tech/20150717 How to monitor NGINX- Part 1.md +++ /dev/null @@ -1,409 +0,0 @@ -translation by strugglingyouth -How to monitor NGINX - Part 1 -================================================================================ -![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) - -### What is NGINX? ### - -[NGINX][1] (pronounced “engine X”) is a popular HTTP server and reverse proxy server. As an HTTP server, NGINX serves static content very efficiently and reliably, using relatively little memory. As a [reverse proxy][2], it can be used as a single, controlled point of access for multiple back-end servers or for additional applications such as caching and load balancing. NGINX is available as a free, open-source product or in a more full-featured, commercially distributed version called NGINX Plus. - -NGINX can also be used as a mail proxy and a generic TCP proxy, but this article does not directly address NGINX monitoring for these use cases. - -### Key NGINX metrics ### - -By monitoring NGINX you can catch two categories of issues: resource issues within NGINX itself, and also problems developing elsewhere in your web infrastructure. 
Some of the metrics most NGINX users will benefit from monitoring include **requests per second**, which provides a high-level view of combined end-user activity; **server error rate**, which indicates how often your servers are failing to process seemingly valid requests; and **request processing time**, which describes how long your servers are taking to process client requests (and which can point to slowdowns or other problems in your environment). - -More generally, there are at least three key categories of metrics to watch: - -- Basic activity metrics -- Error metrics -- Performance metrics - -Below we’ll break down a few of the most important NGINX metrics in each category, as well as metrics for a fairly common use case that deserves special mention: using NGINX Plus for reverse proxying. We will also describe how you can monitor all of these metrics with your graphing or monitoring tools of choice. - -This article references metric terminology [introduced in our Monitoring 101 series][3], which provides a framework for metric collection and alerting. - -#### Basic activity metrics #### - -Whatever your NGINX use case, you will no doubt want to monitor how many client requests your servers are receiving and how those requests are being processed. - -NGINX Plus can report basic activity metrics exactly like open-source NGINX, but it also provides a secondary module that reports metrics slightly differently. We discuss open-source NGINX first, then the additional reporting capabilities provided by NGINX Plus. - -**NGINX** - -The diagram below shows the lifecycle of a client connection and how the open-source version of NGINX collects metrics during a connection. - -![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png) - -Accepts, handled, and requests are ever-increasing counters. Active, waiting, reading, and writing grow and shrink with request volume. - -注:表格 -
| Name | Description | Metric type |
| --- | --- | --- |
| accepts | Count of client connections attempted by NGINX | Resource: Utilization |
| handled | Count of successful client connections | Resource: Utilization |
| active | Currently active client connections | Resource: Utilization |
| dropped (calculated) | Count of dropped connections (accepts – handled) | Work: Errors* |
| requests | Count of client requests | Work: Throughput |

*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
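For reference, these counters come from NGINX's stub_status module; fetching the status page looks roughly like the sketch below (the /nginx_status path is an example, and the numbers are invented for illustration):

    $ curl http://localhost/nginx_status
    Active connections: 291
    server accepts handled requests
     16630948 16630948 31070465
    Reading: 6 Writing: 179 Waiting: 106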
- -The **accepts** counter is incremented when an NGINX worker picks up a request for a connection from the OS, whereas **handled** is incremented when the worker actually gets a connection for the request (by establishing a new connection or reusing an open one). These two counts are usually the same—any divergence indicates that connections are being **dropped**, often because a resource limit, such as NGINX’s [worker_connections][4] limit, has been reached. - -Once NGINX successfully handles a connection, the connection moves to an **active** state, where it remains as client requests are processed: - -Active state - -- **Waiting**: An active connection may also be in a Waiting substate if there is no active request at the moment. New connections can bypass this state and move directly to Reading, most commonly when using “accept filter” or “deferred accept”, in which case NGINX does not receive notice of work until it has enough data to begin working on the response. Connections will also be in the Waiting state after sending a response if the connection is set to keep-alive. -- **Reading**: When a request is received, the connection moves out of the waiting state, and the request itself is counted as Reading. In this state NGINX is reading a client request header. Request headers are lightweight, so this is usually a fast operation. -- **Writing**: After the request is read, it is counted as Writing, and remains in that state until a response is returned to the client. That means that the request is Writing while NGINX is waiting for results from upstream systems (systems “behind” NGINX), and while NGINX is operating on the response. Requests will often spend the majority of their time in the Writing state. - -Often a connection will only support one request at a time. In this case, the number of Active connections == Waiting connections + Reading requests + Writing requests. However, the newer SPDY and HTTP/2 protocols allow multiple concurrent requests/responses to be multiplexed over a connection, so Active may be less than the sum of Waiting, Reading, and Writing. (As of this writing, NGINX does not support HTTP/2, but expects to add support during 2015.) - -**NGINX Plus** - -As mentioned above, all of open-source NGINX’s metrics are available within NGINX Plus, but Plus can also report additional metrics. The section covers the metrics that are only available from NGINX Plus. - -![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png) - -Accepted, dropped, and total are ever-increasing counters. Active, idle, and current track the current number of connections or requests in each of those states, so they grow and shrink with request volume. - -注:表格 - ---- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Metric type |
| --- | --- | --- |
| accepted | Count of client connections attempted by NGINX | Resource: Utilization |
| dropped | Count of dropped connections | Work: Errors* |
| active | Currently active client connections | Resource: Utilization |
| idle | Client connections with zero current requests | Resource: Utilization |
| total | Count of client requests | Work: Throughput |

*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
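As an abridged sketch, NGINX Plus exposes these counters as JSON; the endpoint path and the values below are illustrative only:

    $ curl http://localhost/status
    {
        "connections": { "accepted": 1653, "dropped": 0, "active": 4, "idle": 36 },
        "requests": { "total": 30285, "current": 4 }
    }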
- -The **accepted** counter is incremented when an NGINX Plus worker picks up a request for a connection from the OS. If the worker fails to get a connection for the request (by establishing a new connection or reusing an open one), then the connection is dropped and **dropped** is incremented. Ordinarily connections are dropped because a resource limit, such as NGINX Plus’s [worker_connections][4] limit, has been reached. - -**Active** and **idle** are the same as “active” and “waiting” states in open-source NGINX as described [above][5], with one key exception: in open-source NGINX, “waiting” falls under the “active” umbrella, whereas in NGINX Plus “idle” connections are excluded from the “active” count. **Current** is the same as the combined “reading + writing” states in open-source NGINX. - -**Total** is a cumulative count of client requests. Note that a single client connection can involve multiple requests, so this number may be significantly larger than the cumulative number of connections. In fact, (total / accepted) yields the average number of requests per connection. - -**Metric differences between Open-Source and Plus** - -注:表格 - --- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| NGINX (open-source) | NGINX Plus |
| --- | --- |
| accepts | accepted |
| dropped must be calculated | dropped is reported directly |
| reading + writing | current |
| waiting | idle |
| active (includes “waiting” states) | active (excludes “idle” states) |
| requests | total |
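One rough way to pull the derived dropped-connections count straight from the open-source status page (the path is an example; the third line of stub_status output holds the accepts, handled, and requests counters):

    curl -s http://localhost/nginx_status | awk 'NR == 3 { print "dropped:", $1 - $2 }'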
- -**Metric to alert on: Dropped connections** - -The number of connections that have been dropped is equal to the difference between accepts and handled (NGINX) or is exposed directly as a standard metric (NGINX Plus). Under normal circumstances, dropped connections should be zero. If your rate of dropped connections per unit time starts to rise, look for possible resource saturation. - -![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png) - -**Metric to alert on: Requests per second** - -Sampling your request data (**requests** in open-source, or **total** in Plus) with a fixed time interval provides you with the number of requests you’re receiving per unit of time—often minutes or seconds. Monitoring this metric can alert you to spikes in incoming web traffic, whether legitimate or nefarious, or sudden drops, which are usually indicative of problems. A drastic change in requests per second can alert you to problems brewing somewhere in your environment, even if it cannot tell you exactly where those problems lie. Note that all requests are counted the same, regardless of their URLs. - -![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png) - -**Collecting activity metrics** - -Open-source NGINX exposes these basic server metrics on a simple status page. Because the status information is displayed in a standardized form, virtually any graphing or monitoring tool can be configured to parse the relevant data for analysis, visualization, or alerting. NGINX Plus provides a JSON feed with much richer data. Read the companion post on [NGINX metrics collection][6] for instructions on enabling metrics collection. - -#### Error metrics #### - -注:表格 - ----- - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Metric type | Availability |
| --- | --- | --- | --- |
| 4xx codes | Count of client errors | Work: Errors | NGINX logs, NGINX Plus |
| 5xx codes | Count of server errors | Work: Errors | NGINX logs, NGINX Plus |
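If response codes are written to the access log, as the Availability column above suggests, a quick-and-dirty error-rate check might look like this sketch (it assumes the default combined log format, where the status code is the ninth field):

    awk '{ total++ } $9 ~ /^5/ { err++ } END { if (total) printf "5xx rate: %.4f\n", err / total }' /var/log/nginx/access.log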
- -NGINX error metrics tell you how often your servers are returning errors instead of producing useful work. Client errors are represented by 4xx status codes, server errors with 5xx status codes. - -**Metric to alert on: Server error rate** - -Your server error rate is equal to the number of 5xx errors divided by the total number of [status codes][7] (1xx, 2xx, 3xx, 4xx, 5xx), per unit of time (often one to five minutes). If your error rate starts to climb over time, investigation may be in order. If it spikes suddenly, urgent action may be required, as clients are likely to report errors to the end user. - -![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png) - -A note on client errors: while it is tempting to monitor 4xx, there is limited information you can derive from that metric since it measures client behavior without offering any insight into particular URLs. In other words, a change in 4xx could be noise, e.g. web scanners blindly looking for vulnerabilities. - -**Collecting error metrics** - -Although open-source NGINX does not make error rates immediately available for monitoring, there are at least two ways to capture that information: - -- Use the expanded status module available with commercially supported NGINX Plus -- Configure NGINX’s log module to write response codes in access logs - -Read the companion post on NGINX metrics collection for detailed instructions on both approaches. - -#### Performance metrics #### - -注:表格 - ----- - - - - - - - - - - - - - - - - -
| Name | Description | Metric type | Availability |
| --- | --- | --- | --- |
| request time | Time to process each request, in seconds | Work: Performance | NGINX logs |
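The “NGINX logs” availability above depends on logging $request_time yourself; one way to capture it, sketched here with an assumed format name, is to append it to the access log format:

    log_format timed_combined '$remote_addr - $remote_user [$time_local] '
                              '"$request" $status $body_bytes_sent '
                              '"$http_referer" "$http_user_agent" $request_time';
    access_log /var/log/nginx/access.log timed_combined;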
- -**Metric to alert on: Request processing time** - -The request time metric logged by NGINX records the processing time for each request, from the reading of the first client bytes to fulfilling the request. Long response times can point to problems upstream. - -**Collecting processing time metrics** - -NGINX and NGINX Plus users can capture data on processing time by adding the $request_time variable to the access log format. More details on configuring logs for monitoring are available in our companion post on [NGINX metrics collection][8]. - -#### Reverse proxy metrics #### - -注:表格 - ----- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Metric type | Availability |
| --- | --- | --- | --- |
| Active connections by upstream server | Currently active client connections | Resource: Utilization | NGINX Plus |
| 5xx codes by upstream server | Server errors | Work: Errors | NGINX Plus |
| Available servers per upstream group | Servers passing health checks | Resource: Availability | NGINX Plus |
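For a feel of the data, the NGINX Plus status API can also be queried per upstream group; the path and fields below are an abridged, illustrative sketch rather than the exact response shape:

    $ curl http://localhost/status/upstreams/backend
    [ { "server": "10.0.0.1:80", "state": "up", "active": 2,
        "responses": { "total": 104824, "5xx": 0 } } ]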
- -One of the most common ways to use NGINX is as a [reverse proxy][9]. The commercially supported NGINX Plus exposes a large number of metrics about backend (or “upstream”) servers, which are relevant to a reverse proxy setup. This section highlights a few of the key upstream metrics that are available to users of NGINX Plus. - -NGINX Plus segments its upstream metrics first by group, and then by individual server. So if, for example, your reverse proxy is distributing requests to five upstream web servers, you can see at a glance whether any of those individual servers is overburdened, and also whether you have enough healthy servers in the upstream group to ensure good response times. - -**Activity metrics** - -The number of **active connections per upstream server** can help you verify that your reverse proxy is properly distributing work across your server group. If you are using NGINX as a load balancer, significant deviations in the number of connections handled by any one server can indicate that the server is struggling to process requests in a timely manner or that the load-balancing method (e.g., [round-robin or IP hashing][10]) you have configured is not optimal for your traffic patterns - -**Error metrics** - -Recall from the error metric section above that 5xx (server error) codes are a valuable metric to monitor, particularly as a share of total response codes. NGINX Plus allows you to easily extract the number of **5xx codes per upstream server**, as well as the total number of responses, to determine that particular server’s error rate. - -**Availability metrics** - -For another view of the health of your web servers, NGINX also makes it simple to monitor the health of your upstream groups via the total number of **servers currently available within each group**. In a large reverse proxy setup, you may not care very much about the current state of any one server, just as long as your pool of available servers is capable of handling the load. But monitoring the total number of servers that are up within each upstream group can provide a very high-level view of the aggregate health of your web servers. - -**Collecting upstream metrics** - -NGINX Plus upstream metrics are exposed on the internal NGINX Plus monitoring dashboard, and are also available via a JSON interface that can serve up metrics into virtually any external monitoring platform. See examples in our companion post on [collecting NGINX metrics][11]. - -### Conclusion ### - -In this post we’ve touched on some of the most useful metrics you can monitor to keep tabs on your NGINX servers. If you are just getting started with NGINX, monitoring most or all of the metrics in the list below will provide good visibility into the health and activity levels of your web infrastructure: - -- [Dropped connections][12] -- [Requests per second][13] -- [Server error rate][14] -- [Request processing time][15] - -Eventually you will recognize additional, more specialized metrics that are particularly relevant to your own infrastructure and use cases. Of course, what you monitor will depend on the tools you have and the metrics available to you. See the companion post for [step-by-step instructions on metric collection][16], whether you use NGINX or NGINX Plus. - -At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. 
Learn how to monitor NGINX with Datadog [in this post][17], and get started right away with [a free trial of Datadog][18]. - -### Acknowledgments ### - -Many thanks to the NGINX team for reviewing this article prior to publication and providing important feedback and clarifications. - ----------- - -Source Markdown for this post is available [on GitHub][19]. Questions, corrections, additions, etc.? Please [let us know][20]. - --------------------------------------------------------------------------------- - -via: https://www.datadoghq.com/blog/how-to-monitor-nginx/ - -作者:K Young -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:http://nginx.org/en/ -[2]:http://nginx.com/resources/glossary/reverse-proxy-server/ -[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/ -[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections -[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state -[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html -[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[9]:https://en.wikipedia.org/wiki/Reverse_proxy -[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/ -[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections -[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second -[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate -[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time -[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ -[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up -[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md -[20]:https://github.com/DataDog/the-monitor/issues From 2b38bda7026c505229b4de281ac9ef5823b7f6d6 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Wed, 5 Aug 2015 13:41:46 +0800 Subject: [PATCH 116/207] Create 20150717 How to monitor NGINX- Part 1.md --- .../20150717 How to monitor NGINX- Part 1.md | 416 ++++++++++++++++++ 1 file changed, 416 insertions(+) create mode 100644 translated/tech/20150717 How to monitor NGINX- Part 1.md diff --git a/translated/tech/20150717 How to monitor NGINX- Part 1.md b/translated/tech/20150717 How to monitor NGINX- Part 1.md new file mode 100644 index 0000000000..86e72c0324 --- /dev/null +++ b/translated/tech/20150717 How to monitor NGINX- Part 1.md @@ -0,0 +1,416 @@ +如何监控 NGINX - 第1部分 +================================================================================ +![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) + +### NGINX 是什么? 
### + +[NGINX][1] (发音为 “engine X”) 是一种流行的 HTTP 和反向代理服务器。作为一个 HTTP 服务器,NGINX 提供静态内容非常高效可靠,使用较少的内存。作为[反向代理][2],它可以用作一个单一的控制器来为其他应用代理至后端的多个服务器上,如高速缓存和负载平衡。NGINX 是作为一个免费,开源的产品并有更全的功能,商业版的叫 NGINX Plus。 + +NGINX 也可以用作邮件代理和通用的 TCP 代理,但本文并不直接说明对 NGINX 的这些用例做监控。 + +### NGINX 主要指标 ### + +通过监控 NGINX 可以捕捉两类问题:NGINX 本身的资源问题,也有很多问题会出现在你的基础网络设施处。大多数 NGINX 用户受益于以下指标的监控,包括**requests per second**,它提供了一个所有用户活动的高级视图;**server error rate** ,这表明你的服务器已经多长没有处理看似有效的请求;还有**request processing time**,这说明你的服务器处理客户端请求的总共时长(并且可以看出性能降低时或当前环境的其他问题)。 + +更一般地,至少有三个主要的指标类别来监视: + +- 基本活动指标 +- 错误指标 +- 性能指标 + +下面我们将分析在每个类别中最重要的 NGINX 指标,以及用一个相当普遍的案例来说明,值得特别说明的是:使用 NGINX Plus 作反向代理。我们还将介绍如何使用图形工具或可选择的监控工具来监控所有的指标。 + +本文引用指标术语[介绍我们的监控在 101 系列][3],,它提供了指标收集和警告框架。 + +#### 基本活动指标 #### + +无论你在怎样的情况下使用 NGINX,毫无疑问你要监视服务器接收多少客户端请求和如何处理这些请求。 + +NGINX Plus 上像开源 NGINX 一样可以报告基本活动指标,但它也提供了略有不同的辅助模块。我们首先讨论开源的 NGINX,再来说明 NGINX Plus 提供的其他指标的功能。 + +**NGINX** + +下图显示了一个客户端连接,以及如何在连接过程中收集指标的活动周期在开源 NGINX 版本上。 + +![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png) + +接受,处理,增加请求的计数器。主动,等待,读,写增加和减少请求量。 + +注:表格 + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Name | Description | Metric type |
| --- | --- | --- |
| accepts | Count of client connections attempted by NGINX | Resource: Utilization |
| handled | Count of successful client connections | Resource: Utilization |
| active | Currently active client connections | Resource: Utilization |
| dropped (calculated) | Count of dropped connections (accepts – handled) | Work: Errors* |
| requests | Count of client requests | Work: Throughput |

*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
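作为参考,开源版 NGINX 的这些计数器通常由 stub_status 模块提供;下面是一个最小的配置示意(路径与访问控制请按你的环境调整):

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;   # 仅允许本机访问
        deny all;
    }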
+ +NGINX 进程接受 OS 的连接请求时**accepts** 计数器增加,而**handled** 是当实际的请求得到连接时(通过建立一个新的连接或重新使用一个空闲的)。这两个计数器的值通常都是相同的,表明连接正在被**dropped**,往往由于资源限制,如 NGINX 的[worker_connections][4]的限制已经达到。 + +一旦 NGINX 成功处理一个连接时,连接会移动到**active**状态,然后保持为客户端请求进行处理: + +Active 状态 + +- **Waiting**: 活动的连接也可以是一个 Waiting 子状态,如果有在此刻没有活动请求。新连接绕过这个状态并直接移动到读,最常见的是使用“accept filter” 和 “deferred accept”,在这种情况下,NGINX 不会接收进程的通知,直到它具有足够的数据来开始响应工作。如果连接设置为 keep-alive ,连接在发送响应后将处于等待状态。 + +- **Reading**: 当接收到请求时,连接移出等待状态,并且该请求本身也被视为 Reading。在这种状态下NGINX 正在读取客户端请求首部。请求首部是比较少的,因此这通常是一个快速的操作。 + +- **Writing**: 请求被读取之后,将其计为 Writing,并保持在该状态,直到响应返回给客户端。这意味着,该请求在 Writing 时, NGINX 同时等待来自负载均衡服务器的结果(系统“背后”的 NGINX),NGINX 也同时响应。请求往往会花费大量的时间在 Writing 状态。 + +通常,一个连接在同一时间只接受一个请求。在这种情况下,Active 连接的数目 == Waiting 连接 + Reading 请求 + Writing 请求。然而,较新的 SPDY 和 HTTP/2 协议允许多个并发请求/响应对被复用的连接,所以 Active 可小于 Waiting,Reading,Writing 的总和。 (在撰写本文时,NGINX 不支持 HTTP/2,但预计到2015年期间将会支持。) + +**NGINX Plus** + +正如上面提到的,所有开源 NGINX 的指标在 NGINX Plus 中是可用的,但另外也提供其他的指标。本节仅说明了 NGINX Plus 可用的指标。 + + +![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png) + +接受,中断,总数是不断增加的。活动,空闲和已建立连接的,当前状态下每一个连接或请​​求的数量是随着请求量增加和收缩的。 + +注:表格 + ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Name | Description | Metric type |
| --- | --- | --- |
| accepted | Count of client connections attempted by NGINX | Resource: Utilization |
| dropped | Count of dropped connections | Work: Errors* |
| active | Currently active client connections | Resource: Utilization |
| idle | Client connections with zero current requests | Resource: Utilization |
| total | Count of client requests | Work: Throughput |

*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
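一个配置示意:NGINX Plus 的这些指标由其扩展状态模块给出,大致可以这样开启(端口和路径仅为示例):

    server {
        listen 8080;
        location /status {
            status;   # NGINX Plus 扩展状态接口,以 JSON 形式返回指标
        }
    }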
+ +当 NGINX Plus 进程接受 OS 的连接请求时 **accepted** 计数器递增。如果进程请求连接失败(通过建立一个新的连接或重新使用一个空闲),则该连接断开 **dropped** 计数增加。通常连接被中断是因为资源限制,如 NGINX Plus 的[worker_connections][4]的限制已经达到。 + +**Active** 和 **idle** 和开源 NGINX 的“active” 和 “waiting”状态是相同的,[如上所述][5],有一个不同的地方:在开源 NGINX 上,“waiting”状态包括在“active”中,而在 NGINX Plus 上“idle”的连接被排除在“active” 计数外。**Current** 和开源 NGINX 是一样的也是由“reading + writing” 状态组成。 + + +**Total** 为客户端请求的累积计数。请注意,单个客户端连接可涉及多个请求,所以这个数字可能会比连接的累计次数明显大。事实上,(total / accepted)是每个连接请求的平均数量。 + +**开源 和 Plus 之间指标的不同** + +注:表格 + +++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| NGINX (open-source) | NGINX Plus |
| --- | --- |
| accepts | accepted |
| dropped must be calculated | dropped is reported directly |
| reading + writing | current |
| waiting | idle |
| active (includes “waiting” states) | active (excludes “idle” states) |
| requests | total |
+ +**提醒指标: 中断连接** + +被中断的连接数目等于接受和处理之差(NGINX),或被公开直接作为指标的标准(NGINX加)。在正常情况下,中断连接数应该是零。如果每秒中中断连接的速度开始上升,寻找资源可能用尽的地方。 + +![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png) + +**提醒指标: 每秒请求数** + +提供你(开源中的**requests**或者 Plus 中**total**)固定时间间隔每秒或每分钟请求的平均数据。监测这个指标可以查看 Web 的输入流量的最大值,无论是合法的还是恶意的,有可能会突然下降,通常可以看出问题。每秒的请求若发生急剧变化可以提醒你出问题了,即使它不能告诉你确切问题的位置所在。请注意,所有的请求都算作是相同的,无论哪个 URLs。 + +![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png) + +**收集活动指标** + +开源的 NGINX 提供了一个简单状态页面来显示基本的服务器指标。该状态信息以标准格式被显示,实际上任何图形或监控工具可以被配置去解析相关的数据为分析,可视化,或提醒而用。NGINX Plus 提供一个 JSON 接口来显示更多的数据。阅读[NGINX 指标收集][6]后来启用指标收集的功能。 + +#### 错误指标 #### + +注:表格 + +++++ + + + + + + + + + + + + + + + + + + + + + + +
#### Error metrics ####

| Name      | Description            | Metric type  | Availability           |
|-----------|------------------------|--------------|------------------------|
| 4xx codes | Count of client errors | Work: Errors | NGINX logs, NGINX Plus |
| 5xx codes | Count of server errors | Work: Errors | NGINX logs, NGINX Plus |

NGINX error metrics tell you how often your servers are returning errors instead of useful work. Client errors are returned with 4xx status codes, server errors with 5xx status codes.

**Metric to alert on: Server error rate**

Your server error rate is equal to the number of 5xx status codes divided by the total number of [status codes][7] (1xx, 2xx, 3xx, 4xx, 5xx), per unit of time (usually one to five minutes). If your error rate starts to climb over time, investigate the possible causes. If it spikes suddenly, urgent action may be required, as clients are likely receiving error messages.

![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png)
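As a rough illustration of computing this rate from logs, the sketch below assumes the default 'combined' access log format, in which the status code is the ninth whitespace-separated field, and a log file at /var/log/nginx/access.log; both are assumptions to adjust for your setup.

    # Approximate 5xx error rate over the most recent 1000 requests,
    # assuming the default 'combined' log format (status code in field 9).
    tail -n 1000 /var/log/nginx/access.log | awk '
        { total++; if ($9 ~ /^5/) errors++ }
        END { if (total) printf "5xx rate: %.2f%% (%d of %d)\n", 100 * errors / total, errors, total }'

A real monitoring setup would compute this per time window rather than per request count, but the idea is the same.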
A note on client errors: while it is useful to monitor 4xx codes, the information you can capture from that metric alone is limited, since it measures only client behavior without offering any insight into particular URLs. In other words, a change in 4xx rates could amount to little more than noise, for example from web scanners blindly looking for vulnerabilities.

**Collecting error metrics**

Although open-source NGINX does not make error rates immediately available for monitoring, there are at least two ways to capture that information:

- Use the expanded status module available with commercially supported NGINX Plus
- Configure NGINX's log module to write response codes to the access logs

Read the companion post on NGINX metrics collection for detailed instructions on both approaches.
#### Performance metrics ####

| Name         | Description                              | Metric type       | Availability |
|--------------|------------------------------------------|-------------------|--------------|
| request time | Time to process each request, in seconds | Work: Performance | NGINX logs   |
**Metric to alert on: Request processing time**

The request time metric logged by NGINX records the processing time for each request, from the reading of the first client bytes to fulfilling the request. Long response times can point to problems upstream.

**Collecting processing time metrics**

NGINX and NGINX Plus users can capture data on processing time by adding the $request_time variable to the access log format. More details on configuring logs for monitoring are available in [NGINX metrics collection][8].
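A log format carrying $request_time might look like the sketch below; the format name timed_combined and the log path are assumptions made up for this example.

    # Hypothetical log format extending the usual fields with $request_time.
    # Declare it in the http context and reference it from access_log.
    log_format timed_combined '$remote_addr - $remote_user [$time_local] '
                              '"$request" $status $body_bytes_sent '
                              '"$http_referer" "$http_user_agent" $request_time';

    access_log /var/log/nginx/timed_access.log timed_combined;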
#### Reverse proxy metrics ####

| Name                                  | Description                         | Metric type            | Availability |
|---------------------------------------|-------------------------------------|------------------------|--------------|
| Active connections by upstream server | Currently active client connections | Resource: Utilization  | NGINX Plus   |
| 5xx codes by upstream server          | Server errors                       | Work: Errors           | NGINX Plus   |
| Available servers per upstream group  | Servers passing health checks       | Resource: Availability | NGINX Plus   |
[Reverse proxying][9] is one of the most common uses of NGINX. Commercially supported NGINX Plus exposes a large number of metrics about backend (or "upstream") servers, which are relevant to a reverse proxy setup. This section highlights a few of the key upstream metrics for NGINX Plus users.

NGINX Plus segments its upstream metrics first by group, and then by individual server. So if, for example, your reverse proxy is distributing requests to five upstream web servers, you can see at a glance whether any one of those servers is overburdened, and also whether the upstream group as a whole is healthy enough to ensure good response times.

**Activity metrics**

The number of **active connections per upstream server** can help you verify that your reverse proxy is properly distributing work across your server group. If you are using NGINX as a load balancer, significant deviations in the number of connections handled by any one server can indicate that the server is struggling to process requests, or that the load-balancing method you have configured (for example, [round-robin or IP hashing][10]) is not optimal for your traffic patterns.

**Error metrics**

As noted in the error metric section above, 5xx (server error) codes are a valuable metric to monitor, particularly as a share of total response codes. NGINX Plus allows you to easily extract the number of **5xx codes per upstream server**, as well as the total number of responses, to determine that particular server's error rate.

**Availability metrics**

For another view of the health of your web servers, NGINX also makes it easy to monitor your upstream groups via the total number of **servers currently available within each group**. In a large reverse proxy setup, you may not care very much about the current state of any one server, as long as your pool of available servers is capable of handling the current load. But monitoring the total number of servers that are up within each upstream group provides a high-level picture of the aggregate health of your web servers.

**Collecting upstream metrics**

NGINX Plus upstream metrics are shown on the internal NGINX Plus monitoring dashboard, and are also available via a JSON interface that can serve up metrics to virtually any external monitoring platform. See an example in [collecting NGINX metrics][11].

### Conclusion ###

In this post we've touched on some of the useful metrics you can monitor to keep tabs on your NGINX servers. If you are just getting started with NGINX, monitoring most or all of the metrics below will provide good visibility into the health and activity of your web infrastructure:

- [Dropped connections][12]
- [Requests per second][13]
- [Server error rate][14]
- [Request processing time][15]

Eventually you will recognize additional, more specialized metrics that are particularly relevant to your own infrastructure and use cases. Of course, what you monitor will depend on the tools you have available. See the [step-by-step instructions on metric collection][16], whether you use NGINX or NGINX Plus.

At Datadog, we have built integrations with both NGINX and NGINX Plus so that you can begin collecting and monitoring metrics from all your web servers with a minimum of setup. Learn how to monitor NGINX with Datadog [in this post][17], and get started with a [free trial of Datadog][18].

### Acknowledgments ###

Many thanks to the NGINX team for reviewing this article prior to publication and providing important feedback and clarifications.

----------

This post is available [on GitHub][19]. Questions, corrections, additions, etc.? Please [let us know][20].

--------------------------------------------------------------------------------

via: https://www.datadoghq.com/blog/how-to-monitor-nginx/

作者:K Young
译者:[strugglingyouth](https://github.com/strugglingyouth)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[1]:http://nginx.org/en/
[2]:http://nginx.com/resources/glossary/reverse-proxy-server/
[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/
[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections
[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state
[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[9]:https://en.wikipedia.org/wiki/Reverse_proxy
[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/
[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections
[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second
[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate
[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time
[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/
[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/
[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up
[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md
[20]:https://github.com/DataDog/the-monitor/issues

From d883f1ef7607ced1c97a49c36994a3c51934d636 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Thu, 6 Aug 2015 11:27:53 +0800
Subject: [PATCH 117/207] =?UTF-8?q?20150806-1=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 .../20150806 5 heroes of the Linux world.md | 99 +++++++++++++++++++
 1 file changed, 99 insertions(+)
 create mode 100644 sources/talk/20150806 5 heroes of the Linux world.md

diff --git a/sources/talk/20150806 5 heroes of the Linux world.md b/sources/talk/20150806 5 heroes of the Linux world.md
new file mode 100644
index 0000000000..ae35d674a1
--- /dev/null
+++ b/sources/talk/20150806 5 heroes of the Linux world.md
@@ -0,0 +1,99 @@
5 heroes of the Linux world
================================================================================
Who are these people, seen and unseen, whose work affects all of us every day?

![Image courtesy Christopher Michel/Flickr](http://core0.staticworld.net/images/article/2015/07/penguin-100599348-orig.jpg)
Image courtesy [Christopher Michel/Flickr][1]

### High-flying penguins ###

Linux and open source are driven by passionate people who write best-of-breed software and then release the code to the public so anyone can use it, without any strings attached. (Well, there is one string attached, and that's the licence.)

Who are these people? These heroes of the Linux world, whose work affects all of us every day. Allow me to introduce you.

![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/swap-klaus-100599357-orig.jpg)
Image courtesy Swapnil Bhartiya

### Klaus Knopper ###

Klaus Knopper, an Austrian developer who lives in Germany, is the founder of Knoppix and Adriane Linux, which he developed for his blind wife.

Knoppix holds a very special place in the hearts of those Linux users who started using Linux before Ubuntu came along. What makes Knoppix so special is that it popularized the concept of the Live CD. Unlike Windows or Mac OS X, you could run the entire operating system from the CD without installing anything on the system. It allowed new users to test Linux on their systems without formatting the hard drive. The live feature of Linux alone contributed heavily to its popularity.

![Image courtesy Fórum Internacional Software Live/Flickr](http://images.techhive.com/images/article/2015/07/lennart-100599356-orig.jpg)
Image courtesy [Fórum Internacional Software Live/Flickr][2]

### Lennart Poettering ###

Lennart Poettering is yet another genius from Germany. He has written so many core components of a Linux (as well as BSD) system that it's hard to keep track. Most of his work has gone towards successors to aging or broken components of Linux systems.

Poettering wrote the modern init system systemd, which shook the Linux world and created a [rift in the Debian community][3].

While Linus Torvalds has no problems with systemd, and praises it, he is not a huge fan of the way systemd developers (including co-author Kay Sievers) respond to bug reports and criticism. At one point Linus said on the LKML (Linux Kernel Mailing List) that he would [never work with Sievers][4].

Lennart is also the author of PulseAudio, a sound server for Linux, and Avahi, a zero-configuration networking (zeroconf) implementation.

![Image courtesy Meego Com/Flickr](http://images.techhive.com/images/article/2015/07/jim-zemlin-100599362-orig.jpg)
Image courtesy [Meego Com/Flickr][5]

### Jim Zemlin ###

Jim Zemlin isn't a developer, but as founder of The Linux Foundation he is certainly one of the most important figures of the Linux world.

In 2007, The Linux Foundation was formed as a result of a merger between two open source bodies: the Free Standards Group and the Open Source Development Labs.
Zemlin was the executive director of the Free Standards Group. Post-merger, Zemlin became the executive director of The Linux Foundation and has held that position since.

Under his leadership, The Linux Foundation has become a central figure in the modern IT world and plays a very critical role in the Linux ecosystem. In order to ensure that key developers like Torvalds and Kroah-Hartman can focus on Linux, the foundation sponsors them as fellows.

Zemlin also made the foundation a bridge between companies so they can collaborate on Linux while at the same time competing in the market. The foundation also organizes many conferences around the world and [offers many courses for Linux developers][6].

People may think of Zemlin as Linus Torvalds' boss, but he refers to himself as "Linus Torvalds' janitor."

![Image courtesy Coscup/Flickr](http://images.techhive.com/images/article/2015/07/greg-kh-100599350-orig.jpg)
Image courtesy [Coscup/Flickr][7]

### Greg Kroah-Hartman ###

Greg Kroah-Hartman is known as the second-in-command of the Linux kernel. The 'gentle giant' is the maintainer of the stable branch of the kernel as well as of the staging subsystem, USB, driver core, debugfs, kref, kobject, and the [sysfs][8] kernel subsystems, along with many other components of a Linux system.

He is also credited for his work on Linux device drivers. One of his jobs is to travel around the globe, meet hardware makers and persuade them to make their drivers available for Linux. The next time you plug some random USB device into your system and it works out of the box, thank Kroah-Hartman. (Don't thank the distro. Some distros try to take credit for the work Kroah-Hartman or the Linux kernel did.)

Kroah-Hartman previously worked for Novell and then joined the Linux Foundation as a fellow, alongside Linus Torvalds.

Kroah-Hartman is the total opposite of Linus and never rants (at least publicly). One ripple, though, came when he stated that [Canonical doesn't contribute much to the Linux kernel][9].

On a personal level, Kroah-Hartman is extremely helpful to new developers and users and is easily accessible.

![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/linus-swapnil-100599349-orig.jpg)
Image courtesy Swapnil Bhartiya

### Linus Torvalds ###

No collection of Linux heroes would be complete without Linus Torvalds. He is the author of the Linux kernel, the most used open source technology on the planet and beyond. His software powers everything from space stations to supercomputers, military drones to mobile devices and tiny smartwatches. Linus remains the authority on the Linux kernel and makes the final decision on which patches to merge into it.

Linux isn't Torvalds' only contribution to open source. When he got fed up with the existing software revision control systems, which his kernel development heavily relied on, he wrote his own, called Git. Git enjoys the same reputation as Linux; it is the most used version control system in the world.

Torvalds is also a passionate scuba diver, and when he found no decent dive log software for Linux, he wrote his own and called it Subsurface.

Torvalds is [well known for his rants][10] and once admitted that his ego is as big as a small planet. But he is also known for admitting his mistakes if he realizes he was wrong.
--------------------------------------------------------------------------------

via: http://www.itworld.com/article/2955001/linux/5-heros-of-the-linux-world.html

作者:[Swapnil Bhartiya][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
[1]:https://flic.kr/p/siJ25M
[2]:https://flic.kr/p/uTzj54
[3]:http://www.itwire.com/business-it-news/open-source/66153-systemd-fallout-two-debian-technical-panel-members-resign
[4]:http://www.linuxveda.com/2014/04/04/linus-torvalds-systemd-kay-sievers/
[5]:https://flic.kr/p/9Lnhpu
[6]:http://www.itworld.com/article/2951968/linux/linux-foundation-offers-cheaper-courses-and-certifications-for-india.html
[7]:https://flic.kr/p/hBv8Pp
[8]:https://en.wikipedia.org/wiki/Sysfs
[9]:https://www.youtube.com/watch?v=CyHAeGBFS8k
[10]:http://www.itworld.com/article/2873200/operating-systems/11-technologies-that-tick-off-linus-torvalds.html
\ No newline at end of file

From 308319dffbe0f9e27d61c482b71b9ba46823cadd Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Thu, 6 Aug 2015 11:38:08 +0800
Subject: [PATCH 118/207] =?UTF-8?q?20150806-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...lation Guide for Puppet on Ubuntu 15.04.md | 429 ++++++++++++++++++
 1 file changed, 429 insertions(+)
 create mode 100644 sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md

diff --git a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md
new file mode 100644
index 0000000000..ea8fcd6e2e
--- /dev/null
+++ b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md
@@ -0,0 +1,429 @@
Installation Guide for Puppet on Ubuntu 15.04
================================================================================
Hi everyone, today in this article we'll learn how to install Puppet to manage your server infrastructure running Ubuntu 15.04. Puppet is an open source software configuration management tool, developed and maintained by Puppet Labs, that allows us to automate the provisioning, configuration and management of a server infrastructure. Whether we're managing just a few servers or thousands of physical and virtual machines, for orchestration and reporting, Puppet automates tasks that system administrators often do manually, which frees up time and mental space so sysadmins can work on improving other aspects of the overall setup. It ensures consistency, reliability and stability of the automated jobs processed. It facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available as two solutions for configuration management and data center automation: **puppet open source and puppet enterprise**. Puppet open source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. The puppet enterprise edition, on the other hand, is a proven commercial solution for diverse enterprise IT environments, which gives us all the benefits of open source puppet, plus puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform.
Puppet uses SSL certificates to authenticate communication between master and agent nodes.

In this tutorial, we will cover how to install open source Puppet in an agent/master setup running the Ubuntu 15.04 Linux distribution. Here, the puppet master is the server from which all the configuration will be controlled and managed, and all our remaining servers will be puppet agent nodes, which are configured according to the configuration of the puppet master server. Here are some easy steps to install and configure Puppet to manage our server infrastructure running Ubuntu 15.04.

### 1. Setting up Hosts ###

In this tutorial, we'll use two machines, one as the puppet master server and another as the puppet node agent, both running Ubuntu 15.04 "Vivid Vervet". Here is the server infrastructure that we're going to use for this tutorial.

puppet master server with IP 45.55.88.6 and hostname: puppetmaster
puppet node agent with IP 45.55.86.39 and hostname: puppetnode

Now we'll add entries for both machines to /etc/hosts, on both the node agent and the master server.

    # nano /etc/hosts

    45.55.88.6 puppetmaster.example.com puppetmaster
    45.55.86.39 puppetnode.example.com puppetnode

Please note that the puppet master server must be reachable on port 8140, so we'll need to open port 8140 on it.

### 2. Updating Time with NTP ###

Puppet nodes need to maintain accurate system time to avoid problems when the master issues agent certificates: certificates can appear to be expired if the clocks differ, so the time on both the master and the node agent must be kept in sync. To sync the time, we'll update it with NTP, using the command below on both the master and the node agent.

    # ntpdate pool.ntp.org

    17 Jun 00:17:08 ntpdate[882]: adjust time server 66.175.209.17 offset -0.001938 sec

Now, we'll update our local repository index and install ntp as follows.

    # apt-get update && sudo apt-get -y install ntp ; service ntp restart

### 3. Puppet Master Package Installation ###

There are many ways to install open source Puppet. In this tutorial, we'll download and install a debian binary package named **puppetlabs-release**, packaged by Puppet Labs, which will add the source for the **puppetmaster-passenger** package. The puppetmaster-passenger package includes the puppet master with the apache web server. So, we'll now download the Puppet Labs package.

    # cd /tmp/
    # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb

    --2015-06-17 00:19:26-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
    Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
    Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 7384 (7.2K) [application/x-debian-package]
    Saving to: ‘puppetlabs-release-trusty.deb’

    puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.06s

    2015-06-17 00:19:26 (130 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384]

After the download has completed, we'll install the package.

    # dpkg -i puppetlabs-release-trusty.deb

    Selecting previously unselected package puppetlabs-release.
    (Reading database ... 85899 files and directories currently installed.)
    Preparing to unpack puppetlabs-release-trusty.deb ...
    Unpacking puppetlabs-release (1.0-11) ...
    Setting up puppetlabs-release (1.0-11) ...
Then, we'll update the local repository index using the apt package manager.

    # apt-get update

Then, we'll install the puppetmaster-passenger package by running the below command.

    # apt-get install puppetmaster-passenger

**Note**: While installing, we may see the warning **Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')**, but we don't need to worry: it only says that templatedir is deprecated, so we'll simply disable that setting in the configuration later. :)

To check whether the puppet master has been installed successfully on our master server or not, we'll check its version.

    # puppet --version

    3.8.1

We have successfully installed the puppet master package on our puppet master box. As we are using passenger with apache, the puppet master process is controlled by the apache server, which means it runs when apache is running.

Before continuing, we'll need to stop the puppet master by stopping the apache2 service.

    # systemctl stop apache2

### 4. Master version lock with Apt ###

As we have Puppet version 3.8.1, we need to lock the Puppet version against updates, as an update could mess up our configuration. So, we'll use apt's pinning feature for that. To do so, we'll create a new file **/etc/apt/preferences.d/00-puppet.pref** using our favorite text editor.

    # nano /etc/apt/preferences.d/00-puppet.pref

Then, we'll add the following entries to the newly created file:

    # /etc/apt/preferences.d/00-puppet.pref
    Package: puppet puppet-common puppetmaster-passenger
    Pin: version 3.8*
    Pin-Priority: 501

Now, apt will not update Puppet while running updates on the system.

### 5. Configuring Puppet Config ###

The puppet master acts as a certificate authority and must generate its own certificates, which are used to sign agent certificate requests. First of all, we'll need to remove any existing SSL certificates that were created during the installation of the package. The default location of Puppet's SSL certificates is /var/lib/puppet/ssl, so we'll remove the entire ssl directory using the rm command.

    # rm -rf /var/lib/puppet/ssl

Then, we'll configure the certificate. While creating the puppet master's certificate, we need to include every DNS name at which agent nodes can contact the master. So, we'll edit the master's puppet.conf using our favorite text editor.

    # nano /etc/puppet/puppet.conf

The file looks as shown below.

    [main]
    logdir=/var/log/puppet
    vardir=/var/lib/puppet
    ssldir=/var/lib/puppet/ssl
    rundir=/var/run/puppet
    factpath=$vardir/lib/facter
    templatedir=$confdir/templates

    [master]
    # These are needed when the puppetmaster is run by passenger
    # and can safely be removed if webrick is used.
    ssl_client_header = SSL_CLIENT_S_DN
    ssl_client_verify_header = SSL_CLIENT_VERIFY

Here, we'll need to comment out the templatedir line to disable that setting, as it has already been deprecated. After that, we'll add the following lines at the end of the file, under [main].

    server = puppetmaster
    environment = production
    runinterval = 1h
    strict_variables = true
    certname = puppetmaster
    dns_alt_names = puppetmaster, puppetmaster.example.com

This configuration file has many more options which might be useful for your own setup.
A full description of the file is available at Puppet Labs [Main Config File (puppet.conf)][1].

After editing the file, we'll save it and exit.

Now, we'll generate a new CA certificate by running the following command.

    # puppet master --verbose --no-daemonize

    Info: Creating a new SSL key for ca
    Info: Creating a new SSL certificate request for ca
    Info: Certificate Request fingerprint (SHA256): F6:2F:69:89:BA:A5:5E:FF:7F:94:15:6B:A7:C4:20:CE:23:C7:E3:C9:63:53:E0:F2:76:D7:2E:E0:BF:BD:A6:78
    ...
    Notice: puppetmaster has a waiting certificate request
    Notice: Signed certificate request for puppetmaster
    Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/ca/requests/puppetmaster.pem'
    Notice: Removing file Puppet::SSL::CertificateRequest puppetmaster at '/var/lib/puppet/ssl/certificate_requests/puppetmaster.pem'
    Notice: Starting Puppet master version 3.8.1
    ^CNotice: Caught INT; storing stop
    Notice: Processing stop

The certificate is now being generated. Once we see **Notice: Starting Puppet master version 3.8.1**, the certificate setup is complete, and we press CTRL-C to return to the shell.

If we want to look at the information for the certificate that was just created, we can get the list by running the following command.

    # puppet cert list --all

    + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")

### 6. Creating a Puppet Manifest ###

The default location of the main manifest is /etc/puppet/manifests/site.pp. The main manifest file contains the definition of the configuration that will be executed on the puppet agent nodes. Now, we'll create the manifest file by running the following command.

    # nano /etc/puppet/manifests/site.pp

Then, we'll add the following lines of configuration to the file that we just opened.

    # execute 'apt-get update'
    exec { 'apt-update': # exec resource named 'apt-update'
        command => '/usr/bin/apt-get update' # command this resource will run
    }

    # install apache2 package
    package { 'apache2':
        require => Exec['apt-update'], # require 'apt-update' before installing
        ensure => installed,
    }

    # ensure apache2 service is running
    service { 'apache2':
        ensure => running,
    }

The above lines of configuration are responsible for deploying the apache web server installation to the agent nodes.

### 7. Starting Master Service ###

We are now ready to start the puppet master. We can start it by running the apache2 service.

    # systemctl start apache2

Here, our puppet master is running, but it isn't managing any agent nodes yet. Now, we'll add the puppet node agents to the master.

**Note**: If you get the error **Job for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.**, then there is some problem with the apache server. We can see what exactly happened by running **apachectl start** under root or sudo. Here, while performing this tutorial, we found a misconfiguration of the certificates in the **/etc/apache2/sites-enabled/puppetmaster.conf** file. We replaced **SSLCertificateFile /var/lib/puppet/ssl/certs/server.pem** with **SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem** and commented out the **SSLCertificateKeyFile** line. Then we reran the above command to start the apache server.
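For reference, after that change the SSL lines of our vhost looked roughly like the excerpt below. This mirrors the workaround described above rather than a recommended configuration; the certificate file name follows the master's certname, so treat the exact paths as assumptions for your own setup.

    # Excerpt from /etc/apache2/sites-enabled/puppetmaster.conf (illustrative).
    # The certificate file name matches our master's certname.
    SSLCertificateFile /var/lib/puppet/ssl/certs/puppetmaster.pem
    # Commented out as part of the workaround described above:
    #SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/server.pem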
### 8. Puppet Agent Package Installation ###

Now that we have our puppet master ready, it needs agents to manage, so we'll install the puppet agent on the nodes. We'll need to install the puppet agent on every node in our infrastructure that we want the puppet master to manage, and we'll need to make sure our agent nodes have been added to DNS. Now, we'll install the latest puppet agent on our agent node, i.e. puppetnode.example.com.

We'll run the following command to download the Puppet Labs package on our puppet agent nodes.

    # cd /tmp/
    # wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb

    --2015-06-17 00:54:42-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
    Resolving apt.puppetlabs.com (apt.puppetlabs.com)... 192.155.89.90, 2600:3c03::f03c:91ff:fedb:6b1d
    Connecting to apt.puppetlabs.com (apt.puppetlabs.com)|192.155.89.90|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 7384 (7.2K) [application/x-debian-package]
    Saving to: ‘puppetlabs-release-trusty.deb’

    puppetlabs-release-tr 100%[===========================>] 7.21K --.-KB/s in 0.04s

    2015-06-17 00:54:42 (162 KB/s) - ‘puppetlabs-release-trusty.deb’ saved [7384/7384]

Then, as we're running Ubuntu 15.04, we'll use the debian package manager to install it.

    # dpkg -i puppetlabs-release-trusty.deb

Now, we'll update the repository index using apt-get.

    # apt-get update

Finally, we'll install the puppet agent directly from the remote repository.

    # apt-get install puppet

The puppet agent is disabled by default, so we'll need to enable it. To do so, we'll edit the /etc/default/puppet file using a text editor.

    # nano /etc/default/puppet

Then, we'll change the value of **START** to "yes", as shown below.

    START=yes

Then, we'll save and exit the file.

### 9. Agent Version Lock with Apt ###

As we have Puppet version 3.8.1, we need to lock the Puppet version against updates, as an update could mess up our configuration. So, we'll use apt's pinning feature for that. To do so, we'll create the file /etc/apt/preferences.d/00-puppet.pref using our favorite text editor.

    # nano /etc/apt/preferences.d/00-puppet.pref

Then, we'll add the following entries to the newly created file:

    # /etc/apt/preferences.d/00-puppet.pref
    Package: puppet puppet-common
    Pin: version 3.8*
    Pin-Priority: 501

Now, apt will not update Puppet while running updates on the system.

### 10. Configuring Puppet Node Agent ###

Next, we must make a few configuration changes before running the agent. To do so, we'll edit the agent's puppet.conf.

    # nano /etc/puppet/puppet.conf

It will look exactly like the puppet master's initial configuration file.

This time we'll also comment out the **templatedir** line. Then we'll delete the [master] section and all of the lines below it.

Assuming that the puppet master is reachable at "puppetmaster", the agent should be able to connect to the master. If not, we'll need to use its fully qualified domain name, i.e. puppetmaster.example.com.

    [agent]
    server = puppetmaster.example.com
    certname = puppetnode.example.com

After adding this, the file will look like this.
    [main]
    logdir=/var/log/puppet
    vardir=/var/lib/puppet
    ssldir=/var/lib/puppet/ssl
    rundir=/var/run/puppet
    factpath=$vardir/lib/facter
    #templatedir=$confdir/templates

    [agent]
    server = puppetmaster.example.com
    certname = puppetnode.example.com

Once done with that, we'll save and exit.

Next, we'll start the puppet agent on our Ubuntu 15.04 node. To start the puppet agent, we'll run the following command.

    # systemctl start puppet

If everything went as expected and was configured properly, we should see no output from the above command. When an agent runs for the first time, it generates an SSL certificate and sends a signing request to the puppet master; once the master signs the agent's certificate, the two will be able to communicate.

**Note**: If you are adding your first node, it is recommended that you attempt to sign the certificate on the puppet master before adding your other agents. Once you have verified that everything works properly, you can go back and add the remaining agent nodes.

### 11. Signing Certificate Requests on the Master ###

When the puppet agent runs for the first time, it generates an SSL certificate and sends a signing request to the master server. Before the master is able to communicate with and control the agent node, it must sign that specific agent node's certificate.

To get the list of certificate requests, we'll run the following command on the puppet master server.

    # puppet cert list

    "puppetnode.example.com" (SHA256) 31:A1:7E:23:6B:CD:7B:7D:83:98:33:8B:21:01:A6:C4:01:D5:53:3D:A0:0E:77:9A:77:AE:8F:05:4A:9A:50:B2

As we have just set up our first agent node, we will see one request. It will look something like the above, with the agent node's domain name as the hostname.

Note that there is no + in front of it, which indicates that it has not been signed yet.

Now, we'll sign the certificate request. In order to sign a certificate request, we simply run **puppet cert sign** with the **hostname**, as shown below.

    # puppet cert sign puppetnode.example.com

    Notice: Signed certificate request for puppetnode.example.com
    Notice: Removing file Puppet::SSL::CertificateRequest puppetnode.example.com at '/var/lib/puppet/ssl/ca/requests/puppetnode.example.com.pem'

The puppet master can now communicate with and control the node that the signed certificate belongs to.

If we want to sign all of the current requests, we can use the --all option, as shown below.

    # puppet cert sign --all

### Removing a Puppet Certificate ###

If we want to remove a host, or want to rebuild a host and then add it back, we will want to revoke the host's certificate on the puppet master. To do this, we use the clean action as follows.

    # puppet cert clean hostname

    Notice: Revoked certificate with serial 5
    Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/ca/signed/puppetnode.example.com.pem'
    Notice: Removing file Puppet::SSL::Certificate puppetnode.example.com at '/var/lib/puppet/ssl/certs/puppetnode.example.com.pem'

If we want to view all of the requests, signed and unsigned, we run the following command:

    # puppet cert list --all

    + "puppetmaster" (SHA256) 33:28:97:86:A1:C3:2F:73:10:D1:FB:42:DA:D5:42:69:71:84:F0:E2:8A:01:B9:58:38:90:E4:7D:B7:25:23:EC (alt names: "DNS:puppetmaster", "DNS:puppetmaster.example.com")

### 12. Deploying a Puppet Manifest ###
After we have configured and completed the puppet manifest, we'll want to deploy the manifest to the agent nodes. To apply and load the main manifest, we can simply run the following command on the agent node.

    # puppet agent --test

    Info: Retrieving pluginfacts
    Info: Retrieving plugin
    Info: Caching catalog for puppetnode.example.com
    Info: Applying configuration version '1434563858'
    Notice: /Stage[main]/Main/Exec[apt-update]/returns: executed successfully
    Notice: Finished catalog run in 10.53 seconds

This shows us immediately how the main manifest affects a single server.

If we want to run a puppet manifest that is not related to the main manifest, we can simply use puppet apply followed by the manifest file path. It applies the manifest only to the node that we run the apply from.

    # puppet apply /etc/puppet/manifests/test.pp

### 13. Configuring Manifest for a Specific Node ###

If we want to deploy a manifest only to a specific node, we'll need to configure the manifest as follows.

We'll edit the manifest on the master server using a text editor.

    # nano /etc/puppet/manifests/site.pp

Now, we'll add the following lines there.

    node 'puppetnode', 'puppetnode1' {
        # execute 'apt-get update'
        exec { 'apt-update': # exec resource named 'apt-update'
            command => '/usr/bin/apt-get update' # command this resource will run
        }

        # install apache2 package
        package { 'apache2':
            require => Exec['apt-update'], # require 'apt-update' before installing
            ensure => installed,
        }

        # ensure apache2 service is running
        service { 'apache2':
            ensure => running,
        }
    }

Here, the above configuration will install and deploy the apache web server only to the two specified nodes, with the short names puppetnode and puppetnode1. We can add more nodes that we specifically want the manifest deployed to.

### 14. Configuring Manifest with a Module ###

Modules are useful for grouping tasks together; there are many available in the Puppet community, and anyone can contribute more.

On the puppet master, we'll install the **puppetlabs-apache** module using the puppet module command.

    # puppet module install puppetlabs-apache

**Warning**: Please do not use this module on an existing apache setup, or it will purge the apache configurations that are not managed by Puppet.

Now we'll edit the main manifest, i.e. **site.pp**, using a text editor.

    # nano /etc/puppet/manifests/site.pp

Now we'll add the following lines to install apache on puppetnode.

    node 'puppetnode' {
        class { 'apache': } # use apache module
        apache::vhost { 'example.com': # define vhost resource
            port => '80',
            docroot => '/var/www/html'
        }
    }

Then we'll save and exit. Then, we'll rerun the manifest to deploy the configuration to the agents of our infrastructure.

### Conclusion ###

Finally, we have successfully installed Puppet to manage our server infrastructure running the Ubuntu 15.04 "Vivid Vervet" Linux operating system. We learned how Puppet works, how to configure a manifest, how to communicate with nodes, and how to deploy a manifest to the agent nodes with secure SSL certificates. Controlling, managing and configuring repeated tasks across any number of nodes is very easy with the Puppet open source software configuration management tool.
If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)

--------------------------------------------------------------------------------

via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/

作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://linoxide.com/author/arunp/
[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
\ No newline at end of file

From 1093cce5b950327736eea1b94718c7864a58edaf Mon Sep 17 00:00:00 2001
From: XLCYun
Date: Thu, 6 Aug 2015 12:29:08 +0800
Subject: [PATCH 119/207] Update 20150716 A Week With GNOME As My Linux
 Desktop--What They Get Right & Wrong - Page 1 - Introduction.md

---
 ...ktop--What They Get Right & Wrong - Page 1 - Introduction.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md
index 39f29af147..582708f5a4 100644
--- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md
+++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md
@@ -2,7 +2,7 @@
 ================================================================================
 *作者声明: 如果你是因为某种神迹而在没看标题的情况下点开了这篇文章,那么我想再重申一些东西...这是一篇评论文章。文中的观点都是我自己的,不代表Phoronix和Michael的观点。它们完全是我自己的想法。

-另外,没错……这可能是一篇引战的文章。我希望社团成员们更沉稳一些,因为我确实想在KDE和Gnome的社团上发起讨论,反馈。因此当我想指出——我所看到的——一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明:本文另一可选标题为“被[剪纸][1]千刀万剐”(原文剪纸一词为papercuts, 指易修复而烦人的漏洞,译者注)。
+另外,没错……这可能是一篇引战的文章。我希望社团成员们更沉稳一些,因为我确实想在KDE和Gnome的社团上发起讨论,反馈。因此当我想指出——我所看到的——一个瑕疵时,我会尽量地做到具体而直接。这样,相关的讨论也能做到同样的具体和直接。再次声明:本文另一可选标题为“被[细纸片][1]千刀万剐”(原文含paper cuts一词,指易修复但烦人的缺陷,译者注)。

 现在,重申完毕……文章开始。

From 41b90d9bdfb996de4549c2bafa82a921844a8ed8 Mon Sep 17 00:00:00 2001
From: XLCYun
Date: Thu, 6 Aug 2015 12:30:28 +0800
Subject: [PATCH 120/207] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90=20?=
 =?UTF-8?q?=E7=AC=AC=E4=BA=94=E8=8A=82=20XLCYun?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...Get Right & Wrong - Page 5 - Conclusion.md | 39 +++++++++++++++++++
 1 file changed, 39 insertions(+)
 create mode 100644 translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md

diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md
new file mode 100644
index 0000000000..02ee7425fc
--- /dev/null
+++ b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md
@@ -0,0 +1,39 @@
A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 5 - Conclusion
================================================================================
### User Experience and Closing Thoughts ###

When Gnome 2.x and KDE 4.x were going head to head... I jumped between the two quite happily. Some things I loved, some things I hated, but overall they were both a pleasure to use. Then Gnome 3.x came around, with all the drama over Gnome Shell. I swore off Gnome then and avoided it every chance I could. At the time it was not user friendly and not intuitive; it broke an established design paradigm, all in preparation for a world ruled by tablets... a future that, judging from the dropping sales of tablets, will never come.
Eight releases of Gnome 3 later, and the unimaginable happened: Gnome became user friendly. Gnome became intuitive. Is it perfect? Of course not. I still hate the design paradigm it tries to push, I hate how it tries to force a workflow onto me, but with time and patience both of those things can be gotten used to. Once you manage to look past Gnome Shell's alien appearance and start interacting with it and the other parts of Gnome (the Control Center especially), you see what Gnome has definitely gotten right: the little things. The attention to detail!

People can adapt to new design paradigms, and they can adapt to new workflows -- the iPhone and iPad proved that -- but what will always bother them are the paper cuts: the small flaws that are easy to fix but endlessly annoying.

Which brings up the most important distinction between KDE and Gnome. Gnome feels like a product, like a singular experience. When you use it, it feels complete, and everything you need is at your fingertips. It feels like THE Linux desktop the way Windows or OS X have THE desktop experience: what you need is there, and it was all written by the same people on the same team working towards the same goal. Hell, even an application prompting for sudo access feels like an intentionally designed part of the desktop under Gnome, much the way it does under Windows. Under KDE, it's just some random-looking popup that any application could have created. It doesn't feel like a part of the system stopping, in an official capacity, to say "Hey, something has requested administrative rights! Do you want to allow it?"

KDE does not deliver a cohesive experience. KDE doesn't feel like it has a direction it's moving in, and it doesn't feel like a complete experience. It feels like a bunch of pieces moving in different directions that just happen to share a toolkit beneath them. If the developers are happy with that, fine, good for them, but if they want to offer the best experience possible, then the little things have to matter. User experience and intuitiveness need to be at the center of every application's design, and there needs to be a vision of what KDE wants to offer -- and of how it should look.

Is there anything stopping me from using Gnome Disks under KDE? Rhythmbox? Evolution? Nope. Nope. Nope. But that misses the point. Gnome and KDE both market themselves as "desktop environments." They are supposed to be complete environments, which means their pieces come together and fit together, and you use that environment's tools because they say "we support everything you need for a full desktop." Seriously? Only Gnome seems to live up to that. KDE feels half-finished on the "coming together" part, let alone providing everything you need for a "full experience." There is no counterpart to Gnome Disks -- kpartitionmanager prompts for root. There is no "first time user" run-through, and Kubuntu has only just now introduced a user manager. Good grief, Gnome even provides Maps, Notes, Calendar and Clock applications. Do all of these applications matter 100%? No, of course not. But the fact that Gnome has them helps push the idea that Gnome is a full, rich experience.

The problems I'm calling out in KDE are not impossible to fix, absolutely not! But they require people to care. They require developers to take pride in their work beyond just the features they implement -- form counts for a whole lot. Don't take away the user's ability to configure things -- the lack of configuration options is one of my biggest gripes with GNOME 3.x -- but don't use "well, you can configure it however you want" as an excuse for not providing sane defaults. The defaults are what users will see; they are what users judge from the first moment they open the software. Make it a good impression.

I know the KDE developers know design matters -- that is why the Visual Design Group exists -- but it feels like they aren't letting the VDG reach its full potential. So KDE has an organizational flaw. It's not that KDE can't be complete, or that it can't come together and fix the decay; it's that the developers haven't. They aimed for the bullseye... but they missed.

And before anyone says it... please don't say "patches are welcome." Because while I can happily submit patches for individual annoyances, more of these irritations will just keep coming as long as developers keep doing things their own, non-intuitive way. This isn't about Muon not being center-aligned. This isn't about Amarok's UI being ugly. This isn't about the volume and brightness pop-ups taking up a huge chunk of my screen real estate every time I hit my hotkeys (seriously, someone shrink those things).

This is about a mentality of apathy, about developers apparently giving little thought to the UI when they design their applications. Everything the KDE team builds works fine. Amarok plays music. Dragon plays videos. Kwin, or Qt and kdelibs, seem more powerful and efficient than Mutter/gtk (judging only by my battery life; not a scientific test). All of that is well and good, and important... but how it is presented matters too. Arguably, the presentation matters most, because it is what users see and interact with.

KDE application developers... get the VDG involved. Have the VDG review and approve the design of every "core" application, and have a UI/UX expert from the VDG go through each application's usage patterns and flows to ensure it is intuitive. Heck, whatever application you are developing, even just posting a mock-up to the VDG forums and asking for feedback would probably get you some very good pointers. You have this great resource right there; now go use it.

I don't want to sound ungrateful. I love KDE, and I love the work and effort the volunteers put into giving Linux users a usable desktop, and an alternative to Gnome. And it is because I care that I write this article: because I want to see KDE excel, I want to see it go further than it ever has before. Doing that requires everyone to keep working, and it requires that people stop dodging criticism. It requires people to be honest about where the system's interactions break down. If we cannot voice direct criticism, if we cannot say "this sucks!", then things will never get better.

Will I keep using Gnome after this week? Probably not, no. Gnome is still trying to force its workflow on me, and I don't want to follow or abide by it; I feel less productive when using it because it does not match my mental model. But for my friends, when they ask me "which desktop environment should I use?", I will probably recommend Gnome, especially to the less technical ones who just want things to "work." Given the current state of KDE, that may be the most damning assessment I can give.

--------------------------------------------------------------------------------

via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5

作者:Eric Griffith
译者:[XLCYun](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

From 196acfe054a235f90dcbd8e99174c08d4c1eede9 Mon Sep 17 00:00:00 2001
From: XLCYun
Date: Thu, 6 Aug 2015 13:12:55 +0800
Subject: [PATCH 121/207] =?UTF-8?q?=E5=88=A0=E9=99=A4=E5=8E=9F=E6=96=87=20?=
 =?UTF-8?q?=20XLCYun?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...Get Right & Wrong - Page 5 - Conclusion.md | 40 ------------------
 1 file changed, 40 deletions(-)
 delete mode 100644 sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md

diff --git a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get
Right & Wrong - Page 5 - Conclusion.md b/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md deleted file mode 100644 index cf9028229d..0000000000 --- a/sources/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md +++ /dev/null @@ -1,40 +0,0 @@ -Translating by XLCYun. -A Week With GNOME As My Linux Desktop: What They Get Right & Wrong - Page 5 - Conclusion -================================================================================ -### User Experience and Closing Thoughts ### - -When Gnome 2.x and KDE 4.x were going head to head.. I jumped between the two quite happily. Some things I loved, some things I hated, but over all they were both a pleasure to use. Then Gnome 3.x came around and all of the drama with Gnome Shell. I swore off Gnome and avoided it every chance I could. It wasn't user friendly, it was non-intuitive, it broke an establish paradigm in preparation for tablet's taking over the world... A future that, judging from the dropping sales of tablets, will never come. - -Eight releases of Gnome 3 later and the unimaginable happened. Gnome got user friendly. Gnome got intuitive. Is it perfect? Of course not. I still hate the paradigm it tries to push, I hate how it tries to force a work flow onto me, but both of those things can be gotten used to with time and patience. Once you have managed to look past Gnome Shell's alien appearance and you start interacting with it and the other parts of Gnome (Control Center especially) you see what Gnome has definitely gotten right: the little things. The attention to detail. - -People can adapt to new paradigms, people can adapt to new work flows-- the iPhone and iPad proved that-- but what will always bother them are the paper cuts. - -Which brings up an important distinction between KDE and Gnome. Gnome feels like a product. It feels like a singular experience. When you use it, it feels like it is complete and that everything you need is at your fingertips. It feel's like THE Linux desktop in the same way that Windows or OS X have THE desktop experience: what you need is there and it was all written by the same guys working on the same team towards the same goal. Hell, even an application prompting for sudo access feels like an intentional part of the desktop under Gnome, much the way that it is under Windows. In KDE it's just some random-looking window popup that any application could have created. It doesn't feel like a part of the system stopping and going "Hey! Something has requested administrative rights! Do you want to let it go through?" in an official capacity. - -KDE doesn't feel like cohesive experience. KDE doesn't feel like it has a direction its moving in, it doesn't feel like a full experience. KDE feels like its a bunch of pieces that are moving in a bunch of different directions, that just happen to have a shared toolkit beneath them. If that's what the developers are happy with, then fine, good for them, but if the developers still have the hope of offering the best experience possible then the little stuff needs to matter. The user experience and being intuitive needs to be at the forefront of every single application, there needs to be a vision of what KDE wants to offer -and- how it should look. - -Is there anything stopping me from using Gnome Disks under KDE? Rhythmbox? Evolution? Nope. Nope. Nope. But that misses the point. Gnome and KDE both market themselves as "Desktop Environments." 
They are supposed to be full -environments-, that means they all the pieces come and fit together, that you use that environment's tools because they are saying "We support everything you need to have a full desktop." Honestly? Only Gnome seems to fit the bill of being complete. KDE feel's half-finished when it comes to "coming together" part, let alone offering everything you need for a "full experience". There's no counterpart to Gnome Disks-- kpartitionmanager prompts for root. No "First Time User" run through, it just now got a user manager in Kubuntu. Hell, Gnome even provides a Maps, Notes, Calendar and Clock application. Do all of these applications matter 100%? No, of course not. But the fact that Gnome has them helps to push the idea that Gnome is a full and complete experience. - -My complaints about KDE are not impossible to fix, not by a long shot. But it requires people to care. It requires developers to take pride in their work beyond just function-- form counts for a whole hell of a lot. Don't take away the user's ability to configure things-- the lack of configuration is one of my biggest gripes with GNOME 3.x, but don't use "Well you can configure it however you want," as an excuse for not providing sane defaults. The defaults are what users are going to see, they are what the users are going to judge from the first moment they open your application. Make it a good impression. - -I know the KDE developers know design matters, that is WHY the Visual Design Group exists, but it feels like they aren't using the VDG to their fullest. And therein lies KDE's hamartia. It's not that KDE can't be complete, it's not that it can't come together and fix the downfalls, it just that they haven't. They aimed for the bulls eye... but they missed. - -And before anyone says it... Don't say "Patches are welcome." Because while I can happily submit patches for the individual annoyances more will just keep coming as developers keep on their marry way of doing things in non-intuitive ways. This isn't about Muon not being center-aligned. This isn't about Amarok having an ugly UI. This isn't about the volume and brightness pop-up notifiers taking up a large chunk of my screen real-estate every time I hit my hotkeys (seriously, someone shrink those things). - -This is about a mentality of apathy, this is about developers apparently not thinking things through when they make the UI for their applications. Everything the KDE Community does works fine. Amarok plays music. Dragon Player plays videos. Kwin / Qt & kdelibs is seemingly more power efficient than Mutter / gtk (according to my battery life times. Non-scientific testing). Those things are all well and good, and important.. but the presentation matters to. Arguably, the presentation matters the most because that is what user's see and interact with. - -To KDE application developers... Get the VDG involved. Make every single 'core' application get its design vetted and approved by the VDG, have a UI/UX expert from the VDG go through the usage patterns and usage flow of your application to make sure its intuitive. Hell, even just posting a mock up to the VDG forums and asking for feedback would probably get you some nice pointers and feedback for whatever application you're working on. You have this great resource there, now actually use them. - -I am not trying to sound ungrateful. I love KDE, I love the work and effort that volunteers put into giving Linux users a viable desktop, and an alternative to Gnome. 
And it is because I care that I write this article. Because I want to see KDE excel, I want to see it go further and farther than it has before. But doing that requires work on everyone's part, and it requires that people don't hold back criticism. It requires that people are honest about their interaction with the system and where it falls apart. If we can't give direct criticism, if we can't say "This sucks!" then it will never get better. - -Will I still use Gnome after this week? Probably not, no. Gnome still trying to force a work flow on me that I don't want to follow or abide by, I feel less productive when I'm using it because it doesn't follow my paradigm. For my friends though, when they ask me "What desktop environment should I use?" I'm probably going to recommend Gnome, especially if they are less technical users who want things to "just work." And that is probably the most damning assessment I could make in regards to the current state of KDE. - --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5 - -作者:Eric Griffith -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 26ad573be26ca3f90eb97bea4fd591febb0c0c58 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 6 Aug 2015 13:14:12 +0800 Subject: [PATCH 122/207] =?UTF-8?q?20150806-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ware Developer is a Great Career Choice.md | 50 +++++++++++++++++++ 1 file changed, 50 insertions(+) create mode 100644 sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md diff --git a/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md new file mode 100644 index 0000000000..0302c0b006 --- /dev/null +++ b/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md @@ -0,0 +1,50 @@ +5 Reasons Why Software Developer is a Great Career Choice +================================================================================ +This week I will give a presentation at a local high school on what it is like to work as a programmer. I am volunteering (through the organization [Transfer][1]) to come to schools and talk about what I work with. This school will have a technology theme day this week, and would like to hear what working in the technology sector is like. Since I develop software, that’s what I will talk about. One section will be on why I think a career in software development is great. The main reasons are: + +### 5 Reasons ### + +**1 Creative**. If you ask people to name creative jobs, chances are they will say things like writer, musician or painter. But few people know that software development is also very creative. It is almost by definition creative, since you create new functionality that didn’t exist before. The solutions can be expressed in many ways, both structurally and in the details. Often there are trade-offs to make (for example speed versus memory consumption). And of course the solution has to be correct. All this requires creativity. + +**2 Collaborative**. Another myth is that programmers sit alone at their computers and code all day. But software development is in fact almost always a team effort. 
You discuss programming problems and solutions with your colleagues, and discuss requirements and other issues with product managers, testers and customers. It is also telling that pair-programming (two developers programming together on one computer) is a popular practice.

**3 In demand**. More and more of the world is using software, or as Marc Andreessen put it: "[Software is Eating the World][2]". Even as there are more programmers (in Stockholm, programmer is now the [most common occupation][3]), demand is still outpacing supply. Software companies report that one of their greatest challenges is [finding good developers][4]. I regularly get contacted by recruiters trying to get me to change jobs. I don't know of many other professions where employers compete for you like that.

**4 Pays well**. Developing software can create a lot of value. There is no marginal cost to selling one extra copy of software you have already developed. This, combined with the high demand for developers, means that pay is quite good. There are of course occupations where you make more money, but compared to the general population, I think software developers are paid quite well.

**5 Future proof**. Many jobs disappear, often because they can be replaced by computers and software. But all those new programs still need to be developed and maintained, so the outlook for programmers is quite good.

### But… ###

**What about outsourcing?** Won't all software development be outsourced to countries where the salaries are much lower? This is an example of an idea that is better in theory than in practice (much like the [waterfall development methodology][5]). Software development is a discovery activity as much as a design activity. It benefits greatly from intense collaboration. Furthermore, especially when the main product is software, the knowledge gained while developing it is a competitive advantage. The easier that knowledge is shared within the whole company, the better it is.

Another way to look at it is this: outsourcing of software development has existed for quite a while now, yet there is still high demand for local developers. So companies see benefits in hiring local developers that outweigh the higher costs.

### How to Win ###

There are many reasons why I think developing software is enjoyable (see also [Why I Love Coding][6]). But it is not for everybody. Fortunately, it is quite easy to try programming out. There are innumerable resources on the web for learning to program. For example, both [Coursera][7] and [Udacity][8] have introductory courses. If you have never programmed, try one of the free courses or tutorials to get a feel for it.

Finding something you really enjoy doing for a living has at least two benefits. First, since you do it every day, work will be much more fun than if you simply do something to make money. Second, if you really like it, you have a much better chance of getting good at it. I like the Venn diagram below (by [@eskimon][9]) on what constitutes a great job. Since programming pays relatively well, I think that if you like it, you have a good chance of ending up in the center of the diagram!
+ +![](https://henrikwarne1.files.wordpress.com/2014/12/career-planning.png) + +-------------------------------------------------------------------------------- + +via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/ + +作者:[Henrik Warne][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://henrikwarne.com/ +[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23 +[2]:http://online.wsj.com/articles/SB10001424053111903480904576512250915629460 +[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/ +[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov +[5]:http://en.wikipedia.org/wiki/Waterfall_model +[6]:http://henrikwarne.com/2012/06/02/why-i-love-coding/ +[7]:https://www.coursera.org/ +[8]:https://www.udacity.com/ +[9]:https://eskimon.wordpress.com/about/ \ No newline at end of file From bb276ee6be7206faefab7041f6c150560987ca6b Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 6 Aug 2015 13:17:29 +0800 Subject: [PATCH 123/207] =?UTF-8?q?=E6=B7=BB=E5=8A=A0=E8=AF=91=E8=80=85?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... 5 Reasons Why Software Developer is a Great Career Choice.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md index 0302c0b006..d24aa83983 100644 --- a/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md +++ b/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md @@ -1,3 +1,4 @@ +Translating by MousyCoder 5 Reasons Why Software Developer is a Great Career Choice ================================================================================ This week I will give a presentation at a local high school on what it is like to work as a programmer. I am volunteering (through the organization [Transfer][1]) to come to schools and talk about what I work with. This school will have a technology theme day this week, and would like to hear what working in the technology sector is like. Since I develop software, that’s what I will talk about. One section will be on why I think a career in software development is great. 
The main reasons are: From 16f5b9676503cfc9ed1702e9ebf8e98bf89c32cc Mon Sep 17 00:00:00 2001 From: wxy Date: Thu, 6 Aug 2015 15:03:05 +0800 Subject: [PATCH 124/207] PUB:20150727 Easy Backup Restore and Migrate Containers in Docker @GOLinux --- ...estore and Migrate Containers in Docker.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) rename {translated/tech => published}/20150727 Easy Backup Restore and Migrate Containers in Docker.md (61%) diff --git a/translated/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md b/published/20150727 Easy Backup Restore and Migrate Containers in Docker.md similarity index 61% rename from translated/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md rename to published/20150727 Easy Backup Restore and Migrate Containers in Docker.md index 420430cca8..7d2d5f26d8 100644 --- a/translated/tech/20150727 Easy Backup Restore and Migrate Containers in Docker.md +++ b/published/20150727 Easy Backup Restore and Migrate Containers in Docker.md @@ -1,32 +1,32 @@ 无忧之道:Docker中容器的备份、恢复和迁移 ================================================================================ -今天,我们将学习如何快速地对docker容器进行快捷备份、恢复和迁移。[Docker][1]是一个开源平台,用于自动化部署应用,以通过快捷的途径在称之为容器的轻量级软件层下打包、发布和运行这些应用。它使得应用平台独立,因为它扮演了Linux上一个额外的操作系统级虚拟化的自动化抽象层。它通过其组件cgroups和命名空间利用Linux内核的资源分离特性,达到避免虚拟机开销的目的。它使得用于部署和扩展web应用、数据库和后端服务的大规模构建块无需依赖于特定的堆栈或供应者。 +今天,我们将学习如何快速地对docker容器进行快捷备份、恢复和迁移。[Docker][1]是一个开源平台,用于自动化部署应用,以通过快捷的途径在称之为容器的轻量级软件层下打包、发布和运行这些应用。它使得应用平台独立,因为它扮演了Linux上一个额外的操作系统级虚拟化的自动化抽象层。它通过其组件cgroups和命名空间利用Linux内核的资源分离特性,达到避免虚拟机开销的目的。它使得用于部署和扩展web应用、数据库和后端服务的大规模构建组件无需依赖于特定的堆栈或供应者。 -所谓的容器,就是那些创建自Docker镜像的软件层,它包含了独立的Linux文件系统和开箱即用的应用程序。如果我们有一个在盒子中运行着的Docker容器,并且想要备份这些容器以便今后使用,或者想要迁移这些容器,那么,本教程将帮助你掌握在Linux操作系统中备份、恢复和迁移Docker容器。 +所谓的容器,就是那些创建自Docker镜像的软件层,它包含了独立的Linux文件系统和开箱即用的应用程序。如果我们有一个在机器中运行着的Docker容器,并且想要备份这些容器以便今后使用,或者想要迁移这些容器,那么,本教程将帮助你掌握在Linux操作系统中备份、恢复和迁移Docker容器的方法。 我们怎样才能在Linux中备份、恢复和迁移Docker容器呢?这里为您提供了一些便捷的步骤。 ### 1. 
备份容器 ### -首先,为了备份Docker中的容器,我们会想看看我们想要备份的容器列表。要达成该目的,我们需要在我们运行这Docker引擎,并已创建了容器的Linux机器中运行 docker ps 命令。 +首先,为了备份Docker中的容器,我们会想看看我们想要备份的容器列表。要达成该目的,我们需要在我们运行着Docker引擎,并已创建了容器的Linux机器中运行 docker ps 命令。 # docker ps ![Docker Containers List](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-containers-list.png) -在此之后,我们要选择我们想要备份的容器,然后我们会去创建该容器的快照。我们可以使用 docker commit 命令来创建快照。 +在此之后,我们要选择我们想要备份的容器,然后去创建该容器的快照。我们可以使用 docker commit 命令来创建快照。 # docker commit -p 30b8f18f20b4 container-backup ![Docker Commit](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-commit.png) -该命令会生成一个作为Docker镜像的容器快照,我们可以通过运行 docker images 命令来查看Docker镜像,如下。 +该命令会生成一个作为Docker镜像的容器快照,我们可以通过运行 `docker images` 命令来查看Docker镜像,如下。 # docker images ![Docker Images](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-images.png) -正如我们所看见的,上面做的快照已经作为Docker镜像保存了。现在,为了备份该快照,我们有两个选择,一个是我们可以登陆进Docker注册中心,并推送该镜像;另一个是我们可以将Docker镜像打包成tarball备份,以供今后使用。 +正如我们所看见的,上面做的快照已经作为Docker镜像保存了。现在,为了备份该快照,我们有两个选择,一个是我们可以登录进Docker注册中心,并推送该镜像;另一个是我们可以将Docker镜像打包成tar包备份,以供今后使用。 如果我们想要在[Docker注册中心][2]上传或备份镜像,我们只需要运行 docker login 命令来登录进Docker注册中心,然后推送所需的镜像即可。 @@ -39,23 +39,23 @@ ![Docker Push](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-push.png) -如果我们不想备份到docker注册中心,而是想要将此镜像保存在本地机器中,以供日后使用,那么我们可以将其作为tarball备份。要完成该操作,我们需要运行以下 docker save 命令。 +如果我们不想备份到docker注册中心,而是想要将此镜像保存在本地机器中,以供日后使用,那么我们可以将其作为tar包备份。要完成该操作,我们需要运行以下 `docker save` 命令。 # docker save -o ~/container-backup.tar container-backup ![taking tarball backup](http://blog.linoxide.com/wp-content/uploads/2015/07/taking-tarball-backup.png) -要验证tarball时候已经生成,我们只需要在保存tarball的目录中运行 ls 命令。 +要验证tar包是否已经生成,我们只需要在保存tar包的目录中运行 ls 命令即可。 ### 2. 恢复容器 ### -接下来,在我们成功备份了我们的Docker容器后,我们现在来恢复这些被快照成Docker镜像的容器。如果我们已经在注册中心推送了这些Docker镜像,那么我们仅仅需要把那个Docker镜像拖回并直接运行即可。 +接下来,在我们成功备份了我们的Docker容器后,我们现在来恢复这些制作了Docker镜像快照的容器。如果我们已经在注册中心推送了这些Docker镜像,那么我们仅仅需要把那个Docker镜像拖回并直接运行即可。 # docker pull arunpyasi/container-backup:test ![Docker Pull](http://blog.linoxide.com/wp-content/uploads/2015/07/docker-pull.png) -但是,如果我们将这些Docker镜像作为tarball文件备份到了本地,那么我们只要使用 docker load 命令,后面加上tarball的备份路径,就可以加载该Docker镜像了。 +但是,如果我们将这些Docker镜像作为tar包文件备份到了本地,那么我们只要使用 docker load 命令,后面加上tar包的备份路径,就可以加载该Docker镜像了。 # docker load -i ~/container-backup.tar @@ -63,7 +63,7 @@ # docker images -在镜像被加载后,我们将从加载的镜像去运行Docker容器。 +在镜像被加载后,我们将用加载的镜像去运行Docker容器。 # docker run -d -p 80:80 container-backup @@ -71,11 +71,11 @@ ### 3. 
迁移Docker容器 ### -迁移容器同时涉及到了上面两个操作,备份和恢复。我们可以将任何一个Docker容器从一台机器迁移到另一台机器。在迁移过程中,首先我们将容器的备份作为快照Docker镜像。然后,该Docker镜像或者是被推送到了Docker注册中心,或者被作为tarball文件保存到了本地。如果我们将镜像推送到了Docker注册中心,我们简单地从任何我们想要的机器上使用 docker run 命令来恢复并运行该容器。但是,如果我们将镜像打包成tarball备份到了本地,我们只需要拷贝或移动该镜像到我们想要的机器上,加载该镜像并运行需要的容器即可。 +迁移容器同时涉及到了上面两个操作,备份和恢复。我们可以将任何一个Docker容器从一台机器迁移到另一台机器。在迁移过程中,首先我们将把容器备份为Docker镜像快照。然后,该Docker镜像或者是被推送到了Docker注册中心,或者被作为tar包文件保存到了本地。如果我们将镜像推送到了Docker注册中心,我们简单地从任何我们想要的机器上使用 docker run 命令来恢复并运行该容器。但是,如果我们将镜像打包成tar包备份到了本地,我们只需要拷贝或移动该镜像到我们想要的机器上,加载该镜像并运行需要的容器即可。 ### 尾声 ### -最后,我们已经学习了如何快速地备份、恢复和迁移Docker容器,本教程适用于各个成功运行Docker的操作系统平台。真的,Docker是一个相当简单易用,然而功能却十分强大的工具。它的命令相当易记,这些命令都非常短,带有许多简单而强大的标记和参数。上面的方法让我们备份容器时很是安逸,使得我们可以在日后很轻松地恢复它们。这会帮助我们恢复我们的容器和镜像,即便主机系统崩溃,甚至意外地被清除。如果你还有很多问题、建议、反馈,请在下面的评论框中写出来吧,可以帮助我们改进或更新我们的内容。谢谢大家!享受吧 :-) +最后,我们已经学习了如何快速地备份、恢复和迁移Docker容器,本教程适用于各个可以成功运行Docker的操作系统平台。真的,Docker是一个相当简单易用,然而功能却十分强大的工具。它的命令相当易记,这些命令都非常短,带有许多简单而强大的标记和参数。上面的方法让我们备份容器时很是安逸,使得我们可以在日后很轻松地恢复它们。这会帮助我们恢复我们的容器和镜像,即便主机系统崩溃,甚至意外地被清除。如果你还有很多问题、建议、反馈,请在下面的评论框中写出来吧,可以帮助我们改进或更新我们的内容。谢谢大家!享受吧 :-) -------------------------------------------------------------------------------- @@ -83,7 +83,7 @@ via: http://linoxide.com/linux-how-to/backup-restore-migrate-containers-docker/ 作者:[Arun Pyasi][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From f6f0cfde12d224b51ddcb416a18d01626351c94d Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 6 Aug 2015 15:06:17 +0800 Subject: [PATCH 125/207] =?UTF-8?q?20150806-4=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...minism and increasing diversity in tech.md | 81 +++++++++++++++++++ 1 file changed, 81 insertions(+) create mode 100644 sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md diff --git a/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md b/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md new file mode 100644 index 0000000000..36f5642c10 --- /dev/null +++ b/sources/talk/20150806 Torvalds 2.0--Patricia Torvalds on computing college feminism and increasing diversity in tech.md @@ -0,0 +1,81 @@ +Torvalds 2.0: Patricia Torvalds on computing, college, feminism, and increasing diversity in tech +================================================================================ +![Image by : Photo by Becky Svartström. Modified by Opensource.com. CC BY-SA 4.0](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/osdc-lead-patriciatorvalds.png) +Image by : Photo by Becky Svartström. Modified by Opensource.com. [CC BY-SA 4.0][1] + +Patricia Torvalds isn't the Torvalds name that pops up in Linux and open source circles. Yet. + +![](http://opensource.com/sites/default/files/images/life-uploads/ptorvalds.png) + +At 18, Patricia is a feminist with a growing list of tech achievements, open source industry experience, and her sights set on diving into her freshman year of college at Duke University's Pratt School of Engineering. She works for [Puppet Labs][2] in Portland, Oregon, as an intern, but soon she'll head to Durham, North Carolina, to start the fall semester of college. 
+ +In this exclusive interview, Patricia explains what got her interested in computer science and engineering (spoiler alert: it wasn't her father), what her high school did "right" with teaching tech, the important role feminism plays in her life, and her thoughts on the lack of diversity in technology. + +![](http://opensource.com/sites/default/files/images/life/Interview%20banner%20Q%26A.png) + +### What made you interested in studying computer science and engineering? ### + +My interest in tech really grew throughout high school. I wanted to go into biology for a while, until around my sophomore year. I had a web design internship at the Portland VA after my sophomore year. And I took an engineering class called Exploratory Ventures, which sent an ROV into the Pacific ocean late in my sophomore year, but the turning point was probably when I was named a regional winner and national runner up for the [NCWIT Aspirations in Computing][3] award halfway through my junior year. + +The award made me feel validated in my interest, of course, but I think the most important part of it was getting to join a Facebook group for all the award winners. The girls who have won the award are absolutely incredible and so supportive of each other. I was definitely interested in computer science before I won the award, because of my work in XV and at the VA, but having these girls to talk to solidified my interest and has kept it really strong. Teaching XV—more on that later—my junior and senior year, also, made engineering and computer science really fun for me. + +### What do you plan to study? And do you already know what you want to do after college? ### + +I hope to major in either Mechanical or Electrical and Computer Engineering as well as Computer Science, and minor in Women's Studies. After college, I hope to work for a company that supports or creates technology for social good, or start my own company. + +### My daughter had one high school programming class—Visual Basic. She was the only girl in her class, and she ended up getting harassed and having a miserable experience. What was your experience like? ### + +My high school began offering computer science classes my senior year, and I took Visual Basic as well! The class wasn't bad, but I was definitely one of three or four girls in the class of 20 or so students. Other computing classes seemed to have similar gender breakdowns. However, my high school was extremely small and the teacher was supportive of inclusivity in tech, so there was no harassment that I noticed. Hopefully the classes become more diverse in future years. + +### What did your schools do right technology-wise? And how could they have been better? ### + +My high school gave us consistent access to computers, and teachers occasionally assigned technology-based assignments in unrelated classes—we had to create a website for a social studies class a few times—which I think is great because it exposes everyone to tech. The robotics club was also pretty active and well-funded, but fairly small; I was not a member. One very strong component of the school's technology/engineering program is actually a student-taught engineering class called Exploratory Ventures, which is a hands-on class that tackles a new engineering or computer science problem every year. I taught it for two years with a classmate of mine, and have had students come up to me and tell me they're interested in pursuing engineering or computer science as a result of the class. 
+ +However, my high school was not particularly focused on deliberately including young women in these programs, and it isn't very racially diverse. The computing-based classes and clubs were, by a vast majority, filled with white male students. This could definitely be improved on. + +### Growing up, how did you use technology at home? ### + +Honestly, when I was younger I used my computer time (my dad created a tracker, which logged us off after an hour of Internet use) to play Neopets or similar games. I guess I could have tried to mess with the tracker or played on the computer without Internet use, but I just didn't. I sometimes did little science projects with my dad, and I remember once printing "Hello world" in the terminal with him a thousand times, but mostly I just played online games with my sisters and didn't get my start in computing until high school. + +### You were active in the Feminism Club at your high school. What did you learn from that experience? What feminist issues are most important to you now? ### + +My friend and I co-founded Feminism Club at our high school late in our sophomore year. We did receive lots of resistance to the club at first, and while that never entirely went away, by the time we graduated feminist ideals were absolutely a part of the school's culture. The feminist work we did at my high school was generally on a more immediate scale and focused on issues like the dress code. + +Personally, I'm very focused on intersectional feminism, which is feminism as it applies to other aspects of oppression like racism and classism. The Facebook page [Guerrilla Feminism][4] is a great example of an intersectional feminism and has done so much to educate me. I currently run the Portland branch. + +Feminism is also important to me in terms of diversity in tech, although as an upper-class white woman with strong connections in the tech world, the problems here affect me much less than they do other people. The same goes for my involvement in intersectional feminism. Publications like [Model View Culture][5] are very inspiring to me, and I admire Shanley Kane so much for what she does. + +### What advice would you give parents who want to teach their children how to program? ### + +Honestly, nobody ever pushed me into computer science or engineering. Like I said, for a long time I wanted to be a geneticist. I got a summer internship doing web design for the VA the summer after my sophomore year and totally changed my mind. So I don't know if I can fully answer that question. + +I do think genuine interest is important, though. If my dad had sat me down in front of the computer and told me to configure a webserver when I was 12, I don't think I'd be interested in computer science. Instead, my parents gave me a lot of free reign to do what I wanted, which was mostly coding terrible little HTML sites for my Neopets. Neither of my younger sisters are interested in engineering or computer science, and my parents don't care. I'm really lucky my parents have given me and my sisters the encouragement and resources to explore our interests. + +Still, I grew up saying my future career would be "like my dad's"—even when I didn't know what he did. He has a pretty cool job. Also, one time when I was in middle school, I told him that and he got a little choked up and said I wouldn't think that in high school. So I guess that motivated me a bit. 
+ +### What suggestions do you have for leaders in open source communities to help them attract and maintain a more diverse mix of contributors? ### + +I'm actually not active in particular open source communities. I feel much more comfortable discussing computing with other women; I'm a member of the [NCWIT Aspirations in Computing][6] network and it's been one of the most important aspects of my continued interest in technology, as well as the Facebook group [Ladies Storm Hackathons][7]. + +I think this applies well to attracting and maintaining a talented and diverse mix of contributors: Safe spaces are important. I have seen the misogynistic and racist comments made in some open source communities, and subsequent dismissals when people point out the issues. I think that in maintaining a professional community there have to be strong standards on what constitutes harassment or inappropriate conduct. Of course, people can—and will—have a variety of opinions on what they should be able to express in open source communities, or any community. However, if community leaders actually want to attract and maintain diverse talent, they need to create a safe space and hold community members to high standards. + +I also think that some community leaders just don't value diversity. It's really easy to argue that tech is a meritocracy, and the reason there are so few marginalized people in tech is just that they aren't interested, and that the problem comes from earlier on in the pipeline. They argue that if someone is good enough at their job, their gender or race or sexual orientation doesn't matter. That's the easy argument. But I was raised not to make excuses for mistakes. And I think the lack of diversity is a mistake, and that we should be taking responsibility for it and actively trying to make it better. 
+
+--------------------------------------------------------------------------------
+
+via: http://opensource.com/life/15/8/patricia-torvalds-interview
+
+作者:[Rikki Endsley][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://opensource.com/users/rikki-endsley
+[1]:https://creativecommons.org/licenses/by-sa/4.0/
+[2]:https://puppetlabs.com/
+[3]:https://www.aspirations.org/
+[4]:https://www.facebook.com/guerrillafeminism
+[5]:https://modelviewculture.com/
+[6]:https://www.aspirations.org/
+[7]:https://www.facebook.com/groups/LadiesStormHackathons/
\ No newline at end of file
From 60edce17ba50fa21a0caec3f4350b3b79c249e6b Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Thu, 6 Aug 2015 16:05:07 +0800
Subject: [PATCH 126/207] =?UTF-8?q?20150806-5=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...witch for debugging and troubleshooting.md | 69 ++++++++++++++++++
 ...or--No module named wxversion' on Linux.md | 49 +++++++++++++
 ...th Answers--How to install git on Linux.md | 72 +++++++++++++++++
 3 files changed, 190 insertions(+)
 create mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md
 create mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md
 create mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md

diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md
new file mode 100644
index 0000000000..2b4e16bcaf
--- /dev/null
+++ b/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md
@@ -0,0 +1,69 @@
+Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting
+================================================================================
+> **Question:** I am trying to troubleshoot my Open vSwitch deployment. For that, I would like to inspect the debug messages generated by its built-in logging mechanism. How can I enable logging in Open vSwitch, and change its logging level (e.g., to INFO/DEBUG level) to check more detailed debug information?
+
+Open vSwitch (OVS) is the most popular open-source implementation of a virtual switch on the Linux platform. As today's data centers increasingly rely on the software-defined network (SDN) architecture, OVS has quickly been adopted as the de-facto standard network element in data center SDN deployments.
+
+Open vSwitch has a built-in logging mechanism called VLOG. The VLOG facility allows one to enable and customize logging within various components of the switch. The logging information generated by VLOG can be sent to a combination of console, syslog and a separate log file for inspection. You can configure OVS logging dynamically at run-time with a command-line tool called `ovs-appctl`.
+
+![](https://farm1.staticflickr.com/499/19300367114_cd8aac2fb2_c.jpg)
+
+Here is how to enable logging and customize logging levels in Open vSwitch with `ovs-appctl`.
+
+The syntax of `ovs-appctl` to customize VLOG is as follows.
+ + $ sudo ovs-appctl vlog/set module[:facility[:level]] + +- **Module**: name of any valid component in OVS (e.g., netdev, ofproto, dpif, vswitchd, and many others) +- **Facility**: destination of logging information (must be: console, syslog or file) +- **Level**: verbosity of logging (must be: emer, err, warn, info, or dbg) + +In OVS source code, module name is defined in each source file in the form of: + + VLOG_DEFINE_THIS_MODULE(); + +For example, in lib/netdev.c, you will see: + + VLOG_DEFINE_THIS_MODULE(netdev); + +which indicates that lib/netdev.c is part of netdev module. Any logging messages generated in lib/netdev.c will belong to netdev module. + +In OVS source code, there are multiple severity levels used to define several different kinds of logging messages: VLOG_INFO() for informational, VLOG_WARN() for warning, VLOG_ERR() for error, VLOG_DBG() for debugging, VLOG_EMERG for emergency. Logging level and facility determine which logging messages are sent where. + +To see a full list of available modules, facilities, and their respective logging levels, run the following commands. This command must be invoked after you have started OVS. + + $ sudo ovs-appctl vlog/list + +![](https://farm1.staticflickr.com/465/19734939478_7eb5d44635_c.jpg) + +The output shows the debug levels of each module for three different facilities (console, syslog, file). By default, all modules have their logging level set to INFO. + +Given any one OVS module, you can selectively change the debug level of any particular facility. For example, if you want to see more detailed debug messages of dpif module at the console screen, run the following command. + + $ sudo ovs-appctl vlog/set dpif:console:dbg + +You will see that dpif module's console facility has changed its logging level to DBG. The logging level of two other facilities, syslog and file, remains unchanged. + +![](https://farm1.staticflickr.com/333/19896760146_5d851311ae_c.jpg) + +If you want to change the logging level for all modules, you can specify "ANY" as the module name. For example, the following command will change the console logging level of every module to DBG. + + $ sudo ovs-appctl vlog/set ANY:console:dbg + +![](https://farm1.staticflickr.com/351/19734939828_8c7f59e404_c.jpg) + +Also, if you want to change the logging level of all three facilities at once, you can specify "ANY" as the facility name. For example, the following command will change the logging level of all facilities for every module to DBG. 
+ + $ sudo ovs-appctl vlog/set ANY:ANY:dbg + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/enable-logging-open-vswitch.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md new file mode 100644 index 0000000000..11d814d8f4 --- /dev/null +++ b/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md @@ -0,0 +1,49 @@ +Linux FAQs with Answers--How to fix “ImportError: No module named wxversion” on Linux +================================================================================ +> **Question:** I was trying to run a Python application on [insert your Linux distro], but I got an error "ImportError: No module named wxversion." How can I solve this error in the Python program? + + Looking for python... 2.7.9 - Traceback (most recent call last): + File "/home/dev/playonlinux/python/check_python.py", line 1, in + import os, wxversion + ImportError: No module named wxversion + failed tests + +This error indicates that your Python application is GUI-based, relying on a missing Python module called wxPython. [wxPython][1] is a Python extension module for the wxWidgets GUI library, popularly used by C++ programmers to design GUI applications. The wxPython extension allows Python developers to easily design and integrate GUI within any Python application. + +To solve this import error, you need to install wxPython on your Linux, as described below. + +### Install wxPython on Debian, Ubuntu or Linux Mint ### + + $ sudo apt-get install python-wxgtk2.8 + +### Install wxPython on Fedora ### + + $ sudo yum install wxPython + +### Install wxPython on CentOS/RHEL ### + +wxPython is available on the EPEL repository of CentOS/RHEL, not on base repositories. Thus, first [enable EPEL repository][2] on your system, and then use yum command. 
+
+    $ sudo yum install wxPython
+
+### Install wxPython on Arch Linux ###
+
+    $ sudo pacman -S wxpython
+
+### Install wxPython on Gentoo ###
+
+    $ emerge wxPython
+
+--------------------------------------------------------------------------------
+
+via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html
+
+作者:[Dan Nanni][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://ask.xmodulo.com/author/nanni
+[1]:http://wxpython.org/
+[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
\ No newline at end of file
diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md
new file mode 100644
index 0000000000..c5c34f3a72
--- /dev/null
+++ b/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md
@@ -0,0 +1,72 @@
+Linux FAQs with Answers--How to install git on Linux
+================================================================================
+> **Question:** I am trying to clone a project from a public Git repository, but I am getting a "git: command not found" error. How can I install git on [insert your Linux distro]?
+
+Git is a popular open-source version control system (VCS) originally developed for the Linux environment. Contrary to other VCS tools like CVS or SVN, Git's revision control is considered "distributed" in the sense that your local Git working directory can function as a fully-working repository with complete history and version-tracking capabilities. In this model, each collaborator commits to his or her local repository (as opposed to always committing to a central repository), and optionally pushes to a centralized repository if need be. This brings scalability and redundancy to the revision control system, which is a must in any kind of large-scale collaboration.
+
+![](https://farm1.staticflickr.com/341/19433194168_c79d4570aa_b.jpg)
+
+### Install Git with a Package Manager ###
+
+Git is shipped with all major Linux distributions. Thus the easiest way to install Git is by using your Linux distro's package manager.
+
+**Debian, Ubuntu, or Linux Mint**
+
+    $ sudo apt-get install git
+
+**Fedora, CentOS or RHEL**
+
+    $ sudo yum install git
+
+**Arch Linux**
+
+    $ sudo pacman -S git
+
+**OpenSUSE**
+
+    $ sudo zypper install git
+
+**Gentoo**
+
+    $ emerge --ask --verbose dev-vcs/git
+
+### Install Git from the Source ###
+
+If for whatever reason you want to build Git from the source, you can follow the instructions below.
+
+**Install Dependencies**
+
+Before building Git, first install its dependencies.
+
+**Debian, Ubuntu or Linux Mint**
+
+    $ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x
+
+**Fedora, CentOS or RHEL**
+
+    $ sudo yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel asciidoc xmlto docbook2x
+
+#### Compile Git from the Source ####
+
+Download the latest release of Git from [https://github.com/git/git/releases][1]. Then build and install Git under /usr as follows.
+
+Note that if you want to install it under a different directory (e.g., /opt), replace "--prefix=/usr" in the configure command with something else.
+ + $ cd git-x.x.x + $ make configure + $ ./configure --prefix=/usr + $ make all doc info + $ sudo make install install-doc install-html install-info + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/install-git-linux.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:https://github.com/git/git/releases \ No newline at end of file From c709912c2c77d4f618418743de6b1eae007c76f6 Mon Sep 17 00:00:00 2001 From: KS Date: Thu, 6 Aug 2015 16:11:10 +0800 Subject: [PATCH 127/207] Update 20150803 Managing Linux Logs.md --- sources/tech/20150803 Managing Linux Logs.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150803 Managing Linux Logs.md b/sources/tech/20150803 Managing Linux Logs.md index d68adddf52..e317a63253 100644 --- a/sources/tech/20150803 Managing Linux Logs.md +++ b/sources/tech/20150803 Managing Linux Logs.md @@ -1,3 +1,4 @@ +wyangsun translating Managing Linux Logs ================================================================================ A key best practice for logging is to centralize or aggregate your logs in one place, especially if you have multiple servers or tiers in your architecture. We’ll tell you why this is a good idea and give tips on how to do it easily. @@ -415,4 +416,4 @@ via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/ [19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html [20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/ [21]:https://github.com/progrium/logspout -[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ \ No newline at end of file +[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ From 5a078ed4770c650a2fdc78eab7798b37ae4a8df4 Mon Sep 17 00:00:00 2001 From: mousycoder Date: Thu, 6 Aug 2015 18:54:15 +0800 Subject: [PATCH 128/207] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E5=AE=8C=E6=88=90?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ware Developer is a Great Career Choice.md | 51 ----------- ...ware Developer is a Great Career Choice.md | 91 +++++++++++++++++++ 2 files changed, 91 insertions(+), 51 deletions(-) delete mode 100644 sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md create mode 100644 translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md diff --git a/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md deleted file mode 100644 index d24aa83983..0000000000 --- a/sources/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md +++ /dev/null @@ -1,51 +0,0 @@ -Translating by MousyCoder -5 Reasons Why Software Developer is a Great Career Choice -================================================================================ -This week I will give a presentation at a local high school on what it is like to work as a programmer. I am volunteering (through the organization [Transfer][1]) to come to schools and talk about what I work with. This school will have a technology theme day this week, and would like to hear what working in the technology sector is like. 
Since I develop software, that’s what I will talk about. One section will be on why I think a career in software development is great. The main reasons are: - -### 5 Reasons ### - -**1 Creative**. If you ask people to name creative jobs, chances are they will say things like writer, musician or painter. But few people know that software development is also very creative. It is almost by definition creative, since you create new functionality that didn’t exist before. The solutions can be expressed in many ways, both structurally and in the details. Often there are trade-offs to make (for example speed versus memory consumption). And of course the solution has to be correct. All this requires creativity. - -**2 Collaborative**. Another myth is that programmers sit alone at their computers and code all day. But software development is in fact almost always a team effort. You discuss programming problems and solutions with your colleagues, and discuss requirements and other issues with product managers, testers and customers. It is also telling that pair-programming (two developers programming together on one computer) is a popular practice. - -**3 In demand**. More and more in the world is using software, or as Marc Andreessen put it: “[Software is Eating the World][2]“. Even as there are more programmers (in Stockholm, programmer is now the [most common occupation][3]), demand is still outpacing supply. Software companies report that one of their greatest challenges is [finding good developers][4]. I regularly get contacted by recruiters trying to get me to change jobs. I don’t know of many other professions where employers compete for you like that. - -**4 Pays well**. Developing software can create a lot of value. There is no marginal cost to selling one extra copy of software you have already developed. This combined with the high demand for developers means that pay is quite good. There are of course occupations where you make more money, but compared to the general population, I think software developers are paid quite well. - -**5 Future proof**. Many jobs disappear, often because they can be replaced by computers and software. But all those new programs still need to be developed and maintained, so the outlook for programmers is quite good. - -### But… ### - -**What about outsourcing?** Won’t all software development be outsourced to countries where the salaries are much lower? This is an example of an idea that is better in theory than in practice (much like the [waterfall development methodology][5]). Software development is a discovery activity as much as a design activity. It benefits greatly from intense collaboration. Furthermore, especially when the main product is software, the knowledge gained when developing it is a competitive advantage. The easier that knowledge is shared within the whole company, the better it is. - -Another way to look at it is this. Outsourcing of software development has existed for quite a while now. Yet there is still high demand for local developers. So companies see benefits of hiring local developers that outweigh the higher costs. - -### How to Win ### - -There are many reasons why I think developing software is enjoyable (see also [Why I Love Coding][6]). But it is not for everybody. Fortunately it is quite easy to try programming out. There are innumerable resources on the web for learning to program. For example, both [Coursera][7] and [Udacity][8] have introductory courses. 
If you have never programmed, try one of the free courses or tutorials to get a feel for it. - -Finding something you really enjoy to do for a living has at least two benefits. First, since you do it every day, work will be much more fun than if you simply do something to make money. Second, if you really like it, you have a much better chance of getting good at it. I like the Venn diagram below (by [@eskimon][9]) on what constitutes a great job. Since programming pays relatively well, I think that if you like it, you have a good chance of ending up in the center of the diagram! - -![](https://henrikwarne1.files.wordpress.com/2014/12/career-planning.png) - --------------------------------------------------------------------------------- - -via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/ - -作者:[Henrik Warne][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://henrikwarne.com/ -[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23 -[2]:http://online.wsj.com/articles/SB10001424053111903480904576512250915629460 -[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/ -[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov -[5]:http://en.wikipedia.org/wiki/Waterfall_model -[6]:http://henrikwarne.com/2012/06/02/why-i-love-coding/ -[7]:https://www.coursera.org/ -[8]:https://www.udacity.com/ -[9]:https://eskimon.wordpress.com/about/ \ No newline at end of file diff --git a/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md new file mode 100644 index 0000000000..a592cb595e --- /dev/null +++ b/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md @@ -0,0 +1,91 @@ +选择软件开发攻城师的5个原因 +================================================================================ +这个星期我将给本地一所高中做一次有关于程序猿是怎样工作的演讲,我是 [Transfer][1] 组织推荐来到这所学校,谈论我的工作。这个学校本周将有一个技术主题日,并且他们很想听听科技行业是怎样工作的。因为我是从事软件开发的,这也是我将和学生们讲的内容。我为什么觉得软件开发是一个很酷的职业将是演讲的其中一部分。主要原因如下: + +### 5个原因 ### + +**1 创造性** + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/34042817.jpg) + +如果你问别人创造性的工作有哪些,别人通常会说像作家,音乐家或者画家那样的(工作)。但是极少有人知道软件开发也是一项非常具有创造性的工作。因为你创造了一个以前没有的新功能,这样的功能基本上可以被定义为非常具有创造性。这种解决方案可以在整体和细节上以很多形式来展现。我们经常会遇到一些需要做权衡的场景(比如说运行速度与内存消耗的权衡)。当然前提是这种解决方案必须是正确的。其实这些所有的行为都是需要强大的创造性的。 + +**2 协作性** + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/94579377.jpg) + +另外一个表象是程序猿们独自坐在他们的电脑前,然后撸一天的代码。但是软件开发事实上通常总是一个团队努力的结果。你会经常和你的同事讨论编程问题以及解决方案,并且和产品经理,测试人员,客户讨论需求以及其他问题。 +经常有人说,结对编程(2个开发人员一起在一个电脑上编程)是一种流行的最佳实践。 + + +**3 高需性** + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/26662164.jpg) + +世界上越来越多的人在用软件,正如 [Marc Andreessen](https://en.wikipedia.org/wiki/Marc_Andreessen) 所说 " [软件正在吞噬世界][2] "。虽然程序猿现在的数量非常巨大(在斯德哥尔摩,程序猿现在是 [最普遍的职业][3] ),但是,需求量一直处于供不应求的局面。据软件公司报告,他们最大的挑战之一就是 [找到优秀的程序猿][4] 。我也经常接到那些想让我跳槽的招聘人员打来的电话。我知道至少除软件行业之外的其他行业的雇主不会那么拼(的去招聘)。 + +**4 高酬性** + + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/50538928.jpg) + +软件开发可以带来不菲的收入。卖一份你已经开发好的软件的额外副本是没有 [边际成本][5] 的。这个事实与对程序猿的高需求意味着收入相当可观。当然还有许多更捞金的职业,但是相比一般人群,我认为软件开发者确实“日进斗金”(知足吧!骚年~~)。 + +**5 前瞻性** + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/89799239.jpg) + +有许多工作岗位消失,往往是由于它们可以被计算机和软件代替。但是所有这些新的程序依然需要开发和维护,因此,程序猿的前景还是相当好的。 + + +### 但是...### + +**外包又是怎么一回事呢?** + 
+![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/41615753.jpg) + +难道所有外包到其他地区的软件开发的薪水都很低吗?这是一个理想丰满,现实骨感的例子(有点像 [瀑布开发模型][6] )。软件开发基本上跟设计的工作一样,是一个探索发现的工作。它受益于强有力的合作。更进一步说,特别当你的主打产品是软件的时候,你所掌握的开发知识是绝对的优势。知识在整个公司中分享的越容易,那么公司的发展也将越来越好。 + + +换一种方式去看待这个问题。软件外包已经存在了相当一段时间了。但是对本土程序猿的需求量依旧非常高。因为许多软件公司看到了雇佣本土程序猿的带来的收益要远远超过了相对较高的成本(其实还是赚了)。 + +### 如何成为人生大赢家 ### + + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/44219908.jpg) + +虽然我有许多我认为软件开发是一件非常有趣的事情的理由 (详情见: [为什么我热爱编程][7] )。但是这些理由,并不适用于所有人。幸运的是,尝试编程是一件非常容易的事情。在互联网上有数不尽的学习编程的资源。例如,[Coursera][8] 和 [Udacity][9] 都拥有很好的入门课程。如果你从来没有撸过码,可以尝试其中一个免费的课程,找找感觉。 + +寻找一个既热爱又能谋生的事情至少有2个好处。首先,由于你天天去做,工作将比你简单的只为谋生要有趣的多。其次,如果你真的非常喜欢,你将更好的擅长它。我非常喜欢下面一副关于伟大工作组成的韦恩图(作者 [@eskimon)][10] 。因为编码的薪水确实相当不错,我认为如果你真的喜欢它,你将有一个很好的机会,成为人生的大赢家! + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/17571624.jpg) + +-------------------------------------------------------------------------------- + +via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/ + +作者:[Henrik Warne][a] +译者:[mousycoder](https://github.com/mousycoder) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + + +[a]:http://henrikwarne.com/ +[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23 +[2]:http://www.wsj.com/articles/SB10001424053111903480904576512250915629460 +[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/ +[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov +[5]:https://en.wikipedia.org/wiki/Marginal_cost +[6]:https://en.wikipedia.org/wiki/Waterfall_model +[7]:http://henrikwarne.com/2012/06/02/why-i-love-coding/ +[8]:https://www.coursera.org/ +[9]:https://www.udacity.com/ +[10]:https://eskimon.wordpress.com/about/ + + + + + + + From a37ff025ebb8b740d71d06ed1e6178a5ab4ccc3c Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 7 Aug 2015 00:06:17 +0800 Subject: [PATCH 129/207] PUB:20150717 How to monitor NGINX- Part 1 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @strugglingyouth 好长,翻译辛苦了。其中有些字句翻译不对,比如 upstreaming 是指上游(服务器),并不专指负载均衡环境。 --- .../20150717 How to monitor NGINX- Part 1.md | 231 ++++++++++ .../20150717 How to monitor NGINX- Part 1.md | 416 ------------------ 2 files changed, 231 insertions(+), 416 deletions(-) create mode 100644 published/20150717 How to monitor NGINX- Part 1.md delete mode 100644 translated/tech/20150717 How to monitor NGINX- Part 1.md diff --git a/published/20150717 How to monitor NGINX- Part 1.md b/published/20150717 How to monitor NGINX- Part 1.md new file mode 100644 index 0000000000..908aa7448e --- /dev/null +++ b/published/20150717 How to monitor NGINX- Part 1.md @@ -0,0 +1,231 @@ +如何监控 NGINX(第一篇) +================================================================================ +![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) + +### NGINX 是什么? 
### + +[NGINX][1] (发音为 “engine X”) 是一种流行的 HTTP 和反向代理服务器。作为一个 HTTP 服务器,NGINX 可以使用较少的内存非常高效可靠地提供静态内容。作为[反向代理][2],它可以用作多个后端服务器或类似缓存和负载平衡这样的其它应用的单一访问控制点。NGINX 是一个自由开源的产品,并有一个具备更全的功能的叫做 NGINX Plus 的商业版。 + +NGINX 也可以用作邮件代理和通用的 TCP 代理,但本文并不直接讨论 NGINX 的那些用例的监控。 + +### NGINX 主要指标 ### + +通过监控 NGINX 可以 捕获到两类问题:NGINX 本身的资源问题,和出现在你的基础网络设施的其它问题。大多数 NGINX 用户会用到以下指标的监控,包括**每秒请求数**,它提供了一个由所有最终用户活动组成的上层视图;**服务器错误率** ,这表明你的服务器已经多长没有处理看似有效的请求;还有**请求处理时间**,这说明你的服务器处理客户端请求的总共时长(并且可以看出性能降低或当前环境的其他问题)。 + +更一般地,至少有三个主要的指标类别来监视: + +- 基本活动指标 +- 错误指标 +- 性能指标 + +下面我们将分析在每个类别中最重要的 NGINX 指标,以及用一个相当普遍但是值得特别提到的案例来说明:使用 NGINX Plus 作反向代理。我们还将介绍如何使用图形工具或可选择的监控工具来监控所有的指标。 + +本文引用指标术语[来自我们的“监控 101 系列”][3],,它提供了一个指标收集和警告框架。 + +#### 基本活跃指标 #### + +无论你在怎样的情况下使用 NGINX,毫无疑问你要监视服务器接收多少客户端请求和如何处理这些请求。 + +NGINX Plus 上像开源 NGINX 一样可以报告基本活跃指标,但它也提供了略有不同的辅助模块。我们首先讨论开源的 NGINX,再来说明 NGINX Plus 提供的其他指标的功能。 + +**NGINX** + +下图显示了一个客户端连接的过程,以及开源版本的 NGINX 如何在连接过程中收集指标。 + +![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png) + +Accepts(接受)、Handled(已处理)、Requests(请求)是一直在增加的计数器。Active(活跃)、Waiting(等待)、Reading(读)、Writing(写)随着请求量而增减。 + +| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| +|-----------|-----------------|-------------------------------------------------------------------------------------------------------------------------| +| Accepts | NGINX 所接受的客户端连接数 | 资源: 功能 | +| Handled | 成功的客户端连接数 | 资源: 功能 | +| Active | 当前活跃的客户端连接数| 资源: 功能 | +| Dropped(已丢弃,计算得出)| 丢弃的连接数(接受 - 已处理)| 工作:错误*| +| Requests | 客户端请求数 | 工作:吞吐量 | + + +_*严格的来说,丢弃的连接是 [一个资源饱和指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics),但是因为饱和会导致 NGINX 停止服务(而不是延后该请求),所以,“已丢弃”视作 [一个工作指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics) 比较合适。_ + +NGINX worker 进程接受 OS 的连接请求时 **Accepts** 计数器增加,而**Handled** 是当实际的请求得到连接时(通过建立一个新的连接或重新使用一个空闲的)。这两个计数器的值通常都是相同的,如果它们有差别则表明连接被**Dropped**,往往这是由于资源限制,比如已经达到 NGINX 的[worker_connections][4]的限制。 + +一旦 NGINX 成功处理一个连接时,连接会移动到**Active**状态,在这里对客户端请求进行处理: + +Active状态 + +- **Waiting**: 活跃的连接也可以处于 Waiting 子状态,如果有在此刻没有活跃请求的话。新连接可以绕过这个状态并直接变为到 Reading 状态,最常见的是在使用“accept filter(接受过滤器)” 和 “deferred accept(延迟接受)”时,在这种情况下,NGINX 不会接收 worker 进程的通知,直到它具有足够的数据才开始响应。如果连接设置为 keep-alive ,那么它在发送响应后将处于等待状态。 + +- **Reading**: 当接收到请求时,连接离开 Waiting 状态,并且该请求本身使 Reading 状态计数增加。在这种状态下 NGINX 会读取客户端请求首部。请求首部是比较小的,因此这通常是一个快速的操作。 + +- **Writing**: 请求被读取之后,其使 Writing 状态计数增加,并保持在该状态,直到响应返回给客户端。这意味着,该请求在 Writing 状态时, 一方面 NGINX 等待来自上游系统的结果(系统放在 NGINX “后面”),另外一方面,NGINX 也在同时响应。请求往往会在 Writing 状态花费大量的时间。 + +通常,一个连接在同一时间只接受一个请求。在这种情况下,Active 连接的数目 == Waiting 的连接 + Reading 请求 + Writing 。然而,较新的 SPDY 和 HTTP/2 协议允许多个并发请求/响应复用一个连接,所以 Active 可小于 Waiting 的连接、 Reading 请求、Writing 请求的总和。 (在撰写本文时,NGINX 不支持 HTTP/2,但预计到2015年期间将会支持。) + +**NGINX Plus** + +正如上面提到的,所有开源 NGINX 的指标在 NGINX Plus 中是可用的,但另外也提供其他的指标。本节仅说明了 NGINX Plus 可用的指标。 + + +![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png) + +Accepted (已接受)、Dropped,总数是不断增加的计数器。Active、 Idle(空闲)和处于 Current(当前)处理阶段的各种状态下的连接或请​​求的当前数量随着请求量而增减。 + +| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| +|-----------|-----------------|-------------------------------------------------------------------------------------------------------------------------| +| Accepted | NGINX 所接受的客户端连接数 | 资源: 功能 | +| Dropped |丢弃的连接数(接受 - 已处理)| 工作:错误*| +| Active | 当前活跃的客户端连接数| 资源: 功能 | +| Idle | 没有当前请求的客户端连接| 资源: 功能 | 
+| Total(全部) | 客户端请求数 | 工作:吞吐量 | + +_*严格的来说,丢弃的连接是 [一个资源饱和指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#resource-metrics),但是因为饱和会导致 NGINX 停止服务(而不是延后该请求),所以,“已丢弃”视作 [一个工作指标](https://www.datadoghq.com/blog/monitoring-101-collecting-data/#work-metrics) 比较合适。_ + +当 NGINX Plus worker 进程接受 OS 的连接请求时 **Accepted** 计数器递增。如果 worker 进程为请求建立连接失败(通过建立一个新的连接或重新使用一个空闲),则该连接被丢弃, **Dropped** 计数增加。通常连接被丢弃是因为资源限制,如 NGINX Plus 的[worker_connections][4]的限制已经达到。 + +**Active** 和 **Idle** 和[如上所述][5]的开源 NGINX 的“active” 和 “waiting”状态是相同的,但是有一点关键的不同:在开源 NGINX 上,“waiting”状态包括在“active”中,而在 NGINX Plus 上“idle”的连接被排除在“active” 计数外。**Current** 和开源 NGINX 是一样的也是由“reading + writing” 状态组成。 + +**Total** 为客户端请求的累积计数。请注意,单个客户端连接可涉及多个请求,所以这个数字可能会比连接的累计次数明显大。事实上,(total / accepted)是每个连接的平均请求数量。 + +**开源 和 Plus 之间指标的不同** + +|NGINX (开源) |NGINX Plus| +|-----------------------|----------------| +| accepts | accepted | +| dropped 通过计算得来| dropped 直接得到 | +| reading + writing| current| +| waiting| idle| +| active (包括 “waiting”状态) | active (排除 “idle” 状态)| +| requests| total| + +**提醒指标: 丢弃连接** + +被丢弃的连接数目等于 Accepts 和 Handled 之差(NGINX 中),或是可直接得到标准指标(NGINX Plus 中)。在正常情况下,丢弃连接数应该是零。如果在每个单位时间内丢弃连接的速度开始上升,那么应该看看是否资源饱和了。 + +![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png) + +**提醒指标: 每秒请求数** + +按固定时间间隔采样你的请求数据(开源 NGINX 的**requests**或者 NGINX Plus 中**total**) 会提供给你单位时间内(通常是分钟或秒)所接受的请求数量。监测这个指标可以查看进入的 Web 流量尖峰,无论是合法的还是恶意的,或者突然的下降,这通常都代表着出现了问题。每秒请求数若发生急剧变化可以提醒你的环境出现问题了,即使它不能告诉你确切问题的位置所在。请注意,所有的请求都同样计数,无论 URL 是什么。 + +![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png) + +**收集活跃指标** + +开源的 NGINX 提供了一个简单状态页面来显示基本的服务器指标。该状态信息以标准格式显示,实际上任何图形或监控工具可以被配置去解析这些相关数据,以用于分析、可视化、或提醒。NGINX Plus 提供一个 JSON 接口来供给更多的数据。阅读相关文章“[NGINX 指标收集][6]”来启用指标收集的功能。 + +#### 错误指标 #### + +| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 | +|-----------|-----------------|--------------------------------------------------------------------------------------------------------|----------------| +| 4xx 代码 | 客户端错误计数 | 工作:错误 | NGINX 日志, NGINX Plus| +| 5xx 代码| 服务器端错误计数 | 工作:错误 | NGINX 日志, NGINX Plus| + +NGINX 错误指标告诉你服务器是否经常返回错误而不是正常工作。客户端错误返回4XX状态码,服务器端错误返回5XX状态码。 + +**提醒指标: 服务器错误率** + +服务器错误率等于在单位时间(通常为一到五分钟)内5xx错误状态代码的总数除以[状态码][7](1XX,2XX,3XX,4XX,5XX)的总数。如果你的错误率随着时间的推移开始攀升,调查可能的原因。如果突然增加,可能需要采取紧急行动,因为客户端可能收到错误信息。 + +![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png) + +关于客户端错误的注意事项:虽然监控4XX是很有用的,但从该指标中你仅可以捕捉有限的信息,因为它只是衡量客户的行为而不捕捉任何特殊的 URL。换句话说,4xx出现的变化可能是一个信号,例如网络扫描器正在寻找你的网站漏洞时。 + +**收集错误度量** + +虽然开源 NGINX 不能马上得到用于监测的错误率,但至少有两种方法可以得到: + +- 使用商业支持的 NGINX Plus 提供的扩展状态模块 +- 配置 NGINX 的日志模块将响应码写入访问日志 + +关于这两种方法,请阅读相关文章“[NGINX 指标收集][6]”。 + +#### 性能指标 #### + +| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 | +|-----------|-----------------|--------------------------------------------------------------------------------------------------------|----------------| +| request time (请求处理时间)| 处理每个请求的时间,单位为秒 | 工作:性能 | NGINX 日志| + +**提醒指标: 请求处理时间** + +请求处理时间指标记录了 NGINX 处理每个请求的时间,从读到客户端的第一个请求字节到完成请求。较长的响应时间说明问题在上游。 + +**收集处理时间指标** + +NGINX 和 NGINX Plus 用户可以通过添加 $request_time 变量到访问日志格式中来捕​​捉处理时间数据。关于配置日志监控的更多细节在[NGINX指标收集][6]。 + +#### 反向代理指标 #### + +| 名称 | 描述| [指标类型](https://www.datadoghq.com/blog/monitoring-101-collecting-data/)| 可用于 | 
+|-----------|-----------------|--------------------------------------------------------------------------------------------------------|----------------| +| 上游服务器的活跃链接 | 当前活跃的客户端连接 | 资源:功能 | NGINX Plus | +| 上游服务器的 5xx 错误代码| 服务器错误 | 工作:错误 | NGINX Plus | +| 每个上游组的可用服务器 | 服务器传递健康检查 | 资源:可用性| NGINX Plus + +[反向代理][9]是 NGINX 最常见的使用方法之一。商业支持的 NGINX Plus 显示了大量有关后端(或“上游 upstream”)的服务器指标,这些与反向代理设置相关的。本节重点介绍了几个 NGINX Plus 用户可用的关键上游指标。 + +NGINX Plus 首先将它的上游指标按组分开,然后是针对单个服务器的。因此,例如,你的反向代理将请求分配到五个上游的 Web 服务器上,你可以一眼看出是否有单个服务器压力过大,也可以看出上游组中服务器的健康状况,以确保良好的响应时间。 + +**活跃指标** + +**每上游服务器的活跃连接**的数量可以帮助你确认反向代理是否正确的分配工作到你的整个服务器组上。如果你正在使用 NGINX 作为负载均衡器,任何一台服务器处理的连接数的明显偏差都可能表明服务器正在努力消化请求,或者是你配置使用的负载均衡的方法(例如[round-robin 或 IP hashing][10])不是最适合你流量模式的。 + +**错误指标** + +错误指标,上面所说的高于5XX(服务器错误)状态码,是监控指标中有价值的一个,尤其是响应码部分。 NGINX Plus 允许你轻松地提取**每个上游服务器的 5xx 错误代码**的数量,以及响应的总数量,以此来确定某个特定服务器的错误率。 + +**可用性指标** + +对于 web 服务器的运行状况,还有另一种角度,NGINX 可以通过**每个组中当前可用服务器的总量**很方便监控你的上游组的健康。在一个大的反向代理上,你可能不会非常关心其中一个服务器的当前状态,就像你只要有可用的服务器组能够处理当前的负载就行了。但监视上游组内的所有工作的服务器总量可为判断 Web 服务器的健康状况提供一个更高层面的视角。 + +**收集上游指标** + +NGINX Plus 上游指标显示在内部 NGINX Plus 的监控仪表盘上,并且也可通过一个JSON 接口来服务于各种外部监控平台。在我们的相关文章“[NGINX指标收集][6]”中有个例子。 + +### 结论 ### + +在这篇文章中,我们已经谈到了一些有用的指标,你可以使用表格来监控 NGINX 服务器。如果你是刚开始使用 NGINX,监控下面提供的大部分或全部指标,可以让你很好的了解你的网络基础设施的健康和活跃程度: + +- [已丢弃的连接][12] +- [每秒请求数][13] +- [服务器错误率][14] +- [请求处理数据][15] + +最终,你会学到更多,更专业的衡量指标,尤其是关于你自己基础设施和使用情况的。当然,监控哪一项指标将取决于你可用的工具。参见相关的文章来[逐步指导你的指标收集][6],不管你使用 NGINX 还是 NGINX Plus。 + +在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,这样你就可以以最少的设置来收集和监控所有 Web 服务器的指标。 [在本文中][17]了解如何用 NGINX Datadog来监控,并开始[免费试用 Datadog][18]吧。 + +### 诚谢 ### + +在文章发表之前非常感谢 NGINX 团队审阅这篇,并提供重要的反馈和说明。 + + +-------------------------------------------------------------------------------- + +via: https://www.datadoghq.com/blog/how-to-monitor-nginx/ + +作者:K Young +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://nginx.org/en/ +[2]:http://nginx.com/resources/glossary/reverse-proxy-server/ +[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/ +[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections +[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state +[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html +[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[9]:https://en.wikipedia.org/wiki/Reverse_proxy +[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/ +[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections +[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second +[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate +[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time +[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ +[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ +[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up +[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md +[20]:https://github.com/DataDog/the-monitor/issues diff --git a/translated/tech/20150717 How to monitor NGINX- Part 1.md b/translated/tech/20150717 How to monitor NGINX- Part 1.md deleted file mode 100644 index 86e72c0324..0000000000 --- a/translated/tech/20150717 
How to monitor NGINX- Part 1.md +++ /dev/null @@ -1,416 +0,0 @@ -如何监控 NGINX - 第1部分 -================================================================================ -![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_1.png) - -### NGINX 是什么? ### - -[NGINX][1] (发音为 “engine X”) 是一种流行的 HTTP 和反向代理服务器。作为一个 HTTP 服务器,NGINX 提供静态内容非常高效可靠,使用较少的内存。作为[反向代理][2],它可以用作一个单一的控制器来为其他应用代理至后端的多个服务器上,如高速缓存和负载平衡。NGINX 是作为一个免费,开源的产品并有更全的功能,商业版的叫 NGINX Plus。 - -NGINX 也可以用作邮件代理和通用的 TCP 代理,但本文并不直接说明对 NGINX 的这些用例做监控。 - -### NGINX 主要指标 ### - -通过监控 NGINX 可以捕捉两类问题:NGINX 本身的资源问题,也有很多问题会出现在你的基础网络设施处。大多数 NGINX 用户受益于以下指标的监控,包括**requests per second**,它提供了一个所有用户活动的高级视图;**server error rate** ,这表明你的服务器已经多长没有处理看似有效的请求;还有**request processing time**,这说明你的服务器处理客户端请求的总共时长(并且可以看出性能降低时或当前环境的其他问题)。 - -更一般地,至少有三个主要的指标类别来监视: - -- 基本活动指标 -- 错误指标 -- 性能指标 - -下面我们将分析在每个类别中最重要的 NGINX 指标,以及用一个相当普遍的案例来说明,值得特别说明的是:使用 NGINX Plus 作反向代理。我们还将介绍如何使用图形工具或可选择的监控工具来监控所有的指标。 - -本文引用指标术语[介绍我们的监控在 101 系列][3],,它提供了指标收集和警告框架。 - -#### 基本活动指标 #### - -无论你在怎样的情况下使用 NGINX,毫无疑问你要监视服务器接收多少客户端请求和如何处理这些请求。 - -NGINX Plus 上像开源 NGINX 一样可以报告基本活动指标,但它也提供了略有不同的辅助模块。我们首先讨论开源的 NGINX,再来说明 NGINX Plus 提供的其他指标的功能。 - -**NGINX** - -下图显示了一个客户端连接,以及如何在连接过程中收集指标的活动周期在开源 NGINX 版本上。 - -![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_connection_diagram-2.png) - -接受,处理,增加请求的计数器。主动,等待,读,写增加和减少请求量。 - -注:表格 - ---- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-| Name | Description | Metric type |
-|------|-------------|-------------|
-| accepts | Count of client connections attempted by NGINX | Resource: Utilization |
-| handled | Count of successful client connections | Resource: Utilization |
-| active | Currently active client connections | Resource: Utilization |
-| dropped (calculated) | Count of dropped connections (accepts – handled) | Work: Errors* |
-| requests | Count of client requests | Work: Throughput |
-
-*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
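(An illustrative aside, not part of the original article: to make the accepts/handled/dropped relationship in the table above concrete, here is a minimal shell sketch that parses the stub_status page. It assumes the status page is already enabled and exposed at http://localhost/nginx_status, and that the third line of the page carries the three cumulative counters; both are assumptions about a typical setup rather than facts stated in this article.)

    status=$(curl -s http://localhost/nginx_status)
    # third line of the stub_status page: "<accepts> <handled> <requests>"
    accepts=$(echo "$status" | awk 'NR==3 {print $1}')
    handled=$(echo "$status" | awk 'NR==3 {print $2}')
    # dropped is not reported directly; it is derived as accepts - handled
    echo "dropped connections so far: $((accepts - handled))"

Sampling this derived value at a fixed interval is what turns it into the alertable rate the article describes.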
- -NGINX 进程接受 OS 的连接请求时**accepts** 计数器增加,而**handled** 是当实际的请求得到连接时(通过建立一个新的连接或重新使用一个空闲的)。这两个计数器的值通常都是相同的,表明连接正在被**dropped**,往往由于资源限制,如 NGINX 的[worker_connections][4]的限制已经达到。 - -一旦 NGINX 成功处理一个连接时,连接会移动到**active**状态,然后保持为客户端请求进行处理: - -Active 状态 - -- **Waiting**: 活动的连接也可以是一个 Waiting 子状态,如果有在此刻没有活动请求。新连接绕过这个状态并直接移动到读,最常见的是使用“accept filter” 和 “deferred accept”,在这种情况下,NGINX 不会接收进程的通知,直到它具有足够的数据来开始响应工作。如果连接设置为 keep-alive ,连接在发送响应后将处于等待状态。 - -- **Reading**: 当接收到请求时,连接移出等待状态,并且该请求本身也被视为 Reading。在这种状态下NGINX 正在读取客户端请求首部。请求首部是比较少的,因此这通常是一个快速的操作。 - -- **Writing**: 请求被读取之后,将其计为 Writing,并保持在该状态,直到响应返回给客户端。这意味着,该请求在 Writing 时, NGINX 同时等待来自负载均衡服务器的结果(系统“背后”的 NGINX),NGINX 也同时响应。请求往往会花费大量的时间在 Writing 状态。 - -通常,一个连接在同一时间只接受一个请求。在这种情况下,Active 连接的数目 == Waiting 连接 + Reading 请求 + Writing 请求。然而,较新的 SPDY 和 HTTP/2 协议允许多个并发请求/响应对被复用的连接,所以 Active 可小于 Waiting,Reading,Writing 的总和。 (在撰写本文时,NGINX 不支持 HTTP/2,但预计到2015年期间将会支持。) - -**NGINX Plus** - -正如上面提到的,所有开源 NGINX 的指标在 NGINX Plus 中是可用的,但另外也提供其他的指标。本节仅说明了 NGINX Plus 可用的指标。 - - -![connection, request states](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_plus_connection_diagram-2.png) - -接受,中断,总数是不断增加的。活动,空闲和已建立连接的,当前状态下每一个连接或请​​求的数量是随着请求量增加和收缩的。 - -注:表格 - ---- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameDescriptionMetric type
acceptedCount of client connections attempted by NGINXResource: Utilization
droppedCount of dropped connectionsWork: Errors*
activeCurrently active client connectionsResource: Utilization
idleClient connections with zero current requestsResource: Utilization
totalCount of client requestsWork: Throughput
*Strictly speaking, dropped connections is a metric of resource saturation, but since saturation causes NGINX to stop servicing some work (rather than queuing it up for later), “dropped” is best thought of as a work metric.
- -当 NGINX Plus 进程接受 OS 的连接请求时 **accepted** 计数器递增。如果进程请求连接失败(通过建立一个新的连接或重新使用一个空闲),则该连接断开 **dropped** 计数增加。通常连接被中断是因为资源限制,如 NGINX Plus 的[worker_connections][4]的限制已经达到。 - -**Active** 和 **idle** 和开源 NGINX 的“active” 和 “waiting”状态是相同的,[如上所述][5],有一个不同的地方:在开源 NGINX 上,“waiting”状态包括在“active”中,而在 NGINX Plus 上“idle”的连接被排除在“active” 计数外。**Current** 和开源 NGINX 是一样的也是由“reading + writing” 状态组成。 - - -**Total** 为客户端请求的累积计数。请注意,单个客户端连接可涉及多个请求,所以这个数字可能会比连接的累计次数明显大。事实上,(total / accepted)是每个连接请求的平均数量。 - -**开源 和 Plus 之间指标的不同** - -注:表格 - --- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NGINX (open-source)NGINX Plus
acceptsaccepted
dropped must be calculateddropped is reported directly
reading + writingcurrent
waitingidle
active (includes “waiting” states)active (excludes “idle” states)
requeststotal
- -**提醒指标: 中断连接** - -被中断的连接数目等于接受和处理之差(NGINX),或被公开直接作为指标的标准(NGINX加)。在正常情况下,中断连接数应该是零。如果每秒中中断连接的速度开始上升,寻找资源可能用尽的地方。 - -![Dropped connections](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/dropped_connections.png) - -**提醒指标: 每秒请求数** - -提供你(开源中的**requests**或者 Plus 中**total**)固定时间间隔每秒或每分钟请求的平均数据。监测这个指标可以查看 Web 的输入流量的最大值,无论是合法的还是恶意的,有可能会突然下降,通常可以看出问题。每秒的请求若发生急剧变化可以提醒你出问题了,即使它不能告诉你确切问题的位置所在。请注意,所有的请求都算作是相同的,无论哪个 URLs。 - -![Requests per second](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/requests_per_sec.png) - -**收集活动指标** - -开源的 NGINX 提供了一个简单状态页面来显示基本的服务器指标。该状态信息以标准格式被显示,实际上任何图形或监控工具可以被配置去解析相关的数据为分析,可视化,或提醒而用。NGINX Plus 提供一个 JSON 接口来显示更多的数据。阅读[NGINX 指标收集][6]后来启用指标收集的功能。 - -#### 错误指标 #### - -注:表格 - ----- - - - - - - - - - - - - - - - - - - - - - - -
NameDescriptionMetric typeAvailability
4xx codesCount of client errorsWork: ErrorsNGINX logs, NGINX Plus
5xx codesCount of server errorsWork: ErrorsNGINX logs, NGINX Plus
- -NGINX 错误指标告诉你服务器经常返回哪些错误,这也是有用的。客户端错误返回4XX状态码,服务器端错误返回5XX状态码。 - -**提醒指标: 服务器错误率** - -服务器错误率等于5xx错误状态代码的总数除以[状态码][7](1XX,2XX,3XX,4XX,5XX)的总数,每单位时间(通常为一到五分钟)的数目。如果你的错误率随着时间的推移开始攀升,调查可能的原因。如果突然增加,可能需要采取紧急行动,因为客户端可能收到错误信息。 - -![Server error rate](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/5xx_rate.png) - -客户端收到错误时的注意事项:虽然监控4XX是很有用的,但从该指标中你仅可以捕捉有限的信息,因为它只是衡量客户的行为而不捕捉任何特殊的 URLs。换句话说,在4xx出现时只是相当于一点噪音,例如寻找漏洞的网络扫描仪。 - -**收集错误度量** - -虽然开源 NGINX 不会监测错误率,但至少有两种方法可以捕获其信息: - -- 使用商业支持的 NGINX Plus 提供的可扩展状态模块 -- 配置 NGINX 的日志模块将响应码写入访问日志 - -阅读关于 NGINX 指标收集的后两个方法的详细说明。 - -#### 性能指标 #### - -注:表格 - ----- - - - - - - - - - - - - - - - - -
NameDescriptionMetric typeAvailability
request timeTime to process each request, in secondsWork: PerformanceNGINX logs
- -**提醒指标: 请求处理时间** - -请求时间指标记录 NGINX 处理每个请求的时间,从第一个客户端的请求字节读出到完成请求。较长的响应时间可以将问题指向负载均衡服务器。 - -**收集处理时间指标** - -NGINX 和 NGINX Plus 用户可以通过添加 $request_time 变量到访问日志格式中来捕​​捉处理时间数据。关于配置日志监控的更多细节在[NGINX指标收集][8]。 - -#### 反向代理指标 #### - -注:表格 - ----- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameDescriptionMetric typeAvailability
Active connections by upstream serverCurrently active client connectionsResource: UtilizationNGINX Plus
5xx codes by upstream serverServer errorsWork: ErrorsNGINX Plus
Available servers per upstream groupServers passing health checksResource: AvailabilityNGINX Plus
- -[反向代理][9]是 NGINX 最常见的使用方法之一。商业支持的 NGINX Plus 显示了大量有关后端(或“负载均衡”)的服务器指标,这是反向代理设置的。本节重点介绍了几个关键的负载均衡服务器的指标为 NGINX Plus 用户。 - -NGINX Plus 的负载均衡服务器指标首先是组的,然后是单个服务器的。因此,例如,你的反向代理将请求分配到五个 Web 负载均衡服务器上,你可以一眼看出是否有单个服务器压力过大,也可以看出负载均衡服务器组的健康状况,以确保良好的响应时间。 - -**活动指标** - -**active connections per upstream server**的数量可以帮助你确认反向代理是否正确的分配工作到负载均衡服务器上。如果你正在使用 NGINX 作为负载均衡器,任何一台服务器处理的连接数有显著的偏差都可能表明服务器正在努力处理请求或你配置处理请求的负载均衡的方法(例如[round-robin or IP hashing][10])不是最适合你流量模式的。 - -**错误指标** - -错误指标,上面所说的高于5XX(服务器错误)状态码,是监控指标中有价值的一个,尤其是响应码部分。 NGINX Plus 允许你轻松地提取每个负载均衡服务器 **5xx codes per upstream server**的数量,以及响应的总数量,以此来确定该特定服务器的错误率。 - - -**可用性指标** - -对于 web 服务器的运行状况,另一种观点认为,NGINX 也可以很方便监控你的负载均衡服务器组的健康通过**servers currently available within each group**的总量​​。在一个大的反向代理上,你可能不会非常关心其中一个服务器的当前状态,就像你只要可用的服务器组能够处理当前的负载就行了。但监视负载均衡服务器组内的所有服务器可以提供一个高水平的图像来判断 Web 服务器的健康状况。 - -**收集负载均衡服务器的指标** - -NGINX Plus 负载均衡服务器的指标显示在内部 NGINX Plus 的监控仪表盘上,并且也可通过一个JSON 接口来服务于所有外部的监控平台。在这儿看一个例子[收集 NGINX 指标][11]。 - -### 结论 ### - -在这篇文章中,我们已经谈到了一些有用的指标,你可以使用表格来监控 NGINX 服务器。如果你是刚开始使用 NGINX,下面提供了良好的网络基础设施的健康和活动的可视化工具来监控大部分或所有的指标: - -- [Dropped connections][12] -- [Requests per second][13] -- [Server error rate][14] -- [Request processing time][15] - -最终,你会学到更多,更专业的衡量指标,尤其是关于你自己基础设施和使用情况的。当然,监控哪一项指标将取决于你可用的工具。参见[一步一步来说明指标收集][16],不管你使用 NGINX 还是 NGINX Plus。 - - - -在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,这样你就可以以最小的设置来收集和监控所有 Web 服务器的指标。了解如何用 NGINX Datadog来监控 [在本文中][17],并开始使用 [免费的 Datadog][18]。 - -### Acknowledgments ### - -在文章发表之前非常感谢 NGINX 团队审阅这篇,并提供重要的反馈和说明。 - ----------- - -文章来源在这儿 [on GitHub][19]。问题,更正,补充等?请[告诉我们][20]。 - - --------------------------------------------------------------------------------- - -via: https://www.datadoghq.com/blog/how-to-monitor-nginx/ - -作者:K Young -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:http://nginx.org/en/ -[2]:http://nginx.com/resources/glossary/reverse-proxy-server/ -[3]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/ -[4]:http://nginx.org/en/docs/ngx_core_module.html#worker_connections -[5]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#active-state -[6]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[7]:http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html -[8]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[9]:https://en.wikipedia.org/wiki/Reverse_proxy -[10]:http://nginx.com/blog/load-balancing-with-nginx-plus/ -[11]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[12]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#dropped-connections -[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#requests-per-second -[14]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#server-error-rate -[15]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#request-processing-time -[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ -[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ -[18]:https://www.datadoghq.com/blog/how-to-monitor-nginx/#sign-up -[19]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx.md -[20]:https://github.com/DataDog/the-monitor/issues From 67aff80abfcf06fefd44d67bb4a263d90f743d38 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 7 Aug 2015 00:25:48 +0800 Subject: [PATCH 130/207] Delete 20150803 Handy commands for profiling your Unix file systems.md --- ...ds for profiling your Unix file 
systems.md | 65 ------------------- 1 file changed, 65 deletions(-) delete mode 100644 sources/tech/20150803 Handy commands for profiling your Unix file systems.md diff --git a/sources/tech/20150803 Handy commands for profiling your Unix file systems.md b/sources/tech/20150803 Handy commands for profiling your Unix file systems.md deleted file mode 100644 index 359aba14c9..0000000000 --- a/sources/tech/20150803 Handy commands for profiling your Unix file systems.md +++ /dev/null @@ -1,65 +0,0 @@ -translation by strugglingyouth -Handy commands for profiling your Unix file systems -================================================================================ -![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) -Credit: Sandra H-S - -One of the problems that seems to plague nearly all file systems -- Unix and others -- is the continuous buildup of files. Almost no one takes the time to clean out files that they no longer use and file systems, as a result, become so cluttered with material of little or questionable value that keeping them them running well, adequately backed up, and easy to manage is a constant challenge. - -One way that I have seen to help encourage the owners of all that data detritus to address the problem is to create a summary report or "profile" of a file collection that reports on such things as the number of files; the oldest, newest, and largest of those files; and a count of who owns those files. If someone realizes that a collection of half a million files contains none less than five years old, they might go ahead and remove them -- or, at least, archive and compress them. The basic problem is that huge collections of files are overwhelming and most people are afraid that they might accidentally delete something important. Having a way to characterize a file collection can help demonstrate the nature of the content and encourage those digital packrats to clean out their nests. - -When I prepare a file system summary report on Unix, a handful of Unix commands easily provide some very useful statistics. To count the files in a directory, you can use a find command like this. - - $ find . -type f | wc -l - 187534 - -Finding the oldest and newest files is a bit more complicated, but still quite easy. In the commands below, we're using the find command again to find files, displaying the data with a year-month-day format that makes it possible to sort by file age, and then displaying the top -- thus the oldest -- file in that list. - -In the second command, we do the same, but print the last line -- thus the newest -- file. - - $ find -type f -printf '%T+ %p\n' | sort | head -n 1 - 2006-02-03+02:40:33 ./skel/.xemacs/init.el - $ find -type f -printf '%T+ %p\n' | sort | tail -n 1 - 2015-07-19+14:20:16 ./.bash_history - -The %T (file date and time) and %p (file name with path) parameters with the printf command allow this to work. - -If we're looking at home directories, we're undoubtedly going to find that history files are the newest files and that isn't likely to be a very interesting bit of information. You can omit those files by "un-grepping" them or you can omit all files that start with dots as shown below. - - $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1 - 2015-07-19+13:02:12 ./isPrime - -Finding the largest file involves using the %s (size) parameter and we include the file name (%f) since that's what we want the report to show. 
- - $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1 - 20183040 project.org.tar - -To summarize file ownership, use the %u (owner) - - $ find -type f -printf '%u \n' | grep -v "\./\." | sort | uniq -c - 180034 shs - 7500 jdoe - -If your file system also records the last access date, it can be very useful to show that files haven't been accessed in, say, more than two years. This would give your reviewers an important insight into the value of those files. The last access parameter (%a) could be used like this: - - $ find -type f -printf '%a+ %p\n' | sort | head -n 1 - Fri Dec 15 03:00:30 2006+ ./statreport - -Of course, if the most recently accessed file is also in the deep dark past, that's likely to get even more of a reaction. - - $ find -type f -printf '%a+ %p\n' | sort | tail -n 1 - Wed Nov 26 03:00:27 2007+ ./my-notes - -Getting a sense of what's in a file system or large directory by creating a summary report showing the file date ranges, the largest files, the file owners, and the oldest and new access times can help to demonstrate how current and how important a file collection is and help its owners decide if it's time to clean up. - --------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html - -作者:[Sandra Henry-Stocker][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ From 63ad6aab7dce0df6174c05b1dd83a62405af46f3 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Fri, 7 Aug 2015 00:27:30 +0800 Subject: [PATCH 131/207] Create 20150803 Handy commands for profiling your Unix file systems.md --- ...ds for profiling your Unix file systems.md | 66 +++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 translated/tech/20150803 Handy commands for profiling your Unix file systems.md diff --git a/translated/tech/20150803 Handy commands for profiling your Unix file systems.md b/translated/tech/20150803 Handy commands for profiling your Unix file systems.md new file mode 100644 index 0000000000..13efdcf0a1 --- /dev/null +++ b/translated/tech/20150803 Handy commands for profiling your Unix file systems.md @@ -0,0 +1,66 @@ + +很实用的命令来分析你的 Unix 文件系统 +================================================================================ +![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) +Credit: Sandra H-S + +其中一个问题几乎困扰着所有的文件系统 -- 包括 Unix 和其他的 -- 那就是文件的不断积累。几乎没有人愿意花时间清理掉他们不再使用的文件和文件系统,结果,文件变得很混乱,很难找到有用的东西使它们运行良好,能够得到备份,并且易于管理,这将是一种持久的挑战。 + +我见过的一种解决问题的方法是鼓励使用者将所有的数据碎屑创建成一个总结报告或"profile"这样一个文件集合来报告所有的文件数量;最老的,最新的,最大的文件;并统计谁拥有这些文件。如果有人看到一个包含五十万个文件的文件夹并且时间不小于五年,他们可能会去删除哪些文件 -- 或者,至少归档和压缩。主要问题是太大的文件夹会使人产生压制性害怕误删一些重要的东西。有一个描述文件夹的方法能帮助显示文件的性质并期待你去清理它。 + + +当我准备做 Unix 文件系统的总结报告时,几个有用的 Unix 命令能提供一些非常有用的统计信息。要计算目录中的文件数,你可以使用这样一个 find 命令。 + + $ find . 
-type f | wc -l
+    187534
+
+查找最老的和最新的文件要复杂一些,但还是相当容易的。在下面的命令中,我们再次使用 find 命令查找文件,并按年-月-日的格式显示文件时间,这样就可以按文件年龄排序,然后显示列表中排在最前面的——也就是最老的——那个文件。
+
+在第二个命令中,我们做同样的事,但打印的是最后一行——也就是最新的——那个文件。
+
+    $ find -type f -printf '%T+ %p\n' | sort | head -n 1
+    2006-02-03+02:40:33 ./skel/.xemacs/init.el
+    $ find -type f -printf '%T+ %p\n' | sort | tail -n 1
+    2015-07-19+14:20:16 ./.bash_history
+
+是 printf 命令的 %T(文件日期和时间)和 %p(带路径的文件名)参数让这个命令得以工作。
+
+如果我们查找的是家目录,几乎一定会发现 history 文件是最新的文件,而这算不上什么有趣的信息。你可以用 grep 反向过滤掉这些文件,也可以像下面这样忽略所有以点开头的文件。
+
+    $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1
+    2015-07-19+13:02:12 ./isPrime
+
+查找最大的文件要使用 %s(大小)参数,并包含文件名(%f),因为这正是我们想在报告中显示的内容。
+
+    $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1
+    20183040 project.org.tar
+
+要汇总文件的所有者,请使用 %u(所有者)参数。
+
+    $ find -type f -printf '%u \n' | grep -v "\./\." | sort | uniq -c
+    180034 shs
+    7500 jdoe
+
+如果你的文件系统还记录了最后访问日期,那么展示出文件已经很长时间(比如说两年以上)没有被访问过,会非常有用。这能让审阅者对这些文件的价值有一个重要的认识。最后访问时间参数(%a)可以这样使用:
+
+    $ find -type f -printf '%a+ %p\n' | sort | head -n 1
+    Fri Dec 15 03:00:30 2006+ ./statreport
+
+当然,如果连最近一次被访问的文件都已经是很久以前的了,那多半会引起更大的反应。
+
+    $ find -type f -printf '%a+ %p\n' | sort | tail -n 1
+    Wed Nov 26 03:00:27 2007+ ./my-notes
+
+通过创建一份总结报告来了解文件系统或大目录中有些什么——显示文件的日期范围、最大的文件、文件所有者,以及最早和最近的访问时间——可以帮助展示这批文件现在有多重要,并帮助它们的拥有者决定是不是该清理了。
+
+--------------------------------------------------------------------------------
+
+via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html
+
+作者:[Sandra Henry-Stocker][a]
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/
From e33f5e9a1b751567fb5498697363c30c48dff32e Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 7 Aug 2015 00:35:21 +0800
Subject: [PATCH 132/207] PUB:20150806 5 Reasons Why Software Developer is a
 Great Career Choice
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

@mousycoder 明早9:30 发布:https://linux.cn/article-5971-1.html
原文现在取消了大部分配图(配图有问题),所以我也取消了。
翻译的不错,我基本没怎么改动。
---
 ...ware Developer is a Great Career Choice.md | 75 +++++++++++++++
 ...ware Developer is a Great Career Choice.md | 91 -------------------
 2 files changed, 75 insertions(+), 91 deletions(-)
 create mode 100644 published/20150806 5 Reasons Why Software Developer is a Great Career Choice.md
 delete mode 100644 translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md

diff --git a/published/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/published/20150806 5 Reasons Why Software Developer is a Great Career Choice.md
new file mode 100644
index 0000000000..12831e66ad
--- /dev/null
+++ b/published/20150806 5 Reasons Why Software Developer is a Great Career Choice.md
@@ -0,0 +1,75 @@
+选择成为软件开发工程师的5个原因
+================================================================================
+
+![](http://henrikwarne1.files.wordpress.com/2011/09/cropped-desk1.jpg)
+
+这个星期我将给本地一所高中做一次关于程序猿是怎样工作的演讲。我是志愿(由 [Transfer][1] 组织的)来到这所学校谈论我的工作的。这个学校本周将有一个技术主题日,并且他们很想听听科技行业是怎样工作的。因为我是从事软件开发的,这也是我将和学生们讲的内容。演讲的其中一部分是我为什么觉得软件开发是一个很酷的职业。主要原因如下:
+
+### 5个原因 ###
+
+**1、创造性**
+
+如果你问别人创造性的工作有哪些,别人通常会说像作家,音乐家或者画家那样的(工作)。但是极少有人知道软件开发也是一项非常具有创造性的工作。它是最符合创造性定义的了,因为你创造了一个以前没有的新功能。这种解决方案可以在整体和细节上以很多形式来展现。我们经常会遇到一些需要做权衡的场景(比如说运行速度与内存消耗的权衡)。当然前提是这种解决方案必须是正确的。这些所有的行为都是需要强大的创造性的。
+
+**2、协作性**
+
+另外一个表象是程序猿们独自坐在他们的电脑前,然后撸一天的代码。但是软件开发事实上通常总是一个团队努力的结果。你会经常和你的同事讨论编程问题以及解决方案,并且和产品经理、测试人员、客户讨论需求以及其他问题。 +经常有人说,结对编程(2个开发人员一起在一个电脑上编程)是一种流行的最佳实践。 + +**3、高需性** + +世界上越来越多的人在用软件,正如 [Marc Andreessen](https://en.wikipedia.org/wiki/Marc_Andreessen) 所说 " [软件正在吞噬世界][2] "。虽然程序猿现在的数量非常巨大(在斯德哥尔摩,程序猿现在是 [最普遍的职业][3] ),但是,需求量一直处于供不应求的局面。据软件公司说,他们最大的挑战之一就是 [找到优秀的程序猿][4] 。我也经常接到那些想让我跳槽的招聘人员打来的电话。我知道至少除软件行业之外的其他行业的雇主不会那么拼(的去招聘)。 + +**4、高酬性** + +软件开发可以带来不菲的收入。卖一份你已经开发好的软件的额外副本是没有 [边际成本][5] 的。这个事实与对程序猿的高需求意味着收入相当可观。当然还有许多更捞金的职业,但是相比一般人群,我认为软件开发者确实“日进斗金”(知足吧!骚年~~)。 + +**5、前瞻性** + +有许多工作岗位消失,往往是由于它们可以被计算机和软件代替。但是所有这些新的程序依然需要开发和维护,因此,程序猿的前景还是相当好的。 + +### 但是...### + +**外包又是怎么一回事呢?** + +难道所有外包到其他国家的软件开发的薪水都很低吗?这是一个理想丰满,现实骨感的例子(有点像 [瀑布开发模型][6] )。软件开发基本上跟设计的工作一样,是一个探索发现的工作。它受益于强有力的合作。更进一步说,特别当你的主打产品是软件的时候,你所掌握的开发知识是绝对的优势。知识在整个公司中分享的越容易,那么公司的发展也将越来越好。 + +换一种方式去看待这个问题。软件外包已经存在了相当一段时间了。但是对本土程序猿的需求量依旧非常高。因为许多软件公司看到了雇佣本土程序猿的带来的收益要远远超过了相对较高的成本(其实还是赚了)。 + +### 如何成为人生大赢家 ### + +虽然我有许多我认为软件开发是一件非常有趣的事情的理由 (详情见: [为什么我热爱编程][7] )。但是这些理由,并不适用于所有人。幸运的是,尝试编程是一件非常容易的事情。在互联网上有数不尽的学习编程的资源。例如,[Coursera][8] 和 [Udacity][9] 都拥有很好的入门课程。如果你从来没有撸过码,可以尝试其中一个免费的课程,找找感觉。 + +寻找一个既热爱又能谋生的事情至少有2个好处。首先,由于你天天去做,工作将比你简单的只为谋生要有趣的多。其次,如果你真的非常喜欢,你将更好的擅长它。我非常喜欢下面一副关于伟大工作组成的韦恩图(作者 [@eskimon)][10]) 。因为编码的薪水确实相当不错,我认为如果你真的喜欢它,你将有一个很好的机会,成为人生的大赢家! + +![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/17571624.jpg) + +-------------------------------------------------------------------------------- + +via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/ + +作者:[Henrik Warne][a] +译者:[mousycoder](https://github.com/mousycoder) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + + +[a]:http://henrikwarne.com/ +[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23 +[2]:http://www.wsj.com/articles/SB10001424053111903480904576512250915629460 +[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/ +[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov +[5]:https://en.wikipedia.org/wiki/Marginal_cost +[6]:https://en.wikipedia.org/wiki/Waterfall_model +[7]:http://henrikwarne.com/2012/06/02/why-i-love-coding/ +[8]:https://www.coursera.org/ +[9]:https://www.udacity.com/ +[10]:https://eskimon.wordpress.com/about/ + + + + + + + diff --git a/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md deleted file mode 100644 index a592cb595e..0000000000 --- a/translated/talk/20150806 5 Reasons Why Software Developer is a Great Career Choice.md +++ /dev/null @@ -1,91 +0,0 @@ -选择软件开发攻城师的5个原因 -================================================================================ -这个星期我将给本地一所高中做一次有关于程序猿是怎样工作的演讲,我是 [Transfer][1] 组织推荐来到这所学校,谈论我的工作。这个学校本周将有一个技术主题日,并且他们很想听听科技行业是怎样工作的。因为我是从事软件开发的,这也是我将和学生们讲的内容。我为什么觉得软件开发是一个很酷的职业将是演讲的其中一部分。主要原因如下: - -### 5个原因 ### - -**1 创造性** - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/34042817.jpg) - -如果你问别人创造性的工作有哪些,别人通常会说像作家,音乐家或者画家那样的(工作)。但是极少有人知道软件开发也是一项非常具有创造性的工作。因为你创造了一个以前没有的新功能,这样的功能基本上可以被定义为非常具有创造性。这种解决方案可以在整体和细节上以很多形式来展现。我们经常会遇到一些需要做权衡的场景(比如说运行速度与内存消耗的权衡)。当然前提是这种解决方案必须是正确的。其实这些所有的行为都是需要强大的创造性的。 - -**2 协作性** - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/94579377.jpg) - -另外一个表象是程序猿们独自坐在他们的电脑前,然后撸一天的代码。但是软件开发事实上通常总是一个团队努力的结果。你会经常和你的同事讨论编程问题以及解决方案,并且和产品经理,测试人员,客户讨论需求以及其他问题。 -经常有人说,结对编程(2个开发人员一起在一个电脑上编程)是一种流行的最佳实践。 - - -**3 高需性** - 
-![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/26662164.jpg) - -世界上越来越多的人在用软件,正如 [Marc Andreessen](https://en.wikipedia.org/wiki/Marc_Andreessen) 所说 " [软件正在吞噬世界][2] "。虽然程序猿现在的数量非常巨大(在斯德哥尔摩,程序猿现在是 [最普遍的职业][3] ),但是,需求量一直处于供不应求的局面。据软件公司报告,他们最大的挑战之一就是 [找到优秀的程序猿][4] 。我也经常接到那些想让我跳槽的招聘人员打来的电话。我知道至少除软件行业之外的其他行业的雇主不会那么拼(的去招聘)。 - -**4 高酬性** - - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/50538928.jpg) - -软件开发可以带来不菲的收入。卖一份你已经开发好的软件的额外副本是没有 [边际成本][5] 的。这个事实与对程序猿的高需求意味着收入相当可观。当然还有许多更捞金的职业,但是相比一般人群,我认为软件开发者确实“日进斗金”(知足吧!骚年~~)。 - -**5 前瞻性** - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/89799239.jpg) - -有许多工作岗位消失,往往是由于它们可以被计算机和软件代替。但是所有这些新的程序依然需要开发和维护,因此,程序猿的前景还是相当好的。 - - -### 但是...### - -**外包又是怎么一回事呢?** - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/41615753.jpg) - -难道所有外包到其他地区的软件开发的薪水都很低吗?这是一个理想丰满,现实骨感的例子(有点像 [瀑布开发模型][6] )。软件开发基本上跟设计的工作一样,是一个探索发现的工作。它受益于强有力的合作。更进一步说,特别当你的主打产品是软件的时候,你所掌握的开发知识是绝对的优势。知识在整个公司中分享的越容易,那么公司的发展也将越来越好。 - - -换一种方式去看待这个问题。软件外包已经存在了相当一段时间了。但是对本土程序猿的需求量依旧非常高。因为许多软件公司看到了雇佣本土程序猿的带来的收益要远远超过了相对较高的成本(其实还是赚了)。 - -### 如何成为人生大赢家 ### - - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/44219908.jpg) - -虽然我有许多我认为软件开发是一件非常有趣的事情的理由 (详情见: [为什么我热爱编程][7] )。但是这些理由,并不适用于所有人。幸运的是,尝试编程是一件非常容易的事情。在互联网上有数不尽的学习编程的资源。例如,[Coursera][8] 和 [Udacity][9] 都拥有很好的入门课程。如果你从来没有撸过码,可以尝试其中一个免费的课程,找找感觉。 - -寻找一个既热爱又能谋生的事情至少有2个好处。首先,由于你天天去做,工作将比你简单的只为谋生要有趣的多。其次,如果你真的非常喜欢,你将更好的擅长它。我非常喜欢下面一副关于伟大工作组成的韦恩图(作者 [@eskimon)][10] 。因为编码的薪水确实相当不错,我认为如果你真的喜欢它,你将有一个很好的机会,成为人生的大赢家! - -![](http://7xjl4u.com1.z0.glb.clouddn.com/15-8-6/17571624.jpg) - --------------------------------------------------------------------------------- - -via: http://henrikwarne.com/2014/12/08/5-reasons-why-software-developer-is-a-great-career-choice/ - -作者:[Henrik Warne][a] -译者:[mousycoder](https://github.com/mousycoder) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - - -[a]:http://henrikwarne.com/ -[1]:http://www.transfer.nu/omoss/transferinenglish.jspx?pageId=23 -[2]:http://www.wsj.com/articles/SB10001424053111903480904576512250915629460 -[3]:http://www.di.se/artiklar/2014/6/12/jobbet-som-tar-over-landet/ -[4]:http://computersweden.idg.se/2.2683/1.600324/examinationstakten-racker-inte-for-branschens-behov -[5]:https://en.wikipedia.org/wiki/Marginal_cost -[6]:https://en.wikipedia.org/wiki/Waterfall_model -[7]:http://henrikwarne.com/2012/06/02/why-i-love-coding/ -[8]:https://www.coursera.org/ -[9]:https://www.udacity.com/ -[10]:https://eskimon.wordpress.com/about/ - - - - - - - From bc21018f00e1834be2d8c597855f247441788256 Mon Sep 17 00:00:00 2001 From: Ping Date: Fri, 7 Aug 2015 08:20:37 +0800 Subject: [PATCH 133/207] Translating --- ...06 Linux FAQs with Answers--How to install git on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md index c5c34f3a72..c9610a2dfe 100644 --- a/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md +++ b/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md @@ -1,3 +1,5 @@ +Translating by Ping + Linux FAQs with Answers--How to install git on Linux ================================================================================ > **Question:** I am trying to clone a project from a public Git repository, but I am getting "git: command not found" error. 
How can I install git on [insert your Linux distro]? @@ -69,4 +71,4 @@ via: http://ask.xmodulo.com/install-git-linux.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://ask.xmodulo.com/author/nanni -[1]:https://github.com/git/git/releases \ No newline at end of file +[1]:https://github.com/git/git/releases From 6d313b52541e7bfd243825ad0a9d06a78d7c998b Mon Sep 17 00:00:00 2001 From: joeren Date: Fri, 7 Aug 2015 08:54:23 +0800 Subject: [PATCH 134/207] Update 20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md --- ...to fix 'ImportError--No module named wxversion' on Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md index 11d814d8f4..66af8413fd 100644 --- a/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md +++ b/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md @@ -1,3 +1,4 @@ +Translating by GOLinux! Linux FAQs with Answers--How to fix “ImportError: No module named wxversion” on Linux ================================================================================ > **Question:** I was trying to run a Python application on [insert your Linux distro], but I got an error "ImportError: No module named wxversion." How can I solve this error in the Python program? @@ -46,4 +47,4 @@ via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html [a]:http://ask.xmodulo.com/author/nanni [1]:http://wxpython.org/ -[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html \ No newline at end of file +[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html From f26be7487f95cd3f5098ab147e42da5e74d599e0 Mon Sep 17 00:00:00 2001 From: Jindong Huang Date: Fri, 7 Aug 2015 08:59:37 +0800 Subject: [PATCH 135/207] =?UTF-8?q?=E3=80=90Translating=20by=20dingdongnig?= =?UTF-8?q?etou=E3=80=9120150730=20Howto=20Configure=20Nginx=20as=20Rrever?= =?UTF-8?q?se=20Proxy=20or=20Load=20Balancer=20with=20Weave=20and=20Docker?= =?UTF-8?q?.md?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... Rreverse Proxy or Load Balancer with Weave and Docker.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md index 82c592d3b4..f217db9c70 100644 --- a/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md +++ b/sources/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md @@ -1,3 +1,6 @@ + +Translating by dingdongnigetou + Howto Configure Nginx as Rreverse Proxy / Load Balancer with Weave and Docker ================================================================================ Hi everyone today we'll learnHowto configure Nginx as Rreverse Proxy / Load balancer with Weave and Docker Weave creates a virtual network that connects Docker containers with each other, deploys across multiple hosts and enables their automatic discovery. 
It allows us to focus on developing our application, rather than our infrastructure. It provides such an awesome environment that the applications uses the network as if its containers were all plugged into the same network without need to configure ports, mappings, link, etc. The services of the application containers on the network can be easily accessible to the external world with no matter where its running. Here, in this tutorial we'll be using weave to quickly and easily deploy nginx web server as a load balancer for a simple php application running in docker containers on multiple nodes in Amazon Web Services. Here, we will be introduced to WeaveDNS, which provides a simple way for containers to find each other using hostname with no changes in codes and tells other containers to connect to those names. @@ -123,4 +126,4 @@ via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/arunp/ -[1]:http://console.aws.amazon.com/ \ No newline at end of file +[1]:http://console.aws.amazon.com/ From dfa1d5bf8796e70c8ee522a1bb67e6d68efd97a0 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Fri, 7 Aug 2015 09:09:47 +0800 Subject: [PATCH 136/207] [Translated]20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md --- ...or--No module named wxversion' on Linux.md | 50 ------------------- ...or--No module named wxversion' on Linux.md | 49 ++++++++++++++++++ 2 files changed, 49 insertions(+), 50 deletions(-) delete mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md create mode 100644 translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md deleted file mode 100644 index 66af8413fd..0000000000 --- a/sources/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md +++ /dev/null @@ -1,50 +0,0 @@ -Translating by GOLinux! -Linux FAQs with Answers--How to fix “ImportError: No module named wxversion” on Linux -================================================================================ -> **Question:** I was trying to run a Python application on [insert your Linux distro], but I got an error "ImportError: No module named wxversion." How can I solve this error in the Python program? - - Looking for python... 2.7.9 - Traceback (most recent call last): - File "/home/dev/playonlinux/python/check_python.py", line 1, in - import os, wxversion - ImportError: No module named wxversion - failed tests - -This error indicates that your Python application is GUI-based, relying on a missing Python module called wxPython. [wxPython][1] is a Python extension module for the wxWidgets GUI library, popularly used by C++ programmers to design GUI applications. The wxPython extension allows Python developers to easily design and integrate GUI within any Python application. - -To solve this import error, you need to install wxPython on your Linux, as described below. 
-
-### Install wxPython on Debian, Ubuntu or Linux Mint ###
-
-    $ sudo apt-get install python-wxgtk2.8
-
-### Install wxPython on Fedora ###
-
-    $ sudo yum install wxPython
-
-### Install wxPython on CentOS/RHEL ###
-
-wxPython is available on the EPEL repository of CentOS/RHEL, not on base repositories. Thus, first [enable EPEL repository][2] on your system, and then use yum command.
-
-    $ sudo yum install wxPython
-
-### Install wxPython on Arch Linux ###
-
-    $ sudo pacman -S wxpython
-
-### Install wxPython on Gentoo ###
-
-    $ emerge wxPython
-
---------------------------------------------------------------------------------
-
-via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html
-
-作者:[Dan Nanni][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://ask.xmodulo.com/author/nanni
-[1]:http://wxpython.org/
-[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md b/translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md
new file mode 100644
index 0000000000..2a937daeff
--- /dev/null
+++ b/translated/tech/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md
@@ -0,0 +1,49 @@
+Linux有问必答——如何修复Linux上的“ImportError: No module named wxversion”错误
+================================================================================
+
+> **问题** 我试着在[你的Linux发行版]上运行一个Python应用,但是我得到了这个错误"ImportError: No module named wxversion."。我怎样才能解决Python程序中的这个错误呢?
+
+    Looking for python... 2.7.9 - Traceback (most recent call last):
+    File "/home/dev/playonlinux/python/check_python.py", line 1, in
+    import os, wxversion
+    ImportError: No module named wxversion
+    failed tests
+
+该错误表明,你的Python应用是基于GUI的,依赖于一个名为wxPython的缺失模块。[wxPython][1]是一个用于wxWidgets GUI库的Python扩展模块,普遍被C++程序员用来设计GUI应用。该wxPython扩展允许Python开发者在任何Python应用中方便地设计和整合GUI。
+要解决这个导入错误,你需要在你的Linux系统上安装wxPython,如下所述。
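+
+在动手安装之前,你可以先用一行命令确认该模块对你的解释器确实不可用(这只是一个示意性的检查,假设你的应用使用的是系统默认的Python 2解释器):
+
+    $ python -c "import wxversion"
+
+如果它报出同样的ImportError,就说明确实缺少wxPython;按下面对应你的发行版的方法安装之后,再次运行这条命令,没有任何输出就表示模块已经可以导入了。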
+ +### 安装wxPython到Debian,Ubuntu或Linux Mint ### + + $ sudo apt-get install python-wxgtk2.8 + +### 安装wxPython到Fedora ### + + $ sudo yum install wxPython + +### 安装wxPython到CentOS/RHEL ### + +wxPython可以在CentOS/RHEL的EPEL仓库中获取到,而基本仓库中则没有。因此,首先要在你的系统中[启用EPEL仓库][2],然后使用yum命令来安装。 + + $ sudo yum install wxPython + +### 安装wxPython到Arch Linux ### + + $ sudo pacman -S wxpython + +### 安装wxPython到Gentoo ### + + $ emerge wxPython + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/importerror-no-module-named-wxversion.html + +作者:[Dan Nanni][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:http://wxpython.org/ +[2]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html From 09b98f265e23ed5ec0c759f2e16c749288a47728 Mon Sep 17 00:00:00 2001 From: joeren Date: Fri, 7 Aug 2015 09:11:45 +0800 Subject: [PATCH 137/207] Update 20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md --- ...ogging in Open vSwitch for debugging and troubleshooting.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md index 2b4e16bcaf..dcf811a003 100644 --- a/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md +++ b/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md @@ -1,3 +1,4 @@ +Translating by GOlinu! Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting ================================================================================ > **Question:** I am trying to troubleshoot my Open vSwitch deployment. For that I would like to inspect its debug messages generated by its built-in logging mechanism. How can I enable logging in Open vSwitch, and change its logging level (e.g., to INFO/DEBUG level) to check more detailed debug information? 
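
Before changing any logging levels, it helps to know where the messages end up: on most installations the OVS daemons already write to log files on disk, and a typical default path (this can vary by distribution and build) is /var/log/openvswitch/ovs-vswitchd.log, so a quick first check is:

    $ sudo tail -f /var/log/openvswitch/ovs-vswitchd.log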
@@ -66,4 +67,4 @@ via: http://ask.xmodulo.com/enable-logging-open-vswitch.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 -[a]:http://ask.xmodulo.com/author/nanni \ No newline at end of file +[a]:http://ask.xmodulo.com/author/nanni From 6323f7946f874af701db19742a8c6931b59aeb11 Mon Sep 17 00:00:00 2001 From: Ping Date: Fri, 7 Aug 2015 09:39:48 +0800 Subject: [PATCH 138/207] Complete 20150806 Linux FAQs with Answers--How to install git on Linux.md --- ...th Answers--How to install git on Linux.md | 74 ------------------- ...th Answers--How to install git on Linux.md | 73 ++++++++++++++++++ 2 files changed, 73 insertions(+), 74 deletions(-) delete mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md create mode 100644 translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md b/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md deleted file mode 100644 index c9610a2dfe..0000000000 --- a/sources/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md +++ /dev/null @@ -1,74 +0,0 @@ -Translating by Ping - -Linux FAQs with Answers--How to install git on Linux -================================================================================ -> **Question:** I am trying to clone a project from a public Git repository, but I am getting "git: command not found" error. How can I install git on [insert your Linux distro]? - -Git is a popular open-source version control system (VCS) originally developed for Linux environment. Contrary to other VCS tools like CVS or SVN, Git's revision control is considered "distributed" in a sense that your local Git working directory can function as a fully-working repository with complete history and version-tracking capabilities. In this model, each collaborator commits to his or her local repository (as opposed to always committing to a central repository), and optionally push to a centralized repository if need be. This brings in scalability and redundancy to the revision control system, which is a must in any kind of large-scale collaboration. - -![](https://farm1.staticflickr.com/341/19433194168_c79d4570aa_b.jpg) - -### Install Git with a Package Manager ### - -Git is shipped with all major Linux distributions. Thus the easiest way to install Git is by using your Linux distro's package manager. - -**Debian, Ubuntu, or Linux Mint** - - $ sudo apt-get install git - -**Fedora, CentOS or RHEL** - - $ sudo yum install git - -**Arch Linux** - - $ sudo pacman -S git - -**OpenSUSE** - - $ sudo zypper install git - -**Gentoo** - - $ emerge --ask --verbose dev-vcs/git - -### Install Git from the Source ### - -If for whatever reason you want to built Git from the source, you can follow the instructions below. - -**Install Dependencies** - -Before building Git, first install dependencies. - -**Debian, Ubuntu or Linux** - - $ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x - -**Fedora, CentOS or RHEL** - - $ sudo yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel asciidoc xmlto docbook2x - -#### Compile Git from the Source #### - -Download the latest release of Git from [https://github.com/git/git/releases][1]. Then build and install Git under /usr as follows. 
- -Note that if you want to install it under a different directory (e.g., /opt), replace "--prefix=/usr" in configure command with something else. - - $ cd git-x.x.x - $ make configure - $ ./configure --prefix=/usr - $ make all doc info - $ sudo make install install-doc install-html install-info - --------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/install-git-linux.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni -[1]:https://github.com/git/git/releases diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md b/translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md new file mode 100644 index 0000000000..e6d3f59c71 --- /dev/null +++ b/translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md @@ -0,0 +1,73 @@ +Linux问答 -- 如何在Linux上安装Git +================================================================================ + +> **问题:** 我尝试从一个Git公共仓库克隆项目,但出现了这样的错误提示:“git: command not found”。 请问我该如何安装Git? [注明一下是哪个Linux发行版]? + +Git是一个流行的并且开源的版本控制系统(VCS),最初是为Linux环境开发的。跟CVS或者SVN这些版本控制系统不同的是,Git的版本控制被认为是“分布式的”,某种意义上,git的本地工作目录可以作为一个功能完善的仓库来使用,它具备完整的历史记录和版本追踪能力。在这种工作模型之下,各个协作者将内容提交到他们的本地仓库中(与之相对的会直接提交到核心仓库),如果有必要,再有选择性地推送到核心仓库。这就为Git这个版本管理系统带来了大型协作系统所必须的可扩展能力和冗余能力。 + +![](https://farm1.staticflickr.com/341/19433194168_c79d4570aa_b.jpg) + +### 使用包管理器安装Git ### + +Git已经被所有的主力Linux发行版所支持。所以安装它最简单的方法就是使用各个Linux发行版的包管理器。 + +**Debian, Ubuntu, 或 Linux Mint** + + $ sudo apt-get install git + +**Fedora, CentOS 或 RHEL** + + $ sudo yum install git + +**Arch Linux** + + $ sudo pacman -S git + +**OpenSUSE** + + $ sudo zypper install git + +**Gentoo** + + $ emerge --ask --verbose dev-vcs/git + +### 从源码安装Git ### + +如果由于某些原因,你希望从源码安装Git,安装如下介绍操作。 + +**安装依赖包** + +在构建Git之前,先安装它的依赖包。 + +**Debian, Ubuntu 或 Linux Mint** + + $ sudo apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev asciidoc xmlto docbook2x + +**Fedora, CentOS 或 RHEL** + + $ sudo yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel asciidoc xmlto docbook2x + +#### 从源码编译Git #### + +从 [https://github.com/git/git/releases][1] 下载最新版本的Git。然后在/usr下构建和安装。 + +注意,如果你打算安装到其他目录下(例如:/opt),那就把"--prefix=/usr"这个配置命令使用其他路径替换掉。 + + $ cd git-x.x.x + $ make configure + $ ./configure --prefix=/usr + $ make all doc info + $ sudo make install install-doc install-html install-info + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/install-git-linux.html + +作者:[Dan Nanni][a] +译者:[mr-ping](https://github.com/mr-ping) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:https://github.com/git/git/releases From b2706ff03ec3218931128a03bca7fd05b28817e0 Mon Sep 17 00:00:00 2001 From: ZTinoZ Date: Fri, 7 Aug 2015 09:52:37 +0800 Subject: [PATCH 139/207] Translating by ZTinoZ --- sources/talk/20150806 5 heroes of the Linux world.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150806 5 heroes of the Linux world.md b/sources/talk/20150806 5 heroes of the Linux world.md index ae35d674a1..5b5485198c 100644 --- a/sources/talk/20150806 5 heroes of the Linux world.md +++ b/sources/talk/20150806 5 
heroes of the Linux world.md @@ -1,3 +1,4 @@ +Translating by ZTinoZ 5 heroes of the Linux world ================================================================================ Who are these people, seen and unseen, whose work affects all of us every day? @@ -96,4 +97,4 @@ via: http://www.itworld.com/article/2955001/linux/5-heros-of-the-linux-world.htm [7]:https://flic.kr/p/hBv8Pp [8]:https://en.wikipedia.org/wiki/Sysfs [9]:https://www.youtube.com/watch?v=CyHAeGBFS8k -[10]:http://www.itworld.com/article/2873200/operating-systems/11-technologies-that-tick-off-linus-torvalds.html \ No newline at end of file +[10]:http://www.itworld.com/article/2873200/operating-systems/11-technologies-that-tick-off-linus-torvalds.html From cf1e4ef93781185b77462724d1c8529ad89a7d60 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Fri, 7 Aug 2015 09:55:36 +0800 Subject: [PATCH 140/207] [Translated]20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md --- ...witch for debugging and troubleshooting.md | 70 ------------------- ...witch for debugging and troubleshooting.md | 69 ++++++++++++++++++ 2 files changed, 69 insertions(+), 70 deletions(-) delete mode 100644 sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md create mode 100644 translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md diff --git a/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md deleted file mode 100644 index dcf811a003..0000000000 --- a/sources/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md +++ /dev/null @@ -1,70 +0,0 @@ -Translating by GOlinu! -Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting -================================================================================ -> **Question:** I am trying to troubleshoot my Open vSwitch deployment. For that I would like to inspect its debug messages generated by its built-in logging mechanism. How can I enable logging in Open vSwitch, and change its logging level (e.g., to INFO/DEBUG level) to check more detailed debug information? - -Open vSwitch (OVS) is the most popular open-source implementation of virtual switch on the Linux platform. As the today's data centers increasingly rely on the software-defined network (SDN) architecture, OVS is fastly adopted as the de-facto standard network element in data center's SDN deployments. - -Open vSwitch has a built-in logging mechanism called VLOG. The VLOG facility allows one to enable and customize logging within various components of the switch. The logging information generated by VLOG can be sent to a combination of console, syslog and a separate log file for inspection. You can configure OVS logging dynamically at run-time with a command-line tool called `ovs-appctl`. - -![](https://farm1.staticflickr.com/499/19300367114_cd8aac2fb2_c.jpg) - -Here is how to enable logging and customize logging levels in Open vSwitch with `ovs-appctl`. - -The syntax of `ovs-appctl` to customize VLOG is as follows. 
- - $ sudo ovs-appctl vlog/set module[:facility[:level]] - -- **Module**: name of any valid component in OVS (e.g., netdev, ofproto, dpif, vswitchd, and many others) -- **Facility**: destination of logging information (must be: console, syslog or file) -- **Level**: verbosity of logging (must be: emer, err, warn, info, or dbg) - -In OVS source code, module name is defined in each source file in the form of: - - VLOG_DEFINE_THIS_MODULE(); - -For example, in lib/netdev.c, you will see: - - VLOG_DEFINE_THIS_MODULE(netdev); - -which indicates that lib/netdev.c is part of netdev module. Any logging messages generated in lib/netdev.c will belong to netdev module. - -In OVS source code, there are multiple severity levels used to define several different kinds of logging messages: VLOG_INFO() for informational, VLOG_WARN() for warning, VLOG_ERR() for error, VLOG_DBG() for debugging, VLOG_EMERG for emergency. Logging level and facility determine which logging messages are sent where. - -To see a full list of available modules, facilities, and their respective logging levels, run the following commands. This command must be invoked after you have started OVS. - - $ sudo ovs-appctl vlog/list - -![](https://farm1.staticflickr.com/465/19734939478_7eb5d44635_c.jpg) - -The output shows the debug levels of each module for three different facilities (console, syslog, file). By default, all modules have their logging level set to INFO. - -Given any one OVS module, you can selectively change the debug level of any particular facility. For example, if you want to see more detailed debug messages of dpif module at the console screen, run the following command. - - $ sudo ovs-appctl vlog/set dpif:console:dbg - -You will see that dpif module's console facility has changed its logging level to DBG. The logging level of two other facilities, syslog and file, remains unchanged. - -![](https://farm1.staticflickr.com/333/19896760146_5d851311ae_c.jpg) - -If you want to change the logging level for all modules, you can specify "ANY" as the module name. For example, the following command will change the console logging level of every module to DBG. - - $ sudo ovs-appctl vlog/set ANY:console:dbg - -![](https://farm1.staticflickr.com/351/19734939828_8c7f59e404_c.jpg) - -Also, if you want to change the logging level of all three facilities at once, you can specify "ANY" as the facility name. For example, the following command will change the logging level of all facilities for every module to DBG. 
- - $ sudo ovs-appctl vlog/set ANY:ANY:dbg - --------------------------------------------------------------------------------- - -via: http://ask.xmodulo.com/enable-logging-open-vswitch.html - -作者:[Dan Nanni][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://ask.xmodulo.com/author/nanni diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md new file mode 100644 index 0000000000..542cf31cb3 --- /dev/null +++ b/translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md @@ -0,0 +1,69 @@ +Linux有问必答——如何启用Open vSwitch的日志功能以便调试和排障 +================================================================================ +> **问题** 我试着为我的Open vSwitch部署排障,鉴于此,我想要检查它的由内建日志机制生成的调试信息。我怎样才能启用Open vSwitch的日志功能,并且修改它的日志等级(如,修改成INFO/DEBUG级别)以便于检查更多详细的调试信息呢? + +Open vSwitch(OVS)是Linux平台上用于虚拟切换的最流行的开源部署。由于当今的数据中心日益依赖于软件定义的网络(SDN)架构,OVS被作为数据中心的SDN部署中实际上的标准网络元素而快速采用。 + +Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允许你在各种切换组件中启用并自定义日志,由VLOG生成的日志信息可以被发送到一个控制台,syslog以及一个独立日志文件组合,以供检查。你可以通过一个名为`ovs-appctl`的命令行工具在运行时动态配置OVS日志。 + +![](https://farm1.staticflickr.com/499/19300367114_cd8aac2fb2_c.jpg) + +这里为你演示如何使用`ovs-appctl`启用Open vSwitch中的日志功能,并进行自定义。 + +下面是`ovs-appctl`自定义VLOG的语法。 + + $ sudo ovs-appctl vlog/set module[:facility[:level]] + +- **Module**:OVS中的任何合法组件的名称(如netdev,ofproto,dpif,vswitchd,以及其它大量组件) +- **Facility**:日志信息的目的地(必须是:console,syslog,或者file) +- **Level**:日志的详细程度(必须是:emer,err,warn,info,或者dbg) + +在OVS源代码中,模块名称在源文件中是以以下格式定义的: + + VLOG_DEFINE_THIS_MODULE(); + +例如,在lib/netdev.c中,你可以看到: + + VLOG_DEFINE_THIS_MODULE(netdev); + +这个表明,lib/netdev.c是netdev模块的一部分,任何在lib/netdev.c中生成的日志信息将属于netdev模块。 + +在OVS源代码中,有多个严重度等级用于定义几个不同类型的日志信息:VLOG_INFO()用于报告,VLOG_WARN()用于警告,VLOG_ERR()用于错误提示,VLOG_DBG()用于调试信息,VLOG_EMERG用于紧急情况。日志等级和工具确定哪个日志信息发送到哪里。 + +要查看可用模块、工具和各自日志级别的完整列表,请运行以下命令。该命令必须在你启动OVS后调用。 + + $ sudo ovs-appctl vlog/list + +![](https://farm1.staticflickr.com/465/19734939478_7eb5d44635_c.jpg) + +输出结果显示了用于三个工具(console,syslog,file)的各个模块的调试级别。默认情况下,所有模块的日志等级都被设置为INFO。 + +指定任何一个OVS模块,你可以选择性地修改任何特定工具的调试级别。例如,如果你想要在控制台屏幕中查看dpif更为详细的调试信息,可以运行以下命令。 + + $ sudo ovs-appctl vlog/set dpif:console:dbg + +你将看到dpif模块的console工具已经将其日志等级修改为DBG,而其它两个工具syslog和file的日志级别仍然没有改变。 + +![](https://farm1.staticflickr.com/333/19896760146_5d851311ae_c.jpg) + +如果你想要修改所有模块的日志等级,你可以指定“ANY”作为模块名。例如,下面命令将修改每个模块的console的日志级别为DBG。 + + $ sudo ovs-appctl vlog/set ANY:console:dbg + +![](https://farm1.staticflickr.com/351/19734939828_8c7f59e404_c.jpg) + +同时,如果你想要一次性修改所有三个工具的日志级别,你可以指定“ANY”作为工具名。例如,下面的命令将修改每个模块的所有工具的日志级别为DBG。 + + $ sudo ovs-appctl vlog/set ANY:ANY:dbg + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/enable-logging-open-vswitch.html + +作者:[Dan Nanni][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni From f3cb1163f4d917f9e9db845461f0fbecca88b84f Mon Sep 17 00:00:00 2001 From: ZTinoZ Date: Fri, 7 Aug 2015 10:37:40 +0800 Subject: [PATCH 141/207] Translating by ZTinoZ --- sources/talk/20150806 5 heroes of the Linux world.md | 
11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/sources/talk/20150806 5 heroes of the Linux world.md b/sources/talk/20150806 5 heroes of the Linux world.md
index 5b5485198c..bf1d75d562 100644
--- a/sources/talk/20150806 5 heroes of the Linux world.md
+++ b/sources/talk/20150806 5 heroes of the Linux world.md
@@ -1,16 +1,15 @@
-Translating by ZTinoZ
-5 heroes of the Linux world
+Linux世界的五个大神
 ================================================================================
-Who are these people, seen and unseen, whose work affects all of us every day?
+这些或见过、或没见过的人是谁?他们的工作每天都在影响着我们每一个人。

![Image courtesy Christopher Michel/Flickr](http://core0.staticworld.net/images/article/2015/07/penguin-100599348-orig.jpg)
Image courtesy [Christopher Michel/Flickr][1]

-### High-flying penguins ###
+### 野心勃勃的企鹅 ###

-Linux and open source is driven by passionate people who write best-of-breed software and then release the code to the public so anyone can use it, without any strings attached. (Well, there is one string attached and that’s licence.)
+Linux和开源世界一直在被那些热情洋溢的人们推动着,他们开发出最好的软件并将代码向公众开放,所以任何人都可以无条件地使用。(好吧,还是有一个条件的,那就是许可证。)

-Who are these people? These heroes of the Linux world, whose work affects all of us every day. Allow me to introduce you.
+那么,这些人是谁?这些Linux世界里的大神们,他们的工作每天都在影响着我们。让我来给你一一揭晓。

![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/swap-klaus-100599357-orig.jpg)
Image courtesy Swapnil Bhartiya

### Klaus Knopper ###

-Klaus Knopper, an Austrian developer who lives in Germany, is the founder of Knoppix and Adriana Linux, which he developed for his blind wife.
+Klaus Knopper,一个生活在德国的奥地利开发者,他是Knoppix和Adriana Linux的创始人,其中后者是他为失明的妻子开发的。

-Knoppix holds a very special place in heart of those Linux users who started using Linux before Ubuntu came along. What makes Knoppix so special is that it popularized the concept of Live CD. Unlike Windows or Mac OS X, you could run the entire operating system from the CD without installing anything on the system. It allowed new users to test Linux on their systems without formatting the hard drive. The live feature of Linux alone contributed heavily to its popularity.
+Knoppix在那些在Ubuntu出现之前就开始使用Linux的用户心里有着特殊的地位,而Knoppix最让人称道的就是它让Live CD的概念普及开来。不像Windows或Mac OS X,你可以直接从CD运行整个操作系统而不用在系统上安装任何东西,这让新用户可以在自己的机子上试用Linux而不用去格式化硬盘。Linux这种免安装即可运行(Live)的特性,为它的普及做出了巨大贡献。

![Image courtesy Fórum Internacional Software Live/Flickr](http://images.techhive.com/images/article/2015/07/lennart-100599356-orig.jpg)
Image courtesy [Fórum Internacional Software Live/Flickr][2]
From 14a989ea41ead6ccb5bf70206ca4e96fed54596f Mon Sep 17 00:00:00 2001
From: Ping
Date: Fri, 7 Aug 2015 14:38:03 +0800
Subject: [PATCH 143/207] Marking out Translating article

---
 .../tech/20150518 How to set up a Replica Set on MongoDB.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150518 How to set up a Replica Set on MongoDB.md b/sources/tech/20150518 How to set up a Replica Set on MongoDB.md
index 07e16dafc1..83a7da8769 100644
--- a/sources/tech/20150518 How to set up a Replica Set on MongoDB.md
+++ b/sources/tech/20150518 How to set up a Replica Set on MongoDB.md
@@ -1,3 +1,4 @@
+Translating by Ping
How to set up a Replica Set on MongoDB
================================================================================
MongoDB has become the most famous NoSQL database on the market. MongoDB is document-oriented, and its scheme-free design makes it a really attractive solution for all kinds of web applications. One of the features that I like the most is Replica Set, where multiple copies of the same data set are maintained by a group of mongod nodes for redundancy and high availability. 
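
As a taste of what that looks like in practice, here is a minimal sketch (the hostnames, the port and the set name "rs0" are placeholders): once several mongod instances are running with the same --replSet name, the set is initiated once, from any member, via the mongo shell:

    $ mongo --host node1 --eval 'rs.initiate({_id: "rs0", members: [
          {_id: 0, host: "node1:27017"},
          {_id: 1, host: "node2:27017"},
          {_id: 2, host: "node3:27017"}]})'
    $ mongo --host node1 --eval 'rs.status()'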
@@ -179,4 +180,4 @@ via: http://xmodulo.com/setup-replica-set-mongodb.html 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://xmodulo.com/author/valerio -[1]:http://docs.mongodb.org/ecosystem/drivers/ \ No newline at end of file +[1]:http://docs.mongodb.org/ecosystem/drivers/ From ee9f6a02e8e29bd00c790a8a2098d2654a0269fa Mon Sep 17 00:00:00 2001 From: ZTinoZ Date: Fri, 7 Aug 2015 15:15:02 +0800 Subject: [PATCH 143/207] Translating by ZTinoZ --- sources/talk/20150806 5 heroes of the Linux world.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/sources/talk/20150806 5 heroes of the Linux world.md b/sources/talk/20150806 5 heroes of the Linux world.md index bf1d75d562..abc42df7f9 100644 --- a/sources/talk/20150806 5 heroes of the Linux world.md +++ b/sources/talk/20150806 5 heroes of the Linux world.md @@ -7,7 +7,7 @@ Image courtesy [Christopher Michel/Flickr][1] ### 野心勃勃的企鹅 ### -Linux和开源世界一直在被那些热情洋溢的人们驱动着,他们开发出最好的软件并将代码向公众开放,所以每个人都能无条件地看到。(对了,有那么一个条件,那就是许可证。) +Linux和开源世界一直在被那些热情洋溢的人们推动着,他们开发出最好的软件并将代码向公众开放,所以每个人都能无条件地看到。(对了,有那么一个条件,那就是许可证。) 那么,这些人是谁?这些Linux世界里的大神们,谁在每天影响着我们?让我来给你一一揭晓。 @@ -16,9 +16,9 @@ Image courtesy Swapnil Bhartiya ### Klaus Knopper ### -Klaus Knopper, an Austrian developer who lives in Germany, is the founder of Knoppix and Adriana Linux, which he developed for his blind wife. +Klaus Knopper,一个生活在德国的奥地利开发者,他是Knoppix和Adriana Linux的创始人,为了他失明的妻子开发程序。 -Knoppix holds a very special place in heart of those Linux users who started using Linux before Ubuntu came along. What makes Knoppix so special is that it popularized the concept of Live CD. Unlike Windows or Mac OS X, you could run the entire operating system from the CD without installing anything on the system. It allowed new users to test Linux on their systems without formatting the hard drive. The live feature of Linux alone contributed heavily to its popularity. +Knoppix在那些Linux用户心里有着特殊的地位,他们在使用Ubuntu之前都会尝试Knoppix,而Knoppix让人称道的就是它让Live CD的概念普及开来。不像Windows或Mac OS X,你可以通过CD运行整个操作系统而不用再系统上安装任何东西,它允许新用户在他们的机子上快速试用Linux而不用去格式化硬盘。Linux这种实时的特性为它的普及做出了巨大贡献。 ![Image courtesy Fórum Internacional Software Live/Flickr](http://images.techhive.com/images/article/2015/07/lennart-100599356-orig.jpg) Image courtesy [Fórum Internacional Software Live/Flickr][2] From 0fe382d37872929df93bbdc84daa135d253e9d8b Mon Sep 17 00:00:00 2001 From: DeadFire Date: Fri, 7 Aug 2015 15:25:11 +0800 Subject: [PATCH 144/207] =?UTF-8?q?20150807-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...riables on a Linux and Unix-like System.md | 98 +++++++++++++++++++ 1 file changed, 98 insertions(+) create mode 100644 sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md diff --git a/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md b/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md new file mode 100644 index 0000000000..b2fa80ff0a --- /dev/null +++ b/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md @@ -0,0 +1,98 @@ +How To: Temporarily Clear Bash Environment Variables on a Linux and Unix-like System +================================================================================ +I'm a bash shell user. I would like to temporarily clear bash shell environment variables. 
I do not want to delete or unset an exported environment variable. How do I run a program in a temporary environment in bash or ksh shell? + +You can use the env command to set and print environment on a Linux or Unix-like systems. The env command executes utility after modifying the environment as specified on the command line. + +### How do I display my current environment? ### + +Open the terminal application and type any one of the following command: + + printenv + +OR + + env + +Sample outputs: + +![Fig.01: Unix/Linux: List All Environment Variables Command](http://s0.cyberciti.org/uploads/faq/2015/08/env-unix-linux-command-output.jpg) +Fig.01: Unix/Linux: List All Environment Variables Command + +### Counting your environment variables ### + +Type the following command: + + env | wc -l + printenv | wc -l + +Sample outputs: + + 20 + +### Run a program in a clean environment in bash/ksh/zsh ### + +The syntax is as follows: + + env -i your-program-name-here arg1 arg2 ... + +For example, run the wget program without using http_proxy and/or all other variables i.e. temporarily clear all bash/ksh/zsh environment variables and run the wget program: + + env -i /usr/local/bin/wget www.cyberciti.biz + env -i wget www.cyberciti.biz + +This is very useful when you want to run a command ignoring any environment variables you have set. I use this command many times everyday to ignore the http_proxy and other environment variable I have set. + +#### Example: With the http_proxy #### + + $ wget www.cyberciti.biz + --2015-08-03 23:20:23-- http://www.cyberciti.biz/ + Connecting to 10.12.249.194:3128... connected. + Proxy request sent, awaiting response... 200 OK + Length: unspecified [text/html] + Saving to: 'index.html' + index.html [ <=> ] 36.17K 87.0KB/s in 0.4s + 2015-08-03 23:20:24 (87.0 KB/s) - 'index.html' saved [37041] + +#### Example: Ignore the http_proxy #### + + $ env -i /usr/local/bin/wget www.cyberciti.biz + --2015-08-03 23:25:17-- http://www.cyberciti.biz/ + Resolving www.cyberciti.biz... 74.86.144.194 + Connecting to www.cyberciti.biz|74.86.144.194|:80... connected. + HTTP request sent, awaiting response... 200 OK + Length: unspecified [text/html] + Saving to: 'index.html.1' + index.html.1 [ <=> ] 36.17K 115KB/s in 0.3s + 2015-08-03 23:25:18 (115 KB/s) - 'index.html.1' saved [37041] + +The option -i causes env command to completely ignore the environment it inherits. However, it does not prevent your command (such as wget or curl) setting new variables. Also, note down the side effect of running bash/ksh shell: + + env -i env | wc -l ## empty ## + # Now run bash ## + env -i bash + ## New enviroment set by bash program ## + env | wc -l + +#### Example: Set an environmental variable #### + +The syntax is: + + env var=value /path/to/command arg1 arg2 ... + ## OR ## + var=value /path/to/command arg1 arg2 ... 
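As an extra, hedged illustration of the syntax above, before the article's own http_proxy example: TZ is just a convenient stock variable to demonstrate with, and nothing about it is specific to this article's setup.

    env TZ=UTC date        # run one command with TZ overridden for that command only
    echo "TZ=$TZ"; date    # the calling shell's environment stays unchanged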
+ +For example set http_proxy: + + env http_proxy="http://USER:PASSWORD@server1.cyberciti.biz:3128/" \ + /usr/local/bin/wget www.cyberciti.biz + +-------------------------------------------------------------------------------- + +via: http://www.cyberciti.biz/faq/linux-unix-temporarily-clearing-environment-variables-command/ + +作者:Vivek Gite +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file From b9f635745b5dfc4ac1151540c4d74e4a842ddedd Mon Sep 17 00:00:00 2001 From: ictlyh Date: Fri, 7 Aug 2015 17:47:09 +0800 Subject: [PATCH 145/207] [Translating] sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md --- ...Bash Environment Variables on a Linux and Unix-like System.md | 1 + 1 file changed, 1 insertion(+) diff --git a/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md b/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md index b2fa80ff0a..715ecc2084 100644 --- a/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md +++ b/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md @@ -1,3 +1,4 @@ +Translating by ictlyh How To: Temporarily Clear Bash Environment Variables on a Linux and Unix-like System ================================================================================ I'm a bash shell user. I would like to temporarily clear bash shell environment variables. I do not want to delete or unset an exported environment variable. How do I run a program in a temporary environment in bash or ksh shell? From cff6b78e5da1392612b87b6975d9cf38b40ce89d Mon Sep 17 00:00:00 2001 From: martin qi Date: Fri, 7 Aug 2015 19:39:10 +0800 Subject: [PATCH 146/207] Update 20150716 Interview--Larry Wall.md --- sources/talk/20150716 Interview--Larry Wall.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150716 Interview--Larry Wall.md b/sources/talk/20150716 Interview--Larry Wall.md index 5d0b40d2ed..1362281517 100644 --- a/sources/talk/20150716 Interview--Larry Wall.md +++ b/sources/talk/20150716 Interview--Larry Wall.md @@ -1,3 +1,5 @@ +martin + Interview: Larry Wall ================================================================================ > Perl 6 has been 15 years in the making, and is now due to be released at the end of this year. We speak to its creator to find out what’s going on. 
@@ -122,4 +124,4 @@ via: http://www.linuxvoice.com/interview-larry-wall/

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

-[a]:http://www.linuxvoice.com/author/mike/
\ No newline at end of file
+[a]:http://www.linuxvoice.com/author/mike/

From a78cf0b53f3c126737ea4b77a4e7af8f552524f3 Mon Sep 17 00:00:00 2001
From: wxy
Date: Fri, 7 Aug 2015 22:05:46 +0800
Subject: [PATCH 147/207] PUB:20150728 How To Fix--There is no command
 installed for 7-zip archive files @GOLinux

---
 ...There is no command installed for 7-zip archive files.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
 rename {translated/tech => published}/20150728 How To Fix--There is no command installed for 7-zip archive files.md (96%)

diff --git a/translated/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md b/published/20150728 How To Fix--There is no command installed for 7-zip archive files.md
similarity index 96%
rename from translated/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md
rename to published/20150728 How To Fix--There is no command installed for 7-zip archive files.md
index 61237467ca..34a7af3190 100644
--- a/translated/tech/20150728 How To Fix--There is no command installed for 7-zip archive files.md
+++ b/published/20150728 How To Fix--There is no command installed for 7-zip archive files.md
@@ -5,10 +5,12 @@
 我试着在Ubuntu中安装Emerald图标主题,而这个主题被打包成了.7z归档包。和以往一样,我试着通过在GUI中右击并选择“提取到这里”来将它解压缩。但是Ubuntu 15.04却并没有解压文件,取而代之的,却是丢给了我一个下面这样的错误信息:

 > Could not open this file
+>
 > 无法打开该文件
 >
 > There is no command installed for 7-zip archive files. Do you want to search for a command to open this file?
-> 没有安装用于7-zip归档文件的命令。你是否想要搜索命令来打开该文件?
+>
+> 没有安装用于7-zip归档文件的命令。你是否想要搜索用于打开该文件的命令?
错误信息看上去是这样的: @@ -42,7 +44,7 @@ via: http://itsfoss.com/fix-there-is-no-command-installed-for-7-zip-archive-file 作者:[Abhishek][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From fb82c13465c5d60e15a2ddadd8c7f17ac307ea99 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 7 Aug 2015 22:47:51 +0800 Subject: [PATCH 148/207] PUB:20150803 Handy commands for profiling your Unix file systems @strugglingyouth --- ...ds for profiling your Unix file systems.md | 65 ++++++++++++++++++ ...ds for profiling your Unix file systems.md | 66 ------------------- 2 files changed, 65 insertions(+), 66 deletions(-) create mode 100644 published/20150803 Handy commands for profiling your Unix file systems.md delete mode 100644 translated/tech/20150803 Handy commands for profiling your Unix file systems.md diff --git a/published/20150803 Handy commands for profiling your Unix file systems.md b/published/20150803 Handy commands for profiling your Unix file systems.md new file mode 100644 index 0000000000..1bfc6ac4bd --- /dev/null +++ b/published/20150803 Handy commands for profiling your Unix file systems.md @@ -0,0 +1,65 @@ +使用 Find 命令来帮你找到那些需要清理的文件 +================================================================================ +![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) + +*Credit: Sandra H-S* + +有一个问题几乎困扰着所有的文件系统 -- 包括 Unix 和其他的 -- 那就是文件的不断积累。几乎没有人愿意花时间清理掉他们不再使用的文件和整理文件系统,结果,文件变得很混乱,很难找到有用的东西,要使它们运行良好、维护备份、易于管理,这将是一种持久的挑战。 + +我见过的一种解决问题的方法是建议使用者将所有的数据碎屑创建一个文件集合的总结报告或"概况",来报告诸如所有的文件数量;最老的,最新的,最大的文件;并统计谁拥有这些文件等数据。如果有人看到五年前的一个包含五十万个文件的文件夹,他们可能会去删除哪些文件 -- 或者,至少会归档和压缩。主要问题是太大的文件夹会使人担心误删一些重要的东西。如果有一个描述文件夹的方法能帮助显示文件的性质,那么你就可以去清理它了。 + +当我准备做 Unix 文件系统的总结报告时,几个有用的 Unix 命令能提供一些非常有用的统计信息。要计算目录中的文件数,你可以使用这样一个 find 命令。 + + $ find . -type f | wc -l + 187534 + +虽然查找最老的和最新的文件是比较复杂,但还是相当方便的。在下面的命令,我们使用 find 命令再次查找文件,以文件时间排序并按年-月-日的格式显示,在列表顶部的显然是最老的。 + +在第二个命令,我们做同样的,但打印的是最后一行,这是最新的。 + + $ find -type f -printf '%T+ %p\n' | sort | head -n 1 + 2006-02-03+02:40:33 ./skel/.xemacs/init.el + $ find -type f -printf '%T+ %p\n' | sort | tail -n 1 + 2015-07-19+14:20:16 ./.bash_history + +printf 命令输出 %T(文件日期和时间)和 %P(带路径的文件名)参数。 + +如果我们在查找家目录时,无疑会发现,history 文件(如 .bash_history)是最新的,这并没有什么用。你可以通过 "un-grepping" 来忽略这些文件,也可以忽略以.开头的文件,如下图所示的。 + + $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1 + 2015-07-19+13:02:12 ./isPrime + +寻找最大的文件使用 %s(大小)参数,包括文件名(%f),因为这就是我们想要在报告中显示的。 + + $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1 + 20183040 project.org.tar + +统计文件的所有者,使用%u(所有者) + + $ find -type f -printf '%u \n' | grep -v "\./\." 
| sort | uniq -c + 180034 shs + 7500 jdoe + +如果文件系统能记录上次的访问日期,也将是非常有用的,可以用来看该文件有没有被访问过,比方说,两年之内没访问过。这将使你能明确分辨这些文件的价值。这个最后访问(%a)参数这样使用: + + $ find -type f -printf '%a+ %p\n' | sort | head -n 1 + Fri Dec 15 03:00:30 2006+ ./statreport + +当然,如果大多数最近​​访问的文件也是在很久之前的,这看起来你需要处理更多文件了。 + + $ find -type f -printf '%a+ %p\n' | sort | tail -n 1 + Wed Nov 26 03:00:27 2007+ ./my-notes + +要想层次分明,可以为一个文件系统或大目录创建一个总结报告,显示这些文件的日期范围、最大的文件、文件所有者们、最老的文件和最新访问时间,可以帮助文件拥有者判断当前有哪些文件夹是重要的哪些该清理了。 + +-------------------------------------------------------------------------------- + +via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html + +作者:[Sandra Henry-Stocker][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ diff --git a/translated/tech/20150803 Handy commands for profiling your Unix file systems.md b/translated/tech/20150803 Handy commands for profiling your Unix file systems.md deleted file mode 100644 index 13efdcf0a1..0000000000 --- a/translated/tech/20150803 Handy commands for profiling your Unix file systems.md +++ /dev/null @@ -1,66 +0,0 @@ - -很实用的命令来分析你的 Unix 文件系统 -================================================================================ -![Credit: Sandra H-S](http://images.techhive.com/images/article/2015/07/file-profile-100597239-primary.idge.png) -Credit: Sandra H-S - -其中一个问题几乎困扰着所有的文件系统 -- 包括 Unix 和其他的 -- 那就是文件的不断积累。几乎没有人愿意花时间清理掉他们不再使用的文件和文件系统,结果,文件变得很混乱,很难找到有用的东西使它们运行良好,能够得到备份,并且易于管理,这将是一种持久的挑战。 - -我见过的一种解决问题的方法是鼓励使用者将所有的数据碎屑创建成一个总结报告或"profile"这样一个文件集合来报告所有的文件数量;最老的,最新的,最大的文件;并统计谁拥有这些文件。如果有人看到一个包含五十万个文件的文件夹并且时间不小于五年,他们可能会去删除哪些文件 -- 或者,至少归档和压缩。主要问题是太大的文件夹会使人产生压制性害怕误删一些重要的东西。有一个描述文件夹的方法能帮助显示文件的性质并期待你去清理它。 - - -当我准备做 Unix 文件系统的总结报告时,几个有用的 Unix 命令能提供一些非常有用的统计信息。要计算目录中的文件数,你可以使用这样一个 find 命令。 - - $ find . -type f | wc -l - 187534 - -查找最老的和最新的文件是比较复杂,但还是相当方便的。在下面的命令,我们使用 find 命令再次查找文件,以文件时间排序并按年-月-日的格式显示在顶部 -- 因此最老的 -- 的文件在列表中。 - -在第二个命令,我们做同样的,但打印的是最后一行 -- 这是最新的 -- 文件 - - $ find -type f -printf '%T+ %p\n' | sort | head -n 1 - 2006-02-03+02:40:33 ./skel/.xemacs/init.el - $ find -type f -printf '%T+ %p\n' | sort | tail -n 1 - 2015-07-19+14:20:16 ./.bash_history - -printf 命令输出 %T(文件日期和时间)和 %P(带路径的文件名)参数。 - -如果我们在查找家目录时,无疑会发现,history 文件是最新的,这不像是一个很有趣的信息。你可以通过 "un-grepping" 来忽略这些文件,也可以忽略以.开头的文件,如下图所示的。 - - $ find -type f -printf '%T+ %p\n' | grep -v "\./\." | sort | tail -n 1 - 2015-07-19+13:02:12 ./isPrime - -寻找最大的文件使用 %s(大小)参数,包括文件名(%f),因为这就是我们想要在报告中显示的。 - - $ find -type f -printf '%s %f \n' | sort -n | uniq | tail -1 - 20183040 project.org.tar - -打印文件的所有着者,使用%u(所有者) - - $ find -type f -printf '%u \n' | grep -v "\./\." 
| sort | uniq -c - 180034 shs - 7500 jdoe - -如果文件系统能记录上次的访问日期,也将是非常有用的来看该文件有没有被访问,比方说,两年之内。这将使你能明确分辨这些文件的价值。最后一个访问参数(%a)这样使用: - - $ find -type f -printf '%a+ %p\n' | sort | head -n 1 - Fri Dec 15 03:00:30 2006+ ./statreport - -当然,如果最近​​访问的文件也是在很久之前的,这将使你有更多的处理时间。 - - $ find -type f -printf '%a+ %p\n' | sort | tail -n 1 - Wed Nov 26 03:00:27 2007+ ./my-notes - -一个文件系统要层次分明,为大目录创建一个总结报告,显示该文件的日期范围,最大的文件,文件所有者,最老的和访问时间都可以帮助文件拥有者判断当前有哪些文件夹是重要的哪些该清理了。 - --------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2949898/linux/profiling-your-file-systems.html - -作者:[Sandra Henry-Stocker][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Sandra-Henry_Stocker/ From 96a056137fdee0854671418ce7aae8be952ba66d Mon Sep 17 00:00:00 2001 From: wi-cuckoo Date: Fri, 7 Aug 2015 23:52:45 +0800 Subject: [PATCH 149/207] translating wi-cuckoo --- ... Interview Experience on RedHat Linux Package Management.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md b/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md index 7915907e6a..6243a8c0de 100644 --- a/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md +++ b/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md @@ -1,3 +1,4 @@ +translating wi-cuckoo Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management ================================================================================ **Shilpa Nair has just graduated in the year 2015. She went to apply for Trainee position in a National News Television located in Noida, Delhi. When she was in the last year of graduation and searching for help on her assignments she came across Tecmint. 
Since then she has been visiting Tecmint regularly.** @@ -345,4 +346,4 @@ via: http://www.tecmint.com/linux-rpm-package-management-interview-questions/ [a]:http://www.tecmint.com/author/avishek/ [1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ \ No newline at end of file +[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ From 7b64f7af56f5b9513420c849bfbba8c91c12e926 Mon Sep 17 00:00:00 2001 From: wxy Date: Fri, 7 Aug 2015 23:53:07 +0800 Subject: [PATCH 150/207] PUB:20150504 How to access a Linux server behind NAT via reverse SSH tunnel @ictlyh --- ...erver behind NAT via reverse SSH tunnel.md | 34 +++++++++---------- 1 file changed, 17 insertions(+), 17 deletions(-) rename {translated/tech => published}/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md (58%) diff --git a/translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md b/published/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md similarity index 58% rename from translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md rename to published/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md index 5f9828e912..c6dddd3639 100644 --- a/translated/tech/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md +++ b/published/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md @@ -1,18 +1,18 @@ 如何通过反向 SSH 隧道访问 NAT 后面的 Linux 服务器 ================================================================================ -你在家里运行着一台 Linux 服务器,访问它需要先经过 NAT 路由器或者限制性防火墙。现在你想不在家的时候用 SSH 登录到这台服务器。你如何才能做到呢?SSH 端口转发当然是一种选择。但是,如果你需要处理多个嵌套的 NAT 环境,端口转发可能会变得非常棘手。另外,在多种 ISP 特定条件下可能会受到干扰,例如阻塞转发端口的限制性 ISP 防火墙、或者在用户间共享 IPv4 地址的运营商级 NAT。 +你在家里运行着一台 Linux 服务器,它放在一个 NAT 路由器或者限制性防火墙后面。现在你想在外出时用 SSH 登录到这台服务器。你如何才能做到呢?SSH 端口转发当然是一种选择。但是,如果你需要处理多级嵌套的 NAT 环境,端口转发可能会变得非常棘手。另外,在多种 ISP 特定条件下可能会受到干扰,例如阻塞转发端口的限制性 ISP 防火墙、或者在用户间共享 IPv4 地址的运营商级 NAT。 ### 什么是反向 SSH 隧道? 
### -SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧道的概念非常简单。对于此,在限制性家庭网络之外你需要另一台主机(所谓的“中继主机”),你能从当前所在地通过 SSH 登录。你可以用有公共 IP 地址的 [VPS 实例][1] 配置一个中继主机。然后要做的就是从你家庭网络服务器中建立一个到公共中继主机的永久 SSH 隧道。有了这个隧道,你就可以从中继主机中连接“回”家庭服务器(这就是为什么称之为 “反向” 隧道)。不管你在哪里、你家庭网络中的 NAT 或 防火墙限制多么严重,只要你可以访问中继主机,你就可以连接到家庭服务器。 +SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧道的概念非常简单。使用这种方案,在你的受限的家庭网络之外你需要另一台主机(所谓的“中继主机”),你能从当前所在地通过 SSH 登录到它。你可以用有公网 IP 地址的 [VPS 实例][1] 配置一个中继主机。然后要做的就是从你的家庭网络服务器中建立一个到公网中继主机的永久 SSH 隧道。有了这个隧道,你就可以从中继主机中连接“回”家庭服务器(这就是为什么称之为 “反向” 隧道)。不管你在哪里、你的家庭网络中的 NAT 或 防火墙限制多么严格,只要你可以访问中继主机,你就可以连接到家庭服务器。 ![](https://farm8.staticflickr.com/7742/17162647378_c7d9f10de8_b.jpg) ### 在 Linux 上设置反向 SSH 隧道 ### -让我们来看看怎样创建和使用反向 SSH 隧道。我们有如下假设。我们会设置一个从家庭服务器到中继服务器的反向 SSH 隧道,然后我们可以通过中继服务器从客户端计算机 SSH 登录到家庭服务器。**中继服务器** 的公共 IP 地址是 1.1.1.1。 +让我们来看看怎样创建和使用反向 SSH 隧道。我们做如下假设:我们会设置一个从家庭服务器(homeserver)到中继服务器(relayserver)的反向 SSH 隧道,然后我们可以通过中继服务器从客户端计算机(clientcomputer) SSH 登录到家庭服务器。本例中的**中继服务器** 的公网 IP 地址是 1.1.1.1。 -在家庭主机上,按照以下方式打开一个到中继服务器的 SSH 连接。 +在家庭服务器上,按照以下方式打开一个到中继服务器的 SSH 连接。 homeserver~$ ssh -fN -R 10022:localhost:22 relayserver_user@1.1.1.1 @@ -20,11 +20,11 @@ SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧 “-R 10022:localhost:22” 选项定义了一个反向隧道。它转发中继服务器 10022 端口的流量到家庭服务器的 22 号端口。 -用 “-fN” 选项,当你用一个 SSH 服务器成功通过验证时 SSH 会进入后台运行。当你不想在远程 SSH 服务器执行任何命令、就像我们的例子中只想转发端口的时候非常有用。 +用 “-fN” 选项,当你成功通过 SSH 服务器验证时 SSH 会进入后台运行。当你不想在远程 SSH 服务器执行任何命令,就像我们的例子中只想转发端口的时候非常有用。 运行上面的命令之后,你就会回到家庭主机的命令行提示框中。 -登录到中继服务器,确认 127.0.0.1:10022 绑定到了 sshd。如果是的话就表示已经正确设置了反向隧道。 +登录到中继服务器,确认其 127.0.0.1:10022 绑定到了 sshd。如果是的话就表示已经正确设置了反向隧道。 relayserver~$ sudo netstat -nap | grep 10022 @@ -36,13 +36,13 @@ SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧 relayserver~$ ssh -p 10022 homeserver_user@localhost -需要注意的一点是你在本地输入的 SSH 登录/密码应该是家庭服务器的,而不是中继服务器的,因为你是通过隧道的本地端点登录到家庭服务器。因此不要输入中继服务器的登录/密码。成功登陆后,你就在家庭服务器上了。 +需要注意的一点是你在上面为localhost输入的 SSH 登录/密码应该是家庭服务器的,而不是中继服务器的,因为你是通过隧道的本地端点登录到家庭服务器,因此不要错误输入中继服务器的登录/密码。成功登录后,你就在家庭服务器上了。 ### 通过反向 SSH 隧道直接连接到网络地址变换后的服务器 ### 上面的方法允许你访问 NAT 后面的 **家庭服务器**,但你需要登录两次:首先登录到 **中继服务器**,然后再登录到**家庭服务器**。这是因为中继服务器上 SSH 隧道的端点绑定到了回环地址(127.0.0.1)。 -事实上,有一种方法可以只需要登录到中继服务器就能直接访问网络地址变换之后的家庭服务器。要做到这点,你需要让中继服务器上的 sshd 不仅转发回环地址上的端口,还要转发外部主机的端口。这通过指定中继服务器上运行的 sshd 的 **网关端口** 实现。 +事实上,有一种方法可以只需要登录到中继服务器就能直接访问NAT之后的家庭服务器。要做到这点,你需要让中继服务器上的 sshd 不仅转发回环地址上的端口,还要转发外部主机的端口。这通过指定中继服务器上运行的 sshd 的 **GatewayPorts** 实现。 打开**中继服务器**的 /etc/ssh/sshd_conf 并添加下面的行。 @@ -74,23 +74,23 @@ SSH 端口转发的一种替代方案是 **反向 SSH 隧道**。反向 SSH 隧 tcp 0 0 1.1.1.1:10022 0.0.0.0:* LISTEN 1538/sshd: dev -不像之前的情况,现在隧道的端点是 1.1.1.1:10022(中继服务器的公共 IP 地址),而不是 127.0.0.1:10022。这就意味着从外部主机可以访问隧道的端点。 +不像之前的情况,现在隧道的端点是 1.1.1.1:10022(中继服务器的公网 IP 地址),而不是 127.0.0.1:10022。这就意味着从外部主机可以访问隧道的另一端。 现在在任何其它计算机(客户端计算机),输入以下命令访问网络地址变换之后的家庭服务器。 clientcomputer~$ ssh -p 10022 homeserver_user@1.1.1.1 -在上面的命令中,1.1.1.1 是中继服务器的公共 IP 地址,家庭服务器用户必须是和家庭服务器相关联的用户账户。这是因为你真正登录到的主机是家庭服务器,而不是中继服务器。后者只是中继你的 SSH 流量到家庭服务器。 +在上面的命令中,1.1.1.1 是中继服务器的公共 IP 地址,homeserver_user必须是家庭服务器上的用户账户。这是因为你真正登录到的主机是家庭服务器,而不是中继服务器。后者只是中继你的 SSH 流量到家庭服务器。 ### 在 Linux 上设置一个永久反向 SSH 隧道 ### -现在你已经明白了怎样创建一个反向 SSH 隧道,然后把隧道设置为 “永久”,这样隧道启动后就会一直运行(不管临时的网络拥塞、SSH 超时、中继主机重启,等等)。毕竟,如果隧道不是一直有效,你不可能可靠的登录到你的家庭服务器。 +现在你已经明白了怎样创建一个反向 SSH 隧道,然后把隧道设置为 “永久”,这样隧道启动后就会一直运行(不管临时的网络拥塞、SSH 超时、中继主机重启,等等)。毕竟,如果隧道不是一直有效,你就不能可靠的登录到你的家庭服务器。 -对于永久隧道,我打算使用一个叫 autossh 的工具。正如名字暗示的,这个程序允许你不管任何理由自动重启 SSH 会话。因此对于保存一个反向 SSH 隧道有效非常有用。 +对于永久隧道,我打算使用一个叫 autossh 的工具。正如名字暗示的,这个程序可以让你的 SSH 会话无论因为什么原因中断都会自动重连。因此对于保持一个反向 SSH 隧道非常有用。 第一步,我们要设置从家庭服务器到中继服务器的[无密码 SSH 登录][2]。这样的话,autossh 可以不需要用户干预就能重启一个损坏的反向 SSH 隧道。 -下一步,在初始化隧道的家庭服务器上[安装 
autossh][3]。

在家庭服务器上,用下面的参数运行 autossh 来创建一个连接到中继服务器的永久 SSH 隧道。

### 总结 ###

在这篇博文中,我介绍了如何从外部通过反向 SSH 隧道访问处于限制性防火墙或 NAT 网关之后的 Linux 服务器。这里我介绍了家庭网络中的一个使用事例,但在企业网络中使用时你尤其要小心。这样的隧道可能被视为违反公司政策,因为它绕过了企业的防火墙,并把企业网络暴露给外部攻击。它很可能被误用或者滥用,因此在使用之前一定要清楚它的影响。

--------------------------------------------------------------------------------

via: http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html

作者:[Dan Nanni][a]
译者:[ictlyh](https://github.com/ictlyh)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/go/digitalocean
-[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html
-[3]:http://ask.xmodulo.com/install-autossh-linux.html
+[2]:https://linux.cn/article-5444-1.html
+[3]:https://linux.cn/article-5459-1.html

From 569768fdea3706313f31ce6787da102f597d9e44 Mon Sep 17 00:00:00 2001
From: 白宦成
Date: Sat, 8 Aug 2015 03:26:54 +0800
Subject: [PATCH 151/207] =?UTF-8?q?=E6=9B=B4=E6=96=B0=E5=AE=8C=E6=88=90?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...o Perform File and Directory Management.md | 194 +++++++++---------
 1 file changed, 98 insertions(+), 96 deletions(-)

diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md b/sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md
index abf9910994..f46fd93321 100644
--- a/sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md
+++ b/sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md
@@ -1,59 +1,61 @@
-[translating by xiqingongzi]
-RHCSA Series: How to Perform File and Directory Management – Part 2
+RHCSA 系列: 如何进行文件和目录管理 – Part 2
 ================================================================================
-In this article, RHCSA Part 2: File and directory management, we will review some essential skills that are required in the day-to-day tasks of a system administrator.
+
+在本篇(RHCSA 第二篇:文件和目录管理)中,我们将回顾一些系统管理员日常任务所需的技能。

 ![RHCSA: Perform File and Directory Management – Part 2](http://www.tecmint.com/wp-content/uploads/2015/03/RHCSA-Part2.png)
-RHCSA: Perform File and Directory Management – Part 2
+
+RHCSA : 文件和目录管理 – 第二部分

 ### 创建,删除,复制和移动文件及目录 ###

-File and directory management is a critical competence that every system administrator should possess. This includes the ability to create / delete text files from scratch (the core of each program’s configuration) and directories (where you will organize files and other directories), and to find out the type of existing files.
+
+文件和目录管理是每一个系统管理员都应该掌握的必要技能。它包括从头创建和删除文本文件(每个程序配置的核心)及目录(用来组织文件和其他目录),以及识别已有文件的类型。

-The [touch command][1] can be used not only to create empty files, but also to update the access and modification times of existing files.
+ [touch 命令][1] 不仅仅能用来创建空文件,还能用来更新已存在的文件的权限和时间表 ![touch command example](http://www.tecmint.com/wp-content/uploads/2015/03/touch-command-example.png) -touch command example +touch 命令示例 -You can use `file [filename]` to determine a file’s type (this will come in handy before launching your preferred text editor to edit it). +你可以使用 `file [filename]`来判断一个文件的类型 (在你用文本编辑器编辑之前,判断类型将会更方便编辑). ![file command example](http://www.tecmint.com/wp-content/uploads/2015/03/file-command-example.png) -file command example +file 命令示例 -and `rm [filename]` to delete it. +使用`rm [filename]` 可以删除文件 ![Linux rm command examples](http://www.tecmint.com/wp-content/uploads/2015/03/rm-command-examples.png) -rm command example +rm 命令示例 + +对于目录,你可以使用`mkdir [directory]`在已经存在的路径中创建目录,或者使用 `mkdir -p [/full/path/to/directory].`带全路径创建文件夹 -As for directories, you can create directories inside existing paths with `mkdir [directory]` or create a full path with `mkdir -p [/full/path/to/directory].` ![mkdir command example](http://www.tecmint.com/wp-content/uploads/2015/03/mkdir-command-example.png) -mkdir command example +mkdir 命令示例 -When it comes to removing directories, you need to make sure that they’re empty before issuing the `rmdir [directory]` command, or use the more powerful (handle with care!) `rm -rf [directory]`. This last option will force remove recursively the `[directory]` and all its contents – so use it at your own risk. +当你想要去删除目录时,在你使用`rmdir [directory]` 前,你需要先确保目录是空的,或者使用更加强力的命令(小心使用它)`rm -rf [directory]`.后者会强制删除`[directory]`以及他的内容.所以使用这个命令存在一定的风险 -### Input and Output Redirection and Pipelining ### +### 输入输出重定向以及管道 ### -The command line environment provides two very useful features that allows to redirect the input and output of commands from and to files, and to send the output of a command to another, called redirection and pipelining, respectively. +命令行环境提供了两个非常有用的功能:允许命令重定向的输入和输出到文件和发送到另一个文件,分别称为重定向和管道 To understand those two important concepts, we must first understand the three most important types of I/O (Input and Output) streams (or sequences) of characters, which are in fact special files, in the *nix sense of the word. +为了理解这两个重要概念,我们首先需要理解通常情况下三个重要的输入输出流的形式 -- Standard input (aka stdin) is by default attached to the keyboard. In other words, the keyboard is the standard input device to enter commands to the command line. -- Standard output (aka stdout) is by default attached to the screen, the device that “receives” the output of commands and display them on the screen. -- Standard error (aka stderr), is where the status messages of a command is sent to by default, which is also the screen. +- 标准输入 (aka stdin) 是指默认使用键盘链接. 换句话说,键盘是输入命令到命令行的标准输入设备。 +- 标准输出 (aka stdout) 是指默认展示再屏幕上, 显示器接受输出命令,并且展示在屏幕上。 +- 标准错误 (aka stderr), 是指命令的状态默认输出, 同时也会展示在屏幕上 In the following example, the output of `ls /var` is sent to stdout (the screen), as well as the result of ls /tecmint. But in the latter case, it is stderr that is shown. +在下面的例子中,`ls /var`的结果被发送到stdout(屏幕展示),就像ls /tecmint 的结果。但在后一种情况下,它是标准错误输出。 ![Linux input output redirect](http://www.tecmint.com/wp-content/uploads/2015/03/Linux-input-output-redirect.png) +输入和输出命令实例 -Input and Output Example - -To more easily identify these special files, they are each assigned a file descriptor, an abstract representation that is used to access them. The essential thing to understand is that these files, just like others, can be redirected. What this means is that you can capture the output from a file or script and send it as input to another file, command, or script. 
This will allow you to store on disk, for example, the output of commands for later processing or analysis. +为了更容易识别这些特殊文件,每个文件都被分配有一个文件描述符(用于控制他们的抽象标识)。主要要理解的是,这些文件就像其他人一样,可以被重定向。这就意味着你可以从一个文件或脚本中捕获输出,并将它传送到另一个文件、命令或脚本中。你就可以在在磁盘上存储命令的输出结果,用于稍后的分析 To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operators are available. @@ -63,102 +65,102 @@ To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operato -Redirection Operator -Effect +转向操作 +效果 > -Redirects standard output to a file containing standard output. If the destination file exists, it will be overwritten. +标准输出到一个文件。如果目标文件存在,内容就会被重写 >> -Appends standard output to a file. +添加标准输出到文件尾部 2> -Redirects standard error to a file containing standard output. If the destination file exists, it will be overwritten. +标准错误输出到一个文件。如果目标文件存在,内容就会被重写 2>> -Appends standard error to the existing file. +添加标准错误输出到文件尾部. &> -Redirects both standard output and standard error to a file; if the specified file exists, it will be overwritten. +标准错误和标准输出都到一个文件。如果目标文件存在,内容就会被重写 < -Uses the specified file as standard input. +使用特定的文件做标准输出 <> -The specified file is used for both standard input and standard output. +使用特定的文件做标准输出和标准错误 -As opposed to redirection, pipelining is performed by adding a vertical bar `(|)` after a command and before another one. -Remember: +相比与重定向,管道是通过在命令后添加一个竖杠`(|)`再添加另一个命令 . -- Redirection is used to send the output of a command to a file, or to send a file as input to a command. -- Pipelining is used to send the output of a command to another command as input. +记得: -#### Examples Of Redirection and Pipelining #### +- 重定向是用来定向命令的输出到一个文件,或定向一个文件作为输入到一个命令。 +- 管道是用来将命令的输出转发到另一个命令作为输入。 -**Example 1: Redirecting the output of a command to a file** +#### 重定向和管道的使用实例 #### -There will be times when you will need to iterate over a list of files. To do that, you can first save that list to a file and then read that file line by line. While it is true that you can iterate over the output of ls directly, this example serves to illustrate redirection. +** 例1:将一个命令的输出到文件 ** + +有些时候,你需要遍历一个文件列表。要做到这样,你可以先将该列表保存到文件中,然后再按行读取该文件。虽然你可以遍历直接ls的输出,不过这个例子是用来说明重定向。 # ls -1 /var/mail > mail.txt ![Redirect output of command tot a file](http://www.tecmint.com/wp-content/uploads/2015/03/Redirect-output-to-a-file.png) -Redirect output of command tot a file +将一个命令的输出到文件 -**Example 2: Redirecting both stdout and stderr to /dev/null** +** 例2:重定向stdout和stderr到/dev/null ** -In case we want to prevent both stdout and stderr to be displayed on the screen, we can redirect both file descriptors to `/dev/null`. Note how the output changes when the redirection is implemented for the same command. +如果不想让标准输出和标准错误展示在屏幕上,我们可以把文件描述符重定向到 `/dev/null` 请注意在执行这个命令时该如何更改输出 # ls /var /tecmint # ls /var/ /tecmint &> /dev/null ![Redirecting stdout and stderr ouput to /dev/null](http://www.tecmint.com/wp-content/uploads/2015/03/Redirecting-stdout-stderr-ouput.png) -Redirecting stdout and stderr ouput to /dev/null +重定向stdout和stderr到/dev/null -#### Example 3: Using a file as input to a command #### +#### 例3:使用一个文件作为命令的输入 #### -While the classic syntax of the [cat command][2] is as follows. +当官方的[cat 命令][2]的语法如下时 # cat [file(s)] -You can also send a file as input, using the correct redirection operator. 
+您还可以使用正确的重定向操作符传送一个文件作为输入。 # cat < mail.txt ![Linux cat command examples](http://www.tecmint.com/wp-content/uploads/2015/03/cat-command-examples.png) -cat command example +cat 命令实例 -#### Example 4: Sending the output of a command as input to another #### +#### 例4:发送一个命令的输出作为另一个命令的输入 #### -If you have a large directory or process listing and want to be able to locate a certain file or process at a glance, you will want to pipeline the listing to grep. +如果你有一个较大的目录或进程列表,并且想快速定位,你或许需要将列表通过管道传送给grep -Note that we use to pipelines in the following example. The first one looks for the required keyword, while the second one will eliminate the actual `grep command` from the results. This example lists all the processes associated with the apache user. +接下来我们使用管道在下面的命令中,第一个是查找所需的关键词,第二个是除去产生的 `grep command`.这个例子列举了所有与apache用户有关的进程 # ps -ef | grep apache | grep -v grep ![Send output of command as input to another](http://www.tecmint.com/wp-content/uploads/2015/03/Send-output-of-command-as-input-to-another1.png) -Send output of command as input to another +发送一个命令的输出作为另一个命令的输入 -### Archiving, Compressing, Unpacking, and Uncompressing Files ### +### 归档,压缩,解包,解压文件 ### -If you need to transport, backup, or send via email a group of files, you will use an archiving (or grouping) tool such as [tar][3], typically used with a compression utility like gzip, bzip2, or xz. - -Your choice of a compression tool will be likely defined by the compression speed and rate of each one. Of these three compression tools, gzip is the oldest and provides the least compression, bzip2 provides improved compression, and xz is the newest and provides the best compression. Typically, files compressed with these utilities have .gz, .bz2, or .xz extensions, respectively. +如果你需要传输,备份,或者通过邮件发送一组文件,你可以使用一个存档(或文件夹)如 [tar][3]工具,通常使用gzip,bzip2,或XZ压缩工具. +您选择的压缩工具每一个都有自己的定义的压缩速度和速率的。这三种压缩工具,gzip是最古老和提供最小压缩的工具,bzip2提供经过改进的压缩,以及XZ提供最信和最好的压缩。通常情况下,这些文件都是被压缩的如.gz .bz2或.xz 注:表格 @@ -166,44 +168,44 @@ Your choice of a compression tool will be likely defined by the compression spee - - - + + + - + - + - + - + - + - + - +
CommandAbbreviationDescription命令缩写描述
–create cCreates a tar archive创建一个tar归档
–concatenate AAppends tar files to an archive向归档中添加tar文件
–append rAppends non-tar files to an archive向归档中添加非tar文件
–update uAppends files that are newer than those in an archive添加比归档中的文件更新的文件
–diff or –compare dCompares an archive to files on disk将归档和硬盘的文件夹进行对比
–list tLists the contents of a tarball列举一个tar的压缩包
–extract or –get xExtracts files from an archive从归档中解压文件
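在进入下一个表格之前,这里补充一个基于上表选项的简单示意(其中 backup.tar、newfile.txt 等文件名纯属假设的示例,并非本文实际用到的文件):

    # tar tvf backup.tar                  # 列出归档内容(t,--list)
    # tar rvf backup.tar newfile.txt      # 向归档追加文件(r,--append,仅适用于未压缩的归档)
    # tar xvf backup.tar -C /tmp/restore  # 解压到指定目录(x,--extract;目标目录需已存在)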
@@ -215,51 +217,51 @@ Your choice of a compression tool will be likely defined by the compression spee -Operation modifier -Abbreviation -Description +操作参数 +缩写 +描述 directory dir C -Changes to directory dir before performing operations +在执行操作前更改目录 same-permissions and same-owner p -Preserves permissions and ownership information, respectively. +分别保留权限和所有者信息 –verbose v -Lists all files as they are read or extracted; if combined with –list, it also displays file sizes, ownership, and timestamps +列举所有文件用于读取或提取,这里包含列表,并显示文件的大小、所有权和时间戳 exclude file -Excludes file from the archive. In this case, file can be an actual file or a pattern. +排除存档文件。在这种情况下,文件可以是一个实际的文件或目录。 gzip or gunzip z -Compresses an archive through gzip +使用gzip压缩文件 –bzip2 j -Compresses an archive through bzip2 +使用bzip2压缩文件 –xz J -Compresses an archive through xz +使用xz压缩文件 -#### Example 5: Creating a tarball and then compressing it using the three compression utilities #### +#### 例5:创建一个文件,然后使用三种压缩工具压缩#### -You may want to compare the effectiveness of each tool before deciding to use one or another. Note that while compressing small files, or a few files, the results may not show much differences, but may give you a glimpse of what they have to offer. +在决定使用一个或另一个工具之前,您可能想比较每个工具的压缩效率。请注意压缩小文件或几个文件,结果可能不会有太大的差异,但可能会给你看出他们的差异 # tar cf ApacheLogs-$(date +%Y%m%d).tar /var/log/httpd/* # Create an ordinary tarball # tar czf ApacheLogs-$(date +%Y%m%d).tar.gz /var/log/httpd/* # Create a tarball and compress with gzip @@ -268,51 +270,51 @@ You may want to compare the effectiveness of each tool before deciding to use on ![Linux tar command examples](http://www.tecmint.com/wp-content/uploads/2015/03/tar-command-examples.png) -tar command examples +tar 命令实例 -#### Example 6: Preserving original permissions and ownership while archiving and when #### +#### 例6:归档时同时保存原始权限和所有权 #### -If you are creating backups from users’ home directories, you will want to store the individual files with the original permissions and ownership instead of changing them to that of the user account or daemon performing the backup. The following example preserves these attributes while taking the backup of the contents in the `/var/log/httpd` directory: +如果你创建的是用户的主目录的备份,你需要要存储的个人文件与原始权限和所有权,而不是通过改变他们的用户帐户或守护进程来执行备份。下面的命令可以在归档时保留文件属性 # tar cJf ApacheLogs-$(date +%Y%m%d).tar.xz /var/log/httpd/* --same-permissions --same-owner -### Create Hard and Soft Links ### +### 创建软连接和硬链接 ### -In Linux, there are two types of links to files: hard links and soft (aka symbolic) links. Since a hard link represents another name for an existing file and is identified by the same inode, it then points to the actual data, as opposed to symbolic links, which point to filenames instead. +在Linux中,有2种类型的链接文件:硬链接和软(也称为符号)链接。因为硬链接文件代表另一个名称是由同一点确定,然后链接到实际的数据;符号链接指向的文件名,而不是实际的数据 -In addition, hard links do not occupy space on disk, while symbolic links do take a small amount of space to store the text of the link itself. The downside of hard links is that they can only be used to reference files within the filesystem where they are located because inodes are unique inside a filesystem. Symbolic links save the day, in that they point to another file or directory by name rather than by inode, and therefore can cross filesystem boundaries. +此外,硬链接不占用磁盘上的空间,而符号链接做占用少量的空间来存储的链接本身的文本。硬链接的缺点就是要求他们必须在同一个innode内。而符号链接没有这个限制,符号链接因为只保存了文件名和目录名,所以可以跨文件系统. 
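在看下面创建链接的基本语法之前,可以先用一个小实验直观验证上一段关于 inode 的说法(file.txt、hardcopy、softcopy 均为假设的示例文件名):

    # touch file.txt
    # ln file.txt hardcopy        # 硬链接:与原文件共享同一个 inode,链接计数变为 2
    # ln -s file.txt softcopy     # 软链接:拥有自己的 inode,文件类型标记为 l
    # ls -li file.txt hardcopy softcopy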
-The basic syntax to create links is similar in both cases:
+两种情况下,创建链接的基本语法是相似的:

- # ln TARGET LINK_NAME # Hard link named LINK_NAME to file named TARGET
- # ln -s TARGET LINK_NAME # Soft link named LINK_NAME to file named TARGET
+ # ln TARGET LINK_NAME # 创建指向 TARGET 的硬链接 LINK_NAME
+ # ln -s TARGET LINK_NAME # 创建指向 TARGET 的软链接 LINK_NAME

-#### Example 7: Creating hard and soft links ####
+#### 例7:创建硬链接和软链接 ####

-There is no better way to visualize the relation between a file and a hard or symbolic link that point to it, than to create those links. In the following screenshot you will see that the file and the hard link that points to it share the same inode and both are identified by the same disk usage of 466 bytes.
+要形象地说明一个文件与指向它的硬链接或符号链接之间的关系,没有比亲手创建这些链接更好的方式了。在下面的截图中你会看到,文件和指向它的硬链接共享同一个 inode,两者都显示为相同的 466 字节磁盘占用。

-On the other hand, creating a hard link results in an extra disk usage of 5 bytes. Not that you’re going to run out of storage capacity, but this example is enough to illustrate the difference between a hard link and a soft link.
+另一方面,创建软链接则会额外占用 5 个字节的磁盘空间。这并不是说你会因此耗尽存储容量,但这个例子足以说明硬链接和软链接之间的区别。

 ![Difference between a hard link and a soft link](http://www.tecmint.com/wp-content/uploads/2015/03/hard-soft-link.png)

-Difference between a hard link and a soft link
+软连接和硬链接之间的不同

-A typical usage of symbolic links is to reference a versioned file in a Linux system. Suppose there are several programs that need access to file fooX.Y, which is subject to frequent version updates (think of a library, for example). Instead of updating every single reference to fooX.Y every time there’s a version update, it is wiser, safer, and faster, to have programs look to a symbolic link named just foo, which in turn points to the actual fooX.Y.
+符号链接的一个典型用法是引用 Linux 系统中带版本号的文件。假设有多个程序需要访问会频繁更新版本的文件 fooX.Y(比如一个库文件)。与其在每次版本更新时修改对 fooX.Y 的每一处引用,不如让程序统一指向一个名为 foo 的符号链接,再由它指向实际的 fooX.Y,这样更明智、更安全也更快捷(文末附有一个简短的命令示意)。

-Thus, when X and Y change, you only need to edit the symbolic link foo with a new destination name instead of tracking every usage of the destination file and updating it.
+这样,当 X 和 Y 变化时,你只需把符号链接 foo 改指到新的目标文件,而不必追踪并更新目标文件的每一处使用。

-### Summary ###
+### 总结 ###

-In this article we have reviewed some essential file and directory management skills that must be a part of every system administrator’s tool-set. Make sure to review other parts of this series as well in order to integrate these topics with the content covered in this tutorial.
+在这篇文章中,我们回顾了一些基本的文件和目录管理技能,它们是每个系统管理员工具集的一部分。请确保阅读本系列的其他部分,并把这些主题与本教程涵盖的内容结合起来复习。

-Feel free to let us know if you have any questions or comments. We are always more than glad to hear from our readers.
+如果你有任何问题或意见,请随时告诉我们。我们总是很乐意收到读者的反馈。
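补充前文所说的带版本号文件这一典型用法的命令示意(libfoo 及其版本号均为虚构的假设示例,并非本文用到的文件):

    # ln -s libfoo.so.1.2 libfoo.so    # 程序只引用符号链接 libfoo.so
    # ln -sfn libfoo.so.1.3 libfoo.so  # 升级时只需重建这一个符号链接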
-------------------------------------------------------------------------------- via: http://www.tecmint.com/file-and-directory-management-in-linux/ 作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) +译者:[xiqingongzi](https://github.com/xiqingongzi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 483fe49938411ce8d731a9e6893d56246d1a363a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E7=99=BD=E5=AE=A6=E6=88=90?= Date: Sat, 8 Aug 2015 03:27:35 +0800 Subject: [PATCH 152/207] =?UTF-8?q?=E9=A2=86RHCSA=E7=AC=AC=E4=B8=89?= =?UTF-8?q?=E7=AF=87?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...eries--Part 03--How to Manage Users and Groups in RHEL 7.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md b/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md index be78c87e3a..0b85744c6c 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md @@ -1,3 +1,4 @@ +[translated by xiqingongzi] RHCSA Series: How to Manage Users and Groups in RHEL 7 – Part 3 ================================================================================ Managing a RHEL 7 server, as it is the case with any other Linux server, will require that you know how to add, edit, suspend, or delete user accounts, and grant users the necessary permissions to files, directories, and other system resources to perform their assigned tasks. @@ -245,4 +246,4 @@ via: http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ [2]:http://www.tecmint.com/usermod-command-examples/ [3]:http://www.tecmint.com/ls-interview-questions/ [4]:http://www.tecmint.com/file-and-directory-management-in-linux/ -[5]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ \ No newline at end of file +[5]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ From 6a1641ce6e2328d1ad15908f34c9ecb80e7c19f3 Mon Sep 17 00:00:00 2001 From: xiqingongzi Date: Sat, 8 Aug 2015 03:38:00 +0800 Subject: [PATCH 153/207] Move --- ...t 01--Reviewing Essential Commands and System Documentation.md | 0 ...ries--Part 02--How to Perform File and Directory Management.md | 0 2 files changed, 0 insertions(+), 0 deletions(-) rename {sources/tech/RHCSA Series => translated/tech/RHCSA}/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md (100%) rename {sources/tech/RHCSA Series => translated/tech/RHCSA}/RHCSA Series--Part 02--How to Perform File and Directory Management.md (100%) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md b/translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md similarity index 100% rename from sources/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md rename to translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md b/translated/tech/RHCSA/RHCSA Series--Part 02--How to Perform File and Directory Management.md similarity index 100% rename 
from sources/tech/RHCSA Series/RHCSA Series--Part 02--How to Perform File and Directory Management.md rename to translated/tech/RHCSA/RHCSA Series--Part 02--How to Perform File and Directory Management.md From 339bbf0b5c6ba7649972b300e40535316fabbb64 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Sat, 8 Aug 2015 18:06:08 +0800 Subject: [PATCH 154/207] [Translated] tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md --- ...riables on a Linux and Unix-like System.md | 47 +++++++++---------- 1 file changed, 23 insertions(+), 24 deletions(-) rename {sources => translated}/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md (50%) diff --git a/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md b/translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md similarity index 50% rename from sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md rename to translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md index 715ecc2084..202c4e304a 100644 --- a/sources/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md +++ b/translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md @@ -1,50 +1,49 @@ -Translating by ictlyh -How To: Temporarily Clear Bash Environment Variables on a Linux and Unix-like System +如何在 Linux 和类 Unix 系统上临时清空 Bash 环境变量 ================================================================================ -I'm a bash shell user. I would like to temporarily clear bash shell environment variables. I do not want to delete or unset an exported environment variable. How do I run a program in a temporary environment in bash or ksh shell? +我是个 bash shell 用户。我想临时清空 bash shell 环境变量。但我不想删除或者 unset 一个 export 环境变量。我怎样才能在 bash 或 ksh shell 的临时环境中运行程序呢? -You can use the env command to set and print environment on a Linux or Unix-like systems. The env command executes utility after modifying the environment as specified on the command line. +你可以在 Linux 或类 Unix 系统中使用 env 命令设置并打印环境。env 命令将环境修改为命令行指定的那样之后再执行程序。 -### How do I display my current environment? ### +### 如何显示当前环境? ### -Open the terminal application and type any one of the following command: +打开终端应用程序并输入下面的其中一个命令: printenv -OR +或 env -Sample outputs: +输出样例: -![Fig.01: Unix/Linux: List All Environment Variables Command](http://s0.cyberciti.org/uploads/faq/2015/08/env-unix-linux-command-output.jpg) -Fig.01: Unix/Linux: List All Environment Variables Command +![Fig.01: Unix/Linux: 列出所有环境变量](http://s0.cyberciti.org/uploads/faq/2015/08/env-unix-linux-command-output.jpg) +Fig.01: Unix/Linux: 列出所有环境变量 -### Counting your environment variables ### +### 统计环境变量数目 ### -Type the following command: +输入下面的命令: env | wc -l printenv | wc -l -Sample outputs: +输出样例: 20 -### Run a program in a clean environment in bash/ksh/zsh ### +### 在 bash/ksh/zsh 干净环境中运行程序 ### -The syntax is as follows: +语法如下所示: env -i your-program-name-here arg1 arg2 ... -For example, run the wget program without using http_proxy and/or all other variables i.e. 
temporarily clear all bash/ksh/zsh environment variables and run the wget program: +例如,不使用 http_proxy 和/或任何其它变量运行 wget 程序。临时清除所有 bash/ksh/zsh 环境变量并运行 wget 程序: env -i /usr/local/bin/wget www.cyberciti.biz env -i wget www.cyberciti.biz -This is very useful when you want to run a command ignoring any environment variables you have set. I use this command many times everyday to ignore the http_proxy and other environment variable I have set. +这当你想忽视任何已经设置的环境变量来运行命令时非常有用。我每天都会多次使用这个命令,以便忽视 http_proxy 和其它我设置的环境变量。 -#### Example: With the http_proxy #### +#### 例子:使用 http_proxy #### $ wget www.cyberciti.biz --2015-08-03 23:20:23-- http://www.cyberciti.biz/ @@ -55,7 +54,7 @@ This is very useful when you want to run a command ignoring any environment vari index.html [ <=> ] 36.17K 87.0KB/s in 0.4s 2015-08-03 23:20:24 (87.0 KB/s) - 'index.html' saved [37041] -#### Example: Ignore the http_proxy #### +#### 例子:忽视 http_proxy #### $ env -i /usr/local/bin/wget www.cyberciti.biz --2015-08-03 23:25:17-- http://www.cyberciti.biz/ @@ -67,7 +66,7 @@ This is very useful when you want to run a command ignoring any environment vari index.html.1 [ <=> ] 36.17K 115KB/s in 0.3s 2015-08-03 23:25:18 (115 KB/s) - 'index.html.1' saved [37041] -The option -i causes env command to completely ignore the environment it inherits. However, it does not prevent your command (such as wget or curl) setting new variables. Also, note down the side effect of running bash/ksh shell: +-i 选项使 env 命令完全忽视它继承的环境。但是,它并不阻止你的命令(例如 wget 或 curl)设置新的变量。同时,也要注意运行 bash/ksh shell 的副作用: env -i env | wc -l ## empty ## # Now run bash ## @@ -75,15 +74,15 @@ The option -i causes env command to completely ignore the environment it inherit ## New enviroment set by bash program ## env | wc -l -#### Example: Set an environmental variable #### +#### 例子:设置一个环境变量 #### -The syntax is: +语法如下: env var=value /path/to/command arg1 arg2 ... ## OR ## var=value /path/to/command arg1 arg2 ... 
-For example set http_proxy: +例如设置 http_proxy: env http_proxy="http://USER:PASSWORD@server1.cyberciti.biz:3128/" \ /usr/local/bin/wget www.cyberciti.biz @@ -93,7 +92,7 @@ For example set http_proxy: via: http://www.cyberciti.biz/faq/linux-unix-temporarily-clearing-environment-variables-command/ 作者:Vivek Gite -译者:[译者ID](https://github.com/译者ID) +译者:[ictlyh](https://github.com/ictlyh) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file From a2890da52e074dbd6b53ca274db3ce87202569fa Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 8 Aug 2015 22:53:24 +0800 Subject: [PATCH 155/207] PUB:20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux @dingdongnigetou --- ...y Using 'Explain Shell' Script in Linux.md | 36 +++++++++---------- 1 file changed, 18 insertions(+), 18 deletions(-) rename {translated/tech => published}/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md (68%) diff --git a/translated/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/published/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md similarity index 68% rename from translated/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md rename to published/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md index b8f993676c..d31df55711 100644 --- a/translated/tech/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md +++ b/published/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md @@ -1,16 +1,16 @@ -在Linux中利用"Explain Shell"脚本更容易地理解Shell命令 +轻松使用“Explain Shell”脚本来理解 Shell 命令 ================================================================================ -在某些时刻, 当我们在Linux平台上工作时我们所有人都需要shell命令的帮助信息。 尽管内置的帮助像man pages、whatis命令是有帮助的, 但man pages的输出非常冗长, 除非是个有linux经验的人,不然从大量的man pages中获取帮助信息是非常困难的,而whatis命令的输出很少超过一行, 这对初学者来说是不够的。 +我们在Linux上工作时,每个人都会遇到需要查找shell命令的帮助信息的时候。 尽管内置的帮助像man pages、whatis命令有所助益, 但man pages的输出非常冗长, 除非是个有linux经验的人,不然从大量的man pages中获取帮助信息是非常困难的,而whatis命令的输出很少超过一行, 这对初学者来说是不够的。 ![Explain Shell Commands in Linux Shell](http://www.tecmint.com/wp-content/uploads/2015/07/Explain-Shell-Commands-in-Linux-Shell.jpeg) -在Linux Shell中解释Shell命令 +*在Linux Shell中解释Shell命令* -有一些第三方应用程序, 像我们在[Commandline Cheat Sheet for Linux Users][1]提及过的'cheat'命令。Cheat是个杰出的应用程序,即使计算机没有联网也能提供shell命令的帮助, 但是它仅限于预先定义好的命令。 +有一些第三方应用程序, 像我们在[Linux 用户的命令行速查表][1]提及过的'cheat'命令。cheat是个优秀的应用程序,即使计算机没有联网也能提供shell命令的帮助, 但是它仅限于预先定义好的命令。 -Jackson写了一小段代码,它能非常有效地在bash shell里面解释shell命令,可能最美之处就是你不需要安装第三方包了。他把包含这段代码的的文件命名为”explain.sh“。 +Jackson写了一小段代码,它能非常有效地在bash shell里面解释shell命令,可能最美之处就是你不需要安装第三方包了。他把包含这段代码的的文件命名为“explain.sh”。 -#### Explain工具的特性 #### +#### explain.sh工具的特性 #### - 易嵌入代码。 - 不需要安装第三方工具。 @@ -18,22 +18,22 @@ Jackson写了一小段代码,它能非常有效地在bash shell里面解释she - 需要网络连接才能工作。 - 纯命令行工具。 - 可以解释bash shell里面的大部分shell命令。 -- 无需root账户参与。 +- 无需使用root账户。 **先决条件** -唯一的条件就是'curl'包了。 在如今大多数Linux发行版里面已经预安装了culr包, 如果没有你可以按照下面的命令来安装。 +唯一的条件就是'curl'包了。 在如今大多数Linux发行版里面已经预安装了curl包, 如果没有你可以按照下面的命令来安装。 # apt-get install curl [On Debian systems] # yum install curl [On CentOS systems] ### 在Linux上安装explain.sh工具 ### -我们要将下面这段代码插入'~/.bashrc'文件(LCTT注: 若没有该文件可以自己新建一个)中。我们必须为每个用户以及对应的'.bashrc'文件插入这段代码,笔者建议你不要加在root用户下。 +我们要将下面这段代码插入'~/.bashrc'文件(LCTT译注: 若没有该文件可以自己新建一个)中。我们要为每个用户以及对应的'.bashrc'文件插入这段代码,但是建议你不要加在root用户下。 
我们注意到.bashrc文件的第一行代码以(#)开始, 这个是可选的并且只是为了区分余下的代码。 -# explain.sh 标记代码的开始, 我们将代码插入.bashrc文件的底部。 +\# explain.sh 标记代码的开始, 我们将代码插入.bashrc文件的底部。 # explain.sh begins explain () { @@ -53,7 +53,7 @@ Jackson写了一小段代码,它能非常有效地在bash shell里面解释she ### explain.sh工具的使用 ### -在插入代码并保存之后,你必须退出当前的会话然后重新登录来使改变生效(LCTT注:你也可以直接使用命令“source~/.bashrc”来让改变生效)。每件事情都是交由‘curl’命令处理, 它负责将需要解释的命令以及命令选项传送给mankier服务,然后将必要的信息打印到Linux命令行。不必说的就是使用这个工具你总是需要连接网络。 +在插入代码并保存之后,你必须退出当前的会话然后重新登录来使改变生效(LCTT译注:你也可以直接使用命令`source~/.bashrc` 来让改变生效)。每件事情都是交由‘curl’命令处理, 它负责将需要解释的命令以及命令选项传送给mankier服务,然后将必要的信息打印到Linux命令行。不必说的就是使用这个工具你总是需要连接网络。 让我们用explain.sh脚本测试几个笔者不懂的命令例子。 @@ -63,7 +63,7 @@ Jackson写了一小段代码,它能非常有效地在bash shell里面解释she ![Get Help on du Command](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-du-Command.png) -获得du命令的帮助 +*获得du命令的帮助* **2.如果你忘了'tar -zxvf'的作用,你可以简单地如此做:** @@ -71,7 +71,7 @@ Jackson写了一小段代码,它能非常有效地在bash shell里面解释she ![Tar Command Help](http://www.tecmint.com/wp-content/uploads/2015/07/Tar-Command-Help.png) -Tar命令帮助 +*Tar命令帮助* **3.我的一个朋友经常对'whatis'以及'whereis'命令的使用感到困惑,所以我建议他:** @@ -86,7 +86,7 @@ Tar命令帮助 ![Whatis Whereis Commands Help](http://www.tecmint.com/wp-content/uploads/2015/07/Whatis-Whereis-Commands-Help.png) -Whatis/Whereis命令的帮助 +*Whatis/Whereis命令的帮助* 你只需要使用“Ctrl+c”就能退出交互模式。 @@ -96,11 +96,11 @@ Whatis/Whereis命令的帮助 ![Get Help on Multiple Commands](http://www.tecmint.com/wp-content/uploads/2015/07/Get-Help-on-Multiple-Commands.png) -获取多条命令的帮助 +*获取多条命令的帮助* -同样地,你可以请求你的shell来解释任何shell命令。 前提是你需要一个可用的网络。输出的信息是基于解释的需要从服务器中生成的,因此输出的结果是不可定制的。 +同样地,你可以请求你的shell来解释任何shell命令。 前提是你需要一个可用的网络。输出的信息是基于需要解释的命令,从服务器中生成的,因此输出的结果是不可定制的。 -对于我来说这个工具真的很有用并且它已经荣幸地添加在我的.bashrc文件中。你对这个项目有什么想法?它对你有用么?它的解释令你满意吗?请让我知道吧! +对于我来说这个工具真的很有用,并且它已经荣幸地添加在我的.bashrc文件中。你对这个项目有什么想法?它对你有用么?它的解释令你满意吗?请让我知道吧! 请在下面评论为我们提供宝贵意见,喜欢并分享我们以及帮助我们得到传播。 @@ -110,7 +110,7 @@ via: http://www.tecmint.com/explain-shell-commands-in-the-linux-shell/ 作者:[Avishek Kumar][a] 译者:[dingdongnigetou](https://github.com/dingdongnigetou) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 8a5094fa7787dc36c9761d7128a0bd15b1cb8a64 Mon Sep 17 00:00:00 2001 From: Ezio Date: Sat, 8 Aug 2015 22:54:02 +0800 Subject: [PATCH 156/207] Update 20150728 Process of the Linux kernel building.md --- sources/tech/20150728 Process of the Linux kernel building.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md index cb7ec19b45..1c03ebbe72 100644 --- a/sources/tech/20150728 Process of the Linux kernel building.md +++ b/sources/tech/20150728 Process of the Linux kernel building.md @@ -1,3 +1,5 @@ +Translating by Ezio + Process of the Linux kernel building ================================================================================ Introduction @@ -671,4 +673,4 @@ via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled. 
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From be302cdf123a3ab2492aee561432876902949655 Mon Sep 17 00:00:00 2001 From: wxy Date: Sat, 8 Aug 2015 23:50:31 +0800 Subject: [PATCH 157/207] PUB:20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System @ictlyh --- ...riables on a Linux and Unix-like System.md | 32 +++++++++---------- 1 file changed, 16 insertions(+), 16 deletions(-) rename {translated/tech => published}/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md (71%) diff --git a/translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md b/published/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md similarity index 71% rename from translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md rename to published/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md index 202c4e304a..2157cdc4e6 100644 --- a/translated/tech/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md +++ b/published/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md @@ -1,8 +1,8 @@ -如何在 Linux 和类 Unix 系统上临时清空 Bash 环境变量 +如何在 Linux 上运行命令前临时清空 Bash 环境变量 ================================================================================ -我是个 bash shell 用户。我想临时清空 bash shell 环境变量。但我不想删除或者 unset 一个 export 环境变量。我怎样才能在 bash 或 ksh shell 的临时环境中运行程序呢? +我是个 bash shell 用户。我想临时清空 bash shell 环境变量。但我不想删除或者 unset 一个输出的环境变量。我怎样才能在 bash 或 ksh shell 的临时环境中运行程序呢? -你可以在 Linux 或类 Unix 系统中使用 env 命令设置并打印环境。env 命令将环境修改为命令行指定的那样之后再执行程序。 +你可以在 Linux 或类 Unix 系统中使用 env 命令设置并打印环境。env 命令可以按命令行指定的变量来修改环境,之后再执行程序。 ### 如何显示当前环境? ### @@ -17,29 +17,30 @@ 输出样例: ![Fig.01: Unix/Linux: 列出所有环境变量](http://s0.cyberciti.org/uploads/faq/2015/08/env-unix-linux-command-output.jpg) -Fig.01: Unix/Linux: 列出所有环境变量 + +*Fig.01: Unix/Linux: 列出所有环境变量* ### 统计环境变量数目 ### 输入下面的命令: env | wc -l - printenv | wc -l + printenv | wc -l # 或者 输出样例: 20 -### 在 bash/ksh/zsh 干净环境中运行程序 ### +### 在干净的 bash/ksh/zsh 环境中运行程序 ### 语法如下所示: env -i your-program-name-here arg1 arg2 ... -例如,不使用 http_proxy 和/或任何其它变量运行 wget 程序。临时清除所有 bash/ksh/zsh 环境变量并运行 wget 程序: +例如,要在不使用 http_proxy 和/或任何其它环境变量的情况下运行 wget 程序。临时清除所有 bash/ksh/zsh 环境变量并运行 wget 程序: env -i /usr/local/bin/wget www.cyberciti.biz - env -i wget www.cyberciti.biz + env -i wget www.cyberciti.biz # 或者 这当你想忽视任何已经设置的环境变量来运行命令时非常有用。我每天都会多次使用这个命令,以便忽视 http_proxy 和其它我设置的环境变量。 @@ -66,12 +67,12 @@ Fig.01: Unix/Linux: 列出所有环境变量 index.html.1 [ <=> ] 36.17K 115KB/s in 0.3s 2015-08-03 23:25:18 (115 KB/s) - 'index.html.1' saved [37041] --i 选项使 env 命令完全忽视它继承的环境。但是,它并不阻止你的命令(例如 wget 或 curl)设置新的变量。同时,也要注意运行 bash/ksh shell 的副作用: +-i 选项使 env 命令完全忽视它继承的环境。但是,它并不会阻止你的命令(例如 wget 或 curl)设置新的变量。同时,也要注意运行 bash/ksh shell 的副作用: - env -i env | wc -l ## empty ## - # Now run bash ## + env -i env | wc -l ## 空的 ## + # 现在运行 bash ## env -i bash - ## New enviroment set by bash program ## + ## bash 设置了新的环境变量 ## env | wc -l #### 例子:设置一个环境变量 #### @@ -79,13 +80,12 @@ Fig.01: Unix/Linux: 列出所有环境变量 语法如下: env var=value /path/to/command arg1 arg2 ... 
- ## OR ## + ## 或 ## var=value /path/to/command arg1 arg2 ... 例如设置 http_proxy: - env http_proxy="http://USER:PASSWORD@server1.cyberciti.biz:3128/" \ - /usr/local/bin/wget www.cyberciti.biz + env http_proxy="http://USER:PASSWORD@server1.cyberciti.biz:3128/" /usr/local/bin/wget www.cyberciti.biz -------------------------------------------------------------------------------- @@ -93,6 +93,6 @@ via: http://www.cyberciti.biz/faq/linux-unix-temporarily-clearing-environment-va 作者:Vivek Gite 译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 \ No newline at end of file From 1f7c172f463723aa85ea3605c88de9e89835f78b Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 9 Aug 2015 10:51:47 +0800 Subject: [PATCH 158/207] translating --- ...0 How to Setup iTOP (IT Operational Portal) on CentOS 7.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md b/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md index 38477bb662..8b598999e1 100644 --- a/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md +++ b/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md @@ -1,3 +1,5 @@ +tranlsating---geekpi + How to Setup iTOP (IT Operational Portal) on CentOS 7 ================================================================================ iTOP is a simple, Open source web based IT Service Management tool. It has all of ITIL functionality that includes with Service desk, Configuration Management, Incident Management, Problem Management, Change Management and Service Management. iTop relays on Apache/IIS, MySQL and PHP, so it can run on any operating system supporting these applications. Since iTop is a web based application you don’t need to deploy any client software on each user’s PC. A simple web browser is enough to perform day to day operations of an IT environment with iTOP. @@ -171,4 +173,4 @@ via: http://linoxide.com/tools/setup-itop-centos-7/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://linoxide.com/author/kashifs/ -[1]:http://www.combodo.com/spip.php?page=rubrique&id_rubrique=8 \ No newline at end of file +[1]:http://www.combodo.com/spip.php?page=rubrique&id_rubrique=8 From 3e3f2fab11865dd4cdbb700a62507cd8d14d2f4d Mon Sep 17 00:00:00 2001 From: geekpi Date: Sun, 9 Aug 2015 16:34:00 +0800 Subject: [PATCH 159/207] translated --- ...TOP (IT Operational Portal) on CentOS 7.md | 104 +++++++++--------- 1 file changed, 51 insertions(+), 53 deletions(-) diff --git a/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md b/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md index 8b598999e1..dd20493d77 100644 --- a/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md +++ b/sources/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md @@ -1,30 +1,28 @@ -tranlsating---geekpi - -How to Setup iTOP (IT Operational Portal) on CentOS 7 +如何在CentOS上安装iTOP(IT操作门户) ================================================================================ -iTOP is a simple, Open source web based IT Service Management tool. 
It has all of ITIL functionality that includes with Service desk, Configuration Management, Incident Management, Problem Management, Change Management and Service Management. iTop relays on Apache/IIS, MySQL and PHP, so it can run on any operating system supporting these applications. Since iTop is a web based application you don’t need to deploy any client software on each user’s PC. A simple web browser is enough to perform day to day operations of an IT environment with iTOP.
+iTOP简单来说是一个基于网络的开源IT服务管理工具。它拥有全部的ITIL功能,包括服务台、配置管理、事件管理、问题管理、更改管理和服务管理。iTOP依赖于Apache/IIS、MySQL和PHP,因此它可以运行在任何支持这些软件的操作系统中。因为iTOP是一个网络程序,所以你不必在用户的PC端安装任何客户端程序,一个简单的浏览器就足以完成日常的IT环境操作。

我们要在一台满足基本需求、配有LAMP环境的CentOS 7上安装和配置iTOP。

### 下载 iTOP ###

iTOP的下载包现在放在SourceForge上,我们可以从那里获取它的官方[链接][1]。

![itop download](http://blog.linoxide.com/wp-content/uploads/2015/07/1-itop-download.png)

我们用wget命令从上面的链接获取压缩文件。

    [root@centos-007 ~]# wget http://downloads.sourceforge.net/project/itop/itop/2.1.0/iTop-2.1.0-2127.zip

### iTop扩展和网络安装 ###

使用unzip命令把它解压到apache根目录下的itop文件夹中。

    [root@centos-7 ~]# ls
    iTop-2.1.0-2127.zip
    [root@centos-7 ~]# unzip iTop-2.1.0-2127.zip -d /var/www/html/itop/

列出安装包中的内容。

    [root@centos-7 ~]# ls -lh /var/www/html/itop/
    total 68K
    -rw-r--r--. 1 root root 1.4K Dec 17 2014 INSTALL
    -rw-r--r--. 1 root root 35K Dec 17 2014 LICENSE
    -rw-r--r--. 1 root root 23K Dec 17 2014 README
    drwxr-xr-x. 19 root root 4.0K Jul 14 13:10 web

这些是我们可以安装的扩展。

    [root@centos-7 2.x]# ls
    authent-external itop-backup itop-config-mgmt itop-problem-mgmt itop-service-mgmt-provider itop-welcome-itil
    authent-ldap itop-bridge-virtualization-storage itop-datacenter-mgmt itop-profiles-itil itop-sla-computation version.xml
    authent-local itop-change-mgmt itop-endusers-devices itop-request-mgmt itop-storage-mgmt wizard-icons
    installation.xml itop-change-mgmt-itil itop-incident-mgmt-itil itop-request-mgmt-itil itop-tickets
    itop-attachments itop-config itop-knownerror-mgmt itop-service-mgmt itop-virtualization-mgmt

在解压出来的目录下,用复制命令把需要的扩展从datamodels下的各个数据模型目录复制到web的扩展目录(extensions)下。

    [root@centos-7 2.x]# pwd
    /var/www/html/itop/web/datamodels/2.x
    [root@centos-7 2.x]# cp -r itop-request-mgmt itop-service-mgmt itop-service-mgmt itop-config itop-change-mgmt /var/www/html/itop/web/extensions/

### 安装 iTop web界面 ###

大多数服务端设置和配置已经完成了。最后,我们通过web界面来完成整个安装。

打开浏览器,使用IP地址或者FQDN来访问iTOP的web目录。

    http://servers_ip_address/itop/web/
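
(LCTT 译注:在打开浏览器访问上面的安装地址之前,最好先确认 Apache 和数据库服务已经在运行,并放行了 HTTP 流量。下面的命令只是一个参考示例,其中假设你的 CentOS 7 使用默认的 httpd、mariadb 服务名并启用了 firewalld,请按你的实际环境调整。)

    [root@centos-7 ~]# systemctl start httpd mariadb
    [root@centos-7 ~]# systemctl enable httpd mariadb
    [root@centos-7 ~]# firewall-cmd --permanent --add-service=http
    [root@centos-7 ~]# firewall-cmd --reload
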
Let’s configure it as per your requirements like we did here in this tutorial.
+你会被重定向到iTOP的web安装页面。让我们按照要求配置,就像在这篇教程中做的那样。

#### 先决要求验证 ####

这一步你会看到先决条件验证的欢迎界面。如果你看到了一些警告信息,就需要先安装相应的软件来解决这些问题。

![mcrypt missing](http://blog.linoxide.com/wp-content/uploads/2015/07/2-itop-web-install.png)

这一步会提示缺少一个名为php mcrypt的可选包。下载下面的rpm包,接着尝试安装php mcrypt包。

    [root@centos-7 ~]# yum localinstall php-mcrypt-5.3.3-1.el6.x86_64.rpm libmcrypt-2.5.8-9.el6.x86_64.rpm

成功安装完php-mcrypt后,我们需要重启apache服务,接着刷新页面,这时验证应该已经OK。

#### 安装或者升级 iTop ####

现在,因为我们是在没有安装过iTOP的服务器上,所以选择全新安装。

![Install New iTop](http://blog.linoxide.com/wp-content/uploads/2015/07/3.png)

#### iTop 许可协议 ####

勾选同意iTOP所有组件的许可协议并点击“NEXT”。

![License Agreement](http://blog.linoxide.com/wp-content/uploads/2015/07/4.png)

#### 数据库配置 ####

现在我们输入数据库凭据来配置数据库连接,接着如下选择创建新数据库。

![DB Connection](http://blog.linoxide.com/wp-content/uploads/2015/07/5.png)

#### 管理员账户 ####

这一步中我们输入登录信息来配置管理员账户。

![Admin Account](http://blog.linoxide.com/wp-content/uploads/2015/07/6.png)

#### 杂项参数 ####

让我们选择附加参数,决定是安装演示内容还是使用全新的数据库,接着进入下一步。

![Misc Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/7.png)

### iTop 配置管理 ###

下面的选项用于配置iTOP要管理的元素类型;像CMDB、数据中心设备、存储设备和虚拟化这些基础对象在iTOP中是必须的。

![Conf Management](http://blog.linoxide.com/wp-content/uploads/2015/07/8.png)

#### 服务管理 ####

选择一个最能描述你的IT设备和环境之间关系的选项。我们这里选择面向服务提供商的服务管理。

![Service Management](http://blog.linoxide.com/wp-content/uploads/2015/07/9.png)

#### iTop Tickets 管理 ####
+从不同的可用选项我们选择符合ITIL Tickets管理选项来管理不同类型的用户请求和事件。 ![Ticket Management](http://blog.linoxide.com/wp-content/uploads/2015/07/10.png) -#### Change Management Options #### +#### 改变管理选项 #### -Select the type of tickets you want to use in order to manage changes to the IT infrastructure from the available options. We are going to choose ITIL change management option here. +选择不同的ticket类型以便管理可用选项中的IT设备更改。我们选择ITTL更改管理选项。 ![ITIL Change](http://blog.linoxide.com/wp-content/uploads/2015/07/11.png) -#### iTop Extensions #### +#### iTop 扩展 #### -In this section we can select the additional extensions to install or we can unchecked the ones that you want to skip. +这一节我们选择额外的扩展来安装或者不选直接跳过。 ![iTop Extensions](http://blog.linoxide.com/wp-content/uploads/2015/07/13.png) -### Ready to Start Web Installation ### +### 准备开始web安装 ### -Now we are ready to start installing the components that we choose in previous steps. We can also drop down these installation parameters to view our configuration from the drop down. +现在我们开始准备安装先前先前选择的组件。我们也可以下拉这些安装参数来浏览我们的配置。 -Once you are confirmed with the installation parameters click on the install button. +确认安装参数后点击安装按钮。 ![Installation Parameters](http://blog.linoxide.com/wp-content/uploads/2015/07/16.png) -Let's wait for the progress bar to complete the installation process. It might takes few minutes to complete its installation process. +让我们等待进度条来完成安装步骤。它也许会花费几分钟来完成安装步骤。 ![iTop Installation Process](http://blog.linoxide.com/wp-content/uploads/2015/07/17.png) -### iTop Installation Done ### +### iTop安装完成 ### -Our iTop installation setup is complete, just need to do a simple manual operation as shown and then click to enter iTop. +我们的iTOP安装已经完成了,只要如下一个简单的手动操作就可以进入到iTOP。 ![iTop Done](http://blog.linoxide.com/wp-content/uploads/2015/07/18.png) -### Welcome to iTop (IT Operational Portal) ### +### 欢迎来到iTop (IT操作门户) ### ![itop welcome note](http://blog.linoxide.com/wp-content/uploads/2015/07/20.png) -### iTop Dashboard ### +### iTop 面板 ### -You can manage configuration of everything from here Servers, computers, Contacts, Locations, Contracts, Network devices…. You can create your own. Just the fact, that the installed CMDB module is great which is an essential part of every bigger IT. +你这里可以配置任何东西,服务、计算机、通讯录、位置、合同、网络设备等等。你可以创建你自己的。事实是刚安装的CMDB模块是每一个IT人员的必备模块。 ![iTop Dashboard](http://blog.linoxide.com/wp-content/uploads/2015/07/19.png) -### Conclusion ### +### 总结 ### -ITOP is one of the best Open Source Service Desk solutions. We have successfully installed and configured it on our CentOS 7 cloud host. So, the most powerful aspect of iTop is the ease with which it can be customized via its “extensions”. Feel free to comment if you face any trouble during its setup. 
+ITOP是一个最棒的开源桌面服务解决方案。我们已经在CentOS 7上成功地安装和配置了。因此,iTOP最强大的一方面是它可以很简单地通过扩展来自定义。如果你在安装中遇到任何问题欢迎评论。 -------------------------------------------------------------------------------- via: http://linoxide.com/tools/setup-itop-centos-7/ 作者:[Kashif Siddique][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From b88de33b50ce231372d1ce08194d09a7d28989d7 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Sun, 9 Aug 2015 19:30:54 +0800 Subject: [PATCH 160/207] [Translated]20150803 Linux Logging Basics.md --- sources/tech/20150803 Linux Logging Basics.md | 92 ------------------- .../tech/20150803 Linux Logging Basics.md | 90 ++++++++++++++++++ 2 files changed, 90 insertions(+), 92 deletions(-) delete mode 100644 sources/tech/20150803 Linux Logging Basics.md create mode 100644 translated/tech/20150803 Linux Logging Basics.md diff --git a/sources/tech/20150803 Linux Logging Basics.md b/sources/tech/20150803 Linux Logging Basics.md deleted file mode 100644 index 6c3c3693a4..0000000000 --- a/sources/tech/20150803 Linux Logging Basics.md +++ /dev/null @@ -1,92 +0,0 @@ -FSSlc translating - -Linux Logging Basics -================================================================================ -First we’ll describe the basics of what Linux logs are, where to find them, and how they get created. If you already know this stuff, feel free to skip to the next section. - -### Linux System Logs ### - -Many valuable log files are automatically created for you by Linux. You can find them in your /var/log directory. Here is what this directory looks like on a typical Ubuntu system: - -![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-system-log-terminal.png) - -Some of the most important Linux system logs include: - -- /var/log/syslog or /var/log/messages stores all global system activity data, including startup messages. Debian-based systems like Ubuntu store this in /var/log/syslog. RedHat-based systems like RHEL or CentOS store this in /var/log/messages. -- /var/log/auth.log or /var/log/secure stores logs from the Pluggable Authentication Module (pam) including successful logins, failed login attempts, and authentication methods. Ubuntu and Debian store authentication messages in /var/log/auth.log. RedHat and CentOS store this data in /var/log/secure. -- /var/log/kern stores kernel error and warning data, which is particularly helpful for troubleshooting custom kernels. -- /var/log/cron stores information about cron jobs. Use this data to verify that your cron jobs are running successfully. - -Digital Ocean has a thorough [tutorial][1] on these files and how rsyslog creates them on common distributions like RedHat and CentOS. - -Applications also write log files in this directory. For example, popular servers like Apache, Nginx, MySQL, and more can write log files here. Some of these log files are written by the application itself. Others are created through syslog (see below). - -### What’s Syslog? ### - -How do Linux system log files get created? The answer is through the syslog daemon, which listens for log messages on the syslog socket /dev/log and then writes them to the appropriate log file. - -The word “syslog” is an overloaded term and is often used in short to refer to one of these: - -1. **Syslog daemon** — a program to receive, process, and send syslog messages. 
It can [send syslog remotely][2] to a centralized server or write it to a local file. Common examples include rsyslogd and syslog-ng. In this usage, people will often say “sending to syslog.” -1. **Syslog protocol** — a transport protocol specifying how logs can be sent over a network and a data format definition for syslog messages (below). It’s officially defined in [RFC-5424][3]. The standard ports are 514 for plaintext logs and 6514 for encrypted logs. In this usage, people will often say “sending over syslog.” -1. **Syslog messages** — log messages or events in the syslog format, which includes a header with several standard fields. In this usage, people will often say “sending syslog.” - -Syslog messages or events include a header with several standard fields, making analysis and routing easier. They include the timestamp, the name of the application, the classification or location in the system where the message originates, and the priority of the issue. - -Here is an example log message with the syslog header included. It’s from the sshd daemon, which controls remote logins to the system. This message describes a failed login attempt: - - <34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2 - -### Syslog Format and Fields ### - -Each syslog message includes a header with fields. Fields are structured data that makes it easier to analyze and route the events. Here is the format we used to generate the above syslog example. You can match each value to a specific field name. - - <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%n - -Below, you’ll find descriptions of some of the most commonly used syslog fields when searching or troubleshooting issues. - -#### Timestamp #### - -The [timestamp][4] field (2003-10-11T22:14:15.003Z in the example) indicates the time and date that the message was generated on the system sending the message. That time can be different from when another system receives the message. The example timestamp breaks down like this: - -- **2003-10-11** is the year, month, and day. -- **T** is a required element of the TIMESTAMP field, separating the date and the time. -- **22:14:15.003** is the 24-hour format of the time, including the number of milliseconds (**003**) into the next second. -- **Z** is an optional element, indicating UTC time. Instead of Z, the example could have included an offset, such as -08:00, which indicates that the time is offset from UTC by 8 hours, PST. - -#### Hostname #### - -The [hostname][5] field (server1.com in the example above) indicates the name of the host or system that sent the message. - -#### App-Name #### - -The [app-name][6] field (sshd:auth in the example) indicates the name of the application that sent the message. - -#### Priority #### - -The priority field or [pri][7] for short (<34> in the example above) tells you how urgent or severe the event is. It’s a combination of two numerical fields: the facility and the severity. The severity ranges from the number 7 for debug events all the way to 0 which is an emergency. The facility describes which process created the event. It ranges from 0 for kernel messages to 23 for local application use. - -Pri can be output in two ways. The first is as a single number prival which is calculated as the facility field value multiplied by 8, then the result is added to the severity field value: (facility)(8) + (severity). 
The second is pri-text which will output in the string format “facility.severity.” The latter format can often be easier to read and search but takes up more storage space. - --------------------------------------------------------------------------------- - -via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/ - -作者:[Jason Skowronski][a1] -作者:[Amy Echeverri][a2] -作者:[Sadequl Hussain][a3] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a1]:https://www.linkedin.com/in/jasonskowronski -[a2]:https://www.linkedin.com/in/amyecheverri -[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 -[1]:https://www.digitalocean.com/community/tutorials/how-to-view-and-configure-linux-logs-on-ubuntu-and-centos -[2]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.y2e9tdfk1cdb -[3]:https://tools.ietf.org/html/rfc5424 -[4]:https://tools.ietf.org/html/rfc5424#section-6.2.3 -[5]:https://tools.ietf.org/html/rfc5424#section-6.2.4 -[6]:https://tools.ietf.org/html/rfc5424#section-6.2.5 -[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1 diff --git a/translated/tech/20150803 Linux Logging Basics.md b/translated/tech/20150803 Linux Logging Basics.md new file mode 100644 index 0000000000..00acdf183e --- /dev/null +++ b/translated/tech/20150803 Linux Logging Basics.md @@ -0,0 +1,90 @@ +Linux 日志基础 +================================================================================ +首先,我们将描述有关 Linux 日志是什么,到哪儿去找它们以及它们是如何创建的基础知识。如果你已经知道这些,请随意跳至下一节。 + +### Linux 系统日志 ### + +许多有价值的日志文件都是由 Linux 自动地为你创建的。你可以在 `/var/log` 目录中找到它们。下面是在一个典型的 Ubuntu 系统中这个目录的样子: + +![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Linux-system-log-terminal.png) + +一些最为重要的 Linux 系统日志包括: + +- `/var/log/syslog` 或 `/var/log/messages` 存储所有的全局系统活动数据,包括开机信息。基于 Debian 的系统如 Ubuntu 在 `/var/log/syslog` 目录中存储它们,而基于 RedHat 的系统如 RHEL 或 CentOS 则在 `/var/log/messages` 中存储它们。 +- `/var/log/auth.log` 或 `/var/log/secure` 存储来自可插拔认证模块(PAM)的日志,包括成功的登录,失败的登录尝试和认证方式。Ubuntu 和 Debian 在 `/var/log/auth.log` 中存储认证信息,而 RedHat 和 CentOS 则在 `/var/log/secure` 中存储该信息。 +- `/var/log/kern` 存储内核错误和警告数据,这对于排除与自定义内核相关的故障尤为实用。 +- `/var/log/cron` 存储有关 cron 作业的信息。使用这个数据来确保你的 cron 作业正成功地运行着。 + +Digital Ocean 有一个完整的关于这些文件及 rsyslog 如何在常见的发行版本如 RedHat 和 CentOS 中创建它们的 [教程][1] 。 + +应用程序也会在这个目录中写入日志文件。例如像 Apache,Nginx,MySQL 等常见的服务器程序可以在这个目录中写入日志文件。其中一些日志文件由应用程序自己创建,其他的则通过 syslog (具体见下文)来创建。 + +### 什么是 Syslog? ### + +Linux 系统日志文件是如何创建的呢?答案是通过 syslog 守护程序,它在 syslog +套接字 `/dev/log` 上监听日志信息,然后将它们写入适当的日志文件中。 + +单词“syslog” 是一个重载的条目,并经常被用来简称如下的几个名称之一: + +1. **Syslog 守护进程** — 一个用来接收,处理和发送 syslog 信息的程序。它可以[远程发送 syslog][2] 到一个集中式的服务器或写入一个本地文件。常见的例子包括 rsyslogd 和 syslog-ng。在这种使用方式中,人们常说 "发送到 syslog." +1. **Syslog 协议** — 一个指定日志如何通过网络来传送的传输协议和一个针对 syslog 信息(具体见下文) 的数据格式的定义。它在 [RFC-5424][3] 中被正式定义。对于文本日志,标准的端口是 514,对于加密日志,端口是 6514。在这种使用方式中,人们常说"通过 syslog 传送." +1. **Syslog 信息** — syslog 格式的日志信息或事件,它包括一个带有几个标准域的文件头。在这种使用方式中,人们常说"发送 syslog." 
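
(LCTT 译注:如果想直观体验一下上面所说的“发送 syslog”,可以用大多数发行版自带的 logger 命令发一条测试日志。下面只是一个简单示例,其中假设系统运行着 rsyslog,user 类的信息会写入 /var/log/syslog(Ubuntu/Debian)或 /var/log/messages(RedHat/CentOS),日志文件路径请按你的发行版调整。)

    # 以 user 设备域、info 紧急性域发送一条测试信息
    logger -p user.info "test message from logger"
    # 查看刚刚写入的日志(以 Ubuntu/Debian 为例)
    tail -n 1 /var/log/syslog
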
+ +Syslog 信息或事件包括一个带有几个标准域的 header ,使得分析和路由更方便。它们包括时间戳,应用程序的名称,在系统中信息来源的分类或位置,以及事件的优先级。 + +下面展示的是一个包含 syslog header 的日志信息,它来自于 sshd 守护进程,它控制着到该系统的远程登录,这个信息描述的是一次失败的登录尝试: + + <34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2 + +### Syslog 格式和域 ### + +每条 syslog 信息包含一个带有域的 header,这些域是结构化的数据,使得分析和路由事件更加容易。下面是我们使用的用来产生上面的 syslog 例子的格式,你可以将每个值匹配到一个特定的域的名称上。 + + <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%n + +下面,你将看到一些在查找或排错时最常使用的 syslog 域: + +#### 时间戳 #### + +[时间戳][4] (上面的例子为 2003-10-11T22:14:15.003Z) 暗示了在系统中发送该信息的时间和日期。这个时间在另一系统上接收该信息时可能会有所不同。上面例子中的时间戳可以分解为: + +- **2003-10-11** 年,月,日. +- **T** 为时间戳的必需元素,它将日期和时间分离开. +- **22:14:15.003** 是 24 小时制的时间,包括进入下一秒的毫秒数(**003**). +- **Z** 是一个可选元素,指的是 UTC 时间,除了 Z,这个例子还可以包括一个偏移量,例如 -08:00,这意味着时间从 UTC 偏移 8 小时,即 PST 时间. + +#### 主机名 #### + +[主机名][5] 域(在上面的例子中对应 server1.com) 指的是主机的名称或发送信息的系统. + +#### 应用名 #### + +[应用名][6] 域(在上面的例子中对应 sshd:auth) 指的是发送信息的程序的名称. + +#### 优先级 #### + +优先级域或缩写为 [pri][7] (在上面的例子中对应 <34>) 告诉我们这个事件有多紧急或多严峻。它由两个数字域组成:设备域和紧急性域。紧急性域从代表 debug 类事件的数字 7 一直到代表紧急事件的数字 0 。设备域描述了哪个进程创建了该事件。它从代表内核信息的数字 0 到代表本地应用使用的 23 。 + +Pri 有两种输出方式。第一种是以一个单独的数字表示,可以这样计算:先用设备域的值乘以 8,再加上紧急性域的值:(设备域)(8) + (紧急性域)。第二种是 pri 文本,将以“设备域.紧急性域” 的字符串格式输出。后一种格式更方便阅读和搜索,但占据更多的存储空间。 +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:https://www.digitalocean.com/community/tutorials/how-to-view-and-configure-linux-logs-on-ubuntu-and-centos +[2]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.y2e9tdfk1cdb +[3]:https://tools.ietf.org/html/rfc5424 +[4]:https://tools.ietf.org/html/rfc5424#section-6.2.3 +[5]:https://tools.ietf.org/html/rfc5424#section-6.2.4 +[6]:https://tools.ietf.org/html/rfc5424#section-6.2.5 +[7]:https://tools.ietf.org/html/rfc5424#section-6.2.1 \ No newline at end of file From 093951ac6cd1e1d65a99bf76c5224a8d146c52a2 Mon Sep 17 00:00:00 2001 From: wxy Date: Sun, 9 Aug 2015 23:53:54 +0800 Subject: [PATCH 161/207] PUB:20150730 Must-Know Linux Commands For New Users @GOLinux --- ... 
Must-Know Linux Commands For New Users.md | 43 ++++++++++--------- 1 file changed, 22 insertions(+), 21 deletions(-) rename {translated/tech => published}/20150730 Must-Know Linux Commands For New Users.md (72%) diff --git a/translated/tech/20150730 Must-Know Linux Commands For New Users.md b/published/20150730 Must-Know Linux Commands For New Users.md similarity index 72% rename from translated/tech/20150730 Must-Know Linux Commands For New Users.md rename to published/20150730 Must-Know Linux Commands For New Users.md index 230cecf736..657d7372bb 100644 --- a/translated/tech/20150730 Must-Know Linux Commands For New Users.md +++ b/published/20150730 Must-Know Linux Commands For New Users.md @@ -1,11 +1,12 @@ 新手应知应会的Linux命令 ================================================================================ ![Manage system updates via the command line with dnf on Fedora.](http://www.linux.com/images/stories/41373/fedora-cli.png) -在Fedora上通过命令行使用dnf来管理系统更新 -基于Linux的系统的优点之一,就是你可以通过终端中使用命令该ing来管理整个系统。使用命令行的优势在于,你可以使用相同的知识和技能来管理随便哪个Linux发行版。 +*在Fedora上通过命令行使用dnf来管理系统更新* -对于各个发行版以及桌面环境(DE)而言,要一致地使用图形化用户界面(GUI)却几乎是不可能的,因为它们都提供了各自的用户界面。要明确的是,有那么些情况,你需要在不同的发行版上使用不同的命令来部署某些特定的任务,但是,或多或少它们的概念和意图却仍然是一致的。 +基于Linux的系统最美妙的一点,就是你可以在终端中使用命令行来管理整个系统。使用命令行的优势在于,你可以使用相同的知识和技能来管理随便哪个Linux发行版。 + +对于各个发行版以及桌面环境(DE)而言,要一致地使用图形化用户界面(GUI)却几乎是不可能的,因为它们都提供了各自的用户界面。要明确的是,有些情况下在不同的发行版上需要使用不同的命令来执行某些特定的任务,但是,基本来说它们的思路和目的是一致的。 在本文中,我们打算讨论Linux用户应当掌握的一些基本命令。我将给大家演示怎样使用命令行来更新系统、管理软件、操作文件以及切换到root,这些操作将在三个主要发行版上进行:Ubuntu(也包括其定制版和衍生版,还有Debian),openSUSE,以及Fedora。 @@ -15,7 +16,7 @@ Linux是基于安全设计的,但事实上是,任何软件都有缺陷,会导致安全漏洞。所以,保持你的系统更新到最新是十分重要的。这么想吧:运行过时的操作系统,就像是你坐在全副武装的坦克里头,而门却没有锁。武器会保护你吗?任何人都可以进入开放的大门,对你造成伤害。同样,在你的系统中也有没有打补丁的漏洞,这些漏洞会危害到你的系统。开源社区,不像专利世界,在漏洞补丁方面反应是相当快的,所以,如果你保持系统最新,你也获得了安全保证。 -留意新闻站点,了解安全漏洞。如果发现了一个漏洞,请阅读之,然后在补丁出来的第一时间更新。不管怎样,在生产机器上,你每星期必须至少运行一次更新命令。如果你运行这一台复杂的服务器,那么就要额外当心了。仔细阅读变更日志,以确保更新不会搞坏你的自定义服务。 +留意新闻站点,了解安全漏洞。如果发现了一个漏洞,了解它,然后在补丁出来的第一时间更新。不管怎样,在生产环境上,你每星期必须至少运行一次更新命令。如果你运行着一台复杂的服务器,那么就要额外当心了。仔细阅读变更日志,以确保更新不会搞坏你的自定义服务。 **Ubuntu**:牢记一点:你在升级系统或安装不管什么软件之前,都必须要刷新仓库(也就是repos)。在Ubuntu上,你可以使用下面的命令来更新系统,第一个命令用于刷新仓库: @@ -29,7 +30,7 @@ Linux是基于安全设计的,但事实上是,任何软件都有缺陷,会 sudo apt-get dist-upgrade -**openSUSE**:如果你是在openSUSE上,你可以使用以下命令来更新系统(照例,第一个命令的意思是更新仓库) +**openSUSE**:如果你是在openSUSE上,你可以使用以下命令来更新系统(照例,第一个命令的意思是更新仓库): sudo zypper refresh sudo zypper up @@ -42,7 +43,7 @@ Linux是基于安全设计的,但事实上是,任何软件都有缺陷,会 ### 软件安装与移除 ### 你只可以安装那些你系统上启用的仓库中可用的包,各个发行版默认都附带有并启用了一些官方或者第三方仓库。 -**Ubuntu**: To install any package on Ubuntu, first update the repo and then use this syntax: + **Ubuntu**:要在Ubuntu上安装包,首先更新仓库,然后使用下面的语句: sudo apt-get install [package_name] @@ -75,9 +76,9 @@ Linux是基于安全设计的,但事实上是,任何软件都有缺陷,会 ### 如何管理第三方软件? 
### -在一个庞大的开发者社区中,这些开发者们为用户提供了许多的软件。不同的发行版有不同的机制来使用这些第三方软件,将它们提供给用户。同时也取决于开发者怎样将这些软件提供给用户,有些开发者会提供二进制包,而另外一些开发者则将软件发布到仓库中。 +在一个庞大的开发者社区中,这些开发者们为用户提供了许多的软件。不同的发行版有不同的机制来将这些第三方软件提供给用户。当然,同时也取决于开发者怎样将这些软件提供给用户,有些开发者会提供二进制包,而另外一些开发者则将软件发布到仓库中。 -Ubuntu严重依赖于PPA(个人包归档),但是,不幸的是,它却没有提供一个内建工具来帮助用于搜索这些PPA仓库。在安装软件前,你将需要通过Google搜索PPA,然后手工添加该仓库。下面就是添加PPA到系统的方法: +Ubuntu很多地方都用到PPA(个人包归档),但是,不幸的是,它却没有提供一个内建工具来帮助用于搜索这些PPA仓库。在安装软件前,你将需要通过Google搜索PPA,然后手工添加该仓库。下面就是添加PPA到系统的方法: sudo add-apt-repository ppa: @@ -85,7 +86,7 @@ Ubuntu严重依赖于PPA(个人包归档),但是,不幸的是,它却 sudo add-apt-repository ppa:libreoffice/ppa -它会要你按下回车键来导入秘钥。完成后,使用'update'命令来刷新仓库,然后安装该包。 +它会要你按下回车键来导入密钥。完成后,使用'update'命令来刷新仓库,然后安装该包。 openSUSE拥有一个针对第三方应用的优雅的解决方案。你可以访问software.opensuse.org,一键点击搜索并安装相应包,它会自动将对应的仓库添加到你的系统中。如果你想要手工添加仓库,可以使用该命令: @@ -97,13 +98,13 @@ openSUSE拥有一个针对第三方应用的优雅的解决方案。你可以访 sudo zypper refresh sudo zypper install libreoffice -Fedora用户只需要添加RPMFusion(free和non-free仓库一起),该仓库包含了大量的应用。如果你需要添加仓库,命令如下: +Fedora用户只需要添加RPMFusion(包括自由软件和非自由软件仓库),该仓库包含了大量的应用。如果你需要添加该仓库,命令如下: -dnf config-manager --add-repo http://www.example.com/example.repo + dnf config-manager --add-repo http://www.example.com/example.repo ### 一些基本命令 ### -我已经写了一些关于使用CLI来管理你系统上的文件的[文章][1],下面介绍一些基本米ing令,这些命令在所有发行版上都经常会用到。 +我已经写了一些关于使用CLI来管理你系统上的文件的[文章][1],下面介绍一些基本命令,这些命令在所有发行版上都经常会用到。 拷贝文件或目录到一个新的位置: @@ -113,13 +114,13 @@ dnf config-manager --add-repo http://www.example.com/example.repo cp path_of_files/* path_of_the_directory_where_you_want_to_copy/ -将一个文件从某个位置移动到另一个位置(尾斜杠是说在该目录中): +将一个文件从某个位置移动到另一个位置(尾斜杠是说放在该目录中): - mv path_of_file_1 path_of_the_directory_where_you_want_to_move/ + mv path_of_file_1 path_of_the_directory_where_you_want_to_move/ 将所有文件从一个位置移动到另一个位置: - mv path_of_directory_where_files_are/* path_of_the_directory_where_you_want_to_move/ + mv path_of_directory_where_files_are/* path_of_the_directory_where_you_want_to_move/ 删除一个文件: @@ -135,11 +136,11 @@ dnf config-manager --add-repo http://www.example.com/example.repo ### 创建新目录 ### -要创建一个新目录,首先输入你要创建的目录的位置。比如说,你想要在你的Documents目录中创建一个名为'foundation'的文件夹。让我们使用 cd (即change directory,改变目录)命令来改变目录: +要创建一个新目录,首先进入到你要创建该目录的位置。比如说,你想要在你的Documents目录中创建一个名为'foundation'的文件夹。让我们使用 cd (即change directory,改变目录)命令来改变目录: cd /home/swapnil/Documents -(替换'swapnil'为你系统中的用户) +(替换'swapnil'为你系统中的用户名) 然后,使用 mkdir 命令来创建该目录: @@ -149,13 +150,13 @@ dnf config-manager --add-repo http://www.example.com/example.repo mdkir /home/swapnil/Documents/foundation -如果你想要创建父-子目录,那是指目录中的目录,那么可以使用 -p 选项。它会在指定路径中创建所有目录: +如果你想要连父目录一起创建,那么可以使用 -p 选项。它会在指定路径中创建所有目录: mdkir -p /home/swapnil/Documents/linux/foundation ### 成为root ### -你或许需要成为root,或者具有sudo权力的用户,来实施一些管理任务,如管理软件包或者对根目录或其下的文件进行一些修改。其中一个例子就是编辑'fstab'文件,该文件记录了挂载的硬件驱动器。它在'etc'目录中,而该目录又在根目录中,你只能作为超级用户来修改该文件。在大多数的发行版中,你可以通过'切换用户'来成为root。比如说,在openSUSE上,我想要成为root,因为我要在根目录中工作,你可以使用下面的命令之一: +你或许需要成为root,或者具有sudo权力的用户,来实施一些管理任务,如管理软件包或者对根目录或其下的文件进行一些修改。其中一个例子就是编辑'fstab'文件,该文件记录了挂载的硬盘驱动器。它在'etc'目录中,而该目录又在根目录中,你只能作为超级用户来修改该文件。在大多数的发行版中,你可以通过'su'来成为root。比如说,在openSUSE上,我想要成为root,因为我要在根目录中工作,你可以使用下面的命令之一: sudo su - @@ -165,7 +166,7 @@ dnf config-manager --add-repo http://www.example.com/example.repo 该命令会要求输入密码,然后你就具有root特权了。记住一点:千万不要以root用户来运行系统,除非你知道你正在做什么。另外重要的一点需要注意的是,你以root什么对目录或文件进行修改后,会将它们的拥有关系从该用户或特定的服务改变为root。你必须恢复这些文件的拥有关系,否则该服务或用户就不能访问或写入到那些文件。要改变用户,命令如下: - sudo chown -R user:user /path_of_file_or_directory + sudo chown -R 用户:组 文件或目录名 当你将其它发行版上的分区挂载到系统中时,你可能经常需要该操作。当你试着访问这些分区上的文件时,你可能会碰到权限拒绝错误,你只需要改变这些分区的拥有关系就可以访问它们了。需要额外当心的是,不要改变根目录的权限或者拥有关系。 @@ -177,7 +178,7 @@ via: 
http://www.linux.com/learn/tutorials/842251-must-know-linux-commands-for-ne 作者:[Swapnil Bhartiya][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From de0f6fe24b3e4197727221dfedc0fa8fc4e1a42f Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 10 Aug 2015 00:50:06 +0800 Subject: [PATCH 162/207] PUB:20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem @FSSlc --- ...nd to Quickly Navigate Linux Filesystem.md | 98 ++++++++++--------- 1 file changed, 50 insertions(+), 48 deletions(-) rename {translated/tech => published}/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md (62%) diff --git a/translated/tech/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md b/published/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md similarity index 62% rename from translated/tech/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md rename to published/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md index 546a4b0baf..f749039d5d 100644 --- a/translated/tech/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md +++ b/published/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md @@ -1,53 +1,54 @@ -Autojump – 一个高级的‘cd’命令用以快速浏览 Linux 文件系统 +Autojump:一个可以在 Linux 文件系统快速导航的高级 cd 命令 ================================================================================ -对于那些主要通过控制台或终端使用 Linux 命令行来工作的 Linux 用户来说,他们真切地感受到了 Linux 的强大。 然而在 Linux 的分层文件系统中进行浏览有时或许是一件头疼的事,尤其是对于那些新手来说。 + +对于那些主要通过控制台或终端使用 Linux 命令行来工作的 Linux 用户来说,他们真切地感受到了 Linux 的强大。 然而在 Linux 的分层文件系统中进行导航有时或许是一件头疼的事,尤其是对于那些新手来说。 现在,有一个用 Python 写的名为 `autojump` 的 Linux 命令行实用程序,它是 Linux ‘[cd][1]’命令的高级版本。 ![Autojump 命令](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Command.jpg) -Autojump – 浏览 Linux 文件系统的最快方式 +*Autojump – Linux 文件系统导航的最快方式* 这个应用原本由 Joël Schaerer 编写,现在由 +William Ting 维护。 -Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行更轻松的目录浏览。与传统的 `cd` 命令相比,autojump 能够更加快速地浏览至目的目录。 +Autojump 应用可以从用户那里学习并帮助用户在 Linux 命令行中进行更轻松的目录导航。与传统的 `cd` 命令相比,autojump 能够更加快速地导航至目的目录。 #### autojump 的特色 #### -- 免费且开源的应用,在 GPL V3 协议下发布。 -- 自主学习的应用,从用户的浏览习惯中学习。 -- 更快速地浏览。不必包含子目录的名称。 -- 对于大多数的标准 Linux 发行版本,能够在软件仓库中下载得到,它们包括 Debian (testing/unstable), Ubuntu, Mint, Arch, Gentoo, Slackware, CentOS, RedHat and Fedora。 +- 自由开源的应用,在 GPL V3 协议下发布。 +- 自主学习的应用,从用户的导航习惯中学习。 +- 更快速地导航。不必包含子目录的名称。 +- 对于大多数的标准 Linux 发行版本,能够在软件仓库中下载得到,它们包括 Debian (testing/unstable), Ubuntu, Mint, Arch, Gentoo, Slackware, CentOS, RedHat 和 Fedora。 - 也能在其他平台中使用,例如 OS X(使用 Homebrew) 和 Windows (通过 Clink 来实现) -- 使用 autojump 你可以跳至任何特定的目录或一个子目录。你还可以打开文件管理器来到达某个目录,并查看你在某个目录中所待时间的统计数据。 +- 使用 autojump 你可以跳至任何特定的目录或一个子目录。你还可以用文件管理器打开某个目录,并查看你在某个目录中所待时间的统计数据。 #### 前提 #### - 版本号不低于 2.6 的 Python -### 第 1 步: 做一次全局系统升级 ### +### 第 1 步: 做一次完整的系统升级 ### -1. 
以 **root** 用户的身份,做一次系统更新或升级,以此保证你安装有最新版本的 Python。 +1、 以 **root** 用户的身份,做一次系统更新或升级,以此保证你安装有最新版本的 Python。 - # apt-get update && apt-get upgrade && apt-get dist-upgrade [APT based systems] - # yum update && yum upgrade [YUM based systems] - # dnf update && dnf upgrade [DNF based systems] + # apt-get update && apt-get upgrade && apt-get dist-upgrade [基于 APT 的系统] + # yum update && yum upgrade [基于 YUM 的系统] + # dnf update && dnf upgrade [基于 DNF 的系统] **注** : 这里特别提醒,在基于 YUM 或 DNF 的系统中,更新和升级执行相同的行动,大多数时间里它们是通用的,这点与基于 APT 的系统不同。 ### 第 2 步: 下载和安装 Autojump ### -2. 正如前面所言,在大多数的 Linux 发行版本的软件仓库中, autojump 都可获取到。通过包管理器你就可以安装它。但若你想从源代码开始来安装它,你需要克隆源代码并执行 python 脚本,如下面所示: +2、 正如前面所言,在大多数的 Linux 发行版本的软件仓库中, autojump 都可获取到。通过包管理器你就可以安装它。但若你想从源代码开始来安装它,你需要克隆源代码并执行 python 脚本,如下面所示: #### 从源代码安装 #### 若没有安装 git,请安装它。我们需要使用它来克隆 git 仓库。 - # apt-get install git [APT based systems] - # yum install git [YUM based systems] - # dnf install git [DNF based systems] + # apt-get install git [基于 APT 的系统] + # yum install git [基于 YUM 的系统] + # dnf install git [基于 DNF 的系统] -一旦安装完 git,以常规用户身份登录,然后像下面那样来克隆 autojump: +一旦安装完 git,以普通用户身份登录,然后像下面那样来克隆 autojump: $ git clone git://github.com/joelthelion/autojump.git @@ -55,29 +56,29 @@ Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行 $ cd autojump -下载,赋予脚本文件可执行权限,并以 root 用户身份来运行安装脚本。 +下载,赋予安装脚本文件可执行权限,并以 root 用户身份来运行安装脚本。 # chmod 755 install.py # ./install.py #### 从软件仓库中安装 #### -3. 假如你不想麻烦,你可以以 **root** 用户身份从软件仓库中直接安装它: +3、 假如你不想麻烦,你可以以 **root** 用户身份从软件仓库中直接安装它: 在 Debian, Ubuntu, Mint 及类似系统中安装 autojump : - # apt-get install autojump (注: 这里原文为 autojumo, 应该为 autojump) + # apt-get install autojump 为了在 Fedora, CentOS, RedHat 及类似系统中安装 autojump, 你需要启用 [EPEL 软件仓库][2]。 # yum install epel-release # yum install autojump - OR + 或 # dnf install autojump ### 第 3 步: 安装后的配置 ### -4. 在 Debian 及其衍生系统 (Ubuntu, Mint,…) 中, 激活 autojump 应用是非常重要的。 +4、 在 Debian 及其衍生系统 (Ubuntu, Mint,…) 中, 激活 autojump 应用是非常重要的。 为了暂时激活 autojump 应用,即直到你关闭当前会话或打开一个新的会话之前让 autojump 均有效,你需要以常规用户身份运行下面的命令: @@ -89,7 +90,7 @@ Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行 ### 第 4 步: Autojump 的预测试和使用 ### -5. 如先前所言, autojump 将只跳到先前 `cd` 命令到过的目录。所以在我们开始测试之前,我们要使用 `cd` 切换到一些目录中去,并创建一些目录。下面是我所执行的命令。 +5、 如先前所言, autojump 将只跳到先前 `cd` 命令到过的目录。所以在我们开始测试之前,我们要使用 `cd` 切换到一些目录中去,并创建一些目录。下面是我所执行的命令。 $ cd $ cd @@ -120,45 +121,45 @@ Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行 现在,我们已经切换到过上面所列的目录,并为了测试创建了一些目录,一切准备就绪,让我们开始吧。 -**需要记住的一点** : `j` 是 autojump 的一个包装,你可以使用 j 来代替 autojump, 相反亦可。 +**需要记住的一点** : `j` 是 autojump 的一个封装,你可以使用 j 来代替 autojump, 相反亦可。 -6. 使用 -v 选项查看安装的 autojump 的版本。 +6、 使用 -v 选项查看安装的 autojump 的版本。 $ j -v - or + 或 $ autojump -v ![查看 Autojump 的版本](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Autojump-Version.png) -查看 Autojump 的版本 +*查看 Autojump 的版本* -7. 跳到先前到过的目录 ‘/var/www‘。 +7、 跳到先前到过的目录 ‘/var/www‘。 $ j www ![跳到目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-To-Directory.png) -跳到目录 +*跳到目录* -8. 跳到先前到过的子目录‘/home/avi/autojump-test/b‘ 而不键入子目录的全名。 +8、 跳到先前到过的子目录‘/home/avi/autojump-test/b‘ 而不键入子目录的全名。 $ jc b ![跳到子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Child-Directory.png) -跳到子目录 +*跳到子目录* -9. 
使用下面的命令,你就可以从命令行打开一个文件管理器,例如 GNOME Nautilus ,而不是跳到一个目录。 +9、 使用下面的命令,你就可以从命令行打开一个文件管理器,例如 GNOME Nautilus ,而不是跳到一个目录。 $ jo www -![跳到目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Direcotory.png) +![打开目录](http://www.tecmint.com/wp-content/uploads/2015/06/Jump-to-Direcotory.png) -跳到目录 +*打开目录* ![在文件管理器中打开目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Directory-in-File-Browser.png) -在文件管理器中打开目录 +*在文件管理器中打开目录* 你也可以在一个文件管理器中打开一个子目录。 @@ -166,19 +167,19 @@ Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行 ![打开子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory1.png) -打开子目录 +*打开子目录* ![在文件管理器中打开子目录](http://www.tecmint.com/wp-content/uploads/2015/06/Open-Child-Directory-in-File-Browser1.png) -在文件管理器中打开子目录 +*在文件管理器中打开子目录* -10. 查看每个文件夹的关键权重和在所有目录权重中的总关键权重的相关统计数据。文件夹的关键权重代表在这个文件夹中所花的总时间。 目录权重是列表中目录的数目。(注: 在这一句中,我觉得原文中的 if 应该为 is) +10、 查看每个文件夹的权重和全部文件夹计算得出的总权重的统计数据。文件夹的权重代表在这个文件夹中所花的总时间。 文件夹权重是该列表中目录的数字。(LCTT 译注: 在这一句中,我觉得原文中的 if 应该为 is) $ j --stat -![查看目录统计数据](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Statistics.png) +![查看文件夹统计数据](http://www.tecmint.com/wp-content/uploads/2015/06/Check-Statistics.png) -查看目录统计数据 +*查看文件夹统计数据* **提醒** : autojump 存储其运行日志和错误日志的地方是文件夹 `~/.local/share/autojump/`。千万不要重写这些文件,否则你将失去你所有的统计状态结果。 @@ -186,15 +187,15 @@ Autojump 应用从用户那里学习并帮助用户在 Linux 命令行中进行 ![Autojump 的日志](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-Logs.png) -Autojump 的日志 +*Autojump 的日志* -11. 假如需要,你只需运行下面的命令就可以查看帮助 : +11、 假如需要,你只需运行下面的命令就可以查看帮助 : $ j --help ![Autojump 的帮助和选项](http://www.tecmint.com/wp-content/uploads/2015/06/Autojump-help-options.png) -Autojump 的帮助和选项 +*Autojump 的帮助和选项* ### 功能需求和已知的冲突 ### @@ -204,18 +205,19 @@ Autojump 的帮助和选项 ### 结论: ### -假如你是一个命令行用户, autojump 是你必备的实用程序。它可以简化许多事情。它是一个在命令行中浏览 Linux 目录的绝佳的程序。请自行尝试它,并在下面的评论框中让我知晓你宝贵的反馈。保持联系,保持分享。喜爱并分享,帮助我们更好地传播。 +假如你是一个命令行用户, autojump 是你必备的实用程序。它可以简化许多事情。它是一个在命令行中导航 Linux 目录的绝佳的程序。请自行尝试它,并在下面的评论框中让我知晓你宝贵的反馈。保持联系,保持分享。喜爱并分享,帮助我们更好地传播。 + -------------------------------------------------------------------------------- via: http://www.tecmint.com/autojump-a-quickest-way-to-navigate-linux-filesystem/ 作者:[Avishek Kumar][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/avishek/ [1]:http://www.tecmint.com/cd-command-in-linux/ -[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[2]:https://linux.cn/article-2324-1.html [3]:http://www.tecmint.com/manage-linux-filenames-with-special-characters/ \ No newline at end of file From 99ef2dab243d3e400189462378026b7cad784472 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Mon, 10 Aug 2015 09:23:08 +0800 Subject: [PATCH 163/207] Update 20150209 Install OpenQRM Cloud Computing Platform In Debian.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...0209 Install OpenQRM Cloud Computing Platform In Debian.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md b/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md index 127f10affc..2c6a990b83 100644 --- a/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md +++ b/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md @@ -1,3 +1,5 @@ +FSSlc translating + Install OpenQRM Cloud Computing 
Platform In Debian ================================================================================ ### Introduction ### @@ -146,4 +148,4 @@ via: http://www.unixmen.com/install-openqrm-cloud-computing-platform-debian/ [a]:http://www.unixmen.com/author/sk/ [1]:http://www.openqrm-enterprise.com/products/edition-comparison.html [2]:http://sourceforge.net/projects/openqrm/files/?source=navbar -[3]:http://www.openqrm-enterprise.com/fileadmin/Documents/Whitepaper/openQRM-Enterprise-Administrator-Guide-5.2.pdf \ No newline at end of file +[3]:http://www.openqrm-enterprise.com/fileadmin/Documents/Whitepaper/openQRM-Enterprise-Administrator-Guide-5.2.pdf From abc7c38b3accd6953e9a40d2b43d3b362a22307b Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 10 Aug 2015 10:56:03 +0800 Subject: [PATCH 164/207] PUB:20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04 @geekpi --- ...ver or client) on Ubuntu 14.04 or 15.04.md | 120 +++++++----------- 1 file changed, 48 insertions(+), 72 deletions(-) rename {translated/tech => published}/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md (68%) diff --git a/translated/tech/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md b/published/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md similarity index 68% rename from translated/tech/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md rename to published/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md index 38574e6fa7..2fdaa71872 100644 --- a/translated/tech/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md +++ b/published/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md @@ -1,6 +1,6 @@ -如何在Ubuntu 14.04/15.04上配置Chef(服务端/客户端) +如何在 Ubuntu 上安装配置管理系统 Chef (大厨) ================================================================================ -Chef是对于信息技术专业人员的一款配置管理和自动化工具,它可以配置和管理你的设备无论它在本地还是在云上。它可以用于加速应用部署并协调多个系统管理员和开发人员的工作,涉及到成百甚至上千的服务器和程序来支持大量的客户群。chef最有用的是让设备变成代码。一旦你掌握了Chef,你可以获得一流的网络IT支持来自动化管理你的云端设备或者终端用户。 +Chef是面对IT专业人员的一款配置管理和自动化工具,它可以配置和管理你的基础设施,无论它在本地还是在云上。它可以用于加速应用部署并协调多个系统管理员和开发人员的工作,这涉及到可支持大量的客户群的成百上千的服务器和程序。chef最有用的是让基础设施变成代码。一旦你掌握了Chef,你可以获得一流的网络IT支持来自动化管理你的云端基础设施或者终端用户。 下面是我们将要在本篇中要设置和配置Chef的主要组件。 @@ -10,34 +10,13 @@ Chef是对于信息技术专业人员的一款配置管理和自动化工具, 我们将在下面的基础环境下设置Chef配置管理系统。 -注:表格 - - - - - - - - - - - - - - - - - - - - - - - - - - -
管理和配置工具:Chef
基础操作系统:Ubuntu 14.04.1 LTS (x86_64)
Chef Server:Version 12.1.0
Chef Manage:Version 1.17.0
Chef Development Kit:Version 0.6.2
内存和CPU:4 GB,2.0+2.0 GHz
+|管理和配置工具:Chef|| +|-------------------------------|---| +|基础操作系统|Ubuntu 14.04.1 LTS (x86_64)| +|Chef Server|Version 12.1.0| +|Chef Manage|Version 1.17.0| +|Chef Development Kit|Version 0.6.2| +|内存和CPU|4 GB  , 2.0+2.0 GHz| ### Chef服务端的安装和配置 ### @@ -45,15 +24,15 @@ Chef服务端是核心组件,它存储配置以及其他和工作站交互的 我使用下面的命令来下载和安装它。 -**1) 下载Chef服务端** +####1) 下载Chef服务端 root@ubuntu-14-chef:/tmp# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/chef-server-core_12.1.0-1_amd64.deb -**2) 安装Chef服务端** +####2) 安装Chef服务端 root@ubuntu-14-chef:/tmp# dpkg -i chef-server-core_12.1.0-1_amd64.deb -**3) 重新配置Chef服务端** +####3) 重新配置Chef服务端 现在运行下面的命令来启动所有的chef服务端服务,这步也许会花费一些时间,因为它有许多不同一起工作的服务组成来创建一个正常运作的系统。 @@ -64,35 +43,35 @@ chef服务端启动命令'chef-server-ctl reconfigure'需要运行两次,这 Chef Client finished, 342/350 resources updated in 113.71139964 seconds opscode Reconfigured! -**4) 重启系统 ** +####4) 重启系统 安装完成后重启系统使系统能最好的工作,不然我们或许会在创建用户的时候看到下面的SSL连接错误。 ERROR: Errno::ECONNRESET: Connection reset by peer - SSL_connect -**5) 创建心的管理员** +####5) 创建新的管理员 -运行下面的命令来创建一个新的用它自己的配置的管理员账户。创建过程中,用户的RSA私钥会自动生成并需要被保存到一个安全的地方。--file选项会保存RSA私钥到指定的路径下。 +运行下面的命令来创建一个新的管理员账户及其配置。创建过程中,用户的RSA私钥会自动生成,它需要保存到一个安全的地方。--file选项会保存RSA私钥到指定的路径下。 root@ubuntu-14-chef:/tmp# chef-server-ctl user-create kashi kashi kashi kashif.fareedi@gmail.com kashi123 --filename /root/kashi.pem ### Chef服务端的管理设置 ### -Chef Manage是一个针对企业Chef用户的管理控制台,它启用了可视化的web用户界面并可以管理节点、数据包、规则、环境、配置和基于角色的访问控制(RBAC) +Chef Manage是一个针对企业Chef用户的管理控制台,它提供了可视化的web用户界面,可以管理节点、数据包、规则、环境、Cookbook 和基于角色的访问控制(RBAC) -**1) 下载Chef Manage** +####1) 下载Chef Manage -从官网复制链接病下载chef manage的安装包。 +从官网复制链接并下载chef manage的安装包。 root@ubuntu-14-chef:~# wget https://web-dl.packagecloud.io/chef/stable/packages/ubuntu/trusty/opscode-manage_1.17.0-1_amd64.deb -**2) 安装Chef Manage** +####2) 安装Chef Manage 使用下面的命令在root的家目录下安装它。 root@ubuntu-14-chef:~# chef-server-ctl install opscode-manage --path /root -**3) 重启Chef Manage和服务端** +####3) 重启Chef Manage和服务端 安装完成后我们需要运行下面的命令来重启chef manage和服务端。 @@ -101,28 +80,27 @@ Chef Manage是一个针对企业Chef用户的管理控制台,它启用了可 ### Chef Manage网页控制台 ### -我们可以使用localhost访问网页控制台以及fqdn,并用已经创建的管理员登录 +我们可以使用localhost或它的全称域名来访问网页控制台,并用已经创建的管理员登录 ![chef amanage](http://blog.linoxide.com/wp-content/uploads/2015/07/5-chef-web.png) -**1) Chef Manage创建新的组织 ** +####1) Chef Manage创建新的组织 -你或许被要求创建新的组织或者接受其他阻止的邀请。如下所示,使用缩写和全名来创建一个新的组织。 +你或许被要求创建新的组织,或者也可以接受其他组织的邀请。如下所示,使用缩写和全名来创建一个新的组织。 ![Create Org](http://blog.linoxide.com/wp-content/uploads/2015/07/7-create-org.png) -**2) 用命令行创建心的组织 ** +####2) 用命令行创建新的组织 -We can also create new Organization from the command line by executing the following command. 我们同样也可以运行下面的命令来创建新的组织。 root@ubuntu-14-chef:~# chef-server-ctl org-create linux Linoxide Linux Org. --association_user kashi --filename linux.pem ### 设置工作站 ### -我们已经完成安装chef服务端,现在我们可以开始创建任何recipes、cookbooks、属性和其他任何的我们想要对Chef的修改。 +我们已经完成安装chef服务端,现在我们可以开始创建任何recipes([基础配置元素](https://docs.chef.io/recipes.html))、cookbooks([基础配置集](https://docs.chef.io/cookbooks.html))、attributes([节点属性](https://docs.chef.io/attributes.html))和其他任何的我们想要对Chef做的修改。 -**1) 在Chef服务端上创建新的用户和组织 ** +####1) 在Chef服务端上创建新的用户和组织 为了设置工作站,我们用命令行创建一个新的用户和组织。 @@ -130,25 +108,23 @@ We can also create new Organization from the command line by executing the follo root@ubuntu-14-chef:~# chef-server-ctl org-create blogs Linoxide Blogs Inc. --association_user bloger --filename blogs.pem -**2) 下载工作站入门套件 ** +####2) 下载工作站入门套件 -Now Download and Save starter-kit from the chef manage web console on a workstation and use it to work with Chef server. 
-在工作站的网页控制台中下面并保存入门套件用于与服务端协同工作 +在工作站的网页控制台中下载保存入门套件,它用于与服务端协同工作 ![Starter Kit](http://blog.linoxide.com/wp-content/uploads/2015/07/8-download-kit.png) -**3) 点击"Proceed"下载套件 ** +####3) 下载套件后,点击"Proceed" ![starter kit](http://blog.linoxide.com/wp-content/uploads/2015/07/9-download-kit.png) -### 对于工作站的Chef开发套件设置 ### +### 用于工作站的Chef开发套件设置 ### -Chef开发套件是一款包含所有开发chef所需工具的软件包。它捆绑了由Chef开发的带Chef客户端的工具。 +Chef开发套件是一款包含开发chef所需的所有工具的软件包。它捆绑了由Chef开发的带Chef客户端的工具。 -**1) 下载 Chef DK** +####1) 下载 Chef DK -We can Download chef development kit from its official web link and choose the required operating system to get its chef development tool kit. -我们可以从它的官网链接中下载开发包,并选择操作系统来得到chef开发包。 +我们可以从它的官网链接中下载开发包,并选择操作系统来下载chef开发包。 ![Chef DK](http://blog.linoxide.com/wp-content/uploads/2015/07/10-CDK.png) @@ -156,13 +132,13 @@ We can Download chef development kit from its official web link and choose the r root@ubuntu-15-WKS:~# wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.6.2-1_amd64.deb -**1) Chef开发套件安装** +####2) Chef开发套件安装 使用dpkg命令安装开发套件 root@ubuntu-15-WKS:~# dpkg -i chefdk_0.6.2-1_amd64.deb -**3) Chef DK 验证** +####3) Chef DK 验证 使用下面的命令验证客户端是否已经正确安装。 @@ -195,7 +171,7 @@ We can Download chef development kit from its official web link and choose the r Verification of component 'chefspec' succeeded. Verification of component 'package installation' succeeded. -**连接Chef服务端** +####4) 连接Chef服务端 我们将创建 ~/.chef并从chef服务端复制两个用户和组织的pem文件到chef的文件到这个目录下。 @@ -209,7 +185,7 @@ We can Download chef development kit from its official web link and choose the r kashi.pem 100% 1678 1.6KB/s 00:00 linux.pem 100% 1678 1.6KB/s 00:00 -** 编辑配置来管理chef环境 ** +####5) 编辑配置来管理chef环境 现在使用下面的内容创建"~/.chef/knife.rb"。 @@ -231,13 +207,13 @@ We can Download chef development kit from its official web link and choose the r root@ubuntu-15-WKS:/# mkdir cookbooks -**测试Knife配置** +####6) 测试Knife配置 运行“knife user list”和“knife client list”来验证knife是否在工作。 root@ubuntu-15-WKS:/.chef# knife user list -第一次运行的时候可能会得到下面的错误,这是因为工作站上还没有chef服务端的SSL证书。 +第一次运行的时候可能会看到下面的错误,这是因为工作站上还没有chef服务端的SSL证书。 ERROR: SSL Validation failure connecting to host: 172.25.10.173 - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed ERROR: Could not establish a secure connection to the server. @@ -245,24 +221,24 @@ We can Download chef development kit from its official web link and choose the r If your Chef Server uses a self-signed certificate, you can use `knife ssl fetch` to make knife trust the server's certificates. -要从上面的命令中恢复,运行下面的命令来获取ssl整数并重新运行knife user和client list,这时候应该就可以了。 +要从上面的命令中恢复,运行下面的命令来获取ssl证书,并重新运行knife user和client list,这时候应该就可以了。 root@ubuntu-15-WKS:/.chef# knife ssl fetch WARNING: Certificates from 172.25.10.173 will be fetched and placed in your trusted_cert directory (/.chef/trusted_certs). 
- knife没有办法验证这些是有效的证书。你应该在下载时候验证这些证书的真实性。 +knife没有办法验证这些是有效的证书。你应该在下载时候验证这些证书的真实性。 - 在/.chef/trusted_certs/ubuntu-14-chef_test_com.crt下面添加ubuntu-14-chef.test.com的证书。 +在/.chef/trusted_certs/ubuntu-14-chef_test_com.crt下面添加ubuntu-14-chef.test.com的证书。 在上面的命令取得ssl证书后,接着运行下面的命令。 root@ubuntu-15-WKS:/.chef#knife client list kashi-linux -### 与chef服务端交互的新的节点 ### +### 配置与chef服务端交互的新节点 ### -节点是执行所有设备自动化的chef客户端。因此是时侯添加新的服务端到我们的chef环境下,在配置完chef-server和knife工作站后配置新的节点与chef-server交互。 +节点是执行所有基础设施自动化的chef客户端。因此,在配置完chef-server和knife工作站后,通过配置新的与chef-server交互的节点,来添加新的服务端到我们的chef环境下。 我们使用下面的命令来添加新的节点与chef服务端工作。 @@ -291,16 +267,16 @@ We can Download chef development kit from its official web link and choose the r 172.25.10.170 to file /tmp/install.sh.26024/metadata.txt 172.25.10.170 trying wget... -之后我们可以在knife节点列表下看到新创建的节点,也会新节点列表下创建新的客户端。 +之后我们可以在knife节点列表下看到新创建的节点,它也会在新节点创建新的客户端。 root@ubuntu-15-WKS:~# knife node list mydns -相似地我们只要提供ssh证书通过上面的knife命令来创建多个节点到chef设备上。 +相似地我们只要提供ssh证书通过上面的knife命令,就可以在chef设施上创建多个节点。 ### 总结 ### -本篇我们学习了chef管理工具并通过安装和配置设置浏览了它的组件。我希望你在学习安装和配置Chef服务端以及它的工作站和客户端节点中获得乐趣。 +本篇我们学习了chef管理工具并通过安装和配置设置基本了解了它的组件。我希望你在学习安装和配置Chef服务端以及它的工作站和客户端节点中获得乐趣。 -------------------------------------------------------------------------------- @@ -308,7 +284,7 @@ via: http://linoxide.com/ubuntu-how-to/install-configure-chef-ubuntu-14-04-15-04 作者:[Kashif Siddique][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From eb0ff99810690c7d3078c6a85ddc7731ef544e3e Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 10 Aug 2015 10:57:27 +0800 Subject: [PATCH 165/207] =?UTF-8?q?=E6=B8=85=E9=99=A4=E9=94=99=E6=94=BE?= =?UTF-8?q?=E7=9A=84=E6=96=87=E4=BB=B6?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @xiqingongzi --- ...ntial Commands and System Documentation.md | 320 ------------------ 1 file changed, 320 deletions(-) delete mode 100644 translated/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md diff --git a/translated/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md b/translated/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md deleted file mode 100644 index 93c2787c7e..0000000000 --- a/translated/tech/RHCSA Series/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md +++ /dev/null @@ -1,320 +0,0 @@ -[translating by xiqingongzi] - -RHCSA系列: 复习基础命令及系统文档 – 第一部分 -================================================================================ -RHCSA (红帽认证系统工程师) 是由给商业公司提供开源操作系统和软件的RedHat公司举行的认证考试, 除此之外,红帽公司还为这些企业和机构提供支持、训练以及咨询服务 - -![RHCSA Exam Guide](http://www.tecmint.com/wp-content/uploads/2015/02/RHCSA-Series-by-Tecmint.png) - -RHCSA 考试准备指南 - -RHCSA 考试(考试编号 EX200)通过后可以获取由Red Hat 公司颁发的证书. RHCSA 考试是RHCT(红帽认证技师)的升级版,而且RHCSA必须在新的Red Hat Enterprise Linux(红帽企业版)下完成.RHCT和RHCSA的主要变化就是RHCT基于 RHEL5 , 而RHCSA基于RHEL6或者7, 这两个认证的等级也有所不同. - -红帽认证管理员所会的最基础的是在红帽企业版的环境下执行如下系统管理任务: - -- 理解并会使用命令管理文件、目录、命令行以及系统/软件包的文档 -- 使用不同的启动等级启动系统,认证和控制进程,启动或停止虚拟机 -- 使用分区和逻辑卷管理本地存储 -- 创建并且配置本地文件系统和网络文件系统,设置他们的属性(许可、加密、访问控制表) -- 部署、配置、并且控制系统,包括安装、升级和卸载软件 -- 管理系统用户和组,独立使用集中制的LDAP目录权限控制 -- 确保系统安全,包括基础的防火墙规则和SELinux配置 - - -关于你所在国家的考试注册费用参考 [RHCSA Certification page][1]. 
- -关于你所在国家的考试注册费用参考RHCSA 认证页面 - - -在这个有15章的RHCSA(红帽认证管理员)备考系列,我们将覆盖以下的关于红帽企业Linux第七版的最新的信息 - -- Part 1: 回顾必会的命令和系统文档 -- Part 2: 在RHEL7如何展示文件和管理目录 -- Part 3: 在RHEL7中如何管理用户和组 -- Part 4: 使用nano和vim管理命令/ 使用grep和正则表达式分析文本 -- Part 5: RHEL7的进程管理:启动,关机,以及其他介于二者之间的. -- Part 6: 使用 'Parted'和'SSM'来管理和加密系统存储 -- Part 7: 使用ACLs(访问控制表)并挂载 Samba /NFS 文件分享 -- Part 8: 加固SSH,设置主机名并开启网络服务 -- Part 9: 安装、配置和加固一个Web,FTP服务器 -- Part 10: Yum 包管理方式,使用Cron进行自动任务管理以及监控系统日志 -- Part 11: 使用FirewallD和Iptables设置防火墙,控制网络流量 -- Part 12: 使用Kickstart 自动安装RHEL 7 -- Part 13: RHEL7:什么是SeLinux?他的原理是什么? -- Part 14: 在RHEL7 中使用基于LDAP的权限控制 -- Part 15: RHEL7的虚拟化:KVM 和虚拟机管理 - -在第一章,我们讲解如何输入和运行正确的命令在终端或者Shell窗口,并且讲解如何找到、插入,以及使用系统文档 - -![RHCSA: Reviewing Essential Linux Commands – Part 1](http://www.tecmint.com/wp-content/uploads/2015/02/Reviewing-Essential-Linux-Commands.png) - -RHCSA:回顾必会的Linux命令 - 第一部分 - -#### 前提: #### - -至少你要熟悉如下命令 - -- [cd command][2] (改变目录) -- [ls command][3] (列举文件) -- [cp command][4] (复制文件) -- [mv command][5] (移动或重命名文件) -- [touch command][6] (创建一个新的文件或更新已存在文件的时间表) -- rm command (删除文件) -- mkdir command (创建目录) - -在这篇文章中你将会找到更多的关于如何更好的使用他们的正确用法和特殊用法. - -虽然没有严格的要求,但是作为讨论常用的Linux命令和方法,你应该安装RHEL7 来尝试使用文章中提到的命令.这将会使你学习起来更省力. - -- [红帽企业版Linux(RHEL)7 安装指南][7] - -### 使用Shell进行交互 ### -如果我们使用文本模式登陆Linux,我们就无法使用鼠标在默认的shell。另一方面,如果我们使用图形化界面登陆,我们将会通过启动一个终端来开启shell,无论那种方式,我们都会看到用户提示,并且我们可以开始输入并且执行命令(当按下Enter时,命令就会被执行) - - -当我们使用文本模式登陆Linux时, -命令是由两个部分组成的: - -- 命令本身 -- 参数 - -某些参数,称为选项(通常使用一个连字符区分),改变了由其他参数定义的命令操作. - -命令的类型可以帮助我们识别某一个特定的命令是由shell内建的还是由一个单独的包提供。这样的区别在于我们能够找到更多关于该信息的命令,对shell内置的命令,我们需要看shell的ManPage,如果是其他提供的,我们需要看它自己的ManPage. - -![Check Shell built in Commands](http://www.tecmint.com/wp-content/uploads/2015/02/Check-shell-built-in-Commands.png) - -检查Shell的内建命令 - -在上面的例子中, cd 和 type 是shell内建的命令,top和 less 是由其他的二进制文件提供的(在这种情况下,type将返回命令的位置) -其他的内建命令 - -- [echo command][8]: 展示字符串 -- [pwd command][9]: 输出当前的工作目录 - -![More Built in Shell Commands](http://www.tecmint.com/wp-content/uploads/2015/02/More-Built-in-Shell-Commands.png) - -更多内建函数 - -**exec 命令** - -运行我们指定的外部程序。请注意,最好是只输入我们想要运行的程序的名字,不过exec命令有一个特殊的特性:使用旧的shell运行,而不是创建新的进程,可以作为子请求的验证. - - # ps -ef | grep [shell 进程的PID] - -当新的进程注销,Shell也随之注销,运行 exec top 然后按下 q键来退出top,你会注意到shell 会话会结束,如下面的屏幕录像展示的那样: - -注:youtube视频 - - -**export 命令** - -输出之后执行的命令的环境的变量 - -**history 命令** - -展示数行之前的历史命令.在感叹号前输入命令编号可以再次执行这个命令.如果我们需要编辑历史列表中的命令,我们可以按下 Ctrl + r 并输入与命令相关的第一个字符. -当我们看到的命令自动补全,我们可以根据我们目前的需要来编辑它: - -注:youtube视频 - - -命令列表会保存在一个叫 .bash_history的文件里.history命令是一个非常有用的用于减少输入次数的工具,特别是进行命令行编辑的时候.默认情况下,bash保留最后输入的500个命令,不过可以通过修改 HISTSIZE 环境变量来增加: - - -![Linux history Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-history-Command.png) - -Linux history 命令 - -但上述变化,在我们的下一次启动不会保留。为了保持HISTSIZE变量的变化,我们需要通过手工修改文件编辑: - - # 设置history请看 HISTSIZE 和 HISTFILESIZE 在 bash(1)的文档 - HISTSIZE=1000 - -**重要**: 我们的更改不会生效,除非我们重启了系统 - -**alias 命令** -没有参数或使用-p参数将会以 名称=值的标准形式输出alias 列表.当提供了参数时,一个alias 将被定义给给定的命令和值 - -使用alias ,我们可以创建我们自己的命令,或修改现有的命令,包括需要的参数.举个例子,假设我们想别名 ls 到 ls –color=auto ,这样就可以使用不同颜色输出文件、目录、链接 - - - # alias ls='ls --color=auto' - -![Linux alias Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-alias-Command.png) - -Linux 别名命令 - -**Note**: 你可以给你的新命令起任何的名字,并且附上足够多的使用单引号分割的参数,但是这样的情况下你要用分号区分开他们. - - # alias myNewCommand='cd /usr/bin; ls; cd; clear' - -**exit 命令** - -Exit和logout命令都是退出shell.exit命令退出所有的shell,logout命令只注销登陆的shell,其他的自动以文本模式启动的shell不算. 
如果我们对某个程序有疑问,可以查看它的 man 手册页,使用 man 命令即可调出。除此之外,一些重要文件(inittab、fstab、hosts 等)、库函数、shell、设备及其他功能也都有各自的手册页。

#### 举例: ####

- man uname(输出系统信息,如内核名称、处理器、操作系统类型、架构等)
- man inittab(init 进程的配置)

另一个重要的信息来源是 info 命令,它常被用来读取 info 文档。这些文档往往比 man 手册页提供更多的信息。调用方式是 info 后面跟上命令的关键词:

    # info ls
    # info cut


另外,/usr/share/doc 文件夹下包含大量的子目录,里面可以找到大量文档,它们是文本文件或其他易读的格式。
请确保会用这三种方法去查找命令的信息,并重点关注每个命令文档中介绍的详细语法。

**使用expand命令把tabs转换为空格**

有时候文本文件中包含制表符(tab),但有些程序无法很好地处理它;或者我们只是单纯希望把制表符转换成空格。这正是 expand 工具(由 GNU 核心工具集提供)的用武之地。

举个例子,对文件 NumbersList.txt 使用 expand 处理,把制表符转换为一个空格,并输出到标准输出:

    # expand --tabs=1 NumbersList.txt

![Linux expand Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-expand-Command.png)

Linux expand 命令

unexpand 命令可以实现相反的功能(将空格转为制表符)。

**使用head输出文件首行及使用tail输出文件尾行**

通常情况下,head 命令后跟着文件名时,将会输出该文件的前十行,我们可以通过 -n 参数来自定义具体的行数。

    # head -n3 /etc/passwd
    # tail -n3 /etc/passwd

![Linux head and tail Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-head-and-tail-Command.png)

Linux 的 head 和 tail 命令

tail 最有意思的一个特性是能够持续显示追加到文件末尾的新内容(tail -f my.log):新内容会一行一行地滚动出现,就像我们在实时观察文件一样。这在监控一个持续增长的日志文件时非常有用。

更多: [Manage Files Effectively using head and tail Commands][10]

**使用paste合并文本文件**

paste 命令逐行合并多个文件,默认用制表符分隔对应的行,也可以用 -d 指定其他分隔符(下面的例子使用等号作为分隔符):

    # paste -d= file1 file2

![Merge Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Merge-Files-in-Linux-with-paste-command.png)

在 Linux 中合并文件

**使用split命令将文件分块**

split 命令常用于把一个文件切割成两个或多个以自定义前缀命名的文件。切割的依据可以是大小、块数或行数,生成的文件名会带有数字或字母的后缀。在下面的例子中,我们将把 bash.pdf 切割成大小为 50KB 的块(-b 50KB),并使用数字后缀命名(-d):

    # split -b 50KB -d bash.pdf bash_

![Split Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Split-Files-in-Linux-with-split-command.png)

在Linux下划分文件

你可以使用如下命令来合并这些文件,重新生成源文件:

    # cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf

**使用tr命令改变字符**

tr 命令可以用来转换(替换)单个字符或字符范围。和之前一样,下面的示例将使用同一个文件 file2,我们要实现:

- 把小写字母 o 变成大写
- 把所有的小写字母都变成大写字母

    # cat file2 | tr o O
    # cat file2 | tr [a-z] [A-Z]

![Translate Characters in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Translate-characters-in-Linux-with-tr-command.png)

在Linux中替换文字

**使用uniq和sort检查或删除重复的文字**

uniq 命令可以帮我们找出或删除文件中重复的行,默认写出到标准输出。需要注意的是,uniq 只能检测出相邻的重复行,所以 uniq 往往和 sort 一起使用(sort 用于对文本文件的内容进行排序)。


默认情况下,sort 以第一个字段(以空格分隔)作为排序关键字。想要指定其他字段作为关键字,需要使用 -k 参数。请注意下面的例子中 sort 和 uniq 是如何配合输出我们想要的字段的:

    # cat file3
    # sort file3 | uniq
    # sort -k2 file3 | uniq
    # sort -k3 file3 | uniq

![删除文件中重复的行](http://www.tecmint.com/wp-content/uploads/2015/02/Remove-Duplicate-Lines-in-file.png)

删除文件中重复的行

**从文件中提取文本的命令**

cut 命令可以基于字节(-b)、字符(-c)或字段(-f)从标准输入或文件中提取部分内容,并输出到标准输出。

当按字段提取时,默认的分隔符是制表符,不过可以通过 -d 参数自定义分隔符。

    # cut -d: -f1,3 /etc/passwd # 这个例子提取第一和第三个字段
    # cut -d: -f2-4 /etc/passwd # 这个例子提取第二到第四个字段

![从文件中提取文本](http://www.tecmint.com/wp-content/uploads/2015/02/Extract-Text-from-a-file.png)

从文件中提取文本


注意,上面两个例子的输出都十分简洁。

**使用fmt命令重新格式化文件**

fmt 常用来"整理"内容杂乱或有大量空白、缩进的文件。整理后默认每行宽度不超过 75 个字符,你可以通过 -w(width,宽度)参数把行宽设置为特定的数值。

举个例子,来看看用 fmt 以 100 个字符的行宽显示 /etc/passwd 文件时会发生什么。输出同样变得更加整齐。

    # fmt -w100 /etc/passwd

![File Reformatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Reformatting-in-Linux-with-fmt-command.png)

Linux文件重新格式化

**使用pr命令格式化打印内容**

pr 命令会对一个或多个文件进行分页和分栏,以便打印。
换句话说,使用pr格式化一个文件使他打印出来时看起来更好.举个例子,下面这个命令 - - # ls -a /etc | pr -n --columns=3 -h "Files in /etc" - -以一个友好的排版方式(3列)输出/etc下的文件,自定义了页眉(通过 -h 选项实现),行号(-n) - -![File Formatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Formatting-in-Linux-with-pr-command.png) - -Linux的文件格式 - -### 总结 ### - -在这篇文章中,我们已经讨论了如何在Shell或终端以正确的语法输入和执行命令,并解释如何找到,检查和使用系统文档。正如你看到的一样简单,这就是你成为RHCSA的第一大步 - -如果你想添加一些其他的你经常使用的能够有效帮你完成你的日常工作的基础命令,并为分享他们而感到自豪,请在下方留言.也欢迎提出问题.我们期待您的回复. - - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:https://www.redhat.com/en/services/certification/rhcsa -[2]:http://www.tecmint.com/cd-command-in-linux/ -[3]:http://www.tecmint.com/ls-command-interview-questions/ -[4]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/ -[5]:http://www.tecmint.com/rename-multiple-files-in-linux/ -[6]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/ -[7]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/ -[8]:http://www.tecmint.com/echo-command-in-linux/ -[9]:http://www.tecmint.com/pwd-command-examples/ -[10]:http://www.tecmint.com/view-contents-of-file-in-linux/ From 0e4b77320242cc007f4cc6ee17a72b6f6dc824de Mon Sep 17 00:00:00 2001 From: DeadFire Date: Mon, 10 Aug 2015 12:38:55 +0800 Subject: [PATCH 166/207] =?UTF-8?q?20150810-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...20150810 For Linux, Supercomputers R Us.md | 59 +++++++++++++++++++ 1 file changed, 59 insertions(+) create mode 100644 sources/talk/20150810 For Linux, Supercomputers R Us.md diff --git a/sources/talk/20150810 For Linux, Supercomputers R Us.md b/sources/talk/20150810 For Linux, Supercomputers R Us.md new file mode 100644 index 0000000000..7bc48125f0 --- /dev/null +++ b/sources/talk/20150810 For Linux, Supercomputers R Us.md @@ -0,0 +1,59 @@ +For Linux, Supercomputers R Us +================================================================================ +![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) +Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons + +> Almost all supercomputers run Linux, including the ones built from Raspberry Pi boards and PlayStation 3 game consoles + +Supercomputers are serious things, called on to do serious computing. They tend to be engaged in serious pursuits like atomic bomb simulations, climate modeling and high-level physics. Naturally, they cost serious money. At the very top of the latest [Top500][1] supercomputer ranking is the Tianhe-2 supercomputer at China’s National University of Defense Technology. It cost about $390 million to build. + +But then there’s the supercomputer that Joshua Kiepert, a doctoral student at Boise State’s Electrical and Computer Engineering department, [created with Raspberry Pi computers][2].It cost less than $2,000. + +No, I’m not making that up. It’s an honest-to-goodness supercomputer made from overclocked 1-GHz [Model B Raspberry Pi][3] ARM11 processors with Videocore IV GPUs. 
Each one comes with 512MB of RAM, a pair of USB ports and a 10/100 BaseT Ethernet port. + +And what do the Tianhe-2 and the Boise State supercomputer have in common? They both run Linux. As do [486 out of the world’s fastest 500 supercomputers][4]. It’s part of a domination of the category that began over 20 years ago. And now it’s trickling down to built-on-the-cheap supercomputers. Because Kiepert’s machine isn’t the only budget number cruncher out there. + +Gaurav Khanna, an associate professor of physics at the University of Massachusetts Dartmouth, created a [supercomputer with something shy of 200 PlayStation 3 video game consoles][5]. + +The PlayStations are powered by a 3.2-GHz PowerPC-based Power Processing Element. Each comes with 512MB of RAM. You can still buy one, although Sony will be phasing them out by year’s end, for just over $200. Khanna started with only 16 PlayStation 3s for his first supercomputer, so you too could put a supercomputer on your credit card for less than four grand. + +These machines may be built from toys, but they’re not playthings. Khanna has done serious astrophysics on his rig. A white-hat hacking group used a similar [PlayStation 3 supercomputer in 2008 to crack the SSL MD5 hashing algorithm][6] in 2008. + +Two years later, the Air Force Research Laboratory [Condor Cluster was using 1,760 Sony PlayStation 3 processors][7] and 168 general-purpose graphical processing units. This bargain-basement supercomputer runs at about 500TFLOPs, or 500 trillion floating point operations per second. + +Other cheap options for home supercomputers include specialist parallel-processing boards such as the [$99 credit-card-sized Parallella board][8], and high-end graphics boards such as [Nvidia’s Titan Z][9] and [AMD’s FirePro W9100][10]. Those high-end boards, coveted by gamers with visions of a dream machine or even a chance at winning the first-place prize of over $100,000 in the [Intel Extreme Masters World Championship League of][11] [Legends][12], cost considerably more, retailing for about $3,000. On the other hand, a single one can deliver over 2.5TFLOPS all by itself, and for scientists and researchers, they offer an affordable way to get a supercomputer they can call their own. + +As for the Linux connection, that all started in 1994 at the Goddard Space Flight Center with the first [Beowulf supercomputer][13]. + +By our standards, there wasn’t much that was super about the first Beowulf. But in its day, the first homemade supercomputer, with its 16 Intel 486DX processors and 10Mbps Ethernet for the bus, was great. [Beowulf, designed by NASA contractors Don Becker and Thomas Sterling][14], was the first “maker” supercomputer. Its “compute components,” 486DX PCs, cost only a few thousand dollars. While its speed was only in single-digit gigaflops, [Beowulf][15] showed you could build supercomputers from commercial off-the-shelf (COTS) hardware and Linux. + +I wish I’d had a part in its creation, but I’d already left Goddard by 1994 for a career as a full-time technology journalist. Darn it! + +But even from this side of my reporter’s notebook, I can still appreciate how COTS and open-source software changed supercomputing forever. I hope you can too. Because, whether it’s a cluster of Raspberry Pis or a monster with over 3 million Intel Ivy Bridge and Xeon Phi chips, almost all of today’s supercomputers trace their ancestry to Beowulf. + +-------------------------------------------------------------------------------- + +via: + +作者:[Steven J. 
Vaughan-Nichols][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ +[1]:http://www.top500.org/ +[2]:http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/ +[3]:https://www.raspberrypi.org/products/model-b/ +[4]:http://www.zdnet.com/article/linux-still-rules-supercomputing/ +[5]:http://www.nytimes.com/2014/12/23/science/an-economical-way-to-save-progress.html?smid=fb-nytimes&smtyp=cur&bicmp=AD&bicmlukp=WT.mc_id&bicmst=1409232722000&bicmet=1419773522000&_r=4 +[6]:http://www.computerworld.com/article/2529932/cybercrime-hacking/researchers-hack-verisign-s-ssl-scheme-for-securing-web-sites.html +[7]:http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html +[8]:http://www.zdnet.com/article/parallella-the-99-linux-supercomputer/ +[9]:http://blogs.nvidia.com/blog/2014/03/25/titan-z/ +[10]:http://www.amd.com/en-us/press-releases/Pages/amd-flagship-professional-2014apr7.aspx +[11]:http://en.intelextrememasters.com/news/check-out-the-intel-extreme-masters-katowice-prize-money-distribution/ +[12]:http://www.google.com/url?q=http%3A%2F%2Fen.intelextrememasters.com%2Fnews%2Fcheck-out-the-intel-extreme-masters-katowice-prize-money-distribution%2F&sa=D&sntz=1&usg=AFQjCNE6yoAGGz-Hpi2tPF4gdhuPBEckhQ +[13]:http://www.beowulf.org/overview/history.html +[14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html +[15]:http://www.beowulf.org/ \ No newline at end of file From e58cee17af9223454ed0cead2fa9c52f8027eaaf Mon Sep 17 00:00:00 2001 From: wxy Date: Mon, 10 Aug 2015 14:31:12 +0800 Subject: [PATCH 167/207] PUB:20150717 How to collect NGINX metrics - Part 2 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @strugglingyouth 第二篇也发了~~等第三篇哦 --- ...7 How to collect NGINX metrics - Part 2.md | 178 +++++++++++++ ...7 How to collect NGINX metrics - Part 2.md | 237 ------------------ 2 files changed, 178 insertions(+), 237 deletions(-) create mode 100644 published/20150717 How to collect NGINX metrics - Part 2.md delete mode 100644 translated/tech/20150717 How to collect NGINX metrics - Part 2.md diff --git a/published/20150717 How to collect NGINX metrics - Part 2.md b/published/20150717 How to collect NGINX metrics - Part 2.md new file mode 100644 index 0000000000..f1acf82a35 --- /dev/null +++ b/published/20150717 How to collect NGINX metrics - Part 2.md @@ -0,0 +1,178 @@ + +如何收集 NGINX 指标(第二篇) +================================================================================ +![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png) + +### 如何获取你所需要的 NGINX 指标 ### + +如何获取需要的指标取决于你正在使用的 NGINX 版本以及你希望看到哪些指标。(参见 [如何监控 NGINX(第一篇)][1] 来深入了解NGINX指标。)自由开源的 NGINX 和商业版的 NGINX Plus 都有可以报告指标度量的状态模块,NGINX 也可以在其日志中配置输出特定指标: + +**指标可用性** + +| 指标 | [NGINX (开源)](https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source) | [NGINX Plus](https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus) | [NGINX 日志](https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#logs)| +|-----|------|-------|-----| +|accepts(接受) / accepted(已接受)|x|x| | +|handled(已处理)|x|x| | +|dropped(已丢弃)|x|x| | +|active(活跃)|x|x| | +|requests (请求数)/ total(全部请求数)|x|x| | +|4xx 代码||x|x| +|5xx 代码||x|x| +|request time(请求处理时间)|||x| + +#### 指标收集:NGINX(开源版) #### + +开源版的 NGINX 会在一个简单的状态页面上显示几个与服务器状态有关的基本指标,它们由你启用的 HTTP [stub status module][2] 所提供。要检查该模块是否已启用,运行以下命令: + + 
nginx -V 2>&1 | grep -o with-http_stub_status_module + +如果你看到终端输出了 **http_stub_status_module**,说明该状态模块已启用。 + +如果该命令没有输出,你需要启用该状态模块。你可以在[从源代码构建 NGINX ][3]时使用 `--with-http_stub_status_module` 配置参数: + + ./configure \ + … \ + --with-http_stub_status_module + make + sudo make install + +在验证该模块已经启用或你自己启用它后,你还需要修改 NGINX 配置文件,来给状态页面设置一个本地可访问的 URL(例如: /nginx_status): + + server { + location /nginx_status { + stub_status on; + + access_log off; + allow 127.0.0.1; + deny all; + } + } + +注:nginx 配置中的 server 块通常并不放在主配置文件中(例如:/etc/nginx/nginx.conf),而是放在主配置会加载的辅助配置文件中。要找到主配置文件,首先运行以下命令: + + nginx -t + +打开列出的主配置文件,在以 http 块结尾的附近查找以 include 开头的行,如: + + include /etc/nginx/conf.d/*.conf; + +在其中一个包含的配置文件中,你应该会找到主 **server** 块,你可以如上所示配置 NGINX 的指标输出。更改任何配置后,通过执行以下命令重新加载配置文件: + + nginx -s reload + +现在,你可以浏览状态页看到你的指标: + + Active connections: 24 + server accepts handled requests + 1156958 1156958 4491319 + Reading: 0 Writing: 18 Waiting : 6 + +请注意,如果你希望从远程计算机访问该状态页面,则需要将远程计算机的 IP 地址添加到你的状态配置文件的白名单中,在上面的配置文件中的白名单仅有 127.0.0.1。 + +NGINX 的状态页面是一种快速查看指标状况的简单方法,但当连续监测时,你需要按照标准间隔自动记录该数据。监控工具箱 [Nagios][4] 或者 [Datadog][5],以及收集统计信息的服务 [collectD][6] 已经可以解析 NGINX 的状态信息了。 + +#### 指标收集: NGINX Plus #### + +商业版的 NGINX Plus 通过它的 ngx_http_status_module 提供了比开源版 NGINX [更多的指标][7]。NGINX Plus 以字节流的方式提供这些额外的指标,提供了关于上游系统和高速缓存的信息。NGINX Plus 也会报告所有的 HTTP 状态码类型(1XX,2XX,3XX,4XX,5XX)的计数。一个 NGINX Plus 状态报告例子[可在此查看][8]: + +![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png) + +注:NGINX Plus 在状态仪表盘中的“Active”连接的定义和开源 NGINX 通过 stub_status_module 收集的“Active”连接指标略有不同。在 NGINX Plus 指标中,“Active”连接不包括Waiting状态的连接(即“Idle”连接)。 + +NGINX Plus 也可以输出 [JSON 格式的指标][9],可以用于集成到其他监控系统。在 NGINX Plus 中,你可以看到 [给定的上游服务器组][10]的指标和健康状况,或者简单地从上游服务器的[单个服务器][11]得到响应代码的计数: + + {"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055} + +要启动 NGINX Plus 指标仪表盘,你可以在 NGINX 配置文件的 http 块内添加状态 server 块。 (参见上一节,为收集开源版 NGINX 指标而如何查找相关的配置文件的说明。)例如,要设置一个状态仪表盘 (http://your.ip.address:8080/status.html)和一个 JSON 接口(http://your.ip.address:8080/status),可以添加以下 server 块来设定: + + server { + listen 8080; + root /usr/share/nginx/html; + + location /status { + status; + } + + location = /status.html { + } + } + +当你重新加载 NGINX 配置后,状态页就可以用了: + + nginx -s reload + +关于如何配置扩展状态模块,官方 NGINX Plus 文档有 [详细介绍][13] 。 + +#### 指标收集:NGINX 日志 #### + +NGINX 的 [日志模块][14] 会把可自定义的访问日志写到你配置的指定位置。你可以通过[添加或移除变量][15]来自定义日志的格式和包含的数据。要存储详细的日志,最简单的方法是添加下面一行在你配置文件的 server 块中(参见上上节,为收集开源版 NGINX 指标而如何查找相关的配置文件的说明。): + + access_log logs/host.access.log combined; + +更改 NGINX 配置文件后,执行如下命令重新加载配置文件: + + nginx -s reload + +默认包含的 “combined” 的日志格式,会包括[一系列关键的数据][17],如实际的 HTTP 请求和相应的响应代码。在下面的示例日志中,NGINX 记录了请求 /index.html 时的 200(成功)状态码和访问不存在的请求文件 /fail 的 404(未找到)错误。 + + 127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari 537.36" + + 127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" + +你可以通过在 NGINX 配置文件中的 http 块添加一个新的日志格式来记录请求处理时间: + + log_format nginx '$remote_addr - $remote_user [$time_local] ' + '"$request" $status $body_bytes_sent $request_time ' + '"$http_referer" "$http_user_agent"'; + +并修改配置文件中 **server** 块的 access_log 行: + + access_log logs/host.access.log nginx; + +重新加载配置文件后(运行 `nginx -s reload`),你的访问日志将包括响应时间,如下所示。单位为秒,精度到毫秒。在这个例子中,服务器接收到一个对 /big.pdf 的请求时,发送 33973115 字节后返回 
206(成功)状态码。处理请求用时 0.202 秒(202毫秒): + + 127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" + +你可以使用各种工具和服务来解析和分析 NGINX 日志。例如,[rsyslog][18] 可以监视你的日志,并将其传递给多个日志分析服务;你也可以使用自由开源工具,比如 [logstash][19] 来收集和分析日志;或者你可以使用一个统一日志记录层,如 [Fluentd][20] 来收集和解析你的 NGINX 日志。 + +### 结论 ### + +监视 NGINX 的哪一项指标将取决于你可用的工具,以及监控指标所提供的信息是否满足你们的需要。举例来说,错误率的收集是否足够重要到需要你们购买 NGINX Plus ,还是架设一个可以捕获和分析日志的系统就够了? + +在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,这样你就可以以最小的设置来收集和监控所有 Web 服务器的指标。[在本文中][21]了解如何用 NGINX Datadog 来监控 ,并开始 [Datadog 的免费试用][22]吧。 + + +-------------------------------------------------------------------------------- + +via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ + +作者:K Young +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ +[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html +[3]:http://wiki.nginx.org/InstallOptions +[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx +[5]:http://docs.datadoghq.com/integrations/nginx/ +[6]:https://collectd.org/wiki/index.php/Plugin:nginx +[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data +[8]:http://demo.nginx.com/status.html +[9]:http://demo.nginx.com/status +[10]:http://demo.nginx.com/status/upstreams/demoupstreams +[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses +[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source +[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example +[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html +[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format +[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source +[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format +[18]:http://www.rsyslog.com/ +[19]:https://www.elastic.co/products/logstash +[20]:http://www.fluentd.org/ +[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ +[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up +[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md +[24]:https://github.com/DataDog/the-monitor/issues diff --git a/translated/tech/20150717 How to collect NGINX metrics - Part 2.md b/translated/tech/20150717 How to collect NGINX metrics - Part 2.md deleted file mode 100644 index 848042bf2c..0000000000 --- a/translated/tech/20150717 How to collect NGINX metrics - Part 2.md +++ /dev/null @@ -1,237 +0,0 @@ - -如何收集NGINX指标 - 第2部分 -================================================================================ -![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_2.png) - -### 如何获取你所需要的NGINX指标 ### - -如何获取需要的指标取决于你正在使用的 NGINX 版本。(参见 [the companion article][1] 将深入探索NGINX指标。)免费,开源版的 NGINX 和商业版的 NGINX 都有指标度量的状态模块,NGINX 也可以在其日志中配置指标模块: - -注:表格 - ----- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Metric | NGINX (open-source) | NGINX Plus | NGINX logs |
|--------|---------------------|------------|------------|
| accepts / accepted | x | x | |
| handled | x | x | |
| dropped | x | x | |
| active | x | x | |
| requests / total | x | x | |
| 4xx codes | | x | x |
| 5xx codes | | x | x |
| request time | | | x |
- -#### 指标收集:NGINX(开源版) #### - -开源版的 NGINX 会显示几个与服务器状态有关的指标在状态页面上,只要你启用了 HTTP [stub status module][2] 。要检查模块是否被加载,运行以下命令: - - nginx -V 2>&1 | grep -o with-http_stub_status_module - -如果你看到 http_stub_status_module 被输出在终端,说明状态模块已启用。 - -如果该命令没有输出,你需要启用状态模块。你可以使用 --with-http_stub_status_module 参数去配置 [building NGINX from source][3]: - - ./configure \ - … \ - --with-http_stub_status_module - make - sudo make install - -验证模块已经启用或你自己启用它后,你还需要修改 NGINX 配置文件为状态页面设置本地访问的 URL(例如,/ nginx_status): - - server { - location /nginx_status { - stub_status on; - - access_log off; - allow 127.0.0.1; - deny all; - } - } - -注:nginx 配置中的 server 块通常并不在主配置文件中(例如,/etc/nginx/nginx.conf),但主配置中会加载补充的配置文件。要找到主配置文件,首先运行以下命令: - - nginx -t - -打开主配置文件,在以 http 模块结尾的附近查找以 include 开头的行包,如: - - include /etc/nginx/conf.d/*.conf; - -在所包含的配置文件中,你应该会找到主服务器模块,你可以如上所示修改 NGINX 的指标报告。更改任何配置后,通过执行以下命令重新加载配置文件: - - nginx -s reload - -现在,你可以查看指标的状态页: - - Active connections: 24 - server accepts handled requests - 1156958 1156958 4491319 - Reading: 0 Writing: 18 Waiting : 6 - -请注意,如果你正试图从远程计算机访问状态页面,则需要将远程计算机的 IP 地址添加到你的状态配置文件的白名单中,在上面的配置文件中 127.0.0.1 仅在白名单中。 - -nginx 的状态页面是一中查看指标快速又简单的方法,但当连续监测时,你需要每隔一段时间自动记录该数据。然后通过监控工具箱 [Nagios][4] 或者 [Datadog][5],以及收集统计信息的服务 [collectD][6] 来分析已保存的 NGINX 状态信息。 - -#### 指标收集: NGINX Plus #### - -商业版的 NGINX Plus 通过 ngx_http_status_module 提供的可用指标比开源版 NGINX 更多 [many more metrics][7] 。NGINX Plus 附加了更多的字节流指标,以及负载均衡系统和高速缓存的信息。NGINX Plus 还报告所有的 HTTP 状态码类型(1XX,2XX,3XX,4XX,5XX)的计数。一个简单的 NGINX Plus 状态报告 [here][8]。 - -![NGINX Plus status board](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/status_plus-2.png) - -*注: NGINX Plus 在状态仪表盘"Active”连接定义的收集指标的状态模块和开源 NGINX 的略有不同。在 NGINX Plus 指标中,活动连接不包括等待状态(又叫空闲连接)连接。* - -NGINX Plus 也集成了其他监控系统的报告 [JSON格式指标][9] 。用 NGINX Plus 时,你可以看到 [负载均衡服务器组的][10]指标和健康状况,或着再向下能取得的仅是响应代码计数[从单个服务器][11]在负载均衡服务器中: - {"1xx":0,"2xx":3483032,"3xx":0,"4xx":23,"5xx":0,"total":3483055} - -启动 NGINX Plus 指标仪表盘,你可以在 NGINX 配置文件的 http 块内添加状态 server 块。 ([参见上一页][12]查找相关的配置文件,收集开源 NGINX 版指标的说明。)例如,设立以下一个状态仪表盘在http://your.ip.address:8080/status.html 和一个 JSON 接口 http://your.ip.address:8080/status,可以添加以下 server block 来设定: - - server { - listen 8080; - root /usr/share/nginx/html; - - location /status { - status; - } - - location = /status.html { - } - } - -一旦你重新加载 NGINX 配置,状态页就会被加载: - - nginx -s reload - -关于如何配置扩展状态模块,官方 NGINX Plus 文档有 [详细介绍][13] 。 - -#### 指标收集:NGINX日志 #### - -NGINX 的 [日志模块][14] 写到配置可以自定义访问日志到指定文件。你可以自定义日志的格式和时间通过 [添加或移除变量][15]。捕获日志的详细信息,最简单的方法是添加下面一行在你配置文件的server 块中(参见[此节][16] 通过加载配置文件的信息来收集开源 NGINX 的指标): - - access_log logs/host.access.log combined; - -更改 NGINX 配置文件后,必须要重新加载配置文件: - - nginx -s reload - -“combined” 的日志格式,只包含默认参数,包括[一些关键数据][17],如实际的 HTTP 请求和相应的响应代码。在下面的示例日志中,NGINX 记录了200(成功)状态码当请求 /index.html 时和404(未找到)错误不存在的请求文件 /fail。 - - 127.0.0.1 - - [19/Feb/2015:12:10:46 -0500] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari 537.36" - - 127.0.0.1 - - [19/Feb/2015:12:11:05 -0500] "GET /fail HTTP/1.1" 404 570 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" - -你可以记录请求处理的时间通过添加一个新的日志格式在 NGINX 配置文件中的 http 块: - - log_format nginx '$remote_addr - $remote_user [$time_local] ' - '"$request" $status $body_bytes_sent $request_time ' - '"$http_referer" "$http_user_agent"'; - -通过修改配置文件中 server 块的 access_log 行: - - access_log logs/host.access.log nginx; - -重新加载配置文件(运行 nginx -s 
reload)后,你的访问日志将包括响应时间,如下图所示。单位为秒,毫秒。在这种情况下,服务器接收 /big.pdf 的请求时,发送33973115字节后返回206(成功)状态码。处理请求用时0.202秒(202毫秒): - - 127.0.0.1 - - [19/Feb/2015:15:50:36 -0500] "GET /big.pdf HTTP/1.1" 206 33973115 0.202 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36" - -你可以使用各种工具和服务来收集和分析 NGINX 日志。例如,[rsyslog][18] 可以监视你的日志,并将其传递给多个日志分析服务;你也可以使用免费的开源工具,如[logstash][19]来收集和分析日志;或者你可以使用一个统一日志记录层,如[Fluentd][20]来收集和分析你的 NGINX 日志。 - -### 结论 ### - -监视 NGINX 的哪一项指标将取决于你提供的工具,以及是否由给定指标证明监控指标的开销。例如,通过收集和分析日志来定位问题是非常重要的在 NGINX Plus 或者 运行的系统中。 - -在 Datadog 中,我们已经集成了 NGINX 和 NGINX Plus,这样你就可以以最小的设置来收集和监控所有 Web 服务器的指标。了解如何用 NGINX Datadog来监控 [在本文中][21],并开始使用 [免费的Datadog][22]。 - ----------- - -原文在这 [on GitHub][23]。问题,更正,补充等?请[让我们知道][24]。 - --------------------------------------------------------------------------------- - -via: https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/ - -作者:K Young -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ -[2]:http://nginx.org/en/docs/http/ngx_http_stub_status_module.html -[3]:http://wiki.nginx.org/InstallOptions -[4]:https://exchange.nagios.org/directory/Plugins/Web-Servers/nginx -[5]:http://docs.datadoghq.com/integrations/nginx/ -[6]:https://collectd.org/wiki/index.php/Plugin:nginx -[7]:http://nginx.org/en/docs/http/ngx_http_status_module.html#data -[8]:http://demo.nginx.com/status.html -[9]:http://demo.nginx.com/status -[10]:http://demo.nginx.com/status/upstreams/demoupstreams -[11]:http://demo.nginx.com/status/upstreams/demoupstreams/0/responses -[12]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source -[13]:http://nginx.org/en/docs/http/ngx_http_status_module.html#example -[14]:http://nginx.org/en/docs/http/ngx_http_log_module.html -[15]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format -[16]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source -[17]:http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format -[18]:http://www.rsyslog.com/ -[19]:https://www.elastic.co/products/logstash -[20]:http://www.fluentd.org/ -[21]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ -[22]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#sign-up -[23]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_collect_nginx_metrics.md -[24]:https://github.com/DataDog/the-monitor/issues From 843f0e99478af5d7c47fd3e774c79aacfff4732f Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Mon, 10 Aug 2015 15:01:28 +0800 Subject: [PATCH 168/207] [Translated]20150209 Install OpenQRM Cloud Computing Platform In Debian.md --- ...nQRM Cloud Computing Platform In Debian.md | 151 ------------------ ...nQRM Cloud Computing Platform In Debian.md | 148 +++++++++++++++++ 2 files changed, 148 insertions(+), 151 deletions(-) delete mode 100644 sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md create mode 100644 translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md diff --git a/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md b/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md deleted file mode 100644 index 2c6a990b83..0000000000 --- a/sources/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md +++ /dev/null @@ -1,151 +0,0 @@ 
-FSSlc translating - -Install OpenQRM Cloud Computing Platform In Debian -================================================================================ -### Introduction ### - -**openQRM** is a web-based open source Cloud computing and datacenter management platform that integrates flexibly with existing components in enterprise data centers. - -It supports the following virtualization technologies: - -- KVM, -- XEN, -- Citrix XenServer, -- VMWare ESX, -- LXC, -- OpenVZ. - -The Hybrid Cloud Connector in openQRM supports a range of private or public cloud providers to extend your infrastructure on demand via **Amazon AWS**, **Eucalyptus** or **OpenStack**. It, also, automates provisioning, virtualization, storage and configuration management, and it takes care of high-availability. A self-service cloud portal with integrated billing system enables end-users to request new servers and application stacks on-demand. - -openQRM is available in two different flavours such as: - -- Enterprise Edition -- Community Edition - -You can view the difference between both editions [here][1]. - -### Features ### - -- Private/Hybrid Cloud Computing Platform; -- Manages physical and virtualized server systems; -- Integrates with all major open and commercial storage technologies; -- Cross-platform: Linux, Windows, OpenSolaris, and *BSD; -- Supports KVM, XEN, Citrix XenServer, VMWare ESX(i), lxc, OpenVZ and VirtualBox; -- Support for Hybrid Cloud setups using additional Amazon AWS, Eucalyptus, Ubuntu UEC cloud resources; -- Supports P2V, P2P, V2P, V2V Migrations and High-Availability; -- Integrates with the best Open Source management tools – like puppet, nagios/Icinga or collectd; -- Over 50 plugins for extended features and integration with your infrastructure; -- Self-Service Portal for end-users; -- Integrated billing system. - -### Installation ### - -Here, we will install openQRM in Ubuntu 14.04 LTS. Your server must atleast meet the following requirements. - -- 1 GB RAM; -- 100 GB Hdd; -- Optional: Virtualization enabled (VT for Intel CPUs or AMD-V for AMD CPUs) in Bios. - -First, install make package to compile openQRM source package. - - sudo apt-get update - sudo apt-get upgrade - sudo apt-get install make - -Then, run the following commands one by one to install openQRM. - -Download the latest available version [from here][2]. - - wget http://sourceforge.net/projects/openqrm/files/openQRM-Community-5.1/openqrm-community-5.1.tgz - - tar -xvzf openqrm-community-5.1.tgz - - cd openqrm-community-5.1/src/ - - sudo make - - sudo make install - - sudo make start - -During installation, you’ll be asked to update the php.ini file. - -![~-openqrm-community-5.1-src_001](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_001.png) - -Enter mysql root user password. - -![~-openqrm-community-5.1-src_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_002.png) - -Re-enter password: - -![~-openqrm-community-5.1-src_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_003.png) - -Select the mail server configuration type. - -![~-openqrm-community-5.1-src_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_004.png) - -If you’re not sure, select Local only. In our case, I go with **Local only** option. 
- -![~-openqrm-community-5.1-src_005](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_005.png) - -Enter your system mail name, and finally enter the Nagios administration password. - -![~-openqrm-community-5.1-src_007](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_007.png) - -The above commands will take long time depending upon your Internet connection to download all packages required to run openQRM. Be patient. - -Finally, you’ll get the openQRM configuration URL along with username and password. - -![~_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@debian-_002.png) - -### Configuration ### - -After installing openQRM, open up your web browser and navigate to the URL: **http://ip-address/openqrm**. - -For example, in my case http://192.168.1.100/openqrm. - -The default username and password is: **openqrm/openqrm**. - -![Mozilla Firefox_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/Mozilla-Firefox_003.png) - -Select a network card to use for the openQRM management network. - -![openQRM Server - Mozilla Firefox_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_004.png) - -Select a database type. In our case, I selected mysql. - -![openQRM Server - Mozilla Firefox_006](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_006.png) - -Now, configure the database connection and initialize openQRM. Here, I use **openQRM** as database name, and user as **root** and debian as password for the database. Be mindful that you should enter the mysql root user password that you have created while installing openQRM. - -![openQRM Server - Mozilla Firefox_012](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_012.png) - -Congratulations!! openQRM has been installed and configured. - -![openQRM Server - Mozilla Firefox_013](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_013.png) - -### Update openQRM ### - -To update openQRM at any time run the following command: - - cd openqrm/src/ - make update - -What we have done so far is just installed and configured openQRM in our Ubuntu server. For creating, running Virtual Machines, managing Storage, integrating additional systems and running your own private Cloud, I suggest you to read the [openQRM Administrator Guide][3]. - -That’s all now. Cheers! Happy weekend!! 
--------------------------------------------------------------------------------

via: http://www.unixmen.com/install-openqrm-cloud-computing-platform-debian/

作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

[a]:http://www.unixmen.com/author/sk/
[1]:http://www.openqrm-enterprise.com/products/edition-comparison.html
[2]:http://sourceforge.net/projects/openqrm/files/?source=navbar
[3]:http://www.openqrm-enterprise.com/fileadmin/Documents/Whitepaper/openQRM-Enterprise-Administrator-Guide-5.2.pdf
diff --git a/translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md b/translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md
new file mode 100644
index 0000000000..2eacc933b9
--- /dev/null
+++ b/translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md
@@ -0,0 +1,148 @@
+在 Debian 中安装 OpenQRM 云计算平台
================================================================================
### 简介 ###

**openQRM** 是一个基于 Web 的开源云计算和数据中心管理平台,可灵活地与企业数据中心的现存组件集成。

它支持下列虚拟化技术:

- KVM
- XEN
- Citrix XenServer
- VMWare ESX
- LXC
- OpenVZ

openQRM 中的混合云连接器支持通过 **Amazon AWS**、**Eucalyptus** 或 **OpenStack** 等一系列私有或公有云提供商,按需扩展你的基础设施。它还能自动地进行资源调配、虚拟化、存储和配置管理,并关注高可用性。集成计费系统的自助服务云门户可使终端用户按需请求新的服务器和应用堆栈。

openQRM 有两种不同的版本可供获取:

- 企业版
- 社区版

你可以在[这里][1]查看这两个版本间的区别。

### 特点 ###

- 私有/混合云计算平台;
- 可管理物理或虚拟的服务器系统;
- 可与所有主流的开源或商业的存储技术集成;
- 跨平台:Linux、Windows、OpenSolaris 和 BSD;
- 支持 KVM、XEN、Citrix XenServer、VMWare ESX(i)、lxc、OpenVZ 和 VirtualBox;
- 支持使用额外的 Amazon AWS、Eucalyptus、Ubuntu UEC 等云资源来进行混合云设置;
- 支持 P2V、P2P、V2P、V2V 迁移和高可用性;
- 集成了最好的开源管理工具,如 puppet、nagios/Icinga 或 collectd;
- 有超过 50 个插件来支持扩展功能并与你的基础设施集成;
- 针对终端用户的自助门户;
- 集成计费系统。

### 安装 ###

在这里,我们将在 Debian 7.5 上安装 openQRM。你的服务器必须至少满足以下要求:

- 1 GB RAM;
- 100 GB HDD(硬盘);
- 可选:BIOS 支持虚拟化(Intel CPU 的 VT 或 AMD CPU 的 AMD-V),可参考下面的检查示例。
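下面是一个检查虚拟化支持的简单方法(演示用的假设命令,并非原文内容):如果输出的数字大于 0,说明 CPU 具备 VT 或 AMD-V 能力(BIOS 中是否已启用仍需自行确认):

    # 假设示例:统计 /proc/cpuinfo 中的虚拟化标志(vmx 对应 Intel VT,svm 对应 AMD-V)
    egrep -c '(vmx|svm)' /proc/cpuinfo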
+ +首先,安装 `make` 软件包来编译 openQRM 源码包: + + sudo apt-get update + sudo apt-get upgrade + sudo apt-get install make + +然后,逐次运行下面的命令来安装 openQRM。 + +从[这里][2] 下载最新的可用版本: + + wget http://sourceforge.net/projects/openqrm/files/openQRM-Community-5.1/openqrm-community-5.1.tgz + + tar -xvzf openqrm-community-5.1.tgz + + cd openqrm-community-5.1/src/ + + sudo make + + sudo make install + + sudo make start + +安装期间,你将被询问去更新文件 `php.ini` + +![~-openqrm-community-5.1-src_001](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_001.png) + +输入 mysql root 用户密码。 + +![~-openqrm-community-5.1-src_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_002.png) + +再次输入密码: + +![~-openqrm-community-5.1-src_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_003.png) + +选择邮件服务器配置类型。 + +![~-openqrm-community-5.1-src_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_004.png) + +假如你不确定该如何选择,可选择 `Local only`。在我们的这个示例中,我选择了 **Local only** 选项。 + +![~-openqrm-community-5.1-src_005](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_005.png) + +输入你的系统邮件名称,并最后输入 Nagios 管理员密码。 + +![~-openqrm-community-5.1-src_007](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_007.png) + +根据你的网络连接状态,上面的命令可能将花费很长的时间来下载所有运行 openQRM 所需的软件包,请耐心等待。 + +最后你将得到 openQRM 配置 URL 地址以及相关的用户名和密码。 + +![~_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@debian-_002.png) + +### 配置 ### + +在安装完 openQRM 后,打开你的 Web 浏览器并转到 URL: **http://ip-address/openqrm** + +例如,在我的示例中为 http://192.168.1.100/openqrm 。 + +默认的用户名和密码是: **openqrm/openqrm** 。 + +![Mozilla Firefox_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/Mozilla-Firefox_003.png) + +选择一个网卡来给 openQRM 管理网络使用。 + +![openQRM Server - Mozilla Firefox_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_004.png) + +选择一个数据库类型,在我们的示例中,我选择了 mysql。 + +![openQRM Server - Mozilla Firefox_006](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_006.png) + +现在,配置数据库连接并初始化 openQRM, 在这里,我使用 **openQRM** 作为数据库名称, **root** 作为用户的身份,并将 debian 作为数据库的密码。 请小心,你应该输入先前在安装 openQRM 时创建的 mysql root 用户密码。 + +![openQRM Server - Mozilla Firefox_012](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_012.png) + +祝贺你!! openQRM 已经安装并配置好了。 + +![openQRM Server - Mozilla Firefox_013](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_013.png) + +### 更新 openQRM ### + +在任何时候可以使用下面的命令来更新 openQRM: + + cd openqrm/src/ + make update + +到现在为止,我们做的只是在我们的 Ubuntu 服务器中安装和配置 openQRM, 至于 创建、运行虚拟,管理存储,额外的系统集成和运行你自己的私有云等内容,我建议你阅读 [openQRM 管理员指南][3]。 + +就是这些了,欢呼吧!周末快乐! 
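最后补充一个小技巧(演示用的假设示例,并非原文内容):每次更新或重启 openQRM 之后,可以用 curl 快速确认 Web 门户是否正常响应。IP 地址请替换成你自己服务器的地址:

    # 假设示例:检查 openQRM 门户是否返回 HTTP 响应头(这里沿用上文示例中的 192.168.1.100)
    curl -I http://192.168.1.100/openqrm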
+-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/install-openqrm-cloud-computing-platform-debian/ + +作者:[SK][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.unixmen.com/author/sk/ +[1]:http://www.openqrm-enterprise.com/products/edition-comparison.html +[2]:http://sourceforge.net/projects/openqrm/files/?source=navbar +[3]:http://www.openqrm-enterprise.com/fileadmin/Documents/Whitepaper/openQRM-Enterprise-Administrator-Guide-5.2.pdf \ No newline at end of file From 1664191a8aa7b3137e1f39eb0756e4e60b2e35b1 Mon Sep 17 00:00:00 2001 From: xiaoyu33 <1136299502@qq.com> Date: Mon, 10 Aug 2015 16:50:55 +0800 Subject: [PATCH 169/207] Update 20150810 For Linux, Supercomputers R Us.md add "Translating by xiaoyu33" --- sources/talk/20150810 For Linux, Supercomputers R Us.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/talk/20150810 For Linux, Supercomputers R Us.md b/sources/talk/20150810 For Linux, Supercomputers R Us.md index 7bc48125f0..8f7302cca1 100644 --- a/sources/talk/20150810 For Linux, Supercomputers R Us.md +++ b/sources/talk/20150810 For Linux, Supercomputers R Us.md @@ -1,3 +1,4 @@ +Translating by xiaoyu33 For Linux, Supercomputers R Us ================================================================================ ![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) @@ -56,4 +57,4 @@ via: [12]:http://www.google.com/url?q=http%3A%2F%2Fen.intelextrememasters.com%2Fnews%2Fcheck-out-the-intel-extreme-masters-katowice-prize-money-distribution%2F&sa=D&sntz=1&usg=AFQjCNE6yoAGGz-Hpi2tPF4gdhuPBEckhQ [13]:http://www.beowulf.org/overview/history.html [14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html -[15]:http://www.beowulf.org/ \ No newline at end of file +[15]:http://www.beowulf.org/ From 219731a082612b418fd451207170d39264c3f67e Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 11 Aug 2015 07:34:03 +0800 Subject: [PATCH 170/207] Update RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...th Nano and Vim or Analyzing text with grep and regexps.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md b/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md index 1529fecf2e..f3de8528fc 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Editing Text Files with Nano and Vim / Analyzing text with grep and regexps – Part 4 ================================================================================ Every system administrator has to deal with text files as part of his daily responsibilities. That includes editing existing files (most likely configuration files), or creating new ones. 
It has been said that if you want to start a holy war in the Linux world, you can ask sysadmins what their favorite text editor is and why. We are not going to do that in this article, but will present a few tips that will be helpful to use two of the most widely used text editors in RHEL 7: nano (due to its simplicity and ease of use, especially for new users), and vi/m (due to its several features that convert it into more than a simple editor). I am sure that you can find many more reasons to use one or the other, or perhaps some other editor such as emacs or pico. It's entirely up to you.
@@ -251,4 +253,4 @@ via: http://www.tecmint.com/rhcsa-exam-how-to-use-nano-vi-editors/
 [2]:http://www.tecmint.com/file-and-directory-management-in-linux/
 [3]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
 [4]:http://www.nano-editor.org/
-[5]:http://www.vim.org/
\ No newline at end of file
+[5]:http://www.vim.org/
From d0f66e61773cc0d995e0f3b64495f965d742e416 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Tue, 11 Aug 2015 09:31:21 +0800
Subject: Rename sources/tech/20150730 How to Setup iTOP (IT Operational
 Portal) on CentOS 7.md to translated/tech/20150730 How to Setup iTOP (IT
 Operational Portal) on CentOS 7.md
---
 ...50730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {sources => translated}/tech/20150730 How to Setup iTOP (IT Operational Portal) on CentOS 7.md (100%)

From 83c752ac30a52faf8a0d2b772c2d00180e648f2a Mon Sep 17 00:00:00 2001
From: Ping
Date: Mon, 10 Aug 2015 17:27:59 +0800
Subject: Complete 20150518 How to set up a Replica Set on MongoDB.md
---
 ... How to set up a Replica Set on MongoDB.md | 183 ------------------
 ... How to set up a Replica Set on MongoDB.md | 183 ++++++++++++++++++
 2 files changed, 183 insertions(+), 183 deletions(-)
 delete mode 100644 sources/tech/20150518 How to set up a Replica Set on MongoDB.md
 create mode 100644 translated/tech/20150518 How to set up a Replica Set on MongoDB.md

diff --git a/sources/tech/20150518 How to set up a Replica Set on MongoDB.md b/sources/tech/20150518 How to set up a Replica Set on MongoDB.md
deleted file mode 100644
index 83a7da8769..0000000000
--- a/sources/tech/20150518 How to set up a Replica Set on MongoDB.md
+++ /dev/null
@@ -1,183 +0,0 @@
-Translating by Ping
How to set up a Replica Set on MongoDB
================================================================================
MongoDB has become the most famous NoSQL database on the market. MongoDB is document-oriented, and its schema-free design makes it a really attractive solution for all kinds of web applications. One of the features that I like the most is Replica Set, where multiple copies of the same data set are maintained by a group of mongod nodes for redundancy and high availability.

This tutorial describes how to configure a Replica Set on MongoDB.

The most common configuration for a Replica Set involves one primary and multiple secondary nodes. The replication will then be initiated from the primary toward the secondaries.
Replica Sets can not only provide database protection against unexpected hardware failure and service downtime, but also improve the read throughput of database clients, as they can be configured to read from different nodes.

### Set up the Environment ###

In this tutorial, we are going to set up a Replica Set with one primary and two secondary nodes.

![](https://farm8.staticflickr.com/7667/17801038505_529a5224a1.jpg)

In order to implement this lab, we will use three virtual machines (VMs) running on VirtualBox. I am going to install Ubuntu 14.04 on the VMs, and install the official packages for MongoDB.

I am going to set up the necessary environment on one VM instance, and then clone it to the other two VM instances. Thus pick one VM, name it master, and perform the following installations.

First, we need to add the MongoDB key for apt:

    $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

Then we need to add the official MongoDB repository to our source.list:

    $ sudo su
    # echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list

Let's update repositories and install MongoDB.

    $ sudo apt-get update
    $ sudo apt-get install -y mongodb-org

Now let's make some changes in /etc/mongodb.conf.

    auth = true
    dbpath=/var/lib/mongodb
    logpath=/var/log/mongodb/mongod.log
    logappend=true
    keyFile=/var/lib/mongodb/keyFile
    replSet=myReplica

The first line is to make sure that we are going to have authentication on our database. keyFile sets up the keyfile that is going to be used by MongoDB to replicate between nodes. replSet sets up the name of our replica set.

Now we are going to create our keyfile, so that it can be copied to all our instances.

    $ echo -n "MyRandomStringForReplicaSet" | md5sum > keyFile

This will create a keyfile that contains an MD5 string, but it has some noise that we need to clean up before using it in MongoDB. Use the following command to clean it up:

    $ echo -n "MyRandomStringForReplicaSet" | md5sum | grep -o "[0-9a-z]\+" > keyFile

What the grep command does is print the MD5 string with no spaces or other characters that we don't want.

Now we are going to make the keyfile ready for use:

    $ sudo cp keyFile /var/lib/mongodb
    $ sudo chown mongodb:nogroup /var/lib/mongodb/keyFile
    $ sudo chmod 400 /var/lib/mongodb/keyFile

Now we have our Ubuntu VM ready to be cloned. Power it off, and clone it to the other VMs.

![](https://farm9.staticflickr.com/8729/17800903865_9876a9cc9c.jpg)

I name the cloned VMs secondary1 and secondary2. Make sure to reinitialize the MAC address of the cloned VMs and clone full disks.

![](https://farm6.staticflickr.com/5333/17613392900_6de45c9450.jpg)

All three VM instances should be on the same network to communicate with each other. For this, we are going to attach all three VMs to a VirtualBox "Internal Network".

It is recommended that each VM instance be assigned a static IP address, as opposed to a DHCP IP address, so that the VMs will not lose connectivity among themselves when a DHCP server assigns different IP addresses to them.

Let's edit /etc/network/interfaces of each VM as follows.
On primary:

    auto eth1
    iface eth1 inet static
    address 192.168.50.2
    netmask 255.255.255.0

On secondary1:

    auto eth1
    iface eth1 inet static
    address 192.168.50.3
    netmask 255.255.255.0

On secondary2:

    auto eth1
    iface eth1 inet static
    address 192.168.50.4
    netmask 255.255.255.0

Another file that needs to be set up is /etc/hosts, because we don't have DNS. We need to set the hostnames in /etc/hosts.

On primary:

    127.0.0.1 localhost primary
    192.168.50.2 primary
    192.168.50.3 secondary1
    192.168.50.4 secondary2

On secondary1:

    127.0.0.1 localhost secondary1
    192.168.50.2 primary
    192.168.50.3 secondary1
    192.168.50.4 secondary2

On secondary2:

    127.0.0.1 localhost secondary2
    192.168.50.2 primary
    192.168.50.3 secondary1
    192.168.50.4 secondary2

Check connectivity among the VMs by using the ping command:

    $ ping primary
    $ ping secondary1
    $ ping secondary2

### Set up a Replica Set ###

After verifying connectivity among the VMs, we can go ahead and create the admin user so that we can start working on the Replica Set.

On the primary node, open /etc/mongodb.conf, and comment out the two lines that start with auth and replSet:

    dbpath=/var/lib/mongodb
    logpath=/var/log/mongodb/mongod.log
    logappend=true
    #auth = true
    keyFile=/var/lib/mongodb/keyFile
    #replSet=myReplica

Restart the mongod daemon.

    $ sudo service mongod restart

Create an admin user after connecting to MongoDB. The password must match the one passed to db.auth() below, and db.createUser() requires a roles field:

    > use admin
    > db.createUser({
      user:"admin",
      pwd:"myreallyhardpassword",
      roles:[{role:"root",db:"admin"}]
    })

Now uncomment the auth and replSet lines in /etc/mongodb.conf again, and restart the mongod daemon so that authentication and replication are enabled:

    $ sudo service mongod restart

Connect to MongoDB and use these commands to add secondary1 and secondary2 to our Replica Set.

    > use admin
    > db.auth("admin","myreallyhardpassword")
    > rs.initiate()
    > rs.add("secondary1:27017")
    > rs.add("secondary2:27017")

Now that we have our Replica Set, we can start working on our project. Consult the [official driver documentation][1] to see how to connect to a Replica Set. In case you want to query from the shell, you have to connect to the primary instance to insert or query the database. Secondary nodes will not let you do that. If you attempt to access the database on a secondary node, you will get this error message:

    myReplica:SECONDARY>
    myReplica:SECONDARY> show databases
    2015-05-10T03:09:24.131+0000 E QUERY    Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
    at Error ()
    at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
    at shellHelper.show (src/mongo/shell/utils.js:630:33)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47

I hope you find this tutorial useful. You can use Vagrant to automate your local environments and help you code faster.
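As a closing reference, here is a minimal sketch (a hypothetical example assembled from the hostnames, replica set name and credentials used in this tutorial, not part of the original article) of the connection URI that most MongoDB drivers accept, plus the shell command that reports the state of each member:

    # Hypothetical example: a driver-style connection URI for this Replica Set
    mongodb://admin:myreallyhardpassword@primary:27017,secondary1:27017,secondary2:27017/admin?replicaSet=myReplica

    # From a mongo shell connected to the primary, rs.status() shows each member's state
    > rs.status()

rs.status() is handy for confirming that secondary1 and secondary2 eventually show up in state SECONDARY.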
--------------------------------------------------------------------------------

via: http://xmodulo.com/setup-replica-set-mongodb.html

作者:[Christopher Valerio][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/valerio
[1]:http://docs.mongodb.org/ecosystem/drivers/
diff --git a/translated/tech/20150518 How to set up a Replica Set on MongoDB.md b/translated/tech/20150518 How to set up a Replica Set on MongoDB.md
new file mode 100644
index 0000000000..44b8535b82
--- /dev/null
+++ b/translated/tech/20150518 How to set up a Replica Set on MongoDB.md
@@ -0,0 +1,183 @@
+如何配置MongoDB副本集(Replica Set)
================================================================================
MongoDB已经成为市面上最知名的NoSQL数据库。MongoDB是面向文档的,它的无模式设计使得它在各种各样的WEB应用当中广受欢迎。最让我喜欢的特性之一是它的副本集(Replica Set),副本集将同一数据的多份拷贝保存在一组mongod节点上,从而实现数据的冗余和高可用性。

这篇教程将向你介绍如何配置一个MongoDB副本集。

副本集最常见的配置是一个主节点加多个副节点,复制行为由主节点发起,流向各个副节点。副本集不但可以在意外的硬件故障和停机事件中对数据库提供保护,还能提高数据库客户端的读取吞吐量,因为客户端可以被配置为从不同的节点读取数据。

### 配置环境 ###

这个教程里,我们会配置一个包括一个主节点以及两个副节点的副本集。

![](https://farm8.staticflickr.com/7667/17801038505_529a5224a1.jpg)

为了达到这个目的,我们使用了3个运行在VirtualBox上的虚拟机。我会在这些虚拟机上安装Ubuntu 14.04,并且安装MongoDB官方软件包。

我会先在一个虚拟机实例上配置好需要的环境,然后将它克隆到其他的虚拟机实例上。因此,选择一个名为master的虚拟机,执行以下安装过程。

首先,我们需要在apt中增加一个MongoDB密钥:

    $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

然后,将官方的MongoDB仓库添加到source.list中:

    $ sudo su
    # echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list

接下来更新apt仓库并且安装MongoDB。

    $ sudo apt-get update
    $ sudo apt-get install -y mongodb-org

现在对/etc/mongodb.conf做一些更改:

    auth = true
    dbpath=/var/lib/mongodb
    logpath=/var/log/mongodb/mongod.log
    logappend=true
    keyFile=/var/lib/mongodb/keyFile
    replSet=myReplica

第一行的作用是开启数据库的身份验证。keyFile用来指定MongoDB节点间复制行为所使用的密钥文件。replSet用来为副本集设置一个名称。

接下来我们创建一个密钥文件,以便把它复制到所有的实例上。

    $ echo -n "MyRandomStringForReplicaSet" | md5sum > keyFile

这将会创建一个含有MD5字符串的密钥文件,但是其中包含了一些噪音,我们需要清理之后才能正式在MongoDB中使用。

    $ echo -n "MyRandomStringForReplicaSet" | md5sum | grep -o "[0-9a-z]\+" > keyFile

grep命令的作用是把空格等我们不想要的内容过滤掉,只把MD5字符串打印出来。

现在我们对密钥文件进行一些操作,让它真正可用。

    $ sudo cp keyFile /var/lib/mongodb
    $ sudo chown mongodb:nogroup /var/lib/mongodb/keyFile
    $ sudo chmod 400 /var/lib/mongodb/keyFile

接下来,关闭此虚拟机。将其Ubuntu系统克隆到其他虚拟机上。

![](https://farm9.staticflickr.com/8729/17800903865_9876a9cc9c.jpg)

这是克隆后的副节点1和副节点2。确认你已经将它们的MAC地址重新初始化,并且克隆了整个硬盘。

![](https://farm6.staticflickr.com/5333/17613392900_6de45c9450.jpg)

请注意,三个虚拟机实例需要在同一个网络中以便相互通讯。因此,我们需要把这三台虚拟机都连接到VirtualBox的同一个"内部网络(Internal Network)"中。

这里推荐给每个虚拟机设置一个静态IP地址,而不是使用DHCP。这样它们就不至于在DHCP分配IP地址给它们的时候失去连接。

像下面这样编辑每个虚拟机的/etc/network/interfaces文件。

在主节点上:

    auto eth1
    iface eth1 inet static
    address 192.168.50.2
    netmask 255.255.255.0

在副节点1上:

    auto eth1
    iface eth1 inet static
    address 192.168.50.3
    netmask 255.255.255.0

在副节点2上:

    auto eth1
    iface eth1 inet static
    address 192.168.50.4
    netmask 255.255.255.0

由于我们没有DNS服务,所以需要设置一下/etc/hosts这个文件,手工将主机名称放到此文件中。

在主节点上:

    127.0.0.1 localhost primary
    192.168.50.2 primary
    192.168.50.3 secondary1
    192.168.50.4 secondary2

在副节点1上:

    127.0.0.1 localhost secondary1
    192.168.50.2 primary
    192.168.50.3 secondary1
    192.168.50.4 secondary2

在副节点2上:

    127.0.0.1 localhost secondary2
    192.168.50.2 primary
    192.168.50.3 secondary1
    192.168.50.4 secondary2

使用ping命令检查各个节点之间的连接:

    $ ping primary
    $ ping secondary1
    $ ping secondary2

### 配置副本集 ###

验证各个节点可以正常连通后,我们就可以新建一个管理员用户,用于之后的副本集操作。

在主节点上,打开/etc/mongodb.conf文件,将auth和replSet两项注释掉:

    dbpath=/var/lib/mongodb
    logpath=/var/log/mongodb/mongod.log
    logappend=true
    #auth = true
    keyFile=/var/lib/mongodb/keyFile
    #replSet=myReplica

重启mongod进程:

    $ sudo service mongod restart

连接MongoDB后,新建管理员用户(这里的密码要与后面db.auth使用的密码一致,且db.createUser要求提供roles字段):

    > use admin
    > db.createUser({
      user:"admin",
      pwd:"myreallyhardpassword",
      roles:[{role:"root",db:"admin"}]
    })

然后重新打开/etc/mongodb.conf,取消之前对auth和replSet两行的注释,再次重启mongod进程,以启用身份验证和复制功能:

    $ sudo service mongod restart

连接到MongoDB,用以下命令将secondary1和secondary2节点添加到我们的副本集中。

    > use admin
    > db.auth("admin","myreallyhardpassword")
    > rs.initiate()
    > rs.add("secondary1:27017")
    > rs.add("secondary2:27017")


现在我们的副本集已经就绪,可以开始项目了。参照[官方驱动文档][1]来了解如何连接到副本集。如果你想要用Shell来请求数据,那么你需要连接到主节点上来插入或者请求数据,副节点不行。如果你执意要尝试在副节点上操作,那么以下错误信息就蹦出来招呼你了。

    myReplica:SECONDARY>
    myReplica:SECONDARY> show databases
    2015-05-10T03:09:24.131+0000 E QUERY    Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
    at Error ()
    at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
    at shellHelper.show (src/mongo/shell/utils.js:630:33)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47

希望这篇教程能对你有所帮助。你可以使用Vagrant来自动完成本地环境的配置,从而加快你的开发速度。

--------------------------------------------------------------------------------

via: http://xmodulo.com/setup-replica-set-mongodb.html

作者:[Christopher Valerio][a]
译者:[mr-ping](https://github.com/mr-ping)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://xmodulo.com/author/valerio
[1]:http://docs.mongodb.org/ecosystem/drivers/
From 5adf534ccbbdb8fc2db27e242835ed78f8ba9220 Mon Sep 17 00:00:00 2001
From: ZTinoZ
Date: Tue, 11 Aug 2015 10:13:13 +0800
Subject: Delete translated article
---
 .../20150806 5 heroes of the Linux world.md   | 99 -------------------
 1 file changed, 99 deletions(-)
 delete mode 100644 sources/talk/20150806 5 heroes of the Linux world.md

diff --git a/sources/talk/20150806 5 heroes of the Linux world.md b/sources/talk/20150806 5 heroes of the Linux world.md
deleted file mode 100644
index abc42df7f9..0000000000
--- a/sources/talk/20150806 5 heroes of the Linux world.md
+++ /dev/null
@@ -1,99 +0,0 @@
-Linux世界的五个大神
================================================================================
这些人是谁?见或者没见过?谁在每天影响着我们?
-
-![Image courtesy Christopher Michel/Flickr](http://core0.staticworld.net/images/article/2015/07/penguin-100599348-orig.jpg)
-Image courtesy [Christopher Michel/Flickr][1]
-
-### 野心勃勃的企鹅 ###
-
-Linux和开源世界一直在被那些热情洋溢的人们推动着,他们开发出最好的软件并将代码向公众开放,所以每个人都能无条件地看到。(对了,有那么一个条件,那就是许可证。)
-
-那么,这些人是谁?这些Linux世界里的大神们,谁在每天影响着我们?让我来给你一一揭晓。
-
-![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/swap-klaus-100599357-orig.jpg)
-Image courtesy Swapnil Bhartiya
-
-### Klaus Knopper ###
-
-Klaus Knopper,一个生活在德国的奥地利开发者,他是Knoppix和Adriana Linux的创始人,为了他失明的妻子开发程序。
-
-Knoppix在那些Linux用户心里有着特殊的地位,他们在使用Ubuntu之前都会尝试Knoppix,而Knoppix让人称道的就是它让Live CD的概念普及开来。不像Windows或Mac OS X,你可以通过CD运行整个操作系统而不用在系统上安装任何东西,它允许新用户在他们的机子上快速试用Linux而不用去格式化硬盘。Linux的这种Live特性为它的普及做出了巨大贡献。
-
-![Image courtesy Fórum Internacional Software Live/Flickr](http://images.techhive.com/images/article/2015/07/lennart-100599356-orig.jpg)
-Image courtesy [Fórum Internacional Software Live/Flickr][2]
-
-### Lennart Poettering ###
-
-Lennart Poettering is yet another genius from Germany. He has written so many core components of a Linux (as well as BSD) system that it's hard to keep track. Most of his work is towards the successors of aging or broken components of the Linux systems.
-
-Poettering wrote the modern init system systemd, which shook the Linux world and created a [rift in the Debian community][3].
-
-While Linus Torvalds has no problems with systemd, and praises it, he is not a huge fan of the way systemd developers (including co-author Kay Sievers) respond to bug reports and criticism. At one point Linus said on the LKML (Linux Kernel Mailing List) that he would [never work with Sievers][4].
-
-Lennart is also the author of PulseAudio, the sound server on Linux, and Avahi, a zero-configuration networking (zeroconf) implementation.
-
-![Image courtesy Meego Com/Flickr](http://images.techhive.com/images/article/2015/07/jim-zemlin-100599362-orig.jpg)
-Image courtesy [Meego Com/Flickr][5]
-
-### Jim Zemlin ###
-
-Jim Zemlin isn't a developer, but as founder of The Linux Foundation he is certainly one of the most important figures of the Linux world.
-
-In 2007, The Linux Foundation was formed as a result of a merger between two open source bodies: the Free Standards Group and the Open Source Development Labs. Zemlin was the executive director of the Free Standards Group. Post-merger Zemlin became the executive director of The Linux Foundation and has held that position since.
-
-Under his leadership, The Linux Foundation has become the central figure in the modern IT world and plays a very critical role for the Linux ecosystem. In order to ensure that key developers like Torvalds and Kroah-Hartman can focus on Linux, the foundation sponsors them as fellows.
-
-Zemlin also made the foundation a bridge between companies so they can collaborate on Linux while at the same time competing in the market. The foundation also organizes many conferences around the world and [offers many courses for Linux developers][6].
-
-People may think of Zemlin as Linus Torvalds' boss, but he refers to himself as "Linus Torvalds' janitor."
-
-![Image courtesy Coscup/Flickr](http://images.techhive.com/images/article/2015/07/greg-kh-100599350-orig.jpg)
-Image courtesy [Coscup/Flickr][7]
-
-### Greg Kroah-Hartman ###
-
-Greg Kroah-Hartman is known as second-in-command of the Linux kernel.
The ‘gentle giant’ is the maintainer of the stable branch of the kernel and of the staging, USB, driver core, debugfs, kref, kobject, and [sysfs][8] kernel subsystems, along with many other components of a Linux system.
-
-He is also credited for many of the device drivers for Linux. One of his jobs is to travel around the globe, meet hardware makers and persuade them to make their drivers available for Linux. The next time you plug some random USB device into your system and it works out of the box, thank Kroah-Hartman. (Don't thank the distro. Some distros try to take credit for the work Kroah-Hartman or the Linux kernel did.)
-
-Kroah-Hartman previously worked for Novell and then joined the Linux Foundation as a fellow, alongside Linus Torvalds.
-
-Kroah-Hartman is the total opposite of Linus and never rants (at least publicly). One rare ripple was when he stated that [Canonical doesn't contribute much to the Linux kernel][9].
-
-On a personal level, Kroah-Hartman is extremely helpful to new developers and users and is easily accessible.
-
-![Image courtesy Swapnil Bhartiya](http://images.techhive.com/images/article/2015/07/linus-swapnil-100599349-orig.jpg)
-Image courtesy Swapnil Bhartiya
-
-### Linus Torvalds ###
-
-No collection of Linux heroes would be complete without Linus Torvalds. He is the author of the Linux kernel, the most used open source technology on the planet and beyond. His software powers everything from space stations to supercomputers, military drones to mobile devices and tiny smartwatches. Linus remains the authority on the Linux kernel and makes the final decision on which patches to merge into it.
-
-Linux isn't Torvalds' only contribution to open source. When he got fed up with the existing software revision control systems, which his kernel heavily relied on, he wrote his own, called Git. Git enjoys the same reputation as Linux; it is the most used version control system in the world.
-
-Torvalds is also a passionate scuba diver, and when he found no decent dive log software for Linux, he wrote his own and called it Subsurface.
-
-Torvalds is [well known for his rants][10] and once admitted that his ego is as big as a small planet. But he is also known for admitting his mistakes if he realizes he was wrong.
- --------------------------------------------------------------------------------- - -via: http://www.itworld.com/article/2955001/linux/5-heros-of-the-linux-world.html - -作者:[Swapnil Bhartiya][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.itworld.com/author/Swapnil-Bhartiya/ -[1]:https://flic.kr/p/siJ25M -[2]:https://flic.kr/p/uTzj54 -[3]:http://www.itwire.com/business-it-news/open-source/66153-systemd-fallout-two-debian-technical-panel-members-resign -[4]:http://www.linuxveda.com/2014/04/04/linus-torvalds-systemd-kay-sievers/ -[5]:https://flic.kr/p/9Lnhpu -[6]:http://www.itworld.com/article/2951968/linux/linux-foundation-offers-cheaper-courses-and-certifications-for-india.html -[7]:https://flic.kr/p/hBv8Pp -[8]:https://en.wikipedia.org/wiki/Sysfs -[9]:https://www.youtube.com/watch?v=CyHAeGBFS8k -[10]:http://www.itworld.com/article/2873200/operating-systems/11-technologies-that-tick-off-linus-torvalds.html From 6aa59c9a58eb024b7cda8c6a1d0d8cc9afd66b07 Mon Sep 17 00:00:00 2001 From: struggling <630441839@qq.com> Date: Tue, 11 Aug 2015 10:17:19 +0800 Subject: [PATCH 174/207] Delete 20150803 Troubleshooting with Linux Logs.md --- ...0150803 Troubleshooting with Linux Logs.md | 117 ------------------ 1 file changed, 117 deletions(-) delete mode 100644 sources/tech/20150803 Troubleshooting with Linux Logs.md diff --git a/sources/tech/20150803 Troubleshooting with Linux Logs.md b/sources/tech/20150803 Troubleshooting with Linux Logs.md deleted file mode 100644 index 9ee0820a9c..0000000000 --- a/sources/tech/20150803 Troubleshooting with Linux Logs.md +++ /dev/null @@ -1,117 +0,0 @@ -translation by strugglingyouth -Troubleshooting with Linux Logs -================================================================================ -Troubleshooting is the main reason people create logs. Often you’ll want to diagnose why a problem happened with your Linux system or application. An error message or a sequence of events can give you clues to the root cause, indicate how to reproduce the issue, and point out ways to fix it. Here are a few use cases for things you might want to troubleshoot in your logs. - -### Cause of Login Failures ### - -If you want to check if your system is secure, you can check your authentication logs for failed login attempts and unfamiliar successes. Authentication failures occur when someone passes incorrect or otherwise invalid login credentials, often to ssh for remote access or su for local access to another user’s permissions. These are logged by the [pluggable authentication module][1], or pam for short. Look in your logs for strings like Failed password and user unknown. Successful authentication records include strings like Accepted password and session opened. - -Failure Examples: - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2 - Failed password for invalid user hoover from 10.0.2.2 port 4791 ssh2 - pam_unix(sshd:auth): check pass; user unknown - PAM service(sshd) ignoring max retries; 6 > 3 - -Success Examples: - - Accepted password for hoover from 10.0.2.2 port 4792 ssh2 - pam_unix(sshd:session): session opened for user hoover by (uid=0) - pam_unix(sshd:session): session closed for user hoover - -You can use grep to find which users accounts have the most failed logins. These are the accounts that potential attackers are trying and failing to access. 
This example is for an Ubuntu system.
-
-    $ grep "invalid user" /var/log/auth.log | cut -d ' ' -f 10 | sort | uniq -c | sort -nr
-    23 oracle
-    18 postgres
-    17 nagios
-    10 zabbix
-    6 test
-
-You'll need to write a different command for each application and message because there is no standard format. Log management systems that automatically parse logs will effectively normalize them and help you extract key fields like username.
-
-Log management systems can extract the usernames from your Linux logs using automated parsing. This lets you see an overview of the users and filter on them with a single click. In this example, we can see that the root user logged in over 2,700 times because we are filtering the logs to show login attempts only for the root user.
-
-![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png)
-
-Log management systems also let you view graphs over time to spot unusual trends. If someone had one or two failed logins within a few minutes, it might be that a real user forgot his or her password. However, if there are hundreds of failed logins or they are all different usernames, it's more likely that someone is trying to attack the system. Here you can see that on March 12, someone tried to login as test and nagios several hundred times. This is clearly not a legitimate use of the system.
-
-![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png)
-
-### Cause of Reboots ###
-
-Sometimes a server can stop due to a system crash or reboot. How do you know when it happened and who did it?
-
-#### Shutdown Command ####
-
-If someone ran the shutdown command manually, you can see it in the auth log file. Here you can see that someone remotely logged in from the IP 50.0.134.125 as the user ubuntu and then shut the system down.
-
-    Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh
-    Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
-    Mar 19 18:37:09 ip-172-31-11-231 sudo: ubuntu : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/sbin/shutdown -r now
-
-#### Kernel Initializing ####
-
-If you want to see when the server restarted regardless of reason (including crashes) you can search logs from the kernel initializing. You'd search for kernel facility messages containing the string Initializing cpu.
-
-    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset
-    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu
-    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Linux version 3.8.0-44-generic (buildd@tipua) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 (Ubuntu 3.8.0-44.66~precise1-generic 3.8.13.25)
-
-### Detect Memory Problems ###
-
-There are lots of reasons a server might crash, but one common cause is running out of memory.
-
-When your system is low on memory, processes are killed, typically in the order of which ones will release the most resources. The error occurs when your system is using all of its memory and a new or existing process attempts to access additional memory. Look in your log files for strings like Out of Memory or for kernel warnings like to kill. These strings indicate that your system intentionally killed the process or application rather than allowing the process to crash.
-
-Examples:
-
-    [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
-    [29923450.995084] select 5230 (docker), adj 0, size 708, to kill
-
-You can find these logs using a tool like grep. This example is for Ubuntu:
-
-    $ grep "Out of memory" /var/log/syslog
-    [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
-
-Keep in mind that grep itself uses memory, so you might cause an out of memory error just by running grep. This is another reason it's a fabulous idea to centralize your logs!
-
-### Log Cron Job Errors ###
-
-The cron daemon is a scheduler that runs processes at specified dates and times. If the process fails to run or fails to finish, then a cron error appears in your log files. You can find these files in /var/log/cron, /var/log/messages, and /var/log/syslog depending on your distribution. There are many reasons a cron job can fail. Usually the problems lie with the process rather than the cron daemon itself.
-
-By default, cron jobs output through email using Postfix. Here is a log showing that an email was sent. Unfortunately, you cannot see the contents of the message here.
-
-    Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from=
-    Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110>
-    Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=, size=607, nrcpt=1 (queue active)
-    Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp)
-
-You should consider logging the cron standard output to help debug problems. Here is how you can redirect your cron standard output to syslog using the logger command. Replace the echo command with your own script and helloCron with whatever you want to set the appName to.
-
-    */5 * * * * echo 'Hello World' 2>&1 | /usr/bin/logger -t helloCron
-
-Which creates the log entries:
-
-    Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron)
-    Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World!
-
-Each cron job will log differently based on the specific type of job and how it outputs data. Hopefully there are clues to the root cause of problems within the logs, or you can add additional logging as needed.
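-
-A simple alternative, sketched below, is to skip email entirely and append each job's output to a dedicated log file (backup.sh here is only a placeholder for your own script):
-
-    # log both stdout and stderr of the job to a file instead of mailing it
-    */5 * * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1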
-
--------------------------------------------------------------------------------
-
-via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-logs/
-
-作者:[Jason Skowronski][a1]
-作者:[Amy Echeverri][a2]
-作者:[Sadequl Hussain][a3]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a1]:https://www.linkedin.com/in/jasonskowronski
-[a2]:https://www.linkedin.com/in/amyecheverri
-[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
-[1]:http://linux.die.net/man/8/pam.d
From 55f5e577c8684ad955e5e189172343a6e316af6d Mon Sep 17 00:00:00 2001
From: struggling <630441839@qq.com>
Date: Tue, 11 Aug 2015 10:18:16 +0800
Subject: [PATCH 175/207] Create 20150803 Troubleshooting with Linux Logs.md
---
 ...0150803 Troubleshooting with Linux Logs.md | 117 ++++++++++++++++++
 1 file changed, 117 insertions(+)
 create mode 100644 translated/tech/20150803 Troubleshooting with Linux Logs.md

diff --git a/translated/tech/20150803 Troubleshooting with Linux Logs.md b/translated/tech/20150803 Troubleshooting with Linux Logs.md
new file mode 100644
index 0000000000..5950a69d98
--- /dev/null
+++ b/translated/tech/20150803 Troubleshooting with Linux Logs.md
@@ -0,0 +1,117 @@
+在 Linux 中使用日志来排错
+================================================================================
+人们创建日志的主要原因是排错。通常你需要诊断为什么问题会发生在你的 Linux 系统或应用程序中。错误信息或一系列事件可以给你提供找到根本原因的线索,说明问题是如何发生的,并指出如何解决它。这里有几个使用日志来排错的用例。
+
+### 登录失败原因 ###
+
+如果你想检查你的系统是否安全,你可以在验证日志中检查登录失败的和登录成功但可疑的用户。当有人使用错误或无效的凭据登录时就会出现认证失败,这通常发生在使用 SSH 进行远程登录或用 su 切换到其他本地用户的时候。这些都由[插入式验证模块][1](简称 PAM)来记录。在你的日志中查找像 Failed password 和 user unknown 这样的字符串,成功的认证记录则包括像 Accepted password 和 session opened 这样的字符串。
+
+失败的例子:
+
+    pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2
+    Failed password for invalid user hoover from 10.0.2.2 port 4791 ssh2
+    pam_unix(sshd:auth): check pass; user unknown
+    PAM service(sshd) ignoring max retries; 6 > 3
+
+成功的例子:
+
+    Accepted password for hoover from 10.0.2.2 port 4792 ssh2
+    pam_unix(sshd:session): session opened for user hoover by (uid=0)
+    pam_unix(sshd:session): session closed for user hoover
+
+你可以使用 grep 来查找哪些用户的失败登录次数最多。这些都是潜在的攻击者正在尝试但访问失败的账户。这是一个在 Ubuntu 系统上的例子。
+
+    $ grep "invalid user" /var/log/auth.log | cut -d ' ' -f 10 | sort | uniq -c | sort -nr
+    23 oracle
+    18 postgres
+    17 nagios
+    10 zabbix
+    6 test
+
+由于没有标准格式,所以你需要为每个应用程序的日志使用不同的命令。日志管理系统可以自动分析日志,对它们进行有效的归一化处理,帮助你提取用户名这样的关键字段。
+
+日志管理系统可以使用自动解析功能从 Linux 日志中提取用户名。这使你可以看到用户的概况,并能通过单击对其进行筛选。在这个例子中,我们可以看到,root 用户登录了 2700 次,因为我们筛选的日志只显示 root 用户的登录尝试。
+
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png)
+
+日志管理系统也让你以时间为坐标轴的图表来查看,使你更容易发现异常。如果有人在几分钟内登录失败一次或两次,那可能是真正的用户忘记了密码。但是,如果有几百个失败的登录,并且使用的都是不同的用户名,那就更可能是有人在试图攻击系统。在这里,你可以看到,在3月12日,有人用 test 和 nagios 的身份尝试登录了几百次。这显然不是对系统的合法使用。
+
+![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png)
+
+### 重启的原因 ###
+
+有时候,一台服务器会由于系统崩溃或重启而宕机。你怎么知道它何时发生,是谁做的?
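+
+在翻看具体日志之前,还有一个快捷的小技巧可以先试试(这是补充的示例,假设系统在 /var/log/wtmp 中维护着登录记录):last 命令可以直接列出其中记录的重启和关机历史。
+
+    # -x 选项会显示 shutdown 事件和运行级别的变化
+    $ last -x reboot shutdown | head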
+
+#### 关机命令 ####
+
+如果有人手动运行 shutdown 命令,你可以在验证日志文件中看到它。在这里,你可以看到,有人从 IP 50.0.134.125 以 ubuntu 用户的身份远程登录了,然后关闭了系统。
+
+    Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh
+    Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
+    Mar 19 18:37:09 ip-172-31-11-231 sudo: ubuntu : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/sbin/shutdown -r now
+
+#### 内核初始化 ####
+
+如果你想看看服务器重新启动的所有原因(包括崩溃),你可以从内核初始化日志中寻找。你需要搜索内核类别的消息以及 Initializing cpu 这样的字符串。
+
+    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset
+    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu
+    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Linux version 3.8.0-44-generic (buildd@tipua) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #66~precise1-Ubuntu SMP Tue Jul 15 04:01:04 UTC 2014 (Ubuntu 3.8.0-44.66~precise1-generic 3.8.13.25)
+
+### 检测内存问题 ###
+
+有很多原因可能导致服务器崩溃,但一个常见的原因是内存用尽。
+
+当你系统的内存不足时,进程会被杀死,通常会杀死能释放最多资源的进程。当系统已经用尽了所有内存,而新的或现有的进程还试图获取更多内存时,就会发生这种错误。在你的日志文件中查找像 Out of Memory 这样的字符串,或者 to kill 这样的内核警告。这些信息表明系统故意杀死了进程或应用程序,而不是任由进程崩溃。
+
+例如:
+
+    [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
+    [29923450.995084] select 5230 (docker), adj 0, size 708, to kill
+
+你可以使用像 grep 这样的工具找到这些日志。这个例子是在 Ubuntu 中:
+
+    $ grep "Out of memory" /var/log/syslog
+    [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child
+
+请记住,grep 本身也要使用内存,所以仅仅是运行 grep 就有可能引发内存不足的错误。这也是集中管理日志是个好主意的另一个原因!
+
+### 定时任务错误日志 ###
+
+cron 守护进程是一个在指定日期和时间运行进程的调度器。如果进程运行失败或无法完成,那么 cron 的错误就会出现在你的日志文件中。你可以在 /var/log/cron、/var/log/messages 和 /var/log/syslog 中找到这些日志,具体取决于你的发行版。cron 任务失败的原因有很多。通常情况下,问题出在进程本身而不是 cron 守护进程。
+
+默认情况下,cron 作业会通过 Postfix 以电子邮件发送输出。这里是一条记录了邮件已发送的日志。不幸的是,在这里你看不到邮件的内容。
+
+    Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from=
+    Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110>
+    Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=, size=607, nrcpt=1 (queue active)
+    Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp)
+
+你可以考虑把 cron 任务的标准输出记录到日志中,以帮助你定位问题。这里展示如何使用 logger 命令把 cron 的标准输出重定向到 syslog。用你自己的脚本替换 echo 命令,helloCron 可以换成任何你想设置的应用名称。
+
+    */5 * * * * echo 'Hello World' 2>&1 | /usr/bin/logger -t helloCron
+
+它创建的日志条目:
+
+    Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron)
+    Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World!
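+
+如果还想把任务的退出状态一并记录下来,可以更进一步,用一个小的包装脚本来运行真正的任务(这只是一个示例思路,/usr/local/bin/mytask 是假设的路径):
+
+    #!/bin/sh
+    # 用 logger 记录任务的开始、结束以及退出码
+    /usr/bin/logger -t myCronJob "starting"
+    /usr/local/bin/mytask
+    /usr/bin/logger -t myCronJob "finished with exit code $?"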
+ +每个 cron 作业将根据作业的具体类型以及如何输出数据来记录不同的日志。希望在日志中有问题根源的线索,也可以根据需要添加额外的日志记录。 + +-------------------------------------------------------------------------------- + +via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-logs/ + +作者:[Jason Skowronski][a1] +作者:[Amy Echeverri][a2] +作者:[Sadequl Hussain][a3] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a1]:https://www.linkedin.com/in/jasonskowronski +[a2]:https://www.linkedin.com/in/amyecheverri +[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 +[1]:http://linux.die.net/man/8/pam.d From 1252e6195493dd66bcd0f314379c21c6547c5d7e Mon Sep 17 00:00:00 2001 From: ZTinoZ Date: Tue, 11 Aug 2015 10:28:42 +0800 Subject: [PATCH 176/207] Translating by ZTinoZ --- .../20150806 Installation Guide for Puppet on Ubuntu 15.04.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md index ea8fcd6e2e..501cb4a8dc 100644 --- a/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md +++ b/sources/tech/20150806 Installation Guide for Puppet on Ubuntu 15.04.md @@ -1,3 +1,4 @@ +Translating by ZTinoZ Installation Guide for Puppet on Ubuntu 15.04 ================================================================================ Hi everyone, today in this article we'll learn how to install puppet to manage your server infrastructure running ubuntu 15.04. Puppet is an open source software configuration management tool which is developed and maintained by Puppet Labs that allows us to automate the provisioning, configuration and management of a server infrastructure. Whether we're managing just a few servers or thousands of physical and virtual machines to orchestration and reporting, puppet automates tasks that system administrators often do manually which frees up time and mental space so sysadmins can work on improving other aspects of your overall setup. It ensures consistency, reliability and stability of the automated jobs processed. It facilitates closer collaboration between sysadmins and developers, enabling more efficient delivery of cleaner, better-designed code. Puppet is available in two solutions configuration management and data center automation. They are **puppet open source and puppet enterprise**. Puppet open source is a flexible, customizable solution available under the Apache 2.0 license, designed to help system administrators automate the many repetitive tasks they regularly perform. Whereas puppet enterprise edition is a proven commercial solution for diverse enterprise IT environments which lets us get all the benefits of open source puppet, plus puppet apps, commercial-only enhancements, supported modules and integrations, and the assurance of a fully supported platform. Puppet uses SSL certificates to authenticate communication between master and agent nodes. 
@@ -426,4 +427,4 @@ via: http://linoxide.com/linux-how-to/install-puppet-ubuntu-15-04/
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
 
 [a]:http://linoxide.com/author/arunp/
-[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
\ No newline at end of file
+[1]:https://docs.puppetlabs.com/puppet/latest/reference/config_file_main.html
From e0a4f5017065fd958548f89125c41bad72a7d550 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Tue, 11 Aug 2015 10:37:03 +0800
Subject: =?UTF-8?q?20150811-1=20=E9=80=89=E9=A2=98?=
---
 ...Install Snort and Usage in Ubuntu 15.04.md | 203 ++++++++++++++++++
 ...k files from Google Play Store on Linux.md |  99 +++++++++
 2 files changed, 302 insertions(+)
 create mode 100644 sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md
 create mode 100644 sources/tech/20150811 How to download apk files from Google Play Store on Linux.md

diff --git a/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md b/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md
new file mode 100644
index 0000000000..7bf2438c95
--- /dev/null
+++ b/sources/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md
@@ -0,0 +1,203 @@
+How to Install Snort and Usage in Ubuntu 15.04
+================================================================================
+Intrusion detection in a network is important for IT security. An intrusion detection system (IDS) is used for the detection of illegal and malicious attempts in the network. Snort is a well-known open source intrusion detection system. A web interface (Snorby) can be used for better analysis of alerts. Snort can also be used as an intrusion prevention system together with an iptables/pf firewall. In this article, we will install and configure the open source IDS Snort.
+
+### Snort Installation ###
+
+#### Prerequisite ####
+
+The Data Acquisition library (DAQ) is used by Snort to abstract calls to packet capture libraries. It is available on the Snort website. The download process is shown in the following screenshot.
+
+![downloading_daq](http://blog.linoxide.com/wp-content/uploads/2015/07/downloading_daq.png)
+
+Extract it and run the ./configure, make and make install commands to install DAQ. However, DAQ requires other tools first, so without them the ./configure script will generate the following errors.
+
+The flex and bison error:
+
+![flexandbison_error](http://blog.linoxide.com/wp-content/uploads/2015/07/flexandbison_error.png)
+
+The libpcap error:
+
+![libpcap error](http://blog.linoxide.com/wp-content/uploads/2015/07/libpcap-error.png)
+
+Therefore, first install flex/bison and libpcap before the DAQ installation, as shown in the figure.
+
+![install_flex](http://blog.linoxide.com/wp-content/uploads/2015/07/install_flex.png)
+
+The installation of the libpcap development library is shown below.
+
+![libpcap-dev installation](http://blog.linoxide.com/wp-content/uploads/2015/07/libpcap-dev-installation.png)
+
+After installing the necessary tools, run the ./configure script again; it will show the following output.
+
+![without_error_configure](http://blog.linoxide.com/wp-content/uploads/2015/07/without_error_configure.png)
+
+The results of the make and make install commands are shown in the following screens.
+
+![make install](http://blog.linoxide.com/wp-content/uploads/2015/07/make-install.png)
+
+![make](http://blog.linoxide.com/wp-content/uploads/2015/07/make.png)
+
+After the successful installation of DAQ, we will now install Snort. Downloading it with wget is shown in the figure below.
+
+![downloading_snort](http://blog.linoxide.com/wp-content/uploads/2015/07/downloading_snort.png)
+
+Extract the compressed package using the command given below.
+
+    #tar -xvzf snort-2.9.7.3.tar.gz
+
+![snort_extraction](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_extraction.png)
+
+Create the installation directory and set the prefix parameter in the configure script. It is also recommended to enable the sourcefire flag for Packet Performance Monitoring (PPM).
+
+    #mkdir /usr/local/snort
+
+    #./configure --prefix=/usr/local/snort/ --enable-sourcefire
+
+![snort_installation](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_installation.png)
+
+The configure script generates errors due to the missing libpcre-dev, libdumbnet-dev and zlib development libraries.
+
+The error due to the missing libpcre library:
+
+![pcre-error](http://blog.linoxide.com/wp-content/uploads/2015/07/pcre-error.png)
+
+The error due to the missing dnet (libdumbnet) library:
+
+![libdnt error](http://blog.linoxide.com/wp-content/uploads/2015/07/libdnt-error.png)
+
+The error due to the missing zlib library:
+
+![zlib error](http://blog.linoxide.com/wp-content/uploads/2015/07/zlib-error.png)
+
+The installation of all the required development libraries is shown in the next screenshots.
+
+    # aptitude install libpcre3-dev
+
+![libpcre3-dev install](http://blog.linoxide.com/wp-content/uploads/2015/07/libpcre3-dev-install.png)
+
+    # aptitude install libdumbnet-dev
+
+![libdumnet-dev installation](http://blog.linoxide.com/wp-content/uploads/2015/07/libdumnet-dev-installation.png)
+
+    # aptitude install zlib1g-dev
+
+![zlibg-dev installation](http://blog.linoxide.com/wp-content/uploads/2015/07/zlibg-dev-installation.png)
+
+After installing the required libraries, run the configure script again; this time it completes without any errors.
+
+Run the make & make install commands to compile and install Snort in the /usr/local/snort directory.
+
+    #make
+
+![make snort](http://blog.linoxide.com/wp-content/uploads/2015/07/make-snort.png)
+
+    #make install
+
+![make install snort](http://blog.linoxide.com/wp-content/uploads/2015/07/make-install-snort.png)
+
+Finally, run snort from the /usr/local/snort/bin directory. Here it is running in promiscuous (packet dump) mode on all traffic on the eth0 interface.
+
+![snort running](http://blog.linoxide.com/wp-content/uploads/2015/07/snort-running.png)
+
+The traffic dumped by the Snort interface is shown in the following figure.
+
+![traffic](http://blog.linoxide.com/wp-content/uploads/2015/07/traffic1.png)
+
+#### Rules and Configuration of Snort ####
+
+A Snort installation from source code requires rules and configuration settings, so now we will copy the rules and configuration under the /etc/snort directory. We have created a single bash script (shown below) for the rules and configuration settings. It performs the following Snort settings:
+
+- Creation of a snort user for the Snort IDS service on Linux.
+- Creation of directories and files under the /etc directory for the Snort configuration.
+- Permission setting and copying data from the etc directory of the Snort source code.
+- Commenting out (#) all rule includes in the snort.conf file; the individual rule files that are needed get re-enabled manually later.
+
+    #!/bin/bash
+    ## PATH of source code of snort
+    snort_src="/home/test/Downloads/snort-2.9.7.3"
+    echo "adding group and user for snort..."
+    groupadd snort &> /dev/null
+    useradd snort -r -s /sbin/nologin -d /var/log/snort -c snort_idps -g snort &> /dev/null
+    ## snort configuration
+    echo "Configuring snort..."
+    mkdir -p /etc/snort
+    mkdir -p /etc/snort/rules
+    touch /etc/snort/rules/black_list.rules
+    touch /etc/snort/rules/white_list.rules
+    touch /etc/snort/rules/local.rules
+    mkdir /etc/snort/preproc_rules
+    mkdir /var/log/snort
+    mkdir -p /usr/local/lib/snort_dynamicrules
+    chmod -R 775 /etc/snort
+    chmod -R 775 /var/log/snort
+    chmod -R 775 /usr/local/lib/snort_dynamicrules
+    chown -R snort:snort /etc/snort
+    chown -R snort:snort /var/log/snort
+    chown -R snort:snort /usr/local/lib/snort_dynamicrules
+    ### copy configuration and rules from etc directory under source code of snort
+    echo "copying from snort source to /etc/snort ....."
+    echo $snort_src
+    echo "-------------"
+    cp $snort_src/etc/*.conf* /etc/snort
+    cp $snort_src/etc/*.map /etc/snort
+    ## comment out all rule set includes; the needed rule files are enabled manually later
+    sed -i 's/include \$RULE\_PATH/#include \$RULE\_PATH/' /etc/snort/snort.conf
+    echo "---DONE---"
+
+Change the Snort source directory in the script to match your system, then run it. The following output appears in case of success.
+
+![running script](http://blog.linoxide.com/wp-content/uploads/2015/08/running_script.png)
+
+The script copies the following files/directories from the Snort source into the /etc/snort configuration directory.
+
+![files copied](http://blog.linoxide.com/wp-content/uploads/2015/08/created.png)
+
+The Snort configuration file is very complex; however, the following changes are required in snort.conf for the IDS to work properly.
+
+    ipvar HOME_NET 192.168.1.0/24 # LAN side
+
+----------
+
+    ipvar EXTERNAL_NET !$HOME_NET # WAN side
+
+![variable set](http://blog.linoxide.com/wp-content/uploads/2015/08/12.png)
+
+    var RULE_PATH /etc/snort/rules # snort signature path
+    var SO_RULE_PATH /etc/snort/so_rules #rules in shared libraries
+    var PREPROC_RULE_PATH /etc/snort/preproc_rules # Preprocessor path
+    var WHITE_LIST_PATH /etc/snort/rules # dont scan
+    var BLACK_LIST_PATH /etc/snort/rules # Must scan
+
+![main path](http://blog.linoxide.com/wp-content/uploads/2015/08/rule-path.png)
+
+    include $RULE_PATH/local.rules # file for custom rules
+
+Remove the comment sign (#) from the other rule files you want to enable, such as ftp.rules, exploit.rules etc.
+
+![path rules](http://blog.linoxide.com/wp-content/uploads/2015/08/path-rules.png)
+
+Now [download the community rules][1] and extract them under the /etc/snort/rules directory. Enable the community and emerging threats rules in the snort.conf file.
+
+![wget_rules](http://blog.linoxide.com/wp-content/uploads/2015/08/wget_rules.png)
+
+![community rules](http://blog.linoxide.com/wp-content/uploads/2015/08/community-rules1.png)
+
+Run the following command to test the configuration file after the above-mentioned changes.
+
+    #snort -T -c /etc/snort/snort.conf
+
+![snort running](http://blog.linoxide.com/wp-content/uploads/2015/08/snort-final.png)
+
+### Conclusion ###
+
+In this article our focus was on the installation and configuration of the open source IDPS Snort on the Ubuntu distribution. By default it is used for the monitoring of events; however, it can be configured in inline mode for the protection of the network. Snort rules can also be tested and analysed in offline mode using a pcap capture file.
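+
+As a quick sketch of the offline analysis mentioned above (the file name below is just a placeholder), traffic can first be captured with tcpdump and then replayed through Snort against the same configuration:
+
+    # capture 1000 packets on eth0, then read them back through the Snort rules
+    # tcpdump -i eth0 -c 1000 -w /tmp/sample.pcap
+    # /usr/local/snort/bin/snort -r /tmp/sample.pcap -c /etc/snort/snort.conf -l /var/log/snort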
+ +-------------------------------------------------------------------------------- + +via: http://linoxide.com/security/install-snort-usage-ubuntu-15-04/ + +作者:[nido][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/naveeda/ +[1]:https://www.snort.org/downloads/community/community-rules.tar.gz \ No newline at end of file diff --git a/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md b/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md new file mode 100644 index 0000000000..529e877d7b --- /dev/null +++ b/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md @@ -0,0 +1,99 @@ +How to download apk files from Google Play Store on Linux +================================================================================ +Suppose you want to install an Android app on your Android device. However, for whatever reason, you cannot access Google Play Store on the Android device. What can you do then? One way to install the app without Google Play Store access is to download its APK file using some other means, and then [install the APK][1] file on the Android device manually. + +There are several ways to download official APK files from Google Play Store on non-Android devices such as regular computers and laptops. For example, there are browser plugins (e.g., for [Chrome][2] or [Firefox][3]) or online APK archives that allow you to download APK files using a web browser. If you do not trust these closed-source plugins or third-party APK repositories, there is yet another way to download official APK files manually, and that is via an open-source Linux app called [GooglePlayDownloader][4]. + +GooglePlayDownloader is a Python-based GUI application that enables you to search and download APK files from Google Play Store. Since this is completely open-source, you can be assured while using it. In this tutorial, I am going to show how to download an APK file from Google Play Store using GooglePlayDownloader in Linux environment. + +### Python requirement ### + +GooglePlayDownloader requires Python with SNI (Server Name Indication) support for SSL/TLS communication. This feature comes with Python 2.7.9 or higher. This leaves out older distributions such as Debian 7 Wheezy or earlier, Ubuntu 14.04 or earlier, or CentOS/RHEL 7 or earlier. Assuming that you have a Linux distribution with Python 2.7.9 or higher, proceed to install GooglePlayDownloader as follows. + +### Install GooglePlayDownloader on Ubuntu ### + +On Ubuntu, you can use the official deb build. One catch is that you may need to install one required dependency manually. + +#### On Ubuntu 14.10 #### + +Download [python-ndg-httpsclient][5] deb package, which is a missing dependency on older Ubuntu distributions. Also download GooglePlayDownloader's official deb package. + + $ wget http://mirrors.kernel.org/ubuntu/pool/main/n/ndg-httpsclient/python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb + $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb + +We are going to use [gdebi command][6] to install those two deb files as follows. The gdebi command will automatically handle any other dependencies. 
+ + $ sudo apt-get install gdebi-core + $ sudo gdebi python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb + $ sudo gdebi googleplaydownloader_1.7-1_all.deb + +#### On Ubuntu 15.04 or later #### + +Recent Ubuntu distributions ship all required dependencies, and thus the installation is straightforward as follows. + + $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb + $ sudo apt-get install gdebi-core + $ sudo gdebi googleplaydownloader_1.7-1_all.deb + +### Install GooglePlayDownloader on Debian ### + +Due to its Python requirement, GooglePlayDownloader cannot be installed on Debian 7 Wheezy or earlier unless you upgrade its stock Python. + +#### On Debian 8 Jessie and higher: #### + + $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb + $ sudo apt-get install gdebi-core + $ sudo gdebi googleplaydownloader_1.7-1_all.deb + +### Install GooglePlayDownloader on Fedora ### + +Since GooglePlayDownloader was originally developed for Debian based distributions, you need to install it from the source if you want to use it on Fedora. + +First, install necessary dependencies. + + $ sudo yum install python-pyasn1 wxPython python-ndg_httpsclient protobuf-python python-requests + +Then install it as follows. + + $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7.orig.tar.gz + $ tar -xvf googleplaydownloader_1.7.orig.tar.gz + $ cd googleplaydownloader-1.7 + $ chmod o+r -R . + $ sudo python setup.py install + $ sudo sh -c "echo 'python /usr/lib/python2.7/site-packages/googleplaydownloader-1.7-py2.7.egg/googleplaydownloader/googleplaydownloader.py' > /usr/bin/googleplaydownloader" + +### Download APK Files from Google Play Store with GooglePlayDownloader ### + +Once you installed GooglePlayDownloader, you can download APK files from Google Play Store as follows. + +First launch the app by typing: + + $ googleplaydownloader + +![](https://farm1.staticflickr.com/425/20229024898_105396fa68_b.jpg) + +At the search bar, type the name of the app you want to download from Google Play Store. + +![](https://farm1.staticflickr.com/503/20230360479_925f5da613_b.jpg) + +Once you find the app in the search list, choose the app, and click on "Download selected APK(s)" button. You will find the downloaded APK file in your home directory. Now you can move the APK file to the Android device of your choice, and install it manually. + +Hope this helps. 
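+
+One last tip: if USB debugging is enabled on the device, the downloaded APK can also be pushed and installed directly from Linux with adb (a sketch; the package and file names below are examples):
+
+    $ sudo apt-get install android-tools-adb   # adb on Debian/Ubuntu
+    $ adb devices                              # verify the device is detected
+    $ adb install ~/some_app.apk               # side-load the downloaded APK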
+ +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/download-apk-files-google-play-store.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/how-to-install-apk-file-on-android-phone-or-tablet.html +[2]:https://chrome.google.com/webstore/detail/apk-downloader/cgihflhdpokeobcfimliamffejfnmfii +[3]:https://addons.mozilla.org/en-us/firefox/addon/apk-downloader/ +[4]:http://codingteam.net/project/googleplaydownloader +[5]:http://packages.ubuntu.com/vivid/python-ndg-httpsclient +[6]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html \ No newline at end of file From 4f320ebb4d4b5982e931f6960e3c7fbd41219c2c Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 11 Aug 2015 10:41:56 +0800 Subject: [PATCH 178/207] PUB:20150806 Linux FAQs with Answers--How to install git on Linux @mr-ping --- ...Qs with Answers--How to install git on Linux.md | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) rename {translated/tech => published}/20150806 Linux FAQs with Answers--How to install git on Linux.md (66%) diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md b/published/20150806 Linux FAQs with Answers--How to install git on Linux.md similarity index 66% rename from translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md rename to published/20150806 Linux FAQs with Answers--How to install git on Linux.md index e6d3f59c71..1d30a02083 100644 --- a/translated/tech/20150806 Linux FAQs with Answers--How to install git on Linux.md +++ b/published/20150806 Linux FAQs with Answers--How to install git on Linux.md @@ -1,15 +1,15 @@ -Linux问答 -- 如何在Linux上安装Git +Linux有问必答:如何在Linux上安装Git ================================================================================ -> **问题:** 我尝试从一个Git公共仓库克隆项目,但出现了这样的错误提示:“git: command not found”。 请问我该如何安装Git? [注明一下是哪个Linux发行版]? +> **问题:** 我尝试从一个Git公共仓库克隆项目,但出现了这样的错误提示:“git: command not found”。 请问我该如何在某某发行版上安装Git? 
-Git是一个流行的并且开源的版本控制系统(VCS),最初是为Linux环境开发的。跟CVS或者SVN这些版本控制系统不同的是,Git的版本控制被认为是“分布式的”,某种意义上,git的本地工作目录可以作为一个功能完善的仓库来使用,它具备完整的历史记录和版本追踪能力。在这种工作模型之下,各个协作者将内容提交到他们的本地仓库中(与之相对的会直接提交到核心仓库),如果有必要,再有选择性地推送到核心仓库。这就为Git这个版本管理系统带来了大型协作系统所必须的可扩展能力和冗余能力。 +Git是一个流行的开源版本控制系统(VCS),最初是为Linux环境开发的。跟CVS或者SVN这些版本控制系统不同的是,Git的版本控制被认为是“分布式的”,某种意义上,git的本地工作目录可以作为一个功能完善的仓库来使用,它具备完整的历史记录和版本追踪能力。在这种工作模型之下,各个协作者将内容提交到他们的本地仓库中(与之相对的会总是提交到核心仓库),如果有必要,再有选择性地推送到核心仓库。这就为Git这个版本管理系统带来了大型协作系统所必须的可扩展能力和冗余能力。 ![](https://farm1.staticflickr.com/341/19433194168_c79d4570aa_b.jpg) ### 使用包管理器安装Git ### -Git已经被所有的主力Linux发行版所支持。所以安装它最简单的方法就是使用各个Linux发行版的包管理器。 +Git已经被所有的主流Linux发行版所支持。所以安装它最简单的方法就是使用各个Linux发行版的包管理器。 **Debian, Ubuntu, 或 Linux Mint** @@ -18,6 +18,8 @@ Git已经被所有的主力Linux发行版所支持。所以安装它最简单的 **Fedora, CentOS 或 RHEL** $ sudo yum install git + 或 + $ sudo dnf install git **Arch Linux** @@ -33,7 +35,7 @@ Git已经被所有的主力Linux发行版所支持。所以安装它最简单的 ### 从源码安装Git ### -如果由于某些原因,你希望从源码安装Git,安装如下介绍操作。 +如果由于某些原因,你希望从源码安装Git,按照如下介绍操作。 **安装依赖包** @@ -65,7 +67,7 @@ via: http://ask.xmodulo.com/install-git-linux.html 作者:[Dan Nanni][a] 译者:[mr-ping](https://github.com/mr-ping) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 From 3726bb850331c17fc57933c997ef739b8cfbb9a7 Mon Sep 17 00:00:00 2001 From: XIAOYU <1136299502@qq.com> Date: Tue, 11 Aug 2015 16:45:36 +0800 Subject: [PATCH 179/207] translated translated this article --- ...20150810 For Linux, Supercomputers R Us.md | 59 ++++++++++++++++++ ...20150810 For Linux, Supercomputers R Us.md | 60 ------------------- 2 files changed, 59 insertions(+), 60 deletions(-) create mode 100644 published/20150810 For Linux, Supercomputers R Us.md delete mode 100644 sources/talk/20150810 For Linux, Supercomputers R Us.md diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/published/20150810 For Linux, Supercomputers R Us.md new file mode 100644 index 0000000000..ef9b32684c --- /dev/null +++ b/published/20150810 For Linux, Supercomputers R Us.md @@ -0,0 +1,59 @@ +Linux:我们最好用的超级计算机系统 +================================================================================ +![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) +首图来源:By Michel Ngilen,[ CC BY 2.0 ], via Wikimedia Commons + +> 几乎所有超级计算机上运行的系统都是Linux,其中包括那些由树莓派(Raspberry Pi)板和PlayStation 3游戏机板组成的计算机。 + +超级计算机是很正经的工具,目的是做严肃的计算。它们往往从事于严肃的追求,比如原子弹的模拟,气候模拟和高级物理学。当然,它们也需要大笔资金的投资。在最新的超级计算机[500强][1]排名中,中国国防科大研制的天河2号位居第一。天河2号耗资约3.9亿美元。 + +但是,也有一个超级计算机,是由博伊西州立大学电气和计算机工程系的一名在读博士Joshua Kiepert[用树莓派构建完成][2]的。其创建成本低于2000美元。 + +不,这不是我编造的。这是一个真实的超级计算机,由超频1GHz的[B型树莓派][3]ARM11处理器与VideoCore IV GPU组成。每个都配备了512MB的RAM,一对USB端口和1个10/100 BaseT以太网端口。 + +那么天河2号和博伊西州立大学的超级计算机有什么共同点?它们都运行Linux系统。世界最快的超级计算机[前500强中有486][4]个也同样运行的是Linux系统。这是20多年前就开始的一种覆盖。现在Linux开始建立于廉价的超级计算机。因为Kiepert的机器并不是唯一的预算数字计算机。 + +Gaurav Khanna,麻省大学达特茅斯分校的物理学副教授,创建了一台超级计算机仅用了[不足200的PlayStation3视频游戏机][5]。 + +PlayStation游戏机是由一个3.2 GHz的基于PowerPC的电源处理单元供电。每个都配有512M的RAM。你现在仍然可以花200美元买到一个,尽管索尼将在年底逐步淘汰它们。Khanna仅用16个PlayStation 3s构建了他第一台超级计算机,所以你也可以花费不到4000美元就拥有你自己的超级计算机。 + +这些机器可能是从玩具建成的,但他们不是玩具。Khanna已经用它做了严肃的天体物理学研究。一个白帽子黑客组织使用了类似的[PlayStation 3超级计算机在2008年破解了SSL的MD5哈希算法][6]。 + +两年后,美国空军研究实验室研制的[Condor Cluster,使用了1,760个索尼的PlayStation3的处理器][7]和168个通用的图形处理单元。这个低廉的超级计算机,每秒运行约500TFLOPs,或500万亿次浮点运算。 + +其他的一些便宜且适用于构建家庭超级计算机的构件包括,专业并行处理板比如信用卡大小[99美元的Parallella板][8],以及高端显卡比如[Nvidia的 Titan Z][9] 以及[ AMD的 FirePro 
W9100][10]。这些高端板卡的市场零售价约为3000美元,是那些幻想着顶配梦想之机、甚至想在[英特尔极限大师赛世界锦标赛][11]的[英雄联盟][12]项目中赢得超过10万美元头奖的游戏玩家所觊觎的。另一方面,单独一块板卡就能提供超过2.5TFLOPS的运算能力,对科学家和研究人员来说,它们提供了一种负担得起的方式,让他们拥有属于自己的超级计算机。
+
+至于和Linux的渊源,这一切都开始于1994年戈达德航天中心的第一台名为[Beowulf][13]的超级计算机。
+
+按照我们的标准,Beowulf不能算是最优越的。但在那个时期,作为第一台自制的超级计算机,其16个英特尔486DX处理器和10Mbps的以太网总线,是伟大的创举。由[美国航空航天局承包人Don Becker和Thomas Sterling设计的Beowulf][14],是第一台“制造者”超级计算机。它的“计算部件”486DX PC,成本仅有几千美元。尽管它的速度只有个位数的GFLOPS(每秒十亿次浮点运算),[Beowulf][15]表明了你可以用商用现货(COTS)硬件和Linux创建超级计算机。
+
+我真希望当年我也参与了它的创造,但是我1994年就离开了戈达德,开始了作为一名全职科技记者的职业生涯。该死。
+
+不过,即使只是隔着记者的采访本旁观,我依然能够体会到COTS和开源软件是如何永远地改变了超级计算机。我希望现在读这篇文章的你也能。因为,无论是Raspberry Pi集群,还是拥有超过300万个英特尔Ivy Bridge和Xeon Phi芯片的庞然大物,几乎所有当代的超级计算机都可以追溯到Beowulf。
+
+--------------------------------------------------------------------------------
+
+via: 
+
+作者:[Steven J. Vaughan-Nichols][a]
+译者:[xiaoyu33](https://github.com/xiaoyu33)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/
+[1]:http://www.top500.org/
+[2]:http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/
+[3]:https://www.raspberrypi.org/products/model-b/
+[4]:http://www.zdnet.com/article/linux-still-rules-supercomputing/
+[5]:http://www.nytimes.com/2014/12/23/science/an-economical-way-to-save-progress.html?smid=fb-nytimes&smtyp=cur&bicmp=AD&bicmlukp=WT.mc_id&bicmst=1409232722000&bicmet=1419773522000&_r=4
+[6]:http://www.computerworld.com/article/2529932/cybercrime-hacking/researchers-hack-verisign-s-ssl-scheme-for-securing-web-sites.html
+[7]:http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html
+[8]:http://www.zdnet.com/article/parallella-the-99-linux-supercomputer/
+[9]:http://blogs.nvidia.com/blog/2014/03/25/titan-z/
+[10]:http://www.amd.com/en-us/press-releases/Pages/amd-flagship-professional-2014apr7.aspx
+[11]:http://en.intelextrememasters.com/news/check-out-the-intel-extreme-masters-katowice-prize-money-distribution/
+[12]:http://www.google.com/url?q=http%3A%2F%2Fen.intelextrememasters.com%2Fnews%2Fcheck-out-the-intel-extreme-masters-katowice-prize-money-distribution%2F&sa=D&sntz=1&usg=AFQjCNE6yoAGGz-Hpi2tPF4gdhuPBEckhQ
+[13]:http://www.beowulf.org/overview/history.html
+[14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html
+[15]:http://www.beowulf.org/
diff --git a/sources/talk/20150810 For Linux, Supercomputers R Us.md b/sources/talk/20150810 For Linux, Supercomputers R Us.md
deleted file mode 100644
index 8f7302cca1..0000000000
--- a/sources/talk/20150810 For Linux, Supercomputers R Us.md
+++ /dev/null
@@ -1,60 +0,0 @@
-Translating by xiaoyu33
-For Linux, Supercomputers R Us
-================================================================================
-![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg)
-Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons
-
-> Almost all supercomputers run Linux, including the ones built from Raspberry Pi boards and PlayStation 3 game consoles
-
-Supercomputers are serious things, called on to do serious computing. They tend to be engaged in serious pursuits like atomic bomb simulations, climate modeling and high-level physics. Naturally, they cost serious money. At the very top of the latest [Top500][1] supercomputer ranking is the Tianhe-2 supercomputer at China's National University of Defense Technology. It cost about $390 million to build.
-
-But then there's the supercomputer that Joshua Kiepert, a doctoral student at Boise State's Electrical and Computer Engineering department, [created with Raspberry Pi computers][2]. It cost less than $2,000.
-
-No, I'm not making that up. It's an honest-to-goodness supercomputer made from overclocked 1-GHz [Model B Raspberry Pi][3] ARM11 processors with Videocore IV GPUs. Each one comes with 512MB of RAM, a pair of USB ports and a 10/100 BaseT Ethernet port.
-
-And what do the Tianhe-2 and the Boise State supercomputer have in common? They both run Linux. As do [486 out of the world's fastest 500 supercomputers][4]. It's part of a domination of the category that began over 20 years ago. And now it's trickling down to built-on-the-cheap supercomputers. Because Kiepert's machine isn't the only budget number cruncher out there.
-
-Gaurav Khanna, an associate professor of physics at the University of Massachusetts Dartmouth, created a [supercomputer with something shy of 200 PlayStation 3 video game consoles][5].
-
-The PlayStations are powered by a 3.2-GHz PowerPC-based Power Processing Element. Each comes with 512MB of RAM. You can still buy one, although Sony will be phasing them out by year's end, for just over $200. Khanna started with only 16 PlayStation 3s for his first supercomputer, so you too could put a supercomputer on your credit card for less than four grand.
-
-These machines may be built from toys, but they're not playthings. Khanna has done serious astrophysics on his rig. A white-hat hacking group used a similar [PlayStation 3 supercomputer to crack the SSL MD5 hashing algorithm][6] in 2008.
-
-Two years later, the Air Force Research Laboratory [Condor Cluster was using 1,760 Sony PlayStation 3 processors][7] and 168 general-purpose graphical processing units. This bargain-basement supercomputer runs at about 500TFLOPs, or 500 trillion floating point operations per second.
-
-Other cheap options for home supercomputers include specialist parallel-processing boards such as the [$99 credit-card-sized Parallella board][8], and high-end graphics boards such as [Nvidia's Titan Z][9] and [AMD's FirePro W9100][10]. Those high-end boards, coveted by gamers with visions of a dream machine or even a chance at winning the first-place prize of over $100,000 in the [Intel Extreme Masters World Championship League of][11] [Legends][12], cost considerably more, retailing for about $3,000. On the other hand, a single one can deliver over 2.5TFLOPS all by itself, and for scientists and researchers, they offer an affordable way to get a supercomputer they can call their own.
-
-As for the Linux connection, that all started in 1994 at the Goddard Space Flight Center with the first [Beowulf supercomputer][13].
-
-By our standards, there wasn't much that was super about the first Beowulf. But in its day, the first homemade supercomputer, with its 16 Intel 486DX processors and 10Mbps Ethernet for the bus, was great. [Beowulf, designed by NASA contractors Don Becker and Thomas Sterling][14], was the first "maker" supercomputer. Its "compute components," 486DX PCs, cost only a few thousand dollars. While its speed was only in single-digit gigaflops, [Beowulf][15] showed you could build supercomputers from commercial off-the-shelf (COTS) hardware and Linux.
-
-I wish I'd had a part in its creation, but I'd already left Goddard by 1994 for a career as a full-time technology journalist. Darn it!
- -But even from this side of my reporter’s notebook, I can still appreciate how COTS and open-source software changed supercomputing forever. I hope you can too. Because, whether it’s a cluster of Raspberry Pis or a monster with over 3 million Intel Ivy Bridge and Xeon Phi chips, almost all of today’s supercomputers trace their ancestry to Beowulf. - --------------------------------------------------------------------------------- - -via: - -作者:[Steven J. Vaughan-Nichols][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ -[1]:http://www.top500.org/ -[2]:http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/ -[3]:https://www.raspberrypi.org/products/model-b/ -[4]:http://www.zdnet.com/article/linux-still-rules-supercomputing/ -[5]:http://www.nytimes.com/2014/12/23/science/an-economical-way-to-save-progress.html?smid=fb-nytimes&smtyp=cur&bicmp=AD&bicmlukp=WT.mc_id&bicmst=1409232722000&bicmet=1419773522000&_r=4 -[6]:http://www.computerworld.com/article/2529932/cybercrime-hacking/researchers-hack-verisign-s-ssl-scheme-for-securing-web-sites.html -[7]:http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html -[8]:http://www.zdnet.com/article/parallella-the-99-linux-supercomputer/ -[9]:http://blogs.nvidia.com/blog/2014/03/25/titan-z/ -[10]:http://www.amd.com/en-us/press-releases/Pages/amd-flagship-professional-2014apr7.aspx -[11]:http://en.intelextrememasters.com/news/check-out-the-intel-extreme-masters-katowice-prize-money-distribution/ -[12]:http://www.google.com/url?q=http%3A%2F%2Fen.intelextrememasters.com%2Fnews%2Fcheck-out-the-intel-extreme-masters-katowice-prize-money-distribution%2F&sa=D&sntz=1&usg=AFQjCNE6yoAGGz-Hpi2tPF4gdhuPBEckhQ -[13]:http://www.beowulf.org/overview/history.html -[14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html -[15]:http://www.beowulf.org/ From 0584a85ed9e385ce4967c0563afe1e586c995144 Mon Sep 17 00:00:00 2001 From: XIAOYU <1136299502@qq.com> Date: Tue, 11 Aug 2015 16:47:43 +0800 Subject: [PATCH 180/207] translated translated --- published/20150810 For Linux, Supercomputers R Us.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/published/20150810 For Linux, Supercomputers R Us.md index ef9b32684c..e5022a658f 100644 --- a/published/20150810 For Linux, Supercomputers R Us.md +++ b/published/20150810 For Linux, Supercomputers R Us.md @@ -1,4 +1,4 @@ -Linux:我们最好用的超级计算机系统 +Linux:称霸超级计算机系统 ================================================================================ ![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) 首图来源:By Michel Ngilen,[ CC BY 2.0 ], via Wikimedia Commons From b0b4e31da96977b4d3a15eb0d978e2d06db2df13 Mon Sep 17 00:00:00 2001 From: XIAOYU <1136299502@qq.com> Date: Tue, 11 Aug 2015 16:56:12 +0800 Subject: [PATCH 181/207] Rename published/20150810 For Linux, Supercomputers R Us.md to translated/talk/20150810 For Linux, Supercomputers R Us.md --- .../talk}/20150810 For Linux, Supercomputers R Us.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {published => translated/talk}/20150810 For Linux, Supercomputers R Us.md (100%) diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/translated/talk/20150810 For Linux, 
Supercomputers R Us.md
similarity index 100%
rename from published/20150810 For Linux, Supercomputers R Us.md
rename to translated/talk/20150810 For Linux, Supercomputers R Us.md
From b8f075d64a773a10553643568f5dde7b83a4d290 Mon Sep 17 00:00:00 2001
From: Chang Liu
Date: Tue, 11 Aug 2015 20:15:34 +0800
Subject: [PATCH] [Translated]RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md
---
 ...or Analyzing text with grep and regexps.md | 256 -----------------
 ...or Analyzing text with grep and regexps.md | 258 ++++++++++++++++++
 2 files changed, 258 insertions(+), 256 deletions(-)
 delete mode 100644 sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md
 create mode 100644 translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md

diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md b/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md
deleted file mode 100644
index f3de8528fc..0000000000
--- a/sources/tech/RHCSA Series/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md
+++ /dev/null
@@ -1,256 +0,0 @@
-FSSlc translating
-
-RHCSA Series: Editing Text Files with Nano and Vim / Analyzing text with grep and regexps – Part 4
-================================================================================
-Every system administrator has to deal with text files as part of his daily responsibilities. That includes editing existing files (most likely configuration files), or creating new ones. It has been said that if you want to start a holy war in the Linux world, you can ask sysadmins what their favorite text editor is and why. We are not going to do that in this article, but will present a few tips that will be helpful to use two of the most widely used text editors in RHEL 7: nano (due to its simplicity and ease of use, especially for new users), and vi/m (due to the several features that make it more than a simple editor). I am sure that you can find many more reasons to use one or the other, or perhaps some other editor such as emacs or pico. It's entirely up to you.
-
-![Learn Nano and vi Editors](http://www.tecmint.com/wp-content/uploads/2015/03/Learn-Nano-and-vi-Editors.png)
-
-RHCSA: Editing Text Files with Nano and Vim – Part 4
-
-### Editing Files with Nano Editor ###
-
-To launch nano, you can either just type nano at the command prompt, optionally followed by a filename (in this case, if the file exists, it will be opened in editing mode). If the file does not exist, or if we omit the filename, nano will also be opened in editing mode but will present a blank screen for us to start typing:
-
-![Nano Editor](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Editor.png)
-
-Nano Editor
-
-As you can see in the previous image, nano displays at the bottom of the screen several functions that are available via the indicated shortcuts (^, aka caret, indicates the Ctrl key). To name a few of them:
-
-- Ctrl + G: brings up the help menu with a complete list of functions and descriptions.
-- Ctrl + R: lets you choose a file to insert its contents into the present file by specifying a full path. - -![Nano Editor Help Menu](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Help.png) - -Nano Editor Help Menu - -- Ctrl + O: saves changes made to a file. It will let you save the file with the same name or a different one. Then press Enter to confirm. - -![Nano Editor Save Changes Mode](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Save-Changes.png) - -Nano Editor Save Changes Mode - -- Ctrl + X: exits the current file. If changes have not been saved, they are discarded. -- Ctrl + R: lets you choose a file to insert its contents into the present file by specifying a full path. - -![Nano: Insert File Content to Parent File](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-File-Content.png) - -Nano: Insert File Content to Parent File - -will insert the contents of /etc/passwd into the current file. - -- Ctrl + K: cuts the current line. -- Ctrl + U: paste. -- Ctrl + C: cancels the current operation and places you at the previous screen. - -To easily navigate the opened file, nano provides the following features: - -- Ctrl + F and Ctrl + B move the cursor forward or backward, whereas Ctrl + P and Ctrl + N move it up or down one line at a time, respectively, just like the arrow keys. -- Ctrl + space and Alt + space move the cursor forward and backward one word at a time. - -Finally, - -- Ctrl + _ (underscore) and then entering X,Y will take you precisely to Line X, column Y, if you want to place the cursor at a specific place in the document. - -![Navigate to Line Numbers in Nano](http://www.tecmint.com/wp-content/uploads/2015/03/Column-Numbers.png) - -Navigate to Line Numbers in Nano - -The example above will take you to line 15, column 14 in the current document. - -If you can recall your early Linux days, specially if you came from Windows, you will probably agree that starting off with nano is the best way to go for a new user. - -### Editing Files with Vim Editor ### - -Vim is an improved version of vi, a famous text editor in Linux that is available on all POSIX-compliant *nix systems, such as RHEL 7. If you have the chance and can install vim, go ahead; if not, most (if not all) the tips given in this article should also work. - -One of vim’s distinguishing features is the different modes in which it operates: - - -- Command mode will allow you to browse through the file and enter commands, which are brief and case-sensitive combinations of one or more letters. If you need to repeat one of them a certain number of times, you can prefix it with a number (there are only a few exceptions to this rule). For example, yy (or Y, short for yank) copies the entire current line, whereas 4yy (or 4Y) copies the entire current line along with the next three lines (4 lines in total). -- In ex mode, you can manipulate files (including saving a current file and running outside programs or commands). To enter ex mode, we must type a colon (:) starting from command mode (or in other words, Esc + :), directly followed by the name of the ex-mode command that you want to use. -- In insert mode, which is accessed by typing the letter i, we simply enter text. Most keystrokes result in text appearing on the screen. -- We can always enter command mode (regardless of the mode we’re working on) by pressing the Esc key. - -Let’s see how we can perform the same operations that we outlined for nano in the previous section, but now with vim. 
Don’t forget to hit the Enter key to confirm the vim command!
-
-To access vim’s full manual from the command line, type :help while in command mode and then press Enter:
-
-![vim Editor Help Menu](http://www.tecmint.com/wp-content/uploads/2015/03/vim-Help-Menu.png)
-
-vim Editor Help Menu
-
-The upper section presents an index list of contents, with defined sections dedicated to specific topics about vim. To navigate to a section, place the cursor over it and press Ctrl + ] (closing square bracket). Note that the bottom section displays the current file.
-
-1. To save changes made to a file, run any of the following commands from command mode and it will do the trick:
-
-    :wq!
-    :x!
-    ZZ (yes, double Z without the colon at the beginning)
-
-2. To exit discarding changes, use :q!. This command will also allow you to exit the help menu described above, and return to the current file in command mode.
-
-3. Cut N number of lines: type Ndd while in command mode.
-
-4. Copy M number of lines: type Myy while in command mode.
-
-5. Paste lines that were previously cut or copied: press the P key while in command mode.
-
-6. To insert the contents of another file into the current one:
-
-    :r filename
-
-For example, to insert the contents of `/etc/fstab`, do:
-
-![Insert Content of File in vi Editor](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Content-vi-Editor.png)
-
-Insert Content of File in vi Editor
-
-7. To insert the output of a command into the current document:
-
-    :r! command
-
-For example, to insert the date and time in the line below the current position of the cursor:
-
-![Insert Time and Date in vi Editor](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Time-and-Date-in-vi-Editor.png)
-
-Insert Time and Date in vi Editor
-
-In another article that I wrote ([Part 2 of the LFCS series][1]), I explained in greater detail the keyboard shortcuts and functions available in vim. You may want to refer to that tutorial for further examples on how to use this powerful text editor.
-
-### Analyzing Text with Grep and Regular Expressions ###
-
-By now you have learned how to create and edit files using nano or vim. Say you become a text editor ninja, so to speak – now what? Among other things, you will also need to know how to search for regular expressions inside text.
-
-A regular expression (also known as “regex” or “regexp“) is a way of identifying a text string or pattern so that a program can compare the pattern against arbitrary text strings. Although the use of regular expressions along with grep would deserve an entire article on its own, let us review the basics here:
-
-**1. The simplest regular expression is an alphanumeric string (i.e., the word “svm”) or two (when two are present, you can use the | (OR) operator):**
-
-    # grep -Ei 'svm|vmx' /proc/cpuinfo
-
-The presence of either of those two strings indicates that your processor supports virtualization:
-
-![Regular Expression Example](http://www.tecmint.com/wp-content/uploads/2015/03/Regular-Expression-Example.png)
-
-Regular Expression Example
-
-**2. A second kind of regular expression is a range list, enclosed between square brackets.**
-
-For example, `c[aeiou]t` matches the strings cat, cet, cit, cot, and cut, whereas `[a-z]` and `[0-9]` match any lowercase letter or decimal digit, respectively. If you want to repeat the regular expression X a certain number of times, type `{X}` immediately following the regexp.
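-
-(A quick illustration of the `{X}` quantifier, added by us and assuming GNU grep; of the three words below, only the one with exactly two o’s between l and p matches.)
-
-    # printf 'lop\nloop\nlooop\n' | grep -E 'lo{2}p'
-
-    loop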
-
-For example, let’s extract the UUIDs of storage devices from `/etc/fstab`:
-
-    # grep -Ei '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}' -o /etc/fstab
-
-![Extract String from a File in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/Extract-String-from-a-File.png)
-
-Extract String from a File
-
-The first expression in brackets `[0-9a-f]` is used to denote lowercase hexadecimal characters, and `{8}` is a quantifier that indicates the number of times that the preceding match should be repeated (the first sequence of characters in a UUID is an 8-character long hexadecimal string).
-
-The parentheses, the `{4}` quantifier, and the hyphen indicate that the next sequence is a 4-character long hexadecimal string, and the quantifier that follows `({3})` denotes that the expression should be repeated 3 times.
-
-Finally, the last sequence in the UUID, a 12-character long hexadecimal string, is retrieved with `[0-9a-f]{12}`, and the -o option prints only the matched (non-empty) parts of the matching line in /etc/fstab.
-
-**3. POSIX character classes.**
-
-注:表格
| Character Class | Matches…                                           |
|-----------------|----------------------------------------------------|
| [[:alnum:]]     | Any alphanumeric [a-zA-Z0-9] character             |
| [[:alpha:]]     | Any alphabetic [a-zA-Z] character                  |
| [[:blank:]]     | Spaces or tabs                                     |
| [[:cntrl:]]     | Any control characters (ASCII 0 to 32)             |
| [[:digit:]]     | Any numeric digits [0-9]                           |
| [[:graph:]]     | Any visible characters                             |
| [[:lower:]]     | Any lowercase [a-z] character                      |
| [[:print:]]     | Any non-control characters                         |
| [[:space:]]     | Any whitespace                                     |
| [[:punct:]]     | Any punctuation marks                              |
| [[:upper:]]     | Any uppercase [A-Z] character                      |
| [[:xdigit:]]    | Any hex digits [0-9a-fA-F]                         |
| [:word:]        | Any letters, numbers, and underscores [a-zA-Z0-9_] |
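-
-(An aside added by us, not part of the original table: these classes are bracket expressions, so they combine with quantifiers just like literal ranges do. Assuming GNU grep and the iproute2 `ip` tool are available, the following should pull colon-separated hex pairs, such as MAC addresses, out of the interface listing.)
-
-    # ip link | grep -oE '([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}'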
- -For example, we may be interested in finding out what the used UIDs and GIDs (refer to [Part 2][2] of this series to refresh your memory) are for real users that have been added to our system. Thus, we will search for sequences of 4 digits in /etc/passwd: - - # grep -Ei [[:digit:]]{4} /etc/passwd - -![Search For a String in File](http://www.tecmint.com/wp-content/uploads/2015/03/Search-For-String-in-File.png) - -Search For a String in File - -The above example may not be the best case of use of regular expressions in the real world, but it clearly illustrates how to use POSIX character classes to analyze text along with grep. - -### Conclusion ### - -In this article we have provided some tips to make the most of nano and vim, two text editors for the command-line users. Both tools are supported by extensive documentation, which you can consult in their respective official web sites (links given below) and using the suggestions given in [Part 1][3] of this series. - -#### Reference Links #### - -- [http://www.nano-editor.org/][4] -- [http://www.vim.org/][5] - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-how-to-use-nano-vi-editors/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/vi-editor-usage/ -[2]:http://www.tecmint.com/file-and-directory-management-in-linux/ -[3]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ -[4]:http://www.nano-editor.org/ -[5]:http://www.vim.org/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md b/translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md new file mode 100644 index 0000000000..8438ec0351 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 04--Editing Text Files with Nano and Vim or Analyzing text with grep and regexps.md @@ -0,0 +1,258 @@ +RHCSA 系列:使用 Nano 和 Vim 编辑文本文件/使用 grep 和 regexps 分析文本 – Part 4 +================================================================================ +作为系统管理员的日常职责的一部分,每个系统管理员都必须处理文本文件,这包括编辑现存文件(大多可能是配置文件),或创建新的文件。有这样一个说法,假如你想在 Linux 世界中挑起一场圣战,你可以询问系统管理员们,什么是他们最喜爱的编辑器以及为什么。在这篇文章中,我们并不打算那样做,但我们将向你呈现一些技巧,这些技巧对使用两款在 RHEL 7 中最为常用的文本编辑器: nano(由于其简单和易用,特别是对于新手来说) 和 vi/m(由于其自身的几个特色使得它不仅仅是一个简单的编辑器)来说都大有裨益。我确信你可以找到更多的理由来使用其中的一个或另一个,或许其他的一些编辑器如 emacs 或 pico。这完全取决于你。 + +![学习 Nano 和 vi 编辑器](http://www.tecmint.com/wp-content/uploads/2015/03/Learn-Nano-and-vi-Editors.png) + +RHCSA: 使用 Nano 和 Vim 编辑文本文件 – Part 4 + +### 使用 Nano 编辑器来编辑文件 ### + +要启动 nano,你可以在命令提示符下输入 `nano`,或选择性地跟上一个文件名(在这种情况下,若文件存在,它将在编辑模式中被打开)。若文件不存在,或我们省略了文件名, nano 也将在 编辑模式下开启,但将为我们开启一个空白屏以便开始输入: + +![Nano 编辑器](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Editor.png) + +Nano 编辑器 + +正如你在上一张图片中所见的那样, nano 在屏幕的底部呈现出一些功能,它们可以通过暗指的快捷键来触发(^,即插入记号,代指 Ctrl 键)。它们中的一些是: + +- Ctrl + G: 触发一个帮助菜单,带有一个关于功能和相应的描述的完整列表; +- Ctrl + X: 离开当前文件,假如更改没有被保存,则它们将被丢弃; +- Ctrl + R: 通过指定一个完整的文件路径,让你选择一个文件来将该文件的内容插入到当前文件中; + +![Nano 编辑器帮助菜单](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Help.png) + +Nano 编辑器帮助菜单 + +- Ctrl + O: 保存更改到一个文件。它将让你用一个与源文件相同或不同的名称来保存该文件,然后按 Enter 键来确认。 + +![Nano 编辑器保存更改模式](http://www.tecmint.com/wp-content/uploads/2015/03/Nano-Save-Changes.png) + +Nano 
编辑器的保存更改模式 + +- Ctrl + X: 离开当前文件,假如更改没有被保存,则它们将被丢弃; +- Ctrl + R: 通过指定一个完整的文件路径,让你选择一个文件来将该文件的内容插入到当前文件中; + +![Nano: 插入文件内容到主文件中](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-File-Content.png) + +Nano: 插入文件内容到主文件中 + +上图的操作将把 `/etc/passwd` 的内容插入到当前文件中。 + +- Ctrl + K: 剪切当前行; +- Ctrl + U: 粘贴; +- Ctrl + C: 取消当前的操作并返回先前的屏幕; + +为了轻松地在打开的文件中浏览, nano 提供了下面的功能: + +- Ctrl + F 和 Ctrl + B 分别先前或向后移动光标;而 Ctrl + P 和 Ctrl + N 则分别向上或向下移动一行,功能与箭头键相同; +- Ctrl + space 和 Alt + space 分别向前或向后移动一个单词; + +最后, + +- 假如你想将光标移动到文档中的特定位置,使用 Ctrl + _ (下划线) 并接着输入 X,Y 将准确地带你到 第 X 行,第 Y 列。 + +![在 nano 中定位到具体的行,列](http://www.tecmint.com/wp-content/uploads/2015/03/Column-Numbers.png) + +在 nano 中定位到具体的行和列 + +上面的例子将带你到当前文档的第 15 行,第 14 列。 + +假如你可以回忆起你早期的 Linux 岁月,特别是当你刚从 Windows 迁移到 Linux 中,你就可能会同意:对于一个新手来说,使用 nano 来开始学习是最好的方式。 + +### 使用 Vim 编辑器来编辑文件 ### + + +Vim 是 vi 的加强版本,它是 Linux 中一个著名的文本编辑器,可在所有兼容 POSIX 的 *nix 系统中获取到,例如在 RHEL 7 中。假如你有机会并可以安装 Vim,请继续;假如不能,这篇文章中的大多数(若不是全部)的提示也应该可以正常工作。 + +Vim 的一个出众的特点是可以在多个不同的模式中进行操作: + +- 命令模式将允许你在文件中跳转和输入命令,这些命令是由一个或多个字母组成的简洁且对大小写敏感的组合。假如你想重复执行某个命令特定次,你可以在这个命令前加上需要重复的次数(这个规则只有极少数例外)。例如, yy(或 Y,yank 的缩写)可以复制整个当前行,而 4yy(或 4Y)则复制整个当前行到接着的 3 行(总共 4 行)。 +- 在 ex 模式中,你可以操作文件(包括保存当前文件和运行外部的程序或命令)。要进入 ex 模式,你必须在命令模式前(或其他词前,Esc + :)输入一个冒号(:),再直接跟上你想使用的 ex 模式命令的名称。 +- 对于插入模式,可以输入字母 i 进入,我们只需要输入文字即可。大多数的键击结果都将出现在屏幕中的文本中。 +- 我们总是可以通过敲击 Esc 键来进入命令模式(无论我们正工作在哪个模式下)。 + +现在,让我们看看如何在 vim 中执行在上一节列举的针对 nano 的相同的操作。不要忘记敲击 Enter 键来确认 vim 命令。 + +为了从命令行中获取 vim 的完整手册,在命令模式下键入 `:help` 并敲击 Enter 键: + +![vim 编辑器帮助菜单](http://www.tecmint.com/wp-content/uploads/2015/03/vim-Help-Menu.png) + +vim 编辑器帮助菜单 + +上面的小节呈现出一个目录列表,而定义过的小节则主要关注 Vim 的特定话题。要浏览某一个小节,可以将光标放到它的上面,然后按 Ctrl + ] (闭方括号)。注意,底部的小节展示的是当前文件的内容。 + +1. 要保存更改到文件,在命令模式中运行下面命令中的任意一个,就可以达到这个目的: + +``` +:wq! +:x! +ZZ (是的,两个 ZZ,前面无需添加冒号) +``` + +2. 要离开并丢弃更改,使用 `:q!`。这个命令也将允许你离开上面描述过的帮助菜单,并返回到命令模式中的当前文件。 + +3. 剪切 N 行:在命令模式中键入 `Ndd`。 + +4. 复制 M 行:在命令模式中键入 `Myy`。 + +5. 粘贴先前剪贴或复制过的行:在命令模式中按 `P`键。 + +6. 要插入另一个文件的内容到当前文件: + + :r filename + +例如,插入 `/etc/fstab` 的内容,可以这样做: + +[在 vi 编辑器中插入文件的内容](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Content-vi-Editor.png) + +在 vi 编辑器中插入文件的内容 + +7. 插入一个命名的输出到当前文档: + + :r! command + +例如,要在光标所在的当前位置后面插入日期和时间: + +![在 vi 编辑器中插入时间和日期](http://www.tecmint.com/wp-content/uploads/2015/03/Insert-Time-and-Date-in-vi-Editor.png) + +在 vi 编辑器中插入时间和日期 + +在另一篇我写的文章中,([LFCS 系列的 Part 2][1]),我更加详细地解释了在 vim 中可用的键盘快捷键和功能。或许你可以参考那个教程来查看如何使用这个强大的文本编辑器的更深入的例子。 + +### 使用 Grep 和正则表达式来分析文本 ### + +到现在为止,你已经学习了如何使用 nano 或 vim 创建和编辑文件。打个比方说,假如你成为了一个文本编辑器忍者 – 那又怎样呢? 在其他事情上,你也需要知道如何在文本中搜索正则表达式。 + +正则表达式(也称为 "regex" 或 "regexp") 是一种识别一个特定文本字符串或模式的方式,使得一个程序可以将这个模式和任意的文本字符串相比较。尽管利用 grep 来使用正则表达式值得用一整篇文章来描述,这里就让我们复习一些基本的知识: + +**1. 最简单的正则表达式是一个由数字和字母构成的字符串(即,单词 "svm") 或两个(在使用两个字符串时,你可以使用 `|`(或) 操作符):** + + # grep -Ei 'svm|vmx' /proc/cpuinfo + +上面命令的输出结果中若有这两个字符串之一的出现,则标志着你的处理器支持虚拟化: + +![正则表达式示例](http://www.tecmint.com/wp-content/uploads/2015/03/Regular-Expression-Example.png) + +正则表达式示例 + +**2. 
第二种正则表达式是一个范围列表,由方括号包裹。**
+
+例如, `c[aeiou]t` 匹配字符串 cat,cet,cit,cot 和 cut,而 `[a-z]` 和 `[0-9]` 则相应地匹配小写字母或十进制数字。假如你想重复正则表达式 X 次,在正则表达式的后面立即输入 `{X}` 即可。
+
+例如,让我们从 `/etc/fstab` 中析出存储设备的 UUID:
+
+    # grep -Ei '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}' -o /etc/fstab
+
+![在 Linux 中从一个文件中析出字符串](http://www.tecmint.com/wp-content/uploads/2015/03/Extract-String-from-a-File.png)
+
+从一个文件中析出字符串
+
+方括号中的第一个表达式 `[0-9a-f]` 被用来表示小写的十六进制字符,`{8}` 是一个量词,暗示前面匹配的字符串应该重复的次数(在一个 UUID 中的开头序列是一个 8 个字符长的十六进制字符串)。
+
+在圆括号中,量词 `{4}` 和连字符暗示下一个序列是一个 4 个字符长的十六进制字符串,接着的量词 `({3})` 表示前面的表达式要重复 3 次。
+
+最后,在 UUID 中的最后一个 12 个字符长的十六进制字符串可以由 `[0-9a-f]{12}` 取得,`-o` 选项表示只打印出在 `/etc/fstab` 中匹配行中的匹配的(非空)部分。
+
+**3. POSIX 字符类**
+
+注:表格
| 字符类       | 匹配 …                              |
|--------------|-------------------------------------|
| [[:alnum:]]  | 任意字母或数字 [a-zA-Z0-9]          |
| [[:alpha:]]  | 任意字母 [a-zA-Z]                   |
| [[:blank:]]  | 空格或制表符                        |
| [[:cntrl:]]  | 任意控制字符 (ASCII 码的 0 至 32)   |
| [[:digit:]]  | 任意数字 [0-9]                      |
| [[:graph:]]  | 任意可见字符                        |
| [[:lower:]]  | 任意小写字母 [a-z]                  |
| [[:print:]]  | 任意非控制字符                      |
| [[:space:]]  | 任意空格                            |
| [[:punct:]]  | 任意标点字符                        |
| [[:upper:]]  | 任意大写字母 [A-Z]                  |
| [[:xdigit:]] | 任意十六进制数字 [0-9a-fA-F]        |
| [:word:]     | 任意字母,数字和下划线 [a-zA-Z0-9_] |
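+
+(这是我们补充的一个小示例,并非原文内容:字符类本质上是方括号表达式,可以像普通的字符范围一样与量词组合。假设系统中有 GNU grep 和 iproute2 的 `ip` 命令,下面的命令应该能从接口列表中析出形如 MAC 地址的十六进制序列。)
+
+    # ip link | grep -oE '([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}'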
+ +例如,我们可能会对查找已添加到我们系统中给真实用户的 UID 和 GID(参考这个系列的 [Part 2][2]来回忆起这些知识)感兴趣。那么,我们将在 `/etc/passwd` 文件中查找 4 个字符长的序列: + + # grep -Ei [[:digit:]]{4} /etc/passwd + +![在文件中查找一个字符串](http://www.tecmint.com/wp-content/uploads/2015/03/Search-For-String-in-File.png) + +在文件中查找一个字符串 + +上面的示例可能不是真实世界中使用正则表达式的最好案例,但它清晰地启发了我们如何使用 POSIX 字符类来使用 grep 分析文本。 + +### 总结 ### + + +在这篇文章中,我们已经提供了一些技巧来最大地利用针对命令行用户的两个文本编辑器 nano 和 vim,这两个工具都有相关的扩展文档可供阅读,你可以分别查询它们的官方网站(链接在下面给出)以及使用这个系列中的 [Part 1][3] 给出的建议。 + +#### 参考文件链接 #### + +- [http://www.nano-editor.org/][4] +- [http://www.vim.org/][5] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-how-to-use-nano-vi-editors/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/vi-editor-usage/ +[2]:http://www.tecmint.com/file-and-directory-management-in-linux/ +[3]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ +[4]:http://www.nano-editor.org/ +[5]:http://www.vim.org/ From 8e789f8324e065ffd379940e6aaf70234213dbcc Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Tue, 11 Aug 2015 20:20:36 +0800 Subject: [PATCH 183/207] Update 20150811 How to download apk files from Google Play Store on Linux.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...w to download apk files from Google Play Store on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md b/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md index 529e877d7b..50bf618e86 100644 --- a/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md +++ b/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md @@ -1,3 +1,5 @@ +FSSlc translating + How to download apk files from Google Play Store on Linux ================================================================================ Suppose you want to install an Android app on your Android device. However, for whatever reason, you cannot access Google Play Store on the Android device. What can you do then? One way to install the app without Google Play Store access is to download its APK file using some other means, and then [install the APK][1] file on the Android device manually. 
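(A quick illustrative note from us, not part of the original article: once the APK file is on your computer, sideloading it over USB is typically a one-liner with the Android platform tools, assuming USB debugging is enabled on the device and `adb` is installed. The path below is a placeholder.)

    $ adb install path/to/downloaded.apk    # path/to/downloaded.apk is a placeholder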
@@ -96,4 +98,4 @@ via: http://xmodulo.com/download-apk-files-google-play-store.html [3]:https://addons.mozilla.org/en-us/firefox/addon/apk-downloader/ [4]:http://codingteam.net/project/googleplaydownloader [5]:http://packages.ubuntu.com/vivid/python-ndg-httpsclient -[6]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html \ No newline at end of file +[6]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html From e0f43461b533db0e4444eb78cc24e749d8209713 Mon Sep 17 00:00:00 2001 From: wxy Date: Tue, 11 Aug 2015 22:28:25 +0800 Subject: [PATCH 184/207] PUB:20150810 For Linux, Supercomputers R Us @xiaoyu33 --- ...20150810 For Linux, Supercomputers R Us.md | 61 +++++++++++++++++++ ...20150810 For Linux, Supercomputers R Us.md | 59 ------------------ 2 files changed, 61 insertions(+), 59 deletions(-) create mode 100644 published/20150810 For Linux, Supercomputers R Us.md delete mode 100644 translated/talk/20150810 For Linux, Supercomputers R Us.md diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/published/20150810 For Linux, Supercomputers R Us.md new file mode 100644 index 0000000000..e173d7513c --- /dev/null +++ b/published/20150810 For Linux, Supercomputers R Us.md @@ -0,0 +1,61 @@ +Linux:称霸超级计算机系统 +================================================================================ + +> 几乎所有超级计算机上运行的系统都是 Linux,其中包括那些由树莓派(Raspberry Pi)板卡和 PlayStation 3游戏机组成的计算机。 + +![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) + +*题图来源:By Michel Ngilen,[ CC BY 2.0 ], via Wikimedia Commons* + +超级计算机是一种严肃的工具,做的都是高大上的计算。它们往往从事于严肃的用途,比如原子弹模拟、气候模拟和高等物理学。当然,它们的花费也很高大上。在最新的超级计算机 [Top500][1] 排名中,中国国防科技大学研制的天河 2 号位居第一,而天河 2 号的建造耗资约 3.9 亿美元! 
+ +但是,也有一个超级计算机,是由博伊西州立大学电气和计算机工程系的一名在读博士 Joshua Kiepert [用树莓派构建完成][2]的,其建造成本低于2000美元。 + +不,这不是我编造的。它一个真实的超级计算机,由超频到 1GHz 的 [B 型树莓派][3]的 ARM11 处理器与 VideoCore IV GPU 组成。每个都配备了 512MB 的内存、一对 USB 端口和 1 个 10/100 BaseT 以太网端口。 + +那么天河 2 号和博伊西州立大学的超级计算机有什么共同点吗?它们都运行 Linux 系统。世界最快的超级计算机[前 500 强中有 486][4] 个也同样运行的是 Linux 系统。这是从 20 多年前就开始的格局。而现在的趋势是超级计算机开始由廉价单元组成,因为 Kiepert 的机器并不是唯一一个无所谓预算的超级计算机。 + +麻省大学达特茅斯分校的物理学副教授 Gaurav Khanna 创建了一台超级计算机仅用了[不足 200 台的 PlayStation3 视频游戏机][5]。 + +PlayStation 游戏机由一个 3.2 GHz 的基于 PowerPC 的 Power 处理器所驱动。每个都配有 512M 的内存。你现在仍然可以花 200 美元买到一个,尽管索尼将在年底逐步淘汰它们。Khanna 仅用了 16 个 PlayStation 3 构建了他第一台超级计算机,所以你也可以花费不到 4000 美元就拥有你自己的超级计算机。 + +这些机器可能是用玩具建成的,但他们不是玩具。Khanna 已经用它做了严肃的天体物理学研究。一个白帽子黑客组织使用了类似的 [PlayStation 3 超级计算机在 2008 年破解了 SSL 的 MD5 哈希算法][6]。 + +两年后,美国空军研究实验室研制的 [Condor Cluster,使用了 1760 个索尼的 PlayStation 3 的处理器][7]和168 个通用的图形处理单元。这个低廉的超级计算机,每秒运行约 500 TFLOP ,即每秒可进行 500 万亿次浮点运算。 + +其他的一些便宜且适用于构建家庭超级计算机的构件包括,专业并行处理板卡,比如信用卡大小的 [99 美元的 Parallella 板卡][8],以及高端显卡,比如 [Nvidia 的 Titan Z][9] 和 [ AMD 的 FirePro W9100][10]。这些高端板卡的市场零售价约 3000 美元,一些想要一台梦幻般的机器的玩家为此参加了[英特尔极限大师赛:英雄联盟世界锦标赛][11],要是甚至有机会得到了第一名的话,能获得超过 10 万美元奖金。另一方面,一个能够自己提供超过 2.5TFLOPS 计算能力的计算机,对于科学家和研究人员来说,这为他们提供了一个可以拥有自己专属的超级计算机的经济的方法。 + +而超级计算机与 Linux 的连接,这一切都始于 1994 年戈达德航天中心的第一个名为 [Beowulf 超级计算机][13]。 + +按照我们的标准,Beowulf 不能算是最优越的。但在那个时期,作为第一台自制的超级计算机,它的 16 个英特尔486DX 处理器和 10Mbps 的以太网总线,是个伟大的创举。[Beowulf 是由美国航空航天局的承建商 Don Becker 和 Thomas Sterling 所设计的][14],是第一台“创客”超级计算机。它的“计算部件” 486DX PC,成本仅有几千美元。尽管它的速度只有个位数的 GFLOPS (吉拍,每秒10亿次)浮点运算,[Beowulf][15] 表明了你可以用商用现货(COTS)硬件和 Linux 创建超级计算机。 + +我真希望我参与创建了一部分,但是我 1994 年就离开了戈达德,开始了作为一名全职的科技记者的职业生涯。该死。 + +但是尽管我只是使用笔记本的记者,我依然能够体会到 COTS 和开源软件是如何永远的改变了超级计算机。我希望现在读这篇文章的你也能。因为,无论是 Raspberry Pi 集群,还是超过 300 万个英特尔的 Ivy Bridge 和 Xeon Phi 芯片的庞然大物,几乎所有当代的超级计算机都可以追溯到 Beowulf。 + +-------------------------------------------------------------------------------- + +via: http://www.computerworld.com/article/2960701/linux/for-linux-supercomputers-r-us.html + +作者:[Steven J. 
Vaughan-Nichols][a] +译者:[xiaoyu33](https://github.com/xiaoyu33) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ +[1]:http://www.top500.org/ +[2]:http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/ +[3]:https://www.raspberrypi.org/products/model-b/ +[4]:http://www.zdnet.com/article/linux-still-rules-supercomputing/ +[5]:http://www.nytimes.com/2014/12/23/science/an-economical-way-to-save-progress.html?smid=fb-nytimes&smtyp=cur&bicmp=AD&bicmlukp=WT.mc_id&bicmst=1409232722000&bicmet=1419773522000&_r=4 +[6]:http://www.computerworld.com/article/2529932/cybercrime-hacking/researchers-hack-verisign-s-ssl-scheme-for-securing-web-sites.html +[7]:http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html +[8]:http://www.zdnet.com/article/parallella-the-99-linux-supercomputer/ +[9]:http://blogs.nvidia.com/blog/2014/03/25/titan-z/ +[10]:http://www.amd.com/en-us/press-releases/Pages/amd-flagship-professional-2014apr7.aspx +[11]:http://en.intelextrememasters.com/news/check-out-the-intel-extreme-masters-katowice-prize-money-distribution/ + +[13]:http://www.beowulf.org/overview/history.html +[14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html +[15]:http://www.beowulf.org/ diff --git a/translated/talk/20150810 For Linux, Supercomputers R Us.md b/translated/talk/20150810 For Linux, Supercomputers R Us.md deleted file mode 100644 index e5022a658f..0000000000 --- a/translated/talk/20150810 For Linux, Supercomputers R Us.md +++ /dev/null @@ -1,59 +0,0 @@ -Linux:称霸超级计算机系统 -================================================================================ -![Credit: Michel Ngilen, CC BY 2.0, via Wikimedia Commons](http://images.techhive.com/images/article/2015/08/playstation_3-100602985-primary.idge.jpg) -首图来源:By Michel Ngilen,[ CC BY 2.0 ], via Wikimedia Commons - -> 几乎所有超级计算机上运行的系统都是Linux,其中包括那些由树莓派(Raspberry Pi)板和PlayStation 3游戏机板组成的计算机。 - -超级计算机是很正经的工具,目的是做严肃的计算。它们往往从事于严肃的追求,比如原子弹的模拟,气候模拟和高级物理学。当然,它们也需要大笔资金的投资。在最新的超级计算机[500强][1]排名中,中国国防科大研制的天河2号位居第一。天河2号耗资约3.9亿美元。 - -但是,也有一个超级计算机,是由博伊西州立大学电气和计算机工程系的一名在读博士Joshua Kiepert[用树莓派构建完成][2]的。其创建成本低于2000美元。 - -不,这不是我编造的。这是一个真实的超级计算机,由超频1GHz的[B型树莓派][3]ARM11处理器与VideoCore IV GPU组成。每个都配备了512MB的RAM,一对USB端口和1个10/100 BaseT以太网端口。 - -那么天河2号和博伊西州立大学的超级计算机有什么共同点?它们都运行Linux系统。世界最快的超级计算机[前500强中有486][4]个也同样运行的是Linux系统。这是20多年前就开始的一种覆盖。现在Linux开始建立于廉价的超级计算机。因为Kiepert的机器并不是唯一的预算数字计算机。 - -Gaurav Khanna,麻省大学达特茅斯分校的物理学副教授,创建了一台超级计算机仅用了[不足200的PlayStation3视频游戏机][5]。 - -PlayStation游戏机是由一个3.2 GHz的基于PowerPC的电源处理单元供电。每个都配有512M的RAM。你现在仍然可以花200美元买到一个,尽管索尼将在年底逐步淘汰它们。Khanna仅用16个PlayStation 3s构建了他第一台超级计算机,所以你也可以花费不到4000美元就拥有你自己的超级计算机。 - -这些机器可能是从玩具建成的,但他们不是玩具。Khanna已经用它做了严肃的天体物理学研究。一个白帽子黑客组织使用了类似的[PlayStation 3超级计算机在2008年破解了SSL的MD5哈希算法][6]。 - -两年后,美国空军研究实验室研制的[Condor Cluster,使用了1,760个索尼的PlayStation3的处理器][7]和168个通用的图形处理单元。这个低廉的超级计算机,每秒运行约500TFLOPs,或500万亿次浮点运算。 - -其他的一些便宜且适用于构建家庭超级计算机的构件包括,专业并行处理板比如信用卡大小[99美元的Parallella板][8],以及高端显卡比如[Nvidia的 Titan Z][9] 以及[ AMD的 FirePro W9100][10].这些高端主板市场零售价约3000美元,被一些[英特尔极限大师赛世界锦标赛英雄联盟参赛][11]玩家觊觎能够赢得的梦想的机器,c[传说][12]这项比赛第一名能获得超过10万美元奖金。另一方面,一个人能够独自提供超过2.5TFLOPS,并且他们为科学家和研究人员提供了一个经济的方法,使得他们拥有自己专属的超级计算机。 - -作为Linux的连接,这一切都开始于1994年戈达德航天中心的第一个名为[Beowulf超级计算机][13]。 - -按照我们的标准,Beowulf不能算是最优越的。但在那个时期,作为第一台自制的超级计算机,其16英特尔486DX处理器和10Mbps的以太网总线,是伟大的创举。由[美国航空航天局承包人Don Becker和Thomas Sterling设计的Beowulf][14],是第一个“制造者”超级计算机。它的“计算部件”486DX 
PCs,成本仅有几千美元。尽管它的速度只有一位数的浮点运算,[Beowulf][15]表明了你可以用商用现货(COTS)硬件和Linux创建超级计算机。 - -我真希望我参与创作了一部分,但是我1994年就离开了戈达德,开始了作为一名全职的科技记者的职业生涯。该死。 - -但是尽管我只是使用笔记本的记者,我依然能够体会到COTS和开源软件是如何永远的改变了超级计算机。我希望现在读这篇文章的你也能。因为,无论是Raspberry Pis集群,还是超过300万英特尔的Ivy Bridge和Xeon Phi芯片的庞然大物,几乎所有当代的超级计算机都可以追溯到Beowulf。 - --------------------------------------------------------------------------------- - -via: - -作者:[Steven J. Vaughan-Nichols][a] -译者:[xiaoyu33](https://github.com/xiaoyu33) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.computerworld.com/author/Steven-J.-Vaughan_Nichols/ -[1]:http://www.top500.org/ -[2]:http://www.zdnet.com/article/build-your-own-supercomputer-out-of-raspberry-pi-boards/ -[3]:https://www.raspberrypi.org/products/model-b/ -[4]:http://www.zdnet.com/article/linux-still-rules-supercomputing/ -[5]:http://www.nytimes.com/2014/12/23/science/an-economical-way-to-save-progress.html?smid=fb-nytimes&smtyp=cur&bicmp=AD&bicmlukp=WT.mc_id&bicmst=1409232722000&bicmet=1419773522000&_r=4 -[6]:http://www.computerworld.com/article/2529932/cybercrime-hacking/researchers-hack-verisign-s-ssl-scheme-for-securing-web-sites.html -[7]:http://phys.org/news/2010-12-air-playstation-3s-supercomputer.html -[8]:http://www.zdnet.com/article/parallella-the-99-linux-supercomputer/ -[9]:http://blogs.nvidia.com/blog/2014/03/25/titan-z/ -[10]:http://www.amd.com/en-us/press-releases/Pages/amd-flagship-professional-2014apr7.aspx -[11]:http://en.intelextrememasters.com/news/check-out-the-intel-extreme-masters-katowice-prize-money-distribution/ -[12]:http://www.google.com/url?q=http%3A%2F%2Fen.intelextrememasters.com%2Fnews%2Fcheck-out-the-intel-extreme-masters-katowice-prize-money-distribution%2F&sa=D&sntz=1&usg=AFQjCNE6yoAGGz-Hpi2tPF4gdhuPBEckhQ -[13]:http://www.beowulf.org/overview/history.html -[14]:http://yclept.ucdavis.edu/Beowulf/aboutbeowulf.html -[15]:http://www.beowulf.org/ From cc16c9074da3fd5406449f3e7cadbfd39998790c Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 11 Aug 2015 23:13:10 +0800 Subject: [PATCH 185/207] =?UTF-8?q?20150811-2=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...k Traffic Analyzer--Install it on Linux.md | 62 +++++++++++++++++++ 1 file changed, 62 insertions(+) create mode 100644 sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md diff --git a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md new file mode 100644 index 0000000000..9f78722cb6 --- /dev/null +++ b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md @@ -0,0 +1,62 @@ +Darkstat is a Web Based Network Traffic Analyzer – Install it on Linux +================================================================================ +Darkstat is a simple, web based network traffic analyzer application. It works on many popular operating systems like Linux, Solaris, Mac and AIX. It keeps running in the background as a daemon and continues collecting and sniffing network data and presents it in easily understandable format within its web interface. It can generate traffic reports for hosts, identify which ports are open on some particular host and is IPV 6 complaint application. Let’s see how we can install and configure it on Linux operating system. 
+ +### Installing Darkstat on Linux ### + +**Install Darkstat on Fedora/CentOS/RHEL:** + +In order to install it on Fedora/RHEL and CentOS Linux distributions, run following command on the terminal. + + sudo yum install darkstat + +**Install Darkstat on Ubuntu/Debian:** + +Run following on the terminal to install it on Ubuntu and Debian. + + sudo apt-get install darkstat + +Congratulations, Darkstat has been installed on your Linux system now. + +### Configuring Darkstat ### + +In order to run this application properly, we need to perform some basic configurations. Edit /etc/darkstat/init.cfg file in Gedit text editor by running the following command on the terminal. + + sudo gedit /etc/darkstat/init.cfg + +![](http://linuxpitstop.com/wp-content/uploads/2015/08/13.png) +Edit Darkstat + +Change START_DARKSTAT parameter to “yes” and provide your network interface in “INTERFACE”. Make sure to uncomment DIR, PORT, BINDIP, and LOCAL parameters here. If you wish to bind the web interface for Darkstat to some specific IP, provide it in BINDIP section. + +### Starting Darkstat Daemon ### + +Once the installation and configuration for Darkstat is complete, run following command to start its daemon. + + sudo /etc/init.d/darkstat start + +![Restarting Darkstat](http://linuxpitstop.com/wp-content/uploads/2015/08/23.png) + +You can configure Darkstat to start on system boot by running the following command: + + chkconfig darkstat on + +Launch your browser and load **http://localhost:666** and it will display the web based graphical interface for Darkstat. Start using this tool to analyze your network traffic. + +![Darkstat](http://linuxpitstop.com/wp-content/uploads/2015/08/32.png) + +### Conclusion ### + +It is a lightweight tool with very low memory footprints. The key reason for the popularity of this tool is simplicity, ease of configuration and usage. It is a must-have application for System and Network Administrators. + +-------------------------------------------------------------------------------- + +via: http://linuxpitstop.com/install-darkstat-on-ubuntu-linux/ + +作者:[Aun][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://linuxpitstop.com/author/aun/ \ No newline at end of file From accf7e54ceb2f72f33d9db5b6ce3e35fe1a3e9f3 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Tue, 11 Aug 2015 23:24:39 +0800 Subject: [PATCH 186/207] =?UTF-8?q?20150811-3=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ind and Delete Duplicate Files in Linux.md | 265 ++++++++++++++++++ 1 file changed, 265 insertions(+) create mode 100644 sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md diff --git a/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md b/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md new file mode 100644 index 0000000000..f89f060c92 --- /dev/null +++ b/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md @@ -0,0 +1,265 @@ +fdupes – A Comamndline Tool to Find and Delete Duplicate Files in Linux +================================================================================ +It is a common requirement to find and replace duplicate files for most of the computer users. 
Finding and removing duplicate files is a tiresome job that demands time and patience. Finding duplicate files can be very easy if your machine is powered by GNU/Linux, thanks to the ‘**fdupes**‘ utility.
+
+![Find and Delete Duplicate Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/find-and-delete-duplicate-files-in-linux.png)
+
+Fdupes – Find and Delete Duplicate Files in Linux
+
+### What is fdupes? ###
+
+**Fdupes** is a Linux utility written by **Adrian Lopez** in the C programming language and released under the MIT License. The application is able to find duplicate files in a given set of directories and sub-directories. Fdupes recognizes duplicates by comparing the MD5 signatures of files, followed by a byte-to-byte comparison. A lot of options can be passed to Fdupes to list and delete duplicate files, or to replace duplicates with hardlinks.
+
+The comparison starts in this order:
+
+**size comparison > partial MD5 signature comparison > full MD5 signature comparison > byte-to-byte comparison.**
+
+### Install fdupes on Linux ###
+
+Installing the latest version of fdupes (version 1.51) is as easy as running the following command on **Debian**-based systems such as **Ubuntu** and **Linux Mint**.
+
+    $ sudo apt-get install fdupes
+
+On CentOS/RHEL and Fedora based systems, you need to turn on the [epel repository][1] to install the fdupes package.
+
+    # yum install fdupes
+    # dnf install fdupes    [On Fedora 22 onwards]
+
+**Note**: The default package manager yum is replaced by dnf from Fedora 22 onwards…
+
+### How to use the fdupes command? ###
+
+1. For demonstration purposes, let’s create a few duplicate files under a directory (say tecmint) simply as:
+
+    $ mkdir /home/"$USER"/Desktop/tecmint && cd /home/"$USER"/Desktop/tecmint && for i in {1..15}; do echo "I Love Tecmint. Tecmint is a very nice community of Linux Users." > tecmint${i}.txt ; done
+
+After running the above command, let’s verify whether the duplicate files were created or not, using the ls [command][2].
+
+    $ ls -l
+
+    total 60
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint10.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint11.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint12.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint13.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint14.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint15.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint1.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint2.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint3.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint4.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint5.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint6.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint7.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint8.txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9.txt
+
+The above script creates **15** files, namely tecmint1.txt, tecmint2.txt…tecmint15.txt, and every file contains the same data, i.e.,
+
+    "I Love Tecmint. Tecmint is a very nice community of Linux Users."
+
+2. Now search for duplicate files within the folder **tecmint**.
+
+    $ fdupes /home/$USER/Desktop/tecmint
+
+    /home/tecmint/Desktop/tecmint/tecmint13.txt
+    /home/tecmint/Desktop/tecmint/tecmint8.txt
+    /home/tecmint/Desktop/tecmint/tecmint11.txt
+    /home/tecmint/Desktop/tecmint/tecmint3.txt
+    /home/tecmint/Desktop/tecmint/tecmint4.txt
+    /home/tecmint/Desktop/tecmint/tecmint6.txt
+    /home/tecmint/Desktop/tecmint/tecmint7.txt
+    /home/tecmint/Desktop/tecmint/tecmint9.txt
+    /home/tecmint/Desktop/tecmint/tecmint10.txt
+    /home/tecmint/Desktop/tecmint/tecmint2.txt
+    /home/tecmint/Desktop/tecmint/tecmint5.txt
+    /home/tecmint/Desktop/tecmint/tecmint14.txt
+    /home/tecmint/Desktop/tecmint/tecmint1.txt
+    /home/tecmint/Desktop/tecmint/tecmint15.txt
+    /home/tecmint/Desktop/tecmint/tecmint12.txt
+
+3. Search for duplicates recursively under every directory, including its sub-directories, using the **-r** option.
+
+It searches across all the files and folders recursively; depending upon the number of files and folders, it will take some time to scan for duplicates. In the meantime, you will be presented with the total progress in the terminal, something like this.
+
+    $ fdupes -r /home
+
+    Progress [37780/54747] 69%
+
+4. See the size of the duplicates found within a folder using the **-S** option.
+
+    $ fdupes -S /home/$USER/Desktop/tecmint
+
+    65 bytes each:
+    /home/tecmint/Desktop/tecmint/tecmint13.txt
+    /home/tecmint/Desktop/tecmint/tecmint8.txt
+    /home/tecmint/Desktop/tecmint/tecmint11.txt
+    /home/tecmint/Desktop/tecmint/tecmint3.txt
+    /home/tecmint/Desktop/tecmint/tecmint4.txt
+    /home/tecmint/Desktop/tecmint/tecmint6.txt
+    /home/tecmint/Desktop/tecmint/tecmint7.txt
+    /home/tecmint/Desktop/tecmint/tecmint9.txt
+    /home/tecmint/Desktop/tecmint/tecmint10.txt
+    /home/tecmint/Desktop/tecmint/tecmint2.txt
+    /home/tecmint/Desktop/tecmint/tecmint5.txt
+    /home/tecmint/Desktop/tecmint/tecmint14.txt
+    /home/tecmint/Desktop/tecmint/tecmint1.txt
+    /home/tecmint/Desktop/tecmint/tecmint15.txt
+    /home/tecmint/Desktop/tecmint/tecmint12.txt
+
+5. You can see the size of the duplicate files for every directory and subdirectory encountered within, using the **-S** and **-r** options at the same time, as:
+
+    $ fdupes -Sr /home/avi/Desktop/
+
+    65 bytes each:
+    /home/tecmint/Desktop/tecmint/tecmint13.txt
+    /home/tecmint/Desktop/tecmint/tecmint8.txt
+    /home/tecmint/Desktop/tecmint/tecmint11.txt
+    /home/tecmint/Desktop/tecmint/tecmint3.txt
+    /home/tecmint/Desktop/tecmint/tecmint4.txt
+    /home/tecmint/Desktop/tecmint/tecmint6.txt
+    /home/tecmint/Desktop/tecmint/tecmint7.txt
+    /home/tecmint/Desktop/tecmint/tecmint9.txt
+    /home/tecmint/Desktop/tecmint/tecmint10.txt
+    /home/tecmint/Desktop/tecmint/tecmint2.txt
+    /home/tecmint/Desktop/tecmint/tecmint5.txt
+    /home/tecmint/Desktop/tecmint/tecmint14.txt
+    /home/tecmint/Desktop/tecmint/tecmint1.txt
+    /home/tecmint/Desktop/tecmint/tecmint15.txt
+    /home/tecmint/Desktop/tecmint/tecmint12.txt
+
+    107 bytes each:
+    /home/tecmint/Desktop/resume_files/r-csc.html
+    /home/tecmint/Desktop/resume_files/fc.html
+
+6. Other than searching in one folder or all the folders recursively, you may choose to search two or three folders, as required. Not to mention that you can use the options **-S** and/or **-r** if required.
+
+    $ fdupes /home/avi/Desktop/ /home/avi/Templates/
+
+7. To delete the duplicate files while preserving a copy, you can use the option ‘**-d**’. Extra care should be taken while using this option, or else you might end up losing necessary files/data, and mind that the process is unrecoverable.
+
+    $ fdupes -d /home/$USER/Desktop/tecmint
+
+    [1] /home/tecmint/Desktop/tecmint/tecmint13.txt
+    [2] /home/tecmint/Desktop/tecmint/tecmint8.txt
+    [3] /home/tecmint/Desktop/tecmint/tecmint11.txt
+    [4] /home/tecmint/Desktop/tecmint/tecmint3.txt
+    [5] /home/tecmint/Desktop/tecmint/tecmint4.txt
+    [6] /home/tecmint/Desktop/tecmint/tecmint6.txt
+    [7] /home/tecmint/Desktop/tecmint/tecmint7.txt
+    [8] /home/tecmint/Desktop/tecmint/tecmint9.txt
+    [9] /home/tecmint/Desktop/tecmint/tecmint10.txt
+    [10] /home/tecmint/Desktop/tecmint/tecmint2.txt
+    [11] /home/tecmint/Desktop/tecmint/tecmint5.txt
+    [12] /home/tecmint/Desktop/tecmint/tecmint14.txt
+    [13] /home/tecmint/Desktop/tecmint/tecmint1.txt
+    [14] /home/tecmint/Desktop/tecmint/tecmint15.txt
+    [15] /home/tecmint/Desktop/tecmint/tecmint12.txt
+
+    Set 1 of 1, preserve files [1 - 15, all]:
+
+You may notice that all the duplicates are listed and you are prompted to delete, either one by one, by a certain range, or all in one go. You may select a range, something like below, to delete files of a specific range.
+
+    Set 1 of 1, preserve files [1 - 15, all]: 2-15
+
+    [-] /home/tecmint/Desktop/tecmint/tecmint13.txt
+    [+] /home/tecmint/Desktop/tecmint/tecmint8.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint11.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint3.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint4.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint6.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint7.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint9.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint10.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint2.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint5.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint14.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint1.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint15.txt
+    [-] /home/tecmint/Desktop/tecmint/tecmint12.txt
+
+8. From a safety point of view, you may like to print the output of ‘**fdupes**’ to a file and then check the text file to decide which files to delete. This decreases the chances of getting your files deleted accidentally. You may do:
+
+    $ fdupes -Sr /home > /home/fdupes.txt
+
+**Note**: You may replace ‘**/home**’ with your desired folder. Also use the options ‘**-r**’ and ‘**-S**’ if you want to search recursively and print sizes, respectively.
+
+9. You may omit the first file from each set of matches by using the option ‘**-f**’.
+
+First, list the files of the directory.
+
+    $ ls -l /home/$USER/Desktop/tecmint
+
+    total 20
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9 (3rd copy).txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9 (4th copy).txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9 (another copy).txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9 (copy).txt
+    -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9.txt
+
+and then omit the first file from each set of matches.
+
+    $ fdupes -f /home/$USER/Desktop/tecmint
+
+    /home/tecmint/Desktop/tecmint9 (copy).txt
+    /home/tecmint/Desktop/tecmint9 (3rd copy).txt
+    /home/tecmint/Desktop/tecmint9 (another copy).txt
+    /home/tecmint/Desktop/tecmint9 (4th copy).txt
+
+10. Check the installed version of fdupes.
+
+    $ fdupes --version
+
+    fdupes 1.51
+
+11. If you need any help on fdupes, you may use the switch ‘**-h**’.
+
+    $ fdupes -h
+
+    Usage: fdupes [options] DIRECTORY...
+ + -r --recurse for every directory given follow subdirectories + encountered within + -R --recurse: for each directory given after this option follow + subdirectories encountered within (note the ':' at + the end of the option, manpage for more details) + -s --symlinks follow symlinks + -H --hardlinks normally, when two or more files point to the same + disk area they are treated as non-duplicates; this + option will change this behavior + -n --noempty exclude zero-length files from consideration + -A --nohidden exclude hidden files from consideration + -f --omitfirst omit the first file in each set of matches + -1 --sameline list each set of matches on a single line + -S --size show size of duplicate files + -m --summarize summarize dupe information + -q --quiet hide progress indicator + -d --delete prompt user for files to preserve and delete all + others; important: under particular circumstances, + data may be lost when using this option together + with -s or --symlinks, or when specifying a + particular directory more than once; refer to the + fdupes documentation for additional information + -N --noprompt together with --delete, preserve the first file in + each set of duplicates and delete the rest without + prompting the user + -v --version display fdupes version + -h --help display this help message + +That’s for all now. Let me know how you were finding and deleting duplicates files till now in Linux? and also tell me your opinion about this utility. Put your valuable feedback in the comment section below and don’t forget to like/share us and help us get spread. + +I am working on another utility called **fslint** to remove duplicate files, will soon post and you people will love to read. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/fdupes-find-and-delete-duplicate-files-in-linux/ + +作者:[Avishek Kumar][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ +[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ \ No newline at end of file From 54f9b2ca351aa6a65fa5896a6896aaf93255c5cc Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 12 Aug 2015 00:35:39 +0800 Subject: [PATCH 187/207] =?UTF-8?q?=E4=BF=AE=E6=94=B9=E6=A0=87=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- published/20150810 For Linux, Supercomputers R Us.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/published/20150810 For Linux, Supercomputers R Us.md index e173d7513c..f86b3694d0 100644 --- a/published/20150810 For Linux, Supercomputers R Us.md +++ b/published/20150810 For Linux, Supercomputers R Us.md @@ -1,4 +1,4 @@ -Linux:称霸超级计算机系统 +有了 Linux,你就可以搭建自己的超级计算机 ================================================================================ > 几乎所有超级计算机上运行的系统都是 Linux,其中包括那些由树莓派(Raspberry Pi)板卡和 PlayStation 3游戏机组成的计算机。 From d3454dc30effe283eeca9dac490e57e860333fe0 Mon Sep 17 00:00:00 2001 From: joeren Date: Wed, 12 Aug 2015 09:29:13 +0800 Subject: [PATCH 188/207] Update 20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md --- ...mndline Tool to Find and Delete Duplicate Files in Linux.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) 
diff --git a/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md b/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md index f89f060c92..1e55090d67 100644 --- a/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md +++ b/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md @@ -1,3 +1,4 @@ +Translating by GOLinux! fdupes – A Comamndline Tool to Find and Delete Duplicate Files in Linux ================================================================================ It is a common requirement to find and replace duplicate files for most of the computer users. Finding and removing duplicate files is a tiresome job that demands time and patience. Finding duplicate files can be very easy if your machine is powered by GNU/Linux, thanks to ‘**fdupes**‘ utility. @@ -262,4 +263,4 @@ via: http://www.tecmint.com/fdupes-find-and-delete-duplicate-files-in-linux/ [a]:http://www.tecmint.com/author/avishek/ [1]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ -[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ \ No newline at end of file +[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ From b161d18382714006ea52e2cf6cb46b3febe0ba92 Mon Sep 17 00:00:00 2001 From: GOLinux Date: Wed, 12 Aug 2015 10:51:29 +0800 Subject: [PATCH 189/207] [Translated]20150811 fdupes--A Commandline Tool to Find and Delete Duplicate Files in Linux.md --- ...ind and Delete Duplicate Files in Linux.md | 69 +++++++++---------- 1 file changed, 33 insertions(+), 36 deletions(-) rename {sources => translated}/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md (67%) diff --git a/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md b/translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md similarity index 67% rename from sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md rename to translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md index 1e55090d67..09f10fb546 100644 --- a/sources/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md +++ b/translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md @@ -1,40 +1,38 @@ -Translating by GOLinux! -fdupes – A Comamndline Tool to Find and Delete Duplicate Files in Linux +fdupes——Linux中查找并删除重复文件的命令行工具 ================================================================================ -It is a common requirement to find and replace duplicate files for most of the computer users. Finding and removing duplicate files is a tiresome job that demands time and patience. Finding duplicate files can be very easy if your machine is powered by GNU/Linux, thanks to ‘**fdupes**‘ utility. +对于大多数计算机用户而言,查找并替换重复的文件是一个常见的需求。查找并移除重复文件真是一项领人不胜其烦的工作,它耗时又耗力。如果你的机器上跑着GNU/Linux,那么查找重复文件会变得十分简单,这多亏了`**fdupes**`工具。 ![Find and Delete Duplicate Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/find-and-delete-duplicate-files-in-linux.png) -Fdupes – Find and Delete Duplicate Files in Linux +Fdupes——在Linux中查找并删除重复文件 -### What is fdupes? ### +### fdupes是啥东东? ### -**Fdupes** is a Linux utility written by **Adrian Lopez** in C programming Language released under MIT License. 
The application is able to find duplicate files in the given set of directories and sub-directories. Fdupes recognize duplicates by comparing MD5 signature of files followed by a byte-to-byte comparison. A lots of options can be passed with Fdupes to list, delete and replace the files with hardlinks to duplicates. +**Fdupes**是Linux下的一个工具,它由**Adrian Lopez**用C编程语言编写并基于MIT许可证发行,该应用程序可以在指定的目录及子目录中查找重复的文件。Fdupes通过对比文件的MD5签名,以及逐字节比较文件来识别重复内容,可以为Fdupes指定大量的选项以实现对文件的列出、删除、替换到文件副本的硬链接等操作。 -The comparison starts in the order: +对比以下列顺序开始: -**size comparison > Partial MD5 Signature Comparison > Full MD5 Signature Comparison > Byte-to-Byte Comparison.** +**大小对比 > 部分 MD5 签名对比 > 完整 MD5 签名对比 > 逐字节对比** -### Install fdupes on a Linux ### +### 安装 fdupes 到 Linux ### -Installation of latest version of fdupes (fdupes version 1.51) as easy as running following command on **Debian** based systems such as **Ubuntu** and **Linux Mint**. +在基于**Debian**的系统上,如**Ubuntu**和**Linux Mint**,安装最新版fdupes,用下面的命令手到擒来。 $ sudo apt-get install fdupes -On CentOS/RHEL and Fedora based systems, you need to turn on [epel repository][1] to install fdupes package. +在基于CentOS/RHEL和Fedora的系统上,你需要开启[epel仓库][1]来安装fdupes包。 # yum install fdupes # dnf install fdupes [On Fedora 22 onwards] -**Note**: The default package manager yum is replaced by dnf from Fedora 22 onwards… +**注意**:自Fedora 22之后,默认的包管理器yum被dnf取代了。 -### How to use fdupes command? ### - -1. For demonstration purpose, let’s a create few duplicate files under a directory (say tecmint) simply as: +### fdupes命令咋个搞? ### +1.作为演示的目的,让我们来在某个目录(比如 tecmint)下创建一些重复文件,命令如下: $ mkdir /home/"$USER"/Desktop/tecmint && cd /home/"$USER"/Desktop/tecmint && for i in {1..15}; do echo "I Love Tecmint. Tecmint is a very nice community of Linux Users." > tecmint${i}.txt ; done -After running above command, let’s verify the duplicates files are created or not using ls [command][2]. +在执行以上命令后,让我们使用ls[命令][2]验证重复文件是否创建。 $ ls -l @@ -55,11 +53,11 @@ After running above command, let’s verify the duplicates files are created or -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint8.txt -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9.txt -The above script create **15** files namely tecmint1.txt, tecmint2.txt…tecmint15.txt and every files contains the same data i.e., +上面的脚本创建了**15**个文件,名称分别为tecmint1.txt,tecmint2.txt……tecmint15.txt,并且每个文件的数据相同,如 "I Love Tecmint. Tecmint is a very nice community of Linux Users." -2. Now search for duplicate files within the folder **tecmint**. +2.现在在**tecmint**文件夹内搜索重复的文件。 $ fdupes /home/$USER/Desktop/tecmint @@ -79,15 +77,15 @@ The above script create **15** files namely tecmint1.txt, tecmint2.txt…tecmint /home/tecmint/Desktop/tecmint/tecmint15.txt /home/tecmint/Desktop/tecmint/tecmint12.txt -3. Search for duplicates recursively under every directory including it’s sub-directories using the **-r** option. +3.使用**-r**选项在每个目录包括其子目录中递归搜索重复文件。 -It search across all the files and folder recursively, depending upon the number of files and folders it will take some time to scan duplicates. In that mean time, you will be presented with the total progress in terminal, something like this. +它会递归搜索所有文件和文件夹,花一点时间来扫描重复文件,时间的长短取决于文件和文件夹的数量。在此其间,终端中会显示全部过程,像下面这样。 $ fdupes -r /home Progress [37780/54747] 69% -4. See the size of duplicates found within a folder using the **-S** option. 
+4.使用**-S**选项来查看某个文件夹内找到的重复文件的大小。 $ fdupes -S /home/$USER/Desktop/tecmint @@ -108,7 +106,7 @@ It search across all the files and folder recursively, depending upon the number /home/tecmint/Desktop/tecmint/tecmint15.txt /home/tecmint/Desktop/tecmint/tecmint12.txt -5. You can see the size of duplicate files for every directory and subdirectories encountered within using the **-S** and **-r** options at the same time, as: +5.你可以同时使用**-S**和**-r**选项来查看所有涉及到的目录和子目录中的重复文件的大小,如下: $ fdupes -Sr /home/avi/Desktop/ @@ -133,11 +131,11 @@ It search across all the files and folder recursively, depending upon the number /home/tecmint/Desktop/resume_files/r-csc.html /home/tecmint/Desktop/resume_files/fc.html -6. Other than searching in one folder or all the folders recursively, you may choose to choose in two folders or three folders as required. Not to mention you can use option **-S** and/or **-r** if required. +6.不同于在一个或所有文件夹内递归搜索,你可以选择按要求有选择性地在两个或三个文件夹内进行搜索。不必再提醒你了吧,如有需要,你可以使用**-S**和/或**-r**选项。 $ fdupes /home/avi/Desktop/ /home/avi/Templates/ -7. To delete the duplicate files while preserving a copy you can use the option ‘**-d**’. Extra care should be taken while using this option else you might end up loosing necessary files/data and mind it the process is unrecoverable. +7.要删除重复文件,同时保留一个副本,你可以使用`**-d**`选项。使用该选项,你必须额外小心,否则最终结果可能会是文件/数据的丢失。郑重提醒,此操作不可恢复。 $ fdupes -d /home/$USER/Desktop/tecmint @@ -159,7 +157,7 @@ It search across all the files and folder recursively, depending upon the number Set 1 of 1, preserve files [1 - 15, all]: -You may notice that all the duplicates are listed and you are prompted to delete, either one by one or certain range or all in one go. You may select a range something like below to delete files files of specific range. +你可能注意到了,所有重复的文件被列了出来,并给出删除提示,一个一个来,或者指定范围,或者一次性全部删除。你可以选择一个范围,就像下面这样,来删除指定范围内的文件。 Set 1 of 1, preserve files [1 - 15, all]: 2-15 @@ -179,15 +177,15 @@ You may notice that all the duplicates are listed and you are prompted to delete [-] /home/tecmint/Desktop/tecmint/tecmint15.txt [-] /home/tecmint/Desktop/tecmint/tecmint12.txt -8. From safety point of view, you may like to print the output of ‘**fdupes**’ to file and then check text file to decide what file to delete. This decrease chances of getting your file deleted accidentally. You may do: +8.从安全角度出发,你可能想要打印`**fdupes**`的输出结果到文件中,然后检查文本文件来决定要删除什么文件。这可以降低意外删除文件的风险。你可以这么做: $ fdupes -Sr /home > /home/fdupes.txt -**Note**: You may replace ‘**/home**’ with the your desired folder. Also use option ‘**-r**’ and ‘**-S**’ if you want to search recursively and Print Size, respectively. +**注意**:你可以替换`**/home**`为你想要的文件夹。同时,如果你想要递归搜索并打印大小,可以使用`**-r**`和`**-S**`选项。 -9. You may omit the first file from each set of matches by using option ‘**-f**’. +9.你可以使用`**-f**`选项来忽略每个匹配集中的首个文件。 -First List files of the directory. +首先列出该目录中的文件。 $ ls -l /home/$USER/Desktop/tecmint @@ -198,7 +196,7 @@ First List files of the directory. -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9 (copy).txt -rw-r--r-- 1 tecmint tecmint 65 Aug 8 11:22 tecmint9.txt -and then omit the first file from each set of matches. +然后,忽略掉每个匹配集中的首个文件。 $ fdupes -f /home/$USER/Desktop/tecmint @@ -207,13 +205,13 @@ and then omit the first file from each set of matches. /home/tecmint/Desktop/tecmint9 (another copy).txt /home/tecmint/Desktop/tecmint9 (4th copy).txt -10. Check installed version of fdupes. +10.检查已安装的fdupes版本。 $ fdupes --version fdupes 1.51 -11. If you need any help on fdupes you may use switch ‘**-h**’. 
+11. 如果你需要关于 fdupes 的帮助,可以使用 **-h** 开关。
 
 	$ fdupes -h
 
@@ -247,16 +245,15 @@ and then omit the first file from each set of matches.
 	-v --version     	display fdupes version
 	-h --help        	display this help message
 
-That’s all for now. Let me know how you have been finding and deleting duplicate files in Linux till now, and also tell me your opinion about this utility. Put your valuable feedback in the comment section below and don’t forget to like/share us and help us spread.
+到此为止了。说说你此前都是怎么在 Linux 中查找并删除重复文件的?同时,也说说你对这个工具的看法。在下面的评论部分中提供你有价值的反馈吧,别忘了为我们点赞并分享,帮助我们扩散哦。
 
-I am working on another utility called **fslint** to remove duplicate files; I will post about it soon, and you will love to read it.
+我正在研究另外一个移除重复文件的工具,它叫 **fslint**。很快就会把使用心得分享给大家哦,你们一定会喜欢看的。
 
 --------------------------------------------------------------------------------
 
 via: http://www.tecmint.com/fdupes-find-and-delete-duplicate-files-in-linux/
 
-作者:[Avishek Kumar][a]
-译者:[译者ID](https://github.com/译者ID)
+作者:[GOLinux](https://github.com/GOLinux)
 校对:[校对者ID](https://github.com/校对者ID)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

From 31c2f1416e45300f44b57be5518b824e926053ef Mon Sep 17 00:00:00 2001
From: wi-cuckoo
Date: Wed, 12 Aug 2015 12:56:35 +0800
Subject: [PATCH 190/207] translated wi-cuckoo

---
 ...ence on RedHat Linux Package Management.md | 349 ------------------
 ...ence on RedHat Linux Package Management.md | 348 +++++++++++++++++
 2 files changed, 348 insertions(+), 349 deletions(-)
 delete mode 100644 sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md
 create mode 100644 translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md

diff --git a/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md b/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md
deleted file mode 100644
index 6243a8c0de..0000000000
--- a/sources/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md
+++ /dev/null
@@ -1,349 +0,0 @@
-translating wi-cuckoo
-Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management
-================================================================================
-**Shilpa Nair has just graduated in the year 2015. She went to apply for a Trainee position at a National News Television channel located in Noida, Delhi. When she was in the last year of graduation, searching for help on her assignments, she came across Tecmint. Since then she has been visiting Tecmint regularly.**
-
-![Linux Interview Questions on RPM](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Interview-Questions-on-RPM.jpeg)
-
-Linux Interview Questions on RPM
-
-All the questions and answers are rewritten based upon the memory of Shilpa Nair.
-
-> “Hi friends! I am Shilpa Nair from Delhi. I have completed my graduation very recently and was hunting for a Trainee role soon after my degree. I have developed a passion for UNIX since my early days in college, and I was looking for a role that suits me and satisfies my soul. I was asked a lot of questions and most of them were basic questions related to RedHat Package Management.”
-
-Here are the questions that I was asked and their corresponding answers. I am posting only those questions that are related to RedHat GNU/Linux Package Management, as they were mainly asked.
-
-### 1. 
How will you find if a package is installed or not? Say you have to find if ‘nano’ is installed or not, what will you do? ###
-
-> **Answer** : To find whether the package nano is installed or not, we can use the rpm command with the options -q (query) and -a (all the installed packages).
->
-> 	# rpm -qa nano
-> 	OR
-> 	# rpm -qa | grep -i nano
-> 
-> 	nano-2.3.1-10.el7.x86_64
->
-> Also, the package name must be complete; an incomplete package name will return the prompt without printing anything, which means that the package (with the incomplete name) is not installed. It can be understood easily by the example below:
->
-> We generally substitute the vim command with vi. But if we query for the package vi/vim, we will get no result on the standard output.
->
-> 	# vi
-> 	# vim
->
-> However, we can clearly see that the package is installed by firing the vi/vim command. The culprit here is the incomplete package name. If we are not sure of the exact file name, we can use a wildcard as:
->
-> 	# rpm -qa vim*
-> 
-> 	vim-minimal-7.4.160-1.el7.x86_64
->
-> This way we can find information about any package, whether installed or not.
-
-### 2. How will you install a package XYZ using rpm? ###
-
-> **Answer** : We can install any package (*.rpm) using the rpm command as shown below, with the options -i (install), -v (verbose or display additional information) and -h (print hash marks during package installation).
->
-> 	# rpm -ivh peazip-1.11-1.el6.rf.x86_64.rpm
->
-> 	Preparing...                          ################################# [100%]
-> 	Updating / installing...
-> 	   1:peazip-1.11-1.el6.rf             ################################# [100%]
->
-> If upgrading a package from an earlier version, the -U switch should be used; options -v and -h follow to make sure we get verbose output along with hash marks, which makes it readable.
-
-### 3. You have installed a package (say httpd) and now you want to see all the files and directories installed and created by the above package. What will you do? ###
-
-> **Answer** : We can list all the files (Linux treats everything as a file, including directories) installed by the package httpd using the options -l (list all the files) and -q (query).
->
-> 	# rpm -ql httpd
->
-> 	/etc/httpd
-> 	/etc/httpd/conf
-> 	/etc/httpd/conf.d
-> 	...
-
-### 4. You are supposed to remove a package, say postfix. What will you do? ###
-
-> **Answer** : First we need to know the full name of the installed postfix package. Then remove it using the options -e (erase/uninstall a package) and -v (verbose output).
->
-> 	# rpm -qa postfix*
->
-> 	postfix-2.10.1-6.el7.x86_64
->
-> and then remove postfix as:
->
-> 	# rpm -ev postfix-2.10.1-6.el7.x86_64
->
-> 	Preparing packages...
-> 	postfix-2:3.0.1-2.fc22.x86_64
-
-### 5. Get detailed information about an installed package, meaning information like Version, Release, Install Date, Size, Summary and a brief description. ###
-
-> **Answer** : We can get detailed information about an installed package by using the option -qi with rpm followed by the package name.
->
-> For example, to find details of the package openssh, all I need to do is:
->
-> 	# rpm -qi openssh
->
-> 	[root@tecmint tecmint]# rpm -qi openssh
-> 	Name        : openssh
-> 	Version     : 6.8p1
-> 	Release     : 5.fc22
-> 	Architecture: x86_64
-> 	Install Date: Thursday 28 May 2015 12:34:50 PM IST
-> 	Group       : Applications/Internet
-> 	Size        : 1542057
-> 	License     : BSD
-> 	....
-
-### 6. You are not sure about the configuration files provided by a specific package, say httpd. How will you find the list of all the configuration files provided by httpd and their locations? 
###
-
-> **Answer** : We need to run the rpm command with the option -c followed by the package name, and it will list the names of all the configuration files and their locations.
->
-> 	# rpm -qc httpd
->
-> 	/etc/httpd/conf.d/autoindex.conf
-> 	/etc/httpd/conf.d/userdir.conf
-> 	/etc/httpd/conf.d/welcome.conf
-> 	/etc/httpd/conf.modules.d/00-base.conf
-> 	/etc/httpd/conf/httpd.conf
-> 	/etc/sysconfig/httpd
->
-> Similarly, we can list all the associated document files as:
->
-> 	# rpm -qd httpd
->
-> 	/usr/share/doc/httpd/ABOUT_APACHE
-> 	/usr/share/doc/httpd/CHANGES
-> 	/usr/share/doc/httpd/LICENSE
-> 	...
->
-> Also, we can list the associated license file as:
->
-> 	# rpm -qL openssh
->
-> 	/usr/share/licenses/openssh/LICENCE
->
-> Not to mention that the options -d and -L in the above commands stand for ‘documents‘ and ‘License‘, respectively.
-
-### 7. You came across a configuration file located at ‘/usr/share/alsa/cards/AACI.conf’ and you are not sure which package this configuration file is associated with. How will you find out the parent package name? ###
-
-> **Answer** : When a package is installed, the relevant information gets stored in the database. So it is easy to trace which package provides the above file, using the option -qf (-f queries the package owning a file).
->
-> 	# rpm -qf /usr/share/alsa/cards/AACI.conf
-> 	alsa-lib-1.0.28-2.el7.x86_64
->
-> Similarly, we can find (who provides) information about any sub-package, document files and license files.
-
-### 8. How will you find a list of recently installed software using rpm? ###
-
-> **Answer** : As said earlier, everything being installed is logged in the database. So it is not difficult to query the rpm database and find the list of recently installed software.
->
-> We can do this by running the below command using the option --last (prints the most recently installed software).
->
-> 	# rpm -qa --last
->
-> The above command will print all the packages installed in an order such that the last installed software appears at the top.
->
-> If our concern is to find out a specific package, we can grep that package (say sqlite) from the list, simply as:
->
-> 	# rpm -qa --last | grep -i sqlite
->
-> 	sqlite-3.8.10.2-1.fc22.x86_64                         Thursday 18 June 2015 05:05:43 PM IST
->
-> We can also get a list of the 10 most recently installed packages simply as:
->
-> 	# rpm -qa --last | head
->
-> We can refine the output further, simply as:
->
-> 	# rpm -qa --last | head -n 2
->
-> In the above command, -n is followed by a numeric value. The command prints a list of the 2 most recently installed packages.
-
-### 9. Before installing a package, you are supposed to check its dependencies. What will you do? ###
-
-> **Answer** : To check the dependencies of an rpm package (XYZ.rpm), we can use the switches -q (query package), -p (query a package file) and -R (Requires / list packages on which this package depends, i.e., dependencies).
->
-> 	# rpm -qpR gedit-3.16.1-1.fc22.i686.rpm
->
-> 	/bin/sh
-> 	/usr/bin/env
-> 	glib2(x86-32) >= 2.40.0
-> 	gsettings-desktop-schemas
-> 	gtk3(x86-32) >= 3.16
-> 	gtksourceview3(x86-32) >= 3.16
-> 	gvfs
-> 	libX11.so.6
-> 	...
-
-### 10. Is rpm a front-end Package Management Tool? ###
-
-> **Answer** : No! rpm is a back-end package manager for RPM-based Linux distributions.
->
-> [YUM][1], which stands for Yellowdog Updater Modified, is the front-end for rpm. YUM automates the overall process of resolving dependencies and everything else.
->
-> Very recently [DNF][2] (Dandified YUM) replaced YUM in Fedora 22. 
Though YUM is still available to be used in RHEL and CentOS, we can install dnf and use it alongside YUM. DNF is said to have a lot of improvements over YUM.
->
-> Good to know that you keep yourself updated. Let’s move on to the front-end part.
-
-### 11. How will you list all the enabled repositories on a system? ###
-
-> **Answer** : We can list all the enabled repos on a system simply using the following commands.
->
-> 	# yum repolist
-> 	or
-> 	# dnf repolist
->
-> 	Last metadata expiration check performed 0:30:03 ago on Mon Jun 22 16:50:00 2015.
-> 	repo id                             repo name                           status
-> 	*fedora                             Fedora 22 - x86_64                  44,762
-> 	ozonos                              Repository for Ozon OS              61
-> 	*updates                            Fedora 22 - x86_64 - Updates
->
-> The above command will only list those repos that are enabled. If we need to list all the repos, enabled or not, we can do:
->
-> 	# yum repolist all
-> 	or
-> 	# dnf repolist all
->
-> 	Last metadata expiration check performed 0:29:45 ago on Mon Jun 22 16:50:00 2015.
-> 	repo id                    repo name                          status
-> 	*fedora                    Fedora 22 - x86_64                 enabled: 44,762
-> 	fedora-debuginfo           Fedora 22 - x86_64 - Debug         disabled
-> 	fedora-source              Fedora 22 - Source                 disabled
-> 	ozonos                     Repository for Ozon OS             enabled: 61
-> 	*updates                   Fedora 22 - x86_64 - Updates       enabled: 5,018
-> 	updates-debuginfo          Fedora 22 - x86_64 - Updates - Debug
-
-### 12. How will you list all the available and installed packages on a system? ###
-
-> **Answer** : To list all the available packages on a system, we can do:
->
-> 	# yum list available
-> 	or
-> 	# dnf list available
->
-> 	Last metadata expiration check performed 0:34:09 ago on Mon Jun 22 16:50:00 2015.
-> 	Available Packages
-> 	0ad.x86_64                     0.0.18-1.fc22                   fedora
-> 	0ad-data.noarch                0.0.18-1.fc22                   fedora
-> 	0install.x86_64                2.6.1-2.fc21                    fedora
-> 	0xFFFF.x86_64                  0.3.9-11.fc22                   fedora
-> 	2048-cli.x86_64                0.9-4.git20141214.723738c.fc22  fedora
-> 	2048-cli-nocurses.x86_64       0.9-4.git20141214.723738c.fc22  fedora
-> 	....
->
-> To list all the installed packages on a system, we can do:
->
-> 	# yum list installed
-> 	or
-> 	# dnf list installed
->
-> 	Last metadata expiration check performed 0:34:30 ago on Mon Jun 22 16:50:00 2015.
-> 	Installed Packages
-> 	GeoIP.x86_64                   1.6.5-1.fc22                    @System
-> 	GeoIP-GeoLite-data.noarch      2015.05-1.fc22                  @System
-> 	NetworkManager.x86_64          1:1.0.2-1.fc22                  @System
-> 	NetworkManager-libnm.x86_64    1:1.0.2-1.fc22                  @System
-> 	aajohan-comfortaa-fonts.noarch 2.004-4.fc22                    @System
-> 	....
->
-> To list all the available and installed packages on a system, we can do:
->
-> 	# yum list
-> 	or
-> 	# dnf list
->
-> 	Last metadata expiration check performed 0:32:56 ago on Mon Jun 22 16:50:00 2015.
-> 	Installed Packages
-> 	GeoIP.x86_64                   1.6.5-1.fc22                    @System
-> 	GeoIP-GeoLite-data.noarch      2015.05-1.fc22                  @System
-> 	NetworkManager.x86_64          1:1.0.2-1.fc22                  @System
-> 	NetworkManager-libnm.x86_64    1:1.0.2-1.fc22                  @System
-> 	aajohan-comfortaa-fonts.noarch 2.004-4.fc22                    @System
-> 	acl.x86_64                     2.2.52-7.fc22                   @System
-> 	....
-
-### 13. How will you install and update a package and a group of packages separately on a system using YUM/DNF? ###
-
-> **Answer** : To install a package (say nano), we can do,
->
-> 	# yum install nano
->
-> To install a group of packages (say Haskell), we can do:
->
-> 	# yum groupinstall 'haskell'
->
-> To update a package (say nano), we can do:
->
-> 	# yum update nano
->
-> To update a group of packages (say Haskell), we can do:
->
-> 	# yum groupupdate 'haskell'
-
-### 14. How will you SYNC all the installed packages on a system to a stable release? 
###
-
-> **Answer** : We can sync all the packages on a system (say CentOS or Fedora) to a stable release as,
->
-> 	# yum distro-sync 			[On CentOS/RHEL]
-> 	or
-> 	# dnf distro-sync			[On Fedora 20 Onwards]
-
-It seems you have done your homework well before coming for the interview. Good! Before proceeding further I just want to ask one more question.
-
-### 15. Are you familiar with YUM local repositories? Have you tried making a Local YUM repository? Let me know in brief what you will do to create a local YUM repo. ###
-
-> **Answer** : First I would like to thank you, Sir, for the appreciation. Coming to the question, I must admit that I am quite familiar with Local YUM repositories and I have already implemented one for testing purposes on my local machine.
->
-> 1. To set up a Local YUM repository, we need to install the below three packages as:
->
-> 	# yum install deltarpm python-deltarpm createrepo
->
-> 2. Create a directory (say /home/$USER/rpm) and copy all the RPMs from the RedHat/CentOS DVD to that folder.
->
-> 	# mkdir /home/$USER/rpm
-> 	# cp /path/to/rpm/on/DVD/*.rpm /home/$USER/rpm
->
-> 3. Create the base repository metadata as:
->
-> 	# createrepo -v /home/$USER/rpm
->
-> 4. Create the .repo file (say abc.repo) at the location /etc/yum.repos.d simply as:
->
-> 	cd /etc/yum.repos.d && cat << EOF > abc.repo
-> 	[local-installation]
-> 	name=yum-local
-> 	baseurl=file:///home/$USER/rpm
-> 	enabled=1
-> 	gpgcheck=0
-> 	EOF
-
-**Important**: Make sure to replace $USER with your user name.
-
-That’s all we need to do to create a Local YUM repository. We can now install applications from here, which is relatively fast, secure and, most importantly, doesn’t need an Internet connection.
-
-Okay! It was nice interviewing you. I am done. I am going to suggest your name to HR. You are a young and brilliant candidate we would like to have in our organization. If you have any questions you may ask me.
-
-**Me**: Sir, it was really a very nice interview and I feel very lucky today, to have cracked the interview.
-
-Obviously it didn’t end here. I asked a lot of questions, like about the project they are handling, what my role and responsibilities would be, and blah..blah..blah
-
-Friends, by the time all this was documented, I had been called for the HR round, which is 3 days from now. Hope I do my best there as well. All your blessings will count.
-
-Thank you, friends, and Tecmint for taking the time to document my experience. Mates, I believe Tecmint is doing something really extraordinary which must be praised. When we share our experiences with others, they get to know many things from us and we get to know our own mistakes.
-
-It enhances our confidence level. If you have given any such interview recently, don’t keep it to yourself. Spread it! Let all of us know about it. You may use the below form to share your experience with us. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/linux-rpm-package-management-interview-questions/ - -作者:[Avishek Kumar][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ -[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/ diff --git a/translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md b/translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md new file mode 100644 index 0000000000..f095a31c65 --- /dev/null +++ b/translated/tech/20150623 Shilpa Nair Shares Her Interview Experience on RedHat Linux Package Management.md @@ -0,0 +1,348 @@ +Shilpa Nair 分享了她面试 RedHat Linux 包管理方面的经验 +======================================================================== +**Shilpa Nair 刚于2015年毕业。她之后去了一家位于 Noida,Delhi 的国家新闻电视台,应聘实习生的岗位。在她去年毕业季的时候,常逛 Tecmint 寻求作业上的帮助。从那时开始,她就常去 Tecmint。** + +![Linux Interview Questions on RPM](http://www.tecmint.com/wp-content/uploads/2015/06/Linux-Interview-Questions-on-RPM.jpeg) + +有关 RPM 方面的 Linux 面试题 + +所有的问题和回答都是 Shilpa Nair 根据回忆重写的。 + +> “大家好!我是来自 Delhi 的Shilpa Nair。我不久前才顺利毕业,正寻找一个实习的机会。在大学早期的时候,我就对 UNIX 十分喜爱,所以我也希望这个机会能适合我,满足我的兴趣。我被提问了很多问题,大部分都是关于 RedHat 包管理的基础问题。” + +下面就是我被问到的问题,和对应的回答。我仅贴出了与 RedHat GNU/Linux 包管理相关的,也是主要被提问的。 + +### 1,里如何查找一个包安装与否?假设你需要确认 ‘nano’ 有没有安装,你怎么做? ### + +> **回答**:为了确认 nano 软件包有没有安装,我们可以使用 rpm 命令,配合 -q 和 -a 选项来查询所有已安装的包 +> +> # rpm -qa nano +> OR +> # rpm -qa | grep -i nano +> +> nano-2.3.1-10.el7.x86_64 +> +> 同时包的名字必须是完成的,不完整的包名返回提示,不打印任何东西,就是说这包(包名字不全)未安装。下面的例子会更好理解些: +> +> 我们通常使用 vim 替代 vi 命令。当时如果我们查找安装包 vi/vim 的时候,我们就会看到标准输出上没有任何结果。 +> +> # vi +> # vim +> +> 尽管如此,我们仍然可以通过使用 vi/vim 命令来清楚地知道包有没有安装。Here is ... name(这句不知道)。如果我们不确切知道完整的文件名,我们可以使用通配符: +> +> # rpm -qa vim* +> +> vim-minimal-7.4.160-1.el7.x86_64 +> +> 通过这种方式,我们可以获得任何软件包的信息,安装与否。 + +### 2. 你如何使用 rpm 命令安装 XYZ 软件包? ### + +> **回答**:我们可以使用 rpm 命令安装任何的软件包(*.rpm),像下面这样,选项 -i(install),-v(冗余或者显示额外的信息)和 -h(打印#号显示进度,在安装过程中)。 +> +> # rpm -ivh peazip-1.11-1.el6.rf.x86_64.rpm +> +> Preparing... ################################# [100%] +> Updating / installing... +> 1:peazip-1.11-1.el6.rf ################################# [100%] +> +> 如果要升级一个早期版本的包,应加上 -U 选项,选项 -v 和 -h 可以确保我们得到用 # 号表示的冗余输出,这增加了可读性。 + +### 3. 你已经安装了一个软件包(假设是 httpd),现在你想看看软件包创建并安装的所有文件和目录,你会怎么做? ### + +> **回答**:使用选项 -l(列出所有文件)和 -q(查询)列出 httpd 软件包安装的所有文件(Linux哲学:所有的都是文件,包括目录)。 +> +> # rpm -ql httpd +> +> /etc/httpd +> /etc/httpd/conf +> /etc/httpd/conf.d +> ... + +### 4. 假如你要移除一个软件包,叫 postfix。你会怎么做? ### + +> **回答**:首先我们需要知道什么包安装了 postfix。查找安装 postfix 的包名后,使用 -e(擦除/卸载软件包)和 -v(冗余输出)两个选项来实现。 +> +> # rpm -qa postfix* +> +> postfix-2.10.1-6.el7.x86_64 +> +> 然后移除 postfix,如下: +> +> # rpm -ev postfix-2.10.1-6.el7.x86_64 +> +> Preparing packages... +> postfix-2:3.0.1-2.fc22.x86_64 + +### 5. 
获得一个已安装包的具体信息,如版本,发行号,安装日期,大小,总结和一个简短的描述。 ###
+
+> **回答**:我们通过使用 rpm 的选项 -qi,后面接包名,可以获得关于一个已安装包的具体信息。
+>
+> 举个例子,为了获得 openssh 包的具体信息,我需要做的就是:
+>
+> 	# rpm -qi openssh
+>
+> 	[root@tecmint tecmint]# rpm -qi openssh
+> 	Name        : openssh
+> 	Version     : 6.8p1
+> 	Release     : 5.fc22
+> 	Architecture: x86_64
+> 	Install Date: Thursday 28 May 2015 12:34:50 PM IST
+> 	Group       : Applications/Internet
+> 	Size        : 1542057
+> 	License     : BSD
+> 	....
+
+### 6. 假如你不确定一个指定包的配置文件在哪,比如 httpd。你如何找到所有 httpd 提供的配置文件列表和位置? ###
+
+> **回答**:我们需要将选项 -c 和包名一起传给 rpm 命令,它会列出所有配置文件的名字和它们的位置。
+>
+> 	# rpm -qc httpd
+>
+> 	/etc/httpd/conf.d/autoindex.conf
+> 	/etc/httpd/conf.d/userdir.conf
+> 	/etc/httpd/conf.d/welcome.conf
+> 	/etc/httpd/conf.modules.d/00-base.conf
+> 	/etc/httpd/conf/httpd.conf
+> 	/etc/sysconfig/httpd
+>
+> 相似地,我们可以列出所有相关的文档文件,如下:
+>
+> 	# rpm -qd httpd
+>
+> 	/usr/share/doc/httpd/ABOUT_APACHE
+> 	/usr/share/doc/httpd/CHANGES
+> 	/usr/share/doc/httpd/LICENSE
+> 	...
+>
+> 我们也可以列出所有相关的许可证文件,如下:
+>
+> 	# rpm -qL openssh
+>
+> 	/usr/share/licenses/openssh/LICENCE
+>
+> 顺便说明一下,上面命令中的选项 -d 和 -L 分别表示“文档”和“许可证”。
+
+### 7. 你进入了一个配置文件,位于‘/usr/share/alsa/cards/AACI.conf’,现在你不确定该文件属于哪个包。你如何查找出包的名字? ###
+
+> **回答**:当一个包被安装后,相关的信息就存储在了数据库里。所以使用选项 -qf(-f 查询拥有该文件的包)很容易追踪是哪个包提供了上述文件。
+>
+> 	# rpm -qf /usr/share/alsa/cards/AACI.conf
+> 	alsa-lib-1.0.28-2.el7.x86_64
+>
+> 类似地,我们可以查找任何子包、文档文件和许可证文件(由哪个包提供)的信息。
+
+### 8. 你如何使用 rpm 查找最近安装的软件列表? ###
+
+> **回答**:如刚刚说的,每一样被安装的文件都记录在了数据库里。所以这并不难,通过查询 rpm 的数据库,找到最近安装软件的列表。
+>
+> 我们通过运行下面的命令,使用选项 --last(打印出最近安装的软件)达到目的。
+>
+> 	# rpm -qa --last
+>
+> 上面的命令会打印出所有安装的软件,最近一次安装的软件在列表的顶部。
+>
+> 如果我们关心的是找出特定的包,我们可以使用 grep 命令从列表中匹配包(假设是 sqlite ),简单如下:
+>
+> 	# rpm -qa --last | grep -i sqlite
+>
+> 	sqlite-3.8.10.2-1.fc22.x86_64                         Thursday 18 June 2015 05:05:43 PM IST
+>
+> 我们也可以获得最近安装的 10 个软件的列表,简单如下:
+>
+> 	# rpm -qa --last | head
+>
+> 我们可以细化一下,输出想要的结果,简单如下:
+>
+> 	# rpm -qa --last | head -n 2
+>
+> 上面的命令中,-n 后面接一个数值。该命令会打印出最近安装的 2 个软件的列表。
+
+### 9. 安装一个包之前,你需要检查其依赖。你会怎么做? ###
+
+> **回答**:检查一个 rpm 包(XYZ.rpm)的依赖,我们可以使用选项 -q(查询包),-p(指定包文件)和 -R(查询/列出该包依赖的包,即依赖)。
+>
+> 	# rpm -qpR gedit-3.16.1-1.fc22.i686.rpm
+>
+> 	/bin/sh
+> 	/usr/bin/env
+> 	glib2(x86-32) >= 2.40.0
+> 	gsettings-desktop-schemas
+> 	gtk3(x86-32) >= 3.16
+> 	gtksourceview3(x86-32) >= 3.16
+> 	gvfs
+> 	libX11.so.6
+> 	...
+
+### 10. rpm 是不是一个前端的包管理工具呢? ###
+
+> **回答**:不是!rpm 是基于 RPM(此处指 RedHat Package Management)的 Linux 发行版的后端包管理工具。
+>
+> [YUM][1],全称 Yellowdog Updater Modified,是 rpm 的前端工具。YUM 命令自动完成所有工作,包括解决依赖和其他一切事务。
+>
+> 最近,[DNF][2](YUM 命令的升级版)在 Fedora 22 发行版中取代了 YUM。尽管 YUM 仍然可以在 RHEL 和 CentOS 平台使用,我们也可以安装 dnf,与 YUM 命令共存使用。据说 DNF 较于 YUM 有很多改进。
+>
+> 知道更多总是好的,保持自我更新。现在我们移步到前端部分来谈谈。
+
+### 11. 你如何列出一个系统上所有已启用的仓库? ###
+
+> **回答**:简单地使用下面的命令,我们就可以列出一个系统上所有已启用的仓库。
+>
+> 	# yum repolist
+> 	或
+> 	# dnf repolist
+>
+> 	Last metadata expiration check performed 0:30:03 ago on Mon Jun 22 16:50:00 2015.
+> 	repo id                             repo name                           status
+> 	*fedora                             Fedora 22 - x86_64                  44,762
+> 	ozonos                              Repository for Ozon OS              61
+> 	*updates                            Fedora 22 - x86_64 - Updates
+>
+> 上面的命令仅会列出已启用的仓库。如果你需要列出所有的仓库,不管启用与否,可以这样做。
+>
+> 	# yum repolist all
+> 	或
+> 	# dnf repolist all
+>
+> 	Last metadata expiration check performed 0:29:45 ago on Mon Jun 22 16:50:00 2015.
+> 	repo id                    repo name                          status
+> 	*fedora                    Fedora 22 - x86_64                 enabled: 44,762
+> 	fedora-debuginfo           Fedora 22 - x86_64 - Debug         disabled
+> 	fedora-source              Fedora 22 - Source                 disabled
+> 	ozonos                     Repository for Ozon OS             enabled: 61
+> 	*updates                   Fedora 22 - x86_64 - Updates       enabled: 5,018
+> 	updates-debuginfo          Fedora 22 - x86_64 - Updates - Debug
+
+### 12. 你如何列出一个系统上所有可用以及已安装的包? 
###
+
+> **回答**:列出一个系统上所有可用的包,我们可以这样做:
+>
+> 	# yum list available
+> 	或
+> 	# dnf list available
+>
+> 	Last metadata expiration check performed 0:34:09 ago on Mon Jun 22 16:50:00 2015.
+> 	Available Packages
+> 	0ad.x86_64                     0.0.18-1.fc22                   fedora
+> 	0ad-data.noarch                0.0.18-1.fc22                   fedora
+> 	0install.x86_64                2.6.1-2.fc21                    fedora
+> 	0xFFFF.x86_64                  0.3.9-11.fc22                   fedora
+> 	2048-cli.x86_64                0.9-4.git20141214.723738c.fc22  fedora
+> 	2048-cli-nocurses.x86_64       0.9-4.git20141214.723738c.fc22  fedora
+> 	....
+>
+> 而列出一个系统上所有已安装的包,我们可以这样做:
+>
+> 	# yum list installed
+> 	或
+> 	# dnf list installed
+>
+> 	Last metadata expiration check performed 0:34:30 ago on Mon Jun 22 16:50:00 2015.
+> 	Installed Packages
+> 	GeoIP.x86_64                   1.6.5-1.fc22                    @System
+> 	GeoIP-GeoLite-data.noarch      2015.05-1.fc22                  @System
+> 	NetworkManager.x86_64          1:1.0.2-1.fc22                  @System
+> 	NetworkManager-libnm.x86_64    1:1.0.2-1.fc22                  @System
+> 	aajohan-comfortaa-fonts.noarch 2.004-4.fc22                    @System
+> 	....
+>
+> 而要同时满足两个要求的时候,我们可以这样做:
+>
+> 	# yum list
+> 	或
+> 	# dnf list
+>
+> 	Last metadata expiration check performed 0:32:56 ago on Mon Jun 22 16:50:00 2015.
+> 	Installed Packages
+> 	GeoIP.x86_64                   1.6.5-1.fc22                    @System
+> 	GeoIP-GeoLite-data.noarch      2015.05-1.fc22                  @System
+> 	NetworkManager.x86_64          1:1.0.2-1.fc22                  @System
+> 	NetworkManager-libnm.x86_64    1:1.0.2-1.fc22                  @System
+> 	aajohan-comfortaa-fonts.noarch 2.004-4.fc22                    @System
+> 	acl.x86_64                     2.2.52-7.fc22                   @System
+> 	....
+
+### 13. 在一个系统上,你会怎么使用 YUM/DNF 分别安装和升级单个包与一组包? ###
+
+> **回答**:安装一个包(假设是 nano),我们可以这样做,
+>
+> 	# yum install nano
+>
+> 而安装一组包(假设是 Haskell),我们可以这样做,
+>
+> 	# yum groupinstall 'haskell'
+>
+> 升级一个包(还是 nano),我们可以这样做,
+>
+> 	# yum update nano
+>
+> 而为了升级一组包(还是 haskell),我们可以这样做,
+>
+> 	# yum groupupdate 'haskell'
+
+### 14. 你会如何将一个系统上安装的所有软件同步到稳定发行版? ###
+
+> **回答**:我们可以将一个系统上(假设是 CentOS 或者 Fedora)的所有包同步到稳定发行版,如下,
+>
+> 	# yum distro-sync           [On CentOS/ RHEL]
+> 	或
+> 	# dnf distro-sync           [On Fedora 20之后版本]
+
+似乎来面试之前你做了相当多的功课,很好!在进一步交谈前,我还想问一两个问题。
+
+### 15. 你对 YUM 本地仓库熟悉吗?你尝试过建立一个本地 YUM 仓库吗?让我们简单看看你会怎么建立一个本地 YUM 仓库。 ###
+
+> **回答**:首先,感谢你的夸奖。回到问题,我必须承认我对本地 YUM 仓库十分熟悉,并且在我的本地主机上也部署过,作为测试用。
+>
+> 1. 为了建立本地 YUM 仓库,我们需要安装下面三个包:
+>
+> 	# yum install deltarpm python-deltarpm createrepo
+>
+> 2. 新建一个目录(假设 /home/$USER/rpm),然后复制 RedHat/CentOS DVD 上的 RPM 包到这个文件夹下。
+>
+> 	# mkdir /home/$USER/rpm
+> 	# cp /path/to/rpm/on/DVD/*.rpm /home/$USER/rpm
+>
+> 3. 如下创建仓库的基本元数据。
+>
+> 	# createrepo -v /home/$USER/rpm
+>
+> 4. 
在路径 /etc/yum.repos.d 下创建一个 .repo 文件(如 abc.repo):
+>
+> 	cd /etc/yum.repos.d && cat << EOF > abc.repo
+> 	[local-installation]
+> 	name=yum-local
+> 	baseurl=file:///home/$USER/rpm
+> 	enabled=1
+> 	gpgcheck=0
+> 	EOF
+
+**重要**:用你的用户名替换掉 $USER。
+
+以上就是创建一个本地 YUM 仓库所要做的全部工作。我们现在可以从这里安装软件了,相对快一些,安全一些,并且最重要的是不需要 Internet 连接。
+
+好了!面试过程很愉快。我已经问完了。我会将你推荐给 HR。你是一个年轻且十分聪明的候选者,我们很愿意你加入进来。如果你有任何问题,你可以问我。
+
+**我**:谢谢,这确实是一次愉快的面试,今天能通过面试,我感到非常幸运。
+
+显然,面试并没有在这里结束。我还问了很多问题,比如他们正在做的项目、我会担任什么角色、负责什么,等等。
+
+小伙伴们,等到这些内容都被写成文档的时候,我已经接到了 3 天后参加 HR 轮面试的通知。希望我到时也能表现出色。感谢你们所有的祝福。
+
+谢谢伙伴们和 Tecmint,花时间来编辑我的面试经历。我相信 Tecmint 好伙伴们做了很大的努力,必须要赞一个。当我们与他人分享我们的经历的时候,其他人从我们这里知道了更多,而我们自己则发现了自己的不足。
+
+这增加了我们的信心。如果你最近也有任何类似的面试经历,别自己藏着。分享出来!让我们所有人都知道。你可以使用如下的格式来与我们分享你的经历。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/linux-rpm-package-management-interview-questions/
+
+作者:[Avishek Kumar][a]
+译者:[wi-cuckoo](https://github.com/wi-cuckoo)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/avishek/
+[1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
+[2]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/

From d00a59eee12db7b2ae81fea8ca39046d75e00f77 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Wed, 12 Aug 2015 16:24:17 +0800
Subject: [PATCH 191/207] =?UTF-8?q?20150812-1=20=E9=80=89=E9=A2=98=20=20RH?=
 =?UTF-8?q?CE=20=E4=B8=93=E9=A2=98=20=E6=96=87=E7=AB=A0=E6=9C=AA=E5=85=A8?=
 =?UTF-8?q?=E9=83=A8=E5=AE=8C=E7=BB=93=EF=BC=8C=E7=9B=AE=E5=89=8D=E5=8F=AA?=
 =?UTF-8?q?=E6=9C=89=E4=B8=89=E7=AF=87?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...o Setup and Test Static Network Routing.md | 227 ++++++++++++++++++
 ...ation and Set Kernel Runtime Parameters.md | 177 ++++++++++++++
 ...m Activity Reports Using Linux Toolsets.md | 182 ++++++++++++++
 3 files changed, 586 insertions(+)
 create mode 100644 sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
 create mode 100644 sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md
 create mode 100644 sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md

diff --git a/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md b/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
new file mode 100644
index 0000000000..03356f9dd1
--- /dev/null
+++ b/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md
+ +![RHCE Exam Preparation Guide](http://www.tecmint.com/wp-content/uploads/2015/07/RHCE-Exam-Series-by-TecMint.jpg) + +RHCE Exam Preparation Guide + +This RHCE (Red Hat Certified Engineer) is a performance-based exam (codename EX300), who possesses the additional skills, knowledge, and abilities required of a senior system administrator responsible for Red Hat Enterprise Linux (RHEL) systems. + +**Important**: [Red Hat Certified System Administrator][1] (RHCSA) certification is required to earn RHCE certification. + +Following are the exam objectives based on the Red Hat Enterprise Linux 7 version of the exam, which will going to cover in this RHCE series: + +- Part 1: How to Setup and Test Static Routing in RHEL 7 +- Part 2: How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters +- Part 3: How to Produce and Deliver System Activity Reports Using Linux Toolsets +- Part 4: Automate System Maintenance Tasks Using Shell Scripts +- Part 5: How to Configure Local and Remote System Logging +- Part 6: How to Configure a Samba Server and a NFS Server +- Part 7: Setting Up Complete SMTP Server for Mailing +- Part 8: Setting Up HTTPS and TLS on RHEL 7 +- Part 9: Setting Up Network Time Protocol +- Part 10: How to Configure a Cache-Only DNS Server + +To view fees and register for an exam in your country, check the [RHCE Certification][2] page. + +In this Part 1 of the RHCE series and the next, we will present basic, yet typical, cases where the principles of static routing, packet filtering, and network address translation come into play. + +![Setup Static Network Routing in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Setup-Static-Network-Routing-in-RHEL-7.jpg) + +RHCE: Setup and Test Network Static Routing – Part 1 + +Please note that we will not cover them in depth, but rather organize these contents in such a way that will be helpful to take the first steps and build from there. + +### Static Routing in Red Hat Enterprise Linux 7 ### + +One of the wonders of modern networking is the vast availability of devices that can connect groups of computers, whether in relatively small numbers and confined to a single room or several machines in the same building, city, country, or across continents. + +However, in order to effectively accomplish this in any situation, network packets need to be routed, or in other words, the path they follow from source to destination must be ruled somehow. + +Static routing is the process of specifying a route for network packets other than the default, which is provided by a network device known as the default gateway. Unless specified otherwise through static routing, network packets are directed to the default gateway; with static routing, other paths are defined based on predefined criteria, such as the packet destination. + +Let us define the following scenario for this tutorial. We have a Red Hat Enterprise Linux 7 box connecting to router #1 [192.168.0.1] to access the Internet and machines in 192.168.0.0/24. + +A second router (router #2) has two network interface cards: enp0s3 is also connected to router #1 to access the Internet and to communicate with the RHEL 7 box and other machines in the same network, whereas the other (enp0s8) is used to grant access to the 10.0.0.0/24 network where internal services reside, such as a web and / or database server. 
This scenario is illustrated in the diagram below:

![Static Routing Network Diagram](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png)

Static Routing Network Diagram

In this article we will focus exclusively on setting up the routing table on our RHEL 7 box to make sure that it can both access the Internet through router #1 and the internal network via router #2.

In RHEL 7, you will use the [ip command][3] to configure and show devices and routing using the command line. These changes can take effect immediately on a running system, but since they are not persistent across reboots, we will use ifcfg-enp0sX and route-enp0sX files inside /etc/sysconfig/network-scripts to save our configuration permanently.

To begin, let’s print our current routing table:

	# ip route show

![Check Routing Table in Linux](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Current-Routing-Table.png)

Check Current Routing Table

From the output above, we can see the following facts:

- The default gateway’s IP address is 192.168.0.1 and can be accessed via the enp0s3 NIC.
- When the system booted up, it enabled the zeroconf route to 169.254.0.0/16 (just in case). In a few words, if a machine is set to obtain an IP address through DHCP but fails to do so for some reason, it is automatically assigned an address in this network. The bottom line is, this route will allow us to communicate, also via enp0s3, with other machines that have failed to obtain an IP address from a DHCP server.
- Last, but not least, we can communicate with other boxes inside the 192.168.0.0/24 network through enp0s3, whose IP address is 192.168.0.18.

These are the typical tasks that you would have to perform in such a setting. Unless specified otherwise, the following tasks should be performed in router #2:

Make sure all NICs have been properly installed:

	# ip link show

If one of them is down, bring it up:

	# ip link set dev enp0s8 up

and assign an IP address in the 10.0.0.0/24 network to it:

	# ip addr add 10.0.0.17 dev enp0s8

Oops! We made a mistake in the IP address. We will have to remove the one we assigned earlier and then add the right one (10.0.0.18):

	# ip addr del 10.0.0.17 dev enp0s8
	# ip addr add 10.0.0.18 dev enp0s8

Now, please note that you can only add a route to a destination network through a gateway that is itself already reachable. 
For that reason, we need to assign an IP address within the 192.168.0.0/24 range to enp0s3 so that our RHEL 7 box can communicate with it:

	# ip addr add 192.168.0.19 dev enp0s3

Finally, we will need to enable packet forwarding:

	# echo "1" > /proc/sys/net/ipv4/ip_forward

and stop / disable (just for the time being – until we cover packet filtering in the next article) the firewall:

	# systemctl stop firewalld
	# systemctl disable firewalld

Back in our RHEL 7 box (192.168.0.18), let’s configure a route to 10.0.0.0/24 through 192.168.0.19 (enp0s3 in router #2):

	# ip route add 10.0.0.0/24 via 192.168.0.19

After that, the routing table looks as follows:

	# ip route show

![Show Network Routing Table](http://www.tecmint.com/wp-content/uploads/2015/07/Show-Network-Routing.png)

Confirm Network Routing Table

Likewise, add the corresponding route in the machine(s) you’re trying to reach in 10.0.0.0/24:

	# ip route add 192.168.0.0/24 via 10.0.0.18

You can test for basic connectivity using ping:

In the RHEL 7 box, run

	# ping -c 4 10.0.0.20

where 10.0.0.20 is the IP address of a web server in the 10.0.0.0/24 network.

In the web server (10.0.0.20), run

	# ping -c 4 192.168.0.18

where 192.168.0.18 is, as you will recall, the IP address of our RHEL 7 machine.

Alternatively, we can use [tcpdump][4] (you may need to install it with yum install tcpdump) to check the 2-way communication over TCP between our RHEL 7 box and the web server at 10.0.0.20.

To do so, let’s start the logging in the first machine with:

	# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20

and from another terminal in the same system let’s telnet to port 80 in the web server (assuming Apache is listening on that port; otherwise, indicate the right port in the following command):

	# telnet 10.0.0.20 80

The tcpdump log should look as follows:

![Check Network Communication between Servers](http://www.tecmint.com/wp-content/uploads/2015/07/Tcpdump-logs.png)

Check Network Communication between Servers

There, the connection has been properly initialized, as we can tell by looking at the 2-way communication between our RHEL 7 box (192.168.0.18) and the web server (10.0.0.20).

Please remember that these changes will go away when you restart the system. If you want to make them persistent, you will need to edit (or create, if they don’t already exist) the following files, in the same systems where we performed the above commands.

Though not strictly necessary for our test case, you should know that /etc/sysconfig/network contains system-wide network parameters. A typical /etc/sysconfig/network looks as follows:

	# Enable networking on this system?
	NETWORKING=yes
	# Hostname. Should match the value in /etc/hostname
	HOSTNAME=yourhostnamehere
	# Default gateway
	GATEWAY=XXX.XXX.XXX.XXX
	# Device used to connect to default gateway. Replace X with the appropriate number.
	GATEWAYDEV=enp0sX

When it comes to setting specific variables and values for each NIC (as we did for router #2), you will have to edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 and /etc/sysconfig/network-scripts/ifcfg-enp0s8.

Following our case,

	TYPE=Ethernet
	BOOTPROTO=static
	IPADDR=192.168.0.19
	NETMASK=255.255.255.0
	GATEWAY=192.168.0.1
	NAME=enp0s3
	ONBOOT=yes

and

	TYPE=Ethernet
	BOOTPROTO=static
	IPADDR=10.0.0.18
	NETMASK=255.255.255.0
	GATEWAY=10.0.0.1
	NAME=enp0s8
	ONBOOT=yes

for enp0s3 and enp0s8, respectively. 
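
As a side note, if you would rather not reboot just to load the ifcfg files shown above, one quick option is to restart the network service instead. This is a sketch only; it assumes the legacy network service that processes the ifcfg-* files is enabled on your system:

	# systemctl restart network
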
As for routing in our client machine (192.168.0.18), we will need to edit /etc/sysconfig/network-scripts/route-enp0s3:

	10.0.0.0/24 via 192.168.0.19 dev enp0s3

Now reboot your system and you should see that route in your table.

### Summary ###

In this article we have covered the essentials of static routing in Red Hat Enterprise Linux 7. Although scenarios may vary, the case presented here illustrates the required principles and the procedures to perform this task. Before wrapping up, I would like to suggest that you take a look at [Chapter 4][5] of the Securing and Optimizing Linux section in The Linux Documentation Project site for further details on the topics covered here.

Free ebook on Securing & Optimizing Linux: The Hacking Solution (v.3.0) – This 800+ page eBook contains a comprehensive collection of Linux security tips and how to use them safely and easily to configure Linux-based applications and services.

![Linux Security and Optimization Book](http://www.tecmint.com/wp-content/uploads/2015/07/Linux-Security-Optimization-Book.gif)

Linux Security and Optimization Book

[Download Now][6]

In the next article we will talk about packet filtering and network address translation to round out the basic networking skills needed for the RHCE certification.

As always, we look forward to hearing from you, so feel free to leave your questions, comments, and suggestions using the form below.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/

作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/
[2]:https://www.redhat.com/en/services/certification/rhce
[3]:http://www.tecmint.com/ip-command-examples/
[4]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/
[5]:http://www.tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/net-manage.html
[6]:http://tecmint.tradepub.com/free/w_opeb01/prgm.cgi
\ No newline at end of file
diff --git a/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md b/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md
new file mode 100644
index 0000000000..8a5f4e6cf4
--- /dev/null
+++ b/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md
+ +![Network Packet Filtering in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Packet-Filtering-in-RHEL.jpg) + +RHCE: Network Packet Filtering – Part 2 + +### Network Packet Filtering in RHEL 7 ### + +When we talk about packet filtering, we refer to a process performed by a firewall in which it reads the header of each data packet that attempts to pass through it. Then, it filters the packet by taking the required action based on rules that have been previously defined by the system administrator. + +As you probably know, beginning with RHEL 7, the default service that manages firewall rules is [firewalld][2]. Like iptables, it talks to the netfilter module in the Linux kernel in order to examine and manipulate network packets. Unlike iptables, updates can take effect immediately without interrupting active connections – you don’t even have to restart the service. + +Another advantage of firewalld is that it allows us to define rules based on pre-configured service names (more on that in a minute). + +In Part 1, we used the following scenario: + +![Static Routing Network Diagram](http://www.tecmint.com/wp-content/uploads/2015/07/Static-Routing-Network-Diagram.png) + +Static Routing Network Diagram + +However, you will recall that we disabled the firewall on router #2 to simplify the example since we had not covered packet filtering yet. Let’s see now how we can enable incoming packets destined for a specific service or port in the destination. + +First, let’s add a permanent rule to allow inbound traffic in enp0s3 (192.168.0.19) to enp0s8 (10.0.0.18): + + # firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT + +The above command will save the rule to /etc/firewalld/direct.xml: + + # cat /etc/firewalld/direct.xml + +![Check Firewalld Saved Rules in CentOS 7](http://www.tecmint.com/wp-content/uploads/2015/07/Check-Firewalld-Save-Rules.png) + +Check Firewalld Saved Rules + +Then enable the rule for it to take effect immediately: + + # firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT + +Now you can telnet to the web server from the RHEL 7 box and run [tcpdump][3] again to monitor the TCP traffic between the two machines, this time with the firewall in router #2 enabled. + + # telnet 10.0.0.20 80 + # tcpdump -qnnvvv -i enp0s3 host 10.0.0.20 + +What if you want to only allow incoming connections to the web server (port 80) from 192.168.0.18 and block connections from other sources in the 192.168.0.0/24 network? + +In the web server’s firewall, add the following rules: + + # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/24" service name="http" accept' + # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/24" service name="http" accept' --permanent + # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' + # firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' --permanent + +Now you can make HTTP requests to the web server, from 192.168.0.18 and from some other machine in 192.168.0.0/24. In the first case the connection should complete successfully, whereas in the second it will eventually timeout. 
To do so, any of the following commands will do the trick:

	# telnet 10.0.0.20 80
	# wget 10.0.0.20

I strongly advise you to check out the [Firewalld Rich Language][4] documentation in the Fedora Project Wiki for further details on rich rules.

### Network Address Translation in RHEL 7 ###

Network Address Translation (NAT) is the process where a group of computers (it can also be just one of them) in a private network are assigned a unique public IP address. As a result, they are still uniquely identified by their own private IP address inside the network, but to the outside they all “seem” the same.

In addition, NAT makes it possible for computers inside a network to send requests to outside resources (like the Internet) and have the corresponding responses sent back to the source system only.

Let’s now consider the following scenario:

![Network Address Translation in RHEL](http://www.tecmint.com/wp-content/uploads/2015/07/Network-Address-Translation-Diagram.png)

Network Address Translation

In router #2, we will move the enp0s3 interface to the external zone, and enp0s8 to the internal zone, where masquerading, or NAT, is enabled by default:

	# firewall-cmd --list-all --zone=external
	# firewall-cmd --change-interface=enp0s3 --zone=external
	# firewall-cmd --change-interface=enp0s3 --zone=external --permanent
	# firewall-cmd --change-interface=enp0s8 --zone=internal
	# firewall-cmd --change-interface=enp0s8 --zone=internal --permanent

For our current setup, the internal zone – along with everything that is enabled in it – will be the default zone:

	# firewall-cmd --set-default-zone=internal

Next, let’s reload firewall rules and keep state information:

	# firewall-cmd --reload

Finally, let’s add router #2 as the default gateway in the web server:

	# ip route add default via 10.0.0.18

You can now verify that you can ping router #1 and an external site (tecmint.com, for example) from the web server:

	# ping -c 2 192.168.0.1
	# ping -c 2 tecmint.com

![Verify Network Routing](http://www.tecmint.com/wp-content/uploads/2015/07/Verify-Network-Routing.png)

Verify Network Routing

### Setting Kernel Runtime Parameters in RHEL 7 ###

In Linux, you are allowed to change, enable, and disable the kernel runtime parameters, and RHEL is no exception. The /proc/sys interface (sysctl) lets you set runtime parameters on-the-fly to modify the system’s behavior without much hassle when operating conditions change.

To do so, the echo shell built-in is used to write to files inside /proc/sys/<category>, where <category> is most likely one of the following directories:

- dev: parameters for specific devices connected to the machine.
- fs: filesystem configuration (quotas and inodes, for example).
- kernel: kernel-specific configuration.
- net: network configuration.
- vm: use of the kernel’s virtual memory.

To display the list of all the currently available values, run

	# sysctl -a | less

In Part 1, we changed the value of the net.ipv4.ip_forward parameter by doing

	# echo 1 > /proc/sys/net/ipv4/ip_forward

in order to allow a Linux machine to act as a router. 
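
As a side note, writing a value into /proc/sys with echo is equivalent to using the -w switch of the sysctl tool itself; both forms change only the running kernel, and neither survives a reboot. For example, the same change could have been made with:

	# sysctl -w net.ipv4.ip_forward=1
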
Another runtime parameter that you may want to set is kernel.sysrq, which enables the Sysrq key on your keyboard to instruct the system to gracefully perform some low-level functions, such as rebooting the system if it has frozen for some reason:

	# echo 1 > /proc/sys/kernel/sysrq

To display the value of a specific parameter, use sysctl as follows:

	# sysctl <parameter.name>

For example,

	# sysctl net.ipv4.ip_forward
	# sysctl kernel.sysrq

Some parameters, such as the ones mentioned above, require only one value, whereas others (for example, fs.inode-state) require multiple values:

![Check Kernel Parameters in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Kernel-Parameters.png)

Check Kernel Parameters

In either case, you need to read the kernel’s documentation before making any changes.

Please note that these settings will go away when the system is rebooted. To make these changes permanent, we will need to add .conf files inside /etc/sysctl.d as follows:

	# echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/10-forward.conf

(where the number 10 indicates the order of processing relative to other files in the same directory).

and enable the changes with

	# sysctl -p /etc/sysctl.d/10-forward.conf

### Summary ###

In this tutorial we have explained the basics of packet filtering, network address translation, and setting kernel runtime parameters on a running system and persistently across reboots. I hope you have found this information useful, and as always, we look forward to hearing from you!
Don’t hesitate to share with us your questions, comments, or suggestions using the form below.

--------------------------------------------------------------------------------

via: http://www.tecmint.com/perform-packet-filtering-network-address-translation-and-set-kernel-runtime-parameters-in-rhel/

作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://www.tecmint.com/author/gacanepa/
[1]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/
[2]:http://www.tecmint.com/firewalld-rules-for-centos-7/
[3]:http://www.tecmint.com/12-tcpdump-commands-a-network-sniffer-tool/
[4]:https://fedoraproject.org/wiki/Features/FirewalldRichLanguage
\ No newline at end of file
diff --git a/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md b/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md
new file mode 100644
index 0000000000..34693ea6bf
--- /dev/null
+++ b/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md
+ +![Monitor Linux Performance Activity Reports](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Performance-Activity-Reports.jpg) + +RHCE: Monitor Linux Performance Activity Reports – Part 3 + +Besides the well-known native Linux tools that are used to check disk, memory, and CPU usage – to name a few examples, Red Hat Enterprise Linux 7 provides two additional toolsets to enhance the data you can collect for your reports: sysstat and dstat. + +In this article we will describe both, but let’s first start by reviewing the usage of the classic tools. + +### Native Linux Tools ### + +With df, you will be able to report disk space and inode usage of by filesystem. You need to monitor both because a lack of space will prevent you from being able to save further files (and may even cause the system to crash), just like running out of inodes will mean you can’t link further files with their corresponding data structures, thus producing the same effect: you won’t be able to save those files to disk. + + # df -h [Display output in human-readable form] + # df -h --total [Produce a grand total] + +![Check Linux Total Disk Usage](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-Disk-Usage.png) + +Check Linux Total Disk Usage + + # df -i [Show inode count by filesystem] + # df -i --total [Produce a grand total] + +![Check Linux Total inode Numbers](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Total-inode-Numbers.png) + +Check Linux Total inode Numbers + +With du, you can estimate file space usage by either file, directory, or filesystem. + +For example, let’s see how much space is used by the /home directory, which includes all of the user’s personal files. The first command will return the overall space currently used by the entire /home directory, whereas the second will also display a disaggregated list by sub-directory as well: + + # du -sch /home + # du -sch /home/* + +![Check Linux Directory Disk Size](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Directory-Disk-Size.png) + +Check Linux Directory Disk Size + +Don’t Miss: + +- [12 ‘df’ Command Examples to Check Linux Disk Space Usage][1] +- [10 ‘du’ Command Examples to Find Disk Usage of Files/Directories][2] + +Another utility that can’t be missing from your toolset is vmstat. It will allow you to see at a quick glance information about processes, CPU and memory usage, disk activity, and more. + +If run without arguments, vmstat will return averages since the last reboot. While you may use this form of the command once in a while, it will be more helpful to take a certain amount of system utilization samples, one after another, with a defined time separation between samples. + +For example, + + # vmstat 5 10 + +will return 10 samples taken every 5 seconds: + +![Check Linux System Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-Systerm-Performance.png) + +Check Linux System Performance + +As you can see in the above picture, the output of vmstat is divided by columns: procs (processes), memory, swap, io, system, and cpu. The meaning of each field can be found in the FIELD DESCRIPTION sections in the man page of vmstat. + +Where can vmstat come in handy? 
Let’s examine the behavior of the system before and during a yum update:

	# vmstat -a 1 5

![Vmstat Linux Performance Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/Vmstat-Linux-Peformance-Monitoring.png)

Vmstat Linux Performance Monitoring

Please note that as files are being modified on disk, the amount of active memory increases, and so does the number of blocks written to disk (bo) and the CPU time that is dedicated to user processes (us).

Or during the process of saving a large file directly to disk (caused by dsync):

	# vmstat -a 1 5
	# dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync

![VmStat Linux Disk Performance Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/VmStat-Linux-Disk-Performance-Monitoring.png)

VmStat Linux Disk Performance Monitoring

In this case, we can see an even larger number of blocks being written to disk (bo), which was to be expected, but also an increase in the amount of CPU time that the system has to wait for I/O operations to complete before processing tasks (wa).

**Don’t Miss**: [Vmstat – Linux Performance Monitoring][3]

### Other Linux Tools ###

As mentioned in the introduction of this chapter, there are other tools that you can use to check the system status and utilization (they are not only provided by Red Hat but also by other major distributions from their officially supported repositories).

The sysstat package contains the following utilities:

- sar (collect, report, or save system activity information).
- sadf (display data collected by sar in multiple formats).
- mpstat (report processor-related statistics).
- iostat (report CPU statistics and I/O statistics for devices and partitions).
- pidstat (report statistics for Linux tasks).
- nfsiostat (report input/output statistics for NFS).
- cifsiostat (report CIFS statistics).
- sa1 (collect and store binary data in the system activity daily data file).
- sa2 (write a daily report in the /var/log/sa directory).

The dstat tool adds some extra features to the functionality provided by those tools, along with more counters and flexibility. You can find an overall description of each tool by running yum info sysstat or yum info dstat, respectively, or checking the individual man pages after installation.

To install both packages:

	# yum update && yum install sysstat dstat

The main configuration file for sysstat is /etc/sysconfig/sysstat. You will find the following parameters in that file:

	# How long to keep log files (in days).
	# If value is greater than 28, then log files are kept in
	# multiple directories, one for each month.
	HISTORY=28
	# Compress (using gzip or bzip2) sa and sar files older than (in days):
	COMPRESSAFTER=31
	# Parameters for the system activity data collector (see sadc manual page)
	# which are used for the generation of log files.
	SADC_OPTIONS="-S DISK"
	# Compression program to use.
	ZIP="bzip2"

When sysstat is installed, two cron jobs are added and enabled in /etc/cron.d/sysstat. The first job runs the system activity accounting tool every 10 minutes and stores the reports in /var/log/sa/saXX, where XX is the day of the month.

Thus, /var/log/sa/sa05 will contain all the system activity reports from the 5th of the month. 
+
+The second job generates a daily summary of process accounting at 11:53 pm every day and stores it in /var/log/sa/sarXX files, where XX has the same meaning as in the previous example:
+
+    53 23 * * * root /usr/lib64/sa/sa2 -A
+
+For example, you may want to output system statistics from 9:30 am through 5:30 pm of the sixth of the month to a .csv file that can easily be viewed using LibreOffice Calc or Microsoft Excel (this approach will also allow you to create charts or graphs):
+
+    # sadf -s 09:30:00 -e 17:30:00 -dh /var/log/sa/sa06 -- | sed 's/;/,/g' > system_stats20150806.csv
+
+You could alternatively use the -j flag instead of -d in the sadf command above to output the system stats in JSON format, which could be useful if you need to consume the data in a web application, for example.
+
+![Linux System Statistics](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-System-Statistics.png)
+
+Linux System Statistics
+
+Finally, let’s see what dstat has to offer. Please note that if run without arguments, dstat assumes -cdngy by default (short for CPU, disk, network, memory pages, and system stats, respectively), and adds one line every second (execution can be interrupted anytime with Ctrl + C):
+
+    # dstat
+
+![Linux Disk Statistics Monitoring](http://www.tecmint.com/wp-content/uploads/2015/08/dstat-command.png)
+
+Linux Disk Statistics Monitoring
+
+To output the stats to a .csv file, use the --output flag followed by a file name.
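+
+For example, here is a quick sketch (the file name is arbitrary) that takes 30 samples, one every 2 seconds, and appends each sample to the CSV file while still printing to the terminal:
+
+    # dstat --output /tmp/dstat_report.csv 2 30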
+Let’s see how this looks on LibreOffice Calc:
+
+![Monitor Linux Statistics Output](http://www.tecmint.com/wp-content/uploads/2015/08/Monitor-Linux-Statistics-Output.png)
+
+Monitor Linux Statistics Output
+
+I strongly advise you to check out the man page of dstat, included with this article along with the man page of sysstat in PDF format for your reading convenience. You will find several other options that will help you create custom and detailed system activity reports.
+
+**Don’t Miss**: [Sysstat – Linux Usage Activity Monitoring Tool][4]
+
+### Summary ###
+
+In this guide we have explained how to use both native Linux tools and specific utilities provided with RHEL 7 in order to produce reports on system utilization. At one point or another, you will come to rely on these reports as best friends.
+
+You will probably have used other tools that we have not covered in this tutorial. If so, feel free to share them with the rest of the community, along with any other suggestions / questions / comments that you may have, using the form below.
+
+We look forward to hearing from you.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/
+
+作者:[Gabriel Cánepa][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:http://www.tecmint.com/how-to-check-disk-space-in-linux/
+[2]:http://www.tecmint.com/check-linux-disk-usage-of-files-and-directories/
+[3]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
+[4]:http://www.tecmint.com/install-sysstat-in-linux/
\ No newline at end of file
From eaecf395ae651fbf78bef16e9acc27cea0e636c0 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Wed, 12 Aug 2015 16:39:28 +0800
Subject: [PATCH 192/207] =?UTF-8?q?20150812-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...edule a Job and Watch Commands in Linux.md | 143 ++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md

diff --git a/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md b/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md
new file mode 100644
index 0000000000..1ad92c594b
--- /dev/null
+++ b/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md
@@ -0,0 +1,143 @@
+Linux Tricks: Play Game in Chrome, Text-to-Speech, Schedule a Job and Watch Commands in Linux
+================================================================================
+Here again, I have compiled a list of four things under the [Linux Tips and Tricks][1] series that you can do to remain more productive and entertained in a Linux environment.
+
+![Linux Tips and Tricks Series](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-Tips-and-Tricks.png)
+
+Linux Tips and Tricks Series
+
+The topics I have covered include Google Chrome’s built-in little game, text-to-speech from the Linux terminal, quick job scheduling using the ‘at‘ command, and watching a command at a regular interval.
+
+### 1. Play A Game in Google Chrome Browser ###
+
+Very often, when there is a power outage, or no network for some other reason, I don’t put my Linux box into maintenance mode. I keep myself engaged with the little game built into Google Chrome. I am not a gamer, and hence I have not installed any creepy third-party games. Security is another concern.
+
+So when there is an Internet-related issue and my web page looks something like this:
+
+![Unable to Connect Internet](http://www.tecmint.com/wp-content/uploads/2015/08/Unable-to-Connect-Internet.png)
+
+Unable to Connect Internet
+
+You may play this built-in game simply by hitting the space-bar. There is no limit on the number of times you can play. The best thing is that you need not break a sweat installing or using it.
+
+No third-party application/plugin is required. It should work just as well on other platforms like Windows and Mac, but our niche is Linux, so I’ll talk about Linux only; and mind it, it works well on Linux. It is a very simple game (a kind of time pass).
+
+Use the Space-Bar/Navigation-up-key to jump. A glimpse of the game in action:
+
+![Play Game in Google Chrome](http://www.tecmint.com/wp-content/uploads/2015/08/Play-Game-in-Google-Chrome.gif)
+
+Play Game in Google Chrome
+
+### 2. Text to Speech in Linux Terminal ###
+
+For those who may not be aware of the espeak utility, it is a Linux command-line text-to-speech converter. Write anything in a variety of languages and espeak will read it aloud for you.
+
+espeak should be installed on your system by default; however, if it is not installed on your system, you may do:
+
+    # apt-get install espeak (Debian)
+    # yum install espeak (CentOS)
+    # dnf install espeak (Fedora 22 onwards)
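+
+Once installed, a quick way to confirm that it works is to hand espeak a sentence directly on the command line (any text will do):
+
+    $ espeak "Hello from the Linux terminal"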
+You may ask espeak to accept input interactively from the standard input device and convert it to speech for you. You may do:
+
+    $ espeak [Hit Return Key]
+
+For detailed output you may do:
+
+    $ espeak --stdout | aplay [Hit Return Key twice]
+
+espeak is flexible, and you can also ask espeak to read a text file aloud for you (the -f switch tells espeak to take its input from a file). All you need to do is:
+
+    $ espeak -f /path/to/text/file/file_name.txt --stdout | aplay [Hit Enter]
+
+You may ask espeak to speak fast or slow for you. The default speed is 160 words per minute. Define your preference using the switch ‘-s’.
+
+To ask espeak to speak 30 words per minute, you may do:
+
+    $ espeak -s 30 -f /path/to/text/file/file_name.txt | aplay
+
+To ask espeak to speak 200 words per minute, you may do:
+
+    $ espeak -s 200 -f /path/to/text/file/file_name.txt | aplay
+
+To use another language, say Hindi (my mother tongue), you may do:
+
+    $ espeak -v hindi --stdout 'टेकमिंट विश्व की एक बेहतरीन लाइंक्स आधारित वेबसाइट है|' | aplay
+
+You may choose any language of your preference and ask espeak to speak in it as suggested above. To get the list of all the languages supported by espeak, run:
+
+    $ espeak --voices
+
+### 3. Quickly Schedule a Job ###
+
+Most of us are already familiar with [cron][2], which is a daemon to execute scheduled commands.
+
+Cron is an advanced command often used by Linux sysadmins to schedule a job, such as a backup, or practically anything else, at a certain time/interval.
+
+Are you aware of the ‘at’ command in Linux, which lets you schedule a job/command to run at a specific time? You can tell ‘at’ what to do and when to do it, and everything else will be taken care of by the ‘at’ command.
+
+For example, say you want to print the output of the uptime command at 11:02 AM. All you need to do is:
+
+    $ at 11:02
+    uptime >> /home/$USER/uptime.txt
+    Ctrl+D
+
+![Schedule Job in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Schedule-Job-in-Linux.png)
+
+Schedule Job in Linux
+
+To check whether the command/script/job has been set by the ‘at’ command, you may do:
+
+    $ at -l
+
+![View Scheduled Jobs](http://www.tecmint.com/wp-content/uploads/2015/08/View-Scheduled-Jobs.png)
+
+View Scheduled Jobs
+
+You may schedule more than one command in one go using at, simply as:
+
+    $ at 12:30
+    Command – 1
+    Command – 2
+    …
+    command – 50
+    …
+    Ctrl + D
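+
+You can also feed ‘at’ a job non-interactively through a pipe, and remove a pending job with atrm (the job number 3 below is just an example; use the number shown by at -l):
+
+    $ echo "uptime >> /home/$USER/uptime.txt" | at 11:02
+    $ atrm 3 [Remove job number 3 from the queue]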
+### 4. Watch a Command at Specific Interval ###
+
+Sometimes we need to run a command repeatedly, at a regular interval. Just for example, say we need to print the current time and watch the output every 3 seconds.
+
+To see the current time, we need to run the below command in the terminal:
+
+    $ date +"%H:%M:%S"
+
+![Check Date and Time in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Date-in-Linux.png)
+
+Check Date and Time in Linux
+
+And to check the output of this command every three seconds, we need to run the below command in the terminal:
+
+    $ watch -n 3 'date +"%H:%M:%S"'
+
+![Watch Command in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Watch-Command-in-Linux.gif)
+
+Watch Command in Linux
+
+The ‘-n’ switch in the watch command sets the interval. In the above example we defined the interval to be 3 seconds. You may define yours as required. You may also pass any command/script to watch, to run that command/script at the defined interval.
+
+That’s all for now. Hope you liked this series, which aims at making you more productive with Linux, with some fun inside. All suggestions are welcome in the comments below. Stay tuned for more such posts. Keep connected and enjoy…
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/text-to-speech-in-terminal-schedule-a-job-and-watch-commands-in-linux/
+
+作者:[Avishek Kumar][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/avishek/
+[1]:http://www.tecmint.com/tag/linux-tricks/
+[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
\ No newline at end of file
From 4ea6ca5ad27848c8bed914a6ffa5ea4ee08016d5 Mon Sep 17 00:00:00 2001
From: Chang Liu
Date: Wed, 12 Aug 2015 19:57:08 +0800
Subject: [PATCH 193/207] [Translated]20150811 How to download apk files from
 Google Play Store on Linux.md

---
 ...k files from Google Play Store on Linux.md | 101 ------------------
 ...k files from Google Play Store on Linux.md |  99 +++++++++++++++++
 2 files changed, 99 insertions(+), 101 deletions(-)
 delete mode 100644 sources/tech/20150811 How to download apk files from Google Play Store on Linux.md
 create mode 100644 translated/tech/20150811 How to download apk files from Google Play Store on Linux.md

diff --git a/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md b/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md
deleted file mode 100644
index 50bf618e86..0000000000
--- a/sources/tech/20150811 How to download apk files from Google Play Store on Linux.md
+++ /dev/null
@@ -1,101 +0,0 @@
-FSSlc translating
-
-How to download apk files from Google Play Store on Linux
-================================================================================
-Suppose you want to install an Android app on your Android device. However, for whatever reason, you cannot access Google Play Store on the Android device. What can you do then? One way to install the app without Google Play Store access is to download its APK file using some other means, and then [install the APK][1] file on the Android device manually.
-
-There are several ways to download official APK files from Google Play Store on non-Android devices such as regular computers and laptops. For example, there are browser plugins (e.g., for [Chrome][2] or [Firefox][3]) or online APK archives that allow you to download APK files using a web browser. If you do not trust these closed-source plugins or third-party APK repositories, there is yet another way to download official APK files manually, and that is via an open-source Linux app called [GooglePlayDownloader][4].
-
-GooglePlayDownloader is a Python-based GUI application that enables you to search and download APK files from Google Play Store. Since this is completely open-source, you can be assured while using it.
In this tutorial, I am going to show how to download an APK file from Google Play Store using GooglePlayDownloader in Linux environment. - -### Python requirement ### - -GooglePlayDownloader requires Python with SNI (Server Name Indication) support for SSL/TLS communication. This feature comes with Python 2.7.9 or higher. This leaves out older distributions such as Debian 7 Wheezy or earlier, Ubuntu 14.04 or earlier, or CentOS/RHEL 7 or earlier. Assuming that you have a Linux distribution with Python 2.7.9 or higher, proceed to install GooglePlayDownloader as follows. - -### Install GooglePlayDownloader on Ubuntu ### - -On Ubuntu, you can use the official deb build. One catch is that you may need to install one required dependency manually. - -#### On Ubuntu 14.10 #### - -Download [python-ndg-httpsclient][5] deb package, which is a missing dependency on older Ubuntu distributions. Also download GooglePlayDownloader's official deb package. - - $ wget http://mirrors.kernel.org/ubuntu/pool/main/n/ndg-httpsclient/python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb - $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb - -We are going to use [gdebi command][6] to install those two deb files as follows. The gdebi command will automatically handle any other dependencies. - - $ sudo apt-get install gdebi-core - $ sudo gdebi python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb - $ sudo gdebi googleplaydownloader_1.7-1_all.deb - -#### On Ubuntu 15.04 or later #### - -Recent Ubuntu distributions ship all required dependencies, and thus the installation is straightforward as follows. - - $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb - $ sudo apt-get install gdebi-core - $ sudo gdebi googleplaydownloader_1.7-1_all.deb - -### Install GooglePlayDownloader on Debian ### - -Due to its Python requirement, GooglePlayDownloader cannot be installed on Debian 7 Wheezy or earlier unless you upgrade its stock Python. - -#### On Debian 8 Jessie and higher: #### - - $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb - $ sudo apt-get install gdebi-core - $ sudo gdebi googleplaydownloader_1.7-1_all.deb - -### Install GooglePlayDownloader on Fedora ### - -Since GooglePlayDownloader was originally developed for Debian based distributions, you need to install it from the source if you want to use it on Fedora. - -First, install necessary dependencies. - - $ sudo yum install python-pyasn1 wxPython python-ndg_httpsclient protobuf-python python-requests - -Then install it as follows. - - $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7.orig.tar.gz - $ tar -xvf googleplaydownloader_1.7.orig.tar.gz - $ cd googleplaydownloader-1.7 - $ chmod o+r -R . - $ sudo python setup.py install - $ sudo sh -c "echo 'python /usr/lib/python2.7/site-packages/googleplaydownloader-1.7-py2.7.egg/googleplaydownloader/googleplaydownloader.py' > /usr/bin/googleplaydownloader" - -### Download APK Files from Google Play Store with GooglePlayDownloader ### - -Once you installed GooglePlayDownloader, you can download APK files from Google Play Store as follows. - -First launch the app by typing: - - $ googleplaydownloader - -![](https://farm1.staticflickr.com/425/20229024898_105396fa68_b.jpg) - -At the search bar, type the name of the app you want to download from Google Play Store. 
-
-![](https://farm1.staticflickr.com/503/20230360479_925f5da613_b.jpg)
-
-Once you find the app in the search list, choose the app, and click on "Download selected APK(s)" button. You will find the downloaded APK file in your home directory. Now you can move the APK file to the Android device of your choice, and install it manually.
-
-Hope this helps.
-
---------------------------------------------------------------------------------
-
-via: http://xmodulo.com/download-apk-files-google-play-store.html
-
-作者:[Dan Nanni][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://xmodulo.com/author/nanni
-[1]:http://xmodulo.com/how-to-install-apk-file-on-android-phone-or-tablet.html
-[2]:https://chrome.google.com/webstore/detail/apk-downloader/cgihflhdpokeobcfimliamffejfnmfii
-[3]:https://addons.mozilla.org/en-us/firefox/addon/apk-downloader/
-[4]:http://codingteam.net/project/googleplaydownloader
-[5]:http://packages.ubuntu.com/vivid/python-ndg-httpsclient
-[6]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html
diff --git a/translated/tech/20150811 How to download apk files from Google Play Store on Linux.md b/translated/tech/20150811 How to download apk files from Google Play Store on Linux.md
new file mode 100644
index 0000000000..670c0f331b
--- /dev/null
+++ b/translated/tech/20150811 How to download apk files from Google Play Store on Linux.md
@@ -0,0 +1,99 @@
+如何在 Linux 中从 Google Play 商店里下载 apk 文件
+================================================================================
+假设你想在你的 Android 设备中安装一个 Android 应用,然而由于某些原因,你不能在 Android 设备上访问 Google Play 商店。接着你该怎么做呢?在不访问 Google Play 商店的前提下安装应用的一种可能的方法是使用其他的手段下载该应用的 APK 文件,然后手动地在 Android 设备上 [安装 APK 文件][1]。
+
+在非 Android 设备如常规的电脑和笔记本电脑上,有着几种方式来从 Google Play 商店下载到官方的 APK 文件。例如,使用浏览器插件(例如,针对 [Chrome][2] 或针对 [Firefox][3] 的插件)或利用允许你使用浏览器下载 APK 文件的在线的 APK 存档等。假如你不信任这些闭源的插件或第三方的 APK 仓库,这里有另一种手动下载官方 APK 文件的方法,它使用一个名为 [GooglePlayDownloader][4] 的开源 Linux 应用。
+
+GooglePlayDownloader 是一个基于 Python 的 GUI 应用,使得你可以从 Google Play 商店上搜索和下载 APK 文件。由于它是完全开源的,你可以放心地使用它。在本篇教程中,我将展示如何在 Linux 环境下,使用 GooglePlayDownloader 来从 Google Play 商店下载 APK 文件。
+
+### Python 需求 ###
+
+GooglePlayDownloader 需要使用 Python 中 SSL 模块的扩展 SNI(服务器名称指示)来支持 SSL/TLS 通信,该功能由 Python 2.7.9 或更高版本带来。这使得一些旧的发行版本如 Debian 7 Wheezy 及早期版本、Ubuntu 14.04 及早期版本或 CentOS/RHEL 7 及早期版本均不能满足该要求。假设你已经有了一个带有 Python 2.7.9 或更高版本的发行版本,可以像下面这样接着安装 GooglePlayDownloader。
+
+### 在 Ubuntu 上安装 GooglePlayDownloader ###
+
+在 Ubuntu 上,你可以使用官方构建的 deb 包。有一个条件是你可能需要手动地安装一个必需的依赖。
+
+#### 在 Ubuntu 14.10 上 ####
+
+下载 [python-ndg-httpsclient][5] deb 软件包,这在旧一点的 Ubuntu 发行版本中是一个缺失的依赖。同时还要下载 GooglePlayDownloader 的官方 deb 软件包。
+
+    $ wget http://mirrors.kernel.org/ubuntu/pool/main/n/ndg-httpsclient/python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb
+    $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb
+
+如下所示,我们将使用 [gdebi 命令][6] 来安装这两个 deb 文件。gdebi 命令将自动地处理任何其他的依赖。
+
+    $ sudo apt-get install gdebi-core
+    $ sudo gdebi python-ndg-httpsclient_0.3.2-1ubuntu4_all.deb
+    $ sudo gdebi googleplaydownloader_1.7-1_all.deb
+
+#### 在 Ubuntu 15.04 或更新的版本上 ####
+
+最近的 Ubuntu 发行版本上已经配备了所有需要的依赖,所以安装过程可以如下面那样直接进行。
+
+    $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb
+    $ sudo apt-get install gdebi-core
+    $ sudo gdebi googleplaydownloader_1.7-1_all.deb
+
+### 在 Debian 上安装 GooglePlayDownloader ###
+
+由于其
Python 需求, Googleplaydownloader 不能被安装到 Debian 7 Wheezy 或早期版本上,除非你升级了它自备的 Python 版本。 + +#### 在 Debian 8 Jessie 及更高版本上: #### + + $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7-1_all.deb + $ sudo apt-get install gdebi-core + $ sudo gdebi googleplaydownloader_1.7-1_all.deb + +### 在 Fedora 上安装 GooglePlayDownloader ### + +由于 GooglePlayDownloader 原本是针对基于 Debian 的发行版本所开发的,假如你想在 Fedora 上使用它,你需要从它的源码开始安装。 + +首先安装必需的依赖。 + + $ sudo yum install python-pyasn1 wxPython python-ndg_httpsclient protobuf-python python-requests + +然后像下面这样安装它。 + + $ wget http://codingteam.net/project/googleplaydownloader/download/file/googleplaydownloader_1.7.orig.tar.gz + $ tar -xvf googleplaydownloader_1.7.orig.tar.gz + $ cd googleplaydownloader-1.7 + $ chmod o+r -R . + $ sudo python setup.py install + $ sudo sh -c "echo 'python /usr/lib/python2.7/site-packages/googleplaydownloader-1.7-py2.7.egg/googleplaydownloader/googleplaydownloader.py' > /usr/bin/googleplaydownloader" + +### 使用 GooglePlayDownloader 从 Google Play 商店下载 APK 文件 ### + +一旦你安装好 GooglePlayDownloader 后,你就可以像下面那样从 Google Play 商店下载 APK 文件。 + +首先通过输入下面的命令来启动该应用: + + $ googleplaydownloader + +![](https://farm1.staticflickr.com/425/20229024898_105396fa68_b.jpg) + +在搜索栏中,输入你想从 Google Play 商店下载的应用的名称。 + +![](https://farm1.staticflickr.com/503/20230360479_925f5da613_b.jpg) + +一旦你从搜索列表中找到了该应用,就选择该应用,接着点击 "下载选定的 APK 文件" 按钮。最后你将在你的家目录中找到下载的 APK 文件。现在,你就可以将下载到的 APK 文件转移到你所选择的 Android 设备上,然后手动安装它。 + +希望这篇教程对你有所帮助。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/download-apk-files-google-play-store.html + +作者:[Dan Nanni][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/how-to-install-apk-file-on-android-phone-or-tablet.html +[2]:https://chrome.google.com/webstore/detail/apk-downloader/cgihflhdpokeobcfimliamffejfnmfii +[3]:https://addons.mozilla.org/en-us/firefox/addon/apk-downloader/ +[4]:http://codingteam.net/project/googleplaydownloader +[5]:http://packages.ubuntu.com/vivid/python-ndg-httpsclient +[6]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html From c80b246c12f9be9c9dea4267d399b779f230a186 Mon Sep 17 00:00:00 2001 From: Chang Liu Date: Wed, 12 Aug 2015 20:01:28 +0800 Subject: [PATCH 194/207] Update RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 准备翻译该篇。 --- ...ment in RHEL 7--Boot Shutdown and Everything in Between.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md b/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md index 2befb7bc55..23bf9f0ac1 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 05--Process Management in RHEL 7--Boot Shutdown and Everything in Between.md @@ -1,3 +1,5 @@ +FSSlc translating + RHCSA Series: Process Management in RHEL 7: Boot, Shutdown, and Everything in Between – Part 5 ================================================================================ 
We will start this article with an overall and brief revision of what happens since the moment you press the Power button to turn on your RHEL 7 server until you are presented with the login screen in a command line interface. @@ -213,4 +215,4 @@ via: http://www.tecmint.com/rhcsa-exam-boot-process-and-process-management/ [a]:http://www.tecmint.com/author/gacanepa/ [1]:http://www.tecmint.com/dmesg-commands/ [2]:http://www.tecmint.com/systemd-replaces-init-in-linux/ -[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/ \ No newline at end of file +[3]:http://www.tecmint.com/how-to-kill-a-process-in-linux/ From 69122f983c12c6911363b696af572a6d06a06d68 Mon Sep 17 00:00:00 2001 From: ictlyh Date: Wed, 12 Aug 2015 23:10:47 +0800 Subject: [PATCH 195/207] =?UTF-8?q?[Translating]=20RHCE=20=E7=B3=BB?= =?UTF-8?q?=E5=88=97?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ... RHCE Series--How to Setup and Test Static Network Routing.md | 1 + ...work Address Translation and Set Kernel Runtime Parameters.md | 1 + ...e and Deliver System Activity Reports Using Linux Toolsets.md | 1 + 3 files changed, 3 insertions(+) diff --git a/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md b/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md index 03356f9dd1..731e78e5cf 100644 --- a/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md +++ b/sources/tech/RHCE/Part 1 - RHCE Series--How to Setup and Test Static Network Routing.md @@ -1,3 +1,4 @@ +Translating by ictlyh Part 1 - RHCE Series: How to Setup and Test Static Network Routing ================================================================================ RHCE (Red Hat Certified Engineer) is a certification from Red Hat company, which gives an open source operating system and software to the enterprise community, It also gives training, support and consulting services for the companies. diff --git a/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md b/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md index 8a5f4e6cf4..cd798b906d 100644 --- a/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md +++ b/sources/tech/RHCE/Part 2 - How to Perform Packet Filtering Network Address Translation and Set Kernel Runtime Parameters.md @@ -1,3 +1,4 @@ +Translating by ictlyh Part 2 - How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters ================================================================================ As promised in Part 1 (“[Setup Static Network Routing][1]”), in this article (Part 2 of RHCE series) we will begin by introducing the principles of packet filtering and network address translation (NAT) in Red Hat Enterprise Linux 7, before diving into setting runtime kernel parameters to modify the behavior of a running kernel if certain conditions change or needs arise. 
diff --git a/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md b/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md index 34693ea6bf..ea0157be4f 100644 --- a/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md +++ b/sources/tech/RHCE/Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets.md @@ -1,3 +1,4 @@ +Translating by ictlyh Part 3 - How to Produce and Deliver System Activity Reports Using Linux Toolsets ================================================================================ As a system engineer, you will often need to produce reports that show the utilization of your system’s resources in order to make sure that: 1) they are being utilized optimally, 2) prevent bottlenecks, and 3) ensure scalability, among other reasons. From 604582f47ad84aebdf1df2a0fef838c702f59086 Mon Sep 17 00:00:00 2001 From: wxy Date: Wed, 12 Aug 2015 23:55:02 +0800 Subject: [PATCH 196/207] PUB:20141211 Open source all over the world MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @fyh 这篇不好翻译啊,翻译的不错! --- ...20141211 Open source all over the world.md | 47 +++++++++---------- 1 file changed, 23 insertions(+), 24 deletions(-) rename {translated/talk => published}/20141211 Open source all over the world.md (67%) diff --git a/translated/talk/20141211 Open source all over the world.md b/published/20141211 Open source all over the world.md similarity index 67% rename from translated/talk/20141211 Open source all over the world.md rename to published/20141211 Open source all over the world.md index 0abb08121f..e07db43680 100644 --- a/translated/talk/20141211 Open source all over the world.md +++ b/published/20141211 Open source all over the world.md @@ -2,8 +2,6 @@ ================================================================================ ![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUS_OpenSourceExperience_520x292_cm.png) -图片来源 : opensource.com - 经过了一整天的Opensource.com[社区版主][1]年会,最后一项日程提了上来,内容只有“特邀嘉宾:待定”几个字。作为[Opensource.com][3]的项目负责人和社区管理员,[Jason Hibbets][2]起身解释道,“因为这个嘉宾有可能无法到场,因此我不想提前说是谁。在几个月前我问他何时有空过来,他给了我两个时间点,我选了其中一个。今天是这三周中Jim唯一能来的一天”。(译者注:Jim是指下文中提到的Jim Whitehurst,即红帽公司总裁兼首席执行官) 这句话在版主们(Moderators)中引起一阵轰动,他们从世界各地赶来参加此次的[拥抱开源大会(All Things Open Conference)][4]。版主们纷纷往前挪动椅子,仔细聆听。 @@ -14,7 +12,7 @@ “大家好!”,这个家伙开口了。他没穿正装,只是衬衫和休闲裤。 -这时会场中第二高个子的人,红帽全球意识部门(Global Awareness)的高级主管[Jeff Mackanic][5],告诉他大部分社区版本今天都在场,然后让每个人开始作简单的自我介绍。 +这时会场中第二高个子的人,红帽全球意识部门(Global Awareness)的高级主管[Jeff Mackanic][5],告诉他大部分社区版主今天都在场,然后让每个人开始作简单的自我介绍。 “我叫[Jen Wike Huger][6],负责Opensource.com的内容管理,很高兴见到大家。” @@ -22,13 +20,13 @@ “我叫[Robin][9],从2013年开始参与版主项目。我在OSDC做了一些事情,工作是在[City of the Hague][10]维护[网站][11]。” -“我叫[Marcus Hanwell][12],来自英格兰,在[Kitware][13]工作。同时,我是FOSS科学软件的技术总监,和国家实验室在[Titan][14] Z和[Gpu programming][15]方面合作。我主要使用[Gentoo][16]和[KDE][17]。最后,我很激动能加入FOSS和开源科学。” +“我叫[Marcus Hanwell][12],来自英格兰,在[Kitware][13]工作。同时,我是FOSS science software的技术总监,和国家实验室在[Titan][14] Z和[Gpu programming][15]方面合作。我主要使用[Gentoo][16]和[KDE][17]。最后,我很激动能参与到FOSS和开源科学。” -“我叫[Phil Shapiro][18],是华盛顿的一个小图书馆28个Linux工作站的管理员。我视各位为我的同事。非常高兴能一起交流分享,贡献力量。我主要关注FOSS和自豪感的关系,以及FOSS如何提升自豪感。” +“我叫[Phil Shapiro][18],是华盛顿的一个小图书馆的28个Linux工作站的管理员。我视各位为我的同事。非常高兴能一起交流分享,贡献力量。我主要关注FOSS和自豪感的关系,以及FOSS如何提升自豪感。” “我叫[Joshua Holm][19]。我大多数时间都在关注系统更新,以及帮助人们在网上找工作。” -“我叫[Mel Chernoff][20],在红帽工作,和[Jason Hibbets]和[Mark 
Bohannon]一起主要关注政府渠道方面。” +“我叫[Mel Chernoff][20],在红帽工作,和[Jason Hibbets][22]和[Mark Bohannon][23]一起主要关注[政府][21]渠道方面。” “我叫[Scott Nesbitt][24],写过很多东西,使用FOSS很久了。我是个普通人,不是系统管理员,也不是程序员,只希望能更加高效工作。我帮助人们在商业和生活中使用FOSS。” @@ -38,41 +36,41 @@ “你在[新FOSS Minor][30]教书?!”,Jim说道,“很酷!” -“我叫[Jason Baker][31]。我是红慢的一个云专家,主要做[OpenStack][32]方面的工作。” +“我叫[Jason Baker][31]。我是红帽的一个云专家,主要做[OpenStack][32]方面的工作。” “我叫[Mark Bohannan][33],是红帽全球开放协议的一员,在华盛顿外工作。和Mel一样,我花了相当多时间写作,也从法律和政府部门中找合作者。我做了一个很好的小册子来讨论正在发生在政府中的积极变化。” -“我叫[Jason Hibbets][34],我组织了这次会议。” +“我叫[Jason Hibbets][34],我组织了这次讨论。” 会场中一片笑声。 -“我也组织了这片讨论,可以这么说,”这个棕红色头发笑容灿烂的家伙说道。笑声持续一会逐渐平息。 +“我也组织了这个讨论,可以这么说,”这个棕红色头发笑容灿烂的家伙说道。笑声持续一会逐渐平息。 -我当时在他左边,时不时从转录空隙中抬头看一眼,然后从眼神中注意到微笑背后暗示的那个自2008年1月起开始领导公司的人,红帽的CEO[Jim Whitehurst][35]。 +我当时在他左边,时不时从记录的间隙中抬头看一眼,我注意到淡淡微笑背后的那个令人瞩目的人,是自2008年1月起开始领导红帽公司的CEO [Jim Whitehurst][35]。 -“我有世界上最好的工作,”稍稍向后靠、叉腿抱头,Whitehurst开始了演讲。“我开始领导红帽,在世界各地旅行到处看看情况。在这里的七年中,FOSS和广泛的开源创新所发生的美好的事情是开源已经脱离了条条框框。我现在认为,IT正处在FOSS之前所在的位置。我们可以预见FOSS从一个替代走向创新驱动力。”用户也看到了这一点。他们用FOSS并不是因为它便宜,而是因为它能提供和创新的解决方案。这也十一个全球现象。比如,我刚才还在印度,然后发现那里的用户拥抱开源的两个理由:一个是创新,另一个是那里的市场有些特殊,需要完全的控制。 +“我有世界上最好的工作,”稍稍向后靠、叉腿抱头,Whitehurst开始了演讲。“我开始领导红帽,在世界各地旅行到处看看情况。在这里的七年中,FOSS和广泛的开源创新所发生的最美好的事情是开源已经脱离了条条框框。我现在认为,信息技术正处在FOSS之前所在的位置。我们可以预见FOSS从一个替代品走向创新驱动力。我们的用户也看到了这一点。他们用FOSS并不是因为它便宜,而是因为它能带来可控和创新的解决方案。这也是个全球现象。比如,我刚才还在印度,然后发现那里的用户拥抱开源的两个理由:一个是创新,另一个是那里的市场有些特殊,需要完全的可控。” -“[孟买证券交易所][36]想得到源代码并加以控制,五年前这在证券交易领域闻所未闻。那时FOSS正在重复发明轮子。今天看来,FOSS正在做几乎所有的结合了大数据的事物。几乎所有的新框架,语言和方法论,包括流动(尽管不包括设备),都首先发生在开源世界。” +“[孟买证券交易所][36]想得到源代码并加以控制,五年前这种事情在证券交易领域就没有听说过。那时FOSS正在重复发明轮子。今天看来,实际上大数据的每件事情都出现在FOSS领域。几乎所有的新框架,语言和方法论,包括移动通讯(尽管不包括设备),都首先发生在开源世界。” -“这是因为用户数量已经达到了相当的规模。这不只是红帽遇到的情况,[Google][37],[Amazon][38],[Facebook][39]等也出现这样的情况。他们想解决自己的问题,用开源的方式。忘掉协议吧,开源绝不仅如此。我们建立了一个交通工具,一套规则,例如[Hadoop][40],[Cassandra][41]和其他工具。事实上,开源驱动创新。例如,Hadoop在厂商们意识的规模带来的问题。他们实际上有足够的资和资源金来解决自己的问题。”开源是许多领域的默认技术方案。这在一个更加注重内容的世界中更是如此,例如[3D打印][42]和其他使用信息内容的物理产品。” +“这是因为用户数量已经达到了相当的规模。这不只是红帽遇到的情况,[Google][37],[Amazon][38],[Facebook][39]等也出现这样的情况。他们想解决自己的问题,用开源的方式。忘掉许可协议吧,开源绝不仅如此。我们建立了一个交通工具,一套规则,例如[Hadoop][40],[Cassandra][41]和其他工具。事实上,开源驱动创新。例如,Hadoop是在厂商们意识到规模带来的问题时的一个解决方案。他们实际上有足够的资金和资源来解决自己的问题。开源是许多领域的默认技术方案。这在一个更加注重内容的世界中更是如此,例如[3D打印][42]和其他使用信息内容的实体产品。” “源代码的开源确实很酷,但开源不应当仅限于此。在各行各业不同领域开源仍有可以用武之地。我们要问下自己:‘开源能够为教育,政府,法律带来什么?其它的呢?其它的领域如何能学习我们?’” -“还有内容的问题。内容在现在是免费的,当然我们可以投资更多的免费内容,不过我们也需要商业模式围绕的内容。这是我们更应该关注的。如果你相信开放的创新能带来更好,那么我们需要更多的商业模式。” +“还有内容的问题。内容在现在是免费的,当然我们可以投资更多的免费内容,不过我们也需要商业模式围绕的内容。这是我们更应该关注的。如果你相信开放的创新更好,那么我们需要更多的商业模式。” -“教育让我担心其相比与‘社区’它更关注‘内容’。例如,无论我走到哪里,大学校长们都会说,‘等等,难道教育将会免费?!’对于下游来说FOSS免费很棒,但别忘了上游很强大。免费课程很棒,但我们同样需要社区来不断迭代和完善。这是很多人都在做的事情,Opensource.com是一个提供交流的社区。问题不是‘我们如何控制内容’,也不是‘如何建立和分发内容’,而是要确保它处在不断的完善当中,而且能给其他领域提供有价值的参考。” +“教育让我担心,其相比与‘社区’它更关注‘内容’。例如,无论我走到哪里,大学的校长们都会说,‘等等,难道教育将会免费?!’对于下游来说FOSS免费很棒,但别忘了上游很强大。免费课程很棒,但我们同样需要社区来不断迭代和完善。这是很多人都在做的事情,Opensource.com是一个提供交流的社区。问题不是‘我们如何控制内容’,也不是‘如何建立和分发内容’,而是要确保它处在不断的完善当中,而且能给其他领域提供有价值的参考。” “改变世界的潜力是无穷无尽的,我们已经取得了很棒的进步。”六年前我们痴迷于制定宣言,我们说‘我们是领导者’。我们用错词了,因为那潜在意味着控制。积极的参与者们同样也不能很好理解……[Máirín Duffy][43]提出了[催化剂][44]这个词。然后我们组成了红帽,不断地促进行动,指引方向。” -“Opensource.com也是其他领域的催化剂,而这正是它的本义所在,我希望你们也这样认为。当时的内容质量和现在比起来都令人难以置信。你可以看到每季度它都在进步。谢谢你们的时间!谢谢成为了催化剂!这是一个让世界变得更好的机会。我想听听你们的看法。” +“Opensource.com也是其他领域的催化剂,而这正是它的本义所在,我希望你们也这样认为。当时的内容质量和现在比起来都令人难以置信。你可以看到每季度它都在进步。谢谢你们付出的时间!谢谢成为了催化剂!这是一个让世界变得更好的机会。我想听听你们的看法。” 我瞥了一下桌子,发现几个人眼中带泪。 然后Whitehurst又回顾了大会的开放教育议题。“极端一点看,如果你有一门[Ulysses][45]的公开课。在这里你能和一群人一起合作体验课堂。这样就和代码块一样的:大家一起努力,代码随着时间不断改进。” -在这一点上,我有发言权。当谈论其FOSS和学术团体之间的差异,向基础和可能的不调和这些词语都跳了出来。 
+在这一点上,我有发言权。当谈论其FOSS和学术团体之间的差异,像“基础”和“可能不调和”这些词语都跳了出来。 -**Remy**: “倒退带来死亡。如果你在论文或者发布的代码中烦了一个错误,有可能带来十分严重的后果。学校一直都是避免失败寻求正确答案的地方。复制意味着抄袭。轮子在一遍遍地教条地被发明。FOSS你能快速失败,但在学术界,你只能带来无效的结果。” +**Remy**: “倒退带来死亡。如果你在论文或者发布的代码中犯了一个错误,有可能带来十分严重的后果。学校一直都是避免失败寻求正确答案的地方。复制意味着抄袭。轮子在一遍遍地教条地被发明。FOSS让你能快速失败,但在学术界,你只能带来无效的结果。” **Nicole**: “学术界有太多自我的家伙,你们需要一个发布经理。” @@ -80,20 +78,21 @@ **Luis**: “团队和分享应该优先考虑,红帽可以多向它们强调这一点。” -**Jim**: “还有公司在其中扮演积极角色吗?” +**Jim**: “还有公司在其中扮演积极角色了吗?” -[Phil Shapiro][46]: “我对FOSS的临界点感兴趣。联邦没有改用[LibreOffice][47]把我逼疯了。我们没有在软件上花税款,也不应当在字处理软件或者微软的Office上浪费税钱。” +[Phil Shapiro][46]: “我对FOSS的临界点感兴趣。Fed没有改用[LibreOffice][47]把我逼疯了。我们没有在软件上花税款,也不应当在字处理软件或者微软的Office上浪费税钱。” -**Jim**: “我们经常提倡这一点。我们能做更多吗?这是个问题。首先,我们在我们的产品涉足的地方取得了进步。我们在政府中有坚实的专营权。我们比私有公司平均话费更多。银行和电信业都和政府挨着。我们在欧洲做的更好,我认为在那工作又更低的税。下一代计算就像‘终结者’,我们到处取得了进步,但仍然需要忧患意识。” +**Jim**: “我们经常提倡这一点。我们能做更多吗?这是个问题。首先,我们在我们的产品涉足的地方取得了进步。我们在政府中有坚实的专营权。我们比私有公司平均花费更多。银行和电信业都和政府挨着。我们在欧洲做的更好,我认为在那工作有更低的税。下一代计算就像‘终结者’,我们到处取得了进步,但仍然需要忧患意识。” + +突然,门开了。Jim转身向门口站着的执行助理点头。他要去参加下一场会了。他并拢双腿,站着向前微倾。然后,他再次向每个人的工作和奉献表示感谢,微笑着出了门……留给我们更多的激励。 -突然,门开了。Jim转身向门口站着的执行助理点头。他要去参加下一场会了。他并拢双腿,站着向前微倾。然后,他再次向每个人的工作和奉献表示感谢,微笑着除了门……留给我们更多的激励。 -------------------------------------------------------------------------------- via: https://opensource.com/business/14/12/jim-whitehurst-inspiration-open-source 作者:[Remy][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[fyh](https://github.com/fyh) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From 83b197b3ce99246cb5c1965fc1de693d9f090f55 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 13 Aug 2015 09:31:59 +0800 Subject: [PATCH 197/207] translating --- ...Web Based Network Traffic Analyzer--Install it on Linux.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md index 9f78722cb6..3b3fe49a7f 100644 --- a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md +++ b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md @@ -1,3 +1,5 @@ +translating-----geekpi + Darkstat is a Web Based Network Traffic Analyzer – Install it on Linux ================================================================================ Darkstat is a simple, web based network traffic analyzer application. It works on many popular operating systems like Linux, Solaris, Mac and AIX. It keeps running in the background as a daemon and continues collecting and sniffing network data and presents it in easily understandable format within its web interface. It can generate traffic reports for hosts, identify which ports are open on some particular host and is IPV 6 complaint application. Let’s see how we can install and configure it on Linux operating system. 
@@ -59,4 +61,4 @@ via: http://linuxpitstop.com/install-darkstat-on-ubuntu-linux/ 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 -[a]:http://linuxpitstop.com/author/aun/ \ No newline at end of file +[a]:http://linuxpitstop.com/author/aun/ From f5f2a55acba0a563381de7dc8718d856de00ff22 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 13 Aug 2015 10:12:57 +0800 Subject: [PATCH 198/207] translating --- ...k Traffic Analyzer--Install it on Linux.md | 40 +++++++++---------- 1 file changed, 19 insertions(+), 21 deletions(-) diff --git a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md index 3b3fe49a7f..e8e6bace07 100644 --- a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md +++ b/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md @@ -1,62 +1,60 @@ -translating-----geekpi - -Darkstat is a Web Based Network Traffic Analyzer – Install it on Linux +Darkstat一个基于网络的流量分析器 - 在Linux中安装 ================================================================================ -Darkstat is a simple, web based network traffic analyzer application. It works on many popular operating systems like Linux, Solaris, Mac and AIX. It keeps running in the background as a daemon and continues collecting and sniffing network data and presents it in easily understandable format within its web interface. It can generate traffic reports for hosts, identify which ports are open on some particular host and is IPV 6 complaint application. Let’s see how we can install and configure it on Linux operating system. +Darkstat是一个简易的,基于网络的流量分析程序。它可以在主流的操作系统如Linux、Solaris、MAC、AIX上工作。它以守护进程的形式持续工作在后台并不断地嗅探网络数据并以简单易懂的形式展现在网页上。它可以为主机生成流量报告,鉴别特定主机上哪些端口打开并且兼容IPv6。让我们看下如何在Linux中安装和配置它。 -### Installing Darkstat on Linux ### +### 在Linux中安装配置Darkstat ### -**Install Darkstat on Fedora/CentOS/RHEL:** +** 在Fedora/CentOS/RHEL中安装Darkstat:** -In order to install it on Fedora/RHEL and CentOS Linux distributions, run following command on the terminal. +要在Fedora/RHEL和CentOS中安装,运行下面的命令。 sudo yum install darkstat -**Install Darkstat on Ubuntu/Debian:** +**在Ubuntu/Debian中安装Darkstat:** -Run following on the terminal to install it on Ubuntu and Debian. +运行下面的命令在Ubuntu和Debian中安装。 sudo apt-get install darkstat -Congratulations, Darkstat has been installed on your Linux system now. +恭喜你,Darkstat已经在你的Linux中安装了。 -### Configuring Darkstat ### +### 配置 Darkstat ### -In order to run this application properly, we need to perform some basic configurations. Edit /etc/darkstat/init.cfg file in Gedit text editor by running the following command on the terminal. +为了正确运行这个程序,我恩需要执行一些基本的配置。运行下面的命令用gedit编辑器打开/etc/darkstat/init.cfg文件。 sudo gedit /etc/darkstat/init.cfg ![](http://linuxpitstop.com/wp-content/uploads/2015/08/13.png) -Edit Darkstat +编辑 Darkstat -Change START_DARKSTAT parameter to “yes” and provide your network interface in “INTERFACE”. Make sure to uncomment DIR, PORT, BINDIP, and LOCAL parameters here. If you wish to bind the web interface for Darkstat to some specific IP, provide it in BINDIP section. +修改START_DARKSTAT这个参数为yes,并在“INTERFACE”中提供你的网络接口。确保取消了DIR、PORT、BINDIP和LOCAL这些参数的注释。如果你希望绑定Darkstat到特定的IP,在BINDIP中提供它 -### Starting Darkstat Daemon ### +### 启动Darkstat守护进程 ### -Once the installation and configuration for Darkstat is complete, run following command to start its daemon. 
+安装并配置完Darkstat后,运行下面的命令启动它的守护进程。 sudo /etc/init.d/darkstat start ![Restarting Darkstat](http://linuxpitstop.com/wp-content/uploads/2015/08/23.png) -You can configure Darkstat to start on system boot by running the following command: +你可以用下面的命令来在开机时启动Darkstat: chkconfig darkstat on -Launch your browser and load **http://localhost:666** and it will display the web based graphical interface for Darkstat. Start using this tool to analyze your network traffic. +打开浏览器并打开**http://localhost:666**,它会显示Darkstat的网页界面。使用这个工具来分析你的网络流量。 ![Darkstat](http://linuxpitstop.com/wp-content/uploads/2015/08/32.png) -### Conclusion ### +### 总结 ### -It is a lightweight tool with very low memory footprints. The key reason for the popularity of this tool is simplicity, ease of configuration and usage. It is a must-have application for System and Network Administrators. +它是一个占用很少内存的轻量级工具。这个工具流行的原因是简易、易于配置和使用。这是一个对系统管理员而言必须拥有的程序 -------------------------------------------------------------------------------- via: http://linuxpitstop.com/install-darkstat-on-ubuntu-linux/ 作者:[Aun][a] -译者:[译者ID](https://github.com/译者ID) +译者:[geekpi](https://github.com/geekpi) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 From ed05a3b8483d65879b9bfc48daaa909e29afd347 Mon Sep 17 00:00:00 2001 From: DeadFire Date: Thu, 13 Aug 2015 10:22:14 +0800 Subject: [PATCH 199/207] =?UTF-8?q?20150813-1=20=E9=80=89=E9=A2=98?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...ow to get Public IP from Linux Terminal.md | 68 +++ ...150813 Linux file system hierarchy v2.0.md | 438 ++++++++++++++++++ ... Install The Latest Nvidia Linux Driver.md | 63 +++ 3 files changed, 569 insertions(+) create mode 100644 sources/tech/20150813 How to get Public IP from Linux Terminal.md create mode 100644 sources/tech/20150813 Linux file system hierarchy v2.0.md create mode 100644 sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md diff --git a/sources/tech/20150813 How to get Public IP from Linux Terminal.md b/sources/tech/20150813 How to get Public IP from Linux Terminal.md new file mode 100644 index 0000000000..f0bba2cea9 --- /dev/null +++ b/sources/tech/20150813 How to get Public IP from Linux Terminal.md @@ -0,0 +1,68 @@ +How to get Public IP from Linux Terminal? +================================================================================ +![](http://www.blackmoreops.com/wp-content/uploads/2015/06/256x256xHow-to-get-Public-IP-from-Linux-Terminal-blackMORE-Ops.png.pagespeed.ic.GKEAEd4UNr.png) + +Public addresses are assigned by InterNIC and consist of class-based network IDs or blocks of CIDR-based addresses (called CIDR blocks) that are guaranteed to be globally unique to the Internet. How to get Public IP from Linux Terminal - blackMORE OpsWhen the public addresses are assigned, routes are programmed into the routers of the Internet so that traffic to the assigned public addresses can reach their locations. Traffic to destination public addresses are reachable on the Internet. For example, when an organization is assigned a CIDR block in the form of a network ID and subnet mask, that [network ID, subnet mask] pair also exists as a route in the routers of the Internet. IP packets destined to an address within the CIDR block are routed to the proper destination. In this post I will show several ways to find your public IP address from Linux terminal. 
+This may seem like overkill for normal users, but it matters when you are at the terminal of a headless Linux server (i.e., with no GUI, or when you are connected as a user with minimal tools). Either way, being able to get your public IP from the Linux terminal can be useful in many cases, or it could be one of those things that just comes in handy someday.
+
+There are two main commands we use, curl and wget. You can use them interchangeably.
+
+### Curl output in plain text format: ###
+
+    curl icanhazip.com
+    curl ifconfig.me
+    curl curlmyip.com
+    curl ip.appspot.com
+    curl ipinfo.io/ip
+    curl ipecho.net/plain
+    curl www.trackip.net/i
+
+### curl output in JSON format: ###
+
+    curl ipinfo.io/json
+    curl ifconfig.me/all.json
+    curl www.trackip.net/ip?json (bit ugly)
+
+### curl output in XML format: ###
+
+    curl ifconfig.me/all.xml
+
+### curl all IP details – The motherload ###
+
+    curl ifconfig.me/all
+
+### Using DYNDNS (Useful when you’re using DYNDNS service) ###
+
+    curl -s 'http://checkip.dyndns.org' | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g'
+    curl -s http://checkip.dyndns.org/ | grep -o "[[:digit:].]\+"
+
+### Using wget instead of curl ###
+
+    wget http://ipecho.net/plain -O - -q ; echo
+    wget http://observebox.com/ip -O - -q ; echo
+
+### Using host and dig command (cause we can) ###
+
+You can also use the host and dig commands, assuming they are available or installed:
+
+    host -t a dartsclink.com | sed 's/.*has address //'
+    dig +short myip.opendns.com @resolver1.opendns.com
+
+### Sample bash script: ###
+
+    #!/bin/bash
+
+    PUBLIC_IP=`wget http://ipecho.net/plain -O - -q ; echo`
+    echo $PUBLIC_IP
+
+Quite a few to pick from.
+
+I was actually writing a small script to track all the IP changes of my router each day and save those into a file.
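+
+Here is a minimal sketch of such a tracker; the log path and the use of icanhazip.com are just assumptions, so swap in any of the services above. Run it from cron to build up a daily (or more frequent) history:
+
+    #!/bin/bash
+    # Append the current public IP to a log file, but only when it changes.
+    LOG_FILE="$HOME/public_ip_history.log"
+    CURRENT_IP=$(curl -s icanhazip.com)
+    # The IP is the last field of each logged line.
+    LAST_IP=$(tail -n 1 "$LOG_FILE" 2>/dev/null | awk '{print $NF}')
+    if [ -n "$CURRENT_IP" ] && [ "$CURRENT_IP" != "$LAST_IP" ]; then
+        echo "$(date '+%Y-%m-%d %H:%M:%S') $CURRENT_IP" >> "$LOG_FILE"
+    fi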
+I found these nifty commands and sites to use while doing some online research. Hope they help someone else someday too. Thanks for reading, please Share and RT.
+
+--------------------------------------------------------------------------------
+
+via: http://www.blackmoreops.com/2015/06/14/how-to-get-public-ip-from-linux-terminal/
+
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file
diff --git a/sources/tech/20150813 Linux file system hierarchy v2.0.md b/sources/tech/20150813 Linux file system hierarchy v2.0.md
new file mode 100644
index 0000000000..9df6d23dcf
--- /dev/null
+++ b/sources/tech/20150813 Linux file system hierarchy v2.0.md
@@ -0,0 +1,438 @@
+Linux file system hierarchy v2.0
+================================================================================
+What is a file in Linux? What is a file system in Linux? Where are all the configuration files? Where do I keep my downloaded applications? Is there really a filesystem standard structure in Linux? Well, the image above explains the Linux file system hierarchy in a very simple and non-complex way. It’s very useful when you’re looking for a configuration file or a binary file. I’ve added some explanation and examples below, but that’s TL;DR.
+
+Another issue is configuration and binary files scattered all over the system: that creates inconsistency, and whether you’re a large organization or an end user, it can compromise your system (a binary talking to old lib files, etc.); and when you do a [security audit of your Linux system][1], you may find it is vulnerable to different exploits. So keeping a clean operating system (no matter whether Windows or Linux) is important.
+
+### What is a file in Linux? ###
+
+A simple description of the UNIX system, also applicable to Linux, is this:
+
+> On a UNIX system, everything is a file; if something is not a file, it is a process.
+
+This statement is true because there are special files that are more than just files (named pipes and sockets, for instance), but to keep things simple, saying that everything is a file is an acceptable generalization. A Linux system, just like UNIX, makes no difference between a file and a directory, since a directory is just a file containing names of other files. Programs, services, texts, images, and so forth, are all files. Input and output devices, and generally all devices, are considered to be files, according to the system.
+
+![](http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png)
+
+- Version 2.0 – 17-06-2015
+  - – Improved: Added title and version history.
+  - – Improved: Added /srv, /media and /proc.
+  - – Improved: Updated descriptions to reflect modern Linux File Systems.
+  - – Fixed: Multiple typos.
+  - – Fixed: Appearance and colour.
+- Version 1.0 – 14-02-2015
+  - – Created: Initial diagram.
+  - – Note: Discarded lowercase version.
+
+### Download Links ###
+
+Following are two links for download. If you need this in any other format, let me know and I will try to create that and upload it somewhere.
+
+- [Large (PNG) Format – 2480×1755 px – 184KB][2]
+- [Largest (PDF) Format – 9919x7019 px – 1686KB][3]
+
+**Note**: PDF Format is best for printing and very high in quality
+
+### Linux file system description ###
+
+In order to manage all those files in an orderly fashion, we like to think of them in an ordered tree-like structure on the hard disk, as we know from `MS-DOS` (Disk Operating System) for instance. The large branches contain more branches, and the branches at the end contain the tree’s leaves or normal files. For now we will use this image of the tree, but we will find out later why this is not a fully accurate image.
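+
+If the tree package is installed on your system (it often isn’t by default; install it with your package manager if needed), you can print a quick sketch of the top of that tree yourself:
+
+    $ tree -L 1 / [List only the first level of the hierarchy]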
+注:表格
+
+| Directory | Description |
+|-----------|-------------|
+| / | Primary hierarchy root and root directory of the entire file system hierarchy. |
+| /bin | Essential command binaries that need to be available in single user mode; for all users, e.g., cat, ls, cp. |
+| /boot | Boot loader files, e.g., kernels, initrd. |
+| /dev | Essential devices, e.g., /dev/null. |
+| /etc | Host-specific system-wide configuration files. There has been controversy over the meaning of the name itself. In early versions of the UNIX Implementation Document from Bell labs, /etc is referred to as the etcetera directory, as this directory historically held everything that did not belong elsewhere (however, the FHS restricts /etc to static configuration files and it may not contain binaries). Since the publication of early documentation, the directory name has been re-designated in various ways. Recent interpretations include backronyms such as “Editable Text Configuration” or “Extended Tool Chest”. |
+| /etc/opt | Configuration files for add-on packages that are stored in /opt/. |
+| /etc/sgml | Configuration files, such as catalogs, for software that processes SGML. |
+| /etc/X11 | Configuration files for the X Window System, version 11. |
+| /etc/xml | Configuration files, such as catalogs, for software that processes XML. |
+| /home | Users’ home directories, containing saved files, personal settings, etc. |
+| /lib | Libraries essential for the binaries in /bin/ and /sbin/. |
+| /lib<qual> | Alternate format essential libraries. Such directories are optional, but if they exist, they have some requirements. |
+| /media | Mount points for removable media such as CD-ROMs (appeared in FHS-2.3). |
+| /mnt | Temporarily mounted filesystems. |
+| /opt | Optional application software packages. |
+| /proc | Virtual filesystem providing process and kernel information as files. In Linux, corresponds to a procfs mount. |
+| /root | Home directory for the root user. |
+| /sbin | Essential system binaries, e.g., init, ip, mount. |
+| /srv | Site-specific data which are served by the system. |
+| /tmp | Temporary files (see also /var/tmp). Often not preserved between system reboots. |
+| /usr | Secondary hierarchy for read-only user data; contains the majority of (multi-)user utilities and applications. |
+| /usr/bin | Non-essential command binaries (not needed in single user mode); for all users. |
+| /usr/include | Standard include files. |
+| /usr/lib | Libraries for the binaries in /usr/bin/ and /usr/sbin/. |
+| /usr/lib<qual> | Alternate format libraries (optional). |
+| /usr/local | Tertiary hierarchy for local data, specific to this host. Typically has further subdirectories, e.g., bin/, lib/, share/. |
+| /usr/sbin | Non-essential system binaries, e.g., daemons for various network-services. |
+| /usr/share | Architecture-independent (shared) data. |
+| /usr/src | Source code, e.g., the kernel source code with its header files. |
+| /usr/X11R6 | X Window System, Version 11, Release 6. |
+| /var | Variable files: files whose content is expected to continually change during normal operation of the system, such as logs, spool files, and temporary e-mail files. |
+| /var/cache | Application cache data. Such data are locally generated as a result of time-consuming I/O or calculation. The application must be able to regenerate or restore the data. The cached files can be deleted without loss of data. |
+| /var/lib | State information. Persistent data modified by programs as they run, e.g., databases, packaging system metadata, etc. |
+| /var/lock | Lock files. Files keeping track of resources currently in use. |
+| /var/log | Log files. Various logs. |
+| /var/mail | Users’ mailboxes. |
+| /var/opt | Variable data from add-on packages that are stored in /opt/. |
+| /var/run | Information about the running system since last boot, e.g., currently logged-in users and running daemons. |
+| /var/spool | Spool for tasks waiting to be processed, e.g., print queues and outgoing mail queue. |
+| /var/spool/mail | Deprecated location for users’ mailboxes. |
+| /var/tmp | Temporary files to be preserved between reboots. |
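+
+Most distributions also ship a local reference for this same layout, the hier manual page, which is a handy way to cross-check any directory’s purpose without leaving the terminal:
+
+    $ man hier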
+
+### Types of files in Linux ###
+
+Most files are just files, called `regular` files; they contain normal data, for example text files, executable files or programs, input for or output from a program and so on.
+
+While it is reasonably safe to suppose that everything you encounter on a Linux system is a file, there are some exceptions.
+
+- `Directories`: files that are lists of other files.
+- `Special files`: the mechanism used for input and output. Most special files are in `/dev`, we will discuss them later.
+- `Links`: a system to make a file or directory visible in multiple parts of the system’s file tree. We will talk about links in detail.
+- `(Domain) sockets`: a special file type, similar to TCP/IP sockets, providing inter-process networking protected by the file system’s access control.
+- `Named pipes`: act more or less like sockets and form a way for processes to communicate with each other, without using network socket semantics.
+
+### File system in reality ###
+
+For most users and for most common system administration tasks, it is enough to accept that files and directories are ordered in a tree-like structure. The computer, however, doesn’t understand a thing about trees or tree-structures.
+
+Every partition has its own file system. By imagining all those file systems together, we can form an idea of the tree-structure of the entire system, but it is not as simple as that. In a file system, a file is represented by an `inode`, a kind of serial number containing information about the actual data that makes up the file: to whom this file belongs, and where it is located on the hard disk.
+
+Every partition has its own set of inodes; throughout a system with multiple partitions, files with the same inode number can exist.
+
+Each inode describes a data structure on the hard disk, storing the properties of a file, including the physical location of the file data. When a hard disk is initialized to accept data storage, usually during the initial system installation process or when adding extra disks to an existing system, a fixed number of inodes per partition is created. This number will be the maximum number of files, of all types (including directories, special files, links etc.) that can exist at the same time on the partition. We typically count on having 1 inode per 2 to 8 kilobytes of storage. At the time a new file is created, it gets a free inode. In that inode is the following information:
+
+- Owner and group owner of the file.
+- File type (regular, directory, …)
+- Permissions on the file
+- Date and time of creation, last read and change.
+- Date and time this information has been changed in the inode.
+- Number of links to this file (see later in this chapter).
+- File size
+- An address defining the actual location of the file data.
+
+The only information not included in an inode is the file name and directory. These are stored in the special directory files. By comparing file names and inode numbers, the system can make up a tree-structure that the user understands. Users can display inode numbers using the -i option to ls. The inodes have their own separate space on the disk.
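+
+For example, you can inspect inodes directly from the shell (the file name /etc/hosts here is just an illustration; any file will do):
+
+    $ ls -i /etc/hosts [Print the inode number of a file]
+    $ stat /etc/hosts [Show the inode metadata: owner, permissions, timestamps, link count]
+    $ df -i [Report inode usage per mounted filesystem]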
+
+--------------------------------------------------------------------------------
+
+via: http://www.blackmoreops.com/2015/06/18/linux-file-system-hierarchy-v2-0/
+
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://www.blackmoreops.com/2015/02/15/in-light-of-recent-linux-exploits-linux-security-audit-is-a-must/
+[2]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png
+[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf
\ No newline at end of file
diff --git a/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md b/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md
new file mode 100644
index 0000000000..2bae0061c4
--- /dev/null
+++ b/sources/tech/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md
@@ -0,0 +1,63 @@
+Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver
+================================================================================
+![Ubuntu gamers are on the rise – and so is demand for the latest drivers](http://www.omgubuntu.co.uk/wp-content/uploads/2014/03/ubuntugamer_logo_dark-500x250.jpg)
+Ubuntu gamers are on the rise – and so is demand for the latest drivers
+
+**Installing the latest upstream NVIDIA graphics driver on Ubuntu could be about to get much easier.**
+
+Ubuntu developers are considering the creation of a brand new ‘official’ PPA to distribute the latest closed-source NVIDIA binary drivers to desktop users.
+
+The move would benefit Ubuntu gamers **without** risking the stability of the OS for everyone else.
+
+New upstream drivers would be installed and updated from this new PPA **only** when a user explicitly opts in to it. Everyone else would continue to receive and use the stable NVIDIA Linux driver snapshot included in the Ubuntu archive.
+
+### Why Is This Needed? ###
+
+![Ubuntu provides drivers – but they’re not the latest](http://www.omgubuntu.co.uk/wp-content/uploads/2013/04/drivers.jpg)
+Ubuntu provides drivers – but they’re not the latest
+
+The closed-source NVIDIA graphics drivers that are available to install on Ubuntu from the archive (using the command line, Synaptic or the additional drivers tool) work fine for most users and can handle the composited Unity desktop shell with ease.
+
+For gaming needs it’s a different story.
+
+If you want to squeeze every last frame and HD texture out of the latest big-name Steam game you’ll need the latest binary driver blob.
+
+> ‘Installing the very latest Nvidia Linux driver on Ubuntu is not easy and not always safe.’
+
+The more recent the driver, the more likely it is to support the latest features and technologies, or to come pre-packed with game-specific tweaks and bug fixes too.
+
+The problem is that installing the very latest Nvidia Linux driver on Ubuntu is not easy and not always safe.
+
+To fill the void, many third-party PPAs maintained by enthusiasts have emerged. Since many of these PPAs also distribute other experimental or bleeding-edge software, their use is **not without risk**. Adding a bleeding-edge PPA is often the fastest way to entirely hose a system!
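+
+For context, this is roughly what the enthusiast route looks like today. The PPA named below is just one illustrative example of a third-party driver archive, and `ppa-purge` is the usual escape hatch when an update goes wrong:
+
+    # add a third-party driver PPA and pull in newer packages (example PPA name)
+    sudo add-apt-repository ppa:graphics-drivers/ppa
+    sudo apt-get update
+
+    # if the new driver misbehaves, ppa-purge downgrades everything
+    # back to the versions shipped in the Ubuntu archive
+    sudo apt-get install ppa-purge
+    sudo ppa-purge ppa:graphics-drivers/ppa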
+
+A solution that lets Ubuntu users install the latest proprietary graphics drivers as offered in third-party PPAs is needed, **but** with the safety catch of being able to roll back to the stable archive version if required.
+
+### ‘Demand for fresh drivers is hard to ignore’ ###
+
+> ‘A solution that lets Ubuntu users get the latest hardware drivers safely is coming.’
+
+‘The demand for fresh drivers in a fast developing market is becoming hard to ignore, users are going to want the latest upstream has to offer,’ Castro explains in an e-mail to the Ubuntu Desktop mailing list.
+
+‘[NVIDIA] can deliver a kickass experience with almost no effort from the user [in Windows 10]. Until we can convince NVIDIA to do the same with Ubuntu we’re going to have to pick up the slack.’
+
+Castro’s proposition of a “blessed” NVIDIA PPA is the easiest way to do this.
+
+Gamers would be able to opt in to receive new drivers from the PPA straight from Ubuntu’s default proprietary hardware drivers tool — no need for them to copy and paste terminal commands from websites or wiki pages.
+
+The drivers within this PPA would be packaged and maintained by a select band of community members, and would benefit from being a semi-official option, namely through **automated testing**.
+
+As Castro himself puts it: ‘People want the latest bling, and no matter what they’re going to do it. We might as well put a framework around it so people can get what they want without breaking their computer.’
+
+**Would you make use of this PPA? How would you rate the performance of the default Nvidia drivers on Ubuntu? Share your thoughts in the comments, folks!**
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2015/08/ubuntu-easy-install-latest-nvidia-linux-drivers
+
+作者:[Joey-Elijah Sneddon][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
\ No newline at end of file
From cc0d58299115994ccd4982cdb806460dc18e718b Mon Sep 17 00:00:00 2001
From: geekpi
Date: Thu, 13 Aug 2015 10:35:12 +0800
Subject: [PATCH 200/207] translated

---
 ...s a Web Based Network Traffic Analyzer--Install it on Linux.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename {sources => translated}/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md (100%)

diff --git a/sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md b/translated/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md
similarity index 100%
rename from sources/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md
rename to translated/tech/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md
From 2857115c5c775ef504947bd7cfc8fdffcafcb256 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Thu, 13 Aug 2015 10:45:09 +0800
Subject: [PATCH 201/207] =?UTF-8?q?20150813-2=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...How to Install Logwatch on Ubuntu 15.04.md | 137 +++++++++++++++
 ...st Disk I O Performance With dd Command.md | 162 ++++++++++++++++++
 2 files changed, 299 insertions(+)
 create mode 100644 sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
 create mode 100644 sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md
diff --git a/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md b/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
new file mode 100644
index 0000000000..fa9458dcb4
--- /dev/null
+++ b/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
@@ -0,0 +1,137 @@
+How to Install Logwatch on Ubuntu 15.04
+================================================================================
+Hi, today we are going to illustrate the setup of Logwatch on the Ubuntu 15.04 operating system, though it can be used on any Linux or UNIX-like operating system. Logwatch is a customizable log analyzer and report-based log-monitoring system that goes through your logs for a given period of time and makes a report on the areas that you wish, with the details you want. It is an easy tool to install, configure and review, and the data it provides helps you take actions that will improve security. Logwatch scans the log files of major operating system components, like SSH and the web server, and forwards a summary that contains the valuable items that need to be looked at.
+
+### Pre-installation Setup ###
+
+We will be using the Ubuntu 15.04 operating system to deploy Logwatch, so as a prerequisite for the installation, make sure that your email setup is working, as it will be used to send the daily reports to the administrators. Your system repositories should also be enabled, as we will be installing Logwatch from the distribution's available repositories.
+
+Then open the terminal of your Ubuntu operating system, log in as the root user, and update your system packages before moving on to the Logwatch installation.
+
+    root@ubuntu-15:~# apt-get update
+
+### Installing Logwatch ###
+
+Once your system is updated and you have fulfilled all the prerequisites, run the following command to start the installation of Logwatch on your server.
+
+    root@ubuntu-15:~# apt-get install logwatch
+
+The Logwatch installation process starts with the addition of some extra required packages, shown once you press "Y" to accept the required changes to the system.
+
+During the installation process you will be prompted to configure Postfix according to your mail server's setup. Here we used "Local only" in the tutorial for ease; you can choose from the other available options as per your infrastructure requirements, and then press "OK" to proceed.
+
+![Postfix Configurations](http://blog.linoxide.com/wp-content/uploads/2015/08/21.png)
+
+Then you have to choose your mail server's name, which will also be used by other programs, so it should be a single fully qualified domain name (FQDN).
+
+![Postfix Setup](http://blog.linoxide.com/wp-content/uploads/2015/08/31.png)
+
+Once you press "OK" after the Postfix configuration, the Logwatch installation completes with the default Postfix settings.
+
+![Logwatch Completion](http://blog.linoxide.com/wp-content/uploads/2015/08/41.png)
+
+You can check the status of Postfix by issuing the following command in the terminal; it should be in the active state.
+
+    root@ubuntu-15:~# service postfix status
+
+![Postfix Status](http://blog.linoxide.com/wp-content/uploads/2015/08/51.png)
+
+To confirm the installation of Logwatch with its default configuration, issue the simple "logwatch" command as shown.
+
+    root@ubuntu-15:~# logwatch
+
+The output from the above command results in the following compiled report in the terminal.
+
+![Logwatch Report](http://blog.linoxide.com/wp-content/uploads/2015/08/61.png)
+
+### Logwatch Configurations ###
+
+After the successful installation of Logwatch, we need to make a few changes in its configuration file, which is located under the path shown below. Let's open it with a file editor and update the configuration as required.
+
+    root@ubuntu-15:~# vim /usr/share/logwatch/default.conf/logwatch.conf
+
+**Output/Format Options**
+
+By default, Logwatch prints to stdout as text with no encoding. To default to email, set "Output = mail", and to save to a file, set "Output = file". You can comment out the default setting as per your requirements.
+
+    Output = stdout
+
+To make HTML the default format, change the following line from "text" to "html"; this is useful if you are using Internet email configurations.
+
+    Format = text
+
+Now set the default recipient the reports should be sent to; it can be a local account or a complete email address that you are free to mention in this line.
+
+    MailTo = root
+    #MailTo = user@test.com
+
+The default sender of the reports can likewise be a local account or any other address you wish to use.
+
+    # complete email address.
+    MailFrom = Logwatch
+
+Save the changes made in the configuration file, leaving the other parameters at their defaults.
+
+**Cronjob Configuration**
+
+Now edit the "00logwatch" file in the daily cron directory to configure the email address that Logwatch reports are forwarded to.
+
+    root@ubuntu-15:~# vim /etc/cron.daily/00logwatch
+
+Here you need to use "--mailto user@test.com" instead of "--output mail", and save the file.
+
+![Logwatch Cronjob](http://blog.linoxide.com/wp-content/uploads/2015/08/71.png)
+
+### Using Logwatch Report ###
+
+Now we generate a test report by executing the "logwatch" command in the terminal, with the result shown in text format within the terminal.
+
+    root@ubuntu-15:~# logwatch
+
+The generated report starts with its execution time and date. It comprises different sections; each section starts with a begin marker and closes with an end marker after showing the complete log information for that section.
+
+Here is what its starting point looks like: it begins by showing all the packages installed on the system, as shown below.
+
+![dpkg status](http://blog.linoxide.com/wp-content/uploads/2015/08/81.png)
+
+The following sections show the log information about the login sessions, rsyslog, and the SSH connections for the current and previous sessions on the system.
+
+![logwatch report](http://blog.linoxide.com/wp-content/uploads/2015/08/9.png)
+
+The Logwatch report ends by showing the secure sudo logs and the disk space usage of the root directory, as shown below.
+
+![Logwatch end report](http://blog.linoxide.com/wp-content/uploads/2015/08/10.png)
+
+You can also check the emails generated for the Logwatch reports by opening the following file.
+
+    root@ubuntu-15:~# vim /var/mail/root
+
+Here you will be able to see all the emails generated for your configured users, with their message delivery status.
+
+### More about Logwatch ###
+
+Logwatch is a great tool, and if you are interested in learning more about it you can get much help from the few commands below.
+
+    root@ubuntu-15:~# man logwatch
+
+The above command opens the complete user manual for Logwatch; read it carefully, and to exit the manual simply press "q".
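+
+A few example invocations also make the manual easier to digest. The flags below are standard Logwatch options, though which service names are available depends on what is installed on your system:
+
+    # detailed report for today only, printed to the terminal
+    root@ubuntu-15:~# logwatch --detail High --range today
+
+    # report on a single service, e.g., the SSH daemon, for yesterday
+    root@ubuntu-15:~# logwatch --service sshd --range yesterday
+
+    # mail an HTML-formatted report to a specific address
+    root@ubuntu-15:~# logwatch --format html --mailto user@test.com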
+
+To get help about logwatch command usage, you can run the following help command for further details.
+
+    root@ubuntu-15:~# logwatch --help
+
+### Conclusion ###
+
+At the end of this tutorial you have learned the complete setup of Logwatch on Ubuntu 15.04, including its installation and configuration. Now you can start monitoring your logs in a customizable form, whether you monitor the logs of all the services running on your system or customize it to send you reports about specific services on scheduled days. So, let's use this tool, and feel free to leave us a comment if you face any issue or need to know more about Logwatch usage.
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/
+
+作者:[Kashif Siddique][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
\ No newline at end of file
diff --git a/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md b/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md
new file mode 100644
index 0000000000..c30619d13e
--- /dev/null
+++ b/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md
@@ -0,0 +1,162 @@
+Linux and Unix Test Disk I/O Performance With dd Command
+================================================================================
+How can I use the dd command on Linux to test the I/O performance of my hard disk drive? How do I check the performance of a hard drive, including the read and write speed, on a Linux operating system?
+
+You can use the following commands on Linux or Unix-like systems for a simple I/O performance test:
+
+- **dd command** : used to measure the write performance of a disk device on Linux and Unix-like systems.
+- **hdparm command** : used to get/set hard disk parameters, including testing the read and caching performance of a disk device on Linux-based systems.
+
+In this tutorial you will learn how to use the dd command to test disk I/O performance.
+
+### Use the dd command to monitor the reading and writing performance of a disk device ###
+
+- Open a shell prompt.
+- Or log in to a remote server via ssh.
+- Use the dd command to measure server throughput (write speed): `dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync`
+- Use the dd command to measure server latency: `dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync`
+
+#### Understanding dd command options ####
+
+In this example, I'm using a RAID-10 (Adaptec 5405Z with SAS SSDs) array running on an Ubuntu Linux 14.04 LTS server. The basic syntax is:
+
+    dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync
+    ## GNU dd syntax ##
+    dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
+    ## OR alternate syntax for GNU/dd ##
+    dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync
+
+Sample outputs:
+
+![Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd](http://s0.cyberciti.org/uploads/faq/2015/08/dd-server-test-io-speed-output.jpg)
+Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd
+
+Please note that one gigabyte was written for the test, and 135 MB/s was the server throughput for this test.
+Where,
+
+- `if=/dev/zero (if=/dev/input.file)` : The name of the input file you want dd to read from.
+- `of=/tmp/test1.img (of=/path/to/output.file)` : The name of the output file you want dd to write to.
+- `bs=1G (bs=block-size)` : Sets the size of the block you want dd to use. One gigabyte was written for the test.
+- `count=1 (count=number-of-blocks)` : The number of blocks you want dd to read.
+- `oflag=dsync` : Use synchronized I/O for data. Do not skip this option. It gets rid of caching and gives you good, accurate results.
+- `conv=fdatasync` : Again, this tells dd to require a complete "sync" once, right before it exits. This option is equivalent to oflag=dsync.
+
+In this example, 512 bytes were written one thousand times to measure the RAID-10 server's latency:
+
+    dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync
+
+Sample outputs:
+
+    1000+0 records in
+    1000+0 records out
+    512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s
+
+Please note that server throughput and latency depend upon server/application load too. So I recommend that you run these tests on a freshly rebooted server as well as at peak time to get a better idea about your workload. You can then compare these numbers across all your devices.
+
+#### But why are the server throughput and latency so low? ####
+
+Low values do not mean you are using slow hardware. The values can be low because of the hardware RAID-10 controller's cache.
+
+Use the hdparm command to see buffered and cached disk read speeds.
+
+I suggest you run the following commands 2 or 3 times to perform timings of device reads for benchmark and comparison purposes:
+
+    ### Buffered disk read test for /dev/sda ##
+    hdparm -t /dev/sda1
+    ## OR ##
+    hdparm -t /dev/sda
+
+To perform timings of cache reads for benchmark and comparison purposes, again run the following command 2-3 times (note the -T option):
+
+    ## Cache read benchmark for /dev/sda ###
+    hdparm -T /dev/sda1
+    ## OR ##
+    hdparm -T /dev/sda
+
+Or combine both tests:
+
+    hdparm -Tt /dev/sda
+
+Sample outputs:
+
+![Fig.02: Linux hdparm command to test reading and caching disk performance](http://s0.cyberciti.org/uploads/faq/2015/08/hdparam-output.jpg)
+Fig.02: Linux hdparm command to test reading and caching disk performance
+
+Again, note that due to file system caching on file operations, you will always see high read rates.
+
+**Use dd command on Linux to test read speed**
+
+To get accurate read test data, first discard caches before testing by running the following commands:
+
+    sync
+    echo 3 | sudo tee /proc/sys/vm/drop_caches
+    time dd if=/path/to/bigfile of=/dev/null bs=8k
+
+**Linux Laptop example**
+
+Run the following command:
+
+    ### Debian Laptop Throughput With Cache ##
+    dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct
+
+    ### Deactivate the cache ###
+    hdparm -W0 /dev/sda
+
+    ### Debian Laptop Throughput Without Cache ##
+    dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct
+
+**Apple OS X Unix (Macbook pro) example**
+
+GNU dd has many more options, but the OS X/BSD and Unix-like dd command needs to be run as follows to test real disk I/O and not memory; add the sync option as shown:
+
+    ## Run command 2-3 times to get good results ###
+    time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync"
+
+Sample outputs:
+
+    1024+0 records in
+    1024+0 records out
+    104857600 bytes transferred in 0.165040 secs (635346520 bytes/sec)
+
+    real 0m0.241s
+    user 0m0.004s
+    sys 0m0.113s
+
+So I'm getting a write speed of 635346520 bytes/sec (about 635 MB/s) on my MBP.
+
+**Not a fan of command line...?**
+
+You can use the disk utility (gnome-disk-utility) on a Linux or Unix-based system to get the same information. The following screenshot is taken from my Fedora Linux v22 VM.
+
+**Graphical method**
+
+Click on "Activities" or press the "Super" key to switch between the Activities overview and the desktop. Type "Disks".
+
+![Fig.03: Start the Gnome disk utility](http://s0.cyberciti.org/uploads/faq/2015/08/disk-1.jpg)
+Fig.03: Start the Gnome disk utility
+
+Select your hard disk in the left pane, click on the configure button, and click on "Benchmark partition":
+
+![Fig.04: Benchmark disk/partition](http://s0.cyberciti.org/uploads/faq/2015/08/disks-2.jpg)
+Fig.04: Benchmark disk/partition
+
+Finally, click on the "Start Benchmark..." button (you may be prompted for the admin username and password):
+
+![Fig.05: Final benchmark result](http://s0.cyberciti.org/uploads/faq/2015/08/disks-3.jpg)
+Fig.05: Final benchmark result
+
+Which method and command do I recommend?
+
+- I recommend the dd command on all Unix-like systems (`time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync"`).
+- If you are using GNU/Linux, use the dd command (`dd if=/dev/zero of=/tmp/testALT.img bs=1G count=1 conv=fdatasync`).
+- Make sure you adjust the count and bs arguments as per your setup to get a good set of results.
+- The GUI method is recommended only for Linux/Unix laptop users running the Gnome 2 or 3 desktop.
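+
+If you re-run these write tests often, a tiny wrapper script saves typing. The following is only a minimal sketch; adjust the run count, block size and target path for your own setup:
+
+    #!/bin/bash
+    # dd-bench.sh - repeat the dd write test and print each throughput line
+    runs=3
+    testfile=/tmp/ddbench.img
+    for i in $(seq 1 "$runs"); do
+        # dd reports its throughput summary on stderr, so capture that stream
+        dd if=/dev/zero of="$testfile" bs=1G count=1 oflag=dsync 2>&1 | tail -n 1
+        rm -f "$testfile"
+    done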
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd-command/
+
+作者:Vivek Gite
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file
From fc087db114a358509e8efdd90b708b5c6e72548a Mon Sep 17 00:00:00 2001
From: runningwater
Date: Thu, 13 Aug 2015 10:49:04 +0800
Subject: [PATCH 202/207] by runningwater

---
 .../tech/20150813 How to Install Logwatch on Ubuntu 15.04.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md b/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
index fa9458dcb4..24c71b0cbe 100644
--- a/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
+++ b/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
@@ -1,3 +1,4 @@
+(translating by runningwater)
 How to Install Logwatch on Ubuntu 15.04
 ================================================================================
 Hi, today we are going to illustrate the setup of Logwatch on the Ubuntu 15.04 operating system, though it can be used on any Linux or UNIX-like operating system. Logwatch is a customizable log analyzer and report-based log-monitoring system that goes through your logs for a given period of time and makes a report on the areas that you wish, with the details you want. It is an easy tool to install, configure and review, and the data it provides helps you take actions that will improve security. Logwatch scans the log files of major operating system components, like SSH and the web server, and forwards a summary that contains the valuable items that need to be looked at.
@@ -129,9 +130,9 @@ At the end of this tutorial you have learned the complete setup of Logwatch on Ub
 
 via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/
 
 作者:[Kashif Siddique][a]
-译者:[译者ID](https://github.com/译者ID)
+译者:[runningwater](https://github.com/runningwater)
 校对:[校对者ID](https://github.com/校对者ID)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
 
-[a]:http://linoxide.com/author/kashifs/
\ No newline at end of file
+[a]:http://linoxide.com/author/kashifs/
From 14f172121f66f179cb7525924c2c46a0cdb064b2 Mon Sep 17 00:00:00 2001
From: DeadFire
Date: Thu, 13 Aug 2015 11:01:53 +0800
Subject: [PATCH 203/207] =?UTF-8?q?20150813-3=20=E9=80=89=E9=A2=98?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...ation GA with OData in Docker Container.md | 102 ++++++++++++++++++
 1 file changed, 102 insertions(+)
 create mode 100644 sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md

diff --git a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md
new file mode 100644
index 0000000000..0893b9a361
--- /dev/null
+++ b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md
@@ -0,0 +1,102 @@
+Howto Run JBoss Data Virtualization GA with OData in Docker Container
+================================================================================
+Hi everyone, today we'll learn how to run JBoss Data Virtualization 6.0.0.GA with OData in a Docker container.
+JBoss Data Virtualization is a data supply and integration platform that transforms data scattered across multiple sources, treats it as a single source, and delivers the required data as actionable information, at business speed, to any application or user. JBoss Data Virtualization can help us easily combine and transform data into reusable, business-friendly data models and make unified data easily consumable through open standard interfaces. It offers comprehensive data abstraction, federation, integration, transformation, and delivery capabilities to combine data from one or multiple sources into reusable models for agile data utilization and sharing. For more information about JBoss Data Virtualization, we can check out [its official page][1]. Docker is an open-source platform to pack, ship, and run any application as a lightweight container. Running JBoss Data Virtualization with OData in a Docker container makes it easy to handle and launch.
+
+Here is an easy-to-follow tutorial on how we can run JBoss Data Virtualization with OData in a Docker container.
+
+### 1. Cloning the Repository ###
+
+First of all, we'll want to clone the repository of OData with Data Virtualization, i.e., [https://github.com/jbossdemocentral/dv-odata-docker-integration-demo][2], using the git command. As we have an Ubuntu 15.04 Linux distribution running on our machine, we'll need to install git first using the apt-get command.
+
+    # apt-get install git
+
+Then, after installing git, we'll clone the repository by running the command below.
+
+    # git clone https://github.com/jbossdemocentral/dv-odata-docker-integration-demo
+
+    Cloning into 'dv-odata-docker-integration-demo'...
+    remote: Counting objects: 96, done.
+    remote: Total 96 (delta 0), reused 0 (delta 0), pack-reused 96
+    Unpacking objects: 100% (96/96), done.
+    Checking connectivity... done.
+
+### 2. Downloading JBoss Data Virtualization Installer ###
+
+Now, we'll need to download the JBoss Data Virtualization installer from the download page, i.e., [http://www.jboss.org/products/datavirt/download/][3]. After we download **jboss-dv-installer-6.0.0.GA-redhat-4.jar**, we'll need to keep it under the directory named **software**.
+
+### 3. Building the Docker Image ###
+
+Next, after we have downloaded the JBoss Data Virtualization installer, we'll build the Docker image using the Dockerfile and the resources we just cloned from the repository.
+
+    # cd dv-odata-docker-integration-demo/
+    # docker build -t jbossdv600 .
+
+    ...
+    Step 22 : USER jboss
+     ---> Running in 129f701febd0
+     ---> 342941381e37
+    Removing intermediate container 129f701febd0
+    Step 23 : EXPOSE 8080 9990 31000
+     ---> Running in 61e6d2c26081
+     ---> 351159bb6280
+    Removing intermediate container 61e6d2c26081
+    Step 24 : CMD $JBOSS_HOME/bin/standalone.sh -c standalone.xml -b 0.0.0.0 -bmanagement 0.0.0.0
+     ---> Running in a9fed69b3000
+     ---> 407053dc470e
+    Removing intermediate container a9fed69b3000
+    Successfully built 407053dc470e
+
+Note: Here, we assume that you have already installed Docker and that it is running on your machine.
+
+### 4. Starting the Docker Container ###
+
+As we have built the Docker image of JBoss Data Virtualization with OData, we'll now run the Docker container and publish its port with the -p flag. To do so, we'll run the following command.
+
+    # docker run -p 8080:8080 -d -t jbossdv600
+
+    7765dee9cd59c49ca26850e88f97c21f46859d2dc1d74166353d898773214c9c
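+
+Before moving on, it is worth confirming that the container actually stayed up; `docker ps` lists the running containers, and its output should include the ID that the run command just printed:
+
+    # docker ps
+
+If the container is missing from the list, `docker logs <container-id>` (substituting the ID from above) usually shows why it exited.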
+
+### 5. Getting the Container IP ###
+
+After we have started the Docker container, we'll want to get the IP address of the running container. To do so, we'll run the docker inspect command followed by the running container's ID.
+
+    # docker inspect <container-id>
+
+    ...
+    "NetworkSettings": {
+        "Bridge": "",
+        "EndpointID": "3e94c5900ac5954354a89591a8740ce2c653efde9232876bc94878e891564b39",
+        "Gateway": "172.17.42.1",
+        "GlobalIPv6Address": "",
+        "GlobalIPv6PrefixLen": 0,
+        "HairpinMode": false,
+        "IPAddress": "172.17.0.8",
+        "IPPrefixLen": 16,
+        "IPv6Gateway": "",
+        "LinkLocalIPv6Address": "",
+        "LinkLocalIPv6PrefixLen": 0,
+
+### 6. Web Interface ###
+
+Now, if everything went as expected, we'll see the login screen of JBoss Data Virtualization with OData when pointing our web browser to http://container-ip:8080/, and the JBoss Management console at http://container-ip:9990. The management credentials are username admin and password redhat1!, whereas the Data Virtualization credentials are username user and password user. After that, we can navigate the contents via the web interface.
+
+**Note**: It is strongly recommended to change the passwords as soon as possible after the first login. Thanks :)
+
+### Conclusion ###
+
+Finally, we've successfully run a Docker container running JBoss Data Virtualization with an OData multisource virtual database. JBoss Data Virtualization is really an awesome platform for virtualizing data from multiple different sources, transforming it into reusable, business-friendly data models, and producing data that is easily consumable through open standard interfaces. Deploying JBoss Data Virtualization with OData has been very easy, secure, and fast to set up with Docker technology. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/run-jboss-data-virtualization-ga-odata-docker-container/
+
+作者:[Arun Pyasi][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arunp/
+[1]:http://www.redhat.com/en/technologies/jboss-middleware/data-virtualization
+[2]:https://github.com/jbossdemocentral/dv-odata-docker-integration-demo
+[3]:http://www.jboss.org/products/datavirt/download/
\ No newline at end of file
From e5c72db0c2ee4f1b0b0a57d65de1c17ecbdfe2bd Mon Sep 17 00:00:00 2001
From: DongShuaike
Date: Thu, 13 Aug 2015 11:29:35 +0800
Subject: [PATCH 204/207] Update 20150813 Linux and Unix Test Disk I O
 Performance With dd Command.md

---
 ...inux and Unix Test Disk I O Performance With dd Command.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md b/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md
index c30619d13e..bcd9f8455f 100644
--- a/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md
+++ b/sources/tech/20150813 Linux and Unix Test Disk I O Performance With dd Command.md
@@ -1,3 +1,5 @@
+DongShuaike is translating.
+
 Linux and Unix Test Disk I/O Performance With dd Command
 ================================================================================
 How can I use the dd command on Linux to test the I/O performance of my hard disk drive? How do I check the performance of a hard drive, including the read and write speed, on a Linux operating system?
@@ -159,4 +161,4 @@ via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd
 译者:[译者ID](https://github.com/译者ID)
 校对:[校对者ID](https://github.com/校对者ID)
 
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
\ No newline at end of file
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
From 363e76972134acc20f1b92efe0b5c4bf2f145615 Mon Sep 17 00:00:00 2001
From: wxy
Date: Thu, 13 Aug 2015 13:34:50 +0800
Subject: [PATCH 205/207] PUB:20150803 Linux Logging Basics @FSSlc

---
 .../20150803 Linux Logging Basics.md | 52 +++++++++----------
 1 file changed, 25 insertions(+), 27 deletions(-)
 rename {translated/tech => published}/20150803 Linux Logging Basics.md (53%)

diff --git a/translated/tech/20150803 Linux Logging Basics.md b/published/20150803 Linux Logging Basics.md
similarity index 53%
rename from translated/tech/20150803 Linux Logging Basics.md
rename to published/20150803 Linux Logging Basics.md
index 00acdf183e..de8a5d661c 100644
--- a/translated/tech/20150803 Linux Logging Basics.md
+++ b/published/20150803 Linux Logging Basics.md
@@ -1,6 +1,6 @@
 Linux 日志基础
 ================================================================================
-首先,我们将描述有关 Linux 日志是什么,到哪儿去找它们以及它们是如何创建的基础知识。如果你已经知道这些,请随意跳至下一节。
+首先,我们将描述有关 Linux 日志是什么,到哪儿去找它们,以及它们是如何创建的基础知识。如果你已经知道这些,请随意跳至下一节。
 
 ### Linux 系统日志 ###
 
一些最为重要的 Linux 系统日志包括:

- `/var/log/syslog` 或 `/var/log/messages` 存储所有的全局系统活动数据,包括开机信息。基于 Debian 的系统如 Ubuntu 在 `/var/log/syslog` 中存储它们,而基于 RedHat 的系统如 RHEL 或 CentOS 则在 `/var/log/messages` 中存储它们。
- `/var/log/auth.log` 或 `/var/log/secure` 存储来自可插拔认证模块(PAM)的日志,包括成功的登录,失败的登录尝试和认证方式。Ubuntu 和 Debian 在 `/var/log/auth.log` 中存储认证信息,而 RedHat 和 CentOS 则在 `/var/log/secure` 中存储该信息。
- `/var/log/kern` 存储内核的错误和警告数据,这对于排除与定制内核相关的故障尤为实用。
- `/var/log/cron` 存储有关 cron 作业的信息。使用这个数据来确保你的 cron 作业正成功地运行着。

Digital Ocean 有一个关于这些文件的完整[教程][1],介绍了 rsyslog 如何在常见的发行版本如 RedHat 和 CentOS 中创建它们。

应用程序也会在这个目录中写入日志文件。例如像 Apache,Nginx,MySQL 等常见的服务器程序可以在这个目录中写入日志文件。其中一些日志文件由应用程序自己创建,其他的则通过 syslog (具体见下文)来创建。

### 什么是 Syslog? ###

Linux 系统日志文件是如何创建的呢?答案是通过 syslog 守护程序,它在 syslog 套接字 `/dev/log` 上监听日志信息,然后将它们写入适当的日志文件中。

单词“syslog” 代表几个意思,并经常被用来简称如下的几个名称之一:
1. **Syslog 守护进程** — 一个用来接收、处理和发送 syslog 信息的程序。它可以[远程发送 syslog][2] 到一个集中式的服务器或写入到一个本地文件。常见的例子包括 rsyslogd 和 syslog-ng。在这种使用方式中,人们常说“发送到 syslog”。
1. **Syslog 协议** — 一个指定日志如何通过网络来传送的传输协议和一个针对 syslog 信息(具体见下文) 的数据格式的定义。它在 [RFC-5424][3] 中被正式定义。对于文本日志,标准的端口是 514,对于加密日志,端口是 6514。在这种使用方式中,人们常说“通过 syslog 传送”。
1. **Syslog 信息** — syslog 格式的日志信息或事件,它包括一个带有几个标准字段的消息头。在这种使用方式中,人们常说“发送 syslog”。

Syslog 信息或事件包括一个带有几个标准字段的消息头,可以使分析和路由更方便。它们包括时间戳、应用程序的名称、在系统中信息来源的分类或位置、以及事件的优先级。

下面展示的是一个包含 syslog 消息头的日志信息,它来自于控制着到该系统的远程登录的 sshd 守护进程,这个信息描述的是一次失败的登录尝试:

    <34>1 2003-10-11T22:14:15.003Z server1.com sshd - - pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.2.2

### Syslog 格式和字段 ###

每条 syslog 信息包含一个带有字段的信息头,这些字段是结构化的数据,使得分析和路由事件更加容易。下面是我们使用的用来产生上面的 syslog 例子的格式,你可以将每个值匹配到一个特定的字段的名称上。

    <%pri%>%protocol-version% %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%n

下面,你将看到一些在查找或排错时最常使用的 syslog 字段:

#### 时间戳 ####

[时间戳][4] (上面的例子为 2003-10-11T22:14:15.003Z) 暗示了在系统中发送该信息的时间和日期。这个时间在另一系统上接收该信息时可能会有所不同。上面例子中的时间戳可以分解为:

- **2003-10-11** 年,月,日。
- **T** 为时间戳的必需元素,它将日期和时间分隔开。
- **22:14:15.003** 是 24 小时制的时间,包括进入下一秒的毫秒数(**003**)。
- **Z** 是一个可选元素,指的是 UTC 时间,除了 Z,这个例子还可以包括一个偏移量,例如 -08:00,这意味着时间从 UTC 偏移 8 小时,即 PST 时间。

#### 主机名 ####

[主机名][5] 字段(在上面的例子中对应 server1.com)指的是主机的名称或发送信息的系统。

#### 应用名 ####

[应用名][6] 字段(在上面的例子中对应 sshd:auth)指的是发送信息的程序的名称。
#### 优先级 ####

优先级字段或缩写为 [pri][7] (在上面的例子中对应 <34>) 告诉我们这个事件有多紧急或多严峻。它由两个数字字段组成:设备字段和紧急性字段。紧急性字段从代表 debug 类事件的数字 7 一直到代表紧急事件的数字 0 。设备字段描述了哪个进程创建了该事件。它从代表内核信息的数字 0 到代表本地应用使用的 23 。

Pri 有两种输出方式。第一种是以一个单独的数字表示,可以这样计算:先用设备字段的值乘以 8,再加上紧急性字段的值:(设备字段)(8) + (紧急性字段)。第二种是 pri 文本,将以“设备字段.紧急性字段” 的字符串格式输出。后一种格式更方便阅读和搜索,但占据更多的存储空间。

--------------------------------------------------------------------------------

via: http://www.loggly.com/ultimate-guide/logging/linux-logging-basics/

作者:[Jason Skowronski][a1],[Amy Echeverri][a2],[Sadequl Hussain][a3]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
From 1e2265405f51c68843b693a2b28d1895b6f8d286 Mon Sep 17 00:00:00 2001
From: Jindong Huang
Date: Thu, 13 Aug 2015 14:31:47 +0800
Subject: [PATCH 206/207]
 =?UTF-8?q?=E3=80=90Translating=20by=20dingdongnig?=
 =?UTF-8?q?etou=E3=80=9120150813=20Linux=20file=20system=20hierarchy=20v2.?=
 =?UTF-8?q?0.md?=

---
 sources/tech/20150813 Linux file system hierarchy v2.0.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/sources/tech/20150813 Linux file system hierarchy v2.0.md b/sources/tech/20150813 Linux file system hierarchy v2.0.md
index 9df6d23dcf..0021bb57c9 100644
--- a/sources/tech/20150813 Linux file system hierarchy v2.0.md
+++ b/sources/tech/20150813 Linux file system hierarchy v2.0.md
@@ -1,3 +1,6 @@
+
+Translating by dingdongnigetou
+
 Linux file system hierarchy v2.0
 ================================================================================
 What is a file in Linux? What is a file system in Linux? Where are all the configuration files? Where do I keep my downloaded applications? Is there really a file system standard structure in Linux? Well, the image above explains the Linux file system hierarchy in a very simple and uncomplicated way. It’s very useful when you’re looking for a configuration file or a binary file. I’ve added some explanation and examples below, but that’s TL;DR.
@@ -435,4 +438,4 @@ via: http://www.blackmoreops.com/2015/06/18/linux-file-system-hierarchy-v2-0/
 
 [1]:http://www.blackmoreops.com/2015/02/15/in-light-of-recent-linux-exploits-linux-security-audit-is-a-must/
 [2]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png
-[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf
\ No newline at end of file
+[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf
From 4aec4761431227cc01ac683281d99eef6c87460e Mon Sep 17 00:00:00 2001
From: XIAOYU <1136299502@qq.com>
Date: Thu, 13 Aug 2015 19:41:10 +0800
Subject: [PATCH 207/207] translating by xiaoyu33

 translating by xiaoyu33
---
 ...kr Is An Open-Source RSS News Ticker for Linux Desktops.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md b/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md
index 638482a144..ccbbd3abd8 100644
--- a/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md
+++ b/sources/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md
@@ -1,3 +1,5 @@
+translating by xiaoyu33
+
 Tickr Is An Open-Source RSS News Ticker for Linux Desktops
 ================================================================================
 ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/rss-tickr.jpg)
@@ -92,4 +94,4 @@ via: http://www.omgubuntu.co.uk/2015/06/tickr-open-source-desktop-rss-news-ticke
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
 
 [a]:https://plus.google.com/117485690627814051450/?rel=author
-[1]:apt://tickr
\ No newline at end of file
+[1]:apt://tickr