diff --git a/translated/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md b/published/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md similarity index 97% rename from translated/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md rename to published/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md index 23e2314576..1e17c7c6d3 100644 --- a/translated/tech/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md +++ b/published/20150225 How to set up IPv6 BGP peering and filtering in Quagga BGP router.md @@ -1,5 +1,6 @@ -如何设置在Quagga BGP路由器中设置IPv6的BGP对等体和过滤 +如何设置在 Quagga BGP 路由器中设置 IPv6 的 BGP 对等体和过滤 ================================================================================ + 在之前的教程中,我们演示了如何使用Quagga建立一个[完备的BGP路由器][1]和配置[前缀过滤][2]。在本教程中,我们会向你演示如何创建IPv6 BGP对等体并通过BGP通告IPv6前缀。同时我们也将演示如何使用前缀列表和路由映射特性来过滤通告的或者获取到的IPv6前缀。 ### 拓扑 ### @@ -47,7 +48,7 @@ Quagga内部提供一个叫作vtysh的shell,其界面与那些主流路由厂 # vtysh -提示将改为: +提示符将改为: router-a# @@ -65,7 +66,7 @@ Quagga内部提供一个叫作vtysh的shell,其界面与那些主流路由厂 router-a# configure terminal -提示将变更成: +提示符将变更成: router-a(config)# @@ -246,13 +247,13 @@ Quagga内部提供一个叫作vtysh的shell,其界面与那些主流路由厂 via: http://xmodulo.com/ipv6-bgp-peering-filtering-quagga-bgp-router.html 作者:[Sarmed Rahman][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[martin2011qi](https://github.com/martin2011qi) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://xmodulo.com/author/sarmed -[1]:http://xmodulo.com/centos-bgp-router-quagga.html +[1]:https://linux.cn/article-4232-1.html [2]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html [3]:http://ask.xmodulo.com/open-port-firewall-centos-rhel.html [4]:http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html diff --git a/translated/tech/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md b/published/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md similarity index 60% rename from translated/tech/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md rename to published/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md index 8ad03dd06c..ac93dceb50 100644 --- a/translated/tech/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md +++ b/published/20150722 Howto Interactively Perform Tasks with Docker using Kitematic.md @@ -1,8 +1,9 @@ -如何在 Docker 中通过 Kitematic 交互式执行任务 +如何在 Windows 上通过 Kitematic 使用 Docker ================================================================================ -在本篇文章中,我们会学习如何在 Windows 操作系统上安装 Kitematic 以及部署一个 Hello World Nginx Web 服务器。Kitematic 是一个自由开源软件,它有现代化的界面设计使得允许我们在 Docker 中交互式执行任务。Kitematic 设计非常漂亮、界面也很不错。我们可以简单快速地开箱搭建我们的容器而不需要输入命令,我们可以在图形用户界面中通过简单的点击从而在容器上部署我们的应用。Kitematic 集成了 Docker Hub,允许我们搜索、拉取任何需要的镜像,并在上面部署应用。它同时也能很好地切换到命令行用户接口模式。目前,它包括了自动映射端口、可视化更改环境变量、配置卷、精简日志以及其它功能。 -下面是在 Windows 上安装 Kitematic 并部署 Hello World Nginx Web 服务器的 3 个简单步骤。 +在本篇文章中,我们会学习如何在 Windows 操作系统上安装 Kitematic 以及部署一个测试性的 Nginx Web 服务器。Kitematic 是一个具有现代化的界面设计的自由开源软件,它可以让我们在 Docker 中交互式执行任务。Kitematic 设计的非常漂亮、界面美观。使用它,我们可以简单快速地开箱搭建我们的容器而不需要输入命令,可以在图形用户界面中通过简单的点击从而在容器上部署我们的应用。Kitematic 集成了 Docker Hub,允许我们搜索、拉取任何需要的镜像,并在上面部署应用。它同时也能很好地切换到命令行用户接口模式。目前,它包括了自动映射端口、可视化更改环境变量、配置卷、流式日志以及其它功能。 + +下面是在 Windows 上安装 Kitematic 并部署测试性 Nginx Web 服务器的 3 个简单步骤。 ### 1. 下载 Kitematic ### @@ -16,15 +17,15 @@ ### 2. 
安装 Kitematic ### -下载好可执行安装程序之后,我们现在打算在我们的 Windows 操作系统上安装 Kitematic。安装程序现在会开始下载并安装运行 Kitematic 需要的依赖,包括 Virtual Box 和 Docker。如果已经在系统上安装了 Virtual Box,它会把它升级到最新版本。安装程序会在几分钟内完成,但取决于你网络和系统的速度。如果你还没有安装 Virtual Box,它会问你是否安装 Virtual Box 网络驱动。建议安装它,因为它有助于 Virtual Box 的网络。 +下载好可执行安装程序之后,我们现在就可以在我们的 Windows 操作系统上安装 Kitematic了。安装程序现在会开始下载并安装运行 Kitematic 需要的依赖软件,包括 Virtual Box 和 Docker。如果已经在系统上安装了 Virtual Box,它会把它升级到最新版本。安装程序会在几分钟内完成,但取决于你网络和系统的速度。如果你还没有安装 Virtual Box,它会问你是否安装 Virtual Box 网络驱动。建议安装它,因为它用于 Virtual Box 的网络功能。 ![安装 Kitematic](http://blog.linoxide.com/wp-content/uploads/2015/06/installing-kitematic.png) -需要的依赖 Docker 和 Virtual Box 安装完成并运行后,会让我们登录到 Docker Hub。如果我们还没有账户或者还不想登录,可以点击 **SKIP FOR NOW** 继续后面的步骤。 +所需的依赖 Docker 和 Virtual Box 安装完成并运行后,会让我们登录到 Docker Hub。如果我们还没有账户或者还不想登录,可以点击 **SKIP FOR NOW** 继续后面的步骤。 ![登录 Docker Hub](http://blog.linoxide.com/wp-content/uploads/2015/06/login-docker-hub.jpg) -如果你还没有账户,你可以在应用程序上点击注册链接并在 Docker Hub 上创建账户。 +如果你还没有账户,你可以在应用程序上点击注册(Sign Up)链接并在 Docker Hub 上创建账户。 完成之后,就会出现 Kitematic 应用程序的第一个界面。正如下面看到的这样。我们可以搜索可用的 docker 镜像。 @@ -50,7 +51,11 @@ ### 总结 ### -我们终于成功在 Windows 操作系统上安装了 Kitematic 并部署了一个 Hello World Ngnix 服务器。总是推荐下载安装 Kitematic 最新的发行版,因为会增加很多新的高级功能。由于 Docker 运行在 64 位平台,当前 Kitematic 也是为 64 位操作系统构建。它只能在 Windows 7 以及更高版本上运行。在这篇教程中,我们部署了一个 Nginx Web 服务器,类似地我们可以在 Kitematic 中简单的点击就能通过镜像部署任何 docker 容器。Kitematic 已经有可用的 Mac OS X 和 Windows 版本,Linux 版本也在开发中很快就会发布。如果你有任何疑问、建议或者反馈,请在下面的评论框中写下来以便我们更改地改进或更新我们的内容。非常感谢!Enjoy :-) +我们终于成功在 Windows 操作系统上安装了 Kitematic 并部署了一个 Hello World Ngnix 服务器。推荐下载安装 Kitematic 最新的发行版,因为会增加很多新的高级功能。由于 Docker 运行在 64 位平台,当前 Kitematic 也是为 64 位操作系统构建。它只能在 Windows 7 以及更高版本上运行。 + +在这篇教程中,我们部署了一个 Nginx Web 服务器,类似地我们可以在 Kitematic 中简单的点击就能通过镜像部署任何 docker 容器。Kitematic 已经有可用的 Mac OS X 和 Windows 版本,Linux 版本也在开发中很快就会发布。 + +如果你有任何疑问、建议或者反馈,请在下面的评论框中写下来以便我们更改地改进或更新我们的内容。非常感谢!Enjoy :-) -------------------------------------------------------------------------------- @@ -58,7 +63,7 @@ via: http://linoxide.com/linux-how-to/interactively-docker-kitematic/ 作者:[Arun Pyasi][a] 译者:[ictlyh](https://github.com/ictlyh) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/published/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md new file mode 100644 index 0000000000..0f08cf12fa --- /dev/null +++ b/published/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md @@ -0,0 +1,129 @@ +如何使用 Weave 以及 Docker 搭建 Nginx 反向代理/负载均衡服务器 +================================================================================ + +Hi, 今天我们将会学习如何使用 Weave 和 Docker 搭建 Nginx 的反向代理/负载均衡服务器。Weave 可以创建一个虚拟网络将 Docker 容器彼此连接在一起,支持跨主机部署及自动发现。它可以让我们更加专注于应用的开发,而不是基础架构。Weave 提供了一个如此棒的环境,仿佛它的所有容器都属于同个网络,不需要端口/映射/连接等的配置。容器中的应用提供的服务在 weave 网络中可以轻易地被外部世界访问,不论你的容器运行在哪里。在这个教程里我们将会使用 weave 快速并且简单地将 nginx web 服务器部署为一个负载均衡器,反向代理一个运行在 Amazon Web Services 里面多个节点上的 docker 容器中的简单 php 应用。这里我们将会介绍 WeaveDNS,它提供一个不需要改变代码就可以让容器利用主机名找到的简单方式,并且能够让其他容器通过主机名连接彼此。 + +在这篇教程里,我们将使用 nginx 来将负载均衡分配到一个运行 Apache 的容器集合。最简单轻松的方法就是使用 Weave 来把运行在 ubuntu 上的 docker 容器中的 nginx 配置成负载均衡服务器。 + +### 1. 
搭建 AWS 实例 ### + +首先,我们需要搭建 Amzaon Web Service 实例,这样才能在 ubuntu 下用 weave 跑 docker 容器。我们将会使用[AWS 命令行][1] 来搭建和配置两个 AWS EC2 实例。在这里,我们使用最小的可用实例,t1.micro。我们需要一个有效的**Amazon Web Services 账户**使用 AWS 命令行界面来搭建和配置。我们先在 AWS 命令行界面下使用下面的命令将 github 上的 weave 仓库克隆下来。 + + $ git clone http://github.com/fintanr/weave-gs + $ cd weave-gs/aws-nginx-ubuntu-simple + +在克隆完仓库之后,我们执行下面的脚本,这个脚本将会部署两个 t1.micro 实例,每个实例中都是 ubuntu 作为操作系统并用 weave 跑着 docker 容器。 + + $ sudo ./demo-aws-setup.sh + +在这里,我们将会在以后用到这些实例的 IP 地址。这些地址储存在一个 weavedemo.env 文件中,这个文件创建于执行 demo-aws-setup.sh 脚本期间。为了获取这些 IP 地址,我们需要执行下面的命令,命令输出类似下面的信息。 + + $ cat weavedemo.env + + export WEAVE_AWS_DEMO_HOST1=52.26.175.175 + export WEAVE_AWS_DEMO_HOST2=52.26.83.141 + export WEAVE_AWS_DEMO_HOSTCOUNT=2 + export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141) + +请注意这些不是固定的 IP 地址,AWS 会为我们的实例动态地分配 IP 地址。 + +我们在 bash 下执行下面的命令使环境变量生效。 + + . ./weavedemo.env + +### 2. 启动 Weave 和 WeaveDNS ### + +在安装完实例之后,我们将会在每台主机上启动 weave 以及 weavedns。Weave 以及 weavedns 使得我们能够轻易地将容器部署到一个全新的基础架构以及配置中, 不需要改变代码,也不需要去理解像 Ambassador 容器以及 Link 机制之类的概念。下面是在第一台主机上启动 weave 以及 weavedns 的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 + $ sudo weave launch + $ sudo weave launch-dns 10.2.1.1/24 + +下一步,我也准备在第二台主机上启动 weave 以及 weavedns。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 + $ sudo weave launch $WEAVE_AWS_DEMO_HOST1 + $ sudo weave launch-dns 10.2.1.2/24 + +### 3. 启动应用容器 ### + +现在,我们准备跨两台主机启动六个容器,这两台主机都用 Apache2 Web 服务实例跑着简单的 php 网站。为了在第一个 Apache2 Web 服务器实例跑三个容器, 我们将会使用下面的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 + $ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache + +在那之后,我们将会在第二个实例上启动另外三个容器,请使用下面的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 + $ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache + $ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache + +注意: 在这里,--with-dns 选项告诉容器使用 weavedns 来解析主机名,-h x.weave.local 则使得 weavedns 能够解析该主机。 + +### 4. 启动 Nginx 容器 ### + +在应用容器如预期的运行后,我们将会启动 nginx 容器,它将会在六个应用容器服务之间轮询并提供反向代理或者负载均衡。 为了启动 nginx 容器,请使用下面的命令。 + + ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 + $ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple + +因此,我们的 nginx 容器在 $WEAVE_AWS_DEMO_HOST1 上公开地暴露成为一个 http 服务器。 + +### 5. 
测试负载均衡服务器 ### + +为了测试我们的负载均衡服务器是否可以工作,我们执行一段可以发送 http 请求给 nginx 容器的脚本。我们将会发送6个请求,这样我们就能看到 nginx 在一次的轮询中服务于每台 web 服务器之间。 + + $ ./access-aws-hosts.sh + + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws1.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws2.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws3.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws4.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws5.weave.local", + "date" : "2015-06-26 12:24:23" + } + { + "message" : "Hello Weave - nginx example", + "hostname" : "ws6.weave.local", + "date" : "2015-06-26 12:24:23" + } + +### 结束语 ### + +我们最终成功地将 nginx 配置成一个反向代理/负载均衡服务器,通过使用 weave 以及运行在 AWS(Amazon Web Service)EC2 里面的 ubuntu 服务器中的 docker。从上面的步骤输出可以清楚的看到我们已经成功地配置了 nginx。我们可以看到请求在一次轮询中被发送到6个应用容器,这些容器在 Apache2 Web 服务器中跑着 PHP 应用。在这里,我们部署了一个容器化的 PHP 应用,使用 nginx 横跨多台在 AWS EC2 上的主机而不需要改变代码,利用 weavedns 使得每个容器连接在一起,只需要主机名就够了,眼前的这些便捷, 都要归功于 weave 以及 weavedns。 + +如果你有任何的问题、建议、反馈,请在评论中注明,这样我们才能够做得更好,谢谢:-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/ + +作者:[Arun Pyasi][a] +译者:[dingdongnigetou](https://github.com/dingdongnigetou) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://console.aws.amazon.com/ diff --git a/published/20141211 Open source all over the world.md b/published/201508/20141211 Open source all over the world.md similarity index 100% rename from published/20141211 Open source all over the world.md rename to published/201508/20141211 Open source all over the world.md diff --git a/published/20150128 7 communities driving open source development.md b/published/201508/20150128 7 communities driving open source development.md similarity index 100% rename from published/20150128 7 communities driving open source development.md rename to published/201508/20150128 7 communities driving open source development.md diff --git a/published/201508/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md b/published/201508/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md new file mode 100644 index 0000000000..22e50e355d --- /dev/null +++ b/published/201508/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md @@ -0,0 +1,114 @@ +安装 Strongswan :Linux 上一个基于 IPsec 的 VPN 工具 +================================================================================ + +IPsec是一个提供网络层安全的标准。它包含认证头(AH)和安全负载封装(ESP)组件。AH提供包的完整性,ESP组件提供包的保密性。IPsec确保了在网络层的安全特性。 + +- 保密性 +- 数据包完整性 +- 来源不可抵赖性 +- 重放攻击防护 + +[Strongswan][1]是一个IPsec协议的开源代码实现,Strongswan的意思是强安全广域网(StrongS/WAN)。它支持IPsec的VPN中的两个版本的密钥自动交换(网络密钥交换(IKE)V1和V2)。 + +Strongswan基本上提供了在VPN的两个节点/网关之间自动交换密钥的共享,然后它使用了Linux内核的IPsec(AH和ESP)实现。密钥共享使用了之后用于ESP数据加密的IKE 机制。在IKE阶段,strongswan使用OpenSSL的加密算法(AES,SHA等等)和其他加密类库。无论如何,IPsec中的ESP组件使用的安全算法是由Linux内核实现的。Strongswan的主要特性如下: + +- x.509证书或基于预共享密钥认证 +- 支持IKEv1和IKEv2密钥交换协议 +- 可选的,对于插件和库的内置完整性和加密测试 +- 支持椭圆曲线DH群和ECDSA证书 +- 在智能卡上存储RSA私钥和证书 + +它能被使用在客户端/服务器(road warrior模式)和网关到网关的情景。 + +### 如何安装 ### + +几乎所有的Linux发行版都支持Strongswan的二进制包。在这个教程,我们会从二进制包安装strongswan,也会从源代码编译带有合适的特性的strongswan。 + +### 使用二进制包 ### + 
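+安装前,最好先刷新一下软件源索引(这里以文中使用的 Ubuntu/aptitude 为例,仅作示意,其它发行版请换用各自的包管理器):
+
+    $ sudo aptitude update
+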
+可以使用以下命令安装Strongswan到Ubuntu 14.04 LTS + + $ sudo aptitude install strongswan + +![安装strongswan](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-binary.png) + +strongswan的全局配置(strongswan.conf)文件和ipsec配置(ipsec.conf/ipsec.secrets)文件都在/etc/目录下。 + +### strongswan源码编译安装的依赖包 ### + +- GMP(strongswan使用的高精度数学库) +- OpenSSL(加密算法来自这个库) +- PKCS(1,7,8,11,12)(证书编码和智能卡集成) + +#### 步骤 #### + +**1)** 在终端使用下面命令到/usr/src/目录 + + $ cd /usr/src + +**2)** 用下面命令从strongswan网站下载源代码 + + $ sudo wget http://download.strongswan.org/strongswan-5.2.1.tar.gz + +(strongswan-5.2.1.tar.gz 是当前最新版。) + +![下载软件](http://blog.linoxide.com/wp-content/uploads/2014/12/download_strongswan.png) + +**3)** 用下面命令提取下载的软件,然后进入目录。 + + $ sudo tar –xvzf strongswan-5.2.1.tar.gz; cd strongswan-5.2.1 + +**4)** 使用configure命令配置strongswan每个想要的选项。 + + $ ./configure --prefix=/usr/local -–enable-pkcs11 -–enable-openssl + +![检查strongswan包](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-configure.png) + +如果GMP库没有安装,配置脚本将会发生下面的错误。 + +![GMP library error](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-error.png) + +因此,首先,使用下面命令安装GMP库然后执行配置脚本。 + +![gmp installation](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-installation1.png) + +不过,如果GMP已经安装还报上述错误的话,在Ubuntu上使用如下命令,给在路径 /usr/lib,/lib/,/usr/lib/x86_64-linux-gnu/ 下的libgmp.so库创建软连接。 + + $ sudo ln -s /usr/lib/x86_64-linux-gnu/libgmp.so.10.1.3 /usr/lib/x86_64-linux-gnu/libgmp.so + +![softlink of libgmp.so library](http://blog.linoxide.com/wp-content/uploads/2014/12/softlink.png) + +创建libgmp.so软连接后,再执行./configure脚本也许就找到gmp库了。然而,如果gmp头文件发生其他错误,像下面这样。 + +![GMP header file issu](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-header.png) + +为解决上面的错误,使用下面命令安装libgmp-dev包 + + $ sudo aptitude install libgmp-dev + +![Installation of Development library of GMP](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-dev.png) + +安装gmp的开发库后,在运行一遍配置脚本,如果没有发生错误,则将看见下面的这些输出。 + +![Output of Configure scirpt](http://blog.linoxide.com/wp-content/uploads/2014/12/successful-run.png) + +使用下面的命令编译安装strongswan。 + + $ sudo make ; sudo make install + +安装strongswan后,全局配置(strongswan.conf)和ipsec策略/密码配置文件(ipsec.conf/ipsec.secretes)被放在**/usr/local/etc**目录。 + +根据我们的安全需要Strongswan可以用作隧道或者传输模式。它提供众所周知的site-2-site模式和road warrior模式的VPN。它很容易使用在Cisco,Juniper设备上。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/security/install-strongswan/ + +作者:[nido][a] +译者:[wyangsun](https://github.com/wyangsun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/naveeda/ +[1]:https://www.strongswan.org/ diff --git a/translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md b/published/201508/20150209 Install OpenQRM Cloud Computing Platform In Debian.md similarity index 54% rename from translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md rename to published/201508/20150209 Install OpenQRM Cloud Computing Platform In Debian.md index 2eacc933b9..fdaa039b2f 100644 --- a/translated/tech/20150209 Install OpenQRM Cloud Computing Platform In Debian.md +++ b/published/201508/20150209 Install OpenQRM Cloud Computing Platform In Debian.md @@ -1,48 +1,49 @@ 在 Debian 中安装 OpenQRM 云计算平台 ================================================================================ + ### 简介 ### **openQRM**是一个基于 Web 的开源云计算和数据中心管理平台,可灵活地与企业数据中心的现存组件集成。 它支持下列虚拟技术: -- KVM, -- XEN, -- Citrix XenServer, -- VMWare ESX, -- 
LXC, -- OpenVZ. +- KVM +- XEN +- Citrix XenServer +- VMWare ESX +- LXC +- OpenVZ -openQRM 中的杂交云连接器通过 **Amazon AWS**, **Eucalyptus** 或 **OpenStack** 来支持一系列的私有或公有云提供商,以此来按需扩展你的基础设施。它也自动地进行资源调配、 虚拟化、 存储和配置管理,且关注高可用性。集成计费系统的自助服务云门户可使终端用户按需请求新的服务器和应用堆栈。 +openQRM 中的混合云连接器支持 **Amazon AWS**, **Eucalyptus** 或 **OpenStack** 等一系列的私有或公有云提供商,以此来按需扩展你的基础设施。它也可以自动地进行资源调配、 虚拟化、 存储和配置管理,且保证高可用性。集成的计费系统的自服务云门户可使终端用户按需请求新的服务器和应用堆栈。 openQRM 有两种不同风格的版本可获取: - 企业版 - 社区版 -你可以在[这里][1] 查看这两个版本间的区别。 +你可以在[这里][1]查看这两个版本间的区别。 ### 特点 ### -- 私有/杂交的云计算平台; -- 可管理物理或虚拟的服务器系统; -- 可与所有主流的开源或商业的存储技术集成; -- 跨平台: Linux, Windows, OpenSolaris, and BSD; -- 支持 KVM, XEN, Citrix XenServer, VMWare ESX(i), lxc, OpenVZ 和 VirtualBox; -- 支持使用额外的 Amazon AWS, Eucalyptus, Ubuntu UEC 等云资源来进行杂交云设置; -- 支持 P2V, P2P, V2P, V2V 迁移和高可用性; -- 集成最好的开源管理工具 – 如 puppet, nagios/Icinga 或 collectd; -- 有超过 50 个插件来支持扩展功能并与你的基础设施集成; -- 针对终端用户的自助门户; -- 集成计费系统. +- 私有/混合的云计算平台 +- 可管理物理或虚拟的服务器系统 +- 集成了所有主流的开源或商业的存储技术 +- 跨平台: Linux, Windows, OpenSolaris 和 BSD +- 支持 KVM, XEN, Citrix XenServer, VMWare ESX(i), lxc, OpenVZ 和 VirtualBox +- 支持使用额外的 Amazon AWS, Eucalyptus, Ubuntu UEC 等云资源来进行混合云设置 +- 支持 P2V, P2P, V2P, V2V 迁移和高可用性 +- 集成最好的开源管理工具 – 如 puppet, nagios/Icinga 或 collectd +- 有超过 50 个插件来支持扩展功能并与你的基础设施集成 +- 针对终端用户的自服务门户 +- 集成了计费系统 ### 安装 ### -在这里我们将在 in Debian 7.5 上安装 openQRM。你的服务器必须至少满足以下要求: +在这里我们将在 Debian 7.5 上安装 openQRM。你的服务器必须至少满足以下要求: -- 1 GB RAM; -- 100 GB Hdd(硬盘驱动器); -- 可选: Bios 支持虚拟化(Intel CPUs 的 VT 或 AMD CPUs AMD-V). +- 1 GB RAM +- 100 GB Hdd(硬盘驱动器) +- 可选: Bios 支持虚拟化(Intel CPUs 的 VT 或 AMD CPUs AMD-V) 首先,安装 `make` 软件包来编译 openQRM 源码包: @@ -52,7 +53,7 @@ openQRM 有两种不同风格的版本可获取: 然后,逐次运行下面的命令来安装 openQRM。 -从[这里][2] 下载最新的可用版本: +从[这里][2]下载最新的可用版本: wget http://sourceforge.net/projects/openqrm/files/openQRM-Community-5.1/openqrm-community-5.1.tgz @@ -66,35 +67,35 @@ openQRM 有两种不同风格的版本可获取: sudo make start -安装期间,你将被询问去更新文件 `php.ini` +安装期间,会要求你更新文件 `php.ini` -![~-openqrm-community-5.1-src_001](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_001.png) +![~-openqrm-community-5.1-src_001](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_001.png) 输入 mysql root 用户密码。 -![~-openqrm-community-5.1-src_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_002.png) +![~-openqrm-community-5.1-src_002](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_002.png) 再次输入密码: -![~-openqrm-community-5.1-src_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_003.png) +![~-openqrm-community-5.1-src_003](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_003.png) -选择邮件服务器配置类型。 +选择邮件服务器配置类型: -![~-openqrm-community-5.1-src_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_004.png) +![~-openqrm-community-5.1-src_004](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_004.png) 假如你不确定该如何选择,可选择 `Local only`。在我们的这个示例中,我选择了 **Local only** 选项。 -![~-openqrm-community-5.1-src_005](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_005.png) +![~-openqrm-community-5.1-src_005](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_005.png) 输入你的系统邮件名称,并最后输入 Nagios 管理员密码。 
-![~-openqrm-community-5.1-src_007](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_007.png) +![~-openqrm-community-5.1-src_007](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@server-openqrm-community-5.1-src_007.png) 根据你的网络连接状态,上面的命令可能将花费很长的时间来下载所有运行 openQRM 所需的软件包,请耐心等待。 最后你将得到 openQRM 配置 URL 地址以及相关的用户名和密码。 -![~_002](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/sk@debian-_002.png) +![~_002](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/sk@debian-_002.png) ### 配置 ### @@ -104,23 +105,23 @@ openQRM 有两种不同风格的版本可获取: 默认的用户名和密码是: **openqrm/openqrm** 。 -![Mozilla Firefox_003](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/Mozilla-Firefox_003.png) +![Mozilla Firefox_003](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/Mozilla-Firefox_003.png) 选择一个网卡来给 openQRM 管理网络使用。 -![openQRM Server - Mozilla Firefox_004](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_004.png) +![openQRM Server - Mozilla Firefox_004](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_004.png) 选择一个数据库类型,在我们的示例中,我选择了 mysql。 -![openQRM Server - Mozilla Firefox_006](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_006.png) +![openQRM Server - Mozilla Firefox_006](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_006.png) 现在,配置数据库连接并初始化 openQRM, 在这里,我使用 **openQRM** 作为数据库名称, **root** 作为用户的身份,并将 debian 作为数据库的密码。 请小心,你应该输入先前在安装 openQRM 时创建的 mysql root 用户密码。 -![openQRM Server - Mozilla Firefox_012](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_012.png) +![openQRM Server - Mozilla Firefox_012](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_012.png) -祝贺你!! openQRM 已经安装并配置好了。 +祝贺你! openQRM 已经安装并配置好了。 -![openQRM Server - Mozilla Firefox_013](http://180016988.r.cdn77.net/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_013.png) +![openQRM Server - Mozilla Firefox_013](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/02/openQRM-Server-Mozilla-Firefox_013.png) ### 更新 openQRM ### @@ -129,16 +130,17 @@ openQRM 有两种不同风格的版本可获取: cd openqrm/src/ make update -到现在为止,我们做的只是在我们的 Ubuntu 服务器中安装和配置 openQRM, 至于 创建、运行虚拟,管理存储,额外的系统集成和运行你自己的私有云等内容,我建议你阅读 [openQRM 管理员指南][3]。 +到现在为止,我们做的只是在我们的 Debian 服务器中安装和配置 openQRM, 至于 创建、运行虚拟,管理存储,额外的系统集成和运行你自己的私有云等内容,我建议你阅读 [openQRM 管理员指南][3]。 就是这些了,欢呼吧!周末快乐! 
+ -------------------------------------------------------------------------------- via: http://www.unixmen.com/install-openqrm-cloud-computing-platform-debian/ 作者:[SK][a] 译者:[FSSlc](https://github.com/FSSlc) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/published/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md b/published/201508/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md similarity index 100% rename from published/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md rename to published/201508/20150318 How to Manage and Use LVM (Logical Volume Management) in Ubuntu.md diff --git a/translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md b/published/201508/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md similarity index 75% rename from translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md rename to published/201508/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md index 2e66e27f31..adf9abd11c 100644 --- a/translated/tech/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md +++ b/published/201508/20150318 How to Use LVM on Ubuntu for Easy Partition Resizing and Snapshots.md @@ -1,14 +1,14 @@ -Ubuntu上使用LVM轻松调整分区并制作快照 +Ubuntu 上使用 LVM 轻松调整分区并制作快照 ================================================================================ ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55035707bbd74.png.pagespeed.ic.9_yebxUF1C.png) -Ubuntu的安装器提供了一个轻松“使用LVM”的复选框。说明中说,它启用了逻辑卷管理,因此你可以制作快照,并更容易地调整硬盘分区大小——这里将为大家讲述如何完成这些操作。 +Ubuntu的安装器提供了一个轻松“使用LVM”的复选框。它的描述中说,启用逻辑卷管理可以让你制作快照,并更容易地调整硬盘分区大小——这里将为大家讲述如何完成这些操作。 -LVM是一种技术,某种程度上和[RAID阵列][1]或[Windows上的存储空间][2]类似。虽然该技术在服务器上更为有用,但是它也可以在桌面端PC上使用。 +LVM是一种技术,某种程度上和[RAID阵列][1]或[Windows上的“存储空间”][2]类似。虽然该技术在服务器上更为有用,但是它也可以在桌面端PC上使用。 ### 你应该在新安装Ubuntu时使用LVM吗? 
### -第一个问题是,你是否想要在安装Ubuntu时使用LVM?如果是,那么Ubuntu让这一切变得很简单,只需要轻点鼠标就可以完成,但是该选项默认是不启用的。正如安装器所说的,它允许你调整分区、创建快照、合并多个磁盘到一个逻辑卷等等——所有这一切都可以在系统运行时完成。不同于传统分区,你不需要关掉你的系统,从Live CD或USB驱动,然后[调整这些不使用的分区][3]。 +第一个问题是,你是否想要在安装Ubuntu时使用LVM?如果是,那么Ubuntu让这一切变得很简单,只需要轻点鼠标就可以完成,但是该选项默认是不启用的。正如安装器所说的,它允许你调整分区、创建快照、将多个磁盘合并到一个逻辑卷等等——所有这一切都可以在系统运行时完成。不同于传统分区,你不需要关掉你的系统,从Live CD或USB驱动,然后[当这些分区不使用时才能调整][3]。 完全坦率地说,普通Ubuntu桌面用户可能不会意识到他们是否正在使用LVM。但是,如果你想要在今后做一些更高深的事情,那么LVM就会有所帮助了。LVM可能更复杂,可能会在你今后恢复数据时会导致问题——尤其是在你经验不足时。这里不会有显著的性能损失——LVM是彻底地在Linux内核中实现的。 @@ -18,7 +18,7 @@ LVM是一种技术,某种程度上和[RAID阵列][1]或[Windows上的存储空 前面,我们已经[说明了何谓LVM][4]。概括来讲,它在你的物理磁盘和呈现在你系统中的分区之间提供了一个抽象层。例如,你的计算机可能装有两个硬盘驱动器,它们的大小都是 1 TB。你必须得在这些磁盘上至少分两个区,每个区大小 1 TB。 -LVM就在这些分区上提供了一个抽象层。用于取代磁盘上的传统分区,LVM将在你对这些磁盘初始化后,将它们当作独立的“物理卷”来对待。然后,你就可以基于这些物理卷创建“逻辑卷”。例如,你可以将这两个 1 TB 的磁盘组合成一个 2 TB 的分区,你的系统将只看到一个 2 TB 的卷,而LVM将会在后台处理这一切。一组物理卷以及一组逻辑卷被称之为“卷组”,一个标准的系统只会有一个卷组。 +LVM就在这些分区上提供了一个抽象层。用于取代磁盘上的传统分区,LVM将在你对这些磁盘初始化后,将它们当作独立的“物理卷”来对待。然后,你就可以基于这些物理卷创建“逻辑卷”。例如,你可以将这两个 1 TB 的磁盘组合成一个 2 TB 的分区,你的系统将只看到一个 2 TB 的卷,而LVM将会在后台处理这一切。一组物理卷以及一组逻辑卷被称之为“卷组”,一个典型的系统只会有一个卷组。 该抽象层使得调整分区、将多个磁盘组合成单个卷、甚至为一个运行着的分区的文件系统创建“快照”变得十分简单,而完成所有这一切都无需先卸载分区。 @@ -28,11 +28,11 @@ LVM就在这些分区上提供了一个抽象层。用于取代磁盘上的传 通常,[LVM通过Linux终端命令来管理][5]。这在Ubuntu上也行得通,但是有个更简单的图形化方法可供大家采用。如果你是一个Linux用户,对GParted或者与其类似的分区管理器熟悉,算了,别瞎掰了——GParted根本不支持LVM磁盘。 -然而,你可以使用Ubuntu附带的磁盘工具。该工具也被称之为GNOME磁盘工具,或者叫Palimpsest。点击停靠盘上的图标来开启它吧,搜索磁盘然后敲击回车。不像GParted,该磁盘工具将会在“其它设备”下显示LVM分区,因此你可以根据需要格式化这些分区,也可以调整其它选项。该工具在Live CD或USB 驱动下也可以使用。 +然而,你可以使用Ubuntu附带的磁盘工具。该工具也被称之为GNOME磁盘工具,或者叫Palimpsest。点击dash中的图标来开启它吧,搜索“磁盘”然后敲击回车。不像GParted,该磁盘工具将会在“其它设备”下显示LVM分区,因此你可以根据需要格式化这些分区,也可以调整其它选项。该工具在Live CD或USB 驱动下也可以使用。 ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550361b3772f7.png.pagespeed.ic.nZWwLJUywR.png) -不幸的是,该磁盘工具不支持LVM的大多数强大的特性,没有管理卷组、扩展分区,或者创建快照等选项。对于这些操作,你可以通过终端来实现,但是你没有那个必要。相反,你可以打开Ubuntu软件中心,搜索关键字LVM,然后安装逻辑卷管理工具,你可以在终端窗口中运行**sudo apt-get install system-config-lvm**命令来安装它。安装完之后,你就可以从停靠盘上打开逻辑卷管理工具了。 +不幸的是,该磁盘工具不支持LVM的大多数强大的特性,没有管理卷组、扩展分区,或者创建快照等选项。对于这些操作,你可以通过终端来实现,但是没有那个必要。相反,你可以打开Ubuntu软件中心,搜索关键字LVM,然后安装逻辑卷管理工具,你可以在终端窗口中运行**sudo apt-get install system-config-lvm**命令来安装它。安装完之后,你就可以从dash上打开逻辑卷管理工具了。 这个图形化配置工具是由红帽公司开发的,虽然有点陈旧了,但却是唯一的图形化方式,你可以通过它来完成上述操作,将那些终端命令抛诸脑后了。 @@ -40,11 +40,11 @@ LVM就在这些分区上提供了一个抽象层。用于取代磁盘上的传 ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363106789c.png.pagespeed.ic.drVInt3Weq.png) -卷组视图会列出你所有物理卷和逻辑卷的总览。这里,我们有两个横跨两个独立硬盘驱动器的物理分区,我们有一个交换分区和一个根分区,就像Ubuntu默认设置的分区图表。由于我们从另一个驱动器添加了第二个物理分区,现在那里有大量未使用空间。 +卷组视图会列出你所有的物理卷和逻辑卷的总览。这里,我们有两个横跨两个独立硬盘驱动器的物理分区,我们有一个交换分区和一个根分区,这是Ubuntu默认设置的分区图表。由于我们从另一个驱动器添加了第二个物理分区,现在那里有大量未使用空间。 ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_550363f631c19.png.pagespeed.ic.54E_Owcq8y.png) -要扩展逻辑分区到物理空间,你可以在逻辑视图下选择它,点击编辑属性,然后修改大小来扩大分区。你也可以在这里缩减分区。 +要扩展逻辑分区到物理空间,你可以在逻辑视图下选择它,点击编辑属性,然后修改大小来扩大分区。你也可以在这里缩小分区。 ![](http://cdn5.howtogeek.com/wp-content/uploads/2015/03/ximg_55036893712d3.png.pagespeed.ic.ce7y_Mt0uF.png) @@ -55,7 +55,7 @@ system-config-lvm的其它选项允许你设置快照和镜像。对于传统桌 via: http://www.howtogeek.com/211937/how-to-use-lvm-on-ubuntu-for-easy-partition-resizing-and-snapshots/ 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/published/201508/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/published/201508/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md new file mode 100644 index 
0000000000..c36ae7adb7 --- /dev/null +++ b/published/201508/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md @@ -0,0 +1,89 @@ +如何在树莓派 2 运行 ubuntu Snappy Core +================================================================================ +物联网(Internet of Things, IoT) 时代即将来临。很快,过不了几年,我们就会问自己当初是怎么在没有物联网的情况下生存的,就像我们现在怀疑过去没有手机的年代。Canonical 就是一个物联网快速发展却还是开放市场下的竞争者。这家公司宣称自己把赌注压到了IoT 上,就像他们已经在“云”上做过的一样。在今年一月底,Canonical 启动了一个基于Ubuntu Core 的小型操作系统,名字叫做 [Ubuntu Snappy Core][1] 。 + +Snappy 代表了两种意思,它是一种用来替代 deb 的新的打包格式;也是一个用来更新系统的前端,从CoreOS、红帽子和其他系统借鉴了**原子更新**这个想法。自从树莓派 2 投入市场,Canonical 很快就发布了用于树莓派的Snappy Core 版本。而第一代树莓派因为是基于ARMv6 ,Ubuntu 的ARM 镜像是基于ARMv7 ,所以不能运行ubuntu 。不过这种状况现在改变了,Canonical 通过发布 Snappy Core 的RPI2 镜像,抓住机会证明了Snappy 就是一个用于云计算,特别是用于物联网的系统。 + +Snappy 同样可以运行在其它像Amazon EC2, Microsofts Azure, Google的 Compute Engine 这样的云端上,也可以虚拟化在 KVM、Virtuabox 和vagrant 上。Canonical Ubuntu 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,比如 Ninja Sphere、Erle Robotics,还有一些开发板生产商,比如 Odroid、Banana Pro, Udoo, PCDuino 和 Parallella 、全志,Snappy 也提供了支持。Snappy Core 同时也希望尽快运行到路由器上来帮助改进路由器生产商目前很少更新固件的策略。 + +接下来,让我们看看怎么样在树莓派 2 上运行 Ubuntu Snappy Core。 + +用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是原子升级和回滚功能会占用不小的空间。使用 Snappy 启动树莓派 2 后你就可以使用默认用户名和密码(都是ubuntu)登录系统。 + +![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg) + +sudo 已经配置好了可以直接用,安全起见,你应该使用以下命令来修改你的用户名 + + $ sudo usermod -l + +或者也可以使用`adduser` 为你添加一个新用户。 + +因为RPI缺少硬件时钟,而 Snappy Core 镜像并不知道这一点,所以系统会有一个小 bug:处理某些命令时会报很多错。不过这个很容易解决: + +使用这个命令来确认这个bug 是否影响: + + $ date + +如果输出类似 "Thu Jan 1 01:56:44 UTC 1970", 你可以这样做来改正: + + $ sudo date --set="Sun Apr 04 17:43:26 UTC 2015" + +改成你的实际时间。 + +![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg) + +现在你可能打算检查一下,看看有没有可用的更新。注意通常使用的命令是不行的: + + $ sudo apt-get update && sudo apt-get distupgrade + +这时系统不会让你通过,因为 Snappy 使用它自己精简过的、基于dpkg 的包管理系统。这么做的原因是 Snappy 会运行很多嵌入式程序,而同时你也会试图所有事情尽可能的简化。 + +让我们来看看最关键的部分,理解一下程序是如何与 Snappy 工作的。运行 Snappy 的SD 卡上除了 boot 分区外还有3个分区。其中的两个构成了一个重复的文件系统。这两个平行文件系统被固定挂载为只读模式,并且任何时刻只有一个是激活的。第三个分区是一个部分可写的文件系统,用来让用户存储数据。通过更新系统,标记为'system-a' 的分区会保持一个完整的文件系统,被称作核心,而另一个平行的文件系统仍然会是空的。 + +![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg) + +如果我们运行以下命令: + + $ sudo snappy update + +系统将会在'system-b' 上作为一个整体进行更新,这有点像是更新一个镜像文件。接下来你将会被告知要重启系统来激活新核心。 + +重启之后,运行下面的命令可以检查你的系统是否已经更新到最新版本,以及当前被激活的是哪个核心 + + $ sudo snappy versions -a + +经过更新-重启两步操作,你应该可以看到被激活的核心已经被改变了。 + +因为到目前为止我们还没有安装任何软件,所以可以用下面的命令更新: + + $ sudo snappy update ubuntu-core + +如果你打算仅仅更新特定的OS 版本这就够了。如果出了问题,你可以使用下面的命令回滚: + + $ sudo snappy rollback ubuntu-core + +这将会把系统状态回滚到更新之前。 + +![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg) + +再来说说那些让 Snappy 变得有用的软件。这里不会讲的太多关于如何构建软件、向 Snappy 应用商店添加软件的基础知识,但是你可以通过 Freenode 上的IRC 频道 #snappy 了解更多信息,那个上面有很多人参与。你可以通过浏览器访问http://\:4200 来浏览应用商店,然后从商店安装软件,再在浏览器里访问 http://webdm.local 来启动程序。如何构建用于 Snappy 的软件并不难,而且也有了现成的[参考文档][4] 。你也可以很容易的把 DEB 安装包使用Snappy 格式移植到Snappy 上。 + +![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg) + +尽管 Ubuntu Snappy Core 吸引了我们去研究新型的 Snappy 安装包格式和 Canonical 式的原子更新操作,但是因为有限的可用应用,它现在在生产环境里还不是很有用。但是既然搭建一个 Snappy 环境如此简单,这看起来是一个学点新东西的好机会。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html + +作者:[Ferdinand Thommes][a] +译者:[Ezio](https://github.com/oska874) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 
原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/ferdinand +[1]:http://www.ubuntu.com/things +[2]:http://www.raspberrypi.org/downloads/ +[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html +[4]:https://developer.ubuntu.com/en/snappy/ diff --git a/published/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md b/published/201508/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md similarity index 100% rename from published/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md rename to published/201508/20150504 How to access a Linux server behind NAT via reverse SSH tunnel.md diff --git a/translated/tech/20150518 How to set up a Replica Set on MongoDB.md b/published/201508/20150518 How to set up a Replica Set on MongoDB.md similarity index 51% rename from translated/tech/20150518 How to set up a Replica Set on MongoDB.md rename to published/201508/20150518 How to set up a Replica Set on MongoDB.md index 44b8535b82..7d05a48d95 100644 --- a/translated/tech/20150518 How to set up a Replica Set on MongoDB.md +++ b/published/201508/20150518 How to set up a Replica Set on MongoDB.md @@ -1,10 +1,11 @@ -如何配置MongoDB副本集(Replica Set) +如何配置 MongoDB 副本集 ================================================================================ -MongoDB已经成为市面上最知名的NoSQL数据库。MongoDB是面向文档的,它的无模式设计使得它在各种各样的WEB应用当中广受欢迎。最让我喜欢的特性之一是它的副本集,副本集将同一数据的多份拷贝放在一组mongod节点上,从而实现数据的冗余以及高可用性。 -这篇教程将向你介绍如何配置一个MongoDB副本集。 +MongoDB 已经成为市面上最知名的 NoSQL 数据库。MongoDB 是面向文档的,它的无模式设计使得它在各种各样的WEB 应用当中广受欢迎。最让我喜欢的特性之一是它的副本集(Replica Set),副本集将同一数据的多份拷贝放在一组 mongod 节点上,从而实现数据的冗余以及高可用性。 -副本集的最常见配置涉及到一个主节点以及多个副节点。这之后启动的复制行为会从这个主节点到其他副节点。副本集不止可以针对意外的硬件故障和停机事件对数据库提供保护,同时也因为提供了更多的结点从而提高了数据库客户端数据读取的吞吐量。 +这篇教程将向你介绍如何配置一个 MongoDB 副本集。 + +副本集的最常见配置需要一个主节点以及多个副节点。这之后启动的复制行为会从这个主节点到其他副节点。副本集不止可以针对意外的硬件故障和停机事件对数据库提供保护,同时也因为提供了更多的节点从而提高了数据库客户端数据读取的吞吐量。 ### 配置环境 ### @@ -12,25 +13,25 @@ MongoDB已经成为市面上最知名的NoSQL数据库。MongoDB是面向文档 ![](https://farm8.staticflickr.com/7667/17801038505_529a5224a1.jpg) -为了达到这个目的,我们使用了3个运行在VirtualBox上的虚拟机。我会在这些虚拟机上安装Ubuntu 14.04,并且安装MongoDB官方包。 +为了达到这个目的,我们使用了3个运行在 VirtualBox 上的虚拟机。我会在这些虚拟机上安装 Ubuntu 14.04,并且安装 MongoDB 官方包。 -我会在一个虚拟机实例上配置好需要的环境,然后将它克隆到其他的虚拟机实例上。因此,选择一个名为master的虚拟机,执行以下安装过程。 +我会在一个虚拟机实例上配置好所需的环境,然后将它克隆到其他的虚拟机实例上。因此,选择一个名为 master 的虚拟机,执行以下安装过程。 -首先,我们需要在apt中增加一个MongoDB密钥: +首先,我们需要给 apt 增加一个 MongoDB 密钥: $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 -然后,将官方的MongoDB仓库添加到source.list中: +然后,将官方的 MongoDB 仓库添加到 source.list 中: $ sudo su # echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list -接下来更新apt仓库并且安装MongoDB。 +接下来更新 apt 仓库并且安装 MongoDB。 $ sudo apt-get update $ sudo apt-get install -y mongodb-org -现在对/etc/mongodb.conf做一些更改 +现在对 /etc/mongodb.conf 做一些更改 auth = true dbpath=/var/lib/mongodb @@ -39,17 +40,17 @@ MongoDB已经成为市面上最知名的NoSQL数据库。MongoDB是面向文档 keyFile=/var/lib/mongodb/keyFile replSet=myReplica -第一行的作用是确认我们的数据库需要验证才可以使用的。keyfile用来配置用于MongoDB结点间复制行为的密钥文件。replSet用来为副本集设置一个名称。 +第一行的作用是表明我们的数据库需要验证才可以使用。keyfile 配置用于 MongoDB 节点间复制行为的密钥文件。replSet 为副本集设置一个名称。 接下来我们创建一个用于所有实例的密钥文件。 $ echo -n "MyRandomStringForReplicaSet" | md5sum > keyFile -这将会创建一个含有MD5字符串的密钥文件,但是由于其中包含了一些噪音,我们需要对他们清理后才能正式在MongoDB中使用。 +这将会创建一个含有 MD5 字符串的密钥文件,但是由于其中包含了一些噪音,我们需要对他们清理后才能正式在 MongoDB 中使用。 $ echo -n "MyReplicaSetKey" | md5sum|grep -o "[0-9a-z]\+" > keyFile -grep命令的作用的是把将空格等我们不想要的内容过滤掉之后的MD5字符串打印出来。 +grep 命令的作用的是把将空格等我们不想要的内容过滤掉之后的 MD5 字符串打印出来。 
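+
+如果想确认过滤的效果,可以分别查看管道两步的输出(下面输出中的哈希值用占位符表示,仅为示意):
+
+    $ echo -n "MyReplicaSetKey" | md5sum
+    <32位十六进制哈希>  -
+    $ cat keyFile
+    <32位十六进制哈希>
+
+可以看到,md5sum 输出末尾的“  -”被 grep 去掉了,keyFile 中只留下十六进制字符串本身。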
现在我们对密钥文件进行一些操作,让它真正可用。 @@ -57,7 +58,7 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 $ sudo chown mongodb:nogroup keyFile $ sudo chmod 400 keyFile -接下来,关闭此虚拟机。将其Ubuntu系统克隆到其他虚拟机上。 +接下来,关闭此虚拟机。将其 Ubuntu 系统克隆到其他虚拟机上。 ![](https://farm9.staticflickr.com/8729/17800903865_9876a9cc9c.jpg) @@ -67,55 +68,55 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 请注意,三个虚拟机示例需要在同一个网络中以便相互通讯。因此,我们需要它们弄到“互联网"上去。 -这里推荐给每个虚拟机设置一个静态IP地址,而不是使用DHCP。这样它们就不至于在DHCP分配IP地址给他们的时候失去连接。 +这里推荐给每个虚拟机设置一个静态 IP 地址,而不是使用 DHCP。这样它们就不至于在 DHCP 分配IP地址给他们的时候失去连接。 -像下面这样编辑每个虚拟机的/etc/networks/interfaces文件。 +像下面这样编辑每个虚拟机的 /etc/networks/interfaces 文件。 -在主结点上: +在主节点上: auto eth1 iface eth1 inet static address 192.168.50.2 netmask 255.255.255.0 -在副结点1上: +在副节点1上: auto eth1 iface eth1 inet static address 192.168.50.3 netmask 255.255.255.0 -在副结点2上: +在副节点2上: auto eth1 iface eth1 inet static address 192.168.50.4 netmask 255.255.255.0 -由于我们没有DNS服务,所以需要设置设置一下/etc/hosts这个文件,手工将主机名称放到次文件中。 +由于我们没有 DNS 服务,所以需要设置设置一下 /etc/hosts 这个文件,手工将主机名称放到此文件中。 -在主结点上: +在主节点上: 127.0.0.1 localhost primary 192.168.50.2 primary 192.168.50.3 secondary1 192.168.50.4 secondary2 -在副结点1上: +在副节点1上: 127.0.0.1 localhost secondary1 192.168.50.2 primary 192.168.50.3 secondary1 192.168.50.4 secondary2 -在副结点2上: +在副节点2上: 127.0.0.1 localhost secondary2 192.168.50.2 primary 192.168.50.3 secondary1 192.168.50.4 secondary2 -使用ping命令检查各个结点之间的连接。 +使用 ping 命令检查各个节点之间的连接。 $ ping primary $ ping secondary1 @@ -123,9 +124,9 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 ### 配置副本集 ### -验证各个结点可以正常连通后,我们就可以新建一个管理员用户,用于之后的副本集操作。 +验证各个节点可以正常连通后,我们就可以新建一个管理员用户,用于之后的副本集操作。 -在主节点上,打开/etc/mongodb.conf文件,将auth和replSet两项注释掉。 +在主节点上,打开 /etc/mongodb.conf 文件,将 auth 和 replSet 两项注释掉。 dbpath=/var/lib/mongodb logpath=/var/log/mongodb/mongod.log @@ -133,21 +134,30 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 #auth = true keyFile=/var/lib/mongodb/keyFile #replSet=myReplica + +在一个新安装的 MongoDB 上配置任何用户或副本集之前,你需要注释掉 auth 行。默认情况下,MongoDB 并没有创建任何用户。而如果在你创建用户前启用了 auth,你就不能够做任何事情。你可以在创建一个用户后再次启用 auth。 -重启mongod进程。 +修改 /etc/mongodb.conf 之后,重启 mongod 进程。 $ sudo service mongod restart -连接MongoDB后,新建管理员用户。 +现在连接到 MongoDB master: + + $ mongo :27017 + +连接 MongoDB 后,新建管理员用户。 > use admin > db.createUser({ user:"admin", pwd:" }) + +重启 MongoDB: + $ sudo service mongod restart -连接到MongoDB,用以下命令将secondary1和secondary2节点添加到我们的副本集中。 +再次连接到 MongoDB,用以下命令将 副节点1 和副节点2节点添加到我们的副本集中。 > use admin > db.auth("admin","myreallyhardpassword") @@ -156,7 +166,7 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 > rs.add("secondary2:27017") -现在副本集到手了,可以开始我们的项目了。参照 [official driver documentation][1] 来了解如何连接到副本集。如果你想要用Shell来请求数据,那么你需要连接到主节点上来插入或者请求数据,副节点不行。如果你执意要尝试用附件点操作,那么以下错误信息就蹦出来招呼你了。 +现在副本集到手了,可以开始我们的项目了。参照 [官方驱动文档][1] 来了解如何连接到副本集。如果你想要用 Shell 来请求数据,那么你需要连接到主节点上来插入或者请求数据,副节点不行。如果你执意要尝试用副本集操作,那么以下错误信息就蹦出来招呼你了。 myReplica:SECONDARY> myReplica:SECONDARY> show databases @@ -166,6 +176,12 @@ grep命令的作用的是把将空格等我们不想要的内容过滤掉之后 at shellHelper.show (src/mongo/shell/utils.js:630:33) at shellHelper (src/mongo/shell/utils.js:524:36) at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47 + +如果你要从 shell 连接到整个副本集,你可以安装如下命令。在副本集中的失败切换是自动的。 + + $ mongo primary,secondary1,secondary2:27017/?replicaSet=myReplica + +如果你使用其它驱动语言(例如,JavaScript、Ruby 等等),格式也许不同。 希望这篇教程能对你有所帮助。你可以使用Vagrant来自动完成你的本地环境配置,并且加速你的代码。 @@ -175,7 +191,7 @@ via: http://xmodulo.com/setup-replica-set-mongodb.html 作者:[Christopher Valerio][a] 译者:[mr-ping](https://github.com/mr-ping) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20150522 
Analyzing Linux Logs.md b/published/201508/20150522 Analyzing Linux Logs.md similarity index 100% rename from published/20150522 Analyzing Linux Logs.md rename to published/201508/20150522 Analyzing Linux Logs.md diff --git a/published/201508/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md b/published/201508/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md new file mode 100644 index 0000000000..f47f79b3b7 --- /dev/null +++ b/published/201508/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md @@ -0,0 +1,113 @@ +在 VirtualBox 中使用 Docker Machine 管理主机 +================================================================================ +大家好,今天我们学习在 VirtualBox 中使用 Docker Machine 来创建和管理 Docker 主机。Docker Machine 是一个可以帮助我们在电脑上、在云端、在数据中心内创建 Docker 主机的应用。它为根据用户的配置和需求创建服务器并在其上安装 Docker和客户端提供了一个轻松的解决方案。这个 API 可以用于在本地主机、或数据中心的虚拟机、或云端的实例提供 Docker 服务。Docker Machine 支持 Windows、OSX 和 Linux,并且是以一个独立的二进制文件包形式安装的。仍然使用(与现有 Docker 工具)相同的接口,我们就可以充分利用已经提供 Docker 基础框架的生态系统。只要一个命令,用户就能快速部署 Docker 容器。 + +本文列出一些简单的步骤用 Docker Machine 来部署 docker 容器。 + +### 1. 安装 Docker Machine ### + +Docker Machine 完美支持所有 Linux 操作系统。首先我们需要从 [github][1] 下载最新版本的 Docker Machine,本文使用 curl 作为下载工具,Docker Machine 版本为 0.2.0。 + +**64 位操作系统** + + # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine + +**32 位操作系统** + + # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine + +下载完成后,找到 **/usr/local/bin** 目录下的 **docker-machine** 文件,让其可以执行: + + # chmod +x /usr/local/bin/docker-machine + +确认是否成功安装了 docker-machine,可以运行下面的命令,它会打印 Docker Machine 的版本信息: + + # docker-machine -v + +![安装 Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png) + +运行下面的命令,安装 Docker 客户端,以便于在我们自己的电脑止运行 Docker 命令: + + # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker + # chmod +x /usr/local/bin/docker + +### 2. 创建 VirtualBox 虚拟机 ### + +在 Linux 系统上安装完 Docker Machine 后,接下来我们可以安装 VirtualBox 虚拟机,运行下面的就可以了。`--driver virtualbox` 选项表示我们要在 VirtualBox 的虚拟机里面部署 docker,最后的参数“linux” 是虚拟机的名称。这个命令会下载 [boot2docker][2] iso,它是个基于 Tiny Core Linux 的轻量级发行版,自带 Docker 程序,然后 `docker-machine` 命令会创建一个 VirtualBox 虚拟机(LCTT译注:当然,我们也可以选择其他的虚拟机软件)来运行这个 boot2docker 系统。 + + # docker-machine create --driver virtualbox linux + +![创建 Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-docker-machine.png) + +测试下有没有成功运行 VirtualBox 和 Docker,运行命令: + + # docker-machine ls + +![Docker Machine List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-list.png) + +如果执行成功,我们可以看到在 ACTIVE 那列下面会出现一个星号“*”。 + +### 3. 设置环境变量 ### + +现在我们需要让 docker 与 docker-machine 通信,运行 `docker-machine env <虚拟机名称>` 来实现这个目的。 + + # eval "$(docker-machine env linux)" + # docker ps + +这个命令会设置 TLS 认证的环境变量,每次重启机器或者重新打开一个会话都需要执行一下这个命令,我们可以看到它的输出内容: + + # docker-machine env linux + + export DOCKER_TLS_VERIFY=1 + export DOCKER_CERT_PATH=/Users//.docker/machine/machines/dev + export DOCKER_HOST=tcp://192.168.99.100:2376 + +### 4. 运行 Docker 容器 ### + +完成配置后我们就可以在 VirtualBox 上运行 docker 容器了。测试一下,我们可以运行虚拟机 `docker run busybox` ,并在里面里执行 `echo hello world` 命令,我们可以看到容器的输出信息。 + + # docker run busybox echo hello world + +![运行 Docker 容器](http://blog.linoxide.com/wp-content/uploads/2015/05/running-docker-container.png) + +### 5. 
拿到 Docker 主机的 IP ### + +我们可以执行下面的命令获取运行 Docker 的主机的 IP 地址。我们可以看到在 Docker 主机的 IP 地址上的任何暴露出来的端口。 + + # docker-machine ip + +![Docker IP 地址](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-ip-address.png) + +### 6. 管理主机 ### + +现在我们可以随心所欲地使用上述的 docker-machine 命令来不断创建主机了。 + +当你使用完 docker 时,可以运行 **docker-machine stop** 来停止所有主机,如果想开启所有主机,运行 **docker-machine start**。 + + # docker-machine stop + # docker-machine start + +你也可以只停止或开启一台主机: + + $ docker-machine stop linux + $ docker-machine start linux + +### 总结 ### + +最后,我们使用 Docker Machine 成功在 VirtualBox 上创建并管理一台 Docker 主机。Docker Machine 确实能让用户快速地在不同的平台上部署 Docker 主机,就像我们这里部署在 VirtualBox 上一样。这个 virtualbox 驱动可以在本地机器上使用,也可以在数据中心的虚拟机上使用。Docker Machine 驱动除了支持本地的 VirtualBox 之外,还支持远端的 Digital Ocean、AWS、Azure、VMware 以及其它基础设施。 + +如果你有任何疑问,或者建议,请在评论栏中写出来,我们会不断改进我们的内容。谢谢,祝愉快。 + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/host-virtualbox-docker-machine/ + +作者:[Arun Pyasi][a] +译者:[bazz2](https://github.com/bazz2) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:https://github.com/docker/machine/releases +[2]:https://github.com/boot2docker/boot2docker diff --git a/published/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md b/published/201508/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md similarity index 100% rename from published/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md rename to published/201508/20150602 Howto Configure OpenVPN Server-Client on Ubuntu 15.04.md diff --git a/published/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md b/published/201508/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md similarity index 100% rename from published/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md rename to published/201508/20150604 Nishita Agarwal Shares Her Interview Experience on Linux 'iptables' Firewall.md diff --git a/translated/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md b/published/201508/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md similarity index 59% rename from translated/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md rename to published/201508/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md index d7bb0e425b..fec42d22fa 100644 --- a/translated/share/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md +++ b/published/201508/20150610 Tickr Is An Open-Source RSS News Ticker for Linux Desktops.md @@ -1,24 +1,24 @@ -Trickr:一个开源的Linux桌面RSS新闻速递 +Tickr:一个开源的 Linux 桌面 RSS 新闻速递应用 ================================================================================ ![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/05/rss-tickr.jpg) **最新的!最新的!阅读关于它的一切!** -好了,所以我们今天要强调的应用程序不是相当于旧报纸的二进制版本—而是它会以一个伟大的方式,将最新的新闻推送到你的桌面上。 +好了,我们今天要推荐的应用程序可不是旧式报纸的二进制版本——它会以一种漂亮的方式将最新的新闻推送到你的桌面上。 -Tick是一个基于GTK的Linux桌面新闻速递,能够在水平带滚动显示最新头条新闻,以及你最爱的RSS资讯文章标题,当然你可以放置在你桌面的任何地方。 +Tickr 是一个基于 GTK 的 Linux 桌面新闻速递应用,能够以横条方式滚动显示最新头条新闻以及你最爱的RSS资讯文章标题,当然你可以放置在你桌面的任何地方。 -请叫我Joey Calamezzo;我把我的放在底部,有电视新闻台的风格。 +请叫我 Joey Calamezzo;我把它放在底部,就像电视新闻台的滚动字幕一样。 (LCTT 译注: Joan Callamezzo 是 Pawnee Today 的主持人,一位 Pawnee 的本地新闻/脱口秀主持人。而本文作者是 Joey。) -“到你了,子标题” +“到你了,副标题”。 ### RSS -还记得吗? 
### -“谢谢段落结尾。” +“谢谢,这段结束了。” -在一个推送通知,社交媒体,以及点击诱饵的时代,哄骗我们阅读最新的令人惊奇的,人人都爱读的清单,RSS看起来有一点过时了。 +在一个充斥着推送通知、社交媒体、标题党,以及哄骗人们点击的清单体的时代,RSS看起来有一点过时了。 -对我来说?恩,RSS是名副其实的真正简单的聚合。这是将消息通知给我的最简单,最易于管理的方式。我可以在我愿意的时候,管理和阅读一些东西;没必要匆忙的去看,以防这条微博消失在信息流中,或者推送通知消失。 +对我来说呢?恩,RSS是名副其实的真正简单的聚合(RSS : Really Simple Syndication)。这是将消息通知给我的最简单、最易于管理的方式。我可以在我愿意的时候,管理和阅读一些东西;没必要匆忙的去看,以防这条微博消失在信息流中,或者推送通知消失。 tickr的美在于它的实用性。你可以不断地有新闻滚动在屏幕的底部,然后不时地瞥一眼。 @@ -32,31 +32,30 @@ tickr的美在于它的实用性。你可以不断地有新闻滚动在屏幕的 尽管虽然tickr可以从Ubuntu软件中心安装,然而它已经很久没有更新了。当你打开笨拙的不直观的控制面板的时候,没有什么能够比这更让人感觉被遗弃的了。 -打开它: +要打开它: 1. 右键单击tickr条 1. 转至编辑>首选项 1. 调整各种设置 -选项和设置行的后面,有些似乎是容易理解的。但是知己知彼你能够几乎掌控一切,包括: +选项和设置行的后面,有些似乎是容易理解的。但是详细了解这些你才能够掌握一切,包括: - 设置滚动速度 - 选择鼠标经过时的行为 - 资讯更新频率 - 字体,包括字体大小和颜色 -- 分隔符(“delineator”) +- 消息分隔符(“delineator”) - tickr在屏幕上的位置 - tickr条的颜色和不透明度 - 选择每种资讯显示多少文章 有个值得一提的“怪癖”是,当你点击“应用”按钮,只会更新tickr的屏幕预览。当您退出“首选项”窗口时,请单击“确定”。 -想要滚动条在你的显示屏上水平显示,也需要公平一点的调整,特别是统一显示。 +想要得到完美的显示效果, 你需要一点点调整,特别是在 Unity 上。 -按下“全宽按钮”,能够让应用程序自动检测你的屏幕宽度。默认情况下,当放置在顶部或底部时,会留下25像素的间距(应用程序被创建在过去的GNOME2.x桌面)。只需添加额外的25像素到输入框,来弥补这个问题。 +按下“全宽按钮”,能够让应用程序自动检测你的屏幕宽度。默认情况下,当放置在顶部或底部时,会留下25像素的间距(应用程序以前是在GNOME2.x桌面上创建的)。只需添加额外的25像素到输入框,来弥补这个问题。 -其他可供选择的选项包括:选择文章在哪个浏览器打开;tickr是否以一个常规的窗口出现; -是否显示一个时钟;以及应用程序多久检查一次文章资讯。 +其他可供选择的选项包括:选择文章在哪个浏览器打开;tickr是否以一个常规的窗口出现;是否显示一个时钟;以及应用程序多久检查一次文章资讯。 #### 添加资讯 #### @@ -76,9 +75,9 @@ tickr自带的有超过30种不同的资讯列表,从技术博客到主流新 ### 在Ubuntu 14.04 LTS或更高版本上安装Tickr ### -在Ubuntu 14.04 LTS或更高版本上安装Tickr +这就是 Tickr,它不会改变世界,但是它能让你知道世界上发生了什么。 -在Ubuntu 14.04 LTS或更高版本中安装,转到Ubuntu软件中心,但要点击下面的按钮。 +在Ubuntu 14.04 LTS或更高版本中安装,点击下面的按钮转到Ubuntu软件中心。 - [点击此处进入Ubuntu软件中心安装tickr][1] @@ -88,7 +87,7 @@ via: http://www.omgubuntu.co.uk/2015/06/tickr-open-source-desktop-rss-news-ticke 作者:[Joey-Elijah Sneddon][a] 译者:[xiaoyu33](https://github.com/xiaoyu33) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20150625 How to Provision Swarm Clusters using Docker Machine.md b/published/201508/20150625 How to Provision Swarm Clusters using Docker Machine.md similarity index 100% rename from published/20150625 How to Provision Swarm Clusters using Docker Machine.md rename to published/201508/20150625 How to Provision Swarm Clusters using Docker Machine.md diff --git a/published/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md b/published/201508/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md similarity index 100% rename from published/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md rename to published/201508/20150629 Autojump--An Advanced 'cd' Command to Quickly Navigate Linux Filesystem.md diff --git a/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md b/published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md similarity index 100% rename from published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md rename to published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 1 - Introduction.md diff --git a/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md b/published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md 
similarity index 100% rename from published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md rename to published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 2 - The GNOME Desktop.md diff --git a/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md b/published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md similarity index 91% rename from published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md rename to published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md index 61600366c9..4dd942dd29 100644 --- a/published/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md +++ b/published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 3 - GNOME Applications.md @@ -7,7 +7,7 @@ 这是一个基本扯平的方面。每一个桌面环境都有一些非常好的应用,也有一些不怎么样的。再次强调,Gnome 把那些 KDE 完全错失的小细节给做对了。我不是想说 KDE 中有哪些应用不好。他们都能工作,但仅此而已。也就是说:它们合格了,但确实还没有达到甚至接近100分。 -Gnome 是一个样子,KDE 是另外一种。Dragon 播放器运行得很好,清晰的标出了播放文件、URL或和光盘的按钮,正如你在 Gnome Videos 中能做到的一样……但是在便利的文件名和用户的友好度方面,Gnome 多走了一小步。它默认显示了在你的电脑上检测到的所有影像文件,不需要你做任何事情。KDE 有 [Baloo][](正如之前的 [Nepomuk][2],LCTT 译注:这是 KDE 中一种文件索引服务框架)为什么不使用它们?它们能列出可读取的影像文件……但却没被使用。 +Gnome 在左,KDE 在右。Dragon 播放器运行得很好,清晰的标出了播放文件、URL或和光盘的按钮,正如你在 Gnome Videos 中能做到的一样……但是在便利的文件名和用户的友好度方面,Gnome 多走了一小步。它默认显示了在你的电脑上检测到的所有影像文件,不需要你做任何事情。KDE 有 [Baloo][](正如之前的 [Nepomuk][2],LCTT 译注:这是 KDE 中一种文件索引服务框架)为什么不使用它们?它们能列出可读取的影像文件……但却没被使用。 下一步……音乐播放器 diff --git a/published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md b/published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md new file mode 100644 index 0000000000..289c1cb14e --- /dev/null +++ b/published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md @@ -0,0 +1,52 @@ +一周 GNOME 之旅:品味它和 KDE 的是是非非(第四节 GNOME设置) +================================================================================ + +### 设置 ### + +在这我要挑一挑几个特定 KDE 控制模块的毛病,大部分原因是因为相比它们的对手GNOME来说,糟糕得太可笑,实话说,真是悲哀。 + +第一个接招的?打印机。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920) + +GNOME 在左,KDE 在右。你知道左边跟右边的打印程序有什么区别吗?当我在 GNOME 控制中心打开“打印机”时,程序窗口弹出来了,然后这样就可以使用了。而当我在 KDE 系统设置打开“打印机”时,我得到了一条密码提示。甚至我都没能看一眼打印机呢,我就必须先交出 ROOT 密码。 + +让我再重复一遍。在今天这个有了 PolicyKit 和 Logind 的日子里,对一个应该是 sudo 的操作,我依然被询问要求 ROOT 的密码。我安装系统的时候甚至都没设置 root 密码。所以我必须跑到 Konsole 去,接着运行 'sudo passwd root' 命令,这样我才能给 root 设一个密码,然后我才能回到系统设置中的打印程序,再交出 root 密码,然后仅仅是看一看哪些打印机可用。完成了这些工作后,当我点击“添加打印机”时,我再次得到请求 ROOT 密码的提示,当我解决了它后再选择一个打印机和驱动时,我再次得到请求 ROOT 密码的提示。仅仅是为了添加一个打印机到系统我就收到三次密码请求! 
+ +而在 GNOME 下添加打印机,在点击打印机程序中的“解锁”之前,我没有得到任何请求 SUDO 密码的提示。整个过程我只被请求过一次,仅此而已。KDE,求你了……采用 GNOME 的“解锁”模式吧。不到一定需要的时候不要发出提示。还有,不管是哪个库,只要它允许 KDE 应用程序绕过 PolicyKit/Logind(如果有的话)并直接请求 ROOT 权限……那就把它封进箱里吧。如果这是个多用户系统,那我要么必须交出 ROOT 密码,要么我必须时时刻刻待命,以免有一个用户需要升级、更改或添加一个新的打印机。而这两种情况都是完全无法接受的。 + +有还一件事…… + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920) + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920) + +这个问题问大家:怎么样看起来更简洁?我在写这篇文章时意识到:当有任何附加的打印机准备好时,Gnome 打印机程序会把过程做得非常简洁,它们在左边上放了一个竖直栏来列出这些打印机。而我在 KDE 中添加第二台打印机时,它突然增加出一个左边栏来。而在添加之前,我脑海中已经有了一个恐怖的画面,它会像图片文件夹显示预览图一样直接在界面里插入另外一个图标。我很高兴也很惊讶的看到我是错的。但是事实是它直接“长出”另外一个从未存在的竖直栏,彻底改变了它的界面布局,而这样也称不上“好”。终究还是一种令人困惑,奇怪而又不直观的设计。 + +打印机说得够多了……下一个接受我公开石刑的 KDE 系统设置是?多媒体,即 Phonon。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920) + +一如既往,GNOME 在左边,KDE 在右边。让我们先看看 GNOME 的系统设置先……眼睛移动是从左到右,从上到下,对吧?来吧,就这样做。首先:音量控制滑条。滑条中的蓝色条与空条百分百清晰地消除了哪边是“音量增加”的困惑。在音量控制条后马上就是一个 On/Off 开关,用来开关静音功能。Gnome 的再次得分在于静音后能记住当前设置的音量,而在点击音量增加按钮取消静音后能回到原来设置的音量中来。Kmixer,你个健忘的垃圾,我真的希望我能多讨论你一下。 + +继续!输入输出和应用程序的标签选项?每一个应用程序的音量随时可控?Gnome,每过一秒,我爱你越深。音量均衡选项、声音配置、和清晰地标上标志的“测试麦克风”选项。 + +我不清楚它能否以一种更干净更简洁的设计实现。是的,它只是一个 Gnome 化的 Pavucontrol,但我想这就是重要的地方。Pavucontrol 在这方面几乎完全做对了,Gnome 控制中心中的“声音”应用程序的改善使它向完美更进了一步。 + +Phonon,该你上了。但开始前我想说:我 TM 看到的是什么?!我知道我看到的是音频设备的优先级列表,但是它呈现的方式有点太坑。还有,那些用户可能关心的那些东西哪去了?拥有一个优先级列表当然很好,它也应该存在,但问题是优先级列表属于那种用户乱搞一两次之后就不会再碰的东西。它还不够重要,或者说不够常用到可以直接放在正中间位置的程度。音量控制滑块呢?对每个应用程序的音量控制功能呢?那些用户使用最频繁的东西呢?好吧,它们在 Kmix 中,一个分离的程序,拥有它自己的配置选项……而不是在系统设置下……这样真的让“系统设置”这个词变得有点用词不当。 + +![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920) + +上面展示的 Gnome 的网络设置。KDE 的没有展示,原因就是我接下来要吐槽的内容了。如果你进入 KDE 的系统设置里,然后点击“网络”区域中三个选项中的任何一个,你会得到一大堆的选项:蓝牙设置、Samba 分享的默认用户名和密码(说真的,“连通性(Connectivity)”下面只有两个选项:SMB 的用户名和密码。TMD 怎么就配得上“连通性”这么大的词?),浏览器身份验证控制(只有 Konqueror 能用……一个已经倒闭的项目),代理设置,等等……我的 wifi 设置哪去了?它们没在这。哪去了?好吧,它们在网络应用程序的设置里面……而不是在网络设置里…… + +KDE,你这是要杀了我啊,你有“系统设置”当凶器,拿着它动手吧! + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md b/published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md new file mode 100644 index 0000000000..ee9ded7f77 --- /dev/null +++ b/published/201508/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md @@ -0,0 +1,40 @@ +一周 GNOME 之旅:品味它和 KDE 的是是非非(第三节 总结) +================================================================================ + +### 用户体验和最后想法 ### + +当 Gnome 2.x 和 KDE 4.x 要正面交锋时……我在它们之间左右逢源。我对它们爱恨交织,但总的来说它们使用起来还算是一种乐趣。然后 Gnome 3.x 来了,带着一场 Gnome Shell 的戏剧。那时我就放弃了 Gnome,我尽我所能的避开它。当时它对用户是不友好的,而且不直观,它打破了原有的设计典范,只为平板的统治世界做准备……而根据平板下跌的销量来看,这样的未来不可能实现。 + +在 Gnome 3 后续发布了八个版本后,奇迹发生了。Gnome 变得对对用户友好了,变得直观了。它完美吗?当然不。我还是很讨厌它想推动的那种设计范例,我讨厌它总想给我强加一种工作流(work flow),但是在付出时间和耐心后,这两都能被接受。只要你能够回头去看看 Gnome Shell 那外星人一样的界面,然后开始跟 Gnome 的其它部分(特别是控制中心)互动,你就能发现 Gnome 绝对做对了:细节,对细节的关注! 
+ +人们能适应新的界面设计范例,能适应新的工作流—— iPhone 和 iPad 都证明了这一点——但真正让他们操心的一直是“纸割”——那些不完美的细节。 + +它带出了 KDE 和 Gnome 之间最重要的一个区别。Gnome 感觉像一个产品,像一种非凡的体验。你用它的时候,觉得它是完整的,你要的东西都触手可及。它让人感觉就像是一个拥有 Windows 或者 OS X 那样桌面体验的 Linux 桌面版:你要的都在里面,而且它是被同一个目标一致的团队中的同一个人写出来的。天,即使是一个应用程序发出的 sudo 请求都感觉是 Gnome 下的一个特意设计的部分,就像在 Windows 下的一样。而在 KDE 下感觉就是随便一个应用程序都能创建的那种各种外观的弹窗。它不像是以系统本身这样的正式身份停下来说“嘿,有个东西要请求管理员权限!你要给它吗?”。 + +KDE 让人体验不到有凝聚力的体验。KDE 像是在没有方向地打转,感觉没有完整的体验。它就像是一堆东西往不同的的方向移动,只不过恰好它们都有一个共同享有的工具包而已。如果开发者对此很开心,那么好吧,他们开心就好,但是如果他们想提供最好体验的话,那么就需要多关注那些小地方了。用户体验跟直观应当做为每一个应用程序的设计中心,应当有一个视野,知道 KDE 要提供什么——并且——知道它看起来应该是什么样的。 + +是不是有什么原因阻止我在 KDE 下使用 Gnome 磁盘管理? Rhythmbox 呢? Evolution 呢? 没有,没有,没有。但是这样说又错过了关键。Gnome 和 KDE 都称它们自己为“桌面环境”。那么它们就应该是完整的环境,这意味着他们的各个部件应该汇集并紧密结合在一起,意味着你应该使用它们环境下的工具,因为它们说“您在一个完整的桌面中需要的任何东西,我们都支持。”说真的?只有 Gnome 看起来能符合完整的要求。KDE 在“汇集在一起”这一方面感觉就像个半成品,更不用说提供“完整体验”中你所需要的东西。Gnome 磁盘管理没有相应的对手—— kpartionmanage 要求 ROOT 权限。KDE 不运行“首次用户注册”的过程(原文:No 'First Time User' run through。可能是指系统安装过程中KDE没有创建新用户的过程,译注) ,现在也不过是在 Kubuntu 下引入了一个用户管理器。老天,Gnome 甚至提供了地图、笔记、日历和时钟应用。这些应用都是百分百要紧的吗?不,当然不了。但是正是这些应用帮助 Gnome 推动“Gnome 是一种完整丰富的体验”的想法。 + +我吐槽的 KDE 问题并非不可能解决,决对不是这样的!但是它需要人去关心它。它需要开发者为他们的作品感到自豪,而不仅仅是为它们实现的功能而感到自豪——组织的价值可大了去了。别夺走用户设置选项的能力—— GNOME 3.x 就是因为缺乏配置选项的能力而为我所诟病,但别把“好吧,你想怎么设置就怎么设置”作为借口而不提供任何理智的默认设置。默认设置是用户将看到的东西,它们是用户从打开软件的第一刻开始进行评判的关键。给用户留个好印象吧。 + +我知道 KDE 开发者们知道设计很重要,这也是为什么VDG(Visual Design Group 视觉设计组)存在的原因,但是感觉好像他们没有让 VDG 充分发挥,所以 KDE 里存在组织上的缺陷。不是 KDE 没办法完整,不是它没办法汇集整合在一起然后解决衰败问题,只是开发者们没做到。他们瞄准了靶心……但是偏了。 + +还有,在任何人说这句话之前……千万别说“欢迎给我们提交补丁啊"。因为当我开心的为某个人提交补丁时,只要开发者坚持以他们喜欢的却不直观的方式干事,更多这样的烦人事就会不断发生。这不关 Muon 有没有中心对齐。也不关 Amarok 的界面太丑。也不关每次我敲下快捷键后,弹出的音量和亮度调节窗口占用了我一大块的屏幕“地皮”(说真的,有人会把这些东西缩小)。 + +这跟心态的冷漠有关,跟开发者们在为他们的应用设计 UI 时根本就不多加思考有关。KDE 团队做的东西都工作得很好。Amarok 能播放音乐。Dragon 能播放视频。Kwin 或 Qt 和 kdelibs 似乎比 Mutter/gtk 更有力更效率(仅根据我的电池电量消耗计算。非科学性测试)。这些都很好,很重要……但是它们呈现的方式也很重要。甚至可以说,呈现方式是最重要的,因为它是用户看到的并与之交互的东西。 + +KDE 应用开发者们……让 VDG 参与进来吧。让 VDG 审查并核准每一个“核心”应用,让一个 VDG 的 UI/UX 专家来设计应用的使用模式和使用流程,以此保证其直观性。真见鬼,不管你们在开发的是啥应用,仅仅把它的模型发到 VDG 论坛寻求反馈甚至都可能都能得到一些非常好的指点跟反馈。你有这么好的资源在这,现在赶紧用吧。 + +我不想说得好像我一点都不懂感恩。我爱 KDE,我爱那些志愿者们为了给 Linux 用户一个可视化的桌面而付出的工作与努力,也爱可供选择的 Gnome。正是因为我关心我才写这篇文章。因为我想看到更好的 KDE,我想看到它走得比以前更加遥远。而这样做需要每个人继续努力,并且需要人们不再躲避批评。它需要人们对系统互动及系统崩溃的地方都保持诚实。如果我们不能直言批评,如果我们不说“这真垃圾!”,那么情况永远不会变好。 + +这周后我会继续使用 Gnome 吗?可能不。应该不。Gnome 还在试着强迫我接受其工作流,而我不想追随,也不想遵循,因为我在使用它的时候感觉变得不够高效,因为它并不遵循我的思维模式。可是对于我的朋友们,当他们问我“我该用哪种桌面环境?”我可能会推荐 Gnome,特别是那些不大懂技术,只要求“能工作”就行的朋友。根据目前 KDE 的形势来看,这可能是我能说出的最狠毒的评估了。 + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5 + +作者:Eric Griffith +译者:[XLCYun](https://github.com/XLCYun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md b/published/201508/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md similarity index 100% rename from published/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md rename to published/201508/20150717 How to Configure Chef (server or client) on Ubuntu 14.04 or 15.04.md diff --git a/published/20150717 How to collect NGINX metrics - Part 2.md b/published/201508/20150717 How to collect NGINX metrics - Part 2.md similarity index 100% rename from published/20150717 How to collect NGINX metrics - Part 2.md rename to published/201508/20150717 How to collect NGINX metrics - Part 2.md diff 
--git a/published/201508/20150717 How to monitor NGINX with Datadog - Part 3.md b/published/201508/20150717 How to monitor NGINX with Datadog - Part 3.md
new file mode 100644
index 0000000000..fecab87e66
--- /dev/null
+++ b/published/201508/20150717 How to monitor NGINX with Datadog - Part 3.md
@@ -0,0 +1,146 @@
+如何使用 Datadog 监控 NGINX(第三篇)
+================================================================================
+![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png)
+
+如果你已经阅读了前面的[如何监控 NGINX][1],你应该知道从你网络环境的几个指标中可以获取多少信息。而且你也看到了从 NGINX 收集这类指标是多么容易。但要实现全面、持续的 NGINX 监控,你需要一个强大的监控系统来存储指标并将其可视化,当异常发生时能提醒你。在这篇文章中,我们将向你展示如何使用 Datadog 安装 NGINX 监控,以便你可以在定制的仪表盘中查看这些指标:
+
+![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png)
+
+Datadog 允许你以单个主机、服务、进程和度量来构建图形和警告,或者使用它们的几乎任何组合构建。例如,你可以监控你的所有主机,或者某个特定可用区域的所有 NGINX 主机,或者您可以监视具有特定标签的所有主机的一个关键指标。本文将告诉您如何:
+
+- 在 Datadog 仪表盘上监控 NGINX 指标,就像监控其他系统一样
+- 当一个关键指标急剧变化时设置自动警报来通知你
+
+### 配置 NGINX ###
+
+为了收集 NGINX 指标,首先需要确保 NGINX 已启用 status 模块和一个报告 status 指标的 URL。一步步的[配置开源 NGINX][2] 和 [NGINX Plus][3] 请参见之前的相关文章。
+
+### 整合 Datadog 和 NGINX ###
+
+#### 安装 Datadog 代理 ####
+
+Datadog 代理是[一个开源软件][4],它能收集和报告你主机的指标,这样就可以使用 Datadog 查看和监控它们。安装这个代理通常[仅需要一个命令][5]。
+
+只要你的代理启动并运行着,你会看到你主机的指标报告[在你 Datadog 账号下][6]。
+
+![Datadog infrastructure list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png)
+
+#### 配置 Agent ####
+
+接下来,你需要为代理创建一个简单的 NGINX 配置文件。在你系统中代理的配置目录应该[在这儿][7]找到。
+
+在目录里面的 conf.d/nginx.yaml.example 中,你会发现[一个简单的配置文件][8],你可以编辑它,为每个 NGINX 实例提供 status URL 和可选的标签:
+
+    init_config:
+
+    instances:
+      - nginx_status_url: http://localhost/nginx_status/
+        tags:
+          - instance:foo
+
+当你提供了 status URL 和任意 tag,将配置文件保存为 conf.d/nginx.yaml。
+
+#### 重启代理 ####
+
+你必须重新启动代理程序来加载新的配置文件。重新启动命令[在这里][9],根据平台的不同而不同。
+
+#### 检查配置文件 ####
+
+要检查 Datadog 和 NGINX 是否正确整合,运行 Datadog 的 info 命令。每个平台使用的命令[看这儿][10]。
+
+如果配置是正确的,你会看到这样的输出:
+
+    Checks
+    ======
+
+    [...]
+
+    nginx
+    -----
+    - instance #0 [OK]
+    - Collected 8 metrics & 0 events
+
+#### 安装整合 ####
+
+最后,在你的 Datadog 帐户打开“Nginx 整合”。这非常简单,你只要在 [NGINX 整合设置][11]中点击“Install Integration”按钮。
+
+![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png)
+
+### 指标! ###
+
+一旦代理开始报告 NGINX 指标,你会看到[一个 NGINX 仪表盘][12]出现在你 Datadog 可用仪表盘的列表中。
+
+基本的 NGINX 仪表盘显示有用的图表,囊括了几个[我们的 NGINX 监控介绍][13]中的关键指标。(一些指标,特别是请求处理时间,需要进行日志分析,Datadog 不支持。)
+
+你可以通过增加 NGINX 之外的重要指标的图表来轻松创建一个全面的仪表盘,以监控你的整个网站设施。例如,你可能想监视你 NGINX 的主机级的指标,如系统负载。要构建一个自定义的仪表盘,只需点击靠近仪表盘的右上角的选项并选择“Clone Dash”来克隆一个默认的 NGINX 仪表盘。
+
+![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png)
+
+你也可以使用 Datadog 的[主机地图][14]在更高层面监控你的 NGINX 实例,举个例子,用颜色标示你所有的 NGINX 主机的 CPU 使用率来辨别潜在热点。
+
+![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png)
+
+### NGINX 指标警告 ###
+
+一旦 Datadog 捕获并可视化你的指标,你可能会希望建立一些监控,自动地密切关注你的指标,并在有问题时提醒你。下面将介绍一个典型的例子:一个在 NGINX 吞吐量突然下降时提醒你的指标监控器。
+
+#### 监控 NGINX 吞吐量 ####
+
+Datadog 指标警报可以是“基于阈值的”(当指标超过设定值会警报)或“基于变化幅度的”(当指标的变化超过一定范围会警报)。在这个例子里,我们会采取后一种方式,当每秒传入的请求急剧下降时会提醒我们。下降往往意味着有问题。
+
+1. **创建一个新的指标监控**。从 Datadog 的“Monitors”下拉列表中选择“New Monitor”。选择“Metric”作为监视器类型。
+
+    ![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png)
+
+2. **定义你的指标监视器**。我们想知道 NGINX 每秒总的请求量下降的数量,所以我们在基础设施中定义我们感兴趣的 nginx.net.request_per_s 之和。
+
+    ![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png)
+
+3. 
**设置指标警报条件**。我们想要在变化时警报,而不是一个固定的值,所以我们选择“Change Alert”。我们设置监控为无论何时请求量下降了30%以上时警报。在这里,我们使用一个一分钟的数据窗口来表示 “now” 指标的值,对横跨该间隔内的平均变化和之前 10 分钟的指标值作比较。 + + ![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png) + +4. **自定义通知**。如果 NGINX 的请求量下降,我们想要通知我们的团队。在这个例子中,我们将给 ops 团队的聊天室发送通知,并给值班工程师发送短信。在“Say what’s happening”中,我们会为监控器命名,并添加一个伴随该通知的短消息,建议首先开始调查的内容。我们会 @ ops 团队使用的 Slack,并 @pagerduty [将警告发给短信][15]。 + + ![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png) + +5. **保存集成监控**。点击页面底部的“Save”按钮。你现在在监控一个关键的 NGINX [工作指标][16],而当它快速下跌时会给值班工程师发短信。 + +### 结论 ### + +在这篇文章中,我们谈到了通过整合 NGINX 与 Datadog 来可视化你的关键指标,并当你的网络基础架构有问题时会通知你的团队。 + +如果你一直使用你自己的 Datadog 账号,你现在应该可以极大的提升你的 web 环境的可视化,也有能力对你的环境、你所使用的模式、和对你的组织最有价值的指标创建自动监控。 + +如果你还没有 Datadog 帐户,你可以注册[免费试用][17],并开始监视你的基础架构,应用程序和现在的服务。 + +------------------------------------------------------------ + +via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ + +作者:K Young +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://linux.cn/article-5970-1.html +[2]:https://linux.cn/article-5985-1.html#open-source +[3]:https://linux.cn/article-5985-1.html#plus +[4]:https://github.com/DataDog/dd-agent +[5]:https://app.datadoghq.com/account/settings#agent +[6]:https://app.datadoghq.com/infrastructure +[7]:http://docs.datadoghq.com/guides/basic_agent_usage/ +[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example +[9]:http://docs.datadoghq.com/guides/basic_agent_usage/ +[10]:http://docs.datadoghq.com/guides/basic_agent_usage/ +[11]:https://app.datadoghq.com/account/settings#integrations/nginx +[12]:https://app.datadoghq.com/dash/integration/nginx +[13]:https://linux.cn/article-5970-1.html +[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/ +[15]:https://www.datadoghq.com/blog/pagerduty/ +[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics +[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up +[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md +[19]:https://github.com/DataDog/the-monitor/issues diff --git a/published/20150717 How to monitor NGINX- Part 1.md b/published/201508/20150717 How to monitor NGINX- Part 1.md similarity index 100% rename from published/20150717 How to monitor NGINX- Part 1.md rename to published/201508/20150717 How to monitor NGINX- Part 1.md diff --git a/published/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md b/published/201508/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md similarity index 100% rename from published/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md rename to published/201508/20150717 Howto Configure FTP Server with Proftpd on Fedora 22.md diff --git a/translated/tech/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md b/published/201508/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md similarity index 91% rename from translated/tech/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md rename to published/201508/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md index cebfab93c4..c4a9e43e85 100644 --- a/translated/tech/20150722 How To Fix 'The Update 
Information Is Outdated' In Ubuntu 14.04.md +++ b/published/201508/20150722 How To Fix 'The Update Information Is Outdated' In Ubuntu 14.04.md @@ -2,7 +2,7 @@ Ubuntu 14.04中修复“update information is outdated”错误 ================================================================================ ![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/07/Fix_update_information_is_outdated.jpeg) -看到Ubuntu 14.04的顶部面板上那个显示下面这个错误的红色三角形了吗? +看到过Ubuntu 14.04的顶部面板上那个显示下面这个错误的红色三角形了吗? > 更新信息过时。该错误可能是由网络问题,或者某个仓库不再可用而造成的。请通过从指示器菜单中选择‘显示更新’来手动更新,然后查看是否存在有失败的仓库。 > @@ -25,7 +25,7 @@ Ubuntu 14.04中修复“update information is outdated”错误 ### 修复‘update information is outdated’错误 ### -这里讨论的‘解决方案’可能对Ubuntu的这些版本有用:Ubuntu 14.04,12.04或14.04。你所要做的仅仅是打开终端(Ctrl+Alt+T),然后使用下面的命令: +这里讨论的‘解决方案’可能对Ubuntu的这些版本有用:Ubuntu 14.04,12.04。你所要做的仅仅是打开终端(Ctrl+Alt+T),然后使用下面的命令: sudo apt-get update @@ -47,7 +47,7 @@ via: http://itsfoss.com/fix-update-information-outdated-ubuntu/ 作者:[Abhishek][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -56,4 +56,4 @@ via: http://itsfoss.com/fix-update-information-outdated-ubuntu/ [2]:http://itsfoss.com/notification-terminal-command-completion-ubuntu/ [3]:http://itsfoss.com/solve-gpg-error-signatures-verified-ubuntu/ [4]:http://itsfoss.com/install-spotify-ubuntu-1504/ -[5]:http://itsfoss.com/fix-update-errors-ubuntu-1404/ +[5]:https://linux.cn/article-5603-1.html diff --git a/published/20150722 How To Manage StartUp Applications In Ubuntu.md b/published/201508/20150722 How To Manage StartUp Applications In Ubuntu.md similarity index 100% rename from published/20150722 How To Manage StartUp Applications In Ubuntu.md rename to published/201508/20150722 How To Manage StartUp Applications In Ubuntu.md diff --git a/published/20150727 Easy Backup Restore and Migrate Containers in Docker.md b/published/201508/20150727 Easy Backup Restore and Migrate Containers in Docker.md similarity index 100% rename from published/20150727 Easy Backup Restore and Migrate Containers in Docker.md rename to published/201508/20150727 Easy Backup Restore and Migrate Containers in Docker.md diff --git a/published/20150728 How To Fix--There is no command installed for 7-zip archive files.md b/published/201508/20150728 How To Fix--There is no command installed for 7-zip archive files.md similarity index 100% rename from published/20150728 How To Fix--There is no command installed for 7-zip archive files.md rename to published/201508/20150728 How To Fix--There is no command installed for 7-zip archive files.md diff --git a/published/20150728 How to Update Linux Kernel for Improved System Performance.md b/published/201508/20150728 How to Update Linux Kernel for Improved System Performance.md similarity index 100% rename from published/20150728 How to Update Linux Kernel for Improved System Performance.md rename to published/201508/20150728 How to Update Linux Kernel for Improved System Performance.md diff --git a/published/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md b/published/201508/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md similarity index 100% rename from published/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory Usages of Browser.md rename to published/201508/20150728 Tips to Create ISO from CD, Watch User Activity and Check Memory 
Usages of Browser.md
diff --git a/published/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md b/published/201508/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md
similarity index 100%
rename from published/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md
rename to published/201508/20150728 Understanding Shell Commands Easily Using 'Explain Shell' Script in Linux.md
diff --git a/published/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md b/published/201508/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md
similarity index 100%
rename from published/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md
rename to published/201508/20150729 What is Logical Volume Management and How Do You Enable It in Ubuntu.md
diff --git a/published/20150730 Compare PDF Files on Ubuntu.md b/published/201508/20150730 Compare PDF Files on Ubuntu.md
similarity index 100%
rename from published/20150730 Compare PDF Files on Ubuntu.md
rename to published/201508/20150730 Compare PDF Files on Ubuntu.md
diff --git a/published/20150730 Must-Know Linux Commands For New Users.md b/published/201508/20150730 Must-Know Linux Commands For New Users.md
similarity index 100%
rename from published/20150730 Must-Know Linux Commands For New Users.md
rename to published/201508/20150730 Must-Know Linux Commands For New Users.md
diff --git a/published/20150803 Handy commands for profiling your Unix file systems.md b/published/201508/20150803 Handy commands for profiling your Unix file systems.md
similarity index 100%
rename from published/20150803 Handy commands for profiling your Unix file systems.md
rename to published/201508/20150803 Handy commands for profiling your Unix file systems.md
diff --git a/published/20150803 Linux Logging Basics.md b/published/201508/20150803 Linux Logging Basics.md
similarity index 100%
rename from published/20150803 Linux Logging Basics.md
rename to published/201508/20150803 Linux Logging Basics.md
diff --git a/translated/tech/20150803 Troubleshooting with Linux Logs.md b/published/201508/20150803 Troubleshooting with Linux Logs.md
similarity index 61%
rename from translated/tech/20150803 Troubleshooting with Linux Logs.md
rename to published/201508/20150803 Troubleshooting with Linux Logs.md
index 5950a69d98..ca117d8af3 100644
--- a/translated/tech/20150803 Troubleshooting with Linux Logs.md
+++ b/published/201508/20150803 Troubleshooting with Linux Logs.md
@@ -1,10 +1,11 @@
 在 Linux 中使用日志来排错
 ================================================================================
-人们创建日志的主要原因是排错。通常你会诊断为什么问题发生在你的 Linux 系统或应用程序中。错误信息或一些列事件可以给你提供造成根本原因的线索,说明问题是如何发生的,并指出如何解决它。这里有几个使用日志来解决的样例。
+
+人们创建日志的主要原因是排错。通常你会诊断为什么问题发生在你的 Linux 系统或应用程序中。错误信息或一系列的事件可以给你提供找出根本原因的线索,说明问题是如何发生的,并指出如何解决它。这里有几个使用日志来解决的样例。

 ### 登录失败原因 ###

-如果你想检查你的系统是否安全,你可以在验证日志中检查登录失败的和登录成功但可疑的用户。当有人通过不正当或无效的凭据来登录时会出现认证失败,经常使用 SSH 进行远程登录或 su 到本地其他用户来进行访问权。这些是由[插入式验证模块][1]来记录,或 PAM 进行短期记录。在你的日志中会看到像 Failed 这样的字符串密码和未知的用户。成功认证记录包括像 Accepted 这样的字符串密码并打开会话。
+如果你想检查你的系统是否安全,你可以在验证日志中检查登录失败的和登录成功但可疑的用户。当有人通过不正当或无效的凭据来登录时会出现认证失败,这通常发生在使用 SSH 进行远程登录,或 su 到本地其他用户来获取访问权限时。这些是由[插入式验证模块(PAM)][1]来记录的。在你的日志中会看到像 Failed password 和 user unknown 这样的字符串。而成功认证记录则会包括像 Accepted password 和 session opened 这样的字符串。

 失败的例子:

@@ -30,22 +31,21 @@

 由于没有标准格式,所以你需要为每个应用程序的日志使用不同的命令。日志管理系统,可以自动分析日志,将它们有效的归类,帮助你提取关键字,如用户名。
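+
+举个例子(这里假设你的认证日志位于 /var/log/auth.log,这是 Debian/Ubuntu 系的默认位置;RHEL/CentOS 系一般是 /var/log/secure),下面这条命令可以统计出 SSH 登录失败次数最多的来源 IP:
+
+    $ grep "Failed password" /var/log/auth.log | grep -o "from [0-9.]*" | sort | uniq -c | sort -rn
+
+输出中第一列是失败次数,后面跟着来源 IP。某个 IP 的失败次数异常地高,往往就是暴力破解的迹象。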
-日志管理系统可以使用自动解析功能从 Linux 日志中提取用户名。这使你可以看到用户的信息,并能单个的筛选。在这个例子中,我们可以看到,root 用户登录了 2700 次,因为我们筛选的日志显示尝试登录的只有 root 用户。
+日志管理系统可以使用自动解析功能从 Linux 日志中提取用户名。这使你可以看到用户的信息,并能通过点击过滤。在下面这个例子中,我们可以看到,root 用户登录了 2700 次之多,因为我们筛选的日志仅显示 root 用户的尝试登录记录。

 ![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.05.36-AM.png)

-日志管理系统也让你以时间为做坐标轴的图标来查看使你更容易发现异常。如果有人在几分钟内登录失败一次或两次,它可能是一个真正的用户而忘记了密码。但是,如果有几百个失败的登录并且使用的都是不同的用户名,它更可能是在试图攻击系统。在这里,你可以看到在3月12日,有人试图登录 Nagios 几百次。这显然不是一个合法的系统用户。
+日志管理系统也可以让你以时间为坐标轴的图表来查看,使你更容易发现异常。如果有人在几分钟内登录失败一次或两次,它可能是一个真正的用户而忘记了密码。但是,如果有几百个失败的登录并且使用的都是不同的用户名,它更可能是在试图攻击系统。在这里,你可以看到在3月12日,有人试图登录 Nagios 几百次。这显然不是一个合法的系统用户。

 ![](http://www.loggly.com/ultimate-guide/wp-content/uploads/2015/05/Screen-Shot-2015-03-12-at-11.12.18-AM.png)

 ### 重启的原因 ###

-
 有时候,一台服务器由于系统崩溃或重启而宕机。你怎么知道它何时发生,是谁做的?

 #### 关机命令 ####

-如果有人手动运行 shutdown 命令,你可以看到它的身份在验证日志文件中。在这里,你可以看到,有人从 IP 50.0.134.125 上作为 ubuntu 的用户远程登录了,然后关闭了系统。
+如果有人手动运行 shutdown 命令,你可以在验证日志文件中看到它。在这里,你可以看到,有人从 IP 50.0.134.125 上作为 ubuntu 的用户远程登录了,然后关闭了系统。

    Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: Accepted publickey for ubuntu from 50.0.134.125 port 52538 ssh
    Mar 19 18:36:41 ip-172-31-11-231 sshd[23437]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)

@@ -53,7 +53,7 @@

 #### 内核初始化 ####

-如果你想看看服务器重新启动的所有原因(包括崩溃),你可以从内核初始化日志中寻找。你需要搜索内核设施和初始化 cpu 的信息。
+如果你想看看服务器重新启动的所有原因(包括崩溃),你可以从内核初始化日志中寻找。你需要搜索内核类(kernel)和 cpu 初始化(Initializing)的信息。

    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpuset
    Mar 19 18:39:30 ip-172-31-11-231 kernel: [ 0.000000] Initializing cgroup subsys cpu

@@ -61,9 +61,9 @@

 ### 检测内存问题 ###

-有很多原因可能导致服务器崩溃,但一个普遍的原因是内存用尽。
+有很多原因可能导致服务器崩溃,但一个常见的原因是内存用尽。

-当你系统的内存不足时,进程会被杀死,通常会杀死使用最多资源的进程。当系统正在使用的内存发生错误并且有新的或现有的进程试图使用更多的内存。在你的日志文件查找像 Out of Memory 这样的字符串,内核也会发出杀死进程的警告。这些信息表明系统故意杀死进程或应用程序,而不是允许进程崩溃。
+当你系统的内存不足时,进程会被杀死,通常会杀死使用最多资源的进程。当系统使用了所有内存,而新的或现有的进程试图使用更多的内存时就会出现错误。在你的日志文件查找像 Out of Memory 这样的字符串或类似 kill 这样的内核警告信息。这些信息表明系统故意杀死进程或应用程序,而不是允许进程崩溃。

 例如:

    $ grep "Out of memory" /var/log/syslog
    [33238.178288] Out of memory: Kill process 6230 (firefox) score 53 or sacrifice child

-请记住,grep 也要使用内存,所以导致内存不足的错误可能只是运行的 grep。这是另一个分析日志的独特方法!
+请记住,grep 也要使用内存,所以只是运行 grep 也可能导致内存不足的错误。这是另一个你应该中央化存储日志的原因!
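+
+顺着这个思路,下面是一个小示例(同样假设系统日志位于 /var/log/syslog,Debian/Ubuntu 系默认如此,其他发行版可能是 /var/log/messages),用来统计各个进程被 OOM 杀死的次数,以便快速找出最常被杀死的那一个:
+
+    $ grep "Out of memory" /var/log/syslog | sed 's/.*(\(.*\)).*/\1/' | sort | uniq -c | sort -rn
+
+输出的第一列是被杀死的次数,第二列是进程名。反复出现在最前面的程序,就是你应该优先排查的对象。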
 ### 定时任务错误日志 ###

-cron 守护程序是一个调度器只在指定的日期和时间运行进程。如果进程运行失败或无法完成,那么 cron 的错误出现在你的日志文件中。你可以找到这些文件在 /var/log/cron,/var/log/messages,和 /var/log/syslog 中,具体取决于你的发行版。cron 任务失败原因有很多。通常情况下,问题出在进程中而不是 cron 守护进程本身。
+cron 守护程序是一个调度器,可以在指定的日期和时间运行进程。如果进程运行失败或无法完成,那么 cron 的错误出现在你的日志文件中。具体取决于你的发行版,你可以在 /var/log/cron,/var/log/messages,和 /var/log/syslog 几个位置找到这个日志。cron 任务失败原因有很多。通常情况下,问题出在进程中而不是 cron 守护进程本身。

-默认情况下,cron 作业会通过电子邮件发送信息。这里是一个日志中记录的发送电子邮件的内容。不幸的是,你不能看到邮件的内容在这里。
+默认情况下,cron 任务的输出会通过 postfix 发送电子邮件。这是一个显示了该邮件已经发送的日志。不幸的是,你不能在这里看到邮件的内容。

    Mar 13 16:35:01 PSQ110 postfix/pickup[15158]: C3EDC5800B4: uid=1001 from=
    Mar 13 16:35:01 PSQ110 postfix/cleanup[15727]: C3EDC5800B4: message-id=<20150310110501.C3EDC5800B4@PSQ110>
    Mar 13 16:35:01 PSQ110 postfix/qmgr[15159]: C3EDC5800B4: from=, size=607, nrcpt=1 (queue active)
    Mar 13 16:35:05 PSQ110 postfix/smtp[15729]: C3EDC5800B4: to=, relay=gmail-smtp-in.l.google.com[74.125.130.26]:25, delay=4.1, delays=0.26/0/2.2/1.7, dsn=2.0.0, status=sent (250 2.0.0 OK 1425985505 f16si501651pdj.5 - gsmtp)

-你应该想想 cron 在日志中的标准输出以帮助你定位问题。这里展示你可以使用 logger 命令重定向 cron 标准输出到 syslog。用你的脚本来代替 echo 命令,helloCron 可以设置为任何你想要的应用程序的名字。
+你可以考虑将 cron 的标准输出记录到日志中,以帮助你定位问题。这是一个你怎样使用 logger 命令重定向 cron 标准输出到 syslog 的例子。用你的脚本来代替 echo 命令,helloCron 可以设置为任何你想要的应用程序的名字。

    */5 * * * * echo 'Hello World' 2>&1 | /usr/bin/logger -t helloCron

 创建的输出如下:

    Apr 28 22:20:01 ip-172-31-11-231 CRON[15296]: (ubuntu) CMD (echo 'Hello World!' 2>&1 | /usr/bin/logger -t helloCron)
    Apr 28 22:20:01 ip-172-31-11-231 helloCron: Hello World!

-每个 cron 作业将根据作业的具体类型以及如何输出数据来记录不同的日志。希望在日志中有问题根源的线索,也可以根据需要添加额外的日志记录。
+每个 cron 任务将根据任务的具体类型以及如何输出数据来记录不同的日志。
+
+希望在日志中有问题根源的线索,也可以根据需要添加额外的日志记录。

 --------------------------------------------------------------------------------

@@ -107,7 +109,7 @@ via: http://www.loggly.com/ultimate-guide/logging/troubleshooting-with-linux-log
 作者:[Amy Echeverri][a2]
 作者:[Sadequl Hussain][a3]
 译者:[strugglingyouth](https://github.com/strugglingyouth)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)

 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/published/20150806 5 Reasons Why Software Developer is a Great Career Choice.md b/published/201508/20150806 5 Reasons Why Software Developer is a Great Career Choice.md
similarity index 100%
rename from published/20150806 5 Reasons Why Software Developer is a Great Career Choice.md
rename to published/201508/20150806 5 Reasons Why Software Developer is a Great Career Choice.md
diff --git a/published/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md b/published/201508/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md
similarity index 100%
rename from published/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md
rename to published/201508/20150806 Linux FAQs with Answers--How to fix 'ImportError--No module named wxversion' on Linux.md
diff --git a/published/20150806 Linux FAQs with Answers--How to install git on Linux.md b/published/201508/20150806 Linux FAQs with Answers--How to install git on Linux.md
similarity index 100%
rename from published/20150806 Linux FAQs with Answers--How to install git on Linux.md
rename to published/201508/20150806 Linux FAQs with Answers--How to install git on Linux.md
diff --git a/published/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like 
System.md b/published/201508/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md similarity index 100% rename from published/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md rename to published/201508/20150807 How To--Temporarily Clear Bash Environment Variables on a Linux and Unix-like System.md diff --git a/published/20150810 For Linux, Supercomputers R Us.md b/published/201508/20150810 For Linux, Supercomputers R Us.md similarity index 100% rename from published/20150810 For Linux, Supercomputers R Us.md rename to published/201508/20150810 For Linux, Supercomputers R Us.md diff --git a/published/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md b/published/201508/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md similarity index 100% rename from published/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md rename to published/201508/20150811 Darkstat is a Web Based Network Traffic Analyzer--Install it on Linux.md diff --git a/published/20150811 How to download apk files from Google Play Store on Linux.md b/published/201508/20150811 How to download apk files from Google Play Store on Linux.md similarity index 100% rename from published/20150811 How to download apk files from Google Play Store on Linux.md rename to published/201508/20150811 How to download apk files from Google Play Store on Linux.md diff --git a/published/201508/20150813 How to Install Logwatch on Ubuntu 15.04.md b/published/201508/20150813 How to Install Logwatch on Ubuntu 15.04.md new file mode 100644 index 0000000000..4ea05688cd --- /dev/null +++ b/published/201508/20150813 How to Install Logwatch on Ubuntu 15.04.md @@ -0,0 +1,138 @@ +如何在 Ubuntu 15.04 系统中安装 Logwatch +================================================================================ + +大家好,今天我们会讲述在 Ubuntu 15.04 操作系统上如何安装 Logwatch 软件,它也可以在各种 Linux 系统和类 Unix 系统上安装。Logwatch 是一款可定制的日志分析和日志监控报告生成系统,它可以根据一段时间的日志文件生成您所希望关注的详细报告。它具有易安装、易配置、可审查等特性,同时对其提供的数据的安全性上也有一些保障措施。Logwatch 会扫描重要的操作系统组件像 SSH、网站服务等的日志文件,然后生成用户所关心的有价值的条目汇总报告。 + +### 预安装设置 ### + +我们会使用 Ubuntu 15.04 版本的操作系统来部署 Logwatch,所以安装 Logwatch 之前,要确保系统上邮件服务设置是正常可用的。因为它会每天把生成的报告通过日报的形式发送邮件给管理员。您的系统的源库也应该设置可用,以便可以从通用源库来安装 Logwatch。 + +然后打开您 ubuntu 系统的终端,用 root 账号登陆,在进入 Logwatch 的安装操作前,先更新您的系统软件包。 + + root@ubuntu-15:~# apt-get update + +### 安装 Logwatch ### + +只要你的系统已经更新和已经满足前面说的先决条件,那么就可以在您的机器上输入如下命令来安装 Logwatch。 + + root@ubuntu-15:~# apt-get install logwatch + +在安装过程中,一旦您按提示按下“Y”键同意对系统修改的话,Logwatch 将会开始安装一些额外的必须软件包。 + +在安装过程中会根据您机器上的邮件服务器设置情况弹出提示对 Postfix 设置的配置界面。在这篇教程中我们使用最容易的 “仅本地(Local only)” 选项。根据您的基础设施情况也可以选择其它的可选项,然后点击“确定”继续。 + +![Potfix Configurations](http://blog.linoxide.com/wp-content/uploads/2015/08/21.png) + +随后您得选择邮件服务器名,这邮件服务器名也会被其它程序使用,所以它应该是一个完全合格域名/全称域名(FQDN)。 + +![Postfix Setup](http://blog.linoxide.com/wp-content/uploads/2015/08/31.png) + +一旦按下在 postfix 配置提示底端的 “OK”,安装进程就会用 Postfix 的默认配置来安装,并且完成 Logwatch 的整个安装。 + +![Logwatch Completion](http://blog.linoxide.com/wp-content/uploads/2015/08/41.png) + +您可以在终端下发出如下命令来检查 Logwatch 状态,正常情况下它应该是激活状态。 + + root@ubuntu-15:~# service postfix status + +![Postfix Status](http://blog.linoxide.com/wp-content/uploads/2015/08/51.png) + +要确认 Logwatch 在默认配置下的安装信息,可以如下示简单的发出“logwatch” 命令。 + + root@ubuntu-15:~# logwatch + +上面执行命令的输出就是终端下编制出的报表展现格式。 + +![Logwatch Report](http://blog.linoxide.com/wp-content/uploads/2015/08/61.png) + +### 配置 Logwatch ### + +在成功安装好 Logwatch 
后,我们需要在它的配置文件中做一些修改,配置文件位于如下所示的路径。那么,就让我们用文本编辑器打开它,然后按需要做些变动。

+    root@ubuntu-15:~# vim /usr/share/logwatch/default.conf/logwatch.conf
+
+**输出/格式化选项**
+
+默认情况下 Logwatch 会以无编码的文本打印到标准输出方式。要改为以邮件为默认方式,需设置“Output = mail”,要改为保存成文件方式,需设置“Output = file”。所以您可以根据您的要求设置其默认配置。
+
+    Output = stdout
+
+如果使用的是因特网电子邮件配置,要以 Html 格式作为默认输出格式,需要修改成如下行所示的样子。
+
+    Format = text
+
+现在增加默认的邮件报告接收人地址,可以是本地账号也可以是完整的邮件地址,都可以写在这一行上。
+
+    MailTo = root
+    #MailTo = user@test.com
+
+默认的邮件发送人可以是本地账号,也可以是您需要使用的其它名字。
+
+    # complete email address.
+    MailFrom = Logwatch
+
+对这个配置文件保存修改,至于其它的参数就让它保持默认,无需改动。
+
+**调度任务配置**
+
+现在编辑在 “daily crons” 目录下的 “00logwatch” 文件来配置从 logwatch 生成的报告需要发送的邮件地址。
+
+    root@ubuntu-15:~# vim /etc/cron.daily/00logwatch
+
+在这儿您需要使用“--mailto user@test.com”来替换掉“--output mail”,然后保存文件。
+
+![Logwatch Cronjob](http://blog.linoxide.com/wp-content/uploads/2015/08/71.png)
+
+### 生成报告 ###
+
+现在我们在终端中执行“logwatch”命令来生成测试报告,生成的结果在终端中会以文本格式显示出来。
+
+    root@ubuntu-15:~# logwatch
+
+生成的报告开始部分显示的是执行的时间和日期。它包含不同的部分,每个部分以开始标识开始而以结束标识结束,中间显示的是该部分的完整信息。
+
+这儿显示的是开始的样子,它以显示系统上所有安装的软件包的部分开始,如下所示:
+
+![dpkg status](http://blog.linoxide.com/wp-content/uploads/2015/08/81.png)
+
+接下来的部分显示的日志信息是关于当前系统登录会话、rsyslogs 和当前及最近的 SSH 会话信息。
+
+![logwatch report](http://blog.linoxide.com/wp-content/uploads/2015/08/9.png)
+
+Logwatch 报告最后显示的是安全方面的 sudo 日志及根目录磁盘使用情况,如下所示:
+
+![Logwatch end report](http://blog.linoxide.com/wp-content/uploads/2015/08/10.png)
+
+您也可以打开如下的文件来查看生成的 logwatch 报告电子邮件。
+
+    root@ubuntu-15:~# vim /var/mail/root
+
+您会看到发送给你配置的用户的所有已生成的邮件及其邮件递交状态。
+
+### 更多详情 ###
+
+Logwatch 是一款很不错的工具,可以学习的东西很多,所以如果您对它的日志监控功能很感兴趣的话,也可以通过如下所示的简短命令来获得更多帮助。
+
+    root@ubuntu-15:~# man logwatch
+
+上面的命令包含所有关于 logwatch 的用户手册,所以仔细阅读,要退出手册的话可以简单的输入“q”。
+
+关于 logwatch 命令的使用,您可以使用如下所示的帮助命令来获得更多的详细信息。
+
+    root@ubuntu-15:~# logwatch --help
+
+### 结论 ###
+
+教程结束,您也学会了如何在 Ubuntu 15.04 上对 Logwatch 的安装、配置等全部设置指导。现在您就可以自定义监控您的系统日志,不管是监控所有服务的运行情况还是对特定的服务在指定的时间发送报告都可以。所以,开始使用这工具吧,无论何时有问题或想知道更多关于 logwatch 的使用的都可以给我们留言。
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/
+
+作者:[Kashif Siddique][a]
+译者:[runningwater](https://github.com/runningwater)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
diff --git a/published/201508/20150813 How to get Public IP from Linux Terminal.md b/published/201508/20150813 How to get Public IP from Linux Terminal.md
new file mode 100644
index 0000000000..c454db655c
--- /dev/null
+++ b/published/201508/20150813 How to get Public IP from Linux Terminal.md
@@ -0,0 +1,70 @@
+如何在 Linux 终端中知道你的公有 IP
+================================================================================
+![](http://www.blackmoreops.com/wp-content/uploads/2015/06/256x256xHow-to-get-Public-IP-from-Linux-Terminal-blackMORE-Ops.png.pagespeed.ic.GKEAEd4UNr.png)
+
+公有地址由 InterNIC 分配并由基于类的网络 ID 或基于 CIDR 的地址块构成(被称为 CIDR 块),并保证了在全球互联网中的唯一性。当公有地址被分配时,其路由将会被记录到互联网中的路由器中,这样访问公有地址的流量就能顺利到达。比如,当一个 CIDR 块被以网络 ID 和子网掩码的形式分配给一个组织时,对应的 [网络 ID,子网掩码] 也会同时作为路由储存在互联网中的路由器中。目标是 CIDR 块中的地址的 IP 封包会被导向对应的位置。
+
+在本文中我将会介绍几种在 Linux 终端中查看你的公有 IP 地址的方法。这对普通用户来说用处不大,但对 Linux 服务器(无GUI或者作为只能使用基本工具的用户登录时)会很有用。无论如何,从 Linux 终端中获取公有 IP 在各种方面都很有意义,说不定某一天就能用得着。
+
+以下是我们主要使用的两个命令,curl 和 wget。你可以换着用。
+
+### Curl 纯文本格式输出: ###
+
+    curl icanhazip.com
+    curl ifconfig.me
+    curl curlmyip.com
+    curl ip.appspot.com
+    curl ipinfo.io/ip
+    curl ipecho.net/plain
+    
curl www.trackip.net/i + +### curl JSON格式输出: ### + + curl ipinfo.io/json + curl ifconfig.me/all.json + curl www.trackip.net/ip?json (有点丑陋) + +### curl XML格式输出: ### + + curl ifconfig.me/all.xml + +### curl 得到所有IP细节 (挖掘机)### + + curl ifconfig.me/all + +### 使用 DYDNS (当你使用 DYDNS 服务时有用)### + + curl -s 'http://checkip.dyndns.org' | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g' + curl -s http://checkip.dyndns.org/ | grep -o "[[:digit:].]\+" + +### 使用 Wget 代替 Curl ### + + wget http://ipecho.net/plain -O - -q ; echo + wget http://observebox.com/ip -O - -q ; echo + +### 使用 host 和 dig 命令 ### + +如果有的话,你也可以直接使用 host 和 dig 命令。 + + host -t a dartsclink.com | sed 's/.*has address //' + dig +short myip.opendns.com @resolver1.opendns.com + +### bash 脚本示例: ### + + #!/bin/bash + + PUBLIC_IP=`wget http://ipecho.net/plain -O - -q ; echo` + echo $PUBLIC_IP + +简单易用。 + +我实际上是在写一个用于记录每日我的路由器中所有 IP 变化并保存到一个文件的脚本。我在搜索过程中找到了这些很好用的命令。希望某天它能帮到其他人。 + +-------------------------------------------------------------------------------- + +via: http://www.blackmoreops.com/2015/06/14/how-to-get-public-ip-from-linux-terminal/ + +译者:[KevinSJ](https://github.com/KevinSJ) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md b/published/201508/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md similarity index 100% rename from published/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md rename to published/201508/20150813 Ubuntu Want To Make It Easier For You To Install The Latest Nvidia Linux Driver.md diff --git a/published/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md b/published/201508/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md similarity index 100% rename from published/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md rename to published/201508/20150816 Ubuntu NVIDIA Graphics Drivers PPA Is Ready For Action.md diff --git a/translated/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md b/published/201508/20150816 shellinabox--A Web based AJAX Terminal Emulator.md similarity index 66% rename from translated/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md rename to published/201508/20150816 shellinabox--A Web based AJAX Terminal Emulator.md index 71acf990c1..c4d9523d50 100644 --- a/translated/share/20150816 shellinabox--A Web based AJAX Terminal Emulator.md +++ b/published/201508/20150816 shellinabox--A Web based AJAX Terminal Emulator.md @@ -1,16 +1,17 @@ -shellinabox–基于Web的Ajax的终端模拟器安装及使用详解 +shellinabox:一款使用 AJAX 的基于 Web 的终端模拟器 ================================================================================ + ### shellinabox简介 ### -unixmen的读者朋友们,你们好! 
+通常情况下,我们在访问任何远程服务器时,会使用常见的通信工具如OpenSSH和Putty等。但是,有可能我们在防火墙后面不能使用这些工具访问远程系统,或者防火墙只允许HTTPS流量才能通过。不用担心!即使你在这样的防火墙后面,我们依然有办法来访问你的远程系统。而且,你不需要安装任何类似于OpenSSH或Putty的通讯工具。你只需要有一个支持JavaScript和CSS的现代浏览器,并且你不用安装任何插件或第三方应用软件。
+
+这个 **Shell In A Box**,发音是**shellinabox**,是由**Markus Gutschke**开发的一款自由开源的基于Web的Ajax的终端模拟器。它使用AJAX技术,通过Web浏览器提供了类似原生的 Shell 的外观和感受。
+
+这个**shellinaboxd**守护进程实现了一个Web服务器,能够侦听指定的端口。其Web服务器可以发布一个或多个服务,这些服务显示在用 AJAX Web 应用实现的VT100模拟器中。默认情况下,端口为4200。你可以更改默认端口到任意选择的任意端口号。在你的远程服务器安装shellinabox以后,如果你想从本地系统接入,打开Web浏览器并导航到:**http://IP-Address:4200/**。输入你的用户名和密码,然后就可以开始使用你远程系统的Shell。看起来很有趣,不是吗?确实有趣!

 **免责声明**:

-shellinabox不是SSH客户端或任何安全软件。它仅仅是一个应用程序,能够通过Web浏览器模拟一个远程系统的壳。同时,它和SSH没有任何关系。这不是防弹的安全的方式来远程控制您的系统。这只是迄今为止最简单的方法之一。无论什么原因,你都不应该在任何公共网络上运行它。
+shellinabox不是SSH客户端或任何安全软件。它仅仅是一个应用程序,能够通过Web浏览器模拟一个远程系统的Shell。同时,它和SSH没有任何关系。这不是一种可靠的、安全的远程控制您的系统的方式。这只是迄今为止最简单的方法之一。无论如何,你都不应该在任何公共网络上运行它。

 ### 安装shellinabox ###

@@ -48,7 +49,7 @@

     # vi /etc/sysconfig/shellinaboxd

-更改你的端口到任意数量。因为我在本地网络上测试它,所以我使用默认值。
+更改你的端口到任意数字。因为我在本地网络上测试它,所以我使用默认值。

     # Shell in a box daemon configuration
     # For details see shellinaboxd man page

@@ -98,7 +99,7 @@

 ### 使用 ###

-现在,去你的客户端系统,打开Web浏览器并导航到:**https://ip-address-of-remote-servers:4200**。
+现在,在你的客户端系统,打开Web浏览器并导航到:**https://ip-address-of-remote-servers:4200**。

 **注意**:如果你改变了端口,请填写修改后的端口。

 ![Shell In A Box - Google Chrome_003](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_003.jpg)

-右键点击你浏览器的空白位置。你可以得到一些有很有用的额外的菜单选项。
+右键点击你浏览器的空白位置。你可以得到一些很有用的额外菜单选项。

 ![Shell In A Box - Google Chrome_004](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_004.jpg)

 从现在开始,你可以通过本地系统的Web浏览器在你的远程服务器随意操作。

-当你完成时,记得点击**退出**。
+当你完成工作时,记得输入`exit`退出。

 当再次连接到远程系统时,单击**连接**按钮,然后输入远程服务器的用户名和密码。

 ![Shell In A Box - Google Chrome_005](http://www.unixmen.com/wp-content/uploads/2015/08/sk@server1-Shell-In-A-Box-Google-Chrome_005.jpg)

 ### 结论 ###

-正如我之前提到的,如果你在服务器运行在防火墙后面,那么基于web的SSH工具是非常有用的。有许多基于web的SSH工具,但shellinabox是非常简单并且有用的工具,能从的网络上的任何地方,模拟一个远程系统的壳。因为它是基于浏览器的,所以你可以从任何设备访问您的远程服务器,只要你有一个支持JavaScript和CSS的浏览器。
+正如我之前提到的,如果你的服务器运行在防火墙后面,那么基于web的SSH工具是非常有用的。有许多基于web的SSH工具,但shellinabox是非常简单而有用的工具,可以从网络上的任何地方,模拟一个远程系统的Shell。因为它是基于浏览器的,所以你可以从任何设备访问您的远程服务器,只要你有一个支持JavaScript和CSS的浏览器。

+就这些啦。祝你今天有个好心情!
@@ -148,7 +149,7 @@ via: http://www.unixmen.com/shellinabox-a-web-based-ajax-terminal-emulator/ 作者:[SK][a] 译者:[xiaoyu33](https://github.com/xiaoyu33) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 diff --git a/published/201508/20150817 Top 5 Torrent Clients For Ubuntu Linux.md b/published/201508/20150817 Top 5 Torrent Clients For Ubuntu Linux.md new file mode 100644 index 0000000000..0ad6d04671 --- /dev/null +++ b/published/201508/20150817 Top 5 Torrent Clients For Ubuntu Linux.md @@ -0,0 +1,115 @@ +Ubuntu 下五个最好的 BT 客户端 +================================================================================ + +![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png) + +在寻找 **Ubuntu 中最好的 BT 客户端**吗?事实上,Linux 桌面平台中有许多 BT 客户端,但是它们中的哪些才是**最好的 Ubuntu 客户端**呢? + +我将会列出 Linux 上最好的五个 BT 客户端,它们都拥有着体积轻盈,功能强大的特点,而且还有令人印象深刻的用户界面。自然,易于安装和使用也是特性之一。 + +### Ubuntu 下最好的 BT 客户端 ### + +考虑到 Ubuntu 默认安装了 Transmission,所以我将会从这个列表中排除了 Transmission。但是这并不意味着 Transmission 没有资格出现在这个列表中,事实上,Transmission 是一个非常好的BT客户端,这也正是它被包括 Ubuntu 在内的多个发行版默认安装的原因。 + +### Deluge ### + +![Logo of Deluge torrent client for Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Deluge.png) + +[Deluge][1] 被 Lifehacker 评选为 Linux 下最好的 BT 客户端,这说明了 Deluge 是多么的有用。而且,并不仅仅只有 Lifehacker 是 Deluge 的粉丝,纵观多个论坛,你都会发现不少 Deluge 的忠实拥趸。 + +快速,时尚而直观的界面使得 Deluge 成为 Linux 用户的挚爱。 + +Deluge 可在 Ubuntu 的仓库中获取,你能够在 Ubuntu 软件中心中安装它,或者使用下面的命令: + + sudo apt-get install deluge + +### qBittorrent ### + +![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png) + +正如它的名字所暗示的,[qBittorrent][2] 是著名的 [Bittorrent][3] 应用的 Qt 版本。如果曾经使用过它,你将会看到和 Windows 下的 Bittorrent 相似的界面。同样轻巧并且有着 BT 客户端的所有标准功能, qBittorrent 也可以在 Ubuntu 的默认仓库中找到。 + +它可以通过 Ubuntu 软件仓库安装,或者使用下面的命令: + + sudo apt-get install qbittorrent + + +### Tixati ### + +![Tixati torrent client logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/tixati_icon.png) + +[Tixati][4] 是另一个不错的 Ubuntu 下的 BT 客户端。它有着一个默认的黑暗主题,尽管很多人喜欢,但是我例外。它拥有着一切你能在 BT 客户端中找到的功能。 + +除此之外,它还有着数据分析的额外功能。你可以在美观的图表中分析流量以及其它数据。 + +- [下载 Tixati][5] + + + +### Vuze ### + +![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png) + +[Vuze][6] 是许多 Linux 以及 Windows 用户最喜欢的 BT 客户端。除了标准的功能,你可以直接在应用程序中搜索种子,也可以订阅系列片源,这样就无需再去寻找新的片源了,因为你可以在侧边栏中的订阅看到它们。 + +它还配备了一个视频播放器,可以播放带有字幕的高清视频等等。但是我不认为你会用它来代替那些更好的视频播放器,比如 VLC。 + +Vuze 可以通过 Ubuntu 软件中心安装或者使用下列命令: + + sudo apt-get install vuze + + + +### Frostwire ### + +![Logo of Frostwire torrent client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/frostwire.png) + +[Frostwire][7] 是一个你应该试一下的应用。它不仅仅是一个简单的 BT 客户端,它还可以应用于安卓,你可以用它通过 Wifi 来共享文件。 + +你可以在应用中搜索种子并且播放他们。除了下载文件,它还可以浏览本地的影音文件,并且将它们有条理的呈现在播放器中。这同样适用于安卓版本。 + +还有一个特点是:Frostwire 提供了独立音乐人的[合法音乐下载][13]。你可以下载并且欣赏它们,免费而且合法。 + +- [下载 Frostwire][8] + + + +### 荣誉奖 ### + +在 Windows 中,uTorrent(发音:mu torrent)是我最喜欢的 BT 应用。尽管 uTorrent 可以在 Linux 下运行,但是我还是特意忽略了它。因为在 Linux 下使用 uTorrent 不仅困难,而且无法获得完整的应用体验(运行在浏览器中)。 + +可以[在这里][9]阅读 Ubuntu下uTorrent 的安装教程。 + +#### 快速提示: #### + +大多数情况下,BT 应用不会默认自动启动。如果想改变这一行为,请阅读[如何管理 Ubuntu 下的自启动程序][10]来学习。 + +### 你最喜欢的是什么? 
###

+这些是我对于 Ubuntu 下最好的 BT 客户端的意见。你最喜欢的是什么呢?请发表评论。也可以查看与本主题相关的[Ubuntu 最好的下载管理器][11]。如果使用 Popcorn Time,试试 [Popcorn Time 技巧][12]
+
+--------------------------------------------------------------------------------
+
+via: http://itsfoss.com/best-torrent-ubuntu/
+
+作者:[Abhishek][a]
+译者:[Xuanwo](https://github.com/Xuanwo)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://itsfoss.com/author/abhishek/
+[1]:http://deluge-torrent.org/
+[2]:http://www.qbittorrent.org/
+[3]:http://www.bittorrent.com/
+[4]:http://www.tixati.com/
+[5]:http://www.tixati.com/download/
+[6]:http://www.vuze.com/
+[7]:http://www.frostwire.com/
+[8]:http://www.frostwire.com/downloads
+[9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/
+[10]:http://itsfoss.com/manage-startup-applications-ubuntu/
+[11]:http://itsfoss.com/4-best-download-managers-for-linux/
+[12]:http://itsfoss.com/popcorn-time-tips/
+[13]:http://www.frostclick.com/wp/
+
diff --git a/published/201508/20150818 How to monitor stock quotes from the command line on Linux.md b/published/201508/20150818 How to monitor stock quotes from the command line on Linux.md
new file mode 100644
index 0000000000..53be8376a4
--- /dev/null
+++ b/published/201508/20150818 How to monitor stock quotes from the command line on Linux.md
@@ -0,0 +1,100 @@
+Linux中通过命令行监控股票报价
+================================================================================
+
+如果你是那些股票投资者或者交易者中的一员,那么监控证券市场将是你的日常工作之一。最有可能的是你会使用一个在线交易平台,这个平台有着一些漂亮的实时图表和各种高级股票分析和交易工具。虽然这种复杂的市场研究工具是任何严肃的证券投资者了解市场的必备工具,但是监控最新的股票报价对于构建一个有利可图的投资组合仍然大有帮助。
+
+如果你是一位长久坐在终端前的全职系统管理员,而证券交易又成了你日常生活中的业余兴趣,那么一个简单地显示实时股票报价的命令行工具对你来说会是一个恩赐。
+
+在本教程中,让我来介绍一个灵巧而简洁的命令行工具,它可以让你在Linux上从命令行监控股票报价。
+
+这个工具叫做[Mop][1]。它是用 Go 编写的一个轻量级命令行工具,可以极其方便地跟踪来自美国市场的最新股票报价。你可以很轻松地自定义要监控的证券列表,它会在一个基于ncurses的便于阅读的界面显示最新的股票报价。
+
+**注意**:Mop是通过雅虎金融API获取最新的股票报价的。你必须意识到,他们的股票报价已知会有15分钟的延时。所以,如果你正在寻找0延时的“实时”股票报价,那么Mop就不是你的菜了。这种“现场”股票报价订阅通常可以通过向一些不开放的私有接口付费获取。了解这些之后,让我们来看看怎样在Linux环境下使用Mop吧。
+
+### 安装 Mop 到 Linux ###
+
+由于Mop是用Go实现的,你首先需要安装Go语言。如果你还没有安装Go,请参照[此指南][2]将Go安装到你的Linux平台中。请确保按指南中所讲的设置GOPATH环境变量。
+
+安装完Go后,继续像下面这样安装Mop。
+
+**Debian,Ubuntu 或 Linux Mint**
+
+    $ sudo apt-get install git
+    $ go get github.com/michaeldv/mop
+    $ cd $GOPATH/src/github.com/michaeldv/mop
+    $ make install
+
+**Fedora,CentOS,RHEL**
+
+    $ sudo yum install git
+    $ go get github.com/michaeldv/mop
+    $ cd $GOPATH/src/github.com/michaeldv/mop
+    $ make install
+
+上述命令将安装Mop到$GOPATH/bin。
+
+现在,编辑你的.bashrc,将$GOPATH/bin写到你的PATH变量中。
+
+    export PATH="$PATH:$GOPATH/bin"
+
+----------

    $ source ~/.bashrc

+### 使用Mop来通过命令行监控股票报价 ###
+
+要启动Mop,只需运行名为cmd的命令(LCTT 译注:这名字实在是……)。
+
+    $ cmd
+
+首次启动,你将看到一些Mop预配置的证券行情自动收录器。
+
+![](https://farm6.staticflickr.com/5749/20018949104_c8c64e0e06_c.jpg)
+
+报价显示了像最新价格、交易百分比、每日低/高、52周低/高、股息以及年收益率等信息。Mop从[CNN][3]获取市场总览信息,从[雅虎金融][4]获得个股报价,股票报价信息会在终端内周期性自动更新。
+
+### 自定义Mop中的股票报价 ###
+
+让我们来试试自定义证券列表吧。对此,Mop提供了易于记忆的快捷键:‘+’用于添加一只新股,而‘-’则用于移除一只股票。
+
+要添加新股,请按‘+’,然后输入股票代码来添加(如MSFT)。你可以通过输入一个由逗号分隔的交易代码列表来一次添加多个股票(如”MSFT, AMZN, TSLA”)。
+
+![](https://farm1.staticflickr.com/636/20648164441_642ae33a22_c.jpg)
+
+从列表中移除股票可以类似地按‘-’来完成。
+
+### 对Mop中的股票报价排序 ###
+
+你可以基于任何栏目对股票报价列表进行排序。要排序,请按‘o’,然后使用左/右键来选择排序的基准栏目。当选定了一个特定栏目后,你可以按回车来对列表进行升序排序,或者降序排序。
+
+![](https://farm1.staticflickr.com/724/20648164481_15631eefcf_c.jpg)
+
+通过按‘g’,你可以根据股票当日的涨或跌来分组。涨的情况以绿色表示,跌的情况以白色表示。
+
+![](https://c2.staticflickr.com/6/5633/20615252696_a5bd44d3aa_b.jpg)
+
+如果你想要访问帮助页,只需要按‘?’。
+![](https://farm1.staticflickr.com/573/20632365342_da196b657f_c.jpg)
+
+### 尾声 ###
+
+正如你所见,Mop是一个轻量级的,然而极其方便的证券监控工具。当然,你可以很轻松地从其它地方,如在线站点、你的智能手机等,访问到股票报价信息。然而,如果你整天都在使用终端环境,Mop可以很容易地融入你的工作环境,希望不会让你过多地从工作流程中分心。只要让它在你的某个终端中运行,保持市场数据的持续更新,那就够了。
+
+交易快乐!
+
+--------------------------------------------------------------------------------
+
+via: http://xmodulo.com/monitor-stock-quotes-command-line-linux.html
+
+作者:[Dan Nanni][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://xmodulo.com/author/nanni
+[1]:https://github.com/michaeldv/mop
+[2]:http://ask.xmodulo.com/install-go-language-linux.html
+[3]:http://money.cnn.com/data/markets/
+[4]:http://finance.yahoo.com/
diff --git a/published/201508/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md b/published/201508/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md
new file mode 100644
index 0000000000..7899bfaf31
--- /dev/null
+++ b/published/201508/20150818 Linux Without Limits--IBM Launch LinuxONE Mainframes.md
@@ -0,0 +1,52 @@
+Linux无极限:IBM发布LinuxONE大型机
+================================================================================
+![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/08/Screenshot-2015-08-17-at-12.58.10.png)
+
+Ubuntu服务器团队今天发布了一条好消息:[IBM发布了LinuxONE][1],一种只支持Linux的大型机,它也可以运行Ubuntu。
+
+IBM发布的最大的LinuxONE系统称作‘Emperor’,它可以扩展到8000台虚拟机或者上万台容器,这是单台Linux系统前所未有的记录。
+
+LinuxONE被IBM称作‘游戏改变者’,它‘释放了Linux的商业潜力’。
+
+IBM和Canonical正在一起协作为LinuxONE和其他IBM z系统创建Ubuntu发行版。Ubuntu将加入RedHat和SUSE的行列,成为IBM z上首屈一指的Linux发行版。
+
+随着IBM ‘Emperor’一同发布的还有LinuxONE Rockhopper,这是一款面向中等规模商业或者组织的小一点的大型机。
+
+IBM是大型机中的领导者,并且占有大型机市场中90%的份额。
+
+注:youtube 视频
+
+
+### 大型机用于什么? ###
+
+你阅读这篇文章所使用的电脑在一个‘大铁块’一样的大型机前会显得很矮小。它们是巨大的,笨重的机柜里面充满了高端的组件、自己设计的技术和眼花缭乱的大量存储(就是数据存储,没有空间放钢笔和尺子)。
+
+大型机被大型机构和商业用来处理和存储大量数据,通过统计分析数据,以及进行大规模的事务处理。
+
+### ‘世界最快的处理器’ ###
+
+IBM已经与Canonical Ltd组成了团队来在LinuxONE和其他IBM z系统中使用Ubuntu。
+
+LinuxONE Emperor使用IBM z13处理器。这款发布于一月的芯片据称是世界上最快的微处理器。它可以在几毫秒内响应事务。
+
+但是也可以很好地处理高容量的移动事务,z13中的LinuxONE系统也是一个理想的云系统。
+
+每个核心可以处理超过50个虚拟服务器,总共可以超过8000台虚拟服务器,这使它以更便宜、更环保、更高效的方式扩展到云。
+
+**在阅读这篇文章时你不必是一个CIO或者大型机巡查员。LinuxONE提供的可能性足够清晰。**
+
+来源: [Reuters (h/t @popey)][2]
+
+--------------------------------------------------------------------------------
+
+via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership
+
+作者:[Joey-Elijah Sneddon][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:https://plus.google.com/117485690627814051450/?rel=author
+[1]:http://www-03.ibm.com/systems/z/announcement.html
+[2]:http://www.reuters.com/article/2015/08/17/us-ibm-linuxone-idUSKCN0QM09P20150817
diff --git a/published/201508/20150818 Ubuntu Linux is coming to IBM mainframes.md b/published/201508/20150818 Ubuntu Linux is coming to IBM mainframes.md
new file mode 100644
index 0000000000..d3bacb3da6
--- /dev/null
+++ b/published/201508/20150818 Ubuntu Linux is coming to IBM mainframes.md
@@ -0,0 +1,53 @@
+Ubuntu Linux 来到 IBM 大型机
+================================================================================
+最终来到了。在 [LinuxCon][1] 上,IBM 和 [Canonical][2] 宣布 [Ubuntu Linux][3] 不久就会运行在 IBM 大型机 [LinuxONE][1] 上,这是一种只支持 Linux 的大型机,现在也可以运行 Ubuntu 了。
+
+这个 IBM 发布的最大的 LinuxONE 系统称作‘Emperor’,它可以扩展到 8000 台虚拟机或者上万台容器,这可能是单独一台 Linux 系统的记录。
+
+![](http://zdnet2.cbsistatic.com/hub/i/2015/08/17/f389e12f-03f5-48cc-8019-af4ccf6c6ecd/f15b099e439c0e3a5fd823637d4bcf87/ubuntu-mainframe.jpg)
+
+*很快你就可以在你的 IBM 大型机上安装 Ubuntu Linux orange 啦*
+
+根据 IBM z 系统的总经理 Ross Mauri 以及 Canonical 和 Ubuntu 的创立者 Mark Shuttleworth 所言,这是因为客户需要。十多年来,IBM 大型机只支持 [红帽企业版 Linux (RHEL)][4] 和 [SUSE Linux 企业版 (SLES)][5] Linux 发行版。
+
+随着 Ubuntu 越来越成熟,更多的企业把它作为企业级 Linux,也有更多的人希望它能运行在 IBM 大型机上。尤其是银行希望如此。不久,金融 CIO 们就可以满足他们的需求啦。
+
+在一次采访中 Shuttleworth 说 Ubuntu Linux 在 2016 年 4 月下一次长期支持版 Ubuntu 16.04 中就可以用到大型机上。而在 2014 年底 Canonical 和 IBM 将 [Ubuntu 带到 IBM 的 POWER][6] 架构中就迈出了第一步。
+
+在那之前,Canonical 和 IBM 差点签署了协议 [在 2011 年实现 Ubuntu 支持 IBM 大型机][7],但最终也没有实现。这次,真的发生了。
+
+Canonical 的 CEO Jane Silber 解释说 “[把 Ubuntu 平台支持扩大][8]到 [IBM z 系统][9] 是因为认识到需要 z 系统运行其业务的客户数量以及混合云市场的成熟。”
+
+**Silber 还说:**
+
+> 由于 z 系统的支持,包括 [LinuxONE][10],Canonical 和 IBM 的关系进一步加深,构建了对 POWER 架构的支持和 OpenPOWER 生态系统。正如 Power 系统的客户受益于 Ubuntu 的可扩展能力,我们的敏捷开发过程也使得类似 POWER8 CAPI (Coherent Accelerator Processor Interface,一致性加速器接口)得到了市场支持,z 系统的客户也可以期望技术进步能快速部署,并从 [Juju][11] 和我们的其它云工具中获益,使得能快速向端用户提供新服务。另外,我们和 IBM 的合作包括实现扩展部署很多 IBM 和 Juju 的软件解决方案。大型机客户对于能通过 Juju 将丰富‘迷人的’ IBM 解决方案、其它软件供应商的产品、开源解决方案部署到大型机上感到高兴。
+
+Shuttleworth 期望 z 系统上的 Ubuntu 能取得巨大成功。它发展很快,由于对 OpenStack 的支持,希望有卓越云性能的人会感到非常高兴。
+
+
+--------------------------------------------------------------------------------
+
+via: http://www.zdnet.com/article/ubuntu-linux-is-coming-to-the-mainframe/
+
+via: http://www.omgubuntu.co.uk/2015/08/ibm-linuxone-mainframe-ubuntu-partnership
+
+作者:[Steven J. 
Vaughan-Nichols][a],[Joey-Elijah Sneddon][a] +译者:[ictlyh](https://github.com/ictlyh),[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[1]:http://events.linuxfoundation.org/events/linuxcon-north-america +[2]:http://www.canonical.com/ +[3]:http://www.ubuntu.comj/ +[4]:http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux +[5]:https://www.suse.com/products/server/ +[6]:http://www.zdnet.com/article/ibm-doubles-down-on-linux/ +[7]:http://www.zdnet.com/article/mainframe-ubuntu-linux/ +[8]:https://insights.ubuntu.com/2015/08/17/ibm-and-canonical-plan-ubuntu-support-on-ibm-z-systems-mainframe/ +[9]:http://www-03.ibm.com/systems/uk/z/ +[10]:http://www.zdnet.com/article/linuxone-ibms-new-linux-mainframes/ +[11]:https://jujucharms.com/ \ No newline at end of file diff --git a/published/201508/20150821 How to Install Visual Studio Code in Linux.md b/published/201508/20150821 How to Install Visual Studio Code in Linux.md new file mode 100644 index 0000000000..9694b23d4f --- /dev/null +++ b/published/201508/20150821 How to Install Visual Studio Code in Linux.md @@ -0,0 +1,129 @@ +如何在 Linux 中安装 Visual Studio Code +================================================================================ +大家好,今天我们一起来学习如何在 Linux 发行版中安装 Visual Studio Code。Visual Studio Code 是基于 Electron 优化代码后的编辑器,后者是基于 Chromium 的一款软件,用于为桌面系统发布 io.js 应用。Visual Studio Code 是微软开发的支持包括 Linux 在内的全平台代码编辑器和文本编辑器。它是免费软件但不开源,在专有软件许可条款下发布。它是可以用于我们日常使用的超级强大和快速的代码编辑器。Visual Studio Code 有很多很酷的功能,例如导航、智能感知支持、语法高亮、括号匹配、自动补全、代码片段、支持自定义键盘绑定、并且支持多种语言,例如 Python、C++、Jade、PHP、XML、Batch、F#、DockerFile、Coffee Script、Java、HandleBars、 R、 Objective-C、 PowerShell、 Luna、 Visual Basic、 .Net、 Asp.Net、 C#、 JSON、 Node.js、 Javascript、 HTML、 CSS、 Less、 Sass 和 Markdown。Visual Studio Code 集成了包管理器、库、构建,以及其它通用任务,以加速日常的工作流。Visual Studio Code 中最受欢迎的是它的调试功能,它包括流式支持 Node.js 的预览调试。 + +注意:请注意 Visual Studio Code 只支持 64 位的 Linux 发行版。 + +下面是在所有 Linux 发行版中安装 Visual Studio Code 的几个简单步骤。 + +### 1. 下载 Visual Studio Code 软件包 ### + +首先,我们要从微软服务器中下载 64 位 Linux 操作系统的 Visual Studio Code 安装包,链接是 [http://go.microsoft.com/fwlink/?LinkID=534108][1]。这里我们使用 wget 下载并保存到 tmp/VSCODE 目录。 + + # mkdir /tmp/vscode; cd /tmp/vscode/ + # wget https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip + + --2015-06-24 06:02:54-- https://az764295.vo.msecnd.net/public/0.3.0/VSCode-linux-x64.zip + Resolving az764295.vo.msecnd.net (az764295.vo.msecnd.net)... 93.184.215.200, 2606:2800:11f:179a:1972:2405:35b:459 + Connecting to az764295.vo.msecnd.net (az764295.vo.msecnd.net)|93.184.215.200|:443... connected. + HTTP request sent, awaiting response... 200 OK + Length: 64992671 (62M) [application/octet-stream] + Saving to: ‘VSCode-linux-x64.zip’ + 100%[================================================>] 64,992,671 14.9MB/s in 4.1s + 2015-06-24 06:02:58 (15.0 MB/s) - ‘VSCode-linux-x64.zip’ saved [64992671/64992671] + +### 2. 提取软件包 ### + +现在,下载好 Visual Studio Code 的 zip 压缩包之后,我们打算使用 unzip 命令解压它。我们要在终端或者控制台中运行以下命令。 + + # unzip /tmp/vscode/VSCode-linux-x64.zip -d /opt/ + +注意:如果我们还没有安装 unzip,我们首先需要通过软件包管理器安装它。如果你运行的是 Ubuntu,使用 apt-get,如果运行的是 Fedora、CentOS、可以用 dnf 或 yum 安装它。 + +### 3. 
运行 Visual Studio Code ###

+展开软件包之后,我们可以直接运行一个名为 Code 的文件启动 Visual Studio Code。
+
+    # sudo chmod +x /opt/VSCode-linux-x64/Code
+    # sudo /opt/VSCode-linux-x64/Code
+
+如果我们想通过终端在任何地方启动 Code,我们就需要创建 /opt/VSCode-linux-x64/Code 的一个链接 /usr/local/bin/code。
+
+    # ln -s /opt/VSCode-linux-x64/Code /usr/local/bin/code
+
+现在,我们就可以在终端中运行以下命令启动 Visual Studio Code 了。
+
+    # code .
+
+### 4. 创建桌面启动 ###
+
+下一步,成功展开 Visual Studio Code 软件包之后,我们打算创建桌面启动程序,使得根据不同桌面环境能够从启动器、菜单、桌面启动它。首先我们要复制一个图标文件到 /usr/share/icons/ 目录。
+
+    # cp /opt/VSCode-linux-x64/resources/app/vso.png /usr/share/icons/
+
+然后,我们创建一个桌面启动程序,文件扩展名为 .desktop。这里我们使用喜欢的文本编辑器在 /tmp/vscode/ 目录中创建名为 visualstudiocode.desktop 的文件。
+
+    # vi /tmp/vscode/visualstudiocode.desktop
+
+然后,粘贴下面的行到那个文件中。
+
+    [Desktop Entry]
+    Name=Visual Studio Code
+    Comment=Multi-platform code editor for Linux
+    Exec=/opt/VSCode-linux-x64/Code
+    Icon=/usr/share/icons/vso.png
+    Type=Application
+    StartupNotify=true
+    Categories=TextEditor;Development;Utility;
+    MimeType=text/plain;
+
+创建完桌面文件之后,我们会复制这个桌面文件到 /usr/share/applications/ 目录,这样启动器和菜单中就可以单击启动 Visual Studio Code 了。
+
+    # cp /tmp/vscode/visualstudiocode.desktop /usr/share/applications/
+
+完成之后,我们可以在启动器或者菜单中启动它。
+
+![Visual Studio Code](http://blog.linoxide.com/wp-content/uploads/2015/06/visual-studio-code.png)
+
+### 在 Ubuntu 中 Visual Studio Code ###
+
+要在 Ubuntu 14.04/14.10/15.04 Linux 发行版中安装 Visual Studio Code,我们可以使用 Ubuntu Make 0.7。这是在 ubuntu 中安装 code 最简单的方法,因为我们只需要执行几个命令。首先,我们要在我们的 ubuntu linux 发行版中安装 Ubuntu Make 0.7。要安装它,首先要为它添加 PPA。可以通过运行下面命令完成。
+
+    # add-apt-repository ppa:ubuntu-desktop/ubuntu-make
+
+    This ppa proposes package backport of Ubuntu make for supported releases.
+    More info: https://launchpad.net/~ubuntu-desktop/+archive/ubuntu/ubuntu-make
+    Press [ENTER] to continue or ctrl-c to cancel adding it
+    gpg: keyring `/tmp/tmpv0vf24us/secring.gpg' created
+    gpg: keyring `/tmp/tmpv0vf24us/pubring.gpg' created
+    gpg: requesting key A1231595 from hkp server keyserver.ubuntu.com
+    gpg: /tmp/tmpv0vf24us/trustdb.gpg: trustdb created
+    gpg: key A1231595: public key "Launchpad PPA for Ubuntu Desktop" imported
+    gpg: no ultimately trusted keys found
+    gpg: Total number processed: 1
+    gpg: imported: 1 (RSA: 1)
+    OK
+
+然后,更新本地库索引并安装 ubuntu-make。
+
+    # apt-get update
+    # apt-get install ubuntu-make
+
+在我们的 ubuntu 操作系统上安装完 Ubuntu Make 之后,我们可以在一个终端中运行以下命令来安装 Code。
+
+    # umake web visual-studio-code
+
+![Umake Web Code](http://blog.linoxide.com/wp-content/uploads/2015/06/umake-web-code.png)
+
+运行完上面的命令之后,会要求我们输入想要的安装路径。然后,会请求我们允许在 ubuntu 系统中安装 Visual Studio Code。我们输入“a”(接受)。输入完后,它会在 ubuntu 机器上下载和安装 Code。最后,我们可以在启动器或者菜单中启动它。
+
+### 总结 ###
+
+我们已经成功地在 Linux 发行版上安装了 Visual Studio Code。在所有 linux 发行版上安装 Visual Studio Code 都和上面介绍的相似,我们也可以使用 umake 在 Ubuntu 发行版中安装。Umake 是一个安装开发工具、IDE 和编程语言的流行工具。我们可以用 Umake 轻松地安装 Android Studio、Eclipse 和很多其它流行 IDE。Visual Studio Code 是基于 Github 上一个叫 [Electron][2] 的项目,它是 [Atom.io][3] 编辑器的一部分。它有很多 Atom.io 编辑器没有的改进功能。当前 Visual Studio Code 只支持 64 位 linux 操作系统平台。
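+
+顺便一提,如果以后想移除这种手动安装的 Visual Studio Code,只需要删除我们在前面步骤中创建的那几个文件和目录即可(以下路径均来自本文前面的步骤,如果你解压到了其它位置,请相应调整):
+
+    # rm -rf /opt/VSCode-linux-x64
+    # rm -f /usr/local/bin/code /usr/share/icons/vso.png
+    # rm -f /usr/share/applications/visualstudiocode.desktop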
+如果你有任何疑问、建议或者反馈,请在下面的评论框中留言以便我们改进和更新我们的内容。非常感谢!Enjoy :-)
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/install-visual-studio-code-linux/
+
+作者:[Arun Pyasi][a]
+译者:[ictlyh](https://github.com/ictlyh)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/arunp/
+[1]:http://go.microsoft.com/fwlink/?LinkID=534108
+[2]:https://github.com/atom/electron
+[3]:https://github.com/atom/atom
\ No newline at end of file
diff --git a/published/201508/20150821 Linux FAQs with Answers--How to check MariaDB server version.md b/published/201508/20150821 Linux FAQs with Answers--How to check MariaDB server version.md
new file mode 100644
index 0000000000..899c2775de
--- /dev/null
+++ b/published/201508/20150821 Linux FAQs with Answers--How to check MariaDB server version.md
@@ -0,0 +1,49 @@
+Linux有问必答:如何检查MariaDB服务端版本
+================================================================================
+> **提问**: 我使用的是一台运行MariaDB的VPS。我该如何检查MariaDB服务端的版本?
+
+有时候你需要知道你的数据库版本,比如当你升级你数据库或对已知缺陷打补丁时。这里有几种找出MariaDB版本的方法。
+
+### 方法一 ###
+
+第一种找出版本的方法是登录MariaDB服务器,登录之后,你会看到一些MariaDB的版本信息。
+
+![](https://farm6.staticflickr.com/5807/20669891016_91249d3239_c.jpg)
+
+另一种方法是在登录MariaDB后出现的命令行中输入‘status’命令。输出会显示服务器的版本还有协议版本。
+
+![](https://farm6.staticflickr.com/5801/20669891046_73f60e5c81_c.jpg)
+
+### 方法二 ###
+
+如果你不能访问MariaDB服务器,那么你就不能用第一种方法。这种情况下你可以根据MariaDB的安装包的版本来推测。这种方法只有在MariaDB通过包管理器安装时才有用。
+
+你可以用下面的方法检查MariaDB的安装包。
+
+#### Debian、Ubuntu或者Linux Mint: ####
+
+    $ dpkg -l | grep mariadb
+
+下面的输出说明MariaDB的版本是10.0.17。
+
+![](https://farm1.staticflickr.com/607/20669890966_b611fcd915_c.jpg)
+
+#### Fedora、CentOS或者 RHEL: ####
+
+    $ rpm -qa | grep mariadb
+
+下面的输出说明安装的版本是5.5.41。
+
+![](https://farm1.staticflickr.com/764/20508160748_23d9808256_b.jpg)
+
+--------------------------------------------------------------------------------
+
+via: http://ask.xmodulo.com/check-mariadb-server-version.html
+
+作者:[Dan Nanni][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://ask.xmodulo.com/author/nanni
diff --git a/published/201508/20150826 How to Run Kali Linux 2.0 In Docker Container.md b/published/201508/20150826 How to Run Kali Linux 2.0 In Docker Container.md
new file mode 100644
index 0000000000..83248bb51e
--- /dev/null
+++ b/published/201508/20150826 How to Run Kali Linux 2.0 In Docker Container.md
@@ -0,0 +1,74 @@
+如何在 Docker 容器中运行 Kali Linux 2.0
+================================================================================
+### 介绍 ###
+
+Kali Linux 是一个为安全测试人员和白帽子所熟知的操作系统。它带有大量安全相关的程序,这让它很容易用于渗透测试。最近,[Kali Linux 2.0][1] 发布了,它被认为是这个操作系统最重要的一次发布。另一方面,Docker 技术由于它的可扩展性和易用性让它变得很流行。Docker 让你非常容易地将你的程序带给你的用户。好消息是你可以通过 Docker 运行Kali Linux 了,让我们看看该怎么做 :)
+
+### 在 Docker 中运行 Kali Linux 2.0 ###
+
+**相关提示**
+
+> 如果你还没有在系统中安装docker,你可以运行下面的命令:
+
+> **对于 Ubuntu/Linux Mint/Debian:**
+
+>     sudo apt-get install docker
+
+> **对于 Fedora/RHEL/CentOS:**
+
+>     sudo yum install docker
+
+> **对于 Fedora 22:**
+
+>     dnf install docker
+
+> 你可以运行下面的命令来启动docker:
+
+>     sudo service docker start
+
+首先运行下面的命令确保 Docker 服务运行正常:
+
+    sudo service docker status
+
+Kali Linux 的开发团队已将 Kali Linux 的 docker 镜像上传了,只需要输入下面的命令来下载镜像。
+
+    docker pull kalilinux/kali-linux-docker
+
+![Pull Kali Linux docker](http://linuxpitstop.com/wp-content/uploads/2015/08/129.png)
+
+下载完成后,运行下面的命令来找出你下载的 docker 镜像的 ID。
+
+    docker images
+
+![Kali Linux Image ID](http://linuxpitstop.com/wp-content/uploads/2015/08/230.png)
+
+现在运行下面的命令来从镜像文件启动 kali linux docker 容器(这里需用正确的镜像ID替换)。
+
+    docker run -i -t 198cd6df71ab3 /bin/bash
+
+它会立刻启动容器并且让你登录到该操作系统,你现在可以在 Kali Linux 中工作了。
+
+![Kali Linux Login](http://linuxpitstop.com/wp-content/uploads/2015/08/328.png)
+
+你可以在容器外面通过下面的命令来验证容器已经启动/运行中了:
+
+    docker ps
+
+![Docker Kali](http://linuxpitstop.com/wp-content/uploads/2015/08/421.png)
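+
+一个补充的小技巧(下面的镜像名 my-kali:custom 只是一个示例,可以换成你喜欢的名字):如果你在容器里安装了额外的工具,可以先用上面的 docker ps 查到容器 ID,再用 docker commit 把这些修改保存成一个新镜像,以后直接从它启动即可:
+
+    docker commit <容器ID> my-kali:custom
+    docker run -i -t my-kali:custom /bin/bash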
+### 总结 ###
+
+Docker 是一种最聪明的用来部署和分发包的方式。Kali Linux docker 镜像非常容易上手,也不会消耗很大的硬盘空间,这样也可以很容易地在任何安装了 docker 的操作系统上测试这个很棒的发行版了。
+
+--------------------------------------------------------------------------------
+
+via: http://linuxpitstop.com/run-kali-linux-2-0-in-docker-container/
+
+作者:[Aun][a]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linuxpitstop.com/author/aun/
+[1]:https://linux.cn/article-6005-1.html
diff --git a/published/201508/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md b/published/201508/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md
new file mode 100644
index 0000000000..366a3c1e98
--- /dev/null
+++ b/published/201508/20150827 How to Convert From RPM to DEB and DEB to RPM Package Using Alien.md
@@ -0,0 +1,160 @@
+Alien 魔法:RPM 和 DEB 互转
+================================================================================
+
+我确信,你们一定知道Linux下的多种软件安装方式:使用发行版所提供的包管理系统([aptitude,yum,或者zypper][1],还可以举很多例子),从源码编译(尽管现在很少用了,但在Linux发展早期却是唯一可用的方法),或者使用各自的低级工具dpkg用于.deb,以及rpm用于.rpm,安装预编译包,如此这般。
+
+![Convert RPM to DEB and DEB to RPM](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-RPM-to-DEB-and-DEB-to-RPM.png)
+
+*使用Alien将RPM转换成DEB以及将DEB转换成RPM*
+
+在本文中,我们将为你介绍alien,一个用于在各种不同的Linux包格式相互转换的工具,其最常见的用法是将.rpm转换成.deb(或者反过来)。
+
+如果你需要某个特定类型的包,而你只能找到其它格式的包的时候,该工具迟早能派得上用场——即使是其作者不再维护,并且在其网站声明:alien将可能永远维持在实验状态。
+
+例如,有一次,我正查找一个用于喷墨打印机的.deb驱动,但是却没有找到——生产厂家只提供.rpm包,这时候alien拯救了我。我安装了alien,将包进行转换,不久之后我就可以使用我的打印机了,没有任何问题。
+
+即便如此,我们也必须澄清一下,这个工具不应当用来转换重要的系统文件和库,因为它们在不同的发行版中有不同的配置。只有在前面说的那种情况下所建议的安装方法根本不适合时,alien才能作为最后手段使用。
+
+最后一项要点是,我们必须注意,虽然我们在本文中使用CentOS和Debian,除了前两个发行版及其各自的家族体系外,据我们所知,alien可以工作在Slackware中,甚至Solaris中。
+
+### 步骤1:安装Alien及其依赖包 ###
+
+要安装alien到CentOS/RHEL 7中,你需要启用EPEL和Nux Dextop(是的,是Dextop——不是Desktop)仓库,顺序如下:
+
+    # yum install epel-release
+
+启用Nux Dextop仓库的包的当前最新版本是0.5(2015年8月10日发布),在安装之前你可以查看[http://li.nux.ro/download/nux/dextop/el7/x86_64/][2]上是否有更新的版本。
+
+    # rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
+    # rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
+
+然后再做,
+
+    # yum update && yum install alien
+
+在Fedora中,你只需要运行上面的命令即可。
+
+在Debian及其衍生版中,只需要:
+
+    # aptitude install alien
+
+### 步骤2:将.deb转换成.rpm包 ###
+
+对于本次测试,我们选择了dateutils,它提供了一系列日期和时间工具,用于处理大量金融数据。我们将下载.deb包到我们的CentOS 7机器中,将它转换成.rpm并安装:
+
+![Check CentOS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Linux-OS-Version.png)
+
+*检查CentOS版本*
+
+    # cat /etc/centos-release
+    # wget http://ftp.us.debian.org/debian/pool/main/d/dateutils/dateutils_0.3.1-1.1_amd64.deb
+    # alien --to-rpm --scripts dateutils_0.3.1-1.1_amd64.deb
+
+![Convert .deb to .rpm package in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-deb-to-rpm-package.png)
+
+*在Linux中将.deb转换成.rpm*
+
+**重要**:(请注意alien是怎样来增加目标包的次版本号的。如果你想要无视该行为,请添加--keep-version标识)。
+
+如果我们尝试马上安装该包,我们将碰到些许问题:
+
+    # rpm -Uvh dateutils-0.3.1-2.1.x86_64.rpm
+
+![Install RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-RPM-Package.png)
+
+*安装RPM包*
+
+要解决该问题,我们需要启用epel-testing仓库,然后安装rpmrebuild工具来编辑该包的配置以重建包:
+
+    # yum --enablerepo=epel-testing install rpmrebuild
+
+然后运行,
+
+    # rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm
+
+它会打开你的默认文本编辑器。请转到`%files`章节并删除涉及到错误信息中提到的目录的行,然后保存文件并退出:
+
+如果我们尝试马上安装该包,我们将碰到些许问题:
+
+    # rpm -Uvh dateutils-0.3.1-2.1.x86_64.rpm
+
+![Install RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-RPM-Package.png)
+
+*安装RPM包*
+
+要解决该问题,我们需要启用epel-testing仓库,然后安装rpmrebuild工具来编辑该包的配置以重建包:
+
+    # yum --enablerepo=epel-testing install rpmrebuild
+
+然后运行:
+
+    # rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm
+
+它会打开你的默认文本编辑器。请转到`%files`章节并删除涉及到错误信息中提到的目录的行,然后保存文件并退出:
+
+![Convert .deb to Alien Version](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-Deb-Package-to-Alien-Version.png)
+
+*转换.deb到Alien版*
+
+当你退出该文件后,将提示你继续去重构。如果你选择“Y”,该文件会被重构到指定的目录(与当前工作目录不同):
+
+    # rpmrebuild -pe dateutils-0.3.1-2.1.x86_64.rpm
+
+![Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Build-RPM-Package.png)
+
+*构建RPM包*
+
+现在你可以像以往一样继续来安装包并验证:
+
+    # rpm -Uvh /root/rpmbuild/RPMS/x86_64/dateutils-0.3.1-2.1.x86_64.rpm
+    # rpm -qa | grep dateutils
+
+![Install Build RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Build-RPM-Package.png)
+
+*安装构建的RPM包*
+
+最后,你可以列出dateutils包含的各个工具,也可以查看各自的手册页:
+
+    # ls -l /usr/bin | grep dateutils
+
+![Verify Installed RPM Package](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Installed-Package.png)
+
+*验证安装的RPM包*
+
+### 步骤3:将.rpm转换成.deb包 ###
+
+在本节中,我们将演示如何将.rpm转换成.deb。在一台32位的Debian Wheezy机器中,让我们从CentOS 6操作系统仓库中下载用于zsh shell的.rpm包。注意,该shell在Debian及其衍生版的默认安装中是不可用的。
+
+    # cat /etc/shells
+    # lsb_release -a | tail -n 4
+
+![Check Shell and Debian OS Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Shell-Debian-OS-Version.png)
+
+*检查Shell和Debian操作系统版本*
+
+    # wget http://mirror.centos.org/centos/6/os/i386/Packages/zsh-4.3.11-4.el6.centos.i686.rpm
+    # alien --to-deb --scripts zsh-4.3.11-4.el6.centos.i686.rpm
+
+你可以安全地忽略关于签名丢失的信息:
+
+![Convert .rpm to .deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Convert-rpm-to-deb-Package.png)
+
+*将.rpm转换成.deb包*
+
+过了一会儿后,.deb包应该已经生成,并可以安装了:
+
+    # dpkg -i zsh_4.3.11-5_i386.deb
+
+![Install RPM Converted Deb Package](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Deb-Package.png)
+
+*安装RPM转换来的Deb包*
+
+安装完后,你可以看看zsh是否已添加到合法shell列表中:
+
+    # cat /etc/shells
+
+![Confirm Installed Zsh Package](http://www.tecmint.com/wp-content/uploads/2015/08/Confirm-Installed-Package.png)
+
+*确认安装的Zsh包*
+
+### 小结 ###
+
+在本文中,我们已经解释了如何将.rpm转换成.deb及其反向转换。当这类程序无法从仓库中获得、也没有可分发的源代码时,这可以作为最后的安装手段。你一定想要将本文添加到书签中,因为我们都会需要alien。
+
+欢迎在下面的评论中自由分享你关于本文的想法。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/convert-from-rpm-to-deb-and-deb-to-rpm-package-using-alien/
+
+作者:[Gabriel Cánepa][a]
+译者:[GOLinux](https://github.com/GOLinux)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/gacanepa/
+[1]:http://www.tecmint.com/linux-package-management/
+[2]:http://li.nux.ro/download/nux/dextop/el7/x86_64/
diff --git a/published/201508/20150827 Linux or UNIX--Bash Read a File Line By Line.md b/published/201508/20150827 Linux or UNIX--Bash Read a File Line By Line.md
new file mode 100644
index 0000000000..8702ddec41
--- /dev/null
+++ b/published/201508/20150827 Linux or UNIX--Bash Read a File Line By Line.md
@@ -0,0 +1,169 @@
+Bash 下如何逐行读取一个文件
+================================================================================
+
+在 Linux 或类 UNIX 系统下如何使用 KSH 或 BASH shell 逐行读取一个文件?
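+
+先给出一个可以直接套用的最小示例(其中的文件路径 /tmp/data.txt 只是演示用的假设,请替换成你自己的文件):
+
+    #!/bin/bash
+    # IFS= 保留行首行尾的空白,-r 防止反斜杠被当作转义符解释
+    while IFS= read -r line
+    do
+        printf '%s\n' "$line"
+    done < /tmp/data.txt
+
+下面逐一解释这种写法中各个部分的含义。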
+
+在 Linux、OSX、*BSD 或者类 Unix 系统下,你可以使用 while..do..done 的 bash 循环来逐行读取一个文件。
+
+### 在 Bash Unix 或者 Linux shell 中逐行读取一个文件的语法 ###
+
+对于 bash、ksh、zsh 和其他的 shell,语法如下:
+
+    while read -r line; do COMMAND; done < input.file
+
+给 read 命令传递 -r 选项,可以防止它解释其中的反斜杠转义符。
+
+在 read 命令之前添加 `IFS=` 选项,来防止行首行尾的空白字符被去掉。
+
+    while IFS= read -r line; do COMMAND_on $line; done < input.file
+
+这是更适合人类阅读的语法:
+
+    #!/bin/bash
+    input="/path/to/txt/file"
+    while IFS= read -r var
+    do
+      echo "$var"
+    done < "$input"
+
+**示例**
+
+下面是一些例子:
+
+    #!/bin/ksh
+    file="/home/vivek/data.txt"
+    while IFS= read line
+    do
+        # display $line or do something with $line
+        echo "$line"
+    done <"$file"
+
+在 bash shell 中相同的例子:
+
+    #!/bin/bash
+    file="/home/vivek/data.txt"
+    while IFS= read -r line
+    do
+        # display $line or do something with $line
+        printf '%s\n' "$line"
+    done <"$file"
+
+你还可以看看这个更好的:
+
+    #!/bin/bash
+    file="/etc/passwd"
+    while IFS=: read -r f1 f2 f3 f4 f5 f6 f7
+    do
+        # display fields using f1, f2,..,f7
+        printf 'Username: %s, Shell: %s, Home Dir: %s\n' "$f1" "$f7" "$f6"
+    done <"$file"
+
+示例输出:
+
+![Fig.01: Bash shell scripting- read file line by line demo outputs](http://s0.cyberciti.org/uploads/faq/2011/01/Bash-Scripting-Read-File-line-by-line-demo.jpg)
+
+*图01:Bash 脚本:读取文件并逐行输出文件*
+
+### Bash 脚本:逐行读取文本文件并创建为 pdf 文件 ###
+
+我的输入文件如下(faq.txt):
+
+    4|http://www.cyberciti.biz/faq/mysql-user-creation/|Mysql User Creation: Setting Up a New MySQL User Account
+    4096|http://www.cyberciti.biz/faq/ksh-korn-shell/|What is UNIX / Linux Korn Shell?
+    4101|http://www.cyberciti.biz/faq/what-is-posix-shell/|What Is POSIX Shell?
+    17267|http://www.cyberciti.biz/faq/linux-check-battery-status/|Linux: Check Battery Status Command
+    17245|http://www.cyberciti.biz/faq/restarting-ntp-service-on-linux/|Linux Restart NTPD Service Command
+    17183|http://www.cyberciti.biz/faq/ubuntu-linux-determine-your-ip-address/|Ubuntu Linux: Determine Your IP Address
+    17172|http://www.cyberciti.biz/faq/determine-ip-address-of-linux-server/|HowTo: Determine an IP Address My Linux Server
+    16510|http://www.cyberciti.biz/faq/unix-linux-restart-php-service-command/|Linux / Unix: Restart PHP Service Command
+    8292|http://www.cyberciti.biz/faq/mounting-harddisks-in-freebsd-with-mount-command/|FreeBSD: Mount Hard Drive / Disk Command
+    8190|http://www.cyberciti.biz/faq/rebooting-solaris-unix-server/|Reboot a Solaris UNIX System
+
+我的 bash 脚本:
+
+    #!/bin/bash
+    # Usage: Create pdf files from input (wrapper script)
+    # Author: Vivek Gite under GPL v2.x+
+    #---------------------------------------------------------
+
+    #Input file
+    _db="/tmp/wordpress/faq.txt"
+
+    #Output location
+    o="/var/www/prviate/pdf/faq"
+
+    _writer="~/bin/py/pdfwriter.py"
+
+    # If file exists
+    if [[ -f "$_db" ]]
+    then
+        # read it
+        while IFS='|' read -r pdfid pdfurl pdftitle
+        do
+            pdf="$o/$pdfid.pdf"
+            echo "Creating $pdf file ..."
+            #Generate pdf file
+            $_writer --quiet --footer-spacing 2 \
+            --footer-left "nixCraft is GIT UL++++ W+++ C++++ M+ e+++ d-" \
+            --footer-right "Page [page] of [toPage]" --footer-line \
+            --footer-font-size 7 --print-media-type "$pdfurl" "$pdf"
+        done <"$_db"
+    fi
+
+### 技巧:从 bash 变量中读取 ###
+
+让我们看看如何在 Debian 或者 Ubuntu Linux 下列出所有安装过的 php 包,请输入:
+
+    # 我将输出内容赋值到名为 $list 的变量中 #
+
+    list=$(dpkg --list php\* | awk '/ii/{print $2}')
+    printf '%s\n' "$list"
+
+示例输出:
+
+    php-pear
+    php5-cli
+    php5-common
+    php5-fpm
+    php5-gd
+    php5-json
+    php5-memcache
+    php5-mysql
+    php5-readline
+    php5-suhosin-extension
+
+你现在可以从 $list 中看到它们,并安装这些包:
+
+    #!/bin/bash
+    # BASH can iterate over $list variable using a "here string" #
+    while IFS= read -r pkg
+    do
+        printf 'Installing php package %s...\n' "$pkg"
+        /usr/bin/apt-get -qq install $pkg
+    done <<< "$list"
+    printf '*** Do not forget to run php5enmod and restart the server (httpd or php5-fpm) ***\n'
+
+示例输出:
+
+    Installing php package php-pear...
+    Installing php package php5-cli...
+    Installing php package php5-common...
+    Installing php package php5-fpm...
+    Installing php package php5-gd...
+    Installing php package php5-json...
+    Installing php package php5-memcache...
+    Installing php package php5-mysql...
+    Installing php package php5-readline...
+    Installing php package php5-suhosin-extension...
+
+    *** Do not forget to run php5enmod and restart the server (httpd or php5-fpm) ***
+
+--------------------------------------------------------------------------------
+
+via: http://www.cyberciti.biz/faq/unix-howto-read-line-by-line-from-file/
+
+作者:VIVEK GITE
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD b/published/201508/Linux and Unix Test Disk IO Performance With dd Command.md
similarity index 70%
rename from translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD
rename to published/201508/Linux and Unix Test Disk IO Performance With dd Command.md
index be5986b78e..be96c3941b 100644
--- a/translated/tech/Linux and Unix Test Disk IO Performance With dd Command.MD
+++ b/published/201508/Linux and Unix Test Disk IO Performance With dd Command.md
@@ -1,24 +1,25 @@
-使用dd命令在Linux和Unix环境下进行硬盘I/O性能检测
+使用 dd 命令进行硬盘 I/O 性能检测
 ================================================================================
+
如何使用dd命令测试我的硬盘性能?如何在linux操作系统下检测硬盘的读写速度?
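
如果只想要一个可以立即上手的速查命令,下面这个写测试示例可以直接使用(输出文件路径 /tmp/test1.img 仅为演示,可自行更换):

    ## 写入 1GB 数据并测量写吞吐率;oflag=dsync 用于绕过缓存,得到真实结果 ##
    dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync

其中各个参数的含义,下文会详细解释。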
你可以使用以下命令在一个Linux或类Unix操作系统上进行简单的I/O性能测试。 -- **dd命令** :它被用来在Linux和类Unix系统下对硬盘设备进行写性能的检测。 -- **hparm命令**:它被用来获取或设置硬盘参数,包括测试读性能以及缓存性能等。 +- **dd命令** :它被用来在Linux和类Unix系统下对硬盘设备进行写性能的检测。 +- **hparm命令**:它用来在基于 Linux 的系统上获取或设置硬盘参数,包括测试读性能以及缓存性能等。 在这篇指南中,你将会学到如何使用dd命令来测试硬盘性能。 ### 使用dd命令来监控硬盘的读写性能:### -- 打开shell终端(这里貌似不能翻译为终端提示符)。 -- 通过ssh登录到远程服务器。 +- 打开shell终端。 +- 或者通过ssh登录到远程服务器。 - 使用dd命令来测量服务器的吞吐率(写速度) `dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync` - 使用dd命令测量服务器延迟 `dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync` ####理解dd命令的选项### -在这个例子当中,我将使用搭载Ubuntu Linux 14.04 LTS系统的RAID-10(配有SAS SSD的Adaptec 5405Z)服务器阵列来运行。基本语法为: +在这个例子当中,我将使用搭载Ubuntu Linux 14.04 LTS系统的RAID-10(配有SAS SSD的Adaptec 5405Z)服务器阵列来运行。基本语法为: dd if=/dev/input.file of=/path/to/output.file bs=block-size count=number-of-blocks oflag=dsync ## GNU dd语法 ## @@ -29,18 +30,19 @@ 输出样例: ![Fig.01: Ubuntu Linux Server with RAID10 and testing server throughput with dd](http://s0.cyberciti.org/uploads/faq/2015/08/dd-server-test-io-speed-output.jpg) -Fig.01: 使用dd命令获取的服务器吞吐率 + +*图01: 使用dd命令获取的服务器吞吐率* 请各位注意在这个实验中,我们写入一个G的数据,可以发现,服务器的吞吐率是135 MB/s,这其中 -- `if=/dev/zero (if=/dev/input.file)` :用来设置dd命令读取的输入文件名。 -- `of=/tmp/test1.img (of=/path/to/output.file)` :dd命令将input.file写入的输出文件的名字。 -- `bs=1G (bs=block-size)` :设置dd命令读取的块的大小。例子中为1个G。 -- `count=1 (count=number-of-blocks)`: dd命令读取的块的个数。 -- `oflag=dsync (oflag=dsync)` :使用同步I/O。不要省略这个选项。这个选项能够帮助你去除caching的影响,以便呈现给你精准的结果。 +- `if=/dev/zero` (if=/dev/input.file) :用来设置dd命令读取的输入文件名。 +- `of=/tmp/test1.img` (of=/path/to/output.file):dd命令将input.file写入的输出文件的名字。 +- `bs=1G` (bs=block-size) :设置dd命令读取的块的大小。例子中为1个G。 +- `count=1` (count=number-of-blocks):dd命令读取的块的个数。 +- `oflag=dsync` (oflag=dsync) :使用同步I/O。不要省略这个选项。这个选项能够帮助你去除caching的影响,以便呈现给你精准的结果。 - `conv=fdatasyn`: 这个选项和`oflag=dsync`含义一样。 -在这个例子中,一共写了1000次,每次写入512字节来获得RAID10服务器的延迟时间: +在下面这个例子中,一共写了1000次,每次写入512字节来获得RAID10服务器的延迟时间: dd if=/dev/zero of=/tmp/test2.img bs=512 count=1000 oflag=dsync @@ -50,11 +52,11 @@ Fig.01: 使用dd命令获取的服务器吞吐率 1000+0 records out 512000 bytes (512 kB) copied, 0.60362 s, 848 kB/s -请注意服务器的吞吐率以及延迟时间也取决于服务器/应用的加载。所以我推荐你在一个刚刚重启过并且处于峰值时间的服务器上来运行测试,以便得到更加准确的度量。现在你可以在你的所有设备上互相比较这些测试结果了。 +请注意服务器的吞吐率以及延迟时间也取决于服务器/应用的负载。所以我推荐你在一个刚刚重启过并且处于峰值时间的服务器上来运行测试,以便得到更加准确的度量。现在你可以在你的所有设备上互相比较这些测试结果了。 -####为什么服务器的吞吐率和延迟时间都这么差?### +###为什么服务器的吞吐率和延迟时间都这么差?### -低的数值并不意味着你在使用差劲的硬件。可能是HARDWARE RAID10的控制器缓存导致的。 +低的数值并不意味着你在使用差劲的硬件。可能是硬件 RAID10的控制器缓存导致的。 使用hdparm命令来查看硬盘缓存的读速度。 @@ -79,11 +81,12 @@ Fig.01: 使用dd命令获取的服务器吞吐率 输出样例: ![Fig.02: Linux hdparm command to test reading and caching disk performance](http://s0.cyberciti.org/uploads/faq/2015/08/hdparam-output.jpg) -Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令 -请再一次注意由于文件文件操作的缓存属性,你将总是会看到很高的读速度。 +*图02: 检测硬盘读入以及缓存性能的Linux hdparm命令* -**使用dd命令来测试读入速度** +请再次注意,由于文件文件操作的缓存属性,你将总是会看到很高的读速度。 + +###使用dd命令来测试读取速度### 为了获得精确的读测试数据,首先在测试前运行下列命令,来将缓存设置为无效: @@ -91,11 +94,11 @@ Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令 echo 3 | sudo tee /proc/sys/vm/drop_caches time time dd if=/path/to/bigfile of=/dev/null bs=8k -**笔记本上的示例** +####笔记本上的示例#### 运行下列命令: - ### Cache存在的Debian系统笔记本吞吐率### + ### 带有Cache的Debian系统笔记本吞吐率### dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct ###使cache失效### @@ -104,10 +107,11 @@ Fig.02: 检测硬盘读入以及缓存性能的Linux hdparm命令 ###没有Cache的Debian系统笔记本吞吐率### dd if=/dev/zero of=/tmp/laptop.bin bs=1G count=1 oflag=direct -**苹果OS X Unix(Macbook pro)的例子** +####苹果OS X Unix(Macbook pro)的例子#### GNU dd has many more options but OS X/BSD and Unix-like dd command need to run as follows to test real disk I/O and not 
memory add sync option as follows: -GNU dd命令有其他许多选项但是在 OS X/BSD 以及类Unix中, dd命令需要像下面那样执行来检测去除掉内存地址同步的硬盘真实I/O性能: + +GNU dd命令有其他许多选项,但是在 OS X/BSD 以及类Unix中, dd命令需要像下面那样执行来检测去除掉内存地址同步的硬盘真实I/O性能: ## 运行这个命令2-3次来获得更好地结果 ### time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync" @@ -124,26 +128,29 @@ GNU dd命令有其他许多选项但是在 OS X/BSD 以及类Unix中, dd命令 本人Macbook Pro的写速度是635346520字节(635.347MB/s)。 -**不喜欢用命令行?^_^** +###不喜欢用命令行?\^_^### 你可以在Linux或基于Unix的系统上使用disk utility(gnome-disk-utility)这款工具来得到同样的信息。下面的那个图就是在我的Fedora Linux v22 VM上截取的。 -**图形化方法** +####图形化方法#### 点击“Activites”或者“Super”按键来在桌面和Activites视图间切换。输入“Disks” ![Fig.03: Start the Gnome disk utility](http://s0.cyberciti.org/uploads/faq/2015/08/disk-1.jpg) -Fig.03: 打开Gnome硬盘工具 + +*图03: 打开Gnome硬盘工具* 在左边的面板上选择你的硬盘,点击configure按钮,然后点击“Benchmark partition”: ![Fig.04: Benchmark disk/partition](http://s0.cyberciti.org/uploads/faq/2015/08/disks-2.jpg) -Fig.04: 评测硬盘/分区 -最后,点击“Start Benchmark...”按钮(你可能被要求输入管理员用户名和密码): +*图04: 评测硬盘/分区* + +最后,点击“Start Benchmark...”按钮(你可能需要输入管理员用户名和密码): ![Fig.05: Final benchmark result](http://s0.cyberciti.org/uploads/faq/2015/08/disks-3.jpg) -Fig.05: 最终的评测结果 + +*图05: 最终的评测结果* 如果你要问,我推荐使用哪种命令和方法? @@ -158,7 +165,7 @@ via: http://www.cyberciti.biz/faq/howto-linux-unix-test-disk-performance-with-dd 作者:Vivek Gite 译者:[DongShuaike](https://github.com/DongShuaike) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201508/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md b/published/201508/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md new file mode 100644 index 0000000000..d54e794459 --- /dev/null +++ b/published/201508/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md @@ -0,0 +1,156 @@ +在 Linux 下使用 RAID(一):介绍 RAID 的级别和概念 +================================================================================ + +RAID 的意思是廉价磁盘冗余阵列(Redundant Array of Inexpensive Disks),但现在它被称为独立磁盘冗余阵列(Redundant Array of Independent Drives)。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是一系列放在一起,成为一个逻辑卷的磁盘集合。 + +![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) + +*在 Linux 中理解 RAID 设置* + +RAID 包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。将至少两个磁盘连接到一个 RAID 控制器,而成为一个逻辑卷,也可以将多个驱动器放在一个组中。一组磁盘只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 + +这个系列被命名为“在 Linux 下使用 RAID”,分为9个部分,包括以下主题: + +- 第1部分:介绍 RAID 的级别和概念 +- 第2部分:在Linux中如何设置 RAID0(条带化) +- 第3部分:在Linux中如何设置 RAID1(镜像化) +- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) +- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) +- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) +- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 +- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 +- 第9部分:在 Linux 中管理 RAID + +这是9篇系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 + +### 软件 RAID 和硬件 RAID ### + +软件 RAID 的性能较低,因为其使用主机的资源。 需要加载 RAID 软件以从软件 RAID 卷中读取数据。在加载 RAID 软件前,操作系统需要引导起来才能加载 RAID 软件。在软件 RAID 中无需物理硬件。零成本投资。 + +硬件 RAID 的性能较高。他们采用 PCI Express 卡物理地提供有专用的 RAID 控制器。它不会使用主机资源。他们有 NVRAM 用于缓存的读取和写入。缓存用于 RAID 重建时,即使出现电源故障,它会使用后备的电池电源保持缓存。对于大规模使用是非常昂贵的投资。 + +硬件 RAID 卡如下所示: + +![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) + +*硬件 RAID* + +#### 重要的 RAID 概念 #### + +- **校验**方式用在 RAID 重建中从校验所保存的信息中重新生成丢失的内容。 RAID 5,RAID 6 基于校验。 +- **条带化**是将切片数据随机存储到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用2个磁盘,则每个磁盘存储我们的一半数据。 +- **镜像**被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1 中,它会保存相同的内容到其他盘上。 +- 
**热备份**只是我们的服务器上的一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动用于重建 RAID。 +- **块**是 RAID 控制器每次读写数据时的最小单位,最小 4KB。通过定义块大小,我们可以增加 I/O 性能。 + +RAID有不同的级别。在这里,我们仅列出在真实环境下的使用最多的 RAID 级别。 + +- RAID0 = 条带化 +- RAID1 = 镜像 +- RAID5 = 单磁盘分布式奇偶校验 +- RAID6 = 双磁盘分布式奇偶校验 +- RAID10 = 镜像 + 条带。(嵌套RAID) + +RAID 在大多数 Linux 发行版上使用名为 mdadm 的软件包进行管理。让我们先对每个 RAID 级别认识一下。 + +#### RAID 0 / 条带化 #### + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/9/9b/RAID_0.svg/150px-RAID_0.svg.png) + +条带化有很好的性能。在 RAID 0(条带化)中数据将使用切片的方式被写入到磁盘。一半的内容放在一个磁盘上,另一半内容将被写入到另一个磁盘。 + +假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。(LCTT 译注:实际上不可能按字节切片,是按数据块切片的。) + +在这种情况下,如果驱动器中的任何一个发生故障,我们就会丢失数据,因为一个盘中只有一半的数据,不能用于重建 RAID。不过,当比较写入速度和性能时,RAID 0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 + +- 高性能。 +- RAID 0 中容量零损失。 +- 零容错。 +- 写和读有很高的性能。 + +#### RAID 1 / 镜像化 #### + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/RAID_1.svg/150px-RAID_1.svg.png) + +镜像也有不错的性能。镜像可以对我们的数据做一份相同的副本。假设我们有两个2TB的硬盘驱动器,我们总共有4TB,但在镜像中,但是放在 RAID 控制器后面的驱动器形成了一个逻辑驱动器,我们只能看到这个逻辑驱动器有2TB。 + +当我们保存数据时,它将同时写入这两个2TB驱动器中。创建 RAID 1(镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以通过更换一个新的磁盘恢复 RAID 。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为另外的磁盘中也有相同的数据。所以是零数据丢失。 + +- 良好的性能。 +- 总容量丢失一半可用空间。 +- 完全容错。 +- 重建会更快。 +- 写性能变慢。 +- 读性能变好。 +- 能用于操作系统和小规模的数据库。 + +#### RAID 5 / 分布式奇偶校验 #### + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/6/64/RAID_5.svg/300px-RAID_5.svg.png) + +RAID 5 多用于企业级。 RAID 5 的以分布式奇偶校验的方式工作。奇偶校验信息将被用于重建数据。它从剩下的正常驱动器上的信息来重建。在驱动器发生故障时,这可以保护我们的数据。 + +假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中,而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 + +- 性能卓越 +- 读速度将非常好。 +- 写速度处于平均水准,如果我们不使用硬件 RAID 控制器,写速度缓慢。 +- 从所有驱动器的奇偶校验信息中重建。 +- 完全容错。 +- 1个磁盘空间将用于奇偶校验。 +- 可以被用在文件服务器,Web服务器,非常重要的备份中。 + +#### RAID 6 双分布式奇偶校验磁盘 #### + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/7/70/RAID_6.svg/300px-RAID_6.svg.png) + +RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大数量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以更换新的驱动器后重建数据。 + +它比 RAID 5 慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度就处于平均水准。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 + +- 性能不佳。 +- 读的性能很好。 +- 如果我们不使用硬件 RAID 控制器写的性能会很差。 +- 从两个奇偶校验驱动器上重建。 +- 完全容错。 +- 2个磁盘空间将用于奇偶校验。 +- 可用于大型阵列。 +- 用于备份和视频流中,用于大规模。 + +#### RAID 10 / 镜像+条带 #### + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/e/e6/RAID_10_01.svg/300px-RAID_10_01.svg.png) + +![](https://upload.wikimedia.org/wikipedia/commons/thumb/a/ad/RAID_01.svg/300px-RAID_01.svg.png) + +RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 + +假设,我们有4个驱动器。当我逻辑卷上写数据时,它会使用镜像和条带的方式将数据保存到4个驱动器上。 + +如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下方式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入另外两个磁盘,所有数据都写入两块磁盘。这样可以将每个数据复制到另外的磁盘。 + +同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一组盘,“E”写入第二组盘。再次将“C”写入第一组盘,“M”到第二组盘。 + +- 良好的读写性能。 +- 总容量丢失一半的可用空间。 +- 容错。 +- 从副本数据中快速重建。 +- 由于其高性能和高可用性,常被用于数据库的存储中。 + +### 结论 ### + +在这篇文章中,我们已经了解了什么是 RAID 和在实际环境大多采用哪个级别的 RAID。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容可以基本满足你对 RAID 的了解。 + +在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/understanding-raid-setup-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + 
+[a]:http://www.tecmint.com/author/babinlonston/ diff --git a/published/201508/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md b/published/201508/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md new file mode 100644 index 0000000000..650897d1d5 --- /dev/null +++ b/published/201508/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md @@ -0,0 +1,219 @@ +在 Linux 下使用 RAID(一):使用 mdadm 工具创建软件 RAID 0 (条带化) +================================================================================ + +RAID 即廉价磁盘冗余阵列,其高可用性和可靠性适用于大规模环境中,相比正常使用,数据更需要被保护。RAID 是一些磁盘的集合,是包含一个阵列的逻辑卷。驱动器可以组合起来成为一个阵列或称为(组的)集合。 + +创建 RAID 最少应使用2个连接到 RAID 控制器的磁盘组成,来构成逻辑卷,可以根据定义的 RAID 级别将更多的驱动器添加到一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 也叫做穷人 RAID。 + +![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) + +*在 Linux 中创建 RAID0* + +使用 RAID 的主要目的是为了在发生单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 + +#### 在 RAID 0 中条带是什么 #### + +条带是通过将数据在同时分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到该逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID 0 逻辑卷的操作系统来提高重要文件的安全性。 + +- RAID 0 性能较高。 +- 在 RAID 0 上,空间零浪费。 +- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 +- 写和读性能都很好。 + +#### 要求 #### + +创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,不过数目应该是2,4,6,8等的偶数。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 + +在这里,我们没有使用硬件 RAID,此设置只需要软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的功能界面访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问它的界面。 + +如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 + +- [介绍 RAID 的级别和概念][1] + +**我的服务器设置** + + 操作系统 : CentOS 6.5 Final + IP 地址 : 192.168.0.225 + 两块盘 : 20 GB each + +这是9篇系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID 0(条带化),以名为 sdb 和 sdc 两个 20GB 的硬盘为例。 + +### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### + +1、 在 Linux 上设置 RAID 0 前,我们先更新一下系统,然后安装`mdadm` 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 + + # yum clean all && yum update + # yum install mdadm -y + +![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) + +*安装 mdadm 工具* + +### 第2步:确认连接了两个 20GB 的硬盘 ### + +2、 在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 + + # ls -l /dev | grep sd + +![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) + +*检查硬盘* + +3、 一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的`mdadm` 命令来查看。 + + # mdadm --examine /dev/sd[b-c] + +![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) + +*检查 RAID 设备* + +从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 + +### 第3步:创建 RAID 分区 ### + +4、 现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 + + # fdisk /dev/sdb + +请按照以下说明创建分区。 + +- 按`n` 创建新的分区。 +- 然后按`P` 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按`P` 来显示创建好的分区。 + +![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) + +*创建分区* + +请按照以下说明将分区创建为 Linux 的 RAID 类型。 + +- 按`L`,列出所有可用的类型。 +- 按`t` 去修改分区。 +- 键入`fd` 设置为 Linux 的 RAID 类型,然后按回车确认。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 + +![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) + +*在 Linux 上创建 RAID 分区* + +**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 + +5、 创建分区后,验证这两个驱动器是否正确定义 RAID,使用下面的命令。 + + # mdadm --examine /dev/sd[b-c] + # mdadm --examine /dev/sd[b-c]1 + +![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) + +*验证 RAID 分区* + +### 第4步:创建 RAID 
md 设备 ###
+
+6、 现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。
+
+    # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1
+    # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1
+
+- -C – 创建
+- -l – 级别
+- -n – RAID 设备数
+
+7、 一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。
+
+    # cat /proc/mdstat
+
+![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png)
+
+*查看 RAID 级别*
+
+    # mdadm -E /dev/sd[b-c]1
+
+![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png)
+
+*查看 RAID 设备*
+
+    # mdadm --detail /dev/md0
+
+![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png)
+
+*查看 RAID 阵列*
+
+### 第5步:给 RAID 设备创建文件系统 ###
+
+8、 将 RAID 设备 /dev/md0 创建为 ext4 文件系统,并挂载到 /mnt/raid0 下。
+
+    # mkfs.ext4 /dev/md0
+
+![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png)
+
+*创建 ext4 文件系统*
+
+9、 在 RAID 设备上创建好 ext4 文件系统后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在其下。
+
+    # mkdir /mnt/raid0
+    # mount /dev/md0 /mnt/raid0/
+
+10、 下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。
+
+    # df -h
+
+11、 接下来,在挂载点 /mnt/raid0 下创建一个名为`tecmint.txt` 的文件,为创建的文件添加一些内容,并查看文件和目录的内容。
+
+    # touch /mnt/raid0/tecmint.txt
+    # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt
+    # cat /mnt/raid0/tecmint.txt
+    # ls -l /mnt/raid0/
+
+![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png)
+
+*验证挂载的设备*
+
+12、 当你验证挂载点后,就可以将它添加到 /etc/fstab 文件中。
+
+    # vim /etc/fstab
+
+添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。
+
+    /dev/md0                /mnt/raid0              ext4    defaults         0 0
+
+![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png)
+
+*添加设备到 fstab 文件中*
+
+13、 使用 mount 命令的 `-a` 来检查 fstab 的条目是否有误。
+
+    # mount -av
+
+![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png)
+
+*检查 fstab 文件是否有误*
+
+### 第6步:保存 RAID 配置 ###
+
+14、 最后,保存 RAID 配置到一个文件中,以供将来使用。我们再次使用带有`-s` (scan) 和`-v` (verbose) 选项的 `mdadm` 命令,如图所示。
+
+    # mdadm -E -s -v >> /etc/mdadm.conf
+    # mdadm --detail --scan --verbose >> /etc/mdadm.conf
+    # cat /etc/mdadm.conf
+
+![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png)
+
+*保存 RAID 配置*
+
+就这样,我们看到了如何使用两块硬盘配置条带化的 RAID 0。在接下来的文章中,我们将看到如何设置 RAID 1。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/create-raid0-in-linux/
+
+作者:[Babin Lonston][a]
+译者:[strugglingyouth](https://github.com/strugglingyouth)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/babinlonston/
+[1]:https://linux.cn/article-6085-1.html
diff --git a/translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/published/201508/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md
similarity index 50%
rename from translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md
rename to published/201508/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md
index 948e530ed8..dba520121f 100644
--- a/translated/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md
+++ b/published/201508/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md
@@ -1,83 +1,82 @@
-在 Linux 中使用"两个磁盘"创建 RAID 1(镜像) - 第3部分
+在 Linux 下使用 RAID(三):用两块磁盘创建 RAID 1(镜像)
================================================================================ -RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁盘中。创建 RAID1 至少需要两个磁盘,它的读取性能或者可靠性比数据存储容量更好。 + +**RAID 镜像**意味着相同数据的完整克隆(或镜像),分别写入到两个磁盘中。创建 RAID 1 至少需要两个磁盘,而且仅用于读取性能或者可靠性要比数据存储容量更重要的场合。 ![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) -在 Linux 中设置 RAID1 +*在 Linux 中设置 RAID 1* 创建镜像是为了防止因硬盘故障导致数据丢失。镜像中的每个磁盘包含数据的完整副本。当一个磁盘发生故障时,相同的数据可以从其它正常磁盘中读取。而后,可以从正在运行的计算机中直接更换发生故障的磁盘,无需任何中断。 ### RAID 1 的特点 ### --镜像具有良好的性能。 +- 镜像具有良好的性能。 --磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 +- 磁盘利用率为50%。也就是说,如果我们有两个磁盘每个500GB,总共是1TB,但在镜像中它只会显示500GB。 --在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 +- 在镜像如果一个磁盘发生故障不会有数据丢失,因为两个磁盘中的内容相同。 --读取数据会比写入性能更好。 +- 读取性能会比写入性能更好。 #### 要求 #### +创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8等偶数。要添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 -创建 RAID 1 至少要有两个磁盘,你也可以添加更多的磁盘,磁盘数需为2,4,6,8的两倍。为了能够添加更多的磁盘,你的系统必须有 RAID 物理适配器(硬件卡)。 +这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的功能界面或使用 Ctrl + I 键来访问它。 -这里,我们使用软件 RAID 不是硬件 RAID,如果你的系统有一个内置的物理硬件 RAID 卡,你可以从它的 UI 组件或使用 Ctrl + I 键来访问它。 - -需要阅读: [Basic Concepts of RAID in Linux][1] +需要阅读: [介绍 RAID 的级别和概念][1] #### 在我的服务器安装 #### - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.226 - Hostname : rd1.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc + 操作系统 : CentOS 6.5 Final + IP 地址 : 192.168.0.226 + 主机名 : rd1.tecmintlocal.com + 磁盘 1 [20GB] : /dev/sdb + 磁盘 2 [20GB] : /dev/sdc -本文将指导你使用 mdadm (创建和管理 RAID 的)一步一步的建立一个软件 RAID 1 或镜像在 Linux 平台上。但同样的做法也适用于其它 Linux 发行版如 RedHat,CentOS,Fedora 等等。 +本文将指导你在 Linux 平台上使用 mdadm (用于创建和管理 RAID )一步步的建立一个软件 RAID 1 (镜像)。同样的做法也适用于如 RedHat,CentOS,Fedora 等 Linux 发行版。 -### 第1步:安装所需要的并且检查磁盘 ### +### 第1步:安装所需软件并且检查磁盘 ### -1.正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 +1、 正如我前面所说,在 Linux 中我们需要使用 mdadm 软件来创建和管理 RAID。所以,让我们用 yum 或 apt-get 的软件包管理工具在 Linux 上安装 mdadm 软件包。 - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] + # yum install mdadm [在 RedHat 系统] + # apt-get install mdadm [在 Debain 系统] -2. 一旦安装好‘mdadm‘包,我们需要使用下面的命令来检查磁盘是否已经配置好。 +2、 一旦安装好`mdadm`包,我们需要使用下面的命令来检查磁盘是否已经配置好。 # mdadm -E /dev/sd[b-c] ![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) -检查 RAID 的磁盘 - +*检查 RAID 的磁盘* 正如你从上面图片看到的,没有检测到任何超级块,这意味着还没有创建RAID。 ### 第2步:为 RAID 创建分区 ### -3. 正如我提到的,我们最少使用两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID1。我们首先使用‘fdisk‘命令来创建这两个分区并更改其类型为 raid。 +3、 正如我提到的,我们使用最少的两个分区 /dev/sdb 和 /dev/sdc 来创建 RAID 1。我们首先使用`fdisk`命令来创建这两个分区并更改其类型为 raid。 # fdisk /dev/sdb 按照下面的说明 -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 +- 按 `n` 创建新的分区。 +- 然后按 `P` 选择主分区。 - 接下来选择分区号为1。 - 按两次回车键默认将整个容量分配给它。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 修改分区类型。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 +- 然后,按 `P` 来打印创建好的分区。 +- 按 `L`,列出所有可用的类型。 +- 按 `t` 修改分区类型。 +- 键入 `fd` 设置为 Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 ![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) -创建磁盘分区 +*创建磁盘分区* 在创建“/dev/sdb”分区后,接下来按照同样的方法创建分区 /dev/sdc 。 @@ -85,59 +84,59 @@ RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁 ![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) -创建第二个分区 +*创建第二个分区* -4. 
一旦这两个分区创建成功后,使用相同的命令来检查 sdb & sdc 分区并确认 RAID 分区的类型如上图所示。 +4、 一旦这两个分区创建成功后,使用相同的命令来检查 sdb 和 sdc 分区并确认 RAID 分区的类型如上图所示。 # mdadm -E /dev/sd[b-c] ![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) -验证分区变化 +*验证分区变化* ![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) -检查 RAID 类型 +*检查 RAID 类型* **注意**: 正如你在上图所看到的,在 sdb1 和 sdc1 中没有任何对 RAID 的定义,这就是我们没有检测到超级块的原因。 -### 步骤3:创建 RAID1 设备 ### +### 第3步:创建 RAID 1 设备 ### -5.接下来使用以下命令来创建一个名为 /dev/md0 的“RAID1”设备并验证它 +5、 接下来使用以下命令来创建一个名为 /dev/md0 的“RAID 1”设备并验证它 # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 # cat /proc/mdstat ![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) -创建RAID设备 +*创建RAID设备* -6. 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 +6、 接下来使用如下命令来检查 RAID 设备类型和 RAID 阵列 # mdadm -E /dev/sd[b-c]1 # mdadm --detail /dev/md0 ![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) -检查 RAID 设备类型 +*检查 RAID 设备类型* ![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) -检查 RAID 设备阵列 +*检查 RAID 设备阵列* -从上图中,人们很容易理解,RAID1 已经使用的 /dev/sdb1 和 /dev/sdc1 分区被创建,你也可以看到状态为 resyncing。 +从上图中,人们很容易理解,RAID 1 已经创建好了,使用了 /dev/sdb1 和 /dev/sdc1 分区,你也可以看到状态为 resyncing(重新同步中)。 ### 第4步:在 RAID 设备上创建文件系统 ### -7. 使用 ext4 为 md0 创建文件系统并挂载到 /mnt/raid1 . +7、 给 md0 上创建 ext4 文件系统 # mkfs.ext4 /dev/md0 ![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) -创建 RAID 设备文件系统 +*创建 RAID 设备文件系统* -8. 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 +8、 接下来,挂载新创建的文件系统到“/mnt/raid1”,并创建一些文件,验证在挂载点的数据 # mkdir /mnt/raid1 # mount /dev/md0 /mnt/raid1/ @@ -146,51 +145,52 @@ RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁 ![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) -挂载 RAID 设备 +*挂载 RAID 设备* -9.为了在系统重新启动自动挂载 RAID1,需要在 fstab 文件中添加条目。打开“/etc/fstab”文件并添加以下行。 +9、为了在系统重新启动自动挂载 RAID 1,需要在 fstab 文件中添加条目。打开`/etc/fstab`文件并添加以下行: /dev/md0 /mnt/raid1 ext4 defaults 0 0 ![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) -自动挂载 Raid 设备 +*自动挂载 Raid 设备* + +10、 运行`mount -av`,检查 fstab 中的条目是否有错误 -10. 运行“mount -a”,检查 fstab 中的条目是否有错误 # mount -av ![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) -检查 fstab 中的错误 +*检查 fstab 中的错误* -11. 
接下来,使用下面的命令保存 raid 的配置到文件“mdadm.conf”中。 +11、 接下来,使用下面的命令保存 RAID 的配置到文件“mdadm.conf”中。 # mdadm --detail --scan --verbose >> /etc/mdadm.conf ![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) -保存 Raid 的配置 +*保存 Raid 的配置* 上述配置文件在系统重启时会读取并加载 RAID 设备。 ### 第5步:在磁盘故障后检查数据 ### -12.我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 +12、我们的主要目的是,即使在任何磁盘故障或死机时必须保证数据是可用的。让我们来看看,当任何一个磁盘不可用时会发生什么。 # mdadm --detail /dev/md0 ![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) -验证 Raid 设备 +*验证 RAID 设备* -在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的并且 Active Devices 是2.现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 +在上面的图片中,我们可以看到在 RAID 中有2个设备是可用的,并且 Active Devices 是2。现在让我们看看,当一个磁盘拔出(移除 sdc 磁盘)或损坏后会发生什么。 # ls -l /dev | grep sd # mdadm --detail /dev/md0 ![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) -测试 RAID 设备 +*测试 RAID 设备* 现在,在上面的图片中你可以看到,一个磁盘不见了。我从虚拟机上删除了一个磁盘。此时让我们来检查我们宝贵的数据。 @@ -199,9 +199,9 @@ RAID 镜像意味着相同数据的完整克隆(或镜像)写入到两个磁 ![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) -验证 RAID 数据 +*验证 RAID 数据* -你有没有看到我们的数据仍然可用。由此,我们可以知道 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 +你可以看到我们的数据仍然可用。由此,我们可以了解 RAID 1(镜像)的优势。在接下来的文章中,我们将看到如何设置一个 RAID 5 条带化分布式奇偶校验。希望这可以帮助你了解 RAID 1(镜像)是如何工作的。 -------------------------------------------------------------------------------- @@ -209,9 +209,9 @@ via: http://www.tecmint.com/create-raid1-in-linux/ 作者:[Babin Lonston][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[1]:https://linux.cn/article-6085-1.html diff --git a/translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md b/published/201508/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md similarity index 50% rename from translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md rename to published/201508/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md index 7de5199a08..34ac7f18b2 100644 --- a/translated/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md +++ b/published/201508/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md @@ -1,89 +1,90 @@ - -在 Linux 中创建 RAID 5(条带化与分布式奇偶校验) - 第4部分 +在 Linux 下使用 RAID(四):创建 RAID 5(条带化与分布式奇偶校验) ================================================================================ -在 RAID 5 中,条带化数据跨多个驱磁盘使用分布式奇偶校验。分布式奇偶校验的条带化意味着它将奇偶校验信息和条带中的数据分布在多个磁盘上,它将有很好的数据冗余。 + +在 RAID 5 中,数据条带化后存储在分布式奇偶校验的多个磁盘上。分布式奇偶校验的条带化意味着它将奇偶校验信息和条带化数据分布在多个磁盘上,这样会有很好的数据冗余。 ![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg) -在 Linux 中配置 RAID 5 +*在 Linux 中配置 RAID 5* -对于此 RAID 级别它至少应该有三个或更多个磁盘。RAID 5 通常被用于大规模生产环境中花费更多的成本来提供更好的数据冗余性能。 +对于此 RAID 级别它至少应该有三个或更多个磁盘。RAID 5 通常被用于大规模生产环境中,以花费更多的成本来提供更好的数据冗余性能。 #### 什么是奇偶校验? 
#### -奇偶校验是在数据存储中检测错误最简单的一个方法。奇偶校验信息存储在每个磁盘中,比如说,我们有4个磁盘,其中一个磁盘空间被分割去存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,我们可以通过更换故障磁盘后,从奇偶校验信息重建得到原来的数据。 +奇偶校验是在数据存储中检测错误最简单的常见方式。奇偶校验信息存储在每个磁盘中,比如说,我们有4个磁盘,其中相当于一个磁盘大小的空间被分割去存储所有磁盘的奇偶校验信息。如果任何一个磁盘出现故障,我们可以通过更换故障磁盘后,从奇偶校验信息重建得到原来的数据。 #### RAID 5 的优点和缺点 #### -- 提供更好的性能 +- 提供更好的性能。 - 支持冗余和容错。 - 支持热备份。 -- 将失去一个磁盘的容量存储奇偶校验信息。 +- 将用掉一个磁盘的容量存储奇偶校验信息。 - 单个磁盘发生故障后不会丢失数据。我们可以更换故障硬盘后从奇偶校验信息中重建数据。 -- 事务处理读操作会更快。 -- 由于奇偶校验占用资源,写操作将是缓慢的。 +- 适合于面向事务处理的环境,读操作会更快。 +- 由于奇偶校验占用资源,写操作会慢一些。 - 重建需要很长的时间。 #### 要求 #### + 创建 RAID 5 最少需要3个磁盘,你也可以添加更多的磁盘,前提是你要有多端口的专用硬件 RAID 控制器。在这里,我们使用“mdadm”包来创建软件 RAID。 -mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下 RAID 没有可用的配置文件,我们在创建和配置 RAID 后必须将配置文件保存在一个单独的文件中,例如:mdadm.conf。 +mdadm 是一个允许我们在 Linux 下配置和管理 RAID 设备的包。默认情况下没有 RAID 的配置文件,我们在创建和配置 RAID 后必须将配置文件保存在一个单独的文件 mdadm.conf 中。 在进一步学习之前,我建议你通过下面的文章去了解 Linux 中 RAID 的基础知识。 -- [Basic Concepts of RAID in Linux – Part 1][1] -- [Creating RAID 0 (Stripe) in Linux – Part 2][2] -- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3] +- [介绍 RAID 的级别和概念][1] +- [使用 mdadm 工具创建软件 RAID 0 (条带化)][2] +- [用两块磁盘创建 RAID 1(镜像)][3] #### 我的服务器设置 #### - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.227 - Hostname : rd5.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd + 操作系统 : CentOS 6.5 Final + IP 地址 : 192.168.0.227 + 主机名 : rd5.tecmintlocal.com + 磁盘 1 [20GB] : /dev/sdb + 磁盘 2 [20GB] : /dev/sdc + 磁盘 3 [20GB] : /dev/sdd -这篇文章是 RAID 系列9教程的第4部分,在这里我们要建立一个软件 RAID 5(分布式奇偶校验)使用三个20GB(名为/dev/sdb, /dev/sdc 和 /dev/sdd)的磁盘在 Linux 系统或服务器中上。 +这是9篇系列教程的第4部分,在这里我们要在 Linux 系统或服务器上使用三个20GB(名为/dev/sdb, /dev/sdc 和 /dev/sdd)的磁盘建立带有分布式奇偶校验的软件 RAID 5。 ### 第1步:安装 mdadm 并检验磁盘 ### -1.正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID 设置,但同样的做法也适用于其他 Linux 发行版。 +1、 正如我们前面所说,我们使用 CentOS 6.5 Final 版本来创建 RAID 设置,但同样的做法也适用于其他 Linux 发行版。 # lsb_release -a # ifconfig | grep inet ![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png) -CentOS 6.5 摘要 +*CentOS 6.5 摘要* -2. 如果你按照我们的 RAID 系列去配置的,我们假设你已经安装了“mdadm”包,如果没有,根据你的 Linux 发行版使用下面的命令安装。 +2、 如果你按照我们的 RAID 系列去配置的,我们假设你已经安装了“mdadm”包,如果没有,根据你的 Linux 发行版使用下面的命令安装。 - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] + # yum install mdadm [在 RedHat 系统] + # apt-get install mdadm [在 Debain 系统] -3. “mdadm”包安装后,先使用‘fdisk‘命令列出我们在系统上增加的三个20GB的硬盘。 +3、 “mdadm”包安装后,先使用`fdisk`命令列出我们在系统上增加的三个20GB的硬盘。 # fdisk -l | grep sd ![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png) -安装 mdadm 工具 +*安装 mdadm 工具* -4. 现在该检查这三个磁盘是否存在 RAID 块,使用下面的命令来检查。 +4、 现在该检查这三个磁盘是否存在 RAID 块,使用下面的命令来检查。 # mdadm -E /dev/sd[b-d] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd # 或 ![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png) -检查 Raid 磁盘 +*检查 Raid 磁盘* **注意**: 上面的图片说明,没有检测到任何超级块。所以,这三个磁盘中没有定义 RAID。让我们现在开始创建一个吧! ### 第2步:为磁盘创建 RAID 分区 ### -5. 首先,在创建 RAID 前我们要为磁盘分区(/dev/sdb, /dev/sdc 和 /dev/sdd),在进行下一步之前,先使用‘fdisk’命令进行分区。 +5、 首先,在创建 RAID 前磁盘(/dev/sdb, /dev/sdc 和 /dev/sdd)必须有分区,因此,在进行下一步之前,先使用`fdisk`命令进行分区。 # fdisk /dev/sdb # fdisk /dev/sdc @@ -93,20 +94,20 @@ CentOS 6.5 摘要 请按照下面的说明在 /dev/sdb 硬盘上创建分区。 -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。选择主分区是因为还没有定义过分区。 -- 接下来选择分区号为1。默认就是1. 
+- 按 `n` 创建新的分区。 +- 然后按 `P` 选择主分区。选择主分区是因为还没有定义过分区。 +- 接下来选择分区号为1。默认就是1。 - 这里是选择柱面大小,我们没必要选择指定的大小,因为我们需要为 RAID 使用整个分区,所以只需按两次 Enter 键默认将整个容量分配给它。 -- 然后,按 ‘P’ 来打印创建好的分区。 -- 改变分区类型,按 ‘L’可以列出所有可用的类型。 -- 按 ‘t’ 修改分区类型。 -- 这里使用‘fd’设置为 RAID 的类型。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 +- 然后,按 `P` 来打印创建好的分区。 +- 改变分区类型,按 `L`可以列出所有可用的类型。 +- 按 `t` 修改分区类型。 +- 这里使用`fd`设置为 RAID 的类型。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 ![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png) -创建 sdb 分区 +*创建 sdb 分区* **注意**: 我们仍要按照上面的步骤来创建 sdc 和 sdd 的分区。 @@ -118,7 +119,7 @@ CentOS 6.5 摘要 ![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png) -创建 sdc 分区 +*创建 sdc 分区* #### 创建 /dev/sdd 分区 #### @@ -126,93 +127,87 @@ CentOS 6.5 摘要 ![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png) -创建 sdd 分区 +*创建 sdd 分区* -6. 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。 +6、 创建分区后,检查三个磁盘 sdb, sdc, sdd 的变化。 # mdadm --examine /dev/sdb /dev/sdc /dev/sdd - - or - - # mdadm -E /dev/sd[b-c] + # mdadm -E /dev/sd[b-c] # 或 ![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png) -检查磁盘变化 +*检查磁盘变化* **注意**: 在上面的图片中,磁盘的类型是 fd。 -7.现在在新创建的分区检查 RAID 块。如果没有检测到超级块,我们就能够继续下一步,创建一个新的 RAID 5 的设置在这些磁盘中。 +7、 现在在新创建的分区检查 RAID 块。如果没有检测到超级块,我们就能够继续下一步,在这些磁盘中创建一个新的 RAID 5 配置。 ![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png) -在分区中检查 Raid +*在分区中检查 RAID * ### 第3步:创建 md 设备 md0 ### -8. 现在创建一个 RAID 设备“md0”(即 /dev/md0)使用所有新创建的分区(sdb1, sdc1 and sdd1) ,使用以下命令。 +8、 现在使用所有新创建的分区(sdb1, sdc1 和 sdd1)创建一个 RAID 设备“md0”(即 /dev/md0),使用以下命令。 # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 - - or - - # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1 + # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1 # 或 -9. 创建 RAID 设备后,检查并确认 RAID,包括设备和从 mdstat 中输出的 RAID 级别。 +9、 创建 RAID 设备后,检查并确认 RAID,从 mdstat 中输出中可以看到包括的设备的 RAID 级别。 # cat /proc/mdstat ![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png) -验证 Raid 设备 +*验证 Raid 设备* -如果你想监视当前的创建过程,你可以使用‘watch‘命令,使用 watch ‘cat /proc/mdstat‘,它会在屏幕上显示且每隔1秒刷新一次。 +如果你想监视当前的创建过程,你可以使用`watch`命令,将 `cat /proc/mdstat` 传递给它,它会在屏幕上显示且每隔1秒刷新一次。 # watch -n1 cat /proc/mdstat ![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png) -监控 Raid 5 过程 +*监控 RAID 5 构建过程* ![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png) -Raid 5 过程概要 +*Raid 5 过程概要* -10. 创建 RAID 后,使用以下命令验证 RAID 设备 +10、 创建 RAID 后,使用以下命令验证 RAID 设备 # mdadm -E /dev/sd[b-d]1 ![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png) -验证 Raid 级别 +*验证 Raid 级别* **注意**: 因为它显示三个磁盘的信息,上述命令的输出会有点长。 -11. 接下来,验证 RAID 阵列的假设,这包含正在运行 RAID 的设备,并开始重新同步。 +11、 接下来,验证 RAID 阵列,假定包含 RAID 的设备正在运行并已经开始了重新同步。 # mdadm --detail /dev/md0 ![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png) -验证 Raid 阵列 +*验证 RAID 阵列* ### 第4步:为 md0 创建文件系统### -12. 在挂载前为“md0”设备创建 ext4 文件系统。 +12、 在挂载前为“md0”设备创建 ext4 文件系统。 # mkfs.ext4 /dev/md0 ![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png) -创建 md0 文件系统 +*创建 md0 文件系统* -13.现在,在‘/mnt‘下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下并检查下挂载点的文件,你会看到 lost+found 目录。 +13、 现在,在`/mnt`下创建目录 raid5,然后挂载文件系统到 /mnt/raid5/ 下,并检查挂载点下的文件,你会看到 lost+found 目录。 # mkdir /mnt/raid5 # mount /dev/md0 /mnt/raid5/ # ls -l /mnt/raid5/ -14. 
在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。 +14、 在挂载点 /mnt/raid5 下创建几个文件,并在其中一个文件中添加一些内容然后去验证。 # touch /mnt/raid5/raid5_tecmint_{1..5} # ls -l /mnt/raid5/ @@ -222,9 +217,9 @@ Raid 5 过程概要 ![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png) -挂载 Raid 设备 +*挂载 RAID 设备* -15. 我们需要在 fstab 中添加条目,否则系统重启后将不会显示我们的挂载点。然后编辑 fstab 文件添加条目,在文件尾追加以下行,如下图所示。挂载点会根据你环境的不同而不同。 +15、 我们需要在 fstab 中添加条目,否则系统重启后将不会显示我们的挂载点。编辑 fstab 文件添加条目,在文件尾追加以下行。挂载点会根据你环境的不同而不同。 # vim /etc/fstab @@ -232,19 +227,19 @@ Raid 5 过程概要 ![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png) -自动挂载 Raid 5 +*自动挂载 RAID 5* -16. 接下来,运行‘mount -av‘命令检查 fstab 条目中是否有错误。 +16、 接下来,运行`mount -av`命令检查 fstab 条目中是否有错误。 # mount -av ![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png) -检查 Fstab 错误 +*检查 Fstab 错误* ### 第5步:保存 Raid 5 的配置 ### -17. 在前面章节已经说过,默认情况下 RAID 没有配置文件。我们必须手动保存。如果此步不跟 RAID 设备将不会存在 md0,它将会跟一些其他数子。 +17、 在前面章节已经说过,默认情况下 RAID 没有配置文件。我们必须手动保存。如果此步中没有跟随不属于 md0 的 RAID 设备,它会是一些其他随机数字。 所以,我们必须要在系统重新启动之前保存配置。如果配置保存它在系统重新启动时会被加载到内核中然后 RAID 也将被加载。 @@ -252,17 +247,17 @@ Raid 5 过程概要 ![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png) -保存 Raid 5 配置 +*保存 RAID 5 配置* -注意:保存配置将保持 RAID 级别的稳定性在 md0 设备中。 +注意:保存配置将保持 md0 设备的 RAID 级别稳定不变。 ### 第6步:添加备用磁盘 ### -18.备用磁盘有什么用?它是非常有用的,如果我们有一个备用磁盘,当我们阵列中的任何一个磁盘发生故障后,这个备用磁盘会主动添加并重建进程,并从其他磁盘上同步数据,所以我们可以在这里看到冗余。 +18、 备用磁盘有什么用?它是非常有用的,如果我们有一个备用磁盘,当我们阵列中的任何一个磁盘发生故障后,这个备用磁盘会进入激活重建过程,并从其他磁盘上同步数据,这样就有了冗余。 更多关于添加备用磁盘和检查 RAID 5 容错的指令,请阅读下面文章中的第6步和第7步。 -- [Add Spare Drive to Raid 5 Setup][4] +- [在 RAID 5 中添加备用磁盘][4] ### 结论 ### @@ -274,12 +269,12 @@ via: http://www.tecmint.com/create-raid-5-in-linux/ 作者:[Babin Lonston][a] 译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ -[3]:http://www.tecmint.com/create-raid1-in-linux/ +[1]:https://linux.cn/article-6085-1.html +[2]:https://linux.cn/article-6087-1.html +[3]:https://linux.cn/article-6093-1.html [4]:http://www.tecmint.com/create-raid-6-in-linux/ diff --git a/published/201508/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md b/published/201508/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md new file mode 100644 index 0000000000..d222a997e5 --- /dev/null +++ b/published/201508/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md @@ -0,0 +1,320 @@ +在 Linux 下使用 RAID(五):安装 RAID 6(条带化双分布式奇偶校验) +================================================================================ + +RAID 6 是 RAID 5 的升级版,它有两个分布式奇偶校验,即使两个磁盘发生故障后依然有容错能力。在两个磁盘同时发生故障时,系统的关键任务仍然能运行。它与 RAID 5 相似,但性能更健壮,因为它多用了一个磁盘来进行奇偶校验。 + +在之前的文章中,我们已经在 RAID 5 看了分布式奇偶校验,但在本文中,我们将看到的是 RAID 6 双分布式奇偶校验。不要期望比其他 RAID 有更好的性能,除非你也安装了一个专用的 RAID 控制器。在 RAID 6 中,即使我们失去了2个磁盘,我们仍可以通过更换磁盘,从校验中构建数据,然后取回数据。 + +![Setup RAID 6 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Setup-RAID-6-in-Linux.jpg) + +*在 Linux 中安装 RAID 6* + +要建立一个 RAID 6,一组最少需要4个磁盘。RAID 6 甚至在有些组中会有更多磁盘,这样将多个硬盘捆在一起,当读取数据时,它会同时从所有磁盘读取,所以读取速度会更快,当写数据时,因为它要将数据写在条带化的多个磁盘上,所以性能会较差。 + +现在,很多人都在讨论为什么我们需要使用 RAID 6,它的性能和其他 RAID 相比并不太好。提出这个问题首先需要知道的是,如果需要高容错性就选择 RAID 
6。在每一个用于数据库的高可用性要求较高的环境中,他们需要 RAID 6 因为数据库是最重要,无论花费多少都需要保护其安全,它在视频流环境中也是非常有用的。 + +#### RAID 6 的的优点和缺点 #### + +- 性能不错。 +- RAID 6 比较昂贵,因为它要求两个独立的磁盘用于奇偶校验功能。 +- 将失去两个磁盘的容量来保存奇偶校验信息(双奇偶校验)。 +- 即使两个磁盘损坏,数据也不会丢失。我们可以在更换损坏的磁盘后从校验中重建数据。 +- 读性能比 RAID 5 更好,因为它从多个磁盘读取,但对于没有专用的 RAID 控制器的设备写性能将非常差。 + +#### 要求 #### + +要创建一个 RAID 6 最少需要4个磁盘。你也可以添加更多的磁盘,但你必须有专用的 RAID 控制器。使用软件 RAID 我们在 RAID 6 中不会得到更好的性能,所以我们需要一个物理 RAID 控制器。 + +如果你新接触 RAID 设置,我们建议先看完以下 RAID 文章。 + +- [介绍 RAID 的级别和概念][1] +- [使用 mdadm 工具创建软件 RAID 0 (条带化)][2] +- [用两块磁盘创建 RAID 1(镜像)][3] +- [创建 RAID 5(条带化与分布式奇偶校验)](4) + +#### 我的服务器设置 #### + + 操作系统 : CentOS 6.5 Final + IP 地址 : 192.168.0.228 + 主机名 : rd6.tecmintlocal.com + 磁盘 1 [20GB] : /dev/sdb + 磁盘 2 [20GB] : /dev/sdc + 磁盘 3 [20GB] : /dev/sdd + 磁盘 4 [20GB] : /dev/sde + +这是9篇系列教程的第5部分,在这里我们将看到如何在 Linux 系统或者服务器上使用四个 20GB 的磁盘(名为 /dev/sdb、 /dev/sdc、 /dev/sdd 和 /dev/sde)创建和设置软件 RAID 6 (条带化双分布式奇偶校验)。 + +### 第1步:安装 mdadm 工具,并检查磁盘 ### + +1、 如果你按照我们最进的两篇 RAID 文章(第2篇和第3篇),我们已经展示了如何安装`mdadm`工具。如果你直接看的这篇文章,我们先来解释下在 Linux 系统中如何使用`mdadm`工具来创建和管理 RAID,首先根据你的 Linux 发行版使用以下命令来安装。 + + # yum install mdadm [在 RedHat 系统] + # apt-get install mdadm [在 Debain 系统] + +2、 安装该工具后,然后来验证所需的四个磁盘,我们将会使用下面的`fdisk`命令来检查用于创建 RAID 的磁盘。 + + # fdisk -l | grep sd + +![Check Hard Disk in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Linux-Disks.png) + +*在 Linux 中检查磁盘* + +3、 在创建 RAID 磁盘前,先检查下我们的磁盘是否创建过 RAID 分区。 + + # mdadm -E /dev/sd[b-e] + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde # 或 + +![Check Raid on Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Disk-Raid.png) + +*在磁盘上检查 RAID 分区* + +**注意**: 在上面的图片中,没有检测到任何 super-block 或者说在四个磁盘上没有 RAID 存在。现在我们开始创建 RAID 6。 + +### 第2步:为 RAID 6 创建磁盘分区 ### + +4、 现在在 `/dev/sdb`, `/dev/sdc`, `/dev/sdd` 和 `/dev/sde`上为 RAID 创建分区,使用下面的 fdisk 命令。在这里,我们将展示如何在 sdb 磁盘创建分区,同样的步骤也适用于其他分区。 + +**创建 /dev/sdb 分区** + + # fdisk /dev/sdb + +请按照说明进行操作,如下图所示创建分区。 + +- 按 `n`创建新的分区。 +- 然后按 `P` 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 `P` 来打印创建好的分区。 +- 按 `L`,列出所有可用的类型。 +- 按 `t` 去修改分区。 +- 键入 `fd` 设置为 Linux 的 RAID 类型,然后按回车确认。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 + +![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition.png) + +*创建 /dev/sdb 分区* + +**创建 /dev/sdc 分区** + + # fdisk /dev/sdc + +![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition.png) + +*创建 /dev/sdc 分区* + +**创建 /dev/sdd 分区** + + # fdisk /dev/sdd + +![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition.png) + +*创建 /dev/sdd 分区* + +**创建 /dev/sde 分区** + + # fdisk /dev/sde + +![Create sde Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sde-Partition.png) + +*创建 /dev/sde 分区* + +5、 创建好分区后,检查磁盘的 super-blocks 是个好的习惯。如果 super-blocks 不存在我们可以按前面的创建一个新的 RAID。 + + # mdadm -E /dev/sd[b-e]1 + # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 # 或 + +![Check Raid on New Partitions](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Partitions.png) + +*在新分区中检查 RAID * + +### 步骤3:创建 md 设备(RAID) ### + +6、 现在可以使用以下命令创建 RAID 设备`md0` (即 /dev/md0),并在所有新创建的分区中应用 RAID 级别,然后确认 RAID 设置。 + + # mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 + # cat /proc/mdstat + +![Create Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Raid-6-Device.png) + +*创建 Raid 6 设备* + +7、 你还可以使用 watch 命令来查看当前创建 RAID 的进程,如下图所示。 + + # watch -n1 cat /proc/mdstat + +![Check Raid 6 
Process](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Process.png) + +*检查 RAID 6 创建过程* + +8、 使用以下命令验证 RAID 设备。 + + # mdadm -E /dev/sd[b-e]1 + +**注意**::上述命令将显示四个磁盘的信息,这是相当长的,所以没有截取其完整的输出。 + +9、 接下来,验证 RAID 阵列,以确认重新同步过程已经开始。 + + # mdadm --detail /dev/md0 + +![Check Raid 6 Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Array.png) + +*检查 Raid 6 阵列* + +### 第4步:在 RAID 设备上创建文件系统 ### + +10、 使用 ext4 为`/dev/md0`创建一个文件系统,并将它挂载在 /mnt/raid6 。这里我们使用的是 ext4,但你可以根据你的选择使用任意类型的文件系统。 + + # mkfs.ext4 /dev/md0 + +![Create File System on Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Create-File-System-on-Raid.png) + +*在 RAID 6 上创建文件系统* + +11、 将创建的文件系统挂载到 /mnt/raid6,并验证挂载点下的文件,我们可以看到 lost+found 目录。 + + # mkdir /mnt/raid6 + # mount /dev/md0 /mnt/raid6/ + # ls -l /mnt/raid6/ + +12、 在挂载点下创建一些文件,在任意文件中添加一些文字并验证其内容。 + + # touch /mnt/raid6/raid6_test.txt + # ls -l /mnt/raid6/ + # echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt + # cat /mnt/raid6/raid6_test.txt + +![Verify Raid Content](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Content.png) + +*验证 RAID 内容* + +13、 在 /etc/fstab 中添加以下条目使系统启动时自动挂载设备,操作系统环境不同挂载点可能会有所不同。 + + # vim /etc/fstab + + /dev/md0 /mnt/raid6 ext4 defaults 0 0 + +![Automount Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Automount-Raid-Device.png) + +*自动挂载 RAID 6 设备* + +14、 接下来,执行`mount -a`命令来验证 fstab 中的条目是否有错误。 + + # mount -av + +![Verify Raid Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Automount-Raid-Devices.png) + +*验证 RAID 是否自动挂载* + +### 第5步:保存 RAID 6 的配置 ### + +15、 请注意,默认情况下 RAID 没有配置文件。我们需要使用以下命令手动保存它,然后检查设备`/dev/md0`的状态。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + # cat /etc/mdadm.conf + # mdadm --detail /dev/md0 + +![Save Raid 6 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png) + +*保存 RAID 6 配置* + +![Check Raid 6 Status](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png) + +*检查 RAID 6 状态* + +### 第6步:添加备用磁盘 ### + +16、 现在,已经使用了4个磁盘,并且其中两个作为奇偶校验信息来使用。在某些情况下,如果任意一个磁盘出现故障,我们仍可以得到数据,因为在 RAID 6 使用双奇偶校验。 + +如果第二个磁盘也出现故障,在第三块磁盘损坏前我们可以添加一个​​新的。可以在创建 RAID 集时加入一个备用磁盘,但我在创建 RAID 集合前没有定义备用的磁盘。不过,我们可以在磁盘损坏后或者创建 RAID 集合时添加一块备用磁盘。现在,我们已经创建好了 RAID,下面让我演示如何添加备用磁盘。 + +为了达到演示的目的,我已经热插入了一个新的 HDD 磁盘(即 /dev/sdf),让我们来验证接入的磁盘。 + + # ls -l /dev/ | grep sd + +![Check New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-New-Disk.png) + +*检查新磁盘* + +17、 现在再次确认新连接的磁盘没有配置过 RAID ,使用 mdadm 来检查。 + + # mdadm --examine /dev/sdf + +![Check Raid on New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Disk.png) + +*在新磁盘中检查 RAID* + +**注意**: 像往常一样,我们早前已经为四个磁盘创建了分区,同样,我们使用 fdisk 命令为新插入的磁盘创建新分区。 + + # fdisk /dev/sdf + +![Create sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Partition-on-sdf.png) + +*为 /dev/sdf 创建分区* + +18、 在 /dev/sdf 创建新的分区后,在新分区上确认没有 RAID,然后将备用磁盘添加到 RAID 设备 /dev/md0 中,并验证添加的设备。 + + # mdadm --examine /dev/sdf + # mdadm --examine /dev/sdf1 + # mdadm --add /dev/md0 /dev/sdf1 + # mdadm --detail /dev/md0 + +![Verify Raid on sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-on-sdf.png) + +*在 sdf 分区上验证 Raid* + +![Add sdf Partition to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Add-sdf-Partition-to-Raid.png) + +*添加 sdf 分区到 RAID * + +![Verify sdf Partition Details](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-sdf-Details.png) + +*验证 sdf 分区信息* + +### 第7步:检查 RAID 6 容错 ### + +19、 
现在,让我们检查备用驱动器是否能自动工作,当我们阵列中的任何一个磁盘出现故障时。为了测试,我将一个磁盘手工标记为故障设备。 + +在这里,我们标记 /dev/sdd1 为故障磁盘。 + + # mdadm --manage --fail /dev/md0 /dev/sdd1 + +![Check Raid 6 Fault Tolerance](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Failover.png) + +*检查 RAID 6 容错* + +20、 让我们查看 RAID 的详细信息,并检查备用磁盘是否开始同步。 + + # mdadm --detail /dev/md0 + +![Check Auto Raid Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Auto-Raid-Syncing.png) + +*检查 RAID 自动同步* + +**哇塞!** 这里,我们看到备用磁盘激活了,并开始重建进程。在底部,我们可以看到有故障的磁盘 /dev/sdd1 标记为 faulty。可以使用下面的命令查看进程重建。 + + # cat /proc/mdstat + +![Raid 6 Auto Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-6-Auto-Syncing.png) + +*RAID 6 自动同步* + +### 结论: ### + +在这里,我们看到了如何使用四个磁盘设置 RAID 6。这种 RAID 级别是具有高冗余的昂贵设置之一。在接下来的文章中,我们将看到如何建立一个嵌套的 RAID 10 甚至更多。请继续关注。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid-6-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:https://linux.cn/article-6085-1.html +[2]:https://linux.cn/article-6087-1.html +[3]:https://linux.cn/article-6093-1.html +[4]:https://linux.cn/article-6102-1.html diff --git a/published/201508/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md b/published/201508/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md new file mode 100644 index 0000000000..c0b03f3dba --- /dev/null +++ b/published/201508/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md @@ -0,0 +1,275 @@ +在 Linux 下使用 RAID(六):设置 RAID 10 或 1 + 0(嵌套) +================================================================================ + +RAID 10 是组合 RAID 1 和 RAID 0 形成的。要设置 RAID 10,我们至少需要4个磁盘。在之前的文章中,我们已经看到了如何使用最少两个磁盘设置 RAID 1 和 RAID 0。 + +在这里,我们将使用最少4个磁盘组合 RAID 1 和 RAID 0 来设置 RAID 10。假设我们已经在用 RAID 10 创建的逻辑卷保存了一些数据。比如我们要保存数据 “TECMINT”,它将使用以下方法将其保存在4个磁盘中。 + +![Create Raid 10 in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/raid10.jpg) + +*在 Linux 中创建 Raid 10(LCTT 译注:此图有误,请参照文字说明和本系列第一篇文章)* + +RAID 10 是先做镜像,再做条带。因此,在 RAID 1 中,相同的数据将被写入到两个磁盘中,“T”将同时被写入到第一和第二个磁盘中。接着的数据被条带化到另外两个磁盘,“E”将被同时写入到第三和第四个磁盘中。它将继续循环此过程,“C”将同时被写入到第一和第二个磁盘,以此类推。 + +(LCTT 译注:原文中此处描述混淆有误,已经根据实际情况进行修改。) + +现在你已经了解 RAID 10 怎样组合 RAID 1 和 RAID 0 来工作的了。如果我们有4个20 GB 的磁盘,总共为 80 GB,但我们将只能得到40 GB 的容量,另一半的容量在构建 RAID 10 中丢失。 + +#### RAID 10 的优点和缺点 #### + +- 提供更好的性能。 +- 在 RAID 10 中我们将失去一半的磁盘容量。 +- 读与写的性能都很好,因为它会同时进行写入和读取。 +- 它能解决数据库的高 I/O 磁盘写操作。 + +#### 要求 #### + +在 RAID 10 中,我们至少需要4个磁盘,前2个磁盘为 RAID 1,其他2个磁盘为 RAID 0,就像我之前说的,RAID 10 仅仅是组合了 RAID 0和1。如果我们需要扩展 RAID 组,最少需要添加4个磁盘。 + +**我的服务器设置** + + 操作系统 : CentOS 6.5 Final + IP 地址 : 192.168.0.229 + 主机名 : rd10.tecmintlocal.com + 磁盘 1 [20GB] : /dev/sdd + 磁盘 2 [20GB] : /dev/sdc + 磁盘 3 [20GB] : /dev/sdd + 磁盘 4 [20GB] : /dev/sde + +有两种方法来设置 RAID 10,在这里两种方法我都会演示,但我更喜欢第一种方法,使用它来设置 RAID 10 更简单。 + +### 方法1:设置 RAID 10 ### + +1、 首先,使用以下命令确认所添加的4块磁盘没有被使用。 + + # ls -l /dev | grep sd + +2、 四个磁盘被检测后,然后来检查磁盘是否存在 RAID 分区。 + + # mdadm -E /dev/sd[b-e] + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde # 或 + +![Verify 4 Added Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-4-Added-Disks.png) + +*验证添加的4块磁盘* + +**注意**: 在上面的输出中,如果没有检测到 super-block 意味着在4块磁盘中没有定义过 RAID。 + +#### 第1步:为 RAID 分区 #### + +3、 现在,使用`fdisk`,命令为4个磁盘(/dev/sdb, /dev/sdc, /dev/sdd 和 /dev/sde)创建新分区。 + + # fdisk /dev/sdb + # fdisk /dev/sdc + # fdisk /dev/sdd 
+ # fdisk /dev/sde + +#####为 /dev/sdb 创建分区##### + +我来告诉你如何使用 fdisk 为磁盘(/dev/sdb)进行分区,此步也适用于其他磁盘。 + + # fdisk /dev/sdb + +请使用以下步骤为 /dev/sdb 创建一个新的分区。 + +- 按 `n` 创建新的分区。 +- 然后按 `P` 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 `P` 来打印创建好的分区。 +- 按 `L`,列出所有可用的类型。 +- 按 `t` 去修改分区。 +- 键入 `fd` 设置为 Linux 的 RAID 类型,然后按 Enter 确认。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 + +![Disk sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-sdb-Partition.png) + +*为磁盘 sdb 分区* + +**注意**: 请使用上面相同的指令对其他磁盘(sdc, sdd sdd sde)进行分区。 + +4、 创建好4个分区后,需要使用下面的命令来检查磁盘是否存在 raid。 + + # mdadm -E /dev/sd[b-e] + # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde # 或 + + # mdadm -E /dev/sd[b-e]1 + # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 # 或 + +![Check All Disks for Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Check-All-Disks-for-Raid.png) + +*检查磁盘* + +**注意**: 以上输出显示,新创建的四个分区中没有检测到 super-block,这意味着我们可以继续在这些磁盘上创建 RAID 10。 + +#### 第2步: 创建 RAID 设备 `md` #### + +5、 现在该创建一个`md`(即 /dev/md0)设备了,使用“mdadm” raid 管理工具。在创建设备之前,必须确保系统已经安装了`mdadm`工具,如果没有请使用下面的命令来安装。 + + # yum install mdadm [在 RedHat 系统] + # apt-get install mdadm [在 Debain 系统] + +`mdadm`工具安装完成后,可以使用下面的命令创建一个`md` raid 设备。 + + # mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 + +6、 接下来使用`cat`命令验证新创建的 raid 设备。 + + # cat /proc/mdstat + +![Create md raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-raid-Device.png) + +*创建 md RAID 设备* + +7、 接下来,使用下面的命令来检查4个磁盘。下面命令的输出会很长,因为它会显示4个磁盘的所有信息。 + + # mdadm --examine /dev/sd[b-e]1 + +8、 接下来,使用以下命令来查看 RAID 阵列的详细信息。 + + # mdadm --detail /dev/md0 + +![Check Raid Array Details](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Array-Details.png) + +*查看 RAID 阵列详细信息* + +**注意**: 你在上面看到的结果,该 RAID 的状态是 active 和re-syncing。 + +#### 第3步:创建文件系统 #### + +9、 使用 ext4 作为`md0′的文件系统,并将它挂载到`/mnt/raid10`下。在这里,我用的是 ext4,你可以使用你想要的文件系统类型。 + + # mkfs.ext4 /dev/md0 + +![Create md Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-Filesystem.png) + +*创建 md 文件系统* + +10、 在创建文件系统后,挂载文件系统到`/mnt/raid10`下,并使用`ls -l`命令列出挂载点下的内容。 + + # mkdir /mnt/raid10 + # mount /dev/md0 /mnt/raid10/ + # ls -l /mnt/raid10/ + +接下来,在挂载点下创建一些文件,并在文件中添加些内容,然后检查内容。 + + # touch /mnt/raid10/raid10_files.txt + # ls -l /mnt/raid10/ + # echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt + # cat /mnt/raid10/raid10_files.txt + +![Mount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-md-Device.png) + +*挂载 md 设备* + +11、 要想自动挂载,打开`/etc/fstab`文件并添加下面的条目,挂载点根据你环境的不同来添加。使用 wq! 
保存并退出。 + + # vim /etc/fstab + + /dev/md0 /mnt/raid10 ext4 defaults 0 0 + +![AutoMount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/AutoMount-md-Device.png) + +*挂载 md 设备* + +12、 接下来,在重新启动系统前使用`mount -a`来确认`/etc/fstab`文件是否有错误。 + + # mount -av + +![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Errors-in-Fstab.png) + +*检查 Fstab 中的错误* + +#### 第四步:保存 RAID 配置 #### + +13、 默认情况下 RAID 没有配置文件,所以我们需要在上述步骤完成后手动保存它。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + +![Save Raid10 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid10-Configuration.png) + +*保存 RAID10 的配置* + +就这样,我们使用方法1创建完了 RAID 10,这种方法是比较容易的。现在,让我们使用方法2来设置 RAID 10。 + +### 方法2:创建 RAID 10 ### + +1、 在方法2中,我们必须定义2组 RAID 1,然后我们需要使用这些创建好的 RAID 1 的集合来定义一个 RAID 0。在这里,我们将要做的是先创建2个镜像(RAID1),然后创建 RAID0 (条带化)。 + +首先,列出所有的可用于创建 RAID 10 的磁盘。 + + # ls -l /dev | grep sd + +![List 4 Devices](http://www.tecmint.com/wp-content/uploads/2014/11/List-4-Devices.png) + +*列出了 4 个设备* + +2、 将4个磁盘使用`fdisk`命令进行分区。对于如何分区,您可以按照上面的第1步。 + + # fdisk /dev/sdb + # fdisk /dev/sdc + # fdisk /dev/sdd + # fdisk /dev/sde + +3、 在完成4个磁盘的分区后,现在检查磁盘是否存在 RAID块。 + + # mdadm --examine /dev/sd[b-e] + # mdadm --examine /dev/sd[b-e]1 + +![Examine 4 Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-4-Disks.png) + +*检查 4 个磁盘* + +#### 第1步:创建 RAID 1 #### + +4、 首先,使用4块磁盘创建2组 RAID 1,一组为`sdb1′和 `sdc1′,另一组是`sdd1′ 和 `sde1′。 + + # mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1 + # mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1 + # cat /proc/mdstat + +![Creating Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png) + +*创建 RAID 1* + +![Check Details of Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png) + +*查看 RAID 1 的详细信息* + +#### 第2步:创建 RAID 0 #### + +5、 接下来,使用 md1 和 md2 来创建 RAID 0。 + + # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2 + # cat /proc/mdstat + +![Creating Raid 0](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-0.png) + +*创建 RAID 0* + +#### 第3步:保存 RAID 配置 #### + +6、 我们需要将配置文件保存在`/etc/mdadm.conf`文件中,使其每次重新启动后都能加载所有的 RAID 设备。 + + # mdadm --detail --scan --verbose >> /etc/mdadm.conf + +在此之后,我们需要按照方法1中的第3步来创建文件系统。 + +就是这样!我们采用的方法2创建完了 RAID 1+0。我们将会失去一半的磁盘空间,但相比其他 RAID ,它的性能将是非常好的。 + +### 结论 ### + +在这里,我们采用两种方法创建 RAID 10。RAID 10 具有良好的性能和冗余性。希望这篇文章可以帮助你了解 RAID 10 嵌套 RAID。在后面的文章中我们会看到如何扩展现有的 RAID 阵列以及更多精彩的内容。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/create-raid-10-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ diff --git a/published/201508/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md b/published/201508/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md new file mode 100644 index 0000000000..3376376a2a --- /dev/null +++ b/published/201508/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md @@ -0,0 +1,182 @@ +在 Linux 下使用 RAID(七):在 Raid 中扩展现有的 RAID 阵列和删除故障的磁盘 +================================================================================ + +每个新手都会对阵列(array)这个词所代表的意思产生疑惑。阵列只是磁盘的一个集合。换句话说,我们可以称阵列为一个集合(set)或一组(group)。就像一组鸡蛋中包含6个一样。同样 RAID 
阵列中包含着多个磁盘,可能是2,4,6,8,12,16等,希望你现在知道了什么是阵列。 + +在这里,我们将看到如何扩展现有的阵列或 RAID 组。例如,如果我们在阵列中使用2个磁盘形成一个 raid 1 集合,在某些情况,如果该组中需要更多的空间,就可以使用 mdadm -grow 命令来扩展阵列大小,只需要将一个磁盘加入到现有的阵列中即可。在说完扩展(添加磁盘到现有的阵列中)后,我们将看看如何从阵列中删除故障的磁盘。 + +![Grow Raid Array in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Growing-Raid-Array.jpg) + +*扩展 RAID 阵列和删除故障的磁盘* + +假设磁盘中的一个有问题了需要删除该磁盘,但我们需要在删除磁盘前添加一个备用磁盘来扩展该镜像,因为我们需要保存我们的数据。当磁盘发生故障时我们需要从阵列中删除它,这是这个主题中我们将要学习到的。 + +#### 扩展 RAID 的特性 #### + +- 我们可以增加(扩展)任意 RAID 集合的大小。 +- 我们可以在使用新磁盘扩展 RAID 阵列后删除故障的磁盘。 +- 我们可以扩展 RAID 阵列而无需停机。 + +####要求 #### + +- 为了扩展一个RAID阵列,我们需要一个已有的 RAID 组(阵列)。 +- 我们需要额外的磁盘来扩展阵列。 +- 在这里,我们使用一块磁盘来扩展现有的阵列。 + +在我们了解扩展和恢复阵列前,我们必须了解有关 RAID 级别和设置的基本知识。点击下面的链接了解这些。 + +- [介绍 RAID 的级别和概念][1] +- [使用 mdadm 工具创建软件 RAID 0 (条带化)][2] + +#### 我的服务器设置 #### + + 操作系统 : CentOS 6.5 Final +  IP地址 : 192.168.0.230 +  主机名 : grow.tecmintlocal.com + 2 块现有磁盘 : 1 GB + 1 块额外磁盘 : 1 GB + +在这里,我们已有一个 RAID ,有2块磁盘,每个大小为1GB,我们现在再增加一个磁盘到我们现有的 RAID 阵列中,其大小为1GB。 + +### 扩展现有的 RAID 阵列 ### + +1、 在扩展阵列前,首先使用下面的命令列出现有的 RAID 阵列。 + + # mdadm --detail /dev/md0 + +![Check Existing Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Existing-Raid-Array.png) + +*检查现有的 RAID 阵列* + +**注意**: 以上输出显示,已经有了两个磁盘在 RAID 阵列中,级别为 RAID 1。现在我们增加一个磁盘到现有的阵列里。 + +2、 现在让我们添加新的磁盘“sdd”,并使用`fdisk`命令来创建分区。 + + # fdisk /dev/sdd + +请使用以下步骤为 /dev/sdd 创建一个新的分区。 + +- 按 `n` 创建新的分区。 +- 然后按 `P` 选择主分区。 +- 接下来选择分区号为1。 +- 只需按两次回车键选择默认值即可。 +- 然后,按 `P` 来打印创建好的分区。 +- 按 `L`,列出所有可用的类型。 +- 按 `t` 去修改分区。 +- 键入 `fd` 设置为 Linux 的 RAID 类型,然后按回车确认。 +- 然后再次使用`p`查看我们所做的更改。 +- 使用`w`保存更改。 + +![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Create-New-sdd-Partition.png) + +*为 sdd 创建新的分区* + +3、 一旦新的 sdd 分区创建完成后,你可以使用下面的命令验证它。 + + # ls -l /dev/ | grep sd + +![Confirm sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-sdd-Partition.png) + +*确认 sdd 分区* + +4、 接下来,在添加到阵列前先检查磁盘是否有 RAID 分区。 + + # mdadm --examine /dev/sdd1 + +![Check Raid on sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-sdd-Partition.png) + +*在 sdd 分区中检查 RAID* + +**注意**:以上输出显示,该盘有没有发现 super-blocks,意味着我们可以将新的磁盘添加到现有阵列。 + +5、 要添加新的分区 /dev/sdd1 到现有的阵列 md0,请使用以下命令。 + + # mdadm --manage /dev/md0 --add /dev/sdd1 + +![Add Disk To Raid-Array](http://www.tecmint.com/wp-content/uploads/2014/11/Add-Disk-To-Raid-Array.png) + +*添加磁盘到 RAID 阵列* + +6、 一旦新的磁盘被添加后,在我们的阵列中检查新添加的磁盘。 + + # mdadm --detail /dev/md0 + +![Confirm Disk Added to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Disk-Added-To-Raid.png) + +*确认将新磁盘添加到 RAID 中* + +**注意**: 在上面的输出,你可以看到磁盘已经被添加作为备用的。在这里,我们的阵列中已经有了2个磁盘,但我们期待阵列中有3个磁盘,因此我们需要扩展阵列。 + +7、 要扩展阵列,我们需要使用下面的命令。 + + # mdadm --grow --raid-devices=3 /dev/md0 + +![Grow Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Raid-Array.png) + +*扩展 Raid 阵列* + +现在我们可以看到第三块磁盘(sdd1)已被添加到阵列中,在第三块磁盘被添加后,它将从另外两块磁盘上同步数据。 + + # mdadm --detail /dev/md0 + +![Confirm Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Raid-Array.png) + +*确认 Raid 阵列* + +**注意**: 对于大容量磁盘会需要几个小时来同步数据。在这里,我们使用的是1GB的虚拟磁盘,所以它非常快在几秒钟内便会完成。 + +### 从阵列中删除磁盘 ### + +8、 在数据被从其他两个磁盘同步到新磁盘`sdd1`后,现在三个磁盘中的数据已经相同了(镜像)。 + +正如我前面所说的,假定一个磁盘出问题了需要被删除。所以,现在假设磁盘`sdc1`出问题了,需要从现有阵列中删除。 + +在删除磁盘前我们要将其标记为失效,然后我们才可以将其删除。 + + # mdadm --fail /dev/md0 /dev/sdc1 + # mdadm --detail /dev/md0 + +![Disk Fail in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-Fail-in-Raid-Array.png) + +*在 RAID 阵列中模拟磁盘故障* + +从上面的输出中,我们清楚地看到,磁盘在下面被标记为 faulty。即使它是 faulty 的,我们仍然可以看到 raid 设备有3个,1个损坏了,状态是 
degraded。 + +现在我们要从阵列中删除 faulty 的磁盘,raid 设备将像之前一样继续有2个设备。 + + # mdadm --remove /dev/md0 /dev/sdc1 + +![Remove Disk in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Remove-Disk-in-Raid-Array.png) + +*在 Raid 阵列中删除磁盘* + +9、 一旦故障的磁盘被删除,然后我们只能使用2个磁盘来扩展 raid 阵列了。 + + # mdadm --grow --raid-devices=2 /dev/md0 + # mdadm --detail /dev/md0 + +![Grow Disks in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Disks-in-Raid-Array.png) + +*在 RAID 阵列扩展磁盘* + +从上面的输出中可以看到,我们的阵列中仅有2台设备。如果你需要再次扩展阵列,按照如上所述的同样步骤进行。如果你需要添加一个磁盘作为备用,将其标记为 spare,因此,如果磁盘出现故障时,它会自动顶上去并重建数据。 + +### 结论 ### + +在这篇文章中,我们已经看到了如何扩展现有的 RAID 集合,以及如何在重新同步已有磁盘的数据后从一个阵列中删除故障磁盘。所有这些步骤都可以不用停机来完成。在数据同步期间,系统用户,文件和应用程序不会受到任何影响。 + +在接下来的文章我将告诉你如何管理 RAID,敬请关注更新,不要忘了写评论。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/grow-raid-array-in-linux/ + +作者:[Babin Lonston][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/babinlonston/ +[1]:https://linux.cn/article-6085-1.html +[2]:https://linux.cn/article-6087-1.html diff --git a/published/201508/kde-plasma-5.4.md b/published/201508/kde-plasma-5.4.md new file mode 100644 index 0000000000..6d5b77bfd0 --- /dev/null +++ b/published/201508/kde-plasma-5.4.md @@ -0,0 +1,109 @@ +KDE Plasma 5.4.0 发布,八月特色版 +============================= + +![Plasma 5.4](https://www.kde.org/announcements/plasma-5.4/plasma-screen-desktop-2-shadow.png) + +2015 年 8 月 25 ,星期二,KDE 发布了 Plasma 5 的一个特色新版本。 + +此版本为我们带来了许多非常棒的感受,如优化了对高分辨率的支持,KRunner 自动补全和一些新的 Breeze 漂亮图标。这还为不久以后的技术预览版的 Wayland 桌面奠定了基础。我们还带来了几个新组件,如声音音量部件,显示器校准工具和测试版的用户管理工具。 + +###新的音频音量程序 + +![The new Audio Volume Applet](https://www.kde.org/announcements/plasma-5.4/plasma-screen-audiovolume-shadows.png) + +新的音频音量程序直接工作于 PulseAudio (Linux 上一个非常流行的音频服务) 之上 ,并且在一个漂亮的简约的界面提供一个完整的音量控制和输出设定。 + +###替代的应用控制面板起动器 + +![he new Dashboard alternative launcher](https://www.kde.org/announcements/plasma-5.4/plasma-screen-dashboard-2-shadow.png) + +Plasma 5.4 在 kdeplasma-addons 软件包中提供了一个全新的全屏的应用控制面板,它具有应用菜单的所有功能,还支持缩放和全空间键盘导航。新的起动器可以像你目前所用的“最近使用的”或“收藏的文档和联系人”一样简单和快速地查找应用。 + +###丰富的艺术图标 + +![Just some of the new icons in this release](https://kver.files.wordpress.com/2015/07/image10430.png) + +Plasma 5.4 提供了超过 1400 个的新图标,其中不仅包含 KDE 程序的,而且还为 Inkscape, Firefox 和 Libreoffice 提供 Breeze 主题的艺术图标,可以体验到更加一致和本地化的感觉。 + +###KRunner 历史记录 + +![KRunner](https://www.kde.org/announcements/plasma-5.4/plasma-screen-krunner-shadow.png) + +KRunner 现在可以记住之前的搜索历史并根据历史记录进行自动补全。 + +###Network 程序中实用的图形展示 + +![Network Graphs](https://www.kde.org/announcements/plasma-5.4/plasma-screen-nm-graph-shadow.png) + +Network 程序现在可以以图形形式显示网络流量了,同时也支持两个新的 VPN 插件:通过 SSH 连接或通过 SSTP 连接。 + +###Wayland 技术预览 + +随着 Plasma 5.4 ,Wayland 桌面发布了第一个技术预览版。在使用自由图形驱动(free graphics drivers)的系统上可以使用 KWin(Plasma 的 Wayland 合成器和 X11 窗口管理器)通过[内核模式设定][1]来运行 Plasma。现在已经支持的功能需求来自于[手机 Plasma 项目][2],更多的面向桌面的功能还未被完全实现。现在还不能作为替换那些基于 Xorg 的桌面,但可以轻松地对它测试和贡献,以及观看令人激动视频。有关如何在 Wayland 中使用 Plasma 的介绍请到:[KWin 维基页][3]。Wlayland 将随着我们构建的稳定版本而逐步得到改进。 + +###其他的改变和添加 + + - 优化对高 DPI 支持 + - 更少的内存占用 + - 桌面搜索使用了更快的新后端 + - 便笺增加拖拉支持和键盘导航 + - 回收站重新支持拖拉 + - 系统托盘获得更快的可配置性 + - 文档重新修订和更新 + - 优化了窄小面板上的数字时钟的布局 + - 数字时钟支持 ISO 日期 + - 切换数字时钟 12/24 格式更简单 + - 日历显示第几周 + - 任何项目都可以收藏进应用菜单(Kicker),支持收藏文档和 Telepathy 联系人 + - Telepathy 联系人收藏可以展示联系人的照片和实时状态 + - 优化程序与容器间的焦点和激活处理 + - 文件夹视图中各种小修复:更好的默认尺寸,鼠标交互问题以及文本标签换行 + - 
任务管理器更好的呈现起动器默认的应用图标 + - 可再次通过将程序拖入任务管理器来添加启动器 + - 可配置中间键点击在任务管理器中的行为:无动作,关闭窗口,启动一个相同的程序的新实例 + - 任务管理器现在以列排序优先,无论用户是否更倾向于行优先;许多用户更喜欢这样排序是因为它会使更少的任务按钮像窗口一样移来移去 + - 优化任务管理器的图标和缩放边 + - 任务管理器中各种小修复:垂直下拉,触摸事件处理现在支持所有系统,组扩展箭头的视觉问题 + - 提供可用的目的框架技术预览版,可以使用 QuickShare Plasmoid,它可以让许多 web 服务分享文件更容易 + - 增加了显示器配置工具 + - 增加的 kwallet-pam 可以在登录时打开 wallet + - 用户管理器现在会同步联系人到 KConfig 的设置中,用户账户模块被丢弃了 + - 应用程序菜单(Kicker)的性能得到改善 + - 应用程序菜单(Kicker)各种小修复:隐藏/显示程序更加可靠,顶部面板的对齐修复,文件夹视图中 “添加到桌面”更加可靠,在基于 KActivities 的最新模块中有更好的表现 + - 支持自定义菜单布局 (kmenuedit)和应用程序菜单(Kicker)支持菜单项目分隔 + - 当在面板中时,改进了文件夹视图,参见 [blog][4] + - 将文件夹拖放到桌面容器现在会再次创建一个文件夹视图 + +[完整的 Plasma 5.4 变更日志在此](https://www.kde.org/announcements/plasma-5.3.2-5.4.0-changelog.php) + +###Live 镜像 + +尝鲜的最简单的方式就是从 U 盘中启动,可以在 KDE 社区 Wiki 中找到 各种 [带有 Plasma 5 的 Live 镜像][5]。 + +###下载软件包 + +各发行版已经构建了软件包,或者正在构建,wiki 中的列出了各发行版的软件包名:[软件包下载维基页][6]。 + +###源码下载 + +可以直接从源码中安装 Plasma 5。KDE 社区 Wiki 已经介绍了[怎样编译][7]。 + +注意,Plasma 5 与 Plasma 4 不兼容,必须先卸载旧版本,或者安装到不同的前缀处。 + + +- [源代码信息页][8] + +--- + +via: https://www.kde.org/announcements/plasma-5.4.0.php + +译者:[Locez](http://locez.com) 校对:[wxy](http://github.com/wxy) + +[1]:https://en.wikipedia.org/wiki/Direct_Rendering_Manager +[2]:https://dot.kde.org/2015/07/25/plasma-mobile-free-mobile-platform +[3]:https://community.kde.org/KWin/Wayland#Start_a_Plasma_session_on_Wayland +[4]:https://blogs.kde.org/2015/06/04/folder-view-panel-popups-are-list-views-again +[5]:https://community.kde.org/Plasma/LiveImages +[6]:https://community.kde.org/Plasma/Packages +[7]:http://community.kde.org/Frameworks/Building +[8]:https://www.kde.org/info/plasma-5.4.0.php \ No newline at end of file diff --git a/published/20150803 Managing Linux Logs.md b/published/20150803 Managing Linux Logs.md new file mode 100644 index 0000000000..dca518e531 --- /dev/null +++ b/published/20150803 Managing Linux Logs.md @@ -0,0 +1,418 @@ +Linux 日志管理指南 +================================================================================ + +管理日志的一个最好做法是将你的日志集中或整合到一个地方,特别是在你有许多服务器或多层级架构时。我们将告诉你为什么这是一个好主意,然后给出如何更容易的做这件事的一些小技巧。 + +### 集中管理日志的好处 ### + +如果你有很多服务器,查看某个日志文件可能会很麻烦。现代的网站和服务经常包括许多服务器层级、分布式的负载均衡器,等等。找到正确的日志将花费很长时间,甚至要花更长时间在登录服务器的相关问题上。没什么比发现你找的信息没有被保存下来更沮丧的了,或者本该保留的日志文件正好在重启后丢失了。 + +集中你的日志使它们查找更快速,可以帮助你更快速的解决产品问题。你不用猜测那个服务器存在问题,因为所有的日志在同一个地方。此外,你可以使用更强大的工具去分析它们,包括日志管理解决方案。一些解决方案能[转换纯文本日志][1]为一些字段,更容易查找和分析。 + +集中你的日志也可以使它们更易于管理: + +- 它们更安全,当它们备份归档到一个单独区域时会有意无意地丢失。如果你的服务器宕机或者无响应,你可以使用集中的日志去调试问题。 +- 你不用担心ssh或者低效的grep命令在陷入困境的系统上需要更多的资源。 +- 你不用担心磁盘占满,这个能让你的服务器死机。 +- 你能保持你的产品服务器的安全性,只是为了查看日志无需给你所有团队登录权限。给你的团队从日志集中区域访问日志权限更安全。 + +随着集中日志管理,你仍需处理由于网络联通性不好或者耗尽大量网络带宽从而导致不能传输日志到中心区域的风险。在下面的章节我们将要讨论如何聪明的解决这些问题。 + +### 流行的日志归集工具 ### + +在 Linux 上最常见的日志归集是通过使用 syslog 守护进程或者日志代理。syslog 守护进程支持本地日志的采集,然后通过syslog 协议传输日志到中心服务器。你可以使用很多流行的守护进程来归集你的日志文件: + +- [rsyslog][2] 是一个轻量后台程序,在大多数 Linux 分支上已经安装。 +- [syslog-ng][3] 是第二流行的 Linux 系统日志后台程序。 +- [logstash][4] 是一个重量级的代理,它可以做更多高级加工和分析。 +- [fluentd][5] 是另一个具有高级处理能力的代理。 + +Rsyslog 是集中日志数据最流行的后台程序,因为它在大多数 Linux 分支上是被默认安装的。你不用下载或安装它,并且它是轻量的,所以不需要占用你太多的系统资源。 + +如果你需要更多先进的过滤或者自定义分析功能,如果你不在乎额外的系统负载,Logstash 是另一个最流行的选择。 + +### 配置 rsyslog.conf ### + +既然 rsyslog 是最广泛使用的系统日志程序,我们将展示如何配置它为日志中心。它的全局配置文件位于 /etc/rsyslog.conf。它加载模块,设置全局指令,和包含位于目录 /etc/rsyslog.d 中的应用的特有的配置。目录中包含的 /etc/rsyslog.d/50-default.conf 指示 rsyslog 将系统日志写到文件。在 [rsyslog 文档][6]中你可以阅读更多相关配置。 + +rsyslog 配置语言是是[RainerScript][7]。你可以给日志指定输入,就像将它们输出到另外一个位置一样。rsyslog 已经配置标准输入默认是 syslog ,所以你通常只需增加一个输出到你的日志服务器。这里有一个 rsyslog 输出到一个外部服务器的配置例子。在本例中,**BEBOP** 是一个服务器的主机名,所以你应该替换为你的自己的服务器名。 + + action(type="omfwd" 
protocol="tcp" target="BEBOP" port="514") + +你可以发送你的日志到一个有足够的存储容量的日志服务器来存储,提供查询,备份和分析。如果你存储日志到文件系统,那么你应该建立[日志轮转][8]来防止你的磁盘爆满。 + +作为一种选择,你可以发送这些日志到一个日志管理方案。如果你的解决方案是安装在本地你可以发送到系统文档中指定的本地主机和端口。如果你使用基于云提供商,你将发送它们到你的提供商特定的主机名和端口。 + +### 日志目录 ### + +你可以归集一个目录或者匹配一个通配符模式的所有文件。nxlog 和 syslog-ng 程序支持目录和通配符(*)。 + +常见的 rsyslog 不能直接监控目录。作为一种解决办法,你可以设置一个定时任务去监控这个目录的新文件,然后配置 rsyslog 来发送这些文件到目的地,比如你的日志管理系统。举个例子,日志管理提供商 Loggly 有一个开源版本的[目录监控脚本][9]。 + +### 哪个协议: UDP、TCP 或 RELP? ### + +当你使用网络传输数据时,有三个主流协议可以选择。UDP 在你自己的局域网是最常用的,TCP 用在互联网。如果你不能失去(任何)日志,就要使用更高级的 RELP 协议。 + +[UDP][10] 发送一个数据包,那只是一个单一的信息包。它是一个只外传的协议,所以它不会发送给你回执(ACK)。它只尝试发送包。当网络拥堵时,UDP 通常会巧妙的降级或者丢弃日志。它通常使用在类似局域网的可靠网络。 + +[TCP][11] 通过多个包和返回确认发送流式信息。TCP 会多次尝试发送数据包,但是受限于 [TCP 缓存][12]的大小。这是在互联网上发送送日志最常用的协议。 + +[RELP][13] 是这三个协议中最可靠的,但是它是为 rsyslog 创建的,而且很少有行业采用。它在应用层接收数据,如果有错误就会重发。请确认你的日志接受位置也支持这个协议。 + +### 用磁盘辅助队列可靠的传送 ### + +如果 rsyslog 在存储日志时遭遇错误,例如一个不可用网络连接,它能将日志排队直到连接还原。队列日志默认被存储在内存里。无论如何,内存是有限的并且如果问题仍然存在,日志会超出内存容量。 + +**警告:如果你只存储日志到内存,你可能会失去数据。** + +rsyslog 能在内存被占满时将日志队列放到磁盘。[磁盘辅助队列][14]使日志的传输更可靠。这里有一个例子如何配置rsyslog 的磁盘辅助队列: + + $WorkDirectory /var/spool/rsyslog # 暂存文件(spool)放置位置 + $ActionQueueFileName fwdRule1 # 暂存文件的唯一名字前缀 + $ActionQueueMaxDiskSpace 1g # 1gb 空间限制(尽可能大) + $ActionQueueSaveOnShutdown on # 关机时保存日志到磁盘 + $ActionQueueType LinkedList # 异步运行 + $ActionResumeRetryCount -1 # 如果主机宕机,不断重试 + +### 使用 TLS 加密日志 ### + +如果你担心你的数据的安全性和隐私性,你应该考虑加密你的日志。如果你使用纯文本在互联网传输日志,嗅探器和中间人可以读到你的日志。如果日志包含私人信息、敏感的身份数据或者政府管制数据,你应该加密你的日志。rsyslog 程序能使用 TLS 协议加密你的日志保证你的数据更安全。 + +建立 TLS 加密,你应该做如下任务: + +1. 生成一个[证书授权(CA)][15]。在 /contrib/gnutls 有一些证书例子,可以用来测试,但是你需要为产品环境创建自己的证书。如果你正在使用一个日志管理服务,它会给你一个证书。 +1. 为你的服务器生成一个[数字证书][16]使它能启用 SSL 操作,或者使用你自己的日志管理服务提供商的一个数字证书。 +1. 配置你的 rsyslog 程序来发送 TLS 加密数据到你的日志管理系统。 + +这有一个 rsyslog 配置 TLS 加密的例子。替换 CERT 和 DOMAIN_NAME 为你自己的服务器配置。 + + $DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt + $ActionSendStreamDriver gtls + $ActionSendStreamDriverMode 1 + $ActionSendStreamDriverAuthMode x509/name + $ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com + +### 应用日志的最佳管理方法 ### + +除 Linux 默认创建的日志之外,归集重要的应用日志也是一个好主意。几乎所有基于 Linux 的服务器应用都把它们的状态信息写入到独立、专门的日志文件中。这包括数据库产品,像 PostgreSQL 或者 MySQL,网站服务器,像 Nginx 或者 Apache,防火墙,打印和文件共享服务,目录和 DNS 服务等等。 + +管理员安装一个应用后要做的第一件事是配置它。Linux 应用程序典型的有一个放在 /etc 目录里 .conf 文件。它也可能在其它地方,但是那是大家找配置文件首先会看的地方。 + +根据应用程序有多复杂多庞大,可配置参数的数量可能会很少或者上百行。如前所述,大多数应用程序可能会在某种日志文件写它们的状态:配置文件是定义日志设置和其它东西的地方。 + +如果你不确定它在哪,你可以使用locate命令去找到它: + + [root@localhost ~]# locate postgresql.conf + /usr/pgsql-9.4/share/postgresql.conf.sample + /var/lib/pgsql/9.4/data/postgresql.conf + +#### 设置一个日志文件的标准位置 #### + +Linux 系统一般保存它们的日志文件在 /var/log 目录下。一般是这样,但是需要检查一下应用是否保存它们在 /var/log 下的特定目录。如果是,很好,如果不是,你也许想在 /var/log 下创建一个专用目录?为什么?因为其它程序也在 /var/log 下保存它们的日志文件,如果你的应用保存超过一个日志文件 - 也许每天一个或者每次重启一个 - 在这么大的目录也许有点难于搜索找到你想要的文件。 + +如果在你网络里你有运行多于一个的应用实例,这个方法依然便利。想想这样的情景,你也许有一打 web 服务器在你的网络运行。当排查任何一个机器的问题时,你就很容易知道确切的位置。 + +#### 使用一个标准的文件名 #### + +给你的应用最新的日志使用一个标准的文件名。这使一些事变得容易,因为你可以监控和追踪一个单独的文件。很多应用程序在它们的日志文件上追加一种时间戳。它让 rsyslog 更难于找到最新的文件和设置文件监控。一个更好的方法是使用日志轮转给老的日志文件增加时间。这样更易去归档和历史查询。 + +#### 追加日志文件 #### + +日志文件会在每个应用程序重启后被覆盖吗?如果这样,我们建议关掉它。每次重启 app 后应该去追加日志文件。这样,你就可以追溯重启前最后的日志。 + +#### 日志文件追加 vs. 
轮转 #### + +要是应用程序每次重启后写一个新日志文件,如何保存当前日志?追加到一个单独的、巨大的文件?Linux 系统并不以频繁重启或者崩溃而出名:应用程序可以运行很长时间甚至不间歇,但是也会使日志文件非常大。如果你查询分析上周发生连接错误的原因,你可能无疑的要在成千上万行里搜索。 + +我们建议你配置应用每天半晚轮转(rotate)它的日志文件。 + +为什么?首先它将变得可管理。找一个带有特定日期的文件名比遍历一个文件中指定日期的条目更容易。文件也小的多:你不用考虑当你打开一个日志文件时 vi 僵住。第二,如果你正发送日志到另一个位置 - 也许每晚备份任务拷贝到归集日志服务器 - 这样不会消耗你的网络带宽。最后第三点,这样帮助你做日志保留。如果你想剔除旧的日志记录,这样删除超过指定日期的文件比用一个应用解析一个大文件更容易。 + +#### 日志文件的保留 #### + +你保留你的日志文件多长时间?这绝对可以归结为业务需求。你可能被要求保持一个星期的日志信息,或者管理要求保持一年的数据。无论如何,日志需要在一个时刻或其它情况下从服务器删除。 + +在我们看来,除非必要,只在线保持最近一个月的日志文件,并拷贝它们到第二个地方如日志服务器。任何比这更旧的日志可以被转到一个单独的介质上。例如,如果你在 AWS 上,你的旧日志可以被拷贝到 Glacier。 + +#### 给日志单独的磁盘分区 #### + +更好的,Linux 通常建议挂载到 /var 目录到一个单独的文件系统。这是因为这个目录的高 I/O。我们推荐挂载 /var/log 目录到一个单独的磁盘系统下。这样可以节省与主要的应用数据的 I/O 竞争。另外,如果一些日志文件变的太多,或者一个文件变的太大,不会占满整个磁盘。 + +#### 日志条目 #### + +每个日志条目中应该捕获什么信息? + +这依赖于你想用日志来做什么。你只想用它来排除故障,或者你想捕获所有发生的事?这是一个捕获每个用户在运行什么或查看什么的规则条件吗? + +如果你正用日志做错误排查的目的,那么只保存错误,报警或者致命信息。没有理由去捕获调试信息,例如,应用也许默认记录了调试信息或者另一个管理员也许为了故障排查而打开了调试信息,但是你应该关闭它,因为它肯定会很快的填满空间。在最低限度上,捕获日期、时间、客户端应用名、来源 ip 或者客户端主机名、执行的动作和信息本身。 + +#### 一个 PostgreSQL 的实例 #### + +作为一个例子,让我们看看 vanilla PostgreSQL 9.4 安装的主配置文件。它叫做 postgresql.conf,与其它Linux 系统中的配置文件不同,它不保存在 /etc 目录下。下列的代码段,我们可以在我们的 Centos 7 服务器的 /var/lib/pgsql 目录下找到它: + + root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf + ... + #------------------------------------------------------------------------------ + # ERROR REPORTING AND LOGGING + #------------------------------------------------------------------------------ + # - Where to Log - + log_destination = 'stderr' + # Valid values are combinations of + # stderr, csvlog, syslog, and eventlog, + # depending on platform. csvlog + # requires logging_collector to be on. + # This is used when logging to stderr: + logging_collector = on + # Enable capturing of stderr and csvlog + # into log files. Required to be on for + # csvlogs. + # (change requires restart) + # These are only used if logging_collector is on: + log_directory = 'pg_log' + # directory where log files are written, + # can be absolute or relative to PGDATA + log_filename = 'postgresql-%a.log' # log file name pattern, + # can include strftime() escapes + # log_file_mode = 0600 . + # creation mode for log files, + # begin with 0 to use octal notation + log_truncate_on_rotation = on # If on, an existing log file with the + # same name as the new log file will be + # truncated rather than appended to. + # But such truncation only occurs on + # time-driven rotation, not on restarts + # or size-driven rotation. Default is + # off, meaning append to existing files + # in all cases. + log_rotation_age = 1d + # Automatic rotation of logfiles will happen after that time. 0 disables. + log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables. 
+ # These are relevant when logging to syslog: + #syslog_facility = 'LOCAL0' + #syslog_ident = 'postgres' + # This is only relevant when logging to eventlog (win32): + #event_source = 'PostgreSQL' + # - When to Log - + #client_min_messages = notice # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # log + # notice + # warning + # error + #log_min_messages = warning # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic + #log_min_error_statement = error # values in order of decreasing detail: + # debug5 + # debug4 + # debug3 + # debug2 + # debug1 + # info + # notice + # warning + # error + # log + # fatal + # panic (effectively off) + #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements + # and their durations, > 0 logs only + # statements running at least this number + # of milliseconds + # - What to Log + #debug_print_parse = off + #debug_print_rewritten = off + #debug_print_plan = off + #debug_pretty_print = on + #log_checkpoints = off + #log_connections = off + #log_disconnections = off + #log_duration = off + #log_error_verbosity = default + # terse, default, or verbose messages + #log_hostname = off + log_line_prefix = '< %m >' # special values: + # %a = application name + # %u = user name + # %d = database name + # %r = remote host and port + # %h = remote host + # %p = process ID + # %t = timestamp without milliseconds + # %m = timestamp with milliseconds + # %i = command tag + # %e = SQL state + # %c = session ID + # %l = session line number + # %s = session start timestamp + # %v = virtual transaction ID + # %x = transaction ID (0 if none) + # %q = stop here in non-session + # processes + # %% = '%' + # e.g. '<%u%%%d> ' + #log_lock_waits = off # log lock waits >= deadlock_timeout + #log_statement = 'none' # none, ddl, mod, all + #log_temp_files = -1 # log temporary files equal or larger + # than the specified size in kilobytes;5# -1 disables, 0 logs all temp files5 + log_timezone = 'Australia/ACT' + +虽然大多数参数被加上了注释,它们使用了默认值。我们可以看见日志文件目录是 pg_log(log_directory 参数,在 /var/lib/pgsql/9.4/data/ 下的子目录),文件名应该以 postgresql 开头(log_filename参数),文件每天轮转一次(log_rotation_age 参数)然后每行日志记录以时间戳开头(log_line_prefix参数)。特别值得说明的是 log_line_prefix 参数:全部的信息你都可以包含在这。 + +看 /var/lib/pgsql/9.4/data/pg_log 目录下展现给我们这些文件: + + [root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log + total 20 + -rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log + -rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log + -rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log + -rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log + -rw-------. 1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log + +所以日志文件名只有星期命名的标签。我们可以改变它。如何做?在 postgresql.conf 配置 log_filename 参数。 + +查看一个日志内容,它的条目仅以日期时间开头: + + [root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log + ... 
+    < 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request
+    < 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions
+    < 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down
+    < 2015-02-27 01:21:27.036 EST >LOG: shutting down
+    < 2015-02-27 01:21:27.211 EST >LOG: database system is shut down
+
+### 归集应用的日志 ###
+
+#### 使用 imfile 监控日志 ####
+
+习惯上,应用程序通常把日志记录在文件里。文件在单台机器上容易查找,但在多台服务器上就不那么方便了。你可以设置日志文件监控,当新的日志被追加到文件尾部后,就发送事件到一个集中服务器。在 /etc/rsyslog.d/ 里创建一个新的配置文件,然后输入如下内容:
+
+    $ModLoad imfile
+    $InputFilePollInterval 10
+    $PrivDropToGroup adm
+
+-----
+    # Input for FILE1
+    $InputFileName /FILE1
+    $InputFileTag APPNAME1
+    $InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled
+    $InputFileSeverity info
+    $InputFilePersistStateInterval 20000
+    $InputRunFileMonitor
+
+替换 FILE1 和 APPNAME1 为你自己的文件名和应用名称。rsyslog 将发送它到你配置的输出目标中。
+
+#### 本地套接字日志与 imuxsock ####
+
+套接字类似 UNIX 文件句柄,所不同的是套接字内容是由 syslog 守护进程读取到内存中,然后发送到目的地,不需要写入文件。作为一个例子,logger 命令发送它的日志到这个 UNIX 套接字。
+
+如果你的服务器 I/O 有限或者你不需要本地文件日志,这个方法可以有效利用系统资源。这个方法的缺点是套接字有队列大小的限制。如果你的 syslog 守护进程宕掉或者不能保持运行,你就可能会丢失日志数据。
+
+rsyslog 程序默认会从 /dev/log 套接字中读取,但是你需要使用如下命令来让 [imuxsock 输入模块][17] 启用它:
+
+    $ModLoad imuxsock
+
+#### UDP 日志与 imudp ####
+
+一些应用程序使用 UDP 格式输出日志数据,这是在网络上或者本地传输日志文件的标准 syslog 协议。你的 syslog 守护进程接收这些日志,然后处理它们或者用不同的格式传输它们。或者,你也可以发送日志到你的日志服务器或者一个日志管理方案中。
+
+使用如下命令配置 rsyslog 通过 UDP 来接收标准端口 514 上的 syslog 数据:
+
+    $ModLoad imudp
+
+----------
+
+    $UDPServerRun 514
+
+### 用 logrotate 管理日志 ###
+
+日志轮转是当日志文件到达指定的时期时自动归档日志文件的方法。如果不介入,日志文件会一直增长,用尽磁盘空间,最后会拖垮你的机器。
+
+logrotate 工具能按日志的日期截断你的日志,腾出空间。新日志文件保持原来的文件名,旧日志文件则被重命名并加上后缀数字。每次 logrotate 工具运行时,就会创建一个新文件,现存的文件被逐一重命名。由你来决定何时删除或归档旧文件的阈值。
+
+当 logrotate 拷贝一个文件,新的文件会有一个新的 inode,这会妨碍 rsyslog 监控新文件。你可以通过增加 copytruncate 参数到你的 logrotate 定时任务来缓解这个问题。这个参数会拷贝现有的日志文件内容到新文件,然后把现有文件截短为空。因为日志文件还是同一个,inode 不会改变,rsyslog 可以继续监控它。
+
+logrotate 工具使用的主配置文件是 /etc/logrotate.conf,应用特有的设置在 /etc/logrotate.d/ 目录下。DigitalOcean 有一个详细的 [logrotate 教程][18]。
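+
+作为参考,下面给出一个可以放在 /etc/logrotate.d/ 目录下的最小配置示意(其中的日志路径和保留份数只是假设值,请按需调整)。它让日志每天轮转一次、保留 30 份、压缩旧日志,并按上文所述使用 copytruncate,以免轮转影响 rsyslog 对文件的监控:
+
+    /var/log/myapp/*.log {
+        daily
+        rotate 30
+        compress
+        delaycompress
+        missingok
+        notifempty
+        copytruncate
+    }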
+
+### 管理很多服务器的配置 ###
+
+当你只有很少的服务器,你可以登录上去手动配置。一旦你有几打或者更多服务器,你可以利用工具的优势使这变得更容易和更可扩展。基本上,所有的事情就是拷贝你的 rsyslog 配置到每个服务器,然后重启 rsyslog 使更改生效。
+
+#### pssh ####
+
+这个工具可以让你在很多服务器上并行地运行一个 ssh 命令。使用 pssh 部署仅适用于少量服务器。如果你其中一个服务器部署失败,你就必须 ssh 到失败的服务器,然后手动部署。如果你有很多服务器部署失败,那么手动部署它们会花费很长时间。
+
+#### Puppet/Chef ####
+
+Puppet 和 Chef 是两个不同的工具,它们能在你的网络中按你规定的标准自动配置所有服务器。它们的报表工具可以使你了解错误情况,然后定期重新同步。Puppet 和 Chef 都有一些狂热的支持者。如果你不确定哪个更适合你的部署配置管理,你可以拜读一下 [InfoWorld 上这两个工具的对比][19]。
+
+一些厂商也提供一些配置 rsyslog 的模块或者方法。这有一个 Loggly 上 Puppet 模块的例子。它提供给 rsyslog 一个类,你可以添加一个标识令牌:
+
+    node 'my_server_node.example.net' {
+        # Send syslog events to Loggly
+        class { 'loggly::rsyslog':
+            customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000',
+        }
+    }
+
+#### Docker ####
+
+Docker 使用容器去运行应用,不依赖于底层服务。所有东西都运行在内部的容器,你可以把它想象为一个功能单元。ZDNet 有一篇关于在你的数据中心[使用 Docker][20] 的深入文章。
+
+这里有很多方式从 Docker 容器记录日志,包括链接到一个日志容器,记录到一个共享卷,或者直接在容器里添加一个 syslog 代理。其中最流行的日志容器叫做 [logspout][21]。
+
+#### 供应商的脚本或代理 ####
+
+大多数日志管理方案提供一些脚本或者代理,可以从一个或更多服务器相对容易地发送数据。重量级代理会消耗额外的系统资源。一些供应商像 Loggly 提供配置脚本,让使用现存的 syslog 守护进程更轻松。这有一个 Loggly 上的例子[脚本][22],它能运行在任意数量的服务器上。
+
+--------------------------------------------------------------------------------
+
+via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/
+
+作者:[Jason Skowronski][a1]
+作者:[Amy Echeverri][a2]
+作者:[Sadequl Hussain][a3]
+译者:[wyangsun](https://github.com/wyangsun)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a1]:https://www.linkedin.com/in/jasonskowronski
+[a2]:https://www.linkedin.com/in/amyecheverri
+[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7
+[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl
+[2]:http://www.rsyslog.com/
+[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system
+[4]:http://logstash.net/
+[5]:http://www.fluentd.org/
+[6]:http://www.rsyslog.com/doc/rsyslog_conf.html
+[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html
+[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87
+[9]:https://www.loggly.com/docs/file-monitoring/
+[10]:http://www.networksorcery.com/enp/protocol/udp.htm
+[11]:http://www.networksorcery.com/enp/protocol/tcp.htm
+[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html
+[13]:http://www.rsyslog.com/doc/relp.html
+[14]:http://www.rsyslog.com/doc/queues.html
+[15]:http://www.rsyslog.com/doc/tls_cert_ca.html
+[16]:http://www.rsyslog.com/doc/tls_cert_machine.html
+[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html
+[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10
+[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html
+[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/
+[21]:https://github.com/progrium/logspout
+[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/
diff --git a/translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md b/published/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md
similarity index 71%
rename from translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md
rename to published/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md
index 542cf31cb3..f5afec9a88 100644
--- a/translated/tech/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md
+++ b/published/20150806 Linux FAQs with Answers--How to enable logging in Open vSwitch for debugging and troubleshooting.md
@@ -1,10 +1,10 @@
-Linux有问必答——如何启用Open vSwitch的日志功能以便调试和排障
+Linux有问必答:如何启用Open vSwitch的日志功能以便调试和排障
 ================================================================================
 > **问题** 我试着为我的Open vSwitch部署排障,鉴于此,我想要检查它的由内建日志机制生成的调试信息。我怎样才能启用Open vSwitch的日志功能,并且修改它的日志等级(如,修改成INFO/DEBUG级别)以便于检查更多详细的调试信息呢?
-Open vSwitch(OVS)是Linux平台上用于虚拟切换的最流行的开源部署。由于当今的数据中心日益依赖于软件定义的网络(SDN)架构,OVS被作为数据中心的SDN部署中实际上的标准网络元素而快速采用。 +Open vSwitch(OVS)是Linux平台上最流行的开源的虚拟交换机。由于当今的数据中心日益依赖于软件定义网络(SDN)架构,OVS被作为数据中心的SDN部署中的事实标准上的网络元素而得到飞速应用。 -Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允许你在各种切换组件中启用并自定义日志,由VLOG生成的日志信息可以被发送到一个控制台,syslog以及一个独立日志文件组合,以供检查。你可以通过一个名为`ovs-appctl`的命令行工具在运行时动态配置OVS日志。 +Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允许你在各种网络交换组件中启用并自定义日志,由VLOG生成的日志信息可以被发送到一个控制台、syslog以及一个便于查看的单独日志文件。你可以通过一个名为`ovs-appctl`的命令行工具在运行时动态配置OVS日志。 ![](https://farm1.staticflickr.com/499/19300367114_cd8aac2fb2_c.jpg) @@ -14,7 +14,7 @@ Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允 $ sudo ovs-appctl vlog/set module[:facility[:level]] -- **Module**:OVS中的任何合法组件的名称(如netdev,ofproto,dpif,vswitchd,以及其它大量组件) +- **Module**:OVS中的任何合法组件的名称(如netdev,ofproto,dpif,vswitchd等等) - **Facility**:日志信息的目的地(必须是:console,syslog,或者file) - **Level**:日志的详细程度(必须是:emer,err,warn,info,或者dbg) @@ -36,13 +36,13 @@ Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允 ![](https://farm1.staticflickr.com/465/19734939478_7eb5d44635_c.jpg) -输出结果显示了用于三个工具(console,syslog,file)的各个模块的调试级别。默认情况下,所有模块的日志等级都被设置为INFO。 +输出结果显示了用于三个场合(facility:console,syslog,file)的各个模块的调试级别。默认情况下,所有模块的日志等级都被设置为INFO。 -指定任何一个OVS模块,你可以选择性地修改任何特定工具的调试级别。例如,如果你想要在控制台屏幕中查看dpif更为详细的调试信息,可以运行以下命令。 +指定任何一个OVS模块,你可以选择性地修改任何特定场合的调试级别。例如,如果你想要在控制台屏幕中查看dpif更为详细的调试信息,可以运行以下命令。 $ sudo ovs-appctl vlog/set dpif:console:dbg -你将看到dpif模块的console工具已经将其日志等级修改为DBG,而其它两个工具syslog和file的日志级别仍然没有改变。 +你将看到dpif模块的console工具已经将其日志等级修改为DBG,而其它两个场合syslog和file的日志级别仍然没有改变。 ![](https://farm1.staticflickr.com/333/19896760146_5d851311ae_c.jpg) @@ -52,7 +52,7 @@ Open vSwitch具有一个内建的日志机制,它称之为VLOG。VLOG工具允 ![](https://farm1.staticflickr.com/351/19734939828_8c7f59e404_c.jpg) -同时,如果你想要一次性修改所有三个工具的日志级别,你可以指定“ANY”作为工具名。例如,下面的命令将修改每个模块的所有工具的日志级别为DBG。 +同时,如果你想要一次性修改所有三个场合的日志级别,你可以指定“ANY”作为场合名。例如,下面的命令将修改每个模块的所有场合的日志级别为DBG。 $ sudo ovs-appctl vlog/set ANY:ANY:dbg @@ -62,7 +62,7 @@ via: http://ask.xmodulo.com/enable-logging-open-vswitch.html 作者:[Dan Nanni][a] 译者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md b/published/20150811 How to Install Snort and Usage in Ubuntu 15.04.md similarity index 77% rename from translated/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md rename to published/20150811 How to Install Snort and Usage in Ubuntu 15.04.md index 06fbfd62b8..01d7f7ec13 100644 --- a/translated/tech/20150811 How to Install Snort and Usage in Ubuntu 15.04.md +++ b/published/20150811 How to Install Snort and Usage in Ubuntu 15.04.md @@ -1,12 +1,13 @@ -在Ubuntu 15.04中如何安装和使用Snort +在 Ubuntu 15.04 中如何安装和使用 Snort ================================================================================ -对于IT安全而言入侵检测是一件非常重要的事。入侵检测系统用于检测网络中非法与恶意的请求。Snort是一款知名的开源入侵检测系统。Web界面(Snorby)可以用于更好地分析警告。Snort使用iptables/pf防火墙来作为入侵检测系统。本篇中,我们会安装并配置一个开源的IDS系统snort。 + +对于网络安全而言入侵检测是一件非常重要的事。入侵检测系统(IDS)用于检测网络中非法与恶意的请求。Snort是一款知名的开源的入侵检测系统。其 Web界面(Snorby)可以用于更好地分析警告。Snort使用iptables/pf防火墙来作为入侵检测系统。本篇中,我们会安装并配置一个开源的入侵检测系统snort。 ### Snort 安装 ### #### 要求 #### -snort所使用的数据采集库(DAQ)用于抽象地调用采集库。这个在snort上就有。下载过程如下截图所示。 +snort所使用的数据采集库(DAQ)用于一个调用包捕获库的抽象层。这个在snort上就有。下载过程如下截图所示。 ![downloading_daq](http://blog.linoxide.com/wp-content/uploads/2015/07/downloading_daq.png) @@ -48,7 +49,7 @@ make和make install 命令的结果如下所示。 
![snort_extraction](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_extraction.png) -创建安装目录并在脚本中设置prefix参数。同样也建议启用包性能监控(PPM)标志。 +创建安装目录并在脚本中设置prefix参数。同样也建议启用包性能监控(PPM)的sourcefire标志。 #mkdir /usr/local/snort @@ -56,7 +57,7 @@ make和make install 命令的结果如下所示。 ![snort_installation](http://blog.linoxide.com/wp-content/uploads/2015/07/snort_installation.png) -配置脚本由于缺少libpcre-dev、libdumbnet-dev 和zlib开发库而报错。 +配置脚本会由于缺少libpcre-dev、libdumbnet-dev 和zlib开发库而报错。 配置脚本由于缺少libpcre库报错。 @@ -96,7 +97,7 @@ make和make install 命令的结果如下所示。 ![make install snort](http://blog.linoxide.com/wp-content/uploads/2015/07/make-install-snort.png) -最终snort在/usr/local/snort/bin中运行。现在它对eth0的所有流量都处在promisc模式(包转储模式)。 +最后,从/usr/local/snort/bin中运行snort。现在它对eth0的所有流量都处在promisc模式(包转储模式)。 ![snort running](http://blog.linoxide.com/wp-content/uploads/2015/07/snort-running.png) @@ -106,14 +107,17 @@ make和make install 命令的结果如下所示。 #### Snort的规则和配置 #### -从源码安装的snort需要规则和安装配置,因此我们会从/etc/snort下面复制规则和配置。我们已经创建了单独的bash脚本来用于规则和配置。它会设置下面这些snort设置。 +从源码安装的snort还需要设置规则和配置,因此我们需要复制规则和配置到/etc/snort下面。我们已经创建了单独的bash脚本来用于设置规则和配置。它会设置下面这些snort设置。 -- 在linux中创建snort用户用于snort IDS服务。 +- 在linux中创建用于snort IDS服务的snort用户。 - 在/etc下面创建snort的配置文件和文件夹。 -- 权限设置并从etc中复制snortsnort源代码 +- 权限设置并从源代码的etc目录中复制数据。 - 从snort文件中移除规则中的#(注释符号)。 - #!/bin/bash##PATH of source code of snort +- + + #!/bin/bash# + # snort源代码的路径 snort_src="/home/test/Downloads/snort-2.9.7.3" echo "adding group and user for snort..." groupadd snort &> /dev/null @@ -141,15 +145,15 @@ make和make install 命令的结果如下所示。 sed -i 's/include \$RULE\_PATH/#include \$RULE\_PATH/' /etc/snort/snort.conf echo "---DONE---" -改变脚本中的snort源目录并运行。下面是成功的输出。 +改变脚本中的snort源目录路径并运行。下面是成功的输出。 ![running script](http://blog.linoxide.com/wp-content/uploads/2015/08/running_script.png) -上面的脚本从snort源中复制下面的文件/文件夹到/etc/snort配置文件中 +上面的脚本从snort源中复制下面的文件和文件夹到/etc/snort配置文件中 ![files copied](http://blog.linoxide.com/wp-content/uploads/2015/08/created.png) -、snort的配置非常复杂,然而为了IDS能正常工作需要进行下面必要的修改。 +snort的配置非常复杂,要让IDS能正常工作需要进行下面必要的修改。 ipvar HOME_NET 192.168.1.0/24 # LAN side @@ -173,7 +177,7 @@ make和make install 命令的结果如下所示。 ![path rules](http://blog.linoxide.com/wp-content/uploads/2015/08/path-rules.png) -下载[下载社区][1]规则并解压到/etc/snort/rules。启用snort.conf中的社区及紧急威胁规则。 +现在[下载社区规则][1]并解压到/etc/snort/rules。启用snort.conf中的社区及紧急威胁规则。 ![wget_rules](http://blog.linoxide.com/wp-content/uploads/2015/08/wget_rules.png) @@ -187,7 +191,7 @@ make和make install 命令的结果如下所示。 ### 总结 ### -本篇中,我们致力于开源IDPS系统snort在Ubuntu上的安装和配置。默认它用于监控时间,然而它可以被配置成用于网络保护的内联模式。snort规则可以在离线模式中可以使用pcap文件测试和分析 +本篇中,我们关注了开源IDPS系统snort在Ubuntu上的安装和配置。通常它用于监控事件,然而它可以被配置成用于网络保护的在线模式。snort规则可以在离线模式中可以使用pcap捕获文件进行测试和分析 -------------------------------------------------------------------------------- @@ -195,7 +199,7 @@ via: http://linoxide.com/security/install-snort-usage-ubuntu-15-04/ 作者:[nido][a] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md b/published/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md similarity index 80% rename from translated/tech/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md rename to published/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md index 09f10fb546..76a06ea37c 100644 --- a/translated/tech/20150811 
fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md +++ b/published/20150811 fdupes--A Comamndline Tool to Find and Delete Duplicate Files in Linux.md @@ -1,16 +1,16 @@ -fdupes——Linux中查找并删除重复文件的命令行工具 +fdupes:Linux中查找并删除重复文件的命令行工具 ================================================================================ -对于大多数计算机用户而言,查找并替换重复的文件是一个常见的需求。查找并移除重复文件真是一项领人不胜其烦的工作,它耗时又耗力。如果你的机器上跑着GNU/Linux,那么查找重复文件会变得十分简单,这多亏了`**fdupes**`工具。 +对于大多数计算机用户而言,查找并替换重复的文件是一个常见的需求。查找并移除重复文件真是一项令人不胜其烦的工作,它耗时又耗力。但如果你的机器上跑着GNU/Linux,那么查找重复文件会变得十分简单,这多亏了`fdupes`工具。 ![Find and Delete Duplicate Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/find-and-delete-duplicate-files-in-linux.png) -Fdupes——在Linux中查找并删除重复文件 +*fdupes——在Linux中查找并删除重复文件* ### fdupes是啥东东? ### -**Fdupes**是Linux下的一个工具,它由**Adrian Lopez**用C编程语言编写并基于MIT许可证发行,该应用程序可以在指定的目录及子目录中查找重复的文件。Fdupes通过对比文件的MD5签名,以及逐字节比较文件来识别重复内容,可以为Fdupes指定大量的选项以实现对文件的列出、删除、替换到文件副本的硬链接等操作。 +**fdupes**是Linux下的一个工具,它由**Adrian Lopez**用C编程语言编写并基于MIT许可证发行,该应用程序可以在指定的目录及子目录中查找重复的文件。fdupes通过对比文件的MD5签名,以及逐字节比较文件来识别重复内容,fdupes有各种选项,可以实现对文件的列出、删除、替换为文件副本的硬链接等操作。 -对比以下列顺序开始: +文件对比以下列顺序开始: **大小对比 > 部分 MD5 签名对比 > 完整 MD5 签名对比 > 逐字节对比** @@ -27,8 +27,9 @@ Fdupes——在Linux中查找并删除重复文件 **注意**:自Fedora 22之后,默认的包管理器yum被dnf取代了。 -### fdupes命令咋个搞? ### -1.作为演示的目的,让我们来在某个目录(比如 tecmint)下创建一些重复文件,命令如下: +### fdupes命令如何使用 ### + +1、 作为演示的目的,让我们来在某个目录(比如 tecmint)下创建一些重复文件,命令如下: $ mkdir /home/"$USER"/Desktop/tecmint && cd /home/"$USER"/Desktop/tecmint && for i in {1..15}; do echo "I Love Tecmint. Tecmint is a very nice community of Linux Users." > tecmint${i}.txt ; done @@ -57,7 +58,7 @@ Fdupes——在Linux中查找并删除重复文件 "I Love Tecmint. Tecmint is a very nice community of Linux Users." -2.现在在**tecmint**文件夹内搜索重复的文件。 +2、 现在在**tecmint**文件夹内搜索重复的文件。 $ fdupes /home/$USER/Desktop/tecmint @@ -77,7 +78,7 @@ Fdupes——在Linux中查找并删除重复文件 /home/tecmint/Desktop/tecmint/tecmint15.txt /home/tecmint/Desktop/tecmint/tecmint12.txt -3.使用**-r**选项在每个目录包括其子目录中递归搜索重复文件。 +3、 使用**-r**选项在每个目录包括其子目录中递归搜索重复文件。 它会递归搜索所有文件和文件夹,花一点时间来扫描重复文件,时间的长短取决于文件和文件夹的数量。在此其间,终端中会显示全部过程,像下面这样。 @@ -85,7 +86,7 @@ Fdupes——在Linux中查找并删除重复文件 Progress [37780/54747] 69% -4.使用**-S**选项来查看某个文件夹内找到的重复文件的大小。 +4、 使用**-S**选项来查看某个文件夹内找到的重复文件的大小。 $ fdupes -S /home/$USER/Desktop/tecmint @@ -106,7 +107,7 @@ Fdupes——在Linux中查找并删除重复文件 /home/tecmint/Desktop/tecmint/tecmint15.txt /home/tecmint/Desktop/tecmint/tecmint12.txt -5.你可以同时使用**-S**和**-r**选项来查看所有涉及到的目录和子目录中的重复文件的大小,如下: +5、 你可以同时使用**-S**和**-r**选项来查看所有涉及到的目录和子目录中的重复文件的大小,如下: $ fdupes -Sr /home/avi/Desktop/ @@ -131,11 +132,11 @@ Fdupes——在Linux中查找并删除重复文件 /home/tecmint/Desktop/resume_files/r-csc.html /home/tecmint/Desktop/resume_files/fc.html -6.不同于在一个或所有文件夹内递归搜索,你可以选择按要求有选择性地在两个或三个文件夹内进行搜索。不必再提醒你了吧,如有需要,你可以使用**-S**和/或**-r**选项。 +6、 不同于在一个或所有文件夹内递归搜索,你可以选择按要求有选择性地在两个或三个文件夹内进行搜索。不必再提醒你了吧,如有需要,你可以使用**-S**和/或**-r**选项。 $ fdupes /home/avi/Desktop/ /home/avi/Templates/ -7.要删除重复文件,同时保留一个副本,你可以使用`**-d**`选项。使用该选项,你必须额外小心,否则最终结果可能会是文件/数据的丢失。郑重提醒,此操作不可恢复。 +7、 要删除重复文件,同时保留一个副本,你可以使用`-d`选项。使用该选项,你必须额外小心,否则最终结果可能会是文件/数据的丢失。郑重提醒,此操作不可恢复。 $ fdupes -d /home/$USER/Desktop/tecmint @@ -177,13 +178,13 @@ Fdupes——在Linux中查找并删除重复文件 [-] /home/tecmint/Desktop/tecmint/tecmint15.txt [-] /home/tecmint/Desktop/tecmint/tecmint12.txt -8.从安全角度出发,你可能想要打印`**fdupes**`的输出结果到文件中,然后检查文本文件来决定要删除什么文件。这可以降低意外删除文件的风险。你可以这么做: +8、 从安全角度出发,你可能想要打印`fdupes`的输出结果到文件中,然后检查文本文件来决定要删除什么文件。这可以降低意外删除文件的风险。你可以这么做: $ fdupes -Sr /home > /home/fdupes.txt -**注意**:你可以替换`**/home**`为你想要的文件夹。同时,如果你想要递归搜索并打印大小,可以使用`**-r**`和`**-S**`选项。 
+**注意**:你应该替换`/home`为你想要的文件夹。同时,如果你想要递归搜索并打印大小,可以使用`-r`和`-S`选项。 -9.你可以使用`**-f**`选项来忽略每个匹配集中的首个文件。 +9、 你可以使用`-f`选项来忽略每个匹配集中的首个文件。 首先列出该目录中的文件。 @@ -205,13 +206,13 @@ Fdupes——在Linux中查找并删除重复文件 /home/tecmint/Desktop/tecmint9 (another copy).txt /home/tecmint/Desktop/tecmint9 (4th copy).txt -10.检查已安装的fdupes版本。 +10、 检查已安装的fdupes版本。 $ fdupes --version fdupes 1.51 -11.如果你需要关于fdupes的帮助,可以使用`**-h**`开关。 +11、 如果你需要关于fdupes的帮助,可以使用`-h`开关。 $ fdupes -h @@ -245,7 +246,7 @@ Fdupes——在Linux中查找并删除重复文件 -v --version display fdupes version -h --help display this help message -到此为止了。让我知道你到现在为止你是怎么在Linux中查找并删除重复文件的?同时,也让我知道你关于这个工具的看法。在下面的评论部分中提供你有价值的反馈吧,别忘了为我们点赞并分享,帮助我们扩散哦。 +到此为止了。让我知道你以前怎么在Linux中查找并删除重复文件的吧?同时,也让我知道你关于这个工具的看法。在下面的评论部分中提供你有价值的反馈吧,别忘了为我们点赞并分享,帮助我们扩散哦。 我正在使用另外一个移除重复文件的工具,它叫**fslint**。很快就会把使用心得分享给大家哦,你们一定会喜欢看的。 @@ -254,10 +255,10 @@ Fdupes——在Linux中查找并删除重复文件 via: http://www.tecmint.com/fdupes-find-and-delete-duplicate-files-in-linux/ 作者:[GOLinux](https://github.com/GOLinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/avishek/ -[1]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ -[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/ +[1]:https://linux.cn/article-2324-1.html +[2]:https://linux.cn/article-5109-1.html diff --git a/published/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md b/published/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md new file mode 100644 index 0000000000..80e90df7fb --- /dev/null +++ b/published/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md @@ -0,0 +1,145 @@ +Linux 小技巧:Chrome 小游戏,让文字说话,计划作业,重复执行命令 +================================================================================ + +重要的事情说两遍,我完成了一个[Linux提示与彩蛋][1]系列,让你的Linux获得更多创造和娱乐。 + +![Linux提示与彩蛋系列](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-Tips-and-Tricks.png) + +*Linux提示与彩蛋系列* + +本文,我将会讲解Google-chrome内建小游戏,在终端中如何让文字说话,使用‘at’命令设置作业和使用watch命令重复执行命令。 + +### 1. Google Chrome 浏览器小游戏彩蛋 ### + +网线脱掉或者其他什么原因连不上网时,Google Chrome就会出现一个小游戏。声明,我并不是游戏玩家,因此我的电脑上并没有安装任何第三方的恶意游戏。安全是第一位。 + +所以当Internet发生出错,会出现一个这样的界面: + +![不能连接到互联网](http://www.tecmint.com/wp-content/uploads/2015/08/Unable-to-Connect-Internet.png) + +*不能连接到互联网* + +按下空格键来激活Google-chrome彩蛋游戏。游戏没有时间限制。并且还不需要浪费时间安装使用。 + +不需要第三方软件的支持。同样支持Windows和Mac平台,但是我的平台是Linux,我也只谈论Linux。当然在Linux,这个游戏运行很好。游戏简单,但也很花费时间。 + +使用空格/向上方向键来跳跃。请看下列截图: + +![Google Chrome中玩游戏](http://www.tecmint.com/wp-content/uploads/2015/08/Play-Game-in-Google-Chrome.gif) + +*Google Chrome中玩游戏* + +### 2. 
Linux 终端中朗读文字 ###
+
+如果需要把文字转换成语音,有个小工具可以做到:用各种语言写一些东西,espeak 就可以朗读给你。
+
+系统应该默认安装了Espeak,如果你的系统没有安装,你可以使用下列命令来安装:
+
+    # apt-get install espeak (Debian)
+    # yum install espeak (CentOS)
+    # dnf install espeak (Fedora 22 及其以后)
+
+你可以让espeak接受标准输入的交互输入并及时转换成语音朗读出来。如下:
+
+    $ espeak [按回车键]
+
+更详细的输出你可以这样做:
+
+    $ espeak --stdout | aplay [按回车键][再次回车]
+
+espeak设置灵活,也可以朗读文本文件。你可以这样设置:
+
+    $ espeak --stdout /path/to/text/file/file_name.txt | aplay [Hit Enter]
+
+espeak可以设置朗读速度。默认速度是160词每分钟。使用-s参数来设置。
+
+设置每分钟30词的语速:
+
+    $ espeak -s 30 -f /path/to/text/file/file_name.txt | aplay
+
+设置每分钟200词的语速:
+
+    $ espeak -s 200 -f /path/to/text/file/file_name.txt | aplay
+
+说其他语言,比如北印度语(作者母语),这样设置:
+
+    $ espeak -v hindi --stdout 'टेकमिंट विश्व की एक बेहतरीन लाइंक्स आधारित वेबसाइट है|' | aplay
+
+你可以使用各种语言,让espeak如上面说的以你选择的语言朗读。使用下列命令来获得语言列表:
+
+    $ espeak --voices
+
+### 3. 快速调度任务 ###
+
+我们已经非常熟悉使用[cron][2]守护进程执行一个计划命令。
+
+Cron是一个Linux系统管理的高级命令,用于计划定时任务,如备份,或者在指定时间或间隔执行任何事情。
+
+但是,你是否知道at命令可以让你在指定时间调度一个任务或者命令?at命令可以在指定的时间执行指定的内容。
+
+例如,你打算在早上11点2分执行uptime命令,你只需要这样做:
+
+    $ at 11:02
+    uptime >> /home/$USER/uptime.txt
+    Ctrl+D
+
+![Linux中计划任务](http://www.tecmint.com/wp-content/uploads/2015/08/Schedule-Job-in-Linux.png)
+
+*Linux中计划任务*
+
+检查at命令是否成功设置,使用:
+
+    $ at -l
+
+![浏览计划任务](http://www.tecmint.com/wp-content/uploads/2015/08/View-Scheduled-Jobs.png)
+
+*浏览计划任务*
+
+at支持计划多个命令,例如:
+
+    $ at 12:30
+    Command – 1
+    Command – 2
+    …
+    command – 50
+    …
+    Ctrl + D
+
+### 4. 特定时间重复执行命令 ###
+
+有时,我们可能需要在指定的时间间隔重复执行特定命令。例如,想每 3 秒打印一次时间。
+
+查看现在时间,使用下列命令。
+
+    $ date +"%H:%M:%S"
+
+![Linux中查看日期和时间](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Date-in-Linux.png)
+
+*Linux中查看日期和时间*
+
+为了每三秒查看一下这个命令的输出,我需要运行下列命令:
+
+    $ watch -n 3 'date +"%H:%M:%S"'
+
+![Linux中watch命令](http://www.tecmint.com/wp-content/uploads/2015/08/Watch-Command-in-Linux.gif)
+
+*Linux中watch命令*
+
+watch命令的‘-n’开关设定时间间隔。在上述命令中,我们定义了时间间隔为3秒。你可以按你的需求定义。同样,watch 也支持其他命令或者脚本。
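+
+再举一个例子:`-d` 参数可以高亮显示两次刷新之间发生变化的部分(以下为示意用法,每 5 秒刷新一次内存使用情况):
+
+    $ watch -d -n 5 free -m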
+
+至此,希望你喜欢这个系列的文章,它们能让你的 Linux 更有创造性、更有乐趣。欢迎在评论中提出建议,也欢迎你看看其他文章,谢谢。
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/text-to-speech-in-terminal-schedule-a-job-and-watch-commands-in-linux/
+
+作者:[Avishek Kumar][a]
+译者:[VicYu/Vic020](http://vicyu.net)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/avishek/
+[1]:http://www.tecmint.com/tag/linux-tricks/
+[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
diff --git a/published/20150813 Linux file system hierarchy v2.0.md b/published/20150813 Linux file system hierarchy v2.0.md
new file mode 100644
index 0000000000..6a68efbd67
--- /dev/null
+++ b/published/20150813 Linux file system hierarchy v2.0.md
@@ -0,0 +1,440 @@
+Linux 文件系统结构介绍
+================================================================================
+
+![](http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png)
+
+Linux中的文件是什么?它的文件系统又是什么?那些配置文件又在哪里?我下载好的程序保存在哪里了?在 Linux 中文件系统是标准结构的吗?好了,上图简明地阐释了Linux的文件系统的层次关系。当你苦于寻找配置文件或者二进制文件的时候,这便显得十分有用了。我在下方添加了一些解释以及例子,不过“篇幅较长,可以有空再看”。
+
+另外一种情况便是当你在系统中获取配置以及二进制文件时,出现了不一致性问题,如果你是在一个大型组织中,或者只是一个终端用户,这也有可能会破坏你的系统(比如,二进制文件运行在旧的库文件上了)。如果你在[你的Linux系统上做安全审计][1]的话,你将会发现它很容易遭到各种攻击。所以,保持一个清洁的操作系统(无论是Windows还是Linux)都显得十分重要。
+
+### Linux的文件是什么? ###
+
+对于UNIX系统来说(同样适用于Linux),以下便是对文件简单的描述:
+
+> 在UNIX系统中,一切皆为文件;若非文件,则为进程
+
+这种定义是比较正确的,因为有些特殊的文件不仅仅是普通文件(比如命名管道和套接字),不过为了让事情变得简单,“一切皆为文件”也是一个可以让人接受的说法。Linux系统也像UNIX系统一样,将文件和目录视如同物,因为目录只是一个包含了其他文件名的文件而已。程序、服务、文本、图片等等,都是文件。对于系统来说,输入和输出设备,基本上所有的设备,都被当做是文件。
+
+题图版本历史:
+
+- Version 2.0 – 17-06-2015
+	- Improved: 添加标题以及版本历史
+	- Improved: 添加/srv、/media和/proc
+	- Improved: 更新了反映当前的Linux文件系统的描述
+	- Fixed: 多处的打印错误
+	- Fixed: 外观和颜色
+- Version 1.0 – 14-02-2015
+	- Created: 基本的图表
+	- Note: 摒弃更低的版本
+
+### 下载链接 ###
+
+以下是大图的下载地址。如果你需要其他格式,请跟原作者联系,他会尝试制作并且上传到某个地方以供下载。
+
+- [大图 (PNG 格式) – 2480×1755 px – 184KB][2]
+- [最大图 (PDF 格式) – 9919x7019 px – 1686KB][3]
+
+**注意**: PDF格式文件是打印的最好选择,因为它画质很高。
+
+### Linux 文件系统描述 ###
+
+为了有序地管理那些文件,人们习惯把这些文件当做是硬盘上的有序的树状结构,正如我们熟悉的'MS-DOS'(磁盘操作系统)就是一个例子。大的分枝包括更多的分枝,分枝的末梢是树的叶子或者普通的文件。现在我们将会以这树形图为例,但晚点我们会发现为什么这不是一幅完全准确的图。
+
+| 目录 | 描述 |
+|------|------|
+| `/` | 主层次的根,也是整个文件系统层次结构的根目录 |
+| `/bin` | 存放在单用户模式可用的必要命令二进制文件,所有用户都可用,如 cat、ls、cp 等等 |
+| `/boot` | 存放引导加载程序文件,例如 kernels、initrd 等 |
+| `/dev` | 存放必要的设备文件,例如 /dev/null |
+| `/etc` | 存放主机特定的系统级配置文件。其实这里有个关于它名字本身意义上的争议。在贝尔实验室的UNIX实施文档的早期版本中,/etc表示是“其他(etcetera)目录”,因为从历史上看,这个目录是存放各种不属于其他目录的文件(然而,文件系统层次结构标准 FHS 限定 /etc 用于存放静态配置文件,这里不该存有二进制文件)。早期文档出版后,这个目录名又重新定义成不同的形式。近期的解释中包含着诸如“可编辑文本配置”或者“额外的工具箱”这样的重定义 |
+| `/etc/opt` | 存储着新增包(/opt/)的配置文件 |
+| `/etc/sgml` | 存放配置文件,比如 catalogs,用于那些处理SGML(译者注:标准通用标记语言)的软件 |
+| `/etc/X11` | X Window 系统11版本的配置文件 |
+| `/etc/xml` | 存放配置文件,比如 catalogs,用于那些处理XML(译者注:可扩展标记语言)的软件 |
+| `/home` | 用户的主目录,包括保存的文件、个人配置等等 |
+| `/lib` | /bin/ 和 /sbin/ 中的二进制文件所必需的库文件 |
+| `/lib<架构位数>` | 备用格式的必要的库文件。这样的目录是可选的,但如果它们存在的话,肯定有需要用到它们的程序 |
+| `/media` | 可移动的多媒体(如CD-ROM)的挂载点。(出现于 FHS-2.3) |
+| `/mnt` | 临时挂载的文件系统 |
+| `/opt` | 可选的应用程序软件包 |
+| `/proc` | 以文件形式提供进程以及内核信息的虚拟文件系统,在Linux中,对应进程文件系统(procfs)的挂载点 |
+| `/root` | 根用户的主目录 |
+| `/sbin` | 必要的系统级二进制文件,比如 init、ip、mount |
+| `/srv` | 系统提供的站点特定数据 |
+| `/tmp` | 临时文件(另见 /var/tmp),通常在系统重启后删除 |
+| `/usr` | 二级层次,存储用户的只读数据;包含(多)用户主要的公共文件以及应用程序 |
+| `/usr/bin` | 非必要的命令二进制文件(在单用户模式中不需要用到的);用于所有用户 |
+| `/usr/include` | 标准的包含文件 |
+| `/usr/lib` | 库文件,用于 /usr/bin/ 和 /usr/sbin/ 中的二进制文件 |
+| `/usr/lib<架构位数>` | 备用格式库(可选的) |
+| `/usr/local` | 三级层次,用于本地数据,具体到该主机上的。通常会有下一级子目录,比如 bin/、lib/、share/ |
+| `/usr/local/sbin` | 非必要的系统二进制文件,比如用于不同网络服务的守护进程 |
+| `/usr/share` | 架构无关的(共享)数据 |
+| `/usr/src` | 源代码,比如内核源文件以及与它相关的头文件 |
+| `/usr/X11R6` | X Window 系统,版本号:11,发行版本:6 |
+| `/var` | 各种可变(Variable)文件,一些随着系统常规操作而持续改变的文件就放在这里,比如日志文件、脱机文件,还有临时的电子邮件文件 |
+| `/var/cache` | 应用程序缓存数据。这些数据是由耗时的I/O(输入/输出)或者本地运算生成的结果。这些应用程序是可以重新生成或者恢复数据的。当没有数据丢失的时候,可以删除缓存文件 |
+| `/var/lib` | 状态信息。这些信息随着程序的运行而不停地改变,比如,数据库、软件包系统的元数据等等 |
+| `/var/lock` | 锁文件。这些文件用于跟踪正在使用的资源 |
+| `/var/log` | 日志文件。包含各种日志 |
+| `/var/mail` | 内含用户邮箱的相关文件 |
+| `/var/opt` | 来自附加包的各种数据都会存储在 /var/opt/ |
+| `/var/run` | 存放当前系统上次启动以来的相关信息,例如当前登入的用户以及当前运行的daemons(守护进程) |
+| `/var/spool` | 该spool主要用于存放将要被处理的任务,比如打印队列以及邮件外发队列 |
+| `/var/spool/mail` | 过时的位置,用于放置用户邮箱文件 |
+| `/var/tmp` | 存放重启后保留的临时文件 |
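+
+顺带一提,你可以用下面的命令直观地对照自己系统的顶层目录结构(tree 并非在所有发行版上都预装,可能需要先安装;`-L 1` 表示只显示一层目录):
+
+    $ tree -L 1 /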
+
+### Linux的文件类型 ###
+
+大多数文件仅仅是普通文件,它们被称为`regular`文件;它们包含普通数据,比如,文本、可执行文件或者程序、程序的输入或输出等等。
+
+虽然你可以认为“在Linux中,一切你看到的皆为文件”这个观点相当保险,但这里仍有着一些例外。
+
+- `目录`:由其他文件组成的文件
+- `特殊文件`:用于输入和输出的途径。大多数特殊文件都储存在`/dev`中,我们将会在后面讨论这个问题。
+- `链接文件`:让文件或者目录出现在系统文件树结构上多个地方的机制。我们将详细地讨论这个链接文件。
+- `(域)套接字`:特殊的文件类型,和TCP/IP协议中的套接字有点像,提供进程间网络通讯,并受文件系统的访问控制机制保护。
+- `命名管道` : 或多或少有点像sockets(套接字),提供一个进程间的通信机制,而不用网络套接字协议。
+
+### 现实中的文件系统 ###
+
+对于大多数用户和常规系统管理任务而言,“文件和目录是一个有序的类树结构”是可以接受的。然而,对于电脑而言,它是不会理解什么是树,或者什么是树结构。
+
+每个分区都有它自己的文件系统。想象一下,如果把那些文件系统想成一个整体,我们可以构思一个关于整个系统的树结构,不过这并没有这么简单。在文件系统中,一个文件代表着一个`inode`(索引节点),这是一种包含着构建文件的实际数据信息的序列号:这些数据表示文件是属于谁的,还有它在硬盘中的位置。
+
+每个分区都有一套属于它们自己的inode,在一个系统的不同分区中,可以存在有相同inode的文件。
+
+每个inode都表示着一种在硬盘上的数据结构,保存着文件的属性,包括文件数据的物理地址。当硬盘被格式化并用来存储数据时(通常发生在初始系统安装过程,或者是在一个已经存在的系统中添加额外的硬盘),每个分区都会创建固定数量的inode。这个值表示这个分区能够同时存储各类文件的最大数量。我们通常用一个inode去映射2-8k的数据块。当一个新的文件生成后,它就会获得一个空闲的inode。在这个inode里面存储着以下信息:
+
+- 文件属主和组属主
+- 文件类型(常规文件,目录文件......)
+- 文件权限
+- 创建、最近一次读文件和修改文件的时间
+- inode里该信息被修改的时间
+- 文件的链接数(详见下一章)
+- 文件大小
+- 文件数据的实际地址
+
+唯一不在inode中的信息是文件名和目录,它们存储在特殊的目录文件中。通过比较文件名和inode的编号,系统能够构造出一个便于用户理解的树结构。用户可以通过 ls -i 查看文件的inode编号。在硬盘上,inode有它们独立的空间。
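+
+例如,可以这样查看一个文件的 inode 编号(以下输出只是示意,编号在每个系统上都不一样):
+
+    $ ls -i /etc/hosts
+    393222 /etc/hosts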
+
+------------------------
+
+via: http://www.blackmoreops.com/2015/06/18/linux-file-system-hierarchy-v2-0/
+
+译者:[tnuoccalanosrep](https://github.com/tnuoccalanosrep)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://www.blackmoreops.com/2015/02/15/in-light-of-recent-linux-exploits-linux-security-audit-is-a-must/
+[2]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png
+[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf
diff --git a/published/20150826 How to set up a system status page of your infrastructure.md b/published/20150826 How to set up a system status page of your infrastructure.md
new file mode 100644
index 0000000000..7725538ddd
--- /dev/null
+++ b/published/20150826 How to set up a system status page of your infrastructure.md
@@ -0,0 +1,295 @@
+如何为你的平台部署一个公开的系统状态页
+================================================================================
+
+如果你是一个系统管理员,负责关键的 IT 基础设施或公司的服务,你将明白有效的沟通在日常任务中的重要性。假设你的线上存储服务器故障了,你会希望团队所有人都了解情况,好让你尽快地解决问题。当你忙来忙去时,你不会想要一半的人来问你为什么他们不能访问他们的文档。当一个维护计划快到时间时,你也会想提前提醒相关人员,以避免不必要的支持工作。
+
+这一切都需要改进你、你的团队和你的服务用户之间的沟通渠道。一个实现它的方法是维护一个集中的系统状态页面,报告和记录故障停机详情、进度更新和维护计划等。这样,在故障期间你避免了不必要的打扰,也可以提醒一些相关方,以及加入一些可选的状态更新。
+
+有一个不错的**开源、自托管系统状态页解决方案**叫做 [Cachet][1]。在这个教程,我将要描述如何用 Cachet 部署一个自托管系统状态页面。
+
+### Cachet 特性 ###
+
+在详细的配置 Cachet 之前,让我简单的介绍一下它的主要特性。
+
+- **全 JSON API**:Cachet API 可以让你使用任意的外部程序或脚本(例如,uptime 脚本)连接到 Cachet 来自动报告突发事件或更新状态。
+- **认证**:Cachet 支持基础认证和 JSON API 的 API 令牌,所以只有认证用户可以更新状态页面。
+- **衡量系统**:这通常用来展现随着时间推移的自定义数据(例如,服务器负载或者响应时间)。
+- **通知**:可选地,你可以给任一注册了状态页面的人发送突发事件的提示邮件。
+- **多语言**:状态页被翻译为11种不同的语言。
+- **双因子认证**:这允许你使用 Google 的双因子认证来提升 Cachet 管理账户的安全性。
+- **跨数据库支持**:你可以选择 MySQL,SQLite,Redis,APC 和 PostgreSQL 作为后端存储。
+
+剩下的教程,我会说明如何在 Linux 上安装配置 Cachet。
+
+### 第一步:下载和安装 Cachet ###
+
+Cachet 需要一个 web 服务器和一个后端数据库来运转。在这个教程中,我将使用 LAMP 架构。以下是一些特定发行版上安装 Cachet 和 LAMP 架构的指令。
+
+#### Debian,Ubuntu 或者 Linux Mint ####
+
+    $ sudo apt-get install curl git apache2 mysql-server mysql-client php5 php5-mysql
+    $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet
+    $ cd /var/www/cachet
+    $ sudo git checkout v1.1.1
+    $ sudo chown -R www-data:www-data .
+
+在基于 Debian 的系统上设置 LAMP 架构的更多细节,参考这个[教程][2]。
+
+#### Fedora, CentOS 或 RHEL ####
+
+在基于 Red Hat 系统上,你首先需要[设置 REMI 软件库][3](以满足 PHP 的版本需求)。然后执行下面命令。
+
+    $ sudo yum install curl git httpd mariadb-server
+    $ sudo yum --enablerepo=remi-php56 install php php-mysql php-mbstring
+    $ sudo git clone https://github.com/cachethq/Cachet.git /var/www/cachet
+    $ cd /var/www/cachet
+    $ sudo git checkout v1.1.1
+    $ sudo chown -R apache:apache .
+    $ sudo firewall-cmd --permanent --zone=public --add-service=http
+    $ sudo firewall-cmd --reload
+    $ sudo systemctl enable httpd.service; sudo systemctl start httpd.service
+    $ sudo systemctl enable mariadb.service; sudo systemctl start mariadb.service
+
+在基于 Red Hat 系统上设置 LAMP 的更多细节,参考这个[教程][4]。
+
+### 第二步:配置 Cachet 的后端数据库 ###
+
+下一步是配置后端数据库。
+
+登录到 MySQL/MariaDB 服务,然后创建一个名为 ‘cachet’ 的空数据库。
+
+    $ sudo mysql -uroot -p
+
+----------
+
+    mysql> create database cachet;
+    mysql> quit
+
+现在用一个示例配置文件创建一个 Cachet 配置文件。
+
+    $ cd /var/www/cachet
+    $ sudo mv .env.example .env
+
+在 .env 文件里,填写你自己设置的数据库信息(例如,DB\_\*)。其他的字段先不改变。
+
+    APP_ENV=production
+    APP_DEBUG=false
+    APP_URL=http://localhost
+    APP_KEY=SomeRandomString
+
+    DB_DRIVER=mysql
+    DB_HOST=localhost
+    DB_DATABASE=cachet
+    DB_USERNAME=root
+    DB_PASSWORD=

+    CACHE_DRIVER=apc
+    SESSION_DRIVER=apc
+    QUEUE_DRIVER=database

+    MAIL_DRIVER=smtp
+    MAIL_HOST=mailtrap.io
+    MAIL_PORT=2525
+    MAIL_USERNAME=null
+    MAIL_PASSWORD=null
+    MAIL_ADDRESS=null
+    MAIL_NAME=null

+    REDIS_HOST=null
+    REDIS_DATABASE=null
+    REDIS_PORT=null
+
+### 第三步:安装 PHP 依赖和执行数据库迁移 ###
+
+下面,我们将要安装必要的PHP依赖包。我们会使用 composer 来安装。如果你的系统还没有安装 composer,先安装它:
+
+    $ curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer
+
+现在开始用 composer 安装 PHP 依赖包。
+
+    $ cd /var/www/cachet
+    $ sudo composer install --no-dev -o
+
+下面执行一次性的数据库迁移。这一步会在我们之前创建的数据库里面创建那些所需的表。
+
+    $ sudo php artisan migrate
+
+假设在 /var/www/cachet/.env 的数据库配置无误,数据库迁移应该像下面显示一样成功完成。
+
+![](https://farm6.staticflickr.com/5814/20235620184_54048676b0_c.jpg)
+
+下面,创建一个密钥,它将用来加密进入 Cachet 的数据。
+
+    $ sudo php artisan key:generate
+    $ sudo php artisan config:cache
+
+![](https://farm6.staticflickr.com/5717/20831952096_7105c9fdc7_c.jpg)
+
+生成的应用密钥将自动添加到你的 .env 文件 APP\_KEY 变量中。你不需要自己编辑 .env。
+
+### 第四步:配置 Apache HTTP 服务 ###
+
+现在到了配置运行 Cachet 的 web 服务的时候了。我们使用 Apache HTTP 服务器,为 Cachet 创建一个新的[虚拟主机][5],如下:
+
+#### Debian,Ubuntu 或 Linux Mint ####
+
+    $ sudo vi /etc/apache2/sites-available/cachet.conf
+
+----------
+
+    <VirtualHost *:80>
+        ServerName cachethost
+        ServerAlias cachethost
+        DocumentRoot "/var/www/cachet/public"
+        <Directory "/var/www/cachet/public">
+            Require all granted
+            Options Indexes FollowSymLinks
+            AllowOverride All
+            Order allow,deny
+            Allow from all
+        </Directory>
+    </VirtualHost>
+
+启用新虚拟主机和 mod_rewrite:
+
+    $ sudo a2ensite cachet.conf
+    $ sudo a2enmod rewrite
+    $ sudo service apache2 restart
+
+#### Fedora, CentOS 或 RHEL ####
+
+在基于 Red Hat 系统上,创建一个虚拟主机文件,如下:
+
+    $ sudo vi /etc/httpd/conf.d/cachet.conf
+
+----------
+
+    <VirtualHost *:80>
+        ServerName cachethost
+        ServerAlias cachethost
+        DocumentRoot "/var/www/cachet/public"
+        <Directory "/var/www/cachet/public">
+            Require all granted
+            Options Indexes FollowSymLinks
+            AllowOverride All
+            Order allow,deny
+            Allow from all
+        </Directory>
+    </VirtualHost>
+
+现在重载 Apache 配置:
+
+    $ sudo systemctl reload httpd.service
+
+### 第五步:配置 /etc/hosts 来测试 Cachet ###
+
+这时候,初始的 Cachet 状态页面应该启动运行了,现在测试一下。
+
+由于 Cachet 被配置为 Apache HTTP 服务的虚拟主机,我们需要调整你的客户机的 /etc/hosts 来访问它。你将从这个客户端电脑访问 Cachet 页面。(LCTT 译注:如果你给了这个页面一个正式的主机地址,则不需要这一步。)
+
+打开 /etc/hosts,加入如下行:
+
+    $ sudo vi /etc/hosts
+
+----------
+
+    <Cachet 服务器的 IP 地址> cachethost
+
+上面名为“cachethost”必须匹配 Cachet 的 Apache 虚拟主机文件的 ServerName。
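+
+配置完成后,可以先在客户机上用 curl 快速确认服务有应答(示意命令,正常时第一行应返回 HTTP 状态行):
+
+    $ curl -sI http://cachethost | head -n 1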
+![](https://farm6.staticflickr.com/5653/20671482009_8629572886_c.jpg) + +你的状态页初始化就要完成了。 + +![](https://farm6.staticflickr.com/5692/20237229793_f6a48f379a_c.jpg) + +继续创建组件(你的系统单元)、事件或者任意你要做的维护计划。 + +例如,增加一个组件: + +![](https://farm6.staticflickr.com/5672/20848624752_9d2e0a07be_c.jpg) + +增加一个维护计划: + +公共 Cachet 状态页就像这样: + +![](https://farm1.staticflickr.com/577/20848624842_df68c0026d_c.jpg) + +集成了 SMTP,你可以在状态更新时发送邮件给订阅者。并且你可以使用 CSS 和 markdown 格式来完全自定义布局和状态页面。 + +### 结论 ### + +Cachet 是一个相当易于使用,自托管的状态页面软件。Cachet 一个高级特性是支持全 JSON API。使用它的 RESTful API,Cachet 可以轻松连接单独的监控后端(例如,[Nagios][6]),然后回馈给 Cachet 事件报告并自动更新状态。比起手工管理一个状态页它更快和有效率。 + +最后一句,我喜欢提及一个事。用 Cachet 设置一个漂亮的状态页面是很简单的,但要将这个软件用好并不像安装它那么容易。你需要完全保障所有 IT 团队习惯准确及时的更新状态页,从而建立公共信息的准确性。同时,你需要教用户去查看状态页面。最后,如果没有很好的填充数据,部署状态页面就没有意义,并且/或者没有一个人查看它。记住这个,尤其是当你考虑在你的工作环境中部署 Cachet 时。 + +### 故障排查 ### + +补充,万一你安装 Cachet 时遇到问题,这有一些有用的故障排查的技巧。 + +1. Cachet 页面没有加载任何东西,并且你看到如下报错。 + + production.ERROR: exception 'RuntimeException' with message 'No supported encrypter found. The cipher and / or key length are invalid.' in /var/www/cachet/bootstrap/cache/compiled.php:6695 + + **解决方案**:确保你创建了一个应用密钥,以及明确配置缓存如下所述。 + + $ cd /path/to/cachet + $ sudo php artisan key:generate + $ sudo php artisan config:cache + +2. 调用 composer 命令时有如下报错。 + + - danielstjules/stringy 1.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. + - laravel/framework v5.1.8 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. + - league/commonmark 0.10.0 requires ext-mbstring * -the requested PHP extension mbstring is missing from your system. + + **解决方案**:确保在你的系统上安装了必要的 PHP 扩展 mbstring ,并且兼容你的 PHP 版本。在基于 Red Hat 的系统上,由于我们从 REMI-56 库安装PHP,所以要从同一个库安装扩展。 + + $ sudo yum --enablerepo=remi-php56 install php-mbstring + +3. 
你访问 Cachet 状态页面时得到一个白屏。HTTP 日志显示如下错误。 + + PHP Fatal error: Uncaught exception 'UnexpectedValueException' with message 'The stream or file "/var/www/cachet/storage/logs/laravel-2015-08-21.log" could not be opened: failed to open stream: Permission denied' in /var/www/cachet/bootstrap/cache/compiled.php:12851 + + **解决方案**:尝试如下命令。 + + $ cd /var/www/cachet + $ sudo php artisan cache:clear + $ sudo chmod -R 777 storage + $ sudo composer dump-autoload + + 如果上面的方法不起作用,试试禁止SELinux: + + $ sudo setenforce 0 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/setup-system-status-page.html + +作者:[Dan Nanni][a] +译者:[wyangsun](https://github.com/wyangsun) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:https://cachethq.io/ +[2]:http://xmodulo.com/install-lamp-stack-ubuntu-server.html +[3]:https://linux.cn/article-4192-1.html +[4]:https://linux.cn/article-5789-1.html +[5]:http://xmodulo.com/configure-virtual-hosts-apache-http-server.html +[6]:http://xmodulo.com/monitor-common-services-nagios.html diff --git a/published/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md b/published/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md new file mode 100644 index 0000000000..3737b88438 --- /dev/null +++ b/published/20150901 How to Install or Upgrade to Linux Kernel 4.2 in Ubuntu.md @@ -0,0 +1,86 @@ +在 Ubuntu 中如何安装或升级 Linux 内核到4.2 +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) + +Linux 内核 4.2已经发布了。Linus Torvalds 在 [lkml.org][1] 上写到: + +> 通过这周这么小的变动,看来在最后一周 发布 4.2 版本应该不会有问题,当然还有几个修正,但是看起来也并不需要延迟一周。 +> 所以这就到了,而且 4.3 的合并窗口现已打开。我已经有了几个等待处理的合并请求,明天我开始处理它们,然后在适当的时候放出来。 +> 从 rc8 以来的简短日志很小,已经附加。这个补丁也很小... + +### 新内核 4.2 有哪些改进?: ### + +- 重写英特尔的x86汇编代码 +- 支持新的 ARM 板和 SoC +- 对 F2FS 的 per-file 加密 +- AMDGPU 的内核 DRM 驱动程序 +- 对 Radeon DRM 驱动的 VCE1 视频编码支持 +- 初步支持英特尔的 Broxton Atom SoC +- 支持 ARCv2 和 HS38 CPU 内核 +- 增加了队列自旋锁的支持 +- 许多其他的改进和驱动更新。 + +### 在 Ubuntu 中如何下载4.2内核 : ### + +此内核版本的二进制包可供下载链接如下: + +- [下载 4.2 内核(.DEB)][1] + +首先检查你的操作系统类型,32位(i386)的或64位(amd64)的,然后使用下面的方式依次下载并安装软件包: + +1. linux-headers-4.2.0-xxx_all.deb +1. linux-headers-4.2.0-xxx-generic_xxx_i386/amd64.deb +1. linux-image-4.2.0-xxx-generic_xxx_i386/amd64.deb + +安装内核后,在终端((Ctrl+Alt+T))运行`sudo update-grub`命令来更新 grub boot-loader。 + +如果你需要一个低延迟系统(例如用于录制音频),请下载并安装下面的包: + +1. linux-headers-4.2.0_xxx_all.deb +1. linux-headers-4.2.0-xxx-lowlatency_xxx_i386/amd64.deb +1. 
linux-image-4.2.0-xxx-lowlatency_xxx_i386/amd64.deb + +对于没有图形用户界面的 Ubuntu 服务器,你可以运行下面的命令通过 wget 来逐一抓下载,并通过 dpkg 来安装: + +对于64位的系统请运行: + + cd /tmp/ + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200_4.2.0-040200.201508301530_all.deb + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200-generic_4.2.0-040200.201508301530_amd64.deb + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-image-4.2.0-040200-generic_4.2.0-040200.201508301530_amd64.deb + + sudo dpkg -i linux-headers-4.2.0-*.deb linux-image-4.2.0-*.deb + +对于32位的系统,请运行: + + cd /tmp/ + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200_4.2.0-040200.201508301530_all.deb + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-headers-4.2.0-040200-generic_4.2.0-040200.201508301530_i386.deb + + wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/linux-image-4.2.0-040200-generic_4.2.0-040200.201508301530_i386.deb + + sudo dpkg -i linux-headers-4.2.0-*.deb linux-image-4.2.0-*.deb + +最后,重新启动计算机才能生效。 + +要恢复或删除旧的内核,请参阅[通过脚本安装内核][3]。 + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/08/upgrade-kernel-4-2-ubuntu/ + +作者:[Ji m][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:https://lkml.org/lkml/2015/8/30/96 +[2]:http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.2-unstable/ +[3]:http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ diff --git a/published/20150901 How to automatically dim your screen on Linux.md b/published/20150901 How to automatically dim your screen on Linux.md new file mode 100644 index 0000000000..1fcdc19d47 --- /dev/null +++ b/published/20150901 How to automatically dim your screen on Linux.md @@ -0,0 +1,53 @@ +如何在 Linux 上自动调整屏幕亮度保护眼睛 +================================================================================ + +当你开始在计算机前花费大量时间的时候,问题自然开始显现。这健康吗?怎样才能舒缓我眼睛的压力呢?为什么光线灼烧着我?尽管解答这些问题的研究仍然在不断进行着,许多程序员已经采用了一些应用来改变他们的日常习惯,让他们的眼睛更健康点。在这些应用中,我发现了两个特别有趣的东西:Calise和Redshift。 + +### Calise ### + +处于时断时续的开发中,[Calise][1]的意思是“相机光感应器(Camera Light Sensor)”。换句话说,它是一个根据摄像头接收到的光强度计算屏幕最佳的背光级别的开源程序。更进一步地说,Calise可以基于你的地理坐标来考虑你所在地区的天气。我喜欢它是因为它兼容各个桌面,甚至非X系列。 + +![](https://farm1.staticflickr.com/569/21016715646_6e1e95f066_o.jpg) + +它同时附带了命令行界面和图形界面,支持多用户配置,而且甚至可以导出数据为CSV。安装完后,你必须在见证奇迹前对它进行快速校正。 + +![](https://farm6.staticflickr.com/5770/21050571901_1e7b2d63ec_c.jpg) + +不怎么令人喜欢的是,如果你和我一样有被偷窥妄想症,在你的摄像头前面贴了一条胶带,那就会比较不幸了,这会大大影响Calise的精确度。除此之外,Calise还是个很棒的应用,值得我们关注和支持。正如我先前提到的,它在过去几年中经历了一段修修补补的艰难阶段,所以我真的希望这个项目继续开展下去。 + +![](https://farm1.staticflickr.com/633/21032989702_9ae563db1e_o.png) + +### Redshift ### + +如果你想过要减少由屏幕导致的眼睛的压力,那么你很可能听过f.lux,它是一个免费的专有软件,用于根据一天中的时间来修改显示器的亮度和配色。然而,如果真的偏好于开源软件,那么一个可选方案就是:[Redshift][2]。灵感来自f.lux,Redshift也可以改变配色和亮度来加强你夜间坐在屏幕前的体验。启动时,你可以使用经度和纬度来配置地理坐标,然后就可以让它在托盘中运行了。Redshift将根据太阳的位置平滑地调整你的配色或者屏幕。在夜里,你可以看到屏幕的色温调向偏暖色,这会让你的眼睛少遭些罪。 + +![](https://farm6.staticflickr.com/5823/20420303684_2b6e917fee_b.jpg) + +和Calise一样,它提供了一个命令行界面,同时也提供了一个图形客户端。要快速启动Redshift,只需使用命令: + + $ redshift -l [LAT]:[LON] + +替换[LAT]:[LON]为你的维度和经度。 + +然而,它也可以通过gpsd模块来输入你的坐标。对于Arch Linux用户,我推荐你读一读这个[维基页面][3]。 + +### 尾声 ### + 
+总而言之,Linux用户没有理由不去保护自己的眼睛,Calise和Redshift两个都很棒。我真希望它们的开发能够继续下去,让它们获得应有的支持。当然,还有比这两个更多的程序可以满足保护眼睛和保持健康的目的,但是我感觉Calise和Redshift会是一个不错的开端。 + +如果你有一个经常用来舒缓眼睛的压力的喜欢的程序,请在下面的评论中留言吧。 + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/automatically-dim-your-screen-linux.html + +作者:[Adrien Brochard][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/adrien +[1]:http://calise.sourceforge.net/ +[2]:http://jonls.dk/redshift/ +[3]:https://wiki.archlinux.org/index.php/Redshift#Automatic_location_based_on_GPS diff --git a/published/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md b/published/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md new file mode 100644 index 0000000000..5682e18a84 --- /dev/null +++ b/published/20150901 Setting Up High-Performance 'HHVM' and Nginx or Apache with MariaDB on Debian or Ubuntu.md @@ -0,0 +1,182 @@ +在 Ubuntu 上配置高性能的 HHVM 环境 +================================================================================ + +HHVM全称为 HipHop Virtual Machine,它是一个开源虚拟机,用来运行由 Hack(一种编程语言)和 PHP 开发应用。HHVM 在保证了 PHP 程序员最关注的高灵活性的要求下,通过使用最新的编译方式来取得了非凡的性能。到目前为止,相对于 PHP + [APC (Alternative PHP Cache)][1] ,HHVM 为 FaceBook 在 HTTP 请求的吞吐量上提高了9倍的性能,在内存的占用上,减少了5倍左右的内存占用。 + +同时,HHVM 也可以与基于 FastCGI 的 Web 服务器(如 Nginx 或者 Apache )协同工作。 + +![Install HHVM, Nginx and Apache with MariaDB](http://www.tecmint.com/wp-content/uploads/2015/08/Install-HHVM-Nginx-Apache-MariaDB.png) + +*安装 HHVM,Nginx和 Apache 还有 MariaDB* + +在本教程中,我们一起来配置 Nginx/Apache web 服务器、 数据库服务器 MariaDB 和 HHVM 。我们将使用 Ubuntu 15.04 (64 位),因为 HHVM 只能运行在64位系统上。同时,该教程也适用于 Debian 和 Linux Mint。 + +### 第一步: 安装 Nginx 或者 Apache 服务器 ### + +1、首先,先进行一次系统的升级并更新软件仓库列表,命令如下 + + # apt-get update && apt-get upgrade + +![System Upgrade](http://www.tecmint.com/wp-content/uploads/2015/08/System-Upgrade.png) + +*系统升级* + +2、 正如我之前说的,HHVM 能和 Nginx 和 Apache 进行集成。所以,究竟使用哪个服务器,这是你的自由,不过,我们会教你如何安装这两个服务器。 + +#### 安装 Nginx #### + +我们通过下面的命令安装 Nginx/Apache 服务器 + + # apt-get install nginx + +![Install Nginx Web Server](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Nginx-Web-Server.png) + +*安装 Nginx 服务器* + +#### 安装 Apache #### + + # apt-get install apache2 + +![Install Apache Web Server](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Apache-Web-Server.png) + +*安装 Apache 服务器* + +完成这一步,你能通过以下的链接看到 Nginx 或者 Apache 的默认页面 + + http://localhost + 或 + http://IP-Address + + +![Nginx Welcome Page](http://www.tecmint.com/wp-content/uploads/2015/08/Nginx-Welcome-Page.png) + +*Nginx 默认页面* + +![Apache Default Page](http://www.tecmint.com/wp-content/uploads/2015/08/Apache-Default-Page.png) + +*Apache 默认页面* + +### 第二步: 安装和配置 MariaDB ### + +3、 这一步,我们将通过如下命令安装 MariaDB,它是一个比 MySQL 性能更好的数据库 + + # apt-get install mariadb-client mariadb-server + +![Install MariaDB Database](http://www.tecmint.com/wp-content/uploads/2015/08/Install-MariaDB-Database.png) + +*安装 MariaDB* + +4、 在 MariaDB 成功安装之后,你可以启动它,并且设置 root 密码来保护数据库: + + + # systemctl start mysql + # mysql_secure_installation + +回答以下问题,只需要按下`y`或者 `n`并且回车。请确保你仔细的阅读过说明。 + + Enter current password for root (enter for none) = press enter + Set root password? 
[Y/n] = y + Remove anonymous users[y/n] = y + Disallow root login remotely[y/n] = y + Remove test database and access to it [y/n] = y + Reload privileges tables now[y/n] = y + +5、 在设置了密码之后,你就可以登录 MariaDB 了。 + + + # mysql -u root -p + + +### 第三步: 安装 HHVM ### + +6、 在此阶段,我们将安装 HHVM。我们需要添加 HHVM 的仓库到你的`sources.list`文件中,然后更新软件列表。 + + # wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | apt-key add - + # echo deb http://dl.hhvm.com/ubuntu DISTRIBUTION_VERSION main | sudo tee /etc/apt/sources.list.d/hhvm.list + # apt-get update + +**重要**:不要忘记用你的 Ubuntu 发行版代号替换上述的 DISTRIBUTION_VERSION (比如:lucid, precise, trusty) 或者是 Debian 的 jessie 或者 wheezy。在 Linux Mint 中也是一样的,不过只支持 petra。 + +添加了 HHVM 仓库之后,你就可以轻松安装了。 + + # apt-get install -y hhvm + +安装之后,就可以启动它,但是它并没有做到开机启动。可以用如下命令做到开机启动。 + + # update-rc.d hhvm defaults + +### 第四步: 配置 Nginx/Apache 连接 HHVM ### + +7、 现在,nginx/apache 和 HHVM 都已经安装完成了,并且都独立运行起来了,所以我们需要对它们进行设置,来让它们互相关联。这个关键的步骤,就是需要告知 nginx/apache 将所有的 php 文件,都交给 HHVM 进行处理。 + +如果你用了 Nginx,请按照如下步骤: + +nginx 的配置文件在 /etc/nginx/sites-available/default, 并且这些配置文件会在 /usr/share/nginx/html 中寻找文件执行,不过,它不知道如何处理 PHP。 + +为了确保 Nginx 可以连接 HHVM,我们需要执行所带的如下脚本。它可以帮助我们正确的配置 Nginx,将 hhvm.conf 放到 上面提到的配置文件 nginx.conf 的头部。 + +这个脚本可以确保 Nginx 可以对 .hh 和 .php 的做正确的处理,并且将它们通过 fastcgi 发送给 HHVM。 + + # /usr/share/hhvm/install_fastcgi.sh + +![Configure Nginx for HHVM](http://www.tecmint.com/wp-content/uploads/2015/08/Configure-Nginx-for-HHVM.png) + +*配置 Nginx、HHVM* + +**重要**: 如果你使用的是 Apache,这里不需要进行配置。 + +8、 接下来,你需要使用 hhvm 来提供 php 的运行环境。 + + # /usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60 + +以上步骤完成之后,你现在可以启动并且测试它了。 + + # systemctl start hhvm + +### 第五步: 测试 HHVM 和 Nginx/Apache ### + +9、 为了确认 hhvm 是否工作,你需要在 nginx/apache 的文档根目录下建立 hello.php。 + + # nano /usr/share/nginx/html/hello.php [对于 Nginx] + 或 + # nano /var/www/html/hello.php [对于 Nginx 和 Apache] + +在文件中添加如下代码: + + + +然后访问如下链接,确认自己能否看到 "hello world" + + http://localhost/info.php + 或 + http://IP-Address/info.php + +![HHVM Page](http://www.tecmint.com/wp-content/uploads/2015/08/HHVM-Page.png) + +*HHVM 页面* + +如果 “HHVM” 的页面出现了,那就说明你成功了。 + +### 结论 ### + +以上的步骤都是非常简单的,希望你能觉得这是一篇有用的教程,如果你在以上的步骤中遇到了问题,给我们留一个评论,我们将全力解决。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/install-hhvm-and-nginx-apache-with-mariadb-on-debian-ubuntu/ + +作者:[Ravi Saive][a] +译者:[MikeCoder](https://github.com/MikeCoder) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/admin/ +[1]:http://www.tecmint.com/install-apc-alternative-php-cache-in-rhel-centos-fedora/ diff --git a/published/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md b/published/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md new file mode 100644 index 0000000000..a2b540a8ad --- /dev/null +++ b/published/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md @@ -0,0 +1,313 @@ +RHCSA 系列(一): 回顾基础命令及系统文档 +================================================================================ + +RHCSA (红帽认证系统工程师) 是由 RedHat 公司举行的认证考试,这家公司给商业公司提供开源操作系统和软件,除此之外,还为这些企业和机构提供支持、训练以及咨询服务等。 + +![RHCSA Exam Guide](http://www.tecmint.com/wp-content/uploads/2015/02/RHCSA-Series-by-Tecmint.png) + +*RHCSA 考试准备指南* + +RHCSA 考试(考试编号 EX200)通过后可以获取由 RedHat 公司颁发的证书. 
RHCSA 考试是 RHCT(红帽认证技师)的升级版,而且 RHCSA 必须在新的 Red Hat Enterprise Linux(红帽企业版)下完成。RHCT 和 RHCSA 的主要变化就是 RHCT 基于 RHEL5,而 RHCSA 基于 RHEL6 或者7,这两个认证的等级也有所不同。 + +红帽认证管理员最起码可以在红帽企业版的环境下执行如下系统管理任务: + +- 理解并会使用命令管理文件、目录、命令行以及系统/软件包的文档 +- 在不同的启动等级操作运行中的系统,识别和控制进程,启动或停止虚拟机 +- 使用分区和逻辑卷管理本地存储 +- 创建并且配置本地文件系统和网络文件系统,设置他们的属性(权限、加密、访问控制表) +- 部署、配置、并且控制系统,包括安装、升级和卸载软件 +- 管理系统用户和组,以及使用集中制的 LDAP 目录进行用户验证 +- 确保系统安全,包括基础的防火墙规则和 SELinux 配置 + +关于你所在国家的考试注册和费用请参考 [RHCSA 认证页面][1]。 + +在这个有15章的 RHCSA(红帽认证管理员)备考系列中,我们将覆盖以下的关于红帽企业 Linux 第七版的最新的信息: + +- Part 1: 回顾基础命令及系统文档 +- Part 2: 在 RHEL7 中如何进行文件和目录管理 +- Part 3: 在 RHEL7 中如何管理用户和组 +- Part 4: 使用 nano 和 vim 管理命令,使用 grep 和正则表达式分析文本 +- Part 5: RHEL7 的进程管理:启动,关机,以及这之间的各种事情 +- Part 6: 使用 'Parted' 和 'SSM' 来管理和加密系统存储 +- Part 7: 使用 ACL(访问控制表)并挂载 Samba/NFS 文件分享 +- Part 8: 加固 SSH,设置主机名并开启网络服务 +- Part 9: 安装、配置和加固一个 Web 和 FTP 服务器 +- Part 10: Yum 包管理方式,使用 Cron 进行自动任务管理以及监控系统日志 +- Part 11: 使用 FirewallD 和 Iptables 设置防火墙,控制网络流量 +- Part 12: 使用 Kickstart 自动安装 RHEL 7 +- Part 13: RHEL7:什么是 SeLinux?他的原理是什么? +- Part 14: 在 RHEL7 中使用基于 LDAP 的权限控制 +- Part 15: 虚拟化基础和用KVM管理虚拟机 + +在第一章,我们讲解如何在终端或者 Shell 窗口输入和运行正确的命令,并且讲解如何找到、查阅,以及使用系统文档。 + +![RHCSA: Reviewing Essential Linux Commands – Part 1](http://www.tecmint.com/wp-content/uploads/2015/02/Reviewing-Essential-Linux-Commands.png) + +*RHCSA:回顾必会的 Linux 命令 - 第一部分* + +#### 前提: #### + +至少你要熟悉如下命令 + +- [cd 命令][2] (改变目录) +- [ls 命令][3] (列举文件) +- [cp 命令][4] (复制文件) +- [mv 命令][5] (移动或重命名文件) +- [touch 命令][6] (创建一个新的文件或更新已存在文件的时间表) +- rm 命令 (删除文件) +- mkdir 命令 (创建目录) + +在这篇文章中你将会找到更多的关于如何更好的使用他们的正确用法和特殊用法. + +虽然没有严格的要求,但是作为讨论常用的 Linux 命令和在 Linux 中搜索信息方法,你应该安装 RHEL7 来尝试使用文章中提到的命令。这将会使你学习起来更省力。 + +- [红帽企业版 Linux(RHEL)7 安装指南][7] + +### 使用 Shell 进行交互 ### + +如果我们使用文本模式登录 Linux,我们就会直接进入到我们的默认 shell 中。另一方面,如果我们使用图形化界面登录,我们必须通过启动一个终端来开启 shell。无论那种方式,我们都会看到用户提示符,并且我们可以在这里输入并且执行命令(当按下回车时,命令就会被执行)。 + +命令是由两个部分组成的: + +- 命令本身 +- 参数 + +某些参数,称为选项(通常使用一个连字符开头),会改变命令的行为方式,而另外一些则指定了命令所操作的对象。 + +type 命令可以帮助我们识别某一个特定的命令是由 shell 内置的还是由一个单独的包提供的。这样的区别在于我们能够在哪里找到更多关于该命令的更多信息。对 shell 内置的命令,我们需要看 shell 的手册页;如果是其他的,我们需要看软件包自己的手册页。 + +![Check Shell built in Commands](http://www.tecmint.com/wp-content/uploads/2015/02/Check-shell-built-in-Commands.png) + +*检查Shell的内置命令* + +在上面的例子中, `cd` 和 `type` 是 shell 内置的命令,`top` 和 `less` 是由 shell 之外的其他的二进制文件提供的(在这种情况下,type将返回命令的位置)。 + +其他的内置命令: + +- [echo 命令][8]: 展示字符串 +- [pwd 命令][9]: 输出当前的工作目录 + +![More Built in Shell Commands](http://www.tecmint.com/wp-content/uploads/2015/02/More-Built-in-Shell-Commands.png) + +*其它内置命令* + +**exec 命令** + +它用来运行我们指定的外部程序。请注意在多数情况下,只需要输入我们想要运行的程序的名字就行,不过` exec` 命令有一个特殊的特性:不是在 shell 之外创建新的进程运行,而是这个新的进程会替代原来的 shell,可以通过下列命令来验证。 + + # ps -ef | grep [shell 进程的PID] + +当新的进程终止时,Shell 也随之终止。运行 `exec top` ,然后按下 `q` 键来退出 top,你会注意到 shell 会话也同时终止,如下面的屏幕录像展示的那样: + + + +**export 命令** + +给之后执行的命令的输出环境变量。 + +**history 命令** + +展示数行之前的历史命令。命令编号前面前缀上感叹号可以再次执行这个命令。如果我们需要编辑历史列表中的命令,我们可以按下 `Ctrl + r` 并输入与命令相关的第一个字符。我们可以看到的命令会自动补全,可以根据我们目前的需要来编辑它: + + + +命令列表会保存在一个叫 `.bash_history` 的文件里。`history` 命令是一个非常有用的用于减少输入次数的工具,特别是进行命令行编辑的时候。默认情况下,bash 保留最后输入的500个命令,不过可以通过修改 HISTSIZE 环境变量来增加: + +![Linux history Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-history-Command.png) + +*Linux history 命令* + +但上述变化,在我们的下一次启动不会保留。为了保持 HISTSIZE 变量的变化,我们需要通过手工修改文件编辑: + + # 要设置 history 长度,请看 bash(1)文档中的 HISTSIZE 和 HISTFILESIZE + HISTSIZE=1000 + +**重要**: 我们的更改不会立刻生效,除非我们重启了 shell 。 + +**alias 命令** + +没有参数或使用 `-p` 选项时将会以“名称=值”的标准形式输出别名列表。当提供了参数时,就会按照给定的名字和值定义一个别名。 + +使用 `alias` ,我们可以创建我们自己的命令,或使用所需的参数修改现有的命令。举个例子,假设我们将 `ls` 定义别名为 `ls –color=auto` 
,这样就可以使用不同颜色输出文件、目录、链接等等。 + + + # alias ls='ls --color=auto' + +![Linux alias Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-alias-Command.png) + +*Linux 别名命令* + +**注意**: 你可以给你的“新命令”起任何的名字,并且使用单引号包括很多命令,但是你要用分号区分开它们。如下: + + # alias myNewCommand='cd /usr/bin; ls; cd; clear' + +**exit 命令** + +`exit` 和 `logout` 命令都可以退出 shell 。`exit` 命令可以退出所有的 shell,`logout` 命令只注销登录的 shell(即你用文本模式登录时自动启动的那个)。 + +**man 和 info 命令** +如果你对某个程序有疑问,可以参考它的手册页,可以使用 `man` 命令调出它。此外,还有一些关于重要文件(inittab、fstab、hosts 等等)、库函数、shell、设备及其他功能的手册页。 + +举例: + +- man uname (输出系统信息,如内核名称、处理器、操作系统类型、架构等) +- man inittab (初始化守护进程的设置) + +另外一个重要的信息的来源是由 `info` 命令提供的,`info` 命令常常被用来读取 info 文件。这些文件往往比手册页 提供了更多信息。可以通过 `info keyword` 调用某个命令的信息: + + # info ls + # info cut + +另外,在 `/usr/share/doc` 文件夹包含了大量的子目录,里面可以找到大量的文档。它们是文本文件或其他可读格式。 + +你要习惯于使用这三种方法去查找命令的信息。重点关注每个命令文档中介绍的详细的语法。 + +**使用 expand 命令把制表符转换为空格** + +有时候文本文档包含了制表符,但是程序无法很好的处理。或者我们只是简单的希望将制表符转换成空格。这就是用到 `expand` 地方(由GNU核心组件包提供) 。 + +举个例子,我们有个文件 NumberList.txt,让我们使用 `expand` 处理它,将制表符转换为一个空格,并且显示在标准输出上。 + + # expand --tabs=1 NumbersList.txt + +![Linux expand Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-expand-Command.png) + +*Linux expand 命令* + +unexpand命令可以实现相反的功能(将空格转为制表符) + +**使用 head 输出文件首行及使用 tail 输出文件尾行** + +通常情况下,`head` 命令后跟着文件名时,将会输出该文件的前十行,我们可以通过 `-n` 参数来自定义具体的行数。 + + # head -n3 /etc/passwd + # tail -n3 /etc/passwd + +![Linux head and tail Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-head-and-tail-Command.png) + +*Linux 的 head 和 tail 命令* + +`tail` 最有意思的一个特性就是能够显示增长的输入文件(`tail -f my.log`,my.log 是我们需要监视的文件。)这在我们监控一个持续增加的日志文件时非常有用。 + +- [使用 head 和 tail 命令有效地管理文件][10] + +**使用 paste 按行合并文本文件** + +`paste` 命令一行一行的合并文件,默认会以制表符来区分每个文件的行,或者你可以自定义的其它分隔符。(下面的例子就是输出中的字段使用等号分隔)。 + + # paste -d= file1 file2 + +![Merge Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Merge-Files-in-Linux-with-paste-command.png) + +*Linux 中的 merge 命令* + +**使用 split 命令将文件分块** + +`split` 命令常常用于把一个文件切割成两个或多个由我们自定义的前缀命名的文件。可以根据大小、区块、行数等进行切割,生成的文件会有一个数字或字母的后缀。在下面的例子中,我们将切割 bash.pdf ,每个文件 50KB (-b 50KB),使用数字后缀 (-d): + + # split -b 50KB -d bash.pdf bash_ + +![Split Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Split-Files-in-Linux-with-split-command.png) + +*在 Linux 下切割文件* + +你可以使用如下命令来合并这些文件,生成原来的文件: + + # cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf + +**使用 tr 命令替换字符** + +`tr` 命令多用于一对一的替换(改变)字符,或者使用字符范围。和之前一样,下面的实例我们将使用之前的同样文件file2,我们将做: + +- 小写字母 o 变成大写 +- 所有的小写字母都变成大写字母 + +- + # cat file2 | tr o O + # cat file2 | tr [a-z] [A-Z] + +![Translate Characters in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Translate-characters-in-Linux-with-tr-command.png) + +*在 Linux 中替换字符* + +**使用 uniq 和 sort 检查或删除重复的文字** + +`uniq` 命令可以帮我们查出或删除文件中的重复的行,默认会输出到标准输出,我们应当注意,`uniq`只能查出相邻的相同行,所以,`uniq` 往往和 `sort` 一起使用(`sort` 一般用于对文本文件的内容进行排序) + +默认情况下,`sort` 以第一个字段(使用空格分隔)为关键字段。想要指定不同关键字段,我们需要使用 -k 参数,请注意如何使用 `sort` 和 `uniq` 输出我们想要的字段,具体可以看下面的例子: + + # cat file3 + # sort file3 | uniq + # sort -k2 file3 | uniq + # sort -k3 file3 | uniq + +![删除文件中重复的行](http://www.tecmint.com/wp-content/uploads/2015/02/Remove-Duplicate-Lines-in-file.png) + +*删除文件中重复的行* + +**从文件中提取文本的命令** + +`cut` 命令基于字节(-b)、字符(-c)、或者字段(-f)的数量,从输入文件(标准输入或文件)中提取到的部分将会以标准输出上。 + +当我们使用字段 `cut` 时,默认的分隔符是一个制表符,不过你可以通过 -d 参数来自定义分隔符。 + + # cut -d: -f1,3 /etc/passwd # 这个例子提取了第一和第三字段的文本 + # cut -d: -f2-4 /etc/passwd # 这个例子提取了第二到第四字段的文本 + +![从文件中提取文本](http://www.tecmint.com/wp-content/uploads/2015/02/Extract-Text-from-a-file.png) + +*从文件中提取文本* + 
+注意,简洁起见,上方的两个输出的结果是截断的。 + +**使用 fmt 命令重新格式化文件** + +`fmt` 被用于去“清理”有大量内容或行的文件,或者有多级缩进的文件。新的段落格式每行不会超过75个字符宽,你能通过 -w (width 宽度)参数改变这个设定,它可以设置行宽为一个特定的数值。 + +举个例子,让我们看看当我们用 `fmt` 显示定宽为100个字符的时候的文件 /etc/passwd 时会发生什么。再次,输出截断了。 + + # fmt -w100 /etc/passwd + +![File Reformatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Reformatting-in-Linux-with-fmt-command.png) + +*Linux 文件重新格式化* + +**使用 pr 命令格式化打印内容** + +`pr` 分页并且在按列或多列的方式显示一个或多个文件。 换句话说,使用 `pr` 格式化一个文件使它打印出来时看起来更好。举个例子,下面这个命令: + + # ls -a /etc | pr -n --columns=3 -h "Files in /etc" + +以一个友好的排版方式(3列)输出/etc下的文件,自定义了页眉(通过 -h 选项实现)、行号(-n)。 + +![File Formatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Formatting-in-Linux-with-pr-command.png) + +*Linux的文件格式化* + +### 总结 ### + +在这篇文章中,我们已经讨论了如何在 Shell 或终端以正确的语法输入和执行命令,并解释如何找到,查阅和使用系统文档。正如你看到的一样简单,这就是你成为 RHCSA 的第一大步。 + +如果你希望添加一些其他的你经常使用的能够有效帮你完成你的日常工作的基础命令,并愿意分享它们,请在下方留言。也欢迎提出问题。我们期待您的回复。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ + +作者:[Gabriel Cánepa][a] +译者:[xiqingongzi](https://github.com/xiqingongzi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:https://www.redhat.com/en/services/certification/rhcsa +[2]:http://linux.cn/article-2479-1.html +[3]:https://linux.cn/article-5109-1.html +[4]:http://linux.cn/article-2687-1.html +[5]:http://www.tecmint.com/rename-multiple-files-in-linux/ +[6]:http://linux.cn/article-2740-1.html +[7]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/ +[8]:https://linux.cn/article-3948-1.html +[9]:https://linux.cn/article-3422-1.html +[10]:http://www.tecmint.com/view-contents-of-file-in-linux/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 02--How to Perform File and Directory Management.md b/published/RHCSA Series--Part 02--How to Perform File and Directory Management.md similarity index 59% rename from translated/tech/RHCSA/RHCSA Series--Part 02--How to Perform File and Directory Management.md rename to published/RHCSA Series--Part 02--How to Perform File and Directory Management.md index f46fd93321..8751949b40 100644 --- a/translated/tech/RHCSA/RHCSA Series--Part 02--How to Perform File and Directory Management.md +++ b/published/RHCSA Series--Part 02--How to Perform File and Directory Management.md @@ -1,68 +1,63 @@ -RHCSA 系列: 如何执行文件并进行文件管理 – Part 2 +RHCSA 系列(二): 如何进行文件和目录管理 ================================================================================ -在本篇(RHCSA 第二篇:文件和目录管理)中,我们江回顾一些系统管理员日常任务需要的技能 +在本篇中,我们将回顾一些系统管理员日常任务需要的技能。 ![RHCSA: Perform File and Directory Management – Part 2](http://www.tecmint.com/wp-content/uploads/2015/03/RHCSA-Part2.png) +*RHCSA: 运行文件以及进行文件夹管理 - 第二部分* -RHCSA : 运行文件以及进行文件夹管理 - 第二章 -### 创建,删除,复制和移动文件及目录 ### +### 创建、删除、复制和移动文件及目录 ### -文件和目录管理是每一个系统管理员都应该掌握的必要的技能.它包括了从头开始的创建、删除文本文件(每个程序的核心配置)以及目录(你用来组织文件和其他目录),以及识别存在的文件的类型 +文件和目录管理是每一个系统管理员都应该掌握的必备技能。它包括了从头开始的创建、删除文本文件(每个程序的核心配置)以及目录(你用来组织文件和其它目录),以及识别已有文件的类型。 - [touch 命令][1] 不仅仅能用来创建空文件,还能用来更新已存在的文件的权限和时间表 +[`touch` 命令][1] 不仅仅能用来创建空文件,还能用来更新已有文件的访问时间和修改时间。 ![touch command example](http://www.tecmint.com/wp-content/uploads/2015/03/touch-command-example.png) -touch 命令示例 +*touch 命令示例* -你可以使用 `file [filename]`来判断一个文件的类型 (在你用文本编辑器编辑之前,判断类型将会更方便编辑). 
+你可以使用 `file [filename]`来判断一个文件的类型 (在你用文本编辑器编辑之前,判断类型将会更方便编辑)。 ![file command example](http://www.tecmint.com/wp-content/uploads/2015/03/file-command-example.png) -file 命令示例 +*file 命令示例* -使用`rm [filename]` 可以删除文件 +使用`rm [filename]` 可以删除文件。 ![Linux rm command examples](http://www.tecmint.com/wp-content/uploads/2015/03/rm-command-examples.png) -rm 命令示例 - -对于目录,你可以使用`mkdir [directory]`在已经存在的路径中创建目录,或者使用 `mkdir -p [/full/path/to/directory].`带全路径创建文件夹 +*rm 命令示例* +对于目录,你可以使用`mkdir [directory]`在已经存在的路径中创建目录,或者使用 `mkdir -p [/full/path/to/directory]`带全路径创建文件夹。 ![mkdir command example](http://www.tecmint.com/wp-content/uploads/2015/03/mkdir-command-example.png) -mkdir 命令示例 +*mkdir 命令示例* -当你想要去删除目录时,在你使用`rmdir [directory]` 前,你需要先确保目录是空的,或者使用更加强力的命令(小心使用它)`rm -rf [directory]`.后者会强制删除`[directory]`以及他的内容.所以使用这个命令存在一定的风险 +当你想要去删除目录时,在你使用`rmdir [directory]` 前,你需要先确保目录是空的,或者使用更加强力的命令(小心使用它!)`rm -rf [directory]`。后者会强制删除`[directory]`以及它的内容,所以使用这个命令存在一定的风险。 ### 输入输出重定向以及管道 ### -命令行环境提供了两个非常有用的功能:允许命令重定向的输入和输出到文件和发送到另一个文件,分别称为重定向和管道 +命令行环境提供了两个非常有用的功能:允许重定向命令的输入和输出为另一个文件,以及发送命令的输出到另一个命令,这分别称为重定向和管道。 -To understand those two important concepts, we must first understand the three most important types of I/O (Input and Output) streams (or sequences) of characters, which are in fact special files, in the *nix sense of the word. -为了理解这两个重要概念,我们首先需要理解通常情况下三个重要的输入输出流的形式 +为了理解这两个重要概念,我们首先需要理解三个最重要的字符输入输出流类型,以 *nix 的话来说,它们实际上是特殊的文件。 -- 标准输入 (aka stdin) 是指默认使用键盘链接. 换句话说,键盘是输入命令到命令行的标准输入设备。 -- 标准输出 (aka stdout) 是指默认展示再屏幕上, 显示器接受输出命令,并且展示在屏幕上。 -- 标准错误 (aka stderr), 是指命令的状态默认输出, 同时也会展示在屏幕上 +- 标准输入 (即 stdin),默认连接到键盘。 换句话说,键盘是输入命令到命令行的标准输入设备。 +- 标准输出 (即 stdout),默认连接到屏幕。 找个设备“接受”命令的输出,并展示到屏幕上。 +- 标准错误 (即 stderr),默认是命令的状态消息出现的地方,它也是屏幕。 -In the following example, the output of `ls /var` is sent to stdout (the screen), as well as the result of ls /tecmint. But in the latter case, it is stderr that is shown. -在下面的例子中,`ls /var`的结果被发送到stdout(屏幕展示),就像ls /tecmint 的结果。但在后一种情况下,它是标准错误输出。 +在下面的例子中,`ls /var`的结果被发送到stdout(屏幕展示),ls /tecmint 的结果也一样。但在后一种情况下,它显示在标准错误输出上。 ![Linux input output redirect](http://www.tecmint.com/wp-content/uploads/2015/03/Linux-input-output-redirect.png) -输入和输出命令实例 -为了更容易识别这些特殊文件,每个文件都被分配有一个文件描述符(用于控制他们的抽象标识)。主要要理解的是,这些文件就像其他人一样,可以被重定向。这就意味着你可以从一个文件或脚本中捕获输出,并将它传送到另一个文件、命令或脚本中。你就可以在在磁盘上存储命令的输出结果,用于稍后的分析 +*输入和输出命令实例* -To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operators are available. +为了更容易识别这些特殊文件,每个文件都被分配有一个文件描述符,这是用于访问它们的抽象标识。主要要理解的是,这些文件就像其它的一样,可以被重定向。这就意味着你可以从一个文件或脚本中捕获输出,并将它传送到另一个文件、命令或脚本中。这样你就可以在磁盘上存储命令的输出结果,用于稍后的分析。 -注:表格 - - - +要重定向 stdin (fd 0)、 stdout (fd 1) 或 stderr (fd 2),可以使用如下操作符。 + +
@@ -70,102 +65,98 @@ To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operato - + - + - + - + - + - + - +
转向操作
>标准输出到一个文件。如果目标文件存在,内容就会被重写重定向标准输出到一个文件。如果目标文件存在,内容就会被重写。
>>添加标准输出到文件尾部添加标准输出到文件尾部。
2>标准错误输出到一个文件。如果目标文件存在,内容就会被重写重定向标准错误输出到一个文件。如果目标文件存在,内容就会被重写。
2>>添加标准错误输出到文件尾部.添加标准错误输出到文件尾部。
&>标准错误和标准输出都到一个文件。如果目标文件存在,内容就会被重写重定向标准错误和标准输出到一个文件。如果目标文件存在,内容就会被重写。
<使用特定的文件做标准输出使用特定的文件做标准输入。
<>使用特定的文件做标准输出和标准错误使用特定的文件做标准输入和标准输出。
- -相比与重定向,管道是通过在命令后添加一个竖杠`(|)`再添加另一个命令 . +与重定向相比,管道是通过在命令后和另外一个命令前之间添加一个竖杠`(|)`。 记得: -- 重定向是用来定向命令的输出到一个文件,或定向一个文件作为输入到一个命令。 -- 管道是用来将命令的输出转发到另一个命令作为输入。 +- *重定向*是用来定向命令的输出到一个文件,或把一个文件发送作为到一个命令的输入。 +- *管道*是用来将命令的输出转发到另一个命令作为其输入。 #### 重定向和管道的使用实例 #### -** 例1:将一个命令的输出到文件 ** +**例1:将一个命令的输出到文件** -有些时候,你需要遍历一个文件列表。要做到这样,你可以先将该列表保存到文件中,然后再按行读取该文件。虽然你可以遍历直接ls的输出,不过这个例子是用来说明重定向。 +有些时候,你需要遍历一个文件列表。要做到这样,你可以先将该列表保存到文件中,然后再按行读取该文件。虽然你可以直接遍历ls的输出,不过这个例子是用来说明重定向。 # ls -1 /var/mail > mail.txt ![Redirect output of command tot a file](http://www.tecmint.com/wp-content/uploads/2015/03/Redirect-output-to-a-file.png) -将一个命令的输出到文件 +*将一个命令的输出重定向到文件* -** 例2:重定向stdout和stderr到/dev/null ** +**例2:重定向stdout和stderr到/dev/null** -如果不想让标准输出和标准错误展示在屏幕上,我们可以把文件描述符重定向到 `/dev/null` 请注意在执行这个命令时该如何更改输出 +如果不想让标准输出和标准错误展示在屏幕上,我们可以把这两个文件描述符重定向到 `/dev/null`。请注意对于同样的命令,重定向是如何改变了输出。 # ls /var /tecmint # ls /var/ /tecmint &> /dev/null ![Redirecting stdout and stderr ouput to /dev/null](http://www.tecmint.com/wp-content/uploads/2015/03/Redirecting-stdout-stderr-ouput.png) -重定向stdout和stderr到/dev/null +*重定向stdout和stderr到/dev/null* -#### 例3:使用一个文件作为命令的输入 #### +**例3:使用一个文件作为命令的输入** -当官方的[cat 命令][2]的语法如下时 +[cat 命令][2]的经典用法如下 # cat [file(s)] -您还可以使用正确的重定向操作符传送一个文件作为输入。 +您还可以使用正确的重定向操作符发送一个文件作为输入。 # cat < mail.txt ![Linux cat command examples](http://www.tecmint.com/wp-content/uploads/2015/03/cat-command-examples.png) -cat 命令实例 +*cat 命令实例* -#### 例4:发送一个命令的输出作为另一个命令的输入 #### +**例4:发送一个命令的输出作为另一个命令的输入** -如果你有一个较大的目录或进程列表,并且想快速定位,你或许需要将列表通过管道传送给grep +如果你有一个较大的目录或进程列表,并且想快速定位,你或许需要将列表通过管道传送给grep。 -接下来我们使用管道在下面的命令中,第一个是查找所需的关键词,第二个是除去产生的 `grep command`.这个例子列举了所有与apache用户有关的进程 +接下来我们会在下面的命令中使用管道,第一个管道是查找所需的关键词,第二个管道是除去产生的 `grep command`。这个例子列举了所有与apache用户有关的进程: # ps -ef | grep apache | grep -v grep ![Send output of command as input to another](http://www.tecmint.com/wp-content/uploads/2015/03/Send-output-of-command-as-input-to-another1.png) -发送一个命令的输出作为另一个命令的输入 +*发送一个命令的输出作为另一个命令的输入* ### 归档,压缩,解包,解压文件 ### -如果你需要传输,备份,或者通过邮件发送一组文件,你可以使用一个存档(或文件夹)如 [tar][3]工具,通常使用gzip,bzip2,或XZ压缩工具. +如果你需要传输、备份、或者通过邮件发送一组文件,你可以使用一个存档(或打包)工具,如 [tar][3],通常与gzip,bzip2,或 xz 等压缩工具配合使用。 -您选择的压缩工具每一个都有自己的定义的压缩速度和速率的。这三种压缩工具,gzip是最古老和提供最小压缩的工具,bzip2提供经过改进的压缩,以及XZ提供最信和最好的压缩。通常情况下,这些文件都是被压缩的如.gz .bz2或.xz -注:表格 - - - - +您选择的压缩工具每一个都有自己不同的压缩速度和压缩率。这三种压缩工具,gzip是最古老和可以较小压缩的工具,bzip2提供经过改进的压缩,以及xz是最新的而且压缩最大。通常情况下,使用这些压缩工具压缩的文件的扩展名依次是.gz、.bz2或.xz。 + +
@@ -180,12 +171,12 @@ cat 命令实例 - + - + @@ -195,26 +186,22 @@ cat 命令实例 - + - + - +
命令
–concatenate A向归档中添加tar文件添加tar归档到另外一个归档中
–append r向归档中添加非tar文件添加非tar归档到另外一个归档中
–update
–diff or –compare d将归档和硬盘的文件夹进行对比将归档中的文件和硬盘的文件进行对比
–list t列举一个tar的压缩包列举一个tar压缩包的内容
–extract or –get x从归档中解压文件从归档中提取文件
-注:表格 - - - - +
@@ -234,34 +221,34 @@ cat 命令实例 - + - + - + - + - +
操作参数
–verbose v列举所有文件用于读取或提取,这里包含列表,并显示文件的大小、所有权和时间戳列举所有读取或提取的文件,如果和 --list 参数一起使用,也会显示文件的大小、所有权和时间戳
exclude file 排除存档文件。在这种情况下,文件可以是一个实际的文件或目录。从存档中排除文件。在这种情况下,文件可以是一个实际的文件或匹配模式。
gzip or gunzip z使用gzip压缩文件使用gzip压缩归档
–bzip2 j使用bzip2压缩文件使用bzip2压缩归档
–xz J使用xz压缩文件使用xz压缩归档
-#### 例5:创建一个文件,然后使用三种压缩工具压缩#### +**例5:创建一个tar文件,然后使用三种压缩工具压缩** -在决定使用一个或另一个工具之前,您可能想比较每个工具的压缩效率。请注意压缩小文件或几个文件,结果可能不会有太大的差异,但可能会给你看出他们的差异 +在决定使用这个还是那个工具之前,您可能想比较每个工具的压缩效率。请注意压缩小文件或几个文件,结果可能不会有太大的差异,但可能会给你看出它们的差异。 # tar cf ApacheLogs-$(date +%Y%m%d).tar /var/log/httpd/* # Create an ordinary tarball # tar czf ApacheLogs-$(date +%Y%m%d).tar.gz /var/log/httpd/* # Create a tarball and compress with gzip @@ -270,42 +257,42 @@ cat 命令实例 ![Linux tar command examples](http://www.tecmint.com/wp-content/uploads/2015/03/tar-command-examples.png) -tar 命令实例 +*tar 命令实例* -#### 例6:归档时同时保存原始权限和所有权 #### +**例6:归档时同时保存原始权限和所有权** -如果你创建的是用户的主目录的备份,你需要要存储的个人文件与原始权限和所有权,而不是通过改变他们的用户帐户或守护进程来执行备份。下面的命令可以在归档时保留文件属性 +如果你正在从用户的主目录创建备份,你需要要存储的个人文件与原始权限和所有权,而不是通过改变它们的用户帐户或守护进程来执行备份。下面的命令可以在归档时保留文件属性。 # tar cJf ApacheLogs-$(date +%Y%m%d).tar.xz /var/log/httpd/* --same-permissions --same-owner ### 创建软连接和硬链接 ### -在Linux中,有2种类型的链接文件:硬链接和软(也称为符号)链接。因为硬链接文件代表另一个名称是由同一点确定,然后链接到实际的数据;符号链接指向的文件名,而不是实际的数据 +在Linux中,有2种类型的链接文件:硬链接和软(也称为符号)链接。因为硬链接文件只是现存文件的另一个名字,使用相同的 inode 号,它指向实际的数据;而符号链接只是指向的文件名。 -此外,硬链接不占用磁盘上的空间,而符号链接做占用少量的空间来存储的链接本身的文本。硬链接的缺点就是要求他们必须在同一个innode内。而符号链接没有这个限制,符号链接因为只保存了文件名和目录名,所以可以跨文件系统. +此外,硬链接不占用磁盘上的空间,而符号链接则占用少量的空间来存储的链接本身的文本。硬链接的缺点就是要求它们必须在同一个文件系统内,因为 inode 在一个文件系统内是唯一的。而符号链接没有这个限制,它们通过文件名而不是 inode 指向其它文件或目录,所以可以跨文件系统。 创建链接的基本语法看起来是相似的: # ln TARGET LINK_NAME #从Link_NAME到Target的硬链接 # ln -s TARGET LINK_NAME #从Link_NAME到Target的软链接 -#### 例7:创建硬链接和软链接 #### +**例7:创建硬链接和软链接** -没有更好的方式来形象的说明一个文件和一个指向它的符号链接的关系,而不是创建这些链接。在下面的截图中你会看到文件的硬链接指向它共享相同的节点都是由466个字节的磁盘使用情况确定。 +没有更好的方式来形象的说明一个文件和一个指向它的硬链接或符号链接的关系,而不是创建这些链接。在下面的截图中你会看到文件和指向它的硬链接共享相同的inode,都是使用了相同的466个字节的磁盘。 -另一方面,在别的磁盘创建一个硬链接将占用5个字节,并不是说你将耗尽存储容量,而是这个例子足以说明一个硬链接和软链接之间的区别。 +另一方面,在别的磁盘创建一个硬链接将占用5个字节,这并不是说你将耗尽存储容量,而是这个例子足以说明一个硬链接和软链接之间的区别。 ![Difference between a hard link and a soft link](http://www.tecmint.com/wp-content/uploads/2015/03/hard-soft-link.png) -软连接和硬链接之间的不同 +*软连接和硬链接之间的不同* -符号链接的典型用法是在Linux系统的版本文件参考。假设有需要一个访问文件foo X.Y 想图书馆一样经常被访问,你想更新一个就可以而不是更新所有的foo X.Y,这时使用软连接更为明智和安全。有文件被看成foo X.Y的链接符号,从而找到foo X.Y +在Linux系统上符号链接的典型用法是指向一个带版本的文件。假设有几个程序需要访问文件fooX.Y,但麻烦是版本经常变化(像图书馆一样)。每次版本更新时我们都需要更新指向 fooX.Y 的单一引用,而更安全、更快捷的方式是,我们可以让程序寻找名为 foo 的符号链接,它实际上指向 fooX.Y。 -这样的话,当你的X和Y发生变化后,你只需更新一个文件,而不是更新每个文件。 +这样的话,当你的X和Y发生变化后,你只需更新符号链接 foo 到新的目标文件,而不用跟踪每个对目标文件的使用并更新。 ### 总结 ### -在这篇文章中,我们回顾了一些基本的文件和目录管理技能,这是每个系统管理员的工具集的一部分。请确保阅读了本系列的其他部分,以及复习并将这些主题与本教程所涵盖的内容相结合。 +在这篇文章中,我们回顾了一些基本的文件和目录管理技能,这是每个系统管理员的工具集的一部分。请确保阅读了本系列的其它部分,并将这些主题与本教程所涵盖的内容相结合。 如果你有任何问题或意见,请随时告诉我们。我们总是很高兴从读者那获取反馈. 
@@ -315,11 +302,11 @@ via: http://www.tecmint.com/file-and-directory-management-in-linux/ 作者:[Gabriel Cánepa][a] 译者:[xiqingongzi](https://github.com/xiqingongzi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 [a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/ +[1]:https://linux.cn/article-2740-1.html [2]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ [3]:http://www.tecmint.com/18-tar-command-examples-in-linux/ diff --git a/sign.md b/sign.md index ea83b53f1f..1c413aba40 100644 --- a/sign.md +++ b/sign.md @@ -1,8 +1,22 @@ + --- -via: +via:来源链接 -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 +作者:[作者名][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) -译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译, +[Linux中国](https://linux.cn/) 荣誉推出 +[a]:作者链接 +[1]:文内链接 +[2]: +[3]: +[4]: +[5]: +[6]: +[7]: +[8]: +[9]: \ No newline at end of file diff --git a/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md b/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md deleted file mode 100644 index 5ae03e4df1..0000000000 --- a/sources/share/20150817 Top 5 Torrent Clients For Ubuntu Linux.md +++ /dev/null @@ -1,117 +0,0 @@ -Top 5 Torrent Clients For Ubuntu Linux -================================================================================ -![Best Torrent clients for Ubuntu Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/5_Best_Torrent_Ubuntu.png) - -Looking for the **best torrent client in Ubuntu**? Indeed there are a number of torrent clients available for desktop Linux. But which ones are the **best Ubuntu torrent clients** among them? - -I am going to list top 5 torrent clients for Linux, which are lightweight, feature rich and have impressive GUI. Ease of installation and using is also a factor. - -### Best torrent programs for Ubuntu ### - -Since Ubuntu comes by default with Transmission, I am going to exclude it from the list. This doesn’t mean that Transmission doesn’t deserve to be on the list. Transmission is a good to have torrent client for Ubuntu and this is the reason why it is the default Torrent application in several Linux distributions, including Ubuntu. - ----------- - -### Deluge ### - -![Logo of Deluge torrent client for Ubuntu](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Deluge.png) - -[Deluge][1] has been chosen as the best torrent client for Linux by Lifehacker and that speaks itself of the usefulness of Deluge. And it’s not just Lifehacker who is fan of Deluge, check out any forum and you’ll find a number of people admitting that Deluge is their favorite. - -Fast, sleek and intuitive interface makes Deluge a hot favorite among Linux users. - -Deluge is available in Ubuntu repositories and you can install it in Ubuntu Software Center or by using the command below: - - sudo apt-get install deluge - ----------- - -### qBittorrent ### - -![qBittorrent client for Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/qbittorrent_icon.png) - -As the name suggests, [qBittorrent][2] is the Qt version of famous [Bittorrent][3] application. You’ll see an interface similar to Bittorrent client in Windows, if you ever used it. 
Sort of lightweight and have all the standard features of a torrent program, qBittorrent is also available in default Ubuntu repository. - -It could be installed from Ubuntu Software Center or using the command below: - - sudo apt-get install qbittorrent - ----------- - -### Tixati ### - -![Tixati torrent client logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/tixati_icon.png) - -[Tixati][4] is another nice to have torrent client for Ubuntu. It has a default dark theme which might be preferred by many but not me. It has all the standard features that you can seek in a torrent client. - -In addition to that, there are additional feature of data analysis. You can measure and analyze bandwidth and other statistics in nice charts. - -- [Download Tixati][5] - ----------- - -### Vuze ### - -![Vuze Torrent Logo](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/vuze_icon_for_mac_os_x_by_hamzasaleem-d6yx1fp.png) - -[Vuze][6] is favorite torrent application of a number of Linux as well as Windows users. Apart from the standard features, you can search for torrents directly in the application. You can also subscribe to episodic content so that you won’t have to search for new contents as you can see it in your subscription in sidebar. - -It also comes with a video player that can play HD videos with subtitles and all. But I don’t think you would like to use it over the better video players such as VLC. - -Vuze can be installed from Ubuntu Software Center or using the command below: - - sudo apt-get install vuze - ----------- - -### Frostwire ### - -![Logo of Frostwire torrent client](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/frostwire.png) - -[Frostwire][7] is the torrent application you might want to try. It is more than just a simple torrent client. Also available for Android, you can use it to share files over WiFi. - -You can search for torrents from within the application and play them inside the application. In addition to the downloaded files, it can browse your local media and have them organized inside the player. The same is applicable for the Android version. - -An additional feature is that Frostwire also provides access to legal music by indi artists. You can download them and listen to it, for free, for legal. - -- [Download Frostwire][8] - ----------- - -### Honorable mention ### - -On Windows, uTorrent (pronounced mu torrent) is my favorite torrent application. While uTorrent may be available for Linux, I deliberately skipped it from the list because installing and using uTorrent in Linux is neither easy nor does it provide a complete application experience (runs with in web browser). - -You can read about uTorrent installation in Ubuntu [here][9]. - -#### Quick tip: #### - -Most of the time, torrent applications do not start by default. You might want to change this behavior. Read this post to learn [how to manage startup applications in Ubuntu][10]. - -### What’s your favorite? ### - -That was my opinion on the best Torrent clients in Ubuntu. What is your favorite one? Do leave a comment. You can also check the [best download managers for Ubuntu][11] in related posts. And if you use Popcorn Time, check these [Popcorn Time Tips][12]. 
- --------------------------------------------------------------------------------- - -via: http://itsfoss.com/best-torrent-ubuntu/ - -作者:[Abhishek][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:http://deluge-torrent.org/ -[2]:http://www.qbittorrent.org/ -[3]:http://www.bittorrent.com/ -[4]:http://www.tixati.com/ -[5]:http://www.tixati.com/download/ -[6]:http://www.vuze.com/ -[7]:http://www.frostwire.com/ -[8]:http://www.frostwire.com/downloads -[9]:http://sysads.co.uk/2014/05/install-utorrent-3-3-ubuntu-14-04-13-10/ -[10]:http://itsfoss.com/manage-startup-applications-ubuntu/ -[11]:http://itsfoss.com/4-best-download-managers-for-linux/ -[12]:http://itsfoss.com/popcorn-time-tips/ \ No newline at end of file diff --git a/sources/share/20150824 Great Open Source Collaborative Editing Tools.md b/sources/share/20150824 Great Open Source Collaborative Editing Tools.md new file mode 100644 index 0000000000..4696862569 --- /dev/null +++ b/sources/share/20150824 Great Open Source Collaborative Editing Tools.md @@ -0,0 +1,229 @@ +cygmris is translating... +Great Open Source Collaborative Editing Tools +================================================================================ +In a nutshell, collaborative writing is writing done by more than one person. There are benefits and risks of collaborative working. Some of the benefits include a more integrated / co-ordinated approach, better use of existing resources, and a stronger, united voice. For me, the greatest advantage is one of the most transparent. That's when I need to take colleagues' views. Sending files back and forth between colleagues is inefficient, causes unnecessary delays and leaves people (i.e. me) unhappy with the whole notion of collaboration. With good collaborative software, I can share notes, data and files, and use comments to share thoughts in real-time or asynchronously. Working together on documents, images, video, presentations, and tasks is made less of a chore. + +There are many ways to collaborate online, and it has never been easier. This article highlights my favourite open source tools to collaborate on documents in real time. + +Google Docs is an excellent productivity application with most of the features I need. It serves as a collaborative tool for editing documents in real time. Documents can be shared, opened, and edited by multiple users simultaneously and users can see character-by-character changes as other collaborators make edits. While Google Docs is free for individuals, it is not open source. + +Here is my take on the finest open source collaborative editors which help you focus on writing without interruption, yet work mutually with others. + +---------- + +### Hackpad ### + +![Hackpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Hackpad.png) + +Hackpad is an open source web-based realtime wiki, based on the open source EtherPad collaborative document editor. + +Hackpad allows users to share your docs realtime and it uses color coding to show which authors have contributed to which content. It also allows in line photos, checklists and can also be used for coding as it offers syntax highlighting. + +While Dropbox acquired Hackpad in April 2014, it is only this month that the software has been released under an open source license. It has been worth the wait. 
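+
+Since the code is now public, you can grab it straight from the repository linked below and experiment with self-hosting. This is only a starting point, not a full recipe: the actual build and run steps live in the repository's README, so verify anything beyond the clone itself there.
+
+    # Fetch the newly open-sourced code (build instructions are in the repo's README)
+    $ git clone https://github.com/dropbox/hackpad.git
+    $ cd hackpad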
+ +Features include: + +- Very rich set of functions, similar to those offered by wikis +- Take collaborative notes, share data and files, and use comments to share your thoughts in real-time or asynchronously +- Granular privacy permissions enable you to invite a single friend, a dozen teammates, or thousands of Twitter followers +- Intelligent execution +- Directly embed videos from popular video sharing sites +- Tables +- Syntax highlighting for most common programming languages including C, C#, CSS, CoffeeScript, Java, and HTML + +- Website: [hackpad.com][1] +- Source code: [github.com/dropbox/hackpad][2] +- Developer: [Contributors][3] +- License: Apache License, Version 2.0 +- Version Number: - + +---------- + +### Etherpad ### + +![Etherpad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Etherpad.png) + +Etherpad is an open source web-based collaborative real-time editor, allowing authors to simultaneously edit a text document leave comments, and interact with others using an integrated chat. + +Etherpad is implemented in JavaScript, on top of the AppJet platform, with the real-time functionality achieved using Comet streaming. + +Features include: + +- Well designed spartan interface +- Simple text formatting features +- "Time slider" - explore the history of a pad +- Download documents in plain text, PDF, Microsoft Word, Open Document, and HTML +- Auto-saves the document at regular, short intervals +- Highly customizable +- Client side plugins extend the editor functionality +- Hundreds of plugins extend Etherpad including support for email notifications, pad management, authentication +- Accessibility enabled +- Interact with Pad contents in real time from within Node and from your CLI + +- Website: [etherpad.org][4] +- Source code: [github.com/ether/etherpad-lite][5] +- Developer: David Greenspan, Aaron Iba, J.D. Zamfiresc, Daniel Clemens, David Cole +- License: Apache License Version 2.0 +- Version Number: 1.5.7 + +---------- + +### Firepad ### + +![Firepad in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Firepad.png) + +Firepad is an open source, collaborative text editor. It is designed to be embedded inside larger web applications with collaborative code editing added in only a few days. + +Firepad is a full-featured text editor, with capabilities like conflict resolution, cursor synchronization, user attribution, and user presence detection. It uses Firebase as a backend, and doesn't need any server-side code. It can be added to any web app. Firepad can use either the CodeMirror editor or the Ace editor to render documents, and its operational transform code borrows from ot.js. + +If you want to extend your web application capabilities by adding the simple document and code editor, Firepad is perfect. + +Firepad is used by several editors, including the Atlassian Stash Realtime Editor, Nitrous.IO, LiveMinutes, and Koding. 
+ +Features include: + +- True collaborative editing +- Intelligent OT-based merging and conflict resolution +- Support for both rich text and code editing +- Cursor position synchronization +- Undo / redo +- Text highlighting +- User attribution +- Presence detection +- Version checkpointing +- Images +- Extend Firepad through its API +- Supports all modern browsers: Chrome, Safari, Opera 11+, IE8+, Firefox 3.6+ + +- Website: [www.firepad.io][6] +- Source code: [github.com/firebase/firepad][7] +- Developer: Michael Lehenbauer and the team at Firebase +- License: MIT +- Version Number: 1.1.1 + +---------- + +### OwnCloud Documents ### + +![ownCloud Documents in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-ownCloud.png) + +ownCloud Documents is an ownCloud app to work with office documents alone and/or collaboratively. It allows up to 5 individuals to collaborate editing .odt and .doc files in a web browser. + +ownCloud is a self-hosted file sync and share server. It provides access to your data through a web interface, sync clients or WebDAV while providing a platform to view, sync and share across devices easily. + +Features include: + +- Cooperative edit, with multiple users editing files simultaneously +- Document creation within ownCloud +- Document upload +- Share and edit files in the browser, and then share them inside ownCloud or through a public link +- ownCloud features like versioning, local syncing, encryption, undelete +- Seamless support for Microsoft Word documents by way of transparent conversion of file formats + +- Website: [owncloud.org][8] +- Source code: [github.com/owncloud/documents][9] +- Developer: OwnCloud Inc. +- License: AGPLv3 +- Version Number: 8.1.1 + +---------- + +### Gobby ### + +![Gobby in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Gobby.png) + +Gobby is a collaborative editor supporting multiple documents in one session and a multi-user chat. All users could work on the file simultaneously without the need to lock it. The parts the various users write are highlighted in different colours and it supports syntax highlighting of various programming and markup languages. + +Gobby allows multiple users to edit the same document together over the internet in real-time. It integrates well with the GNOME environment. It features a client-server architecture which supports multiple documents in one session, document synchronisation on request, password protection and an IRC-like chat for communication out of band. Users can choose a colour to highlight the text they have written in a document. + +A dedicated server called infinoted is also provided. 
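+
+If you want to host a shared session of your own, starting the dedicated server usually takes only a couple of commands. The sketch below is based on the project's documented quickstart; the binary may be versioned on some distributions (e.g. infinoted-0.5) and flags can differ between releases, so treat the options and file names as assumptions and check `infinoted --help` first.
+
+    # First run: create a key and a self-signed certificate, then start the server
+    $ infinoted --create-key --create-certificate -k key.pem -c cert.pem
+    # Subsequent runs can reuse the generated files
+    $ infinoted -k key.pem -c cert.pem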
+ +Features include: + +- Full-fledged text editing capabilities including syntax highlighting using GtkSourceView +- Real-time, lock-free collaborative text editing through encrypted connections (including PFS) +- Integrated group chat +- Local group undo: Undo does not affect changes of remote users +- Shows cursors and selections of remote users +- Highlights text written by different users with different colors +- Syntax highlighting for most programming languages, auto indentation, configurable tab width +- Zeroconf support +- Encrypted data transfer including perfect forward secrecy (PFS) +- Sessions can be password-protected +- Sophisticated access control with Access Control Lists (ACLs) +- Highly configurable dedicated server +- Automatic saving of documents +- Advanced search and replace options +- Internationalisation +- Full Unicode support + +- Website: [gobby.github.io][10] +- Source code: [github.com/gobby][11] +- Developer: Armin Burgmeier, Philipp Kern and contributors +- License: GNU GPLv2+ and ISC +- Version Number: 0.5.0 + +---------- + +### OnlyOffice ### + +![OnlyOffice in action](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-OnlyOffice.png) + +ONLYOFFICE (formerly known as Teamlab Office) is a multifunctional cloud online office suite integrated with CRM system, document and project management toolset, Gantt chart and email aggregator. + +It allows you to organize business tasks and milestones, store and share your corporate or personal documents, use social networking tools such as blogs and forums, as well as communicate with your team members via corporate IM. + +Manage documents, projects, team and customer relations in one place. OnlyOffice combines text, spreadsheet and presentation editors that include features similar to Microsoft desktop editors (Word, Excel and PowerPoint), but then allow to co-edit, comment and chat in real time. + +OnlyOffice is written in ASP.NET, based on HTML5 Canvas element, and translated to 21 languages. 
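+
+If you just want to evaluate the Document Server component rather than build it from source, the project also publishes a Docker image. A minimal sketch is shown below, assuming Docker is installed and port 80 is free; the image name follows the source repository linked below, but verify it on Docker Hub before relying on it.
+
+    # Run ONLYOFFICE Document Server in a container, mapping its web interface to port 80
+    $ sudo docker run -i -t -d -p 80:80 onlyoffice/documentserver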
+ +Features include: + +- As powerful as a desktop editor when working with large documents, paging and zooming +- Document sharing in view / edit modes +- Document embedding +- Spreadsheet and presentation editors +- Co-editing +- Commenting +- Integrated chat +- Mobile applications +- Gantt charts +- Time management +- Access right management +- Invoicing system +- Calendar +- Integration with file storage systems: Google Drive, Box, OneDrive, Dropbox, OwnCloud +- Integration with CRM, email aggregator and project management module +- Mail server +- Mail aggregator +- Edit documents, spreadsheets and presentations of the most popular formats: DOC, DOCX, ODT, RTF, TXT, XLS, XLSX, ODS, CSV, PPTX, PPT, ODP + +- Website: [www.onlyoffice.com][12] +- Source code: [github.com/ONLYOFFICE/DocumentServer][13] +- Developer: Ascensio System SIA +- License: GNU GPL v3 +- Version Number: 7.7 + +-------------------------------------------------------------------------------- + +via: http://www.linuxlinks.com/article/20150823085112605/CollaborativeEditing.html + +作者:Frazer Kline +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://hackpad.com/ +[2]:https://github.com/dropbox/hackpad +[3]:https://github.com/dropbox/hackpad/blob/master/CONTRIBUTORS +[4]:http://etherpad.org/ +[5]:https://github.com/ether/etherpad-lite +[6]:http://www.firepad.io/ +[7]:https://github.com/firebase/firepad +[8]:https://owncloud.org/ +[9]:http://github.com/owncloud/documents/ +[10]:https://gobby.github.io/ +[11]:https://github.com/gobby +[12]:https://www.onlyoffice.com/free-edition.aspx +[13]:https://github.com/ONLYOFFICE/DocumentServer diff --git a/sources/share/20150826 Five Super Cool Open Source Games.md b/sources/share/20150826 Five Super Cool Open Source Games.md new file mode 100644 index 0000000000..0d3d3c8bfd --- /dev/null +++ b/sources/share/20150826 Five Super Cool Open Source Games.md @@ -0,0 +1,66 @@ +Translating by H-mudcup +Five Super Cool Open Source Games +================================================================================ +In 2014 and 2015, Linux became home to a list of popular commercial titles such as the popular Borderlands, Witcher, Dead Island, and Counter Strike series of games. While this is exciting news, what of the gamer on a budget? Commercial titles are good, but even better are free-to-play alternatives made by developers who know what players like. + +Some time ago, I came across a three year old YouTube video with the ever optimistic title [5 Open Source Games that Don’t Suck][1]. Although the video praises some open source games, I’d prefer to approach the subject with a bit more enthusiasm, at least as far as the title goes. So, here’s my list of five super cool open source games. + +### Tux Racer ### + +![Tux Racer](http://fossforce.com/wp-content/uploads/2015/08/tuxracer-550x413.jpg) + +Tux Racer + +[Tux Racer][2] is the first game on this list because I’ve had plenty of experience with it. On a [recent trip to Mexico][3] that my brother and I took with [Kids on Computers][4], Tux Racer was one of the games that kids and teachers alike enjoyed. In this game, players use the Linux mascot, the penguin Tux, to race on downhill ski slopes in time trials in which players challenge their own personal bests. Currently there’s no multiplayer version available, but that could be subject to change. Available for Linux, OS X, Windows, and Android. 
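+
+If you would rather not build from the source download linked above, many distributions package a maintained fork of the game. The package name below is an assumption based on how Ubuntu ships the Extreme Tux Racer fork, so check your own package manager first.
+
+    # Install the packaged fork on Debian/Ubuntu-style systems (package name assumed)
+    $ sudo apt-get install extremetuxracer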
+
+### Warsow ###
+
+![Warsow](http://fossforce.com/wp-content/uploads/2015/08/warsow-550x413.jpg)
+
+Warsow
+
+The [Warsow][5] website explains: “Set in a futuristic cartoonish world, Warsow is a completely free fast-paced first-person shooter (FPS) for Windows, Linux and Mac OS X. Warsow is the Art of Respect and Sportsmanship Over the Web.” I was reluctant to include games from the FPS genre on this list, because many have played games in this genre, but I was amused by Warsow. It prioritizes lots of movement and the game is fast paced, with a set of eight weapons to start with. The cartoonish style makes playing feel less serious and more casual, something for friends and family to play together. However, it boasts competitive play, and when I experienced the game I found there were, indeed, some expert players around. Available for Linux, Windows and OS X.
+
+### M.A.R.S – A ridiculous shooter ###
+
+![M.A.R.S. - A ridiculous shooter](http://fossforce.com/wp-content/uploads/2015/08/MARS-screenshot-550x344.jpg)
+
+M.A.R.S. – A ridiculous shooter
+
+[M.A.R.S – A ridiculous shooter][6] is appealing because of its vibrant coloring and style. There is support for two players on the same keyboard, but an online multiplayer version is currently in the works, meaning plans to play with distant friends will have to wait for now. Regardless, it’s an entertaining space shooter with a few different ships and weapons to play as. There are differently shaped ships, with weapons ranging from shotguns and lasers to scattered shots and more (one of the random ships shot bubbles at my opponents, which was funny amid the chaotic gameplay). There are a few modes of play, such as the standard death match against opponents, played to a score limit or for the high score, along with other modes called Spaceball, Grave-itation Pit and Cannon Keep. Available for Linux, Windows and OS X.
+
+### Valyria Tear ###
+
+![Valyria Tear](http://fossforce.com/wp-content/uploads/2015/08/bronnan-jump-to-enemy-550x413.jpg)
+
+Valyria Tear
+
+[Valyria Tear][7] resembles many fan favorite role-playing games (RPGs) spanning the years. The story is set in the usual era of fantasy games, full of knights, kingdoms and wizardry, and follows the main character Bronann. The design team did great work designing the world and gives players everything expected from the genre: hidden chests, random monster encounters, non-player character (NPC) interaction, and something no RPG would be complete without: grinding for experience on lower level slime monsters until you’re ready for the big bosses. When I gave it a try, time didn’t permit me to play too far into the campaign, but for those interested there is a ‘[Let’s Play][8]‘ series by YouTube user Yohann Ferriera. Available for Linux, Windows and OS X.
+
+### SuperTuxKart ###
+
+![SuperTuxKart](http://fossforce.com/wp-content/uploads/2015/08/hacienda_tux_antarctica-550x293.jpg)
+
+SuperTuxKart
+
+Last but not least is [SuperTuxKart][9], a clone of Mario Kart that is every bit as fun as the original. It started development around 2000-2004 as Tux Kart, but there were errors in its production which led to a halt in development for a few years. Since development picked up again in 2006, it’s been improving, with version 0.9 debuting four months ago. In the game, our old friend Tux starts in the role of Mario, alongside a few other open source mascots. One recognizable face among them is Suzanne, the monkey mascot for Blender. The graphics are solid and the gameplay is fluid. While online play is in the planning stages, split screen multiplayer action is available, with up to four players supported on a single computer. Available for Linux, Windows, OS X, AmigaOS 4, AROS and MorphOS.
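+
+On Debian-based distributions, installing it is typically a one-liner, since the game has long been packaged in the standard repositories (the package name below is the usual one on Ubuntu; other distributions may name it slightly differently).
+
+    # Install SuperTuxKart from the distribution repositories
+    $ sudo apt-get install supertuxkart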
While online play is in the planning stages, split-screen multiplayer action is available, with up to four players supported on a single computer. Available for Linux, Windows, OS X, AmigaOS 4, AROS and MorphOS.
+
+--------------------------------------------------------------------------------
+
+via: http://fossforce.com/2015/08/five-super-cool-open-source-games/
+
+作者:Hunter Banks
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:https://www.youtube.com/watch?v=BEKVl-XtOP8
+[2]:http://tuxracer.sourceforge.net/download.html
+[3]:http://fossforce.com/2015/07/banks-family-values-texas-linux-fest/
+[4]:http://www.kidsoncomputers.org/an-amazing-week-in-oaxaca
+[5]:https://www.warsow.net/download
+[6]:http://mars-game.sourceforge.net/
+[7]:http://valyriatear.blogspot.com/
+[8]:https://www.youtube.com/channel/UCQ5KrSk9EqcT_JixWY2RyMA
+[9]:http://supertuxkart.sourceforge.net/
diff --git a/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md b/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md
new file mode 100644
index 0000000000..f36e1b21df
--- /dev/null
+++ b/sources/share/20150826 Mosh Shell--A SSH Based Client for Connecting Remote Unix or Linux Systems.md
@@ -0,0 +1,110 @@
+Mosh Shell – An SSH-Based Client for Connecting to Remote Unix/Linux Systems
+================================================================================
+Mosh, which stands for Mobile Shell, is a command-line application used for connecting to a server from a client computer over the Internet. It can be used like SSH and contains more features than Secure Shell. The application was originally written by Keith Winstein for Unix-like operating systems and released under the GNU GPL v3.
+
+![Mosh Shell SSH Client](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-SSH-Client.png)
+
+Mosh Shell SSH Client
+
+#### Features of Mosh ####
+
+- It is a remote terminal application that supports roaming.
+- Available for all major UNIX-like OSes, viz. Linux, FreeBSD, Solaris, Mac OS X and Android.
+- Intermittent connectivity supported.
+- Provides intelligent local echo.
+- Line editing of user keystrokes supported.
+- Responsive and robust over WiFi, cellular and long-distance links.
+- Remains connected even when your IP changes. It uses UDP in place of TCP (used by SSH); TCP times out when the connection is reset or a new IP is assigned, but UDP keeps the connection open.
+- The connection remains intact when you resume the session after a long time.
+- No network lag: shows the user's keystrokes and deletions immediately.
+- The same login method as in SSH.
+- Mechanism to handle packet loss.
+
+### Installation of Mosh Shell in Linux ###
+
+On Debian, Ubuntu and Mint-like systems, you can easily install the Mosh package with the help of the [apt-get package manager][1] as shown.
+
+    # apt-get update
+    # apt-get install mosh
+
+On RHEL/CentOS/Fedora based distributions, you need to turn on the third-party repository called [EPEL][2], in order to install mosh from this repository using the [yum package manager][3] as shown.
+
+    # yum update
+    # yum install mosh
+
+On Fedora 22+, you need to use the [dnf package manager][4] to install mosh as shown.
+
+    # dnf install mosh
+
+### How do I use Mosh Shell? ###
+
+1. 
Let’s try to log in to a remote Linux server using the mosh shell.
+
+    $ mosh root@192.168.0.150
+
+![Mosh Shell Remote Connection](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Remote-Connection.png)
+
+Mosh Shell Remote Connection
+
+**Note**: Notice that I got a connection error, since the required port was not open on my remote CentOS 7 box. A quick but not recommended solution I performed was:
+
+    # systemctl stop firewalld [on Remote Server]
+
+The preferred way is to open a port and update the firewall rules, and then connect to mosh on a predefined port (a firewall-cmd sketch is given after the conclusion below). For in-depth details on firewalld you may like to visit this post.
+
+- [How to Configure Firewalld][5]
+
+2. Let’s assume that the default SSH port 22 was changed to port 70; in this case you can define a custom port with the help of the ‘-p‘ switch of mosh.
+
+    $ mosh -p 70 root@192.168.0.150
+
+3. Check the version of the installed Mosh.
+
+    $ mosh --version
+
+![Check Mosh Version](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mosh-Version.png)
+
+Check Mosh Version
+
+4. You can close the mosh session by typing ‘exit‘ at the prompt.
+
+    $ exit
+
+5. Mosh supports a lot of options, which you may see as:
+
+    $ mosh --help
+
+![Mosh Shell Options](http://www.tecmint.com/wp-content/uploads/2015/08/Mosh-Shell-Options.png)
+
+Mosh Shell Options
+
+#### Cons of Mosh Shell ####
+
+- Mosh has an additional prerequisite: it requires direct connectivity via UDP, which is not required by SSH.
+- Dynamic port allocation in the range of 60000-61000. The first open port is allocated. It requires one port per connection.
+- The default port allocation is a serious security concern, especially in production.
+- IPv6 connections supported, but roaming on IPv6 not supported.
+- Scrollback not supported.
+- No X11 forwarding supported.
+- No support for ssh-agent forwarding.
+
+### Conclusion ###
+
+Mosh is a nice small utility which is available in the repositories of most Linux distributions. Though it has a few shortcomings, especially the security concerns and the additional requirements, its features, like remaining connected even while roaming, are a real plus. My recommendation is that every Linux user who deals with SSH should try this application; Mosh is worth a try.
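+
+As mentioned in the note under step 1 above, the preferred fix for the connection error is to open Mosh's UDP port range rather than stopping firewalld altogether. The following is only a sketch for firewalld-based systems such as CentOS 7, assuming the default 60000-61000 range listed in the cons above:
+
+    # firewall-cmd --permanent --add-port=60000-61000/udp   [open Mosh's UDP range]
+    # firewall-cmd --reload                                 [apply the new rules]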
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/install-mosh-shell-ssh-client-in-linux/
+
+作者:[Avishek Kumar][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/avishek/
+[1]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/
+[2]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/
+[3]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/
+[4]:http://www.tecmint.com/dnf-commands-for-fedora-rpm-package-management/
+[5]:http://www.tecmint.com/configure-firewalld-in-centos-7/
\ No newline at end of file
diff --git a/sources/share/20150901 5 best open source board games to play online.md b/sources/share/20150901 5 best open source board games to play online.md
new file mode 100644
index 0000000000..505ca76f10
--- /dev/null
+++ b/sources/share/20150901 5 best open source board games to play online.md
@@ -0,0 +1,194 @@
+5 best open source board games to play online
+================================================================================
+I have always had a fascination with board games, in part because they are a device of social interaction, they challenge the mind and, most importantly, they are great fun to play. In my misspent youth, a group of friends and I gathered together to escape the horrors of the classroom, and indulge in a little escapism. The time provided an outlet for tension and rivalry. Board games help teach diplomacy, how to make and break alliances, bring families and friends together, and teach valuable lessons.
+
+I had a penchant for abstract strategy games such as chess and draughts, as well as word games. I can still never resist a game of Escape from Colditz, a strategy card and dice-based board game, or Risk; two timeless multi-player strategy board games. But Catan remains my favourite board game.
+
+Board games have seen a resurgence in recent years, and Linux has a good range of board games to choose from. There is a credible implementation of Catan called Pioneers. But for my favourite implementations of classic board games to play online, check out the recommendations below.
+
+----------
+
+### TripleA ###
+
+![TripleA in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-TripleA.png)
+
+TripleA is an open source online turn-based strategy game. It allows people to implement and play various strategy board games (e.g. Axis & Allies). The TripleA engine has full networking support for online play, support for sounds, XML support for game files, and its own imaging subsystem that allows customized, user-editable maps to be used. TripleA is versatile, scalable and robust.
+
+TripleA started out as a World War II simulation, but now includes different conflicts, as well as variations and mods of popular games and maps. TripleA comes with multiple games, and over 100 more games can be downloaded from the user community.
+
+Features include:
+
+- Good interface and attractive graphics
+- Optional scenarios
+- Multiplayer games
+- TripleA comes with the following supported games that use its game engine (just to name a few):
+    - Axis & Allies : Classic edition (2nd, 3rd with options enabled)
+    - Axis & Allies : Revised Edition
+    - Pact of Steel A&A Variant
+    - Big World 1942 A&A Variant
+    - Four if by Sea
+    - Battle Ship Row
+    - Capture The Flag
+    - Minimap
+- Hot-seat
+- Play By Email mode allows people to play a game via email without having to be connected to each other online
+    - More time to think out moves
+    - Only need to come online to send your turn to the next player
+    - Dice rolls are done by a dedicated dice server that is independent of TripleA
+    - All dice rolls are PGP-verified and emailed to every player
+    - Every move and every dice roll is logged and saved in TripleA's History Window
+    - An online game can later be continued under PBEM mode
+    - Hard for others to cheat
+- Hosted online lobby
+- Utilities for editing maps
+- Website: [triplea.sourceforge.net][1]
+- Developer: Sean Bridges (original developer), Mark Christopher Duncan
+- License: GNU GPL v2
+- Version Number: 1.8.0.7
+
+----------
+
+### Domination ###
+
+![Domination in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Domination.png)
+
+Domination is an open source game that shares common themes with the hugely popular Risk board game. It has many game options and includes many maps.
+
+In the classic “World Domination” game of military strategy, you are battling to conquer the world. To win, you must launch daring attacks, defend yourself on all fronts, and sweep across vast continents with boldness and cunning. But remember, the dangers, as well as the rewards, are high. Just when the world is within your grasp, your opponent might strike and take it all away!
+
+Features include:
+
+- Simple to learn
+- Game types:
+    - Domination - you must occupy all countries on the map, and thereby eliminate all opponents. These can be long, drawn out games
+    - Capital - each player has a country they have selected as a Capital. To win the game, you must occupy all Capitals
+    - Mission - each player draws a random mission. The first to complete their mission wins. Missions may include the elimination of a certain colour, occupation of a particular continent, or a mix of both
+- Map editor
+- Simple map format
+- Multiplayer network play
+- Single player
+- Hotseat
+- 5 user interfaces
+- Play online
+- Website: [domination.sourceforge.net][2]
+- Developer: Yura Mamyrin, Christian Weiske, Mike Chaten, and many others
+- License: GNU GPL v3
+- Version Number: 1.1.1.5
+
+----------
+
+### PyChess ###
+
+![PyChess in action](http://www.linuxlinks.com/portal/content/reviews/Games/Screenshot-Pychess.jpg)
+
+PyChess is a Gnome-inspired chess client written in Python.
+
+The goal of PyChess is to provide a fully featured, nice-looking, easy-to-use chess client for the Gnome desktop.
+
+The client should be usable by those totally new to chess, those who want to play an occasional game, and those who want to use the computer to further enhance their play.
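+
+A side note on the engine support listed in the features below: UCI is a plain-text protocol, so you can talk to any UCI engine by hand to get a feel for what PyChess does behind the scenes. A rough sketch, assuming a UCI engine such as Stockfish is installed (the binary name is an assumption):
+
+    $ echo "uci" | stockfish
+
+The engine identifies itself, lists its configurable options and finishes with ‘uciok’.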
+
+Features include:
+
+- Attractive interface
+- Chess Engine Communication Protocol (CECP) and Universal Chess Interface (UCI) engine support
+- Free online play on the Free Internet Chess Server (FICS)
+- Reads and writes the PGN, EPD and FEN chess file formats
+- Built-in Python-based engine
+- Undo and pause functions
+- Board and piece animation
+- Drag and drop
+- Tabbed interface
+- Hints and spy arrows
+- Opening book sidepanel using sqlite
+- Score plot sidepanel
+- "Enter game" in pgn dialog
+- Optional sounds
+- Legal move highlighting
+- Internationalised or figure pieces in notation
+- Website: [www.pychess.org][3]
+- Developer: Thomas Dybdahl Ahle
+- License: GNU GPL v2
+- Version Number: 0.12 Anderssen rc4
+
+----------
+
+### Scrabble ###
+
+![Scrabble in action](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Scrabble3D.png)
+
+Scrabble3D is a highly customizable Scrabble game that supports not only Classic Scrabble and Superscrabble but also 3D games and custom boards. You can play locally against the computer or connect to a game server to find other players.
+
+Scrabble is a board game with the goal of placing letters crossword-style. Up to four players take part and get a limited number of letters (usually 7 or 8). In turn, each player tries to compose one or more words from his letters, combining them with the words already placed on the board. The value of the move depends on the letters (rare letters get more points) and bonus fields which multiply the value of a letter or the whole word. The player with the most points wins.
+
+Scrabble3D extends this idea into the third dimension. Of course, a classic game with 15x15 fields or Superscrabble with 21x21 fields can be played, and you may configure any board layout yourself. The provided program lets you play against the computer, against other local players, or via the internet. Last but not least, it is possible to connect to a game server to find other players and to obtain a rating. Most options are configurable, including the number and valuation of letters, the dictionary used, the language of dialogs and certainly colors, fonts etc.
+
+Features include:
+
+- Configurable board, letterset and design
+- Board in OpenGL graphics with user-definable wavefront model
+- Game against the computer with support for multithreading
+- Post-hoc game analysis with calculation of the best move by the computer
+- Match with other players connected to a game server
+- NSA rating and highscore at game server
+- Time limit of games
+- Localization; use of non-standard digraphs like CH, RR, LL and right-to-left reading
+- Multilanguage help / wiki
+- Network games are buffered and asynchronous games are possible
+- Running games can be kibitzed
+- International rules including the Italian "Cambio Secco"
+- Challenge mode, What-if variant, CLABBERS, etc
+- Website: [sourceforge.net/projects/scrabble][4]
+- Developer: Heiko Tietze
+- License: GNU GPL v3
+- Version Number: 3.1.3
+
+----------
+
+### Backgammon ###
+
+![Backgammon in action](http://www.linuxlinks.com/portal/content/reviews/Games/Screenshot-gnubg.png)
+
+GNU Backgammon (gnubg) is a strong backgammon program (world-class with a bearoff database installed) usable either as an engine by other programs or as a standalone backgammon game. It is able to play and analyze both money games and tournament matches, evaluate and roll out positions, and more.
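+
+Since gnubg is usable as a standalone game from a plain terminal, here is a minimal text-mode session to get a feel for it. This is only a sketch, assuming gnubg is installed; the option and command names are recalled from gnubg's built-in help and worth double-checking:
+
+    $ gnubg --tty       [start gnubg in text mode, no GUI]
+    (gnubg) new game    [start a new game against the program]
+    (gnubg) hint        [ask the engine for the best play]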
+ +In addition to supporting simple play, it also has extensive analysis features, a tutor mode, adjustable difficulty, and support for exporting annotated games. + +It currently plays at about the level of a championship flight tournament player and is gradually improving. + +gnubg can be played on numerous on-line backgammon servers, such as the First Internet Backgammon Server (FIBS). + +Features include: + +- A command line interface (with full command editing features if GNU readline is available) that lets you play matches and sessions against GNU Backgammon with a rough ASCII representation of the board on text terminals +- Support for a GTK+ interface with a graphical board window. Both 2D and 3D graphics are available +- Tournament match and money session cube handling and cubeful play +- Support for both 1-sided and 2-sided bearoff databases: 1-sided bearoff database for 15 checkers on the first 6 points and optional 2-sided database kept in memory. Optional larger 1-sided and 2-sided databases stored on disk +- Automated rollouts of positions, with lookahead and race variance reduction where appropriate. Rollouts may be extended +- Functions to generate legal moves and evaluate positions at varying search depths +- Neural net functions for giving cubeless evaluations of all other contact and race positions +- Automatic and manual annotation (analysis and commentary) of games and matches +- Record keeping of statistics of players in games and matches (both native inside GNU Backgammon and externally using relational databases and Python) +- Loading and saving analyzed games and matches as .sgf files (Smart Game Format) +- Exporting positions, games and matches to: (.eps) Encapsulated Postscript, (.gam) Jellyfish Game, (.html) HTML, (.mat) Jellyfish Match, (.pdf) PDF, (.png) Portable Network Graphics, (.pos) Jellyfish Position, (.ps) PostScript, (.sgf) Gnu Backgammon File, (.tex) LaTeX, (.txt) Plain Text, (.txt) Snowie Text +- Import of matches and positions from a number of file formats: (.bkg) Hans Berliner's BKG Format, (.gam) GammonEmpire Game, (.gam) PartyGammon Game, (.mat) Jellyfish Match, (.pos) Jellyfish Position, (.sgf) Gnu Backgammon File, (.sgg) GamesGrid Save Game, (.tmg) TrueMoneyGames, (.txt) Snowie Text +- Python Scripting +- Native language support; 10 languages complete or in progress +- Website: [www.gnubg.org][5] +- Developer: Joseph Heled, Oystein Johansen, Jonathan Kinsey, David Montgomery, Jim Segrave, Joern Thyssen, Gary Wong and contributors +- License: GPL v2 +- Version Number: 1.05.000 + +-------------------------------------------------------------------------------- + +via: http://www.linuxlinks.com/article/20150830011533893/BoardGames.html + +作者:Frazer Kline +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:http://triplea.sourceforge.net/ +[2]:http://domination.sourceforge.net/ +[3]:http://www.pychess.org/ +[4]:http://sourceforge.net/projects/scrabble/ +[5]:http://www.gnubg.org/ \ No newline at end of file diff --git a/sources/talk/20141223 Defending the Free Linux World.md b/sources/talk/20141223 Defending the Free Linux World.md deleted file mode 100644 index 0a552e640d..0000000000 --- a/sources/talk/20141223 Defending the Free Linux World.md +++ /dev/null @@ -1,127 +0,0 @@ -Translating by H-mudcup - -Defending the Free Linux World -================================================================================ 
-![](http://www.linuxinsider.com/ai/908455/open-invention-network.jpg) - -**Co-opetition is a part of open source. The Open Invention Network model allows companies to decide where they will compete and where they will collaborate, explained OIN CEO Keith Bergelt. As open source evolved, "we had to create channels for collaboration. Otherwise, we would have hundreds of entities spending billions of dollars on the same technology."** - -The [Open Invention Network][1], or OIN, is waging a global campaign to keep Linux out of harm's way in patent litigation. Its efforts have resulted in more than 1,000 companies joining forces to become the largest defense patent management organization in history. - -The Open Invention Network was created in 2005 as a white hat organization to protect Linux from license assaults. It has considerable financial backing from original board members that include Google, IBM, NEC, Novell, Philips, [Red Hat][2] and Sony. Organizations worldwide have joined the OIN community by signing the free OIN license. - -Organizers founded the Open Invention Network as a bold endeavor to leverage intellectual property to protect Linux. Its business model was difficult to comprehend. It asked its members to take a royalty-free license and forever forgo the chance to sue other members over their Linux-oriented intellectual property. - -However, the surge in Linux adoptions since then -- think server and cloud platforms -- has made protecting Linux intellectual property a critically necessary strategy. - -Over the past year or so, there has been a shift in the Linux landscape. OIN is doing a lot less talking to people about what the organization is and a lot less explaining why Linux needs protection. There is now a global awareness of the centrality of Linux, according to Keith Bergelt, CEO of OIN. - -"We have seen a culture shift to recognizing how OIN benefits collaboration," he told LinuxInsider. - -### How It Works ### - -The Open Invention Network uses patents to create a collaborative environment. This approach helps ensure the continuation of innovation that has benefited software vendors, customers, emerging markets and investors. - -Patents owned by Open Invention Network are available royalty-free to any company, institution or individual. All that is required to qualify is the signer's agreement not to assert its patents against the Linux system. - -OIN ensures the openness of the Linux source code. This allows programmers, equipment vendors, independent software vendors and institutions to invest in and use Linux without excessive worry about intellectual property issues. This makes it more economical for companies to repackage, embed and use Linux. - -"With the diffusion of copyright licenses, the need for OIN licenses becomes more acute. People are now looking for a simpler or more utilitarian solution," said Bergelt. - -OIN legal defenses are free of charge to members. Members commit to not initiating patent litigation against the software in OIN's list. They also agree to offer their own patents in defense of that software. Ultimately, these commitments result in access to hundreds of thousands of patents cross-licensed by the network, Bergelt explained. - -### Closing the Legal Loopholes ### - -"What OIN is doing is very essential. It offers another layer of IP protection, said Greg R. Vetter, associate professor of law at the [University of Houston Law Center][3]. 
- -Version 2 of the GPL license is thought by some to provide an implied patent license, but lawyers always feel better with an explicit license, he told LinuxInsider. - -What OIN provides is something that bridges that gap. It also provides explicit coverage of the Linux kernel. An explicit patent license is not necessarily part of the GPLv2, but it was added in GPLv3, according to Vetter. - -Take the case of a code writer who produces 10,000 lines of code under GPLv3, for example. Over time, other code writers contribute many more lines of code, which adds to the IP. The software patent license provisions in GPLv3 would protect the use of the entire code base under all of the participating contributors' patents, Vetter said. - -### Not Quite the Same ### - -Patents and licenses are overlapping legal constructs. Figuring out how the two entities work with open source software can be like traversing a minefield. - -"Licenses are legal constructs granting additional rights based on, typically, patent and copyright laws. Licenses are thought to give a permission to do something that might otherwise be infringement of someone else's IP rights," Vetter said. - -Many free and open source licenses (such as the Mozilla Public License, the GNU GPLv3, and the Apache Software License) incorporate some form of reciprocal patent rights clearance. Older licenses like BSD and MIT do not mention patents, Vetter pointed out. - -A software license gives someone else certain rights to use the code the programmer created. Copyright to establish ownership is automatic, as soon as someone writes or draws something original. However, copyright covers only that particular expression and derivative works. It does not cover code functionality or ideas for use. - -Patents cover functionality. Patent rights also can be licensed. A copyright may not protect how someone independently developed implementation of another's code, but a patent fills this niche, Vetter explained. - -### Looking for Safe Passage ### - -The mixing of license and patent legalities can appear threatening to open source developers. For some, even the GPL qualifies as threatening, according to William Hurley, cofounder of [Chaotic Moon Studios][4] and [IEEE][5] Computer Society member. - -"Way back in the day, open source was a different world. Driven by mutual respect and a view of code as art, not property, things were far more open than they are today. I believe that many efforts set upon with the best of intentions almost always end up bearing unintended consequences," Hurley told LinuxInsider. - -Surpassing the 1,000-member mark might carry a mixed message about the significance of intellectual property right protection, he suggested. It might just continue to muddy the already murky waters of today's open source ecosystem. - -"At the end of the day, this shows some of the common misconceptions around intellectual property. Having thousands of developers does not decrease risk -- it increases it. The more developers licensing the patents, the more valuable they appear to be," Hurley said. "The more valuable they appear to be, the more likely someone with similar patents or other intellectual property will try to take advantage and extract value for their own financial gain." - -### Sharing While Competing ### - -Co-opetition is a part of open source. The OIN model allows companies to decide where they will compete and where they will collaborate, explained Bergelt. 
- -"Many of the changes in the evolution of open source in terms of process have moved us into a different direction. We had to create channels for collaboration. Otherwise, we would have hundreds of entities spending billions of dollars on the same technology," he said. - -A glaring example of this is the early evolution of the cellphone industry. Multiple standards were put forward by multiple companies. There was no sharing and no collaboration, noted Bergelt. - -"That damaged our ability to access technology by seven to 10 years in the U.S. Our experience with devices was far behind what everybody else in the world had. We were complacent with GSM (Global System for Mobile Communications) while we were waiting for CDMA (Code Division Multiple Access)," he said. - -### Changing Landscape ### - -OIN experienced a growth surge of 400 new licensees in the last year. That is indicative of a new trend involving open source. - -"The marketplace reached a critical mass where finally people within organizations recognized the need to explicitly collaborate and to compete. The result is doing both at the same time. This can be messy and taxing," Bergelt said. - -However, it is a sustainable transformation driven by a cultural shift in how people think about collaboration and competition. It is also a shift in how people are embracing open source -- and Linux in particular -- as the lead project in the open source community, he explained. - -One indication is that most significant new projects are not being developed under the GPLv3 license. - -### Two Better Than One ### - -"The GPL is incredibly important, but the reality is there are a number of licensing models being used. The relative addressability of patent issues is generally far lower in Eclipse and Apache and Berkeley licenses that it is in GPLv3," said Bergelt. - -GPLv3 is a natural complement for addressing patent issues -- but the GPL is not sufficient on its own to address the issues of potential conflicts around the use of patents. So OIN is designed as a complement to copyright licenses, he added. - -However, the overlap of patent and license may not do much good. In the end, patents are for offensive purposes -- not defensive -- in almost every case, Bergelt suggested. - -"If you are not prepared to take legal action against others, then a patent may not be the best form of legal protection for your intellectual properties," he said. "We now live in a world where the misconceptions around software, both open and proprietary, combined with an ill-conceived and outdated patent system, leave us floundering as an industry and stifling innovation on a daily basis," he said. - -### Court of Last Resort ### - -It would be nice to think the presence of OIN has dampened a flood of litigation, Bergelt said, or at the very least, that OIN's presence is neutralizing specific threats. - -"We are getting people to lay down their arms, so to say. At the same time, we are creating a new cultural norm. Once you buy into patent nonaggression in this model, the correlative effect is to encourage collaboration," he observed. - -If you are committed to collaboration, you tend not to rush to litigation as a first response. Instead, you think in terms of how can we enable you to use what we have and make some money out of it while we use what you have, Bergelt explained. - -"OIN is a multilateral solution. It encourages signers to create bilateral agreements," he said. "That makes litigation the last course of action. That is where it should be." 
-
-### Bottom Line ###
-
-OIN is working to prevent Linux patent challenges, Bergelt is convinced. There has not been litigation in this space involving Linux.
-
-The only thing that comes close are the mobile wars with Microsoft, which focus on elements high in the stack. Those legal challenges may be designed to raise the cost of ownership involving the use of Linux products, Bergelt noted.
-
-Still, "these are not Linux-related law suits," he said. "They do not focus on what is core to Linux. They focus on what is in the Linux system."
-
--------------------------------------------------------------------------------
-
-via: http://www.linuxinsider.com/story/Defending-the-Free-Linux-World-81512.html
-
-作者:Jack M. Germain
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[1]:http://www.openinventionnetwork.com/
-[2]:http://www.redhat.com/
-[3]:http://www.law.uh.edu/
-[4]:http://www.chaoticmoon.com/
-[5]:http://www.ieee.org/
diff --git a/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md b/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md
index f1420fd0e4..bb04ddf0c8 100644
--- a/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md
+++ b/sources/talk/20150709 Interviews--Linus Torvalds Answers Your Question.md
@@ -1,4 +1,3 @@
-zpl1025
 Interviews: Linus Torvalds Answers Your Question
 ================================================================================
 Last Thursday you had a chance to [ask Linus Torvalds][1] about programming, hardware, and all things Linux. You can read his answers to those questions below. If you'd like to see what he had to say the last time we sat down with him, [you can do so here][2].
diff --git a/sources/talk/20150716 Interview--Larry Wall.md b/sources/talk/20150716 Interview--Larry Wall.md
index 1362281517..f3fea9c596 100644
--- a/sources/talk/20150716 Interview--Larry Wall.md
+++ b/sources/talk/20150716 Interview--Larry Wall.md
@@ -1,4 +1,4 @@
-martin
+translating...
 Interview: Larry Wall
 ================================================================================
diff --git a/sources/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md b/sources/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md
new file mode 100644
index 0000000000..cf472613c4
--- /dev/null
+++ b/sources/talk/20150818 A Linux User Using Windows 10 After More than 8 Years--See Comparison.md
@@ -0,0 +1,344 @@
+A Linux User Using ‘Windows 10′ After More than 8 Years – See Comparison
+================================================================================
+Windows 10 is the newest member of the Windows NT family, which became generally available on July 29, 2015. It is the successor to Windows 8.1. Windows 10 is supported on 32-bit Intel architecture, AMD64 and ARMv7 processors.
+
+![Windows 10 and Linux Comparison](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-vs-Linux.jpg)
+
+Windows 10 and Linux Comparison
+
+As a Linux user for more than 8 continuous years, I thought I would test Windows 10, as it is making a lot of news these days. This article is a write-up of my observations. I will be seeing everything from the perspective of a Linux user, so you may find it a bit biased towards Linux, but with absolutely no false information.
+
+1. 
I searched Google with the text “download windows 10” and clicked the first link.
+
+![Search Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Windows-10.jpg)
+
+Search Windows 10
+
+You may directly go to the link: [https://www.microsoft.com/en-us/software-download/windows10ISO][1]
+
+2. I was supposed to select an edition from ‘windows 10‘, ‘windows 10 KN‘, ‘windows 10 N‘ and ‘windows 10 single language‘.
+
+![Select Windows 10 Edition](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Windows-10-Edition.jpg)
+
+Select Windows 10 Edition
+
+For those who want to know the details of the different editions of Windows 10, here are brief details of the editions.
+
+- Windows 10 – Contains everything offered by Microsoft for this OS.
+- Windows 10N – This edition comes without a media player.
+- Windows 10KN – This edition comes without media playing capabilities.
+- Windows 10 Single Language – Only one language pre-installed.
+
+3. I selected the first option ‘Windows 10‘ and clicked ‘Confirm‘. Then I was supposed to select a product language. I chose ‘English‘.
+
+I was provided with two download links, one for 32-bit and the other for 64-bit. I clicked 64-bit, as per my architecture.
+
+![Download Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Download-Windows-10.jpg)
+
+Download Windows 10
+
+With my download speed (15Mbps), it took me 3 long hours to download it. Unfortunately there was no torrent file to download the OS, which could otherwise have made the overall process smoother. The OS iso image size is 3.8 GB.
+
+I could not find an image of smaller size, but again the truth is that net-installer-style images simply don’t exist for Windows. Also there is no published hash value against which to verify the iso image after it has been downloaded.
+
+I wonder why Windows is so negligent about such issues. To verify if the iso was downloaded correctly, I need to write the image to a disk or to a USB flash drive, then boot my system and keep my fingers crossed till the setup is finished.
+
+Let’s start. I made my USB flash drive bootable with the Windows 10 iso using the dd command, as:
+
+    # dd if=/home/avi/Downloads/Win10_English_x64.iso of=/dev/sdb bs=512M; sync
+
+(Note that the image is written to the whole device, /dev/sdb, not to a single partition.) It took a few minutes to complete the process. I then rebooted the system and chose to boot from the USB flash drive in my UEFI (BIOS) settings.
+
+#### System Requirements ####
+
+If you are upgrading
+
+- Upgrade supported only from Windows 7 SP1 or Windows 8.1
+
+If you are installing fresh
+
+- Processor: 1GHz or faster
+- RAM: 1GB and above (32-bit), 2GB and above (64-bit)
+- HDD: 16GB and above (32-bit), 20GB and above (64-bit)
+- Graphics card: DirectX 9 or later + WDDM 1.0 driver
+
+### Installation of Windows 10 ###
+
+1. Windows 10 boots. Yet again they changed the logo. Also no information on what’s going on.
+
+![Windows 10 Logo](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Logo.jpg)
+
+Windows 10 Logo
+
+2. Selected the language to install, time & currency format, and keyboard & input methods before clicking Next.
+
+![Select Language and Time](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Language-and-Time.jpg)
+
+Select Language and Time
+
+3. And then the ‘Install Now‘ menu.
+
+![Install Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Windows-10.jpg)
+
+Install Windows 10
+
+4. The next screen asks for the product key. I clicked ‘skip’.
+
+![Windows 10 Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Product-Key.jpg)
+
+Windows 10 Product Key
+
+5. 
Choose from the listed OSes. I chose ‘Windows 10 pro‘.
+
+![Select Install Operating System](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Operating-System.jpg)
+
+Select Install Operating System
+
+6. Oh yes, the license agreement. Put a check mark against ‘I accept the license terms‘ and click Next.
+
+![Accept License](http://www.tecmint.com/wp-content/uploads/2015/08/Accept-License.jpg)
+
+Accept License
+
+7. Next was the choice between Upgrade (to Windows 10 from previous versions of Windows) and Install Windows. I don’t know why ‘custom: Install Windows only’ is labelled as advanced by Windows. Anyway, I chose to install Windows only.
+
+![Select Installation Type](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Installation-Type.jpg)
+
+Select Installation Type
+
+8. Selected the file-system and clicked ‘next’.
+
+![Select Install Drive](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Install-Drive.jpg)
+
+Select Install Drive
+
+9. The installer started to copy files, get files ready for installation, install features, install updates and finish up. It would be better if the installer showed verbose output on the actions it is taking.
+
+![Installing Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Installing-Windows.jpg)
+
+Installing Windows
+
+10. And then Windows restarted. It said a reboot was needed to continue.
+
+![Windows Installation Process](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Installation-Process.jpg)
+
+Windows Installation Process
+
+11. And then all I got was the below screen, which reads “Getting Ready”. It took 5+ minutes at this point. No idea what was going on. No output.
+
+![Windows Getting Ready](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Getting-Ready.jpg)
+
+Windows Getting Ready
+
+12. Yet again, it was time to “Enter Product Key”. I clicked “Do this later” and then used express settings.
+
+![Enter Product Key](http://www.tecmint.com/wp-content/uploads/2015/08/Enter-Product-Key.jpg)
+
+Enter Product Key
+
+![Select Express Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Express-Settings.jpg)
+
+Select Express Settings
+
+14. And then three more output screens, where I as a Linuxer expected the installer to tell me what it is doing, but all in vain.
+
+![Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Loading-Windows.jpg)
+
+Loading Windows
+
+![Getting Updates](http://www.tecmint.com/wp-content/uploads/2015/08/Getting-Updates.jpg)
+
+Getting Updates
+
+![Still Loading Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Still-Loading-Windows.jpg)
+
+Still Loading Windows
+
+15. And then the installer wanted to know who owns this machine: “My organization” or I myself. I chose “I own it” and then Next.
+
+![Select Organization](http://www.tecmint.com/wp-content/uploads/2015/08/Select-Organization.jpg)
+
+Select Organization
+
+16. The installer prompted me to join “Azure AD” or “Join a domain” before I could click ‘continue’. I chose the latter option.
+
+![Connect Windows](http://www.tecmint.com/wp-content/uploads/2015/08/Connect-Windows.jpg)
+
+Connect Windows
+
+17. The installer wants me to create an account. So I entered a user_name and clicked ‘Next‘; I was expecting an error message that I must enter a password.
+
+![Create Account](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Account.jpg)
+
+Create Account
+
+18. To my surprise, Windows didn’t even show a warning/notification that I must create a password.
Such negligence! Anyway, I got my desktop.
+
+![Windows 10 Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Desktop.jpg)
+
+Windows 10 Desktop
+
+#### Experience of a Linux-user (myself) till now ####
+
+- No net-installer image
+- Image size too heavy
+- No way to check the integrity of the downloaded iso (no published hash)
+- The booting and installation remain the same as they were in XP, Windows 7 and 8, perhaps.
+- As usual, no output on what the Windows installer is doing – what files it is copying or what packages it is installing.
+- Installation was straightforward and easy, as compared to the installation of a Linux distribution.
+
+### Windows 10 Testing ###
+
+19. The default desktop is clean. It has a recycle bin icon on the default desktop. You can search the web directly from the desktop itself. Additionally, there are icons for task viewing, Internet browsing, folder browsing and the Microsoft store. As usual, the notification bar is present on the bottom right to round out the desktop.
+
+![Deskop Shortcut Icons](http://www.tecmint.com/wp-content/uploads/2015/08/Deskop-Shortcut-icons.jpg)
+
+Deskop Shortcut Icons
+
+20. Internet Explorer replaced with Microsoft Edge. Windows 10 has replaced the legacy web browser Internet Explorer, also known as IE, with Edge, aka Project Spartan.
+
+![Microsoft Edge Browser](http://www.tecmint.com/wp-content/uploads/2015/08/Edge-browser.jpg)
+
+Microsoft Edge Browser
+
+It is fast, at least as compared to IE (or so it seems in testing). Familiar user interface. The home screen contains news feed updates. There is also a search bar titled ‘Where to next?‘. The browser’s load time is considerably low, which improves overall speed and performance. The memory usage of Edge seems normal.
+
+![Windows Performance](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Performance.jpg)
+
+Windows Performance
+
+Edge has Cortana – an intelligent personal assistant, support for Chrome extensions, Web Note – take notes while browsing, and Share – right from the tab without opening any other tab.
+
+#### Experience of a Linux-user (myself) on this point ####
+
+21. Microsoft has really improved web browsing. Let’s see how stable and fine it remains. It doesn’t lag as of now.
+
+22. Though RAM usage by Edge was fine for me, a lot of users are complaining that Edge is notorious for excessive RAM usage.
+
+23. Difficult to say at this point if Edge is ready to compete with Chrome and/or Firefox. Let’s see what the future unfolds.
+
+#### A few more virtual tours ####
+
+24. Start Menu redesigned – seems clear and effective. Metro icons make it lively. It is populated with the most common applications, viz. Calendar, Mail, Edge, Photos, Contact, Temperature, Companion suite, OneNote, Store, Xbox, Music, Movies & TV, Money, News, Store, etc.
+
+![Windows Look and Feel](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Look.jpg)
+
+Windows Look and Feel
+
+In Linux, on the Gnome desktop environment, I search for the applications I need simply by pressing the Windows key and then typing the name of the application.
+
+![Search Within Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Search-Within-Desktop.jpg)
+
+Search Within Desktop
+
+25. File Explorer – the design seems clean, the edges are sharp. In the left pane there are links to quick-access folders.
+
+![Windows File Explorer](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-File-Explorer.jpg)
+
+Windows File Explorer
+
+Equally clean and effective is the file explorer in the Gnome desktop environment on Linux.
Removing unnecessary graphics and images from icons is a plus point.
+
+![File Browser on Gnome](http://www.tecmint.com/wp-content/uploads/2015/08/File-Browser.jpg)
+
+File Browser on Gnome
+
+26. Settings – Though the settings are a bit refined on Windows 10, you may compare them with the settings on a Linux box.
+
+**Settings on Windows**
+
+![Windows 10 Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Settings.jpg)
+
+Windows 10 Settings
+
+**Settings on Linux Gnome**
+
+![Gnome Settings](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Settings.jpg)
+
+Gnome Settings
+
+27. List of applications – The application list on Windows 10 is better than what they used to provide (based upon my memory, from when I was a regular Windows user), but it still falls short of how Gnome 3 lists applications.
+
+**Applications listed by Windows**
+
+![Application List on Windows 10](http://www.tecmint.com/wp-content/uploads/2015/08/Application-List-on-Windows-10.jpg)
+
+Application List on Windows 10
+
+**Applications listed by Gnome3 on Linux**
+
+![Gnome Application List on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Gnome-Application-List-on-Linux.jpg)
+
+Gnome Application List on Linux
+
+28. Virtual desktop – The virtual desktop feature of Windows 10 is one of those topics which are very much talked about these days.
+
+Here is the virtual desktop in Windows 10.
+
+![Windows Virtual Desktop](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-Virtual-Desktop.jpg)
+
+Windows Virtual Desktop
+
+and the virtual desktop on Linux, which we have been using for more than two decades.
+
+![Virtual Desktop on Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Virtual-Desktop-on-Linux.jpg)
+
+Virtual Desktop on Linux
+
+#### A few other features of Windows 10 ####
+
+29. Windows 10 comes with WiFi Sense. It shares your password with others. Anyone who is in range of your WiFi and connected to you over Skype, Outlook, Hotmail or Facebook can be granted access to your WiFi network. And mind it, Microsoft added this as a feature to save time and allow hassle-free connections.
+
+In a reply to a question raised by Tecmint, Microsoft said the user has to agree to enable WiFi Sense, every time on a new network. Oh! What a pathetic attitude as far as security is concerned. I am not convinced.
+
+30. Upgrading from Windows 7 and Windows 8.1 is free, though the retail costs of the Home and Pro editions are approximately $119 and $199 respectively.
+
+31. Microsoft released the first cumulative update for Windows 10, which is said to put the system into an endless crash loop for a few people. Microsoft perhaps doesn’t understand the problem, or doesn’t want to work on that part – I don’t know why.
+
+32. Microsoft’s inbuilt utility to block/hide unwanted updates doesn’t work in my case. This means that if an update is there, there is no way to block/hide it. Sorry, Windows users!
+
+#### A few features native to Linux that Windows 10 has ####
+
+Windows 10 has a lot of features that were taken directly from Linux. If Linux had not been released under the GNU license, perhaps Microsoft would never have had the features below.
+
+33. Command-line package management – Yup! You heard it right. Windows 10 has built-in package management. It works only in Windows PowerShell. OneGet is the official package manager for Windows. The screenshot below shows the Windows package manager in action.
+
+![Windows 10 Package Manager](http://www.tecmint.com/wp-content/uploads/2015/08/Windows-10-Package-Manager.jpg)
+
+Windows 10 Package Manager
+
+- Border-less windows
+- Flat icons
+- Virtual desktops
+- One search for online + offline search
+- Convergence of mobile and desktop OS
+
+### Overall Conclusion ###
+
+- Improved responsiveness
+- Well-implemented animation
+- Low on resources
+- Improved battery life
+- The Microsoft Edge web browser is rock solid
+- Supported on the Raspberry Pi 2
+- It is good because Windows 8/8.1 was not up to the mark and really bad.
+- It is the same old wine in a new bottle. Almost the same things with brushed-up icons.
+
+What my testing suggests is that Windows 10 has improved on a few things, like look and feel (as Windows always does): +1 for Project Spartan, virtual desktops, command-line package management, and one search for online and offline results. It is overall an improved product, but those who think that Windows 10 will prove to be the last nail in the coffin of Linux are mistaken.
+
+Linux is years ahead of Windows. Their approach is different. In the near future Windows won’t stand anywhere near Linux, and there is nothing for which a Linux user needs to go to Windows 10.
+
+That’s all for now. Hope you liked the post. I will be here again with another interesting post you people will love to read. Provide us with your valuable feedback in the comments below.
+
+--------------------------------------------------------------------------------
+
+via: http://www.tecmint.com/a-linux-user-using-windows-10-after-more-than-8-years-see-comparison/
+
+作者:[Avishek Kumar][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.tecmint.com/author/avishek/
+[1]:https://www.microsoft.com/en-us/software-download/windows10ISO
\ No newline at end of file
diff --git a/sources/talk/20150820 LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software.md b/sources/talk/20150820 LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software.md
new file mode 100644
index 0000000000..c045233630
--- /dev/null
+++ b/sources/talk/20150820 LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software.md
@@ -0,0 +1,46 @@
+LinuxCon's surprise keynote speaker ​Linus Torvalds muses about open-source software
+================================================================================
+> In a broad-ranging question and answer session, Linus Torvalds, Linux's founder, shared his thoughts on the current state of open source and Linux.
+
+**SEATTLE** -- [LinuxCon][1] attendees got an early Christmas present when the Wednesday morning "surprise" keynote speaker turned out to be Linux's founder, Linus Torvalds.
+
+![zemlin-and-torvalds-08192015-1.jpg](http://zdnet2.cbsistatic.com/hub/i/2015/08/19/9951f05a-fedf-4bf4-a4a1-3b4a15458de6/c19c89ded58025eccd090787ba40e803/zemlin-and-torvalds-08192015-1.jpg)
+
+Jim Zemlin and Linus Torvalds shooting the breeze at LinuxCon in Seattle. -- sjvn
+
+Jim Zemlin, the Linux Foundation's executive director, opened the question and answer session by quoting from a recent article about Linus, "[Torvalds may be the most influential individual economic force][2] of the past 20 years. ... Torvalds has, in effect, been as instrumental in retooling the production lines of the modern economy as Henry Ford was 100 years earlier."
+
+Torvalds replied, "I don't think I'm all that powerful, but I'm glad to get all the credit for open source." For someone who's arguably been more influential on technology than Bill Gates, Steve Jobs, or Larry Ellison, Torvalds remains amusingly modest. That's probably one reason [Torvalds, who doesn't suffer fools gladly][3], remains the unchallenged leader of Linux.
+
+It also helps that he doesn't take himself seriously, except when it comes to code quality. Zemlin reminded him that he was also described in the same article as being "5-feet, ho-hum tall with a paunch, ... his body type and gait resemble that of Tux, the penguin mascot of Linux." Torvalds' reply was to grin and say "What is this? A roast?" He added that 5'8" was a perfectly good height.
+
+More seriously, Zemlin asked Torvalds what he thought about the current excitement over containers. Indeed, at times LinuxCon has felt like DockerCon. Torvalds replied, "I'm glad that the kernel is far removed from containers and other buzzwords. We only care about just the kernel. I'm so focused on the kernel I really don't care. I don't get involved in the politics above the kernel and I'm really happy that I don't know."
+
+Moving on, Zemlin asked Torvalds what he thought about the demand from the Internet of Things (IoT) for an even smaller Linux kernel. "Everyone has always wished for a smaller kernel," Torvalds said. "But, with all the modules it's still tens of MegaBytes in size. It's shocking that it used to fit into a MB. We'd like it to be a lean, mean IT machine again."
+
+But, Torvalds continued, "It's hard to get rid of unnecessary fat. Things tend to grow. Realistically I don't think we can get down to the sizes we were 20 years ago."
+
+As for security, the next topic, Torvalds said, "I'm at odds with the security community. They tend to see technology as black and white. If it's not security they don't care at all about it." The truth is "security is bugs. Most of the security issues we've had in the kernel hasn't been that big. Most of them have been really stupid and then some clever person takes advantage of it."
+
+The bottom line is, "We'll never get rid of bugs so security will never be perfect. We do try to be really careful about code. With user space we have to be very strict." But, "Bugs happen and all you can do is mitigate them. Open source is doing fairly well, but anyone who thinks we'll ever be completely secure is foolish."
+
+Zemlin concluded by asking Torvalds where he saw Linux ten years from now. Torvalds replied that he doesn't look at it this way. "I'm plodding, pedestrian, I look ahead six months, I don't plan 10 years ahead. I think that's insane."
+
+Sure, "companies plan ten years, and their plans use open source. Their whole process is very forward thinking. But I'm not worried about 10 years ahead. I look to the next release and the release beyond that."
+
+For Torvalds, who works at home where "the FedEx guy is no longer surprised to find me in my bathrobe at 2 in the afternoon," looking ahead a few months works just fine. And so it does for all the businesses -- both technology-based (Amazon, Google, Facebook) and more mainstream (WalMart, the New York Stock Exchange, and McDonalds) -- that live on Linux every day.
+
+--------------------------------------------------------------------------------
+
+via: http://www.zdnet.com/article/linus-torvalds-muses-about-open-source-software/
+
+作者:[Steven J. 
Vaughan-Nichols][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.zdnet.com/meet-the-team/us/steven-j-vaughan-nichols/ +[1]:http://events.linuxfoundation.org/events/linuxcon-north-america +[2]:http://www.bloomberg.com/news/articles/2015-06-16/the-creator-of-linux-on-the-future-without-him +[3]:http://www.zdnet.com/article/linus-torvalds-finds-gnome-3-4-to-be-a-total-user-experience-design-failure/ \ No newline at end of file diff --git a/sources/talk/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md b/sources/talk/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md new file mode 100644 index 0000000000..2a850a7468 --- /dev/null +++ b/sources/talk/20150820 Which Open Source Linux Distributions Would Presidential Hopefuls Run.md @@ -0,0 +1,53 @@ +Which Open Source Linux Distributions Would Presidential Hopefuls Run? +================================================================================ +![Republican presidential candidate Donald Trump +](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/08/donaldtrump.jpg) + +Republican presidential candidate Donald Trump + +If people running for president used Linux or another open source operating system, which distribution would it be? That's a key question that the rest of the press—distracted by issues of questionable relevance such as "policy platforms" and whether it's appropriate to add an exclamation point to one's Christian name—has been ignoring. But the ignorance ends here: Read on for this sometime-journalist's take on presidential elections and Linux distributions. + +If this sounds like a familiar topic to those of you who have been reading my drivel for years (is anyone, other than my dear editor, unfortunate enough to have actually done that?), it's because I wrote a [similar post][1] during the last presidential election cycle. Some kind readers took that article more seriously than I intended, so I'll take a moment to point out that I don't actually believe that open source software and political campaigns have anything meaningful to do with one another. I am just trying to amuse myself at the start of a new week. + +But you can make of this what you will. You're the reader, after all. + +### Linux Distributions of Choice: Republicans ### + +Today, I'll cover just the Republicans. And I won't even discuss all of them, since the candidates hoping for the Republican party's nomination are too numerous to cover fully here in one post. But for starters: + +If **Jeb (Jeb!?) Bush** ran Linux, it would be [Debian][2]. It's a relatively boring distribution designed for serious, grown-up hackers—the kind who see it as their mission to be the adults in the pack and clean up the messes that less-experienced open source fans create. Of course, this also makes Debian relatively unexciting, and its user base remains perennially small as a result. + +**Scott Walker**, for his part, would be a [Damn Small Linux][3] (DSL) user. Requiring merely 50MB of disk space and 16MB of RAM to run, DSL can breathe new life into 20-year-old 486 computers—which is exactly what a cost-cutting guru like Walker would want. Of course, the user experience you get from DSL is damn primitive; the platform barely runs a browser. 
But at least you won't be wasting money on new computer hardware when the stuff you bought in 1993 can still serve you perfectly well. + +How about **Chris Christie**? He'd obviously be clinging to [Relax-and-Recover Linux][4], which bills itself as a "setup-and-forget Linux bare metal disaster recovery solution." "Setup-and-forget" has basically been Christie's political strategy ever since that unfortunate incident on the George Washington Bridge stymied his political momentum. Disaster recovery may or may not bring back everything for Christie in the end, but at least he might succeed in recovering a confidential email or two that accidentally disappeared when his computer crashed. + +As for **Carly Fiorina**, she'd no doubt be using software developed for "[The Machine][5]" operating system from [Hewlett-Packard][6] (HPQ), the company she led from 1999 to 2005. The Machine actually may run several different operating systems, which may or may not be based on Linux—details remain unclear—and its development began well after Fiorina's tenure at HP came to a conclusion. Still, her roots as a successful executive in the IT world form an important part of her profile today, meaning that her ties to HP have hardly been severed fully. + +Last but not least—and you knew this was coming—there's **Donald Trump**. He'd most likely pay a team of elite hackers millions of dollars to custom-build an operating system just for him—even though he could obtain a perfectly good, ready-made operating system for free—to show off how much money he has to waste. He'd then brag about it being the best operating system ever made, though it would of course not be compliant with POSIX or anything else, because that would mean catering to the establishment. The platform would also be totally undocumented, since, if Trump explained how his operating system actually worked, he'd risk giving away all his secrets to the Islamic State—obviously. + +Alternatively, if Trump had to go with a Linux platform already out there, [Ubuntu][7] seems like the most obvious choice. Like Trump, the Ubuntu developers have taken a we-do-what-we-want approach to building open source software by implementing their own, sometimes proprietary applications and interfaces. Free-software purists hate Ubuntu for that, but plenty of ordinary people like it a lot. Of course, whether playing purely by your own rules—in the realms of either software or politics—is sustainable in the long run remains to be seen. + +### Stay Tuned ### + +If you're wondering why I haven't yet mentioned the Democratic candidates, worry not. I am not leaving them out of today's writing because I like them any more or less than the Republicans. (Personally, I think the peculiar American practice of having only two viable political parties—which virtually no other functioning democracy does—is ridiculous, and I am suspicious of all of these candidates as a result.) + +On the contrary, there's plenty to say about the Linux distributions the Democrats might use, too. And I will, in a future post. Stay tuned. 
+ +-------------------------------------------------------------------------------- + +via: http://thevarguy.com/open-source-application-software-companies/081715/which-open-source-linux-distributions-would-presidential- + +作者:[Christopher Tozzi][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://thevarguy.com/author/christopher-tozzi +[1]:http://thevarguy.com/open-source-application-software-companies/aligning-linux-distributions-presidential-hopefuls +[2]:http://debian.org/ +[3]:http://www.damnsmalllinux.org/ +[4]:http://relax-and-recover.org/ +[5]:http://thevarguy.com/open-source-application-software-companies/061614/hps-machine-open-source-os-truly-revolutionary +[6]:http://hp.com/ +[7]:http://ubuntu.com/ \ No newline at end of file diff --git a/sources/talk/20150820 Why did you start using Linux.md b/sources/talk/20150820 Why did you start using Linux.md new file mode 100644 index 0000000000..3ddf90c560 --- /dev/null +++ b/sources/talk/20150820 Why did you start using Linux.md @@ -0,0 +1,148 @@ +KevinSJ translating +Why did you start using Linux? +================================================================================ +> In today's open source roundup: What got you started with Linux? Plus: IBM's Linux only Mainframe. And why you should skip Windows 10 and go with Linux + +### Why did you start using Linux? ### + +Linux has become quite popular over the years, with many users defecting to it from OS X or Windows. But have you ever wondered what got people started with Linux? A redditor asked that question and got some very interesting answers. + +SilverKnight asked his question on the Linux subreddit: + +> I know this has been asked before, but I wanted to hear more from the younger generation why it is that they started using linux and what keeps them here. +> +> I dont want to discourage others from giving their linux origin stories, because those are usually pretty good, but I was mostly curious about our younger population since there isn't much out there from them yet. +> +> I myself am 27 and am a linux dabbler. I have installed quite a few different distros over the years but I haven't made the plunge to full time linux. I guess I am looking for some more reasons/inspiration to jump on the bandwagon. +> +> [More at Reddit][1] + +Fellow redditors in the Linux subreddit responded with their thoughts: + +> **DoublePlusGood**: "I started using Backtrack Linux (now Kali) at 12 because I wanted to be a "1337 haxor". I've stayed with Linux (Archlinux currently) because it lets me have the endless freedom to make my computer do what I want." +> +> **Zack**: "I'm a Linux user since, I think, the age of 12 or 13, I'm 15 now. +> +> It started when I got tired with Windows XP at 11 and the waiting, dammit am I impatient sometimes, but waiting for a basic task such as shutting down just made me tired of Windows all together. +> +> A few months previously I had started participating in discussions in a channel on the freenode IRC network which was about a game, and as freenode usually goes, it was open source and most of the users used Linux. +> +> I kept on hearing about this Linux but wasn't that interested in it at the time. However, because the channel (and most of freenode) involved quite a bit of programming I started learning Python. 
+> +> A year passed and I was attempting to install GNU/Linux (specifically Ubuntu) on my new (technically old, but I had just got it for my birthday) PC, unfortunately it continually froze, for reasons unknown (probably a bad hard drive, or a lot of dust or something else...). +> +> Back then I was the type to give up on things, so I just continually nagged my dad to try and install Ubuntu, he couldn't do it for the same reasons. +> +> After wanting Linux for a while I became determined to get Linux and ditch windows for good. So instead of Ubuntu I tried Linux Mint, being a derivative of Ubuntu(?) I didn't have high hopes, but it worked! +> +> I continued using it for another 6 months. +> +> During that time a friend on IRC gave me a virtual machine (which ran Ubuntu) on their server, I kept it for a year a bit until my dad got me my own server. +> +> After the 6 months I got a new PC (which I still use!) I wanted to try something different. +> +> I decided to install openSUSE. +> +> I liked it a lot, and on the same Christmas I obtained a Raspberry Pi, and stuck with Debian on it for a while due to the lack of support other distros had for it." +> +> **Cqz**: "Was about 9 when the Windows 98 machine handed down to me stopped working for reasons unknown. We had no Windows install disk, but Dad had one of those magazines that comes with demo programs and stuff on CDs. This one happened to have install media for Mandrake Linux, and so suddenly I was a Linux user. Had no idea what I was doing but had a lot of fun doing it, and although in following years I often dual booted with various Windows versions, the FLOSS world always felt like home. Currently only have one Windows installation, which is a virtual machine for games." +> +> **Tosmarcel**: "I was 15 and was really curious about this new concept called 'programming' and then I stumbled upon this Harvard course, CS50. They told users to install a Linux vm to use the command line. But then I asked myself: "Why doesn't windows have this command line?!". I googled 'linux' and Ubuntu was the top result -Ended up installing Ubuntu and deleted the windows partition accidentally... It was really hard to adapt because I knew nothing about linux. Now I'm 16 and running arch linux, never looked back and I love it!" +> +> **Micioonthet**: "First heard about Linux in the 5th grade when I went over to a friend's house and his laptop was running MEPIS (an old fork of Debian) instead of Windows XP. +> +> Turns out his dad was a socialist (in America) and their family didn't trust Microsoft. This was completely foreign to me, and I was confused as to why he would bother using an operating system that didn't support the majority of software that I knew. +> +> Fast forward to when I was 13 and without a laptop. Another friend of mine was complaining about how slow his laptop was, so I offered to buy it off of him so I could fix it up and use it for myself. I paid $20 and got a virus filled, unusable HP Pavilion with Windows Vista. Instead of trying to clean up the disgusting Windows install, I remembered that Linux was a thing and that it was free. I burned an Ubuntu 12.04 disc and installed it right away, and was absolutely astonished by the performance. +> +> Minecraft (one of the few early Linux games because it ran on Java), which could barely run at 5 FPS on Vista, ran at an entirely playable 25 FPS on a clean install of Ubuntu. +> +> I actually still have that old laptop and use it occasionally, because why not? 
Linux doesn't care how old your hardware is. +> +> I since converted my dad to Linux and we buy old computers at lawn sales and thrift stores for pennies and throw Linux Mint or some other lightweight distros on them." +> +> **Webtm**: "My dad had every computer in the house with some distribution on it, I think a couple with OpenSUSE and Debian, and his personal computer had Slackware on it. So I remember being little and playing around with Debian and not really getting into it much. So I had a Windows laptop for a few years and my dad asked me if I wanted to try out Debian. It was a fun experience and ever since then I've been using Debian and trying out distributions. I currently moved away from Linux and have been using FreeBSD for around 5 months now, and I am absolutely happy with it. +> +> The control over your system is fantastic. There are a lot of cool open source projects. I guess a lot of the fun was figuring out how to do the things I want by myself and tweaking those things in ways to make them do something else. Stability and performance is also a HUGE plus. Not to mention the level of privacy when switching." +> +> **Wyronaut**: "I'm currently 18, but I first started using Linux when I was 13. Back then my first distro was Ubuntu. The reason why I wanted to check out Linux, was because I was hosting little Minecraft game servers for myself and a couple of friends, back then Minecraft was pretty new-ish. I read that the defacto operating system for hosting servers was Linux. +> +> I was a big newbie when it came to command line work, so Linux scared me a little, because I had to take care of a lot of things myself. But thanks to google and a few wiki pages I managed to get up a couple of simple servers running on a few older PC's I had lying around. Great use for all that older hardware no one in the house ever uses. +> +> After running a few game servers I started running a few web servers as well. Experimenting with HTML, CSS and PHP. I worked with those for a year or two. Afterwards, took a look at Java. I made the terrible mistake of watching TheNewBoston video's. +> +> So after like a week I gave up on Java and went to pick up a book on Python instead. That book was Learn Python The Hard Way by Zed A. Shaw. After I finished that at the fast pace of two weeks, I picked up the book C++ Primer, because at the time I wanted to become a game developer. Went trough about half of the book (~500 pages) and burned out on learning. At that point I was spending a sickening amount of time behind my computer. +> +> After taking a bit of a break, I decided to pick up JavaScript. Read like 2 books, made like 4 different platformers and called it a day. +> +> Now we're arriving at the present. I had to go through the horrendous process of finding a school and deciding what job I wanted to strive for when I graduated. I ruled out anything in the gaming sector as I didn't want anything to do with graphics programming anymore, I also got completely sick of drawing and modelling. And I found this bachelor that had something to do with netsec and I instantly fell in love. I picked up a couple books on C to shred this vacation period and brushed up on some maths and I'm now waiting for the new school year to commence. +> +> Right now, I am having loads of fun with Arch Linux, made couple of different arrangements on different PC's and it's going great! +> +> In a sense Linux is what also got me into programming and ultimately into what I'm going to study in college starting this september. 
I probably have my future life to thank for it."
+>
+> **Linuxllc**: "You also can learn from old farts like me.
+>
+> The crutch, The crutch, The crutch. Getting rid of the crutch will inspired you and have good reason to stick with Linux.
+>
+> I got rid of my crutch(Windows XP) back in 2003. Took me only 5 days to get all my computer task back and running at a 100% workflow. Including all my peripheral devices. Minus any Windows games. I just play native Linux games."
+>
+> **Highclass**: "Hey I'm 28 not sure if this is the age group you are looking for.
+>
+> To be honest, I was always interested in computers and the thought of a free operating system was intriguing even though at the time I didn't fully grasp the free software philosophy, to me it was free as in no cost. I also did not find the CLI too intimidating as from an early age I had exposure to DOS.
+>
+> I believe my first distro was Mandrake, I was 11 or 12, I messed up the family computer on several occasions.... I ended up sticking with it always trying to push myself to the next level. Now I work in the industry with Linux everyday.
+>
+> /shrug"
+>
+> **Matto**: "My computer couldn't run fast enough for XP (got it at a garage sale), so I started looking for alternatives. Ubuntu came up in Google. I was maybe 15 or 16 at the time. Now I'm 23 and have a job working on a product that uses Linux internally."
+>
+> [More at Reddit][2]
+
+### IBM's Linux only Mainframe ###
+
+IBM has a long history with Linux, and now the company has created a mainframe that features Ubuntu Linux. The new machine is named LinuxONE.
+
+Ron Miller reports for TechCrunch:
+
+> The new mainframes come in two flavors, named for penguins (Linux — penguins — get it?). The first is called Emperor and runs on the IBM z13, which we wrote about in January. The other is a smaller mainframe called the Rockhopper designed for a more “entry level” mainframe buyer.
+>
+> You may have thought that mainframes went the way of the dinosaur, but they are still alive and well and running in large institutions throughout the world. IBM as part of its broader strategy to promote the cloud, analytics and security is hoping to expand the potential market for mainframes by running Ubuntu Linux and supporting a range of popular open source enterprise software such as Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL and Chef.
+>
+> The metered mainframe will still sit inside the customer’s on-premises data center, but billing will be based on how much the customer uses the system, much like a cloud model, Mauri explained.
+>
+> ...IBM is looking for ways to increase those sales. Partnering with Canonical and encouraging use of open source tools on a mainframe gives the company a new way to attract customers to a small, but lucrative market.
+>
+> [More at TechCrunch][3]
+
+### Why you should skip Windows 10 and opt for Linux ###
+
+Since Windows 10 has been released there has been quite a bit of media coverage about its potential to spy on users. ZDNet has listed some reasons why you should skip Windows 10 and opt for Linux instead on your computer.
+
+SJVN reports for ZDNet:
+
+> You can try to turn Windows 10's data-sharing ways off, but, bad news: Windows 10 will keep sharing some of your data with Microsoft anyway. There is an alternative: Desktop Linux.
+>
+> You can do a lot to keep Windows 10 from blabbing, but you can't always stop it from talking. 
Cortana, Windows 10's voice-activated assistant, for example, will share some data with Microsoft, even when it's disabled. That data includes a persistent computer ID to identify your PC to Microsoft.
+>
+> So, if that gives you a privacy panic attack, you can either stick with your old operating system, which is likely Windows 7, or move to Linux. Eventually, when Windows 7 is no longer supported, if you want privacy you'll have no other viable choice but Linux.
+>
+> There are other, more obscure desktop operating systems that are also desktop-based and private. These include the BSD Unix family such as FreeBSD, PCBSD, and NetBSD and eComStation, OS/2 for the 21st century. Your best choice, though, is a desktop-based Linux with a low learning curve.
+>
+> [More at ZDNet][4]
+
+--------------------------------------------------------------------------------
+
+via: http://www.itworld.com/article/2972587/linux/why-did-you-start-using-linux.html
+
+作者:[Jim Lynch][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.itworld.com/author/Jim-Lynch/
+[1]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/
+[2]:https://www.reddit.com/r/linux/comments/3hb2sr/question_for_younger_users_why_did_you_start/
+[3]:http://techcrunch.com/2015/08/16/ibm-teams-with-canonical-on-linux-mainframe/
+[4]:http://www.zdnet.com/article/sick-of-windows-spying-on-you-go-linux/
diff --git a/sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md b/sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md
new file mode 100644
index 0000000000..5b4ad2251f
--- /dev/null
+++ b/sources/talk/20150821 Linux 4.3 Kernel To Add The MOST Driver Subsystem.md
@@ -0,0 +1,28 @@
+Linux 4.3 Kernel To Add The MOST Driver Subsystem
+================================================================================
+While the [Linux 4.2][1] kernel hasn't been officially released yet, Greg Kroah-Hartman sent in his pull requests early for the various subsystems he maintains for the Linux 4.3 merge window.
+
+The pull requests sent in by Greg KH on Thursday include the Linux 4.3 merge window updates for the driver core, TTY/serial, USB driver, char/misc, and the staging area. These pull requests don't offer any really shocking changes but mostly routine work on improvements / additions / bug-fixes. The staging area once again is heavy with various fixes and clean-ups but there's also a new driver subsystem.
+
+Greg said of the [4.3 staging changes][2], "Lots of things all over the place, almost all of them trivial fixups and changes. The usual IIO updates and new drivers and we have added the MOST driver subsystem which is getting cleaned up in the tree. The ozwpan driver is finally being deleted as it is obviously abandoned and no one cares about it."
+
+MOST is short for Media Oriented Systems Transport. The documentation to be added in the Linux 4.3 kernel explains, "The Media Oriented Systems Transport (MOST) driver gives Linux applications access to a MOST network: The Automotive Information Backbone and the de-facto standard for high-bandwidth automotive multimedia networking. MOST defines the protocol, hardware and software layers necessary to allow for the efficient and low-cost transport of control, real-time and packet data using a single medium (physical layer). 
Media currently in use are fiber optics, unshielded twisted pair cables (UTP) and coax cables. MOST also supports various speed grades up to 150 Mbps." As explained, MOST is mostly about Linux in automotive applications. + +While Greg KH sent in his various subsystem updates for Linux 4.3, he didn't yet propose the [KDBUS][5] kernel code be pulled. He's previously expressed plans for [KDBUS in Linux 4.3][3] so we'll wait until the 4.3 merge window officially gets going to see what happens. Stay tuned to Phoronix for more Linux 4.3 kernel coverage next week when the merge window will begin, [assuming Linus releases 4.2][4] this weekend. + +-------------------------------------------------------------------------------- + +via: http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.3-Staging-Pull + +作者:[Michael Larabel][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.michaellarabel.com/ +[1]:http://www.phoronix.com/scan.php?page=search&q=Linux+4.2 +[2]:http://lkml.iu.edu/hypermail/linux/kernel/1508.2/02604.html +[3]:http://www.phoronix.com/scan.php?page=news_item&px=KDBUS-Not-In-Linux-4.2 +[4]:http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.2-rc7-Released +[5]:http://www.phoronix.com/scan.php?page=search&q=KDBUS \ No newline at end of file diff --git a/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md b/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md new file mode 100644 index 0000000000..7152efa1ed --- /dev/null +++ b/sources/talk/20150823 How learning data structures and algorithms make you a better developer.md @@ -0,0 +1,126 @@ +How learning data structures and algorithms make you a better developer +================================================================================ + +> "I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful […] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important." +-- Linus Torvalds + +--- + +> "Smart data structures and dumb code works a lot better than the other way around." +-- Eric S. Raymond, The Cathedral and The Bazaar + +Learning about data structures and algorithms makes you a stonking good programmer. + +**Data structures and algorithms are patterns for solving problems.** The more of them you have in your utility belt, the greater variety of problems you'll be able to solve. You'll also be able to come up with more elegant solutions to new problems than you would otherwise be able to. + +You'll understand, ***in depth***, how your computer gets things done. This informs any technical decisions you make, regardless of whether or not you're using a given algorithm directly. Everything from memory allocation in the depths of your operating system, to the inner workings of your RDBMS to how your networking stack manages to send data from one corner of Earth to another. All computers rely on fundamental data structures and algorithms, so understanding them better makes you understand the computer better. + +Cultivate a broad and deep knowledge of algorithms and you'll have stock solutions to large classes of problems. 
Problem spaces that you had difficulty modelling before often slot neatly into well-worn data structures that elegantly handle the known use-cases. Dive deep into the implementation of even the most basic data structures and you'll start seeing applications for them in your day-to-day programming tasks.
+
+You'll also be able to come up with novel solutions to the somewhat fruitier problems you're faced with. Data structures and algorithms have the habit of proving themselves useful in situations that they weren't originally intended for, and the only way you'll discover these on your own is by having a deep and intuitive knowledge of at least the basics.
+
+But enough with the theory: have a look at some examples.
+
+###Figuring out the fastest way to get somewhere###
+Let's say we're creating software to figure out the shortest distance from one international airport to another. Assume we're constrained to the following routes:
+
+![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/airport-graph-d2e32b3344b708383e405d67a80c29ea.svg)
+
+Given a graph of destinations and the distances between them, how can we find the shortest distance, say, from Helsinki to London? **Dijkstra's algorithm** is the algorithm that will definitely get us the right answer in the shortest time.
+
+In all likelihood, if you ever came across this problem and knew that Dijkstra's algorithm was the solution, you'd probably never have to implement it from scratch. Just ***knowing*** about it would point you to a library implementation that solves the problem for you.
+
+If you did dive deep into the implementation, you'd be working through one of the most important graph algorithms we know of. You'd know that in practice it's a little resource intensive, so an extension called A* is often used in its place. It gets used everywhere from robot guidance to routing TCP packets to GPS pathfinding.
+
+###Figuring out the order to do things in###
+Let's say you're trying to model courses on a new Massive Open Online Courses platform (like Udemy or Khan Academy). Some of the courses depend on each other. For example, a user has to have taken Calculus before she's eligible for the course on Newtonian Mechanics. Courses can have multiple dependencies. Here are some examples of what that might look like written out in YAML:
+
+    # Mapping from course name to requirements
+    #
+    # If you're a physicist or a mathematician and you're reading this, sincere
+    # apologies for the completely made-up dependency tree :)
+    courses:
+      arithmetic: []
+      algebra: [arithmetic]
+      trigonometry: [algebra]
+      calculus: [algebra, trigonometry]
+      geometry: [algebra]
+      mechanics: [calculus, trigonometry]
+      atomic_physics: [mechanics, calculus]
+      electromagnetism: [calculus, atomic_physics]
+      radioactivity: [algebra, atomic_physics]
+      astrophysics: [radioactivity, calculus]
+      quantum_mechanics: [atomic_physics, radioactivity, calculus]
+
+Given those dependencies, as a user, I want to be able to pick any course and have the system give me an ordered list of courses that I would have to take to be eligible. So if I picked `calculus`, I'd want the system to return the list:
+
+    arithmetic -> algebra -> trigonometry -> calculus
+
+Two important constraints on this that may not be self-evident (a short sketch illustrating both follows this list):
+
+ - At every stage in the course list, the dependencies of the next course must be met.
+ - We don't want any duplicate courses in the list. 
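+
+To make those two constraints concrete, here is a minimal Python sketch of one way to produce such an ordering (a hypothetical illustration, not code from the original article; the `courses` dict mirrors a slice of the YAML above):
+
+    # Hypothetical sketch: depth-first resolution of course dependencies.
+    courses = {
+        "arithmetic": [],
+        "algebra": ["arithmetic"],
+        "trigonometry": ["algebra"],
+        "calculus": ["algebra", "trigonometry"],
+    }
+
+    def course_order(name, seen=None, order=None):
+        """Return the courses to take, in order, to be eligible for `name`."""
+        if seen is None:
+            seen, order = set(), []
+        for dep in courses[name]:        # dependencies come first
+            if dep not in seen:
+                course_order(dep, seen, order)
+        if name not in seen:             # no duplicates in the list
+            seen.add(name)
+            order.append(name)
+        return order
+
+    print(" -> ".join(course_order("calculus")))
+    # prints: arithmetic -> algebra -> trigonometry -> calculus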
+
+This is an example of resolving dependencies, and the algorithm we're looking for to solve this problem is called topological sort (tsort). Tsort works on a dependency graph like we've outlined in the YAML above. Here's what that would look like in a graph (where each arrow means `requires`):
+
+![](http://www.happybearsoftware.com/assets/posts/how-learning-data-structures-and-algorithms-makes-you-a-better-developer/course-graph-2f60f42bb0dc95319954ce34c02705a2.svg)
+
+What topological sort does is take a graph like the one above and find an ordering in which all the dependencies are met at each stage. So if we took a sub-graph that only contained `radioactivity` and its dependencies, then ran tsort on it, we might get the following ordering:
+
+    arithmetic
+    algebra
+    trigonometry
+    calculus
+    mechanics
+    atomic_physics
+    radioactivity
+
+This meets the requirements set out by the use case we described above. A user just has to pick `radioactivity` and they'll get an ordered list of all the courses they have to work through before they're allowed to.
+
+We don't even need to go into the details of how topological sort works before we put it to good use. In all likelihood, your programming language of choice probably has an implementation of it in the standard library. In the worst case scenario, your Unix probably has the `tsort` utility installed by default; run `man tsort` and have a play with it.
+
+###Other places tsort gets used###
+
+ - **Tools like** `make` allow you to declare task dependencies. Topological sort is used under the hood to figure out what order the tasks should be executed in.
+ - **Any programming language that has a `require` directive**, indicating that the current file requires the code in a different file to be run first. Here topological sort can be used to figure out what order the files should be loaded in so that each is only loaded once and all dependencies are met.
+ - **Project management tools with Gantt charts**. A Gantt chart is a graph that outlines all the dependencies of a given task and gives you an estimate of when it will be complete based on those dependencies. I'm not a fan of Gantt charts, but it's highly likely that tsort will be used to draw them.
+
+###Squeezing data with Huffman coding###
+[Huffman coding](http://en.wikipedia.org/wiki/Huffman_coding) is an algorithm used for lossless data compression. It works by analyzing the data you want to compress and creating a binary code for each character. More frequently occurring characters get smaller codes, so `e` might be encoded as `111` while `x` might be `10010`. The codes are created so that they can be concatenated without a delimiter and still be decoded accurately.
+
+Huffman coding is used along with LZ77 in the DEFLATE algorithm, which is used by gzip to compress things. gzip is used all over the place, in particular for compressing files (typically anything with a `.gz` extension) and for http requests/responses in transit.
+
+Knowing how to implement and use Huffman coding has a number of benefits:
+
+ - You'll know why a larger compression context results in better compression overall (e.g. the more you compress, the better the compression ratio). This is one of the proposed benefits of SPDY: that you get better compression on multiple HTTP requests/responses.
+ - You'll know that if you're compressing your javascript/css in transit anyway, it's completely pointless to run a minifier on them. Same goes for PNG files, which use DEFLATE internally for compression already. 
 - If you ever find yourself trying to forcibly decipher encrypted information, you may realize that since repeating data compresses better, the compression ratio of a given bit of ciphertext will help you determine its [block cipher mode of operation](http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation).
+
+###Picking what to learn next is hard###
+Being a programmer involves learning constantly. To operate as a web developer you need to know markup languages, high level languages like ruby/python, regular expressions, SQL and JavaScript. You need to know the fine details of HTTP, how to drive a unix terminal and the subtle art of object oriented programming. It's difficult to navigate that landscape effectively and choose what to learn next.
+
+I'm not a fast learner so I have to choose what to spend time on very carefully. As much as possible, I want to learn skills and techniques that are evergreen, that is, won't be rendered obsolete in a few years' time. That means I'm hesitant to learn the javascript framework of the week or untested programming languages and environments.
+
+As long as our dominant model of computation stays the same, data structures and algorithms that we use today will be used in some form or another in the future. You can safely spend time on gaining a deep and thorough knowledge of them and know that they will pay dividends for your entire career as a programmer.
+
+###Sign up to the Happy Bear Software List###
+Find this article useful? For a regular dose of freshly squeezed technical content delivered straight to your inbox, **click on the big green button below to sign up to the Happy Bear Software mailing list.**
+
+We'll only be in touch a few times per month and you can unsubscribe at any time.
+
+--------------------------------------------------------------------------------
+
+via: http://www.happybearsoftware.com/how-learning-data-structures-and-algorithms-makes-you-a-better-developer
+
+作者:[Happy Bear][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.happybearsoftware.com/
+[1]:http://en.wikipedia.org/wiki/Huffman_coding
+[2]:http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
+
+
+
diff --git a/sources/talk/20150824 LinuxCon exclusive--Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project.md b/sources/talk/20150824 LinuxCon exclusive--Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project.md
new file mode 100644
index 0000000000..2c45b6064b
--- /dev/null
+++ b/sources/talk/20150824 LinuxCon exclusive--Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project.md
@@ -0,0 +1,92 @@
+LinuxCon exclusive: Mark Shuttleworth says Snappy was born long before CoreOS and the Atomic Project
+================================================================================
+![](http://images.techhive.com/images/article/2015/08/mark-100608730-primary.idge.jpg)
+
+Mark Shuttleworth at LinuxCon. Credit: Swapnil Bhartiya
+
+> Mark Shuttleworth, founder of Canonical and Ubuntu, made a surprise visit to LinuxCon. I sat down with him for a video interview and talked about Ubuntu on IBM’s new LinuxONE systems, Canonical’s plans for containers, open source in the enterprise space and much more.
+
+### You made a surprise entry during the keynote. What brought you to LinuxCon? 
### + +**Mark Shuttleworth**: I am here at LinuxCon to support IBM and Canonical in their announcement of Ubuntu on their new Linux-only super-high-end mainframe LinuxONE. These are the biggest machines in the world, purpose-built to run only Linux. And we will be bringing Ubuntu to them, which is a real privilege for us and is going to be incredible for developers. + +![mark selfie](http://images.techhive.com/images/article/2015/08/mark-selfie-100608731-large.idge.jpg) + +Swapnil Bhartiya + +Mark Shuttleworth and Swapnil Bhartiya, mandatory selfie at LinuxCon + +### Only Red Hat and SUSE were supported on it. Why was Ubuntu missing from the mainframe scene? ### + +**Mark**: Ubuntu has always been about developers. It has been about enabling the free software platform from where it is collaboratively built to be available at no cost to developers in the world, so they are limited only by their imagination—not by money, not by geography. + +There was an incredible story told today about a 12-year-old kid who started out with Ubuntu; there are incredible stories about people building giant businesses with Ubuntu. And for me, being able to empower people, whether they come from one part of the world or another to express their ideas on free software, is what Ubuntu is all about. It's been a journey for us essentially, going to the platforms those developers care about, and just in the last year, we suddenly saw a flood of requests from companies who run mainframes, who are using Ubuntu for their infrastructure—70% of OpenStack deployments are on Ubuntu. Those same people said, “Look, there is the mainframe, and we like to unleash it and think of it as a region in the cloud.” So when IBM started talking to us, saying that they have this project in the works, it felt like a very natural fit: You are going to be able to take your Ubuntu laptop, build code there and ship it straight to every cloud, every virtualization environment, every bare metal in every architecture including the mainframe, and that's going to be beautiful. + +### Will Canonical be offering support for these systems? ### + +**Mark**: Yes. Ubuntu on z Systems is going to be completely supported. We will make long-term commitments to that. The idea is to bring together scale-out-fast cloud-like workloads, which is really born on Ubuntu; 70% of workloads on Amazon and other public clouds run on Ubuntu. Now you can think of running that on a mainframe if that makes sense to you. + +We are going to provide exactly the same platform that we do on the cloud, and we are going to provide that on the mainframe as well. We are also going to expose it to the OpenStack API so you can consume it on a mainframe with exactly the same tools and exactly the same processes that you would consume on a laptop, or OpenStack or public cloud resources. So all of the things that Ubuntu builds to make your life easy as a developer are going to be available across that full range of platforms and systems, and all of that is commercially supported. + +### Canonical is doing a lot of things: It is into enterprise, and it’s in the consumer space with mobile and desktop. So what is the core focus of Canonical now? ### + +**Mark**: The trick for us is to enable the reuse of specifically the same parts [of our technology] in as many useful ways as possible. So if you look at the work that we do at z Systems, it's absolutely defined by the work that we do on the cloud. 
We want to deliver exactly the same libraries on exactly the same date for the mainframe as we do for public clouds and for x86, ARM and Power servers today.
+
+We don't allow Ubuntu or our focus to fragment very dramatically because we don't allow different product managers to define Ubuntu in different ways in different environments. We just want to bring that standard experience that developers love to this new environment.
+
+Similarly, if you look at the work we are doing on IoT [Internet of Things], Snappy Ubuntu is the heart of the phone. It’s the phone without the GUI. So the definitions, the tools, the kernels, the mechanisms are shared across those projects. So we are able to multiply the impact of the work. We have an incredible community, and we try to enable the community to do things that they want to do that we can’t do. So that's why we have so many buntus, and it's kind of incredible for me to see what they do with that.
+
+We also see the community climbing in. We see hundreds of developers working with Snappy for IoT, and we see developers working with Snappy on mobile, for personal computing as convergence becomes real. And, of course, there is the cloud server story: 70% of the world is Ubuntu, so there is a huge audience. We don't have to do all the work that we do; we just have to be open and willing to, kind of, do the core infrastructure and then reuse it as efficiently as possible.
+
+### Is Snappy a response to Atomic or CoreOS? ###
+
+**Mark**: Snappy as a project was born four years ago when we started working on the phone, which was long before CoreOS, long before Atomic. I think the principles of atomicity, transactionality are beautiful, but remember: We needed to build the same things for the phone. And with Snappy, we have the ability to deliver transactional updates to any of these systems—phones, servers and cloud devices.
+
+Of course, it feels a little different because in order to provide those guarantees, we have to shape the system in such a way that we can guarantee the guarantees. And that's why Snappy is snappy; it's a new thing. It's not based on an old packaging system. Though we will keep both of them: All Snaps for us that Canonical makes, the core snaps that define the OS, are all built from Debian packages. They are two different faces of the same coin for us, and developers will use them as tools. We use the right tools for the job.
+
+There are a couple of key advantages for Snappy over CoreOS and Atomic, and the main one is this: We took the view that we wanted the base idea to be extensible. So with Snappy, the core operating system is tiny. You make all the choices, and you take all the decisions about the things you want to bolt on: you want to bolt on Docker; you want to bolt on Kubernetes; you want to bolt on Mesos; you want to bolt on Lattice from Pivotal; you want to bolt on OpenStack. Those are the things you choose to add with Snappy. Whereas with Atomic and CoreOS, it's one blob and you have to do it exactly the way they want you to do it. You have to live with the versions of software and the choices they make.
+
+Whereas with Snappy, we really preserve the idea that the choices you have got in Ubuntu are now transactionally available on Snappy systems. That makes the core much smaller, and it gives you the choice of different container systems, different container management systems, different cloud infrastructure systems or different apps of every description. I think that's the winning idea. 
In the fullness of time, people will realize that they wanted to make those choices themselves; they just want Canonical to do the work of providing the updates in a really efficient manner.
+
+### There is so much competition in the container space with Docker, Rocket and many other players. Where will Canonical stand amid this competition? ###
+
+**Mark**: Canonical is focused on platform tools, and we see things like Rocket and Docker as super-useful for developers; we just make sure that those work best on Ubuntu. Docker, for years, ran only Ubuntu because we work very closely with them, and we are glad now that it's available everywhere else. But if you look at the numbers, the vast majority of Docker containers are on Ubuntu. Because we work really hard, as developers, you get the best experience with all of these tools on Ubuntu. We don't want to try and control everything, and it’s great for us to have those guys competing.
+
+I think in the end people will see that there are really two kinds of containers. 1) There are cases where a container is just like a VM. It feels like a whole machine, it runs all processes, all the logs and cron jobs are there. It's like a VM, just that it's much cheaper, much lighter, much faster, and that's LXD. 2) And then there would be process containers, which are like Docker or Rocket; they are there to run a specific application very fast. I think we lead the world in the general machine container story, which is our hypervisor LXD, and I think Docker leads the story when it comes to application containers, process containers. And those two work together really beautifully.
+
+### Microsoft and Canonical are working together on LXD? Can you tell us about this engagement? ###
+
+**Mark**: LXD is two things. First, it's an implementation on top of Canonical's work on the kernel so that you can start to create full machine containers on any host. But it's also a REST API. That’s the transition from LXC to LXD. We've got a daemon there so you can talk to the daemon over the network, if it's listening on the network, and say: tell me about the containers on that machine, tell me about the file systems on that machine, the networks on that machine, start or stop the container.
+
+So LXD becomes a distributed hypervisor effectively. Very interestingly, last week Microsoft announced that they like the REST API. It is very clean, very simple, very well engineered, and they are going to implement the same API for Windows machines. It's completely cross-platform, which means you will be able to talk to any machine—Linux or Windows. So it gives you very clean and simple APIs to talk about containers on any host on the network.
+
+Of course, we have led the work in [OpenStack to bind LXD to Nova][1], which is the compute control system in OpenStack, so that's how we create a whole cloud with the OpenStack API, with the individual VMs actually being containers: much denser, much faster, much lighter, much cheaper.
+
+### Open Source is becoming a norm in the enterprise segment. What do you think is driving the adoption of open source in the enterprise? ###
+
+**Mark**: The reason why open source has become so popular in the enterprise is that it enables them to go faster. We are all competing at some level, and if you can't make progress because you have to call up some vendor, you can't dig in and help yourself go faster, then you feel frustrated. 
And given the choice between frustration and at least the ability to dig into a problem, enterprises over time will always choose to give themselves the ability to dig in and help themselves. So that is why open source is phenomenal.
+
+I think it goes a bit deeper than that. I think people have started to realize that, as much as we compete, 99% of what we need to do is shared, and there is something meaningful about contributing to something that is shared. I have seen Ubuntu go from something that developers love to something that CIOs love, because developers love Ubuntu. As that happens, it's not a one-way ticket. They often want to say how can we help contribute to make this whole thing go faster.
+
+We have always seen a curve of complexity, and open source has traditionally been higher up on the curve of complexity and therefore considered threatening or difficult or too uncertain for people who are not comfortable with the complexity. What's wonderful to me is that many open source projects have identified that as a blocker for their own future. So in Ubuntu we have made user experience, design and “making it easy” a first-class goal. We have done the same for OpenStack. With Ubuntu tools for OpenStack anybody can build an OpenStack cloud in an hour, and if you want, that cloud can run itself, scale itself, manage itself, can deal with failures. It becomes something you can just fire up and forget, which also makes it really cheap. It also makes it something that's not a distraction, and so by making open source easier and easier, we are broadening its appeal to consumers and into the enterprise and potentially into the government.
+
+### How open are governments to open source? Can you tell us about the utilization of open source by governments, especially in the U.S.? ###
+
+**Mark**: I don't track the usage in government, but part of government utilization in the modern era is the realization of how untrustworthy other governments might be. There is a desire for people to be able to say, “Look, I want to review or check and potentially self-build all the things that I depend on.” That's a really important mission. At the end of the day, some people see this as a game where maybe they can get something out of the other guy. I see it as a game where we can make a level playing field, where everybody gets to compete. I have a very strong interest in making sure that Ubuntu is trustworthy, which means the way we build it, the way we run it, the governance around it is such that people can have confidence in it as an independent thing.
+
+### You are quite vocal about freedom, privacy and other social issues on Google+. How do you see yourself, your company and Ubuntu playing a role in making the world a better place? ###
+
+**Mark**: The most important thing for us to do is to build confidence in trusted platforms, platforms that are freely available but also trustworthy. At any given time, there will always be people who can make arguments about why they should have access to something. But we know from history that at the end of the day, due process of law, justice, doesn't depend on the abuse of privacy, abuse of infrastructure, the abuse of data. So I am very strongly of the view that in the fullness of time, all of the different major actors will come to the view that their primary interest is in having something that is conceptually trustworthy. This isn't about what America can steal from Germany or what China can learn in Russia. 
This is about saying we’re all going to be able to trust our infrastructure; that's a generational journey. But I believe Ubuntu can be right at the center of people's thinking about that.
+
+--------------------------------------------------------------------------------
+
+via: http://www.itworld.com/article/2973116/linux/linuxcon-exclusive-mark-shuttleworth-says-snappy-was-born-long-before-coreos-and-the-atomic-project.html
+
+作者:[Swapnil Bhartiya][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://www.itworld.com/author/Swapnil-Bhartiya/
+[1]:https://wiki.openstack.org/wiki/HypervisorSupportMatrix
\ No newline at end of file
diff --git a/sources/talk/20150827 The Strangest Most Unique Linux Distros.md b/sources/talk/20150827 The Strangest Most Unique Linux Distros.md
new file mode 100644
index 0000000000..04ff47952a
--- /dev/null
+++ b/sources/talk/20150827 The Strangest Most Unique Linux Distros.md
@@ -0,0 +1,67 @@
+The Strangest, Most Unique Linux Distros
+================================================================================
+From the most consumer focused distros like Ubuntu, Fedora, Mint or elementary OS to the more obscure, minimal and enterprise focused ones such as Slackware, Arch Linux or RHEL, I thought I'd seen them all. Couldn't have been any further from the truth. The Linux ecosystem is very diverse. There's one for everyone. Let's discuss the weird and wacky world of niche Linux distros that represents the true diversity of open platforms.
+
+![strangest linux distros](http://2.bp.blogspot.com/--cSL2-6rIgA/VcwNc5hFebI/AAAAAAAAJzk/AgB55mVtJVQ/s1600/Puppy-Linux.png)
+
+**Puppy Linux**: An operating system which is about 1/10th the size of an average DVD quality movie rip, that's Puppy Linux for you. The OS is just 100 MB in size! And it can run from RAM, making it unusually fast even on older PCs. You can even remove the boot medium after the operating system has started! Can it get any better than that? System requirements are bare minimum, most hardware is automatically detected, and it comes loaded with software catering to your basic needs. [Experience Puppy Linux][1].
+
+![suicide linux](http://3.bp.blogspot.com/-dfeehRIQKpo/VdMgRVQqIJI/AAAAAAAAJz0/TmBs-n2K9J8/s1600/suicide-linux.jpg)
+
+**Suicide Linux**: Did the name scare you? Well it should. 'Any time - any time - you type any remotely incorrect command, the interpreter creatively resolves it into rm -rf / and wipes your hard drive'. Simple as that. I really want to know who is confident enough to risk their production machines with [Suicide Linux][2]. **Warning: DO NOT try this on production machines!** The whole thing is available in a neat [DEB package][3] if you're interested.
+
+![top 10 strangest linux distros](http://3.bp.blogspot.com/-Q0hlEMCD9-o/VdMieAiXY1I/AAAAAAAAJ0M/iS_ZjVaZAk8/s1600/papyros.png)
+
+**PapyrOS**: "Strange" in a good way. PapyrOS is trying to adapt the material design language of Android into their brand new Linux distribution. Though the project is in early stages, it already looks very promising. The project page says the OS is 80% complete and one can expect the first Alpha release anytime soon. We did a small write up on [PapyrOS][4] when it was announced and by the looks of it, PapyrOS might even become a trend-setter of sorts. Follow the project on [Google+][5] and contribute via [BountySource][6] if you're interested. 
+
+![10 most unique linux distros](http://3.bp.blogspot.com/-8aOtnTp3Yxk/VdMo_KWs4sI/AAAAAAAAJ0o/3NTqhaw60jM/s1600/qubes-linux.png)
+
+**Qubes OS**: Qubes is an open-source operating system designed to provide strong security using a Security by Compartmentalization approach. The assumption is that there can be no perfect, bug-free desktop environment. And by implementing a 'Security by Isolation' approach, [Qubes Linux][7] intends to remedy that. Qubes is based on Xen, the X Window System, and Linux, and can run most Linux applications and supports most Linux drivers. Qubes was selected as a finalist of Access Innovation Prize 2014 for Endpoint Security Solution.
+
+![top10 linux distros](http://3.bp.blogspot.com/-2Sqvb_lilC0/VdMq_ceoXnI/AAAAAAAAJ00/kot20ugVJFk/s1600/ubuntu-satanic.jpg)
+
+**Ubuntu Satanic Edition**: Ubuntu SE is a Linux distribution based on Ubuntu. "It brings together the best of free software and free metal music" in one comprehensive package consisting of themes, wallpapers, and even some heavy-metal music sourced from talented new artists. Though the project doesn't look actively developed anymore, Ubuntu Satanic Edition is strange in every sense of that word. [Ubuntu SE (Slightly NSFW)][8].
+
+![10 strange linux distros](http://2.bp.blogspot.com/-ZtIVjGMqdx0/VdMv136Pz1I/AAAAAAAAJ1E/-q34j-TXyUY/s1600/tiny-core-linux.png)
+
+**Tiny Core Linux**: Puppy Linux not small enough? Try this. Tiny Core Linux is a 12 MB graphical Linux desktop! Yep, you read it right. One major caveat: It is not a complete desktop nor is all hardware completely supported. It represents only the core needed to boot into a very minimal X desktop, typically with wired internet access. There is even a version without the GUI called Micro Core Linux which is just 9MB in size. [Tiny Core Linux][9] folks.
+
+![top 10 unique and special linux distros](http://4.bp.blogspot.com/-idmCvIxtxeo/VdcqcggBk1I/AAAAAAAAJ1U/DTQCkiLqlLk/s1600/nixos.png)
+
+**NixOS**: A very experienced-user focused Linux distribution with a unique approach to package and configuration management. In other distributions, actions such as upgrades can be dangerous. Upgrading a package can cause other packages to break, and upgrading an entire system is much less reliable than reinstalling from scratch. And on top of all that, you can't safely test what the results of a configuration change will be; there's no "Undo", so to speak. In NixOS, the entire operating system is built by the Nix package manager from a description in a purely functional build language. This means that building a new configuration cannot overwrite previous configurations. Most of the other features follow this pattern. Nix stores all packages in isolation from each other. [More about NixOS][10].
+
+![strangest linux distros](http://4.bp.blogspot.com/-rOYfBXg-UiU/VddCF7w_xuI/AAAAAAAAJ1w/Nf11bOheOwM/s1600/gobolinux.jpg)
+
+**GoboLinux**: This is another very unique Linux distro. What makes GoboLinux so different from the rest is its unique re-arrangement of the filesystem. It has its own subdirectory tree, where all of its files and programs are stored. GoboLinux does not have a package database because the filesystem is its database. In some ways, this sort of arrangement is similar to that seen in OS X. [Get GoboLinux][11]. 
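+
+To make the idea more concrete, a GoboLinux-style tree looks roughly like this (an illustrative sketch from memory, not a listing taken from a real install; exact directory names vary between releases):
+
+    /Programs/Bash/4.3/bin/bash      # each program lives in its own versioned tree
+    /Programs/Bash/Current           # a symlink marks the active version
+    /System/Links/Executables/bash   # compatibility symlinks index the trees
+    /Users/joe                       # home directories live under /Users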
+
+![strangest linux distros](http://1.bp.blogspot.com/-3P22pYfih6Y/VdcucPOv4LI/AAAAAAAAJ1g/PszZDbe83sQ/s1600/hannah-montana-linux.jpg)
+
+**Hannah Montana Linux**: Here is a Linux distro based on Kubuntu with a Hannah Montana themed boot screen, KDM, icon set, ksplash, plasma, color scheme, and wallpapers (I'm so sorry). [Link][12]. Project not active anymore.
+
+**RLSD Linux**: An extremely minimalistic, small, lightweight and security-hardened, text-based operating system built on Linux. "It's a unique distribution that provides a selection of console applications and home-grown security features which might appeal to hackers," developers claim. [RLSD Linux][13].
+
+Did we miss anything even stranger? Let us know.
+
+--------------------------------------------------------------------------------
+
+via: http://www.techdrivein.com/2015/08/the-strangest-most-unique-linux-distros.html
+
+作者:Manuel Jose
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[1]:http://puppylinux.org/main/Overview%20and%20Getting%20Started.htm
+[2]:http://qntm.org/suicide
+[3]:http://sourceforge.net/projects/suicide-linux/files/
+[4]:http://www.techdrivein.com/2015/02/papyros-material-design-linux-coming-soon.html
+[5]:https://plus.google.com/communities/109966288908859324845/stream/3262a3d3-0797-4344-bbe0-56c3adaacb69
+[6]:https://www.bountysource.com/teams/papyros
+[7]:https://www.qubes-os.org/
+[8]:http://ubuntusatanic.org/
+[9]:http://tinycorelinux.net/
+[10]:https://nixos.org/
+[11]:http://www.gobolinux.org/
+[12]:http://hannahmontana.sourceforge.net/
+[13]:http://rlsd2.dimakrasner.com/
\ No newline at end of file
diff --git a/sources/talk/20150901 Is Linux Right For You.md b/sources/talk/20150901 Is Linux Right For You.md
new file mode 100644
index 0000000000..89044347ec
--- /dev/null
+++ b/sources/talk/20150901 Is Linux Right For You.md
@@ -0,0 +1,63 @@
+Is Linux Right For You?
+================================================================================
+> Not everyone should opt for Linux -- for many users, remaining with Windows or OS X is the better choice.
+
+I enjoy using Linux on the desktop. Not because of software politics or because I despise other operating systems. I simply like Linux because it just works.
+
+It's been my experience that not everyone is cut out for the Linux lifestyle. In this article, I'll help you run through the pros and cons of making the switch to Linux so you can determine if switching is right for you.
+
+### When to make the switch ###
+
+Switching to Linux makes sense when there is a decisive reason to do so. The same can be said about moving from Windows to OS X or vice versa. In order to have success with switching, you must be able to identify your reason for jumping ship in the first place.
+
+For some people, the reason for switching is frustration with their current platform. Maybe the latest upgrade left them with a lousy experience and they're ready to chart new horizons. In other instances, perhaps it's simply a matter of curiosity. Whatever the motivation, you must have a good reason for switching operating systems. If you're pushing yourself in this direction without a good reason, then no one wins.
+
+However, there are exceptions to every rule. And if you're really interested in trying Linux on the desktop, then maybe coming to terms with a workable compromise is the way to go. 
+ +### Starting off slow ### + +After trying Linux for the first time, I've seen people blast their Windows installation to bits because they had a good experience with Ubuntu on a flash drive for 20 minutes. Folks, this isn't a test. Instead I'd suggest the following: + +- Run the [Linux distro in a virtual machine][1] for a week. This means you are committing to running that distro for all browser work, email and other tasks you might otherwise do on that machine. +- If running a VM for a week is too resource intensive, try doing the same with a USB drive running Linux that offers [some persistent storage][2]. This will allow you to leave your main OS alone and intact. At the same time, you'll still be able to "live inside" of your Linux distribution for a week. +- If you find that everything is successful after a week of running Linux, the next step is to examine how many times you booted into Windows that week. If only occasionally, then the next step is to look into [dual-booting Windows][3] and Linux. For those of you that only found themselves using their Linux distro, it might be worth considering making the switch full time. +- Before you hose your Windows partition completely, it might make more sense to purchase a second hard drive to install Linux onto instead. This allows you to dual-boot, but to do so with ample hard drive space. It also makes Windows available to you if something should come up. + +### What do you gain adopting Linux? ### + +So what does one gain by switching to Linux? Generally it comes down to personal freedom for most people. With Linux, if something isn't to your liking, you're free to change it. Using Linux also saves users oodles of money in avoiding hardware upgrades and unnecessary software expenses. Additionally, you're not burdened with tracking down lost license keys for software. And if you dislike the direction a particular distribution is headed, you can switch to another distribution with minimal hassle. + +The sheer volume of desktop choice on the Linux desktop is staggering. This level of choice might even seem overwhelming to the newcomer. But if you find a distro base (Debian, Fedora, Arch, etc) that you like, the hard work is already done. All you need to do now is find a variation of the distro and the desktop environment you prefer. + +Now one of the most common complaints I hear is that there isn't much in the way of software for Linux. However, this isn't accurate at all. While other operating systems may have more of it, today's Linux desktop has applications to do just about anything you can think of. Video editing (home and pro-level), photography, office management, remote access, music (listening and creation), plus much, much more. + +### What you lose adopting Linux? ### + +As much as I enjoy using Linux, my wife's home office relies on OS X. She's perfectly content using Linux for some tasks, however she relies on OS X for specific software not available for Linux. This is a common problem that many people face when first looking at making the switch. You must decide whether or not you're going to be losing out on critical software if you make the switch. + +Sometimes the issue is because the software has content locked down with it. In other cases, it's a workflow and functionality that was found with the legacy applications and not with the software available for Linux. I myself have never experienced this type of challenge, but I know those who have. 
Many of the software titles available for Linux are also available for other operating systems. So if there is a concern about such things, I encourage you to try out comparable apps on your native OS first. + +Another thing you might lose by switching to Linux is the luxury of local support when you need it. People scoff at this, but I know of countless instances where a newcomer to Linux was dismayed to find their only recourse for solving Linux challenges was from strangers on the Web. This is especially problematic if their only PC is the one having issues. Windows and OS X users are spoiled in that there are endless support techs in cities all over the world that support their platform(s). + +### How to proceed from here ### + +Perhaps the single biggest piece of advice to remember is always have a fallback plan. Remember, once you wipe that copy of Windows 10 from your hard drive, you may find yourself spending money to get it reinstalled. This is especially true for those of you who upgrade from other Windows releases. Accepting this, persistent flash drives with Linux or dual-booting Windows and Linux is always a preferable way forward for newcomers. Odds are that you may be just fine and take to Linux like a fish to water. But having that fallback plan in place just means you'll sleep better at night. + +If instead you've been relying on a dual-boot installation for weeks and feel ready to take the plunge, then by all means do it. Wipe your drive and start off with a clean installation of your favorite Linux distribution. I've been a full time Linux enthusiast for years and I can tell you for certain, it's a great feeling. How long? Let's just say my first Linux experience was with early Red Hat. I finally installed a dedicated installation on my laptop by 2003. + +Existing Linux enthusiasts, where did you first get started? Was your switch an exciting one or was it filled with angst? Hit the Comments and share your experiences. + +-------------------------------------------------------------------------------- + +via: http://www.datamation.com/open-source/is-linux-right-for-you.html + +作者:[Matt Hartley][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.datamation.com/author/Matt-Hartley-3080.html +[1]:http://www.psychocats.net/ubuntu/virtualbox +[2]:http://www.howtogeek.com/howto/14912/create-a-persistent-bootable-ubuntu-usb-flash-drive/ +[3]:http://www.linuxandubuntu.com/home/dual-boot-ubuntu-15-04-14-10-and-windows-10-8-1-8-step-by-step-tutorial-with-screenshots \ No newline at end of file diff --git a/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md b/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md index d92c47c774..f227e0c506 100644 --- a/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md +++ b/sources/tech/20150202 How to filter BGP routes in Quagga BGP router.md @@ -1,3 +1,4 @@ +[bazz222] How to filter BGP routes in Quagga BGP router ================================================================================ In the [previous tutorial][1], we demonstrated how to turn a CentOS box into a BGP router using Quagga. We also covered basic BGP peering and prefix exchange setup. In this tutorial, we will focus on how we can control incoming and outgoing BGP prefixes by using **prefix-list** and **route-map**. 
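 
 To give a sense of the syntax the tutorial builds on, a prefix list and a route map in Quagga look roughly like this (the names and networks are illustrative placeholders):
 
     ip prefix-list DEMO-PRFX seq 5 permit 192.168.0.0/23 le 24
     !
     route-map DEMO-RMAP permit 10
      match ip address prefix-list DEMO-PRFX
      set local-preference 500
 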
@@ -198,4 +199,4 @@ via: http://xmodulo.com/filter-bgp-routes-quagga-bgp-router.html
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
 
 [a]:http://xmodulo.com/author/sarmed
-[1]:http://xmodulo.com/centos-bgp-router-quagga.html
\ No newline at end of file
+[1]:http://xmodulo.com/centos-bgp-router-quagga.html
diff --git a/sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md b/sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md
deleted file mode 100644
index ca909934fa..0000000000
--- a/sources/tech/20150205 Install Strongswan - A Tool to Setup IPsec Based VPN in Linux.md
+++ /dev/null
@@ -1,113 +0,0 @@
-Install Strongswan - A Tool to Setup IPsec Based VPN in Linux
-================================================================================
-IPsec is a standard which provides security at the network layer. It consists of the authentication header (AH) and encapsulating security payload (ESP) components. AH provides packet integrity, while confidentiality is provided by the ESP component. IPsec ensures the following security features at the network layer.
-
-- Confidentiality
-- Packet integrity
-- Source non-repudiation
-- Replay attack protection
-
-[Strongswan][1] is an open source implementation of the IPsec protocol, and the name stands for Strong Secure WAN (StrongS/WAN). It supports both versions of automatic key exchange in IPsec VPNs (Internet Key Exchange (IKE) v1 & v2).
-
-Strongswan basically provides automatic key sharing between the two nodes/gateways of the VPN, and after that it uses the Linux kernel implementation of IPsec (AH & ESP). The key shared using the IKE mechanism is then used by ESP for the encryption of data. In the IKE phase, strongswan uses the encryption algorithms (AES, SHA, etc.) of OpenSSL and other crypto libraries. However, the ESP component of IPsec uses the security algorithms which are implemented in the Linux kernel. The main features of Strongswan are given below.
-
-- X.509 certificate or pre-shared key based authentication
-- Support of the IKEv1 and IKEv2 key exchange protocols
-- Optional built-in integrity and crypto tests for plugins and libraries
-- Support of elliptic curve DH groups and ECDSA certificates
-- Storage of RSA private keys and certificates on a smartcard
-
-It can be used in client/server (road warrior) and gateway-to-gateway scenarios.
-
-### How to Install ###
-
-Almost all Linux distros support the binary package of Strongswan. In this tutorial, we will install strongswan from the binary package and also compile the strongswan source code with desirable features.
-
-### Using the binary package ###
-
-Strongswan can be installed using the following command on Ubuntu 14.04 LTS.
-
-    $sudo aptitude install strongswan
-
-![Installation of strongswan](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-binary.png)
-
-The global configuration file (strongswan.conf) and the ipsec configuration files (ipsec.conf/ipsec.secrets) of strongswan are under the /etc/ directory.
-
-### Prerequisites for strongswan source compilation & installation ###
-
-- GMP (the mathematical/precision library used by strongswan)
-- OpenSSL (crypto algorithms are taken from this library)
-- PKCS (1,7,8,11,12) (certificate encoding and smart card integration with Strongswan)
-
-#### Procedure ####
-
-**1)** Go to the /usr/src/ directory using the following command in the terminal. 
-
-    $cd /usr/src
-
-**2)** Download the source code from the strongswan site using the following command.
-
-    $sudo wget http://download.strongswan.org/strongswan-5.2.1.tar.gz
-
-(strongswan-5.2.1.tar.gz is the latest version.)
-
-![Downloading software](http://blog.linoxide.com/wp-content/uploads/2014/12/download_strongswan.png)
-
-**3)** Extract the downloaded tarball and enter its directory using the following command.
-
-    $sudo tar -xvzf strongswan-5.2.1.tar.gz; cd strongswan-5.2.1
-
-**4)** Configure strongswan with the desired options using the configure command.
-
-    ./configure --prefix=/usr/local --enable-pkcs11 --enable-openssl
-
-![checking packages for strongswan](http://blog.linoxide.com/wp-content/uploads/2014/12/strongswan-configure.png)
-
-If the GMP library is not installed, then the configure script will generate the following error.
-
-![GMP library error](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-error.png)
-
-Therefore, first of all, install the GMP library using the following command and then run the configure script again.
-
-![gmp installation](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-installation1.png)
-
-However, if GMP is already installed and the above error still exists, then create a soft link to the libgmp.so library at the /usr/lib, /lib/, /usr/lib/x86_64-linux-gnu/ paths in Ubuntu using the following command.
-
-    $ sudo ln -s /usr/lib/x86_64-linux-gnu/libgmp.so.10.1.3 /usr/lib/x86_64-linux-gnu/libgmp.so
-
-![softlink of libgmp.so library](http://blog.linoxide.com/wp-content/uploads/2014/12/softlink.png)
-
-After the creation of the libgmp.so softlink, run the ./configure script again and it should find the gmp library. However, it may generate another error about the gmp header file, as shown in the following figure.
-
-![GMP header file issue](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-header.png)
-
-Install the libgmp-dev package using the following command to solve the above error.
-
-    $sudo aptitude install libgmp-dev
-
-![Installation of Development library of GMP](http://blog.linoxide.com/wp-content/uploads/2014/12/gmp-dev.png)
-
-After installation of the development package of the gmp library, run the configure script once more; if it does not produce any error, the following output will be displayed.
-
-![Output of Configure script](http://blog.linoxide.com/wp-content/uploads/2014/12/successful-run.png)
-
-Type the following commands for the compilation and installation of strongswan.
-
-    $ sudo make ; sudo make install
-
-After the installation of strongswan, the global configuration file (strongswan.conf) and the ipsec policy/secret configuration files (ipsec.conf/ipsec.secrets) are placed in the **/usr/local/etc** directory.
-
-Strongswan can be used in tunnel or transport mode depending on our security needs. It provides well-known site-to-site and road warrior VPNs. It can be used easily with Cisco and Juniper devices. 
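-
-As a quick taste of what such a setup looks like, below is a minimal sketch of a site-to-site tunnel in ipsec.conf, using placeholder addresses and a made-up connection name; adjust the gateways, subnets and key exchange version to your environment.
-
-    # gateway A (left) is 203.0.113.1 with LAN 192.168.1.0/24
-    # gateway B (right) is 203.0.113.2 with LAN 192.168.2.0/24
-    conn site-to-site
-        left=203.0.113.1
-        leftsubnet=192.168.1.0/24
-        right=203.0.113.2
-        rightsubnet=192.168.2.0/24
-        keyexchange=ikev2
-        authby=secret
-        auto=start
-
-The matching pre-shared key goes into ipsec.secrets:
-
-    203.0.113.1 203.0.113.2 : PSK "use-a-long-random-secret"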
-
--------------------------------------------------------------------------------
-
-via: http://linoxide.com/security/install-strongswan/
-
-作者:[nido][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/naveeda/
-[1]:https://www.strongswan.org/
\ No newline at end of file
diff --git a/sources/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md b/sources/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md
deleted file mode 100644
index 77292a03ee..0000000000
--- a/sources/tech/20150527 Howto Manage Host Using Docker Machine in a VirtualBox.md
+++ /dev/null
@@ -1,114 +0,0 @@
-[bazz2]
-Howto Manage Host Using Docker Machine in a VirtualBox
-================================================================================
-Hi all, today we'll learn how to create and manage a Docker host using Docker Machine in a VirtualBox. Docker Machine is an application that helps to create Docker hosts on our computer, on cloud providers and inside our own data center. It provides an easy solution for creating servers, installing Docker on them, and then configuring the Docker client according to the user's configuration and requirements. Its API works for provisioning Docker on a local machine, on a virtual machine in the data center, or on a public cloud instance. Docker Machine is supported on Windows, OSX, and Linux and is available for installation as a single standalone binary. It enables us to take full advantage of ecosystem partners providing Docker-ready infrastructure, while still accessing everything through the same interface. It lets people deploy docker containers on the respective platform quickly and easily with just a single command.
-
-Here are some easy and simple steps that help us to deploy docker containers using Docker Machine.
-
-### 1. Installing Docker Machine ###
-
-Docker Machine works great on every Linux operating system. First of all, we'll need to download the latest version of Docker Machine from the [Github site][1]. Here, we'll use curl to download the latest version of Docker Machine, i.e. 0.2.0.
-
-**For 64 Bit Operating System**
-
-    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
-
-**For 32 Bit Operating System**
-
-    # curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_linux-i386 > /usr/local/bin/docker-machine
-
-After downloading the latest release of Docker Machine, we'll make the file named **docker-machine** under **/usr/local/bin/** executable using the command below.
-
-    # chmod +x /usr/local/bin/docker-machine
-
-After doing the above, we'll want to ensure that we have successfully installed docker-machine. To check it, we can run docker-machine -v, which will output the version of docker-machine installed on our system.
-
-    # docker-machine -v
-
-![Installing Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/installing-docker-machine.png)
-
-To enable Docker commands on our machines, make sure to install the Docker client as well by running the command below.
-
-    # curl -L https://get.docker.com/builds/linux/x86_64/docker-latest > /usr/local/bin/docker
-    # chmod +x /usr/local/bin/docker
-
-### 2. 
Creating a VirtualBox VM ###
-
-After we have successfully installed Docker Machine on our Linux machine, we'll want to create a virtual machine using VirtualBox. To get started, we need to run the docker-machine create command followed by the --driver flag with the value virtualbox, as we are trying to deploy docker inside a VM running in VirtualBox; the final argument is the name of the machine, and here we name it "linux". This command will download the [boot2docker][2] iso, a lightweight linux distribution based on Tiny Core Linux with the Docker daemon installed, and will create and start a VirtualBox VM with Docker running, as mentioned above.
-
-To do so, we'll run the following command in a terminal or shell on our box.
-
-    # docker-machine create --driver virtualbox linux
-
-![Creating Docker Machine](http://blog.linoxide.com/wp-content/uploads/2015/05/creating-docker-machine.png)
-
-Now, to check whether we have successfully created a VirtualBox VM running Docker or not, we'll run the command **docker-machine ls** as shown below.
-
-    # docker-machine ls
-
-![Docker Machine List](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-machine-list.png)
-
-If the host is active, we can see * under the ACTIVE column in the output, as shown above.
-
-### 3. Setting Environment Variables ###
-
-Now, we'll need to make docker talk with the machine. We can do that by running docker-machine env followed by the machine name, **linux** in our case.
-
-    # eval "$(docker-machine env linux)"
-    # docker ps
-
-This will set the environment variables that the Docker client will read, which specify the TLS settings. Note that we'll need to do this every time we reboot our machine or start a new tab. We can see what variables will be set by running the following command.
-
-    # docker-machine env linux
-
-    export DOCKER_TLS_VERIFY=1
-    export DOCKER_CERT_PATH=/Users//.docker/machine/machines/dev
-    export DOCKER_HOST=tcp://192.168.99.100:2376
-
-### 4. Running Docker Containers ###
-
-Finally, after configuring the environment variables and the virtual machine, we are able to run docker containers on the host running inside the virtual machine. To give it a test, we'll run a busybox container by running the **docker run busybox** command with **echo hello world**, so that we can see the output of the container.
-
-    # docker run busybox echo hello world
-
-![Running Docker Container](http://blog.linoxide.com/wp-content/uploads/2015/05/running-docker-container.png)
-
-### 5. Getting the Docker Host's IP ###
-
-We can get the IP address of the running Docker host using the **docker-machine ip** command. We can then reach any exposed ports at the Docker host's IP address.
-
-    # docker-machine ip
-
-![Docker IP Address](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-ip-address.png)
-
-### 6. Managing the Hosts ###
-
-Now we can manage as many local VMs running Docker as we desire by running the docker-machine create command again and again, as mentioned in the above steps.
-
-When we are finished working with the running docker hosts, we can simply run the **docker-machine stop** command to stop all the hosts which are active, and when we want to start them again, we can run **docker-machine start**.
-
-    # docker-machine stop
-    # docker-machine start
-
-You can also specify a host to stop or start using the host name as an argument. 
-
-    $ docker-machine stop linux
-    $ docker-machine start linux
-
-### Conclusion ###
-
-Finally, we have successfully created and managed a Docker host inside a VirtualBox using Docker Machine. Really, Docker Machine enables people to quickly and easily create, deploy and manage Docker hosts on different platforms, as here we are running Docker hosts using the VirtualBox platform. This virtualbox driver API works for provisioning Docker on a local machine or on a virtual machine in the data center. Docker Machine ships with drivers for provisioning Docker locally with VirtualBox as well as remotely on Digital Ocean instances, whereas more drivers are in the works for AWS, Azure, VMware, and other infrastructure. If you have any questions, suggestions, or feedback, please write them in the comment box below so that we can improve or update our contents. Thank you! Enjoy :-)
-
--------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/host-virtualbox-docker-machine/
-
-作者:[Arun Pyasi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arunp/
-[1]:https://github.com/docker/machine/releases
-[2]:https://github.com/boot2docker/boot2docker
diff --git a/sources/tech/20150728 Process of the Linux kernel building.md b/sources/tech/20150728 Process of the Linux kernel building.md
deleted file mode 100644
index 1c03ebbe72..0000000000
--- a/sources/tech/20150728 Process of the Linux kernel building.md
+++ /dev/null
@@ -1,676 +0,0 @@
-Translating by Ezio
-
-Process of the Linux kernel building
-================================================================================
-Introduction
---------------------------------------------------------------------------------
-
-I will not tell you how to build and install a custom Linux kernel on your machine; you can find many, many [resources](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) that will help you to do it. Instead, in this part we will see what occurs when you type `make` in the directory with the Linux kernel source code. When I just started to learn the source code of the Linux kernel, the [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) was the first file that I opened. And it was scary :) This [makefile](https://en.wikipedia.org/wiki/Make_%28software%29) contained `1591` lines of code at the time when I wrote this part, which was the [third](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) release candidate.
-
-This makefile is the top makefile in the Linux kernel source code, and the kernel build starts here. Yes, it is big, but moreover, if you've read the source code of the Linux kernel you may have noted that every directory with source code has its own makefile. Of course it is not realistic to describe how each source file is compiled and linked, so we will look at compilation only for the standard case. You will not find here the building of the kernel's documentation, cleaning of the kernel source code, [tags](https://en.wikipedia.org/wiki/Ctags) generation, [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler) related stuff, etc. We will start from the `make` execution with the standard kernel configuration file and will finish with the building of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage). 
- -It would be good if you're already familiar with the [make](https://en.wikipedia.org/wiki/Make_%28software%29) util, but I will anyway try to describe all code that will be in this part. - -So let's start. - -Preparation before the kernel compilation ---------------------------------------------------------------------------------- - -There are many things to preparate before the kernel compilation will be started. The main point here is to find and configure -the type of compilation, to parse command line arguments that are passed to the `make` util and etc. So let's dive into the top `Makefile` of the Linux kernel. - -The Linux kernel top `Makefile` is responsible for building two major products: [vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (the resident kernel image) and the modules (any module files). The [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel starts from the definition of the following variables: - -```Makefile -VERSION = 4 -PATCHLEVEL = 2 -SUBLEVEL = 0 -EXTRAVERSION = -rc3 -NAME = Hurr durr I'ma sheep -``` - -These variables determine the current version of the Linux kernel and are used in the different places, for example in the forming of the `KERNELVERSION` variable: - -```Makefile -KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION) -``` - -After this we can see a couple of the `ifeq` condition that check some of the parameters passed to `make`. The Linux kernel `makefiles` provides a special `make help` target that prints all available targets and some of the command line arguments that can be passed to `make`. For example: `make V=1` - provides verbose builds. The first `ifeq` condition checks if the `V=n` option is passed to make: - -```Makefile -ifeq ("$(origin V)", "command line") - KBUILD_VERBOSE = $(V) -endif -ifndef KBUILD_VERBOSE - KBUILD_VERBOSE = 0 -endif - -ifeq ($(KBUILD_VERBOSE),1) - quiet = - Q = -else - quiet=quiet_ - Q = @ -endif - -export quiet Q KBUILD_VERBOSE -``` - -If this option is passed to `make` we set the `KBUILD_VERBOSE` variable to the value of the `V` option. Otherwise we set the `KBUILD_VERBOSE` variable to zero. After this we check value of the `KBUILD_VERBOSE` variable and set values of the `quiet` and `Q` variables depends on the `KBUILD_VERBOSE` value. The `@` symbols suppress the output of the command and if it will be set before a command we will see something like this: `CC scripts/mod/empty.o` instead of the `Compiling .... scripts/mod/empty.o`. In the end we just export all of these variables. The next `ifeq` statement checks that `O=/dir` option was passed to the `make`. 
This option allows us to locate all output files in the given `dir`:
-
-```Makefile
-ifeq ($(KBUILD_SRC),)
-
-ifeq ("$(origin O)", "command line")
-  KBUILD_OUTPUT := $(O)
-endif
-
-ifneq ($(KBUILD_OUTPUT),)
-saved-output := $(KBUILD_OUTPUT)
-KBUILD_OUTPUT := $(shell mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) \
-								&& /bin/pwd)
-$(if $(KBUILD_OUTPUT),, \
-     $(error failed to create output directory "$(saved-output)"))
-
-sub-make: FORCE
-	$(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \
-	-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))
-
-skip-makefile := 1
-endif # ifneq ($(KBUILD_OUTPUT),)
-endif # ifeq ($(KBUILD_SRC),)
-```
-
-We check the `KBUILD_SRC` variable, which represents the top directory of the Linux kernel source code; if it is empty (and it is empty the first time the makefile is executed), we set the `KBUILD_OUTPUT` variable to the value passed with the `O` option (if this option was passed). In the next step we check this `KBUILD_OUTPUT` variable, and if it is set, we do the following things:
-
-* Store the value of `KBUILD_OUTPUT` in the temporary `saved-output` variable;
-* Try to create the given output directory;
-* Check that the directory was created, and print an error otherwise;
-* If the custom output directory was created successfully, execute `make` again with the new directory (see the `-C` option).
-
-The next `ifeq` statements check whether the `C` or `M` options were passed to make:
-
-```Makefile
-ifeq ("$(origin C)", "command line")
-  KBUILD_CHECKSRC = $(C)
-endif
-ifndef KBUILD_CHECKSRC
-  KBUILD_CHECKSRC = 0
-endif
-
-ifeq ("$(origin M)", "command line")
-  KBUILD_EXTMOD := $(M)
-endif
-```
-
-The first option, `C`, tells the `makefile` that all `c` source code needs to be checked with the tool provided by the `$CHECK` environment variable; by default it is [sparse](https://en.wikipedia.org/wiki/Sparse). The second option, `M`, provides a build for external modules (we will not see this case in this part). Once these variables are set, we check the `KBUILD_SRC` variable, and if it is not set we set the `srctree` variable to `.`:
-
-```Makefile
-ifeq ($(KBUILD_SRC),)
-        srctree := .
-endif
-
-objtree		:= .
-src		:= $(srctree)
-obj		:= $(objtree)
-
-export srctree objtree VPATH
-```
-
-That tells the `Makefile` that the source tree of the Linux kernel is in the current directory where the `make` command was executed. After this we set `objtree` and other variables to this directory and export these variables. The next step is getting the value for the `SUBARCH` variable, which represents what the underlying architecture is:
-
-```Makefile
-SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
-				  -e s/sun4u/sparc64/ \
-				  -e s/arm.*/arm/ -e s/sa110/arm/ \
-				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
-				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
-				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
-```
-
-As you can see, it executes the [uname](https://en.wikipedia.org/wiki/Uname) util, which prints information about the machine, operating system and architecture. It takes the output of the `uname` util, parses it, and assigns the result to the `SUBARCH` variable. Once we have `SUBARCH`, we set the `SRCARCH` variable, which provides the directory of the certain architecture, and `hdr-arch`, which provides the directory for the header files:
-
-```Makefile
-ifeq ($(ARCH),i386)
-        SRCARCH := x86
-endif
-ifeq ($(ARCH),x86_64)
-        SRCARCH := x86
-endif
-
-hdr-arch  := $(SRCARCH)
-```
-
-Note that `ARCH` is an alias for `SUBARCH`. 
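-
-For instance, on a typical 64-bit PC this resolves as follows (showing only the relevant `sed` expression; the exact value depends on your hardware, of course):
-
-```
-$ uname -m
-x86_64
-$ uname -m | sed -e s/x86_64/x86/
-x86
-```
-
-So here `SUBARCH` becomes `x86`, `ARCH` gets the same value, and the build will use the `arch/x86` directories.
-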
In the next step we set the `KCONFIG_CONFIG` variable that represents path to the kernel configuration file and if it was not set before, it will be `.config` by default: - -```Makefile -KCONFIG_CONFIG ?= .config -export KCONFIG_CONFIG -``` - -and the [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) that will be used during kernel compilation: - -```Makefile -CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \ - else if [ -x /bin/bash ]; then echo /bin/bash; \ - else echo sh; fi ; fi) -``` - -The next set of variables related to the compiler that will be used during Linux kernel compilation. We set the host compilers for the `c` and `c++` and flags for it: - -```Makefile -HOSTCC = gcc -HOSTCXX = g++ -HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -std=gnu89 -HOSTCXXFLAGS = -O2 -``` - -Next we will meet the `CC` variable that represent compiler too, so why do we need in the `HOST*` variables? The `CC` is the target compiler that will be used during kernel compilation, but `HOSTCC` will be used during compilation of the set of the `host` programs (we will see it soon). After this we can see definition of the `KBUILD_MODULES` and `KBUILD_BUILTIN` variables that are used for the determination of the what to compile (kernel, modules or both): - -```Makefile -KBUILD_MODULES := -KBUILD_BUILTIN := 1 - -ifeq ($(MAKECMDGOALS),modules) - KBUILD_BUILTIN := $(if $(CONFIG_MODVERSIONS),1) -endif -``` - -Here we can see definition of these variables and the value of the `KBUILD_BUILTIN` will depens on the `CONFIG_MODVERSIONS` kernel configuration parameter if we pass only `modules` to the `make`. The next step is including of the: - -```Makefile -include scripts/Kbuild.include -``` - -`kbuild` file. The [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) or `Kernel Build System` is the special infrastructure to manage building of the kernel and its modules. The `kbuild` files has the same syntax that makefiles. The [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) file provides some generic definitions for the `kbuild` system. As we included this `kbuild` files we can see definition of the variables that are related to the different tools that will be used during kernel and modules compilation (like linker, compilers, utils from the [binutils](http://www.gnu.org/software/binutils/) and etc...): - -```Makefile -AS = $(CROSS_COMPILE)as -LD = $(CROSS_COMPILE)ld -CC = $(CROSS_COMPILE)gcc -CPP = $(CC) -E -AR = $(CROSS_COMPILE)ar -NM = $(CROSS_COMPILE)nm -STRIP = $(CROSS_COMPILE)strip -OBJCOPY = $(CROSS_COMPILE)objcopy -OBJDUMP = $(CROSS_COMPILE)objdump -AWK = awk -... -... -... -``` - -After definition of these variables we define two variables: `USERINCLUDE` and `LINUXINCLUDE`. They will contain paths of the directories with headers (public for users in the first case and for kernel in the second case): - -```Makefile -USERINCLUDE := \ - -I$(srctree)/arch/$(hdr-arch)/include/uapi \ - -Iarch/$(hdr-arch)/include/generated/uapi \ - -I$(srctree)/include/uapi \ - -Iinclude/generated/uapi \ - -include $(srctree)/include/linux/kconfig.h - -LINUXINCLUDE := \ - -I$(srctree)/arch/$(hdr-arch)/include \ - ... 
-``` - -And the standard flags for the C compiler: - -```Makefile -KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ - -fno-strict-aliasing -fno-common \ - -Werror-implicit-function-declaration \ - -Wno-format-security \ - -std=gnu89 -``` - -It is the not last compiler flags, they can be updated by the other makefiles (for example kbuilds from `arch/`). After all of these, all variables will be exported to be available in the other makefiles. The following two the `RCS_FIND_IGNORE` and the `RCS_TAR_IGNORE` variables will contain files that will be ignored in the version control system: - -```Makefile -export RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o \ - -name CVS -o -name .pc -o -name .hg -o -name .git \) \ - -prune -o -export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \ - --exclude CVS --exclude .pc --exclude .hg --exclude .git -``` - -That's all. We have finished with the all preparations, next point is the building of `vmlinux`. - -Directly to the kernel build --------------------------------------------------------------------------------- - -As we have finished all preparations, next step in the root makefile is related to the kernel build. Before this moment we will not see in the our terminal after the execution of the `make` command. But now first steps of the compilation are started. In this moment we need to go on the [598](https://github.com/torvalds/linux/blob/master/Makefile#L598) line of the Linux kernel top makefile and we will see `vmlinux` target there: - -```Makefile -all: vmlinux - include arch/$(SRCARCH)/Makefile -``` - -Don't worry that we have missed many lines in Makefile that are placed after `export RCS_FIND_IGNORE.....` and before `all: vmlinux.....`. This part of the makefile is responsible for the `make *.config` targets and as I wrote in the beginning of this part we will see only building of the kernel in a general way. - -The `all:` target is the default when no target is given on the command line. You can see here that we include architecture specific makefile there (in our case it will be [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). From this moment we will continue from this makefile. As we can see `all` target depends on the `vmlinux` target that defined a little lower in the top makefile: - -```Makefile -vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE -``` - -The `vmlinux` is is the Linux kernel in an statically linked executable file format. The [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script links combines different compiled subsystems into vmlinux. The second target is the `vmlinux-deps` that defined as: - -```Makefile -vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) -``` - -and consists from the set of the `built-in.o` from the each top directory of the Linux kernel. Later, when we will go through all directories in the Linux kernel, the `Kbuild` will compile all the `$(obj-y)` files. It then calls `$(LD) -r` to merge these files into one `built-in.o` file. For this moment we have no `vmlinux-deps`, so the `vmlinux` target will not be executed now. 
For me `vmlinux-deps` contains following files: - -``` -arch/x86/kernel/vmlinux.lds arch/x86/kernel/head_64.o -arch/x86/kernel/head64.o arch/x86/kernel/head.o -init/built-in.o usr/built-in.o -arch/x86/built-in.o kernel/built-in.o -mm/built-in.o fs/built-in.o -ipc/built-in.o security/built-in.o -crypto/built-in.o block/built-in.o -lib/lib.a arch/x86/lib/lib.a -lib/built-in.o arch/x86/lib/built-in.o -drivers/built-in.o sound/built-in.o -firmware/built-in.o arch/x86/pci/built-in.o -arch/x86/power/built-in.o arch/x86/video/built-in.o -net/built-in.o -``` - -The next target that can be executed is following: - -```Makefile -$(sort $(vmlinux-deps)): $(vmlinux-dirs) ; -$(vmlinux-dirs): prepare scripts - $(Q)$(MAKE) $(build)=$@ -``` - -As we can see the `vmlinux-dirs` depends on the two targets: `prepare` and `scripts`. The first `prepare` defined in the top `Makefile` of the Linux kernel and executes three stages of preparations: - -```Makefile -prepare: prepare0 -prepare0: archprepare FORCE - $(Q)$(MAKE) $(build)=. -archprepare: archheaders archscripts prepare1 scripts_basic - -prepare1: prepare2 $(version_h) include/generated/utsrelease.h \ - include/config/auto.conf - $(cmd_crmodverdir) -prepare2: prepare3 outputmakefile asm-generic -``` - -The first `prepare0` expands to the `archprepare` that exapnds to the `archheaders` and `archscripts` that defined in the `x86_64` specific [Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look on it. The `x86_64` specific makefile starts from the definition of the variables that are related to the archicteture-specific configs ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs) and etc.). After this it defines flags for the compiling of the [16-bit](https://en.wikipedia.org/wiki/Real_mode) code, calculating of the `BITS` variable that can be `32` for `i386` or `64` for the `x86_64` flags for the assembly source code, flags for the linker and many many more (all definitions you can find in the [arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)). The first target is `archheaders` in the makefile generates syscall table: - -```Makefile -archheaders: - $(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all -``` - -And the second target is `archscripts` in this makefile is: - -```Makefile -archscripts: scripts_basic - $(Q)$(MAKE) $(build)=arch/x86/tools relocs -``` - -We can see that it depends on the `scripts_basic` target from the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile). At the first we can see the `scripts_basic` target that executes make for the [scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) makefile: - -```Maklefile -scripts_basic: - $(Q)$(MAKE) $(build)=scripts/basic -``` - -The `scripts/basic/Makefile` contains targets for compilation of the two host programs: `fixdep` and `bin2`: - -```Makefile -hostprogs-y := fixdep -hostprogs-$(CONFIG_BUILD_BIN2C) += bin2c -always := $(hostprogs-y) - -$(addprefix $(obj)/,$(filter-out fixdep,$(always))): $(obj)/fixdep -``` - -First program is `fixdep` - optimizes list of dependencies generated by the [gcc](https://gcc.gnu.org/) that tells make when to remake a source code file. The second program is `bin2c` depends on the value of the `CONFIG_BUILD_BIN2C` kernel configuration option and very little C program that allows to convert a binary on stdin to a C include on stdout. You can note here strange notation: `hostprogs-y` and etc. 
This notation is used in the all `kbuild` files and more about it you can read in the [documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt). In our case the `hostprogs-y` tells to the `kbuild` that there is one host program named `fixdep` that will be built from the will be built from `fixdep.c` that located in the same directory that `Makefile`. The first output after we will execute `make` command in our terminal will be result of this `kbuild` file: - -``` -$ make - HOSTCC scripts/basic/fixdep -``` - -As `script_basic` target was executed, the `archscripts` target will execute `make` for the [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) makefile with the `relocs` target: - -```Makefile -$(Q)$(MAKE) $(build)=arch/x86/tools relocs -``` - -The `relocs_32.c` and the `relocs_64.c` will be compiled that will contain [relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29) information and we will see it in the `make` output: - -```Makefile - HOSTCC arch/x86/tools/relocs_32.o - HOSTCC arch/x86/tools/relocs_64.o - HOSTCC arch/x86/tools/relocs_common.o - HOSTLD arch/x86/tools/relocs -``` - -There is checking of the `version.h` after compiling of the `relocs.c`: - -```Makefile -$(version_h): $(srctree)/Makefile FORCE - $(call filechk,version.h) - $(Q)rm -f $(old_version_h) -``` - -We can see it in the output: - -``` -CHK include/config/kernel.release -``` - -and the building of the `generic` assembly headers with the `asm-generic` target from the `arch/x86/include/generated/asm` that generated in the top Makefile of the Linux kernel. After the `asm-generic` target the `archprepare` will be done, so the `prepare0` target will be executed. As I wrote above: - -```Makefile -prepare0: archprepare FORCE - $(Q)$(MAKE) $(build)=. -``` - -Note on the `build`. It defined in the [scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) and looks like this: - -```Makefile -build := -f $(srctree)/scripts/Makefile.build obj -``` - -or in our case it is current source directory - `.`: - -```Makefile -$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=. -``` - -The [scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) tries to find the `Kbuild` file by the given directory via the `obj` parameter, include this `Kbuild` files: - -```Makefile -include $(kbuild-file) -``` - -and build targets from it. In our case `.` contains the [Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild) file that generates the `kernel/bounds.s` and the `arch/x86/kernel/asm-offsets.s`. After this the `prepare` target finished to work. The `vmlinux-dirs` also depends on the second target - `scripts` that compiles following programs: `file2alias`, `mk_elfconfig`, `modpost` and etc... After scripts/host-programs compilation our `vmlinux-dirs` target can be executed. First of all let's try to understand what does `vmlinux-dirs` contain. 
For my case it contains paths of the following kernel directories: - -``` -init usr arch/x86 kernel mm fs ipc security crypto block -drivers sound firmware arch/x86/pci arch/x86/power -arch/x86/video net lib arch/x86/lib -``` - -We can find definition of the `vmlinux-dirs` in the top [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) of the Linux kernel: - -```Makefile -vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \ - $(core-y) $(core-m) $(drivers-y) $(drivers-m) \ - $(net-y) $(net-m) $(libs-y) $(libs-m))) - -init-y := init/ -drivers-y := drivers/ sound/ firmware/ -net-y := net/ -libs-y := lib/ -... -... -... -``` - -Here we remove the `/` symbol from the each directory with the help of the `patsubst` and `filter` functions and put it to the `vmlinux-dirs`. So we have list of directories in the `vmlinux-dirs` and the following code: - -```Makefile -$(vmlinux-dirs): prepare scripts - $(Q)$(MAKE) $(build)=$@ -``` - -The `$@` represents `vmlinux-dirs` here that means that it will go recursively over all directories from the `vmlinux-dirs` and its internal directories (depens on configuration) and will execute `make` in there. We can see it in the output: - -``` - CC init/main.o - CHK include/generated/compile.h - CC init/version.o - CC init/do_mounts.o - ... - CC arch/x86/crypto/glue_helper.o - AS arch/x86/crypto/aes-x86_64-asm_64.o - CC arch/x86/crypto/aes_glue.o - ... - AS arch/x86/entry/entry_64.o - AS arch/x86/entry/thunk_64.o - CC arch/x86/entry/syscall_64.o -``` - -Source code in each directory will be compiled and linked to the `built-in.o`: - -``` -$ find . -name built-in.o -./arch/x86/crypto/built-in.o -./arch/x86/crypto/sha-mb/built-in.o -./arch/x86/net/built-in.o -./init/built-in.o -./usr/built-in.o -... -... -``` - -Ok, all buint-in.o(s) built, now we can back to the `vmlinux` target. As you remember, the `vmlinux` target is in the top Makefile of the Linux kernel. Before the linking of the `vmlinux` it builds [samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation) and etc., but I will not describe it in this part as I wrote in the beginning of this part. - -```Makefile -vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE - ... - ... - +$(call if_changed,link-vmlinux) -``` - -As you can see main purpose of it is a call of the [scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) script is linking of the all `built-in.o`(s) to the one statically linked executable and creation of the [System.map](https://en.wikipedia.org/wiki/System.map). In the end we will see following output: - -``` - LINK vmlinux - LD vmlinux.o - MODPOST vmlinux.o - GEN .version - CHK include/generated/compile.h - UPD include/generated/compile.h - CC init/version.o - LD init/built-in.o - KSYM .tmp_kallsyms1.o - KSYM .tmp_kallsyms2.o - LD vmlinux - SORTEX vmlinux - SYSMAP System.map -``` - -and `vmlinux` and `System.map` in the root of the Linux kernel source tree: - -``` -$ ls vmlinux System.map -System.map vmlinux -``` - -That's all, `vmlinux` is ready. The next step is creation of the [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage). - -Building bzImage --------------------------------------------------------------------------------- - -The `bzImage` is the compressed Linux kernel image. We can get it with the execution of the `make bzImage` after the `vmlinux` built. 
In other way we can just execute `make` without arguments and will get `bzImage` anyway because it is default image: - -```Makefile -all: bzImage -``` - -in the [arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile). Let's look on this target, it will help us to understand how this image builds. As I already said the `bzImage` target defined in the [arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) and looks like this: - -```Makefile -bzImage: vmlinux - $(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE) - $(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot - $(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@ -``` - -We can see here, that first of all called `make` for the boot directory, in our case it is: - -```Makefile -boot := arch/x86/boot -``` - -The main goal now to build source code in the `arch/x86/boot` and `arch/x86/boot/compressed` directories, build `setup.bin` and `vmlinux.bin`, and build the `bzImage` from they in the end. First target in the [arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) is the `$(obj)/setup.elf`: - -```Makefile -$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE - $(call if_changed,ld) -``` - -We already have the `setup.ld` linker script in the `arch/x86/boot` directory and the `SETUP_OBJS` expands to the all source files from the `boot` directory. We can see first output: - -```Makefile - AS arch/x86/boot/bioscall.o - CC arch/x86/boot/cmdline.o - AS arch/x86/boot/copy.o - HOSTCC arch/x86/boot/mkcpustr - CPUSTR arch/x86/boot/cpustr.h - CC arch/x86/boot/cpu.o - CC arch/x86/boot/cpuflags.o - CC arch/x86/boot/cpucheck.o - CC arch/x86/boot/early_serial_console.o - CC arch/x86/boot/edd.o -``` - -The next source code file is the [arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S), but we can't build it now because this target depends on the following two header files: - -```Makefile -$(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h -``` - -The first is `voffset.h` generated by the `sed` script that gets two addresses from the `vmlinux` with the `nm` util: - -```C -#define VO__end 0xffffffff82ab0000 -#define VO__text 0xffffffff81000000 -``` - -They are start and end of the kernel. The second is `zoffset.h` depens on the `vmlinux` target from the [arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile): - -```Makefile -$(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE - $(call if_changed,zoffset) -``` - -The `$(obj)/compressed/vmlinux` target depends on the `vmlinux-objs-y` that compiles source code files from the [arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) directory and generates `vmlinux.bin`, `vmlinux.bin.bz2`, and compiles programm - `mkpiggy`. We can see this in the output: - -```Makefile - LDS arch/x86/boot/compressed/vmlinux.lds - AS arch/x86/boot/compressed/head_64.o - CC arch/x86/boot/compressed/misc.o - CC arch/x86/boot/compressed/string.o - CC arch/x86/boot/compressed/cmdline.o - OBJCOPY arch/x86/boot/compressed/vmlinux.bin - BZIP2 arch/x86/boot/compressed/vmlinux.bin.bz2 - HOSTCC arch/x86/boot/compressed/mkpiggy -``` - -Where the `vmlinux.bin` is the `vmlinux` with striped debuging information and comments and the `vmlinux.bin.bz2` compressed `vmlinux.bin.all` + `u32` size of `vmlinux.bin.all`. 
The `vmlinux.bin.all` is `vmlinux.bin + vmlinux.relocs`, where `vmlinux.relocs` is the `vmlinux` that was handled by the `relocs` program (see above). As we got these files, the `piggy.S` assembly files will be generated with the `mkpiggy` program and compiled: - -```Makefile - MKPIGGY arch/x86/boot/compressed/piggy.S - AS arch/x86/boot/compressed/piggy.o -``` - -This assembly files will contain computed offset from a compressed kernel. After this we can see that `zoffset` generated: - -```Makefile - ZOFFSET arch/x86/boot/zoffset.h -``` - -As the `zoffset.h` and the `voffset.h` are generated, compilation of the source code files from the [arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) can be continued: - -```Makefile - AS arch/x86/boot/header.o - CC arch/x86/boot/main.o - CC arch/x86/boot/mca.o - CC arch/x86/boot/memory.o - CC arch/x86/boot/pm.o - AS arch/x86/boot/pmjump.o - CC arch/x86/boot/printf.o - CC arch/x86/boot/regs.o - CC arch/x86/boot/string.o - CC arch/x86/boot/tty.o - CC arch/x86/boot/video.o - CC arch/x86/boot/video-mode.o - CC arch/x86/boot/video-vga.o - CC arch/x86/boot/video-vesa.o - CC arch/x86/boot/video-bios.o -``` - -As all source code files will be compiled, they will be linked to the `setup.elf`: - -```Makefile - LD arch/x86/boot/setup.elf -``` - -or: - -``` -ld -m elf_x86_64 -T arch/x86/boot/setup.ld arch/x86/boot/a20.o arch/x86/boot/bioscall.o arch/x86/boot/cmdline.o arch/x86/boot/copy.o arch/x86/boot/cpu.o arch/x86/boot/cpuflags.o arch/x86/boot/cpucheck.o arch/x86/boot/early_serial_console.o arch/x86/boot/edd.o arch/x86/boot/header.o arch/x86/boot/main.o arch/x86/boot/mca.o arch/x86/boot/memory.o arch/x86/boot/pm.o arch/x86/boot/pmjump.o arch/x86/boot/printf.o arch/x86/boot/regs.o arch/x86/boot/string.o arch/x86/boot/tty.o arch/x86/boot/video.o arch/x86/boot/video-mode.o arch/x86/boot/version.o arch/x86/boot/video-vga.o arch/x86/boot/video-vesa.o arch/x86/boot/video-bios.o -o arch/x86/boot/setup.elf -``` - -The last two things is the creation of the `setup.bin` that will contain compiled code from the `arch/x86/boot/*` directory: - -``` -objcopy -O binary arch/x86/boot/setup.elf arch/x86/boot/setup.bin -``` - -and the creation of the `vmlinux.bin` from the `vmlinux`: - -``` -objcopy -O binary -R .note -R .comment -S arch/x86/boot/compressed/vmlinux arch/x86/boot/vmlinux.bin -``` - -In the end we compile host program: [arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c) that will create our `bzImage` from the `setup.bin` and the `vmlinux.bin`: - -``` -arch/x86/boot/tools/build arch/x86/boot/setup.bin arch/x86/boot/vmlinux.bin arch/x86/boot/zoffset.h arch/x86/boot/bzImage -``` - -Actually the `bzImage` is the concatenated `setup.bin` and the `vmlinux.bin`. In the end we will see the output which familiar to all who once build the Linux kernel from source: - -``` -Setup is 16268 bytes (padded to 16384 bytes). -System is 4704 kB -CRC 94a88f9a -Kernel: arch/x86/boot/bzImage is ready (#5) -``` - -That's all. - -Conclusion -================================================================================ - -It is the end of this part and here we saw all steps from the execution of the `make` command to the generation of the `bzImage`. I know, the Linux kernel makefiles and process of the Linux kernel building may seem confusing at first glance, but it is not so hard. Hope this part will help you to understand process of the Linux kernel building. 
-
-Links
-================================================================================
-
-* [GNU make util](https://en.wikipedia.org/wiki/Make_%28software%29)
-* [Linux kernel top Makefile](https://github.com/torvalds/linux/blob/master/Makefile)
-* [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler)
-* [Ctags](https://en.wikipedia.org/wiki/Ctags)
-* [sparse](https://en.wikipedia.org/wiki/Sparse)
-* [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage)
-* [uname](https://en.wikipedia.org/wiki/Uname)
-* [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29)
-* [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt)
-* [binutils](http://www.gnu.org/software/binutils/)
-* [gcc](https://gcc.gnu.org/)
-* [Documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt)
-* [System.map](https://en.wikipedia.org/wiki/System.map)
-* [Relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29)
-
--------------------------------------------------------------------------------
-
-via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled.md
-
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md b/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md
deleted file mode 100644
index 28981add17..0000000000
--- a/sources/tech/20150812 Linux Tricks--Play Game in Chrome Text-to-Speech Schedule a Job and Watch Commands in Linux.md
+++ /dev/null
@@ -1,145 +0,0 @@
- Vic020
-
-Linux Tricks: Play Game in Chrome, Text-to-Speech, Schedule a Job and Watch Commands in Linux
-================================================================================
-Here again, I have compiled a list of four things under the [Linux Tips and Tricks][1] series that you may do to remain more productive and entertained in the Linux environment.
-
-![Linux Tips and Tricks Series](http://www.tecmint.com/wp-content/uploads/2015/08/Linux-Tips-and-Tricks.png)
-
-Linux Tips and Tricks Series
-
-The topics I have covered include Google Chrome's inbuilt small game, text-to-speech in the Linux terminal, quick job scheduling using the 'at' command, and watching a command at regular intervals.
-
-### 1. Play A Game in Google Chrome Browser ###
-
-Very often, when there is a power cut or no network due to some other reason, I don't put my Linux box into maintenance mode. I keep myself engaged with a little fun game in Google Chrome. I am not a gamer, and hence I have not installed creepy third-party games. Security is another concern.
-
-So when there is an Internet-related issue and my web page looks something like this:
-
-![Unable to Connect Internet](http://www.tecmint.com/wp-content/uploads/2015/08/Unable-to-Connect-Internet.png)
-
-Unable to Connect Internet
-
-You may play the inbuilt Google Chrome game simply by hitting the space-bar. There is no limit on the number of times you can play. The best thing is you need not break a sweat installing or using it.
-
-No third-party application/plugin is required. It should work well on other platforms like Windows and Mac, but our niche is Linux, and I'll talk about Linux only; mind you, it works well on Linux. It is a very simple game (a kind of time pass). 
-
-Use the space-bar or the navigation-up key to jump. A glimpse of the game in action:
-
-![Play Game in Google Chrome](http://www.tecmint.com/wp-content/uploads/2015/08/Play-Game-in-Google-Chrome.gif)
-
-Play Game in Google Chrome
-
-### 2. Text to Speech in Linux Terminal ###
-
-For those who may not be aware of the espeak utility: it is a Linux command-line text-to-speech converter. Write anything in a variety of languages and the espeak utility will read it aloud for you.
-
-Espeak should be installed on your system by default; however, if it is not installed on your system, you may do:
-
-    # apt-get install espeak         (Debian)
-    # yum install espeak             (CentOS)
-    # dnf install espeak             (Fedora 22 onwards)
-
-You may ask espeak to accept input interactively from the standard input device and convert it to speech for you. You may do:
-
-    $ espeak [Hit Return Key]
-
-To pipe the audio output through aplay instead, you may do:
-
-    $ espeak --stdout | aplay [Hit Return Key]
-
-espeak is flexible, and you can ask espeak to accept input from a text file and speak it aloud for you. All you need to do is:
-
-    $ espeak -f /path/to/text/file/file_name.txt --stdout | aplay [Hit Enter]
-
-You may ask espeak to speak faster or slower for you. The default speed is 160 words per minute. Define your preference using the switch '-s'.
-
-To ask espeak to speak 30 words per minute, you may do:
-
-    $ espeak -s 30 -f /path/to/text/file/file_name.txt --stdout | aplay
-
-To ask espeak to speak 200 words per minute, you may do:
-
-    $ espeak -s 200 -f /path/to/text/file/file_name.txt --stdout | aplay
-
-To use another language, say Hindi (my mother tongue), you may do:
-
-    $ espeak -v hindi --stdout 'टेकमिंट विश्व की एक बेहतरीन लाइंक्स आधारित वेबसाइट है|' | aplay
-
-You may choose any language of your preference and ask espeak to speak in it as suggested above. To get the list of all the languages supported by espeak, you need to run:
-
-    $ espeak --voices
-
-### 3. Quickly Schedule a Job ###
-
-Most of us are already familiar with [cron][2], which is a daemon to execute scheduled commands.
-
-Cron is an advanced tool often used by Linux sysadmins to schedule a job, such as a backup (or practically anything), at a certain time or interval.
-
-Are you aware of the 'at' command in Linux, which lets you schedule a job/command to run at a specific time? You tell 'at' what to do and when to do it, and everything else is taken care of by the 'at' command.
-
-For example, say you want to record the output of the uptime command at 11:02 AM. All you need to do is:
-
-    $ at 11:02
-    uptime >> /home/$USER/uptime.txt
-    Ctrl+D
-
-![Schedule Job in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Schedule-Job-in-Linux.png)
-
-Schedule Job in Linux
-
-To check whether the command/script/job has been set by the 'at' command, you may do:
-
-    $ at -l
-
-![View Scheduled Jobs](http://www.tecmint.com/wp-content/uploads/2015/08/View-Scheduled-Jobs.png)
-
-View Scheduled Jobs
-
-You may schedule more than one command in one go using at, simply as:
-
-    $ at 12:30
-    Command – 1
-    Command – 2
-    …
-    command – 50
-    …
-    Ctrl + D
-
-### 4. Watch a Command at a Specific Interval ###
-
-Sometimes we need to run a command repeatedly at a regular interval. For example, say we need to print the current time and watch the output every 3 seconds.
-
-To see the current time, we need to run the below command in a terminal.
-
-    $ date +"%H:%M:%S"
-
-![Check Date and Time in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Date-in-Linux.png)
-
-Check Date and Time in Linux
-
-And to check the output of this command every three seconds, we need to run the below command in a terminal.
-
-    $ watch -n 3 'date +"%H:%M:%S"'
-
-![Watch Command in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Watch-Command-in-Linux.gif)
-
-Watch Command in Linux
-
-The switch '-n' in the watch command sets the interval. In the above example we defined the interval to be 3 seconds; you may define yours as required. You may also pass any command/script to watch, to run that command/script at the defined interval.
-
-That's all for now. I hope you like this series, which aims at making you more productive with Linux while having some fun along the way. All suggestions are welcome in the comments below. Stay tuned for more such posts. Keep connected and enjoy…
-
--------------------------------------------------------------------------------
-
-via: http://www.tecmint.com/text-to-speech-in-terminal-schedule-a-job-and-watch-commands-in-linux/
-
-作者:[Avishek Kumar][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://www.tecmint.com/author/avishek/
-[1]:http://www.tecmint.com/tag/linux-tricks/
-[2]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
diff --git a/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md b/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
deleted file mode 100644
index 24c71b0cbe..0000000000
--- a/sources/tech/20150813 How to Install Logwatch on Ubuntu 15.04.md
+++ /dev/null
@@ -1,138 +0,0 @@
-(translating by runningwater)
-How to Install Logwatch on Ubuntu 15.04
-================================================================================
-Hi, today we are going to illustrate the setup of Logwatch on the Ubuntu 15.04 operating system, though it can be used on any Linux or UNIX-like operating system. Logwatch is a customizable log analyzer and reporting/log-monitoring system that goes through your logs for a given period of time and makes a report in the areas that you wish, with the details you want. It is an easy tool to install, configure and review, and to take actions from the data it provides that will improve security. Logwatch scans the log files of major operating system components, like SSH and the web server, and forwards a summary that contains the valuable items that need to be looked at.
-
-### Pre-installation Setup ###
-
-We will be using the Ubuntu 15.04 operating system to deploy Logwatch. As a prerequisite for the installation of Logwatch, make sure that your email setup is working, as it will be used to send the daily gathered reports to the administrators. Your system repositories should be enabled, as we will be installing Logwatch from the available repositories.
-
-Then open the terminal of your Ubuntu operating system, log in as the root user, and update your system packages before moving on to the Logwatch installation.
-
-    root@ubuntu-15:~# apt-get update
-
-### Installing Logwatch ###
-
-Once your system is updated and you have fulfilled all the prerequisites, run the following command to start the installation of Logwatch on your server.
-
-    root@ubuntu-15:~# apt-get install logwatch
-
-The logwatch installation process will start with the addition of some extra required packages, as shown once you press "Y" to accept the required changes to the system.
-
-During the installation process you will be prompted to configure Postfix according to your mail server's setup. Here we used "Local only" in the tutorial for ease; you can choose from the other available options as per your infrastructure requirements and then press "OK" to proceed.
-
-![Postfix Configurations](http://blog.linoxide.com/wp-content/uploads/2015/08/21.png)
-
-Then you have to choose your mail server's name, which will also be used by other programs, so it should be a single fully qualified domain name (FQDN).
-
-![Postfix Setup](http://blog.linoxide.com/wp-content/uploads/2015/08/31.png)
-
-Once you press "OK" after the Postfix configuration, the installer will complete the Logwatch installation process with the default configuration of Postfix.
-
-![Logwatch Completion](http://blog.linoxide.com/wp-content/uploads/2015/08/41.png)
-
-You can check the status of Postfix by issuing the following command in the terminal; it should be in the active state.
-
-    root@ubuntu-15:~# service postfix status
-
-![Postfix Status](http://blog.linoxide.com/wp-content/uploads/2015/08/51.png)
-
-To confirm the installation of Logwatch with its default configuration, issue the simple "logwatch" command as shown.
-
-    root@ubuntu-15:~# logwatch
-
-The above command results in the following compiled report in the terminal.
-
-![Logwatch Report](http://blog.linoxide.com/wp-content/uploads/2015/08/61.png)
-
-### Logwatch Configurations ###
-
-After the successful installation of Logwatch, we need to make a few changes in its configuration file, located under the path shown below. So, let's open it with a file editor to update its configuration as required.
-
-    root@ubuntu-15:~# vim /usr/share/logwatch/default.conf/logwatch.conf
-
-**Output/Format Options**
-
-By default, Logwatch will print to stdout in text with no encoding. To make email the default, set "Output = mail", and to save to a file, set "Output = file". So you can change the default configuration below as per your required settings.
-
-    Output = stdout
-
-To make HTML the default format, update the following line (i.e., change it to "Format = html") if you are using Internet email configurations.
-
-    Format = text
-
-Now set the default recipient that reports should be mailed to; it can be a local account or a complete email address that you are free to mention in this line.
-
-    MailTo = root
-    #MailTo = user@test.com
-
-The default sender that reports are mailed from can also be a local account or any other address you wish to use.
-
-    # complete email address.
-    MailFrom = Logwatch
-
-Save the changes made in the configuration file of Logwatch, leaving the other parameters as default.
-
-**Cronjob Configuration**
-
-Now edit the "00logwatch" file in the daily cron directory to configure the email address that logwatch reports should be forwarded to.
-
-    root@ubuntu-15:~# vim /etc/cron.daily/00logwatch
-
-Here you need to use "--mailto user@test.com" instead of "--output mail", and save the file.
-
-![Logwatch Cronjob](http://blog.linoxide.com/wp-content/uploads/2015/08/71.png)
-
-### Using Logwatch Report ###
-
-Now we generate a test report by executing the "logwatch" command in the terminal; its result is shown in text format within the terminal.
-
-    root@ubuntu-15:~# logwatch
-
-The generated report starts with its execution time and date. It comprises different sections; each section starts with a "Begin" banner and closes with an "End" banner after showing the complete log information for that section.
-
-Here is what its starting point looks like: it begins by showing all the packages installed on the system, as shown below.
-
-![dpkg status](http://blog.linoxide.com/wp-content/uploads/2015/08/81.png)
-
-The following sections show the log information about the login sessions, rsyslog and SSH connections for the current and previous sessions on the system.
-
-![logwatch report](http://blog.linoxide.com/wp-content/uploads/2015/08/9.png)
-
-The logwatch report ends by showing the secure sudo logs and the disk space usage of the root directory, as shown below.
-
-![Logwatch end report](http://blog.linoxide.com/wp-content/uploads/2015/08/10.png)
-
-You can also check the emails generated for the logwatch reports by opening the following file.
-
-    root@ubuntu-15:~# vim /var/mail/root
-
-Here you will be able to see all the emails generated for your configured users, along with their message delivery status.
-
-### More about Logwatch ###
-
-Logwatch is a great tool to learn more about, so if you are interested in learning more about it, you can also get much help from the few commands below.
-
-    root@ubuntu-15:~# man logwatch
-
-The above command opens the complete user manual for logwatch, so read it carefully; to exit the manual, simply press "q".
-
-To get help on logwatch command usage, you can run the following help command for further details.
-
-    root@ubuntu-15:~# logwatch --help
-
-### Conclusion ###
-
-By the end of this tutorial you have learned the complete setup of Logwatch on Ubuntu 15.04, including its installation and configuration. Now you can start monitoring your logs in a customizable form, whether you monitor the logs of all the services running on your system or customize it to send you reports about specific services on scheduled days. So, let's use this tool, and feel free to leave us a comment if you face any issue or need to know more about logwatch usage.
-
--------------------------------------------------------------------------------
-
-via: http://linoxide.com/ubuntu-how-to/install-use-logwatch-ubuntu-15-04/
-
-作者:[Kashif Siddique][a]
-译者:[runningwater](https://github.com/runningwater)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/kashifs/
diff --git a/sources/tech/20150813 How to get Public IP from Linux Terminal.md b/sources/tech/20150813 How to get Public IP from Linux Terminal.md
deleted file mode 100644
index c22fec283d..0000000000
--- a/sources/tech/20150813 How to get Public IP from Linux Terminal.md
+++ /dev/null
@@ -1,69 +0,0 @@
-KevinSJ Translating
-How to get Public IP from Linux Terminal?
-================================================================================
-![](http://www.blackmoreops.com/wp-content/uploads/2015/06/256x256xHow-to-get-Public-IP-from-Linux-Terminal-blackMORE-Ops.png.pagespeed.ic.GKEAEd4UNr.png)
-
-Public addresses are assigned by InterNIC and consist of class-based network IDs or blocks of CIDR-based addresses (called CIDR blocks) that are guaranteed to be globally unique to the Internet.
-When public addresses are assigned, routes are programmed into the routers of the Internet so that traffic to the assigned public addresses can reach its destination. Traffic to destination public addresses is reachable on the Internet. For example, when an organization is assigned a CIDR block in the form of a network ID and subnet mask, that [network ID, subnet mask] pair also exists as a route in the routers of the Internet. IP packets destined to an address within the CIDR block are routed to the proper destination. In this post I will show several ways to find your public IP address from the Linux terminal. This might seem pointless for normal users, but it is handy when you are in a terminal on a headless Linux server (i.e., no GUI, or you're connected as a user with minimal tools). Either way, being able to get your public IP from the Linux terminal can be useful in many cases, or it could be one of those things that might just come in handy someday.
-
-There are two main commands we use: curl and wget. You can use them interchangeably.
-
-### Curl output in plain text format: ###
-
-    curl icanhazip.com
-    curl ifconfig.me
-    curl curlmyip.com
-    curl ip.appspot.com
-    curl ipinfo.io/ip
-    curl ipecho.net/plain
-    curl www.trackip.net/i
-
-### curl output in JSON format: ###
-
-    curl ipinfo.io/json
-    curl ifconfig.me/all.json
-    curl www.trackip.net/ip?json (bit ugly)
-
-### curl output in XML format: ###
-
-    curl ifconfig.me/all.xml
-
-### curl all IP details – The motherlode ###
-
-    curl ifconfig.me/all
-
-### Using DYNDNS (Useful when you're using the DYNDNS service) ###
-
-    curl -s 'http://checkip.dyndns.org' | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g'
-    curl -s http://checkip.dyndns.org/ | grep -o "[[:digit:].]\+"
-
-### Using wget instead of curl ###
-
-    wget http://ipecho.net/plain -O - -q ; echo
-    wget http://observebox.com/ip -O - -q ; echo
-
-### Using the host and dig commands (cause we can) ###
-
-You can also use the host and dig commands, assuming they are available or installed.
-
-    host -t a dartsclink.com | sed 's/.*has address //'
-    dig +short myip.opendns.com @resolver1.opendns.com
-
-### Sample bash script: ###
-
-    #!/bin/bash
-
-    PUBLIC_IP=$(wget -qO- http://ipecho.net/plain)
-    echo "$PUBLIC_IP"
-
-Quite a few to pick from.
-
-I was actually writing a small script to track all the IP changes of my router each day and save them to a file. I found these nifty commands and sites to use while doing some online research. Hope they help someone else someday too. Thanks for reading, please share and RT.
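-
-If you want to build something like that IP-change tracker yourself, here is a minimal sketch of one possible shape for it. It is only an illustration: the lookup URL is just one of the services listed above, and the log path is arbitrary, so adjust both to taste.
-
-    #!/bin/bash
-    # append a timestamped entry to a log file, but only when the
-    # public IP differs from the last address recorded
-    LOG="$HOME/public-ip.log"
-    CURRENT=$(wget -qO- http://ipecho.net/plain)
-    LAST=$(awk 'END {print $2}' "$LOG" 2>/dev/null)
-    if [ -n "$CURRENT" ] && [ "$CURRENT" != "$LAST" ]; then
-        echo "$(date '+%FT%T') $CURRENT" >> "$LOG"
-    fi
-
-Run it from a daily cron job and the log becomes a history of every address change.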
-
--------------------------------------------------------------------------------
-
-via: http://www.blackmoreops.com/2015/06/14/how-to-get-public-ip-from-linux-terminal/
-
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
diff --git a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md b/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md
deleted file mode 100644
index f1505c5649..0000000000
--- a/sources/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md
+++ /dev/null
@@ -1,103 +0,0 @@
-translating wi-cuckoo
-Howto Run JBoss Data Virtualization GA with OData in Docker Container
-================================================================================
-Hi everyone, today we'll learn how to run JBoss Data Virtualization 6.0.0.GA with OData in a Docker container. JBoss Data Virtualization is a data supply and integration platform that takes data scattered across multiple sources, treats it as a single source, and delivers the required data as actionable information, at business speed, to any application or user. JBoss Data Virtualization can help us easily combine and transform data into reusable, business-friendly data models and make unified data easily consumable through open standard interfaces. It offers comprehensive data abstraction, federation, integration, transformation, and delivery capabilities to combine data from one or multiple sources into reusable models for agile data utilization and sharing. For more information about JBoss Data Virtualization, you can check out [its official page][1]. Docker is an open platform to pack, ship and run any application as a lightweight container. Running JBoss Data Virtualization with OData in a Docker container makes it easy to handle and launch.
-
-Here is an easy-to-follow tutorial on how we can run JBoss Data Virtualization with OData in a Docker container.
-
-### 1. Cloning the Repository ###
-
-First of all, we'll want to clone the repository of OData with Data Virtualization, i.e. [https://github.com/jbossdemocentral/dv-odata-docker-integration-demo][2], using the git command. As we have the Ubuntu 15.04 distribution of Linux running on our machine, we'll need to install git first using the apt-get command.
-
-    # apt-get install git
-
-Then, after installing git, we'll clone the repository by running the command below.
-
-    # git clone https://github.com/jbossdemocentral/dv-odata-docker-integration-demo
-
-    Cloning into 'dv-odata-docker-integration-demo'...
-    remote: Counting objects: 96, done.
-    remote: Total 96 (delta 0), reused 0 (delta 0), pack-reused 96
-    Unpacking objects: 100% (96/96), done.
-    Checking connectivity... done.
-
-### 2. Downloading JBoss Data Virtualization Installer ###
-
-Now, we'll need to download the JBoss Data Virtualization installer from the download page, i.e. [http://www.jboss.org/products/datavirt/download/][3]. After we download **jboss-dv-installer-6.0.0.GA-redhat-4.jar**, we'll need to keep it under the directory named **software**.
-
-### 3. Building the Docker Image ###
-
-Next, after we have downloaded the JBoss Data Virtualization installer, we'll build the docker image using the Dockerfile and the resources we just cloned from the repository.
-
-    # cd dv-odata-docker-integration-demo/
-    # docker build -t jbossdv600 .
-
-    ...
-    Step 22 : USER jboss
-     ---> Running in 129f701febd0
-     ---> 342941381e37
-    Removing intermediate container 129f701febd0
-    Step 23 : EXPOSE 8080 9990 31000
-     ---> Running in 61e6d2c26081
-     ---> 351159bb6280
-    Removing intermediate container 61e6d2c26081
-    Step 24 : CMD $JBOSS_HOME/bin/standalone.sh -c standalone.xml -b 0.0.0.0 -bmanagement 0.0.0.0
-     ---> Running in a9fed69b3000
-     ---> 407053dc470e
-    Removing intermediate container a9fed69b3000
-    Successfully built 407053dc470e
-
-Note: Here, we assume that you have already installed docker and that it is running on your machine.
-
-### 4. Starting the Docker Container ###
-
-As we have built the docker image of JBoss Data Virtualization with OData, we will now run the docker container and publish its port with the -p flag. To do so, we'll run the following command.
-
-    # docker run -p 8080:8080 -d -t jbossdv600
-
-    7765dee9cd59c49ca26850e88f97c21f46859d2dc1d74166353d898773214c9c
-
-### 5. Getting the Container IP ###
-
-After we have started the docker container, we'll want to get the IP address of the running docker container. To do so, we'll run the docker inspect command followed by the running container id.
-
-    # docker inspect <$containerID>
-
-    ...
-    "NetworkSettings": {
-    "Bridge": "",
-    "EndpointID": "3e94c5900ac5954354a89591a8740ce2c653efde9232876bc94878e891564b39",
-    "Gateway": "172.17.42.1",
-    "GlobalIPv6Address": "",
-    "GlobalIPv6PrefixLen": 0,
-    "HairpinMode": false,
-    "IPAddress": "172.17.0.8",
-    "IPPrefixLen": 16,
-    "IPv6Gateway": "",
-    "LinkLocalIPv6Address": "",
-    "LinkLocalIPv6PrefixLen": 0,
-
-### 6. Web Interface ###
-
-Now, if everything went as expected, we will see the login screen of JBoss Data Virtualization with OData when pointing our web browser to http://container-ip:8080/, and the JBoss Management interface at http://container-ip:9990. The management credentials are username admin and password redhat1!, whereas the Data Virtualization credentials are username user and password user. After that, we can navigate the contents via the web interface.
-
-**Note**: It is strongly recommended to change the passwords as soon as possible after the first login. Thanks :)
-
-### Conclusion ###
-
-Finally, we've successfully run a Docker container running JBoss Data Virtualization with an OData multisource virtual database. JBoss Data Virtualization is really an awesome platform for virtualizing data from multiple different sources, transforming it into reusable, business-friendly data models, and producing data that is easily consumable through open standard interfaces. The deployment of JBoss Data Virtualization with OData has been very easy, secure and fast to set up with Docker technology. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you!
-Enjoy :-)
-
--------------------------------------------------------------------------------
-
-via: http://linoxide.com/linux-how-to/run-jboss-data-virtualization-ga-odata-docker-container/
-
-作者:[Arun Pyasi][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://linoxide.com/author/arunp/
-[1]:http://www.redhat.com/en/technologies/jboss-middleware/data-virtualization
-[2]:https://github.com/jbossdemocentral/dv-odata-docker-integration-demo
-[3]:http://www.jboss.org/products/datavirt/download/
diff --git a/sources/tech/20150813 Linux file system hierarchy v2.0.md b/sources/tech/20150813 Linux file system hierarchy v2.0.md
deleted file mode 100644
index 0021bb57c9..0000000000
--- a/sources/tech/20150813 Linux file system hierarchy v2.0.md
+++ /dev/null
@@ -1,441 +0,0 @@
-
-Translating by dingdongnigetou
-
-Linux file system hierarchy v2.0
-================================================================================
-What is a file in Linux? What is a file system in Linux? Where are all the configuration files? Where do I keep my downloaded applications? Is there really a standard filesystem structure in Linux? Well, the image above explains the Linux file system hierarchy in a simple, uncluttered way. It's very useful when you're looking for a configuration file or a binary file. I've added some explanations and examples below, but the image is the TL;DR version.
-
-Another issue arises when you have configuration and binary files scattered all over the system: that creates inconsistency, and whether you're a large organization or just an end user, it can compromise your system (a binary talking to old library files, etc.); when you then do a [security audit of your Linux system][1], you find it is vulnerable to different exploits. So keeping a clean operating system (no matter whether Windows or Linux) is important.
-
-### What is a file in Linux? ###
-
-A simple description of the UNIX system, also applicable to Linux, is this:
-
-> On a UNIX system, everything is a file; if something is not a file, it is a process.
-
-This statement is true because there are special files that are more than just files (named pipes and sockets, for instance), but to keep things simple, saying that everything is a file is an acceptable generalization. A Linux system, just like UNIX, makes no distinction between a file and a directory, since a directory is just a file containing names of other files. Programs, services, texts, images, and so forth, are all files. Input and output devices, and generally all devices, are considered to be files, according to the system.
-
-![](http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png)
-
-- Version 2.0 – 17-06-2015
-  – Improved: Added title and version history.
-  – Improved: Added /srv, /media and /proc.
-  – Improved: Updated descriptions to reflect modern Linux file systems.
-  – Fixed: Multiple typos.
-  – Fixed: Appearance and colour.
-- Version 1.0 – 14-02-2015
-  – Created: Initial diagram.
-  – Note: Discarded lowercase version.
-
-### Download Links ###
-
-Following are two links for download. If you need this in any other format, let me know and I will try to create that and upload it somewhere.
-
-- [Large (PNG) Format – 2480×1755 px – 184KB][2]
-- [Largest (PDF) Format – 9919x7019 px – 1686KB][3]
-
-**Note**: The PDF format is best for printing and is very high in quality.
-
-### Linux file system description ###
-
-In order to manage all those files in an orderly fashion, people like to think of them in an ordered tree-like structure on the hard disk, as we know from `MS-DOS` (Disk Operating System), for instance. The large branches contain more branches, and the branches at the end contain the tree's leaves, or normal files. For now we will use this image of the tree, but we will find out later why this is not a fully accurate image.
-
-| Directory | Description |
-|-----------|-------------|
-| `/` | Primary hierarchy root and root directory of the entire file system hierarchy. |
-| `/bin` | Essential command binaries that need to be available in single user mode; for all users, e.g., cat, ls, cp. |
-| `/boot` | Boot loader files, e.g., kernels, initrd. |
-| `/dev` | Essential devices, e.g., /dev/null. |
-| `/etc` | Host-specific system-wide configuration files. There has been controversy over the meaning of the name itself. In early versions of the UNIX Implementation Document from Bell Labs, /etc is referred to as the etcetera directory, as this directory historically held everything that did not belong elsewhere (however, the FHS restricts /etc to static configuration files and it may not contain binaries). Since the publication of early documentation, the directory name has been re-designated in various ways. Recent interpretations include backronyms such as “Editable Text Configuration” or “Extended Tool Chest”. |
-| `/etc/opt` | Configuration files for add-on packages that are stored in /opt/. |
-| `/etc/sgml` | Configuration files, such as catalogs, for software that processes SGML. |
-| `/etc/X11` | Configuration files for the X Window System, version 11. |
-| `/etc/xml` | Configuration files, such as catalogs, for software that processes XML. |
-| `/home` | Users’ home directories, containing saved files, personal settings, etc. |
-| `/lib` | Libraries essential for the binaries in /bin/ and /sbin/. |
-| `/lib<qual>` | Alternate format essential libraries. Such directories are optional, but if they exist, they have some requirements. |
-| `/media` | Mount points for removable media such as CD-ROMs (appeared in FHS-2.3). |
-| `/mnt` | Temporarily mounted filesystems. |
-| `/opt` | Optional application software packages. |
-| `/proc` | Virtual filesystem providing process and kernel information as files. In Linux, corresponds to a procfs mount. |
-| `/root` | Home directory for the root user. |
-| `/sbin` | Essential system binaries, e.g., init, ip, mount. |
-| `/srv` | Site-specific data which are served by the system. |
-| `/tmp` | Temporary files (see also /var/tmp). Often not preserved between system reboots. |
-| `/usr` | Secondary hierarchy for read-only user data; contains the majority of (multi-)user utilities and applications. |
-| `/usr/bin` | Non-essential command binaries (not needed in single user mode); for all users. |
-| `/usr/include` | Standard include files. |
-| `/usr/lib` | Libraries for the binaries in /usr/bin/ and /usr/sbin/. |
-| `/usr/lib<qual>` | Alternate format libraries (optional). |
-| `/usr/local` | Tertiary hierarchy for local data, specific to this host. Typically has further subdirectories, e.g., bin/, lib/, share/. |
-| `/usr/sbin` | Non-essential system binaries, e.g., daemons for various network-services. |
-| `/usr/share` | Architecture-independent (shared) data. |
-| `/usr/src` | Source code, e.g., the kernel source code with its header files. |
-| `/usr/X11R6` | X Window System, Version 11, Release 6. |
-| `/var` | Variable files—files whose content is expected to continually change during normal operation of the system—such as logs, spool files, and temporary e-mail files. |
-| `/var/cache` | Application cache data. Such data are locally generated as a result of time-consuming I/O or calculation. The application must be able to regenerate or restore the data. The cached files can be deleted without loss of data. |
-| `/var/lib` | State information. Persistent data modified by programs as they run, e.g., databases, packaging system metadata, etc. |
-| `/var/lock` | Lock files. Files keeping track of resources currently in use. |
-| `/var/log` | Log files. Various logs. |
-| `/var/mail` | Users’ mailboxes. |
-| `/var/opt` | Variable data from add-on packages that are stored in /opt/. |
-| `/var/run` | Information about the running system since last boot, e.g., currently logged-in users and running daemons. |
-| `/var/spool` | Spool for tasks waiting to be processed, e.g., print queues and outgoing mail queue. |
-| `/var/spool/mail` | Deprecated location for users’ mailboxes. |
-| `/var/tmp` | Temporary files to be preserved between reboots. |
-
-### Types of files in Linux ###
-
-Most files are just files, called `regular` files; they contain normal data, for example text files, executable files or programs, input for or output from a program, and so on.
-
-While it is reasonably safe to suppose that everything you encounter on a Linux system is a file, there are some exceptions.
-
-- `Directories`: files that are lists of other files.
-- `Special files`: the mechanism used for input and output. Most special files are in `/dev`; we will discuss them later.
-- `Links`: a system to make a file or directory visible in multiple parts of the system's file tree. We will talk about links in detail.
-- `(Domain) sockets`: a special file type, similar to TCP/IP sockets, providing inter-process networking protected by the file system's access control.
-- `Named pipes`: act more or less like sockets and form a way for processes to communicate with each other, without using network socket semantics.
-
-### File system in reality ###
-
-For most users and for most common system administration tasks, it is enough to accept that files and directories are ordered in a tree-like structure. The computer, however, doesn't understand a thing about trees or tree-structures.
-
-Every partition has its own file system. By imagining all those file systems together, we can form an idea of the tree-structure of the entire system, but it is not as simple as that. In a file system, a file is represented by an `inode`, a kind of serial number containing information about the actual data that makes up the file: to whom this file belongs, and where it is located on the hard disk.
-
-Every partition has its own set of inodes; throughout a system with multiple partitions, files with the same inode number can exist.
-
-Each inode describes a data structure on the hard disk, storing the properties of a file, including the physical location of the file data. When a hard disk is initialized to accept data storage, usually during the initial system installation process or when adding extra disks to an existing system, a fixed number of inodes per partition is created. This number will be the maximum amount of files, of all types (including directories, special files, links etc.) that can exist at the same time on the partition. We typically count on having 1 inode per 2 to 8 kilobytes of storage. When a new file is created, it gets a free inode. In that inode is the following information:
-
-- Owner and group owner of the file.
-- File type (regular, directory, …)
-- Permissions on the file
-- Date and time of creation, last read and change.
-- Date and time this information has been changed in the inode.
-- Number of links to this file (see later in this chapter).
-- File size
-- An address defining the actual location of the file data.
-
-The only information not included in an inode is the file name and directory. These are stored in the special directory files. By comparing file names and inode numbers, the system can make up a tree-structure that the user understands. Users can display inode numbers using the -i option to ls. The inodes have their own separate space on the disk.
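-
-For example, here is roughly what this looks like in practice; the inode numbers and sizes below are only illustrative and will differ on your system:
-
-    $ ls -i /etc/hosts /etc/passwd
-    393284 /etc/hosts  393487 /etc/passwd
-
-    $ stat -c 'inode: %i  links: %h  size: %s bytes' /etc/hosts
-    inode: 393284  links: 1  size: 221 bytes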
-
--------------------------------------------------------------------------------
-
-via: http://www.blackmoreops.com/2015/06/18/linux-file-system-hierarchy-v2-0/
-
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[1]:http://www.blackmoreops.com/2015/02/15/in-light-of-recent-linux-exploits-linux-security-audit-is-a-must/
-[2]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-file-system-hierarchy-v2.0-2480px-blackMORE-Ops.png
-[3]:http://www.blackmoreops.com/wp-content/uploads/2015/06/Linux-File-System-Hierarchy-blackMORE-Ops.pdf
diff --git a/sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md b/sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md
deleted file mode 100644
index 35ee2f00de..0000000000
--- a/sources/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md
+++ /dev/null
@@ -1,51 +0,0 @@
-translation by strugglingyouth
-Linux FAQs with Answers--How to count the number of threads in a process on Linux
-================================================================================
-> **Question**: I have an application running, which forks a number of threads at run-time. I want to know how many threads are actively running in the program. What is the easiest way to check the thread count of a process on Linux?
-
-If you want to see the number of threads per process in Linux environments, there are several ways to do it.
-
-### Method One: /proc ###
-
-The proc pseudo filesystem, which resides in the /proc directory, is the easiest way to see the thread count of any active process. The /proc directory exports, in the form of readable text files, a wealth of information related to existing processes and system hardware such as CPU, interrupts, memory, disk, etc.
-
-    $ cat /proc/<pid>/status
-
-The above command will show detailed information about the process with <pid>, which includes process state (e.g., sleeping, running), parent PID, UID, GID, the number of file descriptors used, and the number of context switches. The output also indicates **the total number of threads created in a process** as follows.
-
-    Threads: <N>
-
-For example, to check the thread count of a process with PID 20571:
-
-    $ cat /proc/20571/status
-
-![](https://farm6.staticflickr.com/5649/20341236279_f4a4d809d2_b.jpg)
-
-The output indicates that the process has 28 threads in it.
-
-Alternatively, you could simply count the number of directories found in /proc/<pid>/task, as shown below.
-
-    $ ls /proc/<pid>/task | wc -l
-
-This is because, for every thread created within a process, there is a corresponding directory created in /proc/<pid>/task, named with its thread ID. Thus the total number of directories in /proc/<pid>/task represents the number of threads in the process.
-
-### Method Two: ps ###
-
-If you are an avid user of the versatile ps command, it can also show you the individual threads of a process (with the "H" option). The following command will print the thread count of a process. The "h" option is needed to hide the header at the top of the output.
-
-    $ ps hH p <pid> | wc -l
-
-If you want to monitor the hardware resources (CPU & memory) consumed by different threads of a process, refer to [this tutorial][1]. (LCTT note: we have already translated this article.)
-
--------------------------------------------------------------------------------
-
-via: http://ask.xmodulo.com/number-of-threads-process-linux.html
-
-作者:[Dan Nanni][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://ask.xmodulo.com/author/nanni
-[1]:http://ask.xmodulo.com/view-threads-process-linux.html
diff --git a/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md b/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md
deleted file mode 100644
index d906349ff9..0000000000
--- a/sources/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md
+++ /dev/null
@@ -1,62 +0,0 @@
-translation by strugglingyouth
-Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop
-================================================================================
-> **Question**: When I try to open a pre-recorded packet dump in Wireshark on Ubuntu, its UI suddenly freezes, and the following errors and warnings appear in the terminal where I launched Wireshark. How can I fix this problem?
-
-Wireshark is a GUI-based packet capture and sniffer tool. This tool is popularly used by network administrators, network security engineers and developers for various tasks where packet-level network analysis is required, for example during network troubleshooting, vulnerability testing, application debugging, or protocol reverse engineering. Wireshark allows one to capture live packets and browse their protocol headers and payloads via a convenient GUI.
-
-![](https://farm1.staticflickr.com/722/20584224675_f4d7a59474_c.jpg)
-
-It is known that Wireshark's UI, especially when run under the Ubuntu desktop, sometimes hangs or freezes with the following errors while you are scrolling up or down the packet list view, or starting to load a pre-recorded packet dump file.
-
-    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject'
-    (wireshark:3480): GLib-GObject-CRITICAL **: g_object_set_qdata_full: assertion 'G_IS_OBJECT (object)' failed
-    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkRange'
-    (wireshark:3480): Gtk-CRITICAL **: gtk_range_get_adjustment: assertion 'GTK_IS_RANGE (range)' failed
-    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkOrientable'
-    (wireshark:3480): Gtk-CRITICAL **: gtk_orientable_get_orientation: assertion 'GTK_IS_ORIENTABLE (orientable)' failed
-    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkScrollbar'
-    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkWidget'
-    (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject'
-    (wireshark:3480): GLib-GObject-CRITICAL **: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed
-    (wireshark:3480): Gtk-CRITICAL **: gtk_widget_set_name: assertion 'GTK_IS_WIDGET (widget)' failed
-
-Apparently this error is caused by some incompatibility between Wireshark and overlay-scrollbar, and has not been fixed in the latest Ubuntu desktop (e.g., as of Ubuntu 15.04 Vivid Vervet).
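-
-If you want to confirm that the overlay-scrollbar package is actually present on your system before applying a workaround, one way to check is with dpkg (treat this as illustrative; the exact package names can vary between Ubuntu releases):
-
-    $ dpkg -l | grep -i overlay-scrollbar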
-
-A workaround to avoid this Wireshark UI freeze problem is to **temporarily disable overlay-scrollbar**. There are two ways to disable overlay-scrollbar in Wireshark, depending on how you launch Wireshark on your desktop.
-
-### Command-Line Solution ###
-
-Overlay-scrollbar can be disabled by setting the "**LIBOVERLAY_SCROLLBAR**" environment variable to "0".
-
-So if you are launching Wireshark from the command line in a terminal, you can disable overlay-scrollbar in Wireshark as follows.
-
-Open your .bashrc, and define the following alias.
-
-    alias wireshark="LIBOVERLAY_SCROLLBAR=0 /usr/bin/wireshark"
-
-### Desktop Launcher Solution ###
-
-If you are launching Wireshark using a desktop launcher, you can edit its desktop launcher file.
-
-    $ sudo vi /usr/share/applications/wireshark.desktop
-
-Look for a line that starts with "Exec", and change it as follows.
-
-    Exec=env LIBOVERLAY_SCROLLBAR=0 wireshark %f
-
-While this solution will be beneficial for all desktop users system-wide, it will not survive a Wireshark upgrade. If you want to preserve the modified .desktop file, copy it to your home directory as follows.
-
-    $ cp /usr/share/applications/wireshark.desktop ~/.local/share/applications/
-
-------------------------------------------------------------------------------
-
-via: http://ask.xmodulo.com/fix-wireshark-gui-freeze-linux-desktop.html
-
-作者:[Dan Nanni][a]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:http://ask.xmodulo.com/author/nanni
diff --git a/sources/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md b/sources/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md
new file mode 100644
index 0000000000..b4014bb009
--- /dev/null
+++ b/sources/tech/20150824 How to Setup Zephyr Test Management Tool on CentOS 7.x.md
@@ -0,0 +1,233 @@
+How to Setup Zephyr Test Management Tool on CentOS 7.x
+================================================================================
+Test management encompasses anything and everything that you need to do as a tester. Test management tools are used to store information on how testing is to be done, to plan testing activities, and to report the status of quality assurance activities. So in this article we will walk you through the setup of the Zephyr test management tool; since it includes everything needed to manage the test process, it can save testers the hassle of installing the separate applications that are otherwise necessary for the testing process. Once you are done with its setup, you will be able to track bugs and defects and collaborate on project tasks with your team, as you can easily share and access data across multiple project teams for communication and collaboration throughout the testing process.
+
+### Requirements for Zephyr ###
+
+We are going to install and run Zephyr with the following set of minimum resources. Resources can be enhanced as per your infrastructure requirements. We will be installing Zephyr on CentOS 7, 64-bit, while its binary distributions are available for almost all Linux operating systems.
+
+**Zephyr test management tool**
+
+| Requirement | Specification | Notes |
+|-------------|---------------|-------|
+| Linux OS | CentOS Linux 7 (Core), 64-bit | |
+| Packages | JDK 7 or above, Oracle JDK 6 update | No prior Tomcat or MySQL installed |
+| RAM | 4 GB | 8 GB preferred |
+| CPU | 2.0 GHz or higher | |
+| Hard Disk | 30 GB | At least 5 GB must be free |
+
+You must have super user (root) access to perform the installation process for Zephyr, and make sure that you have properly configured your network with a static IP address and that Zephyr's default set of ports is available and allowed through the firewall: ports 80/443, 8005, 8009 and 8010 will be used by Tomcat, and port 443 or 2099 will be used within Zephyr by Flex for the RTMP protocol.
+
+### Install Java JDK 7 ###
+
+Java JDK 7 is the basic requirement for the installation of Zephyr. If it is not already installed in your operating system, do the following to install Java and set up its JAVA_HOME environment variable properly.
+
+Let's issue the below commands to install Java JDK 7.
+
+    [root@centos-007 ~]# yum install java-1.7.0-openjdk-1.7.0.79-2.5.5.2.el7_1
+
+----------
+
+    [root@centos-007 ~]# yum install java-1.7.0-openjdk-devel-1.7.0.85-2.6.1.2.el7_1.x86_64
+
+Once Java is installed, including its required dependencies, run the following commands to set its JAVA_HOME environment variables.
+
+    [root@centos-007 ~]# export JAVA_HOME=/usr/java/default
+    [root@centos-007 ~]# export PATH=/usr/java/default/bin:$PATH
+
+Now check the version of Java to verify its installation with the following command.
+
+    [root@centos-007 ~]# java -version
+
+----------
+
+    java version "1.7.0_79"
+    OpenJDK Runtime Environment (rhel-2.5.5.2.el7_1-x86_64 u79-b14)
+    OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)
+
+The output shows that we have successfully installed OpenJDK Java version 1.7.0_79.
+
+### Install MySQL 5.6.X ###
+
+If you have other MySQL versions on the machine, it is recommended to remove them and install this version on top of them, or upgrade their schemas to what is specified, as this specific major/minor (5.6.X) version of MySQL, with the root username, is a prerequisite of Zephyr.
+
+To install MySQL 5.6 on CentOS 7.1, let's do the following steps:
+
+Download the rpm package, which will create a yum repo file for the MySQL Server installation.
+
+    [root@centos-007 ~]# yum install wget
+    [root@centos-007 ~]# wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
+
+Now install this downloaded rpm package by using the rpm command.
+
+    [root@centos-007 ~]# rpm -ivh mysql-community-release-el7-5.noarch.rpm
+
+After the installation of this package you will get two new yum repos related to MySQL. Then, using the yum command, we will install MySQL Server 5.6, and all its dependencies will be installed as well.
+
+    [root@centos-007 ~]# yum install mysql-server
+
+Once the installation process completes, run the following commands to start the mysqld service and check whether its status is active or not.
+
+    [root@centos-007 ~]# service mysqld start
+    [root@centos-007 ~]# service mysqld status
+
+On a fresh installation of MySQL Server, the MySQL root user password is blank. For good security practice, we should reset the password of the MySQL root user.
+
+Connect to MySQL using the auto-generated empty password and change the root password.
+
+    [root@centos-007 ~]# mysql
+    mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your_password');
+    mysql> flush privileges;
+    mysql> quit;
+
+Now we need to configure the required database parameters in the default configuration file of MySQL. Let's open the file located in the "/etc/" folder and update it as follows.
+
+    [root@centos-007 ~]# vi /etc/my.cnf
+
+----------
+
+    [mysqld]
+    datadir=/var/lib/mysql
+    socket=/var/lib/mysql/mysql.sock
+    symbolic-links=0
+
+    sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
+    max_allowed_packet=150M
+    max_connections=600
+    default-storage-engine=INNODB
+    character-set-server=utf8
+    collation-server=utf8_unicode_ci
+
+    [mysqld_safe]
+    log-error=/var/log/mysqld.log
+    pid-file=/var/run/mysqld/mysqld.pid
+    default-storage-engine=INNODB
+    character-set-server=utf8
+    collation-server=utf8_unicode_ci
+
+    [mysql]
+    max_allowed_packet = 150M
+    [mysqldump]
+    quick
+
+Save the changes made in the configuration file and restart the mysql service.
+
+    [root@centos-007 ~]# service mysqld restart
+
+### Download Zephyr Installation Package ###
+
+We are done with the installation of the packages required to install Zephyr. Now we need to get the binary distribution package of Zephyr and its license key. Go to the official Zephyr download link, http://download.yourzephyr.com/linux/download.php, enter your email ID and click to download.
+
+![Zephyr Download](http://blog.linoxide.com/wp-content/uploads/2015/08/13.png)
+
+Then confirm the mentioned email address, and you will get the Zephyr download link and its license key link. Click on the provided links and choose the appropriate version for your operating system to download the binary installation package and its license file to the server.
+
+We have placed it in the home directory and modified its permissions to make it executable.
+
+![Zephyr Binary](http://blog.linoxide.com/wp-content/uploads/2015/08/22.png)
+
+### Start Zephyr Installation and Configuration ###
+
+Now we are ready to start the installation of Zephyr by executing its binary installation script as below.
+
+    [root@centos-007 ~]# ./zephyr_4_7_9213_linux_setup.sh -c
+
+Once you run the above command, it will check that the Java environment variables are properly set up and configured. If there is some misconfiguration, you might get an error like:
+
+    testing JVM in /usr ...
+    Starting Installer ...
+    Error : Either JDK is not found at expected locations or JDK version is mismatched.
+    Zephyr requires Oracle Java Development Kit (JDK) version 1.7 or higher.
+
+Once you have properly configured Java, it will start the installation of Zephyr and ask you to press "o" to proceed or "c" to cancel the setup. Let's type "o" and press the "Enter" key to start the installation.
+
+![install zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/32.png)
+
+The next option is to review all the requirements for the Zephyr setup; press "Enter" to move forward to the next option.
+
+![zephyr requirements](http://blog.linoxide.com/wp-content/uploads/2015/08/42.png)
+
+To accept the license agreement, type "1" and press Enter.
+
+    I accept the terms of this license agreement [1], I do not accept the terms of this license agreement [2, Enter]
+
+Here we need to choose the appropriate destination location where we want to install Zephyr and choose the default ports. If you want to use ports other than the defaults, you are free to mention them here.
+
+![installation folder](http://blog.linoxide.com/wp-content/uploads/2015/08/52.png)
+
+Then customize the mysql database parameters and give the right paths to the configuration files. You might get an error at this point, as shown below.
+
+    Please update MySQL configuration. Configuration parameter max_connection should be at least 500 (max_connection = 500) and max_allowed_packet should be at least 50MB (max_allowed_packet = 50M).
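+
+As an aside (this is not something the installer itself asks for), you can query the running server directly to see the values it is actually using; they must meet or exceed the limits named in the error above:
+
+    [root@centos-007 ~]# mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW VARIABLES LIKE 'max_allowed_packet';"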
+To overcome this error, make sure that you have configured the "max_connection" and "max_allowed_packet" limits properly in the mysql configuration file. So confirm these settings, connect to the mysql server and run the commands as shown.
+
+![mysql connections](http://blog.linoxide.com/wp-content/uploads/2015/08/62.png)
+
+Once you have configured your mysql database properly, it will extract the configuration files to complete the setup.
+
+![mysql customization](http://blog.linoxide.com/wp-content/uploads/2015/08/72.png)
+
+The installation process completes with a successful installation of Zephyr 4.7 on your computer. To launch the Zephyr Desktop, type "y" to finish the Zephyr installation.
+
+![launch zephyr](http://blog.linoxide.com/wp-content/uploads/2015/08/82.png)
+
+### Launch Zephyr Desktop ###
+
+Open your web browser to launch the Zephyr Desktop with your localhost IP address, and you will be directed to the Zephyr Desktop.
+
+    http://your_server_IP/zephyr/desktop/
+
+![Zephyr Desktop](http://blog.linoxide.com/wp-content/uploads/2015/08/91.png)
+
+From your Zephyr Dashboard, click on "Test Manager" and log in with the default user name and password, which is "test.manager".
+
+![Test Manage Login](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manager_login.png)
+
+Once you are logged in, you will be able to configure your administrative settings as shown. Choose the settings you wish to apply according to your environment.
+
+![Test Manage Administration](http://blog.linoxide.com/wp-content/uploads/2015/08/test_manage_admin.png)
+
+Save the settings once you are done with your administrative settings; similarly, do the settings for resource management and project setup, and start using Zephyr as a complete test management tool. You can check and edit the status of your administrative settings from the Department Dashboard Management as shown.
+
+![zephyr dashboard](http://blog.linoxide.com/wp-content/uploads/2015/08/dashboard.png)
+
+### Conclusion ###
+
+Cheers! We are done with the complete setup of Zephyr on CentOS 7.1. We hope you are now much more aware of the Zephyr test management tool, which offers the prospect of streamlining the testing process and allows quick access to data analysis, collaborative tools and easy communication across multiple project teams. Feel free to leave us a comment if you encounter any difficulty while setting it up in your environment.
+
+--------------------------------------------------------------------------------
+
+via: http://linoxide.com/linux-how-to/setup-zephyr-tool-centos-7-x/
+
+作者:[Kashif Siddique][a]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]:http://linoxide.com/author/kashifs/
\ No newline at end of file
diff --git a/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md b/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md
new file mode 100644
index 0000000000..bc7ebee015
--- /dev/null
+++ b/sources/tech/20150831 How to switch from NetworkManager to systemd-networkd on Linux.md
@@ -0,0 +1,167 @@
+Translating by Ping
+
+How to switch from NetworkManager to systemd-networkd on Linux
+================================================================================
+In the world of Linux, adoption of [systemd][1] has been a subject of heated controversy, and the debate between its proponents and critics is still going on.
+As of today, most major Linux distributions have adopted systemd as a default init system.
+
+Billed as a "never finished, never complete, but tracking progress of technology" by its author, systemd is not just the init daemon, but is designed as a broader system and service management platform which encompasses the growing ecosystem of core system daemons, libraries and utilities.
+
+One of many additions to **systemd** is **systemd-networkd**, which is responsible for network configuration within the systemd ecosystem. Using systemd-networkd, you can configure basic DHCP/static IP networking for network devices. It can also configure virtual networking features such as bridges, tunnels or VLANs. Wireless networking is not directly handled by systemd-networkd, but you can use the wpa_supplicant service to configure wireless adapters, and then hook it up with **systemd-networkd**.
+
+On many Linux distributions, NetworkManager has been and still is used as the default network configuration manager. Compared to NetworkManager, **systemd-networkd** is still under active development and is missing features. For example, it does not have NetworkManager's intelligence to keep your computer connected across various interfaces at all times. It does not provide ifup/ifdown hooks for advanced scripting. Yet, systemd-networkd is well integrated with the rest of the systemd components (e.g., **resolved** for DNS, **timesyncd** for NTP, udevd for naming), and the role of **systemd-networkd** may only grow over time in the systemd environment.
+
+If you are happy with the way **systemd** is evolving, one thing you can consider is to switch from NetworkManager to systemd-networkd. If you are feverishly against systemd, and perfectly happy with NetworkManager or [basic network service][2], that is totally cool.
+
+But for those of you who want to try out systemd-networkd, you can read on, and find out in this tutorial how to switch from NetworkManager to systemd-networkd on Linux.
+
+### Requirement ###
+
+systemd-networkd is available in systemd version 210 and higher. Thus distributions like Debian 8 "Jessie" (systemd 215), Fedora 21 (systemd 217), Ubuntu 15.04 (systemd 219) or later are compatible with systemd-networkd.
+
+For other distributions, check the version of your systemd before proceeding.
+
+    $ systemctl --version
+
+### Switch from Network Manager to Systemd-Networkd ###
+
+It is relatively straightforward to switch from Network Manager to systemd-networkd (and vice versa).
+
+First, disable the Network Manager service, and enable systemd-networkd as follows.
+
+    $ sudo systemctl disable NetworkManager
+    $ sudo systemctl enable systemd-networkd
+
+You also need to enable the **systemd-resolved** service, which is used by systemd-networkd for network name resolution. This service implements a caching DNS server.
+
+    $ sudo systemctl enable systemd-resolved
+    $ sudo systemctl start systemd-resolved
+
+Once started, **systemd-resolved** will create its own resolv.conf somewhere under the /run/systemd directory. However, it is common practice to store DNS resolver information in /etc/resolv.conf, and many applications still rely on /etc/resolv.conf. Thus, for compatibility reasons, create a symlink to /etc/resolv.conf as follows.
+
+    $ sudo rm /etc/resolv.conf
+    $ sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
+
+### Configure Network Connections with Systemd-networkd ###
+
+To configure network devices with systemd-networkd, you must specify configuration information in text files with a .network extension. These network configuration files are then stored in and loaded from /etc/systemd/network. When there are multiple files, systemd-networkd loads and processes them one by one in lexical order.
+
+Let's start by creating a folder /etc/systemd/network.
+
+    $ sudo mkdir /etc/systemd/network
+
+#### DHCP Networking ####
+
+Let's configure DHCP networking first. For this, create the following configuration file. The name of a file can be arbitrary, but remember that files are processed in lexical order.
+
+    $ sudo vi /etc/systemd/network/20-dhcp.network
+
+----------
+
+    [Match]
+    Name=enp3*
+
+    [Network]
+    DHCP=yes
+
+As you can see above, each network configuration file contains one or more "sections", with each section preceded by an [XXX] heading. Each section contains one or more key/value pairs. The [Match] section determines which network device(s) are configured by this configuration file. For example, this file matches any network interface whose name starts with enp3 (e.g., enp3s0, enp3s1, enp3s2, etc). For matched interface(s), it then applies the DHCP network configuration specified under the [Network] section.
+
+#### Static IP Networking ####
+
+If you want to assign a static IP address to a network interface, create the following configuration file.
+
+    $ sudo vi /etc/systemd/network/10-static-enp3s0.network
+
+----------
+
+    [Match]
+    Name=enp3s0
+
+    [Network]
+    Address=192.168.10.50/24
+    Gateway=192.168.10.1
+    DNS=8.8.8.8
+
+As you can guess, the interface enp3s0 will be assigned an address 192.168.10.50/24, a default gateway 192.168.10.1, and a DNS server 8.8.8.8. One subtlety here is that the name of the interface, enp3s0, in fact matches the pattern rule defined in the earlier DHCP configuration as well. However, since the file "10-static-enp3s0.network" is processed before "20-dhcp.network" according to lexical order, the static configuration takes priority over the DHCP configuration in the case of the enp3s0 interface.
+
+Once you are done creating configuration files, restart the systemd-networkd service or reboot.
+
+    $ sudo systemctl restart systemd-networkd
+
+Check the status of the service by running:
+
+    $ systemctl status systemd-networkd
+    $ systemctl status systemd-resolved
+
+![](https://farm1.staticflickr.com/719/21010813392_76abe123ed_c.jpg)
+
+### Configure Virtual Network Devices with Systemd-networkd ###
+
+**systemd-networkd** also allows you to configure virtual network devices such as bridges, VLANs, tunnels, VXLAN, bonding, etc. You must configure these virtual devices in files with a .netdev extension.
+
+Here I'll show how to configure a bridge interface.
+
+#### Linux Bridge ####
+
+If you want to create a Linux bridge (br0) and add a physical interface (eth1) to the bridge, create the following configuration.
+
+    $ sudo vi /etc/systemd/network/bridge-br0.netdev
+
+----------
+
+    [NetDev]
+    Name=br0
+    Kind=bridge
+
+Then configure the bridge interface br0 and the slave interface eth1 using .network files as follows.
+ + $ sudo vi /etc/systemd/network/bridge-br0-slave.network + +---------- + + [Match] + Name=eth1 + + [Network] + Bridge=br0 + +---------- + + $ sudo vi /etc/systemd/network/bridge-br0.network + +---------- + + [Match] + Name=br0 + + [Network] + Address=192.168.10.100/24 + Gateway=192.168.10.1 + DNS=8.8.8.8 + +Finally, restart systemd-networkd: + + $ sudo systemctl restart systemd-networkd + +You can use [brctl tool][3] to verify that a bridge br0 has been created. + +### Summary ### + +When systemd promises to be a system manager for Linux, it is no wonder something like systemd-networkd came into being to manage network configurations. At this stage, however, systemd-networkd seems more suitable for a server environment where network configurations are relatively stable. For desktop/laptop environments which involve various transient wired/wireless interfaces, NetworkManager may still be a preferred choice. + +For those who want to check out more on systemd-networkd, refer to the official [man page][4] for a complete list of supported sections and keys. + +-------------------------------------------------------------------------------- + +via: http://xmodulo.com/switch-from-networkmanager-to-systemd-networkd.html + +作者:[Dan Nanni][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://xmodulo.com/author/nanni +[1]:http://xmodulo.com/use-systemd-system-administration-debian.html +[2]:http://xmodulo.com/disable-network-manager-linux.html +[3]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html +[4]:http://www.freedesktop.org/software/systemd/man/systemd.network.html diff --git a/sources/tech/20150831 Linux workstation security checklist.md b/sources/tech/20150831 Linux workstation security checklist.md new file mode 100644 index 0000000000..9ef46339d0 --- /dev/null +++ b/sources/tech/20150831 Linux workstation security checklist.md @@ -0,0 +1,801 @@ +wyangsun translating +Linux workstation security checklist +================================================================================ +This is a set of recommendations used by the Linux Foundation for their systems +administrators. All of LF employees are remote workers and we use this set of +guidelines to ensure that a sysadmin's system passes core security requirements +in order to reduce the risk of it becoming an attack vector against the rest +of our infrastructure. + +Even if your systems administrators are not remote workers, chances are that +they perform a lot of their work either from a portable laptop in a work +environment, or set up their home systems to access the work infrastructure +for after-hours/emergency support. In either case, you can adapt this set of +recommendations to suit your environment. + +This, by no means, is an exhaustive "workstation hardening" document, but +rather an attempt at a set of baseline recommendations to avoid most glaring +security errors without introducing too much inconvenience. You may read this +document and think it is way too paranoid, while someone else may think this +barely scratches the surface. Security is just like driving on the highway -- +anyone going slower than you is an idiot, while anyone driving faster than you +is a crazy person. These guidelines are merely a basic set of core safety +rules that is neither exhaustive, nor a replacement for experience, vigilance, +and common sense. 
+ +Each section is split into two areas: + +- The checklist that can be adapted to your project's needs +- Free-form list of considerations that explain what dictated these decisions + +## Severity levels + +The items in each checklist include the severity level, which we hope will help +guide your decision: + +- _(CRITICAL)_ items should definitely be high on the consideration list. + If not implemented, they will introduce high risks to your workstation + security. +- _(MODERATE)_ items will improve your security posture, but are less + important, especially if they interfere too much with your workflow. +- _(LOW)_ items may improve the overall security, but may not be worth the + convenience trade-offs. +- _(PARANOID)_ is reserved for items we feel will dramatically improve your + workstation security, but will probably require a lot of adjustment to the + way you interact with your operating system. + +Remember, these are only guidelines. If you feel these severity levels do not +reflect your project's commitment to security, you should adjust them as you +see fit. + +## Choosing the right hardware + +We do not mandate that our admins use a specific vendor or a specific model, so +this section addresses core considerations when choosing a work system. + +### Checklist + +- [ ] System supports SecureBoot _(CRITICAL)_ +- [ ] System has no firewire, thunderbolt or ExpressCard ports _(MODERATE)_ +- [ ] System has a TPM chip _(LOW)_ + +### Considerations + +#### SecureBoot + +Despite its controversial nature, SecureBoot offers prevention against many +attacks targeting workstations (Rootkits, "Evil Maid," etc), without +introducing too much extra hassle. It will not stop a truly dedicated attacker, +plus there is a pretty high degree of certainty that state security agencies +have ways to defeat it (probably by design), but having SecureBoot is better +than having nothing at all. + +Alternatively, you may set up [Anti Evil Maid][1] which offers a more +wholesome protection against the type of attacks that SecureBoot is supposed +to prevent, but it will require more effort to set up and maintain. + +#### Firewire, thunderbolt, and ExpressCard ports + +Firewire is a standard that, by design, allows any connecting device full +direct memory access to your system ([see Wikipedia][2]). Thunderbolt and +ExpressCard are guilty of the same, though some later implementations of +Thunderbolt attempt to limit the scope of memory access. It is best if the +system you are getting has none of these ports, but it is not critical, as +they usually can be turned off via UEFI or disabled in the kernel itself. + +#### TPM Chip + +Trusted Platform Module (TPM) is a crypto chip bundled with the motherboard +separately from the core processor, which can be used for additional platform +security (such as to store full-disk encryption keys), but is not normally used +for day-to-day workstation operation. At best, this is a nice-to-have, unless +you have a specific need to use TPM for your workstation security. + +## Pre-boot environment + +This is a set of recommendations for your workstation before you even start +with OS installation. 
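+
+Before you work through this checklist, it can help to confirm what an
+existing Linux installation reports about these settings. The commands below
+are a rough sketch -- mokutil may need to be installed first, and output
+formats vary across distributions:
+
+    # This directory exists only if the system was booted in UEFI mode
+    ls /sys/firmware/efi
+
+    # Query the SecureBoot state (requires the mokutil package)
+    mokutil --sb-state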
+ +### Checklist + +- [ ] UEFI boot mode is used (not legacy BIOS) _(CRITICAL)_ +- [ ] Password is required to enter UEFI configuration _(CRITICAL)_ +- [ ] SecureBoot is enabled _(CRITICAL)_ +- [ ] UEFI-level password is required to boot the system _(LOW)_ + +### Considerations + +#### UEFI and SecureBoot + +UEFI, with all its warts, offers a lot of goodies that legacy BIOS doesn't, +such as SecureBoot. Most modern systems come with UEFI mode on by default. + +Make sure a strong password is required to enter UEFI configuration mode. Pay +attention, as many manufacturers quietly limit the length of the password you +are allowed to use, so you may need to choose high-entropy short passwords vs. +long passphrases (see below for more on passphrases). + +Depending on the Linux distribution you decide to use, you may or may not have +to jump through additional hoops in order to import your distribution's +SecureBoot key that would allow you to boot the distro. Many distributions have +partnered with Microsoft to sign their released kernels with a key that is +already recognized by most system manufacturers, therefore saving you the +trouble of having to deal with key importing. + +As an extra measure, before someone is allowed to even get to the boot +partition and try some badness there, let's make them enter a password. This +password should be different from your UEFI management password, in order to +prevent shoulder-surfing. If you shut down and start a lot, you may choose to +not bother with this, as you will already have to enter a LUKS passphrase and +this will save you a few extra keystrokes. + +## Distro choice considerations + +Chances are you'll stick with a fairly widely-used distribution such as Fedora, +Ubuntu, Arch, Debian, or one of their close spin-offs. In any case, this is +what you should consider when picking a distribution to use. + +### Checklist + +- [ ] Has a robust MAC/RBAC implementation (SELinux/AppArmor/Grsecurity) _(CRITICAL)_ +- [ ] Publishes security bulletins _(CRITICAL)_ +- [ ] Provides timely security patches _(CRITICAL)_ +- [ ] Provides cryptographic verification of packages _(CRITICAL)_ +- [ ] Fully supports UEFI and SecureBoot _(CRITICAL)_ +- [ ] Has robust native full disk encryption support _(CRITICAL)_ + +### Considerations + +#### SELinux, AppArmor, and GrSecurity/PaX + +Mandatory Access Controls (MAC) or Role-Based Access Controls (RBAC) are an +extension of the basic user/group security mechanism used in legacy POSIX +systems. Most distributions these days either already come bundled with a +MAC/RBAC implementation (Fedora, Ubuntu), or provide a mechanism to add it via +an optional post-installation step (Gentoo, Arch, Debian). Obviously, it is +highly advised that you pick a distribution that comes pre-configured with a +MAC/RBAC system, but if you have strong feelings about a distribution that +doesn't have one enabled by default, do plan to configure it +post-installation. + +Distributions that do not provide any MAC/RBAC mechanisms should be strongly +avoided, as traditional POSIX user- and group-based security should be +considered insufficient in this day and age. If you would like to start out +with a MAC/RBAC workstation, AppArmor and PaX are generally considered easier +to learn than SELinux. Furthermore, on a workstation, where there are few or +no externally listening daemons, and where user-run applications pose the +highest risk, GrSecurity/PaX will _probably_ offer more security benefits than +SELinux. 
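+
+If you are unsure which MAC/RBAC implementation, if any, is active on a
+system you already have, a quick check is sketched below (each tool ships
+with its respective framework, so typically only one of them will be
+present):
+
+    # On SELinux-based distros (e.g. Fedora):
+    sestatus
+
+    # On AppArmor-based distros (e.g. Ubuntu, openSUSE):
+    aa-status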
+ +#### Distro security bulletins + +Most of the widely used distributions have a mechanism to deliver security +bulletins to their users, but if you are fond of something esoteric, check +whether the developers have a documented mechanism of alerting the users about +security vulnerabilities and patches. Absence of such mechanism is a major +warning sign that the distribution is not mature enough to be considered for a +primary admin workstation. + +#### Timely and trusted security updates + +Most of the widely used distributions deliver regular security updates, but is +worth checking to ensure that critical package updates are provided in a +timely fashion. Avoid using spin-offs and "community rebuilds" for this +reason, as they routinely delay security updates due to having to wait for the +upstream distribution to release it first. + +You'll be hard-pressed to find a distribution that does not use cryptographic +signatures on packages, updates metadata, or both. That being said, fairly +widely used distributions have been known to go for years before introducing +this basic security measure (Arch, I'm looking at you), so this is a thing +worth checking. + +#### Distros supporting UEFI and SecureBoot + +Check that the distribution supports UEFI and SecureBoot. Find out whether it +requires importing an extra key or whether it signs its boot kernels with a key +already trusted by systems manufacturers (e.g. via an agreement with +Microsoft). Some distributions do not support UEFI/SecureBoot but offer +alternatives to ensure tamper-proof or tamper-evident boot environments +([Qubes-OS][3] uses Anti Evil Maid, mentioned earlier). If a distribution +doesn't support SecureBoot and has no mechanisms to prevent boot-level attacks, +look elsewhere. + +#### Full disk encryption + +Full disk encryption is a requirement for securing data at rest, and is +supported by most distributions. As an alternative, systems with +self-encrypting hard drives may be used (normally implemented via the on-board +TPM chip) and offer comparable levels of security plus faster operation, but at +a considerably higher cost. + +## Distro installation guidelines + +All distributions are different, but here are general guidelines: + +### Checklist + +- [ ] Use full disk encryption (LUKS) with a robust passphrase _(CRITICAL)_ +- [ ] Make sure swap is also encrypted _(CRITICAL)_ +- [ ] Require a password to edit bootloader (can be same as LUKS) _(CRITICAL)_ +- [ ] Set up a robust root password (can be same as LUKS) _(CRITICAL)_ +- [ ] Use an unprivileged account, part of administrators group _(CRITICAL)_ +- [ ] Set up a robust user-account password, different from root _(CRITICAL)_ + +### Considerations + +#### Full disk encryption + +Unless you are using self-encrypting hard drives, it is important to configure +your installer to fully encrypt all the disks that will be used for storing +your data and your system files. It is not sufficient to simply encrypt the +user directory via auto-mounting cryptfs loop files (I'm looking at you, older +versions of Ubuntu), as this offers no protection for system binaries or swap, +which is likely to contain a slew of sensitive data. The recommended +encryption strategy is to encrypt the LVM device, so only one passphrase is +required during the boot process. + +The `/boot` partition will always remain unencrypted, as the bootloader needs +to be able to actually boot the kernel before invoking LUKS/dm-crypt. 
The +kernel image itself should be protected against tampering with a cryptographic +signature checked by SecureBoot. + +In other words, `/boot` should always be the only unencrypted partition on your +system. + +#### Choosing good passphrases + +Modern Linux systems have no limitation of password/passphrase length, so the +only real limitation is your level of paranoia and your stubbornness. If you +boot your system a lot, you will probably have to type at least two different +passwords: one to unlock LUKS, and another one to log in, so having long +passphrases will probably get old really fast. Pick passphrases that are 2-3 +words long, easy to type, and preferably from rich/mixed vocabularies. + +Examples of good passphrases (yes, you can use spaces): +- nature abhors roombas +- 12 in-flight Jebediahs +- perdon, tengo flatulence + +You can also stick with non-vocabulary passwords that are at least 10-12 +characters long, if you prefer that to typing passphrases. + +Unless you have concerns about physical security, it is fine to write down your +passphrases and keep them in a safe place away from your work desk. + +#### Root, user passwords and the admin group + +We recommend that you use the same passphrase for your root password as you +use for your LUKS encryption (unless you share your laptop with other trusted +people who should be able to unlock the drives, but shouldn't be able to +become root). If you are the sole user of the laptop, then having your root +password be different from your LUKS password has no meaningful security +advantages. Generally, you can use the same passphrase for your UEFI +administration, disk encryption, and root account -- knowing any of these will +give an attacker full control of your system anyway, so there is little +security benefit to have them be different on a single-user workstation. + +You should have a different, but equally strong password for your regular user +account that you will be using for day-to-day tasks. This user should be member +of the admin group (e.g. `wheel` or similar, depending on the distribution), +allowing you to perform `sudo` to elevate privileges. + +In other words, if you are the sole user on your workstation, you should have 2 +distinct, robust, equally strong passphrases you will need to remember: + +**Admin-level**, used in the following locations: + +- UEFI administration +- Bootloader (GRUB) +- Disk encryption (LUKS) +- Workstation admin (root user) + +**User-level**, used for the following: + +- User account and sudo +- Master password for the password manager + +All of them, obviously, can be different if there is a compelling reason. + +## Post-installation hardening + +Post-installation security hardening will depend greatly on your distribution +of choice, so it is futile to provide detailed instructions in a general +document such as this one. 
However, here are some steps you should take: + +### Checklist + +- [ ] Globally disable firewire and thunderbolt modules _(CRITICAL)_ +- [ ] Check your firewalls to ensure all incoming ports are filtered _(CRITICAL)_ +- [ ] Make sure root mail is forwarded to an account you check _(CRITICAL)_ +- [ ] Check to ensure sshd service is disabled by default _(MODERATE)_ +- [ ] Set up an automatic OS update schedule, or update reminders _(MODERATE)_ +- [ ] Configure the screensaver to auto-lock after a period of inactivity _(MODERATE)_ +- [ ] Set up logwatch _(MODERATE)_ +- [ ] Install and use rkhunter _(LOW)_ +- [ ] Install an Intrusion Detection System _(PARANOID)_ + +### Considerations + +#### Blacklisting modules + +To blacklist a firewire and thunderbolt modules, add the following lines to a +file in `/etc/modprobe.d/blacklist-dma.conf`: + + blacklist firewire-core + blacklist thunderbolt + +The modules will be blacklisted upon reboot. It doesn't hurt doing this even if +you don't have these ports (but it doesn't do anything either). + +#### Root mail + +By default, root mail is just saved on the system and tends to never be read. +Make sure you set your `/etc/aliases` to forward root mail to a mailbox that +you actually read, otherwise you may miss important system notifications and +reports: + + # Person who should get root's mail + root: bob@example.com + +Run `newaliases` after this edit and test it out to make sure that it actually +gets delivered, as some email providers will reject email coming in from +nonexistent or non-routable domain names. If that is the case, you will need to +play with your mail forwarding configuration until this actually works. + +#### Firewalls, sshd, and listening daemons + +The default firewall settings will depend on your distribution, but many of +them will allow incoming `sshd` ports. Unless you have a compelling legitimate +reason to allow incoming ssh, you should filter that out and disable the `sshd` +daemon. + + systemctl disable sshd.service + systemctl stop sshd.service + +You can always start it temporarily if you need to use it. + +In general, your system shouldn't have any listening ports apart from +responding to ping. This will help safeguard you against network-level 0-day +exploits. + +#### Automatic updates or notifications + +It is recommended to turn on automatic updates, unless you have a very good +reason not to do so, such as fear that an automatic update would render your +system unusable (it's happened in the past, so this fear is not unfounded). At +the very least, you should enable automatic notifications of available updates. +Most distributions already have this service automatically running for you, so +chances are you don't have to do anything. Consult your distribution +documentation to find out more. + +You should apply all outstanding errata as soon as possible, even if something +isn't specifically labeled as "security update" or has an associated CVE code. +All bugs have the potential of being security bugs and erring on the side of +newer, unknown bugs is _generally_ a safer strategy than sticking with old, +known ones. + +#### Watching logs + +You should have a keen interest in what happens on your system. For this +reason, you should install `logwatch` and configure it to send nightly activity +reports of everything that happens on your system. This won't prevent a +dedicated attacker, but is a good safety-net feature to have in place. 
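+
+As a rough sketch, on a Fedora-like system the setup could look like the
+following (package names and configuration paths may differ on your
+distribution, and the rsyslog part is explained in the note below):
+
+    dnf install logwatch rsyslog
+    systemctl enable rsyslog
+    systemctl start rsyslog
+
+    # Have the nightly report sent to a mailbox you actually read
+    echo 'MailTo = bob@example.com' >> /etc/logwatch/conf/logwatch.conf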
+ +Note, that many systemd distros will no longer automatically install a syslog +server that `logwatch` needs (due to systemd relying on its own journal), so +you will need to install and enable `rsyslog` to make sure your `/var/log` is +not empty before logwatch will be of any use. + +#### Rkhunter and IDS + +Installing `rkhunter` and an intrusion detection system (IDS) like `aide` or +`tripwire` will not be that useful unless you actually understand how they work +and take the necessary steps to set them up properly (such as, keeping the +databases on external media, running checks from a trusted environment, +remembering to refresh the hash databases after performing system updates and +configuration changes, etc). If you are not willing to take these steps and +adjust how you do things on your own workstation, these tools will introduce +hassle without any tangible security benefit. + +We do recommend that you install `rkhunter` and run it nightly. It's fairly +easy to learn and use, and though it will not deter a sophisticated attacker, +it may help you catch your own mistakes. + +## Personal workstation backups + +Workstation backups tend to be overlooked or done in a haphazard, often unsafe +manner. + +### Checklist + +- [ ] Set up encrypted workstation backups to external storage _(CRITICAL)_ +- [ ] Use zero-knowledge backup tools for cloud backups _(MODERATE)_ + +### Considerations + +#### Full encrypted backups to external storage + +It is handy to have an external hard drive where one can dump full backups +without having to worry about such things like bandwidth and upstream speeds +(in this day and age most providers still offer dramatically asymmetric +upload/download speeds). Needless to say, this hard drive needs to be in itself +encrypted (again, via LUKS), or you should use a backup tool that creates +encrypted backups, such as `duplicity` or its GUI companion, `deja-dup`. I +recommend using the latter with a good randomly generated passphrase, stored in +your password manager. If you travel with your laptop, leave this drive at home +to have something to come back to in case your laptop is lost or stolen. + +In addition to your home directory, you should also back up `/etc` and +`/var/log` for various forensic purposes. + +Above all, avoid copying your home directory onto any unencrypted storage, even +as a quick way to move your files around between systems, as you will most +certainly forget to erase it once you're done, exposing potentially private or +otherwise security sensitive data to snooping hands -- especially if you keep +that storage media in the same bag with your laptop. + +#### Selective zero-knowledge backups off-site + +Off-site backups are also extremely important and can be done either to your +employer, if they offer space for it, or to a cloud provider. You can set up a +separate duplicity/deja-dup profile to only include most important files in +order to avoid transferring huge amounts of data that you don't really care to +back up off-site (internet cache, music, downloads, etc). + +Alternatively, you can use a zero-knowledge backup tool, such as +[SpiderOak][5], which offers an excellent Linux GUI tool and has additional +useful features such as synchronizing content between multiple systems and +platforms. + +## Best practices + +What follows is a curated list of best practices that we think you should +adopt. 
It is most certainly non-exhaustive, but rather attempts to offer +practical advice that strikes a workable balance between security and overall +usability. + +### Browsing + +There is no question that the web browser will be the piece of software with +the largest and the most exposed attack surface on your system. It is a tool +written specifically to download and execute untrusted, frequently hostile +code. It attempts to shield you from this danger by employing multiple +mechanisms such as sandboxes and code sanitization, but they have all been +previously defeated on multiple occasions. You should learn to approach +browsing websites as the most insecure activity you'll engage in on any given +day. + +There are several ways you can reduce the impact of a compromised browser, but +the truly effective ways will require significant changes in the way you +operate your workstation. + +#### 1: Use two different browsers + +This is the easiest to do, but only offers minor security benefits. Not all +browser compromises give an attacker full unfettered access to your system -- +sometimes they are limited to allowing one to read local browser storage, +steal active sessions from other tabs, capture input entered into the browser, +etc. Using two different browsers, one for work/high security sites, and +another for everything else will help prevent minor compromises from giving +attackers access to the whole cookie jar. The main inconvenience will be the +amount of memory consumed by two different browser processes. + +Here's what we recommend: + +##### Firefox for work and high security sites + +Use Firefox to access work-related sites, where extra care should be taken to +ensure that data like cookies, sessions, login information, keystrokes, etc, +should most definitely not fall into attackers' hands. You should NOT use +this browser for accessing any other sites except select few. + +You should install the following Firefox add-ons: + +- [ ] NoScript _(CRITICAL)_ + - NoScript prevents active content from loading, except from user + whitelisted domains. It is a great hassle to use with your default browser + (though offers really good security benefits), so we recommend only + enabling it on the browser you use to access work-related sites. + +- [ ] Privacy Badger _(CRITICAL)_ + - EFF's Privacy Badger will prevent most external trackers and ad platforms + from being loaded, which will help avoid compromises on these tracking + sites from affecting your browser (trackers and ad sites are very commonly + targeted by attackers, as they allow rapid infection of thousands of + systems worldwide). + +- [ ] HTTPS Everywhere _(CRITICAL)_ + - This EFF-developed Add-on will ensure that most of your sites are accessed + over a secure connection, even if a link you click is using http:// (great + to avoid a number of attacks, such as [SSL-strip][7]). + +- [ ] Certificate Patrol _(MODERATE)_ + - This tool will alert you if the site you're accessing has recently changed + their TLS certificates -- especially if it wasn't nearing expiration dates + or if it is now using a different certification authority. It helps + alert you if someone is trying to man-in-the-middle your connection, + but generates a lot of benign false-positives. + +You should leave Firefox as your default browser for opening links, as +NoScript will prevent most active content from loading or executing. 
+
+##### Chrome/Chromium for everything else
+
+Chromium developers are ahead of Firefox in adding a lot of nice security
+features (at least [on Linux][6]), such as seccomp sandboxes, kernel user
+namespaces, etc, which act as an added layer of isolation between the sites
+you visit and the rest of your system. Chromium is the upstream open-source
+project, and Chrome is Google's proprietary binary build based on it (insert
+the usual paranoid caution about not using it for anything you don't want
+Google to know about).
+
+It is recommended that you install **Privacy Badger** and **HTTPS Everywhere**
+extensions in Chrome as well and give it a distinct theme from Firefox to
+indicate that this is your "untrusted sites" browser.
+
+#### 2: Use two different browsers, one inside a dedicated VM
+
+This is a similar recommendation to the above, except you will add an extra
+step of running Chrome inside a dedicated VM that you access via a fast
+protocol, allowing you to share clipboards and forward sound events (e.g.
+Spice or RDP). This will add an excellent layer of isolation between the
+untrusted browser and the rest of your work environment, ensuring that
+attackers who manage to fully compromise your browser will then have to
+additionally break out of the VM isolation layer in order to get to the rest
+of your system.
+
+This is a surprisingly workable configuration, but requires a lot of RAM and
+fast processors that can handle the increased load. It will also require a
+significant amount of dedication on the part of the admin, who will need to
+adjust their work practices accordingly.
+
+#### 3: Fully separate your work and play environments via virtualization
+
+See the [Qubes-OS project][3], which strives to provide a high-security
+workstation environment via compartmentalizing your applications into separate
+fully isolated VMs.
+
+### Password managers
+
+#### Checklist
+
+- [ ] Use a password manager _(CRITICAL)_
+- [ ] Use unique passwords on unrelated sites _(CRITICAL)_
+- [ ] Use a password manager that supports team sharing _(MODERATE)_
+- [ ] Use a separate password manager for non-website accounts _(PARANOID)_
+
+#### Considerations
+
+Using good, unique passwords should be a critical requirement for every member
+of your team. Credential theft is happening all the time -- either via
+compromised computers, stolen database dumps, remote site exploits, or any
+number of other means. No credentials should ever be reused across sites,
+especially for critical applications.
+
+##### In-browser password manager
+
+Every browser has a mechanism for saving passwords that is fairly secure and
+can sync with vendor-maintained cloud storage while keeping the data encrypted
+with a user-provided passphrase. However, this mechanism has important
+disadvantages:
+
+1. It does not work across browsers
+2. It does not offer any way of sharing credentials with team members
+
+There are several well-supported, free-or-cheap password managers that are
+well-integrated into multiple browsers, work across platforms, and offer
+group sharing (usually as a paid service). Solutions can be easily found via
+search engines.
+
+##### Standalone password manager
+
+One of the major drawbacks of any password manager that comes integrated with
+the browser is the fact that it's part of the application that is most likely
+to be attacked by intruders.
+If this makes you uncomfortable (and it should),
+you may choose to have two different password managers -- one for websites
+that is integrated into your browser, and one that runs as a standalone
+application. The latter can be used to store high-risk credentials such as
+root passwords, database passwords, other shell account credentials, etc.
+
+It may be particularly useful to have such a tool for sharing superuser
+account credentials with other members of your team (server root passwords,
+ILO passwords, database admin passwords, bootloader passwords, etc).
+
+A few tools can help you:
+
+- [KeePassX][8], which improves team sharing in version 2
+- [Pass][9], which uses text files and PGP and integrates with git
+- [Django-Pstore][10], which uses GPG to share credentials between admins
+- [Hiera-Eyaml][11], which, if you are already using Puppet for your
+  infrastructure, may be a handy way to track your server/service credentials
+  as part of your encrypted Hiera data store
+
+### Securing SSH and PGP private keys
+
+Personal encryption keys, including SSH and PGP private keys, are going to be
+the most prized items on your workstation -- something the attackers will be
+most interested in obtaining, as that would allow them to further attack your
+infrastructure or impersonate you to other admins. You should take extra steps
+to ensure that your private keys are well protected against theft.
+
+#### Checklist
+
+- [ ] Strong passphrases are used to protect private keys _(CRITICAL)_
+- [ ] PGP Master key is stored on removable storage _(MODERATE)_
+- [ ] Auth, Sign and Encrypt Subkeys are stored on a smartcard device _(MODERATE)_
+- [ ] SSH is configured to use PGP Auth key as ssh private key _(MODERATE)_
+
+#### Considerations
+
+The best way to prevent private key theft is to use a smartcard to store your
+encryption private keys and never copy them onto the workstation. There are
+several manufacturers that offer OpenPGP capable devices:
+
+- [Kernel Concepts][12], where you can purchase both the OpenPGP compatible
+  smartcards and the USB readers, should you need one.
+- [Yubikey NEO][13], which offers OpenPGP smartcard functionality in addition
+  to many other cool features (U2F, PIV, HOTP, etc).
+
+It is also important to make sure that the master PGP key is not stored on the
+main workstation, and only subkeys are used. The master key will only be
+needed when signing someone else's keys or creating new subkeys -- operations
+which do not happen very frequently. You may follow the [Debian subkeys][14]
+guide to learn how to move your master key to removable storage and how to
+create subkeys.
+
+You should then configure your gnupg agent to act as ssh agent and use the
+smartcard-based PGP Auth key to act as your ssh private key. We publish a
+[detailed guide][15] on how to do that using either a smartcard reader or a
+Yubikey NEO.
+
+If you are not willing to go that far, at least make sure you have a strong
+passphrase on both your PGP private key and your SSH private key, which will
+make it harder for attackers to steal and use them.
+
+### SELinux on the workstation
+
+If you are using a distribution that comes bundled with SELinux (such as
+Fedora), here are some recommendations on how to make the best use of it to
+maximize your workstation security.
+
+#### Checklist
+
+- [ ] Make sure SELinux is enforcing on your workstation _(CRITICAL)_
+- [ ] Never blindly run `audit2allow -M`, always check _(CRITICAL)_
+- [ ] Never `setenforce 0` _(MODERATE)_
+- [ ] Switch your account to SELinux user `staff_u` _(MODERATE)_
+
+#### Considerations
+
+SELinux is a Mandatory Access Controls (MAC) extension to core POSIX
+permissions functionality. It is mature, robust, and has come a long way since
+its initial roll-out. Regardless, many sysadmins to this day repeat the
+outdated mantra of "just turn it off."
+
+That being said, SELinux will have limited security benefits on the
+workstation, as most applications you will be running as a user are going to
+be running unconfined. It does provide enough net benefit to warrant leaving
+it on, as it will likely help prevent an attacker from escalating privileges
+to gain root-level access via a vulnerable daemon service.
+
+Our recommendation is to leave it on and enforcing.
+
+##### Never `setenforce 0`
+
+It's tempting to use `setenforce 0` to flip SELinux into permissive mode
+on a temporary basis, but you should avoid doing that. This essentially turns
+off SELinux for the entire system, while what you really want is to
+troubleshoot a particular application or daemon.
+
+Instead of `setenforce 0` you should be using `semanage permissive -a
+[somedomain_t]` to put only that domain into permissive mode. First, find out
+which domain is causing trouble by running `ausearch`:
+
+    ausearch -ts recent -m avc
+
+and then look for the `scontext=` (source SELinux context) line, like so:
+
+    scontext=staff_u:staff_r:gpg_pinentry_t:s0-s0:c0.c1023
+                             ^^^^^^^^^^^^^^
+
+This tells you that the domain being denied is `gpg_pinentry_t`, so if you
+want to troubleshoot the application, you should add it to permissive domains:
+
+    semanage permissive -a gpg_pinentry_t
+
+This will allow you to use the application and collect the rest of the AVCs,
+which you can then use in conjunction with `audit2allow` to write a local
+policy. Once that is done and you see no new AVC denials, you can remove that
+domain from permissive by running:
+
+    semanage permissive -d gpg_pinentry_t
+
+##### Use your workstation as SELinux role staff_r
+
+SELinux comes with a native implementation of roles that prohibit or grant
+certain privileges based on the role associated with the user account. As an
+administrator, you should be using the `staff_r` role, which will restrict
+access to many configuration and other security-sensitive files, unless you
+first perform `sudo`.
+
+By default, accounts are created as `unconfined_r` and most applications you
+execute will run unconfined, without any (or with only very few) SELinux
+constraints. To switch your account to the `staff_r` role, run the following
+command:
+
+    usermod -Z staff_u [username]
+
+You should log out and log back in to enable the new role, at which point if
+you run `id -Z`, you'll see:
+
+    staff_u:staff_r:staff_t:s0-s0:c0.c1023
+
+When performing `sudo`, you should remember to add an extra flag to tell
+SELinux to transition to the "sysadmin" role. The command you want is:
+
+    sudo -i -r sysadm_r
+
+At which point `id -Z` will show:
+
+    staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023
+
+**WARNING**: you should be comfortable using `ausearch` and `audit2allow`
+before you make this switch, as it's possible some of your applications will
+no longer work when you're running as role `staff_r`.
At the time of writing, +the following popular applications are known to not work under `staff_r` +without policy tweaks: + +- Chrome/Chromium +- Skype +- VirtualBox + +To switch back to `unconfined_r`, run the following command: + + usermod -Z unconfined_u [username] + +and then log out and back in to get back into the comfort zone. + +## Further reading + +The world of IT security is a rabbit hole with no bottom. If you would like to +go deeper, or find out more about security features on your particular +distribution, please check out the following links: + +- [Fedora Security Guide](https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/index.html) +- [CESG Ubuntu Security Guide](https://www.gov.uk/government/publications/end-user-devices-security-guidance-ubuntu-1404-lts) +- [Debian Security Manual](https://www.debian.org/doc/manuals/securing-debian-howto/index.en.html) +- [Arch Linux Security Wiki](https://wiki.archlinux.org/index.php/Security) +- [Mac OSX Security](https://www.apple.com/support/security/guides/) + +## License +This work is licensed under a +[Creative Commons Attribution-ShareAlike 4.0 International License][0]. + +-------------------------------------------------------------------------------- + +via: https://github.com/lfit/itpol/blob/master/linux-workstation-security.md#linux-workstation-security-checklist + +作者:[mricon][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://github.com/mricon +[0]: http://creativecommons.org/licenses/by-sa/4.0/ +[1]: https://github.com/QubesOS/qubes-antievilmaid +[2]: https://en.wikipedia.org/wiki/IEEE_1394#Security_issues +[3]: https://qubes-os.org/ +[4]: https://xkcd.com/936/ +[5]: https://spideroak.com/ +[6]: https://code.google.com/p/chromium/wiki/LinuxSandboxing +[7]: http://www.thoughtcrime.org/software/sslstrip/ +[8]: https://keepassx.org/ +[9]: http://www.passwordstore.org/ +[10]: https://pypi.python.org/pypi/django-pstore +[11]: https://github.com/TomPoulton/hiera-eyaml +[12]: http://shop.kernelconcepts.de/ +[13]: https://www.yubico.com/products/yubikey-hardware/yubikey-neo/ +[14]: https://wiki.debian.org/Subkeys +[15]: https://github.com/lfit/ssh-gpg-smartcard-config diff --git a/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md b/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md deleted file mode 100644 index ca96b7dac6..0000000000 --- a/sources/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md +++ /dev/null @@ -1,220 +0,0 @@ -Part 1 - LFCS: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux -================================================================================ -The Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, a new program that aims at helping individuals all over the world to get certified in basic to intermediate system administration tasks for Linux systems. This includes supporting running systems and services, along with first-hand troubleshooting and analysis, and smart decision-making to escalate issues to engineering teams. 
- -![Linux Foundation Certified Sysadmin](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-1.png) - -Linux Foundation Certified Sysadmin – Part 1 - -Please watch the following video that demonstrates about The Linux Foundation Certification Program. - -注:youtube 视频 - - -The series will be titled Preparation for the LFCS (Linux Foundation Certified Sysadmin) Parts 1 through 10 and cover the following topics for Ubuntu, CentOS, and openSUSE: - -- Part 1: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux -- Part 2: How to Install and Use vi/m as a full Text Editor -- Part 3: Archiving Files/Directories and Finding Files on the Filesystem -- Part 4: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition -- Part 5: Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux -- Part 6: Assembling Partitions as RAID Devices – Creating & Managing System Backups -- Part 7: Managing System Startup Process and Services (SysVinit, Systemd and Upstart -- Part 8: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts -- Part 9: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper -- Part 10: Learning Basic Shell Scripting and Filesystem Troubleshooting - - -This post is Part 1 of a 10-tutorial series, which will cover the necessary domains and competencies that are required for the LFCS certification exam. That being said, fire up your terminal, and let’s start. - -### Processing Text Streams in Linux ### - -Linux treats the input to and the output from programs as streams (or sequences) of characters. To begin understanding redirection and pipes, we must first understand the three most important types of I/O (Input and Output) streams, which are in fact special files (by convention in UNIX and Linux, data streams and peripherals, or device files, are also treated as ordinary files). - -The difference between > (redirection operator) and | (pipeline operator) is that while the first connects a command with a file, the latter connects the output of a command with another command. - - # command > file - # command1 | command2 - -Since the redirection operator creates or overwrites files silently, we must use it with extreme caution, and never mistake it with a pipeline. One advantage of pipes on Linux and UNIX systems is that there is no intermediate file involved with a pipe – the stdout of the first command is not written to a file and then read by the second command. - -For the following practice exercises we will use the poem “A happy child” (anonymous author). - -![cat command](http://www.tecmint.com/wp-content/uploads/2014/10/cat-command.png) - -cat command example - -#### Using sed #### - -The name sed is short for stream editor. For those unfamiliar with the term, a stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline). - -The most basic (and popular) usage of sed is the substitution of characters. We will begin by changing every occurrence of the lowercase y to UPPERCASE Y and redirecting the output to ahappychild2.txt. The g flag indicates that sed should perform the substitution for all instances of term on every line of file. If this flag is omitted, sed will replace only the first occurrence of term on each line. 
- -**Basic syntax:** - - # sed ‘s/term/replacement/flag’ file - -**Our example:** - - # sed ‘s/y/Y/g’ ahappychild.txt > ahappychild2.txt - -![sed command](http://www.tecmint.com/wp-content/uploads/2014/10/sed-command.png) - -sed command example - -Should you want to search for or replace a special character (such as /, \, &) you need to escape it, in the term or replacement strings, with a backward slash. - -For example, we will substitute the word and for an ampersand. At the same time, we will replace the word I with You when the first one is found at the beginning of a line. - - # sed 's/and/\&/g;s/^I/You/g' ahappychild.txt - -![sed replace string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-replace-string.png) - -sed replace string - -In the above command, a ^ (caret sign) is a well-known regular expression that is used to represent the beginning of a line. - -As you can see, we can combine two or more substitution commands (and use regular expressions inside them) by separating them with a semicolon and enclosing the set inside single quotes. - -Another use of sed is showing (or deleting) a chosen portion of a file. In the following example, we will display the first 5 lines of /var/log/messages from Jun 8. - - # sed -n '/^Jun 8/ p' /var/log/messages | sed -n 1,5p - -Note that by default, sed prints every line. We can override this behaviour with the -n option and then tell sed to print (indicated by p) only the part of the file (or the pipe) that matches the pattern (Jun 8 at the beginning of line in the first case and lines 1 through 5 inclusive in the second case). - -Finally, it can be useful while inspecting scripts or configuration files to inspect the code itself and leave out comments. The following sed one-liner deletes (d) blank lines or those starting with # (the | character indicates a boolean OR between the two regular expressions). - - # sed '/^#\|^$/d' apache2.conf - -![sed match string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-match-string.png) - -sed match string - -#### uniq Command #### - -The uniq command allows us to report or remove duplicate lines in a file, writing to stdout by default. We must note that uniq does not detect repeated lines unless they are adjacent. Thus, uniq is commonly used along with a preceding sort (which is used to sort lines of text files). By default, sort takes the first field (separated by spaces) as key field. To specify a different key field, we need to use the -k option. - -**Examples** - -The du –sch /path/to/directory/* command returns the disk space usage per subdirectories and files within the specified directory in human-readable format (also shows a total per directory), and does not order the output by size, but by subdirectory and file name. We can use the following command to sort by size. - - # du -sch /var/* | sort –h - -![sort command](http://www.tecmint.com/wp-content/uploads/2014/10/sort-command.jpg) - -sort command example - -You can count the number of events in a log by date by telling uniq to perform the comparison using the first 6 characters (-w 6) of each line (where the date is specified), and prefixing each output line by the number of occurrences (-c) with the following command. - - # cat /var/log/mail.log | uniq -c -w 6 - -![Count Numbers in File](http://www.tecmint.com/wp-content/uploads/2014/10/count-numbers-in-file.jpg) - -Count Numbers in File - -Finally, you can combine sort and uniq (as they usually are). 
Consider the following file with a list of donors, donation date, and amount. Suppose we want to know how many unique donors there are. We will use the following command to cut the first field (fields are delimited by a colon), sort by name, and remove duplicate lines. - - # cat sortuniq.txt | cut -d: -f1 | sort | uniq - -![Find Unique Records in File](http://www.tecmint.com/wp-content/uploads/2014/10/find-uniqu-records-in-file.jpg) - -Find Unique Records in File - -- Read Also: [13 “cat” Command Examples][1] - -#### grep Command #### - -grep searches text files or (command output) for the occurrence of a specified regular expression and outputs any line containing a match to standard output. - -**Examples** - -Display the information from /etc/passwd for user gacanepa, ignoring case. - - # grep -i gacanepa /etc/passwd - -![grep Command](http://www.tecmint.com/wp-content/uploads/2014/10/grep-command.jpg) - -grep command example - -Show all the contents of /etc whose name begins with rc followed by any single number. - - # ls -l /etc | grep rc[0-9] - -![List Content Using grep](http://www.tecmint.com/wp-content/uploads/2014/10/list-content-using-grep.jpg) - -List Content Using grep - -- Read Also: [12 “grep” Command Examples][2] - -#### tr Command Usage #### - -The tr command can be used to translate (change) or delete characters from stdin, and write the result to stdout. - -**Examples** - -Change all lowercase to uppercase in sortuniq.txt file. - - # cat sortuniq.txt | tr [:lower:] [:upper:] - -![Sort Strings in File](http://www.tecmint.com/wp-content/uploads/2014/10/sort-strings.jpg) - -Sort Strings in File - -Squeeze the delimiter in the output of ls –l to only one space. - - # ls -l | tr -s ' ' - -![Squeeze Delimiter](http://www.tecmint.com/wp-content/uploads/2014/10/squeeze-delimeter.jpg) - -Squeeze Delimiter - -#### cut Command Usage #### - -The cut command extracts portions of input lines (from stdin or files) and displays the result on standard output, based on number of bytes (-b option), characters (-c), or fields (-f). In this last case (based on fields), the default field separator is a tab, but a different delimiter can be specified by using the -d option. - -**Examples** - -Extract the user accounts and the default shells assigned to them from /etc/passwd (the –d option allows us to specify the field delimiter, and the –f switch indicates which field(s) will be extracted. - - # cat /etc/passwd | cut -d: -f1,7 - -![Extract User Accounts](http://www.tecmint.com/wp-content/uploads/2014/10/extract-user-accounts.jpg) - -Extract User Accounts - -Summing up, we will create a text stream consisting of the first and third non-blank files of the output of the last command. We will use grep as a first filter to check for sessions of user gacanepa, then squeeze delimiters to only one space (tr -s ‘ ‘). Next, we’ll extract the first and third fields with cut, and finally sort by the second field (IP addresses in this case) showing unique. - - # last | grep gacanepa | tr -s ‘ ‘ | cut -d’ ‘ -f1,3 | sort -k2 | uniq - -![last command](http://www.tecmint.com/wp-content/uploads/2014/10/last-command.png) - -last command example - -The above command shows how multiple commands and pipes can be combined so as to obtain filtered data according to our desires. Feel free to also run it by parts, to help you see the output that is pipelined from one command to the next (this can be a great learning experience, by the way!). 
- -### Summary ### - -Although this example (along with the rest of the examples in the current tutorial) may not seem very useful at first sight, they are a nice starting point to begin experimenting with commands that are used to create, edit, and manipulate files from the Linux command line. Feel free to leave your questions and comments below – they will be much appreciated! - -#### Reference Links #### - -- [About the LFCS][3] -- [Why get a Linux Foundation Certification?][4] -- [Register for the LFCS exam][5] - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ -[2]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/ -[3]:https://training.linuxfoundation.org/certification/LFCS -[4]:https://training.linuxfoundation.org/certification/why-certify-with-us -[5]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file diff --git a/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md new file mode 100644 index 0000000000..5dd1782a98 --- /dev/null +++ b/sources/tech/LFCS/Part 10 - LFCS--Understanding and Learning Basic Shell Scripting and Linux Filesystem Troubleshooting.md @@ -0,0 +1,317 @@ +Translating by Xuanwo + +Part 10 - LFCS: Understanding & Learning Basic Shell Scripting and Linux Filesystem Troubleshooting +================================================================================ +The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new initiative whose purpose is to allow individuals everywhere (and anywhere) to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams. + +![Basic Shell Scripting and Filesystem Troubleshooting](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-10.png) + +Linux Foundation Certified Sysadmin – Part 10 + +Check out the following video that guides you an introduction to the Linux Foundation Certification Program. + +注:youtube 视频 + + + +This is the last article (Part 10) of the present 10-tutorial long series. In this article we will focus on basic shell scripting and troubleshooting Linux file systems. Both topics are required for the LFCS certification exam. + +### Understanding Terminals and Shells ### + +Let’s clarify a few concepts first. + +- A shell is a program that takes commands and gives them to the operating system to be executed. +- A terminal is a program that allows us as end users to interact with the shell. One example of a terminal is GNOME terminal, as shown in the below image. 
+ +![Gnome Terminal](http://www.tecmint.com/wp-content/uploads/2014/11/Gnome-Terminal.png) + +Gnome Terminal + +When we first start a shell, it presents a command prompt (also known as the command line), which tells us that the shell is ready to start accepting commands from its standard input device, which is usually the keyboard. + +You may want to refer to another article in this series ([Use Command to Create, Edit, and Manipulate files – Part 1][1]) to review some useful commands. + +Linux provides a range of options for shells, the following being the most common: + +**bash Shell** + +Bash stands for Bourne Again SHell and is the GNU Project’s default shell. It incorporates useful features from the Korn shell (ksh) and C shell (csh), offering several improvements at the same time. This is the default shell used by the distributions covered in the LFCS certification, and it is the shell that we will use in this tutorial. + +**sh Shell** + +The Bourne SHell is the oldest shell and therefore has been the default shell of many UNIX-like operating systems for many years. +ksh Shell + +The Korn SHell is a Unix shell which was developed by David Korn at Bell Labs in the early 1980s. It is backward-compatible with the Bourne shell and includes many features of the C shell. + +A shell script is nothing more and nothing less than a text file turned into an executable program that combines commands that are executed by the shell one after another. + +### Basic Shell Scripting ### + +As mentioned earlier, a shell script is born as a plain text file. Thus, can be created and edited using our preferred text editor. You may want to consider using vi/m (refer to [Usage of vi Editor – Part 2][2] of this series), which features syntax highlighting for your convenience. + +Type the following command to create a file named myscript.sh and press Enter. + + # vim myscript.sh + +The very first line of a shell script must be as follows (also known as a shebang). + + #!/bin/bash + +It “tells” the operating system the name of the interpreter that should be used to run the text that follows. + +Now it’s time to add our commands. We can clarify the purpose of each command, or the entire script, by adding comments as well. Note that the shell ignores those lines beginning with a pound sign # (explanatory comments). + + #!/bin/bash + echo This is Part 10 of the 10-article series about the LFCS certification + echo Today is $(date +%Y-%m-%d) + +Once the script has been written and saved, we need to make it executable. + + # chmod 755 myscript.sh + +Before running our script, we need to say a few words about the $PATH environment variable. If we run, + + echo $PATH + +from the command line, we will see the contents of $PATH: a colon-separated list of directories that are searched when we enter the name of a executable program. It is called an environment variable because it is part of the shell environment – a set of information that becomes available for the shell and its child processes when the shell is first started. + +When we type a command and press Enter, the shell searches in all the directories listed in the $PATH variable and executes the first instance that is found. 
Let’s see an example, + +![Linux Environment Variables](http://www.tecmint.com/wp-content/uploads/2014/11/Environment-Variable.png) + +Environment Variables + +If there are two executable files with the same name, one in /usr/local/bin and another in /usr/bin, the one in the first directory will be executed first, whereas the other will be disregarded. + +If we haven’t saved our script inside one of the directories listed in the $PATH variable, we need to append ./ to the file name in order to execute it. Otherwise, we can run it just as we would do with a regular command. + + # pwd + # ./myscript.sh + # cp myscript.sh ../bin + # cd ../bin + # pwd + # myscript.sh + +![Execute Script in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Execute-Script.png) + +Execute Script + +#### Conditionals #### + +Whenever you need to specify different courses of action to be taken in a shell script, as result of the success or failure of a command, you will use the if construct to define such conditions. Its basic syntax is: + + if CONDITION; then + COMMANDS; + else + OTHER-COMMANDS + fi + +Where CONDITION can be one of the following (only the most frequent conditions are cited here) and evaluates to true when: + +- [ -a file ] → file exists. +- [ -d file ] → file exists and is a directory. +- [ -f file ] →file exists and is a regular file. +- [ -u file ] →file exists and its SUID (set user ID) bit is set. +- [ -g file ] →file exists and its SGID bit is set. +- [ -k file ] →file exists and its sticky bit is set. +- [ -r file ] →file exists and is readable. +- [ -s file ]→ file exists and is not empty. +- [ -w file ]→file exists and is writable. +- [ -x file ] is true if file exists and is executable. +- [ string1 = string2 ] → the strings are equal. +- [ string1 != string2 ] →the strings are not equal. + +[ int1 op int2 ] should be part of the preceding list, while the items that follow (for example, -eq –> is true if int1 is equal to int2.) should be a “children” list of [ int1 op int2 ] where op is one of the following comparison operators. + +- -eq –> is true if int1 is equal to int2. +- -ne –> true if int1 is not equal to int2. +- -lt –> true if int1 is less than int2. +- -le –> true if int1 is less than or equal to int2. +- -gt –> true if int1 is greater than int2. +- -ge –> true if int1 is greater than or equal to int2. + +#### For Loops #### + +This loop allows to execute one or more commands for each value in a list of values. Its basic syntax is: + + for item in SEQUENCE; do + COMMANDS; + done + +Where item is a generic variable that represents each value in SEQUENCE during each iteration. + +#### While Loops #### + +This loop allows to execute a series of repetitive commands as long as the control command executes with an exit status equal to zero (successfully). Its basic syntax is: + + while EVALUATION_COMMAND; do + EXECUTE_COMMANDS; + done + +Where EVALUATION_COMMAND can be any command(s) that can exit with a success (0) or failure (other than 0) status, and EXECUTE_COMMANDS can be any program, script or shell construct, including other nested loops. + +#### Putting It All Together #### + +We will demonstrate the use of the if construct and the for loop with the following example. + +**Determining if a service is running in a systemd-based distro** + +Let’s create a file with a list of services that we want to monitor at a glance. 
+ + # cat myservices.txt + + sshd + mariadb + httpd + crond + firewalld + +![Script to Monitor Linux Services](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Services.png) + +Script to Monitor Linux Services + +Our shell script should look like. + + #!/bin/bash + + # This script iterates over a list of services and + # is used to determine whether they are running or not. + + for service in $(cat myservices.txt); do + systemctl status $service | grep --quiet "running" + if [ $? -eq 0 ]; then + echo $service "is [ACTIVE]" + else + echo $service "is [INACTIVE or NOT INSTALLED]" + fi + done + +![Linux Service Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Script.png) + +Linux Service Monitoring Script + +**Let’s explain how the script works.** + +1). The for loop reads the myservices.txt file one element of LIST at a time. That single element is denoted by the generic variable named service. The LIST is populated with the output of, + + # cat myservices.txt + +2). The above command is enclosed in parentheses and preceded by a dollar sign to indicate that it should be evaluated to populate the LIST that we will iterate over. + +3). For each element of LIST (meaning every instance of the service variable), the following command will be executed. + + # systemctl status $service | grep --quiet "running" + +This time we need to precede our generic variable (which represents each element in LIST) with a dollar sign to indicate it’s a variable and thus its value in each iteration should be used. The output is then piped to grep. + +The –quiet flag is used to prevent grep from displaying to the screen the lines where the word running appears. When that happens, the above command returns an exit status of 0 (represented by $? in the if construct), thus verifying that the service is running. + +An exit status different than 0 (meaning the word running was not found in the output of systemctl status $service) indicates that the service is not running. + +![Services Monitoring Script](http://www.tecmint.com/wp-content/uploads/2014/11/Services-Monitoring-Script.png) + +Services Monitoring Script + +We could go one step further and check for the existence of myservices.txt before even attempting to enter the for loop. + + #!/bin/bash + + # This script iterates over a list of services and + # is used to determine whether they are running or not. + + if [ -f myservices.txt ]; then + for service in $(cat myservices.txt); do + systemctl status $service | grep --quiet "running" + if [ $? -eq 0 ]; then + echo $service "is [ACTIVE]" + else + echo $service "is [INACTIVE or NOT INSTALLED]" + fi + done + else + echo "myservices.txt is missing" + fi + +**Pinging a series of network or internet hosts for reply statistics** + +You may want to maintain a list of hosts in a text file and use a script to determine every now and then whether they’re pingable or not (feel free to replace the contents of myhosts and try for yourself). + +The read shell built-in command tells the while loop to read myhosts line by line and assigns the content of each line to variable host, which is then passed to the ping command. 
+ + #!/bin/bash + + # This script is used to demonstrate the use of a while loop + + while read host; do + ping -c 2 $host + done < myhosts + +![Script to Ping Servers](http://www.tecmint.com/wp-content/uploads/2014/11/Script-to-Ping-Servers.png) + +Script to Ping Servers + +Read Also: + +- [Learn Shell Scripting: A Guide from Newbies to System Administrator][3] +- [5 Shell Scripts to Learn Shell Programming][4] + +### Filesystem Troubleshooting ### + +Although Linux is a very stable operating system, if it crashes for some reason (for example, due to a power outage), one (or more) of your file systems will not be unmounted properly and thus will be automatically checked for errors when Linux is restarted. + +In addition, each time the system boots during a normal boot, it always checks the integrity of the filesystems before mounting them. In both cases this is performed using a tool named fsck (“file system check”). + +fsck will not only check the integrity of file systems, but also attempt to repair corrupt file systems if instructed to do so. Depending on the severity of damage, fsck may succeed or not; when it does, recovered portions of files are placed in the lost+found directory, located in the root of each file system. + +Last but not least, we must note that inconsistencies may also happen if we try to remove an USB drive when the operating system is still writing to it, and may even result in hardware damage. + +The basic syntax of fsck is as follows: + + # fsck [options] filesystem + +**Checking a filesystem for errors and attempting to repair automatically** + +In order to check a filesystem with fsck, we must first unmount it. + + # mount | grep sdg1 + # umount /mnt + # fsck -y /dev/sdg1 + +![Scan Linux Filesystem for Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Filesystem-Errors.png) + +Check Filesystem Errors + +Besides the -y flag, we can use the -a option to automatically repair the file systems without asking any questions, and force the check even when the filesystem looks clean. + + # fsck -af /dev/sdg1 + +If we’re only interested in finding out what’s wrong (without trying to fix anything for the time being) we can run fsck with the -n option, which will output the filesystem issues to standard output. + + # fsck -n /dev/sdg1 + +Depending on the error messages in the output of fsck, we will know whether we can try to solve the issue ourselves or escalate it to engineering teams to perform further checks on the hardware. + +### Summary ### + +We have arrived at the end of this 10-article series where have tried to cover the basic domain competencies required to pass the LFCS exam. + +For obvious reasons, it is not possible to cover every single aspect of these topics in any single tutorial, and that’s why we hope that these articles have put you on the right track to try new stuff yourself and continue learning. + +If you have any questions or comments, they are always welcome – so don’t hesitate to drop us a line via the form below! 
+ +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-basic-shell-scripting-and-linux-filesystem-troubleshooting/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ +[2]:http://www.tecmint.com/vi-editor-usage/ +[3]:http://www.tecmint.com/learning-shell-scripting-language-a-guide-from-newbies-to-system-administrator/ +[4]:http://www.tecmint.com/basic-shell-programming-part-ii/ \ No newline at end of file diff --git a/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md b/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md index 7537f784bd..1d069e08ea 100644 --- a/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md +++ b/sources/tech/LFCS/Part 2 - LFCS--How to Install and Use vi or vim as a Full Text Editor.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 2 - LFCS: How to Install and Use vi/vim as a Full Text Editor ================================================================================ A couple of months ago, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification in order to help individuals from all over the world to verify they are capable of doing basic to intermediate system administration tasks on Linux systems: system support, first-hand troubleshooting and maintenance, plus intelligent decision-making to know when it’s time to raise issues to upper support teams. @@ -295,7 +297,7 @@ Vi Search String in File c). vi uses a command (similar to sed’s) to perform substitution operations over a range of lines or an entire file. To change the word “old” to “young” for the entire file, we must enter the following command. - :%s/old/young/g + :%s/old/young/g **Notice**: The colon at the beginning of the command. diff --git a/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md b/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md index 6ac3d104a0..77fe5cf040 100644 --- a/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md +++ b/sources/tech/LFCS/Part 3 - LFCS--How to Archive or Compress Files and Directories Setting File Attributes and Finding Files in Linux.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 3 - LFCS: How to Archive/Compress Files & Directories, Setting File Attributes and Finding Files in Linux ================================================================================ Recently, the Linux Foundation started the LFCS (Linux Foundation Certified Sysadmin) certification, a brand new program whose purpose is allowing individuals from all corners of the globe to have access to an exam, which if approved, certifies that the person is knowledgeable in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-level troubleshooting and analysis, plus the ability to decide when to escalate issues to engineering teams. 
@@ -178,9 +180,9 @@ List Archive Content Run any of the following commands: - # gzip -d myfiles.tar.gz [#1] - # bzip2 -d myfiles.tar.bz2 [#2] - # xz -d myfiles.tar.xz [#3] + # gzip -d myfiles.tar.gz [#1] + # bzip2 -d myfiles.tar.bz2 [#2] + # xz -d myfiles.tar.xz [#3] Then diff --git a/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md b/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md index ada637fabb..93e4b2966b 100644 --- a/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md +++ b/sources/tech/LFCS/Part 4 - LFCS--Partitioning Storage Devices Formatting Filesystems and Configuring Swap Partition.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 4 - LFCS: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition ================================================================================ Last August, the Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators to show, through a performance-based exam, that they can perform overall operational support of Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation – if needed – to other support teams. diff --git a/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md b/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md index 1544a378bc..4316e32c16 100644 --- a/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md +++ b/sources/tech/LFCS/Part 5 - LFCS--How to Mount or Unmount Local and Network Samba and NFS Filesystems in Linux.md @@ -1,3 +1,5 @@ +Translating by Xuanwo + Part 5 - LFCS: How to Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux ================================================================================ The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is allowing individuals from all corners of the globe to get certified in basic to intermediate system administration tasks for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams. 
diff --git a/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md b/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md new file mode 100644 index 0000000000..901fb7b4f1 --- /dev/null +++ b/sources/tech/LFCS/Part 6 - LFCS--Assembling Partitions as RAID Devices – Creating & Managing System Backups.md @@ -0,0 +1,278 @@ +Translating by Xuanwo + +Part 6 - LFCS: Assembling Partitions as RAID Devices – Creating & Managing System Backups +================================================================================ +Recently, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification, a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of performing overall operational support on Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation, when required, to other support teams. + +![Linux Foundation Certified Sysadmin – Part 6](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-6.png) + +Linux Foundation Certified Sysadmin – Part 6 + +The following video provides an introduction to The Linux Foundation Certification Program. + +注:youtube 视频 + + +This post is Part 6 of a 10-tutorial series, here in this part, we will explain How to Assemble Partitions as RAID Devices – Creating & Managing System Backups, that are required for the LFCS certification exam. + +### Understanding RAID ### + +The technology known as Redundant Array of Independent Disks (RAID) is a storage solution that combines multiple hard disks into a single logical unit to provide redundancy of data and/or improve performance in read / write operations to disk. + +However, the actual fault-tolerance and disk I/O performance lean on how the hard disks are set up to form the disk array. Depending on the available devices and the fault tolerance / performance needs, different RAID levels are defined. You can refer to the RAID series here in Tecmint.com for a more detailed explanation on each RAID level. + +- RAID Guide: [What is RAID, Concepts of RAID and RAID Levels Explained][1] + +Our tool of choice for creating, assembling, managing, and monitoring our software RAIDs is called mdadm (short for multiple disks admin). + + ---------------- Debian and Derivatives ---------------- + # aptitude update && aptitude install mdadm + +---------- + + ---------------- Red Hat and CentOS based Systems ---------------- + # yum update && yum install mdadm + +---------- + + ---------------- On openSUSE ---------------- + # zypper refresh && zypper install mdadm # + +#### Assembling Partitions as RAID Devices #### + +The process of assembling existing partitions as RAID devices consists of the following steps. + +**1. Create the array using mdadm** + +If one of the partitions has been formatted previously, or has been a part of another RAID array previously, you will be prompted to confirm the creation of the new array. Assuming you have taken the necessary precautions to avoid losing important data that may have resided in them, you can safely type y and press Enter. + + # mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1 + +![Creating RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Creating-RAID-Array.png) + +Creating RAID Array + +**2. 
Check the array creation status** + +After creating RAID array, you an check the status of the array using the following commands. + + # cat /proc/mdstat + or + # mdadm --detail /dev/md0 [More detailed summary] + +![Check RAID Array Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Array-Status.png) + +Check RAID Array Status + +**3. Format the RAID Device** + +Format the device with a filesystem as per your needs / requirements, as explained in [Part 4][2] of this series. + +**4. Monitor RAID Array Service** + +Instruct the monitoring service to “keep an eye” on the array. Add the output of mdadm –detail –scan to /etc/mdadm/mdadm.conf (Debian and derivatives) or /etc/mdadm.conf (CentOS / openSUSE), like so. + + # mdadm --detail --scan + +![Monitor RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Monitor-RAID-Array.png) + +Monitor RAID Array + + # mdadm --assemble --scan [Assemble the array] + +To ensure the service starts on system boot, run the following commands as root. + +**Debian and Derivatives** + +Debian and derivatives, though it should start running on boot by default. + + # update-rc.d mdadm defaults + +Edit the /etc/default/mdadm file and add the following line. + + AUTOSTART=true + +**On CentOS and openSUSE (systemd-based)** + + # systemctl start mdmonitor + # systemctl enable mdmonitor + +**On CentOS and openSUSE (SysVinit-based)** + + # service mdmonitor start + # chkconfig mdmonitor on + +**5. Check RAID Disk Failure** + +In RAID levels that support redundancy, replace failed drives when needed. When a device in the disk array becomes faulty, a rebuild automatically starts only if there was a spare device added when we first created the array. + +![Check RAID Faulty Disk](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Faulty-Disk.png) + +Check RAID Faulty Disk + +Otherwise, we need to manually attach an extra physical drive to our system and run. + + # mdadm /dev/md0 --add /dev/sdX1 + +Where /dev/md0 is the array that experienced the issue and /dev/sdX1 is the new device. + +**6. Disassemble a working array** + +You may have to do this if you need to create a new array using the devices – (Optional Step). + + # mdadm --stop /dev/md0 # Stop the array + # mdadm --remove /dev/md0 # Remove the RAID device + # mdadm --zero-superblock /dev/sdX1 # Overwrite the existing md superblock with zeroes + +**7. Set up mail alerts** + +You can configure a valid email address or system account to send alerts to (make sure you have this line in mdadm.conf). – (Optional Step) + + MAILADDR root + +In this case, all alerts that the RAID monitoring daemon collects will be sent to the local root account’s mail box. One of such alerts looks like the following. + +**Note**: This event is related to the example in STEP 5, where a device was marked as faulty and the spare device was automatically built into the array by mdadm. Thus, we “ran out” of healthy spare devices and we got the alert. + +![RAID Monitoring Alerts](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Monitoring-Alerts.png) + +RAID Monitoring Alerts + +#### Understanding RAID Levels #### + +**RAID 0** + +The total array size is n times the size of the smallest partition, where n is the number of independent disks in the array (you will need at least two drives). Run the following command to assemble a RAID 0 array using partitions /dev/sdb1 and /dev/sdc1. 
+ + # mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1 + +Common uses: Setups that support real-time applications where performance is more important than fault-tolerance. + +**RAID 1 (aka Mirroring)** + +The total array size equals the size of the smallest partition (you will need at least two drives). Run the following command to assemble a RAID 1 array using partitions /dev/sdb1 and /dev/sdc1. + + # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1 + +Common uses: Installation of the operating system or important subdirectories, such as /home. + +**RAID 5 (aka drives with Parity)** + +The total array size will be (n – 1) times the size of the smallest partition. The “lost” space in (n-1) is used for parity (redundancy) calculation (you will need at least three drives). + +Note that you can specify a spare device (/dev/sde1 in this case) to replace a faulty part when an issue occurs. Run the following command to assemble a RAID 5 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1 as spare. + + # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1 + +Common uses: Web and file servers. + +**RAID 6 (aka drives with double Parity** + +The total array size will be (n*s)-2*s, where n is the number of independent disks in the array and s is the size of the smallest disk. Note that you can specify a spare device (/dev/sdf1 in this case) to replace a faulty part when an issue occurs. + +Run the following command to assemble a RAID 6 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, and /dev/sdf1 as spare. + + # mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde --spare-devices=1 /dev/sdf1 + +Common uses: File and backup servers with large capacity and high availability requirements. + +**RAID 1+0 (aka stripe of mirrors)** + +The total array size is computed based on the formulas for RAID 0 and RAID 1, since RAID 1+0 is a combination of both. First, calculate the size of each mirror and then the size of the stripe. + +Note that you can specify a spare device (/dev/sdf1 in this case) to replace a faulty part when an issue occurs. Run the following command to assemble a RAID 1+0 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, and /dev/sdf1 as spare. + + # mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 --spare-devices=1 /dev/sdf1 + +Common uses: Database and application servers that require fast I/O operations. + +#### Creating and Managing System Backups #### + +It never hurts to remember that RAID with all its bounties IS NOT A REPLACEMENT FOR BACKUPS! Write it 1000 times on the chalkboard if you need to, but make sure you keep that idea in mind at all times. Before we begin, we must note that there is no one-size-fits-all solution for system backups, but here are some things that you do need to take into account while planning a backup strategy. + +- What do you use your system for? (Desktop or server? If the latter case applies, what are the most critical services – whose configuration would be a real pain to lose?) +- How often do you need to take backups of your system? +- What is the data (e.g. files / directories / database dumps) that you want to backup? You may also want to consider if you really need to backup huge files (such as audio or video files). +- Where (meaning physical place and media) will those backups be stored? 
+ +**Backing Up Your Data** + +Method 1: Backup entire drives with dd command. You can either back up an entire hard disk or a partition by creating an exact image at any point in time. Note that this works best when the device is offline, meaning it’s not mounted and there are no processes accessing it for I/O operations. + +The downside of this backup approach is that the image will have the same size as the disk or partition, even when the actual data occupies a small percentage of it. For example, if you want to image a partition of 20 GB that is only 10% full, the image file will still be 20 GB in size. In other words, it’s not only the actual data that gets backed up, but the entire partition itself. You may consider using this method if you need exact backups of your devices. + +**Creating an image file out of an existing device** + + # dd if=/dev/sda of=/system_images/sda.img + OR + --------------------- Alternatively, you can compress the image file --------------------- + # dd if=/dev/sda | gzip -c > /system_images/sda.img.gz + +**Restoring the backup from the image file** + + # dd if=/system_images/sda.img of=/dev/sda + OR + + --------------------- Depending on your choice while creating the image --------------------- + gzip -dc /system_images/sda.img.gz | dd of=/dev/sda + +Method 2: Backup certain files / directories with tar command – already covered in [Part 3][3] of this series. You may consider using this method if you need to keep copies of specific files and directories (configuration files, users’ home directories, and so on). + +Method 3: Synchronize files with rsync command. Rsync is a versatile remote (and local) file-copying tool. If you need to backup and synchronize your files to/from network drives, rsync is a go. + +Whether you’re synchronizing two local directories or local < — > remote directories mounted on the local filesystem, the basic syntax is the same. +Synchronizing two local directories or local < — > remote directories mounted on the local filesystem + + # rsync -av source_directory destination directory + +Where, -a recurse into subdirectories (if they exist), preserve symbolic links, timestamps, permissions, and original owner / group and -v verbose. + +![rsync Synchronizing Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronizing-Files.png) + +rsync Synchronizing Files + +In addition, if you want to increase the security of the data transfer over the wire, you can use ssh over rsync. + +**Synchronizing local → remote directories over ssh** + + # rsync -avzhe ssh backups root@remote_host:/remote_directory/ + +This example will synchronize the backups directory on the local host with the contents of /root/remote_directory on the remote host. + +Where the -h option shows file sizes in human-readable format, and the -e flag is used to indicate a ssh connection. + +![rsync Synchronize Remote Files](http://www.tecmint.com/wp-content/uploads/2014/10/rsync-synchronize-Remote-Files.png) + +rsync Synchronize Remote Files + +Synchronizing remote → local directories over ssh. + +In this case, switch the source and destination directories from the previous example. + + # rsync -avzhe ssh root@remote_host:/remote_directory/ backups + +Please note that these are only 3 examples (most frequent cases you’re likely to run into) of the use of rsync. For more examples and usages of rsync commands can be found at the following article. 
+ +- Read Also: [10 rsync Commands to Sync Files in Linux][4] + +### Summary ### + +As a sysadmin, you need to ensure that your systems perform as good as possible. If you’re well prepared, and if the integrity of your data is well supported by a storage technology such as RAID and regular system backups, you’ll be safe. + +If you have questions, comments, or further ideas on how this article can be improved, feel free to speak out below. In addition, please consider sharing this series through your social network profiles. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/creating-and-managing-raid-backups-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[2]:http://www.tecmint.com/create-partitions-and-filesystems-in-linux/ +[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/ +[4]:http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/ \ No newline at end of file diff --git a/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md new file mode 100644 index 0000000000..4b7cdf9fe2 --- /dev/null +++ b/sources/tech/LFCS/Part 7 - LFCS--Managing System Startup Process and Services SysVinit Systemd and Upstart.md @@ -0,0 +1,369 @@ +Translating by Xuanwo + +Part 7 - LFCS: Managing System Startup Process and Services (SysVinit, Systemd and Upstart) +================================================================================ +A couple of months ago, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, an exciting new program whose aim is allowing individuals from all ends of the world to get certified in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-hand problem-finding and analysis, plus the ability to decide when to raise issues to engineering teams. + +![Linux Foundation Certified Sysadmin – Part 7](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-7.png) + +Linux Foundation Certified Sysadmin – Part 7 + +The following video describes an brief introduction to The Linux Foundation Certification Program. + +注:youtube 视频 + + +This post is Part 7 of a 10-tutorial series, here in this part, we will explain how to Manage Linux System Startup Process and Services, that are required for the LFCS certification exam. + +### Managing the Linux Startup Process ### + +The boot process of a Linux system consists of several phases, each represented by a different component. The following diagram briefly summarizes the boot process and shows all the main components involved. + +![Linux Boot Process](http://www.tecmint.com/wp-content/uploads/2014/10/Linux-Boot-Process.png) + +Linux Boot Process + +When you press the Power button on your machine, the firmware that is stored in a EEPROM chip in the motherboard initializes the POST (Power-On Self Test) to check on the state of the system’s hardware resources. 
When the POST is finished, the firmware then searches and loads the 1st stage boot loader, located in the MBR or in the EFI partition of the first available disk, and gives control to it. + +#### MBR Method #### + +The MBR is located in the first sector of the disk marked as bootable in the BIOS settings and is 512 bytes in size. + +- First 446 bytes: The bootloader contains both executable code and error message text. +- Next 64 bytes: The Partition table contains a record for each of four partitions (primary or extended). Among other things, each record indicates the status (active / not active), size, and start / end sectors of each partition. +- Last 2 bytes: The magic number serves as a validation check of the MBR. + +The following command performs a backup of the MBR (in this example, /dev/sda is the first hard disk). The resulting file, mbr.bkp can come in handy should the partition table become corrupt, for example, rendering the system unbootable. + +Of course, in order to use it later if the need arises, we will need to save it and store it somewhere else (like a USB drive, for example). That file will help us restore the MBR and will get us going once again if and only if we do not change the hard drive layout in the meanwhile. + +**Backup MBR** + + # dd if=/dev/sda of=mbr.bkp bs=512 count=1 + +![Backup MBR in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Backup-MBR-in-Linux.png) + +Backup MBR in Linux + +**Restoring MBR** + + # dd if=mbr.bkp of=/dev/sda bs=512 count=1 + +![Restore MBR in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Restore-MBR-in-Linux.png) + +Restore MBR in Linux + +#### EFI/UEFI Method #### + +For systems using the EFI/UEFI method, the UEFI firmware reads its settings to determine which UEFI application is to be launched and from where (i.e., in which disk and partition the EFI partition is located). + +Next, the 2nd stage boot loader (aka boot manager) is loaded and run. GRUB [GRand Unified Boot] is the most frequently used boot manager in Linux. One of two distinct versions can be found on most systems used today. + +- GRUB legacy configuration file: /boot/grub/menu.lst (older distributions, not supported by EFI/UEFI firmwares). +- GRUB2 configuration file: most likely, /etc/default/grub. + +Although the objectives of the LFCS exam do not explicitly request knowledge about GRUB internals, if you’re brave and can afford to mess up your system (you may want to try it first on a virtual machine, just in case), you need to run. + + # update-grub + +As root after modifying GRUB’s configuration in order to apply the changes. + +Basically, GRUB loads the default kernel and the initrd or initramfs image. In few words, initrd or initramfs help to perform the hardware detection, the kernel module loading and the device discovery necessary to get the real root filesystem mounted. + +Once the real root filesystem is up, the kernel executes the system and service manager (init or systemd, whose process identification or PID is always 1) to begin the normal user-space boot process in order to present a user interface. + +Both init and systemd are daemons (background processes) that manage other daemons, as the first service to start (during boot) and the last service to terminate (during shutdown). 
+ +![Systemd and Init](http://www.tecmint.com/wp-content/uploads/2014/10/systemd-and-init.png) + +Systemd and Init + +### Starting Services (SysVinit) ### + +The concept of runlevels in Linux specifies different ways to use a system by controlling which services are running. In other words, a runlevel controls what tasks can be accomplished in the current execution state = runlevel (and which ones cannot). + +Traditionally, this startup process was performed based on conventions that originated with System V UNIX, with the system passing executing collections of scripts that start and stop services as the machine entered a specific runlevel (which, in other words, is a different mode of running the system). + +Within each runlevel, individual services can be set to run, or to be shut down if running. Latest versions of some major distributions are moving away from the System V standard in favour of a rather new service and system manager called systemd (which stands for system daemon), but usually support sysv commands for compatibility purposes. This means that you can run most of the well-known sysv init tools in a systemd-based distribution. + +- Read Also: [Why ‘systemd’ replaces ‘init’ in Linux][1] + +Besides starting the system process, init looks to the /etc/inittab file to decide what runlevel must be entered. + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Runlevel Description
0 Halt the system. Runlevel 0 is a special transitional state used to shutdown the system quickly.
1 Also aliased to s, or S, this runlevel is sometimes called maintenance mode. What services, if any, are started at this runlevel varies by distribution. It’s typically used for low-level system maintenance that may be impaired by normal system operation.
2 Multiuser. On Debian systems and derivatives, this is the default runlevel, and includes -if available- a graphical login. On Red-Hat based systems, this is multiuser mode without networking.
3 On Red-Hat based systems, this is the default multiuser mode, which runs everything except the graphical environment. This runlevel and levels 4 and 5 usually are not used on Debian-based systems.
4 Typically unused by default and therefore available for customization.
5 On Red-Hat based systems, full multiuser mode with GUI login. This runlevel is like level 3, but with a GUI login available.
6 Reboot the system.
+ +To switch between runlevels, we can simply issue a runlevel change using the init command: init N (where N is one of the runlevels listed above). Please note that this is not the recommended way of taking a running system to a different runlevel because it gives no warning to existing logged-in users (thus causing them to lose work and processes to terminate abnormally). + +Instead, the shutdown command should be used to restart the system (which first sends a warning message to all logged-in users and blocks any further logins; it then signals init to switch runlevels); however, the default runlevel (the one the system will boot to) must be edited in the /etc/inittab file first. + +For that reason, follow these steps to properly switch between runlevels, As root, look for the following line in /etc/inittab. + + id:2:initdefault: + +and change the number 2 for the desired runlevel with your preferred text editor, such as vim (described in [How to use vi/vim editor in Linux – Part 2][2] of this series). + +Next, run as root. + + # shutdown -r now + +That last command will restart the system, causing it to start in the specified runlevel during next boot, and will run the scripts located in the /etc/rc[runlevel].d directory in order to decide which services should be started and which ones should not. For example, for runlevel 2 in the following system. + +![Change Runlevels in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Change-Runlevels-in-Linux.jpeg) + +Change Runlevels in Linux + +#### Manage Services using chkconfig #### + +To enable or disable system services on boot, we will use [chkconfig command][3] in CentOS / openSUSE and sysv-rc-conf in Debian and derivatives. This tool can also show us what is the preconfigured state of a service for a particular runlevel. + +- Read Also: [How to Stop and Disable Unwanted Services in Linux][4] + +Listing the runlevel configuration for a service. + + # chkconfig --list [service name] + # chkconfig --list postfix + # chkconfig --list mysqld + +![Listing Runlevel Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Listing-Runlevel-Configuration.png) + +Listing Runlevel Configuration + +In the above image we can see that postfix is set to start when the system enters runlevels 2 through 5, whereas mysqld will be running by default for runlevels 2 through 4. Now suppose that this is not the expected behaviour. + +For example, we need to turn on mysqld for runlevel 5 as well, and turn off postfix for runlevels 4 and 5. Here’s what we would do in each case (run the following commands as root). + +**Enabling a service for a particular runlevel** + + # chkconfig --level [level(s)] service on + # chkconfig --level 5 mysqld on + +**Disabling a service for particular runlevels** + + # chkconfig --level [level(s)] service off + # chkconfig --level 45 postfix off + +![Enable Disable Services in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Disable-Services.png) + +Enable Disable Services + +We will now perform similar tasks in a Debian-based system using sysv-rc-conf. + +#### Manage Services using sysv-rc-conf #### + +Configuring a service to start automatically on a specific runlevel and prevent it from starting on all others. + +1. Let’s use the following command to see what are the runlevels where mdadm is configured to start. 
+ + # ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm' + +![Check Runlevel of Service Running](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Service-Runlevel.png) + +Check Runlevel of Service Running + +2. We will use sysv-rc-conf to prevent mdadm from starting on all runlevels except 2. Just check or uncheck (with the space bar) as desired (you can move up, down, left, and right with the arrow keys). + + # sysv-rc-conf + +![SysV Runlevel Config](http://www.tecmint.com/wp-content/uploads/2014/10/SysV-Runlevel-Config.png) + +SysV Runlevel Config + +Then press q to quit. + +3. We will restart the system and run again the command from STEP 1. + + # ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm' + +![Verify Service Runlevel](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Service-Runlevel.png) + +Verify Service Runlevel + +In the above image we can see that mdadm is configured to start only on runlevel 2. + +### What About systemd? ### + +systemd is another service and system manager that is being adopted by several major Linux distributions. It aims to allow more processing to be done in parallel during system startup (unlike sysvinit, which always tends to be slower because it starts processes one at a time, checks whether one depends on another, and waits for daemons to launch so more services can start), and to serve as a dynamic resource management to a running system. + +Thus, services are started when needed (to avoid consuming system resources) instead of being launched without a solid reason during boot. + +Viewing the status of all the processes running on your system, both systemd native and SysV services, run the following command. + + # systemctl + +![Check All Running Processes in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-All-Running-Processes.png) + +Check All Running Processes + +The LOAD column shows whether the unit definition (refer to the UNIT column, which shows the service or anything maintained by systemd) was properly loaded, while the ACTIVE and SUB columns show the current status of such unit. +Displaying information about the current status of a service + +When the ACTIVE column indicates that an unit’s status is other than active, we can check what happened using. + + # systemctl status [unit] + +For example, in the image above, media-samba.mount is in failed state. Let’s run. + + # systemctl status media-samba.mount + +![Check Linux Service Status](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Service-Status.png) + +Check Service Status + +We can see that media-samba.mount failed because the mount process on host dev1 was unable to find the network share at //192.168.0.10/gacanepa. + +### Starting or Stopping Services ### + +Once the network share //192.168.0.10/gacanepa becomes available, let’s try to start, then stop, and finally restart the unit media-samba.mount. After performing each action, let’s run systemctl status media-samba.mount to check on its status. + + # systemctl start media-samba.mount + # systemctl status media-samba.mount + # systemctl stop media-samba.mount + # systemctl restart media-samba.mount + # systemctl status media-samba.mount + +![Starting Stoping Services](http://www.tecmint.com/wp-content/uploads/2014/10/Starting-Stoping-Service.jpeg) + +Starting Stoping Services + +**Enabling or disabling a service to start during boot** + +Under systemd you can enable or disable a service when it boots. 
+ + # systemctl enable [service] # enable a service + # systemctl disable [service] # prevent a service from starting at boot + +The process of enabling or disabling a service to start automatically on boot consists in adding or removing symbolic links in the /etc/systemd/system/multi-user.target.wants directory. + +![Enabling Disabling Services](http://www.tecmint.com/wp-content/uploads/2014/10/Enabling-Disabling-Services.jpeg) + +Enabling Disabling Services + +Alternatively, you can find out a service’s current status (enabled or disabled) with the command. + + # systemctl is-enabled [service] + +For example, + + # systemctl is-enabled postfix.service + +In addition, you can reboot or shutdown the system with. + + # systemctl reboot + # systemctl shutdown + +### Upstart ### + +Upstart is an event-based replacement for the /sbin/init daemon and was born out of the need for starting services only, when they are needed (also supervising them while they are running), and handling events as they occur, thus surpassing the classic, dependency-based sysvinit system. + +It was originally developed for the Ubuntu distribution, but is used in Red Hat Enterprise Linux 6.0. Though it was intended to be suitable for deployment in all Linux distributions as a replacement for sysvinit, in time it was overshadowed by systemd. On February 14, 2014, Mark Shuttleworth (founder of Canonical Ltd.) announced that future releases of Ubuntu would use systemd as the default init daemon. + +Because the SysV startup script for system has been so common for so long, a large number of software packages include SysV startup scripts. To accommodate such packages, Upstart provides a compatibility mode: It runs SysV startup scripts in the usual locations (/etc/rc.d/rc?.d, /etc/init.d/rc?.d, /etc/rc?.d, or a similar location). Thus, if we install a package that doesn’t yet include an Upstart configuration script, it should still launch in the usual way. + +Furthermore, if we have installed utilities such as [chkconfig][5], you should be able to use them to manage your SysV-based services just as we would on sysvinit based systems. + +Upstart scripts also support starting or stopping services based on a wider variety of actions than do SysV startup scripts; for example, Upstart can launch a service whenever a particular hardware device is attached. + +A system that uses Upstart and its native scripts exclusively replaces the /etc/inittab file and the runlevel-specific SysV startup script directories with .conf scripts in the /etc/init directory. + +These *.conf scripts (also known as job definitions) generally consists of the following: + +- Description of the process. +- Runlevels where the process should run or events that should trigger it. +- Runlevels where process should be stopped or events that should stop it. +- Options. +- Command to launch the process. 
+ +For example, + + # My test service - Upstart script demo description "Here goes the description of 'My test service'" author "Dave Null " + # Stanzas + + # + # Stanzas define when and how a process is started and stopped + # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn + # When to start the service + start on runlevel [2345] + # When to stop the service + stop on runlevel [016] + # Automatically restart process in case of crash + respawn + # Specify working directory + chdir /home/dave/myfiles + # Specify the process/command (add arguments if needed) to run + exec bash backup.sh arg1 arg2 + +To apply changes, you will need to tell upstart to reload its configuration. + + # initctl reload-configuration + +Then start your job by typing the following command. + + $ sudo start yourjobname + +Where yourjobname is the name of the job that was added earlier with the yourjobname.conf script. + +A more complete and detailed reference guide for Upstart is available in the project’s web site under the menu “[Cookbook][6]”. + +### Summary ### + +A knowledge of the Linux boot process is necessary to help you with troubleshooting tasks as well as with adapting the computer’s performance and running services to your needs. + +In this article we have analyzed what happens from the moment when you press the Power switch to turn on the machine until you get a fully operational user interface. I hope you have learned reading it as much as I did while putting it together. Feel free to leave your comments or questions below. We always look forward to hearing from our readers! + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-boot-process-and-manage-services/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/systemd-replaces-init-in-linux/ +[2]:http://www.tecmint.com/vi-editor-usage/ +[3]:http://www.tecmint.com/chkconfig-command-examples/ +[4]:http://www.tecmint.com/remove-unwanted-services-from-linux/ +[5]:http://www.tecmint.com/chkconfig-command-examples/ +[6]:http://upstart.ubuntu.com/cookbook/ \ No newline at end of file diff --git a/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md b/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md new file mode 100644 index 0000000000..50f39ee2d9 --- /dev/null +++ b/sources/tech/LFCS/Part 8 - LFCS--Managing Users and Groups File Permissions and Attributes and Enabling sudo Access on Accounts.md @@ -0,0 +1,332 @@ +Translating by Xuanwo + +Part 8 - LFCS: Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts +================================================================================ +Last August, the Linux Foundation started the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is to allow individuals everywhere and anywhere take an exam in order to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus intelligent decision-making to be able to decide when it’s necessary to escalate 
issues to higher level support teams. + +![Linux Users and Groups Management](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-8.png) + +Linux Foundation Certified Sysadmin – Part 8 + +Please have a quick look at the following video that describes an introduction to the Linux Foundation Certification Program. + +注:youtube视频 + + +This article is Part 8 of a 10-tutorial long series, here in this section, we will guide you on how to manage users and groups permissions in Linux system, that are required for the LFCS certification exam. + +Since Linux is a multi-user operating system (in that it allows multiple users on different computers or terminals to access a single system), you will need to know how to perform effective user management: how to add, edit, suspend, or delete user accounts, along with granting them the necessary permissions to do their assigned tasks. + +### Adding User Accounts ### + +To add a new user account, you can run either of the following two commands as root. + + # adduser [new_account] + # useradd [new_account] + +When a new user account is added to the system, the following operations are performed. + +1. His/her home directory is created (/home/username by default). + +2. The following hidden files are copied into the user’s home directory, and will be used to provide environment variables for his/her user session. + + .bash_logout + .bash_profile + .bashrc + +3. A mail spool is created for the user at /var/spool/mail/username. + +4. A group is created and given the same name as the new user account. + +**Understanding /etc/passwd** + +The full account information is stored in the /etc/passwd file. This file contains a record per system user account and has the following format (fields are delimited by a colon). + + [username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell] + +- Fields [username] and [Comment] are self explanatory. +- The x in the second field indicates that the account is protected by a shadowed password (in /etc/shadow), which is needed to logon as [username]. +- The [UID] and [GID] fields are integers that represent the User IDentification and the primary Group IDentification to which [username] belongs, respectively. +- The [Home directory] indicates the absolute path to [username]’s home directory, and +- The [Default shell] is the shell that will be made available to this user when he or she logins the system. + +**Understanding /etc/group** + +Group information is stored in the /etc/group file. Each record has the following format. + + [Group name]:[Group password]:[GID]:[Group members] + +- [Group name] is the name of group. +- An x in [Group password] indicates group passwords are not being used. +- [GID]: same as in /etc/passwd. +- [Group members]: a comma separated list of users who are members of [Group name]. + +![Add User Accounts in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-user-accounts.png) + +Add User Accounts + +After adding an account, you can edit the following information (to name a few fields) using the usermod command, whose basic syntax of usermod is as follows. + + # usermod [options] [username] + +**Setting the expiry date for an account** + +Use the –expiredate flag followed by a date in YYYY-MM-DD format. + + # usermod --expiredate 2014-10-30 tecmint + +**Adding the user to supplementary groups** + +Use the combined -aG, or –append –groups options, followed by a comma separated list of groups. 
+ + # usermod --append --groups root,users tecmint + +**Changing the default location of the user’s home directory** + +Use the -d, or –home options, followed by the absolute path to the new home directory. + + # usermod --home /tmp tecmint + +**Changing the shell the user will use by default** + +Use –shell, followed by the path to the new shell. + + # usermod --shell /bin/sh tecmint + +**Displaying the groups an user is a member of** + + # groups tecmint + # id tecmint + +Now let’s execute all the above commands in one go. + + # usermod --expiredate 2014-10-30 --append --groups root,users --home /tmp --shell /bin/sh tecmint + +![usermod Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/usermod-command-examples.png) + +usermod Command Examples + +Read Also: + +- [15 useradd Command Examples in Linux][1] +- [15 usermod Command Examples in Linux][2] + +For existing accounts, we can also do the following. + +**Disabling account by locking password** + +Use the -L (uppercase L) or the –lock option to lock a user’s password. + + # usermod --lock tecmint + +**Unlocking user password** + +Use the –u or the –unlock option to unlock a user’s password that was previously blocked. + + # usermod --unlock tecmint + +![Lock User in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/lock-user-in-linux.png) + +Lock User Accounts + +**Creating a new group for read and write access to files that need to be accessed by several users** + +Run the following series of commands to achieve the goal. + + # groupadd common_group # Add a new group + # chown :common_group common.txt # Change the group owner of common.txt to common_group + # usermod -aG common_group user1 # Add user1 to common_group + # usermod -aG common_group user2 # Add user2 to common_group + # usermod -aG common_group user3 # Add user3 to common_group + +**Deleting a group** + +You can delete a group with the following command. + + # groupdel [group_name] + +If there are files owned by group_name, they will not be deleted, but the group owner will be set to the GID of the group that was deleted. + +### Linux File Permissions ### + +Besides the basic read, write, and execute permissions that we discussed in [Setting File Attributes – Part 3][3] of this series, there are other less used (but not less important) permission settings, sometimes referred to as “special permissions”. + +Like the basic permissions discussed earlier, they are set using an octal file or through a letter (symbolic notation) that indicates the type of permission. +Deleting user accounts + +You can delete an account (along with its home directory, if it’s owned by the user, and all the files residing therein, and also the mail spool) using the userdel command with the –remove option. + + # userdel --remove [username] + +#### Group Management #### + +Every time a new user account is added to the system, a group with the same name is created with the username as its only member. Other users can be added to the group later. One of the purposes of groups is to implement a simple access control to files and other system resources by setting the right permissions on those resources. + +For example, suppose you have the following users. + +- user1 (primary group: user1) +- user2 (primary group: user2) +- user3 (primary group: user3) + +All of them need read and write access to a file called common.txt located somewhere on your local system, or maybe on a network share that user1 has created. 
You may be tempted to do something like, + + # chmod 660 common.txt + OR + # chmod u=rw,g=rw,o= common.txt [notice the space between the last equal sign and the file name] + +However, this will only provide read and write access to the owner of the file and to those users who are members of the group owner of the file (user1 in this case). Again, you may be tempted to add user2 and user3 to group user1, but that will also give them access to the rest of the files owned by user user1 and group user1. + +This is where groups come in handy, and here’s what you should do in a case like this. + +**Understanding Setuid** + +When the setuid permission is applied to an executable file, an user running the program inherits the effective privileges of the program’s owner. Since this approach can reasonably raise security concerns, the number of files with setuid permission must be kept to a minimum. You will likely find programs with this permission set when a system user needs to access a file owned by root. + +Summing up, it isn’t just that the user can execute the binary file, but also that he can do so with root’s privileges. For example, let’s check the permissions of /bin/passwd. This binary is used to change the password of an account, and modifies the /etc/shadow file. The superuser can change anyone’s password, but all other users should only be able to change their own. + +![passwd Command Examples](http://www.tecmint.com/wp-content/uploads/2014/10/passwd-command.png) + +passwd Command Examples + +Thus, any user should have permission to run /bin/passwd, but only root will be able to specify an account. Other users can only change their corresponding passwords. + +![Change User Password in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/change-user-password.png) + +Change User Password + +**Understanding Setgid** + +When the setgid bit is set, the effective GID of the real user becomes that of the group owner. Thus, any user can access a file under the privileges granted to the group owner of such file. In addition, when the setgid bit is set on a directory, newly created files inherit the same group as the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory. You will most likely use this approach whenever members of a certain group need access to all the files in a directory, regardless of the file owner’s primary group. + + # chmod g+s [filename] + +To set the setgid in octal form, prepend the number 2 to the current (or desired) basic permissions. + + # chmod 2755 [directory] + +**Setting the SETGID in a directory** + +![Add Setgid in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-setgid-to-directory.png) + +Add Setgid to Directory + +**Understanding Sticky Bit** + +When the “sticky bit” is set on files, Linux just ignores it, whereas for directories it has the effect of preventing users from deleting or even renaming the files it contains unless the user owns the directory, the file, or is root. + +# chmod o+t [directory] + +To set the sticky bit in octal form, prepend the number 1 to the current (or desired) basic permissions. + +# chmod 1755 [directory] + +Without the sticky bit, anyone able to write to the directory can delete or rename files. For that reason, the sticky bit is commonly found on directories, such as /tmp, that are world-writable. 
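+
+To tie the special permissions back to the shared-directory scenario described earlier, here is a minimal sketch that combines the setgid and sticky bits (the /srv/shared path is hypothetical; common_group is the group created above):
+
+    # mkdir /srv/shared                      # Hypothetical shared directory
+    # chown root:common_group /srv/shared    # Make common_group the group owner
+    # chmod 2770 /srv/shared                 # setgid bit: new files inherit common_group
+    # chmod +t /srv/shared                   # sticky bit: only a file's owner (or root) can delete it
+
+With this in place, any member of common_group can create files in /srv/shared that the rest of the group can read (and, umask permitting, write), but no member can delete or rename a file belonging to another.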
+ +![Add Stickybit in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/add-sticky-bit-to-directory.png) + +Add Stickybit to Directory + +### Special Linux File Attributes ### + +There are other attributes that enable further limits on the operations that are allowed on files. For example, prevent the file from being renamed, moved, deleted, or even modified. They are set with the [chattr command][4] and can be viewed using the lsattr tool, as follows. + + # chattr +i file1 + # chattr +a file2 + +After executing those two commands, file1 will be immutable (which means it cannot be moved, renamed, modified or deleted) whereas file2 will enter append-only mode (can only be open in append mode for writing). + +![Protect File from Deletion](http://www.tecmint.com/wp-content/uploads/2014/10/chattr-command.png) + +Chattr Command to Protect Files + +### Accessing the root Account and Using sudo ### + +One of the ways users can gain access to the root account is by typing. + + $ su + +and then entering root’s password. + +If authentication succeeds, you will be logged on as root with the current working directory as the same as you were before. If you want to be placed in root’s home directory instead, run. + + $ su - + +and then enter root’s password. + +![Enable sudo Access on Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Enable-Sudo-Access.png) + +Enable Sudo Access on Users + +The above procedure requires that a normal user knows root’s password, which poses a serious security risk. For that reason, the sysadmin can configure the sudo command to allow an ordinary user to execute commands as a different user (usually the superuser) in a very controlled and limited way. Thus, restrictions can be set on a user so as to enable him to run one or more specific privileged commands and no others. + +- Read Also: [Difference Between su and sudo User][5] + +To authenticate using sudo, the user uses his/her own password. After entering the command, we will be prompted for our password (not the superuser’s) and if the authentication succeeds (and if the user has been granted privileges to run the command), the specified command is carried out. + +To grant access to sudo, the system administrator must edit the /etc/sudoers file. It is recommended that this file is edited using the visudo command instead of opening it directly with a text editor. + + # visudo + +This opens the /etc/sudoers file using vim (you can follow the instructions given in [Install and Use vim as Editor – Part 2][6] of this series to edit the file). + +These are the most relevant lines. + + Defaults secure_path="/usr/sbin:/usr/bin:/sbin" + root ALL=(ALL) ALL + tecmint ALL=/bin/yum update + gacanepa ALL=NOPASSWD:/bin/updatedb + %admin ALL=(ALL) ALL + +Let’s take a closer look at them. + + Defaults secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin" + +This line lets you specify the directories that will be used for sudo, and is used to prevent using user-specific directories, which can harm the system. + +The next lines are used to specify permissions. + + root ALL=(ALL) ALL + +- The first ALL keyword indicates that this rule applies to all hosts. +- The second ALL indicates that the user in the first column can run commands with the privileges of any user. +- The third ALL means any command can be run. + + tecmint ALL=/bin/yum update + +If no user is specified after the = sign, sudo assumes the root user. In this case, user tecmint will be able to run yum update as root. 
+ + gacanepa ALL=NOPASSWD:/bin/updatedb + +The NOPASSWD directive allows user gacanepa to run /bin/updatedb without needing to enter his password. + + %admin ALL=(ALL) ALL + +The % sign indicates that this line applies to a group called “admin”. The meaning of the rest of the line is identical to that of an regular user. This means that members of the group “admin” can run all commands as any user on all hosts. + +To see what privileges are granted to you by sudo, use the “-l” option to list them. + +![Sudo Access Rules](http://www.tecmint.com/wp-content/uploads/2014/10/sudo-access-rules.png) + +Sudo Access Rules + +### Summary ### + +Effective user and file management skills are essential tools for any system administrator. In this article we have covered the basics and hope you can use it as a good starting to point to build upon. Feel free to leave your comments or questions below, and we’ll respond quickly. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/manage-users-and-groups-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/add-users-in-linux/ +[2]:http://www.tecmint.com/usermod-command-examples/ +[3]:http://www.tecmint.com/compress-files-and-finding-files-in-linux/ +[4]:http://www.tecmint.com/chattr-command-examples/ +[5]:http://www.tecmint.com/su-vs-sudo-and-how-to-configure-sudo-in-linux/ +[6]:http://www.tecmint.com/vi-editor-usage/ \ No newline at end of file diff --git a/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md new file mode 100644 index 0000000000..a363a50c09 --- /dev/null +++ b/sources/tech/LFCS/Part 9 - LFCS--Linux Package Management with Yum RPM Apt Dpkg Aptitude and Zypper.md @@ -0,0 +1,231 @@ +Translating by Xuanwo + +Part 9 - LFCS: Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper +================================================================================ +Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including finally issue escalation, when needed, to engineering support teams. + +![Linux Package Management](http://www.tecmint.com/wp-content/uploads/2014/11/lfcs-Part-9.png) + +Linux Foundation Certified Sysadmin – Part 9 + +Watch the following video that explains about the Linux Foundation Certification Program. + +注:youtube 视频 + + +This article is a Part 9 of 10-tutorial long series, today in this article we will guide you about Linux Package Management, that are required for the LFCS certification exam. + +### Package Management ### + +In few words, package management is a method of installing and maintaining (which includes updating and probably removing as well) software on the system. 
+ +In the early days of Linux, programs were only distributed as source code, along with the required man pages, the necessary configuration files, and more. Nowadays, most Linux distributors use by default pre-built programs or sets of programs called packages, which are presented to users ready for installation on that distribution. However, one of the wonders of Linux is still the possibility to obtain source code of a program to be studied, improved, and compiled. + +**How package management systems work** + +If a certain package requires a certain resource such as a shared library, or another package, it is said to have a dependency. All modern package management systems provide some method of dependency resolution to ensure that when a package is installed, all of its dependencies are installed as well. + +**Packaging Systems** + +Almost all the software that is installed on a modern Linux system will be found on the Internet. It can either be provided by the distribution vendor through central repositories (which can contain several thousands of packages, each of which has been specifically built, tested, and maintained for the distribution) or be available in source code that can be downloaded and installed manually. + +Because different distribution families use different packaging systems (Debian: *.deb / CentOS: *.rpm / openSUSE: *.rpm built specially for openSUSE), a package intended for one distribution will not be compatible with another distribution. However, most distributions are likely to fall into one of the three distribution families covered by the LFCS certification. + +**High and low-level package tools** + +In order to perform the task of package management effectively, you need to be aware that you will have two types of available utilities: low-level tools (which handle in the backend the actual installation, upgrade, and removal of package files), and high-level tools (which are in charge of ensuring that the tasks of dependency resolution and metadata searching -”data about the data”- are performed). + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| DISTRIBUTION           | LOW-LEVEL TOOL | HIGH-LEVEL TOOL    |
+|------------------------|----------------|--------------------|
+| Debian and derivatives | dpkg           | apt-get / aptitude |
+| CentOS                 | rpm            | yum                |
+| openSUSE               | rpm            | zypper             |
+ +Let us see the descrption of the low-level and high-level tools. + +dpkg is a low-level package manager for Debian-based systems. It can install, remove, provide information about and build *.deb packages but it can’t automatically download and install their corresponding dependencies. + +- Read More: [15 dpkg Command Examples][1] + +apt-get is a high-level package manager for Debian and derivatives, and provides a simple way to retrieve and install packages, including dependency resolution, from multiple sources using the command line. Unlike dpkg, apt-get does not work directly with *.deb files, but with the package proper name. + +- Read More: [25 apt-get Command Examples][2] + +aptitude is another high-level package manager for Debian-based systems, and can be used to perform management tasks (installing, upgrading, and removing packages, also handling dependency resolution automatically) in a fast and easy way. It provides the same functionality as apt-get and additional ones, such as offering access to several versions of a package. + +rpm is the package management system used by Linux Standard Base (LSB)-compliant distributions for low-level handling of packages. Just like dpkg, it can query, install, verify, upgrade, and remove packages, and is more frequently used by Fedora-based distributions, such as RHEL and CentOS. + +- Read More: [20 rpm Command Examples][3] + +yum adds the functionality of automatic updates and package management with dependency management to RPM-based systems. As a high-level tool, like apt-get or aptitude, yum works with repositories. + +- Read More: [20 yum Command Examples][4] +- +### Common Usage of Low-Level Tools ### + +The most frequent tasks that you will do with low level tools are as follows: + +**1. Installing a package from a compiled (*.deb or *.rpm) file** + +The downside of this installation method is that no dependency resolution is provided. You will most likely choose to install a package from a compiled file when such package is not available in the distribution’s repositories and therefore cannot be downloaded and installed through a high-level tool. Since low-level tools do not perform dependency resolution, they will exit with an error if we try to install a package with unmet dependencies. + + # dpkg -i file.deb [Debian and derivative] + # rpm -i file.rpm [CentOS / openSUSE] + +**Note**: Do not attempt to install on CentOS a *.rpm file that was built for openSUSE, or vice-versa! + +**2. Upgrading a package from a compiled file** + +Again, you will only upgrade an installed package manually when it is not available in the central repositories. + + # dpkg -i file.deb [Debian and derivative] + # rpm -U file.rpm [CentOS / openSUSE] + +**3. Listing installed packages** + +When you first get your hands on an already working system, chances are you’ll want to know what packages are installed. + + # dpkg -l [Debian and derivative] + # rpm -qa [CentOS / openSUSE] + +If you want to know whether a specific package is installed, you can pipe the output of the above commands to grep, as explained in [manipulate files in Linux – Part 1][6] of this series. Suppose we need to verify if package mysql-common is installed on an Ubuntu system. + + # dpkg -l | grep mysql-common + +![Check Installed Packages in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Installed-Package.png) + +Check Installed Packages + +Another way to determine if a package is installed. 
+ + # dpkg --status package_name [Debian and derivative] + # rpm -q package_name [CentOS / openSUSE] + +For example, let’s find out whether package sysdig is installed on our system. + + # rpm -qa | grep sysdig + +![Check sysdig Package](http://www.tecmint.com/wp-content/uploads/2014/11/Check-sysdig-Package.png) + +Check sysdig Package + +**4. Finding out which package installed a file** + + # dpkg --search file_name + # rpm -qf file_name + +For example, which package installed pw_dict.hwm? + + # rpm -qf /usr/share/cracklib/pw_dict.hwm + +![Query File in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Query-File-in-Linux.png) + +Query File in Linux + +### Common Usage of High-Level Tools ### + +The most frequent tasks that you will do with high level tools are as follows. + +**1. Searching for a package** + +aptitude update will update the list of available packages, and aptitude search will perform the actual search for package_name. + + # aptitude update && aptitude search package_name + +In the search all option, yum will search for package_name not only in package names, but also in package descriptions. + + # yum search package_name + # yum search all package_name + # yum whatprovides “*/package_name” + +Let’s supposed we need a file whose name is sysdig. To know that package we will have to install, let’s run. + + # yum whatprovides “*/sysdig” + +![Check Package Description in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Description.png) + +Check Package Description + +whatprovides tells yum to search the package the will provide a file that matches the above regular expression. + + # zypper refresh && zypper search package_name [On openSUSE] + +**2. Installing a package from a repository** + +While installing a package, you may be prompted to confirm the installation after the package manager has resolved all dependencies. Note that running update or refresh (according to the package manager being used) is not strictly necessary, but keeping installed packages up to date is a good sysadmin practice for security and dependency reasons. + + # aptitude update && aptitude install package_name [Debian and derivatives] + # yum update && yum install package_name [CentOS] + # zypper refresh && zypper install package_name [openSUSE] + +**3. Removing a package** + +The option remove will uninstall the package but leaving configuration files intact, whereas purge will erase every trace of the program from your system. +# aptitude remove / purge package_name +# yum erase package_name + + ---Notice the minus sign in front of the package that will be uninstalled, openSUSE --- + + # zypper remove -package_name + +Most (if not all) package managers will prompt you, by default, if you’re sure about proceeding with the uninstallation before actually performing it. So read the onscreen messages carefully to avoid running into unnecessary trouble! + +**4. Displaying information about a package** + +The following command will display information about the birthday package. + + # aptitude show birthday + # yum info birthday + # zypper info birthday + +![Check Package Information in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Package-Information.png) + +Check Package Information + +### Summary ### + +Package management is something you just can’t sweep under the rug as a system administrator. You should be prepared to use the tools described in this article at a moment’s notice. 
Hope you find it useful in your preparation for the LFCS exam and for your daily tasks. Feel free to leave your comments or questions below. We will be more than glad to get back to you as soon as possible. + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/linux-package-management/ + +作者:[Gabriel Cánepa][a] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/dpkg-command-examples/ +[2]:http://www.tecmint.com/useful-basic-commands-of-apt-get-and-apt-cache-for-package-management/ +[3]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ +[4]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ +[5]:http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ \ No newline at end of file diff --git a/sources/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md b/sources/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md deleted file mode 100644 index 4acfe4366b..0000000000 --- a/sources/tech/RAID/Part 3 - Setting up RAID 1 (Mirroring) using 'Two Disks' in Linux.md +++ /dev/null @@ -1,213 +0,0 @@ -struggling 翻译中 -Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 -================================================================================ -RAID Mirroring means an exact clone (or mirror) of the same data writing to two drives. A minimum two number of disks are more required in an array to create RAID1 and it’s useful only, when read performance or reliability is more precise than the data storage capacity. - -![Create Raid1 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID1-in-Linux.jpeg) - -Setup Raid1 in Linux - -Mirrors are created to protect against data loss due to disk failure. Each disk in a mirror involves an exact copy of the data. When one disk fails, the same data can be retrieved from other functioning disk. However, the failed drive can be replaced from the running computer without any user interruption. - -### Features of RAID 1 ### - -- Mirror has Good Performance. -- 50% of space will be lost. Means if we have two disk with 500GB size total, it will be 1TB but in Mirroring it will only show us 500GB. -- No data loss in Mirroring if one disk fails, because we have the same content in both disks. -- Reading will be good than writing data to drive. - -#### Requirements #### - -Minimum Two number of disks are allowed to create RAID 1, but you can add more disks by using twice as 2, 4, 6, 8. To add more disks, your system must have a RAID physical adapter (hardware card). - -Here we’re using software raid not a Hardware raid, if your system has an inbuilt physical hardware raid card you can access it from it’s utility UI or using Ctrl+I key. - -Read Also: [Basic Concepts of RAID in Linux][1] - -#### My Server Setup #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.226 - Hostname : rd1.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - -This article will guide you through a step-by-step instructions on how to setup a software RAID 1 or Mirror using mdadm (creates and manages raid) on Linux Platform. Although the same instructions also works on other Linux distributions such as RedHat, CentOS, Fedora, etc. 
- -### Step 1: Installing Prerequisites and Examine Drives ### - -1. As I said above, we’re using mdadm utility for creating and managing RAID in Linux. So, let’s install the mdadm software package on Linux using yum or apt-get package manager tool. - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -2. Once ‘mdadm‘ package has been installed, we need to examine our disk drives whether there is already any raid configured using the following command. - - # mdadm -E /dev/sd[b-c] - -![Check RAID on Disks](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-on-Disks.png) - -Check RAID on Disks - -As you see from the above screen, that there is no any super-block detected yet, means no RAID defined. - -### Step 2: Drive Partitioning for RAID ### - -3. As I mentioned above, that we’re using minimum two partitions /dev/sdb and /dev/sdc for creating RAID1. Let’s create partitions on these two drives using ‘fdisk‘ command and change the type to raid during partition creation. - - # fdisk /dev/sdb - -Follow the below instructions - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. -- Next select the partition number as 1. -- Give the default full size by just pressing two times Enter key. -- Next press ‘p‘ to print the defined partition. -- Press ‘L‘ to list all available types. -- Type ‘t‘to choose the partitions. -- Choose ‘fd‘ for Linux raid auto and press Enter to apply. -- Then again use ‘p‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Create Disk Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Disk-Partitions.png) - -Create Disk Partitions - -After ‘/dev/sdb‘ partition has been created, next follow the same instructions to create new partition on /dev/sdc drive. - - # fdisk /dev/sdc - -![Create Second Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Second-Partitions.png) - -Create Second Partitions - -4. Once both the partitions are created successfully, verify the changes on both sdb & sdc drive using the same ‘mdadm‘ command and also confirm the RAID type as shown in the following screen grabs. - - # mdadm -E /dev/sd[b-c] - -![Verify Partitions Changes](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Partitions-Changes.png) - -Verify Partitions Changes - -![Check RAID Type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Type.png) - -Check RAID Type - -**Note**: As you see in the above picture, there is no any defined RAID on the sdb1 and sdc1 drives so far, that’s the reason we are getting as no super-blocks detected. - -### Step 3: Creating RAID1 Devices ### - -5. Next create RAID1 Device called ‘/dev/md0‘ using the following command and verity it. - - # mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1 - # cat /proc/mdstat - -![Create RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device.png) - -Create RAID Device - -6. Next check the raid devices type and raid array using following commands. 
- - # mdadm -E /dev/sd[b-c]1 - # mdadm --detail /dev/md0 - -![Check RAID Device type](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-type.png) - -Check RAID Device type - -![Check RAID Device Array](http://www.tecmint.com/wp-content/uploads/2014/10/Check-RAID-Device-Array.png) - -Check RAID Device Array - -From the above pictures, one can easily understand that raid1 have been created and using /dev/sdb1 and /dev/sdc1 partitions and also you can see the status as resyncing. - -### Step 4: Creating File System on RAID Device ### - -7. Create file system using ext4 for md0 and mount under /mnt/raid1. - - # mkfs.ext4 /dev/md0 - -![Create RAID Device Filesystem](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Device-Filesystem.png) - -Create RAID Device Filesystem - -8. Next, mount the newly created filesystem under ‘/mnt/raid1‘ and create some files and verify the contents under mount point. - - # mkdir /mnt/raid1 - # mount /dev/md0 /mnt/raid1/ - # touch /mnt/raid1/tecmint.txt - # echo "tecmint raid setups" > /mnt/raid1/tecmint.txt - -![Mount Raid Device](http://www.tecmint.com/wp-content/uploads/2014/10/Mount-RAID-Device.png) - -Mount Raid Device - -9. To auto-mount RAID1 on system reboot, you need to make an entry in fstab file. Open ‘/etc/fstab‘ file and add the following line at the bottom of the file. - - /dev/md0 /mnt/raid1 ext4 defaults 0 0 - -![Raid Automount Device](http://www.tecmint.com/wp-content/uploads/2014/10/RAID-Automount-Filesystem.png) - -Raid Automount Device - -10. Run ‘mount -a‘ to check whether there are any errors in fstab entry. - - # mount -av - -![Check Errors in fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-fstab.png) - -Check Errors in fstab - -11. Next, save the raid configuration manually to ‘mdadm.conf‘ file using the below command. - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid Configuration](http://www.tecmint.com/wp-content/uploads/2014/10/Save-Raid-Configuration.png) - -Save Raid Configuration - -The above configuration file is read by the system at the reboots and load the RAID devices. - -### Step 5: Verify Data After Disk Failure ### - -12. Our main purpose is, even after any of hard disk fail or crash our data needs to be available. Let’s see what will happen when any of disk disk is unavailable in array. - - # mdadm --detail /dev/md0 - -![Raid Device Verify](http://www.tecmint.com/wp-content/uploads/2014/10/Raid-Device-Verify.png) - -Raid Device Verify - -In the above image, we can see there are 2 devices available in our RAID and Active Devices are 2. Now let us see what will happen when a disk plugged out (removed sdc disk) or fails. - - # ls -l /dev | grep sd - # mdadm --detail /dev/md0 - -![Test RAID Devices](http://www.tecmint.com/wp-content/uploads/2014/10/Test-RAID-Devices.png) - -Test RAID Devices - -Now in the above image, you can see that one of our drive is lost. I unplugged one of the drive from my Virtual machine. Now let us check our precious data. - - # cd /mnt/raid1/ - # cat tecmint.txt - -![Verify RAID Data](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Data.png) - -Verify RAID Data - -Did you see our data is still available. From this we come to know the advantage of RAID 1 (mirror). In next article, we will see how to setup a RAID 5 striping with distributed Parity. Hope this helps you to understand how the RAID 1 (Mirror) Works. 
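-
-As a follow-up exercise, you can also rebuild the mirror after such a failure. The commands below are a minimal sketch, assuming the replacement disk shows up as /dev/sdc and has been given a partition of type ‘fd‘ exactly as in Step 2:
-
-    # mdadm --manage /dev/md0 --add /dev/sdc1    # Add the replacement partition to the mirror
-    # watch -n1 cat /proc/mdstat                 # Watch the array re-sync onto the new disk
-    # mdadm --detail /dev/md0                    # Both devices should show as active again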
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid1-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ \ No newline at end of file diff --git a/sources/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md b/sources/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md deleted file mode 100644 index dafdf514aa..0000000000 --- a/sources/tech/RAID/Part 4 - Creating RAID 5 (Striping with Distributed Parity) in Linux.md +++ /dev/null @@ -1,286 +0,0 @@ -struggling 翻译中 -Creating RAID 5 (Striping with Distributed Parity) in Linux – Part 4 -================================================================================ -In RAID 5, data strips across multiple drives with distributed parity. The striping with distributed parity means it will split the parity information and stripe data over the multiple disks, which will have good data redundancy. - -![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/setup-raid-5-in-linux.jpg) - -Setup Raid 5 in Linux - -For RAID Level it should have at least three hard drives or more. RAID 5 are being used in the large scale production environment where it’s cost effective and provide performance as well as redundancy. - -#### What is Parity? #### - -Parity is a simplest common method of detecting errors in data storage. Parity stores information in each disks, Let’s say we have 4 disks, in 4 disks one disk space will be split to all disks to store the parity information’s. If any one of the disks fails still we can get the data by rebuilding from parity information after replacing the failed disk. - -#### Pros and Cons of RAID 5 #### - -- Gives better performance -- Support Redundancy and Fault tolerance. -- Support hot spare options. -- Will loose a single disk capacity for using parity information. -- No data loss if a single disk fails. We can rebuilt from parity after replacing the failed disk. -- Suits for transaction oriented environment as the reading will be faster. -- Due to parity overhead, writing will be slow. -- Rebuild takes long time. - -#### Requirements #### - -Minimum 3 hard drives are required to create Raid 5, but you can add more disks, only if you’ve a dedicated hardware raid controller with multi ports. Here, we are using software RAID and ‘mdadm‘ package to create raid. - -mdadm is a package which allow us to configure and manage RAID devices in Linux. By default there is no configuration file is available for RAID, we must save the configuration file after creating and configuring RAID setup in separate file called mdadm.conf. - -Before moving further, I suggest you to go through the following articles for understanding the basics of RAID in Linux. 
- -- [Basic Concepts of RAID in Linux – Part 1][1] -- [Creating RAID 0 (Stripe) in Linux – Part 2][2] -- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3] - -#### My Server Setup #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.227 - Hostname : rd5.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd - -This article is a Part 4 of a 9-tutorial RAID series, here we are going to setup a software RAID 5 with distributed parity in Linux systems or servers using three 20GB disks named /dev/sdb, /dev/sdc and /dev/sdd. - -### Step 1: Installing mdadm and Verify Drives ### - -1. As we said earlier, that we’re using CentOS 6.5 Final release for this raid setup, but same steps can be followed for RAID setup in any Linux based distributions. - - # lsb_release -a - # ifconfig | grep inet - -![Setup Raid 5 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/CentOS-6.5-Summary.png) - -CentOS 6.5 Summary - -2. If you’re following our raid series, we assume that you’ve already installed ‘mdadm‘ package, if not, use the following command according to your Linux distribution to install the package. - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -3. After the ‘mdadm‘ package installation, let’s list the three 20GB disks which we have added in our system using ‘fdisk‘ command. - - # fdisk -l | grep sd - -![Install mdadm Tool in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Install-mdadm-Tool.png) - -Install mdadm Tool - -4. Now it’s time to examine the attached three drives for any existing RAID blocks on these drives using following command. - - # mdadm -E /dev/sd[b-d] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd - -![Examine Drives For Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-Drives-For-Raid.png) - -Examine Drives For Raid - -**Note**: From the above image illustrated that there is no any super-block detected yet. So, there is no RAID defined in all three drives. Let us start to create one now. - -### Step 2: Partitioning the Disks for RAID ### - -5. First and foremost, we have to partition the disks (/dev/sdb, /dev/sdc and /dev/sdd) before adding to a RAID, So let us define the partition using ‘fdisk’ command, before forwarding to the next steps. - - # fdisk /dev/sdb - # fdisk /dev/sdc - # fdisk /dev/sdd - -#### Create /dev/sdb Partition #### - -Please follow the below instructions to create partition on /dev/sdb drive. - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. Here we are choosing Primary because there is no partitions defined yet. -- Then choose ‘1‘ to be the first partition. By default it will be 1. -- Here for cylinder size we don’t have to choose the specified size because we need the whole partition for RAID so just Press Enter two times to choose the default full size. -- Next press ‘p‘ to print the created partition. -- Change the Type, If we need to know the every available types Press ‘L‘. -- Here, we are selecting ‘fd‘ as my type is RAID. -- Next press ‘p‘ to print the defined partition. -- Then again use ‘p‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition1.png) - -Create sdb Partition - -**Note**: We have to follow the steps mentioned above to create partitions for sdc & sdd drives too. 
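-
-If you would rather script this than repeat the interactive fdisk session on each remaining drive, a non-interactive equivalent using parted looks like the sketch below (it assumes the disks are blank and you are happy to wipe them):
-
-    # parted -s /dev/sdc mklabel msdos mkpart primary 0% 100% set 1 raid on
-    # parted -s /dev/sdd mklabel msdos mkpart primary 0% 100% set 1 raid on
-
-Either way, each drive must end up with a partition of type ‘fd‘ (Linux raid autodetect), as shown in the interactive sessions below.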
- -#### Create /dev/sdc Partition #### - -Now partition the sdc and sdd drives by following the steps given in the screenshot or you can follow above steps. - - # fdisk /dev/sdc - -![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition1.png) - -Create sdc Partition - -#### Create /dev/sdd Partition #### - - # fdisk /dev/sdd - -![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition1.png) - -Create sdd Partition - -6. After creating partitions, check for changes in all three drives sdb, sdc, & sdd. - - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd - - or - - # mdadm -E /dev/sd[b-c] - -![Check Partition Changes](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Changes-on-Partitions.png) - -Check Partition Changes - -**Note**: In the above pic. depict the type is fd i.e. for RAID. - -7. Now Check for the RAID blocks in newly created partitions. If no super-blocks detected, than we can move forward to create a new RAID 5 setup on these drives. - -![Check Raid on Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-Partitions.png) - -Check Raid on Partition - -### Step 3: Creating md device md0 ### - -8. Now create a Raid device ‘md0‘ (i.e. /dev/md0) and include raid level on all newly created partitions (sdb1, sdc1 and sdd1) using below command. - - # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 - - or - - # mdadm -C /dev/md0 -l=5 -n=3 /dev/sd[b-d]1 - -9. After creating raid device, check and verify the RAID, devices included and RAID Level from the mdstat output. - - # cat /proc/mdstat - -![Verify Raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Device.png) - -Verify Raid Device - -If you want to monitor the current building process, you can use ‘watch‘ command, just pass through the ‘cat /proc/mdstat‘ with watch command which will refresh screen every 1 second. - - # watch -n1 cat /proc/mdstat - -![Monitor Raid Process](http://www.tecmint.com/wp-content/uploads/2014/11/Monitor-Raid-Process.png) - -Monitor Raid 5 Process - -![Raid 5 Process Summary](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Process-Summary.png) - -Raid 5 Process Summary - -10. After creation of raid, Verify the raid devices using the following command. - - # mdadm -E /dev/sd[b-d]1 - -![Verify Raid Level](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Level.png) - -Verify Raid Level - -**Note**: The Output of the above command will be little long as it prints the information of all three drives. - -11. Next, verify the RAID array to assume that the devices which we’ve included in the RAID level are running and started to re-sync. - - # mdadm --detail /dev/md0 - -![Verify Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Array.png) - -Verify Raid Array - -### Step 4: Creating file system for md0 ### - -12. Create a file system for ‘md0‘ device using ext4 before mounting. - - # mkfs.ext4 /dev/md0 - -![Create md0 Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md0-Filesystem.png) - -Create md0 Filesystem - -13. Now create a directory under ‘/mnt‘ then mount the created filesystem under /mnt/raid5 and check the files under mount point, you will see lost+found directory. - - # mkdir /mnt/raid5 - # mount /dev/md0 /mnt/raid5/ - # ls -l /mnt/raid5/ - -14. Create few files under mount point /mnt/raid5 and append some text in any one of the file to verify the content. 
- - # touch /mnt/raid5/raid5_tecmint_{1..5} - # ls -l /mnt/raid5/ - # echo "tecmint raid setups" > /mnt/raid5/raid5_tecmint_1 - # cat /mnt/raid5/raid5_tecmint_1 - # cat /proc/mdstat - -![Mount Raid 5 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-Raid-Device.png) - -Mount Raid Device - -15. We need to add entry in fstab, else will not display our mount point after system reboot. To add an entry, we should edit the fstab file and append the following line as shown below. The mount point will differ according to your environment. - - # vim /etc/fstab - - /dev/md0 /mnt/raid5 ext4 defaults 0 0 - -![Raid 5 Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-Device-Automount.png) - -Raid 5 Automount - -16. Next, run ‘mount -av‘ command to check whether any errors in fstab entry. - - # mount -av - -![Check Fstab Errors](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Fstab-Errors.png) - -Check Fstab Errors - -### Step 5: Save Raid 5 Configuration ### - -17. As mentioned earlier in requirement section, by default RAID don’t have a config file. We have to save it manually. If this step is not followed RAID device will not be in md0, it will be in some other random number. - -So, we must have to save the configuration before system reboot. If the configuration is saved it will be loaded to the kernel during the system reboot and RAID will also gets loaded. - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid 5 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid-5-Configuration.png) - -Save Raid 5 Configuration - -Note: Saving the configuration will keep the RAID level stable in md0 device. - -### Step 6: Adding Spare Drives ### - -18. What the use of adding a spare drive? its very useful if we have a spare drive, if any one of the disk fails in our array, this spare drive will get active and rebuild the process and sync the data from other disk, so we can see a redundancy here. - -For more instructions on how to add spare drive and check Raid 5 fault tolerance, read #Step 6 and #Step 7 in the following article. - -- [Add Spare Drive to Raid 5 Setup][4] - -### Conclusion ### - -Here, in this article, we have seen how to setup a RAID 5 using three number of disks. Later in my upcoming articles, we will see how to troubleshoot when a disk fails in RAID 5 and how to replace for recovery. 
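-
-If you want a preview on a test machine in the meantime, the failure-and-replacement cycle follows the same mdadm pattern used throughout this series. Here is a rough sketch, assuming a prepared spare partition /dev/sde1 (hypothetical here):
-
-    # mdadm --manage /dev/md0 --fail /dev/sdc1      # Simulate a failure of one member
-    # mdadm --manage /dev/md0 --remove /dev/sdc1    # Remove the failed member from the array
-    # mdadm --manage /dev/md0 --add /dev/sde1       # Add the replacement; the parity rebuild starts
-    # watch -n1 cat /proc/mdstat                    # Monitor the rebuild progress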
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid-5-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ -[3]:http://www.tecmint.com/create-raid1-in-linux/ -[4]:http://www.tecmint.com/create-raid-6-in-linux/ \ No newline at end of file diff --git a/sources/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md b/sources/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md deleted file mode 100644 index ea1d5993c0..0000000000 --- a/sources/tech/RAID/Part 5 - Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux.md +++ /dev/null @@ -1,321 +0,0 @@ -struggling 翻译中 -Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux – Part 5 -================================================================================ -RAID 6 is upgraded version of RAID 5, where it has two distributed parity which provides fault tolerance even after two drives fails. Mission critical system still operational incase of two concurrent disks failures. It’s alike RAID 5, but provides more robust, because it uses one more disk for parity. - -In our earlier article, we’ve seen distributed parity in RAID 5, but in this article we will going to see RAID 6 with double distributed parity. Don’t expect extra performance than any other RAID, if so we have to install a dedicated RAID Controller too. Here in RAID 6 even if we loose our 2 disks we can get the data back by replacing a spare drive and build it from parity. - -![Setup RAID 6 in CentOS](http://www.tecmint.com/wp-content/uploads/2014/11/Setup-RAID-6-in-Linux.jpg) - -Setup RAID 6 in Linux - -To setup a RAID 6, minimum 4 numbers of disks or more in a set are required. RAID 6 have multiple disks even in some set it may be have some bunch of disks, while reading, it will read from all the drives, so reading would be faster whereas writing would be poor because it has to stripe over multiple disks. - -Now, many of us comes to conclusion, why we need to use RAID 6, when it doesn’t perform like any other RAID. Hmm… those who raise this question need to know that, if they need high fault tolerance choose RAID 6. In every higher environments with high availability for database, they use RAID 6 because database is the most important and need to be safe in any cost, also it can be useful for video streaming environments. - -#### Pros and Cons of RAID 6 #### - -- Performance are good. -- RAID 6 is expensive, as it requires two independent drives are used for parity functions. -- Will loose a two disks capacity for using parity information (double parity). -- No data loss, even after two disk fails. We can rebuilt from parity after replacing the failed disk. -- Reading will be better than RAID 5, because it reads from multiple disk, But writing performance will be very poor without dedicated RAID Controller. - -#### Requirements #### - -Minimum 4 numbers of disks are required to create a RAID 6. If you want to add more disks, you can, but you must have dedicated raid controller. In software RAID, we will won’t get better performance in RAID 6. So we need a physical RAID controller. 
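-
-As a quick capacity check before you begin: with double parity, the usable space of a RAID 6 array is (number of disks - 2) x disk size. For the four 20 GB disks used below, that works out to:
-
-    usable capacity = (4 - 2) x 20 GB = 40 GB    (out of 80 GB raw)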
- -Those who are new to RAID setup, we recommend to go through RAID articles below. - -- [Basic Concepts of RAID in Linux – Part 1][1] -- [Creating Software RAID 0 (Stripe) in Linux – Part 2][2] -- [Setting up RAID 1 (Mirroring) in Linux – Part 3][3] - -#### My Server Setup #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.228 - Hostname : rd6.tecmintlocal.com - Disk 1 [20GB] : /dev/sdb - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd - Disk 4 [20GB] : /dev/sde - -This article is a Part 5 of a 9-tutorial RAID series, here we are going to see how we can create and setup Software RAID 6 or Striping with Double Distributed Parity in Linux systems or servers using four 20GB disks named /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde. - -### Step 1: Installing mdadm Tool and Examine Drives ### - -1. If you’re following our last two Raid articles (Part 2 and Part 3), where we’ve already shown how to install ‘mdadm‘ tool. If you’re new to this article, let me explain that ‘mdadm‘ is a tool to create and manage Raid in Linux systems, let’s install the tool using following command according to your Linux distribution. - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -2. After installing the tool, now it’s time to verify the attached four drives that we are going to use for raid creation using the following ‘fdisk‘ command. - - # fdisk -l | grep sd - -![Check Hard Disk in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Linux-Disks.png) - -Check Disks in Linux - -3. Before creating a RAID drives, always examine our disk drives whether there is any RAID is already created on the disks. - - # mdadm -E /dev/sd[b-e] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde - -![Check Raid on Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Disk-Raid.png) - -Check Raid on Disk - -**Note**: In the above image depicts that there is no any super-block detected or no RAID is defined in four disk drives. We may move further to start creating RAID 6. - -### Step 2: Drive Partitioning for RAID 6 ### - -4. Now create partitions for raid on ‘/dev/sdb‘, ‘/dev/sdc‘, ‘/dev/sdd‘ and ‘/dev/sde‘ with the help of following fdisk command. Here, we will show how to create partition on sdb drive and later same steps to be followed for rest of the drives. - -**Create /dev/sdb Partition** - - # fdisk /dev/sdb - -Please follow the instructions as shown below for creating partition. - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. -- Next choose the partition number as 1. -- Define the default value by just pressing two times Enter key. -- Next press ‘P‘ to print the defined partition. -- Press ‘L‘ to list all available types. -- Type ‘t‘ to choose the partitions. -- Choose ‘fd‘ for Linux raid auto and press Enter to apply. -- Then again use ‘P‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. 
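-
-Before moving on to the next drive, you can sanity-check the result with a quick read-only listing (the screenshot below shows the interactive session itself):
-
-    # fdisk -l /dev/sdb    # The new partition should show Id 'fd' (Linux raid autodetect)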
- -![Create sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdb-Partition.png) - -Create /dev/sdb Partition - -**Create /dev/sdb Partition** - - # fdisk /dev/sdc - -![Create sdc Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdc-Partition.png) - -Create /dev/sdc Partition - -**Create /dev/sdd Partition** - - # fdisk /dev/sdd - -![Create sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sdd-Partition.png) - -Create /dev/sdd Partition - -**Create /dev/sde Partition** - - # fdisk /dev/sde - -![Create sde Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-sde-Partition.png) - -Create /dev/sde Partition - -5. After creating partitions, it’s always good habit to examine the drives for super-blocks. If super-blocks does not exist than we can go head to create a new RAID setup. - - # mdadm -E /dev/sd[b-e]1 - - - or - - # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 - -![Check Raid on New Partitions](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Partitions.png) - -Check Raid on New Partitions - -### Step 3: Creating md device (RAID) ### - -6. Now it’s time to create Raid device ‘md0‘ (i.e. /dev/md0) and apply raid level on all newly created partitions and confirm the raid using following commands. - - # mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 - # cat /proc/mdstat - -![Create Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Raid-6-Device.png) - -Create Raid 6 Device - -7. You can also check the current process of raid using watch command as shown in the screen grab below. - - # watch -n1 cat /proc/mdstat - -![Check Raid 6 Process](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Process.png) - -Check Raid 6 Process - -8. Verify the raid devices using the following command. - -# mdadm -E /dev/sd[b-e]1 - -**Note**:: The above command will be display the information of the four disks, which is quite long so not possible to post the output or screen grab here. - -9. Next, verify the RAID array to confirm that the re-syncing is started. - - # mdadm --detail /dev/md0 - -![Check Raid 6 Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Array.png) - -Check Raid 6 Array - -### Step 4: Creating FileSystem on Raid Device ### - -10. Create a filesystem using ext4 for ‘/dev/md0‘ and mount it under /mnt/raid5. Here we’ve used ext4, but you can use any type of filesystem as per your choice. - - # mkfs.ext4 /dev/md0 - -![Create File System on Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Create-File-System-on-Raid.png) - -Create File System on Raid 6 - -11. Mount the created filesystem under /mnt/raid6 and verify the files under mount point, we can see lost+found directory. - - # mkdir /mnt/raid6 - # mount /dev/md0 /mnt/raid6/ - # ls -l /mnt/raid6/ - -12. Create some files under mount point and append some text in any one of the file to verify the content. - - # touch /mnt/raid6/raid6_test.txt - # ls -l /mnt/raid6/ - # echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt - # cat /mnt/raid6/raid6_test.txt - -![Verify Raid Content](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-Content.png) - -Verify Raid Content - -13. Add an entry in /etc/fstab to auto mount the device at the system startup and append the below entry, mount point may differ according to your environment. 
- - # vim /etc/fstab - - /dev/md0 /mnt/raid6 ext4 defaults 0 0 - -![Automount Raid 6 Device](http://www.tecmint.com/wp-content/uploads/2014/11/Automount-Raid-Device.png) - -Automount Raid 6 Device - -14. Next, execute ‘mount -a‘ command to verify whether there is any error in fstab entry. - - # mount -av - -![Verify Raid Automount](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Automount-Raid-Devices.png) - -Verify Raid Automount - -### Step 5: Save RAID 6 Configuration ### - -15. Please note by default RAID don’t have a config file. We have to save it by manually using below command and then verify the status of device ‘/dev/md0‘. - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - # mdadm --detail /dev/md0 - -![Save Raid 6 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png) - -Save Raid 6 Configuration - -![Check Raid 6 Status](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Status.png) - -Check Raid 6 Status - -### Step 6: Adding a Spare Drives ### - -16. Now it has 4 disks and there are two parity information’s available. In some cases, if any one of the disk fails we can get the data, because there is double parity in RAID 6. - -May be if the second disk fails, we can add a new one before loosing third disk. It is possible to add a spare drive while creating our RAID set, But I have not defined the spare drive while creating our raid set. But, we can add a spare drive after any drive failure or while creating the RAID set. Now we have already created the RAID set now let me add a spare drive for demonstration. - -For the demonstration purpose, I’ve hot-plugged a new HDD disk (i.e. /dev/sdf), let’s verify the attached disk. - - # ls -l /dev/ | grep sd - -![Check New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-New-Disk.png) - -Check New Disk - -17. Now again confirm the new attached disk for any raid is already configured or not using the same mdadm command. - - # mdadm --examine /dev/sdf - -![Check Raid on New Disk](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-New-Disk.png) - -Check Raid on New Disk - -**Note**: As usual, like we’ve created partitions for four disks earlier, similarly we’ve to create new partition on the new plugged disk using fdisk command. - - # fdisk /dev/sdf - -![Create sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Create-Partition-on-sdf.png) - -Create /dev/sdf Partition - -18. Again after creating new partition on /dev/sdf, confirm the raid on the partition, include the spare drive to the /dev/md0 raid device and verify the added device. - - # mdadm --examine /dev/sdf - # mdadm --examine /dev/sdf1 - # mdadm --add /dev/md0 /dev/sdf1 - # mdadm --detail /dev/md0 - -![Verify Raid on sdf Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-Raid-on-sdf.png) - -Verify Raid on sdf Partition - -![Add sdf Partition to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Add-sdf-Partition-to-Raid.png) - -Add sdf Partition to Raid - -![Verify sdf Partition Details](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-sdf-Details.png) - -Verify sdf Partition Details - -### Step 7: Check Raid 6 Fault Tolerance ### - -19. Now, let us check whether spare drive works automatically, if anyone of the disk fails in our Array. For testing, I’ve personally marked one of the drive is failed. - -Here, we’re going to mark /dev/sdd1 as failed drive. 
- - # mdadm --manage --fail /dev/md0 /dev/sdd1 - -![Check Raid 6 Fault Tolerance](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-6-Failover.png) - -Check Raid 6 Fault Tolerance - -20. Let me get the details of RAID set now and check whether our spare started to sync. - - # mdadm --detail /dev/md0 - -![Check Auto Raid Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Auto-Raid-Syncing.png) - -Check Auto Raid Syncing - -**Hurray!** Here, we can see the spare got activated and started rebuilding process. At the bottom we can see the faulty drive /dev/sdd1 listed as faulty. We can monitor build process using following command. - - # cat /proc/mdstat - -![Raid 6 Auto Syncing](http://www.tecmint.com/wp-content/uploads/2014/11/Raid-6-Auto-Syncing.png) - -Raid 6 Auto Syncing - -### Conclusion: ### - -Here, we have seen how to setup RAID 6 using four disks. This RAID level is one of the expensive setup with high redundancy. We will see how to setup a Nested RAID 10 and much more in the next articles. Till then, stay connected with TECMINT. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid-6-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ -[3]:http://www.tecmint.com/create-raid1-in-linux/ \ No newline at end of file diff --git a/sources/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md b/sources/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md deleted file mode 100644 index a08903e00e..0000000000 --- a/sources/tech/RAID/Part 6 - Setting Up RAID 10 or 1+0 (Nested) in Linux.md +++ /dev/null @@ -1,276 +0,0 @@ -struggling 翻译中 -Setting Up RAID 10 or 1+0 (Nested) in Linux – Part 6 -================================================================================ -RAID 10 is a combine of RAID 0 and RAID 1 to form a RAID 10. To setup Raid 10, we need at least 4 number of disks. In our earlier articles, we’ve seen how to setup a RAID 0 and RAID 1 with minimum 2 number of disks. - -Here we will use both RAID 0 and RAID 1 to perform a Raid 10 setup with minimum of 4 drives. Assume, that we’ve some data saved to logical volume, which is created with RAID 10. Just for an example, if we are saving a data “apple” this will be saved under all 4 disk by this following method. - -![Create Raid 10 in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/raid10.jpg) - -Create Raid 10 in Linux - -Using RAID 0 it will save as “A” in first disk and “p” in the second disk, then again “p” in first disk and “l” in second disk. Then “e” in first disk, like this it will continue the Round robin process to save the data. From this we come to know that RAID 0 will write the half of the data to first disk and other half of the data to second disk. - -In RAID 1 method, same data will be written to other 2 disks as follows. “A” will write to both first and second disks, “P” will write to both disk, Again other “P” will write to both the disks. Thus using RAID 1 it will write to both the disks. This will continue in round robin process. - -Now you all came to know that how RAID 10 works by combining of both RAID 0 and RAID 1. 
If we have 4 number of 20 GB size disks, it will be 80 GB in total, but we will get only 40 GB of Storage capacity, the half of total capacity will be lost for building RAID 10. - -#### Pros and Cons of RAID 5 #### - -- Gives better performance. -- We will loose two of the disk capacity in RAID 10. -- Reading and writing will be very good, because it will write and read to all those 4 disk at the same time. -- It can be used for Database solutions, which needs a high I/O disk writes. - -#### Requirements #### - -In RAID 10, we need minimum of 4 disks, the first 2 disks for RAID 0 and other 2 Disks for RAID 1. Like I said before, RAID 10 is just a Combine of RAID 0 & 1. If we need to extended the RAID group, we must increase the disk by minimum 4 disks. - -**My Server Setup** - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.229 - Hostname : rd10.tecmintlocal.com - Disk 1 [20GB] : /dev/sdd - Disk 2 [20GB] : /dev/sdc - Disk 3 [20GB] : /dev/sdd - Disk 4 [20GB] : /dev/sde - -There are two ways to setup RAID 10, but here I’m going to show you both methods, but I prefer you to follow the first method, which makes the work lot easier for setting up a RAID 10. - -### Method 1: Setting Up Raid 10 ### - -1. First, verify that all the 4 added disks are detected or not using the following command. - - # ls -l /dev | grep sd - -2. Once the four disks are detected, it’s time to check for the drives whether there is already any raid existed before creating a new one. - - # mdadm -E /dev/sd[b-e] - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde - -![Verify 4 Added Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Verify-4-Added-Disks.png) - -Verify 4 Added Disks - -**Note**: In the above output, you see there isn’t any super-block detected yet, that means there is no RAID defined in all 4 drives. - -#### Step 1: Drive Partitioning for RAID #### - -3. Now create a new partition on all 4 disks (/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde) using the ‘fdisk’ tool. - - # fdisk /dev/sdb - # fdisk /dev/sdc - # fdisk /dev/sdd - # fdisk /dev/sde - -**Create /dev/sdb Partition** - -Let me show you how to partition one of the disk (/dev/sdb) using fdisk, this steps will be the same for all the other disks too. - - # fdisk /dev/sdb - -Please use the below steps for creating a new partition on /dev/sdb drive. - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. -- Then choose ‘1‘ to be the first partition. -- Next press ‘p‘ to print the created partition. -- Change the Type, If we need to know the every available types Press ‘L‘. -- Here, we are selecting ‘fd‘ as my type is RAID. -- Next press ‘p‘ to print the defined partition. -- Then again use ‘p‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Disk sdb Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-sdb-Partition.png) - -Disk sdb Partition - -**Note**: Please use the above same instructions for creating partitions on other disks (sdc, sdd sdd sde). - -4. After creating all 4 partitions, again you need to examine the drives for any already existing raid using the following command. 
- - # mdadm -E /dev/sd[b-e] - # mdadm -E /dev/sd[b-e]1 - - OR - - # mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde - # mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 - -![Check All Disks for Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Check-All-Disks-for-Raid.png) - -Check All Disks for Raid - -**Note**: The above outputs shows that there isn’t any super-block detected on all four newly created partitions, that means we can move forward to create RAID 10 on these drives. - -#### Step 2: Creating ‘md’ RAID Device #### - -5. Now it’s time to create a ‘md’ (i.e. /dev/md0) device, using ‘mdadm’ raid management tool. Before, creating device, your system must have ‘mdadm’ tool installed, if not install it first. - - # yum install mdadm [on RedHat systems] - # apt-get install mdadm [on Debain systems] - -Once ‘mdadm’ tool installed, you can now create a ‘md’ raid device using the following command. - - # mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 - -6. Next verify the newly created raid device using the ‘cat’ command. - - # cat /proc/mdstat - -![Create md raid Device](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-raid-Device.png) - -Create md raid Device - -7. Next, examine all the 4 drives using the below command. The output of the below command will be long as it displays the information of all 4 disks. - - # mdadm --examine /dev/sd[b-e]1 - -8. Next, check the details of Raid Array with the help of following command. - - # mdadm --detail /dev/md0 - -![Check Raid Array Details](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-Array-Details.png) - -Check Raid Array Details - -**Note**: You see in the above results, that the status of Raid was active and re-syncing. - -#### Step 3: Creating Filesystem #### - -9. Create a file system using ext4 for ‘md0′ and mount it under ‘/mnt/raid10‘. Here, I’ve used ext4, but you can use any filesystem type if you want. - - # mkfs.ext4 /dev/md0 - -![Create md Filesystem](http://www.tecmint.com/wp-content/uploads/2014/11/Create-md-Filesystem.png) - -Create md Filesystem - -10. After creating filesystem, mount the created file-system under ‘/mnt/raid10‘ and list the contents of the mount point using ‘ls -l’ command. - - # mkdir /mnt/raid10 - # mount /dev/md0 /mnt/raid10/ - # ls -l /mnt/raid10/ - -Next, add some files under mount point and append some text in any one of the file and check the content. - - # touch /mnt/raid10/raid10_files.txt - # ls -l /mnt/raid10/ - # echo "raid 10 setup with 4 disks" > /mnt/raid10/raid10_files.txt - # cat /mnt/raid10/raid10_files.txt - -![Mount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/Mount-md-Device.png) - -Mount md Device - -11. For automounting, open the ‘/etc/fstab‘ file and append the below entry in fstab, may be mount point will differ according to your environment. Save and quit using wq!. - - # vim /etc/fstab - - /dev/md0 /mnt/raid10 ext4 defaults 0 0 - -![AutoMount md Device](http://www.tecmint.com/wp-content/uploads/2014/11/AutoMount-md-Device.png) - -AutoMount md Device - -12. Next, verify the ‘/etc/fstab‘ file for any errors before restarting the system using ‘mount -a‘ command. - - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Errors-in-Fstab.png) - -Check Errors in Fstab - -#### Step 4: Save RAID Configuration #### - -13. By default RAID don’t have a config file, so we need to save it manually after making all the above steps, to preserve these settings during system boot. 
- - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -![Save Raid10 Configuration](http://www.tecmint.com/wp-content/uploads/2014/11/Save-Raid10-Configuration.png) - -Save Raid10 Configuration - -That’s it, we have created RAID 10 using method 1, this method is the easier one. Now let’s move forward to setup RAID 10 using method 2. - -### Method 2: Creating RAID 10 ### - -1. In method 2, we have to define 2 sets of RAID 1 and then we need to define a RAID 0 using those created RAID 1 sets. Here, what we will do is to first create 2 mirrors (RAID1) and then striping over RAID0. - -First, list the disks which are all available for creating RAID 10. - - # ls -l /dev | grep sd - -![List 4 Devices](http://www.tecmint.com/wp-content/uploads/2014/11/List-4-Devices.png) - -List 4 Devices - -2. Partition the all 4 disks using ‘fdisk’ command. For partitioning, you can follow #step 3 above. - - # fdisk /dev/sdb - # fdisk /dev/sdc - # fdisk /dev/sdd - # fdisk /dev/sde - -3. After partitioning all 4 disks, now examine the disks for any existing raid blocks. - - # mdadm --examine /dev/sd[b-e] - # mdadm --examine /dev/sd[b-e]1 - -![Examine 4 Disks](http://www.tecmint.com/wp-content/uploads/2014/11/Examine-4-Disks.png) - -Examine 4 Disks - -#### Step 1: Creating RAID 1 #### - -4. First let me create 2 sets of RAID 1 using 4 disks ‘sdb1′ and ‘sdc1′ and other set using ‘sdd1′ & ‘sde1′. - - # mdadm --create /dev/md1 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[b-c]1 - # mdadm --create /dev/md2 --metadata=1.2 --level=1 --raid-devices=2 /dev/sd[d-e]1 - # cat /proc/mdstat - -![Creating Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png) - -Creating Raid 1 - -![Check Details of Raid 1](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-1.png) - -Check Details of Raid 1 - -#### Step 2: Creating RAID 0 #### - -5. Next, create the RAID 0 using md1 and md2 devices. - - # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2 - # cat /proc/mdstat - -![Creating Raid 0](http://www.tecmint.com/wp-content/uploads/2014/11/Creating-Raid-0.png) - -Creating Raid 0 - -#### Step 3: Save RAID Configuration #### - -6. We need to save the Configuration under ‘/etc/mdadm.conf‘ to load all raid devices in every reboot times. - - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - -After this, we need to follow #step 3 Creating file system of method 1. - -That’s it! we have created RAID 1+0 using method 2. We will loose two disks space here, but the performance will be excellent compared to any other raid setups. - -### Conclusion ### - -Here we have created RAID 10 using two methods. RAID 10 has good performance and redundancy too. Hope this helps you to understand about RAID 10 Nested Raid level. Let us see how to grow an existing raid array and much more in my upcoming articles. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid-10-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ \ No newline at end of file diff --git a/sources/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md b/sources/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md deleted file mode 100644 index 76039f4371..0000000000 --- a/sources/tech/RAID/Part 7 - Growing an Existing RAID Array and Removing Failed Disks in Raid.md +++ /dev/null @@ -1,180 +0,0 @@ -struggling 翻译中 -Growing an Existing RAID Array and Removing Failed Disks in Raid – Part 7 -================================================================================ -Every newbies will get confuse of the word array. Array is just a collection of disks. In other words, we can call array as a set or group. Just like a set of eggs containing 6 numbers. Likewise RAID Array contains number of disks, it may be 2, 4, 6, 8, 12, 16 etc. Hope now you know what Array is. - -Here we will see how to grow (extend) an existing array or raid group. For example, if we are using 2 disks in an array to form a raid 1 set, and in some situation if we need more space in that group, we can extend the size of an array using mdadm –grow command, just by adding one of the disk to the existing array. After growing (adding disk to an existing array), we will see how to remove one of the failed disk from array. - -![Grow Raid Array in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Growing-Raid-Array.jpg) - -Growing Raid Array and Removing Failed Disks - -Assume that one of the disk is little weak and need to remove that disk, till it fails let it under use, but we need to add one of the spare drive and grow the mirror before it fails, because we need to save our data. While the weak disk fails we can remove it from array this is the concept we are going to see in this topic. - -#### Features of RAID Growth #### - -- We can grow (extend) the size of any raid set. -- We can remove the faulty disk after growing raid array with new disk. -- We can grow raid array without any downtime. - -Requirements - -- To grow an RAID array, we need an existing RAID set (Array). -- We need extra disks to grow the Array. -- Here I’m using 1 disk to grow the existing array. - -Before we learn about growing and recovering of Array, we have to know about the basics of RAID levels and setups. Follow the below links to know about those setups. - -- [Understanding Basic RAID Concepts – Part 1][1] -- [Creating a Software Raid 0 in Linux – Part 2][2] - -#### My Server Setup #### - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.230 - Hostname : grow.tecmintlocal.com - 2 Existing Disks : 1 GB - 1 Additional Disk : 1 GB - -Here, my already existing RAID has 2 number of disks with each size is 1GB and we are now adding one more disk whose size is 1GB to our existing raid array. - -### Growing an Existing RAID Array ### - -1. Before growing an array, first list the existing Raid array using the following command. 
- - # mdadm --detail /dev/md0 - -![Check Existing Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Existing-Raid-Array.png) - -Check Existing Raid Array - -**Note**: The above output shows that I’ve already has two disks in Raid array with raid1 level. Now here we are adding one more disk to an existing array, - -2. Now let’s add the new disk “sdd” and create a partition using ‘fdisk‘ command. - - # fdisk /dev/sdd - -Please use the below instructions to create a partition on /dev/sdd drive. - -- Press ‘n‘ for creating new partition. -- Then choose ‘P‘ for Primary partition. -- Then choose ‘1‘ to be the first partition. -- Next press ‘p‘ to print the created partition. -- Here, we are selecting ‘fd‘ as my type is RAID. -- Next press ‘p‘ to print the defined partition. -- Then again use ‘p‘ to print the changes what we have made. -- Use ‘w‘ to write the changes. - -![Create New Partition in Linux](http://www.tecmint.com/wp-content/uploads/2014/11/Create-New-sdd-Partition.png) - -Create New sdd Partition - -3. Once new sdd partition created, you can verify it using below command. - - # ls -l /dev/ | grep sd - -![Confirm sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-sdd-Partition.png) - -Confirm sdd Partition - -4. Next, examine the newly created disk for any existing raid, before adding to the array. - - # mdadm --examine /dev/sdd1 - -![Check Raid on sdd Partition](http://www.tecmint.com/wp-content/uploads/2014/11/Check-Raid-on-sdd-Partition.png) - -Check Raid on sdd Partition - -**Note**: The above output shows that the disk has no super-blocks detected, means we can move forward to add a new disk to an existing array. - -4. To add the new partition /dev/sdd1 in existing array md0, use the following command. - - # mdadm --manage /dev/md0 --add /dev/sdd1 - -![Add Disk To Raid-Array](http://www.tecmint.com/wp-content/uploads/2014/11/Add-Disk-To-Raid-Array.png) - -Add Disk To Raid-Array - -5. Once the new disk has been added, check for the added disk in our array using. - - # mdadm --detail /dev/md0 - -![Confirm Disk Added to Raid](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Disk-Added-To-Raid.png) - -Confirm Disk Added to Raid - -**Note**: In the above output, you can see the drive has been added as a spare. Here, we already having 2 disks in the array, but what we are expecting is 3 devices in array for that we need to grow the array. - -6. To grow the array we have to use the below command. - - # mdadm --grow --raid-devices=3 /dev/md0 - -![Grow Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Raid-Array.png) - -Grow Raid Array - -Now we can see the third disk (sdd1) has been added to array, after adding third disk it will sync the data from other two disks. - - # mdadm --detail /dev/md0 - -![Confirm Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Confirm-Raid-Array.png) - -Confirm Raid Array - -**Note**: For large size disk it will take hours to sync the contents. Here I have used 1GB virtual disk, so its done very quickly within seconds. - -### Removing Disks from Array ### - -7. After the data has been synced to new disk ‘sdd1‘ from other two disks, that means all three disks now have same contents. - -As I told earlier let’s assume that one of the disk is weak and needs to be removed, before it fails. So, now assume disk ‘sdc1‘ is weak and needs to be removed from an existing array. - -Before removing a disk we have to mark the disk as failed one, then only we can able to remove it. 
- - # mdadm --fail /dev/md0 /dev/sdc1 - # mdadm --detail /dev/md0 - -![Disk Fail in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Disk-Fail-in-Raid-Array.png) - -Disk Fail in Raid Array - -From the above output, we clearly see that the disk was marked as faulty at the bottom. Even its faulty, we can see the raid devices are 3, failed 1 and state was degraded. - -Now we have to remove the faulty drive from the array and grow the array with 2 devices, so that the raid devices will be set to 2 devices as before. - - # mdadm --remove /dev/md0 /dev/sdc1 - -![Remove Disk in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Remove-Disk-in-Raid-Array.png) - -Remove Disk in Raid Array - -8. Once the faulty drive is removed, now we’ve to grow the raid array using 2 disks. - - # mdadm --grow --raid-devices=2 /dev/md0 - # mdadm --detail /dev/md0 - -![Grow Disks in Raid Array](http://www.tecmint.com/wp-content/uploads/2014/11/Grow-Disks-in-Raid-Array.png) - -Grow Disks in Raid Array - -From the about output, you can see that our array having only 2 devices. If you need to grow the array again, follow the same steps as described above. If you need to add a drive as spare, mark it as spare so that if the disk fails, it will automatically active and rebuild. - -### Conclusion ### - -In the article, we’ve seen how to grow an existing raid set and how to remove a faulty disk from an array after re-syncing the existing contents. All these steps can be done without any downtime. During data syncing, system users, files and applications will not get affected in any case. - -In next, article I will show you how to manage the RAID, till then stay tuned to updates and don’t forget to add your comments. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/grow-raid-array-in-linux/ - -作者:[Babin Lonston][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ -[2]:http://www.tecmint.com/create-raid0-in-linux/ \ No newline at end of file diff --git a/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md b/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md deleted file mode 100644 index 6b534423e7..0000000000 --- a/sources/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md +++ /dev/null @@ -1,208 +0,0 @@ -ictlyh Translating -Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks -================================================================================ -Some time ago I read that one of the distinguishing characteristics of an effective system administrator / engineer is laziness. It seemed a little contradictory at first but the author then proceeded to explain why: - -![Automate Linux System Maintenance Tasks](http://www.tecmint.com/wp-content/uploads/2015/08/Automate-Linux-System-Maintenance-Tasks.png) - -RHCE Series: Automate Linux System Maintenance Tasks – Part 4 - -if a sysadmin spends most of his time solving issues and doing repetitive tasks, you can suspect he or she is not doing things quite right. 
In other words, an effective system administrator / engineer should develop a plan to perform repetitive tasks with as less action on his / her part as possible, and should foresee problems by using, - -for example, the tools reviewed in Part 3 – [Monitor System Activity Reports Using Linux Toolsets][1] of this series. Thus, although he or she may not seem to be doing much, it’s because most of his / her responsibilities have been taken care of with the help of shell scripting, which is what we’re going to talk about in this tutorial. - -### What is a shell script? ### - -In few words, a shell script is nothing more and nothing less than a program that is executed step by step by a shell, which is another program that provides an interface layer between the Linux kernel and the end user. - -By default, the shell used for user accounts in RHEL 7 is bash (/bin/bash). If you want a detailed description and some historical background, you can refer to [this Wikipedia article][2]. - -To find out more about the enormous set of features provided by this shell, you may want to check out its **man page**, which is downloaded in in PDF format at ([Bash Commands][3]). Other than that, it is assumed that you are familiar with Linux commands (if not, I strongly advise you to go through [A Guide from Newbies to SysAdmin][4] article in **Tecmint.com** before proceeding). Now let’s get started. - -### Writing a script to display system information ### - -For our convenience, let’s create a directory to store our shell scripts: - - # mkdir scripts - # cd scripts - -And open a new text file named `system_info.sh` with your preferred text editor. We will begin by inserting a few comments at the top and some commands afterwards: - - #!/bin/bash - - # Sample script written for Part 4 of the RHCE series - # This script will return the following set of system information: - # -Hostname information: - echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m" - hostnamectl - echo "" - # -File system disk space usage: - echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m" - df -h - echo "" - # -Free and used memory in the system: - echo -e "\e[31;43m ***** FREE AND USED MEMORY *****\e[0m" - free - echo "" - # -System uptime and load: - echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m" - uptime - echo "" - # -Logged-in users: - echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m" - who - echo "" - # -Top 5 processes as far as memory usage is concerned - echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m" - ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6 - echo "" - echo -e "\e[1;32mDone.\e[0m" - -Next, give the script execute permissions: - - # chmod +x system_info.sh - -and run it: - - ./system_info.sh - -Note that the headers of each section are shown in color for better visualization: - -![Server Monitoring Shell Script](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Shell-Script.png) - -Server Monitoring Shell Script - -That functionality is provided by this command: - - echo -e "\e[COLOR1;COLOR2m\e[0m" - -Where COLOR1 and COLOR2 are the foreground and background colors, respectively (more info and options are explained in this entry from the [Arch Linux Wiki][5]) and is the string that you want to show in color. - -### Automating Tasks ### - -The tasks that you may need to automate may vary from case to case. 
Thus, we cannot possibly cover all of the possible scenarios in a single article, but we will present three classic tasks that can be automated using shell scripting: - -**1)** update the local file database, 2) find (and alternatively delete) files with 777 permissions, and 3) alert when filesystem usage surpasses a defined limit. - -Let’s create a file named `auto_tasks.sh` in our scripts directory with the following content: - - #!/bin/bash - - # Sample script to automate tasks: - # -Update local file database: - echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m" - updatedb - if [ $? == 0 ]; then - echo "The local file database was updated correctly." - else - echo "The local file database was not updated correctly." - fi - echo "" - - # -Find and / or delete files with 777 permissions. - echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m" - # Enable either option (comment out the other line), but not both. - # Option 1: Delete files without prompting for confirmation. Assumes GNU version of find. - #find -type f -perm 0777 -delete - # Option 2: Ask for confirmation before deleting files. More portable across systems. - find -type f -perm 0777 -exec rm -i {} +; - echo "" - # -Alert when file system usage surpasses a defined limit - echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m" - THRESHOLD=30 - while read line; do - # This variable stores the file system path as a string - FILESYSTEM=$(echo $line | awk '{print $1}') - # This variable stores the use percentage (XX%) - PERCENTAGE=$(echo $line | awk '{print $5}') - # Use percentage without the % sign. - USAGE=${PERCENTAGE%?} - if [ $USAGE -gt $THRESHOLD ]; then - echo "The remaining available space in $FILESYSTEM is critically low. Used: $PERCENTAGE" - fi - done < <(df -h --total | grep -vi filesystem) - -Please note that there is a space between the two `<` signs in the last line of the script. - -![Shell Script to Find 777 Permissions](http://www.tecmint.com/wp-content/uploads/2015/08/Shell-Script-to-Find-777-Permissions.png) - -Shell Script to Find 777 Permissions - -### Using Cron ### - -To take efficiency one step further, you will not want to sit in front of your computer and run those scripts manually. Rather, you will use cron to schedule those tasks to run on a periodic basis and sends the results to a predefined list of recipients via email or save them to a file that can be viewed using a web browser. - -The following script (filesystem_usage.sh) will run the well-known **df -h** command, format the output into a HTML table and save it in the **report.html** file: - - #!/bin/bash - # Sample script to demonstrate the creation of an HTML report using shell scripting - # Web directory - WEB_DIR=/var/www/html - # A little CSS and table layout to make the report look a little nicer - echo " - - - - - " > $WEB_DIR/report.html - # View hostname and insert it at the top of the html body - HOST=$(hostname) - echo "Filesystem usage for host $HOST
- Last updated: $(date)

- - " >> $WEB_DIR/report.html - # Read the output of df -h line by line - while read line; do - echo "" >> $WEB_DIR/report.html - done < <(df -h | grep -vi filesystem) - echo "
Filesystem - Size - Use % -
" >> $WEB_DIR/report.html - echo $line | awk '{print $1}' >> $WEB_DIR/report.html - echo "" >> $WEB_DIR/report.html - echo $line | awk '{print $2}' >> $WEB_DIR/report.html - echo "" >> $WEB_DIR/report.html - echo $line | awk '{print $5}' >> $WEB_DIR/report.html - echo "
" >> $WEB_DIR/report.html - -In our **RHEL 7** server (**192.168.0.18**), this looks as follows: - -![Server Monitoring Report](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Report.png) - -Server Monitoring Report - -You can add to that report as much information as you want. To run the script every day at 1:30 pm, add the following crontab entry: - - 30 13 * * * /root/scripts/filesystem_usage.sh - -### Summary ### - -You will most likely think of several other tasks that you want or need to automate; as you can see, using shell scripting will greatly simplify this effort. Feel free to let us know if you find this article helpful and don't hesitate to add your own ideas or comments via the form below. - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/ -[2]:https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29 -[3]:http://www.tecmint.com/wp-content/pdf/bash.pdf -[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/ -[5]:https://wiki.archlinux.org/index.php/Color_Bash_Prompt \ No newline at end of file diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md b/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md deleted file mode 100644 index 0b85744c6c..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md +++ /dev/null @@ -1,249 +0,0 @@ -[translated by xiqingongzi] -RHCSA Series: How to Manage Users and Groups in RHEL 7 – Part 3 -================================================================================ -Managing a RHEL 7 server, as it is the case with any other Linux server, will require that you know how to add, edit, suspend, or delete user accounts, and grant users the necessary permissions to files, directories, and other system resources to perform their assigned tasks. - -![User and Group Management in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/User-and-Group-Management-in-Linux.png) - -RHCSA: User and Group Management – Part 3 - -### Managing User Accounts ### - -To add a new user account to a RHEL 7 server, you can run either of the following two commands as root: - - # adduser [new_account] - # useradd [new_account] - -When a new user account is added, by default the following operations are performed. - -- His/her home directory is created (`/home/username` unless specified otherwise). -- These `.bash_logout`, `.bash_profile` and `.bashrc` hidden files are copied inside the user’s home directory, and will be used to provide environment variables for his/her user session. You can explore each of them for further details. -- A mail spool directory is created for the added user account. -- A group is created with the same name as the new user account. - -The full account summary is stored in the `/etc/passwd `file. 
This file holds a record per system user account and has the following format (fields are separated by a colon): - - [username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell] - -- These two fields `[username]` and `[Comment]` are self explanatory. -- The second filed ‘x’ indicates that the account is secured by a shadowed password (in `/etc/shadow`), which is used to logon as `[username]`. -- The fields `[UID]` and `[GID]` are integers that shows the User IDentification and the primary Group IDentification to which `[username]` belongs, equally. - -Finally, - -- The `[Home directory]` shows the absolute location of `[username]’s` home directory, and -- `[Default shell]` is the shell that is commit to this user when he/she logins into the system. - -Another important file that you must become familiar with is `/etc/group`, where group information is stored. As it is the case with `/etc/passwd`, there is one record per line and its fields are also delimited by a colon: - - [Group name]:[Group password]:[GID]:[Group members] - -where, - -- `[Group name]` is the name of group. -- Does this group use a group password? (An “x” means no). -- `[GID]`: same as in `/etc/passwd`. -- `[Group members]`: a list of users, separated by commas, that are members of each group. - -After adding an account, at anytime, you can edit the user’s account information using usermod, whose basic syntax is: - - # usermod [options] [username] - -Read Also: - -- [15 ‘useradd’ Command Examples][1] -- [15 ‘usermod’ Command Examples][2] - -#### EXAMPLE 1: Setting the expiry date for an account #### - -If you work for a company that has some kind of policy to enable account for a certain interval of time, or if you want to grant access to a limited period of time, you can use the `--expiredate` flag followed by a date in YYYY-MM-DD format. To verify that the change has been applied, you can compare the output of - - # chage -l [username] - -before and after updating the account expiry date, as shown in the following image. - -![Change User Account Information](http://www.tecmint.com/wp-content/uploads/2015/03/Change-User-Account-Information.png) - -Change User Account Information - -#### EXAMPLE 2: Adding the user to supplementary groups #### - -Besides the primary group that is created when a new user account is added to the system, a user can be added to supplementary groups using the combined -aG, or –append –groups options, followed by a comma separated list of groups. - -#### EXAMPLE 3: Changing the default location of the user’s home directory and / or changing its shell #### - -If for some reason you need to change the default location of the user’s home directory (other than /home/username), you will need to use the -d, or –home options, followed by the absolute path to the new home directory. - -If a user wants to use another shell other than bash (for example, sh), which gets assigned by default, use usermod with the –shell flag, followed by the path to the new shell. 
- -#### EXAMPLE 4: Displaying the groups an user is a member of #### - -After adding the user to a supplementary group, you can verify that it now actually belongs to such group(s): - - # groups [username] - # id [username] - -The following image depicts Examples 2 through 4: - -![Adding User to Supplementary Group](http://www.tecmint.com/wp-content/uploads/2015/03/Adding-User-to-Supplementary-Group.png) - -Adding User to Supplementary Group - -In the example above: - - # usermod --append --groups gacanepa,users --home /tmp --shell /bin/sh tecmint - -To remove a user from a group, omit the `--append` switch in the command above and list the groups you want the user to belong to following the `--groups` flag. - -#### EXAMPLE 5: Disabling account by locking password #### - -To disable an account, you will need to use either the -l (lowercase L) or the –lock option to lock a user’s password. This will prevent the user from being able to log on. - -#### EXAMPLE 6: Unlocking password #### - -When you need to re-enable the user so that he can log on to the server again, use the -u or the –unlock option to unlock a user’s password that was previously blocked, as explained in Example 5 above. - - # usermod --unlock tecmint - -The following image illustrates Examples 5 and 6: - -![Lock Unlock User Account](http://www.tecmint.com/wp-content/uploads/2015/03/Lock-Unlock-User-Account.png) - -Lock Unlock User Account - -#### EXAMPLE 7: Deleting a group or an user account #### - -To delete a group, you’ll want to use groupdel, whereas to delete a user account you will use userdel (add the –r switch if you also want to delete the contents of its home directory and mail spool): - - # groupdel [group_name] # Delete a group - # userdel -r [user_name] # Remove user_name from the system, along with his/her home directory and mail spool - -If there are files owned by group_name, they will not be deleted, but the group owner will be set to the GID of the group that was deleted. - -### Listing, Setting and Changing Standard ugo/rwx Permissions ### - -The well-known [ls command][3] is one of the best friends of any system administrator. When used with the -l flag, this tool allows you to view a list a directory’s contents in long (or detailed) format. - -However, this command can also be applied to a single file. Either way, the first 10 characters in the output of `ls -l` represent each file’s attributes. - -The first char of this 10-character sequence is used to indicate the file type: - -- – (hyphen): a regular file -- d: a directory -- l: a symbolic link -- c: a character device (which treats data as a stream of bytes, i.e. a terminal) -- b: a block device (which handles data in blocks, i.e. storage devices) - -The next nine characters of the file attributes, divided in groups of three from left to right, are called the file mode and indicate the read (r), write(w), and execute (x) permissions granted to the file’s owner, the file’s group owner, and the rest of the users (commonly referred to as “the world”), respectively. - -While the read permission on a file allows the same to be opened and read, the same permission on a directory allows its contents to be listed if the execute permission is also set. In addition, the execute permission in a file allows it to be handled as a program and run. - -File permissions are changed with the chmod command, whose basic syntax is as follows: - - # chmod [new_mode] file - -where new_mode is either an octal number or an expression that specifies the new permissions. 
Feel free to use the mode that works best for you in each case. Or perhaps you already have a preferred way to set a file’s permissions – so feel free to use the method that works best for you. - -The octal number can be calculated based on the binary equivalent, which can in turn be obtained from the desired file permissions for the owner of the file, the owner group, and the world.The presence of a certain permission equals a power of 2 (r=22, w=21, x=20), while its absence means 0. For example: - -![File Permissions](http://www.tecmint.com/wp-content/uploads/2015/03/File-Permissions.png) - -File Permissions - -To set the file’s permissions as indicated above in octal form, type: - - # chmod 744 myfile - -Please take a minute to compare our previous calculation to the actual output of `ls -l` after changing the file’s permissions: - -![Long List Format](http://www.tecmint.com/wp-content/uploads/2015/03/Long-List-Format.png) - -Long List Format - -#### EXAMPLE 8: Searching for files with 777 permissions #### - -As a security measure, you should make sure that files with 777 permissions (read, write, and execute for everyone) are avoided like the plague under normal circumstances. Although we will explain in a later tutorial how to more effectively locate all the files in your system with a certain permission set, you can -by now- combine ls with grep to obtain such information. - -In the following example, we will look for file with 777 permissions in the /etc directory only. Note that we will use pipelining as explained in [Part 2: File and Directory Management][4] of this RHCSA series: - - # ls -l /etc | grep rwxrwxrwx - -![Find All Files with 777 Permission](http://www.tecmint.com/wp-content/uploads/2015/03/Find-All-777-Files.png) - -Find All Files with 777 Permission - -#### EXAMPLE 9: Assigning a specific permission to all users #### - -Shell scripts, along with some binaries that all users should have access to (not just their corresponding owner and group), should have the execute bit set accordingly (please note that we will discuss a special case later): - - # chmod a+x script.sh - -**Note**: That we can also set a file’s mode using an expression that indicates the owner’s rights with the letter `u`, the group owner’s rights with the letter `g`, and the rest with `o`. All of these rights can be represented at the same time with the letter `a`. Permissions are granted (or revoked) with the `+` or `-` signs, respectively. - -![Set Execute Permission on File](http://www.tecmint.com/wp-content/uploads/2015/03/Set-Execute-Permission-on-File.png) - -Set Execute Permission on File - -A long directory listing also shows the file’s owner and its group owner in the first and second columns, respectively. This feature serves as a first-level access control method to files in a system: - -![Check File Owner and Group](http://www.tecmint.com/wp-content/uploads/2015/03/Check-File-Owner-and-Group.png) - -Check File Owner and Group - -To change file ownership, you will use the chown command. 
Note that you can change the file and group ownership at the same time or separately: - - # chown user:group file - -**Note**: That you can change the user or group, or the two attributes at the same time, as long as you don’t forget the colon, leaving user or group blank if you want to update the other attribute, for example: - - # chown :group file # Change group ownership only - # chown user: file # Change user ownership only - -#### EXAMPLE 10: Cloning permissions from one file to another #### - -If you would like to “clone” ownership from one file to another, you can do so using the –reference flag, as follows: - - # chown --reference=ref_file file - -where the owner and group of ref_file will be assigned to file as well: - -![Clone File Ownership](http://www.tecmint.com/wp-content/uploads/2015/03/Clone-File-Ownership.png) - -Clone File Ownership - -### Setting Up SETGID Directories for Collaboration ### - -Should you need to grant access to all the files owned by a certain group inside a specific directory, you will most likely use the approach of setting the setgid bit for such directory. When the setgid bit is set, the effective GID of the real user becomes that of the group owner. - -Thus, any user can access a file under the privileges granted to the group owner of such file. In addition, when the setgid bit is set on a directory, newly created files inherit the same group as the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory. - - # chmod g+s [filename] - -To set the setgid in octal form, prepend the number 2 to the current (or desired) basic permissions. - - # chmod 2755 [directory] - -### Conclusion ### - -A solid knowledge of user and group management, along with standard and special Linux permissions, when coupled with practice, will allow you to quickly identify and troubleshoot issues with file permissions in your RHEL 7 server. - -I assure you that as you follow the steps outlined in this article and use the system documentation (as explained in [Part 1: Reviewing Essential Commands & System Documentation][5] of this series) you will master this essential competence of system administration. - -Feel free to let us know if you have any questions or comments using the form below. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/add-users-in-linux/ -[2]:http://www.tecmint.com/usermod-command-examples/ -[3]:http://www.tecmint.com/ls-interview-questions/ -[4]:http://www.tecmint.com/file-and-directory-management-in-linux/ -[5]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md b/sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md deleted file mode 100644 index 0e631ce37d..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md +++ /dev/null @@ -1,271 +0,0 @@ -FSSlc translating - -RHCSA Series: Using ‘Parted’ and ‘SSM’ to Configure and Encrypt System Storage – Part 6 -================================================================================ -In this article we will discuss how to set up and configure local system storage in Red Hat Enterprise Linux 7 using classic tools and introducing the System Storage Manager (also known as SSM), which greatly simplifies this task. - -![Configure and Encrypt System Storage](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-and-Encrypt-System-Storage.png) - -RHCSA: Configure and Encrypt System Storage – Part 6 - -Please note that we will present this topic in this article but will continue its description and usage on the next one (Part 7) due to vastness of the subject. - -### Creating and Modifying Partitions in RHEL 7 ### - -In RHEL 7, parted is the default utility to work with partitions, and will allow you to: - -- Display the current partition table -- Manipulate (increase or decrease the size of) existing partitions -- Create partitions using free space or additional physical storage devices - -It is recommended that before attempting the creation of a new partition or the modification of an existing one, you should ensure that none of the partitions on the device are in use (`umount /dev/partition`), and if you’re using part of the device as swap you need to disable it (`swapoff -v /dev/partition`) during the process. - -The easiest way to do this is to boot RHEL in rescue mode using an installation media such as a RHEL 7 installation DVD or USB (Troubleshooting → Rescue a Red Hat Enterprise Linux system) and Select Skip when you’re prompted to choose an option to mount the existing Linux installation, and you will be presented with a command prompt where you can start typing the same commands as shown as follows during the creation of an ordinary partition in a physical device that is not being used. - -![RHEL 7 Rescue Mode](http://www.tecmint.com/wp-content/uploads/2015/04/RHEL-7-Rescue-Mode.png) - -RHEL 7 Rescue Mode - -To start parted, simply type. 
- - # parted /dev/sdb - -Where `/dev/sdb` is the device where you will create the new partition; next, type print to display the current drive’s partition table: - -![Creat New Partition](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partition.png) - -Creat New Partition - -As you can see, in this example we are using a virtual drive of 5 GB. We will now proceed to create a 4 GB primary partition and then format it with the xfs filesystem, which is the default in RHEL 7. - -You can choose from a variety of file systems. You will need to manually create the partition with mkpart and then format it with mkfs.fstype as usual because mkpart does not support many modern filesystems out-of-the-box. - -In the following example we will set a label for the device and then create a primary partition `(p)` on `/dev/sdb`, which starts at the 0% percentage of the device and ends at 4000 MB (4 GB): - -![Set Partition Name in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Label-Partition.png) - -Label Partition Name - -Next, we will format the partition as xfs and print the partition table again to verify that changes were applied: - - # mkfs.xfs /dev/sdb1 - # parted /dev/sdb print - -![Format Partition in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Format-Partition-in-Linux.png) - -Format Partition as XFS Filesystem - -For older filesystems, you could use the resize command in parted to resize a partition. Unfortunately, this only applies to ext2, fat16, fat32, hfs, linux-swap, and reiserfs (if libreiserfs is installed). - -Thus, the only way to resize a partition is by deleting it and creating it again (so make sure you have a good backup of your data!). No wonder the default partitioning scheme in RHEL 7 is based on LVM. - -To remove a partition with parted: - - # parted /dev/sdb print - # parted /dev/sdb rm 1 - -![Remove Partition in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-Partition-in-Linux.png) - -Remove or Delete Partition - -### The Logical Volume Manager (LVM) ### - -Once a disk has been partitioned, it can be difficult or risky to change the partition sizes. For that reason, if we plan on resizing the partitions on our system, we should consider the possibility of using LVM instead of the classic partitioning system, where several physical devices can form a volume group that will host a defined number of logical volumes, which can be expanded or reduced without any hassle. - -In simple terms, you may find the following diagram useful to remember the basic architecture of LVM. - -![Basic Architecture of LVM](http://www.tecmint.com/wp-content/uploads/2015/04/LVM-Diagram.png) - -Basic Architecture of LVM - -#### Creating Physical Volumes, Volume Group and Logical Volumes #### - -Follow these steps in order to set up LVM using classic volume management tools. Since you can expand this topic reading the [LVM series on this site][1], I will only outline the basic steps to set up LVM, and then compare them to implementing the same functionality with SSM. - -**Note**: That we will use the whole disks `/dev/sdb` and `/dev/sdc` as PVs (Physical Volumes) but it’s entirely up to you if you want to do the same. - -**1. Create partitions `/dev/sdb1` and `/dev/sdc1` using 100% of the available disk space in /dev/sdb and /dev/sdc:** - - # parted /dev/sdb print - # parted /dev/sdc print - -![Create New Partitions](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partitions.png) - -Create New Partitions - -**2. 
Create 2 physical volumes on top of /dev/sdb1 and /dev/sdc1, respectively.** - - # pvcreate /dev/sdb1 - # pvcreate /dev/sdc1 - -![Create Two Physical Volumes](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Physical-Volumes.png) - -Create Two Physical Volumes - -Remember that you can use pvdisplay /dev/sd{b,c}1 to show information about the newly created PVs. - -**3. Create a VG on top of the PV that you created in the previous step:** - - # vgcreate tecmint_vg /dev/sd{b,c}1 - -![Create Volume Group in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Volume-Group.png) - -Create Volume Group - -Remember that you can use vgdisplay tecmint_vg to show information about the newly created VG. - -**4. Create three logical volumes on top of VG tecmint_vg, as follows:** - - # lvcreate -L 3G -n vol01_docs tecmint_vg [vol01_docs → 3 GB] - # lvcreate -L 1G -n vol02_logs tecmint_vg [vol02_logs → 1 GB] - # lvcreate -l 100%FREE -n vol03_homes tecmint_vg [vol03_homes → 6 GB] - -![Create Logical Volumes in LVM](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Logical-Volumes.png) - -Create Logical Volumes - -Remember that you can use lvdisplay tecmint_vg to show information about the newly created LVs on top of VG tecmint_vg. - -**5. Format each of the logical volumes with xfs (do NOT use xfs if you’re planning on shrinking volumes later!):** - - # mkfs.xfs /dev/tecmint_vg/vol01_docs - # mkfs.xfs /dev/tecmint_vg/vol02_logs - # mkfs.xfs /dev/tecmint_vg/vol03_homes - -**6. Finally, mount them:** - - # mount /dev/tecmint_vg/vol01_docs /mnt/docs - # mount /dev/tecmint_vg/vol02_logs /mnt/logs - # mount /dev/tecmint_vg/vol03_homes /mnt/homes - -#### Removing Logical Volumes, Volume Group and Physical Volumes #### - -**7. Now we will reverse the LVM implementation and remove the LVs, the VG, and the PVs:** - - # lvremove /dev/tecmint_vg/vol01_docs - # lvremove /dev/tecmint_vg/vol02_logs - # lvremove /dev/tecmint_vg/vol03_homes - # vgremove /dev/tecmint_vg - # pvremove /dev/sd{b,c}1 - -**8. Now let’s install SSM and we will see how to perform the above in ONLY 1 STEP!** - - # yum update && yum install system-storage-manager - -We will use the same names and sizes as before: - - # ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 /mnt/docs /dev/sd{b,c}1 - # ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 /mnt/logs /dev/sd{b,c}1 - # ssm create -n vol03_homes -p tecmint_vg --fstype ext4 /mnt/homes /dev/sd{b,c}1 - -Yes! SSM will let you: - -- initialize block devices as physical volumes -- create a volume group -- create logical volumes -- format LVs, and -- mount them using only one command - -**9. We can now display the information about PVs, VGs, or LVs, respectively, as follows:** - - # ssm list dev - # ssm list pool - # ssm list vol - -![Check Information of PVs, VGs, or LVs](http://www.tecmint.com/wp-content/uploads/2015/04/Display-LVM-Information.png) - -Check Information of PVs, VGs, or LVs - -**10. As we already know, one of the distinguishing features of LVM is the possibility to resize (expand or decrease) logical volumes without downtime.** - -Say we are running out of space in vol02_logs but have plenty of space in vol03_homes. 
We will resize vol03_homes to 4 GB and expand vol02_logs to use the remaining space: - - # ssm resize -s 4G /dev/tecmint_vg/vol03_homes - -Run ssm list pool again and take note of the free space in tecmint_vg: - -![Check Volume Size](http://www.tecmint.com/wp-content/uploads/2015/04/Check-LVM-Free-Space.png) - -Check Volume Size - -Then do: - - # ssm resize -s+1.99 /dev/tecmint_vg/vol02_logs - -**Note**: that the plus sign after the -s flag indicates that the specified value should be added to the present value. - -**11. Removing logical volumes and volume groups is much easier with ssm as well. A simple,** - - # ssm remove tecmint_vg - -will return a prompt asking you to confirm the deletion of the VG and the LVs it contains: - -![Remove Logical Volume and Volume Group](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-LV-VG.png) - -Remove Logical Volume and Volume Group - -### Managing Encrypted Volumes ### - -SSM also provides system administrators with the capability of managing encryption for new or existing volumes. You will need the cryptsetup package installed first: - - # yum update && yum install cryptsetup - -Then issue the following command to create an encrypted volume. You will be prompted to enter a passphrase to maximize security: - - # ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/docs /dev/sd{b,c}1 - # ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/logs /dev/sd{b,c}1 - # ssm create -n vol03_homes -p tecmint_vg --fstype ext4 --encrypt luks /mnt/homes /dev/sd{b,c}1 - -Our next task consists in adding the corresponding entries in /etc/fstab in order for those logical volumes to be available on boot. Rather than using the device identifier (/dev/something). - -We will use each LV’s UUID (so that our devices will still be uniquely identified should we add other logical volumes or devices), which we can find out with the blkid utility: - - # blkid -o value UUID /dev/tecmint_vg/vol01_docs - # blkid -o value UUID /dev/tecmint_vg/vol02_logs - # blkid -o value UUID /dev/tecmint_vg/vol03_homes - -In our case: - -![Find Logical Volume UUID](http://www.tecmint.com/wp-content/uploads/2015/04/Logical-Volume-UUID.png) - -Find Logical Volume UUID - -Next, create the /etc/crypttab file with the following contents (change the UUIDs for the ones that apply to your setup): - - docs UUID=ba77d113-f849-4ddf-8048-13860399fca8 none - logs UUID=58f89c5a-f694-4443-83d6-2e83878e30e4 none - homes UUID=92245af6-3f38-4e07-8dd8-787f4690d7ac none - -And insert the following entries in /etc/fstab. Note that device_name (/dev/mapper/device_name) is the mapper identifier that appears in the first column of /etc/crypttab. - - # Logical volume vol01_docs: - /dev/mapper/docs /mnt/docs ext4 defaults 0 2 - # Logical volume vol02_logs - /dev/mapper/logs /mnt/logs ext4 defaults 0 2 - # Logical volume vol03_homes - /dev/mapper/homes /mnt/homes ext4 defaults 0 2 - -Now reboot (systemctl reboot) and you will be prompted to enter the passphrase for each LV. 
Afterwards you can confirm that the mount operation was successful by checking the corresponding mount points: - -![Verify Logical Volume Mount Points](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-LV-Mount-Points.png) - -Verify Logical Volume Mount Points - -### Conclusion ### - -In this tutorial we have started to explore how to set up and configure system storage using classic volume management tools and SSM, which also integrates filesystem and encryption capabilities in one package. This makes SSM an invaluable tool for any sysadmin. - -Let us know if you have any questions or comments – feel free to use the form below to get in touch with us! - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/create-lvm-storage-in-linux/ diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md b/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md deleted file mode 100644 index d4801d9923..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md +++ /dev/null @@ -1,212 +0,0 @@ -RHCSA Series: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares – Part 7 -================================================================================ -In the last article ([RHCSA series Part 6][1]) we started explaining how to set up and configure local system storage using parted and ssm. - -![Configure ACL's and Mounting NFS / Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ACLs-and-Mounting-NFS-Samba-Shares.png) - -RHCSA Series:: Configure ACL’s and Mounting NFS / Samba Shares – Part 7 - -We also discussed how to create and mount encrypted volumes with a password during system boot. In addition, we warned you to avoid performing critical storage management operations on mounted filesystems. With that in mind we will now review the most used file system formats in Red Hat Enterprise Linux 7 and then proceed to cover the topics of mounting, using, and unmounting both manually and automatically network filesystems (CIFS and NFS), along with the implementation of access control lists for your system. - -#### Prerequisites #### - -Before proceeding further, please make sure you have a Samba server and a NFS server available (note that NFSv2 is no longer supported in RHEL 7). - -During this guide we will use a machine with IP 192.168.0.10 with both services running in it as server, and a RHEL 7 box as client with IP address 192.168.0.18. Later in the article we will tell you which packages you need to install on the client. - -### File System Formats in RHEL 7 ### - -Beginning with RHEL 7, XFS has been introduced as the default file system for all architectures due to its high performance and scalability. It currently supports a maximum filesystem size of 500 TB as per the latest tests performed by Red Hat and its partners for mainstream hardware. 
- -Also, XFS enables user_xattr (extended user attributes) and acl (POSIX access control lists) as default mount options, unlike ext3 or ext4 (ext2 is considered deprecated as of RHEL 7), which means that you don’t need to specify those options explicitly either on the command line or in /etc/fstab when mounting a XFS filesystem (if you want to disable such options in this last case, you have to explicitly use no_acl and no_user_xattr). - -Keep in mind that the extended user attributes can be assigned to files and directories for storing arbitrary additional information such as the mime type, character set or encoding of a file, whereas the access permissions for user attributes are defined by the regular file permission bits. - -#### Access Control Lists #### - -As every system administrator, either beginner or expert, is well acquainted with regular access permissions on files and directories, which specify certain privileges (read, write, and execute) for the owner, the group, and “the world” (all others). However, feel free to refer to [Part 3 of the RHCSA series][2] if you need to refresh your memory a little bit. - -However, since the standard ugo/rwx set does not allow to configure different permissions for different users, ACLs were introduced in order to define more detailed access rights for files and directories than those specified by regular permissions. - -In fact, ACL-defined permissions are a superset of the permissions specified by the file permission bits. Let’s see how all of this translates is applied in the real world. - -1. There are two types of ACLs: access ACLs, which can be applied to either a specific file or a directory), and default ACLs, which can only be applied to a directory. If files contained therein do not have a ACL set, they inherit the default ACL of their parent directory. - -2. To begin, ACLs can be configured per user, per group, or per an user not in the owning group of a file. - -3. ACLs are set (and removed) using setfacl, with either the -m or -x options, respectively. - -For example, let us create a group named tecmint and add users johndoe and davenull to it: - - # groupadd tecmint - # useradd johndoe - # useradd davenull - # usermod -a -G tecmint johndoe - # usermod -a -G tecmint davenull - -And let’s verify that both users belong to supplementary group tecmint: - - # id johndoe - # id davenull - -![Verify Users](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Users.png) - -Verify Users - -Let’s now create a directory called playground within /mnt, and a file named testfile.txt inside. We will set the group owner to tecmint and change its default ugo/rwx permissions to 770 (read, write, and execute permissions granted to both the owner and the group owner of the file): - - # mkdir /mnt/playground - # touch /mnt/playground/testfile.txt - # chmod 770 /mnt/playground/testfile.txt - -Then switch user to johndoe and davenull, in that order, and write to the file: - - echo "My name is John Doe" > /mnt/playground/testfile.txt - echo "My name is Dave Null" >> /mnt/playground/testfile.txt - -So far so good. Now let’s have user gacanepa write to the file – and the write operation will, which was to be expected. - -But what if we actually need user gacanepa (who is not a member of group tecmint) to have write permissions on /mnt/playground/testfile.txt? The first thing that may come to your mind is adding that user account to group tecmint. 
But that will give him write permissions on ALL files were the write bit is set for the group, and we don’t want that. We only want him to be able to write to /mnt/playground/testfile.txt. - - # touch /mnt/playground/testfile.txt - # chown :tecmint /mnt/playground/testfile.txt - # chmod 777 /mnt/playground/testfile.txt - # su johndoe - $ echo "My name is John Doe" > /mnt/playground/testfile.txt - $ su davenull - $ echo "My name is Dave Null" >> /mnt/playground/testfile.txt - $ su gacanepa - $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt - -![Manage User Permissions](http://www.tecmint.com/wp-content/uploads/2015/04/User-Permissions.png) - -Manage User Permissions - -Let’s give user gacanepa read and write access to /mnt/playground/testfile.txt. - -Run as root, - - # setfacl -R -m u:gacanepa:rwx /mnt/playground - -and you’ll have successfully added an ACL that allows gacanepa to write to the test file. Then switch to user gacanepa and try to write to the file again: - - $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt - -To view the ACLs for a specific file or directory, use getfacl: - - # getfacl /mnt/playground/testfile.txt - -![Check ACLs of Files](http://www.tecmint.com/wp-content/uploads/2015/04/Check-ACL-of-File.png) - -Check ACLs of Files - -To set a default ACL to a directory (which its contents will inherit unless overwritten otherwise), add d: before the rule and specify a directory instead of a file name: - - # setfacl -m d:o:r /mnt/playground - -The ACL above will allow users not in the owner group to have read access to the future contents of the /mnt/playground directory. Note the difference in the output of getfacl /mnt/playground before and after the change: - -![Set Default ACL in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Set-Default-ACL-in-Linux.png) - -Set Default ACL in Linux - -[Chapter 20 in the official RHEL 7 Storage Administration Guide][3] provides more ACL examples, and I highly recommend you take a look at it and have it handy as reference. - -#### Mounting NFS Network Shares #### - -To show the list of NFS shares available in your server, you can use the showmount command with the -e option, followed by the machine name or its IP address. This tool is included in the nfs-utils package: - - # yum update && yum install nfs-utils - -Then do: - - # showmount -e 192.168.0.10 - -and you will get a list of the available NFS shares on 192.168.0.10: - -![Check Available NFS Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Mount-NFS-Shares.png) - -Check Available NFS Shares - -To mount NFS network shares on the local client using the command line on demand, use the following syntax: - - # mount -t nfs -o [options] remote_host:/remote/directory /local/directory - -which, in our case, translates to: - - # mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs - -If you get the following error message: “Job for rpc-statd.service failed. See “systemctl status rpc-statd.service” and “journalctl -xn” for details.”, make sure the rpcbind service is enabled and started in your system first: - - # systemctl enable rpcbind.socket - # systemctl restart rpcbind.service - -and then reboot. That should do the trick and you will be able to mount your NFS share as explained earlier. 
If you need to mount the NFS share automatically on system boot, add a valid entry to the /etc/fstab file: - - remote_host:/remote/directory /local/directory nfs options 0 0 - -The variables remote_host, /remote/directory, /local/directory, and options (which is optional) are the same ones used when manually mounting an NFS share from the command line. As per our previous example: - - 192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0 - -#### Mounting CIFS (Samba) Network Shares #### - -Samba represents the tool of choice to make a network share available in a network with *nix and Windows machines. To show the Samba shares that are available, use the smbclient command with the -L flag, followed by the machine name or its IP address. This tool is included in the samba-client package: - -You will be prompted for root’s password in the remote host: - - # smbclient -L 192.168.0.10 - -![Check Samba Shares](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Samba-Shares.png) - -Check Samba Shares - -To mount Samba network shares on the local client you will need to install first the cifs-utils package: - - # yum update && yum install cifs-utils - -Then use the following syntax on the command line: - - # mount -t cifs -o credentials=/path/to/credentials/file //remote_host/samba_share /local/directory - -which, in our case, translates to: - - # mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba - -where smbcredentials: - - username=gacanepa - password=XXXXXX - -is a hidden file inside root’s home (/root/) with permissions set to 600, so that no one else but the owner of the file can read or write to it. - -Please note that the samba_share is the name of the Samba share as returned by smbclient -L remote_host as shown above. - -Now, if you need the Samba share to be available automatically on system boot, add a valid entry to the /etc/fstab file as follows: - - //remote_host:/samba_share /local/directory cifs options 0 0 - -The variables remote_host, /samba_share, /local/directory, and options (which is optional) are the same ones used when manually mounting a Samba share from the command line. Following the definitions given in our previous example: - - //192.168.0.10/gacanepa /mnt/samba cifs credentials=/root/smbcredentials,defaults 0 0 - -### Conclusion ### - -In this article we have explained how to set up ACLs in Linux, and discussed how to mount CIFS and NFS network shares in a RHEL 7 client. - -I recommend you to practice these concepts and even mix them (go ahead and try to set ACLs in mounted network shares) until you feel comfortable. If you have questions or comments feel free to use the form below to contact us anytime. Also, feel free to share this article through your social networks. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ -[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ -[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html \ No newline at end of file diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md b/sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md deleted file mode 100644 index a381b1c94a..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md +++ /dev/null @@ -1,215 +0,0 @@ -RHCSA Series: Securing SSH, Setting Hostname and Enabling Network Services – Part 8 -================================================================================ -As a system administrator you will often have to log on to remote systems to perform a variety of administration tasks using a terminal emulator. You will rarely sit in front of a real (physical) terminal, so you need to set up a way to log on remotely to the machines that you will be asked to manage. - -In fact, that may be the last thing that you will have to do in front of a physical terminal. For security reasons, using Telnet for this purpose is not a good idea, as all traffic goes through the wire in unencrypted, plain text. - -In addition, in this article we will also review how to configure network services to start automatically at boot and learn how to set up network and hostname resolution statically or dynamically. - -![RHCSA: Secure SSH and Enable Network Services](http://www.tecmint.com/wp-content/uploads/2015/05/Secure-SSH-Server-and-Enable-Network-Services.png) - -RHCSA: Secure SSH and Enable Network Services – Part 8 - -### Installing and Securing SSH Communication ### - -For you to be able to log on remotely to a RHEL 7 box using SSH, you will have to install the openssh, openssh-clients and openssh-servers packages. The following command not only will install the remote login program, but also the secure file transfer tool, as well as the remote file copy utility: - - # yum update && yum install openssh openssh-clients openssh-servers - -Note that it’s a good idea to install the server counterparts as you may want to use the same machine as both client and server at some point or another. - -After installation, there is a couple of basic things that you need to take into account if you want to secure remote access to your SSH server. The following settings should be present in the `/etc/ssh/sshd_config` file. - -1. Change the port where the sshd daemon will listen on from 22 (the default value) to a high port (2000 or greater), but first make sure the chosen port is not being used. - -For example, let’s suppose you choose port 2500. 
Use [netstat][1] in order to check whether the chosen port is being used or not: - - # netstat -npltu | grep 2500 - -If netstat does not return anything, you can safely use port 2500 for sshd, and you should change the Port setting in the configuration file as follows: - - Port 2500 - -2. Only allow protocol 2: - -Protocol 2 - -3. Configure the authentication timeout to 2 minutes, do not allow root logins, and restrict to a minimum the list of users which are allowed to login via ssh: - - LoginGraceTime 2m - PermitRootLogin no - AllowUsers gacanepa - -4. If possible, use key-based instead of password authentication: - - PasswordAuthentication no - RSAAuthentication yes - PubkeyAuthentication yes - -This assumes that you have already created a key pair with your user name on your client machine and copied it to your server as explained here. - -- [Enable SSH Passwordless Login][2] - -### Configuring Networking and Name Resolution ### - -1. Every system administrator should be well acquainted with the following system-wide configuration files: - -- /etc/hosts is used to resolve names <---> IPs in small networks. - -Every line in the `/etc/hosts` file has the following structure: - - IP address - Hostname - FQDN - -For example, - - 192.168.0.10 laptop laptop.gabrielcanepa.com.ar - -2. `/etc/resolv.conf` specifies the IP addresses of DNS servers and the search domain, which is used for completing a given query name to a fully qualified domain name when no domain suffix is supplied. - -Under normal circumstances, you don’t need to edit this file as it is managed by the system. However, should you want to change DNS servers, be advised that you need to stick to the following structure in each line: - - nameserver - IP address - -For example, - - nameserver 8.8.8.8 - -3. 3. `/etc/host.conf` specifies the methods and the order by which hostnames are resolved within a network. In other words, tells the name resolver which services to use, and in what order. - -Although this file has several options, the most common and basic setup includes a line as follows: - - order bind,hosts - -Which indicates that the resolver should first look in the nameservers specified in `resolv.conf` and then to the `/etc/hosts` file for name resolution. - -4. `/etc/sysconfig/network` contains routing and global host information for all network interfaces. The following values may be used: - - NETWORKING=yes|no - HOSTNAME=value - -Where value should be the Fully Qualified Domain Name (FQDN). - - GATEWAY=XXX.XXX.XXX.XXX - -Where XXX.XXX.XXX.XXX is the IP address of the network’s gateway. - - GATEWAYDEV=value - -In a machine with multiple NICs, value is the gateway device, such as enp0s3. - -5. Files inside `/etc/sysconfig/network-scripts` (network adapters configuration files). - -Inside the directory mentioned previously, you will find several plain text files named. - - ifcfg-name - -Where name is the name of the NIC as returned by ip link show: - -![Check Network Link Status](http://www.tecmint.com/wp-content/uploads/2015/05/Check-IP-Address.png) - -Check Network Link Status - -For example: - -![Network Files](http://www.tecmint.com/wp-content/uploads/2015/05/Network-Files.png) - -Network Files - -Other than for the loopback interface, you can expect a similar configuration for your NICs. Note that some variables, if set, will override those present in `/etc/sysconfig/network` for this particular interface. 
Each line is commented for clarification in this article but in the actual file you should avoid comments: - - HWADDR=08:00:27:4E:59:37 # The MAC address of the NIC - TYPE=Ethernet # Type of connection - BOOTPROTO=static # This indicates that this NIC has been assigned a static IP. If this variable was set to dhcp, the NIC will be assigned an IP address by a DHCP server and thus the next two lines should not be present in that case. - IPADDR=192.168.0.18 - NETMASK=255.255.255.0 - GATEWAY=192.168.0.1 - NM_CONTROLLED=no # Should be added to the Ethernet interface to prevent NetworkManager from changing the file. - NAME=enp0s3 - UUID=14033805-98ef-4049-bc7b-d4bea76ed2eb - ONBOOT=yes # The operating system should bring up this NIC during boot - -### Setting Hostnames ### - -In Red Hat Enterprise Linux 7, the hostnamectl command is used to both query and set the system’s hostname. - -To display the current hostname, type: - - # hostnamectl status - -![Check System hostname in CentOS 7](http://www.tecmint.com/wp-content/uploads/2015/05/Check-System-hostname.png) - -Check System Hostname - -To change the hostname, use - - # hostnamectl set-hostname [new hostname] - -For example, - - # hostnamectl set-hostname cinderella - -For the changes to take effect you will need to restart the hostnamed daemon (that way you will not have to log off and on again in order to apply the change): - - # systemctl restart systemd-hostnamed - -![Set System Hostname in CentOS 7](http://www.tecmint.com/wp-content/uploads/2015/05/Set-System-Hostname.png) - -Set System Hostname - -In addition, RHEL 7 also includes the nmcli utility that can be used for the same purpose. To display the hostname, run: - - # nmcli general hostname - -and to change it: - - # nmcli general hostname [new hostname] - -For example, - - # nmcli general hostname rhel7 - -![Set Hostname Using nmcli Command](http://www.tecmint.com/wp-content/uploads/2015/05/nmcli-command.png) - -Set Hostname Using nmcli Command - -### Starting Network Services on Boot ### - -To wrap up, let us see how we can ensure that network services are started automatically on boot. In simple terms, this is done by creating symlinks to certain files specified in the [Install] section of the service configuration files. - -In the case of firewalld (/usr/lib/systemd/system/firewalld.service): - - [Install] - WantedBy=basic.target - Alias=dbus-org.fedoraproject.FirewallD1.service - -To enable the service: - - # systemctl enable firewalld - -On the other hand, disabling firewalld entitles removing the symlinks: - - # systemctl disable firewalld - -![Enable Service at System Boot](http://www.tecmint.com/wp-content/uploads/2015/05/Enable-Service-at-System-Boot.png) - -Enable Service at System Boot - -### Conclusion ### - -In this article we have summarized how to install and secure connections via SSH to a RHEL server, how to change its name, and finally how to ensure that network services are started on boot. If you notice that a certain service has failed to start properly, you can use systemctl status -l [service] and journalctl -xn to troubleshoot it. - -Feel free to let us know what you think about this article using the comment form below. Questions are also welcome. We look forward to hearing from you! 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network-services-in-rhel-7/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/ -[2]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ \ No newline at end of file diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md b/sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md deleted file mode 100644 index 6a1e544de3..0000000000 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md +++ /dev/null @@ -1,176 +0,0 @@ -RHCSA Series: Installing, Configuring and Securing a Web and FTP Server – Part 9 -================================================================================ -A web server (also known as a HTTP server) is a service that handles content (most commonly web pages, but other types of documents as well) over to a client in a network. - -A FTP server is one of the oldest and most commonly used resources (even to this day) to make files available to clients on a network in cases where no authentication is necessary since FTP uses username and password without encryption. - -The web server available in RHEL 7 is version 2.4 of the Apache HTTP Server. As for the FTP server, we will use the Very Secure Ftp Daemon (aka vsftpd) to establish connections secured by TLS. - -![Configuring and Securing Apache and FTP Server](http://www.tecmint.com/wp-content/uploads/2015/05/Install-Configure-Secure-Apache-FTP-Server.png) - -RHCSA: Installing, Configuring and Securing Apache and FTP – Part 9 - -In this article we will explain how to install, configure, and secure a web server and a FTP server in RHEL 7. - -### Installing Apache and FTP Server ### - -In this guide we will use a RHEL 7 server with a static IP address of 192.168.0.18/24. To install Apache and VSFTPD, run the following command: - - # yum update && yum install httpd vsftpd - -When the installation completes, both services will be disabled initially, so we need to start them manually for the time being and enable them to start automatically beginning with the next boot: - - # systemctl start httpd - # systemctl enable httpd - # systemctl start vsftpd - # systemctl enable vsftpd - -In addition, we have to open ports 80 and 21, where the web and ftp daemons are listening, respectively, in order to allow access to those services from the outside: - - # firewall-cmd --zone=public --add-port=80/tcp --permanent - # firewall-cmd --zone=public --add-service=ftp --permanent - # firewall-cmd --reload - -To confirm that the web server is working properly, fire up your browser and enter the IP of the server. You should see the test page: - -![Confirm Apache Web Server](http://www.tecmint.com/wp-content/uploads/2015/05/Confirm-Apache-Web-Server.png) - -Confirm Apache Web Server - -As for the ftp server, we will have to configure it further, which we will do in a minute, before confirming that it’s working as expected. 
- -### Configuring and Securing Apache Web Server ### - -The main configuration file for Apache is located in `/etc/httpd/conf/httpd.conf`, but it may rely on other files present inside `/etc/httpd/conf.d`. - -Although the default configuration should be sufficient for most cases, it’s a good idea to become familiar with all the available options as described in the [official documentation][1]. - -As always, make a backup copy of the main configuration file before editing it: - - # cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.$(date +%Y%m%d) - -Then open it with your preferred text editor and look for the following variables: - -- ServerRoot: the directory where the server’s configuration, error, and log files are kept. -- Listen: instructs Apache to listen on specific IP address and / or ports. -- Include: allows the inclusion of other configuration files, which must exist. Otherwise, the server will fail, as opposed to the IncludeOptional directive, which is silently ignored if the specified configuration files do not exist. -- User and Group: the name of the user/group to run the httpd service as. -- DocumentRoot: The directory out of which Apache will serve your documents. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations. -- ServerName: this directive sets the hostname (or IP address) and port that the server uses to identify itself. - -The first security measure will consist of creating a dedicated user and group (i.e. tecmint/tecmint) to run the web server as and changing the default port to a higher one (9000 in this case): - - ServerRoot "/etc/httpd" - Listen 192.168.0.18:9000 - User tecmint - Group tecmint - DocumentRoot "/var/www/html" - ServerName 192.168.0.18:9000 - -You can test the configuration file with. - - # apachectl configtest - -and if everything is OK, then restart the web server. - - # systemctl restart httpd - -and don’t forget to enable the new port (and disable the old one) in the firewall: - - # firewall-cmd --zone=public --remove-port=80/tcp --permanent - # firewall-cmd --zone=public --add-port=9000/tcp --permanent - # firewall-cmd --reload - -Note that, due to SELinux policies, you can only use the ports returned by - - # semanage port -l | grep -w '^http_port_t' - -for the web server. - -If you want to use another port (i.e. TCP port 8100), you will have to add it to SELinux port context for the httpd service: - -# semanage port -a -t http_port_t -p tcp 8100 - -![Add Apache Port to SELinux Policies](http://www.tecmint.com/wp-content/uploads/2015/05/Add-Apache-Port-to-SELinux-Policies.png) - -Add Apache Port to SELinux Policies - -To further secure your Apache installation, follow these steps: - -1. The user Apache is running as should not have access to a shell: - - # usermod -s /sbin/nologin tecmint - -2. Disable directory listing in order to prevent the browser from displaying the contents of a directory if there is no index.html present in that directory. - -Edit `/etc/httpd/conf/httpd.conf` (and the configuration files for virtual hosts, if any) and make sure that the Options directive, both at the top and at Directory block levels, is set to None: - - Options None - -3. Hide information about the web server and the operating system in HTTP responses. Edit /etc/httpd/conf/httpd.conf as follows: - - ServerTokens Prod - ServerSignature Off - -Now you are ready to start serving content from your /var/www/html directory. 
- -### Configuring and Securing FTP Server ### - -As in the case of Apache, the main configuration file for Vsftpd `(/etc/vsftpd/vsftpd.conf)` is well commented and while the default configuration should suffice for most applications, you should become acquainted with the documentation and the man page `(man vsftpd.conf)` in order to operate the ftp server more efficiently (I can’t emphasize that enough!). - -In our case, these are the directives used: - - anonymous_enable=NO - local_enable=YES - write_enable=YES - local_umask=022 - dirmessage_enable=YES - xferlog_enable=YES - connect_from_port_20=YES - xferlog_std_format=YES - chroot_local_user=YES - allow_writeable_chroot=YES - listen=NO - listen_ipv6=YES - pam_service_name=vsftpd - userlist_enable=YES - tcp_wrappers=YES - -By using `chroot_local_user=YES`, local users will be (by default) placed in a chroot’ed jail in their home directory right after login. This means that local users will not be able to access any files outside their corresponding home directories. - -Finally, to allow ftp to read files in the user’s home directory, set the following SELinux boolean: - - # setsebool -P ftp_home_dir on - -You can now connect to the ftp server using a client such as Filezilla: - -![Check FTP Connection](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FTP-Connection.png) - -Check FTP Connection - -Note that the `/var/log/xferlo`g log records downloads and uploads, which concur with the above directory listing: - -![Monitor FTP Download and Upload](http://www.tecmint.com/wp-content/uploads/2015/05/Monitor-FTP-Download-Upload.png) - -Monitor FTP Download and Upload - -Read Also: [Limit FTP Network Bandwidth Used by Applications in a Linux System with Trickle][2] - -### Summary ### - -In this tutorial we have explained how to set up a web and a ftp server. Due to the vastness of the subject, it is not possible to cover all the aspects of these topics (i.e. virtual web hosts). Thus, I recommend you also check other excellent articles in this website about [Apache][3]. 
- --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-and-ftp-in-rhel/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:http://httpd.apache.org/docs/2.4/ -[2]:http://www.tecmint.com/manage-and-limit-downloadupload-bandwidth-with-trickle-in-linux/ -[3]:http://www.google.com/cse?cx=partner-pub-2601749019656699:2173448976&ie=UTF-8&q=virtual+hosts&sa=Search&gws_rd=cr&ei=Dy9EVbb0IdHisASnroG4Bw#gsc.tab=0&gsc.q=apache \ No newline at end of file diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md b/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md index 04c7d7a29e..307ec72515 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 10--Yum Package Management, Automating Tasks with Cron and Monitoring System Logs.md @@ -1,3 +1,4 @@ +[xiqingongzi translating] RHCSA Series: Yum Package Management, Automating Tasks with Cron and Monitoring System Logs – Part 10 ================================================================================ In this article we will review how to install, update, and remove packages in Red Hat Enterprise Linux 7. We will also cover how to automate tasks using cron, and will finish this guide explaining how to locate and interpret system logs files with the focus of teaching you why all of these are essential skills for every system administrator. @@ -194,4 +195,4 @@ via: http://www.tecmint.com/yum-package-management-cron-job-scheduling-monitorin [1]:http://www.tecmint.com/20-linux-yum-yellowdog-updater-modified-commands-for-package-mangement/ [2]:http://www.tecmint.com/20-practical-examples-of-rpm-commands-in-linux/ [3]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/ -[4]:http://www.tecmint.com/dmesg-commands/ \ No newline at end of file +[4]:http://www.tecmint.com/dmesg-commands/ diff --git a/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md b/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md index fd27f4c6fc..022953429d 100644 --- a/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md +++ b/sources/tech/RHCSA Series/RHCSA Series--Part 11--Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.md @@ -1,3 +1,5 @@ +FSSlc Translating + RHCSA Series: Firewall Essentials and Network Traffic Control Using FirewallD and Iptables – Part 11 ================================================================================ In simple words, a firewall is a security system that controls the incoming and outgoing traffic in a network based on a set of predefined rules (such as the packet destination / source or type of traffic, for example). 
@@ -188,4 +190,4 @@ via: http://www.tecmint.com/firewalld-vs-iptables-and-control-network-traffic-in [3]:http://www.tecmint.com/configure-iptables-firewall/ [4]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html [5]:http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-and-ftp-in-rhel/ -[6]:http://www.tecmint.com/firewalld-rules-for-centos-7/ \ No newline at end of file +[6]:http://www.tecmint.com/firewalld-rules-for-centos-7/ diff --git a/translated/share/20150821 Top 4 open source command-line email clients.md b/translated/share/20150821 Top 4 open source command-line email clients.md new file mode 100644 index 0000000000..db28f4c543 --- /dev/null +++ b/translated/share/20150821 Top 4 open source command-line email clients.md @@ -0,0 +1,80 @@ +KevinSJ Translating +四大开源版命令行邮件客户端 +================================================================================ +![](http://opensource.com/sites/default/files/styles/image-full-size/public/images/life/life_mail.png) + +无论你承认与否,email并没有消亡。对依赖命令行的 Linux 高级用户而言,离开 shell 转而使用传统的桌面或网页版邮件客户端并不合适。归根结底,命令行最善于处理文件,特别是文本文件,能使效率倍增。 + +幸运的是,也有不少的命令行邮件客户端,他们的用户大都乐于帮助你入门并回答你使用中遇到的问题。但别说我没警告过你:一旦你完全掌握了其中一个客户端,要再使用图基于图形界面的客户端将回变得很困难! + +要安装下述四个客户端中的任何一个是非常容易的;主要 Linux 发行版的软件仓库中都提供此类软件,并可通过包管理器进行安装。你也可以再其他的操作系统中寻找并安装这类客户端,但我并未尝试过也没有相关的经验。 + +### Mutt ### + +- [项目主页][1] +- [源代码][2] +- 授权协议: [GPLv2][3] + +许多终端爱好者都听说过甚至熟悉 Mutt 和 Alpine, 他们已经存在多年。让我们先看看 Mutt。 + +Mutt 支持许多你所期望 email 系统支持的功能:会话,颜色区分,支持多语言,同时还有很多设置选项。它支持 POP3 和 IMAP, 两个主要的邮件传输协议,以及许多邮箱格式。自从1995年诞生以来, Mutt 即拥有一个活跃的开发社区,但最近几年,新版本更多的关注于修复问题和安全更新而非提供新功能。这对大多数 Mutt 用户而言并无大碍,他们钟爱这样的界面,并支持此项目的口号:“所有邮件客户端都很烂,只是这个烂的没那么彻底。” + +### Alpine ### + +- [项目主页][4] +- [源代码][5] +- 授权协议: [Apache 2.0][6] + +Alpine 是另一款知名的终端邮件客户端,它由华盛顿大学开发,初衷是作为 UW 开发的 Pine 的开源,支持unicode的替代版本。 + +Alpine 不仅容易上手,还为高级用户提供了很多特性,它支持很多协议 —— IMAP, LDAP, NNTP, POP, SMTP 等,同时也支持不同的邮箱格式。Alpine 内置了一款名为 Pico 的可独立使用的简易文本编辑工具,但你也可以使用你常用的文本编辑器: vi, Emacs等。 + +尽管Alpine的升级并不频繁,名为re-alpine的分支为不同的开发者提供了开发此项目的机会。 + +Alpine 支持再屏幕上显示上下文帮助,但一些用户回喜欢 Mutt 式的独立说明手册,但这两种提供了较好的说明。用户可以同时尝试 Mutt 和 Alpine,并由个人喜好作出决定,也可以尝试以下几个比较新颖的选项。 + +### Sup ### + +- [项目主页][7] +- [源代码][8] +- 授权协议: [GPLv2][9] + +Sup 是我们列表中能被称为“大容量邮件客户端”的两个之一。自称“为邮件较多的人设计的命令行客户端”,Sup 的目标是提供一个支持层次化设计并允许再为会话添加标签进行简单整理的界面。 + +由于采用 Ruby 编写,Sup 能提供十分快速的搜索并能自动管理联系人列表,同时还允许自定义插件。对于使用 Gmail 作为网页邮件客户端的人们,这些功能都是耳熟能详的,这就使得 Sup 成为一种比较现代的命令行邮件管理方式。 +Written in Ruby, Sup provides exceptionally fast searching, manages your contact list automatically, and allows for custom extensions. For people who are used to Gmail as a webmail interface, these features will seem familiar, and Sup might be seen as a more modern approach to email on the command line. + +### Notmuch ### + +- [项目主页][10] +- [源代码][11] +- 授权协议: [GPLv3][12] + +"Sup? Notmuch." Notmuch 作为 Sup 的回应,最初只是重写了 Sup 的一小部分来提高性能。最终,这个项目逐渐变大并成为了一个独立的邮件客户端。 + +Notmuch是一款相当精简的软件。它并不能独立的收发邮件,启用 Notmuch 的快速搜索功能的代码实际上是一个需要调用的独立库。但这样的模块化设计也使得你能使用你最爱的工具进行写信,发信和收信,集中精力做好一件事情并有效浏览和管理你的邮件。 + +这个列表并不完整,还有很多 email 客户端,他们或许才是你的最佳选择。你喜欢什么客户端呢? 
+-------------------------------------------------------------------------------- + +via: http://opensource.com/life/15/8/top-4-open-source-command-line-email-clients + +作者:[Jason Baker][a] +译者:[KevinSJ](https://github.com/KevinSj) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://opensource.com/users/jason-baker +[1]:http://www.mutt.org/ +[2]:http://dev.mutt.org/trac/ +[3]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html +[4]:http://www.washington.edu/alpine/ +[5]:http://www.washington.edu/alpine/acquire/ +[6]:http://www.apache.org/licenses/LICENSE-2.0 +[7]:http://supmua.org/ +[8]:https://github.com/sup-heliotrope/sup +[9]:https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html +[10]:http://notmuchmail.org/ +[11]:http://notmuchmail.org/releases/ +[12]:http://www.gnu.org/licenses/gpl.html diff --git a/translated/share/20150826 Five Super Cool Open Source Games.md b/translated/share/20150826 Five Super Cool Open Source Games.md new file mode 100644 index 0000000000..30ca09e171 --- /dev/null +++ b/translated/share/20150826 Five Super Cool Open Source Games.md @@ -0,0 +1,66 @@ +Translated by H-mudcup +五大超酷的开源游戏 +================================================================================ +在2014年和2015年,Linux 成了一堆流行商业品牌的家,例如备受欢迎的 Borderlands、Witcher、Dead Island 和 CS系列游戏。虽然这是令人激动的消息,但这跟玩家的预算有什么关系?商业品牌很好,但更好的是由了解玩家喜好的开发者开发的免费的替代品。 + +前段时间,我偶然看到了一个三年前发布的 YouTube 视频,标题非常的有正能量[5个不算糟糕的开源游戏][1]。虽然视频表扬了一些开源游戏,我还是更喜欢用一个更加热情的方式来切入这个话题,至少如标题所说。所以,下面是我的一份五大超酷开源游戏的清单。 + +### Tux Racer ### + +![Tux Racer](http://fossforce.com/wp-content/uploads/2015/08/tuxracer-550x413.jpg) + +Tux Racer + +[《Tux Racer》][2]是这份清单上的第一个游戏,因为我对这个游戏很熟悉。我和兄弟与[电脑上的孩子们][4]项目在[最近一次去墨西哥的路途中][3] Tux Racer 是孩子和教师都喜欢玩的游戏之一。在这个游戏中,玩家使用 Linux 吉祥物,企鹅 Tux,在下山雪道上以计时赛的方式进行比赛。玩家们不断挑战他们自己的最佳纪录。目前还没有多玩家版本,但这是有可能改变的。适用于 Linux、OS X、Windows 和 Android。 + +### Warsow ### + +![Warsow](http://fossforce.com/wp-content/uploads/2015/08/warsow-550x413.jpg) + +Warsow + +[《Warsow》][5]网站解释道:“设定是有未来感的卡通世界,Warsow 是个完全开放的适用于 Windows、Linux 和 Mac OS X平台的快节奏第一人称射击游戏(FPS)。Warsow 是尊重的艺术和网络中的体育精神。(Warsow is the Art of Respect and Sportsmanship Over the Web.大写字母组成Warsow。)” 我很不情愿的把 FPS 类放到了这个列表中,因为很多人玩过这类的游戏,但是我的确被 Warsow 打动了。它对很多动作进行了优先级排序,游戏节奏很快,一开始就有八个武器。卡通化的风格让玩的过程变得没有那么严肃,更加的休闲,非常适合可以和亲友一同玩。然而,他却以充满竞争的游戏自居,并且当我体验这个游戏时,我发现周围确实有一些专家级的玩家。适用于 Linux、Windows 和 OS X。 + +### M.A.R.S——一个荒诞的射击游戏 ### + +![M.A.R.S. 
- A ridiculous shooter](http://fossforce.com/wp-content/uploads/2015/08/MARS-screenshot-550x344.jpg) + +M.A.R.S.——一个荒诞的射击游戏 + +[《M.A.R.S——一个荒诞的射击游戏》][6]之所以吸引人是因为他充满活力的色彩和画风。支持两个玩家使用同一个键盘,而一个在线多玩家版本目前正在开发中——这意味着想要和朋友们一起玩暂时还要等等。不论如何,它是个可以使用几个不同飞船和武器的有趣的太空射击游戏。飞船的形状不同,从普通的枪、激光、散射枪到更有趣的武器(随机出来的飞船中有一个会对敌人发射泡泡,这为这款混乱的游戏增添了很多乐趣)。游戏几种模式,比如标准模式和对方进行殊死搏斗以获得高分或先达到某个分数线,还有其他的模式,空间球(Spaceball)、坟坑(Grave-itation Pit)和保加农炮(Cannon Keep)。适用于 Linux、Windows 和 OS X。 + +### Valyria Tear ### + +![Valyria Tear](http://fossforce.com/wp-content/uploads/2015/08/bronnan-jump-to-enemy-550x413.jpg) + +Valyria Tear + +[Valyria Tear][7] 类似几年来拥有众多粉丝的角色扮演游戏(RPG)。故事设定在梦幻游戏的通用年代,充满了骑士、王国和魔法,以及主要角色 Bronann。设计团队做的非常棒,在设计这个世界和实现玩家对这类游戏所有的期望:隐藏的宝藏、偶遇的怪物、非玩家操纵角色(NPC)的互动以及所有 RPG 不可或缺的:在低级别的怪物上刷经验直到可以面对大 BOSS。我在试玩的时候,时间不允许我太过深入到这个游戏故事中,但是感兴趣的人可以看 YouTube 上由 Yohann Ferriera 用户发的‘[Let’s Play][8]’系列视频。适用于 Linux、Windows 和 OS X。 + +### SuperTuxKart ### + +![SuperTuxKart](http://fossforce.com/wp-content/uploads/2015/08/hacienda_tux_antarctica-550x293.jpg) + +SuperTuxKart + +最后一个同样好玩的游戏是 [SuperTuxKart][9],一个效仿 Mario Kart(马里奥卡丁车)但丝毫不必原作差的好游戏。它在2000年-2004年间开始以 Tux Kart 开发,但是在成品中有错误,结果开发就停止了几年。从2006年开始重新开发时起,它就一直在改进,直到四个月前0.9版首次发布。在游戏里,我们的老朋友 Tux 与马里奥和其他一些开源吉祥物一同开始。其中一个熟悉的面孔是 Suzanne,Blender 的那只吉祥物猴子。画面很给力,游戏很流畅。虽然在线游戏还在计划阶段,但是分屏多玩家游戏是可以的。一个电脑最多可以四个玩家同时玩。适用于 Linux、Windows、OS X、AmigaOS 4、AROS 和 MorphOS。 + +-------------------------------------------------------------------------------- + +via: http://fossforce.com/2015/08/five-super-cool-open-source-games/ + +作者:Hunter Banks +译者:[H-mudcup](https://github.com/H-mudcup) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[1]:https://www.youtube.com/watch?v=BEKVl-XtOP8 +[2]:http://tuxracer.sourceforge.net/download.html +[3]:http://fossforce.com/2015/07/banks-family-values-texas-linux-fest/ +[4]:http://www.kidsoncomputers.org/an-amazing-week-in-oaxaca +[5]:https://www.warsow.net/download +[6]:http://mars-game.sourceforge.net/ +[7]:http://valyriatear.blogspot.com/ +[8]:https://www.youtube.com/channel/UCQ5KrSk9EqcT_JixWY2RyMA +[9]:http://supertuxkart.sourceforge.net/ diff --git a/translated/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md b/translated/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md new file mode 100644 index 0000000000..d9ab3ab9f3 --- /dev/null +++ b/translated/share/20150827 Xtreme Download Manager Updated With Fresh GUI.md @@ -0,0 +1,68 @@ +Xtreme下载管理器升级全新用户界面 +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-Linux.jpg) + +[Xtreme 下载管理器][1], 毫无疑问是[Linux界最好的下载管理器][2]之一 , 它的新版本名叫 XDM 2015 ,这次的新版本给我们带来了全新的外观体验! 
+ +Xtreme 下载管理器,也被称作 XDM 或 XDMAN,它是一个跨平台的下载管理器,可以用于 Linux、Windows 和 Mac OS X 系统之上。同时它兼容于主流的浏览器,如 Chrome, Firefox, Safari 等,因此当你从浏览器下载东西的时候可以直接使用 XDM 下载。 + +当你的网络连接超慢并且需要管理下载文件的时候,像 XDM 这种软件可以帮到你大忙。例如说你在一个慢的要死的网络速度下下载一个超大文件, XDM 可以帮助你暂停并且继续下载。 + +XDM 的主要功能: + +- 暂停和继续下载 +- [从 YouTube 下载视频][3],其他视频网站同样适用 +- 强制聚合 +- 下载加速 +- 计划下载 +- 下载限速 +- 与浏览器整合 +- 支持代理服务器 + +下面你可以看到 XDM 新旧版本之间的差别。 + +![Old XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme-Download-Manager-700x400_c.jpg) + +老版本XDM + +![New XDM](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Xtreme_Download_Manager.png) + +新版本XDM + +### 在基于 Ubuntu 的 Linux 发行版上安装 Xtreme下载管理器 ### + +感谢 Noobslab 提供的 PPA,你可以使用以下命令来安装 Xtreme 下载管理器。虽然 XDM 依赖 Java,但是托 PPA 的福,你不需要对其进行单独的安装。 + + sudo add-apt-repository ppa:noobslab/apps + sudo apt-get update + sudo apt-get install xdman + +以上的 PPA 可以在 Ubuntu 或者其他基于 Ubuntu 的发行版上使用,如 Linux Mint, elementary OS, Linux Lite 等。 + +#### 删除 XDM #### + +如果你是使用 PPA 安装的 XDM ,可以通过以下命令将其删除: + + sudo apt-get remove xdman + sudo add-apt-repository --remove ppa:noobslab/apps + +对于其他Linux发行版,可以通过以下连接下载: + +- [Download Xtreme Download Manager][4] + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/xtreme-download-manager-install/ + +作者:[Abhishek][a] +译者:[译者ID](https://github.com/mr-ping) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://xdman.sourceforge.net/ +[2]:http://itsfoss.com/4-best-download-managers-for-linux/ +[3]:http://itsfoss.com/download-youtube-videos-ubuntu/ +[4]:http://xdman.sourceforge.net/download.html + diff --git a/translated/talk/20141223 Defending the Free Linux World.md b/translated/talk/20141223 Defending the Free Linux World.md new file mode 100644 index 0000000000..cabc8af041 --- /dev/null +++ b/translated/talk/20141223 Defending the Free Linux World.md @@ -0,0 +1,127 @@ +Translating by H-mudcup + +守卫自由的Linux世界 +================================================================================ +![](http://www.linuxinsider.com/ai/908455/open-invention-network.jpg) + +**"合作是开源的一部分。OIN的CEO Keith Bergelt解释说,开放创新网络(Open Invention Network)模式允许众多企业和公司决定它们该在哪较量,在哪合作。随着开源的演变,“我们需要为合作创造渠道。否则我们将会有几百个团体把数十亿美元花费到同样的技术上。”** + +[开放创新网络(Open Invention Network)][1],既OIN,正在全球范围内开展让 Linux 远离专利诉讼的伤害的活动。它的努力得到了一千多个公司的热烈回应,它们的加入让这股力量成为了历史上最大的反专利管理组织。 + +开放创新网络以白帽子组织的身份创建于2005年,目的是保护 Linux 免受来自许可证方面的困扰。包括Google、 IBM、 NEC、 Novell、 Philips、 [Red Hat][2] 和 Sony这些成员的董事会给予了它可观的经济支持。世界范围内的多个组织通过签署自由 OIN 协议加入了这个社区。 + +创立开放创新网络的组织成员把它当作利用知识产权保护 Linux 的大胆尝试。它的商业模式非常的难以理解。它要求它的成员持无专利证并永远放弃由于 Linux 相关知识产权起诉其他成员的机会。 + +然而,从 Linux 收购风波——想想服务器和云平台——那时起,保护 Linux 知识产权的策略就变得越加的迫切。 + +在过去的几年里,Linux 的版图曾经历了一场变革。OIN 不必再向人们解释这个组织的定义,也不必再解释为什么 Linux 需要保护。据 OIN 的 CEO Keith Bergelt 说,现在 Linux 的重要性得到了全世界的关注。 + +“我们已经见到了一场人们了解到OIN如何让合作受益的文化变革,”他对 LinuxInsider 说。 + +### 如何运作 ### + +开放创新网络使用专利权的方式创建了一个协作环境。这种方法有助于确保创新的延续。这已经使很多软件商贩、顾客、新型市场和投资者受益。 + +开放创新网络的专利证可以让任何公司、公共机构或个人免版权使用。这些权利的获得建立在签署者同意不会专为了维护专利而攻击 Linux 系统的基础上。 + +OIN 确保 Linux 的源代码保持开放的状态。这让编程人员、设备出售人员、独立软件开发者和公共机构在投资和使用 Linux 时不用过多的担心知识产权的问题。这让对 Linux 进行重新装配、嵌入和使用的公司省了不少钱。 + +“随着版权许可证越来越广泛的使用,对 OIN 许可证的需求也变得更加的迫切。现在,人们正在寻找更加简单或更功利的解决方法”,Bergelt 说。 + +OIN 法律防御援助对成员是免费的。成员必须承诺不对 OIN 名单带上的软件发起专利诉讼。为了保护该软件,他们也同意提供他们自己的专利。最终,这些保证将导致几十万的交叉许可通过网络连接,Bergelt 如此解释道。 + +### 填补法律漏洞 ### + +“OIN 正在做的事情是非常必要的。它提供额另一层 IP 保护,”[休斯顿法律中心大学][3]的副教授 Greg R. 
Vetter 这样说道。 + +他回答 LinuxInsider 说,某些人设想的第二版 GPL 许可证会隐含的提供专利许可,但是律师们更喜欢明确的许可。 + +OIN 所提供的许可填补了这个空白。它还明确的覆盖了 Linux 核心。据 Vetter 说,明确的专利许可并不是 GPLv2 中的必要部分,但是这个部分曾在 GPLv3 中。 + +拿一个在 GPLv3 中写了10000行代码的代码编写者来说。随着时间推移,其他的代码编写者会贡献更多行的代码到 IP 中。GPLv3 中的软件专利许可条款将保护所有基于参与其中的贡献者的专利的全部代码的使用,Vetter 如此说道。 + +### 并不完全一样 ### + +专利权和许可证在法律结构上层层叠叠互相覆盖。弄清两者对开源软件的作用就像是穿越雷区。 + +Vetter 说“许可证是授予通常是建立在专利和版权法律上的额外权利的法律结构。许可证被认为是给予了人们做一些的可能会侵犯到其他人的 IP 权利的事的许可。” + +Vetter 指出,很多自由开源许可证(例如 Mozilla 公共许可、GNU、GPLv3 以及 Apache 软件许可)融合了某些互惠专利权的形式。Vetter 指出,像 BSD 和 MIT 这样旧的许可证不会提到专利。 + +一个软件的许可证让其他人可以在某种程度上使用这个编程人员创造的代码。版权对所属权的建立是自动的,只要某个人写或者画了某个原创的东西。然而,版权只覆盖了个别的表达方式和衍生的作品。他并没有涵盖代码的功能性或可用的想法。 + +专利涵盖了功能性。专利权还可以成为许可证。版权可能无法保护某人如何独立的对另一个人的代码的实现的开发,但是专利填补了这个小瑕疵,Vetter 解释道。 + +### 寻找安全通道 ### + +许可证和专利混合的法律性质可能会对开源开发者产生威胁。据 [Chaotic Moon Studios][4] 的创办者之一、 [IEEE][5] 计算机协会成员 William Hurley 说,对于某些人来说即使是 GPL 也会成为威胁。 + +"在很久以前,开源是个完全不同的世界。被彼此间的尊重和把代码视为艺术而非资产的观点所驱动,那时的程序和代码比现在更加的开放。我相信很多为最好的意图所做的努力几乎最后总是背负着意外的结果,"Hurley 这样告诉 LinuxInsider。 + +他暗示说,成员人数超越了1000人可能带来了一个关于知识产权保护重要性的混乱信息。这可能会继续搅混开源生态系统这滩浑水。 + +“最终,这些显现出了围绕着知识产权的常见的一些错误概念。拥有几千个开发者并不会减少风险——而是增加。给专利许可的开发者越多,它们看起来就越值钱,”Hurley 说。“它们看起来越值钱,有着类似专利的或者其他知识产权的人就越可能试图利用并从中榨取他们自己的经济利益。” + +### 共享与竞争共存 ### + +竞合策略是开源的一部分。OIN 模型让各个公司能够决定他们将在哪竞争以及在哪合作,Bergelt 解释道。 + +“开源演化中的许多改变已经把我们移到了另一个方向上。我们必须为合作创造渠道。否则我们将会有几百个团体把数十亿美元花费到同样的技术上,”他说。 + +手机产业的革新就是个很好的例子。各个公司放出了不同的标准。没有共享,没有合作,Bergelt 解释道。 + +他说:“这让我们在美国接触技术的能力落后了七到五年。我们接触设备的经验远远落后于世界其他地方的人。在我们等待 CDMA (Code Division Multiple Access 码分多址访问通信技术)时自满于 GSM (Global System for Mobile Communications 全球移动通信系统)。” + +### 改变格局 ### + +OIN 在去年经历了增长了400个新许可的浪潮。这意味着着开源有了新趋势。 + +Bergelt 说:“市场到达了一个临界点,组织内的人们终于意识到直白地合作和竞争的需要。结果是两件事同时进行。这可能会变得复杂、费力。” + +然而,这个由人们开始考虑合作和竞争的文化革新所驱动的转换过程是可以忍受的。他解释说,这也是人们在以把开源作为开源社区的最重要的工程的方式拥抱开源——尤其是 Linux——的转变。 + +还有一个迹象是,最具意义的新工程都没有在 GPLv3 许可下开发。 + +### 二个总比一个好 ### + +“GPL 极为重要,但是事实是有一堆的许可模型正被使用着。在Eclipse、Apache 和 Berkeley 许可中,专利问题的相对可解决性通常远远低于在 GPLv3 中的。”Bergelt 说。 + +GPLv3 对于解决专利问题是个自然的补充——但是 GPL 自身不足以独自解决围绕专利使用的潜在冲突。所以 OIN 的设计是以能够补充版权许可为目的的,他补充道。 + +然而,层层叠叠的专利和许可也许并没有带来多少好处。到最后,专利在几乎所有的案例中都被用于攻击目的——而不是防御目的,Bergelt 暗示说。 + +“如果你不准备对其他人采取法律行动,那么对于你的知识财产来说专利可能并不是最佳的法律保护方式”,他说。“我们现在生活在一个对软件——开放和专有——误会重重的世界里。这些软件还被错误并过时的专利系统所捆绑。我们每天在工业化的被窒息的创新中挣扎”,他说。 + +### 法院是最后的手段### + +想到 OIN 的出现抑制了诉讼的泛滥就感到十分欣慰,Bergelt 说,或者至少可以说 OIN 的出现扼制了特定的某些威胁。 + +“可以说我们让人们放下它们了的武器。同时我们正在创建一种新的文化规范。一旦你入股这个模型中的非侵略专利,所产生的相关影响就是对合作的鼓励”,他说。 + +如果你愿意承诺合作,你的第一反应就会趋向于不急着起诉。相反的,你会想如何让我们允许你使用我们所拥有的东西并让它为你赚钱,而同时我们也能使用你所拥有的东西,Bergelt 解释道。 + +“OIN 是个多面的解决方式。他鼓励签署者创造双赢协议”,他说。“这让起诉成为最逼不得已的行为。那才是它的位置。” + +### 底线### + +Bergelt 坚信,OIN 的运作是为了阻止 Linux 受到专利伤害。在 Linux 的世界里没有诉讼的地方。 + +唯一临近的是和微软的移动大战,这主要关系到堆栈中高的元素。那些来自法律的挑战可能是为了提高包括使用 Linux 产品的所属权的成本,Bergelt 说。 + +尽管如此“这些并不是有关 Linux 诉讼”,他说。“他们的重点并不在于 Linux 的核心。他们关注的是 Linux 系统里都有些什么。” + +-------------------------------------------------------------------------------- + +via: http://www.linuxinsider.com/story/Defending-the-Free-Linux-World-81512.html + +作者:Jack M. 
Germain +译者:[H-mudcup](https://github.com/H-mudcup) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[1]:http://www.openinventionnetwork.com/ +[2]:http://www.redhat.com/ +[3]:http://www.law.uh.edu/ +[4]:http://www.chaoticmoon.com/ +[5]:http://www.ieee.org/ diff --git a/translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md b/translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md new file mode 100644 index 0000000000..1c92079b57 --- /dev/null +++ b/translated/talk/20150818 Debian GNU or Linux Birthday-- A 22 Years of Journey and Still Counting.md @@ -0,0 +1,109 @@ +Debian GNU/Linux 生日: 22年未完的美妙旅程. +================================================================================ +在2015年8月16日, Debian项目组庆祝了 Debian 的22周年纪念日; 这也是开源世界历史最悠久, 热门的发行版之一. Debian项目于1993年由Ian Murdock创立. 彼时, Slackware 作为最早的 Linux 发行版已经名声在外. + +![Happy 22nd Birthday to Debian](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-22nd-Birthday.png) + +22岁生日快乐! Debian Linux! + +Ian Ashly Murdock, 一个美国职业软件工程师, 在他还是普渡大学的学生时构想出了 Debia n项目的计划. 他把这个项目命名为 Debian 是由于这个名字组合了他彼时女友的名字, Debra Lynn, 和他自己的名字(译者: Ian). 他之后和Lynn顺利结婚并在2008年1月离婚. + +![Ian Murdock](http://www.tecmint.com/wp-content/uploads/2014/08/Ian-Murdock.jpeg) + +Debian 创始人:Ian Murdock + +Ian 目前是 ExactTarget 下 Platform and Development Community 的副总裁. + +Debian (如同Slackware一样) 都是由于当时缺乏满足作者标准的发行版才应运而生的. Ian 在一次采访中说:"免费提供一流的产品会是Debian项目的唯一使命. 尽管过去的 Linux 发行版均不尽然可靠抑或是优秀. 我印象里...比如在不同的文件系统间移动文件, 处理大型文件经常会导致内核出错. 但是 Linux 其实是很可靠的, 免费的源代码让这个项目本质上很有前途. + +"我记得过去我也像其他人一样想解决问题, 想在家里运营一个像 UNIX 的东西. 但那是不可能的, 无论是经济上还是法律上或是别的什么角度. 然后我就听闻了GNU内核开发项目, 以及这个项目是如何没有任何法律纷争", Ian 补充到. 他早年在开发 Debian 时曾被自由软件基金会(FSF)资助, 这份资助帮助 Debian 向前迈了一大步; 尽管一年后由于学业原因 Ian 退出了 FSF 转而去完成他的学位. + +### Debian开发历史 ### + +- **Debian 0.01 – 0.09** : 发布于 1993 八月 – 1993 十二月. +- **Debian 0.91 ** – 发布于 1994 一月. 有了原始的包管理系统, 没有依赖管理机制. +- **Debian 0.93 rc5** : 发布于 1995 三月. "现代"意义的 Debian 的第一次发布, dpkg 会在系统安装后被用作安装以及管理其他软件包. +- **Debian 0.93 rc6**: 发布于1995 十一月. 最后一次a.out发布, deselect机制第一次出现, 有60位开发者在彼时维护着软件包. +- **Debian 1.1**: 发布于1996 六月. 项目代号 – Buzz, 软件包数量 – 474, 包管理器 dpkg, 内核版本 2.0, ELF. +- **Debian 1.2**: 发布于1996 十二月. 项目代号 – Rex, 软件包数量 – 848, 开发者数量 – 120. +- **Debian 1.3**: 发布于1997 七月. 项目代号 – Bo, 软件包数量 974, 开发者数量 – 200. +- **Debian 2.0**: 发布于1998 七月. 项目代号 - Hamm, 支持构架 – Intel i386 以及 Motorola 68000 系列, 软件包数量: 1500+, 开发者数量: 400+, 内置了 glibc. +- **Debian 2.1**: 发布于1999 三月九日. 项目代号 – slink, 支持构架 - Alpha 和 Sparc, apt 包管理器开始成型, 软件包数量 – 2250. +- **Debian 2.2**: 发布于2000 八月十五日. 项目代号 – Potato, 支持构架 – Intel i386, Motorola 68000 系列, Alpha, SUN Sparc, PowerPC 以及 ARM 构架. 软件包数量: 3900+ (二进制) 以及 2600+ (源代码), 开发者数量 – 450. 有一群人在那时研究并发表了一篇论文, 论文展示了自由软件是如何在被各种问题包围的情况下依然逐步成长为优秀的现代操作系统的. +- **Debian 3.0**: 发布于2002 七月十九日. 项目代号 – woody, 支持构架新增– HP, PA_RISC, IA-64, MIPS 以及 IBM, 首次以DVD的形式发布, 软件包数量 – 8500+, 开发者数量 – 900+, 支持加密. +- **Debian 3.1**: 发布于2005 六月六日. 项目代号 – sarge, 支持构架 – 不变基础上新增 AMD64 – 非官方渠道发布, 内核 – 2.4 以及 2.6 系列, 软件包数量: 15000+, 开发者数量 : 1500+, 增加了诸如 – OpenOffice 套件, Firefox 浏览器, Thunderbird, Gnome 2.8, 内核版本 3.3 先进地支持了: RAID, XFS, LVM, Modular Installer. +- **Debian 4.0**: 发布于2007 四月八日. 项目代号 – etch, 支持构架 – 不变基础上新增 AMD64. 软件包数量: 18,200+ 开发者数量 : 1030+, 图形化安装器. +- **Debian 5.0**: Released on February 14th, 发布于2009. 项目代号 – lenny, 支持构架 – 保不变基础上新增 ARM. 软件包数量: 23000+, 开发者数量: 1010+. +- **Debian 6.0**: 发布于2009 七月二十九日. 项目代号 – squeeze, 包含的软件包: 内核 2.6.32, Gnome 2.3. 
Xorg 7.5, 同时包含了 DKMS, 基于依赖包支持. 支持构架 : 不变基础上新增 kfreebsd-i386 以及 kfreebsd-amd64, 基于依赖管理的启动过程. +- **Debian 7.0**: 发布于2013 五月四日. 项目代号: wheezy, 支持 Multiarch, 私人云工具, 升级了安装器, 移除了第三方软件依赖, 万能的多媒体套件-codec, 内核版本 3.2, Xen Hypervisor 4.1.4 软件包数量: 37400+. +- **Debian 8.0**: 发布于2015 五月二十五日. 项目代号: Jessie, 将 Systemd 作为默认的启动加载器, 内核版本 3.16, 增加了快速启动(fast booting), service进程所依赖的 cgroups 使隔离部分 service 进程成为可能, 43000+ packages. Sysvinit 初始化工具首次在 Jessie 中可用. + +**注意**: Linux的内核第一次是在1991 十月五日被发布, 而 Debian 的首次发布则在1993 九月十三日. 所以 Debian 已经在只有24岁的 Linux 内核上运行了整整22年了. + +### 有关 Debian 的小知识 ### + +1994年被用来管理和重整 Debian 项目以使得其他开发者能更好地加入. 所以在那一年并没有面向用户的更新被发布, 当然, 内部版本肯定是有的. + +Debian 1.0 从来就没有被发布过. 一家 CD-ROM 的生产商错误地把某个未发布的版本标注为了 1.0, 为了避免产生混乱, 原本的 Debian 1.0 以1.1的面貌发布了. 从那以后才有了所谓的官方CD-ROM的概念. + +每个 Debian 新版本的代号都是玩具总动员里某个角色的名字哦. + +Debian 有四种可用版本: 旧稳定版(old stable), 稳定版, 测试版 以及 试验版(experimental). 始终如此. + +Debian 项目组一直致力于开发写一代发行版的不稳定版本, 这个不稳定版本始终被叫做Sid(玩具总动员里那个邪恶的臭小孩). Sid是unstable版本的永久名称, 同时Sid也取自'Still In Development"(译者:还在开发中)的首字母. Sid 将会成为下一个稳定版, 此时的下一个稳定版本代号为 jessie. + +Debian 的官方发行版只包含开源并且免费的软件, 绝无其他东西. 不过contrib 和 不免费的软件包使得安装那些本身免费但是依赖的软件包不免费的软件成为了可能. 那些依赖包本身的证书可能不属于自由/免费软件. + +Debian 是一堆Linux 发行版的母亲. 举几个例子: + +- Damn Small Linux +- KNOPPIX +- Linux Advanced +- MEPIS +- Ubuntu +- 64studio (不再活跃开发) +- LMDE + +Debian 是世界上最大的非商业Linux 发行版.他主要是由C书写的(32.1%), 一并的还有其他70多种语言. + +![Debian 开发语言贡献表](http://www.tecmint.com/wp-content/uploads/2014/08/Debian-Programming.png) + +Debian Contribution + +图片来源: [Xmodulo][1] + +Debian 项目包含6,850万行代码, 以及, 450万行空格和注释. + +国际空间站放弃了 Windows 和红帽子, 进而换成了Debian - 在上面的宇航员使用落后一个版本的稳定发行版, 目前是squeeze; 这么做是为了稳定程度以及来自 Debian 社区的雄厚帮助支持. + +感谢上帝! 我们差点就听到来自国际空间宇航员面对 Windows Metro 界面的尖叫了 :P + +#### 黑色星期三 #### + +2002 十一月而是日, Twente 大学的 Network Operation Center 着火 (NOC). 当地消防部门放弃了服务器区域. NOC维护了satie.debian.org的网站服务器, 这个网站包含了安全, 非美国相关的存档, 新维护者资料, 数量报告, 数据库; 这一切都化为了灰烬. 之后这些服务被使用 Debian 重新实现了. + +#### 未来版本 #### + +下一个待发布版本是 Debian 9, 项目代号 – Stretch, 它会带来什么还是个未知数. 满心期待吧! + +有很多发行版在 Linux 发行版的历史上出现过一瞬然后很快消失了. 在多数情况下, 维护一个日渐庞大的项目是开发者们面临的挑战. 但这对 Debian 来说不是问题. Debian 项目有全世界成百上千的开发者, 维护者. 它在 Linux 诞生的之初起便一直存在. + +Debian 在 Linux 生态环境中的贡献是难以用语言描述的. 如果 Debian 没有出现过, 那么 Linux 世界将不会像现在这样丰富, 用户友好. Debian 是为数不多可以被认为安全可靠又稳定, 是作为网络服务器完美选择的发行版. + +这仅仅是 Debian 的一个开始. 它从远古时代一路走到今天, 并将一直走下去. 未来即是现在! 世界近在眼前! 如果你到现在还从来没有使用过 Debian, 我只想问, 你还再等什么? 快去下载一份镜像试试吧, 我们会在此守候遇到任何问题的你. 
+ +- [Debian 主页][2] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/happy-birthday-to-debian-gnu-linux/ + +作者:[Avishek Kumar][a] +译者:[jerryling315](http://moelf.xyz) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://xmodulo.com/2013/08/interesting-facts-about-debian-linux.html +[2]:https://www.debian.org/ diff --git a/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md b/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md new file mode 100644 index 0000000000..bd3f0451c7 --- /dev/null +++ b/translated/talk/20150818 Docker Working on Security Components Live Container Migration.md @@ -0,0 +1,53 @@ +Docker Working on Security Components, Live Container Migration +================================================================================ +![Docker Container Talk](http://www.eweek.com/imagesvr_ce/1905/290x195DockerMarianna.jpg) + +**Docker 开发者在 Containercon 上的演讲,谈论将来的容器在安全和实时迁移方面的创新** + +来自西雅图的消息。当前 IT 界最热的词汇是“容器”,美国有两大研讨会:Linuxcon USA 和 Containercon,后者就是为容器而生的。 + +Docker 公司是开源 Docker 项目的商业赞助商,本次研讨会这家公司有 3 位高管带来主题演讲,但公司创始人 Solomon Hykes 没上场演讲。 + +Hykes 曾在 2014 年的 Linuxcon 上进行过一次主题演讲,但今年的 Containeron 他只坐在观众席上。而工程部高级副总裁 Marianna Tessel、Docker 首席安全员 Diogo Monica 和核心维护员 Michael Crosby 为我们演讲 Docker 新增的功能和将来会有的功能。 + +Tessel 强调 Docker 现在已经被很多世界上最大的组织用在生产环境中,包括美国政府。Docker 也被用在小环境中,比如树莓派,一块树莓派上可以跑 2300 个容器。 + +“Docker 的功能正在变得越来越强大,而部署方法变得越来越简单。”Tessel 在会上说道。 + +Tessel 把 Docker 形容成一艘游轮,内部由强大而复杂的机器驱动,外部为乘客提供平稳航行的体验。 + +Docker 试图解决的领域是简化安全配置。Tessel 认为对于大多数用户和组织来说,避免网络漏洞所涉及的安全问题是一个乏味而且复杂的过程。 + +于是 Docker Content Trust 就出现在 Docker 1.8 release 版本中了。安全项目领导 Diogo Mónica 中加入 Tessel 上台讨论,说安全是一个难题,而 Docker Content Trust 就是为解决这个难道而存在的。 + +Docker Content Trusst 提供一种方法来验证一个 Docker 应用是否可信,以及多种方法来限制欺骗和病毒注入。 + +为了证明他的观点,Monica 做了个现场示范,演示 Content Trust 的效果。在一个实验中,一个网站在更新过程中其 Web App 被人为攻破,而当 Content Trust 启动后,这个黑客行为再也无法得逞。 + +“不要被这个表面上简单的演示欺骗了,”Tessel 说道,“你们看的是最安全的可行方案。” + +Docker 以前没有实现的领域是实时迁移,这个技术在 VMware 虚拟机中叫做 vMotion,而现在,Docker 也实现了这个功能。 + +Docker 首席维护员 Micheal Crosby 在台上做了个实时迁移的演示,Crosby 把这个过程称为快照和恢复:首先从运行中的容器拿到一个快照,之后将这个快照移到另一个地方恢复。 + +一个容器也可以克隆到另一个地方,Crosby 将他的克隆容器称为“多利”,就是世界上第一只被克隆出来的羊的名字。 + +Tessel 也花了点时间聊了下 RunC 组件,这是个正在被 Open Container Initiative 作为多方开发的项目,目的是让窗口兼容 Linux、Windows 和 Solaris。 + +Tessel 总结说她不知道 Docker 的未来是什么样,但对此抱非常乐观的态度。 + +“我不确定未来是什么样的,但我很确定 Docker 会在这个世界中脱颖而出”,Tessel 说的。 + +Sean Michael Kerner 是 eWEEK 和 InternetNews.com 网站的高级编辑,可通过推特 @TechJournalist 关注他。 + +-------------------------------------------------------------------------------- + +via: http://www.eweek.com/virtualization/docker-working-on-security-components-live-container-migration.html + +作者:[Sean Michael Kerner][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ diff --git a/translated/talk/20150819 Linuxcon--The Changing Role of the Server OS.md b/translated/talk/20150819 Linuxcon--The Changing Role of the Server OS.md new file mode 100644 index 0000000000..98a0f94b03 --- /dev/null +++ b/translated/talk/20150819 Linuxcon--The Changing Role of the Server OS.md @@ -0,0 +1,50 @@ +LinuxCon: 服务器操作系统的转型 
+================================================================================ +来自西雅图。容器迟早要改变世界,以及改变操作系统的角色。这是 Wim Coekaerts 带来的 LinuxCon 演讲主题,Coekaerts 是 Oracle 公司 Linux 与虚拟化工程的高级副总裁。 + +![](http://www.serverwatch.com/imagesvr_ce/6421/wim-200x150.jpg) + +Coekaerts 在开始演讲的时候拿出一张关于“桌面之年”的幻灯片,引发了现场观众的一片笑声。之后他说 2015 年很明显是容器之年,更是应用之年,应用才是容器的关键。 + +“你需要操作系统做什么事情?”,Coekaerts 回答现场观众:“只需一件事:运行一个应用。操作系统负责管理硬件和资源,来让你的应用运行起来。” + +Coakaerts 说在 Docker 容器的帮助下,我们的注意力再次集中在应用上,而在 Oracle,我们将注意力放在如何让应用更好地运行在操作系统上。 + +“许多人过去常常需要繁琐地安装应用,而现在的年轻人只需要按一个按钮就能让应用在他们的移动设备上运行起来”。 + +人们对安装企业版的软件需要这么复杂的步骤而感到惊讶,而 Docker 帮助他们脱离了这片苦海。 + +“操作系统的角色已经变了。” Coekaerts 说。 + +Docker 的出现不代表虚拟机的淘汰,容器化过程需要经过很长时间才能变得成熟,然后才能在世界范围内得到应用。 + +在这段时间内,容器会与虚拟机共存,并且我们需要一些工具,将应用在容器和虚拟机之间进行转换迁移。Coekaerts 举例说 Oracle 的 VirtualBox 就可以用来帮助用户运行 Docker,而它原来是被广泛用在桌面系统上的一项开源技术。现在 Docker 的 Kitematic 项目将在 Mac 上使用 VirtualBox 运行 Docker。 + +### The Open Compute Initiative and Write Once, Deploy Anywhere for Containers ### +### 容器的开放计算计划和一次写随处部署 ### + +一个能让容器成功的关键是“一次写,随处部署”的概念。而在容器之间的互操作领域,Linux 基金会的开放计算计划(OCI)扮演一个非常关键的角色。 + +“使用 OCI,应用编译一次后就可以很方便地在多地运行,所以你可以将你的应用部署在任何地方”。 + +Coekaerts 总结说虽然在迁移到容器模型过程中会发生很多好玩的事情,但容器还没真正做好准备,他强调 Oracle 现在正在验证将产品运行在容器内的可行性,但这是一个非常艰难的过程。 + +“运行数据库很简单,难的是要搞定数据库所需的环境”,Coekaerts 说:“容器与虚拟机不一样,一些需要依赖底层系统配置的应用无法从主机迁移到容器中。” + +另外,Coekaerts 指出在容器内调试问题与在虚拟机内调试问题也是不一样的,现在还没有成熟的工具来进行容器应用的调试。 + +Coekaerts 强调当容器足够成熟时,有一点很重要:不要抛弃现有的技术。组织和企业不能抛弃现有的部署好的应用,而完全投入新技术的怀抱。 + +“部署新技术是很困难的事情,你需要缓慢地迁移过去,能让你顺利迁移的技术才是成功的技术。”Coekaerts 说。 + +-------------------------------------------------------------------------------- + +via: http://www.serverwatch.com/server-news/linuxcon-the-changing-role-of-the-server-os.html + +作者:[Sean Michael Kerner][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.serverwatch.com/author/Sean-Michael-Kerner-101580.htm diff --git a/translated/talk/20150820 A Look at What's Next for the Linux Kernel.md b/translated/talk/20150820 A Look at What's Next for the Linux Kernel.md new file mode 100644 index 0000000000..daf3e4d0e3 --- /dev/null +++ b/translated/talk/20150820 A Look at What's Next for the Linux Kernel.md @@ -0,0 +1,49 @@ +Linux 内核的发展方向 +================================================================================ +![](http://www.eweek.com/imagesvr_ce/485/290x195cilinux1.jpg) + +**即将到来的 Linux 4.2 内核涉及到史上最多的贡献者数量,内核开发者 Jonathan Corbet 如是说。** + +来自西雅图。Linux 内核持续增长:代码量在增加,代码贡献者数量也在增加。而随之而来的一些挑战需要处理一下。以上是 Jonathan Corbet 在今年的 LinuxCon 的内核年度报告上提出的主要观点。以下是他的主要演讲内容: + +Linux 4.2 内核依然处于开发阶段,预计在8月23号释出。Corbet 强调有 1569 名开发者为这个版本贡献了代码,其中 277 名是第一次提交代码。 + +越来越多的开发者的加入,内核更新非常快,Corbet 估计现在大概 63 天就能产生一个新的内核里程碑。 + +Linux 4.2 涉及多方面的更新。其中一个就是引进了 OverLayFS,这是一种只读型文件系统,它可以实现在一个容器之上再放一个容器。 + +网络系统对小包传输性能也有了提升,这对于高频传输领域如金融交易而言非常重要。提升的方面主要集中在减小处理数据包的时间的能耗。 + +依然有新的驱动中加入内核。在每个内核发布周期,平均会有 60 到 80 个新增或升级驱动中加入。 + +另一个主要更新是实时内核补丁,这个特性在 4.0 版首次引进,好处是系统管理员可以在生产环境中打上内核补丁而不需要重启系统。当补丁所需要的元素都已准备就绪,打补丁的过程会在后台持续而稳定地进行。 + +**Linux 安全, IoT 和其他关注点 ** + +过去一年中,安全问题在开源社区是一个很热的话题,这都归因于那些引发高度关注的事件,比如 Heartbleed 和 Shellshock。 + +“我毫不怀疑 Linux 代码对这些方面的忽视会产生一些令人不悦的问题”,Corbet 原话。 + +他强调说过去 10 年间有超过 3 百万行代码不再被开发者修改,而产生 Shellshock 漏洞的代码的年龄已经是 20 岁了,近年来更是无人问津。 + +另一个关注点是 2038 问题,Linux 界的“千年虫”,如果不解决,2000 年出现过的问题还会重现。2038 问题说的是在 2038 年一些 Linux 和 Unix 机器会死机(LCTT:32 位系统记录的时间,在2038年1月19日星期二晚上03:14:07之后的下一秒,会变成负数)。Corbet 说现在离 2038 年还有 23 年时间,现在部署的系统都会考虑 2038 问题。 + +Linux 已经开始一些初步的方案来修复 2038 问题了,但做的还远远不够。“现在就要修复这个问题,而不是等 20 
年后把这个头疼的问题留给下一代解决,我们却享受着退休的美好时光”。 + +物联网(IoT)也是 Linux 关注的领域,Linux 是物联网嵌入式操作系统的主要占有者,然而这并没有什么卵用。Corget 认为日渐臃肿的内核对于未来的物联网设备来说肯定过于庞大。 + +现在有一个项目就是做内核最小化的,获取足够的支持对于这个项目来说非常重要。 + +“除了 Linux 之外,也有其他项目可以做物联网,但那些项目不会像 Linux 一样开放”,Corbet 说,“我们不能指望 Linux 在物联网领域一直保持优势,我们需要靠自己的努力去做到这点,我们需要注意不能让内核变得越来越臃肿。” + +-------------------------------------------------------------------------------- + +via: http://www.eweek.com/enterprise-apps/a-look-at-whats-next-for-the-linux-kernel.html + +作者:[Sean Michael Kerner][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.eweek.com/cp/bio/Sean-Michael-Kerner/ diff --git a/translated/talk/20150824 Linux about to gain a new file system--bcachefs.md b/translated/talk/20150824 Linux about to gain a new file system--bcachefs.md new file mode 100644 index 0000000000..4fe4bf8ff9 --- /dev/null +++ b/translated/talk/20150824 Linux about to gain a new file system--bcachefs.md @@ -0,0 +1,25 @@ +Linux 界将出现一个新的文件系统:bcachefs +================================================================================ +这个有 5 年历史,由 Kent Oberstreet 创建,过去属于谷歌的文件系统,最近完成了关键的组件。Bcachefs 文件系统自称其性能和稳定性与 ext4 和 xfs 相同,而其他方面的功能又可以与 btrfs 和 zfs 相媲美。主要特性包括校验、压缩、多设备支持、缓存、快照与其他好用的特性。 + +Bcachefs 来自 **bcache**,这是一个块级缓存层,从 bcaceh 到一个功能完整的[写时复制][1]文件系统,堪称是一项质的转变。 + +在自己提出问题“为什么要出一个新的文件系统”中,Kent Oberstreet 作了以下回答:当我还在谷歌的时候,我与其他在 bcache 上工作的同事在偶然的情况下意识到我们正在使用的东西可以成为一个成熟文件系统的功能块,我们可以用 bcache 创建一个拥有干净而优雅设计的文件系统,而最重要的一点是,bcachefs 的主要目的就是在性能和稳定性上能与 ext4 和 xfs 匹敌,同时拥有 btrfs 和 zfs 的特性。 + +Overstreet 邀请人们在自己的系统上测试 bcachefs,可以通过邮件列表[通告]获取 bcachefs 的操作指南。 + +Linux 生态系统中文件系统几乎处于一家独大状态,Fedora 在第 16 版的时候就想用 btrfs 换掉 ext4 作为其默认文件系统,但是到现在(LCTT:都出到 Fedora 22 了)还在使用 ext4。而几乎所有 Debian 系的发行版(Ubuntu、Mint、elementary OS 等)也使用 ext4 作为默认文件系统,并且这些主流的发生版都没有替换默认文件系统的意思。 + +-------------------------------------------------------------------------------- + +via: http://www.linuxveda.com/2015/08/22/linux-gain-new-file-system-bcachefs/ + +作者:[Paul Hill][a] +译者:[bazz2](https://github.com/bazz2) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxveda.com/author/paul_hill/ +[1]:https://en.wikipedia.org/wiki/Copy-on-write +[2]:https://lkml.org/lkml/2015/8/21/22 diff --git a/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md b/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md deleted file mode 100644 index f5e6fe60b2..0000000000 --- a/translated/tech/20150410 How to run Ubuntu Snappy Core on Raspberry Pi 2.md +++ /dev/null @@ -1,89 +0,0 @@ -如何在树莓派2 代运行ubuntu Snappy Core -================================================================================ -物联网(Internet of Things, IoT) 时代即将来临。很快,过不了几年,我们就会问自己当初是怎么在没有物联网的情况下生存的,就像我们现在怀疑过去没有手机的年代。Canonical 就是一个物联网快速发展却还是开放市场下的竞争者。这家公司宣称自己把赌注压到了IoT 上,就像他们已经在“云”上做过的一样。。在今年一月底,Canonical 启动了一个基于Ubuntu Core 的小型操作系统,名字叫做 [Ubuntu Snappy Core][1] 。 - -Snappy 是一种用来替代deb 的新的打包格式,是一个用来更新系统的前端,从CoreOS、红帽子和其他系统借鉴了**原子更新**这个想法。树莓派2 代投入市场,Canonical 很快就发布了用于树莓派的Snappy Core 版本。而第一代树莓派因为是基于ARMv6 ,Ubuntu 的ARM 镜像是基于ARMv7 ,所以不能运行ubuntu 。不过这种状况现在改变了,Canonical 通过发布用于RPI2 的镜像,抓住机会证明了Snappy 就是一个用于云计算,特别是用于物联网的系统。 - -Snappy 同样可以运行在其它像Amazon EC2, Microsofts Azure, Google的 Compute Engine 这样的云端上,也可以虚拟化在KVM、Virtuabox 和vagrant 上。Canonical Ubuntu 已经拥抱了微软、谷歌、Docker、OpenStack 这些重量级选手,同时也与一些小项目达成合作关系。除了一些创业公司,比如Ninja Sphere、Erle 
Robotics,还有一些开发板生产商,比如Odroid、Banana Pro, Udoo, PCDuino 和Parallella 、全志,Snappy 也提供了支持。Snappy Core 同时也希望尽快运行到路由器上来帮助改进路由器生产商目前很少更新固件的策略。 - -接下来,让我们看看怎么样在树莓派2 上运行Snappy。 - -用于树莓派2 的Snappy 镜像可以从 [Raspberry Pi 网站][2] 上下载。解压缩出来的镜像必须[写到一个至少8GB 大小的SD 卡][3]。尽管原始系统很小,但是原子升级和回滚功能会占用不小的空间。使用Snappy 启动树莓派2 后你就可以使用默认用户名和密码(都是ubuntu)登录系统。 - -![](https://farm8.staticflickr.com/7639/16428527263_f7bdd56a0d_c.jpg) - -sudo 已经配置好了可以直接用,安全起见,你应该使用以下命令来修改你的用户名 - - $ sudo usermod -l - -或者也可以使用`adduser` 为你添加一个新用户。 - -因为RPI缺少硬件时钟,而Snappy 并不知道这一点,所以系统会有一个小bug:处理某些命令时会报很多错。不过这个很容易解决: - -使用这个命令来确认这个bug 是否影响: - - $ date - -如果输出是 "Thu Jan 1 01:56:44 UTC 1970", 你可以这样做来改正: - - $ sudo date --set="Sun Apr 04 17:43:26 UTC 2015" - -改成你的实际时间。 - -![](https://farm9.staticflickr.com/8735/16426231744_c54d9b8877_b.jpg) - -现在你可能打算检查一下,看看有没有可用的更新。注意通常使用的命令: - - $ sudo apt-get update && sudo apt-get distupgrade - -不过这时系统不会让你通过,因为Snappy 使用它自己精简过的、基于dpkg 的包管理系统。这么做的原因是Snappy 会运行很多嵌入式程序,而同时你也会想着所有事情尽可能的简化。 - -让我们来看看最关键的部分,理解一下程序是如何与Snappy 工作的。运行Snappy 的SD 卡上除了boot 分区外还有3个分区。其中的两个构成了一个重复的文件系统。这两个平行文件系统被固定挂载为只读模式,并且任何时刻只有一个是激活的。第三个分区是一个部分可写的文件系统,用来让用户存储数据。通过更新系统,标记为'system-a' 的分区会保持一个完整的文件系统,被称作核心,而另一个平行文件系统仍然会是空的。 - -![](https://farm9.staticflickr.com/8758/16841251947_21f42609ce_b.jpg) - -如果我们运行以下命令: - - $ sudo snappy update - -系统将会在'system-b' 上作为一个整体进行更新,这有点像是更新一个镜像文件。接下来你将会被告知要重启系统来激活新核心。 - -重启之后,运行下面的命令可以检查你的系统是否已经更新到最新版本,以及当前被激活的是那个核心 - - $ sudo snappy versions -a - -经过更新-重启两步操作,你应该可以看到被激活的核心已经被改变了。 - -因为到目前为止我们还没有安装任何软件,下面的命令: - - $ sudo snappy update ubuntu-core - -将会生效,而且如果你打算仅仅更新特定的OS 版本,这也是一个办法。如果出了问题,你可以使用下面的命令回滚: - - $ sudo snappy rollback ubuntu-core - -这将会把系统状态回滚到更新之前。 - -![](https://farm8.staticflickr.com/7666/17022676786_5fe6804ed8_c.jpg) - -再来说说那些让Snappy 有用的软件。这里不会讲的太多关于如何构建软件、向Snappy 应用商店添加软件的基础知识,但是你可以通过Freenode 上的IRC 频道#snappy 了解更多信息,那个上面有很多人参与。你可以通过浏览器访问http://:4200 来浏览应用商店,然后从商店安装软件,再在浏览器里访问http://webdm.local 来启动程序。如何构建用于Snappy 的软件并不难,而且也有了现成的[参考文档][4] 。你也可以很容易的把DEB 安装包使用Snappy 格式移植到Snappy 上。 - -![](https://farm8.staticflickr.com/7656/17022676836_968a2a7254_c.jpg) - -尽管Ubuntu Snappy Core 吸引我们去研究新型的Snappy 安装包格式和Canonical 式的原子更新操作,但是因为有限的可用应用,它现在在生产环境里还不是很有用。但是既然搭建一个Snappy 环境如此简单,这看起来是一个学点新东西的好机会。 - --------------------------------------------------------------------------------- - -via: http://xmodulo.com/ubuntu-snappy-core-raspberry-pi-2.html - -作者:[Ferdinand Thommes][a] -译者:[Ezio](https://github.com/oska874) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 - -[a]:http://xmodulo.com/author/ferdinand -[1]:http://www.ubuntu.com/things -[2]:http://www.raspberrypi.org/downloads/ -[3]:http://xmodulo.com/write-raspberry-pi-image-sd-card.html -[4]:https://developer.ubuntu.com/en/snappy/ diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md deleted file mode 100644 index 1c0cc4bd86..0000000000 --- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 4 - GNOME Settings.md +++ /dev/null @@ -1,54 +0,0 @@ -将GNOME作为我的Linux桌面的一周: 他们做对的与做错的 - 第四节 - GNOME设置 -================================================================================ -### Settings设置 ### - -在这我要挑一挑几个特定KDE控制模块的毛病,大部分原因是因为相比它们的对手GNOME来说,糟糕得太可笑,实话说,真是悲哀。 - -第一个接招的?打印机。 - 
-![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers1_show&w=1920) - -GNOME在左,KDE在右。你知道左边跟右边的打印程序有什么区别吗?当我在GNOME控制中心打开“打印机”时,程序窗口弹出来了,之后没有也没发生。而当我在KDE系统设置打开“打印机”时,我收到了一条密码提示。甚至我都没能看一眼打印机呢,我就必须先交出ROOT密码。 - -让我再重复一遍。在今天,PolicyKit和Logind的日子里,对一个应该是sudo的操作,我依然被询问要求ROOT的密码。我安装系统的时候甚至都没设置root密码。所以我必须跑到Konsole去,然后运行'sudo passwd root'命令,这样我才能给root设一个密码,这样我才能回到系统设置中的打印程序,然后交出root密码,然后仅仅是看一看哪些打印机可用。完成了这些工作后,当我点击“添加打印机”时,我再次收到请求ROOT密码的提示,当我解决了它后再选择一个打印机和驱动时,我再次收到请求ROOT密码的提示。仅仅是为了添加一个打印机到系统我就收到三次密码请求。 - -而在GNOME下添加打印机,在点击打印机程序中的”解锁“之前,我没有收到任何请求SUDO密码的提示。整个过程我只被请求过一次,仅此而已。KDE,求你了……采用GNOME的”解锁“模式吧。不到一定需要的时候不要发出提示。还有,不管是哪个库,只要它允许KDE应用程序绕过PolicyKit/Logind(如果有的话)并直接请求ROOT权限……那就把它封进箱里吧。如果这是个多用户系统,那我要么必须交出ROOT密码,要么我必须时时刻刻呆着以免有一个用户需要升级、更改或添加一个新的打印机。而这两种情况都是完全无法接受的。 - -有还一件事…… - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers2_show&w=1920) - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_printers3_show&w=1920) - -给论坛的问题:怎么样看起来更简洁?我在写这篇文章时意识到:当有任何的附加打印机准备好时,Gnome打印机程序会把过程做得非常简洁,它们在左边上放了一个竖直栏来列出这些打印机。而我在KDE添加第二台打印机时,它突然增加出一个左边栏来。而在添加之前,我脑海中已经有了一个恐怖的画面它会像图片文件夹显示预览图一样,直接插入另外一个图标到界面里去。我很高兴也很惊讶的看到我是错的。但是事实是它直接”长出”另外一个从末存在的竖直栏,彻底改变了它的界面布局,而这样也称不上“好”。终究还是一种令人困惑,奇怪而又不直观的设计。 - -打印机说得够多了……下一个接受我公开石刑的KDE系统设置是?多媒体,即Phonon。 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_sound_show&w=1920) - -一如既往,GNOME在左边,KDE在右边。让我们先看看GNOME的系统设置先……眼睛从左到右,从上到下,对吧?来吧,就这样做。首先:音量控制滑条。滑条中的蓝色条与空白条百分百清晰地消除了哪边是“音量增加”的困惑。在音量控制条后马上就是一个On/Off开关,用来开关静音功能。Gnome的再次得分在于静音后能记住当前设置的音量,而在点击音量增加按钮取消静音后能回到原来设置的音量中来。Kmixer,你个健忘的垃圾,我真的希望我能多讨论你。 - - -继续!输入输出和应用程序的标签选项?每一个应用程序的音量随时可控?Gnome,每过一秒,我爱你越深。均衡的选项设置,声音配置,和清晰地标上标志的“测试麦克风”选项。 - - - -我不清楚它能否以一种更干净更简洁的设计实现。是的,它只是一个Gnome化的Pavucontrol,但我想这就是重要的地方。Pavucontrol在这方面几乎完全做对了,Gnome控制中心中的“声音”应用程序的改善使它向完美更进了一步。 - -Phonon,该你上了。但开始前我想说:我TM看到的是什么?我知道我看到的是音频设备的权限列表,但是它呈现的方式有点太坑。还有,那些用户可能关心的那些东西哪去了?拥有一个权限列表当然很好,它也应该存在,但问题是权限列表属于那种用户乱搞一两次之后就不会再碰的东西。它还不够重要,或者说常用到可以直接放在正中间位置的程度。音量控制滑块呢?对每个应用程序的音量控制功能呢?那些用户使用最频繁的东西呢?好吧,它们在Kmix中,一个分离的程序,拥有它自己的配置选项……而不是在系统设置下……这样真的让“系统设置”这个词变得有点用词不当。 - -![](http://www.phoronix.net/image.php?id=gnome-week-editorial&image=gnome_week_network_show&w=1920) - -上面展示的Gnome的网络设置。KDE的没有展示,原因就是我接下来要吐槽的内容了。如果你进入KDE的系统设置里,然后点击“网络”区域中三个选项中的任何一个,你会得到一大堆的选项:蓝牙设置,Samba分享的默认用户名和密码(说真的,“连通性(Connectivity)”下面只有两个选项:SMB的用户名和密码。TMD怎么就配得上“连通性”这么大的词?),浏览器身份验证控制(只有Konqueror能用……一个已经倒闭的项目),代理设置,等等……我的wifi设置哪去了?它们没在这。哪去了?好吧,它们在网络应用程序的设置里面……而不是在网络设置里…… - -KDE,你这是要杀了我啊,你有“系统设置”当凶器,拿着它动手吧! 
- --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=4 - -作者:Eric Griffith -译者:[XLCYun](https://github.com/XLCYun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md b/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md deleted file mode 100644 index 02ee7425fc..0000000000 --- a/translated/tech/20150716 A Week With GNOME As My Linux Desktop--What They Get Right & Wrong - Page 5 - Conclusion.md +++ /dev/null @@ -1,39 +0,0 @@ -将GNOME作为我的Linux桌面的一周:他们做对的与做错的 - 第五节 - 总结 -================================================================================ -### 用户体验和最后想法 ### - -当Gnome 2.x和KDE 4.x要正面交锋时……我相当开心的跳到其中。我爱的东西它们有,恨的东西也有,但总的来说它们使用起来还算是一种乐趣。然后Gnome 3.x来了,带着一场Gnome Shell的戏剧。那时我就放弃了Gnome,我尽我所能的避开它。当时它对用户是不友好的,而且不直观,它打破了原有的设计典范,只为平板的统治世界做准备……而根据平板下跌的销量来看,这样的未来不可能实现。 - -Gnome 3后续发面了八个版本后,奇迹发生了。Gnome变得对对用户友好了。变得直观了。它完美吗?当然不了。我还是很讨厌它想推动的那种设计范例,我讨厌它总想把工作流(work flow)强加给我,但是在时间和耐心的作用下,这两都能被接受。只要你能够回头去看看Gnome Shell那外星人一样的界面,然后开始跟Gnome的其它部分(特别是控制中心)互动,你就能发现Gnome绝对做对了:细节。对细节的关注! - -人们能适应新的界面设计范例,能适应新的工作流——iPhone和iPad都证明了这一点——但真正一直让他们操心的是“纸片的割伤”(paper cuts,此处指易于修复但烦人的缺陷,译注)。 - -它带出了KDE和Gnome之间最重要的一个区别。Gnome感觉像一个产品。像一种非凡的体验。你用它的时候,觉得它是完整的,你要的东西都在你的指尖。它让人感觉就像是一个拥有windows或者OS X那样桌面体验的Linux桌面版:你要的都在里面,而且它是被同一个目标一致的团队中的同一个人写出来的。天,即使是一个应用程序发出的sudo请求都感觉是Gnome下的一个特意设计的部分,就像在Windows下的一样。而在KDE它就像是任何应用程序都能创建的那种随机外观的弹窗。它不像是以系统的一部分这样的正式身份停下来说“嘿,有个东西要请求管理员权限!你要给它吗?”。 - -KDE让人体验不到有凝聚力的体验。KDE像是在没有方向地打转,感觉没有完整的体验。它就像是一堆东西往不同的的方向移动,只不过恰好它们都有一个共同享有的工具包。如果开发者对此很开心,那么好吧,他们开心就好,但是如果他们想提供最好体验的话,那么就需要多关注那些小地方了。用户体验跟直观应当做为每一个应用程序的设计中心,应当有一个视野,知道KDE要提供什么——并且——知道它看起来应该是什么样的。 - -是不是有什么原因阻止我在KDE下使用Gnome磁盘管理? Rhythmbox? Evolution? 
没有。没有。没有。但是这样说又错过了关键。Gnome和KDE都称它们为“桌面环境”。那么它们就应该是完整的环境,这意味着他们的各个部件应该汇集并紧密结合在一起,意味着你使用它们环境下的工具,因为它们说“您在一个完整的桌面中需要的任何东西,我们都支持。”说真的?只有Gnome看起来能符合完整的要求。KDE在“汇集在一起”这一方面感觉就像个半成品,更不用说提供“完整体验”中你所需要的东西。Gnome磁盘管理没有相应的对手——kpartionmanage要求ROOT权限。KDE不运行“首次用户注册”的过程(原文:No 'First Time User' run through.可能是指系统安装过程中KDE没有创建新用户的过程,译注) ,现在也不过是在Kubuntu下引入了一个用户管理器。老天,Gnome甚至提供了地图,笔记,日历和时钟应用。这些应用都是百分百要紧的吗?不,当然不了。但是正是这些应用帮助Gnome推动“Gnome是一种完整丰富的体验”的想法。 - -我吐槽的KDE问题并非不可能解决,决对不是这样的!但是它需要人去关心它。它需要开发者为他们的作品感到自豪,而不仅仅是为它们实现的功能而感到自豪——组织的价值可大了去了。别夺走用户设置选项的能力——GNOME 3.x就是因为缺乏配置选项的能力而为我所诟病,但别把“好吧,你想怎么设置就怎么设置,”作为借口而不提供任何理智的默认设置。默认设置是用户将看到的东西,它们是用户从打开软件的第一刻开始进行评判的关键。给用户留个好印象吧。 - -我知道KDE开发者们知道设计很重要,这也是为什么Visual Design Group(视觉设计团体)存在的原因,但是感觉好像他们没有让VDG充分发挥。所以KDE里存在组织上的缺陷。不是KDE没办法完整,不是它没办法汇集整合在一起然后解决衰败问题,只是开发者们没做到。他们瞄准了靶心……但是偏了。 - -还有,在任何人说这句话之前……千万别说“补丁很受欢迎啊"。因为当我开心的为个人提交补丁时,只要开发者坚持以他们喜欢的却不直观的方式干事,更多这样的烦事就会不断发生。这不关Muon有没有中心对齐。也不关Amarok的界面太丑。也不关每次我敲下快捷键后,弹出的音量和亮度调节窗口占用了我一大块的屏幕“房地产”(说真的,有人会去缩小这些东西)。 - -这跟心态的冷漠有关,跟开发者们在为他们的应用设计UI时根本就不多加思考有关。KDE团队做的东西都工作得很好。Amarok能播放音乐。Dragon能播放视频。Kwin或Qt和kdelibs似乎比Mutter/gtk更有力更效率(仅根本我的电池电量消耗计算。非科学性测试)。这些都很好,很重要……但是它们呈现的方式也很重要。甚至可以说,呈现方式是最重要的,因为它是用户看到的和与之交互的东西。 - -KDE应用开发者们……让VDG参与进来吧。让VDG审查并核准每一个”核心“应用,让一个VDG的UI/UX专家来设计应用的使用模式和使用流程,以此保证其直观性。真见鬼,不管你们在开发的是啥应用,仅仅把它的模型发到VDG论坛寻求反馈甚至都可能都能得到一些非常好的指点跟反馈。你有这么好的资源在这,现在赶紧用吧。 - -我不想说得好像我一点都不懂感恩。我爱KDE,我爱那些志愿者们为了给Linux用户一个可视化的桌面而付出的工作与努力,也爱可供选择的Gnome。正是因为我关心我才写这篇文章。因为我想看到更好的KDE,我想看到它走得比以前更加遥远。而这样做需要每个人继续努力,并且需要人们不再躲避批评。它需要人们对系统互动及系统崩溃的地方都保持诚实。如果我们不能直言批评,如果我们不说”这真垃圾!”,那么情况永远不会变好。 - -这周后我会继续使用Gnome吗?可能不,不。Gnome还在试着强迫我接受其工作流,而我不想追随,也不想遵循,因为我在使用它的时候感觉变得不够高效,因为它并不遵循我的思维模式。可是对于我的朋友们,当他们问我“我该用哪种桌面环境?”我可能会推荐Gnome,特别是那些不大懂技术,只要求“能工作”就行的朋友。根据目前KDE的形势来看,这可能是我能说出的最狠毒的评估了。 - --------------------------------------------------------------------------------- - -via: http://www.phoronix.com/scan.php?page=article&item=gnome-week-editorial&num=5 - -作者:Eric Griffith -译者:[XLCYun](https://github.com/XLCYun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md b/translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md deleted file mode 100644 index 003290a915..0000000000 --- a/translated/tech/20150717 How to monitor NGINX with Datadog - Part 3.md +++ /dev/null @@ -1,154 +0,0 @@ - -如何使用 Datadog 监控 NGINX - 第3部分 -================================================================================ -![](http://www.datadoghq.com/wp-content/uploads/2015/07/NGINX_hero_3.png) - -如果你已经阅读了[前面的如何监控 NGINX][1],你应该知道从你网络环境的几个指标中可以获取多少信息。而且你也看到了从 NGINX 特定的基础中收集指标是多么容易的。但要实现全面,持续的监控 NGINX,你需要一个强大的监控系统来存储并将指标可视化,当异常发生时能提醒你。在这篇文章中,我们将向你展示如何使用 Datadog 安装 NGINX 监控,以便你可以在定制的仪表盘中查看这些指标: - -![NGINX dashboard](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx_board_5.png) - -Datadog 允许你建立单个主机,服务,流程,度量,或者几乎任何它们的组合图形周围和警报。例如,你可以在一定的可用性区域监控所有NGINX主机,或所有主机,或者您可以监视被报道具有一定标签的所有主机的一个关键指标。本文将告诉您如何: - -Datadog 允许你来建立图表并报告周围的主机,进程,指标或其他的。例如,你可以在特定的可用性区域监控所有 NGINX 主机,或所有主机,或者你可以监视一个关键指标并将它报告给周围所有标记的主机。本文将告诉你如何做: - -- 在 Datadog 仪表盘上监控 NGINX 指标,对其他所有系统 -- 当一个关键指标急剧变化时设置自动警报来通知你 - -### 配置 NGINX ### - -为了收集 NGINX 指标,首先需要确保 NGINX 已启用 status 模块和一个URL 来报告 status 指标。下面将一步一步展示[配置开源 NGINX ][2]和[NGINX Plus][3]。 - -### 整合 Datadog 和 NGINX ### - -#### 安装 Datadog 代理 #### - -Datadog 代理是 [一个开源软件][4] 能收集和报告你主机的指标,这样就可以使用 Datadog 查看和监控他们。安装代理通常 [仅需要一个命令][5] - -只要你的代理启动并运行着,你会看到你主机的指标报告[在你 Datadog 账号下][6]。 - -![Datadog infrastructure 
list](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/infra_2.png) - -#### 配置 Agent #### - - -接下来,你需要为代理创建一个简单的 NGINX 配置文件。在你系统中代理的配置目录应该 [在这儿][7]。 - -在目录里面的 conf.d/nginx.yaml.example 中,你会发现[一个简单的配置文件][8],你可以编辑并提供 status URL 和可选的标签为每个NGINX 实例: - - init_config: - - instances: - - - nginx_status_url: http://localhost/nginx_status/ - tags: - - instance:foo - -一旦你修改了 status URLs 和其他标签,将配置文件保存为 conf.d/nginx.yaml。 - -#### 重启代理 #### - - -你必须重新启动代理程序来加载新的配置文件。重新启动命令 [在这里][9] 根据平台的不同而不同。 - -#### 检查配置文件 #### - -要检查 Datadog 和 NGINX 是否正确整合,运行 Datadog 的信息命令。每个平台使用的命令[看这儿][10]。 - -如果配置是正确的,你会看到这样的输出: - - Checks - ====== - - [...] - - nginx - ----- - - instance #0 [OK] - - Collected 8 metrics & 0 events - -#### 安装整合 #### - -最后,在你的 Datadog 帐户里面整合 Nginx。这非常简单,你只要点击“Install Integration”按钮在 [NGINX 集成设置][11] 配置表中。 - -![Install integration](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/install.png) - -### 指标! ### - -一旦代理开始报告 NGINX 指标,你会看到 [一个 NGINX 仪表盘][12] 在你 Datadog 可用仪表盘的列表中。 - -基本的 NGINX 仪表盘显示了几个关键指标 [在我们介绍的 NGINX 监控中][13] 的最大值。 (一些指标,特别是请求处理时间,日志分析,Datadog 不提供。) - -你可以轻松创建一个全面的仪表盘来监控你的整个网站区域通过增加额外的图形与 NGINX 外部的重要指标。例如,你可能想监视你 NGINX 主机的host-level 指标,如系统负载。你需要构建一个自定义的仪表盘,只需点击靠近仪表盘的右上角的选项并选择“Clone Dash”来克隆一个默认的 NGINX 仪表盘。 - -![Clone dash](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/clone_2.png) - -你也可以更高级别的监控你的 NGINX 实例通过使用 Datadog 的 [Host Maps][14] -对于实例,color-coding 你所有的 NGINX 主机通过 CPU 使用率来辨别潜在热点。 - -![](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/nginx-host-map-3.png) - -### NGINX 指标 ### - -一旦 Datadog 捕获并可视化你的指标,你可能会希望建立一些监控自动密切的关注你的指标,并当有问题提醒你。下面将介绍一个典型的例子:一个提醒你 NGINX 吞吐量突然下降时的指标监控器。 - -#### 监控 NGINX 吞吐量 #### - -Datadog 指标警报可以是 threshold-based(当指标超过设定值会警报)或 change-based(当指标的变化超过一定范围会警报)。在这种情况下,我们会采取后一种方式,当每秒传入的请求急剧下降时会提醒我们。下降往往意味着有问题。 - -1.**创建一个新的指标监控**. 从 Datadog 的“Monitors”下拉列表中选择“New Monitor”。选择“Metric”作为监视器类型。 - -![NGINX metric monitor](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_1.png) - -2.**定义你的指标监视器**. 我们想知道 NGINX 每秒总的请求量下降的数量。所以我们在基础设施中定义我们感兴趣的 nginx.net.request_per_s度量和。 - -![NGINX metric](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_2.png) - -3.**设置指标警报条件**.我们想要在变化时警报,而不是一个固定的值,所以我们选择“Change Alert”。我们设置监控为无论何时请求量下降了30%以上时警报。在这里,我们使用一个 one-minute 数据窗口来表示“now” 指标的值,警报横跨该间隔内的平均变化,和之前 10 分钟的指标值作比较。 - -![NGINX metric change alert](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_3.png) - -4.**自定义通知**.如果 NGINX 的请求量下降,我们想要通知我们的团队。在这种情况下,我们将给 ops 队的聊天室发送通知,网页呼叫工程师。在“Say what’s happening”中,我们将其命名为监控器并添加一个短消息将伴随该通知并建议首先开始调查。我们使用 @mention 作为一般警告,使用 ops 并用 @pagerduty [专门给 PagerDuty 发警告][15]。 - -![NGINX metric notification](https://d33tyra1llx9zy.cloudfront.net/blog/images/2015-06-nginx/monitor2_step_4v3.png) - -5.**保存集成监控**.点击页面底部的“Save”按钮。你现在监控的关键指标NGINX [work 指标][16],它边打电话给工程师并在它迅速下时随时分页。 - -### 结论 ### - -在这篇文章中,我们已经通过整合 NGINX 与 Datadog 来可视化你的关键指标,并当你的网络基础架构有问题时会通知你的团队。 - -如果你一直使用你自己的 Datadog 账号,你现在应该在 web 环境中有了很大的可视化提高,也有能力根据你的环境创建自动监控,你所使用的模式,指标应该是最有价值的对你的组织。 - -如果你还没有 Datadog 帐户,你可以注册[免费试用][17],并开始监视你的基础架构,应用程序和现在的服务。 - ----------- -这篇文章的来源在 [on GitHub][18]. 问题,错误,补充等?请[联系我们][19]. 
- ------------------------------------------------------------- - -via: https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/ - -作者:K Young -译者:[strugglingyouth](https://github.com/译者ID) -校对:[strugglingyouth](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[1]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ -[2]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#open-source -[3]:https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/#plus -[4]:https://github.com/DataDog/dd-agent -[5]:https://app.datadoghq.com/account/settings#agent -[6]:https://app.datadoghq.com/infrastructure -[7]:http://docs.datadoghq.com/guides/basic_agent_usage/ -[8]:https://github.com/DataDog/dd-agent/blob/master/conf.d/nginx.yaml.example -[9]:http://docs.datadoghq.com/guides/basic_agent_usage/ -[10]:http://docs.datadoghq.com/guides/basic_agent_usage/ -[11]:https://app.datadoghq.com/account/settings#integrations/nginx -[12]:https://app.datadoghq.com/dash/integration/nginx -[13]:https://www.datadoghq.com/blog/how-to-monitor-nginx/ -[14]:https://www.datadoghq.com/blog/introducing-host-maps-know-thy-infrastructure/ -[15]:https://www.datadoghq.com/blog/pagerduty/ -[16]:https://www.datadoghq.com/blog/monitoring-101-collecting-data/#metrics -[17]:https://www.datadoghq.com/blog/how-to-monitor-nginx-with-datadog/#sign-up -[18]:https://github.com/DataDog/the-monitor/blob/master/nginx/how_to_monitor_nginx_with_datadog.md -[19]:https://github.com/DataDog/the-monitor/issues diff --git a/translated/tech/20150728 Process of the Linux kernel building.md b/translated/tech/20150728 Process of the Linux kernel building.md new file mode 100644 index 0000000000..b8ded80179 --- /dev/null +++ b/translated/tech/20150728 Process of the Linux kernel building.md @@ -0,0 +1,674 @@ +如何构建Linux 内核 +================================================================================ +介绍 +-------------------------------------------------------------------------------- + +我不会告诉你怎么在自己的电脑上去构建、安装一个定制化的Linux 内核,这样的[资料](https://encrypted.google.com/search?q=building+linux+kernel#q=building+linux+kernel+from+source+code) 太多了,它们会对你有帮助。本文会告诉你当你在内核源码路径里敲下`make` 时会发生什么。当我刚刚开始学习内核代码时,[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 是我打开的第一个文件,这个文件看起来真令人害怕 :)。那时候这个[Makefile](https://en.wikipedia.org/wiki/Make_%28software%29) 还只包含了`1591` 行代码,当我开始写本文是,这个[Makefile](https://github.com/torvalds/linux/commit/52721d9d3334c1cb1f76219a161084094ec634dc) 已经是第三个候选版本了。 + +这个makefile 是Linux 内核代码的根makefile ,内核构建就始于此处。是的,它的内容很多,但是如果你已经读过内核源代码,你就会发现每个包含代码的目录都有一个自己的makefile。当然了,我们不会去描述每个代码文件是怎么编译链接的。所以我们将只会挑选一些通用的例子来说明问题,而你不会在这里找到构建内核的文档、如何整洁内核代码、[tags](https://en.wikipedia.org/wiki/Ctags) 的生成和[交叉编译](https://en.wikipedia.org/wiki/Cross_compiler) 相关的说明,等等。我们将从`make` 开始,使用标准的内核配置文件,到生成了内核镜像[bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage) 结束。 + +如果你已经很了解[make](https://en.wikipedia.org/wiki/Make_%28software%29) 工具那是最好,但是我也会描述本文出现的相关代码。 + +让我们开始吧 + + +编译内核前的准备 +--------------------------------------------------------------------------------- + +在开始编译前要进行很多准备工作。最主要的就是找到并配置好配置文件,`make` 命令要使用到的参数都需要从这些配置文件获取。现在就让我们深入内核的根`makefile` 吧 + +内核的根`Makefile` 负责构建两个主要的文件:[vmlinux](https://en.wikipedia.org/wiki/Vmlinux) (内核镜像可执行文件)和模块文件。内核的 [Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 从此处开始: + +```Makefile +VERSION = 4 +PATCHLEVEL = 2 +SUBLEVEL = 0 +EXTRAVERSION = -rc3 +NAME = Hurr durr I'ma sheep +``` + 
+这些变量决定了当前内核的版本,并且被使用在很多不同的地方,比如`KERNELVERSION` : + +```Makefile +KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION) +``` + +接下来我们会看到很多`ifeq` 条件判断语句,它们负责检查传给`make` 的参数。内核的`Makefile` 提供了一个特殊的编译选项`make help` ,这个选项可以生成所有的可用目标和一些能传给`make` 的有效的命令行参数。举个例子,`make V=1` 会在构建过程中输出详细的编译信息,第一个`ifeq` 就是检查传递给make的`V=n` 选项。 + +```Makefile +ifeq ("$(origin V)", "command line") + KBUILD_VERBOSE = $(V) +endif +ifndef KBUILD_VERBOSE + KBUILD_VERBOSE = 0 +endif + +ifeq ($(KBUILD_VERBOSE),1) + quiet = + Q = +else + quiet=quiet_ + Q = @ +endif + +export quiet Q KBUILD_VERBOSE +``` + +如果`V=n` 这个选项传给了`make` ,系统就会给变量`KBUILD_VERBOSE` 选项附上`V` 的值,否则的话`KBUILD_VERBOSE` 就会为`0`。然后系统会检查`KBUILD_VERBOSE` 的值,以此来决定`quiet` 和`Q` 的值。符号`@` 控制命令的输出,如果它被放在一个命令之前,这条命令的执行将会是`CC scripts/mod/empty.o`,而不是`Compiling .... scripts/mod/empty.o`(注:CC 在makefile 中一般都是编译命令)。最后系统仅仅导出所有的变量。下一个`ifeq` 语句检查的是传递给`make` 的选项`O=/dir`,这个选项允许在指定的目录`dir` 输出所有的结果文件: + +```Makefile +ifeq ($(KBUILD_SRC),) + +ifeq ("$(origin O)", "command line") + KBUILD_OUTPUT := $(O) +endif + +ifneq ($(KBUILD_OUTPUT),) +saved-output := $(KBUILD_OUTPUT) +KBUILD_OUTPUT := $(shell mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) \ + && /bin/pwd) +$(if $(KBUILD_OUTPUT),, \ + $(error failed to create output directory "$(saved-output)")) + +sub-make: FORCE + $(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \ + -f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS)) + +skip-makefile := 1 +endif # ifneq ($(KBUILD_OUTPUT),) +endif # ifeq ($(KBUILD_SRC),) +``` + +系统会检查变量`KBUILD_SRC`,如果他是空的(第一次执行makefile 时总是空的),并且变量`KBUILD_OUTPUT` 被设成了选项`O` 的值(如果这个选项被传进来了),那么这个值就会用来代表内核源码的顶层目录。下一步会检查变量`KBUILD_OUTPUT` ,如果之前设置过这个变量,那么接下来会做以下几件事: + +* 将变量`KBUILD_OUTPUT` 的值保存到临时变量`saved-output`; +* 尝试创建输出目录; +* 检查创建的输出目录,如果失败了就打印错误; +* 如果成功创建了输出目录,那么就在新目录重新执行`make` 命令(参见选项`-C`)。 + +下一个`ifeq` 语句会检查传递给make 的选项`C` 和`M`: + +```Makefile +ifeq ("$(origin C)", "command line") + KBUILD_CHECKSRC = $(C) +endif +ifndef KBUILD_CHECKSRC + KBUILD_CHECKSRC = 0 +endif + +ifeq ("$(origin M)", "command line") + KBUILD_EXTMOD := $(M) +endif +``` + +第一个选项`C` 会告诉`makefile` 需要使用环境变量`$CHECK` 提供的工具来检查全部`c` 代码,默认情况下会使用[sparse](https://en.wikipedia.org/wiki/Sparse)。第二个选项`M` 会用来编译外部模块(本文不做讨论)。因为设置了这两个变量,系统还会检查变量`KBUILD_SRC`,如果`KBUILD_SRC` 没有被设置,系统会设置变量`srctree` 为`.`: + +```Makefile +ifeq ($(KBUILD_SRC),) + srctree := . +endif + +objtree := . 
+src := $(srctree) +obj := $(objtree) + +export srctree objtree VPATH +``` + +这将会告诉`Makefile` 内核的源码树就在执行make 命令的目录。然后要设置`objtree` 和其他变量为执行make 命令的目录,并且将这些变量导出。接着就是要获取`SUBARCH` 的值,这个变量代表了当前的系统架构(注:一般都指CPU 架构): + +```Makefile +SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \ + -e s/sun4u/sparc64/ \ + -e s/arm.*/arm/ -e s/sa110/arm/ \ + -e s/s390x/s390/ -e s/parisc64/parisc/ \ + -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \ + -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ ) +``` + +如你所见,系统执行[uname](https://en.wikipedia.org/wiki/Uname) 得到机器、操作系统和架构的信息。因为我们得到的是`uname` 的输出,所以我们需要做一些处理在赋给变量`SUBARCH` 。获得`SUBARCH` 之后就要设置`SRCARCH` 和`hfr-arch`,`SRCARCH`提供了硬件架构相关代码的目录,`hfr-arch` 提供了相关头文件的目录: + +```Makefile +ifeq ($(ARCH),i386) + SRCARCH := x86 +endif +ifeq ($(ARCH),x86_64) + SRCARCH := x86 +endif + +hdr-arch := $(SRCARCH) +``` + +注意:`ARCH` 是`SUBARCH` 的别名。如果没有设置过代表内核配置文件路径的变量`KCONFIG_CONFIG`,下一步系统会设置它,默认情况下就是`.config` : + +```Makefile +KCONFIG_CONFIG ?= .config +export KCONFIG_CONFIG +``` +以及编译内核过程中要用到的[shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) + +```Makefile +CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \ + else if [ -x /bin/bash ]; then echo /bin/bash; \ + else echo sh; fi ; fi) +``` + +接下来就要设置一组和编译内核的编译器相关的变量。我们会设置主机的`C` 和`C++` 的编译器及相关配置项: + +```Makefile +HOSTCC = gcc +HOSTCXX = g++ +HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -std=gnu89 +HOSTCXXFLAGS = -O2 +``` + +下一步会去适配代表编译器的变量`CC`,那为什么还要`HOST*` 这些选项呢?这是因为`CC` 是编译内核过程中要使用的目标架构的编译器,但是`HOSTCC` 是要被用来编译一组`host` 程序的(下面我们就会看到)。然后我们就看看变量`KBUILD_MODULES` 和`KBUILD_BUILTIN` 的定义,这两个变量决定了我们要编译什么东西(内核、模块还是其他): + +```Makefile +KBUILD_MODULES := +KBUILD_BUILTIN := 1 + +ifeq ($(MAKECMDGOALS),modules) + KBUILD_BUILTIN := $(if $(CONFIG_MODVERSIONS),1) +endif +``` + +在这我们可以看到这些变量的定义,并且,如果们仅仅传递了`modules` 给`make`,变量`KBUILD_BUILTIN` 会依赖于内核配置选项`CONFIG_MODVERSIONS`。下一步操作是引入下面的文件: + +```Makefile +include scripts/Kbuild.include +``` + +文件`kbuild` ,[Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) 或者又叫做 `Kernel Build System`是一个用来管理构建内核和模块的特殊框架。`kbuild` 文件的语法与makefile 一样。文件[scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include) 为`kbuild` 系统同提供了一些原生的定义。因为我们包含了这个`kbuild` 文件,我们可以看到和不同工具关联的这些变量的定义,这些工具会在内核和模块编译过程中被使用(比如链接器、编译器、二进制工具包[binutils](http://www.gnu.org/software/binutils/),等等): + +```Makefile +AS = $(CROSS_COMPILE)as +LD = $(CROSS_COMPILE)ld +CC = $(CROSS_COMPILE)gcc +CPP = $(CC) -E +AR = $(CROSS_COMPILE)ar +NM = $(CROSS_COMPILE)nm +STRIP = $(CROSS_COMPILE)strip +OBJCOPY = $(CROSS_COMPILE)objcopy +OBJDUMP = $(CROSS_COMPILE)objdump +AWK = awk +... +... +... +``` + +在这些定义好的变量后面,我们又定义了两个变量:`USERINCLUDE` 和`LINUXINCLUDE`。他们包含了头文件的路径(第一个是给用户用的,第二个是给内核用的): + +```Makefile +USERINCLUDE := \ + -I$(srctree)/arch/$(hdr-arch)/include/uapi \ + -Iarch/$(hdr-arch)/include/generated/uapi \ + -I$(srctree)/include/uapi \ + -Iinclude/generated/uapi \ + -include $(srctree)/include/linux/kconfig.h + +LINUXINCLUDE := \ + -I$(srctree)/arch/$(hdr-arch)/include \ + ... 
+``` + +以及标准的C 编译器标志: +```Makefile +KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ + -fno-strict-aliasing -fno-common \ + -Werror-implicit-function-declaration \ + -Wno-format-security \ + -std=gnu89 +``` + +这并不是最终确定的编译器标志,他们还可以在其他makefile 里面更新(比如`arch/` 里面的kbuild)。变量定义完之后,全部会被导出供其他makefile 使用。下面的两个变量`RCS_FIND_IGNORE` 和 `RCS_TAR_IGNORE` 包含了被版本控制系统忽略的文件: + +```Makefile +export RCS_FIND_IGNORE := \( -name SCCS -o -name BitKeeper -o -name .svn -o \ + -name CVS -o -name .pc -o -name .hg -o -name .git \) \ + -prune -o +export RCS_TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn \ + --exclude CVS --exclude .pc --exclude .hg --exclude .git +``` + +这就是全部了,我们已经完成了所有的准备工作,下一个点就是如果构建`vmlinux`. + +直面构建内核 +-------------------------------------------------------------------------------- + +现在我们已经完成了所有的准备工作,根makefile(注:内核根目录下的makefile)的下一步工作就是和编译内核相关的了。在我们执行`make` 命令之前,我们不会在终端看到任何东西。但是现在编译的第一步开始了,这里我们需要从内核根makefile的的[598](https://github.com/torvalds/linux/blob/master/Makefile#L598) 行开始,这里可以看到目标`vmlinux`: + +```Makefile +all: vmlinux + include arch/$(SRCARCH)/Makefile +``` + +不要操心我们略过的从`export RCS_FIND_IGNORE.....` 到`all: vmlinux.....` 这一部分makefile 代码,他们只是负责根据各种配置文件生成不同目标内核的,因为之前我就说了这一部分我们只讨论构建内核的通用途径。 + +目标`all:` 是在命令行如果不指定具体目标时默认使用的目标。你可以看到这里包含了架构相关的makefile(在这里就指的是[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile))。从这一时刻起,我们会从这个makefile 继续进行下去。如我们所见,目标`all` 依赖于根makefile 后面声明的`vmlinux`: + +```Makefile +vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE +``` + +`vmlinux` 是linux 内核的静态链接可执行文件格式。脚本[scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 把不同的编译好的子模块链接到一起形成了vmlinux。第二个目标是`vmlinux-deps`,它的定义如下: + +```Makefile +vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) +``` + +它是由内核代码下的每个顶级目录的`built-in.o` 组成的。之后我们还会检查内核所有的目录,`kbuild` 会编译各个目录下所有的对应`$obj-y` 的源文件。接着调用`$(LD) -r` 把这些文件合并到一个`build-in.o` 文件里。此时我们还没有`vmloinux-deps`, 所以目标`vmlinux` 现在还不会被构建。对我而言`vmlinux-deps` 包含下面的文件 + +``` +arch/x86/kernel/vmlinux.lds arch/x86/kernel/head_64.o +arch/x86/kernel/head64.o arch/x86/kernel/head.o +init/built-in.o usr/built-in.o +arch/x86/built-in.o kernel/built-in.o +mm/built-in.o fs/built-in.o +ipc/built-in.o security/built-in.o +crypto/built-in.o block/built-in.o +lib/lib.a arch/x86/lib/lib.a +lib/built-in.o arch/x86/lib/built-in.o +drivers/built-in.o sound/built-in.o +firmware/built-in.o arch/x86/pci/built-in.o +arch/x86/power/built-in.o arch/x86/video/built-in.o +net/built-in.o +``` + +下一个可以被执行的目标如下: + +```Makefile +$(sort $(vmlinux-deps)): $(vmlinux-dirs) ; +$(vmlinux-dirs): prepare scripts + $(Q)$(MAKE) $(build)=$@ +``` + +就像我们看到的,`vmlinux-dir` 依赖于两部分:`prepare` 和`scripts`。第一个`prepare` 定义在内核的根`makefile` ,准备工作分成三个阶段: + +```Makefile +prepare: prepare0 +prepare0: archprepare FORCE + $(Q)$(MAKE) $(build)=. 
+archprepare: archheaders archscripts prepare1 scripts_basic + +prepare1: prepare2 $(version_h) include/generated/utsrelease.h \ + include/config/auto.conf + $(cmd_crmodverdir) +prepare2: prepare3 outputmakefile asm-generic +``` + +第一个`prepare0` 展开到`archprepare` ,后者又展开到`archheader` 和`archscripts`,这两个变量定义在`x86_64` 相关的[Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)。让我们看看这个文件。`x86_64` 特定的makefile从变量定义开始,这些变量都是和特定架构的配置文件 ([defconfig](https://github.com/torvalds/linux/tree/master/arch/x86/configs),等等)有关联。变量定义之后,这个makefile 定义了编译[16-bit](https://en.wikipedia.org/wiki/Real_mode)代码的编译选项,根据变量`BITS` 的值,如果是`32`, 汇编代码、链接器、以及其它很多东西(全部的定义都可以在[arch/x86/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile)找到)对应的参数就是`i386`,而`64`就对应的是`x86_84`。生成的系统调用列表(syscall table)的makefile 里第一个目标就是`archheaders` : + +```Makefile +archheaders: + $(Q)$(MAKE) $(build)=arch/x86/entry/syscalls all +``` + +这个makefile 里第二个目标就是`archscripts`: + +```Makefile +archscripts: scripts_basic + $(Q)$(MAKE) $(build)=arch/x86/tools relocs +``` + + 我们可以看到`archscripts` 是依赖于根[Makefile](https://github.com/torvalds/linux/blob/master/Makefile)里的`scripts_basic` 。首先我们可以看出`scripts_basic` 是按照[scripts/basic](https://github.com/torvalds/linux/blob/master/scripts/basic/Makefile) 的mekefile 执行make 的: + +```Maklefile +scripts_basic: + $(Q)$(MAKE) $(build)=scripts/basic +``` + +`scripts/basic/Makefile`包含了编译两个主机程序`fixdep` 和`bin2` 的目标: + +```Makefile +hostprogs-y := fixdep +hostprogs-$(CONFIG_BUILD_BIN2C) += bin2c +always := $(hostprogs-y) + +$(addprefix $(obj)/,$(filter-out fixdep,$(always))): $(obj)/fixdep +``` + +第一个工具是`fixdep`:用来优化[gcc](https://gcc.gnu.org/) 生成的依赖列表,然后在重新编译源文件的时候告诉make。第二个工具是`bin2c`,他依赖于内核配置选项`CONFIG_BUILD_BIN2C`,并且它是一个用来将标准输入接口(注:即stdin)收到的二进制流通过标准输出接口(即:stdout)转换成C 头文件的非常小的C 程序。你可以注意到这里有些奇怪的标志,如`hostprogs-y`等。这些标志使用在所有的`kbuild` 文件,更多的信息你可以从[documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt) 获得。在我们的用例`hostprogs-y` 中,他告诉`kbuild` 这里有个名为`fixed` 的程序,这个程序会通过和`Makefile` 相同目录的`fixdep.c` 编译而来。执行make 之后,终端的第一个输出就是`kbuild` 的结果: + +``` +$ make + HOSTCC scripts/basic/fixdep +``` + +当目标`script_basic` 被执行,目标`archscripts` 就会make [arch/x86/tools](https://github.com/torvalds/linux/blob/master/arch/x86/tools/Makefile) 下的makefile 和目标`relocs`: + +```Makefile +$(Q)$(MAKE) $(build)=arch/x86/tools relocs +``` + +代码`relocs_32.c` 和`relocs_64.c` 包含了[重定位](https://en.wikipedia.org/wiki/Relocation_%28computing%29) 的信息,将会被编译,者可以在`make` 的输出中看到: + +```Makefile + HOSTCC arch/x86/tools/relocs_32.o + HOSTCC arch/x86/tools/relocs_64.o + HOSTCC arch/x86/tools/relocs_common.o + HOSTLD arch/x86/tools/relocs +``` + +在编译完`relocs.c` 之后会检查`version.h`: + +```Makefile +$(version_h): $(srctree)/Makefile FORCE + $(call filechk,version.h) + $(Q)rm -f $(old_version_h) +``` + +我们可以在输出看到它: + +``` +CHK include/config/kernel.release +``` + +以及在内核根Makefiel 使用`arch/x86/include/generated/asm`的目标`asm-generic` 来构建`generic` 汇编头文件。在目标`asm-generic` 之后,`archprepare` 就会被完成,所以目标`prepare0` 会接着被执行,如我上面所写: + +```Makefile +prepare0: archprepare FORCE + $(Q)$(MAKE) $(build)=. +``` + +注意`build`,它是定义在文件[scripts/Kbuild.include](https://github.com/torvalds/linux/blob/master/scripts/Kbuild.include),内容是这样的: + +```Makefile +build := -f $(srctree)/scripts/Makefile.build obj +``` + +或者在我们的例子中,他就是当前源码目录路径——`.`: +```Makefile +$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.build obj=. 
+``` + +参数`obj` 会告诉脚本[scripts/Makefile.build](https://github.com/torvalds/linux/blob/master/scripts/Makefile.build) 那些目录包含`kbuild` 文件,脚本以此来寻找各个`kbuild` 文件: + +```Makefile +include $(kbuild-file) +``` + +然后根据这个构建目标。我们这里`.` 包含了[Kbuild](https://github.com/torvalds/linux/blob/master/Kbuild),就用这个文件来生成`kernel/bounds.s` 和`arch/x86/kernel/asm-offsets.s`。这样目标`prepare` 就完成了它的工作。`vmlinux-dirs` 也依赖于第二个目标——`scripts` ,`scripts`会编译接下来的几个程序:`filealias`,`mk_elfconfig`,`modpost`等等。`scripts/host-programs` 编译完之后,我们的目标`vmlinux-dirs` 就可以开始编译了。第一步,我们先来理解一下`vmlinux-dirs` 都包含了那些东西。在我们的例子中它包含了接下来要使用的内核目录的路径: + +``` +init usr arch/x86 kernel mm fs ipc security crypto block +drivers sound firmware arch/x86/pci arch/x86/power +arch/x86/video net lib arch/x86/lib +``` + +我们可以在内核的根[Makefile](https://github.com/torvalds/linux/blob/master/Makefile) 里找到`vmlinux-dirs` 的定义: + +```Makefile +vmlinux-dirs := $(patsubst %/,%,$(filter %/, $(init-y) $(init-m) \ + $(core-y) $(core-m) $(drivers-y) $(drivers-m) \ + $(net-y) $(net-m) $(libs-y) $(libs-m))) + +init-y := init/ +drivers-y := drivers/ sound/ firmware/ +net-y := net/ +libs-y := lib/ +... +... +... +``` + +这里我们借助函数`patsubst` 和`filter`去掉了每个目录路径里的符号`/`,并且把结果放到`vmlinux-dirs` 里。所以我们就有了`vmlinux-dirs` 里的目录的列表,以及下面的代码: + +```Makefile +$(vmlinux-dirs): prepare scripts + $(Q)$(MAKE) $(build)=$@ +``` + +符号`$@` 在这里代表了`vmlinux-dirs`,这就表明程序会递归遍历从`vmlinux-dirs` 以及它内部的全部目录(依赖于配置),并且在对应的目录下执行`make` 命令。我们可以在输出看到结果: + +``` + CC init/main.o + CHK include/generated/compile.h + CC init/version.o + CC init/do_mounts.o + ... + CC arch/x86/crypto/glue_helper.o + AS arch/x86/crypto/aes-x86_64-asm_64.o + CC arch/x86/crypto/aes_glue.o + ... + AS arch/x86/entry/entry_64.o + AS arch/x86/entry/thunk_64.o + CC arch/x86/entry/syscall_64.o +``` + +每个目录下的源代码将会被编译并且链接到`built-io.o` 里: + +``` +$ find . -name built-in.o +./arch/x86/crypto/built-in.o +./arch/x86/crypto/sha-mb/built-in.o +./arch/x86/net/built-in.o +./init/built-in.o +./usr/built-in.o +... +... +``` + +好了,所有的`built-in.o` 都构建完了,现在我们回到目标`vmlinux` 上。你应该还记得,目标`vmlinux` 是在内核的根makefile 里。在链接`vmlinux` 之前,系统会构建[samples](https://github.com/torvalds/linux/tree/master/samples), [Documentation](https://github.com/torvalds/linux/tree/master/Documentation)等等,但是如上文所述,我不会在本文描述这些。 + +```Makefile +vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE + ... + ... + +$(call if_changed,link-vmlinux) +``` + +你可以看到,`vmlinux` 的调用脚本[scripts/link-vmlinux.sh](https://github.com/torvalds/linux/blob/master/scripts/link-vmlinux.sh) 的主要目的是把所有的`built-in.o` 链接成一个静态可执行文件、生成[System.map](https://en.wikipedia.org/wiki/System.map)。 最后我们来看看下面的输出: + +``` + LINK vmlinux + LD vmlinux.o + MODPOST vmlinux.o + GEN .version + CHK include/generated/compile.h + UPD include/generated/compile.h + CC init/version.o + LD init/built-in.o + KSYM .tmp_kallsyms1.o + KSYM .tmp_kallsyms2.o + LD vmlinux + SORTEX vmlinux + SYSMAP System.map +``` + +以及内核源码树根目录下的`vmlinux` 和`System.map` + +``` +$ ls vmlinux System.map +System.map vmlinux +``` + +这就是全部了,`vmlinux` 构建好了,下一步就是创建[bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage). 
+ +制作bzImage +-------------------------------------------------------------------------------- + +`bzImage` 就是压缩了的linux 内核镜像。我们可以在构建了`vmlinux` 之后通过执行`make bzImage` 获得`bzImage`。同时我们可以仅仅执行`make` 而不带任何参数也可以生成`bzImage` ,因为它是在[arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile) 里预定义的、默认生成的镜像: + +```Makefile +all: bzImage +``` + +让我们看看这个目标,他能帮助我们理解这个镜像是怎么构建的。我已经说过了`bzImage` 师被定义在[arch/x86/kernel/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/Makefile),定义如下: + +```Makefile +bzImage: vmlinux + $(Q)$(MAKE) $(build)=$(boot) $(KBUILD_IMAGE) + $(Q)mkdir -p $(objtree)/arch/$(UTS_MACHINE)/boot + $(Q)ln -fsn ../../x86/boot/bzImage $(objtree)/arch/$(UTS_MACHINE)/boot/$@ +``` + +在这里我们可以看到第一次为boot 目录执行`make`,在我们的例子里是这样的: + +```Makefile +boot := arch/x86/boot +``` + +现在的主要目标是编译目录`arch/x86/boot` 和`arch/x86/boot/compressed` 的代码,构建`setup.bin` 和`vmlinux.bin`,然后用这两个文件生成`bzImage`。第一个目标是定义在[arch/x86/boot/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/Makefile) 的`$(obj)/setup.elf`: + +```Makefile +$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE + $(call if_changed,ld) +``` + +我们已经在目录`arch/x86/boot`有了链接脚本`setup.ld`,并且将变量`SETUP_OBJS` 扩展到`boot` 目录下的全部源代码。我们可以看看第一个输出: + +```Makefile + AS arch/x86/boot/bioscall.o + CC arch/x86/boot/cmdline.o + AS arch/x86/boot/copy.o + HOSTCC arch/x86/boot/mkcpustr + CPUSTR arch/x86/boot/cpustr.h + CC arch/x86/boot/cpu.o + CC arch/x86/boot/cpuflags.o + CC arch/x86/boot/cpucheck.o + CC arch/x86/boot/early_serial_console.o + CC arch/x86/boot/edd.o +``` + +下一个源码文件是[arch/x86/boot/header.S](https://github.com/torvalds/linux/blob/master/arch/x86/boot/header.S),但是我们不能现在就编译它,因为这个目标依赖于下面两个头文件: + +```Makefile +$(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h +``` + +第一个头文件`voffset.h` 是使用`sed` 脚本生成的,包含用`nm` 工具从`vmlinux` 获取的两个地址: + +```C +#define VO__end 0xffffffff82ab0000 +#define VO__text 0xffffffff81000000 +``` + +这两个地址是内核的起始和结束地址。第二个头文件`zoffset.h` 在[arch/x86/boot/compressed/Makefile](https://github.com/torvalds/linux/blob/master/arch/x86/boot/compressed/Makefile) 可以看出是依赖于目标`vmlinux`的: + +```Makefile +$(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE + $(call if_changed,zoffset) +``` + +目标`$(obj)/compressed/vmlinux` 依赖于变量`vmlinux-objs-y` —— 说明需要编译目录[arch/x86/boot/compressed](https://github.com/torvalds/linux/tree/master/arch/x86/boot/compressed) 下的源代码,然后生成`vmlinux.bin`, `vmlinux.bin.bz2`, 和编译工具 - `mkpiggy`。我们可以在下面的输出看出来: + +```Makefile + LDS arch/x86/boot/compressed/vmlinux.lds + AS arch/x86/boot/compressed/head_64.o + CC arch/x86/boot/compressed/misc.o + CC arch/x86/boot/compressed/string.o + CC arch/x86/boot/compressed/cmdline.o + OBJCOPY arch/x86/boot/compressed/vmlinux.bin + BZIP2 arch/x86/boot/compressed/vmlinux.bin.bz2 + HOSTCC arch/x86/boot/compressed/mkpiggy +``` + +`vmlinux.bin` 是去掉了调试信息和注释的`vmlinux` 二进制文件,加上了占用了`u32` (注:即4-Byte)的长度信息的`vmlinux.bin.all` 压缩后就是`vmlinux.bin.bz2`。其中`vmlinux.bin.all` 包含了`vmlinux.bin` 和`vmlinux.relocs`(注:vmlinux 的重定位信息),其中`vmlinux.relocs` 是`vmlinux` 经过程序`relocs` 处理之后的`vmlinux` 镜像(见上文所述)。我们现在已经获取到了这些文件,汇编文件`piggy.S` 将会被`mkpiggy` 生成、然后编译: + +```Makefile + MKPIGGY arch/x86/boot/compressed/piggy.S + AS arch/x86/boot/compressed/piggy.o +``` + +这个汇编文件会包含经过计算得来的、压缩内核的偏移信息。处理完这个汇编文件,我们就可以看到`zoffset` 生成了: + +```Makefile + ZOFFSET arch/x86/boot/zoffset.h +``` + +现在`zoffset.h` 和`voffset.h` 已经生成了,[arch/x86/boot](https://github.com/torvalds/linux/tree/master/arch/x86/boot/) 里的源文件可以继续编译: + +```Makefile + AS arch/x86/boot/header.o + CC arch/x86/boot/main.o + CC arch/x86/boot/mca.o + CC 
arch/x86/boot/memory.o + CC arch/x86/boot/pm.o + AS arch/x86/boot/pmjump.o + CC arch/x86/boot/printf.o + CC arch/x86/boot/regs.o + CC arch/x86/boot/string.o + CC arch/x86/boot/tty.o + CC arch/x86/boot/video.o + CC arch/x86/boot/video-mode.o + CC arch/x86/boot/video-vga.o + CC arch/x86/boot/video-vesa.o + CC arch/x86/boot/video-bios.o +``` + +所有的源代码会被编译,他们最终会被链接到`setup.elf` : + +```Makefile + LD arch/x86/boot/setup.elf +``` + + +或者: + +``` +ld -m elf_x86_64 -T arch/x86/boot/setup.ld arch/x86/boot/a20.o arch/x86/boot/bioscall.o arch/x86/boot/cmdline.o arch/x86/boot/copy.o arch/x86/boot/cpu.o arch/x86/boot/cpuflags.o arch/x86/boot/cpucheck.o arch/x86/boot/early_serial_console.o arch/x86/boot/edd.o arch/x86/boot/header.o arch/x86/boot/main.o arch/x86/boot/mca.o arch/x86/boot/memory.o arch/x86/boot/pm.o arch/x86/boot/pmjump.o arch/x86/boot/printf.o arch/x86/boot/regs.o arch/x86/boot/string.o arch/x86/boot/tty.o arch/x86/boot/video.o arch/x86/boot/video-mode.o arch/x86/boot/version.o arch/x86/boot/video-vga.o arch/x86/boot/video-vesa.o arch/x86/boot/video-bios.o -o arch/x86/boot/setup.elf +``` + +最后两件事是创建包含目录`arch/x86/boot/*` 下的编译过的代码的`setup.bin`: + +``` +objcopy -O binary arch/x86/boot/setup.elf arch/x86/boot/setup.bin +``` + +以及从`vmlinux` 生成`vmlinux.bin` : + +``` +objcopy -O binary -R .note -R .comment -S arch/x86/boot/compressed/vmlinux arch/x86/boot/vmlinux.bin +``` + +最后,我们编译主机程序[arch/x86/boot/tools/build.c](https://github.com/torvalds/linux/blob/master/arch/x86/boot/tools/build.c),它将会用来把`setup.bin` 和`vmlinux.bin` 打包成`bzImage`: + +``` +arch/x86/boot/tools/build arch/x86/boot/setup.bin arch/x86/boot/vmlinux.bin arch/x86/boot/zoffset.h arch/x86/boot/bzImage +``` + +实际上`bzImage` 就是把`setup.bin` 和`vmlinux.bin` 连接到一起。最终我们会看到输出结果,就和那些用源码编译过内核的同行的结果一样: + +``` +Setup is 16268 bytes (padded to 16384 bytes). 
+System is 4704 kB +CRC 94a88f9a +Kernel: arch/x86/boot/bzImage is ready (#5) +``` + + +全部结束。 + +结论 +================================================================================ + +这就是本文的最后一节。本文我们了解了编译内核的全部步骤:从执行`make` 命令开始,到最后生成`bzImage`。我知道,linux 内核的makefiles 和构建linux 的过程第一眼看起来可能比较迷惑,但是这并不是很难。希望本文可以帮助你理解构建linux 内核的整个流程。 + + +链接 +================================================================================ + +* [GNU make util](https://en.wikipedia.org/wiki/Make_%28software%29) +* [Linux kernel top Makefile](https://github.com/torvalds/linux/blob/master/Makefile) +* [cross-compilation](https://en.wikipedia.org/wiki/Cross_compiler) +* [Ctags](https://en.wikipedia.org/wiki/Ctags) +* [sparse](https://en.wikipedia.org/wiki/Sparse) +* [bzImage](https://en.wikipedia.org/wiki/Vmlinux#bzImage) +* [uname](https://en.wikipedia.org/wiki/Uname) +* [shell](https://en.wikipedia.org/wiki/Shell_%28computing%29) +* [Kbuild](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/kbuild.txt) +* [binutils](http://www.gnu.org/software/binutils/) +* [gcc](https://gcc.gnu.org/) +* [Documentation](https://github.com/torvalds/linux/blob/master/Documentation/kbuild/makefiles.txt) +* [System.map](https://en.wikipedia.org/wiki/System.map) +* [Relocation](https://en.wikipedia.org/wiki/Relocation_%28computing%29) + +-------------------------------------------------------------------------------- + +via: https://github.com/0xAX/linux-insides/blob/master/Misc/how_kernel_compiled.md + +译者:[译者ID](https://github.com/oska874) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md b/translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md deleted file mode 100644 index f90a1ce76d..0000000000 --- a/translated/tech/20150730 Howto Configure Nginx as Rreverse Proxy or Load Balancer with Weave and Docker.md +++ /dev/null @@ -1,126 +0,0 @@ -如何使用Weave以及Docker搭建Nginx反向代理/负载均衡服务器 -================================================================================ -Hi, 今天我们将会学习如何使用如何使用Weave和Docker搭建Nginx反向代理/负载均衡服务器。Weave创建一个虚拟网络将跨主机部署的Docker容器连接在一起并使它们自动暴露给外部世界。它让我们更加专注于应用的开发,而不是基础架构。Weave提供了一个如此棒的环境,仿佛它的所有容器都属于同个网络,不需要端口/映射/连接等的配置。容器中的应用提供的服务在weave网络中可以轻易地被外部世界访问,不论你的容器运行在哪里。在这个教程里我们将会使用weave快速并且轻易地将nginx web服务器部署为一个负载均衡器,反向代理一个运行在Amazon Web Services里面多个节点上的docker容器中的简单php应用。这里我们将会介绍WeaveDNS,它提供一个简单的方式让容器利用主机名找到彼此,不需要改变代码,并且能够告诉其他容器连接到这些主机名。 - -在这篇教程里,我们需要一个运行的容器集合来配置nginx负载均衡服务器。最简单轻松的方法就是使用Weave在ubuntu的docker容器中搭建nginx负载均衡服务器。 - -### 1. 搭建AWS实例 ### - -首先,我们需要搭建Amzaon Web Service实例,这样才能在ubuntu下用weave跑docker容器。我们将会使用[AWS CLI][1]来搭建和配置两个AWS EC2实例。在这里,我们使用最小的有效实例,t1.micro。我们需要一个有效的**Amazon Web Services账户**用以AWS命令行界面的搭建和配置。我们先在AWS命令行界面下使用下面的命令将github上的weave仓库克隆下来。 - - $ git clone http://github.com/fintanr/weave-gs - $ cd weave-gs/aws-nginx-ubuntu-simple - -在克隆完仓库之后,我们执行下面的脚本,这个脚本将会部署两个t1.micro实例,每个实例中都是ubuntu作为操作系统并用weave跑着docker容器。 - - $ sudo ./demo-aws-setup.sh - -在这里,我们将会在以后用到这些实例的IP地址。这些地址储存在一个weavedemo.env文件中,这个文件在执行demo-aws-setup.sh脚本的期间被创建。为了获取这些IP地址,我们需要执行下面的命令,命令输出类似下面的信息。 - - $ cat weavedemo.env - - export WEAVE_AWS_DEMO_HOST1=52.26.175.175 - export WEAVE_AWS_DEMO_HOST2=52.26.83.141 - export WEAVE_AWS_DEMO_HOSTCOUNT=2 - export WEAVE_AWS_DEMO_HOSTS=(52.26.175.175 52.26.83.141) - -请注意这些不是固定的IP地址,AWS会为我们的实例动态地分配IP地址。 - -我们在bash下执行下面的命令使环境变量生效。 - - . 
./weavedemo.env - -### 2. 启动Weave and WeaveDNS ### - -在安装完实例之后,我们将会在每台主机上启动weave以及weavedns。Weave以及weavedns使得我们能够轻易地将容器部署到一个全新的基础架构以及配置中, 不需要改变代码,也不需要去理解像Ambassador容器以及Link机制之类的概念。下面是在第一台主机上启动weave以及weavedns的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 - $ sudo weave launch - $ sudo weave launch-dns 10.2.1.1/24 - -下一步,我也准备在第二台主机上启动weave以及weavedns。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 - $ sudo weave launch $WEAVE_AWS_DEMO_HOST1 - $ sudo weave launch-dns 10.2.1.2/24 - -### 3. 启动应用容器 ### - -现在,我们准备跨两台主机启动六个容器,这两台主机都用Apache2 Web服务实例跑着简单的php网站。为了在第一个Apache2 Web服务器实例跑三个容器, 我们将会使用下面的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 - $ sudo weave run --with-dns 10.3.1.1/24 -h ws1.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.2/24 -h ws2.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.3/24 -h ws3.weave.local fintanr/weave-gs-nginx-apache - -在那之后,我们将会在第二个实例上启动另外三个容器,请使用下面的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST2 - $ sudo weave run --with-dns 10.3.1.4/24 -h ws4.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.5/24 -h ws5.weave.local fintanr/weave-gs-nginx-apache - $ sudo weave run --with-dns 10.3.1.6/24 -h ws6.weave.local fintanr/weave-gs-nginx-apache - -注意: 在这里,--with-dns选项告诉容器使用weavedns来解析主机名,-h x.weave.local则使得weavedns能够解析指定主机。 - -### 4. 启动Nginx容器 ### - -在应用容器运行得有如意料中的稳定之后,我们将会启动nginx容器,它将会在六个应用容器服务之间轮询并提供反向代理或者负载均衡。 为了启动nginx容器,请使用下面的命令。 - - ssh -i weavedemo-key.pem ubuntu@$WEAVE_AWS_DEMO_HOST1 - $ sudo weave run --with-dns 10.3.1.7/24 -ti -h nginx.weave.local -d -p 80:80 fintanr/weave-gs-nginx-simple - -因此,我们的nginx容器在$WEAVE_AWS_DEMO_HOST1上公开地暴露成为一个http服务器。 - -### 5. 测试负载均衡服务器 ### - -为了测试我们的负载均衡服务器是否可以工作,我们执行一段可以发送http请求给nginx容器的脚本。我们将会发送6个请求,这样我们就能看到nginx在一次的轮询中服务于每台web服务器之间。 - - $ ./access-aws-hosts.sh - - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws1.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws2.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws3.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws4.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws5.weave.local", - "date" : "2015-06-26 12:24:23" - } - { - "message" : "Hello Weave - nginx example", - "hostname" : "ws6.weave.local", - "date" : "2015-06-26 12:24:23" - } - -### 结束语 ### - -我们最终成功地将nginx配置成一个反向代理/负载均衡服务器,通过使用weave以及运行在AWS(Amazon Web Service)EC2之中的ubuntu服务器里面的docker。从上面的步骤输出可以清楚的看到我们已经成功地配置了nginx。我们可以看到请求在一次循环中被发送到6个应用容器,这些容器在Apache2 Web服务器中跑着PHP应用。在这里,我们部署了一个容器化的PHP应用,使用nginx横跨多台在AWS EC2上的主机而不需要改变代码,利用weavedns使得每个容器连接在一起,只需要主机名就够了,眼前的这些便捷, 都要归功于weave以及weavedns。 如果你有任何的问题、建议、反馈,请在评论中注明,这样我们才能够做得更好,谢谢:-) - --------------------------------------------------------------------------------- - -via: http://linoxide.com/linux-how-to/nginx-load-balancer-weave-docker/ - -作者:[Arun Pyasi][a] -译者:[dingdongnigetou](https://github.com/dingdongnigetou) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://linoxide.com/author/arunp/ -[1]:http://console.aws.amazon.com/ diff --git a/translated/tech/20150803 Managing Linux Logs.md b/translated/tech/20150803 Managing Linux Logs.md deleted file mode 100644 
index 59b41aa831..0000000000 --- a/translated/tech/20150803 Managing Linux Logs.md +++ /dev/null @@ -1,418 +0,0 @@ -Linux日志管理 -================================================================================ -管理日志的一个关键典型做法是集中或整合你的日志到一个地方,特别是如果你有许多服务器或多层级架构。我们将告诉你为什么这是一个好主意然后给出如何更容易的做这件事的一些小技巧。 - -### 集中管理日志的好处 ### - -如果你有很多服务器,查看单独的一个日志文件可能会很麻烦。现代的网站和服务经常包括许多服务器层级,分布式的负载均衡器,还有更多。这将花费很长时间去获取正确的日志,甚至花更长时间在登录服务器的相关问题上。没什么比发现你找的信息没有被捕获更沮丧的了,或者本能保留答案时正好在重启后丢失了日志文件。 - -集中你的日志使他们查找更快速,可以帮助你更快速的解决产品问题。你不用猜测那个服务器存在问题,因为所有的日志在同一个地方。此外,你可以使用更强大的工具去分析他们,包括日志管理解决方案。一些解决方案能[转换纯文本日志][1]为一些字段,更容易查找和分析。 - -集中你的日志也可以是他们更易于管理: - -- 他们更安全,当他们备份归档一个单独区域时意外或者有意的丢失。如果你的服务器宕机或者无响应,你可以使用集中的日志去调试问题。 -- 你不用担心ssh或者低效的grep命令需要更多的资源在陷入困境的系统。 -- 你不用担心磁盘占满,这个能让你的服务器死机。 -- 你能保持你的产品服务安全性,只是为了查看日志无需给你所有团队登录权限。给你的团队从中心区域访问日志权限更安全。 - -随着集中日志管理,你仍需处理由于网络联通性不好或者用尽大量网络带宽导致不能传输日志到中心区域的风险。在下面的章节我们将要讨论如何聪明的解决这些问题。 - -### 流行的日志归集工具 ### - -在Linux上最常见的日志归集是通过使用系统日志守护进程或者代理。系统日志守护进程支持本地日志的采集,然后通过系统日志协议传输日志到中心服务器。你可以使用很多流行的守护进程来归集你的日志文件: - -- [rsyslog][2]是一个轻量后台程序在大多数Linux分支上已经安装。 -- [syslog-ng][3]是第二流行的Linux系统日志后台程序。 -- [logstash][4]是一个重量级的代理,他可以做更多高级加工和分析。 -- [fluentd][5]是另一个有高级处理能力的代理。 - -Rsyslog是集中日志数据最流行的后台程序因为他在大多数Linux分支上是被默认安装的。你不用下载或安装它,并且它是轻量的,所以不需要占用你太多的系统资源。 - -如果你需要更多先进的过滤或者自定义分析功能,如果你不在乎额外的系统封装Logstash是下一个最流行的选择。 - -### 配置Rsyslog.conf ### - -既然rsyslog成为最广泛使用的系统日志程序,我们将展示如何配置它为日志中心。全局配置文件位于/etc/rsyslog.conf。它加载模块,设置全局指令,和包含应用特有文件位于目录/etc/rsyslog.d中。这些目录包含/etc/rsyslog.d/50-default.conf命令rsyslog写系统日志到文件。在[rsyslog文档][6]你可以阅读更多相关配置。 - -rsyslog配置语言是是[RainerScript][7]。你建立特定的日志输入就像输出他们到另一个目标。Rsyslog已经配置为系统日志输入的默认标准,所以你通常只需增加一个输出到你的日志服务器。这里有一个rsyslog输出到一个外部服务器的配置例子。在举例中,**BEBOP**是一个服务器的主机名,所以你应该替换为你的自己的服务器名。 - - action(type="omfwd" protocol="tcp" target="BEBOP" port="514") - -你可以发送你的日志到一个有丰富存储的日志服务器来存储,提供查询,备份和分析。如果你正存储日志在文件系统,然后你应该建立[日志转储][8]来防止你的磁盘报满。 - -作为一种选择,你可以发送这些日志到一个日志管理方案。如果你的解决方案是安装在本地你可以发送到您的本地系统文档中指定主机和端口。如果你使用基于云提供商,你将发送他们到你的提供商特定的主机名和端口。 - -### 日志目录 ### - -你可以归集一个目录或者匹配一个通配符模式的所有文件。nxlog和syslog-ng程序支持目录和通配符(*)。 - -rsyslog的通用形式不支持直接的监控目录。一种解决方案,你可以设置一个定时任务去监控这个目录的新文件,然后配置rsyslog来发送这些文件到目的地,比如你的日志管理系统。作为一个例子,日志管理提供商Loggly有一个开源版本的[目录监控脚本][9]。 - -### 哪个协议: UDP, TCP, or RELP? ### - -当你使用网络传输数据时,你可以选择三个主流的协议。UDP在你自己的局域网是最常用的,TCP是用在互联网。如果你不能失去日志,就要使用更高级的RELP协议。 - -[UDP][10]发送一个数据包,那只是一个简单的包信息。它是一个只外传的协议,所以他不发送给你回执(ACK)。它只尝试发送包。当网络拥堵时,UDP通常会巧妙的降级或者丢弃日志。它通常使用在类似局域网的可靠网络。 - -[TCP][11]通过多个包和返回确认发送流信息。TCP会多次尝试发送数据包,但是受限于[TCP缓存][12]大小。这是在互联网上发送送日志最常用的协议。 - -[RELP][13]是这三个协议中最可靠的,但是它是为rsyslog创建而且很少有行业应用。它在应用层接收数据然后再发出是否有错误。确认你的目标也支持这个协议。 - -### 用磁盘辅助队列可靠的传送 ### - -如果rsyslog在存储日志时遭遇错误,例如一个不可用网络连接,他能将日志排队直到连接还原。队列日志默认被存储在内存里。无论如何,内存是有限的并且如果问题仍然存在,日志会超出内存容量。 - -**警告:如果你只存储日志到内存,你可能会失去数据。** - -Rsyslog能在内存被占满时将日志队列放到磁盘。[磁盘辅助队列][14]使日志的传输更可靠。这里有一个例子如何配置rsyslog的磁盘辅助队列: - - $WorkDirectory /var/spool/rsyslog # where to place spool files - $ActionQueueFileName fwdRule1 # unique name prefix for spool files - $ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible) - $ActionQueueSaveOnShutdown on # save messages to disk on shutdown - $ActionQueueType LinkedList # run asynchronously - $ActionResumeRetryCount -1 # infinite retries if host is down - -### 使用TLS加密日志 ### - -当你的安全隐私数据是一个关心的事,你应该考虑加密你的日志。如果你使用纯文本在互联网传输日志,嗅探器和中间人可以读到你的日志。如果日志包含私人信息、敏感的身份数据或者政府管制数据,你应该加密你的日志。rsyslog程序能使用TLS协议加密你的日志保证你的数据更安全。 - -建立TLS加密,你应该做如下任务: - -1. 生成一个[证书授权][15](CA)。在/contrib/gnutls有一些简单的证书,只是有助于测试,但是你需要创建自己的产品证书。如果你正在使用一个日志管理服务,它将有一个证书给你。 -1. 为你的服务器生成一个[数字证书][16]使它能SSL运算,或者使用你自己的日志管理服务提供商的一个数字证书。 -1. 
配置你的rsyslog程序来发送TLS加密数据到你的日志管理系统。 - -这有一个rsyslog配置TLS加密的例子。替换CERT和DOMAIN_NAME为你自己的服务器配置。 - - $DefaultNetstreamDriverCAFile /etc/rsyslog.d/keys/ca.d/CERT.crt - $ActionSendStreamDriver gtls - $ActionSendStreamDriverMode 1 - $ActionSendStreamDriverAuthMode x509/name - $ActionSendStreamDriverPermittedPeer *.DOMAIN_NAME.com - -### 应用日志的最佳管理方法 ### - -除Linux默认创建的日志之外,归集重要的应用日志也是一个好主意。几乎所有基于Linux的服务器的应用把他们的状态信息写入到独立专门的日志文件。这包括数据库产品,像PostgreSQL或者MySQL,网站服务器像Nginx或者Apache,防火墙,打印和文件共享服务还有DNS服务等等。 - -管理员要做的第一件事是安装一个应用后配置它。Linux应用程序典型的有一个.conf文件在/etc目录里。它也可能在其他地方,但是那是大家找配置文件首先会看的地方。 - -根据应用程序有多复杂多庞大,可配置参数的数量可能会很少或者上百行。如前所述,大多数应用程序可能会在某种日志文件写他们的状态:配置文件是日志设置的地方定义了其他的东西。 - -如果你不确定它在哪,你可以使用locate命令去找到它: - - [root@localhost ~]# locate postgresql.conf - /usr/pgsql-9.4/share/postgresql.conf.sample - /var/lib/pgsql/9.4/data/postgresql.conf - -#### 设置一个日志文件的标准位置 #### - -Linux系统一般保存他们的日志文件在/var/log目录下。如果是,很好,如果不是,你也许想在/var/log下创建一个专用目录?为什么?因为其他程序也在/var/log下保存他们的日志文件,如果你的应用报错多于一个日志文件 - 也许每天一个或者每次重启一个 - 通过这么大的目录也许有点难于搜索找到你想要的文件。 - -如果你有多于一个的应用实例在你网络运行,这个方法依然便利。想想这样的情景,你也许有一打web服务器在你的网络运行。当排查任何一个盒子的问题,你将知道确切的位置。 - -#### 使用一个标准的文件名 #### - -给你的应用最新的日志使用一个标准的文件名。这使一些事变得容易,因为你可以监控和追踪一个单独的文件。很多应用程序在他们的日志上追加一种时间戳。他让rsyslog更难于找到最新的文件和设置文件监控。一个更好的方法是使用日志转储增加时间戳到老的日志文件。这样更易去归档和历史查询。 - -#### 追加日志文件 #### - -日志文件会在每个应用程序重启后被覆盖?如果这样,我们建议关掉它。每次重启app后应该去追加日志文件。这样,你就可以追溯重启前最后的日志。 - -#### 日志文件追加 vs. 转储 #### - -虽然应用程序每次重启后写一个新日志文件,如何保存当前日志?追加到一个单独文件,巨大的文件?Linux系统不是因频繁重启或者崩溃出名的:应用程序可以运行很长时间甚至不间歇,但是也会使日志文件非常大。如果你查询分析上周发生连接错误的原因,你可能无疑的要在成千上万行里搜索。 - -我们建议你配置应用每天半晚转储它的日志文件。 - -为什么?首先它将变得可管理。找一个有特定日期部分的文件名比遍历一个文件指定日期的条目更容易。文件也小的多:你不用考虑当你打开一个日志文件时vi僵住。第二,如果你正发送日志到另一个位置 - 也许每晚备份任务拷贝到归集日志服务器 - 这样不会消耗你的网络带宽。最后第三点,这样帮助你做日志保持。如果你想剔除旧的日志记录,这样删除超过指定日期的文件比一个应用解析一个大文件更容易。 - -#### 日志文件的保持 #### - -你保留你的日志文件多长时间?这绝对可以归结为业务需求。你可能被要求保持一个星期的日志信息,或者管理要求保持一年的数据。无论如何,日志需要在一个时刻或其他从服务器删除。 - -在我们看来,除非必要,只在线保持最近一个月的日志文件,加上拷贝他们到第二个地方如日志服务器。任何比这更旧的日志可以被转到一个单独的介质上。例如,如果你在AWS上,你的旧日志可以被拷贝到Glacier。 - -#### 给日志单独的磁盘分区 #### - -Linux最典型的方式通常建议挂载到/var目录到一个单独度的文件系统。这是因为这个目录的高I/Os。我们推荐挂在/var/log目录到一个单独的磁盘系统下。这样可以节省与主应用的数据I/O竞争。另外,如果一些日志文件变的太多,或者一个文件变的太大,不会占满整个磁盘。 - -#### 日志条目 #### - -每个日志条目什么信息应该被捕获? - -这依赖于你想用日志来做什么。你只想用它来排除故障,或者你想捕获所有发生的事?这是一个规则条件去捕获每个用户在运行什么或查看什么? - -如果你正用日志做错误排查的目的,只保存错误,报警或者致命信息。没有理由去捕获调试信息,例如,应用也许默认记录了调试信息或者另一个管理员也许为了故障排查使用打开了调试信息,但是你应该关闭它,因为它肯定会很快的填满空间。在最低限度上,捕获日期,时间,客户端应用名,原ip或者客户端主机名,执行动作和它自身信息。 - -#### 一个PostgreSQL的实例 #### - -作为一个例子,让我们看看vanilla(这是一个开源论坛)PostgreSQL 9.4安装主配置文件。它叫做postgresql.conf与其他Linux系统中的配置文件不同,他不保存在/etc目录下。在代码段下,我们可以在我们的Centos 7服务器的/var/lib/pgsql目录下看见: - - root@localhost ~]# vi /var/lib/pgsql/9.4/data/postgresql.conf - ... - #------------------------------------------------------------------------------ - # ERROR REPORTING AND LOGGING - #------------------------------------------------------------------------------ - # - Where to Log - - log_destination = 'stderr' - # Valid values are combinations of - # stderr, csvlog, syslog, and eventlog, - # depending on platform. csvlog - # requires logging_collector to be on. - # This is used when logging to stderr: - logging_collector = on - # Enable capturing of stderr and csvlog - # into log files. Required to be on for - # csvlogs. - # (change requires restart) - # These are only used if logging_collector is on: - log_directory = 'pg_log' - # directory where log files are written, - # can be absolute or relative to PGDATA - log_filename = 'postgresql-%a.log' # log file name pattern, - # can include strftime() escapes - # log_file_mode = 0600 . 
- # creation mode for log files, - # begin with 0 to use octal notation - log_truncate_on_rotation = on # If on, an existing log file with the - # same name as the new log file will be - # truncated rather than appended to. - # But such truncation only occurs on - # time-driven rotation, not on restarts - # or size-driven rotation. Default is - # off, meaning append to existing files - # in all cases. - log_rotation_age = 1d - # Automatic rotation of logfiles will happen after that time. 0 disables. - log_rotation_size = 0 # Automatic rotation of logfiles will happen after that much log output. 0 disables. - # These are relevant when logging to syslog: - #syslog_facility = 'LOCAL0' - #syslog_ident = 'postgres' - # This is only relevant when logging to eventlog (win32): - #event_source = 'PostgreSQL' - # - When to Log - - #client_min_messages = notice # values in order of decreasing detail: - # debug5 - # debug4 - # debug3 - # debug2 - # debug1 - # log - # notice - # warning - # error - #log_min_messages = warning # values in order of decreasing detail: - # debug5 - # debug4 - # debug3 - # debug2 - # debug1 - # info - # notice - # warning - # error - # log - # fatal - # panic - #log_min_error_statement = error # values in order of decreasing detail: - # debug5 - # debug4 - # debug3 - # debug2 - # debug1 - # info - # notice - # warning - # error - # log - # fatal - # panic (effectively off) - #log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements - # and their durations, > 0 logs only - # statements running at least this number - # of milliseconds - # - What to Log - #debug_print_parse = off - #debug_print_rewritten = off - #debug_print_plan = off - #debug_pretty_print = on - #log_checkpoints = off - #log_connections = off - #log_disconnections = off - #log_duration = off - #log_error_verbosity = default - # terse, default, or verbose messages - #log_hostname = off - log_line_prefix = '< %m >' # special values: - # %a = application name - # %u = user name - # %d = database name - # %r = remote host and port - # %h = remote host - # %p = process ID - # %t = timestamp without milliseconds - # %m = timestamp with milliseconds - # %i = command tag - # %e = SQL state - # %c = session ID - # %l = session line number - # %s = session start timestamp - # %v = virtual transaction ID - # %x = transaction ID (0 if none) - # %q = stop here in non-session - # processes - # %% = '%' - # e.g. '<%u%%%d> ' - #log_lock_waits = off # log lock waits >= deadlock_timeout - #log_statement = 'none' # none, ddl, mod, all - #log_temp_files = -1 # log temporary files equal or larger - # than the specified size in kilobytes;5# -1 disables, 0 logs all temp files5 - log_timezone = 'Australia/ACT' - -虽然大多数参数被加上了注释,他们呈现了默认数值。我们可以看见日志文件目录是pg_log(log_directory参数),文件名应该以postgresql开头(log_filename参数),文件每天转储一次(log_rotation_age参数)然后日志记录以时间戳开头(log_line_prefix参数)。特别说明有趣的是log_line_prefix参数:你可以包含很多整体丰富的信息在这。 - -看/var/lib/pgsql/9.4/data/pg_log目录下展现给我们这些文件: - - [root@localhost ~]# ls -l /var/lib/pgsql/9.4/data/pg_log - total 20 - -rw-------. 1 postgres postgres 1212 May 1 20:11 postgresql-Fri.log - -rw-------. 1 postgres postgres 243 Feb 9 21:49 postgresql-Mon.log - -rw-------. 1 postgres postgres 1138 Feb 7 11:08 postgresql-Sat.log - -rw-------. 1 postgres postgres 1203 Feb 26 21:32 postgresql-Thu.log - -rw-------. 
1 postgres postgres 326 Feb 10 01:20 postgresql-Tue.log - -所以日志文件命只有工作日命名的标签。我们可以改变他。如何做?在postgresql.conf配置log_filename参数。 - -查看一个日志内容,它的条目仅以日期时间开头: - - [root@localhost ~]# cat /var/lib/pgsql/9.4/data/pg_log/postgresql-Fri.log - ... - < 2015-02-27 01:21:27.020 EST >LOG: received fast shutdown request - < 2015-02-27 01:21:27.025 EST >LOG: aborting any active transactions - < 2015-02-27 01:21:27.026 EST >LOG: autovacuum launcher shutting down - < 2015-02-27 01:21:27.036 EST >LOG: shutting down - < 2015-02-27 01:21:27.211 EST >LOG: database system is shut down - -### 集中应用日志 ### - -#### 使用Imfile监控日志 #### - -习惯上,应用通常记录他们数据在文件里。文件容易在一个机器上寻找但是多台服务器上就不是很恰当了。你可以设置日志文件监控然后当新的日志被添加到底部就发送事件到一个集中服务器。在/etc/rsyslog.d/里创建一个新的配置文件然后增加一个文件输入,像这样: - - $ModLoad imfile - $InputFilePollInterval 10 - $PrivDropToGroup adm - ----------- - - # Input for FILE1 - $InputFileName /FILE1 - $InputFileTag APPNAME1 - $InputFileStateFile stat-APPNAME1 #this must be unique for each file being polled - $InputFileSeverity info - $InputFilePersistStateInterval 20000 - $InputRunFileMonitor - -替换FILE1和APPNAME1位你自己的文件和应用名称。Rsyslog将发送它到你配置的输出中。 - -#### 本地套接字日志与Imuxsock #### - -套接字类似UNIX文件句柄,所不同的是套接字内容是由系统日志程序读取到内存中,然后发送到目的地。没有文件需要被写入。例如,logger命令发送他的日志到这个UNIX套接字。 - -如果你的服务器I/O有限或者你不需要本地文件日志,这个方法使系统资源有效利用。这个方法缺点是套接字有队列大小的限制。如果你的系统日志程序宕掉或者不能保持运行,然后你可能会丢失日志数据。 - -rsyslog程序将默认从/dev/log套接字中种读取,但是你要用[imuxsock输入模块][17]如下命令使它生效: - - $ModLoad imuxsock - -#### UDP日志与Imupd #### - -一些应用程序使用UDP格式输出日志数据,这是在网络上或者本地传输日志文件的标准系统日志协议。你的系统日志程序收集这些日志然后处理他们或者用不同的格式传输他们。交替地,你可以发送日志到你的日志服务器或者到一个日志管理方案中。 - -使用如下命令配置rsyslog来接收标准端口514的UDP系统日志数据: - - $ModLoad imudp - ----------- - - $UDPServerRun 514 - -### 用Logrotate管理日志 ### - -日志转储是当日志到达指定的时期时自动归档日志文件的方法。如果不介入,日志文件一直增长,会用尽磁盘空间。最后他们将破坏你的机器。 - -logrotate实例能随着日志的日期截取你的日志,腾出空间。你的新日志文件保持文件名。你的旧日志文件被重命名为后缀加上数字。每次logrotate实例运行,一个新文件被建立然后现存的文件被逐一重命名。你来决定何时旧文件被删除或归档的阈值。 - -当logrotate拷贝一个文件,新的文件已经有一个新的索引节点,这会妨碍rsyslog监控新文件。你可以通过增加copytruncate参数到你的logrotate定时任务来缓解这个问题。这个参数拷贝现有的日志文件内容到新文件然后从现有文件截短这些内容。这个索引节点从不改变,因为日志文件自己保持不变;它的内容是一个新文件。 - -logrotate实例使用的主配置文件是/etc/logrotate.conf,应用特有设置在/etc/logrotate.d/目录下。DigitalOcean有一个详细的[logrotate教程][18] - -### 管理很多服务器的配置 ### - -当你只有很少的服务器,你可以登陆上去手动配置。一旦你有几打或者更多服务器,你可以用高级工具使这变得更容易和更可扩展。基本上,所有的事情就是拷贝你的rsyslog配置到每个服务器,然后重启rsyslog使更改生效。 - -#### Pssh #### - -这个工具可以让你在很多服务器上并行的运行一个ssh命令。使用pssh部署只有一小部分的服务器。如果你其中一个服务器失败,然后你必须ssh到失败的服务器,然后手动部署。如果你有很多服务器失败,那么手动部署他们会话费很长时间。 - -#### Puppet/Chef #### - -Puppet和Chef是两个不同的工具,他们能在你的网络按你规定的标准自动的配置所有服务器。他们的报表工具使你知道关于错误然后定期重新同步。Puppet和Chef有一些狂热的支持者。如果你不确定那个更适合你的部署配置管理,你可以领会一下[InfoWorld上这两个工具的对比][19] - -一些厂商也提供一些配置rsyslog的模块或者方法。这有一个Loggly上Puppet模块的例子。它提供给rsyslog一个类,你可以添加一个标识令牌: - - node 'my_server_node.example.net' { - # Send syslog events to Loggly - class { 'loggly::rsyslog': - customer_token => 'de7b5ccd-04de-4dc4-fbc9-501393600000', - } - } - -#### Docker #### - -Docker使用容器去运行应用不依赖底层服务。所有东西都从内部的容器运行,你可以想象为一个单元功能。ZDNet有一个深入文章关于在你的数据中心[使用Docker][20]。 - -这有很多方式从Docker容器记录日志,包括链接到一个日志容器,记录到一个共享卷,或者直接在容器里添加一个系统日志代理。其中最流行的日志容器叫做[logspout][21]。 - -#### 供应商的脚本或代理 #### - -大多数日志管理方案提供一些脚本或者代理,从一个或更多服务器比较简单的发送数据。重量级代理会耗尽额外的系统资源。一些供应商像Loggly提供配置脚本,来使用现存的系统日志程序更轻松。这有一个Loggly上的例子[脚本][22],它能运行在任意数量的服务器上。 - --------------------------------------------------------------------------------- - -via: http://www.loggly.com/ultimate-guide/logging/managing-linux-logs/ - -作者:[Jason Skowronski][a1] -作者:[Amy Echeverri][a2] -作者:[Sadequl Hussain][a3] -译者:[wyangsun](https://github.com/wyangsun) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a1]:https://www.linkedin.com/in/jasonskowronski -[a2]:https://www.linkedin.com/in/amyecheverri -[a3]:https://www.linkedin.com/pub/sadequl-hussain/14/711/1a7 -[1]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.esrreycnpnbl -[2]:http://www.rsyslog.com/ -[3]:http://www.balabit.com/network-security/syslog-ng/opensource-logging-system -[4]:http://logstash.net/ -[5]:http://www.fluentd.org/ -[6]:http://www.rsyslog.com/doc/rsyslog_conf.html -[7]:http://www.rsyslog.com/doc/master/rainerscript/index.html -[8]:https://docs.google.com/document/d/11LXZxWlkNSHkcrCWTUdnLRf_CiZz9kK0cr3yGM_BU_0/edit#heading=h.eck7acdxin87 -[9]:https://www.loggly.com/docs/file-monitoring/ -[10]:http://www.networksorcery.com/enp/protocol/udp.htm -[11]:http://www.networksorcery.com/enp/protocol/tcp.htm -[12]:http://blog.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html -[13]:http://www.rsyslog.com/doc/relp.html -[14]:http://www.rsyslog.com/doc/queues.html -[15]:http://www.rsyslog.com/doc/tls_cert_ca.html -[16]:http://www.rsyslog.com/doc/tls_cert_machine.html -[17]:http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html -[18]:https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10 -[19]:http://www.infoworld.com/article/2614204/data-center/puppet-or-chef--the-configuration-management-dilemma.html -[20]:http://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/ -[21]:https://github.com/progrium/logspout -[22]:https://www.loggly.com/docs/sending-logs-unixlinux-system-setup/ diff --git a/translated/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md b/translated/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md new file mode 100644 index 0000000000..4d14bbc904 --- /dev/null +++ b/translated/tech/20150813 Howto Run JBoss Data Virtualization GA with OData in Docker Container.md @@ -0,0 +1,105 @@ +如何在 Docker 容器中运行支持 OData 的 JBoss 数据虚拟化 GA +Howto Run JBoss Data Virtualization GA with OData in Docker Container +================================================================================ +大家好,我们今天来学习如何在一个 Docker 容器中运行支持 OData(译者注:Open Data Protocol,开放数据协议) 的 JBoss 数据虚拟化 6.0.0 GA(译者注:GA,General Availability,具体定义可以查看[WIKI][4])。JBoss 数据虚拟化是数据提供和集成解决方案平台,有多种分散的数据源时,转换为一种数据源统一对待,在正确的时间将所需数据传递给任意的应用或者用户。JBoss 数据虚拟化可以帮助我们将数据快速组合和转换为可重用的商业友好的数据模型,通过开放标准接口简单可用。它提供全面的数据抽取、联合、集成、转换,以及传输功能,将来自一个或多个源的数据组合为可重复使用和共享的灵活数据。要了解更多关于 JBoss 数据虚拟化的信息,可以查看它的[官方文档][1]。Docker 是一个提供开放平台用于打包,装载和以轻量级容器运行任何应用的开源平台。使用 Docker 容器我们可以轻松处理和启用支持 OData 的 JBoss 数据虚拟化。 + +下面是该指南中在 Docker 容器中运行支持 OData 的 JBoss 数据虚拟化的简单步骤。 + +### 1. 克隆仓库 ### + +首先,我们要用 git 命令从 [https://github.com/jbossdemocentral/dv-odata-docker-integration-demo][2] 克隆带数据虚拟化的 OData 仓库。假设我们的机器上运行着 Ubuntu 15.04 linux 发行版。我们要使用 apt-get 命令安装 git。 + + # apt-get install git + +安装完 git 之后,我们运行下面的命令克隆仓库。 + + # git clone https://github.com/jbossdemocentral/dv-odata-docker-integration-demo + + Cloning into 'dv-odata-docker-integration-demo'... + remote: Counting objects: 96, done. + remote: Total 96 (delta 0), reused 0 (delta 0), pack-reused 96 + Unpacking objects: 100% (96/96), done. + Checking connectivity... done. + +### 2. 
下载 JBoss 数据虚拟化安装器 ### + +现在,我们需要从下载页 [http://www.jboss.org/products/datavirt/download/][3] 下载 JBoss 数据虚拟化安装器。下载了 **jboss-dv-installer-6.0.0.GA-redhat-4.jar** 后,我们把它保存在名为 **software** 的目录下。 + +### 3. 创建 Docker 镜像 ### + +下一步,下载了 JBoss 数据虚拟化安装器之后,我们打算使用 Dockerfile 和刚从仓库中克隆的资源创建 docker 镜像。 + + # cd dv-odata-docker-integration-demo/ + # docker build -t jbossdv600 . + + ... + Step 22 : USER jboss + ---> Running in 129f701febd0 + ---> 342941381e37 + Removing intermediate container 129f701febd0 + Step 23 : EXPOSE 8080 9990 31000 + ---> Running in 61e6d2c26081 + ---> 351159bb6280 + Removing intermediate container 61e6d2c26081 + Step 24 : CMD $JBOSS_HOME/bin/standalone.sh -c standalone.xml -b 0.0.0.0 -bmanagement 0.0.0.0 + ---> Running in a9fed69b3000 + ---> 407053dc470e + Removing intermediate container a9fed69b3000 + Successfully built 407053dc470e + +注意:在这里我们假设你已经安装了 docker 并正在运行。 + +### 4. 启动 Docker 容器 ### + +创建了支持 oData 的 JBoss 数据虚拟化 Docker 镜像之后,我们打算运行 docker 容器并用 -P 标签指定端口。我们运行下面的命令来实现。 + + # docker run -p 8080:8080 -d -t jbossdv600 + + 7765dee9cd59c49ca26850e88f97c21f46859d2dc1d74166353d898773214c9c + +### 5. 获取容器 IP ### + +启动了 Docker 容器之后,我们想要获取正在运行的 docker 容器的 IP 地址。要做到这点,我们运行后面添加了正在运行容器 id 号的 docker inspect 命令。 + + # docker inspect <$containerID> + + ... + "NetworkSettings": { + "Bridge": "", + "EndpointID": "3e94c5900ac5954354a89591a8740ce2c653efde9232876bc94878e891564b39", + "Gateway": "172.17.42.1", + "GlobalIPv6Address": "", + "GlobalIPv6PrefixLen": 0, + "HairpinMode": false, + "IPAddress": "172.17.0.8", + "IPPrefixLen": 16, + "IPv6Gateway": "", + "LinkLocalIPv6Address": "", + "LinkLocalIPv6PrefixLen": 0, + +### 6. Web 界面 ### +### 6. Web Interface ### + +现在,如果一切如期望的那样进行,当我们用浏览器打开 http://container-ip:8080/ 和 http://container-ip:9990 时会看到支持 oData 的 JBoss 数据虚拟化登录界面和 JBoss 管理界面。管理验证的用户名和密码分别是 admin 和 redhat1!数据虚拟化验证的用户名和密码都是 user。之后,我们可以通过 web 界面在内容间导航。 + +**注意**: 强烈建议在第一次登录后尽快修改密码。 + +### 总结 ### + +终于我们成功地运行了跑着支持 OData 多源虚拟数据库的 JBoss 数据虚拟化 的 Docker 容器。JBoss 数据虚拟化真的是一个很棒的平台,它为多种不同来源的数据进行虚拟化,并将它们转换为商业友好的数据模型,产生通过开放标准接口简单可用的数据。使用 Docker 技术可以简单、安全、快速地部署支持 OData 多源虚拟数据库的 JBoss 数据虚拟化。如果你有任何疑问、建议或者反馈,请在下面的评论框中写下来,以便我们可以改进和更新内容。非常感谢!Enjoy:-) + +-------------------------------------------------------------------------------- + +via: http://linoxide.com/linux-how-to/run-jboss-data-virtualization-ga-odata-docker-container/ + +作者:[Arun Pyasi][a] +译者:[ictlyh](http://www.mutouxiaogui.cn/blog) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://linoxide.com/author/arunp/ +[1]:http://www.redhat.com/en/technologies/jboss-middleware/data-virtualization +[2]:https://github.com/jbossdemocentral/dv-odata-docker-integration-demo +[3]:http://www.jboss.org/products/datavirt/download/ +[4]:https://en.wikipedia.org/wiki/Software_release_life_cycle#General_availability_.28GA.29 \ No newline at end of file diff --git a/translated/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md b/translated/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md new file mode 100644 index 0000000000..96bf143533 --- /dev/null +++ b/translated/tech/20150817 Linux FAQs with Answers--How to count the number of threads in a process on Linux.md @@ -0,0 +1,51 @@ + +Linux 有问必答 - 如何在 Linux 中统计一个进程的线程数 +================================================================================ +> **问题**: 我正在运行一个程序,它在运行时会派生出多个线程。我想知道程序在运行时会有多少线程。在 Linux 
中检查进程的线程数最简单的方法是什么? + +如果你想看到 Linux 中每个进程的线程数,有以下几种方法可以做到这一点。 + +### 方法一: /proc ### + + proc 伪文件系统,它驻留在 /proc 目录,这是最简单的方法来查看任何活动进程的线程数。 /proc 目录以可读文本文件形式输出,提供现有进程和系统硬件相关的信息如 CPU, interrupts, memory, disk, 等等. + + $ cat /proc//status + +上面的命令将显示进程 的详细信息,包括过程状态(例如, sleeping, running),父进程 PID,UID,GID,使用的文件描述符的数量,以及上下文切换的数量。输出也包括**进程创建的总线程数**如下所示。 + + Threads: + +例如,检查 PID 20571进程的线程数: + + $ cat /proc/20571/status + +![](https://farm6.staticflickr.com/5649/20341236279_f4a4d809d2_b.jpg) + +输出表明该进程有28个线程。 + +或者,你可以在 /proc//task 中简单的统计目录的数量,如下所示。 + + $ ls /proc//task | wc + +这是因为,对于一个进程中创建的每个线程,在 /proc//task 中会创建一个相应的目录,命名为其线程 ID。由此在 /proc//task 中目录的总数表示在进程中线程的数目。 + +### 方法二: ps ### + +如果你是功能强大的 ps 命令的忠实用户,这个命令也可以告诉你一个进程(用“H”选项)的线程数。下面的命令将输出进程的线程数。“h”选项需要放在前面。 + + $ ps hH p | wc -l + +如果你想监视一个进程的不同线程消耗的硬件资源(CPU & memory),请参阅[此教程][1]。(注:此文我们翻译过) + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/number-of-threads-process-linux.html + +作者:[Dan Nanni][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni +[1]:http://ask.xmodulo.com/view-threads-process-linux.html diff --git a/translated/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md b/translated/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md new file mode 100644 index 0000000000..9db7231a68 --- /dev/null +++ b/translated/tech/20150817 Linux FAQs with Answers--How to fix Wireshark GUI freeze on Linux desktop.md @@ -0,0 +1,64 @@ + +Linux 有问必答--如何解决 Linux 桌面上的 Wireshark GUI 死机 +================================================================================ +> **问题**: 当我试图在 Ubuntu 上的 Wireshark 中打开一个 pre-recorded 数据包转储时,它的 UI 突然死机,在我发起 Wireshark 的终端出现了下面的错误和警告。我该如何解决这个问题? 
+ +Wireshark 是一个基于 GUI 的数据包捕获和嗅探工具。该工具被网络管理员普遍使用,网络安全工程师或开发人员对于各种任务的 packet-level 网络分析是必需的,例如在网络故障,漏洞测试,应用程序调试,或逆向协议工程是必需的。 Wireshark 允许记录存活数据包,并通过便捷的图形用户界面浏览他们的协议首部和有效负荷。 + +![](https://farm1.staticflickr.com/722/20584224675_f4d7a59474_c.jpg) + +这是 Wireshark 的 UI,尤其是在 Ubuntu 桌面下运行,有时会挂起或冻结出现以下错误,而你是向上或向下滚动分组列表视图时,就开始加载一个 pre-recorded 包转储文件。 + + + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject' + (wireshark:3480): GLib-GObject-CRITICAL **: g_object_set_qdata_full: assertion 'G_IS_OBJECT (object)' failed + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkRange' + (wireshark:3480): Gtk-CRITICAL **: gtk_range_get_adjustment: assertion 'GTK_IS_RANGE (range)' failed + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkOrientable' + (wireshark:3480): Gtk-CRITICAL **: gtk_orientable_get_orientation: assertion 'GTK_IS_ORIENTABLE (orientable)' failed + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkScrollbar' + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GtkWidget' + (wireshark:3480): GLib-GObject-WARNING **: invalid unclassed pointer in cast to 'GObject' + (wireshark:3480): GLib-GObject-CRITICAL **: g_object_get_qdata: assertion 'G_IS_OBJECT (object)' failed + (wireshark:3480): Gtk-CRITICAL **: gtk_widget_set_name: assertion 'GTK_IS_WIDGET (widget)' failed + +显然,这个错误是由 Wireshark 和叠加滚动条之间的一些不兼容造成的,在最新的 Ubuntu 桌面还没有被解决(例如,Ubuntu 15.04 的桌面)。 + +一种避免 Wireshark 的 UI 卡死的办法就是 **暂时禁用叠加滚动条**。在 Wireshark 上有两种方法来禁用叠加滚动条,这取决于你在桌面上如何启动 Wireshark 的。 + +### 命令行解决方法 ### + +叠加滚动条可以通过设置"**LIBOVERLAY_SCROLLBAR**"环境变量为“0”来被禁止。 + +所以,如果你是在终端使用命令行启动 Wireshark 的,你可以在 Wireshark 中禁用叠加滚动条,如下所示。 + +打开你的 .bashrc 文件,并定义以下 alias。 + + alias wireshark="LIBOVERLAY_SCROLLBAR=0 /usr/bin/wireshark" + +### 桌面启动解决方法 ### + +如果你是使用桌面启动器启动的 Wireshark,你可以编辑它的桌面启动器文件。 + + $ sudo vi /usr/share/applications/wireshark.desktop + +查找以"Exec"开头的行,并如下更改。 + + Exec=env LIBOVERLAY_SCROLLBAR=0 wireshark %f + +虽然这种解决方法将有利于所有桌面用户的 system-wide,但它将无法升级 Wireshark。如果你想保留修改的 .desktop 文件,如下所示将它复制到你的主目录。 + + $ cp /usr/share/applications/wireshark.desktop ~/.local/share/applications/ + +-------------------------------------------------------------------------------- + +via: http://ask.xmodulo.com/fix-wireshark-gui-freeze-linux-desktop.html + +作者:[Dan Nanni][a] +译者:[strugglingyouth](https://github.com/strugglingyouth) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ask.xmodulo.com/author/nanni + diff --git a/translated/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md b/translated/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md new file mode 100644 index 0000000000..5ddb31d1ea --- /dev/null +++ b/translated/tech/20150824 Basics Of NetworkManager Command Line Tool Nmcli.md @@ -0,0 +1,155 @@ +网络管理命令行工具基础,Nmcli +================================================================================ +![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/08/networking1.jpg) + +### 介绍 ### + +在本教程中,我们会在CentOS / RHEL 7中讨论网络管理工具,也叫**nmcli**。那些使用**ifconfig**的用户应该在CentOS 7中避免使用这个命令。 + +让我们用nmcli工具配置一些网络设置。 + +### 要得到系统中所有接口的地址信息 ### + + [root@localhost ~]# ip addr show + +**示例输出:** + + 1: lo: mtu 65536 qdisc noqueue state UNKNOWN + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + 
inet6 ::1/128 scope host + valid_lft forever preferred_lft forever + 2: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000 + link/ether 00:0c:29:67:2f:4c brd ff:ff:ff:ff:ff:ff + inet 192.168.1.51/24 brd 192.168.1.255 scope global eno16777736 + valid_lft forever preferred_lft forever + inet6 fe80::20c:29ff:fe67:2f4c/64 scope link + valid_lft forever preferred_lft forever + +#### 检索与连接的接口相关的数据包统计 #### + + [root@localhost ~]# ip -s link show eno16777736 + +**示例输出:** + +![unxmen_(011)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0111.png) + +#### 得到路由配置 #### + + [root@localhost ~]# ip route + +示例输出: + + default via 192.168.1.1 dev eno16777736 proto static metric 100 + 192.168.1.0/24 dev eno16777736 proto kernel scope link src 192.168.1.51 metric 100 + +#### 分析主机/网站路径 #### + + [root@localhost ~]# tracepath unixmen.com + +输出像traceroute,但是更加完整。 + +![unxmen_0121](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_01211.png) + +### nmcli 工具 ### + +**Nmcli** 是一个非常丰富和灵活的命令行工具。nmcli使用的情况有: + +- **设备** – 正在使用的网络接口 +- **连接** – 一组配置设置,对于一个单一的设备可以有多个连接,可以在连接之间切换。 + +#### 找出有多少连接服务于多少设备 #### + + [root@localhost ~]# nmcli connection show + +![unxmen_(013)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_013.png) + +#### 得到特定连接的详情 #### + + [root@localhost ~]# nmcli connection show eno1 + +**示例输出:** + +![unxmen_(014)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0141.png) + +#### 得到网络设备状态 #### + + [root@localhost ~]# nmcli device status + +---------- + + DEVICE TYPE STATE CONNECTION + eno16777736 ethernet connected eno1 + lo loopback unmanaged -- + +#### 使用“dhcp”创建新的连接 #### + + [root@localhost ~]# nmcli connection add con-name "dhcp" type ethernet ifname eno16777736 + +这里, + +- **Connection add** – 添加新的连接 +- **con-name** – 连接名 +- **type** – 设备类型 +- **ifname** – 接口名 + +这个命令会使用dhcp协议添加连接 + +**示例输出:** + + Connection 'dhcp' (163a6822-cd50-4d23-bb42-8b774aeab9cb) successfully added. + +#### 不同过dhcp分配IP,使用“static”添加地址 #### + + [root@localhost ~]# nmcli connection add con-name "static" ifname eno16777736 autoconnect no type ethernet ip4 192.168.1.240 gw4 192.168.1.1 + +**示例输出:** + + Connection 'static' (8e69d847-03d7-47c7-8623-bb112f5cc842) successfully added. + +**更新连接:** + + [root@localhost ~]# nmcli connection up eno1 + +Again Check, whether ip address is changed or not. 
+再检查一遍,ip地址是否已经改变 + + [root@localhost ~]# ip addr show + +![unxmen_(015)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_0151.png) + +#### 添加DNS设置到静态连接中 #### + + [root@localhost ~]# nmcli connection modify "static" ipv4.dns 202.131.124.4 + +#### 添加额外的DNS值 #### + +[root@localhost ~]# nmcli connection modify "static" +ipv4.dns 8.8.8.8 + +**注意**:要使用额外的**+**符号,并且要是**+ipv4.dns**,而不是**ip4.dns**。 + + +添加一个额外的ip地址: + + [root@localhost ~]# nmcli connection modify "static" +ipv4.addresses 192.168.200.1/24 + +使用命令刷新设置: + + [root@localhost ~]# nmcli connection up eno1 + +![unxmen_(016)](http://www.unixmen.com/wp-content/uploads/2015/08/unxmen_016.png) + +你会看见,设置生效了。 + +完结 + +-------------------------------------------------------------------------------- + +via: http://www.unixmen.com/basics-networkmanager-command-line-tool-nmcli/ + +作者:Rajneesh Upadhyay +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md b/translated/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md new file mode 100644 index 0000000000..91aa23d6aa --- /dev/null +++ b/translated/tech/20150824 Fix No Bootable Device Found Error After Installing Ubuntu.md @@ -0,0 +1,97 @@ +修复安装完 Ubuntu 后无可引导设备错误 +================================================================================ +通常情况下,我启动 Ubuntu 和 Windows 双系统,但是这次我决定完全消除 Windows 纯净安装 Ubuntu。纯净安装 Ubuntu 完成后,结束时屏幕输出 **no bootable device found** 而不是进入 GRUB 界面。显然,安装搞砸了 UEFI 引导设置。 + +![安装完 Ubuntu 后无可引导设备](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_1.jpg) + +我会告诉你我是如何修复**在宏碁笔记本上安装 Ubuntu 后出现无可引导设备错误**。我声明了我使用的是宏碁灵越 R13,这很重要,因为我们需要更改固件设置,而这些设置可能因制造商和设备有所不同。 + +因此在你开始这里介绍的步骤之前,先看一下发生这个错误时我计算机的状态: + +- 我的宏碁灵越 R13 预装了 Windows8.1 和 UEFI 引导管理器 +- 关闭了 Secure boot(我的笔记本刚维修过,维修人员又启用了它,直到出现了问题我才发现)。你可以阅读这篇博文了解[如何在宏碁笔记本中关闭 secure boot][1] +- 我通过选择清除所有东西安装 Ubuntu,例如现有的 Windows 8.1,各种分区等。 +- 安装完 Ubuntu 之后,从硬盘启动时我看到无可引导设备错误。但能从 USB 设备正常启动 + +在我看来,没有禁用 secure boot 可能是这个错误的原因。但是,我没有数据支撑我的观点。这仅仅是预感。有趣的是,双系统启动 Windows 和 Linux 经常会出现这两个 Grub 问题: + +- [error: no such partition grub rescue][2] +- [Minimal BASH like line editing is supported][3] + +如果你遇到类似的情况,你可以试试我的修复方法。 + +### 修复安装完 Ubuntu 后无可引导设备错误 ### + +请原谅我没有丰富的图片。我的一加相机不能很好地拍摄笔记本屏幕。 + +#### 第一步 #### + +关闭电源并进入 boot 设置。我需要在宏碁灵越 R13 上快速地按 Fn+F2。如果你使用固态硬盘的话要按的非常快,因为固态硬盘启动速度很快。取决于你的制造商,你可能要用 Del 或 F10 或者 F12。 + +#### 第二步 #### + +在 boot 设置中,确保启用了 Secure Boot。它在 Boot 标签里。 + +#### 第三步 #### + +进入到 Security 标签,查找 “Select an UEFI file as trusted for executing” 并敲击回车。 + +![修复无可引导设备错误](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_2.jpg) + +特意说明,我们这一步是要在你的设备中添加 UEFI 设置文件(安装 Ubuntu 的时候生成)到可信 UEFI 启动。如果你记得的话,UEFI 启动的主要目的是提供安全性,由于(可能)没有禁用 Secure Boot,设备不会试图从新安装的操作系统中启动。添加它到类似白名单的可信列表,会使设备从 Ubuntu UEFI 文件启动。 + +#### 第四步 #### + +在这里你可以看到你的硬盘,例如 HDD0。如果你有多块硬盘,我希望你记住你安装 Ubuntu 的那块。同样敲击回车。 + +![在 Boot 设置中修复无可引导设备错误](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_3.jpg) + +#### 第五步 #### + +你应该可以看到 ,敲击回车。 + +![在 UEFI 中修复设置](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_4.jpg) + +#### 第六步 #### + +在下一个屏幕中你会看到 。耐心点,马上就好了。 + +![安装完 Ubuntu 后修复启动错误](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_5.jpg) + +#### 第七步 
#### + +你可以看到 shimx64.efi,grubx64.efi 和 MokManager.efi 文件。重要的是 shimx64.efi。选中它并敲击回车。 + + +![修复无可引导设备](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_6.jpg) + +在下一个屏幕中,输入 Yes 并敲击回车。 + +![无可引导设备_7](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_7.jpg) + +#### 第八步 #### + +当我们添加它到可信 EFI 文件并执行时,按 F10 保存并退出。 + +![保存并退出固件设置](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/No_Bootable_Device_Found_8.jpg) + +重启你的系统,这时你就可以看到熟悉的 GRUB 界面了。就算你没有看到 Grub 界面,起码也再也不会看到“无可引导设备”。你应该可以进入 Ubuntu 了。 + +如果修复后搞乱了你的 Grub 界面,但你确实能登录系统,你可以重装 Grub 并进入到 Ubuntu 熟悉的紫色 Grub 界面。 + +我希望这篇指南能帮助你修复无可引导设备错误。欢迎提出任何疑问、建议或者感谢。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/no-bootable-device-found-ubuntu/ + +作者:[Abhishek][a] +译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://itsfoss.com/disable-secure-boot-in-acer/ +[2]:http://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/ +[3]:http://itsfoss.com/fix-minimal-bash-line-editing-supported-grub-error-linux/ \ No newline at end of file diff --git a/translated/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md b/translated/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md new file mode 100644 index 0000000000..1bcc05a080 --- /dev/null +++ b/translated/tech/20150824 How To Add Hindi And Devanagari Support In Antergos And Arch Linux.md @@ -0,0 +1,46 @@ +为Antergos与Arch Linux添加印度语和梵文支持 +================================================================================ +![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Indian-languages.jpg) + +你们到目前或许知道,我最近一直在尝试体验[Antergos Linux][1]。在安装完[Antergos][2]后我所首先注意到的一些事情是在默认的Chromium浏览器中**没法正确显示印度语脚本**。 + +这是一件奇怪的事情,在我之前桌面Linux的体验中是从未遇到过的。起初,我认为是浏览器的问题,所以我安装了Firefox,然而问题依旧,Firefox也不能正确显示印度语。和Chromium不显示任何东西不同的是,Firefox确实显示了一些东西,但是毫无可读性。 + +![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_1.jpeg) +Chromium中的印度语显示 + + +![No hindi support in Arch Linux based Antergos](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_2.jpeg) +Firefox中的印度语显示 + +奇怪吧?那么,默认情况下基于Arch的Antergos Linux中没有印度语的支持吗?我没有去验证,但是我假设其它基于梵语脚本的印地语之类会产生同样的问题。 + +在这个快速指南中,我打算为大家演示如何来添加梵语支持,以便让印度语和其它印地语都能正确显示。 + +### 在Antergos和Arch Linux中添加印地语支持 ### + +打开终端,使用以下命令: + + sudo yaourt -S ttf-indic-otf + +键入密码,它会提供给你对于印地语的译文支持。 + +重启Firefox,会马上正确显示印度语了,但是它需要一次重启来显示印度语。因此,我建议你在安装了印地语字体后**重启你的系统**。 + +![Adding Hindi display support in Arch based Antergos Linux](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/08/Hindi_Support_Antergos_Arch_linux_4.jpeg) + +我希望这篇快速指南能够帮助你,让你可以在Antergos和其它基于Arch的Linux发行版中,如Manjaro Linux,阅读印度语、梵文、泰米尔语、泰卢固语、马拉雅拉姆语、孟加拉语,以及其它印地语。 + +-------------------------------------------------------------------------------- + +via: http://itsfoss.com/display-hindi-arch-antergos/ + +作者:[Abhishek][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://itsfoss.com/author/abhishek/ +[1]:http://antergos.com/ +[2]:http://itsfoss.com/tag/antergos/ diff 
--git a/translated/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md b/translated/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md new file mode 100644 index 0000000000..02aef62d82 --- /dev/null +++ b/translated/tech/20150824 How to create an AP in Ubuntu 15.04 to connect to Android or iPhone.md @@ -0,0 +1,74 @@ +如何在 Ubuntu 15.04 下创建连接至 Android/iOS 的 AP +================================================================================ +我成功地在 Ubuntu 15.04 下用 Gnome Network Manager 创建了一个无线AP热点. 接下来我要分享一下我的步骤. 请注意: 你必须要有一个可以用来创建AP热点的无线网卡. 如果你不知道如何找到连上了的设备的话, 在终端(Terminal)里输入`iw list`. + +如果你没有安装`iw`的话, 在Ubuntu下你可以使用`udo apt-get install iw`进行安装. + +在你键入`iw list`之后, 寻找可用的借口, 你应该会看到类似下列的条目: + +Supported interface modes: + +* IBSS +* managed +* AP +* AP/VLAN +* monitor +* mesh point + +让我们一步步看 + +1. 断开WIFI连接. 使用有线网络接入你的笔记本. +1. 在顶栏面板里点击网络的图标 -> Edit Connections(编辑连接) -> 在弹出窗口里点击Add(新增)按钮. +1. 在下拉菜单内选择Wi-Fi. +1. 接下来, + +a. 输入一个链接名 比如: Hotspot + +b. 输入一个 SSID 比如: Hotspot + +c. 选择模式(mode): Infrastructure + +d. 设备 MAC 地址: 在下拉菜单里选择你的无线设备 + +![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome1.jpg) + +1. 进入Wi-Fi安全选项卡, 选择 WPA & WPA2 Personal 并且输入密码. +1. 进入IPv4设置选项卡, 在Method(方法)下拉菜单里, 选择Shared to other computers(共享至其他电脑). + +![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome4.jpg) + +1. 进入IPv6选项卡, 在Method(方法)里设置为忽略ignore (只有在你不使用IPv6的情况下这么做) +1. 点击 Save(保存) 按钮以保存配置. +1. 从 menu/dash 里打开Terminal. +1. 修改你刚刚使用 network settings 创建的连接. + +使用 VIM 编辑器: + + sudo vim /etc/NetworkManager/system-connections/Hotspot + +使用Gedit 编辑器: + + gksu gedit /etc/NetworkManager/system-connections/Hotspot + +把名字 Hotspot 用你在第4步里起的连接名替换掉. + +![](http://i2.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome2.jpg?resize=640%2C402) + +1. 把 `mode=infrastructure` 改成 `mode=ap` 并且保存文件 +1. 一旦你保存了这个文件, 你应该能在 Wifi 菜单里看到你刚刚建立的AP了. (如果没有的话请再顶栏里 关闭/打开 Wifi 选项一次) + +![](http://i1.wp.com/www.linuxveda.com/wp-content/uploads/2015/08/ubuntu-ap-gnome3.jpg?resize=290%2C375) + +1. 你现在可以把你的设备连上Wifi了. 已经过 Android 5.0的小米4测试.(下载了1GB的文件以测试速度与稳定性) + +-------------------------------------------------------------------------------- + +via: http://www.linuxveda.com/2015/08/23/how-to-create-an-ap-in-ubuntu-15-04-to-connect-to-androidiphone/ + +作者:[Sayantan Das][a] +译者:[jerryling315](https://github.com/jerryling315) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.linuxveda.com/author/sayantan_das/ diff --git a/translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md b/translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md new file mode 100644 index 0000000000..04d9f18eb9 --- /dev/null +++ b/translated/tech/20150824 Mhddfs--Combine Several Smaller Partition into One Large Virtual Storage.md @@ -0,0 +1,183 @@ +Mhddfs——将多个小分区合并成一个大的虚拟存储 +================================================================================ + +让我们假定你有30GB的电影,并且你有3个驱动器,每个的大小为20GB。那么,你会怎么来存放东西呢? 
+ +很明显,你可以将你的视频分割成2个或者3个不同的卷,并将它们手工存储到驱动器上。这当然不是一个好主意,它成了一项费力的工作,它需要你手工干预,而且花费你大量时间。 + +另外一个解决方案是创建一个[RAID磁盘阵列][1]。然而,RAID在缺乏存储可靠性,磁盘空间可用性差等方面声名狼藉。另外一个解决方案,就是mhddfs。 + +![Combine Multiple Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Combine-Multiple-Partitions-in-Linux.png) +Mhddfs——在Linux中合并多个分区 + +mhddfs是一个用于Linux的驱动,它可以将多个挂载点合并到一个虚拟磁盘中。它是一个基于FUSE的驱动,提供了一个用于大数据存储的简单解决方案。它将所有小文件系统合并,以创建一个单一的大虚拟文件系统,该文件系统包含其成员文件系统的所有颗粒,包括文件和空闲空间。 + +#### 你为什么需要Mhddfs? #### + +你所有存储设备创建了一个单一的虚拟池,它可以在启动时被挂载。这个小工具可以智能地照看并处理哪个驱动器满了,哪个驱动器空着,将数据写到哪个驱动器中。当你成功创建虚拟驱动器后,你可以使用[SAMBA][2]来共享你的虚拟文件系统。你的客户端将在任何时候都看到一个巨大的驱动器和大量的空闲空间。 + +#### Mhddfs特性 #### + +- 获取文件系统属性和系统信息。 +- 设置文件系统属性。 +- 创建、读取、移除和写入目录和文件。 +- 支持文件锁和单一设备上的硬链接。 + +注:表格 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
mhddfs的优点mhddfs的缺点
 适合家庭用户mhddfs驱动没有内建在Linux内核中
 运行简单 运行时需要大量处理能力
 没有明显的数据丢失 没有冗余解决方案
 不分割文件 不支持移动硬链接
 添加新文件到合并的虚拟文件系统 
 管理文件保存的位置 
  扩展文件属性 
+ +### Linux中安装Mhddfs ### + +在Debian及其类似的移植系统中,你可以使用下面的命令来安装mhddfs包。 + + # apt-get update && apt-get install mhddfs + +![Install Mhddfs on Debian based Systems](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Ubuntu.png) +安装Mhddfs到基于Debian的系统中 + +在RHEL/CentOS Linux系统中,你需要开启[epel仓库][3],然后执行下面的命令来安装mhddfs包。 + + # yum install mhddfs + +在Fedora 22及以上系统中,你可以通过dnf包管理来获得它,就像下面这样。 + + # dnf install mhddfs + +![Install Mhddfs on Fedora](http://www.tecmint.com/wp-content/uploads/2015/08/Install-Mhddfs-on-Fedora.png) +安装Mhddfs到Fedora + +如果万一mhddfs包不能从epel仓库获取到,那么你需要解决下面的依赖,然后像下面这样来编译源码并安装。 + +- FUSE头文件 +- GCC +- libc6头文件 +- uthash头文件 +- libattr1头文件(可选) + +接下来,只需从下面建议的地址下载最新的源码包,然后编译。 + + # wget http://mhddfs.uvw.ru/downloads/mhddfs_0.1.39.tar.gz + # tar -zxvf mhddfs*.tar.gz + # cd mhddfs-0.1.39/ + # make + +你应该可以在当前目录中看到mhddfs的二进制文件,以root身份将它移动到/usr/bin/和/usr/local/bin/中。 + + # cp mhddfs /usr/bin/ + # cp mhddfs /usr/local/bin/ + +一切搞定,mhddfs已经可以用了。 + +### 我怎么使用Mhddfs? ### + +1.让我们看看当前所有挂载到我们系统中的硬盘。 + + + $ df -h + +![Check Mounted Devices](http://www.tecmint.com/wp-content/uploads/2015/08/Check-Mounted-Devices.gif) +**样例输出** + + Filesystem Size Used Avail Use% Mounted on + + /dev/sda1 511M 132K 511M 1% /boot/efi + /dev/sda2 451G 92G 336G 22% / + /dev/sdb1 1.9T 161G 1.7T 9% /media/avi/BD9B-5FCE + /dev/sdc1 555M 555M 0 100% /media/avi/Debian 8.1.0 M-A 1 + +注意这里的‘挂载点’名称,我们后面会使用到它们。 + +2.创建目录‘/mnt/virtual_hdd’,在这里,所有这些文件系统将被组成组。 + + + # mkdir /mnt/virtual_hdd + +3.然后,挂载所有文件系统。你可以通过root或者FUSE组中的某个成员来完成。 + + + # mhddfs /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd -o allow_other + +![Mount All File System in Linux](http://www.tecmint.com/wp-content/uploads/2015/08/Mount-All-File-System-in-Linux.png) +在Linux中挂载所有文件系统 + +**注意**:这里我们使用了所有硬盘的挂载点名称,很明显,你的挂载点名称会有所不同。也请注意“-o allow_other”选项可以让这个虚拟文件系统让其它所有人可见,而不仅仅是创建它的人。 + +4.现在,运行“df -h”来看看所有文件系统。它应该包含了你刚才创建的那个。 + + + $ df -h + +![Verify Virtual File System Mount](http://www.tecmint.com/wp-content/uploads/2015/08/Verify-Virtual-File-System.png) +验证虚拟文件系统挂载 + +你可以像对已挂在的驱动器那样给虚拟文件系统部署所有的选项。 + +5.要在每次系统启动创建这个虚拟文件系统,你应该以root身份添加下面的这行代码(在你那里会有点不同,取决于你的挂载点)到/etc/fstab文件的末尾。 + + mhddfs# /boot/efi, /, /media/avi/BD9B-5FCE/, /media/avi/Debian\ 8.1.0\ M-A\ 1/ /mnt/virtual_hdd fuse defaults,allow_other 0 0 + +6.如果在任何时候你想要添加/移除一个新的驱动器到/从虚拟硬盘,你可以挂载一个新的驱动器,拷贝/mnt/vritual_hdd的内容,卸载卷,弹出你要移除的的驱动器并/或挂载你要包含的新驱动器。使用mhddfs命令挂载全部文件系统到Virtual_hdd下,这样就全部搞定了。 +#### 我怎么卸载Virtual_hdd? 
#### + +卸载virtual_hdd相当简单,就像下面这样 + + # umount /mnt/virtual_hdd + +![Unmount Virtual Filesystem](http://www.tecmint.com/wp-content/uploads/2015/08/Unmount-Virtual-Filesystem.png) +卸载虚拟文件系统 + +注意,是umount,而不是unmount,很多用户都输错了。 + +到现在为止全部结束了。我正在写另外一篇文章,你们一定喜欢读的。到那时,请保持连线到Tecmint。请在下面的评论中给我们提供有用的反馈吧。请为我们点赞并分享,帮助我们扩散。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/combine-partitions-into-one-in-linux-using-mhddfs/ + +作者:[Avishek Kumar][a] +译者:[GOLinux](https://github.com/GOLinux) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/avishek/ +[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ +[2]:http://www.tecmint.com/mount-filesystem-in-linux/ +[3]:http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/ diff --git a/translated/tech/20150901 How to Defragment Linux Systems.md b/translated/tech/20150901 How to Defragment Linux Systems.md new file mode 100644 index 0000000000..49d16a8f18 --- /dev/null +++ b/translated/tech/20150901 How to Defragment Linux Systems.md @@ -0,0 +1,125 @@ +如何在Linux中整理磁盘碎片 +================================================================================ +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-featured.png) + +有一神话是linux的磁盘从来不需要整理碎片。在大多数情况下这是真的,大多数因为是使用的是优秀的日志系统(ext2、3、4等等)来处理文件系统。然而,在一些特殊情况下,碎片仍旧会产生。如果正巧发生在你身上,解决方法很简单。 + +### 什么是磁盘碎片 ### + +碎片发生在不同的小块中更新文件时,但是这些快没有形成连续完整的文件而是分布在磁盘的各个角落中。这对于FAT和FAT32文件系统而言是这样的。这在NTFS中有所减轻,在Linux(extX)中几乎不会发生。下面是原因。 + +在像FAT和FAT32这类文件系统中,文件紧挨着写入到磁盘中。文件之间没有空间来用于增长或者更新: + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fragmented.png) + +NTFS中在文件之间保留了一些空间,因此有空间进行增长。因为块之间的空间是有限的,碎片也会随着时间出现。 + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-ntfs.png) + +Linux的日志文件系统采用了一个不同的方案。与文件之间挨着不同,每个文件分布在磁盘的各处,每个文件之间留下了大量的剩余空间。这里有很大的空间用于更新和增长,并且碎片很少会发生。 + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-journal.png) + +此外,碎片一旦出现了,大多数Linux文件系统会尝试将文件和块重新连续起来。 + +### Linux中的磁盘整理 ### + +除非你用的是一个很小的硬盘或者空间不够了,不然Linux很少会需要磁盘整理。一些可能需要磁盘整理的情况包括: + +- 如果你编辑的是大型视频文件或者原生照片,但磁盘空间有限 +- if you use older hardware like an old laptop, and you have a small hard drive +- 如果你的磁盘开始满了(大约使用了85%) +- 如果你的家目录中有许多小分区 + +最好的解决方案是购买一个大硬盘。如果不可能,磁盘碎片整理就很有用了。 + +### 如何检查碎片 ### + +`fsck`命令会为你做这个 -也就是说如果你可以在liveCD中运行它,那么就可以**卸载所有的分区**。 + +这一点很重要:**在已经挂载的分区中运行fsck将会严重危害到你的数据和磁盘**。 + +你已经被警告过了。开始之前,先做一个完整的备份。 + +**免责声明**: 本文的作者与Make Tech Easier将不会对您的文件、数据、系统或者其他损害负责。你需要自己承担风险。如果你继续,你需要接收并了解这点。 + +你应该启动到一个live会话中(如安装磁盘,系统救援CD等)并运行`fsck`卸载分区。要检查是否有任何问题,请在运行root权限下面的命令: + + fsck -fn [/path/to/your/partition] + +您可以检查一下运行中的分区的路径 + + sudo fdisk -l + +有一个(相对)安全地在已挂载的分区中运行`fsck`的方法是使用‘-n’开关。这会让分区处在只读模式而不能创建任何文件。当然,这里并不能保证安全,你应该在创建备份之后进行。在ext2中,运行 + + sudo fsck.ext2 -fn /path/to/your/partition + +会产生大量的输出-- 大多数错误信息的原因是分区已经挂载了。最后会给出一个碎片相关的信息。 + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-fsck.png) + +如果碎片大于20%了,那么你应该开始整理你的磁盘碎片了。 + +### 如何简单地在Linux中整理碎片 ### + +你要做的是备份你**所有**的文件和数据到另外一块硬盘中(手动**复制**他们)。格式化分区然后重新复制回去(不要使用备份软件)。日志系统会把它们作为新的文件,并将它们整齐地放置到磁盘中而不产生碎片。 + +要备份你的文件,运行 + + cp -afv [/path/to/source/partition]/* [/path/to/destination/folder] + +记住星号(*)是很重要的。 + +注意:通常认为复制大文件或者大量文件,使用dd或许是最好的。这是一个非常底层的操作,它会复制一切,包含空闲的空间甚至是留下的垃圾。这不是我们想要的,因此这里最好使用`cp`。 + +现在你只需要删除源文件。 + + sudo rm -rf [/path/to/source/partition]/* + 
+**可选**:你可以将空闲空间置零。你也可以用格式化来达到这点,但是例子中你并没有复制整个分区而仅仅是大文件(这很可能会造成碎片)。这恐怕不能成为一个选项。 + + sudo dd if=/dev/zero of=[/path/to/source/partition]/temp-zero.txt + +等待它结束。你可以用`pv`来监测进程。 + + sudo apt-get install pv + sudo pv -tpreb | of=[/path/to/source/partition]/temp-zero.txt + +![](https://www.maketecheasier.com/assets/uploads/2015/07/defragment-linux-dd.png) + +这就完成了,只要删除临时文件就行。 + + sudo rm [/path/to/source/partition]/temp-zero.txt + +待你清零了空闲空间(或者跳过了这步)。重新复制回文件,将第一个cp命令翻转一下: + + cp -afv [/path/to/original/destination/folder]/* [/path/to/original/source/partition] + +### 使用 e4defrag ### + +如果你想要简单的方法,安装`e2fsprogs`, + + sudo apt-get install e2fsprogs + +用root权限在分区中运行 `e4defrag`。如果你不想卸载分区,你可以使用它的挂载点而不是路径。要整理整个系统的碎片,运行: + + sudo e4defrag / + +在挂载的情况下不保证成功(你也应该保证在它运行时停止使用你的系统),但是它比服务全部文件再重新复制回来简单多了。 + +### 总结 ### + +linux系统中很少会出现碎片因为它的文件系统有效的数据处理。如果你因任何原因产生了碎片,简单的方法是重新分配你的磁盘如复制所有文件并复制回来,或者使用`e4defrag`。然而重要的是保证你数据的安全,因此在进行任何可能影响你全部或者大多数文件的操作之前,确保你的文件已经被备份到了另外一个安全的地方去了。 + +-------------------------------------------------------------------------------- + +via: https://www.maketecheasier.com/defragment-linux/ + +作者:[Attila Orosz][a] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.maketecheasier.com/author/attilaorosz/ diff --git a/translated/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md b/translated/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md new file mode 100644 index 0000000000..dbe5dec7cd --- /dev/null +++ b/translated/tech/20150901 Install The Latest Linux Kernel in Ubuntu Easily via A Script.md @@ -0,0 +1,79 @@ +使用脚本便捷地在Ubuntu系统中安装最新的Linux内核 +================================================================================ +![](http://ubuntuhandbook.org/wp-content/uploads/2014/12/linux-kernel-icon-tux.png) + +想要安装最新的Linux内核吗?一个简单的脚本就可以在Ubuntu系统中方便的完成这项工作。 + +Michael Murphy 写了一个脚本用来将最新的候选版、标准版、或者低延时版内核安装到 Ubuntu 系统中。这个脚本会在询问一些问题后从 [Ubuntu kernel mainline page][1] 下载安装最新的 Linux 内核包。 + +### 通过脚本来安装、升级Linux内核: ### + +1. 点击 [github page][2] 右上角的 “Download Zip” 来下载脚本。 + +2. 鼠标右键单击用户下载目录下的 Zip 文件,选择 “Extract Here” 将其解压到此处。 + +3. 右键点击解压后的文件夹,选择 “Open in Terminal” 在终端中导航到此文件夹下。 + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/open-terminal.jpg) + +此时将会打开一个终端,并且自动导航到结果文件夹下。如果你找不到 “Open in Terminal” 选项的话,在 Ubuntu 软件中心搜索安装 `nautilus-open-terminal` ,然后重新登录系统即可(也可以再终端中运行 `nautilus -q` 来取代重新登录系统的操作)。 +4. 当进入终端后,运行以下命令来赋予脚本执行本次操作的权限。 + + chmod +x * + +最后,每当你想要安装或升级 Ubuntu 的 linux 内核时都可以运行此脚本。 + + ./* + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/run-script.jpg) + +这里之所以使用 * 替代脚本名称是因为文件夹中只有它一个文件。 + +如果脚本运行成功,重启电脑即可。 + +### 恢复并且卸载新版内核 ### + +如果因为某些原因要恢复并且移除新版内核的话,请重启电脑,在 Grub 启动器的 **高级选项** 菜单下选择旧版内核来启动系统。 + +当系统启动后,参照下边章节继续执行。 + +### 如何移除旧的(或新的)内核: ### + +1. 从Ubuntu软件中心安装 Synaptic Package Manager。 + +2. 打开 Synaptic Package Manager 然后如下操作: + +- 点击 **Reload** 按钮,让想要被删除的新内核显示出来. 
+- 在左侧面板中选择 **Status -> Installed** ,让查找列表更清晰一些。 +- 在 Quick filter 输入框中输入 **linux-image-** 用于查询。 +- 选择一个内核镜像 “linux-image-x.xx.xx-generic” 然后将其标记为removal(或者Complete Removal) +- 最后,应用变更 + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/remove-old-kernel1.jpg) + +重复以上操作直到移除所有你不需要的内核。注意,不要随意移除此刻正在运行的内核,你可以通过 `uname -r` 命令来查看运行的内核。 + +对于 Ubuntu 服务器来说,你可以一步步运行下面的命令: + + uname -r + + dpkg -l | grep linux-image- + + sudo apt-get autoremove KERNEL_IMAGE_NAME + +![](http://ubuntuhandbook.org/wp-content/uploads/2015/08/remove-kernel-terminal.jpg) + +-------------------------------------------------------------------------------- + +via: http://ubuntuhandbook.org/index.php/2015/08/install-latest-kernel-script/ + +作者:[Ji m][a] +译者:[译者ID](https://github.com/mr-ping) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://ubuntuhandbook.org/index.php/about/ +[1]:http://kernel.ubuntu.com/~kernel-ppa/mainline/ +[2]:https://gist.github.com/mmstick/8493727 + diff --git a/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md new file mode 100644 index 0000000000..79e263d7e0 --- /dev/null +++ b/translated/tech/LFCS/Part 1 - LFCS--How to use GNU 'sed' Command to Create Edit and Manipulate files in Linux.md @@ -0,0 +1,220 @@ +Translating by Xuanwo + +LFCS系列第一讲:如何在Linux上使用GNU'sed'命令来创建、编辑和操作文件 +================================================================================ +Linux基金会宣布了一个全新的LFCS(Linux Foundation Certified Sysadmin,Linux基金会认证系统管理员)认证计划。这一计划旨在帮助遍布全世界的人们获得其在处理Linux系统管理任务上能力的认证。这些能力包括支持运行的系统服务,以及第一手的故障诊断和分析和为工程师团队在升级时提供智能决策。 + +![Linux Foundation Certified Sysadmin](http://www.tecmint.com/wp-content/uploads/2014/10/lfcs-Part-1.png) + +Linux基金会认证系统管理员——第一讲 + +请观看下面关于Linux基金会认证计划的演示: + + + +该系列将命名为《LFCS系列第一讲》至《LFCS系列第十讲》并覆盖关于Ubuntu,CentOS以及openSUSE的下列话题。 + +- 第一讲:如何在Linux上使用GNU'sed'命令来创建、编辑和操作文件 +- 第二讲:如何安装和使用vi/m全功能文字编辑器 +- 第三讲:归档文件/目录和在文件系统中寻找文件 +- 第四讲:为存储设备分区,格式化文件系统和配置交换分区 +- 第五讲:在Linux中挂载/卸载本地和网络(Samba & NFS)文件系统 +- 第六讲:组合分区作为RAID设备——创建&管理系统备份 +- 第七讲:管理系统启动进程和服务(使用SysVinit, Systemd 和 Upstart) +- 第八讲:管理用户和组,文件权限和属性以及启用账户的sudo权限 +- 第九讲:Linux包管理与Yum,RPM,Apt,Dpkg,Aptitude,Zypper +- 第十讲:学习简单的Shell脚本和文件系统故障排除 + +本文是覆盖这个参加LFCS认证考试的所必需的范围和能力的十个教程的第一讲。话说了那么多,快打开你的终端,让我们开始吧! 
+ +### 处理Linux中的文本流 ### + +Linux将程序中的输入和输出当成字符流或者字符序列。在开始理解重定向和管道之前,我们必须先了解三种最重要的I/O(Input and Output,输入和输出)流,事实上,它们都是特殊的文件(根据UNIX和Linux中的约定,数据流和外围设备或者设备文件也被视为普通文件)。 + +> (重定向操作符) 和 | (管道操作符)之间的区别是:前者将命令与文件相连接,而后者将命令的输出和另一个命令相连接。 + + # command > file + # command1 | command2 + +由于重定向操作符静默创建或覆盖文件,我们必须特别小心谨慎地使用它,并且永远不要把它和管道混淆起来。在Linux和UNIX系统上管道的优势是:第一个命令的输出不会写入一个文件而是直接被第二个命令读取。 + +在下面的操作练习中,我们将会使用这首诗——《A happy child》(匿名作者) + +![cat command](http://www.tecmint.com/wp-content/uploads/2014/10/cat-command.png) + +cat 命令样例 + +#### 使用 sed #### + +sed是流编辑器(stream editor)的缩写。为那些不懂术语的人额外解释一下,流编辑器是用来在一个输入流(文件或者管道中的输入)执行基本的文本转换的工具。 + +sed最基本的用法是字符替换。我们将通过把每个出现的小写y改写为大写Y并且将输出重定向到ahappychild2.txt开始。g标志表示sed应该替换文件每一行中所有应当替换的实例。如果这个标志省略了,sed将会只替换每一行中第一次出现的实例。 + +**基本语法:** + + # sed ‘s/term/replacement/flag’ file + +**我们的样例:** + + # sed ‘s/y/Y/g’ ahappychild.txt > ahappychild2.txt + +![sed command](http://www.tecmint.com/wp-content/uploads/2014/10/sed-command.png) + +sed 命令样例 + +如果你要在替换文本中搜索或者替换特殊字符(如/,\,&),你需要使用反斜杠对它进行转义。 + +例如,我们将会用一个符号来替换一个文字。与此同时,我们将把一行最开始出现的第一个I替换为You。 + + # sed 's/and/\&/g;s/^I/You/g' ahappychild.txt + +![sed replace string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-replace-string.png) + +sed 替换字符串 + +在上面的命令中,^(插入符号)是众所周知用来表示一行开头的正则表达式。 + +正如你所看到的,我们可以通过使用分号分隔以及用括号包裹来把两个或者更多的替换命令(并在他们中使用正则表达式)链接起来。 + +另一种sed的用法是显示或者删除文件中选中的一部分。在下面的样例中,将会显示/var/log/messages中从6月8日开始的头五行。 + + # sed -n '/^Jun 8/ p' /var/log/messages | sed -n 1,5p + +请注意,在默认的情况下,sed会打印每一行。我们可以使用-n选项来覆盖这一行为并且告诉sed只需要打印(用p来表示)文件(或管道)中匹配的部分(第一种情况下行开头的第一个6月8日以及第二种情况下的一到五行*此处翻译欠妥,需要修正*)。 + +最后,可能有用的技巧是当检查脚本或者配置文件的时候可以保留文件本身并且删除注释。下面的单行sed命令删除(d)空行或者是开头为`#`的行(|字符返回两个正则表达式之间的布尔值)。 + + # sed '/^#\|^$/d' apache2.conf + +![sed match string](http://www.tecmint.com/wp-content/uploads/2014/10/sed-match-string.png) + +sed 匹配字符串 + +#### uniq C命令 #### + +uniq命令允许我们返回或者删除文件中重复的行,默认写入标准输出。我们必须注意到,除非两个重复的行相邻,否则uniq命令不会删除他们。因此,uniq经常和前序排序(此处翻译欠妥)(一种用来对文本行进行排序的算法)搭配使用。默认情况下,排序使用第一个字段(用空格分隔)作为关键字段。要指定一个不同的关键字段,我们需要使用-k选项。 + +**样例** + +du –sch /path/to/directory/* 命令将会以人类可读的格式返回在指定目录下每一个子文件夹和文件的磁盘空间使用情况(也会显示每个目录总体的情况),而且不是按照大小输出,而是按照子文件夹和文件的名称。我们可以使用下面的命令来让它通过大小排序。 + + # du -sch /var/* | sort –h + +![sort command](http://www.tecmint.com/wp-content/uploads/2014/10/sort-command.jpg) + +sort 命令样例 + +你可以通过使用下面的命令告诉uniq比较每一行的前6个字符(-w 6)(指定了不同的日期)来统计日志事件的个数,而且在每一行的开头输出出现的次数(-c)。 + + + # cat /var/log/mail.log | uniq -c -w 6 + +![Count Numbers in File](http://www.tecmint.com/wp-content/uploads/2014/10/count-numbers-in-file.jpg) + +统计文件中数字 + +最后,你可以组合使用sort和uniq命令(通常如此)。考虑下面文件中捐助者,捐助日期和金额的列表。假设我们想知道有多少个捐助者。我们可以使用下面的命令来分隔第一字段(字段由冒号分隔),按名称排序并且删除重复的行。 + + # cat sortuniq.txt | cut -d: -f1 | sort | uniq + +![Find Unique Records in File](http://www.tecmint.com/wp-content/uploads/2014/10/find-uniqu-records-in-file.jpg) + +寻找文件中不重复的记录 + +- 也可阅读: [13个“cat”命令样例][1] + +#### grep 命令 #### + +grep在文件(或命令输出)中搜索指定正则表达式并且在标准输出中输出匹配的行。 + +**样例** + +显示文件/etc/passwd中用户gacanepa的信息,忽略大小写。 + + # grep -i gacanepa /etc/passwd + +![grep Command](http://www.tecmint.com/wp-content/uploads/2014/10/grep-command.jpg) + +grep 命令样例 + +显示/etc文件夹下所有rc开头并跟随任意数字的内容。 + + # ls -l /etc | grep rc[0-9] + +![List Content Using grep](http://www.tecmint.com/wp-content/uploads/2014/10/list-content-using-grep.jpg) + +使用grep列出内容 + +- 也可阅读: [12个“grep”命令样例][2] + +#### tr 命令使用技巧 #### + +tr命令可以用来从标准输入中翻译(改变)或者删除字符并将结果写入到标准输出中。 + +**样例** + +把sortuniq.txt文件中所有的小写改为大写。 + + # cat sortuniq.txt | tr [:lower:] [:upper:] + +![Sort Strings in 
File](http://www.tecmint.com/wp-content/uploads/2014/10/sort-strings.jpg) + +排序文件中的字符串 + +压缩`ls –l`输出中的定界符至一个空格。 + # ls -l | tr -s ' ' + +![Squeeze Delimiter](http://www.tecmint.com/wp-content/uploads/2014/10/squeeze-delimeter.jpg) + +压缩分隔符 + +#### cut 命令使用方法 #### + +cut命令可以基于字节数(-b选项),字符(-c)或者字段(-f)提取部分输入(从标准输入或者文件中)并且将结果输出到标准输出。在最后一种情况下(基于字段),默认的字段分隔符是一个tab,但不同的分隔符可以由-d选项来指定。 + +**样例** + +从/etc/passwd中提取用户账户和他们被分配的默认shell(-d选项允许我们指定分界符,-f选项指定那些字段将被提取)。 + + # cat /etc/passwd | cut -d: -f1,7 + +![Extract User Accounts](http://www.tecmint.com/wp-content/uploads/2014/10/extract-user-accounts.jpg) + +提取用户账户 + +总结一下,我们将使用最后一个命令的输出中第一和第三个非空文件创建一个文本流。我们将使用grep作为第一过滤器来检查用户gacanepa的会话,然后将分隔符压缩至一个空格(tr -s ' ')。下一步,我们将使用cut来提取第一和第三个字段,最后使用第二个字段(本样例中,指的是IP地址)来排序之后再用uniq去重。 + + # last | grep gacanepa | tr -s ‘ ‘ | cut -d’ ‘ -f1,3 | sort -k2 | uniq + +![last command](http://www.tecmint.com/wp-content/uploads/2014/10/last-command.png) + +last 命令样例 + +上面的命令显示了如何将多个命令和管道结合起来以便根据我们的愿望得到过滤后的数据。你也可以逐步地使用它以帮助你理解输出是如何从一个命令传输到下一个命令的(顺便说一句,这是一个非常好的学习经验!) + +### 总结 ### + +尽管这个例子(以及在当前教程中的其他实例)第一眼看上去可能不是非常有用,但是他们是体验在Linux命令行中创建,编辑和操作文件的一个非常好的开始。请随时留下你的问题和意见——不胜感激! + +#### 参考链接 #### + +- [关于LFCS][3] +- [为什么需要Linux基金会认证?][4] +- [注册LFCS考试][5] + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/sed-command-to-create-edit-and-manipulate-files-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[Xuanwo](https://github.com/Xuanwo) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/13-basic-cat-command-examples-in-linux/ +[2]:http://www.tecmint.com/12-practical-examples-of-linux-grep-command/ +[3]:https://training.linuxfoundation.org/certification/LFCS +[4]:https://training.linuxfoundation.org/certification/why-certify-with-us +[5]:https://identity.linuxfoundation.org/user?destination=pid/1 \ No newline at end of file diff --git a/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md b/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md deleted file mode 100644 index 8ca0ecbd7e..0000000000 --- a/translated/tech/RAID/Part 1 - Introduction to RAID, Concepts of RAID and RAID Levels.md +++ /dev/null @@ -1,146 +0,0 @@ - -RAID的级别和概念的介绍 - 第1部分 -================================================================================ -RAID是廉价磁盘冗余阵列,但现在它被称为独立磁盘冗余阵列。早先一个容量很小的磁盘都是非常昂贵的,但是现在我们可以很便宜的买到一个更大的磁盘。Raid 是磁盘的一个集合,被称为逻辑卷。 - - -![RAID in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/RAID.jpg) - -在 Linux 中理解 RAID 的设置 - -RAID包含一组或者一个集合甚至一个阵列。使用一组磁盘结合驱动器组成 RAID 阵列或 RAID 集。一个 RAID 控制器至少使用两个磁盘并且使用一个逻辑卷或者多个驱动器在一个组中。在一个磁盘组的应用中只能使用一个 RAID 级别。使用 RAID 可以提高服务器的性能。不同 RAID 的级别,性能会有所不同。它通过容错和高可用性来保存我们的数据。 - -这个系列被命名为RAID的构建共包含9个部分包括以下主题。 - -- 第1部分:RAID的级别和概念的介绍 -- 第2部分:在Linux中如何设置 RAID0(条带化) -- 第3部分:在Linux中如何设置 RAID1(镜像化) -- 第4部分:在Linux中如何设置 RAID5(条带化与分布式奇偶校验) -- 第5部分:在Linux中如何设置 RAID6(条带双分布式奇偶校验) -- 第6部分:在Linux中设置 RAID 10 或1 + 0(嵌套) -- 第7部分:增加现有的 RAID 阵列并删除损坏的磁盘 -- 第8部分:在 RAID 中恢复(重建)损坏的驱动器 -- 第9部分:在 Linux 中管理 RAID - -这是9系列教程的第1部分,在这里我们将介绍 RAID 的概念和 RAID 级别,这是在 Linux 中构建 RAID 需要理解的。 - - -### 软件RAID和硬件RAID ### - -软件 RAID 的性能很低,因为其从主机资源消耗。 RAID 软件需要加载可读取数据从软件 RAID 卷中。在加载 RAID 软件前,操作系统需要得到加载 RAID 软件的引导。在软件 RAID 中无需物理硬件。零成本投资。 - -硬件 RAID 具有很高的性能。他们有专用的 RAID 控制器,采用 PCI Express卡物理内置的。它不会使用主机资源。他们有 NVRAM 缓存读取和写入。当重建时即使出现电源故障,它会使用电池电源备份存储缓存。对于大规模使用需要非常昂贵的投资。 - -硬件 RAID 
卡如下所示: - -![Hardware RAID](http://www.tecmint.com/wp-content/uploads/2014/10/Hardware-RAID.jpg) - -硬件RAID - -#### 精选的 RAID 概念 #### - -- 在 RAID 重建中校验方法中丢失的内容来自从校验中保存的信息。 RAID 5,RAID 6 基于校验。 -- 条带化是随机共享数据到多个磁盘。它不会在单个磁盘中保存完整的数据。如果我们使用3个磁盘,则数据将会存在于每个磁盘上。 -- 镜像被用于 RAID 1 和 RAID 10。镜像会自动备份数据。在 RAID 1,它将保存相同的内容到其他盘上。 -- 在我们的服务器上,热备份只是一个备用驱动器,它可以自动更换发生故障的驱动器。在我们的阵列中,如果任何一个驱动器损坏,热备份驱动器会自动重建。 -- 块是 RAID 控制器每次读写数据时的最小单位,最小4KB。通过定义块大小,我们可以增加 I/O 性能。 - -RAID有不同的级别。在这里,我们仅看到在真实环境下的使用最多的 RAID 级别。 - -- RAID0 = 条带化 -- RAID1 = 镜像 -- RAID5 = 单个磁盘分布式奇偶校验 -- RAID6 = 双盘分布式奇偶校验 -- RAID10 = 镜像 + 条带。(嵌套RAID) - -RAID 在大多数 Linux 发行版上使用 mdadm 的包管理。让我们先对每个 RAID 级别认识一下。 - -#### RAID 0(或)条带化 #### - -条带化有很好的性能。在 RAID 0(条带化)中数据将使用共享的方式被写入到磁盘。一半的内容将是在一个磁盘上,另一半内容将被写入到其它磁盘。 - -假设我们有2个磁盘驱动器,例如,如果我们将数据“TECMINT”写到逻辑卷中,“T”将被保存在第一盘中,“E”将保存在第二盘,'C'将被保存在第一盘,“M”将保存在第二盘,它会一直继续此循环过程。 - -在这种情况下,如果驱动器中的任何一个发生故障,我们将丢失所有的数据,因为一个盘中只有一半的数据,不能用于重建。不过,当比较写入速度和性能时,RAID0 是非常好的。创建 RAID 0(条带化)至少需要2个磁盘。如果你的数据是非常宝贵的,那么不要使用此 RAID 级别。 - -- 高性能。 -- 在 RAID0 上零容量损失。 -- 零容错。 -- 写和读有很高的性能。 - -#### RAID1(或)镜像化 #### - -镜像也有不错的性能。镜像可以备份我们的数据。假设我们有两组2TB的硬盘驱动器,我们总共有4TB,但在镜像中,驱动器在 RAID 控制器的后面形成一个逻辑驱动器,我们只能看到逻辑驱动器有2TB。 - -当我们保存数据时,它将同时写入2TB驱动器中。创建 RAID 1 (镜像化)最少需要两个驱动器。如果发生磁盘故障,我们可以恢复 RAID 通过更换一个新的磁盘。如果在 RAID 1 中任何一个磁盘发生故障,我们可以从另一个磁盘中获取相同的数据,因为其他的磁盘中也有相同的数据。所以是零数据丢失。 - -- 良好的性能。 -- 空间的一半将在总容量丢失。 -- 完全容错。 -- 重建会更快。 -- 写性能将是缓慢的。 -- 读将会很好。 -- 被操作系统和数据库使用的规模很小。 - -#### RAID 5(或)分布式奇偶校验 #### - -RAID 5 多用于企业的水平。 RAID 5 的工作通过分布式奇偶校验的方法。奇偶校验信息将被用于重建数据。它需要留下的正常驱动器上的信息去重建。驱动器故障时,这会保护我们的数据。 - -假设我们有4个驱动器,如果一个驱动器发生故障而后我们更换发生故障的驱动器后,我们可以从奇偶校验中重建数据到更换的驱动器上。奇偶校验信息存储在所有的4个驱动器上,如果我们有4个 1TB 的驱动器。奇偶校验信息将被存储在每个驱动器的256G中而其它768GB是用户自己使用的。单个驱动器故障后,RAID 5 依旧正常工作,如果驱动器损坏个数超过1个会导致数据的丢失。 - -- 性能卓越 -- 读速度将非常好。 -- 如果我们不使用硬件 RAID 控制器,写速度是缓慢的。 -- 从所有驱动器的奇偶校验信息中重建。 -- 完全容错。 -- 1个磁盘空间将用于奇偶校验。 -- 可以被用在文件服务器,Web服务器,非常重要的备份中。 - -#### RAID 6 两个分布式奇偶校验磁盘 #### - -RAID 6 和 RAID 5 相似但它有两个分布式奇偶校验。大多用在大量的阵列中。我们最少需要4个驱动器,即使有2个驱动器发生故障,我们依然可以重建数据,同时更换新的驱动器。 - -它比 RAID 5 非常慢,因为它将数据同时写到4个驱动器上。当我们使用硬件 RAID 控制器时速度将被平均。如果我们有6个的1TB驱动器,4个驱动器将用于数据保存,2个驱动器将用于校验。 - -- 性能不佳。 -- 读的性能很好。 -- 如果我们不使用硬件 RAID 控制器写的性能会很差。 -- 从2奇偶校验驱动器上重建。 -- 完全容错。 -- 2个磁盘空间将用于奇偶校验。 -- 可用于大型阵列。 -- 在备份和视频流中大规模使用。 - -#### RAID 10(或)镜像+条带 #### - -RAID 10 可以被称为1 + 0或0 +1。它将做镜像+条带两个工作。在 RAID 10 中首先做镜像然后做条带。在 RAID 01 上首先做条带,然后做镜像。RAID 10 比 01 好。 - -假设,我们有4个驱动器。当我写了一些数据到逻辑卷上,它会使用镜像和条带将数据保存到4个驱动器上。 - -如果我在 RAID 10 上写入数据“TECMINT”,数据将使用如下形式保存。首先将“T”同时写入两个磁盘,“E”也将同时写入两个磁盘,这一步将所有数据都写入。这使数据得到备份。 - -同时它将使用 RAID 0 方式写入数据,遵循将“T”写入第一个盘,“E”写入第二个盘。再次将“C”写入第一个盘,“M”到第二个盘。 - -- 良好的读写性能。 -- 空间的一半将在总容量丢失。 -- 容错。 -- 从备份数据中快速重建。 -- 它的高性能和高可用性常被用于数据库的存储中。 - -### 结论 ### - -在这篇文章中,我们已经看到了什么是 RAID 和在实际环境大多采用 RAID 的哪个级别。希望你已经学会了上面所写的。对于 RAID 的构建必须了解有关 RAID 的基本知识。以上内容对于你了解 RAID 基本满足。 - -在接下来的文章中,我将介绍如何设置和使用各种级别创建 RAID,增加 RAID 组(阵列)和驱动器故障排除等。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/understanding-raid-setup-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ diff --git a/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md b/translated/tech/RAID/Part 2 - Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md deleted file mode 100644 index 9feba99609..0000000000 --- a/translated/tech/RAID/Part 2 - 
Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’ Tool in Linux.md +++ /dev/null @@ -1,218 +0,0 @@ -在 Linux 上使用 ‘mdadm’ 工具创建软件 RAID0 (条带化)在 ‘两个设备’ 上 - 第2部分 -================================================================================ -RAID 是廉价磁盘的冗余阵列,其高可用性和可靠性适用于大规模环境中,为了使数据被保护而不是被正常使用。RAID 只是磁盘的一个集合被称为逻辑卷。结合驱动器,使其成为一个阵列或称为集合(组)。 - -创建 RAID 最少应使用2个磁盘被连接组成 RAID 控制器,逻辑卷或多个驱动器可以根据定义的 RAID 级别添加在一个阵列中。不使用物理硬件创建的 RAID 被称为软件 RAID。软件 RAID 一般都是不太有钱的人才使用的。 - -![Setup RAID0 in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Raid0-in-Linux.jpg) - -在 Linux 中创建 RAID0 - -使用 RAID 的主要目的是为了在单点故障时保存数据,如果我们使用单个磁盘来存储数据,如果它损坏了,那么就没有机会取回我们的数据了,为了防止数据丢失我们需要一个容错的方法。所以,我们可以使用多个磁盘组成 RAID 阵列。 - -#### 在 RAID 0 中条带是什么 #### - -条带是通过将数据在同一时间分割到多个磁盘上。假设我们有两个磁盘,如果我们将数据保存到逻辑卷上,它会将数据保存在两个磁盘上。使用 RAID 0 是为了获得更好的性能,但是如果驱动器中一个出现故障,我们将不能得到完整的数据。因此,使用 RAID 0 不是一种好的做法。唯一的解决办法就是安装有 RAID0 逻辑卷的操作系统来提高文件的安全性。 - -- RAID 0 性能较高。 -- 在 RAID 0 上,空间零浪费。 -- 零容错(如果硬盘中的任何一个发生故障,无法取回数据)。 -- 写和读性能得以提高。 - -#### 要求 #### - -创建 RAID 0 允许的最小磁盘数目是2个,但你可以添加更多的磁盘,但数目应该是2,4,6,8等的两倍。如果你有一个物理 RAID 卡并且有足够的端口,你可以添加更多磁盘。 - -在这里,我们没有使用硬件 RAID,此设置只依赖于软件 RAID。如果我们有一个物理硬件 RAID 卡,我们可以从它的 UI 组件访问它。有些主板默认内建 RAID 功能,还可以使用 Ctrl + I 键访问 UI。 - -如果你是刚开始设置 RAID,请阅读我们前面的文章,我们已经介绍了一些关于 RAID 基本的概念。 - -- [Introduction to RAID and RAID Concepts][1] - -**我的服务器设置** - - Operating System : CentOS 6.5 Final - IP Address : 192.168.0.225 - Two Disks : 20 GB each - -这篇文章是9个 RAID 系列教程的第2部分,在这部分,我们将看看如何能够在 Linux 上创建和使用 RAID0(条带化),以名为 sdb 和 sdc 两个20GB的硬盘为例。 - -### 第1步:更新系统和安装管理 RAID 的 mdadm 软件 ### - -1.在 Linux 上设置 RAID0 前,我们先更新一下系统,然后安装 ‘mdadm’ 包。mdadm 是一个小程序,这将使我们能够在Linux下配置和管理 RAID 设备。 - - # yum clean all && yum update - # yum install mdadm -y - -![install mdadm in linux](http://www.tecmint.com/wp-content/uploads/2014/10/install-mdadm-in-linux.png) - -安装 mdadm 工具 - -### 第2步:检测并连接两个 20GB 的硬盘 ### - -2.在创建 RAID 0 前,请务必确认两个硬盘能被检测到,使用下面的命令确认。 - - # ls -l /dev | grep sd - -![Check Hard Drives in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Hard-Drives.png) - -检查硬盘 - -3.一旦检测到新的硬盘驱动器,同时检查是否连接的驱动器已经被现有的 RAID 使用,使用下面的 ‘mdadm’ 命令来查看。 - - # mdadm --examine /dev/sd[b-c] - -![Check RAID Devices in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Drives-using-RAID.png) - -检查 RAID 设备 - -从上面的输出我们可以看到,没有任何 RAID 使用 sdb 和 sdc 这两个驱动器。 - -### 第3步:创建 RAID 分区 ### - -4.现在用 sdb 和 sdc 创建 RAID 的分区,使用 fdisk 命令来创建。在这里,我将展示如何创建 sdb 驱动器上的分区。 - - # fdisk /dev/sdb - -请按照以下说明创建分区。 - -- 按 ‘n’ 创建新的分区。 -- 然后按 ‘P’ 选择主分区。 -- 接下来选择分区号为1。 -- 只需按两次回车键选择默认值即可。 -- 然后,按 ‘P’ 来打印创建好的分区。 - -![Create Partitions in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-Partitions-in-Linux.png) - -创建分区 - -请按照以下说明将分区创建为 Linux 的 RAID 类型。 - -- 按 ‘L’,列出所有可用的类型。 -- 按 ‘t’ 去修改分区。 -- 键入 ‘fd’ 设置为Linux 的 RAID 类型,然后按 Enter 确认。 -- 然后再次使用‘p’查看我们所做的更改。 -- 使用‘w’保存更改。 - -![Create RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Create-RAID-Partitions.png) - -在 Linux 上创建 RAID 分区 - -**注**: 请使用上述步骤同样在 sdc 驱动器上创建分区。 - -5.创建分区后,验证这两个驱动器能使用下面的命令来正确定义 RAID。 - - # mdadm --examine /dev/sd[b-c] - # mdadm --examine /dev/sd[b-c]1 - -![Verify RAID Partitions](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Partitions.png) - -验证 RAID 分区 - -### 第4步:创建 RAID md 设备 ### - -6.现在使用以下命令创建 md 设备(即 /dev/md0),并选择 RAID 合适的级别。 - - # mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1 - # mdadm --create /dev/md0 --level=stripe --raid-devices=2 /dev/sd[b-c]1 - -- -C – create -- -l – level -- -n – No of raid-devices - -7.一旦 md 设备已经建立,使用如下命令可以查看 RAID 级别,设备和阵列的使用状态。 - - # cat /proc/mdstat - 
-![Verify RAID Level](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Level.png) - -查看 RAID 级别 - - # mdadm -E /dev/sd[b-c]1 - -![Verify RAID Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Device.png) - -查看 RAID 设备 - - # mdadm --detail /dev/md0 - -![Verify RAID Array](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-RAID-Array.png) - -查看 RAID 阵列 - -### 第5步:挂载 RAID 设备到文件系统 ### - -8.将 RAID 设备 /dev/md0 创建为 ext4 文件系统并挂载到 /mnt/raid0 下。 - - # mkfs.ext4 /dev/md0 - -![Create ext4 Filesystem in Linux](http://www.tecmint.com/wp-content/uploads/2014/10/Create-ext4-Filesystem.png) - -创建 ext4 文件系统 - -9. ext4 文件系统为 RAID 设备创建好后,现在创建一个挂载点(即 /mnt/raid0),并将设备 /dev/md0 挂载在它下。 - - # mkdir /mnt/raid0 - # mount /dev/md0 /mnt/raid0/ - -10.下一步,使用 df 命令验证设备 /dev/md0 是否被挂载在 /mnt/raid0 下。 - - # df -h - -11.接下来,创建一个名为 ‘tecmint.txt’ 的文件挂载到 /mnt/raid0 下,为创建的文件添加一些内容,并查看文件和目录的内容。 - - # touch /mnt/raid0/tecmint.txt - # echo "Hi everyone how you doing ?" > /mnt/raid0/tecmint.txt - # cat /mnt/raid0/tecmint.txt - # ls -l /mnt/raid0/ - -![Verify Mount Device](http://www.tecmint.com/wp-content/uploads/2014/10/Verify-Mount-Device.png) - -验证挂载的设备 - -12.一旦你验证挂载点后,同时将它添加到 /etc/fstab 文件中。 - - # vim /etc/fstab - -添加以下条目,根据你的安装位置和使用文件系统的不同,自行做修改。 - - /dev/md0 /mnt/raid0 ext4 deaults 0 0 - -![Add Device to Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Add-Device-to-Fstab.png) - -添加设备到 fstab 文件中 - -13.使用 mount ‘-a‘ 来检查 fstab 的条目是否有误。 - - # mount -av - -![Check Errors in Fstab](http://www.tecmint.com/wp-content/uploads/2014/10/Check-Errors-in-Fstab.png) - -检查 fstab 文件是否有误 - -### 第6步:保存 RAID 配置 ### - -14.最后,保存 RAID 配置到一个文件中,以供将来使用。同样,我们使用 ‘mdadm’ 命令带有 ‘-s‘ (scan) 和 ‘-v‘ (verbose) 选项,如图所示。 - - # mdadm -E -s -v >> /etc/mdadm.conf - # mdadm --detail --scan --verbose >> /etc/mdadm.conf - # cat /etc/mdadm.conf - -![Save RAID Configurations](http://www.tecmint.com/wp-content/uploads/2014/10/Save-RAID-Configurations.png) - -保存 RAID 配置 - -就这样,我们在这里看到,如何通过使用两个硬盘配置具有条带化的 RAID0 级别。在接下来的文章中,我们将看到如何设置 RAID5。 - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/create-raid0-in-linux/ - -作者:[Babin Lonston][a] -译者:[strugglingyouth](https://github.com/strugglingyouth) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/babinlonston/ -[1]:http://www.tecmint.com/understanding-raid-setup-in-linux/ diff --git a/translated/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md b/translated/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md new file mode 100644 index 0000000000..37a3dbe11c --- /dev/null +++ b/translated/tech/RHCE/Part 4 - Using Shell Scripting to Automate Linux System Maintenance Tasks.md @@ -0,0 +1,205 @@ +第四部分 - 使用 Shell 脚本自动化 Linux 系统维护任务 +================================================================================ +之前我听说高效系统管理员/工程师的其中一个特点是懒惰。一开始看起来很矛盾,但作者接下来解释了其中的原因: + +![自动化 Linux 系统维护任务](http://www.tecmint.com/wp-content/uploads/2015/08/Automate-Linux-System-Maintenance-Tasks.png) + +RHCE 系列:第四部分 - 自动化 Linux 系统维护任务 + +如果一个系统管理员花费大量的时间解决问题以及做重复的工作,你就应该怀疑他这么做是否正确。换句话说,一个高效的系统管理员/工程师应该制定一个计划使得尽量花费少的时间去做重复的工作,以及通过使用该系列中第三部分 [使用 Linux 工具集监视系统活动报告][1] 介绍的工具预见问题。因此,尽管看起来他/她没有做很多的工作,但那是因为 shell 脚本帮助完成了他的/她的大部分任务,这也就是本章我们将要探讨的东西。 + +### 什么是 shell 脚本? 
### + +简单的说,shell 脚本就是一个由 shell 一步一步执行的程序,而 shell 是在 Linux 内核和端用户之间提供接口的另一个程序。 + +默认情况下,RHEL 7 中用户使用的 shell 是 bash(/bin/bash)。如果你想知道详细的信息和历史背景,你可以查看 [维基页面][2]。 + +关于这个 shell 提供的众多功能的介绍,可以查看 **man 手册**,也可以从 ([Bash 命令][3])下载 PDF 格式。除此之外,假设你已经熟悉 Linux 命令(否则我强烈建议你首先看一下 **Tecmint.com** 中的文章 [从新手到系统管理员指南][4] )。现在让我们开始吧。 + +### 写一个脚本显示系统信息 ### + +为了方便,首先让我们新建一个目录用于保存我们的 shell 脚本: + + # mkdir scripts + # cd scripts + +然后用喜欢的文本编辑器打开新的文本文件 `system_info.sh`。我们首先在头部插入一些注释以及一些命令: + + #!/bin/bash + + # RHCE 系列第四部分事例脚本 + # 该脚本会返回以下这些系统信息: + # -主机名称: + echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m" + hostnamectl + echo "" + # -文件系统磁盘空间使用: + echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m" + df -h + echo "" + # -系统空闲和使用中的内存: + echo -e "\e[31;43m ***** FREE AND USED MEMORY *****\e[0m" + free + echo "" + # -系统启动时间: + echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m" + uptime + echo "" + # -登录的用户: + echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m" + who + echo "" + # -使用内存最多的 5 个进程 + echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m" + ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6 + echo "" + echo -e "\e[1;32mDone.\e[0m" + +然后,给脚本可执行权限: + + # chmod +x system_info.sh + +运行脚本: + + ./system_info.sh + +注意为了更好的可视化效果各部分标题都用颜色显示: + +![服务器监视 Shell 脚本](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Shell-Script.png) + +服务器监视 Shell 脚本 + +该功能用以下命令提供: + + echo -e "\e[COLOR1;COLOR2m\e[0m" + +其中 COLOR1 和 COLOR2 是前景色和背景色([Arch Linux Wiki][5] 有更多的信息和选项解释), 是你想用颜色显示的字符串。 + +### 使任务自动化 ### + +你想使其自动化的任务可能因情况而不同。因此,我们不可能在一篇文章中覆盖所有可能的场景,但是我们会介绍使用 shell 脚本可以使其自动化的三种典型任务: + +**1)** 更新本地文件数据库, 2) 查找(或者删除)有 777 权限的文件, 以及 3) 文件系统使用超过定义的阀值时发出警告。 + +让我们在脚本目录中新建一个名为 `auto_tasks.sh` 的文件并添加以下内容: + + #!/bin/bash + + # 自动化任务事例脚本: + # -更新本地文件数据库: + echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m" + updatedb + if [ $? == 0 ]; then + echo "The local file database was updated correctly." + else + echo "The local file database was not updated correctly." + fi + echo "" + + # -查找 和/或 删除有 777 权限的文件。 + echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m" + # Enable either option (comment out the other line), but not both. + # Option 1: Delete files without prompting for confirmation. Assumes GNU version of find. + #find -type f -perm 0777 -delete + # Option 2: Ask for confirmation before deleting files. More portable across systems. + find -type f -perm 0777 -exec rm -i {} +; + echo "" + # -文件系统使用率超过定义的阀值时发出警告 + echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m" + THRESHOLD=30 + while read line; do + # This variable stores the file system path as a string + FILESYSTEM=$(echo $line | awk '{print $1}') + # This variable stores the use percentage (XX%) + PERCENTAGE=$(echo $line | awk '{print $5}') + # Use percentage without the % sign. + USAGE=${PERCENTAGE%?} + if [ $USAGE -gt $THRESHOLD ]; then + echo "The remaining available space in $FILESYSTEM is critically low. 
Used: $PERCENTAGE" + fi + done < <(df -h --total | grep -vi filesystem) + +请注意该脚本最后一行两个 `<` 符号之间有个空格。 + +![查找 777 权限文件的 Shell 脚本](http://www.tecmint.com/wp-content/uploads/2015/08/Shell-Script-to-Find-777-Permissions.png) + +查找 777 权限文件的 Shell 脚本 + +### 使用 Cron ### + +想更进一步提高效率,你不会想只是坐在你的电脑前手动执行这些脚本。相反,你会使用 cron 来调度这些任务周期性地执行,并把结果通过邮件发动给预定义的接收者或者将它们保存到使用 web 浏览器可以查看的文件中。 + +下面的脚本(filesystem_usage.sh)会运行有名的 **df -h** 命令,格式化输出到 HTML 表格并保存到 **report.html** 文件中: + + #!/bin/bash + # Sample script to demonstrate the creation of an HTML report using shell scripting + # Web directory + WEB_DIR=/var/www/html + # A little CSS and table layout to make the report look a little nicer + echo " + + + + + " > $WEB_DIR/report.html + # View hostname and insert it at the top of the html body + HOST=$(hostname) + echo "Filesystem usage for host $HOST
+ Last updated: $(date)

+ + " >> $WEB_DIR/report.html + # Read the output of df -h line by line + while read line; do + echo "" >> $WEB_DIR/report.html + done < <(df -h | grep -vi filesystem) + echo "
Filesystem + Size + Use % +
" >> $WEB_DIR/report.html + echo $line | awk '{print $1}' >> $WEB_DIR/report.html + echo "" >> $WEB_DIR/report.html + echo $line | awk '{print $2}' >> $WEB_DIR/report.html + echo "" >> $WEB_DIR/report.html + echo $line | awk '{print $5}' >> $WEB_DIR/report.html + echo "
" >> $WEB_DIR/report.html + +在我们的 **RHEL 7** 服务器(**192.168.0.18**)中,看起来像下面这样: + +![服务器监视报告](http://www.tecmint.com/wp-content/uploads/2015/08/Server-Monitoring-Report.png) + +服务器监视报告 + +你可以添加任何你想要的信息到那个报告中。添加下面的 crontab 条目在每天下午的 1:30 运行该脚本: + + 30 13 * * * /root/scripts/filesystem_usage.sh + +### 总结 ### + +你很可能想起各种其他想要自动化的任务;正如你看到的,使用 shell 脚本能极大的简化任务。如果你觉得这篇文章对你有所帮助就告诉我们吧,别犹豫在下面的表格中添加你自己的想法或评论。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/using-shell-script-to-automate-linux-system-maintenance-tasks/ + +作者:[Gabriel Cánepa][a] +译者:[ictlyh](https://github.com/ictlyh) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/linux-performance-monitoring-and-file-system-statistics-reports/ +[2]:https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29 +[3]:http://www.tecmint.com/wp-content/pdf/bash.pdf +[4]:http://www.tecmint.com/60-commands-of-linux-a-guide-from-newbies-to-system-administrator/ +[5]:https://wiki.archlinux.org/index.php/Color_Bash_Prompt \ No newline at end of file diff --git a/translated/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md b/translated/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md new file mode 100644 index 0000000000..a37c9610fd --- /dev/null +++ b/translated/tech/RHCE/Part 5 - How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7.md @@ -0,0 +1,169 @@ +第五部分 - 如何在 RHEL 7 中管理系统日志(配置、旋转以及导入到数据库) +================================================================================ +为了确保你的 RHEL 7 系统安全,你需要通过查看日志文件监控系统中发生的所有活动。这样,你就可以检测任何不正常或有潜在破坏的活动并进行系统故障排除或者其它恰当的操作。 + +![Linux 中使用 Rsyslog 和 Logrotate 旋转日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Manage-and-Rotate-Linux-Logs-Using-Rsyslog-Logrotate.jpg) + +(译者注:[日志旋转][9]是系统管理中归档每天产生的日志文件的自动化过程) + +RHCE 考试 - 第五部分:使用 Rsyslog 和 Logrotate 管理系统日志 + +在 RHEL 7 中,[rsyslogd][1] 守护进程负责系统日志,它从 /etc/rsyslog.conf(该文件指定所有系统日志的默认路径)和 /etc/rsyslog.d 中的所有文件(如果有的话)读取配置信息。 + +### Rsyslogd 配置 ### + +快速浏览一下 [rsyslog.conf][2] 会是一个好的开端。该文件分为 3 个主要部分:模块(rsyslong 按照模块化设计),全局指令(用于设置 rsyslogd 守护进程的全局属性),以及规则。正如你可能猜想的,最后一个部分指示获取,显示以及在哪里保存什么的日志(也称为选择子),这也是这篇博文关注的重点。 + +rsyslog.conf 中典型的一行如下所示: + +![Rsyslogd 配置](http://www.tecmint.com/wp-content/uploads/2015/08/Rsyslogd-Configuration.png) + +Rsyslogd 配置 + +在上面的图片中,我们可以看到一个选择子包括了一个或多个用分号分隔的设备:优先级(Facility:Priority)对,其中设备描述了消息类型(参考 [RFC 3164 4.1.1 章节][3] 查看 rsyslog 可用的完整设备列表),优先级指示它的严重性,这可能是以下几种之一: + +- debug +- info +- notice +- warning +- err +- crit +- alert +- emerg + +尽管自身并不是一个优先级,关键字 none 意味着指定设备没有任何优先级。 + +**注意**:给定一个优先级表示该优先级以及之上的消息都应该记录到日志中。因此,上面例子中的行指示 rsyslogd 守护进程记录所有优先级为 info 以及以上(不管是什么设备)的除了属于 mail、authpriv、以及 cron 服务(不考虑来自这些设备的消息)的消息到 /var/log/messages。 + +你也可以使用逗号将多个设备分为一组,对同组中的设备使用相同的优先级。例如下面这行: + + *.info;mail.none;authpriv.none;cron.none /var/log/messages + +也可以这样写: + + *.info;mail,authpriv,cron.none /var/log/messages + +换句话说,mail、authpriv 以及 cron 被分为一组,并使用关键字 none。 + +#### 创建自定义日志文件 #### + +要把所有的守护进程消息记录到 /var/log/tecmint.log,我们需要在 rsyslog.conf 或者 /etc/rsyslog.d 目录中的单独文件(易于管理)添加下面一行: + + daemon.* /var/log/tecmint.log + +然后重启守护进程(注意服务名称不以 d 结尾): + + # systemctl restart rsyslog + +在随机重启两个守护进程之前和之后查看自定义日志的内容: + +![Linux 创建自定义日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Create-Custom-Log-File.png) + +创建自定义日志文件 + 
+作为一个自学练习,我建议你重点关注设备和优先级,添加额外的消息到已有的日志文件或者像上面那样创建一个新的日志文件。 + +### 使用 Logrotate 旋转日志 ### + +为了防止日志文件无限制增长,logrotate 工具用于旋转、压缩、移除或者通过电子邮件发送日志,从而减轻管理会产生大量日志文件系统的困难。 + +Logrotate 作为一个 cron 作业(/etc/cron.daily/logrotate)每天运行,并从 /etc/logrotate.conf 和 /etc/logrotate.d 中的文件(如果有的话)读取配置信息。 + +对于 rsyslog,即使你可以在主文件中为指定服务包含设置,为每个服务创建单独的配置文件能帮助你更好地组织设置。 + +让我们来看一个典型的 logrotate.conf: + +![Logrotate 配置](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Configuration.png) + +Logrotate 配置 + +在上面的例子中,logrotate 会为 /var/log/wtmp 进行以下操作:尝试每个月旋转一次,但至少文件要大于 1MB,然后用 0664 权限、用户 root、组 utmp 创建一个新的日志文件。下一步只保存一个归档日志,正如旋转指令指定的: + +![每月 Logrotate 日志](http://www.tecmint.com/wp-content/uploads/2015/08/Logrotate-Logs-Monthly.png) + +每月 Logrotate 日志 + +让我们再来看看 /etc/logrotate.d/httpd 中的另一个例子: + +![旋转 Apache 日志文件](http://www.tecmint.com/wp-content/uploads/2015/08/Rotate-Apache-Log-Files.png) + +旋转 Apache 日志文件 + +你可以在 logrotate 的 man 手册([man logrotate][4] 和 [man logrotate.conf][5])中阅读更多有关它的设置。为了方便你的阅读,本文还提供了两篇文章的 PDF 格式。 + +作为一个系统工程师,很可能由你决定多久按照什么格式保存一次日志,取决于你是否有一个单独的分区/逻辑卷给 /var。否则,你真的要考虑删除旧日志以节省存储空间。另一方面,根据你公司和客户内部的政策,为了以后的安全审核,你可能被迫要保留多个日志。 + +#### 保存日志到数据库 #### + +当然检查日志可能是一个很繁琐的工作(即使有类似 grep 工具和正则表达式的帮助)。因为这个原因,rsyslog 允许我们把它们导出到数据库(OTB 支持的关系数据库管理系统包括 MySQL、MariaDB、PostgreSQL 和 Oracle)。 + +指南的这部分假设你已经在要管理日志的 RHEL 7 上安装了 MariaDB 服务器和客户端: + + # yum update && yum install mariadb mariadb-server mariadb-client rsyslog-mysql + # systemctl enable mariadb && systemctl start mariadb + +然后使用 `mysql_secure_installation` 工具为 root 用户设置密码以及其它安全考量: + + +![保证 MySQL 数据库安全](http://www.tecmint.com/wp-content/uploads/2015/08/Secure-MySQL-Database.png) + +保证 MySQL 数据库安全 + +注意:如果你不想用 MariaDB root 用户插入日志消息到数据库,你也可以配置用另一个用户账户。如何实现的介绍已经超出了本文的范围,但在 [MariaDB 知识][6] 中有详细解析。为了简单在这篇指南中我们会使用 root 账户。 + +下一步,从 [GitHub][7] 下载 createDB.sql 脚本并导入到你的数据库服务器: + + # mysql -u root -p < createDB.sql + +![保存服务器日志到数据库](http://www.tecmint.com/wp-content/uploads/2015/08/Save-Server-Logs-to-Database.png) + +保存服务器日志到数据库 + +最后,添加下面的行到 /etc/rsyslog.conf: + + $ModLoad ommysql + $ActionOmmysqlServerPort 3306 + *.* :ommysql:localhost,Syslog,root,YourPasswordHere + +重启 rsyslog 和数据库服务器: + + # systemctl restart rsyslog + # systemctl restart mariadb + +#### 使用 SQL 语法查询日志 #### + +现在执行一些会改变日志的操作(例如停止和启动服务),然后登陆到你的 DB 服务器并使用标准的 SQL 命令显示和查询日志: + + USE Syslog; + SELECT ReceivedAt, Message FROM SystemEvents; + +![在数据库中查询日志](http://www.tecmint.com/wp-content/uploads/2015/08/Query-Logs-in-Database.png) + +在数据库中查询日志 + +### 总结 ### + +在这篇文章中我们介绍了如何设置系统日志,如果旋转日志以及为了简化查询如何重定向消息到数据库。我们希望这些技巧能对你准备 [RHCE 考试][8] 和日常工作有所帮助。 + +正如往常,非常欢迎你的反馈。用下面的表单和我们联系吧。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/manage-linux-system-logs-using-rsyslogd-and-logrotate/ + +作者:[Gabriel Cánepa][a] +译者:[ictlyh](http://www.mutouxiaogui.cn/blog/) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/wp-content/pdf/rsyslogd.pdf +[2]:http://www.tecmint.com/wp-content/pdf/rsyslog.conf.pdf +[3]:https://tools.ietf.org/html/rfc3164#section-4.1.1 +[4]:http://www.tecmint.com/wp-content/pdf/logrotate.pdf +[5]:http://www.tecmint.com/wp-content/pdf/logrotate.conf.pdf +[6]:https://mariadb.com/kb/en/mariadb/create-user/ +[7]:https://github.com/sematext/rsyslog/blob/master/plugins/ommysql/createDB.sql +[8]:http://www.tecmint.com/how-to-setup-and-configure-static-network-routing-in-rhel/ 
+[9]:https://en.wikipedia.org/wiki/Log_rotation \ No newline at end of file diff --git a/translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md b/translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md deleted file mode 100644 index 93c2787c7e..0000000000 --- a/translated/tech/RHCSA/RHCSA Series--Part 01--Reviewing Essential Commands and System Documentation.md +++ /dev/null @@ -1,320 +0,0 @@ -[translating by xiqingongzi] - -RHCSA系列: 复习基础命令及系统文档 – 第一部分 -================================================================================ -RHCSA (红帽认证系统工程师) 是由给商业公司提供开源操作系统和软件的RedHat公司举行的认证考试, 除此之外,红帽公司还为这些企业和机构提供支持、训练以及咨询服务 - -![RHCSA Exam Guide](http://www.tecmint.com/wp-content/uploads/2015/02/RHCSA-Series-by-Tecmint.png) - -RHCSA 考试准备指南 - -RHCSA 考试(考试编号 EX200)通过后可以获取由Red Hat 公司颁发的证书. RHCSA 考试是RHCT(红帽认证技师)的升级版,而且RHCSA必须在新的Red Hat Enterprise Linux(红帽企业版)下完成.RHCT和RHCSA的主要变化就是RHCT基于 RHEL5 , 而RHCSA基于RHEL6或者7, 这两个认证的等级也有所不同. - -红帽认证管理员所会的最基础的是在红帽企业版的环境下执行如下系统管理任务: - -- 理解并会使用命令管理文件、目录、命令行以及系统/软件包的文档 -- 使用不同的启动等级启动系统,认证和控制进程,启动或停止虚拟机 -- 使用分区和逻辑卷管理本地存储 -- 创建并且配置本地文件系统和网络文件系统,设置他们的属性(许可、加密、访问控制表) -- 部署、配置、并且控制系统,包括安装、升级和卸载软件 -- 管理系统用户和组,独立使用集中制的LDAP目录权限控制 -- 确保系统安全,包括基础的防火墙规则和SELinux配置 - - -关于你所在国家的考试注册费用参考 [RHCSA Certification page][1]. - -关于你所在国家的考试注册费用参考RHCSA 认证页面 - - -在这个有15章的RHCSA(红帽认证管理员)备考系列,我们将覆盖以下的关于红帽企业Linux第七版的最新的信息 - -- Part 1: 回顾必会的命令和系统文档 -- Part 2: 在RHEL7如何展示文件和管理目录 -- Part 3: 在RHEL7中如何管理用户和组 -- Part 4: 使用nano和vim管理命令/ 使用grep和正则表达式分析文本 -- Part 5: RHEL7的进程管理:启动,关机,以及其他介于二者之间的. -- Part 6: 使用 'Parted'和'SSM'来管理和加密系统存储 -- Part 7: 使用ACLs(访问控制表)并挂载 Samba /NFS 文件分享 -- Part 8: 加固SSH,设置主机名并开启网络服务 -- Part 9: 安装、配置和加固一个Web,FTP服务器 -- Part 10: Yum 包管理方式,使用Cron进行自动任务管理以及监控系统日志 -- Part 11: 使用FirewallD和Iptables设置防火墙,控制网络流量 -- Part 12: 使用Kickstart 自动安装RHEL 7 -- Part 13: RHEL7:什么是SeLinux?他的原理是什么? -- Part 14: 在RHEL7 中使用基于LDAP的权限控制 -- Part 15: RHEL7的虚拟化:KVM 和虚拟机管理 - -在第一章,我们讲解如何输入和运行正确的命令在终端或者Shell窗口,并且讲解如何找到、插入,以及使用系统文档 - -![RHCSA: Reviewing Essential Linux Commands – Part 1](http://www.tecmint.com/wp-content/uploads/2015/02/Reviewing-Essential-Linux-Commands.png) - -RHCSA:回顾必会的Linux命令 - 第一部分 - -#### 前提: #### - -至少你要熟悉如下命令 - -- [cd command][2] (改变目录) -- [ls command][3] (列举文件) -- [cp command][4] (复制文件) -- [mv command][5] (移动或重命名文件) -- [touch command][6] (创建一个新的文件或更新已存在文件的时间表) -- rm command (删除文件) -- mkdir command (创建目录) - -在这篇文章中你将会找到更多的关于如何更好的使用他们的正确用法和特殊用法. - -虽然没有严格的要求,但是作为讨论常用的Linux命令和方法,你应该安装RHEL7 来尝试使用文章中提到的命令.这将会使你学习起来更省力. - -- [红帽企业版Linux(RHEL)7 安装指南][7] - -### 使用Shell进行交互 ### -如果我们使用文本模式登陆Linux,我们就无法使用鼠标在默认的shell。另一方面,如果我们使用图形化界面登陆,我们将会通过启动一个终端来开启shell,无论那种方式,我们都会看到用户提示,并且我们可以开始输入并且执行命令(当按下Enter时,命令就会被执行) - - -当我们使用文本模式登陆Linux时, -命令是由两个部分组成的: - -- 命令本身 -- 参数 - -某些参数,称为选项(通常使用一个连字符区分),改变了由其他参数定义的命令操作. - -命令的类型可以帮助我们识别某一个特定的命令是由shell内建的还是由一个单独的包提供。这样的区别在于我们能够找到更多关于该信息的命令,对shell内置的命令,我们需要看shell的ManPage,如果是其他提供的,我们需要看它自己的ManPage. - -![Check Shell built in Commands](http://www.tecmint.com/wp-content/uploads/2015/02/Check-shell-built-in-Commands.png) - -检查Shell的内建命令 - -在上面的例子中, cd 和 type 是shell内建的命令,top和 less 是由其他的二进制文件提供的(在这种情况下,type将返回命令的位置) -其他的内建命令 - -- [echo command][8]: 展示字符串 -- [pwd command][9]: 输出当前的工作目录 - -![More Built in Shell Commands](http://www.tecmint.com/wp-content/uploads/2015/02/More-Built-in-Shell-Commands.png) - -更多内建函数 - -**exec 命令** - -运行我们指定的外部程序。请注意,最好是只输入我们想要运行的程序的名字,不过exec命令有一个特殊的特性:使用旧的shell运行,而不是创建新的进程,可以作为子请求的验证. 
- - # ps -ef | grep [shell 进程的PID] - -当新的进程注销,Shell也随之注销,运行 exec top 然后按下 q键来退出top,你会注意到shell 会话会结束,如下面的屏幕录像展示的那样: - -注:youtube视频 - - -**export 命令** - -输出之后执行的命令的环境的变量 - -**history 命令** - -展示数行之前的历史命令.在感叹号前输入命令编号可以再次执行这个命令.如果我们需要编辑历史列表中的命令,我们可以按下 Ctrl + r 并输入与命令相关的第一个字符. -当我们看到的命令自动补全,我们可以根据我们目前的需要来编辑它: - -注:youtube视频 - - -命令列表会保存在一个叫 .bash_history的文件里.history命令是一个非常有用的用于减少输入次数的工具,特别是进行命令行编辑的时候.默认情况下,bash保留最后输入的500个命令,不过可以通过修改 HISTSIZE 环境变量来增加: - - -![Linux history Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-history-Command.png) - -Linux history 命令 - -但上述变化,在我们的下一次启动不会保留。为了保持HISTSIZE变量的变化,我们需要通过手工修改文件编辑: - - # 设置history请看 HISTSIZE 和 HISTFILESIZE 在 bash(1)的文档 - HISTSIZE=1000 - -**重要**: 我们的更改不会生效,除非我们重启了系统 - -**alias 命令** -没有参数或使用-p参数将会以 名称=值的标准形式输出alias 列表.当提供了参数时,一个alias 将被定义给给定的命令和值 - -使用alias ,我们可以创建我们自己的命令,或修改现有的命令,包括需要的参数.举个例子,假设我们想别名 ls 到 ls –color=auto ,这样就可以使用不同颜色输出文件、目录、链接 - - - # alias ls='ls --color=auto' - -![Linux alias Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-alias-Command.png) - -Linux 别名命令 - -**Note**: 你可以给你的新命令起任何的名字,并且附上足够多的使用单引号分割的参数,但是这样的情况下你要用分号区分开他们. - - # alias myNewCommand='cd /usr/bin; ls; cd; clear' - -**exit 命令** - -Exit和logout命令都是退出shell.exit命令退出所有的shell,logout命令只注销登陆的shell,其他的自动以文本模式启动的shell不算. - -如果我们对某个程序由疑问,我们可以看他的man Page,可以使用man命令调出它,额外的,还有一些重要的文件的手册页(inittab,fstab,hosts等等),库函数,shells,设备及其他功能 - -#### 举例: #### - -- man uname (输出系统信息,如内核名称、处理器、操作系统类型、架构等). -- man inittab (初始化守护设置). - -另外一个重要的信息的来源就是info命令提供的,info命令常常被用来读取信息文件.这些文件往往比manpage 提供更多信息.通过info 关键词调用某个命令的信息 - - # info ls - # info cut - - -另外,在/usr/share/doc 文件夹包含了大量的子目录,里面可以找到大量的文档.他们包含文本文件或其他友好的格式. -确保你使用这三种方法去查找命令的信息。重点关注每个命令文档中介绍的详细的语法 - -**使用expand命令把tabs转换为空格** - -有时候文本文档包含了tabs但是程序无法很好的处理的tabs.或者我们只是简单的希望将tabs转换成空格.这就是为什么expand (GNU核心组件提供)工具出现, - -举个例子,给我们一个文件 NumberList.txt,让我们使用expand处理它,将tabs转换为一个空格.并且以标准形式输出. - - # expand --tabs=1 NumbersList.txt - -![Linux expand Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-expand-Command.png) - -Linux expand 命令 - -unexpand命令可以实现相反的功能(将空格转为tab) - -**使用head输出文件首行及使用tail输出文件尾行** - -通常情况下,head命令后跟着文件名时,将会输出该文件的前十行,我们可以通过 -n 参数来自定义具体的行数。 - - # head -n3 /etc/passwd - # tail -n3 /etc/passwd - -![Linux head and tail Command](http://www.tecmint.com/wp-content/uploads/2015/02/Linux-head-and-tail-Command.png) - -Linux 的 head 和 tail 命令 - -tail 最有意思的一个特性就是能够展现信息(最后一行)就像我们输入文件(tail -f my.log,一行一行的,就像我们在观察它一样。)这在我们监控一个持续增加的日志文件时非常有用 - -更多: [Manage Files Effectively using head and tail Commands][10] - -**使用paste合并文本文件** -paste命令一行一行的合并文件,默认会以tab来区分每一行,或者其他你自定义的分行方式.(下面的例子就是输出使用等号划分行的文件). 
- # paste -d= file1 file2 - -![Merge Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Merge-Files-in-Linux-with-paste-command.png) - -Merge Files in Linux - -**使用split命令将文件分块** - -split 命令常常用于把一个文件切割成两个或多个文由我们自定义的前缀命名的件文件.这些文件可以通过大小、区块、行数,生成的文件会有一个数字或字母的后缀.在下面的例子中,我们将切割bash.pdf ,每个文件50KB (-b 50KB) ,使用命名后缀 (-d): - - # split -b 50KB -d bash.pdf bash_ - -![Split Files in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Split-Files-in-Linux-with-split-command.png) - -在Linux下划分文件 - -你可以使用如下命令来合并这些文件,生成源文件: - - # cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf - -**使用tr命令改变字符** - -tr 命令多用于变化(改变)一个一个的字符活使用字符范围.和之前一样,下面的实例我们江使用同样的文件file2,我们将实习: - -- 小写字母 o 变成大写 -- 所有的小写字母都变成大写字母 - - # cat file2 | tr o O - # cat file2 | tr [a-z] [A-Z] - -![Translate Characters in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/Translate-characters-in-Linux-with-tr-command.png) - -在Linux中替换文字 - -**使用uniq和sort检查或删除重复的文字** - -uniq命令可以帮我们查出或删除文件中的重复的行,默认会写出到stdout.我们应当注意, uniq 只能查出相邻的两个相同的单纯,所以, uniq 往往和sort 一起使用(sort一般用于对文本文件的内容进行排序) - - -默认的,sort 以第一个参数(使用空格区分)为关键字.想要定义特殊的关键字,我们需要使用 -k参数,请注意如何使用sort 和uniq输出我们想要的字段,具体可以看下面的例子 - - # cat file3 - # sort file3 | uniq - # sort -k2 file3 | uniq - # sort -k3 file3 | uniq - -![删除文件中重复的行](http://www.tecmint.com/wp-content/uploads/2015/02/Remove-Duplicate-Lines-in-file.png) - -删除文件中重复的行 - -**从文件中提取文本的命令** - -Cut命令基于字节(-b),字符(-c),或者区块(-f)从stdin活文件中提取到的部分将会以标准的形式展现在屏幕上 - -当我们使用区块切割时,默认的分隔符是一个tab,不过你可以通过 -d 参数来自定义分隔符. - - # cut -d: -f1,3 /etc/passwd # 这个例子提取了第一块和第三块的文本 - # cut -d: -f2-4 /etc/passwd # 这个例子提取了第一块到第三块的文本 - -![从文件中提取文本](http://www.tecmint.com/wp-content/uploads/2015/02/Extract-Text-from-a-file.png) - -从文件中提取文本 - - -注意,上方的两个输出的结果是十分简洁的。 - -**使用fmt命令重新格式化文件** - -fmt 被用于去“清理”有大量内容或行的文件,或者有很多缩进的文件.新的锻炼格式每行不会超过75个字符款,你能改变这个设定通过 -w(width 宽度)参数,它可以设置行宽为一个特定的数值 - -举个例子,让我们看看当我们用fmt显示定宽为100个字符的时候的文件/etc/passwd 时会发生什么.再来一次,输出值变得更加简洁. - - # fmt -w100 /etc/passwd - -![File Reformatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Reformatting-in-Linux-with-fmt-command.png) - -Linux文件重新格式化 - -**使用pr命令格式化打印内容** - -pr 分页并且在列中展示一个或多个用于打印的文件. 换句话说,使用pr格式化一个文件使他打印出来时看起来更好.举个例子,下面这个命令 - - # ls -a /etc | pr -n --columns=3 -h "Files in /etc" - -以一个友好的排版方式(3列)输出/etc下的文件,自定义了页眉(通过 -h 选项实现),行号(-n) - -![File Formatting in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/File-Formatting-in-Linux-with-pr-command.png) - -Linux的文件格式 - -### 总结 ### - -在这篇文章中,我们已经讨论了如何在Shell或终端以正确的语法输入和执行命令,并解释如何找到,检查和使用系统文档。正如你看到的一样简单,这就是你成为RHCSA的第一大步 - -如果你想添加一些其他的你经常使用的能够有效帮你完成你的日常工作的基础命令,并为分享他们而感到自豪,请在下方留言.也欢迎提出问题.我们期待您的回复. 
- - --------------------------------------------------------------------------------- - -via: http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ - -作者:[Gabriel Cánepa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.tecmint.com/author/gacanepa/ -[1]:https://www.redhat.com/en/services/certification/rhcsa -[2]:http://www.tecmint.com/cd-command-in-linux/ -[3]:http://www.tecmint.com/ls-command-interview-questions/ -[4]:http://www.tecmint.com/advanced-copy-command-shows-progress-bar-while-copying-files/ -[5]:http://www.tecmint.com/rename-multiple-files-in-linux/ -[6]:http://www.tecmint.com/8-pratical-examples-of-linux-touch-command/ -[7]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/ -[8]:http://www.tecmint.com/echo-command-in-linux/ -[9]:http://www.tecmint.com/pwd-command-examples/ -[10]:http://www.tecmint.com/view-contents-of-file-in-linux/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md b/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md new file mode 100644 index 0000000000..1436621c4e --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 03--How to Manage Users and Groups in RHEL 7.md @@ -0,0 +1,224 @@ +RHCSA 系列: 如何管理RHEL7的用户和组 – Part 3 +================================================================================ +和管理其他Linux服务器一样,管理一个 RHEL 7 服务器 要求你能够添加,修改,暂停或删除用户帐户,并且授予他们文件,目录,其他系统资源所必要的权限。 +![User and Group Management in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/User-and-Group-Management-in-Linux.png) + +RHCSA: 用户和组管理 – Part 3 + +### 管理用户帐户## + +如果想要给RHEL 7 服务器添加账户,你需要以root用户执行如下两条命令 + + # adduser [new_account] + # useradd [new_account] + +当添加新的用户帐户时,默认会执行下列操作。 + +- 他/她 的主目录就会被创建(一般是"/home/用户名",除非你特别设置) +- 一些隐藏文件 如`.bash_logout`, `.bash_profile` 以及 `.bashrc` 会被复制到用户的主目录,并且会为用户的回话提供环境变量.你可以进一步查看他们的相关细节。 +- 会为您的账号添加一个邮件池目录 +- 会创建一个和用户名同样的组 + +用户帐户的全部信息被保存在`/etc/passwd `文件。这个文件以如下格式保存了每一个系统帐户的所有信息(以:分割) + [username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell] + +- `[username]` 和`[Comment]` 是用于自我解释的 +- ‘x’表示帐户的密码保护(详细在`/etc/shadow`文件),就是我们用于登录的`[username]`. +- `[UID]` 和`[GID]`是用于显示`[username]` 的 用户认证和主用户组。 + +最后, + +- `[Home directory]`显示`[username]`的主目录的绝对路径 +- `[Default shell]` 是当用户登录系统后使用的默认shell + +另外一个你必须要熟悉的重要的文件是存储组信息的`/etc/group`.因为和`/etc/passwd`类似,所以也是由:分割 + [Group name]:[Group password]:[GID]:[Group members] + + + +- `[Group name]` 是组名 +- 这个组是否使用了密码 (如果是"X"意味着没有). 
+- `[GID]`: 和`/etc/passwd`中一样 +- `[Group members]`:用户列表,使用,隔开。里面包含组内的所有用户 + +添加过帐户后,任何时候你都可以通过 usermod 命令来修改用户战壕沟,基础的语法如下: + # usermod [options] [username] + +相关阅读 + +- [15 ‘useradd’ Command Examples][1] +- [15 ‘usermod’ Command Examples][2] + +#### 示例1 : 设置帐户的过期时间 #### + +如果你的公司有一些短期使用的帐户或者你相应帐户在有限时间内使用,你可以使用 `--expiredate` 参数 ,后加YYYY-MM-DD格式的日期。为了查看是否生效,你可以使用如下命令查看 + # chage -l [username] + +帐户更新前后的变动如下图所示 +![Change User Account Information](http://www.tecmint.com/wp-content/uploads/2015/03/Change-User-Account-Information.png) + +修改用户信息 + +#### 示例 2: 向组内追加用户 #### + +除了创建用户时的主用户组,一个用户还能被添加到别的组。你需要使用 -aG或 -append -group 选项,后跟逗号分隔的组名 +#### 示例 3: 修改用户主目录或默认Shell #### + +如果因为一些原因,你需要修改默认的用户主目录(一般为 /home/用户名),你需要使用 -d 或 -home 参数,后跟绝对路径来修改主目录 +如果有用户想要使用其他的shell来取代bash(比如sh ),一般默认是bash .使用 usermod ,并使用 -shell 的参数,后加新的shell的路径 +#### 示例 4: 展示组内的用户 #### + +当把用户添加到组中后,你可以使用如下命令验证属于哪一个组 + + # groups [username] + # id [username] + +下面图片的演示了示例2到示例四 + +![Adding User to Supplementary Group](http://www.tecmint.com/wp-content/uploads/2015/03/Adding-User-to-Supplementary-Group.png) + +添加用户到额外的组 + +在上面的示例中: + + # usermod --append --groups gacanepa,users --home /tmp --shell /bin/sh tecmint + +如果想要从组内删除用户,省略 `--append` 切换,并且可以使用 `--groups` 来列举组内的用户 + +#### 示例 5: 通过锁定密码来停用帐户 #### + +如果想要关闭帐户,你可以使用 -l(小写的L)或 -lock 选项来锁定用户的密码。这将会阻止用户登录。 + +#### 示例 6: 解锁密码 #### + +当你想要重新启用帐户让他可以继续登录时,属于 -u 或 –unlock 选项来解锁用户的密码,就像示例5 介绍的那样 + + # usermod --unlock tecmint + +下面的图片展示了示例5和示例6 + +![Lock Unlock User Account](http://www.tecmint.com/wp-content/uploads/2015/03/Lock-Unlock-User-Account.png) + +锁定上锁用户 + +#### 示例 7:删除组和用户 #### + +如果要删除一个组,你需要使用 groupdel ,如果需要删除用户 你需要使用 userdel (添加 -r 可以删除主目录和邮件池的内容) + # groupdel [group_name] # 删除组 + # userdel -r [user_name] # 删除用户,并删除主目录和邮件池 + +如果一些文件属于组,他们将不会被删除。但是组拥有者将会被设置为删除掉的组的GID +### 列举,设置,并且修改 ugo/rwx 权限 ### + +著名的 [ls 命令][3] 是管理员最好的助手. 当我们使用 -l 参数, 这个工具允许您查看一个目录中的内容(或详细格式). + +而且,该命令还可以应用于单个文件中。无论哪种方式,在“ls”输出中的前10个字符表示每个文件的属性。 +这10个字符序列的第一个字符用于表示文件类型: + +- – (连字符): 一个标准文件 +- d: 一个目录 +- l: 一个符号链接 +- c: 字符设备(将数据作为字节流,即一个终端) +- b: 块设备(处理数据块,即存储设备) + +文件属性的下一个九个字符,分为三个组,被称为文件模式,并注明读(r),写(w),并执行(x)授予文件的所有者,文件的所有组,和其他的用户(通常被称为“世界”)。 +在文件的读取权限允许打开和读取相同的权限时,允许其内容被列出,如果还设置了执行权限,还允许它作为一个程序和运行。 +文件权限是通过chmod命令改变的,它的基本语法如下: + + # chmod [new_mode] file + +new_mode是一个八进制数或表达式,用于指定新的权限。适合每一个随意的案例。或者您已经有了一个更好的方式来设置文件的权限,所以你觉得可以自由地使用最适合你自己的方法。 +八进制数可以基于二进制等效计算,可以从所需的文件权限的文件的所有者,所有组,和世界。一定权限的存在等于2的幂(R = 22,W = 21,x = 20),没有时意为0。例如: +![File Permissions](http://www.tecmint.com/wp-content/uploads/2015/03/File-Permissions.png) + +文件权限 + +在八进制形式下设置文件的权限,如上图所示 + + # chmod 744 myfile + +请用一分钟来对比一下我们以前的计算,在更改文件的权限后,我们的实际输出为: + +![Long List Format](http://www.tecmint.com/wp-content/uploads/2015/03/Long-List-Format.png) + +长列表格式 + +#### 示例 8: 寻找777权限的文件 #### + +出于安全考虑,你应该确保在正常情况下,尽可能避免777权限(读、写、执行的文件)。虽然我们会在以后的教程中教你如何更有效地找到所有的文件在您的系统的权限集的说明,你现在仍可以使用LS grep获取这种信息。 +在下面的例子,我们会寻找 /etc 目录下的777权限文件. 
注意,我们要使用第二章讲到的管道的知识[第二章:文件和目录管理][4]: + + # ls -l /etc | grep rwxrwxrwx + +![Find All Files with 777 Permission](http://www.tecmint.com/wp-content/uploads/2015/03/Find-All-777-Files.png) + +查找所有777权限的文件 + +#### 示例 9: 为所有用户指定特定权限 #### + +shell脚本,以及一些二进制文件,所有用户都应该有权访问(不只是其相应的所有者和组),应该有相应的执行权限(我们会讨论特殊情况下的问题): + # chmod a+x script.sh + +**注意**: 我们可以设置文件模式使用表示用户权限的字母如“u”,组所有者权限的字母“g”,其余的为o 。所有权限为a.权限可以通过`+` 或 `-` 来管理。 + +![Set Execute Permission on File](http://www.tecmint.com/wp-content/uploads/2015/03/Set-Execute-Permission-on-File.png) + +为文件设置执行权限 + +长目录列表还显示了该文件的所有者和其在第一和第二列中的组主。此功能可作为系统中文件的第一级访问控制方法: + +![Check File Owner and Group](http://www.tecmint.com/wp-content/uploads/2015/03/Check-File-Owner-and-Group.png) + +检查文件的属主和属组 + +改变文件的所有者,您将使用chown命令。请注意,您可以在同一时间或单独的更改文件的所有权: + # chown user:group file + +虽然可以在同一时间更改用户或组,或在同一时间的两个属性,但是不要忘记冒号区分,如果你想要更新其他属性,让另外的选项保持空白: + # chown :group file # Change group ownership only + # chown user: file # Change user ownership only + +#### 示例 10:从一个文件复制权限到另一个文件#### + +If you would like to “clone” ownership from one file to another, you can do so using the –reference flag, as follows: +如果你想“克隆”一个文件的所有权到另一个,你可以这样做,使用–reference参数,如下: + # chown --reference=ref_file file + +ref_file的所有信息会复制给 file + +![Clone File Ownership](http://www.tecmint.com/wp-content/uploads/2015/03/Clone-File-Ownership.png) + +复制文件属主信息 + +### 设置 SETGID 协作目录 ### + +你应该授予在一个特定的目录中拥有访问所有的文件的权限给一个特点的用户组,你将有可能使用目录设置setgid的方法。当setgid后设置,真实用户的有效GID成为团队的主人。 +因此,任何用户都可以访问该文件的组所有者授予的权限的文件。此外,当setgid设置在一个目录中,新创建的文件继承同一组目录,和新创建的子目录也将继承父目录的setgid。 + # chmod g+s [filename] + +为了设置 setgid 在八进制形式,预先准备好数字2 来给基本的权限 + # chmod 2755 [directory] + +### 总结 ### + +扎实的用户和组管理知识,符合规则的,Linux权限管理,以及部分实践,可以帮你快速解决RHEL 7 服务器的文件权限。 +我向你保证,当你按照本文所概述的步骤和使用系统文档(和第一章解释的那样 [Part 1: Reviewing Essential Commands & System Documentation][5] of this series) 你将掌握基本的系统管理的能力。 + +请随时让我们知道你是否有任何问题或意见使用下面的表格。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ + +作者:[Gabriel Cánepa][a] +译者:[xiqingongzi](https://github.com/xiqingongzi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/add-users-in-linux/ +[2]:http://www.tecmint.com/usermod-command-examples/ +[3]:http://www.tecmint.com/ls-interview-questions/ +[4]:http://www.tecmint.com/file-and-directory-management-in-linux/ +[5]:http://www.tecmint.com/rhcsa-exam-reviewing-essential-commands-system-documentation/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md b/translated/tech/RHCSA/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md new file mode 100644 index 0000000000..41890b2280 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 06--Using 'Parted' and 'SSM' to Configure and Encrypt System Storage.md @@ -0,0 +1,269 @@ +RHCSA 系列:使用 'Parted' 和 'SSM' 来配置和加密系统存储 – Part 6 +================================================================================ +在本篇文章中,我们将讨论在 RHEL 7 中如何使用传统的工具来设置和配置本地系统存储,并介绍系统存储管理器(也称为 SSM),它将极大地简化上面的任务。 + +![配置和加密系统存储](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-and-Encrypt-System-Storage.png) + +RHCSA: 配置和加密系统存储 – Part 6 + +请注意,我们将在这篇文章中展开这个话题,但由于该话题的宽泛性,我们将在下一期(Part 7)中继续介绍有关它的描述和使用。 + +### 在 RHEL 7 中创建和修改分区 ### + +在 RHEL 7 中, parted 是默认的用来处理分区的程序,且它允许你: + +- 
展示当前的分区表 +- 操纵(增加或减少分区的大小)现有的分区 +- 利用空余的磁盘空间或额外的物理存储设备来创建分区 + +强烈建议你在试图增加一个新的分区或对一个现有分区进行更改前,你应当确保设备上没有任何一个分区正在使用(`umount /dev/partition`),且假如你正使用设备的一部分来作为 swap 分区,在进行上面的操作期间,你需要将它禁用(`swapoff -v /dev/partition`) 。 + +实施上面的操作的最简单的方法是使用一个安装介质例如一个 RHEL 7 安装 DVD 或 USB 以急救模式启动 RHEL(Troubleshooting → Rescue a Red Hat Enterprise Linux system),然后当让你选择一个选项来挂载现有的 Linux 安装时,选择'跳过'这个选项,接着你将看到一个命令行提示符,在其中你可以像下图显示的那样开始键入与在一个未被使用的物理设备上创建一个正常的分区时所用的相同的命令。 + +![RHEL 7 急救模式](http://www.tecmint.com/wp-content/uploads/2015/04/RHEL-7-Rescue-Mode.png) + +RHEL 7 急救模式 + +要启动 parted,只需键入: + + # parted /dev/sdb + +其中 `/dev/sdb` 是你将要创建新分区所在的设备;然后键入 `print` 来显示当前设备的分区表: + +![创建新的分区](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partition.png) + +创建新的分区 + +正如你所看到的那样,在这个例子中,我们正在使用一个 5 GB 的虚拟光驱。现在我们将要创建一个 4 GB 的主分区,然后将它格式化为 xfs 文件系统,它是 RHEL 7 中默认的文件系统。 + +你可以从一系列的文件系统中进行选择。你将需要使用 mkpart 来手动地创建分区,接着和平常一样,用 mkfs.fstype 来对分区进行格式化,因为 mkpart 并不支持许多现代的文件系统以达到即开即用。 + +在下面的例子中,我们将为设备设定一个标记,然后在 `/dev/sdb` 上创建一个主分区 `(p)`,它从设备的 0% 开始,并在 4000MB(4 GB) 处结束。 + +![在 Linux 中设定分区名称](http://www.tecmint.com/wp-content/uploads/2015/04/Label-Partition.png) + +标记分区的名称 + +接下来,我们将把分区格式化为 xfs 文件系统,然后再次打印出分区表,以此来确保更改已被应用。 + + # mkfs.xfs /dev/sdb1 + # parted /dev/sdb print + +![在 Linux 中格式化分区](http://www.tecmint.com/wp-content/uploads/2015/04/Format-Partition-in-Linux.png) + +格式化分区为 XFS 文件系统 + +对于旧一点的文件系统,在 parted 中你应该使用 `resize` 命令来改变分区的大小。不幸的是,这只适用于 ext2, fat16, fat32, hfs, linux-swap, 和 reiserfs (若 libreiserfs 已被安装)。 + +因此,改变分区大小的唯一方式是删除它然后再创建它(所以确保你对你的数据做了完整的备份!)。毫无疑问,在 RHEL 7 中默认的分区方案是基于 LVM 的。 + +使用 parted 来移除一个分区,可以用: + + # parted /dev/sdb print + # parted /dev/sdb rm 1 + +![在 Linux 中移除分区](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-Partition-in-Linux.png) + +移除或删除分区 + +### 逻辑卷管理(LVM) ### + +一旦一个磁盘被分好了分区,再去更改分区的大小就是一件困难或冒险的事了。基于这个原因,假如我们计划在我们的系统上对分区的大小进行更改,我们应当考虑使用 LVM 的可能性,而不是使用传统的分区系统。这样多个物理设备可以组成一个逻辑组,以此来寄宿可自定义数目的逻辑卷,而逻辑卷的增大或减少不会带来任何麻烦。 + +简单来说,你会发现下面的示意图对记住 LVM 的基础架构或许有用。 + +![LVM 的基本架构](http://www.tecmint.com/wp-content/uploads/2015/04/LVM-Diagram.png) + +LVM 的基本架构 + +#### 创建物理卷,卷组和逻辑卷 #### + +遵循下面的步骤是为了使用传统的卷管理工具来设置 LVM。由于你可以通过阅读这个网站上的 LVM 系列来扩展这个话题,我将只是概要的介绍设置 LVM 的基本步骤,然后与使用 SSM 来实现相同功能做个比较。 + +**注**: 我们将使用整个磁盘 `/dev/sdb` 和 `/dev/sdc` 来作为 PVs (物理卷),但是否执行相同的操作完全取决于你。 + +**1. 使用 /dev/sdb 和 /dev/sdc 中 100% 的可用磁盘空间来创建分区 `/dev/sdb1` 和 `/dev/sdc1`:** + + # parted /dev/sdb print + # parted /dev/sdc print + +![创建新分区](http://www.tecmint.com/wp-content/uploads/2015/04/Create-New-Partitions.png) + +创建新分区 + +**2. 分别在 /dev/sdb1 和 /dev/sdc1 上共创建 2 个物理卷。** + + # pvcreate /dev/sdb1 + # pvcreate /dev/sdc1 + +![创建两个物理卷](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Physical-Volumes.png) + +创建两个物理卷 + +记住,你可以使用 pvdisplay /dev/sd{b,c}1 来显示有关新建的 PV 的信息。 + +**3. 在上一步中创建的 PV 之上创建一个 VG:** + + # vgcreate tecmint_vg /dev/sd{b,c}1 + +![在 Linux 中创建卷组](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Volume-Group.png) + +创建卷组 + +记住,你可使用 vgdisplay tecmint_vg 来显示有关新建的 VG 的信息。 + +**4. 像下面那样,在 VG tecmint_vg 之上创建 3 个逻辑卷:** + + # lvcreate -L 3G -n vol01_docs tecmint_vg [vol01_docs → 3 GB] + # lvcreate -L 1G -n vol02_logs tecmint_vg [vol02_logs → 1 GB] + # lvcreate -l 100%FREE -n vol03_homes tecmint_vg [vol03_homes → 6 GB] + +![在 LVM 中创建逻辑卷](http://www.tecmint.com/wp-content/uploads/2015/04/Create-Logical-Volumes.png) + +创建逻辑卷 + +记住,你可以使用 lvdisplay tecmint_vg 来显示有关在 VG tecmint_vg 之上新建的 LV 的信息。 + +**5. 
格式化每个逻辑卷为 xfs 文件系统格式(假如你计划在以后将要缩小卷的大小,请别使用 xfs 文件系统格式!):** + + # mkfs.xfs /dev/tecmint_vg/vol01_docs + # mkfs.xfs /dev/tecmint_vg/vol02_logs + # mkfs.xfs /dev/tecmint_vg/vol03_homes + +**6. 最后,挂载它们:** + + # mount /dev/tecmint_vg/vol01_docs /mnt/docs + # mount /dev/tecmint_vg/vol02_logs /mnt/logs + # mount /dev/tecmint_vg/vol03_homes /mnt/homes + +#### 移除逻辑卷,卷组和物理卷 #### + +**7.现在我们将进行与刚才相反的操作并移除 LV,VG 和 PV:** + + # lvremove /dev/tecmint_vg/vol01_docs + # lvremove /dev/tecmint_vg/vol02_logs + # lvremove /dev/tecmint_vg/vol03_homes + # vgremove /dev/tecmint_vg + # pvremove /dev/sd{b,c}1 + +**8. 现在,让我们来安装 SSM,我们将看到如何只用一步就完成上面所有的操作!** + + # yum update && yum install system-storage-manager + +我们将和上面一样,使用相同的名称和大小: + + # ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 /mnt/docs /dev/sd{b,c}1 + # ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 /mnt/logs /dev/sd{b,c}1 + # ssm create -n vol03_homes -p tecmint_vg --fstype ext4 /mnt/homes /dev/sd{b,c}1 + +是的! SSM 可以让你: + +- 初始化块设备来作为物理卷 +- 创建一个卷组 +- 创建逻辑卷 +- 格式化 LV 和 +- 只使用一个命令来挂载它们 + +**9. 现在,我们可以使用下面的命令来展示有关 PV,VG 或 LV 的信息:** + + # ssm list dev + # ssm list pool + # ssm list vol + +![检查有关 PV, VG,或 LV 的信息](http://www.tecmint.com/wp-content/uploads/2015/04/Display-LVM-Information.png) + +检查有关 PV, VG,或 LV 的信息 + +**10. 正如我们知道的那样, LVM 的一个显著的特点是可以在不停机的情况下更改(增大或缩小) 逻辑卷的大小:** + +假定在 vol02_logs 上我们用尽了空间,而 vol03_homes 还留有足够的空间。我们将把 vol03_homes 的大小调整为 4 GB,并使用剩余的空间来扩展 vol02_logs: + + # ssm resize -s 4G /dev/tecmint_vg/vol03_homes + +再次运行 `ssm list pool`,并记录 tecmint_vg 中的剩余空间的大小: + +![查看卷的大小](http://www.tecmint.com/wp-content/uploads/2015/04/Check-LVM-Free-Space.png) + +查看卷的大小 + +然后执行: + + # ssm resize -s+1.99 /dev/tecmint_vg/vol02_logs + +**注**: 在 `-s` 后的加号暗示特定值应该被加到当前值上。 + +**11. 使用 ssm 来移除逻辑卷和卷组也更加简单,只需使用:** + + # ssm remove tecmint_vg + +这个命令将返回一个提示,询问你是否确认删除 VG 和它所包含的 LV: + +![移除逻辑卷和卷组](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-LV-VG.png) + +移除逻辑卷和卷组 + +### 管理加密的卷 ### + +SSM 也给系统管理员提供了为新的或现存的卷加密的能力。首先,你将需要安装 cryptsetup 软件包: + + # yum update && yum install cryptsetup + +然后写出下面的命令来创建一个加密卷,你将被要求输入一个密码来增强安全性: + + # ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/docs /dev/sd{b,c}1 + # ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/logs /dev/sd{b,c}1 + # ssm create -n vol03_homes -p tecmint_vg --fstype ext4 --encrypt luks /mnt/homes /dev/sd{b,c}1 + +我们的下一个任务是往 /etc/fstab 中添加条目来让这些逻辑卷在启动时可用,而不是使用设备识别编号(/dev/something)。 + +我们将使用每个 LV 的 UUID (使得当我们添加其他的逻辑卷或设备后,我们的设备仍然可以被唯一的标记),而我们可以使用 blkid 应用来找到它们的 UUID: + + # blkid -o value UUID /dev/tecmint_vg/vol01_docs + # blkid -o value UUID /dev/tecmint_vg/vol02_logs + # blkid -o value UUID /dev/tecmint_vg/vol03_homes + +在我们的例子中: + +![找到逻辑卷的 UUID](http://www.tecmint.com/wp-content/uploads/2015/04/Logical-Volume-UUID.png) + +找到逻辑卷的 UUID + +接着,使用下面的内容来创建 /etc/crypttab 文件(请更改 UUID 来适用于你的设置): + + docs UUID=ba77d113-f849-4ddf-8048-13860399fca8 none + logs UUID=58f89c5a-f694-4443-83d6-2e83878e30e4 none + homes UUID=92245af6-3f38-4e07-8dd8-787f4690d7ac none + +然后在 /etc/fstab 中添加如下的条目。请注意到 device_name (/dev/mapper/device_name) 是出现在 /etc/crypttab 中第一列的映射标识: + + # Logical volume vol01_docs: + /dev/mapper/docs /mnt/docs ext4 defaults 0 2 + # Logical volume vol02_logs + /dev/mapper/logs /mnt/logs ext4 defaults 0 2 + # Logical volume vol03_homes + /dev/mapper/homes /mnt/homes ext4 defaults 0 2 + +现在重启(systemctl reboot),则你将被要求为每个 LV 输入密码。随后,你可以通过检查相应的挂载点来确保挂载操作是否成功: + 
+![确保逻辑卷挂载点](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-LV-Mount-Points.png) + +确保逻辑卷挂载点 + +### 总结 ### + +在这篇教程中,我们开始探索如何使用传统的卷管理工具和 SSM 来设置和配置系统存储,SSM 也在一个软件包中集成了文件系统和加密功能。这使得对于任何系统管理员来说,SSM 是一个非常有价值的工具。 + +假如你有任何的问题或评论,请让我们知晓 – 请随意使用下面的评论框来与我们保存联系! + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/create-lvm-storage-in-linux/ diff --git a/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md b/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md new file mode 100644 index 0000000000..a68d36de2b --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 07--Using ACLs (Access Control Lists) and Mounting Samba or NFS Shares.md @@ -0,0 +1,215 @@ +RHCSA 系列:使用 ACL(访问控制列表) 和挂载 Samba/NFS 共享 – Part 7 +================================================================================ +在上一篇文章([RHCSA 系列 Part 6][1])中,我们解释了如何使用 parted 和 ssm 来设置和配置本地系统存储。 + +![配置 ACL 及挂载 NFS/Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Configure-ACLs-and-Mounting-NFS-Samba-Shares.png) + +RHCSA Series: 配置 ACL 及挂载 NFS/Samba 共享 – Part 7 + +我们也讨论了如何创建和在系统启动时使用一个密码来挂载加密的卷。另外,我们告诫过你要避免在挂载的文件系统上执行苛刻的存储管理操作。记住了这点后,现在,我们将回顾在 RHEL 7 中最常使用的文件系统格式,然后将涵盖有关手动或自动挂载、使用和卸载网络文件系统(CIFS 和 NFS)的话题以及在你的操作系统上实现访问控制列表的使用。 + +#### 前提条件 #### + +在进一步深入之前,请确保你可使用 Samba 服务和 NFS 服务(注意在 RHEL 7 中 NFSv2 已不再被支持)。 + +在本次指导中,我们将使用一个IP 地址为 192.168.0.10 且同时运行着 Samba 服务和 NFS 服务的机子来作为服务器,使用一个 IP 地址为 192.168.0.18 的 RHEL 7 机子来作为客户端。在这篇文章的后面部分,我们将告诉你在客户端上你需要安装哪些软件包。 + +### RHEL 7 中的文件系统格式 ### + +从 RHEL 7 开始,由于 XFS 的高性能和可扩展性,它已经被引入所有的架构中来作为默认的文件系统。 +根据 Red Hat 及其合作伙伴在主流硬件上执行的最新测试,当前 XFS 已支持最大为 500 TB 大小的文件系统。 + +另外, XFS 启用了 user_xattr(扩展用户属性) 和 acl( +POSIX 访问控制列表)来作为默认的挂载选项,而不像 ext3 或 ext4(对于 RHEL 7 来说, ext2 已过时),这意味着当挂载一个 XFS 文件系统时,你不必显式地在命令行或 /etc/fstab 中指定这些选项(假如你想在后一种情况下禁用这些选项,你必须显式地使用 no_acl 和 no_user_xattr)。 + +请记住扩展用户属性可以被指定到文件和目录中来存储任意的额外信息如 mime 类型,字符集或文件的编码,而用户属性中的访问权限由一般的文件权限位来定义。 + +#### 访问控制列表 #### + +作为一名系统管理员,无论你是新手还是专家,你一定非常熟悉与文件和目录有关的常规访问权限,这些权限为所有者,所有组和"世界"(所有的其他人)指定了特定的权限(可读,可写及可执行)。但如若你需要稍微更新你的记忆,请随意参考 [RHCSA 系列的 Part 3][3]. + +但是,由于标准的 `ugo/rwx` 集合并不允许为不同的用户配置不同的权限,所以 ACL 便被引入了进来,为的是为文件和目录定义更加详细的访问权限,而不仅仅是这些特别指定的特定权限。 + +事实上, ACL 定义的权限是由文件权限位所特别指定的权限的一个超集。下面就让我们看看这个转换是如何在真实世界中被应用的吧。 + +1. 存在两种类型的 ACL:访问 ACL,可被应用到一个特定的文件或目录上,以及默认 ACL,只可被应用到一个目录上。假如目录中的文件没有 ACL,则它们将继承它们的父目录的默认 ACL 。 + +2. 从一开始, ACL 就可以为每个用户,每个组或不在文件所属组中的用户配置相应的权限。 + +3. 
ACL 可使用 `setfacl` 来设置(和移除),可相应地使用 -m 或 -x 选项。 + +例如,让我们创建一个名为 tecmint 的组,并将用户 johndoe 和 davenull 加入该组: + + # groupadd tecmint + # useradd johndoe + # useradd davenull + # usermod -a -G tecmint johndoe + # usermod -a -G tecmint davenull + +并且让我们检验这两个用户都已属于追加的组 tecmint: + + # id johndoe + # id davenull + +![检验用户](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Users.png) + +检验用户 + +现在,我们在 /mnt 下创建一个名为 playground 的目录,并在该目录下创建一个名为 testfile.txt 的文件。我们将设定该文件的属组为 tecmint,并更改它的默认 ugo/rwx 权限为 770(即赋予该文件的属主和属组可读,可写和可执行权限): + + # mkdir /mnt/playground + # touch /mnt/playground/testfile.txt + # chmod 770 /mnt/playground/testfile.txt + +接着,依次切换为 johndoe 和 davenull 用户,并在文件中写入一些信息: + + echo "My name is John Doe" > /mnt/playground/testfile.txt + echo "My name is Dave Null" >> /mnt/playground/testfile.txt + +到目前为止,一切正常。现在我们让用户 gacanepa 来向该文件执行写操作 – 则写操作将会失败,这是可以预料的。 + +但实际上我们需要用户 gacanepa(TA 不是组 tecmint 的成员)在文件 /mnt/playground/testfile.txt 上有写权限,那又该怎么办呢?首先映入你脑海里的可能是将该用户添加到组 tecmint 中。但那将使得他在所有该组具有写权限位的文件上均拥有写权限,但我们并不想这样,我们只想他能够在文件 /mnt/playground/testfile.txt 上有写权限。 + + # touch /mnt/playground/testfile.txt + # chown :tecmint /mnt/playground/testfile.txt + # chmod 777 /mnt/playground/testfile.txt + # su johndoe + $ echo "My name is John Doe" > /mnt/playground/testfile.txt + $ su davenull + $ echo "My name is Dave Null" >> /mnt/playground/testfile.txt + $ su gacanepa + $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt + +![管理用户的权限](http://www.tecmint.com/wp-content/uploads/2015/04/User-Permissions.png) + +管理用户的权限 + +现在,让我们给用户 gacanepa 在 /mnt/playground/testfile.txt 文件上有读和写权限。 + +以 root 的身份运行如下命令: + + # setfacl -R -m u:gacanepa:rwx /mnt/playground + +则你将成功地添加一条 ACL,运行 gacanepa 对那个测试文件可写。然后切换为 gacanepa 用户,并再次尝试向该文件写入一些信息: + + $ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt + +要观察一个特定的文件或目录的 ACL,可以使用 `getfacl` 命令: + + # getfacl /mnt/playground/testfile.txt + +![检查文件的 ACL](http://www.tecmint.com/wp-content/uploads/2015/04/Check-ACL-of-File.png) + +检查文件的 ACL + +要为目录设定默认 ACL(它的内容将被该目录下的文件继承,除非另外被覆写),在规则前添加 `d:`并特别指定一个目录名,而不是文件名: + + # setfacl -m d:o:r /mnt/playground + +上面的 ACL 将允许不在属组中的用户对目录 /mnt/playground 中的内容有读权限。请注意观察这次更改前后 +`getfacl /mnt/playground` 的输出结果的不同: + +![在 Linux 中设定默认 ACL](http://www.tecmint.com/wp-content/uploads/2015/04/Set-Default-ACL-in-Linux.png) + +在 Linux 中设定默认 ACL + +[在官方的 RHEL 7 存储管理指导手册的第 20 章][3] 中提供了更多有关 ACL 的例子,我极力推荐你看一看它并将它放在身边作为参考。 + +#### 挂载 NFS 网络共享 #### + +要显示你服务器上可用的 NFS 共享的列表,你可以使用带有 -e 选项的 `showmount` 命令,再跟上机器的名称或它的 IP 地址。这个工具包含在 `nfs-utils` 软件包中: + + # yum update && yum install nfs-utils + +接着运行: + + # showmount -e 192.168.0.10 + +则你将得到一个在 192.168.0.10 上可用的 NFS 共享的列表: + +![检查可用的 NFS 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Mount-NFS-Shares.png) + +检查可用的 NFS 共享 + +要按照需求在本地客户端上使用命令行来挂载 NFS 网络共享,可使用下面的语法: + + # mount -t nfs -o [options] remote_host:/remote/directory /local/directory + +其中,在我们的例子中,对应为: + + # mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs + +若你得到如下的错误信息:“Job for rpc-statd.service failed. 
See “systemctl status rpc-statd.service”及“journalctl -xn” for details.”,请确保 `rpcbind` 服务被启用且已在你的系统中启动了。 + + # systemctl enable rpcbind.socket + # systemctl restart rpcbind.service + +接着重启。这就应该达到了上面的目的,且你将能够像先前解释的那样挂载你的 NFS 共享了。若你需要在系统启动时自动挂载 NFS 共享,可以向 /etc/fstab 文件添加一个有效的条目: + + remote_host:/remote/directory /local/directory nfs options 0 0 + +上面的变量 remote_host, /remote/directory, /local/directory 和 options(可选) 和在命令行中手动挂载一个 NFS 共享时使用的一样。按照我们前面的例子,对应为: + + 192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0 + +#### 挂载 CIFS (Samba) 网络共享 #### + +Samba 代表一个特别的工具,使得在由 *nix 和 Windows 机器组成的网络中进行网络共享成为可能。要显示可用的 Samba 共享,可使用带有 -L 选项的 smbclient 命令,再跟上机器的名称或它的 IP 地址。这个工具包含在 samba_client 软件包中: + +你将被提示在远程主机上输入 root 用户的密码: + + # smbclient -L 192.168.0.10 + +![检查 Samba 共享](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Samba-Shares.png) + +检查 Samba 共享 + +要在本地客户端上挂载 Samba 网络共享,你需要已安装好 cifs-utils 软件包: + + # yum update && yum install cifs-utils + +然后在命令行中使用下面的语法: + + # mount -t cifs -o credentials=/path/to/credentials/file //remote_host/samba_share /local/directory + +其中,在我们的例子中,对应为: + + # mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba + +其中 `smbcredentials` + + username=gacanepa + password=XXXXXX + +是一个位于 root 用户的家目录(/root/) 中的隐藏文件,其权限被设置为 600,所以除了该文件的属主外,其他人对该文件既不可读也不可写。 + +请注意 samba_share 是 Samba 分享的名称,由上面展示的 `smbclient -L remote_host` 所返回。 + +现在,若你需要在系统启动时自动地使得 Samba 分享可用,可以向 /etc/fstab 文件添加一个像下面这样的有效条目: + + //remote_host:/samba_share /local/directory cifs options 0 0 + +上面的变量 remote_host, /remote/directory, /local/directory 和 options(可选) 和在命令行中手动挂载一个 Samba 共享时使用的一样。按照我们前面的例子中所给的定义,对应为: + + //192.168.0.10/gacanepa /mnt/samba cifs credentials=/root/smbcredentials,defaults 0 0 + +### 结论 ### + +在这篇文章中,我们已经解释了如何在 Linux 中设置 ACL,并讨论了如何在一个 RHEL 7 客户端上挂载 CIFS 和 NFS 网络共享。 + +我建议你去练习这些概念,甚至混合使用它们(试着在一个挂载的网络共享上设置 ACL),直至你感觉舒适。假如你有问题或评论,请随时随意地使用下面的评论框来联系我们。另外,请随意通过你的社交网络分享这篇文章。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-exam-configure-acls-and-mount-nfs-samba-shares/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/rhcsa-exam-create-format-resize-delete-and-encrypt-partitions-in-linux/ +[2]:http://www.tecmint.com/rhcsa-exam-manage-users-and-groups/ +[3]:https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Storage_Administration_Guide/ch-acls.html \ No newline at end of file diff --git a/translated/tech/RHCSA/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md b/translated/tech/RHCSA/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md new file mode 100644 index 0000000000..82245f33b1 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 08--Securing SSH, Setting Hostname and Enabling Network Services.md @@ -0,0 +1,215 @@ +RHCSA 系列:安全 SSH,设定主机名及开启网络服务 – Part 8 +================================================================================ +作为一名系统管理员,你将经常使用一个终端模拟器来登陆到一个远程的系统中,执行一系列的管理任务。你将很少有机会坐在一个真实的(物理)终端前,所以你需要设定好一种方法来使得你可以登陆到你被要求去管理的那台远程主机上。 + +事实上,当你必须坐在一台物理终端前的时候,就可能是你登陆到该主机的最后一种方法。基于安全原因,使用 Telnet 来达到以上目的并不是一个好主意,因为穿行在线缆上的流量并没有被加密,它们以文本方式在传送。 + +另外,在这篇文章中,我们也将复习如何配置网络服务来使得它在开机时被自动开启,并学习如何设置网络和静态或动态地解析主机名。 + +![RHCSA: 安全 SSH 
和开启网络服务](http://www.tecmint.com/wp-content/uploads/2015/05/Secure-SSH-Server-and-Enable-Network-Services.png) + +RHCSA: 安全 SSH 和开启网络服务 – Part 8 + +### 安装并确保 SSH 通信安全 ### + +对于你来说,要能够使用 SSH 远程登陆到一个 RHEL 7 机子,你必须安装 `openssh`,`openssh-clients` 和 `openssh-servers` 软件包。下面的命令不仅将安装远程登陆程序,也会安装安全的文件传输工具以及远程文件复制程序: + + # yum update && yum install openssh openssh-clients openssh-servers + +注意,安装上服务器所需的相应软件包是一个不错的主意,因为或许在某个时刻,你想使用同一个机子来作为客户端和服务器。 + +在安装完成后,如若你想安全地访问你的 SSH 服务器,你还需要考虑一些基本的事情。下面的设定应该在文件 `/etc/ssh/sshd_config` 中得以呈现。 + +1. 更改 sshd 守护进程的监听端口,从 22(默认的端口值)改为一个更高的端口值(2000 或更大),但首先要确保所选的端口没有被占用。 + +例如,让我们假设你选择了端口 2500 。使用 [netstat][1] 来检查所选的端口是否被占用: + + # netstat -npltu | grep 2500 + +假如 netstat 没有返回任何信息,则你可以安全地为 sshd 使用端口 2500,并且你应该在上面的配置文件中更改端口的设定,具体如下: + + Port 2500 + +2. 只允许协议 2: + + Protocol 2 + +3. 配置验证超时的时间为 2 分钟,不允许以 root 身份登陆,并将允许通过 ssh 登陆的人数限制到最小: + + LoginGraceTime 2m + PermitRootLogin no + AllowUsers gacanepa + +4. 假如可能,使用基于公钥的验证方式而不是使用密码: + + PasswordAuthentication no + RSAAuthentication yes + PubkeyAuthentication yes + +这假设了你已经在你的客户端机子上创建了带有你的用户名的一个密钥对,并将公钥复制到了你的服务器上。 + +- [开启 SSH 无密码登陆][2] + +### 配置网络和名称的解析 ### + +1. 每个系统管理员应该对下面这个系统配置文件非常熟悉: + +- /etc/hosts 被用来在小型网络中解析名称 <---> IP 地址。 + +文件 `/etc/hosts` 中的每一行拥有如下的结构: + + IP address - Hostname - FQDN + +例如, + + 192.168.0.10 laptop laptop.gabrielcanepa.com.ar + +2. `/etc/resolv.conf` 特别指定 DNS 服务器的 IP 地址和搜索域,它被用来在没有提供域名后缀时,将一个给定的查询名称对应为一个全称域名。 + +在正常情况下,你不必编辑这个文件,因为它是由系统管理的。然而,若你非要改变 DNS 服务器的 IP 地址,建议你在该文件的每一行中,都应该遵循下面的结构: + + nameserver - IP address + +例如, + + nameserver 8.8.8.8 + +3. `/etc/host.conf` 特别指定在一个网络中主机名被解析的方法和顺序。换句话说,告诉名称解析器使用哪个服务,并以什么顺序来使用。 + +尽管这个文件由几个选项,但最为常见和基本的设置包含如下的一行: + + order bind,hosts + +它意味着解析器应该首先查看 `resolv.conf` 中特别指定的域名服务器,然后到 `/etc/hosts` 文件中查找解析的名称。 + +4. `/etc/sysconfig/network` 包含了所有网络接口的路由和全局主机信息。下面的值可能会被使用: + + NETWORKING=yes|no + HOSTNAME=value + +其中的 value 应该是全称域名(FQDN)。 + + GATEWAY=XXX.XXX.XXX.XXX + +其中的 XXX.XXX.XXX.XXX 是网关的 IP 地址。 + + GATEWAYDEV=value + +在一个带有多个网卡的机器中, value 为网关设备名,例如 enp0s3。 + +5. 位于 `/etc/sysconfig/network-scripts` 中的文件(网络适配器配置文件)。 + +在上面提到的目录中,你将找到几个被命名为如下格式的文本文件。 + + ifcfg-name + +其中 name 为网卡的名称,由 `ip link show` 返回: + +![检查网络连接状态](http://www.tecmint.com/wp-content/uploads/2015/05/Check-IP-Address.png) + +检查网络连接状态 + +例如: + +![网络文件](http://www.tecmint.com/wp-content/uploads/2015/05/Network-Files.png) + +网络文件 + +除了环回接口,你还可以为你的网卡进行一个相似的配置。注意,假如设定了某些变量,它们将为这个特别的接口,覆盖掉 `/etc/sysconfig/network` 中定义的值。在这篇文章中,为了能够解释清楚,每行都被加上了注释,但在实际的文件中,你应该避免加上注释: + + HWADDR=08:00:27:4E:59:37 # The MAC address of the NIC + TYPE=Ethernet # Type of connection + BOOTPROTO=static # This indicates that this NIC has been assigned a static IP. If this variable was set to dhcp, the NIC will be assigned an IP address by a DHCP server and thus the next two lines should not be present in that case. + IPADDR=192.168.0.18 + NETMASK=255.255.255.0 + GATEWAY=192.168.0.1 + NM_CONTROLLED=no # Should be added to the Ethernet interface to prevent NetworkManager from changing the file. 
+ NAME=enp0s3 + UUID=14033805-98ef-4049-bc7b-d4bea76ed2eb + ONBOOT=yes # The operating system should bring up this NIC during boot + +### 设定主机名 ### + +在 RHEL 7 中, `hostnamectl` 命令被同时用来查询和设定系统的主机名。 + +要展示当前的主机名,输入: + + # hostnamectl status + +![在RHEL 7 中检查系统的主机名](http://www.tecmint.com/wp-content/uploads/2015/05/Check-System-hostname.png) + +检查系统的主机名 + +要更改主机名,使用 + + # hostnamectl set-hostname [new hostname] + +例如, + + # hostnamectl set-hostname cinderella + +要想使得更改生效,你需要重启 hostnamed 守护进程(这样你就不必因为要应用更改而登出系统并再登陆系统): + + # systemctl restart systemd-hostnamed + +![在 RHEL7 中设定系统主机名](http://www.tecmint.com/wp-content/uploads/2015/05/Set-System-Hostname.png) + +设定系统主机名 + +另外, RHEL 7 还包含 `nmcli` 工具,它可被用来达到相同的目的。要展示主机名,运行: + + # nmcli general hostname + +且要改变主机名,则运行: + + # nmcli general hostname [new hostname] + +例如, + + # nmcli general hostname rhel7 + +![使用 nmcli 命令来设定主机名](http://www.tecmint.com/wp-content/uploads/2015/05/nmcli-command.png) + +使用 nmcli 命令来设定主机名 + +### 在开机时开启网络服务 ### + +作为本文的最后部分,就让我们看看如何确保网络服务在开机时被自动开启。简单来说,这个可通过创建符号链接到某些由服务的配置文件中的 [Install] 小节中指定的文件来实现。 + +以 firewalld(/usr/lib/systemd/system/firewalld.service) 为例: + + [Install] + WantedBy=basic.target + Alias=dbus-org.fedoraproject.FirewallD1.service + +要开启该服务,运行: + + # systemctl enable firewalld + +另一方面,要禁用 firewalld,则需要移除符号链接: + + # systemctl disable firewalld + +![在开机时开启服务](http://www.tecmint.com/wp-content/uploads/2015/05/Enable-Service-at-System-Boot.png) + +在开机时开启服务 + +### 总结 ### + +在这篇文章中,我们总结了如何安装 SSH 及使用它安全地连接到一个 RHEL 服务器,如何改变主机名,并在最后如何确保在系统启动时开启服务。假如你注意到某个服务启动失败,你可以使用 `systemctl status -l [service]` 和 `journalctl -xn` 来进行排错。 + +请随意使用下面的评论框来让我们知晓你对本文的看法。提问也同样欢迎。我们期待着你的反馈! + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-series-secure-ssh-set-hostname-enable-network-services-in-rhel-7/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://www.tecmint.com/20-netstat-commands-for-linux-network-management/ +[2]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/ \ No newline at end of file diff --git a/translated/tech/RHCSA/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md b/translated/tech/RHCSA/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md new file mode 100644 index 0000000000..190c32ece5 --- /dev/null +++ b/translated/tech/RHCSA/RHCSA Series--Part 09--Installing, Configuring and Securing a Web and FTP Server.md @@ -0,0 +1,175 @@ +RHCSA 系列: 安装,配置及加固一个 Web 和 FTP 服务器 – Part 9 +================================================================================ +Web 服务器(也被称为 HTTP 服务器)是在网络中将内容(最为常见的是网页,但也支持其他类型的文件)进行处理并传递给客户端的服务。 + +FTP 服务器是最为古老且最常使用的资源之一(即便到今天也是这样),在身份认证不是必须的情况下,它可使得在一个网络里文件对于客户端可用,因为 FTP 使用没有加密的用户名和密码。 + +在 RHEL 7 中可用的 web 服务器是版本号为 2.4 的 Apache HTTP 服务器。至于 FTP 服务器,我们将使用 Very Secure Ftp Daemon (又名 vsftpd) 来建立用 TLS 加固的连接。 + +![配置和加固 Apache 和 FTP 服务器](http://www.tecmint.com/wp-content/uploads/2015/05/Install-Configure-Secure-Apache-FTP-Server.png) + +RHCSA: 安装,配置及加固 Apache 和 FTP 服务器 – Part 9 + +在这篇文章中,我们将解释如何在 RHEL 7 中安装,配置和加固 web 和 FTP 服务器。 + +### 安装 Apache 和 FTP 服务器 ### + +在本指导中,我们将使用一个静态 IP 地址为 192.168.0.18/24 的 RHEL 7 服务器。为了安装 Apache 和 VSFTPD,运行下面的命令: + + # yum update && yum install httpd vsftpd + 
+当安装完成后,这两个服务在开始时是默认被禁用的,所以我们需要暂时手动开启它们并让它们在下一次启动时自动地开启它们: + + # systemctl start httpd + # systemctl enable httpd + # systemctl start vsftpd + # systemctl enable vsftpd + +另外,我们必须打开 80 和 21 端口,它们分别是 web 和 ftp 守护进程监听的端口,为的是允许从外面访问这些服务: + + # firewall-cmd --zone=public --add-port=80/tcp --permanent + # firewall-cmd --zone=public --add-service=ftp --permanent + # firewall-cmd --reload + +为了确认 web 服务工作正常,打开你的浏览器并输入服务器的 IP,则你应该可以看到如下的测试页面: + +![确认 Apache Web 服务器](http://www.tecmint.com/wp-content/uploads/2015/05/Confirm-Apache-Web-Server.png) + +确认 Apache Web 服务器 + +对于 ftp 服务器,在确保它如期望中的那样工作之前,我们必须进一步地配置它,我们将在几分钟后来做这件事。 + +### 配置并加固 Apache Web 服务器 ### + +Apache 的主要配置文件位于 `/etc/httpd/conf/httpd.conf` 中,但它可能依赖 `/etc/httpd/conf.d` 中的其他文件。 + +尽管默认的配置对于大多数的情形是充分的,熟悉描述在 [官方文档][1] 中的所有可用选项是一个不错的主意。 + +同往常一样,在编辑主配置文件前先做一个备份: + + # cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.$(date +%Y%m%d) + +然后用你钟爱的文本编辑器打开它,并查找下面这些变量: + +- ServerRoot: 服务器的配置,错误和日志文件保存的目录。 +- Listen: 通知 Apache 去监听特定的 IP 地址或端口。 +- Include: 允许包含其他配置文件,这个必须存在,否则,服务器将会崩溃。它恰好与 IncludeOptional 相反,假如特定的配置文件不存在,它将静默地忽略掉它们。 +- User 和 Group: 运行 httpd 服务的用户/组的名称。 +- DocumentRoot: Apache 为你的文档服务的目录。默认情况下,所有的请求将在这个目录中被获取,但符号链接和别名可能会被用于指向其他位置。 +- ServerName: 这个指令将设定用于识别它自身的主机名(或 IP 地址)和端口。 + +安全措施的第一步将包含创建一个特定的用户和组(如 tecmint/tecmint)来运行 web 服务器以及更改默认的端口为一个更高的端口(在这个例子中为 9000): + + ServerRoot "/etc/httpd" + Listen 192.168.0.18:9000 + User tecmint + Group tecmint + DocumentRoot "/var/www/html" + ServerName 192.168.0.18:9000 + +你可以使用下面的命令来测试配置文件: + + # apachectl configtest + +假如一切 OK,接着重启 web 服务器。 + + # systemctl restart httpd + +并别忘了在防火墙中开启新的端口(和禁用旧的端口): + + + # firewall-cmd --zone=public --remove-port=80/tcp --permanent + # firewall-cmd --zone=public --add-port=9000/tcp --permanent + # firewall-cmd --reload + +请注意,由于 SELinux 的策略,你只可使用如下命令所返回的端口来分配给 web 服务器。 + + # semanage port -l | grep -w '^http_port_t' + +假如你想使用另一个端口(如 TCP 端口 8100)来给 httpd 服务,你必须将它加到 SELinux 的端口上下文: + + # semanage port -a -t http_port_t -p tcp 8100 + +![添加 Apache 端口到 SELinux 策略](http://www.tecmint.com/wp-content/uploads/2015/05/Add-Apache-Port-to-SELinux-Policies.png) + +添加 Apache 端口到 SELinux 策略 + +为了进一步加固你安装的 Apache,请遵循以下步骤: + +1. 运行 Apache 的用户不应该拥有访问 shell 的能力: + + # usermod -s /sbin/nologin tecmint + +2. 禁用目录列表功能,为的是阻止浏览器展示一个未包含 index.html 文件的目录里的内容。 + +编辑 `/etc/httpd/conf/httpd.conf` (和虚拟主机的配置文件,假如有的话),并确保 Options 指令在顶级和目录块级别中(注:感觉这里我的翻译不对)都被设置为 None: + + Options None + +3. 
在 HTTP 回应中隐藏有关 web 服务器和操作系统的信息。像下面这样编辑文件 `/etc/httpd/conf/httpd.conf`: + + ServerTokens Prod + ServerSignature Off + +现在,你已经做好了从 `/var/www/html` 目录开始服务内容的准备了。 + +### 配置并加固 FTP 服务器 ### + +和 Apache 的情形类似, Vsftpd 的主配置文件 `(/etc/vsftpd/vsftpd.conf)` 带有详细的注释,且虽然对于大多数的应用实例,默认的配置应该足够了,但为了更有效率地操作 ftp 服务器,你应该开始熟悉相关的文档和 man 页 `(man vsftpd.conf)`(对于这点,再多的强调也不为过!)。 + +在我们的示例中,使用了这些指令: + + anonymous_enable=NO + local_enable=YES + write_enable=YES + local_umask=022 + dirmessage_enable=YES + xferlog_enable=YES + connect_from_port_20=YES + xferlog_std_format=YES + chroot_local_user=YES + allow_writeable_chroot=YES + listen=NO + listen_ipv6=YES + pam_service_name=vsftpd + userlist_enable=YES + tcp_wrappers=YES + +通过使用 `chroot_local_user=YES`,(默认情况下)本地用户在登陆之后,将马上被置于一个位于用户家目录的 chroot 环境中(注:这里的翻译也不准确)。这意味着本地用户将不能访问除其家目录之外的任何文件。 + +最后,为了让 ftp 能够在用户的家目录中读取文件,设置如下的 SELinux 布尔值: + + # setsebool -P ftp_home_dir on + +现在,你可以使用一个客户端例如 Filezilla 来连接一个 ftp 服务器: + +![查看 FTP 连接](http://www.tecmint.com/wp-content/uploads/2015/05/Check-FTP-Connection.png) + +查看 FTP 连接 + +注意, `/var/log/xferlog` 日志将会记录下载和上传的情况,这与上图的目录列表一致: + +![监视 FTP 的下载和上传情况](http://www.tecmint.com/wp-content/uploads/2015/05/Monitor-FTP-Download-Upload.png) + +监视 FTP 的下载和上传情况 + +另外请参考: [在 Linux 系统中使用 Trickle 来限制应用使用的 FTP 网络带宽][2] + +### 总结 ### + +在本教程中,我们解释了如何设置 web 和 ftp 服务器。由于这个主题的广泛性,涵盖这些话题的所有方面是不可能的(如虚拟网络主机)。因此,我推荐你也阅读这个网站中有关 [Apache][3] 的其他卓越的文章。 + +-------------------------------------------------------------------------------- + +via: http://www.tecmint.com/rhcsa-series-install-and-secure-apache-web-server-and-ftp-in-rhel/ + +作者:[Gabriel Cánepa][a] +译者:[FSSlc](https://github.com/FSSlc) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://www.tecmint.com/author/gacanepa/ +[1]:http://httpd.apache.org/docs/2.4/ +[2]:http://www.tecmint.com/manage-and-limit-downloadupload-bandwidth-with-trickle-in-linux/ +[3]:http://www.google.com/cse?cx=partner-pub-2601749019656699:2173448976&ie=UTF-8&q=virtual+hosts&sa=Search&gws_rd=cr&ei=Dy9EVbb0IdHisASnroG4Bw#gsc.tab=0&gsc.q=apache