Merge pull request #6 from LCTT/master

update
This commit is contained in:
cvsher 2015-05-13 13:17:35 +08:00
commit 8e8c67e78e
70 changed files with 4530 additions and 2397 deletions

View File

@ -1,7 +1,7 @@
既然float不能表示所有的int那为什么在类型转换时C++将int转换成float
---------
=============
#问题
###问题
代码如下:
@ -13,7 +13,7 @@ if (i == f) // 执行某段代码
编译器会将i转换成float类型然后比较这两个float的大小。但是float能够表示所有的int吗为什么没有将int和float都转换成double类型进行比较呢
#回答
###回答
在整型数的演变中,当`int`变成`unsigned`时,会丢掉负数部分(有趣的是,这样的话,`0u < -1`就是对的了)。
@ -32,11 +32,11 @@ if((double) i < (double) f)
顺便提一下,在这个问题中有趣的是,`unsigned`的优先级高于`int`,所以把`int`和`unsigned`进行比较时最终进行的是unsigned类型的比较开头提到的`0u < -1`就是这个道理。我猜测这可能是在早些时候计算机发展初期当时的人们认为`unsigned`比`int`在所表示的数值范围上受到的限制更小:现在还不需要符号位,所以可以使用额外的位来表示更大的数值范围。如果你觉得`int`可能会溢出那么就使用unsigned好了在使用16位表示的int时这个担心会更明显
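下面用一小段脚本直观验证float表示不下所有int这一点借助 Python 的 struct 模块把整数按 32 位单精度 float 打包再解包,观察精度丢失(仅为示意,假设系统装有 python3

```shell
# 16777217 = 2^24 + 1 是第一个无法被 32 位 float 精确表示的正整数
python3 - <<'EOF'
import struct
for i in (16777215, 16777216, 16777217):
    f = struct.unpack('f', struct.pack('f', i))[0]
    print(i, '->', int(f), '相等' if f == i else '不相等')
EOF
```

输出中 16777217 会变成 16777216这正对应了回答里的结论int 转成 float 可能丢失精度而32位的int转成 double 则总能精确表示。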
----
via:[stackoverflow](http://stackoverflow.com/questions/28010565/why-does-c-promote-an-int-to-a-float-when-a-float-cannot-represent-all-int-val/28011249#28011249)
via: [stackoverflow](http://stackoverflow.com/questions/28010565/why-does-c-promote-an-int-to-a-float-when-a-float-cannot-represent-all-int-val/28011249#28011249)
作者:[wintermute][a]
译者:[KayGuoWhu](https://github.com/KayGuoWhu)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,65 @@
iptraf一个实用的TCP/UDP网络监控工具
================================================================================
[iptraf][1]是一个基于ncurses的IP局域网监控器用来生成包括TCP信息、UDP计数、ICMP和OSPF信息、以太网负载信息、节点状态信息、IP校验和错误等等统计数据。
它基于ncurses的用户界面可以使用户免于记忆繁琐的命令行开关。
### 特征 ###
- IP流量监控器用来显示你的网络中的IP流量变化信息。包括TCP标识信息、包以及字节计数ICMP细节、OSPF包类型。
- 简单的和详细的接口统计数据包括IP、TCP、UDP、ICMP、非IP以及其他的IP包计数、IP校验和错误接口活动、包大小计数。
- TCP和UDP服务监控器能够显示常见的TCP和UDP应用端口上发送的和接收的包的数量。
- 局域网数据统计模块,能够发现在线的主机,并显示其上的数据活动统计信息。
- TCP、UDP、及其他协议的显示过滤器允许你只查看感兴趣的流量。
- 日志功能。
- 支持以太网、FDDI、ISDN、SLIP、PPP以及本地回环接口类型。
- 利用Linux内核内置的原始套接字接口这使得iptraf能够用于内核所支持的各种网卡。
- 全屏,菜单式驱动的操作。
### 安装方法 ###
**Ubuntu 及其衍生版本**
sudo apt-get install iptraf
**Arch Linux 及其衍生版本**
sudo pacman -S iptraf
**Fedora 及其衍生版本**
sudo yum install iptraf
### 用法 ###
如果不加任何命令行选项地运行**iptraf**命令,程序将进入一种交互模式,通过主菜单可以访问多种功能。
![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/01/iptraf_1.png)
简易的上手导航菜单。
![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/01/iptraf_2.png)
选择要监控的接口。
![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/01/iptraf_3.png)
接口**ppp0**处的流量。
![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/01/iptraf_4.png)
试试吧!
--------------------------------------------------------------------------------
via: http://www.unixmen.com/iptraf-tcpudp-network-monitoring-utility/
作者:[Enock Seth Nyamador][a]
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/seth/
[1]:http://iptraf.seul.org/about.html

View File

@ -1,12 +1,14 @@
自动化部署基于Docker的Rails应用
================================================================================
![](http://cocoahunter.com/content/images/2015/01/docker3.jpeg)
[TL;DR] 这是系列文章的第三篇讲述了我的公司是如何将基础设施从PaaS移植到Docker上的。
- [第一部分][1]:谈论了我接触Docker之前的经历
- [第二部分][2]:一步步搭建一个安全而又私有的registry。
----------
在系列文章的最后一篇里,我们将用一个实例来学习如何自动化整个部署过程。
### 基本的Rails应用程序###
@ -18,99 +20,97 @@
$ rvm use 2.2.0
    $ rails new docker-test && cd docker-test
创建一个基控制器:
创建一个基本的控制器:
$ rails g controller welcome index
……然后编辑 `routes.rb` ,以便让工程的根指向我们新创建的welcome#index方法(这句话理解不太理解)
……然后编辑 `routes.rb`以便让该项目的根指向我们新创建的welcome#index方法
root 'welcome#index'
在终端运行 `rails s` ,然后打开浏览器,登录[http://localhost:3000][3],你会进入到索引界面当中。我们不准备给应用加上多么神奇的东西,这只是一个基础实例,用来验证当我们将要创建并部署容器的时候,一切运行正常。
在终端运行 `rails s` ,然后打开浏览器,登录[http://localhost:3000][3],你会进入到索引界面当中。我们不准备给应用加上多么神奇的东西,这只是一个基础实例,当我们将要创建并部署容器的时候,用它来验证一切是否运行正常。
### 安装webserver ###
我们打算使用Unicorn当做我们的webserver。在Gemfile中添加 `gem 'unicorn'``gem 'foreman'`然后将它bundle起来(运行 `bundle install`命令)。
在Rails应用启动的伺候需要配置Unicorn所以我们将一个**unicorn.rb**文件放在**config**目录下。[这里有一个Unicorn配置文件的例子][4]你可以直接复制粘贴Gist的内容。
启动Rails应用时需要先配置好Unicorn所以我们将一个**unicorn.rb**文件放在**config**目录下。[这里有一个Unicorn配置文件的例子][4]你可以直接复制粘贴Gist的内容。
Let's also add a Procfile with the following content inside the root of the project so that we will be able to start the app with foreman:
接下来在工程的根目录下添加一个Procfile以便可以使用foreman启动应用内容为下
接下来在项目的根目录下添加一个Procfile以便可以使用foreman启动应用内容为下
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
现在运行**foreman start**命令启动应用,一切都将正常运行,并且你将能够在[http://localhost:5000][5]上看到一个正在运行的应用。
### 创建一个Docker映像 ###
### 构建一个Docker镜像 ###
现在我们创建一个映像来运行我们的应用。在Rails工程的跟目录下,创建一个名为**Dockerfile**的文件,然后粘贴进以下内容:
现在我们构建一个镜像来运行我们的应用。在这个Rails项目的根目录下,创建一个名为**Dockerfile**的文件,然后粘贴进以下内容:
# Base image with ruby 2.2.0
# 基于镜像 ruby 2.2.0
FROM ruby:2.2.0
# Install required libraries and dependencies
# 安装所需的库和依赖
RUN apt-get update && apt-get install -qy nodejs postgresql-client sqlite3 --no-install-recommends && rm -rf /var/lib/apt/lists/*
# Set Rails version
# 设置 Rails 版本
ENV RAILS_VERSION 4.1.1
# Install Rails
# 安装 Rails
RUN gem install rails --version "$RAILS_VERSION"
# Create directory from where the code will run
# 创建代码所运行的目录
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Make webserver reachable to the outside world
# 使 webserver 可以在容器外面访问
EXPOSE 3000
# Set ENV variables
# 设置环境变量
ENV PORT=3000
# Start the web app
# 启动 web 应用
CMD ["foreman","start"]
# Install the necessary gems
# 安装所需的 gems
ADD Gemfile /usr/src/app/Gemfile
ADD Gemfile.lock /usr/src/app/Gemfile.lock
RUN bundle install --without development test
# Add rails project (from same dir as Dockerfile) to project directory
# 将 rails 项目(和 Dockerfile 同一个目录)添加到项目目录
ADD ./ /usr/src/app
# Run rake tasks
# 运行 rake 任务
RUN RAILS_ENV=production rake db:create db:migrate
使用提供的Dockerfile执行下列命令创建一个映像[1][7]:
使用上述Dockerfile执行下列命令创建一个镜像确保**boot2docker**已经启动并在运行当中):
$ docker build -t localhost:5000/your_username/docker-test .
然后,如果一切正常,长日志输出的最后一行应该类似于:
然后,如果一切正常,长长的日志输出的最后一行应该类似于:
Successfully built 82e48769506c
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
localhost:5000/your_username/docker-test latest 82e48769506c About a minute ago 884.2 MB
来运行容器吧
让我们运行一下容器试试
$ docker run -d -p 3000:3000 --name docker-test localhost:5000/your_username/docker-test
You should be able to reach your Rails app running inside the Docker container at port 3000 of your boot2docker VM[2][8] (in my case [http://192.168.59.103:3000][6]).
通过你的boot2docker虚拟机[2][8]的3000号端口我的是[http://192.168.59.103:3000][6]你可以观察你的Rails应用。
通过你的boot2docker虚拟机的3000号端口我的是[http://192.168.59.103:3000][6]你可以观察你的Rails应用。如果不清楚你的boot2docker虚拟地址输入` $ boot2docker ip`命令查看。)
### 使用shell脚本进行自动化部署 ###
前面的文章指文章1和文章2已经告诉了你如何将新创建的像推送到私有registry中并将其部署在服务器上所以我们跳过这一部分直接开始自动化进程。
前面的文章指文章1和文章2已经告诉了你如何将新创建的像推送到私有registry中并将其部署在服务器上所以我们跳过这一部分直接开始自动化进程。
我们将要定义3个shell脚本然后最后使用rake将它们捆绑在一起。
### 清除 ###
每当我们创建映像的时候,
每当我们创建镜像的时候,
- 停止并重启boot2docker
- 去除Docker孤儿映像(那些没有标签,并且不再被容器所使用的映像们)。
- 去除Docker孤儿镜像(那些没有标签,并且不再被容器所使用的镜像们)。
在你的工程根目录下的**clean.sh**文件中输入下列命令。
@ -132,22 +132,22 @@ You should be able to reach your Rails app running inside the Docker container a
$ chmod +x clean.sh
### 创建 ###
### 构建 ###
创建的过程基本上和之前我们所做的docker build内容相似。在工程的根目录下创建一个**build.sh**脚本,填写如下内容:
构建的过程基本上和之前我们所做的docker build内容相似。在工程的根目录下创建一个**build.sh**脚本,填写如下内容:
docker build -t localhost:5000/your_username/docker-test .
给脚本执行权限。
记得给脚本执行权限。
### 部署 ###
最后,创建一个**deploy.sh**脚本,在里面填进如下内容:
# Open SSH connection from boot2docker to private registry
# 打开 boot2docker 到私有注册库的 SSH 连接
boot2docker ssh "ssh -o 'StrictHostKeyChecking no' -i /Users/username/.ssh/id_boot2docker -N -L 5000:localhost:5000 root@your-registry.com &" &
# Wait to make sure the SSH tunnel is open before pushing...
# 在推送前先确认该 SSH 通道是开放的。
echo Waiting 5 seconds before pushing image.
echo 5...
@ -165,7 +165,7 @@ You should be able to reach your Rails app running inside the Docker container a
echo Starting push!
docker push localhost:5000/username/docker-test
如果你不理解这其中的含义,请先仔细阅读这部分[part 2][9]。
如果你不理解这其中的含义,请先仔细阅读[第二部分][2]。
给脚本加上执行权限。
@ -179,10 +179,9 @@ You should be able to reach your Rails app running inside the Docker container a
这一点都不费工夫,可是事实上开发者比你想象的要懒得多!那么咱们就索性再懒一点!
我们最后再把工作好好整理一番我们现在要将三个脚本捆绑在一起通过rake。
为了更简单一点你可以在工程根目录下已经存在的Rakefile中添加几行代码打开Rakefile文件——pun intended——把下列内容粘贴进去。
我们最后再把工作好好整理一番我们现在要将三个脚本通过rake捆绑在一起。
为了更简单一点你可以在工程根目录下已经存在的Rakefile中添加几行代码打开Rakefile文件把下列内容粘贴进去。
namespace :docker do
desc "Remove docker container"
@ -221,34 +220,27 @@ Deploy独立于buildbuild独立于clean。所以每次我们输入命令运
$ rake docker:deploy
接下来就是见证奇迹的时刻了。一旦映像文件被上传第一次可能花费较长的时间你就可以ssh登录产品服务器并且通过SSH管道把docker映像拉取到服务器并运行了。多么简单
接下来就是见证奇迹的时刻了。一旦镜像文件被上传第一次可能花费较长的时间你就可以ssh登录产品服务器并且通过SSH管道把docker镜像拉取到服务器并运行了。多么简单
也许你需要一段时间来习惯但是一旦成功它几乎与用Heroku部署一样简单。
备注像往常一样请让我了解到你的意见。我不敢保证这种方法是最好最快或者最安全的Docker开发的方法但是这东西对我们确实奏效。
- 确保**boot2docker**已经启动并在运行当中。
- 如果你不了解你的boot2docker虚拟地址输入` $ boot2docker ip`命令查看。
- 点击[这里][10]了解怎样搭建私有的registry。
--------------------------------------------------------------------------------
via: http://cocoahunter.com/2015/01/23/docker-3/
作者:[Michelangelo Chasseur][a]
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://cocoahunter.com/author/michelangelo/
[1]:http://cocoahunter.com/docker-1
[2]:http://cocoahunter.com/2015/01/23/docker-2/
[1]:https://linux.cn/article-5339-1.html
[2]:https://linux.cn/article-5379-1.html
[3]:http://localhost:3000/
[4]:https://gist.github.com/chasseurmic/0dad4d692ff499761b20
[5]:http://localhost:5000/
[6]:http://192.168.59.103:3000/
[7]:http://cocoahunter.com/2015/01/23/docker-3/#fn:1
[8]:http://cocoahunter.com/2015/01/23/docker-3/#fn:2
[9]:http://cocoahunter.com/2015/01/23/docker-2/
[10]:http://cocoahunter.com/2015/01/23/docker-2/

View File

@ -1,6 +1,6 @@
25个给git熟手的技巧
25个 Git 进阶技巧
================================================================================
我已经使用git差不多18个月了觉得自己对它应该已经非常了解。然后来自GitHub的[Scott Chacon][1]过来给LVS做培训[LVS是一个赌博软件供应商和开发商][2]从2013年开始的合同而我在第一天里就学到了很多。
我已经使用git差不多18个月了觉得自己对它应该已经非常了解。然后来自GitHub的[Scott Chacon][1]过来给LVS做培训[LVS是一个赌博软件供应商和开发商][2]从2013年开始的合同而我在第一天里就学到了很多。
作为一个对git感觉良好的人我觉得分享从社区里掌握的一些有价值的信息也许能帮某人解决问题而不用做太深入研究。
@ -15,21 +15,21 @@
#### 2. Git是基于指针的 ####
保存在git里的一切都是文件。当你创建一个提交的时候会建立一个包含你的提交信息和相关数据名字邮件地址日期/时间,前一个提交,等等)的文件,并把它链接到一个文件树中。文件树中包含了对象或其他树的列表。对象或容器是和本次提交相关的实际内容(也是一个文件,你想了解的话,尽管文件名并没有包含在对象里,而是在树中。所有这些文件都使用对象的SHA-1哈希值作为文件名。
保存在git里的一切都是文件。当你创建一个提交的时候会建立一个包含你的提交信息和相关数据名字邮件地址日期/时间,前一个提交,等等)的文件,并把它链接到一个树文件中。这个树文件中包含了对象或其他树的列表。这里提到的对象(或二进制大对象)是和本次提交相关的实际内容(它也是一个文件)。另外,文件名并没有包含在对象里,而是存储在树中。所有这些文件都使用对象的SHA-1哈希值作为文件名。
用这种方式,分支和标签就是简单的文件(基本上是这样),包含指向实际提交的SHA-1哈希值。使用这些索引会带来优秀的灵活性和速度比如创建一个新分支就只要简单地创建一个包含分支名字和所分出的那个提交的SHA-1索引的文件。当然你不需要自己做这些而只要使用Git命令行工具或者GUI但是实际上就是这么简单。
用这种方式,分支和标签就是简单的文件(基本上是这样),包含指向提交的SHA-1哈希值。使用这些索引会带来优秀的灵活性和速度比如创建一个新分支就是简单地用分支名字和所分出的那个提交的SHA-1索引来创建一个文件。当然你不需要自己做这些而只要使用Git命令行工具或者GUI但是实际上就是这么简单。
你也许听说过叫HEAD的索引。这只是简单的一个文件包含了你当前指向的那个提交的SHA-1索引值。如果你正在解决一次合并冲突然后看到了HEAD这并不是一个特别的分支或分值上一个必须的特殊点,只是标明你当前所在位置。
你也许听说过叫HEAD的索引。这只是简单的一个文件包含了你当前指向的那个提交的SHA-1索引值。如果你正在解决一次合并冲突然后看到了HEAD这并不是一个特别的分支或分支上的一个必需的特殊位置,只是标明你当前所在位置。
所有的分支指针都保存在.git/refs/heads里HEAD在.git/HEAD里而标签保存在.git/refs/tags里 - 自己可以放心地进去看看。
所有的分支指针都保存在.git/refs/heads里HEAD在.git/HEAD里而标签保存在.git/refs/tags里 - 自己可以随便进去看看。
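动手验证一下并不难:下面是一个最小的演示,在临时目录里新建仓库并提交一次,然后对比分支文件的内容和 git rev-parse 的输出(目录名、用户名均为示意):

```shell
# 在临时目录新建一个仓库并做一次空提交(示意)
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"
branch=$(git rev-parse --abbrev-ref HEAD)   # 当前分支名master 或 main视 git 版本而定
cat ".git/refs/heads/$branch"               # 分支只是一个文件内容就是提交的SHA-1
git rev-parse HEAD                          # 两条命令的输出应当一致
```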
#### 3. 两个父节点 - 当然 ####
#### 3. 两个爸爸(父节点) - 你没看错 ####
在历史中查看一个合并提交的信息时,你将看到有两个父节点(相对于一般工作上的常规提交的情况)。第一个父节点是你所在的分支,第二个是你合并过来的分支。
在历史中查看一个合并提交的信息时,你将看到有两个父节点(不同于工作副本上的常规提交的情况)。第一个父节点是你所在的分支,第二个是你合并过来的分支。
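可以用几条命令构造一个合并提交来验证这一点(示意;每次提交的内容只是一个占位文件):

```shell
cd "$(mktemp -d)"
git init -q
# 小助手:新建一个占位文件并提交(示意)
ci() { echo x > "$1"; git add "$1"; git -c user.name=demo -c user.email=demo@example.com commit -q -m "$1"; }
ci base
main=$(git rev-parse --abbrev-ref HEAD)
git checkout -q -b feature && ci feature-work
git checkout -q "$main" && ci main-work
git -c user.name=demo -c user.email=demo@example.com merge -q --no-ff -m merged feature
git log -1 --format=%P   # 输出两个以空格分隔的SHA-1第一个是你所在的分支第二个是被合并的分支
```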
#### 4. 合并冲突 ####
目前我相信你碰到过合并冲突并且解决过。通常是编辑一下文件,去掉<<<<====>>>>标志,保留需要留下的代码。有时能够看到这两个修改之前的代码会很不错,比如,在这两个分支上有冲突的改动之前。下面是一种方式:
目前我相信你碰到过合并冲突并且解决过。通常是编辑一下文件,去掉<<<<====>>>>标志,保留需要留下的代码。有时能够看到这两个修改之前的代码会很不错,比如,在这两个现在冲突的分支之前的改动。下面是一种方式:
$ git diff --merge
diff --cc dummy.rb
@ -45,14 +45,14 @@
end
end
如果是二进制文件,比较差异就没那么简单了...通常你要做的就是测试这个二进制文件的两个版本来决定保留哪个或者在二进制文件编辑器里手工复制冲突部分。从一个特定分支获取文件拷贝比如说你在合并master和feature123
如果是二进制文件,比较差异就没那么简单了...通常你要做的就是测试这个二进制文件的两个版本来决定保留哪个或者在二进制文件编辑器里手工复制冲突部分。从一个特定分支获取文件拷贝比如说你在合并master和feature123两个分支
$ git checkout master flash/foo.fla # 或者...
$ git checkout feature132 flash/foo.fla
$ # 然后...
$ git add flash/foo.fla
另一种方式是通过git输出文件 - 你可以输出到另外的文件名,然后再重命名正确的文件(当你决定了要用哪个)为正常的文件名:
另一种方式是通过git输出文件 - 你可以输出到另外的文件名,然后当你决定了要用哪个后,再将选定的正确文件复制为正常的文件名:
$ git show master:flash/foo.fla > master-foo.fla
$ git show feature132:flash/foo.fla > feature132-foo.fla
@ -71,7 +71,7 @@
#### 5. 远端服务器 ####
git的一个超强大的功能就是可以有不止一个远端服务器实际上你一直都在一个本地仓库上工作。你并不是一定都要有写权限你可以有多个可以读取的服务器用来合并他们的工作然后写入其他仓库。添加一个新的远端服务器很简单:
git的一个超强大的功能就是可以有不止一个远端服务器实际上你一直都在一个本地仓库上工作。你并不是一定都要有这些服务器的写权限,你可以有多个可以读取的服务器(用来合并他们的工作)然后写入到另外一个仓库。添加一个新的远端服务器很简单:
$ git remote add john git@github.com:johnsomeone/someproject.git
@ -87,10 +87,10 @@ git的一个超强大的功能就是可以有不止一个远端服务器
$ git diff master..john/master
你也可以查看不在远端分支的HEAD的改动
你也可以查看没有在远端分支上的HEAD的改动
$ git log remote/branch..
# 注意:..后面没有结束的refspec
# 注意:..后面没有结尾的refspec引用规格
#### 6. 标签 ####
@ -99,7 +99,7 @@ git的一个超强大的功能就是可以有不止一个远端服务器
建立这两种类型的标签都很简单(只有一个命令行开关的差异)
$ git tag to-be-tested
$ git tag -a v1.1.0 # 会提示输入标签信息
$ git tag -a v1.1.0 # 会提示输入标签信息
#### 7. 建立分支 ####
@ -108,7 +108,7 @@ git的一个超强大的功能就是可以有不止一个远端服务器
$ git branch feature132
$ git checkout feature132
当然,如果你确定自己要新建分支并直接切换过去,可以用一个命令实现:
当然,如果你确定要直接切换到新建的分支,可以用一个命令实现:
$ git checkout -b feature132
@ -117,20 +117,20 @@ git的一个超强大的功能就是可以有不止一个远端服务器
$ git checkout -b twitter-experiment feature132
$ git branch -d feature132
更新你也可以像Brian Palmer在原博客文章的评论里提出的只用“git branch”的-m开关在一个命令里实现像Mike提出的如果你只一个分支参数,就会重命名当前分支):
更新你也可以像Brian Palmer在原博客文章的评论里提出的只用“git branch”的-m开关在一个命令里实现像Mike提出的如果你只指定了一个分支参数,就会重命名当前分支):
$ git branch -m twitter-experiment
$ git branch -m feature132 twitter-experiment
#### 8. 合并分支 ####
在将来什么时候,你希望合并改动。有两种方式:
也许在将来的某个时候,你希望将改动合并。有两种方式:
$ git checkout master
$ git merge feature83 # 或者...
$ git rebase feature83
merge和rebase之间的差别是merge会尝试处理改动并建立一个新的混合了两者的提交。rebase会尝试把你从一个分支最后一次分离后的所有改动一个个加到该分支的HEAD上。不过在已经将分支推到远端服务器后不要再rebase了 - 这引起冲突/问题。
merge和rebase之间的差别是merge会尝试处理改动并建立一个新的混合了两者的提交。rebase会尝试把你从一个分支最后一次分离后的所有改动一个个加到该分支的HEAD上。不过在已经将分支推到远端服务器后不要再rebase了 - 这会引起冲突/问题。
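可以构造两个分叉的分支来直观对比rebase 之后历史是一条没有合并提交的直线(示意;分支名、文件名均为虚构):

```shell
cd "$(mktemp -d)"
git init -q
ci() { echo x > "$1"; git add "$1"; git -c user.name=demo -c user.email=demo@example.com commit -q -m "$1"; }
ci base
main=$(git rev-parse --abbrev-ref HEAD)
git checkout -q -b feature83 && ci feat
git checkout -q "$main" && ci main-work    # 两个分支就此分叉
git checkout -q feature83
git -c user.name=demo -c user.email=demo@example.com rebase -q "$main"
git log --format=%s    # 自上而下feat、main-work、base一条直线没有合并提交
```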
如果你不确定在哪些分支上还有独有的工作 - 所以你也不知道哪些分支需要合并而哪些可以删除git branch有两个开关可以帮你
@ -147,7 +147,7 @@ merge和rebase之间的差别是merge会尝试处理改动并建立一个新的
$ git push origin twitter-experiment:refs/heads/twitter-experiment
# origin是我们服务器的名字而twitter-experiment是分支名字
更新感谢Erlend在原博客文章上的评论 - 这个实际上和`git push origin twitter-experiment`效果一样,不过使用完整的语法,你可以在两者之间使用不同的分名(这样本地分支可以是`add-ssl-support`而远端是`issue-1723`)。
更新感谢Erlend在原博客文章上的评论 - 这个实际上和`git push origin twitter-experiment`效果一样,不过使用完整的语法,你可以在两者之间使用不同的分名(这样本地分支可以是`add-ssl-support`而远端是`issue-1723`)。
如果你想在远端服务器上删除一个分支(注意分支名前面的冒号):
@ -210,7 +210,7 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
这会让你进入一个基于菜单的交互式提示。你可以使用命令中的数字或高亮的字母如果你在终端里打开了高亮的话来进入相应的模式。然后就只是输入你希望操作的文件的数字了你可以使用这样的格式1或者1-4或2,4,7
如果你想进入补丁模式(交互式模式下p5你也可以直接进入
如果你想进入补丁模式(交互式模式下p5你也可以直接进入
$ git add -p
diff --git a/dummy.rb b/dummy.rb
@ -226,11 +226,11 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
end
Stage this hunk [y,n,q,a,d,/,e,?]?
你可以看到下方会有一些选项供选择用来添加该文件的这个改动该文件的所有改动,等等。使用‘?’命令可以详细解释这些选项。
你可以看到下方会有一些选项供选择用来添加该文件的这个改动该文件的所有改动,等等。使用‘?’命令可以详细解释这些选项。
#### 12. 从文件系统里保存/取回改动 ####
有些项目(比如git项目本身在git文件系统中直接保存额外文件而并没有将它们加入到版本控制中。
有些项目(比如Git项目本身在git文件系统中直接保存额外文件而并没有将它们加入到版本控制中。
让我们从在git中存储一个随机文件开始
@ -251,7 +251,7 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
#### 13. 查看日志 ####
如果不用git log来查看最近的提交你git用不了多久。不过,有一些技巧来更好地应用。比如,你可以使用下面的命令来查看每次提交的具体改动:
长时间使用 Git 的话不会没用过git log来查看最近的提交。不过,有一些技巧来更好地应用。比如,你可以使用下面的命令来查看每次提交的具体改动:
$ git log -p
@ -268,7 +268,7 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
#### 14. 搜索日志 ####
如果你想找特定者可以这样做:
如果你想找特定提交者可以这样做:
$ git log --author=Andy
@ -278,7 +278,7 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
$ git log --grep="Something in the message"
也有一个更强大的叫做pickaxe的命令用来查找删除或添加某个特定内容的提交(比如,该文件第一次出现或被删除)。这可以告诉你什么时候增加了一行(但这一行里的某个字符后面被改动过就不行了):
也有一个更强大的叫做pickaxe的命令用来查找包含了删除或添加的某个特定内容的提交(比如,该内容第一次出现或被删除)。这可以告诉你什么时候增加了一行(但这一行里的某个字符后面被改动过就不行了):
$ git log -S "TODO: Check for admin status"
@ -294,7 +294,7 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
$ git log --since=2.months.ago --until=1.day.ago
默认情况下会用OR来组合查询但你可以轻易地改为AND如果你有超过一条的标准
默认情况下会用OR来组合查询但你可以轻易地改为AND如果你有超过一条的查询标准)
$ git log --since=2.months.ago --until=1.day.ago --author=andy -S "something" --all-match
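下面的小例子可以直观看到 --author 和 -S 过滤器的效果(示意;作者名、文件名均为虚构):

```shell
cd "$(mktemp -d)"
git init -q
echo "TODO: Check for admin status" > app.txt
git add app.txt
git -c user.name=andy -c user.email=andy@example.com commit -q -m "add todo"
echo done > app.txt
git add app.txt
git -c user.name=bob -c user.email=bob@example.com commit -q -m "resolve todo"
git log -S "TODO: Check for admin status" --format="%an %s"               # 两个提交都命中:一次添加、一次删除
git log -S "TODO: Check for admin status" --author=andy --format="%an %s" # 只剩 andy 添加它的那次
```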
@ -310,7 +310,7 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
$ git show feature132@{yesterday} # 时间相关
$ git show feature132@{2.hours.ago} # 时间相关
注意和之前部分有些不同,末尾的插入符号意思是该提交的父节点 - 开始位置的插入符号意思是不在这个分支。
注意和之前部分有些不同,末尾的^的意思是该提交的父节点 - 开始位置的^的意思是不在这个分支。
#### 16. 选择范围 ####
@ -321,7 +321,7 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
你也可以省略[new]将使用当前的HEAD。
### Rewinding Time & Fixing Mistakes ###
### 时光回溯和后悔药 ###
#### 17. 重置改动 ####
@ -329,7 +329,7 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
$ git reset HEAD lib/foo.rb
通常会使用unstage的别名因为看上去有些不直观。
通常会使用unstage的别名因为上面的看上去有些不直观。
$ git config --global alias.unstage "reset HEAD"
$ git unstage lib/foo.rb
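这个别名的效果可以这样验证(示意;这里用仓库级配置代替原文的 --global以免影响全局设置

```shell
cd "$(mktemp -d)"
git init -q
mkdir lib && echo v1 > lib/foo.rb
git add lib/foo.rb
git -c user.name=demo -c user.email=demo@example.com commit -q -m init
echo v2 > lib/foo.rb
git add lib/foo.rb                     # 改动已进入暂存区
git config alias.unstage "reset HEAD"  # 定义别名(仅当前仓库)
git unstage lib/foo.rb                 # 展开为 git reset HEAD lib/foo.rb
git diff --cached --name-only          # 输出为空:暂存区已经没有该改动
git diff --name-only                   # lib/foo.rb改动仍保留在工作区
```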
@ -369,11 +369,11 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
#### 19. 交互式切换基础 ####
这是一个我之前看过展示却没真正理解过的很赞的功能现在很简单。假如说你提交了3次但是你希望更改顺序或编辑或者合并
这是一个我之前看过展示却没真正理解过的很赞的功能,现在觉得它就很简单。假如说你提交了3次但是你希望更改顺序或编辑或者合并
$ git rebase -i master~3
然后会启动你的编辑器并带有一些指令。你所要做的就是修改这些指令来选择/插入/编辑(或者删除)提交和保存/退出。然后在编辑完后你可以用`git rebase --continue`命令来让每一条指令生效。
然后会启动你的编辑器并带有一些指令。你所要做的就是修改这些指令来选择/插入/编辑(或者删除)提交和保存/退出。然后在编辑完后你可以用`git rebase --continue`命令来让每一条指令生效。
如果你有修改将会切换到你提交时所处的状态之后你需要使用命令git commit --amend来编辑。
@ -446,7 +446,7 @@ git会基于当前的提交信息自动创建评论。如果你更希望有自
$ git branch experimental SHA1_OF_HASH
如果你访问过的话你通常可以用git reflog来找到SHA1哈希值。
如果你最近访问过的话你通常可以用git reflog来找到SHA1哈希值。
另一种方式是使用`git fsck --lost-found`。其中一个dangling悬空的提交就是丢失的HEAD它只是已删除分支的HEAD而HEAD^被引用为当前的HEAD所以它并不处于dangling状态
@ -460,7 +460,7 @@ via: https://www.andyjeffries.co.uk/25-tips-for-intermediate-git-users/
作者:[Andy Jeffries][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,14 +1,14 @@
Linux有问必答时间--如何在Linux下禁用IPv6
Linux有问必答如何在Linux下禁用IPv6
================================================================================
> **问题**我发现我的一个应用程序在尝试通过IPv6建立连接但是由于我们本地网络不允许分配IPv6的流量IPv6连接会超时应用程序的连接会退回到IPv4这样就会造成不必要的延迟。由于我目前对IPv6没有任何需求所以我想在我的Linux主机上禁用IPv6。有什么比较合适的方法呢
> **问题**我发现我的一个应用程序在尝试通过IPv6建立连接但是由于我们本地网络不允许分配IPv6的流量IPv6连接会超时应用程序的连接会回退到IPv4这样就会造成不必要的延迟。由于我目前对IPv6没有任何需求所以我想在我的Linux主机上禁用IPv6。有什么比较合适的方法呢
IPv6被认为是IPv4——互联网上的传统32位地址空间的替代产品它为了解决现有IPv4地址空间即将耗尽的问题。然而由于IPv4已经被每台主机或设备连接到了互联网上所以想在一夜之间将它们全部切换到IPv6几乎是不可能的。许多IPv4到IPv6的转换机制(例如:双协议栈、网络隧道、代理) 已经被提出来用来促进IPv6能被采用并且很多应用也正在进行重写就像我们所说的来增加对IPv6的支持。有一件事情能确定就是在可预见的未来里IPv4和IPv6势必将共存。
IPv6被认为是IPv4——互联网上的传统32位地址空间——的替代产品它用来解决现有IPv4地址空间即将耗尽的问题。然而由于已经有大量主机、设备用IPv4连接到了互联网上所以想在一夜之间将它们全部切换到IPv6几乎是不可能的。许多IPv4到IPv6的转换机制(例如:双协议栈、网络隧道、代理) 已经被提出来用来促进IPv6能被采用并且很多应用也正在进行重写如我们所提倡的来增加对IPv6的支持。有一件事情可以确定就是在可预见的未来里IPv4和IPv6势必将共存。
理想情况下,[向IPv6过渡的进程][1]不应该被最终的用户所看见但是IPv4/IPv6混合环境有时会让你碰到各种源于IPv4和IPv6之间不经意间的相互作用的问题。举个例子你会碰到应用程序超时的问题比如apt-get或ssh尝试通过IPv6连接失败、DNS服务器意外清空了IPv6的AAAA记录、或者你支持IPv6的设备不兼容你的互联网服务提供商遗留下的IPv4网络等等等等。
理想情况下,[向IPv6过渡的进程][1]不应该被最终的用户所看见但是IPv4/IPv6混合环境有时会让你碰到各种源于IPv4和IPv6之间不经意间的相互碰撞的问题。举个例子,你会碰到应用程序超时的问题比如apt-get或ssh尝试通过IPv6连接失败、DNS服务器意外清空了IPv6的AAAA记录、或者你支持IPv6的设备不兼容你的互联网服务提供商遗留下的IPv4网络等等等等。
当然这不意味着你应该盲目地在你的Linux机器上禁用IPv6。鉴于IPv6许诺的种种好处作为社会的一份子我们最终还是要充分拥抱它的但是作为给最终用户进行故障排除过程的一部分如果IPv6确实是罪魁祸首那你可以尝试去关闭它。
当然这不意味着你应该盲目地在你的Linux机器上禁用IPv6。鉴于IPv6许诺的种种好处作为社会的一份子我们最终还是要充分拥抱它的但是作为给最终用户进行故障排除过程的一部分如果IPv6确实是罪魁祸首那你可以尝试去关闭它。
这里有一些让你在Linux中部分或全部禁用IPv6的小技巧(例如:为一个已经确定的网络接口)。这些小贴士应该适用于所有主流的Linux发行版包括Ubuntu、Debian、Linux Mint、CentOS、Fedora、RHEL以及Arch Linux。
这里有一些让你在Linux中部分(例如:对于某个特定的网络接口)或全部禁用IPv6的小技巧。这些小贴士应该适用于所有主流的Linux发行版包括Ubuntu、Debian、Linux Mint、CentOS、Fedora、RHEL以及Arch Linux。
### 查看IPv6在Linux中是否被启用 ###
@ -24,7 +24,7 @@ IPv6被认为是IPv4——互联网上的传统32位地址空间的替代产品
### 临时禁用IPv6 ###
如果你想要在你的Linux系统上临时关闭IPv6你可以用 /proc 文件系统。"临时"意思是我们所做的禁用IPv6的更改在系统重启后将不被保存。IPv6会在你的Linux机器重启后再次被启用。
如果你想要在你的Linux系统上临时关闭IPv6你可以用 /proc 文件系统。"临时"意思是我们所做的禁用IPv6的更改在系统重启后将不被保存。IPv6会在你的Linux机器重启后再次被启用。
要对一个特定的网络接口禁用IPv6使用以下命令
@ -50,7 +50,7 @@ IPv6被认为是IPv4——互联网上的传统32位地址空间的替代产品
#### 方法一 ####
第一种方法是请求以上提到的 /proc 对 /etc/sysctl.conf 文件进行修改。
第一种方法是通过 /etc/sysctl.conf 文件对 /proc 进行永久修改。
换句话说,就是用文本编辑器打开 /etc/sysctl.conf 然后添加以下内容:
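按原文操作,要添加的通常是下面这几行(示意;键名以内核文档为准,如只需禁用某个接口,把 all/default 换成对应的接口名即可):

```
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
```

保存后执行 `sudo sysctl -p` 使之生效。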
@ -69,7 +69,7 @@ IPv6被认为是IPv4——互联网上的传统32位地址空间的替代产品
#### 方法二 ####
另一个永久禁用IPv6的方法是在开机的时候执行一个必要的内核参数。
另一个永久禁用IPv6的方法是在开机的时候传递一个必要的内核参数。
用文本编辑器打开 /etc/default/grub 并给GRUB_CMDLINE_LINUX变量添加"ipv6.disable=1"。
@ -79,7 +79,7 @@ IPv6被认为是IPv4——互联网上的传统32位地址空间的替代产品
GRUB_CMDLINE_LINUX="xxxxx ipv6.disable=1"
上面的"xxxxx"代表任意存在着的内核参数,在它后面添加"ipv6.disable=1"。
上面的"xxxxx"代表任何已有的内核参数,在它后面添加"ipv6.disable=1"。
![](https://farm8.staticflickr.com/7286/15982512103_ec5d940e58_b.jpg)
@ -97,7 +97,7 @@ Fedora、CentOS/RHEL系统
### 禁用IPv6之后的其它可选步骤 ###
这里有一些可选步骤在你禁用IPv6后需要考虑这是因为当你在内核里禁用IPv6后其它程序仍然会尝试使用IPv6。在大多数情况下例如应用程序的运转状态不太会遭到破坏,但是出于效率或安全方面的原因,你要为他们禁用IPv6。
这里有一些在你禁用IPv6后需要考虑的可选步骤这是因为当你在内核里禁用IPv6后其它程序也许仍然会尝试使用IPv6。在大多数情况下应用程序的这种行为不太会影响到什么,但是出于效率或安全方面的原因,你可以为他们禁用IPv6。
#### /etc/hosts ####
@ -124,7 +124,7 @@ Fedora、CentOS/RHEL系统
默认情况下OpenSSH服务(sshd)会去尝试绑定IPv4和IPv6的地址。
要强制sshd只捆绑IPv4地址用文本编辑器打开 /etc/ssh/sshd_config 并添加以下脚本行。inet只适用于IPv4而inet6是适用于IPv6的。
要强制sshd只绑定IPv4地址用文本编辑器打开 /etc/ssh/sshd_config 并添加以下行。inet只适用于IPv4而inet6是适用于IPv6的。
$ sudo vi /etc/ssh/sshd_config
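原文此处要添加的行通常如下示意指令含义见sshd_config(5)手册):

```
AddressFamily inet
```

若改为 inet6 则只监听IPv6改回 any 则两者都监听修改后需要重启sshd服务才会生效。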
@ -140,7 +140,7 @@ via: http://ask.xmodulo.com/disable-ipv6-linux.html
作者:[Dan Nanni][a]
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,18 +1,18 @@
领略一些最著名的 Linux 网络工具
一大波你可能不知道的 Linux 网络工具
================================================================================
在你的系统上使用命令行工具来监控你的网络是非常实用的,并且对于 Linux 用户来说,有着许许多多现成的工具可以使用,如 nethogs, ntopng, nload, iftop, iptraf, bmon, slurm, tcptrack, cbm, netwatch, collectl, trafshow, cacti, etherape, ipband, jnettop, netspeed 以及 speedometer。
如果要在你的系统上监控网络,那么使用命令行工具是非常实用的,并且对于 Linux 用户来说,有着许许多多现成的工具可以使用,如 nethogs, ntopng, nload, iftop, iptraf, bmon, slurm, tcptrack, cbm, netwatch, collectl, trafshow, cacti, etherape, ipband, jnettop, netspeed 以及 speedometer。
鉴于世上有着许多的 Linux 专家和开发者,显然还存在其他的网络监控工具,但在这篇教程中,我不打算将它们全部包括在内。
上面列出的工具都有着自己的独特之处,但归根结底,它们都做着监控网络流量的工作,且并不是只有一种方法来完成这件事。例如 nethogs 可以被用来展示每个进程的带宽使用情况,以防你想知道究竟是哪个应用在消耗了你的整个网络资源; iftop 可以被用来展示每个套接字连接的带宽使用情况,而 像 nload 这类的工具可以帮助你得到有关整个带宽的信息。
上面列出的工具都有着自己的独特之处,但归根结底,它们都做着监控网络流量的工作,只是通过各种不同的方法。例如 nethogs 可以被用来展示每个进程的带宽使用情况,以防你想知道究竟是哪个应用正在消耗你的整个网络资源; iftop 可以被用来展示每个套接字连接的带宽使用情况,而像 nload 这类的工具可以帮助你得到有关整个带宽的信息。
### 1) nethogs ###
nethogs 是一个免费的工具,当要查找哪个 PID (注:即 process identifier进程 ID) 给你的网络流量带来了麻烦时,它是非常方便的。它按每个进程来组带宽,而不是像大多数的工具那样按照每个协议或每个子网来划分流量。它功能丰富,同时支持 IPv4 和 IPv6并且我认为若你想在你的 Linux 主机上确定哪个程序正消耗着你的全部带宽,它是来做这件事的最佳的程序。
nethogs 是一个免费的工具,当要查找哪个 PID (注:即 process identifier进程 ID) 给你的网络流量带来了麻烦时,它是非常方便的。它按每个进程来分组统计带宽,而不是像大多数的工具那样按照每个协议或每个子网来划分流量。它功能丰富,同时支持 IPv4 和 IPv6并且我认为若你想在你的 Linux 主机上确定哪个程序正消耗着你的全部带宽,它是做这件事的最佳程序。
一个 Linux 用户可以使用 **nethogs** 来显示每个进程的 TCP 下载和上传速率,使用命令 **nethogs eth0** 来监控一个定的设备,上面的 eth0 是那个你想获取信息的设备的名称,你还可以得到有关正在传输的数据的传输速率信息。
一个 Linux 用户可以使用 **nethogs** 来显示每个进程的 TCP 下载和上传速率,可以使用命令 **nethogs eth0** 来监控一个给定的设备,上面的 eth0 是那个你想获取信息的设备的名称,你还可以得到有关正在传输的数据的传输速率信息。
对我而言, nethogs 是非常容易使用的,或许是因为我非常喜欢它以至于我总是在我的 Ubuntu 12.04 LTS 机器中使用它来监控我的网络带宽。
对我而言, nethogs 是非常容易使用的,或许是因为我非常喜欢它以至于我总是在我的 Ubuntu 12.04 LTS 机器中使用它来监控我的网络带宽。
例如要想使用混杂模式来嗅探,可以像下面展示的命令那样使用选项 -p
@ -20,6 +20,8 @@ nethogs 是一个免费的工具,当要查找哪个 PID (注:即 process ide
假如你想更多地了解 nethogs 并深入探索它,那么请毫不犹豫地阅读我们做的关于这个网络带宽监控工具的整个教程。
LCTT 译注:关于 nethogs 的更多信息可以参考https://linux.cn/article-2808-1.html
### 2) nload ###
nload 是一个控制台应用,可以被用来实时地监控网络流量和带宽使用情况,它还通过提供两个简单易懂的图表来对流量进行可视化。这个绝妙的网络监控工具还可以在监控过程中切换被监控的设备,而这可以通过按左右箭头来完成。
@ -28,19 +30,21 @@ nload 是一个控制台应用,可以被用来实时地监控网络流量和
正如你在上面的截图中所看到的那样,由 nload 提供的图表是非常容易理解的。nload 提供了有用的信息,也展示了诸如被传输数据的总量和最小/最大网络速率等信息。
而更酷的是你可以在下面的命令的帮助下运行 nload 这个工具,这个命令是非常的短小且易记的:
而更酷的是你只需要直接运行 nload 这个工具就行,这个命令是非常的短小且易记的:
nload
我很确信的是:我们关于如何使用 nload 的详细教程将帮助到新的 Linux 用户,甚至可以帮助那些正寻找关于 nload 信息的老手。
LCTT 译注:关于 nload 的更新信息可以参考https://linux.cn/article-5114-1.html
### 3) slurm ###
slurm 是另一个 Linux 网络负载监控工具,它以一个不错的 ASCII 图来显示结果,它还支持许多键用以交互,例如 **c** 用来切换到经典模式, **s** 切换到分图模式, **r** 用来重绘屏幕, **L** 用来启用 TX/RXTX发送流量RX接收流量 LED**m** 用来在经典分图模式和大图模式之间进行切换, **q** 退出 slurm。
slurm 是另一个 Linux 网络负载监控工具,它以一个不错的 ASCII 图来显示结果,它还支持许多键用以交互,例如 **c** 用来切换到经典模式, **s** 切换到分图模式, **r** 用来重绘屏幕, **L** 用来启用 TX/RXTX发送流量RX接收流量 **m** 用来在经典分图模式和大图模式之间进行切换, **q** 退出 slurm。
![linux network load monitoring tools](http://blog.linoxide.com/wp-content/uploads/2013/12/slurm2.png)
在网络负载监控工具 slurm 中,还有许多其它的键可用,你可以很容易地使用下面的命令在 man 手册中学习它们。
在网络负载监控工具 slurm 中,还有许多其它的键可用,你可以很容易地使用下面的命令在 man 手册中学习它们。
man slurm
@ -48,11 +52,11 @@ slurm 在 Ubuntu 和 Debian 的官方软件仓库中可以找到,所以使用
sudo apt-get install slurm
我们已经在一个教程中对 slurm 的使用做了介绍,所以请访问相关网页( 注:应该指的是[这篇文章](http://linoxide.com/ubuntu-how-to/monitor-network-load-slurm-tool/) ),并不要忘记和其它使用 Linux 的朋友分享这些知识。
我们已经在一个[教程](http://linoxide.com/ubuntu-how-to/monitor-network-load-slurm-tool/)中对 slurm 的使用做了介绍,不要忘记和其它使用 Linux 的朋友分享这些知识。
### 4) iftop ###
当你想在一个接口上按照主机来展示带宽使用情况时iftop 是一个非常有用的工具。根据 man 手册,**iftop** 在一个已命名的接口或在它可以找到的第一个接口(假如没有任何特殊情况,它就像一个外部的接口)上监听网络流量,并且展示出一个表格来显示当前一对主机间的带宽使用情况。
当你想显示连接到网卡上的各个主机的带宽使用情况时iftop 是一个非常有用的工具。根据 man 手册,**iftop** 在一个指定的接口或在它可以找到的第一个接口(假如没有任何特殊情况,它应该是一个对外的接口)上监听网络流量,并且展示出一个表格来显示当前一对主机间的带宽使用情况。
通过在虚拟终端中使用下面的命令Ubuntu 和 Debian 用户可以在他们的机器中轻易地安装 iftop
@ -61,6 +65,8 @@ slurm 在 Ubuntu 和 Debian 的官方软件仓库中可以找到,所以使用
在你的机器上,可以使用下面的命令通过 yum 来安装 iftop
yum -y install iftop
LCTT 译注:关于 nload 的更多信息请参考https://linux.cn/article-1843-1.html
### 5) collectl ###
@ -69,7 +75,7 @@ collectl 可以被用来收集描述当前系统状态的数据,并且它支
- 记录模式
- 回放模式
**记录模式** 允许从一个正在运行的系统中读取数据,然后将这些数据要么显示在终端中,要么写入一个或多个文件或套接字中。
**记录模式** 允许从一个正在运行的系统中读取数据,然后将这些数据要么显示在终端中,要么写入一个或多个文件或一个套接字中。
**回放模式**
@ -79,13 +85,15 @@ Ubuntu 和 Debian 用户可以在他们的机器上使用他们默认的包管
sudo apt-get install collectl
还可以使用下面的命令来安装 collectl 因为对于这些发行版本(注:这里指的是用 yum 作为包管理器的发行版本),在它们官方的软件仓库中也含有 collectl
还可以使用下面的命令来安装 collectl 因为对于这些发行版本(注:这里指的是用 yum 作为包管理器的发行版本),在它们官方的软件仓库中也含有 collectl
yum install collectl
LCTT 译注:关于 collectl 的更多信息请参考: https://linux.cn/article-3154-1.html
### 6) Netstat ###
Netstat 是一个用来监控**传入和传出的网络数据包统计数据**和接口统计数据的命令行工具。它为传输控制协议 TCP (包括上传和下行),路由表,及一系列的网络接口(网络接口控制器或者软件定义的网络接口) 和网络协议统计数据展示网络连接情况
Netstat 是一个用来监控**传入和传出的网络数据包统计数据**以及接口统计数据的命令行工具。它会显示 TCP 连接 (包括上传和下行)路由表及一系列的网络接口网卡或者SDN接口和网络协议统计数据。
Ubuntu 和 Debian 用户可以在他们的机器上使用默认的包管理器来安装 netstat。Netstat 软件被包括在 net-tools 软件包中,并可以在 shell 或虚拟终端中运行下面的命令来安装它:
@ -107,6 +115,8 @@ CentOS, Fedora, RHEL 用户可以在他们的机器上使用默认的包管理
![man netstat](http://blog.linoxide.com/wp-content/uploads/2015/02/man-netstat.png)
LCTT 译注:关于 netstat 的更多信息请参考https://linux.cn/article-2434-1.html
### 7) Netload ###
netload 命令只展示一个关于当前网络负载和自程序启动以来传输数据总字节数的简要报告,它没有更多的功能。它是 netdiag 软件包的一部分。
@ -115,9 +125,9 @@ netload 命令只展示一个关于当前网络荷载和自从程序运行之后
# yum install netdiag
Netload 在默认仓库中作为 netdiag 的一部分可以被找到,我们可以轻易地使用下面的命令来利用 **apt** 包管理器安装 **netdiag**
Netload 是默认仓库中 netdiag 的一部分,我们可以轻易地使用下面的命令来利用 **apt** 包管理器安装 **netdiag**
$ sudo apt-get install netdiag (注:这里原文为 sudo install netdiag应该加上 apt-get)
$ sudo apt-get install netdiag
为了运行 netload我们需要确保选择了一个正在工作的网络接口的名称如 eth0, eth1, wlan0, mon0等然后在 shell 或虚拟终端中运行下面的命令:
@ -127,21 +137,23 @@ Netload 在默认仓库中作为 netdiag 的一部分可以被找到,我们可
### 8) Nagios ###
Nagios 是一个领先且功能强大的开源监控系统,它使得网络或系统管理员在服务器相关的问题影响到服务器的主要事务之前,鉴定并解决这些问题。 有了 Nagios 系统,管理员便可以在一个单一的窗口中监控远程的 Linux 、Windows 系统、交换机、路由器和打印机等。它显示出重要的警告并指出在你的网络或服务器中是否出现某些故障,这间接地帮助你在问题发生之前,着手执行补救行动。
Nagios 是一个领先且功能强大的开源监控系统,它使得网络或系统管理员可以在服务器的各种问题影响到服务器的主要事务之前,发现并解决这些问题。 有了 Nagios 系统,管理员便可以在一个单一的窗口中监控远程的 Linux 、Windows 系统、交换机、路由器和打印机等。它显示出重要的警告并指出在你的网络或服务器中是否出现某些故障,这可以间接地帮助你在问题发生前就着手执行补救行动。
Nagios 有一个 web 界面,其中有一个图形化的活动监视器。通过浏览网页 http://localhost/nagios/ 或 http://localhost/nagios3/ 便可以登录到这个 web 界面。假如你在远程的机器上进行操作,请使用你的 IP 地址来替换 localhost然后键入用户名和密码我们便会看到如下图所展示的信息
![在 Chromium 浏览器中的 Nagios3](http://blog.linoxide.com/wp-content/uploads/2015/02/nagios3-ubuntu.png)
LCTT 译注:关于 Nagios 的更多信息请参考https://linux.cn/article-2436-1.html
### 9) EtherApe ###
EtherApe 是一个针对 Unix 的图形化网络监控工具,它仿照了 etherman 软件。它具有链路层IP 和 TCP 模式并支持 Ethernet, FDDI, Token Ring, ISDN, PPP, SLIP 及 WLAN 设备等接口,再加上支持一些封装的格式。主机和链接随着流量大小和被着色的协议名称展示而变化。它可以过滤要展示的流量,并可从一个文件或运行的网络中读取数据报
EtherApe 是一个针对 Unix 的图形化网络监控工具,它仿照了 etherman 软件。它支持链路层、IP 和 TCP 等模式,并支持以太网, FDDI, 令牌环, ISDN, PPP, SLIP 及 WLAN 设备等接口,以及一些封装格式。主机和连接随着流量和协议而改变其尺寸和颜色。它可以过滤要展示的流量,并可从一个文件或运行的网络中读取数据包
在 CentOS、Fedora、RHEL 等 Linux 发行版本中安装 etherape 是一件容易的事,因为在它们的官方软件仓库中就可以找到 etherape。我们可以像下面展示的命令那样使用 yum 包管理器来安装它:
yum install etherape
我们可以使用下面的命令在 Ubuntu、Debian 及它们的衍生发行版本中使用 **apt** 包管理器来安装 EtherApe
我们可以使用下面的命令在 Ubuntu、Debian 及它们的衍生发行版本中使用 **apt** 包管理器来安装 EtherApe
sudo apt-get install etherape
@ -149,13 +161,13 @@ EtherApe 是一个针对 Unix 的图形化网络监控工具,它仿照了 ethe
sudo etherape
然后, **etherape****图形用户界面** 便会被执行。接着,在菜单上面的 **捕捉** 选项下,我们可以选择 **模式**(IP,链路层TCP) 和 **接口**。一切设定完毕后,我们需要点击 **开始** 按钮。接着我们便会看到类似下面截图的东西:
然后, **etherape****图形用户界面** 便会被执行。接着,在菜单上面的 **捕捉** 选项下,我们可以选择 **模式**(IP链路层TCP) 和 **接口**。一切设定完毕后,我们需要点击 **开始** 按钮。接着我们便会看到类似下面截图的东西:
![EtherApe](http://blog.linoxide.com/wp-content/uploads/2015/02/etherape.png)
### 10) tcpflow ###
tcpflow 是一个命令行工具,它可以捕捉作为 TCP 连接(流)的部分传输数据,并以一种方便协议分析或除错的方式来存储数据。它重建了实际的数据流并将每个流存储在不同的文件中,以备日后的分析。它理解 TCP 序列号并可以正确地重建数据流,不管是在重发或乱序发送状态下。
tcpflow 是一个命令行工具,它可以捕捉 TCP 连接(流)的部分传输数据,并以一种方便协议分析或除错的方式来存储数据。它重构了实际的数据流并将每个流存储在不同的文件中,以备日后的分析。它能识别 TCP 序列号并可以正确地重构数据流,不管是在重发还是乱序发送状态下。
通过 **apt** 包管理器在 Ubuntu 、Debian 系统中安装 tcpflow 是很容易的,因为默认情况下在官方软件仓库中可以找到它。
@ -175,7 +187,7 @@ tcpflow 是一个命令行工具,它可以捕捉作为 TCP 连接(流)的一
# yum install --nogpgcheck http://pkgs.repoforge.org/tcpflow/tcpflow-0.21-1.2.el6.rf.i686.rpm
我们可以使用 tcpflow 来捕捉全部或部分 tcp 流量并以一种简单的方式把它们写到一个可读文件中。下面的命令执行着我们想要做的事情,但我们需要在一个空目录中运行下面的命令,因为它将创建诸如 x.x.x.x.y-a.a.a.a.z 格式的文件,做完这些之后,只需按 Ctrl-C 便可停止这个命令。
我们可以使用 tcpflow 来捕捉全部或部分 tcp 流量并以一种简单的方式把它们写到一个可读的文件中。下面的命令就可以完成这个事情,但我们需要在一个空目录中运行下面的命令,因为它将创建诸如 x.x.x.x.y-a.a.a.a.z 格式的文件,运行之后,只需按 Ctrl-C 便可停止这个命令。
$ sudo tcpflow -i eth0 port 8000
@ -183,49 +195,51 @@ tcpflow 是一个命令行工具,它可以捕捉作为 TCP 连接(流)的一
### 11) IPTraf ###
[IPTraf][2] 是一个针对 Linux 平台的基于控制台的网络统计应用。它生成一系列的图形,如 TCP 连接包和字节的数目、接口信息和活动指示器、 TCP/UDP 流量故障以及 LAN 状态包和字节的数目
[IPTraf][2] 是一个针对 Linux 平台的基于控制台的网络统计应用。它生成一系列的图形,如 TCP 连接的包/字节计数、接口信息和活动指示器、 TCP/UDP 流量故障以及局域网内设备的包/字节计数
在默认的软件仓库中可以找到 IPTraf所以我们可以使用下面的命令通过 **apt** 包管理器轻松地安装 IPTraf
$ sudo apt-get install iptraf
在默认的软件仓库中可以找到 IPTraf所以我们可以使用下面的命令通过 **yum** 包管理器轻松地安装 IPTraf
我们可以使用下面的命令通过 **yum** 包管理器轻松地安装 IPTraf
# yum install iptraf
我们需要以管理员权限来运行 IPTraf(注:这里原文写错为 TPTraf),并带有一个可用的网络接口名。这里,我们的网络接口名为 wlan2所以我们使用 wlan2 来作为接口的名称
我们需要以管理员权限来运行 IPTraf,并带有一个有效的网络接口名。这里,我们的网络接口名为 wlan2所以我们使用 wlan2 来作为参数
$ sudo iptraf wlan2 (注:这里原文为 sudo iptraf,应该加上 wlan2)
$ sudo iptraf wlan2
![IPTraf](http://blog.linoxide.com/wp-content/uploads/2015/02/iptraf.png)
开始一般的网络接口统计,键入:
开始通常的网络接口统计,键入:
# iptraf -g
为了在一个名为 eth0 的接口设备上看详细的统计信息,使用:
查看接口 eth0 的详细统计信息,使用:
# iptraf -d wlan2 (注:这里的 wlan2 和 上面的 eth0 不一致,下面的几句也是这种情况,请相应地改正)
# iptraf -d eth0
为了看一个名为 eth0 的接口的 TCP 和 UDP 监控,使用:
查看接口 eth0 的 TCP 和 UDP 监控信息,使用:
# iptraf -z wlan2
# iptraf -s eth0
为了展示在一个名为 eth0 的接口上的包的大小和数目,使用:
查看接口 eth0 的包的大小和数目,使用:
# iptraf -z wlan2
# iptraf -z eth0
注意:请将上面的 wlan2 替换为你的接口名称。你可以通过运行`ip link show`命令来检查你的接口。
注意:请将上面的 eth0 替换为你的接口名称。你可以通过运行`ip link show`命令来检查你的接口。
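查看本机接口名的方式如下(示意;输出的接口名因机器而异):

```shell
# 列出本机的网络接口名,每行第一列即接口名,如 lo、eth0、wlan0
ip -brief link show
```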
LCTT 译注:关于 iptraf 的更多详细信息请参考https://linux.cn/article-5430-1.html
### 12) Speedometer ###
Speedometer 是一个小巧且简单的工具,它只绘出一幅包含有通过某个给定端口的上行、下行流量的好看的图。
Speedometer 是一个小巧且简单的工具,它只用来绘出一幅包含有通过某个给定端口的上行、下行流量的好看的图。
在默认的软件仓库中可以找到 Speedometer ,所以我们可以使用下面的命令通过 **yum** 包管理器轻松地安装 Speedometer
# yum install speedometer
在默认的软件仓库中可以找到 Speedometer ,所以我们可以使用下面的命令通过 **apt** 包管理器轻松地安装 Speedometer
我们可以使用下面的命令通过 **apt** 包管理器轻松地安装 Speedometer
$ sudo apt-get install speedometer
@ -239,15 +253,15 @@ Speedometer 可以简单地通过在 shell 或虚拟终端中执行下面的命
### 13) Netwatch ###
Netwatch 是 netdiag 工具集里的一部分,并且它也显示当前主机和其他远程主机的连接情况,以及在每个连接中数据传输的速率。
Netwatch 是 netdiag 工具集里的一部分,它也显示当前主机和其他远程主机的连接情况,以及在每个连接中数据传输的速率。
我们可以使用 yum 在 fedora 中安装 Netwatch因为它在 fedora 的默认软件仓库中。但若你运行着 CentOS 或 RHEL 我们需要安装 [rpmforge 软件仓库][3]。
# yum install netwatch
Netwatch 是 netdiag 的一部分,可以在默认的软件仓库中找到,所以我们可以轻松地使用下面的命令来利用 **apt** 包管理器安装 **netdiag**
$ sudo apt-get install netdiag
为了运行 netwatch 我们需要在虚拟终端或 shell 中执行下面的命令:
@ -259,15 +273,15 @@ Netwatch 作为 netdiag 的一部分可以在默认的软件仓库中找到,
### 14) Trafshow ###
Trafshow 同 netwatch 和 pktstat 一样,可以报告当前活动连接里使用的协议和每个连接中数据传输的速率。它可以使用 pcap 类型的过滤器来筛选出特定的连接。
我们可以使用 yum 在 fedora 中安装 trafshow ,因为它在 fedora 的默认软件仓库中。但若你正运行着 CentOS 或 RHEL 我们需要安装 [rpmforge 软件仓库][4]。
# yum install trafshow
Trafshow 在默认仓库中可以找到,所以我们可以轻松地使用下面的命令来利用 **apt** 包管理器安装它:
$ sudo apt-get install trafshow
为了使用 trafshow 来执行监控任务,我们需要在虚拟终端或 shell 中执行下面的命令:
@ -275,7 +289,7 @@ Trafshow 在默认仓库中可以找到,所以我们可以轻松地使用下
![Trafshow](http://blog.linoxide.com/wp-content/uploads/2015/02/trafshow-all.png)
为了专门监控 tcp 连接,如下面一样添加上 tcp 参数:
$ sudo trafshow -i wlan2 tcp
@ -285,7 +299,7 @@ Trafshow 在默认仓库中可以找到,所以我们可以轻松地使用下
### 15) Vnstat ###
与大多数的其他工具相比Vnstat 有一点不同。实际上它运行一个后台服务或守护进程,并时刻记录着传输数据的大小。另外,它可以被用来生成一个网络使用历史记录的报告。
我们需要开启 EPEL 软件仓库,然后运行 **yum** 包管理器来安装 vnstat。
@ -301,7 +315,7 @@ Vnstat 在默认软件仓库中可以找到,所以我们可以使用下面的
![vnstat](http://blog.linoxide.com/wp-content/uploads/2015/02/vnstat.png)
为了实时地监控带宽使用情况,使用 -l 选项live 模式)。然后它将以一种非常精确的方式来展示上行和下行数据所使用的带宽总量,但不会显示任何有关主机连接或进程的内部细节。
$ vnstat -l
@ -313,7 +327,7 @@ Vnstat 在默认软件仓库中可以找到,所以我们可以使用下面的
### 16) tcptrack ###
[tcptrack][5] 可以展示 TCP 连接的状态它在一个给定的网络端口上进行监听。tcptrack 监控它们的状态并展示出排序且不断更新的列表,包括来源/目标地址、带宽使用情况等信息,这与 **top** 命令的输出非常类似。
鉴于 tcptrack 在软件仓库中,我们可以轻松地在 Debian、Ubuntu 系统中从软件仓库使用 **apt** 包管理器来安装 tcptrack。为此我们需要在 shell 或虚拟终端中执行下面的命令:
@ -329,7 +343,7 @@ Vnstat 在默认软件仓库中可以找到,所以我们可以使用下面的
注:这里我们下载了 rpmforge-release 的当前最新版本,即 0.5.3-1你总是可以从 rpmforge 软件仓库中下载其最新版本,并请在上面的命令中替换为你下载的版本。
**tcptrack** 需要以 root 权限或超级用户身份来运行。执行 tcptrack 时,我们需要带上想监视 TCP 连接状况的网络接口名称。这里我们的接口名称为 wlan2所以如下面这样使用
sudo tcptrack -i wlan2
@ -345,7 +359,7 @@ Vnstat 在默认软件仓库中可以找到,所以我们可以使用下面的
### 17) CBM ###
CBM Color Bandwidth Meter 可以展示出当前所有网络设备的流量使用情况。这个程序是如此的简单,以至于可以从它的名称中看出其功能。CBM 的源代码和新版本可以在 [http://www.isotton.com/utils/cbm/][7] 上找到。
鉴于 CBM 已经包含在软件仓库中,我们可以简单地使用 **apt** 包管理器从 Debian、Ubuntu 的软件仓库中安装 CBM。为此我们需要在一个 shell 窗口或虚拟终端中运行下面的命令:
@ -359,7 +373,7 @@ CBM 或 Color Bandwidth Meter 可以展示出当前所有网络设备的流量
### 18) bmon ###
[Bmon][8] Bandwidth Monitoring 是一个用于调试和实时监控带宽的工具。这个工具能够检索各种输入模块的统计数据。它提供了多种输出方式,包括一个基于 curses 库的界面轻量级的HTML输出以及 ASCII 输出格式。
bmon 可以在软件仓库中找到,所以我们可以通过使用 apt 包管理器来在 Debian、Ubuntu 中安装它。为此,我们需要在一个 shell 窗口或虚拟终端中运行下面的命令:
@ -373,7 +387,7 @@ bmon 可以在软件仓库中找到,所以我们可以通过使用 apt 包管
### 19) tcpdump ###
[TCPDump][9] 是一个用于网络监控和数据获取的工具。它可以为我们节省很多的时间,并可用来调试网络或服务器的相关问题。它可以打印出在某个网络接口上与布尔表达式匹配的数据包所包含的内容的一个描述。
tcpdump 可以在 Debian、Ubuntu 的默认软件仓库中找到,我们可以简单地以 sudo 权限使用 apt 包管理器来安装它。为此,我们需要在一个 shell 窗口或虚拟终端中运行下面的命令:
@ -389,7 +403,6 @@ tcpdump 需要以 root 权限或超级用户来运行,我们需要带上我们
![tcpdump](http://blog.linoxide.com/wp-content/uploads/2015/02/tcpdump.png)
假如你只想监视一个特定的端口,则可以运行下面的命令。下面是一个针对 80 端口(网络服务器)的例子:
$ sudo tcpdump -i wlan2 'port 80'
@ -419,14 +432,15 @@ tcpdump 需要以 root 权限或超级用户来运行,我们需要带上我们
### 结论 ###
在这篇文章中,我们介绍了一些在 Linux 下的网络负载监控工具,这对于系统管理员甚至是新手来说,都是很有帮助的。文中介绍的每一个工具都具有其特点,不同的选项等,但最终它们都可以帮助你来监控你的网络流量。
--------------------------------------------------------------------------------
via: http://linoxide.com/monitoring-2/network-monitoring-tools-linux/
作者:[Bobbin Zachariah][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -1,4 +1,4 @@
如何修复 apt-get update 无法添加新的 CD-ROM 的错误
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/12/elementary_OS_Freya.jpg)
@ -63,8 +63,8 @@
via: http://itsfoss.com/fix-failed-fetch-cdrom-aptget-update-add-cdroms/
作者:[Abhishek][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -4,7 +4,7 @@
![KVM Management in Linux](http://www.tecmint.com/wp-content/uploads/2015/02/KVM-Management-in-Linux.jpg)
*Linux系统的KVM管理*
在这篇文章里没有什么新的概念,我们只是用命令行工具重复之前所做过的事情,也没有什么前提条件,都是相同的过程,之前的文章我们都讨论过。
@ -31,35 +31,40 @@ Virsh命令行工具是一款管理virsh客户域的用户界面。virsh程序
# virsh pool-define-as Spool1 dir - - - - "/mnt/personal-data/SPool1/"
![Create New Storage Pool](http://www.tecmint.com/wp-content/uploads/2015/02/Create-New-Storage-Pool.png)
*创建新存储池*
**2. 查看环境中我们所有的存储池,用以下命令。**
# virsh pool-list --all
![List All Storage Pools](http://www.tecmint.com/wp-content/uploads/2015/02/List-All-Storage-Pools.png)
*列出所有存储池*
**3. 现在我们来构造存储池了,用以下命令来构造我们刚才定义的存储池。**
# virsh pool-build Spool1
![Build Storage Pool](http://www.tecmint.com/wp-content/uploads/2015/02/Build-Storage-Pool.png)
*构造存储池*

**4. 用带pool-start参数的virsh命令来激活并启动我们刚才创建并构造完成的存储池。**
# virsh pool-start Spool1
![Active Storage Pool](http://www.tecmint.com/wp-content/uploads/2015/02/Active-Storage-Pool.png)
*激活存储池*
**5. 查看环境中存储池的状态,用以下命令。**
# virsh pool-list --all
![Check Storage Pool Status](http://www.tecmint.com/wp-content/uploads/2015/02/Check-Storage-Pool-Status.png)
*查看存储池状态*
你会发现Spool1的状态变成了已激活。
@ -68,14 +73,16 @@ Virsh命令行工具是一款管理virsh客户域的用户界面。virsh程序
# virsh pool-autostart Spool1
![Configure KVM Storage Pool](http://www.tecmint.com/wp-content/uploads/2015/02/Configure-Storage-Pool.png)
*配置KVM存储池*
**7. 最后来看看我们新的存储池的信息吧。**
# virsh pool-info Spool1
![Check KVM Storage Pool Information](http://www.tecmint.com/wp-content/uploads/2015/02/Check-Storage-Pool-Information.png)
*查看KVM存储池信息*
恭喜你Spool1已经准备好待命接下来我们试着创建存储卷来使用它。
@ -90,12 +97,14 @@ Virsh命令行工具是一款管理virsh客户域的用户界面。virsh程序
# qemu-img create -f raw /mnt/personal-data/SPool1/SVol1.img 10G
![Create Storage Volume](http://www.tecmint.com/wp-content/uploads/2015/02/Create-Storage-Volumes.png)
*创建存储卷*
**9. 通过使用带info的qemu-img命令你可以获取到你的新磁盘映像的一些信息。**
![Check Storage Volume Information](http://www.tecmint.com/wp-content/uploads/2015/02/Check-Storage-Volume-Information.png)
*查看存储卷信息*
**警告**: 不要用qemu-img命令来修改被运行中的虚拟机或任何其它进程所正在使用的映像那样映像会被破坏。
@ -120,15 +129,18 @@ Virsh命令行工具是一款管理virsh客户域的用户界面。virsh程序
# virt-install --name=rhel7 --disk path=/mnt/personal-data/SPool1/SVol1.img --graphics spice --vcpu=1 --ram=1024 --location=/run/media/dos/9e6f605a-f502-4e98-826e-e6376caea288/rhel-server-7.0-x86_64-dvd.iso --network bridge=virbr0
![Create New Virtual Machine](http://www.tecmint.com/wp-content/uploads/2015/02/Create-New-Virtual-Machines.png)
*创建新的虚拟机*
**11. 你会看到弹出一个virt-vierwer窗口像是在通过它在与虚拟机通信。**
![Booting Virtual Machine](http://www.tecmint.com/wp-content/uploads/2015/02/Booting-Virtual-Machine.jpeg)
*虚拟机启动程式*
![Installation of Virtual Machine](http://www.tecmint.com/wp-content/uploads/2015/02/Installation-of-Virtual-Machine.jpeg)
*虚拟机安装过程*
### 结论 ###
@ -143,7 +155,7 @@ via: http://www.tecmint.com/kvm-management-tools-to-manage-virtual-machines/
作者:[Mohammad Dosoukey][a]
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -1,4 +1,4 @@
Linux 基础如何修复Ubuntu上“E: /var/cache/apt/archives/ subprocess new pre-removal script returned error exit status 1 ”的错误
如何修复 Ubuntu 上“...script returned error exit status 1”的错误
================================================================================
![](https://1102047360.rsc.cdn77.org/wp-content/uploads/2014/04/ubuntu-790x558.png)
@ -6,11 +6,11 @@ Linux 基础如何修复Ubuntu上“E: /var/cache/apt/archives/ subprocess ne
> E: /var/cache/apt/archives/ subprocess new pre-removal script returned error exit status 1
![](http://www.unixmen.com/wp-content/uploads/2015/03/Update-Manager_0011.png)
### 解决: ###
我 google 了一下,找到了解决的办法。下面是我的解决方法。
sudo apt-get clean
sudo apt-get update && sudo apt-get upgrade
@ -33,11 +33,11 @@ Linux 基础如何修复Ubuntu上“E: /var/cache/apt/archives/ subprocess ne
--------------------------------------------------------------------------------
via: http://www.unixmen.com/linux-basics-how-to-fix-e-varcacheaptarchives-subprocess-new-pre-removal-script-returned-error-exit-status-1-in-ubuntu/
作者:[SK][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -1,5 +1,6 @@
修复 Ubuntu 14.04 从待机中唤醒后鼠标键盘出现僵死情况
=========
### 问题: ###
当Ubuntu14.04或14.10从睡眠和待机状态恢复时鼠标和键盘出现僵死不能点击也不能输入。解决这种情况的唯一方法就是按关机键强行关闭系统这不仅非常不便且令人恼火。因为在Ubuntu的默认设置中合上笔记本等同于切换到睡眠模式。
@ -12,15 +13,15 @@
sudo apt-get install --reinstall xserver-xorg-input-all
这则贴士源自一个我们的读者Dev的提问。快试试这篇贴士看看是否对你也有效。在一个类似的问题中你可以[修复Ubuntu登录后无Unity界面、侧边栏和Dash的问题][1]。
--------------------------------------------------------------------------------
via: http://itsfoss.com/keyboard-mouse-freeze-suspend/
作者:[Abhishek][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -1,12 +1,13 @@
11个让你吃惊的 Linux 终端命令
================================================================================
我已经用了十年的Linux了通过今天这篇文章我将向大家展示一系列的命令、工具和技巧我希望一开始就有人告诉我这些而不是让它们曾在我成长道路上绊住我。
### 1. 命令行日常系快捷键 ###
![Linux Keyboard Shortcuts.](http://f.tqn.com/y/linux/1/L/m/J/1/keyboardshortcuts.png)
*Linux的快捷键。*
如下的快捷方式非常有用,能够极大的提升你的工作效率:
- CTRL + U - 剪切光标前的内容
@ -16,11 +17,11 @@ Linux的快捷键。
- CTRL + A - 移动光标到行首
- ALT + F - 跳向下一个空格
- ALT + B - 跳回上一个空格
- ALT + Backspace - 删除前一个单词
- CTRL + W - 剪切光标前一个单词
- Shift + Insert - 向终端内粘贴文本
那么,为了让上述内容更容易理解,来看下面的这行命令。
sudo apt-get intall programname
@ -28,7 +29,7 @@ Linux的快捷键。
想象现在光标正在行末我们有很多方法将它退回到单词install并替换它。

我可以按两次ALT+B这样光标就会在如下的位置这里用^代表光标的位置)。
sudo apt-get^intall programname
@ -36,32 +37,36 @@ Linux的快捷键。
如果你想将浏览器中的文本复制到终端,可以使用快捷键"shift + insert"。
### 2. SUDO !! ###

![](http://f.tqn.com/y/linux/1/L/n/J/1/sudotricks2.png)

*sudo !!*

如果你还不知道这个命令我觉得你应该好好感谢我因为如果你不知道的话那每次你在输入长串命令后看到“permission denied”时一定会痛恼不堪。
- sudo !!
如何使用sudo !!?很简单。试想你刚输入了如下命令:
apt-get install ranger
一定会出现“Permission denied”除非你已经登录了足够高权限的账户
sudo !! 就会用 sudo 的形式运行上一条命令。所以上一条命令就变成了这样:
sudo apt-get install ranger
如果你不知道什么是sudo[戳这里][1]。
### 3. 暂停并在后台运行命令 ###
![Pause Terminal Applications.](http://f.tqn.com/y/linux/1/L/o/J/1/pauseapps.png)

*暂停终端运行的应用程序。*

我曾经写过一篇[如何在终端后台运行命令的指南][13]。
- CTRL + Z - 暂停应用程序
- fg - 重新将程序唤到前台
@ -74,41 +79,42 @@ sudo !!就会用sudo的形式运行上一条命令。所以上一条命令可以
文件编辑到一半你意识到你需要马上在终端输入些命令但是nano在前台运行让你不能输入。
你可能觉得唯一的方法就是保存文件,退出 nano运行命令以后再重新打开nano。
其实你只要按CTRL + Z前台的命令就会暂停画面就切回到命令行了。然后你就能运行你想要运行的命令等命令运行完后在终端窗口输入“fg”就可以回到先前暂停的任务。
有一个尝试非常有趣就是用nano打开文件输入一些东西然后暂停会话。再用nano打开另一个文件输入一些什么后再暂停会话。如果你输入“fg”你将回到第二个用nano打开的文件。只有退出nano再输入“fg”你才会回到第一个用nano打开的文件。
### 4. 使用nohup在登出SSH会话后仍运行命令 ###

![nohup.](http://f.tqn.com/y/linux/1/L/p/J/1/nohup3.png)

*nohup*

如果你用ssh登录别的机器时[nohup命令][2]真的非常有用。
那么怎么使用nohup呢
想象一下你使用ssh远程登录到另一台电脑上你运行了一条非常耗时的命令然后退出了ssh会话不过命令仍在执行。而nohup可以将这一场景变成现实。
举个例子以测试为目的我用[树莓派][3]来下载发行版
举个例子,因为测试的需要,我用我的[树莓派][3]来下载发行版。我绝对不会给我的树莓派外接显示器、键盘或鼠标
我绝对不会给我的树莓派外接显示器、键盘或鼠标。
一般我总是用[SSH] [4]从笔记本电脑连接到树莓派。如果我在不用nohup的情况下使用树莓派下载大型文件那我就必须等待到下载完成后才能登出ssh会话关掉笔记本。如果是这样那我为什么要使用树莓派下文件呢
一般我总是用[SSH][4]从笔记本电脑连接到树莓派。如果我在不用nohup的情况下使用树莓派下载大型文件那我就必须等待到下载完成后才能登出ssh会话关掉笔记本。可如果是这样那我为什么要使用树莓派下文件呢
使用nohup的方法也很简单只需如下例中在nohup后输入要执行的命令即可
nohup wget http://mirror.is.co.za/mirrors/linuxmint.com/iso//stable/17.1/linuxmint-17.1-cinnamon-64bit.iso &
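nohup 默认会把命令的标准输出追加写入当前目录下的 nohup.out 文件,因此重新登录后想看看下载进展时,可以这样(示例):

```shell
# nohup 的输出默认追加到 nohup.out查看其最后几行即可了解进度
tail -n 20 nohup.out
```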
### 5. 在特定的时间运行Linux命令 ###

![Schedule tasks with at.](http://f.tqn.com/y/linux/1/L/q/J/1/at.png)

*At管理任务日程*
nohup命令在你用SSH连接到服务器并在上面保持执行SSH登出前任务的时候十分有用。
想一下如果你需要在特定的时间执行相同的命令,这种情况该怎么办呢?
命令at就能妥善解决这一情况。以下是at使用示例。
@ -116,78 +122,80 @@ At管理任务日程
at> cowsay 'hello'
at> CTRL + D
上面的命令能在周五下午10时38分运行程序[cowsay][5]。
使用的语法就是在at后追加日期时间。当at>提示符出现后,就可以输入你想在那个时间运行的命令了。

CTRL + D 返回终端。

还有许多日期和时间的格式值得你好好翻一翻at的man手册来找到更多的使用方式。
### 6. Man手册 ###

![](http://f.tqn.com/y/linux/1/L/l/J/1/manmost.png)

*彩色man 手册*

Man手册会为你列出命令和参数的使用大纲教你如何使用它们。Man手册看起来沉闷呆板我想它们也不是被设计来娱乐我们的。

不过这不代表你不能做些什么来使它们变得漂亮些。
export PAGER=most
你需要安装 most它会使你的man手册的色彩更加绚丽。
你可以用以下命令给man手册设定指定的行长
export MANWIDTH=80
最后,如果你有一个可用的浏览器,你可以使用-H参数在默认浏览器中打开任意的man页。
man -H <command>
注意啦,以上的命令只有在你将默认的浏览器设置到环境变量$BROWSER中了之后才有效果哟。
### 7. 使用htop查看和管理进程 ###

![View Processes With htop.](http://f.tqn.com/y/linux/1/L/r/J/1/nohup2.png)

*使用htop查看进程。*

你用哪个命令找出电脑上正在运行的进程的呢?我敢打赌是‘[ps][6]’,并在其后加不同的参数来得到你所想要的不同输出。
安装‘[htop][7]’吧!绝对让你相见恨晚。
htop在终端中将进程以列表的方式呈现有点类似于Windows中的任务管理器。你可以使用功能键的组合来切换排列的方式和展示出来的项你也可以在htop中直接杀死进程。
在终端中简单的输入htop即可运行。
htop
### 8. 使用ranger浏览文件系统 ###

![Command Line File Manager - Ranger.](http://f.tqn.com/y/linux/1/L/s/J/1/ranger.png)

*命令行文件管理 - Ranger*

如果说htop是命令行进程控制的好帮手那么[ranger][8]就是命令行浏览文件系统的好帮手。
你在用之前可能需要先安装,不过一旦安装了以后就可以在命令行输入以下命令启动她:
ranger
在命令行窗口中ranger和一些别的文件管理器很像但是相比上下结构布局它是左右结构的这意味着你按左方向键你将前进到上一个文件夹而右方向键则会切换到下一个。
在使用前ranger的man手册还是值得一读的这样你就可以用快捷键操作ranger了。
### 9. 取消关机 ###

![Cancel Linux Shutdown.](http://f.tqn.com/y/linux/1/L/t/J/1/shutdown.png)

*Linux取消关机。*

无论是在命令行还是图形用户界面下[关机][9]后,才发现自己不是真的想要关机。
shutdown -c
@ -197,11 +205,13 @@ Linux取消关机。
- [pkill][10] shutdown
### 10. 杀死挂起进程的简单方法 ###

![Kill Hung Processes With XKill.](http://f.tqn.com/y/linux/1/L/u/J/1/killhungprocesses.png)

*使用XKill杀死挂起进程。*
想象一下,你正在运行的应用程序不明原因的僵死了。
你可以使用ps -ef来找到该进程后杀掉或者使用htop
@ -214,18 +224,20 @@ Linux取消关机。
那如果整个系统挂掉了怎么办呢?
按住键盘上的alt和sysrq不放然后慢慢输入以下键
- [REISUB][12]
这样不按电源键你的计算机也能重启了。
### 11. 下载Youtube视频 ###

![youtube-dl.](http://f.tqn.com/y/linux/1/L/v/J/1/youtubedl2.png)

*youtube-dl.*

一般来说我们大多数人都喜欢看Youtube的视频也会通过钟爱的播放器播放Youtube的流媒体。
如果你需要离线一段时间(比如:从苏格兰南部坐飞机到英格兰南部旅游的这段时间)那么你可能希望下载一些视频到存储设备中,到闲暇时观看。
@ -235,7 +247,7 @@ youtube-dl.
youtube-dl url-to-video
可以在Youtube视频页面点击分享链接得到视频的url。只要简单地复制链接再粘贴到命令行就行了要用shift + insert快捷键哟
### 总结 ###
@ -246,8 +258,8 @@ youtube-dl.
via: http://linux.about.com/od/commands/tp/11-Linux-Terminal-Commands-That-Will-Rock-Your-World.htm
作者:[Gary Newell][a]
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -264,3 +276,4 @@ via: http://linux.about.com/od/commands/tp/11-Linux-Terminal-Commands-That-Will-
[10]:http://linux.about.com/library/cmd/blcmdl1_pkill.htm
[11]:http://linux.about.com/od/funnymanpages/a/funman_xkill.htm
[12]:http://blog.kember.net/articles/reisub-the-gentle-linux-restart/
[13]:http://linux.about.com/od/commands/fl/How-To-Run-Linux-Programs-From-The-Terminal-In-Background-Mode.htm
@ -1,14 +1,14 @@
使用Prey定位被盗的Ubuntu笔记本与智能电话
===============================================================================
Prey是一款跨平台的开源工具可以帮助你找回被盗的笔记本台式机平板和智能手机。它已经获得了广泛的流行声称帮助找回了成百上千台丢失的笔记本和智能手机。Prey的使用特别简单首先安装在你的笔记本或者手机上当你的设备不见了用你的账号登入Prey网站并且标记你的设备为“丢失”。只要小偷将设备接入网络Prey就会马上发送设备的地理位置给你。如果你的笔记本有摄像头它还会拍下该死的贼。
Prey占用很小的系统资源不会对你的设备运行有任何影响。你也可以配合其他你已经在设备上安装的防盗软件使用。Prey在你的设备与Prey服务器之间采用安全加密的通道进行数据传输。
### 在Ubuntu上安装并配置Prey ###
让我们来看看如何在Ubuntu上安装和配置Prey需要提醒的是在配置过程中我们必须到Prey官网进行账号注册。一旦完成上述工作Prey将会开始监视你的设备了。免费的账号最多可以监视三个设备如果你需要添加更多的设备你就需要购买合适的套餐了。
可以想象Prey多么流行与被广泛使用它现在已经被添加到了官方的软件库中了。这意味着你不需要往软件包管理器添加任何PPA。很简单登录你的终端运行以下的命令来安装它
sudo apt-get install prey
@ -54,7 +54,7 @@ Prey有一个明显的不足。它需要你的设备接入互联网才会发送
### 结论 ###
这是一款小巧非常有用的安全保护应用可以让你在一个地方追踪你所有的设备尽管不完美但是仍然提供了找回被盗设备的机会。它在LinuxWindows和Mac平台上无缝运行。以上就是[Prey][2]完整使用的所有细节。
-------------------------------------------------------------------------------
@ -62,9 +62,10 @@ via: http://linoxide.com/ubuntu-how-to/anti-theft-application-prey-ubuntu/
作者:[Aun Raza][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunrz/
[1]:https://preyproject.com/
[2]:https://preyproject.com/plans
@ -1,12 +1,12 @@
Linux有问必答如何在命令行下压缩JPEG图像
================================================================================
> **问题**: 我有许多数码照相机拍出来的照片。我想在上传到Dropbox之前优化和压缩下JPEG图片。有没有什么简单的方法压缩JPEG图片并不损耗他们的质量
如今拍照设备如智能手机、数码相机拍出来的图片分辨率越来越大。甚至3630万像素的Nikon D800已经冲入市场并且这个趋势根本停不下来。如今的拍照设备不断地提高着照片分辨率使得我们不得不压缩后再上传到有储存限制、带宽限制的云。
事实上这里有一个非常简单的方法压缩JPEG图像。一个叫“jpegoptim”命令行工具可以帮助你“无损”美化JPEG图像所以你可以压缩JPEG图片而不至于牺牲他们的质量。万一你的存储空间和带宽预算真的很少jpegoptim也支持“有损”压缩来调整图像大小。
事实上这里有一个非常简单的方法压缩JPEG图像。一个叫“jpegoptim”命令行工具可以帮助你“无损”美化JPEG图像你可以压缩JPEG图片而不至于牺牲他们的质量。万一你的存储空间和带宽预算真的很少jpegoptim也支持“有损”压缩来调整图像大小。
如果要压缩PNG图像参考[这个指南][1]的例子。
### 安装jpegoptim ###
@ -34,7 +34,7 @@ CentOS/RHEL安装先开启[EPEL库][2],然后运行下列命令:
注意,原始图像会被压缩后图像覆盖。
如果jpegoptim不能无损美化图像将不会覆盖它
$ jpegoptim -v photo.jpg
@ -46,21 +46,21 @@ CentOS/RHEL安装先开启[EPEL库][2],然后运行下列命令:
$ jpegoptim -d ./compressed photo.jpg
这样,压缩的图片将会保存在./compressed目录使用同样的输入文件名
如果你想要保护文件的创建修改时间,使用"-p"参数。这样压缩后的图片会得到与原始图片相同的日期时间。
$ jpegoptim -d ./compressed -p photo.jpg
如果你只是想看看无损压缩率而不是真的想压缩它们,使用"-n"参数来模拟压缩,然后它会显示出压缩率。
$ jpegoptim -n photo.jpg
### 有损压缩JPG图像 ###
万一你真的需要把图片保存在云空间上你可以使用有损压缩JPG图片。
这种情况下,使用"-m<质量>"选项质量数范围0到1000是最差质量100是最好质量
例如用50%质量压缩图片:
@ -76,7 +76,7 @@ CentOS/RHEL安装先开启[EPEL库][2],然后运行下列命令:
### 一次压缩多张JPEG图像 ###
最常见的情况是需要压缩一个目录下的多张JPEG图像文件。为了应付这种情况你可以使用接下来的脚本。
#!/bin/sh
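    # 以下为一个示意性的补全(假设:批量无损压缩当前目录下的所有 .jpg 文件;
    # 此处的循环写法并非原文脚本):
    for f in *.jpg; do
        [ -e "$f" ] || continue
        jpegoptim "$f"
    done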
@ -90,11 +90,11 @@ CentOS/RHEL安装先开启[EPEL库][2],然后运行下列命令:
via: http://ask.xmodulo.com/compress-jpeg-images-command-line-linux.html
作者:[Dan Nanni][a]
译者:[VicYu/Vic020](https://github.com/Vic020)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://xmodulo.com/how-to-compress-png-files-on-linux.html
[2]:https://linux.cn/article-2324-1.html
@ -0,0 +1,57 @@
两种方式创建你自己的 Docker 基本映像
================================================================================
欢迎大家,今天我们学习一下 docker 基本映像以及如何构建我们自己的 docker 基本映像。[Docker][1] 是一个开源项目,提供了一个可以打包、装载和运行任何应用的轻量级容器的开放平台。它没有语言支持、框架和打包系统的限制,从小型的家用电脑到高端服务器,在何时何地都可以运行。这使它们可以不依赖于特定软件栈和供应商,像一块块积木一样部署和扩展网络应用、数据库和后端服务。
Docker 映像是不可更改的只读层。Docker 使用 **Union File System** 在只读文件系统上增加可读写的文件系统但所有更改都发生在最顶层的可写层而其下的只读映像上的原始文件仍然不会改变。由于映像不会改变也就没有状态。基本映像是没有父类的那些映像。Docker 基本映像主要的好处是它允许我们有一个独立运行的 Linux 操作系统。
下面介绍两种创建自定义基本映像的方式。
### 1. 使用 Tar 创建 Docker 基本映像 ###
我们可以使用 tar 构建我们自己的基本映像,我们从一个运行中的 Linux 发行版开始,将其打包为基本映像。这过程可能会有些不同,它取决于我们打算构建的发行版。在 Debian 发行版中,已经预带了 debootstrap。在开始下面的步骤之前我们需要安装 debootstrap。debootstrap 用来获取构建基本系统需要的包。这里,我们构建基于 Ubuntu 14.04 "Trusty" 的映像。要完成这些,我们需要在终端或者 shell 中运行以下命令。
$ sudo debootstrap trusty trusty > /dev/null
$ sudo tar -C trusty -c . | sudo docker import - trusty
![使用debootstrap构建docker基本映像](http://blog.linoxide.com/wp-content/uploads/2015/03/creating-base-image-debootstrap.png)
上面的命令将 trusty 文件夹打包为一个 tar 文件并输出到标准输出,"docker import - trusty" 通过管道从标准输入中获取这个 tar 文件,并根据它创建一个名为 trusty 的基本映像。然后,如下所示,我们将运行映像内部的一条测试命令。
$ docker run trusty cat /etc/lsb-release
[Docker GitHub Repo][2] 中有一些可以帮助我们快速构建基本映像的示例脚本。
### 2. 使用Scratch构建基本映像 ###
在 Docker registry 中,有一个被称为 Scratch 的使用空 tar 文件构建的特殊库:
$ tar cv --files-from /dev/null | docker import - scratch
![使用scratch构建docker基本映像](http://blog.linoxide.com/wp-content/uploads/2015/03/creating-base-image-using-scratch.png)
我们可以使用这个映像构建新的小容器:
FROM scratch
ADD script.sh /usr/local/bin/run.sh
CMD ["/usr/local/bin/run.sh"]
上面的 Dockerfile 可以构建出一个很小的映像。它首先从一个完全空的文件系统开始,然后将 script.sh 复制为映像中的 /usr/local/bin/run.sh最后在容器启动时运行脚本 /usr/local/bin/run.sh。
### 结尾 ###
在这个教程中,我们学习了如何构建一个开箱即用的自定义 Docker 基本映像。构建一个 docker 基本映像是一个很简单的任务,因为这里有很多已经可用的包和脚本。如果我们想要在里面安装想要的东西,构建 docker 基本映像非常有用。如果有任何疑问,建议或者反馈,请在下面的评论框中写下来。非常感谢!享受吧 :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/2-ways-create-docker-base-image/
作者:[Arun Pyasi][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://www.docker.com/
[2]:https://github.com/docker/docker/blob/master/contrib/mkimage-busybox.sh
@ -10,12 +10,14 @@
#### 在 64位 Ubuntu 15.04 ####
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0-vivid/linux-image-4.0.0-040000-generic_4.0.0-040000.201504121935_amd64.deb
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0-vivid/linux-headers-4.0.0-040000-generic_4.0.0-040000.201504121935_amd64.deb
$ sudo dpkg -i linux-headers-4.0.0*.deb linux-image-4.0.0*.deb
#### 在 32位 Ubuntu 15.04 ####
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0-vivid/linux-image-4.0.0-040000-generic_4.0.0-040000.201504121935_i386.deb
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0-vivid/linux-headers-4.0.0-040000-generic_4.0.0-040000.201504121935_i386.deb
$ sudo dpkg -i linux-headers-4.0.0*.deb linux-image-4.0.0*.deb
@ -0,0 +1,41 @@
EvilAP_Defender可以警示和攻击 WIFI 热点陷阱的工具
===============================================================================
**开发人员称EvilAP_Defender甚至可以攻击流氓Wi-Fi接入点**
这是一个新的开源工具,可以定期扫描一个区域,以防出现恶意 Wi-Fi 接入点,同时如果发现情况会提醒网络管理员。
这个工具叫做 EvilAP_Defender是为监测攻击者所配置的恶意接入点而专门设计的这些接入点冒用合法的名字诱导用户连接上。
这类接入点被称做假面猎手evil twin使得黑客们可以从所接入的设备上监听互联网信息流。这可以被用来窃取证书、钓鱼网站等等。
大多数用户设置他们的计算机和设备可以自动连接一些无线网络比如家里的或者工作地方的网络。通常当面对两个同名的无线网络时即SSID相同有时候甚至连MAC地址BSSID也相同这时候大多数设备会自动连接信号较强的一个。
这使得假面猎手攻击容易实现因为SSID和BSSID都可以伪造。
[EvilAP_Defender][1]是一个叫Mohamed Idris的人用Python语言编写公布在GitHub上面。它可以使用一个计算机的无线网卡来发现流氓接入点这些坏蛋们复制了一个真实接入点的SSIDBSSID甚至是其他的参数如通道密码隐私协议和认证信息等等。
该工具首先以学习模式运行,以便发现合法的接入点[AP],并且将其加入白名单。然后可以切换到正常模式,开始扫描未认证的接入点。
如果一个恶意[AP]被发现了,该工具会用电子邮件提醒网络管理员,但是开发者也打算在未来加入短信提醒功能。
该工具还有一个保护模式在这种模式下应用会对恶意接入点发起一个拒绝服务DoS攻击为管理员采取防卫措施赢得一些时间。
“DoS 攻击将仅仅针对那些有着相同SSID、但BSSIDAP的MAC地址不同或者信道不同的流氓 APIdris在这款工具的文档中说道。“这是为了避免攻击到你的正常网络。”
尽管如此,用户应该切记在许多国家,攻击别人的接入点很多时候都是非法的,甚至是一个看起来像是攻击者操控的恶意接入点。
要能够运行这款工具需要Aircrack-ng无线网络套件、一个支持Aircrack-ng的无线网卡以及MySQL和Python运行环境。
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2905725/security0/this-tool-can-alert-you-about-evil-twin-access-points-in-the-area.html
作者:[Lucian Constantin][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/Lucian-Constantin/
[1]:https://github.com/moha99sa/EvilAP_Defender/blob/master/README.TXT
@ -0,0 +1,63 @@
在Ubuntu中安装Visual Studio Code
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/05/Install-Visual-Studio-Code-in-Ubuntu.jpeg)
微软令人意外地[发布了Visual Studio Code][1]并支持主要的桌面平台当然包括linux。如果你是一名需要在ubuntu工作的web开发人员你可以**非常轻松的安装Visual Studio Code**。
我将要使用[Ubuntu Make][2]来安装Visual Studio Code。Ubuntu Make就是以前的Ubuntu开发者工具中心是一个命令行工具帮助用户快速安装各种开发工具、语言和IDE。也可以使用Ubuntu Make轻松[安装Android Studio][3] 和其他IDE如Eclipse。本文将展示**如何在Ubuntu中使用Ubuntu Make安装Visual Studio Code**。(译注:也可以直接去微软官网下载安装包)
### 安装微软Visual Studio Code ###
开始之前首先需要安装Ubuntu Make。虽然Ubuntu Make存在于Ubuntu 15.04官方库中**但是需要Ubuntu Make 0.7以上版本才能安装Visual Studio Code**。所以需要通过官方PPA更新到最新的Ubuntu Make。此PPA支持Ubuntu 14.04、14.10 和 15.04。
注意,**仅支持64位版本**。
打开终端使用下列命令通过官方PPA来安装Ubuntu Make
sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make
安装Ubuntu Make完后接着使用下列命令安装Visual Studio Code
umake web visual-studio-code
安装过程中,将会询问安装路径,如下图:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/05/Visual_Studio_Code_Ubuntu_1.jpeg)
在抛出一堆要求和条件后它会询问你是否确认安装Visual Studio Code。输入a来确定
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/05/Visual_Studio_Code_Ubuntu_2.jpeg)
确定之后安装程序会开始下载并安装。安装完成后你可以发现Visual Studio Code 图标已经出现在了Unity启动器上。点击图标开始运行下图是Ubuntu 15.04 Unity的截图
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/05/Visual_Studio_Code_Ubuntu.jpeg)
### 卸载Visual Studio Code###
卸载Visual Studio Code同样使用Ubuntu Make命令。如下
umake web visual-studio-code --remove
如果你不打算使用Ubuntu Make也可以通过微软官方下载安装文件。
- [下载Visual Studio Code Linux版][4]
怎样是不是超级简单就可以安装Visual Studio Code这都归功于Ubuntu Make。我希望这篇文章能帮助到你。如果您有任何问题或建议欢迎给我留言。
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-visual-studio-code-ubuntu/
作者:[Abhishek][a]
译者:[Vic020/VicYu](http://vicyu.net)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:https://linux.cn/article-5376-1.html
[2]:https://wiki.ubuntu.com/ubuntu-make
[3]:http://itsfoss.com/install-android-studio-ubuntu-linux/
[4]:https://code.visualstudio.com/Download
@ -0,0 +1,164 @@
如何使用Vault安全的存储密码和API密钥
=======================================================================
Vault是用来安全的获取秘密信息的工具它可以保存密码、API密钥、证书等信息。Vault提供了一个统一的接口来访问秘密信息其具有健壮的访问控制机制和丰富的事件日志。
对关键信息的授权访问是一个困难的问题尤其是当有许多用户角色并且用户请求不同的关键信息时例如用不同权限登录数据库的登录配置用于外部服务的API密钥SOA通信的证书等。当保密信息由不同的平台进行管理并使用一些自定义的配置时情况变得更糟因此安全的存储、管理审计日志几乎是不可能的。但Vault为这种复杂情况提供了一个解决方案。
### 突出特点 ###
**数据加密**Vault能够在不存储数据的情况下对数据进行加密、解密。开发者们便可以存储加密后的数据而无需开发自己的加密技术Vault还允许安全团队自定义安全参数。
**安全密码存储**Vault在将秘密信息API密钥、密码、证书存储到持久化存储之前对数据进行加密。因此如果有人偶尔拿到了存储的数据这也没有任何意义除非加密后的信息能被解密。
**动态密码**Vault可以随时为AWS、SQL数据库等类似的系统产生密码。比如如果应用需要访问AWS S3 桶它向Vault请求AWS密钥对Vault将给出带有租期的所需秘密信息。一旦租用期过期这个秘密信息就不再存储。
**租赁和更新**Vault给出的秘密信息带有租期一旦租用期过期它便立刻收回秘密信息如果应用仍需要该秘密信息则可以通过API更新租用期。
**撤销**在租用期到期之前Vault可以撤销一个秘密信息或者一个秘密信息树。
### 安装Vault ###
有两种方式来安装使用Vault。
**1. 预编译的Vault二进制** 能用于所有的Linux发行版下载地址如下下载之后解压并将它放在系统PATH路径下以方便调用。
- [下载预编译的二进制 Vault (32-bit)][1]
- [下载预编译的二进制 Vault (64-bit)][2]
- [下载预编译的二进制 Vault (ARM)][3]
![wget binary](http://blog.linoxide.com/wp-content/uploads/2015/04/wget-binary.png)
*下载相应的预编译的Vault二进制版本。*
![vault](http://blog.linoxide.com/wp-content/uploads/2015/04/unzip.png)
*解压下载到本地的二进制版本。*
祝贺你您现在可以使用Vault了。
![](http://blog.linoxide.com/wp-content/uploads/2015/04/vault.png)
**2. 从源代码编译**是另一种在系统中安装Vault的方式。在安装Vault之前需要安装GO和GIT。
**Redhat系统中安装GO** 使用下面的指令:
sudo yum install go
**Debin系统中安装GO** 使用下面的指令:
sudo apt-get install golang
或者
sudo add-apt-repository ppa:gophers/go
sudo apt-get update
sudo apt-get install golang-stable
**Redhat系统中安装GIT** 使用下面的命令:
sudo yum install git
**Debian系统中安装GIT** 使用下面的命令:
sudo apt-get install git
一旦GO和GIT都已被安装好我们便可以开始从源码编译安装Vault。
> 将下列的Vault仓库拷贝至GOPATH
https://github.com/hashicorp/vault
> 测试下面的文件是否存在如果它不存在那么Vault没有被克隆到合适的路径。
$GOPATH/src/github.com/hashicorp/vault/main.go
> 执行下面的指令来编译Vault并将二进制文件放到系统bin目录下。
make dev
![path](http://blog.linoxide.com/wp-content/uploads/2015/04/installation4.png)
### A Vault Getting-Started Tutorial ###

We have worked through Vault's official interactive tutorial and included its output over SSH.

**Overview**

This tutorial covers the following steps:

- Initializing and unsealing your Vault
- Authorizing your requests to Vault
- Reading and writing secrets
- Sealing your Vault

#### **Initialize your Vault**

First, we need to initialize a working instance of Vault for you. During initialization, you can configure Vault's sealing behavior. For simplicity, initialize Vault with a single unseal key for now, using the following command:

    vault init -key-shares=1 -key-threshold=1

You will notice that Vault prints out several keys here. Do not clear your terminal; these keys will be needed in later steps.
![Initializing SSH](http://blog.linoxide.com/wp-content/uploads/2015/04/Initializing-SSH.png)
#### **Unseal your Vault**

When a Vault server starts, it starts in a sealed state. In this state, Vault knows where the physical storage is and how to access it, but not how to decrypt any of it. Vault encrypts data with an encryption key. That key is itself encrypted with a "master key", which is not stored anywhere. Decrypting the master key requires unseal keys. In this example, we use a single unseal key to decrypt the master key.

    vault unseal <key 1>
![Unsealing SSH](http://blog.linoxide.com/wp-content/uploads/2015/04/Unsealing-SSH.png)
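In production you would typically split the master key across several key shares, so that no single operator can unseal the Vault alone. A sketch using the same flags shown above:

```shell
# Split the master key into 5 shares, any 3 of which can unseal the Vault.
vault init -key-shares=5 -key-threshold=3

# Three different operators each supply one share:
vault unseal <key 1>
vault unseal <key 2>
vault unseal <key 3>   # after the third share, the Vault is unsealed
```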
#### **Authorize your requests**

Before performing any operation, the connecting client must be authorized. Authorization is the process of verifying that a person or machine is who they claim to be, and that identity is then used in requests sent to Vault. For simplicity, we will use the root token generated in step 2 (scroll back up your terminal to find it). Authorize with a client token:

    vault auth <root token>
![Authorize SSH](http://blog.linoxide.com/wp-content/uploads/2015/04/Authorize-SSH.png)
#### **Read and write secrets**

Now that Vault is set up, we can start reading and writing secrets with the default-mounted secret backend. Secrets written to Vault are encrypted first, and only then written to backend storage. The backend storage mechanism never sees the unencrypted value and has no means of decrypting it outside of Vault.

    vault write secret/hello value=world

Naturally, you can then read the secret back:

    vault read secret/hello
![RW_SSH](http://blog.linoxide.com/wp-content/uploads/2015/04/RW_SSH.png)
#### **Seal your Vault**

There is also an API to seal the Vault. It throws away the current encryption key and requires another unseal process to restore it. Sealing only requires a single operator with root privileges, and is a typical part of a rare "break glass" procedure.

This way, if an intrusion is detected, the Vault data can be locked down immediately to minimize damage. The data cannot be retrieved again without access to the master key shards.

    vault seal

![Seal Vault SSH](http://blog.linoxide.com/wp-content/uploads/2015/04/Seal-Vault-SSH.png)

This concludes the getting-started tutorial.
### Summary ###

Vault is a very useful application that provides a reliable and secure way to store critical information. It encrypts secrets before storing them, maintains audit logs, hands out secrets on a lease, and revokes them as soon as the lease expires. Vault is platform independent and free to download and install. To learn more about Vault, visit its [official website][4].
--------------------------------------------------------------------------------
via: http://linoxide.com/how-tos/secure-secret-store-vault/
作者:[Aun Raza][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunrz/
[1]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_386.zip
[2]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_amd64.zip
[3]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_arm.zip
[4]:https://vaultproject.io/

View File

@ -0,0 +1,185 @@
ctop: A Command-Line Gem for Monitoring the Performance of Linux Containers
================================================================================
ctop is a new command-line-based tool for monitoring processes at the container level. Containers provide an operating-system-level virtualization environment by leveraging the resource management features of control groups (cgroups). ctop collects memory, CPU, and block I/O metrics from cgroups, along with metadata such as owner and boot time, and presents them in a human-friendly format, so that the system's health can be assessed quickly. From the data gathered, it also tries to infer the underlying container technology. ctop is likewise helpful for detecting who is consuming large amounts of memory on a low-memory system.

### Features ###

Some of ctop's features:

- Collects CPU, memory, and block I/O metrics
- Gathers information on owner, container technology, and task statistics
- Sorts the information by any column
- Displays the information in a tree view
- Folds/unfolds the cgroup tree
- Selects and follows a cgroup/container
- Selects the refresh interval for the displayed data
- Pauses the data refresh
- Detects containers based on systemd, Docker, and LXC
- Advanced features for Docker- and LXC-based containers
- Opens/attaches a shell for deeper diagnostics
- Stops/kills container types

### Installation ###

**ctop** is written in Python, so apart from Python 2.6 or later (which has built-in curses support) it has no external dependencies. Installing with Python's pip is the recommended method; install pip first if it is not already present, then install ctop with it.

*Note: the examples in this article are from an Ubuntu (14.10) system.*
$ sudo apt-get install python-pip
Install ctop using pip:
poornima@poornima-Lenovo:~$ sudo pip install ctop
[sudo] password for poornima:
Downloading/unpacking ctop
Downloading ctop-0.4.0.tar.gz
Running setup.py (path:/tmp/pip_build_root/ctop/setup.py) egg_info for package ctop
Installing collected packages: ctop
Running setup.py install for ctop
changing mode of build/scripts-2.7/ctop from 644 to 755
changing mode of /usr/local/bin/ctop to 755
Successfully installed ctop
Cleaning up...
If you prefer not to install via pip, you can also install it directly from GitHub using wget:
poornima@poornima-Lenovo:~$ wget https://raw.githubusercontent.com/yadutaf/ctop/master/cgroup_top.py -O ctop
--2015-04-29 19:32:53-- https://raw.githubusercontent.com/yadutaf/ctop/master/cgroup_top.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.27.78.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.27.78.133|:443... connected.
HTTP request sent, awaiting response... 200 OK Length: 27314 (27K) [text/plain]
Saving to: ctop
100%[======================================>] 27,314 --.-K/s in 0s
2015-04-29 19:32:59 (61.0 MB/s) - ctop saved [27314/27314]
----------
poornima@poornima-Lenovo:~$ chmod +x ctop
If the cgroup-bin package is not installed, you may run into an error message, which you can resolve by installing the required package.

    poornima@poornima-Lenovo:~$ ./ctop
    [ERROR] Failed to locate cgroup mountpoints.

    poornima@poornima-Lenovo:~$ sudo apt-get install cgroup-bin

Here is a sample of ctop's output:
![ctop screen](http://blog.linoxide.com/wp-content/uploads/2015/05/ctop.png)
*the ctop screen*

### Usage options ###

    ctop [--tree] [--refresh=] [--columns=] [--sort-col=] [--follow=] [--fold=, ...] ctop (-h | --help)

Once you are inside the ctop screen, use the up and down arrow keys to navigate between containers. Clicking on a container selects it; press q or Ctrl+C to quit.

Now, let's take a look at how to use each of the options listed above.

**-h / --help - display the help message**
poornima@poornima-Lenovo:~$ ctop -h
Usage: ctop [options]
Options:
-h, --help show this help message and exit
--tree show tree view by default
--refresh=REFRESH Refresh display every <seconds>
--follow=FOLLOW Follow cgroup path
--columns=COLUMNS List of optional columns to display. Always includes
'name'
--sort-col=SORT_COL Select column to sort by initially. Can be changed
dynamically.
**--tree - show the tree view of containers**

By default, the list view is displayed.

Once inside the ctop window, you can use the F5 key to toggle between the tree and list views.

**--fold=<name> - fold the cgroup path named \<name> in the tree view**

This option must be used together with the --tree option.

Example: ctop --tree --fold=/user.slice
![Output of 'ctop --fold'](http://blog.linoxide.com/wp-content/uploads/2015/05/ctop-fold.png)
*Output of 'ctop --fold'*

Within the ctop window, use the +/- keys to unfold or fold child cgroups.

(Note: at the time of writing, the pip repository does not yet carry the latest version of ctop, so the --fold command-line option is not yet supported there.)

**--follow= - follow/highlight a cgroup path**

Example: ctop --follow=/user.slice/user-1000.slice

As you can see in the screen below, the cgroup with the path "/user.slice/user-1000.slice" is highlighted, which makes it easy for the user to keep track of it even when its position in the display changes.
![Output of 'ctop --follow'](http://blog.linoxide.com/wp-content/uploads/2015/05/ctop-follow.png)
*Output of 'ctop --follow'*

You can also use the f key to make the highlight follow the selected container. Following is off by default.

**--refresh= - refresh the display at the given rate (default: 1 second)**

This is useful for adjusting the refresh rate to each user's needs. Use the p key to pause refreshing and select text.

**--columns=<columns> - limit the display to the selected columns. 'name' must be the first field, followed by the others. By default, the fields are: owner, processes, memory, cpu-sys, cpu-user, blkio, cpu-time**

Example: ctop --columns=name,owner,type,memory
![Output of 'ctop --column'](http://blog.linoxide.com/wp-content/uploads/2015/05/ctop-column.png)
*Output of 'ctop --column'*

**--sort-col=<sort-col> - sort by the specified column. Sorted by cpu-user by default**

Example: ctop --sort-col=blkio

If containers backed by Docker or LXC are present, additional container-tracking options become available:

    press 'a' - attach to terminal output
    press 'e' - open a shell in the container
    press 's' - stop the container (SIGTERM)
    press 'k' - kill the container (SIGKILL)

Jean-Tiare Le Bigot is still actively developing [ctop][1]; hopefully we will see features like those of the native top command in this tool soon :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/how-tos/monitor-linux-containers-performance/
作者:[B N Poornima][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/bnpoornima/
[1]:https://github.com/yadutaf/ctop

View File

@ -1,8 +1,8 @@
Managing networking in RedHat/CentOS 7.x with the cmcli command
Managing networking in RedHat/CentOS 7.x with the nmcli command
===============

The default network service in [**Red Hat Enterprise Linux 7** and **CentOS 7**][1] is provided by **NetworkManager**, a daemon that dynamically controls and configures the network. It keeps the current network devices and connections up and active, and also supports the traditional ifcfg-type configuration files.

NetworkManager can be used with the following connection types:
Ethernet, VLANs, bridges, bonds, teams, Wi-Fi, mobile broadband (such as mobile 3G), and IP-over-InfiniBand. For these network types, NetworkManager can configure network aliases, IP addresses, static routes, DNS, VPN connections, and many other special parameters.

NetworkManager can be used with the following connection types: Ethernet, VLANs, bridges, bonds, teams, Wi-Fi, mobile broadband (such as mobile 3G), and IP-over-InfiniBand. For these network types, NetworkManager can configure network aliases, IP addresses, static routes, DNS, VPN connections, and many other special parameters.

NetworkManager can be controlled with the command-line tool nmcli.
@ -24,19 +24,21 @@ EthernetVLANSBridgesBondsTeamsWi-Fimobile boradband如移
Show all connections.
# nmcli connection show -a
# nmcli connection show -a
Show only the currently active connections.
# nmcli device status
Lists the devices verified by NetworkManager and their state.

Lists the devices recognized by NetworkManager and their state.
![nmcli general](http://blog.linoxide.com/wp-content/uploads/2014/12/nmcli-gneral.jpg)
### Starting/stopping network interfaces ###

Use the nmcli tool to start or stop network interfaces, the equivalent of ifconfig's up/down. Use the following command to stop an interface:

Use the nmcli tool to start or stop network interfaces, the equivalent of ifconfig's up/down.

Use the following command to stop an interface:
# nmcli device disconnect eno16777736
@ -50,7 +52,7 @@ EthernetVLANSBridgesBondsTeamsWi-Fimobile boradband如移
# nmcli connection add type ethernet con-name NAME_OF_CONNECTION ifname interface-name ip4 IP_ADDRESS gw4 GW_ADDRESS
Change the NAME_OF_CONNECTION, IP_ADDRESS, and GW_ADDRESS parameters to match your desired configuration (the last part can be omitted if no gateway is needed).

Change the NAME\_OF\_CONNECTION, IP\_ADDRESS, and GW\_ADDRESS parameters to match your desired configuration (the last part can be omitted if no gateway is needed).
# nmcli connection add type ethernet con-name NEW ifname eno16777736 ip4 192.168.1.141 gw4 192.168.1.1
@ -68,9 +70,11 @@ EthernetVLANSBridgesBondsTeamsWi-Fimobile boradband如移
![nmcli add static](http://blog.linoxide.com/wp-content/uploads/2014/12/nmcli-add-static.jpg)
### Adding a new connection using DHCP

To add a new connection that uses DHCP to automatically assign the IP address, gateway, DNS, and so on, all you need to do is drop the ip/gw address portion from the command line; DHCP will assign those parameters automatically.

For example, to configure a DHCP connection named NEW_DHCP on the device eno16777736:

For example, to configure a DHCP connection named NEW\_DHCP on the device eno16777736:
# nmcli connection add type ethernet con-name NEW_DHCP ifname eno16777736
@ -79,8 +83,8 @@ EthernetVLANSBridgesBondsTeamsWi-Fimobile boradband如移
via: http://linoxide.com/linux-command/nmcli-tool-red-hat-centos-7/
作者:[Adrian Dinu][a]
译者:[SPccman](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[SPccman](https://github.com/SPccman)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,31 @@
Ubuntu Devs Propose Stateless Persistent Network Interface Names for Ubuntu and Debian
======================================================================================
*Networks are detected in an unpredictable and unstable order*
**Martin Pitt, a renowned Ubuntu and Debian developer, came up with the proposal of enabling stateless persistent network interface names in the upcoming versions of the Ubuntu Linux and Debian GNU/Linux operating systems.**

According to Mr. Pitt, it appears that the problem lies in the automatic detection of network interfaces within the Linux kernel. As such, network interfaces are detected in an unstable and unpredictable order. However, in order to connect to a certain network interface in ifupdown or networkd, users need to identify it first using a stable name.

"The general schema for this is to have an udev rule which does some matches to identify a particular interface, and assigns a NAME="foo" to it," says Martin Pitt in an email to the Ubuntu mailing list. "Interfaces with an explicit NAME= get called just like this, and others just get a kernel driver default, usually ethN, wlanN, or sometimes others (some wifi drivers have their own naming schemas)."
**Several solutions appeared over the years: mac, biosdevname, and ifnames**

Apparently, several solutions are available for this problem, including an udev rule installed at /lib/udev/rules.d/75-persistent-net-generator.rules that records an interface's MAC address at first boot and writes it to /etc/udev/rules.d/70-persistent-net.rules, which is currently used by default in Ubuntu and applies to most hardware components.
Other solutions include biosdevname, a package that reads port or index numbers, and slot names from the BIOS and writes them to /lib/udev/rules.d/71-biosdevname.rules, and ifnames, a persistent name generator that automatically checks the BIOS and/or firmware for index numbers or slot names, similar to biosdevname.
However, the difference between ifnames and biosdevname is that ifnames falls back to slot names, such as PCI numbers, and then to the MAC address, and writes its rules to /lib/udev/rules.d/80-net-setup-link.rules. All of these solutions can be combined, and Martin Pitt proposes to replace the first solution, which is now used by default, with the ifnames one.
If a new solution is implemented, a lot of networking issues will be resolved in Ubuntu, especially the cloud version. In addition, it will provide for stable network interface names for all new Ubuntu installations, and resolve many other problems related to system-image, etc.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Ubuntu-Devs-Propose-Stateless-Persistent-Network-Interface-Names-for-Ubuntu-and-Debian-480730.shtml
作者:[Marius Nestor][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/marius-nestor

View File

@ -1,3 +1,4 @@
Translating by H-mudcup
Synfig Studio 1.0 — Open Source Animation Gets Serious
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/04/synfig-free-animations-750x467.jpg)
@ -38,4 +39,4 @@ via: http://www.omgubuntu.co.uk/2015/04/synfig-studio-new-release-features
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://sourceforge.net/projects/synfig/files/releases/1.0/linux/synfigstudio_1.0_amd64.deb/download
[2]:http://sourceforge.net/projects/synfig/files/releases/1.0/linux/synfigstudio_1.0_x86.deb/download
[2]:http://sourceforge.net/projects/synfig/files/releases/1.0/linux/synfigstudio_1.0_x86.deb/download

View File

@ -1,111 +0,0 @@
translating wi-cuckoo
What are good command line HTTP clients?
================================================================================
"The whole is greater than the sum of its parts" is a very famous quote from Aristotle, a Greek philosopher and scientist. This quote is particularly pertinent to Linux. In my view, one of Linux's biggest strengths is its synergy. The usefulness of Linux doesn't derive only from the huge raft of open source (command line) utilities. Instead, it's the synergy generated by using them together, sometimes in conjunction with larger applications.
The Unix philosophy spawned a "software tools" movement which focused on developing concise, basic, clear, modular and extensible code that can be used for other projects. This philosophy remains an important element for many Linux projects.
Good open source developers writing utilities seek to make sure the utility does its job as well as possible, and work well with other utilities. The goal is that users have a handful of tools, each of which seeks to excel at one thing. Some utilities work well independently.
This article looks at 3 open source command line HTTP clients. These clients let you download files off the internet from a command line. But they can also be used for many more interesting purposes such as testing, debugging and interacting with HTTP servers and web applications. Working with HTTP from the command-line is a worthwhile skill for HTTP architects and API designers. If you need to play around with an API, HTTPie and cURL will be invaluable.
----------
![HTTPie](http://www.linuxlinks.com/portal/content2/png/HTTPie.png)
![HTTPie in action](http://www.linuxlinks.com/portal/content/reviews/Internet/Screenshot-httpie.png)
HTTPie (pronounced aych-tee-tee-pie) is an open source command line HTTP client. It is a command-line, cURL-like tool for humans.
The goal of this software is to make CLI interaction with web services as human-friendly as possible. It provides a simple http command that allows for sending arbitrary HTTP requests using a simple and natural syntax, and displays colorized output. HTTPie can be used for testing, debugging, and generally interacting with HTTP servers.
#### Features include: ####
- Expressive and intuitive syntax
- Formatted and colorized terminal output
- Built-in JSON support
- Forms and file uploads
- HTTPS, proxies, and authentication
- Arbitrary request data
- Custom headers
- Persistent sessions
- Wget-like downloads
- Python 2.6, 2.7 and 3.x support
- Linux, Mac OS X and Windows support
- Plugins
- Documentation
- Test coverage
- Website: [httpie.org][1]
- Developer: Jakub Roztočil
- License: Open Source
- Version Number: 0.9.2
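As a quick sketch of HTTPie's "simple and natural syntax" mentioned above (httpbin.org is just a convenient public echo service; any HTTP endpoint works):

```shell
# Send a PUT request with a custom header and a JSON body field:
# `X-API-Token:123` sets a header; `name=John` becomes {"name": "John"}.
http PUT httpbin.org/put X-API-Token:123 name=John

# Submit URL-encoded form data instead of JSON:
http --form POST httpbin.org/post name='John Smith'
```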
----------
![cURL](http://www.linuxlinks.com/portal/content2/png/cURL1.png)
![cURL in action](http://www.linuxlinks.com/portal/content/reviews/Internet/Screenshot-cURL.png)
cURL is an open source command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, TELNET and TFTP.
curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos...), file transfer resume, proxy tunneling and a busload of other useful tricks.
#### Features include: ####
- Config file support
- Multiple URLs in a single command line
- Range "globbing" support: [0-13], {one,two,three}
- Multiple file upload on a single command line
- Custom maximum transfer rate
- Redirectable stderr
- Metalink support
- Website: [curl.haxx.se][2]
- Developer: Daniel Stenberg
- License: MIT/X derivate license
- Version Number: 7.42.0
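A few cURL invocations illustrating the features listed above (the URLs are placeholders; substitute your own):

```shell
# Fetch a URL, following redirects (-L), and save it to a file (-o):
curl -L -o page.html http://example.com/

# POST form data (-d) and include response headers in the output (-i):
curl -i -d 'name=John&role=admin' http://httpbin.org/post

# Basic authentication (-u) with a custom request header (-H):
curl -u user:passwd -H 'Accept: application/json' http://httpbin.org/basic-auth/user/passwd
```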
----------
![Wget](http://www.linuxlinks.com/portal/content2/png/Wget1.png)
![Wget in action](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Wget.png)
Wget is open source software that retrieves content from web servers. Its name is derived from World Wide Web and get. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.
Wget can follow links in HTML pages and create local versions of remote web sites, fully recreating the directory structure of the original site. This is known as "recursive downloading."
Wget has been designed for robustness over slow or unstable network connections.
Features include:
- Resume aborted downloads, using REST and RANGE
- Use filename wild cards and recursively mirror directories
- NLS-based message files for many different languages
- Optionally converts absolute links in downloaded documents to relative, so that downloaded documents may link to each other locally
- Runs on most UNIX-like operating systems as well as Microsoft Windows
- Supports HTTP proxies
- Supports HTTP cookies
- Supports persistent HTTP connections
- Unattended / background operation
- Uses local file timestamps to determine whether documents need to be re-downloaded when mirroring
- Website: [www.gnu.org/software/wget/][3]
- Developer: Hrvoje Niksic, Gordon Matzigkeit, Junio Hamano, Dan Harkless, and many others
- License: GNU GPL v3
- Version Number: 1.16.3
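For example, the "recursive downloading" described above might look like this (the URLs are placeholders):

```shell
# Mirror a site, converting links for local browsing and waiting politely
# between requests:
wget --mirror --convert-links --page-requisites --wait=2 http://example.com/

# Resume an aborted download of a single large file:
wget -c http://example.com/big.iso
```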
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20150425174537249/HTTPclients.html
作者Frazer Kline
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://httpie.org/
[2]:http://curl.haxx.se/
[3]:https://www.gnu.org/software/wget/

View File

@ -0,0 +1,115 @@
Guake 0.7.0 Released A Drop-Down Terminal for Gnome Desktops
================================================================================
The Linux command line is the best and most powerful thing: it fascinates a new user and provides extreme power to experienced users and geeks. Those who work on servers and in production already know this fact. It is interesting to know that the Linux console was one of the first features of the kernel, written by Linus Torvalds way back in the year 1991.

The console is a powerful and very reliable tool, as it has no moving parts. A terminal serves as an intermediary between the console and the GUI environment. Terminals themselves are GUI applications that run on top of a desktop environment. There are a lot of terminal applications, some of which are desktop-environment specific and the rest universal. Terminator, Konsole, Gnome-Terminal, Terminology, XFCE terminal and xterm are a few terminal emulators, to name some.

You can find a list of the most widely used terminal emulators at the link below.
- [20 Useful Terminals for Linux][1]
The other day while surfing the web, I came across a terminal named Guake, which is a terminal for Gnome. Though this is not the first time I have learned about Guake (I'd known of this application for nearly a year), somehow I could not write about it, and later it slipped my mind until I heard of it again. So the article is finally here. We will walk you through Guake's features and its installation on Debian, Ubuntu and Fedora, followed by a quick test.
#### What is Guake? ####
Guake is a drop-down terminal for the Gnome environment. Written from scratch, mostly in Python and a little in C, this application is released under GPLv2+ and is available for Linux and similar systems. Guake is inspired by the console in the computer game Quake, which slides down from the top when a special key is pressed (default is F12) and slides back up when the same key is pressed again.

It is important to mention that Guake is not the first of its kind. Yakuake, which stands for Yet Another Kuake and is a terminal emulator for the KDE desktop environment, and Tilda, a GTK+ terminal emulator, are also inspired by the same slide-up/down console of the computer game Quake.
#### Features of Guake ####
- Lightweight
- Simple Easy and Elegant
- Functional
- Powerful
- Good Looking
- Smooth integration of terminal into GUI
- Appears when you call and disappear once you are done by pressing a predefined hot key
- Support for hotkeys, tabs and background transparency makes it a brilliant application, a must for every Gnome user.
- Extremely configurable
- Plenty of color palette included, fixed and recognized
- Shortcut for transparency level
- Run a script when Guake starts via Guake Preferences.
- Able to run on more than one monitor
Guake 0.7.0 was released recently, bringing numerous fixes as well as some new features, as discussed above. The complete Guake 0.7.0 changelog and source tarball packages can be found [here][2].
### Installing Guake Terminal in Linux ###
If you are interested in compiling Guake from source you may download the source from the link above, build it yourself before installing.
However Guake is available to be installed on most of the distributions from repository or by adding an additional repository. Here, we will be installing Guake on Debian, Ubuntu, Linux Mint and Fedora systems.
First get the latest software package list from the repository and then install Guake from the default repository as shown below.
---------------- On Debian, Ubuntu and Linux Mint ----------------
$ sudo apt-get update
$ sudo apt-get install guake
----------
---------------- On Fedora 19 Onwards ----------------
# yum update
# yum install guake
After installation, start Guake from another terminal with:
$ guake
After starting it, use F12 (Default) to roll down and roll up the terminal on your Gnome Desktop.
It looks very beautiful, especially the transparent background. Roll down… roll up… roll down… roll up… run a command, open another tab, run a command… roll up… roll down…
![Guake Terminal in Action](http://www.tecmint.com/wp-content/uploads/2015/05/Guake.png)
Guake Terminal in Action
If the colors of your wallpaper or working windows don't match, you may want to change your wallpaper or reduce the transparency of the Guake terminal.

Next, look into Guake's properties to edit settings as required. Run Guake Preferences either from the application menu or with the command below.
$ guake --preferences
![Guake Terminal Properties](http://www.tecmint.com/wp-content/uploads/2015/05/Guake-Properties.png)
Guake Terminal Properties
Scrolling Properties..
![Guake Scrolling Settings](http://www.tecmint.com/wp-content/uploads/2015/05/Guake-Scrolling.png)
Guake Scrolling Settings
Appearance Properties Here you can modify text and background color as well as tune transparency.
![Appearance Properties](http://www.tecmint.com/wp-content/uploads/2015/05/Appearance-Properties.png)
Appearance Properties
Keyboard Shortcuts Here you may edit and modify the toggle key for Guake visibility (default is F12).
![Keyboard Shortcuts](http://www.tecmint.com/wp-content/uploads/2015/05/Keyboard-Shortcuts.png)
Keyboard Shortcuts
Compatibility Setting Perhaps you wont need to edit it.
![Compatibility Setting](http://www.tecmint.com/wp-content/uploads/2015/05/Compatibility-Setting.png)
Compatibility Setting
### Conclusion ###
This project is not too young and not too old, hence it has reached a certain level of maturity; it is quite solid and works out of the box. For someone like me who needs to switch between the GUI and the console very often, Guake is a boon. I don't need to manage an extra window, open and close it frequently, or hunt through a huge pool of opened applications to find the terminal or switch to a different workspace to manage it: now all I need is F12.
I think this is a must tool for any Linux user who makes use of GUI and Console at the same time, equally. I am going to recommend it to anyone who want to work on a system where interaction between GUI and Console is smooth and hassle free.
That's all for now. Let us know if you face any problem while installing or running it. We will be here to help you. Also tell us your experience with Guake. Provide us with your valuable feedback in the comments below, and like and share this article to help spread the word.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-guake-terminal-ubuntu-mint-fedora/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/linux-terminal-emulators/
[2]:https://github.com/Guake/guake/releases/tag/0.7.0

View File

@ -0,0 +1,73 @@
Open Source History: Why Did Linux Succeed?
================================================================================
> Why did Linux, the Unix-like operating system kernel started by Linus Torvalds in 1991 that became central to the open source world, succeed where so many similar projects, including GNU HURD and the BSDs, failed?
![](http://thevarguy.com/site-files/thevarguy.com/files/imagecache/medium_img/uploads/2015/05/linux.jpg)
One of the most puzzling questions about the history of free and open source is this: Why did Linux succeed so spectacularly, whereas similar attempts to build a free or open source, Unix-like operating system kernel met with considerably less success? I don't know the answer to that question. But I have rounded up some theories, which I'd like to lay out here.
First, though, let me make clear what I mean when I write that Linux was a great success. I am defining it in opposition primarily to the variety of other Unix-like operating system kernels, some of them open and some not, that proliferated around the time Linux was born. [GNU][1] HURD, the free-as-in-freedom kernel whose development began in [May 1991][2], is one of them. Others include Unices that most people today have never heard of, such as various derivatives of the Unix variant developed at the University of California at Berkeley, BSD; Xenix, Microsoft's take on Unix; academic Unix clones including Minix; and the original Unix developed under the auspices of AT&T, which was vitally important in academic and commercial computing circles during earlier decades, but virtually disappeared from the scene by the 1990s.
#### Related ####
- [Open Source History: Tracing the Origins of Hacker Culture and the Hacker Ethic][3]
- [Unix and Personal Computers: Reinterpreting the Origins of Linux][4]
I'd also like to make clear that I'm writing here about kernels, not complete operating systems. To a great extent, the Linux kernel owes its success to the GNU project as a whole, which produced the crucial tools, including compilers, a debugger and a BASH shell implementation, that are necessary to build a Unix-like operating system. But GNU developers never created a viable version of the HURD kernel (although they are [still trying][5]). Instead, Linux ended up as the kernel that glued the rest of the GNU pieces together, even though that had never been in the GNU plans.
So it's worth asking why Linux, a kernel launched by Linus Torvalds, an obscure programmer in Finland, in 1991—the same year as HURD—endured and thrived within a niche where so many other Unix-like kernels, many of which enjoyed strong commercial backing and association with the leading Unix hackers of the day, failed to take off. To that end, here are a few theories pertaining to that question that I've come across as I've researched the history of the free and open source software worlds, along with the respective strengths and weaknesses of these various explanations.
### Linux Adopted a Decentralized Development Approach ###
This is the argument that comes out of Eric S. Raymond's essay, "[The Cathedral and the Bazaar][6]," and related works, which make the case that software develops best when a large number of contributors collaborate continuously within a relatively decentralized organizational structure. That was generally true of Linux, in contrast to, for instance, GNU HURD, which took a more centrally directed approach to code development—and, as a result, "had been evidently failing" to build a complete operating system for a decade, in Raymond's view.
To an extent, this explanation makes sense, but it has some significant flaws. For one, Torvalds arguably assumed a more authoritative role in directing Linux code development—deciding which contributions to include and reject—than Raymond and others have wanted to recognize. For another, this reasoning does not explain why GNU succeeded in producing so much software besides a working kernel. If only decentralized development works well in the free/open source software world, then all of GNU's programming efforts should have been a bust—which they most certainly were not.
### Linux is Pragmatic; GNU is Ideological ###
Personally, I find this explanation—which supposes that Linux grew so rapidly because its founder was a pragmatist who initially wrote the kernel just to be able to run a tailored Unix OS on his computer at home, not as part of a crusade to change the world through free software, as the GNU project aimed to do—the most compelling.
Still, it has some weaknesses that make it less than completely satisfying. In particular, while Torvalds himself adopted pragmatic principles, not all members of the community that coalesced around his project, then or today, have done the same. Yet, Linux has succeeded all the same.
Moreover, if pragmatism was the key to Linux's endurance, then why, again, was GNU successful in building so many other tools besides a kernel? If having strong political beliefs about software prevents you from pursuing successful projects, GNU should have been an outright failure, not an endeavor that produced a number of software packages that remain foundational to the IT world today.
Last but not least, many of the other Unix variants of the late 1980s and early 1990s, especially several BSD off-shoots, were the products of pragmatism. Their developers aimed to build Unix variants that could be more freely shared than those restricted by expensive commercial licenses, but they were not deeply ideological about programming or sharing code. Neither was Torvalds, and it is therefore difficult to explain Linux's success, and the failure of other Unix projects, in terms of ideological zeal.
### Operating System Design ###
There are technical differences between Linux and some other Unix variants that are important to keep in mind when considering the success of Linux. Richard Stallman, the founder of the GNU project, pointed to these in explaining, in an email to me, why HURD development had lagged: "It is true that the GNU Hurd is not a practical success. Part of the reason is that its basic design made it somewhat of a research project. (I chose that design thinking it was a shortcut to get a working kernel in a hurry.)"
Linux is also different from other Unix variants in the sense that Torvalds wrote all of the Linux code himself. Having a Unix of his own, free of other people's code, was one of his stated intentions when he [first announced Linux][7] in August 1991. This characteristic sets Linux apart from most of the other Unix variants that existed at that time, which derived their code bases from either AT&T Unix or Berkeley's BSD.
I'm not a computer scientist, so I'm not qualified to decide whether the Linux code was simply superior to that of the other Unices, explaining why Linux succeeded. But that's an argument someone might make—although it does not account for the disparity in culture and personnel between Linux and other Unix kernels, which, to me, seem more important than code in understanding Linux's success.
### The "Community" Put Its Support Behind Linux ###
Stallman also wrote that "mainly the reason" for Linux's success was that "Torvalds made Linux free software, and since then more of the community's effort has gone into Linux than into the Hurd." That's not exactly a complete explanation for Linux's trajectory, since it does not account for why the community of free software developers followed Torvalds instead of HURD or another Unix. But it nonetheless highlights this shift as a large part of how Linux prevailed.
A fuller account of the free software community's decision to endorse Linux would have to explain why developers did so even though, at first, Linux was a very obscure project—much more so, by any measure, than some of the other attempts at the time to create a freer Unix, such as NetBSD and 386BSD—as well as one whose affinity with the goals of the free software movement was not at first clear. Originally, Torvalds released Linux under a license that simply prevented its commercial use. It was considerably later that he switched to the GNU General Public License, which protects the openness of source code.
So, those are the explanations I've found for Linux's success as an open source operating system kernel—a success which, to be sure, has been measured in some respects (desktop Linux never became what its proponents hoped, for instance). But Linux has also become foundational to the computing world in ways that no other Unix-like OS has. Maybe Apple OS X and iOS, which derive from BSD, come close, but they don't play such a central role as Linux in powering the Internet, among other things.
Have other ideas on why Linux became what it did, or why its counterparts in the Unix world have now almost all sunk into obscurity? (I know: BSD variants still have a following today, and some commercial Unices remain important enough for [Red Hat][8] (RHT) to be [courting their users][9]. But none of these Unix holdouts have conquered everything from Web servers to smartphones in the way Linux has.) I'd be delighted to hear them.
--------------------------------------------------------------------------------
via: http://thevarguy.com/open-source-application-software-companies/050415/open-source-history-why-did-linux-succeed
作者:[Christopher Tozzi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://thevarguy.com/author/christopher-tozzi
[1]:http://gnu.org/
[2]:http://www.gnu.org/software/hurd/history/hurd-announce
[3]:http://thevarguy.com/open-source-application-software-companies/042915/open-source-history-tracing-origins-hacker-culture-and-ha
[4]:http://thevarguy.com/open-source-application-software-companies/042715/unix-and-personal-computers-reinterpreting-origins-linux
[5]:http://thevarguy.com/open-source-application-software-companies/042015/30-years-hurd-lives-gnu-updates-open-source-
[6]:http://www.catb.org/esr/writings/cathedral-bazaar/cathedral-bazaar/
[7]:https://groups.google.com/forum/#!topic/comp.os.minix/dlNtH7RRrGA[1-25]
[8]:http://www.redhat.com/
[9]:http://thevarguy.com/open-source-application-software-companies/032614/red-hat-grants-certification-award-unix-linux-migration-a

@ -1,3 +1,5 @@
translating by wwy-hust
Web Caching Basics: Terminology, HTTP Headers, and Caching Strategies
=====================================================================

@ -1,135 +0,0 @@
What are useful command-line network monitors on Linux
================================================================================
Network monitoring is a critical IT function for businesses of all sizes. The goal of network monitoring can vary. For example, the monitoring activity can be part of long-term network provisioning, security protection, performance troubleshooting, network usage accounting, and so on. Depending on its goal, network monitoring is done in many different ways, such as performing packet-level sniffing, collecting flow-level statistics, actively injecting probes into the network, parsing server logs, etc.
While there are many dedicated network monitoring systems capable of 24/7/365 monitoring, you can also leverage command-line network monitors in certain situations where a dedicated monitor is overkill. If you are a system admin, you are expected to have hands-on experience with some of the well-known CLI network monitors. Here is a list of **popular and useful command-line network monitors on Linux**.
### Packet-Level Sniffing ###
In this category, monitoring tools capture individual packets on the wire, dissect their content, and display decoded packet content or packet-level statistics. These tools conduct network monitoring from the lowest level, and as such, can possibly do the most fine-grained monitoring at the cost of network I/O and analysis efforts.
1. **dhcpdump**: a command-line DHCP traffic sniffer that captures DHCP request/response traffic and displays dissected DHCP protocol messages in a human-friendly format. It is useful when you are troubleshooting DHCP-related issues.
2. **[dsniff][1]**: a collection of command-line based sniffing, spoofing and hijacking tools designed for network auditing and penetration testing. They can sniff various information such as passwords, NFS traffic, email messages, website URLs, and so on.
3. **[httpry][2]**: an HTTP packet sniffer which captures and decodes HTTP request and response packets, and displays them in a human-readable format.
4. **IPTraf**: a console-based network statistics viewer. It displays packet-level, connection-level, interface-level and protocol-level packet/byte counters in real time. Packet capturing can be controlled by protocol filters, and its operation is fully menu-driven.
![](https://farm8.staticflickr.com/7519/16055246118_8ea182b413_c.jpg)
5. **[mysql-sniffer][3]**: a packet sniffer which captures and decodes packets associated with MySQL queries. It displays the most frequent or all queries in a human-readable format.
6. **[ngrep][4]**: grep over network packets. It can capture live packets, and match (filtered) packets against regular expressions or hexadecimal expressions. It is useful for detecting and storing any anomalous traffic, or for sniffing particular patterns of information from live traffic.
7. **[p0f][5]**: a passive fingerprinting tool which, based on packet sniffing, reliably identifies operating systems, NAT or proxy settings, network link types and various other properties associated with an active TCP connection.
8. **pktstat**: a command-line tool which analyzes live packets to display connection-level bandwidth usages as well as descriptive information of protocols involved (e.g., HTTP GET/POST, FTP, X11).
![](https://farm8.staticflickr.com/7477/16048970999_be60f74952_b.jpg)
9. **Snort**: an intrusion detection and prevention tool which can detect/prevent a variety of backdoor, botnet, phishing and spyware attacks in live traffic, based on rule-driven protocol analysis and content matching.
10. **tcpdump**: a command-line packet sniffer which is capable of capturing network packets on the wire based on filter expressions, dissecting the packets, and dumping the packet content for packet-level analysis. It is widely used for all kinds of networking-related troubleshooting, network application debugging, or [security][6] monitoring.
11. **tshark**: a command-line packet sniffing tool that comes with the Wireshark GUI program. It can capture and decode live packets on the wire, and show decoded packet content in a human-friendly fashion.
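As a concrete sketch of the capture-then-analyze workflow most of these sniffers share, here are typical tcpdump invocations. The interface name `eth0` and port 80 are assumptions, and since live capture requires root, the commands are prefixed with an `echo` dry-run variable you can clear to actually execute them:

```shell
# Dry-run guard: DRY=echo prints each command instead of running it.
# Clear it (DRY=) on a real system to perform the capture.
DRY=echo

# Capture full HTTP packets on eth0 into a file, with numeric
# addresses/ports (-nn) and no snap-length truncation (-s 0):
$DRY tcpdump -i eth0 -nn -s 0 -w http.pcap 'tcp port 80'

# Re-read the saved capture offline and print decoded packet headers:
$DRY tcpdump -nn -r http.pcap
```

The quoted `'tcp port 80'` part is a BPF filter expression; the same filter syntax works in tshark and other libpcap-based tools.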
### Flow-/Process-/Interface-Level Monitoring ###
In this category, network monitoring is done by classifying network traffic into flows, associated processes or interfaces, and collecting per-flow, per-process or per-interface statistics. The source of information can be the libpcap packet capture library or the sysfs kernel virtual filesystem. The monitoring overhead of these tools is low, but packet-level inspection capabilities are missing.
12. **bmon**: a console-based bandwidth monitoring tool which shows various per-interface information, including not only aggregate/average RX/TX statistics, but also a historical view of bandwidth usage.
![](https://farm9.staticflickr.com/8580/16234265932_87f20c5d17_b.jpg)
13. **[iftop][7]**: a bandwidth usage monitoring tool that shows bandwidth usage for individual network connections in real time. It comes with an ncurses-based interface to visualize bandwidth usage of all connections in a sorted order. It is useful for monitoring which connections are consuming the most bandwidth.
14. **nethogs**: a process monitoring tool which offers a real-time view of upload/download bandwidth usage of individual processes or programs in an ncurses-based interface. This is useful for detecting bandwidth hogging processes.
15. **netstat**: a command-line tool that shows various statistics and properties of the networking stack, such as open TCP/UDP connections, network interface RX/TX statistics, routing tables, protocol/socket statistics. It is useful when you diagnose performance and resource usage related problems of the networking stack.
16. **[speedometer][8]**: a console-based traffic monitor which visualizes the historical trend of an interface's RX/TX bandwidth usage with ncurses-drawn bar charts.
![](https://farm8.staticflickr.com/7485/16048971069_31dd573a4f_c.jpg)
17. **[sysdig][9]**: a comprehensive system-level debugging tool with a unified interface for investigating different Linux subsystems. Its network monitoring module is capable of monitoring, either online or offline, various per-process/per-host networking statistics such as bandwidth usage, number of connections/requests, etc.
18. **tcptrack**: a TCP connection monitoring tool which displays information of active TCP connections, including source/destination IP addresses/ports, TCP state, and bandwidth usage.
![](https://farm8.staticflickr.com/7507/16047703080_5fdda2e811_b.jpg)
19. **vnStat**: a command-line traffic monitor which maintains a historical view of RX/TX bandwidth usage (e.g., current, daily, monthly) on a per-interface basis. Running as a background daemon, it collects and stores interface statistics on bandwidth rate and total bytes transferred.
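The per-interface counters reported by tools like bmon and vnStat ultimately come from the kernel. As a minimal sketch (Linux-specific, assuming `/proc/net/dev` is available), you can read the raw RX/TX byte counts yourself:

```shell
# Print RX/TX byte counters for every interface from /proc/net/dev;
# the first two lines of the file are headers, hence NR > 2.
awk 'NR > 2 {
    sub(/^ +/, "")          # strip leading padding before the interface name
    split($0, f, /[: ]+/)   # f[1]=iface, f[2]=RX bytes, f[10]=TX bytes
    printf "%-10s RX %s bytes  TX %s bytes\n", f[1], f[2], f[10]
}' /proc/net/dev
```

Sampling this output twice and dividing the deltas by the sampling interval is essentially how such monitors compute bandwidth rates.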
### Active Network Monitoring ###
Unlike passive monitoring tools presented so far, tools in this category perform network monitoring by actively "injecting" probes into the network and collecting corresponding responses. Monitoring targets include routing path, available bandwidth, loss rates, delay, jitter, system settings or vulnerabilities, and so on.
20. **[dnsyo][10]**: a DNS monitoring tool which can conduct DNS lookups from open resolvers scattered across more than 1,500 different networks. It is useful when you check DNS propagation or troubleshoot DNS configuration.
21. **[iperf][11]**: a TCP/UDP bandwidth measurement utility which can measure maximum available bandwidth between two end points. It measures available bandwidth by having two hosts pump out TCP/UDP probe traffic between them either unidirectionally or bi-directionally. It is useful when you test the network capacity, or tune the parameters of network stack. A variant called [netperf][12] exists with more features and better statistics.
22. **[netcat][13]/socat**: versatile network debugging tools capable of reading from, writing to, or listening on TCP/UDP sockets. They are often used alongside other programs or scripts for backend network transfers or port listening.
23. **nmap**: a command-line port scanning and network discovery utility. It relies on a number of TCP/UDP based scanning techniques to detect open ports, live hosts, or existing operating systems on the local network. It is useful when you audit local hosts for vulnerabilities or build a host map for maintenance purposes. [zmap][14] is an alternative scanning tool with Internet-wide scanning capability.
24. **ping**: a network testing tool which works by exchanging ICMP echo and reply packets with a remote host. It is useful when you measure the round-trip time (RTT) and loss rate of a routing path, as well as test the status or firewall rules of a remote system. Variations of ping exist with a fancier interface (e.g., [noping][15]), multi-protocol support (e.g., [hping][16]) or parallel probing capability (e.g., [fping][17]).
![](https://farm8.staticflickr.com/7466/15612665344_a4bb665a5b_c.jpg)
25. **[sprobe][18]**: a command-line tool that heuristically infers the bottleneck bandwidth between a local host and any arbitrary remote IP address. It uses TCP three-way handshake tricks to estimate the bottleneck bandwidth. It is useful when troubleshooting wide-area network performance and routing related problems.
26. **traceroute**: a network discovery tool which reveals a layer-3 routing/forwarding path from a local host to a remote host. It works by sending TTL-limited probe packets and collecting ICMP responses from intermediate routers. It is useful when troubleshooting slow network connections or routing related problems. Variations of traceroute exist with better RTT statistics (e.g., [mtr][19]).
### Application Log Parsing ###
In this category, network monitoring is targeted at a specific server application (e.g., a web server or database server). Network traffic generated or consumed by a server application is monitored by analyzing its log file. Unlike the network-level monitors presented in earlier categories, tools in this category can analyze and monitor network traffic at the application level.
27. **[GoAccess][20]**: a console-based interactive viewer for Apache and Nginx web server traffic. Based on access log analysis, it presents real-time statistics for a number of metrics, including daily visits, top requests, client operating systems, client locations and client browsers, in a scrollable view.
![](https://farm8.staticflickr.com/7518/16209185266_da6c5c56eb_c.jpg)
28. **[mtop][21]**: a command-line MySQL/MariaDB server monitor which visualizes the most expensive queries and current database server load. It is useful when you optimize MySQL server performance and tune server configuration.
![](https://farm8.staticflickr.com/7472/16047570248_bc996795f2_c.jpg)
29. **[ngxtop][22]**: a traffic monitoring tool for the Nginx and Apache web servers, which visualizes web server traffic in a top-like interface. It works by parsing a web server's access log file and collecting traffic statistics for individual destinations or requests.
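The essence of what these log parsers do can be sketched with standard shell tools. The access-log lines below are made-up samples in the common log format, where field 7 is the requested URL:

```shell
# Create a tiny sample access log (common log format):
cat > /tmp/access.log <<'EOF'
10.0.0.5 - - [01/May/2015:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 512
10.0.0.6 - - [01/May/2015:10:00:01 +0000] "GET /index.html HTTP/1.1" 200 512
10.0.0.7 - - [01/May/2015:10:00:02 +0000] "GET /about.html HTTP/1.1" 200 256
EOF

# Count and rank the most requested URLs (field 7), which is the
# core of a "top requests" view in GoAccess or ngxtop:
awk '{ print $7 }' /tmp/access.log | sort | uniq -c | sort -rn
```

On this sample it ranks `/index.html` first with a count of 2; the real tools layer status codes, user agents and time windows on top of the same idea.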
### Conclusion ###
In this article, I presented a wide variety of command-line network monitoring tools, ranging from the lowest packet-level monitors to the highest application-level network monitors. Knowing which tool does what is one thing, and choosing which tool to use is another, as any single tool cannot be a universal solution for your every need. A good system admin should be able to decide which tool is right for the circumstance at hand. Hopefully the list helps with that.
You are always welcome to improve the list with your comment!
--------------------------------------------------------------------------------
via: http://xmodulo.com/useful-command-line-network-monitors-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://www.monkey.org/~dugsong/dsniff/
[2]:http://xmodulo.com/monitor-http-traffic-command-line-linux.html
[3]:https://github.com/zorkian/mysql-sniffer
[4]:http://ngrep.sourceforge.net/
[5]:http://lcamtuf.coredump.cx/p0f3/
[6]:http://xmodulo.com/recommend/firewallbook
[7]:http://xmodulo.com/how-to-install-iftop-on-linux.html
[8]:https://excess.org/speedometer/
[9]:http://xmodulo.com/monitor-troubleshoot-linux-server-sysdig.html
[10]:http://xmodulo.com/check-dns-propagation-linux.html
[11]:https://iperf.fr/
[12]:http://www.netperf.org/netperf/
[13]:http://xmodulo.com/useful-netcat-examples-linux.html
[14]:https://zmap.io/
[15]:http://noping.cc/
[16]:http://www.hping.org/
[17]:http://fping.org/
[18]:http://sprobe.cs.washington.edu/
[19]:http://xmodulo.com/better-alternatives-basic-command-line-utilities.html#mtr_link
[20]:http://goaccess.io/
[21]:http://mtop.sourceforge.net/
[22]:http://xmodulo.com/monitor-nginx-web-server-command-line-real-time.html

@ -1,3 +1,5 @@
[Translating by DongShuaike]
Installing Cisco Packet tracer in Linux
================================================================================
![](http://180016988.r.cdn77.net/wp-content/uploads/2015/01/Main_picture.png)
@ -194,4 +196,4 @@ via: http://www.unixmen.com/installing-cisco-packet-tracer-linux/
[1]:https://www.netacad.com/
[2]:https://www.dropbox.com/s/5evz8gyqqvq3o3v/Cisco%20Packet%20Tracer%206.1.1%20Linux.tar.gz?dl=0
[3]:http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html
[4]:https://www.netacad.com/

@ -1,3 +1,5 @@
[Translating by DongShuaike]
iptraf: A TCP/UDP Network Monitoring Utility
================================================================================
[iptraf][1] is an ncurses-based IP LAN monitor that generates various network statistics including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others.

@ -1,3 +1,4 @@
[translating by KayGuoWhu]
Enjoy Android Apps on Ubuntu using ARChon Runtime
================================================================================
Previously, we tried many Android app emulation tools like Genymotion, VirtualBox, the Android SDK, etc. to run Android apps. But with this new Chrome Android Runtime, we are able to run Android apps in our Chrome browser. So, here are the steps we'll need to follow to install Android apps on Ubuntu using ARChon Runtime.

@ -1,3 +1,5 @@
translating by createyuan
How to Test Your Internet Speed Bidirectionally from Command Line Using Speedtest-CLI Tool
================================================================================
We always need to check the speed of the Internet connection at home and in the office. What do we do for this? We go to websites like Speedtest.net and begin a test. The site loads JavaScript in the web browser, selects the best server based upon ping, and outputs the result. It also uses a Flash player to produce graphical results.
@ -129,4 +131,4 @@ via: http://www.tecmint.com/check-internet-speed-from-command-line-in-linux/
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/speedtest-mini-server-to-test-bandwidth-speed/

@ -1,161 +0,0 @@
[bazz222]
How to set up networking between Docker containers
================================================================================
As you may be aware, Docker container technology has emerged as a viable lightweight alternative to full-blown virtualization. There are a growing number of use cases of Docker that the industry has adopted in different contexts, for example, enabling rapid build environments, simplifying configuration of your infrastructure, isolating applications in multi-tenant environments, and so on. While you can certainly deploy an application sandbox in a standalone Docker container, many real-world use cases of Docker in production environments may involve deploying a complex multi-tier application in an ensemble of multiple containers, where each container plays a specific role (e.g., load balancer, LAMP stack, database, UI).
This raises the problem of **Docker container networking**: how can we interconnect different Docker containers, potentially spawned across different hosts, when we do not know beforehand on which host each container will be created?
One pretty neat open-source solution for this is [weave][1]. This tool makes interconnecting multiple Docker containers pretty much hassle-free. When I say this, I really mean it.
In this tutorial, I am going to demonstrate **how to set up Docker networking across different hosts using weave**.
### How Weave Works ###
![](https://farm8.staticflickr.com/7288/16662287067_27888684a7_b.jpg)
Let's first see how weave works. Weave creates a network of "peers", where each peer is a virtual router container called "weave router" residing on a distinct host. The weave routers on different hosts maintain TCP connections among themselves to exchange topology information. They also establish UDP connections among themselves to carry inter-container traffic. A weave router on each host is then connected via a bridge to all other Docker containers created on the host. When two containers on different hosts want to exchange traffic, a weave router on each host captures their traffic via a bridge, encapsulates the traffic with UDP, and forwards it to the other router over a UDP connection.
Each weave router maintains up-to-date weave router topology information, as well as containers' MAC address information (similar to a switch's MAC learning), so that it can make forwarding decisions on container traffic. Weave is able to route traffic between containers created on hosts which are not directly reachable, as long as the two hosts are interconnected via an intermediate weave router in the weave topology. Optionally, weave routers can be set to encrypt both TCP control data and UDP data traffic based on public key cryptography.
### Prerequisite ###
Before using weave on Linux, you of course need to set up a Docker environment on each host where you want to run [Docker][2] containers. Check out [these][3] [tutorials][4] on how to create Docker containers on Ubuntu or CentOS/Fedora.
Once the Docker environment is set up, install weave on Linux as follows.
$ wget https://github.com/zettio/weave/releases/download/latest_release/weave
$ chmod a+x weave
$ sudo cp weave /usr/local/bin
Make sure that /usr/local/bin is included in your PATH variable by appending the following to /etc/profile.
export PATH="$PATH:/usr/local/bin"
Repeat weave installation on every host where Docker containers will be deployed.
Weave uses TCP/UDP port 6783. If you are using a firewall, make sure that this port is not blocked.
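For instance, on a host protected by iptables, rules along these lines (shown in `iptables-save` syntax; the exact chain names and policy depend on your setup, so treat this as a sketch) would leave weave's ports open:

```
# Allow weave's control (TCP) and data (UDP) traffic on port 6783
-A INPUT -p tcp --dport 6783 -j ACCEPT
-A INPUT -p udp --dport 6783 -j ACCEPT
```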
### Launch Weave Router on Each Host ###
When you want to interconnect Docker containers across multiple hosts, the first step is to launch a weave router on every host.
On the first host, run the following command, which will create and start a weave router container.
$ sudo weave launch
The first time you run this command, it will take a couple of minutes to download a weave image before launching a router container. On successful launch, it will print the ID of a launched weave router.
To check the status of the router, use this command:
$ sudo weave status
![](https://farm9.staticflickr.com/8632/16249607573_4514790cf5_c.jpg)
Since this is the first weave router launched, there will be only one peer in the peer list.
You can also verify the launch of the weave router by using the docker command.
$ docker ps
![](https://farm8.staticflickr.com/7655/16681964438_51d8b18809_c.jpg)
On the second host, run the following command, where we specify the IP address of the first host as a peer to join.
$ sudo weave launch <first-host-IP-address>
When you check the status of the router, you will see two peers: the current host and the first host.
![](https://farm8.staticflickr.com/7608/16868571891_e66d4b8841_c.jpg)
As you launch more routers on subsequent hosts, the peer list will grow accordingly. When launching a router, just make sure that you specify any previously launched peer's IP address.
At this point, you should have a weave network up and running, which consists of multiple weave routers across different hosts.
### Interconnect Docker Containers across Multiple Hosts ###
Now it is time to launch Docker containers on different hosts, and interconnect them on a virtual network.
Let's say we want to create a private network 10.0.0.0/24 to interconnect two Docker containers. We will assign IP addresses from this subnet to the containers.
When you create a Docker container to deploy on a weave network, you need to use the weave command, not the docker command. Internally, the weave command uses the docker command to create a container, and then sets up Docker networking on it.
Here is how to create an Ubuntu container on hostA, and attach it to the 10.0.0.0/24 subnet with IP address 10.0.0.1.
hostA:~$ sudo weave run 10.0.0.1/24 -t -i ubuntu
On a successful run, it will print the ID of the created container. You can use this ID to attach to the running container and access its console as follows.
hostA:~$ docker attach <container-id>
Now move to hostB and create another container. Attach it to the same subnet (10.0.0.0/24) with a different IP address, 10.0.0.2.
hostB:~$ sudo weave run 10.0.0.2/24 -t -i ubuntu
Let's attach to the second container's console as well:
hostB:~$ docker attach <container-id>
At this point, those two containers should be able to ping each other via the other's IP address. Verify that from each container's console.
![](https://farm9.staticflickr.com/8566/16868571981_d73c8e401b_c.jpg)
If you check the interfaces of each container, you will see an interface named "ethwe" which is assigned the IP address (e.g., 10.0.0.1 or 10.0.0.2) you specified.
![](https://farm8.staticflickr.com/7286/16681964648_013f9594b1_b.jpg)
### Other Advanced Usages of Weave ###
Weave offers a number of pretty neat features. Let me briefly cover a few here.
#### Application Isolation ####
Using weave, you can create multiple virtual networks and dedicate each network to a distinct application. For example, create 10.0.0.0/24 for one group of containers, and 10.10.0.0/24 for another group of containers, and so on. Weave automatically takes care of provisioning these networks and isolating container traffic on each network. Going further, you can flexibly detach a container from one network and attach it to another network without restarting it. For example:
First launch a container on 10.0.0.0/24:
$ sudo weave run 10.0.0.2/24 -t -i ubuntu
Detach the container from 10.0.0.0/24:
$ sudo weave detach 10.0.0.2/24 <container-id>
Re-attach the container to another network 10.10.0.0/24:
$ sudo weave attach 10.10.0.2/24 <container-id>
![](https://farm8.staticflickr.com/7639/16247212144_c31a49714d_c.jpg)
Now this container should be able to communicate with other containers on 10.10.0.0/24. This is a pretty useful feature when network information is not available at the time you create a container.
#### Integrate Weave Networks with Host Network ####
Sometimes you may need to allow containers on a virtual weave network to access the physical host network. Conversely, hosts may want to access containers on a weave network. To support this requirement, weave allows weave networks to be integrated with the host network.
For example, on hostA where a container is running on network 10.0.0.0/24, run the following command.
hostA:~$ sudo weave expose 10.0.0.100/24
This will assign the IP address 10.0.0.100 to hostA, so that hostA itself is also connected to the 10.0.0.0/24 network. Obviously, you need to choose an IP address which is not used by any other container on the network.
At this point, hostA should be able to access any containers on 10.0.0.0/24, whether or not the containers are residing on hostA. Pretty neat!
### Conclusion ###
As you can see, weave is a pretty useful Docker networking tool. This tutorial only offers a glimpse of [its powerful features][5]. If you are more ambitious, you can try its multi-hop routing, which can be pretty useful in multi-cloud environments; dynamic re-routing, which is a neat fault-tolerance feature; or even its distributed DNS service, which allows you to name containers on weave networks. If you decide to use this gem in your environment, feel free to share your use case!
--------------------------------------------------------------------------------
via: http://xmodulo.com/networking-between-docker-containers.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:https://github.com/zettio/weave
[2]:http://xmodulo.com/recommend/dockerbook
[3]:http://xmodulo.com/manage-linux-containers-docker-ubuntu.html
[4]:http://xmodulo.com/docker-containers-centos-fedora.html
[5]:http://zettio.github.io/weave/features.html

@ -1,3 +1,5 @@
FSSlc translating
Conky The Ultimate X Based System Monitor Application
================================================================================
Conky is a system monitor application written in the C programming language and released under the GNU General Public License and BSD License. It is available for the Linux and BSD operating systems. The application is X (GUI) based and was originally forked from [Torsmo][1].
@ -144,4 +146,4 @@ via: http://www.tecmint.com/install-conky-in-ubuntu-debian-fedora/
[3]:http://ubuntuforums.org/showthread.php?t=281865
[4]:http://conky.sourceforge.net/screenshots.html
[5]:http://ubuntuforums.org/showthread.php?t=281865/
[6]:http://conky.sourceforge.net/

@ -1,96 +0,0 @@
Translating by ZTinoZ
Install Inkscape - Open Source Vector Graphic Editor
================================================================================
Inkscape is an open source vector graphics editing tool which uses the Scalable Vector Graphics (SVG) format, and that makes it different from its competitors like Xara X, Corel Draw and Adobe Illustrator. SVG is a widely deployed, royalty-free graphics format developed and maintained by the W3C SVG Working Group. It is a cross-platform tool which runs fine on Linux, Windows and Mac OS.
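Since SVG is plain, human-readable XML, it helps to see what Inkscape actually edits under the hood. A minimal SVG document (the dimensions and colors here are arbitrary examples) looks like this:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <!-- a filled circle centered on the 100x100 canvas -->
  <circle cx="50" cy="50" r="40" fill="#3465a4" stroke="#204a87" stroke-width="2"/>
</svg>
```

Because the format is text, SVG files diff cleanly in version control and can be generated or post-processed by scripts, which is part of why a royalty-free W3C format matters.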
Inkscape development started in 2003. Its bug tracking system was initially hosted on SourceForge but was migrated to Launchpad afterwards. Its current latest stable version is 0.91. It is under continuous development with regular bug fixes, and we will be reviewing its prominent features and installation process in this article.
### Salient Features ###
Let's review the outstanding features of this application by category.
#### Creating Objects ####
- Drawing freehand lines of different colors, sizes and shapes with the pencil tool, straight lines and curves with the Bezier (pen) tool, applying freehand calligraphic strokes with the calligraphy tool, etc.
- Creating, selecting, editing and formatting text with the text tool. Manipulating text in plain text boxes, on paths or in shapes
- Helps draw various shapes like rectangles, ellipses, circles, arcs, polygons, stars and spirals, and then resize, rotate and modify them (turn sharp edges round)
- Create and embed bitmaps with simple commands
#### Object manipulation ####
- Skewing, moving, scaling and rotating objects through interactive manipulation or by specifying numeric values
- Raising and lowering objects in the Z-order
- Grouping and ungrouping objects to create a virtual scope for editing or manipulation
- Layers form a hierarchical tree and can be locked or rearranged for various manipulations
- Distribution and alignment commands
#### Fill and Stroke ####
- Copy/paste styles
- Color picker tool
- Selecting colors on a continuous plot based on RGB, HSL, CMYK, the color wheel and CMS
- The gradient editor helps create and manage multi-stop gradients
- Define an image or selection and use it as a pattern fill
- Dashed strokes can be used, with several predefined dash patterns
- Beginning, middle and end marks through path markers
#### Operation on Paths ####
- Node Editing: Moving nodes and Bezier handles, node alignment and distribution etc
- Boolean path operations such as union, difference, intersection and exclusion
- Simplifying paths with variable levels or thresholds
- Path insetting and outsetting along with link and offset objects
- Converting bitmap images into paths (color and monochrome paths) through path tracing
#### Text manipulation ####
- All installed outline fonts can be used, including for right-to-left scripts
- Formatting text, letter spacing, line spacing and kerning
- Text on path and on shapes where both text and path or shapes can be edited or modified
#### Rendering ####
- Inkscape fully supports anti-aliased display, a technique that reduces or eliminates aliasing by shading the pixels along borders.
- Support for alpha transparency display and PNG export
### Install Inkscape on Ubuntu 14.04 and 14.10 ###
In order to install Inkscape on Ubuntu, we first need to [add its stable Personal Package Archive][1] (PPA) to the Advanced Package Tool (APT) repository list. Launch the terminal and run the following command to add the PPA.
sudo add-apt-repository ppa:inkscape.dev/stable
![PPA Inkscape](http://blog.linoxide.com/wp-content/uploads/2015/03/PPA-Inkscape.png)
Once the PPA has been added, we need to update the APT package index using the following command.
sudo apt-get update
![Update APT](http://blog.linoxide.com/wp-content/uploads/2015/03/Update-APT2.png)
After updating the repository we are ready to install Inkscape, which is accomplished with the following command.
sudo apt-get install inkscape
![Install Inkscape](http://blog.linoxide.com/wp-content/uploads/2015/03/Install-Inkscape.png)
Congratulations, Inkscape is now installed and you are all set for image editing with this feature-rich application.
![Inkscape Main](http://blog.linoxide.com/wp-content/uploads/2015/03/Inkscape-Main1.png)
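Besides the GUI, the 0.x series of Inkscape can also be driven from the command line, which is handy for batch work. Below is a small sketch; the `--export-png` and `--export-width` flags are the 0.x-series syntax, and the snippet skips gracefully if Inkscape is not installed.

```shell
# Skip gracefully if Inkscape is not installed on this machine
command -v inkscape >/dev/null 2>&1 || { echo "inkscape not found"; exit 0; }

# Sanity check: print the installed version
inkscape --version

# Create a minimal SVG, then export it to a 300-pixel-wide PNG without opening the GUI
printf '<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10"><rect width="10" height="10" fill="red"/></svg>' > drawing.svg
inkscape drawing.svg --export-png=drawing.png --export-width=300
```

This makes it easy to convert whole folders of SVG files from a shell loop instead of opening each one by hand.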
### Conclusion ###
Inkscape is a feature-rich graphics editing tool that empowers its users with state-of-the-art capabilities. It is an open source application, freely available for installation and customization, and supports a wide range of file formats including but not limited to JPEG, PNG, GIF and PDF. Visit its [official website][2] for the latest news and updates regarding this application.
--------------------------------------------------------------------------------
via: http://linoxide.com/tools/install-inkscape-open-source-vector-graphic-editor/
作者:[Aun Raza][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunrz/
[1]:https://launchpad.net/~inkscape.dev/+archive/ubuntu/stable
[2]:https://inkscape.org/en/
@ -1,57 +0,0 @@
Vic020
Linux FAQs with Answers--How to configure PCI-passthrough on virt-manager
================================================================================
> **Question**: I would like to dedicate a physical network interface card to one of my guest VMs created by KVM. For that, I am trying to enable PCI passthrough of the NIC for the VM. How can I add a PCI device to a guest VM with PCI passthrough on virt-manager?
Modern hypervisors enable efficient resource sharing among multiple guest operating systems by virtualizing and emulating hardware resources. However, such virtualized resource sharing may not always be desirable, or should even be avoided, when VM performance is a great concern or when a VM requires full DMA control of a hardware device. One technique used in this case is so-called "PCI passthrough," where a guest VM is granted exclusive access to a PCI device (e.g., a network/sound/video card). Essentially, PCI passthrough bypasses the virtualization layer and directly exposes a PCI device to a VM. No other VM can access the PCI device.
### Requirement for Enabling PCI Passthrough ###
If you want to enable PCI passthrough for an HVM guest (e.g., a fully-virtualized VM created by KVM), your system (both CPU and motherboard) must meet the following requirements. If your VM is paravirtualized (created by Xen), you can skip this step.
In order to enable PCI passthrough for an HVM guest VM, your system must support **VT-d** (for Intel processors) or **AMD-Vi** (for AMD processors). Intel's VT-d ("Intel Virtualization Technology for Directed I/O") is available on most high-end Nehalem processors and their successors (e.g., Westmere, Sandy Bridge, Ivy Bridge). Note that VT-d and VT-x are two independent features. A list of Intel/AMD processors with VT-d/AMD-Vi capability can be found [here][1].
After you verify that your host hardware supports VT-d/AMD-Vi, you then need to do two things on your system. First, make sure that VT-d/AMD-Vi is enabled in the system BIOS. Second, enable IOMMU on your kernel during booting. The IOMMU service, which is provided by VT-d/AMD-Vi, protects host memory from access by a guest VM and is a requirement for PCI passthrough for fully-virtualized guest VMs.
To enable IOMMU on the kernel for Intel processors, pass the "**intel_iommu=on**" boot parameter to your Linux kernel. Follow [this tutorial][2] to find out how to add a kernel boot parameter via GRUB.
After configuring the boot parameter, reboot your host.
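After the reboot, a quick way to double-check that the parameter actually reached the running kernel is to inspect the kernel command line. A small sketch (this assumes an Intel host; on AMD you would look for the corresponding amd_iommu parameter instead):

```shell
#!/bin/bash
# Check whether the running kernel was booted with the IOMMU parameter
if grep -qw 'intel_iommu=on' /proc/cmdline; then
    echo "intel_iommu=on is active on this boot"
else
    echo "intel_iommu=on not found; add it via GRUB and reboot"
fi
```

If the parameter is missing, edit your GRUB configuration as described above and reboot before attempting the passthrough.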
### Add a PCI Device to a VM on Virt-Manager ###
Now we are ready to enable PCI passthrough. In fact, assigning a PCI device to a guest VM is straightforward on virt-manager.
Open the VM's settings on virt-manager, and click on "Add Hardware" button on the left sidebar.
Choose a PCI device to assign from the PCI device list, and click the "Finish" button.
![](https://farm8.staticflickr.com/7587/17015584385_db49e96372_c.jpg)
Finally, power on the guest. At this point, the host PCI device should be directly visible inside the guest VM.
### Troubleshooting ###
If you see either of the following errors while powering on a guest VM, the error may be because VT-d (or IOMMU) is not enabled on your host.
Error starting domain: unsupported configuration: host doesn't support passthrough of host PCI devices
----------
Error starting domain: Unable to read from monitor: Connection reset by peer
Make sure that "**intel_iommu=on**" boot parameter is passed to the kernel during boot as described above.
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/pci-passthrough-virt-manager.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://wiki.xenproject.org/wiki/VTdHowTo
[2]:http://xmodulo.com/add-kernel-boot-parameters-via-grub-linux.html
@ -1,3 +1,4 @@
Translating by ZTinoZ
15 Things to Do After Installing Ubuntu 15.04 Desktop
================================================================================
This tutorial is intended for beginners and covers some basic steps on what to do after you have installed Ubuntu 15.04 “Vivid Vervet” Desktop version on your computer in order to customize the system and install basic programs for daily usage.
@ -295,4 +296,4 @@ via: http://www.tecmint.com/things-to-do-after-installing-ubuntu-15-04-desktop/
[a]:http://www.tecmint.com/author/cezarmatei/
[1]:http://www.viber.com/en/products/linux
[2]:http://ubuntu-tweak.com/
[2]:http://ubuntu-tweak.com/
@ -1,61 +0,0 @@
How To Install Visual Studio Code On Ubuntu
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/05/Install-Visual-Studio-Code-in-Ubuntu.jpeg)
Microsoft has done the unexpected by [releasing Visual Studio Code][1] for all major desktop platforms, including Linux. If you are a web developer who happens to be using Ubuntu, you can **easily install Visual Studio Code in Ubuntu**.
We will be using [Ubuntu Make][2] for installing Visual Studio Code in Ubuntu. Ubuntu Make, previously known as Ubuntu Developer Tools Center, is a command line utility that allows you to easily install various development tools, languages and IDEs. You can easily [install Android Studio][3] and other popular IDEs such as Eclipse with Ubuntu Make. In this tutorial we shall see **how to install Visual Studio Code in Ubuntu with Ubuntu Make**.
### Install Microsoft Visual Studio Code in Ubuntu ###
Before installing Visual Studio Code, we need to install Ubuntu Make first. Though Ubuntu Make is available in the Ubuntu 15.04 repository, **you'll need Ubuntu Make 0.7 for Visual Studio Code**. You can get the latest Ubuntu Make from its official PPA. The PPA is available for Ubuntu 14.04, 14.10 and 15.04, and it **is only available for the 64-bit platform**.
Open a terminal and use the following commands to install Ubuntu Make via official PPA:
sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make
Once you have installed Ubuntu Make, use the command below to install Visual Studio Code:
umake web visual-studio-code
You'll be asked to provide the path where it will be installed:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/05/Visual_Studio_Code_Ubuntu_1.jpeg)
After presenting a whole lot of terms and conditions, it will ask for your permission to install Visual Studio Code. Press `a` at this screen:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/05/Visual_Studio_Code_Ubuntu_2.jpeg)
Once you do that, the download and installation will start. When it is installed you will see that the Visual Studio Code icon has already been locked to the Unity Launcher. Just click on it to run it. This is how Visual Studio Code looks in Ubuntu 15.04 Unity:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/05/Visual_Studio_Code_Ubuntu.jpeg)
### Uninstall Visual Studio Code from Ubuntu ###
To uninstall Visual Studio Code, we'll use the same command line tool, umake. Just run the following command in a terminal:
umake web visual-studio-code --remove
If you do not want to use Ubuntu Make, you can install Visual Studio Code by downloading the files from Microsoft:
- [Download Visual Studio Code for Linux][4]
See how easy it is to install Visual Studio Code in Ubuntu, all thanks to Ubuntu Make. I hope this tutorial helped you. Feel free to drop a comment if you have any questions or suggestions.
--------------------------------------------------------------------------------
via: http://itsfoss.com/install-visual-studio-code-ubuntu/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://www.geekwire.com/2015/microsofts-visual-studio-expands-to-mac-and-linux-with-new-code-development-tool/
[2]:https://wiki.ubuntu.com/ubuntu-make
[3]:http://itsfoss.com/install-android-studio-ubuntu-linux/
[4]:https://code.visualstudio.com/Download
@ -1,3 +1,4 @@
translating wi-cuckoo
Useful Commands to Create Commandline Chat Server and Remove Unwanted Packages in Linux
================================================================================
Here we are with the next part of Linux Command Line Tips and Tricks. If you missed our previous post on Linux Tricks you may find it here.
@ -180,4 +181,4 @@ via: http://www.tecmint.com/linux-commandline-chat-server-and-remove-unwanted-pa
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/5-linux-command-line-tricks/
[1]:http://www.tecmint.com/5-linux-command-line-tricks/
@ -1,40 +0,0 @@
Bodhi Linux Introduces Moksha Desktop
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/05/Bodhi_Linux.jpg)
Ubuntu based lightweight Linux distribution [Bodhi Linux][1] is working on a desktop environment of its own. This new desktop environment will be called Moksha (Sanskrit for complete freedom). Moksha will be replacing the usual [Enlightenment desktop environment][2].
### Why Moksha instead of Enlightenment? ###
Jeff Hoogland of Bodhi Linux [says][3] that he has been unhappy with newer versions of Enlightenment in the recent past. Up to E17, Enlightenment was very stable and complemented the needs of a lightweight Linux OS well, but E18 was so full of bugs that Bodhi Linux skipped it altogether.
While the latest [Bodhi Linux 3.0.0 release][4] uses E19 (except in the legacy mode meant for older hardware, which still uses E17), Jeff is not happy with E19 either. He writes:
> On top of the performance issues, E19 did not allow for me personally to have the same workflow I enjoyed under E17 due to features it no longer had. Because of this I had changed to using the E17 on all of my Bodhi 3 computers even my high end ones. This got me to thinking how many of our existing Bodhi users felt the same way, so I [opened a discussion about it on our user forums][5].
### Moksha is continuation of the E17 desktop ###
Moksha will be a continuation of Bodhi's favorite E17 desktop. Jeff further mentions:
> We will start by integrating all of the Bodhi changes we have simply been patching into the source code over the years and fixing the few issues the desktop has. Once this is done we will begin back porting a few of the more useful features E18 and E19 introduced to the Enlightenment desktop and finally, we will introduce a few new things we think will improve the end user experience.
### When will Moksha release? ###
The next update to Bodhi will be Bodhi 3.1.0, due in August this year. This new release will bring Moksha to all of its default ISOs. Let's wait and see whether Moksha turns out to be a good decision.
--------------------------------------------------------------------------------
via: http://itsfoss.com/bodhi-linux-introduces-moksha-desktop/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://www.bodhilinux.com/
[2]:https://www.enlightenment.org/
[3]:http://www.bodhilinux.com/2015/04/28/introducing-the-moksha-desktop/
[4]:http://itsfoss.com/bodhi-linux-3/
[5]:http://forums.bodhilinux.com/index.php?/topic/12322-e17-vs-e19-which-are-you-using-and-why/

First Step Guide for Learning Shell Scripting
================================================================================
![](http://blog.linoxide.com/wp-content/uploads/2015/04/myfirstshellscript.jpg)
Usually when people say "shell scripting" they have in mind bash, ksh, sh, ash or a similar Linux/Unix scripting language. Scripting is another way to communicate with the computer. Using a graphical window interface (whether on Windows or Linux does not matter) a user can move the mouse and click on various objects like buttons, lists, check boxes and so on. But it is a very inconvenient way, which requires user participation and accuracy each time they would like to ask the computer / server to do the same task (let's say to convert photos or download new movies, mp3s etc). To make all these things easily accessible and automated we can use shell scripts.
Some programming languages like Pascal, FoxPro, C and Java need to be compiled before they can be executed. They need an appropriate compiler to make our code do some job.
Other programming languages like PHP, JavaScript and VisualBasic do not need a compiler. Instead they need an interpreter, and we can run our program without compiling the code.
Shell scripts are also interpreted, but they are usually used to call external compiled programs, then capture their outputs and exit codes and act accordingly.
One of the most popular shell scripting languages in the Linux world is bash. And I think (this is my own opinion) this is because the bash shell lets users easily navigate through the history of previously executed commands by default, in contrast to ksh, which requires some tuning in .profile or remembering a "magic" key combination to walk through history and amend commands.
OK, I think this is enough for an introduction, and I leave it to you to judge which environment is most comfortable for you. From now on I will speak only about bash and scripting. In the following examples I will use CentOS 6.6 and bash 4.1.2. Just make sure you have the same or a newer version.
### Shell Script Streams ###
Shell scripting is somewhat similar to a conversation between several persons. Just imagine that all commands are like persons who are able to do something if you ask them properly. Let's say you would like to write a document. First of all you need paper, then you need to dictate the content to someone to write it down, and finally you want to store it somewhere. Or say you would like to build a house: first you ask the appropriate persons to clean up the space. After they say "it's done", other engineers can build the walls for you. And finally, when those engineers also say "it's done", you can ask the painters to paint your house. And what would happen if you asked the painters to color the walls before they are built? I think they would start to complain. Almost all commands, like those persons, can speak: when they do their job without any issues they speak to "standard output", and if they can't do what you ask they speak to "standard error". Finally, all commands listen to you through "standard input".
A quick example: when you open a Linux terminal and write some text, you are speaking to bash through "standard input". So let's ask the bash shell **who am i**:
[root@localhost ~]# who am i <--- you speak to the bash shell through standard input
root pts/0 2015-04-22 20:17 (192.168.1.123) <--- the bash shell answers you through standard output
Now let's ask something that bash will not understand:
[root@localhost ~]# blablabla <--- and again, you speak through standard input
-bash: blablabla: command not found <--- bash complains through standard error
The first word before ":" is usually the command that is complaining to you. Actually, each of these streams has its own index number:
- standard input (**stdin**) - 0
- standard output (**stdout**) - 1
- standard error (**stderr**) - 2
If you really would like to know to which output stream a command said something, you need to redirect (using the "greater than" ">" symbol after the command and the stream index) that speech to a file:
[root@localhost ~]# blablabla 1> output.txt
-bash: blablabla: command not found
In this example we tried to redirect stream 1 (**stdout**) to the file named output.txt. Let's look at the content of that file. We use the cat command for that:
[root@localhost ~]# cat output.txt
[root@localhost ~]#
It seems to be empty. OK, now let's try to redirect stream 2 (**stderr**):
[root@localhost ~]# blablabla 2> error.txt
[root@localhost ~]#
OK, we see that the complaints are gone. Let's check the file:
[root@localhost ~]# cat error.txt
-bash: blablabla: command not found
[root@localhost ~]#
Exactly! We see that all the complaints were recorded to the error.txt file.
Sometimes commands produce **stdout** and **stderr** simultaneously. To redirect them to separate files we can use the following syntax:
command 1>out.txt 2>err.txt
To shorten this syntax a bit we can skip the "1", as by default the **stdout** stream is the one redirected:
command >out.txt 2>err.txt
OK, let's try to do something "bad": let's remove file1 and folder1 with the rm command:
[root@localhost ~]# rm -vf folder1 file1 > out.txt 2>err.txt
Now check our output files:
[root@localhost ~]# cat out.txt
removed `file1'
[root@localhost ~]# cat err.txt
rm: cannot remove `folder1': Is a directory
[root@localhost ~]#
As we see, the streams were separated into different files. Sometimes this is not handy, as usually we want to see the sequence in which errors appeared relative to other actions. For that we can redirect both streams to the same file:
command >>out_err.txt 2>>out_err.txt
Note: Please notice that I use ">>" instead of ">". It allows us to append to the file instead of overwriting it.
We can redirect one stream to another:
command >out_err.txt 2>&1
Let me explain. All stdout of the command will be redirected to out_err.txt. stderr will be redirected to stream 1, which (as I already explained above) is redirected to the same file. Let's see an example:
[root@localhost ~]# rm -fv folder2 file2 >out_err.txt 2>&1
[root@localhost ~]# cat out_err.txt
rm: cannot remove `folder2': Is a directory
removed `file2'
[root@localhost ~]#
Looking at the combined output we can state that, first of all, the **rm** command tried to remove folder2 and failed, as Linux requires the **-r** key for the **rm** command to allow removing folders. Second, file2 was removed. By providing the **-v** (verbose) key for the **rm** command, we ask it to inform us about each removed file or folder.
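The whole experiment can be replayed in one self-contained snippet; here the folder and file are created first, so it can be run anywhere (the names are the same as above):

```shell
#!/bin/bash
mkdir -p folder2            # a directory: rm without -r will refuse it
touch file2                 # a plain file: rm will remove it

# Capture both streams in one file; "|| true" keeps the snippet going
# even though rm exits non-zero after failing on the directory
rm -fv folder2 file2 > out_err.txt 2>&1 || true

cat out_err.txt             # both the complaint and the "removed" message
```

Running it reproduces the combined output discussed above without needing any pre-existing files.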
This is almost all you need to know about redirection. I say almost, because there is one more very important redirection, called "piping". Using the | (pipe) symbol we usually redirect the **stdout** stream.
Let's say we have a text file:
[root@localhost ~]# cat text_file.txt
This line does not contain H e l l o word
This line contains Hello
This one also contains Hello
This one does not, due to HELLO being all capital
Hello bash world!
and we need to find the lines containing the word "Hello". Linux has the **grep** command for that:
[root@localhost ~]# grep Hello text_file.txt
This line contains Hello
This one also contains Hello
Hello bash world!
[root@localhost ~]#
This is fine when we have a file and would like to search in it. But what do we do if we need to find something in the output of another command? Yes, of course we can redirect the output to a file and then look in it:
[root@localhost ~]# fdisk -l>fdisk.out
[root@localhost ~]# grep "Disk /dev" fdisk.out
Disk /dev/sda: 8589 MB, 8589934592 bytes
Disk /dev/mapper/VolGroup-lv_root: 7205 MB, 7205814272 bytes
Disk /dev/mapper/VolGroup-lv_swap: 855 MB, 855638016 bytes
[root@localhost ~]#
If you are going to grep for something containing white space, wrap it in double quotes!
Note: The fdisk command shows information about Linux disk drives.
As we see, this way is not very handy, as we would soon litter the place with temporary files. For that we can use pipes. They allow us to redirect one command's **stdout** stream to another command's **stdin**:
[root@localhost ~]# fdisk -l | grep "Disk /dev"
Disk /dev/sda: 8589 MB, 8589934592 bytes
Disk /dev/mapper/VolGroup-lv_root: 7205 MB, 7205814272 bytes
Disk /dev/mapper/VolGroup-lv_swap: 855 MB, 855638016 bytes
[root@localhost ~]#
As we see, we get the same result without any temporary files. We have redirected **fdisk's stdout** to **grep's stdin**.
**Note** : Pipe redirection is always from left to right.
There are several other redirections but we will speak about them later.
### Displaying custom messages in the shell ###
As we already know, communication with and within the shell usually goes like a dialog. So let's create some real script that will also speak with us. It will let you learn some simple commands and better understand the scripting concept.
Imagine we work at some company as help desk managers and we would like to create a shell script to register call information: phone number, user name and a brief description of the issue. We are going to store it in the plain text file data.txt for future statistics. The script itself should work in a dialog way to make life easy for help desk workers. So first of all we need to display the questions. For displaying messages there are the echo and printf commands. Both of them display messages, but printf is more powerful, as we can nicely format the output to align it to the right or left or leave dedicated space for a message. Let's start with a simple one. To create the file please use your favorite text editor (kate, nano, vi, ...) and create a file named note.sh with this command inside:
echo "Phone number ?"
### Script execution ###
After you have saved the file, we can run it with the bash command, providing our file as an argument:
[root@localhost ~]# bash note.sh
Phone number ?
Actually, executing the script this way is not very handy. It would be more comfortable to just execute the script without the **bash** command as a prefix. To make it executable we can use the **chmod** command:
[root@localhost ~]# ls -la note.sh
-rw-r--r--. 1 root root 22 Apr 23 20:52 note.sh
[root@localhost ~]# chmod +x note.sh
[root@localhost ~]# ls -la note.sh
-rwxr-xr-x. 1 root root 22 Apr 23 20:52 note.sh
[root@localhost ~]#
![set permission script file](http://blog.linoxide.com/wp-content/uploads/2015/04/Capture.png)
**Note**: The ls command displays the files in the current folder. Adding the -la keys makes it display a bit more information about the files.
As we see, before the **chmod** command execution the script had only read (r) and write (w) permissions. After **chmod +x** it gained execute (x) permissions. (I will describe permissions in more detail in the next article.) Now we can simply run it:
[root@localhost ~]# ./note.sh
Phone number ?
Before the script name I have added the ./ combination. In the Unix world, . (dot) means the current position (current folder) and / (slash) is the folder separator. (In Windows we use \ (backslash) for the same.) So this whole combination means: "execute the note.sh script from the current folder". I think it will be clearer if I run this script with the full path:
[root@localhost ~]# /root/note.sh
Phone number ?
[root@localhost ~]#
It also works.
Everything would be fine if all Linux users had the same default shell. If we simply execute this script, the default user shell will be used to parse the script content and run the commands. Different shells have slightly different syntax, internal commands, etc. So to guarantee that **bash** will be used for our script, we should add **#!/bin/bash** as the first line. This way the default user shell will call **/bin/bash**, and only then will the shell commands in the script be executed:
[root@localhost ~]# cat note.sh
#!/bin/bash
echo "Phone number ?"
Only now can we be 100% sure that **bash** will be used to parse our script content. Let's move on.
### Reading the inputs ###
After we have displayed the message, the script should wait for an answer from the user. For that there is the **read** command:
#!/bin/bash
echo "Phone number ?"
read phone
After execution, the script will wait for user input until they press the [ENTER] key:
[root@localhost ~]# ./note.sh
Phone number ?
12345 <--- here is my input
[root@localhost ~]#
Everything you input will be stored in the variable **phone**. To display the value of the variable we can use the same **echo** command:
[root@localhost ~]# cat note.sh
#!/bin/bash
echo "Phone number ?"
read phone
echo "You have entered $phone as a phone number"
[root@localhost ~]# ./note.sh
Phone number ?
123456
You have entered 123456 as a phone number
[root@localhost ~]#
In the **bash** shell we use the **$** (dollar) sign to indicate a variable, except when reading into a variable and in a few other situations (described later).
OK, now we are ready to add the rest of the questions:
#!/bin/bash
echo "Phone number?"
read phone
echo "Name?"
read name
echo "Issue?"
read issue
[root@localhost ~]# ./note.sh
Phone number?
123
Name?
Jim
Issue?
script is not working.
[root@localhost ~]#
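By the way, a script built from read commands can also be fed non-interactively by piping the answers into it, one per line. This is a handy way to test it. A small sketch using a throwaway copy of the script (the file name note_test.sh is just for illustration):

```shell
#!/bin/bash
# Write a throwaway copy of the script, then pipe three canned answers into it
cat > note_test.sh <<'EOF'
#!/bin/bash
echo "Phone number?"
read phone
echo "Name?"
read name
echo "Issue?"
read issue
echo "Recorded: $phone/$name/$issue"
EOF

printf '123\nJim\nBroken keyboard\n' | bash note_test.sh
```

The last line printed is `Recorded: 123/Jim/Broken keyboard`, exactly as if the answers had been typed at the prompts.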
### Using stream redirection ###
Perfect! All that is left is to redirect everything to the file data.txt. As the field separator we are going to use the / (slash) symbol.
**Note**: You can choose whichever separator you think is best, but make sure the content will never contain that symbol. Otherwise it would cause extra fields in the line.
Do not forget to use ">>" instead of ">", as we want to append the output to the end of the file!
[root@localhost ~]# tail -2 note.sh
read issue
echo "$phone/$name/$issue">>data.txt
[root@localhost ~]# ./note.sh
Phone number?
987
Name?
Jimmy
Issue?
Keybord issue.
[root@localhost ~]# cat data.txt
987/Jimmy/Keybord issue.
[root@localhost ~]#
**Note**: The **tail** command with the **-n** key displays the last n lines of a file.
Bingo! Let's run it once again:
[root@localhost ~]# ./note.sh
Phone number?
556
Name?
Janine
Issue?
Mouse was broken.
[root@localhost ~]# cat data.txt
987/Jimmy/Keybord issue.
556/Janine/Mouse was broken.
[root@localhost ~]#
Our file is growing. Let's add the date at the front of each line. This will be useful later, when playing with the data while calculating statistics. For that we can use the date command and give it a format, as I do not like the default one:
[root@localhost ~]# date
Thu Apr 23 21:33:14 EEST 2015 <---- default output of the date command
[root@localhost ~]# date "+%Y.%m.%d %H:%M:%S"
2015.04.23 21:33:18 <---- formatted output
There are several ways to read a command's output into a variable. In this simple situation we will use ` (back quotes):
[root@localhost ~]# cat note.sh
#!/bin/bash
now=`date "+%Y.%m.%d %H:%M:%S"`
echo "Phone number?"
read phone
echo "Name?"
read name
echo "Issue?"
read issue
echo "$now/$phone/$name/$issue">>data.txt
[root@localhost ~]# ./note.sh
Phone number?
123
Name?
Jim
Issue?
Script hanging.
[root@localhost ~]# cat data.txt
2015.04.23 21:38:56/123/Jim/Script hanging.
[root@localhost ~]#
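A side note before we continue: besides back quotes, bash also supports the `$( ... )` form of command substitution, which is easier to nest and to read. The two assignments below are equivalent (a small sketch):

```shell
#!/bin/bash
now=`date "+%Y.%m.%d %H:%M:%S"`     # back-quote form, as used in this article
now=$(date "+%Y.%m.%d %H:%M:%S")    # equivalent, nestable $( ) form
echo "Recorded at: $now"
```

Both forms run the command and substitute its output; I stick to back quotes here to match the examples above, but either works.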
Hmmm... Our script looks a bit ugly. Let's prettify it a bit. If you read the manual for the **read** command, you will find that read can also display messages itself. For this we use the -p key and a message:
[root@localhost ~]# cat note.sh
#!/bin/bash
now=`date "+%Y.%m.%d %H:%M:%S"`
read -p "Phone number: " phone
read -p "Name: " name
read -p "Issue: " issue
echo "$now/$phone/$name/$issue">>data.txt
You can find a lot of interesting details about each command directly from the console. Just type: **man read, man echo, man date, man ....**
Agreed, it looks much better!
[root@localhost ~]# ./note.sh
Phone number: 321
Name: Susane
Issue: Mouse was stolen
[root@localhost ~]# cat data.txt
2015.04.23 21:38:56/123/Jim/Script hanging.
2015.04.23 21:43:50/321/Susane/Mouse was stolen
[root@localhost ~]#
And the cursor is right after the message (not on a new line), which makes more sense.
### Loop ###
Time to improve our script. If a user works with calls all day, it is not very handy to run the script for each call. Let's put all these actions in a never-ending loop:
[root@localhost ~]# cat note.sh
#!/bin/bash
while true
do
read -p "Phone number: " phone
now=`date "+%Y.%m.%d %H:%M:%S"`
read -p "Name: " name
read -p "Issue: " issue
echo "$now/$phone/$name/$issue">>data.txt
done
I have swapped the **read phone** and **now=`date** lines. This is because I would like to get the time right after the phone number has been entered. If I had left it as the first line in the loop, the **now** variable would get the time right after the previous data was stored in the file, and that is not good, as the next call could come 20 minutes later or so.
[root@localhost ~]# ./note.sh
Phone number: 123
Name: Jim
Issue: Script still not works.
Phone number: 777
Name: Daniel
Issue: I broke my monitor
Phone number: ^C
[root@localhost ~]# cat data.txt
2015.04.23 21:38:56/123/Jim/Script hanging.
2015.04.23 21:43:50/321/Susane/Mouse was stolen
2015.04.23 21:47:55/123/Jim/Script still not works.
2015.04.23 21:48:16/777/Daniel/I broke my monitor
[root@localhost ~]#
NOTE: You can exit the never-ending loop by pressing the [Ctrl]+[C] keys. The shell will display ^C for the Ctrl key.
### Using pipe redirection ###
Let's add more functionality to our "Frankenstein". I would like the script to display some statistics after each call. Let's say we want to see how many times each number has called us. For that we should cat the data.txt file:
[root@localhost ~]# cat data.txt
2015.04.23 21:38:56/123/Jim/Script hanging.
2015.04.23 21:43:50/321/Susane/Mouse was stolen
2015.04.23 21:47:55/123/Jim/Script still not works.
2015.04.23 21:48:16/777/Daniel/I broke my monitor
2015.04.23 22:02:14/123/Jimmy/New script also not working!!!
[root@localhost ~]#
Now we can redirect all this output to the **cut** command to split each line into chunks (our delimiter is "/") and print the second field:
[root@localhost ~]# cat data.txt | cut -d"/" -f2
123
321
123
777
123
[root@localhost ~]#
Then we can redirect this output to another command, **sort**:
[root@localhost ~]# cat data.txt | cut -d"/" -f2|sort
123
123
123
321
777
[root@localhost ~]#
and then pipe the sorted output to **uniq** to leave only the unique lines. To count each unique entry, just add the **-c** switch to the **uniq** command:
[root@localhost ~]# cat data.txt | cut -d"/" -f2 | sort | uniq -c
3 123
1 321
1 777
[root@localhost ~]#
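For what it's worth, the whole cat | cut | sort | uniq chain can also be collapsed into a single awk pass. This is just an equivalent alternative, not what the article builds on; the data file is recreated here so the one-liner runs standalone:

```shell
#!/bin/bash
# Recreate the sample data.txt so the one-liner can be run on its own.
cat > data.txt <<'EOF'
2015.04.23 21:38:56/123/Jim/Script hanging.
2015.04.23 21:43:50/321/Susane/Mouse was stolen
2015.04.23 21:47:55/123/Jim/Script still not works.
2015.04.23 21:48:16/777/Daniel/I broke my monitor
2015.04.23 22:02:14/123/Jimmy/New script also not working!!!
EOF

# -F"/" splits on the same delimiter; count[$2] tallies the phone field.
awk -F"/" '{count[$2]++} END {for (n in count) print count[n], n}' data.txt | sort -k2
```

The output matches the cut/sort/uniq pipeline: 3 for number 123, and 1 each for 321 and 777.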
Just add this pipeline to the end of our loop:
#!/bin/bash
while true
do
read -p "Phone number: " phone
now=`date "+%Y.%m.%d %H:%M:%S"`
read -p "Name: " name
read -p "Issue: " issue
echo "$now/$phone/$name/$issue">>data.txt
echo "===== We got calls from ====="
cat data.txt | cut -d"/" -f2 | sort | uniq -c
echo "--------------------------------"
done
Run it:
[root@localhost ~]# ./note.sh
Phone number: 454
Name: Malini
Issue: Windows license expired.
===== We got calls from =====
3 123
1 321
1 454
1 777
--------------------------------
Phone number: ^C
![running script](http://blog.linoxide.com/wp-content/uploads/2015/04/Capture11.png)
The current scenario walks through well-known steps:
- Display message
- Get user input
- Store values to the file
- Do something with stored data
But what if the user has several responsibilities, and sometimes needs to input data, sometimes to run statistics, or maybe to search the stored data? For that we need to implement switches/cases. In the next article I will show you how to use them and how to format the output nicely. That comes in handy when "drawing" tables in the shell.
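As a tiny preview of that, here is a minimal, hypothetical dispatcher using case. The choice names (add, stat, quit) are made up for illustration and are not from this article:

```shell
#!/bin/bash
# Minimal menu dispatcher sketch; the choice names are illustrative only.
handle_choice() {
    case "$1" in
        add)  echo "would record a new call" ;;
        stat) echo "would print call statistics" ;;
        quit) echo "bye" ;;
        *)    echo "unknown choice: $1" ;;
    esac
}

handle_choice add    # -> would record a new call
handle_choice oops   # -> unknown choice: oops
```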
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-shell-script/guide-start-learning-shell-scripting-scratch/
作者:[Petras Liumparas][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/petrasl/

View File

@ -1,167 +0,0 @@
How to Securely Store Passwords and Api Keys Using Vault
================================================================================
Vault is a tool that is used to access secret information securely, it may be password, API key, certificate or anything else. Vault provides a unified interface to secret information through strong access control mechanism and extensive logging of events.
Granting access to critical information becomes quite difficult when we have multiple roles, and individuals across those roles require various critical information, like login details for databases with different privileges, API keys for external services, credentials for service-oriented architecture communication and so on. The situation gets even worse when access to secret information is managed across different platforms with custom settings, so that rotation, secure storage and audit logging become almost impossible. Vault provides a solution to exactly this kind of complex situation.
### Salient Features ###
**Data Encryption**: Vault can encrypt and decrypt data without being required to store it. Developers can store encrypted data without developing their own encryption techniques, and it allows security teams to define security parameters.
**Secure Secret Storage**: Vault encrypts the secret information (API keys, passwords or certificates) before storing it on to the persistent (secondary) storage. So even if somebody gets access to the stored information by chance, it will be of no use until it is decrypted.
**Dynamic Secrets**: On demand secrets are generated for systems like AWS and SQL databases. If an application needs to access S3 bucket, for instance, it requests AWS keypair from Vault, which grants the required secret information along with a lease time. The secret information wont work once the lease time is expired.
**Leasing and Renewal**: Vault grants secrets with a lease; it revokes them as soon as the lease expires, though a lease can also be renewed through the API if required.
**Revocation**: Upon lease expiry, Vault can revoke a single secret or a whole tree of secrets.
### Installing Vault ###
There are two ways to use Vault.
**1. Pre-compiled Vault binaries** can be downloaded for all Linux flavors from the links below. Once downloaded, unzip the binary and place it in a directory on the system PATH where other binaries are kept, so that it can be invoked easily.
- [Download Precompiled Vault Binary (32-bit)][1]
- [Download Precompiled Vault Binary (64-bit)][2]
- [Download Precompiled Vault Binary (ARM)][3]
Download the desired precompiled Vault binary.
![wget binary](http://blog.linoxide.com/wp-content/uploads/2015/04/wget-binary.png)
Unzip the downloaded binary.
![vault](http://blog.linoxide.com/wp-content/uploads/2015/04/unzip.png)
Congratulations! Vault is ready to be used.
![](http://blog.linoxide.com/wp-content/uploads/2015/04/vault.png)
**2. Compiling from source** is another way of installing Vault on the system. GO and GIT are required to be installed and configured properly on the system before we start the installation process.
To **install GO on Redhat systems** use the following command.
sudo yum install go
To **install GO on Debian systems** use the following commands.
sudo apt-get install golang
OR
sudo add-apt-repository ppa:gophers/go
sudo apt-get update
sudo apt-get install golang-stable
To **install GIT on Redhat systems** use the following command.
sudo yum install git
To **install GIT on Debian systems** use the following commands.
sudo apt-get install git
Once both GO and GIT are installed, we can start the Vault installation by compiling it from source.
> Clone the following Vault repository into the GOPATH
https://github.com/hashicorp/vault
> Verify that the following file exists; if it doesn't, Vault wasn't cloned to the proper path.
$GOPATH/src/github.com/hashicorp/vault/main.go
> Run the following command to build Vault on the current system and put the binary in the bin directory.
make dev
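Put together as one script, the three steps above look roughly like the sketch below. The GOPATH default is an assumption, and the actual clone/compile is gated behind DO_BUILD=1 because it needs git, go, make and network access:

```shell
#!/bin/bash
# Sketch of the source build; set DO_BUILD=1 to really clone and compile.
export GOPATH="${GOPATH:-$HOME/go}"
SRC="$GOPATH/src/github.com/hashicorp/vault"

if [ "${DO_BUILD:-0}" = "1" ]; then
    mkdir -p "${SRC%/vault}"
    git clone https://github.com/hashicorp/vault "$SRC"
    # The sanity check from the article: main.go must exist under GOPATH.
    test -f "$SRC/main.go" || { echo "clone landed in the wrong place"; exit 1; }
    (cd "$SRC" && make dev)    # binary ends up in the bin directory
else
    echo "dry run: would build Vault in $SRC"
fi
```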
![path](http://blog.linoxide.com/wp-content/uploads/2015/04/installation4.png)
### An introductory tutorial of Vault ###
We have compiled Vault's official interactive tutorial along with its output over SSH.
**Overview**
This tutorial will cover the following steps:
- Initializing and unsealing your Vault
- Authorizing your requests to Vault
- Reading and writing secrets
- Sealing your Vault
**Initialize your Vault**
To get started, we need to initialize an instance of Vault for you to work with.
While initializing, you can configure the seal behavior of Vault.
Initialize Vault now, with 1 unseal key for simplicity, using the command:
vault init -key-shares=1 -key-threshold=1
You'll notice Vault prints out several keys here. Don't clear your terminal, as these are needed in the next few steps.
![Initializing SSH](http://blog.linoxide.com/wp-content/uploads/2015/04/Initializing-SSH.png)
**Unsealing your Vault**
When a Vault server is started, it starts in a sealed state. In this state, Vault is configured to know where and how to access the physical storage, but doesn't know how to decrypt any of it.
Vault encrypts data with an encryption key. This key is encrypted with the "master key", which isn't stored. Decrypting the master key requires a threshold of shards. In this example, we use one shard to decrypt this master key.
vault unseal <key 1>
![Unsealing SSH](http://blog.linoxide.com/wp-content/uploads/2015/04/Unsealing-SSH.png)
**Authorize your requests**
Before performing any operation with Vault, the connecting client must be authenticated. Authentication is the process of verifying a person or machine is who they say they are and assigning an identity to them. This identity is then used when making requests with Vault.
For simplicity, we'll use the root token we generated on init in Step 2. This output should be available in the scrollback.
Authorize with a client token:
vault auth <root token>
![Authorize SSH](http://blog.linoxide.com/wp-content/uploads/2015/04/Authorize-SSH.png)
**Read and write secrets**
Now that Vault has been set-up, we can start reading and writing secrets with the default mounted secret backend. Secrets written to Vault are encrypted and then written to the backend storage. The backend storage mechanism never sees the unencrypted value and doesn't have the means necessary to decrypt it without Vault.
vault write secret/hello value=world
Of course, you can then read this data too:
vault read secret/hello
![RW_SSH](http://blog.linoxide.com/wp-content/uploads/2015/04/RW_SSH.png)
**Seal your Vault**
There is also an API to seal the Vault. This will throw away the encryption key and require another unseal process to restore it. Sealing only requires a single operator with root privileges. This is typically part of a rare "break glass procedure".
This way, if there is a detected intrusion, the Vault data can be locked quickly to try to minimize damages. It can't be accessed again without access to the master key shards.
vault seal
![Seal Vault SSH](http://blog.linoxide.com/wp-content/uploads/2015/04/Seal-Vault-SSH.png)
That is the end of introductory tutorial.
### Summary ###
Vault is a very useful application mainly because of providing a reliable and secure way of storing critical information. Furthermore it encrypts the critical information before storing, maintains audit logs, grants secret information for limited lease time and revokes it once lease is expired. It is platform independent and freely available to download and install. To discover more about Vault, readers are encouraged to visit the official website.
--------------------------------------------------------------------------------
via: http://linoxide.com/how-tos/secure-secret-store-vault/
作者:[Aun Raza][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunrz/
[1]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_386.zip
[2]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_amd64.zip
[3]:https://dl.bintray.com/mitchellh/vault/vault_0.1.0_linux_arm.zip

View File

@ -1,112 +0,0 @@
How to Setup OpenERP (Odoo) on CentOS 7.x
================================================================================
Hi everyone, this tutorial is all about how to set up Odoo (formerly known as OpenERP) on a CentOS 7 server. Are you looking for an awesome ERP (Enterprise Resource Planning) app for your business? Then OpenERP is the app you are searching for, as it is Free and Open Source Software that provides outstanding features for your business or company.
[OpenERP][1] is a free and open source ERP (Enterprise Resource Planning) app which includes Open Source CRM, Website Builder, eCommerce, Project Management, Billing & Accounting, Point of Sale, Human Resources, Marketing, Manufacturing, Purchase Management and many more modules to boost productivity and sales. Odoo Apps can be used as stand-alone applications, but they also integrate seamlessly, so you get a full-featured Open Source ERP when you install several Apps.
So, here are some quick and easy steps to get your copy of OpenERP installed on your CentOS machine.
### 1. Installing PostgreSQL ###
First of all, we'll want to update the packages installed on our CentOS 7 machine to ensure that we have the latest packages and security patches. To update our system, we should run the following commands in a shell or terminal.
# yum clean all
# yum update
Now, we'll want to install PostgreSQL Database System as OpenERP uses PostgreSQL for its database system. To install it, we'll need to run the following command.
# yum install postgresql postgresql-server postgresql-libs
![Installing postgresql](http://blog.linoxide.com/wp-content/uploads/2015/03/installing-postgresql.png)
After it is installed, we'll need to initialize the database with the following command
# postgresql-setup initdb
![Intializating postgresql](http://blog.linoxide.com/wp-content/uploads/2015/03/intializating-postgresql.png)
We'll then set PostgreSQL to start on every boot and start the PostgreSQL Database server.
# systemctl enable postgresql
# systemctl start postgresql
As we haven't set a password for the user "postgres", we'll want to set one now.
# su - postgres
$ psql
postgres=# \password postgres
postgres=# \q
# exit
![setting password postgres](http://blog.linoxide.com/wp-content/uploads/2015/03/setting-password-postgres.png)
### 2. Configuring Odoo Repository ###
After our database server has been installed correctly, we'll want to add EPEL (Extra Packages for Enterprise Linux) to our CentOS server. Odoo (or OpenERP) depends on the Python run-time and many other packages that are not included in the default standard repository. As such, we'll want to add the Extra Packages for Enterprise Linux (or EPEL) repository support so that Odoo can get the required dependencies. To install it, we'll need to run the following command.
# yum install epel-release
![Installing EPEL Release](http://blog.linoxide.com/wp-content/uploads/2015/03/installing-epel-release.png)
Now, after we install EPEL, we'll now add repository of Odoo (OpenERP) using yum-config-manager.
# yum install yum-utils
# yum-config-manager --add-repo=https://nightly.odoo.com/8.0/nightly/rpm/odoo.repo
![Adding OpenERP (Odoo) Repo](http://blog.linoxide.com/wp-content/uploads/2015/03/added-odoo-repo.png)
### 3. Installing Odoo 8 (OpenERP) ###
Finally, after adding the Odoo 8 (OpenERP) repository to our CentOS 7 machine, we can install Odoo 8 (OpenERP) using the following command.
# yum install -y odoo
The above command will install odoo along with the necessary dependency packages.
![Installing odoo or OpenERP](http://blog.linoxide.com/wp-content/uploads/2015/03/installing-odoo.png)
Now, we'll enable automatic startup of Odoo in every boot and will start our Odoo service using the command below.
# systemctl enable odoo
# systemctl start odoo
![Starting Odoo](http://blog.linoxide.com/wp-content/uploads/2015/03/starting-odoo.png)
### 4. Allowing Firewall ###
As Odoo listens on port 8069, we'll need to open that port in the firewall for remote access. We can open port 8069 by running the following commands.
# firewall-cmd --zone=public --add-port=8069/tcp --permanent
# firewall-cmd --reload
![Allowing firewall Port](http://blog.linoxide.com/wp-content/uploads/2015/03/allowing-firewall-port.png)
**Note: By default, only connections from localhost are allowed. If we want to allow remote access to PostgreSQL databases, we'll need to add the line shown in the below image to pg_hba.conf configuration file:**
# nano /var/lib/pgsql/data/pg_hba.conf
![Allowing Remote Access pgsql](http://blog.linoxide.com/wp-content/uploads/2015/03/allowing-remote-access-pgsql.png)
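For reference, a typical pg_hba.conf line for allowing remote password authentication looks like the one below. The subnet shown is an assumption for illustration; restrict it to your own network rather than opening PostgreSQL to the world:

    # TYPE  DATABASE  USER  ADDRESS          METHOD
    host    all       all   192.168.1.0/24   md5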
### 5. Web Interface ###
Finally, as we have successfully installed the latest Odoo 8 (OpenERP) on our CentOS 7 server, we can now access it by browsing to http://ip-address:8069 or http://my-site.com:8069 using our favorite web browser. The first thing we'll do is create a new database and set a password for it. Note that the master password is admin by default. Then we can log in to our panel with that username and password.
![Odoo Panel](http://blog.linoxide.com/wp-content/uploads/2015/03/odoo-panel.png)
### Conclusion ###
Odoo 8 (formerly OpenERP) is the best ERP app available in the world of Open Source. Installing it is well worth the effort, because OpenERP is a set of many modules which together form a complete ERP app for a business or company. So, if you have any questions, suggestions or feedback, please write them in the comment box below. Thank you! Enjoy OpenERP (Odoo 8) :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/setup-openerp-odoo-centos-7/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://www.odoo.com/

View File

@ -1,96 +0,0 @@
Linux FAQs with Answers--How to configure a Linux bridge with Network Manager on Ubuntu
================================================================================
> **Question**: I need to set up a Linux bridge on my Ubuntu box to share a NIC with several other virtual machines or containers created on the box. I am currently using Network Manager on my Ubuntu, so preferably I would like to configure a bridge using Network Manager. How can I do that?
A network bridge is a piece of hardware used to interconnect two or more Layer-2 network segments, so that network devices on different segments can talk to each other. A similar bridging concept is needed within a Linux host when you want to interconnect multiple VMs or Ethernet interfaces within the host. That is one use case of a software Linux bridge.
There are several different ways to configure a Linux bridge. For example, in a headless server environment, you can use [brctl][1] to manually configure a bridge. In desktop environment, bridge support is available in Network Manager. Let's examine how to configure a bridge with Network Manager.
### Requirement ###
To avoid [any issue][2], it is recommended that you have Network Manager 0.9.9 or higher, which is the case for Ubuntu 15.04 and later.
$ apt-cache show network-manager | grep Version
----------
Version: 0.9.10.0-4ubuntu15.1
Version: 0.9.10.0-4ubuntu15
### Create a Bridge ###
The easiest way to create a bridge with Network Manager is via nm-connection-editor. This GUI tool allows you to configure a bridge in easy-to-follow steps.
To start, invoke nm-connection-editor.
$ nm-connection-editor
The editor window will show you a list of currently configured network connections. Click on "Add" button in the top right to create a bridge.
![](https://farm9.staticflickr.com/8781/17139502730_c3ca920f7f.jpg)
Next, choose "Bridge" as a connection type.
![](https://farm9.staticflickr.com/8873/17301102406_4f75133391_z.jpg)
Now it's time to configure a bridge, including its name and bridged connection(s). With no other bridges created, the default bridge interface will be named bridge0.
Recall that the goal of creating a bridge is to share your Ethernet interface via the bridge. So you need to add the Ethernet interface to the bridge. This is achieved by adding a new "bridged connection" in the GUI. Click on "Add" button.
![](https://farm9.staticflickr.com/8876/17327069755_52f1d81f37_z.jpg)
Choose "Ethernet" as a connection type.
![](https://farm9.staticflickr.com/8832/17326664591_632a9001da_z.jpg)
In "Device MAC address" field, choose the interface that you want to enslave into the bridge. In this example, assume that this interface is eth0.
![](https://farm9.staticflickr.com/8842/17140820559_07a661f30c_z.jpg)
Click on "General" tab, and enable both checkboxes that say "Automatically connect to this network when it is available" and "All users may connect to this network".
![](https://farm8.staticflickr.com/7776/17325199982_801290e172_z.jpg)
Save the change.
Now you will see a new slave connection created in the bridge.
![](https://farm8.staticflickr.com/7674/17119624667_6966b1147e_z.jpg)
Click on "General" tab of the bridge, and make sure that top-most two checkboxes are enabled.
![](https://farm8.staticflickr.com/7715/17301102276_4266a1e41d_z.jpg)
Go to "IPv4 Settings" tab, and configure either DHCP or static IP address for the bridge. Note that you should use the same IPv4 settings as the enslaved Ethernet interface eth0. In this example, we assume that eth0 is configured via DHCP. Thus choose "Automatic (DHCP)" here. If eth0 is assigned a static IP address, you should assign the same IP address to the bridge.
![](https://farm8.staticflickr.com/7737/17140820469_99955cf916_z.jpg)
Finally, save the bridge settings.
Now you will see an additional bridge connection created in "Network Connections" window. You no longer need a previously-configured wired connection for the enslaved interface eth0. So go ahead and delete the original wired connection.
![](https://farm9.staticflickr.com/8700/17140820439_272a6d5c4e.jpg)
At this point, the bridge connection will automatically be activated. You will momentarily lose a connection, since the IP address assigned to eth0 is taken over by the bridge. Once an IP address is assigned to the bridge, you will be connected back to your Ethernet interface via the bridge. You can confirm that by checking "Network" settings.
![](https://farm8.staticflickr.com/7742/17325199902_9ceb67ddc1_c.jpg)
Also, check the list of available interfaces. As mentioned, the bridge interface must have taken over whatever IP address was possessed by your Ethernet interface.
![](https://farm8.staticflickr.com/7717/17327069605_6143f1bd6a_b.jpg)
That's it, and now the bridge is ready to use!
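For completeness, roughly the same bridge can also be built from the command line with nmcli. This is only a sketch for Network Manager 0.9.10 and later; the connection names are illustrative, and the commands need root privileges:

    $ sudo nmcli connection add type bridge ifname bridge0 con-name bridge0
    $ sudo nmcli connection add type bridge-slave ifname eth0 master bridge0
    $ sudo nmcli connection modify bridge0 ipv4.method auto
    $ sudo nmcli connection up bridge0

As in the GUI walkthrough, the first command creates the bridge, the second enslaves eth0 to it, and the last two configure DHCP on the bridge and activate it.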
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/configure-linux-bridge-network-manager-ubuntu.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html
[2]:https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1273201

View File

@ -1,55 +0,0 @@
Linux FAQs with Answers--How to disable entering password for default keyring to unlock on Ubuntu desktop
================================================================================
> **Question**: When I boot up my Ubuntu desktop, a pop up dialog appears, asking me to enter a password to unlock default keyring. How can I disable this "unlock default keyring" pop up window, and automatically unlock my keyring?
A keyring is thought of as a local database that stores your login information in encrypted forms. Various desktop applications (e.g., browsers, email clients) use a keyring to store and manage your login credentials, secrets, passwords, certificates, or keys securely. For those applications to retrieve the information stored in a keyring, the keyring needs to be unlocked.
GNOME keyring used by Ubuntu desktop is integrated with desktop login, and the keyring is automatically unlocked when you authenticate into your desktop. But your default keyring can remain "locked" if you set up automatic desktop login or wake up from hibernation. In this case, you will be prompted:
> "Enter password for keyring 'Default keyring' to unlock. An application wants to access to the keyring 'Default keyring,' but it is locked."
![](https://farm9.staticflickr.com/8787/16716456754_309c39513c_o.png)
If you want to avoid typing a password to unlock your default keyring every time such a pop-up dialog appears, here is how you can do it.
Before doing that, understand the implication of disabling the password prompt. By automatically unlocking the default keyring, you will make your keyring (and any information stored in the keyring) accessible to anyone who uses your desktop, without them having to know your password.
### Disable Password for Unlocking Default Keyring ###
Open up Dash, and type "password" to launch "Passwords and Keys" app.
![](https://farm8.staticflickr.com/7709/17312949416_ed9c4fbe2d_b.jpg)
Alternatively, use the seahorse command to launch the GUI from the command line.
$ seahorse
On the left side panel, right-click on the "Default keyring," and choose "Change Password."
![](https://farm8.staticflickr.com/7740/17159959750_ba5b675b00_b.jpg)
Type your current login password.
![](https://farm8.staticflickr.com/7775/17347551135_ce09260818_b.jpg)
Leave the new password for the "Default" keyring blank.
![](https://farm8.staticflickr.com/7669/17345663222_c9334c738b_c.jpg)
Click on the "Continue" button to confirm storing passwords unencrypted.
![](https://farm8.staticflickr.com/7761/17152692309_ce3891a0d9_c.jpg)
That's it. From now on, you won't be prompted to unlock the default keyring.
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/disable-entering-password-unlock-default-keyring.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni

View File

@ -1,97 +0,0 @@
Linux FAQs with Answers--How to install Shrew Soft IPsec VPN client on Linux
================================================================================
> **Question**: I need to connect to an IPSec VPN gateway. For that, I'm trying to use Shrew Soft VPN client, which is available for free. How can I install Shrew Soft VPN client on [insert your Linux distro]?
There are many commercial VPN gateways available, which come with their own proprietary VPN client software. While there are also open-source VPN server/client alternatives, they are typically lacking in sophisticated IPsec support, such as Internet Key Exchange (IKE) which is a standard IPsec protocol used to secure VPN key exchange and authentication. Shrew Soft VPN is a free IPsec VPN client supporting a number of authentication methods, key exchange, encryption and firewall traversal options.
Here is how you can install Shrew Soft VPN client on Linux platforms.
First, download its source code from the [official website][1].
### Install Shrew VPN Client on Debian, Ubuntu or Linux Mint ###
Shrew Soft VPN client GUI requires Qt 4.x. So you will need to install its development files as part of dependencies.
$ sudo apt-get install cmake libqt4-core libqt4-dev libqt4-gui libedit-dev libssl-dev checkinstall flex bison
$ wget https://www.shrew.net/download/ike/ike-2.2.1-release.tbz2
$ tar xvjf ike-2.2.1-release.tbz2
$ cd ike
$ cmake -DCMAKE_INSTALL_PREFIX=/usr -DQTGUI=YES -DETCDIR=/etc -DNATT=YES .
$ make
$ sudo make install
$ cd /etc/
$ sudo mv iked.conf.sample iked.conf
### Install Shrew VPN Client on CentOS, Fedora or RHEL ###
Similar to Debian based systems, you will need to install a number of dependencies including Qt4 before compiling it.
$ sudo yum install qt-devel cmake gcc-c++ openssl-devel libedit-devel flex bison
$ wget https://www.shrew.net/download/ike/ike-2.2.1-release.tbz2
$ tar xvjf ike-2.2.1-release.tbz2
$ cd ike
$ cmake -DCMAKE_INSTALL_PREFIX=/usr -DQTGUI=YES -DETCDIR=/etc -DNATT=YES .
$ make
$ sudo make install
$ cd /etc/
$ sudo mv iked.conf.sample iked.conf
On Red Hat based systems, one last step is to open /etc/ld.so.conf with a text editor, and add the following line.
$ sudo vi /etc/ld.so.conf
----------
/usr/lib
Reload run-time bindings of shared libraries to incorporate newly installed shared libraries:
$ sudo ldconfig
### Launch Shrew VPN Client ###
First launch IKE daemon (iked). This daemon speaks the IKE protocol to communicate with a remote host over IPSec as a VPN client.
$ sudo iked
![](https://farm9.staticflickr.com/8685/17175688940_59c2db64c9_b.jpg)
Now start qikea which is an IPsec VPN client front end. This GUI application allows you to manage remote site configurations and to initiate VPN connections.
![](https://farm8.staticflickr.com/7750/16742992713_eed7f97939_b.jpg)
To create a new VPN configuration, click on "Add" button, and fill out VPN site configuration. Once you create a configuration, you can initiate a VPN connection simply by clicking on the configuration.
![](https://farm8.staticflickr.com/7725/17337297056_3d38dc2180_b.jpg)
### Troubleshooting ###
1. I am getting the following error while running iked.
iked: error while loading shared libraries: libss_ike.so.2.2.1: cannot open shared object file: No such file or directory
To solve this problem, you need to update the dynamic linker to incorporate libss_ike library. For that, add to /etc/ld.so.conf the path where the library is located (e.g., /usr/lib), and then run ldconfig command.
$ sudo ldconfig
Verify that libss_ike is added to the library path:
$ ldconfig -p | grep ike
----------
libss_ike.so.2.2.1 (libc6,x86-64) => /lib/libss_ike.so.2.2.1
libss_ike.so (libc6,x86-64) => /lib/libss_ike.so
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/install-shrew-soft-ipsec-vpn-client-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:https://www.shrew.net/download/ike

View File

@ -1,76 +0,0 @@
Linux FAQs with Answers--How to install autossh on Linux
================================================================================
> **Question**: I would like to install autossh on [insert your Linux distro]. How can I do that?
[autossh][1] is an open-source tool that allows you to monitor an SSH session and restart it automatically should it get disconnected or stop forwarding traffic. autossh assumes that [passwordless SSH login][2] to the destination host is already set up, so that it can restart a broken SSH session without the user's involvement.
autossh comes in handy when you want to set up [reverse SSH tunnels][3] or [mount remote folders over SSH][4]. Essentially in any situation where persistent SSH sessions are required, autossh can be useful.
![](https://farm8.staticflickr.com/7786/17150854870_63966e78bc_c.jpg)
Here is how to install autossh on various Linux distributions.
### Install Autossh on Debian or Ubuntu ###
autossh is available in the base repositories of Debian-based systems, so installation is easy.
$ sudo apt-get install autossh
### Install Autossh on Fedora ###
Fedora repositories also carry the autossh package, so simply use the yum command.
$ sudo yum install autossh
### Install Autossh on CentOS or RHEL ###
For CentOS/RHEL 6 or earlier, enable [Repoforge repository][5] first, and then use yum command.
$ sudo yum install autossh
For CentOS/RHEL 7, autossh is no longer available in Repoforge repository. You will need to build it from the source (explained below).
### Install Autossh on Arch Linux ###
$ sudo pacman -S autossh
### Compile Autossh from the Source on Debian or Ubuntu ###
If you would like to try the latest version of autossh, you can build it from the source as follows.
$ sudo apt-get install gcc make
$ wget http://www.harding.motd.ca/autossh/autossh-1.4e.tgz
$ tar -xf autossh-1.4e.tgz
$ cd autossh-1.4e
$ ./configure
$ make
$ sudo make install
### Compile Autossh from the Source on CentOS, Fedora or RHEL ###
On CentOS/RHEL 7, autossh is not available as a pre-built package. So you'll need to compile it from the source as follows.
$ sudo yum install wget gcc make
$ wget http://www.harding.motd.ca/autossh/autossh-1.4e.tgz
$ tar -xf autossh-1.4e.tgz
$ cd autossh-1.4e
$ ./configure
$ make
$ sudo make install
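However you install it, a typical autossh invocation looks like the following. The host, user and port numbers are placeholders, and passwordless SSH login to the remote machine must already be configured:

    $ autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -R 6666:localhost:22 user@remote.example.com

Here -M 0 disables autossh's legacy monitoring port and relies on the ServerAlive options to detect a dead connection, -f backgrounds autossh, and -N opens no remote shell, just the reverse tunnel set up by -R.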
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/install-autossh-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://www.harding.motd.ca/autossh/
[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html
[3]:http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
[4]:http://xmodulo.com/how-to-mount-remote-directory-over-ssh-on-linux.html
[5]:http://xmodulo.com/how-to-set-up-rpmforge-repoforge-repository-on-centos.html

View File

@ -0,0 +1,134 @@
Install uGet Download Manager 2.0 in Debian, Ubuntu, Linux Mint and Fedora
================================================================================
After a long development period that included more than 11 development releases, the uGet project team is pleased to announce the immediate availability of the latest stable version, uGet 2.0. The latest version includes numerous attractive features, such as a new settings dialog, improved BitTorrent and Metalink support in the aria2 plugin, and better support for uGet RSS messages in the banner. Other features include:
- A new “Check for Updates” button informs you about new released versions.
- Added new languages & updated existing languages.
- Added a new “Message Banner” that allows developers to easily provide uGet related information to all users.
- Enhanced the Help Menu by including links to the Documentation, to submit Feedback & Bug Reports and more.
- Integrated uGet download manager into the two major browsers on the Linux platform, Firefox and Google Chrome.
- Improved support for Firefox Addon FlashGot.
### What is uGet ###
uGet (formerly known as UrlGfe) is an open source, free and very powerful multi-platform GTK-based download manager written in C and released under the GPL. It offers a large collection of features such as resuming downloads, multiple download support, categories with independent configuration, clipboard monitoring, a download scheduler, importing URLs from HTML files, an integrated FlashGot plugin for Firefox, and downloading torrent and metalink files using aria2 (a command-line download manager) integrated with uGet.
I have listed all the key features of uGet Download Manager below with detailed explanations.
#### Key Features of uGet Download Manager ####
- Downloads Queue: Place all your downloads into a queue. As each download finishes, the remaining queued files will automatically start downloading.
- Resume Downloads: If your network connection is interrupted, don't worry, you can resume the download from where it left off.
- Download Categories: Support for unlimited categories to manage downloads.
- Clipboard Monitor: Specify the file types to watch for on the clipboard, so that uGet automatically prompts you to download copied URLs.
- Batch Downloads: Allows you to easily add unlimited number of files at once for downloading.
- Multi-Protocol: Allows you to easily download files through HTTP, HTTPS, FTP, BitTorrent and Metalink using the aria2 command-line plugin.
- Multi-Connection: Support for up to 20 simultaneous connections per download using aria2 plugin.
- FTP Login & Anonymous FTP: Added support for FTP login using username and password, as well as anonymous FTP.
- Scheduler: Added support for scheduled downloads, now you can schedule all your downloads.
- FireFox Integration via FlashGot: Integrated FlashGot as an independent supported Firefox extension that handles single or massive selection of files for downloading.
- CLI / Terminal Support: Offers command line or terminal option to download files.
- Folder Auto-Creation: If you have provided a save path for the download but the save path doesn't exist, uGet will automatically create it.
- Download History Management: Keeps track of finished downloads and recycled entries, up to 9,999 files per list. Entries older than the custom limit will be deleted automatically.
- Multi-Language Support: By default uGet uses English, but it supports more than 23 languages.
- Aria2 Plugin: uGet integrates with aria2 via a plugin to offer a more user-friendly GUI.
If you want to know a complete list of available features, see the official uGet [features page][1].
### Install uGet in Debian, Ubuntu, Linux Mint and Fedora ###
The uGet developers have added the latest version to various repositories across the Linux platform, so you can install or upgrade uGet using a supported repository for your Linux distribution.
Currently, a few Linux distributions are not up-to-date, but you can check the status of your distribution by going to the [uGet Download page][2] and selecting your preferred distro there for more details.
#### On Debian ####
In Debian Testing (Jessie) and Debian Unstable (Sid), you can easily install and update using the official repository on a fairly reliable basis.
$ sudo apt-get update
$ sudo apt-get install uget
#### On Ubuntu & Linux Mint ####
In Ubuntu and Linux Mint, you can install and update uGet using the official PPA repository ppa:plushuang-tw/uget-stable. By using this PPA, you will automatically be kept up-to-date with the latest versions.
$ sudo add-apt-repository ppa:plushuang-tw/uget-stable
$ sudo apt-get update
$ sudo apt-get install uget
#### On Fedora ####
In Fedora 20 and 21, the latest version of uGet (2.0) is available in the official repositories, and installing from them is fairly reliable.
$ sudo yum install uget
**Note**: On older versions of Debian, Ubuntu, Linux Mint and Fedora, users can also install uGet, but the available version is 1.10.4. If you are looking for the updated version (i.e. 2.0), you need to upgrade your system and add the uGet PPA to get the latest stable version.
### Installing aria2 plugin ###
[aria2][3] is an excellent command-line download utility that uGet uses, via the aria2 plugin, to add even more great functionality such as downloading torrent files and metalinks, plus multi-protocol and multi-source downloads.
By default uGet uses cURL as the backend on most of today's Linux systems, but the aria2 plugin replaces cURL with aria2 as the backend.
aria2 is a separate package that needs to be installed on its own. You can easily install the latest version of aria2 using a supported repository for your Linux distribution, or you can also see [downloads-aria2][4], which explains how to install aria2 on each distro.
#### On Debian, Ubuntu and Linux Mint ####
Use the official aria2 PPA repository to install latest version of aria2 using the following commands.
$ sudo add-apt-repository ppa:t-tujikawa/ppa
$ sudo apt-get update
$ sudo apt-get install aria2
#### On Fedora ####
Fedoras official repositories already added aria2 package, so you can easily install it using the following yum command.
$ sudo yum install aria2
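Once installed, you can sanity-check aria2 on its own before wiring it into uGet. A hedged sketch of a multi-connection download follows; the URL and file name are placeholders only.

```shell
# Download a file with up to 4 connections to the server (-x 4),
# saving it under the name given by -o.
# The URL is a placeholder; replace it with a real download link.
aria2c -x 4 -o bigfile.iso http://example.com/path/to/bigfile.iso
```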
#### Starting uGet ####
To start the uGet application, open your desktop menu and type “uget” in the search bar. Refer to the screenshot below.
![Start uGet Download Manager](http://www.tecmint.com/wp-content/uploads/2014/03/Start-uGet.gif)
Start uGet Download Manager
![uGet Version: 2.0](http://www.tecmint.com/wp-content/uploads/2014/03/uGet-Version.gif)
uGet Version: 2.0
#### Activate aria2 Plugin in uGet ####
To activate the aria2 plugin, from the uGet menu go to Edit > Settings > Plug-in tab, and from the drop-down select “aria2“.
![Enable Aria2 Plugin for uGet](http://www.tecmint.com/wp-content/uploads/2014/03/Enable-Aria2-Plugin.gif)
Enable Aria2 Plugin for uGet
### uGet 2.0 Screenshot Tour ###
![Download Files Using Aria2](http://www.tecmint.com/wp-content/uploads/2014/03/Download-Files-Using-Aria2.gif)
Download Files Using Aria2
![Download Torrent File Using uGet](http://www.tecmint.com/wp-content/uploads/2014/03/Download-Torrent-File.gif)
Download Torrent File Using uGet
![Batch Downloads Using uGet](http://www.tecmint.com/wp-content/uploads/2014/03/Batch-Download-Files.gif)
Batch Downloads Using uGet
uGet source files and RPM packages are also available for other Linux distributions and Windows at the [download page][5].
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-uget-download-manager-in-linux/
作者:[Ravi Saive][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/admin/
[1]:http://uget.visuex.com/features
[2]:http://ugetdm.com/downloads
[3]:http://www.tecmint.com/install-aria2-a-multi-protocol-command-line-download-manager-in-rhel-centos-fedora/
[4]:http://ugetdm.com/downloads-aria2
[5]:http://ugetdm.com/downloads


@ -0,0 +1,151 @@
Fix Various Update Errors In Ubuntu 14.04
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/Fix_Ubuntu_Update_Error.jpeg)
Who hasn't come across an error while doing an update in Ubuntu? Update errors are common and plentiful in Ubuntu and other Linux distributions based on Ubuntu. These errors occur for various reasons and can be fixed easily. In this article, we shall see various types of frequently occurring update errors in Ubuntu and how to fix them.
### Problem With MergeList ###
When you run update in terminal, you may encounter an error “[problem with MergeList][1]” like below:
> E:Encountered a section with no Package: header,
>
> E:Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-i386_Packages,
>
> E:The package lists or status file could not be parsed or opened.
To fix this error, use the following commands:
sudo rm -r /var/lib/apt/lists/*
sudo apt-get clean && sudo apt-get update
### Failed to download repository information -1 ###
There are actually two types of [failed to download repository information errors][2]. If your error reads like this:
> W:Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_oneiric_restricted_binary-i386_Packages Hash Sum mismatch,
>
> W:Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_binary-i386_Packages Hash Sum mismatch,
>
> E:Some index files failed to download. They have been ignored, or old ones used instead
Then you can use the following commands to fix it:
sudo rm -rf /var/lib/apt/lists/*
sudo apt-get update
### Failed to download repository information -2 ###
The other type of failed to download repository information error is caused by an outdated PPA. Usually, you run the Update Manager and see an error like this:
You can run sudo apt-get update to see which PPAs are failing, and then remove them from the sources list. You can follow this screenshot guide to [fix failed to download repository information error][3].
### Failed to download package files error ###
A similar error is the [failed to download package files error][4]:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/Ubuntu_Update_error.jpeg)
This can be easily fixed by changing the software sources to the Main server. Go to Software & Updates and change the download server to the Main server:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/Change_server_Ubuntu.jpeg)
### Partial upgrade error ###
Running updates in terminal may throw this [partial upgrade error][5]:
> Not all updates can be installed
>
> Run a partial upgrade, to install as many updates as possible
Run the following command in terminal to fix this error:
sudo apt-get install -f
### error while loading shared libraries ###
This is more of an installation error than update error. If you try to install a program from source code, you may encounter this error:
> error while loading shared libraries:
>
> cannot open shared object file: No such file or directory
This error can be fixed by running the following command in terminal:
sudo /sbin/ldconfig -v
You can find more details on this [error while loading shared libraries][6].
### Could not get lock /var/cache/apt/archives/lock ###
This error happens when another program is using APT, for example when you are installing something in Ubuntu Software Center while also trying to run apt in a terminal.
> E: Could not get lock /var/cache/apt/archives/lock open (11: Resource temporarily unavailable)
>
> E: Unable to lock directory /var/cache/apt/archives/
Normally, this should be fine if you close all other programs using apt, but if the problem persists, use the following command:
sudo rm /var/lib/apt/lists/lock
If the above command doesnt work, try this command:
sudo killall apt-get
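Before deleting lock files, it can help to find out which process actually holds the lock. A hedged sketch using fuser (from the psmisc package, run as root) follows; it only reports, it changes nothing.

```shell
# List the PIDs currently holding the common apt/dpkg lock files, if any.
# -v gives verbose output including the process name.
sudo fuser -v /var/lib/dpkg/lock /var/lib/apt/lists/lock /var/cache/apt/archives/lock
```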
More details about this error can be found [here][7].
### GPG error: The following signatures couldnt be verified ###
Adding a PPA may result in the following [GPG error: The following signatures couldnt be verified][8] when you try to run an update in terminal:
> W: GPG error: http://repo.mate-desktop.org saucy InRelease: The following signatures couldnt be verified because the public key is not available: NO_PUBKEY 68980A0EA10B4DE8
All we need to do is fetch this public key into the system. Get the key number from the message; in the above message, the key is 68980A0EA10B4DE8. This key can be used in the following manner:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 68980A0EA10B4DE8
Once the key has been added, run an update again and it will be fine.
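If you hit this error often, the key ID can be extracted from the error message with ordinary text tools rather than copied by hand. A small sketch, using the sample error line from above:

```shell
# Extract the hex key ID that follows NO_PUBKEY in a GPG error line.
ERR="W: GPG error: http://repo.mate-desktop.org saucy InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 68980A0EA10B4DE8"
KEY=$(echo "$ERR" | grep -o 'NO_PUBKEY [0-9A-F]*' | cut -d' ' -f2)
echo "$KEY"   # prints 68980A0EA10B4DE8
# Then fetch it (commented out here so the sketch stays side-effect free):
# sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$KEY"
```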
### BADSIG error ###
Another signature related Ubuntu update error is [BADSIG error][9] which looks something like this:
> W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com precise Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key
>
> W: GPG error: http://ppa.launchpad.net precise Release:
>
> The following signatures were invalid: BADSIG 4C1CBC1B69B0E2F4 Launchpad PPA for Jonathan French W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/precise/Release
To fix this BADSIG error, use the following commands in terminal:
sudo apt-get clean
cd /var/lib/apt
sudo mv lists oldlist
sudo mkdir -p lists/partial
sudo apt-get clean
sudo apt-get update
That completes the list of frequent **Ubuntu update errors** you may encounter. I hope this helps you get rid of these errors. Have you encountered any other update errors in Ubuntu? Do mention them in the comments and I'll try to do a quick tutorial on them.
--------------------------------------------------------------------------------
via: http://itsfoss.com/fix-update-errors-ubuntu-1404/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/how-to-fix-problem-with-mergelist/
[2]:http://itsfoss.com/solve-ubuntu-error-failed-to-download-repository-information-check-your-internet-connection/
[3]:http://itsfoss.com/failed-to-download-repository-information-ubuntu-13-04/
[4]:http://itsfoss.com/fix-failed-download-package-files-error-ubuntu/
[5]:http://itsfoss.com/fix-partial-upgrade-error-elementary-os-luna-quick-tip/
[6]:http://itsfoss.com/solve-open-shared-object-file-quick-tip/
[7]:http://itsfoss.com/fix-ubuntu-install-error/
[8]:http://itsfoss.com/solve-gpg-error-signatures-verified-ubuntu/
[9]:http://itsfoss.com/solve-badsig-error-quick-tip/


@ -0,0 +1,405 @@
OpenSSL command line Root and Intermediate CA including OCSP, CRL and revocation
================================================================================
These are quick and dirty notes on generating a certificate authority (CA), intermediate certificate authorities and end certificates using OpenSSL. It includes OCSP, CRL and CA Issuer information and specific issue and expiry dates.
We'll set up our own root CA. We'll use the root CA to generate an example intermediate CA. We'll use the intermediate CA to sign end user certificates.
### Root CA ###
Create and move in to a folder for the root ca:
mkdir ~/SSLCA/root/
cd ~/SSLCA/root/
Generate an 8192-bit RSA key for our root CA:
openssl genrsa -aes256 -out rootca.key 8192
Example output:
Generating RSA private key, 8192 bit long modulus
.........++
....................................................................................................................++
e is 65537 (0x10001)
The `-aes256` option password-protects this key; leave it out if you don't want a passphrase.
Create the self-signed root CA certificate `rootca.crt`; you'll need to provide an identity for your root CA:
openssl req -sha256 -new -x509 -days 1826 -key rootca.key -out rootca.crt
Example output:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:NL
State or Province Name (full name) [Some-State]:Zuid Holland
Locality Name (eg, city) []:Rotterdam
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Sparkling Network
Organizational Unit Name (eg, section) []:Sparkling CA
Common Name (e.g. server FQDN or YOUR name) []:Sparkling Root CA
Email Address []:
Create a few files where the CA will store its serials:
touch certindex
echo 1000 > certserial
echo 1000 > crlnumber
Place the CA config file. This file has stubs for CRL and OCSP endpoints.
# vim ca.conf
[ ca ]
default_ca = myca
[ crl_ext ]
issuerAltName=issuer:copy
authorityKeyIdentifier=keyid:always
[ myca ]
dir = ./
new_certs_dir = $dir
unique_subject = no
certificate = $dir/rootca.crt
database = $dir/certindex
private_key = $dir/rootca.key
serial = $dir/certserial
default_days = 730
default_md = sha1
policy = myca_policy
x509_extensions = myca_extensions
crlnumber = $dir/crlnumber
default_crl_days = 730
[ myca_policy ]
commonName = supplied
stateOrProvinceName = supplied
countryName = optional
emailAddress = optional
organizationName = supplied
organizationalUnitName = optional
[ myca_extensions ]
basicConstraints = critical,CA:TRUE
keyUsage = critical,any
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
keyUsage = digitalSignature,keyEncipherment,cRLSign,keyCertSign
extendedKeyUsage = serverAuth
crlDistributionPoints = @crl_section
subjectAltName = @alt_names
authorityInfoAccess = @ocsp_section
[ v3_ca ]
basicConstraints = critical,CA:TRUE,pathlen:0
keyUsage = critical,any
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
keyUsage = digitalSignature,keyEncipherment,cRLSign,keyCertSign
extendedKeyUsage = serverAuth
crlDistributionPoints = @crl_section
subjectAltName = @alt_names
authorityInfoAccess = @ocsp_section
[alt_names]
DNS.0 = Sparkling Intermidiate CA 1
DNS.1 = Sparkling CA Intermidiate 1
[crl_section]
URI.0 = http://pki.sparklingca.com/SparklingRoot.crl
URI.1 = http://pki.backup.com/SparklingRoot.crl
[ocsp_section]
caIssuers;URI.0 = http://pki.sparklingca.com/SparklingRoot.crt
caIssuers;URI.1 = http://pki.backup.com/SparklingRoot.crt
OCSP;URI.0 = http://pki.sparklingca.com/ocsp/
OCSP;URI.1 = http://pki.backup.com/ocsp/
If you need to set a specific certificate start / expiry date, add the following to `[myca]`
# format: YYYYMMDDHHMMSS
default_enddate = 20191222035911
default_startdate = 20181222035911
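Rather than typing these timestamps by hand, GNU date can emit the YYYYMMDDHHMMSS format directly. A small sketch, assuming GNU coreutils `date`:

```shell
# Build matching start/end dates: now, and two years from now.
START=$(date +%Y%m%d%H%M%S)
END=$(date -d "+2 years" +%Y%m%d%H%M%S)
echo "default_startdate = $START"
echo "default_enddate   = $END"
```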
### Creating Intermediate 1 CA ###
Generate the intermediate CA's private key:
openssl genrsa -out intermediate1.key 4096
Generate the intermediate1 CA's CSR:
openssl req -new -sha256 -key intermediate1.key -out intermediate1.csr
Example output:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:NL
State or Province Name (full name) [Some-State]:Zuid Holland
Locality Name (eg, city) []:Rotterdam
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Sparkling Network
Organizational Unit Name (eg, section) []:Sparkling CA
Common Name (e.g. server FQDN or YOUR name) []:Sparkling Intermediate CA
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Make sure the subject (CN) of the intermediate is different from the root.
Sign the intermediate1 CSR with the Root CA:
openssl ca -batch -config ca.conf -notext -in intermediate1.csr -out intermediate1.crt
Example Output:
Using configuration from ca.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'NL'
stateOrProvinceName :ASN.1 12:'Zuid Holland'
localityName :ASN.1 12:'Rotterdam'
organizationName :ASN.1 12:'Sparkling Network'
organizationalUnitName:ASN.1 12:'Sparkling CA'
commonName :ASN.1 12:'Sparkling Intermediate CA'
Certificate is to be certified until Mar 30 15:07:43 2017 GMT (730 days)
Write out database with 1 new entries
Data Base Updated
Generate the CRL (both in PEM and DER):
openssl ca -config ca.conf -gencrl -keyfile rootca.key -cert rootca.crt -out rootca.crl.pem
openssl crl -inform PEM -in rootca.crl.pem -outform DER -out rootca.crl
Generate the CRL after every certificate you sign with the CA.
If you ever need to revoke this intermediate cert:
openssl ca -config ca.conf -revoke intermediate1.crt -keyfile rootca.key -cert rootca.crt
### Configuring the Intermediate CA 1 ###
Create a new folder for this intermediate and move in to it:
mkdir ~/SSLCA/intermediate1/
cd ~/SSLCA/intermediate1/
Copy the Intermediate cert and key from the Root CA:
cp ~/SSLCA/root/intermediate1.key ./
cp ~/SSLCA/root/intermediate1.crt ./
Create the index files:
touch certindex
echo 1000 > certserial
echo 1000 > crlnumber
Create a new `ca.conf` file:
# vim ca.conf
[ ca ]
default_ca = myca
[ crl_ext ]
issuerAltName=issuer:copy
authorityKeyIdentifier=keyid:always
[ myca ]
dir = ./
new_certs_dir = $dir
unique_subject = no
certificate = $dir/intermediate1.crt
database = $dir/certindex
private_key = $dir/intermediate1.key
serial = $dir/certserial
default_days = 365
default_md = sha1
policy = myca_policy
x509_extensions = myca_extensions
crlnumber = $dir/crlnumber
default_crl_days = 365
[ myca_policy ]
commonName = supplied
stateOrProvinceName = supplied
countryName = optional
emailAddress = optional
organizationName = supplied
organizationalUnitName = optional
[ myca_extensions ]
basicConstraints = critical,CA:FALSE
keyUsage = critical,any
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
keyUsage = digitalSignature,keyEncipherment
extendedKeyUsage = serverAuth
crlDistributionPoints = @crl_section
subjectAltName = @alt_names
authorityInfoAccess = @ocsp_section
[alt_names]
DNS.0 = example.com
DNS.1 = example.org
[crl_section]
URI.0 = http://pki.sparklingca.com/SparklingIntermidiate1.crl
URI.1 = http://pki.backup.com/SparklingIntermidiate1.crl
[ocsp_section]
caIssuers;URI.0 = http://pki.sparklingca.com/SparklingIntermediate1.crt
caIssuers;URI.1 = http://pki.backup.com/SparklingIntermediate1.crt
OCSP;URI.0 = http://pki.sparklingca.com/ocsp/
OCSP;URI.1 = http://pki.backup.com/ocsp/
Change the `[alt_names]` section to whatever you need as Subject Alternative names. Remove it including the `subjectAltName = @alt_names` line if you don't want a Subject Alternative Name.
If you need to set a specific certificate start / expiry date, add the following to `[myca]`
# format: YYYYMMDDHHMMSS
default_enddate = 20191222035911
default_startdate = 20181222035911
Generate an empty CRL (both in PEM and DER):
openssl ca -config ca.conf -gencrl -keyfile intermediate1.key -cert intermediate1.crt -out intermediate1.crl.pem
openssl crl -inform PEM -in intermediate1.crl.pem -outform DER -out intermediate1.crl
### Creating end user certificates ###
We use this new intermediate CA to generate an end user certificate. Repeat these steps for every end user certificate you want to sign with this CA.
mkdir enduser-certs
Generate the end user's private key:
openssl genrsa -out enduser-certs/enduser-example.com.key 4096
Generate the end user's CSR:
openssl req -new -sha256 -key enduser-certs/enduser-example.com.key -out enduser-certs/enduser-example.com.csr
Example output:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:NL
State or Province Name (full name) [Some-State]:Noord Holland
Locality Name (eg, city) []:Amsterdam
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Example Inc
Organizational Unit Name (eg, section) []:IT Dept
Common Name (e.g. server FQDN or YOUR name) []:example.com
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Sign the end user's CSR with the Intermediate 1 CA:
openssl ca -batch -config ca.conf -notext -in enduser-certs/enduser-example.com.csr -out enduser-certs/enduser-example.com.crt
Example output:
Using configuration from ca.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'NL'
stateOrProvinceName :ASN.1 12:'Noord Holland'
localityName :ASN.1 12:'Amsterdam'
organizationName :ASN.1 12:'Example Inc'
organizationalUnitName:ASN.1 12:'IT Dept'
commonName :ASN.1 12:'example.com'
Certificate is to be certified until Mar 30 15:18:26 2016 GMT (365 days)
Write out database with 1 new entries
Data Base Updated
Generate the CRL (both in PEM and DER):
openssl ca -config ca.conf -gencrl -keyfile intermediate1.key -cert intermediate1.crt -out intermediate1.crl.pem
openssl crl -inform PEM -in intermediate1.crl.pem -outform DER -out intermediate1.crl
Generate the CRL after every certificate you sign with the CA.
If you ever need to revoke this end user's cert:
openssl ca -config ca.conf -revoke enduser-certs/enduser-example.com.crt -keyfile intermediate1.key -cert intermediate1.crt
Example output:
Using configuration from ca.conf
Revoking Certificate 1000.
Data Base Updated
Create the certificate chain file by concatenating the Root and intermediate 1 certificates together.
cat ../root/rootca.crt intermediate1.crt > enduser-certs/enduser-example.com.chain
Send the following files to the end user:
enduser-example.com.crt
enduser-example.com.key
enduser-example.com.chain
You can also let the end user supply their own CSR and just send them the .crt file. Do not delete it from the server, otherwise you cannot revoke it.
### Validating the certificate ###
You can validate the end user certificate against the chain using the following command:
openssl verify -CAfile enduser-certs/enduser-example.com.chain enduser-certs/enduser-example.com.crt
enduser-certs/enduser-example.com.crt: OK
You can also validate it against the CRL. Concatenate the PEM CRL and the chain together first:
cat ../root/rootca.crt intermediate1.crt intermediate1.crl.pem > enduser-certs/enduser-example.com.crl.chain
Verify the certificate:
openssl verify -crl_check -CAfile enduser-certs/enduser-example.com.crl.chain enduser-certs/enduser-example.com.crt
Output when not revoked:
enduser-certs/enduser-example.com.crt: OK
Output when revoked:
enduser-certs/enduser-example.com.crt: CN = example.com, ST = Noord Holland, C = NL, O = Example Inc, OU = IT Dept
error 23 at 0 depth lookup:certificate revoked
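The title mentions OCSP, but no responder is shown above. Under the assumption that the intermediate CA files and end user certificate from the previous steps exist, a quick test responder and client query might look like this (this is a sketch, not a production setup, which would use a dedicated OCSP signing certificate):

```shell
# Serve OCSP responses for certs in this CA's index on port 8080.
openssl ocsp -index certindex -port 8080 \
    -rsigner intermediate1.crt -rkey intermediate1.key \
    -CA intermediate1.crt -text &

# Query the end user certificate's status against that responder.
openssl ocsp -CAfile enduser-certs/enduser-example.com.chain \
    -issuer intermediate1.crt \
    -cert enduser-certs/enduser-example.com.crt \
    -url http://127.0.0.1:8080 -resp_text
```

The client output should report `good` for a valid certificate and `revoked` after you revoke it with the `openssl ca -revoke` command shown earlier.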
--------------------------------------------------------------------------------
via: https://raymii.org/s/tutorials/OpenSSL_command_line_Root_and_Intermediate_CA_including_OCSP_CRL%20and_revocation.html
作者Remy van Elst
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,826 @@
45 Zypper Commands to Manage Suse Linux Package Management
================================================================================
SUSE (from the German "Software und System Entwicklung", meaning Software and System Development in English) Linux is built on top of the Linux kernel and brought to you by Novell. SUSE comes in two editions. One of them is OpenSUSE, which is freely available (free as in speech as well as free as in beer). It is a community driven project packed with the latest application support; the latest stable release of OpenSUSE Linux is 13.2.
The other is SUSE Linux Enterprise, which is a commercial Linux distribution designed specially for enterprise and production use. SUSE Linux Enterprise comes with a variety of enterprise applications and features suited for production environments; the latest stable release of SUSE Linux Enterprise is 12.
You may like to check the detailed installation instructions for SUSE Linux Enterprise Server at:
- [Installation of SUSE Linux Enterprise Server 12][1]
Zypper and YaST are the package managers for SUSE Linux, and they work on top of RPM.
YaST, which stands for Yet another Setup Tool, works on OpenSUSE and the SUSE Enterprise edition to administer, set up and configure SUSE Linux.
Zypper is the command-line interface of the ZYpp package manager for installing, removing and updating packages on SUSE. ZYpp is the package management engine that powers both Zypper and YaST.
Here in this article we will see Zypper in action: installing, updating, removing and doing everything else a package manager can do. Here we go…
**Important**: Remember that all these commands are meant for system-wide changes and hence must be run as root, or the commands will fail.
### Getting Basic Help with Zypper ###
1. Running zypper without any options will give you a list of all global options and commands.
# zypper
Usage:
zypper [--global-options]
2. To get help on a specific command, say install (in), run the commands below.
# zypper help in
OR
# zypper help install
install (in) [options] <capability|rpm_file_uri> ...
Install packages with specified capabilities or RPM files with specified
location. A capability is NAME[.ARCH][OP], where OP is one
of <, <=, =, >=, >.
Command options:
--from <alias|#|URI> Select packages from the specified repository.
-r, --repo <alias|#|URI> Load only the specified repository.
-t, --type Type of package (package, patch, pattern, product, srcpackage).
Default: package.
-n, --name Select packages by plain name, not by capability.
-C, --capability Select packages by capability.
-f, --force Install even if the item is already installed (reinstall),
downgraded or changes vendor or architecture.
--oldpackage Allow to replace a newer item with an older one.
Handy if you are doing a rollback. Unlike --force
it will not enforce a reinstall.
--replacefiles Install the packages even if they replace files from other,
already installed, packages. Default is to treat file conflicts
as an error. --download-as-needed disables the fileconflict check.
......
3. Search for a package (say gnome-desktop) before installing.
# zypper se gnome-desktop
Retrieving repository 'openSUSE-13.2-Debug' metadata ............................................................[done]
Building repository 'openSUSE-13.2-Debug' cache .................................................................[done]
Retrieving repository 'openSUSE-13.2-Non-Oss' metadata ......................................................... [done]
Building repository 'openSUSE-13.2-Non-Oss' cache ...............................................................[done]
Retrieving repository 'openSUSE-13.2-Oss' metadata ..............................................................[done]
Building repository 'openSUSE-13.2-Oss' cache ...................................................................[done]
Retrieving repository 'openSUSE-13.2-Update' metadata ...........................................................[done]
Building repository 'openSUSE-13.2-Update' cache ................................................................[done]
Retrieving repository 'openSUSE-13.2-Update-Non-Oss' metadata ...................................................[done]
Building repository 'openSUSE-13.2-Update-Non-Oss' cache ........................................................[done]
Loading repository data...
Reading installed packages...
S | Name | Summary | Type
--+---------------------------------------+-----------------------------------------------------------+-----------
| gnome-desktop2-lang | Languages for package gnome-desktop2 | package
| gnome-desktop2 | The GNOME Desktop API Library | package
| libgnome-desktop-2-17 | The GNOME Desktop API Library | package
| libgnome-desktop-3-10 | The GNOME Desktop API Library | package
| libgnome-desktop-3-devel | The GNOME Desktop API Library -- Development Files | package
| libgnome-desktop-3_0-common | The GNOME Desktop API Library -- Common data files | package
| gnome-desktop-debugsource | Debug sources for package gnome-desktop | package
| gnome-desktop-sharp2-debugsource | Debug sources for package gnome-desktop-sharp2 | package
| gnome-desktop2-debugsource | Debug sources for package gnome-desktop2 | package
| libgnome-desktop-2-17-debuginfo | Debug information for package libgnome-desktop-2-17 | package
| libgnome-desktop-3-10-debuginfo | Debug information for package libgnome-desktop-3-10 | package
| libgnome-desktop-3_0-common-debuginfo | Debug information for package libgnome-desktop-3_0-common | package
| libgnome-desktop-2-17-debuginfo-32bit | Debug information for package libgnome-desktop-2-17 | package
| libgnome-desktop-3-10-debuginfo-32bit | Debug information for package libgnome-desktop-3-10 | package
| gnome-desktop-sharp2 | Mono bindings for libgnome-desktop | package
| libgnome-desktop-2-devel | The GNOME Desktop API Library -- Development Files | package
| gnome-desktop-lang | Languages for package gnome-desktop | package
| libgnome-desktop-2-17-32bit | The GNOME Desktop API Library | package
| libgnome-desktop-3-10-32bit | The GNOME Desktop API Library | package
| gnome-desktop | The GNOME Desktop API Library | srcpackage
4. Get information on a pattern package (say lamp_server) using the following command.
# zypper info -t pattern lamp_server
Loading repository data...
Reading installed packages...
Information for pattern lamp_server:
------------------------------------
Repository: openSUSE-13.2-Update
Name: lamp_server
Version: 20141007-5.1
Arch: x86_64
Vendor: openSUSE
Installed: No
Visible to User: Yes
Summary: Web and LAMP Server
Description:
Software to set up a Web server that is able to serve static, dynamic, and interactive content (like a Web shop). This includes Apache HTTP Server, the database management system MySQL,
and scripting languages such as PHP, Python, Ruby on Rails, or Perl.
Contents:
S | Name | Type | Dependency
--+-------------------------------+---------+-----------
| apache2-mod_php5 | package |
| php5-iconv | package |
i | patterns-openSUSE-base | package |
i | apache2-prefork | package |
| php5-dom | package |
| php5-mysql | package |
i | apache2 | package |
| apache2-example-pages | package |
| mariadb | package |
| apache2-mod_perl | package |
| php5-ctype | package |
| apache2-doc | package |
| yast2-http-server | package |
| patterns-openSUSE-lamp_server | package |
5. To open a zypper shell session, run one of the commands below.
# zypper shell
OR
# zypper sh
zypper> help
Usage:
zypper [--global-options]
**Note**: In the zypper shell, type help to get a list of global options and commands.
### Zypper Repository Management ###
#### Listing Defined Repositories ####
6. Use the zypper repos or zypper lr command to list all the defined repositories.
# zypper repos
OR
# zypper lr
# | Alias | Name | Enabled | Refresh
--+---------------------------+------------------------------------+---------+--------
1 | openSUSE-13.2-0 | openSUSE-13.2-0 | Yes | No
2 | repo-debug | openSUSE-13.2-Debug | Yes | Yes
3 | repo-debug-update | openSUSE-13.2-Update-Debug | No | Yes
4 | repo-debug-update-non-oss | openSUSE-13.2-Update-Debug-Non-Oss | No | Yes
5 | repo-non-oss | openSUSE-13.2-Non-Oss | Yes | Yes
6 | repo-oss | openSUSE-13.2-Oss | Yes | Yes
7 | repo-source | openSUSE-13.2-Source | No | Yes
8 | repo-update | openSUSE-13.2-Update | Yes | Yes
9 | repo-update-non-oss | openSUSE-13.2-Update-Non-Oss | Yes | Yes
7. List the repositories along with their URIs.
# zypper lr -u
# | Alias | Name | Enabled | Refresh | URI
--+---------------------------+------------------------------------+---------+---------+----------------------------------------------------------------
1 | openSUSE-13.2-0 | openSUSE-13.2-0 | Yes | No | cd:///?devices=/dev/disk/by-id/ata-VBOX_CD-ROM_VB2-01700376
2 | repo-debug | openSUSE-13.2-Debug | Yes | Yes | http://download.opensuse.org/debug/distribution/13.2/repo/oss/
3 | repo-debug-update | openSUSE-13.2-Update-Debug | No | Yes | http://download.opensuse.org/debug/update/13.2/
4 | repo-debug-update-non-oss | openSUSE-13.2-Update-Debug-Non-Oss | No | Yes | http://download.opensuse.org/debug/update/13.2-non-oss/
5 | repo-non-oss | openSUSE-13.2-Non-Oss | Yes | Yes | http://download.opensuse.org/distribution/13.2/repo/non-oss/
6 | repo-oss | openSUSE-13.2-Oss | Yes | Yes | http://download.opensuse.org/distribution/13.2/repo/oss/
7 | repo-source | openSUSE-13.2-Source | No | Yes | http://download.opensuse.org/source/distribution/13.2/repo/oss/
8 | repo-update | openSUSE-13.2-Update | Yes | Yes | http://download.opensuse.org/update/13.2/
9 | repo-update-non-oss | openSUSE-13.2-Update-Non-Oss | Yes | Yes | http://download.opensuse.org/update/13.2-non-oss/
8. List the repositories along with their priorities.
# zypper lr -P
# | Alias | Name | Enabled | Refresh | Priority
--+---------------------------+------------------------------------+---------+---------+---------
1 | openSUSE-13.2-0 | openSUSE-13.2-0 | Yes | No | 99
2 | repo-debug | openSUSE-13.2-Debug | Yes | Yes | 99
3 | repo-debug-update | openSUSE-13.2-Update-Debug | No | Yes | 99
4 | repo-debug-update-non-oss | openSUSE-13.2-Update-Debug-Non-Oss | No | Yes | 99
5 | repo-non-oss | openSUSE-13.2-Non-Oss | Yes | Yes | 85
6 | repo-oss | openSUSE-13.2-Oss | Yes | Yes | 99
7 | repo-source | openSUSE-13.2-Source | No | Yes | 99
8 | repo-update | openSUSE-13.2-Update | Yes | Yes | 99
9 | repo-update-non-oss | openSUSE-13.2-Update-Non-Oss | Yes | Yes | 99
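The priority table lends itself to post-processing in scripts. Below is a minimal sketch, assuming the column layout shown above (Enabled in column 4, Priority in column 6), that lists enabled repositories sorted by priority, lowest value (i.e. highest priority) first:

```shell
# Sketch: sort enabled repositories by priority (lowest value wins).
# Column positions are assumed from the `zypper lr -P` table above.
list_by_priority() {
    awk -F'|' 'NR > 2 && $4 ~ /Yes/ {
        gsub(/ /, "", $2); gsub(/ /, "", $6)   # trim column padding
        print $6, $2
    }' | sort -n
}

# Usage: zypper lr -P | list_by_priority
if command -v zypper >/dev/null 2>&1; then
    zypper lr -P | list_by_priority
fi
```

This only reads `zypper`'s tabular output, so it needs no special privileges.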
#### Refreshing Repositories ####
9. Use the zypper refresh or zypper ref command to refresh zypper repositories.
# zypper refresh
OR
# zypper ref
Repository 'openSUSE-13.2-0' is up to date.
Repository 'openSUSE-13.2-Debug' is up to date.
Repository 'openSUSE-13.2-Non-Oss' is up to date.
Repository 'openSUSE-13.2-Oss' is up to date.
Repository 'openSUSE-13.2-Update' is up to date.
Repository 'openSUSE-13.2-Update-Non-Oss' is up to date.
All repositories have been refreshed.
10. To refresh a specific repository, say repo-non-oss, type:
# zypper refresh repo-non-oss
Repository 'openSUSE-13.2-Non-Oss' is up to date.
Specified repositories have been refreshed.
11. To force a refresh of a repository, say repo-non-oss, type:
# zypper ref -f repo-non-oss
Forcing raw metadata refresh
Retrieving repository 'openSUSE-13.2-Non-Oss' metadata ............................................................[done]
Forcing building of repository cache
Building repository 'openSUSE-13.2-Non-Oss' cache ............................................................[done]
Specified repositories have been refreshed.
#### Modifying Repositories ####
Here, we use the zypper modifyrepo or zypper mr command to enable or disable zypper repositories.
12. Before disabling a repository, you should know that in zypper every repository has its own unique number, which is used to enable or disable it.
Let's say you want to disable the repository repo-oss. First, find its number by typing the following command.
# zypper lr
# | Alias | Name | Enabled | Refresh
--+---------------------------+------------------------------------+---------+--------
1 | openSUSE-13.2-0 | openSUSE-13.2-0 | Yes | No
2 | repo-debug | openSUSE-13.2-Debug | Yes | Yes
3 | repo-debug-update | openSUSE-13.2-Update-Debug | No | Yes
4 | repo-debug-update-non-oss | openSUSE-13.2-Update-Debug-Non-Oss | No | Yes
5 | repo-non-oss | openSUSE-13.2-Non-Oss | Yes | Yes
6 | repo-oss | openSUSE-13.2-Oss | No | Yes
7 | repo-source | openSUSE-13.2-Source | No | Yes
8 | repo-update | openSUSE-13.2-Update | Yes | Yes
9 | repo-update-non-oss | openSUSE-13.2-Update-Non-Oss | Yes | Yes
As you can see in the above output, the repository repo-oss has number 6. To disable it, pass that number to the following command.
# zypper mr -d 6
Repository 'repo-oss' has been successfully disabled.
13. To re-enable the same repository repo-oss, which appears at number 6 (as shown in the above example):
# zypper mr -e 6
Repository 'repo-oss' has been successfully enabled.
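Repository numbers shift whenever repositories are added or removed, so scripts are safer addressing repositories by alias, which `zypper mr` also accepts. A small helper sketch (the alias names are illustrative):

```shell
# Sketch: toggle a repository by alias instead of by number, since
# the numbering changes as repositories are added or removed.
toggle_repo() {  # usage: toggle_repo <alias> <on|off>
    case $2 in
        on)  zypper mr -e "$1" ;;
        off) zypper mr -d "$1" ;;
        *)   echo "usage: toggle_repo <alias> <on|off>" >&2; return 2 ;;
    esac
}

# e.g. toggle_repo repo-oss off; toggle_repo repo-oss on
```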
14. Enable auto-refresh and rpm file caching for a repo, say repo-non-oss, and set its priority to, say, 85.
# zypper mr -rk -p 85 repo-non-oss
Repository 'repo-non-oss' priority has been left unchanged (85)
Nothing to change for repository 'repo-non-oss'.
15. Disable rpm file caching for all the repositories.
# zypper mr -Ka
RPM files caching has been disabled for repository 'openSUSE-13.2-0'.
RPM files caching has been disabled for repository 'repo-debug'.
RPM files caching has been disabled for repository 'repo-debug-update'.
RPM files caching has been disabled for repository 'repo-debug-update-non-oss'.
RPM files caching has been disabled for repository 'repo-non-oss'.
RPM files caching has been disabled for repository 'repo-oss'.
RPM files caching has been disabled for repository 'repo-source'.
RPM files caching has been disabled for repository 'repo-update'.
RPM files caching has been disabled for repository 'repo-update-non-oss'.
16. Enable rpm file caching for all the repositories.
# zypper mr -ka
RPM files caching has been enabled for repository 'openSUSE-13.2-0'.
RPM files caching has been enabled for repository 'repo-debug'.
RPM files caching has been enabled for repository 'repo-debug-update'.
RPM files caching has been enabled for repository 'repo-debug-update-non-oss'.
RPM files caching has been enabled for repository 'repo-non-oss'.
RPM files caching has been enabled for repository 'repo-oss'.
RPM files caching has been enabled for repository 'repo-source'.
RPM files caching has been enabled for repository 'repo-update'.
RPM files caching has been enabled for repository 'repo-update-non-oss'.
17. Disable rpm file caching for remote repositories.
# zypper mr -Kt
RPM files caching has been disabled for repository 'repo-debug'.
RPM files caching has been disabled for repository 'repo-debug-update'.
RPM files caching has been disabled for repository 'repo-debug-update-non-oss'.
RPM files caching has been disabled for repository 'repo-non-oss'.
RPM files caching has been disabled for repository 'repo-oss'.
RPM files caching has been disabled for repository 'repo-source'.
RPM files caching has been disabled for repository 'repo-update'.
RPM files caching has been disabled for repository 'repo-update-non-oss'.
18. Enable rpm file caching for remote repositories.
# zypper mr -kt
RPM files caching has been enabled for repository 'repo-debug'.
RPM files caching has been enabled for repository 'repo-debug-update'.
RPM files caching has been enabled for repository 'repo-debug-update-non-oss'.
RPM files caching has been enabled for repository 'repo-non-oss'.
RPM files caching has been enabled for repository 'repo-oss'.
RPM files caching has been enabled for repository 'repo-source'.
RPM files caching has been enabled for repository 'repo-update'.
RPM files caching has been enabled for repository 'repo-update-non-oss'.
#### Adding Repositories ####
You may use either of the two commands, zypper addrepo or zypper ar. A repository is added by its URL and given an alias.
19. Add a repository, say “http://download.opensuse.org/update/11.1/”, with the alias update.
# zypper ar http://download.opensuse.org/update/11.1/ update
Adding repository 'update' .............................................................................................................................................................[done]
Repository 'update' successfully added
Enabled : Yes
Autorefresh : No
GPG check : Yes
URI : http://download.opensuse.org/update/11.1/
20. Rename a repository. This changes the alias only. You may use the command zypper namerepo or zypper nr. To change the alias of the repo that appears at number 10 (in zypper lr) to upd8, run the command below.
# zypper nr 10 upd8
Repository 'update' renamed to 'upd8'.
#### Removing Repositories ####
21. Remove a repository from the system. You may use the command zypper removerepo or zypper rr. To remove a repo, say upd8, run the command below.
# zypper rr upd8
Removing repository 'upd8' .........................................................................................[done]
Repository 'upd8' has been removed.
### Package Management using Zypper ###
#### Install a Package with Zypper ####
22. With zypper, we can install packages by capability name. For example, to install a package (say Mozilla Firefox) using its capability name:
# zypper in MozillaFirefox
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 128 NEW packages are going to be installed:
adwaita-icon-theme at-spi2-atk-common at-spi2-atk-gtk2 at-spi2-core cantarell-fonts cups-libs desktop-file-utils fontconfig gdk-pixbuf-query-loaders gstreamer gstreamer-fluendo-mp3
gstreamer-plugins-base gtk2-branding-openSUSE gtk2-data gtk2-immodule-amharic gtk2-immodule-inuktitut gtk2-immodule-thai gtk2-immodule-vietnamese gtk2-metatheme-adwaita
gtk2-theming-engine-adwaita gtk2-tools gtk3-data gtk3-metatheme-adwaita gtk3-tools hicolor-icon-theme hicolor-icon-theme-branding-openSUSE libasound2 libatk-1_0-0 libatk-bridge-2_0-0
libatspi0 libcairo2 libcairo-gobject2 libcanberra0 libcanberra-gtk0 libcanberra-gtk2-module libcanberra-gtk3-0 libcanberra-gtk3-module libcanberra-gtk-module-common libcdda_interface0
libcdda_paranoia0 libcolord2 libdrm2 libdrm_intel1 libdrm_nouveau2 libdrm_radeon1 libFLAC8 libfreebl3 libgbm1 libgdk_pixbuf-2_0-0 libgraphite2-3 libgstapp-1_0-0 libgstaudio-1_0-0
libgstpbutils-1_0-0 libgstreamer-1_0-0 libgstriff-1_0-0 libgsttag-1_0-0 libgstvideo-1_0-0 libgthread-2_0-0 libgtk-2_0-0 libgtk-3-0 libharfbuzz0 libjasper1 libjbig2 libjpeg8 libjson-c2
liblcms2-2 libLLVM libltdl7 libnsssharedhelper0 libogg0 liborc-0_4-0 libpackagekit-glib2-18 libpango-1_0-0 libpciaccess0 libpixman-1-0 libpulse0 libsndfile1 libsoftokn3 libspeex1
libsqlite3-0 libstartup-notification-1-0 libtheoradec1 libtheoraenc1 libtiff5 libvisual libvorbis0 libvorbisenc2 libvorbisfile3 libwayland-client0 libwayland-cursor0 libwayland-server0
libX11-xcb1 libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-render0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libXcomposite1 libXcursor1 libXdamage1 libXevie1
libXfixes3 libXft2 libXi6 libXinerama1 libxkbcommon-0_4_3 libXrandr2 libXrender1 libxshmfence1 libXtst6 libXv1 libXxf86vm1 Mesa Mesa-libEGL1 Mesa-libGL1 Mesa-libglapi0
metatheme-adwaita-common MozillaFirefox MozillaFirefox-branding-openSUSE mozilla-nss mozilla-nss-certs PackageKit-gstreamer-plugin pango-tools sound-theme-freedesktop
The following 10 recommended packages were automatically selected:
gstreamer-fluendo-mp3 gtk2-branding-openSUSE gtk2-data gtk2-immodule-amharic gtk2-immodule-inuktitut gtk2-immodule-thai gtk2-immodule-vietnamese libcanberra0 libpulse0
PackageKit-gstreamer-plugin
128 new packages to install.
Overall download size: 77.2 MiB. Already cached: 0 B After the operation, additional 200.0 MiB will be used.
Continue? [y/n/? shows all options] (y): y
Retrieving package cantarell-fonts-0.0.16-1.1.noarch (1/128), 74.1 KiB (115.6 KiB unpacked)
Retrieving: cantarell-fonts-0.0.16-1.1.noarch.rpm .........................................................................................................................[done (63.4 KiB/s)]
Retrieving package hicolor-icon-theme-0.13-2.1.2.noarch (2/128), 40.1 KiB ( 50.5 KiB unpacked)
Retrieving: hicolor-icon-theme-0.13-2.1.2.noarch.rpm ...................................................................................................................................[done]
Retrieving package sound-theme-freedesktop-0.8-7.1.2.noarch (3/128), 372.6 KiB (460.3 KiB unpacked)
23. Install a specific version of a package (say gcc older than 5.1).
# zypper in 'gcc<5.1'
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 13 NEW packages are going to be installed:
cpp cpp48 gcc gcc48 libasan0 libatomic1-gcc49 libcloog-isl4 libgomp1-gcc49 libisl10 libitm1-gcc49 libmpc3 libmpfr4 libtsan0-gcc49
13 new packages to install.
Overall download size: 14.5 MiB. Already cached: 0 B After the operation, additional 49.4 MiB will be used.
Continue? [y/n/? shows all options] (y): y
24. Install a package (say gcc) for a specific architecture (say i586).
# zypper in gcc.i586
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 13 NEW packages are going to be installed:
cpp cpp48 gcc gcc48 libasan0 libatomic1-gcc49 libcloog-isl4 libgomp1-gcc49 libisl10 libitm1-gcc49 libmpc3 libmpfr4 libtsan0-gcc49
13 new packages to install.
Overall download size: 14.5 MiB. Already cached: 0 B After the operation, additional 49.4 MiB will be used.
Continue? [y/n/? shows all options] (y): y
Retrieving package libasan0-4.8.3+r212056-2.2.4.x86_64 (1/13), 74.2 KiB (166.9 KiB unpacked)
Retrieving: libasan0-4.8.3+r212056-2.2.4.x86_64.rpm .......................................................................................................................[done (79.2 KiB/s)]
Retrieving package libatomic1-gcc49-4.9.0+r211729-2.1.7.x86_64 (2/13), 14.3 KiB ( 26.1 KiB unpacked)
Retrieving: libatomic1-gcc49-4.9.0+r211729-2.1.7.x86_64.rpm ...............................................................................................................[done (55.3 KiB/s)]
25. Install a package (say gcc) of a specific version (say <5.1) for a specific architecture (say i586).
# zypper in 'gcc.i586<5.1'
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 13 NEW packages are going to be installed:
cpp cpp48 gcc gcc48 libasan0 libatomic1-gcc49 libcloog-isl4 libgomp1-gcc49 libisl10 libitm1-gcc49 libmpc3 libmpfr4 libtsan0-gcc49
13 new packages to install.
Overall download size: 14.4 MiB. Already cached: 129.5 KiB After the operation, additional 49.4 MiB will be used.
Continue? [y/n/? shows all options] (y): y
In cache libasan0-4.8.3+r212056-2.2.4.x86_64.rpm (1/13), 74.2 KiB (166.9 KiB unpacked)
In cache libatomic1-gcc49-4.9.0+r211729-2.1.7.x86_64.rpm (2/13), 14.3 KiB ( 26.1 KiB unpacked)
In cache libgomp1-gcc49-4.9.0+r211729-2.1.7.x86_64.rpm (3/13), 41.1 KiB ( 90.7 KiB unpacked)
26. Install a package (say libxine1) from a specific repository (here, the repo aliased upd), together with another package (amarok).
# zypper in amarok upd:libxine1
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 202 NEW packages are going to be installed:
amarok bundle-lang-kde-en clamz cups-libs enscript fontconfig gdk-pixbuf-query-loaders ghostscript-fonts-std gptfdisk gstreamer gstreamer-plugins-base hicolor-icon-theme
hicolor-icon-theme-branding-openSUSE htdig hunspell hunspell-tools icoutils ispell ispell-american kde4-filesystem kdebase4-runtime kdebase4-runtime-branding-openSUSE kdelibs4
kdelibs4-branding-openSUSE kdelibs4-core kdialog libakonadi4 l
.....
27. Install a package (say git) by its plain name (-n), not by capability.
# zypper in -n git
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 35 NEW packages are going to be installed:
cvs cvsps fontconfig git git-core git-cvs git-email git-gui gitk git-svn git-web libserf-1-1 libsqlite3-0 libXft2 libXrender1 libXss1 perl-Authen-SASL perl-Clone perl-DBD-SQLite perl-DBI
perl-Error perl-IO-Socket-SSL perl-MLDBM perl-Net-Daemon perl-Net-SMTP-SSL perl-Net-SSLeay perl-Params-Util perl-PlRPC perl-SQL-Statement perl-Term-ReadKey subversion subversion-perl tcl
tk xhost
The following 13 recommended packages were automatically selected:
git-cvs git-email git-gui gitk git-svn git-web perl-Authen-SASL perl-Clone perl-MLDBM perl-Net-Daemon perl-Net-SMTP-SSL perl-PlRPC perl-SQL-Statement
The following package is suggested, but will not be installed:
git-daemon
35 new packages to install.
Overall download size: 15.6 MiB. Already cached: 0 B After the operation, additional 56.7 MiB will be used.
Continue? [y/n/? shows all options] (y): y
28. Install a package using wildcards. For example, install all php5 packages.
# zypper in php5*
Loading repository data...
Reading installed packages...
Resolving package dependencies...
Problem: php5-5.6.1-18.1.x86_64 requires smtp_daemon, but this requirement cannot be provided
uninstallable providers: exim-4.83-3.1.8.x86_64[openSUSE-13.2-0]
postfix-2.11.0-5.2.2.x86_64[openSUSE-13.2-0]
sendmail-8.14.9-2.2.2.x86_64[openSUSE-13.2-0]
exim-4.83-3.1.8.i586[repo-oss]
msmtp-mta-1.4.32-2.1.3.i586[repo-oss]
postfix-2.11.0-5.2.2.i586[repo-oss]
sendmail-8.14.9-2.2.2.i586[repo-oss]
exim-4.83-3.1.8.x86_64[repo-oss]
msmtp-mta-1.4.32-2.1.3.x86_64[repo-oss]
postfix-2.11.0-5.2.2.x86_64[repo-oss]
sendmail-8.14.9-2.2.2.x86_64[repo-oss]
postfix-2.11.3-5.5.1.i586[repo-update]
postfix-2.11.3-5.5.1.x86_64[repo-update]
Solution 1: Following actions will be done:
do not install php5-5.6.1-18.1.x86_64
do not install php5-pear-Auth_SASL-1.0.6-7.1.3.noarch
do not install php5-pear-Horde_Http-2.0.1-6.1.3.noarch
do not install php5-pear-Horde_Image-2.0.1-6.1.3.noarch
do not install php5-pear-Horde_Kolab_Format-2.0.1-6.1.3.noarch
do not install php5-pear-Horde_Ldap-2.0.1-6.1.3.noarch
do not install php5-pear-Horde_Memcache-2.0.1-7.1.3.noarch
do not install php5-pear-Horde_Mime-2.0.2-6.1.3.noarch
do not install php5-pear-Horde_Oauth-2.0.0-6.1.3.noarch
do not install php5-pear-Horde_Pdf-2.0.1-6.1.3.noarch
....
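One caveat with wildcards: if a file whose name matches php5* happens to exist in the current directory, the shell expands the unquoted pattern before zypper ever sees it, so quoting it (zypper in 'php5*') is safer. The demonstration below shows the difference in a throwaway directory:

```shell
# Demonstration: an unquoted glob is expanded by the shell when it
# matches local files; a quoted one reaches the command verbatim.
demo=$(mktemp -d)
cd "$demo"
touch php5-local.txt

set -- php5*          # unquoted: the shell substitutes the file name
unquoted=$1
set -- 'php5*'        # quoted: stays a literal pattern for zypper
quoted=$1

echo "unquoted=$unquoted quoted=$quoted"
```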
29. Install a pattern (a group of packages), say lamp_server.
# zypper in -t pattern lamp_server
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 29 NEW packages are going to be installed:
apache2 apache2-doc apache2-example-pages apache2-mod_perl apache2-prefork patterns-openSUSE-lamp_server perl-Data-Dump perl-Encode-Locale perl-File-Listing perl-HTML-Parser
perl-HTML-Tagset perl-HTTP-Cookies perl-HTTP-Daemon perl-HTTP-Date perl-HTTP-Message perl-HTTP-Negotiate perl-IO-HTML perl-IO-Socket-SSL perl-libwww-perl perl-Linux-Pid
perl-LWP-MediaTypes perl-LWP-Protocol-https perl-Net-HTTP perl-Net-SSLeay perl-Tie-IxHash perl-TimeDate perl-URI perl-WWW-RobotRules yast2-http-server
The following NEW pattern is going to be installed:
lamp_server
The following 10 recommended packages were automatically selected:
apache2 apache2-doc apache2-example-pages apache2-mod_perl apache2-prefork perl-Data-Dump perl-IO-Socket-SSL perl-LWP-Protocol-https perl-TimeDate yast2-http-server
29 new packages to install.
Overall download size: 7.2 MiB. Already cached: 1.2 MiB After the operation, additional 34.7 MiB will be used.
Continue? [y/n/? shows all options] (y):
30. Install a Package (say nano) and remove a package (say vi) in one go.
# zypper in nano -vi
Loading repository data...
Reading installed packages...
'-vi' not found in package names. Trying capabilities.
Resolving package dependencies...
The following 2 NEW packages are going to be installed:
nano nano-lang
The following package is going to be REMOVED:
vim
The following recommended package was automatically selected:
nano-lang
2 new packages to install, 1 to remove.
Overall download size: 550.0 KiB. Already cached: 0 B After the operation, 463.3 KiB will be freed.
Continue? [y/n/? shows all options] (y):
...
31. Install an rpm package (say teamviewer).
# zypper in teamviewer*.rpm
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 24 NEW packages are going to be installed:
alsa-oss-32bit fontconfig-32bit libasound2-32bit libexpat1-32bit libfreetype6-32bit libgcc_s1-gcc49-32bit libICE6-32bit libjpeg62-32bit libpng12-0-32bit libpng16-16-32bit libSM6-32bit
libuuid1-32bit libX11-6-32bit libXau6-32bit libxcb1-32bit libXdamage1-32bit libXext6-32bit libXfixes3-32bit libXinerama1-32bit libXrandr2-32bit libXrender1-32bit libXtst6-32bit
libz1-32bit teamviewer
The following recommended package was automatically selected:
alsa-oss-32bit
24 new packages to install.
Overall download size: 41.2 MiB. Already cached: 0 B After the operation, additional 119.7 MiB will be used.
Continue? [y/n/? shows all options] (y):
..
#### Remove a Package with Zypper ####
32. To remove a package, use the zypper remove or zypper rm command. For example, to remove a package (say apache2), run:
# zypper remove apache2
Or
# zypper rm apache2
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 2 packages are going to be REMOVED:
apache2 apache2-prefork
2 packages to remove.
After the operation, 4.2 MiB will be freed.
Continue? [y/n/? shows all options] (y): y
(1/2) Removing apache2-2.4.10-19.1 ........................................................................[done]
(2/2) Removing apache2-prefork-2.4.10-19.1 ................................................................[done]
#### Updating Packages using Zypper ####
33. Update all packages. You may use commands zypper update or zypper up.
# zypper up
OR
# zypper update
Loading repository data...
Reading installed packages...
Nothing to do.
34. Update specific packages (say apache2 and openssh).
# zypper up apache2 openssh
Loading repository data...
Reading installed packages...
No update candidate for 'apache2-2.4.10-19.1.x86_64'. The highest available version is already installed.
No update candidate for 'openssh-6.6p1-5.1.3.x86_64'. The highest available version is already installed.
Resolving package dependencies...
Nothing to do.
35. Install a package (say mariadb) if it is not installed; if it is already installed, update it.
# zypper in mariadb
Loading repository data...
Reading installed packages...
'mariadb' is already installed.
No update candidate for 'mariadb-10.0.13-2.6.1.x86_64'. The highest available version is already installed.
Resolving package dependencies...
Nothing to do.
#### Install Source and Build Dependencies ####
You may use the zypper source-install or zypper si command to install source packages and their build dependencies.
36. Install the source package and its build dependencies for a package (say mariadb).
# zypper si mariadb
Reading installed packages...
Loading repository data...
Resolving package dependencies...
The following 36 NEW packages are going to be installed:
autoconf automake bison cmake cpp cpp48 gcc gcc48 gcc48-c++ gcc-c++ libaio-devel libarchive13 libasan0 libatomic1-gcc49 libcloog-isl4 libedit-devel libevent-devel libgomp1-gcc49 libisl10
libitm1-gcc49 libltdl7 libmpc3 libmpfr4 libopenssl-devel libstdc++48-devel libtool libtsan0-gcc49 m4 make ncurses-devel pam-devel readline-devel site-config tack tcpd-devel zlib-devel
The following source package is going to be installed:
mariadb
36 new packages to install, 1 source package.
Overall download size: 71.5 MiB. Already cached: 129.5 KiB After the operation, additional 183.9 MiB will be used.
Continue? [y/n/? shows all options] (y): y
37. Perform a dry run (-D) of installing a package (say mariadb), without actually changing anything.
# zypper in -D mariadb
Loading repository data...
Reading installed packages...
'mariadb' is already installed.
No update candidate for 'mariadb-10.0.13-2.6.1.x86_64'. The highest available version is already installed.
Resolving package dependencies...
Nothing to do.
38. Install only the build dependencies for a package (say mariadb).
# zypper si -d mariadb
Reading installed packages...
Loading repository data...
Resolving package dependencies...
The following 36 NEW packages are going to be installed:
autoconf automake bison cmake cpp cpp48 gcc gcc48 gcc48-c++ gcc-c++ libaio-devel libarchive13 libasan0 libatomic1-gcc49 libcloog-isl4 libedit-devel libevent-devel libgomp1-gcc49 libisl10
libitm1-gcc49 libltdl7 libmpc3 libmpfr4 libopenssl-devel libstdc++48-devel libtool libtsan0-gcc49 m4 make ncurses-devel pam-devel readline-devel site-config tack tcpd-devel zlib-devel
The following package is recommended, but will not be installed due to conflicts or dependency issues:
readline-doc
36 new packages to install.
Overall download size: 33.7 MiB. Already cached: 129.5 KiB After the operation, additional 144.3 MiB will be used.
Continue? [y/n/? shows all options] (y): y
#### Zypper in Scripts and Applications ####
39. Install a package (say mariadb) without user interaction.
# zypper --non-interactive in mariadb
Loading repository data...
Reading installed packages...
'mariadb' is already installed.
No update candidate for 'mariadb-10.0.13-2.6.1.x86_64'. The highest available version is already installed.
Resolving package dependencies...
Nothing to do.
40. Remove a package (say mariadb) without user interaction.
# zypper --non-interactive rm mariadb
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following package is going to be REMOVED:
mariadb
1 package to remove.
After the operation, 71.8 MiB will be freed.
Continue? [y/n/? shows all options] (y): y
(1/1) Removing mariadb-10.0.13-2.6.1 .............................................................................[done]
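In scripts it pays to branch on zypper's exit status rather than parsing its output. A sketch, assuming the convention documented in zypper(8) that 0 means success and that codes of 100 and above are informational rather than fatal:

```shell
# Sketch: wrap a non-interactive install and react to the exit code.
# Treating 100-109 as "informational" is an assumption from zypper(8).
install_pkg() {
    zypper --non-interactive in "$1"
    rc=$?
    case $rc in
        0)       echo "installed: $1" ;;
        10[0-9]) echo "informational exit ($rc) for $1" ;;
        *)       echo "zypper failed ($rc) for $1" >&2; return "$rc" ;;
    esac
}

# e.g. install_pkg mariadb || echo "handle the failure here"
```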
41. Output zypper results in XML.
# zypper --xmlout
Usage:
zypper [--global-options] <command> [--command-options] [arguments]
Global Options
....
42. Generate quiet output during installation.
# zypper --quiet in mariadb
The following NEW package is going to be installed:
mariadb
1 new package to install.
Overall download size: 0 B. Already cached: 7.8 MiB After the operation, additional 71.8 MiB will be used.
Continue? [y/n/? shows all options] (y):
...
43. Generate quiet output during removal.
# zypper --quiet rm mariadb
44. Automatically agree to licenses/agreements.
# zypper patch --auto-agree-with-licenses
Loading repository data...
Reading installed packages...
Resolving package dependencies...
Nothing to do.
#### Clean Zypper Cache and View History ####
45. If you want to clean only the zypper package cache, use the following command.
# zypper clean
All repositories have been cleaned up.
If you want to clean the metadata and package caches at once, pass --all/-a to zypper clean:
# zypper clean -a
All repositories have been cleaned up.
46. All packages installed, updated or removed through zypper are logged in /var/log/zypp/history. You may cat the file to view it, or filter it to get custom output.
# cat /var/log/zypp/history
2015-05-07 15:43:03|install|boost-license1_54_0|1.54.0-10.1.3|noarch||openSUSE-13.2-0|0523b909d2aae5239f9841316dafaf3a37b4f096|
2015-05-07 15:43:03|install|branding-openSUSE|13.2-3.6.1|noarch||openSUSE-13.2-0|6609def94b1987bf3f90a9467f4f7ab8f8d98a5c|
2015-05-07 15:43:03|install|bundle-lang-common-en|13.2-3.3.1|noarch||openSUSE-13.2-0|ca55694e6fdebee6ce37ac7cf3725e2aa6edc342|
2015-05-07 15:43:03|install|insserv-compat|0.1-12.2.2|noarch||openSUSE-13.2-0|6160de7fbf961a279591a83a1550093a581214d9|
2015-05-07 15:43:03|install|libX11-data|1.6.2-5.1.2|noarch||openSUSE-13.2-0|f1cb58364ba9016c1f93b1a383ba12463c56885a|
2015-05-07 15:43:03|install|libnl-config|3.2.25-2.1.2|noarch||openSUSE-13.2-0|aab2ded312a781e93b739b418e3d32fe4e187020|
2015-05-07 15:43:04|install|wireless-regdb|2014.06.13-1.2|noarch||openSUSE-13.2-0|be8cb16f3e92af12b5ceb977e37e13f03c007bd1|
2015-05-07 15:43:04|install|yast2-trans-en_US|3.1.0-2.1|noarch||openSUSE-13.2-0|1865754e5e0ec3c149ac850b340bcca55a3c404d|
2015-05-07 15:43:04|install|yast2-trans-stats|2.19.0-16.1.3|noarch||openSUSE-13.2-0|b107d2b3e702835885b57b04d12d25539f262d1a|
2015-05-07 15:43:04|install|cracklib-dict-full|2.8.12-64.1.2|x86_64||openSUSE-13.2-0|08bd45dbba7ad44e3a4837f730be76f55ad5dcfa|
......
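Since the history file is pipe-delimited (`date|action|package|version|arch|…`), standard text tools can filter it. Below is a minimal sketch using sample lines in the same format (the awk filter works unchanged on the real log):

```shell
# Print the names of packages recorded with the "install" action.
# The sample lines mimic the pipe-delimited format of /var/log/zypp/history.
printf '%s\n' \
  '2015-05-07 15:43:03|install|libX11-data|1.6.2-5.1.2|noarch|' \
  '2015-05-07 15:44:10|remove|mariadb|10.0.13|x86_64|' |
  awk -F'|' '$2 == "install" { print $3 }'
# → libX11-data
```

On a real system, point the same filter at the log itself: `awk -F'|' '$2 == "install" { print $3 }' /var/log/zypp/history`.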
#### Upgrade Suse Using Zypper ####
47. You can use the dist-upgrade option with the zypper command to upgrade your current SUSE Linux to the most recent version.
# zypper dist-upgrade
You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Building repository 'openSUSE-13.2-0' cache .....................................................................[done]
Retrieving repository 'openSUSE-13.2-Debug' metadata ............................................................[done]
Building repository 'openSUSE-13.2-Debug' cache .................................................................[done]
Retrieving repository 'openSUSE-13.2-Non-Oss' metadata ..........................................................[done]
Building repository 'openSUSE-13.2-Non-Oss' cache ...............................................................[done]
That's all for now. I hope this article helps you in managing your SUSE systems and servers, especially if you are a newbie. If you feel that I left out certain commands (humans are error-prone), you may provide feedback in the comments so that we can update the article. Keep connected, keep commenting, stay tuned. Kudos!
--------------------------------------------------------------------------------
via: http://www.tecmint.com/zypper-commands-to-manage-suse-linux-package-management/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/installation-of-suse-linux-enterprise-server-12/


@ -0,0 +1,96 @@
A Shell Script to Monitor Network, Disk Usage, Uptime, Load Average and RAM Usage in Linux
================================================================================
The duty of a system administrator is really tough, as he/she has to monitor servers, users and logs, create backups, and so on. For the most repetitive tasks, most administrators write a script to automate their day-to-day work. Here we have written a shell script that does not aim to automate the work of a typical sysadmin, but may be helpful in places, especially for newbies who can get most of the information they require about their system: network, users, load, RAM, host, internal IP, external IP, uptime, etc.
We have taken care of formatting the output (to a certain extent). The script doesn't contain any malicious content, and it can be run using a normal user account. In fact, it is recommended to run it as a regular user and not as root.
![Linux Server Health Monitoring](http://www.tecmint.com/wp-content/uploads/2015/05/Linux-Health-Monitoring.png)
Shell Script to Monitor Linux System Health
You are free to use/modify/redistribute the below piece of code by giving proper credit to Tecmint and Author. We have tried to customize the output to the extent that nothing other than the required output is generated. We have tried to use those variables which are generally not used by Linux System and are probably free.
#### Minimum System Requirement ####
All you need to have is a working Linux box.
#### Dependency ####
No dependencies are required to use this script on a standard Linux distribution. Moreover, the script doesn't require root permission to run. However, if you want to install it, you need to enter the root password once.
#### Security ####
We have taken care to ensure the security of the system. No additional packages are required/installed, and no root access is needed to run the script. Moreover, the code has been released under the Apache 2.0 License, which means you are free to edit, modify and redistribute it while keeping the Tecmint copyright.
### How Do I Install and Run Script? ###
First, use the following [wget command][1] to download the monitor script `"tecmint_monitor.sh"` and make it executable by setting appropriate permissions.
# wget http://tecmint.com/wp-content/scripts/tecmint_monitor.sh
# chmod 755 tecmint_monitor.sh
It is strongly advised to install the script as user and not as root. It will ask for root password and will install the necessary components at required places.
To install the `"tecmint_monitor.sh"` script, simply use the -i (install) option as shown below.
./tecmint_monitor.sh -i
Enter root password when prompted. If everything goes well you will get a success message like shown below.
Password:
Congratulations! Script Installed, now run monitor Command
After installation, you can run the script by calling the command `'monitor'` from any location or user. If you don't want to install it, you need to include its location every time you want to run it.
# ./Path/to/script/tecmint_monitor.sh
Now run monitor command from anywhere using any user account simply as:
$ monitor
![TecMint Monitor Script in Action](http://www.tecmint.com/wp-content/uploads/2015/05/TecMint-Monitor-Script.gif)
As soon as you run the command you get various System related information which are:
- Internet Connectivity
- OS Type
- OS Name
- OS Version
- Architecture
- Kernel Release
- Hostname
- Internal IP
- External IP
- Name Servers
- Logged In users
- RAM Usage
- Swap Usage
- Disk Usage
- Load Average
- System Uptime
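A few of the items listed above can be gathered with one-liners similar to those the script uses internally; this is a minimal sketch of the idea, not the actual tecmint_monitor.sh code:

```shell
# Print a small subset of the information the script reports.
echo "Hostname       : $(uname -n)"
echo "Kernel Release : $(uname -r)"
echo "Architecture   : $(uname -m)"
echo "Logged In users: $(who | wc -l)"
```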
Check the installed version of script using -v (version) switch.
$ monitor -v
tecmint_monitor version 0.1
Designed by Tecmint.com
Released Under Apache 2.0 License
### Conclusion ###
This script works out of the box on the few machines I have checked, and it should work the same for you as well. If you find any bugs, let us know in the comments. This is not the end; this is the beginning. You can take it to any level from here. If you feel like editing the script and carrying it further, you are free to do so, giving us proper credit, and also share the updated script with us so that we can update this article crediting you.
Don't forget to share your thoughts or your script with us. We will be here to help you. Thank you for all the love you have given us. Keep connected! Stay tuned.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-server-health-monitoring-script/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/10-wget-command-examples-in-linux/


@ -0,0 +1,107 @@
How To Run Docker Client Inside Windows OS
================================================================================
Hi everyone, today we'll learn about Docker on the Windows operating system and about installing the Docker Windows client on it. Docker Engine uses Linux-specific kernel features, so it cannot run on the Windows kernel; instead, it runs inside a small virtual machine running Linux and utilizes its resources and kernel. The Windows Docker client uses this virtualized Docker Engine to build, run and manage Docker containers out of the box. The Boot2Docker team has developed an application, called Boot2Docker, which creates the virtual machine running a small Linux system based on [Tiny Core Linux][1], made specifically to run [Docker][2] containers on Windows. It runs completely from RAM, weighs ~27MB and boots in ~5s (YMMV). So, until a Docker Engine for Windows is developed, we can only run Linux containers on our Windows machine.
Here are some easy and simple steps that will allow us to install the Docker client and run containers on top of it.
### 1. Downloading Boot2Docker ###
Before we start the installation, we'll need the Boot2Docker executable. The latest version of Boot2Docker can be downloaded from [its GitHub releases page][3]. In this tutorial, we'll download version v1.6.1: grab the file named [docker-install.exe][4] from that page using your favorite web browser or download manager.
![](http://blog.linoxide.com/wp-content/uploads/2015/05/downloading-boot2docker-installer.png)
### 2. Installing Boot2Docker ###
Now, we'll simply run the installer, which will install the Windows Docker client, Git for Windows (MSYS-git), VirtualBox, the Boot2Docker Linux ISO, and the Boot2Docker management tool, all of which are essential for Docker Engine to function out of the box.
![](http://blog.linoxide.com/wp-content/uploads/2015/05/boot2docker-installer.png)
### 3. Running Boot2Docker ###
![](http://blog.linoxide.com/wp-content/uploads/2015/05/boot2docker-start-icon-e1431322598697.png)
After installing the necessary components, we'll run Boot2Docker by simply launching the Boot2Docker Start shortcut from the desktop. It will ask you to enter an SSH key passphrase, which we'll need later for authentication. It will then start a Unix shell already configured to manage Docker running inside the virtual machine.
![](http://blog.linoxide.com/wp-content/uploads/2015/05/starting-boot2docker.png)
Now to check whether it is correctly configured or not, simply run docker version as shown below.
docker version
![](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-version.png)
### 4. Running Docker ###
As **Boot2Docker Start** automatically starts a shell with the environment variables correctly set, we can simply start using Docker right away. **Please note that if we are using Boot2Docker as a remote Docker daemon, we should not type sudo before the docker commands.**
Now, let's try the **hello-world** example, which will download the hello-world image, execute it, and print a "Hello from Docker" message.
$ docker run hello-world
![](http://blog.linoxide.com/wp-content/uploads/2015/05/running-hello-world.png)
### 5. Running Docker using Command Prompt (CMD) ###
Now, if you want to get started with Docker using the Command Prompt, simply launch it (CMD.exe). As Boot2Docker requires ssh.exe to be in the PATH, we need to include the bin folder of the Git installation in the %PATH% environment variable by running the following command in the Command Prompt.
set PATH=%PATH%;"c:\Program Files (x86)\Git\bin"
![](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-in-cmd.png)
After running the above command, we can run the **boot2docker start** command in the command prompt to start the Boot2Docker VM.
boot2docker start
![](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-cmd-variables.png)
**Note**: If you get an error saying machine does not exist then run **boot2docker init** command in it.
Then copy the instructions for cmd.exe shown in the console to set the environment variables to the console window and we are ready to run docker containers as usual.
### 6. Running Docker using PowerShell ###
In order to run Docker on PowerShell, we simply need to launch a PowerShell window then add ssh.exe to our PATH variable.
$Env:Path = "${Env:Path};c:\Program Files (x86)\Git\bin"
After running the above command, we'll need to run
boot2docker start
![](http://blog.linoxide.com/wp-content/uploads/2015/05/docker-in-powershell.png)
This will print PowerShell commands to set the environment variables to connect Docker running inside VM. We'll simply run those commands in the PowerShell and we are ready to run docker containers as usual.
### 7. Logging with PUTTY ###
Boot2Docker generates and uses a public/private key pair inside the %USERPROFILE%\.ssh directory, so to log in we'll need to use the private key from that directory. The private key needs to be converted into PuTTY's format; we can use puttygen.exe to do that.
We need to open puttygen.exe and load ("File"->"Load" menu) the private key from %USERPROFILE%\.ssh\id_boot2docker, then click "Save Private Key". Then use the saved file to log in with PuTTY using docker@127.0.0.1:2022.
### 8. Boot2Docker Options ###
The Boot2Docker management tool provides several commands as shown below.
$ boot2docker
Usage: boot2docker.exe [<options>] {help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|ip|shellinit|delete|download|upgrade|version} [<args>]
### Conclusion ###
Using Docker with the Docker Windows client is fun. The Boot2Docker management tool is an awesome application that enables Docker containers to run as smoothly as on a Linux host. If you are curious, the username of the default boot2docker user is docker and the password is tcuser. The latest version of boot2docker sets up a host-only network adapter which provides access to the containers' ports. Typically, it is 192.168.59.103, but it could get changed by VirtualBox's DHCP implementation. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve or update our content. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/run-docker-client-inside-windows-os/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://tinycorelinux.net/
[2]:https://www.docker.io/
[3]:https://github.com/boot2docker/windows-installer/releases/latest
[4]:https://github.com/boot2docker/windows-installer/releases/download/v1.6.1/docker-install.exe


@ -0,0 +1,64 @@
Translating by GOLinux!
Linux FAQs with Answers--How to view torrent file content on Linux
================================================================================
> **Question**: I have a torrent file downloaded from the web. Is there a tool that allows me to view the content of a torrent on Linux? For example, I want to know what files are included inside a torrent.
A torrent file (i.e., a file with **.torrent** extension) is a BitTorrent metadata file which stores information (e.g., tracker URLs, file list, sizes, checksums, creation date) needed by a BitTorrent client to download files shared on BitTorrent peer-to-peer networks. Inside a single torrent file, one or more files can be listed for sharing.
The content of a torrent file is encoded with BEncode, the BitTorrent's data serialization format. Thus to view the content of a torrent file, you need a corresponding decoder.
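As a quick illustration of what BEncode looks like (the values below are made up for demonstration; real torrents should be parsed with a proper decoder such as dumptorrent): strings are encoded as `<length>:<content>`, integers as `i<number>e`, and dictionaries as `d...e`.

```shell
# A minimal hand-built bencoded dictionary with two keys:
# "announce" (a tracker URL) and "name" (a file name).
bencoded='d8:announce24:http://tracker.example/a4:name8:demo.isoe'
# Extract the value of the "name" key (8 bytes follow "4:name8:"):
printf '%s' "$bencoded" | sed 's/.*4:name8:\(.\{8\}\).*/\1/'
# → demo.iso
```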
In fact, any GUI-based BitTorrent client (e.g., Transmission or uTorrent) is equipped with a BEncode decoder, so it can show you the content of a torrent file by opening it. However, if you don't want to use a BitTorrent client to inspect a torrent file, you can try a command-line torrent viewer called [dumptorrent][1].
The **dumptorrent** command prints the detailed content of a torrent file (e.g., file names, sizes, tracker URLs, creation date, info hash, etc.) by using a built-in BEncode decoder.
### Install DumpTorrent on Linux ###
To install dumptorrent on Linux, you can build it from the source.
On Debian, Ubuntu or Linux Mint:
$ sudo apt-get install gcc make
$ wget http://downloads.sourceforge.net/project/dumptorrent/dumptorrent/1.2/dumptorrent-1.2.tar.gz
$ tar -xvf dumptorrent-1.2.tar.gz
$ cd dumptorrent-1.2
$ make
$ sudo cp dumptorrent /usr/local/bin
On CentOS, Fedora or RHEL:
$ sudo yum install gcc make
$ wget http://downloads.sourceforge.net/project/dumptorrent/dumptorrent/1.2/dumptorrent-1.2.tar.gz
$ tar -xvf dumptorrent-1.2.tar.gz
$ cd dumptorrent-1.2
$ make
$ sudo cp dumptorrent /usr/local/bin
Make sure that /usr/local/bin is [included][2] in your PATH.
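If it is not, here is a sketch of how to append it for the current shell session (add the same export line to ~/.bashrc to make the change permanent):

```shell
# Append /usr/local/bin to the command search path and verify it is there.
export PATH="$PATH:/usr/local/bin"
echo "$PATH" | tr ':' '\n' | grep -qx '/usr/local/bin' && echo "PATH OK"
# → PATH OK
```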
### View the Content of a Torrent ###
To check the content of a torrent, simply run dumptorrent with a torrent file as an argument. This will print a summary of a torrent, including file names, sizes and tracker URL.
$ dumptorrent <torrent-file>
![](https://farm8.staticflickr.com/7729/16816455904_b051e29972_b.jpg)
To view the full content of a torrent, add "-v" option. This will print more detailed information of a torrent, including info-hash, piece length, creation date, creator, and full announce list.
$ dumptorrent -v <torrent-file>
![](https://farm6.staticflickr.com/5331/17438628461_1f6675bd77_b.jpg)
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/view-torrent-file-content-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://dumptorrent.sourceforge.net/
[2]:http://ask.xmodulo.com/change-path-environment-variable-linux.html


@ -1,40 +0,0 @@
这个工具可以提醒你一个区域内的“假面猎手”接入点evil twin
===============================================================================
**开发人员称EvilAP_Defender甚至可以攻击流氓Wi-Fi接入点**
一个新的开源工具可以定期扫描一个区域以防流氓Wi-Fi接入点同时如果发现情况会提醒网络管理员。
这个工具叫做EvilAP_Defender是为监测攻击者配置的恶意接入点而专门设计的这些接入点冒用合法的名字诱导用户连接上。
这类接入点被称做假面猎手,使得黑客们从接入的设备上监听互联网信息流。这可以被用来窃取证书,破坏网站等等。
大多数用户设置他们的计算机和设备自动连接一些无线网络比如家里的或者工作地方的网络。尽管如此当面对两个同名的无线网络时即SSID相同有时候甚至MAC地址也相同大多数设备会自动连接信号较强的一个。
这使得假面猎手的攻击容易实现因为SSID和BSSID都可以伪造。
[EvilAP_Defender][1]是一个叫Mohamed Idris的人用Python语言编写公布在GitHub上面。它可以使用一个计算机的无线网卡来发现流氓接入点这些接入点复制了一个真实接入点的SSIDBSSID甚至是其他的参数如通道密码隐私协议和认证信息。
该工具首先以学习模式运行以发现合法的接入点AP并将其加入白名单然后切换到正常模式开始扫描未经认证的接入点。
如果发现了一个恶意AP该工具会通过电子邮件提醒网络管理员开发者也打算在未来加入短信提醒功能。
该工具还有一个保护模式在这种模式下应用会对恶意接入点发起一次拒绝服务DoS攻击为管理员采取防御措施赢得一些时间。
“DoS不仅针对有着相同SSID的恶意AP也针对BSSIDAP的MAC地址不同或者在不同信道的恶意AP”Idris在这款工具的文档中说道“这是为了避免攻击到你自己的合法网络。”
尽管如此,用户应该切记,在许多国家,攻击别人的接入点,哪怕是一个由攻击者操控的恶意接入点,在很多时候也是非法的。
为了能够运行这款工具需要Aircrack-ng无线工具套件、一块支持Aircrack-ng的无线网卡以及MySQL和Python运行环境。
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2905725/security0/this-tool-can-alert-you-about-evil-twin-access-points-in-the-area.html
作者:[Lucian Constantin][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/Lucian-Constantin/
[1]:https://github.com/moha99sa/EvilAP_Defender/blob/master/README.TXT


@ -0,0 +1,110 @@
什么是好的命令行HTTP客户端?
==============================================================================
整体大于各部分之和这是引自希腊哲学家和科学家亚里士多德的名言。这句话用在Linux上特别贴切。在我看来Linux最强大的地方之一就是它的协作性。Linux的实用性并不仅仅源自大量的开源命令行程序其协作性更来自于这些程序的综合利用有时是结合成更大型的应用。
Unix哲学引发了一场“软件工具”的运动关注开发简洁基础干净模块化和扩展性好的代码并可以运用于其他的项目。这种哲学为许多的Linux项目留下了一个重要的元素。
好的开源开发者编写程序时,会确保该程序尽可能正确地运行,同时能与其他程序良好协作。目标就是让使用者拥有一堆方便的工具,每一个都力求只做好一件事。许多程序能独立工作得很好。
这篇文章讨论3个开源的命令行HTTP客户端。这些客户端可以让你使用命令行从互联网上下载文件同时它们也可以用于许多有意思的场合如测试、调试以及与HTTP服务器或网络应用交互。对于HTTP架构师和API设计人员来说使用命令行操作HTTP是一个值得花时间学习的技能。如果你需要每天和API打交道HTTPie和cURL将会是无价之宝。
-------------
![HTTPie](http://www.linuxlinks.com/portal/content2/png/HTTPie.png)
![HTTPie in action](http://www.linuxlinks.com/portal/content/reviews/Internet/Screenshot-httpie.png)
HTTPie发音 aych-tee-tee-pie是一款开源的命令行HTTP客户端。它是一个类似cURL的命令行工具。
该软件的目标是使得与网络服务器的交互尽可能的人性化。其提供了一个简单的http命令允许使用简单且自然的语句发送任意的HTTP请求并显示不同颜色的输出。HTTPie可以用于测试调式和与HTTP服务器的一般交互。
#### 功能包括:####
- 表达力强、直观的语法
- 格式化,颜色区分的终端输出
- 内建JSON支持
- 表单和文件上传
- HTTPS代理和认证
- 任意数据请求
- 自定义HTTP头header
- 持久会话
- 类Wget下载
- Python 2.62.7和3.x支持
- LinuxMac OS X 和 Windows支持
- 支持插件
- 帮助文档
- 高测试覆盖率
- 网站:[httpie.org][1]
- 开发者: Jakub Roztočil
- 证书: 开源
- 版本号: 0.9.2
----------
![cURL](http://www.linuxlinks.com/portal/content2/png/cURL1.png)
![cURL in action](http://www.linuxlinks.com/portal/content/reviews/Internet/Screenshot-cURL.png)
cURL是一个开源命令行工具用于使用URL语句传输数据支持DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS,IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, TELNET和TFTP。
cURL支持SSL证书、HTTP POST、HTTP PUT、FTP上传、基于表单的HTTP上传、代理、缓存、用户名+密码认证Basic、Digest、NTLM、Negotiate、Kerberos等、断点续传、代理隧道以及大量其他的实用技巧。
#### 功能包括:####
- 配置文件支持
- 一个单独命令行多个URL
- URL序列“globbing”支持[0-13]{one,two,three}
- 一个命令上传多个文件
- 自定义最大传输速度
- 重定向标准错误输出
- Metalink支持
- 网站: [curl.haxx.se][2]
- 开发者: Daniel Stenberg
- 证书: MIT/X derivate license
- 版本号: 7.42.0
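下面是一个不需要联网就能体验cURL用法的小例子假设系统中已安装curl同样的命令行开关对HTTP URL也适用

```shell
# curl 也支持 FILE 协议;先准备一个本地文件,再像请求 URL 一样获取它。
# -s 表示静默模式(不显示进度条和错误信息)。
echo 'hello from curl' > /tmp/curl-demo.txt
curl -s file:///tmp/curl-demo.txt
# → hello from curl
```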
----------
![Wget](http://www.linuxlinks.com/portal/content2/png/Wget1.png)
![Wget in action](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Wget.png)
Wget是一个从网络服务器获取信息的开源软件其名字源于World Wide Web和get。Wget支持HTTP、HTTPS和FTP协议同时也能通过HTTP代理获取信息。
Wget可以根据HTML页面上的链接创建远程网站的本地镜像并完全重建源站点的目录结构。这被称为“递归下载”recursive downloading
Wget在设计上就考虑了在低速或不稳定的网络连接下稳健地工作。
功能包括:
- 使用REST和RANGE恢复中断的下载
- 支持文件名通配符和目录递归镜像
- 多语言的基于NLS的消息文件
- 选择性地转换下载文档里的绝对链接为相对链接,使得下载的文档可以在本地相互链接
- 在大多数类UNIX操作系统和微软Windows上运行
- 支持HTTP代理
- 支持HTTP数据缓存
- 支持持久HTTP连接
- 无人照管/后台操作
- 镜像mirroring时根据本地文件时间戳来决定是否需要重新下载文档
- 站点: [www.gnu.org/software/wget/][3]
- 开发者: Hrvoje Niksic, Gordon Matzigkeit, Junio Hamano, Dan Harkless, and many others
- 证书: GNU GPL v3
- 版本号: 1.16.3
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20150425174537249/HTTPclients.html
作者Frazer Kline
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://httpie.org/
[2]:http://curl.haxx.se/
[3]:https://www.gnu.org/software/wget/


@ -0,0 +1,135 @@
什么是Linux上实用的命令行式网络监视工具
===============================================================================
对任何规模的业务来说,网络监视器都是一个重要的功能。网络监视的目标可能千差万别。比如,监视活动的目标可以是保证长期的网络供应、安全保护、对性能进行排查、网络使用统计等。由于它的目标不同,网络监视器使用很多不同的方式来完成任务。比如使用包层面的嗅探,使用流层面的统计数据,向网络中注入探测的流量,分析服务器日志等。
尽管有许多专用的网络监视系统可以365天24小时监视但您依旧可以在特定情况下使用命令行式的网络监视器它们在某些场景下很有用。如果您是系统管理员那您就应该有亲身使用一些知名的命令行式网络监视器的经历。这里有一份**Linux上流行且实用的网络监视器**列表。
### 包层面的嗅探器 ###
在这个类别下监视工具在链路上捕捉独立的包分析它们的内容展示解码后的内容或者包层面的统计数据。这些工具在最底层对网络进行监视、管理同样的也能进行最细粒度的监视其代价是部分网络I/O和分析的过程。
1. **dhcpdump**一个命令行式的DHCP流量嗅探工具捕捉DHCP的请求/回复流量并以用户友好的方式显示解码的DHCP协议消息。这是一款排查DHCP相关故障的实用工具。
2. **[dsniff][1]**一个基于命令行的嗅探工具集合拥有欺骗和劫持功能被设计用于网络审查和渗透测试。它可以嗅探多种信息比如密码、NSF流量、email消息、网络地址等。
3. **[httpry][2]**一个HTTP报文嗅探器用于捕获、解码HTTP请求和回复报文并以用户友好的方式显示这些信息。
4. **IPTraf**:基于命令行的网络统计数据查看器。它实时显示包层面、连接层面、接口层面、协议层面的报文/字节数。抓包过程由协议过滤器控制,且操作过程全部是菜单驱动的。
![](https://farm8.staticflickr.com/7519/16055246118_8ea182b413_c.jpg)
5. **[mysql-sniffer][3]**一个用于抓取、解码MySQL请求相关的数据包的工具。它以可读的方式显示最频繁或全部的请求。
6. **[ngrep][4]**在网络报文中执行grep。它能实时抓取报文并用正则表达式或十六进制表达式的方式匹配报文。它是一个可以对异常流量进行检测、存储或者对实时流中特定模式报文进行抓取的实用工具。
7. **[p0f][5]**一个被动的基于包嗅探的指纹采集工具可以可靠地识别操作系统、NAT或者代理设置、网络链路类型以及许多其他与活动的TCP连接相关的属性。
8. **pktstat**一个命令行式的工具通过实时分析报文显示连接带宽使用情况以及相关的协议例如HTTP GET/POST、FTP、X11等描述信息。
![](https://farm8.staticflickr.com/7477/16048970999_be60f74952_b.jpg)
9. **Snort**:一个入侵检测和预防工具,通过规则驱动的协议分析和内容匹配,来检测/预防活跃流量中各种各样的后门、僵尸网络、网络钓鱼、间谍软件攻击。
10. **tcpdump**:一个命令行的嗅探工具,可以基于过滤表达式抓取网络中的报文,分析报文并且在包层面输出报文内容以便于包层面的分析。它在许多网络相关的错误排查、网络程序调试或[安全][6]监测方面应用广泛。
11. **tshark**一个与Wireshark窗口程序一起使用的命令行嗅探工具。它能捕捉、解码网络上的实时报文并能以用户友好的方式显示其内容。
### 流/进程/接口层面的监视 ###
在这个分类中网络监视器通过把流量分为流、进程或接口来收集每个流、每个进程、每个接口的统计数据。其信息的来源可以是libpcap抓包库或者sysfs内核虚拟文件系统。这些工具的监视成本很低但是缺乏包层面的检视能力。
12. **bmon**:一个基于命令行的带宽监测工具,可以显示各种接口相关的信息,不但包括接收/发送的总值/平均值统计数据,而且拥有历史带宽使用视图。
![](https://farm9.staticflickr.com/8580/16234265932_87f20c5d17_b.jpg)
13. **[iftop][7]**一个带宽使用监测工具可以实时显示某个网络连接的带宽使用情况。它对所有带宽使用情况排序并通过ncurses的接口来进行可视化。他可以方便的监视哪个连接消耗了最多的带宽。
14. **nethogs**:一个进程监视工具,提供进程相关的实时的上行/下行带宽使用信息并基于ncurses显示。它对检测占用大量带宽的进程很有用。
15. **netstat**一个显示许多TCP/UDP的网络堆栈统计信息的工具。诸如网络接口发送/接收、路由表、协议/套接字的统计信息和属性。当您诊断与网络堆栈相关的性能、资源使用时它很有用。
16. **[speedometer][8]**:一个可视化某个接口发送/接收的带宽使用的历史趋势并且基于ncurses的条状图进行显示的工具。
![](https://farm8.staticflickr.com/7485/16048971069_31dd573a4f_c.jpg)
17. **[sysdig][9]**一个对Linux子系统拥有统一调试接口的系统级综合性debug工具。它的网络监视模块可以监视在线或离线、许多进程/主机相关的网络统计数据,例如带宽、连接/请求数等。
18. **tcptrack**一个TCP连接监视工具可以显示活动的TCP连接包括源/目的IP地址/端口、TCP状态、带宽使用等。
![](https://farm8.staticflickr.com/7507/16047703080_5fdda2e811_b.jpg)
19. **vnStat**:一个维护了基于接口的历史接收/发送带宽视图(例如,当前、每日、每月)的流量监视器。作为一个后台守护进程,它收集并存储统计数据,包括接口带宽使用率和传输字节总数。
### 主动网络监视器 ###
不同于前面提到的被动的监听工具,这个类别的工具们在监听时会主动的“注入”探测内容到网络中,并且会收集相应的反应。监听目标包括路由路径、可供使用的带宽、丢包率、延时、抖动、系统设置或者缺陷等。
20. **[dnsyo][10]**一个DNS检测工具能够管理多达1500个不同网络的开放解析器集群的DNS查询。它在您检查DNS传播或排查DNS设置的时候很有用。
21. **[iperf][11]**一个TCP/UDP带宽测量工具能够测量两个结点间最大可用带宽。它通过在两个主机间单向或双向的输出TCP/UDP探测流量来测量可用的带宽。它在监测网络容量、调谐网络协议栈参数时很有用。一个叫做[netperf][12]的变种拥有更多的功能及更好的统计数据。
22. **[netcat][13]/socat**通用的网络debug工具可以对TCP/UDP套接字进行读、写或监听。它通常和其他的程序或脚本结合起来在后端对网络传输或端口进行监听。
23. **nmap**一个命令行端口扫描和网络发现工具。它依赖于若干基于TCP/UDP的扫描技术来查找开放的端口、活动的主机或者在本地网络存在的操作系统。它在你审查本地主机漏洞或者建立主机映射时很有用。[zmap][14]是一个类似的替代品,是一个用于互联网范围的扫描工具。
24. ping一个常用的网络测试工具。通过对ICMP的echo和reply报文进行增强来实现其功能。它在测量路由的RTT、丢包率以及检测远端系统防火墙规则时很有用。ping的变种有更漂亮的界面例如[noping][15])、多协议支持(例如,[hping][16])或者并行探测能力(例如,[fping][17])。
![](https://farm8.staticflickr.com/7466/15612665344_a4bb665a5b_c.jpg)
25. **[sprobe][18]**一个启发式推断本地主机和任意远端IP地址的网络带宽瓶颈的命令行工具。它使用TCP三次握手机制来评估带宽的瓶颈。它在检测大范围网络性能和路由相关的问题时很有用。
26. **traceroute**:一个能发现从本地到远端主机的第三层路由/转发路径的网络发现工具。它发送有限TTL的探测报文收集中间路由的ICMP反馈信息。它在排查低速网络连接或者路由相关的问题时很有用。traceroute的变种有更好的RTT统计功能例如[mtr][19])。
### 应用日志解析器 ###
在这个类别下网络监测器把特定的服务器应用程序作为目标例如web服务器或者数据库服务器。由服务器程序产生或消耗的网络流量通过它的日志被分析和监测。不像前面提到的网络层的监视器这个类别的工具能够在应用层面分析和监控网络流量。
27. **[GoAccess][20]**一个针对Apache和Nginx服务器流量的交互式查看器。基于对获取到的日志的分析它能展示包括日访问量、最多请求、客户端操作系统、客户端位置、客户端浏览器等在内的多个实时的统计信息并以滚动方式显示。
![](https://farm8.staticflickr.com/7518/16209185266_da6c5c56eb_c.jpg)
28. **[mtop][21]**一个面向MySQL/MariaDB服务器的命令行监视器它可以将当前数据库服务器负载中代价最大的查询以可视化的方式进行显示。它在您优化MySQL服务器性能、调谐服务器参数时很有用。
![](https://farm8.staticflickr.com/7472/16047570248_bc996795f2_c.jpg)
29. **[ngxtop][22]**一个面向Nginx和Apache服务器的流量监测工具能够以类似top指令的方式可视化的显示Web服务器的流量。它解析web服务器的查询日志文件并收集某个目的地或请求的流量统计信息。
### 总结 ###
在这篇文章中,我展示了许多的命令行式监测工具,从最底层的包层面的监视器到最高层应用程序层面的网络监视器。知道那个工具的作用是一回事,选择哪个工具使用又是另外一回事。单一的一个工具不能作为您每天使用的通用的解决方案。一个好的系统管理员应该能决定哪个工具更适合当前的环境。希望这个列表对此有所帮助。
欢迎您通过回复来改进这个列表的内容!
--------------------------------------------------------------------------------
via: http://xmodulo.com/useful-command-line-network-monitors-linux.html
作者:[Dan Nanni][a]
译者:[wwy-hust](https://github.com/wwy-hust)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://www.monkey.org/~dugsong/dsniff/
[2]:http://xmodulo.com/monitor-http-traffic-command-line-linux.html
[3]:https://github.com/zorkian/mysql-sniffer
[4]:http://ngrep.sourceforge.net/
[5]:http://lcamtuf.coredump.cx/p0f3/
[6]:http://xmodulo.com/recommend/firewallbook
[7]:http://xmodulo.com/how-to-install-iftop-on-linux.html
[8]:https://excess.org/speedometer/
[9]:http://xmodulo.com/monitor-troubleshoot-linux-server-sysdig.html
[10]:http://xmodulo.com/check-dns-propagation-linux.html
[11]:https://iperf.fr/
[12]:http://www.netperf.org/netperf/
[13]:http://xmodulo.com/useful-netcat-examples-linux.html
[14]:https://zmap.io/
[15]:http://noping.cc/
[16]:http://www.hping.org/
[17]:http://fping.org/
[18]:http://sprobe.cs.washington.edu/
[19]:http://xmodulo.com/better-alternatives-basic-command-line-utilities.html#mtr_link
[20]:http://goaccess.io/
[21]:http://mtop.sourceforge.net/
[22]:http://xmodulo.com/monitor-nginx-web-server-command-line-real-time.html


@ -1,172 +0,0 @@
使用Observium来监控你的网络和服务器
================================================================================
### 简介###
在监控你的服务器、交换机或者物理机器时遇到过问题吗?**Observium**可以满足你的需求。作为一个免费的监控系统它可以帮助你远程监控你的服务器。它是一个用PHP编写的、基于自动发现SNMP的网络监控平台支持非常广泛的网络硬件和操作系统包括Cisco、Windows、Linux、HP、NetApp等。在此我会给出在Ubuntu 12.04上设置**Observium**服务器的相应步骤。
![](https://www.unixmen.com/wp-content/uploads/2015/03/Capture1.png)
目前存在两种不同的**Observium**版本。
- Observium社区版是一个基于QPL开源许可证的免费工具是较小规模部署的最佳解决方案。该版本每6个月得到一次安全性更新。
- 第二个版本是Observium Professional采用基于SVN的发布机制每日获得安全性更新。该版本适用于服务提供商和企业级部署。
更多信息可以通过[Observium官网][1]获得。
### 系统需求###
为了安装 **Observium**, 需要具有一个最新安装的服务器。**Observium**是在Ubuntu LTS和Debian系统上进行开发的所以推荐在Ubuntu或Debian上安装**Observium**,因为可能在别的平台上会有一些小问题。
该文章会指导你如何在Ubuntu 12.04上安装**Observium**。对于小型的**Observium**安装推荐的基础配置要有256MB内存和双核处理器。
### 安装需求 ###
在安装**Observuim**之前,你需要确认安装所有的依赖关系包。
首先,使用下面的命令更新的服务器:
sudo apt-get update
然后你需要安装运行Observuim 所需的全部包。
Observium需要使用下面所列出的软件才能正确的运行
- LAMP server
- fping
- Net-SNMP 5.4+
- RRDtool 1.3+
- Graphviz
对于可选特性的要求:
- Ipmitool - 只有当你想要探寻IPMIIntelligent Platform Management Interface智能平台管理接口基板控制器。
- Libvirt-bin - 只有当你想要使用libvirt进行远程VM主机监控时。
sudo apt-get install libapache2-mod-php5 php5-cli php5-mysql php5-gd php5-mcrypt php5-json php-pear snmp fping mysql-server mysql-client python-mysqldb rrdtool subversion whois mtr-tiny ipmitool graphviz imagemagick libvirt ipmitool
### 为Observium创建MySQL数据库和用户 ###
现在你需要登录到MySQL中并为**Observium**创建数据库:
mysql -u root -p
在用户验证成功之后,你需要按照下面的命令创建该数据库。
CREATE DATABASE observium;
数据库名为**Observium**,稍后你会需要这个信息。
现在你需要创建数据库管理员用户。
CREATE USER observiumadmin@localhost IDENTIFIED BY 'observiumpassword';
接下来,你需要给该管理员用户相应的权限来管理创建的数据库。
GRANT ALL PRIVILEGES ON observium.* TO observiumadmin@localhost;
你需要将权限信息写回到磁盘中来激活新的MySQL用户
FLUSH PRIVILEGES;
exit
### 下载并安装 Observium###
现在我们的系统已经准备好了, 可以开始Observium的安装了。
第一步创建Observium将要使用的文件目录
mkdir -p /opt/observium && cd /opt
为了达到本教程的目的我们将会使用Observium的社区/开源版本。使用下面的命令下载并解压:
wget http://www.observium.org/observium-community-latest.tar.gz
tar zxvf observium-community-latest.tar.gz
现在进入到Observium目录。
cd observium
将默认的配置文件'**config.php.default**'复制到'**config.php**',并将数据库配置选项填充到配置文件中:
cp config.php.default config.php
nano config.php
----------
/ Database config
$config['db_host'] = 'localhost';
$config['db_user'] = 'observiumadmin';
$config['db_pass'] = 'observiumpassword';
$config['db_name'] = 'observium';
现在为MySQL数据库设置默认的数据库模式
php includes/update/update.php
现在你需要创建一个文件目录来存储rrd文件并修改其权限以便让apache能写入文件。
mkdir rrd
chown apache:apache rrd
为了便于在出现问题时排查,你需要创建日志文件。
mkdir -p /var/log/observium
chown apache:apache /var/log/observium
现在你需要为Observium创建虚拟主机配置。
<VirtualHost *:80>
DocumentRoot /opt/observium/html/
ServerName observium.domain.com
CustomLog /var/log/observium/access_log combined
ErrorLog /var/log/observium/error_log
<Directory "/opt/observium/html/">
AllowOverride All
Options FollowSymLinks MultiViews
</Directory>
</VirtualHost>
下一步你需要让你的Apache服务器的rewrite(重写)功能生效。
为了让'mod_rewrite'生效,输入以下命令:
sudo a2enmod rewrite
该模块在下一次Apache服务重启之后就会生效。
sudo service apache2 restart
###配置Observium###
在登录Web界面之前你需要为Observium创建一个管理员账户级别10
# cd /opt/observium
# ./adduser.php admin adminpassword 10
User admin added successfully.
下一步为发现discovery和轮询poller工作设置cron任务创建一个新的文件**/etc/cron.d/observium**,并在其中添加以下内容。
33 */6 * * * root /opt/observium/discovery.php -h all >> /dev/null 2>&1
*/5 * * * * root /opt/observium/discovery.php -h new >> /dev/null 2>&1
*/5 * * * * root /opt/observium/poller-wrapper.py 1 >> /dev/null 2>&1
重载cron进程来使新的任务条目生效。
# /etc/init.d/cron reload
好啦你已经完成了Observium服务器的安装使用你的浏览器访问**http://<Server IP>**,开始使用吧。
![](https://www.unixmen.com/wp-content/uploads/2015/03/Capture.png)
尽情享受吧!
--------------------------------------------------------------------------------
via: https://www.unixmen.com/monitoring-network-servers-observium/
作者:[anismaj][a]
译者:[theo-l](https://github.com/theo-l)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://www.unixmen.com/author/anis/
[1]:http://www.observium.org/


@ -0,0 +1,160 @@
如何在 Docker 容器之间设置网络
================================================================================
你也许已经知道了Docker 容器技术是现有的成熟虚拟化技术的一个替代方案。它被企业应用在越来越多的领域中,比如快速部署环境、简化基础设施的配置流程、多客户环境间的互相隔离等等。当你开始在真实的生产环境使用 Docker 容器去部署应用沙箱时你可能需要用到多个容器部署一套复杂的多层应用系统其中每个容器负责一个特定的功能例如负载均衡、LAMP 栈、数据库、UI 等)。
那么问题来了:有多台宿主机,我们事先不知道会在哪台宿主机上创建容器,如何保证在这些宿主机上创建的容器们可以互相联网?
联网技术哪家强?开源方案找 [weave][1]。这个工具可以为你省下不少烦恼。听我的准没错,谁用谁知道。
于是本教程的主题就变成了“**如何使用 weave 在不同主机上的 Docker 容器之间设置网络**”。
### Weave 是如何工作的 ###
![](https://farm8.staticflickr.com/7288/16662287067_27888684a7_b.jpg)
让我们先来看看 weave 怎么工作:先创建一个由多个 peer 组成的对等网络,每个 peer 是一个虚拟路由器容器叫做“weave 路由器”,它们分布在不同的宿主机上。这个对等网络的每个 peer 之间会维持一个 TCP 链接,用于互相交换拓扑信息,它们也会建立 UDP 链接用于容器间通信。一个 weave 路由器通过桥接技术连接到其他本宿主机上的其他容器。当处于不同宿主机上的两个容器想要通信,一台宿主机上的 weave 路由器通过网桥截获数据包,使用 UDP 协议封装后发给另一台宿主机上的 weave 路由器。
每个 weave 路由器会刷新整个对等网络的拓扑信息,像容器的 MAC 地址(就像交换机的 MAC 地址学习一样获取其他容器的 MAC 地址因此它可以决定数据包的下一跳是往哪个容器的。weave 能让两个处于不同宿主机的容器进行通信,只要这两台宿主机在 weave 拓扑结构内连到同一个 weave 路由器。另外weave 路由器还能使用公钥加密技术将 TCP 和 UDP 数据包进行加密。
### 准备工作 ###
在使用 weave 之前,你需要在所有宿主机上安装 Docker[2] 环境,参考[这些][3][教程][4],在 Ubuntu 或 CentOS/Fedora 发行版中安装 Docker。
Docker 环境部署完成后,使用下面的命令安装 weave
$ wget https://github.com/zettio/weave/releases/download/latest_release/weave
$ chmod a+x weave
$ sudo cp weave /usr/local/bin
注意你的 PATH 环境变量要包含 /usr/local/bin 这个路径,请在 /etc/profile 文件中加入一行LCTT 注:要使环境变量生效,你需要执行这个命令: source /etc/profile
export PATH="$PATH:/usr/local/bin"
在每台宿主机上重复上面的操作。
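可以顺手验证一下 PATH 是否已经包含该目录,不包含则在当前会话中补上(下面只是示意;要持久生效仍需按上文写入 /etc/profile

```shell
# 确认 /usr/local/bin 在 PATH 中;若不在,为当前会话追加
case ":$PATH:" in
    *":/usr/local/bin:"*) echo "already in PATH" ;;
    *) PATH="$PATH:/usr/local/bin"; export PATH; echo "appended" ;;
esac
```

之后可以用 `command -v weave` 确认命令能被找到。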
Weave 在 TCP 和 UDP 上都使用 6783 端口,如果你的系统开启了防火墙,请确保这两个端口不会被防火墙挡住。
### 在每台宿主机上开启 Weave 路由器 ###
当你想要让处于在不同宿主机上的容器能够互相通信,第一步要做的就是在每台宿主机上开启 weave 路由器。
第一台宿主机,运行下面的命令,就会创建并开启一个 weave 路由器容器LCTT 注前面说过了weave 路由器也是一个容器):
$ sudo weave launch
第一次运行这个命令的时候,它会下载一个 weave 镜像,这会花一些时间。下载完成后就会自动运行这个镜像。成功启动后,终端会打印这个 weave 路由器的 ID 号。
下面的命令用于查看路由器状态:
$ sudo weave status
![](https://farm9.staticflickr.com/8632/16249607573_4514790cf5_c.jpg)
第一个 weave 路由器就绪了,目前为止整个 peer 对等网络中只有一个 peer 成员。
你也可以使用 docker 的命令来查看 weave 路由器的状态:
$ docker ps
![](https://farm8.staticflickr.com/7655/16681964438_51d8b18809_c.jpg)
第二台宿主机部署步骤稍微有点不同,我们需要为这台宿主机的 weave 路由器指定第一台宿主机的 IP 地址,命令如下:
$ sudo weave launch <first-host-IP-address>
当你查看路由器状态,你会看到两个 peer 成员:当前宿主机和第一个宿主机。
![](https://farm8.staticflickr.com/7608/16868571891_e66d4b8841_c.jpg)
当你开启更多路由器,这个 peer 成员列表会更长。当你新开一个路由器时,要指定前一个宿主机的 IP 地址,请注意不是第一个宿主机的 IP 地址。
现在你已经有了一个 weave 网络了,它由位于不同宿主机的 weave 路由器组成。
### 把不同宿主机上的容器互联起来 ###
接下来要做的就是在不同宿主机上开启 Docker 容器,并使用虚拟网络将它们互联起来。
假设我们创建一个私有网络 10.0.0.0/24 来互联 Docker 容器,并为这些容器随机分配 IP 地址。
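顺带一提,像 10.0.0.0/24 这样的 /24 网段,判断一个地址是否属于它,只需比较前三个八位组。下面是一个纯 shell 的示意(`in_subnet_24` 这个函数名为笔者虚构,仅用来帮助理解子网划分):

```shell
# 判断 IPv4 地址是否落在某个 /24 网段内(只比较前三个八位组)
in_subnet_24() {
    # $1 = IP 地址$2 = /24 网络前缀,如 10.0.0
    case "$1" in
        "$2".*) echo yes ;;
        *)      echo no ;;
    esac
}

in_subnet_24 10.0.0.7  10.0.0    # yes
in_subnet_24 10.10.0.2 10.0.0    # no
```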
如果你想新建一个能加入 weave 网络的容器,你就需要使用 weave 命令来创建,而不是 docker 命令。原因是 weave 命令内部会调用 docker 命令来新建容器然后为它设置网络。
下面的命令是在宿主机 hostA 上建立一个 Ubuntu 容器,然后将它放到 10.0.0.0/24 网络中,分配的 IP 地址为 10.0.0.1
hostA:~$ sudo weave run 10.0.0.1/24 -t -i ubuntu
成功运行后,终端会打印出容器的 ID 号。你可以使用这个 ID 来访问这个容器:
hostA:~$ docker attach <container-id>
在宿主机 hostB 上,也创建一个 Ubuntu 容器IP 地址为 10.0.0.2
hostB:~$ sudo weave run 10.0.0.2/24 -t -i ubuntu
访问下这个容器的控制台:
hostB:~$ docker attach <container-id>
这两个容器能够互相 ping 通,你可以通过容器的控制台检查一下。
![](https://farm9.staticflickr.com/8566/16868571981_d73c8e401b_c.jpg)
如果你检查一下每个容器的网络配置你会发现有一块名为“ethwe”的网卡你分配给容器的 IP 地址出现在它们那里(比如这里分别是 10.0.0.1 和 10.0.0.2)。
![](https://farm8.staticflickr.com/7286/16681964648_013f9594b1_b.jpg)
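如果想在脚本里取出 ethwe 网卡上的地址,可以解析 `ip addr show ethwe` 的输出。下面是一个解析思路的示意(`ethwe_ip` 这个函数名为笔者虚构,演示输入为手工构造的文本):

```shell
# 从 `ip addr show` 风格的文本中提取第一个 IPv4 地址
ethwe_ip() {
    awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'
}

# 演示:用一行手工构造的输出做输入
printf '    inet 10.0.0.1/24 scope global ethwe\n' | ethwe_ip    # 输出10.0.0.1
```

在容器内的实际用法是:`ip addr show ethwe | ethwe_ip`。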
### Weave 的其他高级用法 ###
weave 提供了一些非常巧妙的特性,我在这里作下简单的介绍。
#### 应用分离 ####
使用 weave你可以创建多个虚拟网络并为每个网络设置不同的应用。比如你可以为一群容器创建 10.0.0.0/24 网络,为另一群容器创建 10.10.0.0/24 网络weave 会自动帮你维护这些网络,并将这两个网络互相隔离。另外,你可以灵活地将一个容器从一个网络移到另一个网络而不需要重启容器。举个例子:
首先开启一个容器,运行在 10.0.0.0/24 网络上:
$ sudo weave run 10.0.0.2/24 -t -i ubuntu
然后让它脱离这个网络:
$ sudo weave detach 10.0.0.2/24 <container-id>
最后将它加入到 10.10.0.0/24 网络中:
$ sudo weave attach 10.10.0.2/24 <container-id>
![](https://farm8.staticflickr.com/7639/16247212144_c31a49714d_c.jpg)
现在这个容器可以与 10.10.0.0/24 网络上的其它容器进行通信了。当你要把容器加入一个网络,而这个网络暂时不可用时,上面的步骤就很有帮助了。
#### 将 weave 网络与宿主机网络整合起来 ####
有时候你想让虚拟网络中的容器能访问物理主机的网络。或者相反宿主机需要访问容器。为满足这个功能weave 允许虚拟网络与宿主机网络整合。
举个例子,在宿主机 hostA 上一个容器运行在 10.0.0.0/24 中,运行使用下面的命令:
hostA:~$ sudo weave expose 10.0.0.100/24
这个命令把 IP 地址 10.0.0.100 分配给宿主机 hostA这样一来 hostA 也连到了 10.0.0.0/24 网络上了。很明显,你在为宿主机选择 IP 地址的时候,需要选一个没有被其他容器使用的地址。
现在 hostA 就可以访问 10.0.0.0/24 上的所有容器了,不管这些容器是否位于 hostA 上。好巧妙的设定啊32 个赞!
### 总结 ###
如你所见weave 是一个很有用的 docker 网络配置工具。这个教程只是[它强悍功能][5]的冰山一角。如果你想进一步玩玩,你可以试试它的以下功能:多跳路由功能,这个在 multi-cloud 环境LCTT 注:多云,企业使用多个不同的云服务提供商的产品,比如 IaaS 和 SaaS来承载不同的业务下还是很有用的动态重路由功能是一个很巧妙的容错技术或者它的分布式 DNS 服务,它允许你为你的容器命名。如果你决定使用这个好东西,欢迎分享你的使用心得。
--------------------------------------------------------------------------------
via: http://xmodulo.com/networking-between-docker-containers.html
作者:[Dan Nanni][a]
译者:[bazz2](https://github.com/bazz2)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:https://github.com/zettio/weave
[2]:http://xmodulo.com/recommend/dockerbook
[3]:http://xmodulo.com/manage-linux-containers-docker-ubuntu.html
[4]:http://xmodulo.com/docker-containers-centos-fedora.html
[5]:http://zettio.github.io/weave/features.html
@ -1,57 +0,0 @@
创建你自己的Docker基本映像的2种方式
================================================================================
欢迎大家今天我们学习一下docker基本映像以及如何构建我们自己的docker基本映像。[Docker][1]是一个开源项目,为打包,装载和运行任何应用提供开发平台的轻量级容器。它没有语言支持,框架和打包系统的限制,从小型的家用电脑到高端的服务器,在何时何地都可以运行。这使它们成为不依赖于特定栈和供应商,很好的部署和扩展网络应用,数据库和后端服务的构建块。
Docker映像是不可更改的只读层。Docker使用**Union File System**在只读文件系统上增加读写文件系统。但所有更改都发生在最顶层的可写层最底部在只读映像上的原始文件仍然不会改变。由于映像不会改变也就没有状态。基本映像是没有父映像的那些映像。Docker基本映像主要的好处是它允许我们有一个独立运行的Linux操作系统。
下面是我们如何可以创建自定义基本映像的方式。
### 1. 使用Tar创建Docker基本映像 ###
我们可以使用tar构建我们自己的基本映像我们从将要打包为基本映像的运行中的Linux发行版开始构建。这个过程可能会有些不同它取决于我们打算构建的发行版。在Linux的发行版Debian中已经预装了debootstrap。在开始下面的步骤之前我们需要安装debootstrap。debootstrap用来获取构建基本系统需要的包。这里我们构建基于Ubuntu 14.04 "Trusty" 的映像。为此我们需要在终端或者shell中运行以下命令。
$ sudo debootstrap trusty trusty > /dev/null
$ sudo tar -C trusty -c . | sudo docker import - trusty
![使用debootstrap构建docker基本映像](http://blog.linoxide.com/wp-content/uploads/2015/03/creating-base-image-debootstrap.png)
上面的命令将 trusty 目录打包为一个tar文件并输出到STDOUT"docker import - trusty"则从STDIN读取这个tar文件并根据它创建一个名为trusty的基本映像。然后如下所示我们将在映像内部运行一条测试命令。
$ docker run trusty cat /etc/lsb-release
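如果想先不依赖 docker、单独理解“打包目录并通过管道传递 tar 流”这一步可以用下面的小实验代替docker import用 `tar -t` 代替,仅验证数据流本身;目录和文件均为临时构造的):

```shell
# 构造一个最小的根文件系统目录,打包成 tar 流,再列出归档内容验证
rootfs=$(mktemp -d)
mkdir -p "$rootfs/etc"
echo 'DISTRIB_ID=Ubuntu' > "$rootfs/etc/lsb-release"

# 与 `tar -C trusty -c . | docker import - trusty` 的数据流相同,
# 只是接收端换成了 tar -tf -(列出归档内容)
count=$(tar -C "$rootfs" -cf - . | tar -tf - | grep -c 'lsb-release')
echo "$count"    # 输出1说明文件确实进入了归档
rm -r "$rootfs"
```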
[Docker GitHub Repo][2] 中有一些能让我们快速构建基本映像的示例脚本。
### 2. 使用Scratch构建基本映像 ###
在Docker的注册表中有一个被称为Scratch的使用空tar文件构建的特殊库
$ tar cv --files-from /dev/null | docker import - scratch
![使用scratch构建docker基本映像](http://blog.linoxide.com/wp-content/uploads/2015/03/creating-base-image-using-scratch.png)
我们可以使用这个映像构建新的小容器:
FROM scratch
ADD script.sh /usr/local/bin/run.sh
CMD ["/usr/local/bin/run.sh"]
上面的Docker文件构建出一个很小的映像。它首先从一个完全空的文件系统开始然后把 script.sh 复制为 /usr/local/bin/run.sh最后运行脚本 /usr/local/bin/run.sh。
### 结尾 ###
在这个教程中我们学习了如何构建一个自定义的Docker基本映像。构建一个docker基本映像是一个很简单的任务因为这里有很多已经可用的包和脚本。如果我们只想在里面安装想要的东西构建docker基本映像非常有用。如果有任何疑问、建议或者反馈请在下面的评论框中写下来。非常感谢享受吧 :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/2-ways-create-docker-base-image/
作者:[Arun Pyasi][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://www.docker.com/
[2]:https://github.com/docker/docker/blob/master/contrib/mkimage-busybox.sh
@ -1,26 +1,26 @@
How to Install WordPress with Nginx in a Docker Container
如何在 Docker 容器里的 Nginx 中安装 WordPress
================================================================================
Hi all, today we'll learn how to install WordPress running Nginx Web Server in a Docker Container. WordPress is an awesome free and open source Content Management System running thousands of websites throughout the globe. [Docker][1] is an Open Source project that provides an open platform to pack, ship and run any application as a lightweight container. It has no boundaries of Language support, Frameworks or packaging system and can be run anywhere, anytime from a small home computers to high-end servers. It makes them great building blocks for deploying and scaling web apps, databases, and back-end services without depending on a particular stack or provider.
大家好,今天我们来学习一下如何在 Docker 容器上运行的 Nginx Web 服务器中安装 WordPress。WordPress 是一个很好的免费开源的内容管理系统,全球成千上万的网站都在使用它。[Docker][1] 是一个提供开放平台来打包、分发和运行任何应用的开源轻量级容器项目。它不受语言、框架或打包系统的限制,可以在从小型家用电脑到高端服务器的任何地方、任何时间运行。这让它们成为可以用于部署和扩展网络应用、数据库和后端服务,而不必依赖于特定的栈或者提供商的很好的构建块。
Today, we'll deploy a docker container with the latest WordPress package with necessary prerequisites ie Nginx Web Server, PHP5, MariaDB Server, etc. Here are some short and sweet steps to successfully install a WordPress running Nginx in a Docker Container.
今天,我们会在 docker 容器上部署最新的 WordPress 软件包,包括需要的前提条件,例如 Nginx Web 服务器、PHP5、MariaDB 服务器等。下面是在运行在 Docker 容器上成功安装 WordPress 的简单步骤。
### 1. Installing Docker ###
### 1. 安装 Docker ###
Before we really start, we'll need to make sure that we have Docker installed in our Linux machine. Here, we are running CentOS 7 as host so, we'll be running yum manager to install docker using the below command.
在我们真正开始之前,我们需要确保在我们的 Linux 机器上已经安装了 Docker。我们使用的主机是 CentOS 7因此我们用下面的命令使用 yum 管理器安装 docker。
# yum install docker
![Installing Docker](http://blog.linoxide.com/wp-content/uploads/2015/03/installing-docker.png)
![安装 Docker](http://blog.linoxide.com/wp-content/uploads/2015/03/installing-docker.png)
# systemctl restart docker.service
### 2. Creating WordPress Dockerfile ###
### 2. 创建 WordPress Docker 文件 ###
We'll need to create a Dockerfile which will automate the installation of the wordpress and its necessary pre-requisites. This Dockerfile will be used to build the image of WordPress installation we created. This WordPress Dockerfile fetches a CentOS 7 image from the Docker Registry Hub and updates the system with the latest available packages. It then installs the necessary softwares like Nginx Web Server, PHP, MariaDB, Open SSH Server and more which are essential for the Docker Container to work. It then executes a script which will initialize the installation of WordPress out of the box.
我们需要创建用于自动安装 wordpress 以及前提条件的 docker 文件。这个 docker 文件将用于构建 WordPress 的安装镜像。这个 WordPress docker 文件会从 Docker 库中心获取 CentOS 7 镜像并用最新的可用更新升级系统。然后它会安装必要的软件,例如 Nginx Web 服务器、PHP、MariaDB、Open SSH 服务器以及其它保证 Docker 容器正常运行不可缺少的组件。最后它会执行一个初始化 WordPress 安装的脚本。
# nano Dockerfile
Then, we'll need to add the following lines of configuration inside that Dockerfile.
然后,我们需要将下面的配置行添加到 Docker 文件中。
FROM centos:centos7
MAINTAINER The CentOS Project <cloud-ops@centos.org>
@ -48,15 +48,15 @@ Then, we'll need to add the following lines of configuration inside that Dockerf
CMD ["/bin/bash", "/start.sh"]
![Wordpress Dockerfile](http://blog.linoxide.com/wp-content/uploads/2015/03/Dockerfile-wordpress.png)
![Wordpress Docker 文件](http://blog.linoxide.com/wp-content/uploads/2015/03/Dockerfile-wordpress.png)
### 3. Creating Start script ###
### 3. 创建启动 script ###
After we create our Dockerfile, we'll need to create a script named start.sh which will run and configure our WordPress installation. It will create and configure database, passwords for wordpress. To create it, we'll need to open start.sh with our favorite text editor.
我们创建了 docker 文件之后,我们需要创建用于运行和配置 WordPress 安装的脚本,名称为 start.sh。它会为 WordPress 创建并配置数据库和密码。用我们喜欢的文本编辑器打开 start.sh。
# nano start.sh
After opening start.sh, we'll need to add the following lines of configuration into it.
打开 start.sh 之后,我们要添加下面的配置行到文件中。
#!/bin/bash
@ -67,7 +67,7 @@ After opening start.sh, we'll need to add the following lines of configuration i
}
__create_user() {
# Create a user to SSH into as.
# 创建用于 SSH 登录的用户
SSH_USERPASS=`pwgen -c -n -1 8`
useradd -G wheel user
echo user:$SSH_USERPASS | chpasswd
@ -75,7 +75,7 @@ After opening start.sh, we'll need to add the following lines of configuration i
}
__mysql_config() {
# Hack to get MySQL up and running... I need to look into it more.
# 启用并运行 MySQL
yum -y erase mariadb mariadb-server
rm -rf /var/lib/mysql/ /etc/my.cnf
yum -y install mariadb mariadb-server
@ -86,18 +86,18 @@ After opening start.sh, we'll need to add the following lines of configuration i
}
__handle_passwords() {
# Here we generate random passwords (thank you pwgen!). The first two are for mysql users, the last batch for random keys in wp-config.php
# 在这里我们生成随机密码(感谢 pwgen)。前面两个用于 mysql 用户,最后一个用于 wp-config.php 的随机密钥。
WORDPRESS_DB="wordpress"
MYSQL_PASSWORD=`pwgen -c -n -1 12`
WORDPRESS_PASSWORD=`pwgen -c -n -1 12`
# This is so the passwords show up in logs.
# 这是在日志中显示的密码。
echo mysql root password: $MYSQL_PASSWORD
echo wordpress password: $WORDPRESS_PASSWORD
echo $MYSQL_PASSWORD > /mysql-root-pw.txt
echo $WORDPRESS_PASSWORD > /wordpress-db-pw.txt
# There used to be a huge ugly line of sed and cat and pipe and stuff below,
# but thanks to @djfiander's thing at https://gist.github.com/djfiander/6141138
# there isn't now.
# 这里原来是一个包括 sed、cat、pipe 和 stuff 的很长的行,但多亏了
# @djfiander https://gist.github.com/djfiander/6141138
# 现在没有了
sed -e "s/database_name_here/$WORDPRESS_DB/
s/username_here/$WORDPRESS_DB/
s/password_here/$WORDPRESS_PASSWORD/
@ -116,7 +116,7 @@ After opening start.sh, we'll need to add the following lines of configuration i
}
__start_mysql() {
# systemctl start mysqld.service
# systemctl 启动 mysqld 服务
mysqladmin -u root password $MYSQL_PASSWORD
mysql -uroot -p$MYSQL_PASSWORD -e "CREATE DATABASE wordpress; GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'localhost' IDENTIFIED BY '$WORDPRESS_PASSWORD'; FLUSH PRIVILEGES;"
killall mysqld
@ -127,7 +127,7 @@ After opening start.sh, we'll need to add the following lines of configuration i
supervisord -n
}
# Call all functions
# 调用所有函数
__check
__create_user
__mysql_config
@ -136,17 +136,17 @@ After opening start.sh, we'll need to add the following lines of configuration i
__start_mysql
__run_supervisor
![Start Script](http://blog.linoxide.com/wp-content/uploads/2015/03/start-script.png)
![启动脚本](http://blog.linoxide.com/wp-content/uploads/2015/03/start-script.png)
After adding the above configuration, we'll need to save it and then exit.
增加完上面的配置之后,保存并关闭文件。
### 4. Creating Configuration files ###
### 4. 创建配置文件 ###
Now, we'll need to create configuration file for Nginx Web Server named nginx-site.conf .
现在,我们需要创建 Nginx Web 服务器的配置文件,命名为 nginx-site.conf。
# nano nginx-site.conf
Then, we'll add the following configuration to the config file.
然后,增加下面的配置信息到配置文件。
user nginx;
worker_processes 1;
@ -230,13 +230,13 @@ Then, we'll add the following configuration to the config file.
}
}
![Nginx configuration](http://blog.linoxide.com/wp-content/uploads/2015/03/nginx-conf.png)
![Nginx 配置](http://blog.linoxide.com/wp-content/uploads/2015/03/nginx-conf.png)
Now, we'll create supervisord.conf file and add the following lines as shown below.
现在,创建 supervisord.conf 文件并添加下面的行。
# nano supervisord.conf
Then, add the following lines.
然后,添加以下行。
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
@ -286,60 +286,60 @@ Then, add the following lines.
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
![Supervisord Configuration](http://blog.linoxide.com/wp-content/uploads/2015/03/supervisord.png)
![Supervisord 配置](http://blog.linoxide.com/wp-content/uploads/2015/03/supervisord.png)
After adding, we'll save and exit the file.
添加完后,保存并关闭文件。
### 5. Building WordPress Container ###
### 5. 构建 WordPress 容器 ###
Now, after done with creating configurations and scripts, we'll now finally use the Dockerfile to build our desired container with the latest WordPress CMS installed and configured according to the configuration. To do so, we'll run the following command in that directory.
现在,完成了创建配置文件和脚本之后,我们终于要使用 docker 文件来创建安装最新的 WordPress CMS(译者注Content Management System,内容管理系统)所需要的容器,并根据配置文件进行配置。做到这点,我们需要在对应的目录中运行以下命令。
# docker build --rm -t wordpress:centos7 .
![Building WordPress Container](http://blog.linoxide.com/wp-content/uploads/2015/03/building-wordpress-container.png)
![构建 WordPress 容器](http://blog.linoxide.com/wp-content/uploads/2015/03/building-wordpress-container.png)
### 6. Running WordPress Container ###
### 6. 运行 WordPress 容器 ###
Now, to run our newly built container and open port 80 and 22 for Nginx Web Server and SSH access respectively, we'll run the following command.
现在,执行以下命令运行新构建的容器,并为 Nginx Web 服务器和 SSH 访问分别打开 80 和 22 号端口。
# CID=$(docker run -d -p 80:80 wordpress:centos7)
![Run WordPress Docker](http://blog.linoxide.com/wp-content/uploads/2015/03/run-wordpress-docker.png)
![运行 WordPress Docker](http://blog.linoxide.com/wp-content/uploads/2015/03/run-wordpress-docker.png)
To check the process and commands executed inside the container, we'll run the following command.
运行以下命令检查进程以及容器内部执行的命令。
# echo "$(docker logs $CID )"
TO check if the port mapping is correct or not, run the following command.
运行以下命令检查端口映射是否正确。
# docker ps
![docker state](http://blog.linoxide.com/wp-content/uploads/2015/03/docker-state.png)
![docker 状态](http://blog.linoxide.com/wp-content/uploads/2015/03/docker-state.png)
### 7. Web Interface ###
### 7. Web 界面 ###
Finally if everything went accordingly, we'll be welcomed with WordPress when pointing the browser to http://ip-address/ or http://mywebsite.com/ .
最后如果一切正常的话,当我们用浏览器打开 http://ip-address/ 或者 http://mywebsite.com/ 的时候会看到 WordPress 的欢迎界面。
![Wordpress Start](http://blog.linoxide.com/wp-content/uploads/2015/03/wordpress-start.png)
![启动Wordpress](http://blog.linoxide.com/wp-content/uploads/2015/03/wordpress-start.png)
Now, we'll go step wise through the web interface and setup wordpress configuration, username and password for the WordPress Panel.
现在,我们将通过 Web 界面为 WordPress 面板设置 WordPress 的配置、用户名和密码。
![Wordpress Welcome](http://blog.linoxide.com/wp-content/uploads/2015/03/wordpress-welcome.png)
![Wordpress 欢迎界面](http://blog.linoxide.com/wp-content/uploads/2015/03/wordpress-welcome.png)
Then, use the username and password entered above into the WordPress Login page.
然后,用上面用户名和密码输入到 WordPress 登录界面。
![wordpress login](http://blog.linoxide.com/wp-content/uploads/2015/03/wordpress-login.png)
![wordpress 登录](http://blog.linoxide.com/wp-content/uploads/2015/03/wordpress-login.png)
### Conclusion ###
### 总结 ###
We successfully built and run WordPress CMS under LEMP Stack running in CentOS 7 Operating System as the docker OS. Running WordPress inside a container makes a lot safe and secure to the host system from the security perspective. This article enables one to completely configure WordPress to run under Docker Container with Nginx Web Server. If you have any questions, suggestions, feedback please write them in the comment box below so that we can improve or update our contents. Thank you ! Enjoy :-)
我们已经成功地在以 CentOS 7 作为 docker OS 的 LEMP 栈上构建并运行了 WordPress CMS。从安全层面来说在容器中运行 WordPress 对于宿主系统更加安全可靠。这篇文章介绍了在 Docker 容器中运行的 Nginx Web 服务器上使用 WordPress 的完整配置。如果你有任何问题、建议、反馈,请在下面的评论框中写下来,让我们可以改进和更新我们的内容。非常感谢!Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/install-wordpress-nginx-docker-container/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -0,0 +1,95 @@
安装Inkscape - 开源矢量图形编辑器
================================================================================
Inkscape是一款开源矢量图形编辑工具它使用可缩放矢量图形(SVG)图形格式并不同于它的竞争对手如Xara X、Corel Draw和Adobe Illustrator等等。SVG是一个广泛部署、免版税使用的图形格式由W3C SVG工作组开发和维护。这是一个跨平台工具完美运行于Linux、Windows和Mac OS上。
Inkscape始于2003年起初它的bug跟踪系统托管于Sourceforge上但是后来迁移到了Launchpad上。当前它最新的一个稳定版本是0.91,它不断地在发展和修改中。我们将在本文里了解一下它的突出特点和安装过程。
### 显著特性 ###
让我们直接来了解这款应用程序的显著特性。
#### 创建对象 ####
- 用铅笔工具来画出不同颜色、大小和形状的手绘线,用贝塞尔曲线(笔式)工具来画出直线和曲线,通过书法工具来应用到手写的书法笔画上等等
- 用文本工具来创建、选择、编辑和格式化文本。在纯文本框、在路径上或在形状里操作文本
- 有效绘制各种形状,像矩形、椭圆形、圆形、弧线、多边形、星形和螺旋形等等并调整其大小、旋转并修改(圆角化)它们
- 用简单地命令创建并嵌入位图
#### 对象处理 ####
- 通过交互式操作和调整数值来扭曲、移动、测量、旋转目标
- 可对对象执行提升或降低 Z 轴次序Z-order的操作
- 对象群组化或取消群组化可以去创建一个虚拟层阶用来编辑或处理
- 图层采用层次结构树的结构并且能锁定或以各式各样的处理方式来重新布置
- 分布与对齐指令
#### 填充与边框 ####
- 复制/粘贴风格
- 取色工具
- 用RGB, HSL, CMS, CMYK和色盘这四种不同的方式选色
- 渐层编辑器能创建和管理多停点渐层
- 可将一幅图像或其它选中对象定义为花纹,用于花纹填充
- 用一些预定义泼洒花纹可对边框进行花纹泼洒
- 通过路径标示器来开始、对折和结束标示
#### 路径上的操作 ####
- 节点编辑:移动节点和贝塞尔曲线掌控,节点的对齐和分布等等
- 布尔运算(是或否)
- 运用可变的路径起迄点可简化路径
- 路径插入和增设连同动态和链接偏移对象
- 通过路径追踪把位图图像转换成路径(彩色或单色路径)
#### 文本处理 ####
- 所有安装好的外框字体都能用甚至可以从右至左对齐对象
- 格式化文本、调整字母间距、行间距或列间距
- 路径上的文本和形状上的文本和路径或形状都可以被编辑和修改
#### 渲染 ####
- Inkscape完全支持抗锯齿显示这是一种通过柔化边界上的像素从而减少或消除凹凸锯齿的技术。
- 支持alpha透明显示和PNG格式图片的导出
### 在Ubuntu 14.04和14.10上安装Inkscape ###
为了在Ubuntu上安装Inkscape我们首先需要 [添加它的稳定版Personal Package Archive][1] (PPA) 至Advanced Package Tool (APT) 库中。打开终端并运行以下命令来添加它的PPA
sudo add-apt-repository ppa:inkscape.dev/stable
![PPA Inkscape](http://blog.linoxide.com/wp-content/uploads/2015/03/PPA-Inkscape.png)
PPA添加到APT库中后我们要用以下命令进行更新
sudo apt-get update
![Update APT](http://blog.linoxide.com/wp-content/uploads/2015/03/Update-APT2.png)
更新好库之后,我们准备用以下命令来完成安装:
sudo apt-get install inkscape
![Install Inkscape](http://blog.linoxide.com/wp-content/uploads/2015/03/Install-Inkscape.png)
恭喜现在Inkscape已经被安装好了我们可以充分利用它的丰富功能特点来编辑制作图像了。
![Inkscape Main](http://blog.linoxide.com/wp-content/uploads/2015/03/Inkscape-Main1.png)
### 结论 ###
Inkscape是一款特性丰富的图形编辑工具让用户可以充分发挥自己的艺术才能。它还是一款可以自由安装和定制的开源应用支持包括JPEG、PNG、GIF和PDF在内的多种文件类型。访问它的 [官方网站][2] 来获取更多新闻和应用更新。
--------------------------------------------------------------------------------
via: http://linoxide.com/tools/install-inkscape-open-source-vector-graphic-editor/
作者:[Aun Raza][a]
译者:[ZTinoZ](https://github.com/ZTinoZ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunrz/
[1]:https://launchpad.net/~inkscape.dev/+archive/ubuntu/stable
[2]:https://inkscape.org/en/
@ -0,0 +1,56 @@
Linux有问必答如何在虚拟机上配置PCI-passthrough
================================================================================
> **提问**我想要把一块物理网卡分配给一台用KVM创建的虚拟机也就是开启这块网卡到该虚拟机的PCI直通。请问我如何才能将一个PCI设备以PCI直通的方式加入到虚拟机上
如今的hypervisor能够高效地在多个虚拟操作系统之间共享和模拟硬件资源。然而当虚拟机对性能要求很高或者需要对硬件DMA的完全控制时这种虚拟化的资源共享并不总能使人满意。这时可以使用一项名叫“PCI passthroughPCI直通”的技术让虚拟机独享某个PCI设备例如网卡、声卡、显卡。本质上PCI passthrough绕过了虚拟层把PCI设备直接暴露给虚拟机但该设备也就不能再被其他虚拟机共享了。
### 开启“PCI Passthrough”的准备 ###
如果你想要为一台HVM实例例如KVM创建的完全虚拟化虚拟机开启PCI passthrough你的宿主系统包括CPU和主板必须满足以下条件。如果你的虚拟机是由Xen创建的半虚拟化虚拟机则可以跳过这一步。
为了开启PCI passthrough系统需要支持**VT-d**Intel处理器或者**AMD-Vi**AMD处理器。Intel的VT-d即“支持直接I/O的英特尔虚拟化技术”适用于Nehalem及其后继的高端处理器例如Westmere、Sandy Bridge、Ivy Bridge。注意VT-d和VT-x是两个独立的功能。支持VT-d/AMD-Vi功能的Intel/AMD处理器列表可以[点击这里][1]查看。
确认你的设备支持VT-d/AMD-Vi后还有两件事情需要做。首先确保VT-d/AMD-Vi已经在BIOS中开启然后在内核启动过程中开启IOMMU。IOMMU服务由VT-d/AMD-Vi提供可以保护虚拟机所访问的主机内存同时它也是完全虚拟化虚拟机支持PCI passthrough的前提。
Intel处理器中内核开启IOMMU通过在启动参数中修改“**intel_iommu=on**”。参看[这篇教程][2]获得如何通过GRUB修改内核启动参数。
配置完成启动参数后,重启电脑。
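重启之后,可以检查 CPU 的虚拟化标志和内核命令行,确认准备工作是否到位(下面的 `cmdline_has_iommu` 为笔者虚构的示意函数,核心只是对 /proc/cmdline 的内容做字符串匹配):

```shell
# 检查内核命令行中是否带有 intel_iommu=on
cmdline_has_iommu() {
    # $1 = 内核命令行内容
    case " $1 " in
        *" intel_iommu=on "*) echo on ;;
        *)                    echo off ;;
    esac
}

# CPU 虚拟化标志vmx 为 Intelsvm 为 AMD计数大于 0 表示支持
grep -E -c 'vmx|svm' /proc/cpuinfo 2>/dev/null || true

# 实际检查当前系统的内核命令行
cmdline_has_iommu "$(cat /proc/cmdline 2>/dev/null)"
```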
### 添加PCI设备到虚拟机 ###
我们已经完成了开启PCI Passthrough的准备。事实上只需通过虚拟机管理器virt-manager就可以给虚拟机分配一个PCI设备。
打开虚拟机设置,在左边工具栏点击‘增加硬件’按钮。
从PCI设备列表中选择一个要分配的PCI设备点击“完成”按钮。
![](https://farm8.staticflickr.com/7587/17015584385_db49e96372_c.jpg)
最后开启实例。目前为止主机的PCI设备已经可以由虚拟机直接访问了。
### 常见问题 ###
在虚拟机启动时如果你看见下列任何一个错误这个错误有可能由于母机VT-d (或 IOMMU)未开启导致。
Error starting domain: unsupported configuration: host doesn't support passthrough of host PCI devices
----------
Error starting domain: Unable to read from monitor: Connection reset by peer
请确保"**intel_iommu=on**"启动参数已经按上文叙述开启。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/pci-passthrough-virt-manager.html
作者:[Dan Nanni][a]
译者:[Vic020/VicYu](http://vicyu.net)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://wiki.xenproject.org/wiki/VTdHowTo
[2]:http://xmodulo.com/add-kernel-boot-parameters-via-grub-linux.html
@ -0,0 +1,39 @@
Bodhi Linux引入Moksha桌面
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/05/Bodhi_Linux.jpg)
基于Ubuntu的轻量级Linux发行版[Bodhi Linux][1]致力于构建其自家的桌面环境这个全新桌面环境被称之为Moksha梵文意为完全自由。Moksha将替换常用的[Enlightenment桌面环境][2]。
### 为何用Moksha替换Englightenment ###
Bodhi Linux的Jeff Hoogland最近[表示][3]了他对新版Enlightenment的不满。直到E17,Enlightenment都十分稳定并且能满足轻量级Linux的部署需求。而E18则到处都充满了问题Bodhi Linux只好弃之不用了。
虽然最新的[Bodhi Linux 3.0发行版][4]仍然使用了E19作为其桌面除传统模式外这意味着对于旧的硬件仍然会使用E17Jeff对E19也十分不满。他说道
>除了性能问题外对于我个人而言E19并没有给我带来与E17下相同的工作流程因为它移除了很多E17的特性。鉴于此我不得不将我所有的3台Bodhi计算机桌面改成E17——这3台机器都是我高端的了。这不由得让我想到我们还有多少现存的Bodhi用户也怀着和我同样的感受所以我[在我们的用户论坛上开启一个与此相关的讨论][5]。
### Moksha是E17桌面的延续 ###
Moksha将会是Bodhi所热衷的E17桌面的延续。Jeff进一步提到
>我们将从整合所有Bodhi修改开始。多年来我们一直都只是给源代码打补丁并修复桌面所具有的问题。如果该工作完成我们将开始移植一些E18和E19引入的更为有用的特性最后我们将引入一些我们认为会改善最终用户体验的东西。
### Moksha何时发布 ###
下一个Bodhi更新将会是Bodhi 3.1.0就在今年八月。这个新版本将为所有其缺省ISO带来Moksha。让我们拭目以待看看Moksha是否是一个好的决定。
--------------------------------------------------------------------------------
via: http://itsfoss.com/bodhi-linux-introduces-moksha-desktop/
作者:[Abhishek][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://www.bodhilinux.com/
[2]:https://www.enlightenment.org/
[3]:http://www.bodhilinux.com/2015/04/28/introducing-the-moksha-desktop/
[4]:http://itsfoss.com/bodhi-linux-3/
[5]:http://forums.bodhilinux.com/index.php?/topic/12322-e17-vs-e19-which-are-you-using-and-why/
@ -1,28 +1,28 @@
How to Manage Systemd Services and Units Using Systemctl in Linux
在Linux中使用Systemctl管理Systemd服务和单元
================================================================================
Systemctl is a systemd utility which is responsible for Controlling the systemd system and service manager.
Systemctl是一个systemd工具主要负责控制systemd系统和服务管理器。
Systemd is a collection of system management daemons, utilities and libraries which serves as a replacement of System V init daemon. Systemd functions as central management and configuration platform for UNIX like system.
Systemd是一个系统管理守护进程、工具和库的集合用于取代System V初始进程。Systemd的功能是用于集中管理和配置类UNIX系统。
In the Linux Ecosystem Systemd has been implemented on most of the standard Linux Distribution with a few exception. Systemd is the parent Process of all other daemons oftenly but not always.
在Linux生态系统中Systemd被部署到了大多数的标准Linux发行版中只有位数不多的几个尚未部署。Systemd通常是所有其它守护进程的父进程但并非总是如此。
![Manage Linux Services Using Systemctl](http://www.tecmint.com/wp-content/uploads/2015/04/Manage-Linux-Services-Using-Systemctl.jpg)
Manage Linux Services Using Systemctl
使用Systemctl管理Linux服务
This article aims at throwing light on “How to control System and Services” on a system running systemd.
本文旨在阐明在运行systemd的系统上“如何控制系统和服务”。
### Starting with Systemtd and Systemctl Basics ###
### Systemd初体验和Systemctl基础 ###
#### 1. First check if systemd is installed on your system or not and what is the version of currently installed Systemd? ####
#### 1. 首先检查你的系统中是否安装有systemd并确定当前安装的版本 ####
# systemd --version
systemd 215
+PAM +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ -SECCOMP -APPARMOR
Its clear from the above example, that we have systemd 215 version Installed.
上例中很清楚地表明我们安装了215版本的systemd。
#### 2. Check where the binaries and libraries of systemd and systemctl are installed. ####
#### 2. 检查systemd和systemctl的二进制文件和库文件的安装位置 ####
# whereis systemd
systemd: /usr/lib/systemd /etc/systemd /usr/share/systemd /usr/share/man/man1/systemd.1.gz
@ -31,7 +31,7 @@ Its clear from the above example, that we have systemd 215 version Installed.
# whereis systemctl
systemctl: /usr/bin/systemctl /usr/share/man/man1/systemctl.1.gz
#### 3. Check whether systemd is running or not. ####
#### 3. 检查systemd是否运行 ####
# ps -eaf | grep [s]ystemd
@ -41,18 +41,18 @@ Its clear from the above example, that we have systemd 215 version Installed.
root 555 1 0 16:27 ? 00:00:00 /usr/lib/systemd/systemd-logind
dbus 556 1 0 16:27 ? 00:00:00 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
**Notice**: systemd is running as parent daemon (PID=1). In the above command ps with (-e) select all Processes, (-
**注意**systemd是作为父进程PID=1运行的。在上面带-e参数的ps命令输出中选择所有进程-
a) select all processes except session leaders and (-f) for full format listing (i.e. -eaf).
a)选择除会话前导外的所有进程,并使用(-f参数输出完整格式列表如 -eaf
Also note the square brackets in the above example and rest of the examples to follow. Square Bracket expression is part of greps character class pattern matching.
也请注意上例中后随的方括号和样例剩余部分。方括号表达式是grep的字符类表达式的一部分。
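除了用 ps 过滤,也可以直接确认 1 号进程是不是 systemd。下面的小函数把读取逻辑抽了出来`init_name` 为笔者虚构的名字,便于演示时用临时文件替代 /proc/1/comm

```shell
# 读取进程的 comm 名称并去掉换行
init_name() {
    # $1 = comm 文件路径,真实路径为 /proc/1/comm
    head -n 1 "$1" | tr -d '\n'
}

if [ "$(init_name /proc/1/comm 2>/dev/null)" = "systemd" ]; then
    echo "systemd is PID 1"
else
    echo "PID 1 is not systemd"
fi
```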
#### 4. Analyze systemd boot process. ####
#### 4. 分析systemd启动进程 ####
# systemd-analyze
Startup finished in 487ms (kernel) + 2.776s (initrd) + 20.229s (userspace) = 23.493s
#### 5. Analyze time taken by each process at boot. ####
#### 5. 分析启动时各个进程花费的时间 ####
# systemd-analyze blame
@ -68,7 +68,7 @@ Also note the square brackets in the above example and rest of the examples to f
1.126s systemd-logind.service
....
#### 6. Analyze critical chain at boot. ####
#### 6. 分析启动时的关键链 ####
# systemd-analyze critical-chain
@ -94,9 +94,9 @@ Also note the square brackets in the above example and rest of the examples to f
└─systemd-fsck@dev-disk-by\x2duuid-79f594ad\x2da332\x2d4730\x2dbb5f\x2d85d19608096
└─dev-disk-by\x2duuid-79f594ad\x2da332\x2d4730\x2dbb5f\x2d85d196080964.device @4
**Important**: Systemctl accepts services (.service), mount point (.mount), sockets (.socket) and devices (.device) as units.
**重要**Systemctl接受服务.service挂载点.mount套接口.socket和设备.device作为单元。
#### 7. List all the available units. ####
#### 7. 列出所有可用单元 ####
# systemctl list-unit-files
@ -112,7 +112,7 @@ Also note the square brackets in the above example and rest of the examples to f
brandbot.path disabled
.....
#### 8. List all running units. ####
#### 8. 列出所有运行中单元 ####
# systemctl list-units
@ -133,7 +133,7 @@ Also note the square brackets in the above example and rest of the examples to f
sys-module-configfs.device loaded active plugged /sys/module/configfs
...
#### 9. List all failed units. ####
#### 9. 列出所有失败单元 ####
# systemctl --failed
@ -147,13 +147,13 @@ Also note the square brackets in the above example and rest of the examples to f
1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
#### 10. Check if a Unit (cron.service) is enabled or not?. ####
#### 10. 检查某个单元cron.service是否启用 ####
# systemctl is-enabled crond.service
enabled
#### 11. Check whether a Unit or Service is running or not?. ####
#### 11. 检查某个单元或服务是否运行 ####
# systemctl status firewalld.service
@ -167,9 +167,9 @@ Also note the square brackets in the above example and rest of the examples to f
Apr 28 16:27:51 tecmint systemd[1]: Starting firewalld - dynamic firewall daemon...
Apr 28 16:27:55 tecmint systemd[1]: Started firewalld - dynamic firewall daemon.
### Control and Manage Services Using Systemctl ###
### 使用Systemctl控制并管理服务 ###
#### 12. List all services (including enabled and disabled). ####
#### 12. 列出所有服务(包括启用的和禁用的) ####
# systemctl list-unit-files --type=service
@ -187,7 +187,7 @@ Also note the square brackets in the above example and rest of the examples to f
dbus-org.fedoraproject.FirewallD1.service enabled
....
#### 13. How do I start, restart, stop, reload and check the status of a service (httpd.service) in Linux. ####
#### 13. Linux中如何启动、重启、停止、重载服务以及检查服务httpd.service状态 ####
# systemctl start httpd.service
# systemctl restart httpd.service
@ -214,15 +214,15 @@ Also note the square brackets in the above example and rest of the examples to f
Apr 28 17:21:30 tecmint systemd[1]: Started The Apache HTTP Server.
Hint: Some lines were ellipsized, use -l to show in full.
**Note**: When we use commands like start, restart, stop and reload with systemctl, we will not get any output on the terminal, only status command will print the output.
**注意**当我们使用systemctl的startrestartstop和reload命令时我们不会不会从终端获取到任何输出内容只有status命令可以打印输出。
#### 14. How to active a service and enable or disable a service at boot time (auto start service at system boot). ####
#### 14. 如何激活服务并在启动时启用或禁用服务(系统启动时自动启动服务) ####
# systemctl is-active httpd.service
# systemctl enable httpd.service
# systemctl disable httpd.service
#### 15. How to mask (making it impossible to start) or unmask a service (httpd.service). ####
#### 15. 如何屏蔽使它不能启动或取消屏蔽服务httpd.service ####
# systemctl mask httpd.service
ln -s '/dev/null' '/etc/systemd/system/httpd.service'
@ -230,7 +230,7 @@ Also note the square brackets in the above example and rest of the examples to f
# systemctl unmask httpd.service
rm '/etc/systemd/system/httpd.service'
#### 16. How to a Kill a service using systemctl command. ####
#### 16. 使用systemctl命令杀死服务 ####
# systemctl kill httpd
# systemctl status httpd
@ -253,9 +253,9 @@ Also note the square brackets in the above example and rest of the examples to f
Apr 28 18:01:42 tecmint systemd[1]: Unit httpd.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.
### Control and Manage Mount Points using Systemctl ###
### 使用Systemctl控制并管理挂载点 ###
#### 17. List all system mount points. ####
#### 17. 列出所有系统挂载点 ####
# systemctl list-unit-files --type=mount
@ -268,7 +268,7 @@ Also note the square brackets in the above example and rest of the examples to f
sys-kernel-debug.mount static
tmp.mount disabled
#### 18. How do I mount, unmount, remount, reload system mount points and also check the status of mount points on the system. ####
#### 18. 挂载、卸载、重新挂载、重载系统挂载点并检查系统中挂载点状态 ####
# systemctl start tmp.mount
# systemctl stop tmp.mount
@ -291,13 +291,13 @@ Also note the square brackets in the above example and rest of the examples to f
Apr 28 17:46:06 tecmint systemd[1]: tmp.mount: Directory /tmp to mount over is not empty, mounting anyway.
Apr 28 17:46:06 tecmint systemd[1]: Mounted Temporary Directory.
#### 19. How to active, enable or disable a mount point at boot time (auto mount at system boot). ####
#### 19. 在启动时激活、启用或禁用挂载点(系统启动时自动挂载) ####
# systemctl is-active tmp.mount
# systemctl enable tmp.mount
# systemctl disable tmp.mount
#### 20. How to mask (making it impossible to start) or unmask a mount points in Linux. ####
#### 20. 在Linux中屏蔽让它不能启动或显示挂载点 ####
# systemctl mask tmp.mount
@ -307,9 +307,9 @@ Also note the square brackets in the above example and rest of the examples to f
rm '/etc/systemd/system/tmp.mount'
### Control and Manage Sockets using Systemctl ###
### 使用Systemctl控制并管理套接口 ###
#### 21. List all available system sockets. ####
#### 21. 列出所有可用系统套接口 ####
# systemctl list-unit-files --type=socket
@ -328,7 +328,7 @@ Also note the square brackets in the above example and rest of the examples to f
11 unit files listed.
#### 22. How do I start, restart, stop, reload and check the status of a socket (example: cups.socket) in Linux. ####
#### 22. 在Linux中启动、重启、停止、重载套接口并检查其状态 ####
# systemctl start cups.socket
# systemctl restart cups.socket
@ -344,13 +344,13 @@ Also note the square brackets in the above example and rest of the examples to f
Apr 28 18:10:59 tecmint systemd[1]: Starting CUPS Printing Service Sockets.
Apr 28 18:10:59 tecmint systemd[1]: Listening on CUPS Printing Service Sockets.
#### 23. How to active a socket and enable or disable at boot time (auto start socket at system boot). ####
#### 23. 在启动时激活套接口,并启用或禁用它(系统启动时自启动) ####
# systemctl is-active cups.socket
# systemctl enable cups.socket
# systemctl disable cups.socket
#### 24. How to mask (making it impossible to start) or unmask a socket (cups.socket). ####
#### 24. 屏蔽(使它不能启动)或显示套接口 ####
# systemctl mask cups.socket
ln -s '/dev/null' '/etc/systemd/system/cups.socket'
@ -358,31 +358,31 @@ Also note the square brackets in the above example and rest of the examples to f
# systemctl unmask cups.socket
rm '/etc/systemd/system/cups.socket'
### CPU Utilization (Shares) of a Service ###
### 服务的CPU利用率分配额 ###
#### 25. Get the current CPU Shares of a Service (say httpd). ####
#### 25. 获取当前某个服务的CPU分配额如httpd ####
# systemctl show -p CPUShares httpd.service
CPUShares=1024
**Note**: The default each service has a CPUShare = 1024. You may increase/decrease CPU share of a process.
**注意**各个服务的默认CPU分配份额=1024你可以增加/减少某个进程的CPU分配份额。
#### 26. Limit the CPU Share of a service (httpd.service) to 2000 CPUShares/ ####
#### 26. 将某个服务httpd.service的CPU分配份额限制为2000 CPUShares/ ####
# systemctl set-property httpd.service CPUShares=2000
# systemctl show -p CPUShares httpd.service
CPUShares=2000
**Note**: When you set CPUShare for a service, a directory with the name of service is created (httpd.service.d) which contains a file 90-CPUShares.conf which contains the CPUShare Limit information. You may view the file as:
**注意**当你为某个服务设置CPUShares会自动创建一个以服务名命名的目录httpd.service里面包含了一个名为90-CPUShares.conf的文件该文件含有CPUShare限制信息你可以通过以下方式查看该文件
# vi /etc/systemd/system/httpd.service.d/90-CPUShares.conf
[Service]
CPUShares=2000
#### 27. Check all the configuration details of a service. ####
#### 27. 检查某个服务的所有配置细节 ####
# systemctl show httpd
@ -401,7 +401,7 @@ Also note the square brackets in the above example and rest of the examples to f
FragmentPath=/usr/lib/systemd/system/httpd.service
....
#### 28. Analyze critical chain for a services(httpd). ####
#### 28. 分析某个服务httpd的关键链 ####
# systemd-analyze critical-chain httpd.service
@ -426,7 +426,7 @@ Also note the square brackets in the above example and rest of the examples to f
└─systemd-fsck@dev-disk-by\x2duuid-79f594ad\x2da332\x2d4730\x2dbb5f\x2d85d196080964.service @4.092s +149ms
└─dev-disk-by\x2duuid-79f594ad\x2da332\x2d4730\x2dbb5f\x2d85d196080964.device @4.092s
#### 29. Get a list of dependencies for a services (httpd). ####
#### 29. 获取某个服务httpd的依赖性列表 ####
# systemctl list-dependencies httpd.service
@ -448,7 +448,7 @@ Also note the square brackets in the above example and rest of the examples to f
│ ├─dbus.socket
....
#### 30. List control groups hierarchically. ####
#### 30. 按等级列出控制组 ####
# systemd-cgls
@ -472,7 +472,7 @@ Also note the square brackets in the above example and rest of the examples to f
│ └─721 /usr/lib/polkit-1/polkitd --no-debug
....
#### 31. List control group according to CPU, memory, Input and Output. ####
#### 31. 按CPU、内存、输入和输出列出控制组 ####
# systemd-cgtop
@ -501,9 +501,9 @@ Also note the square brackets in the above example and rest of the examples to f
/system.slice/webmin.service 1 - - - -
/user.slice/user-0.slice/session-1.scope 3 - - - -
### Control System Runlevels ###
### 控制系统运行等级 ###
#### 32. How to start system rescue mode. ####
#### 32. 启动系统救援模式 ####
# systemctl rescue
@ -511,7 +511,7 @@ Also note the square brackets in the above example and rest of the examples to f
The system is going down to rescue mode NOW!
#### 33. How to enter into emergency mode. ####
#### 33. 进入紧急模式 ####
# systemctl emergency
@ -519,31 +519,31 @@ Also note the square brackets in the above example and rest of the examples to f
system logs, "systemctl reboot" to reboot, "systemctl default" to try again
to boot into default mode.
#### 34. List current run levels in use. ####
#### 34. 列出当前使用的运行等级 ####
# systemctl get-default
multi-user.target
#### 35. How to start Runlevel 5 aka graphical mode. ####
#### 35. 启动运行等级5即图形模式 ####
# systemctl isolate runlevel5.target
OR
# systemctl isolate graphical.target
#### 36. How to start Runlevel 3 aka multiuser mode (commandline). ####
#### 36. 启动运行等级3即多用户模式命令行 ####
# systemctl isolate runlevel3.target
OR
# systemctl isolate multiuser.target
#### 36. How to set multiusermode or graphical mode as default runlevel. ####
#### 36. 设置多用户模式或图形模式为默认运行等级 ####
# systemctl set-default runlevel3.target
# systemctl set-default runlevel5.target
#### 37. How to reboot, halt, suspend, hibernate or put system in hybrid-sleep. ####
#### 37. 重启、停止、挂起、休眠系统或使系统进入混合睡眠 ####
# systemctl reboot
@ -555,25 +555,25 @@ Also note the square brackets in the above example and rest of the examples to f
# systemctl hybrid-sleep
For those who may not be aware of runlevels and what it does.
对于不知运行等级为何物的人,说明如下。
- Runlevel 0 : Shut down and Power off the system.
- Runlevel 1 : Rescue/Maintenance Mode.
- Runlevel 3 : multiuser, no-graphic system.
- Runlevel 4 : multiuser, no-graphic system.
- Runlevel 5 : multiuser, graphical system.
- Runlevel 6 : Shutdown and Reboot the machine.
- Runlevel 0 : 关闭系统
- Runlevel 1 : 救援/维护模式
- Runlevel 3 : 多用户,无图形系统
- Runlevel 4 : 多用户,无图形系统
- Runlevel 5 : 多用户,图形化系统
- Runlevel 6 : 关闭并重启机器
Thats all for now. Keep connected! Keep commenting. Dont forget to provide us with your valuable feedback in the comments below. Like and share us and help us get spread.
今天就到此为止吧。请保持关注,多多评论,别忘了在下面的评论中为我们提供宝贵的反馈。也请点赞并分享我们的文章,帮助我们传播。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/manage-services-using-systemd-and-systemctl-in-linux/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[a]:http://www.tecmint.com/author/avishek/

View File

@ -0,0 +1,460 @@
Shell脚本学习初次操作指南
================================================================================
![](http://blog.linoxide.com/wp-content/uploads/2015/04/myfirstshellscript.jpg)
通常当人们提到“shell脚本语言”时浮现在他们脑海中是bashkshsh或者其它相类似的linux/unix脚本语言。脚本语言是与计算机交流的另外一种途径。使用图形化窗口界面不管是windows还是linux都无所谓用户可以移动鼠标并点击各种对象比如按钮、列表、选框等等。但这种方式在每次用户想要计算机/服务器完成相同任务时比如说批量转换照片或者下载新的电影、mp3等却是十分不方便。要想让所有这些事情变得简单并且自动化我们可以使用shell脚本。
某些编程语言像pascal、foxpro、C、java之类在执行前需要先进行编译。它们需要合适的编译器来让我们的代码完成某个任务。
而其它一些编程语言像php、javascript、visualbasic之类则不需要编译器因此它们需要解释器而我们不需要编译代码就可以运行程序。
shell脚本也像解释器一样但它通常用于调用外部已编译的程序。然后它会捕获输出结果、退出代码并根据情况进行处理。
Linux世界中最为流行的shell脚本语言之一就是bash。而我认为这只是我个人的看法原因在于默认情况下bash shell可以让用户便捷地浏览历史命令先前执行过的与之相反的是ksh则要求对.profile进行一些调整,或者记住一些“魔术”组合键来查阅历史并修正命令。
好了我想这些介绍已经足够了剩下来哪个环境最适合你就留给你自己去判断吧。从现在开始我将只讲bash及其脚本。在下面的例子中我将使用CentOS 6.6和bash-4.1.2。请确保你有相同版本,或者更高版本。
### Shell脚本流 ###
shell脚本语言就跟和几个人聊天类似。你只需把所有命令想象成能帮你做事的人只要你用正确的方式来请求他们去做。比如说你想要写文档。首先你需要纸。然后你需要把内容说给某个人听让他帮你写。最后你想要把它存放到某个地方。或者说你想要造一所房子因而你需要请合适的人来清空场地。在他们说“事情干完了”之后另外一些工程师就可以帮你来砌墙。最后当这些工程师们也告诉你“事情干完了”的时候你就可以叫油漆工来给房子粉饰了。如果你让油漆工在墙砌好前就来粉饰会发生什么呢我想他们会开始发牢骚了。几乎所有这些像人一样的命令都会说话如果它们完成了工作而没有发生什么问题那么它们就会把结果告诉“标准输出”如果它们不能做你叫它们做的事——它们会把原因告诉“标准错误”。这样,最后所有的命令都是通过“标准输入”来听你说话的。
快速实例——当你打开linux终端并写一些文本时——你正通过“标准输入”和bash说话。那么让我们来问问bash shell **who am i**吧。
[root@localhost ~]# who am i <--- 你通过标准输入对bash shell说话
root pts/0 2015-04-22 20:17 (192.168.1.123) <--- bash shell通过标准输出回答你
现在让我们说一些bash听不懂的问题
[root@localhost ~]# blablabla <--- 你又在和标准输入说话了
-bash: blablabla: command not found <--- bash通过标准错误在发牢骚了
“:”之前的第一个单词通常是向你发牢骚的命令。实际上,这些流中的每一个都有它们自己的索引号:
- 标准输入(**stdin** - 0
- 标准输出(**stdout** - 1
- 标准错误(**stderr** - 2
如果你真的想要知道某个命令向哪个输出流说了些什么——你需要把那次发言重定向到文件(在命令后使用流索引和大于号“>”):
[root@localhost ~]# blablabla 1> output.txt
-bash: blablabla: command not found
在本例中我们试着重定向1**stdout**流到名为output.txt的文件。让我们来看对该文件内容所做的事情吧使用cat命令可以做这事
[root@localhost ~]# cat output.txt
[root@localhost ~]#
看起来似乎是空的。好吧现在让我们来重定向2**stderr**)流:
[root@localhost ~]# blablabla 2> error.txt
[root@localhost ~]#
好吧,我们看到牢骚话没了。让我们检查一下那个文件:
[root@localhost ~]# cat error.txt
-bash: blablabla: command not found
[root@localhost ~]#
果然如此我们看到所有牢骚话都被记录到errors.txt文件里头去了。
有时候,命令会同时产生**stdout**和**stderr**。要重定向它们到不同的文件,我们可以使用以下语句:
command 1>out.txt 2>err.txt
要缩短一点语句我们可以忽略“1”因为默认情况下**stdout**会被重定向:
command >out.txt 2>err.txt
好吧让我们试试做些“坏事”。让我们用rm命令把file1和folder1给删了吧
[root@localhost ~]# rm -vf folder1 file1 > out.txt 2>err.txt
现在来检查以下输出文件:
[root@localhost ~]# cat out.txt
removed `file1'
[root@localhost ~]# cat err.txt
rm: cannot remove `folder1': Is a directory
[root@localhost ~]#
正如我们所看到的,不同的流被分离到了不同的文件。有时候,这也不是很方便,因为我们想要查看出现错误时,在某些操作前面或后面所连续发生的事情。要实现这一目的,我们可以重定向两个流到同一个文件:
command >>out_err.txt 2>>out_err.txt
注意:请注意,我使用“>>”替代了“>”。它允许我们附加到文件,而不是覆盖文件。
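可以用下面几条命令自己动手验证一下二者的区别文件路径/tmp/demo.txt只是随便取的

```shell
echo "第一行" > /tmp/demo.txt     # “>”:覆盖写入
echo "第二行" >> /tmp/demo.txt    # “>>”:附加到末尾,此时文件有两行
echo "第三行" > /tmp/demo.txt     # 再次使用“>”,之前的内容被整个替换掉
cat /tmp/demo.txt                 # 此时只剩“第三行”
```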
我们可以重定向一个流到另一个:
command >out_err.txt 2>&1
让我来解释一下吧。命令的标准输出将被重定向到out_err.txt错误输出2号流将被重定向到1号流即上面解释过的标准输出而1号流已经被重定向到了那个文件所以两股流最终会进入同一个文件。让我们看这个实例
[root@localhost ~]# rm -fv folder2 file2 >out_err.txt 2>&1
[root@localhost ~]# cat out_err.txt
rm: cannot remove `folder2': Is a directory
removed `file2'
[root@localhost ~]#
看着这些组合的输出,我们可以将其说明为:首先,**rm**命令试着将folder2删除但它不会成功因为linux要求加上**-r**键来允许**rm**命令删除文件夹;而第二个目标file2则被删除了。通过为**rm**提供**-v**详细信息我们让rm命令告诉我们每个被删除的文件或文件夹。
这些就是你需要知道的,关于重定向的几乎所有内容了。我是说几乎,因为还有一个更为重要的重定向工具,它称之为“管道”。通过使用|(管道)符号,我们通常重定向**stdout**流。
比如说,我们有这样一个文本文件:
[root@localhost ~]# cat text_file.txt
This line does not contain H e l l o word
This lilne contains Hello
This also containd Hello
This one no due to HELLO all capital
Hello bash world!
而我们需要找到其中某些带有“Hello”的行Linux中有个**grep**命令可以完成该工作:
[root@localhost ~]# grep Hello text_file.txt
This lilne contains Hello
This also containd Hello
Hello bash world!
[root@localhost ~]#
当我们有个文件,想要在里头搜索的时候,这用起来很不错。但如果我们需要在另一个命令的输出中查找某些东西,这又该怎么办呢?是的,当然,我们可以重定向输出到文件,然后再在文件里头查找:
[root@localhost ~]# fdisk -l>fdisk.out
[root@localhost ~]# grep "Disk /dev" fdisk.out
Disk /dev/sda: 8589 MB, 8589934592 bytes
Disk /dev/mapper/VolGroup-lv_root: 7205 MB, 7205814272 bytes
Disk /dev/mapper/VolGroup-lv_swap: 855 MB, 855638016 bytes
[root@localhost ~]#
注意如果你打算grep的内容中带有空格请像上面那样用双引号把它引起来。
注意: fdisk命令显示关于Linux操作系统磁盘驱动器的信息
就像我们看到的,这种方式很不方便,因为我们不一会儿就把临时文件空间给搞乱了。要完成该任务,我们可以使用管道。它们允许我们重定向一个命令的**stdout**到另一个命令的**stdin**流:
[root@localhost ~]# fdisk -l | grep "Disk /dev"
Disk /dev/sda: 8589 MB, 8589934592 bytes
Disk /dev/mapper/VolGroup-lv_root: 7205 MB, 7205814272 bytes
Disk /dev/mapper/VolGroup-lv_swap: 855 MB, 855638016 bytes
[root@localhost ~]#
如你所见,我们不需要任何临时文件就获得了相同的结果。我们把**fdisk stdout**重定向到了**grep stdin**。
**注意** 管道重定向总是从左至右的。
还有几个其它重定向,但是我们将把它们放在后面讲。
### 在shell中显示自定义信息 ###
正如我们所知道的通常与shell的交流以及shell内的交流是以对话的方式进行的。因此让我们创建一些真正的脚本吧这些脚本也会和我们讲话。这会让你学到一些简单的命令并对脚本的概念有一个更好的理解。
假设我们是某个公司的总服务台经理我们想要创建某个shell脚本来登记呼叫信息电话号码、用户名以及问题的简要描述。我们打算把这些信息存储到普通文本文件data.txt中以便今后统计。脚本本身将以对话的方式工作这会让总服务台的工作人员的小日子过得轻松点。那么首先我们需要显示提问。要显示信息我们可以用echo和printf命令。这两个都是用来显示信息的但是printf更为强大因为我们可以通过它很好地格式化输出我们可以让它右对齐、左对齐或者为信息留出专门的空间。让我们从一个简单的例子开始吧。要创建文件请使用你喜欢的文本编辑器kate、nano、vi……然后创建名为note.sh的文件里面写入这些命令
echo "Phone number ?"
### 脚本执行 ###
在保存文件后我们可以使用bash命令来运行把我们的文件作为它的参数
[root@localhost ~]# bash note.sh
Phone number ?
实际上,这样来执行脚本是很不方便的。如果不使用**bash**命令作为前缀来执行,会更舒服一些。要让脚本可执行,我们可以使用**chmod**命令:
[root@localhost ~]# ls -la note.sh
-rw-r--r--. 1 root root 22 Apr 23 20:52 note.sh
[root@localhost ~]# chmod +x note.sh
[root@localhost ~]# ls -la note.sh
-rwxr-xr-x. 1 root root 22 Apr 23 20:52 note.sh
[root@localhost ~]#
![set permission script file](http://blog.linoxide.com/wp-content/uploads/2015/04/Capture.png)
**注意** ls命令显示了当前文件夹内的文件。通过添加-la键它会显示更多文件信息。
如我们所见,在**chmod**命令执行前脚本只有读r和写w权限。在执行**chmod +x**后它就获得了执行x权限。关于权限的更多细节我会在下一篇文章中讲述。现在我们只需这么来运行
[root@localhost ~]# ./note.sh
Phone number ?
在脚本名前,我添加了./组合。.(点)在unix世界中意味着当前位置(当前文件夹),/(斜线)是文件夹分隔符。(在Windows系统中我们使用\(反斜线)实现同样功能)所以这整个组合的意思是说:“从当前文件夹执行note.sh脚本”。我想如果我用完整路径来运行这个脚本的话你会更加清楚一些
[root@localhost ~]# /root/note.sh
Phone number ?
[root@localhost ~]#
它也能工作。
如果所有linux用户都有相同的默认shell那就万事OK。如果我们只是执行该脚本默认的用户shell就会用于解析脚本内容并运行命令。不同的shell有着一丁点不同的语法、内部命令等等所以为了保证我们的脚本会使用**bash**,我们应该添加**#!/bin/bash**到文件首行。这样默认的用户shell将调用**/bin/bash**,而只有在那时候,脚本中的命令才会被执行:
[root@localhost ~]# cat note.sh
#!/bin/bash
echo "Phone number ?"
直到现在我们才100%确信**bash**会用来解析我们的脚本内容。让我们继续。
### 读取输入 ###
在显示信息后,脚本会等待用户回答。我们可以用**read**命令来接收用户的回答:
#!/bin/bash
echo "Phone number ?"
read phone
在执行后,脚本会等待用户输入,直到用户按[ENTER]键:
[root@localhost ~]# ./note.sh
Phone number ?
12345 <--- 这儿是我输入的内容
[root@localhost ~]#
你输入的所有东西都会被存储到变量**phone**中,要显示变量的值,我们同样可以使用**echo**命令:
[root@localhost ~]# cat note.sh
#!/bin/bash
echo "Phone number ?"
read phone
echo "You have entered $phone as a phone number"
[root@localhost ~]# ./note.sh
Phone number ?
123456
You have entered 123456 as a phone number
[root@localhost ~]#
在**bash** shell中我们使用**$**(美元)符号作为变量标识,除了把输入读入到变量等为数不多的场合(将在今后说明)。
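顺带一提引用变量时最好用双引号把它包起来否则值里的空格会让shell把内容拆成多个单词下面的变量名是随意取的

```shell
name="Jim Smith"
echo "$name"        # 加引号:完整输出 Jim Smith
count=0
for word in $name   # 不加引号:值按空格被拆成两个单词
do
    count=$((count + 1))
done
echo "$count"       # 输出 2
```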
好了,现在我们准备添加剩下的问题了:
#!/bin/bash
echo "Phone number?"
read phone
echo "Name?"
read name
echo "Issue?"
read issue
[root@localhost ~]# ./note.sh
Phone number?
123
Name?
Jim
Issue?
script is not working.
[root@localhost ~]#
### 使用流重定向 ###
太完美了剩下来就是重定向所有东西到文件data.txt了。作为字段分隔符我们将使用/(斜线)符号。
**注意** 你可以选择任何你认为是最好,但是确保文件内容不会包含这些符号在内。它会导致在文本行中产生额外字段。
别忘了使用“>>”来代替“>”,因为我们想要将输出内容附加到文件末!
[root@localhost ~]# tail -2 note.sh
read issue
echo "$phone/$name/$issue">>data.txt
[root@localhost ~]# ./note.sh
Phone number?
987
Name?
Jimmy
Issue?
Keybord issue.
[root@localhost ~]# cat data.txt
987/Jimmy/Keybord issue.
[root@localhost ~]#
**注意** **tail**命令显示了文件的最后**-n**行。
搞定。让我们再来运行一次看看:
[root@localhost ~]# ./note.sh
Phone number?
556
Name?
Janine
Issue?
Mouse was broken.
[root@localhost ~]# cat data.txt
987/Jimmy/Keybord issue.
556/Janine/Mouse was broken.
[root@localhost ~]#
我们的文件在增长让我们在每行前面加个日期吧这对于今后摆弄这些统计数据时会很有用。要实现这功能我们可以使用date命令并指定某种格式因为我不喜欢默认格式
[root@localhost ~]# date
Thu Apr 23 21:33:14 EEST 2015 <---- date命令的默认输出
[root@localhost ~]# date "+%Y.%m.%d %H:%M:%S"
2015.04.23 21:33:18 <---- 格式化后的输出
有几种方式可以读取命令输出到变量,在这种简单的情况下,我们将使用`(反引号):
[root@localhost ~]# cat note.sh
#!/bin/bash
now=`date "+%Y.%m.%d %H:%M:%S"`
echo "Phone number?"
read phone
echo "Name?"
read name
echo "Issue?"
read issue
echo "$now/$phone/$name/$issue">>data.txt
[root@localhost ~]# ./note.sh
Phone number?
123
Name?
Jim
Issue?
Script hanging.
[root@localhost ~]# cat data.txt
2015.04.23 21:38:56/123/Jim/Script hanging.
[root@localhost ~]#
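顺便说一句除了反引号bash还支持更现代的$( )写法来获取命令输出,二者效果相同,但$( )可以嵌套、也更易读:

```shell
# 反引号写法(正文中所用)
now=`date "+%Y.%m.%d %H:%M:%S"`
# 等价的 $( ) 写法
now=$(date "+%Y.%m.%d %H:%M:%S")
echo "$now"
```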
嗯…… 我们的脚本看起来有点丑啊,让我们来美化一下。如果你查看过**read**命令的手册你会发现read命令自己也可以显示信息。要实现该功能我们应该使用-p键并加上提示信息
[root@localhost ~]# cat note.sh
#!/bin/bash
now=`date "+%Y.%m.%d %H:%M:%S"`
read -p "Phone number: " phone
read -p "Name: " name
read -p "Issue: " issue
echo "$now/$phone/$name/$issue">>data.txt
你可以直接从控制台查找到各个命令的大量有趣的信息,只需输入:**man read, man echo, man date, man ……**
同意吗?它看上去是好多了!
[root@localhost ~]# ./note.sh
Phone number: 321
Name: Susane
Issue: Mouse was stolen
[root@localhost ~]# cat data.txt
2015.04.23 21:38:56/123/Jim/Script hanging.
2015.04.23 21:43:50/321/Susane/Mouse was stolen
[root@localhost ~]#
光标在消息的后面(不是在新的一行中),这有点意思。
### 循环 ###
是时候来改进我们的脚本了。如果用户一整天都在接电话,如果每次都要去运行,这岂不是很麻烦?让我们让这些活动都永无止境地循环去吧:
[root@localhost ~]# cat note.sh
#!/bin/bash
while true
do
read -p "Phone number: " phone
now=`date "+%Y.%m.%d %H:%M:%S"`
read -p "Name: " name
read -p "Issue: " issue
echo "$now/$phone/$name/$issue">>data.txt
done
我已经交换了**read phone**和**now=`date`**两行。这是因为我想要在输入电话号码后再获取时间。如果我把它放在循环的首行,那么变量会在上一条数据存储到文件后就获得了时间。而这并不好因为下一次呼叫可能在20分钟后甚至更晚。
[root@localhost ~]# ./note.sh
Phone number: 123
Name: Jim
Issue: Script still not works.
Phone number: 777
Name: Daniel
Issue: I broke my monitor
Phone number: ^C
[root@localhost ~]# cat data.txt
2015.04.23 21:38:56/123/Jim/Script hanging.
2015.04.23 21:43:50/321/Susane/Mouse was stolen
2015.04.23 21:47:55/123/Jim/Script still not works.
2015.04.23 21:48:16/777/Daniel/I broke my monitor
[root@localhost ~]#
**注意** 要从无限循环中退出,你可以按[Ctrl]+[C]组合键。Shell会用^来表示Ctrl键。
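如果你不想让[Ctrl]+[C]粗暴地打断脚本还可以用bash内建的trap命令在退出前做点收尾工作。下面只是一个示意提示语是假设的

```shell
#!/bin/bash
# 捕获 Ctrl+C 发出的 INT 信号,先道别再正常退出
trap 'echo "再见,数据都已保存到 data.txt"; exit 0' INT
while true
do
    read -p "Phone number: " phone
    # ……其余的 read 和 echo 语句与正文相同……
done
```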
### 使用管道重定向 ###
让我们来给我们的“弗兰肯斯坦”添加更多功能,我想要脚本在每次呼叫后显示某个统计数据。比如说,我想要查看各个号码呼叫了我几次。要实现这个我们需要先cat文件data.txt
[root@localhost ~]# cat data.txt
2015.04.23 21:38:56/123/Jim/Script hanging.
2015.04.23 21:43:50/321/Susane/Mouse was stolen
2015.04.23 21:47:55/123/Jim/Script still not works.
2015.04.23 21:48:16/777/Daniel/I broke my monitor
2015.04.23 22:02:14/123/Jimmy/New script also not working!!!
[root@localhost ~]#
现在,所有输出我们都可以重定向到**cut**命令,让**cut**来把每行切成一块一块(我们使用分隔符“/”),然后打印第二个字段:
[root@localhost ~]# cat data.txt | cut -d"/" -f2
123
321
123
777
123
[root@localhost ~]#
现在,我们可以把这个输出重定向到另外一个命令**sort**
[root@localhost ~]# cat data.txt | cut -d"/" -f2|sort
123
123
123
321
777
[root@localhost ~]#
然后,用**uniq**命令只留下唯一的行。要统计唯一条目,只需给**uniq**命令添加**-c**键:
[root@localhost ~]# cat data.txt | cut -d"/" -f2 | sort | uniq -c
3 123
1 321
1 777
[root@localhost ~]#
只要把这个添加到我们的循环的最后:
#!/bin/bash
while true
do
read -p "Phone number: " phone
now=`date "+%Y.%m.%d %H:%M:%S"`
read -p "Name: " name
read -p "Issue: " issue
echo "$now/$phone/$name/$issue">>data.txt
echo "===== We got calls from ====="
cat data.txt | cut -d"/" -f2 | sort | uniq -c
echo "--------------------------------"
done
运行:
[root@localhost ~]# ./note.sh
Phone number: 454
Name: Malini
Issue: Windows license expired.
===== We got calls from =====
3 123
1 321
1 454
1 777
--------------------------------
Phone number: ^C
![running script](http://blog.linoxide.com/wp-content/uploads/2015/04/Capture11.png)
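如果还想让统计结果按呼叫次数从多到少排列,只需在管道末尾再接一个`sort -rn`(按数值逆序排序):

```shell
# 在统计管道的末尾再接一个 sort -rn次数多的号码会排在最前面
cat data.txt | cut -d"/" -f2 | sort | uniq -c | sort -rn
```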
当前场景贯穿了几个熟知的步骤:
- 显示消息
- 获取用户输入
- 存储值到文件
- 处理存储的数据
但是,如果用户的需求更多一些呢——他有时候需要输入数据有时候需要看统计或者可能要在存储的数据中查找一些东西呢对于这些事情我们需要使用选择结构switch/case并知道怎样很好地格式化输出。这对于在shell中“画”表格的时候很有用。
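提前预览一下bash中的case语句大致长这样菜单项只是假设的示意

```shell
#!/bin/bash
read -p "想做什么?(new/stat): " action
case $action in
    new)  echo "记录新的呼叫" ;;
    stat) echo "显示统计数据" ;;
    *)    echo "未知操作: $action" ;;
esac
```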
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-shell-script/guide-start-learning-shell-scripting-scratch/
作者:[Petras Liumparas][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/petrasl/

View File

@ -0,0 +1,112 @@
如何在CentOS 7.x中安装OpenERPOdoo
================================================================================
各位好这篇教程关于的是如何在CentOS 7中安装Odoo就是我们所知的OpenERP。你是不是在考虑为你的业务安装一个不错的ERP企业资源规划软件那么OpenERP就是你寻找的最好的程序因为它是一款为你的商务提供杰出特性的自由开源软件。
[OpenERP][1]是一款自由开源的传统企业资源规划ERP软件它包含了开源CRM、网站构建、电子商务、项目管理、计费账务、销售点、人力资源、市场、生产、采购管理以及其他模块用于提高效率及销售。Odoo中的各个应用可以作为独立程序使用它们之间也可以无缝集成因此你可以在安装几个应用后得到一个全功能的开源ERP。
因此下面是在你的CentOS上安装OpenERP的步骤。
### 1. 安装 PostgreSQL ###
首先首先我们需要更新CentOS 7的软件包来确保是最新的包补丁和安全更新。要更新我们的系统我们要在shell下运行下面的命令。
# yum clean all
# yum update
现在我们要安装PostgreSQL因为OpenERP使用PostgreSQL作为他的数据库。要安装它我们需要运行下面的命令。
# yum install postgresql postgresql-server postgresql-libs
![Installing postgresql](http://blog.linoxide.com/wp-content/uploads/2015/03/installing-postgresql.png)
安装完成后,我们需要用下面的命令初始化数据库。
# postgresql-setup initdb
![Intializating postgresql](http://blog.linoxide.com/wp-content/uploads/2015/03/intializating-postgresql.png)
我们接着设置PostgreSQL来使它每次开机启动。
# systemctl enable postgresql
# systemctl start postgresql
因为我们还没有为用户“postgresql”设置密码我们现在设置。
# su - postgres
$ psql
postgres=# \password postgres
postgres=# \q
# exit
![setting password postgres](http://blog.linoxide.com/wp-content/uploads/2015/03/setting-password-postgres.png)
### 2. 设置Odoo仓库 ###
在数据库初始化完成后我们要把EPEL企业版Linux附加软件包仓库添加到我们的CentOS中。Odoo或者OpenERP依赖于Python运行时以及其他许多标准仓库中没有包含的包。这样我们需要为企业版Linux添加额外的软件包仓库支持来解决Odoo所需要的依赖。要完成安装我们需要运行下面的命令。
# yum install epel-release
![Installing EPEL Release](http://blog.linoxide.com/wp-content/uploads/2015/03/installing-epel-release.png)
现在安装完EPEL后我们使用yum-config-manager添加OdooOpenERP的仓库。
# yum install yum-utils
# yum-config-manager --add-repo=https://nightly.odoo.com/8.0/nightly/rpm/odoo.repo
![Adding OpenERP (Odoo) Repo](http://blog.linoxide.com/wp-content/uploads/2015/03/added-odoo-repo.png)
### 3. 安装Odoo 8 (OpenERP) ###
在CentOS 7中添加了Odoo 8OpenERP的仓库后我们使用下面的命令来安装Odoo 8OpenERP
# yum install -y odoo
上面的命令会安装odoo以及必需的依赖包。
![Installing odoo or OpenERP](http://blog.linoxide.com/wp-content/uploads/2015/03/installing-odoo.png)
现在我们使用下面的命令将Odoo服务设置为开机自动启动并立即启动它。
# systemctl enable odoo
# systemctl start odoo
![Starting Odoo](http://blog.linoxide.com/wp-content/uploads/2015/03/starting-odoo.png)
### 4. 防火墙允许 ###
因为Odoo使用8069端口我们需要在防火墙中允许远程访问。我们使用下面的命令来在防火墙中放行8069端口。
# firewall-cmd --zone=public --add-port=8069/tcp --permanent
# firewall-cmd --reload
![Allowing firewall Port](http://blog.linoxide.com/wp-content/uploads/2015/03/allowing-firewall-port.png)
**注意默认上只有本地的连接才允许。如果我们要允许PostgreSQL的远程访问我们需要在pg_hba.conf添加下面图片中一行**
# nano /var/lib/pgsql/data/pg_hba.conf
![Allowing Remote Access pgsql](http://blog.linoxide.com/wp-content/uploads/2015/03/allowing-remote-access-pgsql.png)
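作为参考pg_hba.conf中允许某个网段以密码认证方式远程连接的规则大致如下其中网段192.168.1.0/24只是假设请按你的局域网修改。下面用一个临时副本来演示真实环境请直接编辑/var/lib/pgsql/data/pg_hba.conf并重启PostgreSQL

```shell
# 演示用副本;实际应编辑 /var/lib/pgsql/data/pg_hba.conf 并执行 systemctl restart postgresql
PGHBA=/tmp/pg_hba.conf.demo
printf 'local   all   all                        peer\n' > "$PGHBA"   # 模拟原有的本地规则
# 追加的远程访问规则网段192.168.1.0/24 仅为假设md5 表示密码认证
printf 'host    all   all   192.168.1.0/24       md5\n' >> "$PGHBA"
grep '192.168.1.0/24' "$PGHBA"
```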
### 5. Web接口 ###
我们已经在CentOS 7中安装了最新的Odoo 8OpenERP现在可以在浏览器中输入http://ip-address:8069来访问Odoo。接着我们要做的第一件事就是创建一个新的数据库并设置新的密码。注意默认的主密码是admin。然后我们就可以在面板中输入用户名和密码登录了。
![Odoo Panel](http://blog.linoxide.com/wp-content/uploads/2015/03/odoo-panel.png)
### 总结 ###
Odoo 8OpenERP是世界上最好的开源ERP程序之一安装它并不费什么功夫而OpenERP是由许多模块组成的、面向商务和公司的完整ERP程序。因此如果你有任何问题、建议、反馈请写在下面的评论栏中。谢谢享受OpenERPOdoo 8吧:-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/setup-openerp-odoo-centos-7/
作者:[Arun Pyasi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:https://www.odoo.com/

View File

@ -0,0 +1,98 @@
Linux有问必答如何在Ubuntu上使用网络管理配置一个Linux网桥
===============================================================================
> **问题**我需要在我的Ubuntu主机上建立一个Linux网桥来把一块网卡共享给主机上创建的其他一些虚拟主机或者容器。我目前正在Ubuntu上使用网络管理NetworkManager所以最好能使用网络管理来配置一个网桥。我该怎么做
网桥是一种硬件设备用于在数据链路层OSI七层模型中的第二层互连两个或多个网段使得不同网段上的网络设备可以互相访问。当你想要互连一个主机里的多个虚拟机或者多个以太网接口时就需要在Linux主机里用到类似网桥的概念。
有很多的方法来配置一个Linux网桥。举个例子在一个无图形界面的服务器环境里你可以使用[brctl][1]手动地配置一个网桥。而在桌面环境下,网络管理本身就支持建立网桥。那就让我们看看如何用网络管理配置一个网桥吧。
### 要求 ###
为了避免[任何问题][2]建议你的网络管理版本为0.9.9或者更高这对应于Ubuntu 15.04或者更新的版本。
$ apt-cache show network-manager | grep Version
----------
Version: 0.9.10.0-4ubuntu15.1
Version: 0.9.10.0-4ubuntu15
### 创建一个网桥 ###
使用网络管理创建网桥最简单的方式就是通过nm-connection-editor。这款GUI图形用户界面的工具允许你傻瓜式地配置一个网桥。
首先启动nm-connection-editor。
$ nm-connection-editor
该编辑器的窗口会显示一个目前已配置好的网络连接的列表。点击“Add”按钮创建一个网桥。
![](https://farm9.staticflickr.com/8781/17139502730_c3ca920f7f.jpg)
接下来选择“Bridge”作为连接类型。
![](https://farm9.staticflickr.com/8873/17301102406_4f75133391_z.jpg)
现在开始配置网桥,包括它的名字和网桥接口。如果之前没有创建过其他网桥那么默认的网桥接口会被命名为bridge0。
回顾一下创建网桥的目的是为了通过网桥共享你的以太网卡接口。所以你需要添加以太网卡接口到网桥。在图形界面添加一个新的“bridged connection”可以实现上述目的。点击“Add”按钮。
![](https://farm9.staticflickr.com/8876/17327069755_52f1d81f37_z.jpg)
选择“Ethernet”作为连接类型。
![](https://farm9.staticflickr.com/8832/17326664591_632a9001da_z.jpg)
在“Device MAC address”区域选择你想要加入网桥的接口的MAC地址。本例中假设该接口是eth0。
![](https://farm9.staticflickr.com/8842/17140820559_07a661f30c_z.jpg)
点击“General”标签并且选中两个复选框分别是“Automatically connect to this network when it is available”和“All users may connect to this network”。
![](https://farm8.staticflickr.com/7776/17325199982_801290e172_z.jpg)
保存更改。
现在,你会在网桥里看见一个新的从属连接被建立。
![](https://farm8.staticflickr.com/7674/17119624667_6966b1147e_z.jpg)
点击网桥的“General”标签并且确保最上面的两个复选框被选中了。
![](https://farm8.staticflickr.com/7715/17301102276_4266a1e41d_z.jpg)
切换到“IPv4 Settings”标签为网桥配置DHCP或者静态IP地址。注意你应该为网桥使用与从属以太网接口eth0相同的IPv4设置。本例中我们假设eth0是通过DHCP配置的因此此处选择“AutomaticDHCP”。如果eth0被指定了一个静态IP地址那么你也应该给网桥指定相同的IP地址。
![](https://farm8.staticflickr.com/7737/17140820469_99955cf916_z.jpg)
最后,保存网桥的设置。
现在你会看见一个新创建的网桥连接出现在“Network Connections”窗口里。同时原来为从属接口eth0预先配置的有线连接已经不再需要了所以去删除原来的有线连接吧。
![](https://farm9.staticflickr.com/8700/17140820439_272a6d5c4e.jpg)
这时候网桥连接会被自动激活。你将会暂时失去一个连接从指定给eth0的IP地址被网桥接管。一旦IP地址指定给了网桥你将会连接回你的以太网卡接口通过网桥。你可以通过“Network”设置确认一下。
![](https://farm8.staticflickr.com/7742/17325199902_9ceb67ddc1_c.jpg)
同时检查可用的接口。提醒一下网桥接口此时应该已经接管了原本指定给你的以太网接口的IP地址。
![](https://farm8.staticflickr.com/7717/17327069605_6143f1bd6a_b.jpg)
就这么多了,现在,网桥已经可以用了。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/configure-linux-bridge-network-manager-ubuntu.html
作者:[Dan Nanni][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://xmodulo.com/how-to-configure-linux-bridge-interface.html
[2]:https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1273201

View File

@ -0,0 +1,54 @@
Linux有问必答——Ubuntu桌面上如何禁用默认的密钥环解锁密码输入
================================================================================
>**问题**当我启动我的Ubuntu桌面时出现了一个弹出对话框要求我输入密码来解锁默认的密钥环。我怎样才能禁用这个“解锁默认密钥环”弹出窗口并自动解锁我的密钥环
密钥环被认为是用来以加密方式存储你的登录信息的本地数据库。各种桌面应用(如浏览器、电子邮件客户端)使用密钥环来安全地存储并管理你的登录凭证、机密、密码、证书或密钥。对于那些需要检索存储在密钥环中的信息的应用程序,需要解锁该密钥环。
Ubuntu桌面所使用的GNOME密钥环被整合到了桌面登录中该密钥环会在你验证进入桌面后自动解锁。但是如果你设置了自动登录桌面或者是从休眠中唤醒你默认的密钥环仍然可能“被锁定”的。在这种情况下你会碰到这一提示
>“为密钥环‘默认密钥环’输入密码来解锁。某个应用想要访问密钥环‘默认密钥环’,但它被锁定了。”
>
![](https://farm9.staticflickr.com/8787/16716456754_309c39513c_o.png)
如果你想要避免在每次弹出对话框出现时输入密码来解锁默认密钥环,那么你可以这样做。
在做之前,请先了解禁用密码提示后可能出现的结果。通过自动解锁默认密钥环,你可以让任何使用你桌面的人无需知道你的密码而能获取你的密钥环(以及存储在密钥环中的任何信息)。
### 禁用默认密钥环解锁密码 ###
打开Dash然后输入“密码”来启动“密码和密钥”应用。
![](https://farm8.staticflickr.com/7709/17312949416_ed9c4fbe2d_b.jpg)
或者使用seahorse命令从命令行启动图形界面。
$ seahorse
在左侧面板中,右击“默认密钥环”,并选择“修改密码”。
![](https://farm8.staticflickr.com/7740/17159959750_ba5b675b00_b.jpg)
输入你的当前登录密码。
![](https://farm8.staticflickr.com/7775/17347551135_ce09260818_b.jpg)
在设置“默认”密钥环新密码的密码框中留空。
![](https://farm8.staticflickr.com/7669/17345663222_c9334c738b_c.jpg)
在询问是否不加密存储密码对话框中点击“继续”。
![](https://farm8.staticflickr.com/7761/17152692309_ce3891a0d9_c.jpg)
搞定。从今往后,那个该死的解锁密钥环提示对话框再也不会来烦你了。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/disable-entering-password-unlock-default-keyring.html
作者:[Dan Nanni][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni

View File

@ -0,0 +1,98 @@
Linux有问必答——Linux上如何安装Shrew Soft IPsec VPN
================================================================================
> **问题**我需要连接到一个IPSec VPN网关鉴于此我尝试使用Shrew Soft VPN客户端它是一个免费版本。我怎样才能安装Shrew Soft VPN客户端到[插入你的Linux发行版]?
市面上有许多商业VPN网关同时附带有他们自己的专有VPN客户端软件。虽然也有许多开源的VPN服务器/客户端备选方案但它们通常缺乏复杂的IPsec支持比如互联网密钥交换IKE这是一个标准的IPsec协议用于加固VPN密钥交换和验证安全。Shrew Soft VPN是一个免费的IPsec VPN客户端它支持多种验证方法、密钥交换、加密以及防火墙穿越选项。
下面介绍如何安装Shrew Soft VPN客户端到Linux平台。
首先,从[官方站点][1]下载它的源代码。
### 安装Shrew VPN客户端到Debian, Ubuntu或者Linux Mint ###
Shrew Soft VPN客户端图形界面要求使用Qt 4.x。所以作为依赖你需要安装其开发文件。
$ sudo apt-get install cmake libqt4-core libqt4-dev libqt4-gui libedit-dev libssl-dev checkinstall flex bison
$ wget https://www.shrew.net/download/ike/ike-2.2.1-release.tbz2
$ tar xvfvj ike-2.2.1-release.tbz2
$ cd ike
$ cmake -DCMAKE_INSTALL_PREFIX=/usr -DQTGUI=YES -DETCDIR=/etc -DNATT=YES .
$ make
$ sudo make install
$ cd /etc/
$ sudo mv iked.conf.sample iked.conf
### 安装Shrew VPN客户端到CentOS, Fedora或者RHEL ###
与基于Debian的系统类似在编译前你需要安装一堆依赖包包括Qt4。
$ sudo yum install qt-devel cmake gcc-c++ openssl-devel libedit-devel flex bison
$ wget https://www.shrew.net/download/ike/ike-2.2.1-release.tbz2
$ tar xvfvj ike-2.2.1-release.tbz2
$ cd ike
$ cmake -DCMAKE_INSTALL_PREFIX=/usr -DQTGUI=YES -DETCDIR=/etc -DNATT=YES .
$ make
$ sudo make install
$ cd /etc/
$ sudo mv iked.conf.sample iked.conf
在基于Red Hat的系统中最后一步需要用文本编辑器打开/etc/ld.so.conf文件并添加以下行。
$ sudo vi /etc/ld.so.conf
----------
include /usr/lib/
重新加载运行时绑定的共享库文件,以容纳新安装的共享库:
$ sudo ldconfig
### 启动Shrew VPN客户端 ###
首先启动IKE守护进程iked。该守护进程作为VPN客户端通过IKE协议与远程主机经由IPsec进行通信。
$ sudo iked
![](https://farm9.staticflickr.com/8685/17175688940_59c2db64c9_b.jpg)
现在启动qikea它是一个IPsec VPN客户端前端。该GUI应用允许你管理远程站点配置并发起VPN连接。
![](https://farm8.staticflickr.com/7750/16742992713_eed7f97939_b.jpg)
要创建一个新的VPN配置点击“添加”按钮然后填入VPN站点配置信息。创建配置后你可以通过点击该配置来发起VPN连接。
![](https://farm8.staticflickr.com/7725/17337297056_3d38dc2180_b.jpg)
### 故障排除 ###
1. 我在运行iked时碰到了如下错误。
iked: error while loading shared libraries: libss_ike.so.2.2.1: cannot open shared object file: No such file or directory
要解决该问题你需要更新动态链接器来容纳libss_ike库。对于此请添加库文件的位置路径到/etc/ld.so.conf文件中然后运行ldconfig命令。
$ sudo ldconfig
验证libss_ike是否添加到了库路径
$ ldconfig -p | grep ike
----------
libss_ike.so.2.2.1 (libc6,x86-64) => /lib/libss_ike.so.2.2.1
libss_ike.so (libc6,x86-64) => /lib/libss_ike.so
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/install-shrew-soft-ipsec-vpn-client-linux.html
作者:[Dan Nanni][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:https://www.shrew.net/download/ike

View File

@ -0,0 +1,78 @@
Linux有问必答如何安装autossh
================================================================================
> **提问**: 我打算在linux上安装autossh我应该怎么做呢?
[autossh][1] 是一款开源工具可以监控SSH会话并在会话断开或停止转发流量时自动重连。autossh假定目标主机已经设置了[无密码SSH登录][2]以便autossh可以在不需要用户介入的情况下重连断开的SSH会话。
只要你建立[反向SSH隧道][3]或者[挂载基于SSH的远程文件夹][4]autossh迟早会派上用场。基本上只要需要维持SSH会话autossh肯定是有用的。
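举个典型用法的示意用autossh维持一条反向隧道断线后自动重连。其中的端口2222和主机名remote.example.com均为假设请替换成你自己的值

```shell
# -M 0 表示禁用 autossh 自带的监控端口,改用 ssh 自身的保活探测
# -N 表示不执行远程命令,-f 让 autossh 转入后台运行
# -R 2222:localhost:22 是假设的反向隧道映射
autossh -M 0 -f -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -R 2222:localhost:22 user@remote.example.com
```

注意:使用`-M 0`时应像上面那样配合ssh的ServerAliveInterval/ServerAliveCountMax保活选项否则autossh无法察觉连接已断开。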
![](https://farm8.staticflickr.com/7786/17150854870_63966e78bc_c.jpg)
下面有许多linux发行版autossh的安装方法。
### Debian 或 Ubuntu 系统 ###
autossh已经包含在基于Debian的系统的基础仓库中所以可以很方便地安装。
$ sudo apt-get install autossh
### Fedora 系统 ###
Fedora库同样包含autossh包使用yum安装。
$ sudo yum install autossh
### CentOS 或 RHEL 系统 ###
CentOS/RHEL 6或更早版本需要先开启第三方库[Repoforge库][5]然后才能使用yum安装。
$ sudo yum install autossh
CentOS/RHEL 7以后autossh已经不在Repoforge库中你需要从源码编译安装例子在下面
### Arch Linux 系统 ###
$ sudo pacman -S autossh
### Debian 或 Ubuntu 系统中从源码编译安装###
如果你想要使用最新版本的autossh你可以自己编译源码安装
$ sudo apt-get install gcc make
$ wget http://www.harding.motd.ca/autossh/autossh-1.4e.tgz
$ tar -xf autossh-1.4e.tgz
$ cd autossh-1.4e
$ ./configure
$ make
$ sudo make install
### CentOS, Fedora 或 RHEL 系统中从源码编译安装###
在CentOS/RHEL 7以后autossh不再以预编译包的形式提供所以你不得不从源码编译安装。
$ sudo yum install wget gcc make
$ wget http://www.harding.motd.ca/autossh/autossh-1.4e.tgz
$ tar -xf autossh-1.4e.tgz
$ cd autossh-1.4e
$ ./configure
$ make
$ sudo make install
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/install-autossh-linux.html
作者:[Dan Nanni][a]
译者:[Vic020/VicYu](http://vicyu.net)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://www.harding.motd.ca/autossh/
[2]:http://xmodulo.com/how-to-enable-ssh-login-without.html
[3]:http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
[4]:http://xmodulo.com/how-to-mount-remote-directory-over-ssh-on-linux.html
[5]:http://xmodulo.com/how-to-set-up-rpmforge-repoforge-repository-on-centos.html