Merge pull request #5 from LCTT/master

update
This commit is contained in:
cvsher 2015-05-06 11:47:39 +08:00
commit 1192c6a01c
151 changed files with 9989 additions and 3862 deletions

View File

@ -0,0 +1,136 @@
在 Linux 中用 nmcli 命令绑定多块网卡
================================================================================
今天,我们来学习一下在 CentOS 7.x 中如何用 nmcliNetwork Manager Command Line Interface网络管理命令行接口进行网卡绑定。
网卡(接口)绑定是将多块 **网卡** 逻辑地连接到一起从而允许故障转移或者提高吞吐率的方法。提高服务器网络可用性的一个方式是使用多个网卡。Linux 绑定驱动程序提供了一种将多个网卡聚合到一个逻辑的绑定接口的方法。这是一种新的绑定实现方式,并不影响 Linux 内核中旧的绑定驱动。
**网卡绑定为我们提供了两个主要的好处:**
1. **高带宽**
1. **冗余/弹性**
现在让我们在 CentOS 7 上配置网卡绑定吧。我们需要决定选取哪些接口配置成一个组接口Team interface。
运行 **ip link** 命令查看系统中可用的接口。
$ ip link
![ip link](http://blog.linoxide.com/wp-content/uploads/2015/01/ip-link.png)
这里我们使用 **eno16777736** 和 **eno33554960** 网卡在“主动备份”模式下创建一个组接口。译者注关于不同模式可以参考[多网卡的7种bond模式原理](http://support.huawei.com/ecommunity/bbs/10155553.html)
按照下面的语法,用 **nmcli** 命令为网络组接口创建一个连接。
# nmcli con add type team con-name CNAME ifname INAME [config JSON]
**CNAME** 指代连接的名称,**INAME** 是接口名称,**JSON** (JavaScript Object Notation) 指定所使用的处理器(runner)。**JSON** 语法格式如下:
'{"runner":{"name":"METHOD"}}'
**METHOD** 是以下的其中一个:**broadcast、activebackup、roundrobin、loadbalance** 或者 **lacp**
### 1. 创建组接口 ###
现在让我们来创建组接口。这是我们创建组接口所使用的命令。
# nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"activebackup"}}'
![nmcli con create](http://blog.linoxide.com/wp-content/uploads/2015/01/nmcli-con-create.png)
运行 **# nmcli con show** 命令验证组接口配置。
# nmcli con show
![显示组接口](http://blog.linoxide.com/wp-content/uploads/2015/01/show-team-interface.png)
### 2. 添加从设备 ###
现在让我们添加从设备到主设备 team0。这是添加从设备的语法
# nmcli con add type team-slave con-name CNAME ifname INAME master TEAM
在这里我们添加 **eno16777736** 和 **eno33554960** 作为 **team0** 接口的从设备。
# nmcli con add type team-slave con-name team0-port1 ifname eno16777736 master team0
# nmcli con add type team-slave con-name team0-port2 ifname eno33554960 master team0
![添加从设备到 team](http://blog.linoxide.com/wp-content/uploads/2015/01/adding-to-team.png)
再次用命令 **# nmcli con show** 验证连接配置。现在我们可以看到从设备配置信息。
# nmcli con show
![显示从设备配置](http://blog.linoxide.com/wp-content/uploads/2015/01/show-slave-config.png)
### 3. 分配 IP 地址 ###
上面的命令会在 **/etc/sysconfig/network-scripts/** 目录下创建需要的配置文件。
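如果想确认这些配置文件确实已经生成,可以列一下这个目录(这里假设生成的文件名与连接名对应,形如 ifcfg-team0、ifcfg-team0-port1实际名称以你系统上生成的为准
# ls /etc/sysconfig/network-scripts/ifcfg-team0*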
现在让我们为 team0 接口分配一个 IP 地址并启用这个连接。这是进行 IP 分配的命令。
# nmcli con mod team0 ipv4.addresses "192.168.1.24/24 192.168.1.1"
# nmcli con mod team0 ipv4.method manual
# nmcli con up team0
![分配 ip](http://blog.linoxide.com/wp-content/uploads/2015/01/ip-assignment.png)
### 4. 验证绑定 ###
用 **# ip add show team0** 命令验证 IP 地址信息。
# ip add show team0
![验证 ip 地址](http://blog.linoxide.com/wp-content/uploads/2015/01/verfiy-ip-adress.png)
现在用 **teamdctl** 命令检查 **主动备份** 配置功能。
# teamdctl team0 state
![teamdctl 检查主动备份](http://blog.linoxide.com/wp-content/uploads/2015/01/teamdctl-activebackup-check.png)
现在让我们把激活的端口断开连接并再次检查状态来确认主动备份配置是否像希望的那样工作。
# nmcli dev dis eno33554960
![断开激活端口连接](http://blog.linoxide.com/wp-content/uploads/2015/01/disconnect-activeport.png)
断开激活端口后再次用命令 **#teamdctl team0 state** 检查状态。
# teamdctl team0 state
![teamdctl 检查断开激活端口连接](http://blog.linoxide.com/wp-content/uploads/2015/01/teamdctl-check-activeport-disconnect.png)
是的,它运行良好!!我们使用下面的命令重新连接 team0 中刚才断开的端口。
# nmcli dev con eno33554960
![nmcli dev 连接断开的连接](http://blog.linoxide.com/wp-content/uploads/2015/01/nmcli-dev-connect-disconected.png)
我们还可以使用 **teamnl** 命令来查看和设置组接口的一些选项。
用下面的命令检查在 team0 运行的端口。
# teamnl team0 ports
![teamnl 检查端口](http://blog.linoxide.com/wp-content/uploads/2015/01/teamnl-check-ports.png)
显示 **team0** 当前活动的端口。
# teamnl team0 getoption activeport
![显示 team0 活动端口](http://blog.linoxide.com/wp-content/uploads/2015/01/display-active-port-team0.png)
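另外,如果想查看 team0 当前生效的完整 JSON 配置,可以试试 teamdctl 的 config dump 子命令(输出内容取决于你的实际配置):
# teamdctl team0 config dump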
好了,我们已经成功地配置了网卡绑定 :-) ,如果有任何反馈,请告诉我们。
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-command/interface-nics-bonding-linux/
作者:[Arun Pyasi][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/

View File

@ -0,0 +1,236 @@
搭建一个私有的Docker registry
================================================================================
![](http://cocoahunter.com/content/images/2015/01/docker2.jpg)
[TL;DR] 这是本系列的第二篇文章这个系列讲述了我的公司如何把基础服务从PaaS迁移到Docker上
- [第一篇文章][1]: 我谈到了接触Docker之前的经历
- [第三篇文章][2]: 我展示如何使创建镜像的过程自动化以及如何用Docker部署一个Rails应用。
----------
为什么需要搭建一个私有的registry呢对于新手来说Docker Hub一个Docker公共仓库只允许你拥有一个免费的私有版本库repo。其他的公司也开始提供类似服务但是价格可不便宜。另外如果你需要用Docker部署一个用于生产环境的应用恐怕你不希望将这些镜像放在公开的Docker Hub上吧
这篇文章提供了一个非常务实的方法来处理搭建私有Docker registry时出现的各种错综复杂的情况。我们将会使用一个运行于DigitalOcean之后简称为DO的非常小巧的 512MB VPS 实例。并且我会假定你已经了解了Docker的基本概念因为我必须把精力集中在复杂的事情上
###本地搭建###
首先你需要安装**boot2docker**以及docker CLI。如果你已经搭建好了基本的Docker环境你可以直接跳过这一步。
从终端运行以下命令我假设你使用OS X使用 HomeBrew 来安装相关软件,你可以根据你的环境使用不同的包管理软件来安装):
brew install boot2docker docker
如果一切顺利想要了解搭建docker环境的完整指南请参阅 [http://boot2docker.io/][10]),你现在就能够通过如下命令启动一个运行着 Docker 的虚拟机:
boot2docker up
按照屏幕显示的说明复制粘贴boot2docker在终端输出的命令设置好相应的环境变量。如果你现在运行`docker ps`命令,终端将有以下显示。
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
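顺带一提,如果不想手工复制粘贴那些环境变量设置命令,也可以让 shell 直接执行 boot2docker 输出的内容(这里假设你的 boot2docker 版本提供 shellinit 子命令,不同版本的用法可能略有差异):
$ eval "$(boot2docker shellinit)"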
好了Docker已经准备就绪这就够了我们回过头去搭建registry。
###创建服务器###
登录进你的DO账号选择一个预安装了Docker的镜像创建一个新的Droplet。本文写成时选择的是 Image > Applications > Docker 1.4.1 on 14.04
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-18-26-14.png)
你将会以邮件的方式收到一个根用户凭证。登录进去,然后运行`docker ps`命令来查看系统状态。
### 搭建AWS S3 ###
我们现在将使用Amazon Simple Storage ServiceS3作为我们registry/repository的存储层。我们将需要创建一个桶(bucket)以及用户凭证user credentials来允许我们的docker容器访问它。
登录到我们的AWS账号如果没有就申请一个[http://aws.amazon.com/][5]在控制台选择S3Simple Storage Service。
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-29-21.png)
点击 **Create Bucket**,为你的桶输入一个名字(把它记下来,我们一会需要用到它),然后点击**Create**。
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-22-50.png)
OK我们已经搭建好存储部分了。
### 设置AWS访问凭证###
我们现在将要创建一个新的用户。退回到AWS控制台然后选择IAMIdentity & Access Management。
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-29-08.png)
在dashboard的左边点击Users。然后选择 **Create New Users**
如图所示:
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-31-42.png)
输入一个用户名(例如 docker-registry然后点击Create。写下或者下载csv文件你的Access Key以及Secret Access Key。回到你的用户列表然后选择你刚刚创建的用户。
在Permission section下面点击Attach User Policy。之后在下一屏选择Custom Policy。
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-41-21.png)
custom policy的内容如下
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SomeStatement",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::docker-registry-bucket-name/*",
                "arn:aws:s3:::docker-registry-bucket-name"
            ]
        }
    ]
}
这个配置将允许用户也就是registry来对桶上的内容进行操作读/写确保使用你之前创建AWS S3桶时使用的桶名。总结一下当你想把你的Docker镜像从你的本机推送到仓库中时服务器就会将它们上传到S3。
### 安装registry ###
现在回过头来看我们的DO服务器SSH登录其上。我们将要[使用][11]一个[官方Docker registry镜像][6]。
输入如下命令开启registry。
docker run \
-e SETTINGS_FLAVOR=s3 \
-e AWS_BUCKET=bucket-name \
-e STORAGE_PATH=/registry \
-e AWS_KEY=your_aws_key \
-e AWS_SECRET=your_aws_secret \
-e SEARCH_BACKEND=sqlalchemy \
-p 5000:5000 \
--name registry \
-d \
registry
Docker将会从Docker Hub上拉取所需的文件系统分层fs layers并启动守护容器daemonised container。
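如果想确认 registry 容器确实已经跑起来了,可以在服务器上顺手看一下容器状态和日志(其中的容器名 registry 来自上面命令里的 --name 参数):
docker ps
docker logs registry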
### 测试registry ###
如果上述操作奏效,你可以通过 ping 它即访问其_ping接口或者查找它的内容来测试registry虽然这个时候容器里还是空的
我们的registry非常基础而且没有提供任何“验明正身”的方式。因为添加身份验证可不是一件轻松事至少在我看来没有哪种部署方法简单到值得去折腾我觉得“查询/拉取/推送”仓库内容的最简单方法就是通过SSH通道建立未加密的HTTP连接。
打开SSH通道的操作非常简单
ssh -N -L 5000:localhost:5000 root@your_registry.com
这条命令建立了一条从registry服务器的5000号端口前面执行`docker run`命令的时候我们见过这个端口到本机的5000号端口之间的 SSH 管道连接。
如果你现在用浏览器访问 [http://localhost:5000/v1/_ping][7],将会看到下面这个非常简短的回复。
{}
这意味着registry工作正常。你还可以通过访问 [http://localhost:5000/v1/search][8] 来查看registry的内容返回的内容与下面相似
{
    "num_results": 2,
    "query": "",
    "results": [
        {
            "description": "",
            "name": "username/first-repo"
        },
        {
            "description": "",
            "name": "username/second-repo"
        }
    ]
}
### 创建一个镜像 ###
我们现在创建一个非常简单的Docker镜像来检验我们新弄好的registry。在我们的本机上用如下内容创建一个Dockerfile这里只有一点代码在下一篇文章里我将会展示给你如何将一个Rails应用打包进Docker容器中
# ruby 2.2.0 的基础镜像
FROM ruby:2.2.0
MAINTAINER Michelangelo Chasseur <michelangelo.chasseur@touchwa.re>
并创建它:
docker build -t localhost:5000/username/repo-name .
`localhost:5000`这个部分非常重要Docker镜像名的最前面一个部分将告知`docker push`命令我们将要把我们的镜像推送到哪里。在我们这个例子当中因为我们要通过SSH管道连接远程的私有registry`localhost:5000`精确地指向了我们的registry。
如果一切顺利,当命令执行完成返回后,你可以输入`docker images`命令来列出新近创建的镜像。执行它看看会出现什么现象?
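另外,如果你本机上已经有现成的镜像,也可以不重新构建,直接给它打上带 registry 前缀的标签再推送(下面的镜像名和仓库名只是示意):
docker tag my-existing-image localhost:5000/username/repo-name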
### 推送到仓库 ###
接下来是更好玩的部分。实现我所描述的东西着实花了我一点时间所以如果你第一次读的话就耐心一点吧跟着我一起操作。我知道接下来的东西会非常复杂如果你不自动化这个过程就一定会这样但是我保证到最后你一定都能明白。在下一篇文章里我将会使用到一大波shell脚本和Rake任务通过它们实现自动化并且用简单的命令实现部署Rails应用。
你在终端上运行的docker命令实际上都是使用boot2docker虚拟机来运行容器及各种东西。所以当你执行像`docker push some_repo`这样的命令时是boot2docker虚拟机在与registry交互而不是我们自己的机器。
接下来是一个非常重要的点为了将Docker镜像推送到远端的私有仓库SSH管道需要在boot2docker虚拟机上配置好而不是在你的本地机器上配置。
有许多种方法实现它。我给你展示最简短的一种(可能不是最容易理解的,但是能够帮助你实现自动化)
在这之前,我们需要对 SSH 做最后一点工作。
### 设置 SSH ###
让我们把boot2docker 的 SSH 公钥添加到远端服务器的授权密钥authorized_keys列表里面。我们可以使用ssh-copy-id工具完成通过下面的命令就可以安装上它了
brew install ssh-copy-id
然后运行:
ssh-copy-id -i /Users/username/.ssh/id_boot2docker root@your-registry.com
用你ssh key的真实路径代替`/Users/username/.ssh/id_boot2docker`。
这样做能够让我们免密码登录SSH。
现在我们来测试一下:
boot2docker ssh "ssh -o 'StrictHostKeyChecking no' -i /Users/michelangelo/.ssh/id_boot2docker -N -L 5000:localhost:5000 root@registry.touchwa.re &" &
分开阐述:
- `boot2docker ssh`允许你以参数的形式传递给boot2docker虚拟机一条执行的命令
- 最后面那个`&`表明这条命令将在后台执行;
- `ssh -o 'StrictHostKeyChecking no' -i /Users/michelangelo/.ssh/id_boot2docker -N -L 5000:localhost:5000 root@registry.touchwa.re &`是boot2docker虚拟机实际运行的命令
- `-o 'StrictHostKeyChecking no'`——不提示安全问题;
- `-i /Users/michelangelo/.ssh/id_boot2docker`指出虚拟机使用哪个SSH key来进行身份验证。注意这里的key应该是你前面添加到远程仓库的那个
- 最后我们将打开一条端口5000映射到localhost:5000的SSH通道。
### 从其他服务器上拉取 ###
你现在将可以通过下面的简单命令将你的镜像推送到远端仓库:
docker push localhost:5000/username/repo_name
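顺便一提,如果想在另一台装好 Docker 的机器上取回这个镜像,思路是一样的:先在那台机器上建立同样的 SSH 通道,再执行 pull下面的主机名和仓库名都只是示意
ssh -N -L 5000:localhost:5000 root@your_registry.com &
docker pull localhost:5000/username/repo_name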
在下一篇[文章][9]中我们将会了解到如何自动化处理这些事务并且真正地容器化一个Rails应用。请继续关注
如有错误请不吝指出。祝你Docker之路顺利
--------------------------------------------------------------------------------
via: http://cocoahunter.com/2015/01/23/docker-2/
作者:[Michelangelo Chasseur][a]
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://cocoahunter.com/author/michelangelo/
[1]:https://linux.cn/article-5339-1.html
[2]:http://cocoahunter.com/2015/01/23/docker-3/
[3]:http://cocoahunter.com/2015/01/23/docker-2/#fn:1
[4]:http://cocoahunter.com/2015/01/23/docker-2/#fn:2
[5]:http://aws.amazon.com/
[6]:https://registry.hub.docker.com/_/registry/
[7]:http://localhost:5000/v1/_ping
[8]:http://localhost:5000/v1/search
[9]:http://cocoahunter.com/2015/01/23/docker-3/
[10]:http://boot2docker.io/
[11]:https://github.com/docker/docker-registry/

View File

@ -53,11 +53,11 @@ Budgie是为Linux发行版定制的旗舰桌面也是一个定制工程。为
![安装 Budgie Desktop](http://blog.linoxide.com/wp-content/uploads/2015/02/install-budgie-desktop.png)
**注意**
**注意**
这是一个活跃的开发版本,一些主要的特点可能还不是特别的完善,如:网络管理器,为数不多的控制组件,无通知系统斌并且无法将app锁定到任务栏。
这是一个活跃的开发版本,一些主要的功能可能还不是特别的完善,如:没有网络管理器,没有音量控制组件(可以使用键盘控制),无通知系统并且无法将app锁定到任务栏。
作为工作区你能够禁用滚动栏,通过设置一个默认的主题并且通过下面的命令退出当前的会话
有一个临时解决方案可以禁用叠加滚动栏:设置另外一个默认主题,然后从终端退出当前会话:
$ gnome-session-quit
@ -65,7 +65,7 @@ Budgie是为Linux发行版定制的旗舰桌面也是一个定制工程。为
### 登录Budgie会话 ###
安装完成之后,我们能在登录时选择进入budgie桌面。
安装完成之后我们能在登录时选择进入budgie桌面。
![选择桌面会话](http://blog.linoxide.com/wp-content/uploads/2015/02/session-select.png)
@ -79,8 +79,7 @@ Budgie是为Linux发行版定制的旗舰桌面也是一个定制工程。为
### 结论 ###
Hurray! We have successfully installed our Lightweight Budgie Desktop Environment in our Ubuntu 14.04 LTS "Trusty" box. As we know, Budgie Desktop is still underdevelopment which makes it a lot of stuffs missing. Though its based on Gnomes GTK3, its not a fork. The desktop is written completely from scratch, and the design is elegant and well thought out. If you have any questions, comments, feedback please do write on the comment box below and let us know what stuffs needs to be added or improved. Thank You! Enjoy Budgie Desktop 0.8 :-)
Budgie桌面当前正在开发过程中因此有目前有很多功能的缺失。虽然它是基于Gnome但不是完全的复制。Budgie是完全从零开始实现它的设计是优雅的并且正在不断的完善。
嗨,现在我们已经成功的在 Ubuntu 14.04 LTS 上安装了轻量级 Budgie 桌面环境。Budgie桌面当前正在开发过程中因此有目前有很多功能的缺失。虽然它是基于Gnome 的 GTK3但不是完全的复制。Budgie是完全从零开始实现它的设计是优雅的并且正在不断的完善。如果你有任何问题、评论请在下面的评论框发表。愿你喜欢 Budgie 桌面 0.8 。
--------------------------------------------------------------------------------
@ -88,7 +87,7 @@ via: http://linoxide.com/ubuntu-how-to/install-lightweight-budgie-v8-desktop-ubu
作者:[Arun Pyasi][a]
译者:[johnhoow](https://github.com/johnhoow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -54,7 +54,7 @@ via: http://www.unixmen.com/install-mate-desktop-freebsd-10-1/
作者:[M.el Khamlichi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,12 +1,14 @@
如何记住并在下一次登录时还原正在运行的应用
如何在 Ubuntu 中再次登录时还原上次运行的应用
================================================================================
在你的 Ubuntu 里,你正运行着某些应用,但并不想停掉它们的进程,只想管理一下窗口,并打开那些工作需要的应用。接着,某些其他的事需要你转移注意力或你的机器电量低使得你必须马上关闭电脑。(幸运的是,)你可以让 Ubuntu 记住所有你正运行的应用并在你下一次登录时还原它们。
在你的 Ubuntu 里,如果你需要处理一些工作,你并不需要关闭正运行着的那些应用,只需要管理一下窗口,并打开那些工作需要的应用就行。然而,如果你需要离开处理些别的事情或你的机器电量低使得你必须马上关闭电脑,这些程序可能就需要关闭终止了。不过幸运的是,你可以让 Ubuntu 记住所有你正运行的应用并在你下一次登录时还原它们。
现在,为了让我们的 Ubuntu 记住当前会话中正运行的应用并在我们下一次登录时还原它们,我们将会使用到 `dconf-editor`。这个工具代替了前一个 Ubuntu 版本里安装的 `gconf-editor`,但默认情况下并没有在现在这个 Ubuntu 版本(注:这里指的是 Ubuntu 14.04 LTS) 里安装。为了安装 `dconf-editor` 你需要运行 `sudo apt-get install dconf-editor`命令:
###自动保存会话
现在,为了让我们的 Ubuntu 记住当前会话中正运行的应用并在我们下一次登录时还原它们,我们将会使用到 `dconf-editor`。这个工具代替了前一个 Ubuntu 版本里安装的 `gconf-editor`,但默认情况下现在这个 Ubuntu 版本(注:这里指的是 Ubuntu 14.04 LTS) 并没有安装。为了安装 `dconf-editor` 你需要运行 `sudo apt-get install dconf-editor`命令:
$ sudo apt-get install dconf-tools
一旦 `dconf-editor` 安装完毕,你就可以从应用菜单(注:这里指的是 Unity Dash)里打开它或者你可以通过直接在终端里或使用 `alt+f2` 运行下面的命令来启动它:
一旦 `dconf-editor` 安装完毕,你就可以从应用菜单(注:这里指的是 Unity Dash里打开它或者你可以通过直接在终端里运行或使用 `alt+f2` 运行下面的命令来启动它:
$ dconf-editor
@ -22,7 +24,7 @@
![dconf-editor selecting auto save session](http://blog.linoxide.com/wp-content/uploads/2015/01/dconf-editor_selecting_auto_save_session.png)
在你检查或对刚才的选项打钩之后,点击默认情况下位于窗口左上角的关闭按钮(X)来关闭 “Dconf Editor”。
在你确认对刚才的选项打钩之后,点击默认情况下位于窗口左上角的关闭按钮(X)来关闭 “Dconf Editor”。
![dconf-editor closing dconf editor](http://blog.linoxide.com/wp-content/uploads/2015/01/dconf-editor_closing_dconf_editor.png)
@ -30,6 +32,10 @@
欢呼吧,我们已经成功地配置了我们的 Ubuntu 14.04 LTS "Trusty" 来自动记住我们上一次会话中正在运行的应用。
除了关机后恢复应用之外,还可以通过休眠来达成类似的功能。
###休眠功能
现在,在这个教程里,我们也将学会 **如何在 Ubuntu 14.04 LTS 里开启休眠功能** :
在开始之前,在键盘上按 `Ctrl+Alt+T` 来开启终端。在它开启以后,运行:
@ -38,15 +44,15 @@
在你的电脑关闭后,再重新开启它。这时,你开启的应用被重新打开了吗?如果休眠功能没有发挥作用,请检查你的交换分区大小,它至少要和你可用 RAM 大小相当。
你可以在系统监视器里查看你的交换分区大小,而系统监视器可以通过在应用菜单或在终端里运行下面的命令来开启:
你可以在系统监视器里查看你的交换分区大小系统监视器可以通过在应用菜单或在终端里运行下面的命令来开启:
$ gnome-system-monitor
### 在系统托盘里启用休眠功能: ###
#### 在系统托盘里启用休眠功能: ####
提示模块是通过使用 logind 而不是使用 upower 来更新的。默认情况下,在 upower 和 logind 中,休眠都被禁用了。
系统托盘里面的会话指示器现在使用 logind 而不是 upower 了。默认情况下,在 upower 和 logind 中,休眠菜单都被禁用了。
为了开启休眠功能,依次运行下面的命令来编辑配置文件:
为了开启它的休眠菜单,依次运行下面的命令来编辑配置文件:
sudo -i
@ -70,26 +76,27 @@
重启你的电脑就可以了。
### 当你盖上笔记本的后盖时,让它休眠: ###
#### 当你盖上笔记本的后盖时,让它休眠: ####
1.通过下面的命令编辑文件 “/etc/systemd/logind.conf” :
1. 通过下面的命令编辑文件 “/etc/systemd/logind.conf” :
$ sudo nano /etc/systemd/logind.conf
$ sudo nano /etc/systemd/logind.conf
2. 将 **#HandleLidSwitch=suspend** 这一行改为 **HandleLidSwitch=hibernate** 并保存文件;
2. 将 **#HandleLidSwitch=suspend** (挂起)这一行改为 **HandleLidSwitch=hibernate** (休眠)并保存文件;
3. 运行下面的命令或重启你的电脑来应用更改:
$ sudo restart systemd-logind
$ sudo restart systemd-logind
就是这样。 成功了吗?现在我们设置了 dconf 并开启了休眠功能 :) 这样,无论你是关机还是直接合上笔记本盖子,你的 Ubuntu 将能够完全记住你开启的应用和窗口了。
就是这样。享受吧!现在我们有了 dconf 并开启了休眠功能 :) 你的 Ubuntu 将能够完全记住你开启的应用和窗口了。
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/remember-running-applications-ubuntu/
作者:[Arun Pyasi][a]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
Moving to Docker
走向 Docker
================================================================================
![](http://cocoahunter.com/content/images/2015/01/docker1.jpeg)
@ -8,11 +8,11 @@ Moving to Docker
上个月我一直在折腾开发环境。这是我个人故事和经验关于尝试用Docker简化Rails应用的部署过程。
当我在2012年创建我的公司 [Touchware][1]时,我还是一个独立开发者。很多事情很小,不复杂,他们需要很多维护,他们也不需要部署到很多机器上。经过过去一年的发展我们成长了很多我们现在是是拥有10个人团队而且我们的服务端的程序和API无论在范围和规模方面都有增长。
当我在2012年创建我的公司 [Touchware][1]时,我还是一个独立开发者。很多事情很小,不复杂,他们需要很多维护他们也不需要部署到很多机器上。经过过去一年的发展我们成长了很多我们现在是是拥有10个人团队而且我们的服务端的程序和API无论在范围和规模方面都有增长。
### 第1步 - Heroku ###
我们还是个小公司我们需要让事情运行地尽可能平稳。当我们寻找可行的解决方案时我们打算坚持用那些可以帮助我们减轻对硬件依赖负担的工具。由于我们主要开发Rails应用而Heroku对RoR常用的数据库和缓存Postgres/Mongo/Redis等有很好的支持最明智的选择就是用[Heroku][2] 。我们就是这样做的。
我们还是个小公司我们需要让事情运行地尽可能平稳。当我们寻找可行的解决方案时我们打算坚持用那些可以帮助我们减轻对硬件依赖负担的工具。由于我们主要开发Rails应用而Heroku对RoR常用的数据库和缓存Postgres/Mongo/Redis等有很好的支持最明智的选择就是用[Heroku][2] 。我们就是这样做的。
Heroku有很好的技术支持和文档使得部署非常轻松。唯一的问题是当你处于起步阶段你需要很多开销。这不是最好的选择真的。
@ -20,18 +20,18 @@ Heroku有很好的技术支持和文档使得部署非常轻松。唯一的
为了尝试并降低成本我们决定试试Dokku。[Dokku][3],引用GitHub上的一句话
> Docker powered mini-Heroku in around 100 lines of Bash
> Docker 驱动的 mini-Heroku只用了一百来行的 bash 脚本
我们启用的[DigitalOcean][4]上的很多台机器都预装了Dokku。Dokku非常像Heroku但是当你有复杂的项目需要调整配置参数或者是需要特殊的依赖时它就不能胜任了。我们有一个应用它需要对图片进行多次转换我们无法安装一个适合版本的imagemagick到托管我们Rails应用的基于Dokku的Docker容器内。尽管我们还有很多应用运行在Dokku上但我们还是不得不把一些迁移回Heroku。
我们启用的[DigitalOcean][4]上的很多台机器都预装了Dokku。Dokku非常像Heroku但是当你有复杂的项目需要调整配置参数或者是需要特殊的依赖时它就不能胜任了。我们有一个应用它需要对图片进行多次转换我们把我们Rails应用的托管到基于Dokku的Docker容器但是无法安装一个适合版本的imagemagick到里面。尽管我们还有很多应用运行在Dokku上但我们还是不得不把一些迁移回Heroku。
### 第3步 - Docker ###
几个月前由于开发环境和生产环境的问题重新出现我决定试试Docker。简单来说Docker让开发者容器化应用简化部署。由于一个Docker容器本质上已经包含项目运行所需要的所有依赖只要它能在你的笔记本上运行地很好你就能确保它将也能在任何一个别的远程服务器的生产环境上运行包括Amazon的EC2和DigitalOcean上的VPS。
几个月前由于开发环境和生产环境的问题重新出现我决定试试Docker。简单来说Docker让开发者容器化应用简化部署。由于一个Docker容器本质上已经包含项目运行所需要的所有依赖只要它能在你的笔记本上运行地很好你就能确保它将也能在任何一个别的远程服务器的生产环境上运行包括Amazon的EC2和DigitalOcean上的VPS。
Docker IMHO特别有意思的原因是:
就我个人的看法来说,Docker 特别有意思的原因是:
- 它促进了模块化和分离关注点你只需要去考虑应用的逻辑部分负载均衡1个容器数据库1个容器web服务器1个容器
- 在部署的配置上非常灵活:容器可以被部署在大量的HW上也可以容易地重新部署在不同的服务器或者提供商那
- 它促进了模块化和关注点分离你只需要去考虑应用的逻辑部分负载均衡1个容器数据库1个容器web服务器1个容器
- 在部署的配置上非常灵活:容器可以被部署在各种硬件上,也可以容易地重新部署在不同的服务器和不同的提供商
- 它允许非常细粒度地优化应用的运行环境:你可以利用你的容器来创建镜像,所以你有很多选择来配置环境。
它也有一些缺点:
@ -54,15 +54,15 @@ via: http://cocoahunter.com/2015/01/23/docker-1/
作者:[Michelangelo Chasseur][a]
译者:[mtunique](https://github.com/mtunique)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://cocoahunter.com/author/michelangelo/
[1]:http://www.touchwa.re/
[2]:http://cocoahunter.com/2015/01/23/docker-1/www.heroku.com
[2]:http://www.heroku.com
[3]:https://github.com/progrium/dokku
[4]:http://cocoahunter.com/2015/01/23/docker-1/www.digitalocean.com
[4]:http://www.digitalocean.com
[5]:http://www.docker.com/
[6]:http://cocoahunter.com/2015/01/23/docker-2/
[7]:http://cocoahunter.com/2015/01/23/docker-3/
@ -78,4 +78,3 @@ via: http://cocoahunter.com/2015/01/23/docker-1/
[17]:
[18]:
[19]:
[20]:

View File

@ -1,39 +1,42 @@
Linux Shell脚本 入门25问
Linux Shell脚本面试25问
================================================================================
### Q:1 Shell脚本是什么、为什么它是必需的吗? ###
### Q:1 Shell脚本是什么、它是必需的吗? ###
答:一个Shell脚本是一个文本文件,包含一个或多个命令。作为系统管理员,我们经常需要发出多个命令来完成一项任务,我们可以添加这些所有命令在一个文本文件(Shell脚本)来完成日常工作任务。
答:一个Shell脚本是一个文本文件,包含一个或多个命令。作为系统管理员,我们经常需要使用多个命令来完成一项任务,我们可以添加这些所有命令在一个文本文件(Shell脚本)来完成这些日常工作任务。
### Q:2 什么是默认登录shell如何改变指定用户的登录shell ###
答:在Linux操作系统“/ bin / bash”是默认登录shell,在用户创建时被分配的。使用chsh命令可以改变默认的shell。示例如下所示:
答:在Linux操作系统“/bin/bash”是默认登录shell是在创建用户时分配的。使用chsh命令可以改变默认的shell。示例如下所示:
# chsh <username> -s <new_default_shell>
# chsh <用户名> -s <shell>
# chsh linuxtechi -s /bin/sh
### Q:3 有什么不同的类型在shell脚本中使用? ###
### Q:3 可以在shell脚本中使用哪些类型的变量? ###
在shell脚本我们可以使用两个类型变量:
在shell脚本我们可以使用两种类型的变量:
- 系统定义变量
- 用户定义变量
系统变量是由系统系统自己创建的。这些变量通常由大写字母组成,可以通过“**set**”命令查看。
系统变量是由系统系统自己创建的。这些变量由大写字母组成,可以通过“**set**”命令查看。
用户变量由系统用户来生成和定义,变量的值可以通过命令“`echo $<变量名>`”查看。
用户变量由系统用户来生成,变量的值可以通过命令“`echo $<变量名>`”查看
### Q:4 如何同时重定向标准输出和错误输出到同一位置? ###
### Q:4 如何将标准输出和错误输出同时重定向到同一位置? ###
答:这里有两个方法来实现:
方法12>&1 (# ls /usr/share/doc > out.txt 2>&1 )
方法一:
方法二:&> (# ls /usr/share/doc &> out.txt )
2>&1 (如# ls /usr/share/doc > out.txt 2>&1 )
### Q:5 shell脚本中“if”的语法 ? ###
方法二:
答:基础语法:
&> (如# ls /usr/share/doc &> out.txt )
### Q:5 shell脚本中“if”语法如何嵌套? ###
答:基础语法如下:
if [ 条件 ]
then
@ -72,9 +75,9 @@ Linux Shell脚本 入门25问
如果结束状态不是0说明命令执行失败。
### Q:7 在shell脚本中如何比较两个数 ? ###
### Q:7 在shell脚本中如何比较两个数 ? ###
答:测试用例使用if-then来比较两个数,例子如下:
答:在if-then中使用测试命令 -gt 等)来比较两个数字,例子如下:
#!/bin/bash
x=10
@ -89,11 +92,11 @@ Linux Shell脚本 入门25问
### Q:8 shell脚本中break命令的作用 ? ###
break命令一个简单的用途是退出执行中的循环。我们可以在while 和until循环中使用break命令跳出循环。
break命令一个简单的用途是退出执行中的循环。我们可以在while和until循环中使用break命令跳出循环。
### Q:9 shell脚本中continue命令的作用 ? ###
continue命令不同于break命令它只跳出当前循环的迭代而不是整个循环。continue命令很多时候是很有用的例如错误发生但我们依然希望循环继续的时候。
continue命令不同于break命令它只跳出当前循环的迭代而不是**整个**循环。continue命令很多时候是很有用的例如错误发生但我们依然希望继续执行大循环的时候。
### Q:10 告诉我shell脚本中Case语句的语法 ? ###
@ -116,14 +119,14 @@ Linux Shell脚本 入门25问
### Q:11 shell脚本中while循环语法 ? ###
如同for循环while循环重复自己所有命令只要条件成立不同于for循环。基础语法:
如同for循环while循环只要条件成立就重复它的命令块。不同于for循环while循环会不断迭代直到它的条件不为真。基础语法:
while [ 条件 ]
do
命令…
done
### Q:12 如何使脚本成为可执行状态 ? ###
### Q:12 如何使脚本可执行 ? ###
使用chmod命令来使脚本可执行。例子如下
@ -131,11 +134,11 @@ Linux Shell脚本 入门25问
### Q:13 “#!/bin/bash”的作用 ? ###
答:#!/bin/bash是shell脚本的第一行总所周知,#符号调用hash而!调用bang。它的意思是命令使用 /bin/bash来执行命令
答:#!/bin/bash是shell脚本的第一行称为释伴shebang行。这里#符号叫做hash而! 叫做 bang。它的意思是命令通过 /bin/bash 来执行。
### Q:14 shell脚本中for循环语法 ? ###
for循环基础语法
for循环基础语法:
for 变量 in 循环列表
do
@ -147,13 +150,13 @@ Linux Shell脚本 入门25问
### Q:15 如何调试shell脚本 ? ###
答:使用'-x'参数sh -x myscript.sh可以调试shell脚本。另一个种方法是使用-nv参数( sh -nv myscript.sh)
答:使用'-x'参数sh -x myscript.sh可以调试shell脚本。另一个种方法是使用-nv参数( sh -nv myscript.sh)
### Q:16 shell脚本如何比较字符串? ###
test命令可以用来比较字符串。Test命令比较字符串通过比较每一个字符来比较。
test命令可以用来比较字符串。测试命令会通过比较字符串中的每一个字符来比较。
### Q:17 Bourne shell(bash) 中有哪些特变量 ? ###
### Q:17 Bourne shell(bash) 中有哪些特殊的变量 ? ###
下面的表列出了Bourne shell为命令行设置的特殊变量。
@ -175,7 +178,7 @@ Linux Shell脚本 入门25问
<p align="left" class="western">$0</p>
</td>
<td width="453" valign="top">
<p align="left" class="western">来自命令行脚本名字</p>
<p align="left" class="western">命令行中的脚本名字</p>
</td>
</tr>
<tr>
@ -252,7 +255,7 @@ Linux Shell脚本 入门25问
<p align="left" class="western">-d 文件名</p>
</td>
<td width="453">
<p align="left" class="western">返回true如果文件存在并且是一个目录</p>
<p align="left" class="western">如果文件存在并且是目录返回true</p>
</td>
</tr>
<tr valign="top">
@ -260,7 +263,7 @@ Linux Shell脚本 入门25问
<p align="left" class="western">-e 文件名</p>
</td>
<td width="453">
<p align="left" class="western">返回true如果文件存在</p>
<p align="left" class="western">如果文件存在返回true</p>
</td>
</tr>
<tr valign="top">
@ -268,7 +271,7 @@ Linux Shell脚本 入门25问
<p align="left" class="western">-f 文件名</p>
</td>
<td width="453">
<p align="left" class="western">返回true如果文件存在并且是普通文件</p>
<p align="left" class="western">如果文件存在并且是普通文件返回true</p>
</td>
</tr>
<tr valign="top">
@ -276,7 +279,7 @@ Linux Shell脚本 入门25问
<p align="left" class="western">-r 文件名</p>
</td>
<td width="453">
<p align="left" class="western">返回true如果文件存在并拥有读权限</p>
<p align="left" class="western">如果文件存在并可读返回true</p>
</td>
</tr>
<tr valign="top">
@ -284,7 +287,7 @@ Linux Shell脚本 入门25问
<p align="left" class="western">-s 文件名</p>
</td>
<td width="453">
<p align="left" class="western">返回true如果文件存在并且不为空</p>
<p align="left" class="western">如果文件存在并且不为空返回true</p>
</td>
</tr>
<tr valign="top">
@ -292,7 +295,7 @@ Linux Shell脚本 入门25问
<p align="left" class="western">-w 文件名</p>
</td>
<td width="453">
<p align="left" class="western">返回true如果文件存在并拥有写权限</p>
<p align="left" class="western">如果文件存在并可写返回true</p>
</td>
</tr>
<tr valign="top">
@ -300,7 +303,7 @@ Linux Shell脚本 入门25问
<p align="left" class="western">-x 文件名</p>
</td>
<td width="453">
<p align="left" class="western">返回true如果文件存在并拥有执行权限</p>
<p align="left" class="western">如果文件存在并可执行返回true</p>
</td>
</tr>
</tbody>
@ -308,15 +311,15 @@ Linux Shell脚本 入门25问
### Q:19 在shell脚本中如何写入注释 ? ###
答:注释可以用来描述一个脚本可以做什么和它是如何工作的。每一注释以#开头。例子如下:
答:注释可以用来描述一个脚本可以做什么和它是如何工作的。每一注释以#开头。例子如下:
#!/bin/bash
# This is a command
echo “I am logged in as $USER”
### Q:20 如何得到来自终端的命令输入到shell脚本? ###
### Q:20 如何让 shell 就脚本得到来自终端的输入? ###
read命令可以读取来自终端使用键盘的数据。read命令接入用户的输入并置于变量中。例子如下:
read命令可以读取来自终端使用键盘的数据。read命令得到用户的输入并置于你给出的变量中。例子如下:
# vi /tmp/test.sh
@ -330,9 +333,9 @@ Linux Shell脚本 入门25问
LinuxTechi
My Name is LinuxTechi
### Q:21 如何取消设置或取消变量 ? ###
### Q:21 如何取消变量或取消变量赋值 ? ###
“unset”命令用于取消或取消设置一个变量。语法如下所示:
“unset”命令用于取消变量或取消变量赋值。语法如下所示:
# unset <变量名>
@ -345,7 +348,7 @@ Linux Shell脚本 入门25问
### Q:23 do-while语句的基本格式 ? ###
do-while语句类似于while语句但检查条件语句之前先执行命令。下面是用do-while语句的语法
do-while语句类似于while语句但检查条件语句之前先执行命令LCTT 译注:意即至少执行一次。)。下面是用do-while语句的语法
do
{
@ -354,7 +357,7 @@ Linux Shell脚本 入门25问
### Q:24 在shell脚本如何定义函数呢 ? ###
答:函数是拥有名字的代码块。当我们定义代码块,我们就可以在我们的脚本调用名字,该块就会被执行。示例如下所示:
答:函数是拥有名字的代码块。当我们定义代码块,我们就可以在我们的脚本调用函数名字,该块就会被执行。示例如下所示:
$ diskusage () { df -h ; }
@ -371,7 +374,7 @@ Linux Shell脚本 入门25问
### Q:25 如何在shell脚本中使用BCbash计算器 ? ###
使用下列格式在shell脚本中使用bc
使用下列格式在shell脚本中使用bc
variable=`echo “options; expression” | bc`
@ -381,7 +384,7 @@ via: http://www.linuxtechi.com/linux-shell-scripting-interview-questions-answers
作者:[Pradeep Kumar][a]
译者:[VicYu/Vic020](http://vicyu.net)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,24 +1,22 @@
===>> boredivan翻译中 <<===
怎样在CentOS 7.0上安装/配置VNC服务器
怎样在CentOS 7.0上安装和配置VNC服务器
================================================================================
这是一个关于怎样在你的 CentOS 7 上安装配置 [VNC][1] 服务的教程。当然这个教程也适合 RHEL 7 。在这个教程里我们将学习什么是VNC以及怎样在 CentOS 7 上安装配置 [VNC 服务器][1]。
这是一个关于怎样在你的 CentOS 7 上安装配置 [VNC][1] 服务的教程。当然这个教程也适合 RHEL 7 。在这个教程里,我们将学习什么是 VNC 以及怎样在 CentOS 7 上安装配置 [VNC 服务器][1]。
我们都知道,作为一个系统管理员,大多数时间是通过网络管理服务器的。在管理服务器的过程中很少会用到图形界面,多数情况下我们只是用 SSH 来完成我们的管理任务。在这篇文章里,我们将配置 VNC 来提供一个连接我们 CentOS 7 服务器的方法。VNC 允许我们开启一个远程图形会话来连接我们的服务器,这样我们就可以通过网络远程访问服务器的图形界面了。
VNC 服务器是一个自由开源软件,它可以让用户可以远程访问服务器的桌面环境。另外连接 VNC 服务器需要使用 VNC viewer 这个客户端。
VNC 服务器是一个自由开源软件,它可以让用户可以远程访问服务器的桌面环境。另外连接 VNC 服务器需要使用 VNC viewer 这个客户端。
** 一些 VNC 服务器的优点:**
远程的图形管理方式让工作变得简单方便。
剪贴板可以在 CentOS 服务器主机和 VNC 客户端机器之间共享。
CentOS 服务器上也可以安装图形工具,让管理能力变得更强大。
只要安装了 VNC 客户端,任何操作系统都可以管理 CentOS 服务器了。
比 ssh 图形和 RDP 连接更可靠。
- 远程的图形管理方式让工作变得简单方便。
- 剪贴板可以在 CentOS 服务器主机和 VNC 客户端机器之间共享。
- CentOS 服务器上也可以安装图形工具,让管理能力变得更强大。
- 只要安装了 VNC 客户端,通过任何操作系统都可以管理 CentOS 服务器了。
- 比 ssh 图形转发和 RDP 连接更可靠。
那么,让我们开始安装 VNC 服务器之旅吧。我们需要按照下面的步骤一步一步来搭建一个有效的 VNC。
那么,让我们开始安装 VNC 服务器之旅吧。我们需要按照下面的步骤一步一步来搭建一个可用的 VNC。
首先我们需要一个有效的桌面环境X-Window如果没有的话要先安装一个。
首先我们需要一个可用的桌面环境X-Window如果没有的话要先安装一个。
**注意:以下命令必须以 root 权限运行。要切换到 root 请在终端下运行“sudo -s”当然不包括双引号“”**
@ -34,7 +32,8 @@ VNC 服务器是一个自由且开源的软件,它可以让用户可以远程
#yum install gnome-classic-session gnome-terminal nautilus-open-terminal control-center liberation-mono-fonts
![install gnome classic session](http://blog.linoxide.com/wp-content/uploads/2015/01/gnome-classic-session-install.png)
### 设置默认启动图形界面
# unlink /etc/systemd/system/default.target
# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target
@ -56,13 +55,13 @@ VNC 服务器是一个自由且开源的软件,它可以让用户可以远程
### 3. 配置 VNC ###
然后,我们需要在 **/etc/systemd/system/** 目录里创建一个配置文件。我们可以从 **/lib/systemd/sytem/vncserver@.service** 拷贝一份配置文件范例过来。
然后,我们需要在 `/etc/systemd/system/` 目录里创建一个配置文件。我们可以将 `/lib/systemd/sytem/vncserver@.service` 拷贝一份配置文件范例过来。
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
![copying vnc server configuration](http://blog.linoxide.com/wp-content/uploads/2015/01/copying-configuration.png)
接着我们用自己最喜欢的编辑器(这儿我们用的 **nano** )打开 **/etc/systemd/system/vncserver@:1.service** ,找到下面这几行,用自己的用户名替换掉 <USER> 。举例来说,我的用户名是 linoxide 所以我用 linoxide 来替换掉 <USER>
接着我们用自己最喜欢的编辑器(这儿我们用的 **nano** )打开 `/etc/systemd/system/vncserver@:1.service` ,找到下面这几行,用自己的用户名替换掉 <USER> 。举例来说,我的用户名是 linoxide 所以我用 linoxide 来替换掉 <USER>
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver %i"
PIDFile=/home/<USER>/.vnc/%H%i.pid
@ -83,8 +82,7 @@ VNC 服务器是一个自由且开源的软件,它可以让用户可以远程
# systemctl daemon-reload
Finally, we'll create VNC password for the user . To do so, first you'll need to be sure that you have sudo access to the user, here I will login to user "linoxide" then, execute the following. To login to linoxide we'll run "**su linoxide" without quotes** .
最后还要设置一下用户的 VNC 密码。要设置某个用户的密码,必须要获得该用户的权限,这里我用 linoxide 的权限,执行“**su linoxide**”就可以了。
最后还要设置一下用户的 VNC 密码。要设置某个用户的密码,必须要有能通过 sudo 切换到用户的权限,这里我用 linoxide 的权限,执行“`su linoxide`”就可以了。
# su linoxide
$ sudo vncpasswd
@ -112,7 +110,7 @@ Finally, we'll create VNC password for the user . To do so, first you'll need to
![allowing firewalld](http://blog.linoxide.com/wp-content/uploads/2015/01/allowing-firewalld.png)
现在就可以用 IP 和端口号(例如 192.168.1.1:1 ,这里的端口不是服务器的端口,而是视 VNC 连接数的多少从1开始排序——译注)来连接 VNC 服务器了。
现在就可以用 IP 和端口号(LCTT 译注:例如 192.168.1.1:1 ,这里的端口不是服务器的端口,而是视 VNC 连接数的多少从1开始排序来连接 VNC 服务器了。
### 6. 用 VNC 客户端连接服务器 ###
@ -122,33 +120,33 @@ Finally, we'll create VNC password for the user . To do so, first you'll need to
你可以用像 [Tightvnc viewer][3] 和 [Realvnc viewer][4] 的客户端来连接到服务器。
要用其他用户和端口连接 VNC 服务器请回到第3步添加一个新的用户和端口。你需要创建 **vncserver@:2.service** 并替换配置文件里的用户名和之后步骤里响应的文件名、端口号。**请确保你登录 VNC 服务器用的是你之前配置 VNC 密码的时候使用的那个用户名**
要用更多的用户连接需要创建配置文件和端口请回到第3步添加一个新的用户和端口。你需要创建 `vncserver@:2.service` 并替换配置文件里的用户名和之后步骤里相应的文件名、端口号。**请确保你登录 VNC 服务器用的是你之前配置 VNC 密码的时候使用的那个用户名**
VNC 服务本身使用的是5900端口。鉴于有不同的用户使用 VNC ,每个人的连接都会获得不同的端口。配置文件名里面的数字告诉 VNC 服务器把服务运行在5900的子端口上。在我们这个例子里第一个 VNC 服务会运行在59015900 + 1端口上之后的依次增加运行在5900 + x 号端口上。其中 x 是指之后用户的配置文件名 `vncserver@:x.service` 里面的 x 。
在建立连接之前,我们需要知道服务器的 IP 地址和端口。IP 地址是一台计算机在网络中的独特的识别号码。我的服务器的 IP 地址是96.126.120.92VNC 用户端口是1。
VNC 服务本身使用的是5900端口。鉴于有不同的用户使用 VNC ,每个人的连接都会获得不同的端口。配置文件名里面的数字告诉 VNC 服务器把服务运行在5900的子端口上。在我们这个例子里第一个 VNC 服务会运行在59015900 + 1端口上之后的依次增加运行在5900 + x 号端口上。其中 x 是指之后用户的配置文件名 **vncserver@:x.service** 里面的 x 。
在建立连接之前,我们需要知道服务器的 IP 地址和端口。IP 地址是一台计算机在网络中的独特的识别号码。我的服务器的 IP 地址是96.126.120.92VNC 用户端口是1。执行下面的命令可以获得服务器的公网 IP 地址。
执行下面的命令可以获得服务器的公网 IP 地址LCTT 译注:如果你的服务器放在内网或使用动态地址的话,可以这样获得其公网 IP 地址)。
# curl -s checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
### 总结 ###
好了,现在我们已经在运行 CentOS 7 / RHEL 7 (Red Hat Enterprises Linux)的服务器上安装配置好了 VNC 服务器。VNC 是自由开源软件中最简单的一种能实现远程控制服务器的一种工具,也是 Teamviewer Remote Access 的一款优秀的替代品。VNC 允许一个安装了 VNC 客户端的用户远程控制一台安装了 VNC 服务的服务器。下面还有一些经常使用的相关命令。好好玩!
好了,现在我们已经在运行 CentOS 7 / RHEL 7 的服务器上安装配置好了 VNC 服务器。VNC 是自由开源软件中最简单的一种能实现远程控制服务器的工具,也是一款优秀的 Teamviewer Remote Access 替代品。VNC 允许一个安装了 VNC 客户端的用户远程控制一台安装了 VNC 服务的服务器。下面还有一些经常使用的相关命令。好好玩!
#### 其他命令: ####
- 关闭 VNC 服务。
# systemctl stop vncserver@:1.service
# systemctl stop vncserver@:1.service
- 禁止 VNC 服务开机启动。
# systemctl disable vncserver@:1.service
# systemctl disable vncserver@:1.service
- 关闭防火墙。
# systemctl stop firewalld.service
# systemctl stop firewalld.service
--------------------------------------------------------------------------------
@ -156,7 +154,7 @@ via: http://linoxide.com/linux-how-to/install-configure-vnc-server-centos-7-0/
作者:[Arun Pyasi][a]
译者:[boredivan](https://github.com/boredivan)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -5,27 +5,28 @@
数百万个网站用着 WordPress 这当然是有原因的。WordPress 是众多内容管理系统中对开发者最友好的,本质上说你可以用它做任何事情。不幸的是,每天都有些吓人的报告说某个主要的网站被黑了,或者某个重要的数据库被泄露了之类的,吓得人一愣一愣的。
如果你还没有安装 WordPress ,可以看下下面的文章。
在基于 Debian 的系统上:
- [How to install WordPress On Ubuntu][1]
- [如何在 Ubuntu 上安装 WordPress][1]
在基于 RPM 的系统上:
- [How to install wordpress On CentOS][2]
- [如何在 CentOS 上安装 WordPress][2]
我之前的文章 [How To Secure WordPress Website][3] 里面列出的**备忘录**为读者维护 WordPress 的安全提供了一点帮助。
我之前的文章 [ 如何安全加固 WordPress 站点][3] 里面列出的**备忘录**为读者维护 WordPress 的安全提供了一点帮助。
在这篇文章里面,我将说明 **wpscan** 的安装过程,以及怎样使用 wpscan 来锁定任何已知的会让你的站点变得易受攻击的插件和主题。还有怎样安装和使用一款免费的网络探索和攻击的安全扫描软件 **nmap** 。最后展示的是使用 **nikto** 的步骤。
在这篇文章里面,我将介绍 **wpscan** 的安装过程,以及怎样使用 wpscan 来定位那些已知的会让你的站点变得易受攻击的插件和主题。还有怎样安装和使用一款免费的网络探索和攻击的安全扫描软件 **nmap** 。最后展示的是使用 **nikto** 的步骤。
### 用 WPScan 测试 WordPress 中易受攻击的插件和主题 ###
**WPScan** 是一个 WordPress 黑盒安全扫描软件,用 Ruby 写成,它是专门用来寻找已知的 WordPress 的弱点的。它为安全专家和 WordPress 管理员提供了一条评估他们的 WordPress 站点的途径。它的基于开源代码,在 GPLv3 下发行。
### 下载和安装 WPScan ###
#### 下载和安装 WPScan ####
在我们开始安装之前,很重要的一点是要注意 wpscan 不能在 Windows 下工作,所以你需要使用一台 Linux 或者 OS X 的机器来完成下面的事情。如果你只有 Windows 的系统,拿你可以下载一个 Virtualbox 然后在虚拟机里面安装任何你喜欢的 Linux 发行版本。
WPScan 的源代码放在 Github 上,所以需要先安装 git。
WPScan 的源代码放在 Github 上,所以需要先安装 gitLCTT 译注:其实你也可以直接从 Github 上下载打包的源代码,而不必非得装 git
sudo apt-get install git
@ -44,7 +45,7 @@ git 装好了,我们就要安装 wpscan 的依赖包了。
现在 wpscan 装好了,我们就可以用它来搜索我们 WordPress 站点潜在的易受攻击的文件。wpcan 最重要的方面是它能列出不仅是插件和主题也能列出用户和缩略图的功能。WPScan 也可以用来暴力破解 WordPress —— 但这不是本文要讨论的内容。
#### 新 WPScan ####
#### 新 WPScan ####
ruby wpscan.rb --update
@ -95,7 +96,6 @@ git 装好了,我们就要安装 wpscan 的依赖包了。
列举主题和列举插件差不多,只要用"--enumerate t"就可以了。
ruby wpscan.rb --url http(s)://www.host-name.com --enumerate t
或者只列出易受攻击的主题:
@ -135,7 +135,7 @@ WPscan 也可以用来列举某个 WordPress 站点的用户和有效的登录
#### 列举 Timthumb 文件 ####
关于 WPscan ,我要说的最后一个功能是列举 timthub 相关的文件。近年来timthumb 已经成为攻击者眼里的一个普通的目标,因为无数的漏洞被找出来并发到论坛上、邮件列表等等地方。用下面的命令可以通过 wpscan 找出易受攻击的 timthub 文件:
关于 WPscan ,我要说的最后一个功能是列举 timthub (缩略图)相关的文件。近年来timthumb 已经成为攻击者眼里的一个常见目标,因为无数的漏洞被找出来并发到论坛上、邮件列表等等地方。用下面的命令可以通过 wpscan 找出易受攻击的 timthub 文件:
ruby wpscan.rb --url http(s)://www.host-name.com --enumerate tt
@ -143,11 +143,10 @@ WPscan 也可以用来列举某个 WordPress 站点的用户和有效的登录
**Nmap** 是一个开源的用于网络探索和安全审查方面的工具。它可以迅速扫描巨大的网络也可一单机使用。Nmap 用原始 IP 数据包通过不同寻常的方法判断网络里那些主机是正在工作的,那些主机上都提供了什么服务(应用名称和版本),是什么操作系统(以及版本),用的什么类型的防火墙,以及很多其他特征。
### 在 Debian 和 Ubuntu 上下载和安装 nmap ###
#### 在 Debian 和 Ubuntu 上下载和安装 nmap ####
要在基于 Debian 和 Ubuntu 的操作系统上安装 nmap ,运行下面的命令:
sudo apt-get install nmap
**输出样例**
@ -168,7 +167,7 @@ WPscan 也可以用来列举某个 WordPress 站点的用户和有效的登录
Processing triggers for man-db ...
Setting up nmap (5.21-1.1ubuntu1) ...
#### 个例子 ####
#### 个例子 ####
输出 nmap 的版本:
@ -182,7 +181,7 @@ WPscan 也可以用来列举某个 WordPress 站点的用户和有效的登录
Nmap version 5.21 ( http://nmap.org )
### 在 Centos 上下载和安装 nmap ###
#### 在 Centos 上下载和安装 nmap ####
要在基于 RHEL 的 Linux 上面安装 nmap ,输入下面的命令:
@ -227,7 +226,7 @@ WPscan 也可以用来列举某个 WordPress 站点的用户和有效的登录
Complete!
#### 举个比方 ####
#### 举个例子 ####
输出 nmap 版本号:
@ -239,7 +238,7 @@ WPscan 也可以用来列举某个 WordPress 站点的用户和有效的登录
#### 用 Nmap 扫描端口 ####
你可以用 nmap 来获得很多关于你的服务器的信息,它让你站在对你的网站不怀好意的人的角度看你自己的网站。
你可以用 nmap 来获得很多关于你的服务器的信息,它可以让你站在对你的网站不怀好意的人的角度看你自己的网站。
因此,请仅用它测试你自己的服务器或者在行动之前通知服务器的所有者。
@ -277,7 +276,7 @@ nmap 的作者提供了一个测试服务器:
sudo nmap -p port_number remote_host
扫描一个网络,找出那些服务器在线,分别运行了什么服务
扫描一个网络,找出哪些服务器在线,分别运行了什么服务。
这就是传说中的主机探索或者 ping 扫描:
@ -294,19 +293,19 @@ nmap 的作者提供了一个测试服务器:
MAC Address: 00:11:32:11:15:FC (Synology Incorporated)
Nmap done: 256 IP addresses (4 hosts up) scanned in 2.80 second
理解端口配置和如何发现你的服务器上的攻击的载体只是确保你的信息和你的 VPS 安全的第一步。
理解端口配置和如何发现你的服务器上的攻击目标只是确保你的信息和你的 VPS 安全的第一步。
### 用 Nikto 扫描你网站的缺陷 ###
[Nikto][4] 网络扫描器是一个开源的 web 服务器的扫描软件,它可以用来扫描 web 服务器上的恶意的程序和文件。Nikto 也可用来检查软件版本是否过期。Nikto 能进行简单而快速地扫描以发现服务器上危险的文件和程序。扫描结束后会给出一个日志文件。`
[Nikto][4] 网络扫描器是一个开源的 web 服务器的扫描软件,它可以用来扫描 web 服务器上的恶意的程序和文件。Nikto 也可用来检查软件版本是否过期。Nikto 能进行简单而快速地扫描以发现服务器上危险的文件和程序。扫描结束后会给出一个日志文件。`
### 在 Linux 服务器上下载和安装 Nikto ###
#### 在 Linux 服务器上下载和安装 Nikto ####
Perl 在 Linux 上是预先安装好的,所以你只需要从[项目页面][5]下载 nikto ,解压到一个目录里面,然后开始测试。
wget https://cirt.net/nikto/nikto-2.1.4.tar.gz
你可以用某个归档管理工具或者用下面这个命令,同时使用 tar 和 gzip 。
你可以用某个归档管理工具解包,或者如下同时使用 tar 和 gzip
tar zxvf nikto-2.1.4.tar.gz
cd nikto-2.1.4
@ -369,7 +368,7 @@ Perl 在 Linux 上是预先安装好的,所以你只需要从[项目页面][5]
**输出样例**
会有十分冗长的输出,可能一开始会让人感到困惑。许多 Nikto 的警报会返回 OSVDB 序号。这是开源缺陷数据库([http://osvdb.org/][6])的意思。你可以在 OSVDB 上找出相关缺陷的深入说明。
会有十分冗长的输出,可能一开始会让人感到困惑。许多 Nikto 的警报会返回 OSVDB 序号。这是由开源缺陷数据库([http://osvdb.org/][6])所指定。你可以在 OSVDB 上找出相关缺陷的深入说明。
$ nikto -h http://www.host-name.com
- Nikto v2.1.4
@ -402,7 +401,7 @@ Perl 在 Linux 上是预先安装好的,所以你只需要从[项目页面][5]
**Nikto** 是一个非常轻量级的通用工具。因为 Nikto 是用 Perl 写的,所以它可以在几乎任何服务器的操作系统上运行。
希望这篇文章能在你找你的 wordpress 站点的缺陷的时候给你一些提示。我之前的文章[怎样保护 WordPress 站点][7]记录了一个**清单**,可以让你保护你的 WordPress 站点的工作变得更简单。
希望这篇文章能在你检查 wordpress 站点的缺陷的时候给你一些提示。我之前的文章[如何安全加固 WordPress 站点][7]记录了一个**清单**,可以让你保护你的 WordPress 站点的工作变得更简单。
有想说的,留下你的评论。
@ -412,7 +411,7 @@ via: http://www.unixmen.com/scan-check-wordpress-website-security-using-wpscan-n
作者:[anismaj][a]
译者:[boredivan](https://github.com/boredivan)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,10 +1,10 @@
在Ubuntu14.10/Mint7上安装Gnome Flashback classical桌面
在Ubuntu14.10/Mint7上安装Gnome Flashback 经典桌面
================================================================================
如果你不喜欢现在的Unity桌面[Gnome Flashback][1]桌面环境是一个简单的并且很棒的选择,让你能找回曾经经典的桌面。
Gnome Flashback基于GTK3并提供与原先gnome桌面视觉上相似的界面。
gnome flashback的另一个改变是采用了源自mint和xface的MATE桌面但无论mint还是xface都是基于gtk2的。
Gnome Flashback的另一个改变是采用了源自mint和xface的MATE桌面但无论mint还是xface都是基于GTK2的。
### 安装 Gnome Flashback ###
@ -12,7 +12,7 @@ gnome flashback的另一个改变是采用了源自mint和xface的MATE桌面
$ sudo apt-get install gnome-session-flashback
然后注销到登录界面单击密码输入框右上角的徽标型按钮即可选择桌面环境。可供选择的有Gnome Flashback (Metacity) 会话模式和Gnome Flashback (Compiz)会话模式。
然后注销返回到登录界面单击密码输入框右上角的徽标型按钮即可选择桌面环境。可供选择的有Gnome Flashback (Metacity) 会话模式和Gnome Flashback (Compiz)会话模式。
Metacity更轻更快而Compiz则能带给你更棒的桌面效果。下面是我使用gnome flashback桌面的截图。
@ -24,17 +24,17 @@ Metacity更轻更快而Compiz则能带给你更棒的桌面效果。下面是
### 1. 安装 Gnome Tweak Tool ###
Gnome Tweak Tool能够帮助你定制比如字体、主题等那些在Unity桌面的控制中心十分困难或是不可能完成的任务。
Gnome Tweak Tool能够帮助你定制比如字体、主题等这些在Unity桌面的控制中心是十分困难几乎不可能完成的任务。
$ sudo apt-get install gnome-tweak-tool
启动按步骤 应用程序 > 系统工具 > 首选项 > Tweak Tool
启动按步骤 应用程序 > 系统工具 > 首选项 > Tweak Tool
### 2. 在面板上添加小应用 ###
默认的右键点击面板是没有效果的。你可以尝试在右键点击面板的同时按住键盘上的Alt+Super (win)键,这样定制面板的相关选项将会出现
默认的右键点击面板是没有效果的。你可以尝试在右键点击面板的同时按住键盘上的Alt+Super (win)键,这样就会出现定制面板的相关选项。
你可以修改或删除面板并在上面添加些小应用。在这个例子中我们移除了底部面板并用Plank dock来代替它的位置。
你可以修改或删除面板并在上面添加些小应用。在这个例子中我们移除了底部面板并用Plank dock来代替它的位置。
在顶部面板的中间添加一个显示时间的小应用。通过配置使它显示时间和天气。
@ -42,7 +42,7 @@ Gnome Tweak Tool能够帮助你定制比如字体、主题等那些在Unity
### 3. 将窗口标题栏的按钮右置 ###
在ubuntu中最小化、最大化和关闭按钮默认实在标题栏的左侧的。需要稍作手脚才能让他们乖乖回到右边去。
在ubuntu中最小化、最大化和关闭按钮默认是在标题栏左侧的。需要稍作手脚才能让他们乖乖回到右边去。
想让窗口的按钮到右边可以使用下面的命令这是我在askubuntu上找到的。
@ -50,7 +50,7 @@ Gnome Tweak Tool能够帮助你定制比如字体、主题等那些在Unity
### 4.安装 Plank dock ###
plank dock位于屏幕底部用于启动应用和切换打开的窗口。会在必要的时间隐藏自己并在需要的时候出现。elementary OS使用的dock就是plank dock。
plank dock位于屏幕底部用于启动应用和切换打开的窗口。会在必要的时间隐藏自己并在需要的时候出现。elementary OS使用的dock就是plank dock。
运行以下命令安装:
@ -58,11 +58,11 @@ plank dock位于屏幕底部用于启动应用和切换打开的窗口。会在
$ sudo apt-get update
$ sudo apt-get install plank -y
现在启动 应用程序 > 附件 > Plank。若想让它开机自动启动找到 应用程序 > 系统工具 > 首选项 > 启动应用程序 并将“plank”的命令加到列表中。
现在启动应用程序 > 附件 > Plank。若想让它开机自动启动找到 应用程序 > 系统工具 > 首选项 > 启动应用程序 并将“plank”的命令加到列表中。
### 5. 安装 Conky 系统监视器 ###
Conky非常酷它用系统的中如CPU和内存使用率的统计值来装饰桌面。它不太占资源并且运行的大部分时间都不惹麻烦
Conky非常酷它用系统的中如CPU和内存使用率的统计值来装饰桌面。它不太占资源,并且绝大部分情况下运行都不会有什么问题
运行如下命令安装:
@ -70,7 +70,7 @@ Conky非常酷它用系统的中如CPU和内存使用率的统计值来装饰
$ sudo apt-get update
$ sudo apt-get install conky-manager
现在启动 应用程序 > 附件 > Conky Manager 选择你想在桌面上显示的部件。Conky Manager同样可以配置到启动项中。
现在启动:应用程序 > 附件 > Conky Manager 选择你想在桌面上显示的部件。Conky Manager同样可以配置到启动项中。
### 6. 安装CCSM ###
@ -80,10 +80,10 @@ Conky非常酷它用系统的中如CPU和内存使用率的统计值来装饰
$ sudo apt-get install compizconfig-settings-manager
启动按步骤 应用程序 > 系统工具 > 首选项 > CompizConfig Settings Manager.
启动按步骤 应用程序 > 系统工具 > 首选项 > CompizConfig Settings Manager.
>在虚拟机中经常会发生compiz会话中装饰窗口消失。可以通过启动Compiz设置在打开"Copy to texture",注销后重新登录即可。
> 在虚拟机中经常会发生compiz会话中装饰窗口消失。可以通过启动Compiz设置并启用"Copy to texture"插件,注销后重新登录即可。
不过值得一提的是Compiz 会话会比Metacity慢。
@ -92,8 +92,8 @@ Conky非常酷它用系统的中如CPU和内存使用率的统计值来装饰
via: http://www.binarytides.com/install-gnome-flashback-ubuntu/
作者:[Silver Moon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
译者:[martin2011qi](https://github.com/martin2011qi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,144 @@
在 Linux 中以交互方式实时查看Apache web访问统计
================================================================================
无论你是从事网站托管业务还是在自己的VPS上运行几个网站你总会有需要显示访客统计信息的时候例如前几位的访客、被请求的文件无论动态或者静态、所用的带宽、客户端的浏览器和访问的来源网站等等。
[GoAccess][1] 是一款用于Apache或者Nginx的命令行日志分析器和交互式查看器。使用这款工具你不仅可以浏览到之前提及的相关数据还可以通过分析网站服务器日志来进一步挖掘数据 - 而且**这一切都是在一个终端窗口实时输出的**。由于今天的[大多数web服务器][2]都使用Debian的衍生版或者基于RedHat的发行版作为底层操作系统所以在本文中我将告诉你如何在Debian和CentOS中安装和使用GoAccess。
### 在Linux系统安装GoAccess ###
在DebianUbuntu及其衍生版本运行以下命令来安装GoAccess
# aptitude install goaccess
在CentOS中你需要先启用[EPEL 仓库][3],然后执行以下命令:
# yum install goaccess
在Fedora同样使用yum命令
# yum install goaccess
如果你想从源码安装GoAccess来使用更多功能例如 GeoIP 定位功能),需要在你的操作系统安装[必需的依赖包][4],然后按以下步骤进行:
# wget http://tar.goaccess.io/goaccess-0.8.5.tar.gz
# tar -xzvf goaccess-0.8.5.tar.gz
# cd goaccess-0.8.5/
# ./configure --enable-geoip
# make
# make install
以上安装的版本是 0.8.5,但是你也可以在该软件的网站[下载页][5]确认是否是最新版本。
由于GoAccess不需要后续的配置一旦安装你就可以马上使用。
### 运行 GoAccess ###
开始使用GoAccess只需要对它指定你的Apache访问日志。
对于Debian及其衍生版本
# goaccess -f /var/log/apache2/access.log
基于红帽的发行版:
# goaccess -f /var/log/httpd/access_log
当你第一次启动GoAccess你将会看到如下的屏幕需要在其中选择日期和日志格式。你可以按空格键进行选择并按F10确认。至于日期和日志格式你可能需要参考[Apache 文档][6]来刷新你的记忆。
在这个例子中选择常见日志格式Common Log FormatCLF
![](https://farm8.staticflickr.com/7422/15868350373_30c16d7c30.jpg)
然后按F10 确认。你将会从屏幕上看到统计数据。为了简洁起见,这里只显示了首部,也就是日志文件的摘要,如下图所示:
![](https://farm9.staticflickr.com/8683/16486742901_7a35b5df69_b.jpg)
### 通过 GoAccess来浏览网站服务器统计数据 ###
你可以按向下的箭头滚动页面,你会发现以下区域,它们是按请求数排序的。这里提及的区域顺序可能会根据你的发行版或者你所选的安装方式(从源码安装还是从仓库安装)不同而不同:
1. 每天的唯一访客来自同样IP、同一日期和同一浏览器的请求被认为是一次唯一访问
![](https://farm8.staticflickr.com/7308/16488483965_a439dbc5e2_b.jpg)
2. 请求的文件网页URL
![](https://farm9.staticflickr.com/8651/16488483975_66d05dce51_b.jpg)
3. 请求的静态文件(例如 .png 文件、.js 文件等等)
4. 来源的URL每一个URL请求的出处
5. HTTP 404 未找到的响应代码
![](https://farm9.staticflickr.com/8669/16486742951_436539b0da_b.jpg)
6. 操作系统
7. 浏览器
8. 主机地址客户端IP地址
![](https://farm8.staticflickr.com/7392/16488483995_56e706d77c_z.jpg)
9. HTTP 状态代码
![](https://farm8.staticflickr.com/7282/16462493896_77b856f670_b.jpg)
10. 前几位的来源站点
11. 来自谷歌搜索引擎的前几位的关键字
如果你想要检查已经存档的日志你可以通过管道将它们发送给GoAccess如下
在Debian及其衍生版本
# zcat -f /var/log/apache2/access.log* | goaccess
在基于红帽的发行版:
# cat /var/log/httpd/access* | goaccess
如果你需要上述部分的详细报告1至11项直接按下其序号再按 O大写的 o就可以显示出你需要的详细视图。下面的图像显示的是“5-O”的输出先按 5再按 O
![](https://farm8.staticflickr.com/7382/16302213429_48d9233f40_b.jpg)
如果要显示GeoIP位置信息打开主机部分的详细视图如前面所述你将会看到正在请求你的服务器的客户端IP地址所在的位置。
![](https://farm8.staticflickr.com/7393/16488484075_d778aa91a2_z.jpg)
如果你的系统还不是很忙碌,以上提及的部分将不会显示大量的信息,但是随着你的网站服务器收到的请求越来越多,这种情形将会改变。
### 保存用于离线分析的报告 ###
有时候你不想每次都实时去检查你的系统状态这时可以保存一份供离线分析的报告文件或者把它打印出来。要生成一个HTML报告只需要将之前提到的GoAccess命令的输出重定向到一个HTML文件即可然后用web浏览器打开这份报告。
# zcat -f /var/log/apache2/access.log* | goaccess > /var/www/webserverstats.html
一旦报告生成,你将需要点击展开的链接来显示每个类别详细的视图信息:
![](https://farm9.staticflickr.com/8658/16486743041_bd8a80794d_o.png)
可以查看youtube视频https://youtu.be/UVbLuaOpYdg 。
正如我们在这篇文章中所讨论的GoAccess是一个非常有价值的工具它能给系统管理员实时提供可视的HTTP统计分析。虽然GoAccess默认输出到标准输出但是你也可以将结果保存到JSON、HTML或者CSV文件。这种转换可以让GoAccess在监控和显示网站服务器的统计数据时更有用。
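比如,想得到一份 JSON 格式的报告,可以参考下面的写法(这里假设你安装的版本支持 `-o`/`--output-format` 选项,具体请以 `goaccess --help` 的输出为准):
# goaccess -f /var/log/apache2/access.log -o json > /var/www/webserverstats.json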
--------------------------------------------------------------------------------
via: http://xmodulo.com/interactive-apache-web-server-log-analyzer-linux.html
作者:[Gabriel Cánepa][a]
译者:[disylee](https://github.com/disylee)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://goaccess.io/
[2]:http://w3techs.com/technologies/details/os-linux/all/all
[3]:http://linux.cn/article-2324-1.html
[4]:http://goaccess.io/download#dependencies
[5]:http://goaccess.io/download
[6]:http://httpd.apache.org/docs/2.4/logs.html

View File

@ -6,7 +6,7 @@ Linux 有问必答如何在Linux中修复“fatal error: lame/lame.h: No such
fatal error: lame/lame.h: No such file or directory
[LAME][1]"LAME Ain't an MP3 Encoder"是一个流行的LPGL授权的MP3编码器。许多视频编码工具使用或者支持LAME。这其中有[FFmpeg][2]、 VLC、 [Audacity][3]、 K3b、 RipperX等。
[LAME][1]"LAME Ain't an MP3 Encoder"是一个流行的LPGL授权的MP3编码器。许多视频编码工具使用或者支持LAME,如 [FFmpeg][2]、 VLC、 [Audacity][3]、 K3b、 RipperX等。
要修复这个编译错误你需要安装LAME库和开发文件按照下面的来。
@ -20,7 +20,7 @@ Debian和它的衍生版在基础库中已经提供了LAME库因此可以用a
在基于RED HAT的版本中LAME在RPM Fusion的免费仓库中就有那么你需要先设置[RPM Fusion (免费)仓库][4]。
RPM Fusion设置完成后如下安装LAME开发文件
RPM Fusion设置完成后如下安装LAME开发
$ sudo yum --enablerepo=rpmfusion-free-updates install lame-devel
@ -42,7 +42,7 @@ RPM Fusion设置完成后如下安装LAME开发文件。
$ ./configure --help
共享/静态LAME默认安装在 /usr/local/lib。要让共享库可以被其他程序使用完成最后一步
共享/静态LAME默认安装在 /usr/local/lib。要让共享库可以被其他程序使用完成最后一步
用编辑器打开 /etc/ld.so.conf加入下面这行。
@ -56,7 +56,6 @@ RPM Fusion设置完成后如下安装LAME开发文件。
如果你的发行版(比如 CentOS 7没有提供预编译的LAME库或者你想要自定义LAME库你需要从源码自己编译。下面是在基于Red Hat的系统中编译安装LAME库的方法。
$ sudo yum install gcc git
$ wget http://downloads.sourceforge.net/project/lame/lame/3.99/lame-3.99.5.tar.gz
$ tar -xzf lame-3.99.5.tar.gz
@ -87,7 +86,7 @@ via: http://ask.xmodulo.com/fatal-error-lame-no-such-file-or-directory.html
作者:[Dan Nanni][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,12 +1,12 @@
Linux有问必答 -- 如何在树莓派上安装USB网络摄像头
Linux有问必答如何在树莓派上安装USB网络摄像头
================================================================================
> **Question**: 我可以在树莓派上使用标准的USB网络摄像头么我该如何检查USB网络摄像头与树莓派是否兼容另外我该如何在树莓派上安装它
如果你想在树莓上拍照或者录影,你可以安装[树莓派的摄像头板][1]。如果你不想要为摄像头模块花费额外的金钱,那有另外一个方法,就是你常见的[USB 摄像头][2]。你可能已经在PC上安装了。
如果你想在树莓上拍照或者录影,你可以安装[树莓派的摄像头板][1]。如果你不想要为摄像头模块花费额外的金钱,那有另外一个方法,就是你常见的[USB 摄像头][2]。你可能已经在PC上安装了。
本教程中我会展示如何在树莓派上设置摄像头。我们假设你使用的系统是Raspbian。
在此之前,你最好检查一下你的摄像头是否在[这些][3]已知与树莓派兼容的摄像头之中。如果你的摄像头不在这个兼容列表中,不要丧气,仍然有可能你的摄像头被树莓派检测到。
在此之前,你最好检查一下你的摄像头是否在[这些][3]已知与树莓派兼容的摄像头之中。如果你的摄像头不在这个兼容列表中,不要丧气,仍然有可能树莓派检测到你的摄像头
### 检查USB摄像头是否与树莓派兼容 ###
@ -34,7 +34,7 @@ fswebcam安装完成后在终端中运行下面的命令来抓去一张来自
$ fswebcam --no-banner -r 640x480 image.jpg
这条命令可以抓取一张640x480分辨率的照片并且用jpg格式保存。它不会在照片的地步留下任何标志.
这条命令可以抓取一张640x480分辨率的照片并且用jpg格式保存。它不会在照片的底部留下任何水印.
![](https://farm8.staticflickr.com/7417/16576645965_302046d230_o.png)
@ -52,7 +52,7 @@ via: http://ask.xmodulo.com/install-usb-webcam-raspberry-pi.html
作者:[Kristophorus Hadiono][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,116 @@
只有几百个字节大小的国际象棋程序
================================================================================
当我在这里提到了 ZX81 电脑时我已经暴露了我的年龄。ZX81 是一个由英国开发者Sinclair 研究所)生产的家庭电脑,它拥有"高达" 1KB 的内存!上面的 1KB 并不是打印错误,这个家庭电脑确实只配置有 1KB 的板载内存。但这个内存大小上的限制并没有阻止爱好者制作种类繁多的软件。事实上,这个机器引发了一代编程奇才的出现,这让他们掌握了让程序在该机上运行起来的技能。这个机器可以通过一个 16 KB 的内存卡来进行升级,这就提供了更多的编程可能。但未经扩展的 1KB 机器仍然激励着编程者们发布卓越的软件。
![1K ZX Chess ](http://www.linuxlinks.com/portal/content2/reviews/Games2/1KZXChess.jpg)
我最喜爱的 ZX81 游戏有: 模拟飞行Flight Simulation, 3D 版怪物迷宫3D Monster Maze, 小蜜蜂Galaxians, 以及最重要的 1K ZX Chess。 只有最后一个程序是为未扩展版的 ZX81 电脑设计的。事实上David Horne 开发的 1K ZX Chess 只使用了仅仅 672 字节的 RAMLCTT 译注:如果读者有兴趣,可以看看[这里](http://users.ox.ac.uk/~uzdm0006/scans/1kchess/)对该程序的代码及解释)。尽管如此,该游戏尽力去实现大多数的国际象棋规则,并提供了一个计算机虚拟对手。虽然一些重要的规则被忽略了(如:王车易位,兵的升变,和吃过路兵)
LCTT 译注:参考了[这里](http://zh.wikibooks.org/zh/%E5%9B%BD%E9%99%85%E8%B1%A1%E6%A3%8B/%E8%A7%84%E5%88%99)和[这里](http://en.wikipedia.org/wiki/Rules_of_chess)),但能够和人工智能相对抗,这仍然令人惊讶。这个游戏占据了我逝去的青春里的相当一部分。
1K ZX Chess 保持了在所有计算机上国际象棋的最小实现的地位长达 33 年之久,直到今年由 BootChess 打破了该记录,紧接着由 Toledo AtomChess 打破。这三个程序都没有实现所有的国际象棋规则,所以为了完整性,我介绍了我最喜爱的那些实现了所有国际象棋规则的极小的国际象棋。
Linux 有着一系列极其强大的国际象棋引擎,如 Stockfish、Critter、Toga II、Crafty、GNU Chess 和 Komodo。本文精选的国际象棋程序虽然敌不过这些强大的引擎但它们展示了使用微不足道的代码量究竟可以实现多少东西。
----------
### Toledo Atomchess
![](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Toledo.png)
你可能已经看到了大量有关 BootChess 新闻报道,这个只用 487 字节写就的国际象棋程序,一举打破了先前最小的国际象棋程序 1K ZX Chess 的记录。所以Óscar Toledo Gutiérrez 挽起袖子自己编写了一个更加紧凑的国际象棋游戏。Toledo Atomchess 是仅有 481 字节的 x86 汇编代码,都能放到引导扇区里。 在难以置信的代码大小下,这个引擎实现了一个可玩的国际象棋游戏。
特点包括:
- 基本的棋子移动
- 用 ASCII 文本表现的棋盘
- 以代数形式来输入移动(注:如 D2D4)
- 3 层的搜索深度
显然,为了将这个国际象棋程序压缩到 481 字节中,作者必须做出某些牺牲,这些局限包括:
- 没有兵的升变
- 没有王车易位
- 没有吃过路兵
- 没有移动确认
该作者也使用 CJavaScript 和 Java 来写这个国际象棋程序,每种实现都非常小。
- 网站: [nanochess.org/chess6.html][1]
- 开发者: Óscar Toledo Gutiérrez
- 协议: 非商业用途可免费使用
- 版本号: -
----------
### BootChess
![](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-BootChess.png)
BootChess 是一个国际象棋的极其小巧的计算机实现。这个程序被塞进到仅仅 487 字节里,并可运行在 Windows, Mac OS X 和 Linux 等操作系统。BootChess 的棋盘和棋子单独用文本表示,其中 P 代表兵, Q 用来代表王后,以及“点”代表空白格子。
特点包括:
- 象棋棋盘和用户输入的形象的文本表示
- 引导扇区大小512 字节)的可玩的象棋游戏
- 只需 x86 bios 硬件引导程序(没有软件依赖)
- 所有主要的正规移动包括双兵开局
- 兵升变为王后(与 1k ZX Chess 相反)
- 名为 taxiMax > minMax half-ply 的 CPU 人工智能
- 硬编码的西班牙白子开局
同样,它也存在一些重要的限制。这些遗漏的功能包括:
- 兵的低升变(升变为非王后的棋子)
- 吃过路兵
- 没有王车易位
- 3 次位置重复和局规则(注:下一步之前,同样的移动出现了两次;可以参考[这里](http://www.netplaces.com/chess-basics/ending-the-game/three-position-repetition.htm)
- 50 步移动和局规则在连续的50个回合内双方既没有棋子被吃掉也没有兵被移动过则和局可以参考[这里](http://www.chessvariants.org/d.chess/chess.html)
- 没有开放式和封闭式布局
- 一个或多个 minMAX/negaMax 全层人工智能
- 网站: [www.pouet.net/prod.php?which=64962][2]
- 开发者: Olivier "Baudsurfer/RSi" Poudade
- 协议: WTFPL v2
- 版本号: .02
----------
###Micro-Max
![](http://www.linuxlinks.com/portal/content/reviews/Games2/Screenshot-Micro-Max.png)
Micro-Max 是一个用 133 行 C 语言写就的象棋源程序。
作者实现了一个 hash 变换表,该引擎检查输入移动的合法性,以及支持 FIDEWorld Chess Federation 的缩写,参见其[官网](https://www.fide.com/))的全部规则,除了低升变。
特点包括:
- 递归的 negamax 搜索
- 反夺的静态搜索
- 反夺规则的扩展
- 迭代深化
- 最佳移动优先的 `排序`
- 存储分数和最佳移动的 Hash 表
- 完整的 FIDE 规则(除了低位升变)和移动合法性检查
还有一个 1433 个字符的较大版本,它允许低升变,因而支持完整的 FIDE 规则。
- 网站: [home.hccnet.nl/h.g.muller/max-src2.html][3]
- 开发者: Harm Geert Muller
- 协议: The MIT License
- 版本号: 3.2
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20150222033906262/ChessBytes.html
作者Frazer Kline
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://nanochess.org/chess6.html
[2]:http://www.pouet.net/prod.php?which=64962
[3]:http://home.hccnet.nl/h.g.muller/max-src2.html

View File

@ -1,154 +1,169 @@
10个有用的ls命令面试问题-第二部分
10ls 令面试的问题
================================================================================
这是关于文件列表命令的第二篇文章继续探讨ls命令的其他方面。该系列的第一篇文章收到了Tecmint社区的高度关注如果你错过了该系列的第一部分你可能会访问以下地址:
这是关于文件列表命令的第二篇文章继续探讨ls命令的其他方面。该系列的第一篇文章受到了社区的高度关注,如果你错过了该系列的第一部分,可以访问以下地址:
- [15 Interview Questions on “ls” Command Part 1][1]
- [15 ls命令的面试问题][1]
这篇文章通过样例来很好地展现ls命令的深入应用我们加倍小心地来写这篇文章来保持其简洁可理解性同时又能提供最全面的服务。
![10 Interview Questions on ls Command](http://www.tecmint.com/wp-content/uploads/2015/03/ls-Command-Interview-Questions.jpg)
10 Interview Questions on ls Command
### 1. 假如你想要以长列表的形式列出目录中的内容,但是不打印文件创建者名称以及文件所属组。同时在输出中显示其不同之处。###
*10 ls 命令面试的问题*
### 16. 假如你想要以长列表的形式列出目录中的内容,但是不打印文件创建者名称以及文件所属组。看看输出有何不同之处。###
a. ls 命令在与‘-l选项一起使用时会将文件以长列表格式输出。
# ls -l
![List Files in- Long List Format](http://www.tecmint.com/wp-content/uploads/2015/03/List-Files-inLong-List-Format.gif)
List Files in- Long List Format
*以长格式列出文件*
b. ls 命令在与‘-l--author一起使用时会将文件以长列表格式输出并带有文件创建者的名称信息。
# ls -l --author
![List Files By Author](http://www.tecmint.com/wp-content/uploads/2015/03/List-Files-By-Author.gif)
List Files By Author
*列出文件的创建者*
c. ls 命令在与‘-g选项 一起将会列出文件名但是不带属主名称。
# ls -g
![List Files Without Printing Owner Name](http://www.tecmint.com/wp-content/uploads/2015/03/List-Files-Without-Printing-Author.gif)
List Files Without Printing Owner Name
d. ls 命令在与'-G'和‘-l选项一起将会使用长列表格式列出文件名称带式不带文件所属组名称。
*列出文件但不列出属主*
d. ls 命令在与'-G'和‘-l选项一起将会使用长列表格式列出文件名称但是不带文件所属组名称。
# ls -Gl
![List Files Without Printing Group](http://www.tecmint.com/wp-content/uploads/2015/03/List-Files-Without-Printing-Group.gif)
List Files Without Printing Group
### 2. 使用用户友好的格式打印出当前目录中的文件以及文件夹的大小,你会如何做?###
*列出文件但是不列出所属组*
### 17. 使用易读格式打印出当前目录中的文件以及文件夹的大小,你会如何做?###
这里我们需要使用'-h'选项(人类可阅读的、易读的)同‘-l-s选项与ls命令一起使用来得到想要的输出。
这里我们需要使用'-h'选项(人类可阅读的)同‘-l-s选项与ls命令一起使用来得到想要的输出。
# ls -hl
![List Files in Human Readable Format](http://www.tecmint.com/wp-content/uploads/2015/03/List-Size-of-Files-with-ls.gif)
List Files in Human Readable Format
*以易读格式的长列表列出文件*
# ls -hs
![List File Sizes in Long List Format](http://www.tecmint.com/wp-content/uploads/2015/03/List-File-Sizes-in-Readable-Format.gif)
List File Sizes in Long List Format
*以易读格式的短列表列出文件*
**注意** -h选项使用1024计算机中的标准的幂文件或文件夹的大小分别以KM和G作为输出单位。
### 3. 既然‘-h选项是使用1024的幂作为标准来输出大小那么ls命令还支持其他的幂值呢###
### 18. 既然‘-h选项是使用1024的幂作为标准来输出大小那么ls命令是否还支持其他的幂值呢?###
存在一个选项 -si与选项-h相似不同之处在于前者以使用1000的幂后者使用1024的幂。
存在一个选项 --si与选项-h相似不同之处在于前者以使用1000的幂后者使用1024的幂。
# ls -si
# ls --si
![Supported Power Values of ls Command](http://www.tecmint.com/wp-content/uploads/2015/03/ls-supported-power-values.gif)
Supported Power Values of ls Command
所以'--si'也可以与‘-l选项一起使用来按照1000的幂来输出文件夹的大小并且以长列表格式显示。
所以'-si'也可以与‘-l选项一起使用来按照1000的幂来输出文件夹的大小并且以长列表格式显示。
# ls --si -l
# ls -si -l
LCTT 译注:此处原文参数有误,附图也不对,因此删除之)
![List Files by Power Values](http://www.tecmint.com/wp-content/uploads/2015/03/List-Files-by-Power-Values.gif)
List Files by Power Values
### 19. 假如要你使用逗号‘,’作为分隔符来打印一个目录中的内容,可以吗? 对于长列表形式也可行吗?###
### 4. 假如要你使用逗号‘,’作为分隔符来打印一个目录中的内容,可以吗? 对于长列表形式也可行吗?###
当然linux的ls命令当与其选项-m一起使用时可以在打印目录内容时以逗号分割。由于逗号分割的内容是水平填充的ls命令不能在垂直列出内容时使用逗号来分割内容。
当然linux的ls命令当与其选项-m一起使用时可以在打印目录内容时以逗号,分割。由于逗号分割的内容是水平填充的ls命令不能在垂直列出内容时使用逗号来分割内容。
# ls -m
![Print Contents of Directory by Comma](http://www.tecmint.com/wp-content/uploads/2015/03/Print-Contents-of-Directory-by-Comma.gif)
Print Contents of Directory by Comma
*以逗号分隔显示内容*
当使用长列表格式时,‘-m选项就没有什么效果了。
# ls -ml
![Listing Content Horizontally](http://www.tecmint.com/wp-content/uploads/2015/03/Listing-Content-Horizentally.gif)
Listing Content Horizontally
### 5. 有办法将目录的内容逆序打印出来吗?###
*长列表不能使用逗号分隔列表*
### 20. 有办法将目录的内容逆序打印出来吗?###
可以!上面的情形可以轻松地通过'-r'选项搞定,该选项将输出顺序倒置。这个选项也可以与‘-l选项一起使用。
# ls -r
![List Content in Reverse Order](http://www.tecmint.com/wp-content/uploads/2015/03/List-Content-in-Reverse-Order.gif)
List Content in Reverse Order
*逆序列出*
# ls -rl
![Long List Content in Reverse Order](http://www.tecmint.com/wp-content/uploads/2015/03/Long-List-Content-in-Reverse-Order.gif)
Long List Content in Reverse Order
### 6. 如果你被分配一个任务,来递归地打印各个子目录,你会如何应付?注意哟,只针对子目录而不是文件哦。###
*逆序长列表*
### 21. 如果你被分配一个任务,来递归地打印各个子目录,你会如何应付?注意,只针对子目录而不是文件哦。###
小意思!使用“-R”选项就可以轻轻松松拿下它也可以更进一步地与其他选项如-l-m选项等组合使用。
# ls -R
![Print Sub Directories in Recursively](http://www.tecmint.com/wp-content/uploads/2015/03/Print-Sub-Directories-in-Recursively.gif)
Print Sub Directories in Recursively
### 7. 如何按照文件大小对其进行排序?###
*递归列出子目录*
### 22. 如何按照文件大小对其进行排序?###
linux命令行选项'-S'赋予了ls命令这个超能力。按照文件大小从大到小的顺序排序
# ls -S
![Sort Files with ls Command](http://www.tecmint.com/wp-content/uploads/2015/03/Sort-Files-in-Linux.gif)
Sort Files with ls Command
*按文件大小排序*
按照文件大小从小到大的顺序排序。
# ls -Sr
![Sort Files in Descending Order](http://www.tecmint.com/wp-content/uploads/2015/03/Sort-Files-in-Descending-Order.gif)
Sort Files in Descending Order
### 8. 列出目录中的内容按照一行一个文件并且不带额外信息的方式 ###
*从小到大的排序*
选项‘-l在此可以解决这个问题使用-l选项来使用ls命令可以将目录中的内容按照一行一个文件并且不带额外信息的方式进行输出。
### 23. 按照一行一个文件列出目录中的内容,并且不带额外信息的方式 ###
选项‘-1在此可以解决这个问题使用-1选项来使用ls命令可以将目录中的内容按照一行一个文件并且不带额外信息的方式进行输出。
# ls -1
![List Files Without Information](http://www.tecmint.com/wp-content/uploads/2015/03/List-Files-Without-Information.gif)
List Files Without Information
### 9. 现在委派给你一个任务,你必须将目录中的内容输出到终端而且需要使用双引号引起来,你会如何做?###
*不带其他信息,一行一个列出文件*
存在一个选项‘-Q会将ls命令的输出内容用双引号引起来。
### 24. 现在委派给你一个任务,你必须将目录中的内容输出到终端而且需要使用双引号引起来,你会如何做?###
有一个选项‘-Q会将ls命令的输出内容用双引号引起来。
# ls -Q
![Print Files with Double Quotes](http://www.tecmint.com/wp-content/uploads/2015/03/Print-Files-with-Double-Quotes.gif)
Print Files with Double Quotes
### 10. 想象一下你正在与一个包含有很多文件和文件夹的目录打交道,你需要使目录名显示在文件名之前,你如何做?###
*输出的文件名用引号引起来*
### 25. 想象一下你正在与一个包含有很多文件和文件夹的目录打交道,你需要使目录名显示在文件名之前,你如何做?###
# ls --group-directories-first
![Print Directories First](http://www.tecmint.com/wp-content/uploads/2015/03/Print-Directories-First.gif)
Print Directories First
先点到为止我们会马上提供该系列文章的下一部分。别换频道关注Tecmint。 另外别忘了在下面的评论中提出你们宝贵的反馈信息,喜欢就分享,帮助我们得到更好的传播吧!
*目录优先显示*
先点到为止,我们会马上提供该系列文章的下一部分。别换频道,关注我们。 另外别忘了在下面的评论中提出你们宝贵的反馈信息,喜欢就分享,帮助我们得到更好的传播吧!
--------------------------------------------------------------------------------
@ -156,9 +171,9 @@ via: http://www.tecmint.com/ls-interview-questions/
作者:[Ravi Saive][a]
译者:[theo-l](https://github.com/theo-l)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/admin/
[1]:http://www.tecmint.com/ls-command-interview-questions/
[1]:http://linux.cn/article-5349-1.html

View File

@ -1,85 +1,91 @@
关于linux中的“ls”命令的15个面试问题 - 第一部分
15 个ls命令的面试问题
================================================================================
Unix或类Unix系统中的“文件列表”命令“ls”是最基础并且使用的最广泛的命令行中工具之一。
它是一个在GNU基本工具集以及BSD各种变体上可用的与POSIX兼容的工具。
“ls”命令可以通过与大量的选项一起使用来达到想要的结果。
Unix或类Unix系统中的“文件列表”命令“ls”是最基础并且使用的最广泛的命令行中工具之一。它是一个POSIX兼容工具在GNU基本工具集以及BSD各种变体上都可以使用。“ls”命令可以结合大量的选项来达到想要的结果。
这篇文章的目的在于通过相关的样例来深入讨论文件列表命令。
![15 ls Command Questions](http://www.tecmint.com/wp-content/uploads/2014/09/ls-Command-Questions.png)
15个“ls”命令问题。
### 1. 你会如何从目录中列出文件?###
*15个“ls”命令问题。*
使用linux文件列表命令“ls”驾到拯救。
### 1. 如何列出目录中的文件?###
linux文件列表命令“ls”就是干这个的。
# ls
![List Files](http://www.tecmint.com/wp-content/uploads/2014/09/list-files.gif)
列出文件
同时我们也可以使用“echo(打印)”命令与一个通配符(*)相关联的方式在目录中列出其中的所有文件。
*列出文件*
同时我们也可以使用“echo(回显)”命令与一个通配符(*)参数来列出目录中的所有文件。
# echo *
![List All Files](http://www.tecmint.com/wp-content/uploads/2014/09/list-all-files.gif)
列出所有的文件。
### 2. 你会如何只通过使用echo命令来列出目录中的所有文件###
*列出所有的文件。*
### 2. 如何只使用echo命令来只列出所有目录###
# echo */
![List All Directories](http://www.tecmint.com/wp-content/uploads/2014/09/list-all-directories.gif)
列出所有的目录
### 3. 你会怎样列出一个目录中的所有文件, 包括隐藏的dot文件###
*列出所有的目录*
### 3. 怎样列出一个目录中的所有文件, 包括隐藏的以“.”开头的文件?###
答:我们需要将“-a”选项与“ls”命令一起使用。
# ls -a
![List All Hidden Files](http://www.tecmint.com/wp-content/uploads/2014/09/list-all-hidden-files.gif)
列出所有的隐藏文件。
### 4. 如何列出目录中除了 “当前目录暗喻(.)”和“父目录暗喻(..)”之外的所有文件,包括隐藏文件?###
*列出所有的隐藏文件。*
### 4. 如何列出目录中除了 “当前目录 .”和“父目录 ..”之外的所有文件,包括隐藏文件?###
答: 我们需要将“-A”选项与“ls”命令一起使用
# ls -A
![Do Not List Implied](http://www.tecmint.com/wp-content/uploads/2014/09/Do-not-list-Implied.gif)
别列出暗喻文件。
### 5. 如何将当前目录中的内容使用长格式打印列表?###
*别列出指代当前目录和父目录的文件*
### 5. 如何使用长格式打印出当前目录内容?###
答: 我们需要将“-l”选项与“ls”命令一起使用。
# ls -l
![List Files Long](http://www.tecmint.com/wp-content/uploads/2014/09/list-files-long.gif)
列出文件的长格式。
*列出文件的长格式。*
上面的样例中,其输出结果看起来向下面这样。
drwxr-xr-x 5 avi tecmint 4096 Sep 30 11:31 Binary
上面的drwxr-xr-x 是文件的权限,分别代表了文件所有者,组以及对整个世界。 所有者具有读(r),写(w)以及执行(x)等权限。 该文件所属组具有读(r)和执行(x)但是没有写的权限,相同的权限预示着
对于整个世界的其他可以访问该文件的用户。
上面的drwxr-xr-x 是文件的权限,分别代表了文件所有者,所属组以及“整个世界”。 所有者具有读(r),写(w)以及执行(x)等权限。 该文件所属组具有读(r)和执行(x)但是没有写的权限,整个世界的其他可以访问到该文件的人也具有相同权限。
- 开头的d意味着这是一个目录
- 数字'5'表示符号链接
- 数字 '5' 表示有 5 个符号链接
- 文件 Binary归属于用户 “avi”以及用户组 "tecmint"
- Sep 30 11:31 表示文件最后一次的访问日期与时间。
### 6. 假如让你来将目录中的内容以长格式列表打印,并且显示出隐藏的“点文件”,你会如何实现?###
答: 我们需要同时将"-a"和"-l"选项与“ls”命令一起使用。
答: 我们需要同时将"-a"和"-l"选项与“ls”命令一起使用LCTT 译注:单字符选项可以合并写)
# ls -la
![Print Content of Directory](http://www.tecmint.com/wp-content/uploads/2014/09/Print-Content-of-Directory.gif)
打印目录内容
同时,如果我们不想列出“当前目录暗喻”和"父目录暗喻",可以将“-A”和“-l”选项同“ls”命令一起使用。
*打印目录内容*
此外,如果我们不想列出“当前目录”和"父目录",可以将“-A”和“-l”选项同“ls”命令一起使用。
# ls -lA
@ -90,9 +96,10 @@ Unix或类Unix系统中的“文件列表”命令“ls”是最基础并且使
# ls --author -l
![List Author Files](http://www.tecmint.com/wp-content/uploads/2014/09/List-Author-Files.gif)
列出文件创建者。
### 8. 如何对非显示字符进行转义打印?###
*列出文件创建者。*
### 8. 如何用转义字符打印出非显示字符?###
答:我们只需要使用“-b”选项来对非显示字符进行转义打印
@ -100,52 +107,58 @@ Unix或类Unix系统中的“文件列表”命令“ls”是最基础并且使
![Print Escape Character](http://www.tecmint.com/wp-content/uploads/2014/09/Print-Escape-Character.gif)
### 9. 指定特定的单位格式来列出文件和目录的大小,你会如何实现?###
答: 在此可以同时使用选项“-block-size=scale”和“-l”但是我们需要用特定的单位如MK等来替换scale
### 9. 用指定的单位格式来列出文件和目录的大小,你会如何实现?###
答: 在此可以同时使用选项“--block-size=scale”和“-l”,但是我们需要用特定的单位(如 M、K 等)来替换 scale 参数。
# ls --block-size=M -l
# ls --block-size=K -l
![List File Scale Format](http://www.tecmint.com/wp-content/uploads/2014/09/List-File-Scale-Format.gif)
列出文件大小单位格式。
### 10. 列出目录中的非备份文件,也就是那些文件名以‘~’结尾的文件###
*列出文件大小单位格式。*
### 10. 列出目录中的文件,但是不显示备份文件,即那些文件名以‘~’结尾的文件###
答: 选项‘-B赶来救驾。
# ls -B
![List File Without Backup](http://www.tecmint.com/wp-content/uploads/2014/09/List-File-Without-Backup.gif)
列出非备份文件
### 11. 将目录中的所有文件按照名称进行排序并与最后修改时间信息进行关联显示###
*列出非备份文件*
### 11. 将目录中的所有文件按照名称进行排序,并显示其最后修改时间信息?###
答: 为了实现这个需求,我们需要同时将“-c”和"-l"选项与命令一起使用。
# ls -cl
![Sort Files](http://www.tecmint.com/wp-content/uploads/2014/09/Sort-Files.gif)
文件排序
*文件排序*
### 12. 将目录中的文件按照修改时间进行排序,并显示相关联的信息。###
答: 我们需要同时使用3个选项--'-l','-t','-c'--与命令ls一起使用来对文件使用修改时间排序,最新的修改时间排在最前。
答: 我们需要同时使用3个选项'-l','-t','-c' 来对文件使用修改时间排序,最新的修改时间排在最前。
# ls -ltc
![Sort Files by Modification](http://www.tecmint.com/wp-content/uploads/2014/09/Sort-Files-by-Modification.gif)
按照修改时间对文件排序。
*按照修改时间对文件排序。*
### 13. 如何控制ls命令的输出颜色的有无###
答: 需要使用选项‘--color=parameterparameter参数具有三种不同值“auto(自动)”“always(一直)”“never(无色)”。
答: 需要使用选项‘--color=parameter参数具有三种不同值“auto(自动)”“always(一直)”“never(无色)”。
# ls --color=never
# ls --color=auto
# ls --color=always
![ls Colorful Output](http://www.tecmint.com/wp-content/uploads/2014/09/ls-colorful-output.gif)
ls的输出颜色
*ls的输出颜色*
### 14. 假如只需要列出目录本身,而不是目录的内容,你会如何做?###
@ -154,9 +167,10 @@ ls的输出颜色
# ls -d
![List Directory Entries](http://www.tecmint.com/wp-content/uploads/2014/09/List-Directory-Entries.gif)
列出目录本身
### 15. 为长格式列表命令"ls -l"创建别名“ll”并将其结果输出到一个文件而不是标准输出中。###
*列出目录本身*
### 15. 为长格式列表命令"ls -l"创建一个别名“ll”并将其结果输出到一个文件而不是标准输出中。###
答:在上述的这个场景中,我们需要将别名添加到.bashrc文件中然后使用重定向操作符将输出写入到文件而不是标准输出中。我们将会使用编辑器nano。
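下面是一个最小的操作示例(假设使用 bash,别名名称 ll 和输出文件 ll.txt 均沿用题目中的叫法,仅作演示):

    # echo "alias ll='ls -l'" >> ~/.bashrc    # 将别名追加到 .bashrc
    # source ~/.bashrc                        # 重新加载配置,使别名立即生效
    # ll > ll.txt                             # 用重定向把长列表输出写入文件而非标准输出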
@ -166,13 +180,14 @@ ls的输出颜色
# nano ll.txt
![Create Alias for ls command](http://www.tecmint.com/wp-content/uploads/2014/09/Create-ls-Alias.gif)
为ls命令创建别名。
*为ls命令创建别名。*
先到此为止,别忘了在下面的评论中提出你们的宝贵意见,我会再次带着另外的有趣的文章在此闪亮登场。
### 参考阅读:###
- [10 个ls命令的面试问题-第二部分][1]
- [10 个ls命令的面试问题(二)][1]
- [Linux中15个基础的'ls'命令][2]
--------------------------------------------------------------------------------
@ -187,4 +202,4 @@ via: http://www.tecmint.com/ls-command-interview-questions/
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/ls-interview-questions/
[2]:http://www.tecmint.com/15-basic-ls-command-examples-in-linux/
[2]:http://linux.cn/article-5109-1.html

View File

@ -0,0 +1,59 @@
如何设置 Linux 上 SSH 登录的 Email 提醒
================================================================================
![](http://www.ehowstuff.com/wp-content/uploads/2015/03/fail2ban-security.jpg)
虚拟私有服务器 VPS上启用 SSH 服务使得该服务器暴露到互联网中,为黑客攻击提供了机会,尤其是当 VPS 还允许root 直接访问时。VPS 应该为每次 SSH 登录成功尝试配置一个自动的 email 警告。 VPS 服务器的所有者会得到各种 SSH 服务器访问日志的通知,例如登录者、登录时间以及来源 IP 地址等信息。这是一个对于服务器拥有者来说,保护服务器避免未知登录尝试的重要安全关注点。这是因为如果黑客使用暴力破解方式通过 SSH 来登录到你的 VPS 的话,后果很严重。在本文中,我会解释如何在 CentOS 6、 CentOS 7、 RHEL 6 和 RHEL 7上为所有的 SSH 用户登录设置一个 email 警告。
1. 使用root用户登录到你的服务器
2. 在全局源定义处配置警告(/etc/bashrc这样就会对 root 用户以及普通用户都生效:
[root@vps ~]# vi /etc/bashrc
将下面的内容加入到上述文件的尾部。
echo 'ALERT - Root Shell Access (vps.ehowstuff.com) on:' `date` `who` | mail -s "Alert: Root Access from `who | cut -d'(' -f2 | cut -d')' -f1`" recipient@gmail.com
3. 你也可以选择性地让警告只对 root 用户生效:
[root@vps ~]# vi .bashrc
将下面的内容添加到/root/.bashrc的尾部
echo 'ALERT - Root Shell Access (vps.ehowstuff.com) on:' `date` `who` | mail -s "Alert: Root Access from `who | cut -d'(' -f2 | cut -d')' -f1`" recipient@gmail.com
整个配置文件样例:
# .bashrc
# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
echo 'ALERT - Root Shell Access (vps.ehowstuff.com) on:' `date` `who` | mail -s "Alert: Root Access from `who | cut -d'(' -f2 | cut -d')' -f1`" recipient@gmail.com
4. 你也可以选择性地让警告只对特定的普通用户生效(例如 skytech
[root@vps ~]# vi /home/skytech/.bashrc
将下面的内容加入到/home/skytech/.bashrc文件尾部
echo 'ALERT - Root Shell Access (vps.ehowstuff.com) on:' `date` `who` | mail -s "Alert: Root Access from `who | cut -d'(' -f2 | cut -d')' -f1`" recipient@gmail.com
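另外,上述警告依赖系统中的 mail 命令来发送邮件,在 CentOS/RHEL 上它通常由 mailx 包提供。如果还没有安装,可以先安装并简单测试一下(收件人地址沿用上面的示例):

    [root@vps ~]# yum install mailx
    [root@vps ~]# echo "test" | mail -s "SSH alert test" recipient@gmail.com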
--------------------------------------------------------------------------------
via: http://www.ehowstuff.com/how-to-get-email-alerts-for-ssh-login-on-linux-server/
作者:[skytech][a]
译者:[theo-l](https://github.com/theo-l)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.ehowstuff.com/author/mhstar/

View File

@ -0,0 +1,91 @@
在Linux上安装使用Go for it备忘软件
===============================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Go_For_It_TODO_Linux.jpeg)
你在 Linux 桌面是如何管理任务和备忘的?我喜欢[用 Ubuntu 的粘帖便签][1]很久了。但是我要面对与其他设备同步的麻烦,特别是我的智能手机。这就是我为什么选择使用 [Google Keep][2] 的原因了。
Google Keep 是一款功能丰富的软件,我十分喜爱,而且喜欢到把它叫做 [Linux 的 Evernote ][3]地步。但是并不是每个人都喜欢一款功能丰富的备忘录软件。极简主义是目前的主流,很多人喜欢。如果你是极简主义的追求者之一,而且正在寻找一款开源的备忘录软件,那么你应该试一试 [Go For It][4]。
### Go For It高效的Linux桌面软件 ###
Go For It是一款简洁的备忘软件借助定时提醒帮助你专注于工作。所以当你添加一个任务到列表后可以附上一个定时器。到设定时间后它就会提醒你去做任务。你可以看看其帅哥开发者 [Manuel Kehl][5] 制作的视频youtube 视频) https://www.youtube.com/watch?v=mnw556C9FZQ
### 安装 Go For It###
要在 Ubuntu 15.04,14.04 和其他基于 Ubuntu 的Linux 发行版如Linux Mint elementary OS Freya 等上面安装 Go For It请使用这款软件官方的 PPA
sudo add-apt-repository ppa:mank319/go-for-it
sudo apt-get update
sudo apt-get install go-for-it
你也可以下载 .deb 包Windows 安装包和源代码,链接如下:
- [Download source code][6]
- [Download .deb binaries][7]
- [Download for Windows][8]
### 在Linux桌面使用 Go For It###
Go For It使用真心方便。你只需添加任务到列表中任务会自动存入 todo.txt 文件中。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Go-for-it_todo_app_linux.png)
每个任务默认定时25分钟。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Go-for-it_todo_app_linux_1.png)
任务一旦完成,就会被自动存档到 done.txt 文件中。根据设置,它会在规定的时间间隔或者任务过期前不久,发送桌面提醒:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Go_for_it_Linux_notification.png)
你可以从配置里面修改所有的偏好。
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Go-for-it_todo_app_linux_2.png)
目前一切都看着挺好。但是在智能手机上使用体验怎样呢?如果你不能使它在不同设备间同步,那这款高效软件就是不完整的。好消息是 Go For It是基于 [todo.txt][9] 的,这意味着你可以用第三方软件和像 Dropbox 一样的云服务来使用它。
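todo.txt 本身就是纯文本格式,因此可以被各种工具读取。下面是一段大致符合 todo.txt 约定的示例内容(任务文字及 +项目、@场景标签均为虚构,仅用于说明格式;以 x 开头的行表示已完成的任务):

    (A) 2015-04-20 给编辑回复邮件 +翻译 @email
    2015-04-21 整理本周的备忘清单 +个人
    x 2015-04-19 安装 Go For It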
### 在安卓手机和平板上使用Go For It ###
在这里你需要做一些工作。首先的首先,在 Linux 和你的安卓手机上安装 Dropbox如果之前没有安装的话。下一步你要做的就是要配置 Go For It**修改 todo.txt 的目录到 Dropbox 的路径下**
然后,你得去下载 [Simpletask Andriod app][10]。这是免费的应用。安装它。当你第一次运行 Simletask 的时候,你会被要求关联你的账号到 Dropbox
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Go_for_it_Android_1.jpeg)
一旦你完成了 Simpletask 与 Dropbox 的关联,就可以打开应用了。如果你已经修改了 Go For It 的配置将文件保存到Dropbox 上,你就应该可以在 Simpletask 里看到。而如果你没有看到,点击应用底部的设置,选择 Open Todo file 的选项:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Go_for_it_Android.jpeg)
现在,你应该可以看到 Simpletask 同步的任务了。
### 总结 ###
对于 Simpletask你就可以以类似[标记语言工具][11]的风格使用它。对于小巧和专注而言Go For It是一款不错的备忘软件。一个干净的界面是额外的加分点。如果拥有它自己的手机应用就更好了但是我们也有临时替代方案了。
不足之处是,Go For It! 不会在后台运行。这就是说,你不得不让它一直保持运行。它甚至没有一个最小化按钮,这有一点小小的烦扰。我希望看到的是一个运行在后台、能快速进入主面板的小指示器程序,这肯定会提升其可用性。
试试 Go For It分享一下你的使用体验。在 Linux 桌面上,你还使用了哪些其他的备忘软件比起其他你最喜欢的同类应用Go For It怎么样
-------------------------------------------------------------------------------
via: http://itsfoss.com/go-for-it-to-do-app-in-linux/
作者:[Abhishek][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://itsfoss.com/indicator-stickynotes-windows-like-sticky-note-app-for-ubuntu/
[2]:http://itsfoss.com/install-google-keep-ubuntu-1310/
[3]:http://itsfoss.com/5-evernote-alternatives-linux/
[4]:http://manuel-kehl.de/projects/go-for-it/
[5]:http://manuel-kehl.de/about-me/
[6]:https://github.com/mank319/Go-For-It
[7]:https://launchpad.net/~mank319/+archive/ubuntu/go-for-it
[8]:http://manuel-kehl.de/projects/go-for-it/download-windows-version/
[9]:http://todotxt.com/
[10]:https://play.google.com/store/apps/details?id=nl.mpcjanssen.todotxtholo&hl=en
[11]:http://itsfoss.com/install-latex-ubuntu-1404/

View File

@ -2,7 +2,7 @@ Linux存储的未来
================================================================================
> **摘要**Linux系统的软件开发者们正致力于使Linux支持更多种类的文件和存储方案。
波士顿 - 在[Linux基金会][1]最近的[Vault][2]展示会上,全都是关于文件系统和存储方案的讨论。你可以会想关于这两个主题并没有什么展值得讨论的最新进展,但事实并非如此。
波士顿 - 在[Linux基金会][1]最近的[Vault][2]展示会上,全都是关于文件系统和存储方案的讨论。你可以会觉得关于这两个主题并没有什么值得讨论的最新进展,但事实并非如此。
![](http://zdnet2.cbsistatic.com/hub/i/r/2015/03/12/c8f92cc2-b963-4238-80a0-d785ec93698c/resize/770x578/08d93a8a393d3f50b2a56e6b0e7a0ca9/btrfs-1.jpg)
@ -14,17 +14,17 @@ Linux存储的未来
### Btrfs ###
例如Chris Mason一位来自Facebook的软件工程师也是[Btrfs][6]对外宣称Butter FS的维护者之一说明了Facebook是如何使用这种文件系统。Btrfs拥有文件系统固有的许多优点比如既能处理大量的小文件也能处理大小可达16EB的单个文件支持RAID的baked烦请校正补充;内置的文件系统压缩,以及集成了对多种存储设备的支持。
例如Chris Mason一位来自Facebook的软件工程师也是[Btrfs][6]念做 Butter FS的维护者之一介绍了Facebook是如何使用这种文件系统。Btrfs拥有文件系统固有的许多优点比如既能处理大量的小文件也能处理大小可达16EB的单个文件支持RAID ;内置的文件系统压缩,以及集成了对多种存储设备的支持。
当然Facebook的服务器也运行在Linux上。更准确地讲是运行在一个基于[CentOS][7]的内部发行版上它是基于3.10和3.18版的内核。对Facebook来说真正的收获是Btrfs在Facebook持续更新用户操作带来的巨大的IOPS每秒钟输入输出的操作数的负载下依旧保持稳定和快速。
当然Facebook的服务器也运行在Linux上。更准确地讲是运行在一个基于[CentOS][7]的内部发行版上它是基于3.10和3.18版的内核。对Facebook来说真正的收获是Btrfs在Facebook持续更新用户操作带来的巨大的IOPS每秒钟输入输出的操作数的负载下依旧保持稳定和快速。
这就是好消息但坏消息是对于像MySQL一样的传统DBMS数据库管理系统来说Btrfs还是太慢了。对此Facebook采用了[XFS][8]。为了协同这两种文件系统Facebook又用到了一种叫做[Gluster][9]的开源分布式文件系统。
Facebook一直与上游的负责Btrfs的Linux内核开发者保持密切联系致力于提高Btrfs在DBMS上的速度。Mason和他的同事在[RocksDB][10]数据库上使用Btrfs以达成目标RocksDB是一种为提供快速存储开发的持久化键值存储系统可以作为客户端服务器模式数据库的基础部分。
Facebook一直与上游的负责Btrfs的Linux内核开发者保持密切联系致力于提高Btrfs在DBMS上的速度。Mason和他的同事的目标是在[RocksDB][10]数据库上使用BtrfsRocksDB是一种为提供快速存储开发的持久化键值存储系统可以作为客户端服务器模式数据库的基础部分。
当然Btrfs也还存在一些问题比如如果有用户傻到用数据把硬盘几乎要撑爆时Btrfs会在硬盘被完全装满前阻止用户继续写入。对某些工程来说比如[CoreOS][12]一款依赖容器化的企业版Linux系统这种问题是致命的。[因此CoreOS已经切换到使用xt4和overlayfs了][11]。
Btrfs的开发人员正致力于数据去重。在这一点上当文件系统中拥有超过一个的相同文件时会自动删除多余文件。正如Mason所说“并非每个人都需要这个功能但如果有人需要那就是真的需要!”
Btrfs的开发人员正致力于数据去重。在这一点上当文件系统中拥有超过一个的相同文件时会自动删除多余文件。正如Mason所说“并非每个人都需要这个功能但如果有人需要那就是真的有用!”
在正在开展的重要性工作中Btrfs并非是唯一的文件系统。John Spary[Red Hat][13]的一位高级软件工程师,提到了另一款名为[Ceph][14]的分布式文件系统。
@ -38,7 +38,7 @@ Ceph提供了一种分布式对象存储方案和文件系统反过来它依
但是Ceph FS仍值得去做正如Spray所说“因为兼容POSIX的文件系统是操作系统通用的。”这并不是说Ceph FS就一无是处。“它并不是支离破碎的相反它奏效了。所缺的是修复和监控工具。”
Red Hat目前正致力于获得[fsck][17]和日志修复工具、快照强化、更好客户端访问控制以及云与容器的集成。尽管Ceph FS到目前为止只是一种有潜力或者没前景的文件系统但仍然值得用在生产环境中。
Red Hat目前正致力于完成[fsck][17]和日志修复工具开发、快照强化、更好客户端访问控制以及云与容器的集成。尽管Ceph FS到目前为止只是一种有潜力或者没前景的文件系统但仍然值得用在生产环境中。
### 文件与存储的差别与目标 ###
@ -56,7 +56,7 @@ via: http://www.zdnet.com/article/linux-storage-futures/
作者:[Steven J. Vaughan-Nichols][a]
译者:[KayGuoWhu](https://github.com/KayGuoWhu)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,59 @@
Papyrus开源笔记管理工具
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_4.jpeg)
在上一篇帖子中,我们介绍了[待办事项管理软件Go For It!][1]。今天我们将介绍一款名为**Papyrus的开源笔记软件**
[Papyrus][2] 是[Kaqaz 笔记管理][3]的一个分支,使用 Qt5 开发。它不仅有简洁、易用的界面,(其宣称)还具备了较好的安全性。由于强调简洁,我觉得 Papyrus 与 OneNote 比较相像。你可以将你的笔记像"纸张"一样分类整理,还可以给他们添加标签进行分组。够简单的吧!
## Papyrus 的特性: ###
虽然 Papyrus 强调简洁,它依然有很多丰富的功能。它的一些主要功能如下:
- 按类别和标签管理笔记
- 高级搜索选项
- 触屏模式
- 全屏选项
- 备份至 Dropbox/硬盘/外部存储
- 允许加密某些页面
- 可与其他软件共享笔记
- 与 Dropbox 加密同步
- 除 Linux 外,还可在 AndroidWindows 和 OS X 使用
### 安装 Papyrus ###
Papyrus 为 Android 用户提供了 APK 安装包。indows 和 OS X 也有安装文件。Linux 用户还可以获取程序的源码。Ubuntu 及其它基于 Ubuntu 的发行版可以使用 .deb 包进行安装。根据你的系统及习惯,你可以从 Papyrus 的下载页面中获取不同的文件:
- [下载 Papyrus][4]
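如果下载的是 .deb 包,一种典型的安装方式如下(包文件名仅为示例,请以实际下载到的文件名为准):

    $ sudo dpkg -i papyrus*.deb
    $ sudo apt-get install -f    # 如有依赖缺失,用这条命令补全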
### 软件截图 ###
以下是此软件的一些截图:
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_3-700x450_c.jpeg)
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_2-700x450_c.jpeg)
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux_1-700x450_c.jpeg)
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2015/03/Papyrus_Linux-700x450_c.jpeg)
试试Papyrus吧你会喜欢上它的。在下方评论区和我们分享你的使用经验吧。
LCTT译注此软件暂无中文版
--------------------------------------------------------------------------------
via: http://itsfoss.com/papyrus-open-source-note-manager/
作者:[Abhishek][a]
译者:[KevinSJ](https://github.com/KevinSJ)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/abhishek/
[1]:http://linux.cn/article-5337-1.html
[2]:http://aseman.co/en/products/papyrus/
[3]:https://github.com/sialan-labs/kaqaz/
[4]:http://aseman.co/en/products/papyrus/

View File

@ -10,7 +10,7 @@ prips是一个可以打印出指定范围内所有ip地址的一个工具。它
### 使用prips ###
### prips语法 ###
prips语法
prips [-c] [-d delim] [-e exclude] [-f format] [-i incr] start end
prips [-c] [-d delim] [-e exclude] [-f format] [-i incr] CIDR-block
@ -20,10 +20,10 @@ prips是一个可以打印出指定范围内所有ip地址的一个工具。它
prips接受下面的命令行选项
- -c -- 以CIDR形式打印范围。
- -d delim -- 用ASCII码作为分隔符0 <= delim <= 255。
- -d 分隔符 -- 用ASCII码作为分隔符0 <= 分隔符 <= 255。
- -e -- 排除输出的范围。
- -f format -- 设置地址格式 (16进制, 10进制, 或者dot).
- -i incr -- 设置增长上限
- -f 格式 -- 设置地址格式 (hex16进制, dec10进制, 或者dot以点分隔).
- -i 增长 -- 设置增长上限
### Prips示例 ###
@ -31,7 +31,7 @@ prips接受下面的命令行选项
prips 192.168.32.0 192.168.32.255
同样使用CIDR标示:
上面的例子也可以使用 CIDR 标示:
prips 192.168.32/24
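上面列出的选项也可以组合使用。下面是一个示意(IP 范围仅为演示):每隔 4 个地址打印一个,并以逗号(ASCII 码 44)作为分隔符:

    prips -i 4 -d 44 192.168.32.0 192.168.32.32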
@ -53,7 +53,7 @@ via: http://www.ubuntugeek.com/prips-print-ip-address-on-a-given-range.html
作者:[ruchi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,16 +1,13 @@
Mydumper - MySQL数据库备份工具
================================================================================
Mydumper 是MySQL数据库服务器备份工具它比MySQL自带的mysqldump快很多。它还有在转储本身的时候检索远程服务器二进制日志文件的能力。
Mydumper 是 MySQL 数据库服务器备份工具,它比 MySQL 自带的 mysqldump 快很多。它还有在转储的同时获取远程服务器二进制日志文件的能力。
### Mydumper 的优势 ###
o 并行性 (因此有高速度) 和 性能 (避免了昂贵的字符集转换例程, 高效的代码)
o 更容易管理输出 (每个表独立的文件,转储元数据等,简单的查看/解析数据)
o 一致性 -- 在所有线程中维护快照, 提供准确的主从结点日志位置等。
o 可管理性 -- 支持对包含和排除指定的数据库和表的PCRE操作(译者注PCREPerl Compatible Regular ExpressionPerl兼容正则表达式)
- 并行能力 (因此有高速度) 和性能 (高效的代码避免了耗费 CPU 处理能力的字符集转换过程)
- 更容易管理输出 (每个表都对应独立的文件,转储元数据等,便于查看/解析数据)
- 一致性 :跨线程维护快照, 提供精确的主从日志定位等。
- 可管理性 支持用 PCRE 来包含/排除指定的数据库和表(LCTT译注PCREPerl Compatible Regular ExpressionPerl兼容正则表达式)
### 在Ubuntu上安装 mydumper ###
@ -26,20 +23,20 @@ o 可管理性 -- 支持对包含和排除指定的数据库和表的PCRE操作(
应用程序选项:
- -B, --database 转储的数据库
- -T, --tables-list 逗号分隔的转储表列表(不排除正则表达式)
- -B, --database 转储的数据库
- -T, --tables-list 逗号分隔的转储表列表(不会被正则表达式排除)
- -o, --outputdir 保存输出文件的目录
- -s, --statement-size 插入语句的字节大小, 默认是1000000个字节
- -r, --rows 把表分为每个这么多行的
- -r, --rows 把表按行数切
- -c, --compress 压缩输出文件
- -e, --build-empty-files 尽管表中没有数据也创建输出文件
- -x, --regex 匹配db.table'的正则表达式
- -i, --ignore-engines 逗号分隔的忽略存储引擎列表
- -m, --no-schemas 不转储有数据的表架构
- -k, --no-locks 不执行临时共享读锁. 警告: 这会导致备份的不一致性
- -e, --build-empty-files 空表也输出文件
- -x, --regex 匹配db.table的正则表达式
- -i, --ignore-engines 逗号分隔的忽略存储引擎列表
- -m, --no-schemas 不转储表架构
- -k, --no-locks 不执行临时共享读锁警告: 这会导致备份的不一致性
- -l, --long-query-guard 设置长查询的计时器秒数默认是60秒
- --kill-long-queries 杀死长查询 (而不是退出)
- -b, --binlogs 获取二进制日志文件和转储数据的快照
- --kill-long-queries 杀死长查询 (而不是退出程序)
- -b, --binlogs 获取二进制日志文件快照并转储数据
- -D, --daemon 开启守护进程模式
- -I, --snapshot-interval 每个转储快照之间的间隔时间(分钟), 需要开启 --daemon, 默认是60分钟
- -L, --logfile 日志文件的名字默认是stdout
@ -67,21 +64,21 @@ o 可管理性 -- 支持对包含和排除指定的数据库和表的PCRE操作(
--threads=2 \
--compress-protocol
Mydumper输出数据的说明
Mydumper 输出数据的说明
Mydumper不直接指定输出的文件而是输出到文件夹的文件中。--outputdir 选项指定要使用的目录名称。
Mydumper 不直接指定输出的文件,而是输出到文件夹的文件中。--outputdir 选项指定要使用的目录名称。
输出分为两部分
架构
**表结构**
对数据库中的每个表,创建包含 CREATE TABLE 语句的文件。文件命名为:
对数据库中的每个表,创建一个包含 CREATE TABLE 语句的文件。文件命名为:
dbname.tablename-schema.sql.gz
数据
**数据**
对于每个行数多余--rows参数的表, 创建文件名字为:
每个表会按 --rows 参数切分成若干块,每一块对应一个文件,文件名为:
dbname.tablename.0000n.sql.gz
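下面给出一个典型的备份加恢复流程的草图(数据库名、目录和密码均为示例;用于恢复的 myloader 与 mydumper 一同发布):

    # 用 4 个线程并行备份 mydb按 10 万行切块并压缩输出
    mydumper -u root -p 密码 -B mydb -o /backup/mydb -r 100000 -c -t 4

    # 从备份目录恢复到同名数据库(-o 表示覆盖已存在的表)
    myloader -u root -p 密码 -B mydb -d /backup/mydb -o -t 4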
@ -103,7 +100,7 @@ via: http://www.ubuntugeek.com/mydumper-mysql-database-backup-tool.html
作者:[ruchi][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,19 +1,17 @@
Translated by H-mudcup
Picty:让图片管理变简单
================================================================================
![](http://1102047360.rsc.cdn77.org/wp-content/uploads/2015/03/picty_002-790x429.png)
### 关于Picty ###
**Picty**是个免费,简单,却强大的照片收藏管理器,它可以帮助你管理你的照片。它的设计围绕着管理**元数据**和**无损**的处理图像的方法。Picty目前同时支持在线基于网页的和离线本地的收藏集。在本地的收藏集中图片将被保存在一个本地的文件夹和它的子文件夹中。为了加快用户主目录里图片的查询速度,它会维持一个数据库。在在线(基于网页的)收藏集中,你可以通过网页浏览器上传并分享图片。拥有适当权限的个人用户可以把图片分享给任何人,而且每个用户可以同时开放多个收藏集,收藏集也可以被多个用户分享。通过一个转载插件在收藏集间传递图片就有了个简单的交互界面
**Picty**是个免费,简单,却强大的照片收藏管理器,它可以帮助你管理你的照片。它是围绕着**元数据**管理和图像**无损**处理设计的。Picty目前同时支持在线基于网页的和离线本地的收藏集。在本地的收藏集中图片将被保存在一个本地的文件夹及其子文件夹中。为了加快用户主目录里图片的查询速度,它会维持一个数据库。在在线(基于网页的)收藏集中,你可以通过网页浏览器上传并分享图片。拥有适当权限的个人用户可以把图片分享给任何人,而且用户可以同时打开多个收藏集,收藏集也可以分享给多个用户。有个简单的界面可以通过传输插件在收藏集之间传输图片
你可以从你的相机或任何设备中下载任何数量的照片。除此之外Picty允许你在下载前浏览在你相机里的图片集。Picty是个轻量级的应用还有着清爽的界面。它支持Linux和Windows平台。
你可以从你的相机或任何设备中下载任何数量的照片。除此之外Picty允许你在下载前浏览在你相机里的图片集。Picty是个轻量级的应用界面清爽。它支持Linux和Windows平台。
### 功能 ###
- 支持大相片集20000张以上
- 同时开多个收藏集还可以在它们之间传照片。
- 同时开多个收藏集还可以在它们之间传照片。
- 收藏集包括:
- 本地文件系统中保存图片的文件夹。
- 相机、电话及其他媒体设备中的图片。
@ -21,28 +19,28 @@ Picty:让图片管理变简单
- Picty不是把相片“导入”到它的数据库中它仅仅提供了一个界面来访问它们不管它们保存在哪。为了保持迅速的反应以及能使你在离线时浏览图片的能力Picty会保存缩略图和元数据的缓存。
- 以业界标准格式Exif、IPTC和Xmp读写元数据。
- 无损的方法:
- Picty把所有改变包括图像编辑以元数据写入。例如一个图片可以以任何方式剪切保存原来的像仍然保存在该文件里。
- 修改会保存在Picty的收藏集缓存中直到你把你对元数据的修改保存到图片中你能很容易撤销你不喜欢的未保存的修改。
- 基本图片编辑:
- Picty把所有改变包括图像编辑以元数据的方式写入。例如,一个图片可以以任何方式剪切保存,原来的像仍然保存在该文件里。
- 修改会保存在Picty的收藏集缓存中直到你把你对元数据的修改保存到图片中,所以你能很容易撤销你不喜欢的未保存的修改。
- 基本图片编辑功能
- 目前支持基本的图像增强,如亮度、对比度、色彩、剪切以及矫正。
- Improvements to those tools and other tools coming soon (red eye reduction, levels, curves, noise reduction)对这些工具的改善和其他的工具即将到来。(红眼消除、拉伸、弯曲、噪声消除)
- 将要推出一些工具改进及更多工具。(红眼消除、拉伸、弯曲、噪声消除)
- 图片标签:
- 使用标准的IPTC和Xmp关键词为图片做标签。
- 一个树状标签图让你能很容易的管理标签和对你的收藏集进行导航。
- 一个树状标签图让你能很容易的管理标签和在收藏集内导航。
- 文件夹视图:
- 按照目录的结构对你的图片收藏进行导航
- 按照目录的结构对你的图片收藏进行导航
- 支持多屏显示
- Picty可以设置成让你在一个屏幕上浏览你的收藏集同时在另一个屏幕上全屏显示图片。
- 可个性化
- 可以为外部工具创建快捷方式
- 支持插件——目前提供的功能中有许多(标签和文件夹视图以及所有的图片编辑工具)都可以通过插件提供。
- 使用Python编写——自带batteriespython的这个特点使它可在mac、Linux和windows上直接安装使用无需复杂的设置。
- 使用Python编写——内置电池(batteries included)
### 安装方法 ###
#### 1、从PPA安装 ####
Picty开发人员为基于Debian的发行版如Ubuntu创建了一个PPA让安装更简单。
Picty开发人员为Ubuntu这样的基于 Debian的发行版创建了一个PPA让安装更简单。
要在Ubuntu和它的衍生版上安装请运行以下命令
@ -76,13 +74,13 @@ Picty开发人员为基于Debian的发行版如Ubuntu创建了一个PPA
![picty_001](http://www.unixmen.com/wp-content/uploads/2015/03/picty_001.png)
你可以选择已存在的收藏集、设备或目录。让我们创建一个**新收藏集** 。要这样做得先点击新收藏集New Collection按钮。进入收藏集然后浏览都你保存图片的地方。最后,点击**创建Create**按钮。
你可以选择已存在的收藏集、设备或目录。这里让我们创建一个**新收藏集** 请先点击新收藏集New Collection按钮。进入收藏集然后浏览到你保存图片的地方。最后,点击**创建Create**按钮。
![Create a Collection_001](http://www.unixmen.com/wp-content/uploads/2015/03/Create-a-Collection_001.png)
![picty_002](http://www.unixmen.com/wp-content/uploads/2015/03/picty_002.png)
你可以修改,旋转,添加/移除标签,设置每个图片的描述。要这么做,只需右击任何一个图片然后爱做什么做什么。
你可以对每张图片进行修改,旋转,添加/移除标签,设置描述。只需右击任何一个图片然后爱做什么做什么。
访问下面这个Google组可以得到更多关于Picty相片管理器的信息和支持。
@ -96,7 +94,7 @@ via: http://www.unixmen.com/picty-managing-photos-made-easy/
作者:[SK][a]
译者:[H-mudcup](https://github.com/H-mudcup)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,12 +1,12 @@
如何在Docker容器中运行GUI程序
================================================================================
各位,今天我们将学习如何在[Docker][1]之中运行GUI程序。我们可以轻易地在Docker容器中运行大多数GUI程序且不出错。Docker是一个开源项目提供了一个打包、分发和运行任意程序的轻量级容器的开放平台。它没有语言支持、框架或者打包系统的限制并可以在任何地方、任何时候从小型的家用电脑到高端的服务器都可以运行。这让人们可以打包不同的包用于部署和扩展网络应用数据库和后端服务而不必依赖于特定的栈或者提供商。
各位,今天我们将学习如何在[Docker][1]之中运行GUI程序。我们可以轻易地在Docker容器中运行大多数GUI程序且不出错。Docker是一个开源项目提供了一个打包、分发和运行任意程序的轻量级容器的开放平台。它没有语言支持、框架或者打包系统的限制并可以运行在任何地方、任何时候,从小型的家用电脑到高端的服务器都可以运行。这让人们可以打包不同的包用于部署和扩展网络应用,数据库和后端服务而不必依赖于特定的栈或者提供商。
下面是我们该如何在Docker容器中运行GUI程序的简单步骤。本教程中我们会用Firefox作为例子。
### 1. 安装 Docker ###
在开始我们首先得确保在Linux主机中已经安装了Docker。这里我运行的是CentOS 7 主机我们将运行yum管理器和下面的命令来安装Docker。
在开始前我们首先得确保在Linux主机中已经安装了Docker。这里我运行的是CentOS 7 主机我们将运行yum管理器和下面的命令来安装Docker。
# yum install docker
@ -16,7 +16,7 @@
### 2. 创建 Dockerfile ###
现在Docker守护进程已经在运行中了我们现在准备创建自己的Firefox Docker容器。我们要创建一个Dockerfile这里我们要输入需要的配置来创建一个可以工作的Firefox容器。我们取下CentOS中最新的Docker镜像。至此我们需要用文本编辑器创建一个名为Dockerfile的文件。
现在Docker守护进程已经在运行中了我们现在准备创建自己的Firefox Docker容器。我们要创建一个Dockerfile在其中我们要输入需要的配置来创建一个可以工作的Firefox容器。为了运行 Docker 镜像我们需要使用最新版本的CentOS。要创建 Docker 镜像我们需要用文本编辑器创建一个名为Dockerfile的文件。
# nano Dockerfile
@ -25,12 +25,12 @@
#!/bin/bash
FROM centos:7
RUN yum install -y firefox
# Replace 0 with your user / group id
# 用你自己的 uid /gid 替换下面的0
RUN export uid=0 gid=0
RUN mkdir -p /home/developer
RUN echo "developer:x:${uid}:${gid}:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd
RUN echo "developer:x:${uid}:" >> /etc/group
RUN echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers
RUN echo "developer ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
RUN chmod 0440 /etc/sudoers
RUN chown ${uid}:${gid} -R /home/developer
@ -56,13 +56,13 @@
### 4. 运行Docker容器 ###
现在,如果一切顺利,我们现在可以在运行着CentOS 7镜像的Docker容器中运行我们的GUI程序也就是Firefox浏览器了。
现在,如果一切顺利,我们现在可以在运行在CentOS 7镜像中的Docker容器里面运行我们的GUI程序也就是Firefox浏览器了。
# docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix firefox
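如果此前还没有构建过镜像,需要先在 Dockerfile 所在目录构建一次;镜像名 firefox 只是为了与上面 run 命令中用到的名称对应(以下为常规做法的示意):

    # docker build -t firefox .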
### 总结 ###
在Dcoker容器中运行GUI程序是一次很棒的体验它对你的主机文件系统没有任何的伤害。它完全依赖你的Docker容器。本教程中我尝试了CentOS 7 Docker中的Firefox。我们可以用这个技术尝试更多的GUI程序。如果你有任何问题、建议、反馈请在下面的评论栏中写下来这样我们可以提升或更新我们的内容。谢谢
在Docker容器中运行GUI程序是一次很棒的体验它对你的主机文件系统没有任何的伤害。它完全依赖你的Docker容器。本教程中我尝试了CentOS 7 Docker中的Firefox。我们可以用这个技术尝试更多的GUI程序。如果你有任何问题、建议、反馈请在下面的评论栏中写下来这样我们可以提升或更新我们的内容。谢谢
--------------------------------------------------------------------------------
@ -70,7 +70,7 @@ via: http://linoxide.com/linux-how-to/run-gui-apps-docker-container/
作者:[Arun Pyasi][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,52 +1,52 @@
10 个正需的IT技能会帮你职场成功
10个所需的IT技能你职场成功
===========================================================================
接我们上次的文章[[十大需的操作系统][1]]这篇文章得到了Tecmint社区很高的评价在本篇中我们将指点顶尖的IT技能这会帮助你找到理想的工作。
接我们上次的文章[[十大需的操作系统][1]]这篇文章得到了Tecmint社区很高的评价在本篇中我们将指点顶尖的IT技能这会帮助你找到理想的工作。
如第一篇文章提到的那样,这些资料和统计结果是会伴随市场和需求的变化而变化的。我们会尽可能地更新列表,无论何时只要有任何主要的变化。所有的统计数据基于最近的全球一些IT公司的招聘信息和需求。
如第一篇文章提到的那样,这些资料和统计结果是会伴随市场和需求的变化而变化的。只要有任何主要的变化,我们会尽可能地更新列表。所有的统计数据基于最近的全球一些IT公司的招聘信息和需求。
### 1. VMware ###
VMware公司设计的虚拟化和云计算软件高居榜首。VMware首次宣布商业支持x86架构的虚拟化。VMware的需求在上个季度已经增长自16%。
VMware公司设计的虚拟化和云计算软件高居榜首。VMware首次宣布商业支持x86架构的虚拟化。VMware的招聘需求在上个季度已经增长至16%。
最新稳定发行版: 11.0
### 2. MySQL ###
这款开源的关系型数据库管理系统屈尊第二。直到2013年MySQL都还是第二大使用广泛的RDBMSRelational Database Management System)。上季度MySQL的需求已经达到了11%。继甲骨文公司之后著名的MarialDB也已经被分出MySQL了。去掌握它
这款开源的关系型数据库管理系统憾居第二。直到2013年,MySQL都还是第二大使用广泛的RDBMS(Relational Database Management System)。上季度MySQL的招聘需求已经达到了11%。非常著名的 MariaDB 就是 MySQL 被甲骨文公司收购之后分出来的分支。值得掌握。
最新稳定发行版: 5.6.23
### 3. Apache ###
这个跨平台的开源网页HTTP)服务器占据了第三的位置。Apache的需求已经超过了13%截至上个季度
这个跨平台的开源网页HTTP)服务器位居第三。截至上个季度Apache的招聘需求已经超过了13%
最新稳定发行版: 2.4.12
### 4. AWS ###
亚马逊网页服务器是亚马逊网站提供的所有远程计算服务的集合。AWS排在第四。上个季度AWS的需求已经表现出将近14%的增长。
AWS是亚马逊网站提供的所有远程计算服务的集合AWS排在第四位。上个季度AWS的招聘需求已经呈现出将近14%的增长。
### 5. Puppet ###
Puppet作为配置管理系统被应用在设置IT基础架构列在第五。它用Ruby语言编写属于客户端-服务器型的结构。上个季度puppet的需求已经增长超过9%
Puppet作为配置管理系统被应用在设置IT基础架构它排在第五位。它用Ruby语言编写属于客户端-服务器型的结构。上个季度puppet的招聘需求已经增长超过9%
最新稳定发行版: 3.7.3
### 6. Hadoop ###
Hadoop是用Java写的一款开源软件框架用于处理大数据。列表中Hadoop位列第六。对Hadoop的需求在上个季度已经下降了0.2个百分点。
Hadoop是用Java写的一款开源软件框架用于处理大数据。列表中Hadoop位列第六。对Hadoop的招聘需求在上个季度已经下降了0.2个百分点。
最新稳定发行版: 2.6.0
### 7. Git ###
Linux Torvalds最初编写的著名版本控制系统Git排在了第七。Git的需求在上个季度已经超过了7%。
Linus Torvalds最初编写的著名版本控制系统Git排在了第七。Git的招聘需求在上个季度已经超过了7%。
最新稳定发行版: 2.3.4
### 8. Oracle PL/SQL ###
Oracle公司开发的SQL扩展版占据第八的位置。PL/SQL从Oracle 7后就包含在Oracle数据库中。它在上个季度已经现将近8%的衰退。
Oracle公司开发的SQL扩展版占据第八的位置。PL/SQL从Oracle 7后就包含在Oracle数据库中。它在上个季度已经现将近8%的衰退。
### 9. Tomcat ###
@ -58,6 +58,8 @@ Oracle公司开发的SQL扩展版占据第八的位置。PL/SQL从Oracle 7后
这款最著名的企业资源规划软件排在了第十。上个季度SAP在需求市场表现出将近3.5%的增长。
具体数据表格如下:
<table cellspacing="0" cellpadding="5" style="width: 804px;">
<colgroup>
<col width="88">
@ -122,7 +124,7 @@ Oracle公司开发的SQL扩展版占据第八的位置。PL/SQL从Oracle 7后
</tbody>
</table>
就这么多了。我会积极跟进这个系列的下个部分。保持发声,保持联系,保持评论。不要忘了给我们提供你的反馈。喜欢和分享我们,帮助我们传播开去
这篇文章就到这里,我会积极跟进这个系列的下一部分。敬请期待,保持联系,积极评论。不要忘了给我们提供你的反馈。喜欢的话就分享吧,让更多人认识我们
---------------------------------------------------------------------------
@ -130,7 +132,7 @@ via: http://www.tecmint.com/famous-it-skills-in-demand-that-will-get-you-hired/
作者:[Avishek Kumar][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,144 @@
直击 Elementary OS 0.3 Freya - 下载和安装指南
===========================================================================
Elementary OS是一个以Ubuntu为基础的轻量级操作系统广受欢迎。目前已经发行了三个版本而第四个版本将会以即将到来的Ubuntu16.04为基础开发。
- **Jupiter (0.1)**: 第一个Elementary OS稳定发行版基于Ubuntu 10.10在2011年三月发布。
- **Luna (0.2)**: Elementary OS第二个稳定发行版基于Ubuntu 12.04于2012年11月发布。
- **Freya (0.3)**: Elementary OS第三个稳定发行版,基于Ubuntu 14.04,于2015年2月8日发布。
- **Loki (0.4)**: 未来Elementary OS第四版计划以Ubuntu16.04为基础并且提供更新服务直到2021年。
Freya是目前最新的Elementary OS版本(0.3)。它最初被命名为ISIS,但后来为了避免与同名的恐怖组织产生任何联系而更名。Freya预装了一些非常不错的应用。
### 突出的特性 ###
这里列举了一些特性但并非Elementary OS 0.3的所有特性。
- 更好的交互性消息通知可在通知设定面板设定系统级别的“Do Not Disturb不要打扰”模式。
- 最新版Elementary OS提供更好的emoji表情支持及内置替换了网页应用中的微软核心字体。
- Privacy模式是一个新的防火墙工具易于使用可以帮助保护电脑免遭恶意脚本和应用的攻击。
- 统一风格的登入和锁定界面。
- 改进了界面效果和功能的应用中心菜单,包括快速操作列表,搜索拖放,支持快速数学计算。
- 重新设计的多任务视图提供更多以应用为中心的功能。
- 更新了软件栈Linux 3.16 Gtk 3.14 和Vala 0.26),更好的支持和增强了最新开发的应用。
- UEFI支持。
- 通过新的联网助手WiFi连接变得更容易。
### 下载64位&32位版本 ###
- [Elementary OS Freya 64 bit][1]
- [Elementary OS Freya 32 bit][2]
### 安装Elementary OS 0.3 (Freya) ###
下载Elementary OS 0.3的ISO文件并且写入到一个USB启动盘或者DVD/CD。支持32位和64位的架构。当计算机从Elementary OS ISO文件启动后有两个选项可用或不安装而仅试用或直接安装到计算机里。这里选择第二项。Elmentary OS也可以与已有操作系统并存安装构成双重启动。
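在 Linux 下,可以用 dd 把 ISO 写入 U 盘来制作启动盘(下面只是一种常见做法的示意,/dev/sdX 需要换成你的 U 盘设备名,务必确认无误,否则会覆盖错误磁盘上的数据):

    $ sudo dd if=elementaryos-freya-amd64.20150411.iso of=/dev/sdX bs=4M
    $ sync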
![Install Freya](http://blog.linoxide.com/wp-content/uploads/2015/04/Install-Freya.png)
在进行下面的步骤之前会检查系统要求和资源有效性。如果你的系统有足够的资源,点击继续。
![Installation Requirements](http://blog.linoxide.com/wp-content/uploads/2015/04/Installation-Requirements.png)
安装向导提供许多安装形式。选取最适合你的选项通常大多数都选用第一个选项“擦除磁盘以安装Elementary”。选择该选项必须保证你的原有数据都已经正确备份了因为磁盘分区将会被擦除其上所有的数据将会丢失。
![Installation Types](http://blog.linoxide.com/wp-content/uploads/2015/04/Installation-Types.png)
接下来的对话框显示了Elementary OS所使用和需要格式化的磁盘分区列表确保数据完整后点击继续。
![Format Warning](http://blog.linoxide.com/wp-content/uploads/2015/04/Format-Warning.png)
选择你的地理位置,确定时区,点击继续。
![Location](http://blog.linoxide.com/wp-content/uploads/2015/04/Location.png)
选择你的语言,点击继续。
![Language](http://blog.linoxide.com/wp-content/uploads/2015/04/Language.png)
填入你的信息,选择一个高强度的超级用户/管理员密码,点击继续。
![whoareyou](http://blog.linoxide.com/wp-content/uploads/2015/04/whoareyou.png)
当你的信息提供后,核心安装进程就会启动,正在安装的组件的详细信息会在一个小对话框里随进度条一闪而过。
![Installation progress](http://blog.linoxide.com/wp-content/uploads/2015/04/Installation-progress.png)
恭喜你最新的Elementary OS 0.3 (Freya)已经安装完成了。此时需要重启来更新和完整注册,恭喜。
![Installation Complet](http://blog.linoxide.com/wp-content/uploads/2015/04/Installation-Complet.png)
启动时Elementary OS将显示它优雅的logo然后会出现密码保护的管理员登入和游客访问选项。游客访问有相当多的限制功能而且没有安装软件的权限。
![Login](http://blog.linoxide.com/wp-content/uploads/2015/04/Login.png)
下图是新安装的Elementary OS 0.3的画面。
![first look](http://blog.linoxide.com/wp-content/uploads/2015/04/first-look.png)
### 个性化桌面 ###
Elementary OS 0.3以其轻巧和美观而为我们熟知每个人有自己独特的审美观和计算机使用习惯。桌面反映出每一个计算机使用者的个人偏好。如其他操作系统一样Elementary OS 0.3也提供了许多选项来个性化配置桌面,包括壁纸,字体大小,主题等等。
基本的个性化配置点击Applications > System Settings > Desktop
我们可以改变壁纸泊板dock和启用桌面热角。
默认提供了很少的壁纸,更多的可以从网上下载或者从你的相机传输过来。
![Desktop Wallpaper](http://blog.linoxide.com/wp-content/uploads/2015/04/Desktop-Wallpaper4.png)
Elementary OS真正的美丽在于优雅的泊板。桌面上没有任何图标泊板上的应用图标显示逼真通过它可以快速访问常用应用。
![Desktop Dock](http://blog.linoxide.com/wp-content/uploads/2015/04/Desktop-Dock1.png)
用户可以定制桌面的四个角的功能。
![Hot Corners](http://blog.linoxide.com/wp-content/uploads/2015/04/Hot-Corners.png)
通过安装elementary tweaks工具来更深入的个性化定制。
可以使用如下命令将稳定的个人软件包档案PPA添加到高级软件包管理工具APT仓库。
sudo add-apt-repository ppa:mpstark/elementary-tweaks-daily
![ppa](http://blog.linoxide.com/wp-content/uploads/2015/04/elementary-tweaks-ppa.png)
一旦软件包添加到仓库后,我们需要用以下命令更新仓库
sudo apt-get update
![update repository](http://blog.linoxide.com/wp-content/uploads/2015/04/update-repository.png)
更新仓库后我们就可以安装elementary-tweaks用以下命令完成
sudo apt-get install elementary-tweaks
![install elementary tweaks](http://blog.linoxide.com/wp-content/uploads/2015/04/install-elementary-tweaks.png)
我们可以在Application > System Settings下的个人区域的看到增加了一个Tweaks项目。它现在可以给我们提供更多的个性化定制选项。
![tweaks](http://blog.linoxide.com/wp-content/uploads/2015/04/tweaks.png)
为了进一步定制我们也安装了gnome桌面系统的tweak工具演示解锁桌面。
sudo apt-get install gnome-tweak-tool
![gnome](http://blog.linoxide.com/wp-content/uploads/2015/04/gnome.png)
### 总结 ###
Elementary OS十分接近Linux发行版Ubuntu它的优缺点两方面也都十分相似。Elementary OS在外观和体验上都十分轻巧和优雅并且正在快速地走向成熟。它有潜力成为Windows和OS X操作系统之外的第三选择。最新的Elementary OS 0.3Freya以其良好的功能基础而迅速流行。想了解更多信息最近的更新和下载请访问其官方[网站][1]。
--------------------------------------------------------------------------------
via: http://linoxide.com/ubuntu-how-to/elementary-os-0-3-freya-install-guide/
作者:[Aun Raza][a]
译者:[wi-cuckoo](https://github.com/wi-cuckoo)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunrz/
[1]:http://sourceforge.net/projects/elementaryos/files/stable/elementaryos-freya-amd64.20150411.iso/download
[2]:http://sourceforge.net/projects/elementaryos/files/stable/elementaryos-freya-i386.20150411.iso/download
[3]:http://elementary.io/

View File

@ -0,0 +1,148 @@
如何在Ubuntu/CentOS上安装Linux内核4.0
================================================================================
大家好今天我们学习一下如何从Elrepo或者源代码来安装最新的Linux内核4.0。代号为Hurr durr I'm a sheep的Linux内核4.0是目前为止最新的主干内核。它是稳定版3.19.4之后发布的内核。4月12日是所有的开源运动爱好者的大日子Linux Torvalds宣布了Linux内核4.0的发布,它现在就已经可用了。由于包括了一些很棒的功能,例如无重启补丁(实时补丁)新的升级驱动最新的硬件支持以及很多有趣的功能都有新的版本它原本被期望是一次重要版本。但是实际上内核4.0并不认为是期望中的重要版本Linus 表示期望4.1会是一个更重要的版本。实时补丁功能已经集成到了SUSE企业版Linux操作系统上。你可以在[发布公告][1]上查看关于这次发布的更多详细内容。
> **警告** 安装新的内核可能会导致你的系统不可用或不稳定。如果你仍然使用以下命令继续安装,请确保备份所有重要数据到外部硬盘。
### 在Ubuntu 15.04上安装Linux内核4.0 ###
如果你正在使用Linux的发行版Ubuntu 15.04你可以直接通过Ubuntu内核网站安装。在你的Ubuntu15.04上安装最新的Linux内核4.0你需要在shell或终端中在root访问权限下运行以下命令。
#### 在 64位 Ubuntu 15.04 ####
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0-vivid/linux-headers-4.0.0-040000-generic_4.0.0-040000.201504121935_amd64.deb
$ sudo dpkg -i linux-headers-4.0.0*.deb linux-image-4.0.0*.deb
#### 在 32位 Ubuntu 15.04 ####
$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.0-vivid/linux-headers-4.0.0-040000-generic_4.0.0-040000.201504121935_i386.deb
$ sudo dpkg -i linux-headers-4.0.0*.deb linux-image-4.0.0*.deb
### 在CentOS 7上安装Linux内核4.0 ###
我们可以用两种简单的方式在CentOS 7上安装Linux内核4.0。
1. 从Elrepo软件仓库安装
1. 从源代码编译安装
我们首先用ElRepo安装这是最简单的方式
#### 使用 Elrepo 安装 ####
**1. 下载和安装ELRepo**
我们首先下载ELRepo的GPG密钥并安装elrepo-release安装包。因为我们用的是CentOS 7,我们使用以下命令安装elrepo-release-7.0-2.el7.elrepo.noarch.rpm。
注: 如果你启用了secure boot请查看[这个网页获取更多信息][2]。
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
![添加 Elrepo 源](http://blog.linoxide.com/wp-content/uploads/2015/04/adding-elrepo.png)
**2. 升级Linux内核到4.0版本**
现在我们准备从ELRepo软件仓库安装最新的稳定版内核4.0。安装它我们需要在CentOS 7的shell或者终端中输入以下命令。
# yum --enablerepo=elrepo-kernel install kernel-ml
![从ELRepo安装Linux内核4.0](http://blog.linoxide.com/wp-content/uploads/2015/04/installing-kernel-4-0-elrepo.png)
上面的命令会自动安装为CentOS 7构建的Linux内核4.0。
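安装完成后,新内核不一定会被设置为默认启动项。如果需要,可以用 grub2 工具把它设为默认(下面只是常见做法的示意,这里假设新内核排在 GRUB 菜单的第 0 项):

    # grub2-set-default 0
    # grub2-mkconfig -o /boot/grub2/grub.cfg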
现在下面的是另一种方式通过编译源代码安装最新的内核4.0。
#### 从源代码编译安装 ####
**1. 安装依赖软件**
首先我们需要为编译linux内核安装依赖的软件。要完成这些我们需要在一个终端或者shell中运行以下命令。
# yum groupinstall "Development Tools"
# yum install gcc ncurses ncurses-devel
![安装内核依赖](http://blog.linoxide.com/wp-content/uploads/2015/04/installing-dependencies.png)
然后,我们会升级我们的整个系统。
# yum update
**2. 下载源代码**
现在我们通过wget命令从Linux内核的官方仓库中下载最新发布的linux内核4.0的源代码。你也可以使用你的浏览器直接从[kernel.org][3]网站下载内核。
# cd /tmp/
# wget https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.0.tar.xz
![下载内核源码](http://blog.linoxide.com/wp-content/uploads/2015/04/download-kernel-source.png)
**3. 解压tar压缩包**
文件下载好后我们在/usr/src/文件夹下用以下命令解压。
# tar -xf linux-4.0.tar.xz -C /usr/src/
# cd /usr/src/linux-4.0/
![解压内核tar压缩包](http://blog.linoxide.com/wp-content/uploads/2015/04/extracting-kernel-tarball.png)
**4. 配置**
配置Linux内核有两种选择的。我们可以创建一个新的自定义配置文件或者使用已有的配置文件来构建和安装Linux内核。这都取决于你自己的需要。
**配置新的内核**
现在我们在shell或终端中运行make menuconfig命令来配置Linux内核。我们执行以下命令后会显示一个包含所有菜单的弹出窗口。在这里我们可以选择我们新的内核配置。如果你不熟悉这些菜单那就敲击ESC键两次退出。
# make menuconfig
![配置新内核](http://blog.linoxide.com/wp-content/uploads/2015/04/configuring-new-kernel-config.png)
**已有的配置**
如果你想用已有的配置文件配置你最新的内核那就输入下面的命令。如果你对配置有任何调整你可以选择Y或者N或者仅仅是按Enter键继续。
# make oldconfig
#### 5. 编译Linux内核 ####
下一步我们会执行make命令来编译内核4.0。取决于你的系统配置编译至少需要20-30分钟。
注:如果编译内核的时候出现`bc command not found`的错误,你可以用**yum install bc**命令安装bc修复这个错误。
# make
![Make 内核](http://blog.linoxide.com/wp-content/uploads/2015/04/make-kernel.png)
#### 6. 安装Linux内核4.0 ####
编译完成后我们终于要在你的Linux系统上安装**内核**了。下面的命令会在/boot目录下创建文件并且在Grub 菜单中新建一个内核条目。
# make modules_install install
#### 7. 验证内核 ####
安装完最新的内核4.0后我们希望能验证它。做这些我们只需要在终端中输入以下命令。如果所有都进展顺利我们会看到内核版本例如4.0出现在输出列表中。
# uname -r
#### 结论 ####
好了我们成功地在我们的CentOS 7操作系统上安装了最新的Linux内核版本4.0。通常并不需要升级linux内核因为和之前版本运行良好的硬件可能并不适合新的版本。我们要确保它包括能使你的硬件正常工作的功能和配件。但大部分情况下新的稳定版本内核能使你的硬件性能更好。因此如果你有任何问题评论反馈请在下面的评论框中注明让我们知道需要增加或者删除什么问题。多谢享受最新的稳定版Linux内核4.0吧 :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/how-tos/install-linux-kernel-4-0-elrepo-source/
作者:[Arun Pyasi][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/
[1]:http://linux.cn/article-5259-1.html
[2]:http://elrepo.org/tiki/SecureBootKey
[3]:http://kernel.org/

View File

@ -0,0 +1,32 @@
GitHub 上最流行的编程语言
================================================================================
![](http://www.loggly.com/wp-content/uploads/2015/04/Infographic_Github_popular_languages_Blogheader.png)
编程语言不仅仅是开发者用来创建程序或表达算法的工具,它们也是对创造力进行编码和解码的仪器。通过观察编程语言的历史,我们可以从一个独特的视角,看到人们如何寻求更好的方式来解决问题、促进协作、构建好的产品以及重用他人的工作。
我们有大约 70% 的客户向我们的服务发送应用日志,因此我们能追踪哪种语言是最流行的,以及哪种语言获得了开发人员的关注。
基于从2012年以来的历史的[GitHub 归档][1]和[GitHut][2]数据我们分析了GitHub上大部分开发者的动作并绘制成你下面看到的信息图表。我们主要关注
- 活跃库的数量,这是反映了人们正在研究的项目的有用度量。
- 每种语言总的推送数量以及每个库的平均推送次数。这些指标是由某种语言编写的项目的创新效率的指示器。
- 每个库新的fork数和发现的问题数目这也显示了活跃度和创新性。
- 每个库新的观察者,这是开发人员兴趣的指示器。
### 查看信息图表并告诉我们你的想法!在你的同龄人中是怎么选择你使用的语言的? ###
![](http://www.loggly.com/wp-content/uploads/2015/04/Most-Popular-Languages-According-to-GitHub-Since-2012-loggly-infographic_v3.png)
--------------------------------------------------------------------------------
via: https://www.loggly.com/blog/the-most-popular-programming-languages-in-to-github-since-2012/
作者:[Justin Mares][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://www.loggly.com/blog/author/guest/
[1]:https://www.githubarchive.org/
[2]:http://githut.info/

View File

@ -0,0 +1,238 @@
安装完最小化 RHEL/CentOS 7 后需要做的 30 件事情(一)
================================================================================
CentOS 是一个工业标准的 Linux 发行版,是红帽企业版 Linux 的衍生版本。你安装完后马上就可以使用,但是为了更好地使用你的系统,你需要进行一些升级、安装新的软件包、配置特定服务和应用程序等操作。
这篇文章介绍了 “安装完 RHEL/CentOS 7 后需要做的 30 件事情”。阅读帖子的时候请先完成 RHEL/CentOS 最小化安装,这是首选的企业和生产环境。如果还没有,你可以按照下面的指南,它会告诉你两者的最小化安装方法。
- [最小化安装 CentOS 7][1]
- [最小化安装 RHEL 7][2]
我们会基于工业标准的需求来介绍以下列出的这些重要工作。我们希望这些东西在你配置服务器的时候能有所帮助。
1. 注册并启用红帽订阅
2. 使用静态 IP 地址配置网络
3. 设置服务器的主机名称
4. 更新或升级最小化安装的 CentOS
5. 安装命令行 Web 浏览器
6. 安装 Apache HTTP 服务器
7. 安装 PHP
8. 安装 MariaDB 数据库
9. 安装并配置 SSH 服务器
10. 安装 GCC (GNU 编译器集)
11. 安装 Java
12. 安装 Apache Tomcat
13. 安装 Nmap 检查开放端口
14. 配置防火墙
15. 安装 Wget
16. 安装 Telnet
17. 安装 Webmin
18. 启用第三方库
19. 安装 7-zip 工具
20. 安装 NTFS-3G 驱动
21. 安装 Vsftpd FTP 服务器
22. 安装和配置 sudo
23. 安装并启用 SELinux
24. 安装 Rootkit Hunter
25. 安装 Linux Malware Detect (LMD)
26. 用 Speedtest-cli 测试服务器带宽
27. 配置 Cron 作业
28. 安装 Owncloud
29. 启用 VirtualBox 虚拟化
30. 用密码保护 GRUB
### 1. 注册并启用红帽订阅 ###
RHEL 7 最小化安装完成后,就应该注册并启用系统红帽订阅库,并执行一个完整的系统更新。这只有当你有一个可用的红帽订阅时才有用。你要注册才能启用官方红帽系统库,并时不时进行操作系统更新。(LCTT 译注:订阅服务是收费的)
在下面的指南中我们已经包括了一个如何注册并激活红帽订阅的详细说明。
- [在 RHEL 7 中注册并启用红帽订阅][3]
**注意**: 这一步仅适用于有一个有效订阅的红帽企业版 Linux。如果你用的是 CentOS 服务器,请查看后面的章节。
### 2. 使用静态 IP 地址配置网络 ###
你第一件要做的事情就是为你的 CentOS 服务器配置静态 IP 地址、路由以及 DNS。我们会使用 ip 命令代替 ifconfig 命令。当然ifconfig 命令对于大部分 Linux 发行版来说还是可用的,还能从默认库安装。
# yum install net-tools [它提供 ifconfig 工具,如果你不习惯 ip 命令,还可以使用它]
![在 Linux 上安装 ifconfig](http://www.tecmint.com/wp-content/uploads/2015/04/Install-ifconfig.jpeg)
LCTT 译注:关于 ip 命令的使用请参照http://www.linux.cn/article-3631-1.html
但正如我之前说,我们会使用 ip 命令来配置静态 IP 地址。所以,确认你首先检查了当前的 IP 地址。
# ip addr show
![在 CentOS 查看 IP 地址](http://www.tecmint.com/wp-content/uploads/2015/04/Check-IP-Address.jpeg)
现在用你的编辑器打开并编辑文件 /etc/sysconfig/network-scripts/ifcfg-enp0s3 LCTT 译注你的网卡名称可能不同如果希望修改为老式网卡名称参考http://www.linux.cn/article-4045-1.html )。这里,我使用 vi 编辑器,另外你要确保你是 root 用户才能保存更改。
# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
我们会编辑文件中的四个地方。注意下面的四个地方并保证不碰任何其它的东西。也保留双引号,在它们中间输入你的数据。
IPADDR = "[在这里输入你的静态 IP]"
GATEWAY = "[输入你的默认网关]"
DNS1 = "[你的DNS 1]"
DNS2 = "[你的DNS 2]"
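作为参考,一个填写完成的 ifcfg-enp0s3 文件大致如下(IP、网关和 DNS 均为示例值;使用静态地址时 BOOTPROTO 应设置为 none 或 static):

    BOOTPROTO="none"
    ONBOOT="yes"
    IPADDR="192.168.1.100"
    PREFIX="24"
    GATEWAY="192.168.1.1"
    DNS1="8.8.8.8"
    DNS2="8.8.4.4"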
更改了 ifcfg-enp0s3 之后,它看起来像下面的图片。注意你的 IP网关和 DNS 可能会变化,请和你的 ISP(译者注:互联网服务提供商,即给你提供接入的服务的电信或 IDC) 确认。保存并退出。
![网络详情](http://www.tecmint.com/wp-content/uploads/2015/04/Network-Details.jpeg)
*网络详情*
重启网络服务并检查 IP 是否和分配的一样。如果一切都顺利,用 Ping 查看网络状态。
# service network restart
![重启网络服务](http://www.tecmint.com/wp-content/uploads/2015/04/Restarat-Network.jpeg)
*重启网络服务*
重启网络后,确认检查了 IP 地址和网络状态。
# ip addr show
# ping -c4 google.com
![验证 IP 地址](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-IP-Address.jpeg)
*验证 IP 地址*
![检查网络状态](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Network-Status.jpeg)
*检查网络状态*
LCTT 译注:关于设置静态 IP 地址的更多信息请参照http://www.linux.cn/article-3977-1.html
### 3. 设置服务器的主机名称 ###
下一步是更改 CentOS 服务器的主机名称。查看当前分配的主机名称。
# echo $HOSTNAME
![查看系统主机名称](http://www.tecmint.com/wp-content/uploads/2015/04/Check-System-Hostname.jpeg)
*查看系统主机名称*
要设置新的主机名称,我们需要编辑 /etc/hostname 文件,并用想要的名称替换旧的主机名称。
# vi /etc/hostname
![在 CentOS 中设置主机名称](http://www.tecmint.com/wp-content/uploads/2015/04/Set-System-Hostname.jpeg)
*在 CentOS 中设置主机名称*
设置完了主机名称之后,务必注销后重新登录确认主机名称。登录后检查新的主机名称。
$ echo $HOSTNAME
![确认主机名称](http://www.tecmint.com/wp-content/uploads/2015/04/Confirm-Hostname.jpeg)
*确认主机名称*
你也可以用 hostname 命令查看你当前的主机名。
$ hostname
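在 CentOS 7 上,也可以直接用 hostnamectl 来设置主机名,无需手动编辑文件(主机名仅为示例):

    # hostnamectl set-hostname tecmint-server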
LCTT 译注关于设置静态、瞬态和灵活主机名的更多信息请参考http://www.linux.cn/article-3937-1.html
### 4. 更新或升级最小化安装的 CentOS ###
这样做只会把已安装的软件更新到最新版本并进行安全升级,不会安装任何新的软件。总的来说,更新(update)和升级(upgrade)基本相同,区别只在于:升级 = 更新 + 更新时对废弃包的处理。
# yum update && yum upgrade
![更新最小化安装的 CentOS 服务器](http://www.tecmint.com/wp-content/uploads/2015/04/Update-CentOS-Server.jpeg)
*更新最小化安装的 CentOS 服务器*
**重要**: 你也可以运行下面的命令,这不会弹出软件更新的提示,你也就不需要输入 y 接受更改。
然而,查看服务器上会发生的变化总是一个好主意,尤其是在生产中。因此使用下面的命令虽然可以为你自动更新和升级,但并不推荐。
# yum -y update && yum -y upgrade
### 5. 安装命令行 Web 浏览器 ###
大部分情况下,尤其是在生产环境中,我们通常用没有 GUI 的命令行安装 CentOS在这种情况下我们必须有一个能通过终端查看网站的命令行浏览工具。为了实现这个目的我们打算安装名为 links 的著名工具。
# yum install links
![安装命令行浏览器](http://www.tecmint.com/wp-content/uploads/2015/04/Install-Commandline-Browser.jpeg)
*Links: 命令行 Web 浏览器*
请查看我们的文章 [用 links 工具命令行浏览 Web][4] 了解用 links 工具浏览 web 的方法和例子。
### 6. 安装 Apache HTTP 服务器 ###
不管你因为什么原因使用服务器,大部分情况下你都需要一个 HTTP 服务器运行网站、多媒体、用户端脚本和很多其它的东西。
# yum install httpd
![在 CentOS 上安装 Apache](http://www.tecmint.com/wp-content/uploads/2015/04/Install-Apache-on-CentOS.jpeg)
*安装 Apache 服务器*
如果你想更改 Apache HTTP 服务器的默认端口号(80)为其它端口,你需要编辑配置文件 /etc/httpd/conf/httpd.conf 并查找以下面开始的行:
LISTEN 80
把端口号 80 改为其它任何端口(例如 3221),保存并退出。
![在 CentOS 上更改 Apache 端口](http://www.tecmint.com/wp-content/uploads/2015/04/Change-Apache-Port.jpeg)
*更改 Apache 端口*
增加刚才分配给 Apache 的端口通过防火墙,然后重新加载防火墙。
允许 http 服务通过防火墙(永久)。
# firewall-cmd --permanent --add-service=http
允许 3221 号端口通过防火墙(永久)。
# firewall-cmd --permanent --add-port=3221/tcp
重新加载防火墙。
# firewall-cmd --reload
LCTT 译注:关于 firewall 的进一步使用请参照http://www.linux.cn/article-4425-1.html
完成上面的所有事情之后,是时候重启 Apache HTTP 服务器了,然后新的端口号才能生效。
# systemctl restart httpd.service
现在添加 Apache 服务到系统层使其随系统自动启动。
# systemctl start httpd.service
# systemctl enable httpd.service
LCTT 译注:关于 systemctl 的进一步使用请参照http://www.linux.cn/article-3719-1.html
如下图所示,用 links 命令行工具 验证 Apache HTTP 服务器。
# links 127.0.0.1
![验证 Apache 状态](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-Apache-Status.jpeg)
*验证 Apache 状态*
--------------------------------------------------------------------------------
via: http://www.tecmint.com/things-to-do-after-minimal-rhel-centos-7-installation/
作者:[Avishek Kumar][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/centos-7-installation/
[2]:http://www.tecmint.com/redhat-enterprise-linux-7-installation/
[3]:http://www.tecmint.com/enable-redhat-subscription-reposiories-and-updates-for-rhel-7/
[4]:http://www.tecmint.com/command-line-web-browsers/

View File

@ -0,0 +1,154 @@
安装完最小化 RHEL/CentOS 7 后需要做的 30 件事情(二)
================================================================================
### 7. 安装 PHP ###
PHP 是用于 web 基础服务的服务器端脚本语言。它也经常被用作通用编程语言。在最小化安装的 CentOS 中安装 PHP
# yum install php
安装完 php 之后,确认重启 Apache 服务以便在 Web 浏览器中渲染 PHP。
# systemctl restart httpd.service
下一步,通过在 Apache 文档根目录下创建下面的 php 脚本验证 PHP。
# echo -e "<?php\nphpinfo();\n?>" > /var/www/html/phpinfo.php
现在在 Linux 命令行中查看我们刚才创建的 PHP 文件(phpinfo.php)。
# php /var/www/html/phpinfo.php
或者
# links http://127.0.0.1/phpinfo.php
![验证 PHP](http://www.tecmint.com/wp-content/uploads/2015/04/Verify-PHP.jpeg)
*验证 PHP*
### 8. 安装 MariaDB 数据库 ###
MariaDB 是 MySQL 的一个分支。RHEL 以及它的衍生版已经从 MySQL 迁移到 MariaDB。这是一个主流的数据库管理系统也是一个你必须拥有的工具。不管你在配置怎样的服务器或迟或早你都会需要它。在最小化安装的 CentOS 上安装 MariaDB如下所示
# yum install mariadb-server mariadb
![安装 MariaDB 数据库](http://www.tecmint.com/wp-content/uploads/2015/04/Install-MariaDB-Database.jpeg)
*安装 MariaDB 数据库*
启动 MariaDB 并配置它开机时自动启动。
# systemctl start mariadb.service
# systemctl enable mariadb.service
允许 mysql(mariadb) 服务通过防火墙LCTT 译注:如果你的 MariaDB 只用在本机,则务必不要设置防火墙允许通过,使用 UNIX Socket 连接你的数据库;如果需要在别的服务器上连接数据库,则尽量使用内部网络,而不要将数据库服务暴露在公开的互联网上。)
# firewall-cmd --add-service=mysql
现在是时候确保 MariaDB 服务器安全了LCTT 译注:这个步骤主要是设置 mysql 管理密码)。
# /usr/bin/mysql_secure_installation
![保护 MariaDB 数据库](http://www.tecmint.com/wp-content/uploads/2015/04/Secure-MariaDB.jpeg)
*保护 MariaDB 数据库*
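设置好 root 密码后,可以用下面这样的命令简单验证一下服务是否正常(仅为示例):

    # mysql -u root -p -e "SHOW DATABASES;"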
请阅读:
- [在 CentOS 7.0 上安装 LAMP (Linux, Apache, MariaDB, PHP/PhpMyAdmin)][1]
- [在 CentOS 7.0 上创建 Apache 虚拟主机][2]
### 9. 安装和配置 SSH 服务器 ###
SSH 即 Secure Shell是 Linux 远程管理的默认协议。 SSH 是随最小化 CentOS 服务器中安装运行的最重要的软件之一。
检查当前已安装的 SSH 版本。
# ssh -V
![检查 SSH 版本](http://www.tecmint.com/wp-content/uploads/2015/04/Check-SSH-Version.jpeg)
*检查 SSH 版本*
使用更安全的 SSH 协议 2 而不是默认的旧协议,并更改默认端口号,进一步加强安全。编辑 SSH 服务的配置文件 /etc/ssh/sshd_config:
去掉下面行的注释或者从协议行中删除 1然后行看起来像这样LCTT 译注: SSH v1 是过期废弃的不安全协议):
# Protocol 2,1 (原来)
Protocol 2 (现在)
这个改变强制 SSH 使用协议 2,它被认为比协议 1 更安全;同时也请确保在配置中将默认端口号 22 更改为其它端口。
![保护 SSH 登录](http://www.tecmint.com/wp-content/uploads/2015/04/Secure-SSH.jpeg)
*保护 SSH 登录*
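修改完成后,相关的配置行大致如下(端口 2222 仅为示例,可以换成任何未被占用的端口):

    Protocol 2
    Port 2222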
取消 SSH 中的 root 直接登录,只允许通过普通用户账号登录后再用 su 切换到 root,以进一步加强安全。请打开并编辑配置文件 /etc/ssh/sshd_config,并将 PermitRootLogin yes 更改为 PermitRootLogin no。
# PermitRootLogin yes (原来)
PermitRootLogin no (现在)
![取消 SSH Root 登录](http://www.tecmint.com/wp-content/uploads/2015/04/Disable-SSH-Root-Login.jpeg)
*取消 SSH Root 直接登录*
最后,重启 SSH 服务启用更改。
# systemctl restart sshd.service
请查看:
- [加密和保护 SSH 服务器的 5 个最佳实践][3]
- [5 个简单步骤实现使用 SSH Keygen 无密码登录 SSH][4]
- [在 PuTTY 中实现 “无密码 SSH 密钥验证”][5]
### 10. 安装 GCC (GNU 编译器集) ###
GCC 即 GNU 编译器集,是一个 GNU 项目开发的支持多种编程语言的编译系统LCTT 译注:在你需要自己编译构建软件时需要它)。在最小化安装的 CentOS 没有默认安装。运行下面的命令安装 gcc 编译器。
# yum install gcc
![在 CentOS 上安装 GCC](http://www.tecmint.com/wp-content/uploads/2015/04/Install-GCC-in-CentOS.jpeg)
*在 CentOS 上安装 GCC*
检查安装的 gcc 版本。
# gcc --version
![检查 GCC 版本](http://www.tecmint.com/wp-content/uploads/2015/04/Check-GCC-Version.jpeg)
*检查 GCC 版本*
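也可以编译一个最小的 C 程序来确认编译器工作正常(示例):

    # cat > hello.c <<'EOF'
    #include <stdio.h>
    int main(void) { printf("Hello, CentOS 7\n"); return 0; }
    EOF
    # gcc hello.c -o hello && ./hello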
### 11. 安装 Java ###
Java是一种通用的基于类的面向对象的编程语言。在最小化 CentOS 服务器中没有默认安装LCTT 译注:如果你没有任何 Java 应用,可以不用装它)。按照下面命令从库中安装 Java。
# yum install java
![在 CentOS 上安装 Java](http://www.tecmint.com/wp-content/uploads/2015/04/Install-java.jpeg)
*安装 Java*
检查安装的 Java 版本。
# java -version
![检查 Java 版本](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Java-Version.jpeg)
*检查 Java 版本*
--------------------------------------------------------------------------------
via: http://www.tecmint.com/things-to-do-after-minimal-rhel-centos-7-installation/2/
作者:[Avishek Kumar][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/install-lamp-in-centos-7/
[2]:http://www.tecmint.com/apache-virtual-hosting-in-centos/
[3]:http://www.tecmint.com/5-best-practices-to-secure-and-protect-ssh-server/
[4]:http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
[5]:http://www.tecmint.com/ssh-passwordless-login-with-putty/

View File

@ -0,0 +1,277 @@
安装完最小化 RHEL/CentOS 7 后需要做的 30 件事情(三)
================================================================================
### 12. 安装 Apache Tomcat ###
Tomcat 是由 Apache 设计的用来运行 Java HTTP web 服务器的 servlet 容器。按照下面的方法安装 tomcat但需要指出的是安装 tomcat 之前必须先安装 Java。
# yum install tomcat
![安装 Apache Tomcat](http://www.tecmint.com/wp-content/uploads/2015/04/Install-Apache-Tomcat.jpeg)
*安装 Apache Tomcat*
安装完 tomcat 之后,启动 tomcat 服务。
# systemctl start tomcat
查看 tomcat 版本。
# /usr/sbin/tomcat version
![查看 tomcat 版本](http://www.tecmint.com/wp-content/uploads/2015/04/Check-tomcat-version.jpeg)
*查看 tomcat 版本*
允许 tomcat 服务和默认端口(8080) 通过防火墙并重新加载设置。
# firewall-cmd --zone=public --add-port=8080/tcp --permanent
# firewall-cmd --reload
现在该保护 tomcat 服务器了,添加一个用于访问和管理的用户和密码。我们需要编辑文件 /etc/tomcat/tomcat-users.xml,找到类似下面的 <tomcat-users> ... </tomcat-users> 小节,并在结束标签之前加入下面列出的角色和用户定义(用户名和密码请换成你自己的):
<tomcat-users>
....
</tomcat-users>
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<role rolename="manager-jmx"/>
<role rolename="manager-status"/>
<role rolename="admin-gui"/>
<role rolename="admin-script"/>
<user username="tecmint" password="tecmint" roles="manager-gui,manager-script,manager-jmx,manager-status,admin-gui,admin-script"/>
</tomcat-users>
![保护 Tomcat](http://www.tecmint.com/wp-content/uploads/2015/04/Secure-Tomcat.jpeg)
*保护 Tomcat*
我们在这里添加用户 “tecmint” 到 tomcat 的管理员/管理组中,使用 “tecmint” 作为密码。先停止再启动 tomcat 服务以使更改生效,并添加 tomcat 服务到随系统启动。
# systemctl stop tomcat
# systemctl start tomcat
# systemctl enable tomcat.service
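重启完成后,可以用前面安装的 links 命令行浏览器检查 Tomcat 的默认页面是否可以访问(8080 为其默认端口):

    # links http://127.0.0.1:8080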
请阅读: [在 RHEL/CentOS 7.0/6.x 中安装和配置 Apache Tomcat 8.0.9][5]
### 13. 安装 Nmap 监视开放端口 ###
Nmap 网络映射器用来分析网络通过运行它可以发现网络的映射关系。nmap 并没有默认安装,你需要从库中安装它。
# yum install nmap
![安装 Nmap 监视工具](http://www.tecmint.com/wp-content/uploads/2015/04/Install-Nmap.jpeg)
*安装 Nmap 监视工具*
列出主机中所有的开放端口以及对应使用它们的服务。
# nmap 127.0.0.1
![监视开放端口](http://www.tecmint.com/wp-content/uploads/2015/04/Monitor-Open-Ports.jpeg)
*监视开放端口*
你也可以使用 firewall-cmd 列出所有端口,但我发现 nmap 更有用。
# firewall-cmd --list-ports
![在防火墙中检查开放端口](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Open-Ports-in-Firewall.jpeg)
*在防火墙中检查开放端口*
请阅读: [Nmap 监视开放端口的 29 个有用命令][1]
### 14. 配置 FirewallD ###
firewalld 是动态管理服务器的防火墙服务。在 CentOS 7 中,Firewalld 取代了 iptables 服务。在红帽企业版 Linux 和它的衍生版中默认安装了 Firewalld。如果使用 iptables 的话,为了使每个更改生效,需要清空所有旧的规则,然后重新创建新规则。
然而用firewalld不需要清空并重新创建新规则就可以实现更改生效。
检查 Firewalld 是否运行。
# systemctl status firewalld
# firewall-cmd --state
![检查 Firewalld 状态](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Firewalld-Status.jpeg)
*检查 Firewalld 状态*
获取所有的区域列表。
# firewall-cmd --get-zones
![检查 Firewalld 区域](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Firewalld-Zones.jpeg)
*检查 Firewalld 区域*
在切换之前先获取区域的详细信息。
# firewall-cmd --zone=work --list-all
![检查区域详情](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Zone-Details.jpeg)
*检查区域详情*
获取默认区域。
# firewall-cmd --get-default-zone
![Firewalld 默认区域](http://www.tecmint.com/wp-content/uploads/2015/04/Firewalld-Default-Zone.jpeg)
*Firewalld 默认区域*
切换到另一个区域,比如 work
# firewall-cmd --set-default-zone=work
![切换 Firewalld 区域](http://www.tecmint.com/wp-content/uploads/2015/04/Swich-Zones.jpeg)
*切换 Firewalld 区域*
列出区域中的所有服务。
# firewall-cmd --list-services
![列出 Firewalld 区域的服务](http://www.tecmint.com/wp-content/uploads/2015/04/List-Firewalld-Service.jpeg)
*列出 Firewalld 区域的服务*
添加临时服务,比如 http然后重载 firewalld。
# firewall-cmd --add-service=http
# firewall-cmd --reload
![添加临时 http 服务](http://www.tecmint.com/wp-content/uploads/2015/04/Add-http-Service-Temporarily.jpeg)
*添加临时 http 服务*
添加永久服务,比如 http然后重载 firewalld。
# firewall-cmd --add-service=http --permanent
# firewall-cmd --reload
![添加永久 http 服务](http://www.tecmint.com/wp-content/uploads/2015/04/Add-http-Service-Temporarily.jpeg)
*添加永久 http 服务*
删除临时服务,比如 http。
# firewall-cmd --remove-service=http
# firewall-cmd --reload
![删除临时 Firewalld 服务](http://www.tecmint.com/wp-content/uploads/2015/04/Add-http-Service-Permanent.jpeg)
*删除临时 Firewalld 服务*
删除永久服务,比如 http
# firewall-cmd --zone=work --remove-service=http --permanent
# firewall-cmd --reload
![删除永久服务](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-Service-Parmanently.jpeg)
*删除永久服务*
允许一个临时端口(比如 331)。
# firewall-cmd --add-port=331/tcp
# firewall-cmd --reload
![打开临时 Firewalld 端口](http://www.tecmint.com/wp-content/uploads/2015/04/Open-Port-Temporarily.jpeg)
*打开临时端口*
允许一个永久端口(比如 331)。
# firewall-cmd --add-port=331/tcp --permanent
# firewall-cmd --reload
![打开永久 Firewalld 端口](http://www.tecmint.com/wp-content/uploads/2015/04/Open-Port-Permanent.jpeg)
*打开永久端口*
阻塞/移除临时端口(比如 331)。
# firewall-cmd --remove-port=331/tcp
# firewall-cmd --reload
![移除 Firewalld 临时端口](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-Port-Temporarily.jpeg)
*移除临时端口*
阻塞/移除永久端口(比如 331)。
# firewall-cmd --remove-port=331/tcp --permanent
# firewall-cmd --reload
![移除 Firewalld 永久端口](http://www.tecmint.com/wp-content/uploads/2015/04/Remove-Port-Permanently.jpeg)
*移除永久端口*
停用 firewalld。
# systemctl stop firewalld
# systemctl disable firewalld
# firewall-cmd --state
![在 CentOS 7 中停用 Firewalld](http://www.tecmint.com/wp-content/uploads/2015/04/Disable-Firewalld.jpeg)
*停用 Firewalld 服务*
启用 firewalld。
# systemctl enable firewalld
# systemctl start firewalld
# firewall-cmd --state
![在 CentOS 7 中取消 Firewalld](http://www.tecmint.com/wp-content/uploads/2015/04/Enable-Firewalld.jpeg)
*启用 Firewalld*
- [如何在 RHEL/CentOS 7 中配置 Firewalld][2]
- [配置和管理 Firewalld 的有用 Firewalld 规则][3]
### 15. 安装 Wget ###
Wget 是从 web 服务器获取(下载)内容的命令行工具。它是你使用 wget 命令获取 web 内容或下载任何文件必须要有的重要工具。
# yum install wget
![安装 Wget 工具](http://www.tecmint.com/wp-content/uploads/2015/04/Install-Wget.png)
*安装 Wget 工具*
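一个最简单的用法示例(URL 仅为演示,-c 表示支持断点续传):

    # wget -c http://example.com/file.iso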
关于在终端中如何使用 wget 命令下载文件的方法和实际例子,请阅读[10 个 Wget 命令例子][4]。
### 16. 安装 Telnet 客户端###
Telnet 是通过 TCP/IP 允许用户登录到相同网络上的另一台计算机的网络协议。和远程计算机的连接建立后它就成为了一个允许你在自己的计算机上用所有提供给你的权限和远程主机交互的虚拟终端。LCTT 译注:除非你真的需要,不要安装 telnet 服务,也不要用 telnet 客户端连接另外一个 telnet 服务,因为 telnet 是明文传输的。不过如下用 telnet 客户端检测另外一个服务的端口是否工作是常用的操作。)
Telnet 对于检查远程计算机或主机的监听端口也非常有用。
# yum install telnet
# telnet google.com 80
![Telnet 端口检查](http://www.tecmint.com/wp-content/uploads/2015/04/telnet-testing.png)
*Telnet 端口检查*
--------------------------------------------------------------------------------
via: http://www.tecmint.com/things-to-do-after-minimal-rhel-centos-7-installation/3/
作者:[Avishek Kumar][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://linux.cn/article-2561-1.html
[2]:http://linux.cn/article-4425-1.html
[3]:http://www.tecmint.com/firewalld-rules-for-centos-7/
[4]:http://linux.cn/article-4129-1.html
[5]:http://www.tecmint.com/install-apache-tomcat-in-centos/

View File

@ -0,0 +1,88 @@
Debian 8 "Jessie" 发布
=====================================================
**2015年4月25日** 在经历了近24个月的持续开发之后Debian 项目自豪地宣布最新的稳定版本8的发布代号 “**Jessie**” ),归功于[Debian安全团队][1]和[Debian长期支持][2]团队的工作该版本将在接下来的5年内获得支持。
![Debian](http://i1-news.softpedia-static.com/images/news2/Debian-GNU-Linux-8-Jessie-Has-Been-Officially-Released-Download-Now-479331-2.jpg)
“**Jessie**” 与新的默认 init 系统 `systemd` 一同到来。`systemd` 套件提供了许多激动人心的特性,如更快的启动速度、系统服务的 cgroups 支持、以及独立出部分服务的可能性。不过,`sysvinit` init系统在 “**Jessie**” 中依然可用。
在 “**Wheezy**” 中引入的 UEFI 支持(*“Unified Extensible Firmware Interface”*,统一的可扩展固件接口)同样在 “**Jessie**” 中得到了大幅改进。其中包含了许多已知固件 bug 的临时性解决方案支持32位系统上的UEFI也支持64位内核运行在32位 UEFI 固件上(后者仅被包含在我们的 `amd64/i386` “multi-arch” 安装介质中)。
自上个版本发布以来Debian 项目的成员同样对我们的支持服务做出了重要改进。其中之一是[可浏览所有 Debian 的源码][3],该服务目前放在 [sources.debian.net][4]。当然在超过20000个源码包里想要找到正确的文件确实令人望而生畏。因此我们同样十分高兴地上线 [Debian 代码搜索][5],它放在 [codesearch.debian.net][6]。这两项服务都由一个完全重写并且更加反应敏捷的[包追踪系统][7]提供。
该版本包含大量的软件包更新,如:
* Apache 2.4.10
* Asterisk 11.13.1
* GIMP 2.8.14
* 一个GNOME桌面环境 3.14 的升级版本
* GCC 编译器 4.9.2
* Icedove 31.6.0 (一个 Mozilla Thunderbird 的再发布版本)
* Iceweasel 31.6.0esr (一个 Mozilla Firefox 的再发布版本)
* KDE Plasma Workspaces 和 KDE Applications 4.11.13
* LibreOffice 4.3.3
* Linux 3.16.7-ckt9
* MariaDB 10.0.16 和 MySQL 5.5.42
* Nagios 3.5.1
* OpenJDK 7u75
* Perl 5.20.2
* PHP 5.6.7
* PostgreSQL 9.4.1
* Python 2.7.9 和 3.4.2
* Samba 4.1.17
* Tomcat 7.0.56 和 8.0.14
* Xen Hypervisor 4.4.1
* Xfce 4.10桌面环境
* 超过43000个其它可供使用的软件包从将近20100个源码包编译而来
凭借如此之多的软件包选择和照例广泛的架构支持,Debian 再次朝着它成为通用操作系统的目标迈出了坚实的一步。Debian 适用于各种不同情形:从桌面系统到上网本;从开发服务器到集群系统;以及数据库、Web 或存储服务器。同时,在此基础之上的质量保证工作,如对 Debian 上所有包的自动安装和升级测试,让 “**Jessie**” 可以满足用户对稳定的 Debian 版本的高期望值。
总共支持十种架构32位PC/Intel IA-32(`i386`)64位PC/Intel EM64T / x86-64 (`amd64`)Motorola/IBM PowerPC (旧硬件的`powerpc`和新的64位`ppc64el`(little-endian))MIPS (`mips` 大端和 `mipsel`小端)IBM S/390 (64位 `s390x`)以及 ARM 新老32位硬件的`armel`和`armhf`加上给新64位 *“AArch64”* 架构的`arm64`。
### 想尝试一下? ###
如果你仅仅是想在不安装的情况下体验 Debian 8 “**Jessie**”,你可以使用一个特殊的镜像,即 live 镜像,可以用在 CDU 盘以及网络启动设置上。最先只有 `amd64``i386` 架构提供这些镜像。Live 镜像同样可以用来安装 Debian。更多信息请访问 [Debian Live 主页][8]。
但是如果你想安装 Debian 到你的计算机的话有不少安装媒介可供你选择如蓝光碟DVDCD 以及 U 盘或者从网络安装。有几种桌面环境GNOMEKDE Plasma 桌面及 Plasma 应用Xfce 以及 LXDE它们可以从CD镜像中安装也可以从 CD/DVD 的启动菜单里选择想要的桌面环境。另外,同样提供了多架构 CD 和 DVD可以从单一磁盘选择安装不同架构的系统。或者你还可以创建可启动 U 盘安装媒介(参看[安装指南][9]获得更多细节。对云用户Debian 还提供了[预构建 OpenStack 镜像][10]可供使用。
安装镜像现在同样可以通过 [bittorrent][11](推荐下载方式),[jigdo][12] 或 [HTTP][13] 下载,查看[Debian 光盘][14]获得更进一步的信息。“**Jessie**” 不久将提供实体 DVDCD-ROM以及无数[供应商][15]的蓝光碟。
### 升级 Debian ###
如果从前一个版本 Debian 7代号 “**Wheezy**” )升级到 Debian 8大部分配置情况 apt-get 包管理工具都能够自动解决。Debian 系统一如既往地能够就地无痛升级,无需强制停机。强烈推荐阅读[发行注记][16]和[安装指南][17]来了解可能存在的问题,并了解安装和升级建议。发行注记会在发布后的几周内进一步改进,并翻译成其他语言。
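下面是一个大致的升级流程示意(假设 /etc/apt/sources.list 中仍指向 “wheezy”,实际操作请以发行注记为准,并提前做好备份):
# sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
# apt-get update
# apt-get dist-upgrade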
## 关于 Debian ##
Debian 是一个自由操作系统,由成千上万来自全世界的志愿者通过互联网协作开发。Debian 项目的关键力量是它的志愿者基础、它对 Debian 社群契约和自由软件的坚持,以及尽可能提供最好的操作系统的承诺。Debian 8 是其前进道路上的又一重要步伐。
## 联系信息 ##
获取更多信息,请访问 Debian 主页 [https://www.debian.org/][18] 或发送电子邮件至<press@debian.org>
--------------------------------------------------------------------------------
via: https://www.debian.org/News/2015/20150426
译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://security-team.debian.org/
[2]:https://wiki.debian.org/LTS
[3]:https://www.debian.org/News/weekly/2013/14/#sources
[4]:https://sources.debian.net/
[5]:https://www.debian.org/News/weekly/2014/17/#DCS
[6]:https://codesearch.debian.net/
[7]:https://tracker.debian.org/
[8]:http://live.debian.net/
[9]:https://www.debian.org/releases/jessie/installmanual
[10]:http://cdimage.debian.org/cdimage/openstack/current/
[11]:https://www.debian.org/CD/torrent-cd/
[12]:https://www.debian.org/CD/jigdo-cd/#which
[13]:https://www.debian.org/CD/http-ftp/
[14]:https://www.debian.org/CD/
[15]:https://www.debian.org/CD/vendors
[16]:https://www.debian.org/releases/jessie/releasenotes
[17]:https://www.debian.org/releases/jessie/installmanual
[18]:https://www.debian.org/

View File

@ -0,0 +1,52 @@
GNOME-Pie 0.6.1 应用启动器发布,酷炫新特性[多图+视频]
=============================================
**Simon Schneegans 高兴地[宣布][1]他的 GNOME-Pie 0.6.1 已可供下载使用。GNOME-Pie 是一个可以在包括 GNOME 和 Unity 在内的多种桌面环境中作为应用启动器的小工具。**
![GNOME-Pie](http://i1-news.softpedia-static.com/images/news2/GNOME-Pie-0-6-Application-Launcher-Released-with-Many-New-Features-Video-478914-3.jpg)
GNOME-Pie 0.6.1 看起来是个主要版本更新,引入了许多新特性,比如支持半个或四分之一圆,可选择每个启动器想要的形状,也可以自动根据位置调整形状(圆形,半个或四分之一圆),以及多彩的动态图标。
此外软件现在还适配若干类dock应用包括elementary OS 的 PlankUbuntu 的 Unity以及通用的 Docky。一些已有的 GNOME-Pie 主题也已更新,还引入了全新的为半圆启动器布局设计的主题 Simple
“Gnome-Pie 新版本已发布实际上已经发布了两个版本0.6.0和之后的0.6.1,修复了[issue #73][2]”Simon Schneegans 在发布声明上说道,“新版本修复了许多 bug还带来了许多新特性
<iframe src="https://player.vimeo.com/video/125339537" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
### 现在就可在Ubuntu上安装GNOME-Pie ###
Ubuntu 及其衍生版用户现在就可通过 Simon Schneegans 的PPA源安装 GNOME-Pie。只需打开终端运行下列命令即可。GNOME-Pie 适用于 Ubuntu 14.04 LTS14.10和15.04。
sudo add-apt-repository ppa:simonschneegans/testing
sudo apt-get update
sudo apt-get install gnome-pie
其他 GNU/Linux 发行版用户可以从官网下载 GNOME-Pie 0.6.1 的源代码或者近期在系统的软件源中搜索新版GNOME-Pie。
![GNOME-Pie](http://i1-news.softpedia-static.com/images/news2/GNOME-Pie-0-6-Application-Launcher-Released-with-Many-New-Features-Video-478914-2.jpg)
![GNOME-Pie](http://i1-news.softpedia-static.com/images/news2/GNOME-Pie-0-6-Application-Launcher-Released-with-Many-New-Features-Video-478914-4.jpg)
![GNOME-Pie](http://i1-news.softpedia-static.com/images/news2/GNOME-Pie-0-6-Application-Launcher-Released-with-Many-New-Features-Video-478914-5.jpg)
![GNOME-Pie](http://i1-news.softpedia-static.com/images/news2/GNOME-Pie-0-6-Application-Launcher-Released-with-Many-New-Features-Video-478914-6.jpg)
![GNOME-Pie](http://i1-news.softpedia-static.com/images/news2/GNOME-Pie-0-6-Application-Launcher-Released-with-Many-New-Features-Video-478914-7.jpg)
![GNOME-Pie](http://i1-news.softpedia-static.com/images/news2/GNOME-Pie-0-6-Application-Launcher-Released-with-Many-New-Features-Video-478914-8.jpg)
![GNOME-Pie](http://i1-news.softpedia-static.com/images/news2/GNOME-Pie-0-6-Application-Launcher-Released-with-Many-New-Features-Video-478914-9.jpg)
![GNOME-Pie](http://i1-news.softpedia-static.com/images/news2/GNOME-Pie-0-6-Application-Launcher-Released-with-Many-New-Features-Video-478914-10.jpg)
--------------------------------------------------
via: http://news.softpedia.com/news/GNOME-Pie-0-6-Application-Launcher-Released-with-Many-New-Features-Video-478914.shtml
译者:[alim0x](https://github.com/alim0x) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://simmesimme.github.io/news/2015/04/18/gnome-pie-061/
[2]:https://github.com/Simmesimme/Gnome-Pie/issues/73

View File

@ -0,0 +1,56 @@
环境音播放器:让人放松的声音,保持你的创造力
================================================================================
![Rain is a soothing sound for some](http://www.omgubuntu.co.uk/wp-content/uploads/2015/04/raining-1600x900-wallpaper_www.wallpapermay.com_84-1.jpg)
*对于某些人来说雨声是个令人安心的声音*
**如果我想变得非常有效率,我不能听‘正常’的音乐。它会使我分心,我会开始跟着唱或者让我想起另一首歌,结局就是我在自己的音乐库里到处戳并且……反正,你懂的。**
同样,我也不能在寂静的环境中工作(虽然和6只猫生活在一起意味着这不太可能),但是无规律的刺耳声音、突然的咔哒声以及猫叫声会打破寂静。
我的解决办法是听**环境音**。
我发现它能帮助我消除大脑的里的胡思乱想,提供了一个声景覆盖了猫咪玩耍的声音。
环境音就是日常生活中的背景噪音;雨滴在窗户上敲打的声音,咖啡店里人们聊天的嗡嗡声,风中鸟儿们闲聊的声音,等等。
倾听这些声音会强迫一个疯狂运行的大脑减速,重新沉静下来重新把精力聚集到重要的事情上。
### 适用于Ubuntu的环境音应用 ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/04/ambient-noise-player-750x365.jpg)
Google Play和苹果应用商店充满了环境音和白噪声的应用。现在在Ubuntu里有同样的应用了。
[Ambient Noise (环境音)][1] ——人如其名,这是一个专门被设计成播放这类声音的音频播放器。它甚至可以同 Ubuntu 的声音菜单整合到一起,给你“点击即放松”的体验。
这个应用又被称为ANoise播放器由Marcos Costales制作带有**8个高品质音频**。
这8个预设音频涵盖了多种环境从下雨时有节奏的声音到夜晚大自然静谧的旋律还有下午熙熙攘攘的咖啡店的嗡嗡声。
### 在Ubuntu上安装ANoise播放器 ###
适用于Ubuntu的环境音播放器是个免费的应用而且可以从它专用的PPA里安装。
要这样安装请先打开一个新的终端窗口运行:
sudo add-apt-repository ppa:costales/anoise
sudo apt-get update && sudo apt-get install anoise
安装好以后只需从Unity Dash或桌面环境里类同的地方里打开它通过声音菜单选择你喜欢的环境音然后……放松吧这个应用甚至记得你上次用的环境音。
即便如此,你还是要亲自试一试,看它是否能满足你的需要。欢迎告诉我你的想法,不过我可能会专注到听不见你说话,你可能也会这样!
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/04/ambient-noise-player-app-for-ubuntu-linux
作者:[Joey-Elijah Sneddon][a]
译者:[H-mudcup](https://github.com/H-mudcup)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://anoise.tuxfamily.org/

View File

@ -0,0 +1,290 @@
增强 nginx 的 SSL 安全性
================================================================================
[![](https://raymii.org/s/inc/img/ssl-labs-a.png)][1]
本文向你介绍如何在 nginx 服务器上设置健壮的 SSL 安全机制。我们通过禁用 SSL 压缩来降低 CRIME 攻击威胁;禁用协议上存在安全缺陷的 SSLv3 及更低版本,并设置更健壮的加密套件(cipher suite)来尽可能启用前向安全性(Forward Secrecy);此外,我们还启用了 HSTS 和 HPKP。这样我们就拥有了一个健壮而可经受考验的 SSL 配置,并可以在 Qualys SSL Labs 的测试中得到 A 级评分。
如果不求甚解的话,可以从 [https://cipherli.st][2] 上找到 nginx 、Apache 和 Lighttpd 的安全设置,复制粘帖即可。
本教程在 Digital Ocean 的 VPS 上测试通过。如果你喜欢这篇教程,想要支持作者的站点的话,购买 Digital Ocean 的 VPS 时请使用如下链接:[https://www.digitalocean.com/?refcode=7435ae6b8212][3] 。
本教程可以满足 [2014/1/21 发布的][4] SSL 实验室测试的严格要求(我之前就通过了测试,如果你按照本文操作,就可以得到一个 A+ 评分)。
- [本教程也可用于 Apache ][5]
- [本教程也可用于 Lighttpd ][6]
- [本教程也可用于 FreeBSD, NetBSD 和 OpenBSD 上的 nginx ,放在 BSD Now 播客上][7]: [http://www.bsdnow.tv/tutorials/nginx][8]
你可以从下列链接中找到这方面的进一步内容:
- [野兽攻击BEAST][9]
- [罪恶攻击CRIME][10]
- [怪物攻击FREAK ][11]
- [心血漏洞Heartbleed][12]
- [完备的前向安全性Perfect Forward Secrecy][13]
- [RC4 和 BEAST 的处理][14]
我们需要编辑 nginx 的配置,在 Ubuntu/Debian 上是 `/etc/nginx/sited-enabled/yoursite.com`,在 RHEL/CentOS 上是 `/etc/nginx/conf.d/nginx.conf`
本文中,我们需要编辑的是 `server` 配置里 443 端口(SSL)的那部分。在文末你可以看到完整的配置例子。
*在编辑之前切记备份一下配置文件!*
### 野兽攻击BEAST和 RC4 ###
简单的说野兽攻击BEAST就是通过篡改一个加密算法的 CBC密码块链的模式从而可以对部分编码流量悄悄解码。更多信息参照上面的链接。
针对野兽攻击BEAST较新的浏览器已经启用了客户端缓解方案。推荐方案是禁用 TLS 1.0 的所有加密算法,仅允许 RC4 算法。然而,[针对 RC4 算法的攻击也越来越多](http://www.isg.rhul.ac.uk/tls/) ,很多已经从理论上逐步发展为实际可行的攻击方式。此外,有理由相信 NSA 已经实现了他们所谓的“大突破”——攻破 RC4 。
禁用 RC4 会有几个后果。其一,当用户使用老旧的浏览器时,比如 Windows XP 上的 IE 会用 3DES 来替代 RC4。3DES 要比 RC4 更安全但是它的计算成本更高你的服务器就需要为这些用户付出更多的处理成本。其二RC4 算法能减轻 野兽攻击BEAST的危害如果禁用 RC4 会导致 TLS 1.0 用户会换到更容易受攻击的 AES-CBC 算法上通常服务器端的对野兽攻击BEAST的“修复方法”是让 RC4 优先于其它算法)。我认为 RC4 的风险要高于野兽攻击BEAST的风险。事实上有了客户端缓解方案Chrome 和 Firefox 提供了缓解方案野兽攻击BEAST就不是什么大问题了。而 RC4 的风险却在增长:随着时间推移,对加密算法的破解会越来越多。
### 怪物攻击FREAK ###
怪物攻击FREAK是一种中间人攻击它是由来自 [INRIA、微软研究院和 IMDEA][15] 的密码学家们所发现的。怪物攻击FREAK的缩写来自“Factoring RSA-EXPORT KeysRSA 出口密钥因子分解)”
这个漏洞可上溯到上世纪九十年代当时美国政府禁止出口加密软件除非其使用编码密钥长度不超过512位的出口加密套件。
这造成了一些现在的 TLS 客户端存在一个缺陷,这些客户端包括: 苹果的 SecureTransport 、OpenSSL。这个缺陷会导致它们会接受出口降级 RSA 密钥,即便客户端并没有要求使用出口降级 RSA 密钥。这个缺陷带来的影响很讨厌:在客户端存在缺陷,且服务器支持出口降级 RSA 密钥时,会发生中间人攻击,从而导致连接的强度降低。
攻击分为两个组成部分:首先是服务器必须接受“出口降级 RSA 密钥”。
中间人攻击可以按如下流程:
- 在客户端的 Hello 消息中,要求标准的 RSA 加密套件。
- 中间人攻击者修改该消息为export RSA输出级 RSA 密钥)。
- 服务器回应一个512位的输出级 RSA 密钥,并以其长期密钥签名。
- 由于 OpenSSL/SecureTransport 的缺陷,客户端会接受这个弱密钥。
- 攻击者根据 RSA 模数分解因子来恢复相应的 RSA 解密密钥。
- 当客户端把pre-master secret(预主密钥)加密发送给服务器时,攻击者现在就可以解密它,并恢复 TLS 的master secret(主密钥)。
- 从这里开始,攻击者就能看到了传输的明文并注入任何东西了。
本文所提供的加密套件不启用输出降级加密,请确认你的 OpenSSL 是最新的,也强烈建议你将客户端也升级到新的版本。
### 心血漏洞Heartbleed ###
心血漏洞Heartbleed 是一个于2014年4月公布的 OpenSSL 加密库的漏洞它是一个被广泛使用的传输层安全TLS协议的实现。无论是服务器端还是客户端在 TLS 中使用了有缺陷的 OpenSSL都可以被利用该缺陷。由于它是因 DTLS 心跳扩展RFC 6520中的输入验证不正确缺少了边界检查而导致的所以该漏洞根据“心跳”而命名。这个漏洞是一种缓存区超读漏洞它可以读取到本不应该读取的数据。
哪个版本的 OpenSSL 受到心血漏洞Heartbleed的影响
各版本情况如下:
- OpenSSL 1.0.1 直到 1.0.1f (包括)**存在**该缺陷
- OpenSSL 1.0.1g **没有**该缺陷
- OpenSSL 1.0.0 分支**没有**该缺陷
- OpenSSL 0.9.8 分支**没有**该缺陷
这个缺陷是2011年12月引入到 OpenSSL 中的,并随着 2012年3月14日 OpenSSL 发布的 1.0.1 而泛滥。2014年4月7日发布的 OpenSSL 1.0.1g 修复了该漏洞。
升级你的 OpenSSL 就可以避免该缺陷。
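可以先用下面的命令确认系统中当前的 OpenSSL 版本:
$ openssl version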
### SSL 压缩(罪恶攻击 CRIME ###
罪恶攻击CRIME使用 SSL 压缩来完成它的魔法SSL 压缩在下述版本是默认关闭的: nginx 1.1.6及更高/1.0.9及更高(如果使用了 OpenSSL 1.0.0及更高), nginx 1.3.2及更高/1.2.2及更高(如果使用较旧版本的 OpenSSL
如果你使用较早版本的 nginx 或 OpenSSL,而且你的发行版没有向后移植该选项,那么你需要重新编译一个没有 ZLIB 支持的 OpenSSL。这会禁止 OpenSSL 使用 DEFLATE 压缩方式。禁用之后,你仍然可以使用常规的 HTTP 层 DEFLATE 内容压缩。
### SSLv2 和 SSLv3 ###
SSLv2 是不安全的,所以我们需要禁用它。我们也禁用 SSLv3因为 TLS 1.0 在遭受到降级攻击时,会允许攻击者强制连接使用 SSLv3从而禁用了前向安全性forward secrecy
如下编辑配置文件:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
### 卷毛狗攻击POODLE和 TLS-FALLBACK-SCSV ###
SSLv3 会受到[卷毛狗漏洞POODLE][16]的攻击。这是禁用 SSLv3 的主要原因之一。
Google 提出了一个名为 [TLS\_FALLBACK\_SCSV][17] 的SSL/TLS 扩展,它用于防止强制 SSL 降级。如果你升级到下述的 OpenSSL 版本,会自动启用它。
- OpenSSL 1.0.1 带有 TLS\_FALLBACK\_SCSV 1.0.1j 及更高。
- OpenSSL 1.0.0 带有 TLS\_FALLBACK\_SCSV 1.0.0o 及更高。
- OpenSSL 0.9.8 带有 TLS\_FALLBACK\_SCSV 0.9.8zc 及更高。
[更多信息请参照 NGINX 文档][18]。
### 加密套件cipher suite ###
前向安全性Forward Secrecy用于在长期密钥被破解时确保会话密钥的完整性。PFS完备的前向安全性是指强制在每个/每次会话中推导新的密钥。
这就是说,泄露的私钥并不能用来解密(之前)记录下来的 SSL 通讯。
提供PFS完备的前向安全性功能的是那些使用了一种 Diffie-Hellman 密钥交换的短暂形式的加密套件。它们的缺点是系统开销较大,不过可以使用椭圆曲线的变体来改进。
以下两个加密套件是我推荐的,之后[Mozilla 基金会][19]也推荐了。
推荐的加密套件:
ssl_ciphers 'AES128+EECDH:AES128+EDH';
向后兼容的推荐的加密套件IE6/WinXP
ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
如果你的 OpenSSL 版本比较旧,不可用的加密算法会自动丢弃。应该一直使用上述的完整套件,让 OpenSSL 选择一个它所支持的。
加密套件的顺序是非常重要的因为其决定了优先选择哪个算法。上述优先推荐的算法中提供了PFS完备的前向安全性
较旧版本的 OpenSSL 也许不能支持这个算法的完整列表AES-GCM 和一些 ECDHE 算法是相当新的,在 Ubuntu 和 RHEL 中所带的绝大多数 OpenSSL 版本中不支持。
#### 优先顺序的逻辑 ####
- ECDHE+AESGCM 加密是首选的。它们是 TLS 1.2 加密算法,现在还没有广泛支持。当前还没有对它们的已知攻击。
- PFS 加密套件好一些,首选 ECDHE然后是 DHE。
- AES 128 要好于 AES 256。有一个关于 AES256 带来的安全提升程度是否值回成本的[讨论][20]结果是显而易见的。目前AES128 要更值一些,因为它提供了不错的安全水准,确实很快,而且看起来对时序攻击更有抵抗力。
- 在向后兼容的加密套件里面AES 要优于 3DES。在 TLS 1.1及其以上,减轻了针对 AES 的野兽攻击BEAST的威胁而在 TLS 1.0上则难以实现该攻击。在非向后兼容的加密套件里面,不支持 3DES。
- RC4 整个不支持了。3DES 用于向后兼容。参看 [#RC4\_weaknesses][21] 中的讨论。
#### 强制丢弃的算法 ####
- aNULL 包含了非验证的 Diffie-Hellman 密钥交换这会受到中间人MITM攻击
- eNULL 包含了无加密的算法(明文)
- EXPORT 是老旧的弱加密算法,是被美国法律标示为可出口的
- RC4 包含的加密算法使用了已弃用的 ARCFOUR 算法
- DES 包含的加密算法使用了弃用的数据加密标准DES
- SSLv2 包含了定义在旧版本 SSL 标准中的所有算法,现已弃用
- MD5 包含了使用已弃用的 MD5 作为哈希算法的所有算法
### 更多设置 ###
确保你也添加了如下行:
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
在一个 SSLv3 或 TLSv1 握手过程中选择一个加密算法时,一般使用客户端的首选算法。如果设置了上述配置,则会替代地使用服务器端的首选算法。
- [关于 ssl\_prefer\_server\_ciphers 的更多信息][22]
- [关于 ssl\_ciphers 的更多信息][23]
### 前向安全性和 Diffie Hellman Ephemeral DHE参数 ###
前向安全性Forward Secrecy的概念很简单客户端和服务器协商一个永不重用的密钥并在会话结束时销毁它。服务器上的 RSA 私钥用于客户端和服务器之间的 Diffie-Hellman 密钥交换签名。从 Diffie-Hellman 握手中获取的预主密钥会用于之后的编码。因为预主密钥是特定于客户端和服务器之间建立的某个连接并且只用在一个限定的时间内所以称作短暂模式Ephemeral
使用了前向安全性,如果一个攻击者取得了一个服务器的私钥,他是不能解码之前的通讯信息的。这个私钥仅用于 Diffie Hellman 握手签名并不会泄露预主密钥。Diffie Hellman 算法会确保预主密钥绝不会离开客户端和服务器,而且不能被中间人攻击所拦截。
所有的 nginx 版本(截至 1.4.4)都依赖 OpenSSL 提供 Diffie-Hellman(DH)的输入参数。不幸的是,这意味着 Diffie-Hellman Ephemeral(DHE)将使用 OpenSSL 的默认设置,包括一个用于密钥交换的1024位密钥。因为我们正在使用2048位证书,DHE 客户端就会使用一个要比非 DHE 客户端更弱的密钥交换。
我们需要生成一个更强壮的 DHE 参数:
cd /etc/ssl/certs
openssl dhparam -out dhparam.pem 4096
然后告诉 nginx 将其用作 DHE 密钥交换:
ssl_dhparam /etc/ssl/certs/dhparam.pem;
### OCSP 装订Stapling ###
当连接到一个服务器时,客户端应该使用证书吊销列表(CRL)或在线证书状态协议(OCSP)记录来校验服务器证书的有效性。CRL 的问题是它的体积越来越大,下载起来没完没了。
OCSP 更轻量级一些,因为我们每次只请求一条记录。但是副作用是当连接到一个服务器时必须对第三方 OCSP 响应器发起 OCSP 请求这就增加了延迟和带来了潜在隐患。事实上CA 所运营的 OCSP 响应器非常不可靠,浏览器如果不能及时收到答复,就会静默失败。攻击者通过 DoS 攻击一个 OCSP 响应器可以禁用其校验功能,这样就降低了安全性。
解决方法是允许服务器在 TLS 握手中发送缓存的 OCSP 记录,以绕开 OCSP 响应器。这个机制节省了客户端和 OCSP 响应器之间的通讯,称作 OCSP 装订。
客户端会在它的 CLIENT HELLO 中告知其支持 status\_request TLS 扩展,服务器仅在客户端请求它的时候才发送缓存的 OCSP 响应。
大多数服务器最多会缓存 OCSP 响应48小时。服务器会按照常规的间隔连接到 CA 的 OCSP 响应器来获取刷新的 OCSP 记录。OCSP 响应器的位置可以从签名的证书中的授权信息访问Authority Information Access字段中获得。
- [阅读我的教程:在 NGINX 中启用 OCSP 装订][24]
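配置并重载 nginx 之后,可以用 openssl 自带的客户端粗略检查 OCSP 装订是否生效(域名仅为示例):
$ echo | openssl s_client -connect your-domain.com:443 -servername your-domain.com -status 2>/dev/null | grep -i 'OCSP'
如果生效,输出中通常会包含类似 “OCSP Response Status: successful” 的内容。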
### HTTP 严格传输安全HSTS ###
如有可能,你应该启用 [HTTP 严格传输安全HSTS][25],它会引导浏览器和你的站点之间的通讯仅通过 HTTPS。
- [阅读我关于 HSTS 的文章,了解如何配置它][26]
### HTTP 公钥固定扩展HPKP ###
你也应该启用 [HTTP 公钥固定扩展HPKP][27]。
公钥固定的意思是一个证书链必须包括一个白名单中的公钥。它确保仅有白名单中的 CA 才能够为某个域名签署证书,而不是你的浏览器中存储的任何 CA。
我已经写了一篇[关于 HPKP 的背景理论及在 Apache、Lighttpd 和 NGINX 中配置例子的文章][28]。
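为证书生成 HPKP 所需的 pin 值(对证书公钥的 SPKI 做 SHA256 摘要,再 base64 编码)大致可以这样做(证书路径仅为示例,请换成你自己的证书):
$ openssl x509 -in /etc/ssl/cert/raymii_org.pem -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64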
### 配置范例 ###
server {
listen [::]:443 default_server;
ssl on;
ssl_certificate_key /etc/ssl/cert/raymii_org.pem;
ssl_certificate /etc/ssl/cert/ca-bundle.pem;
ssl_ciphers 'AES128+EECDH:AES128+EDH:!aNULL';
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_session_cache shared:SSL:10m;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.4.4 8.8.8.8 valid=300s;
resolver_timeout 10s;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
add_header Strict-Transport-Security max-age=63072000;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
root /var/www/;
index index.html index.htm;
server_name raymii.org;
}
### 结尾 ###
如果你使用了上述配置,你需要重启 nginx
# 首先检查配置文件是否正确
/etc/init.d/nginx configtest
# 然后重启
/etc/init.d/nginx restart
现在使用 [SSL Labs 测试][29]来看看你是否能得到一个漂亮的“A”。当然了你也得到了一个安全的、强壮的、经得起考验的 SSL 配置!
- [参考 Mozilla 关于这方面的内容][30]
--------------------------------------------------------------------------------
via: https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
作者:[Remy van Elst][a]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://raymii.org/
[1]:https://www.ssllabs.com/ssltest/analyze.html?d=raymii.org
[2]:https://cipherli.st/
[3]:https://www.digitalocean.com/?refcode=7435ae6b8212
[4]:http://blog.ivanristic.com/2014/01/ssl-labs-stricter-security-requirements-for-2014.html
[5]:https://raymii.org/s/tutorials/Strong_SSL_Security_On_Apache2.html
[6]:https://raymii.org/s/tutorials/Pass_the_SSL_Labs_Test_on_Lighttpd_%28Mitigate_the_CRIME_and_BEAST_attack_-_Disable_SSLv2_-_Enable_PFS%29.html
[7]:http://www.bsdnow.tv/episodes/2014_08_20-engineering_nginx
[8]:http://www.bsdnow.tv/tutorials/nginx
[9]:https://en.wikipedia.org/wiki/Transport_Layer_Security#BEAST_attack
[10]:https://en.wikipedia.org/wiki/CRIME_%28security_exploit%29
[11]:http://blog.cryptographyengineering.com/2015/03/attack-of-week-freak-or-factoring-nsa.html
[12]:http://heartbleed.com/
[13]:https://en.wikipedia.org/wiki/Perfect_forward_secrecy
[14]:https://en.wikipedia.org/wiki/Transport_Layer_Security#Dealing_with_RC4_and_BEAST
[15]:https://www.smacktls.com/
[16]:https://raymii.org/s/articles/Check_servers_for_the_Poodle_bug.html
[17]:https://tools.ietf.org/html/draft-ietf-tls-downgrade-scsv-00
[18]:http://wiki.nginx.org/HttpSslModule#ssl_protocols
[19]:https://wiki.mozilla.org/Security/Server_Side_TLS
[20]:http://www.mail-archive.com/dev-tech-crypto@lists.mozilla.org/msg11247.html
[21]:https://wiki.mozilla.org/Security/Server_Side_TLS#RC4_weaknesses
[22]:http://wiki.nginx.org/HttpSslModule#ssl_prefer_server_ciphers
[23]:http://wiki.nginx.org/HttpSslModule#ssl_ciphers
[24]:https://raymii.org/s/tutorials/OCSP_Stapling_on_nginx.html
[25]:https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
[26]:https://linux.cn/article-5266-1.html
[27]:https://wiki.mozilla.org/SecurityEngineering/Public_Key_Pinning
[28]:https://linux.cn/article-5282-1.html
[29]:https://www.ssllabs.com/ssltest/
[30]:https://wiki.mozilla.org/Security/Server_Side_TLS

View File

@ -1,24 +1,24 @@
Linux sort命令的14个有用的范例 -- 第一部分
Linux sort命令的14个有用的范例
=============================================================
Sort是用于对单个或多个文本文件内容进行排序的Linux程序。Sort命令以空格作为字段分隔符将一行分割为多个关键字对文件进行排序。需要注意的是除非你将输出重定向到文件中否则Sort命令并不对文件内容进行实际的排序(即文件内容没有修改),只是将文件内容按有序输出。
本文的目标是通过14个实际的范例让你更深刻的理解如何在Linux中使用sort命令。
###1. 首先我们将会创建一个用于执行sort命令的文本文件tecmint.txt。工作路径是/home/$USER/Desktop/tecmint###
1、 首先我们将会创建一个用于执行sort命令的文本文件tecmint.txt。工作路径是/home/$USER/Desktop/tecmint
下面命令中的‘-e选项将/’和‘/n解析成一个新
下面命令中的‘-e选项将启用‘\\’转义,将‘\n解析成换
$ echo -e "computer\nmouse\nLAPTOP\ndata\nRedHat\nlaptop\ndebian\nlaptop" > tecmint.txt
![Split String by Lines in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Split-String-by-Lines.gif)
###2. 在开始学习sort命令前我们先看看文件的内容及其显示方式。###
2、 在开始学习sort命令前我们先看看文件的内容及其显示方式。
$ cat tecmint.txt
![Check Content of File](http://www.tecmint.com/wp-content/uploads/2015/04/Check-Content-of-File.gif)
###3. 现在,使用如下命令对文件内容进行排序。###
3、 现在,使用如下命令对文件内容进行排序。
$ sort tecmint.txt
@ -26,30 +26,30 @@ Sort是用于对单个或多个文本文件内容进行排序的Linux程序。So
**注意**:上面的命令并不对文件内容进行实际的排序,仅仅是将其内容按有序方式输出。
###4. 对文件tecmint.txt文件内容排序并将排序后的内容输出到名为sorted.txt的文件中然后使用[cat][1]命令查看验证sorted.txt文件的内容。###
4、 对文件tecmint.txt文件内容排序并将排序后的内容输出到名为sorted.txt的文件中然后使用[cat][1]命令查看验证sorted.txt文件的内容。
$ sort tecmint.txt > sorted.txt
$ cat sorted.txt
![Sort File Content in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-File-Content.gif)
###5. 现在使用‘-r参数对tecmint.txt文件内容进行逆序排序并将输出内容重定向到reversesorted.txt文件中并使用cat命令查看文件的内容。###
5、 现在使用‘-r参数对tecmint.txt文件内容进行逆序排序并将输出内容重定向到reversesorted.txt文件中并使用cat命令查看文件的内容。
$ sort -r tecmint.txt > reversesorted.txt
$ cat reversesorted.txt
![Sort Content By Reverse](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-Content-By-Reverse.gif)
###6. 创建一个新文件lsl.txt文件内容为在home目录下执行ls -l命令的输出。###
6、 创建一个新文件lsl.txt文件内容为在home目录下执行ls -l命令的输出。
$ ls -l /home/$USER > /home/$USER/Desktop/tecmint/lsl.txt
$ cat lsl.txt
![Populate Output of Home Directory](http://www.tecmint.com/wp-content/uploads/2015/04/Populate-Output.gif)
我们将会看到对其他基础字段进行排序的例子,而不是对默认的始字符进行排序。
我们将会看到对其他字段进行排序的例子,而不是对默认的始字符进行排序。
###7. 基于第二列符号连接的数量对文件lsl.txt进行排序。###
7、 基于第二列符号连接的数量对文件lsl.txt进行排序。
$ sort -nk2 lsl.txt
@ -57,19 +57,19 @@ Sort是用于对单个或多个文本文件内容进行排序的Linux程序。So
![Sort Content by Column](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-Content-by-Column.gif)
###8. 基于第9列文件和目录的名称非数值对文件lsl.txt进行排序。###
8、 基于第9列文件和目录的名称非数值对文件lsl.txt进行排序。
$ sort -k9 lsl.txt
![Sort Content Based on Column](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-Content-Based-on-Column.gif)
###9. sort命令并非仅能对文件进行排序我们还可以通过管道将命令的输出内容重定向到sort命令中。###
9、 sort命令并非仅能对文件进行排序我们还可以通过管道将命令的输出内容重定向到sort命令中。
$ ls -l /home/$USER | sort -nk5
![Sort Content Using Pipe Option](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-Content-By-Pipeline.gif)
###10. 对文件tecmint.txt进行排序并删除重复的行。然后检查重复的行是否已经删除了。###
10、 对文件tecmint.txt进行排序并删除重复的行。然后检查重复的行是否已经删除了。
$ cat tecmint.txt
$ sort -u tecmint.txt
@ -78,23 +78,23 @@ Sort是用于对单个或多个文本文件内容进行排序的Linux程序。So
目前我们发现的排序规则:
除非指定了‘-r参数否则排序的优先级按下面规则排序
除非指定了‘-r参数否则排序的优先级按下面规则排序
- 以数字开头的行优先级最高
- 以小写字母开头的行优先级次之
- 待排序内容按字典序进行排序
- 默认情况下sort命令将带排序内容的每行关键字当作一个字符串进行字典序排序数字优先级最高参看规则 - 1
- 默认情况下sort命令将带排序内容的每行关键字当作一个字符串进行字典序排序数字优先级最高参看规则 1
###11. 创建文件lsla.txt其内容用ls -la命令的输出内容填充。###
11、 在当前位置创建第三个文件lsla.txt其内容用ls -lA命令的输出内容填充。
$ ls -lA /home/$USER > /home/$USER/Desktop/tecmint/lsla.txt
$ cat lsla.txt
![Populate Output With Hidden Files](http://www.tecmint.com/wp-content/uploads/2015/04/Populate-Output-With-Hidden-Files.gif)
了解ls命令的读者都知道ls -la=ls -l + 隐藏文件。因此这两个文件的大部分内容都是相同的。
了解ls命令的读者都知道ls -lA 等于 ls -l + 隐藏文件,所以这两个文件的大部分内容都是相同的。
###12. 对上面两个文件内容进行排序输出。###
12、 对上面两个文件内容进行排序输出。
$ sort lsl.txt lsla.txt
@ -102,7 +102,7 @@ Sort是用于对单个或多个文本文件内容进行排序的Linux程序。So
注意文件和目录的重复
###13. 现在我们看看怎样对两个文件进行排序、合并,并且删除重复行。###
13、 现在我们看看怎样对两个文件进行排序、合并,并且删除重复行。
$ sort -u lsl.txt lsla.txt
@ -110,13 +110,13 @@ Sort是用于对单个或多个文本文件内容进行排序的Linux程序。So
此时,我们注意到重复的行已经被删除了,我们可以将输出内容重定向到文件中。
###14. 我们同样可以基于多列对文件内容进行排序。基于第2,5数值和9非数值列对ls -l命令的输出进行排序。###
14、 我们同样可以基于多列对文件内容进行排序。基于第2,5数值和9非数值列对ls -l命令的输出进行排序。
$ ls -l /home/$USER | sort -t "," -nk2,5 -k9
![Sort Content By Field Column](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-Content-By-Field-Column.gif)
先到此为止了在接下来的文章中我们将会学习到sort命令更多的详细例子。届时敬请关注Tecmint。保持分享精神。若喜欢本文,敬请将本文分享给你的朋友。
先到此为止了在接下来的文章中我们将会学习到sort命令更多的详细例子。届时敬请关注我们。保持分享精神。若喜欢本文,敬请将本文分享给你的朋友。
--------------------------------------------------------------------------------
@ -124,7 +124,7 @@ via: http://www.tecmint.com/sort-command-linux/
作者:[Avishek Kumar][a]
译者:[cvsher](https://github.com/cvsher)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,134 @@
Linux 的 'sort'命令的七个有趣实例(二)
================================================================================
在[上一篇文章][1]里我们已经探讨了关于sort命令的多个例子如果你错过了这篇文章可以点击下面的链接进行阅读。今天的这篇文章作为上一篇文章的继续将讨论关于sort命令的剩余用法与上一篇一起作为Linux sort命令的完整指南。
- [Linux 的 sort命令的14个有用的范例][1]
在我们继续深入之前先创建一个文本文档month.txt并且将上一次给出的数据填进去。
$ echo -e "mar\ndec\noct\nsep\nfeb\naug" > month.txt
$ cat month.txt
![Populate Content](http://www.tecmint.com/wp-content/uploads/2015/04/Populate-Content.gif)
15、 通过使用M选项month.txt文件按照月份顺序进行排序。
$ sort -M month.txt
**注意**:sort命令需要至少3个字符来确认月份名称。
![Sort File Content by Month in Linux](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-by-Month.gif)
16、 把数据整理成方便人们阅读的形式,比如 1K、2M、3G、2T,这里面的 K、M、G、T 分别代表千、兆、吉、太。
LCTT 译注此处命令有误ls 命令应该增加 -h 参数,径改之)
$ ls -lh /home/$USER | sort -h -k5
![Sort Content Human Readable Format](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-Content-Human-Readable-Format.gif)
17、 在上一篇文章中,我们在例子 4 中创建了一个名为 sorted.txt 的文件,在例子 6 中创建了一个 lsl.txt。sorted.txt 已经排好序了,而 lsl.txt 还没有。让我们使用 sort 命令来检查两个文件是否已经排好序。
$ sort -c sorted.txt
![Check File is Sorted](http://www.tecmint.com/wp-content/uploads/2015/04/Check-File-is-Sorted.gif)
如果它没有任何输出(返回值为 0),则表示文件已经排好序。
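也可以顺手打印命令的退出码来确认(仅作演示,0 表示已排序):
$ sort -c sorted.txt; echo $?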
$ sort -c lsl.txt
![Check File Sorted Status](http://www.tecmint.com/wp-content/uploads/2015/04/Check-File-Sorted-Status.gif)
这次报告了无序(disorder),说明 lsl.txt 还没有排好序……
18、 如果文字之间的分隔符是空格sort命令自动地将空格后的东西当做一个新文字单元如果分隔符不是空格呢
考虑这样一个文本文件,里面的内容可以由除了空格之外的任何符号分隔,比如 '|'、'\'、'+'、'.' 等……
创建一个分隔符为+的文本文件。使用cat命令查看文件内容。
$ echo -e "21+linux+server+production\n11+debian+RedHat+CentOS\n131+Apache+Mysql+PHP\n7+Shell Scripting+python+perl\n111+postfix+exim+sendmail" > delimiter.txt
----------
$ cat delimiter.txt
![Check File Content by Delimiter](http://www.tecmint.com/wp-content/uploads/2015/04/Check-File-Content.gif)
现在基于由数字组成的第一个域来进行排序。
$ sort -t '+' -nk1 delimiter.txt
![Sort File By Fields](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-File-By-Fields.gif)
然后再基于非数字的第四个域排序。
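这一步所用的命令大致如下(按第四个域做字典序排序):
$ sort -t '+' -k4 delimiter.txt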
![Sort Content By Non Numeric](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-Content-By-Non-Numeric.gif)
如果分隔符是制表符,你需要在 '+' 的位置上用 $'\t' 代替,如上例所示。
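例如(假设存在一个以制表符分隔的文件 tabfile.txt,按第一个数值域排序):
$ sort -t $'\t' -nk1 tabfile.txt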
19、 对主用户目录下使用ls -l命令得到的结果基于第五列文件大小进行一个乱序排列。
$ ls -l /home/avi/ | sort -k5 -R
![Sort Content by Column in Random Order](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-Content-by-Column1.gif)
每一次运行上面的命令,你得到的结果可能都不一样,因为结果是随机生成的。
正如我在上一篇文章中提到的规则2所说——sort命令会将以小写字母开始的行排在大写字母开始的行前面。看一下上一篇文章的例3字符串laptopLAPTOP前出现。
20、 如何覆盖默认的排序优先权在这之前我们需要先将环境变量LC_ALL的值设置为C。在命令行提示栏中运行下面的代码。
$ export LC_ALL=C
然后以非默认优先权的方式对tecmint.txt文件重新排序。
$ sort tecmint.txt
![Override Sorting Preferences](http://www.tecmint.com/wp-content/uploads/2015/04/Override-Sorting-Preferences.gif)
*覆盖排序优先权*
不要忘记与例子 3 中得到的输出结果做比较,并且你可以使用 '-f'(又叫 '--ignore-case',忽略大小写)的选项来获取更有序的输出。
$ sort -f tecmint.txt
![Compare Sorting Preferences](http://www.tecmint.com/wp-content/uploads/2015/04/Compare-Sorting-Preferences.gif)
21、 对两个输入文件分别进行排序,然后用 join 把它们按第一个字段连接起来!
我们创建两个文本文档file1.txt以及file2.txt并用数据填充如下所示并用cat命令查看文件的内容。
$ echo -e "5 Reliable\n2 Fast\n3 Secure\n1 open-source\n4 customizable" > file1.txt
$ cat file1.txt
![Populate Content with Numbers](http://www.tecmint.com/wp-content/uploads/2015/04/Populate-Content-with-Number.gif)
用如下数据填充file2.txt
$ echo -e "3 RedHat\n1 Debian\n5 Ubuntu\n2 Kali\n4 Fedora" > file2.txt
$ cat file2.txt
![Populate File with Data](http://www.tecmint.com/wp-content/uploads/2015/04/Populate-File-with-Data.gif)
现在我们对两个文件进行排序并连接。
$ join <(sort -n file1.txt) <(sort file2.txt)
![Sort Join Two Files](http://www.tecmint.com/wp-content/uploads/2015/04/Sort-Join-Two-Files.gif)
我所要讲的全部内容就在这里了,希望与各位保持联系,也希望各位经常来逛逛。有反馈就在下面评论吧。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-sort-command-examples/
作者:[Avishek Kumar][a]
译者:[DongShuaike](https://github.com/DongShuaike)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/sort-command-linux/

View File

@ -0,0 +1,35 @@
SuperTuxKart 0.9 已发行 —— Linux 中最好的竞速类游戏越来越棒了!
================================================================================
**热门竞速类游戏 SuperTuxKart 的新版本已经[打包发行][1]登陆下载服务器**
![Super Tux Kart 0.9 Release Poster](http://1.bp.blogspot.com/-eGXvJu3UVwc/VTVhICZVEtI/AAAAAAAAAf0/iP2bkWDNf_c/s1600/poster-cropped.jpg)
*Super Tux Kart 0.9 发行海报*
SuperTuxKart 0.9 相较前一版本做了巨大的升级内部运行着刚出炉的新引擎有个炫酷的名字叫Antarctica南极洲目的是要呈现更加炫酷的图形环境从阴影到场景的纵深外加卡丁车更好的物理效果。
突出的图形表现也增加了对显卡的要求。SuperTuxKart 开发人员给玩家的建议是,要有图像处理能力比得上(或者,想要完美的话,要超过) Intel HD Graphics 3000, NVIDIA GeForce 8600 或 AMD Radeon HD 3650 的显卡。
### 其他改变 ###
SuperTuxKart 0.9 中与图像的改善同样吸引人眼球的是一对**全新赛道**,新的卡丁车,新的在线账户可以记录和分享**全新推出的成就系统**里赢得的徽章,以及大量的改装和涂装的微调。
点击播放下面的官方发行视频,看看基于调色器的 STK 0.9 所散发的光辉吧。youtube 视频https://www.youtube.com/0FEwDH7XU9Q
Ubuntu 用户可以从项目网站上下载新发行版已编译的二进制文件。
- [下载 SuperTuxKart 0.9][2]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/04/supertuxkart-0-9-released
作者:[Joey-Elijah Sneddon][a]
译者:[H-mudcup](https://github.com/H-mudcup)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://supertuxkart.blogspot.co.uk/2015/04/supertuxkart-09-released.html
[2]:http://supertuxkart.sourceforge.net/Downloads

View File

@ -0,0 +1,190 @@
安装完最小化 RHEL/CentOS 7 后需要做的 30 件事情(四)
================================================================================
### 17. 安装 Webmin ###
Webmin 是基于 Web 的 Linux 配置工具。它像一个中央系统,用于配置各种系统设置,比如用户、磁盘分配、服务以及 HTTP 服务器、Apache、MySQL 等的配置。
# wget http://prdownloads.sourceforge.net/webadmin/webmin-1.740-1.noarch.rpm
# rpm -ivh webmin-*.rpm
![在 CentOS 7 上安装 Webmin](http://www.tecmint.com/wp-content/uploads/2015/04/Install-Webmin.jpeg)
*安装 Webmin*
安装完 webmin 后,你会在终端上得到一个消息,提示你用 root 密码在端口 10000 登录你的主机 (http://ip-address:10000)。 如果运行的是无接口的服务器,你可以转发端口,然后从有接口的服务器上访问它。(LCTT 译注:无接口[headless]服务器指没有访问接口或界面的服务器,在此场景中,指的是处于内网的服务器,可采用外网/路由器映射来访问该端口)
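下面是一个用 SSH 做本地端口转发的示意(IP 仅为示例):
$ ssh -L 10000:localhost:10000 root@192.168.0.100
然后在本地浏览器打开 https://localhost:10000 即可(若 Webmin 未启用 SSL,则使用 http)。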
### 18. 启用第三方库 ###
添加不受信任的库并不是一个好主意,尤其是在生产环境中,这可能导致致命的问题。但这里仅作为示例,我们会添加一些经社区证实可信任的库,以安装第三方工具和软件包。
为企业版 LinuxEPEL库添加额外的软件包。
# yum install epel-release
添加社区企业版 Linux Community Enterprise Linux
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
![安装 Epel 库](http://www.tecmint.com/wp-content/uploads/2015/04/install-epel-repo.jpeg)
*安装 Epel 库*
**注意!** 添加第三方库的时候尤其需要注意。
### 19. 安装 7-zip 工具 ###
在最小化安装 CentOS 时你并没有获得类似 unzip 或者 untar 的工具。我们可以选择根据需要来安装每个工具或一个能处理所有格式的工具。7-zip 就是一个能压缩和解压所有已知类型文件的工具。
# yum install p7zip
![安装 7zip 工具](http://www.tecmint.com/wp-content/uploads/2015/04/Install-7zip-tool.jpeg)
*安装 7zip 工具*
**注意**: 该软件包从 Fedora EPEL 7 的库中下载和安装。
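安装后可以用 7za 命令打包和解包,比如(文件名仅为示例):
# 7za a backup.7z /etc/hosts
# 7za x backup.7z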
### 20. 安装 NTFS-3G 驱动 ###
NTFS-3G一个很小但非常有用的 NTFS 驱动,在大部分类 UNIX 发行版上都可用。它对于挂载和访问 Windows NTFS 文件系统很有用。尽管也有其它可用的替代品,比如 Tuxera但 NTFS-3G 是使用最广泛的。
# yum install ntfs-3g
![在 CentOS 上安装 NTFS-3G](http://www.tecmint.com/wp-content/uploads/2015/04/Install-NTFS-3G.jpeg)
*安装 NTFS-3G 用于挂载 Windows 分区*
ntfs-3g 安装完成之后,你可以使用以下命令挂载 Windows NTFS 分区(我的 Windows 分区是 /dev/sda5
# mount -ro ntfs-3g /dev/sda5 /mnt
# cd /mnt
# ls -l
### 21. 安装 Vsftpd FTP 服务器 ###
VSFTPD 表示 Very Secure File Transfer Protocol Daemon是用于类 UNIX 系统的 FTP 服务器。它是现今最高效和安全的 FTP 服务器之一。
# yum install vsftpd
![在 CentOS 7 上安装 Vsftpd](http://www.tecmint.com/wp-content/uploads/2015/04/Install-FTP.jpeg)
*安装 Vsftpd FTP*
编辑配置文件 /etc/vsftpd/vsftpd.conf 用于保护 vsftpd。
# vi /etc/vsftpd/vsftpd.conf
编辑一些值并使其它行保留原样,除非你知道自己在做什么。
anonymous_enable=NO
local_enable=YES
write_enable=YES
chroot_local_user=YES
你也可以更改端口号。记得在防火墙中放行 vsftpd 使用的端口(加上 --permanent 才能永久生效),然后重新加载防火墙配置。
# firewall-cmd --permanent --add-port=21/tcp
# firewall-cmd --reload
下一步重启 vsftpd 并启用开机自动启动。
# systemctl restart vsftpd
# systemctl enable vsftpd
### 22. 安装和配置 sudo ###
sudo 通常被称为 superuser do 或者 substitute user do,是一个在类 UNIX 操作系统中以其它用户的安全权限执行程序的软件。让我们来看看怎样配置 sudo。
# visudo
这会打开 /etc/sudoers 并进行编辑
![sudoers 文件](http://www.tecmint.com/wp-content/uploads/2015/04/sudoers-File.jpeg)
*sudoers 文件*
1. 给一个已经创建好的用户(比如 tecmint赋予所有权限等同于 root
tecmint ALL=(ALL) ALL
2. 如果给一个已经创建好的用户(比如 tecmint赋予除了重启和关闭服务器以外的所有权限等同于 root
首先,再一次打开文件并编辑如下内容:
Cmnd_Alias NOPERMIT = /sbin/shutdown, /sbin/reboot
然后,用逻辑操作符(!)添加该别名。
tecmint ALL=(ALL) ALL, !NOPERMIT
3. 如果准许一个组(比如 debian)运行一些 root 权限命令,比如增加或删除用户。
Cmnd_Alias PERMIT = /usr/sbin/useradd, /usr/sbin/userdel
然后,给组 debian 增加权限。
%debian ALL=(ALL) PERMIT
### 23. 安装并启用 SELinux ###
SELinux 表示 Security-Enhanced Linux是内核级别的安全模块。
# yum install selinux-policy
![在 CentOS 7 上安装 SElinux](http://www.tecmint.com/wp-content/uploads/2015/04/Install-SElinux.jpeg)
*安装 SElinux 策略*
查看 SELinux 当前模式。
# getenforce
![查看 SELinux 模式](http://www.tecmint.com/wp-content/uploads/2015/04/Check-SELinux-Mode.jpeg)
*查看 SELinux 模式*
输出是 Enforcing意味着 SELinux 策略已经生效。
如果需要调试,可以临时设置 selinux 模式为允许。不需要重启。
# setenforce 0
调试完了之后再次设置 selinux 为强制模式,无需重启。
# setenforce 1
LCTT 译注在生产环境中SELinux 固然会提升安全,但是也确实会给应用部署和运行带来不少麻烦。具体是否部署,需要根据情况而定。)
### 24. 安装 Rootkit Hunter ###
Rootkit Hunter简写为 RKhunter是在 Linux 系统中扫描 rootkits 和其它可能有害攻击的程序。
# yum install rkhunter
![安装 Rootkit Hunter](http://www.tecmint.com/wp-content/uploads/2015/04/Install-Rootkit-Hunter.jpeg)
*安装 Rootkit Hunter*
在 Linux 中,从脚本文件以计划作业的形式运行 rkhunter 或者手动扫描有害攻击。
# rkhunter --check
![扫描 rootkits](http://www.tecmint.com/wp-content/uploads/2015/04/Scan-for-rootkits.png)
*扫描 rootkits*
![RootKit 扫描结果](http://www.tecmint.com/wp-content/uploads/2015/04/RootKit-Results.png)
*RootKit 扫描结果*
--------------------------------------------------------------------------------
via: http://www.tecmint.com/things-to-do-after-minimal-rhel-centos-7-installation/4/
作者:[Avishek Kumar][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/

View File

@ -0,0 +1,140 @@
安装完最小化 RHEL/CentOS 7 后需要做的 30 件事情(五)
================================================================================
### 25. 安装 Linux Malware Detect (LMD) ###
Linux Malware Detect (LMD) 是 GNU GPLv2 协议下发布的开源 Linux 恶意程序扫描器它是特别为面临威胁的主机环境所设计的。LMD 完整的安装、配置以及使用方法可以查看:
- [安装 LMD 并和 ClamAV 一起使用作为反病毒引擎][1]
### 26. 用 Speedtest-cli 测试服务器带宽 ###
speedtest-cli 是用 python 写的用于测试网络下载和上传带宽的工具。关于 speedtest-cli 工具的完整安装和使用请阅读我们的文章[用命令行查看 Linux 服务器带宽][2]
### 27. 配置 Cron 任务 ###
这是最广泛使用的软件工具之一。它是一个任务调度器,比如,现在安排一个以后可以自动运行的作业。它常用于日志等记录的定期处理和维护,以及其它日常工作,比如常规备份。所有的调度都写在文件 /etc/crontab 中。
crontab 文件包含下面的 6 个域:
分 时 日期 月份 星期 命令
(0-59) (0-23) (1-31) (1/jan-12/dec) (0-6/sun-sat) Command/script
![Crontab 域](http://www.tecmint.com/wp-content/uploads/2015/04/Crontab-Fields.jpeg)
*Crontab 域*
要在每天 04:30 运行一个 cron 任务(比如运行 /home/$USER/script.sh),就把下面的条目增加到 crontab 文件 /etc/crontab 中:
分 时 日期 月份 星期 命令
30 4 * * * /home/$USER/script.sh
把上面一行增加到 crontab 之后,它会在每天的 04:30 am 自动运行,输出取决于脚本文件的内容。另外脚本也可以用命令代替。关于更多 cron 任务的例子,可以阅读[Linux 上的 11 个 Cron 任务例子][3]
### 28. 安装 Owncloud ###
Owncloud 是一个基于 HTTP 的数据同步、文件共享和远程文件存储应用。更多关于安装 owncloud 的内容,你可以阅读这篇文章:[在 Linux 上创建个人/私有云存储][4]
### 29. 启用 Virtualbox 虚拟化 ###
虚拟化是创建虚拟操作系统、硬件和网络的过程,是当今最热门的技术之一。我们会详细地讨论如何安装和配置虚拟化。
我们的最小化 CentOS 服务器是一个无用户界面服务器LCTT 译注:无用户界面[headless]服务器指没有监视器和鼠标键盘等外设的服务器)。我们通过安装下面的软件包,让它可以托管虚拟机,虚拟机可通过 HTTP 访问。
# yum groupinstall 'Development Tools' SDL kernel-devel kernel-headers dkms
![安装开发工具](http://www.tecmint.com/wp-content/uploads/2015/04/Install-Development-Tool.jpeg)
*安装开发工具*
更改工作目录到 /etc/yum.repos.d/,下载 VirtualBox 的 yum 源配置文件,以及用于校验软件包的 Oracle 公钥。
# wget http://download.virtualbox.org/virtualbox/rpm/rhel/virtualbox.repo
# wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc
安装刚下载的密钥。
# rpm --import oracle_vbox.asc
升级并安装 VirtualBox。
# yum update && yum install virtualbox-4.3
下一步,下载和安装 VirtualBox 扩展包。
# wget http://download.virtualbox.org/virtualbox/4.3.12/Oracle_VM_VirtualBox_Extension_Pack-4.3.12-93733.vbox-extpack
# VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.3.12-93733.vbox-extpack
![安装 VirtualBox 扩展包](http://www.tecmint.com/wp-content/uploads/2015/04/Install-Virtualbox-Extension-Pack.jpeg)
*安装 VirtualBox 扩展包*
![正在安装 VirtualBox 扩展包](http://www.tecmint.com/wp-content/uploads/2015/04/Installing-Virtualbox-Extension-Pack.jpeg)
*正在安装 VirtualBox 扩展包*
添加用户 vbox 用于管理 VirtualBox 并把它添加到组 vboxusers 中。
# adduser vbox
# passwd vbox
# usermod -G vboxusers vbox
安装 HTTPD 服务器。
# yum install httpd
安装 PHP (支持 soap 扩展)。
# yum install php php-devel php-common php-soap php-gd
下载 phpVirtualBox一个 PHP 写的开源的 VirtualBox 用户界面)。
# wget http://sourceforge.net/projects/phpvirtualbox/files/phpvirtualbox-4.3-1.zip
解压 zip 文件并把解压后的文件夹复制到 HTTP 工作目录。
# unzip phpvirtualbox-4.*.zip
# cp phpvirtualbox-4.3-1 -R /var/www/html
下一步,把文件 /var/www/html/phpvirtualbox-4.3-1/config.php-example 重命名为 config.php。
# cd /var/www/html/phpvirtualbox-4.3-1
# mv config.php-example config.php
打开配置文件,填入我们前面创建的 vbox 用户的用户名(username)和密码(password):
# vi config.php
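config.php 中大致有类似下面的两个变量(变量名请以实际文件为准),把它们改成前面创建的 vbox 用户及其密码(密码仅为占位):
var $username = 'vbox';
var $password = '<vbox 用户的密码>';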
最后,重启 VirtualBox 和 HTTP 服务器。
# service vbox-service restart
# service httpd restart
转发端口并从一个有用户界面的服务器上访问它。
http://192.168.0.15/phpvirtualbox-4.3-1/
![登录 PHP Virtualbox](http://www.tecmint.com/wp-content/uploads/2015/04/PHP-Virtualbox-Login.png)
*登录 PHP Virtualbox*
![PHP Virtualbox 面板](http://www.tecmint.com/wp-content/uploads/2015/04/PHP-Virtualbox.png)
*PHP Virtualbox 面板*
--------------------------------------------------------------------------------
via: http://www.tecmint.com/things-to-do-after-minimal-rhel-centos-7-installation/5/
作者:[Avishek Kumar][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:https://linux.cn/article-5156-1.html
[2]:https://linux.cn/article-3796-1.html
[3]:http://www.tecmint.com/11-cron-scheduling-task-examples-in-linux/
[4]:https://linux.cn/article-2494-1.html

View File

@ -0,0 +1,86 @@
安装完最小化 RHEL/CentOS 7 后需要做的 30 件事情(六)
================================================================================
### 30. 用密码保护 GRUB ###
用密码保护你的 boot 引导程序,这样你就可以在启动时获得额外的安全保障,同时也能在物理接触层面获得保护。通过在引导时给 GRUB 加锁,防止任何未经授权的访问,来保护你的服务器。
首先备份两个文件,这样如果有任何错误出现,你可以有回滚的选择。备份 /boot/grub2/grub.cfg 为 /boot/grub2/grub.cfg.old:
# cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.old
同样,备份 /etc/grub.d/10\_linux/etc/grub.d/10\_linux.old
# cp /etc/grub.d/10_linux /etc/grub.d/10_linux.old
打开文件 /etc/grub.d/10\_linux 并在文件末尾添加下列行。
cat <<EOF
set superusers="tecmint"
password tecmint avi@123
EOF
![密码保护 Grub](http://www.tecmint.com/wp-content/uploads/2015/04/Password-Protect-Grub.png)
*密码保护 Grub*
注意在上面的文件中,用你自己的用户名和密码代替 “tecmint” 和 “avi@123”。
现在通过运行下面的命令生成新的 grub.cfg 文件。
# grub2-mkconfig --output=/boot/grub2/grub.cfg
![生成 Grub 文件](http://www.tecmint.com/wp-content/uploads/2015/04/Generate-Grub-File.jpeg)
*生成 Grub 文件*
创建 grub.cfg 文件之后,重启机器并敲击 e 进入编辑。它会要求你输入有效的用户名和密码,然后才能编辑 boot 菜单。
![有密码保护的 Boot 菜单](http://www.tecmint.com/wp-content/uploads/2015/04/Edit-Boot-Menu.jpeg)
*有密码保护的 Boot 菜单*
输入登录验证之后,你就可以编辑 grub boot 菜单。
![Grub 菜单文件](http://www.tecmint.com/wp-content/uploads/2015/04/Grub-Menu-Edit.jpeg)
*Grub 菜单文件*
你也可以用加密的密码代替上一步的明文密码。首先按照下面推荐的生成加密密码。
# grub2-mkpasswd-pbkdf2
[两次输入密码]
![生成加密的 Grub 密码](http://www.tecmint.com/wp-content/uploads/2015/04/Generate-Encrypted-Grub-Password.jpeg)
*生成加密的 Grub 密码*
打开 /etc/grub.d/10_linux 文件并在文件末尾添加下列行。
cat <<EOF
set superusers="tecmint"
password_pbkdf2 tecmint grub.pbkdf2.sha512....你的加密密码....
EOF
![加密 Grub 密码](http://www.tecmint.com/wp-content/uploads/2015/04/Encrypted-Grub-Password.jpeg)
*加密 Grub 密码*
用你系统上生成的密码代替原来的密码,别忘了交叉检查密码。
同样注意在这种情况下你也需要像上面那样生成 grub.cfg。重启并敲击 e 进入编辑,会提示你输入用户名和密码。
我们已经介绍了大部分工业标准发行版 RHEL 7 和 CentOS 7 安装后必要的操作。如果你发现我们遗漏了一些要点,或者你有新的内容可以扩充这篇文章,欢迎和我们分享,我们会把它补充进文章中。
--------------------------------------------------------------------------------
via: http://www.tecmint.com/things-to-do-after-minimal-rhel-centos-7-installation/6/
作者:[Avishek Kumar][a]
译者:[ictlyh](https://github.com/ictlyh)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/

View File

@ -1,319 +0,0 @@
Love-xuan Translating
Group Test: Linux Text Editors
================================================================================
> Mayank Sharma tests five supercharged text editors that can crunch more than just words.
If youve been using Linux long, you know that whether you want to edit an apps configuration file, hack together a shell script, or write/review bits of code, the likes of LibreOffice just wont cut it. Although the words mean almost the same thing, you dont need a word processor for these tasks; you need a text editor.
In this group test well be looking at five humble text editors that are more than capable of heavy-lifting texting duties. They can highlight syntax and auto-indent code just as effortlessly as they can spellcheck documents. You can use them to record macros and manage code snippets just as easily as you can copy/paste plain text.
Some simple text editors even exceed their design goals thanks to plugins that infuse them with capabilities to rival text-centric apps from other genres. They can take on the duties of a source code editor and even an Integrated Development Environment.
Two of most popular and powerful plain text editors are Emacs and Vim. However, we didnt include them in this group test for a couple of reasons. Firstly, if you are using either, congratulations: you dont need to switch. Secondly, both of these have a steep learning curve, especially to the GUI-oriented desktop generation who have access to alternatives that are much more inviting.
### The contenders: ###
#### Gedit ####
- URL:http://projects.gnome.org/gedit/
- Version: 3.10
- Licence: GPL
- Is Gnomes default text editor up to the challenge?
#### Kate ####
- URL: www.kate-editor.org
- Version: 3.11
- Licence: LGPL/GPL
- Will Kate challenge fate?
#### Sublime Text ####
- URL: www.sublimetext.com
- Version: 2.0.2
- Licence: Proprietary
- Proprietary software in the land of free with the heart of gold.
#### UltraEdit ####
- URL: www.ultraedit.com
- Version: 4.1.0.4
- Licence: Proprietary
- Does it do enough to justify its price?
#### jEdit ####
- URL: www.jedit.org
- Version: 5.1.0
- Licence: GPL
- Will the Java-based editor spoil the party for the rest?
![There's a fine balance between stuffing an app with features and exposing all of them to the user. Gedit keeps most of its features hidden.](http://www.linuxvoice.com/wp-content/uploads/2014/07/gedit-web.png)
There's a fine balance between stuffing an app with features and exposing all of them to the user. Gedit keeps most of its features hidden.
### The crucial criteria ###
All the tools, except Gedit and jEdit, were installed on Fedora and Ubuntu via their recommended installation method. The former already shipped with the default Gnome desktop and the latter stubbornly refused to install on Fedora. Since these are relatively simple apps, they have no esoteric dependencies, the only exception being jEdit, which requires Oracle Java.
Thanks to the continued efforts of both Gnome and KDE, all editors look great and function properly irrespective of the desktop environment they are running on. That not only rules it out as an evaluation criterion, it also means that you are no longer bound by the tools that ship with your favourite desktop environment.
In addition to their geekier functionality, we also tested all our candidates for general-purpose text editing. However, they are not designed to mimic all the functionality of a modern-day word processor and werent evaluated as such.
![Kate can double up as a versatile and capable integrated development environment (IDE).](http://www.linuxvoice.com/wp-content/uploads/2014/08/kate-web.png)
Kate can double up as a versatile and capable integrated development environment (IDE).
### Programming language support ###
UltraEdit does syntax highlighting, can fold code and has project management capabilities. Theres also a function list, which is supposed to list all the functions in the source file, but it didnt work for any of our test code files. UltraEdit also supports HTML5, and has a HTML toolbar with which you can add commonly-used HTML tags.
Even Gnomes default text editor, Gedit, has several code-oriented features such as bracket matching, automatic indentation, and will also highlight syntax for various programming languages including C, C++, Java, HTML, XML, Python, Perl, and many others.
If youre looking for more programming assistance, look at Sublime and Kate. Sublime supports several programming languages and (as well as the popular ones) is able to highlight syntax for C#, D, Dylan, Erlang, Groovy, Haskell, Lisp, Lua, MATLAB, OCaml, R, and even SQL. If that isnt enough for you, you can download add-ons to support even more languages.
Furthermore, its syntax highlighting ability offers several customisable options. The app will also match braces, to ensure they are all properly rounded off, and the auto-complete function in Sublime works with variables created by the user.
Just like Komodo IDE, sublime also displays a scrollable preview of the full source code, which is really handy for navigating long code files and lets you jump between different parts of the file.
One of the best features of Sublime is its ability to run code for certain languages like C++, Python, Ruby, etc from within the editor itself, assuming of course you have the compiler and other build system tools installed on your computer. This helps save time and eliminates the need to switch out to the command line.
You can also enable the build system in Kate with plugins. Furthermore, you can add a simple front-end to the GDB debugger. Kate will work with Git, Subversion and Mercurial version control systems, and also provides some functionality for project management.
It does all this in addition to highlighting syntax for over 180 languages, along with other assistance like bracket matching, auto-completion and auto-indentation. It also supports code folding and can even collapse functions within a program.
The only disappointment is jEdit, which bills itself as a programmers text editor, but it struggled with other basic functions such as code folding and wouldnt even suggest or complete functions.
**Verdict:**
- Gedit:3/5
- Kate:5/5
- Sublime:5/5
- UltraEdit3/5
- jEdit:1/5
![If you don't like Sublime's Charcoal appearance, you can choose one of the other 22 themes included with it.](http://www.linuxvoice.com/wp-content/uploads/2014/08/sublime-web.png)
If you don't like Sublime's Charcoal appearance, you can choose one of the other 22 themes included with it.
### Keyboard control ###
Users of an advanced text editor expect to control and operate it exclusively via the keyboard. Furthermore, some apps even allow their users to further customise the key bindings for the shortcuts.
You can easily work with Gedit using its extensive keyboard shortcut keys. There are keys for working with and editing files as well as invoke tools for common tasks such as spellchecking a document. You can access a list of default shortcut keys from within the app, but theres no graphical way to customise them. Similarly, to customise the keybindings in Sublime, you need to make modifications in its XML keymap files. Sublime has been criticised for its lack of a graphical interface to define keyboard shortcuts, but long-term users have defended the current file-based mechanism, which gives them more control.
UltraEdit is proud of its “everything is customisable” motto, which it extends to keyboard shortcuts. You can define custom hotkeys for navigating the menus and also define your own multi-key key-mappings for accessing its plethora of functions.
In addition to its fully customisable keyboard shortcuts, jEdit also has pre-defined keymaps for Emacs. Kate is equally impressive in this respect. It has an easily accessible window to customise the key bindings. You can change the default keys, as well as define alternate ones. Furthermore, Kate also has a Vi mode which will let users operate Kate using Vi keys.
**Verdict:**
- Gedit:2/5
- Kate:5/5
- Sublime:3/5
- UltraEdit:4/5
- jEdit:5/5
### Snippets and macros ###
Macros help you cut down the time spent on editing and organising data by automating repetitive steps, while Snippets of code extend a similar functionality to programmers by creating reusable chunks of source code. Both have the ability to save you time.
The vanilla Gedit installation doesnt have either of these functionalities, but you can enable them via separate plugins. While the Snippets plugin ships with Gedit, youll have to manually download and install the macro plugin (its called gedit-macropy and is hosted on GitHub) before you can enable it from within Gedit.
Kate takes the same plugins route to enable the snippets feature. Once added, the plugin also adds a repository of snippets for PHP, Bash and Java. You can display the list of snippets in the sidebar for easier access. Right-click on a snippet to edit its contents as well as its shortcut key combination. However, very surprisingly, it doesnt support macros despite repeated hails from users since 2002!
jEdit too has a plugin for enabling snippets. But it can record macros from user actions and you can also write them in the BeanShell scripting language (BeanShell supports scripted objects as simple method closures like those in Perl and JavaScript). jEdit also has a plugin that will download several macros from jEdits website.
Sublime ships with inbuilt ability to create both snippets and macros, and ships with several snippets of frequently used functions for most popular programming languages.
Snippets in UltraEdit are called Smart Templates and just like with Sublime you can insert them based upon the kind of source file youre editing. To complement the Macro recording function, UltraEdit also has an integrated javascript-based scripting language to automate tasks. You can also download user-submitted macros and scripts from the editors website.
**Verdict:**
- Gedit:3/5
- Kate:1/5
- Sublime:5/5
- UltraEdit:5/5
- jEdit:5/5
![UltraEdits UI is highly configurable — you can customise the layout of the toolbars and menus just as easily as you can change many other aspects.](http://www.linuxvoice.com/wp-content/uploads/2014/08/ultraedit-web.png)
UltraEdits UI is highly configurable — you can customise the layout of the toolbars and menus just as easily as you can change many other aspects.
### Ease of use ###
Unlike a bare-bones text editor, the text editors in this feature are brimming with features to accommodate a wide range of users — from document writers to programmers. Instead of stripping features from the apps, their developers are looking for avenues to add more functionality.
Although at first glance most apps in this group test have a very similar layout, upon closer inspection, youll notice several usability differences. We have a weak spot for apps that expose their functionality and features by making judicious use of the user interface, instead of just overwhelming the user.
### Gedit: 4/5 ###
Gedit wears a very vanilla look. It has an easy interface with minimal menus and buttons. This is a two-edged sword though, as some users might fail to realise its true potential.
The app can open multiple files in tabs that can be rearranged and moved between windows. Users can optionally enable panels on the side and bottom for displaying a file browser and the output of a tool enabled by a plugin. The app will detect when an open file is modified by another application and offers to reload that file.
The UI has been given a major overhaul in the latest version of the app yet to make its way into Gnome. However it isnt yet stable, and while it maintains all features, several plugins that interact with the menu will need to be updated.
### Kate: 5/5 ###
Although a major part of its user interface resembles Gedit, Kate tucks in tabs at either side and its menus are much fuller. The app is approachable and invites users to explore other features.
Kate can transparently open and save files over all protocols supported by KDE's KIO, including HTTP, FTP, SSH, SMB and WebDAV. You can use the app to work with multiple files at the same time. But unlike the traditional horizontal tab switching bar in most apps, Kate has tabs on either side of the screen. The left sidebar displays an index of open files. Programmers who need to see different parts of the same file at the same time will also appreciate its ability to split the interface horizontally as well as vertically.
### Sublime: 5/5 ###
Sublime lets you view up to four files at the same time in various arrangements. Theres also a full-screen distraction free mode that just displays the file and the menu, for when youre in the zone.
The editor also has a minimap on the right, which is useful for navigating long files. The app ships with several snippets for popular functions in several programming languages, which makes it very usable for developers. Another neat editing feature, whether you are working with text documents or code, is the ability to swap and shuffle selections.
### UltraEdit: 3/5 ###
UltraEdits interface is loaded with several toolbars at the top and bottom of the interface. Along with the tabs to switch between documents, panes on either side and the gutter area, these leave little room for the editor window.
Web developers working with HTML files have lots of assistance at their fingertips. You can also access remote files via FTP and SFTP. Advanced features such as recording a macro and comparing files are also easily accessible.
Using the apps Preferences window you can tweak various aspects of the app, including the colour scheme and other features like syntax highlighting.
### jEdit: 3/5 ###
In terms of usability, one of the first red flags was jEdit's inability to install on RPM-based distros. Navigating the editor takes some getting used to, since its menus aren't in the same order as in other popular apps and some have names that won't be familiar to the average desktop user. However, the app includes detailed inbuilt help, which will help ease the learning curve.
jEdit highlights the current line you are on and enables you to split windows in multiple viewing modes. You can easily install and manage plugins from within the app, and in addition to full macros, jEdit also lets you record quick temporary ones.
![Thanks to its Java underpinnings, jEdit doesnt really feel at home on any desktop environment](http://www.linuxvoice.com/wp-content/uploads/2014/08/jedit-web.png)
Thanks to its Java underpinnings, jEdit doesnt really feel at home on any desktop environment
### Availability and support ###
There are several similarities between Gedit and Kate. Both apps take advantage of their respective parent project, Gnome and KDE, and are bundled with several mainstream distros. Yet both projects are cross-platform and have Windows and Mac OS X ports as well as native Linux versions.
Gedit is hosted on Gnomes web infrastructure and has a brief user guide, information about the various plugins, and the usual channels of getting in touch including a mailing list and IRC channel. Youll also find usage information on the websites of other Gnome-based distros such as Ubuntu. Similarly, Kate gets the benefit of KDEs resources and hosts detailed user information as well as a mailing list and IRC channel. You can access their respective user guides offline from within the app as well.
UltraEdit is also available for Windows and Mac OS X besides Linux, and has detailed user guides on getting started, though there's none included within the app. To assist users, UltraEdit hosts a database of frequently asked questions, a bunch of power tips that have detailed information about several specific features, and users can engage with one another on forum boards. Additionally, paid users can also seek support from the developers via email.
Sublime supports the same number of platforms, however you don't need to buy a separate licence for each platform. The developer keeps users abreast of ongoing development via a blog and also participates actively in the hosted forums. The highlight of the project's support infrastructure is the freely available detailed tutorial and video course. Sublime is lovely.
Because its written in Java, jEdit is available on several platforms. On its website youll find a detailed user guide and links to documentation of some plugins. However, there are no avenues for users to engage with other users or the developer.
**Verdict:**
- Gedit: 4/5
- Kate: 4/5
- Sublime: 5/5
- UltraEdit: 3/5
- jEdit: 2/5
### Add-on and plugins ###
Different users have different requirements, and a single lightweight app can only do as much. This is where plugins come into the picture. The apps rely on these small pluggable widgets to extend their feature set and be of use to even more number of users.
The one exception is UltraEdit. The app has no third-party plugins, but its developers do point out that third-party tools such as HtmlTidy are already installed with UltraEdit.
Gedit ships with a number of plugins installed, and you can download more with the gedit-plugins package. The projects website also points to several third-party plugins based on their compatibility with the Gedit versions.
Three useful plugins for programmers are Code Comment, Terminal Plugin, which adds a terminal in the bottom panel, and the Session Saver. The Session Saver is really useful when youre working on a project with multiple files. You can open all the files in tabs, save your session and when you restore it with a single click itll open all the files in the same tab order as you saved them.
Similarly, you can extend Kate by adding plugins using its built-in plugin manager. In addition to the impressive projects plugins, some others that will be of use to developers include an embedded terminal, ability to compile and debug code and execute SQL queries on databases.
Plugins for Sublime are written in Python, and the text editor includes a tool called Package Control, which is a little bit like apt-get in that it enables the user to find, install, upgrade and remove plugin packages. With plugins, you can bring the Git version control to Sublime, as well as the JSLint tool to improve JavaScript. The Sublime Linter plugin is a must have for coders and will point out any errors in your code.
jEdit boasts the most impressive plugin infrastructure. The app has over 200 plugins, which can be browsed in the dedicated site of their own. The website lists plugins under various categories such as File Management, Version Control, Text, etc. Youll find lots of plugins housed under each category.
Some of the best plugins are the Android plugin, which provides utilities to work on Android projects; the TomcatSwitch plugin, with which you can create and control an external Jakarta Tomcat server process; and the Vimulator plugin, for Vi-like capabilities. You can install these plugins using jEdit's plugin manager.
**Verdict**
- Gedit: 3/5
- Kate: 4/5
- Sublime: 4/5
- UltraEdit: 1/5
- jEdit: 5/5
### Plain ol text editing ###
Despite all their powerful extra-curricular activities that might even displace full-blown apps across several genres, there will be times when you just need to use these text editing behemoths to read, write, or edit plain and simple text. While you can use all of them to enter text, we are evaluating them for access to common text-editing conveniences.
Gedit, which is Gnome's default text editor, supports an undo and redo mechanism as well as search and replace. It can spellcheck documents in multiple languages and can also access and edit remote files using Gnome's GVFS libraries.
You can spellcheck documents with Kate as well, which also lets you perform a Google search on any highlighted text. It's also got a line modification system that visually alerts users to lines with modified and unsaved changes in a file. In addition, it enables users to set bookmarks within a file to ease navigation of lengthy documents.
Sublime has a wide selection of editing commands, such as indenting text and formatting paragraphs. Its auto-save feature helps prevent users from losing their work. Advanced users will appreciate the regex-based recursive find and replace feature, as well as the ability to select multiple non-contiguous spans of text and act on them collectively.
UltraEdit also enables the use of regular expressions for its search and replace feature and can edit remote files via FTP. One unique feature of jEdit is its support for an unlimited number of clipboards, which it calls registers. You can copy snippets of text to these registers, which are available across editing sessions.
**Verdict:**
- Gedit: 4/5
- Kate: 5/5
- Sublime: 5/5
- UltraEdit: 4/5
- jEdit: 4/5
### Our verdict ###
All the editors in this feature are good enough to replace your existing text editor for editing text files and tweaking configuration files. In fact, chances are they'll even double up as your IDE. These apps are chock-full of bells and whistles, and their developers aren't thinking of stripping features, but adding more and more and more.
At the tail end of this test we have jEdit. Not only does it insist on using the proprietary Oracle Java Runtime Environment, but it also failed to install on our Fedora machine, and the developer doesn't actively engage with its users.
UltraEdit does little better. This commercial proprietary tool focuses on web developers, and doesn't offer anything to non-developer power users that makes it worth recommending over free software alternatives.
In the third podium position we have Gedit. There's nothing inherently wrong with Gnome's default editor, but despite all its positive aspects, it's simply outclassed by Sublime and Kate. Out of the box, Kate is a more versatile editor than Gedit, and outscores Gnome's default editor even after taking their respective plugin systems into consideration.
Sublime and Kate are both very good, and they performed equally well in most of our tests. Whatever ground Kate lost to Sublime for not supporting macros, it regained with its keyboard friendliness and the ease with which you can define custom keybindings.
Kate's success stems from the fact that it offers the maximum number of features with a minimal learning curve. Just fire it up and use it as a simple text editor, easily edit configuration files with syntax highlighting, or even use it to collaborate and work on a complex programming project thanks to its project management capabilities.
We aren't pitching Kate to replace a full-blown integrated development environment such as [insert your favourite specialised tool here]. But it's an ideal all-rounder and a perfect stepping stone to a specialised tool.
Kate is designed for moments when you need something that's quick to respond, doesn't overwhelm you with its interface and is just as useful as something that might otherwise be overkill.
### 1st Kate ###
- Licence LGPL/GPL Version 3.11
- www.kate-editor.org
- The ultimate mild-mannered text editor with super powers.
- Kate is one of the best apps to come out of the KDE project.
### 2nd Sublime Text ###
- Licence Proprietary Version 2.0.2
- www.sublimetext.com
- A professionally done text editor that's worth every penny: easy to use, full of features, and it looks great.
### 3rd Gedit ###
- Licence GPL Version 3.10
- http://projects.gnome.org/gedit
- Gets it done from Gnome. It's a wonderful text editor and does an admirable job, but the competition here is too great.
### 4th UltraEdit ###
- Licence Proprietary Version 4.1.0.4
- www.ultraedit.com
- Focuses on bundling conveniences for web developers without offering anything special for general users.
### 5th jEdit ###
- Licence GPL Version 5.1.0
- www.jedit.org
- A lack of support, a failure to work on our Fedora machine and a lack of polish relegate jEdit to the bottom slot.
### You may also wish to try… ###
The default text editor that ships with your distro will also be able to assist you with some advanced tasks. There's KDE's KWrite and Raspbian's Nano, for instance. KWrite inherits some of Kate's features thanks to KDE's katepart component, and Nano has sprung back into the limelight thanks to its availability on the Raspberry Pi.
If you wish to follow in the footsteps of Linux gurus, you could always try the revered text editors Emacs and Vim. First-time users who want to get a taste of the power of Vim might want to consider gVim, which exposes Vim's power via a graphical interface.
Besides jEdit and Kate, there are other editors that mimic the usability of veteran advanced editors like Emacs and Vim, such as the JED editor and Joe's Own Editor, both of which have an emulation mode for Emacs. On the other hand, if you are looking for lightweight code editors, check out Bluefish and Geany. They exist to fill the niche between text editors and full-fledged integrated development platforms.
--------------------------------------------------------------------------------
via: http://www.linuxvoice.com/text-editors/
作者:[Ben Everard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxvoice.com/author/ben_everard/

View File

@ -1,4 +1,4 @@
(translating by runningwater)
wyangsun翻译中
Compact Text Editors Great for Remote Editing and Much More
================================================================================
A text editor is software used for editing plain text files. This type of software has many different uses, including modifying configuration files, writing programming language source code, jotting down thoughts, or even making a grocery list. Given that editors can be used for such a diverse range of activities, it is worth spending the time finding an editor that best suits your preferences.
@ -217,4 +217,4 @@ via: http://www.linuxlinks.com/article/20141011073917230/TextEditors.html
[2]:http://www.vim.org/
[3]:http://ne.di.unimi.it/
[4]:http://www.gnu.org/software/zile/
[5]:http://nano-editor.org/
[5]:http://nano-editor.org/

View File

@ -1,3 +1,4 @@
KevinSJ translating
10 Truly Amusing Easter Eggs in Linux
================================================================================
![](http://en.wikipedia.org/wiki/File:Adventure_Easteregg.PNG)
@ -151,4 +152,4 @@ via: http://www.linux.com/news/software/applications/820944-10-truly-amusing-lin
[13]:http://nmap.org/book/output-formats-script-kiddie.html
[14]:http://nmap.org/book/output-formats-script-kiddie.html
[15]:https://www.youtube.com/watch?v=Ql1uLyuWra8
[16]:http://en.wikipedia.org/wiki/Neko_%28computer_program%29
[16]:http://en.wikipedia.org/wiki/Neko_%28computer_program%29

View File

@ -1,56 +0,0 @@
Translating by H-mudcup
Ambient Noise Player for Ubuntu Plays Relaxing Sounds to Keep You Creative
================================================================================
![Rain is a soothing sound for some](http://www.omgubuntu.co.uk/wp-content/uploads/2015/04/raining-1600x900-wallpaper_www.wallpapermay.com_84-1.jpg)
Rain is a soothing sound for some
**If I plan on being productive I can't listen to regular music. It distracts me. I start singing along or get reminded of a different track, so I end up poking around my library and… Well, that's that.**
But by the same token I can't work in total silence either (living with 6 cats means that's not really a possibility anyway), because the inconsistency jars: sudden clatters and meows interrupt.
My solution is to **listen to ambient noise**.
I find it helps nullify the misdirection my brain craves, and provides a soundscape that wraps around the noise of kitty play time.
Ambient noise is the noise that plays out in the background of daily life: the rain drumming on a window, the unintelligible hum of coffee shop chatter, the gossiping of birds on the wind, and so on.
Listening to these sounds can force a racing mind to slow down, rebase and refocus on what matters.
### Ambient Noise App for Ubuntu ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/04/ambient-noise-player-750x365.jpg)
Google Play and Apple app stores are packed full of ambient and white noise apps. Now a similar tool is available natively on Ubuntu.
[Ambient Noise][1] — as the name might suggest — is an audio player designed specifically for playing these sounds. It even integrates with the Ubuntu Sound Menu for a neat pick, click and relax experience.
The app, which is also known as ANoise Player and is made by Marcos Costales, comes with a set of **8 high-quality sounds**.
These 8 presets cover various ambient atmospheres, ranging from the rhythmic sound of rain, to the tranquil tones of nature at night, and back to the buzz of a bustling coffee shop in the afternoon.
### Install ANoise Player in Ubuntu ###
Ambient Noise player for Ubuntu is a free application and is available to install from its own dedicated PPA.
To do this open a new Terminal window and run:
sudo add-apt-repository ppa:costales/anoise
sudo apt-get update && sudo apt-get install anoise
Once installed, simply open it from the Unity Dash (or your DE's equivalent), pick your preferred noise from the Sound Menu and then… relax! The app even remembers which sound you used last.
Even so, give it a try and see if it suits your needs. I would say let me know what you think, but I will be too focused to hear — and so might you!
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/04/ambient-noise-player-app-for-ubuntu-linux
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://anoise.tuxfamily.org/

View File

@ -1,40 +0,0 @@
This tool can alert you about evil twin access points in the area
================================================================================
**EvilAP_Defender can even attack rogue Wi-Fi access points for you, the developer says**
A new open-source tool can periodically scan an area for rogue Wi-Fi access points and can alert network administrators if any are found.
The tool, called EvilAP_Defender, was designed specifically to detect malicious access points that are configured by attackers to mimic legitimate ones in order to trick users into connecting to them.
These access points are known as evil twins and allow hackers to intercept Internet traffic from devices connected to them. This can be used to steal credentials, spoof websites, and more.
Most users configure their computers and devices to automatically connect to some wireless networks, like those in their homes or at their workplace. However, when faced with two wireless networks that have the same name, or SSID, and sometimes even the same MAC address, or BSSID, most devices will automatically connect to the one that has the stronger signal.
This makes evil twin attacks easy to pull off because both SSIDs and BSSIDs can be spoofed.
[EvilAP_Defender][1] was written in Python by a developer named Mohamed Idris and was published on GitHub. It can use a computer's wireless network card to discover rogue access points that duplicate a real access point's SSID, BSSID, and even additional parameters like channel, cipher, privacy protocol, and authentication.
The tool will first run in learning mode, so that the legitimate access point [AP] can be discovered and whitelisted. It can then be switched to normal mode to start scanning for unauthorized access points.
If an evil AP is discovered, the tool can alert the network administrator by email, but the developer also plans to add SMS-based alerts in the future.
There is also a preventive mode in which the tool can launch a denial-of-service [DoS] attack against the evil AP to buy the administrator some time to take defensive measures.
"The DoS will only be performed for evil APs which have the same SSID but a different BSSID (AP's MAC address) or run on a different channel," Idris said in the tool's documentation. "This is to avoid attacking your legitimate network."
However, users should remember that attacking someone else's access point, even a likely malicious one operated by an attacker, is most likely illegal in many countries.
In order to run, the tool needs the Aircrack-ng wireless suite, a wireless card supported by Aircrack-ng, MySQL and the Python runtime.
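If you want to try it, the prerequisites can usually be pulled in with your distribution's package manager before grabbing the tool from GitHub. The snippet below is only a rough sketch for a Debian/Ubuntu-style system; package names and the MySQL setup may differ on your distro.

    # Install the wireless auditing suite, a MySQL server and Python (names may vary per distro)
    $ sudo apt-get update
    $ sudo apt-get install aircrack-ng mysql-server python git
    # Fetch EvilAP_Defender itself from its GitHub repository
    $ git clone https://github.com/moha99sa/EvilAP_Defender.git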
--------------------------------------------------------------------------------
via: http://www.infoworld.com/article/2905725/security0/this-tool-can-alert-you-about-evil-twin-access-points-in-the-area.html
作者:[Lucian Constantin][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.infoworld.com/author/Lucian-Constantin/
[1] https://github.com/moha99sa/EvilAP_Defender/blob/master/README.TXT

View File

@ -0,0 +1,41 @@
Synfig Studio 1.0 — Open Source Animation Gets Serious
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2015/04/synfig-free-animations-750x467.jpg)
**A brand new version of the free, open-source 2D animation software Synfig Studio is now available to download. **
The first release of the cross-platform software in well over a year, Synfig Studio 1.0 builds on its claim of offering an “industrial-strength solution for creating film-quality animation” with a suite of new and improved features.
Among them is an improved user interface that the project developers say is easier and more intuitive to use. The client adds a new **single-window mode** for tidy working and has been **reworked to use the latest GTK3 libraries**.
On the features front there are several notable changes, including the addition of a fully-featured bone system.
This **joint-and-pivot skeleton framework** is well suited to 2D cut-out animation and should prove super efficient when coupled with the complex deformations new to this release, or used with Synfig's popular automatic interpolated keyframes (read: frame-to-frame morphing).
(YouTube video)
<iframe width="750" height="422" frameborder="0" allowfullscreen="" src="https://www.youtube.com/embed/M8zW1qCq8ng?feature=oembed"></iframe>
New non-destructive cutout tools, friction effects and initial support for full frame-by-frame bitmap animation, may help unlock the creativity of open-source animators, as might the addition of a sound layer for syncing the animation timeline with a soundtrack!
### Download Synfig Studio 1.0 ###
Synfig Studio is not a tool suited for everyone, though the latest batch of improvements in this latest release should help persuade some animators to give the free animation software a try.
If you want to find out what open-source animation software is like for yourself, you can grab an Ubuntu installer for the latest release directly from the project's SourceForge page using the links below.
- [Download Synfig 1.0 (64bit) .deb Installer][1]
- [Download Synfig 1.0 (32bit) .deb Installer][2]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2015/04/synfig-studio-new-release-features
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://sourceforge.net/projects/synfig/files/releases/1.0/linux/synfigstudio_1.0_amd64.deb/download
[2]:http://sourceforge.net/projects/synfig/files/releases/1.0/linux/synfigstudio_1.0_x86.deb/download

View File

@ -0,0 +1,111 @@
translating wi-cuckoo
What are good command line HTTP clients?
================================================================================
"The whole is greater than the sum of its parts" is a very famous quote from Aristotle, a Greek philosopher and scientist. This quote is particularly pertinent to Linux. In my view, one of Linux's biggest strengths is its synergy. The usefulness of Linux doesn't derive only from the huge raft of open source (command line) utilities. Instead, it's the synergy generated by using them together, sometimes in conjunction with larger applications.
The Unix philosophy spawned a "software tools" movement which focused on developing concise, basic, clear, modular and extensible code that can be used for other projects. This philosophy remains an important element for many Linux projects.
Good open source developers writing utilities seek to make sure the utility does its job as well as possible, and works well with other utilities. The goal is that users have a handful of tools, each of which seeks to excel at one thing. Some utilities work well independently.
This article looks at 3 open source command line HTTP clients. These clients let you download files off the internet from a command line. But they can also be used for many more interesting purposes such as testing, debugging and interacting with HTTP servers and web applications. Working with HTTP from the command-line is a worthwhile skill for HTTP architects and API designers. If you need to play around with an API, HTTPie and cURL will be invaluable.
----------
![HTTPie](http://www.linuxlinks.com/portal/content2/png/HTTPie.png)
![HTTPie in action](http://www.linuxlinks.com/portal/content/reviews/Internet/Screenshot-httpie.png)
HTTPie (pronounced aych-tee-tee-pie) is an open source command line HTTP client. It is a command line interface, cURL-like tool for humans.
The goal of this software is to make CLI interaction with web services as human-friendly as possible. It provides a simple http command that allows for sending arbitrary HTTP requests using a simple and natural syntax, and displays colorized output. HTTPie can be used for testing, debugging, and generally interacting with HTTP servers.
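To give you a feel for that natural syntax, here are a few typical invocations. This is only a quick sketch; httpbin.org is used purely as a convenient test endpoint.

    # Simple GET request with formatted, colorized output
    $ http httpbin.org/get
    # Send a JSON body; key=value pairs become JSON fields by default
    $ http POST httpbin.org/post name=Alice role=admin
    # Add a custom header (Header:value) and a query parameter (param==value)
    $ http GET httpbin.org/headers User-Agent:demo-client page==2
    # Download a file to disk, wget-style
    $ http --download httpbin.org/image/png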
#### Features include: ####
- Expressive and intuitive syntax
- Formatted and colorized terminal output
- Built-in JSON support
- Forms and file uploads
- HTTPS, proxies, and authentication
- Arbitrary request data
- Custom headers
- Persistent sessions
- Wget-like downloads
- Python 2.6, 2.7 and 3.x support
- Linux, Mac OS X and Windows support
- Plugins
- Documentation
- Test coverage
- Website: [httpie.org][1]
- Developer: Jakub Roztočil
- License: Open Source
- Version Number: 0.9.2
----------
![cURL](http://www.linuxlinks.com/portal/content2/png/cURL1.png)
![cURL in action](http://www.linuxlinks.com/portal/content/reviews/Internet/Screenshot-cURL.png)
cURL is an open source command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, TELNET and TFTP.
curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos...), file transfer resume, proxy tunneling and a busload of other useful tricks.
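As a rough illustration of that flexibility, here are a few common curl idioms (example.com simply stands in for whatever host you are talking to):

    # Fetch a page and print it to standard output
    $ curl http://example.com/
    # Show only the response headers
    $ curl -I http://example.com/
    # Save a remote file under its original name, resuming if interrupted
    $ curl -O -C - http://example.com/file.iso
    # POST form data and follow any redirects
    $ curl -L -d "name=Alice&role=admin" http://example.com/login
    # Authenticate with a username and password
    $ curl -u user:password ftp://example.com/private/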
#### Features include: ####
- Config file support
- Multiple URLs in a single command line
- Range "globbing" support: [0-13], {one,two,three}
- Multiple file upload on a single command line
- Custom maximum transfer rate
- Redirectable stderr
- Metalink support
- Website: [curl.haxx.se][2]
- Developer: Daniel Stenberg
- License: MIT/X derivate license
- Version Number: 7.42.0
----------
![Wget](http://www.linuxlinks.com/portal/content2/png/Wget1.png)
![Wget in action](http://www.linuxlinks.com/portal/content/reviews/Utilities/Screenshot-Wget.png)
Wget is open source software that retrieves content from web servers. Its name is derived from World Wide Web and get. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.
Wget can follow links in HTML pages and create local versions of remote web sites, fully recreating the directory structure of the original site. This is known as "recursive downloading."
Wget has been designed for robustness over slow or unstable network connections.
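The commands below sketch a few of those capabilities; again, example.com is only a placeholder:

    # Download a single file
    $ wget http://example.com/archive.tar.gz
    # Resume a partially downloaded file
    $ wget -c http://example.com/archive.tar.gz
    # Mirror a site for offline reading, converting links for local browsing
    $ wget --mirror --convert-links --page-requisites http://example.com/
    # Work unattended in the background, quietly, logging to a file
    $ wget -b -q -o download.log http://example.com/archive.tar.gz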
Features include:
- Resume aborted downloads, using REST and RANGE
- Use filename wild cards and recursively mirror directories
- NLS-based message files for many different languages
- Optionally converts absolute links in downloaded documents to relative, so that downloaded documents may link to each other locally
- Runs on most UNIX-like operating systems as well as Microsoft Windows
- Supports HTTP proxies
- Supports HTTP cookies
- Supports persistent HTTP connections
- Unattended / background operation
- Uses local file timestamps to determine whether documents need to be re-downloaded when mirroring
- Website: [www.gnu.org/software/wget/][3]
- Developer: Hrvoje Niksic, Gordon Matzigkeit, Junio Hamano, Dan Harkless, and many others
- License: GNU GPL v3
- Version Number: 1.16.3
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20150425174537249/HTTPclients.html
作者:Frazer Kline
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://httpie.org/
[2]:http://curl.haxx.se/
[3]:https://www.gnu.org/software/wget/

View File

@ -1,4 +1,3 @@
[raywang]
Open source all over the world
================================================================================
![](https://opensource.com/sites/default/files/styles/image-full-size/public/images/business/BUS_OpenSourceExperience_520x292_cm.png)

View File

@ -1,12 +1,8 @@
theol-l translating
The Curious Case of the Disappearing Distros
关于消失的发行版的古怪情形。
================================================================================
![](http://www.linuxinsider.com/ai/828896/linux-distros.jpg)
"Linux is a big game now, with billions of dollars of profit, and it's the best thing since sliced bread, but corporations are taking control, and slowly but systematically, community distros are being killed," said Google+ blogger Alessandro Ebersol. "Linux is slowly becoming just like BSD, where companies use and abuse it and give very little in return."
"Linux现在成为了一个大型的游戏同时具有巨额的利润这是有史以来最好的事情。但是公司企业进行了控制于是缓慢而系统的社区发行版就逐渐被干掉了",Google+的一个博主 Alessandro Ebersol说到。"Linux开始变得像BSD--一些公司使用和滥用但是没有任何回报--一样缓慢。"
Well the holidays are pretty much upon us at last here in the Linux blogosphere, and there's nowhere left to hide. The next two weeks or so promise little more than a blur of forced social occasions and too-large meals, punctuated only -- for the luckier ones among us -- by occasional respite down at the Broken Windows Lounge.

View File

@ -1,4 +1,3 @@
Translating by weychen
10 Top Distributions in Demand to Get Your Dream Job
================================================================================
We are coming up with a series of five articles which aims at making you aware of the top skills which will help you in getting yours dream job. In this competitive world you can not rely on one skill. You need to have balanced set of skills. There is no measure of a balanced skill set except a few conventions and statistics which changes from time-to-time.

View File

@ -1,104 +0,0 @@
【translating】The history of Android
================================================================================
![](http://cdn.arstechnica.net/wp-content/uploads/2014/03/ready-fight.png)
### Android 2.1, update 1—the start of an endless war ###
Google was a major launch partner for the first iPhone—the company provided Google Maps, Search, and YouTube for Apple's mobile operating system. At the time, Google CEO Eric Schmidt was a member of Apple's board of directors. In fact, during the original iPhone presentation, [Schmidt was the first person on stage][1] after Steve Jobs, and he joked that the two companies were so close they could merge into “AppleGoo.”
While Google was developing Android, the relationship between the two companies slowly became contentious. Still, Google largely kept Apple happy by keeping key iPhone features, like pinch zoom, out of Android. The Nexus One, though, was the first slate-style Android flagship without a keyboard, which gave the device the same form factor as the iPhone. Combined with the newer software and Google branding, this was the last straw for Apple. According to Walter Isaacsons biography on Steve Jobs, after seeing the Nexus One in January 2010, the Apple CEO was furious, saying "I will spend my last dying breath if I need to, and I will spend every penny of Apple's $40 billion in the bank, to right this wrong... I'm going to destroy Android, because it's a stolen product. I'm willing to go thermonuclear war on this."
All of this happened behind closed doors, only coming out years after the Nexus One was released. The public first caught wind of this growing rift between Google and Apple when, a month after the release of Android 2.1, an update shipped for the Nexus One called “[2.1 update 1][2].” The update added one feature, something iOS long held over the head of Android: pinch-zoom.
While Android supported multi-touch APIs since version 2.0, the default operating system apps stayed clear of this useful feature at the behest of Jobs. After reconciliation meetings over the Nexus One failed, there was no longer a reason to keep pinch zoom out of Android. Google pushed all their chips into the middle of the table, hit the update button, and was finally “all-in" with Android.
With pinch zoom enabled in Google Maps, the Browser, and the Gallery, the Google-Apple smartphone war was on. In the coming years, the two companies would become bitter enemies. A month after the pinch zoom update, Apple went on the warpath, suing everyone and everything that used Android. HTC, Motorola, and Samsung were all brought to court, and some of them are still in court. Schmidt resigned from Apples board of directors. Google Maps and YouTube were kicked off of the iPhone, and Apple even started a rival mapping service. Today, the two players that were almost "AppleGoo" compete in smartphones, tablets, laptops, movies, TV shows, music, books, apps, e-mail, productivity software, browsers, personal assistants, cloud storage, mobile advertising, instant messaging, mapping, and set-top-boxes... and soon the two will be competing in car computers, wearables, mobile payments, and living room gaming.
### Android 2.2 Froyo—faster and Flash-ier ###
[Android 2.2][3] came out four months after the release of 2.1, in May 2010. Froyo featured major under-the-hood improvements for Android, all made in the name of speed. The biggest addition was just-in-time (JIT) compilation. JIT automatically converted java bytecode into native code at runtime, which led to drastic performance improvements across the board.
The Browser got a performance boost, too, thanks to the integration of the V8 javascript engine from Chrome. This was the first of many features the Android browser would borrow from Chrome, and eventually the stock browser would be completely replaced by a mobile version of Chrome. Until that day came, though, the Android team needed to ship a browser. Pulling in Chrome parts was an easy way to upgrade.
While Google was focusing on making its platform faster, Apple was making its platform bigger. Google's rival released the 10-inch iPad a month earlier, ushering in the modern era of tablets. While some large Froyo and Gingerbread tablets were released, Google's official response—Android 3.0 Honeycomb and the Motorola Xoom—would not arrive for nine months.
![Froyo added a two-icon dock at the bottom and universal search.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/22-2.png)
Froyo added a two-icon dock at the bottom and universal search.
Photo by Ron Amadeo
The biggest change on the Froyo homescreen was the new dock at the bottom, which filled the previously empty space to the left and right of the app drawer with phone and browser icons. Both of these icons were custom-designed white versions of the stock icons, and they were not user-configurable.
The default layout removed all the icons, and it only stuck the new tips widget on the screen, which directed you to click on the launcher icon to access your apps. The Google Search widget gained a Google logo which doubled as a button. Tapping it would open the search interface and allow you to restrict a search by Web, apps, or contacts.
![The downloads page showing the “update all" button, the Flash app, a flash-powered site where anything is possible, and the “move to SD" button. ](http://cdn.arstechnica.net/wp-content/uploads/2014/03/small-market-2.jpg)
The downloads page showing the “update all" button, the Flash app, a flash-powered site where anything is possible, and the “move to SD" button.
Photo by [Ryan Paul][4]
Some of the best additions to Froyo were more download controls for the Android Market. There was now an “Update all" button pinned to the bottom of the Downloads page. Google also added an automatic updating feature, which would automatically install apps as long as the permissions hadn't changed; automatic updating was off by default, though.
The second picture shows Adobe Flash Player, which was exclusive to Froyo. The app plugged in to the browser and allowed for a “full Web" experience. In 2010, this meant pages heavy with Flash navigation and video. Flash was one of Android's big differentiators compared to the iPhone. Steve Jobs started a holy war against Flash, declaring it an obsolete, buggy piece of software, and Apple would not allow it on iOS. So Android picked up the Flash ball and ran with it, giving users the option of having a semi-workable implementation on Android.
At the time, Flash could bring even a desktop computer to its knees, so keeping it on all the time on a mobile phone delivered terrible performance. To fix this, Flash on Android's browser could be set to "on-demand"—Flash content would not load until users clicked on the Flash placeholder icon. Flash support would last on Android until 4.1, when Adobe gave up and killed the project. Ultimately Flash never really worked well on Android. The lack of Flash on the iPhone, the most popular mobile device, pushed the Internet to eventually dump the platform.
The last picture shows the newly added ability to move apps to the SD card, which, in an era when phones came with 512MB of internal storage, was sorely needed.
![The car app and camera app. The camera could now rotate.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/22carcam-2.png)
The car app and camera app. The camera could now rotate.
Photo by Ron Amadeo
The camera app was finally updated to support portrait mode. The camera settings were moved out of the drawer and into a semi-transparent strip of buttons next to the shutter button and other controls. This new design seemed to take a lot of inspiration from the Cooliris Gallery app, with transparent, springy speech bubble popups. It was quite strange to see the high-tech Cooliris-style UI design grafted on to the leather-bound camera app—the aesthetics didn't match at all.
![The semi-broken Facebook app is a good example of the common 2x3 navigation page. Google Goggles was included but also broken.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/facebook.png)
The semi-broken Facebook app is a good example of the common 2x3 navigation page. Google Goggles was included but also broken.
Photo by Ron Amadeo
Unlike the Facebook client included in Android 2.0 and 2.1, the 2.2 version still sort of works and can sign in to Facebook's servers. The Facebook app is a good example of Google's design guidelines for apps at the time, which suggested having a navigational page consisting of a 3x2 grid of icons as the main page of an app.
This was Google's first standardized attempt at getting navigational elements out of the menu button and onto the screen, where users could find them. This design was usable, but it added an extra roadblock between launching an app and using an app. Google would later realize that when users launch an app, it was a better idea to show them content instead of an interstitial navigational screen. In Facebook for instance, opening to the news feed would be much more appropriate. And later app designs would relegate navigation to a second-tier location—first as tabs at the top of the screen, and later Google would settle on the "Navigation Drawer," a slide-out panel containing all the locations in an app.
Also packed in with Froyo was Google Goggles, a visual search app which would try to identify the subject of a picture. It was useful for identifying works of art, landmarks, and barcodes, but not much else. These first two setup screens, along with the camera interface, are all that work in the app anymore. Today, you can't actually complete a search with a client this old. There wasn't much to see anyway; it was a camera interface that returned a search results page.
![The Twitter app, which was an animation-filled collaboration between Google and Twitter.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/twitters-2.png)
The Twitter app, which was an animation-filled collaboration between Google and Twitter.
Photo by Ron Amadeo
Froyo included the first Android Twitter app, which was actually a collaboration between Google and Twitter. At the time, a Twitter app was one of the big holes in Android's app lineup. Developers favored the iPhone, and with Apple's head start and stringent design requirements, the App Store's app selection was far superior to Android's. But Google needed a Twitter app, so it teamed up with the company to get the first version out the door.
This represented Google's newer design language, which meant it had an interstitial navigation page and a "tech-demo" approach to animations. The Twitter app was even more heavy-handed with animation effects than the Cooliris Gallery—everything moved all the time. The clouds at the top and bottom of every page continually scrolled at varying speeds, and the Twitter bird at the bottom flapped its wings and moved its head left and right.
The Twitter app actually featured an early precursor to the Action Bar, a persistent strip of top-aligned controls that was introduced in Android 3.0 . Along the top of every screen was a blue bar containing the Twitter logo and buttons like search, refresh, and compose tweet. The big difference between this and the later action bars was that the Twitter/Google design lacks an "Up" button in the top right corner, and it actually uses an entire second bar to show your current location within the app. In the second picture above, you can see a whole bar dedicated to the location label "Tweets" (and, of course, the continuously scrolling clouds). The Twitter logo in the second bar acted as another navigational element, sometimes showing additional drill down areas within the current section and sometimes showing the entire top-level shortcut group.
The 2.3 Tweet stream didn't look much different from what it does today, save for the hidden action buttons (reply, retweet, etc), which were all under the right-aligned arrow buttons. They popped up in a speech bubble menu that looked just like the navigational popup. The faux-action bar was doing serious work on the create tweet page. It housed the twitter logo, remaining character count, and buttons to attach a picture, take a picture, and a contact mention button.
The Twitter app even came with a pair of home screen widgets. The big one took up eight slots and gave you a compose bar, update button, one tweet, and left and right arrows to view more tweets. The little one showed a tweet and reply button. Tapping on the compose bar on the large widget immediately launched the main "Create Tweet," rendering the "update" button worthless.
![Google Talk and the new USB dialog.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/talkusb.png)
Google Talk and the new USB dialog.
Photo by Ron Amadeo
Elsewhere, Google Talk (and the unpictured SMS app) changed from a dark theme to a light theme, which made both of them look a lot closer to the current, modern apps. The USB storage screen that popped up when you plugged into a computer changed from a simple dialog box to a full screen interface. Instead of a text-only design, the screen now had a mutant Android/USB-stick hybrid.
While Android 2.2 didnt feature much in the way of user-facing features, a major UI overhaul was coming in the next two versions. Before all the UI work, though, Google wanted to revamp the core of Android. Android 2.2 accomplished that.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/13/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.youtube.com/watch?v=9hUIxyE2Ns8#t=3016
[2]:http://arstechnica.com/gadgets/2010/02/googles-nexus-one-gets-multitouch/
[3]:http://arstechnica.com/information-technology/2010/07/android-22-froyo/
[4]:http://arstechnica.com/information-technology/2010/07/android-22-froyo/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -1,84 +0,0 @@
alim0x translating
The history of Android
================================================================================
### Voice Actions—a supercomputer in your pocket ###
In August 2010, a new feature “[Voice Actions][1]" launched in the Android Market as part of the Voice Search app. Voice Actions allowed users to issue voice commands to their phone, and Android would try to interpret them and do something smart. Something like "Navigate to [address]" would fire up Google Maps and start turn-by-turn navigation to your stated destination. You could also send texts or e-mails, make a call, open a Website, get directions, or view a location on a map—all just by speaking.
(YouTube video)
<iframe width="500" height="281" frameborder="0" src="http://www.youtube-nocookie.com/embed/gGbYVvU0Z5s?start=0&amp;wmode=transparent" type="text/html" style="display:block"></iframe>
Voice Actions was the culmination of a new app design philosophy for Google. Voice Actions was the most advanced voice control software for its time, and the secret was that Google wasnt doing any computing on the device. In general, voice recognition was very CPU intensive. In fact, many voice recognition programs still have a “speed versus accuracy" setting, where users can choose how long they are willing to wait for the voice recognition algorithms to work—more CPU power means better accuracy.
Googles innovation was not bothering to do the voice recognition computing on the phones limited processor. When a command was spoken, the users voice was packaged up and shipped out over the Internet to Googles cloud servers. There, Googles farm of supercomputers pored over the message, interpreted it, and shipped it back to the phone. It was a long journey, but the Internet was finally fast enough to accomplish something like this in a second or two.
Many people throw the phrase “cloud computing" around to mean “anything that is stored on a server," but this was actual cloud computing. Google was doing hardcore compute operations in the cloud, and because it is throwing a ridiculous amount of CPU power at the problem, the only limit to the voice recognition accuracy is the algorithms themselves. The software didn't need to be individually “trained" by each user, because everyone who used Voice Actions was training it all the time. Using the power of the Internet, Android put a supercomputer in your pocket, and, compared to existing solutions, moving the voice recognition workload from a pocket-sized computer to a room-sized computer greatly increased accuracy.
Voice recognition had been a project of Google's for some time, and it all started with an 800 number. [1-800-GOOG-411][2] was a free phone information service that Google launched in April 2007. It worked just like 411 information services had for years—users could call the number and ask for a phone book lookup—but Google offered it for free. No humans were involved in the lookup process; the 411 service was powered by voice recognition and a text-to-speech engine. Voice Actions was only possible after three years of the public teaching Google how to hear.
Voice recognition was a great example of Google's extremely long-term thinking—the company wasn't afraid to invest in a project that wouldn't become a commercial product for several years. Today, voice recognition powers products all across Google. It's used for voice input in the Google Search app, Android's voice typing, and on Google.com. It's also the primary input interface for Google Glass and [Android Wear][3].
The company even uses it beyond input. Google's voice recognition technology is used to transcribe YouTube videos, which powers automatic closed captioning for the hearing impaired. The transcription is even indexed by Google, so you can search for words that were said in the video. Voice is the future of many products, and this long-term planning has led Google to be one of the few major tech companies with an in-house voice recognition service. Most other voice recognition products, like Apples Siri and Samsung devices, are forced to use—and pay a license fee for—voice recognition from Nuance.
With the computer hearing system up and running, Google is applying this strategy to computer vision next. That's why things like Google Goggles, Google Image Search, and [Project Tango][4] exist. Just like the days of GOOG-411, these projects are in the early stages. When [Google's robot division][5] gets off the ground with a real robot, it will need to see and hear, and Google's computer vision and hearing projects will likely give the company a head start.
![The Nexus S, the first Nexus phone made by Samsung.](http://cdn.arstechnica.net/wp-content/uploads/2014/03/NS500.png)
The Nexus S, the first Nexus phone made by Samsung.
### Android 2.3 Gingerbread—the first major UI overhaul ###
Gingerbread was released in December 2010, a whopping seven months after the release of 2.2. The wait was worth it, though, as Android 2.3 changed just about every screen in the OS. It was the first major overhaul since the initial formation of Android in version 0.9. 2.3 would kick off a series of continual revamps in an attempt to turn Android from an ugly duckling into something that was capable of holding its own—aesthetically—against the iPhone.
And speaking of Apple, six months earlier, the company released the iPhone 4 and iOS 4, which added multitasking and Facetime video chat. Microsoft was finally back in the game, too. The company jumped into the modern smartphone era with the launch of Windows Phone 7 in November 2010.
Android 2.3 focused a lot on the interface design, but with no direction or design documents, many apps ended up getting a new bespoke theme. Some apps went with a flatter, darker theme, some used a gradient-filled, bubbly dark theme, and others went with a high-contrast white and green look. While it wasn't cohesive, Gingerbread accomplished the goal of modernizing nearly every part of the OS. It was a good thing, too, because the next phone version of Android wouldnt arrive until nearly a year later.
Gingerbreads launch device was the Nexus S, Googles second flagship device and the first Nexus manufactured by Samsung. While today we are used to new CPU models every year, back then that wasn't the case. The Nexus S had a 1GHz Cortex A8 processor, just like the Nexus One. The GPU was slightly faster, and that was it in the speed department. It was a little bigger than the Nexus One, with a 4-inch, 800×480 AMOLED display.
Spec wise, the Nexus S might seem like a tame upgrade, but it was actually home to a lot of firsts for Android. The Nexus S was Googles first flagship to shun a MicroSD slot, shipping with 16GB on-board memory. The Nexus One had only 512MB of storage, but it had a MicroSD slot. Removing the SD slot simplified storage management for users—there was just one pool now—but hurt expandability for power users. It was also Google's first phone to have NFC, a special chip in the back of the phone that could transfer information when touched to another NFC chip. For now, the Nexus S could only read NFC tags—it couldn't send data.
Thanks to some upgrades in Gingerbread, the Nexus S was one of the first Android phones to ship without a hardware D-Pad or trackball. The Nexus S was now down to just the power, volume, and the four navigation buttons. The Nexus S was also a precursor to the [crazy curved-screen phones][6] of today, as Samsung outfitted the Nexus S with a piece of slightly curved glass.
![Gingerbread changed the status bar and wallpaper, and it added a bunch of new icons.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/appdrawershop.png)
Gingerbread changed the status bar and wallpaper, and it added a bunch of new icons.
Photo by Ron Amadeo
An upgraded "Nexus" live wallpaper was released as an exclusive addition to the Nexus S. It was basically the same idea as the Nexus One version, with its animated streaks of light. On the Nexus S, the "grid" design was removed and replaced with a wavy blue/gray background. The dock at the bottom was given square corners and colored icons.
![The new notification panel and menu.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/32.png)
The new notification panel and menu.
Photo by Ron Amadeo
The status bar was finally overhauled from the version that first debuted in 0.9. The bar was changed from a white gradient to flat black, and all the icons were redrawn in gray and green. Just about everything looked crisper and more modern thanks to the sharp-angled icon design and higher resolution. The strangest decisions were probably the removal of the time period from the status bar clock and the confusing shade of gray that was used for the signal bars. Despite gray being used for many status bar icons, and there being four gray bars in the above screenshot, Android was actually indicating no cellular signal. Green bars would indicate a signal, gray bars indicated “empty" signal slots.
The green status bar icons in Gingerbread also doubled as a status indicator of network connectivity. If you had a working connection to Google's servers, the icons would be green, if there was no connection to Google, the icons turned white. This let you easily identify the connectivity status of your connection while you were out and about.
The notification panel was changed from the aging Android 1.5 design. Again, we saw a UI piece that changed from a light theme to a dark theme, getting a dark gray header, black background, and black-on-gray text.
The menu was darkened too, changing from a white background to a black one with a slight transparency. The contrast between the menu icons and the background wasnt as strong as it should be, because the gray icons are the same color as they were on the white background. Requiring a color change would mean every developer would have to make new icons, so Google went with the preexisting gray color on black. This was a change at the system level, so this new menu would show up in every app.
----------
![Ron Amadeo](http://cdn.arstechnica.net/wp-content//uploads/authors/ron-amadeo-sq.jpg)
[Ron Amadeo][a] / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.
[@RonAmadeo][t]
--------------------------------------------------------------------------------
via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/14/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://arstechnica.com/gadgets/2010/08/google-beefs-up-voice-search-mobile-sync/
[2]:http://arstechnica.com/business/2007/04/google-rolls-out-free-411-service/
[3]:http://arstechnica.com/gadgets/2014/03/in-depth-with-android-wear-googles-quantum-leap-of-a-smartwatch-os/
[4]:http://arstechnica.com/gadgets/2014/02/googles-project-tango-is-a-smartphone-with-kinect-style-computer-vision/
[5]:http://arstechnica.com/gadgets/2013/12/google-robots-former-android-chief-will-lead-google-robotics-division/
[6]:http://arstechnica.com/gadgets/2013/12/lg-g-flex-review-form-over-even-basic-function/
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -1,3 +1,5 @@
alim0x translating
The history of Android
================================================================================
![Gingerbread's new keyboard, text selection UI, overscroll effect, and new checkboxes.](http://cdn.arstechnica.net/wp-content/uploads/2014/02/3kb-high-over-check.png)
@ -83,4 +85,4 @@ via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-histor
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://arstechnica.com/author/ronamadeo
[t]:https://twitter.com/RonAmadeo
[t]:https://twitter.com/RonAmadeo

View File

@ -1,8 +1,7 @@
Translating by ly0
Linux下一些蛮不错的音频编辑软件
What is good audio editing software on Linux
================================================================================
无论你是一个业余的音乐家或者仅仅是一个上课撸教授音的学你总是需要和录音打交道。如果你有很长的时间仅仅用Mac干这种事情那么可以和这个过程说拜拜了现在Linux也可以干同样的事情。简而言之这里有一个简单但是不错的音频编辑软件列表来满足你对不同任务和需求。
Whether you are an amateur musician or just a student recording his professor, you need to edit and work with audio recordings. If for a long time such task was exclusively attributed to Macintosh, this time is over, and Linux now has what it takes to do the job. In short, here is a non-exhaustive list of good audio editing software, fit for different tasks and needs.
### 1. Audacity ###

View File

@ -0,0 +1,444 @@
Web Caching Basics: Terminology, HTTP Headers, and Caching Strategies
=====================================================================
### Introduction
Intelligent content caching is one of the most effective ways to improve
the experience for your site's visitors. Caching, or temporarily storing
content from previous requests, is part of the core content delivery
strategy implemented within the HTTP protocol. Components throughout the
delivery path can all cache items to speed up subsequent requests,
subject to the caching policies declared for the content.
In this guide, we will discuss some of the basic concepts of web content
caching. This will mainly cover how to select caching policies to ensure
that caches throughout the internet can correctly process your content.
We will talk about the benefits that caching affords, the side effects
to be aware of, and the different strategies to employ to provide the
best mixture of performance and flexibility.
What Is Caching?
----------------
Caching is the term for storing reusable responses in order to make
subsequent requests faster. There are many different types of caching
available, each of which has its own characteristics. Application caches
and memory caches are both popular for their ability to speed up certain
responses.
Web caching, the focus of this guide, is a different type of cache. Web
caching is a core design feature of the HTTP protocol meant to minimize
network traffic while improving the perceived responsiveness of the
system as a whole. Caches are found at every level of a content's
journey from the original server to the browser.
Web caching works by caching the HTTP responses for requests according
to certain rules. Subsequent requests for cached content can then be
fulfilled from a cache closer to the user instead of sending the request
all the way back to the web server.
Benefits
--------
Effective caching aids both content consumers and content providers.
Some of the benefits that caching brings to content delivery are:
- **Decreased network costs**: Content can be cached at various points
in the network path between the content consumer and content origin.
When the content is cached closer to the consumer, requests will not
cause much additional network activity beyond the cache.
- **Improved responsiveness**: Caching enables content to be retrieved
faster because an entire network round trip is not necessary. Caches
maintained close to the user, like the browser cache, can make this
retrieval nearly instantaneous.
- **Increased performance on the same hardware**: For the server where
the content originated, more performance can be squeezed from the
same hardware by allowing aggressive caching. The content owner can
leverage the powerful servers along the delivery path to take the
brunt of certain content loads.
- **Availability of content during network interruptions**: With
certain policies, caching can be used to serve content to end users
even when it may be unavailable for short periods of time from the
origin servers.
Terminology
-----------
When dealing with caching, there are a few terms that you are likely to
come across that might be unfamiliar. Some of the more common ones are
below:
- **Origin server**: The origin server is the original location of the
content. If you are acting as the web server administrator, this is
the machine that you control. It is responsible for serving any
content that could not be retrieved from a cache along the request
route and for setting the caching policy for all content.
- **Cache hit ratio**: A cache's effectiveness is measured in terms of
its cache hit ratio or hit rate. This is a ratio of the requests
able to be retrieved from a cache to the total requests made. A high
cache hit ratio means that a high percentage of the content was able
to be retrieved from the cache. This is usually the desired outcome
for most administrators.
- **Freshness**: Freshness is a term used to describe whether an item
within a cache is still considered a candidate to serve to a client.
Content in a cache will only be used to respond if it is within the
freshness time frame specified by the caching policy.
- **Stale content**: Items in the cache expire according to the cache
freshness settings in the caching policy. Expired content is
"stale". In general, expired content cannot be used to respond to
client requests. The origin server must be re-contacted to retrieve
the new content or at least verify that the cached content is still
accurate.
- **Validation**: Stale items in the cache can be validated in order
to refresh their expiration time. Validation involves checking in
with the origin server to see if the cached content still represents
the most recent version of the item (see the example after this list).
- **Invalidation**: Invalidation is the process of removing content
from the cache before its specified expiration date. This is
necessary if the item has been changed on the origin server and
having an outdated item in cache would cause significant issues for
the client.
There are plenty of other caching terms, but the ones above should help
you get started.
What Can be Cached?
-------------------
Certain content lends itself more readily to caching than other content.
Some very cache-friendly content for most sites includes:
- Logos and brand images
- Non-rotating images in general (navigation icons, for example)
- Style sheets
- General Javascript files
- Downloadable Content
- Media Files
These tend to change infrequently, so they can benefit from being cached
for longer periods of time.
Some items that you have to be careful in caching are:
- HTML pages
- Rotating images
- Frequently modified Javascript and CSS
- Content requested with authentication cookies
Some items that should almost never be cached are:
- Assets related to sensitive data (banking info, etc.)
- Content that is user-specific and frequently changed
In addition to the above general rules, it's possible to specify
policies that allow you to cache different types of content
appropriately. For instance, if authenticated users all see the same
view of your site, it may be possible to cache that view anywhere. If
authenticated users see a user-sensitive view of the site that will be
valid for some time, you may tell the user's browser to cache, but tell
any intermediary caches not to store the view.
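As a rough sketch of that last distinction (the header values here are
purely illustrative), the two situations might be expressed like this:

    # Same view for every authenticated user: any cache may store it
    Cache-Control: public, max-age=3600

    # User-sensitive view: only the user's browser may keep a copy
    Cache-Control: private, max-age=600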
Locations Where Web Content Is Cached
-------------------------------------
Content can be cached at many different points throughout the delivery
chain:
- **Browser cache**: Web browsers themselves maintain a small cache.
Typically, the browser sets a policy that dictates the most
important items to cache. This may be user-specific content or
content deemed expensive to download and likely to be requested
again.
- **Intermediary caching proxies**: Any server in between the client
and your infrastructure can cache certain content as desired. These
caches may be maintained by ISPs or other independent parties.
- **Reverse Cache**: Your server infrastructure can implement its own
cache for backend services. This way, content can be served from the
point-of-contact instead of hitting backend servers on each request.
Each of these locations can, and often does, cache items according to
its own caching policies and the policies set at the content origin.
Caching Headers
---------------
Caching policy is dependent upon two different factors. The caching
entity itself gets to decide whether or not to cache acceptable content.
It can decide to cache less than it is allowed to cache, but never more.
The majority of caching behavior is determined by the caching policy,
which is set by the content owner. These policies are mainly articulated
through the use of specific HTTP headers.
Through various iterations of the HTTP protocol, a few different
cache-focused headers have arisen with varying levels of sophistication.
The ones you probably still need to pay attention to are below:
- **`Expires`**: The `Expires` header is very straight-forward,
although fairly limited in scope. Basically, it sets a time in the
future when the content will expire. At this point, any requests for
the same content will have to go back to the origin server. This
header is probably best used only as a fallback.
- **`Cache-Control`**: This is the more modern replacement for the
`Expires` header. It is well supported and implements a much more
flexible design. In almost all cases, this is preferable to
`Expires`, but it may not hurt to set both values. We will discuss
the specifics of the options you can set with `Cache-Control` a bit
later.
- **`Etag`**: The `Etag` header is used with cache validation. The
origin can provide a unique `Etag` for an item when it initially
serves the content. When a cache needs to validate the content it
has on-hand upon expiration, it can send back the `Etag` it has for
the content. The origin will either tell the cache that the content
is the same, or send the updated content (with the new `Etag`).
- **`Last-Modified`**: This header specifies the last time that the
item was modified. This may be used as part of the validation
strategy to ensure fresh content.
- **`Content-Length`**: While not specifically involved in caching,
the `Content-Length` header is important to set when defining
caching policies. Certain software will refuse to cache content if
it does not know in advance the size of the content it will need to
reserve space for.
- **`Vary`**: A cache typically uses the requested host and the path
to the resource as the key with which to store the cache item. The
`Vary` header can be used to tell caches to pay attention to an
additional header when deciding whether a request is for the same
item. This is most commonly used to tell caches to key by the
`Accept-Encoding` header as well, so that the cache will know to
differentiate between compressed and uncompressed content.
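To make the list above more concrete, a cacheable, compressed
stylesheet might come back with a set of headers along these lines (all
values are hypothetical):

    HTTP/1.1 200 OK
    Content-Length: 13792
    Cache-Control: public, max-age=86400
    Expires: Wed, 04 Feb 2015 10:00:00 GMT
    Etag: "5d41402abc4b2a76"
    Last-Modified: Tue, 03 Feb 2015 10:00:00 GMT
    Vary: Accept-Encoding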
### An Aside about the Vary Header
The `Vary` header provides you with the ability to store different
versions of the same content at the expense of diluting the entries in
the cache.
In the case of `Accept-Encoding`, setting the `Vary` header allows for a
critical distinction to take place between compressed and uncompressed
content. This is needed to correctly serve these items to browsers that
cannot handle compressed content and is necessary in order to provide
basic usability. One characteristic that tells you that
`Accept-Encoding` may be a good candidate for `Vary` is that it only has
two or three possible values.
Items like `User-Agent` might at first glance seem to be a good way to
differentiate between mobile and desktop browsers to serve different
versions of your site. However, since `User-Agent` strings are
non-standard, the result will likely be many versions of the same
content on intermediary caches, with a very low cache hit ratio. The
`Vary` header should be used sparingly, especially if you do not have
the ability to normalize the requests in intermediate caches that you
control (which may be possible, for instance, if you leverage a content
delivery network).
How Cache-Control Flags Impact Caching
--------------------------------------
Above, we mentioned how the `Cache-Control` header is used for modern
cache policy specification. A number of different policy instructions
can be set using this header, with multiple instructions being separated
by commas.
Some of the `Cache-Control` options you can use to dictate your
content's caching policy are:
- **`no-cache`**: This instruction specifies that any cached content
must be re-validated on each request before being served to a
client. This, in effect, marks the content as stale immediately, but
allows it to use revalidation techniques to avoid re-downloading the
entire item again.
- **`no-store`**: This instruction indicates that the content cannot
be cached in any way. This is appropriate to set if the response
represents sensitive data.
- **`public`**: This marks the content as public, which means that it
can be cached by the browser and any intermediate caches. For
requests that utilized HTTP authentication, responses are marked
`private` by default. This header overrides that setting.
- **`private`**: This marks the content as `private`. Private content
may be stored by the user's browser, but must *not* be cached by any
intermediate parties. This is often used for user-specific data.
- **`max-age`**: This setting configures the maximum age that the
content may be cached before it must revalidate or re-download the
content from the origin server. In essence, this replaces the
`Expires` header for modern browsing and is the basis for
determining a piece of content's freshness. This option takes its
value in seconds with a maximum valid freshness time of one year
(31536000 seconds).
- **`s-maxage`**: This is very similar to the `max-age` setting, in
that it indicates the amount of time that the content can be cached.
The difference is that this option is applied only to intermediary
caches. Combining this with the above allows for more flexible
policy construction.
- **`must-revalidate`**: This indicates that the freshness information
indicated by `max-age`, `s-maxage` or the `Expires` header must be
obeyed strictly. Stale content cannot be served under any
circumstance. This prevents cached content from being used in case
of network interruptions and similar scenarios.
- **`proxy-revalidate`**: This operates the same as the above setting,
but only applies to intermediary proxies. In this case, the user's
browser can potentially be used to serve stale content in the event
of a network interruption, but intermediate caches cannot be used
for this purpose.
- **`no-transform`**: This option tells caches that they are not
allowed to modify the received content for performance reasons under
any circumstances. This means, for instance, that the cache is not
allowed to send compressed versions of content that it did not receive
from the origin server in compressed form.
These can be combined in different ways to achieve various caching
behavior. Some mutually exclusive values are:
- `no-cache`, `no-store`, and the regular caching behavior indicated
by absence of either
- `public` and `private`
The `no-store` option supersedes the `no-cache` if both are present. For
responses to unauthenticated requests, `public` is implied. For
responses to authenticated requests, `private` is implied. These can be
overridden by including the opposite option in the `Cache-Control`
header.
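To tie these flags together, a few hypothetical policies might look
like this (the values are examples only, not recommendations):

    # Static asset: any cache may keep it for a day, but must
    # revalidate it strictly once it goes stale
    Cache-Control: public, max-age=86400, must-revalidate

    # Per-user page: only the browser may cache it, and only briefly
    Cache-Control: private, max-age=300

    # Sensitive response: never store it anywhere
    Cache-Control: no-store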
Developing a Caching Strategy
-----------------------------
In a perfect world, everything could be cached aggressively and your
servers would only be contacted to validate content occasionally. This
doesn't often happen in practice though, so you should try to set some
sane caching policies that aim to balance between implementing long-term
caching and responding to the demands of a changing site.
### Common Issues
There are many situations where caching cannot or should not be
implemented due to how the content is produced (dynamically generated
per user) or the nature of the content (sensitive banking information,
for example). Another problem that many administrators face when setting
up caching is the situation where older versions of your content are out
in the wild, not yet stale, even though new versions have been
published.
These are both frequently encountered issues that can have serious
impacts on cache performance and the accuracy of content you are
serving. However, we can mitigate these issues by developing caching
policies that anticipate these problems.
### General Recommendations
While your situation will dictate the caching strategy you use, the
following recommendations can help guide you towards some reasonable
decisions.
There are certain steps that you can take to increase your cache hit
ratio before worrying about the specific headers you use. Some ideas
are:
- **Establish specific directories for images, css, and shared
content**: Placing content into dedicated directories will allow you
to easily refer to them from any page on your site.
- **Use the same URL to refer to the same items**: Since caches key
off of both the host and the path to the content requested, ensure
that you refer to your content in the same way on all of your pages.
The previous recommendation makes this significantly easier.
- **Use CSS image sprites where possible**: CSS image sprites for
items like icons and navigation decrease the number of round trips
needed to render your site and allow your site to cache that single
sprite for a long time.
- **Host scripts and external resources locally where possible**: If
you utilize javascript scripts and other external resources,
consider hosting those resources on your own servers if the correct
headers are not being provided upstream. Note that you will have to
be aware of any updates made to the resource upstream so that you
can update your local copy.
- **Fingerprint cache items**: For static content like CSS and
Javascript files, it may be appropriate to fingerprint each item.
This means adding a unique identifier to the filename (often a hash
of the file) so that if the resource is modified, the new resource
name can be requested, causing the requests to correctly bypass the
cache. There are a variety of tools that can assist in creating
fingerprints and modifying the references to them within HTML
documents.
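As a minimal shell sketch of the fingerprinting idea above (the paths
are hypothetical; real projects usually let their build tooling handle
this step):

    # Derive a short hash of the file's content and bake it into the name,
    # e.g. style.css -> style.5d41402a.css; any change produces a new name.
    hash=$(md5sum css/style.css | cut -c1-8)
    cp css/style.css "css/style.${hash}.css"
    echo "Reference css/style.${hash}.css from your HTML instead of style.css"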
In terms of selecting the correct headers for different items, the
following can serve as a general reference:
- **Allow all caches to store generic assets**: Static content and
content that is not user-specific can and should be cached at all
points in the delivery chain. This will allow intermediary caches to
respond with the content for multiple users.
- **Allow browsers to cache user-specific assets**: For per-user
content, it is often acceptable and useful to allow caching within
the user's browser. While this content would not be appropriate to
cache on any intermediary caching proxies, caching in the browser
will allow for instant retrieval for users during subsequent visits.
- **Make exceptions for essential time-sensitive content**: If you
have content that is time-sensitive, make an exception to the above
rules so that outdated content is not served in critical
situations. For instance, if your site has a shopping cart, it
should reflect the items in the cart immediately. Depending on the
nature of the content, the `no-cache` or `no-store` options can be
set in the `Cache-Control` header to achieve this.
- **Always provide validators**: Validators allow stale content to be
refreshed without having to download the entire resource again.
Setting the `Etag` and the `Last-Modified` headers allow caches to
validate their content and re-serve it if it has not been modified
at the origin, further reducing load.
- **Set long freshness times for supporting content**: In order to
leverage caching effectively, elements that are requested as
supporting content to fulfill a request should often have a long
freshness setting. This is generally appropriate for items like
images and CSS that are pulled in to render the HTML page requested
by the user. Setting extended freshness times, combined with
fingerprinting, allows caches to store these resources for long
periods of time. If the assets change, the modified fingerprint will
invalidate the cached item and will trigger a download of the new
content. Until then, the supporting items can be cached far into the
future.
- **Set short freshness times for parent content**: In order to make
the above scheme work, the containing item must have relatively
short freshness times or may not be cached at all. This is typically
the HTML page that calls in the other assisting content. The HTML
itself will be downloaded frequently, allowing it to respond to
changes rapidly. The supporting content can then be cached
aggressively.
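Putting the last two recommendations together, the resulting policies
for a hypothetical site might look roughly like this (paths and values
are illustrative only):

    # Fingerprinted supporting assets: cache aggressively everywhere
    # e.g. /assets/app.5d41402a.css, /assets/app.5d41402a.js
    Cache-Control: public, max-age=31536000

    # The HTML page that references them: revalidate on every request
    # e.g. /index.html
    Cache-Control: no-cache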
The key is to strike a balance that favors aggressive caching where
possible while leaving opportunities to invalidate entries in the future
when changes are made. Your site will likely have a combination of:
- Aggressively cached items
- Cached items with a short freshness time and the ability to
re-validate
- Items that should not be cached at all
The goal is to move content into the first categories when possible
while maintaining an acceptable level of accuracy.
Conclusion
----------
Taking the time to ensure that your site has proper caching policies in
place can have a significant impact on your site. Caching allows you to
cut down on the bandwidth costs associated with serving the same content
repeatedly. Your server will also be able to handle a greater amount of
traffic with the same hardware. Perhaps most importantly, clients will
have a faster experience on your site, which may lead them to return
more frequently. While effective web caching is not a silver bullet,
setting up appropriate caching policies can give you measurable gains
with minimal work.
---
作者: [Justin Ellingwood](https://www.digitalocean.com/community/users/jellingwood)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
推荐:[royaso](https://github.com/royaso)
via: https://www.digitalocean.com/community/tutorials/web-caching-basics-terminology-http-headers-and-caching-strategies
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,136 +0,0 @@
Interface (NICs) Bonding in Linux using nmcli
================================================================================
Today, we'll learn how to perform interface (NIC) bonding on CentOS 7.x using nmcli (Network Manager Command Line Interface).
NIC (interface) bonding is a method for linking **NICs** together logically to allow fail-over or higher throughput. One of the ways to increase the network availability of a server is by using multiple network interfaces. The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical bonded interface. It is a new implementation that does not affect the older bonding driver in the Linux kernel, offering an alternative approach.
**NIC bonding is done to provide two main benefits for us:**
1. **High bandwidth**
1. **Redundancy/resilience**
Now let's configure NIC bonding on CentOS 7. We'll need to decide which interfaces we would like to configure into a team interface.
Run the **ip link** command to check the available interfaces in the system.
$ ip link
![ip link](http://blog.linoxide.com/wp-content/uploads/2015/01/ip-link.png)
Here we are using **eno16777736** and **eno33554960** NICs to create a team interface in **activebackup** mode.
Use the **nmcli** command to create a connection for the network team interface, with the following syntax.
# nmcli con add type team con-name CNAME ifname INAME [config JSON]
Here **CNAME** will be the name used to refer to the connection, **INAME** will be the interface name, and **JSON** (JavaScript Object Notation) specifies the runner to be used. **JSON** has the following syntax:
'{"runner":{"name":"METHOD"}}'
where **METHOD** is one of the following: **broadcast, activebackup, roundrobin, loadbalance** or **lacp**.
### 1. Creating Team Interface ###
Now let us create the team interface. Here is the command we use to create it.
# nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"activebackup"}}'
![nmcli con create](http://blog.linoxide.com/wp-content/uploads/2015/01/nmcli-con-create.png)
Run the **# nmcli con show** command to verify the team configuration.
# nmcli con show
![Show Teamed Interface](http://blog.linoxide.com/wp-content/uploads/2015/01/show-team-interface.png)
### 2. Adding Slave Devices ###
Now let's add the slave devices to the master team0. Here is the syntax for adding the slave devices.
# nmcli con add type team-slave con-name CNAME ifname INAME master TEAM
Here we are adding **eno16777736** and **eno33554960** as slave devices for **team0** interface.
# nmcli con add type team-slave con-name team0-port1 ifname eno16777736 master team0
# nmcli con add type team-slave con-name team0-port2 ifname eno33554960 master team0
![adding slave devices to team](http://blog.linoxide.com/wp-content/uploads/2015/01/adding-to-team.png)
Verify the connection configuration using **# nmcli con show** again. Now we can see the slave configuration.
#nmcli con show
![show slave config](http://blog.linoxide.com/wp-content/uploads/2015/01/show-slave-config.png)
### 3. Assigning IP Address ###
All the above commands will create the required configuration files under **/etc/sysconfig/network-scripts/**.
Let's assign an IP address to this team0 interface and enable the connection now. Here are the commands to perform the IP assignment.
# nmcli con mod team0 ipv4.addresses "192.168.1.24/24 192.168.1.1"
# nmcli con mod team0 ipv4.method manual
# nmcli con up team0
![ip assignment](http://blog.linoxide.com/wp-content/uploads/2015/01/ip-assignment.png)
### 4. Verifying the Bonding ###
Verify the IP address information with the **# ip add show team0** command.
#ip add show team0
![verfiy ip address](http://blog.linoxide.com/wp-content/uploads/2015/01/verfiy-ip-adress.png)
Now let's check the **activebackup** configuration functionality using the **teamdctl** command.
# teamdctl team0 state
![teamdctl active backup check](http://blog.linoxide.com/wp-content/uploads/2015/01/teamdctl-activebackup-check.png)
Now let's disconnect the active port and check the state again to confirm whether the active backup configuration is working as expected.
# nmcli dev dis eno33554960
![disconnect activeport](http://blog.linoxide.com/wp-content/uploads/2015/01/disconnect-activeport.png)
With the active port disconnected, check the state again using **# teamdctl team0 state**.
# teamdctl team0 state
![teamdctl check activeport disconnect](http://blog.linoxide.com/wp-content/uploads/2015/01/teamdctl-check-activeport-disconnect.png)
Yes, it's working nicely! We will reconnect the disconnected port to team0 using the following command.
#nmcli dev con eno33554960
![nmcli dev connect disconected](http://blog.linoxide.com/wp-content/uploads/2015/01/nmcli-dev-connect-disconected.png)
We have one more command called **teamnl**; let us show some of its options.
To check the ports in team0, run the following command.
# teamnl team0 ports
![teamnl check ports](http://blog.linoxide.com/wp-content/uploads/2015/01/teamnl-check-ports.png)
Display the currently active port of **team0**.
# teamnl team0 getoption activeport
![display active port team0](http://blog.linoxide.com/wp-content/uploads/2015/01/display-active-port-team0.png)
Hurray, we have successfully configured NIC bonding :-) Please share any feedback you have.
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-command/interface-nics-bonding-linux/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/

View File

@ -1,4 +1,3 @@
translating by coloka
What are useful command-line network monitors on Linux
================================================================================
Network monitoring is a critical IT function for businesses of all sizes. The goal of network monitoring can vary. For example, the monitoring activity can be part of long-term network provisioning, security protection, performance troubleshooting, network usage accounting, and so on. Depending on its goal, network monitoring is done in many different ways, such as performing packet-level sniffing, collecting flow-level statistics, actively injecting probes into the network, parsing server logs, etc.

View File

@ -1,5 +1,3 @@
Translating by Medusar
How to make a file immutable on Linux
================================================================================
Suppose you want to write-protect some important files on Linux, so that they cannot be deleted or tampered with by accident or otherwise. In other cases, you may want to prevent certain configuration files from being overwritten automatically by software. While changing their ownership or permission bits on the files by using chown or chmod is one way to deal with this situation, this is not a perfect solution as it cannot prevent any action done with root privilege. That is when chattr comes in handy.

View File

@ -1,5 +1,3 @@
Ping -- Translating
iptraf: A TCP/UDP Network Monitoring Utility
================================================================================
[iptraf][1] is an ncurses-based IP LAN monitor that generates various network statistics including TCP info, UDP counts, ICMP and OSPF information, Ethernet load info, node stats, IP checksum errors, and others.

View File

@ -1,241 +0,0 @@
Setting up a private Docker registry
================================================================================
![](http://cocoahunter.com/content/images/2015/01/docker2.jpg)
[TL;DR] This is the second post in a series of 3 on how my company moved its infrastructure from PaaS to Docker based deployment.
- [First part][1]: where I talk about the process we went through before approaching Docker;
- [Third part][2]: where I show how to automate the entire process of building images and deploying a Rails app with Docker.
----------
Why would you want to set up a private registry? Well, for starters, Docker Hub only allows you to have one free private repo. Other companies are beginning to offer similar services, but none of them are very cheap. In addition, if you need to deploy production-ready applications built with Docker, you might not want to publish those images on the public Docker Hub.
This is a very pragmatic approach to dealing with the intricacies of setting up a private Docker registry. For the tutorial we will be using a small 512MB instance on DigitalOcean (from now on DO). I also assume you already know the basics of Docker since I will be concentrating on some more complicated stuff.
### Local set up ###
First of all you need to install **boot2docker** and docker CLI. If you already have your basic Docker environment up and running, you can just skip to the next section.
From the terminal run the following command[1][3]:
brew install boot2docker docker
If everything is ok[2][4], you will now be able to start the VM inside which Docker will run with the following command:
boot2docker up
Follow the instructions, copy and paste the export commands that boot2docker will print in the terminal. If you now run `docker ps` you should be greeted by the following line
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Ok, Docker is ready to go. This will be enough for the moment. Let's go back to setting up the registry.
### Creating the server ###
Log into your DO account and create a new Droplet by selecting an image with Docker pre-installed[^n].
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-18-26-14.png)
You should receive your root credentials via email. Log into your instance and run `docker ps` to see if everything is ok.
### Setting up AWS S3 ###
We are going to use Amazon Simple Storage Service (S3) as the storage layer for our registry / repository. We will need to create a bucket and user credentials to allow our docker container to access it.
Login into your AWS account (if you don't have one you can set one up at [http://aws.amazon.com/][5]) and from the console select S3 (Simple Storage Service).
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-29-21.png)
Click on **Create Bucket**, enter a unique name for your bucket (and write it down, we're gonna need it later), then click on **Create**.
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-22-50.png)
That's it! We're done setting up the storage part.
### Setup AWS access credentials ###
We are now going to create a new user. Go back to your AWS console and select IAM (Identity & Access Management).
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-29-08.png)
In the dashboard, on the left side of the webpage, you should click on Users. Then select **Create New Users**.
You should be presented with the following screen:
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-31-42.png)
Enter a name for your user (e.g. docker-registry) and click on Create. Write down (or download the csv file with) your Access Key and Secret Access Key that we'll need when running the Docker container. Go back to your users list and select the one you just created.
Under the Permission section, click on Attach User Policy. In the next screen, you will be presented with multiple choices: select Custom Policy.
![](http://cocoahunter.com/content/images/2015/01/Screenshot-2015-01-20-19-41-21.png)
Here's the content of the custom policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "SomeStatement",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::docker-registry-bucket-name/*",
"arn:aws:s3:::docker-registry-bucket-name"
]
}
]
}
This will allow the user (i.e. the registry) to manage (read/write) content on the bucket (make sure to use the bucket name you previously defined when setting up AWS S3). To sum it up: when you'll be pushing Docker images from your local machine to your repository, the server will be able to upload them to S3.
### Installing the registry ###
Now let's head back to our DO server and SSH into it. We are going to use[^n] one of the [official Docker registry images][6].
Let's start our registry with the following command:
docker run \
-e SETTINGS_FLAVOR=s3 \
-e AWS_BUCKET=bucket-name \
-e STORAGE_PATH=/registry \
-e AWS_KEY=your_aws_key \
-e AWS_SECRET=your_aws_secret \
-e SEARCH_BACKEND=sqlalchemy \
-p 5000:5000 \
--name registry \
-d \
registry
Docker should pull the required fs layers from the Docker Hub and eventually start the daemonised container.
### Testing the registry ###
If everything worked out, you should now be able to test the registry by pinging it and by searching its content (though for the time being it's still empty).
Our registry is very basic and it does not provide any means of authentication. Since there are no easy ways of adding authentication (at least none that I'm aware of that are easy enough to implement in order to justify the effort), I've decided that the easiest way of querying / pulling / pushing the registry is an insecure (over HTTP) connection tunneled through SSH.
Opening an SSH tunnel from your local machine is straightforward:
ssh -N -L 5000:localhost:5000 root@your_registry.com
The command tunnels connections made to port 5000 on your localhost over SSH to port 5000 on the registry server (which is the port we exposed with the `docker run` command in the previous paragraph).
If you now browse to the following address [http://localhost:5000/v1/_ping][7] you should get the following very simple response
{}
This just means that the registry is working correctly. You can also list the whole content of the registry by browsing to [http://localhost:5000/v1/search][8] that will get you a similar response:
{
"num_results": 2,
"query": "",
"results": [
{
"description": "",
"name": "username/first-repo"
},
{
"description": "",
"name": "username/second-repo"
}
]
}
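If you prefer the command line over the browser, the same checks can be done with curl through the open tunnel (assuming the tunnel from above is still running):

    # Should print {} if the registry is healthy
    curl http://localhost:5000/v1/_ping

    # Lists everything currently stored in the registry
    curl http://localhost:5000/v1/search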
### Building an image ###
Let's now try and build a very simple Docker image to test our newly installed registry. On your local machine, create a Dockerfile with the following content[^n]:
# Base image with ruby 2.2.0
FROM ruby:2.2.0
MAINTAINER Michelangelo Chasseur <michelangelo.chasseur@touchwa.re>
...and build it:
docker build -t localhost:5000/username/repo-name .
The `localhost:5000` part is especially important: the first part of the name of a Docker image will tell the `docker push` command the endpoint towards which we are trying to push our image. In our case, since we are connecting to our remote private registry via an SSH tunnel, `localhost:5000` represents exactly the reference to our registry.
If everything works as expected, when the command returns, you should be able to list your newly created image with the `docker images` command. Run it and see it for yourself.
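If you had already built the image under a different name, a rebuild shouldn't be necessary; re-tagging it with the registry prefix ought to be enough (the image name below is just an example):

    # Point an existing local image at the private registry
    docker tag username/repo-name localhost:5000/username/repo-name

    # Confirm the new name shows up
    docker images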
### Pushing to the registry ###
Now comes the trickier part. It took me a while to realize what I'm about to describe, so just be patient if you don't get it the first time you read it and try to follow along. I know that all this stuff will seem pretty complicated (and it would be if you didn't automate the process), but I promise in the end it will all make sense. In the next post I will show a couple of shell scripts and Rake tasks that will automate the whole process and let you deploy a Rails app to your registry with a single easy command.
The docker command you are running from your terminal is actually using the boot2docker VM to run the containers and do all the magic stuff. So when we run a command like `docker push some_repo` what is actually happening is that it's the boot2docker VM that is reaching out to the registry, not our localhost.
This is an extremely important point to understand: in order to push the Docker image to the remote private registry, the SSH tunnel needs to be established from the boot2docker VM and not from your local machine.
There are a couple of ways to go about it. I will show you the shortest one (which is probably not the easiest to understand, but it's the one that will let us automate the process with shell scripts).
First of all though we need to sort one last thing with SSH.
### Setting up SSH ###
Let's add our boot2docker SSH key to the remote server's (registry's) authorized keys. We can do so using the ssh-copy-id utility, which you can install with the following command if you don't already have it:
brew install ssh-copy-id
Then run:
ssh-copy-id -i /Users/username/.ssh/id_boot2docker root@your-registry.com
Make sure to substitute `/Users/username/.ssh/id_boot2docker` with the correct path of your ssh key.
This will allow us to connect via SSH to our remote registry without being prompted for the password.
Finally let's test it out:
boot2docker ssh "ssh -o 'StrictHostKeyChecking no' -i /Users/michelangelo/.ssh/id_boot2docker -N -L 5000:localhost:5000 root@registry.touchwa.re &" &
To break things out a little bit:
- `boot2docker ssh` lets you pass a command as a parameter that will be executed by the boot2docker VM;
- the final `&` indicates that we want our command to be executed in the background;
- `ssh -o 'StrictHostKeyChecking no' -i /Users/michelangelo/.ssh/id_boot2docker -N -L 5000:localhost:5000 root@registry.touchwa.re &` is the actual command our boot2docker VM will run;
- the `-o 'StrictHostKeyChecking no'` will make sure that we are not prompted with security questions;
- the `-i /Users/michelangelo/.ssh/id_boot2docker` indicates which SSH key we want our VM to use for authentication purposes (note that this should be the key you added to your remote registry in the previous step);
- finally we are opening a tunnel mapping port 5000 to localhost:5000.
### Pulling from another server ###
You should now be able to push your image to the remote registry by simply issuing the following command:
docker push localhost:5000/username/repo_name
In the [next post][9] we'll see how to automate some of this stuff and we'll containerize a real Rails application. Stay tuned!
P.S. Please use the comments to let me know of any inconsistencies or fallacies in my tutorial. Hope you enjoyed it!
1. I'm also assuming you are running on OS X.
1. For a complete list of instructions to set up your docker environment and requirements, please visit [http://boot2docker.io/][10]
1. Select Image > Applications > Docker 1.4.1 on 14.04 at the time of this writing.
1. [https://github.com/docker/docker-registry/][11]
1. This is just a stub, in the next post I will show you how to bundle a Rails application into a Docker container.
--------------------------------------------------------------------------------
via: http://cocoahunter.com/2015/01/23/docker-2/
作者:[Michelangelo Chasseur][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://cocoahunter.com/author/michelangelo/
[1]:http://cocoahunter.com/2015/01/23/docker-1/
[2]:http://cocoahunter.com/2015/01/23/docker-3/
[3]:http://cocoahunter.com/2015/01/23/docker-2/#fn:1
[4]:http://cocoahunter.com/2015/01/23/docker-2/#fn:2
[5]:http://aws.amazon.com/
[6]:https://registry.hub.docker.com/_/registry/
[7]:http://localhost:5000/v1/_ping
[8]:http://localhost:5000/v1/search
[9]:http://cocoahunter.com/2015/01/23/docker-3/
[10]:http://boot2docker.io/
[11]:https://github.com/docker/docker-registry/

View File

@ -1,253 +0,0 @@
Automated Docker-based Rails deployments
================================================================================
![](http://cocoahunter.com/content/images/2015/01/docker3.jpeg)
[TL;DR] This is the third post in a series of 3 on how my company moved its infrastructure from PaaS to Docker based deployment.
- [First part][1]: where I talk about the process we went through before approaching Docker;
- [Second part][2]: where I explain how to set up a private registry for secure in-house deployments.
----------
In this final part we will see how to automate the whole deployment process with a real world (though very basic) example.
### Basic Rails app ###
Let's dive into the topic right away and bootstrap a basic Rails app. For the purpose of this demonstration I'm going to use Ruby 2.2.0 and Rails 4.1.1
From the terminal run:
$ rvm use 2.2.0
$ rails new docker-test && cd docker-test
Let's create a basic controller:
$ rails g controller welcome index
...and edit `routes.rb` so that the root of the project will point to our newly created welcome#index method:
root 'welcome#index'
Running `rails s` from the terminal and browsing to [http://localhost:3000][3] should bring you to the index page. We're not going to make anything fancier to the app, it's just a basic example to prove that when we'll build and deploy the container everything is working.
### Setup the webserver ###
We are going to use Unicorn as our webserver. Add `gem 'unicorn'` and `gem 'foreman'` to the Gemfile and bundle it up (run `bundle install` from the command line).
Unicorn needs to be configured when the Rails app launches, so let's put a **unicorn.rb** file inside the **config** directory. [Here is an example][4] of a Unicorn configuration file. You can just copy & paste the content of the Gist.
Let's also add a Procfile with the following content inside the root of the project so that we will be able to start the app with foreman:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
If you now try to run the app with **foreman start** everything should work as expected and you should have a running app on [http://localhost:5000][5]
### Building a Docker image ###
Now let's build the image inside which our app is going to live. In the root of our Rails project, create a file named **Dockerfile** and paste in it the following:
# Base image with ruby 2.2.0
FROM ruby:2.2.0
# Install required libraries and dependencies
RUN apt-get update && apt-get install -qy nodejs postgresql-client sqlite3 --no-install-recommends && rm -rf /var/lib/apt/lists/*
# Set Rails version
ENV RAILS_VERSION 4.1.1
# Install Rails
RUN gem install rails --version "$RAILS_VERSION"
# Create directory from where the code will run
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Make webserver reachable to the outside world
EXPOSE 3000
# Set ENV variables
ENV PORT=3000
# Start the web app
CMD ["foreman","start"]
# Install the necessary gems
ADD Gemfile /usr/src/app/Gemfile
ADD Gemfile.lock /usr/src/app/Gemfile.lock
RUN bundle install --without development test
# Add rails project (from same dir as Dockerfile) to project directory
ADD ./ /usr/src/app
# Run rake tasks
RUN RAILS_ENV=production rake db:create db:migrate
Using the provided Dockerfile, let's try and build an image with the following command[1][7]:
$ docker build -t localhost:5000/your_username/docker-test .
And again, if everything worked out correctly, the last line of the long log output should read something like:
Successfully built 82e48769506c
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
localhost:5000/your_username/docker-test latest 82e48769506c About a minute ago 884.2 MB
Let's try and run the container!
$ docker run -d -p 3000:3000 --name docker-test localhost:5000/your_username/docker-test
You should be able to reach your Rails app running inside the Docker container at port 3000 of your boot2docker VM[2][8] (in my case [http://192.168.59.103:3000][6]).
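As a quick sanity check (assuming `boot2docker ip` prints just the VM address and that curl is installed), you can confirm the container answers before moving on:

    # Ask the Rails app for its response headers through the boot2docker VM
    curl -I "http://$(boot2docker ip):3000"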
### Automating with shell scripts ###
Since you should already know from the previous post how to push your newly created image to a private registry and deploy it on a server, let's skip this part and go straight to automating the process.
We are going to define 3 shell scripts and finally tie it all together with rake.
### Clean ###
Every time we build our image and deploy, we are better off cleaning everything first. That means the following:
- stop (if running) and restart boot2docker;
- remove orphaned Docker images (images that are without tags and that are no longer used by your containers).
Put the following into a **clean.sh** file in the root of your project.
echo Restarting boot2docker...
boot2docker down
boot2docker up
echo Exporting Docker variables...
sleep 1
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/user/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
sleep 1
echo Removing orphaned images without tags...
docker images | grep "<none>" | awk '{print $3}' | xargs docker rmi
Also make sure to make the script executable:
$ chmod +x clean.sh
### Build ###
The build process basically consists in reproducing what we just did before (docker build). Create a **build.sh** script at the root of your project with the following content:
docker build -t localhost:5000/your_username/docker-test .
Make the script executable.
### Deploy ###
Finally, create a **deploy.sh** script with this content:
# Open SSH connection from boot2docker to private registry
boot2docker ssh "ssh -o 'StrictHostKeyChecking no' -i /Users/username/.ssh/id_boot2docker -N -L 5000:localhost:5000 root@your-registry.com &" &
# Wait to make sure the SSH tunnel is open before pushing...
echo Waiting 5 seconds before pushing image.
echo 5...
sleep 1
echo 4...
sleep 1
echo 3...
sleep 1
echo 2...
sleep 1
echo 1...
sleep 1
# Push image onto remote registry / repo
echo Starting push!
docker push localhost:5000/username/docker-test
If you don't understand what's going on here, please make sure you've read [part 2][9] of this series of posts thoroughly.
Make the script executable.
### Tying it all together with rake ###
Having 3 scripts would now require you to run them individually each time you decide to deploy your app:
1. clean
1. build
1. deploy / push
That wouldn't be much of an effort, if it weren't for the fact that developers are lazy! And lazy be it, then!
The final step to wrap things up, is tying the 3 parts together with rake.
To make things even simpler you can just append a bunch of lines of code to the end of the already present Rakefile in the root of your project. Open the Rakefile file - pun intended :) - and paste the following:
namespace :docker do
desc "Remove docker container"
task :clean do
sh './clean.sh'
end
desc "Build Docker image"
task :build => [:clean] do
sh './build.sh'
end
desc "Deploy Docker image"
task :deploy => [:build] do
sh './deploy.sh'
end
end
Even if you don't know rake syntax (which you should, because it's pretty awesome!), it's pretty obvious what we are doing. We have declared 3 tasks inside a namespace (docker).
This will create the following 3 tasks:
- rake docker:clean
- rake docker:build
- rake docker:deploy
Deploy is dependent on build, build is dependent on clean. So every time we run from the command line
$ rake docker:deploy
all the scripts will be executed in the required order.
### Test it ###
To see if everything is working, you just need to make a small change in the code of your app and run
$ rake docker:deploy
and see the magic happen. Once the image has been uploaded (and the first time it could take quite a while), you can ssh into your production server and pull (through an SSH tunnel) the docker image onto the server and run it. It's that easy!
Well, maybe it takes a while to get accustomed to how everything works, but once it does, it's almost (almost) as easy as deploying with Heroku.
P.S. As always, please let me have your ideas. I'm not sure this is the best, or the fastest, or the safest way of doing devops with Docker, but it certainly worked out for us.
- make sure to have **boot2docker** up and running.
- If you don't know your boot2docker VM address, just run `$ boot2docker ip`
- if you don't, you can read it [here][10]
--------------------------------------------------------------------------------
via: http://cocoahunter.com/2015/01/23/docker-3/
作者:[Michelangelo Chasseur][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://cocoahunter.com/author/michelangelo/
[1]:http://cocoahunter.com/docker-1
[2]:http://cocoahunter.com/2015/01/23/docker-2/
[3]:http://localhost:3000/
[4]:https://gist.github.com/chasseurmic/0dad4d692ff499761b20
[5]:http://localhost:5000/
[6]:http://192.168.59.103:3000/
[7]:http://cocoahunter.com/2015/01/23/docker-3/#fn:1
[8]:http://cocoahunter.com/2015/01/23/docker-3/#fn:2
[9]:http://cocoahunter.com/2015/01/23/docker-2/
[10]:http://cocoahunter.com/2015/01/23/docker-2/

View File

@ -1,5 +1,6 @@
Install Linux-Dash (Web Based Monitoring tool) on Ubuntu 14.10
================================================================================
A low-overhead monitoring web dashboard for a GNU/Linux machine. Simply drop in the app and go! Linux Dash's interface provides a detailed overview of all vital aspects of your server, including RAM and disk usage, network, installed software, users, and running processes. All information is organized into sections, and you can jump to a specific section using the buttons in the main toolbar. Linux Dash is not the most advanced monitoring tool out there, but it might be a good fit for users looking for a slick, lightweight, and easy-to-deploy application.
### Linux-Dash Features ###

View File

@ -1,149 +0,0 @@
Translating by ZTinoZ
Linux FAQs with Answers--How to disable IPv6 on Linux
================================================================================
> **Question**: I notice that one of my applications is trying to establish a connection over IPv6. But since our local network is not able to route IPv6 traffic, the IPv6 connection times out, and the application falls back to IPv4, which causes unnecessary delay. As I don't have any need for IPv6 at the moment, I would like to disable IPv6 on my Linux box. What is a proper way to turn off IPv6 on Linux?
IPv6 has been introduced as a replacement for IPv4, the traditional 32-bit address space used in the Internet, to solve the imminent exhaustion of available IPv4 address space. However, since IPv4 has been used by every host or device connected to the Internet, it is practically impossible to switch every one of them to IPv6 overnight. Numerous IPv4 to IPv6 transition mechanisms (e.g., dual IP stack, tunneling, proxying) have been proposed to facilitate the adoption of IPv6, and many applications are being rewritten, as we speak, to add support for IPv6. One thing for sure is that IPv4 and IPv6 will inevitably coexist for the foreseeable future.
Ideally, the [ongoing IPv6 transition process][1] should not be visible to end users, but the mixed IPv4/IPv6 environment might sometimes cause you to encounter various hiccups originating from unintended interaction between IPv4 and IPv6. For example, you may experience timeouts from applications such as apt-get or ssh unsuccessfully trying to connect via IPv6, a DNS server accidentally dropping AAAA DNS records for IPv6, or your IPv6-capable device not being compatible with your ISP's legacy IPv4 network.
Of course this doesn't mean that you should blindly disable IPv6 on your Linux box. With all the benefits promised by IPv6, we as a society want to fully embrace it eventually, but as part of the troubleshooting process for the hiccups experienced by end users, you may try turning off IPv6 to see whether IPv6 is indeed the culprit.
Here are a few techniques allowing you to disable IPv6 partially (e.g., for a certain network interface) or completely on Linux. These tips should be applicable to all major Linux distributions including Ubuntu, Debian, Linux Mint, CentOS, Fedora, RHEL, and Arch Linux.
### Check if IPv6 is Enabled on Linux ###
All modern Linux distributions have IPv6 enabled by default. To see whether IPv6 is activated on your Linux system, use the ifconfig or ip commands. If you see "inet6" in the output of these commands, your system has IPv6 enabled.
$ ifconfig
![](https://farm8.staticflickr.com/7282/16415082398_5fb0920506_b.jpg)
$ ip addr
![](https://farm8.staticflickr.com/7290/16415082248_c4e075548b_c.jpg)
### Disable IPv6 Temporarily ###
If you want to turn off IPv6 temporarily on your Linux system, you can use /proc file system. By "temporarily", we mean that the change we make to disable IPv6 will not be preserved across reboots. IPv6 will be enabled back again after you reboot your Linux box.
To disable IPv6 for a particular network interface, use the following command.
$ sudo sh -c 'echo 1 > /proc/sys/net/ipv6/conf/<interface-name>/disable_ipv6'
For example, to disable IPv6 for eth0 interface:
$ sudo sh -c 'echo 1 > /proc/sys/net/ipv6/conf/eth0/disable_ipv6'
![](https://farm8.staticflickr.com/7288/15982511863_0c1feafe7f_b.jpg)
To enable IPv6 back on eth0 interface:
$ sudo sh -c 'echo 0 > /proc/sys/net/ipv6/conf/eth0/disable_ipv6'
If you want to disable IPv6 system-wide for all interfaces including loopback interface, use this command:
$ sudo sh -c 'echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6'
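To confirm the change took effect, you can read the flag back or re-check the interfaces; a value of 1 means IPv6 is disabled, and no "inet6" lines should remain:

    $ cat /proc/sys/net/ipv6/conf/all/disable_ipv6
    $ ip addr | grep inet6 || echo "No IPv6 addresses configured"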
### Disable IPv6 Permanently across Reboots ###
The above method does not permanently disable IPv6 across reboots. IPv6 will be activated again once you reboot your system. If you want to turn off IPv6 for good, there are several ways you can do it.
#### Method One ####
The first method is to apply the above /proc changes persistently in /etc/sysctl.conf file.
That is, open /etc/sysctl.conf with a text editor, and add the following lines.
# to disable IPv6 on all interfaces system wide
net.ipv6.conf.all.disable_ipv6 = 1
# to disable IPv6 on a specific interface (e.g., eth0, lo)
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
To activate these changes in /etc/sysctl.conf, run:
$ sudo sysctl -p /etc/sysctl.conf
or simply reboot.
#### Method Two ####
An alternative way to disable IPv6 permanently is to pass a necessary kernel parameter via GRUB/GRUB2 during boot time.
Open /etc/default/grub with a text editor, and add "ipv6.disable=1" to GRUB_CMDLINE_LINUX variable.
$ sudo vi /etc/default/grub
----------
GRUB_CMDLINE_LINUX="xxxxx ipv6.disable=1"
In the above, "xxxxx" denotes any existing kernel parameter(s). Add "ipv6.disable=1" after them.
![](https://farm8.staticflickr.com/7286/15982512103_ec5d940e58_b.jpg)
Finally, don't forget to apply the modified GRUB/GRUB2 settings by running:
On Debian, Ubuntu or Linux Mint:
$ sudo update-grub
On Fedora, CentOS/RHEL:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Now IPv6 will be completely disabled once you reboot your Linux system.
### Other Optional Steps after Disabling IPv6 ###
Here are a few optional steps you can consider after disabling IPv6. Even when you disable IPv6 in the kernel, other programs may still try to use IPv6. In most cases, such application behavior will not break things, but you may want to disable IPv6 for them for efficiency or safety reasons.
#### /etc/hosts ####
Depending on your setup, /etc/hosts may contain one or more IPv6 hosts and their addresses. Open /etc/hosts with a text editor, and comment out all lines which contain IPv6 hosts.
$ sudo vi /etc/hosts
----------
# comment these IPv6 hosts
# ::1 ip6-localhost ip6-loopback
# fe00::0 ip6-localnet
# ff00::0 ip6-mcastprefix
# ff02::1 ip6-allnodes
# ff02::2 ip6-allrouters
#### Network Manager ####
If you are using NetworkManager to manage your network settings, you can disable IPv6 on NetworkManager as follows. Open the wired connection on NetworkManager, click on "IPv6 Settings" tab, and choose "Ignore" in "Method" field. Save the change and exit.
![](https://farm8.staticflickr.com/7293/16394993017_21917f027b_o.png)
#### SSH server ####
By default, OpenSSH server (sshd) tries to bind on both IPv4 and IPv6 addresses.
To force sshd to bind only on IPv4 address, open /etc/ssh/sshd_config with a text editor, and add the following line. inet is for IPv4 only, and inet6 is for IPv6 only.
$ sudo vi /etc/ssh/sshd_config
----------
AddressFamily inet
and restart sshd server.
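The exact restart command depends on your distribution; on the systems covered here, one of the following should work:

On Debian, Ubuntu or Linux Mint:

    $ sudo service ssh restart

On Fedora, CentOS/RHEL 7:

    $ sudo systemctl restart sshd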
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/disable-ipv6-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://ask.xmodulo.com/author/nanni
[1]:http://www.google.com/intl/en/ipv6/statistics.html

View File

@ -1,4 +1,3 @@
translating by KayGuoWhu
Enjoy Android Apps on Ubuntu using ARChon Runtime
================================================================================
Previously, we tried many Android app emulation tools like Genymotion, VirtualBox, the Android SDK, etc. to run Android apps. But with the new Chrome Android Runtime, we are able to run Android apps in our Chrome browser. So, here are the steps we'll need to follow to install Android apps on Ubuntu using the ARChon Runtime.

View File

@ -1,3 +1,4 @@
translating by runningwater
How to Manage and Use LVM (Logical Volume Management) in Ubuntu
================================================================================
![](http://cdn5.howtogeek.com/wp-content/uploads/2011/02/652x202xbanner-1.png.pagespeed.ic.VGSxDeVS9P.png)
@ -258,7 +259,7 @@ That should cover most of what you need to know to use LVM. If youve got some
via: http://www.howtogeek.com/howto/40702/how-to-manage-and-use-lvm-logical-volume-management-in-ubuntu/
译者:[译者ID](https://github.com/译者ID)
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,6 +1,3 @@
tranlating by haimingfg
Sleuth Kit - Open Source Forensic Tool to Analyze Disk Images and Recover Files
================================================================================
SIFT is a Ubuntu based forensics distribution provided by SANS Inc. It consist of many forensics tools such as Sleuth kit / Autopsy etc . However, Sleuth kit/Autopsy tools can be installed on Ubuntu/Fedora distribution instead of downloading complete distribution of SIFT.

View File

@ -1,3 +1,4 @@
[bazz222]
How to set up networking between Docker containers
================================================================================
As you may be aware, Docker container technology has emerged as a viable lightweight alternative to full-blown virtualization. There are a growing number of use cases of Docker that the industry adopted in different contexts, for example, enabling rapid build environment, simplifying configuration of your infrastructure, isolating applications in multi-tenant environment, and so on. While you can certainly deploy an application sandbox in a standalone Docker container, many real-world use cases of Docker in production environments may involve deploying a complex multi-tier application in an ensemble of multiple containers, where each container plays a specific role (e.g., load balancer, LAMP stack, database, UI).
@ -157,4 +158,4 @@ via: http://xmodulo.com/networking-between-docker-containers.html
[2]:http://xmodulo.com/recommend/dockerbook
[3]:http://xmodulo.com/manage-linux-containers-docker-ubuntu.html
[4]:http://xmodulo.com/docker-containers-centos-fedora.html
[5]:http://zettio.github.io/weave/features.html
[5]:http://zettio.github.io/weave/features.html

View File

@ -1,180 +0,0 @@
How to secure SSH login with one-time passwords on Linux
================================================================================
As someone once said, security is not a product, but a process. While the SSH protocol itself is cryptographically secure by design, someone can wreak havoc on your SSH service if it is not administered properly, be it weak passwords, compromised keys or an outdated SSH client.
As far as SSH authentication is concerned, [public key authentication][1] is in general considered more secure than password authentication. However, key authentication is actually less desirable, or even less secure, if you are logging in from a public or shared computer, where things like a stealth keylogger or memory scraper are always a possibility. If you cannot trust the local computer, it is better to use something else. This is when "one-time passwords" come in handy. As the name implies, each one-time password is for single use only. Such disposable passwords can be safely used in untrusted environments, as they cannot be re-used even if they are stolen.
One way to generate disposable passwords is [Google Authenticator][2]. In this tutorial, I am going to demonstrate another way to create one-time passwords for SSH login: [OTPW][3], a one-time password login package. Unlike Google Authenticator, you do not rely on any third party for one-time password generation and verification.
### What is OTPW? ###
OTPW consists of a one-time password generator and PAM-integrated verification routines. In OTPW, one-time passwords are generated a priori with the generator, and carried by the user securely (e.g., printed on a sheet of paper). Cryptographic hashes of the generated passwords are then stored on the SSH server host. When a user logs in with a one-time password, OTPW's PAM module verifies the password, and invalidates it to prevent re-use.
### Step One: Install and Configure OTPW on Linux ###
#### Debian, Ubuntu or Linux Mint ####
Install OTPW packages with apt-get.
$ sudo apt-get install libpam-otpw otpw-bin
Open a PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor, and comment out the following line (to disable password authentication).
#@include common-auth
and add the following two lines (to enable one-time password authentication):
auth required pam_otpw.so
session optional pam_otpw.so
![](https://farm8.staticflickr.com/7599/16775121360_d1f93feefa_b.jpg)
#### Fedora or CentOS/RHEL ####
OTPW is not available as a prebuilt package on Red Hat based systems, so let's install OTPW by building it from source.
First, install the prerequisites:
$ sudo yum install git gcc pam-devel
$ git clone https://www.cl.cam.ac.uk/~mgk25/git/otpw
$ cd otpw
Open Makefile with a text editor, and edit a line that starts with "PAMLIB=" as follows.
On 64-bit system:
PAMLIB=/usr/lib64/security
On 32-bit system:
PAMLIB=/usr/lib/security
Compile and install it. Note that installation will automatically restart the SSH server, so be prepared to be disconnected if you are working over an SSH connection.
$ make
$ sudo make install
Now you need to update the SELinux policy, since /usr/sbin/sshd tries to write to the user's home directory, which is not allowed by the default SELinux policy. The following commands will do that. If you are not using SELinux, skip this step.
$ sudo grep sshd /var/log/audit/audit.log | audit2allow -M mypol
$ sudo semodule -i mypol.pp
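If you are not sure whether SELinux is active at all, its current mode can be checked before bothering with the policy module; getenforce is part of the standard SELinux userland, so this assumes the SELinux utilities are installed:
$ getenforce
If it prints Disabled, the policy update above can be skipped.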
Next, open a PAM configuration file for SSH (/etc/pam.d/sshd) with a text editor, and comment out the following line (to disable password authentication).
#auth substack password-auth
and add the following two lines (to enable one-time password authentication):
auth required pam_otpw.so
session optional pam_otpw.so
#### Step Two: Configure SSH Server for One-time Passwords ####
The next step is to configure an SSH server to accept one-time passwords.
Open /etc/ssh/sshd_config with a text editor, and set the following three parameters. Make sure that you do not add these lines more than once, because that will cause the SSH server to fail.
UsePrivilegeSeparation yes
ChallengeResponseAuthentication yes
UsePAM yes
You also need to disable default password authentication. Optionally, enable public key authentication, so that you can fall back to key-based authentication in case you do not have one-time passwords.
PubkeyAuthentication yes
PasswordAuthentication no
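Before restarting, it is worth verifying that the edited configuration still parses cleanly. A minimal sanity check, assuming the sshd binary is on root's PATH (it normally lives in /usr/sbin):
$ sudo sshd -t
If the command prints nothing and returns successfully, the configuration is syntactically valid.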
Now restart SSH server.
Debian, Ubuntu or Linux Mint:
$ sudo service ssh restart
Fedora or CentOS/RHEL 7:
$ sudo systemctl restart sshd
#### Step Three: Generate One-time Passwords with OTPW ####
As mentioned earlier, you need to create one-time passwords beforehand and have them stored on the remote SSH server host. For this, run the otpw-gen tool as the user you will be logging in as.
$ cd ~
$ otpw-gen > temporary_password.txt
![](https://farm9.staticflickr.com/8751/16961258882_c49cfe03fb_b.jpg)
It will ask you to set a prefix password. When you later log in, you need to type this prefix password AND a one-time password. Essentially the prefix password is another layer of protection. Even if the password sheet falls into the wrong hands, the prefix password forces an attacker to brute-force it first.
Once the prefix password is set, the command will generate 280 one-time passwords and store them in the output text file (e.g., temporary_password.txt). Each password (8 characters long by default) is preceded by a three-digit index number. You are supposed to print the file on a sheet of paper and carry it with you.
![](https://farm8.staticflickr.com/7281/16962594055_c2696d5ae1_b.jpg)
You will also see a ~/.otpw file created, where cryptographic hashes of these passwords are stored. The first three digits in each line indicate the index number of the password that will be used for SSH login.
$ more ~/.otpw
----------
OTPW1
280 3 12 8
191ai+:ENwmMqwn
218tYRZc%PIY27a
241ve8ns%NsHFmf
055W4/YCauQJkr:
102ZnJ4VWLFrk5N
2273Xww55hteJ8Y
1509d4b5=A64jBT
168FWBXY%ztm9j%
000rWUSdBYr%8UE
037NvyryzcI+YRX
122rEwA3GXvOk=z
### Test One-time Passwords for SSH Login ###
Now let's login to an SSH server in a usual way:
$ ssh user@remote_host
If OTPW is successfully set up, you will see a slightly different password prompt:
Password 191:
Now open up your password sheet, and look for index number "191" in the sheet.
023 kBvp tq/G 079 jKEw /HRM 135 oW/c /UeB 191 fOO+ PeiD 247 vAnZ EgUt
According to the sheet above, the one-time password for number "191" is "fOO+PeiD". You need to prepend your prefix password to it. For example, if your prefix password is "000", the actual one-time password you need to type is "000fOO+PeiD".
Once you successfully log in, the password used is automatically invalidated. If you check ~/.otpw, you will notice that the first line is replaced with "---------------", meaning that password "191" has been voided.
OTPW1
280 3 12 8
---------------
218tYRZc%PIY27a
241ve8ns%NsHFmf
055W4/YCauQJkr:
102ZnJ4VWLFrk5N
2273Xww55hteJ8Y
1509d4b5=A64jBT
168FWBXY%ztm9j%
000rWUSdBYr%8UE
037NvyryzcI+YRX
122rEwA3GXvOk=z
### Conclusion ###
In this tutorial, I demonstrated how to set up one-time password login for SSH using the OTPW package. You may have realized that a printed sheet can be considered a less fancy version of the security token used in two-factor authentication. Yet, it is simpler, and you do not rely on any third party for its implementation. Whatever mechanism you are using to create disposable passwords, they can be helpful when you need to log in to an SSH server from an untrusted public computer. Feel free to share your experience or opinion on this topic.
--------------------------------------------------------------------------------
via: http://xmodulo.com/secure-ssh-login-one-time-passwords-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://xmodulo.com/how-to-force-ssh-login-via-public-key-authentication.html
[2]:http://xmodulo.com/two-factor-authentication-ssh-login-linux.html
[3]:http://www.cl.cam.ac.uk/~mgk25/otpw.html

View File

@ -1,106 +0,0 @@
FSSlc translating
How to Generate/Encrypt/Decrypt Random Passwords in Linux
================================================================================
We have taken the initiative to produce a Linux tips and tricks series. If you missed the last article of this series, you may want to visit the link below.
(Note: the article linked below has been translated previously.)
- [5 Interesting Command Line Tips and Tricks in Linux][1]
In this article, we will share some interesting Linux tips and tricks to generate random passwords, and also how to encrypt and decrypt passwords with or without the salt method.
Security is one of the major concerns of the digital age. We put passwords on computers, email, the cloud, phones, documents and what not. We all know the basics of choosing a password that is easy to remember and hard to guess. What about some sort of automatic, machine-based password generation? Believe me, Linux is very good at this.
**1. Generate a random unique password of length equal to 10 characters using the command pwgen. If you have not installed pwgen yet, use Apt or YUM to get it.**
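If pwgen is missing, installing it first might look like this (the package name pwgen is assumed here; on CentOS/RHEL it usually comes from the EPEL repository).
Debian, Ubuntu or Linux Mint:
$ sudo apt-get install pwgen
Fedora or CentOS/RHEL (EPEL enabled):
# yum install pwgen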
$ pwgen 10 1
![Generate Random Unique Password](http://www.tecmint.com/wp-content/uploads/2015/03/Generate-Random-Unique-Password-in-Linux.gif)
Generate Random Unique Password
Generate several random unique passwords of character length 50 in one go!
$ pwgen 50
![Generate Multiple Random Passwords](http://www.tecmint.com/wp-content/uploads/2015/03/Generate-Multiple-Random-Passwords.gif)
Generate Multiple Random Passwords
**2. You may use makepasswd to generate a random, unique password of a given length of your choice. Before you can fire the makepasswd command, make sure you have installed it. If not, try installing the package makepasswd using Apt or YUM.**
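On a Debian-based system the installation might look like this (the package name makepasswd is assumed; on Red Hat based systems the package may not be available in the base repositories):
$ sudo apt-get install makepasswd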
Generate a random password of character length 10 (the default length is 10).
$ makepasswd
![makepasswd Generate Unique Password](http://www.tecmint.com/wp-content/uploads/2015/03/mkpasswd-generate-unique-password.gif)
makepasswd Generate Unique Password
Generate a random password of character length 50.
$ makepasswd --char 50
![Generate Length 50 Password](http://www.tecmint.com/wp-content/uploads/2015/03/Random-Password-Generate.gif)
Generate Length 50 Password
Generate 7 random passwords of 20 characters each.
$ makepasswd --char 20 --count 7
![](http://www.tecmint.com/wp-content/uploads/2015/03/Generate-20-Character-Password.gif)
**3. Encrypt a password using crypt along with a salt. Provide the salt manually as well as automatically.**
For those who may not be aware of it:
A salt is random data that serves as an additional input to a one-way function in order to protect passwords against dictionary attacks.
Make sure you have mkpasswd installed before proceeding.
The command below will encrypt the password with a salt. The salt value is chosen randomly and automatically, hence every time you run the command it will generate a different output, because it accepts a random salt value each time.
$ mkpasswd tecmint
![Encrypt Password Using Crypt](http://www.tecmint.com/wp-content/uploads/2015/03/Encrypt-Password-in-Linux.gif)
Encrypt Password Using Crypt
Now let's define the salt. It will output the same result every time. Note that you can use anything of your choice as the salt.
$ mkpasswd tecmint -s tt
![Encrypt Password Using Salt](http://www.tecmint.com/wp-content/uploads/2015/03/Encrypt-Password-Using-Salt.gif)
Encrypt Password Using Salt
Moreover, mkpasswd is interactive, and if you don't provide a password along with the command, it will ask for one interactively.
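If mkpasswd is not available, openssl can produce a comparable crypt-style hash through its passwd subcommand. This is only a rough equivalent of the salted example above, under the assumption that your openssl build includes passwd; -1 selects MD5-based crypt, and recent releases also accept -5/-6 for SHA-256/SHA-512:
$ openssl passwd -1 -salt tt tecmint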
**4. Encrypt a string, say “Tecmint-is-a-Linux-Community”, using aes-256-cbc encryption with a password, say “tecmint”, and a salt.**
# echo Tecmint-is-a-Linux-Community | openssl enc -aes-256-cbc -a -salt -pass pass:tecmint
![Encrypt A String in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/Encrypt-A-String-in-Linux.gif)
Encrypt A String in Linux
Here, in the above example, the output of the [echo command][2] is piped into the openssl command, which encrypts the input with its enc (encoding with ciphers) subcommand using the aes-256-cbc encryption algorithm, a salt, and the password (tecmint). (Note: the echo command article linked here has also been translated previously.)
**5. Decrypt the above string using openssl command using the -aes-256-cbc decryption.**
# echo U2FsdGVkX18Zgoc+dfAdpIK58JbcEYFdJBPMINU91DKPeVVrU2k9oXWsgpvpdO/Z | openssl enc -aes-256-cbc -a -d -salt -pass pass:tecmint
![Decrypt String in Linux](http://www.tecmint.com/wp-content/uploads/2015/03/Decrypt-String-in-Linux.gif)
Decrypt String in Linux
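The same openssl enc invocation also works on files rather than inline strings; a small sketch, where the file names secret.txt, secret.txt.enc and secret.txt.dec are placeholders:
# openssl enc -aes-256-cbc -a -salt -in secret.txt -out secret.txt.enc -pass pass:tecmint
# openssl enc -aes-256-cbc -a -d -in secret.txt.enc -out secret.txt.dec -pass pass:tecmint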
That's all for now. If you know any such tips and tricks you may send them to us at admin@tecmint.com; your tip will be published under your name, and we will also include it in a future article.
Keep connected. Keep connecting. Stay tuned. Don't forget to provide us with your valuable feedback in the comments below.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/generate-encrypt-decrypt-random-passwords-in-linux/
作者:[Avishek Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/avishek/
[1]:http://www.tecmint.com/5-linux-command-line-tricks/
[2]:http://www.tecmint.com/echo-command-in-linux/

View File

@ -1,3 +1,5 @@
translating by createyuan
How to set up remote desktop on Linux VPS using x2go
================================================================================
As everything moves to the cloud, virtualized remote desktops are becoming increasingly popular in the industry as a way to enhance employee productivity. Especially for those who need to roam constantly across multiple locations and devices, a remote desktop allows them to stay connected seamlessly to their work environment. Remote desktops are attractive for employers as well, bringing increased agility and flexibility in work environments, lower IT cost due to hardware consolidation, desktop security hardening, and so on.
@ -134,4 +136,4 @@ via: http://xmodulo.com/x2go-remote-desktop-linux.html
[5]:http://wiki.x2go.org/doku.php/doc:newtox2go
[6]:http://wiki.x2go.org/doku.php/doc:de-compat
[7]:http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html
[8]:http://xmodulo.com/go/digitalocean
[8]:http://xmodulo.com/go/digitalocean

View File

@ -1,5 +1,3 @@
translating by martin.
ZMap Documentation
================================================================================
1. Getting Started with ZMap

View File

@ -1,3 +1,4 @@
Translating by ZTinoZ
Install Inkscape - Open Source Vector Graphic Editor
================================================================================
Inkscape is an open source vector graphics editing tool which uses Scalable Vector Graphics (SVG), and that makes it different from its competitors like Xara X, Corel Draw and Adobe Illustrator. SVG is a widely-deployed, royalty-free graphics format developed and maintained by the W3C SVG Working Group. It is a cross-platform tool which runs fine on Linux, Windows and Mac OS.
@ -92,4 +93,4 @@ via: http://linoxide.com/tools/install-inkscape-open-source-vector-graphic-edito
[a]:http://linoxide.com/author/arunrz/
[1]:https://launchpad.net/~inkscape.dev/+archive/ubuntu/stable
[2]:https://inkscape.org/en/
[2]:https://inkscape.org/en/

View File

@ -1,105 +0,0 @@
A Walk Through Some Important Docker Commands
================================================================================
Hi everyone, today we'll learn some important Docker commands that you'll need to know before you get going with Docker. Docker is an open source project that provides an open platform to pack, ship and run any application as a lightweight container. It has no boundaries of language support, frameworks or packaging systems, and it can be run anywhere, anytime, from small home computers to high-end servers. This makes containers great building blocks for deploying and scaling web apps, databases, and back-end services without depending on a particular stack or provider.
Docker commands are easy to learn and easy to put into practice. Here are some basic Docker commands you'll need to know to run Docker and fully utilize it.
### 1. Pulling a Docker Image ###
First of all, we'll need to pull a docker image to get started, because containers are built from Docker images. We can get the required docker image from the Docker Registry Hub. Before we pull any image using the pull command, we'll need to protect our system, as a malicious issue has been identified with the pull command. To protect our system from this issue, we'll need to add **127.0.0.1 index.docker.io** as an entry in /etc/hosts. We can do that using our favorite text editor.
# nano /etc/hosts
Now, add the following lines into it and then save and exit.
127.0.0.1 index.docker.io
![Docker Hosts](http://blog.linoxide.com/wp-content/uploads/2015/04/docker-hosts.png)
To pull a docker image, we'll need to run the following command.
# docker pull registry.hub.docker.com/busybox
![Docker pull command](http://blog.linoxide.com/wp-content/uploads/2015/04/pulling-image.png)
We can check whether any Docker images are available on our local host for use.
# docker images
![Docker Images](http://blog.linoxide.com/wp-content/uploads/2015/04/docker-images.png)
### 2. Running a Docker Container ###
Now that we have successfully pulled the required or desired Docker image, we'll surely want to run it. We can run a docker container from the image using the docker run command. We have several options and flags for running a docker container on top of a Docker image. To run a docker image and get into the container, we'll use the -t and -i flags as shown below.
# docker run -it busybox
![Docker Run Shell Command](http://blog.linoxide.com/wp-content/uploads/2015/04/docker-run-shell.png)
With the above command, we'll be dropped into the container and can access its contents via the interactive shell. We can press **Ctrl-D** to exit from the shell.
Now, to run the container in the background, we'll detach from the shell using the -d flag as shown below.
# docker run -itd busybox
![Run Container Background](http://blog.linoxide.com/wp-content/uploads/2015/04/run-container-background.png)
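The run command accepts many more flags; for example, giving the container an explicit name makes it easier to refer to in later commands (the name test_busybox here is only an illustrative choice):
# docker run -itd --name test_busybox busybox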
If we want to attach to a running container, we can use the attach command with the container id. The container id can be obtained using the **docker ps** command.
# docker attach <container id>
![Docker Attach](http://blog.linoxide.com/wp-content/uploads/2015/04/docker-attach.png)
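As a small convenience, looking up the id and attaching can be combined in one line; this assumes the most recently created container is still running (-l shows only the latest created container, -q prints just its id):
# docker attach $(docker ps -l -q)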
### 3. Checking Containers ###
It is very easy to check whether a container is running or not. We can use the following command to see which docker containers are running right now.
# docker ps
Now, to list all containers, including those that are no longer running, we'll need to run the following command.
# docker ps -a
![View Docker Containers](http://blog.linoxide.com/wp-content/uploads/2015/04/view-docker-containers1.png)
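The listing above only shows container status; the actual output a container has produced is read with the separate logs subcommand, for example (substitute a container id taken from docker ps -a):
# docker logs <container id>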
### 4. Inspecting a Docker Container ###
We can check all the information about a Docker container using the inspect command.
# docker inspect <container id>
![Docker Inspect](http://blog.linoxide.com/wp-content/uploads/2015/04/docker-inspect.png)
### 5. Killing and Deleting Containers ###
We can kill or stop a docker container using its container id as shown below.
# docker stop <container id>
To kill all running containers at once, we'll need to run the following command.
# docker kill $(docker ps -q)
Now, if we want to remove a docker container, run the command below.
# docker rm <container id>
If we want to remove all docker containers at once, we can run the following.
# docker rm $(docker ps -aq)
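Note that docker rm operates on containers; images pulled earlier are removed with the separate rmi subcommand, roughly like this (image ids come from the docker images listing):
# docker rmi <image id>
# docker rmi $(docker images -q)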
### Conclusion ###
These docker commands are essential to learn in order to fully utilize Docker. With these commands, Docker becomes quite simple, providing end users an easy platform for computing. It is extremely easy for anyone to learn Docker commands with the tutorial above. If you have any questions, suggestions or feedback, please write them in the comment box below so that we can improve and update our contents. Thank you! Enjoy :-)
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-how-to/important-docker-commands/
作者:[Arun Pyasi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/arunp/

Some files were not shown because too many files have changed in this diff