Merge pull request #1 from LCTT/master

Update repositories
This commit is contained in:
ZTinoZ 2014-10-23 15:55:46 +08:00
commit f4b2a7033a
132 changed files with 7459 additions and 2223 deletions

View File

@ -1,13 +1,9 @@
Sysstat工具包中20个实用的Linux性能监控工具包括mpstat, pidstat, iostat 和sar
Sysstat性能监控工具包中20个实用命令
===============================================================
在我们上一篇文章中,我们已经学习了如何去安装和更新**sysstat**,并且了解了包中的一些实用工具。
注:此文一并附上,在同一个原文中更新
- [Sysstat Performance and Usage Activity Monitoring Tool For Linux][1]
在我们[上一篇文章][1]中,我们已经学习了如何去安装和更新**sysstat**,并且了解了包中的一些实用工具。
![20 Sysstat Commands for Linux Monitoring](http://www.tecmint.com/wp-content/uploads/2014/09/sysstat-commands.png)
Linux系统监控的20个Sysstat命令
今天,我们将会通过一些有趣的实例来学习**mpstat**, **pidstat**, **iostat**和**sar**等工具,这些工具可以帮助我们找出系统中的问题。这些工具都包含了不同的选项,这意味着你可以根据不同的工作使用不同的选项,或者根据你的需求来自定义脚本。我们都知道,系统管理员都会有点懒,他们经常去寻找一些更简单的方法来完成他们的工作。
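在逐一介绍之前,先给出一个假设性的巡检脚本示意,把这几个工具串成一次快速检查(脚本本身是虚构的示例,假设系统已安装 sysstat某个工具缺失时会自动跳过对应小节

```shell
#!/bin/sh
# 简易巡检脚本示意: 依次采样 CPU、进程与磁盘统计信息
# (假设已安装 sysstat; 某个工具不存在时跳过, 不会报错)
section() {
    title=$1; shift
    echo "===== $title ====="
    if command -v "$1" >/dev/null 2>&1; then
        "$@"
    else
        echo "(未找到 $1跳过)"
    fi
}
section "CPU 统计 (mpstat)"  mpstat -P ALL 1 1
section "进程统计 (pidstat)" pidstat 1 1
section "磁盘统计 (iostat)"  iostat -d 1 1
```

把它保存成一个文件并用 cron 定时执行,就是一个最简陋的定时采样器。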
### mpstat - 处理器统计信息 ###
@ -21,7 +17,7 @@ Linux系统监控的20个Sysstat命令
12:23:57 IST CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
12:23:57 IST all 37.35 0.01 4.72 2.96 0.00 0.07 0.00 0.00 0.00 54.88
2.使用‘**-p**(处理器编码)和ALL参数将会从0开始独立的输出每个CPU的统计信息0表示第一个cpu。
2.使用‘**-p**’(处理器编号)和 ALL 参数将会从0开始独立地输出每个CPU的统计信息0表示第一个CPU。
tecmint@tecmint ~ $ mpstat -P ALL
@ -151,7 +147,7 @@ Linux系统监控的20个Sysstat命令
12:51:55 IST 0 19 0.00 0.00 0.00 0.00 0 writeback
12:51:55 IST 0 20 0.00 0.00 0.00 0.00 1 kintegrityd
8.使用‘**-d 2**参数我们可以看到I/O统计信息2表示以秒为单位对统计信息进行刷新。这个参数可以方便的知道当系统在进行繁重的I/O时那些进行占用大量的资源。
8.使用‘**-d 2**’参数我们可以看到I/O统计信息2表示以秒为单位对统计信息进行刷新。这个参数可以让我们方便地知道当系统进行繁重的I/O时哪些进程占用了大量的资源。
tecmint@tecmint ~ $ pidstat -d 2
@ -171,7 +167,6 @@ Linux系统监控的20个Sysstat命令
9.想要每间隔**2**秒对进程**4164**的cpu统计信息输出**3**次,则使用如下带参数‘**-t**’(输出某个选定进程的统计信息)的命令。
tecmint@tecmint ~ $ pidstat -t -p 4164 2 3
Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)
@ -250,13 +245,13 @@ Linux系统监控的20个Sysstat命令
01:09:08 IST 1000 5 99 FIFO migration/0
01:09:08 IST 1000 6 99 FIFO watchdog/0
因为我们已经学习过Iostat命令了因此在本文中不在对其进行赘述。若想查看Iostat命令的详细信息请参看“[使用Iostat和Vmstat进行Linux性能监控][2]注:此文也一并附上在同一个原文更新中
因为我们已经学习过iostat命令了因此在本文中不再赘述。若想查看iostat命令的详细信息请参看“[使用Iostat和Vmstat进行Linux性能监控][2]”
###sar - 系统活动报告###
我们可以使用‘**sar**’命令来获得整个系统性能的报告。这有助于我们定位系统性能的瓶颈,并且有助于我们找出这些烦人的性能问题的解决方法。
Linux内核维护一些内部计数器这些计数器包含了所有的请求及其完成时间和I/O块数等信息sar命令从所有的这些信息中计算出请求的利用率和比例以便找出瓶颈所在。
Linux内核维护一些内部计数器这些计数器包含了所有的请求及其完成时间和I/O块数等信息sar命令从所有的这些信息中计算出请求的利用率和比例以便找出瓶颈所在。
sar命令主要的用途是生成某段时间内所有活动的报告因此必需确保sar命令在适当的时间进行数据采集而不是在午餐时间或者周末。
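sar 的换算思路可以用一个小例子来示意:对 /proc/stat 风格的 CPU 计数器做两次采样求差,再折算成百分比(下面的计数器数值是虚构的,字段依次为 user、nice、system、idle、iowait 等):

```shell
# 两次采样的 CPU 计数器(/proc/stat 第一行的格式, 数值为虚构示例)
s1="cpu 1000 10 300 8000 200 0 5 0 0 0"
s2="cpu 1600 10 420 8900 260 0 7 0 0 0"
pct=$(printf '%s\n%s\n' "$s1" "$s2" | awk '
    NR==1 { for (i=2; i<=NF; i++) a[i]=$i }
    NR==2 { for (i=2; i<=NF; i++) { d[i]=$i-a[i]; dt+=d[i] }
            # awk 字段: $2=user $4=system $5=idle $6=iowait
            printf "%%usr=%.2f %%sys=%.2f %%iowait=%.2f %%idle=%.2f\n",
                   100*d[2]/dt, 100*d[4]/dt, 100*d[6]/dt, 100*d[5]/dt }')
echo "$pct"
```

输出的就是 mpstat/sar 中 %usr、%sys、%iowait、%idle 各列的含义:两次采样之间每类时间的增量占总增量的百分比。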
@ -274,7 +269,7 @@ sar命令主要的用途是生成某段时间内所有活动的报告因此
01:42:38 IST all 50.75 0.00 3.75 0.00 0.00 45.50
Average: all 46.30 0.00 3.93 0.00 0.00 49.77
14.在上面的例子中我们交互的执行sar命令。sar命令提供了使用cron进行非交互的执行sar命令的方法使用**/usr/local/lib/sa1**和**/usr/local/lib/sa2**脚本(如果你在安装时使用了**/usr/local**作为前缀)
14.在上面的例子中我们交互的执行sar命令。sar命令提供了使用cron进行非交互的执行sar命令的方法使用**/usr/local/lib/sa1**和**/usr/local/lib/sa2**脚本(如果你在安装时使用了**/usr/local**作为前缀的话
- **/usr/local/lib/sa1**是一个可以使用cron进行调度生成二进制日志文件的shell脚本。
- **/usr/local/lib/sa2**是一个可以将二进制日志文件转换为用户可读的编码方式。
@ -287,7 +282,7 @@ sar命令主要的用途是生成某段时间内所有活动的报告因此
#在每天23:53时生成一个用户可读的日常报告
53 23 * * * /usr/local/lib/sa/sa2 -A
在sa1脚本执行后期sa1脚本会调用**sabc**(系统活动数据收集器System Activity Data Collector)工具采集特定时间间隔内的数据。**sa2**脚本会调用sar来将二进制日志文件转换为用户可读的形式。
在后端sa1脚本会调用**sadc**(系统活动数据收集器System Activity Data Collector)工具来采集特定时间间隔内的数据。**sa2**脚本则会调用sar将二进制日志文件转换为用户可读的形式。
15.使用‘**-q**’参数来检查运行队列的长度,所有进程的数量和平均负载
@ -303,7 +298,7 @@ sar命令主要的用途是生成某段时间内所有活动的报告因此
02:00:54 IST 0 431 1.64 1.23 0.97 0
Average: 2 431 1.68 1.23 0.97 0
16.使用‘**-F**’参数查看当前挂载的文件系统统计信息
16.使用‘**-F**’参数查看当前挂载的文件系统的使用统计信息
tecmint@tecmint ~ $ sar -F 2 4
@ -387,7 +382,7 @@ sar命令主要的用途是生成某段时间内所有活动的报告因此
![Network Graph](http://www.tecmint.com/wp-content/uploads/2014/09/sar-graph.png)
网络信息图表
*网络信息图表*
现在你可以参考man手册来获取每个参数的更多详细信息并且请在文章下留下你宝贵的评论。
@ -397,10 +392,10 @@ via: http://www.tecmint.com/sysstat-commands-to-monitor-linux/
作者:[Kuldeep Sharma][a]
译者:[cvsher](https://github.com/cvsher)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/kuldeepsharma47/
[1]:http://www.tecmint.com/install-sysstat-in-linux/
[2]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
[1]:http://linux.cn/article-4025-1.html
[2]:http://linux.cn/article-4024-1.html

View File

@ -1,10 +1,10 @@
在 Debian 上使用 systemd 管理系统
================================================================================
人类已经无法阻止 systemd 占领全世界的 Linux 系统了唯一阻止它的方法是在你自己的机器上手动卸载它。到目前为止systemd 已经创建了比任何软件都多的技术问题、感情问题和社会问题。这一点从[热议][1](也称 Linux 初始化软件之战)上就能看出,这场争论在 Debian 开发者之间持续了好几个月。当 Debian 技术委员会最终决定将 systemd 放到 Debian 8代号 Jessie的发行版里面时其反对者试图通过多种努力来[取代这项决议][2],甚至有人扬言要威胁那些支持 systemd 的开发者的生命安全。
人类已经无法阻止 systemd 占领全世界的 Linux 系统了唯一阻止它的方法是在你自己的机器上手动卸载它。到目前为止systemd 已经创建了比任何软件都多的技术问题、感情问题和社会问题。这一点从[“Linux 初始化软件之战”][1]上就能看出,这场争论在 Debian 开发者之间持续了好几个月。当 Debian 技术委员会最终决定将 systemd 放到 Debian 8代号 Jessie的发行版里面时其反对者试图通过多种努力来[取代这项决议][2],甚至有人扬言要威胁那些支持 systemd 的开发者的生命安全。
这也说明了 systemd 对 Unix 传承下来的系统处理方式有很大的干扰。“一个软件只做一件事情”的哲学思想已经被这个新来者彻底颠覆。除了取代了 sysvinit 成为新的系统初始化工具外systemd 还是一个系统管理工具。目前为止,由于 systemd-sysv 这个软件包提供的兼容性,那些我们使用惯了的工具还能继续工作。但是当 Debian 将 systemd 升级到214版本后这种兼容性就不复存在了。升级措施预计会在 Debian 8 "Jessie" 的稳定分支上进行。从此以后用户必须使用新的命令来管理系统、执行任务、变换运行级别、查询系统日志等等。不过这里有一个应对方案,那就是在 .bashrc 文件里面添加一些别名。
现在就让我们来看看 systemd 是怎么改变你管理系统的习惯的。在使用 systemd 之前,你得先把 sysvinit 保存起来,以 systemd 出错的时候还能用 sysvinit 启动系统。这种方法只有在没安装 systemd-sysv 的情况下才能生效,具体操作方法如下:
现在就让我们来看看 systemd 是怎么改变你管理系统的习惯的。在使用 systemd 之前,你得先把 sysvinit 保存起来,以便在 systemd 出错的时候还能用 sysvinit 启动系统。这种方法**只有在没安装 systemd-sysv 的情况下才能生效**,具体操作方法如下:
# cp -av /sbin/init /sbin/init.sysvinit
@ -34,8 +34,8 @@ systemctl 的功能是替代“/etc/init.d/foo start/stop”这类命令
你同样可以使用 systemctl 实现转换运行级别、重启系统和关闭系统的功能:
- systemctl isolate graphical.target - 切换到运行级别5就是有桌面的级别
- systemctl isolate multi-user.target - 切换到运行级别3没有桌面的级别
- systemctl isolate graphical.target - 切换到运行级别5就是有桌面的运行级别
- systemctl isolate multi-user.target - 切换到运行级别3没有桌面的运行级别
- systemctl reboot - 重启系统
- systemctl poweroff - 关机
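如果你的脚本需要同时照顾装有 systemd 的机器和仍在用 sysvinit 的机器,可以用一个小函数做兼容封装(以下纯属示意,函数名 svc 是虚构的;为安全起见它只打印将要执行的命令,去掉 echo 才会真正执行):

```shell
# 兼容封装示意: 有 systemctl 就用它, 否则回退到老式的 service 命令
svc() {  # 用法: svc <start|stop|restart|status> <服务名>
    action=$1; unit=$2
    if command -v systemctl >/dev/null 2>&1; then
        echo systemctl "$action" "$unit.service"
    else
        echo service "$unit" "$action"
    fi
}
svc restart ssh
```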
@ -43,7 +43,7 @@ systemctl 的功能是替代“/etc/init.d/foo start/stop”这类命令
### journalctl 的基本用法 ###
systemd 不仅提供了比 sysvinit 更快的启动速度,还让日志系统在更早的时候启动起来,可以记录内核初始化阶段、内存初始化阶段、前期启动步骤以及主要的系统执行过程的日志。所以以前那种需要通过对显示屏拍照或者暂停系统来调试程序的日子已经一去不复返啦。
systemd 不仅提供了比 sysvinit 更快的启动速度,还让日志系统在更早的时候启动起来,可以记录内核初始化阶段、内存初始化阶段、前期启动步骤以及主要的系统执行过程的日志。所以**以前那种需要通过对显示屏拍照或者暂停系统来调试程序的日子已经一去不复返啦**
systemd 的日志文件都被放在 /var/log 目录。如果你想使用它的日志功能,需要执行一些命令,因为 Debian 没有打开日志功能。命令如下:
@ -86,7 +86,7 @@ systemd 可以让你能更有效地分析和优化你的系统启动过程:
![](https://farm6.staticflickr.com/5565/14423020978_14b21402c8_z.jpg)
systemd 虽然是个年轻的项目,但存在大量文档。首先要介绍的是[Lennart Poettering 的 0pointer 系列][3]。这个系列非常详细,非常有技术含量。另外一个是[免费桌面信息文档][4],它包含了最详细的关于 systemd 的链接发行版特性文件、bug 跟踪系统和说明文档。你可以使用下面的命令来查询 systemd 都提供了哪些文档:
systemd 虽然是个年轻的项目,但已有大量文档。首先要介绍给你的是[Lennart Poettering 的 0pointer 系列][3]。这个系列非常详细,非常有技术含量。另外一个是[Freedesktop 信息页][4],它汇集了关于 systemd 的最详尽的链接:各发行版的特性页面、bug 跟踪系统和说明文档。你可以使用下面的命令来查询 systemd 都提供了哪些文档:
# man systemd.index
@ -96,7 +96,7 @@ systemd 虽然是个年轻的项目,但存在大量文档。首先要介绍的
via: http://xmodulo.com/2014/07/use-systemd-system-administration-debian.html
译者:[bazz2](https://github.com/bazz2) 校对:[校对者ID](https://github.com/校对者ID)
译者:[bazz2](https://github.com/bazz2) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,168 @@
Camicri Cube: 可离线的便携包管理系统
================================================================================
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/camicri-cube-206x205.jpg)
众所周知,在系统中使用新立得包管理工具或软件中心下载和安装应用程序的时候,我们必须得有互联网连接。但是,如果您刚好没有网络,或者网络速度死慢死慢的呢?在 Linux 桌面系统中通过软件中心包管理工具来安装软件就绝对是一个头痛的问题。这时,您可以从相应的官网上手工下载应用程序包并手工安装。但是,大多数的 Linux 用户并不知道他们希望安装的应用程序所需要的依赖关系包。如果您恰巧出现这种情况,应该怎么办呢?现在一切都不用担心了。今天,我们给您介绍一款非常棒的名叫 **Camicri Cube** 的离线包管理工具。
您可以把此包管理工具放在任何联网的系统上,下载您所需要安装的软件,然后在没联网的机器上安装它们。听起来很不错吧?是的,它就是这样操作的。Cube 是一款像新立得和 Ubuntu 软件中心这样的包管理工具但是一款便携式的。它在任何平台Windows 系统、基于 Apt 的 Linux 发行版)、在线状态、离线状态、在闪存盘或任何可移动设备上都可以使用和运行。我们这个项目的主要目的是使处于离线状态的 Linux 用户能很容易地下载和安装 Linux 应用程序。
Cube 会收集您的离线电脑的详细信息,如操作系统的详细信息、已安装的应用程序等等。然后用 U 盘把 cube 应用程序整个拷贝一份,放到其它有网络连接的系统上使用,接着就可以下载您需要的应用程序了。下载完所有需要的软件包之后回到您原来的计算机并开始安装。Cube 是由 **Jake Capangpangan** 开发和维护的,使用 C++ 语言编写,而且已经集成了所有必需的包。因此,使用它并不需要再安装任何额外的软件。
### 安装 ###
现在,让我们下载 Cube 程序包,然后在没有网络连接的离线系统上进行安装。既可以从[官方主页][1]下载,也可以从 [Sourceforge 网站][2]下载。要确保下载的版本与您的离线计算机的系统架构相匹配。比如我使用的是64位的系统就要下载64位版本的安装包。
wget http://sourceforge.net/projects/camicricube/files/Camicri%20Cube%201.0.9/cube-1.0.9.2_64bit.zip/
将此 zip 文件解压到 home 目录或者是您想放的任何地方:
unzip cube-1.0.9.2_64bit.zip
这就好了。接着,该是知道怎么使用的时候了。
### 使用 ###
这儿,我使用的是两台装有 Ubuntu 系统的机器。原机器(离线-没有网络连接)上面跑着的是 **Ubuntu 14.04** 系统,有网络连接的机器跑着的是 **Lubuntu 14.04** 桌面系统。
#### 离线系统上的操作步骤: ####
在离线系统上,进入已经解压的 Cube 文件目录,您会发现一个名叫 “cube-linux” 的可执行文件,双击它,并点击执行。如果它是不可执行的,用如下命令设置其可执行权限。
sudo chmod -R +x cube/
然后,进入 cube 目录,
cd cube/
接着执行如下命令来运行:
./cube-linux
输入项目的名称比如sk然后点击**创建**按钮。正如我上面提到的,这将会创建一个包含您的系统完整详细信息的新项目,如操作系统的详细信息、已安装的应用程序列表、库等等。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0013.png)
如您所知,我们的系统是离线的,也就是说没有网络连接。所以我点击**取消**按钮来跳过资源库的更新过程。随后我们会在一台有网络连接的系统上更新此资源库。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0023.png)
再一次,在这台离线机器上我们点击 **No** 来跳过更新,因为我们没有网络连接。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0033.png)
就是这样。现在新的项目已经创建好了,它会保存在我们的主 cube 目录里面。进入 Cube 目录,您就会发现一个名叫 Projects 的目录。这个目录会保存有您的离线系统的必要完整详细信息。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Selection_004.png)
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Selection_005.png)
现在,关闭 cube 应用程序,然后拷贝整个主 **cube** 文件夹到任何的闪存盘里,接入有网络连接的系统。
#### 在线系统上操作步骤: ####
往下的操作步骤需要在有网络连接的系统上进行。在我们的例子中,用的是 **Lubuntu 14.04** 系统的机器。
跟在源机器上的操作一样设置使 cube 目录具有可执行权限。
sudo chmod -R +x cube/
现在,双击 cube-linux 文件运行应用程序或者也可以在终端上加载运行,如下所示:
cd cube/
./cube-linux
在窗口的 “Open Existing Projects” 部分会看到您的项目列表,选择您需要的项目。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0014.png)
随后cube 会询问这是否是您的项目所在的源机器。它并不是我的源(离线)机器,所以我点击 **No**
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0024.png)
接着会询问是否想要更新您的资源库。点击 **OK** 来更新资料库。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0034.png)
下一步,我们得更新所有过期的包/应用程序。点击 Cube 工具栏上的 “**Mark All updates**” 按钮,然后点击 “**Download all marked**” 按钮来更新所有过期的包/应用程序。如下截图所示在我的例子当中有302个包需要更新。这时点击 **OK** 来继续下载所标记的安装包。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_005.png)
现在Cube 会开始下载所有已标记的包。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Downloading-packages_006.png)
我们已经完成了对资料库和安装包的更新。此时,如果您在离线系统上还需要其它的安装包,您也可以下载这些新的安装包。
#### 下载新的应用程序 ####
例如,现在我想下载 **apache2** 包。在**搜索**框里输入包的名字然后点击搜索按钮。Cube 程序会获取您想查找的应用程序的详细信息。点击 “**Download this package now**” 按钮,接着点击 **OK** 就开始下载了。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_008.png)
Cube 将会下载 apache2 的安装包及所有的依赖包。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Downloading-packages_009.png)
如果您想查找和下载更多的安装包,只需搜索到需要的包,然后点击 “**Mark this package**” 按钮进行标记。只要是您想在源机器上安装的包都可以标记上。一旦标记完所有的包,就可以点击位于顶部工具栏的 “**Download all marked**” 按钮来下载它们。
在完成资源库、过期软件包的更新和下载好新的应用程序后,就可以关闭 Cube 应用程序。然后,拷贝整个 Cube 文件夹到任何的闪盘或者外接硬盘。回到您的离线系统中来。
#### 离线机器上的操作步骤: ####
把 Cube 文件夹拷回您的离线系统的任意位置。进入 cube 目录,并且双击 **cube-linux** 文件来加载启动 Cube 应用程序。
或者,您也可以从终端下启动它,如下所示:
cd cube/
./cube-linux
选择您的项目,点击打开。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0012.png)
然后会弹出一个对话框询问是否更新系统,尤其是已经下载好新的资源库的时候,请点击“是”。因为它会把所有的资源库传输到您的机器上。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0021.png)
您会看到,在没有网络连接的情况下这些资源库会更新到您的离线机器上。那是因为我们已经在有网络连接的系统上下载更新了此资源库。看起来很酷,不是吗?
更新完资源库后,让我们来安装所有下载好的包。点击 “Mark all Downloaded” 按钮选中所有的下载包,然后点击 Cube 工具栏上的 “Install All Marked” 按钮来安装它们。Cube 应用程序会自动打开一个新的终端窗口来安装所有的软件包。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Terminal_001.png)
如果遇到依赖的问题,进入 **Cube Menu -> Packages -> Install packages with complete dependencies** 来安装所有的依赖包。
如果您只想安装特定的包,定位到包列表的位置,点击 “Downloaded” 按钮,所有的已下载包都会被列出来。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0035.png)
然后双击某个特定的包,点击 “Install this” 按钮来安装;或者如果想过后再安装的话,可以先点击 “Mark this” 按钮。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0043.png)
顺便提一句,您可以在任意已经连接网络的系统上下载所需要的包,然后在没有网络连接的离线系统上安装。
### 结论 ###
这是我曾经使用过的最好、最有用的软件工具之一。但我在我的 Ubuntu 14.04 测试机上测试的时候,遇到了很多依赖问题,还经常会出现闪退的情况;而在新装的 Ubuntu 14.04 离线系统上使用时却没有遇到任何问题。希望这些问题在老版本的 Ubuntu 上也不会发生。除了这些小问题,这个小工具就如同宣传的一样,像魔法一样神奇。
欢呼吧!
原文作者:
![](http://1.gravatar.com/avatar/1ba62ac2b395f541750b6b4f873eb37b?s=70&d=monsterid&r=G)
[SK][a](Senthilkumar又名SK来自于印度的泰米尔纳德邦Linux 爱好者FOSS 论坛支持者和 Linux 板块顾问。一个充满激情和活力的人,致力于提供高质量的 IT 专业文章,非常喜欢写作以及探索 Linux、开源、电脑和互联网等新事物。)
--------------------------------------------------------------------------------
via: http://www.unixmen.com/camicri-cube-offline-portable-package-management-system/
译者:[runningwater](https://github.com/runningwater) 校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/sk/
[1]:https://launchpad.net/camicricube
[2]:http://sourceforge.net/projects/camicricube/

View File

@ -0,0 +1,117 @@
命令行基础工具的更佳替代品
================================================================================
命令行听起来有时候会很吓人,特别是在刚刚接触的时候,你甚至可能做过有关命令行的噩梦。然而渐渐地,我们都会意识到命令行实际上并不是那么吓人,反而是非常有用。实际上,没有命令行正是每次我使用 Windows 时让我感到崩溃的地方。这种感觉上的变化是因为命令行工具实际上是很智能的。 你在任何一个 Linux 终端上所使用的基本工具功能都是很强大的, 但还远说不上是足够强大。 如果你想使你的命令行生涯更加愉悦, 这里有几个程序你可以下载下来替换原来的默认程序, 它还可以给你提供比原始程序更多的功能。
### dfc ###
作为一个 LVM 使用者, 我非常喜欢随时查看我的硬盘存储器的使用情况. 我也从来没法真正理解为什么在 Windows 上我们非得打开资源管理器来查看电脑的基本信息。在 Linux 上, 我们可以使用如下命令:
$ df -h
![](https://farm4.staticflickr.com/3858/14768828496_c8a42620a3_z.jpg)
该命令可显示电脑上每一分卷的大小、 已使用空间、 可用空间、 已使用空间百分比和挂载点。 注意, 我们必须使用 "-h" 选项使得所有数据以可读形式显示(使用 GiB 而不是 KiB)。 但你可以使用 [dfc][1] 来完全替代 df 它不需要任何额外的选项就可以得到 df 命令所显示的内容, 并且会为每个设备绘制彩色的使用情况图, 因此可读性会更强。
![](https://farm6.staticflickr.com/5594/14791468572_a84d4b6145_z.jpg)
另外, 你可以使用 "-q" 选项将各分卷排序, 使用 "-u" 选项指定你希望使用的单位, 甚至可以使用 "-e" 选项来获得 csv 或者 html 格式的输出.
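顺带一提, dfc 那种条形图的效果, 用 awk 对 df 的输出做个粗糙的模拟也能体会一二(纯属演示思路, 与 dfc 的实现无关):

```shell
# 用 awk 给 df -P 的输出画一个简易的使用率条形图
bars=$(df -P | awk 'NR > 1 {
    pct = $5 + 0                      # 第5列形如 "42%", 加 0 取出数值
    bar = ""
    for (i = 0; i < pct / 10; i++) bar = bar "#"
    printf "%-24s [%-10s] %3d%%\n", $6, bar, pct
}')
echo "$bars"
```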
### dog ###
Dog 比 cat 好, 至少这个程序自己是这么宣称的。 你应该相信它一次。 所有 cat 命令能做的事, [dog][2] 都做的更好。 除了仅仅能在控制台上显示一些文本流之外, dog 还可以对其进行过滤。 例如, 你可以使用如下语法来获得网页上的所有图片:
$ dog --images [URL]
![](https://farm6.staticflickr.com/5568/14811659823_ea8d22d045_z.jpg)
或者是所有链接:
dog --links [URL]
![](https://farm4.staticflickr.com/3902/14788690051_7472680968_z.jpg)
另外, dog 命令还可以处理一些其他的小任务, 比如全部转换为大写或小写, 使用不同的编码, 显示行号和处理十六进制文件。 总之, dog 是 cat 的必备替代品。
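dog 提取链接的思路, 大致也可以用 grep 和 sed 模拟出来(下面只是演示思路, 并非 dog 的实现, 示例 HTML 是虚构的):

```shell
# 从一段 HTML 文本中提取所有 href 链接
html='<a href="http://a.example/">A</a><p>正文</p><a href="http://b.example/">B</a>'
links=$(printf '%s' "$html" | grep -o 'href="[^"]*"' | sed 's/^href="//; s/"$//')
echo "$links"
```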
### advcp ###
一个 Linux 中最基本的命令就是复制命令: cp。 它几乎和 cd 命令地位相同。 然而, 它的输出非常少。 你可以使用 verbose 模式来实时查看正在被复制的文件, 但如果一个文件非常大的话, 你看着屏幕等待却完全不知道后台在干什么。 一个简单的解决方法是加上一个进度条: 这正是 advcp (advanced cp 的缩写) 所做的! advcp 是 [GNU coreutils][4] 的一个 [补丁版本][3] 它提供了 acp 和 amv 命令, 即"高级"的 cp 和 mv 命令. 使用语法如下:
$ acp -g [file] [copy]
它把文件复制到另一个位置, 并显示一个进度条。
![](https://farm6.staticflickr.com/5588/14605117730_fe611fc234_z.jpg)
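进度条本身的原理并不复杂。下面是一个假设性的纯 shell 示意(函数名 copy_with_progress 是虚构的, 并非 acp 的实现): 按 4KB 的块复制文件, 每复制一块就刷新一次百分比:

```shell
# 分块复制并实时打印进度百分比的最小示意
copy_with_progress() {  # 用法: copy_with_progress 源文件 目标文件
    src=$1; dst=$2
    total=$(($(wc -c < "$src"))); copied=0
    : > "$dst"
    while [ "$copied" -lt "$total" ]; do
        dd if="$src" of="$dst" bs=4096 count=1 conv=notrunc \
           skip=$((copied / 4096)) seek=$((copied / 4096)) 2>/dev/null
        copied=$((copied + 4096))
        if [ "$copied" -gt "$total" ]; then copied=$total; fi
        printf '\r%3d%%' $((copied * 100 / total))
    done
    echo
}
```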
我还建议在 .bashrc 或 .zshrc 中设置如下命令别名:
alias cp="acp -g"
alias mv="amv -g"
(译者注: 原文给出的链接已貌似失效, 我写了一个可用的安装脚本放在了我的 [gist](https://gist.github.com/b978fc93b62e75bfad9c) 上, 用的是 AUR 里的 [patch](https://aur.archlinux.org/packages/advcp)。)
### The Silver Searcher ###
[the silver searcher][5] 这个名字听起来很不寻常(银搜索... 它是一款设计用来替代 grep 和 [ack][6] 的工具。 The silver searcher 在文件中搜索你想要的部分, 它比 ack 要快, 而且能够忽略一些文件而不像 grep 那样。(译者注: 原文的意思貌似是 grep 无法忽略一些文件, 但 grep 有类似选项) the silver searcher 还有一些其他的功能,比如彩色输出, 跟随软连接, 使用正则表达式, 甚至是忽略某些模式。
![](https://farm4.staticflickr.com/3876/14605308117_f966c77140_z.jpg)
作者在开发者主页上提供了一些搜索速度的统计数字, 如果它们的确是真的的话, 那是非常可观的。 另外, 你可以把它整合到 Vim 中, 用一个简洁的命令来调用它。 如果要用两个词来概括它, 那就是: 智能、快速。
### plowshare ###
所有命令行的粉丝都喜欢使用 wget 或其他对应的替代品来从互联网上下载东西。 但如果你使用许多文件分享网站, 像 mediafire 或者 rapidshare。 你一定很乐意了解一款专门为这些网站设计的对应的程序, 叫做 [plowshare][7]。 安装成功之后, 你可以使用如下命令来下载文件:
$ plowdown [URL]
或者是上传文件:
$ plowup [website name] [file]
前提是如果你有那个文件分享网站的账号的话。
最后, 你可以获取分享文件夹中的一系列文件的链接:
$ plowlist [URL]
或者是文件名、 大小、 哈希值等等:
$ plowprobe [URL]
对于那些熟悉这些服务的人来说, plowshare 还是缓慢而令人难以忍受的 jDownloader 的一个很好的替代品。
### htop ###
如果你经常使用 top 命令, 很有可能你会喜欢 [htop][8] 命令。 top 和 htop 命令都能对正在运行的进程提供了实时查看功能, 但 htop 还拥有一系列 top 命令所没有的人性化功能。 比如, 在 htop 中, 你可以水平或垂直滚动进程列表来查看每个进程的完整命令名, 还可以使用鼠标点击和方向键来进行一些基本的进程操作(比如 kill、 (re)nice 等),而不用输入进程标识符。
![](https://farm6.staticflickr.com/5581/14819141403_6f2348590f_z.jpg)
### mtr ###
系统管理员的一个基本的网络诊断工具traceroute可以用于显示从本地网络到目标网络的网络第三层协议的路由。mtr即“My Traceroute”的缩写继承了强大的traceroute功能并集成了 ping 的功能。当发现了一个完整的路由时mtr会显示所有的中继节点的 ping 延迟的统计数据,对网络延迟的定位非常有用。虽然也有其它的 traceroute的变体tcptraceroute 或 traceroute-nanog但是我相信 mtr 是traceroute 工具里面最实用的一个增强工具。
![](https://farm4.staticflickr.com/3884/14783092046_b3a90ab462_z.jpg)
总的来说, 这些十分有效的基本命令行的替代工具就像那些有用的小珍珠一样, 它们并不是那么容易被发现, 但当一旦你找到一个, 你就会惊讶你是如何忍受这么长没有它的时间! 如果你还知道其他的与上面描述相符的工具, 请在评论中分享给我们。
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/07/better-alternatives-basic-command-line-utilities.html
作者:[Adrien Brochard][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://projects.gw-computing.net/projects/dfc
[2]:http://archive.debian.org/debian/pool/main/d/dog/
[3]:http://zwicke.org/web/advcopy/
[4]:http://www.gnu.org/software/coreutils/
[5]:https://github.com/ggreer/the_silver_searcher
[6]:http://xmodulo.com/2014/01/search-text-files-patterns-efficiently.html
[7]:https://code.google.com/p/plowshare/
[8]:http://hisham.hm/htop/

View File

@ -1,18 +1,17 @@
在Linux中扩展/缩减LVM逻辑卷管理)—— 第二部分
在Linux中扩展/缩减LVM第二部分
================================================================================
前面我们已经了解了怎样使用LVM创建弹性的磁盘存储。这里我们将了解怎样来扩展卷组扩展和缩减逻辑卷。在这里我们可以缩减或者扩展逻辑卷管理LVM中的分区LVM也可称之为弹性卷文件系统。
![Extend/Reduce LVMs in Linux](http://www.tecmint.com/wp-content/uploads/2014/08/LVM_extend.jpg)
### 需求 ###
### 前置需求 ###
- [使用LVM创建弹性磁盘存储——第一部分][1]
注:两篇都翻译完了的话,发布的时候将这个链接做成发布的中文的文章地址
#### 什么时候我们需要缩减卷? ####
或许我们需要创建一个独立的分区用于其它用途,或者我们需要扩展任何空间低的分区。真是这样的话,我们可以很容易地缩减大尺寸的分区,并且扩展空间低的分区,只要按下面几个简易的步骤来即可。
或许我们需要创建一个独立的分区用于其它用途,或者我们需要扩展任何空间低的分区。遇到这种情况时,使用 LVM我们可以很容易地缩减大尺寸的分区以及扩展空间低的分区,只要按下面几个简易的步骤来即可。
#### 我的服务器设置 —— 需求 ####
@ -284,9 +283,9 @@ via: http://www.tecmint.com/extend-and-reduce-lvms-in-linux/
作者:[Babin Lonston][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/babinlonston/
[1]:http://www.tecmint.com/create-lvm-storage-in-linux/
[1]:http://linux.cn/article-3965-1.html

View File

@ -1,12 +1,12 @@
在命令行中管理 Wifi 连接
================================================================================
无论何时要安装一款新的 Linux 发行系统,一般的建议都是让您通过有线连接来接到互联网的。这主要的原因有两条:第一,您的无线网卡也许安装的驱动不正确而不能用;第二,如果您是从命令行中来安装系统的,管理 WiFi 就非常可怕。我总是试图避免在命令行中处理 WiFi 。但 Linux 的世界,应具有无所畏惧的精神。如果您不知道怎样操作,您需要继续往下来学习之,这就是写这篇文章的唯一原因。所以我迫自己学习如何在命令行中管理 WiFi 连接。
无论何时要安装一款新的 Linux 发行系统,一般的建议都是让您通过有线连接来接到互联网的。这主要的原因有两条:第一,您的无线网卡也许安装的驱动不正确而不能用;第二,如果您是从命令行中来安装系统的,管理 WiFi 就非常可怕。我总是试图避免在命令行中处理 WiFi 。但 Linux 的世界,应具有无所畏惧的精神。如果您不知道怎样操作,您需要继续往下来学习之,这就是写这篇文章的唯一原因。所以我迫使自己学习如何在命令行中管理 WiFi 连接。
通过命令行来设置连接到 WiFi 当然有很多种方法,但在这篇文章里,也是一个建议,我将会作用最基本的方法:那就是使用在任何发布版本中都有的包含在“默认包”里的程序和工具。或者我偏向于使用这一种方法。使用此方法显而易见的好处是这个操作过程能在任意有 Linux 系统的机器上复用。不好的一点是它相对来说比较复杂。
通过命令行来设置连接到 WiFi 当然有很多种方法,但在这篇文章里,同时也是一个建议,我使用最基本的方法:那就是使用在任何发布版本中都有的包含在“默认包”里的程序和工具。或者我偏向于使用这一种方法。使用此方法显而易见的好处是这个操作过程能在任意有 Linux 系统的机器上复用。不好的一点是它相对来说比较复杂。
首先,我假设您们都已经正确安装了无线网卡的驱动程序。没有这个前提,后续的一切都如镜花水月。如果您的机器确实没有正确安装驱动,您应该看看您的发行版的维基和文档。
然后您就可以用如下命令来检查是哪一个接口来支持无线连接的
然后您就可以用如下命令来检查是哪一个接口来支持无线连接的
$ iwconfig
@ -24,21 +24,21 @@
![](https://farm4.staticflickr.com/3847/14909117931_e2f3d0feb0_z.jpg)
根据扫描出的结果,可以得到网络的名字(它的 SSID它的信息强度以及它使用的是哪个安全加密的WEP、WPA/WPA2。从此时起将会分成两条路线情况很好的和容易的以及情况稍微复杂的。
根据扫描出的结果,可以得到网络的名字(它的 SSID、它的信号强度以及它使用的是哪种安全加密WEP、WPA/WPA2。从此时起将会分成两条路线情况很好、很容易的以及情况稍微复杂的。
如果您想连接的网络是没有加密的,您可以用下面的命令直接连接:
$ sudo iw dev wlan0 connect [network SSID]
$ sudo iw dev wlan0 connect [网络 SSID]
如果网络是用 WEP 加密的,也非常容易:
$ sudo iw dev wlan0 connect [network SSID] key 0:[WEP key]
$ sudo iw dev wlan0 connect [网络 SSID] key 0:[WEP 密钥]
但网络使用的是 WPA 或 WPA2 协议的话,事情就不好办了。这种情况,您就得使用叫做 wpa_supplicant 的工具,它默认是没有启用的。需要修改 /etc/wpa_supplicant/wpa_supplicant.conf 文件,增加如下行:
但网络使用的是 WPA 或 WPA2 协议的话,事情就不好办了。这种情况,您就得使用名叫 wpa_supplicant 的工具,它默认是没有启用的。然后需要修改 /etc/wpa_supplicant/wpa_supplicant.conf 文件,增加如下行:
network={
ssid="[network ssid]"
psk="[the passphrase]"
ssid="[网络 ssid]"
psk="[密码]"
priority=1
}
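写好配置文件之后,通常还要启动 wpa_supplicant 并通过 DHCP 获取 IP 地址。下面是一个示意(函数名 wifi_connect 是虚构的;为避免误操作真实网卡,它只打印将要执行的命令,去掉 echo 才会真正执行;接口名 wlan0 沿用上文):

```shell
# 连接 WPA/WPA2 网络的后续步骤示意(演示用, 只打印命令)
wifi_connect() {  # 用法: wifi_connect 无线接口名
    iface=$1
    echo sudo wpa_supplicant -B -i "$iface" \
         -c /etc/wpa_supplicant/wpa_supplicant.conf
    echo sudo dhclient "$iface"
}
wifi_connect wlan0
```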

View File

@ -0,0 +1,82 @@
Ubuntu 有这功能吗回答4个新用户最常问的问题
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/Screen-Shot-2014-08-13-at-14.31.42.png)
**在谷歌输入‘Can Ubunt[u]’,一系列的自动建议会展现在你面前。这些建议都是根据最近用户最频繁检索的内容而形成的。**
对于Linux老用户来说他们都能胸有成竹地回答这些问题。但是对于新用户或者那些还在考察类似Ubuntu这样的发行版是否适合自己的人来说他们就不是十分清楚这些答案了。这些都是中肯、真实而且基本的问题。
所以在这篇文章中我将会回答4个最常被搜索到的"Can Ubuntu...?"问题。
### Ubuntu可以取代Windows吗###
![Windows isn't to everyone's tastes — or needs](http://www.omgubuntu.co.uk/wp-content/uploads/2014/07/windows-9-desktop-rumour.png)
*Windows 并不是每个人都喜欢或都必须的*
是的。Ubuntu和其他Linux发行版可以安装到任何一台能运行微软系统的电脑上。
无论你觉得**应不应该**取代它,要不要替换只取决于你自己的需求。
例如你在上大学所需的软件都只是Windows而已。暂时而言你是不需要完全更换你的系统。对于工作也是同样的道理。如果你工作所用到的软件只是微软Office, Adobe Creative Suite 或者是一个AutoCAD应用程序不是很建议你更换系统坚持你现在所用的软件就足够了。
但是对于那些用Ubuntu完全取代微软系统的我们Ubuntu 提供了一个安全的桌面工作环境,可以运行在非常广泛的硬件上。基本上,从办公套件到网页浏览器、视频应用程序、音乐应用程序再到游戏,各类需求都有软件支持。
### Ubuntu 可以运行 .exe文件吗###
![你可以在Ubuntu运行一些Windows应用程序。](http://www.omgubuntu.co.uk/wp-content/uploads/2013/01/adobe-photoshop-cs2-free-linux.png)
*你可以在Ubuntu运行一些Windows应用程序*
是可以的尽管这些程序不是一步到位或者不能保证运行成功。这是因为这些软件原本就是在Windows下运行的本来就与其他桌面操作系统不兼容包括Mac OS X 或者 Android (安卓系统)。
那些专门为 Ubuntu和其他 Debian 系的 Linux 发行版)制作的软件安装包都带有“.deb”文件后缀名。它们的安装过程与安装 .exe 程序是一样的双击安装包然后根据屏幕提示完成安装。LCTT 译注RedHat 系统采用 .rpm 文件,其它的也有各种不同的安装包格式;作为初学者,你可以把它们当成各种压缩包格式来理解)
但是Linux是很多样化的。它使用一个名为"Wine"的兼容层,可以运行许多当下很流行的应用程序。 (Wine不是一个模拟器但是简单来看可以当成一个快捷方式。这些程序不会像在Windows下运行得那么顺畅或者有着出色的用户界面。然而它足以满足日常的工作要求。
一些很出名的 Windows 软件是可以通过 Wine 运行在 Ubuntu 上的,这包括老版本的 Photoshop 和微软 Office。有关兼容软件的列表[参照 Wine 应用程序数据库][1]。
### Ubuntu会有病毒吗###
![它可能有错误,但是它并没有病毒](http://www.omgubuntu.co.uk/wp-content/uploads/2014/04/errors.jpg)
*它可能有错误,但是它并没有病毒*
理论上,它会有病毒。但是,实际上它没有。
Linux发行版本是建立在一个病毒蠕虫隐匿程序都很难被安装运行或者造成很大影响的环境之下的。
例如很多应用程序都是在没有特别管理权限要求以普通用户权限运行的。病毒要访问系统关键部分的请求也是需要用户管理权限的。很多软件的提供都是从那些维护良好的而且集中的资源库例如Ubuntu软件中心而不是一些不知名的网站。 由于这样的管理使得安装一些受感染的软件的几率可以忽略不计。
你应不应该在Ubuntu系统安装杀毒软件这取决于你自己。为了自己的安心或者如果你经常通过Wine来使用Windows软件或者双系统你可以安装ClamAV。它是一个免费的开源的病毒扫描应用程序。你可以在Ubuntu软件中心找到它。
你可以在 [Ubuntu 维基][2]中了解更多关于 Linux 或者 Ubuntu 上的病毒的信息。
### 在Ubuntu上可以玩游戏吗###
![Steam有着上百个专门为Linux设计的高质量游戏。](http://www.omgubuntu.co.uk/wp-content/uploads/2012/11/steambeta.jpg)
*Steam有着上百个专门为Linux设计的高质量游戏*
当然可以Ubuntu 上有着多样化的游戏从传统简单的2D国际象棋、拼字和扫雷游戏到对显卡要求很高的现代 AAA 级大作。
你首先可以去 **Ubuntu 软件中心**。在这里你会找到很多免费开源的和收费的游戏包括广受好评的独立游戏像World of Goo 和 Braid。当然也有传统游戏可玩例如 Pychess国际象棋、four-in-a-row四子棋和 Scrabble clones拼字游戏
对于游戏狂热爱好者,你可以安装**Steam for Linux**。在这里你可以找到各种这样最新最好玩的游戏。
另外,记得留意这个网站:[Humble Bundle][3]。每个月都会有两周的这种“只买你想要的”套餐。作为游戏平台,它对 Linux 特别友好,每当有新游戏推出的时候,它都会保证对 Linux 的支持。
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/08/ubuntu-can-play-games-replace-windows-questions
作者:[Joey-Elijah Sneddon][a]
译者:[Shaohao Lin](https://github.com/shaohaolin)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://appdb.winehq.org/
[2]:https://help.ubuntu.com/community/Antivirus
[3]:https://www.humblebundle.com/

View File

@ -1,4 +1,4 @@
在RHEL / CentOS下停用按下Ctrl-Alt-Del 重启系统的功能
在RHEL/CentOS 5/6下停用按下Ctrl-Alt-Del 重启系统的功能
================================================================================
在Linux里由于对安全的考虑我们允许任何人按下**Ctrl-Alt-Del**来**重启**系统。但是在生产环境中应该停用按下Ctrl-Alt-Del 重启系统的功能。
@ -37,7 +37,7 @@ via: http://www.linuxtechi.com/disable-reboot-using-ctrl-alt-del-keys/
作者:[Pradeep Kumar][a]
译者:[2q1w2007](https://github.com/2q1w2007)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
Linux FAQ -- 如何修复“X11 forwarding request failed on channel 0”错误
Linux有问必答:如何修复“X11 forwarding request failed on channel 0”错误
================================================================================
> **问题**: 当我尝试使用SSH的X11转发选项连接到远程主机时, 我在登录时遇到了一个 "X11 forwarding request failed on channel 0" X11 转发请求在通道0上失败的错误。 我为什么会遇到这个错误,并且该如何修复它
@ -26,9 +26,9 @@ X11客户端不能正确处理X11转发这会导致报告中的错误。要
$ sudo systemctl restart ssh.service (Debian 7, CentOS/RHEL 7, Fedora)
$ sudo service sshd restart (CentOS/RHEL 6)
### 方案 ###
### 方案 ###
如果远程主机的SSH服务禁止了IPv6,那么X11转发失败的错误也有可能发生。要解决这个情况下的错误。打开/etc/ssh/sshd配置文件打开"AddressFamily all" (如果有的话的注释。接着加入下面这行。这会强制SSH服务只使用IPv4而不是IPv6。
如果远程主机的SSH服务禁止了IPv6那么X11转发失败的错误也有可能发生。要解决这个情况下的错误。打开/etc/ssh/sshd配置文件取消对"AddressFamily all" (如果有这条的话的注释。接着加入下面这行。这会强制SSH服务只使用IPv4而不是IPv6。LCTT 译注此处恐有误AddressFamily 没有 all 这个参数,而 any 代表同时支持 IPv6和 IPv4以此处的场景而言应该是关闭IPv6支持只支持 IPv4所以此处应该是“注释掉 AddressFamily any”才对。
$ sudo vi /etc/ssh/sshd_config
@ -43,7 +43,7 @@ X11客户端不能正确处理X11转发这会导致报告中的错误。要
via: http://ask.xmodulo.com/fix-broken-x11-forwarding-ssh.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,107 @@
如何开始一个开源项目
================================================================================
> 循序渐进的指导
**你有这个问题**:你已经权衡了[开源代码的优劣][1],也已经确定[你的软件需要成为一个开源项目][2],但是,你不知道怎样把它做好。
当然,你也许已经知道[如何创建Github帐号并开始项目][3],但是这些其实是做开源最简单的部分。真正难的部分是如何让足够多的人关注你的项目,并为它做出贡献。
![](http://a4.files.readwrite.com/image/upload/c_fit,q_80,w_630/MTE5NDg0MDYxMTg2Mjk1MzEx.jpg)
接下来的原则会指导你构建和发布其他人愿意关注的代码。
### 基本原则 ###
选择开源可能有许多原因。也许你希望吸引一个社区来帮助编写你的代码。也许,[众所周知][4],你明白“开源是小型开发团队所写代码的倍增器”。
或者你只是认为这是必须做的事,[如同英国政府一样][5]。
无论何种原因,开源要想成功,就必须为将来使用这个软件的人们做很多规划。如同[我在2005年写的][6]如果你“需要大量的人做贡献bug 修复、扩展等等)”,那么你需要“写好文档,使用易于接受的编程语言,并采用模块化的架构”。
对了,你也需要写人们在乎的软件。
想想你每天所依赖的技术操作系统、Web 应用框架、数据库等等。与航天这类特殊行业的小众技术相比,被广泛应用的技术更容易让外部的人产生兴趣并做出贡献;应用越广泛的技术,能找到的贡献者和用户也越多。
总的来说,任何成功的开源项目有以下共同点:
1.最佳的时机(解决市场实际需求)
2.一个健壮的团队,包括开发者和非开发者
3.一个易于参与的架构(更多详见下文)
4.模块化的代码,让新贡献者更容易找到项目中可以入手的部分,而不是强迫他们理解庞大代码库的每一部分
5.可以广泛应用的代码(即使只是在一个狭窄领域里流行,也比“自生自灭”的小众生态更吸引人)
6.良好的初始源码如果你放到Github上的是垃圾你得到的回报也只会是垃圾
7.一个自由的许可证:我[个人更爱Apache型的许可证][7]因为它让开发者采用时障碍最低当然许多成功的项目如Linux和MySQL使用GPL许可证也有很棒的效果
上述几项当中,最难做到的是成功地邀请到参与者,因为这些事情无关代码,而关乎人。
### 开源不单是一个许可证 ###
今年,最棒的一件事是我读到了来自 Vitorio Miliano[@vitor_io][8])的文章,他是来自德州奥斯汀的用户体验交互设计师。[Miliano][9]指出,任何尚未参与你的项目的人本质上都是“外行”,无论他们的技术能力如何,也无论他们懂不懂代码。
所以他认为,你的工作是让人们更容易参与进来,让为你的项目做贡献变得简单。在阐述如何让非程序员参与开源项目时,他列出了项目领导者需要向所有人(无论懂不懂技术)提供的几样东西:
> 1. 一种方法去了解你的项目价值
>
> 2. 一种方法去了解他们可以为项目提供的价值
>
> 3. 一种方法去了解他们可以从贡献代码获得的价值
>
> 4. 一种方法去了解贡献流程,端到端
>
> 5. 贡献机制适用于现有的工作流
项目领导者经常只想集中精力于上述的第五点却不提供理解第1到4点的路径。如果潜在的贡献者体会不到“为什么”那么“如何”参与就无关紧要了。
同样至关重要的是Miliano 写道,项目应该有一个通俗易懂的项目简介,并随时通过这份简介向每一个人展示项目的可达性和包容性。他断言,这还会带来额外的好处:文档和其他介绍性内容也会因此变得通俗易懂。
关于第二点,程序员和非程序员一样,需要能够明白你到底需要什么,这样他们才能认清自己可以贡献的方向。有时,就像 MongoDB 解决方案架构师 [Henrik Ingo 告诉我的][10]那样,“一个聪明的人可以贡献很棒的代码,但是项目成员无法理解它(代码)”;如果组织愿意接纳这个贡献并花时间研究理解它,这倒不算一个糟糕的问题。
但是不会经常发生。
### 你真的想领导一个开源项目吗? ###
许多开源项目的领导口头上提倡包容,但他们实际上毫无包容性可言。如果你不想要人们做贡献,就不要假装开源。
是的有时这是老生常谈的话题。就像HackerNews最近的报道[一个开发者的开发工作][11]。
> 小项目能得到的帮助很少,基本上不会有很多人来合作完成。我看到了它们的进步,但是我没有看到我自己的进步:如果我去帮助它们,显然就得把有限的时间花在与人协调管理上,而不是编码本身,这不是我想要的。所以我忽略了它们。
这是一个保持理智的好方法,但这个态度并不预示着这个项目会被广泛地分享。
如果你确实不怎么在乎来自非程序员的贡献(设计、文档或其他什么),那么请预先说明。再次强调,如果这是实情,你的项目就算不上一个真正的开源项目。
当然,排斥感并不总是有据可依的。就像 ActiveState 的副总裁 Bernard Golden 告诉我的:“一些潜在的开发者会对现有开发团体‘小集团’的感觉心存畏惧,虽然这种感觉不一定属实。”
现在就去了解开发者为什么愿意贡献,并主动邀请他们参与,这意味着对开源项目更多的投入,也意味着它能活得更长久。
图片由[Shutterstock][12]提供
--------------------------------------------------------------------------------
via: http://readwrite.com/2014/08/20/open-source-project-how-to
作者:[Matt Asay][a]
译者:[Vic___/VicYu](http://www.vicyu.net)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://readwrite.com/author/matt-asay
[1]:http://readwrite.com/2014/07/07/open-source-software-pros-cons
[2]:http://readwrite.com/2014/08/15/open-source-software-business-zulily-erp-wall-street-journal
[3]:http://www.cocoanetics.com/2011/01/starting-an-opensource-project-on-github/
[4]:http://werd.io/2014/the-roi-of-building-open-source-software
[5]:https://www.gov.uk/design-principles
[6]:http://asay.blogspot.com/2005/09/so-you-want-to-build-open-source.html
[7]:http://www.cnet.com/news/apache-better-than-gpl-for-open-source-business/
[8]:https://twitter.com/vitor_io
[9]:http://opensourcedesign.is/blogging_about/import-designers/
[10]:https://twitter.com/h_ingo/status/501323333301190656
[11]:https://news.ycombinator.com/item?id=8122814
[12]:http://www.shutterstock.com/

View File

@ -3,37 +3,37 @@
<center><img src="http://www.linux.com/images/stories/41373/fig-1-annabelle.jpg" /></center>
<center><small>图 1侏儒山羊 Annabelle</small></center>
[Krita][1] 是一款很棒的绘图应用,同时也是很不错的照片编辑器。今天我们将学习如何给图片添加文字,以及如何有选择锐化照片的某一部分。
[Krita][1] 是一款很棒的绘图应用,同时也是很不错的照片编辑器。今天我们将学习如何给图片添加文字,以及如何有选择锐化照片的某一部分。
### Krita 简介 ###
与其他绘图/制图应用类似Krita 内置了数百种工具和选项,以及多种处理手段。因此让我们来花点时间了解一下。
与其他绘图/制图应用类似Krita 内置了数百种工具和选项,以及多种处理方法。因此它值得我们花点时间来了解一下。
Krita 默认使用了暗色主题。我不太喜欢暗色主题,但幸运的是 Krita 还有其他很赞的主题,你可以在任何时候通过菜单里的“设置 > 主题”进行更改。
Krita 使用了窗口停靠样式的工具条。如果左右两侧面板的 Dock 工具条没有显示,检查一下“设置 > 显示工具条”选项,你也可以在“设置 > 工具条”中对工具条按你的偏好进行调整。不过隐藏的工具条也许会让你感到一些小小的不快,它们只会在一个狭小的压扁区域展开,你看不见其中的任何东西。你可以拖动他们至顶端或者 Krita 窗口的一侧,扩展或者收缩它们,甚至你可以把他们拖到 Krita 外,拖到你显示屏的任意位置。如果你把其中一个工具条拖到了另一个工具条上,它们会自动合并成一个工具条。
Krita 使用了窗口停靠样式的工具条。如果左右两侧面板的 Dock 工具条没有显示,检查一下“设置 > 显示工具条”选项,你也可以在“设置 > 工具条”中对工具条按你的偏好进行调整。不过隐藏的工具条也许会让你感到一些小小的不快,它们只会在一个狭小的压扁区域展开,你看不见其中的任何东西。你可以拖动它们至顶端或者 Krita 窗口的一侧,放大或者缩小它们,甚至你可以把它们拖到 Krita 外,放在你显示屏的任意位置。如果你把其中一个工具条拖到了另一个工具条上,它们会自动合并成一个工具条。
当你配置好比较满意的工作区后,你可以在“选择工作区”内保存它。你可以在笔刷工具条(通过“设置 > 显示工具条”开启显示)的右侧找到“选择工作区”。其中有对工作区的不同配置,当然你也可以创建自己的配置(图 2
<center><img src="http://www.linux.com/images/stories/41373/fig-2-workspaces.jpg" /></center>
<center><small>图 2在“选择工作区”里保存用户定制的工作区。</small></center>
Krita 中有多重缩放控制手段。Ctrl + “=” 放大Ctrl + “-” 缩小Ctrl + “0” 重置为 100% 缩放画面。你也可以通过“视图 > 缩放”,或者右下角的缩放条进行控制。在缩放条的左侧还有一个下拉式的缩放菜单。
工具菜单位于窗口左部,其中包含了锐化和选择工具。你最好移动你的鼠标到每个工具上,查看一下标签。工具选项条总是显示当前正在使用的工具的选项,默认情况下工具选项条位于窗口右部。
Krita 中有多种缩放控制方法。Ctrl + “=” 放大Ctrl + “-” 缩小Ctrl + “0” 重置为 100% 缩放画面。你也可以通过“视图 > 缩放”,或者右下角的缩放条进行控制。在缩放条的左侧还有一个下拉式的缩放菜单。
工具菜单位于窗口左部,其中包含了锐化和选择工具。你必须移动光标到每个工具上,才能查看它的标签。工具选项条总是显示当前正在使用的工具的选项,默认情况下工具选项条位于窗口右部。
### 裁切工具 ###
当然,在工具菜单条中有裁切工具,并且非常易于使用。用矩形选取把你所要选择的区域圈定,使用拖拽的方式来调整选区,调整完毕后点击返回按钮。在工具选项条中,你可以选择对所有图层应用裁切,还是只对当前图层应用裁切,通过输入具体数值,或者是百分比调整尺寸。
当然,在工具菜单条中有裁切工具,并且非常易于使用。把你想要选择的区域用矩形圈定,使用拖拽的方式来调整选区,调整完毕后点击返回按钮。在工具选项条中,你可以选择对所有图层应用裁切,还是只对当前图层应用裁切,通过输入具体数值,或者是百分比调整尺寸。
### 添加文本 ###
当你想在照片上添加标签或者说明这类简单文本的时候Krita 也许会让你感到不知所措,因为它有太多的艺术字效果可供选择了。但 Krita 同时也支持添加简单的文字。点击文本工具条,你将会看到工具选项条如图 3 那样。
当你想在照片上添加标签或者说明这类简单文本的时候Krita 也许会让你眼花缭乱,因为它有太多的艺术字效果可供选择了。但 Krita 同时也支持添加简单的文字。点击文本工具条,你将会看到工具选项条如图 3 那样。
<center><img src="http://www.linux.com/images/stories/41373/fig-3-text.jpg" /></center>
<center><small>图 3文本选项。</small></center>
点击展开按钮。这将显示简单文本工具;首先绘制矩形文本框,接着在文本框内输入文字。工具选项条中有所有常用的文本格式选项:文本选择、文本尺寸、文字与背景颜色、边距,以及一系列图形风格。但你处理完文本后点击外观处理工具,外观处理工具的按钮是一个白色的箭头,在文本工具按钮旁边,通过外观处理工具你可以调整文字整体的尺寸、外观还有位置。外观处理工具的工具选项包括多种不同的线条、颜色还有边距。图 4 为我向蜗居在城市里的亲戚发送的带有愉快标题的照片。
点击展开按钮。这将显示简单文本工具;首先绘制矩形文本框,接着在文本框内输入文字。工具选项条中有所有常用的文本格式选项:文本选择、文本尺寸、文字与背景颜色、边距,以及一系列图形风格。但你处理完文本后点击外观处理工具,外观处理工具的按钮是一个白色的箭头,在文本工具按钮旁边,通过外观处理工具你可以调整文字整体的尺寸、外观还有位置。外观处理工具的工具选项包括多种不同的线条、颜色还有边距。图 4 是我为我那些蜗居在城市里的亲戚们发送的一幅带有愉快标题的照片。
<center><img src="http://www.linux.com/images/stories/41373/fig-4-frontdoor.jpg" /></center>
<center><small>图 4来这绿色农场吧。</small></center>
@ -57,7 +57,7 @@ Krita 中有多重缩放控制手段。Ctrl + “=” 放大Ctrl + “-”
接着,你要问,“虚化蒙板”是什么意思?这个名字来源于锐化技术:虚化蒙板滤镜先从原始图像生成一份模糊(虚化)的副本作为蒙板,再把它与原图分层叠加运算。这会使图像比直接锐化产生更加锐利清晰的效果。
今天要说的就这么多。有关 Krita 的很多,但很杂。你可以从 [Krita Tutorials][2] 开始学习,也可以在网上找寻相关的学习视频。
今天要说的就这么多。有关 Krita 的资料很多,但比较杂乱。你可以从 [Krita Tutorials][2] 开始学习,也可以在 YouTube 上找寻相关的学习视频。
- [krita 官方网站][1]
@ -67,10 +67,10 @@ via: http://www.linux.com/learn/tutorials/786040-photo-editing-on-linux-with-kri
作者:[Carla Schroder][a]
译者:[SteveArcher](https://github.com/SteveArcher)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linux.com/community/forums/person/3734
[1]:https://krita.org/
[2]:https://krita.org/learn/tutorials/
[2]:https://krita.org/learn/tutorials/

View File

@ -1,10 +1,10 @@
Ubuntu 14.04和拥有Texmaker的Linux Mint 17(基于ubuntu和debian的Linux发行版)中使用LaTeX
Ubuntu 14.04 和 Linux Mint 17 中通过 Texmaker 来使用LaTeX
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/texmaker_Ubuntu.jpeg)
[LaTeX][1]是一种文本标记语言,也可以说是一种文档制作系统。经常在很多大学或者机构中作为一种标准来书写专业的科学文献,毕业论文或其他类似的文档。在这篇文章中我们会看到如何在Ubuntu 14.04中使用LaTeX。
[LaTeX][1]是一种文本标记语言,也可以说是一种文档编撰系统。在很多大学或者机构中普遍作为一种标准来书写专业的科学文献、毕业论文或其他类似的文档。在这篇文章中我们会看到如何在Ubuntu 14.04中使用LaTeX。
### 在Ubuntu 14.04或Linux Mint 17中安装Texmaker
### 在 Ubuntu 14.04 Linux Mint 17 中安装 Texmaker 来使用LaTeX
[Texmaker][2]是一款免费开源的LaTeX编辑器它支持一些主流的桌面操作系统比如WindowLinux和OS X。下面是Texmaker的主要特点
@ -24,11 +24,11 @@
- [下载Texmaker编辑器][3]
你通过链接下载到的是一个.deb包因此你在一些像Linux MintElementary OSPinguy OS等等类Debain的发行版中可以使用相同的安装方式。
你通过上述链接下载到的是一个.deb包因此你在一些像Linux MintElementary OSPinguy OS等等类Debain的发行版中可以使用相同的安装方式。
如果你想使用像Github类型的markdown编辑器你可以试试[Remarkable编辑器][4]。
如果你想使用像Github的markdown编辑器你可以试试[Remarkable编辑器][4]。
希望Texmaker能够在Ubuntu和Linux Mint中帮到你
希望Texmaker能够在Ubuntu和Linux Mint中帮到你
--------------------------------------------------------------------------------
@ -36,7 +36,7 @@ via: http://itsfoss.com/install-latex-ubuntu-1404/
作者:[Abhishek][a]
译者:[john](https://github.com/johnhoow)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
@ -44,4 +44,4 @@ via: http://itsfoss.com/install-latex-ubuntu-1404/
[1]:http://www.latex-project.org/
[2]:http://www.xm1math.net/texmaker/index.html
[3]:http://www.xm1math.net/texmaker/download.html#linux
[4]:http://itsfoss.com/remarkable-markdown-editor-linux/
[4]:http://itsfoss.com/remarkable-markdown-editor-linux/

View File

@ -1,4 +1,4 @@
如何在Crunchbang下复Openbox的默认配置
如何在Crunchbang下复Openbox的默认配置
================================================================================
[CrunchBang][1]是一个很好地融合了速度、风格和内容的基于Debian GNU/Linux的发行版。使用了灵活的Openbox窗口管理器高度定制化并且提供了一个现代、全功能的GNU/Linux系统而没有牺牲性能。
@ -6,7 +6,7 @@ Crunchbang是高度自定义的用户可以尽情地地把它调整成他们
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/09/curnchbang_menu_xml.png)
其中从菜单配置文件中去除了所有代码。由于我没有备份最好备份配置文件。我不得不搜索Crunchbang开箱即用的默认配置。这里就是我如何修复的过程要感谢Crunchbang论坛。
我的菜单配置文件中丢失了所有内容。由于我没有备份最好备份配置文件。我不得不搜索Crunchbang安装后的默认配置。这里就是我如何修复的过程,这里要感谢Crunchbang论坛。
有趣的是,所有的默认配置其实都为你预先备份好了,你可以在这里找到:
@ -30,7 +30,7 @@ via: http://www.unixmen.com/recover-default-openbox-config-files-crunchbang/
作者:[Enock Seth Nyamador][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
Linux有问必答——如何创建新的亚马逊AWS访问密钥
Linux有问必答如何创建新的亚马逊AWS访问密钥
================================================================================
> **问题**我在配置一个需要访问我的亚马逊AWS帐号的应用时被要求提供**AWS访问密钥ID**和**秘密访问密钥**我怎样创建一个新的AWS访问密钥呢
@ -42,7 +42,7 @@ IAM是一个web服务它允许一个公司管理多个用户及其与一个AW
via: http://ask.xmodulo.com/create-amazon-aws-access-key.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,8 +1,8 @@
Linux有问必答——如何扩展XFS文件系统
Linux有问必答如何扩展XFS文件系统
================================================================================
> **问题**我的磁盘上有额外的空间所以我想要扩展其上创建的现存的XFS文件系统以完全使用额外空间。怎样才是扩展XFS文件系统的正确途径
XFS是一个开源的GPL子文件系统,最初由硅谷图形开发,现在被大多数的Linux发行版都支持。事实上XFS已被最新的CentOS/RHEL 7采用成为其默认的文件系统。在其众多的特性中包含了“在线调整大小”这一特性使得现存的XFS文件系统在被挂载时可以进行扩展。然而对于XFS文件系统的缩减确实不被支持的
XFS是一个开源的GPL日志文件系统最初由硅谷图形SGI开发现在大多数的Linux发行版都支持。事实上XFS已被最新的CentOS/RHEL 7采用成为其默认的文件系统。在其众多的特性中包含了“在线调整大小”这一特性使得现存的XFS文件系统在已经挂载的情况下可以进行扩展。然而对于XFS文件系统的**缩减**却还没有支持。
要扩展一个现存的XFS文件系统你可以使用命令行工具xfs_growfs这在大多数Linux发行版上都默认可用。由于XFS支持在线调整大小目标文件系统可以挂在也可以不挂载。
@ -24,7 +24,7 @@ XFS是一个开源的GPL日子文件系统最初由硅谷图形开发
![](https://farm6.staticflickr.com/5569/14914950529_ddfb71c8dd_z.jpg)
注意当你扩展一个现存的XFS文件系统时必须准备事先添加用于XFS文件系统扩展的空间。这虽然是十分明了的事但是如果在潜在的分区或磁盘卷上没有空闲空间可用的话xfs_growfs不会做任何事情。同时如果你尝试扩展XFS文件系统大小到超过磁盘分区或卷的大小xfs_growfs将会失败。
注意当你扩展一个现存的XFS文件系统时必须准备事先添加用于XFS文件系统扩展的空间。这虽然是很显然的事但是如果在所在的分区或磁盘卷上没有空闲空间可用的话xfs_growfs就没有办法了。同时如果你尝试扩展XFS文件系统大小到超过磁盘分区或卷的大小xfs_growfs将会失败。
![](https://farm4.staticflickr.com/3870/15101281542_98a49a7c3a_z.jpg)
@ -33,6 +33,6 @@ XFS是一个开源的GPL日子文件系统最初由硅谷图形开发
via: http://ask.xmodulo.com/expand-xfs-file-system.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
数据库常见问题答案--如何使用命令行创建一个MySQL数据库
Linux有问必答如何在命令行创建一个MySQL数据库
===
> **问题**在一个某处运行的MySQL服务器上我该怎样通过命令行创建和安装一个MySQL数据库呢
@ -47,8 +47,8 @@
为了达到演示的目的我们将会创建一个叫做posts_tbl的表表里会存储关于文章的如下信息
- 文章的标题
- 作者的第一个名字
- 作者的最后一个名字
- 作者的名字
- 作者的
- 文章可用或者不可用
- 文章创建的日期
@ -104,7 +104,7 @@
via: http://ask.xmodulo.com/create-mysql-database-command-line.html
译者:[su-kaiyao](https://github.com/su-kaiyao)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linu
x中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
在Ubuntu 14.04中重置Unity和Compiz设置【小贴士】
小技巧:在Ubuntu 14.04中重置Unity和Compiz设置
================================================================================
如果你一直在试验你的Ubuntu系统你可能最终以Unity和Compiz的一片混乱收场。在此贴士中我们将看看怎样来重置Ubuntu 14.04中的Unity和Compiz。事实上全部要做的事仅仅是运行几个命令而已。
@ -34,7 +34,7 @@ via: http://itsfoss.com/reset-unity-compiz-settings-ubuntu-1404/
作者:[Abhishek][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,20 +1,20 @@
在CentOS 7上安装Vmware 10
技巧:在CentOS 7上安装Vmware 10
================================================================================
在CentOS 7上安装Vmware 10.0.3,我将给你们我的经验。通常,这个版本上不能在CentOS 7工作的因为它只能运行在比较低的内核版本3.10上。
在CentOS 7上安装Vmware 10.0.3,我来介绍下我的经验。通常,这个版本是不能在CentOS 7工作的因为它只能运行在比较低的内核版本3.10上。
1 - 以正常方式下载并安装(没有问题)。唯一的问题是在后来体验vmware程序的时候。
首先,以正常方式下载并安装(没有问题)。唯一的问题是在后来运行vmware程序的时候。
### 如何修复? ###
**1 进入/usr/lib/vmware/modules/source。**
**1 进入 /usr/lib/vmware/modules/source。**
cd /usr/lib/vmware/modules/source
**2 解压vmnet.tar.**
**2 解压 vmnet.tar.**
tar -xvf vmnet.tar
**3 进入vmnet-only目录。**
**3 进入 vmnet-only 目录。**
cd vmnet-only
@ -54,6 +54,6 @@ via: http://www.unixmen.com/install-vmware-10-centos-7/
作者: M.el Khamlichi
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
Ubuntu 14.04历史文件清理
如何清理 Ubuntu 14.04 的最近打开文件历史列表
================================================================================
这篇简明教程面向初学者说明了如何在Ubuntu 14.04中清理“最近打开的文件”历史列表。
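在动手之前可以先了解一下原理:多数 GNOME/GTK 应用把“最近打开的文件”记录在 `~/.local/share/recently-used.xbel` 这个 XML 文件里(此路径是基于 GTK 默认行为的假设,并非本文原文给出)。下面是一个最小的演示,为安全起见用临时目录模拟 `$HOME`

```shell
# 演示:清空“最近打开的文件”历史记录文件
# 注意recently-used.xbel 的路径是 GTK 应用的默认约定,属于本示例的假设
demo_home=$(mktemp -d)                       # 用临时目录模拟 $HOME实际操作时用真实的 $HOME
mkdir -p "$demo_home/.local/share"
echo '<xbel version="1.0"></xbel>' > "$demo_home/.local/share/recently-used.xbel"
: > "$demo_home/.local/share/recently-used.xbel"   # 清空该文件即可清除历史列表
size=$(wc -c < "$demo_home/.local/share/recently-used.xbel")
echo "history file size: $size"
```

在真实系统上,把 `$demo_home` 换成 `$HOME` 即可;清空之后,可能需要重启相应应用才能看到“最近”列表被清除的效果。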
@ -21,6 +21,6 @@ Ubuntu 14.04历史文件清理
via: http://www.ubuntugeek.com/how-to-delete-recently-opened-files-history-in-ubuntu-14-04.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,178 @@
stat -- 获取比 ls 更多的信息
================================================================================
> 厌倦了 ls 命令,并且想查看更多有关你的文件的有趣的信息? 试一试 stat
![](http://www.itworld.com/sites/default/files/imagecache/large_thumb_150x113/stats.jpg)
ls 命令可能是每一个 Unix 使用者第一个学习的命令之一, 但它仅仅显示了 stat 命令能给出的信息的一小部分。
stat 命令从文件的索引节点获取信息。 正如你可能已经了解的那样, 每一个系统里的文件都存有三组日期和时间, 它们包括最近修改时间(即使用 ls -l 命令时显示的日期和时间), 最近状态改变时间(包括对文件重命名)和最近访问时间。
使用长列表模式查看文件信息, 你会看到类似下面的内容:
$ ls -l trythis
-rwx------ 1 shs unixdweebs 109 Nov 11 2013 trythis
使用 stat 命令, 你会看到下面这些:
$ stat trythis
File: `trythis'
Size: 109 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731691 Links: 1
Access: (0700/-rwx------) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-09-09 19:27:58.000000000 -0400
Modify: 2013-11-11 08:40:10.000000000 -0500
Change: 2013-11-11 08:40:10.000000000 -0500
在上面的情形中, 文件的状态改变和文件修改的日期/时间是相同的, 而访问时间则是相当近的时间。 我们还可以看到文件使用了 8 个块, 以及两种格式显示的文件权限 -- 八进制0700格式和 rwx 格式。 在第三行显示的索引节点是 12731691。 文件没有其它的硬链接Links: 1。 而且, 这个文件是一个常规文件。
把文件重命名, 你会看到状态改变时间发生变化。
这里的 ctime 信息, 最早设计用来存储文件的创建create日期和时间 但后来不知道什么时候变为用来存储状态修改change时间。
$ mv trythis trythat
$ stat trythat
File: `trythat'
Size: 109 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731691 Links: 1
Access: (0700/-rwx------) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-09-09 19:27:58.000000000 -0400
Modify: 2013-11-11 08:40:10.000000000 -0500
Change: 2014-09-21 12:46:22.000000000 -0400
改变文件的权限也会改变 ctime 域。
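“改变文件的权限也会改变 ctime 域”这一点可以很容易地验证。下面是一个小演示(在临时文件上操作,`--format=%Y` 和 `--format=%Z` 分别输出 mtime 和 ctime 的 Epoch 秒数):

```shell
# 演示chmod 只更新 ctime状态改变时间不更新 mtime内容修改时间
f=$(mktemp)
before_mtime=$(stat --format=%Y "$f")
before_ctime=$(stat --format=%Z "$f")
sleep 1                                  # 保证时间戳会有可见的差异
chmod 640 "$f"                           # 只改权限, 不改文件内容
after_mtime=$(stat --format=%Y "$f")
after_ctime=$(stat --format=%Z "$f")
[ "$after_ctime" -gt "$before_ctime" ] && echo "ctime changed"
[ "$after_mtime" -eq "$before_mtime" ] && echo "mtime unchanged"
rm -f "$f"
```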
你也可以配合通配符来使用 stat 命令以列出一组文件的状态:
$ stat myfile*
File: `myfile'
Size: 20 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731803 Links: 1
Access: (0640/-rw-r-----) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-08-23 03:00:36.000000000 -0400
Modify: 2014-08-22 12:02:12.000000000 -0400
Change: 2014-08-22 12:02:12.000000000 -0400
File: `myfile2'
Size: 20 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731806 Links: 1
Access: (0640/-rw-r-----) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-08-23 03:00:36.000000000 -0400
Modify: 2014-08-22 12:03:30.000000000 -0400
Change: 2014-08-22 12:03:30.000000000 -0400
File: `myfile3'
Size: 40 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12730533 Links: 1
Access: (0640/-rw-r-----) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-08-23 03:00:36.000000000 -0400
Modify: 2014-08-22 12:03:59.000000000 -0400
Change: 2014-08-22 12:03:59.000000000 -0400
如果我们喜欢的话, 我们也可以通过其他命令来获取这些信息。
向 ls -l 命令添加 "u" 选项, 你会看到下面的结果。 注意这个选项会显示最后访问时间, 而添加 "c" 选项则会显示状态改变时间(在本例中, 是我们重命名文件的时间)。
$ ls -lu trythat
-rwx------ 1 shs unixdweebs 109 Sep 9 19:27 trythat
$ ls -lc trythat
-rwx------ 1 shs unixdweebs 109 Sep 21 12:46 trythat
stat 命令也可应用于文件夹。
在这个例子中, 我们可以看到有许多的链接。
$ stat bin
File: `bin'
Size: 12288 Blocks: 24 IO Block: 262144 directory
Device: 18h/24d Inode: 15089714 Links: 9
Access: (0700/drwx------) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-09-21 03:00:45.000000000 -0400
Modify: 2014-09-15 17:54:41.000000000 -0400
Change: 2014-09-15 17:54:41.000000000 -0400
在这里, 我们还可以查看一个文件系统。
$ stat -f /dev/cciss/c0d0p2
File: "/dev/cciss/c0d0p2"
ID: 0 Namelen: 255 Type: tmpfs
Block size: 4096    Fundamental block size: 4096
Blocks: Total: 259366 Free: 259337 Available: 259337
Inodes: Total: 223834 Free: 223531
注意 Namelen (文件名最大长度)域, 如果你想使用长于 255 个字符的文件名的话, 那可就不走运了!
stat 命令还可以只显示我们想要的那部分信息。 下面的例子中, 我们只查看文件类型, 然后是硬连接数。
$ stat --format=%F trythat
regular file
$ stat --format=%h trythat
1
在下面的例子中, 我们查看了文件权限 -- 分别以两种可用的格式 -- 然后是文件的 SELinux 安全上下文。 最后, 我们可以以自 Epoch 开始的秒数格式来查看文件访问时间。
$ stat --format=%a trythat
700
$ stat --format=%A trythat
-rwx------
$ stat --format=%C trythat
(null)
$ stat --format=%X bin
1411282845
下面全部是可用的选项:
%a 八进制表示的访问权限
%A 可读格式表示的访问权限
%b 分配的块数(参见 %B
%B %b 参数显示的每个块的字节数
%d 十进制表示的设备号
%D 十六进制表示的设备号
%f 十六进制表示的 Raw 模式
%F 文件类型
%g 属主的组 ID
%G 属主的组名
%h 硬连接数
%i Inode 号
%n 文件名
%N 如果是符号链接, 显示其所链接的文件名
%o I/O 块大小
%s 全部占用的字节大小
%t 十六进制的主设备号
%T 十六进制的副设备号
%u 属主的用户 ID
%U 属主的用户名
%x 最后访问时间
%X 最后访问时间,自 Epoch 开始的秒数
%y 最后修改时间
%Y 最后修改时间,自 Epoch 开始的秒数
%z 最后改变时间
%Z 最后改变时间,自 Epoch 开始的秒数
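这些选项可以自由组合在同一个格式串里。 下面是一个小例子(在临时文件上演示, 一次输出权限、硬链接数和大小):

```shell
# 演示:在一个 --format 格式串中组合多个字段
tmpfile=$(mktemp)                 # 新建的临时文件: 大小为 0, 硬链接数为 1
chmod 640 "$tmpfile"
line=$(stat --format='%A %h %s' "$tmpfile")
echo "$line"                      # 输出: -rw-r----- 1 0
rm -f "$tmpfile"
```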
针对文件系统还有如下格式选项:
%a 普通用户可用的块数
%b 文件系统的全部数据块数
%c 文件系统的全部文件节点数
%d 文件系统的可用文件节点数
%f 文件系统的可用块数
%C SELinux 的安全上下文
%i 十六进制表示的文件系统 ID
%l 文件名的最大长度
%n 文件系统的文件名
%s 块大小(用于更快的传输)
%S 基本块大小(用于块计数)
%t 十六进制表示的文件系统类型
%T 可读格式表示的文件系统类型
这些信息都是唾手可得的stat 命令也许可以帮你以稍微不同的角度来了解你的文件。
--------------------------------------------------------------------------------
via: http://www.itworld.com/operating-systems/437351/unix-stat-more-ls
作者:[Sandra Henry-Stocker][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/sandra-henry-stocker

View File

@ -1,12 +1,12 @@
Linux有问必答 -- 如何在CentOS7上改变网络接口名
Linux有问必答如何在CentOS7上改变网络接口名
================================================================================
> **提问**: 在CentOS7我想将分配的网络接口名更改为别的名字。有什么合适的方法来重命名CentOS或RHEL7的网络接口
传统上Linux的网络接口被枚举为eth[0123...]但这些名称并不一定符合实际的硬件插槽、PCI位置、USB接口数量等这就引入了一个不可预知的命名问题例如由于不确定的设备探测行为这可能会导致不同的网络配置错误例如由无意的接口改名引起的接口禁用或者防火墙旁路而基于MAC地址的udev规则在虚拟化的环境中也没什么用处这里的MAC地址会像端口数量一样变化无常。
CentOS/RHEL6还推出了[一致和可预测的网络设备命名][1]网络接口的方法。这些特性可以唯一地确定网络接口的名称以使定位和区分设备更容易,并且在这样一种方式下,它随着启动,时间和硬件改变的情况下是持久的。然而这种命名规则并不是默认在CentOS/RHEL6上开启。
CentOS/RHEL6引入了[一致和可预测的网络设备命名][1]网络接口的方法。这些特性可以唯一地确定网络接口的名称以使定位和区分设备更容易,并且在这样一种方式下,无论是否重启机器、过了多少时间、或者改变硬件,其名字都是持久不变的。然而这种命名规则并不是默认在CentOS/RHEL6上开启。
从CentOS/RHEL7起可预见的命名规则变成了默认。根据这一规则接口名称被自动基于固件拓扑结构和位置信息来确定。现在即使添加或移除网络设备接口名称仍然保持固定而无需重新枚举和坏掉的硬件可以无缝替换。
从CentOS/RHEL7起这种可预见的命名规则变成了默认。根据这一规则,接口名称被自动基于固件,拓扑结构和位置信息来确定。现在,即使添加或移除网络设备,接口名称仍然保持固定,而无需重新枚举,和坏掉的硬件可以无缝替换。
* 基于接口类型的两个字母前缀:
* en -- 以太网
@ -14,7 +14,7 @@ CentOS/RHEL6还推出了[一致和可预测的网络设备命名][1]网络接口
* wl -- wlan
* ww -- wwan
*
* Type of names:
* 名字类型:
* b<number> -- BCMA总线核心编号
* ccw<name> -- CCW总线组名
* o<index> -- 板载设备的索引号
@ -43,7 +43,7 @@ CentOS/RHEL6还推出了[一致和可预测的网络设备命名][1]网络接口
![](https://farm4.staticflickr.com/3909/15128981250_72f45633c1_z.jpg)
接下来编辑或创建一个udev的网络命名规则文件/etc/udev/rules.d/70-persistent-net.rules并添加下面一行。更换成你自己的MAC地址和接口。
接下来编辑或创建一个udev的网络命名规则文件/etc/udev/rules.d/70-persistent-net.rules并添加下面一行。更换成你自己的MAC地址08:00:27:a9:7a:e1和接口sushi
$ sudo vi /etc/udev/rules.d/70-persistent-net.rules
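要添加的规则行在原文截图中给出;作为参考,持久化网络命名规则通常形如下面这样(这一行是按照 udev 规则的常见写法给出的示意,并非原文内容,请以你系统上的 udev 手册为准):

```
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="08:00:27:a9:7a:e1", NAME="sushi"
```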
@ -62,7 +62,7 @@ CentOS/RHEL6还推出了[一致和可预测的网络设备命名][1]网络接口
via: http://ask.xmodulo.com/change-network-interface-name-centos7.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,10 +1,10 @@
Linux有问必答——如何为CentOS 7配置静态IP地址
Linux有问必答如何为CentOS 7配置静态IP地址
================================================================================
> **问题**在CentOS 7上我想要将我其中一个网络接口从DHCP改为静态IP地址配置如何才能永久为CentOS或RHEL 7上的网络接口分配静态IP地址
如果你想要为CentOS 7中的某个网络接口设置静态IP地址有几种不同的方法这取决于你是否想要使用网络管理器。
网络管理器是一个动态的网络控制与配置系统,它用于在网络设备可用时保持设备和连接开启并激活。默认情况下CentOS/RHEL 7安装有网络管理器并处于启用状态。
网络管理器Network Manager是一个动态网络的控制器与配置系统它用于当网络设备可用时保持设备和连接开启并激活。默认情况下CentOS/RHEL 7安装有网络管理器并处于启用状态。
使用下面的命令来验证网络管理器服务的状态:
@ -30,7 +30,7 @@ Linux有问必答——如何为CentOS 7配置静态IP地址
![](https://farm4.staticflickr.com/3880/15112184199_f4cbf269a6.jpg)
在上图中“NM_CONTROLLED=no”表示该接口将通过该配置进行设置而不是通过网络管理器进行管理。“ONBOOT=yes”告诉我们系统将在启动时开启该接口。
在上图中“NM_CONTROLLED=no”表示该接口将通过该配置文件进行设置而不是通过网络管理器进行管理。“ONBOOT=yes”告诉我们系统将在启动时开启该接口。
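作为参考,一份典型的静态 IP 配置/etc/sysconfig/network-scripts/ifcfg-enp0s3大致如下这是一份示意配置其中的 IP 地址、网关等均为占位值,请换成你自己网络环境的参数):

```
TYPE=Ethernet
DEVICE=enp0s3
BOOTPROTO=static
IPADDR=192.168.1.25
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
NM_CONTROLLED=no
ONBOOT=yes
```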
保存修改并使用以下命令来重启网络服务:
@ -43,6 +43,7 @@ Linux有问必答——如何为CentOS 7配置静态IP地址
![](https://farm6.staticflickr.com/5593/15112397947_ac69a33fb4_z.jpg)
### 使用网络管理器配置静态IP地址 ###
如果你想要使用网络管理器来管理该接口你可以使用nmtui网络管理器文本用户界面它提供了在终端环境中配置网络管理器的方式。
在使用nmtui之前首先要在/etc/sysconfig/network-scripts/ifcfg-enp0s3中设置“NM_CONTROLLED=yes”。
@ -65,13 +66,13 @@ Linux有问必答——如何为CentOS 7配置静态IP地址
# systemctl restart network.service
好了,现在一切就绪
好了,现在一切都搞定了
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/configure-static-ip-address-centos7.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
Linux有问必答-- 如何在PDF中嵌入LaTex中的所有字体
Linux有问必答如何在PDF中嵌入LaTex中的所有字体
================================================================================
> **提问**: 我通过编译LaTex源文件生成了一份PDF文档。然而我注意到并不是所有字体都嵌入到了PDF文档中。我怎样才能确保所有的字体嵌入在由LaTex生成的PDF文档中
@ -32,7 +32,7 @@ Linux有问必答-- 如何在PDF中嵌入LaTex中的所有字体
via: http://ask.xmodulo.com/embed-all-fonts-pdf-document-latex.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,10 +1,10 @@
如何使用系统定时器
如何使用 systemd 中的定时器
================================================================================
我最近在写一些运行备份的脚本,我决定使用[systemd timers][1]而不是对我而已更熟悉的[cron jobs][2]来管理它们。
我最近在写一些执行备份工作的脚本,我决定使用[systemd timers][1]而不是对我而已更熟悉的[cron jobs][2]来管理它们。
在我使用时,出现了很多问题需要我去各个地方找资料,这个过程非常麻烦。因此,我想要把我目前所做的记录下来,方便自己的记忆,也方便读者不必像我这样,满世界的找资料了。
在我下面提到的步骤中有其他的选择,但是这边是最简单的方法。在此之前,查看**systemd.service**, **systemd.timer**,和**systemd.target**的帮助页面(man),学习你能用它们做些什么。
在我下面提到的步骤中有其他的选择,但是这里是最简单的方法。在此之前,请查看**systemd.service**, **systemd.timer**,和**systemd.target**的帮助页面(man),学习你能用它们做些什么。
### 运行一个简单的脚本 ###
@ -35,9 +35,9 @@ myscript.timer
Description=Runs myscript every hour
[Timer]
# Time to wait after booting before we run first time
# 首次运行要在启动后10分钟后
OnBootSec=10min
# Time between running each consecutive time
# 每次运行间隔时间
OnUnitActiveSec=1h
Unit=myscript.service
@ -48,14 +48,14 @@ myscript.timer
要启用enable并启动start的是timer文件而不是service文件。
# Start timer, as root
# 以 root 身份启动定时器
systemctl start myscript.timer
# Enable timer to start at boot
# 在系统引导起来后就启用该定时器
systemctl enable myscript.timer
### 在同一个Timer上运行多个脚本 ###
现在我们假设你在相同时间想要运行多个脚本。这种情况,你需要在上面的文件中做适当的修改。
现在我们假设你在相同时间想要运行多个脚本。这种情况,**你需要在上面的文件中做适当的修改**
#### Service 文件 ####
@ -64,9 +64,9 @@ myscript.timer
[Install]
WantedBy=mytimer.target
如果在你的service 文件中有一些规则,确保你使用**Description**字段中的值具体化**After=something.service**和**Before=whatever.service**中的参数。
如果在你的service 文件中有一些依赖顺序,确保你使用**Description**字段中的值具体指定**After=something.service**和**Before=whatever.service**中的参数。
另外的一种选择是(或许更加简单),创建一个包装者脚本来使用正确的规则运行合理的命令并在你的service文件中使用这个脚本。
另外的一种选择是(或许更加简单),创建一个包装脚本来使用正确的顺序来运行命令并在你的service文件中使用这个脚本。
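这样一个包装脚本可以非常简单,比如下面这个草图(其中的两个步骤只是占位命令,换成你真实的备份命令即可):

```shell
#!/bin/sh
# 包装脚本草图:按固定顺序执行多个备份步骤,任何一步失败就立即退出
# 把 systemd service 文件里的 ExecStart= 指向这个脚本,即可保证执行顺序
set -e
step1() { echo "step 1: collect files"; }   # 占位命令,换成真实的备份命令
step2() { echo "step 2: upload archive"; }  # 占位命令
step1
step2
echo "backup finished"
```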
#### Timer 文件 ####
@ -97,11 +97,11 @@ Good luck.
--------------------------------------------------------------------------------
via: http://jason.the-graham.com/2013/03/06/how-to-use-systemd-timers/#enable--start-1
via: http://jason.the-graham.com/2013/03/06/how-to-use-systemd-timers/
作者Jason Graham
译者:[译者ID](https://github.com/johnhoow)
校对:[校对者ID](https://github.com/校对者ID)
译者:[johnhoow](https://github.com/johnhoow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,24 +1,17 @@
Git Rebase教程 用Git Rebase让时光倒流
================================================================================
![](https://www.gravatar.com/avatar/7c148ace0d63306091cc79ed9d9e77b4?d=mm&s=200)
Christoph Burgdorf自10岁时就是一名程序员他是HannoverJS Meetup网站的创始人并且一直活跃在AngularJS社区。他对git的里里外外都非常了解因此他在[thoughtram][1]举办工作坊来帮助初学者掌握git技术。
下面的教程最初发表在他的[blog][2]。
----------
### 教程: Git Rebase ###
想象一下你正在开发一个激进的新功能。它将会非常棒,但需要一段时间。这几天,也许是几个星期,你一直在做这个功能。
你的功能分支已经超前master有6个提交了。你是一个优秀的开发人员并做了有意义的语义提交。但有一件事情你开始慢慢意识到这个野兽仍需要更多的时间才能真的做好准备被合并回主分支。
你的功能分支已经超前master有6个提交了。你是一个优秀的开发人员并做了有意义的语义提交。但有一件事情你开始慢慢意识到这个疯狂的东西仍需要更多的时间才能真的做好准备被合并回主分支。
m1-m2-m3-m4 (master)
\
f1-f2-f3-f4-f5-f6(feature)
你也知道的是,一些地方实际上是少耦合的新功能。它们可以更早地合并到主分支。不幸的是,你想将部分合并到主分支的内容存在于你六个提交中的某个地方。更糟糕的是,它也包含了依赖于你的功能分支的之前的提交。有人可能会说,你应该在第一处地方做两次提交,但没有人是完美的。
你也知道的是,一些地方实际上是交叉不大的新功能。它们可以更早地合并到主分支。不幸的是,你想将部分合并到主分支的内容存在于你六个提交中的某个地方。更糟糕的是,它也包含了依赖于你的功能分支的之前的提交。有人可能会说,你应该在第一处地方做两次提交,但没有人是完美的。
m1-m2-m3-m4 (master)
\
@ -39,11 +32,11 @@ Christoph Burgdorf自10岁时就是一名程序员他是HannoverJS Meetup网
在将工作分成两个提交后我们就可以cherry-pick出前面的部分到主分支了。
原来Git自带了一个功能强大的命令git rebase -i ,它可以让我们这样做。它可以让我们改变历史。改变历史可能会产生问题,并作为一个经验法应尽快避免历史与他人共享。在我们的例子中,虽然我们只是改变我们的本地功能分支的历史。没有人会受到伤害。这这么做了!
原来Git自带了一个功能强大的命令git rebase -i ,它可以让我们这样做。它可以让我们改变历史。改变历史可能会产生问题,作为一个经验,应尽快避免历史与他人共享。不过在我们的例子中,我们只是改变我们的本地功能分支的历史。没有人会受到伤害。就这么做了!
好吧让我们来仔细看看f3提交究竟修改了什么。原来我们共修改了两个文件userService.js和wishlistService.js。比方说userService.js的更改可以直接合入主分支而wishlistService.js不能。因为wishlistService.js甚至没有在主分支存在。这根据的是f1提交中的介绍
好吧让我们来仔细看看f3提交究竟修改了什么。原来我们共修改了两个文件userService.js和wishlistService.js。比方说userService.js的更改可以直接合入主分支而wishlistService.js不能。因为wishlistService.js甚至不存在在主分支里面。它是f1提交中引入的
>>专家提示即使是在一个文件中更改git也可以搞定。但这篇博客中我们要让事情变得简单
>>专家提示即使是在一个文件中更改git也可以搞定。但这篇博客中我们先简化情况
我们已经建立了一个[公开演示仓库][3]我们将使用它来练习。为了便于跟踪每一个提交信息都以上面图表中使用的假SHA作为前缀。以下是git在拆分提交f3时的分支图。
@ -51,26 +44,37 @@ Christoph Burgdorf自10岁时就是一名程序员他是HannoverJS Meetup网
现在我们要做的第一件事就是使用git的checkout功能checkout出我们的功能分支。用git rebase -i master开始做rebase。
现在接下来git会用配置的编辑器打开默认为Vim一个临时文件。
现在接下来git会用配置的编辑器打开默认为Vim一个临时文件。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git2.png)
该文件为您提供一些rebase选择它带有一个提示蓝色文字。对于每一个提交我们可以选择的动作有pick、reword、edit、squash、fixup和exec。每一个动作也可以通过它的缩写形式p、r、e、s、f和x引用。描述每一个选项超出了本文范畴所以让我们专注于我们的具体任务。
我们要为f3提交选择编辑选项因此我们把内容改变成这样。
我们要为f3提交选择edit选项因此我们把内容改变成这样。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git3.png)
现在我们保存文件在Vim中是按下<ESC>后输入:wq最后按下回车。接下来我们注意到git在我们选择了edit动作的那个提交处停止了rebase。
这意味着git开始应用f1、f2、f3仿佛它就是常规的rebase但是在f3**之后**停止。事实上,我们可以看一眼停止的地方的日志就可以证明这一点。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git4.jpg)
这意味着git开始将f1、f2、f3依次应用就像常规的rebase一样但是在应用f3**之后**停止。事实上,我们可以看一眼停止的地方的日志就可以证明这一点。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git5.png)
要将f3分成两个提交我们所要做的是重置git的指针到先前的提交f2而保持工作目录和现在一样。这就是git reset的混合模式所做的事。由于混合模式是git reset的默认模式我们可以直接用git reset HEAD~1。就这么做并在运行后用git status看下发生了什么。
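git reset 混合模式的效果可以在一个一次性的小仓库里直观地验证(下面的仓库、提交和文件名都是为演示而造的,与正文的示例仓库无关):

```shell
# 演示git reset HEAD~1混合模式撤销最近一次提交
# 但保留工作目录中的改动,改动回到“未暂存/未跟踪”状态
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=d@e commit -q --allow-empty -m "f2"
echo "updateUser" > userService.js
git add userService.js
git -c user.name=demo -c user.email=d@e commit -q -m "f3"
git reset -q HEAD~1                  # 回退到 f2工作目录保持不变
st=$(git status --short)
echo "$st"                           # 输出: ?? userService.js
```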
git status告诉我们userService.js和wishlistService.js被修改了。如果我们与行git diff 我们就可以看见在f3里面确切地做了哪些更改。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git6.png)
git status告诉我们userService.js和wishlistService.js被修改了。如果我们运行 git diff 我们就可以看见在f3里面确切地做了哪些更改。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git7.png)
如果我们看一眼日志我们会发现f3已经消失了。
现在我们有了准备提交的先前的f3提交而原先的f3提交已经消失了。记住虽然我们仍旧在rebase的中间过程。我们的f4、f5、f6提交还没有缺失它们会在接下来回来。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git8.png)
现在我们有了准备提交的先前的f3提交而原先的f3提交已经消失了。记住虽然我们仍旧在rebase的中间过程。我们的f4、f5、f6提交还没有缺失它们会在接下来回来。
让我们创建两个新的提交首先让我们为可以提交到主分支的userService.js创建一个提交。运行git add userService.js 接着运行 git commit -m "f3a: add updateUser method"。
@ -78,27 +82,41 @@ git status告诉我们userService.js和wishlistService.js被修改了。如果
让我们在看一眼日志。
这就是我们想要的除了f4、f5、f6仍旧缺失。这是因为我们仍在rebase交互的中间我们需要告诉git继续rebase。用下面的命令继续git rebase --continue。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git9.png)
这就是我们想要的除了f4、f5、f6仍旧缺失。这是因为我们仍在rebase交互的中间我们需要告诉git继续rebase。用下面的命令继续git rebase --continue。
让我们再次检查一下日志。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git10.png)
就是这样。我们现在已经得到我们想要的历史了。先前的f3提交现在已经被分割成两个提交f3a和f3b。剩下的最后一件事是cherry-pick出f3a提交到主分支上。
为了完成最后一步我们首先切换到主分支。我们用git checkout master。现在我们就可以用cherry-pick命令来拾取f3a commit了。本例中我们可以用它的SHA值bd47ee1来引用它。
现在f3a这个提交i就在主分支的最上面了。这就是我们需要的
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git11.png)
现在f3a这个提交就在主分支的最上面了。这就是我们需要的
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git12.png)
这篇文章的长度让这个过程看起来需要花费很大的功夫,但实际上对于一个git高级用户而言这只是一会儿的功夫。
>注Christoph目前正在与Pascal Precht写一本关于[Git rebase][4]的书您可以在leanpub订阅它并在准备出版时获得通知。
![](https://www.gravatar.com/avatar/7c148ace0d63306091cc79ed9d9e77b4?d=mm&s=200)
本文作者 Christoph Burgdorf自10岁时就是一名程序员他是HannoverJS Meetup网站的创始人并且一直活跃在AngularJS社区。他对git的里里外外都非常了解因此他在[thoughtram][1]举办工作坊来帮助初学者掌握git技术。
本篇教程最初发表在他的[blog][2]上。
--------------------------------------------------------------------------------
via: https://www.codementor.io/git-tutorial/git-rebase-split-old-commit-master
作者:[cburgdorf][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,91 @@
学习VIM之2014
================================================================================
作为一名开发者,你不应该把时间花费在考虑如何去找你所要编辑的代码上。在我转移到完全使用 VIM 的过程中,感到最痛苦的就是它处理文件的方式。从之前主要使用 Eclipse 和 Sublime Text 过渡到 VIM它没有捆绑一个常驻的文件系统查看器对我造成了不少阻碍而其内建的打开和切换文件的方式总是让我泪流满面。
就这一点而言我现在非常欣赏vim文件管理功能的深度。我已经配置出一套比那些可视化编辑器还要好用的工作环境。因为这是纯键盘操作可以让我更快地在代码里穿梭。搭建这样的环境需要花费一些时间还要安装几个插件。首先第一步是要明白vim的内建功能只是处理文件的多种方式之一。在这篇文章里我会带你认识vim的文件管理功能与一些更高级插件的用法。
### 基础篇:打开新文件 ###
学习vim最大的障碍之一是缺少可视提示不像现在的GUI图形编辑器当你在终端打开一个新的vim时没有任何明显的提示告诉你该做什么所有操作都靠键盘完成也没有多少友好的界面交互vim新手需要习惯靠自己去查找一些基本的操作指令。好吧让我们开始学习基础吧。
创建新文件的命令是**:e <filename>**或**:e**它会打开一个新缓冲区来保存文件内容。如果文件不存在它会开辟一个缓冲区去保存与修改你指定的文件。缓冲区是vim的术语意为“保存在内存中的文本块”。缓冲区中的文本不一定关联到磁盘上已有的文件,但你打开的每一个文件都各自对应一个缓冲区。
打开文件与修改文件之后,你可以使用**:w**命令来保存在缓冲区的文件内容到文件里面,如果缓冲区不能关联你的文件或者你想保存到另外一个地方,你需要使用**:w <filename>**来保存指定地方。
这些是vim处理文件的基本知识很多开发者掌握的也就是这些了。这些技巧你都需要掌握但对于愿意深挖的人vim还提供了多得多的东西。
### 缓冲区管理 ###
基础掌握了就让我来说更多关于缓冲区的东西。vim处理打开的文件与其他编辑器有一点不同打开的文件不会作为标签留在一个可见的地方而是同一时刻只在窗口中显示一个文件但vim允许你同时打开多个缓冲区其中一些显示出来另外一些则不显示。你需要用**:ls**来查看已经打开的缓冲区,这个命令会显示每个打开的缓冲区及其序号,你可以通过这些序号使用**:b <buffer-number>**来切换,或者使用顺序移动命令 **:bnext** 和 **:bprevious** 也可以使用它们的缩写**:bn**和**:bp**。
这些命令是vim管理文件缓冲区的基础但我发现它们和我的思维方式对不上。我不想关心缓冲区的顺序我只想直接去到某个文件或者留在当前文件里。因此尽管了解vim内建的缓冲区模型很有必要我并不推荐把这些内部命令作为主要的文件管理方案。但它们的确是强大可行的选择。
![](http://benmccormick.org/content/images/2014/Jul/skitch.jpeg)
### 分屏 ###
分屏是vim最好用的文件管理功能之一。在vim中你可以将当前窗口分割为2个窗口之后可以按照你喜欢的布局去调整它们的大小和位置。有时我会同时打开6个文件每个分屏各有不同的大小。
你可以通过命令**:sp <filename>**来新建水平分割窗口,或者用 **:vs <filename>**新建垂直分割窗口。你可以使用这些组合键去把窗口调整到你想要的大小老实说我喜欢用鼠标来处理这个任务因为鼠标能够给我更加精确的宽度而不需要去猜大概的宽度。
创建新的分屏后,你需要使用**ctrl-w [h|j|k|l]**来在各个分屏之间来回切换。这有一点笨拙,但这个操作很重要、很常用、很容易,也很高效。如果你经常使用分屏,我建议你在你的.vimrc中用以下代码把切换键简化为**ctrl-h**、**ctrl-j** 等等。
nnoremap <C-J> <C-W><C-J> "Ctrl-j to move down a split
nnoremap <C-K> <C-W><C-K> "Ctrl-k to move up a split
nnoremap <C-L> <C-W><C-L> "Ctrl-l to move right a split
nnoremap <C-H> <C-W><C-H> "Ctrl-h to move left a split
### 跳转表 ###
分屏很好地解决了同时查看多个关联文件的问题,但我们仍然未能解决在已打开文件与后台隐藏文件之间快速移动的问题。这时,跳转表就是一个能够解决问题的工具。
跳转表是vim那些乍看起来奇怪而且很少被使用的功能之一。vim能够追踪你的每一次移动命令以及你在文件之间的切换。每次从一个分屏窗口跳到另外一个或者打开一个新文件vim都会添加记录到跳转表里面。它记录了你去过的所有地方这样你就不需要费心记住之前的文件在哪里可以使用快捷键快速追溯你的踪迹。**Ctrl-o**允许你返回上一个去过的地方。重复按几次就能够回到你最先编写的代码处。你可以使用**ctrl-i**再向前跳回去。当你在调试多个文件或在两个文件之间切换时,它能让你移动得飞快。
### 插件 ###
如果你希望vim能像Sublime Text 或者Atom一样管理文件那我先得承认你很可能会觉得vim的某些做法难懂、可怕而且低效。例如大家会问“Sublime有了模糊查找功能为什么我一定要输入全路径才能够打开文件”“没有侧边栏显示目录树我怎样查看项目结构”等等。但vim也有解决方案而且这些方案不需要破坏vim的核心。我只是经常修改vim配置并添加一些新的插件这里有3个有用的插件可以让你像Sublime一样管理文件
- [CtrlP][1] 是一个跟Sublime的“Go to Anything”栏一样的模糊查找工具。它快如闪电并且可配置性非常强。我主要用它来打开文件只需知道部分的文件名不需要记住整个项目结构就可以查找了。
- [The NERDTree][2] 是一个文件管理器插件它复刻了很多编辑器都有的侧边栏文件管理器功能。实际上我很少用它对我而言模糊查找总是更快但当你接手一个项目想学习项目结构、了解有哪些文件可用时它非常方便。NERDTree可高度定制安装后能够代替vim内置的目录浏览工具。
- [Ack.vim][3] 是一款专为vim打造的代码搜索插件它允许你跨项目搜索文本。它封装了Ack 或 Ag 这[两个极其好用的搜索工具][4],让你在任何时候都能在项目里快速搜索、跳转。
靠着vim核心加上它的插件生态系统vim提供了足够的工具让你构建出想要的工作流程。文件管理是软件开发的核心部分值得你花时间把它打磨得得心应手。
开始时需要花不少时间去理解它们,然后在找到你感觉舒服的工作流程之后,再逐步往上面添加工具。但这依然值得,你不必伤透脑筋就能理解如何使用,从而轻松自如地处理你的代码。
### 更多插件资源 ###
- [Seamlessly Navigate Vim & Tmux Splits][5] 对于[tmux][6]用户来说这个插件是必备的它让你在tmux的面板与vim的分屏之间切换时与在vim内部切换分屏一样顺畅。
- [Using Tab Pages][7] 介绍的是vim的标签页功能。虽然“tab pages”这个名字有一点令人疑惑但它并不是文件管理器。vim wiki上的这篇文章很好地概述了如何用“tab pages”管理多个工作区视图。
- [Vimcasts: The edit command][8] 总的来说Vimcasts 是学习vim的一个好资源。这集截屏视频很好地讲解了之前提到的文件操作知识以及一些实用的工作流程。
--------------------------------------------------------------------------------
via: http://benmccormick.org/2014/07/07/learning-vim-in-2014-working-with-files/
作者:[Ben McCormick][a]
译者:[haimingfg](https://github.com/haimingfg)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://benmccormick.org/2014/07/07/learning-vim-in-2014-working-with-files/
[1]:https://github.com/kien/ctrlp.vim
[2]:https://github.com/scrooloose/nerdtree
[3]:https://github.com/mileszs/ack.vim
[4]:http://benmccormick.org/2013/11/25/a-look-at-ack/
[5]:http://robots.thoughtbot.com/seamlessly-navigate-vim-and-tmux-splits
[6]:http://tmux.sourceforge.net/
[7]:http://vim.wikia.com/wiki/Using_tab_pages
[8]:http://vimcasts.org/episodes/the-edit-command/
[9]:http://feedpress.me/benmccormick
[10]:http://eepurl.com/WFYon
[11]:http://benmccormick.org/2014/07/14/learning-vim-in-2014-configuring-vim/
[12]:http://benmccormick.org/2014/06/30/learning-vim-in-2014-the-basics/
[13]:http://benmccormick.org/2014/07/02/learning-vim-in-2014-vim-as-language/

View File

@ -0,0 +1,120 @@
使用 GIT 备份 linux 上的网页文件
================================================================================
![](http://techarena51.com/wp-content/uploads/2014/09/git_logo-1024x480-580x271.png)
BUP 并不单纯是 Git, 而是一款基于 Git 的软件. 一般情况下, 我使用 rsync 来备份我的文件, 而且迄今为止一直工作的很好. 唯一的不足就是无法把文件恢复到某个特定的时间点. 因此, 我开始寻找替代品, 结果发现了 BUP, 一款基于 git 的软件, 它将数据存储在一个仓库中, 并且有将数据恢复到特定时间点的选项.
要使用 BUP, 你先要初始化一个空的仓库, 然后备份所有文件. 当 BUP 完成一次备份时, 它会创建一个还原点, 你可以过后还原到这里. 它还会创建所有文件的索引, 包括文件的属性和校验和. 当要进行下一个备份时, BUP 会对比文件的属性和校验和, 只保存发生变化的数据. 这样可以节省很多空间.
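“只保存发生变化的数据”这个思路, 可以用一个与 bup 无关的校验和小实验来粗略示意 (bup 实际使用的是基于滚动校验的分块算法, 这里只演示“校验和没变就无需重新保存”的判断逻辑):

```shell
# 纯演示: 用 SHA-1 校验和判断文件内容是否发生了变化
f=$(mktemp)
echo "version 1" > "$f"
sum_before=$(sha1sum "$f" | cut -d' ' -f1)
sum_same=$(sha1sum "$f" | cut -d' ' -f1)     # 内容未变 -> 校验和相同
echo "version 2" > "$f"
sum_after=$(sha1sum "$f" | cut -d' ' -f1)    # 内容已变 -> 校验和不同
[ "$sum_before" = "$sum_same" ] && echo "unchanged: nothing to save"
[ "$sum_before" != "$sum_after" ] && echo "changed: save new data"
rm -f "$f"
```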
### 安装 BUP (在 Centos 6 & 7 上测试通过) ###
首先确保你已经安装了 RPMFORGE 和 EPEL 仓库
[techarena51@vps ~]$ sudo yum groupinstall "Development Tools"
[techarena51@vps ~]$ sudo yum install python python-devel
[techarena51@vps ~]$ sudo yum install fuse-python pyxattr pylibacl
[techarena51@vps ~]$ sudo yum install perl-Time-HiRes
[techarena51@vps ~]$ git clone git://github.com/bup/bup
[techarena51@vps ~]$ cd bup
[techarena51@vps ~]$ make
[techarena51@vps ~]$ make test
[techarena51@vps ~]$ sudo make install
对于 debian/ubuntu 用户, 你可以使用 "apt-get build-dep bup". 要获得更多的信息, 可以查看 https://github.com/bup/bup
在 CentOS 7 上, 当你运行 "make test" 时可能会出错, 但你可以继续运行 "make install".
第一步是初始化一个空的仓库, 就像 git 一样.
[techarena51@vps ~]$ bup init
默认情况下, bup 会把仓库存储在 "~/.bup" 中, 但你可以通过设置环境变量 "export BUP_DIR=/mnt/user/bup" 来改变设置.
然后, 创建所有文件的索引. 这个索引, 就像之前讲过的那样, 存储了一系列文件和它们的属性及 git 对象 id (sha1 哈希). (属性包括了软链接, 权限和不可变位)
bup index /path/to/file
bup save -n nameofbackup /path/to/file
#Example
[techarena51@vps ~]$ bup index /var/www/html
Indexing: 7973, done (4398 paths/s).
bup: merging indexes (7980/7980), done.
[techarena51@vps ~]$ bup save -n techarena51 /var/www/html
Reading index: 28, done.
Saving: 100.00% (4/4k, 28/28 files), done.
bloom: adding 1 file (7 objects).
Receiving index from server: 1268/1268, done.
bloom: adding 1 file (7 objects).
"BUP save" 会把所有内容分块, 然后把它们作为对象储存. "-n" 选项指定备份名.
你可以查看备份列表和已备份文件.
[techarena51@vps ~]$ bup ls
local-etc techarena51 test
#Check for a list of backups available for my site
[techarena51@vps ~]$ bup ls techarena51
2014-09-24-064416 2014-09-24-071814 latest
#Check for the files available in these backups
[techarena51@vps ~]$ bup ls techarena51/2014-09-24-064416/var/www/html
apc.php techarena51.com wp-config-sample.php wp-load.php
在同一个服务器上备份文件从来不是一个好的选择. BUP 允许你远程备份网页文件, 但你必须保证你的 SSH 密钥和 BUP 都已经安装在远程服务器上.
bup index path/to/dir
bup save-r remote-vps.com -n backupname path/to/dir
### 例子: 备份 "/var/www/html" 文件夹 ###
[techarena51@vps ~]$bup index /var/www/html
[techarena51@vps ~]$ bup save -r user@remotelinuxvps.com: -n techarena51 /var/www/html
Reading index: 28, done.
Saving: 100.00% (4/4k, 28/28 files), done.
bloom: adding 1 file (7 objects).
Receiving index from server: 1268/1268, done.
bloom: adding 1 file (7 objects).
### 恢复备份 ###
登入远程服务器并输入下面的命令
[techarena51@vps ~]$bup restore -C ./backup techarena51/latest
#Restore an older version of the entire working dir elsewhere
[techarena51@vps ~]$bup restore -C /tmp/bup-out /testrepo/2013-09-29-195827
#Restore one individual file from an old backup
[techarena51@vps ~]$bup restore -C /tmp/bup-out /testrepo/2013-09-29-201328/root/testbup/binfile1.bin
唯一的缺点是你不能把文件恢复到另一个服务器, 你必须通过 SCP 或者 rsync 手动复制文件.
通过集成的 web 服务器查看备份.
bup web
#specific port
bup web :8181
你可以使用 shell 脚本来运行 bup, 并建立一个每日运行的定时任务.
#!/bin/bash
bup index /var/www/html
bup save -r user@remote-vps.com: -n techarena51 /var/www/html
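假设把上面的脚本保存为 /home/techarena51/bup-backup.sh 并赋予了可执行权限 (脚本路径和日志路径均为示意, 需要按你的实际情况修改), 下面是一个假设性的 crontab 配置片段, 用于每天自动运行备份:

```
# 每天凌晨 2 点运行备份脚本, 输出追加到日志文件
0 2 * * * /home/techarena51/bup-backup.sh >> /var/log/bup-backup.log 2>&1
```

可以用 "crontab -e" 把这一行添加到当前用户的计划任务中.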
BUP 并不完美, 但它的确能够很好地完成任务. 我当然非常愿意看到这个项目的进一步开发, 希望以后能够增加远程恢复的功能.
你也许喜欢阅读这篇——使用[inotify-tools][1]实时文件同步.
--------------------------------------------------------------------------------
via: http://techarena51.com/index.php/using-git-backup-website-files-on-linux/
作者:[Leo G][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://techarena51.com/
[1]:http://techarena51.com/index.php/inotify-tools-example/


@ -0,0 +1,41 @@
Adobe从网站上撤下了Linux PDF Reader的下载链接
================================================================================
<center>![Linux上的其他PDF解决方案](http://www.omgubuntu.co.uk/wp-content/uploads/2012/07/test-pdf.jpg)</center>
**由于 Adobe 公司从网站上撤下了软件的下载链接因此对于任何需要在Linux上使用这家公司的PDF阅读器的人而言这有些麻烦了。**
[Reddit 上的一个用户][1]发帖说,当他去 Adobe 网站上去下载该软件时Linux并没有列在[支持的操作系统][2]里。
不知道什么时候更不知道为什么Linux版本被删除了不过第一次被发现是在八月份。
这也并没有让人太惊讶。Adobe Reader 官方的Linux版本在2013年5月才更新而且当时还在滞后的版本9.5.x上而Windows和Mac版已经在v11.x。
### 谁在意呢?无所谓 ###
这是一个巨大的损失么你可能并不会这么想。毕竟Adobe Reader是一款名声不好的app。速度慢占用资源而且体积臃肿。而原生的PDF阅读app像Evince和Okular提供了一流的体验而没有上面的那些缺点。
玩笑归玩笑,这一决定确实会影响一些事情。一些政府网站提供的官方文档和表格,只能使用官方的 Adobe 应用才能填写或者提交。
Adobe冷落Linux这事并不鲜见。该公司在2012年[停止了Linux上flash版本的更新][3]把它留给Google去做[并且此前从它们的跨平台运行时环境“Air”中踢开了Linux用户][4]。
不过并没有失去一切。虽然网站上不再提供链接了但在Adobe的FTP服务器上仍然可以找到Debian的安装程序。打算使用这个老版本的用户需要自己承担风险并且不会得到来自Adobe的支持。同样要注意这些版本可能还有没有修复的漏洞。
- [下载Ubuntu版本的 Adobe Reader 9.5.5][5]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/10/adobe-reader-linux-download-pulled-website
作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://www.reddit.com/r/linux/comments/2hsgq6/linux_version_of_adobe_reader_no_longer/
[2]:http://get.adobe.com/reader/otherversions/
[3]:http://www.omgubuntu.co.uk/2012/02/adobe-adandons-flash-on-linux
[4]:http://www.omgubuntu.co.uk/2011/06/adobe-air-for-linux-axed
[5]:ftp://ftp.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/AdbeRdr9.5.5-1_i386linux_enu.deb


@ -0,0 +1,57 @@
Linux日历程序California 0.2 发布了
================================================================================
**随着[上月的Geary和Shotwell的更新][1]非盈利软件组织Yorba又回来了这次带来的是新的[California][2]日历程序。**
一个合格的桌面日历是工作井井有条(以及想要井井有条)的人的必备工具。[Chrome Web Store上广受欢迎的Sunrise应用][3]的发布让我们的选择比以前更丰富了而California又为之增添了一名新的生力军。
Yorba的Jim Nelson在Yorba博客上写道“发生了很多变化”接着写道“……很高兴地告诉大家,这个初始版本加入了比我预想更多的特性。”
![California 0.2 Looks Great on GNOME](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/california-point-2.jpg)
*California 0.2在GNOME上看上去棒极了。*
最突出的变化是添加了“自然语言”解析器。这使得添加事件更容易。你可以直接输入“**在下午2点就Nachos会见Sam**”接着California就会自动把它安排在接下来的星期一的下午两点而不必你手动输入各项信息日期、时间等等LCTT 译注:显然你只能输入英文才行)
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/05/Screen-Shot-2014-05-15-at-21.26.20.png)
这个功能和我们在5月份评估开发版本时一样好用甚至还修复了一个重复事件方面的bug。
要创建一个重复事件比如“每个星期四搜索自己的名字”你需要在日期前包含文字“every”每个。要确保地点也在内比如“中午12点和Samba De Amigo在Boston Tea Party喝咖啡”条目中需要有“at”或者“@”。
至于详细信息,我们可以见[GNOME Wiki上的快速添加页面][4]
其他的改变包括:
- 以‘月’和‘周’视图查看事件
- 添加/删除 Google、CalDAV 和 web.ics 日历
- 改进数据服务器整合
- 添加/编辑/删除远程事件(包括重复事件)
- 用自然语言安排计划
- 按下F1获取在线帮助
- 新的动画和弹出窗口
### 在Ubuntu 14.10上安装 California 0.2 ###
作为一个GNOME 3的程序它在 Gnome 3下运行的外观和体验会更好。
不过Yorba也没有忽略Ubuntu用户。他们已经努力也可以说是耐心地解决了Ubuntu需要同时安装GTK+和GNOME主题的问题。结果就是在Ubuntu上该程序可能看上去有点错位但是同样工作得很好。
California 0.2在[Yorba稳定版软件PPA][5]中可以下载只用于Ubuntu 14.10。
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/10/california-calendar-natural-language-parser
作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/09/new-shotwell-geary-stable-release-available-to-downed
[2]:https://wiki.gnome.org/Apps/California
[3]:http://www.omgchrome.com/sunrise-calendar-app-for-google-chrome/
[4]:https://wiki.gnome.org/Apps/California/HowToUseQuickAdd
[5]:https://launchpad.net/~yorba/+archive/ubuntu/ppa?field.series_filter=utopic


@ -0,0 +1,54 @@
Linux Kernel 3.17 带来了很多新特性
================================================================================
Linus Torvalds已经发布了最新的稳定版内核3.17。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2011/07/Tux-psd3894.jpg)
Torvalds以他典型的[放任式][1]的口吻在Linux内核邮件列表中解释说
> “过去的一周很平静我对3.17的如期发布没有疑虑(相对于乐观的“我应该早一周发布么”的计划而言)。”
由于即将外出Linus 说他还没有开始合并3.18的改变:
>“我马上要去旅行了- 在我期盼早点发布的时候我希望避免一些事情。这意味着在3.17发布后我不会在下周非常活跃地合并新的东西并且下下周是LinuxCon EU”
### Linux 3.17有哪些新的? ###
最新版本的 Linux 3.17 加入了最新的改进,硬件支持,修复等等。范围从不明觉厉的 - 比如:[memfd 和 文件密封补丁][2] - 到大多数人感兴趣的,比如最新硬件的支持。
下面是这次发布的一些亮点的列表,但它们并不详尽:
- Microsoft Xbox One 控制器支持 (没有震动反馈)
- 额外的Sony SIXAXIS支持改进
- 东芝 “主动防护感应器” 支持
- 新的包括Rockchip RK3288和AllWinner A23 SoC的ARM芯片支持
- 安全计算设备上的“跨线程过滤设置”
- 基于Broadcom BCM7XXX板卡的支持用在不同的机顶盒上
- 增强的AMD Radeon R9 290支持
- Nouveau 驱动改进包括Kepler GPU修复
- 包含Intel Broadwell超级本上的Wildcatpoint Audio DSP音频支持
### 在Ubuntu上安装 Linux 3.17 ###
虽然被列为稳定版,但是对于大多数人而言,目前其中很少有什么功能是需要我们“现在就去安装”的。
但是如果你很耐心——**更重要的是**——有足够的技能去处理由此导致的问题那么你可以在由Canonical维护的主线内核存档中找到一系列合适的包把它们安装到你的Ubuntu 14.10中即可升级到Linux 3.17。
**警告:除非你知道你正在做什么,不要尝试从下面的链接中安装任何东西。**
- [访问Ubuntu内核主线存档][3]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/10/linux-kernel-3-17-whats-new-improved
作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://lkml.iu.edu/hypermail/linux/kernel/1410.0/02818.html
[2]:http://lwn.net/Articles/607627/
[3]:http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=N;O=D


@ -0,0 +1,37 @@
Ubuntu Unity 4岁了生日快乐
================================================================================
> Unity桌面环境最初是在Ubuntu 10.10 Netbook Remix版本中加入的该版本现已停止开发。
**Canonical开发者以及Ubuntu社区这些天有一个很好的理由来庆祝因为Unity桌面环境已经4岁了**
Unity 作为Ubuntu的默认桌面环境已经有4年了虽然当初它并不是用在该发行版的桌面版本上。它首次用于Ubuntu Netbook Remix这是专为上网本设计的版本。实际上Ubuntu Netbook Remix 10.10Maverick Meerkat是第一个采用Unity桌面的版本。
常规的Ubuntu 10.10 发行版桌面仍旧使用GNOME 2.x这也是为什么有用户说10.10 仍是Canonical做的最好的版本。
### Unity 是没人想要的替代品 ###
Canonical决定用他们自己的软件替代GNOME 2.x桌面环境但是它的设计对用户而言很陌生。一些人喜欢它但是许多人并不这样认为而且时不时还有用户在决定放弃Ubuntu时提到这一点。
Unity在设计视角上和GNOME不同但是Ubuntu开发者并没有替换GNOME所有的包而且还保留了很多直到现在仍旧如此。之前不喜欢Unity方向的Ubuntu粉丝一定对GNOME 2.x被很快抛弃、且被完全不同的、同样引发相同质疑的GNOME 3.0替换感到很失望。
### 为什么Unity替换GNOME ###
回到还在Ubuntu 10.10 的时光Canonical和GNOME团队习惯于非常紧密地一起工作但是事情在Ubuntu变得越来越流行后发生了改变。其中一个驱使Canonical构建Unity的理由是GNOME团队不再和他们一致了。
用户在抱怨GNOME的问题或者想要某些特定的功能时Ubuntu团队会向上游提交一些补丁。而GNOME团队可能不接受这些补丁或者会花很长的时间去实现。与此同时Canonical和Ubuntu因这些他们不能马上解决的问题受到了很多的批评但是用户并不知道其中的缘由。
因此一个与GNOME捆绑不那么紧的桌面环境的需求变得非常清晰了。Unity最终在Ubuntu 10.10中引入正式的发布日期是2010年10月10日所以Unity已经4岁了。
Unity还没有被整个社区拥抱虽然已经有很多用户接受了它认为它是一个有用、且可以用于生产环境的桌面环境。虽然桌面的大修已经逾期了很久且势必会在一两年内完成但是它在每个新的发行版后都获得了更多的支持和使用。
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Ubuntu-s-Unity-Turns-4-Happy-Birthday--461840.shtml
作者:[Silviu Stahie][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie


@ -0,0 +1,60 @@
Linux 上一些很好用的文献目录管理工具
================================================================================
如果你曾写过那种长得让你觉得永远看不到结尾的文章,那么你一定明白,最糟糕的不是你投入了多少时间,而是在完成之后,你仍然要整理并格式化你所引用的参考文献。很幸运的是Linux 有很多的解决方案:参考书目/文献管理工具。借助BibTex的力量这些工具可以帮你导入引用源并自动生成结构化的文献目录。这里给大家提供一个Linux上参考文献管理工具的不完全列表。
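顺带一提BibTeX 本质上就是一种纯文本的文献数据库格式。一个典型的条目大致如下(其中的文献信息纯属虚构,仅作格式示意):

```
@article{doe2014example,
  author  = {Doe, John and Smith, Jane},
  title   = {An Example Article Title},
  journal = {Journal of Hypothetical Results},
  year    = {2014},
  volume  = {42},
  pages   = {1--10}
}
```

本文介绍的这些工具,导入和导出的基本上就是这种格式的条目。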
### 1. Zotero ###
![](https://farm4.staticflickr.com/3936/15492092282_f1c8446624_b.jpg)
这应该是最著名的参考文献收集工具了。[Zotero][1] 以浏览器扩展插件的形式出现当然它也有方便的Linux 独立客户端。凭借强大的功能Zotero 很容易上手也可以和LibreOffice 或者其他的文本编辑器配套使用来管理文档的参考文献。我个人很欣赏它的操作界面和插件管理器。可惜的是,如果你对参考文献有很多不同的需求的话,很快就会发现 Zotero 功能有限。
### 2. JabRef ###
![](https://farm4.staticflickr.com/3936/15305799248_d27685aca9_b.jpg)
[JabRef][2] 是最先进的文献管理工具之一。你可以导入大量格式的文献可以在外部数据库里查找相应的条目如Google Scholar并且能直接输出到你喜欢的编辑器。JabRef 可以很好地融入你的运行环境甚至还支持插件。最后一点JabRef可以连接你自己的SQL 数据库。唯一的缺点是它学习起来有些难度。
### 3. KBibTex ###
![](https://farm4.staticflickr.com/3931/15492453775_c1e57f869f_c.jpg)
对于 KDE 使用者,这个桌面环境也拥有它自己专有的文献管理工具:[KBibTex][3]。这个程序的品质正如你所期望程序可高度定制通过快捷键就可以很好地操作和体验。你可以很容易找到重复条目、可以预览结果、也可以直接输出到LaTex 编辑器。而我认为这款软件最大的特色在于它集成了Bibsonomy、Google Scholar 甚至是你的Zotero账号。唯一的缺憾是界面看起来实在是有点乱。多花点时间设置软件可以让你使用起来得心应手。
### 4. Bibfilex ###
![](https://farm4.staticflickr.com/3930/15492453795_f5ec82f5ff_c.jpg)
可以运行在Gtk 和Qt 环境中的[Bibfilex][4]是一个基于 Biblatex 的界面友好的工具。相对于JabRef 和KBibTex 它缺少了一些高级的功能,但这也让它更加快速和轻巧。不用想太多,这绝对是快速制作文献目录的一个聪明选择。它的界面很舒服,只保留了必要的功能。它的完整使用手册可以从官方的[下载页面][5]获得。
### 5. Pybliographer ###
![](https://farm4.staticflickr.com/3929/15305749810_541b4926bd_o.jpg)
正如它的名字一样,[Pybliographer][6]是一个用 Python 写的非图形化的文献目录管理工具。我个人比较喜欢配合它的图形化前端 Pybliographic 一起使用。它的界面极其简洁和抽象。如果你仅仅需要输出少数的参考文献,而且也确实没有时间去学习更多的工具软件,那么 Pybliographer 确实是一个不错的选择。和 Bibfilex 有点像,它是以让用户方便、快速地使用为目标的。
### 6. Referencer ###
![](https://farm4.staticflickr.com/3949/15305749790_2d3311b169_b.jpg)
这应该是我在整理这份列表时遇到的最大惊喜,[Referencer][7] 确实让人眼前一亮。它完美兼容 Gnome ,可以查找和导入你的文档,然后在网上查询它们的参考文献,并且输出到 LyX ,非常漂亮而且设计良好。为数不多的几个快捷键和插件让它用起来颇有文献库的风格。
总的来说,很感谢这些工具软件,有了它们,你就可以不用再担心长长的文章了,至少是不用再担心参考文献的部分了。那么我们还有什么遗漏的吗?是否还有其他的文献管理工具你很喜欢?请在评论里告诉我们。
--------------------------------------------------------------------------------
via: http://xmodulo.com/reference-management-software-linux.html
作者:[Adrien Brochard][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:https://www.zotero.org/
[2]:http://jabref.sourceforge.net/
[3]:http://home.gna.org/kbibtex/
[4]:https://sites.google.com/site/bibfilex/
[5]:https://sites.google.com/site/bibfilex/download
[6]:http://pybliographer.org/
[7]:https://launchpad.net/referencer


@ -0,0 +1,74 @@
Linux有问必答如何检测并修复bash中的破壳漏洞
================================================================================
> **问题**我想要知道我的Linux服务器是否存在bash破壳漏洞以及如何来保护我的Linux服务器不受破壳漏洞侵袭。
2014年9月24日一位名叫斯特凡·沙泽拉Stephane Chazelas的安全研究者发现了一个名为“破壳”Shellshock也称为“bash门”或“Bash漏洞”的bash漏洞。该漏洞如果被利用远程攻击者就可以在调用shell之前通过特别编制的环境变量中的函数定义来执行任意代码这些函数内的代码会在bash被调用时立即执行。
注意破壳漏洞影响到bash版本1.14到4.3当前版本。虽然在写本文时还没有该漏洞权威而完整的修复方案也尽管主要的Linux发行版[Debian][1][Red Hat][2][CentOS][3][Ubuntu][4]和 [Novell/Suse][5])已经发布了用于部分解决与此漏洞相关的补丁([CVE-2014-6271][6]和[CVE-2014-7169][7]并且建议尽快更新bash并在随后数日内检查更新LCTT 译注,可能你看到这篇文章的时候,已经有了完善的解决方案)。
### 检测破壳漏洞 ###
要检查你的Linux系统是否存在破壳漏洞请在终端中输入以下命令。
$ env x='() { :;}; echo "Your bash version is vulnerable"' bash -c "echo This is a test"
如果你的Linux系统已经暴露给了破壳漏洞渗透命令输出会像这样
Your bash version is vulnerable
This is a test
在上面的命令中一个名为x的环境变量被设置到了用户环境中。如我们所见它的内容并不是一个普通的值而是一个空函数定义后面跟了一个任意命令该命令会在bash被调用时执行。
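如果需要在多台服务器上批量检查,可以把上面的检测原理包装成一个小脚本。下面只是一个示意性的写法(脚本中的输出文字是笔者假设的,并非官方工具):

```shell
#!/bin/bash
# 示意脚本:检测当前 bash 是否受破壳漏洞CVE-2014-6271影响
# 原理:把“函数定义 + 额外命令”放进环境变量,再调用一次 bash
# 如果额外的 echo 命令被执行,说明该 bash 存在漏洞
out=$(env x='() { :;}; echo vulnerable' bash -c ":" 2>/dev/null)
if [ "$out" = "vulnerable" ]; then
    echo "此 bash 存在破壳漏洞,请立即打补丁"
else
    echo "此 bash 看起来已经打过补丁"
fi
```

把它保存为脚本后,便可以配合 ssh 在每台服务器上远程执行一次。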
### 为破壳漏洞应用修复 ###
你可以按照以下方法安装新发布的bash补丁。
在Debian及其衍生版上
# aptitude update && aptitude safe-upgrade bash
在基于Red Hat的发行版上
# yum update bash
#### 打补丁之前: ####
Debian
![](https://farm4.staticflickr.com/3903/15342893796_0c3c61aa33_z.jpg)
CentOS
![](https://farm3.staticflickr.com/2949/15362738261_99fa409e8b_z.jpg)
#### 打补丁之后: ####
Debian:
![](https://farm3.staticflickr.com/2944/15179388727_bdb8a09d62_z.jpg)
CentOS:
![](https://farm4.staticflickr.com/3884/15179149029_3219ce56ea_z.jpg)
注意在安装补丁前后各个发行版中的bash版本没有发生变化——但是你可以通过从更新命令的运行过程中看到该补丁已经被安装很可能在安装前需要你确认
如果出于某种原因你不能安装该补丁,或者针对你的发行版的补丁还没有发布,那么建议你先换用另外一个shell直到修复补丁出现。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/detect-patch-shellshock-vulnerability-bash.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://www.debian.org/security/2014/dsa-3032
[2]:https://access.redhat.com/articles/1200223
[3]:http://centosnow.blogspot.com.ar/2014/09/critical-bash-updates-for-centos-5.html
[4]:http://www.ubuntu.com/usn/usn-2362-1/
[5]:http://support.novell.com/security/cve/CVE-2014-6271.html
[6]:http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6271
[7]:http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-7169


@ -0,0 +1,45 @@
慕尼黑市市长透露重返 Windows 的费用
================================================================================
> **摘要**: 慕尼黑市市长透露了在该市摆脱微软十年之后再次放弃 Linux 重返 Windows 的费用,大约需要数以百万计的欧元。
慕尼黑市市长透露,重返 Windows 将需要花费上百万欧元购买新的硬件。
今年早些时候,该市新当选的市长提出慕尼黑可能重返 Windows尽管市当局[用了若干年才迁移到基于 Linux 的操作系统和开源软件][1]LCTT 译注:摘要译文见 http://linux.cn/article-2294-1.html 
作为最著名的从微软迁移到 Linux 桌面系统的案例慕尼黑投向开源软件的做法一直引发各种争议和讨论。慕尼黑的迁移始于2004年还有一些德国的地方当局也[追随它的脚步转向开源][2]。
目前还没有[制定好返回 Windows 桌面的计划][3],但是当局正在调研哪种操作系统和软件包(包括专有软件和开源软件)更适合他们的需求。调研报告也将统计迁移到开源软件所花费的费用。
Dieter Reiter市长在[回应慕尼黑的绿党的问询][4]时透露了重返 Windows 的费用。
Reiter 说,迁移到 Windows 7 需要替换其14000名以上职员的所有个人电脑此举将花费 315万欧元。这还没有包括软件许可证费用和基础设施的投入Reiter 说,由于还没有进一步的计划,所以这些还没办法测算。他说,如果迁移到 Windows 8 将花费更多。
Reiter 说,返回微软将导致迁移到 [Limux][5]、OpenOffice 及其它开源软件所花费的1400万欧元打了水漂。而部署 Limux 并从微软 Office 迁移的项目实施、支持、培训、修改系统以及 Limux 相关软件的授权等工作都将被搁置,他补充道。
他还透露说,(之前)迁移到 Limux 为市政府节约了大概1100万欧元的许可证和硬件费用因为基于 Ubuntu 的 Limux 操作系统要比升级较新版本的 Windows 对硬件的需要要低。
在这个回应中 Reiter 告诉 Stadtbild 杂志说,他是微软的粉丝,但是这并不会影响到这份 IT 审计报告。
“在接受 Stadtbild 杂志的采访中我透露我是微软粉丝后,我就收到了大量的信件,询问我们的 IT 团队是否能令人满意的满足用户在任何时候的需求,以及是否有足够的能力为一个现代化大都市的政府服务。”
“这件事有许多方面,用户满意度是其中之一。这和我个人偏好无关,也和我在开源方面的经验无关。”
他在回应中表示,这次审计的决定并不是由职员们对迁移到开源的抱怨导致的。他说,这次审计源于对职员 IT 需求的调查,而并不只是针对 Limux 系统。
他还提到了 Windows 和基于 Linux 的操作系统在安全性上的比较。他指出,根据德国联邦信息技术安全局 BSI 的信息Linux 上被发现的漏洞要比 Windows 多,不过 Linux 的使用量也少得多。然而他也补充说,这种比较也许有不同的解释。
--------------------------------------------------------------------------------
via: http://www.zdnet.com/munich-sheds-light-on-the-cost-of-dropping-linux-and-returning-to-windows-7000034718/
作者:[Nick Heath][a]
译者:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/uk/nick-heath/
[1]:http://www.techrepublic.com/article/how-munich-rejected-steve-ballmer-and-kicked-microsoft-out-of-the-city/
[2]:http://www.techrepublic.com/blog/european-technology/its-not-just-munich-open-source-gains-new-ground-in-germany/
[3]:http://www.techrepublic.com/article/no-munich-isnt-about-to-ditch-free-software-and-move-back-to-windows/
[4]:http://www.ris-muenchen.de/RII2/RII/DOK/ANTRAG/3456728.pdf
[5]:http://en.wikipedia.org/wiki/LiMux


@ -0,0 +1,37 @@
Debian 7.7 更新版发布
================================================================================
**Debian项目已经宣布 Debian 7.7 “Wheezy”发布并提供下载。这是一次常规维护更新但它包含了很多重要的修复。**
![](http://i1-news.softpedia-static.com/images/news2/Debian-7-7-Is-Out-with-Security-Fixes-462647-2.jpg)
Debian在这个发行版里面包含的主要是常规更新如果你已经安装的 Debian 一直保持着最新状态,就无需下载安装这个版本。不过开发者做了一些重要的修复,因此如果还没升级的话,建议尽快升级。
“此次更新主要是给稳定版修正安全问题,以及对一些严重问题的调整。安全建议的公告已经另行发布了,请查阅。”
开发者在正式[公告][1]中指出“请注意此更新并不是Debian 7的新版本只是更新了部分包没必要扔掉旧的wheezy CD或DVD只要在安装后通过 Debian 镜像来升级那些过期的包就行“。
开发者已经升级了 Bash 包来修复那些重要的漏洞,解决了启动时 SSH 登录失效的问题,并且还做了其他一些微调。
要了解这次发布的更多细节,请查看官方公告中的完整更新日志。
现在下载 Debian 7.7:
- [Debian GNU/Linux 7.7.0 (ISO) 32-bit/64-bit][2]
- [Debian GNU/Linux 6.0.10 (ISO) 32-bit/64-bit][3]
- [Debian GNU/Linux 8 Beta 2 (ISO) 32-bit/64-bit][4]
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Debian-7-7-Is-Out-with-Security-Fixes-462647.shtml
作者:[Silviu Stahie][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:https://www.debian.org/News/2014/20141018
[2]:http://ftp.acc.umu.se/debian-cd/7.7.0/multi-arch/iso-dvd/debian-7.7.0-i386-amd64-source-DVD-1.iso
[3]:http://ftp.au.debian.org/debian/dists/oldstable/
[4]:http://cdimage.debian.org/cdimage/jessie_di_beta_2/


@ -0,0 +1,51 @@
Linux 下的免费图片查看器
================================================================================
我最喜欢的谚语之一是“一图胜千言”。它指一张静态图片可以传递一个复杂的想法。图像相比文字而言可以迅速且更有效地描述大量信息。它们捕捉回忆,永不让你忘记你所想记住的东西,并且让它时常在你的记忆里刷新。
图片是互联网日常使用的一部分,并且对社交媒体互动尤其重要。一个好的图片查看器是任何操作系统必不可少的一个组成部分。
Linux 系统提供了大量开源实用小程序这些程序提供了从显而易见到异乎寻常的各种功能。正是这些工具的高质量和多样的选择帮助 Linux 在生产环境中脱颖而出尤其是当谈到图片查看器时。Linux 有如此多的图像查看器可供选择,以至于让选择困难症患者无所适从~
一个没有被包括在这个综述中、但是值得一提的软件是 Fragment Image Viewer。它在专有许可证下发行是的我知道所以不会预装在 Ubuntu 上。但它无疑看起来十分有趣!要是它的开发者们将它在开源许可证下发布的话,它将是明日之星!
现在,让我们亲眼探究一下这 13 款图像查看器。除了一个例外,它们中每个都是在开源协议下发行的。由于有很多信息要阐述,我没有把所有详细内容都塞进这一篇综述里,而是为每一款图片查看器提供了一个单独页面,包括软件的完整描述、产品特点的详细分析、一张软件运行中的截图,以及相关资源和评论的链接。
### 图片查看器 ###
- [**Eye of Gnome**][1] -- 快速且多功能的图片查看器
- [**gThumb**][2] -- 高级图像查看器和浏览器
- [**Shotwell**][3] -- 被设计来提供个人照片管理的图像管理器
- [**Gwenview**][4] -- 专为 KDE 4 桌面环境开发的简易图片查看器
- [**Imgv**][5] -- 强大的图片查看器
- [**feh**][6] -- 基于 Imlib2 的快速且轻量的图片查看器
- [**nomacs**][7] -- 可处理包括 RAW 在内的大部分格式
- [**Geeqie**][8] -- 基于 Gtk+ 的轻量级图片查看器
- [**qiv**][9] -- 基于 gdk/imlib 的非常小且精致的开源图片查看器
- [**PhotoQT**][10] -- 好看、高度可配置、易用且快速
- [**Viewnior**][11] -- 设计时考虑到易用性
- [**Cornice**][12] -- 设计用来作为 ACDSee 的免费替代品
- [**XnViewMP**][13] -- 图像查看器、浏览器、转换器(专有软件)
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20141018070111434/ImageViewers.html
译者:[jabirus](https://github.com/jabirus)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://projects.gnome.org/eog/
[2]:https://wiki.gnome.org/Apps/gthumb
[3]:https://wiki.gnome.org/Apps/Shotwell/
[4]:http://gwenview.sourceforge.net/
[5]:http://imgv.sourceforge.net/
[6]:http://feh.finalrewind.org/
[7]:http://www.nomacs.org/
[8]:http://geeqie.sourceforge.net/
[9]:http://spiegl.de/qiv/
[10]:http://photoqt.org/
[11]:http://siyanpanayotov.com/project/viewnior/
[12]:http://wxglade.sourceforge.net/extra/cornice.html
[13]:http://www.xnview.com/en/


@ -1,8 +1,6 @@
恰当地管理开源,让软件更加安全
================================================================================
![作者 Bill Ledingham 是 Black Duck Software 公司的首席技术官CTO兼工程执行副总裁](http://www.linux.com/images/stories/41373/Bill-Ledingham.jpg)
Bill Ledingham 是 Black Duck Software 公司的首席技术官CTO兼工程执行副总裁。
<center>![作者 Bill Ledingham 是 Black Duck Software 公司的首席技术官CTO兼工程执行副总裁](http://www.linux.com/images/stories/41373/Bill-Ledingham.jpg)</center>
越来越多的公司意识到,要想比对手率先开发出高质量具有创造性的软件,关键在于积极使用开源项目。软件版本更迭要求市场推广速度足够快,成本足够低,而仅仅使用商业源代码已经无法满足这些需求了。如果不能选择最合适的开源软件集成到自己的项目里,一些令人称道的点子怕是永无出头之日了。
@ -38,12 +36,9 @@ Heartbleed bug 让开发人员和企业知道了软件安全性有多重要。
虽然每个公司、每个开发团队都面临各不相同的问题,但实践证明下面几条安全管理经验对使用开源软件的任何规模的组织都有意义:
- **自动认证并分类** - 捕捉并追踪开源组件的相关属性,评估授权许可,自动扫描可能出现的安全漏洞,自动认证并归档。
-
- **维护最新代码的版本** - 评估代码质量,确保你的产品使用的是最新版本的代码。
-
- **自动批准和分类** - 捕捉并追踪开源组件的相关属性,评估许可证合规性,通过自动化扫描、批准和使用过程来审查可能出现的安全漏洞。
- **维护最新代码的版本** - 评估代码质量,确保你的产品使用的是最新版本的代码。
- **评估代码** - 评估所有在使用的开源代码;审查代码安全性、授权许可、列出风险并予以解决。
-
- **确保代码合法** - 创建并实现开源政策,建立自动化合规检查流程确保开源政策、法规、法律责任等符合开源组织的要求。
### 关键是,要让管理流程运作起来 ###
@ -58,7 +53,7 @@ via: http://www.linux.com/news/software/applications/782953-how-to-achieve-bette
作者:[Bill Ledingham][a]
译者:[sailing](https://github.com/sailing)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -1,11 +1,12 @@
使用Vmstat和Iostat命令进行Linux性能监控
使用vmstat和iostat命令进行Linux性能监控
================================================================
这是我们正在进行的**Linux**命令和性能监控系列的一部分。**Vmstat**和**Iostat**两个命令都适用于所有主要的类**unix**系统(**Linux/unix/FreeBSD/Solaris**)。
如果**vmstat**和**iostat**命令在你的系统中不可用,请安装**sysstat**软件包。**vmstat****sar**和**iostat**命令都包含在**sysstat**系统监控工具软件包中。iostat命令生成**CPU**和所有设备的统计信息。你可以从连接[sysstat][1]中下载源代码包编译安装sysstat但是我们建议通过**YUM**命令进行安装。
这是我们正在进行的**Linux**命令和性能监控系列的一部分。**vmstat**和**iostat**两个命令都适用于所有主要的类**unix**系统(**Linux/unix/FreeBSD/Solaris**)。
如果**vmstat**和**iostat**命令在你的系统中不可用,请安装**sysstat**软件包。**vmstat****sar**和**iostat**命令都包含在**sysstat**系统监控工具软件包中。iostat命令生成**CPU**和所有设备的统计信息。你可以从[这个连接][1]中下载源代码包编译安装sysstat但是我们建议通过**YUM**命令进行安装。
![使用Vmstat和Iostat命令进行Linux性能监控](http://www.tecmint.com/wp-content/uploads/2012/09/Linux-VmStat-Iostat-Commands.png)
使用Vmstat和Iostat命令进行Linux性能监控
*使用Vmstat和Iostat命令进行Linux性能监控*
###在Linux系统中安装sysstat###
@ -18,7 +19,7 @@
####1. 列出活动和非活动的内存####
如下范例中输出6列。**vmstat**的man页面中解析的每一列的意义。最重要的是内存中的**free**属性和交换分区中**si**和**so**属性。
如下范例中输出6列。**vmstat**的man页面中解析的每一列的意义。最重要的是内存中的**free**属性和交换分区中**si**和**so**属性。
[root@tecmint ~]# vmstat -a
@ -33,6 +34,7 @@
**注意**:如果你不带参数的执行**vmstat**命令,它会输出自系统启动以来的总结报告。
####2. 每X秒执行vmstat共执行N次####
下面命令将会每2秒中执行一次**vmstat**执行6次后自动停止执行。
[root@tecmint ~]# vmstat 2 6
@ -65,7 +67,6 @@
**vmstat**命令的**-s**参数,将输出各种事件计数器和内存的统计信息。
[tecmint@tecmint ~]$ vmstat -s
1030800 total memory
@ -237,7 +238,7 @@ via: http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-
作者:[Ravi Saive][a]
译者:[cvsher](https://github.com/cvsher)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -1,19 +1,17 @@
集所有功能与一身的Linux系统性能和使用活动监控工具-Sysstat
全能冠军Linux系统性能和使用活动监控工具 sysstat
===========================================================================
**Sysstat**是一个非常方便的工具它带有众多的系统资源监控工具用于监控系统的性能和使用情况。我们在日常使用的工具中有相当一部分是来自sysstat工具包的。同时它还提供了一种使用cron表达式来制定性能和活动数据的收集计划。
![Install Sysstat in Linux](http://www.tecmint.com/wp-content/uploads/2014/08/sysstat.png)
在Linux系统中安装Sysstat
下表是包含在sysstat包中的工具
- [**isstat**][1]: 输出CPU的统计信息和所有I/O设备的输入输出I/O统计信息。
- **mpstat**: 关于多有CPU的详细信息单独输出或者分组输出
- [**iostat**][1]: 输出CPU的统计信息和所有I/O设备的输入输出I/O统计信息。
- **mpstat**: 关于所有CPU的详细信息单独输出或者分组输出
- **pidstat**: 关于运行中的进程/任务、CPU、内存等的统计信息。
- **sar**: 保存并输出不同系统资源CPU、内存、IO、网络、内核等。。。)的详细信息。
- **sadc**: 系统活动数据收集器,用于手机sar工具的后端数据。
- **sa1**: 系统手机并存储sadc数据文件的二进制数据与sadc工具配合使用
- **sar**: 保存并输出不同系统资源CPU、内存、IO、网络、内核等的详细信息。
- **sadc**: 系统活动数据收集器,用于收集sar工具的后端数据。
- **sa1**: 收集系统活动数据并存储为sadc数据文件的二进制格式与sadc工具配合使用
- **sa2**: 配合sar工具使用产生每日的摘要报告。
- **sadf**: 用于以不同的数据格式CSV或者XML来格式化sar工具的输出。
- **Sysstat**: sysstat工具的man帮助页面。
@ -26,9 +24,9 @@ pidstat命令新增了一些新的选项首先是“-R”选项该选项
sar、sadc和sadf命令在数据文件方面同样带来了一些功能上的增强。以往只能使用“**saDD**”格式来命名数据文件而现在使用**-D**选项可以用“**saYYYYMMDD**”来命名数据文件,同样的,现在的数据文件不必放在“**/var/log/sa**”目录中我们可以使用“SA_DIR”变量来定义新的目录该变量将应用于sa1和sa2命令。
###在Linux系统中安装Sysstat####
###在Linux系统中安装sysstat###
在主要的linux发行版中**Sysstat**’工具包可以在默认的程序库中安装。然而,在默认程序库中的版本通常有点旧,因此,我们将会下载源代码包,编译安装最新版本(**11.0.0**版本)。
在主要的linux发行版中**sysstat**’工具包可以在默认的程序库中安装。然而,在默认程序库中的版本通常有点旧,因此,我们将会下载源代码包,编译安装最新版本(**11.0.0**版本)。
首先使用下面的链接下载最新版本的sysstat包或者你可以使用**wget**命令直接在终端中下载。
@ -38,7 +36,7 @@ sar、sadc和sadf命令在数据文件方面同样带来了一些功能上的增
![Download Sysstat Package](http://www.tecmint.com/wp-content/uploads/2014/08/Download-Sysstat.png)
下载Sysstat包
*下载sysstat包*
然后解压缩下载下来的包,进入该目录,开始编译安装
@ -47,21 +45,25 @@ sar、sadc和sadf命令在数据文件方面同样带来了一些功能上的增
这里,你有两种编译安装的方法:
a).第一,你可以使用**iconfig**(这将会给予你很大的灵活性,你可以选择/输入每个参数的自定义值)
####a)####
第一,你可以使用**iconfig**(这将会给予你很大的灵活性,你可以选择/输入每个参数的自定义值)
# ./iconfig
![Sysstat iconfig Command](http://www.tecmint.com/wp-content/uploads/2014/08/Sysstat-iconfig-Command.png)
Sysstat的iconfig命令
*sysstat的iconfig命令*
b).第二,你可以使用标准的**configure**命令在当行中定义所有选项。你可以运行 **./configure help 命令**来列出该命令所支持的所有限选项。
####b)####
第二,你可以使用标准的**configure**,在命令行中定义所有选项。你可以运行 **./configure --help** 命令来列出该命令所支持的所有选项。
# ./configure --help
![Sysstat Configure Help](http://www.tecmint.com/wp-content/uploads/2014/08/Configure-Help.png)
Stsstat的cofigure -help
*sysstat的configure --help*
在这里,我们使用标准的**./configure**命令来编译安装sysstat工具包。
@ -71,7 +73,7 @@ Stsstat的cofigure -help
![Configure Sysstat in Linux](http://www.tecmint.com/wp-content/uploads/2014/08/Configure-Sysstat.png)
在Linux系统中配置sysstat
*在Linux系统中配置sysstat*
在编译完成后我们将会看到一些类似于上图的输出。现在运行如下命令来查看sysstat的版本。
@ -80,7 +82,7 @@ Stsstat的cofigure -help
sysstat version 11.0.0
(C) Sebastien Godard (sysstat <at> orange.fr)
###在Linux 系统中更新sysstat###
###更新Linux 系统中的sysstat###
默认的sysstat使用“**/usr/local**”作为其目录前缀。因此,所有的二进制文件/工具都会安装在“**/usr/local/bin**”目录中。如果你的系统已经安装了sysstat 工具包,则上面提到的二进制文件/工具有可能在“**/usr/bin**”目录中。
@ -112,11 +114,11 @@ via: http://www.tecmint.com/install-sysstat-in-linux/
作者:[Kuldeep Sharma][a]
译者:[cvsher](https://github.com/cvsher)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/kuldeepsharma47/
[1]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
[1]:http://linux.cn/article-4024-1.html
[2]:http://sebastien.godard.pagesperso-orange.fr/download.html
[3]:http://sebastien.godard.pagesperso-orange.fr/documentation.html


@ -0,0 +1,80 @@
Translating by instdio
How To Use Steam Music Player on Ubuntu Desktop
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/steam-music.jpg)
**Music makes the people come together Madonna once sang. But can Steams new music player feature mix the bourgeoisie and the rebel as well?**
If youve been living under a rock, ears pressed tight to a granite roof, word of Steam Music may have passed you by. The feature isnt entirely new. Its been in testing in some form or another since earlier this year.
But in the latest stable update of the Steam client on Windows, Mac and Linux it is now available to all. Why does a gaming client need to add a music player, you ask? To let you play your favourite music while gaming, of course.
Dont worry: playing your music over in-game music is not as bad as it sounds (har har) on paper. Steam reduces/cancels out the game soundtrack in favour of your tunes, but keeps sound effects high in the mix so you can hear the plings, boops and blams all the same.
### Using Steam Music Player ###
![Music in Big Picture Mode](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/steam-music-bpm.jpg)
Music in Big Picture Mode
Steam Music Player is available to anyone running the latest version of the client. Its a pretty simple addition: it lets you add, browse and play music from your computer.
The player element itself is accessible on the desktop and when playing in Steams (awesome) Big Picture mode. In both instances, controlling playback is made dead simple.
As the feature is **designed for playing music while gaming** it is not pitching itself as a rival for Rhythmbox or successor to Spotify. In fact, theres no store to purchase music from and no integration with online services like Rdio, Grooveshark, etc. or the desktop. Nope, your keyboard media keys wont work with the player in Linux.
Valve say they “*…plan to add more features so you can experience Steam music in new ways. Were just getting started.*”
#### Steam Music Key Features: ####
- Plays MP3s only
- Mixes with in-game soundtrack
- Music controls available in game
- Player can run on the desktop or in Big Picture mode
- Playlist/queue based playback
**It does not integrate with the Ubuntu Sound Menu and does not currently support keyboard media keys.**
### Using Steam Music on Ubuntu ###
The first thing to do before you can play music is to add some. On Ubuntu, by default, Steam automatically adds two folders: the standard Music directory in Home, and its own Steam Music folder, where any downloadable soundtracks are stored.
Note: at present **Steam Music only plays MP3s**. If the bulk of your music is in a different file format (e.g., .aac, .m4a, etc.) it wont be added and cannot be played.
To add an additional source or scan files in those already listed:
- Head to **View > Settings > Music**.
- Click **Add** to add a folder in a different location to the two listed entries
- Hit **Start Scanning**
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/Tardis.jpg)
This dialog is also where you can adjust other preferences, including a scan at start. If you routinely add new music and are prone to forgetting to manually initiate a scan, tick this one on. You can also choose whether to see notifications on track change, set the default volume levels, and adjust playback behaviour when opening an app or taking a voice chat.
Once your music sources have been successfully added and scanned you are all set to browse through your entries from the **Library > Music** section of the main client.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/browser.jpg)
The Steam Music section groups music by album title by default. To browse by band name you need to click the Albums header and then select Artists from the drop down menu.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/steam-selection.jpg)
Steam Music works off of a queue system. You can add music to the queue by double-clicking on a track in the browser or by right-clicking and selecting Add to Queue.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/steam-music-queue.jpg)
To **launch the desktop player** click the musical note emblem in the upper-right hand corner or through the **View > Music Player** menu.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/steam-music.jpg)
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/10/use-steam-music-player-linux
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author


@ -0,0 +1,35 @@
Linus Torvalds Regrets Alienating Developers with Strong Language
================================================================================
> He didn't name anyone, but this sounds like an apology
**Linus Torvalds talked today at LinuxCon and CloudOpen Europe, a conference organized by the Linux Foundation that reunites all the big names in the open source world. He answered a lot of questions and he also talked about the effects of the strong language he uses in the mailing list.**
Linus Torvalds is recognized as the creator of the Linux kernel and the maintainer of the latest development version. He makes sure that we get a new RC almost every week and he is very involved in the discussions that take place in the mailing list. He doesn't really choose his words and has been blamed for using strong language with some of the developers.
The latest problem of this kind, which surfaced in the news as well, was when [he decided to block code from a particular developer][1], after making some very harsh remarks. He is known to be very abrasive, especially when kernel developers break user space to fix something in the kernel. The same happened in this case and he basically went mental on the guy.
### This is the closest he's been to an apology ###
Linus Torvalds never really talked about that particular discussion since and people moved on, but recently a systemd developer talked about the strong language in the open source community and he mentioned Linus Torvalds by name. He's not known to apologize, so this admission of guilt during LinuxCon is a big step forward. The moderator asked him what single decision in the last 23 years he would change.
"From a technical standpoint, no single decision has ever been that important... The problems tend to be around alienating users or developers and I'm pretty good at that. I use strong language. But again there's not a single instance I'd like to fix. There's a metric [expletive]load of those."
"One of the reasons we have this culture of strong language, that admittedly many people find off-putting, is that when it comes to technical people with strong opinions and with a strong drive to do something technically superior, you end up having these opinions show up as sometimes pretty strong language," [said][2] Linus Torvalds.
He didn't mention anyone by name or any specific incident, but the proximity to the complaints issued by Lennart Poettering, the systemd developer, seems to point towards that issue.
It also looks like Linux kernel 3.18 RC1 will arrive later this week and we'll soon have something new to play with.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Linus-Torvalds-Regrets-Alienating-Developers-with-Strong-Language-462191.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://news.softpedia.com/news/Linus-Torvalds-Block-All-Code-from-Systemd-Developer-for-the-Linux-Kernel-435714.shtml
[2]:http://www.linux.com/news/featured-blogs/200-libby-clark/791788-linus-torvalds-best-quotes-from-linuxcon-europe-2014


@ -0,0 +1,60 @@
Linus Torvalds' Best Quotes from LinuxCon Europe 2014
================================================================================
![](http://www.linux.com/images/stories/41373/Linus-Dirk-2014.jpg)
Linux creator Linus Torvalds answered questions from Dirk Hohndel, Intel's chief Linux and open source technologist, on Wednesday, Oct. 15, 2014 at LinuxCon and CloudOpen Europe.
Linus Torvalds doesn't regret any of the technical decisions he's made over the past 23 years since he first created Linux, he said Wednesday at [LinuxCon and CloudOpen Europe][1].
“Technical issues, even when they're completely wrong, and they have been, you can fix them later,” said Torvalds, a Linux Foundation fellow.
![](http://www.linux.com/images/stories/41373/Linus-Torvalds-2014.jpg)
Despite these personal issues and disagreements the community has thrived, and created the best technology they possibly can, said Linus Torvalds at LinuxCon Europe 2014.
He does, however, regret the times he has alienated developers and users with his use of strong language on the kernel mailing list, he said. Relationships can't be so easily fixed.
Despite these personal issues and disagreements the community has thrived, and created the best technology they possibly can. This is, Torvalds said, the ultimate goal.
In a Q&A on stage with Dirk Hohndel, Intel's chief Linux and open source technologist, Torvalds spoke about the state of the community, the kernel development process, what it takes to be a kernel developer, and the future of Linux. Here are some highlights of the discussion.
**1.** “The speed of development has not really slowed down the last few years. We have had around 10,000 patches every release from more than 1,000 people and the end result has been very good.”
**2.** Dirk Hohndel: “You said you wanted subsystem maintainers to consider following the x86 model and have more than one maintainer share the role. How about applying your own advice at the top?”
Torvalds: “I'll probably have to do that someday. Right now I'm not getting a lot of complaints for not being responsive. Being responsive is one of the most important things a kernel developer at any level can be... So far, partly thanks to Git, I've been able to keep up.”
**3.** “A lot of people want to have market share numbers, lots of users, because that's how they view their self worth. For me, one of the most important things for Linux is having a big community that is actively testing new kernels; it's the only way to support the absolute insane amount of different hardware we deal with.”
**4.** Hohndel: “If you could change a single decision you've made in the last 23 years, what would you do differently?”
Torvalds: “From a technical standpoint, no single decision has ever been that important... The problems tend to be around alienating users or developers and I'm pretty good at that. I use strong language. But again there's not a single instance I'd like to fix. There's a metric shitload of those.”
**5.** “Most people, even though they don't always necessarily like each other, do tend to respect the code they generate. For Linux that's the important part. What really matters is people are very involved in generating the best technology we can.”
**6.** “On the internet nobody can hear you being subtle.”
**7.** “One of the reasons we have this culture of strong language, that admittedly many people find off-putting, is that when it comes to technical people with strong opinions and with a strong drive to do something technically superior, you end up having these opinions show up as sometimes pretty strong language.”
**8.** Hohndel: What will you tell a student who wants to become the next Linus?
Torvalds: “Find something that you're passionate about and just do it.”
**9.** “Becoming a maintainer is easy; you just need an infinite amount of time and respond to email from random people.”
**10.** Hohndel: “Make a bold prediction about the future of Linux.”
Torvalds: “The boldest prediction I can say is, I will probably release rc1 in about a week.”
--------------------------------------------------------------------------------
via: http://www.linux.com/news/featured-blogs/200-libby-clark/791788-linus-torvalds-best-quotes-from-linuxcon-europe-2014
作者:[Libby Clark][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linux.com/community/forums/person/41373/catid/200-libby-clark
[1]:http://events.linuxfoundation.org/events/linuxcon-europe


@ -0,0 +1,37 @@
"Fork Debian" Project Aims to Put Pressure on Debian Community and Systemd Adoption
================================================================================
> There is still a great deal of resistance in the Debian community towards the upcoming adoption of systemd
**The Debian project decided to adopt systemd a while ago and ditch the upstart counterpart. The decision was very controversial and it's still contested by some users. Now, a new proposition has been made, to fork Debian into something that doesn't have systemd.**
![](http://i1-news.softpedia-static.com/images/news2/Fork-Debian-Project-Started-to-Put-Pressure-on-Debian-Community-and-Systemd-Adoption-462598-2.jpg)
systemd is the replacement for the init system and it's the daemon that starts right after the Linux kernel. It's responsible for initiating all the other components in a system and it's also responsible for shutting them down in the correct order, so you might imagine why people think this is an important piece of software.
The discussions in the Debian community have been very heated, but systemd prevailed and it looked like the end of it. Linux distros based on it have already started to make the changes. For example, Ubuntu is already preparing to adopt systemd, although it's still pretty far off.
### Forking Debian, not really a solution ###
Developers have already forked systemd, but the resulting projects don't have much support from the community. As you can imagine, systemd also has a big following and people are not giving up so easily. Now, someone has made a website called debianfork.org to advocate for a Debian without systemd, in an effort to put pressure on the developers.
"We are Veteran Unix Admins and we are concerned about what is happening to Debian GNU/Linux to the point of considering a fork of the project. Some of us are upstream developers, some professional sysadmins: we are all concerned peers interacting with Debian and derivatives on a daily basis. We don't want to be forced to use systemd in substitution to the traditional UNIX sysvinit init, because systemd betrays the UNIX philosophy."
"We contemplate adopting more recent alternatives to sysvinit, but not those undermining the basic design principles of 'do one thing and do it well' with a complex collection of dozens of tightly coupled binaries and opaque logs," reads the [website][1], among a lot of other things.
Basically, the new website is not actually about a Debian fork, but more like a form of pressure for the [upcoming vote][2] that will be taken for the "Re-Proposal - preserve freedom of choice of init systems." This is a general resolution made by Ian Jackson and he hopes to get enough support in order to turn back the decision made by the Technical Committee regarding systemd.
It's clear that the debate is still not over in the Debian community, but it remains to be seen if the decisions already made can be overturned.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Fork-Debian-Project-Started-to-Put-Pressure-on-Debian-Community-and-Systemd-Adoption-462598.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://debianfork.org/
[2]:https://lists.debian.org/debian-vote/2014/10/msg00001.html


@ -0,0 +1,64 @@
Microsoft loves Linux -- for Azure's sake
================================================================================
![](http://images.techhive.com/images/article/2014/10/microsoft_guthrie_azure-100525983-primary.idge.jpg)
Scott Guthrie, executive vice president, Microsoft Cloud and Enterprise group, shows how Microsoft differentiates Azure. Credit: James Niccolai/IDG News Service
### Microsoft adds CoreOS and Cloudera to its growing set of Azure services ###
Microsoft now loves Linux.
This was the message from Microsoft CEO Satya Nadella, standing in front of an image that read "Microsoft [heart symbol] Linux," during a Monday webcast to announce a number of services it had added to its Azure cloud, including the Cloudera Hadoop package and the CoreOS Linux distribution.
In addition, the company launched a marketplace portal, now in preview mode, designed to make it easier for customers to procure and manage their cloud operations.
Microsoft is also planning to release an Azure appliance, in conjunction with Dell, that will allow organizations to run hybrid clouds where they can easily move operations between Microsoft's Azure cloud and their own in-house version.
The declaration of affection for Linux indicates a growing acceptance of software that wasn't created at Microsoft, at least for the sake of making its Azure cloud platform as comprehensive as possible.
For decades, the company tied most of its new products and innovations to its Windows platform, and saw other OSes, such as Linux, as a competitive threat. Former CEO Steve Ballmer [once infamously called Linux a cancer][1].
This animosity may be evaporating as Microsoft is finding that customers want cloud services that incorporate software from other sources in addition to Microsoft. About 20 percent of the workloads run on Azure are based on Linux, Nadella admitted.
Now, the company considers its newest competitors to be the cloud services offered by Amazon and Google.
Nadella said that by early 2015, Azure will be operational in 19 regions around the world, which will provide more local coverage than either Google or Amazon.
He also noted that the company is investing more than $4.5 billion in data centers, which by Microsoft's estimation is twice as much as Amazon's investments and six times as much as Google's.
To compete, Microsoft has been adding widely-used third party software packages to Azure at a rapid clip. Nadella noted that Azure now supports all the major data integration stacks, such as those from Oracle and IBM, as well as major new entrants such as MongoDB and Hadoop.
The results seem to be paying off. Today Azure is generating about $4.48 billion in annual revenue for Microsoft, and we are "still at the early days" of cloud computing, Nadella said.
The service attracts about 10,000 new customers per week. About 2 million developers have signed on to Visual Studio Online since its launch. The service runs about 1.2 million SQL databases.
CoreOS is now actually the fifth Linux distribution that Azure offers, joining Ubuntu, CentOS, OpenSuse, and Oracle Linux (a variant of Red Hat Enterprise Linux). Customers [can also package their own Linux distributions][2] to run in Azure.
CoreOS was developed as [a lightweight Linux distribution][3] to be used primarily in cloud environments. Officially launched in December, CoreOS is already offered as a service by Google, Rackspace, DigitalOcean and others.
Cloudera is the second Hadoop distribution offered on Azure, following Hortonworks. Cloudera CEO Mike Olson joined the Microsoft executives onstage to demonstrate how easily one can use the Cloudera Hadoop software within Azure.
Using the new portal, Olson showed how to start up a 90-node instance of Cloudera with a few clicks. Such a deployment can be connected to an Excel spreadsheet, where the user can query the dataset using natural language.
Microsoft also announced a number of other services and products.
Azure will have a new type of virtual machine, which is being called the "G Family." These virtual machines can have up to 32 CPU cores, 450GB of working memory and 6.5TB of storage, making it in effect "the largest virtual machine in the cloud," said Scott Guthrie, who is the Microsoft executive vice president overseeing Azure.
This family of virtual machines is equipped to handle the much larger workloads Microsoft is anticipating its customers will want to run. It has also upped the amount of storage each virtual machine can access, to 32TB.
The new cloud platform appliance, available in November, will allow customers to run Azure services on-premise, which can provide a way to bridge their on-premise and cloud operations. One early customer, integrator General Dynamics, plans to use this technology to help its U.S. government customers migrate to the cloud.
--------------------------------------------------------------------------------
via: http://www.computerworld.com/article/2836315/microsoft-loves-linux-for-azures-sake.html
作者:[Joab Jackson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.computerworld.com/author/Joab-Jackson/
[1]:http://www.theregister.co.uk/2001/06/02/ballmer_linux_is_a_cancer/
[2]:http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-create-upload-vhd/
[3]:http://www.itworld.com/article/2696116/open-source-tools/coreos-linux-does-away-with-the-upgrade-cycle.html


@ -0,0 +1,30 @@
Red Hat acquires FeedHenry to get mobile app chops
================================================================================
Red Hat wants a piece of the enterprise mobile app market, so it has acquired Irish company FeedHenry for approximately $82 million.
The growing popularity of mobile devices has put pressure on enterprise IT departments to make existing apps available from smartphones and tablets -- a trend that Red Hat is getting in on with the FeedHenry acquisition.
The mobile app segment is one of the fastest growing in the enterprise software market, and organizations are looking for better tools to build mobile applications that extend and enhance traditional enterprise applications, according to Red Hat.
"Mobile computing for the enterprise is different than Angry Birds. Enterprise mobile applications need a backend platform that enables the mobile user to access data, build backend logic, and access corporate APIs, all in a scalable, secure manner," Craig Muzilla, senior vice president for Red Hat's Application Platform Business, said in a [blog post][1].
FeedHenry provides a cloud-based platform that lets users develop and deploy applications for mobile devices that meet those demands. Developers can create native apps for Android, iOS, Windows Phone and BlackBerry as well as HTML5 apps, or a mixture of native and Web apps.
A key building block is Node.js, an increasingly popular platform based on Chrome's JavaScript runtime for building fast and scalable applications.
From Red Hat's point of view, FeedHenry is a natural fit with the company's strengths in enterprise middleware and PaaS (platform-as-a-service). It adds better mobile capabilities to the JBoss Middleware portfolio and OpenShift PaaS offerings, Red Hat said.
Red Hat plans to continue to sell and support FeedHenry's products, and will continue to honor client contracts. For the most part, it will be business as usual, according to Red Hat. The transaction is expected to close in the third quarter of its fiscal 2015.
--------------------------------------------------------------------------------
via: http://www.computerworld.com/article/2685286/red-hat-acquires-feedhenry-to-get-mobile-app-chops.html
作者:[Mikael Ricknäs][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.computerworld.com/author/Mikael-Rickn%C3%A4s/
[1]:http://www.redhat.com/en/about/blog/its-time-go-mobile


@ -0,0 +1,28 @@
This is the name of Ubuntu 15.04 — And It's Not Velociraptor
================================================================================
**Ubuntu 14.10 may not be out of the door yet, but attention is already turning to Ubuntu 15.04. Today it got its name: [Vivid Vervet][1].**
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/Unknown.jpg)
Announcing the monkey-themed moniker in his usual loquacious style, Mark Shuttleworth cites the upstart and playful nature of the mascot as in tune with Ubuntu's own foray into the mobile space.
> “This is a time when every electronic thing can be an Internet thing, and that's a chance for us to bring our platform, with its security and its long term support, to a vast and important field. In a world where almost any device can be smart, and also subverted, our shared efforts to make trusted and trustworthy systems might find fertile ground.”
Talking of plans for the release Shuttleworth states one goal is to “show the way past a simple Internet of things, to a world of Internet things-you-can-trust.”
Ubuntu 15.04 is due for release in April 2015. It's not expected to arrive with either Mir or Unity 8 by default, but given the voracious speed of acceleration in ambitions, it may find its way out for testing.
Do you like the name? Were you hoping for velociraptor?
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/10/ubuntu-15-04-named-vivid-vervet
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.markshuttleworth.com/archives/1425


@ -0,0 +1,34 @@
Ubuntu 15.04 Is Called Vivid Vervet
================================================================================
> Mark Shuttleworth decided on the new name for Ubuntu 15.04
![](http://i1-news.softpedia-static.com/images/news2/Ubuntu-15-04-Is-Called-Vivid-Vervet-462621-2.jpg)
**One of Mark Shuttleworth's privileges is to decide what the code name for upcoming Ubuntu versions is. It's usually a real animal and now it's a monkey whose name starts with V and, as usual, it's probably a species you've never heard of before.**
With very few exceptions, some of the names chosen for Ubuntu releases send the older users to the Encyclopedia Britannica and the new ones to Google. Shuttleworth generally chooses animals that are less known and the names usually have something in common with the release.
For example, Trusty Tahr, the name of Ubuntu 14.04 LTS, followed the idea of long term support for the operating system, hence the trusty adjective. Precise Pangolin did the same for Ubuntu 12.04 LTS, and so on. Intermediary releases are not all that obvious and the Ubuntu 14.10 Utopic Unicorn is proof of that.
### Still thinking about the monkey whose name starts with a V? ###
The way the version number is chosen is pretty clear. The first part is for the year and the second one is for the month, so Ubuntu 14.10 is actually Ubuntu 2014 October. On the other hand, the names only follow a simple rule, one adjective and one animal, so the choice is rather simple. In other communities the designation is decided by the users, or at least with their participation; Ubuntu is different in this respect, although it's not a singular example.
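As a quick sketch of the numbering rule just described, a small shell function (the function name and layout are my own, not from the article) can turn a release year and month into the version string:

```shell
# YY.MM: last two digits of the year, zero-padded month.
ubuntu_version() {
    year=$1
    month=$2
    printf '%02d.%02d\n' $((year % 100)) "$month"
}

ubuntu_version 2014 10   # 14.10
ubuntu_version 2015 4    # 15.04
```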
"Release week! Already! I wouldn't call Trusty 'vintage' just yet, but Utopic is poised to leap into the torrent stream. We've all managed to land our final touches to *buntu and are excited to bring the next wave of newness to users around the world. Glad to see the unicorn theme went down well, judging from the various desktops I see on G+."
"In my favourite places, the smartest thing around is a particular kind of monkey. Vexatious at times, volant and vogie at others, a vervet gets in anywhere and delights in teasing cats and dogs alike. As the upstart monkey in this business I can think of no better mascot. And so let's launch our vicenary cycle, our verist varlet, the Vivid Vervet!" says Mark Shuttleworth on his [blog][1].
So, there you have it: Ubuntu 15.04, the operating system scheduled to arrive in April 2015, will be called Vivid Vervet. I won't keep you any longer; I'm sure you are already looking up the vervet on Wikipedia.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Ubuntu-15-04-Is-Called-Vivid-Vervet-462621.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://www.markshuttleworth.com/archives/1425


@ -0,0 +1,219 @@
Compact Text Editors Great for Remote Editing and Much More
================================================================================
A text editor is software used for editing plain text files. This type of software has many different uses including modifying configuration files, writing programming language source code, jotting down thoughts, or even making a grocery list. Given that editors can be used for such a diverse range of activities, it is worth spending the time finding an editor that best suits your preferences.
Whatever the level of sophistication of the editor, they typically have a common set of functionality, such as searching/replacing text, formatting text, importing files, as well as moving text within the file.
All of these text editors are console-based applications, which makes them ideal for editing files on remote machines. Textadept also provides a graphical user interface, but remains fast and minimalist.
Console-based applications are also light on system resources (very useful on low-spec machines), can be faster and more efficient than their graphical counterparts, do not stop working when X needs to be restarted, and are great for scripting purposes.
I have selected my favorite open source text editors that are frugal on system resources.
----------
### Textadept ###
![](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Textadept.png)
Textadept is a fast, minimalist, and extensible cross-platform open source text editor for programmers. This open source application is written in a mixture of C and Lua and has been optimized for speed and minimalism over the years.
Textadept is an ideal editor for programmers who want endless extensibility options without sacrificing speed or succumbing to code bloat and featuritis.
There is also a version available for the terminal, which only depends on ncurses; great for editing on remote machines.
#### Features include: ####
- Lightweight
- Minimal design maximizes screen real estate
- Self-contained executable: no installation necessary
- Entirely keyboard driven
- Unlimited split views (GUI version): split the editor window as many times as you like, either horizontally or vertically. Please note that Textadept is not a tabbed editor
- Support for over 80 programming languages
- Powerful snippets and key commands
- Code autocompletion and API lookup
- Unparalleled extensibility
- Bookmarks
- Find and Replace
- Find in Files
- Buffer-based word completion
- Adeptsense: autocomplete symbols for programming languages and display API documentation
- Themes: light, dark, and term
- Uses lexers to assign names to buffer elements like comments, strings, and keywords
- Sessions
- Snapopen
- Available modules include support for Java, Python, Ruby and recent file lists
- Conforms with the Gnome HIG (Human Interface Guidelines)
- Support for editing Lua code. Syntax autocomplete and LuaDoc is available for many Textadept objects as well as Lua's standard libraries
- Website: [foicica.com/textadept][1]
- Developer: Mitchell and contributors
- License: MIT License
- Version Number: 7.7
----------
### Vim ###
![](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-vim.png)
Vim is an advanced text editor that seeks to provide the power of the editor 'Vi', with a more complete feature set.
This editor is very useful for editing programs and other plain ASCII files. All commands are given with normal keyboard characters, so those who can type with ten fingers can work very fast. Additionally, function keys can be defined by the user, and the mouse can be used.
Vim is often called a "programmer's editor," and is so useful for programming that many consider it to be an entire Integrated Development Environment. However, this application is not only intended for programmers. Vim is highly regarded for all kinds of text editing, from composing email to editing configuration files.
Vim's interface is based on commands given in a text user interface. Although its graphical user interface, gVim, adds menus and toolbars for commonly used commands, the software's entire functionality is still reliant on its command line mode.
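As an illustration of that command-line mode, Vim's silent Ex mode can run the same commands non-interactively from a shell script, which ties in with the scripting use case mentioned earlier. This is a sketch assuming `vim` is on the PATH; `demo.txt` is a throwaway file:

```shell
# Create a sample file, then let Vim edit it without opening the UI.
printf 'hello world\n' > demo.txt

# -e: Ex mode, -s: silent; each -c passes one command-line mode command.
vim -es -c '%s/world/Vim/' -c 'wq' demo.txt

cat demo.txt
```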
#### Features include: ####
- 3 modes:
  - Command mode
  - Insert mode
  - Command line mode
- Unlimited undo
- Multiple windows and buffers
- Flexible insert mode
- Syntax highlighting: highlight portions of the buffer in different colors or styles, based on the type of file being edited
- Interactive commands
  - Marking a line
  - vi line buffers
  - Shift a block of code
- Block operators
- Command line history
- Extended regular expressions
- Edit compressed/archive files (gzip, bzip2, zip, tar)
- Filename completion
- Block operations
- Jump tags
- Folding text
- Indenting
- ctags and cscope integration
- 100% vi compatibility mode
- Plugins to add/extend functionality
- Macros
- vimscript, Vim's internal scripting language
- Unicode support
- Multi-language support
- Integrated On-line help
- Website: [www.vim.org][2]
- Developer: Bram Moolenaar
- License: GNU GPL compatible (charityware)
- Version Number: 7.4
----------
### ne ###
![](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-ne.png)
ne is a full screen open source text editor. It is intended to be an easier to learn alternative to vi, yet still portable across POSIX-compliant operating systems.
ne is easy to use for the beginner, but powerful and fully configurable for the wizard, and most sparing in its resource usage.
#### Features include: ####
- Three user interfaces: control keystrokes, command line, and menus; keystrokes and menus are completely configurable
- Syntax highlighting
- Full support for UTF-8 files, including multiple-column characters
- The number of documents and clips, the dimensions of the display, and the file/line lengths are limited only by the integer size of the machine
- Simple scripting language where scripts can be generated via an idiotproof record/play method
- Unlimited undo/redo capability (can be disabled with a command)
- Automatic preferences system based on the extension of the file name being edited
- Automatic completion of prefixes using words in your documents as dictionary
- File requester with completion features for easy file retrieval
- Extended regular expression search and replace à la emacs and vi
- A very compact memory model: easily load and modify very large files
- Editing of binary files
- Website: [ne.di.unimi.it][3]
- Developer: Sebastiano Vigna (original developer). Additional features added by Todd M. Lewis
- License: GNU GPL v3
- Version Number: 2.5
----------
### Zile ###
![](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Zile.png)
Zile Is Lossy Emacs (Zile) is a small Emacs clone. Zile is a customizable, self-documenting real-time display editor. Zile was written to be as similar as possible to Emacs; every Emacs user should feel comfortable with Zile.
Zile is distinguished by a very small RAM memory footprint, of approximately 130kB, and quick editing sessions. It is 8-bit clean, allowing it to be used on any sort of file.
#### Features include: ####
- Small but fast and powerful
- Multi buffer editing with multi level undo
- Multi window
- Killing, yanking and registers
- Minibuffer completion
- Auto fill (word wrap)
- Looks like Emacs. Key sequences, function and variable names are identical with Emacs's
- Auto line ending detection
- Website: [www.gnu.org/software/zile][4]
- Developer: Reuben Thomas, Sandro Sigala, David A. Capello
- License: GNU GPL v2
- Version Number: 2.4.11
----------
### nano ###
![](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-nano.png)
nano is a curses-based text editor. It is a clone of Pico, the editor of the Pine email client.
The nano project was started in 1999 due to licensing issues with the Pine suite (Pine was not distributed under a free software license), and also because Pico lacked some essential features.
nano aims to emulate the functionality and easy-to-use interface of Pico, while offering additional functionality, but without the tight mailer integration of the Pine/Pico package.
nano, like Pico, is keyboard-oriented, controlled with control keys.
#### Features include: ####
- Interactive search and replace
- Color syntax highlighting
- Go to line and column number
- Auto-indentation
- Feature toggles
- UTF-8 support
- Mixed file format auto-conversion
- Verbatim input mode
- Multiple file buffers
- Smooth scrolling
- Bracket matching
- Customizable quoting string
- Backup files
- Internationalization support
- Filename tab completion
- Website: [nano-editor.org][5]
- Developer: Chris Allegretta, David Lawrence, Jordi Mallach, Adam Rogoyski, Robert Siemborski, Rocco Corsi, David Benbennick, Mike Frysinger
- License: GNU GPL v3
- Version Number: 2.2.6
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20141011073917230/TextEditors.html
作者Frazer Kline
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://foicica.com/textadept/
[2]:http://www.vim.org/
[3]:http://ne.di.unimi.it/
[4]:http://www.gnu.org/software/zile/
[5]:http://nano-editor.org/


@ -0,0 +1,56 @@
(translating by runningwater)
UbuTricks 14.10.08
================================================================================
> An Ubuntu utility that allows you to install the latest versions of popular apps and games
UbuTricks is a freely distributed script written in Bash and designed from the ground up to help you install the latest version of the most acclaimed games and graphical applications on your Ubuntu Linux operating system, as well as on various other Ubuntu derivatives.
![](http://i1-linux.softpedia-static.com/screenshots/UbuTricks_1.png)
### What apps can I install with UbuTricks? ###
Currently, the latest versions of Calibre, Fotoxx, Geary, GIMP, Google Earth, HexChat, jAlbum, Kdenlive, LibreOffice, PCManFM, Qmmp, QuiteRSS, QupZilla, Shutter, SMPlayer, Ubuntu Tweak, Wine, XBMC (Kodi), PlayOnLinux, Red Notebook, NeonView, Sunflower, Pale Moon, QupZilla Next, FrostWire and RSSOwl can be installed with UbuTricks.
### What games can I install with UbuTricks? ###
In addition, the latest versions of the 0 A.D., Battle for Wesnoth, Transmageddon, Unvanquished and VCMI (Heroes III Engine) games can be installed with the UbuTricks program. Users can also install the latest version of the Cinnamon and LXQt desktop environments.
### Getting started with UbuTricks ###
The program is distributed as a .sh file (shell script) that can be run from the command line using the “sh ubutricks.sh” command (without quotes), or made executable and double-clicked from your Home folder or desktop. All you have to do is select an app or game and click the OK button to install it.
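In shell terms, the two launch methods look like this; a tiny placeholder script is created first so the session is self-contained (the real ubutricks.sh comes from the download link in this article):

```shell
# Placeholder standing in for the downloaded script, so the
# session below is reproducible:
printf '#!/bin/sh\necho "UbuTricks placeholder"\n' > ubutricks.sh

sh ubutricks.sh         # method 1: run it through the shell interpreter
chmod +x ubutricks.sh   # method 2: mark it executable once...
./ubutricks.sh          # ...then launch it directly (or double-click it)
```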
### How does it work? ###
When accessed for the first time, the program will display a welcome screen right from the get-go, notifying users about how it actually works. There are three methods to install an app or game: via PPA, DEB file or source tarball. Please note that apps and games will be automatically downloaded and installed.
### What distributions are supported? ###
Several versions of the Ubuntu Linux operating system are supported; if none is specified, the script defaults to the current stable version, Ubuntu 14.04 LTS (Trusty Tahr). At the moment, the program will not work if you don't have the gksu package installed on your Ubuntu box. It is based on Zenity, which should be installed too.
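The two dependencies mentioned above can be pulled in from the standard repositories; a minimal sketch, assuming the usual Ubuntu package names:

```shell
# gksu and zenity are the dependencies named in the article:
sudo apt-get install gksu zenity
```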
![](http://i1-linux.softpedia-static.com/screenshots/UbuTricks_2.jpg)
- last updated on: October 9th, 2014, 11:29 GMT
- price: FREE!
- developed by: Dan Craciun
- homepage: [www.tuxarena.com][1]
- license type: [GPL (GNU General Public License)][3]
- category: ROOT \ Desktop Environment \ Tools
### Download for UbuTricks: ###
- [ubutricks.sh][2]
--------------------------------------------------------------------------------
via: http://linux.softpedia.com/get/Desktop-Environment/Tools/UbuTricks-103626.shtml
作者:[Marius Nestor][a]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.softpedia.com/editors/browse/marius-nestor
[1]:http://www.tuxarena.com/apps/ubutricks/
[2]:http://www.tuxarena.com/intro/files/ubutricks.sh
[3]:http://www.gnu.org/licenses/gpl-2.0.html

UbuTricks Script to install the latest versions of several games and applications in Ubuntu
================================================================================
UbuTricks is a program that helps you install the latest versions of several games and applications in Ubuntu.
UbuTricks is a Zenity-based, graphical script with a simple interface. Although early in development, its aim is to create a simple, graphical way of installing updated applications in Ubuntu 14.04 and future releases.
Apps will be downloaded and installed automatically. Some will require a PPA to be added to the repositories. Others will be compiled from source if no PPA is available. The compilation process can take a long time, while installing from a PPA or DEB file should be quick, depending on your download speed.
### The install methods are as follows: ###
- PPA: the program will be downloaded and installed from a PPA
- DEB: the program will be installed from a DEB package
- Source: the program will be compiled (may take a long time)
- Script: the program will be installed using a script provided by the developer
- Archive: the program will be installed from a compressed archive
- Repository: the program will be installed from a repository (not PPA)
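As a rough illustration of the PPA method, an install of that kind usually boils down to the following (the PPA and package names here are invented for the example, not taken from UbuTricks):

```shell
# Add the third-party archive, refresh the package index, install:
sudo add-apt-repository ppa:example-team/example-app
sudo apt-get update
sudo apt-get install example-app
```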
### List of applications you can install ###
The latest versions of the following applications can be installed via UbuTricks:
### Games ###
- 0 A.D.
- Battle for Wesnoth (Dev)
- VCMI (Heroes III Engine)
### File Managers ###
- PCManFM
### Internet ###
- Geary
- HexChat
- QupZilla
- QuiteRSS
### Multimedia ###
- SMPlayer
- Transmageddon
- Kdenlive
- Fotoxx
- jAlbum
- GIMP
- Shutter
- Qmmp
- XBMC
### Office/Ebooks/Documents ###
- Calibre
- LibreOffice
### Tools ###
- Ubuntu Tweak
### Desktop Environments ###
- Cinnamon
### Other ###
- Google Earth
- Wine
### Download and install the UbuTricks script ###
You can download the UbuTricks script from [here][1]. Once downloaded, make it executable and either double-click the script or run it from the terminal.
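From a terminal, those steps might look like this (the URL is the one behind the download link; network access is required):

```shell
wget http://www.tuxarena.com/intro/files/ubutricks.sh   # download the script
chmod +x ubutricks.sh                                   # make it executable
./ubutricks.sh                                          # launch it
```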
### Screenshots ###
![](http://www.ubuntugeek.com/wp-content/uploads/2014/10/116.png)
![](http://www.ubuntugeek.com/wp-content/uploads/2014/10/213.png)
![](http://www.ubuntugeek.com/wp-content/uploads/2014/10/35.png)
![](http://www.ubuntugeek.com/wp-content/uploads/2014/10/45.png)
--------------------------------------------------------------------------------
via: http://www.ubuntugeek.com/ubutricks-script-to-install-the-latest-versions-of-several-games-and-applications-in-ubuntu.html
作者:[ruchi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.ubuntugeek.com/author/ubuntufix
[1]:http://www.tuxarena.com/intro/files/ubutricks.sh

6 Minesweeper Clones for Linux
================================================================================
### GNOME Mines ###
This is the GNOME Minesweeper clone, allowing you to choose from three different pre-defined table sizes (8×8, 16×16, 30×16) or a custom number of rows and columns. It can be run in fullscreen mode and comes with highscores, elapsed time and hints. The game can be paused and resumed.
![](http://www.tuxarena.com/wp-content/uploads/2014/10/gnome-mines1.jpg)
### ace-minesweeper ###
This is part of a package that contains some other games too, like ace-freecell, ace-solitaire or ace-spider. It has a graphical interface featuring Tux, but doesn't seem to come with different table sizes. The package is called ace-of-penguins in Ubuntu.
![](http://www.tuxarena.com/wp-content/uploads/2014/10/ace-minesweeper.jpg)
### XBomb ###
XBomb is a mines game for the X Window System with three different table sizes and tiles which can take different shapes: hexagonal, rectangular (traditional) or triangular. Unfortunately the current version in Ubuntu 14.04 crashes with a segmentation fault, so you may need to install another version to make it work.
[Homepage][1]
![](http://www.tuxarena.com/wp-content/uploads/2014/10/xbomb.png)
([Image credit][1])
### KMines ###
KMines is the KDE Minesweeper game; just like GNOME Mines, it offers three built-in table sizes (easy, medium, hard) plus custom, along with support for themes and highscores.
![](http://www.tuxarena.com/wp-content/uploads/2014/10/kmines.jpg)
### freesweep ###
Freesweep is a Minesweeper clone for the terminal which allows you to configure settings such as table rows and columns, percentage of bombs, colors and also has a highscores table.
![](http://www.tuxarena.com/wp-content/uploads/2014/10/freesweep.jpg)
### xdemineur ###
Another graphical Minesweeper clone for X, Xdemineur is very much like Ace-Minesweeper, with one predefined table size.
![](http://www.tuxarena.com/wp-content/uploads/2014/10/xdemineur.jpg)
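Most of the clones above are a single install away on Ubuntu; a sketch, assuming the usual Ubuntu package names (verify them with `apt-cache search` first):

```shell
# gnome-mines, kmines, freesweep and xdemineur are the customary
# package names; ace-of-penguins is named in the article itself:
sudo apt-get install gnome-mines ace-of-penguins kmines freesweep xdemineur
```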
--------------------------------------------------------------------------------
via: http://www.tuxarena.com/2014/10/6-minesweeper-clones-for-linux/
作者:Craciun Dan
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.gedanken.org.uk/software/xbomb/

sources/share/README.md
Share-type articles go here, including brief introductions to various software, useful books and websites, and so on.

Red Hat designs RHEL for a decade-long run
================================================================================
> The newly released RHEL 7 includes Docker containers and the new terabyte-scaled XFS file system
IDG News Service - Knowing how system administrators enjoy continuity, Red Hat has designed the latest release of its flagship Linux distribution to be run, with support, until 2024.
Red Hat Enterprise Linux 7 (RHEL 7), the completed version of which was shipped Tuesday, also features a number of new technologies that the company sees as instrumental for the next decade, including the Docker Linux Container system and the advanced XFS file system.
"XFS opens the door for a new class of business analytics, big data and data analytics," said Mark Coggin, Red Hat senior director of product marketing.
The last major update to RHEL, RHEL 6, was released in November 2010. Since then, server software has been used in an increasingly wide variety of operational scenarios, including providing the basis for bare metal servers, virtual machines, IaaS (infrastructure-as-a-service) and PaaS (platform-as-a-service) cloud packages.
Red Hat will support RHEL 7 with bug fixes and commercial support for up to 10 years. The company generally releases a major version of RHEL every three years.
In contrast, Canonical's Ubuntu LTS (long-term support) distributions are supported for five years. Suse Enterprise Linux [is also supported][1], in most aspects, for up to 10 years.
This is the first edition to include Docker, a container technology [that could act as a nimbler replacement][2] to full virtual machines used in cloud operations. Docker provides a way to package an application in a virtual container so that it can be run across different Linux servers.
Red Hat expects that containers will be widely deployed over the next few years as a way to package and run applications, thanks to their portable nature.
"Customers have told us they are looking for a lighter weight version of developing applications. The applications themselves don't need a full operating system or a virtual machine," Coggin said. The system calls are answered by the server's OS and the container includes only the necessary support libraries and the application. "We only put into that container what we need," he said.
Containers are also easier to maintain because users don't have to worry about updating or patching the full OS within a virtual machine, Coggin said.
Red Hat is also planning a special stripped-down release of RHEL, now code-named RHEL Atomic, which will be a distribution for just running containers. Containers that run on the regular RHEL can easily be transferred to RHEL Atomic, once that OS is available. They will also run on Red Hat OpenShift PaaS.
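The container workflow described above can be sketched in a few commands (the image name is illustrative only; the actual image and registry depend on your Red Hat subscription):

```shell
sudo docker pull rhel7                # fetch a base image
sudo docker run -it rhel7 /bin/bash   # start a shell in an isolated container
sudo docker ps                        # list running containers
```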
Red Hat is also supporting Docker through its switch in RHEL 7 to the systemd process manager, replacing Linux's long used init process manager. Systemd "gives the administrator a lot of additional flexibility in managing the underlying processes inside of RHEL. It also has a tie back to the container initiative and is very integral to the way the processes are stood up and managed in containers," Coggin said.
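In day-to-day administration, the systemd switch surfaces as the `systemctl` command; a minimal sketch (the docker unit name assumes the Docker package is installed):

```shell
sudo systemctl start docker.service    # start a service now
sudo systemctl enable docker.service   # have it start at boot
systemctl status docker.service        # inspect its processes and recent logs
```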
Red Hat has switched the default file system in RHEL 7 to XFS, which can handle up to 500TB on a single partition. The previous default file system, ext4, supported only 50TB. Ext4 is still available as an option, as are a number of other file systems such as GFS2 and Btrfs (the latter under technology preview).
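Creating an XFS file system looks much like any other mkfs invocation; a sketch (the device name is only an example, and formatting destroys existing data on it):

```shell
sudo mkfs.xfs /dev/sdb1      # format the partition as XFS -- destructive!
sudo mount /dev/sdb1 /mnt
df -T /mnt                   # confirm the mounted file system type is xfs
```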
Red Hat has added greater interoperability with the Microsoft Windows environment. Organizations can now use Microsoft Active Directory to securely authenticate users on Red Hat systems. Tools are also included in RHEL 7 to offer Red Hat credentials for Windows servers.
"Customers have thousands of Windows servers and thousands of RHEL servers, and they need ways to integrate the two," Coggin said.
The installation process has been sped up as well, thanks to an update to the Anaconda installer, which now allows administrators to preselect server configurations at the start of the installation process. RHEL 7 also includes the industry-standard OpenLMI (Open Linux Management Infrastructure), which allows the administrator to manage services at a granular level through a standardized API (application programming interface).
"OpenLMI is another important way of improving stability and efficiency by helping to manage systems better," Coggin said.
--------------------------------------------------------------------------------
via: http://www.computerworld.com/s/article/9248988/Red_Hat_designs_RHEL_for_a_decade_long_run?taxonomyId=122
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://www.suse.com/support/policy.html
[2]:http://www.infoworld.com/d/virtualization/docker-all-geared-the-enterprise-244020

What will your business look like in 2030?
================================================================================
![](http://cdn1.tnwcdn.com/wp-content/blogs.dir/1/files/2014/06/business-man-roof-deck-798x310.jpg)
Ilya Pozin is a serial entrepreneur, writer, and investor. He is the founder of online video entertainment platform [Pluto.TV][1], social greeting card company [Open Me][2], and digital marketing agency [Ciplex][3].
The year is 2030, and you're walking into the front doors of your company. What will it look like, what functions will your employees be performing and how will you stack up against the competition?
You might not be considering the future, but remember that [25 years ago][4], only 15 percent of US households had a personal computer. While 73 percent of online adults currently have a social media account, social media barely existed 15 years ago.
Technology is always changing, and with it come disruptions to industries, companies and the employment marketplace. The future is closing in, but is your company ready?
### Why should you be worried? ###
In business, to stop moving forward means your company is stagnating; for many companies, stagnation equates to eventual death. Companies clinging to outmoded and outdated business practices eventually run into major problems. There are examples everywhere in the marketplace, from struggling BlackBerry phones to Kodak slowly shuttering its film business.
According to [futurist and TED talk speaker Thomas Frey][5], two billion jobs will disappear by 2030 thanks to shifting technologies and changing needs. You can't afford to be behind the pack when the future comes calling.
### What will 2030 look like? ###
![](http://cdn1.tnwcdn.com/wp-content/blogs.dir/1/files/2014/05/calendar.jpg)
Recently, the [Canadian Scholarship Trust][6], as part of its Inspired Minds campaign, [put together a list of the jobs][7] we might all be hiring for in 2030. These jobs range from “Company Culture Ambassador” to (get this!) “Nostalgist.”
Taking CST's lead, I spoke to some entrepreneurs and innovators in different fields, from medicine to marketing, to see their predictions for how businesses will be run in the future. Hopping in our time travel machine, here's a glimpse at what 2030 might look like:
### Cloud-based ###
“Everything will be cloud-based with faster speeds,” said Marjorie Adams, [AQB][8] CEO and President. “The technologies coming out now will be better defined and connected. While innovation from the business side could be a lot slower-going than the consumer side, we will have a lot more data to understand real needs.”
### Automated ###
Google is already leading the way with the self-driving car, but automation might creep into other aspects of our lives in the future.
“Home automation will be very different in 2030,” said Andrew Thomas, co-founder of [SkyBell Technologies, Inc][9]. “We'll all have brain-sensing headbands and glasses and we'll just think about locking the door or turning off the lights. Our fridge will email the store when we're low on food and our self-driving cars will go pick up the groceries for us!”
### Human curated ###
As more and more options become available to consumers, we'll all become overwhelmed by choice. Human curation will come back into vogue for everything from music to online video.
We're already seeing the trend start now with [Apple's acquisition][10] of human curated music service Beats. After all, do you really think apps are [smarter than you][11]?
### Socially-connected ###
If you can't watch the latest episode of Scandal or Game of Thrones, it's common sense to stay off your Facebook and Twitter feeds.
“Imagine a media environment 15 years into the future where no object or entertainment venue is out of reach for second-screen integration with social media,” said Jared Feldman, CEO and founder of [Mashwork][12]. “Social platforms like Facebook and Twitter might as well be agnostic at this point in time since consumers will have aggregated all of their digital social life into consolidated user profiles designed to curate multiple feeds and allow for single-source user engagement.”
### Targeted ###
Already advertising is becoming more and more targeted to consumers' needs thanks to big data and algorithms. Don't expect this trend to move backwards, at least according to [FlexOne][13] CEO Matthijs Keij.
“Advertisers will know more about you than you yourself. Which products you like, how to improve your personal and work life, and even how to be more healthy. Sounds a little like Huxley's Brave New World? Maybe… but consumers might actually like it.”
### How do your prepare? ###
![](http://cdn1.tnwcdn.com/wp-content/blogs.dir/1/files/2011/01/Crystal-Ball-12-27-09-iStock_000003107697XSmall.jpg)
Preparing for the future might seem impossible, but you don't need a crystal ball to keep abreast of changes. It's important to always keep up with trends and emerging technology, both in the economy in general and within your industry in particular.
Go to conferences, attend industry talks, and make time for industry trade shows. Pay attention to the new technology entering your sector, and don't turn your nose up at something new just because it's different from the way things have always been.
Understand your customers and know what they need, because the future is looking more consumer-focused than ever before, even in segments like healthcare. “The paradigm is shifting to a more consumer-centric model,” said Robert Grajewski, CEO of [Edison Nation Medical][14]. “Healthcare as a whole will shift to this individual care focus.”
Companies that understand their core competencies and their consumer needs will have a leg up on the competition.
As more digital natives come of age and flock into the economy, some highly skilled fields will see consumers picking up additional skills.
“By 2030 virtually everyone will be a designer, equipped with knowledge of the hottest mega trends and ripe and ready to replace those who can't keep up with the latest software,” said Ashley Mady, CEO of [Brandberry][15].
“The best way to prepare for this inevitable shift in the design world is to focus on creative, big picture thinking over production, which will soon become a commodity. Designers should remain innovative by developing their own adaptable brands and technology that will grow alongside the quickly evolving world we live in.”
Finally, it's important to be open, curious, and willing to pivot. New technologies are going to come along to improve, and sometimes complicate, your business. You need to be willing to embrace these new paradigms, or you risk your company becoming obsolete.
What do you think? How do you plan to prepare for the future? Share in the comments!
--------------------------------------------------------------------------------
via: http://thenextweb.com/entrepreneur/2014/06/18/will-business-look-like-2030/?utm_campaign=share%20button&utm_content=What%20will%20your%20business%20look%20like%20in%202030?&awesm=tnw.to_q3L0P&utm_source=copypaste&utm_medium=referral
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://pluto.tv/
[2]:http://www.openme.com/
[3]:http://www.ciplex.com/
[4]:http://www.cnbc.com/id/101611509
[5]:http://www.futuristspeaker.com/2012/02/2-billion-jobs-to-disappear-by-2030/
[6]:http://www.cst.org/
[7]:http://careers2030.cst.org/jobs/
[8]:http://www.aqb.com/
[9]:http://www.skybell.com/
[10]:http://thenextweb.com/apple/2014/05/28/apple-confirms-acquisition-beats/
[11]:http://thenextweb.com/apps/2013/10/19/i-let-apps-tell-me-how-to-live-for-a-day/
[12]:http://mashwork.com/
[13]:http://www.flxone.com/
[14]:http://www.edisonnationmedical.com/
[15]:http://www.brandberry.com/

Linux Poetry Explains the Kernel, Line By Line
================================================================================
> Editor's Note: Feeling inspired? Send your Linux poem to [editors@linux.com][1] for your chance to win a free pass to [LinuxCon North America][2] in Chicago, Aug. 20-22. Be sure to include your name, contact information and a brief explanation of your poem. We'll draw one winner at random from all eligible entries each week through Aug. 1, 2014.
![Software developer Morgan Phillips is teaching herself how the Linux kernel works by writing poetry.](http://www.linux.com/images/stories/41373/Morgan-Phillips-2.jpg)
Software developer Morgan Phillips is teaching herself how the Linux kernel works by writing poetry.
Writing poems about the Linux kernel has been enlightening in more ways than one for software developer Morgan Phillips.
Over the past few months she's begun to teach herself how the Linux kernel works by studying text books, including [Understanding the Linux Kernel][3], Unix Network Programming, and The Unix Programming Environment. But instead of taking notes, she weaves the new terminology and ideas she learns into poetry about system architecture and programming concepts. (See some examples, below, and on her [Linux Poetry blog][4].)
It's a “pedagogical hack” she adopted in college and took up again a few years ago when she first landed a job as a data warehouse engineer at Facebook and needed to quickly learn Hadoop.
“I could remember bits and pieces of information but it was too rote, too rigid in my mind, so I started writing poems,” she said. “It forced me to wrap all of these bits of information into context and helped me learn things much more effectively.”
The Linux kernel's history, architecture, abundant terminology and complex concepts, are rich fodder for her poetry.
“I could probably write thousands of poems about just one subsystem in the kernel,” she said.
### Why learn Linux? ###
![Phillips publishes on her Linux Poetry blog.](http://www.linux.com/images/stories/41373/multiplexing-poem.png)
Phillips publishes on her Linux Poetry blog.
Phillips started her software career through a somewhat unconventional route as a physics major in a research laboratory. Instead of writing journal articles she was writing Python scripts to parse research project data on active galactic nuclei. She never learned the fundamentals of computer science (CS), but picked up the information on the job, as the need arose.
She soon got a job doing network security research for the Army Research Laboratory in Adelphi, Maryland, working with Linux. That was her first foray into the networking stack and the lower levels of the operating system.
Most recently she worked at Facebook until about six months ago when she moved from the Silicon Valley back to Nashville, near her home state of Kentucky, to work for a software startup that helps major record labels manage their business.
“I have all this experience but I suffer from a thing that almost every person who doesn't have an actual background in CS does: I have islands of knowledge with big gaps in between,” she said. “Every time I'd come across some concept, some data structure in the kernel, I'd have to go educate myself on it.”
A few weeks ago her frustration peaked. She was trying to do a form of message passing between web application processes and a web socket server she had written and found herself having to brush up on all the ways she could do interprocess communication.
“I was like, that's it. I'm going to start really learning everything I should have known starting at the bottom up with the Linux kernel,” she said. “So I bought some textbooks and started reading.”
![](http://www.linux.com/images/stories/41373/process-poem.png)
### What she's learned ###
Over the course of a few months of reading books and writing poems she's learned about how the virtual memory subsystem works. She's learned about the data structures that hold process information, about the virtual memory layout and how pages are mapped into memory, and about memory management.
“I hadn't thought about a lot of things, like that a system that's multiprocessing shouldn't bother with semaphores,” she said. “Spin locks are often more efficient.”
Writing poems has also given her insight into her own way of thinking about the world. In some small way she is communicating not just her knowledge of Linux systems, but also the way that she conceptualizes them.
“It's a deep look into my mind,” she said. “Poetry is the best way to share these abstract ideas and things that we can't possibly truly share with other people.”
### Writing a Linux poem ###
The inspiration for her Linux poems starts with reading a textbook chapter. She hones the topics down to the key concepts that she wants to remember and what others might find interesting, as well as things she can “wrap a conceptual bubble around.”
A concept like demand paging is too broad to fit into a single poem, for example. “So I'm working my way down deeper in it,” she said. “Instead I'm looking at writing a poem about the actual data structure where process memory is laid out and then mapped into a page map.”
She hasn't had any formal training writing poetry, but writes the lines so that they are visually appealing and have a nice rhythm when they're read aloud.
In her poem, “The Reentrant Kernel,” Phillips writes about an important property in software that allows a function to be paused and restarted later with the same result. System calls need to have this reentrant property in order to make the scheduler run as efficiently as possible, Phillips explains. The poem also includes a program, written in C style pseudocode, to help illustrate the concept.
Phillips hopes her Linux poetry helps her increase her understanding enough to start contributing to the Linux kernel.
“I've been very intimidated for a long time by the idea of submitting a patch to the kernel, being a kernel hacker,” she said. “To me that's the pinnacle of success.
“My ultimate dream is that I can gain a good enough understanding of the kernel and C to submit a patch and have it accepted.”
The Reentrant Kernel
A reentrant function,
if interrupted,
will return a result,
which is not perturbed.
int global_int;
int is_not_reentrant(int x) {
int x = x;
return global_int + x; },
depends on a global variable,
which may change during execution.
int global_int;
int is_reentrant(int x) {
int saved = global_int;
return saved + x; },
mitigates external dependency,
it is reentrant, though not thread safe.
UNIX kernels are reentrant,
a process may be interrupted while in kernel mode,
so that, for instance, time is not wasted,
waiting on devices.
Process alpha requests to read from a device,
the kernel obliges,
CPU switches into kernel mode,
system call begins execution.
Process alpha is waiting for data,
it yields to the scheduler,
process beta writes to a file,
the device signals that data is available.
Context switches,
process alpha continues execution,
data is fetched,
CPU enters user mode.
(Translator's note: when publishing, follow the original layout for the poem text above: first line emphasized, all lines centered.)
--------------------------------------------------------------------------------
via: http://www.linux.com/news/featured-blogs/200-libby-clark/777473-linux-poetry-explains-the-kernel-line-by-line/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:editors@linux.com
[2]:http://events.linuxfoundation.org/events/linuxcon-north-america
[3]:http://shop.oreilly.com/product/9780596005658.do
[4]:http://www.linux-poetry.com/

CNprober translating...
Linux Administration: A Smart Career Choice
================================================================================
![](http://www.opensourceforu.com/wp-content/uploads/2014/04/linux.jpeg)

**Translating by jabirus...**
The People Who Support Linux: Hacking on Linux Since Age 16
================================================================================
一个 Linux 支持者:从 16 岁开始在 Linux 上 hack
================================================================================
![](http://www.linux.com/images/stories/41373/Yitao-Li.png)
>Pretty much all of the projects in software developer [Yitao Li's GitHub repository][1] were developed on his Linux machine. None of them are necessarily Linux-specific, he says, but he uses Linux for “everything.”
在软件开发者[李逸韬的 GitHub 仓库][1]中,相当多的项目是在他的 Linux 机器上完成的。它们没有一个是必须特定需要 Linux 的,但李逸韬说他使用 Linux 来做”任何事情“。
For example: “coding / scripting, web browsing, web hosting, anything cloud-related, sending / receiving PGP signed emails, tweaking IP table rules, flashing OpenWrt image into routers, running one version of Linux kernel while compiling another version, doing research, doing homework (e.g., typing math equations in Tex), and many others...” Li said via email.

Love-xuan translating
Don't Fear The Command Line
================================================================================
![](http://a4.files.readwrite.com/image/upload/c_fill,h_900,q_70,w_1600/MTE5NTU2MzIyNTM0NTg5OTYz.jpg)

shaohaolin translating
Can Ubuntu Do This? — Answers to The 4 Questions New Users Ask Most
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/Screen-Shot-2014-08-13-at-14.31.42.png)
**Type “Can Ubuntu” into Google and you'll see a stream of auto-suggested terms put before you, all based on the queries asked most often by curious searchers.**
For long-time Linux users these queries all have rather obvious answers. But for new users, or those feeling out whether a distribution like Ubuntu is for them, the answers are not quite so obvious; they're pertinent, real and essential asks.
So, in this article, I'm going to answer the top four most searched for questions asking “*Can Ubuntu…?*”
### Can Ubuntu Replace Windows? ###
![Windows isn't to everyone's tastes — or needs](http://www.omgubuntu.co.uk/wp-content/uploads/2014/07/windows-9-desktop-rumour.png)
Windows isn't to everyone's tastes — or needs
Yes. Ubuntu (and most other Linux distributions) can be installed on just about any computer capable of running Microsoft Windows.
Whether you **should** replace it will, invariably, depend on your own needs.
For example, if you're attending a school or college that requires access to Windows-only software, you may want to hold off replacing it entirely. The same goes for businesses: if your work depends on Microsoft Office, Adobe Creative Suite or a specific AutoCAD application, you may find it easier to stick with what you have.
But for most of us Ubuntu can replace Windows full-time. It offers a safe, dependable desktop experience that can run on and support a wide range of hardware. Software available covers everything from office suites to web browsers, video and music apps to games.
### Can Ubuntu Run .exe Files? ###
![You can run some Windows apps in Ubuntu](http://www.omgubuntu.co.uk/wp-content/uploads/2013/01/adobe-photoshop-cs2-free-linux.png)
You can run some Windows apps in Ubuntu
Yes, though not out of the box, and not with guaranteed success. This is because software distributed as .exe files is meant to run on Windows. Such files are not natively compatible with any other desktop operating system, including Mac OS X or Android.
Software installers made for Ubuntu (and other Linux distributions) tend to come as .deb files. These can be installed similarly to .exe — you just double-click and follow any on-screen prompts.
But Linux is versatile. Using a compatibility layer called Wine (which technically is not an emulator, but for simplicity's sake can be referred to as one), it can run many popular apps. They won't work quite as well as they do on Windows, nor look as pretty. But, for many, it works well enough to use on a daily basis.
Notable Windows software that can run on Ubuntu through Wine includes older versions of Photoshop and early versions of Microsoft Office. For a list of compatible software [refer to the Wine App Database][1].
### Can Ubuntu Get Viruses? ###
![It may have errors, but it doesn't have viruses](http://www.omgubuntu.co.uk/wp-content/uploads/2014/04/errors.jpg)
It may have errors, but it doesn't have viruses
Theoretically, yes. But in reality, no.
Linux distributions are built in a way that makes it incredibly hard for viruses, malware and rootkits to be installed, much less run and do any significant damage.
For example, most applications run as a regular user, without the special administrative privileges a virus would need to access critical parts of the operating system. Most software is also installed from well maintained and centralised sources, like the Ubuntu Software Center, and not random websites. This makes the risk of installing something that is infected negligible.
Should you use anti-virus on Ubuntu? That's up to you. For peace of mind, or if you're regularly using Windows software through Wine or dual-booting, you can install a free and open-source virus scanner app like ClamAV, available from the Software Center.
You can learn more about the potential for viruses on Linux/Ubuntu [on the Ubuntu Wiki][2].
### Can Ubuntu Play Games? ###
![Steam has hundreds of high-quality games for Linux](http://www.omgubuntu.co.uk/wp-content/uploads/2012/11/steambeta.jpg)
Steam has hundreds of high-quality games for Linux
Oh yes it can. From the traditionally simple distractions of 2D chess, word games and minesweeper to modern AAA titles requiring powerful graphics cards, Ubuntu has a diverse range of games available for it.
Your first port of call will be the **Ubuntu Software Center**. Here you'll find a sizeable number of free, open-source and paid-for games, including acclaimed indie titles like World of Goo and Braid, as well as several sections filled with more traditional offerings, like PyChess, four-in-a-row and Scrabble clones.
For serious gaming you'll want to grab **Steam for Linux**. This is where you'll find some of the latest and greatest games available, spanning the full gamut of genres.
Also keep an eye on the [Humble Bundle][3]. These “pay what you want” packages are offered for two weeks every month or so. The folks at Humble have been fantastic supporters of Linux as a gaming platform, single-handedly ensuring the Linux debut of many touted titles.
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/08/ubuntu-can-play-games-replace-windows-questions
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:https://appdb.winehq.org/
[2]:https://help.ubuntu.com/community/Antivirus
[3]:https://www.humblebundle.com/


@ -1,3 +1,4 @@
zpl1025
Will Linux ever be able to give consumers what they want?
================================================================================
> Jack Wallen offers up the novel idea that giving the consumers what they want might well be the key to boundless success.
@ -48,4 +49,4 @@ via: http://www.techrepublic.com/article/will-linux-ever-be-able-to-give-consume
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.techrepublic.com/search/?a=jack+wallen
[1]:http://www.techrepublic.com/article/is-the-cloudbook-the-future-of-linux/


@ -1,107 +0,0 @@
Want To Start An Open Source Project? Here's How
================================================================================
> Our step-by-step guide.
**You have a problem. You've weighed the** [pros and cons of open sourcing your code][1], and you know [you need to start an open-source project][2] for your software. But you have no idea how to do this.
Oh, sure. You may know how to set up a GitHub account and get started, but such [mechanics][3] are actually the easy part of open source. The hard part is making anyone care enough to use or contribute to your project.
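Those mechanics really are the easy part. A minimal sketch of them, with a hypothetical project name and remote URL:

```shell
# Hypothetical project setup; "myproject" and the remote URL are placeholders.
mkdir myproject && cd myproject
git init
echo "# myproject" > README.md
touch LICENSE                       # choosing a license is covered below
git add README.md LICENSE
# The -c flags set an identity just for this commit:
git -c user.name="Example" -c user.email="example@example.com" \
    commit -m "Initial commit"
# Publishing to a host like GitHub is two more commands:
#   git remote add origin git@github.com:user/myproject.git
#   git push -u origin master
```

Everything after that, as the rest of this article argues, is the hard part.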
![](http://a4.files.readwrite.com/image/upload/c_fit,q_80,w_630/MTE5NDg0MDYxMTg2Mjk1MzEx.jpg)
Here are some principles to guide you in building and releasing code that others will care about.
### First, The Basics ###
You may choose to open source code for a variety of reasons. Perhaps you're looking to engage a community to help write your code. Perhaps, [like Known][4], you see "open source distribution ... as a multiplier for the small teams of developers writing the code in-house."
Or maybe you just think it's the right thing to do, [as the UK government believes][5].
Regardless of the reason, this isn't about you. Not really. For open source to succeed, much of the planning has to be about those who will use the software. As [I wrote in 2005][6], if you "want lots of people to contribute (bug fixes, extensions, etc.)," then you need to "write good documentation, use an accessible programming language ... [and] have a modular framework."
Oh, and you also need to be writing software that people care about.
Think about the technology you depend on every day: operating systems, web application frameworks, databases, and so on. These are far more likely to generate outside interest and contributions than a niche technology for a particular industry like aviation. The broader the application of the technology, the more likely you are to find willing contributors and/or users.
In summary, any successful open-source project needs these things:
1. Optimal market timing (solving a real need in the market);
2. A strong, inclusive team of developers and non-developers;
3. An architecture of participation (more on that below);
4. Modular code to make it easier for new contributors to find a discrete chunk of the program to work on, rather than forcing them to scale an Everest of monolithic code;
5. Code that is broadly applicable (or a way to reach the narrower population more niche-y code appeals to);
6. Great initial source code (if you put garbage into GitHub, you'll get garbage out);
7. A permissive license—I [personally prefer Apache-style licensing][7] as it introduces the lowest barriers to developer adoption, but many successful projects (like Linux and MySQL) have used GPL licensing to great effect.
Of the items above, it's sometimes hardest for projects to actively invite participation. That's usually because this is less about code and more about people.
### "Open" Is More Than A License ###
One of the best things I've read in years on this subject comes from Vitorio Miliano ([@vitor_io][8]), a user experience and interaction designer from Austin, Texas. [Miliano points out][9] that anyone who doesn't already work on your project is a "layperson," in the sense that no matter their level of technical competence, they know little about your code.
So your job, he argues, is to make it easy to get involved in contributing to your code base. While he focuses on how to involve non-programmers in open-source projects, he identifies a few things project leads need to do to effectively involve anyone—technical or non-technical—in open source:
> 1. a way to understand the value of your project
>
> 2. a way to understand the value they could provide to the project
>
> 3. a way to understand the value they could receive from contributing to the project
>
> 4. a way to understand the contribution process, end-to-end
>
> 5. a contribution mechanism suitable for their existing workflows
Too often, project leads want to focus on the fifth step without providing an easy path to understand items 1 through 4. "How" to contribute doesn't matter very much if would-be contributors don't appreciate the "why."
On that note, it's critical, Miliano writes, to establish the value of the project with a "jargon-free description" so as to "demonstrate your accessibility and inclusiveness by writing your descriptions to be useful to everyone at all times." This has the added benefit, he avers, of signaling that documentation and other code-related content will be similarly clear.
On the second item, programmers and non-programmers alike need to be able to see exactly what you'd like from them, and then they need to be recognized for their contributions. Sometimes, as MongoDB solution architect [Henrik Ingo told me][10], "A smart person [may] come[] by with great code, but project members fail to understand it." That's not a terrible problem if the "in" group acknowledges the contribution and reaches out to understand.
But that doesn't always happen.
### Do You Really Want To Lead An Open Source Project? ###
Too many open-source project leads advertise inclusiveness but then are anything but inclusive. If you don't want people contributing code, don't pretend to be open source.
Yes, this is sometimes a function of newbie fatigue. As [one developer wrote][11] recently on HackerNews,
> Small projects get lots of, well, basically useless people who need tons of handholding to get anything accomplished. I see the upside for them, but I don't see the upside for me: if I where[sic] to help them out, I'd spend my limited available time on handholding people who apparently managed to get ms degrees in cs without being able to code instead of doing what I enjoy. So I ignore them.
While that may be a good way to maintain sanity, the attitude doesn't bode well for a project if it's widely shared.
And if you really couldn't care less about non-programmers contributing design input, or documentation, or whatever, then make that clear. Again, if this is the case, you really shouldn't be an open-source project.
Of course, the perception of exclusion is not always reality. As ActiveState vice president Bernard Golden told me over IM, "many would-be developers are intimidated by the perception of an existing 'in-crowd' dev group, even though it may not really be true."
Still, the more open source projects invest in making it easy to understand why developers should contribute, and in making it inviting to do so, the more the “how” takes care of itself.
Lead image courtesy of [Shutterstock][12]
--------------------------------------------------------------------------------
via: http://readwrite.com/2014/08/20/open-source-project-how-to
作者:[Matt Asay][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://readwrite.com/author/matt-asay
[1]:http://readwrite.com/2014/07/07/open-source-software-pros-cons
[2]:http://readwrite.com/2014/08/15/open-source-software-business-zulily-erp-wall-street-journal
[3]:http://www.cocoanetics.com/2011/01/starting-an-opensource-project-on-github/
[4]:http://werd.io/2014/the-roi-of-building-open-source-software
[5]:https://www.gov.uk/design-principles
[6]:http://asay.blogspot.com/2005/09/so-you-want-to-build-open-source.html
[7]:http://www.cnet.com/news/apache-better-than-gpl-for-open-source-business/
[8]:https://twitter.com/vitor_io
[9]:http://opensourcedesign.is/blogging_about/import-designers/
[10]:https://twitter.com/h_ingo/status/501323333301190656
[11]:https://news.ycombinator.com/item?id=8122814
[12]:http://www.shutterstock.com/


@ -1,89 +0,0 @@
[felixonmars translating...]
10 Open Source Cloning Software For Linux Users
================================================================================
> These cloning tools take all the disk data, convert it into a single .img file, and let you copy it to another hard drive.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/photo/150x150x1Qn740810PM9112014.jpg.pagespeed.ic.Ch7q5vT9Yg.jpg)
Disk cloning means copying data from one hard disk to another, and in principle you could do this with a simple copy & paste. But plain copying cannot handle hidden files and folders, or files that are in use. That's when you need a cloning tool, which can also save a back-up image of your files and folders. Cloning software takes all the disk data, converts it into a single .img file, and lets you copy it to another hard drive. Here are the 10 best open source cloning tools:
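The underlying idea is a raw byte-for-byte copy. A minimal sketch with plain `dd` (device names are placeholders; the dedicated tools below add compression, filesystem awareness and friendlier interfaces on top):

```shell
# Whole-drive cloning in its rawest form; /dev/sdX and /dev/sdY are
# placeholders -- always double-check device names before running dd.
#
#   sudo dd if=/dev/sdX of=disk.img bs=4M    # drive -> single .img file
#   sudo dd if=disk.img of=/dev/sdY bs=4M    # .img file -> another drive
#
# The same byte-for-byte mechanism, demonstrated safely on a small file:
head -c 1048576 /dev/urandom > source.bin    # a fake 1 MiB "disk"
dd if=source.bin of=clone.img bs=64K 2>/dev/null
cmp -s source.bin clone.img && echo "clone is identical"
```

Note that a raw copy like this includes free space too, which is one reason the filesystem-aware tools in this list can produce smaller images in less time.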
### 1. [Clonezilla][1]: ###
Clonezilla is a Live CD based on Ubuntu and Debian. It clones all your hard drive data and takes a backup, just like Norton Ghost on Windows, but in a more effective way. Clonezilla supports many filesystems like ext2, ext3, ext4, btrfs, xfs and others. It also supports BIOS and UEFI booting, as well as MBR and GPT partition tables.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x450xZ34_clonezilla-600x450.png.pagespeed.ic.8Jq7pL2dwo.png)
### 2. [Redo Backup][2]: ###
Redo Backup is another Live CD tool which clones your drives easily. It is a free and open source live system licensed under GPL 3. Its main features include an easy GUI that boots from CD with no installation, restoration of Linux and Windows systems, access to files without any log-in, recovery of deleted files, and more.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x450x7D5_Redo-Backup-600x450.jpeg.pagespeed.ic.3QMikN07F5.jpg)
### 3. [Mondo Rescue][3]: ###
Mondo works differently from the other tools. It doesn't convert your hard drives into an .img file; it converts them into an .iso image. With Mondo you can also create a custom Live CD using “mindi”, a special tool developed by Mondo Rescue, so you can clone your data from the Live CD. It supports most Linux distributions and FreeBSD, and it is licensed under GPL.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x387x3C4_MondoRescue-620x387.jpeg.pagespeed.ic.cqVh7nbMNt.jpg)
### 4. [Partimage][4]: ###
This is open-source backup software which runs on Linux. It's available to install from the package manager of most Linux distributions, and if you don't have a Linux system you can use “SystemRescueCd”, a Live CD which includes Partimage by default to do the cloning you want. Partimage is very fast at cloning hard drives.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x424xBZF_partimage-620x424.png.pagespeed.ic.ygzrogRJgE.png)
### 5. [FSArchiver][5]: ###
FSArchiver is a follow-up to Partimage, and it is again a good tool to clone hard disks. It supports cloning Ext4 partitions and NTFS partitions, basic file attributes like owner, permissions, extended attributes like those used by SELinux, basic file system attributes for all Linux file systems and so on.
### 6. [Partclone][6]: ###
Partclone is a free tool which clones and restores partitions. Written in C, it first appeared in 2007, and it supports many filesystems like ext2, ext3, ext4, xfs, nfs, reiserfs, reiser4, hfs+ and btrfs. It is very simple to use and it's licensed under GPL.
### 7. [doClone][7]: ###
doClone is a free software project developed to clone Linux system partitions easily. It's written in C++ and supports up to 12 different filesystems. It can perform GRUB bootloader restoration and can also transfer the clone image to another computer over the LAN. It also supports live cloning, which means you will be able to clone a system even while it's running.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x396x2A6_doClone-620x396.jpeg.pagespeed.ic.qhimTILQPI.jpg)
### 8. [Macrium Reflect Free Edition][8]: ###
Macrium Reflect Free Edition is claimed to be one of the fastest disk cloning utilities, though it supports only Windows file systems. It has a fairly straightforward user interface. This software does disk imaging and disk cloning, and also allows you to access images from the file manager. It allows you to create a Linux rescue CD, and it is compatible with Windows Vista and 7.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x464xD1E_open1.jpg.pagespeed.ic.RQ41AyMCFx.png)
### 9. [DriveImage XML][9]: ###
DriveImage XML uses Microsoft VSS to create images quite reliably. With this software you can create "hot" images from a disk that is still running. Images are stored as XML files, which means you can access them from any supporting third-party software. DriveImage XML also allows restoring an image to a machine without any reboot. This software is also compatible with Windows XP, Windows Server 2003, Vista, and 7.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x475x357_open2.jpg.pagespeed.ic.50ipbFWsa2.jpg)
### 10. [Paragon Backup & Recovery Free][10]: ###
Paragon Backup & Recovery Free does a great job when it comes to managing scheduled imaging. This is a free software but it's for personal use only.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x536x9Z9_open3.jpg.pagespeed.ic.9rDHp0keFw.png)
--------------------------------------------------------------------------------
via: http://www.efytimes.com/e1/fullnews.asp?edid=148039
作者:Sanchari Banerjee
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://clonezilla.org/
[2]:http://redobackup.org/
[3]:http://www.mondorescue.org/
[4]:http://www.partimage.org/Main_Page
[5]:http://www.fsarchiver.org/Main_Page
[6]:http://www.partclone.org/
[7]:http://doclone.nongnu.org/
[8]:http://www.macrium.com/reflectfree.aspx
[9]:http://www.runtime.org/driveimage-xml.htm
[10]:http://www.paragon-software.com/home/br-free/


@ -1,66 +0,0 @@
barney-ro translating
What is a good subtitle editor on Linux
================================================================================
If you watch foreign movies regularly, chances are you prefer subtitles to the dub. Having grown up in France, I know that most Disney movies of my childhood sounded weird because of the French dub. While I now have the chance to watch them in their original version, I know that for a lot of people subtitles are still required. I even surprise myself sometimes by making subtitles for my family. Luckily for me, Linux is not devoid of fancy open source subtitle editors. In short, here is a non-exhaustive list of open source subtitle editors for Linux. Share your opinion on which you think is the best subtitle editor.
### 1. Gnome Subtitles ###
![](https://farm6.staticflickr.com/5596/15323769611_59bc5fb4b7_z.jpg)
[Gnome Subtitles][1] is my go-to when it comes to quickly editing existing subtitles. You can load the video, load the subtitle text file, and instantly get going. I appreciate its balance between ease of use and advanced features. It comes with a synchronization tool as well as a spell checker. Last but not least, the shortcuts are what make it good in the end: when you edit a lot of lines, you prefer to keep your hands on the keyboard and use the built-in shortcuts to move around.
### 2. Aegisub ###
![](https://farm3.staticflickr.com/2944/15323964121_59e9b26ba5_z.jpg)
[Aegisub][2] is already one level of complexity higher. The interface alone reflects a learning curve. But beyond its intimidating aspect, Aegisub is very complete software, providing tools beyond anything I could have imagined before. Like Gnome Subtitles, Aegisub has a WYSIWYG approach, but takes it to a whole new level: it is possible to drag and drop the subtitles on the screen, see the audio spectrum on the side, and do everything with shortcuts. In addition to that, it comes with a Kanji tool, a karaoke mode, and the possibility to import Lua scripts to automate some tasks. I really invite you to read the [manual page][3] before you start using it.
### 3. Gaupol ###
![](https://farm3.staticflickr.com/2942/15326817292_6702cc63fc_z.jpg)
At the other end of the complexity spectrum is [Gaupol][4]. Unlike Aegisub, Gaupol is quick to pick up and adopts an interface very close to Gnome Subtitles. But behind this relative simplicity, it comes with all the necessary tools: shortcuts, third party extension, spell checking, and even speech recognition (courtesy of [CMU Sphinx][5]). As a downside, however, I did notice some slow-downs while testing it, nothing too serious, but just enough to make me prefer Gnome Subtitles still.
### 4. Subtitle Editor ###
![](https://farm4.staticflickr.com/3914/15323911521_8e33126610_z.jpg)
[Subtitle Editor][6] is very close to Gaupol. However, the interface is a little bit less intuitive, and the features are slightly more advanced. I appreciate the possibility to define "key frames" and all the synchronization options provided. However, maybe more icons and less text would enhance the interface. As a goodie, Subtitle Editor can simulate a "typewriter" effect, though I am not sure how useful it is. And last but not least, the possibility to redefine the shortcuts is always handy.
### 5. Jubler ###
![](https://farm4.staticflickr.com/3912/15323769701_3d94ca8884_z.jpg)
Written in Java, [Jubler][7] is a multi-platform subtitle editor. I was actually very impressed by its interface. I definitely see the Java-ish aspect of it, but it remains well conceived and clear. Like Aegisub, you can drag and drop the subtitles on the image, making the experience far more pleasant than just typing. It is also possible to define a style for subtitles, play sound from another track, translate the subtitles, or use the spell checker. However, be careful as you will need MPlayer installed and correctly configured beforehand if you want to use Jubler fully. Oh and I give it a special credit for its easy installation process after downloading the script from the [official page][8].
### 6. Subtitle Composer ###
![](https://farm6.staticflickr.com/5578/15323769711_6c6dfbe405_z.jpg)
Defined as a "KDE subtitle composer," [Subtitle Composer][9] comes with most of the traditional features evoked previously, but with the KDE interface that we expect. This comes naturally with the option to redefine the shortcuts, which is very dear to me. But beyond all of this, what differentiates Subtitle Composer from all the previously mentioned programs is its ability to follow scripts written in JavaScript, Python, and even Ruby. A few examples are packaged with the software, and will definitely help you pick up the syntax and the usefulness of such feature.
To conclude, whether you, like me, just edit a few subtitles for your family, re-synchronize the entire track, or write everything from scratch, Linux has the tools for you. For me in the end, the shortcuts and the ease-of-use make all the difference, but for any higher usage, scripting or speech recognition can become super handy.
Which subtitle editor do you use and why? Or is there another one that you prefer not mentioned here? Let us know in the comments.
--------------------------------------------------------------------------------
via: http://xmodulo.com/good-subtitle-editor-linux.html
作者:[Adrien Brochard][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://gnomesubtitles.org/
[2]:http://www.aegisub.org/
[3]:http://docs.aegisub.org/3.2/Main_Page/
[4]:http://home.gna.org/gaupol/
[5]:http://cmusphinx.sourceforge.net/
[6]:http://home.gna.org/subtitleeditor/
[7]:http://www.jubler.org/
[8]:http://www.jubler.org/download.html
[9]:http://sourceforge.net/projects/subcomposer/


@ -1,66 +0,0 @@
zpl1025
What Linux Users Should Know About Open Hardware
================================================================================
> What Linux users don't know about manufacturing open hardware can lead them to disappointment.
Business and free software have been intertwined for years, but the two often misunderstand one another. That's not surprising -- what is just a business to one is a way of life for the other. But the misunderstanding can be painful, which is why debunking it is worth the effort.
An increasingly common case in point: the growing attempts at open hardware, whether from Canonical, Jolla, MakePlayLive, or any of half a dozen others. Whether pundit or end-user, the average free software user reacts with exaggerated enthusiasm when a new piece of hardware is announced, then retreats into disillusionment as delay follows delay, often ending in the cancellation of the entire product.
It's a cycle that does no one any good and often breeds distrust, all because the average Linux user has no idea what's happening behind the news.
My own experience with bringing products to market is long behind me. However, nothing I have heard suggests that anything has changed. Bringing open hardware or any other product to market remains not just a brutal business, but one heavily stacked against newcomers.
### Searching for Partners ###
Both the manufacturing and distribution of digital products are controlled by a relatively small number of companies, whose time can sometimes be booked months in advance. Profit margins can be tight, so like movie studios that buy the rights to an ancient sit-com, the manufacturers usually hope to clone the success of the latest hot product. As Aaron Seigo told me when talking about his efforts to develop the Vivaldi tablet, the manufacturers would much prefer that someone else take the risk of doing anything new.
Not only that, but they would prefer to deal with someone with an existing sales record who is likely to bring repeat business.
Besides, the average newcomer is looking at a product run of a few thousand units. A chip manufacturer would much rather deal with Apple or Samsung, whose order is more likely in the hundreds of thousands.
Faced with this situation, the makers of open hardware are likely to find themselves cascading down the list of manufacturers until they can find a second or third tier manufacturer that is willing to take a chance on a small run of something new.
They might be reduced to buying off-the-shelf components and assembling units themselves, as Seigo tried with Vivaldi. Alternatively, they might do as Canonical did, and find established partners that encourage the industry to take a gamble. Even if they succeed, they have usually taken months longer than they expected in their initial naivety.
### Staggering to Market ###
However, finding a manufacturer is only the first obstacle. As Raspberry Pi found out, even if the open hardware producers want only free software in their product, the manufacturers will probably insist that firmware or drivers stay proprietary in the name of protecting trade secrets.
This situation is guaranteed to set off criticism from potential users, but the open hardware producers have no choice except to compromise their vision. Looking for another manufacturer is not a solution, partly because to do so means more delays, but largely because completely free-licensed hardware does not exist. The industry giants like Samsung have no interest in free hardware, and, being new, the open hardware producers have no clout to demand any.
Besides, even if free hardware was available, manufacturers could probably not guarantee that it would be used in the next production run. The producers might easily find themselves re-fighting the same battle every time they needed more units.
As if all this were not enough, at this point the open hardware producer has probably spent 6-12 months haggling. Chances are, industry standards have shifted in the meantime, and they may have to start from the beginning again by upgrading specs.
### A Short and Brutal Shelf Life ###
Despite these obstacles, hardware with some degree of openness does sometimes get released. But remember the challenges of finding a manufacturer? They have to be repeated all over again with the distributors -- and not just once, but region by region.
Typically, the distributors are just as conservative as the manufacturers, and just as cautious about dealing with newcomers and new ideas. Even if they agree to add a product to their catalog, the distributors can easily decide not to encourage their representatives to promote it, which means that in a few months they have effectively removed it from the shelves.
Of course, online sales are a possibility. But meanwhile, the hardware has to be stored somewhere, adding to the cost. Production runs on demand are expensive even in the unlikely event that they are available, and even unassembled units need storage.
### Weighing the Odds ###
I have been generalizing wildly here, but anyone who has ever been involved in producing anything will recognize what I am describing as the norm. And just to make matters worse, open hardware producers typically discover the situation as they are going through it. Inevitably, they make mistakes, which adds still more delays.
But the point is, if you have any sense of the process at all, your knowledge is going to change how you react to news of another attempt at hardware. The process means that, unless a company has been in serious stealth mode, an announcement that a product will be out in six months will rapidly prove to be an outdated guesstimate. 12-18 months is more likely, and the obstacles I describe may mean that the product will never actually be released.
For example, as I write, people are waiting for the emergence of the first Steam Machines, the Linux-based gaming consoles. They are convinced that the Steam Machines will utterly transform both Linux and gaming.
As a market category, Steam Machines may do better than other new products, because those who are developing them at least have experience developing software products. However, none of the dozen or so Steam Machines in development have produced more than a prototype after almost a year, and none are likely to be available to buy until halfway through 2015. Given the realities of hardware manufacturing, we will be lucky if half of them see daylight. In fact, a release of 2-4 might be more realistic.
I make that prediction with next to no knowledge of any of the individual efforts. But, having some sense of how hardware manufacturing works, I suspect that it is likely to be closer to what happens next year than all the predictions of a new Golden Age for Linux and gaming. I would be entirely happy being wrong, but the fact remains: what is surprising is not that so many Linux-associated hardware products fail, but that any succeed even briefly.
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/what-linux-users-should-know-about-open-hardware-1.html
作者:[Bruce Byfield][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.datamation.com/author/Bruce-Byfield-6030.html

Interview: Thomas Voß of Mir
================================================================================
**Mir was big during the space race and it's a big part of Canonical's unification strategy. We talk to one of its chief architects at mission control.**
Not since the days of 2004, when X.org split from XFree86, have we seen such exciting developments in the normally prosaic realms of display servers. These are the bits that run behind your desktop, making sure Gnome, KDE, Xfce and the rest can talk to your graphics hardware, your screen and even your keyboard and mouse. They have a profound effect on your system's performance and capabilities. And where we once had one, we now have two more, Wayland and Mir, and both are competing to win your affections in the battle for an X replacement.
We spoke to Wayland's Daniel Stone in issue 6 of Linux Voice, so we thought it was only fair to give equal coverage to Mir, Canonical's own in-house X replacement, and a project that has so far courted controversy with some of its decisions. Which is why we headed to Frankfurt and asked its Technical Architect, Thomas Voß, for some background context…
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_1.jpg)
**Linux Voice: Let's go right back to the beginning, and look at what X was originally designed for. X solved the problems that were present 30 years ago, where people had entirely different needs, right?**
**Thomas Voß**: It was mainframes. It was very expensive mainframe computers with very cheap terminals, trying to keep the price as low as possible. And one of the first and foremost goals was: “Hey, I want to be able to distribute my UI across the network, ideally compressed and using as little data as possible”. So a lot of the decisions in X were motivated by that.
A lot of the graphics languages that X supports even today have been motivated by that decision. The X developers started off in a 2D world; everything was a 2D graphics language, the X way of drawing rectangles. And it's present today. So X is not necessarily bad in that respect; it still solves a lot of use cases, but it's grown over time.
One of the reasons is that X is a protocol, in essence. So a lot of things got added to the protocol. The problem with adding things to a protocol is that they tend to stick. To use a 2D graphics language as an example, XVideo is something that no one really likes today. It's difficult to support, and the GPU vendors actually cry out in pain when you start talking about XVideo. It's somewhat bloated, and it's just old. It's an old, proven technology, and I'm all for that. I actually like X for a lot of things, and it was a good source of inspiration. But then when you look at your current use cases and the current setup we are in, where convergence is one of the buzzwords (massively overrated, obviously), at the heart of convergence lies the fact that you want to scale across different form factors.
**LV: And convergence is big for Canonical, isn't it?**
**Thomas**: It's big, I think, for everyone, especially over time. But convergence is a use case that was always of interest to us. So we always had this idea that we want one codebase. We don't want a situation like Apple has with OS X and iOS, which are two different codebases. We basically said “Look, whatever we want to do, we want to do it from one codebase, because it's more efficient.” We don't want to end up in the situation where we have to maintain two, three or four separate codebases.
That's where we were coming from when we were looking at X, and it was just too bloated. And we looked at a lot of alternatives. We started looking at how Mac OS X was doing things. We obviously didn't have access to the source code, but if you see the transition from OS 9 to OS X, it was as if they entirely switched to one graphics language. It was pre-PostScript at that time. But they chose one graphics language, and that's it. From that point on, when you choose a graphics language, things suddenly become simpler to do. Today's graphics language is GL ES, so there was inspiration for us to say we would converge on GL and EGL. From our perspective, that's the least common denominator.
> We basically said: whatever we want to do, we want to do it from one codebase, because it's more efficient.
Obviously there are disadvantages to having only one graphics language, but the benefits outweigh the disadvantages. And I think that's a common theme in the industry. Android made the same decision to go that way. Even Wayland to a certain degree has been doing that. They have to support EGL and GL, simply because it's very convenient for app developers and toolkit developers: an open graphics language. That was the part that inspired us, and we wanted to have this one graphics language and support it well. And that takes a lot of craft.
So, once you can say: no more weird 2D API, no more weird phong API, and everything is mapped out to GL, you're way better off. And you can distill down the scope of the overall project to something more manageable. So it went from being impossible to possible. And then there was me, being very opinionated. I don't believe in extensibility from the beginning; traditionally in Linux everything is super extensible, which has benefits for a certain audience.
If you think about the audience of the display server, it's one of the few places in the system where you've got three audiences. So you've got the users, who don't care, or shouldn't care, about the display server.
**LV: It's transparent to them.**
**Thomas**: Yes, it's pixels, right? That's all they care about. It should be smooth. It should be super nice to use. But the display server is not their main concern. It obviously feeds into the user experience, quite significantly, but there are a lot of other parts in the system that are important as well.
Then you've got developers who care about the display server in terms of the API. Obviously we said we want to satisfy this audience, and we want to provide a super-fast experience for users. It should be rock solid and stable. People have been making fun of us and saying “yeah, every project wants to be rock solid and stable”. Cool: so many fail in doing that, so let's get that down and just write out what we really want to achieve.
And then you've got developers, and the moment you expose an API to them, or a protocol, you sign a contract with them, essentially. So they develop to your API (well, many app developers won't directly, because they'll be using toolkits), but at some point you've got developers who sign up to your API.
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_3.jpg)
**LV: The developers writing the toolkits, then?**
**Thomas**: We do a lot of work in that arena, but in general it's a contract that we have with normal app developers. And we said: look, we don't want the API or contract to be super extensible, trying to satisfy every need out there. We want to understand what people really want to do, and we want to commit to one API and contract. Not five different variants of the contract; we want to say: look, this is what we support, and we, as Canonical and as the Mir maintainers, will sign up to it.
So I think that's a very good thing. You can buy into specific shells sitting on top of Mir, but you can always assume a certain base level of functionality that we will always provide in terms of window management, in terms of rendering capabilities, and so on and so forth. And funnily enough, that also helps with convergence. Because once you start thinking about the API as very important, you really start thinking about convergence. And what happens if we think about form factors and we transfer from a phone to a tablet to a desktop to a fridge?
**LV: And whatever might come!**
**Thomas**: Right, right. How do we account for future developments? And we said we don't feel comfortable making Mir super extensible, because it will just grow. Either it will just grow and grow, or you will end up with an organisation that just maintains your protocol and protocol extensions.
**LV: So that's looking at Mir in relation to X. The obvious question is comparing Mir to Wayland: what is it that Mir does, that Wayland doesn't?**
**Thomas**: This might sound picky, but we have to distinguish what Wayland really is. Wayland is a protocol specification, which is interesting because the value proposition is somewhat difficult. You've got a protocol and you've got a reference implementation. Specifically, when we started, Weston was still a test bed and everything being developed ended up in there.
No one was buying into that; no one was saying, “Look, we're moving this to production-level quality with a bona fide protocol layer that is frozen and stable for a specific version that caters to application authors”. If you look at the Ubuntu repository today, or in Debian, there's Wayland-cursor-whatever, so they have extensions already. So that's a bit different from our approach to Mir, from my perspective at least.
There was this protocol that the Wayland developers finished and back then, before we did Mir and I looked into all of this, I wrote a Wayland compositor in Go, just to get to know things.
**LV: As you do!**
**Thomas**: And I said: you know, I don't think a protocol is a good way of approaching this, because versioning a protocol in a packaging scenario is super difficult. But versioning a C API, or any sort of API that has a binary stability contract, is way easier, and we are way more experienced at that. So, in that respect, we are different in that we are saying the protocol is an implementation detail, at least up to a certain point.
I'm pretty sure that for version 1.0, which we will call a golden release, we will open up the protocol for communication purposes. Under the covers it's Google protocol buffers and sockets. So we'll say: this is the API, work against that, and we're committed to it.
That's one thing, and then we said: OK, there's Weston, but we cannot use Weston because it's not working on Android, the driver model is not well defined, and there's so much work that we would have to do to actually implement a Wayland compositor. And then we are in a situation where we would have to cut out a set of functionality from the Wayland protocol and commit to that, no matter what happens, and ultimately that would be a fork, over time, right?
**LV: Its a difficult concept for many end users, who just want to see something working.**
**Thomas**: Right, and even from a developer's perspective (and let's jump to the political part) I find it somewhat difficult to have one party owning a protocol definition and another party building the reference implementations. Now, Gnome and KDE do two different Wayland compositors. I don't see the benefit in that, to be quite frank, so the value proposition is difficult to my mind.
The driver model in Mir and Wayland is ultimately not that different: it's GL/EGL based. That is the common denominator that you will find in both, which is actually a good thing, because if you look at the contract with application developers and toolkit developers, most of them don't want Mir or Wayland. They talk EGL and GL, and at that point, it's not that much of a problem to support both.
> If there had been a full reference implementation of Wayland, our decision might have been different.
So we did this work for porting the Chromium browser to Mir. We actually took the Chromium Wayland back-end, factored out all the common pieces to EGL and GL ES, and split it up into Wayland and Mir.
And I think from a user's or application developer's perspective, the difference is not there. I think, in retrospect, if there had been something like a full reference implementation of Wayland, where a company had signed up to provide something that works, and committed to a certain protocol version, our decision might have been different. But there just wasn't. It was five years out there, Wayland, Wayland, Wayland, and there was nothing that we could build upon.
**LV: The main experience we've had is with RebeccaBlackOS, which has Weston and Wayland, because, like you say, there's not that much out there running it.**
**Thomas**: Right. I find Wayland impressive, obviously, but I think Mir will be significantly more relevant than Wayland in two years' time. We just keep on bootstrapping everything, and we've got things working across multiple platforms. Are there issues, and are there open questions to solve? Most likely. We never said we would come up with the perfect solution in version 1. That was not our goal. I don't think software should be built that way. It should just be iterated.
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_2.jpg)
**LV: When was Mir originally planned for? Which Ubuntu release? Because it has been pushed back a couple of times.**
**Thomas**: Well, we originally planned to have it by 14.04. That was the kind of stretch goal, because it depends heavily on the availability of proprietary graphics drivers. So you can't ship an LTS [Long Term Support] release of Ubuntu on a new display server without supporting the hardware of the big guys.
**LV: We thought that would be quite ambitious anyway: a Long Term Support release with a whole new display server!**
**Thomas**: Yes, it was ambitious, but for a reason. If you don't set a stretch goal, and probably fail in reaching it, and then re-evaluate how you move forward, it's difficult to drive a project. So if you just keep it evolving and evolving and evolving, and you don't have a checkpoint at some point…
**LV: That's like a lot of open source projects. Inkscape is still on 0.48 or something, and it works, it's reliable, but they never get to 1.0. Because they always say: “Oh, let's add this feature, and that feature”, and the rest of us are left thinking: just release 1.0 already!**
**Thomas**: And I wouldn't actually tie it to a version number. To me, that is secondary. To me, the question is whether we call this ready for broad public consumption on all of the hardware versions we want to support.
In Canonical, as a company, we have OEM contracts and we are enabling Ubuntu on a host of devices, and laptops and whatever, so we have to deliver on those contracts. And the question is, can we do that? No. Well, you never like a no.
> The question is whether we call this ready for broad public consumption on the hardware we want to support.
Usually, when you encounter a problem and you tackle it, and you start thinking about how to solve the problem, that's more beneficial than never hearing a no. That's kind of what we were aiming for. Ubuntu 14.04 was a stretch goal; everyone was aware of that, and we didn't reach it. Fine, cool. Let's go on.
So how do we stage ourselves for the next cycle, until an LTS? Now we have this initiative where we have a daily testable image with Unity 8 and Mir. It's not super usable, because it's essentially just the tethered UI that you are seeing there, but still, it's something that we didn't have a year ago. And for me, that's a huge gain.
And ultimately, before we can ship something, before any new display server can ship in an LTS release, you need to have buy-in from the GPU vendors. That's what you need.
--------------------------------------------------------------------------------
via: http://www.linuxvoice.com/interview-thomas-vos-of-mir/
作者:[Mike Saunders][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxvoice.com/author/mike/

FOSS and the Fear Factor
================================================================================
![](http://www.linuxinsider.com/ai/181807/foss-open-source-security.jpg)
> "'Many eyes' is a complete and total myth," said SoylentNews' hairyfeet. "I bet my last dollar that if you looked at every.single.package. that makes up your most popular distros and then looked at how many have actually downloaded the source for those various packages, you'd find that there is less than 30 percent ... that are downloaded by anybody but the guys that actually maintain the things."
In a world that's been dominated for far too long by the [Systemd Inferno][1], Linux fans will have to be forgiven if they seize perhaps a bit too gleefully upon the scraps of cheerful news that come along on any given day.
Of course, for cheerful news, there's never any better place to look than the [Reglue][2] effort. Run by longtime Linux advocate and all-around-hero-for-kids Ken Starks, as alert readers [may recall][3], Reglue just last week launched a brand-new [fundraising effort][4] on Indiegogo to support its efforts over the coming year.
Since 2005, Reglue has placed more than 1,600 donated and then refurbished computers into the homes of financially disadvantaged kids in Central Texas. Over the next year, it aims to place 200 more, as well as paying for the first 90 days of Internet connection for each of them.
"As overused as the term is, the 'Digital Divide' is alive and well in some parts of America," Starks explained. "We will bridge that divide where we can."
How's that for a heaping helping of hope and inspiration?
### Windows as Attack Vector ###
![](http://www.linuxinsider.com/images/article_images/linuxgirl_bg_pinkswirl_150x245.jpg)
Offering discouraged FOSS fans a bit of well-earned validation, meanwhile -- and perhaps even a bit of levity -- is the news that Russian hackers apparently have begun using Windows as a weapon against the rest of the world.
"Russian hackers use Windows against NATO" is the [headline][5] over at Fortune, making it plain for all the world to see that Windows isn't the bastion of security some might say it is.
The sarcasm is [knee-deep][6] in the comments section on Google+ over that one.
### 'Hackers Shake Confidence' ###
Of course, malicious hacking is no laughing matter, and the FOSS world has gotten a bitter taste of the effects for itself in recent months with the Heartbleed and Shellshock flaws, to name just two.
Has it been enough to scare Linux aficionados away?
That essentially is [the suggestion][7] over at Bloomberg, whose story, entitled "Hackers Shake Confidence in 1980s Free Software Idealism," has gotten more than a few FOSS fans' knickers in a twist.
### 'No Software Is Perfect' ###
"None of this has shaken my confidence in the slightest," asserted [Linux Rants][8] blogger Mike Stone down at the blogosphere's Broken Windows Lounge, for instance.
"I remember a time when you couldn't put a Windows machine on the network without firewall software or it would be infected with viruses/malware in seconds," he explained. "I don't recall the articles claiming that confidence had been shaken in Microsoft.
"The fact of the matter is that no software is perfect, not even FOSS, but it comes closer than the alternatives," Stone opined.
### 'My Faith Is Just Fine' ###
"It is hard to even begin to get into where the Bloomberg article fails," began consultant and [Slashdot][9] blogger Gerhard Mack.
"For one, decompilers have existed for ages and allow black hats to find flaws in proprietary software, so the black-hats can find problems but cannot admit they found them let alone fix them," Mack explained. "Secondly, it has been a long time since most open source was volunteer-written, and most contributions need to be paid.
"The author goes on to rip into people who use open source for not contributing monetarily, when most of the listed companies are already Linux Foundation members, so they are already contributing," he added.
In short, "my faith in open source is just fine, and no clickbait Bloomberg article will change that," Mack concluded.
### 'The Author Is Wrong' ###
"Clickbait" is also the term Google+ blogger Alessandro Ebersol chose to describe the Bloomberg account.
"I could not see the point the author was trying to make, except sensationalism and views," he told Linux Girl.
"The author is wrong," Ebersol charged. "He should educate himself on the topic. The flaws are results of lack of funding, and too many corporations taking advantage of free software and giving nothing back."
Moreover, "I still believe that a piece of code that can be studied and checked by many is far more secure than a piece made by a few," Google+ blogger Gonzalo Velasco C. chimed in.
"All the rumors that FLOSS is as weak as proprietary software are only [FUD][10] -- period," he said. "It is even more sad when it comes from private companies that drink in the FLOSS fountain."
### 'Source Helps Ensure Security' ###
Chris Travers, a [blogger][11] who works on the [LedgerSMB][12] project, had a similar view.
"I do think that having the source available helps ensure security for well-designed, well-maintained software," he began.
"Those of us who do development on such software must necessarily approach the security process under a different set of constraints than proprietary vendors do," Travers explained.
"Since our code changes are public, when we release a security fix this also provides effectively full disclosure," he said, "ensuring that the concerns for unpatched systems are higher than they would be for proprietary solutions absent full disclosure."
At the same time, "this disclosure cuts both ways, as software security vendors can use this to provide further testing and uncover more problems," Travers pointed out. "In the long run, this leads to more secure software, but in the short run it has security costs for users."
Bottom line: "If there is good communication with the community, if there is good software maintenance and if there is good design," he said, "then the software will be secure."
### 'Source Code Isn't Magic Fairy Dust' ###
SoylentNews blogger hairyfeet had a very different view.
"'Many eyes' is a complete and total myth," hairyfeet charged. "I bet my last dollar that if you looked at every.single.package. that makes up your most popular distros and then looked at how many have actually downloaded the source for those various packages, you'd find that there is less than 30 percent of the packages that are downloaded by anybody but the guys that actually maintain the things.
"How many people have done a code audit on Firefox? [LibreOffice][13]? Gimp? I bet you won't find a single one, because everybody ASSUMES that somebody else did it," he added.
"At the end of the day, Wall Street is finding out what guys like me have been saying for years: Source code isn't magic fairy dust that makes the bugs go away," hairyfeet observed.
### 'No One Actually Looked at It' ###
"The problem with [SSL][14] was that everyone assumed the code was good, but almost no one had actually looked at it, so you never had the 'many eyeballs' making the bugs shallow," Google+ blogger Kevin O'Brien conceded.
Still, "I think the methodology and the idealism are separable," he suggested. "Open source is a way of writing software in which the value created for everyone is much greater than the value captured by any one entity, which is why it is so powerful.
"The idea that corporate contributions somehow sully the purity is a stupid idea," added O'Brien. "Corporate involvement is not inherently bad; what is bad is trying to lock other people out of the value created. Many companies handle this well, such as Red Hat."
### 'The Right Way to Do IT' ###
Last but not least, "my confidence in FLOSS is unshaken," blogger [Robert Pogson][15] declared.
"After all, I need software to run my computers, and as bad as some flaws are in FLOSS, that vulnerability pales into insignificance compared to the flaws in that other OS -- you know, the one that thinks images are executable and has so much complexity that no one, not even M$ with its $billions, can fix."
FOSS is "the right way to do IT," Pogson added. "The world can and does make its own software, and the world has more and better programmers than the big corporations.
"Those big corporations use FLOSS and should support FLOSS," he maintained, offering "thanks to the corporations who hire FLOSS programmers; sponsor websites, mirrors and projects; and who give back code -- the fuel in the FLOSS economy."
--------------------------------------------------------------------------------
via: http://www.linuxinsider.com/story/FOSS-and-the-Fear-Factor-81221.html
作者:Katherine Noyes
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.linuxinsider.com/perl/story/80980.html
[2]:http://www.reglue.org/
[3]:http://www.linuxinsider.com/story/78422.html
[4]:https://www.indiegogo.com/projects/deleting-the-digital-divide-one-computer-at-a-time
[5]:http://fortune.com/video/2014/10/14/russian-hackers-use-windows-against-nato/
[6]:https://plus.google.com/+KatherineNoyes/posts/DQvRMekLHV4
[7]:http://www.bloomberg.com/news/2014-10-14/hackers-shake-confidence-in-1980s-free-software-idealism.html
[8]:http://linuxrants.com/
[9]:http://slashdot.org/
[10]:http://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt
[11]:http://ledgersmbdev.blogspot.com/
[12]:http://www.ledgersmb.org/
[13]:http://www.libreoffice.org/
[14]:http://en.wikipedia.org/wiki/Transport_Layer_Security
[15]:http://mrpogson.com/

(翻译中 by runningwater)
Camicri Cube: An Offline And Portable Package Management System
================================================================================
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/camicri-cube-206x205.jpg)
As we all know, we normally need an Internet connection to download and install applications with Synaptic or the Software Center. But what if you don't have an Internet connection, or your connection is dead slow? Installing packages through the Software Center on your Linux desktop then becomes a real headache. You could manually download the applications from their official sites and install them, but most Linux users aren't aware of the dependencies required by the applications they want to install. What can you do in such a situation? Worry no more: today we introduce an awesome offline package manager called **Camicri Cube**.
You can run this package manager on any Internet-connected system, download the packages you want to install, bring them back to your offline computer, and install them there. Sounds good? Yes, it is! Cube is a package manager like Synaptic or Ubuntu Software Center, but a portable one. It can run on any platform (Windows, apt-based Linux distributions), online or offline, from a flash drive or any removable device. The main goal of the project is to let offline Linux users download and install Linux applications easily.
Cube gathers complete details of your offline computer, such as OS details, installed applications, and more. Then just copy the Cube application to a USB thumb drive, take it to another Internet-connected system, and download the applications you want. After downloading all required packages, head back to your original computer and start installing them. Cube is developed and maintained by **Jake Capangpangan**. It is written in C++ and bundled with all necessary components, so you don't have to install any extra software to use it.
### Installation ###
Now, let us download and install Cube on the offline system, the one without an Internet connection. Download the latest version of Cube either from the [official Launchpad page][1] or the [SourceForge site][2]. Make sure you download the version matching your offline computer's architecture. As I use a 64-bit system, I downloaded the 64-bit version.
wget http://sourceforge.net/projects/camicricube/files/Camicri%20Cube%201.0.9/cube-1.0.9.2_64bit.zip/
Extract the zip file and move it to your home directory or anywhere you want:
unzip cube-1.0.9.2_64bit.zip
That's it. Now it's time to learn how to use it.
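Putting the steps above together, a minimal install sketch might look like the following. The 64-bit archive name comes from this article; the 32-bit name and the architecture check are our own assumptions, so verify the actual file names on the Launchpad or SourceForge download pages:

```shell
#!/bin/sh
# Pick the Cube archive matching this machine's architecture.
# The 64-bit name is from the article; the 32-bit name is assumed.
pick_archive() {
    case "$1" in
        x86_64|amd64) echo "cube-1.0.9.2_64bit.zip" ;;
        *)            echo "cube-1.0.9.2_32bit.zip" ;;
    esac
}

ZIP=$(pick_archive "$(uname -m)")
echo "Archive for this machine: $ZIP"

# On a real system you would then fetch and unpack it:
# wget "http://sourceforge.net/projects/camicricube/files/Camicri%20Cube%201.0.9/$ZIP/"
# unzip "$ZIP" && chmod -R +x cube/
```

The download and unzip lines are left commented out so the sketch can be adapted to whichever release is current.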
### Usage ###
Here, I will be using two Ubuntu systems. The original (offline, no Internet) one is running **Ubuntu 14.04**, and the Internet-connected system is running the **Lubuntu 14.04** desktop.
#### Steps to do On Offline system: ####
On the offline system, go to the extracted Cube folder. You'll find an executable called “cube-linux”. Double-click it and click Execute. If it is not executable, set the executable permission as shown below.
sudo chmod -R +x cube/
Then, go to the cube directory,
cd cube/
And run the following command to launch it.
./cube-linux
Enter the project name (e.g. sk) and click **Create**. As mentioned above, this will create a new project holding complete details of your system, such as OS details, the list of installed applications, the list of repositories, etc.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0013.png)
As you know, our system is an offline computer, which means I don't have an Internet connection, so I skipped the Update Repositories step by clicking the **Cancel** button. We will update the repositories later on an Internet-connected system.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0023.png)
Again, I clicked **No** to skip updating the offline computer, because we don't have an Internet connection.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0033.png)
That's it; the new project has been created and saved in your main cube folder. Go to the Cube folder and you'll find a folder called Projects. This folder holds all the essential details of your offline system.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Selection_004.png)
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Selection_005.png)
Now, close the cube application, and copy the entire main **cube** folder to any flash drive, and go to the Internet connected system.
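The copy itself is an ordinary recursive copy. A small sketch, assuming the flash drive is mounted at a placeholder path such as /media/usb:

```shell
#!/bin/sh
# Copy the whole cube folder (Projects subfolder included) to
# removable media. /media/usb is a placeholder mount point --
# adjust it to wherever your flash drive is actually mounted.
copy_cube() {
    src=$1; dest=$2
    mkdir -p "$dest" && cp -r "$src" "$dest"/
}

# Example (uncomment on a real system):
# copy_cube "$HOME/cube" /media/usb
```

Copying the entire folder matters: the Projects data created above is what lets Cube on the online machine know what the offline system needs.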
#### Steps to do on an Internet connected system: ####
The following steps need to be done on the Internet-connected system; in our case, it's **Lubuntu 14.04**.
Make the cube folder executable, as we did on the original computer.
sudo chmod -R +x cube/
Now, double-click the cube-linux file to open it, or launch it from the terminal as shown below.
cd cube/
./cube-linux
You will see that your project is now listed in the “Open Existing Projects” part of the window. Select your project.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0014.png)
Then Cube will ask if this is your project's original computer. It's not my original (offline) computer, so I clicked **No**.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0024.png)
You'll be asked if you want to update your repositories. Click **Ok** to update the repositories.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0034.png)
Next, we have to update all outdated packages/applications. Click the "**Mark All updates**" button on Cube's toolbar, and then click the "**Download all marked**" button to update all outdated packages/applications. As you can see in the screenshot below, 302 packages need to be updated in my case. Click **Ok** to continue downloading the marked packages.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_005.png)
Now, Cube will start to download all marked packages.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Downloading-packages_006.png)
We have completed updating repositories and packages. Now, you can download a new package if you want to install it on your offline system.
#### Downloading New Applications ####
For example, here I am going to download the **apache2** package. Enter the name of the package in the **search** box and hit the Search button. Cube will fetch the details of the application you are looking for. Hit the "**Download this package now**" button, and click **Ok** to start the download.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_008.png)
Cube will start downloading the apache2 package with all its dependencies.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Downloading-packages_009.png)
If you want to search for and download more packages, simply click the "**Mark this package**" button and search for the packages you need. You can mark as many packages as you want to install on your original computer. Once you have marked all the packages, hit the "**Download all marked**" button on the top toolbar to start downloading them.
After you have finished updating the repositories and outdated packages, and downloading new applications, close the Cube application. Then, copy the entire Cube folder to a flash drive or external HDD, and go back to your offline system.
#### Steps to do on Offline computer: ####
Copy the Cube folder back to your offline system, in any location you like. Go to the cube folder and double-click the **cube-linux** file to launch the Cube application.
Or, you can launch it from Terminal as shown below.
cd cube/
./cube-linux
Select your project and click Open.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0012.png)
Then a dialog will ask you to update your system. Click "Yes", especially if you downloaded new repositories, because this will transfer all the new repository data to your computer.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0021.png)
You'll see that the repositories are updated on your offline computer without an Internet connection, because we already updated them on the Internet-connected system. Cool, isn't it?
After updating the repositories, let us install all the downloaded packages. Click the "Mark All Downloaded" button to select all downloaded packages, then click "Install All Marked" on Cube's main toolbar to install all of them. The Cube application will automatically open a new terminal and install all the packages.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Terminal_001.png)
If you encounter dependency problems, go to **Cube Menu -> Packages -> Install packages with complete dependencies** to install all packages.
If you want to install a specific package, navigate to the package list and click the "Downloaded" button; all downloaded packages will be listed.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0035.png)
Then, double-click the desired package and click "Install this", or "Mark this" if you want to install it later.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0043.png)
In this way, you can download the required packages on any Internet-connected system and then install them on your offline computer without an Internet connection.
### Conclusion ###
This is one of the best and most useful tools I have ever used. However, while testing it on my Ubuntu 14.04 testbox, I faced many dependency problems, and the Cube application often closed unexpectedly. Also, I could only use this tool without issues on a fresh Ubuntu 14.04 offline system. Hopefully these issues do not occur on previous versions of Ubuntu. Apart from these minor issues, the tool does its job as advertised and works like a charm.
Cheers!
--------------------------------------------------------------------------------
via: http://www.unixmen.com/camicri-cube-offline-portable-package-management-system/
原文作者:
![](http://1.gravatar.com/avatar/1ba62ac2b395f541750b6b4f873eb37b?s=70&d=monsterid&r=G)
[SK][a] (Senthilkumar, aka SK, is a Linux enthusiast, FOSS supporter & Linux consultant from Tamilnadu, India. A passionate and dynamic person, he aims to deliver quality content to IT professionals and loves to write and explore new things about Linux, open source, computers and the Internet.)
译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/sk/
[1]:https://launchpad.net/camicricube
[2]:http://sourceforge.net/projects/camicricube/
nd0104 is translating
Install Google Docs on Linux with Grive Tools
================================================================================
Google Drive is two years old now, and Google's cloud storage solution seems to be still going strong thanks to its integration with Google Docs and Gmail. There's one thing still missing though: an official Linux client. Apparently Google has had one floating around their offices for a while now; however, it hasn't seen the light of day on any Linux system.
How to set up a USB network printer and scanner server on Debian
================================================================================
Suppose you want to set up a Linux print server in your home/office network, but you only have USB printers available (as they are much cheaper than printers that have a built-in Ethernet jack or wireless ones). In addition, what if one of those devices is an AIO (All In One), and you also want to share its incorporated scanner over the network? In this article, I'll show you how to install and share a USB AIO (Epson CX3900 inkjet printer and scanner), a USB laser printer (Samsung ML-1640), and a PDF printer as the "cherry on top" - all in a GNU/Linux Debian 7.2 [Wheezy] server.
Even though these printers are somewhat old (I bought the Epson AIO in 2007 and the laser printer in 2009), I believe that what I learned through the installation process can well be applied to newer models of the same brands and others: some drivers are available as precompiled .deb packages, while others can be installed directly from the repositories. After all, it's the underlying principles that matter.
### Prerequisites ###
To setup a network printer and scanner, we will be using [CUPS][1], which is an open-source printing system for Linux / UNIX / OSX.
# aptitude install cups cups-pdf
**Troubleshooting tip**: Depending on the state of your system (this issue can happen most likely after a failed manual install of a package or a misinstalled dependency), the front-end package management system may prompt you to uninstall a lot of packages in an attempt to resolve current dependencies before installing cups and cups-pdf. If this happens to be the case, you have two options:
1) Install the packages via another front-end package management system, such as apt-get. Note that this is not entirely advisable since it will not fix the current issue.
2) Run the following command: aptitude update && aptitude upgrade. This will fix the issue and upgrade the packages to their most recent version at the same time.
### Configuring CUPS ###
In order to be able to access the CUPS web interface, we need to do at least a minimum edit to the cupsd.conf file (server configuration file for CUPS). Before proceeding, however, let's make a backup copy of cupsd.conf:
    # cp /etc/cups/cupsd.conf /etc/cups/cupsd.conf.bkp
and edit the original file (only the most relevant sections are shown):
- **Listen**: Listens to the specified address and port or domain socket path.
- **Location /path**: Specifies access control for the named location.
- **Order**: Specifies the order of HTTP access control (allow,deny or deny,allow). Order allow,deny means that the Allow rules are processed before the Deny rules.
- **DefaultAuthType** (also valid for **AuthType**): Specifies the default type of authentication to use. Basic refers to the fact that the /etc/passwd file is used to authenticate users in CUPS.
- **DefaultEncryption**: Specifies the type of encryption to use for authenticated requests.
- **WebInterface**: Specifies whether the web interface is enabled.
# Listen for connections from the local machine
Listen 192.168.0.15:631
# Restrict access to the server
<Location />
Order allow,deny
    Allow 192.168.0.0/24
</Location>
# Default authentication type, when authentication is required
DefaultAuthType Basic
DefaultEncryption IfRequested
# Web interface setting
WebInterface Yes
# Restrict access to the admin pages
<Location /admin>
Order allow,deny
Allow 192.168.0.0/24
</Location>
Now let's restart CUPS to apply the changes:
# service cups restart
In order to allow another user (other than root) to modify printer settings, we must add him / her to the lp (grants access to printer hardware and enables the user to manage print jobs) and lpadmin (owns printing preferences) groups as follows. Disregard this step if this is not necessary or desired in your current network setup.
# adduser xmodulo lp
# adduser xmodulo lpadmin
![](https://farm4.staticflickr.com/3873/14705919960_9a25101098_o.png)
### Configuring a Network Printer via CUPS Web Interface ###
1. Launch a web browser and open the CUPS interface, available at http://<Server IP>:Port, which in our case means http://192.168.0.15:631:
![](https://farm4.staticflickr.com/3878/14889544591_284015bcb5_z.jpg)
2. Go to the **Administration** tab and click on *Add printer*:
![](https://farm4.staticflickr.com/3910/14705919940_fe0a08a8f7_o.png)
3. Choose your printer; in this case, **EPSON Stylus CX3900 @ debian (Inkjet Inkjet Printer)**, and click on **Continue**:
![](https://farm6.staticflickr.com/5567/14706059067_233fcf9791_z.jpg)
4. It's time to name the printer and indicate whether we want to share it from the current workstation or not:
![](https://farm6.staticflickr.com/5570/14705957499_67ea16d941_z.jpg)
5. Install the driver - Select the brand and click on **Continue**.
![](https://farm6.staticflickr.com/5579/14889544531_77f9f1258c_o.png)
6. If the printer is not supported natively by CUPS (not listed in the next page), we will have to download the driver from the manufacturer's web site (e.g., [http://download.ebz.epson.net/dsc/search/01/search/?OSC=LX][2]) and return to this screen later.
![](https://farm4.staticflickr.com/3896/14706058997_e2a2214338_z.jpg)
![](https://farm4.staticflickr.com/3874/14706000928_c9dc74c80e_z.jpg)
![](https://farm4.staticflickr.com/3837/14706058977_e494433068_o.png)
7. Note that this precompiled .deb file must be sent somehow to the printer server (for example, via sftp or scp) from the machine that we used to download it (of course this could have been easier if we had a direct link to the file instead of the download button):
![](https://farm6.staticflickr.com/5581/14706000878_f202497d0a_z.jpg)
8. Once we have placed the .deb file in our server, we will install it:
# dpkg -i epson-inkjet-printer-escpr_1.4.1-1lsb3.2_i386.deb
**Troubleshooting tip**: If the lsb package (a standard core system that third-party applications written for Linux can depend upon) is not installed, the driver installation will not succeed:
![](https://farm4.staticflickr.com/3840/14705919770_87e5803f95_z.jpg)
We will install lsb and then attempt to install the printer driver again:
# aptitude install lsb
# dpkg -i epson-inkjet-printer-escpr_1.4.1-1lsb3.2_i386.deb
9. Now we can return to step #5 and install the printer:
![](https://farm6.staticflickr.com/5569/14705957349_3acdc26f91_z.jpg)
### Configuring a Network Scanner ###
Now we will proceed to configure the printer server to share a scanner as well. First, install [xsane][3] which is a frontend for [SANE][4]: Scanner Access Now Easy.
# aptitude install xsane
Next, let's enable the saned service by editing the /etc/default/saned file:
# Set to yes to start saned
RUN=yes
Finally, we will check whether saned is already running (most likely not - then we'll start the service and check again):
# ps -ef | grep saned | grep -v grep
# service saned start
### Configuring a Second Network Printer ###
With CUPS, you can configure multiple network printers. Let's configure an additional printer via CUPS: Samsung ML-1640, which is a USB laser printer.
The splix package contains the drivers for monochrome (ML-15xx, ML-16xx, ML-17xx, ML-2xxx) and color (CLP-5xx, CLP-6xx) Samsung printers. In addition, the detailed information about the package (available via aptitude show splix) indicates that some rebranded Samsungs like the Xerox Phaser 6100 work with this driver.
# aptitude install splix
Then we will install the printer itself using the CUPS web interface, as explained earlier:
![](https://farm4.staticflickr.com/3872/14705957329_4f38a94867_o.png)
### Installing the PDF Printer ###
Next, let's configure a PDF printer on the printer server, so that you can convert documents into PDF format from client computers.
Since we already installed the cups-pdf package, the PDF printer was installed automatically, which can be verified through the web interface:
![](https://farm6.staticflickr.com/5558/14705919650_bc1a1e0b43_z.jpg)
When the PDF printer is selected, documents will be written to a configurable directory (by default to ~/PDF), or can be further manipulated by a post-processing command.
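As a sketch, the relevant settings could look like this (the script path is hypothetical; `Out` and `PostProcessing` are actual cups-pdf directives):

```
# Hypothetical /etc/cups/cups-pdf.conf fragment: write PDFs under each
# user's home directory and run a post-processing script on every
# generated file (cups-pdf passes the PDF path, the file owner, and the
# originating user to the script as arguments).
Out ${HOME}/PDF
PostProcessing /usr/local/bin/pdf-notify.sh
```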
In the next article, we'll configure a desktop client to access these printers and scanner over the network.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/usb-network-printer-and-scanner-server-debian.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.gabrielcanepa.com.ar/
[1]:https://www.cups.org/
[2]:http://download.ebz.epson.net/dsc/search/01/search/?OSC=LX
[3]:http://www.xsane.org/
[4]:http://www.sane-project.org/
translating by haimingfg
What are useful CLI tools for Linux system admins
================================================================================
System administrators (sysadmins) are responsible for the day-to-day operations of production systems and services. One of the critical roles of sysadmins is to ensure that operational services are available round the clock. For that, they have to carefully plan backup policies, disaster management strategies, scheduled maintenance, security audits, etc. Like every other discipline, sysadmins have their tools of the trade. Utilizing proper tools in the right case at the right time can help maintain the health of operating systems with minimal service interruptions and maximum uptime.
How to configure a network printer and scanner on Ubuntu desktop
================================================================================
In a [previous article][1] (translator's note: it appeared in the original source on 2014-08-12; if its translation has been published, replace this link with the translated one), we discussed how to install several kinds of printers (and also a network scanner) on a Linux server. Today we will deal with the other end of the line: how to access the network printer/scanner devices from a desktop client.
### Network Environment ###
For this setup, our server's (Debian Wheezy 7.2) IP address is 192.168.0.10, and our client's (Ubuntu 12.04) IP address is 192.168.0.105. Note that both boxes are on the same network (192.168.0.0/24). If we want to allow printing from other networks, we need to modify the following section in the cupsd.conf file on the server:
<Location />
Order allow,deny
Allow localhost
Allow from XXX.YYY.ZZZ.*
</Location>
(in the above example, we grant access to the printer from localhost and from any system whose IPv4 address starts with XXX.YYY.ZZZ)
To verify which printers are available on our server, we can either use the lpstat command on the server, or browse to the https://192.168.0.10:631/printers page.
root@debian:~# lpstat -a
----------
EPSON_Stylus_CX3900 accepting requests since Mon 18 Aug 2014 10:49:33 AM WARST
PDF accepting requests since Mon 06 May 2013 04:46:11 PM WARST
SamsungML1640Series accepting requests since Wed 13 Aug 2014 10:13:47 PM WARST
![](https://farm4.staticflickr.com/3903/14777969919_7b7b25a4a4_z.jpg)
### Installing Network Printers in Ubuntu Desktop ###
In our Ubuntu 12.04 client, we will open the "Printing" menu (Dash -> Printing). Note that in other distributions the name may differ a little (such as "Printers" or "Print & Fax", for example):
![](https://farm4.staticflickr.com/3837/14964314992_d8bd0c0d04_o.png)
No printers have been added to our Ubuntu client yet:
![](https://farm4.staticflickr.com/3887/14941655516_80430529b5_o.png)
Here are the steps to install a network printer on Ubuntu desktop client.
**1)** The "Add" button will fire up the "New Printer" menu. We will choose "Network printer" -> "Find Network Printer" and enter the IP address of our server, then click "Find":
![](https://farm6.staticflickr.com/5581/14777977730_74c29a99b2_z.jpg)
**2)** At the bottom we will see the names of the available printers. Let's choose the Samsung printer and press "Forward":
![](https://farm6.staticflickr.com/5585/14941655566_c1539a3ea0.jpg)
**3)** We will be asked to fill in some information about our printer. When we're done, we'll click on "Apply":
![](https://farm4.staticflickr.com/3908/14941655526_0982628fc9_z.jpg)
**4)** We will then be asked whether we want to print a test page. Let's click on "Print test page":
![](https://farm4.staticflickr.com/3853/14964651435_cc83bb35aa.jpg)
The print job was created with local id 2:
![](https://farm6.staticflickr.com/5562/14777977760_b01c5338f2.jpg)
**5)** Using our server's CUPS web interface, we can observe that the print job has been submitted successfully (Printers -> SamsungML1640Series -> Show completed jobs):
![](https://farm4.staticflickr.com/3887/14778110127_359009cbbc_z.jpg)
We can also display this same information by running the following command on the printer server:
root@debian:~# cat /var/log/cups/page_log | grep -i samsung
----------
SamsungML1640Series root 27 [13/Aug/2014:22:15:34 -0300] 1 1 - localhost Test Page - -
SamsungML1640Series gacanepa 28 [18/Aug/2014:11:28:50 -0300] 1 1 - 192.168.0.105 Test Page - -
SamsungML1640Series gacanepa 29 [18/Aug/2014:11:45:57 -0300] 1 1 - 192.168.0.105 Test Page - -
The page_log log file shows every page that has been printed, along with the user who sent the print job, the date & time, and the client's IPv4 address.
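Because page_log is plain text with one line per page, standard tools can summarize it; here is a small sketch that totals copies per user, using the sample lines shown above (field 2 is the user, field 7 the number of copies):

```shell
# Count pages printed per user from a page_log excerpt.
# The bracketed timestamp occupies fields 4 and 5, so the user is
# field 2 and the number of copies is field 7.
printf '%s\n' \
  'SamsungML1640Series root 27 [13/Aug/2014:22:15:34 -0300] 1 1 - localhost Test Page - -' \
  'SamsungML1640Series gacanepa 28 [18/Aug/2014:11:28:50 -0300] 1 1 - 192.168.0.105 Test Page - -' \
  'SamsungML1640Series gacanepa 29 [18/Aug/2014:11:45:57 -0300] 1 1 - 192.168.0.105 Test Page - -' |
  awk '{pages[$2] += $7} END {for (u in pages) print u, pages[u]}' | sort
# gacanepa 2
# root 1
```

On the server itself you would feed /var/log/cups/page_log into the same awk one-liner.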
To install the Epson inkjet and PDF printers, we need to repeat steps 1 through 5, and choose the right print queue each time. For example, in the image below we are selecting the PDF printer:
![](https://farm4.staticflickr.com/3926/14778046648_c094c8422c_o.png)
However, please note that according to the [CUPS-PDF documentation][2], by default:
> PDF files will be placed in subdirectories named after the owner of the print job. In case the owner cannot be identified (i.e. does not exist on the server) the output is placed in the directory for anonymous operation (if not disabled in cups-pdf.conf - defaults to /var/spool/cups-pdf/ANONYMOUS/).
These default directories can be modified by changing the value of the **Out** and **AnonDirName** variables in the /etc/cups/cups-pdf.conf file. Here, ${HOME} is expanded to the user's home directory:
Out ${HOME}/PDF
AnonDirName /var/spool/cups-pdf/ANONYMOUS
### Network Printing Examples ###
#### Example #1 ####
Printing from Ubuntu 12.04, logged on locally as gacanepa (an account with the same name exists on the printer server).
![](https://farm4.staticflickr.com/3845/14778046698_57b6e552f3_z.jpg)
After printing to the PDF printer, let's check the contents of the /home/gacanepa/PDF directory on the printer server:
root@debian:~# ls -l /home/gacanepa/PDF
----------
total 368
-rw------- 1 gacanepa gacanepa 279176 Aug 18 13:49 Test_Page.pdf
-rw------- 1 gacanepa gacanepa 7994 Aug 18 13:50 Untitled1.pdf
-rw------- 1 gacanepa gacanepa 74911 Aug 18 14:36 Welcome_to_Conference_-_Thomas_S__Monson.pdf
The PDF files are created with permissions set to 600 (-rw-------), which means that only the owner (gacanepa in this case) can have access to them. We can change this behavior by editing the value of the **UserUMask** variable in the /etc/cups/cups-pdf.conf file. For example, a umask of 0033 will cause the PDF printer to create files with all permissions for the owner, but read-only privileges to all others.
root@debian:~# grep -i UserUMask /etc/cups/cups-pdf.conf
----------
### Key: UserUMask
UserUMask 0033
For those unfamiliar with umask (aka the user file-creation mode mask), it acts as a set of permissions that controls the default file permissions set for new files when they are created. Given a certain umask, the final file permissions are calculated by performing a bitwise AND between the file base permissions (0666) and the bitwise complement of the umask. Thus, for a umask of 0033, the default permissions for new files will be NOT(0033) AND 0666 = 0644 (read and write for the owner, read-only for all others).
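The arithmetic can be checked directly in the shell; a quick sketch:

```shell
# Compute the default mode for new files: base permissions AND NOT(umask).
base=0666   # base permissions for regular files (leading 0 = octal)
mask=0033   # the UserUMask value discussed above
printf 'resulting mode: %04o\n' $(( base & ~mask ))
# resulting mode: 0644
```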
### Example #2 ###
Printing from Ubuntu 12.04, logged on locally as jdoe (an account with the same name doesn't exist on the server).
![](https://farm4.staticflickr.com/3907/14964315142_a71d8a8aef_z.jpg)
root@debian:~# ls -l /var/spool/cups-pdf/ANONYMOUS
----------
total 5428
-rw-rw-rw- 1 nobody nogroup 5543070 Aug 18 15:57 Linux_-_Wikipedia__the_free_encyclopedia.pdf
The PDF files are created with permissions set to 666 (-rw-rw-rw-), which means that everyone has access to them. We can change this behavior by editing the value of the **AnonUMask** variable in the /etc/cups/cups-pdf.conf file.
At this point, you may be wondering about this: Why bother to install a network PDF printer when most (if not all) current Linux desktop distributions come with a built-in "Print to file" utility that allows users to create PDF files on-the-fly?
There are a couple of benefits of using a network PDF printer:
- A network printer (of whatever kind) lets you print directly from the command line without having to open the file first.
- In a network with other operating systems installed on the clients, a network PDF printer spares the system administrator from having to install a PDF creator utility on each individual machine (and avoids the danger of letting end users install such tools).
- A network PDF printer allows printing directly to a network share with configurable permissions, as we have seen.
### Installing a Network Scanner in Ubuntu Desktop ###
Here are the steps to install and access a network scanner from an Ubuntu desktop client. It is assumed that the network scanner server is already up and running as described [here][3].
**1)** Let us first check whether there is a scanner available on our Ubuntu client host. Without any prior setup, you will see the message saying that "No scanners were identified."
$ scanimage -L
![](https://farm4.staticflickr.com/3906/14777977850_1ec7994324_z.jpg)
**2)** Now we need to enable saned daemon which comes pre-installed on Ubuntu desktop. To enable it, we need to edit the /etc/default/saned file, and set the RUN variable to yes:
$ sudo vim /etc/default/saned
----------
# Set to yes to start saned
RUN=yes
**3)** Let's edit the /etc/sane.d/net.conf file, and add the IP address of the server where the scanner is installed:
![](https://farm6.staticflickr.com/5581/14777977880_c865b0df95_z.jpg)
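For reference, a minimal /etc/sane.d/net.conf for this setup could look as follows (192.168.0.10 is the scanner server from this series; `connect_timeout` is an optional setting of the SANE net backend):

```
# /etc/sane.d/net.conf -- hosts running saned, one per line
connect_timeout = 60
192.168.0.10
```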
**4)** Restart saned:
$ sudo service saned restart
**5)** Let's see if the scanner is available now:
![](https://farm4.staticflickr.com/3839/14964651605_241482f856_z.jpg)
Now we can open "Simple Scan" (or other scanning utility) and start scanning documents. We can rotate, crop, and save the resulting image:
![](https://farm6.staticflickr.com/5589/14777970169_73dd0e98e3_z.jpg)
### Summary ###
Having one or more network printers and scanners is a nice convenience in any office or home network, and it offers several advantages at the same time. To name a few:
- Multiple users (connecting from different platforms / places) are able to send print jobs to the printer's queue.
- Cost and maintenance savings can be achieved due to hardware sharing.
I hope this article helps you make use of those advantages.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/configure-network-printer-scanner-ubuntu-desktop.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://xmodulo.com/2014/08/usb-network-printer-and-scanner-server-debian.html
[2]:http://www.cups-pdf.de/documentation.shtml
[3]:http://xmodulo.com/2014/08/usb-network-printer-and-scanner-server-debian.html#scanner
[bazz2 bazz2 bazz2]
20 Postfix Interview Questions & Answers
================================================================================
### Q:1 What is postfix and default port used for postfix ? ###
Ans: Postfix is an open-source MTA (Mail Transfer Agent) that is used to route and deliver email. Postfix is an alternative to the widely used Sendmail MTA. The default port for Postfix is 25.
### Q:2 What is the difference between Postfix & Sendmail ? ###
Ans: Postfix uses a modular approach and is composed of multiple independent executables, whereas Sendmail has a more monolithic design utilizing a single, always-running daemon.
### Q:3 What is MTA and its role in mailing system ? ###
Ans: MTA stands for Mail Transfer Agent. An MTA receives and delivers email, determines message routing, and performs any address rewriting. Locally delivered messages are handed off to an MDA for final delivery. Examples: Qmail, Postfix, Sendmail.
### Q:4 What is MDA ? ###
Ans: MDA stands for Mail Delivery Agent. An MDA is a program that handles final delivery of messages to a system's local recipients. MDAs can often filter or categorize messages upon delivery. An MDA might also determine that a message must be forwarded to another email address. Example: Procmail.
### Q:5 What is MUA ? ###
Ans: MUA stands for Mail User Agent. An MUA is the email client software used to compose, send, and retrieve email messages. It sends messages through an MTA and retrieves messages from a mail store, either directly or through a POP/IMAP server. Examples: Outlook, Thunderbird, Evolution.
### Q:6 What is the use of postmaster account in Mailserver ? ###
Ans: An email administrator is commonly referred to as a postmaster. An individual with postmaster responsibilities makes sure that the mail system is working correctly, makes configuration changes, and adds/removes email accounts, among other things. You must have a postmaster alias at every domain for which you handle email, directing messages to the correct person or persons.
### Q:7 What are the important daemons in postfix ? ###
Ans: Below is a list of important daemons in the Postfix mail server:
- **master**: The master daemon is the brain of the Postfix mail system. It spawns all other daemons.
- **smtpd**: The smtpd daemon (server) handles incoming connections.
- **smtp**: The smtp client handles outgoing connections.
- **qmgr**: The qmgr daemon is the heart of the Postfix mail system. It processes and controls all messages in the mail queues.
- **local**: The local program is Postfix's own local delivery agent. It stores messages in mailboxes.
### Q:8 What are the configuration files of postfix server ? ###
Ans: There are two main configuration files in Postfix:
- **/etc/postfix/main.cf** : This file holds global configuration options. They will be applied to all instances of a daemon, unless they are overridden in master.cf
- **/etc/postfix/master.cf** : This file defines runtime environment for daemons attached to services. Runtime behavior defined in main.cf may be overridden by setting service specific options.
### Q:9 How to restart the postfix service & make it enable across reboot ? ###
Ans: Use the command "service postfix restart" to restart the service, and "chkconfig postfix on" to make it persist across reboots.
### Q:10 How to check the mail's queue in postfix ? ###
Ans: Postfix maintains two queues: the pending (active) mail queue and the deferred mail queue. The deferred queue holds mail that has soft-failed and should be retried (temporary failures); Postfix retries the deferred queue at set intervals (configurable, 5 minutes by default).
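The retry interval mentioned above corresponds to the `queue_run_delay` parameter; a sketch of how it would appear in main.cf (300s is the modern Postfix default, i.e. the 5 minutes mentioned above):

```
# main.cf: how often the queue manager scans the deferred queue
queue_run_delay = 300s
```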
To display the list of queued mail:
# postqueue -p
To save the output of the above command:
# postqueue -p > /mnt/queue-backup.txt
To tell Postfix to process the queue now:
# postqueue -f
### Q:11 How to delete mails from the queue in postfix ? ###
Ans: Use the command below to delete all queued mail:
# postsuper -d ALL
To delete only deferred mail from the queue, use:
# postsuper -d ALL deferred
### Q:12 How to check postfix configuration from the command line ? ###
Ans: Using the command 'postconf -n', we can see the current configuration of Postfix, excluding the lines which are commented out.
### Q:13 Which command is used to see live mail logs in postfix ? ###
Ans: Use the command 'tail -f /var/log/maillog' or 'tailf /var/log/maillog'
### Q:14 How to send a test mail from command line ? ###
Ans: Use the command below to send a test mail from Postfix itself:
# echo "Test mail from postfix" | mail -s "Plz ignore" info@something.com
### Q:15 What is an Open mail relay ? ###
Ans: An open mail relay is an SMTP server configured in such a way that it allows anyone on the Internet to send e-mail through it, not just mail destined to or originating from known users. This used to be the default configuration in many mail servers; indeed, it was the way the Internet was initially set up, but open mail relays have become unpopular because of their exploitation by spammers and worms.
### Q:16 What is relay host in postfix ? ###
Ans: A relay host is an SMTP address which, if set in the Postfix configuration file, causes all outgoing mail to be relayed through that SMTP server.
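A minimal, hypothetical main.cf fragment illustrating a relay host (the hostname is made up; the square brackets tell Postfix to skip the MX lookup):

```
# Route all outbound mail through an upstream smarthost
relayhost = [smtp.example.com]:587
```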
### Q:17 What is Greylisting ? ###
Ans: Greylisting is a method of defending e-mail users against spam. A mail transfer agent (MTA) using greylisting will "temporarily reject" any email from a sender it does not recognize. If the mail is legitimate the originating server will, after a delay, try again and, if sufficient time has elapsed, the email will be accepted.
### Q:18 What is the importance of SPF records in mail servers ? ###
Ans: SPF (Sender Policy Framework) is a system to help domain owners specify the servers which are supposed to send mail from their domain. The aim is that other mail systems can then check to make sure the server sending email from that domain is authorized to do so reducing the chance of email 'spoofing', phishing schemes and spam!
### Q:19 What is the use of Domain Keys(DKIM) in mail servers ? ###
Ans: DomainKeys is an e-mail authentication system designed to verify the DNS domain of an e-mail sender and the message integrity. The DomainKeys specification has adopted aspects of Identified Internet Mail to create an enhanced protocol called DomainKeys Identified Mail (DKIM).
### Q:20 What is the role of Anti-Spam SMTP Proxy (ASSP) in mail server ? ###
Ans: ASSP is a gateway server which is installed in front of your MTA and implements auto-whitelists, self-learning Bayesian filtering, Greylisting, DNSBL, DNSWL, URIBL, SPF, SRS, Backscatter, virus scanning, attachment blocking, Senderbase and multiple other filter methods.
--------------------------------------------------------------------------------
via: http://www.linuxtechi.com/postfix-interview-questions-answers/
作者:[Pradeep Kumar][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxtechi.com/author/pradeep/


@ -1,5 +1,3 @@
chi1shi2 is translating.
How to use on-screen virtual keyboard on Linux
================================================================================
On-screen virtual keyboard is an alternative input method that can replace a real hardware keyboard. Virtual keyboard may be a necessity in various cases. For example, your hardware keyboard is just broken; you do not have enough keyboards for extra machines; your hardware does not have an available port left to connect a keyboard; you are a disabled person with difficulty in typing on a real keyboard; or you are building a touchscreen-based web kiosk.


@ -1,6 +1,3 @@
>>Linchenguang is translating
>>Requesting an extension
Linux TCP/IP networking: net-tools vs. iproute2
================================================================================
Many sysadmins still manage and troubleshoot various network configurations by using a combination of ifconfig, route, arp and netstat command-line tools, collectively known as net-tools. Originally rooted in the BSD TCP/IP toolkit, net-tools was developed to configure network functionality of older Linux kernels. Its development in the Linux community ceased in 2001. Some Linux distros such as Arch Linux and CentOS/RHEL 7 have already deprecated net-tools in favor of iproute2.


@ -1,4 +1,3 @@
How to create a software RAID-1 array with mdadm on Linux
================================================================================
Redundant Array of Independent Disks (RAID) is a storage technology that combines multiple hard disks into a single logical unit to provide fault-tolerance and/or improve disk I/O performance. Depending on how data is stored in an array of disks (e.g., with striping, mirroring, parity, or any combination thereof), different RAID levels are defined (e.g., RAID-0, RAID-1, RAID-5, etc). RAID can be implemented either in software or with a hardware RAID card. On modern Linux, basic software RAID functionality is available by default.


@ -1,3 +1,4 @@
luoyutiantan
Shellshock: How to protect your Unix, Linux and Mac servers
================================================================================
> **Summary**: The Unix/Linux Bash security hole can be deadly to your servers. Here's what you need to worry about, how to see if you can be attacked, and what to do if your shields are down.
@ -97,4 +98,4 @@ via: http://www.zdnet.com/shellshock-how-to-protect-your-unix-linux-and-mac-serv
[15]:http://apple.stackexchange.com/questions/146849/how-do-i-recompile-bash-to-avoid-the-remote-exploit-cve-2014-6271-and-cve-2014-7
[16]:https://bugzilla.redhat.com/show_bug.cgi?id=1141597#c27
[17]:http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7169
[18]:http://www.inmotionhosting.com/support/website/modsecurity/what-is-modsecurity-and-why-is-it-important
[18]:http://www.inmotionhosting.com/support/website/modsecurity/what-is-modsecurity-and-why-is-it-important


@ -1,121 +0,0 @@
wangjiezhe translating
Using GIT to backup your website files on linux
================================================================================
![](http://techarena51.com/wp-content/uploads/2014/09/git_logo-1024x480-580x271.png)
Well, not exactly Git, but software based on Git known as BUP. I generally use rsync to backup my files and that has worked fine so far. The only problem or drawback is that you cannot restore your files to a particular point in time. Hence, I started looking for an alternative and found BUP, a git-based software which stores your data in repositories and gives you the option to restore data to a particular point in time.
With BUP you will first need to initialize an empty repository, then take a backup of all your files. When BUP takes a backup it creates a restore point which you can later restore to. It also creates an index of all your files; this index contains file attributes and checksums. When another backup is scheduled, BUP compares the files against this index and only saves data if anything has changed. This saves you a lot of space.
### Installing BUP (Tested on Centos 6 & 7) ###
Ensure you have RPMFORGE and EPEL repos installed.
[techarena51@vps ~]$ sudo yum groupinstall "Development Tools"
[techarena51@vps ~]$ sudo yum install python python-devel
[techarena51@vps ~]$ sudo yum install fuse-python pyxattr pylibacl
[techarena51@vps ~]$ sudo yum install perl-Time-HiRes
[techarena51@vps ~]$ git clone git://github.com/bup/bup
[techarena51@vps ~]$ cd bup
[techarena51@vps ~]$ make
[techarena51@vps ~]$ make test
[techarena51@vps ~]$ sudo make install
For debian/ubuntu users, you can do “apt-get build-dep bup” on recent versions; for more information check out https://github.com/bup/bup
You may get errors on CentOS 7 at “make test”, but you can continue to run make install.
The first step like git is to initialize an empty repository.
[techarena51@vps ~]$ bup init
By default, bup will store its repository under “~/.bup” but you can change that by setting the “export BUP_DIR=/mnt/user/bup” environment variable
Next you create an index of all files. The index, as I mentioned earlier, stores a listing of files, their attributes, and their git object ids (sha1 hashes). (Attributes include soft links and permissions, as well as the immutable bit.)
bup index /path/to/file
bup save -n nameofbackup /path/to/file
#Example
[techarena51@vps ~]$ bup index /var/www/html
Indexing: 7973, done (4398 paths/s).
bup: merging indexes (7980/7980), done.
[techarena51@vps ~]$ bup save -n techarena51 /var/www/html
Reading index: 28, done.
Saving: 100.00% (4/4k, 28/28 files), done.
bloom: adding 1 file (7 objects).
Receiving index from server: 1268/1268, done.
bloom: adding 1 file (7 objects).
“BUP save” will split all the contents of the file into chunks and store them as objects. The “-n” option takes the name of backup.
You can check a list of backups as well as a list of backed up files.
[techarena51@vps ~]$ bup ls
local-etc techarena51 test
#Check for a list of backups available for my site
[techarena51@vps ~]$ bup ls techarena51
2014-09-24-064416 2014-09-24-071814 latest
#Check for the files available in these backups
[techarena51@vps ~]$ bup ls techarena51/2014-09-24-064416/var/www/html
apc.php techarena51.com wp-config-sample.php wp-load.php
Backing up files on the same server is never a good option. BUP allows you to remotely backup your website files, you however need to ensure that your SSH keys and BUP are installed on the remote server.
bup index path/to/dir
bup save -r user@remote-vps.com: -n backupname path/to/dir
### Example: Backing up the “/var/www/html” directory ###
[techarena51@vps ~]$ bup index /var/www/html
[techarena51@vps ~]$ bup save -r user@remotelinuxvps.com: -n techarena51 /var/www/html
Reading index: 28, done.
Saving: 100.00% (4/4k, 28/28 files), done.
bloom: adding 1 file (7 objects).
Receiving index from server: 1268/1268, done.
bloom: adding 1 file (7 objects).
### Restoring your Backup ###
Log into the remote server and type the following
[techarena51@vps ~]$ bup restore -C ./backup techarena51/latest
#Restore an older version of the entire working dir elsewhere
[techarena51@vps ~]$ bup restore -C /tmp/bup-out /testrepo/2013-09-29-195827
#Restore one individual file from an old backup
[techarena51@vps ~]$ bup restore -C /tmp/bup-out /testrepo/2013-09-29-201328/root/testbup/binfile1.bin
The only drawback is that you cannot restore files to another server; you have to manually copy the files via SCP or even rsync.
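For example, a restored tree could be pushed to another server with rsync (the host and paths below are placeholders):

```
rsync -avz /tmp/bup-out/ user@webserver.example.com:/var/www/html/
```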
View your backups via an integrated web server
bup web
#specific port
bup web :8181
You can run bup along with a shell script and a cron job once every day.
#!/bin/bash
bup index /var/www/html
bup save -r user@remote-vps.com: -n techarena51 /var/www/html
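Assuming the script above is saved as, say, /usr/local/bin/bup-backup.sh and made executable (both the path and the schedule here are placeholders), a crontab entry could run it daily:

```
# run the bup backup every day at 02:30; script path and log file are examples
30 2 * * * /usr/local/bin/bup-backup.sh >> /var/log/bup-backup.log 2>&1
```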
BUP may not be perfect, but it gets the job done pretty well. I would definitely like to see more development on this project and hopefully a remote restore as well.
You may also like to read using [inotify-tools][1] for real time file syncing.
--------------------------------------------------------------------------------
via: http://techarena51.com/index.php/using-git-backup-website-files-on-linux/
作者:[Leo G][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://techarena51.com/
[1]:http://techarena51.com/index.php/inotify-tools-example/


@ -1,138 +0,0 @@
johnhoow translating...
# Practical Lessons in Peer Code Review #
Millions of years ago, apes descended from the trees, evolved opposable thumbs and—eventually—turned into human beings.
We see mandatory code reviews in a similar light: something that separates human from beast on the rolling grasslands of the software
development savanna.
Nonetheless, I sometimes hear comments like these from our team members:
- "Code reviews on this project are a waste of time."
- "I don't have time to do code reviews."
- "My release is delayed because my dastardly colleague hasn't done my review yet."
- "Can you believe my colleague wants me to change something in my code? Please explain to them that the delicate balance of the universe will be disrupted if my pristine, elegant code is altered in any way."
### Why do we do code reviews? ###
Let us remember, first of all, why we do code reviews. One of the most important goals of any professional software developer is to
continually improve the quality of their work. Even if your team is packed with talented programmers, you aren't going to distinguish
yourselves from a capable freelancer unless you work as a team. Code reviews are one of the most important ways to achieve this. In
particular, they:
- provide a second pair of eyes to find defects and better ways of doing something.
- ensure that at least one other person is familiar with your code.
- help train new staff by exposing them to the code of more experienced developers.
- promote knowledge sharing by exposing both the reviewer and reviewee to the good ideas and practices of the other.
- encourage developers to be more thorough in their work since they know it will be reviewed by one of their colleagues.
### Doing thorough reviews ###
However, these goals cannot be achieved unless appropriate time and care are devoted to reviews. Just scrolling through a patch, making sure
that the indentation is correct and that all the variables use lower camel case, does not constitute a thorough code review. It is
instructive to consider pair programming, which is a fairly popular practice and adds an overhead of 100% to all development time, as the
baseline for code review effort. You can spend a lot of time on code reviews and still use much less overall engineer time than pair
programming.
My feeling is that something around 25% of the original development time should be spent on code reviews. For example, if a developer takes
two days to implement a story, the reviewer should spend roughly four hours reviewing it.
Of course, it isn't primarily important how much time you spend on a review as long as the review is done correctly. Specifically, you must
understand the code you are reviewing. This doesn't just mean that you know the syntax of the language it is written in. It means that you
must understand how the code fits into the larger context of the application, component or library it is part of. If you don't grasp all the
implications of every line of code, then your reviews are not going to be very valuable. This is why good reviews cannot be done quickly: it
takes time to investigate the various code paths that can trigger a given function, to ensure that third-party APIs are used correctly
(including any edge cases) and so forth.
In addition to looking for defects or other problems in the code you are reviewing, you should ensure that:
- All necessary tests are included.
- Appropriate design documentation has been written.
Even developers who are good about writing tests and documentation don't always remember to update them when they change their code. A
gentle nudge from the code reviewer when appropriate is vital to ensure that they don't go stale over time.
### Preventing code review overload ###
If your team does mandatory code reviews, there is the danger that your code review backlog will build up to the point where it is
unmanageable. If you don't do any reviews for two weeks, you can easily have several days of reviews to catch up on. This means that your
own development work will take a large and unexpected hit when you finally decide to deal with them. It also makes it a lot harder to do
good reviews since proper code reviews require intense and sustained mental effort. It is difficult to keep this up for days on end.
For this reason, developers should strive to empty their review backlog every day. One approach is to tackle reviews first thing in the
morning. By doing all outstanding reviews before you start your own development work, you can keep the review situation from getting out of
hand. Some might prefer to do reviews before or after the midday break or at the end of the day. Whenever you do them, by considering code
reviews as part of your regular daily work and not a distraction, you avoid:
- Not having time to deal with your review backlog.
- Delaying a release because your reviews aren't done yet.
- Posting reviews that are no longer relevant since the code has changed so much in the meantime.
- Doing poor reviews since you have to rush through them at the last minute.
### Writing reviewable code ###
The reviewer is not always the one responsible for out-of-control review backlogs. If my colleague spends a week adding code willy-nilly
across a large project then the patch they post is going to be really hard to review. There will be too much to get through in one session.
It will be difficult to understand the purpose and underlying architecture of the code.
This is one of many reasons why it is important to split your work into manageable units. We use scrum methodology so the appropriate unit
for us is the story. By making an effort to organize our work by story and submit reviews that pertain only to the specific story we are
working on, we write code that is much easier to review. Your team may use another methodology but the principle is the same.
There are other prerequisites to writing reviewable code. If there are tricky architectural decisions to be made, it makes sense to meet
with the reviewer beforehand to discuss them. This will make it much easier for the reviewer to understand your code, since they will know
what you are trying to achieve and how you plan to achieve it. This also helps avoid the situation where you have to rewrite large swathes
of code after the reviewer suggests a different and better approach.
Project architecture should be described in detail in your design documentation. This is important anyway since it enables a new project
member to get up to speed and understand the existing code base. It has the further advantage of helping a reviewer to do their job
properly. Unit tests are also helpful in illustrating to the reviewer how components should be used.
If you are including third-party code in your patch, commit it separately. It is much harder to review code properly when 9000 lines of
jQuery are dropped into the middle.
One of the most important steps for creating reviewable code is to annotate your code reviews. This means that you go through the review
yourself and add comments anywhere you feel that this will help the reviewer to understand what is going on. I have found that annotating
code takes relatively little time (often just a few minutes) and makes a massive difference in how quickly and well the code can be
reviewed. Of course, code comments have many of the same advantages and should be used where appropriate, but often a review annotation
makes more sense. As a bonus, studies have shown that developers find many defects in their own code while rereading and annotating it.
### Large code refactorings ###
Sometimes it is necessary to refactor a code base in a way that affects many components. In the case of a large application, this can take
several days (or more) and result in a huge patch. In these cases a standard code review may be impractical.
The best solution is to refactor code incrementally. Figure out a partial change of reasonable scope that results in a working code base and
brings you in the direction you want to go. Once that change has been completed and a review posted, proceed to a second incremental change
and so forth until the full refactoring has been completed. This might not always be possible, but with thought and planning it is usually
realistic to avoid massive monolithic patches when refactoring. It might take more time for the developer to refactor in this way, but it
also leads to better quality code as well as making reviews much easier.
If it really isn't possible to refactor code incrementally (which probably says something about how well the original code was written and
organized), one solution might be to do pair programming instead of code reviews while working on the refactoring.
### Resolving disputes ###
Your team is doubtless made up of intelligent professionals, and in almost all cases it should be possible to come to an agreement when
opinions about a specific coding question differ. As a developer, keep an open mind and be prepared to compromise if your reviewer prefers a
different approach. Don't take a proprietary attitude to your code and don't take review comments personally. Just because someone feels
that you should refactor some duplicated code into a reusable function, it doesn't mean that you are any less of an attractive, brilliant
and charming individual.
As a reviewer, be tactful. Before suggesting changes, consider whether your proposal is really better or just a matter of taste. You will
have more success if you choose your battles and concentrate on areas where the original code clearly requires improvement. Say things like
"it might be worth considering..." or "some people recommend..." instead of "my pet hamster could write a more efficient sorting algorithm
than this."
If you really can't find middle ground, ask a third developer who both of you respect to take a look and give their opinion.
--------------------------------------------------------------------------------
via: http://blog.salsitasoft.com/practical-lessons-in-peer-code-review/
作者:[Matt][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -0,0 +1,274 @@
Translator CHINAANSHE is translating!!
How to configure HTTP load balancer with HAProxy on Linux
================================================================================
Increased demand on web-based applications and services is putting more and more weight on the shoulders of IT administrators. When faced with unexpected traffic spikes, organic traffic growth, or internal challenges such as hardware failures and urgent maintenance, your web application must remain available, no matter what. Even modern devops and continuous delivery practices can threaten the reliability and consistent performance of your web service.
Unpredictability or inconsistent performance is not something you can afford. But how can we eliminate these downsides? In most cases a proper load balancing solution will do the job. And today I will show you how to set up an HTTP load balancer using [HAProxy][1].
### What is HTTP load balancing? ###
HTTP load balancing is a networking solution responsible for distributing incoming HTTP or HTTPS traffic among servers hosting the same application content. By balancing application requests across multiple available servers, a load balancer prevents any application server from becoming a single point of failure, thus improving overall application availability and responsiveness. It also allows you to easily scale in/out an application deployment by adding or removing extra application servers with changing workloads.
### Where and when to use load balancing? ###
As load balancers improve server utilization and maximize availability, you should use one whenever your servers start to be under high load. Or if you are just planning your architecture for a bigger project, it's a good habit to plan the usage of a load balancer upfront. It will prove itself useful in the future when you need to scale your environment.
### What is HAProxy? ###
HAProxy is a popular open-source load balancer and proxy for TCP/HTTP servers on GNU/Linux platforms. Designed in a single-threaded event-driven architecture, HAproxy is capable of handling [10G NIC line rate][2] easily, and is being extensively used in many production environments. Its features include automatic health checks, customizable load balancing algorithms, HTTPS/SSL support, session rate limiting, etc.
### What are we going to achieve in this tutorial? ###
In this tutorial, we will go through the process of configuring an HAProxy-based load balancer for HTTP web servers.
### Prerequisites ###
You will need at least one, or preferably two web servers to verify functionality of your load balancer. We assume that backend HTTP web servers are already [up and running][3].
### Install HAProxy on Linux ###
For most distributions, we can install HAProxy using your distribution's package manager.
#### Install HAProxy on Debian ####
In Debian we need to add backports for Wheezy. To do that, please create a new file called "backports.list" in /etc/apt/sources.list.d, with the following content:
deb http://cdn.debian.net/debian wheezy-backports main
Refresh your repository data and install HAProxy.
# apt-get update
# apt-get install haproxy
#### Install HAProxy on Ubuntu ####
# apt-get install haproxy
#### Install HAProxy on CentOS and RHEL ####
# yum install haproxy
### Configure HAProxy ###
In this tutorial, we assume that there are two HTTP web servers up and running with IP addresses 192.168.100.2 and 192.168.100.3. We also assume that the load balancer will be configured at a server with IP address 192.168.100.4.
To make HAProxy functional, you need to change a number of items in /etc/haproxy/haproxy.cfg. These changes are described in this section. In case some configuration differs for different GNU/Linux distributions, it will be noted in the paragraph.
#### 1. Configure Logging ####
One of the first things you should do is to set up proper logging for your HAProxy, which will be useful for future debugging. Log configuration can be found in the global section of /etc/haproxy/haproxy.cfg. The following are distro-specific instructions for configuring logging for HAProxy.
**CentOS or RHEL:**
To enable logging on CentOS/RHEL, replace:
log 127.0.0.1 local2
with:
log 127.0.0.1 local0
The next step is to set up separate log files for HAProxy in /var/log. For that, we need to modify our current rsyslog configuration. To make the configuration simple and clear, we will create a new file called haproxy.conf in /etc/rsyslog.d/ with the following content.
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%msg%\n"
local0.=info -/var/log/haproxy.log;Haproxy
local0.notice -/var/log/haproxy-status.log;Haproxy
local0.* ~
This configuration will separate all HAProxy messages based on the $template to log files in /var/log. Now restart rsyslog to apply the changes.
# service rsyslog restart
**Debian or Ubuntu:**
To enable logging for HAProxy on Debian or Ubuntu, replace:
log /dev/log local0
log /dev/log local1 notice
with:
log 127.0.0.1 local0
Next, to configure separate log files for HAProxy, edit a file called haproxy.conf (or 49-haproxy.conf in Debian) in /etc/rsyslog.d/ with the following content.
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%msg%\n"
local0.=info -/var/log/haproxy.log;Haproxy
local0.notice -/var/log/haproxy-status.log;Haproxy
local0.* ~
This configuration will separate all HAProxy messages based on the $template to log files in /var/log. Now restart rsyslog to apply the changes.
# service rsyslog restart
#### 2. Setting Defaults ####
The next step is to set default variables for HAProxy. Find the defaults section in /etc/haproxy/haproxy.cfg, and replace it with the following configuration.
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
maxconn 20000
contimeout 5000
clitimeout 50000
srvtimeout 50000
The configuration stated above is recommended for HTTP load balancer use, but it may not be the optimal solution for your environment. In that case, feel free to explore HAProxy man pages to tweak it.
#### 3. Webfarm Configuration ####
Webfarm configuration defines the pool of available HTTP servers. Most of the settings for our load balancer will be placed here. Now we will create some basic configuration, where our nodes will be defined. Replace all of the configuration from the frontend section until the end of the file with the following code:
listen webfarm *:80
mode http
stats enable
stats uri /haproxy?stats
stats realm Haproxy\ Statistics
stats auth haproxy:stats
balance roundrobin
cookie LBN insert indirect nocache
option httpclose
option forwardfor
server web01 192.168.100.2:80 cookie node1 check
server web02 192.168.100.3:80 cookie node2 check
The line "listen webfarm *:80" defines on which interfaces our load balancer will listen. For the sake of the tutorial, I've set that to "*" which makes the load balancer listen on all our interfaces. In a real world scenario, this might be undesirable and should be replaced with an interface that is accessible from the internet.
stats enable
stats uri /haproxy?stats
stats realm Haproxy\ Statistics
stats auth haproxy:stats
The above settings declare that our load balancer statistics can be accessed on http://<load-balancer-IP>/haproxy?stats. The access is secured with a simple HTTP authentication with login name "haproxy" and password "stats". These settings should be replaced with your own credentials. If you don't need to have these statistics available, then completely disable them.
Here is an example of HAProxy statistics.
![](https://farm4.staticflickr.com/3928/15416835905_a678c8f286_c.jpg)
The line "balance roundrobin" defines the type of load balancing we will use. In this tutorial we will use simple round robin algorithm, which is fully sufficient for HTTP load balancing. HAProxy also offers other types of load balancing:
- **leastconn**: gives connections to the server with the lowest number of connections.
- **source**: hashes the source IP address, and divides it by the total weight of the running servers to decide which server will receive the request.
- **uri**: the left part of the URI (before the question mark) is hashed and divided by the total weight of the running servers. The result determines which server will receive the request.
- **url_param**: the URL parameter specified in the argument will be looked up in the query string of each HTTP GET request. You can basically lock the request using crafted URL to specific load balancer node.
- **hdr(name)**: the HTTP header <name> will be looked up in each HTTP request and directed to a specific node.
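As a rough illustration (a sketch, not HAProxy's actual implementation), the default round robin choice used in this tutorial simply hands each new request to the next server in the pool, wrapping around at the end:

```shell
#!/bin/bash
# Sketch of round robin scheduling: request i goes to server (i mod N).
# The IPs are the backend addresses used throughout this tutorial.
servers=("192.168.100.2" "192.168.100.3")
pick=()
for i in 0 1 2 3; do
    pick+=("${servers[$((i % ${#servers[@]}))]}")
done
printf '%s\n' "${pick[@]}"   # the two backend IPs alternate
```

With two backends, successive requests simply alternate between them, which matches the behavior verified with curl later in this tutorial.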
The line "cookie LBN insert indirect nocache" makes our load balancer store persistent cookies, which allows us to pinpoint which node from the pool is used for a particular session. These node cookies will be stored with a defined name. In our case, I used "LBN", but you can specify any name you like. The node will store its string as a value for this cookie.
server web01 192.168.100.2:80 cookie node1 check
server web02 192.168.100.3:80 cookie node2 check
The above part is the definition of our pool of web server nodes. Each server is represented with its internal name (e.g., web01, web02), IP address, and a unique cookie string. The cookie string can be defined as anything you want. I am using simple node1, node2 ... node(n).
### Start HAProxy ###
When you are done with the configuration, it's time to start HAProxy and verify that everything is working as intended.
#### Start HAProxy on Centos/RHEL ####
Enable HAProxy to be started after boot and turn it on using:
# chkconfig haproxy on
# service haproxy start
And of course don't forget to enable port 80 in the firewall as follows.
**Firewall on CentOS/RHEL 7:**
# firewall-cmd --permanent --zone=public --add-port=80/tcp
# firewall-cmd --reload
**Firewall on CentOS/RHEL 6:**
Add the following line into /etc/sysconfig/iptables:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
and restart **iptables**:
# service iptables restart
#### Start HAProxy on Debian ####
Start HAProxy with:
# service haproxy start
Don't forget to enable port 80 in the firewall by adding the following line into /etc/iptables.up.rules:
-A INPUT -p tcp --dport 80 -j ACCEPT
#### Start HAProxy on Ubuntu ####
Enable HAProxy to be started after boot by setting "ENABLED" option to "1" in /etc/default/haproxy:
ENABLED=1
Start HAProxy:
# service haproxy start
and enable port 80 in the firewall:
# ufw allow 80
### Test HAProxy ###
To check whether HAproxy is working properly, we can do the following.
First, prepare test.php file with the following content:
<?php
header('Content-Type: text/plain');
echo "Server IP: ".$_SERVER['SERVER_ADDR'];
echo "\nX-Forwarded-for: ".$_SERVER['HTTP_X_FORWARDED_FOR'];
?>
This PHP file will tell us which server (i.e., load balancer) forwarded the request, and what backend web server actually handled the request.
Place this PHP file in the root directory of both backend web servers. Now use curl command to fetch this PHP file from the load balancer (192.168.100.4).
$ curl http://192.168.100.4/test.php
When we run this command multiple times, we should see the following two outputs alternate (due to the round robin algorithm).
Server IP: 192.168.100.2
X-Forwarded-for: 192.168.100.4
----------
Server IP: 192.168.100.3
X-Forwarded-for: 192.168.100.4
If we stop one of the two backend web servers, the curl command should still work, directing requests to the other available web server.
### Summary ###
By now you should have a fully operational load balancer that supplies your web nodes with requests in round robin mode. As always, feel free to experiment with the configuration to make it more suitable for your infrastructure. I hope this tutorial helped you to make your web projects more resistant and available.
As most of you already noticed, this tutorial contains settings for only one load balancer. Which means that we have just replaced one single point of failure with another. In real life scenarios you should deploy at least two or three load balancers to cover for any failures that might happen, but that is out of the scope of this tutorial right now.
If you have any questions or suggestions, feel free to post them in the comments and I will do my best to answer or advise.
--------------------------------------------------------------------------------
via: http://xmodulo.com/haproxy-http-load-balancer-linux.html
作者:[Jaroslav Štěpánek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/jaroslav
[1]:http://www.haproxy.org/
[2]:http://www.haproxy.org/10g.html
[3]:http://xmodulo.com/how-to-install-lamp-server-on-ubuntu.html

View File

@ -0,0 +1,104 @@
[bazz222222222222222222222]
The Why and How of Ansible and Docker
================================================================================
There is a lot of interest from the tech community in both [Docker][1] and [Ansible][2], and I am hoping that after reading this article you will share our enthusiasm. You will also gain a practical insight into using Ansible and Docker for setting up a complete server environment for a Rails application.
Many reading this might be asking, “Why don't you just use Heroku?”. First of all, I can run Docker and Ansible on any host, with any provider. Secondly, I prefer flexibility over convenience. I can run anything in this manner, not just web applications. Last but not least, because I am a tinkerer at heart, I get a kick from understanding how all the pieces fit together. The fundamental building block of Heroku is the Linux Container. The same technology lies at the heart of Docker's versatility. As a matter of fact, one of Docker's mottoes is: “Containerization is the new virtualization”.
### Why Ansible? ###
After 4 years of heavy Chef usage, the **infrastructure as code** mentality became really tedious. I was spending most of my time with the code that was managing my infrastructure, not with the infrastructure itself. Any change, regardless of how small, would require a considerable amount of effort for a relatively small gain. With [Ansible][3], there's data describing infrastructure on one hand, and the constraints of the interactions between various components on the other hand. It's a much simpler model that enables me to move quicker by letting me focus on what makes my infrastructure personal. Similar to the Unix model, Ansible provides simple modules with a single responsibility that can be combined in endless ways.
Ansible has no dependencies other than Python and SSH. It doesn't require any agents to be set up on the remote hosts and it doesn't leave any traces after it runs either. What's more, it comes with an extensive, built-in library of modules for controlling everything from package managers to cloud providers, to databases and everything else in between.
### Why Docker? ###
[Docker][4] is establishing itself as the most reliable and convenient way of deploying a process on a host. This can be anything from mysqld to redis, to a Rails application. Just like git snapshots and distributes code in the most efficient way, Docker does the same with processes. It guarantees that everything required to run that process will be available regardless of the host that it runs on.
A common but understandable mistake is to treat a Docker container as a VM. The [Single Responsibility Principle][5] still applies: running a single process per container gives it a single reason to change, and makes it re-usable and easy to reason about. This model has stood the test of time in the form of the Unix philosophy, and it makes for a solid foundation to build on.
### The Setup ###
Without leaving my terminal, I can have Ansible provision a new instance for me with any of the following: Amazon Web Services, Linode, Rackspace or DigitalOcean. To be more specific, I can have Ansible create a new DigitalOcean 2GB droplet in the Amsterdam 2 region in precisely 1 minute and 25 seconds. In a further 1 minute and 50 seconds I can have the system set up with Docker and a few other personal preferences. Once I have this base system in place, I can deploy my application. Notice that I didn't set up any database or programming language. Docker will handle all application dependencies.
Ansible runs all remote commands via SSH. My SSH keys stored in the local ssh-agent will be shared remotely during Ansible's SSH sessions. When my application code is cloned or updated on remote hosts, no git credentials will be required; the forwarded ssh-agent will be used to authenticate with the git host.
### Docker and application dependencies ###
I find it amusing that most developers are specific about the version of the programming language which their application needs, and about the version of its dependencies in the form of Python packages, Ruby gems or node.js modules, but when it comes to something as important as the database or the message queue, they just use whatever is available in the environment that the application runs in. I believe this is one of the reasons behind the devops movement: developers taking responsibility for the application's environment. Docker makes this task easier and more straightforward by adding a layer of pragmatism and confidence to the existing practices.
My application defines dependencies on processes such as MySQL 5.5 and Redis 2.8 by including the following `.docker_container_dependencies` file:
gerhard/mysql:5.5
gerhard/redis:2.8
The Ansible playbook will notice this file and will instruct Docker to pull the correct images from the Docker index and start them as containers. It also links these service containers to my application container. If you want to find out how Docker container linking works, refer to the [Docker 0.6.5 announcement][6].
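The format of that dependency file is deliberately trivial: one image name per line. As an illustration only (the actual playbook logic is the author's; this helper is hypothetical), reading it boils down to collecting the non-empty lines:

```python
# Hypothetical helper: parse a .docker_container_dependencies file into
# the list of Docker images a playbook task should pull and start.
def read_container_dependencies(text):
    return [line.strip() for line in text.splitlines() if line.strip()]

deps = read_container_dependencies("gerhard/mysql:5.5\ngerhard/redis:2.8\n")
print(deps)  # ['gerhard/mysql:5.5', 'gerhard/redis:2.8']
```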
My application also comes with a Dockerfile which is specific about the Ruby Docker image that is required. As this image is already built, the steps in my Dockerfile have the guarantee that the correct Ruby version will be available to them.
FROM howareyou/ruby:2.0.0-p353
ADD ./ /terrabox
RUN \
. /.profile ;\
rm -fr /terrabox/.git ;\
cd /terrabox ;\
bundle install --local ;\
echo '. /.profile && cd /terrabox && RAILS_ENV=test bundle exec rake db:create db:migrate && bundle exec rspec' > /test-terrabox ;\
echo '. /.profile && cd /terrabox && export RAILS_ENV=production && rake db:create db:migrate && bundle exec unicorn -c config/unicorn.rails.conf.rb' > /run-terrabox ;\
# END RUN
ENTRYPOINT ["/bin/bash"]
CMD ["/run-terrabox"]
EXPOSE 3000
The first step is to copy all my application's code into the Docker image and load the global environment variables added by previous images. The Ruby Docker image, for example, will append PATH configuration which ensures that the correct Ruby version gets loaded.
Next, I remove the git history as this is not useful in the context of a Docker container. I install all the gems and then create a `/test-terrabox` command which will be run by the test-only container. The purpose of this is to have a “canary” which ensures that the application and all its dependencies are properly resolved, that the Docker containers are linked correctly and all tests pass before the actual application container will be started.
The command that gets run when a new web application container gets started is defined in the CMD step. The `/run-terrabox` command was defined as part of the build process, right after the test one.
The last instruction in this application's Dockerfile maps port 3000 from inside the container to an automatically allocated port on the host that runs Docker. This is the port that the reverse proxy or load balancer will use when proxying public requests to my application running inside the Docker container.
### Running a Rails application inside a Docker container ###
For a medium-sized Rails application, with about 100 gems and just as many integration tests running under Rails, this takes 8 minutes and 16 seconds on a 2GB and 2 core instance, without any local Docker images. If I already had Ruby, MySQL & Redis Docker images on that host, this would take 4 minutes and 45 seconds. Furthermore, if I had a master application image to base a new Docker image build of the same application, this would take a mere 2 minutes and 23 seconds. To put this into perspective, it takes me just over 2 minutes to deploy a new version of my Rails application, including dependent services such as MySQL and Redis.
I would like to point out that my application deploys also run a full test suite, which alone takes about a minute end-to-end. Without intending it, Docker became a simple Continuous Integration environment that leaves test-only containers behind for inspection when tests fail, or starts a new application container with the latest version of my application when the test suite passes. All of a sudden, I can validate new code with my customers in minutes, with the guarantee that different versions of my application are isolated from one another, all the way down to the operating system. Unlike traditional VMs, which take minutes to boot, a Docker container takes under a second. Furthermore, once a Docker image is built and tests pass for a specific version of my application, I can have this image pushed into a private Docker registry, waiting to be pulled by other Docker hosts and started as a new Docker container, all within seconds.
### Conclusion ###
Ansible made me re-discover the joy of managing infrastructures. Docker gives me confidence and stability when dealing with the most important step of application development, the delivery phase. In combination, they are unmatched.
To go from no server to a fully deployed Rails application in just under 12 minutes is impressive by any standard. To get a very basic Continuous Integration system for free and be able to preview different versions of an application side-by-side, without affecting the “live” version which runs on the same hosts in any way, is incredibly powerful. This makes me very excited, and having reached the end of the article, I can only hope that you share my excitement.
I gave a talk at the January 2014 London Docker meetup on this subject, [I have shared the slides on Speakerdeck][7].
For more Ansible and Docker content, subscribe to [The Changelog Weekly][8] — it ships every Saturday and regularly includes the weeks best links for both topics.
[Use the Draft repo][9] if you'd like to write a post like this for The Changelog. They'll work with you through the process too.
Until next time, [Gerhard][a].
--------------------------------------------------------------------------------
via: http://thechangelog.com/ansible-docker/
作者:[Gerhard Lazu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://twitter.com/gerhardlazu
[1]:https://www.docker.io/
[2]:https://github.com/ansible/ansible
[3]:http://ansible.com/
[4]:http://docker.io/
[5]:http://en.wikipedia.org/wiki/Single_responsibility_principle
[6]:http://blog.docker.io/2013/10/docker-0-6-5-links-container-naming-advanced-port-redirects-host-integration/
[7]:https://speakerdeck.com/gerhardlazu/ansible-and-docker-the-path-to-continuous-delivery-part-1
[8]:http://thechangelog.com/weekly/
[9]:https://github.com/thechangelog/draft

View File

@ -0,0 +1,87 @@
2q1w2007 translating
How to convert image, audio and video formats on Ubuntu
================================================================================
If you need to work with a variety of image, audio and video files encoded in all sorts of different formats, you are probably using more than one tool to convert among all those heterogeneous media formats. A versatile all-in-one media conversion tool capable of dealing with all the different image/audio/video formats would be awesome.
[Format Junkie][1] is one such all-in-one media conversion tool with an extremely user-friendly GUI. Better yet, it is free software! With Format Junkie, you can convert image, audio, video and archive files of pretty much all the popular formats simply with a few mouse clicks.
### Install Format Junkie on Ubuntu 12.04, 12.10 and 13.04 ###
Format Junkie is available for installation via Ubuntu PPA format-junkie-team. This PPA supports Ubuntu 12.04, 12.10 and 13.04. To install Format Junkie on one of those Ubuntu releases, simply run the following.
$ sudo add-apt-repository ppa:format-junkie-team/release
$ sudo apt-get update
$ sudo apt-get install formatjunkie
$ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie
### Install Format Junkie on Ubuntu 13.10 ###
If you are running Ubuntu 13.10 (Saucy Salamander), you can download and install the .deb package built for Ubuntu 13.04 as follows. Since the .deb package for Format Junkie requires quite a few dependent packages, install it using [gdebi deb installer][2].
On 32-bit Ubuntu 13.10:
$ wget https://launchpad.net/~format-junkie-team/+archive/release/+files/formatjunkie_1.07-1~raring0.2_i386.deb
$ sudo gdebi formatjunkie_1.07-1~raring0.2_i386.deb
$ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie
On 64-bit Ubuntu 13.10:
$ wget https://launchpad.net/~format-junkie-team/+archive/release/+files/formatjunkie_1.07-1~raring0.2_amd64.deb
$ sudo gdebi formatjunkie_1.07-1~raring0.2_amd64.deb
$ sudo ln -s /opt/extras.ubuntu.com/formatjunkie/formatjunkie /usr/bin/formatjunkie
### Install Format Junkie on Ubuntu 14.04 or Later ###
The currently available official Format Junkie .deb file requires libavcodec-extra-53 which has become obsolete starting from Ubuntu 14.04. Thus if you want to install Format Junkie on Ubuntu 14.04 or later, you can use the following third-party PPA repositories instead.
$ sudo add-apt-repository ppa:jon-severinsson/ffmpeg
$ sudo add-apt-repository ppa:noobslab/apps
$ sudo apt-get update
$ sudo apt-get install formatjunkie
### How to Use Format Junkie ###
To start Format Junkie after installation, simply run:
$ formatjunkie
#### Convert audio, video, image and archive formats with Format Junkie ####
The user interface of Format Junkie is pretty simple and intuitive, as shown below. To choose among audio, video, image and iso media, click on one of the four tabs at the top. You can add as many files as you want for batch conversion. After you add files and select the output format, simply click on the "Start Converting" button.
![](http://farm9.staticflickr.com/8107/8643695905_082b323059.jpg)
Format Junkie supports conversion among the following media formats:
- **Audio**: mp3, wav, ogg, wma, flac, m4r, aac, m4a, mp2.
- **Video**: avi, ogv, vob, mp4, 3gp, wmv, mkv, mpg, mov, flv, webm.
- **Image**: jpg, png, ico, bmp, svg, tif, pcx, pdf, tga, pnm.
- **Archive**: iso, cso.
#### Subtitle encoding with Format Junkie ####
Besides media conversion, Format Junkie also provides a GUI for subtitle encoding. The actual subtitle encoding is done by MEncoder. In order to do subtitle encoding via the Format Junkie interface, first you need to install MEncoder.
$ sudo apt-get install mencoder
Then click on "Advanced" tab on Format Junkie. Choose AVI/subtitle files to use for encoding, as shown below.
![](http://farm9.staticflickr.com/8100/8644791396_bfe602cd16.jpg)
Overall, Format Junkie is an extremely easy-to-use and versatile media conversion tool. One drawback, though, is that it does not allow any sort of customization during conversion (e.g., bitrate, fps, sampling frequency, image quality, size). So this tool is recommended for newbies who are looking for a simple, easy-to-use media conversion tool.
Enjoyed this post? I will appreciate your like/share buttons on Facebook, Twitter and Google+.
--------------------------------------------------------------------------------
via: http://xmodulo.com/how-to-convert-image-audio-and-video-formats-on-ubuntu.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:https://launchpad.net/format-junkie
[2]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html

View File

@ -0,0 +1,147 @@
[translating by KayGuoWhu]
How to check hard disk health on Linux using smartmontools
================================================================================
If there is one thing you never want to happen on your Linux system, it is having hard drives die on you without any warning. [Backups][1] and storage technologies such as [RAID][2] can get you back on your feet in no time, but the cost associated with a sudden loss of a hardware device can take a considerable toll on your budget, especially if you haven't planned ahead of time what to do in such circumstances.
To avoid running into this kind of setback, you can try [smartmontools][3], a software package that manages and monitors storage hardware using Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T. or just SMART). Most modern ATA/SATA, SCSI/SAS, and solid-state hard disks come with the SMART system built in. The purpose of SMART is to monitor the reliability of the hard drive, to predict drive failures, and to carry out different types of drive self-tests. The smartmontools package consists of two utility programs called smartctl and smartd. Together, they provide advanced warnings of disk degradation and failure on Linux platforms.
This tutorial will provide an installation and configuration guide for smartmontools on Linux.
### Installing Smartmontools ###
Installation of smartmontools is straightforward as it is available in the base repositories of most Linux distros.
#### Debian and derivatives: ####
# aptitude install smartmontools
#### Red Hat-based distributions: ####
# yum install smartmontools
### Checking Hard Drive Health with Smartctl ###
First off, list the hard drives connected to your system with the following command:
# ls -l /dev | grep -E 'sd|hd'
The output should be similar to:
![](https://farm4.staticflickr.com/3953/15352881249_96c09f7ccc_o.png)
where sdX indicates the device names assigned to the hard drives installed on your machine.
To display information about a particular hard disk (e.g., device model, S/N, firmware version, size, ATA version/revision, availability and status of SMART capability), run smartctl with "--info" flag, and specify the hard drive's device name as follows.
In this example, we will choose /dev/sda.
# smartctl --info /dev/sda
![](https://farm4.staticflickr.com/3928/15353873870_00a8dddf89_z.jpg)
Although the ATA version information may go unnoticed at first, it is one of the most important factors when looking for a replacement part. Each ATA version is backward compatible with the previous versions. For example, older ATA-1 or ATA-2 devices work fine on ATA-6 and ATA-7 interfaces, but unfortunately, the reverse is not true. In cases where the device version and interface version don't match, they work together at the capabilities of the lesser of the two. That being said, an ATA-7 hard drive is the safest choice for a replacement part in this case.
You can examine the health status of a particular hard drive with:
# smartctl -s on -a /dev/sda
In this command, the "-s on" flag enables SMART on the specified device. You can omit it if SMART support is already enabled for /dev/sda.
The SMART information for a disk consists of several sections. Among other things, "READ SMART DATA" section shows the overall health status of the drive.
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
The result of this test can be either PASSED or FAILED. In the latter case, a hardware failure is imminent, so you may want to start backing up your important data from that drive!
The next thing you will want to look at is the [SMART attribute][4] table, as shown below.
![](https://farm6.staticflickr.com/5612/15539511935_dd62f6c9ef_z.jpg)
Basically, the SMART attribute table lists the values of a number of attributes defined for a particular drive by its manufacturer, as well as the failure thresholds for these attributes. This table is automatically populated and updated by the drive firmware.
- **ID#**: attribute ID, usually a decimal (or hex) number between 1 and 255.
- **ATTRIBUTE_NAME**: attribute names defined by a drive manufacturer.
- **FLAG**: attribute handling flag (we can ignore it).
- **VALUE**: this is one of the most important pieces of information in the table, indicating the "normalized" value of a given attribute, whose range is between 1 and 253. 253 means the best condition, while 1 means the worst condition. Depending on attributes and manufacturers, an initial VALUE can be set to either 100 or 200.
- **WORST**: the lowest VALUE ever recorded.
- **THRESH**: the lowest value that WORST should ever be allowed to fall to, before reporting a given hard drive as FAILED.
- **TYPE**: the type of attribute (either Pre-fail or Old_age). A Pre-fail attribute is considered a critical attribute; one that participates in the overall SMART health assessment (PASSED/FAILED) of the drive. If any Pre-fail attribute fails, then the drive is considered "about to fail." On the other hand, an Old_age attribute is considered (for SMART purposes) a non-critical attribute (e.g., normal wear and tear); one that does not fail the drive per se.
- **UPDATED**: indicates how often an attribute is updated. Offline represents the case when offline tests are being performed on the drive.
- **WHEN_FAILED**: this will be set to "FAILING_NOW" (if VALUE is less than or equal to THRESH), or "In_the_past" (if WORST is less than or equal to THRESH), or "-" (if none of the above). In case of "FAILING_NOW", back up your important files ASAP, especially if the attribute is of TYPE Pre-fail. "In_the_past" means that the attribute has failed before, but that it's OK at the time of running the test. "-" indicates that this attribute has never failed.
- **RAW_VALUE**: a manufacturer-defined raw value, from which VALUE is derived.
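The WHEN_FAILED rules above can be sketched as a small function. This is only an illustration of the column's logic as described here, not smartctl's actual code:

```python
def when_failed(value, worst, thresh):
    # FAILING_NOW: the normalized VALUE has dropped to or below the threshold.
    if value <= thresh:
        return "FAILING_NOW"
    # In_the_past: VALUE recovered, but the recorded WORST once crossed THRESH.
    if worst <= thresh:
        return "In_the_past"
    # "-": this attribute has never failed.
    return "-"

print(when_failed(100, 100, 36))  # healthy attribute -> "-"
print(when_failed(30, 30, 36))    # -> "FAILING_NOW"
print(when_failed(80, 30, 36))    # -> "In_the_past"
```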
At this point you may be thinking, "Yes, smartctl seems like a nice tool, but I would like to avoid the hassle of having to run it manually." Wouldn't it be nice if it could be run at specified intervals, and at the same time inform me of the tests' results?
Fortunately, the answer is yes. And that's where smartd comes in.
### Configuring Smartctl and Smartd for Live Monitoring ###
First, edit the smartmontools configuration file (/etc/default/smartmontools) to tell it to start smartd at system startup, and to specify a check interval in seconds (e.g., 7200 = 2 hours).
start_smartd=yes
smartd_opts="--interval=7200"
Next, edit smartd's configuration file (/etc/smartd.conf) to add the following line.
/dev/sda -m myemail@mydomain.com -M test
- **-m <email-address>**: specifies an email address to send test reports to. This can be a system user such as root, or an email address such as myemail@mydomain.com if the server is configured to relay emails to the outside of your system.
- **-M <delivery-type>**: specifies the desired type of delivery for an email report.
- **once**: sends only one warning email for each type of disk problem detected.
- **daily**: sends additional warning reminder emails, once per day, for each type of disk problem detected.
- **diminishing**: sends additional warning reminder emails, after a one-day interval, then a two-day interval, then a four-day interval, and so on for each type of disk problem detected. Each interval is twice as long as the previous interval.
- **test**: sends a single test email immediately upon smartd startup.
- **exec PATH**: runs the executable PATH instead of the default mail command. PATH must point to an executable binary file or script. This allows you to specify a desired action (beep the console, shut down the system, and so on) when a problem is detected.
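For instance (a sketch only; substitute your own device and address), a single smartd.conf line can combine default attribute monitoring (-a), a mail recipient, and a delivery type:

```
/dev/sda -a -m myemail@mydomain.com -M diminishing
```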
Save the changes and restart smartd.
You should expect this kind of email sent by smartd.
![](https://farm6.staticflickr.com/5612/15539511945_b344814c74_o.png)
Luckily for us, no error was detected. Had it not been so, the errors would have appeared below the line "The following warning/error was logged by the smartd daemon."
Finally, you can schedule tests at your preferred schedule using the "-s" flag and a regular expression of the form "T/MM/DD/d/HH", where:
T in the regular expression indicates the kind of test:
- L: long test
- S: short test
- C: Conveyance test (ATA only)
- O: Offline (ATA only)
and the remaining characters represent the date and time when the test should be performed:
- MM is the month of the year.
- DD is the day of the month.
- HH is the hour of day.
- d is the day of the week (ranging from 1=Monday through 7=Sunday).
- MM, DD, and HH are expressed with two decimal digits.
A dot in any of these places indicates all possible values. An expression inside parentheses such as (A|B|C) denotes any one of the three possibilities A, B, or C. An expression inside square brackets such as [1-5] denotes a range (1 through 5 inclusive).
For example, to perform a long test every business day at 1 pm for all disks, add the following line to /etc/smartd.conf. Make sure to restart smartd.
DEVICESCAN -s (L/../../[1-5]/13)
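Since these schedules are ordinary regular expressions matched against a "T/MM/DD/d/HH" string, you can sanity-check one before editing /etc/smartd.conf. Here the check is illustrated with Python's re module (smartd itself does the matching internally):

```python
import re

# The schedule from the example above: long test, Mon-Fri, at 13:00.
schedule = r"L/../../[1-5]/13"

def slot(test_type, month, day, weekday, hour):
    # Build the "T/MM/DD/d/HH" string that smartd matches the schedule against.
    return "%s/%02d/%02d/%d/%02d" % (test_type, month, day, weekday, hour)

print(bool(re.fullmatch(schedule, slot("L", 10, 23, 4, 13))))  # Thursday 1 pm -> True
print(bool(re.fullmatch(schedule, slot("L", 10, 25, 6, 13))))  # Saturday -> False
```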
### Conclusion ###
Whether you want to quickly check the electrical and mechanical performance of a disk, or perform a longer and more thorough test that scans the entire disk surface, do not let yourself get so caught up in your day-to-day responsibilities as to forget to regularly check on the health of your disks. You will thank yourself later!
--------------------------------------------------------------------------------
via: http://xmodulo.com/check-hard-disk-health-linux-smartmontools.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://xmodulo.com/how-to-create-secure-incremental-offsite-backup-in-linux.html
[2]:http://xmodulo.com/create-software-raid1-array-mdadm-linux.html
[3]:http://www.smartmontools.org/
[4]:http://en.wikipedia.org/wiki/S.M.A.R.T.

View File

@ -0,0 +1,89 @@
johnhoow translating...
pidstat - Monitor and Find Statistics for Linux Processes
================================================================================
The **pidstat** command is used for monitoring individual tasks currently being managed by the Linux kernel. It writes to standard output activities for every task managed by the Linux kernel. The pidstat command can also be used for monitoring the child processes of selected tasks. The interval parameter specifies the amount of time in seconds between each report. A value of 0 (or no parameters at all) indicates that task statistics are to be reported for the time since system startup (boot).
### How to Install pidstat ###
pidstat is part of the sysstat suite, which contains various system performance tools for Linux; it's available in the repositories of most Linux distributions.
To install it on Debian / Ubuntu Linux systems you can use the following command:
# apt-get install sysstat
If you are using CentOS / Fedora / RHEL Linux you can install the packages like this:
# yum install sysstat
### Using pidstat ###
Running pidstat without any argument is equivalent to specifying -p ALL but only active tasks (tasks with non-zero statistics values) will appear in the report.
# pidstat
![pidstat](http://blog.linoxide.com/wp-content/uploads/2014/09/pidstat.jpg)
In the output you can see:
- **PID** - The identification number of the task being monitored.
- **%usr** - Percentage of CPU used by the task while executing at the user level (application), with or without nice priority. Note that this field does NOT include time spent running a virtual processor.
- **%system** - Percentage of CPU used by the task while executing at the system level.
- **%guest** - Percentage of CPU spent by the task in virtual machine (running a virtual processor).
- **%CPU** - Total percentage of CPU time used by the task. In an SMP environment, the task's CPU usage will be divided by the total number of CPU's if option -I has been entered on the command line.
- **CPU** - Processor number to which the task is attached.
- **Command** - The command name of the task.
### I/O Statistics ###
We can use pidstat to get I/O statistics about a process using the -d flag. For example:
# pidstat -d -p 8472
![pidstat io](http://blog.linoxide.com/wp-content/uploads/2014/09/pidstat-io.jpg)
The IO output will display a few new columns:
- **kB_rd/s** - Number of kilobytes the task has caused to be read from disk per second.
- **kB_wr/s** - Number of kilobytes the task has caused, or shall cause to be written to disk per second.
- **kB_ccwr/s** - Number of kilobytes whose writing to disk has been cancelled by the task.
### Page faults and memory usage ###
Using the -r flag you can get information about memory usage and page faults.
![pidstat pf mem](http://blog.linoxide.com/wp-content/uploads/2014/09/pidstat-pfmem.jpg)
Important columns:
- **minflt/s** - Total number of minor faults the task has made per second, those which have not required loading a memory page from disk.
- **majflt/s** - Total number of major faults the task has made per second, those which have required loading a memory page from disk.
- **VSZ** - Virtual Size: The virtual memory usage of entire task in kilobytes.
- **RSS** - Resident Set Size: The non-swapped physical memory used by the task in kilobytes.
### Examples ###
**1.** You can use pidstat to find a memory leak using the following command:
# pidstat -r 2 5
This will give you 5 reports, one every 2 seconds, about the current page fault statistics; it should be easy to spot the problem process.
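To make "spot the problem process" concrete: the tell-tale sign is a task whose memory footprint grows across every report. A rough sketch of that check (a hypothetical helper, fed RSS samples in kilobytes taken from successive pidstat -r reports):

```python
def looks_like_leak(rss_samples, min_growth_kb=1024):
    # Suspicious if RSS grows in every successive sample...
    growing = all(b > a for a, b in zip(rss_samples, rss_samples[1:]))
    # ...and the total growth is large enough to matter.
    return growing and (rss_samples[-1] - rss_samples[0]) >= min_growth_kb

print(looks_like_leak([52000, 54100, 56300, 58800, 61200]))  # True
print(looks_like_leak([52000, 52000, 51800, 52100, 52000]))  # False
```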
**2.** To show all children of the mysql server you can use the following command:
# pidstat -T CHILD -C mysql
**3.** To combine all statistics in a single report you can use:
# pidstat -urd -h
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-command/linux-pidstat-monitor-statistics-procesess/
作者:[Adrian Dinu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/adriand/

View File

@ -0,0 +1,159 @@
su-kaiyao translating
How to create and use Python CGI scripts
================================================================================
Have you ever wanted to create a webpage or process user input from a web-based form using Python? These tasks can be accomplished through the use of Python CGI (Common Gateway Interface) scripts with an Apache web server. CGI scripts are called by a web server when a user requests a particular URL or interacts with the webpage (such as clicking a "Submit" button). After the CGI script is called and finishes executing, the output is used by the web server to create a webpage displayed to the user.
### Configuring the Apache web server to run CGI scripts ###
In this tutorial we assume that an Apache web server is already set up and running. This tutorial uses an Apache web server (version 2.2.15 on CentOS release 6.5) that is hosted at the localhost (127.0.0.1) and is listening on port 80, as specified by the following Apache directives:
ServerName 127.0.0.1:80
Listen 80
HTML files used in the upcoming examples are located in /var/www/html on the web server. This is specified via the DocumentRoot directive (specifies the directory that webpages are located in):
DocumentRoot "/var/www/html"
Consider a request for the URL: http://localhost/page1.html
This will return the contents of the following file on the web server:
/var/www/html/page1.html
To enable use of CGI scripts, we must specify where CGI scripts are located on the web server. To do this, we use the ScriptAlias directive:
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
The above directive indicates that CGI scripts are contained in the /var/www/cgi-bin directory on the web server and that inclusion of /cgi-bin/ in the requested URL will search this directory for the CGI script of interest.
We must also explicitly permit the execution of CGI scripts in the /var/www/cgi-bin directory and specify the file extensions of CGI scripts. To do this, we use the following directives:
<Directory "/var/www/cgi-bin">
Options +ExecCGI
AddHandler cgi-script .py
</Directory>
Consider a request for the URL: http://localhost/cgi-bin/myscript-1.py
This will call the following script on the web server:
/var/www/cgi-bin/myscript-1.py
### Creating a CGI script ###
Before creating a Python CGI script, you will need to confirm that you have Python installed (this is generally installed by default, however the installed version may vary). Scripts in this tutorial are created using Python version 2.6.6. You can check your version of Python from the command line by entering either of the following commands (the -V and --version options display the version of Python that is installed):
$ python -V
$ python --version
If your Python CGI script will be used to process user-entered data (from a web-based input form), then you will need to import the Python cgi module. This module provides functionality for accessing data that users have entered into web-based input forms. You can import this module via the following statement in your script:
import cgi
You must also change the execute permissions for the Python CGI script so that it can be called by the web server. Add execute permissions for others via the following command:
# chmod o+x myscript-1.py
### Python CGI Examples ###
Two scenarios involving Python CGI scripts will be considered in this tutorial:
- Create a webpage using a Python script
- Read and display user-entered data and display results in a webpage
Note that the Python cgi module is required for Scenario 2 because this involves accessing user-entered data from web-based input forms.
### Example 1: Create a webpage using a Python script ###
For this scenario, we will start by creating a webpage /var/www/html/page1.html with a single submit button:
<html>
<h1>Test Page 1</h1>
<form name="input" action="/cgi-bin/myscript-1.py" method="get">
<input type="submit" value="Submit">
</form>
</html>
When the "Submit" button is clicked, the /var/www/cgi-bin/myscript-1.py script is called (specified by the action parameter). A "GET" request is specified by setting the method parameter equal to "get". This requests that the web server return the specified webpage. An image of /var/www/html/page1.html as viewed from within a web browser is shown below:
![](https://farm4.staticflickr.com/3933/14932853623_eff2df3260_z.jpg)
The contents of /var/www/cgi-bin/myscript-1.py are:
#!/usr/bin/python
print "Content-Type: text/html"
print ""
print "<html>"
print "<h2>CGI Script Output</h2>"
print "<p>This page was generated by a Python CGI script.</p>"
print "</html>"
The first statement indicates that this is a Python script to be run with the /usr/bin/python command. The print "Content-Type: text/html" statement is required so that the web server knows what type of output it is receiving from the CGI script. The remaining statements are used to print the text of the webpage in HTML format.
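Because the script simply writes text to standard output, its response can be assembled and inspected without a web server at all. Below is a minimal sketch of that idea; the helper name `build_page` is hypothetical and not part of the tutorial's script:

```python
# Hypothetical helper that assembles the same CGI response as a string,
# so the output can be inspected without a running web server.
def build_page():
    lines = [
        "Content-Type: text/html",
        "",  # blank line separates the header from the body
        "<html>",
        "<h2>CGI Script Output</h2>",
        "<p>This page was generated by a Python CGI script.</p>",
        "</html>",
    ]
    return "\n".join(lines)

response = build_page()
print(response.splitlines()[0])  # -> Content-Type: text/html
```

The first line of the response is exactly the header the web server parses before forwarding the rest as the page body.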
When the "Submit" button is clicked in the above webpage, the following webpage is returned:
![](https://farm4.staticflickr.com/3933/15553035025_d70be04470_z.jpg)
The take-home point with this example is that you have the freedom to decide what information is returned by the CGI script. This could include the contents of log files, a list of users currently logged on, or today's date. The possibilities are endless given that you have the entire Python library at your disposal.
### Example 2: Read and display user-entered data and display results in a webpage ###
For this scenario, we will start by creating a webpage /var/www/html/page2.html with three input fields and a submit button:
<html>
<h1>Test Page 2</h1>
<form name="input" action="/cgi-bin/myscript-2.py" method="get">
First Name: <input type="text" name="firstName"><br>
Last Name: <input type="text" name="lastName"><br>
Position: <input type="text" name="position"><br>
<input type="submit" value="Submit">
</form>
</html>
When the "Submit" button is clicked, the /var/www/cgi-bin/myscript-2.py script is called (specified by the action parameter). An image of /var/www/html/page2.html as viewed from within a web browser is shown below (note that the three input fields have already been filled in):
![](https://farm4.staticflickr.com/3935/14932853603_ffc3bd330e_z.jpg)
The contents of /var/www/cgi-bin/myscript-2.py are:
#!/usr/bin/python
import cgi
form = cgi.FieldStorage()
print "Content-Type: text/html"
print ""
print "<html>"
print "<h2>CGI Script Output</h2>"
print "<p>"
print "The user entered data are:<br>"
print "<b>First Name:</b> " + form["firstName"].value + "<br>"
print "<b>Last Name:</b> " + form["lastName"].value + "<br>"
print "<b>Position:</b> " + form["position"].value + "<br>"
print "</p>"
print "</html>"
As mentioned previously, the import cgi statement is needed to enable functionality for accessing user-entered data from web-based input forms. The web-based input form is encapsulated in the form object, which is a cgi.FieldStorage object. Once again, the "Content-Type: text/html" line is required so that the web server knows what type of output it is receiving from the CGI script. The data entered by the user are accessed in the statements that contain form["firstName"].value, form["lastName"].value, and form["position"].value. The names in the square brackets correspond to the values of the name parameters defined in the text input fields in **/var/www/html/page2.html**.
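For illustration, the parsing that cgi.FieldStorage performs on a GET request can be sketched with the standard urllib.parse module (the cgi module is deprecated in recent Python releases, so this is a rough equivalent rather than the tutorial's exact mechanism):

```python
from urllib.parse import parse_qs

# The query string a browser would send for the form in page2.html
query = "firstName=John&lastName=Doe&position=Engineer"

# parse_qs returns a list per field (a field may repeat in a query
# string); take the first value of each, as FieldStorage's .value does
form = {name: values[0] for name, values in parse_qs(query).items()}

print(form["firstName"], form["lastName"], form["position"])  # -> John Doe Engineer
```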
When the "Submit" button is clicked in the above webpage, the following webpage is returned:
![](https://farm4.staticflickr.com/3949/15367402150_946474dbb0_z.jpg)
The take-home point with this example is that you can easily read and display user-entered data from web-based input forms. In addition to processing data as strings, you can also use Python to convert user-entered data to numbers that can be used in numerical calculations.
### Summary ###
This tutorial demonstrates how Python CGI scripts are useful for creating webpages and for processing user-entered data from web-based input forms. More information about Apache CGI scripts can be found [here][1] and more information about the Python cgi module can be found [here][2].
--------------------------------------------------------------------------------
via: http://xmodulo.com/create-use-python-cgi-scripts.html
作者:[Joshua Reed][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/joshua
[1]:http://httpd.apache.org/docs/2.2/howto/cgi.html
[2]:https://docs.python.org/2/library/cgi.html#module-cgi

How to monitor a log file on Linux with logwatch
================================================================================
The Linux operating system and many applications create special files commonly referred to as "logs" to record their operational events. These system logs or application-specific log files are an essential tool when it comes to understanding and troubleshooting the behavior of the operating system and third-party applications. However, log files are not precisely what you would call "light" or "easy" reading, and analyzing raw log files by hand is often time-consuming and tedious. For that reason, any utility that can convert raw log files into a more user-friendly log digest is a great boon for sysadmins.
[logwatch][1] is an open-source log parser and analyzer written in Perl, which can parse and convert raw log files into a structured format, making a customizable report based on your use cases and requirements. In logwatch, the focus is on producing a more easily consumable log summary, not on real-time log processing and monitoring. As such, logwatch is typically invoked as an automated cron task with the desired time and frequency, or manually from the command line whenever log processing is needed. Once a log report is generated, logwatch can email the report to you, save it to a file, or display it on the screen.
A logwatch report is fully customizable in terms of verbosity and processing coverage. The log processing engine of logwatch is extensible, in the sense that if you want to enable logwatch for a new application, you can write a log processing script (in Perl) for the application's log file, and plug it into logwatch.
One downside of logwatch is that it does not include in its report detailed timestamp information available in original log files. You will only know that a particular event was logged in a requested range of time, and you will have to access original log files to get exact timing information.
### Installing Logwatch ###
On Debian and derivatives:
# aptitude install logwatch
On Red Hat-based distributions:
# yum install logwatch
### Configuring Logwatch ###
During installation, the main configuration file (logwatch.conf) is placed in /etc/logwatch/conf. Configuration options defined in this file override system-wide settings defined in /usr/share/logwatch/default.conf/logwatch.conf.
If logwatch is launched from the command line without any arguments, the custom options defined in /etc/logwatch/conf/logwatch.conf will be used. However, if any command-line arguments are specified with logwatch command, those arguments in turn override any default/custom settings in /etc/logwatch/conf/logwatch.conf.
In this article, we will customize several default settings of logwatch by editing /etc/logwatch/conf/logwatch.conf file.
Detail = <Low, Med, High, or a number>
"Detail" directive controls the verbosity of a logwatch report. It can be a positive integer, or High, Med, Low, which correspond to 10, 5, and 0, respectively.
MailTo = youremailaddress@yourdomain.com
"MailTo" directive is used if you want to have a logwatch report emailed to you. To send a logwatch report to multiple recipients, you can specify their email addresses separated with a space. To be able to use this directive, however, you will need to configure a local mail transfer agent (MTA) such as sendmail or Postfix on the server where logwatch is running.
Range = <Yesterday|Today|All>
"Range" directive specifies the time duration of a logwatch report. Common values for this directive are Yesterday, Today or All. When "Range = All" is used, "Archive = yes" directive is also needed, so that all archived versions of a given log file (e.g., /var/log/maillog, /var/log/maillog.X, or /var/log/maillog.X.gz) are processed.
Besides such common range values, you can also use more complex range options such as the following.
- Range = "2 hours ago for that hour"
- Range = "-5 days"
- Range = "between -7 days and -3 days"
- Range = "since September 15, 2014"
- Range = "first Friday in October"
- Range = "2014/10/15 12:50:15 for that second"
To be able to use such free-form range examples, you need to install Date::Manip Perl module from CPAN. Refer to [this post][2] for CPAN module installation instructions.
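Conceptually, such free-form values resolve to plain date arithmetic. The sketch below shows what two of the examples above boil down to; the reference time is an arbitrary assumption:

```python
from datetime import datetime, timedelta

# Hypothetical "now"; logwatch resolves ranges relative to the current time
now = datetime(2014, 10, 23, 12, 0, 0)

# Range = "-5 days"  ->  everything from five days ago onward
start = now - timedelta(days=5)

# Range = "between -7 days and -3 days"  ->  a bounded window
window = (now - timedelta(days=7), now - timedelta(days=3))

print(start.date(), window[0].date(), window[1].date())
# -> 2014-10-18 2014-10-16 2014-10-20
```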
Service = <service-name-1>
Service = <service-name-2>
. . .
"Service" option specifies one or more services to monitor using logwatch. All available services are listed in /usr/share/logwatch/scripts/services, which covers essential system services (e.g., pam, secure, iptables, syslogd), as well as popular application services such as sudo, sshd, http, fail2ban and samba. If you want to add a new service to the list, you will have to write a corresponding log processing Perl script, and place it in this directory.
If this option is used to select specific services, you need to comment out the line "Service = All" in /usr/share/logwatch/default.conf/logwatch.conf.
![](https://farm6.staticflickr.com/5612/14948933564_94cbc5353c_z.jpg)
Format = <text|html>
"Format" directive specifies the format (e.g., text or HTML) of a logwatch report.
Output = <file|mail|stdout>
"Output" directive indicates where a logwatch report should be sent. It can be saved to a file (file), emailed (mail), or shown to screen (stdout).
### Analyzing Log Files with Logwatch ###
To understand how to analyze log files using logwatch, consider the following logwatch.conf example:
Detail = High
MailTo = youremailaddress@yourdomain.com
Range = Today
Service = http
Service = postfix
Service = zz-disk_space
Format = html
Output = mail
Under these settings, logwatch will process log files generated by three services (http, postfix and zz-disk_space) today, produce an HTML report with high verbosity, and email it to you.
If you do not want to customize /etc/logwatch/conf/logwatch.conf, you can leave the default configuration file unchanged, and instead run logwatch from the command line as follows. It will achieve the same outcome.
# logwatch --detail 10 --mailto youremailaddress@yourdomain.com --range today --service http --service postfix --service zz-disk_space --format html --output mail
The emailed report looks like the following.
![](https://farm6.staticflickr.com/5611/15383540608_57dc37e3d6_z.jpg)
The email header includes links to navigate the report sections, one per selected service, as well as "Back to top" links.
You will want to use the email report option when the list of recipients is small. Otherwise, you can have logwatch save a generated HTML report within a network share that can be accessed by all the individuals who need to see the report. To do so, make the following modifications in our previous example:
Detail = High
Range = Today
Service = http
Service = postfix
Service = zz-disk_space
Format = html
Output = file
Filename = /var/www/html/logs/dev1.html
Equivalently, run logwatch from the command line as follows.
# logwatch --detail 10 --range today --service http --service postfix --service zz-disk_space --format html --output file --filename /var/www/html/logs/dev1.html
Finally, let's configure logwatch to be executed by cron on your desired schedules. The following example will run a logwatch cron job every business day at 12:15 pm:
# crontab -e
----------
15 12 * * 1,2,3,4,5 /sbin/logwatch
Hope this helps. Feel free to comment to share your own tips and ideas with the community!
--------------------------------------------------------------------------------
via: http://xmodulo.com/monitor-log-file-linux-logwatch.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://sourceforge.net/projects/logwatch/
[2]:http://xmodulo.com/how-to-install-perl-modules-from-cpan.html

wangjiezhe translating...
Linux FAQs with Answers--How to change character encoding of a text file on Linux
================================================================================
> **Question**: I have an "iso-8859-1"-encoded subtitle file which shows broken characters on my Linux system, and I would like to change its text encoding to "utf-8" character set. In Linux, what is a good tool to convert character encoding in a text file?
As you already know, computers can only handle binary numbers at the lowest level - not characters. When a text file is saved, each character in that file is mapped to bits, and it is those "bits" that are actually stored on disk. When an application later opens that text file, each of those binary numbers is read and mapped back to the original characters that we humans understand. This "save and open" process is best performed when all applications that need access to a text file "understand" its encoding, meaning the way binary numbers are mapped to characters, and thus can ensure a "round trip" of understandable data.
If different applications do not use the same encoding while dealing with a text file, non-readable characters will be shown wherever special characters are found in the original file. By special characters we mean those that are not part of the English alphabet, such as accented characters (e.g., ñ, á, ü).
The questions then become: 1) how can I find out which character encoding a certain text file is using, and 2) how can I convert it to some other encoding of my choosing?
### Step One ###
In order to find out the character encoding of a file, we will use a command-line tool called file. Since the file command is a standard UNIX program, we can expect to find it in all modern Linux distros.
Run the following command:
$ file --mime-encoding filename
![](https://farm6.staticflickr.com/5602/15595534261_1a7b4d16a2.jpg)
### Step Two ###
The next step is to check what kinds of text encodings are supported on your Linux system. For this, we will use a tool called iconv with the "-l" flag (lowercase L), which will list all the currently supported encodings.
$ iconv -l
The iconv utility is part of the GNU C library (glibc), so it is available in all Linux distributions out of the box.
### Step Three ###
Once we have selected a target encoding among those supported on our Linux system, let's run the following command to perform the conversion:
$ iconv -f old_encoding -t new_encoding filename
For example, to convert iso-8859-1 to utf-8:
$ iconv -f iso-8859-1 -t utf-8 input.txt
![](https://farm4.staticflickr.com/3943/14978042143_a516e0b10b_o.png)
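The round trip iconv performs can also be sketched in Python: decode the bytes with the old encoding, then re-encode them with the new one.

```python
# iso-8859-1 bytes for a string with accented characters
original = "señor año".encode("iso-8859-1")

# What `iconv -f iso-8859-1 -t utf-8` does, expressed as decode + encode
converted = original.decode("iso-8859-1").encode("utf-8")

# The text survives the round trip; only the byte representation changes
print(len(original), len(converted))  # -> 9 11 (each ñ takes 2 bytes in utf-8)
```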
Knowing how to use these tools together as we have demonstrated, you can for example fix a broken subtitle file:
![](https://farm6.staticflickr.com/5612/15412197967_0dfe5078f9_z.jpg)
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/change-character-encoding-text-file-linux.html
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

Wine 1.7.29 (Development Version) Released: Install in RedHat and Debian Based Systems
================================================================================
**Wine**, one of the most popular and powerful open source applications for Linux, is used to run Windows-based applications and games on the Linux platform without any trouble.
![Install Wine (Development Version) in Linux](http://www.tecmint.com/wp-content/uploads/2014/05/Install-Wine-Development-Version.png)
Install Wine (Development Version) in Linux
The WineHQ team recently announced a new development version, **Wine 1.7.29**. This new development build arrives with a number of important new features and **44** bug fixes.
The Wine team keeps releasing development builds almost on a weekly basis, adding numerous new features and fixes. Each new version brings support for new applications and games, making Wine a must-have tool for every user who wants to run Windows-based software on a Linux platform.
According to the changelog, following key features are added in this release:
- Added much improved shaping and BiDi mirroring in DirectWrite.
- A few page fault handling problems have been fixed.
- Included a few more C runtime functions.
- Various bug fixes.
More in-depth details about this build can be found on the official [changelog][1] page.
This article guides you through installing the most recent development version of **Wine 1.7.29** on **Red Hat** and **Debian** based systems such as CentOS, Fedora, Ubuntu, Linux Mint and other supported distributions.
### Installing Wine 1.7.29 Development Version in Linux ###
Unfortunately, there is no official Wine repository available for **Red Hat** based systems, and the only way to install Wine is to compile it from source. To do this, you need to install some dependency packages such as gcc, flex, bison, libX11-devel, freetype-devel and the 'Development Tools' group. These packages are required to compile Wine from source. Let's install them using the following **YUM** commands.
### On RedHat, Fedora and CentOS ###
# yum -y groupinstall 'Development Tools'
# yum -y install flex bison libX11-devel freetype-devel
Next, download the latest development version of Wine (i.e. **1.7.29**) and extract the source tarball package using the following commands.
$ cd /tmp
$ wget http://citylan.dl.sourceforge.net/project/wine/Source/wine-1.7.29.tar.bz2
$ tar -xvf wine-1.7.29.tar.bz2 -C /tmp/
Now, it's time to compile and build the Wine installer using the following commands as a normal user.
Note: The installation process might take up to **15-20** minutes depending upon your internet and hardware speed; during installation it will ask you to enter the **root** password.
#### On 32-Bit Systems ####
$ cd wine-1.7.29/
$ ./tools/wineinstall
#### On 64-Bit Systems ####
$ cd wine-1.7.29/
$ ./configure --enable-win64
$ make
# make install
### On Ubuntu, Debian and Linux Mint ###
Under **Ubuntu** based systems, you can easily install the latest development build of Wine using the official **PPA**. Open a terminal and run the following commands with sudo privileges.
$ sudo add-apt-repository ppa:ubuntu-wine/ppa
$ sudo apt-get update
$ sudo apt-get install wine1.7 winetricks
**Note**: At the time of writing this article, the available version was **1.7.26** and the new build had not yet been pushed to the official Wine repository, but the above instructions will install **1.7.29** when it is made available.
Once the installation completes successfully, you can install or run any windows based applications or games using wine as shown below.
$ wine notepad
$ wine notepad.exe
$ wine c:\\windows\\notepad.exe
**Note**: Please remember, this is a development build and should not be installed or used on production systems. It is advised to use this version only for testing purposes.
If you're looking for the most recent stable version of Wine, you can go through the following articles, which describe how to install the latest stable version in almost all Linux environments.
- [Install Wine 1.6.2 (Stable) in RHEL, CentOS and Fedora][2]
- [Install Wine 1.6.2 (Stable) in Debian, Ubuntu and Mint][3]
### Reference Links ###
- [WineHQ Homepage][4]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-wine-in-linux/
作者:[Ravi Saive][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/admin/
[1]:http://www.winehq.org/announce/1.7.29
[2]:http://www.tecmint.com/install-wine-in-rhel-centos-and-fedora/
[3]:http://www.tecmint.com/install-wine-on-ubuntu-and-linux-mint/
[4]:http://www.winehq.org/

What are useful Bash aliases and functions
================================================================================
As a command line adventurer, you probably found yourself repeating the same lengthy commands over and over. If you always ssh into the same machine, if you always chain the same commands together, or if you constantly run a program with the same flags, you might want to save the precious seconds of your life that you spend repeating the same actions over and over.
The solution to achieve that is to use an alias. As you may know, an alias is a way to tell your shell to remember a particular command and give it a new name: an alias. However, an alias is quickly limited as it is just a shortcut for a shell command, without the ability to pass or control the arguments. So to complement it, bash also allows you to create your own functions, which can be lengthier and more complex, and which accept any number of arguments.
Naturally, like with soup, when you have a good recipe you share it. So here is a list with some of the most useful bash aliases and functions. Note that "most useful" is loosely defined, and of course the usefulness of an alias is dependent on your everyday usage of the shell.
Before you start experimenting with aliases, here is a handy tip: if you give an alias the same name as a regular command, you can choose to launch the original command and ignore the alias with the trick:
\command
For example, the first alias below replaces the ls command. If you wish to use the regular ls command and not the alias, call it via:
\ls
### Productivity ###
So these aliases are really simple and really short, but they are mostly based on the idea that if you save yourself a fraction of a second every time, it might end up accumulating years at the end. Or maybe not.
alias ls="ls --color=auto"
Simple but vital. Make the ls command output in color.
alias ll="ls --color -al"
Shortcut to display in color all the files from a directory in a list format.
alias grep='grep --color=auto'
Similarly, put some color in the grep output.
mcd() { mkdir -p "$1"; cd "$1";}
One of my favorites. Make a directory and cd into it in one command: mcd [name].
cls() { cd "$1"; ls;}
Similar to the previous function, cd into a directory and list its content: cls [name].
backup() { cp "$1"{,.bak};}
Simple way to make a backup of a file: backup [file] will create [file].bak in the same directory.
md5check() { md5sum "$1" | grep "$2";}
Because I hate comparing the md5sum of a file by hand, this function computes it and compares it using grep: md5check [file] [key].
![](https://farm6.staticflickr.com/5616/15412389280_8be57841ae_o.jpg)
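The same comparison is easy to express in Python with the standard hashlib module; here is a sketch of an equivalent md5check, demonstrated on a throwaway file:

```python
import hashlib
import os
import tempfile

def md5check(path, key):
    # Mirrors the shell function: True if the file's md5 digest equals key
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest() == key

# Demo on a temporary file containing "hello"
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

print(md5check(path, "5d41402abc4b2a76b9719d911017c592"))  # -> True
os.remove(path)
```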
alias makescript="fc -rnl | head -1 >"
Easily make a script out of the last command you ran: makescript [script.sh]
alias genpasswd="strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'; echo"
Just to generate a strong password instantly.
![](https://farm4.staticflickr.com/3955/15574321206_dd365f0f0e.jpg)
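If you prefer Python, the standard secrets module produces a comparably strong password in a few lines; a sketch of a `genpasswd` counterpart:

```python
import secrets
import string

def genpasswd(length=30):
    # Counterpart of the alias: a random alphanumeric string of the given length,
    # drawn from a cryptographically secure source
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(len(genpasswd()))  # -> 30
```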
alias c="clear"
Cannot do simpler to clean your terminal screen.
alias histg="history | grep"
To quickly search through your command history: histg [keyword]
alias ..='cd ..'
No need to write cd to go up a directory.
alias ...='cd ../..'
Similarly, go up two directories.
extract() {
if [ -f $1 ] ; then
case $1 in
*.tar.bz2) tar xjf $1 ;;
*.tar.gz) tar xzf $1 ;;
*.bz2) bunzip2 $1 ;;
*.rar) unrar e $1 ;;
*.gz) gunzip $1 ;;
*.tar) tar xf $1 ;;
*.tbz2) tar xjf $1 ;;
*.tgz) tar xzf $1 ;;
*.zip) unzip $1 ;;
*.Z) uncompress $1 ;;
*.7z) 7z x $1 ;;
*) echo "'$1' cannot be extracted via extract()" ;;
esac
else
echo "'$1' is not a valid file"
fi
}
Longest but also the most useful. Extract any kind of archive: extract [archive file]
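The same extension-based dispatch exists in the Python standard library as shutil.unpack_archive, which picks the right extractor from the file name. A small sketch that builds and then unpacks a .tar.gz:

```python
import os
import shutil
import tempfile

work = tempfile.mkdtemp()

# Create a file and pack it into demo.tar.gz
src = os.path.join(work, "hello.txt")
with open(src, "w") as f:
    f.write("hi")
archive = shutil.make_archive(os.path.join(work, "demo"), "gztar",
                              root_dir=work, base_dir="hello.txt")

# unpack_archive chooses the extractor from the .tar.gz extension
out = os.path.join(work, "out")
os.makedirs(out)
shutil.unpack_archive(archive, out)

print(sorted(os.listdir(out)))  # -> ['hello.txt']
```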
### System Info ###
Want to know everything about your system as quickly as possible?
alias cmount="mount | column -t"
Format the output of mount into columns.
![](https://farm6.staticflickr.com/5603/15598830622_587b77a363_z.jpg)
alias tree="ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'"
Display the directory structure recursively in a tree format.
sbs() { du -b --max-depth 1 | sort -nr | perl -pe 's{([0-9]+)}{sprintf "%.1f%s", $1>=2**30? ($1/2**30, "G"): $1>=2**20? ($1/2**20, "M"): $1>=2**10? ($1/2**10, "K"): ($1, "")}e';}
"Sort by size" to display in list the files in the current directory, sorted by their size on disk.
alias intercept="sudo strace -ff -e trace=write -e write=1,2 -p"
Intercept the stdout and stderr of a process: intercept [some PID]. Note that you will need strace installed.
alias meminfo='free -m -l -t'
See how much memory you have left.
![](https://farm4.staticflickr.com/3955/15411891448_0b9d6450bd_z.jpg)
alias ps?="ps aux | grep"
Easily find the PID of any process: ps? [name]
alias volume="amixer get Master | sed '1,4 d' | cut -d [ -f 2 | cut -d ] -f 1"
Displays the current sound volume.
![](https://farm4.staticflickr.com/3939/15597995445_99ea7ffcd5_o.jpg)
### Networking ###
For all the commands that involve the Internet or your local network, there are fancy aliases for them.
alias websiteget="wget --random-wait -r -p -e robots=off -U mozilla"
Download an entire website: websiteget [URL]
alias listen="lsof -P -i -n"
Show which applications are connecting to the network.
![](https://farm4.staticflickr.com/3943/15598830552_c7e5eaaa0d_z.jpg)
alias port='netstat -tulanp'
Show the active ports
gmail() { curl -u "$1" --silent "https://mail.google.com/mail/feed/atom" | sed -e 's/<\/fullcount.*/\n/' | sed -e 's/.*fullcount>//' ;}
Rough function to display the number of unread emails in your gmail: gmail [user name]
alias ipinfo="curl ifconfig.me && curl ifconfig.me/host"
Get your public IP address and host.
getlocation() { lynx -dump http://www.ip-adress.com/ip_tracer/?QRY=$1|grep address|egrep 'city|state|country'|awk '{print $3,$4,$5,$6,$7,$8}'|sed 's\ip address flag \\'|sed 's\My\\';}
Returns your current location based on your IP address.
### Useless ###
So what if some aliases are not all that productive? They can still be fun.
kernelgraph() { lsmod | perl -e 'print "digraph \"lsmod\" {";<>;while(<>){@_=split/\s+/; print "\"$_[0]\" -> \"$_\"\n" for split/,/,$_[3]}print "}"' | dot -Tpng | display -;}
To draw the kernel module dependency graph. Requires image viewer.
alias busy="cat /dev/urandom | hexdump -C | grep \"ca fe\""
Make you look all busy and fancy in the eyes of non-technical people.
![](https://farm6.staticflickr.com/5599/15574321326_ab3fbc1ef9_z.jpg)
To conclude, a good chunk of these aliases and functions come from my personal .bashrc, and the awesome websites [alias.sh][1] and [commandlinefu.com][2] which I already presented in my post on the [best online tools for Linux][3]. So definitely go check them out, make your own recipes, and if you are so inclined, share your wisdom in the comments.
As a bonus, here is the plain text version of all the aliases and functions I mentioned, ready to be copy pasted in your bashrc.
#Productivity
alias ls="ls --color=auto"
alias ll="ls --color -al"
alias grep='grep --color=auto'
mcd() { mkdir -p "$1"; cd "$1";}
cls() { cd "$1"; ls;}
backup() { cp "$1"{,.bak};}
md5check() { md5sum "$1" | grep "$2";}
alias makescript="fc -rnl | head -1 >"
alias genpasswd="strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'; echo"
alias c="clear"
alias histg="history | grep"
alias ..='cd ..'
alias ...='cd ../..'
extract() {
if [ -f $1 ] ; then
case $1 in
*.tar.bz2) tar xjf $1 ;;
*.tar.gz) tar xzf $1 ;;
*.bz2) bunzip2 $1 ;;
*.rar) unrar e $1 ;;
*.gz) gunzip $1 ;;
*.tar) tar xf $1 ;;
*.tbz2) tar xjf $1 ;;
*.tgz) tar xzf $1 ;;
*.zip) unzip $1 ;;
*.Z) uncompress $1 ;;
*.7z) 7z x $1 ;;
*) echo "'$1' cannot be extracted via extract()" ;;
esac
else
echo "'$1' is not a valid file"
fi
}
#System info
alias cmount="mount | column -t"
alias tree="ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'"
sbs(){ du -b --max-depth 1 | sort -nr | perl -pe 's{([0-9]+)}{sprintf "%.1f%s", $1>=2**30? ($1/2**30, "G"): $1>=2**20? ($1/2**20, "M"): $1>=2**10? ($1/2**10, "K"): ($1, "")}e';}
alias intercept="sudo strace -ff -e trace=write -e write=1,2 -p"
alias meminfo='free -m -l -t'
alias ps?="ps aux | grep"
alias volume="amixer get Master | sed '1,4 d' | cut -d [ -f 2 | cut -d ] -f 1"
#Network
alias websiteget="wget --random-wait -r -p -e robots=off -U mozilla"
alias listen="lsof -P -i -n"
alias port='netstat -tulanp'
gmail() { curl -u "$1" --silent "https://mail.google.com/mail/feed/atom" | sed -e 's/<\/fullcount.*/\n/' | sed -e 's/.*fullcount>//' ;}
alias ipinfo="curl ifconfig.me && curl ifconfig.me/host"
getlocation() { lynx -dump http://www.ip-adress.com/ip_tracer/?QRY=$1|grep address|egrep 'city|state|country'|awk '{print $3,$4,$5,$6,$7,$8}'|sed 's\ip address flag \\'|sed 's\My\\';}
#Funny
kernelgraph() { lsmod | perl -e 'print "digraph \"lsmod\" {";<>;while(<>){@_=split/\s+/; print "\"$_[0]\" -> \"$_\"\n" for split/,/,$_[3]}print "}"' | dot -Tpng | display -;}
alias busy="cat /dev/urandom | hexdump -C | grep \"ca fe\""
--------------------------------------------------------------------------------
via: http://xmodulo.com/useful-bash-aliases-functions.html
作者:[Adrien Brochard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://alias.sh/
[2]:http://www.commandlinefu.com/commands/browse
[3]:http://xmodulo.com/useful-online-tools-linux.html

What is a good command-line calculator on Linux
================================================================================
Every modern Linux desktop distribution comes with a default GUI-based calculator app. On the other hand, if your workspace is full of terminal windows, and you would rather crunch some numbers within one of those terminals quickly, you are probably looking for a **command-line calculator**. In this category, [GNU bc][1] (short for "basic calculator") is hard to beat. While there are many command-line calculators available on Linux, I think GNU bc is hands-down the most powerful and useful.
Predating the GNU era, bc is actually a historically famous arbitrary-precision calculator language, with its first implementation dating back to the old Unix days of the 1970s. Initially bc was better known as a programming language whose syntax is similar to the C language. Over time the original bc evolved into POSIX bc, and then finally into the GNU bc of today.
### Features of GNU bc ###
Today's GNU bc is the result of many enhancements over earlier implementations of bc, and it now comes standard on all major GNU/Linux distros. It supports standard arithmetic operators with arbitrary-precision numbers, and multiple numeric bases (e.g., binary, decimal, hexadecimal) for input and output.
If you are familiar with the C language, you will see that the same or similar mathematical operators are used in bc. Some of the supported operators include arithmetic (+,-,*,/,%,++,--), comparison (<,>,==,!=,<=,>=), logical (!,&&,||), bitwise (&,|,^,~,<<,>>), and compound assignment (+=,-=,*=,/=,%=,&=,|=,^=,&&=,||=,<<=,>>=) operators. bc also comes with many useful built-in functions such as square root, sine, cosine, arctangent, natural logarithm, exponential, etc.
### How to Use GNU bc ###
As a command-line calculator, possible use cases of GNU bc are virtually limitless. In this tutorial, I am going to describe a few popular features of bc command. For a complete manual, refer to the [official source][2].
Unless you have a pre-written bc script, you typically run bc in interactive mode, where any typed statement or expression terminated with a newline is interpreted and executed on the spot. Simply type the following to enter an interactive bc session. To quit a session, type 'quit' and press Enter.
$ bc
![](https://farm4.staticflickr.com/3939/15403325480_d0db97d427_z.jpg)
The examples presented in the rest of the tutorial are supposed to be typed inside a bc session.
### Type expressions ###
To calculate an arithmetic expression, simply type the expression at the blinking cursor, and press Enter. If you want, you can store an intermediate result in a variable, then access the variable in other expressions.
![](https://farm6.staticflickr.com/5604/15403325460_b004b3f8da_o.png)
Within a given session, bc maintains an unlimited history of previously typed lines. Simply use the UP arrow key to retrieve them. If you want to limit the number of lines kept in the history, assign that number to a special variable named history. By default the variable is set to -1, meaning "unlimited."
### Switch input/output base ###
Oftentimes you will want to type input expressions and display results in binary or hexadecimal formats. For that, bc allows you to switch the numeric base of input or output numbers. Input and output bases are stored in ibase and obase, respectively. The default value of these special variables is 10, and valid values are 2 through 16 (or the value of the BC_BASE_MAX environment variable in the case of obase). To switch numeric bases, all you have to do is change the values of ibase and obase. For example, here is how to sum two hexadecimal/binary numbers:
![](https://farm6.staticflickr.com/5604/15402320019_f01325f199_z.jpg)
Note that I specify obase=16 before ibase=16, not vice versa. That is because if I specified ibase=16 first, the subsequent obase=16 statement would be interpreted as assigning 16 in base 16 to obase (i.e., 22 in decimal), which is not what we want.
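The ordering caveat is easy to reproduce non-interactively with the echo-pipe idiom (GNU bc assumed to be installed):

```shell
# Correct order: set obase first, then ibase.
echo "obase=16; ibase=2; 11101101101100010" | bc    # prints 1DB62

# Wrong order: once ibase=16 takes effect, the literal "16" in "obase=16"
# is read in base 16, so obase becomes 22 (decimal) -- not hexadecimal.
echo "ibase=16; obase=16; FF" | bc                  # output is not "FF"
```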
### Adjust precision ###
In bc, the precision of numbers is stored in a special variable named scale, which represents the number of digits after the decimal point. By default, scale is set to 0, which means that all numbers and results are truncated/stored as integers. To adjust the default precision, all you have to do is change the value of the scale variable.
scale=4
![](https://farm6.staticflickr.com/5597/15586279541_211312597b.jpg)
### Use built-in functions ###
Beyond simple arithmetic operations, GNU bc offers a wide range of advanced mathematical functions, built in via an external math library. To use those functions, launch bc with the "-l" option from the command line.
Some of these built-in functions are illustrated here.
Square root of N:
sqrt(N)
Sine of X (X is in radians):
s(X)
Cosine of X (X is in radians):
c(X)
Arctangent of X (the returned value is in radians):
a(X)
Natural logarithm of X:
l(X)
Exponential function of X:
e(X)
### Other goodies as a language ###
As a full-blown calculator language, GNU bc supports simple statements (e.g., variable assignment, break, return), compound statements (e.g., if, while, for loops), and custom function definitions. I am not going to cover the details of these features, but you can easily learn how to use them from the [official manual][2]. Here is a very simple function definition example:
define dummy(x){
return(x * x);
}
dummy(9)
81
dummy(4)
16
### Use GNU bc Non-interactively ###
So far we have used bc within an interactive session. However, quite popular use cases of bc in fact involve running it non-interactively within a shell script. In this case, you can send input to bc using echo through a pipe. For example:
$ echo "40*5" | bc
$ echo "scale=4; 10/3" | bc
$ echo "obase=16; ibase=2; 11101101101100010" | bc
![](https://farm4.staticflickr.com/3943/15565252976_f50f453c7f_z.jpg)
To conclude, GNU bc is a powerful and versatile command-line calculator that really lives up to your expectations. Preloaded on all modern Linux distributions, bc can make your number-crunching tasks much easier to handle without leaving your terminal. For that, GNU bc should definitely be in your productivity toolset.
--------------------------------------------------------------------------------
via: http://xmodulo.com/command-line-calculator-linux.html
Author: [Dan Nanni][a]
Translator: [Translator ID](https://github.com/译者ID)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](http://linux.cn/)
[a]:http://xmodulo.com/author/nanni
[1]:http://www.gnu.org/software/bc/
[2]:https://www.gnu.org/software/bc/manual/bc.html

Happy Birthday, Linux! On August 25, 1991, Linus Torvalds Started a New Chapter
================================================================================
![Linus Torvalds](http://i1-news.softpedia-static.com/images/news2/Linus-Torvalds-Started-a-Revolution-on-August-25-1991-Happy-Birthday-Linux-456212-2.jpg)
Linus Torvalds
**The Linux project has just entered its 23rd year. With the open source efforts of thousands of people, Linux is now the largest collaborative project in the world.**
Travel back to 1991: a programmer named Linus Torvalds wanted to build a free operating system. He had no plans to make it anything as big as the GNU project; he started it simply as a hobby. What he began developing turned into the most successful operating system in the world, but at the time nobody could have imagined what it would become.
On August 25, 1991, Linus Torvalds sent an email asking for help testing his newly developed operating system. Although the software had not changed much by then, he kept sending emails announcing Linux releases. At that time, the software had not yet been named Linux.
"I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since April, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system, due to practical reasons, among other things). I've currently ported bash(1.08) and gcc(1.40), and things seem to work."
"This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-) PS. It is free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc.), and it probably never will support anything other than AT-harddisks, as that's all I have. :-(" [wrote][1] Linus Torvalds.
Everything started with that email, and it is fascinating to look back at how things took shape from there. Linux not only kept pace with the times, especially in the server market, but also grew strong enough to reach into many other fields.
In fact, it is now hard to find a technology that has not been influenced by Linus's operating system. Phones, TVs, refrigerators, microcomputers, game consoles, tablets: almost every device with an electronic chip can run Linux, or already ships with an operating system built on it.
Linux is everywhere. It powers countless devices, and its influence grows exponentially every year. You might expect Mr. Torvalds to be among the wealthiest people in the world, but don't forget that Linux is free software: everyone can use it, modify it, and even make money from it. He didn't do it for the money.
Linus Torvalds started a revolution in 1991, and that revolution is not over. In fact, you could say it has only just begun.
> Happy birthday, Linux! Join us in celebrating 23 years of the free operating system that changed the world. [pic.twitter.com/mTVApV85gD][2]
>
> — The Linux Foundation (@linuxfoundation) [August 25, 2014][3]
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Linus-Torvalds-Started-a-Revolution-on-August-25-1991-Happy-Birthday-Linux-456212.shtml
Author: [Silviu Stahie][a]
Translator: [Shaohao Lin](https://github.com/shaohaolin)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](http://linux.cn/)
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:https://groups.google.com/forum/#!original/comp.os.minix/dlNtH7RRrGA/SwRavCzVE7gJ
[2]:http://t.co/mTVApV85gD
[3]:https://twitter.com/linuxfoundation/statuses/503799441900314624

This directory holds share-type articles: brief introductions to various software, useful books, websites, and so on.

10 Open Source Cloning Software Tools for Linux Users
================================================================================
> Cloning software reads the data of an entire disk and turns it into an .img file, which you can then copy onto another hard drive.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/photo/150x150x1Qn740810PM9112014.jpg.pagespeed.ic.Ch7q5vT9Yg.jpg)
Disk cloning means copying the data of one hard drive to another, and you could do that with a simple copy and paste. But you cannot copy hidden files and folders, or files that are in use. This is where cloning software helps: it saves an image of your files and folders, reading the data of the entire disk and turning it into an .img file that you can then copy onto another hard drive. Here we introduce the 10 best open source cloning tools:
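The underlying idea, reading a source front to back into an image file, can be sketched with plain dd. The sketch below works on a throwaway file rather than a real disk; pointing dd at a real device (e.g. /dev/sda) requires root privileges and great care:

```shell
# Create a 1 MiB dummy "disk", image it with dd, then verify the copy.
dd if=/dev/zero of=fake-disk.bin bs=1024 count=1024 2>/dev/null
dd if=fake-disk.bin of=disk.img bs=64k 2>/dev/null
cmp -s fake-disk.bin disk.img && echo "image matches source"
```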
### 1. [Clonezilla][1]###
Clonezilla is a live CD based on Ubuntu and Debian. It clones your disk data and makes backups just like Norton Ghost on Windows, only more efficiently. Clonezilla supports many file systems, including ext2, ext3, ext4, btrfs and xfs, and it also supports BIOS, UEFI, MBR and GPT partitioning.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x450xZ34_clonezilla-600x450.png.pagespeed.ic.8Jq7pL2dwo.png)
### 2. [Redo Backup][2]###
Redo Backup is another live CD for cloning disks with ease. It is free and open source software licensed under GPL 3. Its main features include an easy-to-use GUI booted from CD, no installation required, the ability to restore Linux and Windows systems, access to files without logging in, and recovery of deleted files.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x450x7D5_Redo-Backup-600x450.jpeg.pagespeed.ic.3QMikN07F5.jpg)
### 3. [Mondo Rescue][3]###
Mondo works a little differently from the others: instead of converting your disk data into an .img file, it converts it into an .iso image. With Mondo you can also use "mindi", a special tool developed by Mondo Rescue, to create a custom live CD, so that your data can be cloned from the live CD. It supports most Linux distributions as well as FreeBSD, and is licensed under the GPL.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x387x3C4_MondoRescue-620x387.jpeg.pagespeed.ic.cqVh7nbMNt.jpg)
### 4. [Partimage][4]###
This is open source backup software that runs on a Linux system by default. In most distributions you can install it from the distribution's own package manager. If you don't have a Linux system, you can use "SystemRescueCd", a live CD that includes Partimage by default and will do the backup for you. Partimage performs remarkably well at cloning hard drives.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x424xBZF_partimage-620x424.png.pagespeed.ic.ygzrogRJgE.png)
### 5. [FSArchiver][5]###
FSArchiver is the successor to Partimage, and it is a fine disk cloning tool as well. It supports cloning ext4 and NTFS partitions, basic file attributes such as owner and permissions, extended attributes such as those used by SELinux, and the file system attributes of all Linux file systems.
### 6. [Partclone][6]###
Partclone is a free tool for cloning and restoring partitions. Written in C, it first appeared in 2007 and supports many file systems, including ext2, ext3, ext4, xfs, nfs, reiserfs, reiser4, hfs+ and btrfs. It is very simple to use and is licensed under the GPL.
### 7. [doClone][7]###
doClone is a free software project developed to clone Linux system partitions easily. Written in C++, it supports up to 12 different file systems. It can repair the Grub bootloader and transfer an image to another machine over the local network. It also offers live synchronization, which means you can clone a system while it is running.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x396x2A6_doClone-620x396.jpeg.pagespeed.ic.qhimTILQPI.jpg)
### 8. [Macrium Reflect Free Edition][8]###
Macrium Reflect Free Edition is described as one of the fastest disk cloning tools, and it supports Windows file systems only. It has a fairly intuitive user interface. The software performs disk imaging and cloning, and lets you access images from the file manager. It also lets you create a Linux rescue CD, and it is compatible with Windows Vista and Windows 7.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x464xD1E_open1.jpg.pagespeed.ic.RQ41AyMCFx.png)
### 9. [DriveImage XML][9]###
DriveImage XML uses Microsoft VSS to create images and is very reliable. With this software you can create "hot" images of a disk that is in use. Images are stored as XML files, which means you can access them from any supporting third-party software. DriveImage XML also allows you to restore an image to a machine without rebooting. It is compatible with Windows XP, Windows Server 2003, Vista and 7.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x475x357_open2.jpg.pagespeed.ic.50ipbFWsa2.jpg)
### 10. [Paragon Backup & Recovery Free Edition][10]###
Paragon Backup & Recovery Free Edition excels at managing scheduled imaging. It is free software, but for personal use only.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x536x9Z9_open3.jpg.pagespeed.ic.9rDHp0keFw.png)
--------------------------------------------------------------------------------
via: http://www.efytimes.com/e1/fullnews.asp?edid=148039
Author: Sanchari Banerjee
Translator: [felixonmars](https://github.com/felixonmars)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](http://linux.cn/)
[1]:http://clonezilla.org/
[2]:http://redobackup.org/
[3]:http://www.mondorescue.org/
[4]:http://www.partimage.org/Main_Page
[5]:http://www.fsarchiver.org/Main_Page
[6]:http://www.partclone.org/
[7]:http://doclone.nongnu.org/
[8]:http://www.macrium.com/reflectfree.aspx
[9]:http://www.runtime.org/driveimage-xml.htm
[10]:http://www.paragon-software.com/home/br-free/

A Few Good Subtitle Editors on Linux
================================================================================
If you watch foreign movies regularly, chances are you prefer the subtitled versions to the dubbed ones. Having grown up in France, my childhood memories are full of Disney movies, which all sounded strange because of their French dubbing. If I had the chance to watch the original versions now, I know that for most people subtitles would still be necessary, and I take great pleasure in making subtitles for my family. Most encouraging of all, Linux is not short of fancy tools, and there are plenty of open source subtitle editors for it. In short, this article is not an exhaustive list of subtitle editors for Linux; feel free to tell us which one you consider the best.
### 1. Gnome Subtitles ###
![](https://farm6.staticflickr.com/5596/15323769611_59bc5fb4b7_z.jpg)
[Gnome Subtitles][1] is my pick when it comes to quickly editing existing subtitles. You can load the video and the subtitle text file, and start right away. I appreciate its balance between ease of use and advanced features: it comes with a synchronization tool as well as a spell checker. Last, but not least, what makes it so usable are its keyboard shortcuts: when you edit a lot of lines, you want to keep your hands on the keyboard and use the built-in shortcuts to move around.
### 2. Aegisub ###
![](https://farm3.staticflickr.com/2944/15323964121_59e9b26ba5_z.jpg)
[Aegisub][2] is a step up in complexity, and the interface reflects the learning curve. But beyond its intimidating appearance, Aegisub is a very complete piece of software, offering tools beyond anything you could imagine. Like Gnome Subtitles, Aegisub takes the WYSIWYG (what you see is what you get) approach, but to a whole new level: you can drag subtitles anywhere on the screen, view the audio spectrum on the side, and do everything with keyboard shortcuts. In addition, it ships with a kanji tool, has a karaoke mode, and lets you import lua scripts to automate tasks. I recommend reading the [manual][3] before you use it.
### 3. Gaupol ###
![](https://farm3.staticflickr.com/2942/15326817292_6702cc63fc_z.jpg)
Another contender is [Gaupol][4]. Unlike Aegisub, Gaupol is easy to pick up, with an interface much like Gnome Subtitles. But behind this relative simplicity, it offers all the necessary tools: shortcuts, third-party extensions, spell checking, and even speech recognition (courtesy of [CMU Sphinx][5]). One downside to mention: I noticed the software lag occasionally during my tests. Nothing serious, but enough to give me one more reason to prefer Gnome Subtitles.
### 4. Subtitle Editor ###
![](https://farm4.staticflickr.com/3914/15323911521_8e33126610_z.jpg)
[Subtitle Editor][6] is a lot like Gaupol, but its interface is a bit less intuitive and its features only slightly more advanced. What I do appreciate is that you can define "key frames", and it provides all the synchronization options. However, a few more icons, or a bit less text, would lighten the interface. As a nice touch, Subtitle Editor can simulate a "typewriter" effect, though I am not sure how useful it is. Last, but not least, the ability to redefine the keyboard shortcuts is really handy.
### 5. Jubler ###
![](https://farm4.staticflickr.com/3912/15323769701_3d94ca8884_z.jpg)
Written in Java, [Jubler][7] is a subtitle editor with multi-platform support. I was quite impressed by its interface. I definitely see the Java-ish side of it, but it is still well designed and well thought out. Like Aegisub, you can drag subtitles around on the screen, which makes for a pleasant experience rather than just typing. You can also define a style for the subtitles, play an audio track alongside, translate subtitles, or run a spell check. Note, though, that you need a media player installed and properly configured beforehand if you want to use Jubler fully. I attribute the easy setup to the script available on the [official page][8].
### 6. Subtitle Composer ###
![](https://farm6.staticflickr.com/5578/15323769711_6c6dfbe405_z.jpg)
Billed as "a subtitle composer for KDE", [Subtitle Composer][9] recalls many of the traditional features mentioned above, with the KDE interface we would expect. That naturally brings us to keyboard shortcuts, a feature I am particularly fond of. Beyond that, what sets Subtitle Composer apart from the editors above is its ability to run scripts written in JavaScript, Python, and even Ruby. The software ships with a few examples, which will certainly help you learn the syntax of these features.
最后不管你喜不喜欢我都要为你的家庭编辑几个字幕重新同步整个轨道或者是一切从头开始那么Linux 有很好的工具给你。对我来说,快捷键和易用性使得各个工具有差异,想要更高级别的使用体验,脚本和语音识别就成了很便利的一个功能。
你会使用哪个字幕编辑器,为什么?你认为还有没有更好用的字幕编辑器这里没有提到的?在评论里告诉我们。
--------------------------------------------------------------------------------
via: http://xmodulo.com/good-subtitle-editor-linux.html
Author: [Adrien Brochard][a]
Translator: [barney-ro](https://github.com/barney-ro)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](http://linux.cn/)
[a]:http://xmodulo.com/author/adrien
[1]:http://gnomesubtitles.org/
[2]:http://www.aegisub.org/
[3]:http://docs.aegisub.org/3.2/Main_Page/
[4]:http://home.gna.org/gaupol/
[5]:http://cmusphinx.sourceforge.net/
[6]:http://home.gna.org/subtitleeditor/
[7]:http://www.jubler.org/
[8]:http://www.jubler.org/download.html
[9]:http://sourceforge.net/projects/subcomposer/

What Linux Users Should Know About Open Hardware
================================================================================
> Linux users who don't understand how open hardware gets made are in for disappointment.
Commercial software and free software have been intertwined for many years, but the two often misunderstand each other. That is not surprising: to one side it is a business, to the other a way of life. But the misunderstanding can be painful, which is why it is worth the effort to expose what goes on behind the scenes.
An increasingly common case in point is the recurring attempt at open hardware, whether by Canonical, Jolla, MakePlayLive, or several others. Whether commentators or end users, average free software users greet the announcement of a new hardware platform with exaggerated enthusiasm, grow disillusioned as delay follows delay, and finally give up on the whole product.
It is a cycle in which nobody wins, and it breeds distrust, all because average Linux users have no idea what is happening behind the news.
My personal experience with bringing products to market is limited. But I don't know of anyone who has managed a breakthrough either. Bringing open hardware or any other product to market remains not only a brutal business, but one seriously stacked against newcomers.
### Finding Partners ###
Both the manufacturing and the distribution of digital products are controlled by relatively few companies, whose schedules are sometimes booked months in advance. Profit margins can be thin, so like movie studios buying up old sitcoms, manufacturers generally prefer to replicate the success of products that already sell well. As Aaron Seigo told me when talking about his efforts to develop the Vivaldi tablet, manufacturers would rather have someone else take the risk of developing a new product.
Not only that, they would rather deal with people with an existing sales record who are likely to bring repeat business.
Moreover, the volume a newcomer cares about is typically only a few thousand units. Chip manufacturers would much rather deal with Apple or Samsung, whose orders are likely to be in the hundreds of thousands.
Faced with this situation, open hardware makers may find themselves lost at the bottom of the manufacturers' lists, unless they can find a second- or third-tier manufacturer willing to try a small production run of a new product.
They may fall back on buying finished components and doing the assembly themselves, as Seigo tried with Vivaldi. Or they may do as Canonical did and look for partners willing to take the risk for the industry. Even if they succeed, they have typically slipped months behind their original, naive expectations.
### Bumping Along the Road to Market ###
However, finding a manufacturer is only the first hurdle. As the Raspberry Pi project learned, even if open hardware makers want to run only free software on their product, manufacturers are likely to insist on proprietary firmware or drivers in the name of protecting trade secrets.
This inevitably draws criticism from potential users, but open hardware makers have no choice except to compromise their vision. Looking for another manufacturer is no solution, partly because it means more delay, but mostly because completely royalty-free hardware does not exist. Industry giants like Samsung have no interest in free hardware, and as newcomers, open hardware makers have no leverage to demand it.
What's more, even if free hardware were available, manufacturers cannot guarantee it will be used in the next production run. Makers can easily find themselves fighting the same battle every time they need another run.
As if that were not enough, by this point open hardware makers have likely spent six to twelve months haggling. By then, chances are, industry standards have shifted, and they may have to start over again to update their product's specifications.
### A Short and Brutal Shelf Life ###
Despite all these difficulties, hardware that is at least partially open does get released eventually. Remember the challenges of finding a manufacturer? The same problems recur with distributors, and not just once, but region by region.
Typically, distributors are as conservative as manufacturers, and just as wary of dealing with newcomers and new ideas. Even if they agree to stock a product, they can easily decide not to encourage their sales representatives to promote it, which means that in a few months it is, in effect, taken off the shelves.
Of course, selling online is a possibility. But meanwhile, the hardware has to be stored somewhere, which adds to the cost. Production on demand, even when possible, is expensive, and unassembled components need to be stored as well.
### Weighing the Whole Strange Business ###
I have only sketched the process here, but anyone who has been involved in manufacturing will recognize what I have described as standard. And to make things worse, open hardware makers usually discover all of this only as they go through it. Inevitably, they make mistakes, which bring still more delays.
But the point is that once you have some sense of the process as a whole, your reaction to the news of yet another open hardware attempt changes. The process means that, unless a company is operating in strict stealth mode, an announcement that a product will ship within six months will quickly prove to be an outdated guess. Twelve to eighteen months is more likely, and given the obstacles described above, it may well mean the product never actually ships.
For example, as I write, people are waiting for the first generation of Steam Machines, the Linux-based gaming consoles. They believe Steam Machines will utterly transform both Linux and gaming.
As a market category, Steam Machines may have an advantage over other new products, because those involved at least have experience producing software. Yet a full year after the announcement, all that exists of Steam Machines is prototypes, and they may not be available until mid-2015. Given the realities of hardware production, we will be lucky if even half of them see the light of day. In fact, a release of two to four models might be more realistic.
I make this prediction with no particular product in mind. But compared with prophecies of a golden age of Linux and gaming, an understanding of hardware production strikes me as far more reliable. I will be happy to be proven wrong, but the facts will not change: what is surprising is not that so many Linux-related hardware products fail, but that any of them succeed, even briefly.
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/what-linux-users-should-know-about-open-hardware-1.html
Author: [Bruce Byfield][a]
Translator: [zpl1025](https://github.com/zpl1025)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](http://linux.cn/)
[a]:http://www.datamation.com/author/Bruce-Byfield-6030.html

Better Alternatives to Basic Command Line Utilities
================================================================================
The command line can sound scary at times, especially when you first encounter it. You may even have had nightmares about it. Little by little, though, we all come to realize that the command line is actually not that scary, but extremely useful. In fact, the lack of a shell is exactly what drives me crazy every time I use Windows. The reason for this change in perception is that command-line tools are actually smart. The basic utilities you use in any Linux terminal are powerful, but far from being as powerful as they could be. If you want to make your command-line life even more pleasant, here are a few programs you can download to replace the defaults; they will give you far more functionality than the originals.
### dfc ###
As an LVM user, I really like keeping an eye on my hard drive storage usage. I also never really understood why, on Windows, we have to open the file explorer to see the basic information about our computer. On Linux, we can use:
$ df -h
![](https://farm4.staticflickr.com/3858/14768828496_c8a42620a3_z.jpg)
This command shows the size, used space, available space, usage percentage, and mount point of every volume on the computer. Note that we have to use the "-h" option to make all the data human readable (using GiB rather than KiB). But you can replace df entirely with [dfc][1], which, without any extra option, gives you everything df shows and draws a colored usage graph for each device, making the output far more readable.
![](https://farm6.staticflickr.com/5594/14791468572_a84d4b6145_z.jpg)
In addition, you can sort the volumes with the "-q" option, define the units you want with "-u", and even get csv or html output with "-e".
### dog ###
Dog is better than cat. At least that is what the program claims, and you should believe it for once. Everything cat does, [dog][2] does better. Beyond simply displaying a stream of text in the console, dog can filter it. For example, you can get all the images on a web page with the syntax:
$ dog --images [URL]
![](https://farm6.staticflickr.com/5568/14811659823_ea8d22d045_z.jpg)
Or all the links:
$ dog --links [URL]
![](https://farm4.staticflickr.com/3902/14788690051_7472680968_z.jpg)
Beyond that, the dog command can handle a few other small tasks, like converting text to upper or lower case, using different encodings, displaying line numbers, and handling hexadecimal. In short, dog is a must-have replacement for cat.
### advcp ###
One of the most basic commands in Linux is the copy command, cp. It is almost as central as cd. Yet its output is laconic. You can use verbose mode to watch the files being copied in real time, but if one file is very big, you are left staring at the screen with no idea of what is going on behind the scenes. A simple fix is to add a progress bar, which is exactly what advcp (short for advanced cp) does! A [patched version][3] of [GNU coreutils][4], advcp provides the acp and amv commands, "advanced" versions of cp and mv. Use the syntax:
$ acp -g [file] [copy]
It copies the file to its destination while displaying a progress bar.
![](https://farm6.staticflickr.com/5588/14605117730_fe611fc234_z.jpg)
I also suggest setting the following aliases in your .bashrc or .zshrc:
alias cp="acp -g"
alias mv="amv -g"
(Translator's note: the link given in the original article appears to be dead; I wrote a working install script and put it in my [gist](https://gist.github.com/b978fc93b62e75bfad9c), using the [patch](https://aur.archlinux.org/packages/advcp) from the AUR.)
### The Silver Searcher ###
[the silver searcher][5] has an unusual name, but it is a tool designed to replace grep and [ack][6]. The silver searcher searches files for the text you want, faster than ack, and it can skip certain files, unlike grep. (Translator's note: the original seems to imply grep cannot skip files, though grep does have similar options.) The silver searcher has a few other capabilities, such as colored output, following symlinks, using regular expressions, and even ignoring certain patterns.
![](https://farm4.staticflickr.com/3876/14605308117_f966c77140_z.jpg)
The author provides search-speed statistics on the developer's home page which, if they still hold, are quite impressive. Better still, you can integrate it into Vim and call it with a concise command. In two words: smart and fast.
### plowshare ###
All command-line fans love using wget, or a suitable alternative, to download things from the internet. But if you use a lot of file-sharing sites, like mediafire or rapidshare, you will be glad to learn about a program dedicated to those sites, called [plowshare][7]. Once it is installed, you can download files with:
$ plowdown [URL]
Or upload a file:
$ plowup [website name] [file]
provided you have an account on that file-sharing site.
Finally, you can get a list of links to the files in a shared folder:
$ plowlist [URL]
Or the file name, size, hash, and so on:
$ plowprobe [URL]
For those familiar with these services, plowshare is a great alternative to the slow and exasperating jDownloader.
### htop ###
If you use the top command regularly, chances are you will love [htop][8]. Both top and htop offer a real-time view of running processes, but htop has a list of user-friendly features that top lacks. For example, in htop you can scroll the process list horizontally or vertically to see every process's full command line, and perform basic process operations (kill, (re)nice, and so on) with mouse clicks and arrow keys, without typing process identifiers.
![](https://farm6.staticflickr.com/5581/14819141403_6f2348590f_z.jpg)
In the end, these efficient alternatives to the basic command-line utilities are like useful little pearls: they are not that easy to discover, but once you find one, you wonder how you endured so long without it. If you know other tools that fit the description above, please share them in the comments.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/07/better-alternatives-basic-command-line-utilities.html
Author: [Adrien Brochard][a]
Translator: [wangjiezhe](https://github.com/wangjiezhe)
Proofreader: [Proofreader ID](https://github.com/校对者ID)
This article is translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](http://linux.cn/)
[a]:http://xmodulo.com/author/adrien
[1]:http://projects.gw-computing.net/projects/dfc
[2]:http://archive.debian.org/debian/pool/main/d/dog/
[3]:http://zwicke.org/web/advcopy/
[4]:http://www.gnu.org/software/coreutils/
[5]:https://github.com/ggreer/the_silver_searcher
[6]:http://xmodulo.com/2014/01/search-text-files-patterns-efficiently.html
[7]:https://code.google.com/p/plowshare/
[8]:http://hisham.hm/htop/
