Merge pull request #9 from LCTT/master

Update repository
This commit is contained in:
DoubleC 2014-10-25 17:32:40 +08:00
commit 82d621dae6
126 changed files with 7234 additions and 2299 deletions

View File

@ -1,13 +1,9 @@
Sysstat工具包中20个实用的Linux性能监控工具包括mpstat, pidstat, iostat 和sar
Sysstat性能监控工具包中20个实用命令
===============================================================
在我们上一篇文章中,我们已经学习了如何去安装和更新**sysstat**,并且了解了包中的一些实用工具。
注:此文一并附上,在同一个原文中更新
- [Sysstat Performance and Usage Activity Monitoring Tool For Linux][1]
在我们[上一篇文章][1]中,我们已经学习了如何去安装和更新**sysstat**,并且了解了包中的一些实用工具。
![20 Sysstat Commands for Linux Monitoring](http://www.tecmint.com/wp-content/uploads/2014/09/sysstat-commands.png)
Linux系统监控的20个Sysstat命令
今天,我们将会通过一些有趣的实例来学习**mpstat**, **pidstat**, **iostat**和**sar**等工具,这些工具可以帮助我们找出系统中的问题。这些工具都包含了不同的选项,这意味着你可以根据不同的工作使用不同的选项,或者根据你的需求来自定义脚本。我们都知道,系统管理员都会有点懒,他们经常去寻找一些更简单的方法来完成他们的工作。
### mpstat - 处理器统计信息 ###
@ -21,7 +17,7 @@ Linux系统监控的20个Sysstat命令
12:23:57 IST CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
12:23:57 IST all 37.35 0.01 4.72 2.96 0.00 0.07 0.00 0.00 0.00 54.88
2.使用‘**-p**(处理器编码)和ALL参数将会从0开始独立的输出每个CPU的统计信息0表示第一个cpu。
2.使用‘**-p** (处理器编号)和ALL参数将会从0开始独立地输出每个CPU的统计信息0表示第一个CPU。
tecmint@tecmint ~ $ mpstat -P ALL
@ -151,7 +147,7 @@ Linux系统监控的20个Sysstat命令
12:51:55 IST 0 19 0.00 0.00 0.00 0.00 0 writeback
12:51:55 IST 0 20 0.00 0.00 0.00 0.00 1 kintegrityd
8.使用‘**-d 2**参数我们可以看到I/O统计信息2表示以秒为单位对统计信息进行刷新。这个参数可以方便的知道当系统在进行繁重的I/O时那些进行占用大量的资源。
8.使用‘**-d 2**参数我们可以看到I/O统计信息2表示以秒为单位对统计信息进行刷新。这个参数可以让我们方便地知道当系统在进行繁重的I/O时哪些进程占用了大量的资源。
tecmint@tecmint ~ $ pidstat -d 2
@ -171,7 +167,6 @@ Linux系统监控的20个Sysstat命令
9.想要每间隔**2**秒对进程**4164**的cpu统计信息输出**3**次,则使用如下带参数‘**-t**’(输出某个选定进程的统计信息)的命令。
tecmint@tecmint ~ $ pidstat -t -p 4164 2 3
Linux 3.11.0-23-generic (tecmint.com) Thursday 04 September 2014 _i686_ (2 CPU)
@ -250,13 +245,13 @@ Linux系统监控的20个Sysstat命令
01:09:08 IST 1000 5 99 FIFO migration/0
01:09:08 IST 1000 6 99 FIFO watchdog/0
因为我们已经学习过Iostat命令了因此在本文中不在对其进行赘述。若想查看Iostat命令的详细信息请参看“[使用Iostat和Vmstat进行Linux性能监控][2]注:此文也一并附上在同一个原文更新中
因为我们已经学习过iostat命令了因此在本文中不再对其进行赘述。若想查看iostat命令的详细信息请参看“[使用Iostat和Vmstat进行Linux性能监控][2]”。
###sar - 系统活动报告###
我们可以使用‘**sar**’命令来获得整个系统性能的报告。这有助于我们定位系统性能的瓶颈,并且有助于我们找出这些烦人的性能问题的解决方法。
Linux内核维护一些内部计数器这些计数器包含了所有的请求及其完成时间和I/O块数等信息sar命令从所有的这些信息中计算出请求的利用率和比例以便找出瓶颈所在。
Linux内核维护一些内部计数器这些计数器包含了所有的请求及其完成时间和I/O块数等信息sar命令从所有的这些信息中计算出请求的利用率和比例以便找出瓶颈所在。
sar命令主要的用途是生成某段时间内所有活动的报告因此必需确保sar命令在适当的时间进行数据采集而不是在午餐时间或者周末。
@ -274,7 +269,7 @@ sar命令主要的用途是生成某段时间内所有活动的报告因此
01:42:38 IST all 50.75 0.00 3.75 0.00 0.00 45.50
Average: all 46.30 0.00 3.93 0.00 0.00 49.77
14.在上面的例子中我们交互的执行sar命令。sar命令提供了使用cron进行非交互的执行sar命令的方法使用**/usr/local/lib/sa1**和**/usr/local/lib/sa2**脚本(如果你在安装时使用了**/usr/local**作为前缀)
14.在上面的例子中我们交互的执行sar命令。sar命令提供了使用cron进行非交互的执行sar命令的方法使用**/usr/local/lib/sa1**和**/usr/local/lib/sa2**脚本(如果你在安装时使用了**/usr/local**作为前缀的话
- **/usr/local/lib/sa1**是一个可以使用cron进行调度生成二进制日志文件的shell脚本。
- **/usr/local/lib/sa2**是一个可以将二进制日志文件转换为用户可读的编码方式。
@ -287,7 +282,7 @@ sar命令主要的用途是生成某段时间内所有活动的报告因此
#在每天23:53时生成一个用户可读的日常报告
53 23 * * * /usr/local/lib/sa/sa2 -A
在sa1脚本执行后期sa1脚本会调用**sabc**(系统活动数据收集器System Activity Data Collector)工具采集特定时间间隔内的数据。**sa2**脚本会调用sar来将二进制日志文件转换为用户可读的形式。
sa1脚本会在后台调用**sadc**(系统活动数据收集器System Activity Data Collector工具来采集特定时间间隔内的数据。**sa2**脚本会调用sar来将二进制日志文件转换为用户可读的形式。
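把这两个脚本放进 crontab 的一个完整示例片段如下(仍假设安装前缀为 /usr/local采集间隔是示例值可按需调整

```shell
# 每 10 分钟运行一次 sa1采集 1 次、每次 1 秒的活动数据,追加到二进制日志
*/10 * * * * /usr/local/lib/sa/sa1 1 1

# 每天 23:53 运行 sa2把当天的二进制日志转换成用户可读的报告
53 23 * * * /usr/local/lib/sa/sa2 -A
```

这样sar 就能在全天持续采集数据,而不只是在你手工运行它的时候。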
15.使用‘**-q**’参数来检查运行队列的长度,所有进程的数量和平均负载
@ -303,7 +298,7 @@ sar命令主要的用途是生成某段时间内所有活动的报告因此
02:00:54 IST 0 431 1.64 1.23 0.97 0
Average: 2 431 1.68 1.23 0.97 0
16.使用‘**-F**’参数查看当前挂载的文件系统统计信息
16.使用‘**-F**’参数查看当前挂载的文件系统的使用统计信息
tecmint@tecmint ~ $ sar -F 2 4
@ -387,7 +382,7 @@ sar命令主要的用途是生成某段时间内所有活动的报告因此
![Network Graph](http://www.tecmint.com/wp-content/uploads/2014/09/sar-graph.png)
网络信息图表
*网络信息图表*
现在你可以参考man手册来获取每个参数的更多详细信息并且请在文章下留下你宝贵的评论。
@ -397,10 +392,10 @@ via: http://www.tecmint.com/sysstat-commands-to-monitor-linux/
作者:[Kuldeep Sharma][a]
译者:[cvsher](https://github.com/cvsher)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/kuldeepsharma47/
[1]:http://www.tecmint.com/install-sysstat-in-linux/
[2]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
[1]:http://linux.cn/article-4025-1.html
[2]:http://linux.cn/article-4024-1.html

View File

@ -1,10 +1,10 @@
在 Debian 上使用 systemd 管理系统
================================================================================
人类已经无法阻止 systemd 占领全世界的 Linux 系统了唯一阻止它的方法是在你自己的机器上手动卸载它。到目前为止systemd 已经创建了比任何软件都多的技术问题、感情问题和社会问题。这一点从[热议][1](也称 Linux 初始化软件之战)上就能看出,这场争论在 Debian 开发者之间持续了好几个月。当 Debian 技术委员会最终决定将 systemd 放到 Debian 8代号 Jessie的发行版里面时其反对者试图通过多种努力来[取代这项决议][2],甚至有人扬言要威胁那些支持 systemd 的开发者的生命安全。
人类已经无法阻止 systemd 占领全世界的 Linux 系统了唯一阻止它的方法是在你自己的机器上手动卸载它。到目前为止systemd 已经创建了比任何软件都多的技术问题、感情问题和社会问题。这一点从[“Linux 初始化软件之战”][1]上就能看出,这场争论在 Debian 开发者之间持续了好几个月。当 Debian 技术委员会最终决定将 systemd 放到 Debian 8代号 Jessie的发行版里面时其反对者试图通过多种努力来[取代这项决议][2],甚至有人扬言要威胁那些支持 systemd 的开发者的生命安全。
这也说明了 systemd 对 Unix 传承下来的系统处理方式有很大的干扰。“一个软件只做一件事情”的哲学思想已经被这个新来者彻底颠覆。除了取代了 sysvinit 成为新的系统初始化工具外systemd 还是一个系统管理工具。目前为止,由于 systemd-sysv 这个软件包提供的兼容性,那些我们使用惯了的工具还能继续工作。但是当 Debian 将 systemd 升级到214版本后这种兼容性就不复存在了。升级措施预计会在 Debian 8 "Jessie" 的稳定分支上进行。从此以后用户必须使用新的命令来管理系统、执行任务、变换运行级别、查询系统日志等等。不过这里有一个应对方案,那就是在 .bashrc 文件里面添加一些别名。
现在就让我们来看看 systemd 是怎么改变你管理系统的习惯的。在使用 systemd 之前,你得先把 sysvinit 保存起来,以 systemd 出错的时候还能用 sysvinit 启动系统。这种方法只有在没安装 systemd-sysv 的情况下才能生效,具体操作方法如下:
现在就让我们来看看 systemd 是怎么改变你管理系统的习惯的。在使用 systemd 之前,你得先把 sysvinit 保存起来,以便在 systemd 出错的时候还能用 sysvinit 启动系统。这种方法**只有在没安装 systemd-sysv 的情况下才能生效**,具体操作方法如下:
# cp -av /sbin/init /sbin/init.sysvinit
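备份之后,万一 systemd 启动失败,可以在 GRUB 引导菜单中临时改用备份出来的 sysvinit 启动。下面是一个假设性的操作示例(在 GRUB 菜单按 e 编辑启动项):

```shell
# 在 GRUB 编辑界面中,把下面的内核参数追加到 linux 那一行的末尾,
# 然后按 Ctrl+X 启动,即可使用之前备份的 sysvinit 作为 1 号进程:
init=/sbin/init.sysvinit
```

这只是一次性的引导参数,重启后不会保留,适合作为 systemd 出问题时的应急手段。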
@ -34,8 +34,8 @@ systemctl 的功能是替代“/etc/init.d/foo start/stop”这类命令
你同样可以使用 systemctl 实现转换运行级别、重启系统和关闭系统的功能:
- systemctl isolate graphical.target - 切换到运行级别5就是有桌面的级别
- systemctl isolate multi-user.target - 切换到运行级别3没有桌面的级别
- systemctl isolate graphical.target - 切换到运行级别5就是有桌面的运行级别
- systemctl isolate multi-user.target - 切换到运行级别3没有桌面的运行级别
- systemctl reboot - 重启系统
- systemctl poweroff - 关机
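正如前文提到的,习惯了旧命令的用户可以在 .bashrc 里加几个别名来过渡。下面是一个示例片段(别名的名字是随意取的,可按个人习惯修改):

```shell
# ~/.bashrc 中的别名示例:用顺手的短命令调用 systemctl
alias sv-reboot='systemctl reboot'
alias sv-off='systemctl poweroff'
alias sv-gui='systemctl isolate graphical.target'    # 相当于运行级别 5
alias sv-text='systemctl isolate multi-user.target'  # 相当于运行级别 3
```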
@ -43,7 +43,7 @@ systemctl 的功能是替代“/etc/init.d/foo start/stop”这类命令
### journalctl 的基本用法 ###
systemd 不仅提供了比 sysvinit 更快的启动速度,还让日志系统在更早的时候启动起来,可以记录内核初始化阶段、内存初始化阶段、前期启动步骤以及主要的系统执行过程的日志。所以以前那种需要通过对显示屏拍照或者暂停系统来调试程序的日子已经一去不复返啦。
systemd 不仅提供了比 sysvinit 更快的启动速度,还让日志系统在更早的时候启动起来,可以记录内核初始化阶段、内存初始化阶段、前期启动步骤以及主要的系统执行过程的日志。所以**以前那种需要通过对显示屏拍照或者暂停系统来调试程序的日子已经一去不复返啦**
systemd 的日志文件都被放在 /var/log 目录。如果你想使用它的日志功能,需要执行一些命令,因为 Debian 没有打开日志功能。命令如下:
@ -86,7 +86,7 @@ systemd 可以让你能更有效地分析和优化你的系统启动过程:
![](https://farm6.staticflickr.com/5565/14423020978_14b21402c8_z.jpg)
systemd 虽然是个年轻的项目,但存在大量文档。首先要介绍的是[Lennart Poettering 的 0pointer 系列][3]。这个系列非常详细,非常有技术含量。另外一个是[免费桌面信息文档][4],它包含了最详细的关于 systemd 的链接发行版特性文件、bug 跟踪系统和说明文档。你可以使用下面的命令来查询 systemd 都提供了哪些文档:
systemd 虽然是个年轻的项目,但已有大量文档。首先要介绍给你的是[Lennart Poettering 的 0pointer 系列][3]。这个系列非常详细,非常有技术含量。另外一个是[免费桌面信息文档][4],它包含了最详细的关于 systemd 的链接发行版特性文件、bug 跟踪系统和说明文档。你可以使用下面的命令来查询 systemd 都提供了哪些文档:
# man systemd.index
@ -96,7 +96,7 @@ systemd 虽然是个年轻的项目,但存在大量文档。首先要介绍的
via: http://xmodulo.com/2014/07/use-systemd-system-administration-debian.html
译者:[bazz2](https://github.com/bazz2) 校对:[校对者ID](https://github.com/校对者ID)
译者:[bazz2](https://github.com/bazz2) 校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,168 @@
Camicri Cube: 可离线的便携包管理系统
================================================================================
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/camicri-cube-206x205.jpg)
众所周知,在系统中使用新立得包管理工具或软件中心下载和安装应用程序的时候,我们必须得有互联网连接。但,如果您刚好没有网络或者是网络速度死慢死慢的呢?在您的 Linux 桌面系统中使用软件中心包管理工具来安装软件绝对是一个头痛的问题。反而,您可以从相应的官网上手工下载应用程序包并手工安装。但是,大多数的 Linux 用户并不知道他们希望安装的应用程序所需要的依赖关系包。如果您恰巧出现这种情况,应用怎么办呢?现在一切都不用担心了。今天,我们给您介绍一款非常棒的名叫 **Camicri Cube** 的离线包管理工具。
您可以把此包管理工具装在任何联网的系统上下载您所需要安装的软件然后把下载好的软件包拿到没联网的机器上安装就可以了。听起来很不错吧是的它就是这样操作的。Cube 是一款像新立得和 Ubuntu 软件中心这样的包管理工具但是一款便携式的。它在任何平台Windows 系统、基于 Apt 的 Linux 发行版)、在线状态、离线状态、在闪存或任何可移动设备上都是可以使用和运行的。我们这个实验项目的主要目的是使处在离线状态的 Linux 用户能很容易地下载和安装 Linux 应用程序。
Cube 会收集您的离线电脑的详细信息,如操作系统的详细信息、安装的应用程序等等。然后用 USB 迷你盘把 cube 应用程序拷贝一个副本放到其它有网络连接的系统上使用接着就可以下载您需要的应用程序了。下载完所有需要的软件包之后回到您原来的计算机并开始安装。Cube 是由 **Jake Capangpangan** 开发和维护的,使用 C++ 语言编写,而且已经集成了所有必须的包。因此,使用它并不需要再安装任何额外的软件。
### 安装 ###
现在,让我们下载 Cube 程序包,然后在没有网络连接的离线系统上进行安装。既可以从[官网主站页面][1]下载,也可以从 [Sourceforge 网站][2]下载。要确保下载的版本跟您的离线计算机架构对应的系统相匹配。比如我使用的是64位的系统就要下载64位版本的安装包。
wget http://sourceforge.net/projects/camicricube/files/Camicri%20Cube%201.0.9/cube-1.0.9.2_64bit.zip/
对此 zip 文件解压,解压到 home 目录或者是您想放的任何地方:
unzip cube-1.0.9.2_64bit.zip
这就好了。接着,该是知道怎么使用的时候了。
### 使用 ###
这儿,我使用的是两台装有 Ubuntu 系统的机器。原机器(离线-没有网络连接)上面跑着的是 **Ubuntu 14.04** 系统,有网络连接的机器跑着的是 **Lubuntu 14.04** 桌面系统。
#### 离线系统上的操作步骤: ####
在离线系统上,进入已经解压的 Cube 文件目录,您会发现一个名叫 “cube-linux” 的可执行文件,双击它,并点击执行。如果它是不可执行的,用如下命令设置其可执行权限。
sudo chmod -R +x cube/
然后,进入 cube 目录,
cd cube/
接着执行如下命令来运行:
./cube-linux
输入项目的名称比如sk然后点击**创建**按钮。正如我上面提到的,这将会创建一个包含您的系统完整详细信息的新项目,如操作系统的详细信息、安装的应用程序列表、库等等。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0013.png)
如您所知,我们的系统是离线的,意思是没有网络连接。所以我点击**取消**按钮来跳过资源库的更新过程。随后我们会在一台有网络连接的系统上更新此资源库。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0023.png)
再一次,在这台离线机器上我们点击 **No** 来跳过更新,因为我们没有网络连接。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0033.png)
就是这样。现在新的项目已经创建好了,它会保存在我们的主 cube 目录里面。进入 Cube 目录,您就会发现一个名叫 Projects 的目录。这个目录会保存有您的离线系统的必要完整详细信息。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Selection_004.png)
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Selection_005.png)
现在,关闭 cube 应用程序,然后拷贝整个主 **cube** 文件夹到任何的闪存盘里,接入有网络连接的系统。
#### 在线系统上操作步骤: ####
往下的操作步骤需要在有网络连接的系统上进行。在我们的例子中,用的是 **Lubuntu 14.04** 系统的机器。
跟在源机器上的操作一样设置使 cube 目录具有可执行权限。
sudo chmod -R +x cube/
现在,双击 cube-linux 文件运行应用程序或者也可以在终端上加载运行,如下所示:
cd cube/
./cube-linux
在窗口的 “Open Existing Projects” 部分会看到您的项目列表,选择您需要的项目。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0014.png)
随后cube 会询问这是否是您的项目所在的源机器。它并不是我的源(离线)机器,所以我点击 **No**
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0024.png)
接着会询问是否想要更新您的资源库。点击 **OK** 来更新资料库。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0034.png)
下一步,我们得更新所有过期的包/应用程序。点击 Cube 工具栏上的 “**Mark All updates**” 按钮,然后点击 “**Download all marked**” 按钮来更新所有过期的包/应用程序。如下截图所示在我的例子当中有302个包需要更新。这时点击 **OK** 来继续下载所标记的安装包。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_005.png)
现在Cube 会开始下载所有已标记的包。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Downloading-packages_006.png)
我们已经完成了对资料库和安装包的更新。此时,如果您在离线系统上还需要其它的安装包,您也可以下载这些新的安装包。
#### 下载新的应用程序 ####
例如,现在我想下载 **apache2** 包。在**搜索**框里输入包的名字点击搜索按钮。Cube 程序会获取您想查找的应用程序的详细信息。点击 “**Download this package now**” 按钮,接着点击 **OK** 就开始下载了。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_008.png)
Cube 将会下载 apache2 的安装包及所有的依赖包。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Downloading-packages_009.png)
如果您还想查找和下载更多的安装包,只需搜索到需要的包,点击 “**Mark this package**” 按钮进行标记即可。您想在源机器上安装的包都可以这样标记上。一旦标记完所有的包,就可以点击位于顶部工具栏的 “**Download all marked**” 按钮来下载它们。
在完成资源库、过期软件包的更新和下载好新的应用程序后,就可以关闭 Cube 应用程序。然后,拷贝整个 Cube 文件夹到任何的闪盘或者外接硬盘。回到您的离线系统中来。
#### 离线机器上的操作步骤: ####
把 Cube 文件夹拷回您的离线系统的任意位置。进入 cube 目录,并且双击 **cube-linux** 文件来加载启动 Cube 应用程序。
或者,您也可以从终端下启动它,如下所示:
cd cube/
./cube-linux
选择您的项目,点击打开。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0012.png)
然后会弹出一个对话框询问是否更新系统,尤其是已经下载好新的资源库的时候,请点击“是”。因为它会把所有的资源库传输到您的机器上。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0021.png)
您会看到,在没有网络连接的情况下这些资源库会更新到您的离线机器上。那是因为我们已经在有网络连接的系统上下载更新了此资源库。看起来很酷,不是吗?
更新完资源库后,让我们来安装所有的下载包。点击 “Mark all Downloaded” 按钮选中所有的下载包,然后点击 Cube 工具栏上的 “Install All Marked” 按钮来安装它们。Cube 应用程序会自动打开一个新的终端窗口来安装所有的软件包。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Terminal_001.png)
如果遇到依赖的问题,进入 **Cube Menu -> Packages -> Install packages with complete dependencies** 来安装所有的依赖包。
如果您只想安装特定的包,可以在包列表中点击 “Downloaded” 按钮,所有已下载的包都会被列出来。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0035.png)
然后双击某个特定的包,点击 “Install this” 按钮来安装;或者,如果想过后再安装它的话,可以先点击 “Mark this” 按钮。
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0043.png)
顺便提一句,您可以在任意已经连接网络的系统上下载所需要的包,然后在没有网络连接的离线系统上安装。
### 结论 ###
这是我曾经使用过的最好、最有用的软件工具之一。但我在 Ubuntu 14.04 测试机上测试的时候,遇到了很多依赖问题,还经常会出现闪退的情况。而在最新安装的 Ubuntu 14.04 离线系统上使用时则没有遇到任何问题。希望这些问题在老版本的 Ubuntu 上也不会发生。除了这些小问题,这个小工具就如同宣传的一样,像魔法一样神奇。
欢呼吧!
原文作者:
![](http://1.gravatar.com/avatar/1ba62ac2b395f541750b6b4f873eb37b?s=70&d=monsterid&r=G)
[SK][a](Senthilkumar又名SK来自于印度的泰米尔纳德邦Linux 爱好者FOSS 论坛支持者和 Linux 板块顾问。一个充满激情和活力的人,致力于提供高质量的 IT 专业文章,非常喜欢写作以及探索 Linux、开源、电脑和互联网等新事物。)
--------------------------------------------------------------------------------
via: http://www.unixmen.com/camicri-cube-offline-portable-package-management-system/
译者:[runningwater](https://github.com/runningwater) 校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/sk/
[1]:https://launchpad.net/camicricube
[2]:http://sourceforge.net/projects/camicricube/

View File

@ -0,0 +1,117 @@
命令行基础工具的更佳替代品
================================================================================
命令行听起来有时候会很吓人,特别是在刚刚接触的时候,你甚至可能做过有关命令行的噩梦。然而渐渐地,我们都会意识到命令行实际上并不是那么吓人,反而是非常有用。实际上,没有命令行正是每次我使用 Windows 时让我感到崩溃的地方。这种感觉上的变化是因为命令行工具实际上是很智能的。 你在任何一个 Linux 终端上所使用的基本工具功能都是很强大的, 但还远说不上是足够强大。 如果你想使你的命令行生涯更加愉悦, 这里有几个程序你可以下载下来替换原来的默认程序, 它还可以给你提供比原始程序更多的功能。
### dfc ###
作为一个 LVM 使用者, 我非常喜欢随时查看我的硬盘存储器的使用情况. 我也从来没法真正理解为什么在 Windows 上我们非得打开资源管理器来查看电脑的基本信息。在 Linux 上, 我们可以使用如下命令:
$ df -h
![](https://farm4.staticflickr.com/3858/14768828496_c8a42620a3_z.jpg)
该命令可显示电脑上每一分卷的大小、 已使用空间、 可用空间、 已使用空间百分比和挂载点。 注意, 我们必须使用 "-h" 选项使得所有数据以可读形式显示(使用 GiB 而不是 KiB)。 但你可以使用 [dfc][1] 来完全替代 df 它不需要任何额外的选项就可以得到 df 命令所显示的内容, 并且会为每个设备绘制彩色的使用情况图, 因此可读性会更强。
![](https://farm6.staticflickr.com/5594/14791468572_a84d4b6145_z.jpg)
另外, 你可以使用 "-q" 选项将各分卷排序, 使用 "-u" 选项指定你希望使用的单位, 甚至可以使用 "-e" 选项来获得 csv 或者 html 格式的输出.
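顺带一提,在装上 dfc 之前,普通的 GNU df 也能通过 --output 选项自定义输出列,可以作为一个对照示例:

```shell
# 只显示根文件系统的来源、容量、已用、可用、使用率和挂载点
# --output 是 GNU coreutils 的 df 提供的选项,可与 -h 组合使用
df -h --output=source,size,used,avail,pcent,target /
```

当然,它没有 dfc 的彩色使用情况图, 只是让默认输出更可控一些。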
### dog ###
Dog 比 cat 好, 至少这个程序自己是这么宣称的。 你应该相信它一次。 所有 cat 命令能做的事, [dog][2] 都做的更好。 除了仅仅能在控制台上显示一些文本流之外, dog 还可以对其进行过滤。 例如, 你可以使用如下语法来获得网页上的所有图片:
$ dog --images [URL]
![](https://farm6.staticflickr.com/5568/14811659823_ea8d22d045_z.jpg)
或者是所有链接:
dog --links [URL]
![](https://farm4.staticflickr.com/3902/14788690051_7472680968_z.jpg)
另外, dog 命令还可以处理一些其他的小任务, 比如全部转换为大写或小写, 使用不同的编码, 显示行号和处理十六进制文件。 总之, dog 是 cat 的必备替代品。
### advcp ###
一个 Linux 中最基本的命令就是复制命令: cp。 它几乎和 cd 命令地位相同。 然而, 它的输出非常少。 你可以使用 verbose 模式来实时查看正在被复制的文件, 但如果一个文件非常大的话, 你看着屏幕等待却完全不知道后台在干什么。 一个简单的解决方法是加上一个进度条: 这正是 advcp (advanced cp 的缩写) 所做的! advcp 是 [GNU coreutils][4] 的一个 [补丁版本][3] 它提供了 acp 和 amv 命令, 即"高级"的 cp 和 mv 命令. 使用语法如下:
$ acp -g [file] [copy]
它把文件复制到另一个位置, 并显示一个进度条。
![](https://farm6.staticflickr.com/5588/14605117730_fe611fc234_z.jpg)
我还建议在 .bashrc 或 .zshrc 中设置如下命令别名:
alias cp="acp -g"
alias mv="amv -g"
(译者注: 原文给出的链接已貌似失效, 我写了一个可用的安装脚本放在了我的 [gist](https://gist.github.com/b978fc93b62e75bfad9c) 上, 用的是 AUR 里的 [patch](https://aur.archlinux.org/packages/advcp)。)
### The Silver Searcher ###
[the silver searcher][5] 这个名字听起来很不寻常(银搜索... 它是一款设计用来替代 grep 和 [ack][6] 的工具。 The silver searcher 在文件中搜索你想要的部分, 它比 ack 要快, 而且能够忽略一些文件而不像 grep 那样。(译者注: 原文的意思貌似是 grep 无法忽略一些文件, 但 grep 有类似选项) the silver searcher 还有一些其他的功能,比如彩色输出, 跟随软连接, 使用正则表达式, 甚至是忽略某些模式。
![](https://farm4.staticflickr.com/3876/14605308117_f966c77140_z.jpg)
作者在开发者主页上提供了一些搜索速度的统计数字, 如果它们的确是真的的话, 那是非常可观的。 另外, 你可以把它整合到 Vim 中, 用一个简洁的命令来调用它。 如果要用两个词来概括它, 那就是: 智能、快速。
### plowshare ###
所有命令行的粉丝都喜欢使用 wget 或其他对应的替代品来从互联网上下载东西。 但如果你使用许多文件分享网站, 像 mediafire 或者 rapidshare。 你一定很乐意了解一款专门为这些网站设计的对应的程序, 叫做 [plowshare][7]。 安装成功之后, 你可以使用如下命令来下载文件:
$ plowdown [URL]
或者是上传文件:
$ plowup [website name] [file]
前提是如果你有那个文件分享网站的账号的话。
最后, 你可以获取分享文件夹中的一系列文件的链接:
$ plowlist [URL]
或者是文件名、 大小、 哈希值等等:
$ plowprobe [URL]
对于那些熟悉这些服务的人来说, plowshare 还是缓慢而令人难以忍受的 jDownloader 的一个很好的替代品。
### htop ###
如果你经常使用 top 命令, 很有可能你会喜欢 [htop][8] 命令。 top 和 htop 命令都能对正在运行的进程提供了实时查看功能, 但 htop 还拥有一系列 top 命令所没有的人性化功能。 比如, 在 htop 中, 你可以水平或垂直滚动进程列表来查看每个进程的完整命令名, 还可以使用鼠标点击和方向键来进行一些基本的进程操作(比如 kill、 (re)nice 等),而不用输入进程标识符。
![](https://farm6.staticflickr.com/5581/14819141403_6f2348590f_z.jpg)
### mtr ###
系统管理员的一个基本的网络诊断工具traceroute可以用于显示从本地网络到目标网络的网络第三层协议的路由。mtr即“My Traceroute”的缩写继承了强大的traceroute功能并集成了 ping 的功能。当发现了一个完整的路由时mtr会显示所有的中继节点的 ping 延迟的统计数据,对网络延迟的定位非常有用。虽然也有其它的 traceroute的变体tcptraceroute 或 traceroute-nanog但是我相信 mtr 是traceroute 工具里面最实用的一个增强工具。
![](https://farm4.staticflickr.com/3884/14783092046_b3a90ab462_z.jpg)
总的来说, 这些十分有效的基本命令行的替代工具就像那些有用的小珍珠一样, 它们并不是那么容易被发现, 但当一旦你找到一个, 你就会惊讶你是如何忍受这么长没有它的时间! 如果你还知道其他的与上面描述相符的工具, 请在评论中分享给我们。
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/07/better-alternatives-basic-command-line-utilities.html
作者:[Adrien Brochard][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://projects.gw-computing.net/projects/dfc
[2]:http://archive.debian.org/debian/pool/main/d/dog/
[3]:http://zwicke.org/web/advcopy/
[4]:http://www.gnu.org/software/coreutils/
[5]:https://github.com/ggreer/the_silver_searcher
[6]:http://xmodulo.com/2014/01/search-text-files-patterns-efficiently.html
[7]:https://code.google.com/p/plowshare/
[8]:http://hisham.hm/htop/

View File

@ -1,12 +1,12 @@
在命令行中管理 Wifi 连接
================================================================================
无论何时要安装一款新的 Linux 发行系统,一般的建议都是让您通过有线连接来接到互联网的。这主要的原因有两条:第一,您的无线网卡也许安装的驱动不正确而不能用;第二,如果您是从命令行中来安装系统的,管理 WiFi 就非常可怕。我总是试图避免在命令行中处理 WiFi 。但 Linux 的世界,应具有无所畏惧的精神。如果您不知道怎样操作,您需要继续往下来学习之,这就是写这篇文章的唯一原因。所以我迫自己学习如何在命令行中管理 WiFi 连接。
无论何时要安装一款新的 Linux 发行系统,一般的建议都是让您通过有线连接来接到互联网的。这主要的原因有两条:第一,您的无线网卡也许安装的驱动不正确而不能用;第二,如果您是从命令行中来安装系统的,管理 WiFi 就非常可怕。我总是试图避免在命令行中处理 WiFi 。但 Linux 的世界,应具有无所畏惧的精神。如果您不知道怎样操作,您需要继续往下来学习之,这就是写这篇文章的唯一原因。所以我迫使自己学习如何在命令行中管理 WiFi 连接。
通过命令行来设置连接到 WiFi 当然有很多种方法,但在这篇文章里,也是一个建议,我将会作用最基本的方法:那就是使用在任何发布版本中都有的包含在“默认包”里的程序和工具。或者我偏向于使用这一种方法。使用此方法显而易见的好处是这个操作过程能在任意有 Linux 系统的机器上复用。不好的一点是它相对来说比较复杂。
通过命令行来设置连接到 WiFi 当然有很多种方法,但在这篇文章里,同时也是一个建议,我使用最基本的方法:那就是使用在任何发布版本中都有的包含在“默认包”里的程序和工具。或者我偏向于使用这一种方法。使用此方法显而易见的好处是这个操作过程能在任意有 Linux 系统的机器上复用。不好的一点是它相对来说比较复杂。
首先,我假设您们都已经正确安装了无线网卡的驱动程序。没有这前提,后续的一切都如镜花水月。如果您你机器确实没有正确安装上,您应该看看关于您的发布版本的维基和文档。
然后您就可以用如下命令来检查是哪一个接口来支持无线连接的
然后您就可以用如下命令来检查是哪一个接口来支持无线连接的
$ iwconfig
@ -24,21 +24,21 @@
![](https://farm4.staticflickr.com/3847/14909117931_e2f3d0feb0_z.jpg)
根据扫描出的结果,可以得到网络的名字(它的 SSID它的信息强度以及它使用的是哪个安全加密的WEP、WPA/WPA2。从此时起将会分成两条路线情况很好的和容易的以及情况稍微复杂的。
根据扫描出的结果,可以得到网络的名字(它的 SSID它的信息强度以及它使用的是哪个安全加密的WEP、WPA/WPA2。从此时起将会分成两条路线情况很好、很容易的以及情况稍微复杂的。
如果您想连接的网络是没有加密的,您可以用下面的命令直接连接:
$ sudo iw dev wlan0 connect [network SSID]
$ sudo iw dev wlan0 connect [网络 SSID]
如果网络是用 WEP 加密的,也非常容易:
$ sudo iw dev wlan0 connect [network SSID] key 0:[WEP key]
$ sudo iw dev wlan0 connect [网络 SSID] key 0:[WEP 密钥]
但网络使用的是 WPA 或 WPA2 协议的话,事情就不好办了。这种情况,您就得使用叫做 wpa_supplicant 的工具,它默认是没有启用的。需要修改 /etc/wpa_supplicant/wpa_supplicant.conf 文件,增加如下行:
但网络使用的是 WPA 或 WPA2 协议的话,事情就不好办了。这种情况,您就得使用叫做 wpa_supplicant 的工具,它默认是没有的。然后需要修改 /etc/wpa_supplicant/wpa_supplicant.conf 文件,增加如下行:
network={
ssid="[network ssid]"
psk="[the passphrase]"
ssid="[网络 ssid]"
psk="[密码]"
priority=1
}
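写好 wpa_supplicant.conf 之后,接下来通常是用它建立连接并获取 IP 地址。下面是一个操作示例(假设无线接口名为 wlan0实际接口名请以 iwconfig 的输出为准,且需要 root 权限和真实的无线网卡):

```shell
# -B 表示在后台运行,-i 指定无线接口,-c 指定配置文件
sudo wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf

# 关联上接入点之后,通过 DHCP 获取 IP 地址
sudo dhclient wlan0
```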

View File

@ -1,67 +1,70 @@
Ubunto可以实现这功能吗回答4个新用户最常问的问题
Ubuntu 有这功能吗回答4个新用户最常问的问题
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/08/Screen-Shot-2014-08-13-at-14.31.42.png)
**在谷歌中输入“Can Ubunt[u]”,一系列的自动建议会展现在你面前。这些建议都是根据用户最近最频繁的搜索而形成的。**
对于Linux老用户来说他们都胸有成竹的回答这些问题。但是对于新用户或者那些还在探索类似Ubuntu是否是值得分配的人,他们不是十分清楚这些答案。这都是中肯,真实而且是基本的问题。
对于Linux老用户来说他们都胸有成竹的回答这些问题。但是对于新用户或者那些还在探索类似Ubuntu这样的发行版是否适合的人来说,他们不是十分清楚这些答案。这都是中肯,真实而且是基本的问题。
所以在这篇文章中我将会去回答4个最常被搜索到的“Can Ubuntu...?”问题。
### Ubuntu可以取代Windows吗###
![Windows isnt to everyones tastes — or needs](http://www.omgubuntu.co.uk/wp-content/uploads/2014/07/windows-9-desktop-rumour.png)
Windows 并不是每个人都喜欢 或者说是必须的。
是的。Ubutu和其他Linux发行版是可以安装到任何一台有能力运行微软系统的电脑。
*Windows 并不是每个人都喜欢或都必须的*
无论你觉得 **应不应该** 取代它,不变的是,这取决于你自己的需求。
是的。Ubuntu和其他Linux发行版可以安装到任何一台能够运行微软Windows系统的电脑上。
无论你觉得**应不应该**取代它,要不要替换只取决于你自己的需求。
例如你上大学所需的软件也许只能运行在Windows上。暂时而言你是不需要完全更换你的系统的。对于工作也是同样的道理。如果你工作所用到的软件只有微软Office、Adobe Creative Suite 或者某个AutoCAD应用程序就不建议你更换系统坚持使用你现在的系统就足够了。
但是对于那些用Ubuntu完全取代微软的我们Ubuntu 提供一个安全的桌面工作环境。这个桌面工作环境可以运行与支持很广的硬件环境。基本上,每个东西都有软件的支持,从办公套件到网页浏览器,视频应用程序,音乐应用程序到游戏。
但是对于那些用Ubuntu完全取代微软系统的我们Ubuntu 提供一个安全的桌面工作环境。这个桌面工作环境可以运行与支持很广的硬件环境。基本上,每个东西都有软件的支持,从办公套件到网页浏览器,视频应用程序,音乐应用程序到游戏。
### Ubuntu 可以运行 .exe文件吗###
![你可以在Ubuntu运行一些Windows应用程序。](http://www.omgubuntu.co.uk/wp-content/uploads/2013/01/adobe-photoshop-cs2-free-linux.png)
你可以在Ubuntu运行一些Windows应用程序
是可以的尽管这些程序不是一步安装到位或者不能保证安装成功。这是因为这些软件版本本来就是在Windows下运行的。 这些程序本来就与其他桌面操作系统不兼容包括Mac OS X 或者 Android (安卓系统)。
*你可以在Ubuntu运行一些Windows应用程序*
那些专门为Ubuntu和其他Linux发行版本的软件安装包都是带有“.deb”的文件后缀名。它们的安装过程与安装 .exe 的程序是一样的 -双击安装包,然后根据屏幕提示完成安装。
是可以的尽管这些程序不是一步到位或者不能保证运行成功。这是因为这些软件原本就是在Windows下运行的本来就与其他桌面操作系统不兼容包括Mac OS X 或者 Android (安卓系统)。
但是Linux是很多样化的。 使用一个名为"Wine"的兼容层,可以运行许多当下很流行的应用程序。 (Wine不是一个模拟器但是简单来讲是一个速记本。这些程序不会像在Windows下运行得那么顺畅或者有着出色的用户界面。然而它足以满足日常的工作要求。
那些专门为Ubuntu和其他 Debian 系列的 Linux 发行版本)的软件安装包都是带有“.deb”的文件后缀名。它们的安装过程与安装 .exe 的程序是一样的 -双击安装包,然后根据屏幕提示完成安装。 LCTT 译注RedHat 系统采用.rpm 文件,其它的也有各种不同的安装包格式,等等,作为初学者,你可以当成是各种压缩包格式来理解)
一些很出名的Windows软件是可以通过Wine来运行在Ubuntu操作系统上这包括老版本的Photoshop和微软办公室软件。 有关兼容软件的列表 [参照Wine应用程序数据库][1].
不过Linux的办法很多。通过一个名为“Wine”的兼容层可以运行许多当下很流行的Windows应用程序。Wine并不是一个模拟器但简单来看可以把它当成模拟器来理解。这些程序不会像在Windows下运行得那么顺畅或者有着出色的用户界面。然而它足以满足日常的工作要求。
一些很出名的Windows软件是可以通过Wine来运行在Ubuntu操作系统上这包括老版本的Photoshop和微软办公室软件。 有关兼容软件的列表,[参照Wine应用程序数据库][1]。
### Ubuntu会有病毒吗###
![它可能有错误,但是它并没有病毒](http://www.omgubuntu.co.uk/wp-content/uploads/2014/04/errors.jpg)
它可能有错误,但是它并有病毒
*它可能有错误,但是它并有病毒*
理论上,它会有病毒。但是,实际上它没有。
Linux发行版本是建立在一个病毒蠕虫隐匿程序都很难被安装运行或者造成很大影响的环境之下的。
例如,很多应用程序都是在没有特别管理权限要求,以普通用户权限运行的。病毒访问系统关键部分的请求也是需要用户管理权限的。很多软件的提供都是从那些维护良好的而且集中的资源库例如Ubuntu软件中心而不是一些不知名的网站。 由于这样的管理使得安装一些受感染的软件的几率可以忽略不计。
例如,很多应用程序都是以普通用户权限运行的,没有特别的管理权限要求。病毒要访问系统的关键部分,也需要用户授予管理权限。很多软件都由那些维护良好而且集中的资源库例如Ubuntu软件中心提供而不是一些不知名的网站。这样的管理使得安装到受感染软件的几率可以忽略不计。
你应不应该在Ubuntu系统安装杀毒软件这取决于你自己。为了自己的安心或者如果你经常通过Wine来使用Windows软件或者双系统你可以安装ClamAV。它是一个免费的开源的病毒扫描应用程序。你可以在Ubuntu软件中心找到它。
你可以在Ubuntu维基百科了解更多关于病毒在Linux或者Ubuntu的信息。 [Ubuntu 维基百科][2].
你可以在Ubuntu维基百科了解更多关于病毒在Linux或者Ubuntu的信息。 [Ubuntu 维基百科][2]
### 在Ubuntu上可以玩游戏吗###
![Steam有着上百个专门为Linux设计的高质量游戏。](http://www.omgubuntu.co.uk/wp-content/uploads/2012/11/steambeta.jpg)
Steam有着上百个专门为Linux设计的高质量游戏。
当然可以Ubuntu有着多样化的游戏从传统简单的2D象棋拼字游戏和扫雷游戏到很现代化AAA级别的对显卡要求强的游戏。
*Steam有着上百个专门为Linux设计的高质量游戏*
你首先去到 **Ubuntu 软件中心**。这里你会找到很多免费的开源的和付钱的游戏包括广受好评的独立制作游戏像World of Goo 和Braid。当然也有其他传统游戏的提供例如Pychess(国际象棋)four-in-a-row和Scrabble clones猜字拼字游戏
当然可以Ubuntu有着多样化的游戏从传统简单的2D象棋拼字游戏和扫雷游戏到很现代化的AAA级别的要求显卡很强的游戏
对于游戏狂热爱好者,你可以点击**Steam for Linux**. 在这里你可以找到各种这样最新最好玩的游戏
你首先可以去 **Ubuntu 软件中心**。这里你会找到很多免费的开源的和收费的游戏包括广受好评的独立制作游戏像World of Goo 和Braid。当然也有其他传统游戏的提供例如Pychess(国际象棋)four-in-a-row四子棋和Scrabble clones猜字拼字游戏
另外,记得留意这个网站 [Humble Bundle][3]。这些“只买你想要的”的套餐只会持续每个月里面的两周。作为游戏平台它是Linux特别友好的支持者。因为每当一些新游戏出来的时候它都保证可以在Linux下搜索到。
对于游戏狂热爱好者,你可以安装**Steam for Linux**。在这里你可以找到各种这样最新最好玩的游戏。
另外,记得留意这个网站:[Humble Bundle][3]。每个月都会有两周的这种“只买你想要的”的套餐。作为游戏平台它是对Linux特别友好的支持者。因为每当一些新游戏出来的时候它都保证可以在Linux下搜索到。
--------------------------------------------------------------------------------
@ -69,7 +72,7 @@ via: http://www.omgubuntu.co.uk/2014/08/ubuntu-can-play-games-replace-windows-qu
作者:[Joey-Elijah Sneddon][a]
译者:[Shaohao Lin](https://github.com/shaohaolin)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
Linux FAQ -- 如何修复“X11 forwarding request failed on channel 0”错误
Linux有问必答:如何修复“X11 forwarding request failed on channel 0”错误
================================================================================
> **问题**: 当我尝试使用SSH的X11转发选项连接到远程主机时, 我在登录时遇到了一个 "X11 forwarding request failed on channel 0" X11 转发请求在通道0上失败的错误。 我为什么会遇到这个错误,并且该如何修复它
@ -26,9 +26,9 @@ X11客户端不能正确处理X11转发这会导致报告中的错误。要
$ sudo systemctl restart ssh.service (Debian 7, CentOS/RHEL 7, Fedora)
$ sudo service sshd restart (CentOS/RHEL 6)
### 方案 ###
### 方案 ###
如果远程主机的SSH服务禁止了IPv6,那么X11转发失败的错误也有可能发生。要解决这个情况下的错误。打开/etc/ssh/sshd配置文件打开"AddressFamily all" (如果有的话的注释。接着加入下面这行。这会强制SSH服务只使用IPv4而不是IPv6。
如果远程主机的SSH服务禁止了IPv6那么X11转发失败的错误也有可能发生。要解决这个情况下的错误。打开/etc/ssh/sshd配置文件取消对"AddressFamily all" (如果有这条的话的注释。接着加入下面这行。这会强制SSH服务只使用IPv4而不是IPv6。LCTT 译注此处恐有误AddressFamily 没有 all 这个参数,而 any 代表同时支持 IPv6和 IPv4以此处的场景而言应该是关闭IPv6支持只支持 IPv4所以此处应该是“注释掉 AddressFamily any”才对。
$ sudo vi /etc/ssh/sshd_config
@ -43,7 +43,7 @@ X11客户端不能正确处理X11转发这会导致报告中的错误。要
via: http://ask.xmodulo.com/fix-broken-x11-forwarding-ssh.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,107 @@
如何开始一个开源项目
================================================================================
> 循序渐进的指导
**你有这个问题**:你已经权衡了[开源代码的优劣势][1],你也已经知道[你的软件需要成为一个开源项目][2],但是,你不知道怎么做好的开源项目。
当然,你也许已经知道[如何创建Github帐号并开始][3],但这其实是做开源时比较简单的部分。真正难的部分是如何让足够多的人关注你的项目,并为你的项目做贡献。
![](http://a4.files.readwrite.com/image/upload/c_fit,q_80,w_630/MTE5NDg0MDYxMTg2Mjk1MzEx.jpg)
接下来的原则会指导你构建和发布其他人愿意关注的代码。
### 基本原则 ###
选择开源可能有许多原因。也许你希望吸引一个社区来帮助编写你的代码。也许,[众所周知][4],你明白“开源是一个开发小团队内部编写代码的倍增器”。
或者你只是认为这是必须做的事,[如同英国政府一样][5]。
无论何种原因为了开源能够成功你必须为将来使用这个软件的人们做好周密的规划。如同[我在2005年写的][6]如果你“需要大量的人做贡献bug 修复、扩展等等)”,那么你需要“写好文档,使用易于接受的编程语言,并采用模块化的架构”。
对了,你也需要写人们在乎的软件。
想想你每天依靠的那些技术操作系统、Web 应用框架、数据库等等。远离像航天这样特殊行业的小众技术,能让开源拥有更多的可能性,让外部的人更容易产生兴趣并做出贡献。应用越广泛的技术,越能找到更多的贡献者和用户。
总的来说,任何成功的开源项目有以下共同点:
1. 最佳的时机(解决市场实际需求)
2. 一个健壮的团队,包括开发者和非开发者
3. 一个易于参与的结构(更多详见下文)
4. 模块化的代码,使新贡献者更容易找到项目中待修补的部分去贡献,而不是强迫他们理解庞大代码的每一部分
5. 可以广泛应用的代码(即使流行范围较窄,也比一个“小而又小”的生态更吸引人)
6. 良好的初始源码如果你放垃圾在Github上你也只会得到垃圾回报
7. 一个宽松的许可证我[个人更爱Apache型的许可证][7]因为它让开发者采用时障碍最低当然许多成功的项目如Linux和MySQL使用GPL许可证也有很棒的效果
上述几项中,成功地邀请到参与者是一个项目最难的部分,因为这无关代码,而关乎人。
### 开源不单是一个许可证 ###
今年,我读到的最棒的文章之一来自 Vitorio Miliano[@vitor_io][8]),他是来自德州奥斯丁的用户体验和交互设计师。[Miliano][9]指出,任何不参与你的项目的人都是“外行”,无论他们的技术能力如何,哪怕他们懂一点代码也一样。
所以他认为,你的工作是让人们能简单地加入进来并为你的代码做出贡献。在阐述如何让非程序员参与到开源项目中时,他列出了项目领导者需要提供的几样东西,以便有效地吸引任何技术水平的人(包括不懂技术的人)参与开源项目:
> 1. 一种方法去了解你的项目价值
>
> 2. 一种方法去了解他们可以为项目提供的价值
>
> 3. 一种方法去了解他们可以从贡献代码获得的价值
>
> 4. 一种方法去了解贡献流程,端到端
>
> 5. 贡献机制适用于现有的工作流
项目领导者经常想把精力集中于上述的第五步却不提供理解第1到4步的路径。如果潜在的贡献者不能认同“为什么”那么“如何”参与就变得不重要了。
Miliano 写道,至关重要的一点是,为项目写一份通俗易懂的简介很有价值,因为它随时都在向每一个人展示项目的可及性和包容性。他断言,这还带来额外的好处:文档和其他介绍性的内容也会因此变得通俗易懂。
关于第二点无论是程序员还是非程序员都需要能够明白你到底需要什么这样他们才能认识到自己可以朝哪个方向做贡献。有时就像MongoDB解决方案架构师[Henrik Ingo告诉我][10]的那样,“一个聪明的人贡献了很棒的代码,但是项目成员不能理解它”。不过,只要组织承认这样的贡献并且花功夫去理解它,这就不算一个糟糕的问题。
但是不会经常发生。
### 你真的想领导一个开源项目吗? ###
许多开源项目的领导者嘴上提倡包容性,但他们实际上毫无包容性可言。如果你不想要人们做贡献,就不要假装开源。
是的有时这是老生常谈的话题。就像HackerNews最近的报道[一个开发者的开发工作][11]。
> 小项目可以得到很多,基本不需要很多人合作来完成。我看到了他们的进步,但是我没有看到我自己的进步:如果我帮助了他们,显然,如果我花费了有限的时间在与那些计算机科学的硕士管理合作上,而没有参与编码,这不是我想要的。所以我忽略了他们。
这是一个保持理智的好方法,但这种态度并不利于项目获得广泛的参与和分享。
如果你确实不关心非程序员的贡献,无论是设计、文档还是别的什么,那么请事先说明。再次强调,如果是这样,你的项目就算不上真正的开源项目。
当然被排斥的感觉并不总是符合事实。就像ActiveState的副总裁Bernard Golden告诉我的“一些想要参与的开发人员会对现有开发团体的‘小集团’氛围感到畏惧虽然这种感觉不一定正确。”
现在就去了解开发人员为什么要贡献,并邀请他们参与开发,这意味着你的开源项目会获得更多投入,也能生存得更长久。
图片由[Shutterstock][12]提供
--------------------------------------------------------------------------------
via: http://readwrite.com/2014/08/20/open-source-project-how-to
作者:[Matt Asay][a]
译者:[Vic___/VicYu](http://www.vicyu.net)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://readwrite.com/author/matt-asay
[1]:http://readwrite.com/2014/07/07/open-source-software-pros-cons
[2]:http://readwrite.com/2014/08/15/open-source-software-business-zulily-erp-wall-street-journal
[3]:http://www.cocoanetics.com/2011/01/starting-an-opensource-project-on-github/
[4]:http://werd.io/2014/the-roi-of-building-open-source-software
[5]:https://www.gov.uk/design-principles
[6]:http://asay.blogspot.com/2005/09/so-you-want-to-build-open-source.html
[7]:http://www.cnet.com/news/apache-better-than-gpl-for-open-source-business/
[8]:https://twitter.com/vitor_io
[9]:http://opensourcedesign.is/blogging_about/import-designers/
[10]:https://twitter.com/h_ingo/status/501323333301190656
[11]:https://news.ycombinator.com/item?id=8122814
[12]:http://www.shutterstock.com/

View File

@ -2,7 +2,7 @@
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/09/texmaker_Ubuntu.jpeg)
[LaTeX][1]是一种文本标记语言,也可以说是一种文件准备系统。在很多大学或者机构中普遍作为一种标准来书写专业的科学文献,毕业论文或其他类似的文档。在这篇文章中我们会看到如何在Ubuntu 14.04中使用LaTeX。
[LaTeX][1]是一种文本标记语言,也可以说是一种文档编撰系统。在很多大学或者机构中普遍作为一种标准来书写专业的科学文献、毕业论文或其他类似的文档。在这篇文章中我们会看到如何在Ubuntu 14.04中使用LaTeX。
### 在 Ubuntu 14.04 或 Linux Mint 17 中安装 Texmaker 来使用LaTeX
@ -24,11 +24,11 @@
- [下载Texmaker编辑器][3]
你通过链接下载到的是一个.deb包因此你在一些像Linux MintElementary OSPinguy OS等等类Debain的发行版中可以使用相同的安装方式。
你通过上述链接下载到的是一个.deb包因此你在一些像Linux MintElementary OSPinguy OS等等类Debain的发行版中可以使用相同的安装方式。
如果你想使用像Github类型的markdown编辑器你可以试试[Remarkable编辑器][4]。
如果你想使用像Github的markdown编辑器你可以试试[Remarkable编辑器][4]。
希望Texmaker能够在Ubuntu和Linux Mint中帮到你
希望Texmaker能够在Ubuntu和Linux Mint中帮到你
--------------------------------------------------------------------------------

View File

@ -1,8 +1,8 @@
Linux有问必答——如何扩展XFS文件系统
Linux有问必答如何扩展XFS文件系统
================================================================================
> **问题**我的磁盘上有额外的空间所以我想要扩展其上创建的现存的XFS文件系统以完全使用额外空间。怎样才是扩展XFS文件系统的正确途径
XFS是一个开源的GPL子文件系统,最初由硅谷图形开发,现在被大多数的Linux发行版都支持。事实上XFS已被最新的CentOS/RHEL 7采用成为其默认的文件系统。在其众多的特性中包含了“在线调整大小”这一特性使得现存的XFS文件系统在被挂载时可以进行扩展。然而对于XFS文件系统的缩减确实不被支持的
XFS是一个开源的GPL日志文件系统最初由硅谷图形SGI开发现在大多数的Linux发行版都支持。事实上XFS已被最新的CentOS/RHEL 7采用成为其默认的文件系统。在其众多的特性中包含了“在线调整大小”这一特性使得现存的XFS文件系统在已经挂载的情况下可以进行扩展。然而对于XFS文件系统的**缩减**却还没有支持。
要扩展一个现存的XFS文件系统你可以使用命令行工具xfs_growfs这在大多数Linux发行版上都默认可用。由于XFS支持在线调整大小目标文件系统可以挂载也可以不挂载。
@ -24,7 +24,7 @@ XFS是一个开源的GPL日子文件系统最初由硅谷图形开发
![](https://farm6.staticflickr.com/5569/14914950529_ddfb71c8dd_z.jpg)
注意当你扩展一个现存的XFS文件系统时必须准备事先添加用于XFS文件系统扩展的空间。这虽然是十分明了的事但是如果在潜在的分区或磁盘卷上没有空闲空间可用的话xfs_growfs不会做任何事情。同时如果你尝试扩展XFS文件系统大小到超过磁盘分区或卷的大小xfs_growfs将会失败。
注意当你扩展一个现存的XFS文件系统时必须准备事先添加用于XFS文件系统扩展的空间。这虽然是很显然的事但是如果在所在的分区或磁盘卷上没有空闲空间可用的话xfs_growfs就没有办法了。同时如果你尝试扩展XFS文件系统大小到超过磁盘分区或卷的大小xfs_growfs将会失败。
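举个例子,如果 XFS 建立在 LVM 逻辑卷上,一个典型的“先扩卷、再扩文件系统”的流程大致如下(/dev/vg0/lv0 和挂载点 /storage 都是假设的名字,实际操作前请确认卷组有空闲空间):

```shell
# 先为底层逻辑卷增加 10GB 空间(前提是卷组 vg0 中有足够的空闲空间)
sudo lvextend -L +10G /dev/vg0/lv0

# 再把已挂载的 XFS 文件系统在线扩展到卷的全部可用大小
sudo xfs_growfs /storage
```

顺序不能颠倒:没有先扩展底层的卷,xfs_growfs 就没有多余空间可用。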
![](https://farm4.staticflickr.com/3870/15101281542_98a49a7c3a_z.jpg)
@ -33,6 +33,6 @@ XFS是一个开源的GPL日子文件系统最初由硅谷图形开发
via: http://ask.xmodulo.com/expand-xfs-file-system.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
在Ubuntu 14.04中重置Unity和Compiz设置【小贴士】
小技巧:在Ubuntu 14.04中重置Unity和Compiz设置
================================================================================
如果你一直在试验你的Ubuntu系统你可能最终以Unity和Compiz的一片混乱收场。在此贴士中我们将看看怎样来重置Ubuntu 14.04中的Unity和Compiz。事实上全部要做的事仅仅是运行几个命令而已。
@ -34,7 +34,7 @@ via: http://itsfoss.com/reset-unity-compiz-settings-ubuntu-1404/
作者:[Abhishek][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,20 +1,20 @@
在CentOS 7上安装Vmware 10
技巧:在CentOS 7上安装Vmware 10
================================================================================
在CentOS 7上安装Vmware 10.0.3,我将给你们我的经验。通常,这个版本上不能在CentOS 7工作的因为它只能运行在比较低的内核版本3.10上。
在CentOS 7上安装Vmware 10.0.3我来介绍下我的经验。通常这个版本是不能在CentOS 7上工作的因为它只能运行在比3.10更低的内核版本上。
1 - 以正常方式下载并安装(没有问题)。唯一的问题是在后来体验vmware程序的时候。
首先,以正常方式下载并安装(没有问题)。唯一的问题是在后来运行vmware程序的时候。
### 如何修复? ###
**1 进入/usr/lib/vmware/modules/source。**
**1 进入 /usr/lib/vmware/modules/source。**
cd /usr/lib/vmware/modules/source
**2 解压vmnet.tar.**
**2 解压 vmnet.tar.**
tar -xvf vmnet.tar
**3 进入vmnet-only目录。**
**3 进入 vmnet-only 目录。**
cd vmnet-only
@ -54,6 +54,6 @@ via: http://www.unixmen.com/install-vmware-10-centos-7/
作者: M.el Khamlichi
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
Ubuntu 14.04历史文件清理
如何清理 Ubuntu 14.04 的最近打开文件历史列表
================================================================================
这个简明教程面向初学者说明了如何在Ubuntu 14.04中清理最近打开文件的历史记录。
@ -21,6 +21,6 @@ Ubuntu 14.04历史文件清理
via: http://www.ubuntugeek.com/how-to-delete-recently-opened-files-history-in-ubuntu-14-04.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,178 @@
stat -- 获取比 ls 更多的信息
================================================================================
> 厌倦了 ls 命令,并且想查看更多有关你的文件的有趣的信息? 试一试 stat
![](http://www.itworld.com/sites/default/files/imagecache/large_thumb_150x113/stats.jpg)
ls 命令可能是每一个 Unix 使用者第一个学习的命令之一, 但它仅仅显示了 stat 命令能给出的信息的一小部分。
stat 命令从文件的索引节点获取信息。 正如你可能已经了解的那样, 每一个系统里的文件都存有三组日期和时间, 它们包括最近修改时间(即使用 ls -l 命令时显示的日期和时间), 最近状态改变时间(包括对文件重命名)和最近访问时间。
使用长列表模式查看文件信息, 你会看到类似下面的内容:
$ ls -l trythis
-rwx------ 1 shs unixdweebs 109 Nov 11 2013 trythis
使用 stat 命令, 你会看到下面这些:
$ stat trythis
File: `trythis'
Size: 109 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731691 Links: 1
Access: (0700/-rwx------) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-09-09 19:27:58.000000000 -0400
Modify: 2013-11-11 08:40:10.000000000 -0500
Change: 2013-11-11 08:40:10.000000000 -0500
在上面的情形中, 文件的状态改变和文件修改的日期/时间是相同的, 而访问时间则是相当近的时间。 我们还可以看到文件使用了 8 个块, 以及两种格式显示的文件权限 -- 八进制0700格式和 rwx 格式。 在第三行显示的索引节点是 12731691。 文件没有其它的硬链接Links: 1。 而且, 这个文件是一个常规文件。
把文件重命名, 你会看到状态改变时间发生变化。
这里的 ctime 信息, 最早设计用来存储文件的创建create日期和时间 但后来不知道什么时候变为用来存储状态修改change时间。
$ mv trythis trythat
$ stat trythat
File: `trythat'
Size: 109 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731691 Links: 1
Access: (0700/-rwx------) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-09-09 19:27:58.000000000 -0400
Modify: 2013-11-11 08:40:10.000000000 -0500
Change: 2014-09-21 12:46:22.000000000 -0400
改变文件的权限也会改变 ctime 域。
你也可以配合通配符来使用 stat 命令以列出一组文件的状态:
$ stat myfile*
File: `myfile'
Size: 20 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731803 Links: 1
Access: (0640/-rw-r-----) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-08-23 03:00:36.000000000 -0400
Modify: 2014-08-22 12:02:12.000000000 -0400
Change: 2014-08-22 12:02:12.000000000 -0400
File: `myfile2'
Size: 20 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12731806 Links: 1
Access: (0640/-rw-r-----) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-08-23 03:00:36.000000000 -0400
Modify: 2014-08-22 12:03:30.000000000 -0400
Change: 2014-08-22 12:03:30.000000000 -0400
File: `myfile3'
Size: 40 Blocks: 8 IO Block: 262144 regular file
Device: 18h/24d Inode: 12730533 Links: 1
Access: (0640/-rw-r-----) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-08-23 03:00:36.000000000 -0400
Modify: 2014-08-22 12:03:59.000000000 -0400
Change: 2014-08-22 12:03:59.000000000 -0400
如果我们喜欢的话, 我们也可以通过其他命令来获取这些信息。
向 ls -l 命令添加 "u" 选项, 你会看到下面的结果。 注意这个选项会显示最后访问时间, 而添加 "c" 选项则会显示状态改变时间(在本例中, 是我们重命名文件的时间)。
$ ls -lu trythat
-rwx------ 1 shs unixdweebs 109 Sep 9 19:27 trythat
$ ls -lc trythat
-rwx------ 1 shs unixdweebs 109 Sep 21 12:46 trythat
stat 命令也可应用于文件夹。
在这个例子中, 我们可以看到有许多的链接。
$ stat bin
File: `bin'
Size: 12288 Blocks: 24 IO Block: 262144 directory
Device: 18h/24d Inode: 15089714 Links: 9
Access: (0700/drwx------) Uid: ( 263/ shs) Gid: ( 100/ unixdweebs)
Access: 2014-09-21 03:00:45.000000000 -0400
Modify: 2014-09-15 17:54:41.000000000 -0400
Change: 2014-09-15 17:54:41.000000000 -0400
在这里, 我们还可以查看一个文件系统。
$ stat -f /dev/cciss/c0d0p2
File: "/dev/cciss/c0d0p2"
ID: 0 Namelen: 255 Type: tmpfs
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 259366 Free: 259337 Available: 259337
Inodes: Total: 223834 Free: 223531
注意 Namelen (文件名长度)域, 如果文件名长于 255 个字符的话, 你会很幸运地在文件名处看到心形符号!
stat 命令还可以一次显示所有我们想要的信息。 下面的例子中, 我们只想查看文件类型, 然后是硬连接数。
$ stat --format=%F trythat
regular file
$ stat --format=%h trythat
1
在下面的例子中, 我们查看了文件权限 -- 分别以两种可用的格式 -- 然后是文件的 SELinux 安全环境。 最后, 我们可以查看文件的访问时间, 以从 Epoch 开始的秒数表示。
$ stat --format=%a trythat
700
$ stat --format=%A trythat
-rwx------
$ stat --format=%C trythat
(null)
$ stat --format=%X bin
1411282845
下面全部是可用的选项:
%a 八进制表示的访问权限
%A 可读格式表示的访问权限
%b 分配的块数(参见 %B
%B %b 参数显示的每个块的字节数
%d 十进制表示的设备号
%D 十六进制表示的设备号
%f 十六进制表示的 Raw 模式
%F 文件类型
%g 属主的组 ID
%G 属主的组名
%h 硬连接数
%i Inode 号
%n 文件名
%N 如果是符号链接,显示其所链接的文件名
%o I/O 块大小
%s 全部占用的字节大小
%t 十六进制的主设备号
%T 十六进制的副设备号
%u 属主的用户 ID
%U 属主的用户名
%x 最后访问时间
%X 最后访问时间,自 Epoch 开始的秒数
%y 最后修改时间
%Y 最后修改时间,自 Epoch 开始的秒数
%z 最后改变时间
%Z 最后改变时间,自 Epoch 开始的秒数
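上面的文件格式选项可以组合在同一条命令里使用。下面是一个可以直接运行的小例子(示例文件 demo.txt 是现场创建的,并非原文中的文件):

```shell
# 创建一个内容和权限都确定的示例文件
echo "hello" > demo.txt
chmod 644 demo.txt

# 组合多个格式选项:八进制权限(%a、硬连接数%h、字节大小%s、文件名%n
stat --format="%a %h %s %n" demo.txt
# 输出644 1 6 demo.txt
```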
针对文件系统还有如下格式选项:
%a 普通用户可用的块数
%b 文件系统的全部数据块数
%c 文件系统的全部文件节点数
%d 文件系统的可用文件节点数
%f 文件系统的可用节点数
%C SELinux 的安全上下文
%i 十六进制表示的文件系统 ID
%l 文件名的最大长度
%n 文件系统的文件名
%s 块大小(用于更快的传输)
%S 基本块大小(用于块计数)
%t 十六进制表示的文件系统类型
%T 可读格式表示的文件系统类型
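文件系统的格式选项用法与文件选项相同,只是要加上 -f。下面这个小例子查询根文件系统的类型与基本块大小具体输出值依你的系统而异

```shell
# -f 表示查询文件系统而非文件本身
# %T可读格式的文件系统类型%S基本块大小
stat -f --format="%T %S" /
# 输出形如“<文件系统类型> <块大小>”
```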
这些信息都可以得到stat 命令也许可以帮你以稍微不同的角度来了解你的文件。
--------------------------------------------------------------------------------
via: http://www.itworld.com/operating-systems/437351/unix-stat-more-ls
作者:[Sandra Henry-Stocker][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.itworld.com/sandra-henry-stocker

View File

@ -1,12 +1,12 @@
Linux有问必答 -- 如何在CentOS7上改变网络接口名
Linux有问必答如何在CentOS7上改变网络接口名
================================================================================
> **提问**: 在CentOS7我想将分配的网络接口名更改为别的名字。有什么合适的方法来来重命名CentOS或RHEL7的网络接口
传统上Linux的网络接口被枚举为eth[0123...]但这些名称并不一定符合实际的硬件插槽PCI位置USB接口数量等这引入了一个不可预知的命名问题例如由于不确定的设备探测行为这可能会导致不同的网络配置错误例如由无意的接口改名引起的禁止接口或者防火墙旁路。基于MAC地址的udev规则在虚拟化的环境中并不有用这里的MAC地址如端口数量一样无常。
CentOS/RHEL6还推出了[一致和可预测的网络设备命名][1]网络接口的方法。这些特性可以唯一地确定网络接口的名称以使定位和区分设备更容易,并且在这样一种方式下,它随着启动,时间和硬件改变的情况下是持久的。然而这种命名规则并不是默认在CentOS/RHEL6上开启。
CentOS/RHEL6引入了[一致和可预测的网络设备命名][1]网络接口的方法。这些特性可以唯一地确定网络接口的名称以使定位和区分设备更容易,并且在这样一种方式下,无论是否重启机器、过了多少时间、或者改变硬件,其名字都是持久不变的。然而这种命名规则并不是默认在CentOS/RHEL6上开启。
从CentOS/RHEL7起可预见的命名规则变成了默认。根据这一规则接口名称被自动基于固件拓扑结构和位置信息来确定。现在即使添加或移除网络设备接口名称仍然保持固定而无需重新枚举和坏掉的硬件可以无缝替换。
从CentOS/RHEL7起这种可预见的命名规则变成了默认。根据这一规则,接口名称被自动基于固件,拓扑结构和位置信息来确定。现在,即使添加或移除网络设备,接口名称仍然保持固定,而无需重新枚举,和坏掉的硬件可以无缝替换。
* 基于接口类型的两个字母前缀:
* en -- 以太网
@ -14,7 +14,7 @@ CentOS/RHEL6还推出了[一致和可预测的网络设备命名][1]网络接口
* wl -- wlan
* ww -- wwan
*
* Type of names:
* 名字类型:
* b<number> -- BCMA总线核心编号
* ccw<name> -- CCW总线组名
* o<index> -- 板载onboard设备的索引号
@ -43,7 +43,7 @@ CentOS/RHEL6还推出了[一致和可预测的网络设备命名][1]网络接口
![](https://farm4.staticflickr.com/3909/15128981250_72f45633c1_z.jpg)
接下来编辑或创建一个udev的网络命名规则文件/etc/udev/rules.d/70-persistent-net.rules并添加下面一行。更换成你自己的MAC地址和接口。
接下来编辑或创建一个udev的网络命名规则文件/etc/udev/rules.d/70-persistent-net.rules并添加下面一行。更换成你自己的MAC地址08:00:27:a9:7a:e1和接口sushi
$ sudo vi /etc/udev/rules.d/70-persistent-net.rules
@ -62,7 +62,7 @@ CentOS/RHEL6还推出了[一致和可预测的网络设备命名][1]网络接口
via: http://ask.xmodulo.com/change-network-interface-name-centos7.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,4 +1,4 @@
Linux有问必答-- 如何在PDF中嵌入LaTex中的所有字体
Linux有问必答如何在PDF中嵌入LaTex中的所有字体
================================================================================
> **提问**: 我通过编译LaTex源文件生成了一份PDF文档。然而我注意到并不是所有字体都嵌入到了PDF文档中。我怎样才能确保所有的字体嵌入在由LaTex生成的PDF文档中
@ -32,7 +32,7 @@ Linux有问必答-- 如何在PDF中嵌入LaTex中的所有字体
via: http://ask.xmodulo.com/embed-all-fonts-pdf-document-latex.html
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,10 +1,10 @@
如何使用系统定时器
如何使用 systemd 中的定时器
================================================================================
我最近在写一些运行备份的脚本,我决定使用[systemd timers][1]而不是对我而已更熟悉的[cron jobs][2]来管理它们。
我最近在写一些执行备份工作的脚本,我决定使用[systemd timers][1]而不是我更熟悉的[cron jobs][2]来管理它们。
在我使用时,出现了很多问题需要我去各个地方找资料,这个过程非常麻烦。因此,我想要把我目前所做的记录下来,方便自己的记忆,也方便读者不必像我这样,满世界的找资料了。
在我下面提到的步骤中有其他的选择,但是这边是最简单的方法。在此之前,查看**systemd.service**, **systemd.timer**,和**systemd.target**的帮助页面(man),学习你能用它们做些什么。
在我下面提到的步骤中有其他的选择,但是这里是最简单的方法。在此之前,请查看**systemd.service**, **systemd.timer**,和**systemd.target**的帮助页面(man),学习你能用它们做些什么。
### 运行一个简单的脚本 ###
@ -35,9 +35,9 @@ myscript.timer
Description=Runs myscript every hour
[Timer]
# Time to wait after booting before we run first time
# 首次运行要在启动后10分钟后
OnBootSec=10min
# Time between running each consecutive time
# 每次运行间隔时间
OnUnitActiveSec=1h
Unit=myscript.service
@ -48,14 +48,14 @@ myscript.timer
要启用enable并启动start的是timer文件而不是service文件。
# Start timer, as root
# 以 root 身份启动定时器
systemctl start myscript.timer
# Enable timer to start at boot
# 在系统引导起来后就启用该定时器
systemctl enable myscript.timer
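把上面的片段拼起来,一个最小的 service/timer 配对大致如下(这只是按文中描述整理的示意,脚本路径 /usr/local/bin/myscript.sh 是假设的):

```ini
# /etc/systemd/system/myscript.service假设路径
[Unit]
Description=Run myscript

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myscript.sh
```

```ini
# /etc/systemd/system/myscript.timer假设路径
[Unit]
Description=Runs myscript every hour

[Timer]
OnBootSec=10min
OnUnitActiveSec=1h
Unit=myscript.service

[Install]
WantedBy=timers.target
```

之后按正文所说用 systemctl start/enable myscript.timer 启动并启用定时器即可。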
### 在同一个Timer上运行多个脚本 ###
现在我们假设你在相同时间想要运行多个脚本。这种情况,你需要在上面的文件中做适当的修改。
现在我们假设你在相同时间想要运行多个脚本。这种情况,**你需要在上面的文件中做适当的修改**
#### Service 文件 ####
@ -64,9 +64,9 @@ myscript.timer
[Install]
WantedBy=mytimer.target
如果在你的service 文件中有一些规则,确保你使用**Description**字段中的值具体化**After=something.service**和**Before=whatever.service**中的参数。
如果在你的service 文件中有一些依赖顺序,确保你使用**Description**字段中的值具体指定**After=something.service**和**Before=whatever.service**中的参数。
另外的一种选择是(或许更加简单),创建一个包装者脚本来使用正确的规则运行合理的命令并在你的service文件中使用这个脚本。
另外一种选择是(或许更加简单):创建一个包装脚本,按正确的顺序运行各条命令,然后在你的service文件中使用这个脚本。
#### Timer 文件 ####
@ -97,11 +97,11 @@ Good luck.
--------------------------------------------------------------------------------
via: http://jason.the-graham.com/2013/03/06/how-to-use-systemd-timers/#enable--start-1
via: http://jason.the-graham.com/2013/03/06/how-to-use-systemd-timers/
作者Jason Graham
译者:[译者ID](https://github.com/johnhoow)
校对:[校对者ID](https://github.com/校对者ID)
译者:[johnhoow](https://github.com/johnhoow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,24 +1,17 @@
Git Rebase教程 用Git Rebase让时光倒流
================================================================================
![](https://www.gravatar.com/avatar/7c148ace0d63306091cc79ed9d9e77b4?d=mm&s=200)
Christoph Burgdorf自10岁时就是一名程序员他是HannoverJS Meetup网站的创始人并且一直活跃在AngularJS社区。他也是非常了解gti的内内外外在那里他举办一个[thoughtram][1]的工作室来帮助初学者掌握该技术。
下面的教程最初发表在他的[blog][2]。
----------
### 教程: Git Rebase ###
想象一下,你正在开发一个全新的重磅功能。它将会非常棒,但需要一段时间。这几天(也许是几个星期)你一直在做这个。
你的功能分支已经超前master有6个提交了。你是一个优秀的开发人员并做了有意义的语义提交。但有一件事情你开始慢慢意识到这个野兽仍需要更多的时间才能真的做好准备被合并回主分支。
你的功能分支已经超前master有6个提交了。你是一个优秀的开发人员并做了有意义的语义提交。但有一件事情你开始慢慢意识到这个疯狂的东西仍需要更多的时间才能真的做好准备被合并回主分支。
m1-m2-m3-m4 (master)
\
f1-f2-f3-f4-f5-f6(feature)
你也知道的是,一些地方实际上是少耦合的新功能。它们可以更早地合并到主分支。不幸的是,你想将部分合并到主分支的内容存在于你六个提交中的某个地方。更糟糕的是,它也包含了依赖于你的功能分支的之前的提交。有人可能会说,你应该在第一处地方做两次提交,但没有人是完美的。
你也知道,新功能中的一些部分实际上与其他部分耦合不大,它们可以更早地合并到主分支。不幸的是,你想合并到主分支的那部分内容位于你六个提交中的某一个之中。更糟糕的是,那个提交也包含了依赖于功能分支之前提交的改动。有人可能会说,你一开始就应该把它拆成两次提交,但没有人是完美的。
m1-m2-m3-m4 (master)
\
@ -39,11 +32,11 @@ Christoph Burgdorf自10岁时就是一名程序员他是HannoverJS Meetup网
在将工作分成两个提交后我们就可以cherry-pick出前面的部分到主分支了。
原来Git自带了一个功能强大的命令git rebase -i ,它可以让我们这样做。它可以让我们改变历史。改变历史可能会产生问题,并作为一个经验法应尽快避免历史与他人共享。在我们的例子中,虽然我们只是改变我们的本地功能分支的历史。没有人会受到伤害。这这么做了!
原来Git自带了一个功能强大的命令git rebase -i ,它可以让我们这样做。它可以让我们改变历史。改变历史可能会产生问题,根据经验,应该尽量避免改变已经与他人共享的历史。不过在我们的例子中,我们只是改变本地功能分支的历史,没有人会受到伤害。就这么做了!
好吧让我们来仔细看看f3提交究竟修改了什么。原来我们共修改了两个文件userService.js和wishlistService.js。比方说userService.js的更改可以直接合入主分支而wishlistService.js不能。因为wishlistService.js甚至没有在主分支存在。这根据的是f1提交中的介绍
好吧让我们来仔细看看f3提交究竟修改了什么。原来我们共修改了两个文件userService.js和wishlistService.js。比方说userService.js的更改可以直接合入主分支而wishlistService.js不能。因为wishlistService.js甚至不存在在主分支里面。它是f1提交中引入的
>>专家提示即使是在一个文件中更改git也可以搞定。但这篇博客中我们要让事情变得简单
>>专家提示即使是在一个文件中更改git也可以搞定。但这篇博客中我们先简化情况
我们已经建立了一个[公众演示仓库][3]我们将使用这个来练习。为了便于跟踪每一个提交信息的前缀是在上面的图表中使用的假的SHA。以下是git在分开提交f3时的分支图。
@ -51,26 +44,37 @@ Christoph Burgdorf自10岁时就是一名程序员他是HannoverJS Meetup网
现在我们要做的第一件事就是使用git的checkout功能checkout出我们的功能分支。用git rebase -i master开始做rebase。
现在接下来git会用配置的编辑器打开默认为Vim一个临时文件。
现在接下来git会用配置的编辑器打开默认为Vim一个临时文件。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git2.png)
该文件为您提供一些rebase选择它带有一个提示蓝色文字。对于每一个提交我们可以选择的动作有pick、reword、edit、squash、fixup和exec。每一个动作也可以通过它的缩写形式引用p、r、e、s、f和x。描述每一个选项超出了本文范畴所以让我们专注于我们的具体任务。
我们要为f3提交选择编辑选项因此我们把内容改变成这样。
我们要为f3提交选择edit选项因此我们把内容改变成这样。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git3.png)
现在我们保存文件在Vim中是按下<ESC>后输入:wq,最后是按下回车。接下来我们注意到git在编辑选项中选择的提交处停止了rebase。
这意味这git开始为f1、f2、f3生效仿佛它就是常规的rebase但是在f3**之后**停止。事实上,我们可以看一眼停止的地方的日志就可以证明这一点。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git4.jpg)
这意味着git开始依次应用f1、f2、f3就像常规的rebase一样但是在应用f3**之后**停止。事实上,我们可以看一眼停止的地方的日志,就可以证明这一点。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git5.png)
要将f3分成两个提交我们所要做的是将git的指针重置到先前的提交f2同时保持工作目录和现在一样。这就是git reset的混合mixed模式所做的。由于混合模式是git reset的默认模式我们可以直接用git reset HEAD~1。就这么做并在运行后用git status看下发生了什么。
git status告诉我们userService.js和wishlistService.js被修改了。如果我们与行git diff 我们就可以看见在f3里面确切地做了哪些更改。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git6.png)
git status告诉我们userService.js和wishlistService.js被修改了。如果我们运行 git diff 我们就可以看见在f3里面确切地做了哪些更改。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git7.png)
如果我们看一眼日志我们会发现f3已经消失了。
现在我们有了准备提交的先前的f3提交而原先的f3提交已经消失了。记住虽然我们仍旧在rebase的中间过程。我们的f4、f5、f6提交还没有缺失它们会在接下来回来。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git8.png)
现在我们有了先前f3提交的改动可以准备提交而原先的f3提交已经消失了。不过请记住我们仍处在rebase的中间过程我们的f4、f5、f6提交并没有丢失它们稍后就会回来。
让我们创建两个新的提交首先让我们为可以提交到主分支的userService.js创建一个提交。运行git add userService.js 接着运行 git commit -m "f3a: add updateUser method"。
@ -78,27 +82,41 @@ git status告诉我们userService.js和wishlistService.js被修改了。如果
让我们在看一眼日志。
这就是我们想要的除了f4、f5、f6仍旧缺失。这是因为我们仍在rebase交互的中间我们需要告诉git继续rebase。用下面的命令继续git rebase --continue。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git9.png)
这就是我们想要的除了f4、f5、f6仍旧缺失。这是因为我们仍在rebase交互的中间我们需要告诉git继续rebase。用下面的命令继续git rebase --continue。
让我们再次检查一下日志。
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git10.png)
就是这样。我们现在已经得到我们想要的历史了。先前的f3提交现在已经被分割成两个提交f3a和f3b。剩下的最后一件事是cherry-pick出f3a提交到主分支上。
为了完成最后一步我们首先切换到主分支。我们用git checkout master。现在我们就可以用cherry-pick命令来拾取f3a commit了。本例中我们可以用它的SHA值bd47ee1来引用它。
现在f3a这个提交i就在主分支的最上面了。这就是我们需要的
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git11.png)
现在f3a这个提交就在主分支的最上面了。这就是我们需要的
![](https://s3.amazonaws.com/codementor_content/2014-Sep-week3/git12.png)
从这篇文章的篇幅来看这件事似乎要花费很大的功夫但实际上对于一个git高级用户而言这只是一会儿的事。
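上面拆分提交的核心步骤,可以在一个临时仓库里浓缩演示(仓库与文件内容均为演示用的假设,并非原文的示例仓库):

```shell
# 准备一个演示仓库f3 把两个文件的改动挤在了同一个提交里
git init -q demo && cd demo
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "f2: base"
echo u > userService.js
echo w > wishlistService.js
git add . && git -c user.email=a@b -c user.name=demo commit -qm "f3: both changes"

# 混合模式 reset撤销 f3 提交,但把改动保留在工作目录
git reset -q HEAD~1

# 把改动拆成两个提交
git add userService.js
git -c user.email=a@b -c user.name=demo commit -qm "f3a: add updateUser method"
git add wishlistService.js
git -c user.email=a@b -c user.name=demo commit -qm "f3b: wishlist changes"

git log --oneline   # f3 已被拆分为 f3a 和 f3b
```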
>注Christoph目前正在与Pascal Precht写一本关于[Git rebase][4]的书您可以在leanpub订阅它并在准备出版时获得通知。
![](https://www.gravatar.com/avatar/7c148ace0d63306091cc79ed9d9e77b4?d=mm&s=200)
本文作者 Christoph Burgdorf 自10岁起就是一名程序员他是HannoverJS Meetup的创始人并且一直活跃在AngularJS社区。他也非常了解git的方方面面为此他在[thoughtram][1]举办工作坊来帮助初学者掌握这门技术。
本教程最初发表在他的[blog][2]上。
--------------------------------------------------------------------------------
via: https://www.codementor.io/git-tutorial/git-rebase-split-old-commit-master
作者:[cburgdorf][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,91 @@
学习VIM之2014
================================================================================
作为一名开发者,你不应该把时间花费在考虑如何去找你所要编辑的代码上。在我转移到完全使用 VIM 的过程中,感到最痛苦的就是它处理文件的方式。从之前主要使用 Eclipse 和 Sublime Text 过渡到 VIM它没有捆绑一个常驻的文件系统查看器对我造成了不少阻碍而其内建的打开和切换文件的方式总是让我泪流满面。
就这一点而言我非常欣赏VIM文件管理功能的深度。在配置好工作环境之后我处理文件甚至比那些可视化编辑器更高效。因为纯键盘操作可以让我更快地在代码间穿梭。搭建这样的环境需要花费一些时间以及安装几个插件。首先我明白了vim的内建功能只是处理文件的多种选择之一。在这篇文章里我会带你认识vim的文件管理功能以及一些更高级插件的用法。
### 基础篇:打开新文件 ###
学习vim其中最大的一个障碍是缺少可视提示不像现在的GUI图形编辑器当你在终端打开一个新的vim是没有明显的提示去提醒你去走什么所有事情都是靠键盘输入同时也没有更多更好的界面交互vim新手需要习惯如何靠自己去查找一些基本的操作指令。好吧让我开始学习基础吧。
打开新文件的命令是**:e <filename>**,它会打开一个新缓冲区来保存文件内容。如果文件不存在,它会开辟一个缓冲区,来保存对你指定文件的修改。缓冲区是vim的术语意为“保存在内存中的文本块”。缓冲区中的文本不一定关联到已有的文件但每个打开的文件都对应一个缓冲区。
打开文件并修改之后,你可以使用**:w**命令把缓冲区的内容保存到文件里面。如果缓冲区还没有关联文件,或者你想保存到另外一个地方,则需要使用**:w <filename>**来保存到指定位置。
这些是vim处理文件的基本知识很多的开发者都掌握了这些命令这些技巧你都需要掌握。vim提供了很多技巧让人去深挖。
### 缓冲区管理 ###
基础掌握了就让我来说说更多关于缓冲区的东西。vim处理打开的文件与其他编辑器有一点不同打开的文件不会作为标签留在一个可见的地方而是同一时刻只显示一个缓冲区的内容。vim允许你同时打开多个缓冲区有些显示出来另外一些则不会。你需要用**:ls**来查看已经打开的缓冲区,这个命令会列出每个打开的缓冲区及其序号,你可以通过序号用**:b <buffer-number>**来切换,或者使用顺序移动命令 **:bnext** 和 **:bprevious**(也可以使用它们的缩写**:bn**和**:bp**)。
这些命令是vim管理文件缓冲区的基础但我发现它们并不符合我的思维方式我不想关心缓冲区的顺序我只想直接去到某个文件或者回到当前文件。因此虽然了解vim的缓冲区模型很有必要但我并不推荐把这些内置命令作为主要的文件管理方案。不过它们的确是强大可行的选择。
![](http://benmccormick.org/content/images/2014/Jul/skitch.jpeg)
### 分屏 ###
分屏是vim最好用的文件管理功能之一。在vim中你可以将当前窗口分割为两个窗口之后还可以按你喜欢的方式重新调整大小和排列。有时我会同时打开6个文件每个窗口的大小都不相同。
你可以通过命令**:sp <filename>**来新建水平分割窗口,或者用 **:vs <filename>**新建垂直分割窗口。你可以用快捷键把窗口调整到想要的大小老实说这是我少数喜欢用鼠标来做的vim操作因为鼠标能直接给出精确的宽度而不需要去猜。
创建新的分屏后,你需要使用**ctrl-w [h|j|k|l]**在分屏之间切换。这有一点笨拙,但却是很重要、很常用、很高效的操作。如果你经常使用分屏,我建议你在.vimrc中用以下代码将其映射为**ctrl-h**、**ctrl-j**等等。
nnoremap <C-J> <C-W><C-J> "Ctrl-j to move down a split
nnoremap <C-K> <C-W><C-K> "Ctrl-k to move up a split
nnoremap <C-L> <C-W><C-L> "Ctrl-l to move right a split
nnoremap <C-H> <C-W><C-H> "Ctrl-h to move left a split
### 跳转表 ###
分屏是解决多个关联文件同时查看问题,但我们仍然不能解决已打开文件与隐藏文件之间快速移动问题。这时跳转表是一个能够解决的工具。
跳转表是vim中一个略显奇怪却未被充分利用的功能。vim会追踪你的每一次移动以及你正在修改的文件。每次从一个分屏窗口跳到另外一个或者打开一个新缓冲区vim都会往跳转表里添加一条记录。它记录了你去过的所有地方这样你就不需要记住之前的文件在哪里而可以用快捷键快速追溯你的足迹。**Ctrl-o**可以让你回到上一次所在的位置,重复几次就能回到你最初编辑的代码处;你可以使用**ctrl-i**再向前跳转。当你在调试多个文件或在两个文件之间切换时,它能让你极快地移动。
### 插件 ###
如果你是从Sublime Text或者Atom来到vim的那么很有可能你会觉得上面这些内容难懂、可怕而且低效。例如有人会问“Sublime有模糊查找功能为什么我还要输入完整路径才能打开文件”、“没有侧边栏显示目录树我怎样查看项目结构”等等。但vim对此都有解决方案而且不需要破坏vim的核心理念。我只需要经常修改vim配置并添加一些新插件这里有3个有用的插件可以让你像Sublime一样管理文件
- [CtrlP][1] 是一个跟Sublime的"Go to Anything"栏一样模糊查找文件.它快如闪电并且非常可配置性。我使用它主要用来打开文件。我只需知道部分的文件名字不需要记住整个项目结构就可以查找了。
- [The NERDTree][2] 是一个文件管理器插件它复刻了很多编辑器都有的侧边目录树功能。实际上我很少用它对我而言模糊查找更快。但当你接手一个项目想学习项目结构、了解都有哪些文件时它非常方便。NERDTree 可以高度定制安装后它会替代vim内置的目录浏览工具。
- [Ack.vim][3] 是一个专为vim的代码搜索插件它允许你跨项目搜索文本。它封装了Ack 或 Ag 这[两个极其好用的搜索工具][4],允许你在任何时候在你项目之间快速搜索跳转。
凭借vim核心与它的插件生态系统vim提供了足够的工具让你构建想要的工作环境。文件管理是软件开发的核心部分之一你值得按照自己的想法去打磨这种体验。
开始时需要花不短的时间去理解它们,然后在找到感觉舒服的工作流程之后,再往上面添加工具。但这依然值得:你不必绞尽脑汁,就能轻松找到并编辑你的代码。
### 更多插件资源 ###
- [Seamlessly Navigate Vim & Tmux Splits][5] 对每个[tmux][6]用户来说都值得一试它能让你像在vim分屏之间一样无缝地在vim和tmux的分屏之间切换。
- [Using Tab Pages][7] 介绍了vim的标签页tab pages功能。它的名字容易让人误解它并不是文件管理器而更像是多个工作区。vim wiki 网站上的这篇文章对如何使用“tab pages”有很好的概述。
- [Vimcasts: The edit command][8] 一般来说 Vimcasts 是大家学习vim的一个好资源。这个屏幕截图与一些内置工作流程很好地描述了之前说的文件操作方面的知识。
--------------------------------------------------------------------------------
via: http://benmccormick.org/2014/07/07/learning-vim-in-2014-working-with-files/
作者:[Ben McCormick][a]
译者:[haimingfg](https://github.com/haimingfg)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://benmccormick.org/2014/07/07/learning-vim-in-2014-working-with-files/
[1]:https://github.com/kien/ctrlp.vim
[2]:https://github.com/scrooloose/nerdtree
[3]:https://github.com/mileszs/ack.vim
[4]:http://benmccormick.org/2013/11/25/a-look-at-ack/
[5]:http://robots.thoughtbot.com/seamlessly-navigate-vim-and-tmux-splits
[6]:http://tmux.sourceforge.net/
[7]:http://vim.wikia.com/wiki/Using_tab_pages
[8]:http://vimcasts.org/episodes/the-edit-command/
[9]:http://feedpress.me/benmccormick
[10]:http://eepurl.com/WFYon
[11]:http://benmccormick.org/2014/07/14/learning-vim-in-2014-configuring-vim/
[12]:http://benmccormick.org/2014/06/30/learning-vim-in-2014-the-basics/
[13]:http://benmccormick.org/2014/07/02/learning-vim-in-2014-vim-as-language/

View File

@ -1,10 +1,10 @@
使用 GIT 备份 linux 上的网页
使用 GIT 备份 linux 上的网页文件
================================================================================
![](http://techarena51.com/wp-content/uploads/2014/09/git_logo-1024x480-580x271.png)
BUP 并不单纯是 Git, 而是一款基于 Git 的软件. 一般情况下, 我使用 rsync 来备份我的文件, 而且迄今为止一直工作得很好. 唯一的不足就是无法把文件恢复到某个特定的时间点. 因此, 我开始寻找替代品, 结果发现了 BUP, 一款基于 git 的软件, 它将数据存储在一个仓库中, 并且有将数据恢复到特定时间点的选项.
要使用 BUP, 你先要初始化一个空的仓库, 然后备份所有文件. 当 BUP 完成一次备份是, 它会创建一个还原点, 你可以过后还原到这里. 它还会创建所有文件的索引, 包括文件的属性和验校和. 当要进行下一个备份, BUP 会对比文件的属性和验校和, 只保存发生变化的数据. 这样可以节省很多空间.
要使用 BUP, 你先要初始化一个空的仓库, 然后备份所有文件. 当 BUP 完成一次备份时, 它会创建一个还原点, 你可以过后还原到这里. 它还会创建所有文件的索引, 包括文件的属性和校验和. 当要进行下一次备份时, BUP 会对比文件的属性和校验和, 只保存发生变化的数据. 这样可以节省很多空间.
### 安装 BUP (在 Centos 6 & 7 上测试通过) ###
@ -20,7 +20,8 @@ BUP 并不单纯是 Git, 而是一款基于 Git 的软件. 一般情况下, 我
[techarena51@vps ~]$ make test
[techarena51@vps ~]$ sudo make install
对于 debian/ubuntu 用户, 你可以使用 "apt-get build-dep bup". 要获得更多的信心, 可以查看 https://github.com/bup/bup
对于 debian/ubuntu 用户, 你可以使用 "apt-get build-dep bup". 要获得更多的信息, 可以查看 https://github.com/bup/bup
在 CentOS 7 上, 当你运行 "make test" 时可能会出错, 但你可以继续运行 "make install".
第一步时初始化一个空的仓库, 就像 git 一样.
@ -49,7 +50,7 @@ BUP 并不单纯是 Git, 而是一款基于 Git 的软件. 一般情况下, 我
"BUP save" 会把所有内容分块, 然后把它们作为对象储存. "-n" 选项指定备份名.
你可以查看一系列备份和已备份文件.
你可以查看备份列表和已备份文件.
[techarena51@vps ~]$ bup ls
local-etc techarena51 test
@ -88,13 +89,13 @@ BUP 并不单纯是 Git, 而是一款基于 Git 的软件. 一般情况下, 我
唯一的缺点是你不能把文件恢复到另一个服务器, 你必须通过 SCP 或者 rsync 手动复制文件.
通过集成的 web 服务器查看备份
通过集成的 web 服务器查看备份.
bup web
#specific port
bup web :8181
你可以使用 shell 脚本来运行 bup, 并建立一个每日运行的定时任务
你可以使用 shell 脚本来运行 bup, 并建立一个每日运行的定时任务.
#!/bin/bash
@ -103,7 +104,7 @@ BUP 并不单纯是 Git, 而是一款基于 Git 的软件. 一般情况下, 我
BUP 并不完美, 但它的确能够很好地完成任务. 我当然非常愿意看到这个项目的进一步开发, 希望以后能够增加远程恢复的功能.
你也许喜欢阅读 使用[inotify-tools][1], 一篇关于实时文件同步的文章.
你也许喜欢阅读这篇——使用[inotify-tools][1]实时文件同步.
--------------------------------------------------------------------------------
@ -111,7 +112,7 @@ via: http://techarena51.com/index.php/using-git-backup-website-files-on-linux/
作者:[Leo G][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,43 +1,43 @@
Linux日历程序California 0.2 发布了
================================================================================
**随着[上月的Geary和Shotwell的更新][1]非盈利软件套装Yobra又回来了这次带来的是新的[California][2]日历程序的发布。**
**随着[上月的Geary和Shotwell的更新][1]非盈利软件套装Yobra又回来了同时带来了是新的[California][2]日历程序。**
一个合格的桌面日历是工作井井有条(和想要井井有条)的必备工具。[广受欢迎Chrome Web Store上的Sunrise应用][3]的发布意味着选择并不像以前那么少了。California又为这个撑腰了。
一个合格的桌面日历是工作井井有条(以及想要井井有条)的必备工具。[Chrome Web Store上广受欢迎的Sunrise应用][3]的发布让我们的选择比以前更丰富了而California又为之增添了新的生力军
Yorba的Jim Nelson在Yorba博客上写道“发生了很多变化“接着写道“初次发布比我想的加入了更多的特性。”
Yorba的Jim Nelson在Yorba博客上写道“发生了很多变化”,接着写道:“……很高兴地告诉大家,初次发布比我想的加入了更多的特性。”
![California 0.2 Looks Great on GNOME](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/california-point-2.jpg)
California 0.2在GNOME上看上去棒极了。
*California 0.2在GNOME上看上去棒极了。*
最突出的是添加了“自然语言”解析器。这使得添加事件更容易。相反,你可以直接输入“**在下午2点就Nachos会见Sam”接着California就会自动把它安排下接下来的星期一的下午两点而不必你手动输入位的信息日期时间等等
最突出的变化是添加了“自然语言”解析器。这使得添加事件更容易。你可以直接输入“**在下午2点就Nachos会见Sam**”接着California就会自动把它安排在接下来的星期一的下午两点而不必你手动输入额外的信息日期、时间等等LCTT 译注:显然你只能输入英文才行)。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/05/Screen-Shot-2014-05-15-at-21.26.20.png)
当我们在5月份回顾开发版本时这个特性也能工作了甚至修复了一个问题重复事件
这个功能和我们在5月份评估开发版本时一样好用甚至还修复了一个bug事件重复。
要创建一个重复事件比如“每个星期四搜索自己的名字”你需要在日期前包含文字“every”每个。要确保地点也包含在内比如“中午12点和Samba De Amigo在Boston Tea Party喝咖啡”条目中需要有“at”或者“@”。
至于详细信息,我们可以见[GNOME Wiki上的快速添加页面][4]
至于详细信息,我们可以见[GNOME Wiki上的快速添加页面][4]
其他的改变包括:
- 通过‘月’和‘周’查看事件
- 以‘月’和‘周’视图查看事件
- 添加/删除 GoogleCalDAV和web.ics日历
- 改进数据服务器整合
-添加/编辑/啥是拿出远程事件(包括重复事件)
-自然语言计划
-F1在线帮助快捷键
- 新的动画和弹出窗口
- 改进数据服务器整合
- 添加/编辑/删除远程事件(包括重复事件)
- 自然语言安排计划
- 按下F1获取在线帮助
- 新的动画和弹出窗口
### 在Ubuntu 14.10上安装 California 0.2 ###
由于是GNOME 3的程序可以说这下面程序看起来和感受上更好。
作为一个GNOME 3的程序它在 Gnome 3下运行的外观和体验会更好。
Yorba没有忽略Ubuntu用户。他们已经努力也可以说是耐心地地解决导致Ubuntu需要同时安装GTK+和GNOME的主题问题。结果就是在Ubuntu上程序可能看上去有点错位但是同样工作的很好。
不过,Yorba没有忽略Ubuntu用户。他们已经努力也可以说是耐心地地解决导致Ubuntu需要同时安装GTK+和GNOME的主题问题。结果就是在Ubuntu上程序可能看上去有点错位但是同样工作的很好。
California 0.2在[Yorba稳定版软件PPA][5]中可以下载,且只针对Ubuntu 14.10。
California 0.2在[Yorba稳定版软件PPA][5]中可以下载,只用于Ubuntu 14.10。
--------------------------------------------------------------------------------
@ -45,7 +45,7 @@ via: http://www.omgubuntu.co.uk/2014/10/california-calendar-natural-language-par
作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -0,0 +1,54 @@
Linux Kernel 3.17 带来了很多新特性
================================================================================
Linus Torvalds已经发布了最新的稳定版内核3.17。
![](http://www.omgubuntu.co.uk/wp-content/uploads/2011/07/Tux-psd3894.jpg)
Torvalds以他典型的[放任式][1]的口吻在Linux内核邮件列表中解释说
> “过去的一周很平静我对3.17的如期发布没有疑虑(相对于乐观的“我应该早一周发布么”的计划而言)。”
由于假期Linus说他还没有开始合并3.18的改变:
>“我马上要去旅行了- 在我期盼早点发布的时候我希望避免一些事情。这意味着在3.17发布后我不会在下周非常活跃地合并新的东西并且下下周是LinuxCon EU”
### Linux 3.17有哪些新的? ###
最新版本的 Linux 3.17 加入了最新的改进,硬件支持,修复等等。范围从不明觉厉的 - 比如:[memfd 和 文件密封补丁][2] - 到大多数人感兴趣的,比如最新硬件的支持。
下面是这次发布的一些亮点的列表,但它们并不详尽:
- Microsoft Xbox One 控制器支持 (没有震动反馈)
- 额外的Sony SIXAXIS支持改进
- 东芝 “主动防护感应器” 支持
- 新的包括Rockchip RK3288和AllWinner A23 SoC的ARM芯片支持
- 安全计算设备上的“跨线程过滤设置”
- 基于Broadcom BCM7XXX板卡的支持用在不同的机顶盒上
- 增强的AMD Radeon R9 290支持
- Nouveau 驱动改进包括Kepler GPU修复
- 包含Intel Broadwell超级本上的Wildcatpoint Audio DSP音频支持
### 在Ubuntu上安装 Linux 3.17 ###
虽然被列为稳定版,但是目前对于大多数人而言只有很少的功能需要我们“现在去安装”。
但是如果你很有耐心,而且(**更重要的是**有足够的技能去处理随之而来的问题那么你可以在由Canonical维护的主线内核存档中找到合适的软件包安装到你的Ubuntu 14.10中从而升级到Linux 3.17。
**警告:除非你知道你正在做什么,不要尝试从下面的链接中安装任何东西。**
- [访问Ubuntu内核主线存档][3]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/10/linux-kernel-3-17-whats-new-improved
作者:[Joey-Elijah Sneddon][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://lkml.iu.edu/hypermail/linux/kernel/1410.0/02818.html
[2]:http://lwn.net/Articles/607627/
[3]:http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=N;O=D

View File

@ -0,0 +1,37 @@
Ubuntu Unity 4岁了生日快乐
================================================================================
> Unity桌面环境首次出现在Ubuntu 10.10 Netbook Remix中这是一个如今早已废弃的版本。
**Canonical开发者以及Ubuntu社区这些天有一个很好的理由来庆祝因为Unity桌面环境已经4岁了**
Unity 作为Ubuntu的桌面环境已经有4年了虽然最初它并不用于该发行版的桌面版本。它首次用于Ubuntu Netbook Remix这是专为上网本设计的版本。实际上Ubuntu Netbook Remix 10.10 Maverick是第一个搭载Unity桌面发布的版本。
常规的Ubuntu 10.10 发行版桌面仍旧使用GNOME 2.x这也是为什么有用户说10.10 仍是Canonical做的最好的版本。
### Unity 是没人想要的替代品 ###
Canonical决定用他们自己的软件替代GNOME 2.x桌面环境但是它的设计对用户而言很陌生。一些人喜欢它但是许多人并不这样认为并且还被不同的用户在他们决定放弃Ubuntu的时候时不时地提到这个。
Unity在设计视角上和GNOME不同但是Ubuntu开发者并没有替换GNOME所有的软件包还保留了其中很多直到现在仍旧如此。之前不喜欢Unity方向的Ubuntu粉丝一定很失望因为GNOME 2.x很快也被抛弃被完全不同的、同样引发质疑的GNOME 3.0所替换。
### 为什么Unity替换GNOME ###
回到Ubuntu 10.10的时代Canonical和GNOME团队曾经非常紧密地合作但是在Ubuntu变得越来越流行之后事情发生了改变。驱使Canonical构建Unity的原因之一就是GNOME团队不再与他们步调一致了。
用户在抱怨GNOME的问题或者想要特定的功能时Ubuntu团队会向上游发送一些补丁。而GNOME方面或者不接受或者会花很长的时间才实现。与此同时Canonical和Ubuntu因这些他们无法马上解决的问题受到了很多批评但用户并不知道背后的原因。
因此一个与GNOME捆绑不太紧的桌面环境的需求变得非常清晰了。Unity最终在Ubuntu 11.10中引入。官方的发布日期是 2010年10月10日所以Unity已经4岁了。
Unity还没有被整个社区所拥抱不过已经有很多用户接受了它认为它是一个有用的、适合生产环境的桌面。虽然桌面的大修已经逾期很久且势必会在一两年内完成但它在每个新的发行版之后都获得了更多的支持和使用。
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Ubuntu-s-Unity-Turns-4-Happy-Birthday--461840.shtml
作者:[Silviu Stahie][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie

View File

@ -0,0 +1,60 @@
Linux 上一些很好用的文献目录管理工具
================================================================================
你是否写过很长很长的文章,以至于你认为永远都看不到它的结尾?如果是这样,你肯定明白,最糟糕的不是你投入了多少时间,而是一旦完成你仍然要整理和格式化所引用的参考文献。很幸运的是Linux 有很多的解决方案参考书目和文献管理工具。借助BibTex的力量这些工具可以帮你导入引用源然后自动生成一个结构化的文献目录。这里给大家提供一个Linux上参考文献管理工具的不完全列表。
### 1. Zotero ###
![](https://farm4.staticflickr.com/3936/15492092282_f1c8446624_b.jpg)
这应该是最著名的参考文献聚集工具,[Zotero][1]作为一个浏览器的扩展插件。当然它也有一个方便的Linux 独立工具。拥有强大的性能Zotero 很容易上手并且也可以和LibreOffice 或者是其他的文本编辑器配套使用来管理文档的参考文献。我个人很欣赏其操作界面和插件管理器。可惜的是,如果你对参考文献有很多不同的需求的话,很快就会发现 Zotero 功能有限。
### 2. JabRef ###
![](https://farm4.staticflickr.com/3936/15305799248_d27685aca9_b.jpg)
[JabRef][2] 是最先进的文献管理工具之一。你可以导入大量的格式可以在其外部的数据库里查找相应的条目像Google Scholar并且能直接输出到你喜欢的编辑器。JabRef 可以很好的兼容你的运行环境甚至也支持插件。最后还有一点JabRef可以连接你自己的SQL 数据库。而唯一的缺点就是其学习使用的难度。
### 3. KBibTex ###
![](https://farm4.staticflickr.com/3931/15492453775_c1e57f869f_c.jpg)
对于 KDE 使用者,这个桌面环境也拥有它自己专有的文献管理工具[KBibTex][3]。这个程序的品质正如你所期望。程序可高度定制通过快捷键就可以很好的操作和体验。你可以很容易找到副本、可以预览结果、也可以直接输出到LaTex 编辑器。而我认为这款软件最大的特色在于它集成了Bigsonomy Google Scholar 甚至是你的Zotero账号。唯一的缺憾是界面看起来实在是有点乱。多花点时间设置软件可以让你使用起来得心应手。
### 4. Bibfilex ###
![](https://farm4.staticflickr.com/3930/15492453795_f5ec82f5ff_c.jpg)
可以运行在Gtk 和Qt 环境中,[Bibfilex][4]是一个基于 Biblatex 的界面友好的工具。相对于JabRef 和KBibTex ,缺少了一些高级的功能,但这也让他更加的快速和轻巧。不用想太多,这绝对是快速做文献目录的一个聪明的选择。界面很舒服,仅仅反映了一些必要的功能。我给出了其使用的完全手册,你可以从官方的[下载页面][5]去获得。
### 5. Pybliographer ###
![](https://farm4.staticflickr.com/3929/15305749810_541b4926bd_o.jpg)
正如它的名字一样,[Pybliographer][6]是一个用 Python 写的非图形化的文献目录管理工具。我个人比较喜欢把Pybiographic 当做是图形化的前端。它的界面极其简洁和抽象。如果你仅仅需要输出少数的参考文献,而且也确实没有时间去学习更多的工具软件,那么 Pybliographer 确实是一个不错的选择。有一点点像 Bibfilex 的是,它是以让用户方便、快速的使用为目标的。
### 6. Referencer ###
![](https://farm4.staticflickr.com/3949/15305749790_2d3311b169_b.jpg)
这应该是我归纳这些时候的一个最大的惊喜,[Referencer][7] 确实是让人眼前一亮。完美兼容 Gnome ,它可以查找和导入你的文档,然后在网上查询他们的参考文献,并且输出到 LyX ,非常的漂亮和设计良好。为数不多的几个快捷键和插件让它拥有了图书馆的风格。
总的来说,很感谢这些工具软件,有了它们,你就可以不用再担心长长的文章了,至少是不用再担心参考文献的部分了。那么我们还有什么遗漏的吗?是否还有其他的文献管理工具你很喜欢?请在评论里告诉我们。
--------------------------------------------------------------------------------
via: http://xmodulo.com/reference-management-software-linux.html
作者:[Adrien Brochard][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:https://www.zotero.org/
[2]:http://jabref.sourceforge.net/
[3]:http://home.gna.org/kbibtex/
[4]:https://sites.google.com/site/bibfilex/
[5]:https://sites.google.com/site/bibfilex/download
[6]:http://pybliographer.org/
[7]:https://launchpad.net/referencer

View File

@ -0,0 +1,80 @@
Linux有问必答如何在Linux命令行中刻录ISO或NRG镜像到DVD
================================================================================
> **问题**我需要在Linux机器上使用DVD刻录机刻录一个镜像文件.iso或.nrg到DVD有没有一个既快捷又简易的方法最好是使用命令行工具
最常见的两种镜像文件格式是ISO.iso为文件扩展名和NRG.nrg为文件扩展名。ISO格式是一个由ISO国际标准组织创立的全球标准因此被大多数操作系统所支持它提供了很高的便携性。另一方面NRG格式是由Nero AG开发的私有格式Nero AG是一个很流行的磁盘镜像和刻录软件公司。
下面来解答怎样从Linux命令行刻录.iso或.nrg镜像到DVD。
### 刻录.ISO镜像文件到DVD
要刻录.iso镜像文件到DVD我们将使用**growisofs**这个工具:
# growisofs -dvd-compat -speed=4 -Z /dev/dvd1=WindowsXPProfessionalSP3Original.iso
在上面的命令行中“-dvd-compat”选项提供了与DVD-ROM/-Video的最大介质兼容性对于一次写入式的DVD+R或DVD-R这意味着刻录结束后关闭磁盘区段无法再追加写入。
“-Z /dev/dvd1=filename.iso”选项表示我们把.iso文件刻录到设备/dev/dvd1中的介质上。
“-speed=N”参数指定了DVD刻录机的刻录速度这与驱动自身的能力直接相关。“-speed=8”将以8x刻录“-speed=16”将以16x刻录以此类推。没有该参数growisofs将默认以最低速刻录在这里是4x。你可以根据你刻录机的可用速度和磁盘类型选择合适的刻录速度。
你可以根据[此教程][2]找出你的DVD刻录机的设备名称和它所支持的写入速度。
![](https://farm3.staticflickr.com/2947/15510172352_5c09c2f495_z.jpg)
刻录进程完成后,磁盘会自动弹出。
### 把NRG镜像转换为ISO格式 ###
由于ISO被广为采用刻录.iso镜像到CD/DVD就非常简单。但是要刻录一个.nrg镜像则首先需要将它转换为.iso格式。
把一个.nrg镜像文件转换到.iso格式你可以使用nrg2iso这个工具。它是一个开源程序用来将Nero Burning Rom创建的镜像转换到标准的.isoISO9660文件。
在Debian及其衍生版上安装**nrg2iso**
# aptitude install nrg2iso
在基于Red Hat的发行版上安装**nrg2iso**
# yum install nrg2iso
在CentOS/RHEL上你需要先启用[Repoforge仓库][1],再通过**yum**安装。
安装完nrg2iso包后使用以下命令来将.nrg镜像转换到.iso格式
# nrg2iso filename.nrg filename.iso
![](https://farm3.staticflickr.com/2945/15507409981_99eddd2577_z.jpg)
转换完成后,在当前目录中会出现一个.iso文件
![](https://farm4.staticflickr.com/3945/15323823510_c933d7710f_z.jpg)
### 检查已刻录介质的完整性 ###
你可以将已刻录DVD的md5校验和与原始.iso文件的md5校验和进行对比以检查所刻录介质的完整性。如果两者相同就说明刻录成功了。
然而当你使用nrg2iso把.nrg镜像转换为.iso格式后你需要明白一点nrg2iso创建的.iso文件的大小可能不是2048字节的倍数而通常.iso文件的大小都是2048的倍数。因此用常规的校验和对比方法该.iso文件和刻录介质的校验和会不一致。
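针对这个问题,可以先用一个简单的 shell 函数检查镜像文件的大小是否按 2048 字节对齐(下面只是一个示意性的草稿,函数名 `iso_aligned` 是为示例而取的,并非某个现成的工具):

```shell
# 示例函数检查文件大小是否为2048字节的倍数是则返回0
iso_aligned() {
    local size
    size=$(stat -c "%s" "$1") || return 2
    [ $((size % 2048)) -eq 0 ]
}
```

对一个由 nrg2iso 生成的 .iso 文件,如果该函数返回非零值,就说明不能直接用常规的校验和方法与刻录介质对比。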
另一方面,如果你已经刻录了一个不是由.nrg文件转换而来的.iso镜像你可以使用以下命令来检查记录到DVD中的数据的完整性。替换“/dev/dvd1”为你的设备名。
# md5sum filename.iso; dd if=/dev/dvd1 bs=2048 count=$(($(stat -c "%s" filename.iso) / 2048)) | md5sum
命令的第一部分计算.iso文件的md5校验和而第二部分则读取/dev/dvd1中的磁盘内容然后通过管道输出给md5sum工具。“bs=2048”表示dd命令将以2048字节的块为单位读取因为原始iso文件的内容是以2048字节为单位组织的。
![](https://farm3.staticflickr.com/2949/15487396726_bcf47d536f_z.jpg)
如果两个md5校验和的值相同这就意味着刻录的介质是有效的。
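可以把上述校验步骤包装成一个小函数(仅为示意性草稿,函数名 `verify_burn` 是为示例而取的,使用时同样要把设备名换成你自己的):

```shell
# 示例函数:对比 .iso 文件与已刻录介质的 md5 校验和
# 用法verify_burn filename.iso /dev/dvd1
verify_burn() {
    local iso=$1 dev=$2 sum_iso sum_dev
    sum_iso=$(md5sum "$iso" | awk '{print $1}')
    # 按 2048 字节块从介质上读取与镜像等长的数据,再计算校验和
    sum_dev=$(dd if="$dev" bs=2048 count=$(( $(stat -c "%s" "$iso") / 2048 )) 2>/dev/null | md5sum | awk '{print $1}')
    if [ "$sum_iso" = "$sum_dev" ]; then
        echo "校验一致,刻录成功"
    else
        echo "校验不一致"
    fi
}
```

注意,这个函数同样只适用于大小为 2048 字节倍数的 .iso 文件。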
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/burn-iso-nrg-image-dvd-command-line.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://xmodulo.com/how-to-set-up-rpmforge-repoforge-repository-on-centos.html
[2]:http://linux.cn/article-4081-1.html


@ -0,0 +1,110 @@
Linux有问必答如何使用Linux命令行检测DVD刻录机的名字和读写速度
================================================================================
> **提问**我想要知道我的DVD刻录机的名字和在刻录时的速度。该使用什么Linux命令行工具来检测DVD刻录机的设备名和速度
如今大多数消费PC和笔记本电脑都配备了DVD刻录机。在Linux中光盘驱动器如CD/DVD驱动器的名字是在引导时内核基于udev规则来命名的。有几种方法来检测刻录机的设备名称和它的写入速度。
### 方法一 ###
找出与DVD刻录机相关的设备名称最简单的方法是使用dmesg命令行工具它打印出内核的消息缓冲区。在dmesg的输出中寻找一个安装好的DVD刻录机
$ dmesg | egrep -i --color 'dvd|cd/rw|writer'
![](https://farm6.staticflickr.com/5603/15505432622_0bfec51a8f_z.jpg)
上述命令的输出会告诉你你的Linux系统上是否检测到了DVD刻录机以及它被分配的名字。本例中DVD刻录机的设备名称为“/dev/sr0”。不过此方法并不会告诉你它的写入速度。
### 方法二 ###
第二个获取DVD刻录机信息的方法是使用lsscsi命令它会列出所有可用的SCSI设备。
在基于Debian Linux上安装 **lsscsi**:
$ sudo apt-get install lsscsi
在基于Red Hat Linux上安装:
$ sudo yum install lsscsi
如果成功检测到lsscsi命令的输出会告诉你DVD刻录机的名称
$ lsscsi
![](https://farm4.staticflickr.com/3937/15319078780_e650d751d6.jpg)
这也不会告诉你刻录机更多的细节,比如写入速度。
### 方法三 ###
第三种获取DVD刻录机信息的方法是查看/proc/sys/dev/cdrom/info。
$ cat /proc/sys/dev/cdrom/info
----------
CD-ROM information, Id: cdrom.c 3.20 2003/12/17
drive name: sr0
drive speed: 24
drive # of slots: 1
Can close tray: 1
Can open tray: 1
Can lock tray: 1
Can change speed: 1
Can select disk: 0
Can read multisession: 1
Can read MCN: 1
Reports media changed: 1
Can play audio: 1
Can write CD-R: 1
Can write CD-RW: 1
Can read DVD: 1
Can write DVD-R: 1
Can write DVD-RAM: 1
Can read MRW: 1
Can write MRW: 1
Can write RAM: 1
本例中的输出告诉我们DVD刻录机/dev/sr0支持最高24x的CD刻录速度即24x153.6 KB/s和最高3x的DVD写入速度即3x1385 KB/s。这里的写入速度是可能达到的最大速度实际的写入速度当然取决于所使用的介质例如DVD-RW、DVD+RW、DVD-RAM等
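上面提到的倍速可以这样换算成实际传输速率(只是一个换算示例,基准值取 CD 1x = 153.6 KB/s、DVD 1x = 1385 KB/s

```shell
# 倍速换算示例CD 与 DVD 的 1x 基准速率并不相同
awk 'BEGIN { printf "CD 24x = %.1f KB/s\n", 24 * 153.6 }'   # CD 24x = 3686.4 KB/s
awk 'BEGIN { printf "DVD 3x = %d KB/s\n", 3 * 1385 }'       # DVD 3x = 4155 KB/s
```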
### 方法四 ###
另一种方法是使用一个名为wodim的命令行程序。在大多数的Linux发行版中这个工具以及它的软链接cdrecord都是默认安装的。
# wodim -prcap
(or cdrecord -prcap)
![](https://farm6.staticflickr.com/5614/15505433532_4d7e47fc51_o.png)
如果不带任何参数调用wodim命令会自动检测DVD刻录机并显示出详细的功能信息以及它的最大读取/写入速度。例如你可以找出刻录机支持哪些介质如CD-R、CD-RW、DVD-RW、DVD-ROM、DVD-R、DVD-RAM、音频CD以及相应的读/写速度。上面例子的输出显示该DVD刻录机对于CD拥有24x的最大写入速度对于DVD则有3x的最大写入速度。
需要注意的是wodim命令报告的写入速度会随你插入DVD刻录机的CD/DVD介质不同而变化因为它反映的是介质本身的规格。
### 方法五 ###
还有一个检测DVD刻录机写入速度的方法是使用名为dvd+rw-mediainfo的工具它是dvd+rw-toolsDVD+-RW/R介质工具链的一部分。
在基于Debian 发行版上安装 **dvd+rw-tools**
$ sudo apt-get install dvd+rw-tools
在基于Red Hat 发行版上安装 dvd+rw-tools:
$ sudo yum install dvd+rw-tools
不像其他工具, dvd+rw-mediainfo命令不会产生任何输出除非你插入DVD光盘到刻录机中。所以当你插入DVD光盘后运行以下的命令。用你自己的设备名称替换“/dev/sr0”。
$ sudo dvd+rw-mediainfo /dev/sr0
![](https://farm6.staticflickr.com/5597/15324137650_91dbf458ef_z.jpg)
**dvd+rw-mediainfo**工具会探测插入的媒体本例中是“DVD-R”以找出对媒体的实际写入速度。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/detect-dvd-writer-device-name-writing-speed-command-line-linux.html
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -0,0 +1,74 @@
Linux有问必答如何检测并修复bash中的破壳漏洞
================================================================================
> **问题**我想要知道我的Linux服务器是否存在bash破壳漏洞以及如何来保护我的Linux服务器不受破壳漏洞侵袭。
2014年9月24日一位名叫斯特凡·沙泽拉的安全研究者发现了一个名为“破壳”Shellshock也称为“bash门”或“Bash漏洞”的bash漏洞。该漏洞一旦被利用远程攻击者就可以在调用shell之前通过特别精心构造的环境变量传入函数定义而这些函数定义中附带的代码会在bash被调用时立即执行。
注意破壳漏洞影响到bash 1.14到4.3当前版本的各个版本。虽然在写本文时还没有针对该漏洞的权威而完整的修复方案但主要的Linux发行版[Debian][1]、[Red Hat][2]、[CentOS][3]、[Ubuntu][4]和[Novell/Suse][5])已经发布了用于部分解决与此漏洞相关问题([CVE-2014-6271][6]和[CVE-2014-7169][7]的补丁并建议尽快更新bash、在随后数日内继续关注更新LCTT 译注:可能你看到这篇文章的时候,已经有了完善的解决方案)。
### 检测破壳漏洞 ###
要检查你的Linux系统是否存在破壳漏洞请在终端中输入以下命令。
$ env x='() { :;}; echo "Your bash version is vulnerable"' bash -c "echo This is a test"
如果你的Linux系统存在破壳漏洞命令输出会像这样
Your bash version is vulnerable
This is a test
在上面的命令中一个名为x的环境变量被设置并传入了用户环境。如我们所见它的值并不是普通字符串而是一个空的函数定义后面跟了一个任意命令该命令将在bash被调用时执行。
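可以把上面的检测命令包装成一个简单的判断(示意性草稿:有漏洞的 bash 会让检测命令多输出一行内容,据此给出结论):

```shell
# 示例:根据检测命令的输出判断当前 bash 是否存在破壳漏洞
if env x='() { :;}; echo vulnerable' bash -c ":" 2>/dev/null | grep -q vulnerable; then
    echo "当前 bash 存在破壳漏洞"
else
    echo "当前 bash 未受此漏洞影响"
fi
```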
### 为破壳漏洞应用修复 ###
你可以按照以下方法安装新发布的bash补丁。
在Debian及其衍生版上
# aptitude update && aptitude safe-upgrade bash
在基于Red Hat的发行版上
# yum update bash
#### 打补丁之前: ####
Debian
![](https://farm4.staticflickr.com/3903/15342893796_0c3c61aa33_z.jpg)
CentOS
![](https://farm3.staticflickr.com/2949/15362738261_99fa409e8b_z.jpg)
#### 打补丁之后: ####
Debian:
![](https://farm3.staticflickr.com/2944/15179388727_bdb8a09d62_z.jpg)
CentOS:
![](https://farm4.staticflickr.com/3884/15179149029_3219ce56ea_z.jpg)
注意在安装补丁前后各个发行版中显示的bash版本号没有发生变化但是你可以从更新命令的运行过程中看到该补丁已经被安装安装前很可能需要你确认
如果出于某种原因你不能安装该补丁或者针对你的发行版的补丁还没有发布那么建议你先改用另外一个shell直到修复补丁出现。
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/detect-patch-shellshock-vulnerability-bash.html
译者:[GOLinux](https://github.com/GOLinux)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://www.debian.org/security/2014/dsa-3032
[2]:https://access.redhat.com/articles/1200223
[3]:http://centosnow.blogspot.com.ar/2014/09/critical-bash-updates-for-centos-5.html
[4]:http://www.ubuntu.com/usn/usn-2362-1/
[5]:http://support.novell.com/security/cve/CVE-2014-6271.html
[6]:http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6271
[7]:http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-7169


@ -0,0 +1,45 @@
慕尼黑市市长透露重返 Windows 的费用
================================================================================
> **摘要**: 慕尼黑市市长透露了在该市摆脱微软十年之后再次放弃 Linux 重返 Windows 的费用,大约需要数以百万计的欧元。
慕尼黑市市长透露,重返 Windows 将需要花费上百万欧元购买新的硬件。
今年早些时候,该市新当选的市长提出慕尼黑可能重返 Windows尽管市当局[用了若干年才迁移到基于 Linux 的操作系统和开源软件][1]LCTT 译注摘要译文见 http://linux.cn/article-2294-1.html 。
作为最著名的从微软迁移到 Linux 桌面系统的案例慕尼黑投向开源软件的做法一直引发各种争议和讨论。慕尼黑的迁移始于2004年还有一些德国的地方当局也[追随它的脚步转向开源][2]。
目前还没有[制定好返回 Windows 桌面的计划][3],但是当局正在调研哪种操作系统和软件包(包括专有软件和开源软件)更适合他们的需求。调研报告也将统计迁移到开源软件所花费的费用。
Dieter Reiter市长在[回应慕尼黑的绿党的问询][4]时透露了重返 Windows 的费用。
Reiter 说,迁移到 Windows 7 需要为它14000多名职员更换所有个人电脑此举将花费315万欧元。这还不包括软件许可证费用和基础设施的投入Reiter 说,由于没有进一步的计划,这些费用还没办法测算。他说,如果迁移到 Windows 8将花费更多。
Reiter 说,返回微软将导致迁移到 [Limux][5]、OpenOffice 及其它开源软件所花费的1400万欧元打了水漂。而部署 Limux 并从微软 Office 迁移的项目实施、支持、培训、修改系统以及 Limux 相关软件的授权等工作都将被搁置,他补充道。
他还透露说,(之前)迁移到 Limux 为市政府节约了大概1100万欧元的许可证和硬件费用因为基于 Ubuntu 的 Limux 操作系统要比升级较新版本的 Windows 对硬件的需要要低。
在这个回应中 Reiter 告诉 Stadtbild 杂志说,他是微软的粉丝,但是这并不会影响到这份 IT 审计报告。
“在接受 Stadtbild 杂志的采访中我透露我是微软粉丝后,我就收到了大量的信件,询问我们的 IT 团队是否能令人满意的满足用户在任何时候的需求,以及是否有足够的能力为一个现代化大都市的政府服务。”
“这件事有许多方面,用户满意度是其中之一。这和我个人偏好无关,也和我在开源方面的经验无关。”
在他的回应中,并不是由于职员们的对迁移到开源的抱怨而导致本次审计的决定。他说,这是来自对职员在 IT 方面的调查而产生的审计,并不独是 Limux OS。
他还提到了一个 Windows 和基于 Linux 的操作系统的相对安全的问题。他指出,根据德国国家安全局 BSI 的信息,发现 Linux 要比 Windows 漏洞更多,不过只是使用量较少罢了。然而他也补充说,这种比较也许有不同的解释。
--------------------------------------------------------------------------------
via: http://www.zdnet.com/munich-sheds-light-on-the-cost-of-dropping-linux-and-returning-to-windows-7000034718/
作者:[Nick Heath][a]
译者:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.zdnet.com/meet-the-team/uk/nick-heath/
[1]:http://www.techrepublic.com/article/how-munich-rejected-steve-ballmer-and-kicked-microsoft-out-of-the-city/
[2]:http://www.techrepublic.com/blog/european-technology/its-not-just-munich-open-source-gains-new-ground-in-germany/
[3]:http://www.techrepublic.com/article/no-munich-isnt-about-to-ditch-free-software-and-move-back-to-windows/
[4]:http://www.ris-muenchen.de/RII2/RII/DOK/ANTRAG/3456728.pdf
[5]:http://en.wikipedia.org/wiki/LiMux


@ -0,0 +1,37 @@
Debian 7.7 更新版发布
================================================================================
**Debian 项目已经宣布 Debian 7.7 “Wheezy” 发布并提供下载。这是一次常规维护更新,但它打包了很多重要的更新。**
![](http://i1-news.softpedia-static.com/images/news2/Debian-7-7-Is-Out-with-Security-Fixes-462647-2.jpg)
Debian 在这个发行版里面包含了常规的主要更新,如果你已经安装的 Debian 一直保持最新,就无需下载安装这个版本。开发者做了一些重要的修复,因此如果还没升级的话,建议尽快升级。
“此次更新主要是给稳定版修正安全问题,以及对一些严重问题的调整。安全建议的公告已经另行发布了,请查阅。”
开发者在正式[公告][1]中指出“请注意此更新并不是Debian 7的新版本只是更新了部分包没必要扔掉旧的wheezy CD或DVD只要在安装后通过 Debian 镜像来升级那些过期的包就行“。
开发者已经升级了 Bash 包来修复一些重要的漏洞在启动时SSH登录不再有效并且还做了其他一些微调。
要了解发布更多的细节请查看官方公告中的完整更新日志。
现在下载 Debian 7.7:
- [Debian GNU/Linux 7.7.0 (ISO) 32-bit/64-bit][2]
- [Debian GNU/Linux 6.0.10 (ISO) 32-bit/64-bit][3]
- [Debian GNU/Linux 8 Beta 2 (ISO) 32-bit/64-bit][4]
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Debian-7-7-Is-Out-with-Security-Fixes-462647.shtml
作者:[Silviu Stahie][a]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:https://www.debian.org/News/2014/20141018
[2]:http://ftp.acc.umu.se/debian-cd/7.7.0/multi-arch/iso-dvd/debian-7.7.0-i386-amd64-source-DVD-1.iso
[3]:http://ftp.au.debian.org/debian/dists/oldstable/
[4]:http://cdimage.debian.org/cdimage/jessie_di_beta_2/


@ -0,0 +1,51 @@
Linux 下的免费图片查看器
================================================================================
我最喜欢的谚语之一是“一图胜千言”。它指一张静态图片可以传递一个复杂的想法。图像相比文字而言可以迅速且更有效地描述大量信息。它们捕捉回忆,永不让你忘记你所想记住的东西,并且让它时常在你的记忆里刷新。
图片是互联网日常使用的一部分,并且对社交媒体互动尤其重要。一个好的图片查看器是任何操作系统必不可少的一个组成部分。
Linux 系统提供了大量开源实用小程序,其功能从显而易见到异乎寻常,应有尽有。正是这些工具的高质量和丰富选择,帮助 Linux 在生产环境中脱颖而出图片查看器尤其如此。Linux 有如此多的图像查看器可供选择,以至于让选择困难症患者无所适从~
一个不该包括在这个综述中但是值得一提的软件是 Fragment Image Viewer。它在专有许可证下发行是的我知道所以不会预先安装在 Ubuntu 上。 但它无疑看起来十分有趣!要是它的开发者们将它在开源许可证下发布的话,它将是明日之星!
现在,让我们亲眼探究一下这 13 款图像查看器。除了一个例外,它们都在开源协议下发行。由于要阐述的信息很多,我没有把所有细节都塞进这一篇综述里,而是为每一款图片查看器提供了一个单独页面,包括软件的完整描述、产品特点的详细分析、一张软件运行中的截图,以及相关资源和评论的链接。
### 图片查看器 ###
- [**Eye of Gnome**][1] -- 快速且多功能的图片查看器
- [**gThumb**][2] -- 高级图像查看器和浏览器
- [**Shotwell**][3] -- 被设计来提供个人照片管理的图像管理器
- [**Gwenview**][4] -- 专为 KDE 4 桌面环境开发的简易图片查看器
- [**Imgv**][5] -- 强大的图片查看器
- [**feh**][6] -- 基于 Imlib2 的快速且轻量的图片查看器
- [**nomacs**][7] -- 可处理包括 RAW 在内的大部分格式
- [**Geeqie**][8] -- 基于 Gtk+ 的轻量级图片查看器
- [**qiv**][9] -- 基于 gdk/imlib 的非常小且精致的开源图片查看器
- [**PhotoQT**][10] -- 好看、高度可配置、易用且快速
- [**Viewnior**][11] -- 设计时考虑到易用性
- [**Cornice**][12] -- 设计用来作为 ACDSee 的免费替代品
- [**XnViewMP**][13] -- 图像查看器、浏览器、转换器(专有软件)
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20141018070111434/ImageViewers.html
译者:[jabirus](https://github.com/jabirus)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://projects.gnome.org/eog/
[2]:https://wiki.gnome.org/Apps/gthumb
[3]:https://wiki.gnome.org/Apps/Shotwell/
[4]:http://gwenview.sourceforge.net/
[5]:http://imgv.sourceforge.net/
[6]:http://feh.finalrewind.org/
[7]:http://www.nomacs.org/
[8]:http://geeqie.sourceforge.net/
[9]:http://spiegl.de/qiv/
[10]:http://photoqt.org/
[11]:http://siyanpanayotov.com/project/viewnior/
[12]:http://wxglade.sourceforge.net/extra/cornice.html
[13]:http://www.xnview.com/en/


@ -1,11 +1,12 @@
使用Vmstat和Iostat命令进行Linux性能监控
使用vmstat和iostat命令进行Linux性能监控
================================================================
这是我们正在进行的**Linux**命令和性能监控系列的一部分。**Vmstat**和**Iostat**两个命令都适用于所有主要的类**unix**系统(**Linux/unix/FreeBSD/Solaris**)。
如果**vmstat**和**iostat**命令在你的系统中不可用,请安装**sysstat**软件包。**vmstat****sar**和**iostat**命令都包含在**sysstat**系统监控工具软件包中。iostat命令生成**CPU**和所有设备的统计信息。你可以从连接[sysstat][1]中下载源代码包编译安装sysstat但是我们建议通过**YUM**命令进行安装。
这是我们正在进行的**Linux**命令和性能监控系列的一部分。**vmstat**和**iostat**两个命令都适用于所有主要的类**unix**系统(**Linux/unix/FreeBSD/Solaris**)。
如果**vmstat**和**iostat**命令在你的系统中不可用,请安装**sysstat**软件包。**vmstat****sar**和**iostat**命令都包含在**sysstat**系统监控工具软件包中。iostat命令生成**CPU**和所有设备的统计信息。你可以从[这个链接][1]中下载源代码包编译安装sysstat但是我们建议通过**YUM**命令进行安装。
![使用Vmstat和Iostat命令进行Linux性能监控](http://www.tecmint.com/wp-content/uploads/2012/09/Linux-VmStat-Iostat-Commands.png)
使用Vmstat和Iostat命令进行Linux性能监控
*使用Vmstat和Iostat命令进行Linux性能监控*
###在Linux系统中安装sysstat###
@ -18,7 +19,7 @@
####1. 列出活动和非活动的内存####
如下范例中输出6列。**vmstat**的man页面中解析的每一列的意义。最重要的是内存中的**free**属性和交换分区中**si**和**so**属性。
如下范例中输出6列。**vmstat**的man页面中解析的每一列的意义。最重要的是内存中的**free**属性和交换分区中**si**和**so**属性。
[root@tecmint ~]# vmstat -a
@ -33,6 +34,7 @@
**注意**:如果你不带参数的执行**vmstat**命令,它会输出自系统启动以来的总结报告。
####2. 每X秒执行vmstat共执行N次####
下面命令将会每2秒中执行一次**vmstat**执行6次后自动停止执行。
[root@tecmint ~]# vmstat 2 6
@ -65,7 +67,6 @@
**vmstat**命令的**-s**参数,将输出各种事件计数器和内存的统计信息。
[tecmint@tecmint ~]$ vmstat -s
1030800 total memory
@ -237,7 +238,7 @@ via: http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-
作者:[Ravi Saive][a]
译者:[cvsher](https://github.com/cvsher)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -1,19 +1,17 @@
集所有功能与一身的Linux系统性能和使用活动监控工具-Sysstat
全能冠军Linux系统性能和使用活动监控工具 sysstat
===========================================================================
**Sysstat**是一个非常方便的工具它带有众多的系统资源监控工具用于监控系统的性能和使用情况。我们在日常使用的工具中有相当一部分是来自sysstat工具包的。同时它还提供了一种使用cron表达式来制定性能和活动数据的收集计划。
![Install Sysstat in Linux](http://www.tecmint.com/wp-content/uploads/2014/08/sysstat.png)
在Linux系统中安装Sysstat
下表是包含在sysstat包中的工具
- [**isstat**][1]: 输出CPU的统计信息和所有I/O设备的输入输出I/O统计信息。
- **mpstat**: 关于多有CPU的详细信息单独输出或者分组输出
- [**iostat**][1]: 输出CPU的统计信息和所有I/O设备的输入输出I/O统计信息。
- **mpstat**: 关于CPU的详细信息单独输出或者分组输出
- **pidstat**: 关于运行中的进程/任务、CPU、内存等的统计信息。
- **sar**: 保存并输出不同系统资源CPU、内存、IO、网络、内核等。。。)的详细信息。
- **sadc**: 系统活动数据收集器,用于手机sar工具的后端数据。
- **sa1**: 系统手机并存储sadc数据文件的二进制数据与sadc工具配合使用
- **sar**: 保存并输出不同系统资源CPU、内存、IO、网络、内核等。。。的详细信息。
- **sadc**: 系统活动数据收集器,用于收集sar工具的后端数据。
- **sa1**: 系统收集并存储sadc数据文件的二进制数据与sadc工具配合使用
- **sa2**: 配合sar工具使用产生每日的摘要报告。
- **sadf**: 用于以不同的数据格式CVS或者XML来格式化sar工具的输出。
- **Sysstat**: sysstat工具的man帮助页面。
@ -26,9 +24,9 @@ pidstat命令新增了一些新的选项首先是“-R”选项该选项
sar、sadc和sadf命令在数据文件方面同样带来了一些功能上的增强。以往只能使用“**saDD**”来命名数据文件现在使用**-D**选项可以用“**saYYYYMMDD**”来命名数据文件,同样的,现在的数据文件不必放在“**var/log/sa**”目录中我们可以使用“SA_DIR”变量来定义新的目录该变量将应用于sa1和sa2命令。
###在Linux系统中安装Sysstat####
###在Linux系统中安装sysstat####
在主要的linux发行版中**Sysstat**’工具包可以在默认的程序库中安装。然而,在默认程序库中的版本通常有点旧,因此,我们将会下载源代码包,编译安装最新版本(**11.0.0**版本)。
在主要的linux发行版中**sysstat**’工具包可以在默认的程序库中安装。然而,在默认程序库中的版本通常有点旧,因此,我们将会下载源代码包,编译安装最新版本(**11.0.0**版本)。
首先使用下面的连接下载最新版本的sysstat包或者你可以使用**wget**命令直接在终端中下载。
@ -38,7 +36,7 @@ sar、sadc和sadf命令在数据文件方面同样带来了一些功能上的增
![Download Sysstat Package](http://www.tecmint.com/wp-content/uploads/2014/08/Download-Sysstat.png)
下载Sysstat包
*下载sysstat包*
然后解压缩下载下来的包,进入该目录,开始编译安装
@ -47,21 +45,25 @@ sar、sadc和sadf命令在数据文件方面同样带来了一些功能上的增
这里,你有两种编译安装的方法:
a).第一,你可以使用**iconfig**(这将会给予你很大的灵活性,你可以选择/输入每个参数的自定义值)
####a)####
第一,你可以使用**iconfig**(这将会给予你很大的灵活性,你可以选择/输入每个参数的自定义值)
# ./iconfig
![Sysstat iconfig Command](http://www.tecmint.com/wp-content/uploads/2014/08/Sysstat-iconfig-Command.png)
Sysstat的iconfig命令
*sysstat的iconfig命令*
b).第二,你可以使用标准的**configure**命令在当行中定义所有选项。你可以运行 **./configure help 命令**来列出该命令所支持的所有限选项。
####b)####
第二,你可以使用标准的**configure**,在命令行中定义所有选项。你可以运行 **./configure --help** 命令来列出该命令所支持的所有选项。
# ./configure --help
![Sysstat Configure Help](http://www.tecmint.com/wp-content/uploads/2014/08/Configure-Help.png)
Stsstat的cofigure -help
*sysstat的configure --help*
在这里,我们使用标准的**./configure**命令来编译安装sysstat工具包。
@ -71,7 +73,7 @@ Stsstat的cofigure -help
![Configure Sysstat in Linux](http://www.tecmint.com/wp-content/uploads/2014/08/Configure-Sysstat.png)
在Linux系统中配置sysstat
*在Linux系统中配置sysstat*
在编译完成后我们将会看到一些类似于上图的输出。现在运行如下命令来查看sysstat的版本。
@ -80,7 +82,7 @@ Stsstat的cofigure -help
sysstat version 11.0.0
(C) Sebastien Godard (sysstat <at> orange.fr)
###在Linux 系统中更新sysstat###
###更新Linux 系统中的sysstat###
默认的sysstat使用“**/usr/local**”作为其目录前缀。因此,所有的二进制文件/工具都会安装在“**/usr/local/bin**”目录中。如果你的系统已经安装过sysstat工具包则上面提到的二进制文件/工具有可能在“**/usr/bin**”目录中。
@ -112,11 +114,11 @@ via: http://www.tecmint.com/install-sysstat-in-linux/
作者:[Kuldeep Sharma][a]
译者:[cvsher](https://github.com/cvsher)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/kuldeepsharma47/
[1]:http://www.tecmint.com/linux-performance-monitoring-with-vmstat-and-iostat-commands/
[1]:http://linux.cn/article-4024-1.html
[2]:http://sebastien.godard.pagesperso-orange.fr/download.html
[3]:http://sebastien.godard.pagesperso-orange.fr/documentation.html


@ -1,3 +1,4 @@
Translating by instdio
How To Use Steam Music Player on Ubuntu Desktop
================================================================================
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/09/steam-music.jpg)
@ -76,4 +77,4 @@ via: http://www.omgubuntu.co.uk/2014/10/use-steam-music-player-linux
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[a]:https://plus.google.com/117485690627814051450/?rel=author


@ -1,54 +0,0 @@
Linux Kernel 3.17 Is Out With Plenty of New Features
================================================================================
Linus Torvalds has announced the latest stable release of the Linux kernel, version 3.17.
![](http://www.omgubuntu.co.uk/wp-content/uploads/2011/07/Tux-psd3894.jpg)
Announced in his typical [laissez-faire style][1] in a post on the Linux Kernel Mailing List Torvalds explained:
> “So the past week was fairly calm, and so I have no qualms about releasing 3.17 on the normal schedule (as opposed to the optimistic “maybe I can release it one week early” schedule that was not to be).”
Due to travel Linus says he wont start merging changes for Linux 3.18 just yet:
> “I now have travel coming up something I hoped to avoid when I was hoping for releasing early. Which means that while 3.17 is out, Im not going to be merging stuff very actively next week, and the week after that is LinuxCon EU…”
### Whats New In Linux 3.17? ###
As with every new release, Linux 3.17 sees the kernel loaded up on the latest improvements, hardware support, fixes and so on. These range from the bamboozling e.g., [memfd and file sealing patches][2] to the sort of things most of us appreciate, such as support for new hardware.
Below is a short list compiling some notable highlights of this release. Its by no means exhaustive.
- Microsoft Xbox One controller support (without vibration)
- Additional improvements to Sony SIXAXIS support
- Toshiba “Active Protection Sensor” support
- New ARM support includes Rockchip RK3288 and AllWinner A23 SoCs
- “Cross-thread filter setting” for secure computing facility
- Broadcom BCM7XXX-based board support (used in various set-top boxes)
- Enhanced AMD Radeon R9 290 support
- Misc. Nouveau driver improvements, including Kepler GPU fixes
- Audio support includes Wildcatpoint Audio DSP on Intel Broadwell Ultrabooks.
### Installing Linux 3.17 on Ubuntu ###
Although classed as stable there is, at present, little need for most of us to “have it now”.
But if youre impatient and — **more importantly** — skilled enough to handle issues resulting from it, you can install Linux 3.17 in Ubuntu 14.10 by installing the appropriate set of packages for your system from the mainline kernel archive maintained by Canonical.
**Do not attempt to install anything from this link unless you know what youre doing.**
- [Visit the Ubuntu Kernel Mainline Archive][3]
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/10/linux-kernel-3-17-whats-new-improved
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://lkml.iu.edu/hypermail/linux/kernel/1410.0/02818.html
[2]:http://lwn.net/Articles/607627/
[3]:http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=N;O=D


@ -0,0 +1,36 @@
Translating by ZTinoZ
Linus Torvalds Regrets Alienating Developers with Strong Language
================================================================================
> He didn't name anyone, but this sounds like an apology
**Linus Torvalds talked today at LinuxCon and CloudOpen Europe, a conference organized by the Linux Foundation that reunites all the big names in the open source world. He answered a lot of questions and he also talked about the effects of the strong language he uses in the mailing list.**
Linus Torvalds is recognized as the creator of the Linux kernel and the maintainer of the latest development version. He makes sure that we get a new RC almost every week and he is very involved in the discussions that take place in the mailing list. He doesn't really choose his words and has been blamed for using strong language with some of the developers.
The latest problem of this kind, which surfaced in the news as well, was when [he decided to block code from a particular developer][1], after making some very harsh remarks. He is known to be very abrasive, especially when kernel developers break user space to fix something in the kernel. The same happened in this case and he basically went mental on the guy.
### This is the closest he's been to an apology ###
Linus Torvalds never really talked about that particular discussion since and people moved on, but recently a systemd developer talked about the strong language in the open source community and he mentioned Linus Torvalds by name. He's not known to apologize, so this admission of guilt during LinuxCon is a big step forward. The moderator asked him what single decision in the last 23 years he would change.
"From a technical standpoint, no single decision has ever been that important... The problems tend to be around alienating users or developers and I'm pretty good at that. I use strong language. But again there's not a single instance I'd like to fix. There's a metric [expletive]load of those."
"One of the reasons we have this culture of strong language, that admittedly many people find off-putting, is that when it comes to technical people with strong opinions and with a strong drive to do something technically superior, you end up having these opinions show up as sometimes pretty strong language," [said][2] Linus Torvalds.
He didn't mention anyone by name or any specific incident, but the proximity to the complaints issued by Lennart Poettering, the systemd developer, seems to point towards that issue.
It also looks like Linux kernel 3.18 RC1 will arrive later this week and we'll soon have something new to play with.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Linus-Torvalds-Regrets-Alienating-Developers-with-Strong-Language-462191.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://news.softpedia.com/news/Linus-Torvalds-Block-All-Code-from-Systemd-Developer-for-the-Linux-Kernel-435714.shtml
[2]:http://www.linux.com/news/featured-blogs/200-libby-clark/791788-linus-torvalds-best-quotes-from-linuxcon-europe-2014


@ -0,0 +1,60 @@
Linus Torvalds' Best Quotes from LinuxCon Europe 2014
================================================================================
![](http://www.linux.com/images/stories/41373/Linus-Dirk-2014.jpg)
Linux creator Linus Torvalds answered questions from Dirk Hohndel, Intel's chief Linux and open source technologist, on Wednesday, Oct. 15, 2014 at LinuxCon and CloudOpen Europe.
Linus Torvalds doesn't regret any of the technical decisions he's made over the past 23 years since he first created Linux, he said Wednesday at [LinuxCon and CloudOpen Europe][1].
“Technical issues, even when they're completely wrong, and they have been, you can fix them later,” said Torvalds, a Linux Foundation fellow.
![](http://www.linux.com/images/stories/41373/Linus-Torvalds-2014.jpg)
Despite these personal issues and disagreements the community has thrived, and created the best technology they possibly can, said Linus Torvalds at LinuxCon Europe 2014.
He does, however, regret the times he has alienated developers and users with his use of strong language on the kernel mailing list, he said. Relationships can't be so easily fixed.
Despite these personal issues and disagreements the community has thrived, and created the best technology they possibly can. This is, Torvalds said, the ultimate goal.
In a Q&A on stage with Dirk Hohndel, Intel's chief Linux and open source technologist, Torvalds spoke about the state of the community, the kernel development process, what it takes to be a kernel developer, and the future of Linux. Here are some highlights of the discussion.
**1.** “The speed of development has not really slowed down the last few years. We have had around 10,000 patches every release from more than 1,000 people and the end result has been very good.”
**2.** Dirk Hohndel: “You said you wanted subsystem maintainers to consider following the x86 model and have more than one maintainer share the role. How about applying your own advice at the top?
Torvalds: “I'll probably have to do that someday. Right now I'm not getting a lot of complaints for not being responsive. Being responsive is one of the most important things a kernel developer at any level can be... So far, partly thanks to Git, I've been able to keep up.”
**3.** “A lot of people want to have market share numbers, lots of users, because that's how they view their self worth. For me, one of the most important things for Linux is having a big community that is actively testing new kernels; it's the only way to support the absolute insane amount of different hardware we deal with.”
**4.** Hohndel: “If you could change a single decision you've made in the last 23 years, what would you do differently?”
Torvalds: “From a technical standpoint, no single decision has ever been that important... The problems tend to be around alienating users or developers and I'm pretty good at that. I use strong language. But again there's not a single instance I'd like to fix. There's a metric shitload of those.”
**5.** “Most people even if though they don't always necessarily like each other, do tend to respect the code they generate. For Linux that's the important part. What really matters is people are very involved in generating the best technology we can.”
**6.** “On the internet nobody can hear you being subtle.”
**7.** “One of the reasons we have this culture of strong language, that admittedly many people find off-putting, is that when it comes to technical people with strong opinions and with a strong drive to do something technically superior, you end up having these opinions show up as sometimes pretty strong language.”
**8.** Hohndel: What will you tell a student who wants to become the next Linus?
Torvalds: “Find something that you're passionate about and just do it.”
**9.** “Becoming a maintainer is easy; you just need an infinite amount of time and respond to email from random people.”
**10.** Hohndel: “Make a bold prediction about the future of Linux.”
Torvalds: “The boldest prediction I can say is, I will probably release rc1 in about a week.”
--------------------------------------------------------------------------------
via: http://www.linux.com/news/featured-blogs/200-libby-clark/791788-linus-torvalds-best-quotes-from-linuxcon-europe-2014
作者:[Libby Clark][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linux.com/community/forums/person/41373/catid/200-libby-clark
[1]:http://events.linuxfoundation.org/events/linuxcon-europe


@ -0,0 +1,37 @@
"Fork Debian" Project Aims to Put Pressure on Debian Community and Systemd Adoption
================================================================================
> There is still a great deal of resistance in the Debian community towards the upcoming adoption of systemd
**The Debian project decided to adopt systemd a while ago and ditch the upstart counterpart. The decision was very controversial and it's still contested by some users. Now, a new proposition has been made, to fork Debian into something that doesn't have systemd.**
![](http://i1-news.softpedia-static.com/images/news2/Fork-Debian-Project-Started-to-Put-Pressure-on-Debian-Community-and-Systemd-Adoption-462598-2.jpg)
systemd is the replacement for the init system and it's the daemon that starts right after the Linux kernel. It's responsible for initiating all the other components in a system and it's also responsible for shutting them down in the correct order, so you might imagine why people think this is an important piece of software.
The discussions in the Debian community have been very heated, but systemd prevailed and it looked like the end of it. Linux distros based on it have already started to make the changes. For example, Ubuntu is already preparing to adopt systemd, although it's still pretty far off.
### Forking Debian, not really a solution ###
Developers have already forked systemd, but the projects resulted don't have a lot of support from the community. As you can imagine, systemd also has a big following and people are not giving up so easily. Now, someone has made a website called debianfork.org to advocate for a Debian without systemd, in an effort to put pressure on the developers.
"We are Veteran Unix Admins and we are concerned about what is happening to Debian GNU/Linux to the point of considering a fork of the project. Some of us are upstream developers, some professional sysadmins: we are all concerned peers interacting with Debian and derivatives on a daily basis. We don't want to be forced to use systemd in substitution to the traditional UNIX sysvinit init, because systemd betrays the UNIX philosophy."
"We contemplate adopting more recent alternatives to sysvinit, but not those undermining the basic design principles of 'do one thing and do it well' with a complex collection of dozens of tightly coupled binaries and opaque logs," reads the [website][1], among a lot of other things.
Basically, the new website is not actually about a Debian fork, but more like a form of pressure for the [upcoming vote][2] that will be taken for the "Re-Proposal - preserve freedom of choice of init systems." This is a general resolution made by Ian Jackson and he hopes to get enough support in order to turn back the decision made by the Technical Committee regarding systemd.
It's clear that the debate is still not over in the Debian community, but it remains to be seen if the decisions already made can be overturned.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Fork-Debian-Project-Started-to-Put-Pressure-on-Debian-Community-and-Systemd-Adoption-462598.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://debianfork.org/
[2]:https://lists.debian.org/debian-vote/2014/10/msg00001.html


@ -0,0 +1,64 @@
Microsoft loves Linux -- for Azure's sake
================================================================================
![](http://images.techhive.com/images/article/2014/10/microsoft_guthrie_azure-100525983-primary.idge.jpg)
Scott Guthrie, executive vice president, Microsoft Cloud and Enterprise group, shows how Microsoft differentiates Azure. Credit: James Niccolai/IDG News Service
### Microsoft adds CoreOS and Cloudera to its growing set of Azure services ###
Microsoft now loves Linux.
This was the message from Microsoft CEO Satya Nadella, standing in front of an image that read "Microsoft [heart symbol] Linux," during a Monday webcast to announce a number of services it had added to its Azure cloud, including the Cloudera Hadoop package and the CoreOS Linux distribution.
In addition, the company launched a marketplace portal, now in preview mode, designed to make it easier for customers to procure and manage their cloud operations.
Microsoft is also planning to release an Azure appliance, in conjunction with Dell, that will allow organizations to run hybrid clouds where they can easily move operations between Microsoft's Azure cloud and their own in-house version.
The declaration of affection for Linux indicates a growing acceptance of software that wasn't created at Microsoft, at least for the sake of making its Azure cloud platform as comprehensive as possible.
For decades, the company tied most of its new products and innovations to its Windows platform, and saw other OSes, such as Linux, as a competitive threat. Former CEO Steve Ballmer [once infamously called Linux a cancer][1].
This animosity may be evaporating as Microsoft is finding that customers want cloud services that incorporate software from other sources in addition to Microsoft. About 20 percent of the workloads run on Azure are based on Linux, Nadella admitted.
Now, the company considers its newest competitors to be the cloud services offered by Amazon and Google.
Nadella said that by early 2015, Azure will be operational in 19 regions around the world, which will provide more local coverage than either Google or Amazon.
He also noted that the company is investing more than $4.5 billion in data centers, which by Microsoft's estimation is twice as much as Amazon's investments and six times as much as Google's.
To compete, Microsoft has been adding widely-used third party software packages to Azure at a rapid clip. Nadella noted that Azure now supports all the major data integration stacks, such as those from Oracle and IBM, as well as major new entrants such as MongoDB and Hadoop.
The results seem to be paying off. Today Azure is generating about $4.48 billion in annual revenue for Microsoft, and we are "still at the early days," of cloud computing, Nadella said.
The service attracts about 10,000 new customers per week. About 2 million developers have signed on to Visual Studio Online since its launch. The service runs about 1.2 million SQL databases.
CoreOS is now actually the fifth Linux distribution that Azure offers, joining Ubuntu, CentOS, OpenSuse, and Oracle Linux (a variant of Red Hat Enterprise Linux). Customers [can also package their own Linux distributions][2] to run in Azure.
CoreOS was developed as [a lightweight Linux distribution][3] to be used primarily in cloud environments. Officially launched in December, CoreOS is already offered as a service by Google, Rackspace, DigitalOcean and others.
Cloudera is the second Hadoop distribution offered on Azure, following Hortonworks. Cloudera CEO Mike Olson joined the Microsoft executives onstage to demonstrate how easily one can use the Cloudera Hadoop software within Azure.
Using the new portal, Olson showed how to start up a 90-node instance of Cloudera with a few clicks. Such a deployment can be connected to an Excel spreadsheet, where the user can query the dataset using natural language.
Microsoft also announced a number of other services and products.
Azure will have a new type of virtual machine, which is being called the "G Family." These virtual machines can have up to 32 CPU cores, 450GB of working memory and 6.5TB of storage, making it in effect "the largest virtual machine in the cloud," said Scott Guthrie, who is the Microsoft executive vice president overseeing Azure.
This family of virtual machines is equipped to handle the much larger workloads Microsoft is anticipating its customers will want to run. It has also upped the amount of storage each virtual machine can access, to 32TB.
The new cloud platform appliance, available in November, will allow customers to run Azure services on-premise, which can provide a way to bridge their on-premise and cloud operations. One early customer, integrator General Dynamics, plans to use this technology to help its U.S. government customers migrate to the cloud.
--------------------------------------------------------------------------------
via: http://www.computerworld.com/article/2836315/microsoft-loves-linux-for-azures-sake.html
作者:[Joab Jackson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.computerworld.com/author/Joab-Jackson/
[1]:http://www.theregister.co.uk/2001/06/02/ballmer_linux_is_a_cancer/
[2]:http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-create-upload-vhd/
[3]:http://www.itworld.com/article/2696116/open-source-tools/coreos-linux-does-away-with-the-upgrade-cycle.html


@ -0,0 +1,30 @@
Red Hat acquires FeedHenry to get mobile app chops
================================================================================
Red Hat wants a piece of the enterprise mobile app market, so it has acquired Irish company FeedHenry for approximately $82 million.
The growing popularity of mobile devices has put pressure on enterprise IT departments to make existing apps available from smartphones and tablets -- a trend that Red Hat is getting in on with the FeedHenry acquisition.
The mobile app segment is one of the fastest growing in the enterprise software market, and organizations are looking for better tools to build mobile applications that extend and enhance traditional enterprise applications, according to Red Hat.
"Mobile computing for the enterprise is different than Angry Birds. Enterprise mobile applications need a backend platform that enables the mobile user to access data, build backend logic, and access corporate APIs, all in a scalable, secure manner," Craig Muzilla, senior vice president for Red Hat's Application Platform Business, said in a [blog post][1].
FeedHenry provides a cloud-based platform that lets users develop and deploy applications for mobile devices that meet those demands. Developers can create native apps for Android, iOS, Windows Phone and BlackBerry as well as HTML5 apps, or a mixture of native and Web apps.
A key building block is Node.js, an increasingly popular platform based on Chrome's JavaScript runtime for building fast and scalable applications.
From Red Hat's point of view, FeedHenry is a natural fit with the company's strengths in enterprise middleware and PaaS (platform-as-a-service). It adds better mobile capabilities to the JBoss Middleware portfolio and OpenShift PaaS offerings, Red Hat said.
Red Hat plans to continue to sell and support FeedHenry's products, and will continue to honor client contracts. For the most part, it will be business as usual, according to Red Hat. The transaction is expected to close in the third quarter of its fiscal 2015.
--------------------------------------------------------------------------------
via: http://www.computerworld.com/article/2685286/red-hat-acquires-feedhenry-to-get-mobile-app-chops.html
作者:[Mikael Ricknäs][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.computerworld.com/author/Mikael-Rickn%C3%A4s/
[1]:http://www.redhat.com/en/about/blog/its-time-go-mobile


@ -0,0 +1,28 @@
This is the name of Ubuntu 15.04 — And It's Not Velociraptor
================================================================================
**Ubuntu 14.10 may not be out of the door yet, but attention is already turning to Ubuntu 15.04. Today it got its name: [Vivid Vervet][1].**
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/Unknown.jpg)
Announcing the monkey-themed moniker in his usual loquacious style, Mark Shuttleworth cites the upstart and playful nature of the mascot as in tune with Ubuntu's own foray into the mobile space.
> “This is a time when every electronic thing can be an Internet thing, and thats a chance for us to bring our platform, with its security and its long term support, to a vast and important field. In a world where almost any device can be smart, and also subverted, our shared efforts to make trusted and trustworthy systems might find fertile ground.
Talking of plans for the release Shuttleworth states one goal is to “show the way past a simple Internet of things, to a world of Internet things-you-can-trust.”
Ubuntu 15.04 is due for release in April 2015. It's not expected to arrive with either Mir or Unity 8 by default, but given the voracious speed at which ambitions are accelerating, they may find their way out for testing.
Do you like the name? Were you hoping for velociraptor?
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/10/ubuntu-15-04-named-vivid-vervet
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.markshuttleworth.com/archives/1425


@ -0,0 +1,34 @@
Ubuntu 15.04 Is Called Vivid Vervet
================================================================================
> Mark Shuttleworth decided on the new name for Ubuntu 15.04
![](http://i1-news.softpedia-static.com/images/news2/Ubuntu-15-04-Is-Called-Vivid-Vervet-462621-2.jpg)
**One of Mark Shuttleworth's privileges is to decide what the code name for upcoming Ubuntu versions is. It's usually a real animal and now it's a monkey whose name starts with V and, as usual, it's probably a species you've never heard of before.**
With very few exceptions, some of the names chosen for Ubuntu releases send the older users to the Encyclopedia Britannica and the new ones to Google. Shuttleworth generally chooses animals that are less known and the names usually have something in common with the release.
For example, Trusty Tahr, the name of Ubuntu 14.04 LTS, followed the idea of long-term support for the operating system, hence the "trusty" adjective. Precise Pangolin did the same for Ubuntu 12.04 LTS, and so on. Intermediate releases are not all that obvious, and Ubuntu 14.10's Utopic Unicorn is proof of that.
### Still thinking about the monkey whose name starts with a V? ###
The way the version number is chosen is pretty clear: the first part is the year and the second is the month, so Ubuntu 14.10 is actually Ubuntu 2014 October. The names, on the other hand, follow just one simple rule, an adjective plus an animal, so the choice is rather open. Unlike other communities, where the name is decided by users or at least with their participation, Ubuntu is different, although it's not the only project that works this way.
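As a side note, the year.month convention can be reproduced from a shell prompt; this is just an illustration of the numbering rule (GNU date assumed), not anything Ubuntu itself runs:

```shell
# Turn a release date into an Ubuntu-style YY.MM version string.
# Requires GNU date (the -d flag); the dates are release months.
version_for() {
    date -d "$1" +%y.%m
}

version_for 2014-10-01   # October 2014 -> 14.10
version_for 2015-04-01   # April 2015   -> 15.04
```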
"Release week! Already! I wouldn't call Trusty 'vintage' just yet, but Utopic is poised to leap into the torrent stream. We've all managed to land our final touches to *buntu and are excited to bring the next wave of newness to users around the world. Glad to see the unicorn theme went down well, judging from the various desktops I see on G+."
"In my favourite places, the smartest thing around is a particular kind of monkey. Vexatious at times, volant and vogie at others, a vervet gets in anywhere and delights in teasing cats and dogs alike. As the upstart monkey in this business I can think of no better mascot. And so let's launch our vicenary cycle, our verist varlet, the Vivid Vervet!" says Mark Shuttleworth on his [blog][1].
So, there you have it: Ubuntu 15.04, the operating system scheduled to arrive in April 2015, will be called Vivid Vervet. I won't keep you any longer; I'm sure you are already looking up the vervet on Wikipedia.
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Ubuntu-15-04-Is-Called-Vivid-Vervet-462621.shtml
作者:[Silviu Stahie][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://www.markshuttleworth.com/archives/1425


@ -0,0 +1,219 @@
Compact Text Editors Great for Remote Editing and Much More
================================================================================
A text editor is software used for editing plain text files. This type of software has many different uses, including modifying configuration files, writing programming language source code, jotting down thoughts, or even making a grocery list. Given that editors can be used for such a diverse range of activities, it is worth spending the time finding an editor that best suits your preferences.
Whatever their level of sophistication, editors typically share a common set of functionality, such as searching and replacing text, formatting text, importing files, and moving text within the file.
All of these text editors are console based applications which make them ideal for editing files on remote machines. Textadept also provides a graphical user interface, but remains fast and minimalist.
Console-based applications are also light on system resources (very useful on low-spec machines), can be faster and more efficient than their graphical counterparts, keep working when X needs to be restarted, and are great for scripting purposes.
I have selected my favorite open source text editors that are frugal on system resources.
----------
### Textadept ###
![](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Textadept.png)
Textadept is a fast, minimalist, and extensible cross-platform open source text editor for programmers. This open source application is written in a mixture of C and Lua and has been optimized for speed and minimalism over the years.
Textadept is an ideal editor for programmers who want endless extensibility options without sacrificing speed or succumbing to code bloat and featuritis.
There is also a version available for the terminal, which only depends on ncurses; great for editing on remote machines.
#### Features include: ####
- Lightweight
- Minimal design maximizes screen real estate
- Self-contained executable: no installation necessary
- Entirely keyboard driven
- Unlimited split views (GUI version): split the editor window as many times as you like, either horizontally or vertically. Please note that Textadept is not a tabbed editor
- Support for over 80 programming languages
- Powerful snippets and key commands
- Code autocompletion and API lookup
- Unparalleled extensibility
- Bookmarks
- Find and Replace
- Find in Files
- Buffer-based word completion
- Adeptsense: autocomplete symbols for programming languages and display API documentation
- Themes: light, dark, and term
- Uses lexers to assign names to buffer elements like comments, strings, and keywords
- Sessions
- Snapopen
- Available modules include support for Java, Python, Ruby and recent file lists
- Conforms to the GNOME Human Interface Guidelines (HIG)
- Support for editing Lua code; syntax autocompletion and LuaDoc are available for many Textadept objects as well as Lua's standard libraries
- Website: [foicica.com/textadept][1]
- Developer: Mitchell and contributors
- License: MIT License
- Version Number: 7.7
----------
### Vim ###
![](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-vim.png)
Vim is an advanced text editor that seeks to provide the power of the editor 'Vi', with a more complete feature set.
This editor is very useful for editing programs and other plain ASCII files. All commands are given with normal keyboard characters, so those who can type with ten fingers can work very fast. Additionally, function keys can be defined by the user, and the mouse can be used.
Vim is often called a "programmer's editor," and is so useful for programming that many consider it to be an entire Integrated Development Environment. However, this application is not only intended for programmers. Vim is highly regarded for all kinds of text editing, from composing email to editing configuration files.
Vim's interface is based on commands given in a text user interface. Although its graphical user interface, gVim, adds menus and toolbars for commonly used commands, the software's entire functionality is still reliant on its command line mode.
#### Features include: ####
- 3 modes:
    - Command mode
    - Insert mode
    - Command line mode
- Unlimited undo
- Multiple windows and buffers
- Flexible insert mode
- Syntax highlighting: highlight portions of the buffer in different colors or styles, based on the type of file being edited
- Interactive commands:
    - Marking a line
    - vi line buffers
    - Shift a block of code
- Block operators
- Command line history
- Extended regular expressions
- Edit compressed/archive files (gzip, bzip2, zip, tar)
- Filename completion
- Block operations
- Jump tags
- Folding text
- Indenting
- ctags and cscope integration
- 100% vi compatibility mode
- Plugins to add/extend functionality
- Macros
- vimscript, Vim's internal scripting language
- Unicode support
- Multi-language support
- Integrated online help
- Website: [www.vim.org][2]
- Developer: Bram Moolenaar
- License: GNU GPL compatible (charityware)
- Version Number: 7.4
----------
### ne ###
![](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-ne.png)
ne is a full screen open source text editor. It is intended to be an easier to learn alternative to vi, yet still portable across POSIX-compliant operating systems.
ne is easy to use for the beginner, but powerful and fully configurable for the wizard, and most sparing in its resource usage.
#### Features include: ####
- Three user interfaces: control keystrokes, command line, and menus; keystrokes and menus are completely configurable
- Syntax highlighting
- Full support for UTF-8 files, including multiple-column characters
- The number of documents and clips, the dimensions of the display, and the file/line lengths are limited only by the integer size of the machine
- Simple scripting language where scripts can be generated via an idiot-proof record/play method
- Unlimited undo/redo capability (can be disabled with a command)
- Automatic preferences system based on the extension of the file name being edited
- Automatic completion of prefixes using words in your documents as dictionary
- File requester with completion features for easy file retrieval
- Extended regular expression search and replace à la emacs and vi
- A very compact memory model: easily load and modify very large files
- Editing of binary files
- Website: [ne.di.unimi.it][3]
- Developer: Sebastiano Vigna (original developer). Additional features added by Todd M. Lewis
- License: GNU GPL v3
- Version Number: 2.5
----------
### Zile ###
![](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-Zile.png)
Zile (a recursive acronym for Zile Is Lossy Emacs) is a small Emacs clone. Zile is a customizable, self-documenting real-time display editor. It was written to be as similar as possible to Emacs; every Emacs user should feel comfortable with Zile.
Zile is distinguished by a very small RAM footprint of approximately 130 kB and quick editing sessions. It is 8-bit clean, allowing it to be used on any sort of file.
#### Features include: ####
- Small but fast and powerful
- Multi buffer editing with multi level undo
- Multi window
- Killing, yanking and registers
- Minibuffer completion
- Auto fill (word wrap)
- Looks like Emacs. Key sequences, function and variable names are identical with Emacs's
- Auto line ending detection
- Website: [www.gnu.org/software/zile][4]
- Developer: Reuben Thomas, Sandro Sigala, David A. Capello
- License: GNU GPL v2
- Version Number: 2.4.11
----------
### nano ###
![](http://www.linuxlinks.com/portal/content/reviews/Editors/Screenshot-nano.png)
nano is a curses-based text editor. It is a clone of Pico, the editor of the Pine email client.
The nano project was started in 1999 due to licensing issues with the Pine suite (Pine was not distributed under a free software license), and also because Pico lacked some essential features.
nano aims to emulate the functionality and easy-to-use interface of Pico, while offering additional functionality, but without the tight mailer integration of the Pine/Pico package.
nano, like Pico, is keyboard-oriented, controlled with control keys.
#### Features include: ####
- Interactive search and replace
- Color syntax highlighting
- Go to line and column number
- Auto-indentation
- Feature toggles
- UTF-8 support
- Mixed file format auto-conversion
- Verbatim input mode
- Multiple file buffers
- Smooth scrolling
- Bracket matching
- Customizable quoting string
- Backup files
- Internationalization support
- Filename tab completion
- Website: [nano-editor.org][5]
- Developer: Chris Allegretta, David Lawrence, Jordi Mallach, Adam Rogoyski, Robert Siemborski, Rocco Corsi, David Benbennick, Mike Frysinger
- License: GNU GPL v3
- Version Number: 2.2.6
--------------------------------------------------------------------------------
via: http://www.linuxlinks.com/article/20141011073917230/TextEditors.html
作者Frazer Kline
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://foicica.com/textadept/
[2]:http://www.vim.org/
[3]:http://ne.di.unimi.it/
[4]:http://www.gnu.org/software/zile/
[5]:http://nano-editor.org/


@ -0,0 +1,56 @@
(translating by runningwater)
UbuTricks 14.10.08
================================================================================
> An Ubuntu utility that allows you to install the latest versions of popular apps and games
UbuTricks is a freely distributed script written in Bash and designed from the ground up to help you install the latest version of the most acclaimed games and graphical applications on your Ubuntu Linux operating system, as well as on various other Ubuntu derivatives.
![](http://i1-linux.softpedia-static.com/screenshots/UbuTricks_1.png)
### What apps can I install with UbuTricks? ###
Currently, the latest versions of Calibre, Fotoxx, Geary, GIMP, Google Earth, HexChat, jAlbum, Kdenlive, LibreOffice, PCManFM, Qmmp, QuiteRSS, QupZilla, Shutter, SMPlayer, Ubuntu Tweak, Wine, XBMC (Kodi), PlayOnLinux, Red Notebook, NeonView, Sunflower, Pale Moon, QupZilla Next, FrostWire and RSSOwl can be installed with UbuTricks.
### What games can I install with UbuTricks? ###
In addition, the latest versions of the 0 A.D., Battle for Wesnoth, Transmageddon, Unvanquished and VCMI (Heroes III Engine) games can be installed with the UbuTricks program. Users can also install the latest version of the Cinnamon and LXQt desktop environments.
### Getting started with UbuTricks ###
The program is distributed as a .sh file (shell script) that can be run from the command line using the “sh ubutricks.sh” command (without quotes); alternatively, you can make it executable and double-click it from your Home folder or desktop. All you have to do is select an app or game and click the OK button to install it.
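The two launch methods described above can be sketched as follows. To keep the sketch self-contained, a tiny placeholder script stands in for the real download (which you would fetch from the link at the end of this article):

```shell
# Stand-in for the downloaded script, so both launch methods can be
# demonstrated without touching the network.
printf '#!/bin/sh\necho "UbuTricks placeholder"\n' > ubutricks.sh

# Method 1: hand the file to the shell interpreter directly.
sh ubutricks.sh

# Method 2: mark it executable, then run it like any other program
# (the graphical equivalent is double-clicking it in a file manager).
chmod +x ubutricks.sh
./ubutricks.sh
```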
### How does it work? ###
When accessed for the first time, the program will display a welcome screen from the get-go, notifying users about how it actually works. There are three methods to install an app or game: via a PPA, a DEB file, or a source tarball. Please note that apps and games will be downloaded and installed automatically.
### What distributions are supported? ###
Several versions of the Ubuntu Linux operating system are supported, but if not specified, the script defaults to the current stable version, Ubuntu 14.04 LTS (Trusty Tahr). At the moment, the program will not work if you don't have the gksu package installed on your Ubuntu box. It is based on Zenity, which should be installed too.
![](http://i1-linux.softpedia-static.com/screenshots/UbuTricks_2.jpg)
- last updated on: October 9th, 2014, 11:29 GMT
- price: FREE!
- developed by: Dan Craciun
- homepage: [www.tuxarena.com][1]
- license type: [GPL (GNU General Public License)][3]
- category: ROOT \ Desktop Environment \ Tools
### Download for UbuTricks: ###
- [ubutricks.sh][2]
--------------------------------------------------------------------------------
via: http://linux.softpedia.com/get/Desktop-Environment/Tools/UbuTricks-103626.shtml
作者:[Marius Nestor][a]
译者:[runningwater](https://github.com/runningwater)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.softpedia.com/editors/browse/marius-nestor
[1]:http://www.tuxarena.com/apps/ubutricks/
[2]:http://www.tuxarena.com/intro/files/ubutricks.sh
[3]:http://www.gnu.org/licenses/gpl-2.0.html


@ -0,0 +1,94 @@
UbuTricks Script to install the latest versions of several games and applications in Ubuntu
================================================================================
UbuTricks is a program that helps you install the latest versions of several games and applications in Ubuntu.
UbuTricks is a Zenity-based, graphical script with a simple interface. Although early in development, its aim is to create a simple, graphical way of installing updated applications in Ubuntu 14.04 and future releases.
Apps will be downloaded and installed automatically. Some will require a PPA to be added to the repositories. Others will be compiled from source if no PPA is available. The compilation process can take a long time, while installing from a PPA or DEB file should be quick, depending on your download speed.
### The install methods are as follows: ###
- PPA: the program will be downloaded and installed from a PPA
- DEB: the program will be installed from a DEB package
- Source: the program will be compiled (may take a long time)
- Script: the program will be installed using a script provided by the developer
- Archive: the program will be installed from a compressed archive
- Repository: the program will be installed from a repository (not a PPA)
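For illustration only, a dispatcher covering three of these methods might look like the sketch below. The commands are echoed rather than executed (a dry run), and the package and PPA names passed in are hypothetical; this is not UbuTricks' actual internal code:

```shell
# Dry-run dispatcher: prints the commands a given install method
# implies instead of executing them. Drop the echoes for real use.
install_app() {
    method=$1 pkg=$2 src=$3
    case $method in
        ppa)
            echo "add-apt-repository -y $src"
            echo "apt-get update"
            echo "apt-get install -y $pkg"
            ;;
        deb)
            echo "wget -O /tmp/$pkg.deb $src"
            echo "dpkg -i /tmp/$pkg.deb"
            ;;
        source)
            echo "wget -O - $src | tar xz"
            echo "cd $pkg && ./configure && make && make install"
            ;;
        *)
            echo "unsupported method: $method" >&2
            return 1
            ;;
    esac
}

# Hypothetical invocation: install "someapp" from a PPA.
install_app ppa someapp ppa:someteam/someapp
```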
### List of applications you can install ###
The latest versions of the following applications can be installed via UbuTricks:
### Games ###
- 0 A.D.
- Battle for Wesnoth (Dev)
- VCMI (Heroes III Engine)
### File Managers ###
- PCManFM
### Internet ###
- Geary
- HexChat
- QupZilla
- QuiteRSS
### Multimedia ###
- SMPlayer
- Transmageddon
- Kdenlive
- Fotoxx
- jAlbum
- GIMP
- Shutter
- Qmmp
- XBMC
### Office/Ebooks/Documents ###
- Calibre
- LibreOffice
### Tools ###
- Ubuntu Tweak
### Desktop Environments ###
- Cinnamon
### Other ###
- Google Earth
- Wine
### Download and install Ubuntutricks script ###
You can download the UbuTricks script from [here][1]. Once downloaded, make it executable and either double-click the script or run it from the terminal.
### Screenshots ###
![](http://www.ubuntugeek.com/wp-content/uploads/2014/10/116.png)
![](http://www.ubuntugeek.com/wp-content/uploads/2014/10/213.png)
![](http://www.ubuntugeek.com/wp-content/uploads/2014/10/35.png)
![](http://www.ubuntugeek.com/wp-content/uploads/2014/10/45.png)
--------------------------------------------------------------------------------
via: http://www.ubuntugeek.com/ubutricks-script-to-install-the-latest-versions-of-several-games-and-applications-in-ubuntu.html
作者:[ruchi][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.ubuntugeek.com/author/ubuntufix
[1]:http://www.tuxarena.com/intro/files/ubutricks.sh


@ -0,0 +1,52 @@
6 Minesweeper Clones for Linux
================================================================================
### GNOME Mines ###
This is the GNOME Minesweeper clone, allowing you to choose from three different pre-defined table sizes (8×8, 16×16, 30×16) or a custom number of rows and columns. It can be run in fullscreen mode and comes with highscores, elapsed time and hints. The game can be paused and resumed.
![](http://www.tuxarena.com/wp-content/uploads/2014/10/gnome-mines1.jpg)
### ace-minesweeper ###
This is part of a package that contains some other games too, like ace-freecel, ace-solitaire or ace-spider. It has a graphical interface featuring Tux, but doesn't seem to come with different table sizes. The package is called ace-of-penguins in Ubuntu.
![](http://www.tuxarena.com/wp-content/uploads/2014/10/ace-minesweeper.jpg)
### XBomb ###
XBomb is a mines game for the X Window System with three different table sizes and tiles which can take different shapes: hexagonal, rectangular (traditional) or triangular. Unfortunately the current version in Ubuntu 14.04 crashes with a segmentation fault, so you may need to install another version to make it work.
[Homepage][1]
![](http://www.tuxarena.com/wp-content/uploads/2014/10/xbomb.png)
([Image credit][1])
### KMines ###
KMines is the KDE Minesweeper game, and just like GNOME Mines, there are three built-in table sizes (easy, medium, hard) plus a custom one, with support for themes and highscores.
![](http://www.tuxarena.com/wp-content/uploads/2014/10/kmines.jpg)
### freesweep ###
Freesweep is a Minesweeper clone for the terminal which allows you to configure settings such as table rows and columns, percentage of bombs, colors and also has a highscores table.
![](http://www.tuxarena.com/wp-content/uploads/2014/10/freesweep.jpg)
### xdemineur ###
Another graphical Minesweeper clone for X, Xdemineur is very much like ace-minesweeper, with one predefined table size.
![](http://www.tuxarena.com/wp-content/uploads/2014/10/xdemineur.jpg)
--------------------------------------------------------------------------------
via: http://www.tuxarena.com/2014/10/6-minesweeper-clones-for-linux/
作者Craciun Dan
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.gedanken.org.uk/software/xbomb/


@ -0,0 +1,45 @@
barney-ro translating
Claws Mail 3.11.0 Brings a Ton of Changes and Fixes for POODLE Exploit
================================================================================
> This email client is getting better with each new release
![](http://i1-news.softpedia-static.com/images/news2/Claws-Mail-3-11-0-Brings-a-Ton-of-Changes-and-Fix-for-POODLE-Exploit-462808-2.jpg)
**Claws Mail is an open source email client that is fast, easy to use, and full of interesting features, and it is gaining traction in the Linux community. The developers have pushed another big update for this application, and upgrading would be a very good idea.**
Some users might not know this, but Claws Mail is actually a very old application. It's been around for more than 13 years, but it operated under the name of Sylpheed-Claws. It was forked a while back, and since then, the new project has managed to become a much better alternative.
There are quite a few email clients for Linux right now and they are all fighting for supremacy, although it's been done in a very polite fashion. Claws Mail is now being integrated in quite a few Linux distros by default, and this is actually very good news for fans of the app.
### What are some of the features in Claws Mail 3.11.0 ###
Just like any other application that deals with online connections and online protocols, Claws Mail has also been affected by the recent vulnerabilities identified in the community, like POODLE for example. The developers had a very busy schedule and the changelog reflects this entirely.
"A new version of the RSSyl plugin, completely redesigned and rewritten. Migration from the previous version is automatic, it has a new storage format in ~/.claws-mail/RSSyl/ (hierarchical directories instead of flat file format). It uses the expat library instead of libxml2 for parsing feed data."
"Due to popular demand, use of the Up key in the message body in the Compose window stops at the top of the message body and does not continue up to the header fields. This reverts the behavior introduced in version 3.10.0," say the developers in the [announcement][1].
Also, users can relax now because the POODLE vulnerability has been closed in the email client, a feat achieved by disabling all SSLv3 connections.
TAB address completion has also received some improvements and should work much better, especially for new messages; navigation with the arrow keys has been tweaked, and numerous smaller enhancements have been made.
The developers have repositories for most of the major distributions out there, but you can always install it from source. You can check the official announcement for more details about this release.
Download the Claws Mail 3.11.0 source package if you want to compile the software yourself.
- [Claws Mail 3.11.0 tar.xz File size: 5.6 MB][2]
--------------------------------------------------------------------------------
via: http://news.softpedia.com/news/Claws-Mail-3-11-0-Brings-a-Ton-of-Changes-and-Fix-for-POODLE-Exploit-462808.shtml
作者:[Silviu Stahie][a]
译者:[barney-ro](https://github.com/barney-ro)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://news.softpedia.com/editors/browse/silviu-stahie
[1]:http://www.claws-mail.org/news.php
[2]:http://sourceforge.net/projects/claws-mail/files/Claws%20Mail/3.11.0/claws-mail-3.11.0.tar.xz
@ -1,96 +0,0 @@
Translating by ZTinoZ
7 Improvements The Linux Desktop Needs
======================================
In the last fifteen years, the Linux desktop has gone from a collection of marginally adequate solutions to an unparalleled source of innovation and choice. Many of its standard features are either unavailable in Windows, or else available only as a proprietary extension. As a result, using Linux is increasingly not only a matter of principle, but of preference as well.
Yet, despite this progress, gaps remain. Some are missing features, others incomplete implementations, and still others pie-in-the-sky extras that could easily be implemented to extend the desktop metaphor without straining users' tolerance of change.
For instance, here are 7 improvements that would benefit the Linux desktop:
### 7. Easy Email Encryption
These days, every email reader from Alpine to Thunderbird and Kmail includes email encryption. However, documentation is often either non-existent or poor.
But, even if you understand the theory, the practice is difficult. Controls are generally scattered throughout the configuration menus and tabs, requiring a thorough search for all the settings that you require or want. Should you fail to set up encryption properly, usually you receive no feedback about why.
The closest to an easy process is [Enigmail][1], a Thunderbird extension that includes a setup wizard aimed at beginners. But you have to know about Enigmail to use it, and the menu it adds to the composition window buries the encryption option one level down and places it with other options guaranteed to mystify everyday users.
No matter what the desktop, the assumption is that, if you want encrypted email, you already understand it. Today, though, the constant media references to security and privacy have ensured that such an assumption no longer applies.
### 6. Thumbnails for Virtual Workspaces
Virtual workspaces offer more desktop space without requiring additional monitors. Yet, despite their usefulness, management of virtual workspaces hasn't changed in over a decade. On most desktops, you control them through a pager in which each workspace is represented by an unadorned rectangle that gives few indications of what might be on it except for its name or number -- or, in the case of Ubuntu's Unity, which workspace is currently active.
True, GNOME and Cinnamon do offer better views, but the usefulness of these views is limited by the fact that they require a change of screens. Nor is KDE's written list of workspace contents much help, as it is jarring in a primarily graphics-oriented desktop.
A less distracting solution might be mouseover thumbnails large enough for those with normal vision to see exactly what is on each workspace.
### 5. A Workable Menu
The modern desktop long ago outgrew the classic menu with its sub-menus cascading across the screen. Today, the average computer simply has too many applications to fit comfortably into such a format.
The trouble is, neither of the major alternatives is as convenient as the classic menu. Confining the menu into a single window is less than ideal, because you either have to endure truncated sub-menus or else continually resize the window with the mouse.
Yet the alternative of a full-screen menu is even worse. It means changing screens before you even begin to work, and relying on a search field that is only useful if you already know what applications are available -- in which case you are almost better off launching from the command line.
Frankly, I don't know what the solution might be. Maybe spinner racks, like those in OS X? All I can say for certain is that all alternatives for a modern menu make a carefully constructed set of icons on the desktop seem a more reasonable alternative.
### 4. A Professional, Affordable Video Editor
Over the years, Linux has slowly filled the gaps in productivity software. However, one category in which it is still lacking is in reasonably priced software for editing videos.
The problem is not that such software is non-existent. After all, [Maya][2] is one of the industry standards for animation. The problem is that the software costs several thousand dollars.
At the opposite end of the spectrum are apps like Pitivi or Blender, whose functionality -- despite brave efforts by their developers -- remains basic. Progress happens, but far more slowly than anyone hopes for.
Although I have heard of indie directors using native Linux video editors, the reason I have heard of their efforts is usually because of their complaints. Others prefer to minimize the struggle and edit on other operating systems instead.
### 3. A Document Processor
At one extreme are users whose need for word processing is satisfied by Google Docs. At the other extreme are layout experts for whom Scribus is the only feasible app.
In-between are those like publishers and technical writers who produce long, text-oriented documents. This category of users is served by [Adobe FrameMaker][3] on Windows, and to some extent by LibreOffice Writer on Linux.
Unfortunately, these users are apparently not a priority in LibreOffice, Calligra Words, AbiWord, or any other office suite. Features that would provide for these users include:
- separate bibliographic databases for each file
- tables that are treated like styles in the same way that paragraphs and characters are
- page styles with persistent content other than headers or footers that would appear each time the style is used
- storable formats for cross-references, so that the structure doesn't need to be recreated manually each time that it is needed
Whether LibreOffice or another application provides these features matters less than whether they are available at all. Without them, the Linux desktop is an imperfect place for a large class of potential users.
### 2. Color-Coded Title Bars
Browser extensions have taught me how useful color-coded tabs can be for workspaces. The titles of open tabs disappear when more than eight or nine are open, so the color is often the quickest visual guide to the relation between tabs.
The same system could be just as useful on the desktop. Better yet, the color coding might be preserved between sessions, allowing users to open all the apps needed for a specific task at the same time. So far, I know of no desktop with such a feature.
### 1. Icon Fences
For years, Stardock Systems has been selling a Windows extension called [Fences][4], which lets icons be grouped. You can name each group and move the icons in it together. In addition, you can assign which fence different types of files are automatically added to, and hide and arrange fences as needed.
In other words, fences automate the sort of arrangements that users make on their desktop all the time. Yet aside from one or two minor functions they share with KDE's Folder Views, fences remain completely unknown on Linux desktops. Perhaps the reason is that designers are focused on mobile devices as the source of ideas, and fences are decidedly a feature of the traditional workstation desktop.
### Personalized Lists
As I made this list, what struck me was how few of the improvements were general. Several of these improvements would appeal largely to specific audiences, and only one even implies the porting of a proprietary application. At least one is cosmetic rather than functional.
What this observation suggests is that, for the general user, Linux has very little left to add. As an all-purpose desktop, Linux arrived some years ago, and has been diversifying ever since, until today users can choose from over half a dozen major desktops.
None of that means, of course, that specialists wouldn't have other suggestions. In addition, changing needs can make improvements desirable that nobody once cared about. But it does mean that many items on a list of desirable improvements will be highly personal.
All of which raises the question: what other improvements do you think would benefit the desktop?
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/7-improvements-the-linux-desktop-needs-1.html
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://addons.mozilla.org/en-US/thunderbird/addon/enigmail/
[2]:http://en.wikipedia.org/wiki/Autodesk_Maya
[3]:http://www.adobe.com/products/framemaker.html
[4]:http://www.stardock.com/products/fences/
@ -1,48 +0,0 @@
Red Hat designs RHEL for a decade-long run
================================================================================
> The newly released RHEL 7 includes Docker containers and the new terabyte-scaled XFS file system
IDG News Service - Knowing how system administrators enjoy continuity, Red Hat has designed the latest release of its flagship Linux distribution to be run, with support, until 2024.
Red Hat Enterprise Linux 7 (RHEL 7), the completed version of which was shipped Tuesday, also features a number of new technologies that the company sees as instrumental for the next decade, including the Docker Linux Container system and the advanced XFS file system.
"XFS opens the door for a new class of business analytics, big data and data analytics," said Mark Coggin, Red Hat senior director of product marketing.
The last major update to RHEL, RHEL 6, was released in November 2010. Since then, server software has been used in an increasingly wide variety of operational scenarios, including providing the basis for bare metal servers, virtual machines, IaaS (infrastructure-as-a-service) and PaaS (platform-as-a-service) cloud packages.
Red Hat will support RHEL 7 with bug fixes and commercial support for up to 10 years. The company generally releases a major version of RHEL every three years.
In contrast, Canonical's Ubuntu LTS (long-term support) distributions are supported for five years. Suse Enterprise Linux [is also supported][1], in most aspects, for up to 10 years.
This is the first edition to include Docker, a container technology [that could act as a nimbler replacement][2] for full virtual machines used in cloud operations. Docker provides a way to package an application in a virtual container so that it can be run across different Linux servers.
Red Hat expects that containers will be widely deployed over the next few years as a way to package and run applications, thanks to their portable nature.
"Customers have told us they are looking for a lighter weight version of developing applications. The applications themselves don't need a full operating system or a virtual machine," Coggin said. The system calls are answered by the server's OS and the container includes only the necessary support libraries and the application. "We only put into that container what we need," he said.
Containers are also easier to maintain because users don't have to worry about updating or patching the full OS within a virtual machine, Coggin said.
Red Hat is also planning a special stripped-down release of RHEL, now code-named RHEL Atomic, which will be a distribution for just running containers. Containers that run on the regular RHEL can easily be transferred to RHEL Atomic, once that OS is available. They will also run on Red Hat OpenShift PaaS.
Red Hat is also supporting Docker through its switch in RHEL 7 to the systemd process manager, replacing Linux's long used init process manager. Systemd "gives the administrator a lot of additional flexibility in managing the underlying processes inside of RHEL. It also has a tie back to the container initiative and is very integral to the way the processes are stood up and managed in containers," Coggin said.
Red Hat has switched the default file system in RHEL 7 to XFS, which is able to keep track of up to 500TB on a single partition. The previous default file system, ext4, was only able to support 50TB. Ext4 is still available as an option, as well as a number of other file systems such as GFS2 and Btrfs (under technology preview).
Red Hat has added greater interoperability with the Microsoft Windows environment. Organizations can now use Microsoft Active Directory to securely authenticate users on Red Hat systems. Tools are also included in RHEL 7 to offer Red Hat credentials for Windows servers.
"Customers have thousands of Windows servers and thousands of RHEL servers, and they need ways to integrate the two," Coggin said.
The installation process has been sped up as well, thanks to an update to the Anaconda installer, which now allows administrators to preselect server configurations at the start of the installation process. RHEL 7 also includes the industry-standard OpenLMI (Open Linux Management Infrastructure), which allows the administrator to manage services at a granular level through a standardized API (application programming interface).
"OpenLMI is another important way of improving stability and efficiency by helping to manage systems better," Coggin said.
--------------------------------------------------------------------------------
via: http://www.computerworld.com/s/article/9248988/Red_Hat_designs_RHEL_for_a_decade_long_run?taxonomyId=122
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://www.suse.com/support/policy.html
[2]:http://www.infoworld.com/d/virtualization/docker-all-geared-the-enterprise-244020
@ -1,99 +0,0 @@
What will your business look like in 2030?
================================================================================
![](http://cdn1.tnwcdn.com/wp-content/blogs.dir/1/files/2014/06/business-man-roof-deck-798x310.jpg)
Ilya Pozin is a serial entrepreneur, writer, and investor. He is the founder of online video entertainment platform [Pluto.TV][1], social greeting card company [Open Me][2], and digital marketing agency [Ciplex][3].
The year is 2030, and you're walking into the front doors of your company. What will it look like, what functions will your employees be performing and how will you stack up against the competition?
You might not be considering the future, but remember that [25 years ago][4], only 15 percent of US households had a personal computer. While 73 percent of online adults currently have a social media account, social media barely existed 15 years ago.
Technology is always changing, and with it come disruptions to industries, companies and the employment marketplace. The future is closing in, but is your company ready?
### Why should you be worried? ###
In business, to stop moving forward means your company is stagnating; for many companies, stagnation equates to eventual death. Companies clinging to outmoded and outdated business practices eventually run into major problems. There are examples everywhere in the marketplace, from struggling BlackBerry phones to Kodak slowly shuttering its film business.
According to [futurist and TED talk speaker Thomas Frey][5], two billion jobs will disappear by 2030 thanks to shifting technologies and changing needs. You can't afford to be behind the pack when the future comes calling.
### What will 2030 look like? ###
![](http://cdn1.tnwcdn.com/wp-content/blogs.dir/1/files/2014/05/calendar.jpg)
Recently, the [Canadian Scholarship Trust][6], as part of its Inspired Minds campaign, [put together a list of the jobs][7] we might all be hiring for in 2030. These jobs range from “Company Culture Ambassador” to (get this!) “Nostalgist.”
Taking CST's lead, I spoke to some entrepreneurs and innovators in different fields, from medicine to marketing, to see their predictions for how businesses will be run in the future. Hopping in our time travel machine, here's a glimpse at what 2030 might look like:
### Cloud-based ###
“Everything will be cloud-based with faster speeds,” said Marjorie Adams, [AQB][8] CEO and President. “The technologies coming out now will be better defined and connected. While innovation from the business side could be a lot slower-going than the consumer side, we will have a lot more data to understand real needs.”
### Automated ###
Google is already leading the way with the self-driving car, but automation might creep into other aspects of our lives in the future.
“Home automation will be very different in 2030,” said Andrew Thomas, co-founder of [SkyBell Technologies, Inc][9]. “We'll all have brain-sensing headbands and glasses and we'll just think about locking the door or turning off the lights. Our fridge will email the store when we're low on food and our self-driving cars will go pick up the groceries for us!”
### Human curated ###
As more and more options become available to consumers, we'll all become overwhelmed by choice. Human curation will come back into vogue for everything from music to online video.
We're already seeing the trend start now with [Apple's acquisition][10] of human curated music service Beats. After all, do you really think apps are [smarter than you][11]?
### Socially-connected ###
If you can't watch the latest episode of Scandal or Game of Thrones, it's common sense to stay off your Facebook and Twitter feeds.
“Imagine a media environment 15 years into the future where no object or entertainment venue is out of reach for second-screen integration with social media,” said Jared Feldman, CEO and founder of [Mashwork][12]. “Social platforms like Facebook and Twitter might as well be agnostic at this point in time since consumers will have aggregated all of their digital social life into consolidated user profiles designed to curate multiple feeds and allow for single-source user engagement.”
### Targeted ###
Already, advertising is becoming more and more targeted to consumers' needs thanks to big data and algorithms. Don't expect this trend to move backwards, at least according to [FlexOne][13] CEO Matthijs Keij.
“Advertisers will know more about you than you yourself. Which products you like, how to improve your personal and work life, and even how to be more healthy. Sounds a little like Huxley's Brave New World? Maybe… but consumers might actually like it.”
### How do your prepare? ###
![](http://cdn1.tnwcdn.com/wp-content/blogs.dir/1/files/2011/01/Crystal-Ball-12-27-09-iStock_000003107697XSmall.jpg)
Preparing for the future might seem impossible, but you don't need a crystal ball to keep abreast of changes. It's important to always keep up with trends and emerging technology, both in the economy in general and within your industry in particular.
Go to conferences, attend industry talks, and make time for industry trade shows. Pay attention to the new technology entering your sector, and don't turn your nose up at something new just because it's different than the way things have always been.
Understand your customers and know what they need, because the future is looking more consumer-focused than ever before, even in segments like healthcare. “The paradigm is shifting to a more 'consumer-centric' model,” said Robert Grajewski, CEO of [Edison Nation Medical][14]. “Healthcare as a whole will shift to this individual care focus.”
Companies that understand their core competencies and their consumer needs will have a leg up on the competition.
As more digital natives come of age and flock into the economy, some highly skilled fields will see consumers picking up additional skills.
“By 2030 virtually everyone will be a designer, equipped with knowledge of the hottest mega trends and ripe and ready to replace those who can't keep up with the latest software,” said Ashley Mady, CEO of [Brandberry][15].
“The best way to prepare for this inevitable shift in the design world is to focus on creative, big picture thinking over production, which will soon become a commodity. Designers should remain innovative by developing their own adaptable brands and technology that will grow alongside the quickly evolving world we live in.”
Finally, it's important to be open, curious, and willing to pivot. New technologies are going to come along to improve, and sometimes complicate, your business. You need to be willing to embrace these new paradigms, or you risk your company becoming obsolete.
What do you think? How do you plan to prepare for the future? Share in the comments!
--------------------------------------------------------------------------------
via: http://thenextweb.com/entrepreneur/2014/06/18/will-business-look-like-2030/?utm_campaign=share%20button&utm_content=What%20will%20your%20business%20look%20like%20in%202030?&awesm=tnw.to_q3L0P&utm_source=copypaste&utm_medium=referral
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://pluto.tv/
[2]:http://www.openme.com/
[3]:http://www.ciplex.com/
[4]:http://www.cnbc.com/id/101611509
[5]:http://www.futuristspeaker.com/2012/02/2-billion-jobs-to-disappear-by-2030/
[6]:http://www.cst.org/
[7]:http://careers2030.cst.org/jobs/
[8]:http://www.aqb.com/
[9]:http://www.skybell.com/
[10]:http://thenextweb.com/apple/2014/05/28/apple-confirms-acquisition-beats/
[11]:http://thenextweb.com/apps/2013/10/19/i-let-apps-tell-me-how-to-live-for-a-day/
[12]:http://mashwork.com/
[13]:http://www.flxone.com/
[14]:http://www.edisonnationmedical.com/
[15]:http://www.brandberry.com/
@ -1,120 +0,0 @@
Linux Poetry Explains the Kernel, Line By Line
================================================================================
> Editor's Note: Feeling inspired? Send your Linux poem to [editors@linux.com][1] for your chance to win a free pass to [LinuxCon North America][2] in Chicago, Aug. 20-22. Be sure to include your name, contact information and a brief explanation of your poem. We'll draw one winner at random from all eligible entries each week through Aug. 1, 2014.
![Software developer Morgan Phillips is teaching herself how the Linux kernel works by writing poetry.](http://www.linux.com/images/stories/41373/Morgan-Phillips-2.jpg)
Software developer Morgan Phillips is teaching herself how the Linux kernel works by writing poetry.
Writing poems about the Linux kernel has been enlightening in more ways than one for software developer Morgan Phillips.
Over the past few months she's begun to teach herself how the Linux kernel works by studying text books, including [Understanding the Linux Kernel][3], Unix Network Programming, and The Unix Programming Environment. But instead of taking notes, she weaves the new terminology and ideas she learns into poetry about system architecture and programming concepts. (See some examples, below, and on her [Linux Poetry blog][4].)
It's a “pedagogical hack” she adopted in college and took up again a few years ago when she first landed a job as a data warehouse engineer at Facebook and needed to quickly learn Hadoop.
“I could remember bits and pieces of information but it was too rote, too rigid in my mind, so I started writing poems,” she said. “It forced me to wrap all of these bits of information into context and helped me learn things much more effectively.”
The Linux kernel's history, architecture, abundant terminology and complex concepts, are rich fodder for her poetry.
“I could probably write thousands of poems about just one subsystem in the kernel,” she said.
### Why learn Linux? ###
![Phillips publishes on her Linux Poetry blog.](http://www.linux.com/images/stories/41373/multiplexing-poem.png)
Phillips publishes on her Linux Poetry blog.
Phillips started her software career through a somewhat unconventional route as a physics major in a research laboratory. Instead of writing journal articles she was writing Python scripts to parse research project data on active galactic nuclei. She never learned the fundamentals of computer science (CS), but picked up the information on the job, as the need arose.
She soon got a job doing network security research for the Army Research Laboratory in Adelphi, Maryland, working with Linux. That was her first foray into the networking stack and the lower levels of the operating system.
Most recently she worked at Facebook until about six months ago when she moved from the Silicon Valley back to Nashville, near her home state of Kentucky, to work for a software startup that helps major record labels manage their business.
“I have all this experience but I suffer from a thing that almost every person who doesn't have an actual background in CS does: I have islands of knowledge with big gaps in between,” she said. “Every time I'd come across some concept, some data structure in the kernel, I'd have to go educate myself on it.”
A few weeks ago her frustration peaked. She was trying to do a form of message passing between web application processes and a web socket server she had written and found herself having to brush up on all the ways she could do interprocess communication.
“I was like, that's it. I'm going to start really learning everything I should have known starting at the bottom up with the Linux kernel,” she said. “So I bought some textbooks and started reading.”
![](http://www.linux.com/images/stories/41373/process-poem.png)
### What she's learned ###
Over the course of a few months of reading books and writing poems she's learned about how the virtual memory subsystem works. She's learned about the data structures that hold process information, about the virtual memory layout and how pages are mapped into memory, and about memory management.
“I hadn't thought about a lot of things, like that a system that's multiprocessing shouldn't bother with semaphores,” she said. “Spin locks are often more efficient.”
Writing poems has also given her insight into her own way of thinking about the world. In some small way she is communicating not just her knowledge of Linux systems, but also the way that she conceptualizes them.
“It's a deep look into my mind,” she said. “Poetry is the best way to share these abstract ideas and things that we can't possibly truly share with other people.”
### Writing a Linux poem ###
The inspiration for her Linux poems starts with reading a textbook chapter. She hones the topics down to the key concepts that she wants to remember and what others might find interesting, as well as things she can “wrap a conceptual bubble around.”
A concept like demand paging is too broad to fit into a single poem, for example. “So I'm working my way down deeper in it,” she said. “Instead I'm looking at writing a poem about the actual data structure where process memory is laid out and then mapped into a page map.”
She hasn't had any formal training writing poetry, but writes the lines so that they are visually appealing and have a nice rhythm when they're read aloud.
In her poem, “The Reentrant Kernel,” Phillips writes about an important property in software that allows a function to be paused and restarted later with the same result. System calls need to have this reentrant property in order to make the scheduler run as efficiently as possible, Phillips explains. The poem also includes a program, written in C style pseudocode, to help illustrate the concept.
Phillips hopes her Linux poetry helps her increase her understanding enough to start contributing to the Linux kernel.
“I've been very intimidated for a long time by the idea of submitting a patch to the kernel, being a kernel hacker,” she said. “To me that's the pinnacle of success.
“My ultimate dream is that I can gain a good enough understanding of the kernel and C to submit a patch and have it accepted.”
The Reentrant Kernel
A reentrant function,
if interrupted,
will return a result,
which is not perturbed.
int global_int;
int is_not_reentrant(int x) {
int x = x;
return global_int + x; },
depends on a global variable,
which may change during execution.
int global_int;
int is_reentrant(int x) {
int saved = global_int;
return saved + x; },
mitigates external dependency,
it is reentrant, though not thread safe.
UNIX kernels are reentrant,
a process may be interrupted while in kernel mode,
so that, for instance, time is not wasted,
waiting on devices.
Process alpha requests to read from a device,
the kernel obliges,
CPU switches into kernel mode,
system call begins execution.
Process alpha is waiting for data,
it yields to the scheduler,
process beta writes to a file,
the device signals that data is available.
Context switches,
process alpha continues execution,
data is fetched,
CPU enters user mode.
Note: when publishing, keep the original layout for the poem text above (first line emphasized, all lines centered).
--------------------------------------------------------------------------------
via: http://www.linux.com/news/featured-blogs/200-libby-clark/777473-linux-poetry-explains-the-kernel-line-by-line/
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:mailto:editors@linux.com
[2]:http://events.linuxfoundation.org/events/linuxcon-north-america
[3]:http://shop.oreilly.com/product/9780596005658.do
[4]:http://www.linux-poetry.com/

View File

@ -1,5 +1,3 @@
CNprober translating...
Linux Administration: A Smart Career Choice
================================================================================
![](http://www.opensourceforu.com/wp-content/uploads/2014/04/linux.jpeg)

View File

@ -1,46 +0,0 @@
The People Who Support Linux: Hacking on Linux Since Age 16
================================================================================
![](http://www.linux.com/images/stories/41373/Yitao-Li.png)
Pretty much all of the projects in software developer [Yitao Li's GitHub repository][1] were developed on his Linux machine. None of them are necessarily Linux-specific, he says, but he uses Linux for “everything.”
For example: “coding / scripting, web browsing, web hosting, anything cloud-related, sending / receiving PGP signed emails, tweaking IP table rules, flashing OpenWrt image into routers, running one version of Linux kernel while compiling another version, doing research, doing homework (e.g., typing math equations in Tex), and many others...” Li said via email.
Of all the projects in his repository his favorite is a school project developed in C++ with libpthread and libfuse to understand and correctly implement PAXOS-based distributed locking, key-value service, and eventually a distributed filesystem. He tested it using a number of test scripts on both single-core and multi-core machines.
“One can learn something about distributed consensus protocol by implementing the PAXOS protocol correctly (or at least mostly correctly) such that the implementation will pass all the tests,” he said. “And of course once that is accomplished, one can also earn some bragging rights. Besides, a distributed filesystem can be useful in many other programming projects.”
Li first started using Linux at age 16, or about 7.47 years ago, he says, via the website [linuxfromscratch.org][2], with numerous hints from the free, downloadable Linux From Scratch book. Why?
“1. Linux is very hacker-friendly and I do not see any reason for not using it,” he writes. “2. The prefrontal cortex of the brain becoming well-developed at age 16 (?).”
[![](http://www.linux.com/images/stories/41373/ldc_peop_linux.png)][3]
He now works for eBay, mostly coding in Java but working sometimes with Hadoop, Pig, Zookeeper, Cassandra, MongoDB, and other software that requires a POSIX-compliant platform. He supports the Linux community by contributing to Wikipedia pages and forums on Linux-related subjects. And by becoming an individual member of The Linux Foundation.
He keeps up with the latest Linux developments and has recently been impressed by the new "-fstack-protector-strong" option for GCC 4.9 and later.
“It's not directly related to any of my projects, but it was important for both security and performance reasons,” he said. “It's much more efficient than "-fstack-protector-all" with little impact on security, while providing better stack-overflow protection coverage compared to that of the "-fstack-protector" option.”
Welcome to the Linux Foundation, Yitao!
Learn more about becoming an [individual member of The Linux Foundation][3]. The foundation will donate $25 to Code.org for every new individual member who joins during June.
----------
![](http://www.linux.com/community/forums/avatar/41373/catid/200-libby-clark/thumbnail/large/cache/1331753338)
[Libby Clark][4]
--------------------------------------------------------------------------------
via: http://www.linux.com/news/featured-blogs/200-libby-clark/778559-the-people-who-support-linux-hacking-on-linux-since-age-16
译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://github.com/yl790
[2]:http://linuxfromscratch.org/
[3]:https://www.linuxfoundation.org/about/join/individual
[4]:http://www.linux.com/community/forums/person/41373/catid/200-libby-clark

View File

@ -1,4 +1,3 @@
Love-xuan 翻译中
Don't Fear The Command Line
================================================================================
![](http://a4.files.readwrite.com/image/upload/c_fill,h_900,q_70,w_1600/MTE5NTU2MzIyNTM0NTg5OTYz.jpg)

View File

@ -1,51 +0,0 @@
Will Linux ever be able to give consumers what they want?
================================================================================
> Jack Wallen offers up the novel idea that giving the consumers what they want might well be the key to boundless success.
![](http://tr2.cbsistatic.com/hub/i/r/2014/08/14/ce90a81e-d17b-4b8f-bd5b-053120e305e6/resize/620x485/f5f9e0798798172d4e41edbedeb6b7e5/whattheyneedhero.png)
In the world of consumer electronics, if you don't give the buyer what they want, they'll go elsewhere. We've recently witnessed this with the Firefox browser. The consumer wanted a faster, less-bloated piece of software, and the developers went in the other direction. In the end, the users migrated to Chrome or Chromium.
Linux needs to gaze deep into their crystal ball, watch carefully the final fallout of that browser war, and heed this bit of advice:
If you don't give them what they want, they'll leave.
Another great illustration of this backfiring is Windows 8. The consumer didn't want that interface. Microsoft, however, wanted it because it was necessary to begin the drive to all things Surface. This same scenario could have been applied to Canonical and Ubuntu Unity -- however, their goal wasn't geared singularly and specifically towards tablets (so, the interface was still highly functional and intuitive on the desktop).
For the longest time, it seemed like Linux developers and designers were gearing everything they did toward themselves. They took the "eat your own dog food" mantra too far. In that, they forgot one very important thing:
Without new users, their "base" would only ever belong to them.
In other words, the choir had not only been preached to, it was the one doing the preaching. Let me give you three examples to hit this point home.
- For years, Linux has needed an equivalent of Active Directory. I would love to hand that title over to LDAP, but have you honestly tried to work with LDAP? It's a nightmare. Developers have tried to make LDAP easy, but none have succeeded. It amazes me that a platform that has thrived in multi-user situations still has nothing that can go toe-to-toe with AD. A team of developers needs to step up, start from scratch, and create the open-source equivalent to AD. This would be such a boon to mid-size companies looking to migrate away from Microsoft products. But until this product is created, the migration won't happen.
- Another Microsoft-driven need: Exchange/Outlook. Yes, I realize that many are going to the cloud. But the truth is that medium- to large-scale businesses will continue relying on the Exchange/Outlook combo until something better comes along. This could very well happen within the open-source community. One piece of this puzzle is already there (though it needs some work): the groupware client, Evolution. If someone could take, say, a fork of Zimbra and re-tool it in such a way that it would work with Evolution (and even Thunderbird) to serve as a drop-in replacement for Exchange, the game would change, and the trickle-down to consumers would be massive.
- Cheap, cheap, cheap. This one is a hard pill for most to swallow, but consumers (and businesses) want cheap. Look at the Chromebook sales over the last year. Now, do a search for a Linux laptop and see if you can find one for under $700.00 (USD). For a third of that cost, you can get a Chromebook (a platform running the Linux kernel) that will serve you well. But because Linux is still such a niche market, it's hard to get the cost down. A company like Red Hat could change that. They already have the server hardware in place. Why not crank out a bunch of low-cost, mid-range laptops that work in similar fashion to the Chromebook but run a full-blown Linux environment? (see "[Is the Cloudbook the future of Linux?][1]") The key is that these devices must be low-cost and meet the needs of the average consumer. Stop thinking with your gamer/developer hat on and remember what the average user really needs: a web browser and not much more. That's why the Chromebook is succeeding so handily. Google knew exactly what the consumer wanted, and they delivered. On the Linux front, companies still think the only way to attract buyers is to crank out high-end, expensive Linux hardware. There's a touch of irony there, considering one of the most-often shouted battle cries is that Linux runs on slower, older hardware.
Finally, Linux needs to take a page from the good ol' Book Of Jobs and figure out how to convince the consumer that what they truly need is Linux. In their businesses and in their homes -- everyone can benefit from using Linux. Honestly, how can the open-source community not pull that off? Linux already has the perfect built-in buzzwords: Stability, reliability, security, cloud, free -- plus Linux is already in the hands of an overwhelming amount of users (they just don't know it). It's now time to let them know. If you use Android or Chromebooks, you use (in one form or another) Linux.
Knowing just what the consumer wants has always been a bit of a stumbling block for the Linux community. And I get that -- so much of the development of Linux happens because a developer has a particular need. This means development is targeted to a "micro-niche." It's time, however, for the Linux development community to think globally. "What does the average user need, and how do we give it to them?" Let me offer up the most basic of primers.
The average user needs:
- Low cost
- Seamless integration with devices and services
- Intuitive and modern designs
- A 100% solid browser experience
That's pretty much it. With those four points in mind, it should be easy to take a foundation of Linux and create exactly what the user wants. Google did it... certainly the Linux community can build on what Google has done and create something even better. Mix that in with AD integration, give it an Exchange/Outlook or cloud-based groupware set of tools, and something very special will happen -- people will buy it.
Do you think the Linux community will ever be able to give the consumer what they want? Share your opinion in the discussion thread below.
--------------------------------------------------------------------------------
via: http://www.techrepublic.com/article/will-linux-ever-be-able-to-give-consumers-what-they-want/
作者:[Jack Wallen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.techrepublic.com/search/?a=jack+wallen
[1]:http://www.techrepublic.com/article/is-the-cloudbook-the-future-of-linux/

View File

@ -1,109 +0,0 @@
Vic020
Want To Start An Open Source Project? Here's How
================================================================================
> Our step-by-step guide.
**You have a problem. You've weighed the** [pros and cons of open sourcing your code][1], and you know [you need to start an open-source project][2] for your software. But you have no idea how to do this.
Oh, sure. You may know how to set up a GitHub account and get started, but such [mechanics][3] are actually the easy part of open source. The hard part is making anyone care enough to use or contribute to your project.
![](http://a4.files.readwrite.com/image/upload/c_fit,q_80,w_630/MTE5NDg0MDYxMTg2Mjk1MzEx.jpg)
Here are some principles to guide you in building and releasing code that others will care about.
### First, The Basics ###
You may choose to open source code for a variety of reasons. Perhaps you're looking to engage a community to help write your code. Perhaps, [like Known][4], you see "open source distribution ... as a multiplier for the small teams of developers writing the code in-house."
Or maybe you just think it's the right thing to do, [as the UK government believes][5].
Regardless of the reason, this isn't about you. Not really. For open source to succeed, much of the planning has to be about those who will use the software. As [I wrote in 2005][6], if you "want lots of people to contribute (bug fixes, extensions, etc.)," then you need to "write good documentation, use an accessible programming language ... [and] have a modular framework."
Oh, and you also need to be writing software that people care about.
Think about the technology you depend on every day: operating systems, web application frameworks, databases, and so on. These are far more likely to generate outside interest and contributions than a niche technology for a particular industry like aviation. The broader the application of the technology, the more likely you are to find willing contributors and/or users.
In summary, any successful open-source project needs these things:
1. Optimal market timing (solving a real need in the market);
2. A strong, inclusive team of developers and non-developers;
3. An architecture of participation (more on that below);
4. Modular code to make it easier for new contributors to find a discrete chunk of the program to work on, rather than forcing them to scale an Everest of monolithic code;
5. Code that is broadly applicable (or a way to reach the narrower population more niche-y code appeals to);
6. Great initial source code (if you put garbage into GitHub, you'll get garbage out);
7. A permissive license—I [personally prefer Apache-style licensing][7] as it introduces the lowest barriers to developer adoption, but many successful projects (like Linux and MySQL) have used GPL licensing to great effect.
Of the items above, it's sometimes hardest for projects to actively invite participation. That's usually because this is less about code and more about people.
### "Open" Is More Than A License ###
One of the best things I've read in years on this subject comes from Vitorio Miliano ([@vitor_io][8]), a user experience and interaction designer from Austin, Texas. [Miliano points out][9] that anyone who doesn't already work on your project is a "layperson," in the sense that no matter their level of technical competence, they know little about your code.
So your job, he argues, is to make it easy to get involved in contributing to your code base. While he focuses on how to involve non-programmers in open-source projects, he identifies a few things project leads need to do to effectively involve anyone—technical or non-technical—in open source:
> 1. a way to understand the value of your project
>
> 2. a way to understand the value they could provide to the project
>
> 3. a way to understand the value they could receive from contributing to the project
>
> 4. a way to understand the contribution process, end-to-end
>
> 5. a contribution mechanism suitable for their existing workflows
Too often, project leads want to focus on the fifth step without providing an easy path to understand items 1 through 4. "How" to contribute doesn't matter very much if would-be contributors don't appreciate the "why."
On that note, it's critical, Miliano writes, to establish the value of the project with a "jargon-free description" so as to "demonstrate your accessibility and inclusiveness by writing your descriptions to be useful to everyone at all times." This has the added benefit, he avers, of signaling that documentation and other code-related content will be similarly clear.
On the second item, programmers and non-programmers alike need to be able to see exactly what you'd like from them, and then they need to be recognized for their contributions. Sometimes, as MongoDB solution architect [Henrik Ingo told me][10], "A smart person [may] come[] by with great code, but project members fail to understand it." That's not a terrible problem if the "in" group acknowledges the contribution and reaches out to understand.
But that doesn't always happen.
### Do You Really Want To Lead An Open Source Project? ###
Too many open-source project leads advertise inclusiveness but then are anything but inclusive. If you don't want people contributing code, don't pretend to be open source.
Yes, this is sometimes a function of newbie fatigue. As [one developer wrote][11] recently on HackerNews,
> Small projects get lots of, well, basically useless people who need tons of handholding to get anything accomplished. I see the upside for them, but I don't see the upside for me: if I where[sic] to help them out, I'd spend my limited available time on handholding people who apparently managed to get ms degrees in cs without being able to code instead of doing what I enjoy. So I ignore them.
While that may be a good way to maintain sanity, the attitude doesn't bode well for a project if it's widely shared.
And if you really couldn't care less about non-programmers contributing design input, or documentation, or whatever, then make that clear. Again, if this is the case, you really shouldn't be an open-source project.
Of course, the perception of exclusion is not always reality. As ActiveState vice president Bernard Golden told me over IM, "many would-be developers are intimidated by the perception of an existing 'in-crowd' dev group, even though it may not really be true."
Still, the more open source projects invest in making it easy to understand why developers should contribute, and make it inviting to do so, the how largely takes care of itself.
Lead image courtesy of [Shutterstock][12]
--------------------------------------------------------------------------------
via: http://readwrite.com/2014/08/20/open-source-project-how-to
作者:[Matt Asay][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://readwrite.com/author/matt-asay
[1]:http://readwrite.com/2014/07/07/open-source-software-pros-cons
[2]:http://readwrite.com/2014/08/15/open-source-software-business-zulily-erp-wall-street-journal
[3]:http://www.cocoanetics.com/2011/01/starting-an-opensource-project-on-github/
[4]:http://werd.io/2014/the-roi-of-building-open-source-software
[5]:https://www.gov.uk/design-principles
[6]:http://asay.blogspot.com/2005/09/so-you-want-to-build-open-source.html
[7]:http://www.cnet.com/news/apache-better-than-gpl-for-open-source-business/
[8]:https://twitter.com/vitor_io
[9]:http://opensourcedesign.is/blogging_about/import-designers/
[10]:https://twitter.com/h_ingo/status/501323333301190656
[11]:https://news.ycombinator.com/item?id=8122814
[12]:http://www.shutterstock.com/

View File

@ -1,89 +0,0 @@
[felixonmars translating...]
10 Open Source Cloning Software For Linux Users
================================================================================
> These cloning software take all disk data, convert them into a single .img file and you can copy it to another hard drive.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/photo/150x150x1Qn740810PM9112014.jpg.pagespeed.ic.Ch7q5vT9Yg.jpg)
Disk cloning means copying data from one hard disk to another, and you can do some of this with simple copy & paste. But that way you cannot copy hidden files and folders, or files that are in use. That's when you need cloning software, which can also save a backup image of your files and folders. Cloning software takes all the disk data, converts it into a single .img file, and lets you copy that file to another hard drive. Here are 10 of the best open source cloning tools:
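As a minimal sketch of the raw-image idea that these tools build on, the classic `dd` utility performs exactly this byte-for-byte copy. A real invocation would use a device name such as `/dev/sdX` (a placeholder); the example below uses a scratch file in place of a disk so it is safe to run anywhere:

```shell
# Create a 4 MiB scratch file to stand in for a source disk (/dev/sdX in real use)
dd if=/dev/zero of=disk.raw bs=1M count=4 2>/dev/null
# "Clone" it into a single image file, as the tools below do with whole disks
dd if=disk.raw of=backup.img bs=1M 2>/dev/null
# Verify the image is an exact byte-for-byte copy
cmp -s disk.raw backup.img && echo "clone matches source"
```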
### 1. [Clonezilla][1]: ###
Clonezilla is a live CD based on Ubuntu and Debian. It clones all your hard drive data and takes a backup, much like Norton Ghost on Windows, but more effectively. Clonezilla supports many filesystems, such as ext2, ext3, ext4, btrfs and xfs, and it also supports BIOS and UEFI boot modes as well as MBR and GPT partition tables.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x450xZ34_clonezilla-600x450.png.pagespeed.ic.8Jq7pL2dwo.png)
### 2. [Redo Backup][2]: ###
Redo Backup is another live CD tool which clones your drives easily. It is a free and open source live system licensed under GPLv3. Its main features include an easy GUI that boots from CD with no installation, restoration of Linux and Windows systems, access to files without any login, recovery of deleted files, and more.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x450x7D5_Redo-Backup-600x450.jpeg.pagespeed.ic.3QMikN07F5.jpg)
### 3. [Mondo Rescue][3]: ###
Mondo doesn't work like the other tools: instead of converting your hard drives into an .img file, it converts them into an .iso image. With Mondo you can also create a custom live CD using "mindi", a special tool developed by Mondo Rescue, so that you can clone your data from the live CD. It supports most Linux distributions and FreeBSD, and it is licensed under the GPL.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x387x3C4_MondoRescue-620x387.jpeg.pagespeed.ic.cqVh7nbMNt.jpg)
### 4. [Partimage][4]: ###
This is open-source backup software which runs natively under Linux. It is available from the package manager of most Linux distributions, and if you don't have a Linux system you can use "SystemRescueCd", a live CD which includes Partimage by default to carry out the cloning you want. Partimage is very fast at cloning hard drives.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x424xBZF_partimage-620x424.png.pagespeed.ic.ygzrogRJgE.png)
### 5. [FSArchiver][5]: ###
FSArchiver is a follow-up to Partimage, and it is again a good tool to clone hard disks. It supports cloning Ext4 partitions and NTFS partitions, basic file attributes like owner, permissions, extended attributes like those used by SELinux, basic file system attributes for all Linux file systems and so on.
### 6. [Partclone][6]: ###
Partclone is a free tool which clones and restores partitions. Written in C, it first appeared in 2007, and it supports many filesystems such as ext2, ext3, ext4, xfs, nfs, reiserfs, reiser4, hfs+ and btrfs. It is very simple to use and is licensed under the GPL.
### 7. [doClone][7]: ###
doClone is a free software project developed to clone Linux system partitions easily. It's written in C++ and supports up to 12 different filesystems. It can perform GRUB bootloader restoration and can also transfer the clone image to another computer over the LAN. It also supports live cloning, which means you can clone from a system even while it is running.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x396x2A6_doClone-620x396.jpeg.pagespeed.ic.qhimTILQPI.jpg)
### 8. [Macrium Reflect Free Edition][8]: ###
Macrium Reflect Free Edition is claimed to be one of the fastest disk cloning utilities, though it supports only Windows file systems. It has a fairly straightforward user interface. This software does disk imaging and disk cloning and also allows you to access images from the file manager. It allows you to create a Linux rescue CD, and it is compatible with Windows Vista and 7.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x464xD1E_open1.jpg.pagespeed.ic.RQ41AyMCFx.png)
### 9. [DriveImage XML][9]: ###
DriveImage XML uses Microsoft VSS to create images quite reliably. With this software you can create "hot" images of a disk that is still running. Images are stored as XML files, which means you can access them from any supporting third-party software. DriveImage XML also allows restoring an image to a machine without any reboot. This software is compatible with Windows XP, Windows Server 2003, Vista, and 7.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x475x357_open2.jpg.pagespeed.ic.50ipbFWsa2.jpg)
### 10. [Paragon Backup & Recovery Free][10]: ###
Paragon Backup & Recovery Free does a great job when it comes to managing scheduled imaging. This is a free software but it's for personal use only.
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x536x9Z9_open3.jpg.pagespeed.ic.9rDHp0keFw.png)
--------------------------------------------------------------------------------
via: http://www.efytimes.com/e1/fullnews.asp?edid=148039
作者Sanchari Banerjee
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://clonezilla.org/
[2]:http://redobackup.org/
[3]:http://www.mondorescue.org/
[4]:http://www.partimage.org/Main_Page
[5]:http://www.fsarchiver.org/Main_Page
[6]:http://www.partclone.org/
[7]:http://doclone.nongnu.org/
[8]:http://www.macrium.com/reflectfree.aspx
[9]:http://www.runtime.org/driveimage-xml.htm
[10]:http://www.paragon-software.com/home/br-free/

View File

@ -1,66 +0,0 @@
zpl1025
What Linux Users Should Know About Open Hardware
================================================================================
> What Linux users don't know about manufacturing open hardware can lead them to disappointment.
Business and free software have been intertwined for years, but the two often misunderstand one another. That's not surprising -- what is just a business to one is a way of life for the other. But the misunderstanding can be painful, which is why debunking it is worth the effort.
An increasingly common case in point: the growing attempts at open hardware, whether from Canonical, Jolla, MakePlayLive, or any of half a dozen others. Whether pundit or end-user, the average free software user reacts with exaggerated enthusiasm when a new piece of hardware is announced, then retreats into disillusionment as delay follows delay, often ending in the cancellation of the entire product.
It's a cycle that does no one any good, and it often breeds distrust, all because the average Linux user has no idea what's happening behind the news.
My own experience with bringing products to market is long behind me. However, nothing I have heard suggests that anything has changed. Bringing open hardware or any other product to market remains not just a brutal business, but one heavily stacked against newcomers.
### Searching for Partners ###
Both the manufacturing and distribution of digital products is controlled by a relatively small number of companies, whose time can sometimes be booked months in advance. Profit margins can be tight, so like movie studios that buy the rights to an ancient sit-com, the manufacturers usually hope to clone the success of the latest hot product. As Aaron Seigo told me when talking about his efforts to develop the Vivaldi tablet, the manufacturers would much rather prefer someone else take the risk of doing anything new.
Not only that, but they would prefer to deal with someone with an existing sales record who is likely to bring repeat business.
Besides, the average newcomer is looking at a product run of a few thousand units. A chip manufacturer would much rather deal with Apple or Samsung, whose order is more likely in the hundreds of thousands.
Faced with this situation, the makers of open hardware are likely to find themselves cascading down into the list of manufacturers until they can find a second or third tier manufacturer that is willing to take a chance on a small run of something new.
They might be reduced to buying off-the-shelf components and assembling units themselves, as Seigo tried with Vivaldi. Alternatively, they might do as Canonical did, and find established partners that encourage the industry to take a gamble. Even if they succeed, they have usually taken months longer than they expected in their initial naivety.
### Staggering to Market ###
However, finding a manufacturer is only the first obstacle. As Raspberry Pi found out, even if the open hardware producers want only free software in their product, the manufacturers will probably insist that firmware or drivers stay proprietary in the name of protecting trade secrets.
This situation is guaranteed to set off criticism from potential users, but the open hardware producers have no choice except to compromise their vision. Looking for another manufacturer is not a solution, partly because to do so means more delays, but largely because completely free-licensed hardware does not exist. The industry giants like Samsung have no interest in free hardware, and, being new, the open hardware producers have no clout to demand any.
Besides, even if free hardware was available, manufacturers could probably not guarantee that it would be used in the next production run. The producers might easily find themselves re-fighting the same battle every time they needed more units.
As if all this is not enough, at this point the open hardware producer has probably spent 6-12 months haggling. The chances are, the industry standards have shifted, and they may have to start from the beginning again by upgrading specs.
### A Short and Brutal Shelf Life ###
Despite these obstacles, hardware with some degree of openness does sometimes get released. But remember the challenges of finding a manufacturer? They have to be repeated all over again with the distributors -- and not just once, but region by region.
Typically, the distributors are just as conservative as the manufacturers, and just as cautious about dealing with newcomers and new ideas. Even if they agree to add a product to their catalog, the distributors can easily decide not to encourage their representatives to promote it, which means that in a few months they have effectively removed it from the shelves.
Of course, online sales are a possibility. But meanwhile, the hardware has to be stored somewhere, adding to the cost. Production runs on demand are expensive even in the unlikely event that they are available, and even unassembled units need storage.
### Weighing the Odds ###
I have been generalizing wildly here, but anyone who has ever been involved in producing anything will recognize what I am describing as the norm. And just to make matters worse, open hardware producers typically discover the situation as they are going through it. Inevitably, they make mistakes, which adds still more delays.
But the point is, if you have any sense of the process at all, your knowledge is going to change how you react to news of another attempt at hardware. The process means that, unless a company has been in serious stealth mode, an announcement that a product will be out in six months will rapidly prove to be an outdated guesstimate. 12-18 months is more likely, and the obstacles I describe may mean that the product will never actually be released.
For example, as I write, people are waiting for the emergence of the first Steam Machines, the Linux-based gaming consoles. They are convinced that the Steam Machines will utterly transform both Linux and gaming.
As a market category, Steam Machines may do better than other new products, because those who are developing them at least have experience developing software products. However, none of the dozen or so Steam Machines in development have produced more than a prototype after almost a year, and none are likely to be available for buying until halfway through 2015. Given the realities of hardware manufacturing, we will be lucky if half of them see daylight. In fact, a release of 2-4 might be more realistic.
I make that prediction with next to no knowledge of any of the individual efforts. But, having some sense of how hardware manufacturing works, I suspect that it is likely to be closer to what happens next year than all the predictions of a new Golden Age for Linux and gaming. I would be entirely happy being wrong, but the fact remains: what is surprising is not that so many Linux-associated hardware products fail, but that any succeed even briefly.
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/what-linux-users-should-know-about-open-hardware-1.html
作者:[Bruce Byfield][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.datamation.com/author/Bruce-Byfield-6030.html

Interview: Thomas Voß of Mir
================================================================================
**Mir was big during the space race and it's a big part of Canonical's unification strategy. We talk to one of its chief architects at mission control.**
Not since the days of 2004, when X.org split from XFree86, have we seen such exciting developments in the normally prosaic realms of display servers. These are the bits that run behind your desktop, making sure Gnome, KDE, Xfce and the rest can talk to your graphics hardware, your screen and even your keyboard and mouse. They have a profound effect on your system's performance and capabilities. And where we once had one, we now have two more -- Wayland and Mir -- and both are competing to win your affections in the battle for an X replacement.
We spoke to Wayland's Daniel Stone in issue 6 of Linux Voice, so we thought it was only fair to give equal coverage to Mir, Canonical's own in-house X replacement, and a project that has so far courted controversy with some of its decisions. Which is why we headed to Frankfurt and asked its Technical Architect, Thomas Voß, for some background context…
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_1.jpg)
**Linux Voice: Let's go right back to the beginning, and look at what X was originally designed for. X solved the problems that were present 30 years ago, when people had entirely different needs, right?**
**Thomas Voß**: It was mainframes. It was very expensive mainframe computers with very cheap terminals, trying to keep the price as low as possible. And one of the first and foremost goals was: “Hey, I want to be able to distribute my UI across the network, ideally compressed and using as little data as possible”. So a lot of the decisions in X were motivated by that.
A lot of the graphics languages that X supports even today have been motivated by that decision. The X developers started off in a 2D world; everything was a 2D graphics language, the X way of drawing rectangles. And it's present today. So X is not necessarily bad in that respect; it still solves a lot of use cases, but it's grown over time.
One of the reasons is that X is a protocol, in essence. So a lot of things got added to the protocol. The problem with adding things to a protocol is that they tend to stick. To use a 2D graphics language as an example, XVideo is something that no-one really likes today. It's difficult to support and the GPU vendors actually cry out in pain when you start talking about XVideo. It's somewhat bloated, and it's just old. It's an old proven technology and I'm all for that. I actually like X for a lot of things, and it was a good source of inspiration. But then you look at your current use cases and the current setup we are in, where convergence is one of the buzzwords -- massively overrated, obviously -- but at the heart of convergence lies the fact that you want to scale across different form factors.
**LV: And convergence is big for Canonical, isn't it?**
**Thomas**: It's big, I think, for everyone, especially over time. But convergence is a use case that was always of interest to us. So we always had this idea that we want one codebase. We don't want a situation like Apple has with OS X and iOS, which are two different codebases. We basically said "Look, whatever we want to do, we want to do it from one codebase, because it's more efficient." We don't want to end up in the situation where we have to be maintaining two, three or four separate codebases.
That's where we were coming from when we were looking at X, and it was just too bloated. And we looked at a lot of alternatives. We started looking at how Mac OS X was doing things. We obviously didn't have access to the source code, but if you see the transition from OS 9 to OS X, it was as if they entirely switched to one graphics language. It was pre-PostScript at that time. But they chose one graphics language, and that's it. From that point on, when you choose a graphics language, things suddenly become more simple to do. Today's graphics language is GL ES, so there was inspiration for us to say we were converged on GL and EGL. From our perspective, that's the least common denominator.
> We basically said: whatever we want to do, we want to do it from one codebase, because it's more efficient.
Obviously there are disadvantages to having only one graphics language, but the benefits outweigh the disadvantages. And I think that's a common theme in the industry. Android made the same decision to go that way. Even Wayland to a certain degree has been doing that. They have to support EGL and GL, simply because it's very convenient for app developers and toolkit developers -- an open graphics language. That was the part that inspired us, and we wanted to have this one graphics language and support it well. And that takes a lot of craft.
So, once you can say: no more weird 2D API, no more weird phong API, and everything is mapped out to GL, you're way better off. And you can distill down the scope of the overall project to something more manageable. So it went from being impossible to possible. And then there was me, being very opinionated. I don't believe in extensibility from the beginning -- traditionally in Linux everything is super extensible, which has got benefits for a certain audience.
If you think about the audience of the display server, it's one of the few places in the system where you've got three audiences. So you've got the users, who don't care, or shouldn't care, about the display server.
**LV: It's transparent to them.**
**Thomas**: Yes, it's pixels, right? That's all they care about. It should be smooth. It should be super nice to use. But the display server is not their main concern. It obviously feeds into a user experience, quite significantly, but there are a lot of other parts in the system that are important as well.
Then you've got developers who care about the display server in terms of the API. Obviously we said we want to satisfy this audience, and we want to provide a super-fast experience for users. It should be rock solid and stable. People have been making fun of us and saying "yeah, every project wants to be rock solid and stable". Cool -- so many fail in doing that, so let's get that down and just write out what we really want to achieve.
And then you've got developers, and the moment you expose an API to them, or a protocol, you sign a contract with them, essentially. So they develop to your API -- well, many app developers won't directly, because they'll be using toolkits -- but at some point you've got developers who sign up to your API.
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_3.jpg)
**LV: The developers writing the toolkits, then?**
**Thomas**: We do a lot of work in that arena, but in general it's a contract that we have with normal app developers. And we said: look, we don't want the API or contract to be super extensible and trying to satisfy every need out there. We want to understand what people really want to do, and we want to commit to one API and contract. Not five different variants of the contract, but we want to say: look, this is what we support and we, as Canonical and as the Mir maintainers, will sign up to.
So I think that's a very good thing. You can buy into specific shells sitting on top of Mir, but you can always assume a certain base level of functionality that we will always provide in terms of window management, in terms of rendering capabilities, and so on and so forth. And funnily enough, that also helps with convergence. Because once you start thinking about the API as very important, you really start thinking about convergence. And what happens if we think about form factor and we transfer from a phone to a tablet to a desktop to a fridge?
**LV: And whatever might come!**
**Thomas**: Right, right. How do we account for future developments? And we said we don't feel comfortable making Mir super extensible, because it will just grow. Either it will just grow and grow, or you will end up with an organisation that just maintains your protocol and protocol extensions.
**LV: So that's looking at Mir in relation to X. The obvious question is comparing Mir to Wayland -- so what is it that Mir does, that Wayland doesn't?**
**Thomas**: This might sound picky, but we have to distinguish what Wayland really is. Wayland is a protocol specification -- which is interesting, because the value proposition is somewhat difficult. You've got a protocol and you've got a reference implementation. Specifically, when we started, Weston was still a test bed and everything being developed ended up in there.
No one was buying into that; no one was saying, "Look, we're moving this to production-level quality with a bona fide protocol layer that is frozen and stable for a specific version that caters to application authors". If you look at the Ubuntu repository today, or in Debian, there's Wayland-cursor-whatever, so they have extensions already. So that's a bit different from our approach to Mir, from my perspective at least.
There was this protocol that the Wayland developers finished -- and back then, before we did Mir, I looked into all of this and wrote a Wayland compositor in Go, just to get to know things.
**LV: As you do!**
**Thomas**: And I said: you know, I don't think a protocol is a good way of approaching this, because versioning a protocol in a packaging scenario is super difficult. But versioning a C API, or any sort of API that has a binary stability contract, is way easier, and we are way more experienced at that. So, in that respect, we are different in that we are saying the protocol is an implementation detail, at least up to a certain point.
I'm pretty sure for version 1.0, which we will call a golden release, we will open up the protocol for communication purposes. Under the covers it's Google protocol buffers and sockets. So we'll say: this is the API, work against that, and we're committed to it.
That's one thing, and then we said: OK, there's Weston, but we cannot use Weston because it's not working on Android, the driver model is not well defined, and there's so much work that we would have to do to actually implement a Wayland compositor. And then we are in a situation where we would have to cut out a set of functionality from the Wayland protocol and commit to that, no matter what happens, and ultimately that would be a fork over time, right?
**LV: It's a difficult concept for many end users, who just want to see something working.**
**Thomas**: Right, and even from a developer's perspective -- and let's jump to the political part -- I find it somewhat difficult to have a party owning a protocol definition and another party building the reference implementations. Now, Gnome and KDE do two different Wayland compositors. I don't see the benefit in that, to be quite frank, so the value proposition is difficult to my mind.
The driver model in Mir and Wayland is ultimately not that different -- it's GL/EGL based. That is kind of the denominator that you will find in both things, which is actually a good thing, because if you look at the contract to application developers and toolkit developers, most of them don't want Mir or Wayland. They talk EGL and GL, and at that point, it's not that much of a problem to support both.
> If there had been a full reference implementation of Wayland, our decision might have been different.
So we did this work for porting the Chromium browser to Mir. We actually took the Chromium Wayland back-end, factored out all the common pieces to EGL and GL ES, and split it up into Wayland and Mir.
And I think from a user's or application developer's perspective, the difference is not there. I think, in retrospect, if there had been something like a full reference implementation of Wayland, where a company had signed up to provide something that is working, and committed to a certain protocol version, our decision might have been different. But there just wasn't. It was five years out there -- Wayland, Wayland, Wayland -- and there was nothing that we could build upon.
**LV: The main experience we've had is with RebeccaBlackOS, which has Weston and Wayland, because, like you say, there's not that much out there running it.**
**Thomas**: Right. I find Wayland impressive, obviously, but I think Mir will be significantly more relevant than Wayland in two years' time. We just keep on bootstrapping everything, and we've got things working across multiple platforms. Are there issues, and are there open questions to solve? Most likely. We never said we would come up with the perfect solution in version 1. That was not our goal. I don't think software should be built that way. So it just should be iterated.
![](http://www.linuxvoice.com/wp-content/uploads/2014/10/voss_2.jpg)
**LV: When was Mir originally planned for? Which Ubuntu release? Because it has been pushed back a couple of times.**
**Thomas**: Well, we originally planned to have it by 14.04. That was the kind of stretch goal, because it highly depends on the availability of proprietary graphics drivers. So you can't ship an LTS [Long Term Support] release of Ubuntu on a new display server without supporting the hardware of the big guys.
**LV: We thought that would be quite ambitious anyway -- a Long Term Support release with a whole new display server!**
**Thomas**: Yes, it was ambitious, but for a reason. If you don't set a stretch goal, and probably fail in reaching it, and then re-evaluate how you move forward, it's difficult to drive a project. So if you just keep it evolving and evolving and evolving, and you don't have a checkpoint at some point…
**LV: That's like a lot of open source projects. Inkscape is still on 0.48 or something, and it works, it's reliable, but they never get to 1.0. Because they always say: "Oh, let's add this feature, and that feature", and the rest of us are left thinking: just release 1.0 already!**
**Thomas**: And I wouldn't actually tie it to a version number. To me, that is secondary. To me, the question is whether we can call this ready for broad public consumption on all of the hardware versions we want to support.
In Canonical, as a company, we have OEM contracts and we are enabling Ubuntu on a host of devices, and laptops and whatever, so we have to deliver on those contracts. And the question is, can we do that? No. Well, you never like a no.
> The question is whether we call this ready for broad public consumption on the hardware we want to support.
Usually, when you encounter a problem and you tackle it, and you start thinking how to solve the problem, that's more beneficial than never hearing a no. That's kind of what we were aiming for. Ubuntu 14.04 was a stretch goal -- everyone was aware of that -- and we didn't reach it. Fine, cool. Let's go on.
So how do we stage ourselves for the next cycle, until an LTS? Now we have this initiative where we have a daily testable image with Unity 8 and Mir. It's not super usable, because it's just essentially the tethered UI that you are seeing there, but still it's something that we didn't have a year ago. And for me, that's a huge gain.
And ultimately, before we can ship something, before any new display server can ship in an LTS release, you need to have buy-in from the GPU vendors. That's what you need.
--------------------------------------------------------------------------------
via: http://www.linuxvoice.com/interview-thomas-vos-of-mir/
作者:[Mike Saunders][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.linuxvoice.com/author/mike/

FOSS and the Fear Factor
================================================================================
![](http://www.linuxinsider.com/ai/181807/foss-open-source-security.jpg)
> "'Many eyes' is a complete and total myth," said SoylentNews' hairyfeet. "I bet my last dollar that if you looked at every.single.package. that makes up your most popular distros and then looked at how many have actually downloaded the source for those various packages, you'd find that there is less than 30 percent ... that are downloaded by anybody but the guys that actually maintain the things."
In a world that's been dominated for far too long by the [Systemd Inferno][1], Linux fans will have to be forgiven if they seize perhaps a bit too gleefully upon the scraps of cheerful news that come along on any given day.
Of course, for cheerful news, there's never any better place to look than the [Reglue][2] effort. Run by longtime Linux advocate and all-around-hero-for-kids Ken Starks, as alert readers [may recall][3], Reglue just last week launched a brand-new [fundraising effort][4] on Indiegogo to support its efforts over the coming year.
Since 2005, Reglue has placed more than 1,600 donated and then refurbished computers into the homes of financially disadvantaged kids in Central Texas. Over the next year, it aims to place 200 more, as well as paying for the first 90 days of Internet connection for each of them.
"As overused as the term is, the 'Digital Divide' is alive and well in some parts of America," Starks explained. "We will bridge that divide where we can."
How's that for a heaping helping of hope and inspiration?
### Windows as Attack Vector ###
![](http://www.linuxinsider.com/images/article_images/linuxgirl_bg_pinkswirl_150x245.jpg)
Offering discouraged FOSS fans a bit of well-earned validation, meanwhile -- and perhaps even a bit of levity -- is the news that Russian hackers apparently have begun using Windows as a weapon against the rest of the world.
"Russian hackers use Windows against NATO" is the [headline][5] over at Fortune, making it plain for all the world to see that Windows isn't the bastion of security some might say it is.
The sarcasm is [knee-deep][6] in the comments section on Google+ over that one.
### 'Hackers Shake Confidence' ###
Of course, malicious hacking is no laughing matter, and the FOSS world has gotten a bitter taste of the effects for itself in recent months with the Heartbleed and Shellshock flaws, to name just two.
Has it been enough to scare Linux aficionados away?
That essentially is [the suggestion][7] over at Bloomberg, whose story, entitled "Hackers Shake Confidence in 1980s Free Software Idealism," has gotten more than a few FOSS fans' knickers in a twist.
### 'No Software Is Perfect' ###
"None of this has shaken my confidence in the slightest," asserted [Linux Rants][8] blogger Mike Stone down at the blogosphere's Broken Windows Lounge, for instance.
"I remember a time when you couldn't put a Windows machine on the network without firewall software or it would be infected with viruses/malware in seconds," he explained. "I don't recall the articles claiming that confidence had been shaken in Microsoft.
"The fact of the matter is that no software is perfect, not even FOSS, but it comes closer than the alternatives," Stone opined.
### 'My Faith Is Just Fine' ###
"It is hard to even begin to get into where the Bloomberg article fails," began consultant and [Slashdot][9] blogger Gerhard Mack.
"For one, decompilers have existed for ages and allow black hats to find flaws in proprietary software, so the black-hats can find problems but cannot admit they found them let alone fix them," Mack explained. "Secondly, it has been a long time since most open source was volunteer-written, and most contributions need to be paid.
"The author goes on to rip into people who use open source for not contributing monetarily, when most of the listed companies are already Linux Foundation members, so they are already contributing," he added.
In short, "my faith in open source is just fine, and no clickbait Bloomberg article will change that," Mack concluded.
### 'The Author Is Wrong' ###
"Clickbait" is also the term Google+ blogger Alessandro Ebersol chose to describe the Bloomberg account.
"I could not see the point the author was trying to make, except sensationalism and views," he told Linux Girl.
"The author is wrong," Ebersol charged. "He should educate himself on the topic. The flaws are results of lack of funding, and too many corporations taking advantage of free software and giving nothing back."
Moreover, "I still believe that a piece of code that can be studied and checked by many is far more secure than a piece made by a few," Google+ blogger Gonzalo Velasco C. chimed in.
"All the rumors that FLOSS is as weak as proprietary software are only [FUD][10] -- period," he said. "It is even more sad when it comes from private companies that drink in the FLOSS fountain."
### 'Source Helps Ensure Security' ###
Chris Travers, a [blogger][11] who works on the [LedgerSMB][12] project, had a similar view.
"I do think that having the source available helps ensure security for well-designed, well-maintained software," he began.
"Those of us who do development on such software must necessarily approach the security process under a different set of constraints than proprietary vendors do," Travers explained.
"Since our code changes are public, when we release a security fix this also provides effectively full disclosure," he said, "ensuring that the concerns for unpatched systems are higher than they would be for proprietary solutions absent full disclosure."
At the same time, "this disclosure cuts both ways, as software security vendors can use this to provide further testing and uncover more problems," Travers pointed out. "In the long run, this leads to more secure software, but in the short run it has security costs for users."
Bottom line: "If there is good communication with the community, if there is good software maintenance and if there is good design," he said, "then the software will be secure."
### 'Source Code Isn't Magic Fairy Dust' ###
SoylentNews blogger hairyfeet had a very different view.
"'Many eyes' is a complete and total myth," hairyfeet charged. "I bet my last dollar that if you looked at every.single.package. that makes up your most popular distros and then looked at how many have actually downloaded the source for those various packages, you'd find that there is less than 30 percent of the packages that are downloaded by anybody but the guys that actually maintain the things.
"How many people have done a code audit on Firefox? [LibreOffice][13]? Gimp? I bet you won't find a single one, because everybody ASSUMES that somebody else did it," he added.
"At the end of the day, Wall Street is finding out what guys like me have been saying for years: Source code isn't magic fairy dust that makes the bugs go away," hairyfeet observed.
### 'No One Actually Looked at It' ###
"The problem with [SSL][14] was that everyone assumed the code was good, but almost no one had actually looked at, so you never had the 'many eyeballs' making the bugs shallow," Google+ blogger Kevin O'Brien conceded.
Still, "I think the methodology and the idealism are separable," he suggested. "Open source is a way of writing software in which the value created for everyone is much greater than the value captured by any one entity, which is why it is so powerful.
"The idea that corporate contributions somehow sully the purity is a stupid idea," added O'Brien. "Corporate involvement is not inherently bad; what is bad is trying to lock other people out of the value created. Many companies handle this well, such as Red Hat."
### 'The Right Way to Do IT' ###
Last but not least, "my confidence in FLOSS is unshaken," blogger [Robert Pogson][15] declared.
"After all, I need software to run my computers, and as bad as some flaws are in FLOSS, that vulnerability pales into insignificance compared to the flaws in that other OS -- you know, the one that thinks images are executable and has so much complexity that no one, not even M$ with its $billions, can fix."
FOSS is "the right way to do IT," Pogson added. "The world can and does make its own software, and the world has more and better programmers than the big corporations.
"Those big corporations use FLOSS and should support FLOSS," he maintained, offering "thanks to the corporations who hire FLOSS programmers; sponsor websites, mirrors and projects; and who give back code -- the fuel in the FLOSS economy."
--------------------------------------------------------------------------------
via: http://www.linuxinsider.com/story/FOSS-and-the-Fear-Factor-81221.html
作者:Katherine Noyes
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.linuxinsider.com/perl/story/80980.html
[2]:http://www.reglue.org/
[3]:http://www.linuxinsider.com/story/78422.html
[4]:https://www.indiegogo.com/projects/deleting-the-digital-divide-one-computer-at-a-time
[5]:http://fortune.com/video/2014/10/14/russian-hackers-use-windows-against-nato/
[6]:https://plus.google.com/+KatherineNoyes/posts/DQvRMekLHV4
[7]:http://www.bloomberg.com/news/2014-10-14/hackers-shake-confidence-in-1980s-free-software-idealism.html
[8]:http://linuxrants.com/
[9]:http://slashdot.org/
[10]:http://en.wikipedia.org/wiki/Fear,_uncertainty_and_doubt
[11]:http://ledgersmbdev.blogspot.com/
[12]:http://www.ledgersmb.org/
[13]:http://www.libreoffice.org/
[14]:http://en.wikipedia.org/wiki/Transport_Layer_Security
[15]:http://mrpogson.com/

Calculate Linux Provides Consistency by Design
================================================================================
![](http://www.linuxinsider.com/ai/120560/linux-desktop-kde-xfce.jpg)
> Calculate Linux has a rather interesting strategy for desktop environments. It is characterized by two flavors with the same look and feel. That does not mean that the inherent functionality of the KDE and Xfce desktops is compromised. Rather, the Calculate Linux developers did what you seldom see within a Linux distribution with more than one desktop option: They unified the design.
Calculate Linux 14 is a distribution designed with home and SMB users in mind. It is optimized for rapid deployment in corporate environments as well.
Calculate gives users something no other Linux distro makes possible. The Xfce desktop session is customized to imitate the look of the [KDE][1] desktop environment.
This design approach goes a long way toward making Calculate Linux a one-distro-fits-all solution. Individual users or entire departments within an organization can fine-tune user preferences and features without changing the common appearance or performance.
Calculate Linux 14, developed by Alexander Tratsevskiy in Russia, is not your typical cookie-cutter type of Linux OS. This latest version, released Sept. 5, is a rolling-release distribution that provides a number of preconfigured features.
It uses a source-based approach to package management to optimize the software. This in part comes from its roots as a Gentoo Linux-based distribution.
Calculate Linux comes in three more versions to expand its reach. Calculate Directory Server is for servers, and Calculate Linux Scratch for building customized systems. The Calculate Media Center is a distro to run a home multimedia center.
### What's New ###
This latest version of Calculate ships with a few new features, including notification of software updates and an improved administration panel.
This release adds an improved graphical user interface for Calculate Utilities. It also provides various kernel and other software package updates.
It comes in 32-bit or 64-bit builds that include two desktop options for personal/business use: KDE and Xfce. A boot menu lets users choose to run the Calculate live desktop environment from RAM for added performance or with a command line interface only.
Why two choices? Users get better performance on low-end computers using the lightweight desktop environment that comes with Xfce. This is the second release containing this option. It solves the problem of not being able to run the KDE edition of Calculate Linux on underpowered hardware.
### Designing Details ###
Calculate Linux has a rather interesting strategy for desktop environments. It is characterized by two flavors with one common design.
That does not mean that the inherent functionality of the KDE and Xfce desktops is compromised. Rather, the Calculate Linux developers did what you seldom see within a Linux distribution with more than one desktop option.
Typically, KDE by design is much more animation based. By design, Xfce has fewer visual frills in keeping with its lightweight philosophy. Most KDE distributions place the panel bar at the bottom and do not have a Docky-style launcher anywhere in the desktop decor.
In Calculate Linux, a classic style application menu, task switcher and system tray are configured at the top of the screen in both desktop versions. At the bottom of the display, there is a hidden quick-launch bar that pops up when the mouse pointer strays toward the lower edge of the screen.
> ![](http://www.linuxinsider.com/article_images/2014/81242_990x557.jpg)
> Calculate Linux has a unified design that makes KDE and Xfce desktops look nearly the same. The panel and menu display are very nontraditional as seen in this KDE desktop view.
This duality ties the two desktops together. Both the KDE and the Xfce versions have right-click access to some of the most commonly used system commands and features.
### Look and Feel ###
Whether you run the KDE or the Xfce desktops, the panel design is the same. The menu falls from the top left corner as a single box with the same categories in both versions.
> ![](http://www.linuxinsider.com/article_images/2014/81242_990x540.jpg)
> The Xfce desktop in Calculate Linux is almost totally indistinguishable from its KDE counterpart.
Hover the mouse over the right edge of the menu box to see the category contents slide out to the right of the box. Only then do you see a varying range of applications to launch with a click.
The same operation governs the popup launcher bar hidden at the bottom of the screen. Some of the offerings are desktop-specific, however.
> ![](http://www.linuxinsider.com/article_images/2014/81242_990x556.jpg)
> Calculate Linux embeds a popup launch dock in both the KDE and Xfce desktop editions.
For example, the bottom dock in both desktop versions launches the Chromium Web browser, [LibreOffice][3], GIMP, SMPlayer and Leafpad (simple text editor). The KDE dock launches kcalc, digikam, Amarok and k3b disk burner. Xfce launches Galculator, Clementine and xfburn.
### Designed to Differ ###
One difference is the KDE version has an added button where expected along the upper right edge of the screen. It also has a Widgets button near the far right end of the top panel.
These provide access to the activities layout where you choose the style of desktop typical of KDE. These are: Grid, Newspaper, Folder, Grouping and Search & Launch.
A second style difference between the two desktop versions is the inclusion of widgets with the KDE version. These desktop widgets personalize the desktop items.
### Feature Folly ###
The Calculate Desktop edition, both KDE and Xfce, creates a user profile when it loads. This profile is fully integrated with Calculate Directory Server. Roaming profiles are also supported. Applications are auto-tuned at logon based on the server settings.
The approach greatly simplifies the setup and maintenance roles for users with no IT department to support the computer system. The desktop version functions simply as a standalone operating system. No server is needed. However, enterprise and SMB environments can pair the desktop version with the server version for seamless integration.
Either way, the common set of toolbars, desktop applications and basic settings are easier to configure for desktop and server use, regardless of the desktop environment choice.
You can install Calculate Linux on a USB thumb drive or a USB hard drive with a choice of these volume formats: ext4, ext3, ext2, reiserfs, btrfs, xfs, jfs, nilfs2 or fat32.
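As a sketch of what that preparation looks like for the ext4 case — the device name is hypothetical, so the commands below operate on a loopback image file instead of real hardware:

```shell
# Stand-in for a real partition such as /dev/sdX1 -- always verify the device with lsblk first.
truncate -s 64M usb.img
if command -v mkfs.ext4 >/dev/null 2>&1; then
    mkfs.ext4 -q -F usb.img && echo "formatted as ext4"
else
    echo "mkfs.ext4 (e2fsprogs) not available on this machine"
fi
```

Substituting mkfs.xfs, mkfs.btrfs, and so on covers the other supported formats.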
### Gentler Gentoo ###
The Gentoo distro in its own right installs applications compiled from source. It uses a software packaging system called "Portage" to semi-automate this process. It also uses the command-line compiling system run by Emerge.
Calculate's developers soften this Gentoo-based software compiling process somewhat, but it is still more complex than using a community-managed automated software binary repository.
Calculate Linux is fully compatible with Gentoo repositories and supports binary repository updates. System files are updated via Portage throughout the distribution's life cycle.
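As a rough illustration of that Portage workflow — these commands only do real work on a Gentoo-based system, and the package atom is an arbitrary example, so the sketch falls back to a message elsewhere:

```shell
{
    if command -v emerge >/dev/null 2>&1; then
        emerge --sync || true                      # refresh the Portage tree
        emerge --pretend app-editors/vim || true   # preview what would be compiled and installed
    else
        echo "emerge not found: Portage commands apply to Gentoo-based systems only"
    fi
} | tee portage-session.log
```

Dropping `--pretend` performs the actual from-source build that Calculate's developers have smoothed over.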
### Bottom Line ###
Calculate Linux is a well-tooled Linux distro that makes consistency in design job number one. It is highly configurable and is optimized for nearly every computing circumstance.
It runs a full-blown KDE desktop on upper-end hardware, and provides the same look and feel with Xfce on low-end gear. Calculate Linux runs from a hard drive installation or by loading directly into RAM.
It could offer home and SMB users an effective distro alternative. However, as is typical for Gentoo-based distros, Calculate Linux's weak point is the lack of a full-fledged binary software repository system.
### Want to Suggest a Review? ###
Is there a Linux software application or distro you'd like to suggest for review? Something you love or would like to get to know?
Please [email your ideas to me][4], and I'll consider them for a future Linux Picks and Pans column.
And use the Talkback feature below to add your comments!
--------------------------------------------------------------------------------
via: http://www.linuxinsider.com/story/Calculate-Linux-Provides-Consistency-by-Design-81242.html
作者Jack M. Germain
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://www.calculate-linux.org/
[2]:http://www.kde.org/
[3]:http://www.libreoffice.org/
[4]:jack.germain@newsroom.ectnews.com

(translating by runningwater)
Camicri Cube: An Offline And Portable Package Management System
================================================================================
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/camicri-cube-206x205.jpg)
As we all know, we need an Internet connection on our system to download and install applications using Synaptic or a software center. But what if you don't have an Internet connection, or the connection is dead slow? This is a real headache when installing packages via the software center on your Linux desktop. You can manually download applications from their official sites and install them, but most Linux users aren't aware of the dependencies required by the applications they want to install. What can you do in such a situation? Leave all your worries behind: today we introduce an awesome offline package manager called **Camicri Cube**.
You can run this package manager on any Internet-connected system, download the packages you want to install, bring them back to your offline computer, and install them there. Sounds good? Yes, it is! Cube is a package manager like Synaptic or Ubuntu Software Center, but a portable one. It can run on any supported platform (Windows, APT-based Linux distributions), online or offline, from a flash drive or any other removable device. The main goal of this project is to enable offline Linux users to download and install Linux applications easily.
Cube gathers complete details of your offline computer, such as OS details, installed applications, and more. Then just copy the Cube application to a USB thumb drive, take it to an Internet-connected system, and download the applications you want. After downloading all required packages, head back to your original computer and start installing them. Cube is developed and maintained by **Jake Capangpangan**. It is written in C++ and bundled with all necessary components, so you don't have to install any extra software to use it.
### Installation ###
Now, let us download and install Cube on the offline system that has no Internet connection. Download the latest version of Cube either from the [official Launchpad page][1] or the [Sourceforge site][2]. Make sure you download the correct version for your offline computer's architecture. As I use a 64-bit system, I downloaded the 64-bit version.
wget http://sourceforge.net/projects/camicricube/files/Camicri%20Cube%201.0.9/cube-1.0.9.2_64bit.zip/
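If you are unsure which build matches the offline machine, check its architecture first (run this on that machine):

```shell
# x86_64 means you need the 64-bit Cube build; i686/i386 means the 32-bit one.
uname -m | tee arch.txt
```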
Extract the zip file and move it to your home directory or anywhere you want:
unzip cube-1.0.9.2_64bit.zip
That's it. Now it's time to learn how to use it.
### Usage ###
Here, I will be using Two Ubuntu systems. The original (Offline no Internet) is running with **Ubuntu 14.04**, and the Internet connected system is running with **Lubuntu 14.04** Desktop.
#### Steps to do On Offline system: ####
On the offline system, go to the extracted Cube folder. You'll find an executable called “cube-linux”. Double-click it and click Execute. If it is not executable, set the executable permission as shown below.
sudo chmod -R +x cube/
Then, go to the cube directory,
cd cube/
and run the following command:
./cube-linux
Enter the project name (e.g. sk) and click **Create**. As mentioned above, this creates a new project with complete details of your system, such as OS details, the list of installed applications, the list of repositories, etc.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0013.png)
As you know, our system is an offline computer, which means it has no Internet connection, so I skipped the repository update by clicking the **Cancel** button. We will update the repositories later, on an Internet-connected system.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0023.png)
Again, I clicked **No** to skip updating the offline computer, because we don't have an Internet connection.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0033.png)
That's it; the new project has been created and saved in your main Cube folder. Go to the Cube folder and you'll find a folder called Projects, which holds all the essential details of your offline system.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Selection_004.png)
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Selection_005.png)
Now close the Cube application, copy the entire main **cube** folder to a flash drive, and go to the Internet-connected system.
#### Steps to do on an Internet connected system: ####
The following steps need to be done on the Internet-connected system. In our case, it's **Lubuntu 14.04**.

Make the cube folder executable, as we did on the original computer.
sudo chmod -R +x cube/
Now double-click the cube-linux file to open it, or launch it from the terminal as shown below.
cd cube/
./cube-linux
You will see that your project is now listed in the “Open Existing Projects” part of the window. Select your project:
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0014.png)
Then Cube will ask if this is your project's original computer. It's not my original (offline) computer, so I clicked **No**.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0024.png)
You'll be asked if you want to update your repositories. Click **Ok** to update them.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0034.png)
Next, we have to update all outdated packages/applications. Click the “**Mark All updates**” button on Cube's toolbar, then click the “**Download all marked**” button to update them all. As you can see in the screenshot below, in my case 302 packages need to be updated. Click **Ok** to continue downloading the marked packages.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_005.png)
Now, Cube will start to download all marked packages.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Downloading-packages_006.png)
We have completed updating repositories and packages. Now, you can download a new package if you want to install it on your offline system.
#### Downloading New Applications ####
For example, here I am going to download the **apache2** package. Enter the name of the package in the **search** box and hit the Search button. Cube fetches the details of the application you are looking for. Hit the “**Download this package now**” button, then click **Ok** to start the download.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_008.png)
Cube will start downloading the apache2 package with all its dependencies.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Downloading-packages_009.png)
If you want to search for and download more packages, simply click the “**Mark this package**” button and search for the packages you need. You can mark as many packages as you want to install on your original computer. Once you have marked them all, hit the “**Download all marked**” button on the top toolbar to start downloading them.
After you have finished updating the repositories and outdated packages and downloading new applications, close the Cube application. Then copy the entire Cube folder to a flash drive or external HDD and go back to your offline system.
#### Steps to do on Offline computer: ####
Copy the Cube folder back to your offline system, anywhere you want. Go to the cube folder and double-click the **cube-linux** file to launch the Cube application.

Or, you can launch it from the terminal as shown below.
cd cube/
./cube-linux
Select your project and click Open.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Cube-Startup-Create-or-choose-a-project-to-be-managed_0012.png)
Then a dialog will ask you to update your system. Click “Yes”, especially if you downloaded new repositories, because this transfers all the new repository data to your computer.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0021.png)
You'll see the repositories being updated on your offline computer without an Internet connection, because we already updated them on the Internet-connected system. Cool, isn't it?
After updating the repositories, let us install all the downloaded packages. On Cube's main toolbar, click the “Mark All Downloaded” button to select them, then click “Install All Marked”. The Cube application automatically opens a new terminal and installs all the packages.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Terminal_001.png)
If you encounter dependency problems, go to **Cube Menu -> Packages -> Install packages with complete dependencies** to install all packages.
If you want to install a specific package, navigate to List Packages and click the “Downloaded” button; all downloaded packages will be listed.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0035.png)
Then double-click the desired package and click “Install this”, or “Mark this” if you want to install it later.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/07/Camicri-Systems-%C2%A9-Cube-Portable-Package-Manager-1.0.9.2-sk_0043.png)
In this way, you can download the required packages on any Internet-connected system and then install them on your offline computer, no Internet connection needed.
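For comparison — this is not part of Cube itself — plain APT can approximate the same offline trick, although without Cube's automatic profiling of the offline machine. A sketch (the package name is just an example):

```shell
PKG=apache2                                     # example package name
cat > offline-install.sh <<EOF
# run on the Internet-connected box (downloads the .deb without installing it):
apt-get download $PKG
# then copy the .deb file(s) to the offline box and run there:
sudo dpkg -i ${PKG}_*.deb
EOF
cat offline-install.sh
```

Unlike Cube, `apt-get download` does not pull dependencies along, which is exactly the gap Cube was written to fill.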
### Conclusion ###
This is one of the best and most useful tools I have ever used. During testing on my Ubuntu 14.04 testbox, however, I faced many dependency problems, and the Cube application often closed unexpectedly. Also, I could only use this tool on a fresh Ubuntu 14.04 offline system without any issues. Hopefully these issues won't occur on previous versions of Ubuntu. Apart from these minor issues, this tool does its job as advertised and works like a charm.
Cheers!
--------------------------------------------------------------------------------
via: http://www.unixmen.com/camicri-cube-offline-portable-package-management-system/
原文作者:
![](http://1.gravatar.com/avatar/1ba62ac2b395f541750b6b4f873eb37b?s=70&d=monsterid&r=G)
[SK][a] (Senthilkumar, aka SK, is a Linux enthusiast, FOSS supporter & Linux consultant from Tamil Nadu, India. A passionate and dynamic person, he aims to deliver quality content to IT professionals and loves to write and explore new things about Linux, open source, computers and the Internet.)
译者:[runningwater](https://github.com/runningwater) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.unixmen.com/author/sk/
[1]:https://launchpad.net/camicricube
[2]:http://sourceforge.net/projects/camicricube/

translating by nd0104
Install Google Docs on Linux with Grive Tools
================================================================================
Google Drive is two years old now, and Google's cloud storage solution seems to be still going strong thanks to its integration with Google Docs and Gmail. There's one thing still missing though: an official Linux client. Apparently Google has had one floating around their offices for a while now, but it hasn't seen the light of day on any Linux system.

How to set up a USB network printer and scanner server on Debian
================================================================================
Suppose you want to set up a Linux print server in your home/office network, but you only have USB printers available (as they are much cheaper than printers that have a built-in Ethernet jack or wireless ones). In addition, what if one of those devices is an AIO (All In One), and you also want to share its incorporated scanner over the network? In this article, I'll show you how to install and share a USB AIO (Epson CX3900 inkjet printer and scanner), a USB laser printer (Samsung ML-1640), and a PDF printer as the "cherry on top" - all in a GNU/Linux Debian 7.2 [Wheezy] server.
Even though these printers are somewhat old (I bought the Epson AIO in 2007 and the laser printer in 2009), I believe that what I learned through the installation process can well be applied to newer models of the same brands and others: some drivers are available as precompiled .deb packages, while others can be installed directly from the repositories. After all, it's the underlying principles that matter.
### Prerequisites ###
To setup a network printer and scanner, we will be using [CUPS][1], which is an open-source printing system for Linux / UNIX / OSX.
# aptitude install cups cups-pdf
**Troubleshooting tip**: Depending on the state of your system (this issue can happen most likely after a failed manual install of a package or a misinstalled dependency), the front-end package management system may prompt you to uninstall a lot of packages in an attempt to resolve current dependencies before installing cups and cups-pdf. If this happens to be the case, you have two options:
1) Install the packages via another front-end package management system, such as apt-get. Note that this is not entirely advisable since it will not fix the current issue.
2) Run the following command: aptitude update && aptitude upgrade. This will fix the issue and upgrade the packages to their most recent version at the same time.
### Configuring CUPS ###
In order to be able to access the CUPS web interface, we need to do at least a minimum edit to the cupsd.conf file (server configuration file for CUPS). Before proceeding, however, let's make a backup copy of cupsd.conf:
# cp cupsd.conf cupsd.conf.bkp
and edit the original file (only the most relevant sections are shown):
- **Listen**: Listens to the specified address and port or domain socket path.
- **Location /path**: Specifies access control for the named location.
- **Order**: Specifies the order of HTTP access control (allow,deny or deny,allow). Order allow,deny means that the Allow rules have precedence over (are processed before) the Deny rules.
- **DefaultAuthType** (also valid for **AuthType**): Specifies the default type of authentication to use. Basic refers to the fact that the /etc/passwd file is used to authenticate users in CUPS.
- **DefaultEncryption**: Specifies the type of encryption to use for authenticated requests.
- **WebInterface**: Specifies whether the web interface is enabled.
# Listen for connections from the local machine
Listen 192.168.0.15:631
# Restrict access to the server
<Location />
Order allow,deny
Allow 192.168.0.0/24
</Location>
# Default authentication type, when authentication is required
DefaultAuthType Basic
DefaultEncryption IfRequested
# Web interface setting
WebInterface Yes
# Restrict access to the admin pages
<Location /admin>
Order allow,deny
Allow 192.168.0.0/24
</Location>
Now let's restart CUPS to apply the changes:
# service cups restart
In order to allow another user (other than root) to modify printer settings, we must add him / her to the lp (grants access to printer hardware and enables the user to manage print jobs) and lpadmin (owns printing preferences) groups as follows. Disregard this step if this is not necessary or desired in your current network setup.
# adduser xmodulo lp
# adduser xmodulo lpadmin
![](https://farm4.staticflickr.com/3873/14705919960_9a25101098_o.png)
### Configuring a Network Printer via CUPS Web Interface ###
1. Launch a web browser and open the CUPS interface, available at http://<Server IP>:Port, which in our case means http://192.168.0.15:631:
![](https://farm4.staticflickr.com/3878/14889544591_284015bcb5_z.jpg)
2. Go to the **Administration** tab and click on *Add printer*:
![](https://farm4.staticflickr.com/3910/14705919940_fe0a08a8f7_o.png)
3. Choose your printer; in this case, **EPSON Stylus CX3900 @ debian (Inkjet Inkjet Printer)**, and click on **Continue**:
![](https://farm6.staticflickr.com/5567/14706059067_233fcf9791_z.jpg)
4. It's time to name the printer and indicate whether we want to share it from the current workstation or not:
![](https://farm6.staticflickr.com/5570/14705957499_67ea16d941_z.jpg)
5. Install the driver - Select the brand and click on **Continue**.
![](https://farm6.staticflickr.com/5579/14889544531_77f9f1258c_o.png)
6. If the printer is not supported natively by CUPS (not listed in the next page), we will have to download the driver from the manufacturer's web site (e.g., [http://download.ebz.epson.net/dsc/search/01/search/?OSC=LX][2]) and return to this screen later.
![](https://farm4.staticflickr.com/3896/14706058997_e2a2214338_z.jpg)
![](https://farm4.staticflickr.com/3874/14706000928_c9dc74c80e_z.jpg)
![](https://farm4.staticflickr.com/3837/14706058977_e494433068_o.png)
7. Note that this precompiled .deb file must be sent somehow to the printer server (for example, via sftp or scp) from the machine that we used to download it (of course this could have been easier if we had a direct link to the file instead of the download button):
![](https://farm6.staticflickr.com/5581/14706000878_f202497d0a_z.jpg)
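A sketch of that transfer step — the login and destination path are hypothetical, so the command is only echoed here rather than executed:

```shell
DEB=epson-inkjet-printer-escpr_1.4.1-1lsb3.2_i386.deb
SERVER=user@192.168.0.15          # hypothetical account and address on the print server
# Shown as a dry run; remove the leading 'echo' to perform the copy:
echo scp "$DEB" "$SERVER:/tmp/" | tee transfer-cmd.txt
```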
8. Once we have placed the .deb file in our server, we will install it:
# dpkg -i epson-inkjet-printer-escpr_1.4.1-1lsb3.2_i386.deb
**Troubleshooting tip**: If the lsb package (a standard core system that third-party applications written for Linux can depend upon) is not installed, the driver installation will not succeed:
![](https://farm4.staticflickr.com/3840/14705919770_87e5803f95_z.jpg)
We will install lsb and then attempt to install the printer driver again:
# aptitude install lsb
# dpkg -i epson-inkjet-printer-escpr_1.4.1-1lsb3.2_i386.deb
9. Now we can return to step #5 and install the printer:
![](https://farm6.staticflickr.com/5569/14705957349_3acdc26f91_z.jpg)
### Configuring a Network Scanner ###
Now we will proceed to configure the printer server to share a scanner as well. First, install [xsane][3] which is a frontend for [SANE][4]: Scanner Access Now Easy.
# aptitude install xsane
Next, let's enable the saned service by editing the /etc/default/saned file:
# Set to yes to start saned
RUN=yes
Finally, we will check whether saned is already running (most likely not - then we'll start the service and check again):
# ps -ef | grep saned | grep -v grep
# service saned start
### Configuring a Second Network Printer ###
With CUPS, you can configure multiple network printers. Let's configure an additional printer via CUPS: Samsung ML-1640, which is a USB laser printer.
The splix package contains the drivers for monochrome (ML-15xx, ML-16xx, ML-17xx, ML-2xxx) and color (CLP-5xx, CLP-6xx) Samsung printers. In addition, the detailed information about the package (available via aptitude show splix) indicates that some rebranded Samsungs like the Xerox Phaser 6100 work with this driver.
# aptitude install splix
Then we will install the printer itself using the CUPS web interface, as explained earlier:
![](https://farm4.staticflickr.com/3872/14705957329_4f38a94867_o.png)
### Installing the PDF Printer ###
Next, let's configure PDF printer on the printer server, so that you can convert documents into PDF format from client computers.
Since we already installed the cups-pdf package, the PDF printer was installed automatically, which can be verified through the web interface:
![](https://farm6.staticflickr.com/5558/14705919650_bc1a1e0b43_z.jpg)
When the PDF printer is selected, documents will be written to a configurable directory (by default to ~/PDF), or can be further manipulated by a post-processing command.
In the next article, we'll configure a desktop client to access these printers and scanner over the network.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/usb-network-printer-and-scanner-server-debian.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.gabrielcanepa.com.ar/
[1]:https://www.cups.org/
[2]:http://download.ebz.epson.net/dsc/search/01/search/?OSC=LX
[3]:http://www.xsane.org/
[4]:http://www.sane-project.org/

translating by haimingfg
What are useful CLI tools for Linux system admins
================================================================================
System administrators (sysadmins) are responsible for day-to-day operations of production systems and services. One of the critical roles of sysadmins is to ensure that operational services are available round the clock. For that, they have to carefully plan backup policies, disaster management strategies, scheduled maintenance, security audits, etc. Like every other discipline, sysadmins have their tools of trade. Utilizing proper tools in the right case at the right time can help maintain the health of operating systems with minimal service interruptions and maximum uptime.

translating by disylee
How to configure a network printer and scanner on Ubuntu desktop
================================================================================
In a [previous article][1] (note: that article was in the August 12, 2014 source batch; if it has already been translated and published, replace this with the link to the translation), we discussed how to install several kinds of printers (and also a network scanner) on a Linux server. Today we will deal with the other end of the line: how to access the network printer/scanner devices from a desktop client.
### Network Environment ###
For this setup, our server's (Debian Wheezy 7.2) IP address is 192.168.0.10, and our client's (Ubuntu 12.04) IP address is 192.168.0.105. Note that both boxes are on the same network (192.168.0.0/24). If we want to allow printing from other networks, we need to modify the following section in the cupsd.conf file on the server:
<Location />
Order allow,deny
Allow localhost
Allow from XXX.YYY.ZZZ.*
</Location>
(in the above example, we grant access to the printer from localhost and from any system whose IPv4 address starts with XXX.YYY.ZZZ)
To verify which printers are available on our server, we can either run the lpstat command on the server or browse to the https://192.168.0.10:631/printers page.
root@debian:~# lpstat -a
----------
EPSON_Stylus_CX3900 accepting requests since Mon 18 Aug 2014 10:49:33 AM WARST
PDF accepting requests since Mon 06 May 2013 04:46:11 PM WARST
SamsungML1640Series accepting requests since Wed 13 Aug 2014 10:13:47 PM WARST
![](https://farm4.staticflickr.com/3903/14777969919_7b7b25a4a4_z.jpg)
### Installing Network Printers in Ubuntu Desktop ###
In our Ubuntu 12.04 client, we will open the "Printing" menu (Dash -> Printing). Note that in other distributions the name may differ a little (such as "Printers" or "Print & Fax", for example):
![](https://farm4.staticflickr.com/3837/14964314992_d8bd0c0d04_o.png)
No printers have been added to our Ubuntu client yet:
![](https://farm4.staticflickr.com/3887/14941655516_80430529b5_o.png)
Here are the steps to install a network printer on Ubuntu desktop client.
**1)** The "Add" button will fire up the "New Printer" menu. We will choose "Network printer" -> "Find Network Printer" and enter the IP address of our server, then click "Find":
![](https://farm6.staticflickr.com/5581/14777977730_74c29a99b2_z.jpg)
**2)** At the bottom we will see the names of the available printers. Let's choose the Samsung printer and press "Forward":
![](https://farm6.staticflickr.com/5585/14941655566_c1539a3ea0.jpg)
**3)** We will be asked to fill in some information about our printer. When we're done, we'll click on "Apply":
![](https://farm4.staticflickr.com/3908/14941655526_0982628fc9_z.jpg)
**4)** We will then be asked whether we want to print a test page. Let's click on "Print test page":
![](https://farm4.staticflickr.com/3853/14964651435_cc83bb35aa.jpg)
The print job was created with local id 2:
![](https://farm6.staticflickr.com/5562/14777977760_b01c5338f2.jpg)
**5)** Using our server's CUPS web interface, we can observe that the print job has been submitted successfully (Printers -> SamsungML1640Series -> Show completed jobs):
![](https://farm4.staticflickr.com/3887/14778110127_359009cbbc_z.jpg)
We can also display this same information by running the following command on the printer server:
root@debian:~# cat /var/log/cups/page_log | grep -i samsung
----------
SamsungML1640Series root 27 [13/Aug/2014:22:15:34 -0300] 1 1 - localhost Test Page - -
SamsungML1640Series gacanepa 28 [18/Aug/2014:11:28:50 -0300] 1 1 - 192.168.0.105 Test Page - -
SamsungML1640Series gacanepa 29 [18/Aug/2014:11:45:57 -0300] 1 1 - 192.168.0.105 Test Page - -
The page_log log file shows every page that has been printed, along with the user who sent the print job, the date & time, and the client's IPv4 address.
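Since each entry carries the user in the second field, per-user accounting is a one-liner; a small sketch against the sample entries reproduced inline:

```shell
# Per-user page counts from a CUPS page_log (field 2 is the user who printed).
cat > page_log.sample <<'EOF'
SamsungML1640Series root 27 [13/Aug/2014:22:15:34 -0300] 1 1 - localhost Test Page - -
SamsungML1640Series gacanepa 28 [18/Aug/2014:11:28:50 -0300] 1 1 - 192.168.0.105 Test Page - -
SamsungML1640Series gacanepa 29 [18/Aug/2014:11:45:57 -0300] 1 1 - 192.168.0.105 Test Page - -
EOF
awk '{jobs[$2]++} END {for (u in jobs) print u, jobs[u]}' page_log.sample > page_counts.txt
cat page_counts.txt
```

On the real server, point awk at /var/log/cups/page_log instead of the sample file.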
To install the Epson inkjet and PDF printers, we need to repeat steps 1 through 5, and choose the right print queue each time. For example, in the image below we are selecting the PDF printer:
![](https://farm4.staticflickr.com/3926/14778046648_c094c8422c_o.png)
However, please note that according to the [CUPS-PDF documentation][2], by default:
> PDF files will be placed in subdirectories named after the owner of the print job. In case the owner cannot be identified (i.e. does not exist on the server) the output is placed in the directory for anonymous operation (if not disabled in cups-pdf.conf - defaults to /var/spool/cups-pdf/ANONYMOUS/).
These default directories can be modified by changing the value of the **Out** and **AnonDirName** variables in the /etc/cups/cups-pdf.conf file. Here, ${HOME} is expanded to the user's home directory:
Out ${HOME}/PDF
AnonDirName /var/spool/cups-pdf/ANONYMOUS
### Network Printing Examples ###
#### Example #1 ####
Printing from Ubuntu 12.04, logged on locally as gacanepa (an account with the same name exists on the printer server).
![](https://farm4.staticflickr.com/3845/14778046698_57b6e552f3_z.jpg)
After printing to the PDF printer, let's check the contents of the /home/gacanepa/PDF directory on the printer server:
root@debian:~# ls -l /home/gacanepa/PDF
----------
total 368
-rw------- 1 gacanepa gacanepa 279176 Aug 18 13:49 Test_Page.pdf
-rw------- 1 gacanepa gacanepa 7994 Aug 18 13:50 Untitled1.pdf
-rw------- 1 gacanepa gacanepa 74911 Aug 18 14:36 Welcome_to_Conference_-_Thomas_S__Monson.pdf
The PDF files are created with permissions set to 600 (-rw-------), which means that only the owner (gacanepa in this case) can access them. We can change this behavior by editing the value of the **UserUMask** variable in the /etc/cups/cups-pdf.conf file. For example, a umask of 0033 will cause the PDF printer to create files with read and write permissions for the owner, but read-only access for everyone else.
root@debian:~# grep -i UserUMask /etc/cups/cups-pdf.conf
----------
### Key: UserUMask
UserUMask 0033
For those unfamiliar with umask (aka the user file-creation mode mask), it acts as a set of permissions used to control the default permissions that new files receive when they are created. Given a certain umask, the final file permissions are calculated by performing a bitwise AND between the base file permissions (0666) and the bitwise complement of the umask. Thus, for a umask of 0033, the default permissions for new files will be 0666 AND NOT(0033) = 0644 (read/write for the owner, read-only for all others).
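That arithmetic can be verified directly in the shell:

```shell
# Base permissions 0666, umask 0033 -> expected result 0644 (rw-r--r--).
perm=$(printf '%04o' $(( 0666 & ~0033 )))
echo "$perm"    # prints 0644
```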
### Example #2 ###
Printing from Ubuntu 12.04, logged on locally as jdoe (an account with the same name doesn't exist on the server).
![](https://farm4.staticflickr.com/3907/14964315142_a71d8a8aef_z.jpg)
root@debian:~# ls -l /var/spool/cups-pdf/ANONYMOUS
----------
total 5428
-rw-rw-rw- 1 nobody nogroup 5543070 Aug 18 15:57 Linux_-_Wikipedia__the_free_encyclopedia.pdf
The PDF files are created with permissions set to 666 (-rw-rw-rw-), which means that everyone has access to them. We can change this behavior by editing the value of the **AnonUMask** variable in the /etc/cups/cups-pdf.conf file.
At this point, you may be wondering: why bother installing a network PDF printer when most (if not all) current Linux desktop distributions come with a built-in "Print to file" utility that lets users create PDF files on the fly?
There are a couple of benefits of using a network PDF printer:
- A network printer (of whatever kind) lets you print directly from the command line without having to open the file first.
- In a network with other operating systems installed on the clients, a PDF network printer spares the system administrator from having to install a PDF creator utility on each individual machine (and the risk of letting end users install such tools).
- A network PDF printer lets you print directly to a network share with configurable permissions, as we have seen.
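To make the first advantage concrete: once a client is configured, printing from the shell is a single lp call. A sketch (the queue name PDF comes from this article's setup; the file is a stand-in) that degrades to a message on machines without a running CUPS scheduler:

```shell
if command -v lp >/dev/null 2>&1 && lpstat -p >/dev/null 2>&1; then
    echo "hello network printing" > report.txt
    lp -d PDF report.txt || echo "no queue named PDF on this client"
else
    echo "CUPS client tools are not configured on this machine"
fi | tee lp-demo.log
```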
### Installing a Network Scanner in Ubuntu Desktop ###
Here are the steps to install and access a network scanner from an Ubuntu desktop client. It is assumed that the network scanner server is already up and running as described [here][3].
**1)** Let us first check whether there is a scanner available on our Ubuntu client host. Without any prior setup, you will see a message saying "No scanners were identified."
$ scanimage -L
![](https://farm4.staticflickr.com/3906/14777977850_1ec7994324_z.jpg)
**2)** Now we need to enable the saned daemon, which comes pre-installed on Ubuntu desktop. To enable it, we need to edit the /etc/default/saned file and set the RUN variable to yes:
$ sudo vim /etc/default/saned
----------
# Set to yes to start saned
RUN=yes
**3)** Let's edit the /etc/sane.d/net.conf file, and add the IP address of the server where the scanner is installed:
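After the edit, the file would contain a line with the server's address, along the lines of the following sketch (192.168.0.10 is a hypothetical example; substitute your scanner server's actual IP):

```
# /etc/sane.d/net.conf
# host running the saned server (hypothetical address)
192.168.0.10
```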
![](https://farm6.staticflickr.com/5581/14777977880_c865b0df95_z.jpg)
**4)** Restart saned:
$ sudo service saned restart
**5)** Let's see if the scanner is available now:
![](https://farm4.staticflickr.com/3839/14964651605_241482f856_z.jpg)
Now we can open "Simple Scan" (or other scanning utility) and start scanning documents. We can rotate, crop, and save the resulting image:
![](https://farm6.staticflickr.com/5589/14777970169_73dd0e98e3_z.jpg)
### Summary ###
Having one or more network printers and scanners is a nice convenience in any office or home network, and offers several advantages at the same time. To name a few:
- Multiple users (connecting from different platforms / places) are able to send print jobs to the printer's queue.
- Cost and maintenance savings can be achieved due to hardware sharing.
I hope this article helps you make use of those advantages.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/configure-network-printer-scanner-ubuntu-desktop.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://xmodulo.com/2014/08/usb-network-printer-and-scanner-server-debian.html
[2]:http://www.cups-pdf.de/documentation.shtml
[3]:http://xmodulo.com/2014/08/usb-network-printer-and-scanner-server-debian.html#scanner

View File

@ -1,5 +1,3 @@
chi1shi2 is translating.
How to use on-screen virtual keyboard on Linux
================================================================================
An on-screen virtual keyboard is an alternative input method that can replace a real hardware keyboard. A virtual keyboard may be a necessity in various cases. For example, your hardware keyboard is just broken; you do not have enough keyboards for extra machines; your hardware does not have an available port left to connect a keyboard; you are a disabled person with difficulty in typing on a real keyboard; or you are building a touchscreen-based web kiosk.

View File

@ -1,6 +1,3 @@
>>Linchenguang is translating
》》延期申请
Linux TCP/IP networking: net-tools vs. iproute2
================================================================================
Many sysadmins still manage and troubleshoot various network configurations by using a combination of the ifconfig, route, arp and netstat command-line tools, collectively known as net-tools. Originally rooted in the BSD TCP/IP toolkit, net-tools was developed to configure the network functionality of older Linux kernels. Its development in the Linux community ceased in 2001. Some Linux distros such as Arch Linux and CentOS/RHEL 7 have already deprecated net-tools in favor of iproute2.

View File

@ -1,4 +1,3 @@
How to create a software RAID-1 array with mdadm on Linux
================================================================================
Redundant Array of Independent Disks (RAID) is a storage technology that combines multiple hard disks into a single logical unit to provide fault-tolerance and/or improve disk I/O performance. Depending on how data is stored in an array of disks (e.g., with striping, mirroring, parity, or any combination thereof), different RAID levels are defined (e.g., RAID-0, RAID-1, RAID-5, etc). RAID can be implemented either in software or with a hardware RAID card. On modern Linux, basic software RAID functionality is available by default.

View File

@ -1,138 +0,0 @@
johnhoow translating...
# Practical Lessons in Peer Code Review #
Millions of years ago, apes descended from the trees, evolved opposable thumbs and—eventually—turned into human beings.
We see mandatory code reviews in a similar light: something that separates human from beast on the rolling grasslands of the software
development savanna.
Nonetheless, I sometimes hear comments like these from our team members:
- "Code reviews on this project are a waste of time."
- "I don't have time to do code reviews."
- "My release is delayed because my dastardly colleague hasn't done my review yet."
- "Can you believe my colleague wants me to change something in my code? Please explain to them that the delicate balance of the universe will be disrupted if my pristine, elegant code is altered in any way."
### Why do we do code reviews? ###
Let us remember, first of all, why we do code reviews. One of the most important goals of any professional software developer is to
continually improve the quality of their work. Even if your team is packed with talented programmers, you aren't going to distinguish
yourselves from a capable freelancer unless you work as a team. Code reviews are one of the most important ways to achieve this. In
particular, they:
- provide a second pair of eyes to find defects and better ways of doing something.
- ensure that at least one other person is familiar with your code.
- help train new staff by exposing them to the code of more experienced developers.
- promote knowledge sharing by exposing both the reviewer and reviewee to the good ideas and practices of the other.
- encourage developers to be more thorough in their work since they know it will be reviewed by one of their colleagues.
### Doing thorough reviews ###
However, these goals cannot be achieved unless appropriate time and care are devoted to reviews. Just scrolling through a patch, making sure
that the indentation is correct and that all the variables use lower camel case, does not constitute a thorough code review. It is
instructive to consider pair programming, which is a fairly popular practice and adds an overhead of 100% to all development time, as the
baseline for code review effort. You can spend a lot of time on code reviews and still use much less overall engineer time than pair
programming.
My feeling is that something around 25% of the original development time should be spent on code reviews. For example, if a developer takes
two days to implement a story, the reviewer should spend roughly four hours reviewing it.
Of course, it isn't primarily important how much time you spend on a review as long as the review is done correctly. Specifically, you must
understand the code you are reviewing. This doesn't just mean that you know the syntax of the language it is written in. It means that you
must understand how the code fits into the larger context of the application, component or library it is part of. If you don't grasp all the
implications of every line of code, then your reviews are not going to be very valuable. This is why good reviews cannot be done quickly: it
takes time to investigate the various code paths that can trigger a given function, to ensure that third-party APIs are used correctly
(including any edge cases) and so forth.
In addition to looking for defects or other problems in the code you are reviewing, you should ensure that:
- All necessary tests are included.
- Appropriate design documentation has been written.
Even developers who are good about writing tests and documentation don't always remember to update them when they change their code. A
gentle nudge from the code reviewer when appropriate is vital to ensure that they don't go stale over time.
### Preventing code review overload ###
If your team does mandatory code reviews, there is the danger that your code review backlog will build up to the point where it is
unmanageable. If you don't do any reviews for two weeks, you can easily have several days of reviews to catch up on. This means that your
own development work will take a large and unexpected hit when you finally decide to deal with them. It also makes it a lot harder to do
good reviews since proper code reviews require intense and sustained mental effort. It is difficult to keep this up for days on end.
For this reason, developers should strive to empty their review backlog every day. One approach is to tackle reviews first thing in the
morning. By doing all outstanding reviews before you start your own development work, you can keep the review situation from getting out of
hand. Some might prefer to do reviews before or after the midday break or at the end of the day. Whenever you do them, by considering code
reviews as part of your regular daily work and not a distraction, you avoid:
- Not having time to deal with your review backlog.
- Delaying a release because your reviews aren't done yet.
- Posting reviews that are no longer relevant since the code has changed so much in the meantime.
- Doing poor reviews since you have to rush through them at the last minute.
### Writing reviewable code ###
The reviewer is not always the one responsible for out-of-control review backlogs. If my colleague spends a week adding code willy-nilly
across a large project then the patch they post is going to be really hard to review. There will be too much to get through in one session.
It will be difficult to understand the purpose and underlying architecture of the code.
This is one of many reasons why it is important to split your work into manageable units. We use scrum methodology so the appropriate unit
for us is the story. By making an effort to organize our work by story and submit reviews that pertain only to the specific story we are
working on, we write code that is much easier to review. Your team may use another methodology but the principle is the same.
There are other prerequisites to writing reviewable code. If there are tricky architectural decisions to be made, it makes sense to meet
with the reviewer beforehand to discuss them. This will make it much easier for the reviewer to understand your code, since they will know
what you are trying to achieve and how you plan to achieve it. This also helps avoid the situation where you have to rewrite large swathes
of code after the reviewer suggests a different and better approach.
Project architecture should be described in detail in your design documentation. This is important anyway since it enables a new project
member to get up to speed and understand the existing code base. It has the further advantage of helping a reviewer to do their job
properly. Unit tests are also helpful in illustrating to the reviewer how components should be used.
If you are including third-party code in your patch, commit it separately. It is much harder to review code properly when 9000 lines of
jQuery are dropped into the middle.
One of the most important steps for creating reviewable code is to annotate your code reviews. This means that you go through the review
yourself and add comments anywhere you feel that this will help the reviewer to understand what is going on. I have found that annotating
code takes relatively little time (often just a few minutes) and makes a massive difference in how quickly and well the code can be
reviewed. Of course, code comments have many of the same advantages and should be used where appropriate, but often a review annotation
makes more sense. As a bonus, studies have shown that developers find many defects in their own code while rereading and annotating it.
### Large code refactorings ###
Sometimes it is necessary to refactor a code base in a way that affects many components. In the case of a large application, this can take
several days (or more) and result in a huge patch. In these cases a standard code review may be impractical.
The best solution is to refactor code incrementally. Figure out a partial change of reasonable scope that results in a working code base and
brings you in the direction you want to go. Once that change has been completed and a review posted, proceed to a second incremental change
and so forth until the full refactoring has been completed. This might not always be possible, but with thought and planning it is usually
realistic to avoid massive monolithic patches when refactoring. It might take more time for the developer to refactor in this way, but it
also leads to better quality code as well as making reviews much easier.
If it really isn't possible to refactor code incrementally (which probably says something about how well the original code was written and
organized), one solution might be to do pair programming instead of code reviews while working on the refactoring.
### Resolving disputes ###
Your team is doubtless made up of intelligent professionals, and in almost all cases it should be possible to come to an agreement when
opinions about a specific coding question differ. As a developer, keep an open mind and be prepared to compromise if your reviewer prefers a
different approach. Don't take a proprietary attitude to your code and don't take review comments personally. Just because someone feels
that you should refactor some duplicated code into a reusable function, it doesn't mean that you are any less of an attractive, brilliant
and charming individual.
As a reviewer, be tactful. Before suggesting changes, consider whether your proposal is really better or just a matter of taste. You will
have more success if you choose your battles and concentrate on areas where the original code clearly requires improvement. Say things like
"it might be worth considering..." or "some people recommend..." instead of "my pet hamster could write a more efficient sorting algorithm
than this."
If you really can't find middle ground, ask a third developer who both of you respect to take a look and give their opinion.
--------------------------------------------------------------------------------
via: http://blog.salsitasoft.com/practical-lessons-in-peer-code-review/
作者:[Matt][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

View File

@ -1,3 +1,4 @@
惊现译者CHINAANSHE 翻译!!
How to configure HTTP load balancer with HAProxy on Linux
================================================================================
Increased demand on web-based applications and services is putting more and more weight on the shoulders of IT administrators. When faced with unexpected traffic spikes, organic traffic growth, or internal challenges such as hardware failures and urgent maintenance, your web application must remain available, no matter what. Even modern devops and continuous delivery practices can threaten the reliability and consistent performance of your web service.
@ -270,4 +271,4 @@ via: http://xmodulo.com/haproxy-http-load-balancer-linux.html
[a]:http://xmodulo.com/author/jaroslav
[1]:http://www.haproxy.org/
[2]:http://www.haproxy.org/10g.html
[3]:http://xmodulo.com/how-to-install-lamp-server-on-ubuntu.html
[3]:http://xmodulo.com/how-to-install-lamp-server-on-ubuntu.html

View File

@ -1,119 +0,0 @@
su-kaiyao translating
How to speed up slow apt-get install on Debian or Ubuntu
================================================================================
If you feel that package installation by **apt-get** or **aptitude** is often too slow on your Debian or Ubuntu system, there are several ways to improve the situation. Have you considered switching default mirror sites being used? Have you checked the upstream bandwidth of your Internet connection to see if that is the bottleneck?
If nothing else helps, you can try a third option: the [apt-fast][1] tool. apt-fast is actually a shell script wrapper around apt-get and aptitude which can accelerate package download speed. Internally, apt-fast uses the [aria2][2] download utility, which can download a file in "chunks" from multiple mirrors simultaneously (as in a BitTorrent download).
### Install apt-fast on Debian or Ubuntu ###
Here are the steps to install apt-fast on Debian-based Linux.
#### Debian ####
$ sudo apt-get install aria2
$ wget https://github.com/ilikenwf/apt-fast/archive/master.zip
$ unzip master.zip
$ cd apt-fast-master
$ sudo cp apt-fast /usr/bin
$ sudo cp apt-fast.conf /etc
$ sudo cp ./man/apt-fast.8 /usr/share/man/man8
$ sudo gzip /usr/share/man/man8/apt-fast.8
$ sudo cp ./man/apt-fast.conf.5 /usr/share/man/man5
$ sudo gzip /usr/share/man/man5/apt-fast.conf.5
#### Ubuntu 14.04 and higher ####
$ sudo add-apt-repository ppa:saiarcot895/myppa
$ sudo apt-get update
$ sudo apt-get install apt-fast
#### Ubuntu 11.04 to 13.10 ####
$ sudo add-apt-repository ppa:apt-fast/stable
$ sudo apt-get update
$ sudo apt-get install apt-fast
During installation on Ubuntu, you will be asked to choose a default package manager (e.g., apt-get, aptitude), and other settings. You can always change the settings later by editing a configuration file /etc/apt-fast.conf.
![](https://farm6.staticflickr.com/5615/15285526898_1b18f64d58_z.jpg)
![](https://farm3.staticflickr.com/2949/15449069896_76ee00851b_z.jpg)
![](https://farm6.staticflickr.com/5600/15471817412_9ef7f16096_z.jpg)
### Configure apt-fast ###
After installation, you need to configure a list of mirrors used by **apt-fast** in /etc/apt-fast.conf.
You can find a list of Debian/Ubuntu mirrors to choose from at the following URLs.
- **Debian**: [http://www.debian.org/mirror/list][3]
- **Ubuntu**: [https://launchpad.net/ubuntu/+archivemirrors][4]
After choosing mirrors which are geographically close to your location, add those chosen mirrors to /etc/apt-fast.conf in the following format.
$ sudo vi /etc/apt-fast.conf
Debian:
MIRRORS=('http://ftp.us.debian.org/debian/,http://carroll.aset.psu.edu/pub/linux/distributions/debian/,http://debian.gtisc.gatech.edu/debian/,http://debian.lcs.mit.edu/debian/,http://mirror.cc.columbia.edu/debian/')
Ubuntu/Mint:
MIRRORS=('http://us.archive.ubuntu.com/ubuntu,http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive/,http://mirror.cc.vt.edu/pub2/ubuntu/,http://mirror.umd.edu/ubuntu/,http://mirrors.mit.edu/ubuntu/')
As shown above, individual mirrors for a particular archive should be separated by commas. It is recommended that you include the default mirror site specified in /etc/apt/sources.list in the MIRRORS string.
### Install a Package with apt-fast ###
Now you are ready to test the power of apt-fast. Here is the command-line usage of **apt-fast**:
apt-fast [apt-get options and arguments]
apt-fast [aptitude options and arguments]
apt-fast { { install | upgrade | dist-upgrade | build-dep | download | source } [ -y | --yes | --assume-yes | --assume-no ] ... | clean }
To install a package with **apt-fast**:
$ sudo apt-fast install texlive-full
To download a package in the current directory without installing it:
$ sudo apt-fast download texlive-full
![](http://farm8.staticflickr.com/7309/10585846956_6c98c6dcc9_z.jpg)
As mentioned earlier, parallel downloading of apt-fast is done by aria2. You can verify parallel downloads from multiple mirrors as follows.
$ sudo netstat -nap | grep aria2c
![](http://farm8.staticflickr.com/7328/10585846886_4744a0e021_z.jpg)
Note that **apt-fast** does not make "apt-get update" faster. Parallel downloading gets triggered only for "install", "upgrade", "dist-upgrade" and "build-dep" operations. For other operations, apt-fast simply falls back to the default package manager **apt-get** or **aptitude**.
### How Fast is apt-fast? ###
To compare apt-fast and apt-get, I tried installing several packages using two methods on two identical Ubuntu instances. The following graph shows total package installation time (in seconds).
![](http://farm4.staticflickr.com/3810/10585846986_504d07b4a7_z.jpg)
As you can see, **apt-fast** is substantially faster (e.g., 3--4 times faster) than **apt-get**, especially when a bulky package is installed.
Be aware that performance improvement will of course vary, depending on your upstream Internet connectivity. In my case, I had ample spare bandwidth to leverage in my upstream connection, and that's why I see dramatic improvement by using parallel download.
--------------------------------------------------------------------------------
via: http://xmodulo.com/speed-slow-apt-get-install-debian-ubuntu.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:https://github.com/ilikenwf/apt-fast
[2]:http://aria2.sourceforge.net/
[3]:http://www.debian.org/mirror/list
[4]:https://launchpad.net/ubuntu/+archivemirrors

View File

@ -1,3 +1,4 @@
[bazz222222222222222222222]
The Why and How of Ansible and Docker
================================================================================
There is a lot of interest from the tech community in both [Docker][1] and [Ansible][2], I am hoping that after reading this article you will share our enthusiasm. You will also gain a practical insight into using Ansible and Docker for setting up a complete server environment for a Rails application.
@ -100,4 +101,4 @@ via: http://thechangelog.com/ansible-docker/
[6]:http://blog.docker.io/2013/10/docker-0-6-5-links-container-naming-advanced-port-redirects-host-integration/
[7]:https://speakerdeck.com/gerhardlazu/ansible-and-docker-the-path-to-continuous-delivery-part-1
[8]:http://thechangelog.com/weekly/
[9]:https://github.com/thechangelog/draft
[9]:https://github.com/thechangelog/draft

View File

@ -1,3 +1,4 @@
2q1w2007翻译中
How to convert image, audio and video formats on Ubuntu
================================================================================
If you need to work with a variety of image, audio and video files encoded in all sorts of different formats, you are probably using more than one tool to convert among all those heterogeneous media formats. If there were a versatile all-in-one media conversion tool capable of dealing with all the different image/audio/video formats, that would be awesome.
@ -83,4 +84,4 @@ via: http://xmodulo.com/how-to-convert-image-audio-and-video-formats-on-ubuntu.h
[a]:http://xmodulo.com/author/nanni
[1]:https://launchpad.net/format-junkie
[2]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html
[2]:http://xmodulo.com/how-to-install-deb-file-with-dependencies.html

View File

@ -1,139 +0,0 @@
How to set up RAID 10 for high performance and fault tolerant disk I/O on Linux
================================================================================
A RAID 10 (aka RAID 1+0 or stripe of mirrors) array provides high performance and fault-tolerant disk I/O operations by combining features of RAID 0 (where read/write operations are performed in parallel across multiple drives) and RAID 1 (where data is written identically to two or more drives).
In this tutorial, I'll show you how to set up a software RAID 10 array using five identical 8 GiB disks. While the minimum number of disks for setting up a RAID 10 array is four (e.g., a striped set of two mirrors), we will add an extra spare drive should one of the main drives become faulty. We will also share some tools that you can later use to analyze the performance of your RAID array.
Please note that going through all the pros and cons of RAID 10 and other partitioning schemes (with different-sized drives and filesystems) is beyond the scope of this post.
### How Does a Raid 10 Array Work? ###
If you need to implement a storage solution that supports I/O-intensive operations (such as database, email, and web servers), RAID 10 is the way to go. Let me show you why. Let's refer to the below image.
![](https://farm4.staticflickr.com/3844/15179003008_e48806b3ef_o.png)
Imagine a file that is composed of blocks A, B, C, D, E, and F in the above diagram. Each RAID 1 mirror set (e.g., Mirror 1 or 2) replicates blocks on each of its two devices. Because of this configuration, write performance is reduced because every block has to be written twice, once for each disk, whereas read performance remains unchanged compared to reading from single disks. The bright side is that this setup provides redundancy: normal disk I/O operations can be maintained as long as no mirror loses both of its disks.
The RAID 0 stripe works by dividing data into blocks and writing block A to Mirror 1, block B to Mirror 2 (and so on) simultaneously, thereby improving the overall read and write performance. On the other hand, none of the mirrors contains the entire information for any piece of data committed to the main set. This means that if one of the mirrors fails, the entire RAID 0 component (and therefore the RAID 10 set) is rendered inoperable, with unrecoverable loss of data.
### Setting up a RAID 10 Array ###
There are two possible setups for a RAID 10 array: complex (built in one step) or nested (built by creating two or more RAID 1 arrays, and then using them as component devices in a RAID 0). In this tutorial, we will cover the creation of a complex RAID 10 array due to the fact that it allows us to create an array using either an even or odd number of disks, and can be managed as a single RAID device, as opposed to the nested setup (which only permits an even number of drives, and must be managed as a nested device, dealing with RAID 1 and RAID 0 separately).
It is assumed that you have mdadm installed, and the daemon running on your system. Refer to [this tutorial][1] for details. It is also assumed that a primary partition sd[bcdef]1 has been created on each disk. Thus, the output of:
ls -l /dev | grep 'sd[bcdef]'
should be like:
![](https://farm3.staticflickr.com/2944/15365276992_db79cac82a.jpg)
Let's go ahead and create a RAID 10 array with the following command:
# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]1 --spare-devices=1 /dev/sdf1
![](https://farm3.staticflickr.com/2946/15365277042_28a100baa2_z.jpg)
When the array has been created (it should not take more than a few minutes), the output of:
# mdadm --detail /dev/md0
should look like:
![](https://farm3.staticflickr.com/2946/15362417891_7984c6a05f_o.png)
A couple of things to note before we proceed further.
1. **Used Dev Space** indicates the capacity of each member device used by the array.
2. **Array Size** is the total size of the array. For a RAID 10 array, this is equal to (N*C)/M, where N: number of active devices, C: capacity of active devices, M: number of devices in each mirror. So in this case, (N*C)/M equals to (4*8GiB)/2 = 16GiB.
3. **Layout** refers to the fine details of data layout. The possible layout values are as follows.
----------
- **n** (default option): means near copies. Multiple copies of one data block are at similar offsets in different devices. This layout yields read and write performance similar to that of a RAID 0 array.
![](https://farm3.staticflickr.com/2941/15365413092_0aa41505c2_o.png)
- **o** indicates offset copies. Rather than the chunks being duplicated within a stripe, whole stripes are duplicated, but are rotated by one device so duplicate blocks are on different devices. Thus subsequent copies of a block are in the next drive, one chunk further down. To use this layout for your RAID 10 array, add --layout=o2 to the command that is used to create the array.
![](https://farm3.staticflickr.com/2944/15178897580_6ef923a1cb_o.png)
- **f** represents far copies (multiple copies with very different offsets). This layout provides better read performance but worse write performance. Thus, it is the best option for systems that will need to support far more reads than writes. To use this layout for your RAID 10 array, add --layout=f2 to the command that is used to create the array.
![](https://farm3.staticflickr.com/2948/15179140458_4a803bb194_o.png)
The number that follows the **n**, **f**, and **o** in the --layout option indicates the number of replicas of each data block that are required. The default value is 2, but it can range from 2 up to the number of devices in the array. By providing an adequate number of replicas, you can minimize the I/O impact on individual drives.
4. **Chunk Size**, as per the [Linux RAID wiki][2], is the smallest unit of data that can be written to the devices. The optimal chunk size depends on the rate of I/O operations and the size of the files involved. For large writes, you may see lower overhead by having fairly large chunks, whereas arrays that are primarily holding small files may benefit more from a smaller chunk size. To specify a certain chunk size for your RAID 10 array, add **--chunk=desired_chunk_size** to the command that is used to create the array.
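As a quick check of the **Array Size** formula in point 2 above, the numbers for this example array can be plugged into shell arithmetic:

```shell
# RAID 10 usable capacity: (N * C) / M
N=4   # number of active devices
C=8   # capacity per device, in GiB
M=2   # copies per mirror
echo "$(( N * C / M )) GiB"    # prints: 16 GiB
```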
Unfortunately, there is no one-size-fits-all formula to improve performance. Here are a few guidelines to consider.
- Filesystem: overall, [XFS][3] is said to be the best, while EXT4 remains a good choice.
- Optimal layout: far layout improves read performance, but worsens write performance.
- Number of replicas: more replicas minimize I/O impact, but increase costs as more disks will be needed.
- Hardware: SSDs are more likely to show increased performance (under the same context) than traditional (spinning) disks.
### RAID Performance Tests using DD ###
The following benchmarking tests can be used to check on the performance of our RAID 10 array (/dev/md0).
#### 1. Write operation ####
A single file of 256MB is written to the device:
# dd if=/dev/zero of=/dev/md0 bs=256M count=1 oflag=dsync
512 bytes are written 1000 times:
# dd if=/dev/zero of=/dev/md0 bs=512 count=1000 oflag=dsync
With the dsync flag, dd performs synchronized writes, ensuring that each block is physically committed to the RAID array before the next write begins. This option is used to eliminate caching effects during RAID performance tests.
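If you want to observe the effect of oflag=dsync without touching a real array, the same invocation can be pointed at a scratch file first (a harmless sketch; /tmp/dd-test.img is an arbitrary path):

```shell
# Synchronized write of 1 MiB to a scratch file
dd if=/dev/zero of=/tmp/dd-test.img bs=1M count=1 oflag=dsync
ls -l /tmp/dd-test.img    # the file is exactly 1048576 bytes
rm -f /tmp/dd-test.img    # clean up
```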
#### 2. Read operation ####
256KiB*15000 (3.9 GB) are copied from the array to /dev/null:
# dd if=/dev/md0 of=/dev/null bs=256K count=15000
### RAID Performance Tests Using Iozone ###
[Iozone][4] is a filesystem benchmark tool that allows us to measure a variety of disk I/O operations, including random read/write, sequential read/write, and re-read/re-write. It can export the results to a Microsoft Excel or LibreOffice Calc file.
#### Installing Iozone on CentOS/RHEL 7 ####
Enable [Repoforge][5]. Then:
# yum install iozone
#### Installing Iozone on Debian 7 ####
# aptitude install iozone3
The iozone command below will perform all tests in the RAID-10 array:
# iozone -Ra /dev/md0 -b /tmp/md0.xls
- **-R**: generates an Excel-compatible report to standard out.
- **-a**: runs iozone in a full automatic mode with all tests and possible record/file sizes. Record sizes: 4k-16M and file sizes: 64k-512M.
- **-b /tmp/md0.xls**: stores test results in a specified file.
Hope this helps. Feel free to add your thoughts or add tips to consider on how to improve performance of RAID 10.
--------------------------------------------------------------------------------
via: http://xmodulo.com/setup-raid10-linux.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://xmodulo.com/create-software-raid1-array-mdadm-linux.html
[2]:https://raid.wiki.kernel.org/
[3]:http://ask.xmodulo.com/create-mount-xfs-file-system-linux.html
[4]:http://www.iozone.org/
[5]:http://xmodulo.com/how-to-set-up-rpmforge-repoforge-repository-on-centos.html

View File

@ -0,0 +1,147 @@
[translating by KayGuoWhu]
How to check hard disk health on Linux using smartmontools
================================================================================
If there is something that you never want to happen on your Linux system, that is having hard drives die on you without any warning. [Backups][1] and storage technologies such as [RAID][2] can get you back on your feet in no time, but the cost associated with a sudden loss of a hardware device can take a considerable toll on your budget, especially if you haven't planned ahead of time what to do in such circumstances.
To avoid running into this kind of setback, you can try [smartmontools][3], a software package that manages and monitors storage hardware using Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T., or just SMART). Most modern ATA/SATA, SCSI/SAS, and solid-state hard disks nowadays come with the SMART system built in. The purpose of SMART is to monitor the reliability of the hard drive, to predict drive failures, and to carry out different types of drive self-tests. The smartmontools package consists of two utility programs, smartctl and smartd. Together, they provide advanced warnings of disk degradation and failure on Linux platforms.
This tutorial provides an installation and configuration guide for smartmontools on Linux.
### Installing Smartmontools ###
Installation of smartmontools is straightforward, as it is available in the base repositories of most Linux distros.
#### Debian and derivatives: ####
# aptitude install smartmontools
#### Red Hat-based distributions: ####
# yum install smartmontools
### Checking Hard Drive Health with Smartctl ###
First off, list the hard drives connected to your system with the following command:
# ls -l /dev | grep -E 'sd|hd'
The output should be similar to:
![](https://farm4.staticflickr.com/3953/15352881249_96c09f7ccc_o.png)
where sdx indicates the device names assigned to the hard drives installed on your machine.
To display information about a particular hard disk (e.g., device model, S/N, firmware version, size, ATA version/revision, availability and status of SMART capability), run smartctl with "--info" flag, and specify the hard drive's device name as follows.
In this example, we will choose /dev/sda.
# smartctl --info /dev/sda
![](https://farm4.staticflickr.com/3928/15353873870_00a8dddf89_z.jpg)
Although the ATA version information may go unnoticed at first, it is one of the most important factors when looking for a replacement part. Each ATA version is backward compatible with the previous versions. For example, older ATA-1 or ATA-2 devices work fine on ATA-6 and ATA-7 interfaces, but unfortunately, the reverse is not true. In cases where the device version and interface version don't match, they work together at the capabilities of the lesser of the two. That being said, an ATA-7 hard drive is the safest choice for a replacement part in this case.
You can examine the health status of a particular hard drive with:
# smartctl -s on -a /dev/sda
In this command, the "-s on" flag enables SMART on the specified device. You can omit it if SMART support is already enabled for /dev/sda.
The SMART information for a disk consists of several sections. Among other things, "READ SMART DATA" section shows the overall health status of the drive.
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
The result of this test can be either PASSED or FAILED. In the latter case, a hardware failure is imminent, so you may want to start backing up your important data from that drive!
The next thing you will want to look at is the [SMART attribute][4] table, as shown below.
![](https://farm6.staticflickr.com/5612/15539511935_dd62f6c9ef_z.jpg)
Basically, the SMART attribute table lists the values of a number of attributes defined for a particular drive by its manufacturer, as well as the failure thresholds for these attributes. This table is automatically populated and updated by the drive firmware.
- **ID#**: attribute ID, usually a decimal (or hex) number between 1 and 255.
- **ATTRIBUTE_NAME**: attribute names defined by a drive manufacturer.
- **FLAG**: attribute handling flag (we can ignore it).
- **VALUE**: one of the most important pieces of information in the table, indicating a "normalized" value of a given attribute, whose range is between 1 and 253. 253 means the best condition, while 1 means the worst. Depending on the attribute and the manufacturer, an initial VALUE can be set to either 100 or 200.
- **WORST**: the lowest VALUE ever recorded.
- **THRESH**: the lowest value that WORST should ever be allowed to fall to, before reporting a given hard drive as FAILED.
- **TYPE**: the type of attribute (either Pre-fail or Old_age). A Pre-fail attribute is considered a critical attribute; one that participates in the overall SMART health assessment (PASSED/FAILED) of the drive. If any Pre-fail attribute fails, then the drive is considered "about to fail." On the other hand, an Old_age attribute is considered (for SMART purposes) a non-critical attribute (e.g., normal wear and tear); one that does not fail the drive per se.
- **UPDATED**: indicates how often an attribute is updated. Offline represents the case when offline tests are being performed on the drive.
- **WHEN_FAILED**: this will be set to "FAILING_NOW" (if VALUE is less than or equal to THRESH), "In_the_past" (if WORST is less than or equal to THRESH), or "-" (if none of the above). In case of "FAILING_NOW", back up your important files ASAP, especially if the attribute is of TYPE Pre-fail. "In_the_past" means that the attribute has failed before, but that it was OK at the time of running the test. "-" indicates that this attribute has never failed.
- **RAW_VALUE**: a manufacturer-defined raw value, from which VALUE is derived.
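The FAILING_NOW / In_the_past logic described above is simple enough to sketch in shell, using a fabricated VALUE/WORST/THRESH triple (the numbers are illustrative, not from a real drive):

```shell
# Fabricated attribute values for illustration only.
value=100
worst=100
thresh=36

# Apply the WHEN_FAILED rules: VALUE <= THRESH => FAILING_NOW,
# else WORST <= THRESH => In_the_past, else "-".
if [ "$value" -le "$thresh" ]; then
    status="FAILING_NOW"
elif [ "$worst" -le "$thresh" ]; then
    status="In_the_past"
else
    status="-"
fi
echo "$status"   # prints "-" for this healthy example
```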
At this point you may be thinking, "Yes, smartctl seems like a nice tool, but I would like to avoid the hassle of having to run it manually." Wouldn't it be nice if it could be run at specified intervals, and at the same time inform me of the tests' results?
Fortunately, the answer is yes. And that's where smartd comes in.
### Configuring Smartctl and Smartd for Live Monitoring ###
First, edit smartctl's configuration file (/etc/default/smartmontools) to tell it to start smartd at system startup, and to specify check intervals in seconds (e.g., 7200 = 2 hours).
start_smartd=yes
smartd_opts="--interval=7200"
Next, edit smartd's configuration file (/etc/smartd.conf) to add the following line.
/dev/sda -m myemail@mydomain.com -M test
- **-m <email-address>**: specifies an email address to send test reports to. This can be a system user such as root, or an email address such as myemail@mydomain.com if the server is configured to relay emails to the outside of your system.
- **-M <delivery-type>**: specifies the desired type of delivery for an email report.
- **once**: sends only one warning email for each type of disk problem detected.
- **daily**: sends additional warning reminder emails, once per day, for each type of disk problem detected.
- **diminishing**: sends additional warning reminder emails, after a one-day interval, then a two-day interval, then a four-day interval, and so on for each type of disk problem detected. Each interval is twice as long as the previous interval.
- **test**: sends a single test email immediately upon smartd startup.
- **exec PATH**: runs the executable PATH instead of the default mail command. PATH must point to an executable binary file or script. This allows you to specify a desired action (beep the console, shut down the system, and so on) when a problem is detected.
Save the changes and restart smartd.
You should expect this kind of email to be sent by smartd.
![](https://farm6.staticflickr.com/5612/15539511945_b344814c74_o.png)
Luckily for us, no error was detected. Had it not been so, the errors would have appeared below the line "The following warning/error was logged by the smartd daemon."
Finally, you can schedule tests at your preferred schedule using the "-s" flag and the regular expression in the form of "T/MM/DD/d/HH", where:
T in the regular expression indicates the kind of test:
- L: long test
- S: short test
- C: Conveyance test (ATA only)
- O: Offline (ATA only)
and the remaining characters represent the date and time when the test should be performed:
- MM is the month of the year.
- DD is the day of the month.
- HH is the hour of day.
- d is the day of the week (ranging from 1=Monday through 7=Sunday).
- MM, DD, and HH are expressed with two decimal digits.
A dot in any of these places indicates all possible values. An expression inside parentheses such as (A|B|C) denotes any one of the three possibilities A, B, or C. An expression inside square brackets such as [1-5] denotes a range (1 through 5 inclusive).
For example, to perform a long test every business day at 1 pm for all disks, add the following line to /etc/smartd.conf. Make sure to restart smartd.
DEVICESCAN -s (L/../../[1-5]/13)
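To convince yourself that a given time slot matches such a schedule expression, you can test the pattern with grep (a sketch; the slot string is made up, and smartd's outer parentheses are dropped since they are only its own grouping syntax):

```shell
# A hypothetical time slot: long test (L), month 10, day 17,
# weekday 5 (Friday), at 13:00.
slot="L/10/17/5/13"

# The schedule pattern from the smartd.conf line above.
if echo "$slot" | grep -Eq '^L/../../[1-5]/13$'; then
    echo "slot matches schedule"
fi
```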
### Conclusion ###
Whether you want to quickly check the electrical and mechanical performance of a disk, or perform a longer and more thorough test that scans the entire disk surface, do not let yourself get so caught up in your day-to-day responsibilities as to forget to regularly check on the health of your disks. You will thank yourself later!
--------------------------------------------------------------------------------
via: http://xmodulo.com/check-hard-disk-health-linux-smartmontools.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://xmodulo.com/how-to-create-secure-incremental-offsite-backup-in-linux.html
[2]:http://xmodulo.com/create-software-raid1-array-mdadm-linux.html
[3]:http://www.smartmontools.org/
[4]:http://en.wikipedia.org/wiki/S.M.A.R.T.


@ -0,0 +1,89 @@
johnhoow translating...
pidstat - Monitor and Find Statistics for Linux Processes
================================================================================
The **pidstat** command is used for monitoring individual tasks currently being managed by the Linux kernel. It writes to standard output activities for every task managed by the Linux kernel. The pidstat command can also be used for monitoring the child processes of selected tasks. The interval parameter specifies the amount of time in seconds between each report. A value of 0 (or no parameters at all) indicates that tasks statistics are to be reported for the time since system startup (boot).
### How to Install pidstat ###
pidstat is part of the sysstat suite, which contains various system performance tools for Linux. It is available in the repositories of most Linux distributions.
To install it on Debian / Ubuntu Linux systems you can use the following command:
# apt-get install sysstat
If you are using CentOS / Fedora / RHEL Linux you can install the packages like this:
# yum install sysstat
### Using pidstat ###
Running pidstat without any argument is equivalent to specifying -p ALL but only active tasks (tasks with non-zero statistics values) will appear in the report.
# pidstat
![pidstat](http://blog.linoxide.com/wp-content/uploads/2014/09/pidstat.jpg)
In the output you can see:
- **PID** - The identification number of the task being monitored.
- **%usr** - Percentage of CPU used by the task while executing at the user level (application), with or without nice priority. Note that this field does NOT include time spent running a virtual processor.
- **%system** - Percentage of CPU used by the task while executing at the system level.
- **%guest** - Percentage of CPU spent by the task in virtual machine (running a virtual processor).
- **%CPU** - Total percentage of CPU time used by the task. In an SMP environment, the task's CPU usage will be divided by the total number of CPUs if option -I has been entered on the command line.
- **CPU** - Processor number to which the task is attached.
- **Command** - The command name of the task.
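The -I normalization mentioned for %CPU is just a division by the processor count; a trivial sketch with made-up figures:

```shell
# Made-up figures: a task at 150 %CPU on a 2-CPU machine.
pct=150
ncpu=2
normalized=$(( pct / ncpu ))
echo "${normalized}%"   # the machine-wide share with -I
```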
### I/O Statistics ###
We can use pidstat to get I/O statistics about a process using the -d flag. For example:
# pidstat -d -p 8472
![pidstat io](http://blog.linoxide.com/wp-content/uploads/2014/09/pidstat-io.jpg)
The IO output will display a few new columns:
- **kB_rd/s** - Number of kilobytes the task has caused to be read from disk per second.
- **kB_wr/s** - Number of kilobytes the task has caused, or shall cause to be written to disk per second.
- **kB_ccwr/s** - Number of kilobytes whose writing to disk has been cancelled by the task.
### Page faults and memory usage ###
Using the -r flag you can get information about memory usage and page faults.
![pidstat pf mem](http://blog.linoxide.com/wp-content/uploads/2014/09/pidstat-pfmem.jpg)
Important columns:
- **minflt/s** - Total number of minor faults the task has made per second, those which have not required loading a memory page from disk.
- **majflt/s** - Total number of major faults the task has made per second, those which have required loading a memory page from disk.
- **VSZ** - Virtual Size: The virtual memory usage of the entire task in kilobytes.
- **RSS** - Resident Set Size: The non-swapped physical memory used by the task in kilobytes.
### Examples ###
**1.** You can use pidstat to find a memory leak using the following command:
# pidstat -r 2 5
This will give you 5 reports, one every 2 seconds, about the current page fault statistics; the problem process should be easy to spot.
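Spotting the problem amounts to watching the RSS column grow between reports; a minimal sketch with fabricated samples:

```shell
# Two fabricated RSS samples (in KB) for the same PID, taken from
# consecutive pidstat -r reports.
rss_first=120000
rss_last=180000

growth=$(( rss_last - rss_first ))
if [ "$growth" -gt 0 ]; then
    echo "RSS grew by ${growth} KB between samples - possible leak"
fi
```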
**2.** To show all children of the mysql server, you can use the following command:
# pidstat -T CHILD -C mysql
**3.** To combine all statistics in a single report you can use:
# pidstat -urd -h
--------------------------------------------------------------------------------
via: http://linoxide.com/linux-command/linux-pidstat-monitor-statistics-procesess/
作者:[Adrian Dinu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/adriand/


@ -0,0 +1,132 @@
How to monitor a log file on Linux with logwatch
================================================================================
The Linux operating system and many applications create special files commonly referred to as "logs" to record their operational events. These system logs or application-specific log files are an essential tool when it comes to understanding and troubleshooting the behavior of the operating system and third-party applications. However, log files are not precisely what you would call "light" or "easy" reading, and analyzing raw log files by hand is often time-consuming and tedious. For that reason, any utility that can convert raw log files into a more user-friendly log digest is a great boon for sysadmins.
[logwatch][1] is an open-source log parser and analyzer written in Perl, which can parse and convert raw log files into a structured format, producing a customizable report based on your use cases and requirements. In logwatch, the focus is on producing a more easily consumable log summary, not on real-time log processing and monitoring. As such, logwatch is typically invoked as an automated cron task with the desired time and frequency, or manually from the command line whenever log processing is needed. Once a log report is generated, logwatch can email the report to you, save it to a file, or display it on the screen.
A logwatch report is fully customizable in terms of verbosity and processing coverage. The log processing engine of logwatch is extensible, in a sense that if you want to enable logwatch for a new application, you can write a log processing script (in Perl) for the application's log file, and plug it under logwatch.
One downside of logwatch is that it does not include in its report detailed timestamp information available in original log files. You will only know that a particular event was logged in a requested range of time, and you will have to access original log files to get exact timing information.
### Installing Logwatch ###
On Debian and derivatives:
# aptitude install logwatch
On Red Hat-based distributions:
# yum install logwatch
### Configuring Logwatch ###
During installation, the main configuration file (logwatch.conf) is placed in /etc/logwatch/conf. Configuration options defined in this file override system-wide settings defined in /usr/share/logwatch/default.conf/logwatch.conf.
If logwatch is launched from the command line without any arguments, the custom options defined in /etc/logwatch/conf/logwatch.conf will be used. However, if any command-line arguments are specified with logwatch command, those arguments in turn override any default/custom settings in /etc/logwatch/conf/logwatch.conf.
In this article, we will customize several default settings of logwatch by editing /etc/logwatch/conf/logwatch.conf file.
Detail = <Low, Med, High, or a number>
"Detail" directive controls the verbosity of a logwatch report. It can be a positive integer, or High, Med, Low, which correspond to 10, 5, and 0, respectively.
MailTo = youremailaddress@yourdomain.com
"MailTo" directive is used if you want to have a logwatch report emailed to you. To send a logwatch report to multiple recipients, you can specify their email addresses separated with a space. To be able to use this directive, however, you will need to configure a local mail transfer agent (MTA) such as sendmail or Postfix on the server where logwatch is running.
Range = <Yesterday|Today|All>
"Range" directive specifies the time duration of a logwatch report. Common values for this directive are Yesterday, Today or All. When "Range = All" is used, "Archive = yes" directive is also needed, so that all archived versions of a given log file (e.g., /var/log/maillog, /var/log/maillog.X, or /var/log/maillog.X.gz) are processed.
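For instance, to process a log file together with all of its rotated archives, the two directives mentioned above combine as follows in logwatch.conf:

```
Range = All
Archive = yes
```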
Besides such common range values, you can also use more complex range options such as the following.
- Range = "2 hours ago for that hour"
- Range = "-5 days"
- Range = "between -7 days and -3 days"
- Range = "since September 15, 2014"
- Range = "first Friday in October"
- Range = "2014/10/15 12:50:15 for that second"
To be able to use such free-form range examples, you need to install Date::Manip Perl module from CPAN. Refer to [this post][2] for CPAN module installation instructions.
Service = <service-name-1>
Service = <service-name-2>
. . .
"Service" option specifies one or more services to monitor using logwatch. All available services are listed in /usr/share/logwatch/scripts/services, which cover essential system services (e.g., pam, secure, iptables, syslogd), as well as popular application services such as sudo, sshd, http, fail2ban, and samba. If you want to add a new service to the list, you will have to write a corresponding log processing Perl script, and place it in this directory.
If this option is used to select specific services, you need to comment out the line "Service = All" in /usr/share/logwatch/default.conf/logwatch.conf.
![](https://farm6.staticflickr.com/5612/14948933564_94cbc5353c_z.jpg)
Format = <text|html>
"Format" directive specifies the format (e.g., text or HTML) of a logwatch report.
Output = <file|mail|stdout>
"Output" directive indicates where a logwatch report should be sent. It can be saved to a file (file), emailed (mail), or shown to screen (stdout).
### Analyzing Log Files with Logwatch ###
To understand how to analyze log files using logwatch, consider the following logwatch.conf example:
Detail = High
MailTo = youremailaddress@yourdomain.com
Range = Today
Service = http
Service = postfix
Service = zz-disk_space
Format = html
Output = mail
Under these settings, logwatch will process log files generated by three services (http, postfix and zz-disk_space) today, produce an HTML report with high verbosity, and email it to you.
If you do not want to customize /etc/logwatch/conf/logwatch.conf, you can leave the default configuration file unchanged, and instead run logwatch from the command line as follows. It will achieve the same outcome.
# logwatch --detail 10 --mailto youremailaddress@yourdomain.com --range today --service http --service postfix --service zz-disk_space --format html --output mail
The emailed report looks like the following.
![](https://farm6.staticflickr.com/5611/15383540608_57dc37e3d6_z.jpg)
The email header includes links to navigate the report sections, one for each selected service, and also "Back to top" links.
You will want to use the email report option when the list of recipients is small. Otherwise, you can have logwatch save a generated HTML report within a network share that can be accessed by all the individuals who need to see the report. To do so, make the following modifications in our previous example:
Detail = High
Range = Today
Service = http
Service = postfix
Service = zz-disk_space
Format = html
Output = file
Filename = /var/www/html/logs/dev1.html
Equivalently, run logwatch from the command line as follows.
# logwatch --detail 10 --range today --service http --service postfix --service zz-disk_space --format html --output file --filename /var/www/html/logs/dev1.html
Finally, let's configure logwatch to be executed by cron on your desired schedules. The following example will run a logwatch cron job every business day at 12:15 pm:
# crontab -e
----------
15 12 * * 1,2,3,4,5 /sbin/logwatch
Hope this helps. Feel free to comment to share your own tips and ideas with the community!
--------------------------------------------------------------------------------
via: http://xmodulo.com/monitor-log-file-linux-logwatch.html
作者:[Gabriel Cánepa][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/gabriel
[1]:http://sourceforge.net/projects/logwatch/
[2]:http://xmodulo.com/how-to-install-perl-modules-from-cpan.html


@ -0,0 +1,54 @@
wangjiezhe translating...
Linux FAQs with Answers--How to change character encoding of a text file on Linux
================================================================================
> **Question**: I have an "iso-8859-1"-encoded subtitle file which shows broken characters on my Linux system, and I would like to change its text encoding to "utf-8" character set. In Linux, what is a good tool to convert character encoding in a text file?
As you already know, computers can only handle binary numbers at the lowest level - not characters. When a text file is saved, each character in that file is mapped to bits, and it is those "bits" that are actually stored on disk. When an application later opens that text file, each of those binary numbers is read and mapped back to the original characters that we humans understand. This "save and open" process is best performed when all applications that need access to a text file "understand" its encoding, meaning the way binary numbers are mapped to characters, and thus can ensure a "round trip" of understandable data.
If different applications do not use the same encoding while dealing with a text file, non-readable characters will be shown wherever special characters are found in the original file. By special characters we mean those that are not part of the English alphabet, such as accented characters (e.g., ñ, á, ü).
The questions then become: 1) how can I tell which character encoding a certain text file is using, and 2) how can I convert it to some other encoding of my choosing?
### Step One ###
In order to find out the character encoding of a file, we will use a command-line tool called file. Since the file command is a standard UNIX program, we can expect to find it in all modern Linux distros.
Run the following command:
$ file --mime-encoding filename
![](https://farm6.staticflickr.com/5602/15595534261_1a7b4d16a2.jpg)
### Step Two ###
The next step is to check what kinds of text encodings are supported on your Linux system. For this, we will use a tool called iconv with the "-l" flag (lowercase L), which will list all the currently supported encodings.
$ iconv -l
The iconv utility is part of the GNU libc libraries, so it is available in all Linux distributions out of the box.
### Step Three ###
Once we have selected a target encoding among those supported on our Linux system, let's run the following command to perform the conversion:
$ iconv -f old_encoding -t new_encoding filename
For example, to convert iso-8859-1 to utf-8:
$ iconv -f iso-8859-1 -t utf-8 input.txt
![](https://farm4.staticflickr.com/3943/14978042143_a516e0b10b_o.png)
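Since iconv ships with glibc, you can verify a lossless round trip on a scratch file; the sketch below uses arbitrary /tmp paths:

```shell
# Write a single "ñ" (byte 0xF1 in iso-8859-1) to a scratch file.
printf '\361\n' > /tmp/latin1.txt

# Convert to UTF-8 and back again.
iconv -f iso-8859-1 -t utf-8 /tmp/latin1.txt > /tmp/utf8.txt
iconv -f utf-8 -t iso-8859-1 /tmp/utf8.txt  > /tmp/roundtrip.txt

# A lossless round trip reproduces the original bytes exactly.
cmp -s /tmp/latin1.txt /tmp/roundtrip.txt && echo "round trip OK"
```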
Knowing how to use these tools together as we have demonstrated, you can, for example, fix a broken subtitle file:
![](https://farm6.staticflickr.com/5612/15412197967_0dfe5078f9_z.jpg)
--------------------------------------------------------------------------------
via: http://ask.xmodulo.com/change-character-encoding-text-file-linux.html
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出


@ -0,0 +1,97 @@
SPccman translating
Wine 1.7.29 (Development Version) Released - Install in RedHat and Debian Based Systems
================================================================================
**Wine**, one of the most popular and powerful open source applications for Linux, is used to run Windows-based applications and games on the Linux platform without any trouble.
![Install Wine (Development Version) in Linux](http://www.tecmint.com/wp-content/uploads/2014/05/Install-Wine-Development-Version.png)
Install Wine (Development Version) in Linux
The WineHQ team recently announced a new development version, **Wine 1.7.29**. This new development build arrives with a number of important new features and **44** bug fixes.
The Wine team keeps releasing development builds almost on a weekly basis, adding numerous new features and fixes. Each new version brings support for new applications and games, making Wine a must-have tool for every user who wants to run Windows-based software on a Linux platform.
According to the changelog, the following key features were added in this release:
- Added much improved shaping and BiDi mirroring in DirectWrite.
- A few page fault handling problems have been fixed.
- Included a few more C runtime functions.
- Various bug fixes.
More in-depth details about this build can be found on the official [changelog][1] page.
This article guides you through installing the most recent development version of **Wine 1.7.29** on **Red Hat**- and **Debian**-based systems such as CentOS, Fedora, Ubuntu, Linux Mint and other supported distributions.
### Installing Wine 1.7.29 Development Version in Linux ###
Unfortunately, there is no official Wine repository available for **Red Hat**-based systems, and the only way to install Wine is to compile it from source. To do this, you need to install some dependency packages such as gcc, flex, bison, libX11-devel, freetype-devel and the Development Tools group. These packages are required to compile Wine from source. Let's install them using the following **YUM** commands.
### On RedHat, Fedora and CentOS ###
# yum -y groupinstall 'Development Tools'
# yum -y install flex bison libX11-devel freetype-devel
Next, download the latest development version of Wine (i.e. **1.7.29**) and extract the source tarball package using the following commands.
$ cd /tmp
$ wget http://citylan.dl.sourceforge.net/project/wine/Source/wine-1.7.29.tar.bz2
$ tar -xvf wine-1.7.29.tar.bz2 -C /tmp/
Now it's time to compile and build the Wine installer using the following commands as a normal user.
Note: The installation process might take up to **15-20** minutes depending on your internet and hardware speed; during installation it will ask you to enter the **root** password.
#### On 32-Bit Systems ####
$ cd wine-1.7.29/
$ ./tools/wineinstall
#### On 64-Bit Systems ####
$ cd wine-1.7.29/
$ ./configure --enable-win64
$ make
# make install
### On Ubuntu, Debian and Linux Mint ###
Under **Ubuntu** based systems, you can easily install the latest development build of Wine using the official **PPA**. Open a terminal and run the following commands with sudo privileges.
$ sudo add-apt-repository ppa:ubuntu-wine/ppa
$ sudo apt-get update
$ sudo apt-get install wine1.7 winetricks
**Note**: At the time of writing this article, the available version was **1.7.26** and the new build had not yet landed in the official Wine repository, but the above instructions will install **1.7.29** once it is made available.
Once the installation completes successfully, you can install or run any Windows-based applications or games using Wine as shown below.
$ wine notepad
$ wine notepad.exe
$ wine c:\\windows\\notepad.exe
**Note**: Please remember, this is a development build and should not be installed or used on production systems. It is advised to use this version only for testing purposes.
If you're looking for the most recent stable version of Wine, you can go through the following articles, which describe how to install the latest stable version on almost all Linux environments.
- [Install Wine 1.6.2 (Stable) in RHEL, CentOS and Fedora][2]
- [Install Wine 1.6.2 (Stable) in Debian, Ubuntu and Mint][3]
### Reference Links ###
- [WineHQ Homepage][4]
--------------------------------------------------------------------------------
via: http://www.tecmint.com/install-wine-in-linux/
作者:[Ravi Saive][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.tecmint.com/author/admin/
[1]:http://www.winehq.org/announce/1.7.29
[2]:http://www.tecmint.com/install-wine-in-rhel-centos-and-fedora/
[3]:http://www.tecmint.com/install-wine-on-ubuntu-and-linux-mint/
[4]:http://www.winehq.org/


@ -0,0 +1,312 @@
How to turn your CentOS box into a BGP router using Quagga
================================================================================
In a [previous tutorial][1] (translator's note: the original of that article was filed as "20140928 How to turn your CentOS box into an OSPF router using Quagga.md"; if that translation has already been published, update this link), I described how we can easily turn a Linux box into a fully-fledged OSPF router using Quagga, an open source routing software suite. In this tutorial, I will focus on **converting a Linux box into a BGP router, again using Quagga**, and demonstrate how to set up BGP peering with other BGP routers.
Before we get into details, a little background on BGP may be useful. Border Gateway Protocol (or BGP) is the de-facto standard inter-domain routing protocol of the Internet. In BGP terminology, the global Internet is a collection of tens of thousands of interconnected Autonomous Systems (ASes), where each AS represents an administrative domain of networks managed by a particular provider.
To make its networks globally routable, each AS needs to know how to reach all other ASes in the Internet. That is when BGP comes into play. BGP is the language used by an AS to exchange route information with other neighboring ASes. The route information, often called BGP routes or BGP prefixes, contains AS number (ASN; a globally unique number) and its associated IP address block(s). Once all BGP routes are learned and populated in local BGP routing tables, each AS will know how to reach any public IP addresses on the Internet.
The ability to route across different domains (ASes) is the primary reason why BGP is called an Exterior Gateway Protocol (EGP) or inter-domain protocol. Whereas routing protocols such as OSPF, IS-IS, RIP and EIGRP are all Interior Gateway Protocols (IGPs) or intra-domain routing protocols.
### Test Scenarios ###
For this tutorial, let us consider the following topology.
![](https://farm6.staticflickr.com/5598/15603223841_4c76343313_z.jpg)
We assume that service provider A wants to establish a BGP peering with service provider B to exchange routes. The details of their AS and IP address spaces are like the following.
- **Service provider A**: ASN (100), IP address space (100.100.0.0/22), IP address assigned to eth1 of a BGP router (100.100.1.1)
- **Service provider B**: ASN (200), IP address space (200.200.0.0/22), IP address assigned to eth1 of a BGP router (200.200.1.1)
Router A and router B will be using the 100.100.0.0/30 subnet for connecting to each other. In theory, any subnet reachable from both service providers can be used for interconnection. In real life, it is advisable to use a /30 subnet from service provider A or service provider B's public IP address space.
### Installing Quagga on CentOS ###
If Quagga is not already installed, we install Quagga using yum.
# yum install quagga
If you are using CentOS 7, you need to apply the following policy change for SELinux. Otherwise, SELinux will prevent Zebra daemon from writing to its configuration directory. You can skip this step if you are using CentOS 6.
# setsebool -P zebra_write_config 1
The Quagga software suite contains several daemons that work together. For BGP routing, we will focus on setting up the following two daemons.
- **Zebra**: a core daemon responsible for kernel interfaces and static routes.
- **BGPd**: a BGP daemon.
### Configuring Logging ###
After Quagga is installed, the next step is to configure Zebra to manage network interfaces of BGP routers. We start by creating a Zebra configuration file and enabling logging.
# cp /usr/share/doc/quagga-XXXXX/zebra.conf.sample /etc/quagga/zebra.conf
On CentOS 6:
# service zebra start
# chkconfig zebra on
For CentOS 7:
# systemctl start zebra
# systemctl enable zebra
Quagga offers a dedicated command-line shell called vtysh, where you can type commands which are compatible with those supported by router vendors such as Cisco and Juniper. We will be using vtysh shell to configure BGP routers in the rest of the tutorial.
To launch vtysh command shell, type:
# vtysh
The prompt will be changed to hostname, which indicates that you are inside vtysh shell.
Router-A#
Now we specify the log file for Zebra by using the following commands:
Router-A# configure terminal
Router-A(config)# log file /var/log/quagga/quagga.log
Router-A(config)# exit
Save Zebra configuration permanently:
Router-A# write
Repeat this process on Router-B as well.
### Configuring Peering IP Addresses ###
Next, we configure peering IP addresses on available interfaces.
Router-A# show interface
----------
Interface eth0 is up, line protocol detection is disabled
. . . . .
Interface eth1 is up, line protocol detection is disabled
. . . . .
Configure eth0 interface's parameters:
Router-A# configure terminal
Router-A(config)# interface eth0
Router-A(config-if)# ip address 100.100.0.1/30
Router-A(config-if)# description "to Router-B"
Router-A(config-if)# no shutdown
Router-A(config-if)# exit
Go ahead and configure eth1 interface's parameters:
Router-A(config)# interface eth1
Router-A(config-if)# ip address 100.100.1.1/24
Router-A(config-if)# description "test ip from provider A network"
Router-A(config-if)# no shutdown
Router-A(config-if)# exit
Now verify configuration:
Router-A# show interface
----------
Interface eth0 is up, line protocol detection is disabled
Description: "to Router-B"
inet 100.100.0.1/30 broadcast 100.100.0.3
Interface eth1 is up, line protocol detection is disabled
Description: "test ip from provider A network"
inet 100.100.1.1/24 broadcast 100.100.1.255
----------
Router-A# show interface description
----------
Interface Status Protocol Description
eth0 up unknown "to Router-B"
eth1 up unknown "test ip from provider A network"
If everything looks alright, don't forget to save.
Router-A# write
Repeat to configure interfaces on Router-B as well.
Before moving forward, verify that you can ping each other's IP address.
Router-A# ping 100.100.0.2
----------
PING 100.100.0.2 (100.100.0.2) 56(84) bytes of data.
64 bytes from 100.100.0.2: icmp_seq=1 ttl=64 time=0.616 ms
Next, we will move on to configure BGP peering and prefix advertisement settings.
### Configuring BGP Peering ###
The Quagga daemon responsible for BGP is called bgpd. First, we will prepare its configuration file.
# cp /usr/share/doc/quagga-XXXXXXX/bgpd.conf.sample /etc/quagga/bgpd.conf
On CentOS 6:
# service bgpd start
# chkconfig bgpd on
For CentOS 7
# systemctl start bgpd
# systemctl enable bgpd
Now, let's enter Quagga shell.
# vtysh
First verify that there are no configured BGP sessions. In some versions, you may find a BGP session with AS 7675. We will remove it as we don't need it.
Router-A# show running-config
----------
... ... ...
router bgp 7675
bgp router-id 200.200.1.1
... ... ...
We will remove any pre-configured BGP session, and replace it with our own.
Router-A# configure terminal
Router-A(config)# no router bgp 7675
Router-A(config)# router bgp 100
Router-A(config-router)# no auto-summary
Router-A(config-router)# no synchronization
Router-A(config-router)# neighbor 100.100.0.2 remote-as 200
Router-A(config-router)# neighbor 100.100.0.2 description "provider B"
Router-A(config-router)# exit
Router-A(config)# exit
Router-A# write
Router-B should be configured in a similar way. The following configuration is provided as reference.
Router-B# configure terminal
Router-B(config)# no router bgp 7675
Router-B(config)# router bgp 200
Router-B(config-router)# no auto-summary
Router-B(config-router)# no synchronization
Router-B(config-router)# neighbor 100.100.0.1 remote-as 100
Router-B(config-router)# neighbor 100.100.0.1 description "provider A"
Router-B(config-router)# exit
Router-B(config)# exit
Router-B# write
When both routers are configured, a BGP peering between the two should be established. Let's verify that by running:
Router-A# show ip bgp summary
![](https://farm6.staticflickr.com/5614/15420135700_e3568d2e5f_z.jpg)
In the output, we should look at the section "State/PfxRcd." If the peering is down, the output will show 'Idle' or 'Active'. Remember, the word 'Active' inside a router is always bad. It means that the router is actively seeking a neighbor, prefix or route. When the peering is up, the output under "State/PfxRcd" should show the number of prefixes received from this particular neighbor.
In this example output, the BGP peering is just up between AS 100 and AS 200. Thus no prefixes are being exchanged, and the number in the rightmost column is 0.
### Configuring Prefix Advertisements ###
As specified at the beginning, AS 100 will advertise a prefix 100.100.0.0/22, and AS 200 will advertise a prefix 200.200.0.0/22 in our example. Those prefixes need to be added to BGP configuration as follows.
On Router-A:
Router-A# configure terminal
Router-A(config)# router bgp 100
Router-A(config-router)# network 100.100.0.0/22
Router-A(config-router)# end
Router-A# write
On Router-B:
Router-B# configure terminal
Router-B(config)# router bgp 200
Router-B(config-router)# network 200.200.0.0/22
Router-B(config-router)# end
Router-B# write
At this point, both routers should start advertising prefixes as required.
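For reference, after the `write` commands above, the saved /etc/quagga/bgpd.conf on Router-A should contain lines roughly like the following (the exact layout and any auto-selected router-id may differ between Quagga versions):

```
router bgp 100
 network 100.100.0.0/22
 neighbor 100.100.0.2 remote-as 200
 neighbor 100.100.0.2 description "provider B"
```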
### Testing Prefix Advertisements ###
First of all, let's verify whether the number of prefixes has changed now.
Router-A# show ip bgp summary
![](https://farm6.staticflickr.com/5608/15419095659_0ebb384eee_z.jpg)
To view more details on the prefixes being advertised, we can use the following command, which shows the prefixes advertised to neighbor 100.100.0.2.
Router-A# show ip bgp neighbors 100.100.0.2 advertised-routes
![](https://farm6.staticflickr.com/5597/15419618208_4604e5639a_z.jpg)
To check which prefixes we are receiving from that neighbor:
Router-A# show ip bgp neighbors 100.100.0.2 routes
![](https://farm4.staticflickr.com/3935/15606556462_e17eae7f49_z.jpg)
We can also check all the BGP routes:
Router-A# show ip bgp
![](https://farm6.staticflickr.com/5609/15419618228_5c776423a5_z.jpg)
These commands below can be used to check which routes in the routing table are learned via BGP.
Router-A# show ip route
----------
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
I - ISIS, B - BGP, > - selected route, * - FIB route
C>* 100.100.0.0/30 is directly connected, eth0
C>* 100.100.1.0/24 is directly connected, eth1
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:06:45
----------
Router-A# show ip route bgp
----------
B>* 200.200.0.0/22 [20/0] via 100.100.0.2, eth0, 00:08:13
The BGP-learned routes should also be present in the Linux routing table.
[root@Router-A~]# ip route
----------
100.100.0.0/30 dev eth0 proto kernel scope link src 100.100.0.1
100.100.1.0/24 dev eth1 proto kernel scope link src 100.100.1.1
200.200.0.0/22 via 100.100.0.2 dev eth0 proto zebra
Finally, we are going to test with the ping command. The ping should be successful.
[root@Router-A~]# ping 200.200.1.1 -c 2
To sum up, this tutorial focused on how we can run basic BGP on a CentOS box. While this should get you started with BGP, there are other advanced settings, such as prefix filtering and BGP attribute tuning (e.g., local preference and path prepending). I will be covering these topics in future tutorials.
Hope this helps.
--------------------------------------------------------------------------------
via: http://xmodulo.com/centos-bgp-router-quagga.html
作者:[Sarmed Rahman][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/sarmed
[1]:http://xmodulo.com/turn-centos-box-into-ospf-router-quagga.html


@ -0,0 +1,256 @@
luoyutiantang
What are useful Bash aliases and functions
================================================================================
As a command line adventurer, you probably found yourself repeating the same lengthy commands over and over. If you always ssh into the same machine, if you always chain the same commands together, or if you constantly run a program with the same flags, you might want to save the precious seconds of your life that you spend repeating the same actions over and over.
The solution to achieve that is to use an alias. As you may know, an alias is a way to tell your shell to remember a particular command and give it a new name: an alias. However, an alias is quickly limited, as it is just a shortcut for a shell command, without the ability to pass or control the arguments. To complement aliases, bash also allows you to create your own functions, which can be more lengthy and complex, and can accept any number of arguments.
Naturally, like with soup, when you have a good recipe you share it. So here is a list with some of the most useful bash aliases and functions. Note that "most useful" is loosely defined, and of course the usefulness of an alias is dependent on your everyday usage of the shell.
Before you start experimenting with aliases, here is a handy tip: if you give an alias the same name as a regular command, you can choose to launch the original command and ignore the alias with the trick:
\command
For example, the first alias below replaces the ls command. If you wish to use the regular ls command and not the alias, call it via:
\ls
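To see the trick in action inside a script (interactive shells expand aliases by default, but scripts need `shopt -s expand_aliases`), here is a minimal demo using the `ls` alias from this list:

```shell
#!/bin/bash
# Scripts do not expand aliases by default; enable it for this demo.
shopt -s expand_aliases

alias ls='ls --color=auto'

type ls          # reports that 'ls' is now an alias
# A leading backslash skips alias lookup and runs the real /bin/ls:
\ls / > /dev/null && echo "bypassed the alias"
```

Run interactively, the same `\ls` works without the `shopt` line.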
### Productivity ###
So these aliases are really simple and really short, but they are mostly based on the idea that if you save yourself a fraction of a second every time, it might end up accumulating years at the end. Or maybe not.
alias ls="ls --color=auto"
Simple but vital. Make the ls command output in color.
alias ll="ls --color -al"
Shortcut to display in color all the files from a directory in a list format.
alias grep='grep --color=auto'
Similarly, put some color in the grep output.
mcd() { mkdir -p "$1"; cd "$1";}
One of my favorites. Make a directory and cd into it in one command: mcd [name].
cls() { cd "$1"; ls;}
Similar to the previous function, cd into a directory and list its content: cls [name].
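A quick way to convince yourself that both helpers behave as described is a throwaway session in a temp directory (the directory and file names here are made up for the demo):

```shell
#!/bin/bash
# Trying out mcd() and cls() in a scratch directory.
mcd() { mkdir -p "$1"; cd "$1"; }
cls() { cd "$1"; ls; }

cd "$(mktemp -d)"
mcd projects/demo      # creates projects/demo and cds into it
pwd                    # current path now ends in projects/demo
cd ../..
touch projects/demo/a.txt
cls projects/demo      # a.txt
```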
backup() { cp "$1"{,.bak};}
Simple way to make a backup of a file: backup [file] will create [file].bak in the same directory.
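For example (the file name is invented for the demo), `backup` leaves the original untouched and writes a `.bak` copy next to it:

```shell
#!/bin/bash
# backup() copies a file to file.bak in the same directory,
# using bash brace expansion: "$1"{,.bak} -> "$1" "$1".bak
backup() { cp "$1"{,.bak}; }

cd "$(mktemp -d)"
echo "v1" > notes.txt
backup notes.txt
cat notes.txt.bak      # v1
```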
md5check() { md5sum "$1" | grep "$2";}
Because I hate comparing the md5sum of a file by hand, this function computes it and compares it using grep: md5check [file] [key].
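A quick self-check (file and bogus checksum invented on the spot): feed `md5check` the real checksum and then a wrong one:

```shell
#!/bin/bash
# md5check() prints the checksum line on a match, nothing otherwise.
md5check() { md5sum "$1" | grep "$2"; }

cd "$(mktemp -d)"
printf 'hello\n' > f.txt
key=$(md5sum f.txt | awk '{print $1}')

md5check f.txt "$key" && echo MATCH        # prints the checksum line, then MATCH
md5check f.txt deadbeef || echo MISMATCH   # MISMATCH
```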
![](https://farm6.staticflickr.com/5616/15412389280_8be57841ae_o.jpg)
alias makescript="fc -rnl | head -1 >"
Easily make a script out of the last command you ran: makescript [script.sh]
alias genpasswd="strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'; echo"
Just to generate a strong password instantly.
![](https://farm4.staticflickr.com/3955/15574321206_dd365f0f0e.jpg)
alias c="clear"
It doesn't get any simpler for clearing your terminal screen.
alias histg="history | grep"
To quickly search through your command history: histg [keyword]
alias ..='cd ..'
No need to write cd to go up a directory.
alias ...='cd ../..'
Similarly, go up two directories.
extract() {
if [ -f "$1" ] ; then
case "$1" in
*.tar.bz2) tar xjf "$1" ;;
*.tar.gz) tar xzf "$1" ;;
*.bz2) bunzip2 "$1" ;;
*.rar) unrar e "$1" ;;
*.gz) gunzip "$1" ;;
*.tar) tar xf "$1" ;;
*.tbz2) tar xjf "$1" ;;
*.tgz) tar xzf "$1" ;;
*.zip) unzip "$1" ;;
*.Z) uncompress "$1" ;;
*.7z) 7z x "$1" ;;
*) echo "'$1' cannot be extracted via extract()" ;;
esac
else
echo "'$1' is not a valid file"
fi
}
Longest but also the most useful. Extract any kind of archive: extract [archive file]
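A smoke test of the idea, using a trimmed copy of the function (only two archive types, so it stays short; the trimmed version and file names are for the demo only):

```shell
#!/bin/bash
# Trimmed extract(): dispatches on the file extension, like the full version.
extract() {
    if [ -f "$1" ]; then
        case "$1" in
            *.tar.gz) tar xzf "$1" ;;
            *.zip)    unzip "$1" ;;
            *)        echo "'$1' cannot be extracted via extract()" ;;
        esac
    else
        echo "'$1' is not a valid file"
    fi
}

cd "$(mktemp -d)"
echo "payload" > file.txt
tar czf arch.tar.gz file.txt && rm file.txt
extract arch.tar.gz
cat file.txt          # payload
extract missing.rar   # 'missing.rar' is not a valid file
```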
### System Info ###
Want to know everything about your system as quickly as possible?
alias cmount="mount | column -t"
Format the output of mount into columns.
![](https://farm6.staticflickr.com/5603/15598830622_587b77a363_z.jpg)
alias tree="ls -R | grep \":$\" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'"
Display the directory structure recursively in a tree format.
sbs() { du -b --max-depth 1 | sort -nr | perl -pe 's{([0-9]+)}{sprintf "%.1f%s", $1>=2**30? ($1/2**30, "G"): $1>=2**20? ($1/2**20, "M"): $1>=2**10? ($1/2**10, "K"): ($1, "")}e';}
"Sort by size" to display in list the files in the current directory, sorted by their size on disk.
alias intercept="sudo strace -ff -e trace=write -e write=1,2 -p"
Intercept the stdout and stderr of a process: intercept [some PID]. Note that you will need strace installed.
alias meminfo='free -m -l -t'
See how much memory you have left.
![](https://farm4.staticflickr.com/3955/15411891448_0b9d6450bd_z.jpg)
alias ps?="ps aux | grep"
Easily find the PID of any process: ps? [name]
alias volume="amixer get Master | sed '1,4 d' | cut -d [ -f 2 | cut -d ] -f 1"
Displays the current sound volume.
![](https://farm4.staticflickr.com/3939/15597995445_99ea7ffcd5_o.jpg)
### Networking ###
For all the commands that involve the Internet or your local network, there are fancy aliases for them.
alias websiteget="wget --random-wait -r -p -e robots=off -U mozilla"
Download a website in its entirety: websiteget [URL]
alias listen="lsof -P -i -n"
Show which applications are connecting to the network.
![](https://farm4.staticflickr.com/3943/15598830552_c7e5eaaa0d_z.jpg)
alias port='netstat -tulanp'
Show the active ports
gmail() { curl -u "$1" --silent "https://mail.google.com/mail/feed/atom" | sed -e 's/<\/fullcount.*/\n/' | sed -e 's/.*fullcount>//';}
Rough function to display the number of unread emails in your gmail: gmail [user name]
alias ipinfo="curl ifconfig.me && curl ifconfig.me/host"
Get your public IP address and host.
getlocation() { lynx -dump http://www.ip-adress.com/ip_tracer/?QRY=$1|grep address|egrep 'city|state|country'|awk '{print $3,$4,$5,$6,$7,$8}'|sed 's/ip address flag //'|sed 's/My//';}
Returns your current location based on your IP address.
### Useless ###
So what if some aliases are not all that productive? They can still be fun.
kernelgraph() { lsmod | perl -e 'print "digraph \"lsmod\" {";<>;while(<>){@_=split/\s+/; print "\"$_[0]\" -> \"$_\"\n" for split/,/,$_[3]}print "}"' | dot -Tpng | display -;}
To draw the kernel module dependency graph. Requires image viewer.
alias busy="cat /dev/urandom | hexdump -C | grep \"ca fe\""
Make you look all busy and fancy in the eyes of non-technical people.
![](https://farm6.staticflickr.com/5599/15574321326_ab3fbc1ef9_z.jpg)
To conclude, a good chunk of these aliases and functions come from my personal .bashrc, and the awesome websites [alias.sh][1] and [commandlinefu.com][2] which I already presented in my post on the [best online tools for Linux][3]. So definitely go check them out, make your own recipes, and if you are so inclined, share your wisdom in the comments.
As a bonus, here is the plain text version of all the aliases and functions I mentioned, ready to be copy pasted in your bashrc.
#Productivity
alias ls="ls --color=auto"
alias ll="ls --color -al"
alias grep='grep --color=auto'
mcd() { mkdir -p "$1"; cd "$1";}
cls() { cd "$1"; ls;}
backup() { cp "$1"{,.bak};}
md5check() { md5sum "$1" | grep "$2";}
alias makescript="fc -rnl | head -1 >"
alias genpasswd="strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'; echo"
alias c="clear"
alias histg="history | grep"
alias ..='cd ..'
alias ...='cd ../..'
extract() {
if [ -f "$1" ] ; then
case "$1" in
*.tar.bz2) tar xjf "$1" ;;
*.tar.gz) tar xzf "$1" ;;
*.bz2) bunzip2 "$1" ;;
*.rar) unrar e "$1" ;;
*.gz) gunzip "$1" ;;
*.tar) tar xf "$1" ;;
*.tbz2) tar xjf "$1" ;;
*.tgz) tar xzf "$1" ;;
*.zip) unzip "$1" ;;
*.Z) uncompress "$1" ;;
*.7z) 7z x "$1" ;;
*) echo "'$1' cannot be extracted via extract()" ;;
esac
else
echo "'$1' is not a valid file"
fi
}
#System info
alias cmount="mount | column -t"
alias tree="ls -R | grep \":$\" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'"
sbs(){ du -b --max-depth 1 | sort -nr | perl -pe 's{([0-9]+)}{sprintf "%.1f%s", $1>=2**30? ($1/2**30, "G"): $1>=2**20? ($1/2**20, "M"): $1>=2**10? ($1/2**10, "K"): ($1, "")}e';}
alias intercept="sudo strace -ff -e trace=write -e write=1,2 -p"
alias meminfo='free -m -l -t'
alias ps?="ps aux | grep"
alias volume="amixer get Master | sed '1,4 d' | cut -d [ -f 2 | cut -d ] -f 1"
#Network
alias websiteget="wget --random-wait -r -p -e robots=off -U mozilla"
alias listen="lsof -P -i -n"
alias port='netstat -tulanp'
gmail() { curl -u "$1" --silent "https://mail.google.com/mail/feed/atom" | sed -e 's/<\/fullcount.*/\n/' | sed -e 's/.*fullcount>//';}
alias ipinfo="curl ifconfig.me && curl ifconfig.me/host"
getlocation() { lynx -dump http://www.ip-adress.com/ip_tracer/?QRY=$1|grep address|egrep 'city|state|country'|awk '{print $3,$4,$5,$6,$7,$8}'|sed 's/ip address flag //'|sed 's/My//';}
#Funny
kernelgraph() { lsmod | perl -e 'print "digraph \"lsmod\" {";<>;while(<>){@_=split/\s+/; print "\"$_[0]\" -> \"$_\"\n" for split/,/,$_[3]}print "}"' | dot -Tpng | display -;}
alias busy="cat /dev/urandom | hexdump -C | grep \"ca fe\""
--------------------------------------------------------------------------------
via: http://xmodulo.com/useful-bash-aliases-functions.html
作者:[Adrien Brochard][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://alias.sh/
[2]:http://www.commandlinefu.com/commands/browse
[3]:http://xmodulo.com/useful-online-tools-linux.html


@ -0,0 +1,116 @@
JonathanKang is translating
What is a good command-line calculator on Linux
================================================================================
Every modern Linux desktop distribution comes with a default GUI-based calculator app. On the other hand, if your workspace is full of terminal windows, and you would rather crunch some numbers within one of those terminals quickly, you are probably looking for a **command-line calculator**. In this category, [GNU bc][1] (short for "basic calculator") is hard to beat. While there are many command-line calculators available on Linux, I think GNU bc is hands-down the most powerful and useful.
Predating the GNU era, bc is actually a historically famous arbitrary precision calculator language, with its first implementation dating back to the old Unix days in the 1970s. Initially, bc was better known as a programming language whose syntax is similar to that of C. Over time the original bc evolved into POSIX bc, and then finally the GNU bc of today.
### Features of GNU bc ###
Today's GNU bc is the result of many enhancements to earlier implementations of bc, and it now comes standard on all major GNU/Linux distros. It supports standard arithmetic operators with arbitrary precision numbers, and multiple numeric bases (e.g., binary, decimal, hexadecimal) for input and output.
If you are familiar with the C language, you will see that the same or similar mathematical operators are used in bc. Some of the supported operators include arithmetic (+,-,*,/,%,++,--), comparison (<,>,==,!=,<=,>=), logical (!,&&,||), bitwise (&,|,^,~,<<,>>), and compound assignment (+=,-=,*=,/=,%=,&=,|=,^=,&&=,||=,<<=,>>=) operators. bc comes with many useful built-in functions such as square root, sine, cosine, arctangent, natural logarithm, exponential, etc.
### How to Use GNU bc ###
As a command-line calculator, possible use cases of GNU bc are virtually limitless. In this tutorial, I am going to describe a few popular features of bc command. For a complete manual, refer to the [official source][2].
Unless you have a pre-written bc script, you typically run bc in interactive mode, where any typed statement or expression terminated with a newline is interpreted and executed on the spot. Simply type the following to enter an interactive bc session. To quit a session, type 'quit' and press Enter.
$ bc
![](https://farm4.staticflickr.com/3939/15403325480_d0db97d427_z.jpg)
The examples presented in the rest of the tutorial are supposed to be typed inside a bc session.
### Type expressions ###
To calculate an arithmetic expression, simply type the expression at the blinking cursor, and press Enter. If you want, you can store an intermediate result in a variable, then access the variable in other expressions.
![](https://farm6.staticflickr.com/5604/15403325460_b004b3f8da_o.png)
Within a given session, bc maintains an unlimited history of previously typed lines. Simply use the UP arrow key to retrieve previously typed lines. If you want to limit the number of lines to keep in the history, assign that number to a special variable named history. By default the variable is set to -1, meaning "unlimited."
### Switch input/output base ###
Oftentimes you will want to type input expressions and display results in binary or hexadecimal formats. For that, bc allows you to switch the numeric base of input or output numbers. Input and output bases are stored in ibase and obase, respectively. The default value of these special variables is 10, and valid values are 2 through 16 (or the value of the BC_BASE_MAX environment variable in the case of obase). To switch numeric base, all you have to do is change the values of ibase and obase. For example, here are examples of summing up two hexadecimal/binary numbers:
![](https://farm6.staticflickr.com/5604/15402320019_f01325f199_z.jpg)
Note that I specify obase=16 before ibase=16, not vice versa. That is because if I specified ibase=16 first, the subsequent obase=16 statement would be interpreted as assigning 16 in base 16 to obase (i.e., 22 in decimal), which is not what we want.
### Adjust precision ###
In bc, the precision of numbers is stored in a special variable named scale. This variable represents the number of decimal digits after the decimal point. By default, scale is set to 0, which means that all numbers and results are truncated/stored as integers. To adjust the default precision, all you have to do is change the value of the scale variable.
scale=4
![](https://farm6.staticflickr.com/5597/15586279541_211312597b.jpg)
### Use built-in functions ###
Beyond simple arithmetic operations, GNU bc offers a wide range of advanced mathematical functions built in, via an external math library. To use those functions, launch bc with the "-l" option from the command line.
Some of these built-in functions are illustrated here.
Square root of N:
sqrt(N)
Sine of X (X is in radians):
s(X)
Cosine of X (X is in radians):
c(X)
Arctangent of X (the returned value is in radians):
a(X)
Natural logarithm of X:
l(X)
Exponential function of X:
e(X)
### Other goodies as a language ###
As a full-blown calculator language, GNU bc supports simple statements (e.g., variable assignment, break, return), compound statements (e.g., if, while, for loop), and custom function definitions. I am not going to cover the details of these features, but you can easily learn how to use them from the [official manual][2]. Here is a very simple function definition example:
define dummy(x){
return(x * x);
}
dummy(9)
81
dummy(4)
16
### Use GNU bc Non-interactively ###
So far we have used bc within an interactive session. However, quite popular use cases of bc in fact involve running bc within a shell script non-interactively. In this case, you can send input to bc using echo through a pipe. For example:
$ echo "40*5" | bc
$ echo "scale=4; 10/3" | bc
$ echo "obase=16; ibase=2; 11101101101100010" | bc
![](https://farm4.staticflickr.com/3943/15565252976_f50f453c7f_z.jpg)
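As a sanity check (and for scripts on systems where bc may not be installed), bash's built-in arithmetic can reproduce two of the pipelines above; it handles integer base conversion natively, though unlike bc it offers no arbitrary precision or fractional scale:

```shell
#!/bin/bash
# Reproducing two bc results with pure bash arithmetic.
echo $(( 40 * 5 ))                         # 200
# base#n notation converts from an arbitrary base; %X prints hex.
printf '%X\n' $(( 2#11101101101100010 ))   # 1DB62
```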
To conclude, GNU bc is a powerful and versatile command-line calculator that really lives up to your expectations. Preloaded on all modern Linux distributions, bc can make your number crunching tasks much easier to handle without leaving your terminal. For that, GNU bc should definitely be in your productivity toolset.
--------------------------------------------------------------------------------
via: http://xmodulo.com/command-line-calculator-linux.html
作者:[Dan Nanni][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/nanni
[1]:http://www.gnu.org/software/bc/
[2]:https://www.gnu.org/software/bc/manual/bc.html


@ -0,0 +1,143 @@
7 Things to Do After Installing Ubuntu 14.10 Utopic Unicorn
================================================================================
After youve installed or [upgraded to Ubuntu 14.10][1], known by its codename Utopic Unicorn, there are a few things you should do to get it up and running in tip-top shape.
Whether youve performed a fresh install or upgraded an existing version, heres our biannual checklist of post-install tasks to get started with.
### 1. Get Acquainted ###
![The Ubuntu Browser](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/Screen-Shot-2014-10-23-at-20.02.54.png)
The Ubuntu Browser
The majority of changes rocking up in Ubuntu 14.10 arent immediately visible (save for some new wallpapers). That said, there are a bunch of freshly updated apps to get familiar with.
Preinstalled are the latest versions of workhorse staples **Mozilla Firefox**, **Thunderbird**, and **LibreOffice**. Dig a little deeper and youll also find Evince 3.14, and a brand new version of the “Ubuntu Web Browser” app, used for handling web-apps.
While youre getting familiar, be sure to fire up the Software Updater tool to **check for any impromptu issues Ubuntu has found and fixed** post-release. Yes, I know: you only just upgraded. But, even so — bugs dont adhere to deadlines like developers do!
### 2. Personalise The Desktop ###
![New wallpapers in 14.10](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/wallpapers-new-in-14.10.jpg)
New wallpapers in 14.10
Its your desktop PC, so dont put off making it look, feel and behave how you like.
Your first port of call might be changing the desktop wallpaper to one of the [twelve stunning new backgrounds][2] included in 14.10, ranging from retro record player to illustrated unicorn.
Wallpapers and a host of other theme and layout options are accessible from the **Appearance Settings** pane of the System Settings app. From here you can:
- Switch to a different theme
- Adjust launcher size & behaviour
- Enable workspaces & desktop icons
- Put app menus back into app windows
For some nifty new themes be sure to check out our **themes & icons category** here on the site.
### 3. Install Graphics Card Drivers ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/additional-drivers.jpg)
If you plan on playing the [latest Steam games][3], watching high-definition video or working with graphically intensive software youll want to enable the latest Linux graphics drivers available for your hardware.
Ubuntu makes this easy:
- Open up the Software & Updates tool from the Unity Dash
- Click the Additional Drivers tab
- Follow any on-screen prompts to check, install and apply changes
### 4. Enable Music & Video Codecs ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/msuci.jpg)
Games sorted, now to make **music and video files work just as well**.
Most popular formats, .mp3, .m4a, .mov, etc., will work fine in Ubuntu — after a little cajoling. Patent-encumbered codecs cannot ship in Ubuntu for legal reasons, leaving you unable to play popular audio and video formats out of the (invisible) box.
Dont panic. To play music or watch video you can install all of the codecs you need quickly, and through the Ubuntu Software Center.
- [Install Third-Party Codecs][4]
### 5. Pimp Your Privacy ###
![](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/privacy-in-ubuntu-settingd.jpg)
The Unity Dash is a great one-stop hub for finding stuff, be it a PDF file lurking on your computer or the current weather forecast in Stockholm, Sweden.
But the diversity of data surfaced through the Dash in just a few keystrokes doesnt suit everyones needs. So you may want to dial down the noise and restrict what shows up.
To stop certain files and folders from being searched in the Dash and/or to disable all online results returned for a query, head to the **Privacy & Security** section in System Settings.
Here youll find all the tools, options and configuration switches you need, including options to:
- Choose what apps & files can be searched from the Dash
- Whether to require a password on waking from suspend
- Disable sending error reports to Canonical
- Turn off all online features of the Dash
### 6. Swap The Default Apps For Your Faves ###
![Make it yours](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/more-apps.jpg)
Make it yours
Ubuntu comes preloaded with a tonne of apps, including a web browser (Mozilla Firefox), e-mail client (Thunderbird), music player (Rhythmbox), office suite (LibreOffice) and instant messenger (Empathy Instant Messenger).
All well and good, but theyre not everyones cup of tea. The Ubuntu Software Center is home to a slew of app alternatives, including:
- VLC Versatile media player
- Steam Games distribution platform
- [Geary — Easy-to-use desktop e-mail app][5]
- GIMP Advanced image editor similar to Photoshop
- Clementine — Stylish, fully-featured music player
- Chromium open-source version of Google Chrome (without Flash)
The Ubuntu Software Center plays host to a huge range of other apps, many of which you might not have heard of before. Since most apps are free, dont be scared to try things out!
### 7. Grab The Essentials ###
![Netflix in Chrome on Ubuntu](http://www.omgubuntu.co.uk/wp-content/uploads/2014/10/netflix-linux-working-in-chrome.jpg)
Netflix in Chrome on Ubuntu
Software Center apps aside, you may also wish to grab big-name apps like Skype, Spotify and Dropbox.
Google Chrome is also a must if you wish to watch Netflix natively on Ubuntu or benefit from the latest, safest version of Flash.
Most of these apps are available to download directly from their respective websites and can be installed on Ubuntu with a couple of clicks.
- [Download Skype for Linux][6]
- [Download Google Chrome for Linux][7]
- [Download Dropbox for Linux][8]
- [How to Install Spotify in Ubuntu][9]
Talking of Google Chrome — did you know you can (unofficially) [install and run Android apps through it?][10] Oh yes ;)
#### Finally… ####
The items above are not the only ones applicable post-upgrade. Read through and follow the ones that chime with you, and feel free to ignore those that dont.
Secondly, this is a list for those who've upgraded to or installed Ubuntu 14.10. We're not going to walk you through carving it up into something that isn't Ubuntu. If Unity isn't your thing that's fine, but be logical about it; save yourself some time and install one of the official flavours or offshoots instead.
--------------------------------------------------------------------------------
via: http://www.omgubuntu.co.uk/2014/10/7-things-to-do-after-installing-ubuntu-14-10-utopic-unicorn
作者:[Joey-Elijah Sneddon][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:https://plus.google.com/117485690627814051450/?rel=author
[1]:http://www.omgubuntu.co.uk/2014/10/ubuntu-14-10-release-download-now
[2]:http://www.omgubuntu.co.uk/2014/09/ubuntu-14-10-wallpaper-contest-winners
[3]:http://www.omgubuntu.co.uk/category/gaming
[4]:https://apps.ubuntu.com/cat/applications/ubuntu-restricted-extras/
[5]:http://www.omgubuntu.co.uk/2014/09/new-shotwell-geary-stable-release-available-to-downed
[6]:http://www.skype.com/en/download-skype/skype-for-linux/
[7]:http://www.google.com/chrome
[8]:https://www.dropbox.com/install?os=lnx
[9]:http://www.omgubuntu.co.uk/2013/01/how-to-install-spotify-in-ubuntu-12-04-12-10
[10]:http://www.omgubuntu.co.uk/2014/09/install-android-apps-ubuntu-archon

@@ -0,0 +1,291 @@
Amazing ! 25 Linux Performance Monitoring Tools
================================================================================
Over time, our website has shown you how to configure various performance tools for Linux and Unix-like operating systems. In this article we have made a list of the most used and most useful tools to monitor the performance of your box. We provide a link for each of them and split them into 2 categories: command line ones and those that offer a graphical interface.
### Command line performance monitoring tools ###
#### 1. dstat - Versatile resource statistics tool ####
A versatile combination of **vmstat**, **iostat** and **ifstat**. It adds new features and functionality allowing you to view all the different resources instantly, allowing you to compare and combine the different resource usage. It uses colors and blocks to help you see the information clearly and easily. It also allows you to export the data in **CSV** format to review it in a spreadsheet application or import into a **database**. You can use this application to [monitor cpu, memory, eth0 activity related to time][1].
![](http://blog.linoxide.com/wp-content/uploads/2014/10/dstat.png)
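If dstat is not at hand, the aggregate CPU percentage it derives can be approximated straight from the kernel. A minimal sketch, assuming a Linux `/proc` layout with the field order documented in proc(5) (user, nice, system, idle, ...):

```shell
# Read the aggregate "cpu" line from /proc/stat twice, one second apart,
# and derive a busy percentage from the deltas of the counters.
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
idle=$(( i2 - i1 ))
pct=$(( 100 * busy / (busy + idle) ))
echo "cpu busy over the 1s sample: ${pct}%"
```

dstat does the same kind of interval differencing for every resource it reports, just with far more counters and nicer output.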
#### 2. atop - Improved top with ASCII ####
A command line tool using **ASCII** to display a **performance monitor** that is capable of reporting the activity of all processes. It shows daily logging of system and process activity for long-term analysis and it highlights overloaded system resources by using colors. It includes metrics related to **CPU**, **memory**, **swap**, **disks** and **network layers**. All the functions of atop can be accessed by simply running:
# atop
And you will be able to [use the interactive interface to display][2] and order data.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/atop1.jpg)
#### 3. Nmon - performance monitor for Unix-like systems ####
Nmon stands for **Nigel's Monitor** and it's a system monitor tool originally developed for **AIX**. It features an **Online Mode** that uses curses for efficient screen handling, which updates the terminal frequently for real-time monitoring, and a **Capture Mode** where the data is saved to a file in **CSV** format for later processing and graphing.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/nmon_interface.png)
**More info** in our [nmon performance track article][3].
#### 4. slabtop - information on kernel slab cache ####
This application shows you how the **caching memory allocator** in the Linux kernel manages caches of various types of objects. The command is a top-like command focused on showing real-time kernel slab cache information. It displays a listing of the top caches sorted by one of the listed sort criteria, along with a statistics header filled with slab layer information. Here are a few examples:
# slabtop --sort=a
# slabtop -s b
# slabtop -s c
# slabtop -s l
# slabtop -s v
# slabtop -s n
# slabtop -s o
**More info** is available in the [kernel slab cache article][4]
#### 5. sar - performance monitoring and bottlenecks check ####
The **sar** command writes to standard output the contents of selected cumulative activity counters in the operating system. The **accounting system**, based on the values in the count and interval parameters, writes information the specified number of times spaced at the specified intervals in seconds. If the interval parameter is set to zero, [the sar command displays the average statistics][5] for the time since the system was started. Useful commands:
# sar -u 2 3
    # sar -u -f /var/log/sa/sa05
# sar -P ALL 1 1
# sar -r 1 3
# sar -W 1 3
#### 6. Saidar - simple stats monitor ####
Saidar is a **simple** and **lightweight** tool for system information. It doesn't have major performance reports but it does show the most useful system metrics in a short and nice way. You can easily see the [**up-time, average load, CPU, memory, processes, disk and network interfaces**][6] stats.
Usage: saidar [-d delay] [-c] [-v] [-h]
-d Sets the update time in seconds
-c Enables coloured output
-v Prints version number
-h Displays this help information.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/saidar-e1413370985588.png)
#### 7. top - The classical Linux task manager ####
top is one of the best-known **Linux** utilities; it's a **task manager** found on most **Unix-like** operating systems. It shows the current list of running processes, which the user can order using different criteria. It mainly shows how much **CPU** and **memory** is used by the **system processes**. top is a quick place to go to check which process or **processes** are hanging your system. You can also find a [list of examples of top usage][7] here. You can access it by running the top command and entering the interactive mode:
Quick cheat sheet for interactive mode:
GLOBAL_Commands: <Ret/Sp> ?, =, A, B, d, G, h, I, k, q, r, s, W, Z
SUMMARY_Area_Commands: l, m, t, 1
TASK_Area_Commands Appearance: b, x, y, z Content: c, f, H, o, S, u Size: #, i, n Sorting: <, >, F, O, R
COLOR_Mapping: <Ret>, a, B, b, H, M, q, S, T, w, z, 0 - 7
COMMANDS_for_Windows: -, _, =, +, A, a, G, g, w
![](http://blog.linoxide.com/wp-content/uploads/2014/10/top.png)
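The load averages and run-queue figure in top's header come straight from `/proc/loadavg`; as a rough sketch (Linux-only, field layout assumed from proc(5)), you can read them directly:

```shell
# /proc/loadavg holds: load1 load5 load15 runnable/total last_pid
read -r load1 load5 load15 runq _ < /proc/loadavg
echo "load average: ${load1}, ${load5}, ${load15} (runnable/total: ${runq})"
```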
#### 8. Sysdig - Advanced view of system processes ####
**Sysdig** is a tool that gives admins and developers unprecedented visibility into the behavior of their systems. The team that develops it wants to improve the way system-level **monitoring** and **troubleshooting** is done by offering a unified, coherent, and granular visibility into the **storage**, **processing**, **network**, and **memory** subsystems making it possible to create trace files for system activity so you can easily analyze it at any time.
Quick examples:
# sysdig proc.name=vim
# sysdig -p"%proc.name %fd.name" "evt.type=accept and proc.name!=httpd"
# sysdig evt.type=chdir and user.name=root
# sysdig -l
# sysdig -L
# sysdig -c topprocs_net
# sysdig -c fdcount_by fd.sport "evt.type=accept"
# sysdig -p"%proc.name %fd.name" "evt.type=accept and proc.name!=httpd"
# sysdig -c topprocs_file
# sysdig -c fdcount_by proc.name "fd.type=file"
# sysdig -p "%12user.name %6proc.pid %12proc.name %3fd.num %fd.typechar %fd.name" evt.type=open
# sysdig -c topprocs_cpu
# sysdig -c topprocs_cpu evt.cpu=0
# sysdig -p"%evt.arg.path" "evt.type=chdir and user.name=root"
# sysdig evt.type=open and fd.name contains /etc
![](http://blog.linoxide.com/wp-content/uploads/2014/10/sysdig.jpg)
**More info** is available in our article on [how to use sysdig for improved system-level monitoring and troubleshooting][8]
#### 9. netstat - Shows open ports and connections ####
It is the tool **Linux administrators** use to show various **network** information, like what ports are open, what network connections are established, and which process runs each connection. It also shows various information about the **Unix sockets** that are open between various programs. It is part of most Linux distributions. A lot of the commands are explained in the [article on netstat and its various outputs][9]. The most used commands are:
$ netstat | head -20
$ netstat -r
$ netstat -rC
$ netstat -i
$ netstat -ie
$ netstat -s
$ netstat -g
$ netstat -tapn
### 10. tcpdump - insight on network packets ###
**tcpdump** can be used to see the content of the **packets** on a **network connection**. It shows various information about the packets that pass by. To make the output useful, it allows you to use various filters to only get the information you wish. A few examples of how you can use it:
# tcpdump -i eth0 not port 22
# tcpdump -c 10 -i eth0
# tcpdump -ni eth0 -c 10 not port 22
# tcpdump -w aloft.cap -s 0
# tcpdump -r aloft.cap
# tcpdump -i eth0 dst port 80
**You can find them described in detail** in our [article on tcpdump and capturing packets][10]
#### 11. vmstat - virtual memory statistics ####
**vmstat** stands for **virtual memory** statistics and it's a **memory monitoring** tool that collects and displays summary information about **memory**, **processes**, **interrupts**, **paging** and **block I/O**. It is an open source program available on most Linux distributions, Solaris and FreeBSD. It is used to diagnose most memory performance problems and much more.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/vmstat_delay_5.png)
**More info** in [our article on vmstat commands][11].
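As a sketch of where vmstat's numbers come from, two of the counters behind its "system" columns (`in` and `cs`) can be read straight from `/proc/stat` (Linux-only; line names assumed from proc(5)):

```shell
# Cumulative interrupt and context-switch counters since boot.
intr=$(awk '/^intr / {print $2; exit}' /proc/stat)
ctxt=$(awk '/^ctxt / {print $2; exit}' /proc/stat)
echo "interrupts since boot: ${intr}, context switches since boot: ${ctxt}"
```

vmstat samples these at each interval and prints the per-second difference.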
#### 12. free - memory statistics ####
Another **command line** tool that prints to standard output a few stats about **memory** usage and swap usage. Because it's a simple tool, it can be used either to find **quick information** about memory usage or in different scripts and applications. You can see that [this small application has a lot of uses][12] and almost all system admins use this tool daily :-)
![](http://blog.linoxide.com/wp-content/uploads/2014/10/free_hs3.png)
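The totals free prints are parsed from `/proc/meminfo`; a minimal sketch, assuming Linux 3.14 or newer for the `MemAvailable` field:

```shell
# Pull the total and available figures (in kB) and derive "used".
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
echo "total: ${total_kb} kB, available: ${avail_kb} kB, used: $(( total_kb - avail_kb )) kB"
```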
#### 13. Htop - friendlier top ####
**Htop** is basically an improved version of top, showing more stats in a more colorful way and allowing you to sort them in different ways, as you can see in our article. It provides a more **user-friendly** interface.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/htop.png)
You can find **more info** in [our comparison of htop and top][13]
#### 14. ss - the modern net-tools replacement ####
**ss** is part of the **iproute2** package. iproute2 is intended to replace an entire suite of standard **Unix networking tools** that were previously used for [configuring network interfaces, routing tables, and managing the ARP table][14]. The ss utility is used to dump socket statistics; it shows information similar to **netstat** and is able to display more TCP and state information. A few examples:
# ss -tnap
# ss -tnap6
# ss -tnap
# ss -s
# ss -tn -o state established -p
#### 15. lsof - list open files ####
**lsof** is a command meaning "**list open files**", which is used in many Unix-like systems to report a list of all open files and the processes that opened them. It is used by **system administrators** on most Linux distributions and other Unix-like operating systems to check which files various processes have open.
# lsof +p process_id
# lsof | less
    # lsof -u username
    # lsof /etc/passwd
    # lsof -i TCP:ftp
    # lsof -i TCP:80
You can find **more examples** in the [lsof article][15]
#### 16. iftop - top for your network connections ####
**iftop** is yet another top-like application based on networking information. It shows various current **network connections** sorted by **bandwidth usage** or the amount of data uploaded or downloaded. It also provides various estimates of the time it will take to download them.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/iftop.png)
For **more info** see [article on network traffic with iftop][16]
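iftop needs libpcap to attribute traffic to individual connections, but the per-interface byte counters it starts from are plain text in `/proc/net/dev`. A rough sketch (column positions assumed from proc(5); the `sed` detaches the `iface:` colon so awk can split fields cleanly):

```shell
# Print cumulative received/transmitted bytes for every interface.
sed 's/:/ /' /proc/net/dev | awk 'NR > 2 {print $1, "rx:", $2, "bytes", "tx:", $10, "bytes"}'
```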
#### 17. iperf - network performance tool ####
**iperf** is a **network testing** tool that can create **TCP** and **UDP** data connections and measure the **performance** of a network that is carrying them. It supports tuning of various parameters related to timing, protocols, and buffers. For each test it reports the bandwidth, loss, and other parameters.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/iperf-e1413378331696.png)
If you wish to use the tool check out our article on [how to install and use iperf][17]
#### 18. Smem - advanced memory reporting ####
**Smem** is one of the most advanced **memory** reporting tools for the Linux command line. It offers information about the actual **memory** that is used and shared in the system, attempting to provide a more realistic picture of the actual **memory** being used.
$ smem -m
$ smem -m -p | grep firefox
$ smem -u -p
$ smem -w -p
Check out our [article on Smem for more examples][18]
### GUI or Web based performance tools ###
#### 19. Icinga - community fork of Nagios ####
**Icinga** is a **free** and **open source** system and network monitoring application. It's a fork of Nagios, retaining most of the existing features of its predecessor and building on them to add many long-awaited patches and features requested by the user community.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/Icinga-e1413377995731.png)
**More info** about installing and configuring [can be found in our Icinga article][19].
#### 20. Nagios - the most popular monitoring tool ####
The most used and popular **monitoring solution** found on Linux. It has a daemon that collects information about various processes and has the ability to collect information from remote hosts. All the information is then provided via a nice and powerful **web interface**.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/nagios-e1413305858732.png)
You can find **information** on [how to install Nagios in our article][20]
#### 21. Linux process explorer - procexp for Linux ####
**Linux process explorer** is a graphical process explorer for **Linux**. It shows various **process information** like the process tree, TCP/IP connections, and performance figures for each process. It's a replica of procexp, the Windows tool developed by **Sysinternals**, and aims to be more user-friendly than top and ps.
Check our [linux process explorer article][21] for more info.
#### 22. Collectl - performance monitoring tool ####
This is a **performance monitoring** tool that you can use either in **interactive mode** or have it **write reports** to disk and access them with a web server. It reports statistics on **CPU**, **disk**, **memory**, **network**, **nfs**, **process**, **slabs** and more in an easy-to-read and manageable format.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/collectl.png)
**More info** in our [Collectl article][22]
#### 23. MRTG - the classic graph tool ####
This is a **network traffic** monitor that provides you **graphs** using **rrdtool**. It is one of the oldest tools that provides graphics and is one of the most used on Unix-like operating systems. Check our article on [how to use MRTG][23] for information on the installation and configuration process.
![](http://blog.linoxide.com/wp-content/uploads/2014/10/mrtg.png)
#### 24. Monit - simple and easy to use monitor tool ####
**Monit** is a small **open source** **Linux** utility designed to monitor **processes**, **system load**, **filesystems**, **directories** and **files**. You can have it run automatic maintenance and repairs, and it can execute actions in error situations or send email reports to alert the system administrator. If you wish to use this tool you can check out our [how to use Monit article][24].
![](http://blog.linoxide.com/wp-content/uploads/2014/10/monit.png)
#### 25. Munin - monitoring and alerting services for servers ####
**Munin** is a **networked resource monitoring** tool that can help analyze **resource trends** and reveal weak points and the causes of **performance issues**. The team that develops it wants it to be very easy to use and user-friendly. The application is written in Perl and uses rrdtool to generate graphs, which are presented through the web interface. The developers advertise the application's "plug and play" capabilities, with about 500 monitoring plugins currently available.
**More info** can be found in our [article on Munin][25]
--------------------------------------------------------------------------------
via: http://linoxide.com/monitoring-2/linux-performance-monitoring-tools/
作者:[Adrian Dinu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://linoxide.com/author/adriand/
[1]:http://linoxide.com/monitoring-2/dstat-monitor-linux-performance/
[2]:http://linoxide.com/monitoring-2/guide-using-linux-atop/
[3]:http://linoxide.com/monitoring-2/install-nmon-monitor-linux-performance/
[4]:http://linoxide.com/linux-command/kernel-slab-cache-information/
[5]:http://linoxide.com/linux-command/linux-system-performance-monitoring-using-sar-command/
[6]:http://linoxide.com/monitoring-2/monitor-linux-saidar-tool/
[7]:http://linoxide.com/linux-command/linux-top-command-examples-screenshots/
[8]:http://linoxide.com/tools/sysdig-performance-linux-tool/
[9]:http://linoxide.com/linux-command/netstat-commad-with-all-variant-outputs/
[10]:http://linoxide.com/linux-how-to/network-traffic-capture-tcp-dump-command/
[11]:http://linoxide.com/linux-command/linux-vmstat-command-tool-report-virtual-memory-statistics/
[12]:http://linoxide.com/linux-command/linux-free-command/
[13]:http://linoxide.com/linux-command/linux-htop-command/
[14]:http://linoxide.com/linux-command/ss-sockets-network-connection/
[15]:http://linoxide.com/how-tos/lsof-command-list-process-id-information/
[16]:http://linoxide.com/monitoring-2/iftop-network-traffic/
[17]:http://linoxide.com/monitoring-2/install-iperf-test-network-speed-bandwidth/
[18]:http://linoxide.com/tools/memory-usage-reporting-smem/
[19]:http://linoxide.com/monitoring-2/install-configure-icinga-linux/
[20]:http://linoxide.com/how-tos/install-configure-nagios-centos-7/
[21]:http://sourceforge.net/projects/procexp/
[22]:http://linoxide.com/monitoring-2/collectl-tool-install-examples/
[23]:http://linoxide.com/tools/multi-router-traffic-grapher/
[24]:http://linoxide.com/monitoring-2/monit-linux/
[25]:http://linoxide.com/ubuntu-how-to/install-munin/

@@ -0,0 +1,51 @@
How To Upgrade Ubuntu 14.04 To Ubuntu 14.10
================================================================================
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/04/Ubuntu_Unicorn_Utopia.jpeg)
Ubuntu 14.10 was released yesterday. Wondering **how to upgrade to Ubuntu 14.10 from Ubuntu 14.04**? Don't worry, it's extremely easy to upgrade to Ubuntu 14.10. In fact, it is just a matter of a few clicks and a good internet connection.
### Do you need to switch to Ubuntu 14.10 from Ubuntu 14.04? ###
Before you go on upgrading to Ubuntu 14.10, make sure that you really want to ditch Ubuntu 14.04 for 14.10. This is important because you won't be able to downgrade Ubuntu 14.10 back to Ubuntu 14.04. You'll have to do a fresh install instead.
Ubuntu 14.04 is a long-term support (LTS) release, which means more stability and support for a longer period. If you upgrade to 14.10, you'll be forced to upgrade further to Ubuntu 15.04, as support for 14.10 will last only 9 months while 14.04 will go on for more than 3 years.
Moreover, there are not many new features in Ubuntu 14.10 that could compel many users to switch to it. But yes, you'll get a cutting-edge OS for sure. So, at the end of the day, it is your call whether to upgrade to Ubuntu 14.10 or not.
### Upgrade to Ubuntu 14.10 from Ubuntu 14.04 ###
To upgrade Ubuntu 14.04 to Ubuntu 14.10, follow the steps below:
#### Step 1: ####
Open **Software & Updates**.
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/08/Software_Update_Ubuntu.jpeg)
Go to **Updates** tab. In here make sure that **Notify me of a new Ubuntu version** is set to **For any new version**. By default Ubuntu will notify you only when there is another LTS release available. You must change it to upgrade to any new interim release.
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/08/Upgrade_Ubuntu.png)
#### Step 2: ####
Now run **Software Updater**.
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/04/Ubuntu_Updater.jpg)
After the updates are installed, it should notify you that a newer version is available. Click on Upgrade and follow the next few obvious steps.
![](http://itsfoss.itsfoss.netdna-cdn.com/wp-content/uploads/2014/08/Upgrade_to_Ubuntu_1410.jpeg)
I hope this quick tutorial helped you to **upgrade Ubuntu 14.04 to Ubuntu 14.10**. Though this tutorial was written for Ubuntu, you can use the exact same steps to upgrade to Xubuntu 14.10, Kubuntu 14.10 or Lubuntu 14.10. Stay tuned for more Ubuntu 14.10 related articles.
--------------------------------------------------------------------------------
via: http://itsfoss.com/upgrade-ubuntu-14-04-to-14-10/
作者:[Abhishek][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://itsfoss.com/author/Abhishek/

@@ -0,0 +1,109 @@
How To Upgrade Ubuntu 14.04 Trusty To Ubuntu 14.10 Utopic
================================================================================
Hello all! Greetings! Today, we will discuss how to upgrade from Ubuntu 14.04 to the 14.10 final beta. As you may know, the Ubuntu 14.10 final beta has already been released. According to the [Ubuntu release schedule][1], the final stable version will be available today in a couple of hours.
Do you want to upgrade to Ubuntu 14.10 from Ubuntu 14.04/13.10/13.04/12.10/12.04 or an older version on your system? Just follow the simple steps given below. Please note that you can't upgrade directly from 13.10 to 14.10. First, you should upgrade from 13.10 to 14.04, and then upgrade from 14.04 to 14.10. Clear? Good. Now, let us start the upgrade process.
Though the steps provided below are meant for Ubuntu 14.10, they should also work for Ubuntu derivatives such as Lubuntu 14.10, Kubuntu 14.10, and Xubuntu 14.10.
**Important**: Before upgrading, don't forget to back up your important data to an external device like a USB HDD or CD/DVD.
### Desktop Upgrade ###
Before upgrading, we need to update the system. Open up the Terminal and enter the following commands.
sudo apt-get update && sudo apt-get dist-upgrade
The above command will download and install the latest available packages.
Reboot your system to finish installing updates.
Now, enter the following command to upgrade to new available version.
sudo update-manager -d
Software Updater will show up and search for the new release.
After a few seconds, you will see a screen like the one below, saying: "**However, Ubuntu 14.10 is available now (you have 14.04)**". Click on the Upgrade button to start upgrading to Ubuntu 14.10.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/10/Software-Updater_001.png)
The Software Updater will ask you to confirm that you still want to upgrade. Click Start Upgrade to begin installing Ubuntu 14.10.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/10/Release-Notes_002.png)
**Please Note**: This is a beta release. Do not install it on production systems. The final stable version will be released in a couple of hours.
Now, the Software Updater will prepare to start setting up new software channels.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/10/Distribution-Upgrade_003.png)
After a few minutes, the software updater will show you the details: the number of packages that are going to be removed and the number of packages that are going to be installed. Click **Start upgrade** to continue. Make sure you have a good and stable Internet connection.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/10/Untitled-window_004.png)
Now, the updater will start getting the new packages. It will take a while depending upon your Internet connection speed.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/10/Distribution-Upgrade_005.png)
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/10/Distribution-Upgrade_001.png)
After a while, you'll be asked to remove unnecessary applications. Finally, click **Restart** to complete the upgrade.
Congratulations! Now, you have successfully upgraded to Ubuntu 14.10.
![](http://180016988.r.cdn77.net/wp-content/uploads/2014/10/Details_002.png)
That's it. Start using the new Ubuntu version.
### Server Upgrade ###
To upgrade from Ubuntu 14.04 server to Ubuntu 14.10 server, follow the steps below.
Install the update-manager-core package if it is not already installed:
sudo apt-get install update-manager-core
Edit the file /etc/update-manager/release-upgrades,
sudo nano /etc/update-manager/release-upgrades
and set Prompt=normal or Prompt=lts as shown below.
# Default behavior for the release upgrader.
[DEFAULT]
# Default prompting behavior, valid options:
#
# never - Never check for a new release.
# normal - Check to see if a new release is available. If more than one new
# release is found, the release upgrader will attempt to upgrade to
# the release that immediately succeeds the currently-running
# release.
# lts - Check to see if a new LTS release is available. The upgrader
# will attempt to upgrade to the first LTS release available after
# the currently-running one. Note that this option should not be
# used if the currently-running release is not itself an LTS
# release, since in that case the upgrader won't be able to
# determine if a newer release is available.
Prompt=normal
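The same change can also be scripted. The sketch below demonstrates it on a scratch copy rather than on the real /etc/update-manager/release-upgrades (which you would edit with sudo):

```shell
# Create a throwaway copy of the config and flip Prompt=lts to Prompt=normal.
conf=$(mktemp)
printf '[DEFAULT]\nPrompt=lts\n' > "$conf"
sed -i 's/^Prompt=.*/Prompt=normal/' "$conf"
grep '^Prompt=' "$conf"
```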
Now, it is time to upgrade your server system to the latest version using the following command:
sudo do-release-upgrade -d
Follow the on-screen instructions. You're done!
Cheers!!
--------------------------------------------------------------------------------
via: http://www.unixmen.com/upgrade-ubuntu-14-04-trusty-ubuntu-14-10-utopic/
作者SK
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://wiki.ubuntu.com/UtopicUnicorn/ReleaseSchedule

@@ -0,0 +1,94 @@
7 Improvements the Linux Desktop Needs
======================================
Over the past fifteen years, the Linux desktop has gone from a collection of barely adequate fringe solutions to an unprecedented source of innovation and choice. Many of its standard features are either unavailable on Windows or exist there only as proprietary extensions. As a result, using Linux has become not just a matter of principle but of preference.

Yet for all the Linux desktop's progress, gaps remain. Some features are fading, some have already been lost, and some are pie-in-the-sky extras that could easily be added to extend the desktop without testing users' tolerance for change.

For instance, here are seven improvements that would benefit the Linux desktop:

### 7. Easy Email Encryption

These days every email client, from Alpine to Thunderbird to KMail, includes email encryption. The documentation, however, is often missing or of very poor quality.

Even when you understand the theory, putting it into practice is hard. The controls are usually scattered across configuration menus and tabs, requiring a thorough search for every setting you need or want. If you fail to encrypt properly, you get no feedback.

The closest thing to an easy encryption process is [Enigmail][1], a Thunderbird extension with a setup wizard aimed at beginners. But you have to know about Enigmail to use it, and it adds its encryption settings to the compose window's menus; placed anywhere else, the settings would baffle most users.

Whatever the desktop, the assumption is that if you want to receive encrypted email, you already know the ropes. With the media now constantly covering security and privacy, that assumption no longer applies.
### 6. Thumbnails for Virtual Workspaces

Virtual workspaces provide extra desktop space without additional monitors. Yet useful as they are, workspace management has not changed in a decade. On most desktops you control workspaces through a pager, a set of plain rectangles that indicate little about each workspace beyond its name or number -- although in Ubuntu's Unity, workspaces remain in everyday use.

True, GNOME and Cinnamon can offer decent overviews, but their usefulness is limited by the fact that they need the full size of the display, and KDE's text-based list of window contents clashes with an otherwise graphical desktop.

A better solution would be thumbnails large enough that hovering the mouse over one brings up a normal-sized view, letting you see exactly what is on each workspace.

### 5. A Workable Menu

Modern desktops abandoned the classic menu with its sub-menus long ago. Today, the average computer simply has too many applications for that model to work.

Unfortunately, no major replacement is as convenient as the classic menu. Confining the menu to a single window is unsatisfactory, because you either have to truncate the sub-menus or keep resizing the window with the mouse.

Full-screen menus are even worse, meaning you have to change screens before you even start working, and they depend on a search box that is only usable if you already know which applications are available -- in which case you might as well use the command line.

Frankly, I don't know what would solve the problem -- spinner racks like those on OS X? What I can say with confidence is that next to any of the modern menus, a set of carefully arranged icons on the desktop looks like the more sensible choice.
### 4. A Professional, Affordable Video Editor

Over the years, Linux has slowly filled its gaps in productivity software. Even so, it still lacks affordable video-editing software.

The problem is not that the software doesn't exist. After all, [Maya][2] is one of the animation industry's standards. The problem is that such software sells for thousands of dollars.

At the other extreme are free tools like Pitivi or Blender, whose functionality -- despite their developers' considerable efforts -- still stops at the basics. Progress has been made, but it remains far short of what users expect.

Although I hear of independent studios that use native Linux video editors, usually because they have complaints about the alternatives, everyone else would rather save themselves the trouble and edit video on another operating system.

### 3. A Document Processor

At one extreme, users who merely need word processing are served by Google Docs; at the other, Scribus is the only credible application for layout-design specialists.

Between those extremes is another tier: publishers and writers, for example, who produce long, text-oriented documents. On Windows, these users are served by [Adobe FrameMaker][3]; on Linux, by LibreOffice Writer.

Unfortunately, such users are clearly not a priority for LibreOffice, Calligra Words, AbiWord, or any other office suite. The features an office suite should offer these users include:

- A bibliographic database for each document.
- Table styles that behave consistently with paragraph and character styles.
- Page styles with persistent content that appears every time the style is used, beyond just headers and footers.
- Stored cross-reference formats, so they don't have to be created manually each time.

Whether LibreOffice or some other application provides these features hardly matters. Without them, the Linux desktop remains an incomplete thing for a whole group of potential users.
Browser extensions have shown us what color-coded tabs could do for workspaces. The titles of open tabs disappear once eight or nine of them are open, so color is often the quickest way to tell related tabs apart.

The same system could be just as useful on the desktop. Better still, color-coding could be preserved between sessions, allowing users to open at once all the applications needed for a particular task. So far, I know of no desktop that has this feature.

### 2. Icon Fences

For years, Stardock has sold an extension called [Fences][4] for sorting and organizing desktop icons; it lets you name each group and keep related icons together. You can also have particular file types automatically added to a group, and hide or arrange groups as you like.

In other words, fences automatically group and order what users do on the desktop all day. Yet apart from one or two small efforts -- more or less the equivalent of KDE's Folder View -- fences remain all but unused on the Linux desktop. Perhaps that is because developers have made mobile devices their main target; fences are undeniably a feature of the traditional workstation desktop.

### 1. Personalized Lists

Having compiled lists like this before, what strikes me is how few of the proposed improvements ever catch on, even though they could attract large numbers of specific users, and even though at least one of them would simply mean a port of a proprietary application.

This observation suggests that, for the average user, there is little functionality left for Linux to add. As a general-purpose desktop, Linux has been diverse for years; even today, users can choose from more than half a dozen mainstream desktops.

Of course, that doesn't mean specialists won't have other opinions, or that changing needs won't make further improvements desirable. But it does mean that many of the items on a list of suggested improvements like this one will be highly personal.

All of which is meant to start a discussion: what other improvements do you think would benefit the desktop?
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/7-improvements-the-linux-desktop-needs-1.html
译者:[ZTinoZ](https://github.com/ZTinoZ) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://addons.mozilla.org/en-US/thunderbird/addon/enigmail/
[2]:http://en.wikipedia.org/wiki/Autodesk_Maya
[3]:http://www.adobe.com/products/framemaker.html
[4]:http://www.stardock.com/products/fences/

@@ -0,0 +1,62 @@
A Linux Supporter: Hacking on Linux Since Age 16
================================================================================
![](http://www.linux.com/images/stories/41373/Yitao-Li.png)
Nearly every project in software developer [Yitao Li's GitHub repositories][1] was developed on his Linux machine. None of them strictly requires Linux, but Li says he uses Linux for "everything."

Some examples: "coding/scripting, web browsing, website hosting, anything cloud-related, sending/receiving PGP-signed email, tweaking firewall rules, flashing OpenWrt images onto routers, running one version of the Linux kernel while compiling another, doing research, finishing homework (e.g., typing math formulas in TeX), and much more..." Li said by email.

Of all the projects in his GitHub repositories, Li's favorite is a school project, written in C++ using the libpthread and libfuse libraries, to understand and correctly implement PAXOS-based distributed locking and a key-value service, ultimately building a distributed filesystem. He tested the project with several test scripts on both single-core and multi-core machines.
“One can learn something about distributed consensus protocol by implementing the PAXOS protocol correctly (or at least mostly correctly) such that the implementation will pass all the tests,” he said. “And of course once that is accomplished, one can also earn some bragging rights. Besides, a distributed filesystem can be useful in many other programming projects.”
Li first started using Linux at age 16, or about 7.47 years ago, he says, using the website [linuxfromscratch.org][2], with numerous hints from the free, downloadable Linux From Scratch book. Why?
“1. Linux 对黑客非常友好,我看不出有任何不用它的理由,”他写道,“2. 大脑的前额叶皮质在 16 岁时发育成熟(?)。”
[![](http://www.linux.com/images/stories/41373/ldc_peop_linux.png)][3]
他现在为 eBay 工作,主要进行 Java 编程,但有时也使用 Hadoop、Pig、Zookeeper、Cassandra、MongoDB 以及其他一些需要 POSIX 兼容平台的软件。他主要通过给 Wikipedia 页面和 Linux 相关的论坛做贡献来支持 Linux 社区,当然,还通过成为 Linux 基金会的个人会员。
他紧跟最新的 Linux 发展动态,最近还对 GCC 4.9 及之后版本新增的 “-fstack-protector-strong” 选项印象深刻。
"虽然这并不与我的任何项目直接相关,但它对于安全和性能问题十分重要。"他说,“这个选项比 -fstack-protector-all 更高效的多,却在安全上几乎没有影响,同时比 -fstack-protector 选项提供了更好的栈溢出防护覆盖。”
欢迎来到 Linux 基金会Yitao
了解更多关于成为 [Linux 基金会个人会员][3]的内容。6 月份期间每新增一位个人会员,基金会就将向 Code.org 捐赠 25 美元。
----------
![](http://www.linux.com/community/forums/avatar/41373/catid/200-libby-clark/thumbnail/large/cache/1331753338)
[Libby Clark][4]
--------------------------------------------------------------------------------
via: http://www.linux.com/news/featured-blogs/200-libby-clark/778559-the-people-who-support-linux-hacking-on-linux-since-age-16
译者:[jabirus](https://github.com/jabirus) 校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:https://github.com/yl790
[2]:http://linuxfromscratch.org/
[3]:https://www.linuxfoundation.org/about/join/individual
[4]:http://www.linux.com/community/forums/person/41373/catid/200-libby-clark


@ -0,0 +1,51 @@
Linux能够提供消费者想要的东西吗
================================================================================
> 由Jack Wallen提出的新观点提供消费者想要的东西也许是收获无限成就的关键。
![](http://tr2.cbsistatic.com/hub/i/r/2014/08/14/ce90a81e-d17b-4b8f-bd5b-053120e305e6/resize/620x485/f5f9e0798798172d4e41edbedeb6b7e5/whattheyneedhero.png)
在消费电子的世界里如果你不能提供购买者想要的东西那他们就会跑去别家。我们最近在Firefox浏览器上就看过类似的事情。消费者想要的是一个快速而不那么臃肿的软件而开发者们却走到了另外的方向上。最后用户都转移到Chrome或Chromium上去了。
Linux需要深深凝视自己的水晶球仔细体会那场浏览器大战留下的尘埃然后留意一下这点建议
如果你不能提供他们想要的,他们就会离开。
而这种事与愿违的另一个例子是Windows 8。消费者不喜欢那套界面而微软却坚持使用因为这是把所有东西搬到Surface平板上所必须的。相同的情况也可能发生在Canonical和Ubuntu Unity身上 -- 尽管Unity的设计目标并非仅仅针对平板电脑所以整套界面在桌面系统上仍然很实用而且直观
一直以来Linux开发者和设计者们看上去都按照他们自己的想法来做事情。他们过分在意“吃你自家的狗粮”这句话了以至于忘记了一件非常重要的事情
没有新用户,他们的“根基”也仅仅只属于他们自己。
换句话说,这成了在唱诗班面前布道:宣讲的对象本来就是信徒。让我用三个案例来说明这一点。
- 多年以来有在Linux系统中替代活动目录Active Directory的需求。我很想把这个名称换成LDAP但是你真的用过LDAP吗那就是个噩梦。开发者们也努力了想让LDAP能易用一点但是没一个做到了。而让我很震惊的是这样一个从多用户环境下发展起来的平台居然没有一个能和AD正面较量的功能。这需要一组开发人员从头开始建立一个AD的开源替代。这对那些寻求从微软产品迁移的中型企业来说是非常大的福利。但是在这个产品做好之前他们还不能开始迁移。
- 另一个从微软激发的需求是Exchange/Outlook。是我也知道许多人都开始用云。但是事实上中等和大型规模生意仍然依赖于Exchange/Outlook组合直到能有更好的产品出现。而这将非常有希望发生在开源社区。整个拼图的一小块已经摆好了虽然还需要一些工作- 群件客户端Evolution。如果有人能够从Zimbra拉出一个分支然后重新设计成可以配合Evolution甚至Thunderbird来提供服务实现Exchange的简单替代那这个游戏就不是这么玩了而消费者获得的利益将是巨大的。
- 便宜,便宜,还是便宜。这是大多数人都得咽下去的苦药片 - 但是消费者和生意就是希望便宜。看看去年一年Chromebook的销量吧。现在搜索一下Linux笔记本看能不能找到700美元以下的。而只用三分之一的价格就可以买到一个让你够用的Chromebook一个使用了Linux内核的平台。但是因为Linux仍然是一个细分市场很难降低成本。像红帽那种公司也许可以改变现状。他们也已经推出了服务器硬件。为什么不推出一些和Chromebook有类似定位但是却运行完整Linux环境的低价中档笔记本呢请看“[Cloudbook是Linux的未来吗][1]”)其中的关键是这种设备要低成本并且符合普通消费者的要求。不要站在游戏玩家/开发者的角度去思考了,记住普通消费者真正的需求 - 一个网页浏览器不会有更多了。这是Chromebook为什么可以这么轻松地成功。Google精确地知道消费者想要什么然后推出相应的产品。而面对Linux一些公司仍然认为他们吸引买家的唯一途径是高端昂贵的硬件。而有一点讽刺的是口水战中最经常听到的却是Linux只能在更慢更旧的硬件上运行。
最后Linux需要看一看乔布斯传Book Of Jobs搞清楚如何说服消费者们他们真正要的就是Linux。在生意上和在家里 -- 每个人都可以享受到Linux带来的好处。说真的开源社区怎么可能做不到这点呢Linux本身就已经带有很多漂亮的时髦术语标签稳定性可靠性安全性免费 -- 再加上Linux实际已经进入到绝大多数人手中了只是他们自己还不清楚罢了。现在是时候让他们知道这一点了。如果你是用Android或者Chromebooks那么你就在用某种形式上的Linux。
搞清楚消费者需求一直以来都是Linux社区的绊脚石。而且我知道 -- 太多的Linux开发都基于某个开发者有个特殊的想法。这意味着这些开发都针对的“微型市场”。是时候无论如何让Linux开发社区能够进行全球性思考了。“一般用户有什么需求我们怎么满足他们”让我提几个最基本的点。
一般用户想要:
- 低价
- 设备和服务能无缝衔接
- 直观而现代的设计
- 百分百可靠的浏览器体验
把这四点放在心中应该可以轻松地以Linux为基础开发出用户实际需要的产品。Google做到了...当然Linux社区也可以参照Google的工作并开发出更好的产品。把这些原则应用到AD集成、Exchange/Outlook替代品或者基于云的群件工具上就会发生一件非常特殊的事 -- 人们会为它买单。
你觉得Linux社区能够提供消费者想要的东西吗在下边的讨论区里分享一下你的看法。
--------------------------------------------------------------------------------
via: http://www.techrepublic.com/article/will-linux-ever-be-able-to-give-consumers-what-they-want/
作者:[Jack Wallen][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.techrepublic.com/search/?a=jack+wallen
[1]:http://www.techrepublic.com/article/is-the-cloudbook-the-future-of-linux/


@ -0,0 +1,87 @@
为 Linux 用户准备的 10 个开源克隆软件
================================================================================
> 这些克隆软件会读取整个磁盘的数据,将它们转换成一个 .img 文件,之后你可以将它复制到其他硬盘上。
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/photo/150x150x1Qn740810PM9112014.jpg.pagespeed.ic.Ch7q5vT9Yg.jpg)
磁盘克隆意味着从一个硬盘复制数据到另一个硬盘上,而且你可以通过简单的复制粘贴来做到。但是你却不能复制隐藏文件和文件夹,以及正在使用中的文件。这正是克隆软件的用武之地:它通过保存文件和文件夹的镜像来帮你完成这件事。克隆软件会读取整个磁盘的数据,将它们转换成一个 .img 文件,之后你可以将它复制到其他硬盘上。下面我们将向你介绍最优秀的 10 个开源克隆软件:
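为了直观理解“整盘读出、保存为 .img 镜像”的原理,下面给出一个最小的 shell 示意。这里用一个普通文件模拟磁盘,文件名 disk.raw、disk.img 均为假设;真实克隆时 `if=` 应指向块设备(如 /dev/sda

```shell
# 用普通文件模拟一块 4MB 的“磁盘”(真实场景应为 /dev/sdX 等块设备)
dd if=/dev/zero of=disk.raw bs=1M count=4 status=none

# 逐字节读出整盘数据,保存为 .img 镜像
dd if=disk.raw of=disk.img bs=1M status=none

# 校验镜像与源数据完全一致
cmp -s disk.raw disk.img && echo "镜像与源数据一致"
```

下面这些克隆软件在此原理之上,还会处理压缩、跳过空闲块、识别文件系统等问题,这正是它们比裸 dd 高效的地方。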
### 1. [Clonezilla][1]###
Clonezilla 是一个基于 Ubuntu 和 Debian 的 Live CD。它可以像 Windows 里的诺顿 Ghost 一样克隆你的磁盘数据和做备份不过它更有效率。Clonezilla 支持包括 ext2、ext3、ext4、btrfs 和 xfs 在内的很多文件系统。它还支持 BIOS、UEFI、MBR 和 GPT 分区。
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x450xZ34_clonezilla-600x450.png.pagespeed.ic.8Jq7pL2dwo.png)
### 2. [Redo Backup][2]###
Redo Backup 是另一个用来方便地克隆磁盘的 Live CD。它是自由和开源的软件使用 GPL 3 许可协议授权。它的主要功能和特点包括从 CD 引导的简单易用的 GUI、无需安装可以恢复 Linux 和 Windows 等系统、无需登陆访问文件,以及已删除的文件等。
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x450x7D5_Redo-Backup-600x450.jpeg.pagespeed.ic.3QMikN07F5.jpg)
### 3. [Mondo Rescue][3]###
Mondo 和其他的软件不大一样,它并不将你的磁盘数据转换为一个 .img 文件,而是将它们转换为一个 .iso 镜像。使用 Mondo你还可以使用“mindi”一个由 Mondo Rescue 开发的特别工具)来创建一个自定义的 Live CD这样你的数据就可以从 Live CD 克隆出来了。它支持大多数 Linux 发行版和 FreeBSD并使用 GPL 许可协议授权。
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x387x3C4_MondoRescue-620x387.jpeg.pagespeed.ic.cqVh7nbMNt.jpg)
### 4. [Partimage][4]###
这是一个开源的备份软件,默认情况下在 Linux 系统里工作。在大多数发行版中,你都可以从发行版自带的软件包管理工具中安装。如果你没有 Linux 系统你也可以使用“SystemRescueCd”。它是一个默认包括 Partimage 的 Live CD可以为你完成备份工作。Partimage 在克隆硬盘方面的性能非常出色。
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x424xBZF_partimage-620x424.png.pagespeed.ic.ygzrogRJgE.png)
### 5. [FSArchiver][5]###
FSArchiver 是 Partimage 的后续产品,而且它也是一个很好的硬盘克隆工具。它支持克隆 Ext4 和 NTFS 分区、基本的文件属性如所有人、权限、SELinux 之类的扩展属性,以及所有 Linux 文件系统的文件系统属性等。
### 6. [Partclone][6]###
Partclone 是一个可以克隆和恢复分区的免费工具。它用 C 语言编写,最早在 2007 年出现而且支持很多文件系统包括ext2、ext3、ext4、xfs、nfs、reiserfs、reiser4、hfs+、btrfs。它的使用十分简便并且使用 GPL 许可协议授权。
### 7. [doClone][7]###
doClone 是一个免费软件项目,被开发用于轻松地克隆 Linux 系统分区。它由 C++ 编写而成,支持多达 12 种不同的文件系统。它能够修复 Grub 引导器,还能通过局域网传输镜像到另一台计算机。它还提供了热同步功能,这意味着你可以在系统正在运行的时候对它进行克隆操作。
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x396x2A6_doClone-620x396.jpeg.pagespeed.ic.qhimTILQPI.jpg)
### 8. [Macrium Reflect 免费版][8]###
Macrium Reflect 免费版被形容为最快的磁盘克隆工具之一,它只支持 Windows 文件系统。它有一个很直观的用户界面。该软件提供了磁盘镜像和克隆操作,还能让你在文件管理器中访问镜像。它允许你创建一个 Linux 应急 CD并且它与 Windows Vista 和 Windows 7 兼容。
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x464xD1E_open1.jpg.pagespeed.ic.RQ41AyMCFx.png)
### 9. [DriveImage XML][9]###
DriveImage XML 使用 Microsoft VSS 来创建镜像,十分可靠。使用这个软件,你可以从一个正在使用的磁盘创建“热”镜像。镜像使用 XML 文件保存这意味着你可以从任何支持的第三方软件访问它们。DriveImage XML 还允许在不重启的情况下从镜像恢复到机器。这个软件与 Windows XP、Windows Server 2003、Vista 以及 7 兼容。
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/620x475x357_open2.jpg.pagespeed.ic.50ipbFWsa2.jpg)
### 10. [Paragon Backup & Recovery 免费版][10]###
Paragon Backup & Recovery 免费版在管理镜像计划任务方面十分出色。它是一个免费软件,但是仅能用于个人用途。
![](http://1-ps.googleusercontent.com/h/www.efytimes.com/admin/useradmin/rte/my_documents/my_pictures/600x536x9Z9_open3.jpg.pagespeed.ic.9rDHp0keFw.png)
--------------------------------------------------------------------------------
via: http://www.efytimes.com/e1/fullnews.asp?edid=148039
作者Sanchari Banerjee
译者:[felixonmars](https://github.com/felixonmars)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[1]:http://clonezilla.org/
[2]:http://redobackup.org/
[3]:http://www.mondorescue.org/
[4]:http://www.partimage.org/Main_Page
[5]:http://www.fsarchiver.org/Main_Page
[6]:http://www.partclone.org/
[7]:http://doclone.nongnu.org/
[8]:http://www.macrium.com/reflectfree.aspx
[9]:http://www.runtime.org/driveimage-xml.htm
[10]:http://www.paragon-software.com/home/br-free/


@ -0,0 +1,65 @@
Linux用户应该了解一下开源硬件
================================================================================
> 如果Linux用户不了解开源硬件制造的内情他们将会很失望。
商业软件和自由软件已经互相纠缠很多年了,但是这俩经常误解对方。这并不奇怪 -- 对一方来说是生意,而另一方只是一种生活方式。但是,这种误解会带来痛苦,这也是为什么值得花精力去揭露这里面的内幕。
一个日渐普遍的现象是对开源硬件的不断尝试不管是来自Canonical、Jolla、MakePlayLive还是其他几家。不管是评论员还是终端用户一般的自由软件用户会为新硬件平台的发布表现出过分的狂热然后因为不断延期而有所醒悟最终放弃整个产品。
这是一个没有人获益的怪圈,而且滋生出不信任 - 都是因为一般的Linux用户根本不知道这些新闻背后发生的事情。
我个人对于把产品推向市场的经验很有限。但是,我还不知道谁能有所突破。推出一个开源硬件或其他产品到市场仍然不仅仅是个残酷的生意,而且严重不利于新加入的厂商。
### 寻找合作伙伴 ###
不管是数码产品的生产还是分销都被相对较少的一些公司控制着有时需要数月的预订。利润率也会很低所以就像那些购买老情景喜剧重播权的电影工作室一样生产商一般也希望复制当前热销产品的成功。像Aaron Seigo在谈到他花精力开发Vivaldi平板时告诉我的生产商更希望能由其他人去承担开发新产品的风险。
不仅如此,他们更希望和那些有现成销售记录的有可能带来可复制生意的人合作。
而且一般新加入的厂商所关心的产品只有几千的量。芯片制造商更愿意和苹果或三星合作因为它们的订单很可能是几百K。
面对这种情形,开源硬件制造者们可能会发现他们在工厂的列表中被淹没了,除非能找到二线或三线厂愿意尝试一下小批量生产新产品。
他们也许还会沦为采购成品组件再自己组装就像Seigo尝试Vivaldi时那样做的。或者他们也许可以像Canonical那样做寻找一些愿意为这个产业冒险的合作伙伴。而就算他们成功了一般也会比最初天真的预期延迟数个月。
### 磕磕碰碰走向市场 ###
然而,寻找生产商只是第一关。根据树莓派项目的经验,就算开源硬件制造者们只想在他们的产品上运行免费软件,生产商们很可能会以保护商业机密的名义坚持使用专有固件或驱动。
这样必然会引起潜在用户的批评,但是开源硬件制造者没得选,只能折中他们的愿景。寻找其他生产商也不能解决问题,有一个原因是这样做意味着更多延迟,但是更多的是因为完全免授权费的硬件是不存在的。像三星这样的业内巨头对免费硬件没有任何兴趣,而作为新人,开源硬件制造者也没有影响力去要求什么。
更何况,就算有免费硬件,生产商也不能保证会用在下一批生产中。制造者们会轻易地发现他们每次需要生产的时候都要重打一样的仗。
这些都还不够这个时候开源硬件制造者们也许已经花了6-12个月时间来讨价还价。机会来了产业标准已经变更他们也许为了升级产品规格又要从头来过。
### 短暂而且残忍的货架期 ###
尽管面对这么多困难,一定程度上开放的硬件也终于推出了。还记得寻找生产商时的挑战吗?对于分销商也会有同样的问题 -- 还不只是一次,而是每个地区都要解决。
通常,分销商和生产商一样保守,对于和新人或新点子打交道也很谨慎。就算他们同意让一个产品上架,他们也轻易能够决定不鼓励自己的销售代表们做推广,这实际上意味着这个产品会在几个月后悄然下架。
当然,在线销售也是可以的。但是同时,硬件还是需要被存放在某个地方,这也会增加成本。而按需生产就算可能的话也将非常昂贵,而且没有组装的元件也需要存放。
### 衡量整件怪事 ###
在这里我只是粗略地概括了一下,但是任何涉足过制造的人会认出我形容成标准的东西。而更糟糕的是,开源硬件制造者们通常在这个过程中才会有所觉悟。不可避免,他们也会犯错,从而带来更多的延迟。
但重点是一旦你对整个过程有所了解你对另一个开源硬件进行尝试的消息的反应就会改变。这个过程意味着除非哪家公司处于严格的保密模式对于产品将于六个月内发布的声明会很快会被证实是过期的推测。很可能是12-18个月而且面对之前提过的那些困难很可能意味着这个产品永远不会真正发布。
举个例子就像我写的人们等待第一代Steam Machines面世它是一台基于Linux的游戏主机。他们相信Steam Machines能彻底改变Linux和游戏。
作为一个市场分类Steam Machines也许比其他新产品更有优势因为参与开发的人员至少有开发软件产品的经验。然而整整一年过去了Steam Machines的开发成果都还只有原型机而且直到2015年中都不一定能买到。面对硬件生产的实际情况就算有一半机型能见到阳光都是很幸运了。而实际上能发布2-4款也许更实际。
我做出这个预测并非针对任何个体的努力。但是基于对硬件生产的理解比起那些“Linux游戏黄金年代”之类的预言我估计这个预测更靠谱。如果我错了也会很开心但是事实不会改变让人吃惊的不是有如此多的Linux相关硬件产品失败了而是居然有产品取得过哪怕短暂的成功。
--------------------------------------------------------------------------------
via: http://www.datamation.com/open-source/what-linux-users-should-know-about-open-hardware-1.html
作者:[Bruce Byfield][a]
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.datamation.com/author/Bruce-Byfield-6030.html


@ -1,108 +0,0 @@
基本的命令行工具有哪些更好的替代品
================================================================================
命令行听起来有时候会很吓人, 特别是在刚刚接触的时候. 你甚至可能做过有关命令行的噩梦. 然而渐渐地, 我们都意识到命令行实际上并不是那么吓人, 反而是非常有用. 实际上, 每次我使用 Windows 时, 没有命令行正是让我感到崩溃的地方. 这种感觉上的变化是因为命令行工具实际上是很智能的. 你在任何一个 Linux 终端上所使用的基本工具功能都是很强大的, 但还远说不上是足够强大. 如果你想使你的命令行生涯更加愉悦, 这里有几个程序你可以下载下来替换原来的默认程序, 它们可以给你提供比原始程序更多的功能.
### dfc ###
作为一个 LVM 使用者, 我非常喜欢随时查看我的硬盘存储器的使用情况. 我也从来没法真正理解为什么在 Windows 上我们得打开资源管理器来查看电脑的基本信息. 在 Linux 上, 我们可以使用如下命令:
$ df -h
![](https://farm4.staticflickr.com/3858/14768828496_c8a42620a3_z.jpg)
该命令可显示电脑上每一分卷的大小, 已使用空间, 可用空间, 已使用空间百分比和挂载点. 注意, 我们必须使用 "-h" 选项使得所有数据以可读形式显示(使用 GiB 而不是 KiB). 但你可以使用 [dfc][1] 来完全替代 df, 它不需要任何额外的选项就可以得到 df 命令所显示的内容, 并且会为每个设备绘制彩色的使用情况图, 因此可读性会更强.
![](https://farm6.staticflickr.com/5594/14791468572_a84d4b6145_z.jpg)
另外, 你可以使用 "-q" 选项将各分卷排序, 使用 "-u" 选项规定你希望使用的单位, 甚至可以使用 "-e" 选项来获得 csv 或者 html 格式的输出.
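顺便一提, 在还没有装上 dfc 的机器上, 你也可以用 awk 简单加工 df 的列式输出, 比如只提取根分区的使用率(第 5 列). 这只是一个通用的 shell 示意, 并非 dfc 的功能:

```shell
# 打印根分区的使用率百分比 (如 "42%")
# -P 保证每个文件系统只占一行, 避免设备名过长时换行; NR==2 取数据行
df -hP / | awk 'NR==2 {print $5}'
```

同样的思路也可以用来在脚本里监控某个分区是否快满了.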
### dog ###
Dog 比 cat 好, 至少这个程序自己是这么宣称的, 你应该相信它一次. 所有 cat 命令能做的事, [dog][2] 都做的更好. 除了仅仅能在控制台上显示一些文本流之外, dog 还可以对其进行过滤. 例如, 你可以使用如下语法来获得网页上的所有图片:
$ dog --images [URL]
![](https://farm6.staticflickr.com/5568/14811659823_ea8d22d045_z.jpg)
或者是所有链接:
$ dog --links [URL]
![](https://farm4.staticflickr.com/3902/14788690051_7472680968_z.jpg)
另外, dog 命令还可以处理一些其他的小任务, 比如全部转换为大写或小写, 使用不同的编码, 显示行号和处理十六进制文件. 总之, dog 是 cat 的必备替代品.
### advcp ###
一个 Linux 中最基本的命令就是复制命令: cp. 它几乎和 cd 命令地位相同. 然而, 它的输出非常少. 你可以使用 verbose 模式来实时查看正在被复制的文件, 但如果一个文件非常大的话, 你看着屏幕等待却完全不知道后台在干什么. 一个简单的解决方法是加上一个进度条: 这正是 advcp (advanced cp 的缩写) 所做的! advcp 是 [GNU coreutils][4] 的一个 [补丁版本][3], 它提供了 acp 和 amv 命令, 即"高级"的 cp 和 mv 命令. 使用语法如下:
$ acp -g [file] [copy]
它把文件复制到另一个位置, 并显示一个进度条.
![](https://farm6.staticflickr.com/5588/14605117730_fe611fc234_z.jpg)
我还建议在 .bashrc 或 .zshrc 中设置如下命令别名:
alias cp="acp -g"
alias mv="amv -g"
(译者注: 原文给出的链接已貌似失效, 我写了一个可用的安装脚本放在了我的 [gist](https://gist.github.com/b978fc93b62e75bfad9c) 上, 用的是 AUR 里的 [patch](https://aur.archlinux.org/packages/advcp))
### The Silver Searcher ###
[the silver searcher][5] 这个名字听起来很不寻常(银搜索...), 它是一款设计用来替代 grep 和 [ack][6] 的工具. The silver searcher 在文件中搜索你想要的部分, 它比 ack 要快, 而且能够忽略一些文件而不像 grep 那样.(译者注: 原文的意思貌似是 grep 无法忽略一些文件, 但 grep 有类似选项) the silver searcher 还有一些其他的功能, 比如彩色输出, 跟随软连接, 使用正则式, 甚至是忽略某些模式.
![](https://farm4.staticflickr.com/3876/14605308117_f966c77140_z.jpg)
作者在开发者主页上提供了一些搜索速度的统计数字, 如果它们仍然是真的的话, 那是非常可观的. 另外, 你可以把它整合到 Vim 中, 用一个简洁的命令来调用它. 如果要用两个词来概括它, 那就是: 智能, 快速.
### plowshare ###
所有命令行的粉丝都喜欢使用 wget 或其他对应的替代品来从互联网上下载东西. 但如果你使用许多文件分享网站, 像 mediafire 或者 rapidshare, 你一定很乐意了解一款专门为这些网站设计的对应的程序, 叫做 [plowshare][7]. 安装成功之后, 你可以使用如下命令来下载文件:
$ plowdown [URL]
或者是上传文件:
$ plowup [website name] [file]
当然, 前提是你有那个文件分享网站的账号.
最后, 你可以获取分享文件夹中的一系列文件的链接:
$ plowlist [URL]
或者是文件名, 大小, 哈希值等等:
$ plowprobe [URL]
对于那些熟悉这些服务的人来说, plowshare 还是缓慢而令人难以忍受的 jDownloader 的一个很好的替代品.
### htop ###
如果你经常使用 top 命令, 很有可能你会喜欢 [htop][8] 命令. top 和 htop 都能提供对正在运行进程的实时查看功能, 但 htop 还拥有一系列 top 所没有的人性化功能. 比如, 在 htop 中, 你可以水平或垂直滚动进程列表来查看每个进程的完整命令名, 还可以使用鼠标点击和方向键来进行一些基本的进程操作(比如 kill, (re)nice 等), 而不用输入进程标识符.
![](https://farm6.staticflickr.com/5581/14819141403_6f2348590f_z.jpg)
总的来说, 这些十分有效的基本命令行替代工具就像一颗颗有用的小珍珠, 它们并不那么容易被发现, 但一旦你找到一个, 你就会惊讶自己此前没有它是怎么过来的. 如果你还知道其他符合上面描述的工具, 请在评论中分享给我们.
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/07/better-alternatives-basic-command-line-utilities.html
作者:[Adrien Brochard][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://xmodulo.com/author/adrien
[1]:http://projects.gw-computing.net/projects/dfc
[2]:http://archive.debian.org/debian/pool/main/d/dog/
[3]:http://zwicke.org/web/advcopy/
[4]:http://www.gnu.org/software/coreutils/
[5]:https://github.com/ggreer/the_silver_searcher
[6]:http://xmodulo.com/2014/01/search-text-files-patterns-efficiently.html
[7]:https://code.google.com/p/plowshare/
[8]:http://hisham.hm/htop/


@ -0,0 +1,172 @@
在Debian上设置USB网络打印机和扫描仪服务器
================================================================================
假定你想要在你的家庭/办公网络中设置一台Linux打印服务器而你手头上只有USB打印机可用因为它们比那些有着内建网络接口或无线模块的打印机要便宜得多。此外如果这些设备中有一台是一体机而你也想要通过网络共享其整合的扫描仪这该怎么办在本文中我将介绍怎样安装并共享一台USB一体机Epson CX3900喷墨打印机和扫描仪、一台USB激光打印机Samsung ML-1640以及锦上添花的一台PDF打印机。所有这一切我们都将在GNU/Linux Debian 7.2 [Wheezy]服务器中实现。
尽管这些打印机看起来有点老旧了我是在2007年买的Epson一体机2009年买的激光打印机但我仍然相信我从安装过程中学到的东西也一样能应用到这些品牌的新产品和其它品牌中去有一些预编译的.deb包驱动可用而其它驱动可以从仓库中直接安装。毕竟重要的是基本原理。
### 先决条件 ###
要设置网络打印机和扫描仪,我们将使用[CUPS][1]它是一个用于Linux/UNIX/OSX的开源打印系统。
# aptitude install cups cups-pdf
**排障提示**根据你的系统状况这个问题很可能在手动安装包失败后或者缺少依赖包的时候会发生在安装cups和cups-pdf前端包管理系统可能会提示你卸载许多包以尝试解决当前依赖问题。如果这种情况真的发生你只有两个选择
1通过另外一个前端包管理系统安装包如apt-get。注意并不建议进行这样的处理因为它不会解决当前的问题。
2运行以下命令aptitude update && aptitude upgrade。该命令会修复此问题并同时更新包到最新版本。
### 配置CUPS ###
为了能够访问CUPS的网页接口我们需要至少对cupsd.conf文件用于CUPS的服务器配置文件进行一次最低限度的修改。在进行修改前让我们为cupsd.conf做个备份副本
# cp cupsd.conf cupsd.conf.bkp
然后,编辑原始文件(下面只显示了最为有关联的部分):
- **Listen**:监听指定的地址和端口,或者域套接口路径。
- **Location /path**:为命名的位置指定访问控制。
- **Order**指定HTTP访问控制顺序allow,deny或deny,allow。Order allow,deny是说允许规则先于并且优先处理拒绝规则。
- **DefaultAuthType** (也可以用**AuthType**) 指定默认使用的认证类型。Basic是指使用/etc/passwd文件来认证CUPS中的用户。
- **DefaultEncryption**:指定认证请求说使用的加密类型。
- **WebInterface**:指定是否启用网页接口。
# Listen for connections from the local machine
Listen 192.168.0.15:631
# Restrict access to the server
<Location />
Order allow,deny
Allow 192.168.0.0/24
</Location>
# Default authentication type, when authentication is required
DefaultAuthType Basic
DefaultEncryption IfRequested
# Web interface setting
WebInterface Yes
# Restrict access to the admin pages
<Location /admin>
Order allow,deny
Allow 192.168.0.0/24
</Location>
现在让我们重启CUPS来应用修改
# service cups restart
为了允许另外一个用户除root之外修改打印机设置我们必须像下面这样将他/她添加到lp组授权对打印机硬件的访问允许用户管理打印任务和lpadmin组拥有打印机管理权限。如果在你当前的网络设置中没有必要你可以不用理会该步骤。
# adduser xmodulo lp
# adduser xmodulo lpadmin
![](https://farm4.staticflickr.com/3873/14705919960_9a25101098_o.png)
### 通过网页接口配置网络打印机 ###
1. 启动网页浏览器并打开CUPS接口http://<Server IP>:Port在我们的例子中是http://192.168.0.15:631
![](https://farm4.staticflickr.com/3878/14889544591_284015bcb5_z.jpg)
2. 转到**管理**标签,然后点击*添加打印机*
![](https://farm4.staticflickr.com/3910/14705919940_fe0a08a8f7_o.png)
3. 选择你的打印机;在本例中,**EPSON Stylus CX3900 @ debian (Inkjet Inkjet Printer)**,然后点击**继续**
![](https://farm6.staticflickr.com/5567/14706059067_233fcf9791_z.jpg)
4. 是时候为打印机取个名字,并指定我们是否想要从当前工作站共享它:
![](https://farm6.staticflickr.com/5570/14705957499_67ea16d941_z.jpg)
5. 安装驱动——选择品牌并点击**继续**。
![](https://farm6.staticflickr.com/5579/14889544531_77f9f1258c_o.png)
6. 如果打印机不被CUPS支持没有在下一页中列出来我们必须从生产厂家的网站上下载驱动[http://download.ebz.epson.net/dsc/search/01/search/?OSC=LX][2]),安装完后回到该页。
![](https://farm4.staticflickr.com/3896/14706058997_e2a2214338_z.jpg)
![](https://farm4.staticflickr.com/3874/14706000928_c9dc74c80e_z.jpg)
![](https://farm4.staticflickr.com/3837/14706058977_e494433068_o.png)
7. 注意,预编译的.deb文件必须从我们使用的机器上发送例如通过sftp或scp到打印服务器当然如果我们有一个直接的下载链接就更加简单了而不用下载按钮了
![](https://farm6.staticflickr.com/5581/14706000878_f202497d0a_z.jpg)
8. 在将.deb文件放到服务器上后我们就可以安装了
# dpkg -i epson-inkjet-printer-escpr_1.4.1-1lsb3.2_i386.deb
**排障提示**如果lsb包一个第三方Linux应用编写者可以依赖标准核心系统没有安装那么驱动会无法安装
![](https://farm4.staticflickr.com/3840/14705919770_87e5803f95_z.jpg)
我们将安装lsb然后尝试再次安装打印机驱动
# aptitude install lsb
# dpkg -i epson-inkjet-printer-escpr_1.4.1-1lsb3.2_i386.deb
9. 现在,我们可以返回到第五步并安装打印机:
![](https://farm6.staticflickr.com/5569/14705957349_3acdc26f91_z.jpg)
### 配置网络扫描仪 ###
现在,我们将继续配置打印机服务器来共享扫描仪。首先,安装[xsane][3],这是[SANE][4]——扫描仪快捷访问的前端:
# aptitude install xsane
接下来,让我们编辑/etc/default/saned文件以启用saned服务
# Set to yes to start saned
RUN=yes
最后我们将检查saned是否已经在运行了很可能不在运行哦——那么我们将启动服务并再来检查
# ps -ef | grep saned | grep -v grep
# service saned start
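服务端的saned跑起来之后客户端一侧通常还需要启用SANE的net后端并指向服务器。下面是一个示意性的客户端配置片段IP地址沿用本文示例的192.168.0.15具体文件内容请以你的发行版为准

```
# 客户端 /etc/sane.d/net.conf把扫描服务器的地址加进来
192.168.0.15

# 客户端 /etc/sane.d/dll.conf确保 net 后端一行未被注释
net
```

同时服务器端的/etc/sane.d/saned.conf中需要列出允许访问的客户端网段192.168.0.0/24。配置完成后可以在客户端运行scanimage -L确认能否发现网络扫描仪。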
### 配置另一台网络打印机 ###
通过CUPS你可以配置多台网络打印机。让我们通过CUPS配置一台额外的打印机Samsung ML-1640它是一台USB打印机。
splix包包含了单色ML-15xx、ML-16xx、ML-17xx、ML-2xxx和彩色CLP-5xx、CLP-6xxSamsung打印机的驱动。此外此包的详细信息中指出一些换牌销售的Samsung打印机如Xerox Phaser 6100也适用此驱动。
# aptitude install splix
然后我们将使用CUPS网页接口来安装打印机就像前面一样
![](https://farm4.staticflickr.com/3872/14705957329_4f38a94867_o.png)
### 安装PDF打印机 ###
接下来让我们在打印服务器上配置一台PDF打印机。这样你就可以将来自客户计算机的文档转换成PDF格式了。
由于我们已经安装了cups-pdf包PDF打印机就已经自动安装好了可以通过网页接口验证
![](https://farm6.staticflickr.com/5558/14705919650_bc1a1e0b43_z.jpg)
当选定PDF打印机后文档将被写入可配置目录默认是~/PDF或者也可以通过后续处理命令进行复制。
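这个输出目录由cups-pdf的配置文件控制。下面是一个示意片段Out是cups-pdf的标准配置项此处的取值即为其默认值

```
# /etc/cups/cups-pdf.conf 片段
# ${HOME} 会被替换为提交打印任务用户的主目录
Out ${HOME}/PDF
```

修改后需要重启CUPSservice cups restart才会生效。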
在下一篇文章中,我们将配置桌面客户端来通过网络访问打印机和扫描仪。
--------------------------------------------------------------------------------
via: http://xmodulo.com/2014/08/usb-network-printer-and-scanner-server-debian.html
作者:[Gabriel Cánepa][a]
译者:[GOLinux](https://github.com/GOLinux)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出
[a]:http://www.gabrielcanepa.com.ar/
[1]:https://www.cups.org/
[2]:http://download.ebz.epson.net/dsc/search/01/search/?OSC=LX
[3]:http://www.xsane.org/
[4]:http://www.sane-project.org/


@ -1,6 +1,6 @@
6个有趣的命令行工具(终端中的乐趣) - 第二部分
6个有趣的Linux命令行工具(终端中的乐趣)—— 第二部分
================================================================================
在之前, 我们给出类一些有关有趣的 Linux 命令行命令的文章, 这些文章告诉我们, Linux 并不像看起来那样复杂, 如果我们知道如何使用的话, 反而会非常有趣. Linux 命令行可以简洁而完美地执行一些复杂的任务, 并且十分有趣.
之前, 我们展示了一些有关有趣的 Linux 命令行命令的文章, 这些文章告诉我们, Linux 并不像看起来那样复杂, 如果我们知道如何使用的话, 反而会非常有趣. Linux 命令行可以简洁而完美地执行一些复杂的任务, 并且十分有趣.
- [Linux命令及Linux终端的20个趣事][3]
- [Fun in Linux Terminal Play with Word and Character Counts][2]
@ -8,32 +8,32 @@
![Funny Linux Commands](http://www.tecmint.com/wp-content/uploads/2014/08/Funny-Linux-Commands.png)
有趣的 Linux 命令
之前的一篇文章包含了 20 个有趣的 Linux 命令/脚本(和子命令), 得到了读者的高度赞扬. 而另一篇文章则包含了一些处理文字文件, 单词和字符串的命令/脚本和改进, 虽然没有之前那篇文章那么受欢迎.
前者包含了20个有趣的 Linux 命令/脚本(和子命令), 得到了读者的高度赞扬. 而另一篇文章虽然没有之前那篇文章那么受欢迎,包含了一些命令/脚本和改进,让你能够玩儿转文本文件、单词和字符串.
这篇文章介绍了一些新的有趣命令和单行脚本.
这篇文章介绍了一些新的有趣命令和单行脚本,一定会让你感到欣喜.
### 1. pv 命令 ###
你也许曾经看见电影里的模仿文字, 它们好像是被实时打出来的. 如果我么能在终端里实现这样的效果, 那不是很好?
你也许曾经看到过电影里的模拟字幕, 它们好像是被实时敲打出来的. 如果我们能在终端里实现这样的效果, 那不是很好?
这是可以做到的. 我们可以安装通过 '**apt**' 或者 '**yum**' 工具在 Linux 系统上安装 '**pv**' 命令. 安装命令如下?
这是可以做到的. 我们可以通过 '**apt**' 或者 '**yum**' 工具在 Linux 系统上安装 '**pv**' 命令. 安装命令如下.
# yum install pv [在基于 RedHat 的系统上]
# sudo apt-get install pv [在基于 Debian 的系统上]
'**pv**' 命令安装成功之后, 我们尝试输入下面的命令来在终端查看实时文字输出的效果.
'**pv**' 命令安装成功之后, 我们尝试运行下面的单行命令在终端查看实时文字输出的效果.
$ echo "Tecmint[dot]com is a community of Linux Nerds and Geeks" | pv -qL 10
![pv command in action](http://www.tecmint.com/wp-content/uploads/2014/08/pv-command.gif)
正在运行的 pv 命令
**注意**: '**q**' 选项表示'安静'(没有其他输出信息), '**L**' 选项表示每秒转化的字节数上限. 数字变量(必须是整数)用来调整预设的文本模拟.(To be fixed: 这里翻译有问题)
**注意**: '**q**' 选项表示'安静', 不输出其他信息; '**L**' 选项表示每秒传输的字节数上限. 这个数字(必须是整数)可以向任一方向调整, 以获得所需的文字模拟速度.
### 2. toilet 命令 ###
用单行命令 '**toilet**' 在终端里显示有边框的文字值一个不错的主意. 同样, 你必须保证 '**toilet**' 已经安装在你的电脑上. 如果没有的话, 请使用 apt 或 yum 安装. (译者注: 'toilet' 并不在 Fedora 的官方仓库里, 你可以从 github 上下载源代码来安装)
用单行脚本命令 '**toilet**' 在终端里显示一个添加边框的文本怎么样呢?同样, 你必须保证 '**toilet**' 已经安装在你的电脑上. 如果没有的话, 请使用 apt 或 yum 安装. (译者注: 'toilet' 并不在 Fedora 的官方仓库里, 你可以从 github 上下载源代码来安装)
$ while true; do echo “$(date | toilet -f term -F border Tecmint)”; sleep 1; done
@ -53,7 +53,7 @@
### 4. aview 命令 ###
认为在终端用 ASCII 格式显示图片怎么样? 我们必须用 apt 或 yum 安装软件包 '**aview**'. (译者注: 'avieww' 不在 Fedora 的官方仓库中, 可以从 aview 的[项目主页][4]上下载源代码来安装. ) 在当前文件夹下有一个名为 '**elephant.jpg**' 的图片, 我想用 ASCII 模式在终端查看.
觉得在终端用 ASCII 格式显示图片怎么样? 我们必须用 apt 或 yum 安装软件包 '**aview**'. (译者注: 'aview' 不在 Fedora 的官方仓库中, 可以从 aview 的[项目主页][4]上下载源代码来安装. ) 在当前工作目录下有一个名为 '**elephant.jpg**' 的图片, 我想用 ASCII 模式在终端查看.
$ asciiview elephant.jpg -driver curses
@ -62,7 +62,7 @@
### 5. xeyes 命令 ###
在上一篇文章中, 我们介绍了 '**oneko**' 命令, 它可以显示一个追随鼠标指针运动的小老鼠. '**xeyes**' 是一个类似的程序, 当你运行程序时, 你可以看见两个怪物的眼球追随鼠标的运动.
在上一篇文章中, 我们介绍了 '**oneko**' 命令, 它可以显示一个追随鼠标指针运动的小老鼠. '**xeyes**' 是一个类似的图形程序, 当你运行它, 你可以看见小怪物的两个眼球追随你的鼠标运动.
$ xeyes
@ -75,29 +75,29 @@
$ cowsay -l
蟒蛇吃大象怎么样?
如何用ASCII描绘蛇吞象
$ cowsay -f elephant-in-snake Tecmint is Best
![cowsay command in action](http://www.tecmint.com/wp-content/uploads/2014/08/cowsay.gif)
正在运行的 cowsay 命令
山羊怎么样?
换作山羊又会怎样?
$ cowsay -f gnu Tecmint is Best
![cowsay goat in action](http://www.tecmint.com/wp-content/uploads/2014/08/cowsay-goat.gif)
正在运行的 山羊cowsay 命令
今天就到这里吧. 我将带着另一篇有趣的文章回来. 跟踪 Tecmint 来获得最新消息. 不要忘记在下面的评论里留下你的有价值的回复.
今天就到这里吧. 我将带着另一篇有趣的文章回来. 不要忘记在下面留下您的评论.
--------------------------------------------------------------------------------
via: http://www.tecmint.com/linux-funny-commands/
作者:[Avishek Kumar][a]
作者:[Avishek Kumar][a]
译者:[wangjiezhe](https://github.com/wangjiezhe)
校对:[校对者ID](https://github.com/校对者ID)
校对:[Caroline](https://github.com/carolinewuyan)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创翻译,[Linux中国](http://linux.cn/) 荣誉推出

Some files were not shown because too many files have changed in this diff